TMC Completes Massive IBM J2EE / .NET Study

  1. TMC Completes Massive IBM J2EE / .NET Study (299 messages)

    The Middleware Company has completed a massive study comparing IBM WebSphere against Microsoft .NET across multiple dimensions: developer productivity, manageability, reliability, and application performance. This latest study is the most extensive research endeavor TMC has conducted to date.
     
    For this study TMC assembled two independent teams, one for J2EE using IBM WebSphere, the other for Microsoft .NET. Each team received the same specification for a loosely-coupled system to be developed, deployed, tuned and tested in a controlled laboratory setting. The WebSphere team developed two different implementations of the specification, one using IBM’s model-driven tool Rational Rapid Developer (RRD), the other with IBM’s code-centric WebSphere Studio Application Developer (WSAD). The .NET team developed its single implementation using Visual Studio.NET as the primary development tool.

    TMC recruited an independent auditor to design the specification, oversee the lab environment, enforce rules of engagement, and conduct validation tests. The auditor has issued an independent report along with his results.

    In each of the following areas, the TMC report presents the quantitative results from the auditor as well as extensive, detailed descriptions of the teams’ experiences and insights:
    • How quickly the team could develop the system to spec and deploy it for functional testing
    • How quickly and accurately the team could configure and tune their system for performance
    • How well the system did on a series of performance tests
    • How easily the team could deploy changes under load
    • How well the system handled failover

    How did IBM and J2EE do in their native environments? Did Microsoft .NET live up to expectations? Is this study conclusive?

    To get these answers and to obtain the TMC final report, the independent auditor’s report, the system specification and the source code for the three implementations, go to:
    http://www.middlewareresearch.com/endeavors/040921IBMDOTNET/endeavor.jsp

    See eWeek's article covering this study and its results: http://www.eweek.com/article2/0,1759,1645550,00.asp

    TheServerSide Editors' Note: This research is produced and hosted by The Middleware Company, a research and media company and the parent company of this site. TheServerSide.com and TheServerSide.NET are independent media sites of The Middleware Company; as such they do not produce research themselves and have no involvement in this study beyond providing this discussion thread for your positive and negative comments alike, just as other media organizations have reported on it (eWEEK, InternetNews).

    Threaded Messages (299)

  2. TMC Completes Massive IBM J2EE / .NET Study

    Well, again "Microsoft commissioned The Middleware Co. Inc. to study productivity and performance comparisons between Microsoft's Visual Studio .Net 2003 and IBM's WebSphere and other tools".

    Amazingly, next month when IBM commissions an "independent study", it will find that WebSphere is faster!

    I dunno, it's just really, really hard for me to believe that it really took twice as long with Java, or that an article that goes on and on and on about how wonderful Microsoft is has anything to do with an impartial study.
  3. I completely agree with you, Paul.
    OTOH, they say that WebSphere/WSAD is a slow, unproductive and painful environment. We can't really blame TSS; the whole community already knew that for a fact.
    Next time they should test .NET against a real, productive J2EE environment.
  4. Keep in mind what the sponsor wanted...

    TMC didn't have a choice of products to test. Microsoft is only concerned about getting competitive leverage over IBM/Linux/WebSphere, so that is the study they sponsored.
  5. Keep in mind what the sponsor wanted...

    OTOH, the WebSphere team chose Oracle 9i for the database and not DB2. I've always had better luck using WebSphere with DB2 rather than Oracle.
  6. Keep in mind what the sponsor wanted...

    TMC didn't have a choice of products to test. Microsoft is only concerned about getting competitive leverage over IBM/Linux/WebSphere, so that is the study they sponsored.
    Of course not. Microsoft had already tested everything before commissioning the work to TMC, to be sure of the result; hence WebSphere (its cumbersome and clumsy nature is pretty much common knowledge). It does not really matter that the comparison is between .NET and WebSphere, because they will use it as .NET vs. J2EE, and you are seeing this happen already.
  7. Well it pays for the website :-)

    +1

    It's like those annoying adverts that appear in the middle of online articles. Just like when you're reading an article on Linux you get a Microsoft advert halfway through. It pays for the website. And so we put up with this nonsense in the meantime. Nonsense: here defined as _any_ report commissioned by an IT vendor.
  8. Can't do that

    If they did that they would not get their check from Microsoft.

    A fair and unbiased testing method would be to create a spec so detailed and precise that there was no room for interpretation, then ask each contender to provide their own team to implement that specification. This way each team could bring in their experts, and the rest of us would be able to study some great code.
    Again, the problem would be to find a sugar daddy willing to pay for the study (prior to knowing the outcome). Remember: TMC does it for money!

    A final reflection: a task that takes < 100 man-hours to complete does not say anything about the performance of a complex real-world solution. Without having studied the task and the resulting code, I can conclude that complex solutions that require sophisticated logic and transactional integrity while handling large volumes will take much longer than that to develop.
    And here is the crux: a two-tier environment (like .NET's) is inherently less suited for such tasks than J2EE.

    Microsoft-commissioned studies have often been biased in the same way: remember the 'Get the Facts' study that compared the Total Cost of Ownership of a Windows system running on a PC server to the cost of a Linux system running on a mainframe?!
  9. Well, again "Microsoft commissioned The Middleware Co. Inc. to study productivity and performance comparisons between Microsoft's Visual Studio .Net 2003 and IBM's WebSphere and other tools". I dunno, it's just really, really hard for me to believe that it really took twice as long with Java, or that an article that goes on and on and on about how wonderful Microsoft is has anything to do with an impartial study.
    Hi Paul. Thanks for the early comments. I can appreciate your skepticism on any comparisons between IBM and Microsoft. However, I encourage you to read through the report, the auditor's report, and the associated notes. There is a good reconstruction of what happened where. In fact, in many situations, third parties were brought in to optimize the tuning of Linux and WebSphere -- my point being that the extensiveness of this study and the efforts taken by multiple groups in the analysis and tuning of the systems were significant.

    And, since the technologies (note that this is an IBM vs. .NET study, not J2EE vs. .NET) are compared on multiple dimensions, each dimension should be compared individually to grok the entire picture. I found it to be an educational (and long) read.

    Tyler
    The Middleware Company (a witness to the study, but not directly involved)
  10. And, since the technologies (note that this is an IBM vs. .NET study, not J2EE vs. .NET) are compared on multiple dimensions, each dimension should be compared individually to grok the entire picture.
    This is not a J2EE vs. .NET study, but that is exactly how it is going to play out. All you need to say is that WebSphere is a standard J2EE application server, and there it is. Next time you hear how crappy J2EE is, they will be pointing at this study.
  11. This is not a J2EE vs. .NET study, but that is exactly how it is going to play out. All you need to say is that WebSphere is a standard J2EE application server, and there it is. Next time you hear how crappy J2EE is, they will be pointing at this study.
    You're exactly right. And this is not a new phenomenon - it has been happening for a long time, hasn't it? Understandably, the J2EE vendors are happy when it is good news (and they all share the credit) but unhappy when it is bad news (and they all get the blame). The origin of the confusion was the original positioning of J2EE from Sun: "the one platform you need, available from multiple vendors" (my paraphrase of course, but I think it is accurate). I guess the differences in implementations that you refer to show that J2EE really isn't the same across all vendors. Hmmm, not vendor neutral, eh? Interesting. . .
  12. The origin of the confusion was the original positioning of J2EE from Sun: "the one platform you need, available from multiple vendors" (my paraphrase of course, but I think it is accurate). I guess the differences in implementations that you refer to show that J2EE really isn't the same across all vendors. Hmmm, not vendor neutral, eh? Interesting. . .
    Interesting, Dino? I'm sure it is for someone used to receiving everything from the one and only true vendor. As far as I remember, functionally J2EE is the same regardless of the vendor. Regarding non-functional requirements, there are differences. Do you find that interesting enough? I could work up an example comparing .NET on Windows and Mono on Linux to explain myself better.


    Cheers
  13. Interesting Dino?

    The origin of the confusion was the original positioning of J2EE from Sun: "the one platform you need, available from multiple vendors" (my paraphrase of course, but I think it is accurate). I guess the differences in implementations that you refer to show that J2EE really isn't the same across all vendors. Hmmm, not vendor neutral, eh? Interesting. . .
    Interesting, Dino? I'm sure it is for someone used to receiving everything from the one and only true vendor. As far as I remember, functionally J2EE is the same regardless of the vendor.
    Yep, J2EE is the same across those vendors - which means you get the same APIs. But those darn differences!
    Regarding non-functional requirements, there are differences. Do you find that interesting enough?
    Yes, I guess that's my point. I find this very interesting and often overlooked. An API spec does not define a platform. Variances across Sun, BEA, IBM, and JBoss make real differences in practice - in things like security, failover, developer productivity. The current TMC study, and the reaction to the report, shows it.

    Having the same API set - let's assume it provides bona fide 100% portability. Let's assume for the sake of this discussion that nobody uses portals or platform- or app-specific connectors, nobody uses replicated caching such as is provided in JBoss, WebSphere and WebLogic but not specified in J2EE, and nobody uses Web services (pre J2EE 1.4 there was no standard API or DD). Now once I build and deploy the app I am able to re-deploy it to another J2EE-compliant container, yes? BUT, the app code is not the app. There is so much more: operational procedures, configuration settings, functional performance, how to set up clustering, the semantics of clustering in failure scenarios, and on and on. None of this is contained within any .java module. It is all "server specific". So J2EE is the same, but what is the real benefit if there are so many other differences? POSIX is the same, but does that mean AIX is hot-swappable for Solaris?
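    To make the "the app code is not the app" point concrete, here is a minimal, hypothetical sketch (not code from the study; the class and attribute names are invented). The servlet below compiles against the standard servlet API and is identical on any J2EE container, yet whether the session it creates survives a node failure is decided entirely in vendor-specific configuration, outside any .java file.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.servlet.http.HttpSession;

    // Hypothetical servlet: the API calls are portable across containers.
    // Whether the session is replicated, persisted to a database, or lost on
    // failover is configured per server (admin console, weblogic.xml,
    // jboss-web.xml) -- none of it appears in this source file.
    public class CartServlet extends HttpServlet {
        protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            HttpSession session = req.getSession(true);
            session.setAttribute("lastItem", req.getParameter("item"));
            resp.sendRedirect("cart.jsp");
        }
    }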
    I could work up an example comparing .NET on Windows and Mono on Linux to explain myself better.
    Yes, I expect you would find similar issues in this pairing. Portability of application code might be "almost there" at least for most things, but what about all the other aspects?

    Cheers, likewise!
  14. Interesting Dino?

    Dino,

    Thank you very much for a reasoned response. I'm not going to write a Java apologia; if anything, I may give you more ammunition against the Java world. Sincerely, I believe some points in your argument are barking up the wrong tree.
    Regarding non-functional requirements, there are differences. Do you find that interesting enough?
    Yes, I guess that's my point. I find this very interesting and often overlooked. An API spec does not define a platform.
    I agree. I believe a platform is defined by: 1) Technology, 2) Frameworks, 3) Tools. In the Microsoft world all three are standardized, because there is one major provider for all three.

    But the same occurs in the Java world, you know? When I migrated from Java to .NET I found the .NET framework very comprehensive and very well thought out. But I also missed the strong, huge community support behind the platform. In the Java camp, people don't depend on one sole company or API spec to get things done. There are a lot of alternatives. So even if many key pieces of a modern IT puzzle are missing, there is often more than one alternative. Of course, this raises the bar for beginners, so it could be used as an argument against the Java platform as a whole: there are many alternatives besides the "official" providers.

    Sadly, I'm aware that this doesn't make a good marketing scenario.
    Variances across Sun, BEA, IBM, and JBoss make real differences in practice - in things like security, failover, developer productivity. The current TMC study, and the reaction to the report, shows it.
    Not so much, Dino. Let me tell you something: I know some major global companies that use JBoss and Eclipse/NetBeans for development and BEA (or another J2EE server) for production. And of course there are deployment problems and idiosyncrasies, but nevertheless they are solvable in a reasonable amount of time. Remember also that much development is done on Windows and deployment happens on *nix machines. That's living proof of the portability mantra.
    Having the same API set - let's assume it provides bona fide 100% portability. ... Now once I build and deploy the app I am able to re-deploy it to another J2EE-compliant container, yes? BUT, the app code is not the app. There is so much more: operational procedures, configuration settings, functional performance, how to set up clustering, the semantics of clustering in failure scenarios, and on and on. None of this is contained within any .java module. It is all "server specific". So J2EE is the same, but what is the real benefit if there are so many other differences?
    It depends on how you look at it. It could be seen as a disadvantage, or as a clear separation of concerns between development-time duties and deployment-time duties. Most programmers tend to ignore this simple fact and don't think about their programs in terms of "deployability".

    The tasks you've mentioned have to be accomplished whether you use Perl, Java, .NET or any other environment. The fact that in .NET those tasks are performed in a somewhat uniform fashion doesn't mean they don't have to be done. Whether they are defined as metadata inside a source code file or in an external XML file is an implementation decision made by the platform stewards.

    For example (since you agreed to use some .NET/Mono comparison), many of the concerns you mentioned refer to issues addressed in the System.EnterpriseServices namespace, which is not implemented in Mono. These are addressed in the form of tools and frameworks, be it IIS or Windows, but it is hard to tell where .NET as a technology ends and where IIS/Windows begins. Security in .NET apps is managed via IIS. How do you accomplish this in Mono? Using Mono-specific ways. So, how much is this different on the Java planet? What is the real benefit for developers of C# and its collateral specs being an ECMA standard? I'm afraid the same as for Java. What do you think?


    Cheers


    Javier
  15. Interesting Dino?

    Javier, I am very intrigued by this dialogue.
    I agree. I believe a platform is defined by: 1) Technology, 2) Frameworks, 3) Tools. In the Microsoft world all three are standardized, because there is one major provider for all three. But the same occurs in the Java world, you know? When I migrated from Java to .NET I found the .NET framework very comprehensive and very well thought out. But I also missed the strong, huge community support behind the platform. In the Java camp, people don't depend on one sole company or API spec to get things done. There are a lot of alternatives. So even if many key pieces of a modern IT puzzle are missing, there is often more than one alternative. Of course, this raises the bar for beginners, so it could be used as an argument against the Java platform as a whole: there are many alternatives besides the "official" providers. Sadly, I'm aware that this doesn't make a good marketing scenario.
    Yes, my take agrees with yours here. There are many many choices in the Java world, and in some cases the existence of choices is very good. And in some cases there are so many choices that it is actually a disadvantage. For an example of choices at the architectural (or what you have called the technology) level, look at Servlets, along with EJB stateful Session Beans - isn't there some overlap here? Or, for an example of choices at the product level, look at tools: JBuilder, JDeveloper, Eclipse, Workshop, IntelliJ, and even JDE for Emacs. Lots and lots of choices, which is good, but the downside is there is not a single community of tools plugin providers (the Java Tools Community attempts to address this), and a second downside is that there is not a single, common skills base.

      Interesting - the industry apparently desires a standard in tools, judging from the consolidation that seems to be occurring behind Eclipse.
    Variances across Sun, BEA, IBM, and JBoss make real differences in practice - in things like security, failover, developer productivity. The current TMC study, and the reaction to the report, shows it.
    Not so much, Dino. Let me tell you something: I know some major global companies that use JBoss and Eclipse/NetBeans for development and BEA (or another J2EE server) for production. And of course there are deployment problems and idiosyncrasies, but nevertheless they are solvable in a reasonable amount of time. Remember also that much development is done on Windows and deployment happens on *nix machines. That's living proof of the portability mantra.
    No doubt that Java has provided the cross-compiling capability for enterprise apps (used to be only an embedded systems concept). Develop on Windows and deploy on Unix. In fact, seems to me this addressed a huge issue for the unix world, which was: Unix systems were expensive, and you couldn't outfit a large team of devs with *nix workstations without spending a large premium. Linux changes this today, but 4 years ago, the Java cross-compile capability really addressed this nicely. In other words, use the cheap Windows workstation to develop apps for an expensive *nix (Java) server.

    And I am also not doubting the portability between JBoss and WebLogic (or you pick the pair), if you stick to the portable API set. And this is especially nice when you are talking about dev/test on JBoss and system test (+later deployment) on WebLogic.

    BUT.... I still believe the code is not the app. Portable code does not make a portable app. It is not possible to just pick up an ear and drop it into a different container and say, "Boom, it's ported."
    So J2EE is the same, but what is the real benefit if there are so many other differences?
    It depends on how you look at it. It could be seen as a disadvantage, or as a clear separation of concerns between development-time duties and deployment-time duties. Most programmers tend to ignore this simple fact and don't think about their programs in terms of "deployability". The tasks you've mentioned have to be accomplished whether you use Perl, Java, .NET or any other environment. The fact that in .NET those tasks are performed in a somewhat uniform fashion doesn't mean they don't have to be done.
    Absolutely so. Every app needs to deal with these issues. I am not saying that .NET includes some magic that eliminates these aspects. Instead, the point is that even if you have a portable API subset (J2EE), the differences in these areas make for real distinctions between products. In other words, J2EE is not a platform. To talk about a platform, you need to look at a specific vendor implementation, with the items 1, 2, and 3 that you mentioned above, and all the specific implications of those items.
    {...discussion of Mono versus .NET ...} So, how much is this different on the Java planet? What is the real benefit for developers of C# and its collateral specs being an ECMA standard? I'm afraid the same as for Java. What do you think? Cheers, Javier
    I think the situations are quite parallel. C# and the CLI as ECMA standards do not constitute a platform, at least not if you are looking for an enterprise app software platform. Mono is an attempt, as I understand it, to reproduce .NET on a Linux (er, non-Windows) base. But it will differ from .NET in all the ways you have cited, and probably more. The promise of app portability (not just API portability) is a difficult one to fulfill. It has been tried many times, and the only time it works is when a single vendor fulfills the promise. E.g., Oracle is portable across Sun, AIX, Windows, and Linux. Or, WebSphere is portable across Linux, Windows and HP-UX. BUT, a J2EE app is not, in practice, going to be truly portable across WebLogic, JBoss, WebSphere, and Oracle.

    I am pretty darn sure that this thread wasn't supposed to be discussing portability. But the question of "Is it J2EE or is it WebSphere?" raises the question of just what J2EE is and what it really gives me.

    Dino
    Microsoft
  16. Interesting Dino?

    Hi again. Here are my thoughts about what J2EE is to me and why it is useful for me. Three reasons come to mind.
    I am pretty darn sure that this thread wasn't supposed to be discussing portability. But the question of "Is it J2EE or is it WebSphere?" raises the question of just what J2EE is and what it really gives me.
    1) What J2EE means to me as a customer is room to maneuver when negotiating costly contracts with suppliers, while allowing me to mix non-standard productivity frameworks, technologies and tools in order to achieve an acceptable productivity level.

    1.1) With simple servlet, JSP, JDBC and JNDI technology it is possible to build solutions for many, many enterprise computing problems (not all, I concede); see the sketch after this list. These technologies are part of J2EE and make it unnecessary to use proprietary vendor extensions, while keeping an acceptable productivity level. If I have several "simple" applications deployed in disparate locations, licensing costs could otherwise be prohibitive.

    1.2) That gives me freedom of choice when buying and deploying applications, since I can negotiate lower licensing prices among J2EE vendors (or even with Microsoft): I have the chance to change my mind. Of course it could be costly and risky, but overall it's possible. Nobody wants to get divorced, but as a last resort, you know you can count on it.

    2) It is political insurance, something like a safe bet. I'm not buying some obscure proprietary technology, but something backed by big IT companies that gives me economies of scale. Overall, it is a thing that works. Perhaps not in the best way, but nevertheless, you know it's possible to get the job done.

    3) It recognizes, beyond interoperability, the fact that in a network environment not all systems are homogeneous.
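
    As an illustration of point 1.1 above, here is a minimal, hypothetical sketch that stays entirely within spec-level APIs (servlet, JNDI, JDBC) and uses no vendor extensions. The DataSource name jdbc/AccountsDS and the accounts table are invented for the example, and the JNDI binding itself would live in the container's configuration.

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import javax.sql.DataSource;

    // Hypothetical example: lists account names using only spec-level APIs.
    // The DataSource is looked up through JNDI; the binding of
    // "jdbc/AccountsDS" is done in the container's own configuration.
    public class AccountListServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            try {
                DataSource ds = (DataSource)
                    new InitialContext().lookup("java:comp/env/jdbc/AccountsDS");
                Connection con = ds.getConnection();
                try {
                    Statement stmt = con.createStatement();
                    ResultSet rs = stmt.executeQuery("SELECT name FROM accounts");
                    out.println("<ul>");
                    while (rs.next()) {
                        out.println("<li>" + rs.getString("name") + "</li>");
                    }
                    out.println("</ul>");
                    rs.close();
                    stmt.close();
                } finally {
                    con.close();   // always return the connection to the pool
                }
            } catch (NamingException e) {
                throw new ServletException("DataSource lookup failed", e);
            } catch (SQLException e) {
                throw new ServletException("Query failed", e);
            }
        }
    }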
    Lots and lots of choices, which is good, but the downside is there is not a single community of tools plugin providers (the Java Tools Community attempts to address this), and a second downside is that there is not a single, common skills base. Interesting - the industry apparently desires a standard in tools, judging from the consolidation that seems to be occurring behind Eclipse.
    I don't know whether it is the industry or the market that desires standardization in tools. And regarding not having a single common skills base, I see two aspects: skills related to tools and skills related to technologies. What I think is the following:

    1) In the Java world it is more usual to have common skills related to specific technologies and frameworks: servlets, Struts, Hibernate. Because of this, Java developers are frequently considered to be more knowledgeable than their .NET counterparts.

    2) In the .NET world it is more usual to have common skills related to tools: Visual Studio. Because of this, .NET developers are frequently considered to be more productive than their Java counterparts.

    While on the Java planet there seems to be some convergence toward standardized tools, in the .NET world the differentiation between C# and VB programmers seems to be increasing. At the end of the day, each side seems to adopt, or be "contaminated" by, the issues raised about the other.
    BUT.... I still believe the code is not the app. Portable code does not make a portable app. It is not possible to just pick up an ear and drop it into a different container and say, "Boom, it's ported."
    I'd like to point out two things.

    First, it's perfectly possible to do that with a WAR file. And a WAR file is part of J2EE. Regarding the EAR file, what I think is relevant is the fact that it is possible at a "reasonable" cost. I'm talking about "horizontal portability" (among vendors) and "vertical portability" (among hardware platforms: Intel 32-bit, Intel 64-bit, RISC).

    Second, I believe a portable app requires portable data. Two years ago we were required to design an application capable of working with an open source database and with Oracle. The application grew, and many InterBase stored procedures were migrated to Oracle. Now we have the choice of running some funds on the Linux servers, and the more profitable funds on the SuperDome or the Sun 15K.

    So I agree with you: "Portable code does not make a portable app."
    In other words, J2EE is not a platform. To talk about a platform, you need to look at a specific vendor implementation, with the items 1, 2, and 3 that you mentioned above, and all the specific implications of those items. ... I think the situations are quite parallel. C# and the CLI as ECMA standards do not constitute a platform, at least not if you are looking for an enterprise app software platform.
    The items being 1) Technology, 2) Frameworks and 3) Tools. Well, semantic issues aside, *for me* J2EE is a platform, and the set of three items is what I call a development environment. OK, OK, stricto sensu J2EE is not a platform, but perhaps a "platform definition"?

    Anyway, I won't try to convince anyone about if J2EE is or not a platform ;-)

    On my machine I have no Visual Studio at all, nor IIS installed. What I have is the .NET SDK, SharpDevelop, WebMatrix, Cassini, NAnt, NDoc, FxCop, etc. Do I have the platform? The development environment? The implementation? The technology? I don't know. I'm just able to get the job done.


    Cheers
  17. No one has mentioned the built-in support for Struts in WSAD, or claimed to have extensive experience with WSAD. What if an experienced WSAD/WebSphere dev team had gone up against a novice .NET dev team?????
  18. I covered why we didn't use Struts in a previous response. We had to consider both productivity and performance. Struts could have been slightly more productive, but would have given us a performance hit over plain JSP/Servlet.
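
    For readers wondering what the "plain JSP/Servlet" alternative to Struts looks like, here is a minimal, hypothetical sketch (not code from the study; the servlet, parameter and JSP names are invented). The servlet does by hand the controller-and-forward step that a Struts action mapping would otherwise provide.

    import java.io.IOException;
    import javax.servlet.RequestDispatcher;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical plain-servlet controller: no Struts ActionServlet or
    // struts-config.xml, just a direct dispatch to a JSP view.
    public class ShowOrderServlet extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String orderId = req.getParameter("orderId");
            // In a real app a DAO call would go here; we just pass the id along.
            req.setAttribute("orderId", orderId);
            RequestDispatcher view = req.getRequestDispatcher("/WEB-INF/jsp/order.jsp");
            view.forward(req, resp);
        }
    }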
  19. What if an experienced WSAD/WebSphere dev team had gone up against a novice .NET dev team (instead of a novice WSAD/WebSphere dev team against an experienced .NET dev team)?????
  20. Experience of the WSAD team does appear to be key here. The WSAD/WebSphere developers made some crucial errors that a couple of weeks' experience, or indeed a review of the freely available literature, would have shown to be complete folly. Why not break the timesheets down into time spent learning the product and time actually spent developing?

    Further key bizarre decisions in the development process (like the lack of a code versioning and sharing tool, as highlighted below) are incredible. This may not be fair, but the J2EE developers did come across as quite naive.

    However, I do concede the point that out of the box .NET is probably quicker to pick up and use - largely because one would use the common Microsoft libraries, which means you don't have to design common functionality and libraries or make those decisions yourself. I do feel that with an established J2EE team using a framework they are familiar with, we would see far better developer performance.
  21. Choices...

    <dino>
    There are many many choices in the Java world, and in some cases the existence of choices is very good. And in some cases there are so many choices that it is actually a disadvantage. For an example of choices at the architectural (or what you have called the technology) level, look at Servlets, along with EJB stateful Session Beans - isn't there some overlap here? Or, for an example of choices at the product level, look at tools: JBuilder, JDeveloper, Eclipse, Workshop, IntelliJ, and even JDE for Emacs. Lots and lots of choices, which is good, but the downside is there is not a single community of tools plugin providers (the Java Tools Community attempts to address this), and a second downside is that there is not a single, common skills base.
    </dino>

    I agree with you. Choices can make your life difficult... But therefore you need to take care of this topic carefully. Remember: democracy. It is good to have a lot of parties instead of one dictator, or don't you think so? So, for me it's a matter of one dictator (MS) or many parties in democracy (Java) ;-) (he he he, just kidding, don't take this too seriously).

    Anyway, we also found this topic very difficult when we developed OpenUSS in the year 2000 (and even today I find that this is difficult stuff) - because of those choices in the Java world. Therefore we "separate the concerns". We built an infrastructure project (one concern, the EJOSA project) which is used by one or many application projects (another concern: OpenUSS, POW, OpenFjord, ...).

    You can read our paper at:
    http://openuss01.uni-muenster.de/foundation/faculty/FacultyInfoDetailPage.po?FacultyInfoId=1086323847801

    I have also written down some of the choices (a general view) and how you could make your own choice:
    http://prdownloads.sourceforge.net/ejosa/ejosa-revo2.1-doc.pdf?download

    Cheers,
    Lofi.
  22. Security in ASP.NET and Mono

    For example (since you agreed to use some .NET/Mono comparison), many of the concerns you mentioned refer to issues addressed in the System.EnterpriseServices namespace, which is not implemented in Mono. These are addressed in the form of tools and frameworks, be it IIS or Windows, but it is hard to tell where .NET as a technology ends and where IIS/Windows begins. Security in .NET apps is managed via IIS. How do you accomplish this in Mono? Using Mono-specific ways. So, how much is this different on the Java planet? What is the real benefit for developers of C# and its collateral specs being an ECMA standard? I'm afraid the same as for Java.
    Just a clarification: by default ASP.NET uses Windows Authentication, but that can easily be switched to Forms Authentication. In that case IIS plays no role and you rely on ASP.NET mechanisms, and those mechanisms work just fine inside Mono.
  23. Security in ASP.NET and Mono

    Just a clarification: by default ASP.NET uses Windows Authentication, but that can easily be switched to Forms Authentication. In that case IIS plays no role and you rely on ASP.NET mechanisms, and those mechanisms work just fine inside Mono.
    Thanks Edgar, your example helps me illustrate another benefit of a J2EE container that I forgot to mention in another message.

    That benefit being that there is a clearly defined set of services you can rely on no matter which platform your server is running on. Many services provided by a Java application server are provided in the .NET world by IIS and Windows. There's no guarantee you can use those services on another platform. OTOH this is of little relevance, since most .NET applications are not designed with portability in mind.

    Cheers
  24. ... That benefit being that there is a clearly defined set of services you can rely on no matter which platform your server is running on. Many services provided by a Java application server are provided in the .NET world by IIS and Windows. There's no guarantee you can use those services on another platform. OTOH this is of little relevance, since most .NET applications are not designed with portability in mind. Cheers
    No doubt the portability story of J2EE is stronger than that of .NET, but there's less confusion about what depends on the Windows OS and what uses just the .NET platform facilities than you're suggesting; the Windows Authentication vs. Forms Authentication case illustrates the point. We are currently developing an application that works both on Microsoft .NET and Mono and, although we have found weak points, the project is coming along well.
  25. No doubt the portability story of J2EE is stronger than that of .NET, but there's less confusion about what depends on the Windows OS and what uses just the .NET platform facilities than you're suggesting; the Windows Authentication vs. Forms Authentication case illustrates the point. We are currently developing an application that works both on Microsoft .NET and Mono and, although we have found weak points, the project is coming along well.
    Edgar, frankly a big (really big) number of .NET developers hardly distinguish between Visual Studio and the pure .NET framework. OTOH, after developing authentication modules and login modules (for JAAS you know), believe me, it's very clear to me what I can count on on a given platform.

    If you re-read my earlier messages carefully, I was talking about services that are guaranteed on your selected platform, and I picked the ASP.NET authentication mechanism as an example because it is an easy one. Out of the box in ASP.NET, Forms Authentication is the only mechanism not tied to IIS for performing authentication activities, such as retrieving Windows or AD accounts.


    Cheers.
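
    Since JAAS login modules came up above, here is a minimal, hypothetical sketch of the J2EE-side counterpart: the LoginModule below is pure spec-level code, while wiring it into a particular container (or choosing the ASP.NET-style forms login on the .NET side) is configuration. The class name is invented and the credential check is a placeholder, not a real implementation.

    import java.util.Map;
    import javax.security.auth.Subject;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.PasswordCallback;
    import javax.security.auth.login.LoginException;
    import javax.security.auth.spi.LoginModule;

    // Hypothetical JAAS LoginModule skeleton. A real module would check the
    // credentials against a directory or database and add Principals to the
    // Subject in commit(); here the check is a hard-coded placeholder.
    public class SimpleLoginModule implements LoginModule {
        private CallbackHandler handler;
        private boolean succeeded;

        public void initialize(Subject subject, CallbackHandler callbackHandler,
                               Map sharedState, Map options) {
            this.handler = callbackHandler;
        }

        public boolean login() throws LoginException {
            NameCallback nameCb = new NameCallback("user: ");
            PasswordCallback passCb = new PasswordCallback("password: ", false);
            try {
                handler.handle(new Callback[] { nameCb, passCb });
            } catch (Exception e) {
                throw new LoginException("callback failed: " + e.getMessage());
            }
            // Placeholder check -- replace with a real credential store lookup.
            succeeded = "guest".equals(nameCb.getName())
                    && new String(passCb.getPassword()).equals("guest");
            if (!succeeded) throw new LoginException("bad credentials");
            return true;
        }

        public boolean commit() { return succeeded; }
        public boolean abort()  { succeeded = false; return true; }
        public boolean logout() { succeeded = false; return true; }
    }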
  26. (note that this is an IBM vs. .NET study, not J2EE vs. .NET)
    Download page:

    "The Middleware Company research report of the study: Comparing Microsoft .NET and IBM WebSphere/J2EE: A Productivity, Performance, Reliability and Manageability Analysis."

    Why can't we get a fair shake from you guys? eWeek trumpets this as J2EE versus .NET thanks to this. Whatever happened to truth in advertising?
  27. What was that?

    Did this @sshat actually use the word 'grok'? I was close to sniffing you out as a marketing puke, but that did it right there. Yeah, your "study" is fair and unbalanced; go sell crazy somewhere else, we're all full here ...
  28. MS Marketing Machine

    I think this is what MS is really good at: marketing. The whole process here might be fraught with all sorts of flaws, but when a decision maker in one of these companies sees this, they will be happy to buy into this .NET thing. I think the J2EE camp isn't doing enough to sell J2EE. What happened to Sun's Java marketing campaign? BTW, MS is flooding African universities with all manner of free .NET products, and they have even made the .NET conference an annual event for African universities... and where is the J2EE camp??? The answer is: I don't know.
  29. Sun's J2EE Marketing

    I think this is what MS is really good at: marketing. The whole process here might be fraught with all sorts of flaws, but when a decision maker in one of these companies sees this, they will be happy to buy into this .NET thing. I think the J2EE camp isn't doing enough to sell J2EE. What happened to Sun's Java marketing campaign? BTW, MS is flooding African universities with all manner of free .NET products, and they have even made the .NET conference an annual event for African universities... and where is the J2EE camp??? The answer is: I don't know.
    Sun's marketing reminds me a lot of Sybase's marketing. Abysmal!! Sybase had and has some technically excellent products, but has failed to market them. Look where they are in market share - about 1% in the app server market and very low in the DB market. Sybase's issue was technical arrogance. Sun has the same issue. They believe that people will beat a path to their door because they have excellent products. What they forget is that people have to know about these products first. Above all, MS is a marketing organization as much as it is a technology organization. Sun and Sybase are technical without the marketing organizations.
    JMDW...
  30. Microsoft good at marketing???

    Are you nuts? Microsoft good at marketing? Compared to IBM? Yes, you must be nuts.
    Microsoft is good at marketing to the consumer, but their enterprise marketing leaves much to be desired, especially when compared to IBM.
    IBM is doing all that they can to sell WebSphere ... that's why it's the #1 J2EE AppServer when counting licenses sold, even though it is an inferior product.
  31. I've had experience working with VB/Visual Studio 6 and Java/J2EE. Microsoft's strength has always been its tools, like VS and VS.NET, that make development of common coding constructs and scenarios simple for the lay programmer. Having worked on several large projects, I can see why a manager in a large company may choose to develop using Microsoft technologies, considering that it is much easier to do. While J2EE app servers try to achieve robustness, MS concentrates on how business decisions about platform choice are made and tries to make that choice easier by reducing the upfront cost. You may actually end up paying more trying to maintain an application built using MS stuff, but it is likely you will get it out to market earlier than a competitor using J2EE.

    The lesson for J2EE really is "simplify", and provide good tools that make development of some of the more complex pieces simple. What I would really be interested in seeing is a coherent open-source J2EE toolset - a J2EE server, a full-fledged IDE with a lot of templates to ease development, and documentation - all bundled together as cohesively as the .NET stuff. You can't go after this in the piecemeal manner we have been using in the J2EE world. Bottom line: managers are looking for a whole suite of tools, not just pieces, at a low upfront cost, with enough trained developers on that suite available at a reasonable price, and good support once deployed. That really is the challenge.
  32. You are basing your comparison of J2EE vs. .NET on your experience with VS 6 and VB 6.0.
    As anyone that has done any amount of VB6 and VB.NET development can tell you, that's like comparing JavaScript to J2EE. While Microsoft does look to provide "business value" (and why is that a bad thing?), you cannot make any statements regarding .NET by comparing your experience with VB6 to .NET. It just shows how long you've been out of the Microsoft world and how much you need to learn about how they've caught up with (and, in some ways, surpassed) the Java vendors.
  33. VB/Visual Studio 6 != Visual Studio .NET

    You are basing your comparison of J2EE vs. .NET on your experience with VS 6 and VB 6.0. As anyone that has done any amount of VB6 and VB.NET development can tell you, that's like comparing JavaScript to J2EE. While Microsoft does look to provide "business value" (and why is that a bad thing?), you cannot make any statements regarding .NET by comparing your experience with VB6 to .NET. It just shows how long you've been out of the Microsoft world and how much you need to learn about how they've caught up with (and, in some ways, surpassed) the Java vendors.
    I've been using VS.NET for the last two years on a fairly important application. I don't see any real difference between C# and Java; the differences to me are trivial. The J2EE vs. .NET stacks, on the other hand, are very different. There are certain things I like about .NET, but there are plenty of things that are simply night and day. I'd like to hear your thoughts on how the .NET stack is better than J2EE. One of the things I dislike in .NET is .NET controls. If I really want a nice GUI, it makes much more sense to bypass the browser altogether and just use HTTP between the client and the server. .NET controls are just as bad as applets. But then again, with Avalon Microsoft is moving back towards rich client + HTTP protocol. On the server side, DTC + COM+/DCOM + IIS + BizTalk is not the same functionality as a mature EJB container.

    It's going to be a while before Microsoft produces a mature product that really is equal to WebLogic, JBoss or WebSphere in terms of fault tolerance, scalability and reliability. Then there's the whole question of integration with heterogeneous environments: .NET is still in its infancy for complex integrations. Microsoft management and CLR developers have gone on the record saying that COM+ does not scale globally, whereas CORBA and EJB do. VS.NET really is a good improvement over previous versions of Visual Studio, but whether it is equal to J2EE is questionable. If by better you mean less code to write and more wizards, then I'd agree.

    Writing highly customized applications with VS.NET is more of a pain than anything else. From what I see, you have to override all the default stuff. But that's my biased perspective.
  34. short update brush course

    "CORBA, EJB"
    Legacy software, read SOA.

    "Weblogic, JBoss or Websphere"
    The correct epithet is "Elephant servers"

    "It's going to be while before Microsoft produces a mature product that really is equal to Weblogic, JBoss or Websphere in terms of fault tolerance, scalability and reliability"

    They already have with "Indigo"

    "Of the things I dislike in .NET is .NET controls"

    .NET controls are nothing other than Struts, but better; read about user controls for real innovation.

    "I'd like to hear your thoughts on how .NET stack is better than J2EE?"

    The current study that we are discussing, for example.

    The Java developers have already voted with their feet - read Spring/Tomcat vs. J2EE. Check BEA/WebLogic's situation and stock...

    Regards
    Rolf Tollerud
  35. short update brush course

    "CORBA, EJB" Legacy software, read SOA.

    That sounds like marketing speak to me. Those who work in diverse integration environments have been doing SOA for a while. It's just marketing spin from Microsoft, IBM, Oracle and Sun to pump sales. Don't be fooled by it. Go ask the EDI folks what SOA means and they'll give you their perspective. Or ask the OMG what SOA means and they'll tell you it's marketing spin.

    "Weblogic, JBoss or Websphere" The correct epithet is "Elephant servers"

    Building high-availability server applications is very hard, and it actually requires many of the features provided by EJB containers, Tuxedo and transaction monitors. Try building a high-availability transactional server and then tell me those things are non-essential junk.

    They already have with "Indigo"

    Since I've never played with Indigo, I can't say how it scales. If we look at Microsoft's track record, it takes three releases to reach the level of performance Microsoft promises for the first release. That would mean at least six years after the first release. At some point, Microsoft will have reproduced Tuxedo. Tuxedo took a long time to reach its current performance, so it's not like it happened overnight. Unisys recently announced they are supporting Linux. From my understanding (which could be totally wrong), Unisys was one of the biggest sellers of Windows servers running 8 or more CPUs. Their move shows there's definitely a demand for large x86 systems running Linux.

    The current study that we are discussing, for example. The Java developers have already voted with their feet - read Spring/Tomcat vs. J2EE. Check BEA/WebLogic's situation and stock...

    BEA is losing to IBM, but not to Microsoft, from what I know of the financial software sector. IBM is doing well these days and winning plenty of large contracts. I know plenty of people who hate EJB and think it's overkill. I also know plenty of people who live by EJB, because they have very complex integration requirements. The real world is hardly as black and white as Sun, IBM, Oracle and Microsoft would have us believe. All of them lie and stretch the truth, but that's because marketing guys have a different definition of truth.

    Everything has its place. I really think this whole "one way for everything" approach is counterproductive and gets in the way of progress. Windows, mainframes, high-end Unix, C++, Cobol, Java, and C# aren't dying or taking over the world. Blindly believing marketing is a bit foolish. I've done it in the past and came to my senses. The best thing developers can do is demand that companies back up their claims with cold hard facts and full disclosure.
  36. short update brush course

    If we look at Microsoft's track record, it takes three releases to reach the level of performance Microsoft promises for the first release. That would mean at least six years after the first release. At some point, Microsoft will have reproduced Tuxedo. Tuxedo took a long time to reach its current performance, so it's not like it happened overnight.
    Instead of waiting 6 years, why not write .NET services using Tuxedo right now?

    See http://www.otpsystems.com/DotTux.html

    Robin Boerdijk
    OTP Systems Oy
  37. Why this study is garbage

    Based on cost this study is garbage:
    Because they chose WebSphere ND, which supports clustering and EJBs. Well then, why did the study not use EJBs? To be fair the study should have chosen WebSphere Express, which is considerably less expensive. Oh, and failover is then simply done via your load balancer (Edge Server, or the IBM HTTP Server, aka Apache). This requires that the WSAD application be written as stateless, which it wasn't, ignoring many of the web services/J2EE best practices.

    Based on productivity this study is garbage:
    - The developers didn't use an SCM with WSAD and "had some false starts"... well duh, if you don't use an SCM you should be shot.
    - The .NET developers used the DataGrid control for paging tabular data, but did not use the equivalent features in WSAD (WSAD includes a JSF-based table widget that supports paging at either the server or the client level with no hand coding).
    - The developers were pretty close to ignorant. They copied the webservices.jar from the WSAD test environment into WAS ND? Are they trying to make their lives difficult? They replaced the IBM JRE with Sun's to make the JRE support a non-standard JVM argument? (P.S. The IBM JRE handles garbage collection much better than Sun's, which obviates the need for the non-standard GC parameter.)
    - If this truly was an IBM vs. MSFT study, then why was Sun's IDE used for the mobile aspects instead of IBM's Device Developer?
    - Why did they 'experiment with vertical scaling' of WebSphere? Did horizontal not suffice (which it did)? Or were they trying to make sure every possible deployment option was tested, thus ensuring the IBM solution was less productive?
    - To improve performance the developer wrote an object pool (see the sketch after this list). Hmm, that's nice, why not just enable some of the caching capabilities of WebSphere? Oh that's right, because then it'd take less time, perform better and support features unavailable in a .NET solution.
    - They found session persistence too slow and spent time examining/tweaking things... Hmm, how slow is too slow? How big was their session? Both are important aspects left out of the document; I'll have to look at the code to see why they had such problems. But this is generally only broken when your sessions include too much information.
    - I have no bloody idea what they are trying to do in their 'hot deployment' of WebSphere section. Sounds like they are copying parts of an application, but also restarting servers, then deploying to each server manually... Really, the problem sounds to me like they don't have a clue. With WebSphere ND you don't need to do any of that garbage; it's taken care of for you.
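
    For reference, the kind of hand-rolled object pool being criticized here typically looks like the following minimal, hypothetical sketch (this is not the study's actual code, and a container-managed cache would usually make it unnecessary):

    import java.util.LinkedList;

    // Hypothetical hand-rolled object pool. Callers borrow an instance, use
    // it, and give it back; the pool creates new instances on demand and
    // keeps at most maxIdle of them around for reuse.
    public class ObjectPool {
        public interface Factory { Object create(); }

        private final LinkedList idle = new LinkedList();
        private final Factory factory;
        private final int maxIdle;

        public ObjectPool(Factory factory, int maxIdle) {
            this.factory = factory;
            this.maxIdle = maxIdle;
        }

        public synchronized Object borrow() {
            return idle.isEmpty() ? factory.create() : idle.removeFirst();
        }

        public synchronized void giveBack(Object o) {
            if (idle.size() < maxIdle) {
                idle.addFirst(o);    // keep it for reuse
            }                        // otherwise drop it and let GC collect it
        }
    }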

    This study, like most studies, is marketing-funded garbage.
  38. Why this study is garbage - is it?

    I would like to see a response from IBM on the study. Until then I'll reserve judgement on whether it is garbage. But clearly some of your comments are off base.
    Based on cost this study is garbage: Because they chose WebSphere ND, which supports clustering and EJBs. Well then, why did the study not use EJBs?
    The WebSphere implementation did use EJBs. Page 38 says the RRD implementation used MDBs. Following pages say the WSAD implementation also used MDBs, and page 57 says the Customer Service app wrapped JMS with Session Beans.
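
    For context, an EJB 2.x message-driven bean of the kind the report describes (the style that applied to WebSphere 5.x) looks roughly like the following hypothetical sketch; the bean name and the business logic are invented, and the actual JMS destination would be declared in the deployment descriptor.

    import javax.ejb.EJBException;
    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;

    // Hypothetical EJB 2.x message-driven bean: the container delivers JMS
    // messages from a queue configured in the deployment descriptor, and
    // onMessage() holds the business logic (here just a placeholder).
    public class OrderMessageBean implements MessageDrivenBean, MessageListener {
        private MessageDrivenContext ctx;

        public void setMessageDrivenContext(MessageDrivenContext ctx) {
            this.ctx = ctx;
        }

        public void ejbCreate() { }

        public void ejbRemove() { }

        public void onMessage(Message msg) {
            try {
                if (msg instanceof TextMessage) {
                    String body = ((TextMessage) msg).getText();
                    // Placeholder: a real bean would process the order here.
                    System.out.println("Received order message: " + body);
                }
            } catch (Exception e) {
                throw new EJBException(e);
            }
        }
    }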
    To be fair the study should have chosen WebSphere Express, which is considerably less expensive. Oh, and failover is then simply done via your load balancer (Edge Server, or the IBM HTTP Server, aka Apache). This requires that the WSAD application be written as stateless, which it wasn't, ignoring many of the web services/J2EE best practices.
    Using the lower-cost Express version would have been an option except for the use of EJBs and clustering, neither of which is supported in Express. And whoops - isn't Express supported only on machines with up to 2 CPUs? But this test bed was 4x4p. I think the reason they used Edge Server and not IHS for the load balancing was to get the failover. Also, is it not the case that Edge Server is not available separately, and is licensed only with the higher-cost WAS ND?

    For all these reasons, it looks like WAS ND was in fact required for this test and the pricing comparison looks valid to me.
    Based on productivity this study is garbage:-The developers didn't use an SCM with WSAD and 'had some false starts'.. well duh, if you don't use an SCM you should be shot.
    Strange, because they used an SCM with RRD. In any case, this aspect was probably overshadowed by the productivity advantage inherent in having already done the work. By the time they started with WSAD, they had built the app once with RRD.
    The .NET developers used the DataGrid control for paging tabular data, but did not use the equivalent features in WSAD (WSAD includes a JSF-based table widget that supports paging at either the server or the client level with no hand coding).
    It would have been nice to see how much benefit there is to the JSF stuff. I didn't see any mention of JSF in the report document.
    They copied the webservices.jar from the WSAD test environment into WAS ND? Are they trying to make their lives difficult?
    Who knows, but doesn't the report imply that this fixed something?
    They replaced the IBM JRE with Sun's to make the JRE support a non-standard jvm argument? (PS. The IBM JRE handles garbage collection much better than Sun's which obviates the need for the non-standard gc parameter).
    Yep, that was obviously an error. But in any case, they did not run in this configuration. They went back to the original IBM JRE quickly.
    If this truly was an IBM vs MSFT study then why was Sun's IDE used for the mobile aspects instead of IBM's Device Developer?
    For the mobile app, it would be interesting to see if IBM’s Device Developer is more productive than Sun One Mobile Studio. It might be, but it might not be. Is there synergy between RRD and WebSphere Device Developer? But this was just a small part of the app anyway.
    Why did they 'experiment with vertical scaling' of WebSphere? Did horizontal not suffice (which it did)? Or were they trying to make sure every possible deployment option was tested, thus ensuring the IBM solution was less productive?
    There was a time when it was a best practice among Java app vendors to install multiple instances of a JVM on a particular box, because of the anomalous behavior of the garbage collector when dealing with large heaps. Basically, with multi-gigabyte heaps in a single JVM, the world would stop while the GC happened. I haven't seen the recent guidance from IBM on whether this still applies. Maybe the Websphere team hadn't seen it either. I assume IBM would have fixed this in a recent JVM, but it's good to be sure.
    To improve performance the developer wrote an object pool. Hmm, that's nice, why not just enable some of the caching capabilities of WebSphere? Oh that's right, because then it'd take less time and perform better and support features unavailable in a .Net solution.
    Regarding caching, there were strict requirements on the app disallowing caching of database info -- all info had to be fresh from the database. (No Option A caching here.)
    They found session persistence too slow and spent time examining/tweaking things out.. Hmm, how slow is too slow? How big was their session? Both important aspects left out of the document, I'll have to look at the code to see why they had such problems. But, this is generally only broken when your sessions include too much information.
    This seems to be a complaint about the results they saw, and now how they dealt with them? What's the point here?
    I have no bloody idea what they are trying to do in their 'hot deployment' of WebSphere section. Sounds like they are copying parts of an application, but also restarting servers, then deploying to each server manually... Really, the problem sounds to me like they don't have a clue. With WebSphere ND you don't need to do any of that garbage; it's taken care of for you.
    The study report references the WebSphere document that describes the steps they tried to follow to do the hot deployment. But it would be good to get more commentary on this piece from the team members.
    This study, like most studies, is marketing-funded garbage.
    Suum cuique.

    In the end, I think the report stands as a valuable contribution. Sure there were some problems, on both teams, in both implementations. Things could have been done better, as with any project. Generally speaking though, it represents a large project successfully executed, and the experiences and observations TMC reported seem quite valid, and reflective of the typical experiences of a project team, despite those problems.
  39. Why this study is garbage - is it?

    Hi Dino,

    Are you the guy who works for Microsoft, as this link says?: http://c2.com/cgi/wiki?DinoChiesa

    Glad to meet you.
  40. Are you the guy who works for Microsoft?

    Hi Dino, are you the guy who works for Microsoft, as this link says?: http://c2.com/cgi/wiki?DinoChiesa Glad to meet you.
    YES, that's me. I have worked for Microsoft for the past 5 years.
    I apologize - I should have been signing my posts.

    Dino
    Microsoft
  41. VERY VERY Interesting!

    Have a look at Dino's user profile and follow the links at the bottom which show users who have likely posted from "the same location".

    ...
    The following table contains a list of other community members who have posted from the same location as this user. Users could be linked due to a variety of reasons, including posts made from a shared IP (dialup, dynamic DSL/broadband, wireless location), multiple aliases per person, etc.
    ...

    For each of these you can follow even more links to people who have posted from "the same location" as them, and so it goes on. I'm not sure how accurate this is, but this might explain some of the very biased comments being made by some posters on many previous threads. Posting as a MS employee without signing your posts on a Java forum is very sneaky.
  42. VERY VERY Interesting!

    Posting as a MS employee without signing your posts on a Java forum is very sneaky.
    I apologized previously, and I am doing it again. I'm sorry for any deception. I should have signed my posts. I have previously posted here and publicly noted my employer, and I was thinking people knew me. Ok, that was wrong. I'm sorry. But I would point out that I am using my real name. I am not trying to hide anything, sorry if it appeared that I was.

    Dino
    still with Microsoft
  43. Another "Get The FUD" nonsense from Microsoft[ Go to top ]

    When I saw the title, the first question that came to my mind was who sponsored it. As usual, the big M$.
  44. What a technology realty show!![ Go to top ]

    Anyone with college-level training in the scientific method knows the absurdity of such a vendor-sponsored, two-developer-team showdown. If you disagree with me, I would recommend TSS submit this paper to a peer-reviewed journal or conference run by the IEEE or ACM. I would love to see what reviews this kind of research would get.
    It is very sad I spent my sleeping hours reading this nonsense. I have to suggest that enterprise Java guys start a new vendor-neutral forum where we can have a fair exchange of technical ideas and experiences, and say goodbye to this one.

    -YT
  45. Preaching to the congregation[ Go to top ]

    There is already such a forum, Javalobby. There you can be sure that all Java negatives are immediately moderated away. The address is http://javalobby.com/.

    I hope you will be satisfied! :)
  46. No marks for Design[ Go to top ]

    The study mostly looked at short-term gains, like how fast the application could be built. To me, the J2EE team spent more time building frameworks and following best practices rather than focusing on speed.

    I think the evaluation team could have taken design and other environmental factors, such as tools and development approach, into consideration in the final report.
  47. Study not surprising[ Go to top ]

    This study tells us nothing new. Developing on .NET is easier and takes less time initially. It takes less intelligent programmers. Businesses like this; for a lot of companies the bottom line is right now.

    The J2EE architecture takes smarter developers, but over time the application can be moved to multiple platforms and operating systems. J2EE is much more maintainable long term and much more scalable. The study doesn't really represent that.

    I have done .NET and J2EE for a long time. With .NET apps you throw them away and start over. I work with several J2EE apps; most are much more maintainable and easier to add functionality to. J2EE will continue to be slower to develop with, but more focused on longer-term investments and large scalability.

    What's nice about the WebSphere approach is our apps run on whatever boxes we can get money for. We take old Windows boxes and IBM xSeries out of the scrap heap and put Linux on them. Not surprisingly, our WebSphere apps usually run faster and more reliably on the old Linux boxes than on the newer Windows boxes. What's nice about our production AIX and Linux boxes is that they are more secure and don't have to be rebooted to apply some security patch. Our environment is WSAD 5.1.2, Rational Rose, CVS (on a Linux box), and WebSphere 5.1. I can't imagine using WSAD without a CM tool like CVS. CVS is brainless and an easy way to share code. No one would have thought about using RRD in its current state. Some of the RRD features will go into WSAD; after that RRD will die -- it's known within IBM.
  48. $$$[ Go to top ]

    $$$ $$$$$ $$ $$$$, $$$ $ $$$$ $$$$$. yeah i'm so full!
  49. As someone who spent far too long in academia (admittedly in the neurosciences) before coming to IT, I find it heartbreaking that a paper this inept and badly thought out has generated so much commentary. The methodological differences between the two projects would get it thrown out of a first-year university course, let alone a study designed to guide/encourage/inform decisions that could be worth hundreds of millions of dollars. I believe YT Chen was spot-on when he implied that if this were submitted to a top-rank peer-reviewed journal, it would be rejected out of hand due to its design flaws.

    Vendor-funded studies are nothing new and are an established method of backing up marketing claims, but only when done properly. The idealised double-blind, placebo-controlled trial cannot be performed in the IT environment ("No sir, I didn't have any idea which app server I was using when I deployed my files using the WebSphere Administrative Console!"). A series of tests of varying sizes -- two-tier, n-tier, with/without legacy apps, etc. -- using developers of similar experience (including similar experience in the selected dev environment!) on hardware/software as similar as possible would be far more useful to the industry, and if carried out and shown to be successful could not be laughed off as marketing spin.

    That of course is the crux. What vendor in their right mind would construct a study where they knew they were going to fail, or even possibly fail? A buried study can be just as damaging as a failed one. Reality means backing your horse by ensuring that the design specs display your product at its best and your competitor's at its worst, whilst still being able to claim that the study was unbiased and fair.

    Welcome to the real world, boys; it ain't pretty but it's the way things get done.

    As a J2EE developer, I have no experience to judge whether the claims made of .NET's superiority are valid and can be backed up by real-world examples, and I'm the first to admit that I have a vested interest in the success of Java and its frameworks and technologies (hell, I like being employed), but studies like this do no-one any favours.
  50. This is just a marketing campaign by Microsoft!

    The .NET developers had solid previous experience with the .NET platform and VS.NET (C# implied).

    While the J2EE team had no experience with WSAD (per the report) and WebSphere (e.g. trying Sun's JVM, and other evidence found by the community).

    Even the application development and J2EE skills of the J2EE team in the study are questionable, based on the following two facts:

    1. They did not use a source control system such as CVS for serious software development (in the WSAD version).

    2. They decided not to use Struts due to 'performance overhead' (a claim that cannot be found in any other source); instead, they re-invented the wheel by building their own little Web presentation framework.

    BTW, there is built-in wizard support for Struts in WSAD (also in JBuilder and JDeveloper). And Struts alone, even without IDE support, was already a productivity boost for Java-based Web application development.
  51. This is just a marketing campaign by Microsoft! The .NET developers in the study had solid previous experience with the .NET platform and VS.NET (C# implied), while the J2EE team in the study had no experience with WSAD (per the report) and WebSphere (e.g. trying Sun's JVM, and other evidence found by the community).
  52. 300th Post[ Go to top ]

    This is the 300th post for this message.
    (Count em all if you want !!! )
  53. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    Why wasn't an open-source app server such as JBoss chosen to carry out this study? Java is closely tied to open source software. The results in terms of price would have been very different.
    Julien.
  54. Why wasn't an open-source app server such as JBoss chosen to carry out this study? Java is closely tied to open source software. The results in terms of price would have been very different. --Julien
    This research endeavor was designed to be an IBM vs. .NET analysis as opposed to a J2EE vs. .NET comparison. The research was configured to compare an enterprise application in development, test, deployment and maintenance mode -- deployed in likely production environments for preferred Microsoft and IBM architectures. In the case of IBM, there were two complete stacks created: one on Rational Rapid Developer and another with WSAD / WebSphere deployed on Linux.

    Since the focus was on likely large-customer deployment scenarios, choosing IBM (not J2EE) as the focus area was an early decision -- so other J2EE application servers weren't considered for this effort.
  55. Biggest bang for the buck[ Go to top ]

    Why wasn't an open-source app server such as JBoss chosen to carry out this study? Java is closely tied to open source software. The results in terms of price would have been very different. --Julien
    MS is moving into the Enterprise space--IBM's space. That's why they went after WebSphere.
  56. Study? What study?[ Go to top ]

    Why not have chosen an open-source app server such as jboss in order to carry out this study?
    The term "study" is used very loosely.
    Change the term from "study" to "marketing document" and the question becomes:
    Why not have chosen an open-source app server such as jboss in order to carry out this marketing document?

    The answer is obvious.
  57. Woooow[ Go to top ]

    A very impressive study. It could be titled "everything we know about software so far".


    Some comments:
    - To be fair, LAMP (aka PHP) should have been included
    (which leads to a Tomcat, iBATIS, pgSQL, Eclipse stack possibility as well).

    Scary:
    "Somasegar said Microsoft will be delivering changes to its development technology that will enable developers to create applications with 50 percent to 70 percent less code required." I assume he is talking about XAML (watch out JDNC!)

    OT:
    But ... afaik, most banks and large companies ban MS Servers due to security and cost of operation. Illustration:
    http://media.trendmicro.com/product/nvw/index.html

    .V

    ms vs Open/Source "get the fud":
    http://www.pcmag.com/article2/0,1759,1618813,00.asp
  58. Wooooow - 50-70% code reduction?[ Go to top ]

    Scary:"Somasegar said Microsoft will be delivering changes to its development technology that will enable developers to create applications with 50 percent to 70 percent less code required." I assume he is talking about XAML (watch out JDNC!)
    Nope, I think Soma was talking about server-side programming, where XAML is mostly client-side UI. There are some nice advances in ASP.NET that make common scenarios simpler and easier.
  59. Wooooow - 50-70% code reduction?[ Go to top ]

    Scary:"Somasegar said Microsoft will be delivering changes to its development technology that will enable developers to create applications with 50 percent to 70 percent less code required." I assume he is talking about XAML (watch out JDNC!)
    Nope, I think Soma was talking about server-side programming, where XAML is mostly client-side UI. There are some nice advances in ASP.NET that make common scenarios simpler and easier.
    Our projects consist of:
    - MTLOC - Manually typed LOC: what developers actually type in editors and IDEs.
    - GLOC - Generated LOC: the kind of "plumbing" code that various code generators and helpers create to support certain technologies: CORBA, WS, EJB, persistence, etc.
    - ULOC - Used LOC: the number of code lines in all used libraries. I would differentiate among them:

    o Trusted code: TULOC -- stuff like the JDK or OS, and some widely used and well-known libraries (the Oracle JDBC driver, for instance)
    o Risky code: RULOC -- stuff from a small vendor, a framework used for the first time, etc.
    Note: What counts as TULOC and what counts as RULOC is a completely personal/team decision which might not have apparent reasons.
    I assume good code quality and smart developers here, which means some direct correlation between LOC, project complexity and man-months.
    There is a question:
    - Which project is more complex: one that uses 1000 MTLOC, or another that has 100 MTLOC and 10000 RULOC?
    Full text at http://kgionline.com/articles/mtloc/mtloc.jsp

    So, where is the reduction, and what is the price?
  60. Wooooow - 50-70% code reduction?[ Go to top ]

    Nope, I think Soma was talking about server-side programming, where XAML is mostly client-side UI. There are some nice advances in ASP.NET that make common scenarios simpler and easier.
    That's right. Use of ASP.NET forms is a very neat way to build web applications, so neat that the Java camp tried to emulate it via JavaServer Faces. The relevance of the study of course has to do with the tools and frameworks you're using to build applications. If someone sets up an environment to compare a Java application that uses Hibernate and a .NET application that uses ADO.NET typed datasets (in order to emulate similar functionality when building DTOs, as shown in Microsoft's patterns book), then the LOC counting would vary; but if you're not using Hibernate or typed datasets, that's irrelevant to you. Of course, Microsoft loyalists would cry "That's not J2EE!!!". Mmhh, well, I think Java loyalists (zealots?) are more vocal, he, he ;-)
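    To make the Java half of that comparison concrete, here is a rough sketch using the Hibernate 2.x-style API (net.sf.hibernate); Order, OrderSummary and the mapping setup are invented for illustration, not from the study's code. The point is simply where the hand-typed "DTO building" code ends up when a mapping framework does the persistence:

    import java.io.Serializable;
    import net.sf.hibernate.HibernateException;
    import net.sf.hibernate.Session;
    import net.sf.hibernate.SessionFactory;

    public class OrderSummaryAssembler {

        // Hypothetical mapped entity; its Order.hbm.xml mapping file is omitted here.
        public static class Order {
            private Long id;
            private String customerName;
            private double total;
            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }
            public String getCustomerName() { return customerName; }
            public void setCustomerName(String name) { this.customerName = name; }
            public double getTotal() { return total; }
            public void setTotal(double total) { this.total = total; }
        }

        // Plain serializable DTO handed to the web tier.
        public static class OrderSummary implements Serializable {
            public final Long id;
            public final String customerName;
            public final double total;
            public OrderSummary(Long id, String customerName, double total) {
                this.id = id;
                this.customerName = customerName;
                this.total = total;
            }
        }

        private final SessionFactory sessionFactory;

        public OrderSummaryAssembler(SessionFactory sessionFactory) {
            this.sessionFactory = sessionFactory;
        }

        public OrderSummary summarize(Serializable orderId) throws HibernateException {
            Session session = sessionFactory.openSession();
            try {
                Order order = (Order) session.load(Order.class, orderId);
                // The only hand-typed "DTO" code is this copy; the persistence plumbing
                // lives in the mapping files and the framework, which skews raw LOC counts.
                return new OrderSummary(order.getId(), order.getCustomerName(), order.getTotal());
            } finally {
                session.close();
            }
        }
    }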

    I've been doing some consulting on Java and .NET applications, and what I've learned is that what is really portable, vendor-neutral and vendor-independent is our own developer stupidity. It doesn't matter whether you develop in Java or .NET; the blockers have to do with processes and methodologies, not technology.


    Happy flamewar ;-)


    Cheers
  61. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    Quoting from EWeek article
    <quote>
    The comparison not only involved productivity and performance, but also cost. According to the study, the Microsoft results are based on a system running Visual Studio .Net running on Windows Server 2003 and costing $19,294. The IBM results are based on a system running WebSphere Network Deployment edition running on Red Hat Linux and costing $253,996.
    </quote>

    Damn thing! This is overwhelming! We better surrender and move on to meet the future: .NET

    Regards,
    Horia

    P.S. From where can I download a free (for personal use) .NET stack (SQL Server, IIS, Visual Studio .NET, docs) so I can start learning?
  62. From where can I download a free .NET stack (SQL Server, IIS, Visual Studio .NET, docs) so I can start learning?
    You can get free C# express here:

    http://lab.msdn.microsoft.com/express/vcsharp/default.aspx

    (it works with pgSQL since pgSQL supports Windows connectivity)

    I am not sure how to port it to Mac? Mono?!

    .V

    ps: This is a balanced report IMO. Just because it disagrees with your religion does not make it wrong. There are good things in C#, and for developers, choices are a good thing.
    Linux/Open source will always be used for heavy lifting, and .NET will be used for departmental solutions.
  63. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    Linux/Open source will always be used for heavy lifting, and .NET will be used for departmental solutions.
    Why? Why can't Linux/Open Source be used for that too? In fact, it is better suited.
  64. How about maintenance and integration?[ Go to top ]

    We have many more factors in the life cycle of a product. Does this study cover:
    -Maintenance effort
    -Integration with other systems
    -Deployment to production
    -Testing, etc.

    I think we should include more criteria when studying two technologies; otherwise it will be one company's marketing effort over the other's.
  65. How about maintenance and integration?[ Go to top ]

    We have many more factors in the life cycle of a product. Does this study cover: maintenance effort, integration with other systems, deployment to production, testing, etc.? I think we should include more criteria when studying two technologies; otherwise it will be one company's marketing effort over the other's.
    I saw failover. How about clustering? caching? Adding a new server to the cluster/failover/cache?
  66. IBM WebSphere is the most aggressively marketed application server in J2EE, so I see a good reason why MSFT went after IBM with this study.
    As far as using the report in the J2EE vs. .NET discussion, yes, it would make it into those arguments, as long as IBM is the well-respected Java app server vendor with over one-third of the J2EE market. So I conclude most of you agree that at least 40% of the J2EE world (and most of it in the high-end corporate environment) is using a framework or environment that is inferior to the .NET framework.
  67. .NET Resources[ Go to top ]

    P.S. From where can I download a free (for personal use) .NET stack (SQL Server, IIS, Visual Studio .NET, docs) so I can start learning?

    There are a number of free options available for .NET development, I've listed some below, but you'll find many more on the web.

    Database
    .NET doesn't require a specific database provider, so you are not required to use SQL Server for .NET applications -- you can use MySQL, Access or whatever database you prefer.

    You can download SQL Server 2000 SP3 (MSDE) for free at: http://www.microsoft.com/sql/downloads/2000/sp3.asp

    Web Server
    IIS is available with Windows XP Professional or above. Another option would be to use Cassini - http://www.asp.net/Projects/Cassini/Download/Default.aspx?tabindex=0&tabid=1. There are also several companies that offer free web hosting for ASP.NET applications, like Brinkster - www.brinkster.com.

    Tool
    While Visual Studio .NET 2003 is not available for free (other than a 60-day trial), there are a number of free tools you can use for .NET development.

    C# Builder Personal Edition - http://www.borland.com/products/downloads/download_csharpbuilder.html

    #Develop - http://www.sharpdevelop.com/OpenSource/SD/Default.aspx

    C# add-in for Eclipse - http://www.improve-technologies.com/alpha/esharp/

    C# for emacs - http://www.cybercom.net/~zbrad/DotNet/Emacs/

    You can also test drive the next release of Visual Studio by using Express which is designed for developers learning the .NET Framework - http://lab.msdn.microsoft.com/express/

    Documentation
    You can find the full, free documentation at: http://msdn.microsoft.com/library/
    Or use the .NET Quickstarts to get you started - http://samples.gotdotnet.com/quickstart/
  68. This is overwhelming! We better surrender ... From where can I download a free (for personal use) .NET stack (SQL Server, IIS, Visual Studio .NET, docs) so I can start learning?
    Take it easy. If WS costs a lot more than Windows, it doesn't mean that you have to run off and download .NET stuff and try to learn all that. Just think about the time it would take -- isn't it better to spend it with friends, family or however you prefer?
  69. LOL. Don't worry. I was being kind of sarcastic. But it doesn't hurt to learn new languages and technologies anyway.

    The report however IS overwhelming.

    Regards,
    Horia
  70. Here you can download a .NET stack[ Go to top ]

    http://lab.msdn.microsoft.com/express/vwd/
  71. Service Packs from MS[ Go to top ]

    In general, a Service Pack's objective is to fix bugs.
    Service Pack 1 for .NET v1.1 Broke My ASP.NET App
    http://weblogs.asp.net/pwilson/archive/2004/09/16/230591.aspx
  72. Service Packs from MS[ Go to top ]

    In general, a Service Pack's objective is to fix bugs. Service Pack 1 for .NET v1.1 Broke My ASP.NET App
    I feel your pain. Again, and again, and ...
  73. Your free .NET stack[ Go to top ]

    You can download MSDE (the personal version of SQL Server) at http://www.microsoft.com/sql/msde/downloads/download.asp. But you can use MySQL if you like.

    IIS is part of Windows, so if you have Windows you have it. If you don't have Windows, use Apache on Linux with mod_mono (http://www.mono-project.com).

    You can use SharpDevelop (http://www.icsharpcode.net/OpenSource/SD/) or the .NET Framework SDK (http://www.microsoft.com/downloads/details.aspx?familyid=9b3a2ca6-3647-4070-9f41-a333c6b9181d&displaylang=en); there's everything you need: compilers, samples, docs and a debugger.

    Life is great, isn't it? Always surprising...
  74. Price Comparison is a joke[ Go to top ]

    Windows Server 2003 and costing $19,294. The IBM results are based on a system running WebSphere Network Deployment edition running on Red Hat Linux and costing $253,996.
    This is a load of crap. Compare apples to apples. Also throw in the price of the database. It is obviously not perfect. I do agree, however, that productivity is better with Visual Studio than with most J2EE IDEs. That is one area where the Java community needs to improve.
  75. your comparison is a joke[ Go to top ]

    This thread is about WebSphere against .NET and is invalid for any other combination of tools.

    I can guarantee you that you will never see Microsoft commission a study of .NET tools against Spring, Tomcat, Velocity & iBATIS! :)
  76. your comparison is a joke[ Go to top ]

    This thread is about WebSphere against .NET and is invalid for any other combination of tools. I can guarantee you that you will never see Microsoft commission a study of .NET tools against Spring, Tomcat, Velocity & iBATIS! :)
    And Sal's company probably didn't try them.

    A lot of companies don't look outside things they can pay for. They usually go to IBM or BEA or Oracle and then say, "Gee, Java is expensive." And to that I say, "Choose wisely, Grasshopper."
  77. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    TSS is a great J2EE news portal, but I no longer give any credibility to its studies.
  78. Oliver Martin:
    TSS is a great J2EE news portal, but I no longer give any credibility to its studies.
    Why? Because you do not like the results of the study? :))
    I'm involved in a project using ASP.NET right now, and I must say, I enjoy the environment (VS), the language (C#), the framework (ASP.NET) and the loads of excellent third-party components and tools (ReSharper, Infragistics, Deklarit...).
    Besides, the open source movement is gaining momentum in .NET (NUnit, NAnt, NHibernate, NSpring).

    Aren't you guys pro-choice? I'm happy that I have a choice between two excellent platforms -- Java and .NET.
    And I think such studies are very useful for both worlds, because it is healthy competition. It is what drives Sun and MS to get better.
    Why? Because you do not like the results of the study? :)) I'm involved in a project using ASP.NET right now, and I must say, I enjoy the environment (VS), the language (C#), the framework (ASP.NET) and the loads of excellent third-party components and tools (ReSharper, Infragistics, Deklarit...). Besides, the open source movement is gaining momentum in .NET (NUnit, NAnt, NHibernate, NSpring). Aren't you guys pro-choice?
    The problem isn't so much the results of the study, but that the methodology can be *easily* picked apart.

    The study makes overtures toward the scientific method, and yet it falls flat in one of the first choices: a team of 2 .NET developers vs. a team of 2 J2EE developers. This may have been dictated by budgetary constraints, but it makes it nearly impossible to create a scientifically valid study. Even if the pairs had equivalent credentials, they may have had different ability levels. It was also clear that this particular J2EE team had little, if any, experience with the WebSphere software, whereas the .NET team had plenty of experience with their application server software. Even if J2EE deliverables are mostly portable, the application servers are different, and it doesn't make sense to allow experts on the .NET platform and then turn around and treat WebSphere amateurs as experts on that software.

    This one deficiency may account for much of the difference in results.
  80. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I expected .NET to be better than WebSphere, as WebSphere is one of the worst J2EE implementations.

    My projects are at least 2x more performant than this J2EE setup, so better than .NET. This test is a very good hint that I should not switch to MSFT.
  81. Sun's JVM attempted?[ Go to top ]

    On Page 59 of 109:
    "Sun’s JVM uses the parameter -XX:+UseConcMarkSweepGC to turn on concurrent GC.
    However, when the team tried it, they got an error indicating that the JVM could not start. The reason was that WebSphere 5.1 is installed with IBM’s JVM 1.4.1, which does not recognize this parameter. The team briefly tried having WebSphere use Sun’s JVM instead, but quickly ran into other errors."

    Huh? If you've worked with WebSphere for a while, you will know that WebSphere runs only on IBM JDKs. Besides, this is explicitly made clear in the software prerequisites for WebSphere.

    Why would you even attempt to try running it on Sun's JVM?
  82. Sun's JVM attempted?[ Go to top ]

    On Page 59 of 109:"Sun’s JVM uses the parameter -XX:+UseConcMarkSweepGC to turn on concurrent GC.However, when the team tried it, they got an error indicating that the JVM could not start. The reason was that WebSphere 5.1 is installed with IBM’s JVM 1.4.1, which does not recognize this parameter.The team briefly tried having WebSphere use Sun’s JVM instead, but quickly ran into other errors."Huh? If you've worked with WebSphere for a while, you will know that WebSphere runs only on IBM JDKs. Besides, this is explicitly made clear in the software prerequisites for WebSphere. Why would you even attempt to try running it on Sun's JVM?
    There are a couple of points here: WebSphere only runs on IBM's JVM--so much for 'choice'. Secondly, even seasoned WebSphere/J2EE staff end up chasing red-herrings as a result of all of WebSphere's 'moving parts' (OS/JVM/WebSphere/J2EE/apps), all adding to the overall cost. Why they didn't know that WebSphere only runs on IBM's JVM is of concern.
  83. Sun's JVM attempted?[ Go to top ]

    Huh? If you've worked with WebSphere for a while, you will know that WebSphere runs only on IBM JDKs. Besides, this is explicitly made clear in the software prerequisites for WebSphere.
    Why would you even attempt to try running it on Sun's JVM?

    This is a myth. WebSphere on Solaris uses Sun's JVM.

    -Pallav
  84. Sun's JVM attempted?[ Go to top ]

    This particular project was on Red Hat Linux. The software prerequisites at http://www-306.ibm.com/software/webservers/appserv/doc/v51/prereqs/was_v51.htm require an IBM JDK for Linux on Intel.

    My point is that if you've spent time configuring/tuning WebSphere, you will pretty much find that most of the JVM perf. tuning literature available is for Sun's JVM and hence, is of no use in tuning WebSphere. (other than conceptual knowledge gained).

    This is like following a Honda manual on a Nissan car. The fact that the experts on this project took this step, rather nonchalantly it seems, is odd.
  85. Sun's JVM attempted?[ Go to top ]

    Huh? If you've worked with WebSphere for a while, you will know that WebSphere runs only on IBM JDKs. Besides, this is explicitly made clear in the software prerequisites for WebSphere.



    I just wanted to correct you on the line above which said WebSphere runs only on IBM JDKs. WebSphere does run on the Sun JDK on the Solaris platform. In fact, you will not find an IBM JDK for Solaris.
  86. Re: Sun's JVM attempted?[ Go to top ]

    If you've worked with WebSphere for a while, you will know that WebSphere runs only on IBM JDKs. Besides, this is explicitly made clear in the software prerequisites for WebSphere. Why would you even attempt to try running it on Sun's JVM?
    Isn't Java supposed to be "write-once-run-anywhere"? Why should the VMs be different? Yes, after having used WebSphere, I realize this is true (though on Solaris IBM uses the Sun VM with some extra libraries). Ask BEA how important picking a good VM is (JRockit?)...
  87. Re: Sun's JVM attempted?[ Go to top ]

    That's beside the point. The people working on the project are supposed to be WebSphere experts with years of experience. The fact that WebSphere does not work with non-IBM JVMs is inescapable. Besides, the JVM implementation is where vendors differentiate themselves. In fact, from an SDK perspective, that's the only place where they can differentiate themselves. This does not break the "Write Once, Run Anywhere" proposition.
  88. Re: Sun's JVM attempted?[ Go to top ]

    We did try the Sun JVM, as the report says, and ran into multiple problems with library configurations; we spent a few hours plugging away in an attempt to resolve them but eventually ruled out that line of effort as unproductive.

    Why did we try this when the docs clearly warn against it? The reason is that we wanted to leave no stone unturned. From past experience we know that the speed and frequency of garbage collection is a vital part of maximizing high user load throughput, so despite documentation warnings we thought it worth devoting some experimentation time to swapping JVMs and studying the collection behavior. In previous studies we have ignored those warnings with other app servers and/or versions and reaped *huge* returns in performance.

    Why did we stop? One of the measured constraints in this study was the time spent tuning. We looked at:

    1) The estimated time to successfully finish a non-IBM JVM configuration.

    2) Our best estimation of future potential problems, e.g. what would we need to re-fix when we swapped JDBC drivers, and other libraries.

    3) The best case scenario for performance increase with a non-IBM JVM.

    From these factors we decided that the non-IBM JVM experimentation was worth half a day at most. At the end of that time we moved on to more productive tuning areas, as documented.

    I hope that helps clear up some of the questions.

    All the best,
    Will
  89. Having done limited work with MS tools, I understand that sometimes, especially for easy projects, it can be faster. Many of my colleagues find Java cumbersome, and I sometimes curse at how I could be doing a task much faster in many other programming languages. It's discouraging to study web services under J2EE, only to be sneered at by other coders that claim it's easy as pie in .Net.

    Bottom line, I'm worried that Fortune 500 companies are switching to .Net, and concerned that programmer productivity does not seem to be a big deal for J2EE tool vendors. What should I be doing to be more productive with Java? What tools, frameworks, etc, would you stack up against .Net if you could fund your own study? And are there any groups that are on our side, trying to make Java a better tool for coders?
  90. Having done limited work with MS tools, I understand that sometimes, especially for easy projects, it can be faster. Many of my colleagues find Java cumbersome, and I sometimes curse at how I could be doing a task much faster in many other programming languages. It's discouraging to study web services under J2EE, only to be sneered at by other coders that claim it's easy as pie in .Net.Bottom line, I'm worried that Fortune 500 companies are switching to .Net, and concerned that programmer productivity does not seem to be a big deal for J2EE tool vendors. What should I be doing to be more productive with Java? What tools, frameworks, etc, would you stack up against .Net if you could fund your own study? And are there any groups that are on our side, trying to make Java a better tool for coders?
    Since I work in the financial software industry: the top 10 firms are moving heavily to J2EE + Linux. None of them would dare use .NET on the server side. There are plenty of old VB + WinForms apps moving to .NET, instead of Swing or SWT. Even the major third-party financial software firms are moving to Java, because frankly, trying to get .NET to scale easily and reliably is very hard. When they try to integrate with the big firms, they get laughed out when their Microsoft-centric apps blow up.

    I personally would only use .NET for client-side apps; I wouldn't willingly or happily choose .NET on the server side. For large trading systems that have to scale, .NET sucks. I've spent considerable time researching and investigating TPC benchmarks and white papers from MSDN/TechNet. Once I get down to the real details and try to reproduce the results, the hard data tells me it's fluff.

    That's not to say it's impossible to scale .NET to the same level as J2EE. My experience tells me one would have to write a JMS server, an EJB container, and a transaction server. For small apps, .NET is probably going to be easier for an average programmer or a VB programmer. Just don't expect that app to scale well for transactional processing with 20 or more concurrent transactions.

    I also find this "50% less code" line silly. One of the biggest issues I hear from CTOs isn't how many lines of code, but maximizing code re-use and getting away from disposable code. Many of the Microsoft shops I know spend 80% of their time rewriting and fixing bad code. That translates to a loss in efficiency and customers, because it takes 6-8 months to add a custom extension for a customer. Of course these problems exist everywhere, but my personal experience is that a VB programmer with 5 years' experience knows less than a Java developer with 5 years' experience. I'm sure others will disagree, but that's my experience.
  91. Having done limited work with MS tools, I understand that sometimes, especially for easy projects, it can be faster.
    It simply is faster. Productivity is higher. I have done my own test. Tools make the difference for the most part. If you think Microsoft is done, wait till you see the changes in Visual Studio 2005, which make enterprise team development much easier than it was before.

    Also, configuration, tuning and testing are easier with the Microsoft platform because of limited variables such as OS, web server, etc.

    I wish the J2EE and Java vendors would catch up with Microsoft. I really prefer Java and J2EE, but my company ended up with Microsoft because of the above issues. Cost was an issue, but the Java vendors, especially BEA, were willing to work with us on pricing.
  92. Having done limited work with MS tools, I understand that sometimes, especially for easy projects, it can be faster.
    It simply is faster. Productivity is higher. I have done my own test. Tools make the difference for the most part. If you think Microsoft is done, wait till you see the changes in Visual Studio 2005 which make Enterprise Team Development much easier then it was before.Also, Configuration, Tuning and Testing is easier with the Microsoft platform because of limited variables such as OS, WebServer, Etc.I wish the J2EE and Java vendors would catch up with Microsoft. I really prefer Java and J2EE but My company ended up with Microsoft because of the above issues. Cost was an issue but Java Vendors, especially BEA were willing to work with us on pricing.
    For what you have done, it probably was. I would love to know what tools you tried and what sort of applications you built. And maintained. From my experience, the long haul goes to my (not sure about anyone else's) Java toolset. I use VS.NET too. For non-visual code development, Eclipse makes my life MUCH easier. I've spent days trying to figure out why an ASP.NET app stopped working. Usually it was an OS patch that broke it, but figuring out what changed where was painful. That is where too much integration was a pain. Life is not easy in either world.
  93. Re: Tyler's Note[ Go to top ]

    This research endeavor was designed to be an IBM vs. .NET analysis as opposed to a J2EE vs. .NET comparison. ..... Since the focus was on likely large customer deployment scenarios, choosing IBM (not J2EE) as a focus area was an early decision made -- so other J2EE application servers wasn't considered for this effort.
    Well, if it was designed to be an IBM vs. .NET comparison, why lead this news post with IBM J2EE vs. .NET? I really think the idea here was to get attention for J2EE vs. .NET; otherwise you wouldn't have led the news post the way it is now.

    Also, this leads one to start thinking... TSS is increasingly becoming a Microsoft shop. Or is TSS trying to hand-wave and act as a good citizen to both J2EE and .NET to earn a few extra $$$?

    Anyway, many of these results don't compare apples to apples... they are far from reality, and choosing IBM J2EE against .NET is not fair to the J2EE world. Is TSS a .NET community or a Java one? If so, we should be running tests that are fair to both sides... IBM is not representative of the J2EE world. If it was a paid initiative possibly the results of it should be posted here because it doesn't balance both sides of the coin. It was done out of mutual interest for TSS and Microsoft to make $$$...
  94. my previous post[ Go to top ]

    I meant NOT...missed the word in this para
    If it was a paid initiative possibly the results of it should be posted here because it doesn't balance both sides of the coin. It was done out of mutual interest for TSS and microsoft to make $$$...
    If it was a paid initiative possibly the results of it should NOT be posted here because it doesn't balance both sides of the coin. It was done out of mutual interest for TSS and microsoft to make $$$...
  95. IBM not representative?[ Go to top ]

    ...and choosing IBM J2EE against .NET is not fair to the J2EE world.... ...IBM is not representative of the J2EE world....
    I'm a J2EE guy, but you're killing me. As I understand the latest analysts' reports, IBM is leading the J2EE charge... OS notwithstanding, saying IBM is not representative of the J2EE world is just plain wrong.

    -sam
  96. ot: sun[ Go to top ]

    IBM has the most J2EE market share, as per Netcraft!

    But:
    http://news.com.com/Sun+Weve+turned+over+a+new+leaf/2100-1010_3-5375931.html?tag=nl

    Wow, if I can get AMD + Linux + Java:
    http://www.sun.com/desktop/sysadmin.html
    .V

    ps: Yes, I am MS, it's easy to pass the silly tests.
  97. MS has had a great impact on Java / J2EE[ Go to top ]

    OK -- so this study comparing .NET and WS might not reflect the truth. On the other hand, .NET and C#, with all the hype and marketing, have really helped to improve Java and J2EE -- let me explain:

    With Java 5 we get a lot of great new features -- most of them with massive inspiration from C#, which in turn "stole" a lot of ideas from Java.

    And why did IBM open source Eclipse? Remember the study 2 or 3 years ago that compared .NET and J2EE, and found that Java/J2EE really needed a single IDE that everybody would just write plugins for (as is the case with MS VS)???

    And why do we get a MASSIVE simplification of EJB in v. 3.0?? And why did the participants in the JCP agree to basically just bless Hibernate as a replacement for entity beans??? (NOT because they like the fact that this will give JBoss an advantage compared to the other J2EE servers.)

    Finally -- why do the Java/J2EE vendors still stick together? Why do Sun, BEA, IBM, Oracle and now even JBoss still cooperate??? I'll tell you why -- nothing makes a better alliance than a common enemy, especially if that common enemy is very powerful AND using every trick (even the dirty ones) to endanger the existence of the participants of the opposing alliance.

    Hail MS .NET -- nothing could have a better effect on Java and J2EE!!!!
  98. Agreed with you, Ramus[ Go to top ]

    Agreed with you, Ramus: this .NET vs. J2EE comparison shows the weaknesses of J2EE. Let's find and accept them; only through this can we avoid them.
  99. Mainsoft has a cool product that uses Visual Studio .NET for J2EE development.

    http://www.mainsoft.com/products/vmw_j2ee.html

    It's completely integrated into VS, and even supports some popular 3rd-party .NET components. They are then able to cross-compile to Java bytecode and run on a Mono stack built to run on J2EE. WebSphere, WebLogic and JBoss are all supported. I tried it out with their Petstore sample app, and it was very smooth. Their solution also makes integration with EJBs seamless, with no bridging.

     

    While I agree that the WSAD version did not take advantage of all of the J2EE advantages that WSAD simplifies [ JSF, EJBs ], the .NET features are indeed very easy to use in Visual Studio. The Mainsoft solution lets you leverage it for J2EE nicely. In fact - you don't need two teams with different skill sets to code up a .NET version and a J2EE version.

     

    No doubt Microsoft and IBM will continue to slug out the productivity advantages of one or the other development technologies in these studies. It's nice to have the ability to use both technologies to our advantage, in the meantime...
  100. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I use WebSphere on a daily basis and I can't stand it. I've used other app servers, including WebLogic (when I used to be an instructor for TMC), JBoss, GemStone and a few others here and there, and WebSphere is a perfect example of a product that should have died a long, long time ago. If WebSphere had come from a company other than IBM, it certainly would have. Only IBM's customer base allowed such a piece of junk to stay alive. What is really sad is that MS can use this report as a general bashing of J2EE now that IBM is (depending on which reports you read) the leading vendor, regardless of what the title says. What is even more sad is that WebSphere uses a good number of good open source projects and products to make up WebSphere and still couldn't get it right. For example, WebSphere is fronted with Apache's HTTP server (which is a bit unnecessary if you ask me and only slows down the requests for a JSP app), commons-logging, Axis, Jetspeed, Jasper and a few others.

    Having said all of that, I actually came from a VB background, back in the VB 5-6 days, and I personally wouldn't want to go back. I find all of the 'make Java easier' essays a bunch of bull. I can pick a couple of different tools that will let me build a Java/JSP app with the same relative ease as Visual Studio. Heck, even Dreamweaver, which isn't even a Java IDE, can let me do that.

    I don't think the problem is that Java is difficult; it is that the J2EE vendors try to cram the entire J2EE stack down your throat so you have to buy the high-dollar version of their server, and then have round-table discussions on how to increase developer productivity and compete with Microsoft. What is worse is that this isn't even a fair comparison between J2EE and .NET. It should be between J2EE and .NET/MS Transaction Server/MSMQ (I didn't read the code or the report details, but I doubt the MS solution used these technologies). One thing Microsoft did right was not try to force their customers into using all possible portions of a technology stack, but only what they really needed. I recently built a small app with JSTL/JSP tags (gasp!!!) hitting MySQL and running on Tomcat. Difficult? No. Time consuming? Nope. Multi-tier wonder architecture? Not even close. But it worked, worked well and met my requirements. Normally I would never do that, as I am a fan of domain models and good architecture and good frameworks, but it was a quick, slightly dirty app that did exactly what I wanted. If I had used some IDE's RAD tool, or worse an MDA tool, the amount of junk code that would have gotten spit out would have been a nightmare.

    Maybe when the J2EE vendors stop trying to sell more than what customers need and start actually paying attention to what developers need and want (they say they do, but I highly doubt it) as well as providing easy to use tools, and I mean the app servers here not the IDEs, then maybe their developers' productivity will increase enough so that management will see this increase and keep buying licenses and tools from the vendors.

    Speaking of ease of use, it takes 14+ clicks to deploy an EJB/WAR app on WebSphere. That is just ridiculous. Compare that to dropping an EAR file into a directory for JBoss; and just restarting a web app takes 4+ clicks compared to Tomcat's 1.
  101. +1

    I came from a VB background too. Love Java and all that comes with it. I still do VB6 and also do VS.Net. Eclipse makes my life so much easier developing real applications.
  102. Not a fair comparison?[ Go to top ]

    What is worse, is that there isn't even a fair comparison between J2EE and .Net. It should be between J2EE and .Net/MS Transaction server/MSMQ (I didn't read the code or the report details but I doubt the MS solution used these technologies).
    Robert, no need for doubts. Read the report, because all of those things were in fact included in the study. Actually, you won't find "MS Transaction server" because it hasn't been called that for about 5 years. But look for COM+, and you will find what you suggest.
    Maybe when the J2EE vendors stop trying to sell more than what customers need and start actually paying attention to what developers need and want (they say they do, but I highly doubt it) as well as providing easy to use tools, and I mean the app servers here not the IDEs, then maybe their developers' productivity will increase enough so that management will see this increase and keep buying licenses and tools from the vendors.
    Interesting, again!

    Pray tell, what incentive would IBM have for delivering something simple and easy, like Spring + Hibernate + AXIS?
  103. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I just realized that yeah, it is close to 14 clicks to install an EAR on WAS. Granted, it is more than dropping a file on the filesystem like JBoss. But if that is your biggest complaint about not being able to stand WebSphere, then get real. It takes probably 45 seconds to click those 14 clicks versus 30 seconds to copy a file across the network, compared to the man-years to build an application. I'd like to hear about your bigger beefs.
  104. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I just realized that yeh, it is close to 14 clicks to install an EAR on WAS. Granted, it is more than dropping a file on the filesystem like jboss. But, if that is your largest complaint of not standing websphere then get real. It takes probably 45 seconds to click those 14 clicks while 30 seconds to copy a file across the network compared to the man years to make an application. I'd like to hear about your bigger beefs.
    Compared to the 5 seconds to redeploy in Resin (call it a 40-second difference)... times the 50x per day I do it... times 15 developers = 500 minutes of lost productivity per day = 2500 minutes / week = over 40 hours of lost productivity per week = over 1 person's time. Sounds significant to me.
  105. to be smart - is not to be wrong[ Go to top ]

    Look, we have a more or less clear result and a clear message to
    managers and stakeholders -- MS (.NET) is better than IBM (J2EE).
    I can tell you -- it works! The first TSS study did influence
    our bosses, and we narrowly escaped from .NET under
    "only Oracle" cover.

    Yes, surprise! M$ outsmarted the J2EE community: 4 processors to
    force the expensive tier of WS, and inviting not the best J2EE players
    (as they say, "I have no experience with IBM"). Whatever...
    They pay, they order the music, and they win an honest competition.
    Being a swing champion, don't hope to win the waltz competition...

    OK, let's make it a swing competition: play rock-and-roll with IntelliJ +
    Hibernate/Spring on Resin, ADF/JSF on Oracle-Orion, Eclipse + iBATIS on Tomcat --
    yes, LAMP is fine too. All under heavy-metal transactional load.
    ...Just a second... and for M$ we will play "Smoke on the Water" and
    see Rolf dancing the waltz.

    Alex V.
  106. To be Smart - not wrong for business[ Go to top ]

    Alex,

    Unless you are running a small P.O.S. app like PetShop on your home PC, a real business with a production app will be using a 4-proc box. It's a fair machine for the test.
  107. Compared to the 5 seconds to redeploy in Resin... Times 50x per day I do it... Times 15 developers = 500 minutes lost productivity per day = 2500 minutes / week = over 40 hours of lost productivity per week = over 1 person's time. Sounds significant to me.
    How bad is your app that you and other developers have to change it 750 times per day?

    FWIW, WebSphere does support hot deployment -- just drop new class files or JSPs into the deployment directory.
  108. Continous integration[ Go to top ]

    Maybe because they are trying to do something called "continuous integration":

    http://www.martinfowler.com/articles/continuousIntegration.html
  109. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I just realized that yeh, it is close to 14 clicks to install an EAR on WAS. Granted, it is more than dropping a file on the filesystem like jboss. But, if that is your largest complaint of not standing websphere then get real. It takes probably 45 seconds to click those 14 clicks while 30 seconds to copy a file across the network compared to the man years to make an application. I'd like to hear about your bigger beefs.
    - Their classloaders are a nightmare, with the portal being the worst
    - The resource usage is fairly high. Granted, this isn't really an issue when your production box is big, but in dev it is a real drain. We used to run WebLogic 5.x with only 64 MB. WAS immediately takes about 150 and very quickly jumps to over 300 MB. Even WSAD will take up to 150+.
    - Try getting a remote client to connect via RMI to a WebSphere box. WSAD is no help there, and it won't be too long before you realize you almost have to use IBM's VM on the client (see the sketch after this list).
    - It is constantly not up to date with standards and specs. WAS 5 was a big leap for them, granted, but when you ask Sun to delay the release (so I heard) of the J2EE spec because you can't catch up, something is wrong. Everybody else is always ready or even slightly ahead. In the end it hurts us, since we can't use things that we would like to.
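    To illustrate the remote-client point, here is roughly what such a standalone lookup looks like; the host, port and JNDI name are made up, and the practical catch is that the IBM ORB and naming client jars (in practice, the IBM JVM) need to be on the client for this to work against WebSphere:

    import java.util.Hashtable;
    import javax.ejb.EJBHome;
    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.rmi.PortableRemoteObject;

    public class RemoteLookupClient {

        public static void main(String[] args) throws NamingException {
            Hashtable env = new Hashtable();
            // WebSphere's initial context factory (as documented for WAS 5)
            // and its default bootstrap port, 2809.
            env.put(Context.INITIAL_CONTEXT_FACTORY,
                    "com.ibm.websphere.naming.WsnInitialContextFactory");
            env.put(Context.PROVIDER_URL, "iiop://appserver.example.com:2809");

            Context ctx = new InitialContext(env);
            Object ref = ctx.lookup("ejb/SomeRemoteHome"); // hypothetical JNDI name
            EJBHome home = (EJBHome) PortableRemoteObject.narrow(ref, EJBHome.class);
            System.out.println("Looked up home: " + home);
        }
    }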
  110. Their classloaders are a nightmare, with the portal being the worst- The resource usage is fairly high.
    Why do you use this nightmare? Try PHP.

    BTW: Why do you want to talk about your problems in this forum? This study is not about IBM tools, it is about the true loser. Do you think IBM is the loser?
  111. Speaking on ease of use, it takes 14+ clicks to deploy an EJB/war app on websphere. That is just rediculous. Compare that to dropping an ear file in a directory for JBoss and just restarting a web app takes 4+ clicks compared to Tomcats 1.
    There's such a thing as JACL scripting for WebSphere. It takes just typing "./install.sh" to (re)install our app onto WebSphere (as well as onto JBoss, by the way), including creating all resources such as queue connection factories, destinations, JDBC datasources and so on.

    All deployment descriptors, etc. are generated automatically with XDoclet and/or XSLT. So I just don't get it when people say "WebSphere is complicated". Everything you don't know or don't use properly is complicated... just like removing tonsils via the *******. Invest your time once in a bunch of build/configuration/test/deployment scripts, and a lot of hours/$$$ will be spared.
  112. Speaking on ease of use, it takes 14+ clicks to deploy an EJB/war app on websphere. That is just rediculous. Compare that to dropping an ear file in a directory for JBoss and just restarting a web app takes 4+ clicks compared to Tomcats 1.
    There's such thing as JACL scripting for WebSphere. It takes just to type "./install.sh" to (re)install our app onto WebSphere (as well as onto JBoss, by the way). Including creating all resources such as queue connection factories, destinations, JDBC datasources and so on.All deployment descriptors, etc. are generated with XDoclet and/or XSLT automatically. So I just don't get when people say "WebSphere is complicated". Everything you don't know or don't use properly is complicated... just like removing tonsilla via the *******. Invest your time once into a bunch of build/configuration/test/deployment scripts and a lot of hours/$$$s will be spared.
    Ant would work too. There are WebSphere Ant tasks; they come with WSAD.
  113. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    Wonderful [a report from all angles of view].
    Some time ago IBM published another report comparing .NET and WebSphere;
    that report showed a J2EE solution based on WebSphere is cheaper and more effective than
    a .NET-based solution.
    We should wait for IBM's response to this report.
  114. To get these answers and to obtain the TMC final report, the independent auditor’s report, the system specification and the source code for the three implementations, go to:

    I haven't read the report, but I would bet my last dollar that, because Microsoft is paying for it, TMC will find whatever Microsoft wants them to find. We have all seen this happen so many times that it is amazing that Microsoft still bothers to do it.
  115. Hi Joe-

    I can certainly understand the need for skepticism, but let me provide some context on the work performed. I would hope that the community digs through the research to assess the effort instead of assuming that having a sponsor means there is bias. In fact, there are some dimensions of the research where Microsoft does not reflect as well as IBM.

    So, given the obvious scrutiny, why does TMC perform this research? Are we for sale? We are not paid to write what someone else wants. If you dig through the records on middleware-company.com and TSS, there have been a number of changes made to increase trust and credibility to the research and the process:

    - New executive leadership team
    - Research code of conduct published
    - MiddlewareRESEARCH.com created with the purpose of distributing all artifacts with every research item
    - Disclosures moved to page one so that full disclosure can occur
    - Publication of specs and declarations of intended work to the community and third party experts so that they can provide input into the intended work and, in some cases, monitor the work.
    - Ability for vendors to veto publication, but they have no input into what is authored

    We welcome additional input on how to make this research more credible. And, while trust is individual and has to be earned, we are working hard to build a trust basis with the community.

    While our early research efforts had us falling off the proverbial bike, our approach since has been to assess improvement areas, make positive changes and find a way to ride the bike without falling off. We believe that the industry is better served by TMC being a better research organization as opposed to running away when the going gets tough.

    TMC performs this research because the questions are worthwhile. These are questions that the world (and we) would like to have answered -- and in many cases the expense of doing the research properly is significant and couldn't be borne without a sponsor. The alternative would be to not have this data at all, which would leave the community speculating and making conjectures. While this report isn't conclusive, it is data that the world didn't have before. The reports are a wonderful learning experience (the team documented every pitfall and nuance they hit in deploying IBM on a Linux cluster in a production environment) and the data is useful when scrutinized in the context in which it was produced.

    Tyler
  116. So, given the obvious scrutiny, why does TMC perform this research?
    Do you know the meaning of the word "research"? Is it research if TSS fails to set up some tool and claims company or technology A is better than B because TSS is lame or does not know how to use CVS? Do you blame the compiler for your bugs and claim that is research too?
    Learn to develop homepages, try PHP and Java, read some books about fun things like math, and probably you will learn to do research later too -- but try to fix your homepage first.
    I haven't read the report, but I would bet my last dollar that, because Microsoft is paying for it, TMC will find whatever Microsoft wants them to find. We have all seen this happen so many times that it is amazing that Microsoft still bothers to do it.
    Not having read the report, you wouldn't have seen section 1.4, which reads:
      1.4 Does a “sponsored study” always produce results favorable to the sponsor?

      No.

      Our arrangement with sponsors is that we will write only what we believe, and only what we can stand behind, but we allow them the option to prevent us from publishing the study if they feel it would be harmful publicity. We refuse to be influenced by the sponsor in the writing of this report. Sponsorship fees are not contingent upon the results. We make these constraints clear to sponsors up front and urge them to consider the constraints carefully before they commission us to perform a study.
    I haven't read the report, but I would bet my last dollar that, because Microsoft is paying for it, TMC will find whatever Microsoft wants them to find. We have all seen this happen so many times that it is amazing that Microsoft still bothers to do it.
    Not having read the report, you wouldn't have seen section 1.4, which reads:
      1.4 Does a “sponsored study” always produce results favorable to the sponsor?  No.  Our arrangement with sponsors is that we will write only what we believe, and only what we can stand behind, but we allow them the option to prevent us from publishing the study if they feel it would be harmful publicity. We refuse to be influenced by the sponsor in the writing of this report. Sponsorship fees are not contingent upon the results. We make these constraints clear to sponsors up front and urge them to consider the constraints carefully before they commission us to perform a study.
    Are sponsors allowed to veto some parts of a study, or only the full report as a whole? If they can veto just some parts of it and at the same time allow for other parts, it would allow them to filter the bad things, and make up a report that shows only the good things, hiding the bad things from the press. Such a report wouldn't represent the truth.

    Best Regards,
    Henrique Steckelberg
  119. Are sponsors allowed to veto some parts of a study, or only the full report as a whole? If they can veto just some parts of it and at the same time allow for other parts, it would allow them to filter the bad things, and make up a report that shows only the good things, hiding the bad things from the press. Such a report wouldn't represent the truth. Best Regards, Henrique Steckelberg
    Hi Henrique. It is the case that sponsors can only veto the entire report, not sections of it.

    To date, there has been one research endeavor (of about 15 performed) where the vendor chose not to have the report published for consumption.

    Tyler
  120. Are sponsors allowed to veto some parts of a study, or only the full report as a whole? If they can veto just some parts of it and at the same time allow for other parts, it would allow them to filter the bad things, and make up a report that shows only the good things, hiding the bad things from the press. Such a report wouldn't represent the truth.
    Well, it would be helpful if TSS cleared up what their connection with Microsoft is, beyond being business partners... I realize Microsoft sponsors the TheServerSide.NET community site; what else do we not know about?

    I think this piece of information is useful.

    Tyler: remember that when you hold the camera yourself, you are providing biased news depending on where you point the camera. In this case the camera is sponsored by Microsoft.

    I can see TSS increasingly wanting to become an analyst-like firm -- a new business venture. GOOD LUCK! that is all I can say...
  121. TSS increasingly wanting to become an analyst-like firm -- a new business venture. GOOD LUCK! that is all I can say... TSS - your enterprise .nOT .JAVA but .NET community...
  122. "OTOH, they say that Websphere/WSAD is a slow, unproductive and painful environnement. We can't really blame TSS, the whole community already knew that for a fact."

    "Websphere (its cumbersome and clumsy nature is pretty much common knowledge"
    Yes, we all know that. So why do you try to defend the hopeless Websphere application server?
    (http://www.theserverside.com/news/thread.tss?thread_id=16610#65952)
    "It is because of "products" like websphere that gives the whole industry a bad reputation and makes consulting a scam rather than a honorable profession."

    "Next time they should test .NET against a real, productive J2EE environnement."

    Yes exactly, for instance a Spring/Tomcat/J2EE stack. That should be this year's test, not WebSphere/.NET; that should have been last year's test.

    Regards
    Rolf Tollerud
  123. Could somebody use Spring/Hibernate, or Spring running in a J2EE server?
    Could somebody use Spring/Hibernate, or Spring running in a J2EE server?
    Wouldn't it be Spring instead of J2EE, not Spring inside of J2EE? There's a book called Better, Faster, Lighter Java by Bruce Tate, the guy who wrote "Bitter EJB", that covers this.
  125. Spring inside web servlet container[ Go to top ]

    I mean deploying Spring inside a web servlet container (such as Tomcat) or a J2EE app server (WebLogic/WebSphere), following the requirement.
    Could somebody use Spring/Hibernate, or Spring running in a J2EE server?
    Wouldn't it be Spring instead of J2EE, not Spring inside of J2EE? There's a book called Better, Faster, Lighter Java by Bruce Tate, the guy who wrote "Bitter EJB", that covers this.
    Dino, I wish many sales representatives (Microsoft, Oracle, IBM, don't ask me for names, I have to protect the guilty) were as informed about the competition as you. Believe me, they are not. Also, even given whom you work for, I believe your opinions are more balanced than others expressed by the Microsoft trolls haunting here (not to mention the Java zealots).


    Cheers.
  127. Either some people here are still missing what J2EE means (mixing it up with EJB), or they are missing what Spring is. J2EE is a bunch of services with a standard API: servlets, JSP, EJB, etc. Spring is an application API; it is NOT an application server. It does NOT substitute for J2EE services, it leverages them. So yes, you may run Spring on top of a J2EE server, but being modular, you can also use its APIs in a standalone program, with no J2EE services at all if you don't need them. For example, a simple client-server Swing application accessing a DB through Hibernate could use Spring APIs for that, with no J2EE at all.

    So no, Spring is not the end-all, be-all J2EE substitute. It is a layer that you can put on top of your J2EE stack to make your life easier. If you need caching, distributed transactions, messaging, and other J2EE services, Spring won't give them to you; you'll still have to use a J2EE server or similar open source tools, and add Spring on top if you'd like to.
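
    To make the point concrete, here is a minimal sketch of Spring used completely outside an application server. The OrderDao interface, the "orderDao" bean name and the beans.xml wiring are hypothetical; the only claim is that a plain Java main() can bootstrap a Spring context and use Hibernate-backed beans with no J2EE container anywhere.

    // A minimal sketch, assuming Spring 1.x on the classpath; OrderDao, the
    // "orderDao" bean name and beans.xml are hypothetical application wiring.
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class StandaloneClient {
        public static void main(String[] args) {
            // beans.xml would define a DataSource, a Hibernate SessionFactory and
            // the DAO below -- all configured outside any application server.
            ClassPathXmlApplicationContext ctx =
                    new ClassPathXmlApplicationContext("beans.xml");
            try {
                OrderDao dao = (OrderDao) ctx.getBean("orderDao");
                System.out.println("Open orders: " + dao.findOpenOrders().size());
            } finally {
                ctx.close(); // shuts down the SessionFactory / connection pool
            }
        }
    }

    // Hypothetical application interface; Spring only supplies the configured
    // implementation, it does not replace your code or the J2EE services it may use.
    interface OrderDao {
        java.util.List findOpenOrders();
    }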

    PS: seems like Rolf de la Mancha got himself some professional assistance here on the forum! :)

    Best Regards,
    Henrique Steckelberg
  128. The logic of the TSS members is the same as usual, that is, nonexistent. You cannot at the same time say that the test is invalid or unfair and then also say that they should not have used WebSphere because it is such a piece of crap compared to the other J2EE servers. And besides, why did no one say anything when IBM talked eBay into choosing WebSphere? (Poor souls who have really learned what "to pay" means.)

    The first J2EE/.NET Petstore benchmark test had far-reaching consequences despite Rickard Öberg & Co. Without it we would not have products like iBATIS and Spring, and it was only after that that .NET really picked up traction. TSS is single-handedly changing the world. It will be interesting to see what will happen now.

    So Henrique, are you really going to recommend WebSphere again? Please answer yes or no. I ask you in horror. The resilience of the TSS members to facts and evidence is really amazing.

    PS: seems like Rolf de la Mancha got himself some professional assistance here on the forum! :)

    The comparison breaks down. Don Quixote was unsuccessful.

    Regards
    Rolf Tollerud
  129. Ebay uses IBM Rolf?[ Go to top ]

    Look for yourself. EBay uses a huge MS .DLL. Only the stuff on the periphery, like the preferences/personalization stuff, uses WebSphere. Basically they are tied down by their legacy code, and are only adding new stuff in Java. Haven't replaced the good old eBayISAPI.dll yet...

    http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&category=62053&item=8132148754&rd=1
  130. eBay, in Dantes purgatory..[ Go to top ]

    Charles: "(eBay)Haven't replaced the good old eBayISAPI.dll yet..."

    hi hi. And that's after hiring 100+ expensive top J2EE developers, plus IBM consulting costs, for 3 years. But we all know that IBM consulting comes so cheap that it probably won't matter...

    Charles: ""Stop screwing with generics and think about how to build a decent rich client library, a bug-free JVM, and a decent development environment."

    And don't forget, a decent memory management.

    Interestingly enough, they sell it! :)

    Regards
    Rolf Tollerud
  131. Is eBay really using DLLs?[ Go to top ]

    Charles,
    I think you are wrong about eBay using DLL technology for their mainstream site. If you look at the HTTP headers returned when you access the URL you quoted:
    http://cgi.ebay.com/ws/eBayISAPI.dll?ViewItem&category=62053&item=8132148754&rd=1

    then you can see:

    HTTP/1.1 200 OK
    Server: Microsoft-IIS/5.0
    Date: Wed, 22 Sep 2004 13:43:48 GMT
    Connection: close
    Server: WebSphere Application Server/4.0
    Content-Type: text/html;charset=iso-8859-1
    Set-Cookie: .....etc....

    In case you did not know - WebSphere App Server provides plugins for all major HTTP servers, including IIS, and you can make the URL for a servlet look like anything you want, including ".DLL" or anything else. (I used the TCP Monitoring server provided with WebSphere Studio to obtain the HTTP headers.)
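
    If anyone wants to check this for themselves without WebSphere Studio, a throwaway JDK 1.4+ snippet along these lines will print the same response headers. It is purely illustrative and not part of the study; the URL is just the one quoted above.

    // Illustrative only, JDK 1.4+: prints every response header for the URL
    // quoted above, including both Server values.
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.Iterator;
    import java.util.Map;

    public class HeaderCheck {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://cgi.ebay.com/ws/eBayISAPI.dll"
                    + "?ViewItem&category=62053&item=8132148754&rd=1");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            Map headers = conn.getHeaderFields(); // header name -> list of values
            for (Iterator it = headers.entrySet().iterator(); it.hasNext();) {
                Map.Entry entry = (Map.Entry) it.next();
                System.out.println(entry.getKey() + ": " + entry.getValue());
            }
            conn.disconnect();
        }
    }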

    One of the flaws I found in this study was their use of Edge Server and two instances of IHS. Why didn't they just use one IHS and do WLM via its WAS-provided plugin? That could simplify things a lot and speed them up too. Or they could still use Edge Server but bypass the IHS and route directly to the WAS web container HTTP port (which is just a point of entry for JSPs). That could give them a lot of extra performance and provide good failover.

    Roman
    IBM
    This post does not represent opinions of my employer.
  132. Is eBay really using DLLs?[ Go to top ]

    Hi Roman,

    Any relationship to the Swedish composer Johan Helmich Roman?

    Anyway,

    "One of the flaws I found in this study"
    You could sponsor and monitor tests yourself to ensure against any flaws.

    Can you answer this riddle, mystery, puzzle,

    Why is IBM more shy than a tree nymph, forest faun or a budding young Irish maiden when it comes to tests with MS?

    Regards
    Rolf Tollerud
  133. Is eBay really using DLLs?[ Go to top ]

    Why is IBM more shy than a tree nymph, forest faun or a budding young Irish maiden when it comes to tests with MS? Regards, Rolf Tollerud
    Perhaps it's because companies like IBM and Sun have been prominent players in enterprise-class computing for years. They don't need to prove their track record; the long historical record is there to see.

    It is Microsoft who is trying everything it can to break into the enterprise-class computing space, and it is Microsoft that has to prove its stuff is worthy of the title, not the other way around.
  134. Why IBM is shy?[ Go to top ]

    Can you answer this riddle, mystery, puzzle,
    >> Why is IBM more shy than a treenymph, forest faun
    >> or a budding young irish maiden when it comes to tests with MS?

    Why waste money on bad publicity? IBM customers' success is louder than any marketing studies. Recent example - take a look at the Aug 30, 2004 article about eBay: http://www.eweek.com/article2/0,1759,1640234,00.asp

    Quotes from the article:
    "Q:
    What hardware platform are you on now?

    A:
    We use Sun [Microsystems Inc.] systems, as we did before. We use Hitachi Data Systems [Corp.] storage on Brocade [Communications Systems Inc.] SANs [storage area networks] running Oracle [Corp.] databases and partner with Microsoft for the [Web server] operating system. IBM provides front and middle tiers, and we use WebSphere as the application server running our J2EE code—the stuff that is eBay. The code is also migrated from C++ to Java, for the most part. Eighty percent of the site runs with Java within WebSphere.

    Q:
    Did you look at Microsoft .Net?

    A:
    We did, and we thought that using a J2EE-compliant system gave us a lot more flexibility with respect to the other components in our architecture. .Net was definitely a viable solution, and Microsoft's a wonderful partner. But we wanted the extra flexibility of being able to port to other application servers and other underlying infrastructures." [end of quote]

    I think a superficial reader may not understand all of the details and perhaps deem this study credible (sure, they even disclosed the source - they must be sure of what they've done!), but I hope in reality smart people will not pay much attention to this or similar studies done for/by vendors. The study points out areas that need to be evaluated (development productivity, price/perf, etc.), but all numbers and conclusions are invalid due to multiple flaws (incorrect architectural and design decisions, different skill levels, level of familiarity with the product, prior preparation, etc. - not to mention behind-the-scenes action). You can rest assured that it is quite easy to do a study that will prove exactly the opposite. As long as you know the end result, you can always make a case for it; don't you remember that from your university physics labs?

    >> Any relationship to the Swedish composer Johan Helmich Roman?

    Roman is my first name, plus I'm not very good at music. I'm much better at squash (not food, but sport - http://www.psa-squash.com).

    Roman
    IBM
    This post does not represent opinions of my employer.
  135. Rent a report research[ Go to top ]

    This "research" has nothing to do with J2EE or .NET.

    There is a bigger stake here. IBM is supporting Linux in a big way. This has made MS nervous.

    They have started actions like supporting SCO's lawsuits and running the "Get the Facts" campaign.

    This research is part of the "Get the FUD" campaign. TSS has joined the queue with Enderle, DiDio, Forrester Research, the Yankee Group, etc. This is a shameless grab for money disguised as research.

    If TSS were honest:

    They would answer the following questions:

    1. Why compare WSAD with Visual Studio? What is the value in it for end users?
    2. Why did the study not focus on other options like WebLogic, Tomcat, Eclipse, Linux?
    3. Who set the agenda for this research?
  136. Why the IBM Websphere stack[ Go to top ]

    1. Why compare WSAD with Visual Studio? What is the value in it for end users?
    2. Why did the study not focus on other options like WebLogic, Tomcat, Eclipse, Linux?
    3. Who set the agenda for this research?
    These are good questions.

    The sponsor, Microsoft, chose to compare the Microsoft .NET technology stack to the IBM WebSphere technology stack. We presume that this was because, based on what they were seeing in the marketplace, Microsoft felt it was important and beneficial for them to illustrate how those two particular platforms compare, for competitive marketing purposes.

    That decision having been made, The Middleware Company (with no Microsoft or IBM involvement) conducted the experiment exactly as described in the report, with auditors involved. IBM/Websphere-certified and .NET-certified consultants were hired as needed, as described in the report.

    The report takes explicit care to not generalize the results to all of J2EE, to all situations in which .NET or Websphere may be used, or generalize in any other way. As other readers have noted above, it is simply a very detailed description of what we did and what we found.

    For sponsored research, our arrangement with sponsors is that we will write only what we find, but that sponsors have the right to prevent publication if they feel that publishing those results will be harmful to them. More about this is in the Disclosures section on page 2. As Tyler mentioned above, out of the last fifteen or so sponsored studies we have done, one has been killed by a sponsor. Also, on more than one occasion we have informed a prospective sponsor that their product or technology would not fare well in the experiment we would devise.

    We know that the study is subject to a number of external and internal variables. Clearly sponsors want to illustrate their strengths. We devise and conduct experiments that test these strengths and describe them in great detail so that you can decide whether the experiment was fair. We make these threads available, and link to them from the endeavor page, so that your accolades and criticism alike can be read by all.

    We have declined numerous requests for studies from several sponsors. We conduct studies where the questions are interesting or significant to a large portion of our audience, and bear investigation. The work is not intended to be conclusive, but to add additional data points and perspective.

    For other shocking truths please read the report, including the Disclosures on page 2.

    Salil Deshpande
    The Middleware Company
  137. Why the IBM Websphere stack[ Go to top ]

    For other shocking truths please read the report
    The truth is very clear from this "study": you are too lame to use J2EE and IBM tools, you have failed to implement a homepage with these tools, and you are losers. But there is nothing new; losers always blame their tools, God or the government.
  138. Why the IBM Websphere stack[ Go to top ]

    Salil Deshpande wrote:

    We know that the study is subject to a number of external and internal variables. Clearly sponsors want to illustrate their strengths. We devise and conduct experiments that test these strengths and describe them in great detail so that you can decide whether the experiment was fair.

    >>>>>>>>>>>>>>>>
    Salil, thanks for admitting as much.

    The key step in vendor-funded studies is exactly that the test specifications are designed to suit the vendor. That is most of the battle won.

    Coupled with the fact that you include seemingly incompetent/naive WebSphere developers and go with a WebSphere stack that is overkill for the specification, the battle is more than won.

    TMC may argue that all the biased choices, inconsistencies and follies are listed in the detailed report and hence it is clean. But how many people read those 100+ page reports and the subsequent criticisms of them? Whereas a lot of people read the headlines that MS will circulate claiming .NET is better than IBM WebSphere (and, worse, better than J2EE). MS (and other vendors) know this (and so does TMC, I suppose), and hence MS is prepared to pay tons of money to TMC for the so-called "research".

    In brief, the headlines get much more publicity while the holes in the study are buried in 100+ page reports which very few read.
    Result: MS benefits, TMC benefits, honesty and the credibility of TMC/TSS suffer.

    Question to TMC folks: Do you admit designing the specs to favor Microsoft? Did MS have a role in defining the specs? Why was caching disallowed in the specs? Is all of this fair just because the flaws/holes are documented in a huge report?

    Regards,
    Kalyan
  139. Why the IBM Websphere stack[ Go to top ]

    Question to TMC folks: Do you admit designing the specs to favor Microsoft?
    I'm glad that these questions are surfacing. There is a lot of skepticism around the effort at hand, its design and execution. Once again, these questions would be great to assess and analyze independently (then we could get the community to help us pick products, teams, etc!), but when the choice is between not performing the research at all, leaving the world untouched, and engaging a sponsor to do the research and add a data point for the world to consider (or dismiss), we'll take the latter.

    To answer your specific question: no, a specification is not designed to favor a company's products. However, a company comes to us saying, "We want to do a developer productivity study on areas X, Y, Z because we think we will perform well." And, in the design, we'll incorporate those and add in A, B, and C as well. But certainly, in order for research to be sponsored, the test specification is carefully reviewed by the sponsor. We take the extra precaution, which other firms don't, of posting the specification for review on MiddlewareRESEARCH.com before the work is done and asking third-party experts (individuals) to also comment on the specification. So there is a process, often done over email, where the community gets to have input into what is done (and that was the case with this specification -- there was input from non-Microsoft / non-TMC developers, but it wasn't overwhelming).
    Did MS have a role in defining the specs?
    I think this was addressed above, but yes, they are definitely involved in providing input into the high level areas that the specification should cover. They didn't really care to get involved in the detailed design of the specification. For example, Microsoft might say, "We want to try and craft a reasonable mobility scenario for the application." This is the level of comments that would be exchanged.
    Why was caching disallowed in the specs?
    I'm not sure. Perhaps one of the architects involved will comment on this.
    Is all of this fair just because the flaws/holes are documented in a huge report?
    Well, fair is certainly a term that is subject to interpretation. Our objective is to answer a question that hasn't been answered before. We are merely providing another data point to the industry. There are a number of reasons that this data isn't conclusive (and this thread has pointed out some of those elements), but we feel that it's more fair to provide full disclosure of our entire set of experiences -- allowing the community to come to a fully informed opinion -- as opposed to just releasing the results, hiding the sponsors, and covering up the process.

    Tyler
  140. Why the IBM Websphere stack[ Go to top ]

    Hi Everyone-

    I just wanted to follow up on the comment above asking why data caching was disallowed in the spec. I got this quote straight out of the auditor's report:
    It is important to note that caching of database information or HTML pages was disallowed for all tests, per the application specification. All data returned from the database had to reflect current database information at all times, with one exception. For the Customer Service application, upon login, user detail information was allowed to be stored in session state. Both J2EE and .NET have data caching capabilities, so it is important to note that these capabilities were not used in any tested scenario. .NET assemblies were not allowed to take advantage of the .NET Cache API, and J2EE components had to return live data from the database on each request. Finally, no output caching (HTML page caching) was allowed for any implementation, although all platforms tested do support this capability.
    Tyler
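
    For readers wondering what this ruled out on the J2EE side, the kind of thing the spec prohibited is an ordinary in-process data cache along the lines of the sketch below. It is purely illustrative; the class and the 30-second TTL are invented and not taken from the spec or from either team's code. Under the rules above, every request had to go back to the database instead of being served from something like this:

    // Hypothetical cache, not from the spec or from either implementation.
    import java.util.HashMap;
    import java.util.Map;

    public class ItemCache {
        private static final long TTL_MILLIS = 30 * 1000L; // invented 30-second freshness window
        private final Map cache = new HashMap();            // itemId -> CachedEntry

        public synchronized Object get(String itemId) {
            CachedEntry entry = (CachedEntry) cache.get(itemId);
            if (entry == null || entry.isExpired()) {
                return null; // caller must go to the database
            }
            return entry.value;
        }

        public synchronized void put(String itemId, Object value) {
            cache.put(itemId, new CachedEntry(value));
        }

        private static class CachedEntry {
            final Object value;
            final long loadedAt = System.currentTimeMillis();
            CachedEntry(Object value) { this.value = value; }
            boolean isExpired() {
                return System.currentTimeMillis() - loadedAt > TTL_MILLIS;
            }
        }
    }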
  141. Salil Deshpande wrote:
    For sponsored research, our arrangement with sponsors is that we will write only what we find, but that sponsors have the right to prevent publication if they feel that publishing those results will be harmful to them. More about this is in the Disclosures section on page 2.
    That is the whole point everyone here is making.

    TMC is losing credibility by taking $$$ to make one company, which can afford to pay you, look better than perhaps a company that can't afford to pay you. Do you see where this is going? And all this is being covered under the one umbrella of J2EE vs .NET.

    I think the sponsored research should be such that sponsors cannot prevent publication. Then we will see how many push the button.

    Anyway, it is really sad to see TMC bend down so much just to fill their pockets with cash. Really a step down for TMC. And w.r.t. Microsoft - their tactics haven't changed. No matter what antitrust lawsuits they face, the power of money seems to keep them going. But the honest and the good will survive in the end.

    LIKE THE MOVIE "THE LORD OF THE RINGS". It just takes perseverance, a strong will and faith.

    We shall see how long you guys keep going with this kind of attitude... With this I am removing myself from TMC altogether and am not going to visit TheServerSide.com. This is my contribution to the cause I believe in. You have lost one member. I am sure many more might follow, but that is their decision.

    CHEERS!
  142. Salil Deshpande wrote:
    For sponsored research, our arrangement with sponsors is that we will write only what we find, but that sponsors have the right to prevent publication if they feel that publishing those results will be harmful to them. More about this is in the Disclosures section on page 2.
    That is the whole point everyone here is making. TMC is losing credibility by taking $$$ to make one company, which can afford to pay you, look better than perhaps a company that can't afford to pay you. Do you see where this is going? And all this is being covered under the one umbrella of J2EE vs .NET. I think the sponsored research should be such that sponsors cannot prevent publication. Then we will see how many push the button. Anyway, it is really sad to see TMC bend down so much just to fill their pockets with cash. Really a step down for TMC. And w.r.t. Microsoft - their tactics haven't changed. No matter what antitrust lawsuits they face, the power of money seems to keep them going. But the honest and the good will survive in the end. LIKE THE MOVIE "THE LORD OF THE RINGS". It just takes perseverance, a strong will and faith. We shall see how long you guys keep going with this kind of attitude... With this I am removing myself from TMC altogether and am not going to visit TheServerSide.com. This is my contribution to the cause I believe in. You have lost one member. I am sure many more might follow, but that is their decision. CHEERS!
    HOW DO I REMOVE MYSELF? MY PROFILE DOESN'T LET ME DELETE IT. PLEASE ADVISE....
  143. Yes, very big mistake[ Go to top ]

    419 471 - 418 919 = 552

    552 new members in one day. Ah! Excuse me, if we distract you it is only 551!
    How sad :(

    I can also see that you have participated in only 4 threads and that you share your ip number with 69 others..
  144. Empty vessels make the most noise[ Go to top ]

    Rolf,

    Truly you are an ignorant, irritating tard.

    When presented with a logical and reasoned response (such as from Peter Lin) you fall back on puerile and facile "mouthing off" - you really should read http://www.disinfopedia.org/wiki.phtml?title=Propaganda_techniques
  145. Empty vessels make the most noise[ Go to top ]

    Rolf, truly you are an ignorant, irritating tard. When presented with a logical and reasoned response (such as from Peter Lin) you fall back on puerile and facile "mouthing off" - you really should read http://www.disinfopedia.org/wiki.phtml?title=Propaganda_techniques
    Our friend Rolf Quixote de la Mancha must have read this instead... :)
  146. EJB == The biggest fiasco in IT history[ Go to top ]

    Believe me, a logical and reasoned response is the last thing you want.

    In the last year .NET has overtaken J2EE. Here, for example, are job search statistics from www.it.jobserve.com:

                    .NET    J2EE
    April 2003:      435     561
    May 2003:        454     465
    September 2003:  686     716
    November 2003:   801     834
    January 2004:   1006    1105
    April 2004:     1461    1297
    September 2004: 1676    1369

    Netcraft report ASP.NET Overtakes JSP and Java Servlets
    http://news.netcraft.com/archives/2004/03/23/aspnet_overtakes_jsp_and_java_servlets.html

    And Forrester says that 56% of enterprises use .NET and only 44% use J2EE
    http://www.microsoft.com/windowsserversystem/forresterdotnet.mspx

    Tests and benchmarks (like this one) show .NET coming out on top every time

    Then we have articles like this one,
    Is .Net Stealing Java's Thunder? .Net is fast becoming the developer's platform of choice
    http://www.webservicespipeline.com/23900832

    In a few months .NET 2.0 will be released, the proverbially famous third MS version

    In response to this the Java camp has two defenses,
    1) The first one is Windows security (again! :)
    2) The second is that Java/J2EE still has the lead "in heavy lifting".

    Let us examine this

    1) Security
    "In May this year, 19,208 successful breaches were recorded against Linux based systems, compared to 3,801 against MS Windows based systems"
    http://www.theinquirer.net/?article=9845

    And as Red Hat & Co keep adding more third-party junk to the distribution every day, this trend will only get stronger. The fact remains: today Windows Server 2004 Advanced Edition is a more secure OS than Linux.

    2) "High-Level" systems.
    What is regarded as "High-Level" seems to change all the time: bigger and bigger, faster and faster. On the MS reference site there are hundreds of serious, mission-critical systems, but for some reason they are not regarded as "High-Level" by persons like Peter Lin. Look how he denounces the London Stock Exchange system
    http://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=13911

    But despite searching and asking for over 1.5 years, I have not found a single "High-Level" Java EJB server application, that is, one with more transactions per second.

    Anyway at the very top there is only COBOL/CICS!

    To defend EJB today is pathetique. Imagine EJB/CORBA against SOA/Indigo/Rich-Clients (Browser based or not). Allow me to laugh.

    Best Regards
    Rolf Tollerud
  147. EJB == The biggest fiasco in IT history[ Go to top ]

    Rolf

    You should tell MS about this.
    Why, then, would MS start the "Get the Facts" campaign and
    spend multi-millions on paid research (like this one)?

    If you have read the latest MS SEC filings, they have said Linux is a big threat.
  148. EJB == The biggest fiasco in IT history[ Go to top ]

    "If you have read the latest MS SEC filings, they have said Linux is a big threat"

    And why shouldn't Linux be a big threat? Zero cost. It was called "dumping" in the old days.

    It is a free country. Microsoft has the right to commission a study that it knows it will win. Especially, and all the more so, when, as we all know, it is the victim of a "lynch mob" throughout the world.

    IBM, Sun and BEA can do so also. They all have enough money. In fact they have lots and lots of money. Unfortunately they know they can't win any study whatsoever.
  149. EJB == The biggest fiasco in IT history[ Go to top ]

    It is a free country.
    Uh. Which country? Since we are discussing Microsoft in this context can we assume U.S.A.? If so ... Well, it really isn't "free". One doesn't have the freedom to do whatever they want. Like lie, deceive, give "statistics" or however one wants to define "reports" and "studies".
  150. Low hanging fruit[ Go to top ]

    This is almost too easy...

    1. JobServe Statistics - A single company, reporting data for a single country, where the numbers reported are vacancies (not total # employed by each sector) is not terribly informative.

    2. NetCraft Report - Interesting, but 7 months out of date. Also fails to differentiate between J2EE on the front end (JSP/Servlets) and J2EE on the back end (EJB, JCA etc), so again, not terribly informative

    3. "Tests and Benchmarks (like this one) shows .NET comming out on top every time" - Not a statement of fact as evidenced by the length of this discussion - see previously posted URL as to why this statement is meaningless.

    4. WebService Pipeline Article - unsubstantiated FUD, only makes reference to Forrester study (commissioned by MS).

    5. The Inquirer Article on Security - Apart from the fact that this article is from May 2003, since when did Linux = J2EE?

    6. "The fact remains; today Window Server 2004 Advanced Edition is a more secure OS than Linux" - see previously posted URL as to why this statement is meaningless.

    7. "To defend EJB today is pathetique. Imagine EJB/CORBA against SOA/Indigo/Rich-Clients (Browser based or not)." - J2EE is not just EJB - and I assume you are bringing CORBA into this discussion to try and tar J2EE with the same brush - again I refer you to the previously posted URL.
  151. Low hanging fruit[ Go to top ]

    Your real name must be Pollyanna.

    "The word "Pollyanna" made its way into the dictionary some years after book by Eleanor Porter. Used today in a disparaging sense, referring to someone whose irrepressible optimism fails to take into account the hard facts of the real world".
  152. Very low hanging fruit ;-)[ Go to top ]

    Rolf,
    You seem to have lost your drive somehow.

    Anyway, good luck trying to portray a server operating system whose architecture does not allow for removal of the integrated web browser (let alone other non-essential components) as a serious contender in the enterprise space.

    As long as the official and only full-featured implementation of the .NET platform remains tightly coupled to what I see as an inferior server operating system and an inferior web server, no amount of so-called "studies" or spin-doctoring on the part of Microsoft proponents (paid or otherwise) can change my personal perception of the said platform as inherently inferior.

    But keep on trying. It's fun ;-)

    Oleg
  153. Oleg,

    Well, I am only one of the many who have discovered that the journey is more fun than the arrival.

    When Miguel de Icaza called J2EE academic crap (meaning EJB) he just put into words an almost universal opinion here on TSS and among J2EE developers like Rod Johnson and Juergen Hoeller (the top-selling Expert One-on-One J2EE Development without EJB) or Bruce Tate with the popular Better, Faster, Lighter Java.

    It was more fun when everybody looked upon the J2EE application servers as the best thing since sliced bread, and scores of young, fresh Java proselytes straight out of school found themselves in charge of large J2EE/EJB projects with unlimited funds. And Windows was judged by Windows 98, and everybody thought that MS was going to be split up, and had gathered around the table with knife and fork waiting for their bits..

    Ah, nostalgia! There is hardly anyone anymore who blurts out "Organize inter-service transfers according to use cases from known domain objects into a coarse-grained Composite"!

    What shall I do now? I need another calling! God!

    Regards
    Rolf Tollerud
  154. Oleg, well, I am only one of the many who have discovered that the journey is more fun than the arrival. When Miguel de Icaza called J2EE academic crap (meaning EJB) he just put into words an almost universal opinion here on TSS and among J2EE developers like Rod Johnson and Juergen Hoeller (the top-selling Expert One-on-One J2EE Development without EJB) or Bruce Tate with the popular Better, Faster, Lighter Java. It was more fun when everybody looked upon the J2EE application servers as the best thing since sliced bread, and scores of young, fresh Java proselytes straight out of school found themselves in charge of large J2EE/EJB projects with unlimited funds. And Windows was judged by Windows 98, and everybody thought that MS was going to be split up, and had gathered around the table with knife and fork waiting for their bits.. Ah, nostalgia! There is hardly anyone anymore who blurts out "Organize inter-service transfers according to use cases from known domain objects into a coarse-grained Composite"! What shall I do now? I need another calling! God! Regards, Rolf Tollerud
    So are we to believe something is true because someone writes "A is crap and B is great"? Whatever happened to the scientific method and verifying claims in a controlled, repeatable manner? I think at minimum one would have to at least try to reproduce a set of tests before making public declarations. Blindly deferring to some author's opinions without deep knowledge is a bit foolish.
  155. Apparently you did ;-)[ Go to top ]

    Funnily enough, I also happen to be a huge fan of Rod Johnson and also happen to share his distaste for entity EJBs (if you had read the book you would have known that he believes session EJBs do have their place in enterprise application development). However, all this does not help your assertions about the growing viability of .NET in the enterprise space.

    Better help your Microsoft buddies (or bosses) realize that the sooner they deprecate IIS & IE and embrace competing OSS platforms, Linux first and foremost, the better the chance .NET stands of evolving into something useful beyond rich-UI applications.

    Oleg
  156. Apparently you did ;-)[ Go to top ]

    Oleg,

    "the sooner they deprecate IIS & IE and embrace competing OSS platforms, Linux in the first place, the more .NET stands a chance of evolving into something useful.."

    Not according to the TSS study..

    419733 - 418 919 = 814 new members since yesterday..
  157. My dear poor Rolf,

    How does this number prove or mean anything beyond the fact that the number of accounts in the TSS user database grew by 814 records since yesterday? ;-)))))))))))))))

    Cmon, you can do better, or at least be funnier. Try it again, and try harder.

    Oleg
  158. EJB == The biggest fiasco in IT history[ Go to top ]

    But despite searching and asking for over 1.5 years, I have not found a single "High-Level" Java EJB server application, that is, one with more transactions per second. Anyway at the very top there is only COBOL/CICS! To defend EJB today is pathetique. Imagine EJB/CORBA against SOA/Indigo/Rich-Clients (browser based or not). Allow me to laugh. Best Regards, Rolf Tollerud
    Wow, fantastic reasoning here. So to summarize, you've made your decision based on the fact that no one has given you step-by-step instructions on how to build high-performance, high-scalability transactional systems with EJB, therefore they don't exist. With that kind of attitude, I don't think anyone would hire you to work on server applications. I have 5 years of server-side development experience, but I consider that lightweight compared to the veterans I know.

    If you don't mind me asking, what kind of hardcore server-side experience do you have with distributed transactions for large or global transactional systems? Just to be totally clear, by transactions I mean atomic TPC-C transactions, not bulk inserts for a data-mining setup. And before you claim COM+ is equal to EJB, I would advise you to read a dozen full disclosures for TPC-C SQL Server results. You'll find the results from HP use Tuxedo in the full disclosure. COM+ is just a wrapper for their port of Tuxedo to Windows.

    I don't speak for the industry, so these are purely my observations. You could be right and I could be wrong. People have been claiming mainframes are dying for 30 years. If anything, it looks like mainframes have 100 lives. Maybe after another 15 years of hardcore server-side experience I'll know enough to say without a doubt whether you're right or wrong. Based on my experience so far, what you say is not backed by the real-world facts that I have seen. The best way to win the argument here is with a benchmark and full source disclosure.

    If what you claim is right, I don't think HP would have bothered to use Tuxedo to run their TPC-C benchmarks and use an embedded C component to accelerate inserts. Then again, you could be a super genius and know something HP doesn't. I'm just an average programmer trying to learn as much as I can with a minimal amount of stupidity and prejudice. When I first started doing server-side development I had many of these preconceived notions. After I made the usual mistakes, it became clear to me why these types of systems are built this way. If you have hard numbers proving Indigo matches MQSeries, I'd love to see the numbers and full disclosure. Until someone provides verifiable proof, I'll wait until I've stress-tested it and gotten a deep understanding of how it really works before I make a judgement on Indigo.
  159. My personal observation is that for MOST J2EE-based applications the throughput requirements are not as high as the ones previously mentioned on this forum. Yes, there are applications requiring these really high throughputs, but really not all of them.

    I work for a bank, and over here we still use COBOL programs running in CICS, IMS or IDMS for the high-throughput, transaction-oriented systems. J2EE applications act as front ends for these systems. I think this is the case in many more financial services companies.

    So we are speaking of perhaps the top 5% of all J2EE applications (or something like that) that really need the high throughput. In my experience it is not a good idea to generalize and optimize all applications for a load they will never have to serve in a production situation. From my perspective the really high throughput requirements are in most J2EE (and .NET) cases purely theoretical. For many applications the complexity of EJB is overkill, and one will not really benefit from this complexity.

    That being said, I was intrigued by your TPC-C remarks. I just had a brief look at the top TPC-C spot (the IBM p690 one). Perhaps I am mistaken, but what I see in the design of the test is that the database runs on the giant p690 machine. In front of this machine sit 40 2-way Windows-based IIS servers feeding the load to the database. Doesn't this look at least a little bit like a cluster of application servers to you? From what I understand, the transaction manager used in this test really is COM+.
  160. IBM results[ Go to top ]

    That being said I was triggered by your TPC-C remarks. I just had a brief look at the top level TPC-C spot (the IBM p690 one). Perhaps I am mistaken by what I see in the design of the test is that the database runs on the giant p690 machine. In front of this machine act 40 2-way windows based iis servers serving up the load to the database. Doesn't this at least a little bit look like a cluster of application servers to you? From what I understand the Transaction Manager used in this test really is COM+.
    I spent an hour looking at the full disclosure, and from what I can see the setup is similar to HP's. Your statement is correct at a high level. I pasted the queue portion from the full disclosure below.

    The Delivery transaction was submitted to an ISAPI queue that is separate from the COM+ queue that the other transactions used. This queue is serviced by a variable amount of threads that are separate from the worker threads inside the web server. Web server threads are able to complete the on-line part of the Delivery transaction and immediately return successful queuing responses to the drivers. The threads servicing the queue are responsible for completing the deferred part of the transaction asynchronously.

    Once you get down to page 140, you'll see the actual transaction manager is written in C/C++. COM+ is used as a thread pool and object manager, which seems consistent with how HP uses COM+. Around pages 155-160, you'll see the code for COM+. I suck at C/C++, so I could be totally off. COM+ is doing an important job, but the actual transaction management logic looks like a combination of C and C++. I don't see any classes actually implementing the usual COM+ transaction API, so I'm not sure what that means. I'm not convinced IBM is actually using the default transaction manager from Microsoft. Perhaps someone with C/C++ expertise can take a look and give a better explanation. Enough of my rambling.
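
    For what it's worth, the deferred-Delivery pattern described in that excerpt is easy to picture in Java terms. The sketch below is purely illustrative (the benchmark code itself is C/C++ behind ISAPI/COM+, and the class names here are made up): the submitting thread queues the work and returns immediately, while dedicated worker threads complete the deferred part asynchronously.

    // Illustrative work queue: submit() returns as soon as the job is queued,
    // and daemon worker threads complete the deferred part later.
    import java.util.LinkedList;

    public class DeferredDeliveryQueue {
        private final LinkedList queue = new LinkedList();

        public DeferredDeliveryQueue(int workerThreads) {
            for (int i = 0; i < workerThreads; i++) {
                Thread worker = new Thread(new Runnable() {
                    public void run() {
                        while (true) {
                            Runnable job = take();
                            job.run(); // the deferred part of the transaction
                        }
                    }
                }, "delivery-worker-" + i);
                worker.setDaemon(true);
                worker.start();
            }
        }

        // Called by the request-handling thread: enqueue and return immediately,
        // so a "successful queuing" response can go back to the client right away.
        public synchronized void submit(Runnable deliveryJob) {
            queue.addLast(deliveryJob);
            notifyAll();
        }

        private synchronized Runnable take() {
            while (queue.isEmpty()) {
                try { wait(); } catch (InterruptedException ignored) { }
            }
            return (Runnable) queue.removeFirst();
        }
    }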
  161. Peter!

    How is it that COM+ has the top spot at TPC-C non-clustered and 4 of the top 10 places, while J2EE servers are totally absent from the list? ;)

    http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=noncluster&version=5&currencyID=0

    Just curious
    Rolf Tollerud
  162. P.S.[ Go to top ]

    re verifying in a controlled and repeatable manner..

    Maybe Microsoft is bribing the TPC Council too?
  163. Peter! How is it that COM+ has the top spot at TPC-C non-clustered and 4 of the top 10 places, while J2EE servers are totally absent from the list? ;) http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=noncluster&version=5&currencyID=0 Just curious, Rolf Tollerud
    I'm going to read that response as a joke, since I've read through dozens of TPC results for both J2EE and COM+/SQL Server over the last 4 years. Of the ones I've read, I didn't see HP or IBM actually use Microsoft's transaction manager. If anything, the pattern I see is that COM+ is only used for the thread pool and object pool. Feel free to point me to a specific page of any of the TPC-C results that shows they are using Microsoft's COM+ transaction API and Microsoft's transaction manager. If you're claiming COM+ kicks butt because it's wrapping Tuxedo or IBM's transaction client, then I agree. But I would also give huge credit to Tuxedo, HP and IBM for making COM+ work with proven transaction monitors. If I had to build a high-availability transaction server with Windows, I definitely wouldn't do it without IBM or HP's help. Since I prefer Java, I'd much rather use Tuxedo or IBM's transaction monitor with EJBs.
  164. Peter: "Feel free to point me to a specific page of any of the TPC-C results that shows they are using Microsoft's COM+ transaction API and Microsoft's transaction manager"

    Hmm, maybe someone from Microsoft will enlighten us on that matter. But Tuxedo is not written in Java, to my knowledge, is it? So why do you use Tuxedo as an argument for J2EE/EJB?

    http://www.otpsystems.com/

    Robin Boerdijk
    "A second big advantage of Tuxedo is simplicity. The Tuxedo API is so simple that, in this respect, moving to J2EE is a step backward rather than forward. The Tuxedo API supports client/server, message queuing and publish/subscribe communication in a single API based on a consistent, service oriented architecture. The Tuxedo API is refreshingly free of connections, sessions, factories, pools, and other object oriented artifacts inherent to using J2EE."

    You didn't answer about nonexistent J2EE servers at TPC-C..

    Regards
    Rolf Tollerud
  165. Tuxedo[ Go to top ]

    Robin Boerdijk "A second big advantage of Tuxedo is simplicity. The Tuxedo API is so simple that, in this respect, moving to J2EE is a step backward rather than forward. The Tuxedo API supports client/server, message queuing and publish/subscribe communication in a single API based on a consistent, service oriented architecture. The Tuxedo API is refreshingly free of connections, sessions, factories, pools, and other object oriented artifacts inherent to using J2EE." You didn't answer about nonexistent J2EE servers at TPC-C..RegardsRolf Tollerud
    Tux was originally created at AT&T Labs. It was later extended, which created Tuxedo. It then passed through several hands and ended up with the current owner, BEA. I believe the current Tuxedo is a hybrid Java + C/C++ implementation. As to why no EJB is present in the top 10 entries, I don't have an answer to that. Earlier this year, there were 2 entries using EJB. NEC, IBM and HP continually run new benchmarks to sell their newest/biggest hardware. I'm sure they'll submit EJB results in the future, since they are all in the business of using software to push hardware sales.

    If you actually knew anything about EJB and Tuxedo from first-hand experience, you would know that the simplicity of Tuxedo is also a limitation. CORBA and EJB were created because Tuxedo isn't a good place to stick a ton of regulatory compliance or transaction validation logic. Let's say you're a mutual fund company that specializes in money market funds. The SEC has rules that say you can't go beyond a specific percentage invested in a specific issuer. If the fund has 30 million individual customers and each customer has 30 positions, running regulatory compliance for every single transaction is a real pain, because you're talking about 900 million rows of data you have to aggregate. Doing that with Tuxedo would be insane and silly. Doing that in an EJB is more reasonable, though still very hard. I take TPC results as the absolute maximum performance.

    Real-world performance will always be significantly lower than the TPC max. Again, my experience is in a very focused area, so it's not applicable to other situations. This stuff is very hard to build and even harder to scale. I won't bother going into OLAP and OLTP techniques, since it's obvious you haven't bothered to read TPC full disclosures and spend the time needed to understand the low-level nuts and bolts.
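
    To make the pre-trade compliance example a bit more concrete, here is a heavily condensed, hypothetical sketch of the stateless-session-bean approach being described (EJB 2.x style; home/remote interfaces, the deployment descriptor and the real data access are omitted, and all names and the 5% limit are invented). The point is only that the expensive aggregation lives in a periodically refreshed cache rather than being recomputed over hundreds of millions of rows per trade.

    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    public class ComplianceCheckBean implements SessionBean {

        private static final double MAX_ISSUER_PCT = 0.05; // hypothetical concentration limit

        // Hypothetical helper that holds aggregated fund/issuer exposures and is
        // refreshed every few minutes, as described in the post above.
        private ExposureCache exposureCache;

        public void ejbCreate() {
            exposureCache = ExposureCache.getInstance();
        }

        // Business method exposed through the (omitted) component interface.
        public boolean isTradeAllowed(String fundId, String issuerId, double tradeAmount) {
            double fundTotal    = exposureCache.totalAssets(fundId);
            double issuerAmount = exposureCache.issuerExposure(fundId, issuerId);
            return (issuerAmount + tradeAmount) / fundTotal <= MAX_ISSUER_PCT;
        }

        // Standard EJB 2.x lifecycle callbacks (no-ops for a stateless bean).
        public void setSessionContext(SessionContext ctx) { }
        public void ejbRemove()    { }
        public void ejbActivate()  { }
        public void ejbPassivate() { }
    }

    // Hypothetical cache of aggregated exposures; a real one would reload from the
    // database on a timer or in response to a refresh message.
    class ExposureCache {
        private static final ExposureCache INSTANCE = new ExposureCache();
        static ExposureCache getInstance() { return INSTANCE; }
        double totalAssets(String fundId)                     { return 1000000000d; } // placeholder
        double issuerExposure(String fundId, String issuerId) { return 40000000d; }   // placeholder
    }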
  166. very interesting[ Go to top ]

    Ok, thank you, I have to go. It is surely fascinating to hear from somebody who has actual experience from the "higher echelons". With that I don't mean that "Windows programming against 50,000 APIs" is not equally challenging...

    But I miss my old opponent Cameron Purdy! I have heard that he is playing chess now..Hello Cameron wherever you are.

    Have a nice weekend!

    Regards
    Rolf Tollerud
    Peter! How is it that COM+ has the top spot at TPC-C non-clustered and 4 of the top 10 places, while J2EE servers are totally absent from the list? ;) http://www.tpc.org/tpcc/results/tpcc_perf_results.asp?resulttype=noncluster&version=5&currencyID=0 Just curious, Rolf Tollerud
    In case you missed the finer details: all the systems in the top 10 spots are equivalent to mainframes, with 1TB of RAM. The top two systems both have 1TB of RAM. The other common trick for really high throughput is to handle the transactions in memory using embedded C and write them to disk in a lazy fashion. Of course you knew that.
  168. IBM results[ Go to top ]

    Peter,

    Doing a quick text search through the document, I wasn't able to find any COM+-related transaction call. I think the TM of the DB2 database must be used for the real transaction management (in that case, no distributed transactions?).

    You wrote that in the HP test Tuxedo was used, but to me Tuxedo is not the same as EJB. From my perspective, using Tuxedo as a TM is more or less equivalent to using the JTA API in the Java space.

    To me EJB is a different concept. What are you actually using for your high-end systems: EJB? JTA? Tuxedo? Plain Java? And in what combination? EJB should abstract the transaction API away, and yet you say you are using Tuxedo.

    Gr,
    Frank
  169. IBM results, TPC-C, etc[ Go to top ]

    My my my, we all seem to be feeling our oats today! What a lot of shelling and youting !
    Too much, me thinks.

    But let me jump in for a bit anyway.

    Peter Lin wrote:
    People have been claiming mainframes are dying for 30 years. If anything, it looks like mainframes have 100 lives.
    Ok, look. If you examine IBM's business closely over the past 10 years, there has been a steady transition AWAY from mainframe revenue, both hardware and software. At one point IBM was synonymous with the mainframe. Today IBM is about the same size it was in 1998 (without adjusting for inflation) and its mainframe revenues have shrunk substantially. In fact over 50% of IBM today (revenue-wise) is services, not hardware or software.

    Of the remaining 49+%, IBM does not disclose the actual balance of MF versus non-MF revenue, regularly. At one point 10 years ago they claimed 90/10. This was at a point when IBM services was much smaller, proportionately and in real terms, than it is today. Today the split between MF and non-MF is probably more like 60/40, but we don't know. So no, Mainframes are not dead. IBM still makes money from them. But nobody else does (every other mainframe company has expired or pulled out: Sperry, Burroughs, Hitachi, etc). And consider the huge, Huge, HUGE increase in computing investment in these past 10 years, an explosion in IT investment really, while IBM's mainframe business is actually shrinking in real terms. Now what can you conclude from this? I leave it to you all.

    More from Peter Lin:
    And before you claim COM+ is equal to EJB, I would advise you to read a dozen full disclosures for TPC-C SqlServer results. You'll find the results from HP use tuxedo in the full disclosure. COM+ is just a wrapper for their port of tuxedo to windows.
    Peter, you're not playing fair here. Nobody claimed COM+ is equal to EJB, but since you brought it up, there is not a single EJB usage in all of TPC-C, while the use of COM+ within this benchmark is fairly common, especially at the top of the performance and price/performance lists. The #1 and #3 entries currently use COM+ as the TM.
     Feel free to point me to a specific page of any of the TPC-C results that shows they are using Microsoft's COM+ transaction API and Microsoft's transaction manager.
    If you examine the TPC-C spec, you may be surprised to learn that it is a DATABASE benchmark, primarily, and the role of the transaction manager or transaction monitor (TM) in the benchmark is specified as:

     - request/service prioritization
     - multiplexing/de-multiplexing of requests/services
     - automatic load balancing
     - reception, queuing, and execution of multiple requests/services concurrently

    In fact the TPC-C spec does not require distributed two-phase commit to be performed at the application level, and because 2PC is costly, most entries do not perform 2PC, at least not through application code (some 2PC may be performed by the distinct instances of databases in a clustered configuration, but this is not visible in app code; there is no transaction API employed to do this). Nonetheless the role of the TM is important in optimizing the load on the database.
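
    For readers unfamiliar with what "application-level" transaction demarcation looks like on the J2EE side, here is an illustrative JTA sketch (not something the TPC-C entries do, as noted above). The java:comp/UserTransaction lookup is standard J2EE; the class and helper methods are hypothetical, and whether the commit actually becomes a two-phase commit depends on how many XA resources get enlisted.

    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;

    public class TransferService {
        public void transfer(String fromAccount, String toAccount, double amount) throws Exception {
            InitialContext ctx = new InitialContext();
            UserTransaction utx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
            utx.begin();
            try {
                debit(fromAccount, amount);   // e.g. a JDBC update against database A
                credit(toAccount, amount);    // e.g. a JMS send or an update against database B
                utx.commit();                 // becomes 2PC only if two XA resources were enlisted
            } catch (Exception e) {
                utx.rollback();
                throw e;
            }
        }

        // Hypothetical helpers; real code would use container-managed DataSources.
        private void debit(String account, double amount)  { /* ... */ }
        private void credit(String account, double amount) { /* ... */ }
    }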

    Since you brought up external benchmarks, some more interesting questions, along the lines Rolf has been pursuing, are:
     - Why are there no EJB results in TPC-C?
     - Why would the JCP create SpecJAppServer, which specifically rules out non-Java systems (no PHP, no COM+, nothing except Java), when other suitable technology-neutral benchmarks already exist? (see below)
     - In a recent effort to build a new technology-neutral app server benchmark within SPEC, why did some vendors refuse to include price as a metric in the benchmark result report, resulting in abandonment of the benchmark specification effort?

    There seems to be a trend here. . .
    Tux was originally created at AT&T labs. It was later extended, which created Tuxedo. It then passed through several hands and ended up at the current owner BEA. I believe the current tuxedo is a hybrid Java + C/C++ implementation.
    You can believe what you want, but Tuxedo is not built in Java. It has a Java client binding. Tyler Jewell can confirm this, or anyone else who has actually used the product or formerly worked for BEA.
     As to why no EJB is present in the top 10 entries, I don't have an answer to that. Earlier this year, there were 2 entries using EJB.
    This is incorrect. There has never, ever, ever, ever, ever (etc etc) been an EJB entry in TPC-C. There have been entries labeled "WebSphere", but if you read the full-disclosure reports, you will find that they use "WebSphere App Server Enterprise Edition" which is also known as TXSeries, which was previously known as Encina. This is middleware implemented in C to do the request monitoring and throttling - the TM job in the TPC-C spec. It is not EJB. It is not Java. Recently IBM has returned to calling these things "TXSeries", (example) since they no longer (apparently) market the thing formerly known as "WAS EE".

    I await with great anticipation the first EJB-based TPC-C entry. We all do, I'm sure. But don't bother going to look for it every week. When it happens, there will be HUGE headlines on TSS.COM.

    This is not to say there are no EJB implementations of TPC-C. There are. But none are public. The perf isn't good enough to place in the top 10, top 20 maybe. So the vendors that have these implementations won't publish them.
  170. IBM results, TPC-C, etc[ Go to top ]

    drat! I forgot to sign the above!

    Please make a note, the above was posted by:
    Dino
    who works for Microsoft
  171. IBM results, TPC-C, etc[ Go to top ]

    First off, good response. Any errors in my post are my own fault and laziness.
    If you examine IBM's business closely over the past 10 years, there has been a steady transition AWAY from Mainframe revenue = both hardware and software. At one point IBM was synonymous with Mainframe. Today IBM is about the same size it was in 1998 (without adjusting for inflation) and its mainframe revenues have shrunk substantially.
    Absolutely, IBM's business has changed and mainframes do not have the same position they had 20 years ago. But I don't equate that with dying. Business needs and trends evolve, but I do know one of the biggest financial firms is still purchasing mainframes. That doesn't mean mainframes are coming back either. It is what it is.
    Peter, you're not playing fair here. Nobody claimed COM+ is equal to EJB, but since you brought it up, there is not a single EJB usage in all of TPC-C, while the use of COM+ within this benchmark is fairly common, especially at the top of the performance and price/performance lists. The #1 and #3 entries currently use COM+ as the TM.
    You're right, no one claimed COM+ is equal to EJB in this thread, but I've talked to several .NET architects who claim it is. You're most likely right about the entry earlier this year using WebSphere, since IBM bundles their transaction monitor with it. Personally I haven't used it and haven't had a chance to. What little I know is from reading several full disclosures and trying to glean as much as I can. Like I said, I consider myself a lightweight with only 5 years of heavy server-side experience.
    If you examine the TPC-C spec, you may be surprised to learn that it is a DATABASE benchmark, primarily, and the role of the transaction manager or transaction monitor (TM) in the benchmark is specified as: - request/service prioritization - multiplexing/de multiplexing of requests/services - automatic load balancing - reception, queuing, and execution of multiple requests/services concurrently. In fact the TPC-C spec does not require distributed two-phased commit to be performed at the application level, and because 2PC is costly, most entries do not perform 2PC, at least not through application code (Some 2PC may be performed by the distinct instances of databases in a clustered configuration, but this is not visible in app code. There is no transaction API employed to do this.). Nonetheless the role of the TM is important in optimizing the load on the database.
    Yup, you're absolutely correct: TPC-C is a database benchmark and the TM is not what is being tested, though the TM plays an important role. I think the benchmark specs don't require transaction monitors or any particular method of managing transactions. I can accept your definition of TM, though from a nuts-and-bolts perspective I'm more interested in the actual implementation of how transactions are managed. So from that perspective, the queue mechanism isn't all that crucial. From a fault tolerance and clustering perspective, however, the queue mechanism is very important to ensuring a queued transaction isn't lost. TPC-C also does not address the fault tolerance of the TM, which is a significant requirement in the financial world. In terms of two-phase commit, I believe TPC-C puts it in terms of isolation levels. Over the last 3 years, many of the Oracle results use isolation level 0 or 1. I wish I had the PDF files handy to quote, but I don't.
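    As a small, hedged illustration of where that isolation knob lives in application code, here is a plain JDBC sketch. The WAREHOUSE table and W_TAX column follow the TPC-C schema, but the driver URL and credentials are placeholders; this is not taken from any published kit:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class IsolationDemo {
        public static void main(String[] args) throws Exception {
            // Placeholder URL/credentials; any JDBC driver will do.
            Connection con = DriverManager.getConnection(
                    "jdbc:somedb://dbhost/tpcc", "user", "password");
            con.setAutoCommit(false);
            // The lower ANSI isolation levels (what the full-disclosure reports
            // call isolation 0 or 1) are cheaper; SERIALIZABLE is the strictest.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            PreparedStatement ps = con.prepareStatement(
                    "SELECT w_tax FROM warehouse WHERE w_id = ?");
            ps.setInt(1, 1);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println("warehouse tax: " + rs.getBigDecimal(1));
            }
            rs.close();
            ps.close();
            con.commit();
            con.close();
        }
    }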
    Why are there no EJB results in TPC-C?
    To be blunt, I'm not sure who is right. I do remember reading a couple of TPC-C entries using WebLogic over the last 3 years. You may very well be right and the WebLogic entries could have been using Tuxedo without any EJBs. I do know of current production systems using EJBs that achieve thousands of transactions/second. I have no clue why SPECjAppServer doesn't allow other technologies.
    You can believe what you want, but Tuxedo is not built in Java. It has a Java client binding. Tyler Jewell can confirm this, or anyone else who has actually used the product or formerly worked for BEA.
    My own fault for being unclear. I believe the core of Tuxedo is still the original C/C++. Since I don't work for BEA, I don't know the details. I wish I did, because it would be great fun to read it and glean some insight. I'm only familiar with the client API. I assume BEA's Tuxedo Java client uses JNI to call the native stuff, so there has to be a Java layer in there. I believe BEA provides some additional features through the Java API which aren't part of the original C/C++ Tuxedo. Again, I'm not an expert. If I had access to the full source for the various versions, it would be fun to study it closely and write up a thorough report.
    There has never, ever, ever, ever, ever (etc etc) been an EJB entry in TPC-C. There have been entries labeled "WebSphere", but if you read the full-disclosure reports, you will find that they use "WebSphere App Server Enterprise Edition" which is also known as TXSeries, which was previously known as Encina. This is middleware implemented in C to do the request monitoring and throttling - the TM job in the TPC-C spec. It is not EJB. It is not Java. Recently IBM has returned to calling these things "TXSeries", (example) since they no longer (apparently) market the thing formerly known as "WAS EE".
    You may very well be right, and it's my own fault for being unclear. Unfortunately I don't have any old WebLogic entries saved anywhere, so I can't verify it. Since the whole point of TPC-C is maximizing database throughput, the only sensible use of EJBs would be to reduce read traffic to the database. Trying to make EJB act like Tuxedo within the TPC-C specification would need a solid reason, in my mind.

    A case where I would use EJB to manage transactions is pre-trade compliance. Again, this is a very specific class of problem that isn't well suited to a pure Tuxedo approach, mainly because it requires reading lots of historical data to perform the compliance analysis. It would be horribly wrong to use Tuxedo that way. Doing it with a stateless EJB is a better approach, since much of the data can be cached and only changes every few minutes.
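    As a rough sketch of that kind of bean (EJB 2.x style; the class name, the compliance rule and the refresh interval are invented for illustration, and the home/remote interfaces and deployment descriptor are omitted):

    import java.util.HashMap;
    import java.util.Map;
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    // Hypothetical stateless bean: compliance checks read a cached snapshot of
    // slowly-changing reference data instead of hitting the database on every call.
    public class ComplianceCheckBean implements SessionBean {

        // Cache held per bean instance; refreshed every few minutes.
        private Map positionLimits = new HashMap();
        private long lastRefresh = 0;
        private static final long REFRESH_INTERVAL = 5 * 60 * 1000L;

        public boolean isTradeAllowed(String account, String security, double qty) {
            refreshIfStale();
            Double limit = (Double) positionLimits.get(account + "/" + security);
            // No cached limit means we fall back to a conservative answer.
            return limit != null && qty <= limit.doubleValue();
        }

        private void refreshIfStale() {
            long now = System.currentTimeMillis();
            if (now - lastRefresh > REFRESH_INTERVAL) {
                // A real bean would reload positionLimits from a DataSource here.
                lastRefresh = now;
            }
        }

        // Boilerplate required by the EJB 2.x contract.
        public void ejbCreate() {}
        public void ejbRemove() {}
        public void ejbActivate() {}
        public void ejbPassivate() {}
        public void setSessionContext(SessionContext ctx) {}
    }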

    My preferred approach to scaling transactions is with messaging and distributed transactions. I take responsibility for any of my obtuse remarks, which resulted in misinterpretation. I do see value in EJBs for complex integration environments. It's just one piece of a large puzzle for complex systems. The learning curve is very steep and I've made plenty of mistakes. Just because it's hard doesn't mean it has no value, or that it's some curse. The goal for me is learning from my own and other people's mistakes. If you happen to have an old WebLogic entry handy, please post it and I'll gladly prove myself wrong. Of the real-world deployments of EJB I know of, all of them are at least 10x more complex than the TPC-C specification.

    Thanks for taking the time to correct the errors and mistakes in my rant. If all posts were 1/10th as clear as yours, it would be far easier to carry on useful discussions. As you pointed out, there are numerous ways to build transactional applications. Before Java, J2EE, COM+, C# and .NET were invented, phone companies were handling transactions with C and C++ just fine. I don't take criticism personally, so feel free to point out any more errors in my post. I'm only human.
  172. IBM results, TPC-C, etc[ Go to top ]

    I looked at all the TPC-C results sorted by hardware and it looks like the WebLogic entries were withdrawn. Since they were withdrawn and never verified, I wouldn't give them much credit. So I would say you're right that there currently isn't any EJB submission, but I do remember seeing some entries in the past. I actually look at TPC every month. In my previous response I was a bit unclear: by architects I meant lead architects working on projects with .NET, not members of the CLR or .NET teams.
  173. Yo,

    This comparison is just a bunch of crap like others done by the big boys. The only way to really know is to evaluate it yourself. Period. The sad truth is the time involved. Frankly, a business is trying to make money. If they can hire developers for less doing Java and they can use a ton of free software, they may do that. If MS is cheaper, maybe that is the way to go (doubtful, I am biased towards Java). Even so, I have seen all too often that the majority of companies put people in positions of power who make these decisions without the knowledge or research to determine which is best. So frankly, I don't think this is going to hurt either camp much. For me, as a Java engineer, I wouldn't work for a company that can't employ the proper people to make these decisions. I am not opposed to .NET either; I think in many ways it is very kewl. I would not object to learning C#, although I'd still prefer Java at this point for many reasons that have been mentioned a million times over elsewhere.

    Nonetheless, I am working on a set of open-source tools I hope to offer Java developers soon. One is my plugin engine, nearly complete for a 1.0 release, which makes it very easy to add plugin capabilities to existing applications as well as to build new apps around it. It is non-UI based, highly dynamic and similar to the Eclipse plugin engine, yet different in many ways. On top of it I am building an application framework that should, when completed, provide a number of commonly found application capabilities. Full support for dynamically adding new plugins is provided by the plugin engine (including hot-swap reload at runtime, unload, activation as needed when accessed at runtime, and more). Extending the platform is very simple: choose one of the many extension points or event points available via the supplied plugins, as well as any third-party plugins. Planned for the 1.0 release is a UI framework that provides file open/save selection with a custom, highly configurable file chooser, preferences, configuration save/load for all preferences, a help system (HTML initially, others may follow), and a suite of components such as wizards, custom panels, drag/drop components, buttons and a lot more. The framework will employ a concept similar to Eclipse in that internal windows are "views" (or some other name we may come up with) and can all be dragged/dropped around like toolboxes, dropped into one another to save space and form a tabbed set of windows, and possibly other effects. In the non-UI domain, we'll have plugins supporting web services, email services, FTP services, some sort of security capability (not sure yet), possibly with a role-based system so that levels can use or not use features of the platform, PGP or the like for encryption/decryption capabilities, a "server" system that allows remote connections to control the app, auto-update of plugins with multiple locations to find plugins, and possibly a lot more.

    Basically, the goal is to provide a very robust UI framework for client-side application/desktop development that has a large set of features readily available. The best thing is, like some have said here, you are not forced to ship this large framework. You can pick and choose the plugins you want to use. If your app is very simple, then don't ship the help, preferences, security stack and other things. Need something more robust? Use more plugins. With plugins packaged into a single-file archive format, including embedded jar/zip libraries, they are easy to deploy: there is no need to unzip/install, just drop them into one or more plugin locations and they are automatically loaded or reloaded.
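    For anyone curious what the drop-in mechanism boils down to, here is a minimal sketch of the general idea only - not the actual engine's API - using a plain Plugin interface, one URLClassLoader per archive, and an invented plugin.properties descriptor naming the implementation class:

    import java.io.File;
    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.Properties;

    // Each plugin archive carries its classes plus a plugin.properties file.
    interface Plugin {
        String getName();
        void start();
        void stop();
    }

    public class PluginLoader {
        public static Plugin load(File pluginArchive) throws Exception {
            // Give the archive its own class loader so it can be dropped in/out.
            URLClassLoader loader = new URLClassLoader(
                    new URL[] { pluginArchive.toURL() },
                    PluginLoader.class.getClassLoader());
            Properties descriptor = new Properties();
            descriptor.load(loader.getResourceAsStream("plugin.properties"));
            String className = descriptor.getProperty("plugin.class");
            return (Plugin) loader.loadClass(className).newInstance();
        }

        public static void main(String[] args) throws Exception {
            Plugin p = load(new File(args[0]));
            p.start();
            System.out.println("started plugin: " + p.getName());
            p.stop();
        }
    }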

    The one big holdup is resources. Right now it is just me and one other person working on all this, around our day jobs, family, etc. So time is a big factor that prevents us from getting this out there. There is still a lot of work to do, so I apologize if anyone got their hopes up. I want to get this out within the next year (gasp), but we do need help building plugins for the framework. I know Eclipse has its RCP out there, which is basically what we are providing, but with more capabilities and based on Swing, not SWT. If anyone is interested, please join our mailing list. We are actually about to release our new site at www.platonos.org; right now nothing is there. For the time being our old site, genpluginengine.sourceforge.net, is available, and there is a mailing list there you can join for the next few weeks or so until our new site is up.

    Hope to see some help and email.
  174. correction and clarification[ Go to top ]

    You can believe what you want, but Tuxedo is not built in Java. It has a Java client binding. Tyler Jewell can confirm this, or anyone else who has actually used the product or formerly worked for BEA.
    To clarify my comment on Tuxedo: you're absolutely right, Dino. There's no Java on the server side of Tuxedo. It's only on the client side, which really just wraps CORBA using JNI, and then there's Jolt, which I've never used. Here's the link: http://e-docs.bea.com/tuxedo/tux81/overview/overviea.htm#1109517. Like I said, I'm no expert in Tuxedo; I've only done prototyping for research and read through full disclosures. My posts weren't intended as trolling or shilling, just my limited perspective. For myself, having a firm grasp of how Tuxedo should and should not be used is more important than having the APIs memorized. It's much easier to quote an API than to understand why a particular approach scales well and under what conditions it wouldn't work.

    I've spent the last two years exploring ways to scale complex financial transactions. In my particular case, neither Tuxedo nor COM+ alone is sufficient, due to some heavy analytics. Using EJB is one option I considered, but I don't have big enough servers to really see what it can do under heavy loads. Instead, I lean towards a messaging approach and use EJB as a smart cache. This lets me get around hardware limitations and makes it a bit easier to plug into existing J2EE systems running EJBs. Doing operations like "as of x date" for analytics is rather difficult because the dataset is large and very OLAP-like.

    Some of these operations can't be done in real time using the raw data, so techniques like summary tables are useful for nightly batch processes. During work hours, other techniques are needed to incrementally calculate the change. In some cases, it may be desirable to propagate the new value back to the database.

    Depending on the data, some of it should just be saved with Tuxedo. Other cases require updating existing processes using JMS, or events if the process resides within the same JVM. Now obviously, one could argue it is feasible to do the same thing with COM+ and MSMQ, but I find JMS + stateless EJB a better fit. In my case, there's a large amount of data that is common to all users, but that data does shift throughout the day. Not all shifts are important, so the incoming data feed is filtered. When a change needs to be propagated, it could easily use a two-phase approach to save the value and then notify the stateless EJB. The trickier parts are related to multiple processes updating a single account.
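    A hedged sketch of that JMS + stateless EJB combination as an EJB 2.x message-driven bean; the message fields and the 0.01 filter threshold are invented for illustration, and the actual JDBC write and cache notification are left as comments:

    import javax.ejb.MessageDrivenBean;
    import javax.ejb.MessageDrivenContext;
    import javax.jms.MapMessage;
    import javax.jms.Message;
    import javax.jms.MessageListener;

    // The container ties the JMS receive and the database work into one
    // transaction, so they commit or roll back together.
    public class PriceUpdateBean implements MessageDrivenBean, MessageListener {

        private MessageDrivenContext ctx;

        public void onMessage(Message message) {
            try {
                MapMessage m = (MapMessage) message;
                String security = m.getString("security");
                double price = m.getDouble("price");
                // Filter out noise: only propagate shifts that matter.
                if (Math.abs(price - lastKnownPrice(security)) < 0.01) {
                    return;
                }
                // 1) persist the new value via a DataSource (omitted here)
                // 2) notify the stateless cache bean so queued orders are re-checked
            } catch (Exception e) {
                // Force the container to roll back and redeliver the message.
                ctx.setRollbackOnly();
            }
        }

        private double lastKnownPrice(String security) {
            return 0.0; // placeholder; a real bean would consult the cache
        }

        public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
        public void ejbCreate() {}
        public void ejbRemove() {}
    }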

    One could easily say, "it's ok to be out of sync" and solve the problem that way. But in some cases, that's not acceptable. If three changes affect a given account, and those changes affect queued and new orders, my bias is towards a stateful approach. In some cases, the business case requires the changes and transactions be considered as a set or multiple sets. Handling these kinds of processes is a bit hairy and rather complex. Someone who is an expert in COM+ would probably disagree with me and say COM+ is easier.

    For me at least, the benefit of EJB is that it allows me to avoid making a transaction when it's not valid. If I didn't have some kind of EJB, the design would always have to insert the transaction into the database, which makes validation even harder. Even worse, if that table has triggers, doing an insert might result in a stored procedure throwing an exception. Given the scenario I just described, how would you tackle it using COM+, Dino?

    I'm curious to hear your thoughts. By the way, the scenario is not artificial. It's rather common for mutual fund companies.
  175. Zombies from Night of The Living Dead[ Go to top ]

    So to sum up, then: the idea that there is some small slice of the market, under COBOL/CICS but over .NET, that would benefit from a "Big Java EJB Application Server" is just a myth. It is what it is.

    The summing up of capabilities of the EJB servers is better done by Alexander Jerusalem:
    "we ridiculed Microsoft for it claiming they didn't understand server side enterprise necessities. Now we have server side applications that behave like 1992 desktop applications."

    http://www.theserverside.com/news/thread.tss?thread_id=24823#116183

    "The Elephant Java EJB Application Servers time is over. Actually they are more like Zombies from Night of The Living Dead, being kept alive long after they should be dead and buried".

    Regards
    Rolf Tollerud
  176. P.S.[ Go to top ]

    I meant "under COBOL/CICS but over .NET or a typical Spring/Tomcat/J2EE solution"
  177. So to sum up, then: the idea that there is some small slice of the market, under COBOL/CICS but over .NET, that would benefit from a "Big Java EJB Application Server" is just a myth. It is what it is.
    The summing up of capabilities of the EJB servers is better done by Alexander Jerusalem:
    "we ridiculed Microsoft for it claiming they didn't understand server side enterprise necessities. Now we have server side applications that behave like 1992 desktop applications."
    http://www.theserverside.com/news/thread.tss?thread_id=24823#116183
    "The Elephant Java EJB Application Servers time is over. Actually they are more like Zombies from Night of The Living Dead, being kept alive long after they should be dead and buried."
    Regards
    Rolf Tollerud
    Yeah, better to back your arguments with lots of this kind of quote; maybe someday someone will fall for that. At least it surely makes up for the lack of experience on the subject.
  178. Why? Alexander Jerusalem's little essay seems accurate, unquestionable, explicit, precise and unambiguous.
    http://www.theserverside.com/news/thread.tss?thread_id=24823#116183

    I have done my share of Java development, but when I saw the direction the Java world was going, that is, EJB embedded in overgrown Application Servers with millions of lines of code, led by Well-meaning Impractical Theoreticians who habitually blurbed "Organize inter-service transfers according to use cases from known domain objects into a coarse-grained Composite", my experience told me that our ways had to part.

    "Veritas odium parit: Truth purchaseth anger."
     -Thomas Wilson

    Regards
    Rolf Tollerud
  179. my experience told me that our ways have to part
    I wouldn't call reading blogs and forums "experience", but if you want to call it that way, it's a free world.
  180. you can not try everything[ Go to top ]

    You forget that back then, all blogs and forums, plus any other information you could get hold of, were unanimously positive about EJB and the whole enchilada of J2EE Application Servers. So the decision was taken on my personal experience only. But it was easy. I deeply distrust Well-meaning Impractical Theoreticians in all areas of life, not only IT. The only thing that is more dangerous than a Well-meaning Impractical Theoretician is a Well-meaning Impractical Theoretician with a mission.

    It is impossible to read or try everything. If you read every day, without sleeping or eating, all your life, you will just have time to finish the British collections of Sumerian inscriptions. So you have to use discernment.

    Every seasoned developer who is not totally wet behind the ears knows that only one thing will help you survive - KISS, keep it simple, stupid. That so many fell for the Big Elephant Java Servers is incredible, and comparable to the tulip craze of the seventeenth century.

    Regards
    Rolf Tollerud
  181. you can not try everything[ Go to top ]

    Funny how you use big words and etc. and then say keep it simple stupid. Hmmm.
  182. last nail in the coffin for Websphere[ Go to top ]

    "Funny how you use big words and etc"

    What big words? Is there anyone that still thinks that "there is a small slice of the market under COBOL/CICS but over .NET or a typical Spring/Tomcat/J2EE solution that should be beneficial to a "Big Java EJB Application Server"

    after Jamie Schiner's post above?

    Do you?

    Regards
    Rolf Tollerud
  183. last nail in the coffin for Websphere[ Go to top ]

    "Funny how you use big words and etc"What big words? Is there anyone that still thinks that "there is small slice of the market under COBOL/CICS but over .NET or a typical Spring/Tomcat/J2EE solution that should be beneficial to a "Big Java EJB Application Server" after Jamie Schiner post above?Do you?RegardsRolf Tollerud
    Rolf,
      Look at most of your posts (it is usually the 'etc.' and not big words). It is like listening to Dennis Miller. Only not as funny or interesting or intuitive, and usually it doesn't make as much sense. What I am saying is you want yourself and everyone else to code simply (KISS) but you don't want to speak simply. It seems you have a lot of literary knowledge. You might want to take a speech class or two. It will help you know when to use your literary knowledge. And when not to :)

    As for Jamie Schiner - his was mostly cut-and-paste code from an article. Big words are not the same as a ton of words. Yeah, it was long and boring. But I think it did actually have a point. Read the article on the Computerworld website. It is much easier on the eyes.

    Mark
  184. am I the victim of a cruel joke?[ Go to top ]

    To hammer the nail once again, to avoid any misunderstanding: I think Java can be just as Fast, Productive and Stable as anything else. My "crusade" is not against Java but against Big Elephant EJB Application servers that in my eyes are the "biggest idiotic academic crap ever dreamt up by aliens from outer space". So please disregard my bad language and any speech impediment, and only answer the real issue:

    "Do you think that there is a place for Websphere?" Why?

    And thank you for the criticism of my language. I will try to follow your advice and do my best to improve.

    Best regards
    Rolf Tollerud
  185. am I the victim of a cruel joke?[ Go to top ]

    To hammer the nail once again, to avoid any misunderstanding: I think Java can be just as Fast, Productive and Stable as anything else. My "crusade" is not against Java but against Big Elephant EJB Application servers that in my eyes are the "biggest idiotic academic crap ever dreamt up by aliens from outer space". So please disregard my bad language and any speech impediment
    Your language isn't bad. :) Just more obtuse.
     and only answer the real issue:"Do you think that there is a place for Websphere?" Why?
    Ok, will do - I think there is a place in the Java space for WebSphere. Why? Or where? Not really sure yet. I've not done anything yet that couldn't be done with an OSS app server or less. :) But some clients want the support of a big vendor, don't trust OSS, like their vendor and/or have deep pockets.
    And thank you for the criticism of my language. I will try to follow your advice and do my best to improve.
    Best regards
    Rolf Tollerud
    Feel free to return the favor.
  186. Irony[ Go to top ]

    For someone who keeps asking for proof of scalability: the full source for JBoss, Geronimo, Enhydra, JOnAS, Tomcat, Jetty, SEDA, Haboob, JTA, Hibernate, OpenJMS, ActiveMQ, JORAM, JOTM and Spring is available. All you have to do to get some real-world experience is download 2-3 of them and try them. Then you can read the source to see exactly how each performs and why it does or doesn't perform.

    Is it possible to read the source code for all of them and understand every single line? Probably not. The key to getting a deeper understanding of how to build scalable apps is figuring out which parts are critical for performance and focusing on those. Knowledge comes at a price and not everyone is willing to put in the effort. Personally, I have a hard time keeping track of the various tricks I've learned from Tomcat, SEDA, Haboob, JBoss, SAXPath, Jaxen, Xerces, Xalan, XPP, JiBX, XStream, commons-el and a couple of other OSS packages. Even if I don't use a particular application for a project, the knowledge contained in the source is a wealth of experience.

    Some might say, "get a life" instead of spending all that time reading code, specifications, and peer-review articles. A couple of hours actually using something or implementing a server component will do more than reading hundreds of blogs. Blogs are great fun, but it's no substitute for hands on experience.

    If your desire to learn server-side applications is earnest, Rolf, I will gladly point you to specific parts of SEDA, Tomcat, XPP or any other OSS app I've read.
  187. Webservices works fine for me[ Go to top ]

    No thanks, for the moment I am happy with my Rich-Clients with asynchronous calls to stateless servers with WSE 2.0 (that scale ad infinitum). If something better comes up, please notify me!

    Regards
    Rolf Tollerud
  188. Webservices works fine for me[ Go to top ]

    No thanks, for the moment I am happy with my Rich-Clients with asynchronous calls to stateless servers with WSE 2.0 (that scale ad infinitum). If something better comes up, please notify me!
    Regards
    Rolf Tollerud
    Web Services? So speed isn't important?
  189. Webservices works fine for me[ Go to top ]

    No thanks, for the moment I am happy with my Rich-Clients with asynchronous calls to stateless servers with WSE 2.0 (that scale ad infinitum). If something better comes up, please notify me!
    Regards
    Rolf Tollerud
    That is quite a claim. Web services scale "ad infinitum"? Over the last 4 years, I've run probably over 200 benchmarks measuring various XML parsers to see the CPU and memory utilization as the number of concurrent parser processes increases. I've done this on both Java and .NET. Guess what: even on a 2.4GHz P4, the practical limit is 10-15 concurrent parser processes. I've also run a ton of benchmarks using Microsoft's SQLXML with a single table and with 2-3 tables joined together. Guess what: SQLXML is 100x slower than plain old OLE DB. On a nice 4-CPU Dell server, the practical limit for concurrent queries using SQLXML is around 8. At around 6 concurrent SQLXML queries, the memory usage jumps up dramatically and the CPU usage goes over 50%. Depending on the type of queries, the CPU usage could be lower. In some cases, with just 4 concurrent queries, the CPU usage was 100%. These were all select queries.

    Now obviously, one could get around this and use .NET Remoting, which performs much better. But if you're using Remoting, why bother with web service calls or SOAP at all? XML is great for flexibility, but it is terrible performance-wise. I have a performance article I wrote last year for Tomcat which compares the performance of software parsers vs. XML hardware accelerators.

    Unless you've managed to write a magic XML parser that scales 10x better than the default .NET parser, web services do not scale well. The current crop of top XML parsers is reaching its limits, and it's unlikely someone can beat the top 3 parsers by 3x without hardware acceleration.

    I invite you to back up the claim that web services scale "ad infinitum" with some real numbers.
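    For anyone who wants to reproduce this kind of measurement, here is a bare-bones sketch of the approach: a plain SAX parser hammered by N threads. The document, thread count and iteration count are made up, and real results depend heavily on document size and the parser implementation:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.SAXParser;
    import javax.xml.parsers.SAXParserFactory;
    import org.xml.sax.helpers.DefaultHandler;

    public class ParserLoadTest {
        static final byte[] DOC =
                "<order><id>1</id><qty>100</qty><sym>IBM</sym></order>".getBytes();

        public static void main(String[] args) throws Exception {
            int threads = 10;            // concurrent "requests"
            final int iterations = 1000; // parses per thread
            Thread[] workers = new Thread[threads];
            long start = System.currentTimeMillis();
            for (int i = 0; i < threads; i++) {
                workers[i] = new Thread(new Runnable() {
                    public void run() {
                        try {
                            SAXParser parser =
                                    SAXParserFactory.newInstance().newSAXParser();
                            for (int j = 0; j < iterations; j++) {
                                parser.parse(new ByteArrayInputStream(DOC),
                                             new DefaultHandler());
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });
                workers[i].start();
            }
            for (int i = 0; i < threads; i++) workers[i].join();
            long elapsed = Math.max(1, System.currentTimeMillis() - start);
            System.out.println(threads * iterations * 1000L / elapsed + " parses/sec");
        }
    }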
  190. Webservices works fine for me[ Go to top ]

    "Now obviously, one could get around this and use .NET Remoting, which performs much better."
    You say so? I advise you to read Ingo Rammer (author of Advanced .NET Remoting):

    .NET Remoting Use-Cases and Best Practices
    http://www.thinktecture.com/Resources/RemotingFAQ/RemotingUseCases.html

    Microsoft's SQLXML?

    Of course we are not using MS SQLXML; no XML parser is involved, the XML response is constructed within a read-only DataReader loop. The DataReader is the fastest method to get data in the .NET world. The only XML parsing is done at the client.

    By saying that the application "scales ad infinitum" I mean that when all state is maintained by the client and there are no container-based server sessions whatsoever, adding another box to the cluster or "server farm" is only a phone call away.

    Regards
    Rolf Tollerud
    ("performance is our business")
  191. Webservices works fine for me[ Go to top ]

    "Now obviously, one could get around this and use .NET Remoting, which performs much better."
    You say so? I advise you to read Ingo Rammer (author of Advanced .NET Remoting):
    .NET Remoting Use-Cases and Best Practices
    http://www.thinktecture.com/Resources/RemotingFAQ/RemotingUseCases.html
    Microsoft's SQLXML?
    Of course we are not using MS SQLXML; no XML parser is involved, the XML response is constructed within a read-only DataReader loop. The DataReader is the fastest method to get data in the .NET world. The only XML parsing is done at the client.
    By saying that the application "scales ad infinitum" I mean that when all state is maintained by the client and there are no container-based server sessions whatsoever, adding another box to the cluster or "server farm" is only a phone call away.
    Regards
    Rolf Tollerud
    ("performance is our business")
    Thanks for the link. The advice on that page looks like standard RPC tips and tricks. By "use Remoting", I meant throw SOAP out the door completely. If someone were to tell me, "build a server-side web service and make sure it scales to handle 30 concurrent requests/second", the first thing I would do is throw SOAP out. Since you've stated you're doing everything with a heavy client, that would definitely explain your bias. But it would also mean you're not doing distributed transactions and there isn't much shared data that must be replicated across all active sessions. Would that be an accurate summary?

    or am I missing something.
  192. Webservices works fine for me[ Go to top ]

    Peter,

    I have not made any SOA/SOAP distributed transactions, I must admit, but that will change with Indigo. I am not sure what you mean by "shared data that must be replicated across all active sessions"; the server is 100% stateless and there is no session info that needs to be replicated over the cluster "à la Coherence". Every connection is opened, closed immediately and returned to the pool.

    Regards
    Rolf Tollerud
  193. P.S.[ Go to top ]

    30 concurrent requests/second? I think that with a 6-box setup, 3000 is more like it. TMC will confirm that some day.
  194. P.S.[ Go to top ]

    30 concurrent requests/second? I think that with a 6-box setup, 3000 is more like it. TMC will confirm that some day.
    For the sake of friendly debate, let's just calculate how many web service requests that would be per day.

    3000 X 60 X 60 X 24 = 259,200,000

    Since the case you described has the server producing SOAP but not consuming XML, I would have to say 3K shouldn't be any problem for a cluster of 6 servers with 2-4 CPUs each and gigabit Ethernet. That would break down to roughly 500 web service req/sec per server.

    Let's say the requirements change and now your server has to consume XML. With quad-CPU systems and 4GB of RAM, the practical limit for concurrent parser processes would be roughly

    15 processes x 4 cpu = 60 concurrent webservice requests per server

    Since it's XML, chances are the fastest each request will take is around 350ms; the max could be much higher.

    At roughly 350ms per request, each concurrent slot handles about 3 requests per second, so: 60 x 3 = 180 web service requests/sec per server

    These are numbers based on the benchmarks I've performed using a variety of P4 systems ranging from 1.4GHz to 2.6GHz.

    180 req/sec x 6 servers = 1080 req/second for the whole cluster of 6 servers

    On the other hand, if you use Opteron CPUs, the max throughput may be higher, though I don't have an Opteron server at my disposal, so I can't provide any numbers.
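    For what it's worth, the arithmetic above can be checked with a few lines. Note that the post rounds 1/0.35 up to 3 requests per second per slot (giving 180 and 1080), while computing it exactly gives roughly 171 per server and about 1030 for the cluster. All inputs are the assumptions stated above, not measured constants:

    public class CapacityEstimate {
        public static void main(String[] args) {
            int parsersPerCpu = 15;            // practical concurrent parsers per CPU
            int cpusPerServer = 4;
            double secondsPerRequest = 0.350;  // assumed minimum time per request
            int servers = 6;

            int concurrentPerServer = parsersPerCpu * cpusPerServer;          // 60
            double perServer = concurrentPerServer / secondsPerRequest;       // ~171 req/sec
            double cluster = perServer * servers;                             // ~1030 req/sec

            System.out.println("per server : " + perServer + " req/sec");
            System.out.println("cluster    : " + cluster + " req/sec");
            System.out.println("per day    : " + cluster * 86400 + " requests");
        }
    }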

    What I meant by shared data is this. Let's say you're asked to build a trading system and it gets data feeds from the major exchanges and other data providers. The users need up-to-date information and they have to be able to send out buy/sell bid orders. For each order, there may be hundreds of responses, which will all arrive at different times. The user has a GUI to create orders (commonly called a blotter) and the server needs to notify any trader of responses to their order. The server should also route any orders that fit the criteria to one or more users.

    Since the volume of messages is tremendous, it's not feasible to save all messages, especially since many of them are not applicable and constitute noise. This means some kind of memory component has to filter the inbound messages and check them against queued orders. Based on the queued orders, the server needs to send out messages. Only when an order is complete should the system log the transactions in the database. Now here is a big catch. If two people put out a buy bid and a single seller matches the criteria, the first one to complete the transaction gets the shares they want. The second person then has the option of splitting the order and looking elsewhere to complete their purchase.

    sounds easy right :)
  195. Hmm[ Go to top ]

    1080 req/second for the whole cluster of 6 servers
    Ok, I can buy that.

    The trading system, etc etc..

    sounds easy right :)
    Not exactly! :)))

    Let me think a while, I'll be back tomorrow.

    Regards
    Rolf Tollerud
  196. after having thought about it..[ Go to top ]

    I would still use an XML/SOAP service-oriented architecture, but at the server something heavier is needed; I would recommend GigaSpaces. http://www.gigaspaces.com

    "GigaSpaces offers the first grid server for real-time distributed transaction processing. Designed for massive, fluctuating transaction ..."

    Real-time Market Data Distribution and Middleware
    A Case Study from GigaSpaces and Merrill Lynch

    http://www.jini.org/meetings/seventh/Shalom.ML.pdf

    Not that I ever have used this product. :)

    Regards
    Rolf Tollerud
  197. P.S.[ Go to top ]

    The point is, sometimes Java is right, sometimes .NET is right, but WebSphere is never a good choice, IMO.
  198. Rolf,

    What incentive have you received from MS (or some J2EE vendor) to curse WebSphere in such a desperate and hopeless way?
  199. after having thought about it..[ Go to top ]

    I would still use an XML/SOAP service-oriented architecture, but at the server something heavier is needed; I would recommend GigaSpaces. http://www.gigaspaces.com
    "GigaSpaces offers the first grid server for real-time distributed transaction processing. Designed for massive, fluctuating transaction ..."
    Real-time Market Data Distribution and Middleware
    A Case Study from GigaSpaces and Merrill Lynch
    http://www.jini.org/meetings/seventh/Shalom.ML.pdf
    Not that I ever have used this product. :)
    Regards
    Rolf Tollerud
    Yeah, GigaSpaces and JavaSpaces are definitely potential solutions to part of the problem. Since I haven't used GigaSpaces either, I can't say if it is sufficient to handle the case I described. The real situation is actually a bit more complicated than just having the second person split the order or look elsewhere to complete the order.

    In reality, most orders at mutual fund companies are executed in bulk. Once a large order has been completed to the point where the buyers and sellers all agree, it's still not done. Depending on the type of security, there's a 3-day waiting period, which means one or more transactions may have to be cancelled. These kinds of situations are sometimes referred to as "as of" processing. Basically, you have to go back to time T and do a bunch of corrective transactions.

    Based on the excellent article about GigaSpaces, I would have to say it is powerful. I've been researching and experimenting with distributed indexes for the last four years. Since I don't have access to the GigaSpaces source, I would do the next best thing: download JXTA, the Java peer-to-peer framework, and read as much code as I can before my brain explodes. Within the rule engine world, many of these techniques have been explored with varying degrees of success and failure.

    My guess is a distributed index/memory approach would be feasible if a given distributed transaction fits the Single Program, Multiple Data (SPMD) definition. If, on the other hand, there are fairly complex dependencies and the commit threshold depends on the context, it can be more troublesome. In those cases, there has to be some stateful component keeping track of whether or not any given transaction should be rolled back. It doesn't really matter where you put it. I definitely wouldn't even try using PL/SQL for this type of process. Debugging a 50-page stored procedure may take far more time than writing the same thing in Java.

    Doing this type of stuff on the client side would be impractical, unless everyone uses an 8-CPU server with 8GB of RAM.
  200. after having thought about it..[ Go to top ]

    Since I don't have access to gigaspaces source, I would do the next best thing ...
    You can, however, get Javaspaces and other such implementations/variations. Check out Blitz(http://www.dancres.org/blitz/index.html) and JGroups (http://www.jgroups.org). There was another open source Javaspace implementation but he got brought into Gigaspaces.
  201. after having thought about it..[ Go to top ]

    Since I don't have access to gigaspaces source, I would do the next best thing ...
    You can, however, get Javaspaces and other such implementations/variations. Check out Blitz(http://www.dancres.org/blitz/index.html) and JGroups (http://www.jgroups.org). There was another open source Javaspace implementation but he got brought into Gigaspaces.
    thanks for link to blitz. I wasn't aware of it. I am familiar with JGroups. Yet another package to read, now if only I can figure out how to get by with 2 hours of sleep instead of 5, I can learn a bit more.
  202. after having thought about it..[ Go to top ]

    Since I don't have access to gigaspaces source, I would do the next best thing ...
    You can, however, get Javaspaces and other such implementations/variations. Check out Blitz(http://www.dancres.org/blitz/index.html) and JGroups (http://www.jgroups.org). There was another open source Javaspace implementation but he got brought into Gigaspaces.
    thanks for link to blitz. I wasn't aware of it. I am familiar with JGroups. Yet another package to read, now if only I can figure out how to get by with 2 hours of sleep instead of 5, I can learn a bit more.
    Find a buddy or two you can trust to share the load. Kinda like a investment group does. But I get the feeling you're the kind-of-guy who "needs" to do it himself. :)
  203. after having thought about it..[ Go to top ]

    Find a buddy or two you can trust to share the load. Kinda like a investment group does. But I get the feeling you're the kind-of-guy who "needs" to do it himself. :)
    I do have a couple of good friends who "share the load", but I actually enjoy reading code for fun and education. Even though I don't get to use most of the apps in real projects, I find that the tips and tricks picked up from reading the source help make my own code better. It's that, or watch stupid TV. So far the only side effect I see is that I need a new brain or a larger hard drive.
  204. after having thought about it..[ Go to top ]

    Find a buddy or two you can trust to share the load. Kinda like a investment group does. But I get the feeling you're the kind-of-guy who "needs" to do it himself. :)
    I do have a couple of good friends who "share the load", but I actually enjoy reading code for fun and education. Even though I don't get to use most of the apps in real projects, I find that the tips and tricks picked up from reading the source help make my own code better. It's that, or watch stupid TV. So far the only side effect I see is that I need a new brain or a larger hard drive.
    I feel your pain. Or joy. Depends how you look at it. I do this (Java, .Net, VB ... all that goes with it) AND help my wife with our retail gift store.
  205. after having thought about it..[ Go to top ]

    I would still use an XML/SOAP service-oriented architecture, but at the server something heavier is needed; I would recommend GigaSpaces. http://www.gigaspaces.com
    "GigaSpaces offers the first grid server for real-time distributed transaction processing. Designed for massive, fluctuating transaction ..."
    Real-time Market Data Distribution and Middleware
    A Case Study from GigaSpaces and Merrill Lynch
    http://www.jini.org/meetings/seventh/Shalom.ML.pdf
    Not that I ever have used this product. :)
    Regards
    Rolf Tollerud
    Finally, something I can agree with. :) Actually, I didn't get to it yesterday but I was going to post a link on what Orbitz was doing with Jini. Now to find that link on Computerworld.

    Anywho, I've been preaching this concept of "ask the network for a service and get it". Need more "servers" to provide the service? Just add it. The cool thing is that web services can participate. Of course one needs to stop thinking "data" and start thinking "objects". :)
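    A hedged sketch of what "ask the network for a service and get it" looks like with the plain Jini lookup API; the PricingService interface and the lookup host are invented, and a real deployment also needs a codebase and security policy configured:

    import java.rmi.RMISecurityManager;
    import net.jini.core.discovery.LookupLocator;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;

    public class ServiceClient {
        // Invented service interface; the real proxy is whatever a provider registered.
        public interface PricingService {
            double price(String symbol) throws java.rmi.RemoteException;
        }

        public static void main(String[] args) throws Exception {
            System.setSecurityManager(new RMISecurityManager());
            // Unicast discovery of a known lookup service.
            LookupLocator locator = new LookupLocator("jini://lookup-host");
            ServiceRegistrar registrar = locator.getRegistrar();
            // Match any registered service implementing PricingService.
            ServiceTemplate template =
                    new ServiceTemplate(null, new Class[] { PricingService.class }, null);
            PricingService service = (PricingService) registrar.lookup(template);
            System.out.println("IBM: " + service.price("IBM"));
        }
    }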
  206. Awesome[ Go to top ]

    Think Gigaspaces on a 700 computer cluster/Grid! One would feel like God. :)
  207. Awesome[ Go to top ]

    http://www.computerworld.com/softwaretopics/software/appdev/story/0,10801,95663,00.html?from=story%5Fkc
  208. Hmm[ Go to top ]

    Thank you for the link.

    Reading the story (building a service-oriented architecture to get disparate systems to interoperate) and thinking of thousands of nodes interoperating, without human interference, each node with hundreds of boxes in GigaSpaces-like clusters, reminds me of a short story, "Answer," by Fredric Brown.

    "Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore through the universe a dozen pictures of what he was doing.

    He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe--ninety-six billion planets--into the supercircuit that would connect them all into the one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

    Dwar Reyn spoke briefly to the watching and listening trillions. Then, after a moment's silence, he said, "Now, Dwar Ev."

    Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

    Dwar Ev stepped back and drew a deep breath. "The honor of asking the first question is yours, Dwar Reyn."

    "Thank you," said Dwar Reyn. "It shall be a question that no single cybernetics machine has been able to answer."

    He turned to face the machine. "Is there a God?"

    The mighty voice answered without hesitation, without the clicking of a single relay.

    "Yes, now there is a God."

    Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

    A bolt of lightning from the cloudless sky struck him down and fused the switch shut."
  209. Hmm[ Go to top ]

    Good story. Good thing it is Science Fiction. It is, Right? :) right? uh oh ... :(
  210. Hi all,
    I personally think that with distributed computing – and both J2EE and .Net are built to support distribution – there has to be an architectural change to the way we handle data – especially in high-volume, mission-critical situations. As mentioned above we can’t keep throwing metal at the problem. The GigaSpaces product Rolf and Peter mentioned above <full-disclosure> (and keeping in mind that they are not GigaSpaces employees and I am…) </full-disclosure> actually provides an in-memory, highly-distributed data grid and supports BOTH .Net and J2EE (as well as C++ and Web Services) – or any combination thereof. The case study mentioned by Rolf actually demonstrated 2500 MDDL events per second on a Blade-level machine. And anyone familiar with MDDL will appreciate that this is pretty darn fast…. Also, the case study scaled linearly to 60 blades – i.e. the addition of another blade allowed an additional 2500 MDDL events / sec. to be processed, etc. This includes transforming MDDL to object format and back.

    Just to clarify: while the GigaSpaces product is based on JavaSpaces - and we do believe we have the best JavaSpaces implementation out there... - we are an Enterprise Application Grid that provides powerful services such as I described above and many others. In fact we can work with any JavaSpaces engine (Blitz or otherwise) to provide these services.

    Thanks, Rolf, for mentioning us (GigaSpaces) - we really appreciate this.

    Gad Barnea
    GigaSpaces Technologies
  211. after having thought about it.. (GigaSpaces)[ Go to top ]

    I talked to the GigaSpaces a few years back (at least 2). I was impressed with their product and really wanted to use it. Unfortunately I had to deal (and am still dealing with) this problem -
    there has to be an
    architectural change to the way we handle data
    Everywhere I go, I "preach" this. (Hey, I spent 3.5 years working towards becoming one, what-a-ya expect?)
  212. after having thought about it.. (GigaSpaces)[ Go to top ]

    Mark,
    Right on! It never ceases to amaze me how slowly the IT community is "getting it" - that handling data is fundamentally an architectural concern. It also is interesting to note how the various developments in software (and hardware) design over the past few decades have come together to allow this new type of data grid. Next year is going to be the 20th anniversary of David Gelernter's famous publication on the Linda programming language - arguably the mother of all distributed computing. It would be nice to see 2005 as the year when true distributed computing really takes off.

    Cheers - Gad
  213. T+1 and financial protocols[ Go to top ]

    Hi all, I personally think that with distributed computing – and both J2EE and .Net are built to support distribution – there has to be an architectural change to the way we handle data – especially in high-volume, mission-critical situations. As mentioned above we can’t keep throwing metal at the problem.
    I would have to agree 100%. Although I have no experience with JavaSpaces beyond reading the spec, I have been studying grid techniques for the last 4 years in my spare time. The thing that interests me most is distributed indexes, which is why I mentioned JXTA. Beyond just distributed indexes, my research has been on distributing analytics in a way that gets closer to T+1. Within the realm of pre-trade compliance, there are some very difficult problems due to the nature of the analytics.

    If you don't mind a question, Gad: how easy would it be to perform an aggregation of distributed objects within GigaSpaces? By aggregation, I mean some mathematical function like mean, median, sum, weighted sum or duration. The thing I have in mind is this. Say someone wants to be able to run historical analysis of a group of stocks based on GICS categories. In the risk management area of securities, some systems try to do things like, "is the delta of the price greater than X duration." In some cases, I've seen X defined as "last 10 years". Within the same system, it's not uncommon for there to be different definitions of X.

    One of the challenges is the fact that a single security may participate in hundreds of these kinds of analytics. It's not really practical to load all the data into memory or pre-calculate. In many cases, pre-calculating is combinatorial.
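    To make the question concrete, here is what the crudest version of such an aggregation looks like with nothing but the standard JavaSpaces API: drain the matching entries and sum them. The PricePoint entry class is invented, and a real product would offer proper querying instead of destructive takes:

    import net.jini.core.entry.Entry;
    import net.jini.space.JavaSpace;

    public class PricePoint implements Entry {
        public String sector;   // e.g. a GICS category
        public String symbol;
        public Double price;

        public PricePoint() {}  // public no-arg constructor required for entries

        public static double sumForSector(JavaSpace space, String sector) throws Exception {
            PricePoint template = new PricePoint();
            template.sector = sector;   // null fields act as wildcards
            double sum = 0.0;
            while (true) {
                // takeIfExists removes a matching entry, or returns null when none are left.
                PricePoint p = (PricePoint)
                        space.takeIfExists(template, null, JavaSpace.NO_WAIT);
                if (p == null) break;
                sum += p.price.doubleValue();
                // Write the entry back here if other readers still need it.
            }
            return sum;
        }
    }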
  214. T+1 and financial protocols[ Go to top ]

    Hi Peter,
    Excellent questions. BTW - it sounds like you're really interested in cutting-edge grid, and you can always download a free evaluation copy of GigaSpaces from our web site to try out your ideas...

    But back to your question. If I understand you correctly, you would like to be able to run a query-like operation on entries in the space(s) - e.g. the sum of all entries that satisfy certain criteria. It is very easy to do this with GigaSpaces - especially with version 4.0 coming out soon (beta release available on our site). There are several ways to do this, but one of them is really cool: you can actually use standard SQL to query the space(s).

    We have quite a bit of experience working with high-volume, massive trading systems. I'll be happy to take this off-line with you if you want. You can email me at gad at gigaspaces dot com.

    Cheers - Gad
    GigaSpaces Technologies
  215. T+1 and financial protocols[ Go to top ]

    Gad, could you please post the info about querying and high volume in this ( http://www.theserverside.com/news/thread.tss?thread_id=28952 ) thread? It seems someone there thinks it is only possible with RDBMSs.

    Mark
  216. IBM results[ Go to top ]

    Peter,
    Doing a quick text search through the document I wasn't able to find any COM+ related transaction call. I think the TM of the DB2 database must be used for the real transaction management (in that case no distributed transactions?). You wrote that in the HP test Tuxedo was used, but to me Tuxedo is not the same as EJB. From my perspective, using Tuxedo as a TM is more or less equivalent to using the JTA API in the Java space. To me EJB is a different concept. What are you actually using for your high-end systems: EJB? JTA? Tuxedo? Java? And in what relation? EJB should abstract the transaction API away, and still you say you are using Tuxedo.
    Gr,
    Frank
    Totally agree. Tuxedo and EJB are totally different concepts. Outside of distributed transactions or hairy integrations, I try to avoid EJBs altogether. If all I need is high transaction throughput for simple inserts, it makes sense to just use Tuxedo or a messaging approach. I may have tunnel vision from working on compliance-related applications for the last two years. Most likely I do.
  217. EJB == The biggest fiasco in IT history[ Go to top ]

    But despite searching and asking for over 1.5 years, I have not found a single "High-Level" Java EJB Server application, that is, one with more transactions per second. Anyway, at the very top there is only COBOL/CICS! To defend EJB today is pathetique. Imagine EJB/CORBA against SOA/Indigo/Rich-Clients (Browser based or not). Allow me to laugh.
    Best Regards
    Rolf Tollerud
    Wow, fantastic reasoning here. So to summarize, you've made your decision based on the fact that no one has given you step-by-step instructions on how to build high performance, high scalability transactional systems with EJB, therefore they don't exist. With that kind of attitude, I don't think anyone would hire you to work on server applications. I have 5 years of server side development experience, but I consider that lightweight compared to the veterans I know.
    If you don't mind me asking, what kind of hardcore server side experience do you have with distributed transactions for large or global transactional systems? Just to be totally clear, by transactions I mean atomic TPC-C transactions, not bulk inserts for a data mining setup. And before you claim COM+ is equal to EJB, I would advise you to read a dozen full disclosures for TPC-C SQL Server results. You'll find the results from HP use Tuxedo in the full disclosure. COM+ is just a wrapper for their port of Tuxedo to Windows.
    I don't speak for the industry, so these are purely my observations. You could be right and I could be wrong. People have been claiming mainframes are dying for 30 years. If anything, it looks like mainframes have 100 lives. Maybe after another 15 years of hardcore server side experience I'll know enough to say without a doubt you're right or wrong. Based on my experience so far, what you say is not backed by real-world facts that I have seen. The best way to win the argument here is with a benchmark and full source disclosure.
    If what you claim is right, I don't think HP would have bothered to use Tuxedo to run their TPC-C benchmarks and use an embedded C component to accelerate inserts. Then again, you could be a super genius and know something HP doesn't. I'm just an average programmer trying to learn as much as I can with a minimal amount of stupidity and prejudice. When I first started doing server side development I had many of these preconceived notions. After I made the usual mistakes, it became clear to me why these types of systems are built this way. If you have hard numbers proving Indigo matches MQSeries, I'd love to see the numbers and full disclosure. Until someone provides verifiable proof, I'll wait until I've stress tested it and gotten a deep understanding of how it really works before I make a judgement on Indigo.
    If you are trying to have an intelligent chat with Rolf backed by serious technical arguments, you may be expecting too much. He will start talking about fashion, the red party, Area 51, the X-Files etc. when he runs out of answers. And yes, he does not have any experience in server side processing.

    The things people do for 15 minutes of fame...
  218. lies, damn lies, and Anecdotes[ Go to top ]

    In this thread and in many others, there is lots of debate about how much server-side and/or mission-critical app development is done on the .NET/Windows Server platform vs. J2EE + (base OS) platform. Most of the "Answers" are anecdotal. The fact is that any one person cannot really "know" the macro trend unless that person conducts a survey.

    If you buy that, then maybe check this out:
    Consider an independent survey conducted by Forrester in May 2004, not funded by Microsoft or any other vendor as far as I know. This survey (blurb here) showed that 56% of firms today are choosing .NET as the primary development platform as opposed to 44% for J2EE.

    I know what y'all are thinking: "another study sponsored by Microsoft". Well, no. This study was conceived, designed, and executed completely independently by Forrester. We (Microsoft) found out about it when it was released. (ask Nicholas Wilkoff at Forrester to verify this if you want). We were so happy about it, we put a link to the study on the Microsoft web site: http://www.microsoft.com/forrester. "Frankly, we were surprised to see [.Net] as dominant as it was," Nick was quoted as saying. (here).

    But that's not all. Recently BZMedia (publisher of SDTimes Magazine) conducted their own independent survey. Again, this was not prompted by or sponsored by Microsoft. It's part of a series of surveys they have been doing. This one was published September 15, 2004. (click here for the PDF). This survey showed .NET usage rising from 53% to 66% from 2002 to 2004, while J2*E (all Java) rose from 52% to 56%.

    Evans Data conducted a similar survey and found results for .NET almost exactly equivalent to BZMedia's: Today, 52 percent of those polled say they use .NET, and 68 percent of those same respondents said they plan to develop applications using .Net by 2005. Again this was not something MS was involved in.

    For those counting at home, the score is .NET 3, J2EE 0.

    Not satisfied with letting others conduct all the studies, Microsoft separately asked Gartner Customer Research to conduct a usage and adoption study around application platforms being used for mission critical applications within large corporations. You can access it here: http://download.microsoft.com/download/7/6/c/76ca8514-aea5-4114-8820-7ab3d8bd45fb/Gartnermissioncrit.pdf.

    This survey was designed and paid for by Microsoft, and was just recently completed. Gartner Customer Research did all the work and compiled the results. They used a random sampling of US-based organizations (from Dun and Bradstreet listing, above a certain cutoff size). Also the survey sample was large enough to project the results across US-based organizations of similar size (mean size of company surveyed was 25,000 employees, I believe).

    The survey was pretty broad, but one thing it showed was consistent with these other independent studies: the leading platform for mission-critical server-based development is .NET.

    Now, I know what is going to happen. People are going to squawk that Microsoft bought the results. That's pure poppycock. Maybe it's true that if the data didn't show .NET in a leadership position, Microsoft wouldn't publish the results. But that doesn't change the actual result. And it doesn't change the other three independent studies.

    Look, there is just too much data to wave away at this point. The true believers can discard every piece of data they don't like, but it doesn't change the facts.

    I am always interested in more data. If there is something else from a credible source that says something different, let's see it. This would specifically exclude anecdotal evidence or statements that start with "everyone knows that the most used platform is...".

    Bottom line, the point of this post is to say:
     - anecdotal evidence doesn't cut it.
     - studies show both .NET and Java are in wide use for real enterprise development projects, and adoption of both will likely continue to grow.
     - the same studies show that just 30 months after the intro of .NET, there is wide and growing adoption.

    re: this third point, maybe the study we are discussing here points out some of the reasons why: high productivity, great integration between .NET and the core OS, good scalability and reliability, and a cost significantly lower (1/10th) than that of the leading commercial alternative. And maybe most importantly, a focus on Web Services and interop with J2EE and non-Windows systems, which was a big switch for Microsoft.

    There's still lots of work to do! The .NET 2.0 technology, in beta now, will add to the platform significantly. And I'm sure the Java crowd is not standing still either. It's an interesting time to be in enterprise software.


    -Dino
    [Microsoft]
  219. one last thing[ Go to top ]

    I know, I know, you all are tired of my posts. And yeah, I need a life.

    One last thing, though:
    In the end, data and studies from third parties such as the Middleware Company and analyst reports are just data points. They're not decisions. We always recommend organizations perform their own analysis and hands-on evaluations, comparing the technologies on their own merits based on their own criteria for their own circumstances. The Middleware Study, at the least, suggests a framework and methodology for doing just that.

    -Dino
    (still working for Microsoft, and donning NOMEX now)
  220. re: Dino's post - lies, damn lies, etc[ Go to top ]

    Ok, Dino, I'll bite. But if this is the trend, with 60+% of developers, companies, etc. moving to .NET, then how do you explain the larger shift to Linux on the server side, driven by MS's inability to somehow quit making us the beta testers of their much-flawed OS and other products? Before anyone replies "dude, you hate MS", not so. I love a lot of MS products: Xbox, games, even Outlook, as much as I hate the crappy security/protection it provides. MS has a lot to be proud of and I for one would even like to learn C# and .NET. BUT these reports show shifts and growth in .NET and losses for J2EE at a time when so many are fed up and moving completely off of MS to Linux, which is more stable and more secure. I am sorry, I find all of it hard to believe, period. With entire countries such as China, many small countries, and a few big wins for JDS, I can't believe that a large portion of companies/developers are choosing .NET to develop in over J2EE when they are moving to Linux as their back end. Hell, a lot of countries are moving to Linux for their desktops as well. How many software companies are going to rewrite their software products in each platform's best C/C++ version? The only language out there, other than web-specific stuff, that gives you client applications is Java, period.

    In my last post I mentioned my open-source project to bring the Java desktop back; it's catching on a little now, and will more so later. Eclipse RCP is also seeing very solid growth, and while I think it's long overdue, Java is starting to see a band of brothers form in the community on several fronts. The desktop is but one of them and is finally coming back. I am blown away at how many Java Swing jobs are now available. Maybe the desktop is only a small part of the original post, but as I see it, J2EE is far ahead of .NET. In sheer numbers alone, you have tens of companies supporting J2EE implementations versus only one .NET implementation, and .NET runs only on Windows while J2EE runs on Linux, Windows, Solaris, OS X, HP-UX, etc. Watch out for the Windows and Linux 64-bit versions running on Opterons and G5s.

    Ah well, to each their own I guess.
  221. Research Data[ Go to top ]

    Dino

    I think the projections and studies from the "research firms" need to be taken with a grain of salt.

    I personally know of two dedicated MS sites (one a large financial institution and another a huge government department here) that gave up on .Net after trying for more than a year.

    Do you know what they use now? IBM's J2EE
  222. Believe me, logical and reasoned response is the last you want.
    Of course! What else would we expect from someone who's driven by fashion (!!!) instead of technical excellence? :)
    In the last year .NET has overtaken J2EE. Here, for example, are job search statistics from www.it.jobserve.com (.NET / J2EE):
    April 2003: 435 / 561
    May: 454 / 465
    September: 686 / 716
    November: 801 / 834
    January 2004: 1006 / 1105
    April: 1461 / 1297
    September: 1676 / 1369
    Netcraft report, "ASP.NET Overtakes JSP and Java Servlets":
    http://news.netcraft.com/archives/2004/03/23/aspnet_overtakes_jsp_and_java_servlets.html
    And Forrester says that 56% of enterprises use .NET and only 44% use J2EE:
    http://www.microsoft.com/windowsserversystem/forresterdotnet.mspx
    Tests and benchmarks (like this one) show .NET coming out on top every time. Then we have articles like this one, "Is .Net Stealing Java's Thunder? .Net is fast becoming the developer's platform of choice":
    http://www.webservicespipeline.com/23900832
    In a few months .NET 2.0 will be released, the proverbial famous third MS version.
    In response to this the Java camp has two defenses:
    1) The first one is Windows security (again! :)
    2) The second is that Java/J2EE still has the lead "in heavy lifting".
    Let us examine this.
    1) Security. "In May this year, 19,208 successful breaches were recorded against Linux based systems, compared to 3,801 against MS Windows based systems"
    http://www.theinquirer.net/?article=9845
    And as Red Hat & Co keep adding more third-party junk to the distribution every day, this trend will only get stronger. The fact remains; today Windows Server 2003 Advanced Edition is a more secure OS than Linux.
    2) "High-level" systems. What is regarded as "high-level" seems to change all the time, bigger and bigger, faster and faster. On the MS reference site there are hundreds of serious, mission-critical systems, but for some reason they are not regarded as "high-level" by persons like Peter Lin. Look how he denounces the London Stock Exchange system:
    http://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=13911
    But despite searching and asking for over 1.5 years, I have not found a single "high-level" Java EJB server application, that is, one with more transactions per second. Anyway, at the very top there is only COBOL/CICS! To defend EJB today is pathetique. Imagine EJB/CORBA against SOA/Indigo/Rich-Clients (browser based or not). Allow me to laugh.
    Best Regards
    Rolf Tollerud
    Yes, logical and reasoned responses are out of question indeed... :(
  223. "Can somebody use Spring/hibernate or spring running in a J2EE server?"

    Yes you can, but why? Spring can run the necessary J2EE components by itself.
    That would be some competition! Win some, lose some, a photo finish, two first-class professional systems pitted against each other.

    Excitement, verve.

    But it is not going to happen. No one would sponsor it because no one would know what the winner would be.
  224. "Can somebody use Spring/hibernate or spring running in a J2EE server?"Yes you can, but why? Spring can run the necessary J2EE components by itself.That would be some competition! Win some loose some, photo finish, two first class professional systems spent against each other. Excitement, verve.But it is not going to happen. No one would sponsor it because no one would know what the winner would be.
    I think the Spring team needs to do this; it would make a case. I read "J2EE Development without EJB", and it made a pretty good case.
    That would make Spring even more popular!!!
  225. "Spring can run the necessary J2EE components by itself."

    While Spring would certainly have been a more interesting choice, the study would then be "Spring v/s .Net", which is not of much consequence to MS, is it? :-)

    Besides, 2PC seems to have been a requirement here, since they used an XA driver. That's the kind of stuff where the "well-meaning impractical theoreticians" reign. Spring is wise not to bite off more than it can chew.
  226. Spring/Hibernate with J2EE app server[ Go to top ]

    Can somebody use Spring/Hibernate, or Spring running in a J2EE server?
    Hi There,

      Sure, you can do that. You can use Spring to interact with stateless EJBs. Further, you can interact with the DB layer using the Hibernate support.
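    For illustration, here is a minimal sketch of that combination (this is not code from the study; the class, query and JNDI names are made up, and it assumes Spring's Hibernate 3 support package - adjust the package for Hibernate 2):

    // A Spring-wired Hibernate DAO that can run inside a J2EE container.
    import java.util.List;
    import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

    public class WorkOrderDao extends HibernateDaoSupport {

        /** Loads all work orders for a customer through the injected SessionFactory. */
        public List findByCustomer(Long customerId) {
            return getHibernateTemplate().find(
                "from WorkOrder w where w.customerId = ?", customerId);
        }
    }

    /* Corresponding Spring XML wiring (sketch):

       <bean id="dataSource" class="org.springframework.jndi.JndiObjectFactoryBean">
         <property name="jndiName" value="java:comp/env/jdbc/AppDS"/>
       </bean>

       <bean id="sessionFactory"
             class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
         <property name="dataSource" ref="dataSource"/>
         <property name="mappingResources" value="WorkOrder.hbm.xml"/>
       </bean>

       <bean id="workOrderDao" class="WorkOrderDao">
         <property name="sessionFactory" ref="sessionFactory"/>
       </bean>

       A stateless session bean can be exposed to Spring clients in a similar way,
       e.g. via org.springframework.ejb.access.LocalStatelessSessionProxyFactoryBean. */

    The DataSource comes from the container via JNDI, so the same DAO works inside WebSphere, WebLogic, or plain Tomcat with a pooled DataSource.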

    BR,
    ~A
    I am sure that TMC doesn’t believe that the money they received had any influence on their findings. I'm also pretty sure Forrester felt that the money they received didn't influence their infamous M$-purchased J2EE study either. However, I have read the results of many vendor-purchased studies over the years, and I have never seen one in which the purchaser's solution turned out to be inferior to the competing solution that the vendor had chosen to target.

    I have studied research methods, the psychology and sociology of research, as well as the philosophy of science. There are an astonishing number of ways in which researchers inadvertently trip themselves up. In the scientific community this usually leads to findings that make a great initial impression, but which are (usually) quickly discarded after attempts to duplicate their findings fail. This isn’t because the researchers are defective, it is simply how scientific research works; by trial and the elimination of error.

    Performing meaningful research in the IT field is difficult because there are a huge number of variables and the cost associated with controlling for each of those variables is prohibitive.

    To do a study on the relative merits of something as complex as J2EE vs. .Net (with reasonable scientific rigor) there would need to be a number of teams using both toolsets. This is necessary because there can be way too much variability among a small number of teams. Also, because it is very easy for psychological factors to affect the performance of the teams, each team should not only be unaware of who is funding the study, but they shouldn’t even be aware of the real reason for the study.

    Furthermore, the requirements for the systems to be built in the study should not be chosen by anyone with knowledge of the reason for the study. Ideally, because it is possible that the requirements chosen for a single system might inadvertently favor one toolset over another, there should be a number of systems built. The requirements for the various systems should reflect a reasonable spectrum of common project types that would be developed on the platforms to be compared.

    Each team should get a list of requirements that must be met and should use their tools to the best of their ability. If the sample size is large enough the results of such a study will be a set of normal distributions for each platform measured against each of the requirement sets. Even though there will be too much variability to compare individual data points, the set of normal distributions can be meaningfully compared.
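    For concreteness, a minimal sketch (with made-up numbers) of what "comparing the distributions" rather than individual data points could look like; the hypothetical samples are total hours per team, and the comparison uses a two-sample Welch t statistic:

    public class CompareTeams {

        static double mean(double[] xs) {
            double s = 0;
            for (int i = 0; i < xs.length; i++) s += xs[i];
            return s / xs.length;
        }

        static double variance(double[] xs, double m) {
            double s = 0;
            for (int i = 0; i < xs.length; i++) s += (xs[i] - m) * (xs[i] - m);
            return s / (xs.length - 1); // sample variance
        }

        public static void main(String[] args) {
            // Hypothetical total hours per team, several teams per platform as argued above.
            double[] platformA = {180, 195, 210, 175, 205, 190};
            double[] platformB = {150, 160, 172, 145, 158, 166};

            double mA = mean(platformA), mB = mean(platformB);
            double vA = variance(platformA, mA), vB = variance(platformB, mB);

            // Welch's t: difference of means scaled by the combined standard error.
            double t = (mA - mB) / Math.sqrt(vA / platformA.length + vB / platformB.length);
            System.out.println("mean A=" + mA + ", mean B=" + mB + ", t=" + t);
        }
    }

    With only one team per platform, as in the study under discussion, there is no variance estimate at all, which is exactly the objection being made here.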

    Of course vendor-purchased research, as opposed to scientific research, is never subjected to the evolutionary process of the scientific community. At most they get a few howls from the offended party. They are not concerned with how well their FUD stands up two or three years down the line. If the “get the facts” type of marketing campaign that the company builds around the research is able to persuade a decent number of IT managers to adopt their product, then the research has fulfilled its objective and was well worth the money spent.

    PS:
    One huge red flag is that the vendor is allowed to suppress the study if it does not like the results. Who's to say M$ didn't perform 10 of these "massive" studies and the first 9 didn't go their way? We would never know that there was any other point of view. This is not an uncommon practice in the IT industry, and most people have a pretty good idea of how well M$ rates in the honesty and integrity departments. (This is also a problem in real scientific research, simply because scientists often neglect to go to the trouble of preparing negative findings for publication.)
  228. please, the TSS members are not naive[ Go to top ]

    Joe: "I have read the results of many vendor purchased studies over the years, and I have never seen one in which the purchasers solution turned out to be inferior to the competing solution that the vendor had chosen to"

    Of course no vendor purchases or sponsors any study if they are not sure of the result. But,

    It is not the results from TSS, Enderle, Didio, Forrester, the Yankee Group and all the others that cement the conclusion, but the fact that IBM & co don't commission any studies at all.

    This is as significant as when the Greek sprinter Kenteris didn't show up for a drug test before the Olympic opening ceremony. No amount of squirming excuses can hide that, in the same way that it did not help Kenteris to argue that "he did not know that he was wanted for tests". ;)

    Regards
    Rolf Tollerud
  229. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    Forget this J2EE / .NET crap, the common business programming language (COBOL) is the future, and this study proves it across multiple dimensions: developer productivity, manageability, reliability, and application performance.

    http://www.infogoal.com/cbd/cbdz040.htm
  230. One (1) voice against 1000's[ Go to top ]

    "the Microsoft results are based on a system running Visual Studio .Net running on Windows Server 2003 and costing $19,294. The IBM results are based on a system running WebSphere Network Deployment edition running on Red Hat Linux and costing $253,996."

    From the Idiot, the Duracell, the Ninja, Poor Rolf, etc..

    Best regards :)
    Rolf Tollerud
  231. WebSphere wrong flag bearer[ Go to top ]

    From the eWeek article:
    For building some enterprise applications, it took developers 195 man-hours with WebSphere, whereas building the same applications using Visual Studio took only 94 man-hours, Somasegar said the study said. Application server installation and configuration took 22 man-hours in the WebSphere environment and only 4 man-hours in the Microsoft environment.
    That's all very impressive, except I could do the same comparison between WebLogic and WebSphere and get similar results, if not better. I tell people, from my experience of developing the same app against both WLS and WS, that it's about a factor of 2 less productive in WS. Switch to Resin with Spring and no EJBs and you'll get another 30-40% improvement. The problem is WS, not Java or J2EE.
  232. WebSphere wrong flag bearer[ Go to top ]

    I could do the same comparison between WebLogic and WebSphere and get similar results.
    The problem is WS, not Java or J2EE.
    Except that's not how the headlines are going to play it.

    "WebSphere is the market-leading application server!" they will say.
    If only people knew what lengths IBM goes to to prop up their sales numbers... (I know of companies with just several CPUs in production that account for hundreds of CPUs in "sales"... and IBM pays them to take them)

    My conclusions:
    It seems as though the guys using WebSphere weren't really that familiar with it - which doesn't make for a fair WebSphere vs .NET comparison.

    I am somewhat amazed at two facts though:
    a) trying to use Sun JVM tuning params on the IBM JVM
    b) trying to use WebSphere on the Sun JRE.
    Anyone who knows the first thing about the Sun/IBM JVMs and WebSphere knows that these are ludicrous things to suggest - let alone spend time on...
    Firstly, JVM options starting with -XX are JVM-specific. Duh!
    Secondly, anyone who knows the first thing about WebSphere knows that it's tied to the IBM ORB implementation in its JVM.
    This is Java/WebSphere 101 stuff...
    While two IBM consultants were used for "initial tuning", it seems they didn't stick around much.

    Therefore, I am not surprised at the outcome. The WebSphere.* development stack is a pain to work with if you haven't spent some time with the product, developed an intuitive feel for the IBM-ness, and learned where to find stuff. I have been there, and I know what it's like. Their website is a piece of shite when it comes to finding stuff on it. Not even the IBM consultants know where to find half the stuff.

    That said:
    No one in their right mind uses WSAD for development.
    Nor do they use WebSphere in development.
    The development round-trip for both of these is crap. You may have a J2EE server built into the IDE - but it doesn't take the same patches/fixes, so you're screwed if, say, there is a bug that stops SiteMesh working...
    We deploy on Webfear, but we use Tomcat/Orion + IntelliJ for dev.
    Where we use WebLogic (production and dev) + IntelliJ, we get similar results to the Tomcat/Orion + IntelliJ combo...

    That said, VS.NET does have a very low barrier to startup.
    There is one ****-off install - and it just works - there is nowhere near the pain that you have with the WebSphere.* development stack.

    Unless of course your IT security policy is such that you cannot install IIS on your desktop (due to Nimda-style risks). Then you have to install IIS on a server and resort to very hacky workarounds that don't scale beyond one developer.
    I suspect details like this don't make it into TMC's report :-)


    The bottom line for the J2EE industry is that IBM is making everyone in it look bad.
    I wonder when they are going to pull their heads out of their collective asses and realize it.


    -Nick
  233. WebSphere wrong flag bearer[ Go to top ]

    IMHO, having been part of this project, my best experience WAS with using WSAD. It proved to be a pretty sweet development environment for all the code intensive tasks of the WSAD version of the app. And the test WS environment proved to be very efficient for testing (minimal startup time, hot redeploy of apps, etc). The downside for productivity was lack of wizards / helpers for the persistence framework (only Entity Beans have such things).

    If you take a look at our handcrafted architecture (not the RRD part, we really had to go with what RRD produced for that) you will see we tried to make it perform as well as possible. We only used JSP, Servlets, JNDI lookup (of data sources, queues), MDBs, JTA and JDBC. No Session or Entity Beans. We managed our own transactions where needed and that worked very nicely.

    The only objects put into the users sessions were those required by the specification: the User object and batched tickets for submission over messaging. Session replication was needed to ensure these persisted in case of failover.

    Other technologies were considered for the various layers.

    For presentation we thought about Struts or JSF, but past experience had shown a performance degradation compared with JSP/Servlets, despite a slight productivity improvement. Similarly with persistence: you can't really get anything to perform faster (if caching is not allowed) than writing your own prepared statements. You may get productivity improvements with Hibernate or JDO, but for this application that would have been unnecessary overkill.
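    For illustration, here is a minimal sketch (not the study's code; the JNDI, table and column names are made up) of the plain-JDBC-plus-self-managed-JTA style described above:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;
    import javax.transaction.UserTransaction;

    public class TicketWriter {

        public void insertTicket(long customerId, String description) throws Exception {
            InitialContext ctx = new InitialContext();
            // Container-managed (XA-capable) DataSource and JTA transaction, looked up via JNDI.
            DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDS");
            UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");

            tx.begin();
            Connection con = null;
            try {
                con = ds.getConnection();
                PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO TICKET (CUSTOMER_ID, DESCRIPTION) VALUES (?, ?)");
                ps.setLong(1, customerId);
                ps.setString(2, description);
                ps.executeUpdate();
                ps.close();
                tx.commit();
            } catch (Exception e) {
                tx.rollback();
                throw e;
            } finally {
                if (con != null) con.close();
            }
        }
    }

    No EJBs, no ORM: the container still provides pooling and transaction coordination, but the SQL and the commit/rollback decisions stay in the application code.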

    The best performer was the messaging piece. WebSphere MQ with a JMS interface and MDBs beat the pants off MS MQ.

    It WOULD be really interesting to see how the same application running in BEA WebLogic, JBoss or other app server performed...
  234. WebSphere wrong flag bearer[ Go to top ]

    Couldn't agree more. I have been working with WebLogic 5.x - 8.1 over the past five years with very few problems, and those I have encountered have always been solved quickly via BEA support or the newsgroups.

    I am just coming to the end of 12 months working with WebSphere 5.1. It's been an absolute nightmare. It took IBM over six weeks to solve problems with a simple JMS client.

    If Microsoft wanted to show .NET in the best light, they couldn't have chosen a better (worse) application server to use. This report would have been more valid if teams from Sun, BEA, Oracle, JBoss etc. had been allowed to "compete" against Microsoft as well.
  235. WebSphere wrong flag bearer[ Go to top ]

    You may have a J2EE server built into the IDE - but it doesn't take the same patches/fixes, so you're screwed if, say, there is a bug that stops SiteMesh working...
    This is incorrect. I routinely deploy the WAS fixpacks to the WebSphere Test Environment.
  236. WebSphere wrong flag bearer[ Go to top ]

    This is incorrect. I routinely deploy the WAS fixpacks to the WebSphere Test Environment
    How do you apply a WAS e-fix to WSAD?
  237. WebSphere wrong flag bearer[ Go to top ]

    This is incorrect. I routinely deploy the WAS fixpacks to the WebSphere Test Environment
    How do you apply a WAS e-fix to WSAD?

    Not to WSAD, but, rather, the Test Environment in WSAD. It is actually a full-blown WAS server, so you can apply fixpacks in the same manner as you would any WAS instance... through the Update Installer.

    There may be some confusion here, because for WAS 4.x servers, the Test Environment is different from the WAS standalone. And this definitely was a problem.

    Granted, IBM's documentation on this subject is pretty poor (ok, it barely exists at all), and in early versions of WSAD 5, the Test Environment didn't even have setupCmdLine.bat installed correctly, so you had to make a lot of manual tweaks to get it to work.

    Jonathan
  238. come on Rickard..[ Go to top ]

    Now all that is needed is for Rickard Öberg to write something he calls "the IBM WebSphere/.NET Study Revisited", where he makes umpteen arguments why the test was unfair, each weighing 0.0001 gram, whereupon the Java community will jointly exclaim that "Rickard Öberg has written a first-class critique of the IBM WebSphere/.NET benchmark" and that he in his analysis "tears apart many of the claims made by The Middleware Company (TMC)". :)

    Regards
    Rolf Tollerud
    I certainly wouldn't. I believe the design of the J2EE application, which didn't leverage popular open-source tools, explains the gap in productivity much more than the relative merits of VS.NET and WSAD.

    Just look at their code: this is plain vanilla J2EE - no Spring/Struts/Hibernate/other framework involved. The study pinpoints the problem in its "Quantitative Results" section: the J2EE developers spent a significant amount of the total time on system-wide development tasks. On page 37: "The main reason for the higher total under common is that the J2EE team developed frameworks for the Web, business logic and persistence tiers."
    A look at the code of the J2EE/WSAD application confirms this. No real enterprise project would start like this these days.

    Also, the comparison of productivity between RRD and VS.NET is heavily biased: the developers on the .NET side had 3 years of experience with their IDE, while the RRD guys had to catch up learning a new and complex tool.

    I infer the following from this:
    1) This study doesn't tell us much about the difference in programmer productivity attributable to the tools being compared.
    2) This study does highlight what every good architect already knows: don't spend time writing infrastructure code if you can use open-source code that does that for you. Spend all your time focusing on business requirements instead, and use/reuse well-known open-source frameworks and libraries.
    This would be true if the goal of the study was productivity only, but we knew we were coding for performance as well. Can you honestly say you would use any of these frameworks if performance (even a 10% decrement) was crucial? In previous experience I found, for example, that Struts added about 40% overhead to plain JSP/Servlets in a simple form-submission test case. Similarly, I can't imagine any persistence layer being faster than plain JDBC with prepared statements, if caching is not required.
  241. M$ paid FUD : about Linux and Java[ Go to top ]

    It's nothing new... TSS is a pawn for M$ to publish garbage about Java, like they do with Linux via similarly paid "independent" (non-biased, oh yeah!!) vendors.

    Windoze and .Not is still inferior in the real world, buggy as hell.
  242. M$ paid FUD : about Linux and Java[ Go to top ]

    It's nothing new... TSS is a pawn for M$ to publish garbage about Java, like they do with Linux via similarly paid "independent" (non-biased, oh yeah!!) vendors. Windoze and .Not is still inferior in the real world, buggy as hell.
    Guess you must have heard me yelling at MS and IIS and .Net right about the time you posted this.

    If MS.Net is easy for someone (ie smooth dev, install and maintain) then:
    A. They are not doing anything complicated.
    B. They have an in at MS so they can get problems solved quickly.
    C. They definitely are not doing COM Interop - especially with Exchange.
    D. They never install service packs.
    E. They don't refactor their code.
  243. Any guess, any theory?[ Go to top ]

    "Windoze and .Not is still inferior in the real world, buggy as hell."

    Not according to this test, which shows that it is WebSphere that is inferior and buggy as hell. You could add that uptime is lousy too. IMO the real-world numbers from Java consultants without 14 years of experience are far worse.

    "TSS is a pawn for M$ to publish garbage about Java"

    How come IBM, BEA and Sun never sponsor any test with .NET?

    Any guess, any theory?

    Regards
    Rolf Tollerud
  244. P.S.[ Go to top ]

    Lack of funds, perhaps? :)
  245. M$ fud spread .....[ Go to top ]

    M$ puts this FUD out because, let me guess, their product sucks.

    They put out FUD on Linux every day; has that changed anything at all?

    Let's see, they always market that:
        Windoze is secure...
        IE is secure...
        ActiveX/COM/.Not is secure...
        Exchange Server is secure...
        this list can be as long as your nose...


    Time to laugh... hahaha. Rolf, you know M$ products suck, but hey, you get paid by them, just like Dino, to put up with the crap.
  246. Any guess, any theory?[ Go to top ]

    "Windoze and .Not is still inferior in the real world, buggy as hell."Not according to this test - that shows that it is Websphere that is inferior , buggy as hell.. You could add that uptime is lousy too..IMO + that the real world values from Java consultants without 14 years experience is far worse.."TSS is a pawn for M$ to publish garbage about Java"How come IBM, BEA and Sun never sponsor any test with .NET?Any guess, any theory?RegardsRolf Tollerud
    I was laughing so hard, I had to respond. Have you ever tried to make MSMQ or BizTalk handle 3K transactional messages per second in a distributed transaction system? When I say distributed transactions, I mean transactions that have to make inserts/updates to local and remote databases. It also has to go through regulatory compliance checks and other compliance validation. The working dataset should be a medium size, like 50 million rows. The number of trading terminals should be at least 50, distributed all over the country. It should also be able to handle peak loads of 10K transactions per second.

    I'd like to see someone do a test comparing .NET and the IBM stack for the scenario I just described. IBM's stack works and scales well. Those who think the most important thing is "how many lines" it takes and "how many junior programmers" the project needs have no clue how to build scalable, reliable applications. I know plenty of production systems that handle a constant load of 2-4K transactions per second using the IBM stack. The same can't be said of .NET.
  247. Any guess, any theory?[ Go to top ]

    The system you described uses Websphere (app server)?

    Pratheep P
  248. Any guess, any theory?[ Go to top ]

    The system you described uses WebSphere (app server)?
    Pratheep P
    I know at least 3 of the top 10 trading firms use IBM WebSphere + MQSeries to run their core trading systems. Some use BEA + MQSeries to handle those kinds of loads.
  249. "I was laughing so hard"[ Go to top ]

    I am not laughing, too cynical..

    Peter: "I personally would only use .NET for client side apps, but I wouldn't willingly or happily choose .NET on the server side. For large trading system that have to scale, .NET sucks."

    That is just plain nonsense. For instance,
    the London Stock Exchange .NET system achieves 3000 transactions per second.
    http://www.microsoft.com/resources/casestudies/CaseStudy.asp?CaseStudyID=13911

    Check also,
    TheServerSide Calls for Real World J2EE Project Stories
    http://www.theserverside.com/news/thread.tss?thread_id=17347

    Pre W: "TSS is "approached" (and PAID!) by a commercial entity to show that their product is better"

    MS publishes tests as regularly as clockwork.

    Why doesn't IBM?
    Why doesn't IBM?
    Why doesn't IBM?

    Any guess or theory?

    Curious
    Rolf Tollerud
    (please, don't mention tests where .NET is forbidden to participate :)
  250. "I was laughing so hard"[ Go to top ]

    I already responded to someone else about the London Stock Exchange on TSS.NET. So rather than repeat the same thing, I'll just use a link: http://www.theserverside.net/news/thread.tss?thread_id=28737#138226 . The London Stock Exchange is using Microsoft Analysis Services to aggregate data. Strictly speaking it is a data mining application and not the trading platform for the London Stock Exchange. The London Stock Exchange gets a lot more transactions than just 3K per second.
  251. like Diogenes "Quaero hominem"[ Go to top ]

    London Stock exchange gets a lot more transactions than just 3K per second

    Thank you for the analysis, perhaps you can help me with another matter.

    "I am in search of the high-level JeEE applications". (Very free translation)

    To be specific, I have been trying for an extended period of time to compile a list over J2EE systems with EJB that handles over 3000 transactions (DB reads and/or writes) per second. As you can see in the previously thread,

    TheServerSide Calls for Real World J2EE Project Stories
    http://www.theserverside.com/news/thread.tss?thread_id=17347

    they are elusive to find. Perhaps you know of any?

    Regards
    Rolf Tollerud
  252. like Diogenes "Quaero hominem"[ Go to top ]

    http://www.infogoal.com/cbd/cbdz040.htm

    "Today, the typical enterprise application in a large organization consists of dozens of COBOL programs running on an IBM mainframe computer. To drive the workstations or PCs that use those applications, the programs run under a world-class transaction processor known as CICS (Customer Information Control System). These COBOL/CICS applications process billions of online transactions every day in businesses like banks, airlines, insurance companies, and hospitals, and they provide the business logic and database processing for most large e-business sites.

    In fact, one estimate says that COBOL/CICS applications account for 60% of all the applications that are currently in operation, and another estimate says that these applications process 85% of all the transactions that are processed. So when you withdraw money from an ATM, place an airline reservation, or order a product over the Internet, chances are that a COBOL/CICS application has been used to process your transaction. "

    Author:
    Mike Murach is the president of Mike Murach & Associates, Inc., a leading publisher of COBOL, mainframe, Java, and .NET books.

    IBM invented SQL and the RDBMS too. TSS implemented a lame homepage and claims they are scientists.
  253. like Diogenes "Quaero hominem"[ Go to top ]

    I'm not sure what you mean by "high-level J2EE applications". Honestly, the only systems I am aware of handling those kinds of loads use big iron for the database. I know of a few medium-sized banks that handle about 3-4K queries per second with a decent read/write mix (75/25). Their primary database is Oracle on a big Sun box. But that's just one of their databases. It's not uncommon for banks to have an old B-tree database co-existing with an RDBMS in a production environment.

    The bigger banks I know of still use a mix of mainframes and high-end Unix boxes to handle 5-10K transactions per second. No matter what Intel or Microsoft says, a quad 3GHz CPU server just can't handle 2K+ transactions per second. Batch inserts don't count as TPC-C transactions. In the most optimistic situation, where someone manages to get their inserts down to 50ms on a 4-CPU box, you're gonna max out around 200-250 transactions/second. Some problems are just easier to solve by scaling up to better hardware designed to handle high concurrency. I'm not a hardware expert, but a Sun server with 8 CPUs is not the same thing as a Dell server with 8 CPUs.
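    As a back-of-envelope check on those numbers (Little's law: throughput = transactions in flight / latency; only the 50ms figure comes from the post above, the concurrency values are assumed):

    public class ThroughputEstimate {
        public static void main(String[] args) {
            double latencySeconds = 0.050;   // 50 ms per insert, as stated above
            int[] inFlight = {4, 8, 10, 12}; // assumed concurrent transactions in flight
            for (int i = 0; i < inFlight.length; i++) {
                long tps = Math.round(inFlight[i] / latencySeconds);
                System.out.println(inFlight[i] + " in flight -> ~" + tps + " tx/s");
            }
        }
    }

    At 50ms per transaction, the quoted 200-250 tx/s ceiling corresponds to roughly 10-12 transactions in flight at any moment.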

    Obviously, I can't name the companies, since all of them want to keep their architecture secret. The stuff I know is from first-hand experience, or information from friends I trust and know well. My advice is this: since no large financial firm is gonna say, "hey, tell the world how we handle transactions," you're going to have to run your own independent benchmarks if you want to make the information public. I spend a lot of time experimenting with different approaches to scaling and improving performance, and built my own dev environment. What I can say is that the top 5 financial companies in Boston use either WebSphere, WebLogic or SunONE. All of them use a combination of stateless and stateful session beans. If you really want to see first hand how to scale large transactional applications, get a job at one of the financial firms in Boston or NY. I know my answer isn't really helpful, but I don't have permission to go into detail.
  254. good old capitalism in practice[ Go to top ]

    To all those that advocate that TSS should be restricted to "Java/J2EE" only I just say: "imagine how boring it would be". Besides,

    Mark Boon: ".NET is of course Java. A slightly different breed of Java, but it's still Java. They may cleverly disguise it with different names, CLR, CLI and more, it's of course the same as a JVM, byte-code etc.."

    You can just take a look at SDK 5.0 to see how much "Sun Java" has benefited from the competition with "MS Java". For better or worse, the situation is what it is.

    If COBOL/CICS does all the real heavy lifting in the world, there should be no more problem for .NET/C# to fill up the rest than for Java/J2EE to do it. If Java EJB application servers are still more common in certain business markets it is only because of lazy momentum. They are in for some tough competition, as this study shows. And for the lower echelons, say projects up to 5 million dollars, I believe that .NET has already taken the lead.

    Regards
    Rolf Tollerud
  255. good old capitalism in practice[ Go to top ]

    That might be true of the projects you work on, but from my experience in the Boston area, it is not true of large financial institutions in Boston like Fidelity, Putnam, State Street Bank, Fleet/Bank of America, Citigroup, and JP Morgan. One of the bigger OMS providers decided to move to J2EE a few years back because they weren't able to scale to the needs of the "big boys". Those in the industry know which company I'm hinting at.
  256. out with the old, in with the new[ Go to top ]

    Peter,

    I really do respect your competence; you seem to know what you are talking about.
    I myself have never worked with "high-level" applications; my specialty is complex rich web clients, "Windows applications". Nevertheless I perceive myself to have very good discernment, perhaps because my education is not entirely technical.

    So I say it again, the large Java EJB application servers are out, they are not hip, and they are destined for the dust bin. I have never been wrong in any prediction yet in the 3 years I have been a member of TSS. Wait and see.

    Regards
    Rolf Tollerud
  257. out with the old, in with the new[ Go to top ]

    Actually, the trend I see is that the financial firms in Boston are moving deeper into J2EE. My best guesses for this trend are the following:

    1. their in-house staff finally have deep experience scaling EJB to the same level as the old mainframes

    2. most of these firms have already spent hundreds of millions on hardware over the last 4 years. They have to live with it for at least another 15 years to get their investment back.

    3. the recent developments with JavaSpaces and grid computing are starting to get significant traction. In this area Java is more mature. How long Java will maintain a lead here is anyone's guess.

    4. I know of at least 1 firm in Boston using Java for the client side. This is primarily driven by a message-oriented middleware (MOM) approach to transaction processing. But really, MOM owes a lot to Tux and Tuxedo and all the work AT&T put into transaction monitors. One could just as well use a .NET client and send messages to a J2EE backend using the MOM approach.

    5. In many of the shops I know first hand, mentioning .NET for backend heavy lifting like transaction processing is seen as a joke. Most of them tried to scale .NET to their needs for an extended period of time and gave up after 10 months.

    6. I know of several New York financial firms that are moving heavily into grid architecture using Linux + Java. By heavily, I mean buying thousands of servers to build a massive shared-memory architecture.

    The financial sector is a very specific area. I suspect (without any proof) that most small/medium-sized companies that only need to handle 1-3 concurrent requests/queries per second are going with .NET instead of Java. In most of those cases, they are Microsoft shops to begin with. I don't know of any small shop that is moving from .NET to J2EE. The main reason is that the learning curve is much too steep and their developers have zero experience with large-scale applications. For these firms, their needs are small and it's just silly to go the J2EE route. It's not cost effective to migrate to J2EE for small shops. But that's my biased opinion.
  258. the Customer is always right[ Go to top ]

    Peter,

    For me it is incredible that you (who obviously are quite intelligent and even experienced) do not see the writing on the wall. All these applications that you speak of have a horrible user interface that just as well could have been tool-generated. The situation is the same as some years ago, and even the arguments are exactly the same, namely the "Mainframe vs the PC" syndrome. The Mainframe lost then, and the "Mainframe" will lose again, for the same reasons. What you forget is that it is not the management or even the CTOs that have the final say, it is the users == "the Customer".

    Speak about deja vu! ;)

    Regards
    Rolf Tollerud
  259. the Customer is always right[ Go to top ]

    Really. I didn't realize mainframes are dead. My area of focus is the server side, so honestly I'm not the best judge of GUI applications. I've done a fair amount of web design and graphic design, but I don't consider that hardcore GUI development. I'm definitely a lightweight when it comes to GUI development. As for beautiful heavy clients: to build a pretty Swing GUI you have to override just about everything. Honestly, if aesthetics is as important as functionality, using anything other than the native API is going to be hard and most likely a pain.

    I think the argument "the customer has the final say" is a red herring. If the business is all about transactions, what is more important?

    A. a nice GUI and horrible scalability
    B. a usable GUI and rock-solid scalability
    C. get rock-solid scalability first, then worry about a pretty GUI later

    Ask a financial analyst what is more important to them. Or better yet, ask a fund manager what is more important. Building transactional systems is completely different from building a rich client. But that's my biased perspective.
  260. M$ Just does not 'get it' does it?[ Go to top ]

    Sad comparison. The fact that they chose the slowest, crappiest, most expensive J2EE implementation on the market to test against speaks volumes about their confidence in .NET. Why not take one of the leading J2EE stacks and compare it?

    Where is the comparison between a free J2EE stack like:

    Fedora Linux with 2.6 Kernel
    Sun Java 5
    Apache Tomcat 5.5
    PostgreSQL or MySQL or Firebird

    Cost: $0

    Now I would be willing to bet that, on the same hardware, this combination would perform *very* well, perhaps the same as or better than a .NET stack. Heck, even if that stack cost the same as .NET (which it does not), I would *still* choose it simply because I don't want to be tied to Microsoft and their increasingly restrictive licensing schemes.

    Microsoft *are* evil. I've heard that they are even paying obnoxious people to pose as 'linux advocates' at their shows to make those who support Linux appear as immature or stupid. This is beyond low. I have no respect for that company at all.

    We need choice in the enterprise software world, so .NET's existence is not a bad thing, but I choose J2EE for its flexibility, performance, and freedom.

    Mike
  261. You do not get it...[ Go to top ]

    Sad comparison. The fact that they choose the slowest, crappiest, most expensive J2EE implementation on the market to test against speaks volumns of their confidence in .NET. Why not take one of the leading J2EE stacks and compare it?
    hmmm...THE leading stack was picked for the test.
    Now I would be willing to bet that, on the same hardware, this combination would perform *very* well, perhaps the same or better than a .NET stack.
    It's been done. Nope, .NET performs better.
    I've heard that they are even paying obnoxious people to pose as 'linux advocates' at their shows to make those who support Linux appear as immature or stupid. This is beyond low. I have no respect for that company at all.
    I've heard that aliens have Elvis's brain, but I can't prove that either. You saying it doesn't make it true; it's called FUD.
  262. Learn from this one![ Go to top ]

    Folks,

    I think it is worth looking at these studies to see how Java can be made better. Some parts of Java really smoke MS, and some parts really pale in comparison. I should know because I've worked with both. I want to love Java because I've spent 5 years pursuing nothing else. But I get serious cases of .NET envy now and again.

    Things that are good about Java:
    1. Maturity. The JVM, even the Sun one, will beat the MS CLR in performance hands down on similar code. I happen to think that due to structures and stack allocation the CLR could eventually win out, but the fact is that right now, Java code will smoke C# or VB.NET code written in a similar style. 5:1 or better.
    2. Write once, run anywhere. Write it on a Sun, test it on your desktop at home. But don't overestimate the value; most organizations/enterprises support a homogeneous Windows environment. And Intel chips keep gaining market share.
    3. APIs (breadth). Java these days has good APIs for almost everything.
    4. Variety. Go with Sun, IBM, JBoss, JRockit, BEA, or whoever. Wide ranges of prices. Open source. Gotta love it.

    Things that are good about .NET:
    1. Good tools beat the Java ones (especially IBM's) hands down. Try them, you'll like them. Then copy their good features and bring them to Java. Please.
    2. Simplicity. The MS APIs are simpler. There are reasons for this. They only support Windows technologies, not portable technologies. They came later. Etc.
    3. APIs. MS has far better rich-client APIs (Windows Forms). Better means you can get better results in less time with fewer bugs.
    4. Fewer bugs. There are fewer optimizations in MS stuff since it is less mature, but there are fewer critical bugs. Having used IBM WebSphere and MS stuff, I'm painfully aware. Lesson: make it run right, then think about fast/versatile/whatever.
    5. Better, unified library design. Non-blocking web service invocations are a stellar example of how MS delivered an important design in a simple package long before the competition. Having a single web design (ASP.NET) is nice, contrasted with Struts, JSF, and many others. And MS is not afraid to delegate to the database if that's what the doctor ordered.

    Now I'll rant about IBM WS for a sec. Half our development time was spent waiting for EJB stubs to compile. It took hours to compile our stuff under IBM, and 30 minutes to build and deploy a small change to a server component. When we switched to JBoss we found we could build our entire suite in 10 minutes, deploy a small change in 35 seconds, and save the $10K/CPU. So clearly IBM != J2EE. And the IBM tools stink too. WSAD is designed to be pluggable, not for productivity. Bad move.

    Anyway, take this to heart. Think about what we can learn, and what we can do better. Stop screwing with generics and think about how to build a decent rich client library, a bug-free JVM, and a decent development environment. Sure, some of the comparisons are unfair, but just because we're paranoid doesn't mean .NET isn't better in some regards.
  263. Learn from this one![ Go to top ]

    Charles,

    This is quite a summary of the differences between the platforms, but I think you are WAY off with the performance figures. My company is roughly 99% Java, with all but one of our clients running on Java. For the one client on .NET we built and support a sizable billing application, and we have been able to get that to perform as well as it would have done had we written it in Java. In the past we had many more clients on .NET and performance was fantastic.

    The reason we moved our clients away from .NET was purely business-related. First up, the costs for Microsoft tooling are quite high once you get past five developers (you can get a five-dev MSDN pack under MCP for about $1000 per year), whereas with Java I guess we have spent say £300 in the last 18 months on tools! Secondly, when going in with a pitch for a system we stand a lot better chance of being cost competitive when we are using PostgreSQL, Tomcat/JBoss and other OSS gubbins compared to SQL Server and .NET. When I see reports saying that MS is cheaper than Java OSS I just don't see how that fits with ANY of our customers, but I am not a business analyst so what do I know?

    From my experience both .NET and J2EE are excellent platforms and it is naive to just discount .NET because it was born in Redmond. I choose Java for my clients not because I prefer it (which I do!) but because it is better for them. Likewise, if I have a client who is committed to MS in a large way and wants to stick with MS then .NET is a viable solution, especially when they have an MS-based infrastructure already.

    Rob Harrop
  264. Learn from this one![ Go to top ]

    See "TORPEDO" it is a very fun FUD war too. There was more lame TSS studies, but I forgot them all. I think we need some pool to find the most lame TSS study, before to forget it.
  265. Learn from this one![ Go to top ]

    Rob Harrop wrote:
    First up the costs for Microsoft tooling are quite high once you get past five developers (you can get a five dev MSDN pack under MCP for about $1000 per year) whereas with Java I guess we have spent say £300 in the last 18 months on tools!
    True, MS charges for Visual Studio. The position we take is that the productivity you gain from the tool, over a year's time, is worth much more than its price. While it is true you can get things for free, the Microsoft philosophy is about value, not price. MS won't win on price. It's got to be value-for-money. (The same philosophy applies to Windows-v-Linux.)

    I admit that half of the apps I write are done in Emacs with csharp-mode and Nant. But in some clear cases, VS is an absolute requirement.
    Secondly, when going in with a pitch for a system we stand a lot better chance of being cost competitive when we are using PostgreSQL, Tomcat/JBoss and other OSS gubbins compared to SQL Server and .NET.
    But of course there is a third path - PostgreSQL can work with Java AND with .NET. If SQL Server is too pricey for your requirements and PostgreSQL delivers a better balance of value and cost, cool. There are several .NET managed providers for PostgreSQL (and MySQL); you can find some of them on SourceForge. There is also a lower-cost SQL Server called MSDE - it is the same SQL engine, but throttled to a maximum of 5 concurrent connections.

    {I apologize for the commercial message here, but the point is, you can mix and match in .NET as well as in Java}.

    Dino
    yes, still with Microsoft
  266. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    *All* TSS studies are crap. This study is no exception; it just shows how lame TSS studies are. Any result from this kind of study can be ignored, because there is no winner in a FUD war; it is just a dirty attempt to damage some company, made by lame webmasters.
    I will say the same when IBM or Sun becomes "better" than MS next time too, because TSS cannot be trusted. Fix your homepage and learn HTML before doing any science.
    http://validator.w3.org/check?verbose=1&uri=http%3A//theserverside.com/
  267. No version management??[ Go to top ]

    A number of points have already been made here regarding the bad performance of the WebSphere team. IMHO choosing a monster like WebSphere + WSAD (I've got no experience with RRD) is the first step in the line of mistakes favoring .NET. I'm not saying that .NET wouldn't have won if a more lightweight J2EE platform had been chosen - but at least the gap wouldn't have been so significant.

    But one thing baffled me hard - especially given that the Java teams had a minimum of 14 years of development experience:

    8.3.3.2 Miscellaneous WSAD Headaches
    ...
    Sharing source code. Because the team chose not to use source control software, they shared code by zipping up their workspace. After a couple of false starts they learned which files not to share.

    I just can't believe that... I can't even imagine working without version management even in a single-person project!

    Looking at the productivity table in 8.1.1 it's not surprising that they took 29 hours for "System wide dev tasks" versus 2 for the .NET team.

    Matz
  268. No version management??[ Go to top ]

    Yes, it is very funny. It looks like this study is fake or was made by idiots.
  269. No version management??[ Go to top ]

    How tough is CVS? Not very. And the plugin for WSAD/Eclipse is great.

    On my last project we had a bunch of Java "newbies" (my apologies to them for using that term - they were either mainframe converts or new developers) using WSAD and CVS, and they had an easier time than this study group.

    My experience with Java, WSAD/Eclipse and WAS at multiple locations, and my experience with Microsoft's platform, just doesn't coincide with the results of this "study".
  270. Version Control Clarification[ Go to top ]

    The reason version control was not used for the WSAD piece was that we were outside the lab at this point. We had two geographically dispersed people (one West coast, one East coast) building this application, and we did not have the time or resources to set up a remote CVS system to handle version control.

    Quite frankly, it didn't really cause any issues, since we were working on separate parts of the application (one on customer service, the other on work order processing) after one developer built the common pieces. The "false starts" mentioned in the document refer to including some WSAD workspace meta files in the zips, which caused local problems. This probably would have happened if we had shared the workspace through CVS as well.
  271. No version management??[ Go to top ]

    Are you sure you have used Eclipse/WSAD before? WSAD's built-in support for CVS is terrific!
  272. No version management??[ Go to top ]

    Are you sure you have used Eclipse/WSAD before? WSAD's built-in support for CVS is terrific!
    Yes, we have indeed, and I know it is. I explained the version management issue here.
  273. WSAD/WebSphere bashing[ Go to top ]

    WSAD alone may not be the most productive J2EE IDE. But with a targeted server combination (in this case WSAD/WebSphere), it may well outperform other combinations such as JBuilder/WebLogic in developer productivity.

    Sadly there is a lot of bashing of WSAD out there by many after-hours-only J2EE developers.
  274. Another week, another lame study[ Go to top ]

    I imagine this will be similar to previous .NET/Java etc. studies on this site. IBM and Co. will issue a statement refuting the claims of the study. TMC will run another version of the study showing that the first one was flawed.

    This reminds me of the previous studies claiming that .NET is 28 times as fast as J2EE. After various re-runs the claim was withdrawn, and eventually they admitted that J2EE and .NET have equal performance:
    http://www.gotdotnet.com/team/compare/middleware.aspx
    And this is on a MS website.

    Needless to say every MS marketing employee on the planet will be wheeling (the original uncorrected version of) this study out at every opportunity for the next few years. VB kiddies and pointy-haired bosses will lap it up and MS wins again.
  275. Another week, another lame study[ Go to top ]

    VB kiddies and pointy-haired bosses will lap it up and MS wins again.
    I do not know who the winner in this FUD war is, but I know the loser: it is TSS.
  276. And today's other news....[ Go to top ]

    Surprise! Drug company releases sponsored research that shows their drug is more effective than competitors' compounds

    Shock! Horror! Car manufacturer releases sponsored study that shows their model is safer than competitors'

    Blimey! Research company goes out of business for publishing unbiased sponsored reports

    etc etc
  277. TSS is maturing[ Go to top ]

    Surprise! TSS, which gets its 418,919th member and publishes benchmark study report no. 15 this week, is now regarded as the most serious research company in the IT business.

    Congratulations! When will you start market research surveys to compete with Evans? There are a lot of things I want to know. For instance,

    Google reported only a 1% Linux desktop share while open-source advocates claim significantly more. What is the truth?

    What is the real market share of the Apache server vs. IIS when you count only the companies that run their own servers?

    What is the normal uptime/downtime for a typical EJB Server application?

    And so on and so on..

    Regards
    Rolf Tollerud
  278. Selling the same stuff multiple times ...[ Go to top ]

    I used to visit this site at least once a day ... but not any more ... It lost its credibility with its "Pet Store games", and I'm surprised that they are still selling the same stuff.

    While I would agree with some aspects of the "report", one must look at what is behind it. There are MANY things behind it, two in particular:

    1) TSS is "approached" (and PAID!) by a commercial entity to show that their product is better - this is marketing. Who would be foolish enough to pay somebody to show that their products don't perform better, and who would be foolish enough to initiate something where their product wouldn't shine (IN WHAT WILL BE TESTED!)?

    2) By publishing these kinds of controversial reports, TSS drives traffic to its site - and gets money from it (ads) - it wins again!

    The TSS guys make this portal look almost like a political arena ("tell two different stories to two different groups of people"): they publish a report that clearly chooses the winner ... and then they come here to the "loser" side and say, "look there ... look there ... read very carefully and you will find that things are not as bad as they seem to be ...". I find this very dishonest!
  279. About eBay[ Go to top ]

    Funny thing is that:

    ebay.com is "powered by IBM"
    ebay.de is "powered by Sun"

    :)
  280. About eBay[ Go to top ]

    But if you actually do any searches or other queries, some of them are performed via DLLs. I believe the IBM product they use is DB2, but I may be wrong there.
  281. TSS and its role: a history?[ Go to top ]

    I really think TSS is trying to play both sides of the coin so that they can claim to be good citizens to both the .NET and J2EE crowds. This way they win no matter where the cash cow is. It is like a mutual fund that invests in both stocks and bonds: if stocks go up, bonds go down, and either way the fund wins. Their interest is slowly turning towards $$$, and you are right when you say they keep saying "look here" and "look there", trying to distract the J2EE community so that they can bargain with companies over how much they should pay for ads.

    I am slowly beginning to want to stay off TSS for good! I will possibly watch this site just a few more times before I say "enough is enough!" Time to move on and forget TSS altogether... They are history.

    TSS is beginning to lose its credibility... Their page is beginning to look like SYS-CON's JDJ marketing site...
  282. TSS and its role: a history?[ Go to top ]

    In fact, all they are interested in is stirring up controversies! That is it. The more controversies the better for TSS - it generates a lot of traffic and discussion. It doesn't matter what happens in the end...

    Put out some numbers, then show that somebody else beats them, then go to the losers and ask them if they want to publish a report (and ask them to pay more $$$)...

    TSS: the readers here are pretty smart and can figure out what you are up to!!
  283. decision makers - This is a single data point[ Go to top ]

    Most smart decision makers will not take a report like this and say, "OK, now it is time to switch to .NET." Open-minded, cost-conscious technical leaders will say, "Hmm, interesting... I agree with this point, disagree with that one..." and so on. These leaders will probably attempt to pilot the alternative technology and analyze the results. Based on those results, they will build a business case for making the change. The business case will include cost savings after a specified period of time. It will also include less tangible measures, including flexibility and skill-set availability, among others.

    From my perspective, solving business problems in a cost-effective, reliable, scalable, maintainable way is the goal.

    Thanks for the data point TSS!
  284. I am wondering, then, why I keep getting consulting work migrating .NET implementations of enterprise apps to J2EE.
  285. TMC Completes Massive IBM J2EE / .NET Study[ Go to top ]

    I get the distinct impression from browsing through that report that the engineers responsible for the IBM system do not use the IBM product stack on a regular basis,
    and thus were still "finding their way", so to speak.
  286. Why all the bashing???[ Go to top ]

    I simply don't understand all the bashing of TSS!!!
    (No, I don't work for them - that'd be great though :-) )
    And I am a total Java guy. (That'll erase any bias towards M$.)

    I tend to give credibility to TSS's research (based on past results and the people associated with it).
    I'm not saying they are scientists (as one of the posts suggested!!); the results are what "they found". Every study has limitations and constraints. The results they posted need not represent all possible cases.

    But, as far as the scenario they considered is concerned, I think their result does hold.

    One simple question: which is the better OS, Windows or Linux??? Well, if it is Windows, how come there are so many Linux systems in production? And vice versa.
    What I'm trying to point out is that there is no SILVER BULLET system (considering each was developed by some pretty smart people on either side). I really don't think there is any foul play in this research. (That's my point of view, anyway.)

    As they say with cars: "Your mileage may vary" :-)
  287. Summary of the research results[ Go to top ]

    The research results can be summarized as follows:
    For certain specifications, designed to favor .NET, it can be seen that productivity and performance with VS.NET are higher than with WSAD when implementation on WSAD is done using inexperienced/incompetent developers with a near-total disregard for available literature and best practices.

    Some of the criticisms of the IBM products in this discussion are way over the top. I do agree that many of these products have a steeper learning curve (that, however, applies to most enterprise-class software).

    Kalyan "don't work for IBM"
  288. "..implementation on WSAD is done using inexperienced/incompetent developers with a near-total disregard for available literature and best practices."

    Maybe they should have used developers with 28 years of experience instead of a paltry 14?

    Don't you notice the whining tone in your voice? The same as in the last 20-30 benchmarks: everything was crooked and deliberately twisted in Microsoft's favor.. :)

    Obviously there are no British Java developers. They would rather be dead than behave in that fashion.

    Regards
    Rolf Tollerud
  289. the whining and squirming goes on..[ Go to top ]

    "..implementation on WSAD is done using inexperienced/incompetent developers with a near-total disregard for available literature and best practices."

    Anyone can tell that "inexperienced/incompetent developers" in the original posting really means inexperienced/incompetent WebSphere developers.

    BTW, how many WebSphere shops are not using DB2?
  290. presstop, redo all tests..[ Go to top ]

    I have also heard that one of the WSAD developers was left-handed!
  291. the whining and squirming goes on..[ Go to top ]

    Maybe they should have used developers with 28 years of experience instead of a paltry 14?
    Definitely not. Way too much to unlearn. :)

    So is that 28 years of experience with Java? :)
    Is that 1 month's experience 336 times?

    No, I am not whining. But I definitely am crying. :(
  292. Some .NET cheats got through![ Go to top ]

    Posted for a friend...
    I checked out the new TMC comparison.

    Once again, different database backends -- makes nonsense of the benchmarks since SQL Server is very fast, and Oracle requires a DBA to perform well.

    The .NET application is really pure 2-tier -- it's like servlets running stored procedures.

    Particularly telling is that the .NET crew really knew their SQL (I've checked the code!)
    - Their main query sproc does SELECT TOP 500 (i.e. immediately limits the size of the result set to a small % of the total prior to selecting and grouping. This is a MAJOR cheat, and the audit didn't pick up on it. Never mind the DB server not being loaded, this affects latency. Everyone knows that concurrency is helped by short transactions, hence this is critical.)
    - Sprocs are faster than external SQL (even precompiled); the DB server can do more optimisations when it sees what's coming
    - The Oracle version had more indices on key tables; thus slowing its insert speed. Also, the MS version had a clustered index on custid/date -- another major cheat since it puts the data in order on the disk. Not sure if MS identity columns are faster than Oracle sequences.

    They should get both teams to code against Sybase; it would be fairer.

    Finally, the MSMQ XA didn't work properly - it wasn't doing remote XA correctly, but rather deliver-and-ignore-duplicates-style processing, which has lower latency due to fewer network trips.

    Why don't the Java crowd just write JSPs/POJOs that call stored procedures in these tests?
    This type of app is always faster -- .NET is not magical. The first rule of fast TP systems is to keep data in the database.

    JD -- I've lost my account on TheServerSide; I wouldn't mind you posting this on my behalf.
  293. Some .NET cheats got through! - NOT[ Go to top ]

    I checked out the new TMC comparison.

    Once again, different database backends -- makes nonsense of the benchmarks since SQL Server is very fast, and Oracle requires a DBA to perform well.
    Both teams were free to use either SQL Server or Oracle; that was their choice. The WebSphere team had extensive knowledge of Oracle, and in fact Oracle was tuned such that it was not a bottleneck, as noted in the auditor's report.
    The .NET application is really pure 2-tier -- it's like servlets running stored procedures.
    No, it's not. There is a model class that maps DB records to objects; there is a data access layer, and a middle tier business object layer that calls into the data layer with model objects passed between layers for full abstraction. Finally, there is a UI tier, cleanly separated into a separate ASPX code-behind layer that makes calls only into the business tier. It's a full 3-tier implementation, just like the WebSphere code.
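    To picture that layering in Java terms, here is a rough sketch of the same separation (my own illustration, not code from either implementation; all class and method names are made up):

    // Model object: maps a DB record to a plain object passed between layers.
    class WorkOrder {
        private int id;
        private String status;
        public int getId() { return id; }
        public void setId(int id) { this.id = id; }
        public String getStatus() { return status; }
        public void setStatus(String status) { this.status = status; }
    }

    // Data-access layer: the only place that talks to the database.
    interface WorkOrderDao {
        WorkOrder findById(int id);
        void save(WorkOrder order);
    }

    // Business layer: called by the presentation tier (JSP/servlet here, ASPX
    // code-behind in the .NET case); applies business rules, never raw SQL.
    class WorkOrderService {
        private final WorkOrderDao dao;
        WorkOrderService(WorkOrderDao dao) { this.dao = dao; }

        WorkOrder reopen(int orderId) {
            WorkOrder order = dao.findById(orderId);
            order.setStatus("OPEN");   // business rule lives in the middle tier
            dao.save(order);
            return order;              // model object handed back up to the UI
        }
    }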
    Particularly telling is that the .NET crew really knew their SQL (I've checked the code!)

    - Their main query sproc does SELECT TOP 500 (i.e. immediately limits the size of the result set to a small % of the total prior to selecting and grouping. This is a MAJOR cheat, and the audit didn't pick up on it. Never mind the DB server not being loaded, this affects latency. Everyone knows that concurrency is helped by short transactions, hence this is critical.)
    This is per the specification. Just download the specification: queries are specified to return at most 500 rows. Both teams got the exact same specification, and per the specification addendum (which documents clarifications to the spec that were required during development), the maximum of 500 rows returned across all search queries was re-verified. The spec is posted, so you can see for yourself.
    Sprocs are faster than external SQL (even precompiled); the DB server can do more optimisations when it sees what's coming
    The WebSphere/WSAD implementation also used stored procs (PL/SQL) where the team found a perf advantage in doing so. And the .NET app did not use stored procs everywhere; for example, that team found that on some complex searches it was easier to build up SQL on the fly in the code than in a stored proc. And... I hate to rehash this, but it is incorrect that stored procs are always faster than dynamic SQL. We learned this last time through (c. July 2003). It used to be the case that sprocs were always faster, but at this point both Oracle and SQL Server have such good precompiler engines for parameterized dynamic SQL that there is no difference in most cases. None. The real perf advantage of stored procs is where multiple queries can be done in one longer stored proc called only once from the app. This is exactly what the WebSphere team was able to do in their WSAD implementation for the largest insert in the app, as they document in the report.
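    To make the dynamic-SQL-versus-sproc point concrete, here is a minimal JDBC sketch (my own illustration, not code from either implementation; the table, column and procedure names are hypothetical, and the setMaxRows(500) call stands in for the spec's 500-row cap):

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Sketch only: connection pooling and error handling are omitted.
    public class WorkOrderQueries {

        // Parameterized dynamic SQL: the driver and database cache the compiled
        // plan, so repeated executions avoid a per-call parse/compile cost.
        public static void printRecentOrders(Connection con, int customerId) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "SELECT order_id, status FROM work_order WHERE customer_id = ? ORDER BY created DESC");
            ps.setMaxRows(500);                 // cap the result set at 500 rows
            ps.setInt(1, customerId);
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                System.out.println(rs.getInt("order_id") + " " + rs.getString("status"));
            }
            rs.close();
            ps.close();
        }

        // Stored procedure: pays off mainly when several statements can be
        // batched into a single database round trip (e.g. a large insert).
        public static void insertWorkOrder(Connection con, int customerId, String description)
                throws SQLException {
            CallableStatement cs = con.prepareCall("{call insert_work_order(?, ?)}");
            cs.setInt(1, customerId);
            cs.setString(2, description);
            cs.execute();
            cs.close();
        }
    }

    Either way the statement ends up precompiled; the stored procedure's real win is batching work into one round trip, not the compilation.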
    The Oracle version had more indices on key tables; thus slowing its insert speed. Also, the MS version had a clustered index on custid/date -- another major cheat since it puts the data in order on the disk. Not sure if MS identity columns are faster than Oracle sequences.
     
    JD, your colleague is grasping at straws here. Each team did their own database tuning, and in both cases the database was not a bottleneck in the test. Repeat: the perf test was not limited by the database. And besides, .NET/SQL Server and WebSphere/Oracle split the perf tests: each won 2 of the 4 tests. It suggests that both apps were well tuned.

    Anyway, perf is not the only, maybe not even the biggest, issue in this study, depending on your PoV. It's much broader than a perf test.

    as always,
    -Dino
    [Microsoft]

    (have I worn out my welcome yet?)
  294. Some .NET cheats got through! - NOT[ Go to top ]

    Thanks for your comprehensive reply; personally, I'm keeping out of this one. I know it's unlike me, but firstly I hate benchmarks, and secondly comparing IBM's J2EE to .NET is like comparing apples and prawns: they are two different beasts, neither of which I like or use much anyway (IBM J2EE and .NET, that is; I love apples and prawns, just not together, of course :-)).

    Dino, I look forward to engaging with you on a more worthwhile topic; meanwhile I'll leave this thread to those who care to continue it.

    I have forwarded your reply and will suggest my colleague finds his TSS login to participate directly.

    Don't forget: if perf/speed is all you're after, write it in assembler!

    -John-
  295. ETrade Financial Corp. and other online stock brokerages are at war over how fast they can execute a trade. Ameritrade Holding Corp. kicked off the execution promise game in 2001 by guaranteeing a turnaround of 10 seconds or less. ETrade countered with a nine-second guarantee, and since then it has all been downhill.

    Last year, Ameritrade promised to complete trades within five seconds, and in March, ETrade lowered its pledge to two seconds.

    ETrade declines to say how many of its trades fail to complete in two seconds, but a spokeswoman says it will forgo less than $1 million in trading commissions this year as a result of the guarantee. The company collected $191 million in commissions in the first six months of this year.

    ETrade has 3.5 million customer accounts and completes more than 100,000 trades each day, on average. The company says most of its trades are completed in less than a second, a remarkable achievement for a complex operation that spans multiple computers, routers and applications, not all of them controlled by ETrade.

    "ETrade has a sophisticated infrastructure," says Tim Carpenter, senior brokerage analyst at GomezPro, an Internet performance management company recently acquired by Watchfire Corp. in Waltham, Mass. "They are not bogged down by trying to integrate with a legacy infrastructure, like an older broker that moved from the off-line world to the online world. They didn't have to make concessions."

    Excess Capacity Cheap

    The company has gone through several major IT overhauls since 1996, when it began Internet-based trading. The most important transition, one that is still going on, has been a cutover from proprietary products to open-source components—principally Apache Web server software, Tomcat application server software and the Linux operating system. The move from Sun Solaris/Sparc to Linux/Intel at the top two layers of the three-tiered architecture, in particular, made for a whole new economic ballgame for ETrade.


    In 2001, the cost of proprietary systems was rising while the cost of open systems running on Intel processors was falling, says Joshua S. Levine, ETrade's chief technology and administrative officer. "And it wasn't just the price of the chip; it was memory, peripherals, everything. We said, 'If we stay on a proprietary architecture, we'll always be trapped by the rising price of proprietary vendor equipment.'"

    With the savings from open-source products came the luxury of being able to overprovision by buying spare capacity for every conceivable hardware failure and spike in demand. "When you are buying servers so cheaply, the concept of 'capacity' drops away," Levine says. "We buy a machine for $3,900, and we don't even buy maintenance on it. When it fails, we throw it away."

    ETrade's three-tiered architecture includes a server farm, which it continues to build out horizontally, with 700 Linux/Intel boxes running most of the time at some small fraction of their capacity. Making that work, and making it easy to add more boxes, requires software that can move the workload around and the ability to swap out failed servers, in real time. ETrade uses load-balancing software from Resonate Inc. but is migrating to a similar tool from NetScaler Inc. that it says is faster.

    Those products load-balance within an application, so if trade routing needs more horsepower or encounters a hardware failure, for example, that's addressed automatically and immediately. ETrade is now looking at ways to create a virtualized server farm for use by multiple applications. "How do you jump-start or repurpose a server?" says Hartley Caldwell, vice president of re-engineering. "We are thinking about that in the future."

    The Right Routes

    An ETrade customer may connect to the Internet over one path. But ETrade then finds the best route back to the customer using Adaptive Networking Software from San Mateo, Calif.-based RouteScience Technologies Inc., which can adjust traffic in subseconds to meet user-specified policies about application availability, performance and network usage. Route adjustments are made using open-standard application programming interfaces on routers and other network devices.

    "It's a common myth that you can't manage the Internet," says Lloyd Taylor, technology vice president at Keynote Systems Corp. "Using the proper measurements, and understanding how routing works and working on your providers, you can actually control a good portion of the performance."

    The move to Linux/Intel servers reduced trade times 30% and enabled the nine-second trade guarantee. Getting down to two seconds required other moves, including bringing ETrade's system for routing—moving trade orders to the proper exchange and bringing back confirmations—in-house.

    ETrade was using a third-party service, but it saw that it could improve reliability and speed by developing its own routing software. Its internal system, RoutX, eliminated route hops and optimized the software, Levine says.

    Extreme Measures

    Gomez Inc. and Keynote both monitor ETrade Internet activity continuously. Keynote sends test transaction suites every few minutes from each of 20 locations around the U.S. and reports the results to ETrade every 15 minutes.

    Says Caldwell, "Every morning we review the previous day's operations and the coming day's operations with the operational staff around the world. What was the customer's experience, how many million hits did we have, how many Web errors did we have, were they customer Web errors, did our load-balancing solution remove them in time, and so on."

    Greg Framke, an executive vice president and head of IT at ETrade, says, "We have developed a rigor around performance and availability that has really made a big difference. Measuring reliability and performance is almost a cult with us."

    At the middle layer of the three-tiered trading system, ETrade runs BEA Systems Inc.'s Tuxedo transaction manager, and that could be the next target for the company's move to open-source, Levine says. "We are very interested in Java Message Service—JMS," he says.

    The move to JMS would be straightforward, and its benefits would be primarily economic, according to Caldwell. "Tuxedo is fairly expensive on a per-CPU basis," he explains.
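    (As a rough illustration of the kind of code such a move involves, here is a minimal JMS send in Java. This is my own sketch, not ETrade code; the JNDI names, queue and payload are hypothetical.)

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class TradeRouter {
        public void submitOrder(String orderXml) throws Exception {
            InitialContext ctx = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory");
            Queue queue = (Queue) ctx.lookup("jms/OrderQueue");

            Connection connection = factory.createConnection();
            try {
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                TextMessage message = session.createTextMessage(orderXml);
                producer.send(message);   // hand the order off to the routing tier
            } finally {
                connection.close();       // closing the connection also closes its sessions
            }
        }
    }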

    While the top two tiers of the trading and routing systems run on Linux, the Sybase database layer remains on Sun Sparc systems running Solaris. A move to Sybase on Linux is possible, but that would require a significant system redesign and have an uncertain operational impact, Caldwell says.

    Rather than making an intermediate move to Sybase on Linux, Framke says ETrade could go to a grid database concept in which data is cached at various points up and down the application stack. "If I could put data closer to the user, I could serve it up faster," he says.

    That could improve availability as well. "If you look at the way we have architected our front end, it is rare—a very rare event—to have a user who couldn't on the second click get to where they were going," Framke adds. "And that's what I want in the database."
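    (To illustrate the "data closer to the user" idea in the simplest terms, here is a read-through cache sketch in Java. It is my own illustration, not ETrade code; the Quote class and the database loader are hypothetical stand-ins, and a real cache would also need expiry and invalidation.)

    import java.util.HashMap;
    import java.util.Map;

    class Quote {
        private final String symbol;
        Quote(String symbol) { this.symbol = symbol; }
        String getSymbol() { return symbol; }
    }

    class QuoteCache {
        private final Map cache = new HashMap();      // kept pre-generics for simplicity

        public synchronized Quote getQuote(String symbol) {
            Quote quote = (Quote) cache.get(symbol);
            if (quote == null) {                      // cache miss: fall back to the database
                quote = loadQuoteFromDatabase(symbol);
                cache.put(symbol, quote);             // next request is served from memory
            }
            return quote;
        }

        private Quote loadQuoteFromDatabase(String symbol) {
            return new Quote(symbol);                 // placeholder for the real data-access call
        }
    }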

    Watchfire's Carpenter says that many ETrade customers are not concerned about a few seconds of trade execution time. But for "active traders"—those making hundreds or thousands of trades a year—"execution quality and speed are of the utmost importance."

    Says Levine, "What I'm most proud of is that we operate at 100% every day, 24 hours a day. And it's out there for everyone to see if you have a failure. I mean, my career could end right now with a three-hour outage."

    http://computerworld.com/managementtopics/ebusiness/story/0,10801,96136,00.html
  296. Thank you for the interesting story.

    And for confirming my point. Sometimes Java is right, sometimes .NET is right, but the "Big J2EE Application Server" == slow, cumbersome, over-architected, unproductive, with lots of downtime, is never right.

    Regards
    Rolf Tollerud
  297. I would agree it's an interesting article. But as the article states, ETrade avoids all that complexity by not having legacy systems to deal with. That makes their job 1000x easier than if they had all the excess baggage most brick-and-mortar trading firms have to deal with.

    Not everyone has that luxury :). Integrating with legacy stuff is a total pain in the butt. I don't think any of my friends have ever said, "I love integrating with legacy mainframes, old B-tree databases, and a mix of all sorts of stuff." I'm going to guess ETrade doesn't do any pre-trade compliance validation; otherwise they would have a real hard time making sure they execute trades within 2 seconds. Normally, retail (i.e., individual) trading doesn't have to deal with government regulations. Well, that's not entirely true, because someone who works for a financial company has to file with the SEC that they plan to buy/sell a given security.

    Now if only some big shot at a brick-and-mortar company would write a detailed article about how EJBs are used to handle complex scenarios, it might clear up the FUD around whether complexity is the result of over-design or just a reality of the business requirements.
  298. TMC shows itself to be a Microsoft lapdog in this study. They neglected to follow their own lessons learned from a previous study that they had to retract in November 2002. In that retraction, TMC clearly states:
    "We also admit to having made an error in process judgment by inviting Microsoft to participate in this benchmark, but not inviting the J2EE vendors to participate in this benchmark. We now realize that following this procedure is critical for the J2EE community to accept benchmark results and perceive them as credible." Did they invite IBM or an IBM business partner to participate in their tests? NO. If this was a mistake in 2002, why is it not a mistake here? Relying on inexperienced developers employed by TMC to do the J2EE development had a significant impact on the value of the study's results. Why do I know they were inexperienced, despite promises to the contrary from TMC? They chose a non-IBM HTTP server, clearly against IBM recommendations; they used RRD instead of WSAD; they mysteriously chose POJOs instead of EJBs; and the .NET team had more experience with their tools than the Java folks had with theirs (p. 37). It is unforgivable that TMC would repeat the mistakes of a previous study. Pathetic.
  299. Check out this IBM Response to the TMC Study:


    http://www.sys-con.com/story/?storyid=46828


    Mike.
  300. Their developers wrote their own POJO persistence framework. How ridiculous is that, when Hibernate has been the most popular POJO persistence framework for a while? :-)
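    For comparison, persisting a POJO with Hibernate takes very little code. A minimal sketch, assuming Hibernate 2.x-era APIs, a hypothetical WorkOrder class with a WorkOrder.hbm.xml mapping, and a hibernate.cfg.xml on the classpath (this is not code from the study):

    import net.sf.hibernate.Session;
    import net.sf.hibernate.SessionFactory;
    import net.sf.hibernate.Transaction;
    import net.sf.hibernate.cfg.Configuration;

    public class WorkOrderRepository {

        private final SessionFactory sessionFactory;

        public WorkOrderRepository() throws Exception {
            // reads hibernate.cfg.xml (dialect, connection settings, mapped classes)
            sessionFactory = new Configuration().configure().buildSessionFactory();
        }

        public void save(WorkOrder order) throws Exception {
            Session session = sessionFactory.openSession();
            Transaction tx = session.beginTransaction();
            try {
                session.save(order);   // Hibernate generates the INSERT from the mapping
                tx.commit();
            } catch (Exception e) {
                tx.rollback();
                throw e;
            } finally {
                session.close();
            }
        }
    }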

    You also have to wonder why they didn't go nuts with stored procedures in the Java implementation. In the previous Pet Store contest, that's mainly how MS got their "speed", at the cost of unmaintainability ;-)

    TMC should publish the names of these developers so companies know not to hire them...

     ken