Discussions

News: TMC Releases Performance Case Study Results

  1. TMC Releases Performance Case Study Results (150 messages)

    The Middleware Company has released a J2EE and .NET performance case study, the latest study based on their Application Server Baseline Spec (an MDA productivity study was released a few weeks ago). Except for the web services test, the two platforms came out mostly equal in performance.

    How did TMC get to the results? Starting in February, TMC invited experts to define a spec that case studies would implement. TMC then published the spec and opened it for public review in May.

    Last week, the Productivity Study Results were unveiled and now the Performance Study Results have been published.

    TMC invited all of the vendors whose products are in the study to be involved in the tests. Some of them accepted. Some did not.

    The study goes into a lot of detail on what was tuned and how it was done. This information is invaluable as it shows how to tune this application in J2EE (and .NET).

    The following sections have the most relevant details related to tuning J2EE:

    - Tuning the Java Virtual Machine
    - Tuning the app server's runtime settings
    - Tuning the application's deployment characteristics
    - Experiences with EJB Container Managed Persistence 2.0

    The study also explains why app servers and .NET can behave differently in the PetStore application scenario. For more details, see the sections "J2EE appserver X much better than J2EE appserver Y - an anomaly?" and "Technical theories on why app server X outperformed app server Y".

    The case study is divided into three testing areas:

    Web Application Test: This tested performance hosting a typical Web application with steadily increasing user loads. This test used two different databases: Oracle 9i and Microsoft SQL Server 2000.

    24 Hour Reliability Test: This tested the sustainable performance and reliability of the platforms over a 24-hour period as transaction-heavy clients are run against the Web application under a constant, peak-throughput user load. This test used a different database for each code base, chosen based upon the application server's performance data. The J2EE code bases used Oracle 9i. The .NET code base used Microsoft SQL Server.

    Web Services Test: This tested the performance of the application server hosting a Web service accessible over SOAP 1.1.

    Results:

    View the results of the study paper:
    http://www.middleware-company.com/casestudy/tmc-performance-study.shtml

    View "Why this study is different to the one in October 2002"
    http://www.middleware-company.com/casestudy/comparing-oct-2002-and-jul-2003.pdf

    View the Case Study home page:
    http://www.middleware-company.com/casestudy

    View the Application Server Platform Baseline Specification:
    http://www.middleware-company.com/casestudy/specification.pdf

    View interviews with TMC staff:
    - Salil Deshpande discussing the baseline spec, the expert group, and the concept of 'case studies' as opposed to benchmarks.
    - Will Edwards discussing the tech choices in the various tiers, comparing the TMC mPetstore to the JPetstore, and talking about why EJBs were used.

    View coverage and complaints from the original PetStore:
    http://www.middleware-company.com/casestudy/coverage.shtml

    Threaded Messages (150)

  2. What a retreat![ Go to top ]

    Well, one year has been sufficient to go from ".NET outperforms J2EE" to reporting that J2EE and .NET are equally performant... what next? Another year from now, perhaps, the truth will be spoken: J2EE rules!
  3. Any hints?[ Go to top ]

    Would you call Apache in Oracle 9iAS "integrated" or
    "external"? I would guess BEA and 9iAS are X and Y.
    IBM was hit hard last time and, my guess, they skipped the party.

    ????

    Alex V.
  4. Application Servers[ Go to top ]

    I would have guessed X was Oracle 9iAS and Y was BEA. Oracle 9iAS/Orion always seemed much speedier to me in direct performance than WebLogic, particularly when using the built-in HTTP web server of both. Additionally, the report stated that vendor X declined to participate while vendor Y was enthusiastic. That sounds like what I would expect from Oracle and BEA, respectively.
  5. Re: Application Servers[ Go to top ]

    ..and which one was JBoss? :)
  6. Not exactly[ Go to top ]

    > I would have guessed X was Oracle 9iAS and Y was BEA.


    If my hands weren't tied, my guess is that you would be surprised by the answer.
  7. Not exactly[ Go to top ]

    > If my hands weren't tied, my guess is that you would
    > be surprised by the answer.

    Can I send you some scissors? In all seriousness, this test was so well done that knowing who X and Y were is more interesting than the J2EE versus .NET question. It would be fascinating to see the same tests done for just J2EE application servers. ECperf is rather useless due to the variations in hardware and other things. Hopefully in the future vendors will participate more actively.
  8. Not exactly (guessing game)[ Go to top ]

    > Posted By: Edgar Sánchez on July 31, 2003 @ 12:57 PM
    >
    > > I would have guessed X was Oracle 9iAS and Y was BEA.
    >
    > If my hands weren't tied, my guess is that you would be surprised by the answer.

    Vice versa? Oracle already provides strong support for the database,
    and HAS strong enthusiasm for 9iAS. Apache in 9iAS is "integrated"
    with 9iAS, but still via an Apache plug-in (my best guess).

    There are also
    Pramati (can be enthusiastic),
    Sun (hardly will go to Richmond),
    JBoss (slow according to many studies),
    Novell (????),
    IBM (last year's loser, I do not believe it is here),
    BEA (most probably X or Y).

    Alex V.
  9. Guessing game[ Go to top ]

    > Apache in 9iAS is "integrated"
    > with 9iAS, but still via an Apache plug-in (my best guess).

    WebLogic has a built-in HTTP server. And it's fast.. :) Disclaimer: this is a non-scientific opinion

    //ras
  10. Application Servers[ Go to top ]

    > I would have guessed X was Oracle 9iAS and Y was BEA.

    Reading some of the discussion of the internals of the appserver, I would say that X was BEA and Y was IBM.

    -Nick
  11. What a retreat![ Go to top ]

    For anyone looking to compare the Oct 2002 case study and this one it's probably worth following the link above:

    View "Why this study is different to the one in October 2002"
    http://www.middleware-company.com/casestudy/comparing-oct-2002-and-jul-2003.pdf

    Even though they look the same on the surface, there are enough differences in the tests to make them quite different.

    So I think that comparing the results like this might not be too valid.

    Cheers, Will
  12. What a retreat![ Go to top ]

    Political correctness is now invading the software industry. Anyone who has seriously worked with .NET and J2EE will quickly realize that .NET outperforms J2EE by a wide margin. This is political correctness gone amok.
  13. soap engine[ Go to top ]

    It would be interesting to know which Java SOAP engine was used
    in the test.

    With WebSphere at least, the bundled SOAP engine was
    Apache SOAP 2.2 with WAS 4.0 (really old)
    and Axis 1.0 with WAS 5.0 (as a tech preview only).
    The current SOAP engine from the Apache group is already Axis 1.1.

    If the engine is not that up to date, then you get bad results.

    Popular (performant?) Java SOAP engines are:
    Axis
    WASP
    GLUE

    Was any of them used in the test?
  14. soap engine[ Go to top ]

    Jian,

    As you know whenever an app server vendor bundles a third party library or product of any kind it's always hard to make sure the latest version of the app server has the latest version of the third party product.

    However, anticipating this, we got pretty expert at grafting the latest version of a third-party product onto all the app servers we were using. We'd then retest to see if we got better results than with the bundled version. This happened twice during the testing: once with an Apache SOAP version and once with Axis. Having said this, the throughput improvement we got was negligible in both cases. If it had been significant we'd have written up what we did in the case study report so that others could do the same thing.

    Cheers, Will
  15. What a retreat![ Go to top ]

    Mackie,

    I think after the October 2002 comparison of .NET and J2EE, doing it again was about the least politically correct thing we could have done. However, there were enough interesting questions from the first study that we felt it was worth it, things like:

    - Supposing distributed transactions were removed, what would the effect be?
    - Supposing we allowed more caching what would the effect be?
    - JDK 1.4 was really new for the Oct 2002 results; now it's integrated and more mature, would things be different?

    ...and that's just a few of the questions that were being asked.

    This set of tests was designed to answer some of these questions and increase the overall level of understanding of what both .NET and Java can do without playing favorites because that's what science is all about.

    Trust me, if we were into "correctness" we'd keep our mouths shut and sit on the sidelines.

    All the best, Will
  16. That's quite an extensive report, and it looks like there's quite a bit of valuable information in there for anyone who has the time to read it and analyze the findings ;-). As expected, the two platforms are very similar in performance. (It does still show an impressive lead in the Microsoft web service performance numbers, which is worth analyzing in more detail, to determine what improvements can be made in the Java-based implementations.)

    I did appreciate the disclosure section. I think in general TMC is to be commended for their efforts, and I hope this isn't the last paper we see from these tests.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  17. I've been reading the report for the last half hour and I must say, it is fabulous work. Bravo TMC.
  18. Note that not every Java-based Web services implementation comes from the J2EE vendors - there are a couple of very fast SOAP runtimes out there that perform extremely well.
  19. "Note that not every Java-based Web services implementation comes from the J2EE vendors"
    GLUE from The Mind Electric performs as well as .NET, and it is the simplest and fastest web services platform in Java. But The Mind Electric is not an app server vendor.
  20. Fooled[ Go to top ]

    So all those MS reports saying that .Net is 2x or 3x faster than Java are all lies?!?!? DAMN! I was almost believing them... :)
  21. Fooled[ Go to top ]

    Should we believe these results because we agree with them? Does that make them more true? I don't think so. Phooey.
  22. Fooled[ Go to top ]

    Jeff,

    Are you saying that the test is not a real example or something more nefarious?

    If the former; could you maybe expand a little so we can improve future tests?

    If the latter; I can only promise that they really, really are real, honest guv:)

    Cheers,
    Will
  23. Corrected values[ Go to top ]

    Hello, my 6 cents:

    ==============================================
    "Maximum load" must be taken at the point where
    HTTP response errors / timeouts exceed some limit.
    In the graphs, this is the point of FIRST
    deviation from the y = x/5 line. Thus, the real data would be
    (approximately, by visual estimation):

     Fig 1.
    servlet-X 900
    EJB-X 1300 (ap. -20%)
    NET 1550

    Fig 3:
    servlet-X 800
    EJB-X 1300
    NET 1800

    There was a statement of an error tolerance of 1% (too big for me),
    but the bars in both figures 2 and 4 ABSOLUTELY break this rule.

    Graph error% / load will reveal this....

    ==============================================
    It was noted several times (e.g. Rice Univ papers)
    that CPU usage has direct impact on performance limit and
    errors offset.

    If possible, can CPU usage vs. load be shown?
    Thank you in advance.

    ==================================================

    Response times against error% are a SIMILAR function for
    both .NET and EJB-X (see table 2, make the two plots, they overlap,
    at least in the most interesting 0.2-2% range). The servlet-X plot
    is a bit scattered, but also the same song.

    Now, taking 1% error margin we have
    NET 1440
    EJB-X 1240 (ap. -20%)
    That is very similar to visual estimation.

    ===========================================

    Conclusions:

    1.
    As per the presented results, .NET is 20% faster
    (which is still too small to consider THE difference).

    2.
    I would support "political correctness" conspiracy :-))))

    Questions:

    Microsoft personnel used smaller connection and thread
    pools going against Oracle (4 and 8) than against SQL Server (8 and 12).
    The web service results were exactly 8:12. Can somebody
    clarify that "evil" Microsoft did not use pool sizes to
    hit the "small guy" Oracle? :-)

    Thanks,

    Alex V.
  24. Corrected values[ Go to top ]

    Alex,

    > "maximum load" must be taken at point, where
    > http response errors / timeouts overpass some limit.
    > In the graphs, this is point of FIRST
    > deviation from y=x/5 line. Thus, real data would be
    > (approximately, by visual estimation):

    These graphs show throughput per second vs. user load, and I don't understand how you make the extrapolation that HTTP errors can therefore be inferred from the deviation from the y = x/5 line. Could you please explain the logic? Thanks.
     

    > There was a statement for error tolerance of 1% (too big for me),
    > but bars in both figures 2 and 4 ABSOLUTELY brake this rule.
    >
    > Graph error% / load will reveal this....

    For all the tests we achieved an error rate < 0.01%. 1% was an upper bound we set as a test rule; however, in the end, nobody reached that level.


    > ==============================================
    > It was noted several times (e.g. Rice Univ papers)
    > that CPU usage has direct impact on performance limit and
    > errors offset.
    >
    > If possible, can CPU usage vs. load be shown?
    > Thank you in advance.
    >
    > ==================================================

    In all cases the limiting factor of the test was the CPU utilization. This utilization was directly proportional to the throughput and the point at which 100% CPU utilization was reached was the point at which the slope of the throughput line became <= 0.
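    Will's rule of thumb here (saturation at the point where the throughput curve's slope stops being positive) can be sketched in a few lines of Java; the load/throughput numbers below are invented for illustration and are not taken from the study:

```java
// Toy illustration: find the user load at which throughput saturates,
// i.e. the first point where the throughput curve's slope becomes <= 0.
public class Saturation {
    static int saturationLoad(int[] loads, double[] throughput) {
        for (int i = 1; i < throughput.length; i++) {
            if (throughput[i] <= throughput[i - 1]) {
                return loads[i - 1]; // slope <= 0 between i-1 and i
            }
        }
        return loads[loads.length - 1]; // never flattened in the sampled range
    }

    public static void main(String[] args) {
        // Invented samples: throughput climbs, then flattens around load 4000.
        int[] loads = {1000, 2000, 3000, 4000, 5000};
        double[] tps = {200.0, 400.0, 590.0, 600.0, 595.0};
        System.out.println("saturation at load " + saturationLoad(loads, tps));
    }
}
```

    In the study's terms, that flattening point coincided with 100% CPU utilization.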


    > Response times against error% are SIMILAR function for
    > both NET and EJB-X (see table 2, make two plots, they overlap,
    > at least in most interesting 0.2-2% range). Servlet-X plot
    > is a bit scattered, but also the same song.

    Again, I do not understand the way in which you are deducing error% could you explain?


    > 1.
    > As per presented results, NET is 20% faster.
    > (what is still small to be consider as THE difference)

    I don’t agree with the deduction from the data, but you should try and convince me, I just don’t see it right now.


    > Microsoft personnel used smaller connection and thread
    > pools going against Oracle (4 and 8) than SQL Server (8 and 12).
    > Web service results were exactly 8:12. Can somebody
    > clarify that "evil" Microsoft did not used pool size to
    > hit "small guy" Oracle ? :-)

    This was purely empirical. We all had many days to try a number of different configurations, we watched Microsoft do this and they watched us. These were the pool sizes at which they ran best against Oracle and SQL Server. Don’t forget that in .NET they are using completely different managed providers for the two databases and the providers just behave differently.
  25. JPetstore vs. mPetstore[ Go to top ]

    A very nice report, thanx!

    It is quite interesting to see that mPetstore (with EJB *and* CMP entity beans) outperforms JPetstore (without EJB). As I understand the report, this is mostly because of Struts, which is used by JPetstore (reflection)?

    Gee... I would like to hear more about this... It's a pity that TMC did not have enough time to convert those Struts property tags into static JSP expressions.

    So, again dynamic vs. static for performance and the winner is... ;-)

    Any comments on this?

    Regards,
    Lofi.
  26. JPetstore vs. mPetstore[ Go to top ]

    Lofi,

    I know this doesn't entirely answer your question but one interesting side observation is that if you look at the web service results (page 50) you'll notice that the JPetstore approach beat the EJB/CMP approach (very slightly).

    This was kind of interesting to me because the web service front end on them both was almost identical, and of course Struts was now factored out. So the only real difference was that one approach used EJBs and the other almost direct JDBC.

    Cheers, Will
  27. JPetstore vs. mPetstore[ Go to top ]

    Yes, you're right, Will... this is really interesting ;-) So Struts really is the bottleneck...

    Regards,
    Lofi.
  28. JPetstore vs. mPetstore[ Go to top ]

    Yes, Struts was a bottleneck, but drilling down deeper than that, what hurt Struts was the Java reflection API. We found it to be the cause of a number of slowdowns in all the applications, and one of the things we did was remove it wherever it was not absolutely needed.

    Now having said that, this is a really stressed environment. Sometimes I look at the numbers and lose sight of the fact that the Struts framework app ran > 1,000 hits per second, about 3.5 million per hour, on a reasonably sized piece of hardware, but by no means the largest possible. That's usually more than enough for most sites.

    Cheers, Will
  29. JPetstore vs. mPetstore[ Go to top ]

    Will

    Could you be explicit about the places where you cut Struts features? Is this documented in the study PDF? Could you say which parts of Struts can be a bottleneck, such as the logic tag libs, validation, declarative exception handling, etc.?

    Emerson Cargnin
    TRE-SC - Regional Electoral Court
    Brazil
  30. JPetstore vs. mPetstore[ Go to top ]

    Emerson,

    Sorry to be tardy in replying, I've been looking at, and trying to reply to, the thread pretty much constantly over the last couple of days and other work is starting to pile up.

    I, and the other guys on the team, will look back at our notes and see if we can come up with an answer for you, but I wanted to give everyone a heads-up that it might not be an immediate response.

    Thanks a bunch,
    Will

     
    > Could you be explicit about the places where you cut Struts features? Is this documented in the study PDF? Could you say which parts of Struts can be a bottleneck, such as the logic tag libs, validation, declarative exception handling, etc.?
    >
    > Emerson Cargnin
    > TRE-SC - Regional Electoral Court
    > Brazil
  31. JPetstore vs. mPetstore[ Go to top ]

    > Could you be explicit about the places where you cut Struts features?
    > Is this documented in the study PDF? Could you say which parts of Struts can
    > be a bottleneck, such as the logic tag libs, validation, declarative exception
    > handling, etc.?

    Emerson,

    The JPetstore application didn't use many of the more complex features of Struts so I can't paint a really broad picture of overall Struts performance. In our test app, we measured low-level performance using Indepth for J2EE and found that the property accessor tags were one of the larger front-end time contributors. In particular, pages where properties were accessed repeatedly inside of a loop (like search results) had the majority of their load times come from the reflective property accessor tags.

    As for the logic tags, validation, and declarative exception handling, the only thing that the app really used was a looping tag, which didn't have much overhead associated with it.

    James Kao
    The Middleware Company
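    As a rough illustration of the cost James describes, here is a minimal Java sketch comparing direct getter calls in a loop against per-use reflective lookup and invocation. The `Item` bean and the iteration counts are hypothetical, not from the study code:

```java
import java.lang.reflect.Method;

// Hypothetical bean standing in for a page-model object.
public class ReflectionCost {
    public static class Item {
        private final String name;
        public Item(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) throws Exception {
        Item[] results = new Item[1000];
        for (int i = 0; i < results.length; i++) results[i] = new Item("item" + i);

        // Direct getter calls: what a compiled JSP expression does.
        long t0 = System.nanoTime();
        int direct = 0;
        for (int pass = 0; pass < 300; pass++)
            for (Item it : results) direct += it.getName().length();
        long directNs = System.nanoTime() - t0;

        // Reflective access per iteration (lookup + invoke): roughly what an
        // uncached property-accessor tag does on every use.
        long t1 = System.nanoTime();
        int reflective = 0;
        for (int pass = 0; pass < 300; pass++) {
            for (Item it : results) {
                Method getter = Item.class.getMethod("getName");
                reflective += ((String) getter.invoke(it)).length();
            }
        }
        long reflectiveNs = System.nanoTime() - t1;

        System.out.println("same totals: " + (direct == reflective));
        System.out.println("reflection slower: " + (reflectiveNs > directNs));
    }
}
```

    Both loops compute the same total; only the access mechanism differs, which is the point James makes about search-results pages.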
  32. JPetstore vs. mPetstore[ Go to top ]

    James

    I ask that because last year, when working at a bank, we did some benchmarks between Struts logic tags and Java scriptlets and the difference was very big (I don't remember the exact numbers). And we didn't take the bean population time into account.

    >> Could you explicit the places in struts that where you cut struts features??
    >> is this documented at the study pdf? Could you say which parts of struts can
    >> be a bottleneck, as logic tag libs, validation, declarative exception
    >> handling, etc... ?

    > Emerson,
    >
    > The JPetstore application didn't use many of the more complex features of Struts so I can't paint a really broad picture of overall Struts performance. In our test app, we measured low-level performance using Indepth for J2EE and found that the property accessor tags were one of the larger front-end time contributors. In particular, pages where properties were accessed repeatedly inside of a loop (like search results) had the majority of their load times come from the reflective property accessor tags.
    >
    > As for the logic tags, validation, and declarative exception handling, the only thing that the app really used was a looping tag, which didn't have much overhead associated with it.
    >
    > James Kao
    > The Middleware Company
  33. JPetstore vs. mPetstore[ Go to top ]

    Yes,

    Struts performance S(TR)UCKS!
  34. With Axis, the web service is a special web application.
    If the study shows that PetValue applications run similarly on both
    .NET and J2EE, and hence suggests J2EE is just as good with web applications,
    how come the web service test is so bad?

    A web service is a special web application, at least because
    it does a lot of converting from SOAP messages to
    Java objects and back, so there is a lot of serialization going on.
    But doesn't I/O on the two platforms have similar speeds, just plain
    sockets under the cover?

    .NET's default SOAP format is doc/literal, while Axis's default is SOAP encoding.
    Would that be the cause of the difference? Are attachments used? If so,
    .NET can get the upper hand if they use DIME instead of SwA, as it is
    a more efficient encoding.
  35. Are soap message the same[ Go to top ]

    Are the soap messages exchanged on both platforms
    exactly the same?
  36. Are soap message the same[ Go to top ]

    Jian,

    Yes they were, if you look in the case study specification the message format is in there.

    Cheers, Will
  37. Do you know if The Middleware Company has plans to benchmark Java across different architectures?

    Integration side
    EJB x JDBC DAO x JDO x Hibernate

    Web
    JSP x Struts x Other Framework

    GUI
    Swing x SWT

    ...
  38. Erik, and others who asked about more case studies in the future:

    As Will implied in one of his messages, these case studies can get to be large investments of time and other resources, and we cannot afford to do these completely by ourselves. So we need (1) the community to be interested in certain scenarios, and (2) we need at least a couple of key vendors (hopefully major vendors for that scenario) to accept our invitation to play.

    We are definitely going to do more case studies, some of which will definitely address performance, and will address other Java architectures. But as I write this, we are just catching our breath from finishing this one, and do not have concrete plans on exactly what the next Performance case study will involve.

    Please do note that there is an Interoperability case study coming up, and the results of the MDA Productivity case study were posted last week.

    Since we did our first case study last year we have been contacted by a number of vendors suggesting/requesting case studies of various scenarios. We have to match those suggestions with suggestions like yours about what the community would like to see, keeping in mind that we can't do everything.

    We'll figure this out shortly and update you all here as we have been doing.

    Sincere thanks for all your comments and support. You make our jobs meaningful and enjoyable.

    Salil Deshpande
    The Middleware Company

    p.s. I'm pretty sure we won't do Swing vs. SWT.
  39. Great Job[ Go to top ]

    Salil, this new benchmark is much, much better than the original. Great job, thank you.
  40. Another Data Point[ Go to top ]

    Pure speculation, but it is interesting that the new SPECjAppServer numbers show a similar discrepancy between WebLogic and WebSphere on almost the exact same hardware.

    http://www.spec.org/jAppServer2002/results/jAppServer2002.html

    Eric
    BEA Systems
  41. On the web service side there are just a whole lot fewer things you can tune, and not as much room for us to change the out-of-the-box behaviour.

    My opinion, formed while working on this, is that Microsoft has been putting a lot more time into web services, and for longer, than the Java community. If you look at their progress on things like WS-Security and WS-Transaction, they had/have releases before the J2EE vendors.

    So, with J2EE vendors racing to play catch-up on functionality all the time, they just haven't had time to catch a breath and seriously look at how the libraries perform, yet.

    Just my 2p worth.
    Will
  42. web service[ Go to top ]

    Will,

    Does TMC have plans to do more testing
    of different Java-based SOAP implementations?

    I think you are right, the J2EE camp needs time to catch up on web service
    performance. Hopefully things will get better once J2EE 1.4 is implemented by the
    app server vendors.

    Jian
  43. web service[ Go to top ]

    Jian,

    We're sort of spoiled for choice right now on what to do next. There are a few things that form good follow-on projects from this one, and web services are right up there. One of the big problems, as with everything else, is funding: the big lab setups are really expensive, so we need partners to be able to afford it. My hope is that come the autumn we'll look into doing some follow-on work, but right now that's just speculation.

    Thanks a lot for the input!
    Will
  44. > .NET's default SOAP format is doc/literal, while Axis's default is SOAP encoding.
    > Would that be the cause of the difference? Are attachments used? If so,
    > .NET can get the upper hand if they use DIME instead of SwA, as it is
    > a more efficient encoding.

    Nope, not at all. Any encoding will yield roughly the same result. .NET objects are "annotated objects": all de/serialization is just annotations, so when you instantiate one you get a lightweight object. Unlike in .NET, in Java web service toolkits de/serialization is not part of the objects; most probably the toolkit is using reflection, which takes a toll on performance.

    Nick Laborera
  45. JPetstore vs. mPetstore[ Go to top ]

    Lofi,

    Yes, the cost of Java reflection is absolutely horrible. It's a real shame too, because the frameworks that can be built around reflection are really nice (Struts, Hibernate, Axis etc.). Luckily, reflection is fast enough for 90% of the systems out there. Note to JCP/Sun: We need faster reflection and REAL (compiled) properties (a la Delphi, C# etc.).

    Another big performance difference is in that EJBs integrate into the app server much nicer than a single web component (web application) can. With EJBs, the app server "knows" more about the application than with a simple web app. To the app server, a simple web app like JPetStore is just a black box that does everything (presentation to persistence). It sends requests in, hopes for the best, and waits for the response. With the application componentized (not necessarily distributed) like mPetStore, the app server can do much more to optimize itself, not only during deploy time, but it can also react more intelligently to load changes at runtime. A good analogy for this is a CPU. Consider the differences between the 8086 and the P4. Intel didn't simply build a faster execution unit, they built different execution units and sub-components for different types of work: integer units, floating point units, caches, pipelines, schedulers etc. Ever wonder why an old Pentium 233 was generally faster than a 486/233? Architecture!

    A very smart architect once told me this, and I disagreed. (Partha - I owe you a beer!) ;-)

    That said, is the performance gain worth the extra development cost? That's something every architect/tech lead needs to ask in each case.

    Cheers,

    Clinton
  46. JPetstore vs. mPetstore[ Go to top ]

    Thanx Clinton, for the nice explanation! So static (compiled) code and component-based design can increase performance. This is wonderful!

    Anyway, you've done a great job with JPetstore.

    Regards,
    Lofi.
  47. JPetstore vs. mPetstore[ Go to top ]

    Lofi,

    >
    > Yes, the cost of Java reflection is absolutely horrible. It's a real shame too, because the frameworks that can be built around reflection are really nice (Struts, Hibernate, Axis etc.).

    That is not true for this kind of use case; reflection overhead is not very "horrible" in Struts or Hibernate. Hibernate uses dynamic code generation
    to optimize reflection, but reflection's weight is very small in dynamic persistence anyway. Struts's use of reflection is very trivial too: it populates a few properties in most use cases. There is more overhead in the String -> byte[] converters used to output the generated content.
     The main overhead in dynamic code is method lookup; it is cached in "dynamic" frameworks and so is not a problem for server applications.
     The most expensive operations can be optimized with dynamic code generation, but we need to measure the weight of the "slow" code before blaming reflection.
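    The cached method lookup described above can be sketched as follows. The class and helper names are illustrative, not from Struts or Hibernate; real frameworks add type conversion, security checks, and smarter cache keys:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a Method-lookup cache: resolve a property name to its getter
// once per class, then reuse the Method object on every later access.
public class PropertyCache {
    private static final Map<String, Method> CACHE = new ConcurrentHashMap<>();

    static Object getProperty(Object bean, String property) throws Exception {
        String key = bean.getClass().getName() + "#" + property;
        Method getter = CACHE.get(key);
        if (getter == null) {
            String name = "get" + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            getter = bean.getClass().getMethod(name); // the expensive lookup
            CACHE.put(key, getter);
        }
        return getter.invoke(bean); // invoke itself is comparatively cheap
    }

    public static class Fish {
        public String getName() { return "goldfish"; }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(getProperty(new Fish(), "name")); // cache miss: looks up getName()
        System.out.println(getProperty(new Fish(), "name")); // cache hit: reuses the Method
    }
}
```

    This is why the poster argues the lookup cost is a one-off for long-running server applications.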
  48. JPetstore vs. mPetstore[ Go to top ]

    > It is not true for this kind of use cases, reflection overhead is not very "horrible" in Struts or Hibernate, Hibernate uses dynamic code generation
    > to optimize reflection, but reflection weight is very small in dynamic persistence. Struts reflection is very trivial too, it populates a few properties in the most of use cases, It is more overhead in String -> byte[] converters to output generated content.

    Struts lacks performance, probably due to reflection. Struts killed the performance of the last project I worked on, so I won't use Struts in my next web project.
  49. .Net will probably be faster[ Go to top ]

    .NET, still in version 1.1, equals J2EE in performance and promises to be much faster in the next versions (see the Microsoft roadmap), so J2EE has much to do. Time to stop using reflection.
  50. doesn't matter[ Go to top ]

    Erik: .NET, still in version 1.1, equals J2EE in performance and promises to be much faster in the next versions (see the Microsoft roadmap), so J2EE has much to do. Time to stop using reflection.

    I disagree.

    Who cares if J2EE is a little faster, or .NET is a little faster? (Besides this Mackie fellow.)

    If reflection makes the software more reliable and easier to build and costs a couple of percent performance, that should be more than acceptable for most (but not all) applications.

    I've been using Java since '96 and .NET since it was in beta and they're both plenty fast now and (for the most part) they both work pretty well. I like Java better for a lot of reasons, including that our software runs on servers (most of which are not Windows), but if I were building Windows GUI apps, I'd consider using .NET.

    OTOH - it is true that without Microsoft making performance gains, Sun wouldn't have improved their JVM performance, and it's true that the app server vendors woke up too after the previous TMC benchmarks, so it's not all bad ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  51. doesn't matter[ Go to top ]

    Erik: .NET, still in version 1.1, is equal to J2EE in performance and promises to be much faster in the next versions (see the Microsoft roadmap), so J2EE has much to do. Time to stop using reflection

    > I disagree.

    I agree that you disagree. My statement "Time to stop using reflection" is too generic.

    > If reflection makes the software more reliable and easier to build and costs a couple of percent performance, that should be more than acceptable for most (but not all) applications.

    A couple of percent of performance is acceptable, but in many cases that's not all it costs.
    I'd rather use bytecode manipulation than reflection when it's possible.
  52. doesn't matter.. mostly..[ Go to top ]

    I agree.. One point that is sort of confusing (in a way) is that this was actually a very controlled test. Remember that in J2EE we have the *freedom and flexibility* to run on many different platforms, in different JVMs, and so forth. So this was run on Windows, but it could run on Linux with IBM's JDK.. or with BEA's JDK.. or on a Sun.. or.. I would hazard a guess that picking and choosing the platform and JVM could increase performance numbers with no real code changes, if you were worried about performance.

    I do agree that the JVM and Server container vendors are paying more attention to speed than they were. One thing that this test doesn't show very well is using so many of the other components available. For example, what about a test with Jetty and Hibernate.. or Resin.. or using Glue or Axis.. etc. My point being that a wonderful advantage of Java is to be able to pick and choose the components, servers, infrastructure, JVM, server platform, environment that you want to use. Some will be faster.. some will be easier to use.. some will have other advantages to you as a business, developer, etc. These features and advantages will never show up on a performance chart.
  53. Linux vs Win2K3[ Go to top ]


    > but could run on Linux with IBM's JDK.. or with BEA's JDK.. or on a Sun.. or.. I would hazard a guess that picking and choosing the platform and JVM could increase performance numbers with no real code changes, if you were worried about performance.

    Why do you think running on Linux would be faster? TMC said they initially benchmarked 4 different app servers on Win2K3 and Linux. They then picked the 2 fastest combinations to tune further. In both cases, the OS was Win2K3.

    If you're running on Win2K3, why would you spend more $$$ (or time and effort) on a J2EE app server?

    Seeing Linux-based numbers would have been fun. My guess is that they weren't pretty.
  54. Linux vs Win2K3[ Go to top ]

    Juki: If you're running on Win2K3, why would you spend more $$$ (or time and effort) on a J2EE app server?

    Obviously, that is a question that Microsoft would like to emphasize, particularly in the SMB sector. Windows Server is the modern-day AS/400: plus a GUI, minus the reliability, and at a fraction of the cost ;-). In other words, you get quite a bit "out of the box" with Windows for doing file service, web service, etc. Years ago, IBM minted money by selling AS/400s into the SMB market, and Microsoft does the same today with Windows. It's a good approach.

    The challenge for Microsoft is that on one side Windows (and to some extent, the Intel platform) is not a fully accepted server platform for security and reliability reasons, and on the other side, for anyone accepting the Intel platform, there's a free alternative (Linux) that typically comes with a lot more software (e.g. a free database) and is a lot more secure and reliable. It's what we call the pinata effect, where a company gets stuck between a low end that's coming up and a high end that they can't penetrate quickly enough. (All that said, most companies would love to be "stuck" in Microsoft's position ;-)

    Juki: Seeing Linux-based numbers would have been fun. My guess is that they weren't pretty.

    In our tests (which are all Java and mainly stress CPU, memory and network IO), Linux is anywhere from 15% slower to maybe 3% faster than Windows, but generally a little slower (maybe 5%).

    Windows has a much different (and often better) way of scheduling threads, for example. (There are implementations for Linux similar to what Windows does, but you typically have to install and build them yourself, and they were still very immature last time I checked.)

    Windows seems to have better network IO, probably because some of the drivers were better.

    However, Linux is much more solid, even when it gets screwed up it's still more solid. Windows (and in fairness I should point out that these tests were NOT on Windows 2003) gets sluggish and confused after it experiences heavy or sustained load. While we have almost never managed to crash Windows doing load tests with Java-based applications, under load we do see some strange (unexplainable) network errors and things just start breaking and Dr. Watsoning and Event Logging and magically stopping and/or disappearing. Frankly, Windows doesn't give me the warm fuzzies. I'd love to re-test on Windows 2003 and be able to eat my words, but until then ....

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  55. iSeries (AS/400)[ Go to top ]

    Cameron,

    I haven't made it through all of the docs yet. I saw where they listed the hardware they ran this on; does the report anywhere list an approximate cost for this hardware? I would like to know roughly which current iSeries (AS/400) model(s) it would match up against in terms of cost.

    I would like to see these tests performed on an iSeries. IBM has a lot of interesting technology in the JVM for the iSeries and I would like to see how it handles these loads. I am guessing that cost wise, this test would be in the ballpark of a model 825. If you factored in the costs of the database and application server, you might even be able to push it up to a model 870.

    Mark
  56. Linux vs Win2K3[ Go to top ]

    Interesting perspective. Thanks for sharing.

    We have done a pretty fair amount of load/stress testing (using managed code) with Win2k3. Compared even with Win2k, we've found it to be much more stable and faster... Clearly, MSFT has put a lot of effort into this release.

    And free options for web services support on Java are really limited. Functionally, Axis is just fine. Performance-wise, it's awful.

    Yes... Linux can be free, but can easily morph into something awfully expensive. 3rd-party management/monitoring software can really put a dent in your wallet compared with Windows. (Nagios and Big Brother just don't get it done for us. We tried very hard to be happy with it, but in the end, we just couldn't buy into it.)

    And if you want to run WebLogic on Red Hat, you have to shell out thousands for RH Advanced Server because WebLogic isn't supported on plain ol' Red Hat.
  57. Linux vs Win2K3[ Go to top ]

    Juki: We have done a pretty fair amount of load/stress testing (using managed code) with Win2k3. Compared even with Win2k, we've found it to be much more stable and faster... Clearly, MSFT has put a lot of effort into this release.

    That's why I wanted to be careful to exclude W03 from my conclusions. If it's improved as much as you suggest, I'll gladly recommend it to our customers currently running W2K.

    Juki: Yes... Linux can be free, but can easily morph into something awfully expensive. 3rd-party management/monitoring software can really put a dent in your wallet compared with Windows.

    No kidding. By the time you get commercial software for management and backup, Windows and Linux both run at least $20,000 a box. It's unbelievable. And if you want Microsoft SQL Server, you're quickly pushing the $50,000 a box envelope (does anyone still run the non-enterprise version?) However, it's all overkill for most servers.

    I'll give you a quick example. Some of our customers are setting up compute grids. These are all x86 based because Intel and AMD absolutely kill IBM/Sun/etc. on raw single-CPU performance (and obviously on performance/$). So they can put ~80 blades in a rack, and each blade can have 2x CPUs, so about 160 CPUs per rack. Their hardware cost per rack is under half a million dollars, but their concern is on the software side, because if they have to put $10k or $20k of software on each blade, the whole economic theory of "commodity hardware" is completely thrown off, especially in this "poor man's economy". Windows is simply a no-go on these systems -- they're all Linux because (a) it's a natural fit (simplicity of full console maintainability) and (b) it automatically chops $1K of software cost per blade. There's a ton of relatively good (but limited functionality) open source software for managing large numbers of Linux nodes. Since these are compute nodes, they have no "local data" or configuration to back up, so if there is a failure, they just replace the blade with a cold spare. The end result is not zero admin cost per blade, but it's pretty close. The loss of a blade is only noticed by a daily cron job (or similar). The grid itself appears to be functioning no differently, although it may run a minuscule amount slower without the dead blade's horsepower. Some of these blades have no moving parts -- booting from flash and connecting to a SAN or NAS for the configuration and application images.

    Anyway, it is a funny conversation how expensive "free" Linux and "commodity" Windows can get in a hurry, especially in a carefully controlled datacenter.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  58. JPetstore vs. mPetstore[ Go to top ]

    I am going to have to call BS on this, having experience with Struts under large concurrent user loads. Struts had 0 impact on performance; it was all in the DAO.
    .V
  59. JPetstore vs. mPetstore[ Go to top ]

    > I am going to have to call BS on this, having experience
    > with Struts under large concurrent user loads. Struts had 0 impact on
    > performance; it was all in the DAO.
    > .V

    Vic,

    The performance of an application in a test environment is heavily dependent upon the specifics of both the application and the environment. If database calls are the most intensive portion of an application, then the time spent in the JVM will be concentrated in the data access layer. However, in the Petstore application, database access was extremely fast and most of the time spent in the application was in generating HTML and pushing I/O to/from the browser.

    This creates a system where the overhead incurred by reflection manifests itself in a particularly acute manner, since delays introduced by all other application functions have been minimized. This is also why there was such a large difference in performance between the two application servers. In an environment where the database is the prime latency contributor, all test variations will tend towards the same results, since all test variations still use the same database. However, in the case of the Petstore application, the database was highly optimized, and caching code further limited the impact of data access. As a result, CPU and I/O were by far the dominant limiting factors for application performance. This caused performance facets like reflection latency and app server I/O handling to express themselves much more strongly than they might in a more database-intensive app.

    James Kao
    The Middleware Company
  60. j2se 1.5 and j2ee 1.4[ Go to top ]

    Hi guys
    The Microsoft approach is to build a best-of-breed application
    and bundle it with additional software,
    like pb and Crystal Reports bundled with VB6.
    But this time it won't work out as well, because the best of breed is not the
    software, it's the people who develop it: the open source community.
    And the above performance comparison of J2EE 1.3 and J2SE 1.4
    with web-services-based .NET won't mean much until J2EE 1.4 and J2SE 1.5 are released.
  61. JPetstore vs. mPetstore[ Go to top ]

    James,
    My reply was to Eric in the thread.
    I think your group did a GREAT job.

    However... it is not really realistic that a DB is so cached for any production app I know. Realistically, there is usually a lot of CRUD that invalidates the cache, and people tend to have joins that take 5 tables and lots of rows. So the DB model design is the limiting factor that I have seen.
    If the app server was ever the slow part, instead of doing 1 week of consulting, it is usually cheaper to buy another app server (say Resin or JRockit plus a NewISys 2100 for less than $5K) and solve the problem with muscle.
    (I am also sure JRockit (and/or Resin) would make it much faster; I have seen it.)

    I could not tell what DAO you used in the glance of the report?

    (but I will read it word for word this weekend)

    important:
    **********************************
    Great Job Middle ware
    **********************************
    Report seems technical, balanced, complete, objective. It shows the effort. The original report was very good, and this is a big jump; impressive. I hope you do a next one.
    And it is FREE!!!! Wow.
    Thanks to everyone involved and funding it.

    .V

    (If you should do a new version, I would be glad to contribute my P&T experience and training; I have worked on some terabyte DBs and very high concurrent loads, and I am certified in P&T.)
  62. JPetstore vs. mPetstore[ Go to top ]

    > I am going to have to call BS on this, having experience with Struts under large concurrent user loads. Struts had 0 impact on performance; it was all in the DAO.
    > .V

    I'm sorry to disagree. In fact, Struts 1.0 and 1.1 do impact performance from the user's perspective. I have numbers to prove it. This great framework, Struts, is not a choice for me because of its performance problems.
  63. JPetstore vs. mPetstore[ Go to top ]

    Erick,
    Can you share those numbers/data so I can try to reproduce them?
    I do have a test environment!

    .V
    My private e-mail is vic_cekvenich at baseBeans dot com if you do not want to post here.
  64. JPetstore vs. mPetstore[ Go to top ]

    > It is not true for this kind of use cases, reflection overhead is not very "horrible" in Struts or Hibernate. Hibernate uses dynamic code generation to optimize reflection, but the reflection weight is very small in dynamic persistence. Struts reflection is very trivial too: it populates a few properties in most use cases. There is more overhead in the String -> byte[] converters used to output generated content.

    > Struts lacks in performance, probably reflection. Struts killed the performance of the last project I worked with, so I won't use Struts in my next web project.

    I do not think it was reflection. I am not sure about the current JSP implementation status, but JSP body tags were the performance killer a year ago,
    though it depends on the JSP implementation; it was slow on Jasper.
    It is true, reflection is slow, but if you populate 10-20 bean properties per request it is not a problem.
    BTW, try to cache content in web applications and there will be no "JPetstore vs. mPetstore" performance difference.
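The content-caching suggestion above can be sketched as a minimal in-memory cache keyed by request path. This is a hypothetical sketch, not code from the study; the class and the renderer callback are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal sketch of page-output caching for a web app: render a page once
// per path, then serve the cached markup. Names are illustrative only.
public class ContentCache {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // renderer stands in for the expensive JSP/taglib rendering pass;
    // it runs only on a cache miss
    public String get(String path, Function<String, String> renderer) {
        return cache.computeIfAbsent(path, renderer);
    }

    // call when the underlying data changes (CRUD), per Vic's point
    // that invalidation is what makes real apps harder to cache
    public void invalidate(String path) {
        cache.remove(path);
    }
}
```

With such a cache in front of the view layer, repeated requests for the same page skip rendering (and its reflection cost) entirely, which is why cached content tends to erase framework-level performance differences.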
  65. JPetstore vs. mPetstore[ Go to top ]

    Juozas,

    >> It is not true for this kind of use cases, reflection
    >> overhead is not very "horrible" in Struts or Hibernate,
    >> Hibernate uses dynamic code generation

    So why does Hibernate use dynamic code generation? It does so because the performance of standard reflection is absolutely nightmarish, which was my point. I was not implying a Hibernate or CGLIB performance problem.

    >> The main overhead in dynamic code is method lookup, it is
    >> cached in "dynamic" frameworks and it is not a problem for
    >> server applications.

    Although I tend to agree with your statement, method lookup is a part of reflection, and it's SLOW compared to direct method invocation (even with dynamic codegen). Here's an example:

    Consider a JavaBean (Bean) with 5 properties (int id, String name, Date birthday, double height, Bean bean). I accessed the following properties:

    id, name, birthday, height, bean.id, bean.name, bean.birthday, bean.height

    I iterated this 100,000 times using 4 different approaches:

    1) Direct getter calls (bean.getXxx())
    2) iBATIS StaticBeanProbe.getObject() (uses CGLIB)
    3) Commons PropertyUtils.get[Simple|Nested]Property() (used by Struts)
    4) Class.getMethod().invoke()

    I executed each test twice, only measuring the second round, and I executed the known-fastest approach first to allow the slower ones to benefit from any HotSpot optimization. The total time spent accessing properties was:

    Direct.method() = 20ms (probably didn't really register at all)
    StaticBeanProbe = 3244ms
    PropertyUtils = 5839ms
    Method.invoke() = 12207ms

    Note: I could barely measure normal method invocation performance, even with 100,000 iterations (and I don't have the patience to do 1M). The results here are probably just noise (IIRC the Windows clock only has ~10ms resolution).

    The topic here is reflection vs. compiled code (not dynamic generation). In that context, reflection is absolutely slow. Now imagine the tens of thousands of property accesses *per second* that were taking place in the MJPetStore application... now double that if you add a reflection-based persistence layer too!

    PS: If you like, send me an email and I'll send you the code.

    Cheers,
    Clinton
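A minimal sketch of a benchmark along these lines, comparing direct getter calls against reflective Method.invoke(). This is not Clinton's actual code; the Bean class is illustrative, and on a modern JVM the gap is far smaller than the 2003 numbers above:

```java
import java.lang.reflect.Method;

public class ReflectionBench {
    public static class Bean {
        private final int id = 42;
        public int getId() { return id; }
    }

    public static void main(String[] args) throws Exception {
        Bean bean = new Bean();
        // Look the Method up once, outside the loop, as a good framework would
        Method getId = Bean.class.getMethod("getId");

        int iterations = 100_000;
        long sum = 0;

        long t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sum += bean.getId();                     // direct, inlinable call
        }
        long direct = System.nanoTime() - t0;

        t0 = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sum += (Integer) getId.invoke(bean);     // reflective call, boxes the result
        }
        long reflective = System.nanoTime() - t0;

        // sum is printed so the JIT cannot eliminate the loops as dead code
        System.out.println("direct:     " + direct + " ns");
        System.out.println("reflective: " + reflective + " ns (sum=" + sum + ")");
    }
}
```

Note that even here the Method lookup is hoisted out of the loop; doing getMethod() per iteration (as naive code sometimes does) makes the reflective path dramatically worse still.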
  66. Reflection[ Go to top ]

    Clinton, a GOOD implementation of a reflective framework ALWAYS caches Method lookups. (I don't know if BeanUtils does... my experience is that BeanUtils is amazingly buggy, but that's another story.)

    In the case of Hibernate (with CGLIB reflection optimizer turned OFF), we fully discover all needed getter/setter methods at system initialization time. No method lookups are ever done at runtime!

    One of the problems with people thinking that reflection is slow, is that they don't use reflection properly!

    Now, of course, we use CGLIB MetaClass, which _is_ just a kind of reflection (just one which uses bytecode generation).

    I'm pretty certain that you cannot beat the performance of CGLIB in doing

    JavaBean -> Object[] property array

    even with handcoded Java. The bytecode will be approximately the same either way.

    peace :)
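The init-time discovery strategy Gavin describes can be sketched like this. It is an illustrative sketch, not Hibernate's actual code; the class names (including the demo User bean) are hypothetical:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

// Discover all getters once at initialization; at runtime only invoke()
// executes -- no getMethod()/introspection lookups ever happen per request.
public class PropertyAccessor {
    private final Map<String, Method> getters = new HashMap<>();

    public PropertyAccessor(Class<?> type) throws Exception {
        for (PropertyDescriptor pd : Introspector.getBeanInfo(type).getPropertyDescriptors()) {
            if (pd.getReadMethod() != null) {
                getters.put(pd.getName(), pd.getReadMethod());
            }
        }
    }

    public Object get(Object bean, String property) throws Exception {
        Method m = getters.get(property);   // cached, no runtime lookup
        if (m == null) {
            throw new IllegalArgumentException("no such property: " + property);
        }
        return m.invoke(bean);
    }

    // Demo bean, illustrative only
    public static class User {
        public String getName() { return "alice"; }
    }
}
```

This separates the expensive part of reflection (lookup/introspection) from the cheap-ish part (invoke), which is the distinction Gavin is drawing between "reflection done properly" and reflection's bad reputation.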
  67. Interesting numbers[ Go to top ]

    Hi, yet another 2 cents:

    All data points (~75, except 2 points) in table 2 give a SINGLE
    error% vs. response time plot (better viewed in log/log scale).
    The .NET data point in table 5 fits also.

    So, I conclude that response time is directly, and
    non-specifically [to the test environment], set by the
    number of errored/timed-out requests.
    Apparently, in the load-test software, the timeout was set
    to 5 sec, which corresponds to 20% errors (!!!!!) at a 1 second
    average response time.

    Sorry, I would call it bad math:
    -- it hides the real [good] response times.
    -- it hides the presence of errors.
    -- it allows reporting "maximum throughput"
    at the point of 20-30% failures, thus overstating
    the load at the 1% error limit by 50%.
    -- it hides (here) the fact that, under the same load,
    over a wide range of loads, .NET has 10-20 times FEWER
    errors/timeouts than the best J2EE.

    Alex V.
  68. errors[ Go to top ]

    Will,

    I am sorry for my second, somewhat aggressive, post sent without
    answering you; I was studying how to make a log scale in Excel.
    Here we are:

    AR = all requests per second = users / think time = users / 5

    %Errors = (AR - TPS) / AR * 100

    No errors >> Users / 5 = TPS, right?

    Per my experience, errors [and timeouts] are the most
    advanced warning. Actually, they should be the FIRST measurement.
    A delay of 1-3 sec is a much lesser problem than 0.1% missing
    responses, right?

    =====================================

    Thank you for the info on Oracle and CPU. Apparently,
    only a superposition of TPS, error %, and CPU vs. load
    can show the full picture.

    Thanks again.

    Alex V.
  69. errors[ Go to top ]

    Alex,

    Ahhhhhhhh, I get it cool!

    Since there are many queues involved, deviation from the theoretical throughput line (x/5) does not mean an error; things could just get queued somewhere. In fact, this is what's happening when the throughput slope goes >= 0. Since the clients won't error until 120 seconds have passed, and we set a 3 second max response time, it's okay to queue a request for at least that long and there will be no error. You're so right that a slower response time is way better than an error.

    So, I think you make the observation that .NET has a more linear path to peak utilization than J2EE, but not that the peak numbers are false.

    Cool stuff to chat about huh! :)

    Cheers, Will
  70. errors[ Go to top ]

    Will,

    NO, your definition of error is a bit loose.

    For whatever reason, under identical load, .NET lost (!!!)
    ~10 TIMES fewer requests than EJB-X (~5 times with Oracle,
    and 10-20 times with SQL Server). I am from the Java-Oracle
    camp, but this is what the numbers are telling us.
    I honestly wish to be wrong...

    Let's keep it simple. The load software sends 100 requests
    per second. Measured TPS is 98.5. That means that
    EVERY SECOND, 1.5% of transactions are MISSING.

    Since there was good averaging over time, the fact that some
    transactions will be returned in the next second(s) does not matter.

    They can be missing because:

    -- they are just really dead == never coming back == dropped
    by the server (where is the server-side log? :-) )

    -- they will arrive after the client timeout limit and, thus,
    are never counted in TPS. Just increase the client timeout setting
    and this point will be clear. In many use cases
    the client will not wait 1-5 seconds; in such cases,
    from the client's perspective, a long response also means
    a "dead transaction".

    Good test software reports the response time distribution.
    Bad test software shows a wrong picture, if any.

    If I may suggest improvements, they would be:

    -- detect and report REALLY DEAD transactions (t > 120s)
    -- detect and report PRACTICALLY DEAD transactions (120s > t > 3s)
    -- response time distribution statistics (various averages)
    -- run the same test with some OTHER test software
    -- measure the time to first response byte
    -- record and report CPU on the client, server, and database machines

    One year ago I made a small test at home over a few weekends.
    It was amazing that the different test packages showed totally different
    pictures. If interested, please see:

    http://pages.infinit.net/sir/test/test-1.htm
    /Disclosure: 6-pack beer was used during the test.
    Currently, I do not work for A, B, C, D.....Z/

    have a nice day. :-)

    Alex V.
  71. errors[ Go to top ]

    Alex,

    I was defining errors as either a TCP socket error, e.g. connection refused by the server or a recognized http error code, or a client failed to receive a response within a specified time (120s).

    I think that ties in with what you were saying?

    Thanks,
    Will

    P.S. I just browsed http://pages.infinit.net/sir/test/test-1.htm, looks like a really good bit of work thanks a bunch!
  72. errors[ Go to top ]

    Will,

    //
    I was defining errors as either a TCP socket error, e.g. connection refused by the server or a recognized http error code, or a client failed to receive a response within a specified time (120s).
    //

    OK, agree. So what is (Users / ThinkTime - TPS),
    if not such errors (per second)?

    Alex V.
  73. Reflection[ Go to top ]

    When a web framework populates some command bean with a few properties, we are talking about 1 or 2 ns per request. Compared to the overhead of database access, transactional cache lookup, business computations, and HTML code generation, I consider this not worth thinking about in practice. We should be much more worried about custom tags and the extremely reflection-heavy JSP Expression Language, but even those shouldn't matter in many scenarios.

    For the extreme case of 1000s of concurrent users on applications with hardly any business logic, reflection might cause significant slowdowns. But for "normal" applications with "normal" traffic, the overhead will be negligible, be it with web frameworks like Struts or persistence frameworks like Hibernate. Optimizing reflection performance with CGLIB helps of course. So in the absolute majority of cases, there isn't any reason to fall back to manual coding or code generation just to avoid reflection overhead.

    Regarding the overhead of Entity Beans vs reflection-based POJOs: I assume that the container interception and transaction control on each method call to an Entity Bean will cause more overhead than POJO reflection on load and store with e.g. Hibernate. And POJO persistence tools offer so much development and deployment convenience. I would not recommend heavyweights like Entity Beans just because their static code generation can lead to slightly better performance in situations of extreme load.

    Juergen
  74. Reflection[ Go to top ]

    Gavin,

    I agree with everything you're saying, but the problem is: Struts doesn't use CGLIB. Most frameworks currently don't (indeed CGLIB 1.0 was only released a short while ago). And although I believe that BeanUtils caches method lookups (as does StaticBeanProbe), there is a significant difference in the use case of Struts and that of Hibernate. CGLIB unfortunately doesn't help much.

    First of all, Struts allows arbitrary groups of properties to be set. The CGLIB 1.0 MetaClass implementation only allows all properties to be get/set at once (and therein lies much of the performance gain). Therefore, a new MetaClass instance would have to be instantiated for every possible combination of property sets or gets (practical?).

    Second, Struts allows properties to be specified using a .dot notation. So we could say something like: "customer.address.street". Such a property must be parsed, the object tree navigated and finally the street property must be set/get. I'm sure there is some complex path algorithm we could write to evaluate all of the properties and decide which groups of properties to set together (perhaps via CGLIB). Currently though, I don't believe that's being done.

    I think we agree, but unfortunately Struts uses BeanUtils, which we know is fairly slow and might have had an impact on performance. I think that's the key point here. Though, I also agree with some of the other posts suggesting that the Struts taglib and JSP taglibs in general could have a greater impact than the reflection did.

    PS: Congrats on the book. It looks fantastic!

    Cheers, ;-)
    Clinton
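The dot-notation navigation Clinton describes (e.g. "customer.address.street") can be sketched with plain reflection. This is an illustrative sketch, not BeanUtils' actual implementation; the demo Customer/Address beans are hypothetical:

```java
import java.lang.reflect.Method;

// Walk a "customer.address.street"-style path by reflective getter calls.
public class NestedProperty {
    public static Object get(Object bean, String path) throws Exception {
        Object current = bean;
        for (String name : path.split("\\.")) {
            // Derive the getter name: "street" -> "getStreet"
            String getter = "get" + Character.toUpperCase(name.charAt(0)) + name.substring(1);
            Method m = current.getClass().getMethod(getter);   // looked up at every step
            current = m.invoke(current);
        }
        return current;
    }

    // Demo beans, illustrative only
    public static class Address  { public String getStreet()   { return "Main St"; } }
    public static class Customer { public Address getAddress() { return new Address(); } }
}
```

Each path segment costs a string parse, a getMethod() lookup, and an invoke(); that per-request, per-property work is exactly what a MetaClass-style compiled accessor avoids, and why arbitrary paths are hard to pre-compile.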
  75. Reflection[ Go to top ]

    > First of all, Struts allows arbitrary groups of properties to be set. The CGLIB 1.0 MetaClass implementation only allows all properties to be get/set at once (and therein lies much of the performance gain). Therefore, a new MetaClass instance would have to be instantiated for every possible combination of property sets or gets (practical?).

    It is not a very big optimization. It is ~20x faster than reflection, but the reflection weight is meaningful for very "big" resultsets only.

    > Second, Struts allows properties to be specified using a .dot notation. So we could say something like: "customer.address.street". Such a property must be parsed, the object tree navigated and finally the street property must be set/get. I'm sure there is some complex path algorithm we could write to evaluate all of the properties and decide which groups of properties to set together (perhaps via CGLIB). Currently though, I don't believe that's being done.

    There were some plans to implement a runtime compiler for this stuff, but I do not think it is meaningful (I think error reports from generated bytecode will not be very user friendly).
  76. Reflection[ Go to top ]

    Thanks for following up Juozas. I think CGLIB is excellent. I only wish the standard reflection APIs could be as fast.

    Cheers,
    Clinton
  77. Reflection, etc.[ Go to top ]

    >> I agree with everything you're saying, but the problem is: Struts doesn't use CGLIB. <

    And, by your arguments, it never _will_ be able to. Hmmmmm. Interesting.

    But, perhaps it could use a future version of CGLIB for "simple cases" ... and resort to JDK reflection for complex path expressions....

    ....just speculating.

    >> The CGLIB 1.0 MetaClass implementation only allows all properties to be get/set at all at once (and therein lies much of the performance gain). <
    Actually, the CGLIB MetaClass was built by the CGLIB team directly in response to a feature request by me for what I needed for Hibernate ;) Probably if the Struts guys asked for something else, they would be keen to help!

    >> Congrats on the book. It looks fantastic! <
    Thanks Mate!

    peace

    :)
  78. JPetstore vs. mPetstore[ Go to top ]

    <quote>
    Another big performance difference is in that EJBs integrate into the app server much nicer than a single web component (web application) can. With EJBs, the app server "knows" more about the application than with a simple web app. To the app server, a simple web app like JPetStore is just a black box that does everything (presentation to persistence). It sends requests in, hopes for the best, and waits for the response. With the application componentized (not necessarily distributed) like mPetStore, the app server can do much more to optimize itself, not only during deploy time, but it can also react more intelligently to load changes at runtime.
    </quote>

    I've been wondering for some time how an app server actually achieves such optimizations due to component knowledge. I'm not talking about S
  79. JPetstore vs. mPetstore[ Go to top ]

    Sorry for the cut-off posting; I seem to be incapable of handling a browser form ;-)

    <quote>
    Another big performance difference is in that EJBs integrate into the app server much nicer than a single web component (web application) can. With EJBs, the app server "knows" more about the application than with a simple web app. To the app server, a simple web app like JPetStore is just a black box that does everything (presentation to persistence). It sends requests in, hopes for the best, and waits for the response. With the application componentized (not necessarily distributed) like mPetStore, the app server can do much more to optimize itself, not only during deploy time, but it can also react more intelligently to load changes at runtime.
    </quote>

    I've been wondering for some time how an app server actually achieves such optimizations based on component knowledge. I'm not talking about remote stateful components like SFSBs: It's clear that the app server handles component pooling, remote session timeouts, passivation and reactivation, etc. My main concern are local stateless components, as typically used in web apps.

    If you don't keep non-thread-safe stuff in instance variables of local SLSBs, you effectively don't need to pool them. Note that resource factories like a JDBC DataSource are perfectly thread-safe. So what can an EJB container optimize in such a scenario, in contrast to using a POJO within a web app? There's the convenience of CMT of course, but that isn't a runtime optimization.

    What kind of resources would you keep in an SLSB instance variable? I can't think of any examples that aren't far-fetched. Pooling of local SLSBs is arguably a worthless benefit. If you use an appropriate solution for convenient transaction demarcation, POJOs can be as effective as SLSBs, avoiding all of the latter's development and deployment weight.

    Then there's the issue of where to hold state. In many opinions, typical web apps like the Petstore are better off keeping their state in HTTP sessions instead of relying on local SFSBs. A decent web container like Resin can manage those sessions quite effectively, including distributing them to backup servers, etc. There's no need to move state to the middle tier in such apps.

    Regarding the performance differences between JPetstore and mPetstore at very high loads, I bet that there won't be any noticeable differences if you'd use *exactly* the same implementation but no EJBs. So why use EJB for web apps like the PetStore at all, be it SLSBs or SFSBs? I simply don't see any potential for "much more optimization" via container-managed middle tier components.

    Clinton, as you seem to have changed your mind in the course of this study, could you maybe share some of your insights *in detail*? :-)

    Juergen
  80. JPetstore vs. mPetstore[ Go to top ]

    Hi Juergen,

    >> Clinton, as you seem to have changed your mind in the course of this study

    Actually, I don't think I've changed my mind. This question has been posed to me in email as well.

    To me I didn't look at this study as a matter of "what I should use" or "what I have to use" because of performance or productivity or whatever. To me it was about options. IMHO this study has done a great job of laying those options on the table in (WIBTB) a very good comparison of the technologies. I think that when we sit down at a table and say "what do we need", this study will help us fill in "the matrix" of the most likely architecture to be successful for our project.

    In each case, there were options that were better, or worse, depending on the perspective you take. For example, from one perspective, we might summarize the study as follows:

    1) Project Profile: Low/No budget, short timeline, open, extensible, platform independent, scalable to at least 1000 TPS
        -- Winner: MJPetStore (JSP, Servlets, POJO etc.)

    2) Project Profile: Med-High Budget, med-long timeline, open, extensible, platform independent, scalable to more than 1000 TPS
        -- Winner: MPetStore (JSP, Servlets, EJB)

    3) Project Profile: High Budget, short-long timeline, closed, proprietary, platform specific, tool specific, vendor specific, web services, scalable to more than 1000 TPS
        -- Winner: .Net Pet Shop (Microsoft .Net)

    This is by no means a definitive list. I just threw it together, so please take it with a grain of salt. But the point is: there is a price to be paid for each option, and each project will differ in how they want to pay.

    One of my personal beliefs is to pay for people before tools and servers. So if I can save money on the tools and servers, and put it toward top-quality developers instead, that is my preference. I see Java/J2EE having a significant advantage over .Net in that respect -- in both the mindshare of talented developers and the cost of tools/servers.

    I am not surprised at all by the results, and for the most part, I don't think this report will change the way that I develop software. If I were that easy, I might have switched to .Net based on the last report....boy am I glad I stuck with Java! ;-)

    Cheers,
    Clinton
  81. JPetstore vs. mPetstore[ Go to top ]

    > To me I didn't look at this study as a matter of "what I should use" or "what I have to use" because of performance or productivity or whatever. To me it was about options. IMHO this study has done a great job of laying those options on the table in (WIBTB) a very good comparison of the technologies.

    Agreed. The study lays out the options nicely.
     
    > 1) Project Profile: Low/No budget, short timeline, open, extensible, platform independent, scalable to at least 1000 TPS
    >     -- Winner: MJPetStore (JSP, Servlets, POJO etc.)
    >
    > 2) Project Profile: Med-High Budget, med-long timeline, open, extensible, platform independent, scalable to more than 1000 TPS
    >     -- Winner: MPetStore (JSP, Servlets, EJB)

    I respectfully disagree here, even with a grain of salt. Such an interpretation is exactly what this study suggests at a superficial glance, and I believe it is not only oversimplified but actually misleading.

    IMO a JSP / Servlet / JDBC combo should be able to perform as well as the EJB version, at least for typical web apps that just use local EJBs. I don't believe that the better performance at extreme loads has its origin in the EJB container. I simply don't see what *exact* value the use of local EJBs should add, beyond vague "much more optimizations" by the container. And you can always add a remote component layer at a later stage, if ever necessary.

    A project with higher budget and longer timeline will probably be better off with a Hibernate version of such a web app, losing no performance compared to an EJB version. I expect the productivity to be higher too, especially when considering the deployment hassle of the EJB version. All those repeated redeployments in the development cycle are a major hindrance, and writing proper unit tests is really hard with EJB...

    > One of my personal beliefs is paying for people before tools and servers. So if I can save money on the tools and servers, and put it toward top-quality developers instead, that is my preference.

    Absolutely. That's why I recommend appropriate tools and servers, i.e. no EJBs for typical web apps that don't expose remote components - even if you expect very high loads. You can use simpler development tools then, and you'll have a much broader choice of deployment servers, at all kinds of license costs. Such a solution is significantly more portable, deployment setup couldn't be simpler.

    The root of the problem is the "take all or nothing" approach of integrated J2EE servers. With CMP, you'll tie yourself to specific servers that fulfill your persistence needs, no matter if they match the rest of your requirements too. A pluggable CMP engine may help but is effectively an even more complex solution. Using POJO persistence is almost always the better choice.

    A Hibernate-based app will be portable to any container without hassle, be it Tomcat or WebLogic Enterprise. So you can choose your container by its "classic" server features (HTTP server, servlet container, etc), and *combine* it with the POJO persistence tool of your choice (Hibernate, JDO, iBatis, whatever). This applies to projects of any size or budget.
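    The decoupling described here can be sketched with a plain DAO interface. This is a hypothetical example (the names and data are made up, not taken from any Petstore codebase): the web tier codes against the interface, and the persistence tool behind it -- Hibernate, JDO, iBatis, straight JDBC -- becomes a swappable implementation detail:

```java
import java.util.HashMap;
import java.util.Map;

public class DaoDemo {

    // The web tier depends only on this interface.
    public interface ProductDao {
        String findNameById(String id);
    }

    // An in-memory stand-in. A hypothetical HibernateProductDao or
    // JdbcProductDao would implement the same interface with no
    // changes to calling code.
    public static class InMemoryProductDao implements ProductDao {
        private final Map<String, String> data = new HashMap<>();

        public InMemoryProductDao() {
            data.put("FI-SW-01", "Angelfish"); // made-up sample row
        }

        public String findNameById(String id) {
            return data.get(id);
        }
    }

    // Caller code: portable across containers and persistence tools.
    public static String describe(ProductDao dao, String id) {
        return "Product: " + dao.findNameById(id);
    }
}
```

    Swapping the persistence implementation then touches only the wiring, not the callers -- which is what makes the app portable from Tomcat to WebLogic Enterprise without change.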

    Juergen
  82. JPetstore vs. mPetstore[ Go to top ]

    Juergen,

    >> Such an interpretation is exactly what this study
    >> suggest at a superficial glance, and I believe this
    >> is not only too simply put but actually misleading

    I agree. It was just a hypothetical example, and I disclaimed it quite clearly. My point was that the report should not be looked upon as recommending any single approach (or the opposite). It should be looked at as one tool in your toolbox, and based on your perspective from a given project, the conclusions one can draw from the report will change.

    >> And you can always add a remote component layer at a
    >> later stage, if ever necessary.

    I absolutely agree.

    >> I don't believe that the better performance at
    >> extreme loads has its origin in the EJB container.

    Don't get me wrong, I still don't like EJBs any more than you do. Unfortunately, this very comprehensive report would suggest that an EJB architecture can handle load better than a black-box web application. If you are absolutely certain of the opposite, I would challenge you to start an initiative to prove it. I'd be happy to help.

     -- One hypothesis is that Struts imposes some overhead, but I don't think it's to the tune of 30% (but I really don't know)...

    >> better off with a Hibernate version of such a web app,
    >> losing no performance compared to an EJB version.

    This also remains to be proven. I'm not saying you're wrong, just that if you're going to make such claims, you'll have to be able to back them up with something. Especially considering that the Hibernate FAQ and the posts made by the Hibernate Team here and elsewhere would suggest that even they admit raw JDBC is faster. New case study: Hibernate vs. TopLink vs. CocoBase vs. WebLogic CMP vs. JDBC?

    >> With CMP, you'll tie yourself to specific servers
    >> that fulfill your persistence needs,

    I agree. I think CMP is a nightmare. Unfortunately, the report was about performance, not productivity. The conclusion was that EJB/CMP was faster, and I can't argue with it because I have no proof, and I don't think you do either.

    I AGREE with 90% of what you say Juergen. I think we hold similar development principles and have the same feelings about EJB and their value. That doesn't make the report incorrect in its conclusion though: J2EE with EJB/CMP came out WAY ahead of the black-box web app. It wasn't even close. I have no reason not to believe them.

    I would like nothing more than to show that developing a super-scalable application using Java is just as clean and simple as using .Net. Unfortunately, I think that we have some more work to do, because this is not a religion, and belief alone will not make Java faster and it will certainly not make us right.

    Anyone interested in finding out?

    Cheers,
    Clinton
  83. JPetstore vs. mPetstore[ Go to top ]

    Clinton,

    > Don't get me wrong, I still don't like EJBs any more than you do. Unfortunately, this very comprehensive report would suggest that an EJB architecture can handle load better than a black-box web application.

    Don't get me wrong either, I don't want to discredit the report at all. It's just not obvious to me what magic the EJB container should apply to optimize the execution of this kind of app. It doesn't seem to be obvious to you either. So I draw the conclusion that the root cause is likely to be *elsewhere*, supported by the fact that the codebases weren't the same.

    I would *love* to learn what an EJB container can do to optimize such a single-server app (with no remoting or load balancing at the component level) beyond what a "black box" app can achieve. I may be wrong of course - I guess I would learn something mysteriously new about the mechanisms of EJB then.

    >  -- One hypothesis is that Struts imposes some overhead, but I don't think it's to the tune of 30% (but I really don't know)...

    The average response time of the Struts/JDBC version is 3 times that of the scriptlets/EJB version. With such heavy caching in the business tier, I consider this an indicator for worse performance in the web tier.

    Under extreme loads, the use of reflection for all property evaluation in JSP pages and the resulting longer processing time per page could indeed account for the degradation. Of course this isn't a particular Struts issue but one that affects any heavy use of custom tags, especially with unoptimized EL-style expression evaluation.
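    The difference between the two styles can be made concrete. This is a simplified sketch, not the actual Struts or JSP implementation: a scriptlet compiles down to a direct getter call, while a generic tag/EL evaluator, knowing the property only as a runtime string, must go through reflection on every access:

```java
import java.lang.reflect.Method;

public class PropertyAccessDemo {

    // Hypothetical page bean.
    public static class Item {
        private final String name;
        public Item(String name) { this.name = name; }
        public String getName() { return name; }
    }

    // What a scriptlet compiles to: a direct, statically bound call.
    public static String direct(Item item) {
        return item.getName();
    }

    // What an EL-style evaluator must do: build the getter name from
    // the property string, look it up, and invoke it reflectively --
    // extra work paid on every single property access in a page.
    public static String reflective(Item item, String property) {
        try {
            String getter = "get"
                    + Character.toUpperCase(property.charAt(0))
                    + property.substring(1);
            Method m = item.getClass().getMethod(getter);
            return (String) m.invoke(item);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

    Both paths return the same value; the reflective one simply does more work per access, which is the suspected web-tier cost under extreme load.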

    > >> better off with a Hibernate version of such a web app,
    > >> losing no performance compared to an EJB version.
    >
    > This also remains to be proven. I'm not saying you're wrong, just that if you're going to make such claims, you'll have to be able to back them up with something.

    Well, you're right. You've cut off my word "probably" before the "better off with a Hibernate version", though ;-) Anyway, this would need to be proven. A performance comparison between WebLogic CMP and Hibernate would be quite interesting indeed.

    > I AGREE with 90% of what you say Juergen. I think we hold similar development principles and have the same feelings about EJB and their value. That doesn't make the report incorrect in its conclusion though: J2EE with EJB/CMP came out WAY ahead of the black-box web app. It wasn't even close.

    We definitely hold similar principles, and I agree with your basic views. I just read the conclusion of the report a bit differently: J2EE with *scriptlets* and CMP came out way ahead of the *Struts tags* and JDBC version. The numbers do not indicate that the root cause was in the data access at all.

    > If you are absolutely certain of the opposite, I would challenge you to start an initiative to prove it. I'd be happy to help.

    This is indeed an appealing idea :-) It's hard to do though, as no result would be directly comparable to those in the study. As I hardly find the time for all the Spring work and these discussions, I also wonder when to do such a thing.

    For a start, you could take the JPetstore version that was used in the study, and compare it to a version that just uses scriptlets in the JSPs. This would at least give an indication of the difference in the web tier.
     
    Juergen
  84. JPetstore vs. mPetstore[ Go to top ]

    >> It's just not obvious to me what magic the

    >> EJB container should apply to optimize the
    >> execution of this kind of app. It doesn't
    >> seem to be obvious to you either.

    The exact optimizations will not be obvious to anyone --except perhaps the TMC testers (if they profiled the app server across both apps) and, obviously, the app server vendor (which we will likely never know).

    So I think it's fair to say that these report results and hypotheses are all we have...for now.

    Until the next case study.... ;-)

    Cheers,
    Clinton
  85. JPetstore vs. mPetstore[ Go to top ]

    > The exact optimizations will not be obvious to anyone --except perhaps the TMC testers (if they profiled the app server across both apps), and obviously the app server vendor (which we will likely never know).


    Well, I do not believe in magic in any context ;-)

    I didn't mean how exactly the app server vendors *implement* optimizations but rather *what kind* of optimizations are possible, in theoretical detail. There is no chance for optimizations when there are no imaginable hooks for them. Take a local SLSB that does not keep non-thread-safe instance state (the typical case). By any means, what should an app server optimize here beyond the same logic in a POJO? Remember, we are not talking about convenience like declarative transactions but rather about the execution of the "hardcore" business logic itself.

    The app server must execute the method implementation code in both cases in the caller's thread, so there cannot be any special thread pooling involved. Since there's no pooling-worthy state in the SLSB, instance pooling and locking of the instance for thread safety during method calls just make matters worse. There's a slight overhead when calling an SLSB method, due to the required interception by the container. All things considered, performance of such a local SLSB will be *slightly worse* than that of a respective POJO.
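    The per-call interception overhead mentioned above can be modeled with a JDK dynamic proxy. This is a deliberately minimal sketch of the idea, not how any real EJB container is implemented: the proxied call does the same work as the plain call, plus the interception, so it can only be equal or slower -- never faster:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class InterceptionDemo {

    // Hypothetical business interface.
    public interface Greeter {
        String greet(String who);
    }

    // The plain POJO implementation.
    public static class PlainGreeter implements Greeter {
        public String greet(String who) {
            return "Hello, " + who;
        }
    }

    // Wraps a target in a pass-through "container" interceptor,
    // standing in for the interposition an EJB container performs
    // on every business-method call.
    public static Greeter containerManaged(final Greeter target) {
        return (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args)
                        throws Throwable {
                    // A real container would do transaction and security
                    // checks here; that cost is paid on every call.
                    return m.invoke(target, args);
                }
            });
    }
}
```

    The interceptor adds a reflective dispatch on each call while contributing nothing to the business logic itself, which is why a local SLSB call cannot outperform the equivalent direct POJO call.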

    I haven't analyzed potential optimizations through code generation in the Entity Bean container in detail yet. As with local SLSBs, thread pools cannot be involved. So it could just have to do with fast access to cached object states, or fast on-demand population from the backend database. I doubt that access to Entity Bean state via EJB interception is faster than fetching a POJO from a cache. And loading an object from the database by id is very fast in Hibernate too. The details would have to be analyzed, though.

    So the use of local SLSBs without non-thread-safe instance state cannot cause better performance by any means. Special optimizations for single row caching and/or refreshing with Entity Beans might make a *slight* difference compared to generic O/R tools like Hibernate - but only via CMP in highly optimized app servers like WebLogic, and only for certain use cases. As CMP will perform worse than a tool like Hibernate in many other scenarios (e.g. any set access), this cannot be generalized easily.

    > So I think it's fair to say that these report results and hypothesis is all we have...for now.

    With emphasis on *two* hypotheses for the better performance of the scriptlets/EJB version: the use of EJB *and/or* the avoidance of reflection in the web tier. Neither is proven yet, although most people seem to assume the former without a second thought.

    > Until the next case study.... ;-)

    :-))

    Juergen
  86. mPetstore vs. mPetstore[ Go to top ]

    Something just came to my mind: how would the Oct 2002 mPetstore compare to the current mPetstore in terms of performance? Has it degraded much, now that MS has implemented many "design patterns" which are common in Java projects (Factory, DAO, etc.)? This way we could evaluate how "optimized" the previous version of mPetstore was (using stored procedures, etc.), and what the impact is of these "politically correct" (;-) changes MS had to make in this new version.

    Cheers!
  87. mPetstore vs. mPetstore[ Go to top ]

    You mean msPetShop 2 vs. msPetShop 3? You might find some details at http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnbda/html/petshop3x.asp.

    There is not a single doubt in my mind that as soon as M$FT starts to implement all these distributed component design patterns that we are so accustomed to in J2EE programming, and starts to abandon their preferred approach of simply putting web (HTML) views on top of recordsets returned from stored procs, the performance of their apps will suffer big time.

    However, I am pretty amazed that now M$FT seems to have accepted the J2EE approach. Take a look at the MSDN Patterns page at http://msdn.microsoft.com/practices/type/Patterns/Enterprise/default.asp and you may get some idea. This to me is a huge win for the Java camp, i.e., we are now following the exact same programming practices. As soon as we are on a level playing field, there should be no significant performance difference between J2EE and .NET. Another consequence is that .NET's perceived advantage in development productivity and superiority in development tools should also disappear.
  88. Excellent Work[ Go to top ]

    I'm greatly impressed by this latest study. Everything looks much fairer and more consistent than the previous one.

    After reading and digesting, the main conclusion I can draw is that there is no absolute black and white, everything is multiple shades of grey! As most people hopefully now realise, the best solution is the one that blends the correct and appropriate usages of the right technologies and products to solve the problem at hand. Additionally any problem can be solved in many ways and any complex application will utilise a variety of different solutions.

    Hopefully studies such as this will help to reduce the "my Product / Architecture / Pattern is better than yours" type of argument and facilitate more useful debates on how to find the best and most appropriate solution for each specific problem on a case-by-case basis.
  89. Excellent Work[ Go to top ]

    I must chime in and say great job, guys! I work in a standards and frameworks group in a Fortune 1000 company and have been tuning in with great curiosity ever since you published the baseline spec. I have just printed out 20 copies of your report for my group. Thank you. BTW, how can we contact you directly?
  90. Excellent Work[ Go to top ]

    Erin,

    If you drop a line to casestudy@middleware-company.com it'll get to one of us.

    Thanks for your kind comments:)

    Will
  91. I have to say I'm very happy with this report. I was lucky to have witnessed -first hand- the amount of effort and personal commitment that Will and others put toward this case study. They absolutely deserve the positive feedback that they are receiving.

    Great job guys!

    Clinton
  92. Processor load[ Go to top ]

    I would like to see processor load lines on graphs.

    I have read performance results and my conclusion is:

    1. Oracle and MS SQL perform equally
    2. .NET and J2EE optimize database access equally well
    3. When the database is not the bottleneck then .NET is 2x to 3x faster than J2EE. I don't like this, but it seems to be true.

    If .NET solution performs equally as J2EE solution on a test, but .NET solution made 20% average processor load and J2EE solution made 60% average processor load then J2EE has a problem.

    Nebojsa
  93. Processor load[ Go to top ]

    Nebojsa: 3. When the database is not the bottleneck then .NET is 2x to 3x faster than J2EE. I don't like this, but it seems to be true.

    On which results did you base the hypothesis?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  94. Processor load[ Go to top ]

    > Nebojsa: 3. When the database is not the bottleneck then .NET is 2x to 3x faster than J2EE. I don't like this, but it seems to be true.

    >
    > On which results did you base the hypothesis?
    >
    > Peace,
    >
    > Cameron Purdy
    > Tangosol, Inc.
    > Coherence: Easily share live data across a cluster!


    "Web Services Test. This tested the performance of the application server as it hosts a basic web service via SOAP 1.1. The results of this test showed that the .NET platform outperformed the fastest J2EE platform by over 200%."

    My hypothesis is that the Web Services Test is the only one where application server CPU is the critical resource.

    Nebojsa
  95. Processor load[ Go to top ]

    Nebojsa,

    To answer some of your questions:

    > I would like to see processor load lines on graphs.

    For both .NET and J2EE (both app servers) CPU load was 100% i.e. for everyone the application server machine CPU was the bottleneck.

    > 1. Oracle and MS SQL perform equally

    Yes, but not surprisingly Microsoft do a slightly better job when working with SQL Server. For J2EE both work equally well.

    > 2. .NET and J2EE optimize database access equally well

    For the most part this is true but since the DB was not the bottleneck I'd add "in this test" to the end of your statement since a test where the DB was the bottleneck may highlight points not shown by this case study.


    > 3. When the database is not the bottleneck then .NET is 2x to 3x faster than J2EE. I don't like this, but it seems to be true.

    In this test DB was not the bottleneck and at least one App Server performed as well as .NET so I'm not quite sure I agree with this conclusion. Maybe I didn't understand?

    Cheers, Will
  96. Processor load[ Go to top ]

    My bad, the sentence:

    > For both .NET and J2EE (both app servers) CPU load was 100% i.e. for everyone the application server machine CPU was the bottleneck.

    Should read:

    For both .NET and J2EE (both app servers) CPU load was 100% *in all tests* i.e. for everyone the application server machine CPU was the bottleneck *in every case*.


     Cheers, Will
  97. Processor load[ Go to top ]

    You have understood me.
    My hypothesis was wrong.

    Thank you,
    Nebojsa




  98. I saw a footnote for the MS source. Where are the J2EE sources? Did I miss something in the report?
  99. Cir,

    The materials are at The Middleware Company case study website:
    http://www.middleware-company.com/casestudy/

    Salil Deshpande
    The Middleware Company
  100. All Run On Win2K3???[ Go to top ]

    I don't understand and am confused.

    All benchmarks were run on Win2K3 because it was the fastest.

    If I have Win2K3, and thus, .NET, why do I spend money on J2EE?
  101. Which JVM was used?[ Go to top ]

    Just curious which JVM was used.
    I know some benchmarks showing that JRockit can outperform Sun's JVM by quite a large margin (around 1.4 : 1).


    Michal
  102. Which JVM was used?[ Go to top ]

    Michal,

    We used Sun JVM 1.4.2, we tried a number of versions and other JVM's and found this one performed the best.

    However, it should be noted that this is a specific application rather than a full exercising of the entire virtual machine, so mileage may vary with other applications.

    Cheers, Will
  103. Which JVM was used?[ Go to top ]

    <quote>
    In addition to testing the concurrent mark sweep collector for the old generation, we also examined using the parallel young generation collector (-XX:+UseParNewGC), possible as the application servers were running on an 8 CPU system. Unfortunately we found no advantage one way or the other between this option and the default copying collector. </quote>

    It would have been nice to also disclose your -verbose:gc logs or at least some sort of scaled graphs.
    It's not clear from the paper whether GC was a bottleneck. If you used 1.4.2, have you tried running with -XX:+AggressiveHeap ?
    Sun, Fujitsu and HP publish their Specjbb2000 scores using this option.

    Also, I couldn't find the full set of options used. Will you disclose them?
  104. Improvement[ Go to top ]

    With the rate that J2EE is improving in these "case studies", we'll be blazing past .NET in the next one...
  105. Interesting[ Go to top ]

    The most common J2EE server has around 40% market share? That means at most only 40% of the J2EE community is on par with .NET as far as performance goes (minus web services).

    It is also interesting that all the J2EE vendors are hiding. At least Microsoft has some *****.
  106. Hi

      I don't see anyone mentioning the 200% performance increase on calling the web services on .NET. Could someone elaborate on the reason for this? I would think, everything else being equal, I would want that additional performance for my web services.

    Thanks

    Sal
  107. So...Who Are They?[ Go to top ]

    I've been trying to find clues for the identity of Vendor X and Y, but I'm coming up empty.

    Any guesses as to who are Vendors X and Y?
  108. Math model[ Go to top ]

    Hi, Will,

    here is my model of how your test software thinks:

    TOK close to 0s (time-OK: normal response time, negligible, effectively zero)
    TOUT = 5s (timeout; it goes into the average-time calculation, but
    these transactions do not go into the TPS calculation; you say TOUT is 3s)
    LOAD = users / think time = users / 5 (requests per second)
    ERR = LOAD - TPS (errors per second, missing transactions)
    AVG = (TPS*TOK + ERR*TOUT) / LOAD
        = TPS*TOK/LOAD + (LOAD-TPS)*TOUT/LOAD
    assuming TOK << TOUT: AVG ≈ (LOAD-TPS)*TOUT/LOAD

    ALL DATA from Tables 2 and 4 and [at least] the .NET web service
    datapoints fit this model (except two points in the Servlet-X area,
    which are obvious "test errors"). At the lower ERR end,
    when LOAD is close to TPS, the TPS*TOK/LOAD term starts to play a role and
    the ERR% vs AVG plot goes more horizontal on the left side.

    My point is that the model above is NOT GOOD for recording, analyzing
    and presenting test results. I suggest changing test software :-)
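    Alex's model above is simple enough to work through numerically. This sketch just restates his formula in code, with made-up values (the names TOK, TOUT, LOAD, TPS, ERR, AVG are his; the numbers are illustrative only):

```java
public class ResponseModel {

    // AVG = (TPS*TOK + ERR*TOUT) / LOAD, where ERR = LOAD - TPS.
    // With TOK near zero, the reported average is dominated by the
    // share of transactions that hit the timeout.
    public static double avg(double load, double tps,
                             double tok, double tout) {
        double err = load - tps; // missing transactions per second
        return (tps * tok + err * tout) / load;
    }
}
```

    For example, with LOAD = 100 req/s, TPS = 80, TOK = 0 and TOUT = 5s, the model reports AVG = (20 * 5) / 100 = 1.0s, even though every successful transaction was nearly instant -- which is exactly why Alex argues this average is not a meaningful response-time measure.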

    Alex V.
  109. Math model[ Go to top ]

    Alex,

    Excellent point. It’s hard to explain what I think the discrepancy is without a whiteboard but I’ll take a quick swing at it.

    When we conducted the tests we would add 500 clients, leave the run going for 30 mins, then add 500 more... until the response time was over 3 seconds, at which point the test was done. During these 30-minute segments, if you saw the .NET throughput line and the J2EE throughput line, you'd notice that the .NET one was a lot smoother, whereas the J2EE one showed a reasonable amount of fluctuation, with the throughput going up and down as the JVM garbage collected, the app server did object/EJB/HTTP session management, etc. I described in the tuning section how we sought to minimize this fluctuation, because if it becomes excessive you can get connection-refused errors: the TCP-layer queues aren't large enough to hold the requests while the short pauses occur. What is reported here is the average transactions per second sliced out from that 30-minute period.

    What I suspect you're seeing are missing numbers from the fluctuations outside of the slice interval. It doesn't affect the reported results; it just leaves an untidy loose end out there, and I'll have to make sure that in future I tie it up.

    All the best,
    Will
  110. Math model[ Go to top ]

    Will,

    According to my _hypothesis_, the average time reported by your test
    software is NOT a VALID average time.
    Also, LOAD - TPS shows the average rate of
    missing transactions WITHIN the 30-minute slice.

    Switching to incremental GC, as you showed so well,
    does reduce request refusals, but it does not save
    J2EE from missing many more transactions (unfortunately...)
    than .NET, even under average loads.

    Just from common sense, average time vs. errors% cannot
    be so unspecific across such varied test cases. It is
    the test software (sorry, my guess) that makes these two
    values directly, unspecifically proportional.

    Any way, it is my guess, and I would leave it to your
    discussions with test software provider.

    Have a nice day! Sorry for the criticism; it was a great,
    useful effort... :-)

    Alex V.
  111. Error Numbers[ Go to top ]

    I think I see what Alex is saying... Do you have the Error Reporting numbers for the tests?
  112. So...Who Are They?[ Go to top ]

    I've been trying to find clues for the identity of Vendor X and Y, but I'm coming up empty.

    >
    > Any guesses as to who are Vendors X and Y?

    Weblogic and JBoss?
  113. So...Who Are They?[ Go to top ]

    Proceeding by elimination,

    I would say that the X is Weblogic and Y is Websphere
    for the following reasons:

    - App server X cannot be Oracle 9iAS because the HTTP server
    is not "integrated": the HTTP server is not launched
    with the JVM but as a separate process.
    Oracle provides a customized Apache server, but it's not embedded in the AS.
    I also don't think application server Y is Oracle 9iAS, because the
    plug-in for IIS is just a plug-in for their Oracle HTTP Server.

    - Both WebSphere and WebLogic provide an embedded HTTP server
    and plug-ins for well-known HTTP servers. As WebSphere comes with its own
    Apache-based HTTP server (a.k.a. IBM HTTP Server), I guess it would
    be a natural choice to use it instead of the embedded one. And
    because last time WebLogic outperformed WebSphere, I would say that X
    is WebLogic (I hope so).

    Could TMC guys provide some more clues ?

    Thanks,
    Luc
  114. Re: So...Who Are They?[ Go to top ]

    > - X application server
    > cannot be Oracle 9IAS because the HTTP server

    Oracle 9iAS has an integrated HTTP server in OC4J; I use it in development all the time.
  115. The server X?[ Go to top ]

    Is there a reason the vendor of server X does not want to disclose
    their name? What a good chance to claim to be the fastest
    J2EE app server.

    Server Y really sounds like WebSphere with regard to web server configuration.
    It has consistently been addressed in IBM redbooks and performance guide articles
    that the embedded web server should not be used. In this sense, it would be a bit
    unfair for WebSphere to have been deployed using the embedded web server, if server Y
    were indeed WebSphere.

    Jian
  116. The server X?[ Go to top ]

    Well,

    now that I have completely read the report, I am more and more certain
    that X is WebLogic. When the report's author describes server X's network I/O subsystem, it totally matches WebLogic's.
    Quoting: "It seems likely that App Server X is using some kind of 2nd stage pipeline. The first uses a thread pool whose dedicated task is to accept connections and drain/fill the associated sockets using some efficient non-blocking I/O API. The second stage actually contains the J2EE worker thread pool..." (Chapter 25, paragraph 2). The first stage is handled by the socket reader threads, whose job is to listen on sockets and put incoming requests into the execute queue. With WebLogic, you can force the use of native socket reader threads instead of pure Java ones. The native ones are not forced to poll all open sockets but are notified by the OS (and then the JVM, I guess) when data can be read from a particular socket.
    The requests are then put into the execute queue before being handled by worker threads, which corresponds to the 2nd stage.
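    The two-stage arrangement described above can be sketched as follows. This is a simplified stand-in, not WebLogic code: a plain queue plays the role of the socket-reader stage, and every class and method name here is hypothetical.

```java
import java.util.concurrent.*;

// Rough sketch of a two-stage request pipeline. Stage 1 ("reader" threads,
// standing in for the socket reader threads that drain sockets) puts incoming
// requests onto an execute queue; stage 2 is the worker pool that handles them.
// All names are illustrative, not taken from any app server's API.
class TwoStagePipeline {
    private final BlockingQueue<Runnable> executeQueue = new LinkedBlockingQueue<>();
    private final ExecutorService readers = Executors.newFixedThreadPool(2); // stage 1
    private final ExecutorService workers = Executors.newFixedThreadPool(4); // stage 2

    TwoStagePipeline() {
        // A dispatcher drains the execute queue into the worker pool.
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    workers.execute(executeQueue.take());
                }
            } catch (InterruptedException ignored) {
            }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Stage 1: a reader thread "drains the socket" and enqueues the request.
    void accept(Runnable request) {
        readers.execute(() -> executeQueue.add(request));
    }

    public static void main(String[] args) throws Exception {
        TwoStagePipeline pipeline = new TwoStagePipeline();
        CountDownLatch done = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            pipeline.accept(done::countDown);
        }
        done.await(); // blocks until the worker stage has handled all three
        System.out.println("handled 3 requests");
    }
}
```

    The real servers use non-blocking I/O for stage 1, as the report speculates; the queue here just makes the hand-off between the two thread pools visible.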

    Unless other app servers offer a similar mechanism, I definitely think
    X is WebLogic.

    My 2 cents,
    Luc
  117. Did anyone else notice that .NET-C# had much better average response time for the same throughput than the J2EE implementations?
  118. You're Right


    > Did anyone else notice that .NET-C# had much better average response time for the same throughput than the J2EE implementations?

    No, I didn't notice, but you're exactly right. Under heavy load, the difference can become quite pronounced. Pretty important if the user is waiting on a response.
  119. not really .NET vs. J2EE

    Get real, guys: this is not a .NET vs. J2EE comparison, but .NET vs. the best (and most expensive) J2EE servers. After only 3 years, .NET is better than the most expensive J2EE environments, not only from a performance point of view but also in productivity and, not least, in the expenses required of the client. How much money does a client have to spend to have a J2EE application perform almost as well as a .NET equivalent? Well, it seems that if web services are involved, no money can help him.
  120. not really .NET vs. J2EE

    <quote>
    Get real, guys: this is not a .NET vs. J2EE comparison, but .NET vs. the best (and most expensive) J2EE servers. After only 3 years, .NET is better than the most expensive J2EE environments, not only from a performance point of view but also in productivity and, not least, in the expenses required of the client. How much money does a client have to spend to have a J2EE application perform almost as well as a .NET equivalent? Well, it seems that if web services are involved, no money can help him.
    </quote>

    You're right of course that WebLogic Enterprise (presumably Server X) is pretty expensive at 10,000 USD per CPU, especially compared to .NET. But let's not forget that you can run the Servlet/JSP/JDBC version of the app on WebLogic Express easily, even with JTA transactions, at 500 USD per CPU for the Basic Edition. And I expect Caucho's Resin to perform very competitively, at 500 USD per server. The same applies when using Hibernate and the like.

    So if you choose your J2EE development and deployment environments wisely, you can achieve both high productivity and high performance, with low expenses. Take IntelliJ IDEA (500 USD per developer) and Resin (500 USD per deployment server, free for development). As you don't need EJB at all for many apps, that's a fine combo for many scenarios. Visual Studio.NET comes at 4 times that price. And server-wise, you're not bound to Windows or IIS at all.

    Very important: With J2EE, you've got a multitude of infrastructure products to choose from, both open source and commercial ones. WebLogic's SOAP engine might be sub-optimal, but so what? For web services, why not use TheMindElectric's GLUE with Resin? Its Standard Edition is even free for most commercial usages. I expect such a combo to rock in terms of performance, at negligible costs.

    Juergen
  121. Pricing Confusion?

    > But let's not forget that you can run the Servlet/JSP/JDBC version of the app on WebLogic Express easily, even with JTA transactions, at 500 USD per CPU

    But what are you going to run it on? Linux? Then you have to shell out more $$$ for Redhat AS or Enterprise, right?

    > Visual Studio.NET comes at 4 times that price.

    Four grand for VS .NET? Where do you shop? Even the uber MSDN subscription (which gives you...everything) is only around $2500. Dev Studio Professional is less than a grand.

    > For web services, why not use TheMindElectric's GLUE with Resin? Its Standard Edition is even free for most commercial usages. I expect such a combo to rock in terms of performance, at negligible costs.

    You might expect it, but you'd be wrong. Glue is a great product for Java and its performance is generally better than Weblogic and *much* better than Axis. But compared to .NET, it's no contest whatsoever.

    And while it's hard to beat the price of the free edition of Glue, there's a gaping hole in management/monitoring that you must fill if you're going to deploy into a controlled enterprise situation.
  122. Pricing Confusion?

    > But what are you going to run it on? Linux? Then you have to shell out more $$$ for Redhat AS or Enterprise, right?


    On Windows 2000 maybe? ;-) Or take Resin at the even cheaper 500 USD per server, and run it on any Windows or Linux distribution of your choice.

    > Four grand for VS .NET? Where do you shop? Even the uber MSDN subscription (which gives you...everything) is only around $2500. Dev Studio Professional is less than a grand.

    Last time I checked it was 2000 USD per developer for VS.NET Enterprise. That's four times the price of *IDEA*. Remember that Resin is free for development, its 500 USD are per *deployment* server.

    > You might expect it, but you'd be wrong. Glue is a great product for Java and its performance is generally better than Weblogic and *much* better than Axis. But compared to .NET, it's no contest whatsoever.

    I don't argue that Glue will perform better than .NET's SOAP support, but it should at least perform pretty well. If you've got an app that relies heavily on Web Services and .NET gives you better performance, then choose .NET! For more typical apps with a Web Service somewhere in between a lot of other stuff, a Resin/Glue combo will be fine too.

    Anyway, the focus on Web Services performance is a bit misleading: There are so many aspects in the field of enterprise applications that you can't really argue that extreme SOAP processing is the central one. With a J2EE web app on Resin, one should be able to achieve about the same performance as with a .NET web app - for typical apps that aren't all about SOAP processing.
  123. Pricing Confusion?

    Juki: You might expect it, but you'd be wrong. Glue is a great product for Java and its performance is generally better than Weblogic and *much* better than Axis. But compared to .NET, it's no contest whatsoever.

    Hmm, that's funny, the benchmarks I saw showed just the opposite.

    Juki: And while it's hard to beat the price of the free edition of Glue, there's a gaping hole in management/monitoring that you must fill if you're going to deploy into a controlled enterprise situation.

    Did you say "deploy into a[n] .. enterprise"? I assume we're talking data centers, not desktops. You realize that such a requirement automatically eliminates .NET for about 80-90% of companies that qualify to use the term "enterprise".

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  124. Pricing Confusion?


    > Hmm, that's funny, the benchmarks I saw showed just the opposite.

    I haven't seen published benchmarks comparing .NET and Glue. (I'd love to see them if they are available.) However, we did our own internal eval.

    TME publishes a benchmark comparing Glue, Axis, and Java brand "X". The tests are run on WinXP, but .NET web services aren't benchmarked.

    I don't mean to bash Glue. It's really a nice product. Extremely easy to use.

    > You realize that such a requirement automatically eliminates .NET for about 80-90% of companies that qualify to use the term "enterprise".

    I guess that puts us in the other 10-20%. :-)

    Actually, we're pretty darned happy with Win2K3 and .NET.
  125. Pricing Confusion?

    Cameron: Hmm, that's funny, the benchmarks I saw showed just the opposite.

    Juki: I haven't seen published benchmarks comparing .NET and Glue. (I'd love to see them if they are available.) However, we did our own internal eval. TME publishes a benchmark comparing Glue, Axis, and Java Brand "X". While the tests are run on WinXP, .NET web services isn't benchmarked. I don't mean to bash Glue. It's really a nice product. Extremely easy to use.

    Well, I was just making a point. With benchmarks, you can prove whatever you set out to prove, basically. I have seen benchmarks that show TME being faster than .NET, but parsing XML etc. is a "known problem" with solutions that can theoretically only run so fast, so generally speaking they should all run at about the same speed.

    (BTW - As I pointed out, the fact that the app servers tested didn't approach that speed is pretty sad, but I think it also reflects that Web Services aren't really that important today in production. Probably in a year when TMC does their next test, you'll see the results "neck and neck" again, which should hardly surprise anyone.)

    Regarding the J2EE performance equalling or beating the .NET performance, that is pretty impressive in one sense, because Microsoft has architected the OS and the web server and the .NET runtime into a single fast but brittle package that is optimized top to bottom, while the JVM -- from Sun -- is running as a Windows program no differently than any app that you might write in C++ and deploy to Windows, and the J2EE app server -- from *** (not Sun) -- is running as a Java program inside the JVM ... and it still matches and in some cases beats the performance. And it's not tied to that OS, or that JVM, or .... In other words, you have both significantly more flexibility and a wee bit more performance to boot ... that's simply incredible! (BTW - because of this lack of tight integration on the Java side and the top-to-bottom vendor optimization from Microsoft, I predicted a couple of months ago that the .NET implementation on IIS on Windows would win the rematch by 10% or so, so I consider myself pragmatic and I really was surprised when I heard the results. I don't know if it means that .NET will still get faster, or that Java is really that good, or a little of both.)

    So, if you want an apples-to-apples comparison in terms of flexibility, I think it is fair to re-run the mpetstore on Mono on Linux or on Rotor on FreeBSD. What do you think? (I'm just kidding ... I realize that it would not be at all fair, if even possible ;-)

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  126. Pricing Confusion?

    ----
    Regarding the J2EE performance equalling or beating the .NET performance, that is pretty impressive in one sense, because Microsoft has architected the OS and the web server and the .NET runtime into a single fast but brittle package that is optimized top to bottom, while the JVM -- from Sun -- is running as a Windows program no differently than any app that you might write in C++ and deploy to Windows, and the J2EE app server -- from *** (not Sun) -- is running as a Java program inside the JVM ... and it still matches and in some cases beats the performance. And it's not tied to that OS, or that JVM, or .... In other words, you have both significantly more flexibility and a wee bit more performance to boot ... that's simply incredible!
    ----

    Yes, this impressed me too. What doesn't impress me about Java is the productivity.

    Vrajmasu
  127. Pricing Confusion?

    ----
    Regarding the J2EE performance equalling or beating the .NET performance, that is pretty impressive in one sense, because Microsoft has architected the OS and the web server and the .NET runtime into a single fast but brittle package that is optimized top to bottom, while the JVM -- from Sun -- is running as a Windows program no differently than any app that you might write in C++ and deploy to Windows, and the J2EE app server -- from *** (not Sun) -- is running as a Java program inside the JVM ... and it still matches and in some cases beats the performance. And it's not tied to that OS, or that JVM, or .... In other words, you have both significantly more flexibility and a wee bit more performance to boot ... that's simply incredible!
    ----

    In most enterprise applications, the important features considered are Security, Availability, and Reliability. Performance might be considered, but it is not the deal maker or breaker in selecting a technology. It is understood that there is always some performance difference between different technologies (unless we know there is a huge difference). So performance metrics are not so important.
    One thing that we can learn from this study is that both Microsoft and the J2EE vendors have room to improve. Microsoft needs to do a better job on Security and Reliability; they are improving in terms of the availability of their servers. J2EE vendors need to improve on productivity and lower their costs.
  128. Productivity

    Productivity is a very debatable topic. In the right hands, J2EE is very productive, not to mention product and platform independence. I have not used .NET but whenever I hear such arguments, I get the feeling that one needs slightly more smarts to be productive when working with J2EE than .NET.
  129. Productivity

    > Productivity is a very debatable topic. In the right hands, J2EE is very productive, not to mention product and platform independence. I have not used .NET but whenever I hear such arguments, I get the feeling that one needs slightly more smarts to be productive when working with J2EE than .NET.


    Some smarts may help. But first of all, one needs to choose the right tools for the job at hand. For a typical web app, IntelliJ IDEA 3.0 and Caucho's Resin 2.1 are perfect companions. Eclipse 2.1 and Tomcat 4.1 may be nice too, although Eclipse still lacks proper out-of-the-box support for web development. Such combos work nicely on either Windows, Linux, or MacOS X. Note that we're talking about a maximum of a few hundred license dollars here.

    Combine this with a web MVC framework like Struts, WebWork, or Spring's web MVC and a persistence tool like Hibernate, OJB, or a JDO implementation, and you will have a pretty lightweight but very powerful foundation for web development. For apps with dedicated layers, an application framework like Spring offers simple ways of wiring up business and data access objects, and can provide convenient transaction handling (completely without EJB).
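    The layering Juergen describes (business objects depending on data access interfaces, with no EJB container in sight) can be sketched in plain Java. Everything below is hypothetical, just to show the shape; a framework such as Spring would do this wiring from a configuration file rather than by hand.

```java
import java.util.*;

// Hypothetical POJO layering: a service wired onto a DAO through an
// interface, so the persistence tool (Hibernate, OJB, a JDO implementation,
// or plain JDBC) can be swapped without touching business logic. No EJB.
interface ProductDao {                              // data access layer
    List<String> findProductNames(String category);
}

class InMemoryProductDao implements ProductDao {    // stand-in implementation
    public List<String> findProductNames(String category) {
        return "fish".equals(category)
                ? Arrays.asList("Angelfish", "Goldfish")
                : Collections.<String>emptyList();
    }
}

class CatalogService {                              // business layer: depends only on the interface
    private final ProductDao dao;
    CatalogService(ProductDao dao) { this.dao = dao; }  // constructor wiring

    public int countProducts(String category) {
        return dao.findProductNames(category).size();
    }
}

class WiringDemo {
    public static void main(String[] args) {
        // A framework would perform this wiring from configuration;
        // here it's plain construction, which is the whole point: POJOs.
        CatalogService service = new CatalogService(new InMemoryProductDao());
        System.out.println(service.countProducts("fish")); // prints 2
    }
}
```

    Swapping the DAO implementation (say, a Hibernate-backed one for production) requires changing only the one line that constructs it.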

    You can sketch a working prototype very quickly with such an environment. And for production, you can move to any server platform, be it Tomcat on Windows or WebLogic on Solaris, with the option to switch at a later stage if necessary. BTW, I agree with Cameron, I'm impressed that J2EE platforms perform so adequately compared to .NET, facing all those requirements of portability. And this isn't just about WebLogic, be it Enterprise or Express Edition: The more lightweight (and more affordable) Resin performs impressively too.

    Of course, there is still a lot of potential to improve productivity, but it's pretty nice already. In my experience, J2EE development is far simpler without EJB, no matter from what angle you look at it. With EJB, there's significantly more complexity in terms of development and deployment. This extra effort has to be justified by application requirements, else it will just hinder productivity. There can be value in exposing remote EJBs for example, but considering local EJBs as a prerequisite for proper J2EE web apps is harmful.

    Juergen
  130. Productivity

    Speaking from my own experience with J2EE and .NET, the difference in productivity is often found in what skillsets are being targeted with productivity tools. Microsoft targets most of their productivity tools at the lower end of the skill spectrum, to make things seem easy and to bring a large number of developers onto their platform. They by and large succeeded at this with VB in the COM era. The J2EE platform seems to target more "real-world" productivity gains. Oftentimes J2EE developers (and server vendors) can make people's lives a lot harder than they should be, but that's not the point of the platform and doesn't always have to be the case. The entire "descriptor" craze in J2EE is all about separating deployment from development in an effort to increase productivity. It has a steeper entry curve but in the long run is, IMHO, more productive. To the neophyte, many aspects of the J2EE platform seem like unnecessary barriers, but over the long haul they prove themselves useful, given that they are used intelligently.

    One simple example can be found in the difference between IDEs. Visual Studio has a ton of wizards to help with simple things. Once you reach a certain point, these wizards slow you down more than they help. As a more experienced developer, I find the IDE extremely lacking in productivity tools that would help me in the real, day-to-day tasks of any degree of sophistication. Compare this to the IDEs in the J2EE world, which offer many extremely useful refactoring and organizational tools that I still use every day. It's not that .NET isn't capable of offering such tools, it's just that it doesn't, because that's not what Microsoft is trying to do. Note that this example isn't necessarily about the platforms and their specifications so much as the tools available to them, but I see that as an important distinction between the platforms taken as a whole.

    On the whole, J2EE has a more mature, flexible toolset and is largely targeted towards experienced developers, more so than .NET. For another example, compare Swing versus Windows Forms. Swing requires a much higher level of understanding and experience than Windows Forms. Sun is just now trying to target the less experienced developer with some of their initiatives. Now, whether or not either approach is an advantage depends on your own situation. If you have a team of experienced developers, you're probably better off giving them tools that experienced developers can use rather than a bunch of form wizards. If you have a team of junior developers, you probably want to give them tools that can help make up for their lack of experience. Issues like this are to me far more important in choosing a platform than minor performance differences. They have the potential to cost or save a hell of a lot more of an organization's money.
  131. not really .NET vs. J2EE

    -----
    Get real guys, this is not a .NET/J2EE comparison, but a .NET/the best (and most expensive) J2EE servers. After only 3 years, .NET is better than the most expensive J2EE environments, not only from a performance point of view, but also in productivity and, not lastly, client expenses required. How much money has a client to spend in order to have the J2EE application performs almost as good as a .NET equivalent ? Well, it seems that if web services are involved, no money can help him.
    -----

    We, in the Java world, have a lot of options. None of them work quite well, but there are so many that you get by. I don't want to pay $10k for WebLogic? Very well, I spend half of that on 5 cheap Intel boxes and make a JBoss cluster. It may be slow and it may occasionally crack, but I've got 5 of them.
    Or I can skip EJBs altogether and go with Tomcat. Tomcat, again, is slow. But it's FREE! I can buy a dual Pentium for the price of a Windows 2003 Server license and have Tomcat on Linux run at least as fast as IIS on a single CPU.
    About the development process, I must admit you're right. I don't know if you've encountered my problems, but the XML files are giving me headaches. AspectJ is a mess and writing a build.xml for Ant is a full-time job.
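    For what it's worth, an Ant build file for a small project doesn't have to be a monster. A minimal sketch (project, target, and directory names here are placeholders, not from any real project) might look like:

```xml
<!-- Minimal hypothetical build.xml: compiles src/ into build/classes
     and jars the result. All names are placeholders. -->
<project name="myapp" default="jar" basedir=".">
    <property name="src.dir"   value="src"/>
    <property name="build.dir" value="build"/>

    <target name="compile">
        <mkdir dir="${build.dir}/classes"/>
        <javac srcdir="${src.dir}" destdir="${build.dir}/classes"/>
    </target>

    <target name="jar" depends="compile">
        <jar destfile="${build.dir}/myapp.jar" basedir="${build.dir}/classes"/>
    </target>

    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>
</project>
```

    The headaches usually start when classpaths, filtering, and deployment descriptors pile on top of this skeleton.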

    Vrajmasu
  132. tomcat on dual pentium

    Why not give PHP a try? It seems to be faster than Tomcat.
  133. Cost of Win2K3

    > I can buy a dual Pentium with the price of a Windows 2003 Server and have tomcat on Linux run at least as fast as a IIS on a single cpu.

    Are you sure? Depending on the configuration of your new machine (2 CPU/2 Gig Mem for example), your Windows 2K3 software might cost as little as $500.
  134. not really .NET vs. J2EE

    Or Resin.
    Resin is widely recognized to be the fastest (and most popular) J2EE servlet engine, very reasonably priced:
    http://news.netcraft.com/archives/2003/04/10/java_servlet_engines.html
    (note that most people use Resin and Tomcat)

    .V
    ps- Maybe we can see performance of Resin w/ jPetStore next release?
  135. not really .NET vs. J2EE

    <quote>Get real, guys: this is not a .NET vs. J2EE comparison, but .NET vs. the best (and most expensive) J2EE servers. After only 3 years, .NET is better than the most expensive J2EE environments, not only from a performance point of view but also in productivity and, not least, in the expenses required of the client. How much money does a client have to spend to have a J2EE application perform almost as well as a .NET equivalent? Well, it seems that if web services are involved, no money can help him.</quote>

    You are right!!! I think the J2EE app server vendors have to do more on performance, development tools, and cost.
  136. One thing that is really on the way out is the refrigerator. I mean those truck-size boxes that Sun and others try to sell at enormous prices and with performance inferior to Intel/AMD's. Those things had a life when Windows was an unusable operating system and you had no choice. Now you can go either with Gates or with Linux, at decent prices and on cheap and powerful hardware. The J2EE app server vendors can't rely forever on "refrigerator compatibility", as in .NET doesn't run on Sun's trucks.
    And yes, in Java you can pull together 2 commercial and 5 open-source applications and build something on them, but they always give the feeling of not quite working. I mean Tomcat won't see that JSP, and so on. There are tons of configurations. In .NET everything is smooth and works almost out of the box.

    Vrajmasu
  137. > And yes, in Java you can pull 2 commercial and 5 opensource applications and build something on them, but they always give the feeling of not working. I mean tomcat won't see that jsp and so on. There are tons of configurations. In .NET everything is smooth and works almost out of the box.

    I think that you're exactly right. A widely diverse Java/Open Source stack takes a LOT of time and effort to "get right" and even then often has a "house of cards" feeling to it. Everything works...most of the time. (At least, that's been our experience in the "glass house" data center.)

    Even in this study, compare the time spent tuning the .NET stack vs. tuning the J2EE stacks.

    > One thing that is really out is the refrigerator. I mean those truck size boxes that Sun and others try to sell at enormous prices and inferior performance to Intel/AMD's.

    I agree. How does Sun keep from going the way of DEC, Data General, etc.? They have a serious challenge ahead of them. They're not going to survive on Java licensing fees, that's for sure.
  138. How Long Did J2EE Tuning Take

    I didn't see it specifically called out in the report - but maybe I missed it. The report says .NET tuning took 3.5 days. The first benchmarking effort took something like 10 man months of effort to tune the J2EE solution (trying to recall from memory so maybe I am wrong but I remember it was a lot of time). How long did it take this time? If I get near equal performance but it takes me an order of magnitude more time to get it, I think that needs to be in the decision matrix.
  139. How Long Did J2EE Tuning Take

    > How long did it take this time?

    A good question. Chapter 15 sure contains a lot more detail than chapter 17.

    It sure sounds like it was a lot longer than 3.5 days.
  140. How Long Did J2EE Tuning Take

    Jeff,

    The reason those numbers weren't published in the report is that it's hard to be accurate on the J2EE side, for a number of reasons, some of which are:

    1. On the J2EE side we had a much bigger tuning matrix; we looked at and tuned four app servers and two versions of the application. So our tuning matrix was at least eight times larger than the .NET side where they could focus on one app and one platform.

    2. We also tested not only Windows 2003 but also Windows 2000 and Linux; Microsoft knew they could concentrate on just Windows 2003.

    3. There is obviously some crossover here: if we tested web server A with app server W, and then web server A with app server Y, how much of the web server tuning time should we count towards the tuning time for app server Y?

    4. When we did our tuning we did find some bugs, and obviously we wouldn't count the bug-fixing time against the tuning.

    5. Not all the items in the test matrix were homogeneous with regard to time spent. Obviously some things were quite quickly eliminated or tuned while others took some time, so we couldn't just take the number of days and divide by the number of matrix items.

    6. We tuned the database, but should that be included in the tuning numbers or not?

    7. Some app servers are easier to tune than others; anyone who’s worked with many app servers has nightmares about that kind of thing!

    I know all that sounds like weaseling out of giving an answer, and like lousy time tracking on our part, but it was a really complex problem to find a truthful answer to, and not our top priority; performance and tuning were.

    So, having said all this I can ballpark an answer, which would be 5-10 days. But, please, please, please keep in mind that this is a ballpark.

    Hope this helps,
    Will
  141. How Long Did J2EE Tuning Take

    If I were a customer looking to build a J2EE app, I'd have to choose among several app servers. If I picked the wrong app server, I'd never get performance close to the right one. (I really want to know which app servers X and Y are.)

    Let's say I'm very lucky and pick the right one. I'll then have to find some J2EE gurus to help me tune up the app server, which will cost me a fortune just to match the performance that .NET offers with very little tuning.

    This is what I found from reading this study.

    Some of you might see J2EE performing as well as .NET. To me, it's not even close.

    xXx
  142. How Long Did J2EE Tuning Take

    You're right.
    It's not even close. It's like this: I'm cooling my room by pressing a button, and you're cooling yours by swinging huge fans by hand. And if you train really hard, you can do about as much cooling as me. I'm .NET and you're J2EE.

    Vrajmasu
  143. > You're right.
    > It's not even close. It's like this: I'm cooling my room by pressing a button, and you're cooling yours by swinging huge fans by hand. And if you train really hard, you can do about as much cooling as me. I'm .NET and you're J2EE.
    >
    > Vrajmasu

    Yeah. In order to cool your room, you must buy M$-specific outlets, M$-specific buttons, and an M$-specific air conditioner, which in fact doesn't fit all rooms. Your J2EE cooling device lets you choose your outlet vendor and size; your button vendor, size, and color (or even a remote control); and a choice of air conditioner, motorized fan, or hand-swung fans from different vendors, some of them free. Plus it has any size of fan to fit any room size, or the whole building if you like. Yeah, it may take longer to install J2EE and regulate it for your room, but man, the temperature sure is right on the spot all year long. And you won't have to turn off the cooling every time the temperature must be changed, either, unlike M$'s cooler. Ain't it cool? :-)
  144. Re: How Long Did J2EE Tuning Take

    And what I found out was:

    If a company goes with .NET it might get equal or better performance. This is the positive. The negative is that there is only a single implementation of .NET (please do not mention mono in this context), which means that if that company encounters a major problem or Microsoft suddenly loses interest in the technology and slows its development (something that has happened consistently and will happen again), there will be no one to turn to. In addition, .NET is tied to Windows, which means that if that OS does not provide or stops providing what the company needs, there is no alternative again.

    If a company goes with J2EE, it will get similar performance. Since it is cheap to throw hardware at the problem, performance almost never matters much as long as it is close enough. What matters much more is stability, scalability, being able to move to another vendor at a fraction of the cost of redevelopment, etc. All of this J2EE clearly provides, while the .NET/Windows combo is somewhat shaky. This makes J2EE a much better strategic choice (as I see it). I do not remember who said it, but he was very right: Microsoft are busily and expertly solving the wrong problem.

    So yes, you are right, it is not even close.

    These types of posts, however, belong in a .advocacy group, not here. If you really wanted to ask how much tuning the J2EE and .NET solutions took, you could have just asked the question. Please make posts appropriate to the topic in the future. Thank you.
  145. ---
    I do not remember who said it, but he was very right: Microsoft are busily and expertly solving the wrong problem.
    ---
    Scott McNealy said it. Nice source for a quote.

    Vrajmasu.
  146. How Long Did J2EE Tuning Take

    Vrajmasu and Edward,

    I don't think you can draw the conclusions from this that you seem to be drawing. A couple of points:

    >I'll then have to find some J2EE gurus to help me tune up the app server which will
    >cause me a fortune just to match the performance that .net offers with very little tuning.

    Nobody ever said that the guys who tuned .NET were not gurus either. In fact they work full time for Microsoft as employees or full-time contractors, so in situations where vendor help, etc. was used, there was much less communication latency.

    Do not forget that this is a performance case study, not a tuning productivity case study. If it were a tuning productivity case study, then you'd have to scientifically measure and adjust for things like vendor involvement, the mental context-switching time between apps, app servers, and operating systems, and the prior knowledge of the people doing the tuning. We're pretty sharp, but the MS guys are right up there too. Maybe we were smarter at J2EE than they were at .NET, but maybe not. You'd need to quantify all that stuff.

    >I'll have to choose among several app servers. If I picked the wrong app server,
    >I'll never get the performance close to the right one.

    One of the things we said in the report was that the limiting factor on App Server Y was the http->J2EE processing time. Please do not forget that this case study looks at one possible configuration of application servers. If this were not a web application, say one that relied more heavily on transaction processing, then there's a good chance that the ordering of the app server performance would have been different. Maybe if we'd allowed HTML output page caching the results would have been different? Each customer's requirements are different, so you cannot draw the conclusion that App Server X is the one all customers should buy because it's always faster in all cases, just as you can't draw the conclusion that .NET is always quicker to tune.

    One last point, in reply to this:

    >I'll then have to find some J2EE gurus to help me tune up the app server.

    The reason we spent so long writing up everything we did on tuning, and why I made that video describing the tuning process, was to remove some of the "guru-ness" from tuning. It is not something that someone with no knowledge can do, but to an extent it is just a methodical process of examination and experimentation.

    Cheers, Will
  147. How Long Did J2EE Tuning Take

    A thought...

    >Nobody ever said that the guys who tuned .NET were not gurus either. In fact
    >they work full time for Microsoft as employees or, full time contractors.

    True, the MSFT guys were probably pretty smart, but look at what the end result of their tuning efforts was. There are twelve pages of J2EE tuning info discussing all the knobs and dials that had to be turned in order to optimize J2EE; .NET got two pages, with a smaller set of "dials" to turn.

    These are broad-brush statements, to be sure, but it is clear that a J2EE person has to understand and tweak many more moving parts. That implies the resource is harder to train and thus harder to hire.
    Perhaps more importantly, the number of moving parts makes it harder to deduce the right way to do things. Granted, your report will help flatten the learning curve, but if the nature of my application changes, I will be back to trying to figure out the right set of dials to turn to get my app to behave.

    .NET just seems easier and I get pretty much the same thing for a lot less work.
  148. New Struts Info

    Some of you said that Struts is not scalable.

    I now believe that some of you tested Struts with DynaBeans.
    I use FormBeans.

    When testing Struts, test with FormBeans; that is how I think most people use it.
    DynaBeans are a bad practice (another reason is that they are harder to unit test).
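    A minimal sketch of the distinction, in plain Java with hypothetical class names (not the actual Struts APIs): a typed FormBean-style property is a real, compiler-checked method, while DynaBean-style forms resolve the property name reflectively at runtime, so a typo only shows up when the code runs.

```java
import java.lang.reflect.Method;

// Hypothetical sketch, not actual Struts classes: a typed FormBean-style
// property versus the reflective lookup that DynaBean-style forms rely on.
public class BeanAccessSketch {

    // Typed bean: the property is a real method, checked by the compiler
    // and trivial to unit test.
    public static class OrderForm {
        private final String itemId = "EST-1";
        public String getItemId() { return itemId; }
    }

    // Reflective access: the property name is a runtime string, so a typo
    // only fails when this code actually executes.
    static Object reflectiveGet(Object bean, String property) throws Exception {
        String getter = "get" + Character.toUpperCase(property.charAt(0))
                + property.substring(1);
        Method m = bean.getClass().getMethod(getter);
        return m.invoke(bean);
    }

    public static void main(String[] args) throws Exception {
        OrderForm form = new OrderForm();
        System.out.println(form.getItemId());              // typed access
        System.out.println(reflectiveGet(form, "itemId")); // reflective access
    }
}
```

    Both calls return the same value; the difference is when errors surface (compile time vs. runtime) and the per-call reflection overhead.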

    Look at the basicPortal design ("best practice" Struts): it is very scalable and proven, and it does use an iBatis caching DAO.

    .V
  149. Summary

    I think the most significant findings are:
    1. The first performance difference appears somewhere around 1,500 virtual users, so for most applications there is no performance difference between the technologies.
    2. The .NET solution peaks and then its performance falls off as the number of users grows. (The graphs only show history up to the point where the App Server X solution meets the .NET solution again; it seems that with more virtual users the App Server X solution would pull ahead again.) So for really high-load applications the App Server X solution is more reliable.
    3. Microsoft did a great job last year.
    4. Java must do a great job next year.

    :-)
  150. Thank You

    I wanted to post a separate message thanking all of you for reading our study, for your support, and for your overwhelmingly positive and constructive comments and feedback on this study.

    I especially want to thank Clinton Begin for (a) all the hard work he did with us during the study, purely out of intellectual curiosity and the goodness of his own heart and (b) continuing to do so here by taking time out of his busy schedule to answer people's questions and comments on this thread.

    I had wanted to say this a lot earlier, like last week, but refrained because I did not want it to seem like a big love fest right when the study was published. But we are indeed truly appreciative.

    Salil Deshpande
    The Middleware Company
  151. Review

    After reading it this weekend, here are my notes, should you want to do the next one:

    1. Use the top two players:
    http://news.netcraft.com/archives/2003/04/10/java_servlet_engines.html
    IBM is slow (according to your tests), and BEA is losing market share (they are #5 according to Netcraft).
    Testing the latest Resin 3 vs. .NET would most likely have J2EE come out on top.
    So I think you should test Tomcat 5, Resin 3, and the latest .NET (not the old J2EE stuff).
    Resin, I think, is widely acknowledged as the fastest J2EE container.

    2. Use the JRockit VM. There were some comments that JDK 1.4.2 might be as fast…; I do not believe it. Since GC is a major issue, use a server-side VM.
    JRockit is also widely acknowledged to be the fastest VM.
    It also has very nice remote monitoring capabilities.
    (The point of J2EE is that we can mix and match, CHOICE!)

    3. Use JSTL 1.1 (for JSP 2.0, in Resin 3) instead of the bean tags. JSTL 1.1 can be optimized by the JSP compiler.

    4. Use FormBeans and avoid DynaBeans. I still suspect that DynaBeans cause the reflection issues. The iBatis JPetStore can be converted to use only collections (as implemented by basicPortal.com).

    5. Make the think time 1 second or less (not 5 seconds; who takes 5 seconds to click?). We want to see a loaded system.

    6. Only run the long tests; there is no need for data on the short test, and it might be less effort for you.

    7. Add a lot more CRUD transactions to invalidate the cache.

    8. Use stored procedures. SQL needs to be compiled, etc., and this would not give an advantage to either side. Stored procedures are a best practice, because one can keep portability from Oracle PL/SQL to Transact-SQL, etc., just by keeping the same procedure name.
    I was a bit concerned that you shied away from stored procedures; a Java or C# developer doing high scalability will be limited by how much experience they have with stored procs (I think).

    9. It is not that interesting to use Oracle; it would be better to use PostgreSQL, so that you could compare the cost of operating MS SQL vs. Resin/PostgreSQL. (I can show you how to write PostgreSQL stored procs.)

    10. Why not run the long-term stability test on Linux?!

    11. Use modern hardware, such as the Newisys 2100.

    12. Use DiselTest (instead of LoadRunner). DiselTest is an open source testing tool (linked from baseBeans.com). This would make the tests scientific! Scientific means that people could reproduce them on their own machines by downloading the iBatis JPetStore and DiselTest. This assumes you would share the scripts you run.
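    On point 5, the effect of think time on offered load can be estimated with Little's Law: offered request rate is roughly users divided by (think time + response time). A rough sketch with assumed numbers (1,500 users and a 500 ms response time are hypothetical, not figures from the study):

```java
// Rough sketch of why think time matters for load (Little's Law):
// offered request rate ~= users / (think time + response time).
// The user count and 500 ms response time are assumed, illustrative numbers.
public class ThinkTimeSketch {
    static double requestsPerSecond(int users, double thinkSec, double respSec) {
        return users / (thinkSec + respSec);
    }

    public static void main(String[] args) {
        int users = 1500;
        double resp = 0.5; // assumed 500 ms average response time
        System.out.printf("5s think: %.0f req/s%n",
                requestsPerSecond(users, 5.0, resp));
        System.out.printf("1s think: %.0f req/s%n",
                requestsPerSecond(users, 1.0, resp));
    }
}
```

    With 5-second think time the same 1,500 users generate well under a third of the request rate they would with 1-second think time, which is why shorter think time stresses the server far more.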
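    On point 8, the portability argument can be sketched with JDBC's database-neutral escape syntax for procedure calls: the same call string works whether the body behind it is Oracle PL/SQL or SQL Server Transact-SQL, as long as both databases define a procedure with the same name and signature. The procedure name below is hypothetical.

```java
// Sketch of the stored-procedure portability point: JDBC's {call ...}
// escape syntax is database-neutral, so only the procedure body differs
// between Oracle PL/SQL and Transact-SQL. The procedure name is made up.
public class StoredProcCallSketch {
    static String callStringFor(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    public static void main(String[] args) {
        // In real code this string would be passed unchanged to
        // connection.prepareCall(...) on either database.
        System.out.println(callStringFor("get_order_status", 1));
    }
}
```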

    I do not buy that EJB is as scalable as you showed it to be vs. the iBatis DAO.

    Congratulations on your conclusions:
    SOAP/WS/SOA is going to be a major player.

    J2EE will have 40% of the market, and the J2EE players will split that 40% among themselves (my guess is it will be split as Netcraft showed above).

    I would be glad to donate my time next time (to show you how I run Struts sites with 40,000 concurrent users running CRUD transactions in under a second).

    .V