Servlet Performance Report updated

  1. Servlet Performance Report updated (34 messages)

    Web Performance has updated the Servlet Performance Report to include the latest versions of Tomcat, Orion, Resin, WebSphere, Jetty and JRun. In addition, the testing methodology has evolved to include persistent connections, multiple connections per user, session-tracking and reading/writing session data. One of the most popular servers improved dramatically since the previous report and a newcomer jumped to the front of the pack.
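
    To make "session-tracking and reading/writing session data" concrete, the kind of servlet such a test exercises looks roughly like this sketch (a hypothetical example for illustration, not code from the report):

      // Hypothetical sketch: force a session read and a session write on
      // every request, as the methodology describes.
      import java.io.IOException;
      import java.io.PrintWriter;
      import javax.servlet.ServletException;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;
      import javax.servlet.http.HttpSession;

      public class SessionCounterServlet extends HttpServlet {
          protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                  throws ServletException, IOException {
              HttpSession session = req.getSession(true);            // session-tracking
              Integer hits = (Integer) session.getAttribute("hits"); // read session data
              int next = (hits == null) ? 1 : hits.intValue() + 1;
              session.setAttribute("hits", new Integer(next));       // write session data
              resp.setContentType("text/plain");
              PrintWriter out = resp.getWriter();
              out.println("Hits this session: " + next);
          }
      }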

    Analysis
    Most of the servers in this test performed in a solid, predictable fashion - scaling well under increasing load up to the limits of the CPU. JRun, Tomcat and Resin beat out the competition for handling peak hits/sec. Tomcat and Resin showed higher peak throughput and lower page load times than JRun, which sacrificed these metrics slightly in favor of handling a greater load than the others. Jetty consumed the full CPU a little earlier than the leaders and showed substantially higher page load times as it neared peak capacity.

    Orion never used the full CPU that was available to it - which led me to suspect, based on the performance results from the previous version of this test, that it would rival the leaders if the default configuration allowed the servicing of more simultaneous connections. This theory was confirmed by adding <max-http-connections value="100000"> to the Orion configuration, which allowed Orion to survive the test for another 90 seconds, coming close to the performance of Tomcat and Resin. Because this is outside of the test guidelines, the results are not included in this report.
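
    For the curious, that tweak is a single element in Orion's configuration; a reconstruction of the fragment named above (the exact file and placement may differ by Orion version, so treat this as a sketch) is:

      <!-- sketch: raise the cap on simultaneous HTTP connections -->
      <max-http-connections value="100000" />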

    WebSphere also did not use the full CPU during the tests, so I suspected it, too, is capable of better performance. I increased the maximum thread pool size (from 300 to 500). This resulted in a considerable improvement in both page load time and throughput capacity. However, it would still need more tuning to catch up with the rest of the pack. That tuning is beyond the scope of this report.
    Read more at http://www.webperformanceinc.com/library/ServletReport/

    Threaded Messages (34)

  2. Where's WebLogic?

    Overall, that is a fair comparison.

    It is interesting to see WebSphere and Jetty perform so badly. I would be interested in seeing a performance test that takes into account each server's recommended performance tuning.

    It would be nice to see how WebLogic fits in as well.
  3. Where's WebLogic?

    Well, as you can see if you read the report:

    "Several servers were originally targeted for this report, but results have been withheld because their licenses prohibit the publication of performance benchmarks without permission. Permission was requested from these vendors, but the permission has not yet been granted. We hope to include them in the report soon after we receive permission to publish. These servers include:
    Sun Java System Application Server
    BEA WebLogic Application Server
    Pramati Server"

    So, that's where it's at. ;-\

    /Par
  4. re: Where's SUN app server?

    "We hope to include them in the report soon after we receive permission to publish. These servers include: Sun Java System Application Server, BEA WebLogic Application Server, Pramati Server."
    Sun Java System Application Server 7/8 uses Tomcat as the web container anyway.
  5. Where's WebLogic?

    "Several servers were originally targeted for this report, but results have been withheld because their licenses prohibit the publication of performance benchmarks without permission.
    Yes, it is used to protect product reputation from fake benchmarks ( this crappy benchmarks is an example ).
  6. Bla bla bla...

    Everyone is ranting about crappy benchmarks here and there. I think WebLogic would suck wind kinda like WebSphere does. Having worked in WAS 4 and other application servers, WebSphere tuning is not as critical to performance as you'd think, from my experience. Resin and Orion run like stink, and are blazing fast.

    Just because BEA doesn't want a crappy report doesn't mean the testing is invalid. I imagine that if I fired up all these servers on a box, I would get the same results as well. So how is that crappy?

    The only major compelling reason to go with BEA and IBM for app servers is their integration with their development tools and for large clusters. I can't say for sure whether the large-cluster performance of WebLogic or WebSphere would be better than the rest. It should be.

    At small-to-medium scale, though, Resin, Orion, and Tomcat wipe them out. Look at which server ISPs use for hosting: Resin.

    The benchmark seems right on the money.
  7. Bla bla bla...

    Just because Resin and Tomcat are fast servlet containers doesn't mean tuning is not important or that this benchmark produces meaningful results. It is possible to "prove" MySQL is the fastest RDBMS in the world this way (I saw it on the MySQL home page a few years ago; I hope they dropped that crap too).
  8. usefulness of benchmarks

    Just because Resin and Tomcat are fast servlet containers doesn't mean tuning is not important or that this benchmark produces meaningful results. It is possible to "prove" MySQL is the fastest RDBMS in the world this way (I saw it on the MySQL home page a few years ago; I hope they dropped that crap too).
    By now, I would hope seasoned developers know better than to take simple benchmarks as a real indication of application performance. As countless people have stated, the only valid benchmark is one you run for your own webapp. These results are still useful as baseline numbers for comparison. Say I write a webapp and I benchmark it. I can use these numbers as a base of comparison. Of course you can always run your own baseline benchmark, but it doesn't hurt to have other benchmarks to verify or counter one's own results.
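
    At its simplest, such a baseline probe is a few lines of Java. This single-threaded sketch is for illustration only (a real baseline needs concurrent users and a proper load-testing tool):

      // Hypothetical probe: measure sequential requests/sec for one URL.
      import java.io.InputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      public class BaselineProbe {
          public static void main(String[] args) throws Exception {
              URL url = new URL(args.length > 0 ? args[0] : "http://localhost:8080/");
              int requests = 1000;
              byte[] buf = new byte[4096];
              long start = System.currentTimeMillis();
              for (int i = 0; i < requests; i++) {
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  InputStream in = conn.getInputStream();
                  while (in.read(buf) != -1) { /* drain the response */ }
                  in.close();
              }
              long elapsed = System.currentTimeMillis() - start;
              System.out.println((requests * 1000.0 / elapsed) + " requests/sec");
          }
      }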
  9. usefulness of benchmarks

    It is useful if it can help me to tune my current deployment, but as I understand it, this benchmark is for monkeys ("you do not need to learn it!!!").
  10. monkeys are people too

    It is useful if it can help me to tune my current deployment, but as I understand it, this benchmark is for monkeys ("you do not need to learn it!!!").
    I've seen plenty of cases where someone benchmarks Tomcat and complains it's slow. After lots of emails back and forth, it turns out the user has a ton of other apps running. Sadly, most of the time, developers are lazy and don't run baseline tests. Instead they write their app and deploy it without stress testing it. Once it blows up, they scream on the tomcat-user mailing list that Tomcat is bad or not performant. After even more emails, the user tracks down the problem and fixes their webapp. Of course this happens with every servlet container. I agree it's not useful for experienced developers, but for a total newbie it might be useful. "Might" being the key word.
  11. And where is JBoss?

    That would be interesting, especially for a Free Software vs. Pay Software comparison.

    The bad performance of WebSphere is surprising, but I think there's more potential for tuning in WebSphere than in the other app servers. OTOH, WebSphere is a behemoth which is hard to tame and will pay off only for really large-scale applications.
  12. And where is JBoss?

    Then again, you already have that in Tomcat vs. Resin, for example. OTOH, not to compare apples and oranges, I would agree that it would have been interesting to see how the most popular full-fledged app servers, commercial and "free", perform against each other.

    That said, I don't think it's very surprising that a more lightweight servlet container (e.g. Tomcat) performs better, measuring throughput/CPU utilization serving JSP pages/servlets, than a more heavyweight J2EE server (e.g. WebSphere). Why?

    * The latter needs to find a perfect balance between all its components, in particular the resources devoted to its EJB container vs. the servlet container. Even when the former is not used, it still takes up memory and probably even a certain amount of CPU time, in terms of monitoring and garbage collection.

    * The latter has more parameters to configure, thus making it harder to configure appropriately, which can also be concluded by reading a little between the lines in the report (WebSphere not utilizing the full CPU under extreme conditions).

    * The latter, especially the commercial ones, are marketed as "enterprise", motivating the large license fees, and thus will be designed even more towards reliability, scalability (was clustering deactivated in WebSphere, and can it be?), security and transaction integrity - probably at the not negligible cost of pure JSP page/servlet throughput (on a single node).

    "Performance" reports will always be questionable, and almost always will be in favour of the products that prioritize "simple numbers" such as "pages served/seconds". In reality, most large installations need clustered environments with enterprise characteristics, making these kinds of reports not always, or not at least directly, applicable.

    That said, I don't disfavour lightweight frameworks and products (e.g. Tomcat + a DI/AOP framework). On the contrary, I think they are great and offer a very hard-to-beat bang-for-the-buck ratio. :) BUT, a large installation with high demands on enterprise characteristics WILL need time for configuration, optimization and trial-and-error tuning of everything from network routers and OS kernels up to actual app server parameters to be successful, and whether product A is better than product B after all that is done is not necessarily reflected by the performance report in question.

    Anyway, kudos to IBM for allowing the publication of these performance figures, even though they must have known that it wouldn't be in their favour, simplistically speaking...
  13. And where is JBoss?

    What do you think JBoss uses? JBoss ships with Jetty or Tomcat. Lately they ship with Tomcat as the default. So you have the results for JBoss.
    Hint: look at the Tomcat and Jetty results
  14. And where is JBoss?

    OK - based on my current job, I was caught by the "J2EE" in the test subject but missed the "servlet". So this test only covers servlet operations - no EJB etc. - and then of course JBoss should be comparable to the standalone Tomcat results.
  15. And where is JBoss?

    OK - based on my current job, I was caught by the "J2EE" in the test subject but missed the "servlet". So this test only covers servlet operations - no EJB etc. - and then of course JBoss should be comparable to the standalone Tomcat results.
    Comparable, but not necessarily the same. For example, we used to have a problem with Tomcat-in-JBoss performance due to the JBoss UnifiedClassLoader being slow, which affected the whole app. Not sure if they fixed that now, but we fixed the problem by removing JBoss.
  16. And where is JBoss?

    So what did you replace JBoss with? If your application needs the additional J2EE APIs beyond the servlet spec, did you replace the rest of the container but still use Tomcat?

    I take it you didn't write the UnifiedClassLoader then ;)
  17. And where is JBoss?

    So what did you replace JBoss with? If your application needs the additional J2EE APIs beyond the servlet spec, did you replace the rest of the container but still use Tomcat?
    I take it you didn't write the UnifiedClassLoader then ;)
    We're using plain Tomcat instead. We were using a transaction manager, mail, and JDBC, but Tomcat has all that, so we're fine without JBoss.
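
    For anyone wondering how that works in practice: standalone Tomcat exposes such resources through JNDI. A minimal sketch in Tomcat 5.0-style context configuration (the resource name, driver and URL here are hypothetical; see the JNDI resources HOW-TO for your Tomcat version):

      <Context path="/myapp" docBase="myapp">
        <!-- DataSource published at java:comp/env/jdbc/MyDB -->
        <Resource name="jdbc/MyDB" auth="Container" type="javax.sql.DataSource"/>
        <ResourceParams name="jdbc/MyDB">
          <parameter><name>driverClassName</name><value>org.hsqldb.jdbcDriver</value></parameter>
          <parameter><name>url</name><value>jdbc:hsqldb:hsql://localhost/mydb</value></parameter>
          <parameter><name>username</name><value>sa</value></parameter>
          <parameter><name>password</name><value></value></parameter>
        </ResourceParams>
      </Context>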
  18. crap

    Out-of-the-box Performance
    It is useful to know how these servers will perform immediately after installation. A large number of projects do not have the expertise or the resources to perform extensive performance tuning. The tests are performed on each server in their default configuration after installation. (see the Methodology section).
    I mean, really... some servers, e.g. Resin 2.x, come with very straightforward information in the config file that lets you improve performance, and I am sure the others do too. You don't have to be an expert in performance tuning to understand a comment like this:

      <!--
         - For production sites, change class-update-interval to something
         - like 60s, so it only checks for updates every minute.
        -->
  19. crap

    Out-of-the-box Performance
    It is useful to know how these servers will perform immediately after installation. A large number of projects do not have the expertise or the resources to perform extensive performance tuning. The tests are performed on each server in their default configuration after installation. (See the Methodology section.)
    I mean, really... some servers, e.g. Resin 2.x, come with very straightforward information in the config file that lets you improve performance. You don't have to be an expert in performance tuning to understand a comment like the class-update-interval one quoted above.
    Yes, I have never seen a lamer publication on TSS either.
    It looks like monkeys have started to implement, deploy, and maintain "enterprise" applications. It explains why J2EE is too complex...
  20. BS BS BS BS BS

    This report is COMPLETE BS !!!

    You folks do realize that he compared the servers using their default config files...right?

    "It is useful to know how these servers will perform immediately after installation. A large number of projects do not have the expertise or the resources to perform extensive performance tuning. The tests are performed on each server in their default configuration after installation."


    Well, the variance between default config files is massive...and is often optimized for DEVELOPMENT. For example, here are a couple of entries from Resin's standard conf:


      <!--
         - For production sites, change class-update-interval to something
         - like 60s, so it only checks for updates every minute.
        -->
      <class-update-interval>2s</class-update-interval>


      <!--
         - Expires time for a cacheable file. Production sites will
         - normally change this to '15m'
        -->
      <cache-mapping url-pattern='/' expires='2s'/>
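
    For comparison, the production values those comments describe would look like this (constructed from the comments above, not quoted from a shipped resin.conf):

      <class-update-interval>60s</class-update-interval>
      <cache-mapping url-pattern='/' expires='15m'/>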


    Leaving those values as-is completely invalidates the test...unless you ensure that all the other servers have similar values - no better, no worse. And who cares about development performance?! How about performance with production settings? I'm sorry, but the statement "A large number of projects do not have the expertise or the resources to perform extensive performance tuning" is a complete BS rationalization that the tester is using as an excuse because they are either lazy or incompetent.

    Look at those standard settings from the Resin config listed above. If there is not one person on a project who can look at those statements and adjust the file for production accordingly, then that project needs to be shut down. What chance does that team have to create a useful, stable, and extensible system if they can't even read a simple statement and cut and paste a value?

    What a pile of BS. Oh yeah, if I am worried about production performance because I am building one of the minority of sites that care, I will also be a little interested in clustering performance.
  21. Ummm, no, it's not BS

    The only possible candidate for "BS" would be WebSphere, simply because it's so out of line with all of the others, and it can arguably be tuned to keep up (as mentioned in the article). None of these servers has a magic "go faster" switch that's going to give you an order-of-magnitude boost in performance, and when folks say "the site is too slow", they're not looking for a 5-10% boost in response.

    What this report says to me is that if you're going to select a servlet container, you should be looking at features, tools, support, documentation, etc. and not performance. These all are so similar that the differences are pretty much negligible.

    I think anyone can choose any of these containers and be happy with them. I'm sure they're much better with something more modern than 4-year-old hardware as well.

    Also, the servlet container doesn't have to be the fastest part of the system, just faster than its backing database server (and I'll bet all of these are fast compared to a DB on the same system).
  22. Bullshit

    The tests used in this benchmark seem very weird. How come most of the "fast" appservers are so close to each other? What's being tested here is the benchmark client. The only reason the expensive and crappy servers show up at the bottom is because they're truly crappy. I refuse to believe that Tomcat somehow moves from being almost dead-last to being the fastest. How did that happen?
  23. The tests used in this benchmark seem very weird
    On the contrary, they are exactly what logic, reason, and common sense would expect them to be.

    Rolf Tollerud: "magic is not possible ...But I can give you an educated opinion."
    http://www.theserverside.com/news/thread.tss?thread_id=26932#128070

    But then again, logic, reason, and common sense are not strong points of Java people, are they? Neither is a sense of proportion. It stopped being technology years ago and became a religion.

    Regards
    Rolf Tollerud
  24. Yes, your religion is better; this magic technology does not exist at this time, but it will kill Java. Continue to wait for it.
    The tests used in this benchmark seem very weird
    On the contrary, they are exactly what logic, reason, and common sense would expect them to be. ... It stopped being technology years ago and became a religion.
    /Rolf Tollerud
  25. The tests used in this benchmark seem very weird

    The tests used in this benchmark seem very weird. How come most of the "fast" appservers are so close to each other? What's being tested here is the benchmark client. The only reason the expensive and crappy servers show up at the bottom is because they're truly crappy. I refuse to believe that Tomcat somehow moves from being almost dead-last to being the fastest. How did that happen?
    Could you please be a little more specific? What do you mean by weird? Is it just that you don't like the results? Or do you have some data to back up your assertion?

    Why do you not believe that Tomcat has improved considerably? There were several changes to the test methodology (as documented in the report)...but beyond that, I know many have reported improvements in Tomcat over the past year. Do you have some numbers to go along with your claims?
  26. Don't Forget About Vendor Support

    I work for one of the largest credit card companies and we use WebSphere for our distributed platform.

    Performance aside, IBM's support is top notch. They are very responsive and knowledgeable about their products. For any problem, we are able to tap into their support people to get answers. We sometimes get access to the WAS developers themselves.
  27. Ummm, no, it's not BS

    None of these servers has a magic "go faster" switch that's going to give you an order-of-magnitude boost in performance, and when folks say "the site is too slow", they're not looking for a 5-10% boost in response.
    Oh yes, they do. There's a setting in WebLogic that disables servlet and JSP reloading (similar to the one for Resin mentioned in this thread). At the company I work at, those that can't remember the XML tags actually call it the "make it fast" switch (a rough, direct translation). In our typical applications, disabling reloading slashes response time for complex pages (many JSP forwards, a few filters, almost all content dynamically generated) from 1500 ms to the 100-200 ms range. I guess not disabling reloading has similarly disastrous consequences in this simple benchmark, as it is dominated by the server's per-request overhead.
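
    For reference, my best recollection of the relevant weblogic.xml fragment (WebLogic 8.x era; verify the element names against your version's docs, and note that -1 disables the reload checks):

      <weblogic-web-app>
        <container-descriptor>
          <!-- -1: never re-check servlet classes for changes -->
          <servlet-reload-check-secs>-1</servlet-reload-check-secs>
        </container-descriptor>
        <jsp-descriptor>
          <jsp-param>
            <!-- -1: never re-check JSP pages for changes -->
            <param-name>pageCheckSeconds</param-name>
            <param-value>-1</param-value>
          </jsp-param>
        </jsp-descriptor>
      </weblogic-web-app>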
  28. Ummm, no, it's not BS

    I hope this test uses the same JVM parameters, like -Xmx, for every server. That is important for performance even in a lame test, is it not?
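
    For instance, giving every container an identical, fixed heap comes down to one pair of flags on the launch command (the values and the server.jar placeholder below are illustrative, not from the report):

      java -server -Xms512m -Xmx512m -jar server.jar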
  29. or not...

    This report is COMPLETE BS !!!
    We have adjusted the configuration settings that you mentioned and have updated the report to reflect the changes - see Addendum 1. The changes had no discernible effect on the test results.

    http://webperformanceinc.com/library/ServletReport/index.html#addendum1

    The report is not perfect, but the issues you have raised are not valid for this test case.

    Chris
  30. Jetty 5?

    I wonder if anyone could guess the performance of Jetty 5 compared to Jetty 4. As I remember, Tomcat 4 was slower than Jetty, so it would be really important to know whether improvements similar to what we now see in Tomcat 5 have already been made in Jetty 5 development.
  31. credit to tomcat-developers

    All the hard work the Tomcat developers have put into Tomcat 5 has paid off. Over the last two years, Remy and a couple of others have steadily profiled and improved components of Tomcat to get it to this level. There are still more improvements underway, so the performance will continue to improve.
  32. Absolutely

    Tomcat absolutely ROCKS! My congratulations to its contributors. They have done a marvelous job by providing the Java community with a best-of-breed servlet container, with no strings attached.

    aXe!
  33. Absolutely

    I agree, Tomcat does rock. I didn't have to read the benchmarks to realize that. But I heard Tomcat was being phased out. Does anyone know if that is true?

    Regards,
    Michael
  34. It is a pity this benchmark used the default configurations of the servers. There is little to be learned when comparing Jetty with 50 threads and Tomcat with 150 (the defaults for the two servers; the relevant settings are sketched below).
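
    For reference, the knobs in question are, roughly, Jetty 4's listener configuration in jetty.xml and Tomcat 5's connector in server.xml. These fragments are reconstructed from memory, with the thread counts taken from the post above; exact names vary by version:

      <!-- jetty.xml (Jetty 4): -->
      <Call name="addListener">
        <Arg>
          <New class="org.mortbay.http.SocketListener">
            <Set name="Port">8080</Set>
            <Set name="MaxThreads">50</Set>
          </New>
        </Arg>
      </Call>

      <!-- server.xml (Tomcat 5): -->
      <Connector port="8080" maxThreads="150"/>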

    But there are interesting elements to this study, and I've blogged the Jetty response at MBlog.
  35. Smart post, Greg. Thanks for the clarification.