News: 883.66 JOPS@Standard on GlassFish V2

  1. 883.66 JOPS@Standard on GlassFish V2 (15 messages)

    GlassFish V2 now has the best SPECjAppServer 2004 result on a Sun Fire T2000. Read Scott's blog for the details and caveats. From Scott's blog:

    "Good enough" is no longer good enough. Today, we posted the highest ever score for SPECjAppServer 2004 on a single Sun Fire T2000 application server: 883.66 JOPS@Standard. The Sun Fire T2000 in this case has a 1.4GHz CPU; the application also uses a Sun Fire T2000 running at 1.0GHz for its database tier. This result is 10% higher than WebLogic's score of 801.70 JOPS@Standard on the same app server machine. In addition, it is almost 70% higher than our previous score of 521.42 JOPS@Standard on a Sun Fire T2000, although that earlier Sun Fire T2000 was running at only 1.2GHz. That doesn't mean we are 70% faster than we were, but we are substantially faster and are quite pleased to have the highest ever score on the Sun Fire T2000. The result reflects improvements in many areas of GlassFish, such as EJBs, JMS, JSP compilation, the Servlet connector and container, connection pooling and CMP 2.1. GlassFish V2 has also improved in areas not covered by SPECjAppServer 2004, like Web Services, EJB 3.0, JSF, JPA and others.

    [Editor's note: SPECjAppServer 2004 is neat, but it uses outdated technologies that many performance-oriented programmers would prefer not to use any more; EJB 2.x and JSP are good examples. It would be interesting to see an equivalent of SPECjAppServer updated to use things like Spring, EJB 3.0, and perhaps some of the newer and speedier templating technologies. Any takers? Of course, SPECjAppServer is a lot of work; it's not a matter of just implementing an equivalent in a day or so. But it would still be interesting to see an updated version.]

    So, what's your experience with GlassFish? Have you used it?
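    For what it's worth, the quoted percentages check out. A quick sanity check (the scores are the published numbers; the rest is just arithmetic):

    ```java
    // Quick arithmetic check of the speedup percentages quoted above.
    public class JopsComparison {
        public static void main(String[] args) {
            double glassfishV2 = 883.66; // GlassFish V2, 1.4GHz Sun Fire T2000
            double weblogic    = 801.70; // WebLogic on the same machine
            double previousGf  = 521.42; // earlier GlassFish result, 1.2GHz T2000

            System.out.printf("vs WebLogic:           +%.1f%%%n",
                    (glassfishV2 / weblogic - 1) * 100);   // ~10.2%
            System.out.printf("vs previous GlassFish: +%.1f%%%n",
                    (glassfishV2 / previousGf - 1) * 100); // ~69.5%
        }
    }
    ```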
  2. I used Glassfish V1 and V2. I think other AS are superior, like JBoss, Geronimo and, of course, WebLogic.
  3. I think other AS are superior, like JBoss, Geronimo and, of course, WebLogic.
    Superior in what?
  4. I used GF v1 and found many bugs that other app servers don't have. Moreover, GF lacks widely used features like AOP. Every time I go to the GF home page, I see a beta/alpha/buggy version to download, whereas JBoss is stable. I prefer JBoss: it is free and has many good features!
  5. I used GF v1 and found many bugs that other app servers don't have.
    "Many" relative to what? We are talking about GlassFish v2, which is the only open source application server that has published benchmark results and is Java EE 5 compatible.
    Moreover, GF lacks widely used features like AOP.
    Sorry, but it is relatively easy to add AOP to most application server runtimes. I am currently working on a major AspectJ (AOP!) extension to our performance management product that will work with GlassFish and other application servers, including JBoss AS. Recent AspectJ blog entries: http://blog.jinspired.com/?cat=38 regards, William
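    By way of illustration, here is a minimal AspectJ sketch of the general technique. It is only a sketch: the package name is hypothetical and this is not our actual product code.

    ```java
    // Minimal AspectJ sketch: times every public method in a (hypothetical)
    // application package. Woven into the classes at load time, it needs no
    // AOP support from the application server itself.
    public aspect MethodTimer {

        // "com.example.app" is a placeholder for the monitored application.
        pointcut monitored():
            execution(public * com.example.app..*.*(..));

        Object around(): monitored() {
            long start = System.nanoTime();
            try {
                return proceed();
            } finally {
                long micros = (System.nanoTime() - start) / 1000;
                System.out.println(thisJoinPointStaticPart.getSignature()
                        + " took " + micros + " us");
            }
        }
    }
    ```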
  6. I used Glassfish V1 and V2.
    I think other AS are superior, like JBoss, Geronimo
    and, of course, WebLogic.
    Content-free seems to be the name of the game, so I'll just say I disagree.
  7. Working on a Java EE performance management product means I spend a significant amount of time with each of the major application servers (at least the ones that install and work out of the box). GlassFish has come on leaps and bounds in the last year. You would be very foolish not to include it in any application server evaluation, especially as it has a complete application administration web console that is comparable to the offerings from the leading commercial vendors and far exceeds anything offered by other open source vendors (if they offer one at all). regards, William
  8. Hi William, I also find it surprising that a popular open source vendor who claims (or claimed) to have the most downloads ever has never published a SPECjAppServer result. How do the performance characteristics of this product compare with the others, based on your experience? -krish PS: Long time, no hear. Regards to Rod Peacock as well.
  9. To be honest, my experience with GF has been mainly in the lab, where performance testing is focused on our core components and services and not on the application server. I have seen noticeable improvements in inter-component request processing (CORBA), which I believe has been worked on over the last year. It will probably never be as fast as the Borland application server was in the good old days, ;-). With many of the application servers using similar technologies (i.e. TopLink and Tomcat), the bigger battle (at least in terms of performance management) is in the scalability of distributed application management and not just raw runtime performance. What's the point of being the fastest at executing a piece of bytecode or processing a client request if you are also the fastest at crashing? Scalability is not just a runtime concern (number of clients/requests) but a deployment concern (number of nodes/cells/partitions/apps/...). How does one go about designing global and local deployment topologies that balance so many concerns (scalability, reliability, performance, manageability, ...)? Kind regards, William PS: Rod is fine and working hard on platform ports.
  10. I used Glassfish V1 and V2.
    I think other AS are superior, like JBoss, Geronimo
    and, of course, WebLogic.
    Will be very interested in knowing your comparison basis. GlassFish V2 is open source, and it performs better than these other open source and paid application servers. Open source and performant is a win-win for customers. Wouldn't it be nice to see some performance publications from the other open source application servers?
  11. Pleasure to deal with

    I've been using v1 and v2 for over six months now, and they've been a pleasure to deal with. Admittedly a lot of that is down to the new Java EE 5 changes, but in terms of price tag, deployment, Ant scripting, administration interface, IDE integration and general time-to-fix for defects, it's been a step up from any other AS I've had to code for. Certainly the app servers you've mentioned don't rate above GF for me, in any department.
  12. SPECjAppServer 2004 is neat, but it uses outdated technologies that many performance-oriented programmers would prefer not to use any more; EJB 2.x and JSP are good examples.
    This assumes that these particular technologies contribute greatly to the performance profile of the benchmark. It also assumes that the design of the technology and/or benchmark precludes optimizations available in newer technologies, such as a reduction in the number of expensive SQL statements executed, which, by the way, was a significant factor in the 2002 version of this benchmark.

    What would be much more useful in the published results would be a complete breakdown of the performance profile across tiers, layers, components and resources. Then we could ascertain whether improvements are the result of changes in the database product, the application server request pipeline, memory usage, resource interaction, the JVM (GC!!!), and so on. With this type of information there would be less black magic and a better awareness of the strengths and weaknesses in each product stack.

    I would be very interested in creating an execution profile so that a standard performance model could be obtained for comparison across products. The execution profile could also be used to verify that the benchmark was actually performed in the same manner (transaction semantics) as designed. Unfortunately, the last time I looked, one had to pay to obtain the binaries even for non-reporting purposes.

    regards, William Louth, JXInsight Product Architect, CTO, JINSPIRED "Performance monitoring and problem management for Java EE, SOA and Grid Computing" http://www.jinspired.com
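    To make that concrete, here is the kind of per-tier breakdown I have in mind. Everything in it is hypothetical; the tier names and percentages are invented purely for illustration:

    ```java
    // Hypothetical illustration only: the shape of a per-tier execution
    // profile that would make benchmark results comparable across stacks.
    public class TierProfile {
        public static void main(String[] args) {
            // tier -> share of total response time (invented numbers)
            String[][] profile = {
                { "Servlet/JSP (web tier)",       "18%" },
                { "EJB container / transactions", "27%" },
                { "JDBC / connection pooling",    "22%" },
                { "Database execution",           "25%" },
                { "JVM overhead (GC, JIT)",        "8%" },
            };
            for (String[] row : profile) {
                System.out.printf("%-32s %s%n", row[0], row[1]);
            }
        }
    }
    ```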
  13. I would be very interested in creating an execution profile so that a standard performance model could be obtained for comparison across products. The execution profile could also be used to verify that the benchmark was actually performed in the same manner (transaction semantics) as designed. Unfortunately, the last time I looked, one had to pay to obtain the binaries even for non-reporting purposes.
    And finally I would get the "good heap size" numbers. :-)
  14. Different JVM versions

    This is not an apples-to-apples comparison. What Sun "forgot" to mention in their blog is that the WLS result used Sun JDK 1.5.0_10, whereas the Sun app server used JDK 6 Update 2. Sun stated in the Java 6 release that the new JDK was significantly faster than JDK 5, and Sun has also made a fair set of improvements in JDK 6 Update 2. While we cannot know how much of a difference this makes, there are lots of results out there indicating that it may be significant. Also, the comparison uses Sun hardware, Sun's OS and Sun's JDK, and both submissions - WLS and the Sun app server - were made by Sun, as were the older WLS and WebSphere submissions that Scott alludes to in the blog. So all we are seeing is Sun leapfrogging themselves on their own platform. Henrik Stahl (BEA)
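    As a footnote, the JVM details at issue here are trivial to capture and publish alongside any result; a minimal sketch:

    ```java
    // Minimal sketch: record the JVM details that decide whether two
    // benchmark results are an apples-to-apples comparison.
    public class JvmFingerprint {
        public static void main(String[] args) {
            for (String key : new String[] {
                    "java.version", "java.vm.name", "java.vm.vendor",
                    "os.name", "os.arch" }) {
                System.out.println(key + " = " + System.getProperty(key));
            }
        }
    }
    ```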
  15. Henrik makes a good point that JDK 6 contributes to our performance (though probably not as much as he assumes). Which is yet another reason why GlassFish is the superior app server: we've supported JDK 6 for more than six months. WebLogic doesn't support it today, and as far as I've heard they have no plans to support it for months to come. So it's not apples-to-apples, but then again, neither is the marketplace.
  16. Hi Scott, This does point out that benchmarks are much more useful to vendors than to customers, especially since every customer application has a different profile in terms of transaction mix (in the business use case sense) and technology stack. Without a transparent software execution model (best case, an execution graph) as well as a system execution model (performance characteristics under different workloads), customers are as blind as ever to the runtime differences. That said, the benchmarks do at least keep the major application server vendors pushing ahead with general runtime performance improvements, even if at times there is some "black magic" or "custom tailoring" used to win a temporary lead. I think it is more important that each vendor keeps jostling for a better position in the race, and less so who the current leader is. Of course, the performance engineering team must be happy with the results, which by and large are valid for most customers and not just the few using Java 6. regards, William