Sun claims best-in-class price/performance score for Sun AS8.1

  1. Sun reports a SPECjAppServer2004 benchmark result demonstrating what it claims is the best price/performance in the application tier for its Application Server 8.1 Standard Edition, with a score of 1201.44 jAppServer Operations Per Second (JOPS).

    The SPECjAppServer2004 benchmark reflects the rigors of the complex applications and high-volume transaction processing typical of today's customer environments. The test spans all major components of the application server, including Web serving, the EJB architecture, and messaging, and it covers hardware, application server software, Java Virtual Machine software, database software, and the system's network.

    During the benchmark, the Application Server supported over 9400 concurrent clients, and processed over 4.3 million complex transactions per hour.
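    A quick back-of-envelope check (my own arithmetic, not part of the SPEC report): the transactions-per-hour figure is consistent with the published JOPS score, since JOPS is simply operations per second.

```java
// Sanity check: 4.3 million transactions/hour divided by 3600 seconds/hour
// comes out just under the published score of 1201.44 JOPS, which fits,
// since "over 4.3 million" is a rounded-down figure.
public class JopsSanityCheck {
    public static void main(String[] args) {
        double txPerHour = 4.3e6;                 // "over 4.3 million"
        double txPerSecond = txPerHour / 3600.0;  // roughly 1194 per second
        System.out.println(txPerSecond + " transactions/sec");
    }
}
```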

    The key point here, however, is the price/performance measure, because IBM has published results showing higher JOPS scores with WebSphere in various hardware configurations.

    Has anyone considered scoring the other application servers? Does the SPECjAppServer benchmark reflect real loads, or does it test only outlying conditions? What are your thoughts?

    Threaded Messages (19)

  2. He said / She said

    I think it is dumb that vendors don't allow their app servers to be performance tested by anyone (not just someone they paid for the results) **and have the results published**. I will never take one of these dumb tests seriously until they can be run, and published, by anyone.
  3. Re: He said / She said

    Umm...

    We all know that the only thing benchmarks measure is benchmarks.

    The reason vendors don't like other people publishing benchmarks of their software is that, particularly with a benchmark as complex as this, there are many things that can go wrong.

    Would you trust IBM to benchmark Sun's app server? Or BEA? If you don't trust Sun to benchmark its own app server, why would BEA or IBM be any better, when they're motivated to show it in a bad light?

    Why should I trust the benchmark of an arbitrary third party? How do I know they went through all of the due diligence to get the system working properly? That they didn't miss some OS or other configuration parameter? That they're not a shill for another vendor?

    Maybe Consumer Reports can do benchmarks.

    It seems that no one is qualified to benchmark Linux, Solaris, Windows, EJB servers, file systems, or databases.

    You can't trust any of them.

    In the end, the only benchmark that ever matters is your benchmark of your application on your hardware. Micro benchmarks don't matter, industry benchmarks don't matter, none of it.

    But this SPEC benchmark is pretty well documented. You can get it yourself if you like ($2000, but you can get it), grab Sun's app server software, buy a couple of Sun boxes, and test it out for yourself.

    Or you can take the benchmark at face value, assume all of the vendors will cheat in their favor, but figure they can only fake so much, and that all of these benchmarks have a similar margin of error padded into them.

    What this shows me, though, is that Sun is still putting at least some effort into its app server. It seems to have no traction in the community, but I don't know what it's like in the Real World.
  4. Re: He said / She said

    I agree and disagree with you.

    Yes, I agree that face-value benchmarks are not useful, and yes, most people's benchmarks would be useless. I also agree that companies should do their own performance tests to determine whether their app works on their hardware.

    But I do think such tests would be useful for finding specific problems with servers. The Linux benchmarks you brought up are a good example. There were times, specifically during 2.6 development, when performance problems were detected in certain functions of the kernel, which allowed the responsible people to track down and fix them.

    If everyone were able to test every application server, I'm sure we (as a group) would find many specific performance problems with different aspects of the application servers out there. These tests would be duplicated and rechecked, and in the end the vendor would have to fess up to, and fix, the problems (or disprove them).

    I think this process would be healthy for the J2EE app server market as a whole, and the end result would be that more performance problems would be addressed by vendors.
  5. I have a hard time finding books on this new App server of Sun's
  6. I have a hard time finding books on this new App server of Sun's

    What about the 2000 pages of free documentation on docs.sun.com?

    What about the fact that Sun AS 7/8 PE is FREE, even for PRODUCTION?
  7. Good job

    Good job Sun. This product is a lot better than the days of iPlanet.
  8. (For some reason TSS hates me and wouldn't let me post this a couple of times...)

    Technically you can't make price performance comparisons using the SPEC benchmark (so they say on their website).

    BUT, I think it's fair to say that what we have here is that given adequate database bandwidth (since they have one large DB machine, and several independent EJB machines), they're getting ~90 "bleems per sec" on each of the EJB tiers.

    Since those are Sun's 2-CPU AMD Opteron machines, that's not an uninteresting number. (Their $3K boxes...)

    Plus they're using all of the "horrors" of the EJB stack (EJBs, Entity Beans, etc.).

    IBM published similar numbers on a similar configuration (not the same numbers, but close enough).

    I think the other interesting thing would be a weblog of the porting process required to get the benchmark to run, since what we have here is essentially a good-sized "WORA" EJB application (SPEC provides the source code to the application, and you're not supposed to tweak it -- save for deployment files).

    I know TSS has done this in the past (at one time, they were running on a load-balanced cluster of different app servers); I don't know whether they're still doing that now.

    What this says to me, in the end, is that Sun's app server is trying to be competitive. At least they're trying the SPEC benchmark, and I don't see BEA or Oracle in there yet (Oracle is notorious for "exploiting" benchmarks).

    I think it would be interesting if JBoss sponsored their own benchmark (yeah, it would cost them $$$ to do it -- but so what? They want to fight with the big boys...)
  9. Plus they're using all of the "horrors" of the EJB stack (EJBs, Entity Beans, etc.)

    Sun's entity beans are powered by Sun's JDO. Do you also consider Sun's JDO a horror?

    Entity beans also stay in EJB3. Is that a horror too?
  10. Sun's entity beans are powered by Sun's JDO. Do you also consider Sun's JDO a horror? Entity beans also stay in EJB3. Is that a horror too?

    When J2EE is criticized, entity beans come first on the list, then EJBs in general. The "jewel" of J2EE is, apparently, the Servlet spec. Part of the criticism is the overall development model (classes, interfaces, descriptors), part is the container itself, and part is performance (notably, again, for entity beans).

    So, my point is that despite the "handicap" of running EJBs and entity beans, backed by a responsive database, they're punching through reasonably complicated transactions with an average time of 11 msec.

    That seems pretty responsive to me; I was curious whether others felt it was a good number. Would a lightweight container be more performant, and if so, by how much?

    But to me, that seems like a pretty decent number overall.
  11. When J2EE is criticized, entity beans come first on the list, then EJBs in general. Part of the criticism is the overall development model (classes, interfaces, descriptors)

    This is true, but if you use the right IDE (e.g., Sun Java Studio) it is done automatically for you; you see an EJB as a single object. In this context, EJB3 is very good for developers using simple editors like vi.

    My IDE also generates entity EJBs from the data model; I just add the create methods.

    Even in EJB3 we will have to intercept method calls to propagate transaction and security contexts, probably via dynamic proxies, so this mechanism will not disappear completely.

    Also, not everyone sees annotations as better than descriptors; sometimes it is better to keep all configuration in just a couple of external XML files.
    part is the container itself

    I would really like to see the difference between EJB2.1 and Spring/EJB3 containers. Do you think they will do anything in a very different way?
    and part is performance (notably, again, for Entity Beans).

    As was discussed on TSS:
    - SF EJBs have similar performance/scalability to HTTP sessions
    - BEA EJB2.1 can be faster than Hibernate

    I do not believe we will see any big performance differences between EJB2.1 and EJB3 containers:
    - a local call under a transaction and security context will have the same performance
    - entity bean performance is determined by the underlying implementation (JDO, TopLink, Hibernate for EJB3)
    - caching of JNDI objects is done by the container instead of your JNDILocator
    So, my point is that despite the "handicap" of running EJB and Entity Beans

    I am not saying we have no reason to move from EJB2.1 to EJB3.

    But saying that EJB2.1 is a horror implies that migrating to Spring/EJB3 containers will give you twice the performance and productivity, which I strongly doubt.

    Your technical comments are highly appreciated.
  12. the biggest disadvantage of EJB2.1 is that it does not support a rich OO domain model

    For me it is not a big issue, because I prefer to maintain a very good relationship with my Oracle DBA anyway...
  13. Hi Damian,

    I will not start another discussion about whether or not to use EJBs, but for my real projects I decided to migrate from EJB to Spring/Hibernate for one simple reason: my application needs to be deployable to many different app servers, and my experience with EJB entity beans was terrible. Every time I needed to deploy my application to a different app server, the work on the proprietary deployment descriptors was massive. With Hibernate/Spring, I now have an application that is 100% deployable to any app server without caring about DDs.

    IMHO.
  14. With Hibernate/Spring, I now have an application that is 100% deployable to any app server without caring about DDs. IMHO.

    You are right. But the reason for that is that you do not change your app server at all. You deploy your app server (Spring/Hibernate) to different web containers... :)

    What is going to happen if you become unhappy with Spring/Hibernate for any reason?

    -------------------------------------

    I posted in this thread because I very often see on TSS sayings like: "EJB2.1 is an absolute evil / horror / ...".

    If:
    - EJB3 is an almost ideal enterprise technology, and
    - you analyze the effects of an EJB2.1->EJB3 migration on your projects,

    then IMHO you will find that the above-mentioned saying is nonsense.
  15. According to Will:

    "Technically you can't make price performance comparisons using the SPEC benchmark (so they say on their website)."

    What I read on SPEC's website is this:

    "SPEC does not endorse any price/performance metric for the SPECjAppServer2004 benchmark. Whether vendors or other parties can use the performance data to establish and publish their own price/performance information is beyond the scope and jurisdiction of SPEC. Note that the benchmark run rules do not prohibit the use of $/JOPS calculated from pricing obtained using the BOM, because $/JOPS is not a performance metric."

    So it seems there is no reason why you can't make the comparisons, just that SPEC won't endorse them.
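    In other words, anyone can compute the figure themselves: $/JOPS is just the total bill-of-materials price divided by the JOPS score. A minimal sketch (the $250,000 system price below is invented for illustration, not Sun's actual BOM):

```java
// Hypothetical $/JOPS calculation. SPEC doesn't endorse the metric, but the
// run rules allow it: divide the BOM system price by the published score.
// The bomPrice value is made up for illustration only.
public class PricePerJops {
    static double dollarsPerJops(double bomPrice, double jops) {
        return bomPrice / jops;
    }
    public static void main(String[] args) {
        double jops = 1201.44;        // Sun's published score
        double bomPrice = 250000.0;   // assumed total system price (hypothetical)
        System.out.println(dollarsPerJops(bomPrice, jops) + " $/JOPS"); // ~208
    }
}
```

    The comparison across vendors only makes sense if everyone prices from the same BOM rules, which is exactly the caveat SPEC is raising.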
  16. Other kinds of performance

    I think the other interesting thing would be a weblog of the porting process required to get the benchmark to run, since what we have here is essentially a good-sized "WORA" EJB application (SPEC provides the source code to the application, and you're not supposed to tweak it -- save for deployment files).

    Never mind the weblog, that would be a *really* interesting "benchmark": how long does it take a competent developer to successfully prepare a standard J2EE module for deployment on Brand X App Server?

    On a completely unrelated subject, does anyone know why they used a 32-bit VM on a bunch of 64-bit CPUs?
  17. Re: Other kinds of performance

    On a completely unrelated subject, does anyone know why they used a 32-bit VM on a bunch of 64-bit CPUs?

    I bet a 64-bit JVM is slower than a 32-bit one. I mean, all of a sudden every object pointer is twice as big, so you immediately lose half your cache.

    The case for 64-bit is more about heap size than almost anything else, I would think, particularly if most of your time is spent stuck on either end of a network socket...
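    That intuition can be put into a crude model: if some fraction of the live heap consists of object references, those slots double from 4 to 8 bytes on a 64-bit JVM while primitives and array payloads stay the same size. The 40% reference fraction below is an assumption for illustration, not a measured figure:

```java
// Crude model of 32-bit -> 64-bit heap growth. Assumes refFraction of the
// live heap is object references (4 bytes on 32-bit, 8 bytes on 64-bit);
// non-reference data is unchanged. Ignores object-header growth.
public class HeapGrowthModel {
    static double heap64Gb(double heap32Gb, double refFraction) {
        return heap32Gb * (1.0 - refFraction)  // non-reference bytes: unchanged
             + heap32Gb * refFraction * 2.0;   // reference bytes: doubled
    }
    public static void main(String[] args) {
        // Hypothetical 2.0 GB 32-bit heap where 40% of the bytes are references
        System.out.println(heap64Gb(2.0, 0.40) + " GB on 64-bit"); // ~2.8 GB
    }
}
```

    Real applications vary, and object headers also grow, so the actual inflation differs per workload; the point is that the same data simply takes more cache and memory bandwidth on 64-bit.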
  18. 32-bit vs 64-bit

    Correct. A 64-bit JVM can be up to ~20% slower due to the increased load on memory caches etc. caused by the larger size of pointers and other things. On Intel EM64T or AMD64 hardware, part of this performance loss is regained through the slightly larger register set.

    This performance difference also explains why most SPECjbb2000 publications are made using 32-bit JVMs.

    The reason to go 64-bit is if you need a larger heap. And of course, some apps could run significantly faster with a large heap.

    Henrik Ståhl
    Product Manager, BEA JRockit
  19. they're using all of the "horrors" of the EJB stack (EJBs, Entity Beans, etc.)

    Sun's app server is trying to be competitive.


    +1
    .V
  20. Nice to see some numbers. I'm sick to death of armchair opinions.