BEA submits SPECjAppServer2004 score for WebLogic 9.0


  1. BEA Systems has submitted a score of 1,374 JOPS (jAppServer Operations Per Second) in the SPECjAppServer2004 benchmark, with WebLogic 9.0 (on five servers) and one database engine (Oracle 10.1.0.4). This is the highest score recorded publicly for this benchmark.

    This marks BEA's first submission of SPECjAppServer2004, and the first SPEC benchmark result using WebLogic Server 9.0.

    J2EE Server Configuration
    BEA's submission features WebLogic Server 9.0 and JRockit 5.0 on Red Hat Enterprise Linux 4 Update 1, hosted on five HP DL380 G4 two-way Intel Xeon application servers, each with 4 GB of memory. IBM's previous submission used WebSphere 6.0 on SuSE Linux Enterprise Server 9, hosted on five IBM eServer xSeries 365 four-way Intel Xeon application servers, each with 8 GB of memory.

    Database Configuration
    The database for BEA's submission was Oracle Database 10g Enterprise Edition Release 10.1.0.4 running on a 12-way HP Itanium 2 database server, while the previous IBM WebSphere submission used IBM DB2 version 8.2 running on an 8-way IBM eServer xSeries 365 database server.
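    As a rough, unofficial illustration of what the headline score implies per machine, the short Python sketch below derives app-tier throughput from only the figures quoted above (1,374 JOPS across five two-way application servers). SPEC publishes only the total JOPS number; the per-server and per-CPU values here carry no official standing.

        # Back-of-the-envelope view of BEA's submission, using only figures quoted above.
        total_jops = 1374          # BEA's published SPECjAppServer2004 score
        app_servers = 5            # HP DL380 G4 application servers
        cpus_per_server = 2        # two-way Intel Xeon boxes

        jops_per_server = total_jops / app_servers
        jops_per_cpu = jops_per_server / cpus_per_server

        print(f"JOPS per app server: {jops_per_server:.1f}")   # ~274.8
        print(f"JOPS per app CPU:    {jops_per_cpu:.1f}")      # ~137.4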


    Threaded Messages (32)

  2. World record - half the hardware

    Shame on TSS for heavily editing my post. This is a world record, achieved with half the hardware of the previous record, owned by IBM on WebSphere.

    And remove the shameless plug for Sun.
  3. Plugging Sun?

    Shame on TSS for heavily editing my post. This is a world record, achieved with half the hardware of the previous record, owned by IBM on WebSphere. And remove the shameless plug for Sun.
    Shameless plug for Sun? I linked to the coverage of every vendor's SPECjAppServer2004 results. Why complain about Sun's "shameless plug," when they scored the lowest on the benchmark, but not about IBM's (shameful?) plug, when they actually competed on raw performance?

    The primary motivation for mentioning Sun's server in this context was that that post explained the SPECjAppServer2004 benchmark in much greater detail than this one does, and I felt it appropriate not to repeat that material here.
  4. Sun Plug???

    Shame on TSS for heavily editing my post. This is a world record, achieved with half the hardware of the previous record, owned by IBM on WebSphere. And remove the shameless plug for Sun.

    Resources are resources regardless of who is in them. With the topic being Java application servers, it's difficult to find resources that don't mention Sun. Grow up.
  5. Thank you, Joseph, for removing the comment after the Sun link. Now we can all compare the results of the various vendors without bias.
  6. Sun Plug???

    Because SPEC prohibits comparisons on anything but the metrics it endorses (http://www.spec.org/jAppServer2004/docs/RunRules.html#Result_Usage), I think Gary found it unfair that the Sun story highlighted price/performance. This metric was present in SPECjAppServer 2002, but was removed from SPECjAppServer 2004 against BEA's wishes. Because of this new restriction, BEA did not make any price/performance claims. But using half the hardware for a new world record should allow readers to draw their own conclusions.
  7. Sun Plug???

    Well, in all fairness, Mr. McBride had a point with respect to the SPECjAppServer2004 results usage guidelines (http://www.spec.org/jAppServer2004/docs/RunRules.html#Result_Usage), which specify that JOPS is the basis for results and that other metrics should be used only in a secondary, differentiating role. As such, a mention of price/performance as a metric is secondary to the actual score itself.

    Therefore, I modified the reference to the Sun announcement to remove that emphasis. However, the post still serves as useful background on SPECjAppServer2004, which is the context in which I originally thought of it.
  8. WLS 9.0 vs. WebSphere 6.0

    The reason we pulled this configuration together was to show that we could equal or beat the WebSphere configuration with half of the hardware on the app server tier.

    We want to show that even though WebLogic costs more per CPU than WebSphere, a whole WebLogic-based system costs less, because you need half as many WebLogic licenses to process the same number of transactions.

    Eric
    BEA Systems
  9. WLS 9.0 vs. WebSphere 6.0

    The reason we pulled this configuration together was to show that we could equal or beat the WebSphere configuration with half of the hardware on the app server tier. We want to show that even though WebLogic costs more per CPU than WebSphere, a whole WebLogic based system cost less because you need half as many WebLogic licenses to process the same number of transactions. - Eric, BEA Systems

    BEA are absolutely right to focus on the total cost - because that is what the customer pays.

    It's also great to see some more competition around this benchmark - competition is a good thing. Such benchmarks highlight some interesting facts - ie. that in some cases FREE software can actually turn out to be very expensive (if all it does is chew up valuable hardware resources). As I've said before - just being free is no longer good enough.

    Though many people knock these kinds of benchmarks, they do help people make more informed decisions - in some cases 'free' makes sense; in others it is worth spending money (to save money).

    So, we have IBM, Sun and BEA - who's missing ;)

    Rich Sharples
    Sun Microsystems
    http://blogs.sun.com/roller/page/sharps
  10. "BEA are absolutely right to focus on the total cost - because that is what the customer pays." - Rich

    Well... The BEA number does have a 12 way Itanium HP as the database server. :) I am curious as to what the "total cost" of the solution is versus the application server tier. You can't have one without the other. :)
  11. "BEA are absolutely right to focus on the total cost - because that is what the customer pays." - RichWell... The BEA number does have a 12 way Itanium HP as the database server. :) I am curious as to what the "total cost" of the solution is versus the application server tier. You can't have one without the other. :)

    Fair point.

    Many of the deployments I have been involved in over the past six years make use of a centralized database - something that already exists and that someone else is already paying for (bear with me). That is why I think it is reasonable to isolate the application tier and treat it as the currency we talk about; it is a realistic, centralized model for enterprises.

    Granted, nothing is free - but if you start including "database usage" you also need to consider the cost of air, power, LDAP hits, network bandwidth, admin time, and developer efficiency. These are all things you *should* consider, and they are all significant, but I think in this context (i.e. discussing app server throughput on TSS) it is reasonable to draw the line around the application tier for comparison. If for no other reason, it is easier to understand.

    All that said, clearly there are deployments where the database is dedicated and can't be easily discounted from the TCO. That is where some of the Free / Cheap / Open RDBMS become an interesting alternative and why Sun are actively investing time and resources in that area.

    Rich Sharples
    Sun Microsystems
    http://blogs.sun.com/roller/page/sharps
  12. I don't think calling this a test of App Server performance is legitimate. While BEA's result looks like it has "half" the hardware, it should be noted that most of the hardware and software is newer than that used for IBM's result. The differences in the db server are also a problem.

    If you want a real benchmark for App Server performance, then you have to standardize EVERYTHING except the App Server - same hardware, same software (down to patch levels), and so on. Otherwise it's all just PR rubbish.

    Note: I'm not discounting BEA's result. I'm discounting the PR claim that it's "half the hardware".
  13. There's a reason that the J2EE AppServer hardware is listed separately from the database hardware. Draw your own conclusions about Oracle vs. DB2, but the app server hardware numbers speak for themselves. WebLogic Server 9.0 integrates with all the major database vendors, BTW.
  14. Well... The BEA number does have a 12 way Itanium HP as the database server. :) I am curious as to what the "total cost" of the solution is versus the application server tier. You can't have one without the other. :)

    This is why I mentioned in my post "given adequate database bandwidth".

    Sun SPEC'd their Platform Edition against MySQL, running 3 App machines against a single DB machine.

    They got 88 JOPS/server in this configuration vs. ~100 JOPS/server for a similar configuration running against Oracle. Now, to be fair, the app servers were running faster CPUs in the Oracle benchmark, so it's unclear how much of the 12-14% boost came from the faster CPUs and how much from Oracle. They were also running Standard Edition in the Oracle test, but my understanding is that PE is basically identical to SE in base performance.

    (Feel free to correct me here Rich).

    I haven't looked, but it would be nice to see the utilization numbers on the different tiers during the benchmark.

    For example, Sun was getting ~100 JOPS/server with Oracle running both on a large 12-CPU (dual-core) machine and on a 4-CPU (single-core) machine. The detail being that the 4-CPU machine had 1/4 the clients that the large machine did.

    It really highlights the desire to understand whether the bottleneck was in the J2EE servers or in the DB server.
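    As a quick, unofficial sanity check on the 12-14% figure above, the fragment below computes the relative difference between the two per-server numbers quoted in this post (88 JOPS/server for the MySQL run, ~100 JOPS/server for the Oracle run); these are the thread's approximations, not SPEC-published per-server metrics.

        # Rough check of the "12-14% boost" claim from the post above.
        mysql_jops_per_server = 88     # from the MySQL submission, per this thread
        oracle_jops_per_server = 100   # approximate, per this thread

        boost = (oracle_jops_per_server - mysql_jops_per_server) / mysql_jops_per_server
        print(f"Relative boost: {boost:.1%}")   # ~13.6%, consistent with the 12-14% range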
  15. This is why I mentioned in my post "given adequate database bandwidth". Sun SPEC'd their Platform Edition against MySQL, running 3 App machines against a single DB machine. They got 88 JOPS/Server in this configuration vs ~100 JOPS/Server for a similar configuration running against Oracle. Now, to be fair, the App Servers were running faster CPUs in the Oracle benchmark, so it's a question how much the fact that they were on faster CPUs and/or against Oracle contributed to the 12-14% boost in performance. They were also running Standard Edition in the Oracle test, but my understanding is that the PE is basically identical to SE in base performance. (Feel free to correct me here Rich.)

    Right,PE & SE are pretty much the same perf. wise though SE will scale better under certain circumstances.
    I haven't looked, but it would be nice to see the utilization numbers on the different tiers during the benchmark. For example, Sun was getting ~100 JOPS/Server with Oracle running both on a large 12-CPU (dual-core) machine and on a 4-CPU (single-core) machine. The detail being that the 4-CPU machine had 1/4 the clients that the large machine did. It really highlights the desire to understand whether the bottleneck was in the J2EE servers or in the DB server.

    I think you can assume that all servers were pretty much tapped out - in general there would be little to be gained from submitting a benchmark that left performance on the table.

    For the MySQL submission I think there was a *little* bit of spare capacity, but not enough to support an additional app server - and you are right, additional MHz would've helped MySQL as well.

    We'll be publishing future submissions that might better illustrate the price/performance difference between commercial and free databases.

    Rich Sharples
    Sun Microsystems
    http://blogs.sun.com/roller/page/sharps/
  16. WLS 9.0 vs. WebSphere 6.0

    Eric,

    Actually you are in violation of the SPECjAppServer 2004 Run and Reporting rules by publicly comparing BEA's results with IBM's.

    http://www.spec.org/jAppServer2004/docs/RunRules.html#Result_Usage

    -krish
  17. WLS 9.0 vs. WebSphere 6.0

    Eric, actually you are in violation of the SPECjAppServer 2004 Run and Reporting rules by publicly comparing BEA's results with IBM's. http://www.spec.org/jAppServer2004/docs/RunRules.html#Result_Usage -krish

    Krishnan, I think you are misinterpreting the rules. SPEC prohibits comparisons to other benchmarks, but comparisons between published results within the same SPEC benchmark are permitted, as long as both results are in the same category (see section 3.6.3). Since IBM's and BEA's scores are in the same category, comparisons are permissible.

    Craig
    BEA Systems
  18. WLS 9.0 vs. WebSphere 6.0

    Since IBM's and BEA's score are in the same category, comparisons are permissible.

    But not valid. For a valid claim, you'd have to have more performance on identical hardware with an identical database server setup.

    For a comparison to be valid, nothing may vary except for the variable being tested. This benchmark is a game, and completely flouts the scientific method.
  19. WLS 9.0 vs. WebSphere 6.0

    Since IBM's and BEA's score are in the same category, comparisons are permissible.
    But not valid. For a valid claim, you'd have to have more performance on identical hardware with an identical database server setup. For a comparison to be valid, nothing may vary except for the variable being tested. This benchmark is a game, and completely flouts the scientific method.

    The comparisons are quite valid, as long as you understand the limits of what you are comparing. You may not be able to say that product X is 47% faster than product Y using these benchmarks, but you can use your judgment to reach some general conclusions.

    BEA did use half the processors in the app tier in this benchmark. BEA's processors were slightly faster than IBM's, but IBM used much more expensive quads (per processor, quads cost a lot more than duals). Since IBM wants to put its best foot forward, I assume there is a reason they went with quads and not the cheaper dual processor boxes.

    In this instance, we rely on the competitive nature of the market to help us draw conclusions. Both IBM and BEA are trying to show their servers in the best light in this benchmark. You can bet your bottom dollar that IBM (not to mention all the companies that have not yet submitted a result) is working to beat the BEA score. They will do so using a hardware setup that may not match BEA's exactly, but will nevertheless allow intelligent people to draw reasonable conclusions.

    For now, the only conclusion to draw is that BEA's record score was achieved with half the app-tier hardware. Since duals are about a third the price of quads, the minimal speed difference between the systems is more than compensated for by the price.

    Craig Blitz
    BEA Systems
  20. Something to note

    Congratulations to BEA for their benchmark, it's obviously quite powerful.

    But if you look at the Sun benchmarks, they're getting ~100 JOPS per J2EE server, given adequate DB bandwidth on the back end. You'll note that none of the benchmarks at SPEC cluster the DB, only the J2EE tier. Without adequate DB bandwidth on the back end, all of these app servers would seize up and die.

    BEA is getting ~275 JOPS per server. Now we have to assume that the dual Xeon servers that BEA used are comparable to the dual Opteron servers that Sun used, and that Linux and Solaris are comparable as well. I think those are fair assumptions, with both platforms being within 10% of each other.

    Anyway, the key here is that there seem to be benefits (with this benchmark) to horizontal scaling. So now you get into the competition for the overall cost of the solution.

    Sun was getting their numbers with essentially the free version of their app server (the differences, as far as I know, between the Platform, Standard, and Enterprise editions are tool sets, management utilities, and clusterability, none of which affect performance or were used here).

    Plus, Sun's roadmap calls for making the app server free of licensing costs across the line, not just the Platform Edition.

    Be that as it may, now you get into the conflict between the price of BEA licenses and the cost of throwing hardware at the problem. WebSphere obviously shares this dilemma.

    You can throw JBoss and Apache into the mix here as well, but we have not seen any published benchmarks for them.

    So, the basic issue confronting someone looking at these benchmarks is that if they want performance, they need to evaluate whether it is better for their application and infrastructure to throw money at the software side or the hardware side.

    I would like to see a BEA benchmark using smaller configurations similar to what Sun did. They have one with 4 machines, 3 J2EE and one DB server (all 2-CPU AMD boxes), where they were still getting ~100 JOPS per server. It would be nice to see BEA doing the same thing with one J2EE server and one DB server (see the sketch below for how the license-versus-hardware trade-off plays out).
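    The license-versus-hardware trade-off described above can be framed as a simple cost model. The sketch below uses the per-server throughput figures from this thread (~275 JOPS/server for BEA, ~100 JOPS/server for Sun), but all hardware and license prices in it are entirely hypothetical, so it only illustrates the shape of the comparison, not an actual result.

        # Hypothetical cost model: fewer, licensed app servers vs. more, free ones.
        # Throughput figures come from this thread; ALL prices below are made up.
        import math

        def app_tier_cost(target_jops, jops_per_server, server_price, license_per_server):
            """Servers needed (rounded up) and total hardware + license cost for the app tier."""
            servers = math.ceil(target_jops / jops_per_server)
            return servers, servers * (server_price + license_per_server)

        target = 1000  # hypothetical required throughput in JOPS

        # ~275 JOPS/server (per this thread); hypothetical $10k server, $15k license
        print(app_tier_cost(target, 275, 10_000, 15_000))   # (4, 100000)
        # ~100 JOPS/server (per this thread); hypothetical $6k server, no license fee
        print(app_tier_cost(target, 100, 6_000, 0))          # (10, 60000)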
  21. Now we have to assume that the dual Xeon servers that BEA used are comparable to the dual Opteron servers that Sun used

    I'm not sure that we have to assume that. The list price of an HP ProLiant DL380 G4 with the 4GB memory part and 2 disks used by BEA is $11,278 on HP's web site. The list price of Sun's v20z with 4GB memory and 2 disks used is $6,245 on our (Sun's) website.

    There are differences in the CPU speed and, more importantly, the expense of the memory used in the HP machines. It's yet another reason why comparing submissions like this is difficult.

    That said, I congratulate my colleagues at BEA for their fine work on 9.0!
  22. It's yet another reason why comparing submissions like this is difficult.

    Maybe I missed something here, but why is there no $/JOPS (cost of a transaction) published to help compare submissions, like in classic TPC-C benchmarks?
  23. Maybe I missed something here, but why is there no $/JOPS (cost of a transaction) published to help compare submissions, like in classic TPC-C benchmarks?

    According to the SPEC Result Usage rules, only the performance metrics actually measured by the benchmark may be quoted. SPEC made the decision for SPECjAppServer2004 that the price/performance ratio be dropped (whereas it was included in SPECjAppServer2002). So our hands are effectively tied, as are the hands of all SPEC licensees: none of us can quote any numbers around $/JOPS for SPECjAppServer2004.

    In any case, Craig's comment above still stands:
    But using half the hardware for a new world record should allow readers to draw their own conclusions.

    John Doppke
    BEA Systems
  24. Maybe I missed something here, but why is there no $/JOPS (cost of a transaction) published to help compare submissions, like in classic TPC-C benchmarks?
    According to the SPEC Result Usage rules, only the performance metrics actually measured by the benchmark may be quoted. SPEC made the decision for SPECjAppServer2004 that the price/performance ratio be dropped (whereas it was included in SPECjAppServer2002). So our hands are effectively tied, as are the hands of all SPEC licensees: none of us can quote any numbers around $/JOPS for SPECjAppServer2004. In any case, Craig's comment above still stands:
    But using half the hardware for a new world record should allow readers to draw their own conclusions.
    John Doppke, BEA Systems

    John, I see nothing in SPEC's fair use rules that prohibits making *PRICE* comparisons; the fair use rules are limited to performance metrics, and the word "prohibit" doesn't appear in the run rules.

    But that's academic anyway; each submission includes a full breakdown of the systems under test (in the Bill of Materials), e.g.

    http://www.spec.org/jAppServer2004/results/res2005q3/jAppServer2004-20050701-00011.html#bill_of_materials

    so that *ANYONE* can quickly run through the BOM and get to the price metric. For example, our SJAS PE + MySQL 5.0 run works out at $147.95/JOP* (for the app server tier), which I think will continue to be the *BEST* price/performance result for a while. I'd be more than happy to itemize our cost if anyone is interested.

    I'll leave it to someone else to price BEA's submission and allow "readers to draw their own conclusion".

    *Clearly, according to SPEC's run rules - SPEC do not support these numbers - they only support what's published on their web site.

    Rich Sharples
    Sun Microsystems
    http://blogs.sun.com/roller/page/sharps
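    For readers who want to run this exercise themselves, the sketch below shows the general shape of a hand-derived $/JOP calculation from a submission's Bill of Materials. The line items, totals, and score in it are placeholders, not figures from Sun's, BEA's, or any other real submission, and per SPEC's run rules any such derived price metric is unofficial.

        # Illustrative $/JOP calculation from a Bill of Materials (all numbers hypothetical).
        bom = {
            "app tier hardware":       38_000,   # hypothetical
            "app tier software":            0,   # hypothetical (free license)
            "3-year premium support":  12_000,   # hypothetical
        }
        jops = 400  # hypothetical published SPECjAppServer2004 score

        dollars_per_jop = sum(bom.values()) / jops
        print(f"${dollars_per_jop:.2f}/JOP")   # $125.00/JOP with these placeholder figures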
  25. Rich, thanks for pointing out my overly strict reading of the fair-use rules. As you point out, comparisons based on the Bill of Materials can and will be made.

    jAppServer 2002 did have a published price/performance metric. I believe this was based on the price of the entire "system under test" (app server, database, software licensing, and three years of support). I'm not sure if this exactly equates to the jAppServer 2004 Bill of Materials, though I think it is similar.

    Craig
    BEA Systems
  26. For example, our SJAS PE + MySQL 5.0 run works out at $147.95/JOP* (for the App Server Tier) which I think will continue to be the *BEST* price/performance result for a while.

    The problem with price metrics is deciding what to include. My guess is that your quote covers hardware and license costs only, correct? Should support be included? If so, what level of support and for how long? One year? Three years? Such metrics can also be circumvented through "smart" pricing models.

    Just rhetorical questions... I'm not saying BEA would be any better (or worse) if such numbers were included.

    Henrik, BEA JRockit
  27. For example, our SJAS PE + MySQL 5.0 run works out at $147.95/JOP* (for the App Server Tier) which I think will continue to be the *BEST* price/performance result for a while.
    The problem with price metrics is deciding what to include. My guess is that your quote is hardware and license costs only, correct? Should support be included? If so, what level of support and for how long? One year? Three years? Such metrics can also be circumvented through "smart" pricing models. Just rhetorical questions... I'm not saying BEA would be any better (or worse) if such numbers were included. - Henrik, BEA JRockit

    The $147.95/JOP is for hardware / software and premium support (3 years) - essentially what was enumerated in the BOM.

    I agree on the possibility that "smart pricing models" can have a big impact; in our case:

    Solaris 10 - $0 * 6 CPUs = $0
    AS 8.1 PE - $0 * 6 CPUs = $0

    ;)

    Rich Sharples
    Sun Microsystems
    http://blogs.sun.com/roller/page/sharps/
  28. For example, our SJAS PE + MySQL 5.0 run works out at $147.95/JOP* (for the App Server Tier) which I think will continue to be the *BEST* price/performance result for a while.
    The problem with price metrics is deciding what to include. My guess is that your quote is hardware and license costs only, correct? Should support be included? If so, what level of support and for how long? One year? Three years? Such metrics can also be circumvented through "smart" pricing models. Just rhetorical questions... I'm not saying BEA would be any better (or worse) if such numbers were included. - Henrik, BEA JRockit
    The $147.95/JOP is for hardware/software and premium support (3 years) - essentially what was enumerated in the BOM. I agree on the possibility that "smart pricing models" can have a big impact; in our case: Solaris 10 - $0 * 6 CPUs = $0; AS 8.1 PE - $0 * 6 CPUs = $0 ;) - Rich Sharples, Sun Microsystems, http://blogs.sun.com/roller/page/sharps/

    Exactly. Not charging for software is one way of hiding the real cost. Sun is going to get its investment back somewhere, luckily for you that "somewhere" is not part of the BOM...
  29. Rich,

    Can you provide me with the itemized costs you used to arrive at $147.95/JOP*? I haven't been able to get to that number.

    Bernie Buesker
    Hewlett-Packard
  30. All commodity hardware benchmark

    Indeed, it would have been extremely interesting to see the results of this benchmark had all the hardware been unequivocally commodity, such as DL380s running RHEL4u1, or even better, standard blades such as BL20p running RHEL4u1.
  31. It is actually quite difficult to compare the BEA and IBM results since, even if all J2EE servers use the same Pentium CPU:
    - BEA uses 10 duals at 3.6GHz
    - IBM uses 10 quads at 3GHz (3 ≠ 3.6, and 1 quad ≠ 2 duals)
    - BEA publishes in July 05 and IBM in Nov 04
    In between, JVMs were tuned and benchmark teams got smarter.

    Benchmarking is a marketing game and is marketing driven; ALL vendors are smart and use many tricks to game the benchmark. They carefully choose to make their hardware setup a little bit different from the competitor's, so that they can claim one thing and its opposite (just in case).
    Look at the TPC-C story, the mother of all benchmarks!
    It would be nice to have a commoditized setup, but company politics prevent this from happening.

    So beware, your actual mileage may vary

    This is not to take sides in the BEA/IBM contest - I like them equally ;-)
    Just a warning.
  32. 64-bit jrockit?

    Can anybody tell me if the 64-bit jrockit was used for the benchmark?
  33. 64-bit jrockit?

    No, it was the 32-bit Linux version. For some reason that fact is not clear from the SPEC summary, nor do I find anything that states whether the OS was 32-bit or 64-bit - but I am fairly certain that it was 32.

    Henrik Ståhl, BEA JRockit