BEA has yet again struck the ECperf charts, this time running WebLogic 7.0 on an HP rp8400 server. These new tests, sponsored by HP, yielded the highest performance figure yet, 37,791 BBops (Benchmark Business Operations)/min@Std, but at the high cost of $38/BBops. The database server was an HP rp7400 system running Oracle 9i.
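For context, ECperf's price/performance figure is the total priced system cost divided by throughput, so the two headline numbers imply a rough total price. A back-of-the-envelope sketch (the exact priced configuration is in the full disclosure report):

```python
# ECperf reports throughput (BBops/min@Std) and price/performance
# ($/BBops); multiplying them back out gives the implied total price.
throughput = 37791      # BBops/min@Std from the BEA/HP result
price_per_bbops = 38    # reported $/BBops

total_price = throughput * price_per_bbops
print(f"Implied total system price: ${total_price:,}")  # → $1,436,058
```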
- Posted by: Nate Borg
- Posted on: June 12 2002 16:08 EDT
Check out BEA/HP's new ECPerf Results
- BEA Posts New ECperf Results Running on HP Server by Scott Gilpin on June 13 2002 11:53 EDT
- BEA Posts New ECperf Results Running on HP Server by Mileta Cekovic on June 13 2002 12:09 EDT
- BEA Posts New ECperf Results Running on HP Server by Cary Bloom on June 13 2002 12:29 EDT
- BEA Posts New ECperf Results Running on HP Server by Robin Sharp on June 13 2002 15:56 EDT
Interesting result, and it's good to see BEA making an effort to publish benchmarks for various hardware/OS configurations.
However, why did they run another benchmark with non-clustered servers? It would also be interesting to see them run one server across more than just 2 or 3 processors.
I'd also like to see BEA do something like IBM did with their 7- and 9-node configurations, which showed the near-linear scalability of the system.
The reason they have clustering off is that clustering introduces roughly a 20% performance degradation (in ECperf it could be higher or lower). If you're going to optimize your application and you know it doesn't need to fail over, then you should turn clustering off.
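To make that trade-off concrete, here is an illustrative calculation; the 1000 BBops/min baseline is hypothetical and the 20% penalty is just the figure quoted above, not a measured ECperf number:

```python
# Illustrative only: if clustering costs ~20% throughput per node,
# how do clustered configurations compare to one non-clustered node?
single_node = 1000.0   # hypothetical BBops/min with clustering off
penalty = 0.20         # assumed per-node clustering overhead

for nodes in (1, 2, 3):
    clustered = nodes * single_node * (1 - penalty)
    print(f"{nodes} clustered node(s): {clustered:.0f} BBops/min")

# One clustered node does only 800 BBops/min, so a second node is
# needed just to pass the single non-clustered server -- at roughly
# double the hardware price, which hurts the $/BBops metric.
```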
>>However, why did they run another benchmark with non-clustered servers?
Because clustering would increase system price and decrease performance, which is bad for the benchmark?
Nice result, but please look at the following CSIRO research:
>>CSIRO has published a new version of the J2EE Application Server comparison report. It provides the most detailed analysis, evaluation and comparison published on six leading application server technologies, from IBM, Fujitsu, SilverStream, BEA, Borland and the open source JBoss product.
>>The products were evaluated qualitatively against seven overall categories (see below). The leaders in these fields are:
Performance and Scalability - Borland
J2EE Support - Borland
EJB Support - Borland and BEA
J2EE Services - tight group of leaders including: Borland, BEA, Fujitsu, and IBM
Development and Deployment - Borland and BEA
System Management - BEA
Scalability and Availability - IBM
I'm impressed ... Fujitsu is a leader in J2EE services!!!
What were the metrics used to evaluate the app servers? This is a nice ploy to have us fork out a pile of money ...
>>This is a nice ploy to have us fork out a pile of money ...
A pile of money? I would hardly call 1100 South Pacific Pesos a lot of money!
We bought one of their earlier reports - and were quite impressed.
At that time, they showed similar performance between the leaders. Incidentally, what is (probably) not in the report (I haven't read the latest one) is how much effort/sweat/vendor-consultant time went into getting that performance out of each platform. Some servers required more effort than others. I know this from speaking (via email) to a couple of the CSIRO team members.
Like the study, but it's worth noting that the BEA version they used is 6.1, as opposed to the 7.0 in this ECperf result. Otherwise, the results were pretty interesting.
This is a much more realistic configuration than some of the other so-called "clustered" ECperf results, which were achieved by stringing together nearly a dozen small boxes (what a maintenance nightmare!).
Looking forward to more results on the high-end hardware systems (WebSphere on AIX or AS400, for instance!). We all know the cost/txn will be high ... but then these are no ordinary boxes either!
It would be nice if sales and marketing focused more on the development times of real-world software and less on run times on theoretical hardware.
The mark of a good product is how little development effort is needed to meet performance requirements. EJB perf is about how much run-time effort is required to meet performance needs - that tells me all I need to know.
Robin: "EJB perf is about how much run-time effort is required to meet performance needs - that tells me all I need to know."
Unlike SPEC and TPC, for example, which measure development time? Sorry, I'm missing the point here.