News: BEA and Unisys post new SPECjAppServer2002 results
BEA and Unisys have recently published two sets of new SPECjAppServer2002 benchmark results in the Multiple Node category. With these results, BEA WebLogic Server now holds the top four throughput positions among all published configurations. BEA has also achieved the lowest price/performance ratio in the Multiple Node category, delivering better performance than the nearest competitor at less than half the cost. Highlights of both tests appear below; the complete descriptions and full disclosure data can be found at http://www.spec.org/osg/jAppServer2002/results/jAppServer2002.html.
- Posted by: Vadim Rosenberg
- Posted on: August 04 2004 23:55 EDT
The first benchmark, published on July 21, scored 2,587.74 TOPS@MultiNode at a cost of 139.84 $/TOPS@MultiNode.
This result can be directly compared with IBM's test, which used similar Intel® Xeon-based hardware but required 9 systems versus only 4 used by BEA/Unisys. In that benchmark, IBM achieved slightly lower throughput than BEA (2,575.34 TOPS@MultiNode) at more than twice the cost (330.07 $/TOPS@MultiNode), which makes BEA's cost per TOPS roughly 58% lower.
The second benchmark, published on August 4, increased the utilization of the application servers by using a bigger database server: the database server that had 8 CPUs in the first test used 12 CPUs in the second one. This benchmark showed a throughput of 3,096.49 TOPS@MultiNode at a cost of 157.66 $/TOPS@MultiNode.
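The two published numbers for each result are related by a simple formula: a quick sketch, assuming (per the SPEC reporting rules) that the $/TOPS metric is just the total priced configuration divided by the measured throughput, lets us back out the implied total price of each configuration. The dollar totals below are derived from the published metrics, not separately disclosed figures.

```python
def implied_total_price(tops, price_per_tops):
    """Back out the total priced configuration from the two published metrics.

    Assumes price/performance = total configuration price / throughput (TOPS).
    """
    return tops * price_per_tops

# First BEA/Unisys result (July 21): ~ $361,870 total
bea_first = implied_total_price(2587.74, 139.84)

# Second BEA/Unisys result (August 4): ~ $488,193 total
bea_second = implied_total_price(3096.49, 157.66)

# IBM's comparable Xeon-based result: ~ $850,042 total
ibm = implied_total_price(2575.34, 330.07)

# IBM's per-TOPS cost relative to BEA's first result: ~ 2.36x,
# i.e. BEA's cost per TOPS is roughly 58% lower.
cost_ratio = 330.07 / 139.84
```

Note how the second BEA/Unisys configuration is pricier overall (the bigger database server) but still achieves the higher throughput that keeps its $/TOPS competitive.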
The system under test included BEA WebLogic Server 8.1 and the BEA WebLogic JRockit JVM running on four Unisys ES3020L servers, each with dual Intel® Xeon processors, on Microsoft Windows 2003. The database server was a Unisys ES7000 Aries 410 with eight (twelve in the second test) Intel® Itanium® 2 processors running Windows 2003 and Microsoft SQL Server. The benchmark was run by the Unisys team at Unisys labs.
SPECjAppServer is a trademark of the Standard Performance Evaluation Corp. (SPEC). Competitive claims reflect results published on www.spec.org as of August 4, 2004. The comparison presented is based on the highest performing results shown by BEA and IBM in the Multiple Node category. For the latest SPECjAppServer2002 results visit http://www.spec.org/osg/jAppServer2002/results/jAppServer2002.html.
WebLogic Product Marketing
BEA Systems, Inc.
- Porsche's are fast too by artful dodger on August 05 2004 09:09 EDT
- BEA and Unisys post new SPECjAppServer2002 results by Artti Jaakkola on August 10 2004 04:34 EDT
Just because a Porsche is faster than a Toyota Camry doesn't mean I need to buy one. What's the cost in time and money of implementing a Weblogic/Portal/Workshop solution compared to a Tomcat/Spring/Hibernate/plain Ant solution?
"Just because a Porsche is faster than a Toyota Camry doesn't mean I need to buy one. What's the cost in time and money of implementing a Weblogic/Portal/Workshop solution compared to a Tomcat/Spring/Hibernate/plain Ant solution?"

Development costs are one type of expense; the cost to put the application into production is another. SPECjAppServer2002 includes a price/performance metric that covers software, hardware, and support costs. The Unisys submissions with WLS are interesting because of their relatively low price/performance, which saves money in the long run.
In the category in which it was published (see http://www.spec.org/jAppServer2002/results/jAppServer2002.html#MultipleNode), the Unisys submission with WLS, at roughly $139 per TOPS, represents the best price/performance.
For full disclosure, I am a BEA employee.
"Just because a Porsche is faster than a Toyota Camry doesn't mean I need to buy one. What's the cost in time and money of implementing a Weblogic/Portal/Workshop solution compared to a Tomcat/Spring/Hibernate/plain Ant solution?"

Very good analogy. Some people buy a Porsche. Some buy a VW. The nice thing is that they both run on standard gasoline. Last I checked, though, people weren't giving away cars ;-)
Coherence: Clustered JCache for Grid Computing!
You are absolutely correct.
What you missed, though, is:
1. This benchmark specifically tests application servers: not servlet containers, not Portal, and not the development cycle. Just the J2EE servers. So, if you want, you can at least compare this result with the other app server results out there, and for that you go to the link specified in my original message.
2. One of the main metrics, price/performance, specifically takes into account the total price of the configuration tested. It includes the costs of hardware, networking, database, software, support for three years, and everything else a customer would expect to pay for in order to build a similar system. Short of facility rent and energy costs, this is as close to TCO as you can get.
3. The beauty of the SPECjAppServer2002 benchmark is that it's cross-vendor, and every vendor participating has to approve each result before it gets published. So, I can assure you that IBM, Oracle, and the other members agree with every letter and number in this submission.
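The cost breakdown in point 2 can be made concrete with a small sketch. All the dollar figures below are made up purely for illustration; they are not the actual priced bill of materials from either submission, and the category names are my own grouping of the costs the comment lists.

```python
from dataclasses import dataclass

@dataclass
class PricedConfig:
    """Hypothetical cost components of a benchmarked configuration (USD)."""
    hardware: float      # app-server and database machines
    networking: float    # switches, NICs, cabling
    software: float      # app server, OS, and database licenses
    support_3yr: float   # three years of vendor support

    def total(self):
        return self.hardware + self.networking + self.software + self.support_3yr

def price_performance(config, tops):
    """$/TOPS@MultiNode: total priced configuration divided by throughput."""
    return config.total() / tops

# Entirely invented numbers, just to show the shape of the calculation:
example = PricedConfig(hardware=250_000, networking=10_000,
                       software=80_000, support_3yr=60_000)
ratio = price_performance(example, 2587.74)  # roughly 154.6 $/TOPS here
```

The point of the metric is that shaving any of these line items, not just raw throughput, improves the published $/TOPS number.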
Now, good luck with your Tomcat-based portal!
"[...] What's the cost in time and money of implementing a Weblogic/Portal/Workshop solution compared to a Tomcat/Spring/Hibernate/plain Ant solution?"

Depends on whether you need to manually implement failover, JMS, remoting, connection pooling, etc.
But you are absolutely right: if you do not need these services, do not use an application server. The Toyota will do just fine then (perhaps even better, at lower cost). When starting J2EE projects, people often fail to check whether they really need an application server.
Why is BEA still testing with the 2002 version of SPECjAppServer?
At least I would like to see things like JMS included in benchmarks.
"The latest version expands the SPECjAppServer benchmark to exercise more capabilities of J2EE application servers. SPECjAppServer2004 load drivers access the application through the web layer for the automotive dealership and through Enterprise Java Beans (EJBs) for the manufacturing domain. The new benchmark also measures performance for the web tier and messaging infrastructure."