The IBM WebSphere Application Server, Advanced Edition, ECperf benchmark results have been certified by the ECperf Expert Group and published by TheServerSide.com. The results are staggering. WebSphere achieved leading performance of 10316.13 BBops (business operations per minute) at the Standard Workload (STD), and a very impressive price metric of 27 $/BBops (cost of configuration divided by business operations per minute) at the Standard Workload (STD).
Read IBM's official announcement
See IBM's results on ecperf.theserverside.com
See the TSS interview with Shanti Subramanyam, ECperf Spec Lead
Wouldn't the performance of the database server drastically alter the test results? It seems there are too many things being tested at once here.
Well, IBM used a 2-way pSeries box with 2GB of RAM for the database server. Borland used a 4-way Sun box with 8GB of RAM. IBM still got better numbers...
It'll be interesting to see future ECperf results. Will vendors use Sun servers, or hardware with a better bang/buck ratio like IBM's pSeries or Intel-based servers?
(views expressed are my own and don't reflect IBM's views)
Congratulations to the WebSphere team. Great job. You're the best.
IBM has hit three targets with just one bullet.
AppServer, Sun Hardware, Oracle.
Interesting indeed. Boy, you wouldn't want this many enemies nowadays.
Congratulations IBM! 1.3 cert and now this.
< Hoping the rest of the appservers will follow >
This puts pressure on some of the other competitors. IBM is on a roll as of late, although competitors now have some target numbers to beat.
I have used WebSphere 3.02 and 3.5, WebLogic 5.0 and 6.1, Borland App Server 4.5, and JBoss 2.xx. But I knew I was looking at the best when I downloaded and test-ran WebSphere 4.0 Advanced Application Server, Single Server Edition. It was the fastest thing I have ever seen when it came to startup and shutdown. I am a fan of VisualAge 3.x, which helped me deploy applications into WebSphere 3.x, and now WebSphere Studio 4.0 Application Developer looks and feels even better. Having a great application server is just not enough nowadays; having great development, testing (debugging) and deployment tools is just as much a necessity. IBM provides the best combination: WebSphere AS, WebSphere Studio 4.0 AD, and DB2 UDB. Now I just wish I could work on the IBM platform again.
Gotta agree, Jaya.
WAS 4.0 is a great product. And if that wasn't enough, IBM came out with WSAD. Gotta say WSAD is one of the best IDEs I have worked on. No wonder both M$ and Oracle are targeting IBM these days; they see the threat.
Both are really awesome products.
I have been using WebSphere 4.0 for some time. I really found it hard to learn because of the documentation it provides: very little or none for most of the features. I am really frustrated with this. In comparison, BEA's WebLogic has wonderful documentation, and I find it very easy to learn and implement. So if IBM wants to reach a wider audience, it has to get its WebSphere documentation improved!
Have you looked at the WebSphere info center? It is very extensive and is updated with new content often.
There is also an 11+ MB redbook all about WebSphere 4.0 that can be downloaded in PDF format.
If you need something that's not in these publications I'd like to know what it is.
Congratulations to the IBM team! Big question for IBM: is IBM willing to sponsor other application servers hosted on the IBM p640 Model B80 cluster / DB2 7.2, so we can have an orange-to-orange comparison?
So now there are two vendors out there that have run the ECperf test. I had the impression from some chart I saw that Borland was close to the maximum that was possible to get out of their config. And now IBM doubles the number of transactions! And at half the price!
So, are there any technical people out there who can help me find one or two reasons for the big difference?
I am "just a developer" :-)
I think I know which chart you are referring to and the purpose was to show a measurement on scalability and not on possible maximum performance.
Since processors, operating environments and JVMs hopefully evolve over time, the ECperf results from any vendor are not static. The execution is still up to the operating environment and JVMs.
By running the test with the configuration IBM did, they not only tested WebSphere vs Borland AppServer, but also DB2 vs Oracle, IBM vs Sun, and IBM's J2RE build ca 130_20010925was vs Sun JDK 1.3.
Best of course would be if IBM released an ECperf toolkit as Borland did, so that each and every customer could try it out in their own operating environment.
Best regards, Bjorn Gullberg
Congratulations to IBM. I am very impressed with the numbers. Even though I am a big fan of Borland, I should say that according to the results IBM provided the better SOLUTION for J2EE applications. Hopefully, soon we will see results from others.
Perhaps TheServerSide or the ECperf team could add a second standard configuration that might pinpoint the performance of the appserver a little more. Say, a 2-CPU Wintel box (yeah, I know) going against some type of standard CPU/OS Oracle instance.
Just a thought,
Hmm, about the standard config... I think this is simply impossible. Guess what Sybase and Oracle would say if the default DB were DB/2, and vice versa; guess what Sun would say if the default hardware were IBM (and vice versa). Everyone would find a reason to complain ;-)
BTW: because someone mentioned "results are not static"... right. E.g. BAS 4.5 now only costs $4K/CPU, which would dramatically reduce the price of Borland's config. I guess the Sun hardware also got cheaper...
Still, IBM is currently top of ECperf, and though I'm a big fan of Borland (and still like their AppServer best) I (we?) must give IBM the credit Borland received before... just being fair ;-)
A standard hardware configuration is not possible and would not be a good thing either. A highly tuned infrastructure can't be built on standard hardware configurations. As much as we like to optimize every part of our infrastructures (selecting best-of-breed hardware, operating systems, appservers, databases and more), we would just end up suboptimizing. The ECperf benchmark gives us a $/BBops figure for a specific application server configuration running on a specific hardware platform. This will serve as a very good guideline for selecting an infrastructure for an organization, or for judging which appserver would fit best in an existing infrastructure.
This ECperf thing might not be the answer to everything, but at least it gives us some guidance and is a lot better than nothing. If a vendor is reluctant to publish benchmarks, then I would highly suspect that the benchmarks are not competitive enough. The ECperf benchmark will of course be a moving target for every vendor, but I am sure we will pretty soon see a trend towards some appservers simply performing better than others, be it BEA, Borland, IBM, Oracle or whatever.
Another thing that I find quite astonishing is the way some people bash certain appservers without having a clue what they are talking about. A lot of people talk about WebSphere being unstable, poorly performing, badly documented and more. Have these people actually used the product for anything more than a quick "install and uninstall" lately? I know WebSphere has its flaws, but the bashing it receives from some parts of the J2EE community is just not fair when you look at what IBM has accomplished over the last year or so.
We buy containers for many reasons, but one important reason is efficient resource usage. That's what connection management, statement caching and CMP optimizations are all about. Now, what happens when the underlying infrastructure in one test is in general more efficient than in another? One container might get a performance boost that is external to it, and appear faster. The same can be said for others.
I think we should request from vendors at least one common platform along with their preferred platform, or alternatively allow someone to test these systems under multiple configurations, so that we can see how they work across platforms and understand what exactly we are buying and how portable the performance is across systems.
Unless of course we are looking to measure an end-to-end offering which I think IBM is really showing.
I intend to produce a report this month or next showing the differing performance characteristics of the most popular appserver CMP engines, using JDBInsight.
Chief Technology Officer
"By running the test with the configuration IBM did, they did not only test WebSphere vs Borland AppServer, but also DB2 vs Oracle, IBM vs Sun and J2RE IBM build ca 130_20010925was vs Sun JDK 1.3 "
IBM's story would be better if they showed WebSphere running on a Sun Fire with an Oracle DB, so we could see how fast WebSphere itself is.
Good point. If the benchmarking is done by (or on behalf of) a vendor that also makes a database, a VM, a web server etc, how likely is it that one company's product gets benched with another company's?
WebSphere on Oracle db
Oracle app server on DB2
Web Logic on SQL Server
It's actually extremely useful to managers to know how much the entire solution will cost (software, hardware, everything.)
Moreover, in some cases a product may perform very differently on different hardware configurations because of obscure hardware features. For example, to make DB2 scale on a parallel sysplex IBM had to make changes to DB2, OS/390, and the CPU (which now has special instructions for synchronous communication with the cluster cache.) In that case the interesting thing is really how all these components work together. The effect is probably lost in DB2 with Solaris.
Well, all this IBM love-in is great, and I admit WAS is a good tool - if you have thousands of dollars to throw around ($10K USD for a single-seat developer license?!?! Or $36K USD per CPU for deployment?)
Or if you have a dev box with gigs of RAM. Try running it under 512MB... dirt slow and finicky (it often locks up my Linux machine).
Only PHBs will want WAS, and we techies will be left out in the cold. It's hard to get your app server popular if developers can't use it (due to cost or hardware restrictions). BEA/HP and Oracle don't have this problem (admittedly only BEA is any good). I say go to JBoss... ;)
Where are you getting your figures? It says right in the report that the cost for the software is $10K per processor, not $36K. The Single Server Edition is even cheaper, and the developer's edition can be downloaded for free. 3.5 took a lot of memory because it always ran 2 JVMs and required a web server and database, but the 4.0 single server/developer's edition runs as one process and does not need a separate web server or database. The application uses about 70MB on my W2K machine, hardly a problem.
>Well, All this IBM love in is great and I admit WAS is a good tool - if you have thousands of dallars to throw around ($10k USD for a single seat developer licence?!!??!! or $36k USD per CPU deployment)
WAS is free for development, and $10K per CPU for deployment. But, last I got prices on BEA, Weblogic was about same price, and I assume their developer version is free as well.
>Or if you have a dev box with Gigs of ram. Try running it under 512 Megs...dirt slow and finicky (it often locks up my Linux machine).
We do. Runs fine.
>Only PHB will want WAS and we techies will be left out in the cold. Hard to get you App server popular if developers can't use it (due to cost or hardware restrictions). BEA/hp and oracle don't have this prob (admittedly only BEA is any good).I say go to jBoss....;)
Can't argue with JBoss being a good choice, but I've got millions riding on the server, and WAS/BEA are safer choices. If we'd been doing J2EE for a while, I'd probably take more risk, but this is the first app.
IBM proved it's twice as fast as Borland. If this is due to the hardware/DB/JVM, Borland can take the challenge and test on the same kind of configuration.
IBM beat Borland with 2x the performance using half the hardware. Period. Hell, nobody can complain about that.
I could understand the argument if they had used more hardware, but guys, c'mon.
Now it's up to the other hardware vendors to take that challenge and beat IBM. Period.
What's the fuss about?
Please show a bit more professionalism. Read up on ECperf and do your maths. It really saddens me to see some of the responses here. Do people nowadays read only the first paragraph of a press release and not attempt to understand the data presented?
IBM has good figures, there's no knocking that (it was expected). But is their appserver really that much faster? Think about it.
Ignoring the cost comparison for now, it would have been nice to have at least common hardware and platforms across ECperf runs, especially considering that people seem lazy in interpreting the data. This should open the door for others.
I suspect we will see some more figures coming out soon.
The reason people are complaining is that these numbers say nothing about the relative merits of Borland's app server vs. IBM's. For all we know, IBM's app server could be much slower but the database they used could be much faster making them turn in an overall higher score.
Without isolating single elements (i.e. changing nothing between tests except the app server), these ECPerf tests are useless.
"Without isolating single elements (i.e. changing nothing between tests except the app server), these ECPerf tests are useless."
Perrin, you're the only one who gets it. ECPerf is all about database. This is how Borland can have a better container than IBM and get slower numbers. It's all about DB/software and DB/hardware. IBM just came up with a fast DB config. The app server has minimal relevance in these numbers. My guess is that container time counts for less than 1% compared to DB and network I/O time.
Of course the database makes a difference, but I would speculate that the performance difference is more a reflection of how the database is used. WebSphere has good connection management, and lots of other caching and pooling for high performance. The transaction management must be pretty slick too.
IBM must be loving this. If a competitor still claims that WebSphere is slow, then they are implicitly saying that AIX/DB2 outperforms Solaris/Oracle... or that the IBM hardware is better...
No matter what, they win.
Personally, I'd like to see ECperf numbers for WebSphere on S/390 and AS/400.
Please find below some opinions about the IBM vs. Borland AppServer performance comparison:
1. IBM stacked the deck: IBM's ECperf results were achieved in a pure IBM environment: IBM hardware, OS, VM, JDK and DB2. This environment alone would provide faster performance for any appserver running on it, due to IBM's faster VM/DB2 vs. Sun's VM/Oracle. Possibly Borland chose to run their tests in a heterogeneous, real-world environment consisting of an industry-leading server and database platform combination: Sun and Oracle.
2. Clustering supported at the client? IBM's test configuration used single-server licenses at $10K each instead of their clustering licenses, which cost $17K each and are functionally comparable to the Borland AppServer (list price $12K). Using this type of license and custom coding to handle clustering saved them $56K (8 x $7K). So how many real-world deployments would depend on custom programming to manage server clustering?
3. The client was a mainframe? Another oddity about clustering: the client (emulator) system managed clustering for the test configuration's servers. By pushing clustering to the client, IBM was able to bypass the intent of the ECperf test, offloading key product functionality to a system not counted when calculating $/BBops. How many actual deployments would rely on the client to provide this critical functionality?
And this was no ordinary "client" system, "The IBM pSeries 680 is our most powerful UNIX symmetric multiprocessor (SMP) system" with a starting list price of over $200,000 US. It was configured as follows with an estimated list price of near $500k (several times the Sun hardware costs):
- 12x450 MHz RS64-III CPUs with 8MB L2 Cache
- 32 GB RAM
- 4x17.3 GB Disks
Borland's client was a Sun Enterprise 220R:
- 2x450 MHz UltraSPARC-II CPUs with 4MB Ecache
- 2 GB RAM
- 2x18.2 GB Disks
4. WebSphere can't scale: IBM used two hardware servers for this test, each with 4 processors. The reason for running the test on two separate servers is that WebSphere does not scale well to 8 or more processors per server.
5. No, you can't try this at home: IBM has not provided an ECperf test kit to allow anyone to duplicate testing in their own environments.
6. Not the new "King of the Hill": John Meyer from Giga, whose IdeaByte gave Borland "king of the hill" status in December, discussed IBM's testing inconsistencies, and as a result he will not be giving them the crown any time soon. With this many discrepancies, he did not feel IBM was worthy of the title. John is a strong proponent of running tests in the same environment to have a truly comparable test.
So, after all, if IBM's price/performance formula were normalized by adding in cluster license costs and the outrageous hardware expenses for the "client", a different set of numbers would have resulted. No doubt the Borland AppServer would have a better $/BBops, and IBM would be faster simply because they applied more hardware to the problem.
And finally, the best comparison can be done by running the ECperf test in the real world, in everybody's own environment, as recommended by Giga. Except that IBM has not provided the means for anyone to do this testing. Until then, you can just trust that IBM was completely honest in conducting their tests.
I agree that a standard platform would be nice, but saying that IBM stacked the deck is unfair. If AIX/DB2 performs better than Solaris/Oracle, why shouldn't they use it? Borland is certainly welcome to use it too.
As far as the client driver comments go, they are not relevant to the test. I am sure Borland used a client driver which drove their test system to full utilization. They'd be stupid not to, and I don't think they are stupid.
Your comments about having to "just trust that IBM was completely honest in conducting their tests" are off base. The results are audited, that's the point.
I think that ECperf is a test that does not show which Java application server is faster; it shows whose solution is more effective, or in other words, "what configuration can perform the same amount of work for a smaller price". It is not a race of the app servers.
I do not think that companies will start buying the IBM servers specified in the ECperf report because of the ECperf results. There is still much more involved when you totally change your deployment platform, and in most cases it is very expensive.
Talk about sour grapes! WebSphere doesn't scale on SMP boxes? Please! If you had tested this, then I think you'd notice we scale on SMP hardware as well as or better than any of the competition.
IBM chose the most cost-effective platform for the test. Anybody who thinks Borland deliberately got slower numbers is dreaming. It would appear that they chose a more expensive, slower-performing platform, clear and simple. If they deliberately used a lower-performing, more expensive platform then that's their business, but it looks like a mistake to me.
Next, IBM used the full version of WAS not the single server version as you incorrectly stated. The disclosure clearly states WAS 4.02 Advanced Edition. See section 7.2.5. More misinformation on your part.
The client does load balancing. Of course it does! The client ORB gets a list of available servers and load-balances requests across them. Most, if not all, application servers, including WAS, handle clustering in this manner (either in client stubs or the client ORB). There was no custom coding of the clustering; this is standard WAS clustering technology. See 7.5.5 of the disclosure. Once again, more disinformation.
The comments from the Giga analyst are irrelevant. The ECPerf expert group passed the results, they are valid, period.
As for the client mainframe, if Sun has excluded the hardware costs of the driver then so be it; Borland's driver hardware is not included either. But regardless, I don't think the client is a bottleneck in this benchmark. Does anybody really believe that you need a mainframe to drive these clients? I think not. I think it's more a case of: we needed a driver box, and that was what was handy.
Lastly, IBM has cooperated on more than one occasion with ZDNet in their J2EE benchmarks. These benchmarks were run using the independent ZDNet application and independent ZDNet hardware. Other J2EE vendors have withdrawn from such tests. If customers want to see tests on independent platforms, then I believe IBM has met this requirement in the past by actively participating in such independent tests, and in all likelihood will again in the future. We don't invest in optimization only to go hide when the time comes to step up to the plate. Not all vendors are so forthcoming, for some reason...
So to summarise: well, it's best to say nothing and let you draw your own conclusions on Arthur's post.
(these views are my own and don't represent the views of IBM).
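For readers unfamiliar with the client-side clustering described in the post above (the client ORB holding a list of servers and spreading requests across them), it can be sketched as a simple round-robin selector. This is a hypothetical illustration only, not WebSphere's actual ORB code; all class and method names here are invented:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of client-side round-robin load balancing,
// in the spirit of a client ORB that knows the cluster members.
// Names are illustrative, not a WebSphere API.
class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Pick the next server, wrapping around the list; floorMod
    // keeps the index valid even after integer overflow.
    String nextServer() {
        return servers.get(Math.floorMod(next.getAndIncrement(), servers.size()));
    }
}

public class ClientClusterDemo {
    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("server-a:9080", "server-b:9080"));
        // Successive requests alternate between the cluster members.
        System.out.println(lb.nextServer()); // server-a:9080
        System.out.println(lb.nextServer()); // server-b:9080
        System.out.println(lb.nextServer()); // server-a:9080
    }
}
```

The point the post makes is that logic like this lives in the client stubs or client ORB anyway, so running it on the driver machine is standard behavior rather than custom clustering code.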
Judging by your response, you would appear to have a lot more information at hand.
Could you tell me, in your own honest opinion (not IBM's): do the figures really show that WebSphere is that much better in performance than assumed by others in this thread (2x)?
Can customers expect the same relative level of performance across different platforms? Performance with portability? How much of the performance figures relates to the performance of the other elements of the system, external to the WebSphere product? I am sure you have some idea, and I would like you to contribute it to help offset the disinformation you claim and have stated in your response.
What do you think would be the best way to provide customers with more transparent performance data than currently shown? ECperf kits, independent benchmarks, common platforms, common configs (cluster or not)...
William and Reema.
The ECPerf kit is available. Look at the ecperf section on this web site and find the results. There are two downloadable files associated with the WebSphere result, the disclosure document and the FDA (full disclosure archive).
The FDA zip file has everything you need to run ecperf, ears, configuration, scripts, database scripts.
I have previously downloaded the code and scripts, which to a large degree are the standard ECperf kit. I am just looking for the WebSphere specifics, so as to understand more of what is presented.
I think what might have been expected is a bit more documentation centered around the WebSphere configuration and deployment process (have I missed something in the download?).
Such vendor kits come in handy when a customer is testing out many different products and might not already be an expert in all of them (that's what the middleware company expects, or hopes). Though this is less and less of a problem nowadays, with **some** appserver vendors, including yourself (or should I say your employer, IBM), providing better configuration and deployment support (I still have nightmares from the time when I did not work with Borland Enterprise products ;-)).
Can you please tell us why IBM did not provide an ECperf test kit? I do see a lot of us wanting that, so that we can test the results on our own configurations (which do not always have mainframes as clients :)
Please find below the answer to your misinformation claims:
On page 17 of IBM's ECperf test results, the price of the "WebSphere Application Server AE" is given as "$10,081/processor".
According to IBM's web shop (http://www.ibm.com/shop/software
), this is the product which has product code 20P5328/20P5323, namely WebSphere Application Server V4.0, Advanced Single Server Edition.
So this means that this is "The Single Server Configuration" not "The Full Configuration". As it is explained at the product page (http://www-3.ibm.com/software/webservers/appserv/advanced.html
), the Single Server configuration does not support clustering.
So IBM used the Single Server Edition in order to decrease the cost (by about $56,000). Since the Single Server Edition does not support clustering out of the box, IBM did the clustering on the client side by custom coding (again, how many actual deployments would rely on the client to provide this critical functionality?). So if price is not the issue, why didn't IBM use the Full Configuration Edition, which already supports clustering out of the box?
Are you saying the ECperf kit driver program was changed, or that the client stubs provided in the ear are not the standard compiler-generated ones?
It was AE (full version). Standard clustering was used. No custom code is there.
If you phone up IBM's 1-800 number and ask for the hardware and software used in the test, then you'll get the prices provided in the test. It's a publicly available package price, this is disclosed in the disclosure document, and this is allowed by the ECperf specification (section 6.3.4).
Just to be absolutely clear. The package pricing is based on software, not hardware.
First, the customer joins the Passport Advantage program. This is free. They then order the software. If they order the number of licenses for WAS 4.02 Adv and DB/2 used in the test, then they get points for each license purchased. The points for this number of licenses put you in "band D", which qualifies you for a discount that brings the WAS price to the one in the report. You don't need to be a huge customer; a single-person company will get these prices.
So, that's the deal. If nothing else, this shows the price advantage of going with a single vendor for all the software rather than separate vendors.
Just to be clear, you get the same discount even if you run on Sun hardware; the discount is not dependent on the hardware.
The product code 20P5328 you referenced is for WAS 4.0 Advanced Edition (includes clustering support), not single-server edition. On the URL you referenced for IBM's web shop, the price shown for this product number is $10,081 (electronic delivery). Thus the price for AE quoted in the ECPerf disclosure is correct.
You also referenced 20P5323 -- that is also for 4.0 Advanced Edition, only in physical box form (media, printed docs, etc.) rather than electronic delivery.
The product code for 4.0 Advanced Single-Server Edition is 20P5313 and it has a list price of $8,000.
Views are my own and not necessarily IBM's
A couple of months back there was a posting on this site about Oracle performing better than IBM and BEA. Then there was a huge outcry from the community about this being a farce, asking the vendors to release their ECperf results, which would settle the issue once and for all.
My response to the community at that time was exactly what I am seeing and reading here, and that was: "Once the vendors release the ECperf numbers, do you think that the vendors who come down on the list, or the vendors who are not on the list, would absorb and accept the results? No way. Standard benchmarks still would not give you the answers, if the answers are not to your liking."
Life goes on....
I checked with the team who did the test, and the CPU load on that monster machine was barely registering. This bears out my point that the size of the client driver is unimportant as far as the final results go.
We used it simply because it was available in the performance lab.
The toys that those guys have ;-)
If you visit the page at the following link:
And click on the link near the bottom of the page which has the title:
Product 20P5328: View file size
Then you will see a popup window which has the following content:
Product #: 20P5328
Image description: WebSphere Application Server V4.0, Advanced Single Server Edition for iSeries Portuguese, French, German, Italian, Japanese, Korean, Simplified Chinese, Spanish, Traditional Chinese, US English
Image size (in MB): 245650872
(The URL for this popup windows is: http://commerce.www.ibm.com/content/home/shop_ShopIBM/en_US/ESD/2264872.html
So as you can see, the product code for WebSphere Application Server V4.0, Advanced SINGLE SERVER EDITION is "20P5328" and the price is "$10,778.00".
I hope my explanation of how the pricing is calculated and how ECperf allows package pricing will put this to rest. The things to take away are that the prices are valid, that the product used was indeed WAS 4.02 Advanced Edition, that WAS standard clustering was used, and that the "mainframe" client has no bearing on the posted results (its CPU load was under 1% during the test).
If you have other questions then feel free to ask.
This is in addition to what Billy had said,
"IBM stacked the deck: IBM's ECperf results were achieved in a pure IBM environment: IBM hardware, OS, VM, JDK and DB2...."
1. Why should IBM endorse other vendors' products?
2. If IBM were to run ECperf on WebSphere using another vendor's hardware or software, you would definitely see headlines like these on their own websites:
a. XXX database (put whatever you want) powered IBM WebSphere performance test.
b. IBM abandons their own pSeries and uses YYY hardware/OS (again, put whatever you want) for the WebSphere performance test.
"WebSphere can't scale: ..."
3. It also seems implicit from what you have written that ECperf was formed by a bunch of rather inefficient or technically challenged people, such that they were unable to formulate a fair benchmark for app server performance, or that ECperf is biased toward WebSphere App Server.
However, this is not true since "The ECPerf Spec and Toolkit are being developed as JSR #000004 under the Java Community Process 2.0 . The ECPerf Expert Group includes representatives from: Art Technology Group, BEA, Hewlett-Packard, IBM, Informix, Inprise, IONA, iPlanet, Oracle, Sun Microsystems, and Sybase".
(these views are my own and don't represent the views of IBM).
Are you not comparing oranges with apples here, to a certain extent?
For example, Borland have a BBops rate of 5961.77 and WebSphere a rate of 10316.13. To normalise the rates, divide them by the injection rate. For Borland, who used an injection rate of 58, you get a BBops count of 5961.77/58 = 102.79, and for IBM 10316.13/100 = 103.16. As you can see, there is not much difference between the two.
Can you really compare a distributed view against a centralised one? How would Borland have fared if they had used a cluster of two nodes and increased the injection rate?
The same goes for the $/BBops rate. As this rate is related to the injection rate, it is hard to compare the two results when one vendor is using a higher injection rate than the other.
The bottom line is: how would Borland have fared if they had used two nodes and increased the injection rate to 100? Think about it. They were using an injection rate of 58 over one node; if they were to scale over two nodes, they could possibly achieve an injection rate of 116. If you punched these figures into the ECperf equation, they would most probably come out on top. And if they could achieve this on the same hardware, the cost would come down as well.
You'll have to read the ECperf spec to learn a bit more about how injection rate and BBops are related. Here's a quick summary:
The injection rate is a measure of how much work you are attempting to drive. BBops/min is a measure of how much work was actually achieved. If you are meeting all of the ECperf constraints, then mathematically (according to the prescribed transaction mix and so forth) the BBops/min will be about 104.1 times the injection rate. Therefore, normalizing the results by injection rate is meaningless -- if it is a valid result, the normalized value will always be about the same.
If a vendor runs ECperf and still has resources to spare at a given injection rate, they should increase the injection rate so they can get a higher BBops/min. So we must assume that both IBM and Borland ran at the highest injection rate they could achieve at the time on their particular configurations.
It may (or may not) be true that Borland could get double the injection rate if they added a second system and used a clustered configuration. But this would approximately double the cost, so while their BBops/min would be higher, the $/BBops would be about the same. However, in practice you won't get perfect scalability by adding a second system, so the BBops/min would probably be less than double, and $/BBops would go up slightly.
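The cost argument above can be sketched numerically. All figures here are made up for illustration (neither the single-node cost nor the scaling efficiency comes from a published result):

```python
# Hypothetical illustration of the scaling argument: doubling the
# hardware doubles the cost, but imperfect scaling means BBops/min
# rises by less than 2x, so $/BBops goes up slightly.
def price_per_bbops(cost: float, bbops: float) -> float:
    return cost / bbops

single_cost, single_bbops = 160_000.0, 5961.77  # illustrative cost, published BBops
scaling_efficiency = 0.85                       # assumed, not measured

two_node_cost = 2 * single_cost
two_node_bbops = 2 * single_bbops * scaling_efficiency

print(f"1 node:  {price_per_bbops(single_cost, single_bbops):.2f} $/BBops")
print(f"2 nodes: {price_per_bbops(two_node_cost, two_node_bbops):.2f} $/BBops")
```

With anything less than perfect scaling, the two-node $/BBops comes out higher than the single-node figure by a factor of 1/efficiency, which is the point being made.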
And that's what I said: it's now up to the other vendors to take the challenge and beat IBM. Whether they want to do that using the same hardware/topology - well and good.
This is a great achievement from someone who is thought to be slow and out-of-spec. IBM has managed to pull off J2EE 1.3 certification and amazing ECperf results before their major competitors. Maybe some users will stop bashing WebSphere and actually try the server before passing judgement. People who have used WebSphere know that it has steadily improved with every version since the not-so-good earlier versions.
Let's hope the other vendors get their ECperf results out the door as soon as possible so we can get some comparisons backed up by facts instead of feelings. It should be very interesting to see what kind of numbers BEA, Oracle and others will put up.
Anyway, great job by IBM getting ECperf moving and supporting the specs! Now it is up to the other vendors to answer the challenge.
Though I must say I was very disappointed by WebSphere 3.5 and found WAS 4.0 just okay, I also admit I'm _very_ impressed by the moves IBM is making at the moment.
You know, though I admitted WAS 4 was quite a good appserver, I always doubted they would keep up with the specs (i.e. J2EE 1.3).
But now that IBM has published ECperf results, has a fully certified J2EE 1.3 server, and will hopefully soon publish WAS 5.0... great job! If BEA isn't _very_ fast in catching up they'll simply be lost.
Be aware of IBM's marketing - it's not WS 4.0 that's certified J2EE 1.3. It's WebSphere technology for developers - a demo product, not orderable, and no production is supported, or even possible on it. It's a big step forward definitely, but let's not make more of it than it is. This just means that IBM engineers have been able to implement all of J2EE specs. But, this is not a product customers can use yet.
>Be aware of IBM's marketing - it's not WS 4.0 that's
>certified J2EE 1.3. It's WebSphere technology for
>developers - a demo product, not orderable, and no
>production is supported, or even possible on it. It's a
>big step forward definitely, but let's not make more of
>it than it is. This just means that IBM engineers have
>been able to implement all of J2EE specs. But, this is
>not a product customers can use yet.
It has been observed that developers are often 6-9 months ahead of their operational co-workers in uptake of new technology. For this reason, IBM instituted a policy to provide a developer-only version of WebSphere some months ahead of the fully production-ready version. This is a ‘code-complete’ version, available on only one platform (Windows NT/2K), supporting only a single server instance, with only a small set of database resources supported. It is J2EE 1.3 certified. Just what is needed to get started building J2EE 1.3 applications. The J2EE 1.3 certification avoids the embarrassment that some vendors encountered by releasing “EJB 2.0” facilities before the spec was completely baked. They have retrofits to make now.
(these views are my own and don't represent the views of IBM).
Firstly, IBM's RISC chips are more powerful than Sun's. Just look at the number of chips in the top-of-the-line RISC offerings from both companies. IBM does more with fewer chips. This is cheaper, because you need fewer chips and a less sophisticated motherboard.
Secondly, IBM looked carefully at the nature of the workload and concluded that it would be ok to use a cluster (I am thinking cache synchronization here). Obviously two machines with 4 chips each are cheaper than one machine with 8 chips. It's just cluster economics. It's the whole reason parallel sysplex exists.
Thirdly, the license for Oracle on 4 chips is probably way more than the license for DB2 on 2 chips, both because of the number of chips and because I am guessing that DB2 is way cheaper than Oracle.
I'd be curious to know if anyone knows if AIX has a better scheduler and tcp stack than Solaris, but they are both so mature that I would expect them to perform identically.
As far as the application server software itself goes, my guess is that if you bother to publish an ECPerf benchmark, then you have invested in optimizing your code.
I also thought so at first (the price difference between Oracle and DB2, and the hardware itself)... but in fact the two configurations cost "quite the same" ;-) (DB2 is actually a lot more expensive than Oracle in this comparison) - some tens of thousands of dollars difference. Most of the difference comes from IBM having twice the transactions.
The hardware comparison seems strange... IBM was using "old" 375 MHz CPUs, while Sun used UltraSPARC III chips at 750 MHz (the best thing they have at the moment!);
Would be interesting to see how Borland performs on AIX/Power3/4...
Regarding AIX I quite often heard "There is only one Un*x, and its name will soon be 5L"... no religious wars, of course... AFAIK (we only have small AIX and Solaris machines, 2-4 CPUs) both Unices are roughly equal... though I heard AIX5L is impressive.
Disclaimer: this message is my _personal_ opinion about the WebSphere.
Small preamble: my statistical theory professor used to say that there are three kinds of lies: premeditated, innocent, and statistics. What I have seen in the discussion here is all about that -- statistics, and nothing else.
In the _real world_ there are people who love a software vendor and people who are sceptical about it. If you love the vendor, you'll love the fact that it beats someone. I'm in the second camp - I've had a lot of headaches with WAS, so I'm not really willing to share the hype about the superperformance of WebSphere.
To begin with, when I started with it 1.5 years ago, several facts struck me deeply:
1. You need to configure a virtual host first in Apache's httpd.conf, and then in WebSphere, if you want things to run correctly
2. It took IBM some time to accept that WAS 3.5 has memory leaks and to release a patch. Gee, if this is the heart of e-business, this e-business gotta be born dead.
3. The available() method of the servlet input stream always returns 0, which breaks the contract of InputStream.available() as specified by Sun
4. IBM's approach, when facing the need to increase the performance of an application, is to buy faster hardware, not to optimize the software. This one is based on my personal experience working with a guy from IBM on one project.
5. The list can go on, but this message is not about that.
The first point is that if software version 3.5, selling for big buck$, has impediments that sober-headed students who have at least read Mr. Grady Booch would never introduce, the question is: who wrote WAS? Folks, let's face it: taking Apache as the "power engine" for WAS is proof that Open Source beats commercial products (well, IBM was lazy enough that it chose to hack Apache and proudly call it IBM HTTP Server). But DSO, and Apache itself, can be big-time "slow movers".
The second point is that WAS, perhaps like any software, is created by a group of people who pursue certain purposes and write the software for their _specific_ needs and their _proprietary_ vision of the software. We can't blame the WAS developers, since anyone else would do the same. But the problem with IBM in particular is that they are religious about their software; other products do not exist. And, BTW, this is how and why they sell their software.
Point number three: we say, "We used to blame MS for producing buggy software. Try out DB2/WAS and change your mind." Compared to WAS, MS solutions are much more *predictable* and *stable*.
The three points that I criticize WAS for are: strange behavior, slowness, and unpredictability. This is NOT what you would expect from the HEALTHY heart of your e-business.
In order to minimize the flame in this thread, if you do not agree with what I have said, PROVE it with _real world_ examples.
With best regards to Java Enterprise Community,
Software Engineer/System Analyst
WAS 3.5 had its problems. I was a customer up until August this year and know your position well, having personally lived it. Senior IBM execs are also aware of it from customer feedback, and about 12-18 months ago things started to change at IBM.
But despite the issues, there are quite a few very prominent web sites running on 3.5 today. I don't know if I can name them, but one in particular had quite a few huge days recently when a certain children's movie was released.
The support organization has gotten a big-time workover at IBM; things ARE better now. The coding of WAS was reorganized, and again, things are better now.
Your points about broken API contracts etc. should be a thing of the past now that WAS is required to pass the CTS. The fact that 4.0 is J2EE 1.2 certified and 5.0 will be 1.3 certified should mean that bugs such as the one you described are a thing of the past. IBM is committed to CTS compliance across all WAS platforms. The CTS is around 15000 tests designed to ensure that all the APIs behave according to the contract.
12 months ago, WAS was considered slow and non-standard. Look at the turnaround that's happened. WAS is now the fastest and most standards-compliant major server out there. This isn't marketing BS; these are facts: CTS certified, and fastest using an independently audited benchmark. The tooling dependency on VAJ is now largely gone (for better or worse); you can use JBuilder or NetBeans with WAS 4.0.
IBM has changed. These things alone should show that IBM has refocused over the last 12 months and is now on the offensive "big time" in the J2EE market, and the changes in the way things work at IBM are ongoing.
Plus, things are getting even better. The stuff that's coming in 5.0 is very cool (speaking as a developer), both in the base server, the tooling, and the value-add program extensions being layered on top.
Faster hardware. The ECperf tests that this thread is about: we used a p640 with 375 MHz Power3 CPUs. These chips are actually slightly slower than the Sun V880 chips used by Borland (check out the SPEC CPU2000 numbers for both chips at http://www.spec.org). We still came out twice as fast. How did we do this on slightly slower chips? In a word: optimization. We could have run the tests on the monster box but didn't; instead the tests were run on older, smaller boxes that were in the lab.
Anyway, I don't know if this satisfies your frustration but I hope it shows that things are getting a lot better in IBM.
(views are my own and don't represent the views of IBM).
A quick point that I can't avoid: I DO agree with what you have said and I DO respect your opinion. But I wasn't frustrated with IBM at all, nor with WAS. By no means was WAS 3.5 the worst piece of software I have had to deal with in my professional life. So if things are changing, and have changed already, then that is a good reason to persuade a customer to upgrade to 4.0.
But common sense says everything comes at some expense. So when I see somebody claim that _everything_ is fine, I simply KNOW that I'm missing something. If you are a developer, you know the feeling when you stumble on a "feature" when most of the project is written on the assumption that the "feature" will work as promised. Borland, JBoss, BEA, and everybody else have their own problems and benefits. The lack of this duality of views on a single subject is the thing that makes me look for alternative views. So, saying "the problems are gone" means to me "we got brand new problems we don't know about yet".
I also confess that I have to dig into the WAS4.0 deeper to give a more solid opinion what's good and what's bad in the _real world_. If you are interested to continue the discussion without flaming the thread, mail to a dot yantchuk at zaval dot org.
I never said things are perfect, but they are substantially better.
Warning: the WAS tools can be horrible. The Advanced Assembler Tool, which I use to generate the deployment descriptors, will give you worthless error messages when it doesn't like something. An example of this is: "Null names not allowed." How would you like to spend an afternoon trying to find out what that is supposed to mean? What it didn't like in this case was that the reference to the datasource was not in the descriptor.
Things also seem to mysteriously break. My latest problem is that I cannot restart an application (ear file). I was able to do it using DrAdmin tool, but for some reason it won't reload the application anymore and there are no error messages.
What I'm pointing out here is that my experience so far with WAS is that it is a hassle and has taken enormous amounts of my time to work through.
I don't see what all this hype about the tools and WAS being so great is founded on. Maybe my opinion will change in the future, but my first experience has not been impressive.
Just one more thing. Maybe somebody can tell which JVMs were used for BAS and for WAS, because results depend on the JVM, especially if we are talking about the Sun JVM vs. the IBM JVM. For example (this example I found in Chapter 20 of the "JSP 2nd Edition" book posted here on TheServerSide - http://www.theserverside.com/resources/article.jsp?l=JSP-Performance - I hope I did not break any legal rules referencing this source), here is the important part of that result:
25 Web Clients

JVM   Page            Response Time (s)
Sun   catalog_5.jsp   0.731
IBM   catalog_5.jsp   0.438
Therefore, as you see, the IBM JVM is more than 1.5 times faster than Sun's. From the documentation, I could only find that BAS used Java 2 Standard Edition 1.3.1_01, and WAS used a J2RE 1.3.0 IBM build. Maybe for the next test Borland can consider looking at a JVM other than Sun's.
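The "more than 1.5 times faster" claim follows directly from the two response times quoted above; a quick check:

```python
# Ratio of the page response times quoted from the linked JSP book
# chapter (catalog_5.jsp, 25 web clients).
sun_time = 0.731   # seconds, Sun JVM
ibm_time = 0.438   # seconds, IBM JVM

speedup = sun_time / ibm_time
print(f"IBM JVM speedup on this page: {speedup:.2f}x")  # ~1.67x
```

Of course this is one page under one load level, so it supports the narrower point that the JVM choice alone can move the numbers noticeably, not a general claim about the two JVMs.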
Offhand I can say, "Yes, both an Opel and a Lamborghini are cars with 4 wheels, so why should I lose time trying to tell the difference?"
You will probably explode at this... and that was exactly my reaction when I saw these, let's say, "benchmark" results. Are these marketing numbers, or benchmarks led, addressed, and desired by real developers, engineers, computer specialists, etc.? I can't understand this, since the normal definition of any kind of scientific experiment requires an identical environment, identical initial conditions, and repeatability of results. Where are those criteria in these benchmarks? To keep it short and not waste time, we can see:
- no identical hardware support
- no identical OS support
- no identical DBMS support
etc. etc. ...
So it looks to me like a benchmark between the specialised solutions provided by each company, and not a benchmark of a specific application.