A recent article on eWeek quotes an IBM director attempting to defend IBM's weaker price/performance results on ECperf by claiming that BEA's test configuration was not real world, and that IBM's was conducted on high-end hardware and servers, in other words costly. The article is useful in pinpointing the 'weakness' of the $ per business operation metric.
The first couple of thousand business operations are normally far cheaper than the next 10,000.
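To illustrate the non-linearity being pointed at, here is a minimal sketch of the $/BBops-style arithmetic. All the configurations, prices, and throughput numbers below are invented for illustration, not taken from any actual ECperf submission:

```python
# Hypothetical illustration of why a $-per-throughput metric favors small
# configurations: a single cheap box handles the first few thousand operations
# per minute, while scaling further requires clustered hardware whose cost
# rises faster than its throughput. Numbers are made up, not real results.

configs = [
    # (description, total system price in $, throughput in ops/min)
    ("single low-end 2-CPU box", 40_000, 2_000),
    ("4-node clustered setup",  400_000, 10_000),
]

for name, price, ops in configs:
    print(f"{name}: ${price / ops:.2f} per op/min")
# The small box scores roughly half the cost per op/min of the cluster,
# even though only the cluster could actually sustain the higher load.
```

The point of the sketch is simply that dividing total price by throughput rewards whoever stops scaling earliest, which is exactly the 'weakness' of the metric the article touches on.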
Read IBM Scoffs at BEA's ECperf Benchmark Results
Read the latest ECperf price performance figures on http://ecperf.theserverside.com
This is an amusing and silly article, and definitely unfair. What is so bad about running on a Dell PowerEdge 4600? If IBM submitted a clustered server config for a benchmark, I am sure it's because that was how they achieved the best performance and price combination, not because they were trying to be more real world.
The most humorous part is that they (both IBM and BEA) are running on W2K ... a good desktop OS but not one that many MIS managers will allow their mission-critical apps to be hosted on. WebSphere is almost always run (in production) on an AS/400 or RS/6000/AIX ... it is even available (in a special version) for the 390 ... but rarely on Windows (not counting the developer desktop, which is 95+% Windows ;-). WebLogic is probably 70% deployed on Sun SPARC/Solaris, at least for over-2-CPU configurations, although again the developer desktop is 95+% Windows.
Both app servers can run on Windows, but I don't see any IBM test with Windows 2000 on the ECperf results site?
That article and WebLogic's ECperf results simply prove that in ECperf tests only the price of the hardware matters when it comes to showing good results. But we have already had discussions about that.
I have another question:
Will the Java app server vendors (IBM or BEA) take any responsibility if I cannot reproduce the ECperf results, even if I use the same hardware and follow their tuning instructions?
Thanks in advance,
"claiming that BEA's test configuration was not real world, and that IBM's was conducted on high-end hardware and servers, in other words costly. The article is useful in pinpointing the 'weakness' of the $ per business operation unit metric. "
This is totally a case of sour grapes. IBM has also made submissions on Xeon-based servers. If newer, better hardware is available at a cheaper price, that is at worst Moore's Law at work, not a reflection of 'non-real-world' configurations.
The benchmark itself is designed as well as any other performance benchmark: it captures throughput and total cost, and has checks to verify that there is no degradation over time. If the reliability of the hardware needs to be factored in, that is not an easy task (which is precisely why no benchmark captures it). It is very difficult to quantify.
On the one hand, I'd say IBM is right (although I don't think they chose a clustered environment to be close to the "real world"): the BEA configuration won't be used in real-world apps.
On the other hand it is interesting what an AppServer _can_ achieve on the cheapest hardware available.
From this point of view I think BEA's results are perfectly OK: they have a clustered configuration result ("real world") and one tuned for minimum price... in fact I think IBM should do the same, i.e. publish a result on the cheapest hardware available. You just have to read the docs accompanying the benchmark, and if you don't read them... okay, completely your fault ;-)
Personally, I'm extremely happy to see BEA post results on lower-end configurations. Depending on the application in question, that may be far more "real-world" than IBM's cluster.
I'll grant you that setting up a really serious web application deployment environment should have points of failover and pretty strong scalability, and so forth, but there are also a large number of other, less-critical, less-loaded applications that don't merit such hardware, and it can be useful to know how the application servers perform in that space.
I am very happy that BEA posted these results. It shows the futility of trying to measure a complex system with a single number. But, IBM is trying to play the same game so their posting does sound like sour grapes! I don't need IBM telling me that BEA's config would not support a realistic production environment. I can see that for myself.
I don't see why you wouldn't believe that companies are running WebLogic on Win2K; after all, if you feel that the OS is unreliable, then just use WLS clustering.
(The OS becomes basically irrelevant and shouldn't matter at all.)
Great point on clustering ... I have seen both WL and WS on NT4/W2K in the wild, but only on pizza boxes (i.e. many 1 or 2 cpu boxes / 1u's in a rack). Also IIS is relatively popular for fronting the app server (again, a rack of 1u/1cpu boxes).
The OS should not matter, but it does. We ran clustered WS on W2000 servers but we could not get any stability. Now we run the same thing on Sun/Solaris and most problems are gone. What's more, our Infrastructure Department does not allow any business-critical applications to run on Microsoft operating systems. I guess they have learnt some lessons. So I agree, the OS should not matter. But surely, it still does.