New ECperf results have been posted on TheServerSide. IBM has soared above BEA with by far the best performance figure: 32581.47 BBops/min@Std. In a follow-up Q&A, Matt Hogstrom, from IBM WebSphere Performance, provides some insight into what changes IBM made this time around to achieve these new results.
Q: The latest submission posted by IBM just about doubles the previous submissions. Why did you choose the multi-node configuration and what does it show?
Matt: We wanted to show several things with this new submission. First, WebSphere Advanced Edition provides seamless load-balancing across a multi-node cluster. Requests are evenly distributed to the various Application Servers in the cluster without changes to the client application, in this case the ECperf driver. Also, for some customers a clustered configuration makes sense to avoid single points of failure that might be exposed by running on a single system. This submission shows that this type of configuration is quite viable and produces excellent throughput results.
Q: IBM is the first to submit using Linux. Why did IBM decide to use Linux instead of another OS?
Matt: Our customers have varying requirements for deployment. We wanted to show our commitment to Linux as well as show its ability to provide excellent performance and horizontal scalability. This submission also showcases IBM's JVM providing superior performance on the Linux platform.
Q: There are nine application server nodes in this submission. What are the scaling characteristics using this many nodes?
Matt: We found near-linear scalability when we stopped at 9 nodes. With only 4 CPUs on the database server, we were able to achieve a sustained transaction rate of 32K BBops a minute while servicing the 9 application server nodes. As app server nodes were added, the workload on the database increased linearly, so given a larger database server, we are confident that more app servers could achieve an even higher transaction rate.
Check out IBM's new ECPerf Result Posting
Read IBM's announcement
Great to see the use of LINUX
Well, IBM rocks as always. Maybe now they'll move the EBay hardware over to Intel.
>>:-) good one.
And yet I bet that if BEA runs the same thing they'll come in with a lower price :)
Incidentally, why is BEA not writing some open source software as IBM is? It would cost them some money, but their developers would probably like that and it would buy them some extra mindshare. They could probably save a few million a year on marketing.
To me IBM's image is better than BEA's because 1) they have a long history of outstanding engineering and 2) they rallied behind the idea of open source.
If BEA is smart they should neutralize this image threat by sponsoring some open source software, maybe even do some Linux work, like fixing some thread issues that Linux might still have.
I was joking.
IBM does not always rock. I thought everyone read the WebSphere reviews on these forums.
Although ThinkPad is very good and IBM marketing is absolutely phenomenal.
IBM rocks sometimes but usually not in J2EE.
I do want to know the performance of a J2EE container running on an SMP Linux box (for example, 4 Intel PIII/IV CPUs, 1-2 GB RAM).
If anyone has any ideas, please send me email at firstname.lastname@example.org
IBM's application server configuration is 9 nodes, and they say their solution has a linear scalability curve.
Each node is a dual-CPU box with 1.75 GB RAM, so your 4-CPU configuration is IBM's divided by 4.5, or about 22% of it. Except that you mention a 1 or 2 GB SMP RAM configuration, while the equivalent two IBM nodes carry 1.75 GB x 2 = 3.5 GB. As RAM is a prime parameter, I would suggest adjusting by 50% (if you mean 2 GB of RAM). That means your configuration could perform at about 10% (22/2 = 11 -> ~10) of IBM's benchmark, i.e. roughly 3300 BBops/min, or 55 BBops per second.
Last but not least, what is a BBop? If we map it to an averaged JDBC transaction, we would say roughly 55 common JDBC accesses per second, including Web container processing.
It's an approximation.
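The back-of-envelope estimate above can be sketched as a quick check. The scaling factors are the poster's rough assumptions, not measured data, and real scaling would never be this clean:

```java
// Rough sizing estimate along the lines of the post above. The 4/18
// CPU ratio and the 50% RAM penalty are the poster's guesses.
public class BbopEstimate {
    public static void main(String[] args) {
        double ibmBbopsPerMin = 32581.47;   // IBM's posted result
        double cpuRatio = 4.0 / 18.0;       // 4 CPUs vs IBM's 18, ~22%
        double ramPenalty = 0.5;            // halve again for less RAM

        double perMin = ibmBbopsPerMin * cpuRatio * ramPenalty;
        double perSec = perMin / 60.0;

        // The post rounds the 11% factor down to 10%, giving ~3300 and
        // ~55; unrounded, this prints ~3620 BBops/min and ~60 BBops/sec.
        System.out.printf("~%.0f BBops/min, ~%.0f BBops/sec%n", perMin, perSec);
    }
}
```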
I applaud the use of Linux here. I wonder which distribution they used? Is it possible that they made some OS tweaks that helped them which aren't part of a commercially available distro?
Red Hat Linux Professional 7.2 is what is listed in the ECperf results. It is an interesting read, but I would like to see what kind of results could be seen on an Intel-based system with a similar configuration.
Good to see that the comments about poor JBoss performance on Linux compared to Wintel do not seem to be a problem for all J2EE app servers. Maybe the problem is more about Sun's JRE.
Isn't the eServer 330 they used Intel based?
Congratulations to IBM and Red Hat Linux. I think these results are more important for the usage of Red Hat Linux and the IBM JRE than for WebSphere. By the way, maybe BEA will some day consider running WebLogic on Linux with the IBM JRE for better results.
Nice spin once again, but the reality is that IBM's latest submission simply addresses performance by throwing more hardware at the problem.
IBM's previous submission of 8 CPUs couldn't surpass BEA's performance so IBM had to more than double the number of CPUs to 18 and double the number of DB servers from 2 to 4. Since this resulted in a LESS than doubling of the performance, I don't see anything terribly exciting about their result.
8 CPUs = 16634, 18 CPUs = 32581
Linear scalability would have at least achieved 37427.
And even with the deeper discount from IBM's Passport Advantage program, the free maintenance that goes along with it, and support at more than half off, IBM's $13/BBops result is nearly double BEA's $7/BBops figure.
Mike Murphy said:
"8 CPUs = 16634, 18 CPUs = 32581
Linear scalability would have at least achieved 37427."
You are comparing apples and oranges, since the 16634 was on W2K and the 32581 was on Linux (and I believe the hardware was different, but I haven't read the whole FDR yet).
"IBM's $13/BBOPs result is nearly double BEA's $7/BBOPs figure."
Agreed, but remember that the $7/BBop was for a 7539 BBops/minute configuration. In their 16,696 BBops/min configuration, the cost went up to $18/BBop. It is probably safe to assume that a 32,000 BBops/min config for BEA would be at least that high, and maybe higher. Of course, the use of Linux vs. W2K lowers the software costs, and as you note, IBM discounts come into play...
With every new submission, new questions seem to pop up, but it is nice to see the vendors putting so much focus on price/performance. I think that IBM, BEA, Pramati and others are collectively proving the case for J2EE vs. .Net.
I think we need to examine the numbers closer before drawing conclusions such as yours.
Our total cost was 344,247. WAS licenses and support were 169,722. Let's do the math for WebLogic on the same hardware using DB2. Our hardware plus the database and operating system costs comes to 174,525.
Using the pricing information from their previous submissions, 18 WL licenses would cost around 370,260.
So, the WL total cost on the same hardware with DB2 would be 544,785 (370,260 + 174,525).
Our BOPs per CPU were within 0.03% of each other in that last 4-way W2K benchmark, where we all used the same hardware for the app server side of things. So, ignoring the CPU differences (the 4-ways used 900MHz P3 Xeons; our new 2-way numbers use cheaper 1GHz+ PIIIs), let's say that for a given amount of Intel MIPS we produce the same BOPs as BEA.
So, if we use our new BOPs rating on the 9-node cluster producing 32K BOPs, then with BEA running on our hardware we get 544,785 / 32,581, or about $17/BOP.
That looks like more than $14 to me.
Now, let's ignore any Passport pricing etc. WAS retails at 12K per CPU, not the 20K that WL costs. Even factoring this in, WAS is still better on $/BOP.
Let's look at the database. All the submissions using WebLogic used Oracle; we used DB2. BEA has consistently used twice as much database power as us, as well as more memory. Twice as many CPUs equals twice the software license fees and, of course, more hardware to buy.
So, run the math again using Oracle instead of DB2 and you'll need an 8-way database box, which costs at least double the 4-way, plus more money for database licenses, meaning the $17/BOP number above is way too low for a WebLogic/Oracle run on the same hardware.
Draw your own conclusions.
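For anyone who wants to re-run Billy's arithmetic, here it is in one place. Every input is an estimate quoted in the thread, not an official price list:

```java
// Re-running the cost arithmetic from the post above. All figures are
// the poster's estimates from public submissions.
public class CostPerBop {
    public static void main(String[] args) {
        int hardwareDbOs = 174525;   // hardware + database + OS ($)
        int wlLicenses   = 370260;   // 18 WL CPU licenses at ~$20.5K each
        int total        = hardwareDbOs + wlLicenses;   // 544,785

        double bbopsPerMin = 32581;  // the 9-node cluster's throughput

        // 544785 / 32581 ~= 16.7, i.e. the "$17/BOP" in the post
        System.out.printf("$%.2f per BBop/min%n", total / bbopsPerMin);
    }
}
```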
Billy (works for IBM, opinions are my own and don't represent IBM)
Get your facts right ... WebLogic does not cost $20k/CPU.
Is this classic IBM double speak or what? First you use a very dubious pricing strategy (aka Passport Advantage) to drastically reduce your price and then you wrongly inflate the competition's price!
WebLogic achieved 7539 BBops on a 2 CPU box. By simple extrapolation, if you cluster 9 of these together (like what IBM did), you should get 67851 BBops (theoretically). Let's reduce the scalability by 25% ... even this beats the pants off of your result.
So in reality, this IBM result isn't all that great as it is made out to be!
I don't want to get into a pi**ing match here. I hate being perceived as spinning. I'm only responding to Mike's post, as you may have noticed.
If you're fond of facts, however, here they are:
According to the HP and WL 7.0 submission, a CPU license for WL 7.0 is 17K, plus support, which is 14K for 4 CPUs. Average it out and WL costs around 21K per CPU. So, I underestimated, but the basics remain more or less unchanged.
As for the 7539 BOPs 2-way number, BEA got this on a pair of 2GHz P4 Xeons, not a pair of 1.26GHz non-Xeon PIIIs. Would you like us to run a number on the P4s?
Billy Dude ... your comments have turned this into one big P****ing contest! :-)
Let us not forget the fact that IBM uses a "tool" to optimize the DB calls and thus defeats the very spirit of the benchmark (now don't give me the spiel about "ECperf committee approved this result so it must be kosher"). We all know that it takes just one vote to tilt the balance either way in any committee!
ECperf disallows any ECperf-specific optimisations. We don't use any. CMPOpt is a generic optimization tool that can be used with any J2EE application, not just ECperf. CMPOpt will be integrated and invisible (except for its impact on performance) in future versions of WAS.
If you're saying our CMP technology is superior and therefore unfair, then what can I say? Bizarre! You may also notice that we can run in a CMP-only configuration, whereas WebLogic needs to use 2 BMPs in their runs in order to even run ECperf. This may, of course, change when 7.0 goes GA, but for now it appears that they can't run in CMP-only mode. All these things are basically point-in-time statements anyway.
Can we just let this lie, Andy? It's just a benchmark. Next week there will be another one to talk about. I'm happy to keep quiet so long as nothing needs to be clarified regarding the test or our technology. If I've said anything inaccurate, I'm happy to have it pointed out and corrected.
First, I must say I like the newer (4+) versions of WebSphere for production deployment... not for development, though; I don't think WAS is that convenient, or the best app server overall. All together I'd say I'm not a WAS fan...
But then, I don't understand you (and some others here): why are you trying to bash Billy when he tries to clarify and explain? In fact, that some IBM employees are here earns them many image points in my opinion... caring for developers is not the worst thing a company can do.
Then, why can the BEA fans not simply accept that currently IBM WAS has the best ECperf result... why doesn't BEA simply do the same (9-node cluster) if it's so easy? (I know, IBM also bashed BEA for their "cheap" ECperf result, but I agree with them there; that was not real-world, although interesting to know, as I stated in the related thread.)
The IBM WAS CMPOpt tool is IMO not unfair; BEA is just _lacking_ such a tool... 100% BEA's fault, congratulations to IBM for this! As long as it is not ECperf-specific and optimizes apps "automatically", such tools are great. They are _exactly_ what vendors compete on regarding the pure J2EE implementation. It does not defeat the spirit of the benchmark; this _is_ the spirit of the benchmark. (I also don't complain that BEA uses T3, although that IMO really is something to complain about. But if you think this tool is unfair, I'd say we should completely ban all non-ejb-jar deployment descriptors, as they all contain vendor-specific optimizations; I should also say the BEA jDriver is unfair, ...)
"The IBM WAS CMPOpt tool is IMO not unfair but BEA is just _lacking_ such a tool... 100% BEAs fault, gratulations to IBM for this!"
If I understood correctly, it attempts to mark bean methods as 'const' by analysing the bytecode - there is no need for such a tool when using 2.0 CMP. Also, I do not understand why Andy makes such a big deal out of it - it does the same thing as WebLogic's isModified() (which is only needed for 1.1 CMP anyway), but automagically. It's no different from any other vendor-specific optimization.
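For readers who haven't seen the EJB 1.1 pattern being discussed, here is a minimal sketch of the isModified() idea. The bean itself is made up, and the EntityBean callbacks and deployment descriptor are omitted; isModified() is WebLogic's documented hint method, while CMPOpt reportedly derives the same information automatically from bytecode analysis:

```java
// Sketch of the EJB 1.1-era isModified() pattern. Without a hint like
// this, the container writes every CMP field back to the database at
// the end of each transaction, even for read-only access. (A real bean
// would implement javax.ejb.EntityBean; omitted here for brevity.)
public class AccountBean {
    private boolean dirty;     // tracked by hand in EJB 1.1 CMP
    private double balance;    // a CMP field

    public double getBalance() {
        return balance;        // read only: leaves dirty == false
    }

    public void setBalance(double b) {
        balance = b;
        dirty = true;          // mark the bean as changed
    }

    // The container consults this before ejbStore(); returning false
    // lets it skip the UPDATE for beans that were only read.
    public boolean isModified() {
        return dirty;
    }

    // Called by the container after persisting the bean's state.
    public void ejbStore() {
        dirty = false;
    }
}
```

The point of either approach is the same: if the container knows a bean was only read, it can skip the UPDATE at transaction end.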
It would be nice if there was actually a productive conversation on this thread.
The CMPOpt tool: what is there to say? Is it useful in the real world, or was it just for ECperf? It clearly has use in the real world! Please don't tell me it is an ECperf special: I don't think that argument is interesting.
It is true that 2.0 CMP eliminates the dire need for this. Someone said that this shows that IBM's CMP technology is better. It is that kind of spin that sets people off, just like suggesting that DB2 is better than Oracle because IBM used half the CPUs that WebLogic did.
However, while it is true (in principle) that BEA's 2.0 code will automatically get the lion's share of the wins that CMPOpt delivers, as pointed out earlier, I still believe BEA needs to address this issue, as CMPOpt does more than isModified().
I am curious, but I don't understand the pricing scenarios. I do know that clustering is not needed from a technical standpoint: these app servers do not need to share state at all.
I am sorry to pick on you Bernhard, but you seem the most culpable:
1. Just like this published result, the BEA published result is real-world. I have seen installations of 2 different AppServer applications that are similar to it. If you want ridiculous, check out TPC-C.
2. FYI, it is the first day. I believe BEA will respond to this: at least I hope so...for IBM's (J2EE's) sake.
3. Good comeback with CMPOpt explanation....could finish strong, but then.....why?
4. BEA JDriver is involved in a plethora of production applications worldwide! I mean ????!!!
The people who complain about things usually are not clear on the real world viability and distrust of spin. I think JDriver is fairly clear.
5. Question: What is T3? The bandwidth for this test is noise, so I assume this is not network cable. I am curious what this is.
IBM has just raised the bar, yeah. For IBM, J2EE, business in general, and even for BEA, this is a good thing. Yeah, what a concept. If you don't agree, ponder this: who thinks that IBM would have accomplished so much in the past 18 months if BEA (and WebLogic before the acquisition) hadn't led the way for so long, setting the pace for IBM to target? Don't kid yourself: Microsoft (.NET) would be ahead of J2EE. Competition is a good thing!
No one responded to my earlier thread. This is probably because it was not inflammatory; please excuse me. At any rate, here are some topics I am interested in:
1. Data Stomping. Apparently the Pramati guy had a claim that their result didn't have lost updates. I would like to know more about this.
2. How are 40K+ BBops going to be reached? Is an 8 CPU DB the only alternative? etc.
Please, let's have a discussion where we can all share ideas. Thanks, GS.
Data stomping is a way of life in the current ECPerf. If we don't do what the competition does then we lose out. They'll say they are faster and not mention that they were stomping and we weren't. Such is life. The new ECPerf will sort this out and by making it mandatory for everyone, solve this problem.
As for 40K: as CPUs drop in price to make the BOP/$ number attractive, you'd see numbers way beyond 40K for a 9 x 2-way cluster. If you look at BEA's 2-way P4 2GHz number of 7500, then a 9-node cluster of 2-way 2.4GHz boxes should give at least 75K-80K BOPs. You'll need more database power, of course, but for WAS, 8 CPUs would probably work in this scenario; not sure about WebLogic, but on current form they'd have to switch to a Unix >8-way box for the DB.
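Billy's 75K-80K figure is just node-count times clock-speed scaling off BEA's published 2-way number. Sketched out as a pure back-of-envelope, assuming perfectly linear scaling (which real clusters never quite deliver):

```java
// Projecting the 40K+ scenario from the post above: scale BEA's 2-way
// 2GHz P4 result by node count and clock speed. Assumes perfectly
// linear scaling, which is optimistic.
public class ClusterProjection {
    public static void main(String[] args) {
        double twoWayBops = 7500;        // BEA's 2-way P4 2GHz result
        int nodes = 9;                   // the 9-node cluster discussed
        double clockScale = 2.4 / 2.0;   // 2.4GHz parts vs 2GHz

        double projected = twoWayBops * nodes * clockScale;  // ~81,000
        System.out.printf("~%.0f BOPs/min projected%n", projected);
    }
}
```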
Besides software licenses, the database is the biggest factor in price on these tests. A big box for a database just costs a lot of money. In our latest numbers, the DB amounts to half the total box cost. A 4 way Intel box costs around 50K, an 8 way around 100K. If you move past 8 way then you end up with Sun/AIX boxes and they cost big money. A 16 way system won't give you much change from a million. Example, a Sun 12K with 16CPUs/32GB RAM lists at a just over a million.
Personally, for running a clustered J2EE application, Intel is the only way to go for the app server boxes. They are cheap, offer the performance, and you get better contingency planning. If I have 9 boxes and one dies, the failed box's ninth of the load is spread over the remaining eight (about a 12% bump each) and I still have 8 boxes. Using a Unix big-box approach, you might have 2 big boxes; one dies, the load on the other doubles, and until the failed box is fixed you've got a single point of failure. The real problem is lowering the cost of administering 9 boxes versus two, and that'll be solved real soon now.
So, from a cost point of view, you want to stay below 8 CPUs for the database. Switching to Sun/AIX boxes drives up the cost significantly, but if you need more than 8 CPUs or want Unix hardware, such is life until the new Intel-based 16-way boxes etc. arrive and customers are comfortable using Intel boxes for servers.
For a 40K number, the database shouldn't need 8 CPUs for WAS, just use 2.4Ghz P4s instead of the 1.6's we're using on this benchmark and that should be enough. I think the new ECPerf spec will show the CPU graphs letting you see how busy the boxes actually were during the test.
I am sorry if my Data Stomping post caused flashbacks to the irrational conversation we saw about that on another thread. I simply meant that "what can be done about this age old problem?", not "those lazy AppServer vendors..."
I have been running through these numbers as you did here. (BTW, isn't a 4 way W2K box about 1/4 of what any 8 way is?) What I want is new innovation: I think the AppServers can utilize HW much more efficiently. What do you think?
:-) I appreciate your enthusiasm about CMPOpt, which is indirectly mentioned often. Congratulations, but don't get comfortable: believe me, this advantage will be gone in 2002. I hope there will be more of these cool innovations and smart ideas coming from IBM.
Keep up the good work,
W2K is a lot better than NT was on SMP boxes, plus the hardware is getting better too, so it's not quite as bleak as you paint. An 8-way isn't 2x a 4-way yet, but it's not bad either.
As far as app servers using the hardware more efficiently, I don't know. I think the deal is that an app server can scale horizontally, while databases normally scale vertically. Vertical scaling is more expensive from a box-cost point of view than horizontal scaling. Databases that can scale horizontally do exist, such as Oracle OPS or DB2 EE, but how well they work really depends on the application that you're putting on them, and applications typically need to be partitioned to really fly on such architectures. Plus, licensing for those horizontal databases are as cheap as say app servers are.
As far as us getting comfortable, rest assured that nobody here is kicking back. Trust me.
Sorry for the mistyping; I obviously meant that horizontal DBs are not as cheap as their app server brethren.
Agreed. However, I am afraid I stated my point on Wintel HW unclearly. I meant to say that I thought 4-ways were about 1/4 the cost ($25K) of 8-ways ($100K). It is also true that you don't get 2x the DB throughput on an 8-way.
I also meant to say that the actual work being done by the AppServers is not "a lot". There is much room for technical performance improvement. I don't mean to imply that it is trivial to do this.
Data partitioning severely inhibits application modifications and/or additions. Witnessing the torture as the IT dept. tries to "uncement" its data architecture is painful enough. Most commonly these are short-term decisions, and the people who make them usually don't want to be around when they have to be cleaned up.
"Personally, for running a clustered J2EE application, Intel is the only way to go for the app server boxes. They are cheap, offer the performance and you get better contingency planning."
Not necessarily true. For 2 way app server boxes you'll do better with AMD. AMD Athlon MP is cheaper and performs better at the same time.
"Data stomping is a way of life in the current ECPerf. If we don't do what the competition does then we lose out."
It is not just about ECperf not requiring it. It is also about there being no support, or no well-performing support, for concurrency control in most app servers.
Even now, the only ways to get concurrency control would be to use DB isolation levels (which may have a severe performance impact), to use BMPs, or to use concurrency-control mechanisms that may not be cluster-safe.
All this, just to ensure that when a bean instance gets modified, no other transaction modifies the same bean!!
WAS 4.02 supports optimistic locking for CMPs, as will 5.0. See the product documentation for more information.
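The optimistic-locking idea itself is vendor-neutral and easy to sketch: read a version number along with the data, take no locks during the transaction, and guard the write with the version you read (in SQL terms, something like UPDATE ... SET ..., version = version + 1 WHERE id = ? AND version = ?; a zero update count means someone else committed first). The class below simulates that check in memory and is purely illustrative, not any vendor's API:

```java
// Illustrative only: simulates the version-guarded write that
// optimistic CMP locking performs against the database.
public class OptimisticRow {
    private int version;      // bumped on every successful commit
    private String value;

    public OptimisticRow(String value) { this.value = value; }

    public int readVersion() { return version; }
    public String read()     { return value; }

    // Mimics "UPDATE ... WHERE version = ?": the write succeeds only
    // if nobody else committed since expectedVersion was read.
    public synchronized boolean commit(int expectedVersion, String newValue) {
        if (version != expectedVersion) {
            return false;     // stale read: caller must retry or roll back
        }
        value = newValue;
        version++;
        return true;
    }
}
```

Two transactions that read the same version will both try to commit, but only the first succeeds; the second sees false and retries. That is exactly why optimistic mode is cheap for read-mostly beans: nothing is locked unless a write actually collides.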
Yes. Do you wish to tell us (at some non-NDA level) about how Pramati does this better. I briefly looked at the FDR and the FDA, but didn't find anything special. How about pointing out interesting items?
"Do you wish to tell us (at some non-NDA level) about how Pramati does this better. I briefly looked at the FDR and the FDA, but didn't find anything special. How about pointing out interesting items?"
Neither the FDR nor the FDA discusses the concurrency-control techniques. But the deployment XMLs do contain the concurrency mode we used in the runs, and our manuals also have some info (though not detailed, nor internal stuff).
Pramati Server 3.0 provides an extensive concurrency control framework that includes:
- Pessimistic Concurrency Control (Repeatable Read) - DEFAULT
- Optimistic Concurrency Control (Optimistic Repeatable Read) for CMP Beans
- Committed Read for CMP
All of these are Bean-level isolation levels specifiable during deployment. For BMP Beans, Pessimistic Concurrency Control is enforced by the container (a feature not available in most servers).
Pramati's concurrency control is totally application-independent and does not require
- exclusive DB access
- any particular settings of DB isolation levels
- Application code such as Select For Update
(nor does it require you to avoid using CMPs to effect concurrency control)
Further, Pramati's concurrency-control implementation is completely cluster-safe: the same semantics are available in both cluster and standalone configurations. This is provided via a cluster-safe, high-performance Distributed Lock Manager.
ECperf submissions to date, other than Pramati's, have been made without any concurrency control in place. Had they chosen to run with concurrency control, the options available are very expensive. Most vendors rely on DB isolation levels to effect concurrency control. This can be very costly, as all the rows accessed, even in finders or non-transactional methods, will be locked, impacting concurrency and throughput. Some vendors do have pessimistic locking solutions, but these work only in non-clustered, exclusive-database setups (not in a cluster, nor when the database is not exclusive, as required in ECperf).
In Pramati's ECperf submission, most beans were deployed in Pessimistic Concurrency mode and the read-intensive beans were deployed in Optimistic Concurrency mode. (Even for the read-intensive beans, Committed Read was not used, as ECperf 1.0 doesn't assume read-intensive beans. In any case, our Optimistic Concurrency implementation performs equally well.)
[More information available in the Tech Brief, "Concurrency Control in Pramati Server" at
Ramesh: "Even now, the only way to get concurrency would be to use DB isolation levels (that may have severe performance impact) or use BMPs. Or use concurrency control mechanisms that may not be cluster safe."
There are CORBA OCS implementations in Java (we use one), and there are clustered concurrency control implementations (e.g. our Coherence
You said, "The real problem is lowering the cost of administering 9 boxes versus two and that'll be solved real soon now."
What are you referring to here?
What I mean is the following.
Putting a box in a data center has two costs. One is the initial cost of the box and installation. The other is the cost of backing it up, administering it, renting data center space etc.
While smaller boxes may be cheaper to buy, they may cost the same as a bigger box in terms of the ongoing cost in a data center.
So, whether 10 small boxes or 2 larger, more expensive boxes is viable depends on which is more cost-effective given your ongoing costs. If the ongoing cost is low, then n smaller boxes may make sense; if it's large, then it may be cheaper to go with fewer, larger boxes.
A factor in the above is, obviously, the perceived cost of managing a farm of J2EE boxes in a data center. If I'm using J2EE software that makes this relatively easy, that may lower my ongoing cost; if it's hard, it increases my cost. This affects whether it's viable to use smaller or bigger boxes.
Sorry if I was misunderstood, but that was _exactly_ what I said: CMPOpt is used in the real world, and so are T3, jDriver, ... Nothing special, but if the "other party" doesn't provide something similar, it's their fault, and they cannot complain about it (and neither should their supporters). I'm also aware that CMP 2.0 will automatically provide CMPOpt's benefits (but IBM's CMP 1.1 technology _is_ better!).
I still think a single box (I referred to it as the "BEA cheap" result) is contrary to what J2EE provides; I haven't seen such a configuration in the real world, nor would I recommend it.
I also hope BEA will respond, even better if they do so with a similar configuration.
Altogether I'd say we have the same opinion (with the exception of the single box and T3/GIOP); I just don't get it that every time IBM is on top, the BEA supporters start screaming and finding something unfair.
Fair enough! :-)
So it is the wire protocol between the app server and the DB that you find abnormal? I agree that some folks want the app server in a DMZ, so TCP/IP is best. I saw this concern more in the past than currently, mainly for larger installations. Is this what you mean?
No, it's not the protocol between the app server and the DB, but between the clients and the app server, where BEA uses T3 in all tests while all the other vendors use CORBA IIOP instead. While app server vendors are free to use their own protocols, I prefer IIOP, because:
*) If I need to communicate with other app servers, legacy apps (CORBA), etc., it must be IIOP
*) As bad as firewall support for IIOP is, it's even worse for T3
So while I think it wouldn't make any performance difference to use IIOP with WL instead of T3, I have some doubts, as BEA has struggled to develop a working CORBA ORB. I'd prefer BEA to release its performance numbers using IIOP (at least one or two tests); otherwise I don't know the performance impact of replacing T3 with IIOP.
Hope this clarifies.
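To make the T3-vs-IIOP point concrete: at the client API level, the protocol choice usually comes down to the JNDI provider URL. Host names, ports and the lookup name below are made up; the open question raised above is what the switch costs in performance, which this code says nothing about:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Illustrative sketch: switching a client between WebLogic's native
// T3 and standard RMI-IIOP is, at the API level, just a different
// provider URL (hosts and ports here are invented).
public class ProtocolChoice {
    static Hashtable<String, String> jndiEnv(boolean useIiop) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.PROVIDER_URL,
                useIiop ? "iiop://appserver:9000"   // standard, CORBA-interoperable
                        : "t3://appserver:7001");   // WebLogic-native
        // A real client would also set INITIAL_CONTEXT_FACTORY to the
        // vendor's factory class and then do something like:
        //   new InitialContext(env).lookup("ejb/SomeHome");
        return env;
    }
}
```

Firewall friendliness and interoperability with CORBA clients are properties of the wire protocol itself, though, not of this lookup code.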
Just for clarification, the latest BEA submission (3/25) lists BEA WebLogic as 10K per CPU + 2.1K for support so roughly 12K per CPU, not 21K. Check it out for yourself.
It's using Advantage edition as opposed to Premium Edition which was used previously and the one I quoted. Anyone know the difference besides price?
BEA WLS $10k/CPU
BEA WLS + clustering $17k/CPU
There's also BEA WL express, which is something ridiculously inexpensive, like $350/CPU (don't quote me).
So this makes Mike's point not so relevant. For a 9-node configuration, which is what we were talking about, folks, the price is 21K, not 10K: you need the clustering, so you need Premium edition.
Maybe we should do the same, our no cluster version is less than 8K.
BEA Pricing as of March 2002
WebLogic Standard Non-clustered $10000/CPU
WebLogic Standard Clustered $17000/CPU
WebLogic Express Non-clustered $3000/CPU
WebLogic Express Clustered $5000/CPU
Anyone heard any news on BEA's free dev and test licenses? The CEO announced it at BEA eWorld but their salespeople are acting as if they've never heard of it. That would reduce the price of using WebLogic significantly.
Well, everyone else is basically giving their servers away for dev use. We give AEs away free for development, and I guess Oracle and HP do the same.
It won't reduce the cost of production licenses.
Well, I wasn't so much wanting a response from an IBMer as some sort of clue from BEA. They said they were free, and as of now, they are not. IBM appears to be winning if AEs are free for dev. <shrug> Their silence makes me wonder. We're already BEA customers, but unmet promises... well.
Two months after it was announced, no reality. Makes one think.
Billy, if I can add some impartiality to your analysis, I think we can get to an unbiased point of agreement.
1. The various tests show that WebLogic slightly outperforms WAS on middle-tier work done per CPU. This would also increase slightly if the deployment descriptors were tuned in a manner similar to the IBM deployment descriptors, so that fewer DB calls were made (and thus blocked on). If you look at BBops per CPU-MHz, WebLogic outperforms IBM.
2. Because of the tuning tool, WS is significantly reducing the DB load. I would love to fantasize that DB2 is somehow even on par with Oracle, which itself has a 20-year-old architecture that is not too impressive. Alas, we know this is not the case: the fact that WS is using half the CPUs that WLS is means that the tuning tool is a huge advantage. This should not be downplayed (or misrepresented!).
3. Having made both of those points, I agree with the gist of the rest of your message. IBM and BEA are very close (BEA is leading in the engine, IBM in the deployment), and this is exciting for customers, and for J2EE in general. Competition is the key strategic advantage Vs. .NET!
I am not impressed by the results, but I am impressed by the different combinations of H/W & S/W in the J2EE stack.
By now the "managers" might have a rough price tag for a J2EE-based application stack!!!
I think it is high time for someone to come up with some metrics and segment the market. I don't want to see the J2EE stack targeting everything from an online pizza shop to eBay. I am sure the requirements, scalability, etc. for these applications are extremely different.
The market could be segmented as follows:

Segment 1:
BBops/min - 5,000-10,000
Availability - 98.9-99.0 (some percentage)
Price - 150K-350K (JBoss/Linux/PostgreSQL up to JBoss/Linux/Oracle/DB2/SQLServer)
(I arrived at this figure by subtracting the app server and database prices from the IBM results.)

Segment 2:
BBops/min - 20,000-50,000
Availability - 99.0-99.9 (some percentage)
Price - 500K and up!!! (Solaris, WebLogic/WebSphere/iPlanet, Oracle/DB2)

Segment 3:
BBops/min - 50,000 and above
Availability - 99.9-99.999 (some percentage)

I wish to see such a segmentation of the J2EE stack, with all these ECperf results posted against the individual segments.
Murali Varadarajan wrote:
BBops/min - 5000-10,000
Availability - 98.9 - 99.0 ( some percentage )
150K - 350K ( JBoss,Linux,PostgreSQL - JBoss,Linux,Oracle/DB2/SQLServer )
( I arrived at this figure by reducing the appserver & database price from the IBM results )
Surely you've got one too many 0's on the end of your price tag there? There's an ECperf result from BEA for two Dell PowerEdge 4600s (one database server, one app server; Dell's site says they start at around $3K apiece) that comes in at about 7500 BBops/min, and they quote $7 per BBop. That's with the cost of Windows and Oracle. At $3000 x 2 + $0 (cost of Linux) + $0 (cost of JBoss) + $0 (cost of Oracle/DB2 licenses already owned, or PostgreSQL), I don't see the small end costing anywhere near hundreds of thousands of dollars.
For any small business, the cost of the system hardware and software (assuming mostly open-source - i.e. JBoss+Linux) is going to be dwarfed by the cost of the development and maintenance of the application software. For example, compare $15K to $150K.
Interesting to note that the BBops/minute are about maxed out without going to a clustered configuration (god help us) or an 8-CPU DB. Any bets on when we will see 40,000 BBops/minute and, more interestingly, on what hardware for the DB? IBM has already tuned the DB calls down significantly with the cool tool.
BEA is winning on app server engine strength, and IBM is winning for reducing a large problem, the redundant calls to the database. If BEA could use the IBM tool they could probably support 37,000 BBops/min on the same hardware, because WL > WS and Oracle > DB2... but they can't, and that is the way it stands. Competition is a great thing!!
I don't see why you say WL>WS and Oracle>DB2.
I've experienced both; from a pure performance perspective, it's platform dependent:
Solaris: WL=WS and Oracle>DB2
Win2K: WS>WL and Oracle>DB2
AIX: WS>WL and DB2>Oracle
Linux (RedHat): WS>WL and DB2=Oracle (<Postgres<MySQL)
HP-UX: WL>WS and Oracle>DB2 (Orion>WL...)
Others...I don't know
Most app server performance is related to the JVM.
I did not run JRockit.
WL>WS is an oversimplification. It should read "WL engine" > "WS engine". I have not run either on AIX, but I can imagine that the IBM JVM on AIX might have some advantages. On Sun and Win2K the WLS engine does more work per unit of CPU power than the WS engine. It turns out that a lot of that work (as far as ECperf results go) is extra DB synchronization due to the lack of the CMPOpt tool. (For more information, please read the 2/25 and 3/11 posts, keeping in mind that WL is "stupidly" making the DB do 2x the work that WS is.)
I am impressed with the use of Linux, which will be huge and important in 2003. Shhh, just don't tell the IBMers that I said that, I don't want them getting lazy. It is safe to assume BEA will be preparing something on Linux before the year is out.
About Oracle and DB2: realize it pains me to hold the opinion that Oracle is a technically better DB, but I must. Again, I have not run it on AIX, but certain OLAP applications are just murdered by DB2. (It is quite possible that IBM built CMPOpt more out of necessity to baby DB2 than for WS to beat WL.) I think Oracle has lacked innovation for many years, so it is not because the bar is too high. This is just a sad story all around.
This will deviate from the main topic of app server performance, but I think it's still relevant, because the reason we are talking about app server performance is that we are interested in which platform provides the best value. One reason IBM has been so successful with the marketing of WebSphere is not necessarily tied to the app server itself, but rather the range of products IBM has to offer (for example, Lotus web collaboration tools, which can operate in an environment without a Domino infrastructure, Tivoli management software, etc.). There really aren't many other companies that can provide both the types of products that IBM provides and the tight level of integration between WebSphere and those products.
This, I believe, is a key reason customers adopt the WebSphere platform: it is much easier, and in the long run likely more affordable, to buy these types of tools off the shelf, with smoother integration, than to develop them internally.
I understand what you are saying. I like the fact you are bringing this up. It brings up two threads that I am passionate about:
1. If I can re-arrange what you said a bit, "One reason IBM can show such high AppServer revenue is because they can discount other items in the solution sale to enhance the WebSphere numbers." I know this is part of the game, Oracle shamelessly indulges. I just don't want our prized IBM technical folks smoking the financial/marketing dope that is being pushed to the public. I am sick of bond multiples for my stock! Build a better product and they will come.
2. If you didn't mention names one may ask "Is he talking about .NET/Microsoft?" There are two camps, J2EE, based on vendor openness (as much as possible) and the power of vendor competition in many areas to supply ongoing innovation and .NET, where Big Brother will supply us with all we need. The best news for IBM is that other key players like BEA, Sun, and even Oracle are behind J2EE. If it comes down to MS solution stack Vs. IBM solution stack, we all lose, including IBM.
IBM can provide a one-stop shop with a well-integrated solution, but each product has to be competitive enough to be chosen in its own right.
Just to clarify what CMPOpt does and doesn't do: CMPOpt does analysis prior to deployment and saves that information to optimize the runtime. The goal is to avoid unnecessary stores at runtime, reducing CPU utilization and object creation. Other app server vendors do the same store avoidance, but they do it at runtime, which is more expensive. The net is that the number of SQL statements executed is about the same.
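The store avoidance being discussed can be illustrated with a toy sketch. This is not IBM's actual CMPOpt output or any vendor's container code; `OrderBean`, its fields, and the `updatesIssued` counter are made up for the example. It shows the general idea: in `ejbStore()`, compare the bean's state against the snapshot taken at `ejbLoad()` and skip the SQL UPDATE when nothing changed.

```java
// Illustrative sketch of CMP store avoidance (names are invented,
// not a real container or CMPOpt artifact).
public class OrderBean {
    private int quantity;
    private int loadedQuantity;   // snapshot taken at ejbLoad()
    public int updatesIssued = 0; // stand-in for UPDATE statements executed

    public void ejbLoad(int quantityFromDb) {
        this.quantity = quantityFromDb;
        this.loadedQuantity = quantityFromDb; // remember loaded state
    }

    public void setQuantity(int q) { this.quantity = q; }

    public void ejbStore() {
        // A naive container always issues the UPDATE; an optimized one
        // compares against the loaded snapshot and skips clean beans.
        if (quantity != loadedQuantity) {
            updatesIssued++;              // would execute the UPDATE here
            loadedQuantity = quantity;
        }
    }

    public static void main(String[] args) {
        OrderBean b = new OrderBean();
        b.ejbLoad(5);
        b.ejbStore();                     // clean: no UPDATE issued
        b.setQuantity(7);
        b.ejbStore();                     // dirty: one UPDATE issued
        System.out.println("updates issued: " + b.updatesIssued);
    }
}
```

The difference Matt describes is *when* this dirty analysis is paid for: doing it per-transaction at runtime costs CPU and object creation, whereas precomputing it at deployment moves that cost off the hot path, while the SQL actually executed ends up about the same.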
I think we all owe a bit of a debt to Sun and the JSRs (4 and 131) for putting this benchmark together, laboring through the submissions, and fighting the battles. The community as a whole is better off for it. It's certainly given us all something to talk about and put performance into a meaningful context for discussion and comparison.
Regarding many of the other comments about CMPOpt, T3, EJB 2.0, etc.: isn't capitalism great? Where else do we get these types of improvements in app server products so quickly? ;-)
I'm having some trouble understanding the hype about IBM's postings. For the 2nd time running this is what is stated in clause 7.5.5:
7.5.5 If the Driver system(s) perform any load-balancing functions as defined in 4.12.5, the details of these functions must be disclosed.
"The WebSphere Application Server ORB on THE DRIVER SYSTEM balanced requests among Nodes 1 through 9. The methodology was round-robin."
What kind of real-life application would use a client-side load-balancing scheme?
Can someone comment on this?
Most application servers use client-side load balancing by encoding the logic inside the generated EJB client proxies. Even BEA does this.
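The round-robin selection such a generated proxy performs can be sketched in a few lines. This is a simplified illustration under stated assumptions: `RoundRobinProxy` and `pickEndpoint` are invented names, not WebSphere or WebLogic APIs, and a real proxy would also handle failover and stub invocation.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of client-side round-robin load balancing, the kind of
// logic an app server bakes into its generated EJB client proxies.
public class RoundRobinProxy {
    private final List<String> endpoints;            // cluster node addresses
    private final AtomicInteger next = new AtomicInteger(0);

    public RoundRobinProxy(List<String> endpoints) {
        this.endpoints = endpoints;
    }

    // Pick the next node in rotation; floorMod keeps the
    // index non-negative even after the counter overflows.
    public String pickEndpoint() {
        int i = Math.floorMod(next.getAndIncrement(), endpoints.size());
        return endpoints.get(i);
    }

    public static void main(String[] args) {
        RoundRobinProxy p =
            new RoundRobinProxy(List.of("node1", "node2", "node3"));
        for (int k = 0; k < 6; k++) {
            System.out.println(p.pickEndpoint()); // cycles node1..node3 twice
        }
    }
}
```

Because the selection happens inside the client-side stub, requests spread across the cluster without any change to the calling application, which matches the driver-side balancing disclosed in the IBM submission.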
Sometimes I really can't understand why people are contemptuous of good work, even if it is by IBM or Microsoft. I have been in the U.S. many times, and I got the feeling that many people just bash everything IBM or any other big company does...
In my opinion (I am not an IBM employee), IBM did a very good job. It's great that they show that Linux can achieve good performance and that IBM supports this platform with products like WebSphere, DB2, and their own incredible JVM. Suppose TheServerSide had run the same test with the same hardware and Linux but using JBoss and MySQL...
I think this thread would be full of praise.
Of course IBM uses WebSphere and DB2... but the guys did a great job anyway, and the results are useful for every architect out there... and without a doubt the BEA/Pramati results have the same value!
So I am waiting for new results and it doesn't matter who will post them...
One thing would be interesting: what was the bottleneck under full workload... the database?
At full load the DB is the bottleneck. There are some interesting subtopics regarding this:
1. First, IBM's tool reduces calls to the DB. The current version of ECperf is a great showcase for this, as the Java code is written to utilize the DB poorly (I think by design). That is why you see 2x CPUs for Oracle with WL. The tool itself demonstrates thinking by IBM in an area that needs more thinking: DB-app server data cohesion. However, CMP 2.0 and well-written applications lessen the pronounced effect we see here.
2. It does seem that the DBs should be able to handle more load. 30,000 BBops/minute is 500/second. Since row collisions are rare (read the spec for details), this does not represent a "massive load". The fact that these DBs were architected 20 years ago probably has something to do with it.
3. I mentioned data stomping not to beat up app server vendors, but to note that sacrifices in data integrity are made to alleviate performance problems with the DB. Read Committed mode is lenient, yet the DBs are still the bottleneck when outnumbered 4-1 in CPU power vs. Java. My hypothesis is that this is the result of 15 years of stifled competition and the associated lack of innovation.