Oracle has posted its response to BEA's recent claim of being 54% faster than Oracle. In a rebuttal entitled "BEA Takes the Oracle Java Performance Challenge...and Comes Up Short", Oracle suggests that BEA cheated by not running the same benchmark Oracle ran, and is generally spreading FUD.
Please visit http://www.oracle.com/features/insider/index.html?oi_bonvanie_07.html for Oracle's claim.
You really have to wonder...
Why did BEA need to modify the Java Performance Challenge benchmark before running it? Why won't BEA say what those modifications were? Why did BEA use a version of WebLogic Server that no customer can buy to run even their modified benchmark? Why didn't BEA report the same metrics used by Oracle so customers could compare head-to-head? After all, isn't that what a performance benchmark is all about? Does BEA really want anyone to compare the performance of WebLogic to Oracle9iAS? If BEA really were the Java performance leader, is this how they would go about demonstrating that fact to the world?
Well, BEA will say Oracle is spreading FUD, and Oracle will say it's the other way round. As a developer/evaluator, who do you believe?
My two cents: I would say have an independent, unbiased third party actually test the performance metrics, on the same platforms and the same hardware configs. Then let's see who beats whom!
Amen!! Both of these companies have self-serving agendas, and it's only logical that they both spread FUD. An independent tester needs to obtain out-of-the-box products from each and run them both on the same hardware... then and only then will the truth be out...
What are these benchmarks actually testing? How realistic are they compared to real world applications? And is performance the only consideration when determining which application server to buy? I think not.
What would be good is not an independent benchmark but a group test of the leading application servers, with a clear scoring system in a number of areas. Plus, using something other than the Java Pet Store example would be good. I mean, how many sites out there actually do it that way?
Personally, I think that with each app server you would find particular areas of strength and weakness. None of them is particularly mature enough to excel in every aspect.
It's also funny that in the database and application server markets the vendors are so scared of independent benchmarks and reviews.
Well, consider this: WebLogic has been around for about 5 years now and is the only app server out there supporting EJB 2.0. It supports EJB clustering (BEA has been doing this for the past 2 years; Oracle still does not), web services, Servlet 2.3, the J2EE Connector Architecture, two-phase commit across multiple DBs, and the list goes on.
If you really get down into the details of it, WebLogic is probably by far the most mature app server out there.
I have to say, I've never seen a more obnoxious, arrogant site than Oracle's. I mean, Christ, H1 tags and corporate pissing contests are a really dangerous combination. And it really seems like they have some sort of complex about IBM...
I do not think anyone is trying to make Oracle out to be innocent in the way they handle themselves... the point is that this is not an excuse for BEA to act the way they have.
BEA - part of the solution or part of the problem?
I think our precious Java community must look out here...
There is only one real winner in this kind of war, and that is MS. Instead of throwing dirt at each other, the vendors should concentrate on showing how good their app servers are.
I agree with the previous messages in this thread that there should be some kind of independent tests available.
Hey Oracle, BEA, IBM, and all you other main players;
come together and sponsor an independent test!
In performing this test, one could kill two birds with one stone and show the supremacy of Java over MS technology.
Unfortunately, as long as you have super-high-profile personalities like Ellison, you will have these divisive debates. When Larry Ellison and a company like Oracle "throw down the gauntlet", are all the other app server vendors supposed to shrink into the shadows?
I don't think so... This sort of conflict / competition might ultimately even be helpful by "holding up a mirror" to their faces. From this, maybe better standards and levels of cooperation can emerge with regard to the "true fight" at hand.
Yeah but do not publish such poor results on the web sites, OK?
Everyone, please ignore this hype. Your decision on which app server to select must be based on your own evaluation. I picked WebLogic after going through an evaluation of the vendors that met my requirements at the time. My criteria had nothing to do with whether or not one server was 54% faster than another.
Here's the list of requirements that I used for helping to make a selection.
1. Must support EJB 1.1 (now my criterion is EJB 2.0)
2. No problems with installation (support should not be required for installation)
3. Samples run without problems
4. Conforms to J2EE standards (does not use proprietary APIs)
5. Confirmed support for Oracle 8.1.5 (now 8.1.6)
6. Confirmed support for Windows NT (now Win2K)
7. JSP engine can integrate with IIS using a proxy plug-in
8. Supports debugging with extensive logging, both at start-up and at run time
9. Active newsgroup/search engine support (I really don't have time to call support when the problem might have already been solved)
10. Documentation must be complete
There are probably some others that I forgot. Once I narrowed down the field, we wrote some test apps to get a feel for performance and the ease or difficulty of completing them.
The final faceoff would be a simulation of the load (transactions, requests, etc.) you're expecting. Much of the performance, or lack thereof, has a lot to do with your architecture.
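A load simulation like that doesn't need fancy tooling to get a first feel. Here's a minimal sketch in plain Java; the doWork() body is a placeholder (a fixed 5 ms sleep), not a real app server call, and in an actual faceoff you would replace it with an HTTP request or EJB call against the server under evaluation:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal load-simulation harness: fire many concurrent "transactions"
// and collect per-request latencies.
public class LoadSim {

    // Placeholder transaction: sleeps briefly to mimic a request round-trip.
    // Swap in a real request (HttpURLConnection, EJB call, ...) to test a server.
    static long doWork() throws Exception {
        long start = System.nanoTime();
        Thread.sleep(5); // stand-in for the real work
        return (System.nanoTime() - start) / 1_000_000; // latency in ms
    }

    // Run (users * requestsPerUser) transactions from a pool of `users`
    // threads and return the sorted list of latencies.
    static List<Long> runLoad(int users, int requestsPerUser) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> futures = new ArrayList<>();
        for (int i = 0; i < users * requestsPerUser; i++) {
            futures.add(pool.submit(LoadSim::doWork));
        }
        List<Long> latencies = new ArrayList<>();
        for (Future<Long> f : futures) {
            latencies.add(f.get());
        }
        pool.shutdown();
        Collections.sort(latencies);
        return latencies;
    }

    public static void main(String[] args) throws Exception {
        List<Long> latencies = runLoad(20, 10);
        System.out.println("requests:  " + latencies.size());
        System.out.println("median ms: " + latencies.get(latencies.size() / 2));
        System.out.println("p95 ms:    " + latencies.get((int) (latencies.size() * 0.95)));
    }
}
```

Even a toy harness like this forces you to decide what transaction mix and concurrency level actually match your architecture, which is exactly where the vendor benchmarks are silent.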
Bob - I completely agree. If a company chooses an app server simply based on marketing BS from these companies, or based on who has the highest market share, then the project is already screwed.
Different companies/applications will have different requirements that will be handled better by different vendors. There is no substitute for a meaningful evaluation.
All this bickering about performance tells me that many people are not actually deploying operational code on either application server. The truth of the matter is, performance is not a practical consideration. BY FAR your biggest hurdles are integration, fault tolerance, bugs, and support.
When you lay your application on top of an application server on top of an operating system on top of a hardware platform, you will spend so much time tuning all of those layers. Further, you'll soon discover that performance will be more easily achieved through scalability not through the individual performance of your application server.
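To put rough numbers on "performance through scalability", here is a back-of-the-envelope sketch. All figures are illustrative assumptions (a node peaking at 500 req/s, 20% gain from tuning, 85% marginal efficiency per added node), not measurements of any product:

```java
// Back-of-the-envelope capacity sketch: compare squeezing more out of one
// node via tuning vs. adding nodes that each lose some efficiency to
// clustering overhead (session replication, load-balancer hops, shared-DB
// contention). All numbers are illustrative assumptions, not benchmarks.
public class CapacitySketch {

    // Throughput of n identical nodes, where each node after the first
    // delivers only `efficiency` of a lone node's throughput.
    static double clusterThroughput(double perNode, int nodes, double efficiency) {
        return perNode * (1 + (nodes - 1) * efficiency);
    }

    public static void main(String[] args) {
        double perNode = 500.0; // assumed requests/sec for one tuned node

        // Heroic single-node tuning: +20% on one box.
        double tuned = perNode * 1.2;

        // Scaling out: 4 nodes at 85% marginal efficiency.
        double scaledOut = clusterThroughput(perNode, 4, 0.85);

        System.out.printf("one tuned node : %.0f req/s%n", tuned);
        System.out.printf("4-node cluster : %.0f req/s%n", scaledOut);
    }
}
```

Under these (made-up) numbers, scaling out buys roughly three times what heroic single-node tuning does, which is the point being made above: the app server's raw single-box speed is rarely the deciding factor.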
So, everyone take a deep breath, put away the benchmark reports and look through your design and implementation and test procedures! Only then will you achieve nirvana.
First of all, calling performance unimportant tells me you haven't used scalable systems; good web applications do nothing but scale, and that's the reason this "bickering" is going on!! But like a good architect, I will take all these benchmarks with a pound of salt (and end up puking!!!)
Achieving performance through scalability sounds like a bit of an oxymoron!! Performance and scalability are inverses of each other! You could have said scalability through performance, which would be slightly closer.
Your worries about integration and fault tolerance are all unfounded or confounded, since they are just implementation issues. A good architecture goes a long way in building robust systems! Everything else is tedious or cumbersome, yet trivial!
Nevertheless, I take your precept of nirvana!! Amen.
Actually, scalability is a "made up" term. The theoretical characteristics of a distributed system are RAS: Reliability, Availability, and Serviceability. Scalability is a concoction that applies if you have a reliable and available system.
I do a thorough analysis of these characteristics and what it means to J2EE servers in the Large Scale System Design Chapter of the Mastering EJB book. You can download the chapter now.
I don't understand how reliable and serviceable could deliver scalability. Surely you could have a "reliable and serviceable" system that supports only 10 users. But when you need 300 users, it's not scalable.
Are you suggesting scalability is a part of serviceability?
Tyler was talking about properties of distributed systems.
Therefore, if you get a distributed system that is reliable and serviceable, it will be scalable - and yes, the scalability will be a direct function of serviceability.
Of course, your definition of distributed system can vary :).
Reliability is a measure of how long it takes to do a request. If a request takes 10 ms with one user and 10ms with 1 million users, then it is a highly reliable system. Every added inter-process communication in an architecture reduces reliability.
Availability is a measure of whether the service can be executed. It makes no case as to how long it takes for the service to execute.
If you have a perfectly available system with any number of users and it maintains reliability, then therefore, it must be scalable.
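One quick way to sanity-check this notion of reliability is to compare the latency of the same request at different load levels and see how much it drifts. A sketch, with hypothetical latency numbers (not measurements of any product):

```java
import java.util.Collections;
import java.util.Map;
import java.util.TreeMap;

// Sketch of "reliability as latency stability": by the definition above,
// an ideal request should take roughly the same time at any load level.
// We compute how much latency grows from the lightest to the heaviest
// load. The sample figures in main() are hypothetical.
public class ReliabilityCheck {

    // Ratio of latency at the heaviest load to latency at the lightest
    // load; 1.0 means perfectly stable, i.e. maximally "reliable" here.
    static double degradation(Map<Integer, Double> latencyMsByUsers) {
        double atMinLoad = latencyMsByUsers.get(Collections.min(latencyMsByUsers.keySet()));
        double atMaxLoad = latencyMsByUsers.get(Collections.max(latencyMsByUsers.keySet()));
        return atMaxLoad / atMinLoad;
    }

    public static void main(String[] args) {
        Map<Integer, Double> measured = new TreeMap<>();
        measured.put(1, 100.0);     // 1 user: 100 ms per request
        measured.put(100, 105.0);   // 100 users: 105 ms
        measured.put(10000, 400.0); // 10,000 users: 400 ms
        System.out.printf("latency grew %.1fx from lightest to heaviest load%n",
                degradation(measured));
    }
}
```

A ratio near 1.0 across a wide load band is what this definition of reliability demands; whether that is the right definition is exactly what the rest of this thread argues about.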
To beat the topic to death: :-)
To me, reliability has more to do with how successfully a system can repeatedly execute a function without failure over a period of time (months, years, etc), than with how long it takes to execute that function. In my mind, that's really a measure of performance.
IMHO, scalability has less to do with reliability, but more to do with how easily a system can be made to support increased user or processing load (like by adding hardware and NOT modifying the logical architecture or design), while maintaining performance characteristics.
Mike--- Now we are having a good discussion.
I'm going to have to counter you... Reliability, according to the theory books, is a measure of how long it takes to do the ideal request. Performance is how long that ideal request takes when compared across different machines.
For example, if an ideal request under ideal circumstances takes 100ms on one machine, a more performant machine is one where the same type of request would take 50ms. Reliability is just an indication that whatever time a request takes, it should always take the same time to execute that request at various load levels.
Every IPC communication in a system reduces reliability, because you run an added risk of network clogging or slowdown. The first time the network has a disruption and your 100ms request takes 101ms, you have lost a measure of reliability. This is why systems that put everything into a single large cluster instead of multiple tiered clusters are more reliable, but less available.
Also, if you look at your definition of reliability, it's exactly the same as mine, except that I put a quantitative measure on it.
And once again, if you have reliability at all levels, plus availability, you definitely have scalability according to these definitions.
I do a pretty good analysis of this in the Large Scale System Design chapter -- I go into more detail there.
This is good stuff!
The strict definition of reliability really has *nothing* to do with 'how long' things take, only that the _results_ are roughly the same over time, or on successive executions. The 'how long' comes in when you apply stated conditions to the test, i.e., "the function should execute in 50 ms". So your definition of reliability assumes that we are measuring response time. I see.
I'll have to take some time to read the chapter....
Your measure of reliability, though defined correctly, is interpreted wrongly. Reliability simply means the system has to have a predictable measure. If a machine takes 100 ms for 1000 users and 10000 ms for 10000 users, it doesn't become less reliable; as long as it follows one of the well-known behavior curves, reliability is more or less ensured. In the above case, if it takes 5000 ms for 11000 users, the system becomes unreliable!!
Your saying that "whatever time that request takes, it should always take the same time to execute that request at various load levels" is really ridiculous! Perhaps you are referring to some IDEAL machine, unless your band of "various load levels" is pretty narrow and localized!
And your idea that reliability ensures scalability is not so acceptable, since they are mutually exclusive measures whose graphs would intersect just once, and that's when the reliability measure would start degrading. The classic benchmark is the Unix operating system, which is not "reliable" but definitely scalable!!
I have no doubt that the benchmarks developed by Oracle are tilted in their best interest, i.e., the J2EE server running within the DBMS (Oracle can bypass massive layers of JDBC indirection in certain situations), with transactions structured in such a way that any DBMS optimizations would kick in.
The only legitimate comparison would involve a benchmarking test developed by an independent party.
But ... BEA ran the very same benchmark that Oracle did and showed that Oracle's tall claims were just that!
What if we as developers were to create a web site to show performance among different app servers, and create a group to test and run benchmarks for the whole world to see? If people were into it, we could create a few examples, maybe simple, moderate, and complex, and test and publish results for all to see, using all the major app servers as well as open source ones, etc. And we could have all the code, tuning, etc. open to anyone who might be interested.
I would be interested in helping set up such a site and taking part in tests.
I have to wade through lots of "dubious" sounding emails touting various servers; when all I really want is to easily make a good, reasonably fast and secure application.
You can email me at unixa at mixmail dot com
How can I tell? Both are giants in the e-solutions market.
Gartner Group independently evaluates app servers and other technologies like integration.
Take a look at this IBM link. It contains some of the Gartner Group results. Look for the "magic quadrant".
They give IBM the edge in "completeness of vision" and BEA the edge in "ability to execute". Oracle didn't even make it into the magic quadrant.
Currently, Gartner Group rates BEA as 2 years ahead of any of its competition in the integration market with its offering of Platform 8.1.
Why? Surely if the Oracle solution performs better, what do I as the customer care? If I'm an Oracle shop, then why not have all those "nasty" benefits of bypassing massive layers of JDBC indirection? Similarly, why can't I as a BEA shop do the same sorts of optimizations with their product? Isn't that the whole point? If all vendors' tools were tweaked in exactly the same way, where would the competition be? We may as well stick with the RI from Sun.
P.S. I work for neither Oracle nor BEA, nor do I have any desire to use either of the two products. I use JBoss which, having never seen any performance claims for it, is still IMHO the easiest and most comprehensive solution for the cost of a couple of megabytes downloaded over the 'net :-)
So you chose Orionserver (Oracle OC4J) and got the FASTEST and most standards-compliant engine out there, right ... or did you go for getting source, so you could adapt and be prepared for the 1.4 generation JVMs and the deprecated and additional classes and therefore you did your job and went for Resin or EJBoss...right?
I HOPE you didn't spend >USD$500K for a full implementation of BEA or IBM (or USD$50K for the deprecated version) did you?
Oh, you did...I'm sorry. HEHEHEHEHEHEHEH
Michael J. Cannon
mcannon at ubiquicomm dot com
P.S.: and why would you want to interoperate with IIS, unless you are a masochist and LOVE virus and worm attacks and buggy, insecure code? Wait...there's more...XML-RPC....OOPS! I mean, of course, SOAP...check THIS out:
...and that's only the FIRST RPC bug found in products that have been out for HOW LONG NOW???... and you guys thought today's Code Red worm was a nightmare... wait for the Java .STORM worm (currently on the SANS incidents.org site) and what's about to happen to P2P, SOAP, Web Services and RPC. You ain't seen nothin' yet!
Excuse me, but isn't it true that Orion doesn't have clustering? Doesn't that mean the new Orion (9iAS) doesn't have it either?
> Excuse me, but isn't it true that Orion doesn't have clustering? Doesn't that mean the new Orion (9iAS) doesn't have it either?
Orion does have some clustering capabilities, which means that 9iAS has some too. However, 9iAS may have other clustering capabilities on top of what the J2EE container provides, but I don't know what, if any, those might be.
Actually, Orion does provide clustering for Web applications. It is very simple to set up and use. See their site for documentation describing how to use it.
I'd therefore imagine it's a fairly safe bet that OC4J has the same capabilities.
One can't make a decision about buying an application server just by reading a benchmark published by the application server vendor.
BEA and WebSphere have more releases running in production environments than Oracle 9iAS. If you want to cluster your application, BEA has more cluster-based installations than Oracle or IBM. Clustering an application server is not an easy task. I think it takes some time for any company to deliver robust clustering.
Read BEA's answer
This is great news for me, that "BEA Takes the Oracle Java Performance Challenge and Comes Up Short"!
1) I have done JCERT certification with specialisation in Oracle JServer 8i, and whenever I see foolish statements from BEA's side against Oracle, I feel very bad.
2) I am a 20-year-old Computer Science student (MCA) and have also just started working as a software trainee in Chandigarh. For me it's a great time to work on both technologies and find out the pros and cons of each.
But I am really happy, because it's a tit-for-tat thing.
All the best, and love to Larry Ellison.
Baki Sab Bakwas! (The rest is all nonsense!)
As my Indian friends would say, "abhey, ullu ke patte"! :-)
Wow, Atul bhai, get a grip on yourself; both companies have self-serving agendas. These so-called statements put out by both of them are nothing more than marketing hype. As a software engineer you have to be as unbiased as possible, because tomorrow you might be evaluating one of them for your company and actually find that the product you personally like isn't so great, either for your company's requirements or performance-wise, while on the other hand the product you literally hate is better tuned to your company's requirements. What would you do, my friend, when your boss walks up to you and says, "Finally, what's your decision?"
Do you go with the product that you like even though it has shortcomings, or, as common sense dictates, go with the product that is tuned to your requirements? If your answer is "go with what I like", then I will have to entirely agree with Nosille: "abhey, ullu ke patte".
Wow, this is the right situation. Only now will software makers turn back to their products and work on them.
This test raises some real requirements.
As pointed out in the discussion, it seems Java Pet Store might not be the right example for testing an application server.
So it is the right time to bring out a test suite (similar to Sun's J2EE compatibility suite) for testing application servers.
The test suite should address all the real requirements of the industry. Once a product passes the test, it will be easy to quantify the product's real worthiness in its intended market. Should we work on preparing a test suite for this?
ECperf is the answer ...
Posted by Murali Varadarajan 2001-07-15 03:20:14.0.
Hey, I'm totally confused!
Me totally confused too!
what say you? :-)
What I seem to hear most is that BEA does clustering really well and that there are more of its servers out there than any other vendor's. Neither of those particular items makes the product better. It's how people use a product that makes it good or bad.

I am no true fan of Oracle, and the back-biting back and forth between Oracle, BEA, and IBM, while entirely predictable, is quite pathetic. But what can one expect from marketing and sales, anyway?

None of my clients read the self-promotional 'performance comparisons' when making a decision. They base it on cost-benefit (the greatest cost doesn't give the greatest benefit), needs analysis (do we need all of the toys you get with BEA or Oracle?), programmer availability (do we need a handful of expert J2EE developers who can work reasonably easily with most app servers [having worked extensively with all of the major players and many minor ones, once you know one, it is not too difficult to move to another], or do we need 1000 BEA programmers just because there are 1000 BEA programmers to be had?), and other such considerations.

By the time we have any marketing and sales staff in, we have nailed 90% of the decision, and we then begin looking at the different pricing options once the techno-business decisions have been made. Listening to a bunch of people crowing about how their app server did on the Java Pet Store demo, while amusing, is not terribly informative to any but the most naive of corporate decision-makers.
Keep in mind that IronFlare (makers of Orion, the app server that Oracle is licensing) posted results from benchmarks against BEA about a year ago. Back then, BEA's lawyers jumped in to suppress the results. Who knows if they were fair, but I've always felt that the IronFlare boys were a couple of honest backyard hackers... their benchmarks against IIS/ASP were certainly valid in my experience, anyway.
In short, if you buy WebLogic you are paying 99% for lawyers/marketers/salesmen and 1% for the product. Maybe that's why Orion would cost you 1/100th the price of WebLogic for a standard dev/stage/prod setup?
Well, Orion re-sold by Oracle is not that cheap any more. In fact, it suddenly becomes very expensive; check the pricing on Oracle's site. I guess Oracle employs many more lawyers, marketers, and salespeople to push its new AS than even BEA and IBM do... or is that support?
And in any case, I think the argument between BEA and Oracle just doesn't make any real sense. BEA's market share is 35%, and its AS has been on the market for 5-6 years. Oracle's share is 3%, and it's trying to sell its third AS in the last 2 years; the previous two have failed.
Strangely enough, nobody is even mentioning IBM here...
"Strangely enough, nobody is even mentioning IBM here... "
That's because we are talking about non-proprietary and almost-open solutions, and there shouldn't be any excess baggage with any single product that's purchased; otherwise there is a question of interoperability!
I recommend BEA only from a simplicity and ease-of-use standpoint. Nevertheless, I didn't find the performance noticeably bad.
Oracle's earlier 4.0.8 AS, which I am currently using, is nothing less than nightmarish!!
In any case, it wasn't too hard for Oracle to do better than its own 4.0.8 AS. What a joke!
I think BEA is more developer-friendly than 9iAS, except on price; for the functionality provided and the ease of administration, WebLogic is the best.
As for 9iAS, I just hope Oracle won't turn the product into a mess in order to make it more compatible with its other products :)
Here's an article which surfaced sometime in June.
Don't think I'd want to risk getting myself involved in any lawsuits from using their products....
Problems with Oracle products have not been confined to CRM software. The Tri Valley Growers agricultural cooperative sued Oracle for breach of contract and expects to go to trial in November over allegations that its ERP software didn't work, the company misrepresented its functionality, and it failed to provide support.
Tri Valley Growers purchased half a dozen software products from Oracle in September 1996. Oracle received these modules from partner companies and was planning to integrate them on the co-op's system, said Peter Sipkins, an attorney with Dorsey & Whitney, which is representing Tri Valley Growers.
But the organization terminated the contract and halted further payments after spending $20 million on software, support and new hardware when Oracle said it would take much longer to integrate the software on Tri Valley's existing computers.
Tri Valley is seeking $20 million and damages in its lawsuit, Sipkins said, and Oracle has countersued for lack of payment and is seeking $400,000. He noted that Oracle in 1998 announced it would no longer rely on partners to supply its ERP products, which are now bundled with its CRM software in the 11i package.
The following from the BEA site states clearly how the test was run and what type of change was made. It seems that Oracle is afraid of the debate being about anything but the performance of repeated row inserts; otherwise it's cheating. Wake up, Larry!
How We Ran the Tests
We downloaded the Pet Store Demo application from Oracle's Web site and carefully recreated the environment using the description in Oracle's performance white paper located at http://www.oracle.com/features/9i/index.html?bea.html. For example, the individual transactions and their relative weights were recreated using information from this document, with a minor change to make the transaction scenarios more representative of a real-world application (i.e., using multiple pets rather than reusing the same pet for every transaction). We then conducted the test on a three-tier configuration that separated the clients, application server, and database server. The servers were 4-CPU 500MHz Pentium III-based Windows NT servers with 1 GB of memory.
The whole thing is a joke. All of it. BEA is afraid of any independent tests. Maybe it (BEA) IS faster at running the Pet Store (on their beta product?). But only the most naive will believe test results obtained by the company itself. Until BEA, Oracle, IBM et al. submit to independent tests, these exercises are meaningless. And with independent tests, the companies won't be reporting outright lies when discussing the results, for instance that OC4J (Orion) doesn't support JNDI (one of a number of lies). The information will be factual and much more realistic, and won't be reported by marketing types (hopefully).
So until there is an independent test involving these app servers, these little marketing stage-sets known as "performance tests" will be absolutely meaningless.
As the saying goes: if you cannot convince, confuse. So Oracle is trying to get market share by simply yelling atop its ivory tower and creating ripples of confusion, which may lead to accidental sales of 9iAS, especially from those who look at the comparison stats and swipe their credit cards. But why is BEA getting the jitters? I felt WebLogic is quite good from every standpoint. If it falls short by a small amount on performance, shouldn't that still be a profitable tradeoff? After all, we are not using WebLogic to launch rockets!
BEA is sitting at the top of its Ivory Tower as well - and also trying to confuse. There is very little difference between BEA and Oracle as far as their marketing tactics.
"There is very little difference between BEA and Oracle as far as their marketing tactics."
I find this hard to believe ... nobody can outdo Oracle at creating hype. Look at all their self-serving ads/press releases claiming to put the likes of Siebel and everybody else out of business.
Let's face it ... Oracle has had one great success (db) and a huge disaster with every other product (app server, crm ...)
"There is very little difference between BEA and Oracle as far as their marketing tactics."
I find this hard to believe ... nobody can outdo Oracle at creating hype. Look at all their
self-serving ads/press releases claiming to put the likes of Siebel and everybody else out of
Let's face it ... Oracle has had one great success (db) and a huge disaster with every other
product (app server, crm ...)
BEA does exactly the same thing; if BEA were bigger, say as big as Oracle, chances are they would behave the same way. When BEA published its results against Oracle, they were full of misinformation about the app server, specifically OC4J (Orion). Absolute lies. Why does BEA (and, to be fair, Oracle) engage in such tactics? It gets no one anywhere.
"If BEA were bigger ..."
Your questions are rhetorical, and such questions are best left unanswered.
Let's talk about the situation as it exists today. Oracle's grandstanding at JavaOne sounded like one last desperate attempt. "Our software runs faster, please download it" sounded like a bunch of losers whining!
Why is Oracle charging a ton of $$$ for what was essentially a free download from IronFlare (the Orion server)?
Please understand that I think Oracle's tactics are pathetic; however (as it stands today), BEA's aren't much better. Orion isn't free, by the way: it's free for development use, as is Oracle's, but it is $1500/server (regardless of CPUs). Orion and 9iAS are not one and the same: 9iAS contains Orion (OC4J) but apparently has a bunch of other stuff on top of that, too. Is it worth the price? Of course not, but BEA's prices are obscene as well. That said, if a company can charge a price that people will pay, more power to them.
Also, as an independent consultant, I have to use and recommend many different app servers (and their associated components): IBM, BEA, Oracle (old and new app servers), Orion, EAServer, and JBoss, to name a few. None are perfect and all have their problems. And there is no one-size-fits-all product. That said, I think that Oracle's move to license Orion is one of the better business decisions they have made; like BEA did, why build a J2EE container when you can buy one? Unfortunately, what they will DO with the technology remains unclear, and I don't have a good feeling about it.
If Oracle does it right - they will be a competitor to BEA - which will be VERY good for the marketplace - if not, then maybe it is strike 3, Oracle's out (of the app server market, that is).
I think what BEA did was change the source code to make the benchmark test closer to a real-world application. However, it would do them a world of good to let people know exactly what they did, so that their claims could be validated. But I am not sure how close these benchmark tests are to the applications that exist in the real world.
> I think what BEA did was to change the source code to make the benchmark test closer to a real-world application. However, it would do them a world of good if they let people know exactly what they did.
Given the obviously incorrect facts that BEA posted about the Orion/OC4J product (no JNDI, uh ... yeah fellas) and their lack of publication or detail of the changes they made to the testing apps, then you'd have to view their response with more than a handful of salt.
To IBM and Oracle's credit, they did disclose the applications they used to run the tests and come up with results.
This performance stuff is essentially moot anyway, as this long thread seems to have concluded from many good responses. Intelligent purchasers typically have their own criteria for validating and choosing from a set of candidate products.
In my view, these performance claims and counter-claims (and I'm sure we'll see counter-counter-claims) are the respective marketing machines all working to ensure that their companies' products get onto those shortlists. If no one has ever heard of your product, how many licenses are you going to sell?
A wise man will believe the results of his own tests. Oracle and IBM try to make that possible today by at least disclosing the source of the applications they used.
Techmetrix publishes info and benchmarks on some of the app servers on the market - not for free, though. Anyone know of any other independent app server benchmarkers?
> To IBM and Oracle's credit, they did disclose
> the applications they used to run the tests and
> come up with results.
Actually, no. For example, Oracle didn't disclose what version of Weblogic they ran the test on (probably 4.5.1 ;-)).
Here is BEA's response to the benchmark:
BEA has removed the reference to "no JNDI" as regards OC4J in their challenge description. However, Orion/OC4J does support JAAS, JavaMail, and SOAP; they should remove those references as well.
Is this not a funny world we live in? This morning the July issue of Scientific American (finally) arrived. I only had a few minutes to browse it before leaving for work. What did I see? A full-page Oracle advertisement (page 9) about Oracle9iAS, comparing BEA WebLogic features, WebSphere features, and Oracle9iAS features. It boils down to Oracle having everything, WebSphere having some, and BEA WebLogic looking like an amateurish toy (only two out of ten features available).
It shows again that this is a dirty war, fought between the marketing divisions. Do the marketing gurus think us stupid?
Really, they are mocking our intelligence!!!
In fact this entire war makes me feel very bad. It is so much unlike everything that has to do with Java, and as mentioned before, has a bad effect on the J2EE community.
Maybe we should all show our disgust at this marketing war - God knows, maybe they'll cool down?
Folks, did you know that BEA's and Oracle's product license agreements contain restrictions preventing the reporting of performance results for applications using the WebLogic and Oracle application servers (and the Oracle database too)? (BTW: IBM has no such restrictions for DB2 or WebSphere Application Server 3.5.x.) Therefore, Oracle and BEA have both violated each other's license agreements and could potentially sue each other. That would be amusing! ;-)
As far as benchmarking goes, I suggest looking at Giga's recent "Ignore Application Server Benchmarketing" IdeaByte by Randy Heffner, July 13, 2001.
IMHO: Only independent performance benchmarks can be considered, not vendors'. The only such benchmark freely available to the public that I know of is the recent (May '01) PC Magazine performance benchmark: "IBM's WebSphere Application Server, Advanced Edition 3.5, was by far the fastest and most scalable server on the scenario test. At its peak, it maintained 4,000 virtual users at 177 pages per second. Because of its sophisticated dynamic page-caching algorithms, WebSphere was also the clear winner on the title search test. It also had the best A.R.T. (Average Response Time) at peak performance on nearly all of the tests… IBM's WebSphere Application Server, Advanced Edition 3.5 has an array of speed-enhancing caching techniques, rock-solid load balancing and failover handling, and a complete complement of developer tools for creating Web applications. Its management console offers minute control over settings; support for a multitude of operating systems and Web servers sets WebSphere apart. Excellent performance in our testing is the icing on the cake… If your business is looking for a complete scalable and high performance environment for Web applications, WebSphere Application Server is ready." WebSphere beat Sybase, Borland, Microsoft, and iPlanet in the High-End Application Servers category.
// ********** HERE IT GOES !!!!!!!! ************
BEA and Oracle "declined, citing lack of resources to support our review process or an unwillingness to undergo performance testing," according to the review.
// ********** HERE IT GOES !!!!!!!! ************
Full report see at: http://www.zdnet.com/pcmag/stories/reviews/0,6755,2714183,00.html
Whoops! I believe that PC Mag report stated "Microsoft delivered performance equivalent to the fastest of the Java guys when using Oracle, and even faster when using SQL Server." Or something close to that.
PC Mag tested COM+/IIS/ASP. Their result directly contradicts the old benchmarks published by the Orion guys, which showed that ASP was a real dog.
I wonder if .NET is slower or faster?
Interesting, but as stated previously in this thread, performance is not the key deciding factor.
ASP.NET languages are compiled, I understand (to MSIL, which is then JIT-compiled to native code) - as opposed to the interpreted JScript or VBScript within classic ASP. So expect some dramatic marketing releases of benchmarks from those guys...
Sudhakar, you ask:
Why did BEA need to modify the Java Performance Challenge benchmark before running it?
A better question is why should Oracle be able to create the Java Performance Challenge benchmark in the first place?
A third party should define the benchmark, though even that would do very little to prove which is the better application server - which, by the way, is WebLogic.
I've worked on projects with both WebLogic and Orion 1.38 (just before Oracle licensed it), and Orion was faster, primarily due to its caching. It was also easier to develop with, it deployed faster, and our development cycle was really quick.
As for WebLogic, I'm pretty disappointed. It's slow, and we were forced to change our design to cater for bad performance, in particular in the Entity layer. Even with performance improvements it's just too slow. We're now forced to implement a Factory pattern for lookup tables. Deployment times are also bad, slowing our development cycle.
We don't use or need clustering in our current app, and with WebLogic you're probably going to need clustering pretty fast. With Orion you won't.
My vote is for Orion for better price/performance, any day.
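For what it's worth, the "Factory pattern for lookup tables" described above usually amounts to caching read-mostly reference data in memory so each request hits the cache instead of an entity bean or database round trip. Here is a minimal sketch of that idea; the names (`LookupFactory`, `loadFromDatabase`) and the stubbed data are illustrative assumptions, not from any actual project in this thread:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Caching factory for read-mostly lookup tables: each table's rows are
// loaded at most once, then served from an in-memory map thereafter.
public class LookupFactory {
    private static final Map<String, List<String>> CACHE = new HashMap<>();

    // Returns the rows for a lookup table, loading them on first use.
    public static synchronized List<String> getLookup(String tableName) {
        List<String> rows = CACHE.get(tableName);
        if (rows == null) {
            rows = Collections.unmodifiableList(loadFromDatabase(tableName));
            CACHE.put(tableName, rows);
        }
        return rows;
    }

    // Stub standing in for the expensive path (JDBC query, session-bean
    // call, or entity finder); a real implementation would read the
    // table here instead of fabricating rows.
    private static List<String> loadFromDatabase(String tableName) {
        List<String> rows = new ArrayList<>();
        rows.add(tableName + ":row1");
        rows.add(tableName + ":row2");
        return rows;
    }
}
```

The same idea applies whether the data comes from JDBC, a session bean, or entity finders run once at startup; the point is to stop paying the persistence cost on every lookup of data that rarely changes.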
Let ECPerf decide!
DOWN WITH ORACLE!