Given BEA's and Oracle's recent claims regarding their own benchmarks, it's interesting to look at PC Magazine's independent benchmarks (published in May), where the vendors themselves tuned an application. Interestingly, both BEA and Oracle declined to put their super-fast servers in the running for some reason.
Read Performance Tests: Java Application Servers
Is PC Magazine an authority on testing Java app servers?
Not traditionally, but they seem to have followed a reasonable benchmarking format, and as a credible, well-known third party their numbers are meaningful.
They also note that each company had the opportunity to tune its product:
Each company used the same application and same database schema, deployed to its environment. Company representatives were on hand to tune the environments for optimal performance. Every company had the opportunity to configure caching parameters, connection pooling, session management, and the Oracle database--as long as it did not infringe on the EJB code.
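Of the tuning knobs quoted above, connection pooling is usually the one that dominates. For readers unfamiliar with the idea, here is a minimal sketch of what an app server's "pool size" parameter controls; the class and method names are hypothetical, not any vendor's API:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Minimal fixed-size resource pool. A real server pools JDBC connections;
// here the resource type is generic so the sketch stays self-contained.
public class SimplePool<T> {
    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // pre-create all resources up front
        }
    }

    // Callers wait (up to a timeout) for an idle resource instead of
    // opening a brand-new connection on every request.
    public T borrow(long timeoutMs) throws InterruptedException {
        T t = idle.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (t == null) {
            throw new IllegalStateException("pool exhausted");
        }
        return t;
    }

    public void release(T t) {
        idle.offer(t); // hand the resource back for reuse
    }
}
```

Tuning the pool size is exactly the trade-off the vendors were making: too small and requests queue up waiting to borrow; too large and the database drowns in open connections.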
From one of the first pages, it seems this was not a truly independent review.
Well, PC Magazine is much better placed to judge a product without bias. Vendors always tend to make benchmarks look good for their side by heavily customizing the benchmark platform to suit their needs!
I looked at the article, and what stood out was that they only tested three app servers! True?
Note this comment: "The tests exposed a weakness in Borland AppServer. Our run rules called for the EJB and servlet containers to be placed in separate processes". This is an arbitrary decision. Who runs their servlet engine and EJB engine in different processes?
Who runs their servlet engine and EJB engine in different processes?
Lots of people in a security conscious environment do.
It is rare these days that there is only one firewall between the outside world and your internal systems.
In banks (though not just banks) there are usually three firewalls separating the client (usually IE) from the presentation tier, the presentation tier from the application (business logic) tier, and that from the data tier. In a lot of cases, the intRAnet population is treated no differently from the intERnet population (nor should it be, from a security point of view).
What's more, for scalability (and perhaps availability) reasons, you would WANT to be able to split your presentation tier from your business logic tier and have them run on separate boxes - it really depends on your application, though.
Separating presentation and business logic makes perfect sense in the OO development domain. But when it comes to deploying code, I don't see any reason NOT to have these two services running in the same process. A distributed call from a servlet engine to an app server is orders of magnitude slower than an intra-process call. Unless you have a compelling reason to separate these two, don't. That is the first rule of distributed computing - if you can avoid distributing your objects, DON'T DISTRIBUTE.
As for scalability, nothing says you can't have ten boxes all lined up, each running the servlet engine and an EJB server in the same process.
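The cost gap between in-process and cross-process calls is easy to see even without a network: a remote EJB invocation must at minimum marshal its argument and result. A hypothetical micro-benchmark sketch (illustrative only, not a vendor comparison) that round-trips objects through Java serialization, the way an RMI stub would:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Compares a plain in-process call with the same call plus the
// serialization work a cross-process invocation must do -- before
// any actual network latency is even added on top.
public class CallCostSketch {

    static String business(String s) {       // stand-in business method
        return s.toUpperCase();
    }

    // Serialize and deserialize an object, as a remote stub/skeleton would.
    static Object roundTrip(Object o) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(o);
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(buf.toByteArray()))) {
            return in.readObject();
        }
    }

    static long timeDirect(int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            business("order-" + i);
        }
        return System.nanoTime() - start;
    }

    static long timeMarshalled(int n) throws Exception {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            String arg = (String) roundTrip("order-" + i); // marshal argument
            roundTrip(business(arg));                       // marshal result
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        int n = 20_000;
        System.out.println("direct:     " + timeDirect(n) + " ns");
        System.out.println("marshalled: " + timeMarshalled(n) + " ns");
    }
}
```

On any JVM the marshalled loop is dramatically slower, and a real remote call adds socket and protocol overhead on top of this.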
> I don't see any reason NOT to have these two services running in the same process
Well, if your security policy is such that you must separate presentation logic from business logic with a firewall, then you HAVE to run in separate processes.
The problem was that Borland could _only_ run in different processes, not in the _same_ process space. And I completely disagree - who on earth runs in the same process space?
> Who runs their servlet engine and EJB engine
> in different processes?
Anyone with their servlet engine installed on an 'edge server', or otherwise in a topology where it sits on a separate machine.
In terms of scalability it may make sense to have your servlet engine and EJB engine in different processes - in fact, on different machines. You may need the power. Unless you use a beta of JDK 1.4, the garbage collector is single-threaded. For this reason alone, some people run a Java servlet engine and EJBs in different processes.
Since J2EE was designed for fully distributed systems I think this test is a very good one.
I also think that having your app server and web server on different machines is more secure - but I would weight this point less than the scalability one (someone made this point earlier).
In May, Oracle still didn't have its new Java container, which was announced at JavaOne. This is probably the reason they didn't want to take part. Things have changed since then.
Again, all vendors who are *truthful* and serious about their performance should show their ECperf results.
According to the PC Labs results, I think the application was more about testing a Web (HTTP) server with servlets on a bunch of PCs (400 MHz, 256 MB).
Yeah, I think ECperf will be good for the app server market, but at the same time this was an independent test conducted by what I think is a reliable source.
I think the point everyone seems to be missing is that performance numbers are only relevant IF you (or your client) happen to be running the benchmark example on the example hardware/network. While performance should be a factor in your application server trade matrix, the performance you see really depends on your underlying hardware/OS/network configuration and the architecture of your application. In this case the platform is Intel but most of the clients I see are running on UNIX if they have real scalability/reliability issues.
I don't see any mention in the article that Oracle or BEA declined the testing, or whether they were ever invited to participate. It seems odd that it wouldn't even mention the absence of the app server market leader (BEA WebLogic).
Aside: Any real benchmarking for servers in that market space should be done in house with real world apps, in as close to real world conditions as possible (clustered servers, hardware load balancing, etc.). In this light, the article is somewhat moot.
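Agreed that in-house testing is the only benchmark that really transfers. The skeleton of such a driver is small; a sketch of the concurrent "N users, M requests each" pattern, with all names illustrative and the actual request left as a placeholder:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Minimal in-house load driver: a fixed number of concurrent "users"
// each issue a fixed number of requests against some operation.
public class LoadDriver {

    public static long run(int users, int requestsPerUser, Runnable request)
            throws InterruptedException {
        AtomicLong completed = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int r = 0; r < requestsPerUser; r++) {
                    request.run();               // e.g. an HTTP hit on the servlet tier
                    completed.incrementAndGet(); // count successful requests
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        long done = run(50, 100, () -> { /* replace with a real request */ });
        System.out.println("completed requests: " + done);
    }
}
```

Point it at your own application, on your own hardware, behind your own load balancer - that is the number that matters, not a magazine's.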
The reason BEA did not participate in the PC Magazine tests is that WebLogic 6.0 was just about to ship and our engineers were fully employed in getting it out on time.
The acid test (IMHO) is the number of users deploying an appserver for high volume transactional applications.
Ask your vendor of choice for reference implementations.
I find it hard to take seriously a benchmarking test which only compares three products!
Does anyone know of any tests like this involving more of the major players (eg: WebLogic)?
Borland has announced their ECperf numbers:
A third-party analyst research group has released its competitive J2EE app server analysis: http://www.cmis.csiro.au/adsat/j2eeexec.htm
Also worth pointing out:
Sun announced it has demonstrated the world's largest system ever to run ECperf Benchmarks using an Eight-way Sun Fire[tm] V880 server with 16 GB of memory, running the Solaris[tm] 8 Operating Environment (OE). This test resulted in a world-record performance of 5961.77 BBops/min@Std and $51/BBops running Borland AppServer 4.5 with Oracle 8i and Java 1.3.1.
But don't take our word for it... Prove it to yourself!
Comments such as these are dubious and make me wonder if the person who made them is a Borland or Sun salesperson.
Firstly, as with all benchmarks, as soon as one company sets the record, another company soon breaks it. (Note that IBM now holds the ECperf record.)
Not that I love IBM or anything - soon HP or someone else will re-break it by doing some ridiculous tuning.
What's the point?
What a meaningless test! From where I am sitting, I have to ask two key questions:
1. Who cares about WebSphere 3.5? It is so far from being a J2EE compliant application server that an educated buyer eliminates it immediately.
2. Where are the real J2EE players in the market? WebLogic, HP-Bluestone, iPlanet, et al... (Sorry, in my humble experience Oracle's app server doesn't make this list.)
As we have personally experienced, HP-Bluestone's Total-e-Server has a much more scalable architecture than either WebSphere or WebLogic. I would love to see an independent, vendor-supported test that actually compared the major application server players, but only if they ran the J2EE compliance tests first. I am tired of IBM's marketing machine comparing a totally non-standard, proprietary app server to the rest of the app server world.
F.Y.I. (I am an IBM Certified Solution Designer, WebSphere certified, et al...) Bottom line is I know too much.
Bluestone has an interesting architecture, but the configuration/installation process is far from elegant. Having spent two weeks trying to install Bluestone to do some performance testing against WebLogic, Tomcat, and iPlanet, I gave up on it. Their universal listener architecture and XML module look like a good design from what I can see, but their configuration file is atrocious.
I don't agree with your idea that buyers eliminate WAS 3.5 just because it's not J2EE compliant.
What use is J2EE compliance if it doesn't offer users reliability, scalability and performance?
I guess WAS 3.5 had all of these, and that's why banks and insurance companies chose it over BEA.
J2EE compliance is a standard that is good to have, nothing more.
Were there any tests done concerning failover, load balancing, etc.?
What about clustering and the admin tools needed to cluster and scale?
I know this is a 'hot topic', but when we have 1.6 million transactions on a daily basis to handle, with a client base of 65,000 (about 3,500 or more concurrent users), I honestly don't think that what was presented here would hack it, especially running on Microsoft '@#$@#@'. Why not rerun these tests on Solaris (Linux?) on some serious multiprocessor machines, using application servers that can handle clustering, load balancing, etc., then republish the results - I'm sure it would make for some interesting reading indeed.
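For what it's worth, the arithmetic behind those figures is easy to check (using only the numbers stated in the post above):

```java
// Back-of-envelope check: 1.6 million transactions spread over one day.
public class DailyLoad {
    public static double avgTxPerSecond(long txPerDay) {
        return txPerDay / 86_400.0; // seconds in a day
    }

    public static void main(String[] args) {
        // ~18.5 tx/s on average; real-world peaks are typically
        // several times the daily average, concentrated in busy hours.
        System.out.printf("average: %.1f tx/s%n", avgTxPerSecond(1_600_000));
    }
}
```

An 18-19 tx/s average sounds modest, but with peak factors and 3,500 concurrent sessions the sustained load on session management and the database is what breaks untested configurations.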