How well do you understand what is happening inside your EJB application server? We trust the products to provide services such as caching, transactions, and pooling for our beans; but inside the closed environment of an EJB container, it is hard to know what is really going on.
Undercover Reports help you find out. They are based on experiments using Undercover EJB Components - that is, components written specifically to monitor how they are being managed. Because the components are standards-compliant beans, they can be deployed and run on multiple products, which makes it possible to see which results are distinctive to a given product and which are common to all.
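The idea is simple: a bean's lifecycle callbacks reveal when the container creates, pools, passivates, and discards instances. As a rough illustration (a sketch of the technique, not one of the actual report components), a session bean might log each callback together with its instance identity:

    // Illustrative sketch only - not one of the actual Undercover components.
    // A session bean that logs its lifecycle callbacks; the instance identity
    // and thread name in the log reveal pooling, re-use, and passivation.
    import javax.ejb.SessionBean;
    import javax.ejb.SessionContext;

    public class UndercoverBean implements SessionBean {

        public void ejbCreate() { log("ejbCreate"); }       // instance created for a client
        public void ejbActivate() { log("ejbActivate"); }   // restored after passivation
        public void ejbPassivate() { log("ejbPassivate"); } // swapped out by the container
        public void ejbRemove() { log("ejbRemove"); }       // instance discarded
        public void setSessionContext(SessionContext ctx) { log("setSessionContext"); }

        // Business method (exposed through a remote interface, omitted here).
        public void ping() { log("ping"); }

        private void log(String event) {
            // The identity hash distinguishes bean instances, so pooled and
            // shared instances show up as repeated hashes across clients.
            System.out.println(event + " on instance " + System.identityHashCode(this)
                    + " in thread " + Thread.currentThread().getName());
        }
    }

Deploy the same bean on several containers, and the differences in the resulting logs are exactly the kind of observation the reports catalogue.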
Five reports are published today at:
http://www.scottcrawford.com/undercover
Currently, there are reports on WebLogic 7, WebSphere 5, Oracle 9iAS, JBoss 3.0, and the J2EE Reference Implementation.
There will also be a BOF session about the reports at JavaOne next week:
http://servlet.java.sun.com/javaone/sf2003/conf/bofs/display-2006.en.jsp
Undercover Reports on the innards of 5 EJB servers (22 messages)
- Posted by: Scott Crawford
- Posted on: June 04 2003 12:05 EDT
Threaded Messages (22)
- A little bit light... by Sacha Labourey on June 05 2003 14:07 EDT
- Good start by Cary Bloom on June 05 2003 14:18 EDT
- Good start by Bill Burke on June 05 2003 14:39 EDT
- dot bomb! by Cameron Purdy on June 05 2003 18:30 EDT
- a bunch of whiners ... by Cary Bloom on June 05 2003 19:48 EDT
- Hormonal spike? by Sacha Labourey on June 05 2003 14:40 EDT
- A little bit light... by Bill Burke on June 05 2003 14:33 EDT
- Oracle Correction by Scott Crawford on June 05 2003 14:44 EDT
- Missed The Point by Nathan Zumwalt on June 05 2003 14:43 EDT
- Possible, yes by Sacha Labourey on June 05 2003 14:55 EDT
- On testing methodology ... by Steve Cramer on June 05 2003 16:09 EDT
- Too limited to learn much from by John Hess on June 05 2003 16:11 EDT
- Huh? by Markus Kling on June 05 2003 15:55 EDT
- Huh? by Markus Kling on June 05 2003 15:57 EDT
- Want the innards? Use JAD by Craig Pfeifer on June 05 2003 16:56 EDT
- Want the innards? Use JAD by Wille Faler on June 06 2003 05:35 EDT
- Please read the reports before making judgement by Scott Crawford on June 05 2003 20:14 EDT
- Valuable work by lecharny emmanuel on June 06 2003 04:52 EDT
- Nice colors by Fredrik Holmqvist on June 06 2003 03:12 EDT
- Undercover Reports on the innards of 5 EJB servers by Tero Vaananen on June 06 2003 08:20 EDT
- Undercover? by make ship go on June 06 2003 10:18 EDT
- JBoss 3.0.3 fixes the RMI issue by Stanford Ng on June 06 2003 14:39 EDT
A little bit light...
- Posted by: Sacha Labourey
- Posted on: June 05 2003 14:07 EDT
- in response to Scott Crawford
I don't know about the other app servers, but for JBoss they took release 3.0.0, which came out more than a year ago! The current official release is 3.0.8. For a version 1.0 report published in June 2003, that is a little bit light...
What I also loved: they did the testing on a ThinkPad laptop running Windows 2000 Pro.
They should run the next one on a Commodore 64 with EJBoss 1.0.
Good start
- Posted by: Cary Bloom
- Posted on: June 05 2003 14:18 EDT
- in response to Sacha Labourey
A good comparison of the various app servers (of course, JBoss fanatics can be expected to point out flaws just because their app server finished dead last; Sacha's comments are a case in point). Did it ever occur to you that the tests used a version of JBoss that was available at that time? I'm sure the latest and greatest version of JBoss will have killer documentation, but Scott had to use what was available at the time.
Good start
- Posted by: Bill Burke
- Posted on: June 05 2003 14:39 EDT
- in response to Cary Bloom
> A good comparison of the various app servers (of course, JBoss fanatics can be expected to point out flaws just because their app server finished dead last; Sacha's comments are a case in point). Did it ever occur to you that the tests used a version of JBoss that was available at that time? I'm sure the latest and greatest version of JBoss will have killer documentation, but Scott had to use what was available at the time.
This is like showing you a newspaper from early 2000 showing why you should invest in internet stocks.
Bill
dot bomb!
- Posted by: Cameron Purdy
- Posted on: June 05 2003 18:30 EDT
- in response to Bill Burke
Bill: This is like showing you a newspaper from early 2000 showing why you should invest in internet stocks.
ROFL! Good one.
Are there any specific points where 3.0.8 (or 3.2, etc.) would cause the outcome to be significantly different? (Yes, it's a leading question, but instead of suggesting there would be a difference, I'd like to know what improvements on the performance side you think would affect the outcome.) Perhaps we could request a re-run with more modern versions? Or another party could download the tests and run them themselves?
Peace,
Cameron Purdy
Tangosol, Inc.
Coherence: Easily share live data across a cluster!
a bunch of whiners ...
- Posted by: Cary Bloom
- Posted on: June 05 2003 19:48 EDT
- in response to Cameron Purdy
I'm amazed that most folks here just whine most of the time without suggesting anything constructive. Folks, it is very easy to say "these benchmarks don't do this or that", so how about actually saying how to improve things, instead of lame comments such as "the benchmark was run on a laptop".
Gee!
Hormonal spike?
- Posted by: Sacha Labourey
- Posted on: June 05 2003 14:40 EDT
- in response to Cary Bloom
Cary,
next time, try to give arguments, not simply aggressive noise. Do you objectively think:
- that it is fair to use a one-year-old release for testing?
- that performing tests on a laptop shows professionalism?
If you think so, then I guess we just have different ways of working.
And yes, it is true that I try to defend the way testing is done on JBoss (without acting as a "fanatic", I think). However, that does not mean that:
- the other app servers in the "bench" haven't suffered from the same policy
- JBoss would have been the best app server
Regards,
Sacha
A little bit light...
- Posted by: Bill Burke
- Posted on: June 05 2003 14:33 EDT
- in response to Sacha Labourey
> I don't know about the other app servers, but for JBoss they took release 3.0.0, which came out more than a year ago! The current official release is 3.0.8. For a version 1.0 report published in June 2003, that is a little bit light...
>
> What I also loved: they did the testing on a ThinkPad laptop running Windows 2000 Pro.
>
> They should run the next one on a Commodore 64 with EJBoss 1.0.
I agree, the report says "To be fair to all vendors.....the first release of the product is used." Yet Oracle 9iAS is version 9.0.3.
Please tell me how testing older versions of products is useful.
Oracle Correction
- Posted by: Scott Crawford
- Posted on: June 05 2003 14:44 EDT
- in response to Bill Burke
> I agree, the report says "To be fair to all vendors.....the first release of the product is used." Yet Oracle 9iAS is version 9.0.3.
For each product, the first release announced as a full J2EE 1.3 production release is the one tested. Oracle 9.0.3 was that release. The policy is consistent (even if Oracle's version numbers are weird).
Missed The Point
- Posted by: Nathan Zumwalt
- Posted on: June 05 2003 14:43 EDT
- in response to Sacha Labourey
These reports aren't supposed to measure performance, but to give the developer more insight into how the app server is handling things under the covers (thus, "Undercover Reports").
So, it doesn't matter what machine or OS the app server was run on... it might not even matter what version of the app server was tested.
These types of statistics, and the inferences gleaned from them, are good for education but little else. To be really useful, the author needs to go to the next level: change configurations and see how the tests change.
Possible, yes
- Posted by: Sacha Labourey
- Posted on: June 05 2003 14:55 EDT
- in response to Nathan Zumwalt
Yes, if it were purely from an architectural point of view; but they do show some performance statistics as well, hence my post.
Regards,
sacha
On testing methodology ...
- Posted by: Steve Cramer
- Posted on: June 05 2003 16:09 EDT
- in response to Sacha Labourey
From the Concepts section in the Introduction of the reports (with author's emphasis):
"This technique is not useful for performance benchmarking. The products tested use different virtual machines, different wire protocols, different memory configuration, and all have their own unique characteristics. No attempt has been made to provide a "level playing field". Therefore it is neither fair nor useful to compare the performance of one product to the performance of another product on the same test. It is sometimes useful to see how the same product will perform differently in response to two similar tests; these are the only performance comparisons that appear in Undercover Reports."
I would say that this demonstrates not only the author's awareness of the issues around performance testing in general, but his overall professionalism as well.
You might still have a point if he had performed the tests for WebLogic on an enterprise-class machine with 24 CPUs, only to revert to a Palm Pilot for the JBoss tests, but the fact of the matter is that he performed all tests on the same hardware and operating system. The choice of a laptop demonstrates no lack of professionalism, either. Again, though it definitely limits the performance, all tests will be equally limited, and the desired comparisons are still valid.
It is true that certain performance or scalability limitations will only manifest themselves as you try to scale up (clustering, SMP limitations, etc.), but I think that's slightly outside the scope of this analysis. He's not trying to predict how an enterprise application will perform or scale on each particular platform, but merely trying to understand how the individual containers behave, in much the same way you might be able to infer some details about the behavior of the JIT compiler in a particular JVM based on iterative results of a suite of microbenchmarks.
And even he admits in the Hardware Issues section of "Introduction to Undercover Components" that the laptop isn't representative of the platforms on which most J2EE applications will deploy.
I'm sure there are specific points where the testing methodology could be improved. After all, I've never met a perfect test.
cramer
Too limited to learn much from
- Posted by: John Hess
- Posted on: June 05 2003 16:11 EDT
- in response to Sacha Labourey
Unfortunately this report has so many limitations that it is almost impossible to learn much about the application servers. For one thing, the assumption that BMP behavior implies CMP behavior is probably false for most servers. Secondly, running the client and server on the same machine, with only one CPU, prevents the tests from showing many characteristics of the servers' scalability, pooling, and threading capabilities. Running with 384 MB of memory is also a big flaw: it is of little interest to me how a server runs with 384 MB of memory; I am interested in how well it runs with adequate resources. Add in that local interfaces were not utilized and that out-of-the-box tuning parameters were used, and I think you are not left with much good information.
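For reference, local interfaces avoid remote marshalling entirely, which is exactly the cost that remote-only tests keep paying. A minimal EJB 2.0 sketch, with invented names:

    // Sketch with invented names: an EJB 2.0 local interface pair.
    // Local methods declare no RemoteException and pass arguments by
    // reference, so remote-only tests measure marshalling costs that a
    // co-located application would never pay.
    import javax.ejb.CreateException;
    import javax.ejb.EJBLocalHome;
    import javax.ejb.EJBLocalObject;

    public interface AccountLocal extends EJBLocalObject {
        double getBalance();   // no RemoteException on local methods
    }

    interface AccountLocalHome extends EJBLocalHome {
        AccountLocal create(String id) throws CreateException;
    }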
Unfortunately, I think when people see side-by-side comparisons they will draw conclusions without necessarily understanding all the constraints. In the author's defense, however, he did go out of his way to state the limitations very clearly.
Huh?
- Posted by: Markus Kling
- Posted on: June 05 2003 15:55 EDT
- in response to Scott Crawford
I don't get any insights from these documents...
Excerpts (it's flame baiting, I know!):
- you mainly test burst loads -
Bursts will never test the efficiency of caching or pooling. The limits of burst loads are set by the VM, memory, threads, OS sockets, ... If you want to determine the effect of caching, do random queries over a long time! (A sketch of such a client follows at the end of this post.)
- you use the app servers' default configurations -
That's just bad!
- stateless session bean instances will be shared and re-used by different clients -
Hopefully! What's the point here?
- new stateful session bean instances will be created for each client -
What else should the container do? And even if it does not, creating an object is not expensive (remember: it has NO state)!
- you test BMP! -
Well, test the container, not your implementation!
- you test caching of entity beans in a non-clustered, non-"DB-shared" environment -
This perhaps works for www.mypersonalwebpage.com
... and many things more ...
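The kind of client loop suggested above might look like this (a sketch with invented names, not code from the reports):

    // Sketch of a long-running random-read client. Random keys over a long
    // period exercise cache hits, misses, and eviction; a short burst of
    // identical calls never leaves the cache.
    import java.util.Random;

    public class RandomReadClient {

        public static void main(String[] args) throws Exception {
            Random random = new Random();
            int keySpace = 10000;                                  // much larger than the cache
            long end = System.currentTimeMillis() + 3600 * 1000L;  // run for an hour

            while (System.currentTimeMillis() < end) {
                readAccount(random.nextInt(keySpace));
                Thread.sleep(50);                                  // steady trickle, not a burst
            }
        }

        private static void readAccount(int key) {
            // Stand-in for AccountHome.findByPrimaryKey(new Integer(key))
            // plus a getter call, forcing the container to hit its cache
            // or go to the database.
        }
    }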
Huh?
- Posted by: Markus Kling
- Posted on: June 05 2003 15:57 EDT
- in response to Markus Kling
Oops... the other way round...
- stateless session bean instances will be shared and re-used by different clients -
Hopefully! What's the point here? And even if it does not, creating an object is not expensive (remember: it has NO state)!
- new stateful session bean instances will be created for each client -
What else should the container do?
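For illustration, the lifecycle difference behind these two observations, as a sketch with invented interfaces (not code from the reports):

    // Invented interfaces, purely to illustrate the lifecycle difference
    // between stateless and stateful session beans.
    import java.rmi.RemoteException;
    import javax.ejb.CreateException;
    import javax.ejb.EJBHome;
    import javax.ejb.EJBObject;

    interface Counter extends EJBObject {
        int next() throws RemoteException;
    }

    interface CounterHome extends EJBHome {
        Counter create() throws CreateException, RemoteException;
    }

    public class LifecycleDemo {
        // For a STATELESS bean, create() does not imply a new instance: the
        // container serves each call from its pool, so two clients may well
        // be handled by the same bean object.
        // For a STATEFUL bean, create() must yield a distinct instance per
        // client, because each instance carries that client's conversation.
        static void demo(CounterHome home) throws Exception {
            Counter a = home.create();
            Counter b = home.create();
            // Stateless: a and b may be backed by one pooled instance.
            // Stateful:  a and b are necessarily different instances.
        }
    }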
Want the innards? Use JAD
- Posted by: Craig Pfeifer
- Posted on: June 05 2003 16:56 EDT
- in response to Scott Crawford
I've heard that JAD (http://kpdus.tripod.com/jad.html), when properly applied to certain Java files of a binary persuasion, can be quite illustrative.
At least that's what a friend of mine told me. I wouldn't know. I would most certainly not do such a thing myself, of course, violating all sorts of fine print in the EULA and all.
Want the innards? Use JAD
- Posted by: Wille Faler
- Posted on: June 06 2003 05:35 EDT
- in response to Craig Pfeifer
I have actually used JAD to save the sources of a project.
All the source files had been lost and there was no current backup of them.
However, the JARs in the production environment were fine.
Created a custom Ant task that decompiles files, and made an Ant script that decompiled complete JARs from a list of JARs. Voila! Complete source code saved!
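Something like this, roughly (a reconstruction for illustration, not the original script; the paths are made up and the jad flags may vary by version):

    <!-- Assumes the jad binary is on the PATH; -o overwrites, -r recreates
         package directories, -sjava writes .java extensions, and -d sets
         the output directory. -->
    <target name="recover-sources">
      <!-- unpack the surviving production jar -->
      <unjar src="lib/myapp.jar" dest="build/classes"/>
      <!-- decompile every class file back to .java under recovered-src/ -->
      <apply executable="jad">
        <arg value="-o"/>
        <arg value="-r"/>
        <arg value="-sjava"/>
        <arg value="-drecovered-src"/>
        <fileset dir="build/classes" includes="**/*.class"/>
      </apply>
    </target>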
OK, so the code was a bit undocumented, but it's better than none at all...
Please read the reports before making judgement
- Posted by: Scott Crawford
- Posted on: June 05 2003 20:14 EDT
- in response to Scott Crawford
Hello all,
I don't intend to jump in on all the specific points raised. Just two things I want to say before I go quiet and let you all debate:
The primary purpose of the exercise is education - learning about EJB management strategies in practice. Although some reasonable inferences may be made about product quality, this is NOT a benchmark. I am concerned to see people on both sides of the discussion slipping into that mistake.
Please read the reports for yourself before coming to a judgement. Many people will - either mistakenly or because of an agenda - misrepresent them to you in a forum like this. The reports themselves address most of the discussion points, so read them and make up your own mind.
Thanks
Scott
Valuable work
- Posted by: lecharny emmanuel
- Posted on: June 06 2003 04:52 EDT
- in response to Scott Crawford
Thanks a lot, Scott,
this was an interesting report, and it can stand as a good start for deeper comparisons and tests.
Nice colors
- Posted by: Fredrik Holmqvist
- Posted on: June 06 2003 03:12 EDT
- in response to Scott Crawford
Too bad the eye has a problem focusing on red text on a blue background. It's a real eyestrainer.
Undercover Reports on the innards of 5 EJB servers
- Posted by: Tero Vaananen
- Posted on: June 06 2003 08:20 EDT
- in response to Scott Crawford
Maybe the JBoss people can answer whether the shortcomings of JBoss claimed in the document(s) are real, which of them could have been avoided with tuning (and how), and whether the problems have been fixed since 3.0. In addition to criticizing the comparison, you could also tell us how JBoss is now better than it was at the time of the 3.0 release.
There were at least these shortcomings or questions in the JBoss tests:
1) The RMI burst test failed for JBoss with more than 100 clients. Was this fixed after 3.0? Many of the following tests were hampered by this limitation.
2) Due to 1, the passivation tests for stateful session beans could not be completed. Does JBoss passivate stateful session beans?
3) A lot of extra entity bean instances are created (Keys + Clients). The document speculates: "One possibility is that each client is causing its own bean instance to be created into the pooled state even though it ought to be possible to re-use them."
4) The message-driven bean burst test succeeded reliably only with 20 or fewer clients. Was anything done to improve this? This was a problem with many other J2EE 1.3 app servers, so it was not only a JBoss problem.
5) JBoss has not implemented the "application client container" required by the J2EE 1.3 spec. Has this since been implemented, or is the document wrong about it?
It would be great to hear some answers to these issues, if you would be so kind. I use JBoss daily at work, so this is valuable information. Overall I am happy with JBoss, so I am not going to switch :-)
Undercover?
- Posted by: make ship go
- Posted on: June 06 2003 10:18 EDT
- in response to Tero Vaananen
Very light. I want those 10 minutes of my life back.
JBoss 3.0.3 fixes the RMI issue
- Posted by: Stanford Ng
- Posted on: June 06 2003 14:39 EDT
- in response to Tero Vaananen
Yes; the RMI issue is reportedly fixed in 3.0.3, according to the Undercover report itself. It seems pretty useless to continue with the rest of the testing using a version known to be bad, but the author insisted on adhering to his policy of using only first production releases. It's his prerogative.
Other than that, I found the reports to be well written and fair. The information is not very valuable on the whole, but he does shed some light on what happens under the covers, which is his goal.