ECPerf, the EJB server benchmarking spec, has finally gone into Public Review. ECPerf is a performance workload benchmark meant to measure the scalability and performance of Enterprise JavaBeans servers and containers. The purpose of ECPerf is to provide a standard and trustworthy TPC-C style benchmark for EJB Servers.
The ECPerf specification also includes a real-world J2EE application, which will be used to test the performance of any J2EE-compliant server that can run it.
The political ramifications of this benchmark suite are significant. Previously, vendors did not allow users to post benchmark results for their servers. With the advent of ECPerf, one day we will be able to compare the performance of EJB servers side by side. Boy, I would love to see jBoss kick the pants off of some of the big boyz. :)
Download the ECPerf Specification
ECPerf Toolkit Early Access Edition (login required)
Very nice, but without TPC results this isn't a wonderful move forward. Comparing EJB solutions to each other is useful in itself for choosing a container, but what we really need is TPC entries so we can compare CORBA or COM+ to EJB. (IMHO)
This sounds pathetic.
"The purpose of ECPerf is to provide a standard and trustworthy TPC-C style benchmark for EJB Servers."
In the public doc, Sun states it is unhappy with both the www.TPC.org and the www.Spec.org benchmarks.
So they make their own proprietary toy to cover for a lot of flaws in EJB spec.
What if Microsoft made a Windows API benchmark and told you that their OS is the best in that regard?
As a matter of fact, I would have liked to comment on the technical nature of the benchmark, but I stopped myself from downloading it at the last minute, because it's under NDA. I could download it, but then I'd be able to comment only back to Sun.
And last time I checked, Sun is a member of both TPC and SPEC; they could have created a new benchmark with either of these organizations, but they chose to stay in control.
As a matter of fact, they haven't submitted any Java results to SPEC for quite a while, and the last time they did, it looked like IBM, HP, Compaq (Digital Alpha) and even Dell/Intel could beat them easily.
What about getting the benchmark without NDA, and making a benchmark that can be implemented in whatever alternative technology ?
Do you think Sun is able to do that?
Or what are they up to?
"So they make their own proprietary toy to cover for a lot of flaws in EJB spec."
I don't think anyone will disagree that there are flaws in the spec, but that has nothing to do with ECPerf. In terms of ECPerf being proprietary, yes, it is owned by Sun, but you should know that the ECPerf spec and suite are the result of almost 2 years of work done not just by Sun but also by members from ATG, BEA, HP, IBM, Informix, Inprise, IONA, iPlanet, Oracle and Sybase. These companies will sit on the ECPerf committee and will ultimately share control with Sun in validating benchmarks that get submitted.
Contrary to your suggestion that the ECPerf toolkit is a toy, I am actually really impressed with the application. It is NOT a trivial example. In fact, it is really cool. Check it out: the object model is relatively large, and the app itself has a lot of cool features like a rules engine, class caching, logging, and tons of other stuff I haven't had a chance to look at. This app is real.
"What about ... making a benchmark that can be implemented in whatever alternative technology?"
Why should Sun care about a benchmark that can be implemented in 'whatever alternative technology'? ECPerf is designed to properly test an EJB container, and is most definitely an EJB-only spec. That was the point from the beginning. Developers need to see benchmarking results for EJB servers. The most credible source of such results is a benchmark that is tailored to EJB. For Sun to have attempted to define some 'middleware independent' spec would have been counterproductive to the needs of EJB developers today and would likely have taken a lot longer than the 2 years it took for ECPerf to finally come out.
"What about getting the benchmark without NDA?"
What NDA are you talking about? Do you mean this line from the evaluation agreement:
"Licensee may not make performance claims based on its use of the Licensed Software unless and until such claims can be made in accordance with the procedures and requirements set forth in the then current ECperf Benchmark Specification"
If so, then you should know that the purpose of this statement is so that all claims of performance tests go through the ECPerf committee (which as mentioned earlier is comprised of members from most J2EE server vendors). This both assures the quality and the usefulness of any ECPerf benchmarks you may see. If you have a problem with this, then how would you have done it differently?
I'm not interested in discussing how smart the application may be. It would be pointless, and moreover it would be illegal because, to quote from the spec,
"The Specification contains the proprietary and confidential information of Sun ".
If you read the license carefully (the code samples themselves have an even more restrictive license), you'll see that you shouldn't talk details.
Sun may be willing to make an exception for you, though.
I thought it was a "toy" not because of its complexity, but because of its purpose and the way it was set forth. So I stand by my opinion.
"Why should Sun care about a benchmark that can be implemented in 'whatever alternative technology'?"
Well, just because most of the benchmarks done so far stick to this very rule. And that's why Sun is a member of both www.TPC.org and www.Spec.org.
And just because Sun preaches openness and choice all over the place, there's a natural expectation for them to be open; but lately a lot of people are disappointed with Sun's behaviour and think of Sun as another Microsoft.
"If you have a problem with this, then how would you have done it differently? "
Well, others did it pretty well, and it's not rocket science, either.
You specify the business problem, you define the transactions in the large, and you specify non-proprietary requirements (like data integrity, transactional capability, backup and recovery, and so forth).
And let everybody else implement using their technology of choice.
I haven't yet heard of a business problem that can be implemented only with EJB, have you?
So don't worry that EJB vendors had nowhere to prove themselves.
At least there's TPC-W, which is totally fair with regard to technologies, though the only results up there use IIS/C/embedded SQL.
Had the EJB vendors considered TPC-W too simplistic, or felt it disadvantaged the Servlet+JSP/EJB model, they could have gone through the TPC, which is open, and proposed a more complex business problem, say a "TPC-Y".
And then they would have been out there in the open competing against MTS/CORBA or whatever alternate technologies.
By refusing to do so, it seems to me that they made themselves their little toy, to compete only against each other, like something strictly "for the connoisseurs".
And serve another marketing trick to their naive customers.
And it makes them look very bad against competing technologies.
But it seems to me that any honest and serious Java developer should not easily be tricked into this.
"Why should Sun care about a benchmark that can be implemented in 'whatever alternative technology'?"
>Well, just because most of the benchmarks done so far stick to this very rule. And that's why Sun is a member of both www.TPC.org and www.Spec.org
Honestly, Costin, I don't think Sun's participation in TPC.org or Spec.org, or their past history of preaching openness, really matters to the problem at hand. EJB developers and decision makers need an EJB-SPECIFIC benchmarking specification and toolkit, one that can test EJB-specific features, in order to have "relevant" benchmarks of EJB servers.
You mention that if it were up to you, you would have defined a new TPC benchmark with a more complex business problem. Maybe I am missing something, but I don't see how any technology-independent specification could adequately test the features of EJB servers. Furthermore, such a spec would invariably end up being used to compare technologies, which would not help out developers like you and me, because the question burning in my mind is 'which EJB server is fastest'. The approach you are suggesting would not answer that question. For example, how could a technology-independent spec give you benchmark figures on activation/passivation of session beans? It couldn't, because session beans only exist in EJB.
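For anyone unfamiliar with the mechanism being argued about, here is a toy sketch of activation/passivation. Everything in it (`SessionBeanStub`, `CartBean`, `ToyContainer`) is invented for illustration; in a real container the callbacks are `ejbActivate()`/`ejbPassivate()` on `javax.ejb.SessionBean`, and how fast the container swaps bean state under memory pressure is exactly the kind of container-internal behavior only an EJB-specific benchmark can measure.

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Stub standing in for javax.ejb.SessionBean, so the sketch compiles standalone.
interface SessionBeanStub {
    void ejbActivate();   // called after the bean's state is restored
    void ejbPassivate();  // called before the bean's state is swapped out
}

class CartBean implements SessionBeanStub {
    int activations = 0, passivations = 0;
    public void ejbActivate()  { activations++;  /* re-acquire resources */ }
    public void ejbPassivate() { passivations++; /* release resources    */ }
}

// Toy container: keeps at most maxActive beans in memory and passivates
// the least-recently-used bean when the limit is reached.
class ToyContainer {
    private final int maxActive;
    private final LinkedHashMap<String, CartBean> active =
        new LinkedHashMap<>(16, 0.75f, true); // access-order = LRU iteration
    private final Map<String, CartBean> passivated = new HashMap<>();

    ToyContainer(int maxActive) { this.maxActive = maxActive; }

    CartBean lookup(String id) {
        CartBean bean = active.get(id);
        if (bean != null) return bean;      // already in memory
        bean = passivated.remove(id);
        if (bean != null) {
            bean.ejbActivate();             // state read back in
        } else {
            bean = new CartBean();          // ejbCreate in a real container
        }
        makeRoom();
        active.put(id, bean);
        return bean;
    }

    private void makeRoom() {
        if (active.size() >= maxActive) {
            String lru = active.keySet().iterator().next(); // oldest access
            CartBean victim = active.remove(lru);
            victim.ejbPassivate();          // state written out
            passivated.put(lru, victim);
        }
    }
}
```

With `maxActive = 2`, touching beans "a", "b", then "c" passivates "a"; touching "a" again activates it and passivates "b".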
For Microsoft it makes sense to enter Windows DNA into TPC. After all, they only have one software stack to compare. In our world, there are multitudes of different products implementing the same J2EE software stack. For us it makes sense to have a benchmark designed to properly distinguish one product from another. If you want to talk about consistent behaviour, then since all Java APIs are developed through the JCP, it makes sense for ECPerf to be developed there too.
I found the offensive lines from the agreement, and I got a good laugh when I realized that TheServerSide probably violated article 6.1 by announcing ECPerf on our homepage. :) Costin, these types of legal clauses are pretty much standard in the industry. I think they are designed to give Sun the legal right to stop companies from publicly criticizing it, particularly other vendors. Don't worry, they aren't going to come after you. :)
>>By refusing to do so, it seems to me that they made themselves their little toy, to compete only against each other, like something strictly "for the connoisseurs".
They definitely designed the spec to only distinguish one J2EE vendor from another, but as I explained above, that was not some evil trick; it was a requirement in order to make a truly effective EJB benchmarking suite.
It seems to me that we'll have to agree to disagree.
But the whole story is really ridiculous and is to be laughed at. If you are happy that you'll have a way to distinguish between speed of "activation/passivation" (a bad pattern anyway, but that's other discussion), and the like, then definitely, it is a good benchmark, and it will be helpful to you.
And no, you haven't violated the license as long as you referred only to the existence of ECPerf itself, not to the particular details of the spec.
And the legal clauses are pretty standard for products (thus a third party will not be able to benchmark a product and post the results, and justly so).
But the clauses for the benchmark itself, that's another thing to be laughed at.
Try to find that either at www.tpc.org or www.spec.org and see for yourself.
But let me explain my point more clearly.
The only "features" that matter in an EJB server (or J2EE server or whatever) are that it offers you a platform to solve a business problem, or a whole domain of business problems, and to deliver solutions to your customers.
Whether they activate/passivate, or cache information, or only let you use session beans and throw SQL at the database, is largely irrelevant at the end of the day.
WE ARE IN THE BUSINESS OF SOLVING PROBLEMS AND IMPLEMENTING SOLUTIONS.
We are not in the business of taking pride in how cleverly and smartly architected our technology may be.
The demise of the dot-coms and the shake-up of the whole industry ought to make clear to all of us that customers don't need technologies, they need solutions.
And while there may be some customers out there who still buy into technology fairy tales from marketing departments of Microsoft, Sun and the like, I can assure you they will be more and more a rare commodity.
So if you really want to see 'which EJB server is fastest', you only have to ask the vendors to implement TPC-W (all of them also run Servlets+JSP).
Or implement TPC-C which is strictly transactional, and the generation of user interface doesn't matter.
Or create another benchmark orthogonal to any particular technology.
This way you can compare EJB vendors, AND MTS, AND any other exotic solution like a very good CORBA broker + an object database, or a CORBA broker + an O/R mapper, or just a CORBA broker + JDBC, or just APACHE + MOD_PERL + DATABASE.
But they avoided doing that, because it seems to me that they really have something to hide, and many of us really have something to laugh about.
TPC-C has been generally irrelevant for some time, as most vendors have found ways to exploit the benchmark itself vs. a real-world system (i.e., it's update-intensive, so it really isn't reflective of a read-mostly system; it's also biased towards shared-nothing architectures).
TPC-C also doesn't factor in the TCO of developing the solution, in terms of time to write it and maintenance costs over a 5-year period. This allows vendors to write highly optimized C code to get the desired numbers. That is not reflective of reality, where businesses are looking for time to market and low maintenance costs before they're looking for top-10 scalability numbers or a low $/tpmC based purely on hardware or systems software cost.
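The TCO point can be made concrete with some back-of-the-envelope arithmetic. Every number below is invented purely for illustration (they are not results from any vendor): the point is only that a raw $/tpm price/performance figure ignores development and maintenance cost, while a crude TCO-adjusted figure narrows the gap between a hand-tuned C solution and a slower but cheaper-to-build alternative.

```java
// Back-of-the-envelope price/performance arithmetic, hypothetical numbers only.
class PricePerf {
    // Raw TPC-style price/performance: system cost per transaction-per-minute.
    static double dollarsPerTpm(double tpm, double systemCost) {
        return systemCost / tpm;
    }

    // Crude 5-year TCO variant: fold in development cost and yearly maintenance.
    static double tcoDollarsPerTpm(double tpm, double systemCost,
                                   double devCost, double yearlyMaint) {
        return (systemCost + devCost + 5 * yearlyMaint) / tpm;
    }

    public static void main(String[] args) {
        double tpm = 100_000, hardware = 2_000_000;
        // Hand-tuned C + embedded SQL: fast, but expensive to build and keep.
        double cRaw = dollarsPerTpm(tpm, hardware);
        double cTco = tcoDollarsPerTpm(tpm, hardware, 1_500_000, 300_000);
        // Hypothetical higher-level solution at only 50% of the raw throughput,
        // but much cheaper to develop and maintain.
        double hTco = tcoDollarsPerTpm(0.5 * tpm, hardware, 400_000, 100_000);
        System.out.printf("raw C: $%.2f/tpm, TCO C: $%.2f/tpm, TCO alt: $%.2f/tpm%n",
                          cRaw, cTco, hTco);
    }
}
```

On the raw metric the slower solution looks twice as expensive ($40 vs. $20 per tpm); TCO-adjusted, the gap shrinks to $58 vs. $50, which is exactly the kind of comparison TPC-C never reports.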
Yet, TPC-C is still the industry standard for OLTP performance. It's not a relevant measurement, but it is *A* measurement.
ECPerf is not something to be laughed at anymore than how one would laugh at TPC-C - it's just another benchmark with a different form of measurement.
Costin, it strikes me as rather odd that you say we're in the business of "SOLVING PROBLEMS AND IMPLEMENTING SOLUTIONS" (sic), yet are exhibiting "hot-headed paranoia" about ECPerf. If we truly are in the business of being professional solutions providers, let's take the benchmarks out there for what they are without spreading FUD about a topic that clearly is important to the EJB community.
I'm certainly not trying to spread "FUD" to the EJB community, and I'm not exhibiting "hot-headed paranoia".
Being rude doesn't help you avoid the real arguments.
Laughter is certainly healthier than "FUD".
I think it's fair for me to criticize from a friendly position since I am a Java developer.
Rest assured, adversarial critics will be much tougher, and if EJB vendors walk down this path they will have a hard time fighting arguments from Microsoft and the like.
I'm not willing to discuss the technical merits of TPC-C or TPC-W or ECPerf, either.
It is a valid argument that while TPC benchmarks are orthogonal to whatever technology, ECPerf is built for EJB only.
That is, you're not going to have any response to Microsoft's arguments about how good they are in performance and scalability and the like.
So there is a valid question for which I doubt you have any valid answer:
Why couldn't Sun have created a technology independent benchmark so they could show how good the EJB technology really is ?
Even if you take the business problem described in ECPerf and strip out the requirements that you must use this or that EJB and so on, you will probably get a decent benchmark.
You can add measurements based on code complexity and other metrics with a sound foundation in theoretical papers, so vendors won't be able to easily implement it in C + embedded SQL, like they did for the current TPC benchmarks.
I would even venture to say that if an EJB vendor implemented TPC-W using Servlets + JSP + EJB and got results only 50% as good as the current results on comparable hardware and at comparable cost, that would be a definite net gain for the EJB vendor, once you factor in the complexity of developing and maintaining a C + embedded SQL solution (not to mention hiring developers who still know how to do it).
And the arguments can still go on, but I'm still waiting to see a valid argument for the EJB-only benchmark versus a technology independent benchmark.
Let me detail to you why I claimed you were spouting FUD:
- You state ECPerf is under NDA. CLEARLY, it is not. (Fear)
- You state "seems to me that they really have something to hide" (Uncertainty)
- You state 'it seems to me that they made themselves their little toy, to compete only against each other, like something strictly "for the connoisseurs"' (Doubt)
If you'd like to have rational discourse on the subject, I'd suggest dropping the conspiracy theories.
Now, on to the topic at hand...
"And the arguments can still go on, but I'm still waiting to see a valid argument for the EJB-only benchmark versus a technology independent benchmark."
The 'valid' argument is that an EJB-only benchmark exists to contrast EJB servers once one has already decided to use an EJB server. ECPerf does not obviate the need for a cross-technology benchmark. If one wants to evaluate EJB's performance in contrast to other solutions, they will have to do so with benchmarks like TPC-C or TPC-W, with the following caveats:
- TPC-W gives a lot of flexibility in how one can multiplex a database connection over the webserver. This will mean there will be performance variability depending on software design and the specific vendor's technology. It will be difficult to make this reflective of "EJB" as a whole.
- TPC-C is designed for update-intensive OLTP in a client/server environment. It focuses more on the underlying database technology and less on the TP monitor involved. As such, it suffers from similar problems to the above: how can one tell how "EJB" fares, when there's so much variability in how one can implement an EJB solution?
In short, while I think EJB servers *should* be benchmarked with TPC-C some day, I think it will be some time before we gain enough experience with TPC-C and W to be able to find the appropriate "sweet spot" EJB design that appropriately leverages those benchmarks to the extent that most vendors already do with their optimized C/Tuxedo code or C/COM+ code.
One could in the short term potentially take the same EJB design, on the same hardware configuration, and run TPC-C on various application servers, but this would require a tremendous amount of time, resources, and money, not to mention cross-vendor cooperation, which may be difficult to find -- as most vendors would prefer to submit their own results.
So, in short, we *DO* need benchmarks to contrast EJB with other technologies. However, ECPerf acknowledges that "pure performance" is not the #1 reason why someone will opt for an EJB solution, so it makes the assumption that once one chooses EJB for their own reasons, one will THEN want to know which EJB server is fastest under different configurations. This will promote competition in the EJB vendor community.
I agree we should drop the politics and come back to the subject.
About TPC-W: you said that by allowing a degree of flexibility in the choice and layout of the database (I would say the whole software stack), the benchmark would not be reflective of "EJB" as a whole.
Well, fair enough, but customers do need to evaluate the whole stack; it is largely irrelevant whether a particular piece of the stack (let's say the EJB server) is supposedly the best in town if the whole stack doesn't function well.
The same is true of all TPC benchmarks, because all of them evaluate at least the database + the OS + the hardware, and that's absolutely relevant and a positive thing.
And those benchmarks are supported both by hardware vendors and database vendors.
I think any effort to benchmark a single piece of the software puzzle that makes a solution for the customer is futile, and I'm still laughing.
- "TPC-C is designed for update-intensive OLTP in a client/server environment".
Isn't the whole point that three-tier and n-tier architectures are better (including scaling better and performing better) than client/server?
And last time I read the EJB specification, its intention was clear that EJBs address the OLTP type of applications.
I hardly find any reason why anyone would want to use EJBs for mostly static data, or do you ?
"In short, while I think EJB servers *should* be benchmarked with TPC-C some day"
Why not today? Do you have something to hide?
Isn't the whole EJB issue marketing overinflated and technically underengineered ?
If you do have complaints about the current TPC benchmarks, why don't you create a "TPC-for-Middleware" and let any other technology compete?
How long do we have to wait for the "sweet spot"?
"it makes the assumption that once one chooses EJB for their own reasons"
And that's where I have my legitimate DOUBTS, as any reasonable and serious Java developer should.
It is very strange at the least, looks cowardly, and raises lots of doubts that EJB vendors, instead of focusing on making the technology competitive, chose first to focus on their own little toy.
If the vendors put that much effort into designing and later implementing ECPerf, when should we expect them to be able to compete in the real arena?
So, I already explained my DOUBTS. Is this reasonable ?
I'll spell out clearly what EJB vendors have to hide, if you pretend you didn't understand.
They are not currently able to compete in terms of performance and scaling with competing technologies. They haven't found the sweet spot, as you put it.
They hide that from their regular customers, because any vendor will try to highlight its own advantages (Volvos are slower but safer).
However, almost all the EJB vendors (minus the open source community) pretend that EJB solutions perform better and scale better, which looks to me like a highly unlikely and unsustainable claim.
Even if I am willing to accept that EJB cannot perform on par with other technologies because it is more "high level", I have a justifiable need to know how big the difference is.
As I said before, I'm willing to accept even 50% as a very good result and a net win for EJB technology, but what if it is only 5%?
Who will tell me that if EJB vendors refuse to compete in the open and compete only against each other ?
As for the FEAR, I certainly don't fear anything, because I didn't do anything illegal; you definitely misinterpreted what I said. However, ECPerf is the confidential and proprietary information of Sun.
So doesn't "confidential" mean that we shouldn't discuss it publicly?
Definitely, the FUD interpretation was totally misguided, and I regret you did that on this friendly forum.
If you really want to know what FUD is, wait and hear what Microsoft will have to say if ECPerf goes out in public.
Do you think they are not going to comment on how good their technology is in the TPC benchmarks and how Sun and the EJB vendors refuse to stand up to the challenge?