News: Pramati Posts New ECperf Results on TheServerSide
Pramati's new ECperf results have been posted on TheServerSide. Running Pramati Application Server 3.0 on a Compaq DL-580 cluster with an Oracle 8.1.6 database, Pramati achieved a formidable performance figure of 14467.97 BBops/min@Std. Their price/performance metric came in at $22/BBops. A follow-up Q&A with Ramesh Loganathan of Pramati has been added.
- Posted by: Nitin Bharti
- Posted on: April 30 2002 14:51 EDT
Follow-Up Interview with Ramesh Loganathan of Pramati
Q: What are Pramati's primary reasons for conducting ECPerf tests on their app. server?
A: Pramati's objective in publishing ECperf results is to ensure that customers have access to replicable, reliable performance characteristics when selecting an application server for specific deployment scenarios. We are submitting a series of results that give customers a clear idea of the performance they can expect from Pramati Server in common real-world scenarios.
Q: How is your second set of results different from your earlier ECperf submission?
A: This submission is on our Enterprise Edition running on a 4 node cluster. The first submission was on our Standard Edition standalone server.
This submission demonstrates the linear scalability characteristics of Pramati Server, as the same class of hardware was used in both submissions. We have demonstrated scalability of about 90%, clearly showing that your application can scale near-linearly with hardware resources. This, while still ensuring absolute cluster-safe concurrency control and no data-stomping (Pramati Server locks entity bean instances that are updated, to ensure that no two transactions can modify the same bean at the same time and overwrite one another's changes. This locking generally comes at a price, but using our high-performance locking strategies, that cost has been nearly eliminated).
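Pramati has not published its locking internals, so as a rough, hypothetical sketch of the idea described above (a lock per entity instance, so that no two transactions can overwrite each other's changes), consider the following; all class and field names are invented for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: one lock per entity identity serializes concurrent updates,
// so a read-modify-write cannot be interleaved with (and stomped on by) another.
// This is not Pramati's actual implementation.
public class EntityLockDemo {
    // one lock per primary key; a real container would also evict and time these out
    static final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();
    static int inventoryCount = 0; // stands in for an entity bean's persistent field

    static void addStock(String pk, int qty) {
        ReentrantLock lock = locks.computeIfAbsent(pk, k -> new ReentrantLock());
        lock.lock();                          // block other "transactions" on this entity
        try {
            int current = inventoryCount;     // read
            inventoryCount = current + qty;   // write -- no lost update possible
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable tx = () -> { for (int i = 0; i < 1000; i++) addStock("item-1", 1); };
        Thread a = new Thread(tx), b = new Thread(tx);
        a.start(); b.start(); a.join(); b.join();
        System.out.println("count = " + inventoryCount); // all 2000 updates survive
    }
}
```

With the lock removed, the two threads' read-modify-write sequences can interleave and the final count comes up short: exactly the data-stomping the answer refers to.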
Q: In your most recent results, you fell short about 700 BBops from BEA's February 19th posting and were $2/BBop more expensive than their $20/BBop metric. What were the factors that accounted for these differences?
A: The performance difference between BEA's submission (14,700) and Pramati's (14,000) is under 5%. Regarding the cost difference, the 8-CPU 700MHz server that we used to run the DB tier is roughly as powerful as three 2.2GHz P4 Xeon CPUs. The higher number of CPUs has an adverse impact on pricing, as app server and DB licenses and support are priced per CPU by most vendors. Moreover, once you cross 4 CPUs, the DB Enterprise pricing kicks in, which is about three times the Standard Edition price.
In spite of this, if you look at the app server layer costs alone for these submissions, we were still at a fraction of BEA's cost.
Q: There have been several references to data-stomping. How is your submission different from other submissions in this respect?
A: ECperf submissions to date, other than Pramati's, have been made without any concurrency control in place. Had the other vendors chosen to run with concurrency control, the options available are very expensive. Most vendors rely on DB isolation levels to effect concurrency control. This can be very expensive, as all the rows accessed, even in finders or non-transactional methods, will be locked, hurting concurrency and throughput. Some vendors do have pessimistic locking solutions, but those work only in non-clustered configurations with an exclusive database (not in a cluster, nor when the database is not exclusive, as required in ECperf).
In Pramati's ECperf submission, most beans were deployed in Pessimistic Concurrency mode and the read-intensive beans were deployed in Optimistic Concurrency mode (even for the read-intensive beans, Committed Read was not used, as ECperf 1.0 doesn't assume read-intensive beans; in any case, our Optimistic Concurrency implementation performs equally well).
Further, Pramati's concurrency control implementation is completely cluster-safe: the same semantics are available in both clustered and standalone configurations. This is provided via a cluster-safe, high-performance Distributed Lock Manager.
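The optimistic mode mentioned here is likewise not spelled out in the submission; a minimal, hypothetical illustration of the general technique (validate at commit that the data hasn't changed since it was read, and retry on conflict) might look like the following, with all names invented for illustration:

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch of optimistic concurrency: each commit checks that the version
// it read is still current; if another transaction committed first, it retries.
public class OptimisticDemo {
    static final class Row {
        final int version, balance;
        Row(int version, int balance) { this.version = version; this.balance = balance; }
    }
    static final AtomicReference<Row> row = new AtomicReference<>(new Row(0, 100));

    static boolean tryDeposit(int amount) {
        Row before = row.get();                          // snapshot at transaction start
        Row after = new Row(before.version + 1, before.balance + amount);
        return row.compareAndSet(before, after);         // commit only if still unchanged
    }

    static void deposit(int amount) {
        while (!tryDeposit(amount)) { /* conflict detected: retry with a fresh read */ }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable tx = () -> { for (int i = 0; i < 1000; i++) deposit(1); };
        Thread a = new Thread(tx), b = new Thread(tx);
        a.start(); b.start(); a.join(); b.join();
        System.out.println("balance = " + row.get().balance); // 100 + 2000 = 2100
    }
}
```

No lock is ever held, which is why this mode suits read-intensive beans: readers pay nothing, and writers pay only when they actually collide.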
Q: Why did Pramati not submit results with concurrency turned off?
A: We resisted this temptation. The performance would have been higher, but the configuration is unrealistic, as no customer would want mission-critical data (such as inventory counts) being overwritten or corrupted. Both our submissions are with full concurrency control in place, despite the performance overhead that such a configuration entails.
Check out Pramati's Latest ECperf Results
- Pramati Posts New ECperf Results on TheServerSide by Gary Shultzz on May 02 2002 01:58 EDT
- Pramati Posts New ECperf Results on TheServerSide by Gary Shultzz on May 02 2002 14:48 EDT
- Pramati Posts New ECperf Results on TheServerSide by ramesh loganathan on May 03 2002 00:09 EDT
- Pramati Posts New ECperf Results on TheServerSide by Brian Smith on May 03 2002 06:33 EDT
- Pramati Posts New ECperf Results on TheServerSide by Cameron Purdy on May 03 2002 10:08 EDT
- Pramati Posts New ECperf Results on TheServerSide by Gary Shultzz on May 03 2002 11:46 EDT
- Pramati Posts New ECperf Results on TheServerSide by ramesh loganathan on May 03 2002 11:56 EDT
- Pramati Posts New ECperf Results on TheServerSide by Raj Vindhyan on May 03 2002 12:18 EDT
- Pramati Posts New ECperf Results on TheServerSide by Brian Smith on May 04 2002 11:43 EDT
- Pramati Posts New ECperf Results on TheServerSide by Gary Shultzz on May 05 2002 01:09 EDT
- Pramati Posts New ECperf Results on TheServerSide by Brian Smith on May 05 2002 03:05 EDT
- Pramati Posts New ECperf Results on TheServerSide by ramesh loganathan on May 05 2002 11:05 EDT
This is interesting. I would like to hear the detailed explanation from Pramati. I would also like to see them publish a number in Read Committed mode so as to show the cost of this. It is OK if it doesn't match the IBM/BEA numbers, so long as it shows relative parity, 80%-ish.
The other side of the coin is that it would be great to see good data consistency numbers from the other vendors...which of course may not happen without motivation.
I interviewed Ramesh Loganathan, Director-Engineering and Pramati's representative for the ECperf Expert Group. The interview has been added to the news post.
First, a disclaimer: I am a bit under the weather, but from a superficial look at it I have some issues with the Q&A:
1. HP's 2/19 posting (similar to IBM's 3/11 posting) was superseded by their 2/25 posting, which is a fairer comparison. In that test, two 4-CPU machines (albeit 900MHz vs. 700MHz for Pramati) were used on the middle tier. This means that Pramati used approximately 70% (more fairly 60%, as there was less cache) more CPU power than WebLogic. The DB is the same story: 2x the CPUs, but 700MHz with 1MB cache instead of 900MHz with 2MB cache, about 1.6x the power.
In all fairness, the tests are not identical. I realize that, but don't say "we are doing harder work" (true) "with essentially identical HW" (untrue). Likewise for the $4 difference in price.
2. BTW, why does the DB require 8 CPUs? There may be a good answer, but the DB should not be doing much more work than with the HP results; if anything, the middle tier should have more serialization and latency.
3. I always have an issue with pricing comparisons here, so this is not a "Pramati" issue. Simply said, pricing comparisons at this level are only valuable if they are significant and everything is being considered. There is much more to product value than BBOps supported.
4. I am on your side in pressuring the big boys to publish a data-safe result. However, to build your case you should both do the right thing and play the game they are playing. It is obvious the results will not be as good, but who cares. If Pramati can handle the "data-safe" case better and compete well on pricing, that is good enough.
About the hardware comparison: I was just responding to a question from Nitin comparing the two results. I fully understand that the hardware used is not 'identical' (and in my defence, I never said or hinted at this ;-)
All I said was that, being slower hardware, we end up using more software licenses for a similar level of work done, and thus show a higher cost!
About needing an 8-CPU DB: given that a 900MHz P3 Xeon is about 1.7x a 700MHz P3 Xeon, we would need seven 700MHz CPUs to do the work of the four 900MHz CPUs used in HP-BEA's run. We used 8 CPUs, and we needed a little more DB power to take care of locking.
We will be making more submissions in the coming weeks and months. Our approach to submissions is to focus on real-world, practical configurations (from using hardware that is a likely target for initial J2EE deployments through a realistic concurrency setup), rather than getting into a numbers race.
I am sure you have your policies about not getting into a numbers race, and no one is forcing you into it. The question that I, as a potential customer, have is from a different point of view.
As a possible customer, I would like to see results IN COMPARISON with the big boys. Otherwise it's quite a tough decision to bank on Pramati. If, after deciding on Pramati based on these results, a problem arises, then there is always the thought in the back of the mind that I should have used one of the big boys. It's a risk that I would be unwilling to take. However, as Gary put it, if I find that you are even 70% of a WebSphere or WebLogic under similar H/W and DB conditions, THEN the cost factor comes into the picture and I will in all likelihood go for Pramati.
I don't see your point. If you configure your application like Pramati, you will have a safe, working application. If you configure your application like IBM and BEA, you will have an unsafe, not-working application.
Pramati is sacrificing impressive numbers to show realistic results. The other vendors are sacrificing realistic results for impressive numbers. It is up to IBM, BEA, and the other vendors to prove that they perform well when concurrency controls are in place. It is not the responsibility of Pramati to try to match their flawed test results.
Is anybody here developing applications without any concurrency control?
(I'm not a Pramati customer or employee; I have trouble even spelling the name ;)
Brian: "Is anybody here developing applications without any concurrency control?"
Yes, many apps don't have any concurrency control or that "feature" has never been tested. I've been asked at least 100 times, "Doesn't the database take care of that for me?"
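For anyone who still believes the database does take care of it: here is a minimal, hypothetical sketch of the lost-update problem, with plain shared state standing in for a row that two uncoordinated "transactions" both read-modify-write.

```java
// Hypothetical sketch: two "transactions" doing read-modify-write on the same row
// with no concurrency control. Nothing prevents one update from overwriting another.
public class LostUpdateDemo {
    static int stock = 0; // stands in for a database row both threads update

    // read-modify-write with no lock: another thread can interleave between read and write
    static void addOne() {
        int read = stock;       // "SELECT qty ..."
        Thread.yield();         // widen the race window for the demo
        stock = read + 1;       // "UPDATE ... SET qty = ?" -- may stomp a concurrent update
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable tx = () -> { for (int i = 0; i < 10_000; i++) addOne(); };
        Thread a = new Thread(tx), b = new Thread(tx);
        a.start(); b.start(); a.join(); b.join();
        // 20,000 updates were issued; the final count is usually lower: updates were lost
        System.out.println("expected 20000, got " + stock);
    }
}
```

The count typically falls short of 20,000, and nothing reports an error: the "feature" fails silently, which is exactly why it so often goes untested.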
As far as Pramati's implementation, I would like to see it in action. It's not an easy task to implement clustered concurrency control, so if they've done a good job, it should help give them a boost with the next version of ECperf. The problem is that unless they handle failover (including concurrency control) instantly and gracefully, and actually scale well in the cluster, then managers will feel better off relying on the database, since it is already a "trusted" mechanism (whether or not it works correctly).
Exactly what I have run into. Raj clarified my point well. Comparing with IBM/BEA will not look bad if you are competing on cheaper solution and improved data consistency. Since it is doubtful that they will run fully concurrent numbers until required, I think Pramati would benefit by comparing with them in the game as defined now in addition to the "data consistent" view. I would be very interested in seeing the numbers, with an open point of view. Thanks, GS
I'm just surprised that you are calling on Pramati to change their (correct) testing methodology to match their competitors' (flawed) testing methodology. If the other vendors wait until ECperf 1.1 _forces_ them to correct their results, what does that really say about them? These are the same companies that say they can't submit TPC-C results because the TPC-C test does not reflect real-world performance.
But, in my opinion, running without concurrency control is about as much of a "benchmark special" for ECperf as Microsoft's configuration is for TPC-C. You can actually use IBM's EJB optimizer in your deployment, and you can actually use BEA's T3 protocol in your deployment (if you don't use CORBA). But for most applications you _cannot_ run without concurrency control.
That said, we should have respect for those companies that post results at all. IBM, BEA, and Borland are operating according to the rules as written. But I think that all the vendors should join Pramati in playing by a better (stricter) set of rules.
"I'm just surprised that you are calling on Pramati to change their (correct) testing methodology to match their competitors' (flawed) testing methodology."
If by "change" you mean stop focusing on data consistency, which is important, no, I think we agree. In about 10 posts on this site I have mentioned the things Pramati is doing with data consistency in a positive tone. I think we agree in spirit. Here is a better explanation. The work is half done. If Pramati does not post the "most relaxed" test that fits within the rules, we will all know this:
A. Pramati had better data consistency.
B. Better data consistency is more expensive (most of us already knew this).
But this will be painfully missing:
C. Pramati can relatively cheaply support good data consistency.
I was suggesting not "changing" but supplying the rest of the necessary data. It can't lose: either the number is close to BEA/IBM's, in which case we can say, "Wow, Pramati is cheaper and almost as fast," or the result will be closer to the "fully consistent" result, in which case we can say, "Wow, it is relatively cheap for Pramati to support good data consistency."
"If the other vendors wait until ECPERF 1.1 _forces_ them to correct their results, what does that really say about them?"
It says they understand marketing for benchmarks well.
"But, in my opinion, running without concurrency control is about as much of a "benchmark special" for ECPERF as Microsoft's configuration is for TPC-C."
Again, I agree with you in spirit, but the responsibility here lies mostly with the ECperf folks, and they are fixing this. IBM/BEA must take full advantage of the rules that are set, or they look relatively poor compared to each other, which is 90% of what they are concerned about. In agreement, there are many outrageous TPC-C configurations. I applaud the ECperf team for keeping tighter reins on the benchmark. I hope it continues in this vein.
Thanks for your reply. I think I understand your point of view.
> But this will be painfully missing:
> C. Pramati can relatively cheaply support good data consistency.
But, what value is this information? Either you need consistency or you don't. If you need consistency, it doesn't matter how expensive it is relative to the inconsistent case.
For example, let's say Pramati's cost of consistency is 50%. What this would mean is that your application would run twice as fast on Pramati if you ignore any data consistency problems. Otherwise, you would only learn something like "my application will give incorrect results on WebLogic at $XXX/BBops, incorrect results on Pramati at $YYY/BBops, and correct results on Pramati at 1.5*$YYY/BBops".
So, which appserver gives you faster/cheaper _correct_ results? You can't deduce this information, without knowing the relative cost of consistency for IBM/BEA. In order to know that, IBM/BEA still have to publish a result with consistency enabled. But, if they produce such results, then what would be the point of Pramati producing data-stomping results in the first place? You wouldn't need to know the relative cost of consistency since you can compare the data-consistent results directly. So, publishing "data-stomping-enabled" results seems like a lose-lose proposition for Pramati.
Plus, maybe Pramati's appserver is not optimized or designed for data-stomping in the first place? Should that count against them?
I was suggesting, not "changing", but supplying the rest of the necessary data. It can't lose: either the number is close to BEA/IBM..
If you look closely, the price IS comparable even with data-consistency in place:
We used 700MHz Xeons (as we wanted to first target hardware that is currently in use). Newer-generation hardware is much more powerful, much cheaper, and, most importantly, needs fewer licenses (DB & app server). Given the cost of the DB license (beyond 4 CPUs), running on a 4 (or fewer) CPU config would have brought our price down by 33% (to ~$14/BBOP).
> Given the cost of DB license (beyond 4 CPUs), running on a 4 (or less) CPU config will have brought our price down by 33% (to ~$14/BBOP).
Ramesh, are you planning to publish a result based on this configuration?
> Ramesh, are you planning to publish a result based on this configuration?
Yes Brian. We will be submitting results on newer CPUs now.
The problem is that unless they handle failover (including concurrency control) instantly and gracefully, and actually scale well in the cluster, then managers will feel better off relying on the database, since it is already a "trusted" mechanism (whether or not it works correctly).
Databases handle concurrency control effectively only when the app codes the SELECT ... FOR UPDATE locking itself, so this will not be transparent or declarative (as it would be with the app server managing it for CMP). When it comes to failover, databases don't provide any support; it has to be coded for in the middle tier (the app server or the app itself).
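The hand-coded pattern being described might look like the following JDBC sketch. Table, column, and method names are invented for illustration, and it assumes a Connection to a database that supports SELECT ... FOR UPDATE:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical sketch: the application must ask for the row lock itself by issuing
// SELECT ... FOR UPDATE inside its own transaction; nothing here is declarative.
public class ForUpdateSketch {
    static final String LOCK_QUERY  = "SELECT qty FROM inventory WHERE item_id = ? FOR UPDATE";
    static final String UPDATE_STMT = "UPDATE inventory SET qty = ? WHERE item_id = ?";

    // Decrement stock inside one transaction; the row lock holds until commit/rollback.
    static void decrement(Connection con, String itemId, int by) throws SQLException {
        con.setAutoCommit(false);
        try {
            int qty;
            try (PreparedStatement lock = con.prepareStatement(LOCK_QUERY)) {
                lock.setString(1, itemId);
                try (ResultSet rs = lock.executeQuery()) {   // acquires the row lock
                    if (!rs.next()) throw new SQLException("no such item: " + itemId);
                    qty = rs.getInt(1);
                }
            }
            try (PreparedStatement upd = con.prepareStatement(UPDATE_STMT)) {
                upd.setInt(1, qty - by);
                upd.setString(2, itemId);
                upd.executeUpdate();
            }
            con.commit();                                    // releases the row lock
        } catch (SQLException e) {
            con.rollback();                                  // also releases the row lock
            throw e;
        }
    }
}
```

An app server managing CMP can issue the equivalent locking on the bean's behalf, which is the transparency point made above.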
Pramati's EJB failover implementation is one of the best among the app servers (we have had this for well over a year now). It provides absolutely transparent (to the app, app developer, and deployer) clustering and failover. All bean types can be failed over (mid-operation), with the bean instance loaded on the failover node and the operation executed, all while the EJB client (web component or Java program) is blissfully unaware of the happenings under the covers.
Now, coming to the options for failover: if an app server is not used, then the app developer has to code for it. If the clustering is provided by the app server and the application manages the failover, then most likely this will be non-portable code (as the logic to connect to a failover node, when managed by the app, will be vendor-specific). But if the clustering is also managed by the application (meaning the application is aware of the nodes directly), then this will probably be a portable solution.