New SPECjAppServer2001 Results from Borland and Sybase

Discussions

News: New SPECjAppServer2001 Results from Borland and Sybase

  1. Borland and Sybase have submitted new performance results to the SPECjAppServer2001 benchmark (formerly ECperf). Borland achieved 112.33 business operations per second (BOPS) at $500.36/BOPS in the single node category. Sybase submitted to the dual node category with 202.12 BOPS at $332.73/BOPS (the best price/performance figure so far).

    View the SPECjAppServer2001 results.

    Threaded Messages (24)

  2. The interesting things about the Borland submission are:

    - Simple systems can perform. With just 2 CPUs (not 32!) and one process to manage, you can run a website (or other EJB-heavy J2EE app) and get fast response under a load of 65*8 ~ 500 concurrent users. Performance or scalability is not necessarily *the* reason to invest in a multi-node or mainframe setup, provided your capacity goals are modest. (Of course there are other valid reasons to want that, e.g. availability.)

    - It's the first ECperf/SPECj submission that uses an embedded, all-Java database. Part of our motivation was to show off the capabilities and stability under load of an embedded RDBMS. All previous submissions used "traditional" DBs.

    - Efficiency: given a standard Dell box with 2 CPUs and running one VM with everything "server side", i.e. EJBs and DB, how much performance do we squeeze out of the box? We wanted to investigate the efficiency of the software stack, and the numbers came out pretty good at 112 BOPS. Compare with some other results on two Dells where, given almost twice the CPU resources, the BOPS varied anywhere between 57 and 202, i.e. from half to less than twice ours.

    - Lastly, there was an internal engineering motivation that may be interesting to other Java developers. What has always frustrated us here while profiling the AppServer code is the terrible memory and CPU characteristics of the common JDBC drivers. At times the AppServer code would barely show up in OptimizeIt profiles because (as an example only) the Oracle thin driver allocates a zillion byte arrays and its networking layer is always holding up the thread. With this SPECj setup we cut all of that out and got to see and resolve real bottlenecks in the containers and core database. That was a leap for the product.

    - Jishnu
    Borland Enterprise Server Team
  3. <quote>
    What has always frustrated us here while profiling the AppServer code is the terrible memory and CPU characteristics of the common JDBC drivers. At times the AppServer code would barely show up in OptimizeIt profiles because (as an example only) the Oracle thin driver allocates a zillion byte arrays and its networking layer is always holding up the thread.
    </quote>

    Fascinating. Jishnu, do you have any data on driver inefficiencies that you could share with us?
    Well, regarding JDBC drivers, the point I wanted to make was not so much how inefficient they are (there may be legitimate reasons for what they do, I don't know), but the fact that they show up as heavy, especially in memory allocations. Just run a profiler on some typical apps.

    For instance, without really picking on the Oracle thin driver (which is among the best of the lot), here is a typical OptimizeIt output when 100 transactions touch an entity with 2 primitive fields, leading to 100 loads and stores:
    Allocation backtraces for class byte[]
    Backtrace of code allocating byte[]
     206918 instances of byte[] allocated since last mark.
         30.07% oracle.net.ns.NetOutputStream.write()
         21.02% oracle.net.ns.NetInputStream.read()
         13.77% oracle.jdbc.ttc7.MAREngine.marshalSB4() (starting in MAREngine.java:263)
         8.52% oracle.jdbc.ttc7.MAREngine.unmarshalUB2() (starting in MAREngine.java:784)
         4.45% oracle.jdbc.ttc7.MAREngine.buffer2Value() (starting in MAREngine.java:1929)
         2.93% oracle.jdbc.ttc7.MAREngine.unmarshalUB4() (starting in MAREngine.java:845)
         2.08% oracle.jdbc.ttc7.TTIoer.init() (starting in TTIoer.java:97)
         2.03% oracle.sql.LnxLibThin.lnxmin() (starting in LnxLibThin.java:632)
         1.93% com.inprise.ejb.jts.Transaction.generateOtid() (starting in Transaction.java:871)
         1.52% oracle.net.ns.Packet.receive()
         1.52% oracle.net.ns.Packet.createBuffer()
         1.43% oracle.jdbc.dbaccess.DBDataSetImpl.getBytesItem() (starting in DBDataSetImpl.java:1029)
         1.42% oracle.sql.NUMBER._fromLnxFmt() (starting in NUMBER.java:2948)
         0.96% oracle.jdbc.ttc7.MAREngine.unmarshalCLRforREFS() (starting in MAREngine.java:1594)
         0.71% com.inprise.vbroker.orb.FastOutputStream.toByteArray() (starting in FastOutputStream.java:148)

    The total count is an order of magnitude above the next most-allocated class of instances during these 100 transactions, so we see the VM triggering minor GC sweeps all the time.
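
    For readers who want to reproduce this kind of measurement, here is a rough, hypothetical reconstruction of the workload being profiled; the connection URL, table, and column names are invented for illustration, and the container's generated SQL will differ:

    <PRE>
    // Hedged sketch: 100 transactions, each loading and storing an entity
    // with two primitive fields over raw JDBC. Illustrative names only.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class LoadStoreProbe {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521:ORCL", "user", "password")) {
                con.setAutoCommit(false);
                for (int i = 1; i <= 100; i++) {
                    // roughly what ejbLoad() does: read the two primitive fields
                    try (PreparedStatement load = con.prepareStatement(
                             "SELECT qty, total FROM order_line WHERE ol_id = ?")) {
                        load.setInt(1, i);
                        try (ResultSet rs = load.executeQuery()) {
                            rs.next();
                        }
                    }
                    // roughly what ejbStore() does: write them back
                    try (PreparedStatement store = con.prepareStatement(
                             "UPDATE order_line SET qty = ?, total = ? WHERE ol_id = ?")) {
                        store.setInt(1, i);
                        store.setDouble(2, 0.0);
                        store.setInt(3, i);
                        store.executeUpdate();
                    }
                    con.commit();
                }
            }
        }
    }
    </PRE>

    Running something like this under OptimizeIt (or simply with -verbose:gc) makes the driver-side allocation churn easy to see.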

    Again, I suppose we should not dwell on this topic too much, as the real focus of the thread is to discuss the Sybase and Borland submissions.
  5. The points about JDBC driver performance are well made. In our benchmarking we have also seen wild variations in the time and space costs of various drivers.

    However, is that really what we should be worried about? Marc Fleury's paper "Why I Love EJBs" makes the strong point that real performance is going to come from some sort of caching.

    In Marc's case he's talking about caching EJBs, which is done to a greater or lesser extent by most application servers. Another alternative is caching just above the JDBC driver (as in Isocra's livestore product). Both approaches reduce the potentially deleterious effects of the JDBC driver (and the database itself) on a business transaction, but livestore has the additional benefit of being able to operate transparently in a clustered environment. It can also provide caching benefits to raw JDBC access from session beans, BMP entity beans, servlets, etc.
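
    To make the "cache just above the JDBC driver" idea concrete, here is a minimal read-through sketch. It is not livestore's actual API; the class, table, and column names are invented for illustration, and it ignores clustering and transactional invalidation entirely:

    <PRE>
    // Hedged sketch of a read-through cache sitting just above JDBC.
    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReadThroughCache {
        private final DataSource ds;
        private final Map<Integer, String> byPk = new ConcurrentHashMap<Integer, String>();

        public ReadThroughCache(DataSource ds) { this.ds = ds; }

        /** Return the cached row if present, otherwise load it once from the database. */
        public String findCustomerName(int customerId) throws Exception {
            String cached = byPk.get(customerId);
            if (cached != null) {
                return cached;                      // no network hop at all
            }
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT name FROM customer WHERE customer_id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    String name = rs.next() ? rs.getString(1) : null;
                    if (name != null) {
                        byPk.put(customerId, name); // populate the cache on first read
                    }
                    return name;
                }
            }
        }

        /** Write through to the database and keep the cache entry current. */
        public void updateCustomerName(int customerId, String name) throws Exception {
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement(
                     "UPDATE customer SET name = ? WHERE customer_id = ?")) {
                ps.setString(1, name);
                ps.setInt(2, customerId);
                ps.executeUpdate();
            }
            byPk.put(customerId, name);
        }
    }
    </PRE>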

    Plug ends :)

    Tim Hoverd
    Isocra - livestore - the transparent database caching solution
    I fully agree with you about caching; however, I would like to mention that:

    (1) Usually it is best to fix the root cause rather than hiding it when it comes to fixing a bottleneck. So, if Oracle addresses it, then we will all benefit :) when/if we have to make an RPC to the database.

    (2) SPECjAppServer's transaction mix is very update intensive, i.e. usually you can't avoid an ejbStore() all the way to the database. The following is a snippet from the Borland submission (but it is a generic requirement):

    TYPE          TX. COUNT   MIX      REQD. MIX (5% deviation allowed)
    ------------  ---------   ------   --------------------------------
    NewOrder:     58464       50.14%   50%   PASSED
    ChangeOrder:  23300       19.98%   20%   PASSED
    OrderStatus:  23201       19.90%   20%   PASSED
    CustStatus:   11627        9.97%   10%   PASSED
    Mix Requirement PASSED


    As you can see, 70% of the transaction mix (NewOrder + ChangeOrder) is update oriented, and per the durability rules those updates have to be written to disk before the transaction commit completes.


    It would be very interesting to learn about the benefit of using your product if you happen to do any experimentation with this benchmark.

    Regards,

    Rafay
  7. Rafay,

    | Usually it is best to fix the root cause rather
    | than hiding it when it comes to
    | fixing a bottleneck.

    That's obviously true. However, one root cause that never goes away is the network hop, and this is where a lot of livestore's benefit comes from. Although JDBC drivers do vary, the main problem is the need to marshal the call, send it to the database, unmarshal it, marshal the results, send them back to the client, and then unmarshal them again.

    | SPECjAppServer's transaction mix is
    | very update intensive

    Indeed, although a lot of real systems have far more reads in them. However, we've found that livestore can provide significant performance improvements at read-write ratios down to about 10% reads, although obviously it depends on the precise details of the scenario. Simply put, this is because the advantage on reads is so high that you need a surprisingly large number of writes to counteract it.
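
    A back-of-the-envelope illustration of that break-even point (all latency numbers below are invented for illustration; they are not livestore measurements):

    <PRE>
    // Hedged sketch: why caching can still pay off at only 10% reads.
    public class CacheBreakEven {
        public static void main(String[] args) {
            double uncachedReadMs  = 5.0;   // assumed round trip to the database
            double cachedReadMs    = 0.05;  // assumed in-process cache hit
            double writeOverheadMs = 0.5;   // assumed extra cost per write for cache upkeep

            // Out of every 100 operations, only 10 are reads:
            double savedOnReads = 10 * (uncachedReadMs - cachedReadMs);  // 49.5 ms saved
            double paidOnWrites = 90 * writeOverheadMs;                  // 45.0 ms extra
            System.out.printf("net gain per 100 ops: %.1f ms%n",
                              savedOnReads - paidOnWrites);              // still positive
        }
    }
    </PRE>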

    | It would be very interesting to learn
    | about the benifit of using your product
    | if you happen to do any experimentation
    | with this benchmark.

    We do have experience of using the ECPerf benchmark, although I believe that we're no longer allowed to reveal the figures. :-( We've produced some benchmark code ourselves and we're in the process of writing things up. If you email me off-list (tim dot hoverd at isocra dot com)
    then I'll make sure you get a copy when it's done.

    Regards,

    Tim Hoverd
    Isocra
  8. Tim,

    I think we are in agreement in terms of the benefits of caching :)

    I just wanted to make the point that some systems out there can do TCP/IP (socket) work more efficiently than others. It may be interesting to mention that Borland Enterprise Server sits on top of VisiBroker for Java (VBJ), and usually VBJ needs to do a lot more marshalling and unmarshalling than a JDBC driver has to, yet it doesn't show up as the major resource/CPU contributor when we have JDBC driver(s) in the same VM. As you can see in the trace that Jishnu posted, VBJ's memory consumption is nowhere close to that of the other components.

    Regards,

    Rafay
  9. Jishnu: "What has always frustrated us here while profiling the AppServer code is the terrible memory and CPU characteristics of the common JDBC drivers...."

    Sandeep: "Fascinating. Jishnu, do you have any data on driver inefficiencies that you could share with us?"

    We saw the same thing. For example, according to the profiler, the CPU was being largely used to parse SQL on the client side by the *r**** driver. When we cached prepared statements, the JDBC operations improved (latency as measured on the app server) by 30%.
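
    In case it helps, here is a minimal sketch of the kind of per-connection prepared statement cache being described; it is not Tangosol's or any driver's actual implementation, and the class name is invented:

    <PRE>
    // Hedged sketch: reuse PreparedStatements so the SQL is parsed only once.
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.HashMap;
    import java.util.Map;

    public class StatementCache {
        private final Connection con;
        private final Map<String, PreparedStatement> cache =
            new HashMap<String, PreparedStatement>();

        public StatementCache(Connection con) {
            this.con = con;
        }

        /** Return an already-prepared statement instead of re-parsing the SQL each call. */
        public PreparedStatement prepare(String sql) throws SQLException {
            PreparedStatement ps = cache.get(sql);
            if (ps == null) {
                ps = con.prepareStatement(sql);   // parsed once, reused afterwards
                cache.put(sql, ps);
            }
            return ps;
        }

        /** Close the cached statements when the connection is returned to the pool. */
        public void close() throws SQLException {
            for (PreparedStatement ps : cache.values()) {
                ps.close();
            }
            cache.clear();
        }
    }
    </PRE>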

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  10. That's sad :(

    <bias - I work for MySQL>

    Just for fun, I profiled MySQL Connector/J doing the exact same thing in JBoss, and compared to the 188,000-or-so byte[] allocations for the *r**** JDBC driver that Jishnu writes about, Connector/J only allocates ~ 1900.

    The total # of any objects allocated in MySQL Connector/J during 100 load/store cycles doesn't even come close to approaching 10,000.

    Now, I'll admit, MySQL isn't as robust as Oracle (yet, give us time ;) ), but I know you can run ECPerf on it (I have), and you can deploy EJB apps on it (I and many others have), and the JDBC drivers are just as robust (look at the EWeek benchmark, Oracle and MySQL are the only two databases/drivers that ran without errors under load).

    Maybe the Oracle JDBC developers need to go back under the hood with JDBC?
    </bias>
  11. Jishnu's observations are well put. I have a few more points to add, from the perspective of the JDataStore database development team.

    To start off with, it was extremely beneficial to have the Borland Enterprise Server and JDataStore development teams collaborating to ensure both of our products can hold up under the heavy transaction loads of enterprise applications.

    Although this is technically an "application server" benchmark, you need both a fast application server and a fast database to get good results. Note that 112 tps is also 112 "database" tps.

    Here are some more observations relating to our work on this benchmark:

    - Most submissions use established enterprise databases. While these are known quantities, they are typically expensive and complex to manage, deploy and maintain. In contrast, JDataStore was "embedded" into the application server process by just creating a data source and adding a single jar to the classpath. We didn't even change the vanilla DDL provided to create the four databases used in the benchmark. Since JDataStore is all Java, the application server and database engine executed in the same process. This is a significant advantage over native database JDBC drivers, which typically use TCP/IP to communicate with a database server in a separate process. Note that EJB containers typically carry on very chatty, "short duration" interactions with the database; this is also typical of OLTP benchmarks like TPC-C. In these scenarios the performance of your API layer is critical, and an in-process database has a significant advantage. JDataStore also has a high-performance TCP/IP-based JDBC driver for those applications that need to access the database from one or more external processes. (A rough sketch of what the in-process wiring looks like appears after this list.)

    - JDataStore is a significant "showcase" technology for the Java platform. JDataStore is living proof that a complex Java software system can perform as well as or better than many native C/C++ code bases performing the same functions.

    - I'm skeptical that any native database could have provided better performance using the same hardware that we did. The modest RAM and JDBC API remoting over TCP/IP would be limiting for a heavy enterprise database. The work that the application server must perform is also very CPU and memory intensive. Notice that most dual node submissions use an application server machine with roughly twice the compute power of the database machine.
     
    - JDK 1.4.1 made a significant improvement in our performance runs; I'd estimate about a 20-25% improvement. This is especially impressive because the improvement came from reduced "in-memory" computation in a benchmark that is significantly impacted by the "log file" IO needed to durably commit all of those 112 tps.
     
    - The Borland Enterprise Server team used JDataStore to tune the application server for other databases. Its low overhead and ease of use let them focus on application server performance issues instead of being swamped by the complexity and overhead of a native database JDBC driver.
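
    For readers who have not used an in-process database before, the following sketch shows roughly what "embedding" looks like from application code. The driver URL and schema are placeholders, not JDataStore's actual configuration:

    <PRE>
    // Hedged sketch: an all-Java database running inside the same VM.
    // With the database's single jar on the classpath, getting a connection is an
    // in-process call: no TCP/IP hop and no separate server process to manage.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class EmbeddedDbExample {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection(
                     "jdbc:embeddeddb:/data/specj/orders.db", "user", "password");
                 Statement stmt = con.createStatement()) {
                stmt.executeUpdate(
                    "CREATE TABLE orders (order_id INT PRIMARY KEY, status VARCHAR(20))");
            }
        }
    }
    </PRE>

    In an application server, the same thing is typically exposed as a DataSource so that the EJB container does not care whether the database is in-process or remote.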

    Steven Shaughnessy
    JDataStore development team
  12. Jishnu,

    You wrote:

    "Compare with some other results on two Dells where given
    almost twice the CPU resources the BOPS varied anywhere
    between 57 and 202 i.e. half to less than twice."

    Interesting comparison, but comparison of results across
    categories is not permitted under the fair use rules, for
    good reason. The other results you are suggesting to
    compare with did have to deal with the very real
    issues of network communication with the database and
    CMP/JDBC level optimizations.

    Anyway, since we seem to be plugging our products here
    (and why not), here are a few interesting things about the
    Sybase submission.

    (1) Entity bean performance is now so good that it can
        outperform a roll-your-own persistence framework. We
        have found that it is possible to implement the
        session-bean facade to entity beans such that calls to
        entity beans from session beans are nearly as fast as a
        direct Java class call (with our new Lightweight
        Container).

    (2) Guaranteed database consistency is not inconsistent
        with data caching. Optimistic concurrency control can
        provide real benefit in the situation where the DBMS is
        not embedded within the app. server.

    (3) There are plenty of interesting CMP/JDBC optimizations
        available, such as prepared statement cloning,
        just-in-time stored procedures, and commit-time
        batching of statements (a rough sketch of the last of
        these follows below).
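
    As an illustration of commit-time batching only (this is plain JDBC, not Sybase's CMP engine; the table, columns and DataSource are invented for the example):

    <PRE>
    // Hedged sketch: flush all dirty rows in one batched round trip at commit
    // time instead of issuing one UPDATE per ejbStore().
    import javax.sql.DataSource;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.util.Map;

    public class CommitTimeBatcher {
        private final DataSource ds;

        public CommitTimeBatcher(DataSource ds) {
            this.ds = ds;
        }

        /** dirtyOrders maps an order id to its new status, accumulated during the tx. */
        public void flush(Map<Integer, String> dirtyOrders) throws Exception {
            try (Connection con = ds.getConnection()) {
                con.setAutoCommit(false);
                try (PreparedStatement ps = con.prepareStatement(
                         "UPDATE orders SET status = ? WHERE order_id = ?")) {
                    for (Map.Entry<Integer, String> e : dirtyOrders.entrySet()) {
                        ps.setString(1, e.getValue());
                        ps.setInt(2, e.getKey());
                        ps.addBatch();          // queue the update, don't execute yet
                    }
                    ps.executeBatch();          // one round trip for all queued updates
                }
                con.commit();
            }
        }
    }
    </PRE>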

    And finally, I promise that we'll be back with even more
    interesting results in the future :-) I am confident that 202
    BOPS is not the best that can be achieved on a 2 CPU Dell
    box in the Dual Node category, and that 112 BOPS can be
    bettered in the Single Node category.

    Keep up the good work, it's nice to see some real
    competition!
  13. Evan,
        I agree that cross-category comparisons are not fair. I was just trying to give some perspective to our results rather than trying to prove superiority ... it's difficult to position a small-scale embedded DB submission when all a casual reader tends to do is glance at the absolute BOPS numbers on spec.org and form mental rankings!

        You will probably be flamed for the comment on entity bean CMP performance ;-) but I have long agreed that CMP entity beans perform just fine. One vendor's bad implementation of CMP should not be taken to mean that nobody ever got them to perform as they should. It's good that we are all coming up with ever better numbers, and hopefully that will push back the bad reputation over time.

    >
    > Keep up the good work, it's nice to see some real competition!
    >
    Well, Sybase's Dual Node result is the one to beat now ... in that category! Congratulations.

    - Jishnu
  14. App server pricing

    Very interesting comments, Jishnu. I agree that hardware and software have reached a level where we can support a real EJB application on low-end hardware.
      However, this raises a serious pricing spectre that affects all application servers (except for SunONE and JBoss). The total cost of this configuration was $56,000, while the hardware cost only about $6,000. Don't you think this is a pretty substantial pricing overhead for a configuration that targets small sites and small businesses, especially when there are free competitors available? Such small configurations (and customers) don't really use the high-end features that differentiate, say, Borland AppServer from JBoss.
      --JRZ
  15. App server pricing

    A significant portion of the cost is for "support".

    You really have to scrutinize each vendor's report on what the cost per BOPS is. There are some creative pricing schemes used to achieve good price/performance numbers.
  16. App server pricing

    Yeah, you really need to dissect the SPECj pricing sheets, scratch your head on every line, and ask yourself "does this apply to me?". In the process of making the pricing rules fair, verifiable and acceptable to all the kinds of vendors participating in SPECj benchmarks, the final dollar figure sometimes becomes "reality challenged" ;-) When was the last time you paid 5K odd for maintaining your W2K OS? (Poor Pramati's submission carried 32 grand for it to follow the rules at the time!) Furthermore, the list prices used for licenses during benchmarks undergo brain surgery by vendors who can spend time on it, and the final prices end up inversely proportional to that creativity. So the bottom line is that you have to examine the cost yourself.

    That said, I have to agree that nothing beats free when what you get is good and satisfies you. There is real competition there and that's okay. We still get customers at the low and medium end because of the features and tooling, and because software licensing is more often than not a fraction of their true total cost, including training, tool familiarity, vendor relationship, etc.

    As far as Borland's Enterprise Server goes, however, the truth is that our forte is large deployments. There are usually a number of servers, and the pricing is done on the entire setup; the rules there are quite different. The point of this submission is not so much to come up with a compelling licensing cost for a small setup, but rather to show the raw performance we extract from one. Stay tuned for follow-on submissions that show this single-system efficiency compounding itself in horizontally scaled deployments. That should fill out the complete picture.
  17. App server pricing

    Hi John,

    Your comments are interesting.
    I would like to make a few comments about your concerns and attempt to shed some light on the context in which products are priced in these kinds of submissions:

    * If you read the SPECjAppServer2001 pricing rules, you will notice that, to meet the submission rules, the software support has to have features like:
      - 24x7 support availability via phone
      - a 4-hour response time to customers (for more accurate details please go to http://www.spec.org/osg/jAppServer2001/docs/RunRules.html). This applies not only to the app server but also to the OS, etc. So it is very likely that small and medium enterprises may meet their needs with a different level of support.

    You mentioned:

    "Don't you think this is a pretty substantial pricing overhead for a configuration that targets small sites and small businesses, especially when there are free competitors available?"

    As Jishnu mentioned earlier, the pricing figures are there mostly to meet the submission requirements of an "enterprise customer", which may differ from your needs. However, it would be interesting to know of sites where they provide "free" software plus support (with a similar service level agreement :). I have pasted some relevant details from the source (i.e. from http://www.spec.org/osg/jAppServer2001/docs/RunRules.html).

    <snippet from SPECjAppServer rules>

    4.2.3 Support
    Hardware maintenance and software support must be priced for 7 days/week, 24 hours/day coverage, either on-site, or if available as standard offering, via a central support facility.

    .......

    The response time for hardware maintenance requests must not exceed 4 hours on any component whose replacement is necessary for the SUT to return to the tested configuration.

    .......

    Software support requests must include problem acknowledgement within 4 hours. ......
    </snippet from SPECjAppServer rules>



    You mentioned:

    "Such small configurations (and customers) don't really use the high-end features that differentiate, say, Borland AppServer from JBoss."

    Please note that here we are talking about raw throughput (not any other enterprise features) and it should be considered a differentiating factor :), especially since there are no official submissions from JBoss or SunONE for SPECjAppServer2001.
  18. thanks
  19. Some citations from TSS before (as a reminder)

    No J2EE vendor has had the courage to post any figure on any TPC benchmark; moreover, they are in the process of creating a J2EE-only benchmark that would make it effectively impossible to compare J2EE with any competing technologies. But to create a benchmark that refuses competition with alternative technologies only shows narrow-mindedness and cowardice.

    But if ECPerf goes live I'll be ashamed that I'll be still programming in Java!

    You can only imagine how the situation where the J2EE world refuses competition (it's the naked truth: with ECPerf, the EJB world refuses to compete) will favor Microsoft marketing.

    So we have a technology that is immature and we can't make it compete. Instead of taking the challenge and making it work, we cowardly devise a competition only for our own technology, like who is the best of fools.

    What we SHOULD do as a community is get the word out to Sun (JavaOne is near) to take the damn thing out and burn all the documents that would show it ever existed. ECPerf is a disgrace to the Java community.

    I'd like to see benchmarks not only to compare servers but to see how much performance is -really- lost when going to an object/relational CMP mapper, depending on the product. ECPerf is technically flawed because it doesn't define a problem, it defines an implementation. More, it is commercially flawed because it is a safe haven for EJB technology to avoid competition.

    The worst, it is psychologically flawed.
    The Java community evolved from a Unix tradition and from people who bashed Microsoft for its technical problems.
    There was no question, only a few years ago, that Unix was the serious stuff: stable, performant and scalable, while Microsoft was a toy for workgroups and small businesses.
    We were like BMW technicians while MS developers were like Ford technicians and we had our pride for that.
    Now imagine that the old Crown Victoria suddenly runs faster than the Z8. And worse, BMW drops out of competition under the pretext:
     - "No, we are not talking performance here,we're not that interested in raw performance, we sell BMWs for leather quality and interior comfort, we'll create a separate competition for cars that have to have leather and look exactly like BMWs and have BMW engines under the hood."
    Would you want to work for BMW any longer?

    In Romania they have a saying,
    "If the stupid is not proud, he's not stupid enough"

    Regards
    Rolf Tollerud
  20. "No J2EE vendor has had the courage to post any figure on any TPC benchmark, more they are in the process of creating a J2EE only benchmark that would make it effectively imposible to compare J2EE with any competingtechnologies. But to create a benchmark that refuses competition with alternative technologies, that only shows narrow mind and cowardness"

    The purpose of ECperf (now SPECj*) was not so much about establishing EJB's performance versus other technologies; it is more about establishing the performance of the various vendors in the EJB space. As this benchmark is structured around the very well documented operations of a real-world manufacturing enterprise, many users can easily relate to the application scenario and thus use the information in their technology and vendor decisions. Towards these goals, I think the benchmark does a tremendous job.

    No vendor has published TPC benchmarks because TPC-C is a database-centric workload and TPC-W is a variant of TPC-C with web access; an EJB benchmark (or any app server benchmark) must simulate real business processing along with data access. The ECperf/SPECj* benchmarks do perform heavier business processing along with data access, and were by design meant to fill the space not covered by TPC-C or TPC-W (by the way, a lot of the members of the ECperf expert group and the SPECj* benchmark subcommittee are also in the TPC organisation!).

    While this certainly does not help the cause of comparing EJB with .NET, nor relational with object-relational, that is not the objective of this benchmark. Even so, the former could easily be realised, as the ECperf benchmark scenario and implementation could readily be reproduced on .NET should someone decide to do so.

    Cheers,
    Ramesh
    - Pramati Technologies
  21. The SPECjAppServer rules prohibit this; in fact, the whole spec is carefully designed not to risk comparison with .NET.

    SPECjAppServer2002 Run and Reporting Rules, (http://spec.unipv.it/osg/jAppServer2002/docs/RunRules.html#S3_6):

    3.6 Result Disclosure and Submission

    ...
    Test results that have not been approved and published by SPEC must not use the SPECjAppServer metrics (TOPS and Price/TOPS) in public disclosures.
    ...

    3.7.2 Comparison to Other Benchmarks

    ...
    SPECjAppServer2002 results must not be publicly compared to results from any other benchmark.
    ...

    Pathetic, isn't it?

    Don't you find it extremely telling that both BEA and IBM seem to be reluctant to participate in any kind of benchmarks?

    Somehow all this sounds familiar? Yes, wait - it is similar to Oracle. When they dominated the TPC performance benchmarks they made a BIG deal of it... then, when MS perfected their federated model (which Oracle could not support) and cleaned Oracle's (and EVERYONE's) clock in that benchmark, for some odd reason, suddenly, "benchmarks" didn't matter any more.

    Anyhow, the Java guys only have to dump the "EJB antipattern" - the popular entity bean with session bean facade scenario - to be competitive again...

    Regards
    Rolf Tollerud
  22. Rolf,
    Your ".Net is better than J2EE" comments are adding nothing to this thread, or any of the other threads you add them to.
    I respect anyones decision to use or not use J2EE technology but this is a site for J2EE discussions. ECperf and SPECjAppServer2001 are useful for comparing performance of J2EE/EJB servers. That is what they were designed for. They are not useful in comparing J2EE versus .net. Who cares?
    Please don't keep diluting the value of every thread with the same off-topic comments. I don't go to .net sites and try to disrupt those discussions.
  24. < Somehow all this sounds familiar? Yes, wait - it is similar to Oracle. When they dominated the TPC performance benchmarks they made a BIG deal of it... then, when MS perfected their federated model (which Oracle could not support) and cleaned Oracle's (and EVERYONE's) clock in that benchmark, for some odd reason, suddenly, "benchmarks" didn't matter any more. />

    Rolf,

    Just a couple of words about your observations on the database benchmark.
    I guess you know that the federated model (which MS uses for these results) can achieve this score because most of the SQL is executed locally on each node (you have to partition the database across the nodes). This architecture is really good for benchmarks but totally unusable for real OLTP apps (TPC-C is an OLTP benchmark). This is the reason nobody runs SAP, PeopleSoft, etc. with this architecture (you would have to repartition the entire database, and so on). You will also have problems with referential integrity: in MS's case you have to put the reference tables on each node, etc. It is also not flexible - if you want to add an additional node you have to repartition the entire thing. So the point is that this architecture is good for TPC-C but not for any real application.

    Regards,
    Alex
  25. Here is a summary of all posted results, sorted by vendor.
    CINT2000 is the cumulative estimated SPECint2000 result of all processors on the application server and database tiers.
    EFFICIENCY is 1000*BOPS/CINT2000.

    <PRE>
    Vendor            BOPS   CINT2000   EFFICIENCY
    --------------------------------------------------
    Borland         112.33       1650        68.08
    IBM             804.09      26304        30.57
    Oracle 1        189.63       3814        49.72
    Oracle 2        558.85       8448        66.15
    Oracle 3       1476.81      30080        49.10
    Oracle average                            54.99
    Pramati          57.38       2151        26.68
    Sybase          202.12       2908        69.50
    </PRE>
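
    For anyone who wants to recompute or extend the table, the EFFICIENCY column is just 1000*BOPS/CINT2000; a minimal sketch using a few of the figures above:

    <PRE>
    // Recomputes the EFFICIENCY column (1000 * BOPS / CINT2000) for a few vendors.
    public class Efficiency {
        public static void main(String[] args) {
            print("Borland", 112.33, 1650);    // prints ~68.08
            print("IBM",     804.09, 26304);   // prints ~30.57
            print("Sybase",  202.12, 2908);    // prints ~69.50
        }

        static void print(String vendor, double bops, double cint2000) {
            System.out.printf("%-8s %8.2f%n", vendor, 1000.0 * bops / cint2000);
        }
    }
    </PRE>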

    I know that different architectures/hardware/operating systems/databases/JVMs/JDBC drivers cannot be compared directly, but I think this table shows some interesting points:
    - IBM WebSphere and Pramati are slow.
    - Borland and Sybase have the most efficient EJB container implementations, with Oracle very close behind.

    Note that CINT2000 is estimated, but I think it cannot differ from the actual value by more than 5%.

    Regards,
    Mileta