Discussions

News: ESB performance testing - Round 4

  1. ESB performance testing - Round 4 (12 messages)

    AdroitLogic has announced the results of the fourth round of ESB performance testing, carried out by Asankha Perera since June 2007. Previous rounds have compared a leading proprietary ESB, the proprietary version of an open source ESB, the WSO2/Apache Synapse ESB, Mule CE, and Apache ServiceMix. This round does not name the results of any competing ESBs, but instead makes it easy for end-users to run the test framework on Amazon EC2 or on local hardware and measure the performance of any comparable ESB. The selected scenarios also let end-users experience how easy or difficult it is to set up these simple cases on the ESB of their choice.

    The performance test suite includes a high-performance Echo service based on Java NIO and an Apache Bench style load generator that drives concurrency levels from 20 to 2560 users against the ESB, with message sizes ranging from 512 bytes to 100K. The test can easily be run on an "m1.large" Amazon EC2 instance with the provided scripts. Round 4 includes tests that measure the performance of a Direct Proxy (used for virtualization of services), a Content Based Routing (CBR) Proxy (used to make routing decisions based on an XML/SOAP payload), an XSLT Transformation Proxy (used for translation of requests and responses), and a WS-Security Proxy that allows an ESB to act as a security gateway. http://adroitlogic.org/samples-articles-and-tutorials/15-tutorials/48-esb-performance.html
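
    For readers curious about the Echo service mentioned above, the sketch below shows what a minimal Java NIO based echo server can look like. It is a generic illustration only, not the actual service shipped with the test framework; the port number and buffer size are arbitrary assumptions.

        // Minimal selector-driven echo server: one thread multiplexes all
        // connections, which is what lets an NIO service handle thousands of
        // concurrent users without a thread per connection.
        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.ByteBuffer;
        import java.nio.channels.*;
        import java.util.Iterator;

        public class NioEchoServer {
            public static void main(String[] args) throws IOException {
                Selector selector = Selector.open();
                ServerSocketChannel server = ServerSocketChannel.open();
                server.socket().bind(new InetSocketAddress(9000)); // port is an assumption
                server.configureBlocking(false);
                server.register(selector, SelectionKey.OP_ACCEPT);

                ByteBuffer buffer = ByteBuffer.allocateDirect(8192);
                while (true) {
                    selector.select(); // block until at least one channel is ready
                    Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
                    while (keys.hasNext()) {
                        SelectionKey key = keys.next();
                        keys.remove();
                        if (key.isAcceptable()) {
                            // Register each new connection for non-blocking reads
                            SocketChannel client = server.accept();
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        } else if (key.isReadable()) {
                            SocketChannel client = (SocketChannel) key.channel();
                            buffer.clear();
                            if (client.read(buffer) < 0) { // peer closed the connection
                                client.close();
                                continue;
                            }
                            buffer.flip();
                            while (buffer.hasRemaining()) {
                                client.write(buffer); // echo the bytes straight back
                            }
                        }
                    }
                }
            }
        }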

    Threaded Messages (12)

  2. Mule test results - updated link

    Please note: the link on the AdroitLogic page for the Mule performance test results is out of date. The correct link is: http://www.mulesoft.com/downloads/Whitepaper_perf_test_results.pdf
  3. I wish there was a YouTube face-off

    After viewing the article (and visiting WSO2's and Mule's sites) I have to admit I have no confidence in the various findings.

    WSO2 posts results where their ESB wins! Mule posts results where their ESB wins!

    I wish we could have a same-hardware live face-off: let each vendor configure their product, then see who really can get the best numbers. (Even if this did happen, and I'm sure it won't, I'm sure the results would be immediately and loudly contested.) At least I'd get to see how the techies make adjustments...

    Rick
  4. Hi Rick

    This round solves the EXACT problem you bring up! A vendor will no longer be able to "run" and publish performance tests, but will simply make the required configuration available, so that a user evaluating multiple options can run the test against the selected ESBs - on the same hardware, JDK etc. - and see the results for himself. Note that by using Amazon EC2, this process becomes trivial!

    cheers
    asankha
  5. Can you benchmark on a cloud?

    I would be very careful about benchmarking on a cloud. For any benchmark to be credible, the hardware and network need to be completely isolated and under the exclusive use of the system under test; otherwise indeterminacies can skew the results, e.g. someone starting a file transfer on the same network while you're running your test. When using a cloud, can you really be sure your system is completely isolated? AIUI most cloud set-ups use some kind of virtualisation, and resources like networks are most likely shared amongst many users.
  6. Re: Can you benchmark on a cloud?

    Hi Tim

    I agree with your thoughts. However, I believe it's much closer to a fair comparison than when different vendors run the tests on different hardware configurations. For example, WSO2 ran Round 3 on three physical servers, while Mule ran it on two; BEA ran it on an unknown configuration. Also, in this round, both ESBs were tested on the exact same incarnation of the AMI instance, and all network operations were limited to localhost, removing any dependence on the network. Of course, end-users who have dedicated hardware and an isolated network could run the same test in such an environment.

    The UltraESB introduces Zero-Copy proxying, using Java NIO to support thousands of concurrent users, and on a virtualized system this capability is not fully utilized; thus on real server-grade hardware the measurements would surely be even more interesting.

    cheers
    asankha
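
    For anyone wondering what "Zero-Copy" means in the Java NIO context: it generally refers to letting the kernel move bytes to a socket without copying them through user-space buffers, typically via FileChannel.transferTo(). The sketch below is a generic illustration of that idiom, not UltraESB source; the host, port and file name are hypothetical.

        // Stream a payload file to a socket without copying it through the
        // JVM heap; on supporting operating systems transferTo() maps to
        // sendfile(), so the bytes move kernel-side.
        import java.io.FileInputStream;
        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.channels.FileChannel;
        import java.nio.channels.SocketChannel;

        public class ZeroCopySend {
            public static void main(String[] args) throws IOException {
                SocketChannel socket =
                        SocketChannel.open(new InetSocketAddress("localhost", 9000));
                FileChannel payload = new FileInputStream("request.xml").getChannel();
                long position = 0, remaining = payload.size();
                while (remaining > 0) {
                    // transferTo() may move fewer bytes than asked, so loop
                    long sent = payload.transferTo(position, remaining, socket);
                    position += sent;
                    remaining -= sent;
                }
                payload.close();
                socket.close();
            }
        }

    On a virtualized host the kernel-side advantage is diluted, which is presumably why the poster expects the measurements to be even better on real server-grade hardware.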
  7. Hello - Planet Earth

    After viewing the article (and visiting WSO2's and Mule's sites) I have to admit I have no confidence in the various findings.

    WSO2 posts results where their ESB wins!

    Mule posts results where their ESB wins!

    I wish we could have a same-hardware live face-off: let each vendor configure their product, then see who really can get the best numbers.

    (Even if this did happen, and I'm sure it won't, I'm sure the results would be immediately and loudly contested.)

    At least I'd get to see how the techies make adjustments...

    Rick
    Come on, have you never seen benchmarks before? Everyone wins; that's exactly what a benchmark is: IT speak for "We're the fastest". The only real way of demonstrating you're worthy of being used is by independent referral, or by the vendor openly publishing some sort of performance test (I hesitate to use the word benchmark) simple enough that you can get it working on your own hardware before you lose interest, typically 10 minutes.

    Why is this better? Well, firstly, if you can't get it working then the software is obviously not designed to actually be used; secondly, the vendor is confident enough to demonstrate openly that their software is (a) easy to use and (b) stands up to the competition. And if, interestingly, you can't get the competition working, then who cares about the performance? Working is good enough!

    -John-
  8. Re: Hello - Planet Earth

    I strongly suggest taking a look at the ESB performance benchmarks published by other vendors as well. Interestingly, there is one comparing the Intel SOA Expressway with IBM DataPower, in a test conducted by PushToTest and commissioned by Intel. What amazes me in particular is that the test uses only tiny messages of 1K or less, tests performance at a concurrency level of just 84 users, and publishes a rather low TPS as well. They have made the test code available at [1], and I suggest that anyone who has access to IBM DataPower or the Intel SOA Expressway use the ESB Performance Test Framework I've published against these two heavyweight contenders and see the performance results for themselves. Also refer to this test [2], which compares OpenESB and ServiceMix and attempts to see whether these ESBs can process "more than 10000 messages per minute" (i.e. a mere 166 TPS!).

    asankha

    [1] http://www.comparedatapower.com/zones/compare-datapower/download/SOABenchmarkKit
    [2] http://www.predic8.com/openesb-servicemix-comparison.htm
  9. Re: Hello - Planet Earth

    I strongly suggest taking a look at the ESB performance benchmarks published by other vendors as well. Interestingly, there is one comparing the Intel SOA Expressway with IBM DataPower, in a test conducted by PushToTest and commissioned by Intel.
    1) There are no files to set up DataPower to run the benchmark.

    2) The benchmark is totally biased. Intel says that DataPower's initial cost is double that of the Intel gateway, but what is not said in that test is that they could have replaced the XI50 with the XS40, which costs half the price.

    We process far more messages in a single day using DataPower than are processed by these crap benchmarks. It takes us one minute to publish a service in DataPower; it's a true push-button operation.
  10. Re: Hello - Planet Earth

    I strongly suggest taking a look at the ESB performance benchmarks published by other vendors as well. Interestingly, there is one comparing the Intel SOA Expressway with IBM DataPower, in a test conducted by PushToTest and commissioned by Intel.
    IBM DataPower is not an ESB; it is an XML-processing appliance used to offload tasks like security, syntax validation and cryptography, among others, from the ESB. IBM offers other products like WebSphere Message Broker and WebSphere ESB (and its bigger brother, WebSphere Process Server), and I believe these are the products that should be benchmarked, rather than DataPower.
  11. Not correct

    I strongly suggest taking a look at the ESB performance benchmarks published by other vendors as well. Interestingly, there is one comparing the Intel SOA Expressway with IBM DataPower, in a test conducted by PushToTest and commissioned by Intel.

    IBM DataPower is not an ESB; it is an XML-processing appliance used to offload tasks like security, syntax validation and cryptography, among others, from the ESB. IBM offers other products like WebSphere Message Broker and WebSphere ESB (and its bigger brother, WebSphere Process Server), and I believe these are the products that should be benchmarked, rather than DataPower.
    I work for IBM on SOA strategy, and it isn't correct to say DataPower isn't an ESB. From the website (http://www-01.ibm.com/software/integration/datapower/xi50/): "WebSphere DataPower Integration Appliance XI50: Purpose-built hardware ESB for simplified deployment and hardened security."
  12. Good point!

    Ha, good point. DataPower can indeed be seen as an Enterprise Service Bus. Besides, there are other ESBs that aren't clear cases either. On the other hand, I once heard someone say that you're an ESB if you call yourself an ESB and do data integration, and I sometimes wonder if there is truth to that.
  13. I'd suggest that all those involved join the SPEC SOA benchmarking effort. We aim to answer these questions in a cross-industry, collaborative fashion: http://www.spec.org/soa

    Vendor-published and sponsored benchmarks are always questioned. SPEC has a history of fair and accurate standardized benchmarking.

    Andrew Spyker - SPEC SOA Benchmarking Subcommittee Chair