Discussions

News: Mule MQ and ActiveMQ - performance test results

  1. Mule MQ and ActiveMQ - performance test results (17 messages)

    Organizations have downloaded Mule MQ for evaluation, and the feedback is in. Some people have been very interested in the performance of Mule MQ and have asked for more details about the tests we ran to validate it. It's clear that performance is one of the most important criteria for selecting a JMS messaging product, so our product team spent a great deal of time benchmarking Mule MQ against the most popular JMS products currently available. One of the projects we selected for comparison was Apache ActiveMQ (most product vendors prohibit publishing benchmark data). ActiveMQ is the leading open source JMS technology and is popular with the Mule community. For that reason, we wanted to understand how Mule MQ would compare to ActiveMQ under some common scenarios. A few things that we considered as we designed our tests:
  1. Industry-accepted test harness - Benchmark tests can sometimes be subject to manipulation, so we wanted a test harness that is publicly available and industry-accepted. We chose the Sonic Test Harness from Progress Software.
  2. Request/reply and pub/sub - Some of our customers have queue-based (request/reply) requirements, while others are looking for topic-based (publish/subscribe) messaging. We wanted to include both request/reply and pub/sub test cases in our suite.
  3. Variety of use case scenarios - We wanted to test both performance and scalability, and we also wanted to see how the various products performed with different message sizes. So we created multiple tests for both request/reply and pub/sub scenarios, covering a decent set of common usages.
  4. Minimal product tuning - To keep the playing field as level as possible, we ran each product, including Mule MQ, with its "out-of-the-box" configuration and no advanced tuning.

    Details on the test cases, test harness, hardware profile, and results can be found here: http://www.mulesoft.org/display/MQ/Performance+Tests

    In the end, Mule MQ performed well against all of the messaging providers we tested, including ActiveMQ. In some of the use cases, Mule MQ and ActiveMQ performed about equally; in others, Mule MQ performed better than ActiveMQ, while in still other test cases ActiveMQ outperformed Mule MQ. We originally benchmarked the 5.2.0 release of ActiveMQ. In our most recent testing, using ActiveMQ 5.3.0, ActiveMQ's performance improved, so we included both benchmarks in our test results.

    In messaging, performance testing can be tricky, since it depends highly on your specific use case, and JMS products can be tuned along a number of different attributes for different scenarios. Any JMS vendor can publish a performance test that beats all others for a certain use case. We acknowledge this fact, and we have striven to create an objective test.

Threaded Messages (17)

  • Why is the benchmark (especially the sonic test class extensions you created) not made in one big distribution to allow people to verify the claims quickly without much investment in time?
  • Why is the benchmark (especially the sonic test class extensions you created) not made in one big distribution to allow people to verify the claims quickly without much investment in time?


    +1 (especially the test harness property files)

    I highly doubt the results on the MuleSoft page. For example, PTP#1 claims 1950 msgs/sec and is done transacted/persistent. Transaction size in TestHarness is 10 by default.

    Now look at PTP#3 which claims 9353 msgs/sec. This is done non-transacted/auto-ack/persistent.

    The difference is this: #1 sends 10 msgs in one transaction and then has to synchronously wait for the return of the commit call. #2 has to synchronously wait on the return of each send call because the JMS spec states the send call must only return after the message has been written to disk.

    Same on the receiver side. #1 can receive up to 10 msgs and then has to synchronously wait for the return of the commit call. #2 does an auto-ack but can only do that if the message has been delivered.

    Therefore, if the JMS QoS for persistent messages were actually in place, #3 can't be three times faster than #1; rather, it must be slower. So either they use some default optimization like async writes/reads, or the implementation is wrong.

    I haven't verified the other tests yet.
    Sorry, I meant PTP #4, not #3. And it's not only Mule MQ; ActiveMQ shows the same behavior.
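    The argument in the post above can be sketched as a back-of-envelope throughput model. This is a hypothetical illustration, not measured data: it assumes every synchronous disk write costs the same fixed latency and everything else is negligible, so a transacted batch of 10 amortizes one blocking commit over 10 messages, while persistent auto-ack pays one blocking write per send.

```java
// Back-of-envelope model of the transacted-vs-auto-ack argument.
// The 1 ms sync-write latency is an assumed, illustrative number.
public class PersistenceModel {

    // Throughput when each blocking sync write covers `msgsPerSyncWrite`
    // messages: batchSize messages per `syncWriteMillis` of wall time.
    static double msgsPerSec(int msgsPerSyncWrite, double syncWriteMillis) {
        return msgsPerSyncWrite * 1000.0 / syncWriteMillis;
    }

    public static void main(String[] args) {
        double syncMs = 1.0; // assumed cost of one synchronous disk write

        // Transacted/persistent (PTP #1 style): one commit per 10 messages.
        double transacted = msgsPerSec(10, syncMs);

        // Non-transacted/auto-ack/persistent (PTP #4 style): one sync write per send.
        double autoAck = msgsPerSec(1, syncMs);

        System.out.printf("transacted (batch=10): %.0f msgs/sec%n", transacted);
        System.out.printf("auto-ack  (batch=1):  %.0f msgs/sec%n", autoAck);

        // Under honoured persistent QoS, per-message sync sends can never
        // beat a transacted batch of 10 -- which is why published numbers
        // showing the opposite suggest async writes or non-persistent mode.
    }
}
```

    Under these assumptions the transacted case is ten times faster, which is the direction the JMS persistent-delivery guarantees predict; the published numbers run the other way, which is the poster's point.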
  • ActiveMQ 5.2.0: the default config was used; no changes were made to the default ActiveMQ instance.
    Mule MQ: the following attributes were set on queues and topics: Queue Capacity = 1000, Sync Each Write = true.
    Why didn't you set the same configuration on ActiveMQ as you did with Mule MQ?
  • Is it Nirvana JMS

    Is MuleMQ simply a repackaging / resell / OEM of some of the my-Channels Nirvana product? http://www.my-channels.com/developers/nirvana/enterprisemanager/jndi/jmsintegration.htm
  • Re: Is it Nirvana JMS

    Hi Alex, MuleMQ is indeed an OEM of Nirvana. We tested a number of other JMS providers and were very happy with the reliability, performance, and tooling that Nirvana had to offer. We have tightly integrated and tested Mule MQ with Mule ESB for those who need or are looking for a JMS provider along with Mule. Cheers, Ross
  • Anytime anyone publishes test results, they're bombarded by people saying they didn't do the test right. :) Whatever you do, it's a no-win situation for the testers. So thanks to the Mule team for putting together this test. It's a very tough and tricky task, especially with messaging software. I just have one question: what was the reason behind creating another MQ product when ActiveMQ is pretty good? :)
  • Anytime anyone publishes test results, they're bombarded by people saying they didn't do the test right. :) Whatever you do, it's a no-win situation for the testers.
    Why not? If the numbers are good and verifiable? Currently it's quite misleading. For example, for ActiveMQ, PTP #1 vs. #4 is 1170 vs. 12000. That's a factor of 10 in favor of a mode (non-transacted, auto-ack, guessing persistent) which should be much slower due to the synchronous-send JMS requirement. ActiveMQ states that it uses sync sends for persistent messages. Therefore, the results for PTP #4 and #5 can only be true for non-persistent messages. It would be nice if the Mule guys could clarify that by putting their test harness property files on the test result page and clearly stating when they used persistent and non-persistent messages.
  • It's great to see the interest in our performance tests of Mule MQ. As Ken mentioned in his post, performance testing for something as complex as a JMS product can be tricky, and we have tried to be as open as possible. To answer some of the specific questions:

    1. Tests 1, 2, and 3 are PERSISTENT and TRANSACTED. Tests 4 and 5 are NON-PERSISTENT and NON-TRANSACTED. I have updated the results page to reflect this clearly and included the Pub-Sub Test 2 and PTP Test 2 properties for everyone's reference. For the other tests, change the following properties: numconnections, numsessions, msgsize, destname, istransacted, deliverymode.
    2. The properties -Dnirvana.syncWrites=true and -Dnirvana.globalStoreCapacity=1000 actually reduce Mule MQ performance. We set them so that we can do a fair comparison of transacted/persistent messaging. ActiveMQ has similar properties ( & ). You can omit these Mule MQ properties from the test if you like; that will only make Mule MQ perform faster in persistent tests 1, 2, and 3, but it would not be a fair comparison.
    3. There are hundreds of other tuning properties that you can use in Mule MQ (in the enterprise manager, look in the config tab). We stuck to the out-of-the-box config for this test. When we go to a customer site, we tune according to the specific use case, hardware, network topology, etc. For example, by default the Mule MQ socket has NIO enabled; when we used a non-NIO socket, performance was 50% higher on this specific platform. Similarly, there are many other small things that could be tweaked, but we chose not to.

    Finally, to answer the question of why we introduced Mule MQ, the simple reason is that our community and customers were demanding one from us. With JMS and ESB being so highly complementary and integrated, our user community has been looking for a JMS and ESB solution offered and supported by a single provider. Download and try it for yourself. You will see the difference in ease-of-use, manageability, and more; performance is just one aspect of it.
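    For reference, a harness properties file using the property names listed in that post might look like the following. This is a sketch only; the values shown are illustrative examples, not the settings used in the published runs.

```properties
# Illustrative Sonic Test Harness settings.
# Property names are the ones named in the post above; values are examples.
numconnections=1
numsessions=10
msgsize=1024
destname=ptp.test.queue
istransacted=true
deliverymode=PERSISTENT
```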
  • You know, the time it took you to type that response would have been better spent tar'ing up the benchmark kit.
  • Is MuleMQ cheating?

    Andreas is right.

    Looks to me that MuleMQ is "cheating" and sending persistent messages asynchronously. This would be contrary to the JMS specification and breaks reliability guarantees.

    I can't find a setting in MuleMQ which tells it to send messages synchronously. Can anyone point me to this setting?

  • Shawn, I see no reason why this could not be packaged up and made ready to run out of the box. Both products are freely available (are they not?), so why not just make it easy for anyone interested to quickly run this?
  • Shawn, I see no reason why this could not be packaged up and made ready to run out of the box. Both products are freely available (are they not?), so why not just make it easy for anyone interested to quickly run this?
    Agree with you and Andreas. I hope the Mule folks can answer all the questions. :) Till then, just take it with a pinch of salt. Every test feels a bit biased or incorrect.
  • Trying use kaha persistence instead

    The testing used the default ActiveMQ persistence, which is somewhat unfair, because ActiveMQ itself says the default persistence is slow and recommends using the Kaha persistence component. The test results show that in non-persistent mode ActiveMQ is faster than Mule MQ, but with persistence it is slower. So I guess if ActiveMQ were used with Kaha persistence, it would definitely be better than Mule MQ.
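    For anyone who wants to try this themselves, switching ActiveMQ to the KahaDB store is a small change to the broker configuration. The fragment below is a sketch of the relevant activemq.xml section; the directory path is an example, not a setting from the published tests.

```xml
<!-- Illustrative activemq.xml fragment enabling the KahaDB persistence
     adapter (available in ActiveMQ 5.3+); the directory is an example. -->
<broker xmlns="http://activemq.apache.org/schema/core">
  <persistenceAdapter>
    <kahaDB directory="${activemq.base}/data/kahadb"/>
  </persistenceAdapter>
</broker>
```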
  • Re: Trying use kaha persistence instead

    You know, we could easily publish such comparisons if the full benchmark distribution were made available, instead of making such claims without hard evidence. With SPEC-based benchmarks you get the whole kit, though admittedly you must pay for some of them, which I think is a mistake on their part. They (SPEC) should only charge for publication of the report, not for mere access to the kit.
  • Availability of benchmark

    It is a fair criticism that the extension classes and benchmark configuration were not originally made available. We agree that it is best to be transparent, so we have posted them for download from http://www.mulesoft.org/display/MQ/Performance+Tests. They are not in a single package, but if you have already downloaded the required products, it should not take very long to reproduce the test. The required steps are outlined in the section titled 'Sonic Test Harness.' If you still have problems running the test, drop us a note at info at mulesoft dot com and we will be glad to help.
  • Re: Trying use kaha persistence instead

    The test was designed to be as fair as possible and to run out of the box. Regarding KahaDB, the ActiveMQ documentation (http://activemq.apache.org/kahadb.html) indicates that KahaDB is "marginally slower" than the default persistence store, so it probably would not improve the performance numbers we saw in our tests. If you have other scenarios you think would be worthwhile to run, the complete configuration and extensions to the test harness are available for download at http://www.mulesoft.org/display/MQ/Performance+Tests along with specific instructions on how to run it.