www.jmsbenchmarks.com has published a test of several JMS servers (Sonic, MQSeries, OpenJMS, etc.) as well as the JMS implementations bundled with J2EE servers (including BEA WebLogic's, JRun's and JBoss' JMS implementations). The comparisons test compliance, performance and reliability.
See the full story here
Quite interesting, although it seems a bit old, which casts doubt on its relevance.
For instance, they seem to have used JBoss 2.2.1 for the tests, which is quite old by now.
I'm no JBoss expert, but I believe JBossMQ has been completely overhauled since then.
SwiftMQ has also undergone a number of revisions - in fact they are now on their 3.0 release - it would be interesting to see a comparison of more up-to-date builds.
Yes. I believe the new SwiftMQ 3.0 series uses the JDK 1.4 native I/O library (java.nio) as well - it would be interesting to see how this alone improves performance over earlier versions.
Well, to test java.nio you need an OS other than Windows: there is a bug in JDK 1.4 that prevents selects on more than 63 fds. Furthermore, java.nio is *slower* than blocking I/O because of the select overhead; it only pays off once you have 500+ concurrent connections.
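To make the select-overhead point concrete, here is a minimal java.nio sketch (the class name `NioDemo` is mine, and it uses the later `bind(SocketAddress)` form - under JDK 1.4 you would call `server.socket().bind(...)` instead). A single Selector multiplexes all registered channels through one `select()` call - an extra syscall on top of the actual I/O, which is why it only wins when there are many connections to watch:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class NioDemo {

    // Opens a selector, registers a listening channel for OP_ACCEPT,
    // connects one client, and checks that a single select() call
    // reports the pending connection.
    static boolean selectOnce() throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);               // required before register()
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.register(selector, SelectionKey.OP_ACCEPT);

        // One blocking client connection, just to trigger the accept event.
        SocketChannel client = SocketChannel.open(server.getLocalAddress());

        int ready = selector.select(2000);             // one syscall for all channels
        boolean accepted = false;
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isAcceptable()) {
                SocketChannel conn = server.accept();  // non-blocking accept
                accepted = (conn != null);
                if (conn != null) conn.close();
            }
        }
        client.close();
        server.close();
        selector.close();
        return ready >= 1 && accepted;
    }

    public static void main(String[] args) throws IOException {
        System.out.println("accept event seen: " + selectOnce());
    }
}
```

A real server would also register every accepted channel for OP_READ in the same loop; with only a handful of connections, one blocking thread per connection is both simpler and faster.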
Some comments on this test. First, they used very old releases. Next, they used very poor hardware: a PIII with 256 MB, W2K, everything on one server. LOL. As for the tests: I can't find any hint of whether they used persistent or non-persistent messages. And finally: who is "Cannon Group, LLC"? No website except that jmsbenchmarks.com site. They come out of nowhere, publish a benchmark, and call it a "World Class Award"! With a PIII and 256 MB! LOL.
No, really - take a look at the Sonic/Fiorano comparison published by Sonic (not the one from Fiorano). That's much more serious.
They also used an old version of JORAM (1.1, dating from October 2000) even though a newer version (2.0) was available in May, *before* they ran their benchmark. So much for the objectivity of their tests...
Needless to say, the compliance and performance results do not reflect the current state of JORAM (nor of JBossMQ).
While I'm pleased to see Sonic named in their top three, I also question the objectivity of the testing. Between Cannon LTD's choice of hardware, the limited performance test variations, the reliability weighting, old versions of competing products against a new version of Open3, and a home-grown compliance test suite (which Open3 passes 100%), something doesn't seem copacetic.
On the matter of compliance testing: we at Sonic Software have already had a competitor attempt to win business by showing we don't pass their home-grown compliance test. In that particular instance we found several of the competitor's test cases to be non-compliant themselves, and several others to be their own interpretation of the JMS specification.
In answer to this situation, Sonic Software licensed Sun's J2EE 1.3 CTS and proceeded to test and pass SonicMQ 3.5, our current product when the 1.3 CTS was released. We have also made this testing a mandatory component of our release cycle, and as such SonicMQ 4.0 has passed as well. The 1.3 CTS includes 2,100 JMS-specific tests along with approximately 13,000 other tests, which we also passed when integrated with Sun's Reference Implementation.
If the Cannon group is indeed an independent testing company, I applaud their efforts. However, I would recommend that they, too, license the only industry-recognized compliance software to perform the compliance portion of their evaluation.
VP Operations, Sonic Software
How is the performance of OpenJMS?
I noticed that a large message in the benchmark is 10k. I'm just curious, but is anyone doing any really large messages? How about 10M+? Is there a preferred design pattern when you have a JMS-centric design but need to send a few large messages that would normally knock out the system?
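One common answer - not something the benchmark covers, so treat this as a hedged sketch - is either the "claim check" approach (send a small JMS message carrying only a reference, e.g. a URL, and move the 10M+ payload out of band) or chunking: split the payload into fixed-size pieces, send each piece as a `javax.jms.BytesMessage` tagged with a sequence-number property, and reassemble on the consumer. The split/reassemble part, with class and constant names of my own choosing, might look like this:

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

public class Chunker {

    // Mirror the benchmark's 10k "large message" size as the chunk size.
    static final int CHUNK = 10 * 1024;

    // Split a large payload into fixed-size chunks (the last may be shorter).
    static List<byte[]> split(byte[] payload) {
        List<byte[]> chunks = new ArrayList<byte[]>();
        for (int off = 0; off < payload.length; off += CHUNK) {
            int len = Math.min(CHUNK, payload.length - off);
            byte[] c = new byte[len];
            System.arraycopy(payload, off, c, 0, len);
            chunks.add(c);
        }
        return chunks;
    }

    // Reassemble chunks, assumed already in sequence order, into the payload.
    static byte[] join(List<byte[]> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] c : chunks) {
            out.write(c, 0, c.length);
        }
        return out.toByteArray();
    }
}
```

In a JMS-centric design each chunk message would also carry properties such as a group id, a sequence number and a total count, so the consumer can detect loss and reorder before calling `join`.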
Justin Chapweske, Onion Networks
The report lacks the objectivity that is critical for any comparison of competing products. On what basis was a 40% weight given to JMS compliance? What is the basis for giving reliability a 30% weight? If I were using a product in a mission-critical environment, I would not compromise on reliability, or speed, or "usability" (whatever that is supposed to mean).
Any serious evaluation would refrain from using platitudes like "WORLD-CLASS" without defining exactly what WORLD CLASS is supposed to mean. It all seems pretty vague. It is good that someone is actually trying to test JMS products independently, but the quality of the reporting and the clarity in explaining the testing techniques need to change drastically before such "AWARDS" are taken seriously by anyone who is going to buy a product.
All of these factors cast serious doubts on the competence of the effort. Why AWARDS? Beats me.
I disagree with your statements about this benchmark's lack of objectivity and competence.
Their criteria are very clear and make sense.
Now of course, for a production project, if 24x7 reliability is a must and you have skilled engineers, you don't care much about usability and documentation.
On the other hand, if you need to serve 10K tx/second, you put performance as the number-one criterion, with a 60% weight (or more). It's up to you to decide what the accurate picture is for your project.
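The re-weighting argument is easy to make concrete. With illustrative scores of my own invention for two hypothetical products, the report-style weights (40% compliance, 30% reliability; the split of the remaining 30% between performance and usability is my assumption) and a performance-heavy weighting produce opposite rankings:

```java
public class WeightedScore {

    // Weighted total: each criterion scored 0..100, weights should sum to 1.0.
    static double score(double compliance, double reliability,
                        double performance, double usability,
                        double wC, double wR, double wP, double wU) {
        return compliance * wC + reliability * wR
             + performance * wP + usability * wU;
    }

    public static void main(String[] args) {
        // Invented scores: A is compliance-strong, B is performance-strong.
        double[] a = {95, 90, 60, 80};
        double[] b = {80, 85, 95, 70};

        // Report-style weights: 40% compliance, 30% reliability,
        // 20% performance, 10% usability (the last two are my assumption).
        double aReport = score(a[0], a[1], a[2], a[3], 0.4, 0.3, 0.2, 0.1);
        double bReport = score(b[0], b[1], b[2], b[3], 0.4, 0.3, 0.2, 0.1);

        // Performance-first weights, as for a 10K tx/second project.
        double aPerf = score(a[0], a[1], a[2], a[3], 0.2, 0.1, 0.6, 0.1);
        double bPerf = score(b[0], b[1], b[2], b[3], 0.2, 0.1, 0.6, 0.1);

        System.out.println("report weights:      A=" + aReport + " B=" + bReport);
        System.out.println("performance weights: A=" + aPerf + " B=" + bPerf);
    }
}
```

Under the report-style weights product A wins; under the performance-first weights product B wins - which is exactly why the choice of weights has to match your own project, not the reviewer's.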
Now, I agree that a PII 333 MHz with 256 MB RAM is very cheap hardware, and that's reflected in the performance benchmark results...
I agree that they should choose another OS platform because of the JDK 1.4 nio problem (they plan Linux and Solaris in a future version... which could cause trouble for some vendors).
I think they should explain more about who they are (I could not find anything on Cannon Group LLC) and avoid titles like "world class award winners" and so on...
They should give more details on their test cases, test configurations, load injection and measurement methods.
Now, those guys ran 711 compliance tests, 40 reliability tests and 12 performance scenarios across 13 products.
And for some products (SonicMQ, MQSeries, WebLogic, JRun, JMQ and JBoss) their results are in line with what some company tests I am aware of found last year - and not aligned with tests published or contracted by the vendors themselves (like Fiorano or Sonic).
I find the planned enhancements announced on their home page very interesting (clustering, other OSes, performance with 1 MB+ messages...).