JMS Performance in WebLogic/JBoss



  1. JMS Performance in WebLogic/JBoss (3 messages)


    I'm hoping someone can shed some light on a problem we are having.

    The scenario is this. A Java process issues RMI calls into an application server (WebLogic and JBoss have both been used); a stateless bean forwards this 'message' onto a queue via JMS (WebLogicMQ, JBossMQ and SonicMQ have all been used). At the other end of the queue an MDB processes the message and calls another stateless bean, which again posts the 'message' onto another queue. This is repeated, so the message in fact passes through three queues in its lifecycle.

    Right, the problem is this: all of the above AS and MOM configurations back up at an injection rate of 50-60 messages per second. Below this level they process happily; above this magic rate the queues start to back up, and the total round-trip time grows to unacceptable levels. All internal components such as the MDBs, EJBs etc. have steady processing times.

    The MOM installations are straight out of the box, but I am sure they must be able to process more than 150 messages per second (50/second x 3 queues). This has been tested on Solaris, Linux and WIN32, on Intel and Sun RISC hardware, and they all behave the same.

    We cannot find anything wrong with the code (we have used Identify Black box recorders) and all we can assume is that it is 'something' in the MOM broker. The issue seems to trigger during garbage collection.

    I hope someone has some ideas (even if they involve black candles and chicken's blood) they can suggest.

    Thanks in advance


    Threaded Messages (3)

  2. What you're hitting is the limit of how fast objects can be created and serialized/deserialized. This could be a memory limit (increase the heap/stack sizes on the JVMs running the servers), processing power (get a faster machine), etc.

    However, if you're using JMS, especially through many hops, you shouldn't be concerned with "round trip" timings. One of the ideas behind messaging (JMS, SOAP, whatever) is to de-couple long-running operations. An operation will finish when it finishes.

    If you are concerned about load, then you need to bump parameters as above, and probably increase your pool of MDBs.
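
    As an illustration of widening the MDB pool, on WebLogic this is done in the weblogic-ejb-jar.xml deployment descriptor. The bean name and pool sizes below are placeholders, not recommendations; tune them for your own deployment:

    ```xml
    <weblogic-ejb-jar>
      <weblogic-enterprise-bean>
        <!-- must match the ejb-name in ejb-jar.xml; "QueueProcessorMDB" is a placeholder -->
        <ejb-name>QueueProcessorMDB</ejb-name>
        <message-driven-descriptor>
          <pool>
            <!-- illustrative figures: pre-create 10 instances, allow up to 30 concurrent consumers -->
            <initial-beans-in-free-pool>10</initial-beans-in-free-pool>
            <max-beans-in-free-pool>30</max-beans-in-free-pool>
          </pool>
        </message-driven-descriptor>
      </weblogic-enterprise-bean>
    </weblogic-ejb-jar>
    ```

    More instances in the free pool means more messages consumed in parallel, up to the limits of the broker and the CPU.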
  3. check disk flush rate


    We've encountered a similar performance problem (SonicMQ & our own server).

    It turned out that, because our JMS messages are persistent and transacted, the message rate per second is limited by the flush rate of the hard disk.

    To verify this, you can write a small program that flushes blocks the size of your messages to the disk, and measure its flush rate.
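
    A minimal sketch of such a test, assuming a message size of roughly 2 KB (adjust to match your real payloads): it writes message-sized blocks to a temporary file, forces each one to disk with a sync, and reports how many synced writes per second the disk sustains.

    ```java
    import java.io.File;
    import java.io.RandomAccessFile;

    public class FlushRateTest {

        // Returns synced writes per second for the given block size.
        // messageSize and count are assumptions; match them to your traffic.
        public static double measureFlushRate(int messageSize, int count) throws Exception {
            File f = File.createTempFile("flushtest", ".dat");
            f.deleteOnExit();
            byte[] block = new byte[messageSize];
            RandomAccessFile raf = new RandomAccessFile(f, "rw");
            long start = System.currentTimeMillis();
            for (int i = 0; i < count; i++) {
                raf.write(block);
                raf.getFD().sync(); // force the block to disk, as a persistent send must
            }
            long elapsed = System.currentTimeMillis() - start;
            raf.close();
            return count * 1000.0 / Math.max(elapsed, 1);
        }

        public static void main(String[] args) throws Exception {
            double rate = measureFlushRate(2048, 200);
            System.out.println("Synced writes/sec: " + rate);
        }
    }
    ```

    If the reported rate is close to your 50-60 messages/second ceiling, the disk is the bottleneck rather than the broker code.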

    Or just check out the performance on faster disks (RAID, SCSI &c).

  4. check disk flush rate

    How do you check the disk flush rate? And is the flush rate a factor for the log files (if they are large, on the order of GB) used by app and web servers?