Avoid JVM de-optimization: Get your Java apps running fast right from the start

Many organizations warm up their JVMs before heavy loads and transaction volumes occur, but well-intentioned routines that are poorly planned can actually de-optimize the Java runtime. Here are some tips from some of the industry's foremost performance experts on how best to prime your Java Virtual Machines before big workloads hit.

For the average application, sacrificing a little speed up front in return for lower latency later on may not seem like a big deal. But in the financial markets, fortunes are made and lost in the first few minutes after the opening bell, and that means performance must be optimized right from the word 'go.' Accordingly, it is in this arena that the most aggressive steps are being taken to ensure Java virtual machines (JVMs) are most thoroughly and consistently optimized. DevOps professionals go to great lengths to essentially warm up the JVM prior to market open, hoping to fully optimize their computing platforms for the onslaught that begins with the opening bell. Unfortunately, the way the JVM is primed and the way it actually gets used aren't always the same, and many organizations are doing themselves a disservice when they try to prep their JVMs for an upcoming workload, creating a performance paradox in which attempts at optimization lead to a de-optimized runtime.

Sometimes warming up the JVM isn't enough

After 10,000 operations doing the right thing, they optimize for that and everything goes fast again.

Gil Tene, Azul Systems CTO

The financial industry is one of the highest-profile and most common victims of the performance paradox associated with trying to warm up a JVM. "The codes you run through when you do real trades can be slightly different than the codes you run through when you warm up," says Gil Tene, CTO and co-founder of Azul Systems. "This results in a very annoying behavior in Java. As the market opens, when the real trade hits your system that is already optimized and ready to run fast, that trade goes through a code path you never saw before. Because of that, the JIT compilers de-optimize the code, go back through interpreter execution, and learn quickly what the right thing to do is. After 10,000 operations doing the right thing, they optimize for that and everything goes fast again." Of course, the problem is that after 10,000 operations, optimization is happening too late. Critical code needs to run fast the first time, the second time, and every other time it is run in the future.
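The trap Tene describes can be sketched in a few lines of Java. This is a minimal, hypothetical example (the method and class names are illustrative, not from any real trading system): the warm-up loop runs well past the rough 10,000-invocation threshold at which HotSpot's JIT typically compiles a method, but it only ever exercises one branch, so the compiled code speculates the other branch is never taken.

```java
// Sketch of a warm-up routine that primes only one code path.
// Assumption: HotSpot compiles process() after roughly 10,000 calls
// (the classic -XX:CompileThreshold figure); exact behavior varies
// with tiered compilation settings.
public class WarmUp {
    // Hypothetical trade-processing method with two code paths.
    static long process(long quantity, boolean crossVenue) {
        if (crossVenue) {
            return quantity * 3;   // path the warm-up below never touches
        }
        return quantity * 2;       // the only path the JIT ever observes
    }

    public static void main(String[] args) {
        long sink = 0;
        // Warm up well past the usual compile threshold...
        for (int i = 0; i < 20_000; i++) {
            sink += process(i, false);   // ...but only down the common path
        }
        // First live cross-venue trade at market open: a branch the
        // compiled code may have speculated away, so the JVM can
        // de-optimize, fall back to the interpreter, and re-learn.
        sink += process(100, true);
        System.out.println(sink);
    }
}
```

Running with `-XX:+PrintCompilation` (a standard HotSpot diagnostic flag) is one way to watch the compile and de-optimize events this sketch is meant to provoke; the functional result is the same either way, but the first `crossVenue` call pays the latency cost.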

How to eliminate de-optimization problems

There is no silver bullet that will slay every optimization problem. However, performance optimization advice offered by Tene includes:

  • Optimizing two paths rather than just one side, limiting the risk of inaccurate guesswork on the part of the JVM.
  • Employing aggressive class loading that loads but does not initialize a class until the right time (controlling this step with APIs).
  • Reusing successful optimization patterns from one day to the next, shortening the optimization cycle by taking historical activity into account.
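The class-loading tip above can be sketched with the standard three-argument `Class.forName` overload, which loads a class without running its static initializer, leaving the application in control of when that initialization cost is paid. The `PricingEngine` class here is a hypothetical stand-in for an expensive-to-initialize component:

```java
// Sketch: load a class eagerly but defer its static initialization.
// Class.forName(name, initialize, loader) with initialize=false loads
// the class without running its static initializer, so the expensive
// static setup can be triggered deliberately at the right time.
public class LazyInit {
    static class PricingEngine {               // hypothetical component
        static { System.out.println("PricingEngine initialized"); }
        static int price(int qty) { return qty * 10; }
    }

    public static void main(String[] args) throws Exception {
        ClassLoader cl = LazyInit.class.getClassLoader();
        // Loaded, but the static initializer has NOT run yet.
        Class<?> c = Class.forName("LazyInit$PricingEngine", false, cl);
        System.out.println("loaded: " + c.getSimpleName());
        // First real use (or forName with initialize=true) runs it.
        System.out.println(PricingEngine.price(5));
    }
}
```

This is the "loads but does not initialize" control Tene alludes to: the class is resolved ahead of the critical window, while the initialization side effects fire only when the application chooses.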

With any machine-learning process, there will be a time cost involved. But using common sense from a human perspective certainly makes a difference. By optimizing properly, and eliminating as many de-optimization traps as possible, DevOps professionals are getting themselves one step closer to providing the maximum performance possible for their users.

How are you optimizing the performance of your JVM? Let us know.


Recommended Titles:

Java Performance by Charlie Hunt
Java Performance Tuning (2nd Edition) by Jack Shirazi
Java Performance and Scalability by Henry H. Liu
