Improving Java performance by minimizing Java Virtual Machine (JVM) latency

Users demand that their applications run fast, but working with Java bytecode presents optimization problems that other architectures do not encounter. Here we look at how to improve the performance of the Java Virtual Machine (JVM) by minimizing the latency of the bytecode-to-native-code compilation step.

The Java Virtual Machine (JVM) can perform some impressive optimizations to make deployed applications run faster. That said, the Java community remains well aware that the price paid for cross-platform compatibility is the time it takes for the intermediary step in which Java bytecode is turned into machine code that can run on the target deployment platform, be it Windows, Linux, AIX or a Sun workstation. But there are interesting ways to optimize this process, one of which is pre-compiling classes and methods into native code and even caching the result.
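The exact tooling for this varies by JVM vendor and release, so treat the following as a hedged illustration rather than a recipe from any particular vendor. Later HotSpot JDKs, for example, shipped an experimental ahead-of-time compiler, jaotc, that pre-compiles class files into a native shared library the JVM can load at startup; the class name and library name below are placeholders:

    # Pre-compile a class into a native shared library (HelloWorld is a placeholder)
    jaotc --output libHelloWorld.so HelloWorld.class

    # Tell the JVM to load the pre-compiled code at launch instead of JIT-compiling it
    java -XX:AOTLibrary=./libHelloWorld.so HelloWorld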

Optimization utilities and JVM switches

Want to get your JVM to spend less time fretting about which methods to permanently compile into native code, and more time actually doing it? Charles Nutter suggests using  -XX:CompileThreshold. "[It] sets the number of method invocations before Hotspot will compile a method to native code. The -server VM defaults to 10,000 and -client defaults to 1500. Large numbers allow Hotspot to gather more profile data and make better decisions about in-lining and optimizations. Smaller numbers reduce 'warm up' time."
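For illustration only (the application name and the third threshold value here are placeholders, not anything from Nutter's write-up), the flag goes on the launch command like this:

    java -server -XX:CompileThreshold=10000 MyApp   # the -server default
    java -client -XX:CompileThreshold=1500 MyApp    # the -client default
    java -server -XX:CompileThreshold=2000 MyApp    # compile hot methods sooner, trading profile data for a shorter warm-up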

This idea was explored further in 2012 by Arun Manivannan in his search for low JVM latency. Commonly used methods are compiled into native machine code so the JVM doesn't have to interpret the bytecode every time, and from that he draws a common-sense conclusion. "One's natural guess would be that reducing the number would mean more methods are converted to native code faster and that would mean faster application, but it may not be," says Manivannan. "Considerably low numbers would mean that the server starts considerably slower because of the time taken by the JIT to compile too many methods (which may not be used that often after all)."
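A quick way to see the trade-off Manivannan describes is to run a small, deliberately hot method under different thresholds. The class below is a hypothetical micro-benchmark sketched for this article, not code from Manivannan's post, and the loop counts are arbitrary:

    // HotLoop.java -- hypothetical illustration of JIT warm-up
    public class HotLoop {
        // A small method that becomes "hot" once it has been called enough
        // times to cross the JIT's CompileThreshold.
        static long sum(int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += i;
            }
            return total;
        }

        public static void main(String[] args) {
            long start = System.nanoTime();
            long result = 0;
            // 20,000 calls is enough to cross the default -server threshold of 10,000.
            for (int call = 0; call < 20000; call++) {
                result += sum(1000);
            }
            System.out.println("result=" + result
                    + " elapsed(ms)=" + (System.nanoTime() - start) / 1000000);
        }
    }

Running it with, say, -XX:CompileThreshold=100 and again with the default makes the point: the aggressive setting compiles sum() to native code sooner, but on a real server it also means the JIT spends startup time compiling many methods that may never become hot.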

The optimization balancing act

Of course, it's a catch-22. Compile too many methods to native code through premature optimization, and memory consumption and startup time suffer. Compile too few, or the wrong ones, and time is lost incrementally as the JVM keeps interpreting the same hot methods at suboptimal speeds. It can take a lot of testing, and a thorough knowledge of each application's behavior, to find the right balance.
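Part of that testing can be as simple as watching what the JIT actually does. A minimal sketch, assuming a HotSpot JVM (the application name is a placeholder):

    # Print a line for each method as it is queued and compiled to native code
    java -XX:+PrintCompilation MyApp

Comparing that output against the methods you know to be on the hot path shows whether the current threshold is compiling the right things.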

How are you optimizing the performance of your JVM? Let us know.

 

Recommended Titles:

Java Performance by Charlie Hunt
Java Performance Tuning (2nd Edition) by Jack Shirazi
Java Performance and Scalability by Henry H. Liu
