“Hey, can you drop by and take a look at something weird?” This is how I started looking into the support case that led to this blog post. The particular problem at hand was that different tools were reporting different numbers about the available memory.

In short, one of the engineers was investigating the excessive memory usage of a particular application which, to his knowledge, had been given 2G of heap to work with. But for whatever reason, the JVM tooling itself could not seem to agree on how much memory the process really had. For example, jconsole reported the total available heap to be 1,963M, while jvisualvm claimed it to be 2,048M. So which of the tools was correct, and why was the other one displaying different information?

It was indeed weird, especially since the usual suspects had been eliminated – the JVM was not pulling any obvious tricks, as:

  • -Xmx and -Xms were set equal, so the reported numbers would not change due to heap expansion at runtime
  • the JVM was prevented from dynamically resizing memory pools by turning off the adaptive sizing policy (-XX:-UseAdaptiveSizePolicy); see the example launch command after this list
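For reference, a launch command combining both of these settings might look like the following (MyApp is just a placeholder for the application's main class):

java -Xmx2g -Xms2g -XX:-UseAdaptiveSizePolicy MyApp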

Reproducing the difference

The first step toward understanding the problem was zooming in on the tooling implementation. Accessing the available memory information via standard APIs is as simple as the following:

System.out.println("Runtime.getRuntime().maxMemory()=" + Runtime.getRuntime().maxMemory());

And indeed, this is what the tooling at hand seemed to be using. The first step towards answering a question like this is to have a reproducible test case.

Having written and launched a test case using Runtime.maxMemory() made things darn obvious – even though I had specified the JVM to use 2G of heap, the runtime somehow could not find 85M of it.
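A minimal sketch of such a test case might look like the following (the class name MaxMemoryTest is a placeholder of mine):

public class MaxMemoryTest {
    public static void main(String[] args) {
        // Maximum amount of memory the JVM will attempt to use for the heap.
        long maxMemory = Runtime.getRuntime().maxMemory();
        System.out.println("Runtime.getRuntime().maxMemory()=" + maxMemory
                + " bytes (~" + (maxMemory / (1024 * 1024)) + "M)");
    }
}

Launched with java -Xmx2g -Xms2g MaxMemoryTest, the printed value came up roughly 85M short of the expected 2,048M.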

Finding the root cause

After being able to reproduce the case, I made the following observation – running with different GC algorithms also seemed to produce different results. Besides G1, which consumed exactly the 2G given to the process, every other GC algorithm seemed to consistently lose a semi-random amount of memory.
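This is easy to verify by launching the same test case with each collector selected explicitly – the flags below are the standard selection flags for the JDK 8 era collectors discussed in this post:

java -Xmx2g -Xms2g -XX:+UseG1GC MaxMemoryTest
java -Xmx2g -Xms2g -XX:+UseParallelGC MaxMemoryTest
java -Xmx2g -Xms2g -XX:+UseConcMarkSweepGC MaxMemoryTest
java -Xmx2g -Xms2g -XX:+UseSerialGC MaxMemoryTest

Only the G1 invocation reported the full 2,048M; the rest fell short by varying amounts.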

Now it was time to dig into the source code of the JVM, where, in the source of CollectedHeap, I discovered a hint towards the answer. The answer was rather well-hidden, I have to admit. But the hint was still there for the truly curious minds to find – referring to the fact that in some cases one of the survivor spaces might be excluded from heap size calculations.

From here it was tailwinds all the way – turning on GC logging (e.g. with -verbose:gc -XX:+PrintGCDetails) revealed that indeed, with a 2G heap, the Serial, Parallel and CMS algorithms all sized the survivor spaces at exactly the difference missing compared to -Xmx.
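The same arithmetic can be double-checked from within the process via the standard java.lang.management API. The sketch below (the class name HeapPoolSizes is my own) prints the maximum size of every heap memory pool, so the survivor pool can be compared against the amount missing from Runtime.maxMemory():

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;

public class HeapPoolSizes {
    public static void main(String[] args) {
        // List the maximum size of each heap pool (eden, survivor, old gen).
        // getMax() can return -1 when undefined; heap pools normally report a value.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP) {
                System.out.println(pool.getName() + ": max="
                        + pool.getUsage().getMax() / (1024 * 1024) + "M");
            }
        }
        System.out.println("Runtime.maxMemory()="
                + Runtime.getRuntime().maxMemory() / (1024 * 1024) + "M");
    }
}

With the Serial, Parallel or CMS collectors, eden plus one survivor space plus old gen should add up to Runtime.maxMemory(), with the second survivor space accounting for the part missing from -Xmx.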

The post was originally published on the Plumbr Java Performance Tuning blog.