No Objects Left Behind

Java's automated memory management has been a constant source of confusion for many trying to obtain predictable response times. A significant part of this confusion comes from preconceptions about how garbage collection (GC) works, as well as from our experiences with earlier, less mature implementations. Those experiences have led many development teams to take steps that actually confuse the situation even further. Given all this, it is no wonder that Brian Goetz's recent round of talks on performance myths, and his writings that say "go ahead and make a mess, the garbage collector will look after cleaning up for you," have created such a stir.

“Go ahead, make a mess” redux

In his recent series of talks, Brian makes the claim that memory allocation in Java is far cheaper than it is in C/C++. The justifications for this are based on his in-depth conversations with those responsible for developing the JVM. Even knowing the source of Brian's information, a number of people have been openly critical of his claims because they simply do not match their experience in the trenches. It is not the first time that theory and real-world experience have collided head on. For example, if you can find the advice Sun originally gave on how to tune the 1.4.1 JVM's generational spaces, you will see that today's advice is quite different.

The advice given in that document appears to be based on the assumption that memory-to-memory copying of objects from one space to another is expensive. The conclusion one would draw from this is that the JVM should be configured to move objects from the young to the old space as soon as is practical. In technical parlance, this meant shrinking the survivor spaces so that objects would be promoted to the old generation almost immediately. The first time I tried to follow this advice I quickly discovered that I needed to ignore the "expert advice" and instead make the survivor spaces large enough to delay promotion to the old space. Now we have some new "expert advice" that is being questioned by other "experts". The question is: what are the critics seeing that leads them to question Brian's advice? Even more importantly, who should we be listening to?
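To make the trade-off concrete, these are the standard HotSpot flags that govern survivor-space sizing and promotion (the values and the application name MyApp here are purely illustrative, not recommendations):

```shell
# Delay promotion by giving objects more chances to die young:
# a lower SurvivorRatio makes the survivor spaces larger relative
# to eden, and a higher tenuring threshold keeps objects in the
# young generation for more collection cycles before promotion.
java -Xms512m -Xmx512m \
     -XX:SurvivorRatio=6 \
     -XX:MaxTenuringThreshold=15 \
     -verbose:gc MyApp
```

Shrinking the survivor spaces, as the old advice suggested, would mean raising SurvivorRatio and lowering the tenuring threshold instead.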

Trashy Coding Practices

There are many coding and design practices that can make modern garbage collection expensive. In fact, the ways we can hurt the garbage collector far outweigh the things we can do to help it. Which design and coding practices hurt and which help is a question that is easy to ask (i.e., benchmark) but almost impossible to answer. It is notoriously difficult to write a micro-benchmark that gives a correct answer to even the most trivial of questions, and this question is far from trivial. That leaves us conjecturing about whether or not a particular coding practice will cause a garbage collector to work harder. The complexity of the Java Virtual Machine doesn't allow us the luxury of making safe assumptions with any reasonable degree of certainty. Factor in the effects of the GC implementation and the VM configuration, and finding an answer seems like a hopeless endeavor.

As hopeless as it may seem in the general case, we do have tools that can help us draw some conclusions. The most basic of these tools is the information provided by verbose GC logging. You can turn on this lightweight logging mechanism with the standard flag -verbose:gc. The details of the individual log entries are less interesting than the view a good tool can give you of them. In this case the graphics were produced with HPJTune (a free download from HP).
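If you want a log of your own to feed into such a tool, a trivial allocation loop is enough to exercise the collector (the class name and sizes here are invented for illustration). Run it with java -verbose:gc GCLogDemo and each collection will print a line reporting heap occupancy before and after the collection, along with its duration:

```java
// Run with: java -verbose:gc GCLogDemo
public class GCLogDemo {

    // Allocate many short-lived buffers; nearly all of them are dead
    // by the time the young generation fills and a collection runs,
    // so very little is "left behind" for the collector to copy.
    static long churn(int iterations) {
        long sink = 0;
        for (int i = 0; i < iterations; i++) {
            byte[] scratch = new byte[4096];
            sink += scratch.length;
        }
        return sink;
    }

    public static void main(String[] args) {
        System.out.println("allocated bytes: " + churn(100_000));
    }
}
```

Because the scratch buffers never escape the loop, the log should show cheap, frequent young-generation collections, which is exactly the behavior the charts below let you confirm or refute for a real application.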

Figure 1. Heap Usage after GC and GC Duration

Consider the graphics in figure 1. The chart on the left shows the amount of heap memory in use after the garbage collector has run. The chart on the right shows the duration of each GC cycle for the same data set. These charts were produced by an application that was starved for memory. Though memory starvation causes its own set of problems, it doesn't take away from the correlation between the number of bytes left behind and GC duration. The counter-intuitive observation is that the GC cycle that recovered the most memory ran in the least amount of time. In this run, a massive number of objects were "freed", or de-referenced, in one timeout event while the application was resting; under these conditions the visual effect is greatly exaggerated. The correlation becomes less visible when the application is still working and is not starved for memory, but it can still be seen, as figure 2 shows.

Figure 2. Heap Usage after GC and GC Duration

Even though there is evidence of a memory leak in these charts, the application is not starved for memory. Moreover, the observation that GC time is correlated with the number of bytes left behind is still evident. There are other cases where the correlation is less obvious, but the more of these graphics you look at, the clearer it becomes that the observations tie in nicely with a key component of Brian's message: the more we leave behind, the worse things get. The caveat to Brian's strong message is that this may be the general case, but it is not always the case. So if Brian's critics are not wrong in their observations, how can Brian be right? Is part of the answer that the critics are looking at applications where data is very broadly scoped, or otherwise pinned in memory for far longer than it should be? In both of the cases above, we are looking at exactly that scenario: objects that are pinned in memory for longer than they should be.

Create More Garbage Sooner

If we believe what the graphics are telling us, that the cost of garbage collection is related to the number of objects left behind, the obvious solution is not to leave objects behind. The most reliable way to release objects as soon as possible is to narrow their scope. We can move statically scoped variables to instance scope, and instance variables to local scope. We need to ask questions such as: "Can we eliminate this variable altogether? Is there any element in the design that is forcing us to hold onto data for longer than necessary?" Another important aspect is the JVM's configuration: is there anything we can do there to help the garbage collector cope with higher rates of object churn? In the vast majority of cases the answers to these questions are yes, yes, and yes. Beware, though: this advice gets muddled when you consider objects that have a high cost of acquisition.
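As a sketch of what narrowing scope looks like in code (the class, field, and method names here are invented for illustration), compare a buffer held in a static field against one scoped to the method that needs it:

```java
import java.util.ArrayList;
import java.util.List;

public class ScopeDemo {

    // Broadly scoped: this list lives as long as the class is loaded,
    // so every element added to it survives every GC cycle.
    private static final List<byte[]> PINNED = new ArrayList<>();

    static int broadlyScoped(int n) {
        for (int i = 0; i < n; i++) {
            PINNED.add(new byte[1024]);   // pinned until the JVM exits
        }
        return PINNED.size();
    }

    // Narrowly scoped: the list becomes unreachable as soon as this
    // method returns, so a young-generation collection can reclaim
    // it cheaply instead of copying it from space to space.
    static int narrowlyScoped(int n) {
        List<byte[]> scratch = new ArrayList<>();
        for (int i = 0; i < n; i++) {
            scratch.add(new byte[1024]);
        }
        return scratch.size();            // scratch is garbage after return
    }

    public static void main(String[] args) {
        System.out.println(broadlyScoped(10));
        System.out.println(narrowlyScoped(10));
    }
}
```

Both methods do the same work, but only the first leaves objects behind for the collector to drag along; the second is the pattern that keeps the "bytes left behind" curve flat.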

From the code and design perspective, the teachings of (Pragmatic) Dave Thomas and Andy Hunt still offer some of the best advice to follow when making design choices. It behooves me to refer you to their work rather than butcher it in this brief article. From its humble beginnings, the JVM tuning guide has improved dramatically in both the depth and the quality of the information it provides. By following the advice in that document you can help the garbage collector run more efficiently. Even so, you will run into situations where you may want to consider employing the services of a tuning specialist.


What is clear in all of this is that the Java performance landscape you once knew is no longer there, and the landscape as it is today will most likely not be here tomorrow. That necessitates setting aside our current biases while listening to the advice offered by others. We need to look at what has changed and evaluate the advice for correctness under the new conditions. So while the critics' observations may not be wrong, what Brian is really saying is: beware, because prematurely optimizing your application based on old information may be tantamount to shooting yourself in the foot.


Kirk Pepperdine is a performance tuning specialist offering training and consulting services. He can be reached at
