One of the frustrating ironies of JVM optimization is that optimization techniques can sometimes diminish efficiency, especially in the short term. A case in point: as part of its de-optimization and re-optimization cycle, the JVM loads classes it predicts will be needed, but if a class isn't actually used within a certain period of time, it gets unloaded, only to be reloaded the first time it is used. The result is a performance paradox in which the attempted optimization consumes more resources than if no optimization had been attempted at all.
Swallowing a spider to optimize a fly?
Naturally, a great deal of thought has gone into overcoming the cost of loading a class, only to unload it needlessly before it is used for the first time. "To overcome the first time penalty," suggests Lansa technical support, "a custom Startup class can be written to create a thread that runs every 20 minutes to load the classes and keep them in memory." Of course, a custom implementation like this needs to be tested to make sure the solution isn't more problematic than the problem it is trying to solve, and it must be tuned to ensure it is working effectively. Is 20 minutes the correct interval, or is it 15? Maybe the correct interval is 30 minutes. Iterative testing is always required to ensure that an optimization routine is running optimally.
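The keep-alive idea quoted above might be sketched along the following lines. The class names and the `ClassKeepAlive` name itself are hypothetical placeholders; a real startup class would list the application's own frequently used classes and would add proper logging.

```java
// A minimal sketch of a startup "keep-alive" class, as described above.
// It spins up a daemon thread that periodically touches a list of classes
// so they stay loaded. Class names here are illustrative placeholders.
public class ClassKeepAlive {

    // Hypothetical list of fully qualified names of classes worth keeping warm
    private static final String[] CLASS_NAMES = {
        "java.math.BigDecimal",
        "java.time.ZonedDateTime"
    };

    public static void start(long intervalMinutes) {
        Thread keepAlive = new Thread(() -> {
            while (true) {
                try {
                    for (String name : CLASS_NAMES) {
                        // Class.forName loads the class if necessary and
                        // returns a reference, keeping it reachable.
                        Class.forName(name);
                    }
                    Thread.sleep(intervalMinutes * 60_000L);
                } catch (ClassNotFoundException | InterruptedException e) {
                    return; // a real implementation would log and recover
                }
            }
        });
        keepAlive.setDaemon(true); // don't prevent normal JVM shutdown
        keepAlive.start();
    }

    public static void main(String[] args) {
        start(20); // the 20-minute interval suggested above; tune as needed
    }
}
```

Note that the interval is a parameter precisely so the iterative testing described above can try 15, 20 or 30 minutes without code changes.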
Another optimization suggestion that might help avoid the constant loading and unloading of frequently used Java components is a heavier hammer: exempt specific classes from garbage collection entirely, so they are never swept out of memory. Steps like these can help keep the JVM hot, so it doesn't have to be warmed up over and over again.
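The JVM doesn't expose a per-class garbage collection switch, but classes are only unloaded when their defining class loader becomes unreachable, so the same effect can be achieved by holding strong references to the `Class` objects you want to keep. A minimal sketch of that idea follows; the `ClassPinner` name and the example class are assumptions, not a standard API.

```java
import java.util.ArrayList;
import java.util.List;

// Classes are eligible for unloading only when their defining ClassLoader
// becomes unreachable. Holding a strong reference to a Class object (and
// therefore to its loader) keeps it out of any unloading cycle.
public class ClassPinner {

    // Strong references held for the lifetime of the application
    private static final List<Class<?>> PINNED = new ArrayList<>();

    public static Class<?> pin(String className) {
        try {
            Class<?> c = Class.forName(className);
            PINNED.add(c);
            return c;
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("unknown class: " + className, e);
        }
    }

    public static void main(String[] args) {
        pin("java.math.BigDecimal"); // illustrative choice of class
        System.out.println(PINNED.size() + " class(es) pinned");
    }
}
```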
Thread dumps and shared memory space
But how does one figure out which classes should be kept in memory regardless of how long it has been since their last invocation, and which ones should remain part of the regular garbage collection process? JVM tools and switches such as the -Xshare:dump flag help identify candidates for the do-not-collect list. That switch also writes a shared archive file to disk containing core class data that can be accessed by every JVM running on a given system. Running the application under normal usage and taking regular dumps can provide insights into which classes the JVM uses regularly. See which classes tend to remain in memory over time, and tell the system GC to leave those classes alone.
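As a rough illustration of this workflow, the commands below dump the class data sharing (CDS) archive and then run an application with class load and unload logging turned on. The `app.jar` name is a placeholder, and the `-Xlog` unified logging syntax assumes JDK 9 or later.

```shell
# Write the default class data sharing (CDS) archive to disk; the resulting
# classes.jsa file holds core class metadata that JVMs on this machine
# can map read-only at startup.
java -Xshare:dump

# Run the application (app.jar is a placeholder) with unified logging of
# class loading and unloading (JDK 9+), to observe which classes are used
# regularly and which get unloaded between uses.
java -Xlog:class+load,class+unload -Xshare:on -jar app.jar
```

Reviewing the load/unload log over a normal working day shows which classes keep reappearing, and those are the candidates to pin in memory.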
Administrators and operations personnel should always be thinking about optimization and performance, but at the same time, they should never overthink their situation. In most cases, the defaults, or even the basic settings recommended for a given industry or application, are good enough. Don't get too fancy with the various JVM settings. Sometimes attempts to fix performance problems turn out to be the cause of them.
How are you optimizing the performance of your JVM? Let us know.