You can now execute your test case and perform assertions on anything about the state of memory. For example, you could make sure that a class was unloaded (i.e. its ClassLoader was released), or check that the number of instances in memory is steady and nothing unexpected is being created. This is valuable because, even with garbage collection, in complex software there is always the possibility of a thread holding on to objects that were supposed to be released. Once you find a memory leak, keeping the memory leak test in your test suite will raise an alarm if something starts to leak again in the future: as soon as bad code is checked in, the test will fail, so the leak can be fixed as early as possible.
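The class-unloading/leak assertion described above can be sketched in plain Java without any profiler agent, using a WeakReference as the leak detector. The `Cache` class and its methods here are hypothetical stand-ins for any component under test; the GC loop is a best-effort approximation of the forced collection a profiling agent would provide.

```java
import java.lang.ref.WeakReference;

// Hypothetical component under test; stands in for anything that might leak.
class Cache {
    private Object held;
    void put(Object o) { held = o; }
    void clear() { held = null; }
}

public class LeakTest {
    public static void main(String[] args) throws Exception {
        Cache cache = new Cache();
        Object payload = new byte[1024];
        cache.put(payload);
        // Weak reference lets us observe whether payload becomes collectible.
        WeakReference<Object> ref = new WeakReference<>(payload);
        payload = null;

        cache.clear(); // comment this out and the test reports a leak

        // Best-effort GC; a profiler agent could force a real full GC instead.
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(50);
        }
        System.out.println(ref.get() == null ? "NO LEAK" : "LEAK DETECTED");
    }
}
```

If the component forgets to release the object, the weak reference stays populated after collection and the test fails, which is exactly the "raise an alarm" behavior described above.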
Clebert Suconic has posted "JBossProfiler: Beyond Finding Memory Leaks," detailing a module that allows programmers to test the expected state of memory after a test, including instance counts, classes loaded, and more.
- Posted by: Joseph Ottinger
- Posted on: June 12 2006 09:13 EDT
- Testing Resource Consumption by William Louth on June 12 2006 10:08 EDT
Integrating such profiling information into test cases is an area under active research and development. There have been some attempts at this in the past, but most failed because of:
- sensitivity of the comparison to slight (and valid) application changes (a test that fails constantly becomes useless)
- lack of control over JVM runtime behavior (GC)
- lack of control over the OS (process/thread scheduling)
- the length of time spent in tooling to resolve differences (both minor and major)

The problem I see currently is that most tools and ad hoc solutions do not make a distinction between information required for asserts and information needed for problem resolution. The more fine-grained the model used in the asserts, the less likely developers are to spend time creating tests, because of the resulting test failure reports; of course, to actually resolve a problem you do need that granularity. With regard to resource consumption, as opposed to memory leaks, which the blog entry appears to pertain to (though I am not so sure), I recommend creating conditions on much more coarse-grained metrics with a specified tolerance level. The resource metrics I felt were important when designing JXInsight's Tracer API include:
- Allocation Sizes (JVMPI)
- Clock Time (high-resolution clock counter)
- CPU Time (JVMPI)
- Waiting Time (JVMPI)
- Blocking Time (JVMPI)
- GC Time (JVMPI)
- Clock Adjusted (Clock Time less the sum of GC + Wait + Block)

Our Tracer API is published here: http://www.jinspired.com/products/jxinsight/api/com/jinspired/jxinsight/trace/Tracer.html An example of its use in benchmarking call stack generation across different JVMs and APIs is published here: http://www.jinspired.com/products/jxinsight/callstackbenchmark.html It would be extremely easy to create a JUnit or TestNG extension around our Tracer API, though I think it would be much better to focus on other resource metrics (that span across tiers) for enterprise Java applications.
Kind regards, William Louth JXInsight Product Architect CTO, JInspired "JEE tuning, testing, tracing and monitoring with JXInsight" http://www.jinspired.com
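The coarse-grained, tolerance-based assertion advocated in the post above can be sketched in plain Java. The budget constant and `workUnderTest` method are illustrative names, not from any real API; `Runtime` heap readings are a rough stand-in for the agent-level metrics JXInsight's Tracer would supply.

```java
// Sketch: assert that a unit of work stays within a coarse resource budget,
// so the test only fails on gross regressions, not on minor, valid drift.
public class ResourceBudgetTest {
    static final long RESOURCE_BUDGET_BYTES = 10 * 1024 * 1024; // 10 MB tolerance

    static void workUnderTest() {
        // Allocate roughly 1 MB, well inside the budget.
        byte[][] chunks = new byte[16][64 * 1024];
        chunks[0][0] = 1;
    }

    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.gc();
        long before = usedHeap();
        workUnderTest();
        long consumed = usedHeap() - before;
        // Coarse assertion with tolerance, per the recommendation above.
        System.out.println(consumed < RESOURCE_BUDGET_BYTES
                ? "WITHIN BUDGET" : "OVER BUDGET");
    }
}
```

The deliberately wide tolerance is the point: a test like this survives small, valid application changes that would break an exact per-class comparison.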
We are using JVMTI. JVMPI was bogus for memory comparisons, while JVMTI is quite efficient at them. I am actually forcing a full GC through JVMTI, which solves the problem of lack of control over GCs. With this class/technique, if you have a failure you really have a failure, meaning something unexpected is being created. The tests I have created so far are valid. The only thing you need is to know what's expected to be created; in other words, you have to know what your test is supposed to do.
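JVMTI's ForceGarbageCollection call is only reachable from a native agent; from pure Java the closest approximation of the "force a full GC, then measure" pattern described above is to request GC repeatedly until the used-heap reading settles. All names here are illustrative, and the thresholds are arbitrary.

```java
// Sketch of "settle the heap, then snapshot it" using only java.lang.Runtime.
public class StableSnapshot {
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Request GC until two consecutive readings agree, then return the reading.
    static long settledHeap() throws InterruptedException {
        long prev = -1, cur = usedHeap();
        for (int i = 0; i < 20 && Math.abs(cur - prev) > 16 * 1024; i++) {
            System.gc();
            Thread.sleep(50);
            prev = cur;
            cur = usedHeap();
        }
        return cur;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = settledHeap();
        byte[] retained = new byte[4 * 1024 * 1024]; // deliberately kept alive
        long after = settledHeap();
        // After a settled snapshot, growth means something is genuinely retained.
        System.out.println((after - before) > 3 * 1024 * 1024
                ? "GROWTH DETECTED" : "STEADY");
        if (retained[0] != 0) return; // keep 'retained' reachable until here
    }
}
```

Because both snapshots are taken on a settled heap, a measured difference reflects retained objects rather than transient garbage, which is what makes "if you have a failure you really have a failure" possible.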
Hi Clebert, Does your test verify the actual resource consumption throughout the measurement interval, or is the comparison based on the heap structure at the end of the test? A JVMPI agent can easily handle counting object creations as well as summing the actual bytes consumed. I fail to see how a JVMTI-based agent could improve on this simple and efficient profiling approach, assuming the goal is to understand actual resource consumption. Here resource consumption equates to both retained and temporary allocations. Note that the difference between two memory measurements does not necessarily reflect the actual resource consumption. In fact there are two aspects to the testing: (1) how efficient were my object (counts) and memory (bytes) allocations during the measurement interval, and (2) is the retained memory (referenced heap) within acceptable/expected levels? Where JVMTI is better than JVMPI (ignoring internal implementation issues) is in heap collection and comparison (test 2). Would you agree that for enterprise applications, where the execution of a test case can span multiple JVMs, it would be near impossible to ever have the asserts pass? In this context it is best to base assertions on coarse-grained measurements (did I consume less than 10 MB during my distributed transaction execution?). I am sure that at the micro-test level the JVMTI facade and inventory analysis is extremely effective. With some filtering of classes it is also effective for application testing. I am still trying to keep an open mind on low-level comparisons, but from my experience the costs are prohibitive for development and test teams. It would be great to see this level of testing, but I would first like to see application architects start to specify and test resource consumption for improved production capacity planning. I would like to hear from others about their experiences with other profiling APIs that offer similar functionality.
Were you attracted to the feature? Did the actual usage of the feature live up to expectations? What are the downsides? What needs improving? Kind regards, William Louth JXInsight Product Architect CTO, JInspired "JEE tuning, testing, tracing and monitoring with JXInsight" http://www.jinspired.com
"Does your test verify the actual resource consumption throughout the measurement interval or is the comparison based on the actual heap structure at the end of the test?" It verifies the actual state of the heap. produceInventory will return a matrix of Class, DataPoint; a DataPoint will tell you how many instances and bytes a Class has. So, with that in hand, you will have something like: String = X instances, Y bytes; Integer = X instances, Y bytes. This is not based on an interval of consumption, so there are no events being captured between snapshots. The only overhead is during the call to produceInventory, which takes perhaps one or two seconds to count the entire heap (it is fast even on large heaps).
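The Class/DataPoint matrix and the comparison between two snapshots can be modeled in plain Java. This is a toy stand-in: the real produceInventory walks the live heap via JVMTI, whereas here the snapshots are hand-built maps so the diff-and-assert logic itself is runnable.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the inventory comparison: class name -> {instance count, bytes},
// mirroring the Class -> DataPoint matrix described in the post.
public class InventoryDiff {
    static Map<String, long[]> snapshot(Object[][] rows) {
        Map<String, long[]> inv = new HashMap<>();
        for (Object[] r : rows)
            inv.put((String) r[0], new long[]{(Long) r[1], (Long) r[2]});
        return inv;
    }

    public static void main(String[] args) {
        Map<String, long[]> before = snapshot(new Object[][]{
            {"java.lang.String", 1000L, 48000L},
            {"java.lang.Integer", 200L, 3200L}});
        Map<String, long[]> after = snapshot(new Object[][]{
            {"java.lang.String", 1000L, 48000L},
            {"java.lang.Integer", 450L, 7200L}});

        // Report any class whose instance count grew between the snapshots.
        for (Map.Entry<String, long[]> e : after.entrySet()) {
            long grown = e.getValue()[0]
                    - before.getOrDefault(e.getKey(), new long[]{0, 0})[0];
            if (grown > 0)
                System.out.println(e.getKey() + " grew by " + grown + " instances");
        }
    }
}
```

A leak test would fail whenever an unexpected class appears in this growth report, which is the snapshot-based (rather than event-based) approach the reply describes.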
"Where JVMTI is better than JVMPI (ignoring internal implementation issues) is in heap collection and comparison - test 2." JVMPI doesn't give you heap navigation functions. You would have to capture object creations and object releases, and that API is rather slow (I know I'm talking about internal implementation, but it is a true statement). JVMTI is much better in that sense.
"Would you agree that for enterprise applications where the execution of a test case can span multiple JVMs it would be near impossible to ever have the asserts pass?" The test has to be built to validate the state of a single JVM. To use produceInventories you have to be in the same JVM; it's just a matter of building the test properly. Also, it is not safe to serialize the returned HashMap, as the ClassLoader used to deserialize it could not safely resolve the classes, so it has to stay in the same JVM. If you need to validate several servers at the same time, you might need a harness that validates each JVM individually. I guess if people start doing the basics with this, we could move on to other, more complex things. Just to think about it: an interesting test case someone asked me about was validating that a given class has only one instance. (For example, there is a requirement in the Portlet specification that a Portlet should have only one instance in the JVM.) You could use getObjects(String className) or getObjects(Class clazz) from JVMTIInterface to validate such a scenario. You could also expand on that for HttpSession variables; maybe a JSP developer could verify the size of sessions, or that kind of thing. Clebert Suconic
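The single-instance Portlet check mentioned above can be simulated in plain Java. Instead of walking the heap with getObjects, this toy version has each instance register itself, so the assertion itself can run anywhere; the commented lines show (hypothetically, based on the method names in the post) what the real heap-walking call would look like.

```java
import java.util.ArrayList;
import java.util.List;

// Toy stand-in for a heap-based instance count: each MyPortlet registers
// itself, so we can assert the Portlet-style "exactly one instance" rule.
public class SingletonCheck {
    static class MyPortlet {
        static final List<MyPortlet> INSTANCES = new ArrayList<>();
        MyPortlet() { INSTANCES.add(this); }
    }

    public static void main(String[] args) {
        new MyPortlet(); // the container should create exactly one

        // With the heap-walking API described in the post this would be roughly:
        //   Object[] objs = jvmtiInterface.getObjects(MyPortlet.class);
        //   assert objs.length == 1;
        int count = MyPortlet.INSTANCES.size();
        System.out.println(count == 1 ? "SINGLE INSTANCE" : "VIOLATION: " + count);
    }
}
```

The heap-based version is strictly stronger, since it also catches instances created by code you do not control, which is exactly why a profiler-backed assertion is interesting here.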