
News: SAP Memory Analyzer downloadable for free

  1. SAP Memory Analyzer downloadable for free (54 messages)

    SAP Memory Analyzer, a tool for analysing Java heap dumps, can now be downloaded for free. The unique features of this tool are:
    • Provides powerful functions for finding the biggest objects in the heap, such as "retained size" and dominator tree analysis.
    • Easy-to-use Eclipse RCP based interface.
    • Analyzes heap dumps > 1 Gbyte with up to around 20 million objects on a 32-bit machine, and even bigger heap dumps on a 64-bit machine.
    • Uses high-performance algorithms and indices that speed up most operations after the heap dump has been initially parsed.
    • Comes with special SAP J2EE features, such as the ability to show the memory consumption for sessions and classloaders.
    • Works with the built-in heap dumps of the following VMs: Sun JVMs (1.4.2_12 or higher and 1.5.0_07 or higher), HP-UX VM (1.4.2_11 or higher) and SAP JVMs (since 1.5.0) - a minimal sketch for triggering such a dump follows this list.
    • Free download.
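
    For the Sun JVMs listed above, such a built-in heap dump can, for example, be written automatically on an OutOfMemoryError. A minimal sketch (the HotSpot flags are standard; the application jar and dump path are placeholders):

        java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps -jar myapp.jar

    The resulting .hprof file can then be opened directly in SAP Memory Analyzer.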

    Threaded Messages (54)

  2. Link is not working. This is really a bad example - shouldn't they take more care when they design a page?
  3. Real Download Link[ Go to top ]

    Hi, the link points to the blog of Markus. The real download link is this one: https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/8070021f-5af6-2910-5688-bde8f4fadf31 Or this one for the Wiki on the tool: https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis Regards, Vedran
  4. Hi, Link works fine and I fixed the frame in a frame problem. Sorry for that. Regards, Markus
  5. Fortunately, no one has ever been known to run any other OS, so it's a good thing they didn't waste any resources porting it.
  6. Client is windows[ Go to top ]

    The tool is Windows-only. You can still get a heap dump from one of the supported Sun JVMs regardless of the OS. Joe
  7. 64 bit version[ Go to top ]

    Any link to a 64 bit version or at least to an installer able to run in a 64 bit operating system?
  8. Platform Support[ Go to top ]

    Hi, an installer is only available for Windows 32 Bit, but the Wiki (https://www.sdn.sap.com/irj/sdn/wiki?path=/display/Java/Java+Memory+Analysis) says: "We don't support more platforms at the moment, however, since we are based on Eclipse 3.2.0 you can try this: Download Eclipse 3.2.0 for your platform, copy our plug-ins into the plugins directory and open the Memory Analysis perspective. On Linux (x86_64/GTK 2) this proved to work." It works on Linux 64 Bit - it's just some manual work. If you already have an Eclipse installation on your platform, try to copy the plugins to your Eclipse plugins folder, restart and open the correct perspective within Eclipse. Cheers, Vedran
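
    As a rough sketch of the manual steps described above, assuming a Linux shell and illustrative paths (neither the paths nor the exact archive layout are taken from the original instructions):

        # unpack the Memory Analyzer plug-ins and copy them into an existing Eclipse 3.2.0 install
        unzip MemoryAnalyzer.zip -d /tmp/memoryanalyzer
        cp -r /tmp/memoryanalyzer/plugins/* ~/eclipse/plugins/
        # restart Eclipse with -clean so the new plug-ins are picked up, then open the Memory Analysis perspective
        ~/eclipse/eclipse -clean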
  9. Non installer download[ Go to top ]

    Vedran, where is the non-installer download? I can't find any link to it. As I mentioned, the installer does not work on 64-bit Windows machines, so there is no way to directly grab the plugins folder.
  10. Hi Martin, you can open the EXE with a tool like WinRar/WinZip, look for the largest file, Resource1.zip, extract the largest file therein called MemoryAnalyzer.zip and voilà, you have your plugins in the plugins folder. Regards, Vedran
  11. Only humans (and really smart dogs) can analyze.
  12. It's really fast![ Go to top ]

    Thanks for this contribution! It's working really fast, even with a huge heap dump! For generating heap dumps you may also use the SendSignal tool: http://www.latenighthacking.com/projects/2003/sendSignal BTW: Does anybody know a tool that analyzes a VM's garbage collection behaviour and reports the best suited VM options?
  13. Re: It's really fast![ Go to top ]

    Hi, Thanks for your feedback. Regarding your question about a tool that analyses the VM's GC behaviour, the only tool that I know of (off the top of my head) is http://java.sun.com/developer/technicalArticles/Programming/GCPortal/ . I can't tell you whether it works as advertised, because I've not yet had the time to try it. In my experience tuning GC parameters such as heap size is easy enough that it can be done without a tool. The more difficult-to-optimize parameters, such as the tenuring threshold, will (usually) not improve your performance by a big factor. These options also tend to be application dependent, and so it would be very difficult to find an optimal value for all the different applications that may run on your J2EE Server. Regards, Markus
  14. Re: It's really fast![ Go to top ]

    The problem with any GC "take a guess for me" tool is that:
    - Any change within the application code, the workload patterns, or external resource responses could invalidate a number of the tweaks applied.
    - The knowledge is static and embedded within the tool, whereas with active capacity management (monitoring, planning, ...) the knowledge base grows and adapts across systems and applications, enabling improved resource forecasting.
    I personally prefer tools that aid in rapid knowledge acquisition (semi-automated problem analysis) and retention (recorded behavioral execution patterns, symptoms, causes/factors, ...). When I see high levels of GC I look for the root cause within the application (not just the code but the concurrent workload and request/transaction mixes) before attempting JVM tweaks unless it is obvious the parameters are completely misaligned to usage. Kind regards, William Louth JXInsight Product Architect CTO, JINSPIRED "Performance Management for Java EE, SOA and Grid Computing" http://www.jinspired.com
  15. I forgot to mention that there is a tool to help determine the impact of GC on various systems, applications, components, and requests. Beautiful Evidence: Metric Monitoring http://blog.jinspired.com/?p=33 - William
  16. Good heap size?[ Go to top ]

    Thanks for the answers!
    In my experience tuning GC parameters such as heap size is easy enough that it can be done without a tool. The more difficult-to-optimize parameters, such as the tenuring threshold, will (usually) not improve your performance by a big factor.
    When I see high levels of GC I look for the root cause within the application (not just the code but the concurrent workload and request/transaction mixes) before attempting JVM tweaks unless it is obvious the parameters are completely misaligned to usage.
    So, you both guys recommend not to focus too much on GC parameters. That's good news for me. I always felt a bit guilty for not using these nerd flags. But what about the heap size? How do you estimate a good heap size for a system? For a web application I usually recommend the heap size -Xms400m -Xmx1000m per cluster node, no matter which JDK or OS. For sure, I have to take care that a node doesn't require more memory. What are your recommendations on heap size? Do you know any good Benchmark that focuses on heap size?
  17. Re: Good heap size?[ Go to top ]

    Heap size depends very much on your application. If you use some sort of caching I would usually suggest to set mx=ms. Concerning tweaking GC parameters I would usually suggest to use the loggc option with PrintGCDetails and analyse the log file to check whether the NewSpace and the Survivors are big enough. If these are too small and many new objects directly get promoted into the tenured space, this will eventually lead to a Full GC. - Ingo
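
    For reference, a hedged sketch of the logging options Ingo mentions (the heap sizes and application jar are placeholders; -Xloggc and -XX:+PrintGCDetails are the standard Sun JVM switches):

        java -Xms512m -Xmx512m -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar

    The resulting gc.log shows the new/survivor and tenured occupancies per collection, which is what you need to judge whether short-lived objects are being promoted too early.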
  18. Re: Good heap size?[ Go to top ]

    "I would usually suggest to use the loggc option with PrintGCDetails and analyse the log file" The problem with this common and easy, but not terribly effective, approach is that you lose the complete context. Temporary objects (created and only referenced during requests) can get promoted to tenured space when a request is delayed at one of its execution points (database, messaging, any other external resource type), causing more expensive GC collections. This is why escape analysis is not so applicable for Java EE applications. The problem here is not necessarily the sizing but the resource service times. Looking at GC log entries whilst oblivious to the workload patterns (level of concurrency, object allocation volumes, ...) during the period could result in an incorrect change that does not address the underlying issue. A quick fix in handling an incident is understandable, but it does have its cost in that the underlying problem is not analyzed sufficiently and no one is any the wiser - a reason why I personally have strong reservations regarding automatic deployment, provisioning and resource management software. kind regards, -William
  19. Re: Good heap size?[ Go to top ]

    [snip]
    So, you both guys recommend not to focus too much on GC parameters. That's good news for me. I always felt a bit guilty for not using these nerd flags.

    But what about the heap size? How do you estimate a good heap size for a system? For a web application I usually recommend the heap size -Xms400m -Xmx1000m per cluster node, no matter which JDK or OS. For sure, I have to take care that a node doesn't require more memory.

    What are your recommendations on heap size? Do you know any good Benchmark that focuses on heap size?
    Yes. In my experience changing those "obscure" GC parameters is in almost all cases only fine tuning. I know only a few exceptions, such as switching to a completely new hardware design like Sun's Niagara chip. I think at least in the beginning the default settings would not work very well on this chip. Regarding the heap size, Ingo mentioned the GCViewer tool in another reply. You can use this tool to see how your application behaves and then choose the heap size accordingly. We usually then set the min size equal to the max size, because that avoids the Full GCs that the VM does before it increases the heap. Since this is all very application specific, I think having a standard benchmark for this doesn't really make sense. You basically need an application-specific benchmark. For bigger heap sizes (64 bit) and multicore servers you may also want to consider switching to the Concurrent Mark and Sweep collector (disclaimer: not yet recommended on the SAP J2EE server). I guess I should have written a blog about this ;) Regards, Markus
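
    As a concrete but hedged illustration of the settings Markus describes (the 1024m value is only a placeholder; -XX:+UseConcMarkSweepGC is the Sun JVM switch for the Concurrent Mark and Sweep collector):

        java -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC -jar myapp.jar

    Setting -Xms equal to -Xmx avoids the Full GCs the VM would otherwise do while growing the heap, as mentioned above.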
  20. Re: Good heap size?[ Go to top ]

    Yes. In my experience changing those "obscure" GC parameters is in almost all cases only fine tuning.
    1. lesson learned: don't tweak the GC parameters
    We usually then set the min size equal to the max size, because that avoids the Full GCs that the VM does before it increases the heap.
    2. lesson learned: choose Xms == Xmx
    Since this is all very application specific, I think having a standard benchmark for this doesn't really make sense. You basically need an application-specific benchmark.
    There is the petshop. Isn't there any benchmark that tests a bad implementation of the petshop with Tomcat, different JDKs, different heap sizes and different OSes?
    Heap size depends very much on your application.
    That's what I usually say when "they" ask me. But we are still far from a "good heap size" :-( I guess you know your applications very well. So, I ask everybody to send me the VM options of your applications (JDK, OS, what kind of application, how many users?). Omit any information that is not meant for the public. Maybe there will be some common values. I'm really curious!
  21. Re: Good heap size?[ Go to top ]

    Hi Rex, I do not normally set the heap max size so high for "typical" web applications, as I prefer to have a larger number of JVMs on 2 or more physical servers, ensuring the application can tolerate both hardware and software failures. Again this is highly dependent on the application workload (data volumes and concurrency levels). If requests start allocating 100M-200M during processing then I will revisit my defaults. By partitioning more and creating smaller-sized JVMs the application as a whole can be less impacted by GC cycles. That is assuming that there is no communication and co-ordination between the partitions (nodes). Strangely enough we do have many popular Java community sites that run on a single server (hardware) with just a single web container (software). In general not very reliable sites, but no one seems to be bothered or astonished. - William
  22. Re: Good heap size?[ Go to top ]

    I do not normally set the heap max size so high for "typical" web applications, as I prefer to have a larger number of JVMs on 2 or more physical servers, ensuring the application can tolerate both hardware and software failures. Again this is highly dependent on the application workload (data volumes and concurrency levels).
    I agree.
    Again this is highly dependent on the application workload (data volumes and concurrency levels).
    Sure. So, which heap size (per VM) would you recommend for a "typical web application" as a good initial value?
  23. Re: Good heap size?[ Go to top ]

    I normally set the max heap size between 512M and 768M but this is a personal preference with very little supporting evidence other than the fact that most application servers ship with similar defaults within their scripts. This range seems a good trade-off between performance management (long GC pauses) and application availability management (more partitions). But I would expect that anyone deploying into a production environment would have already done some pre-production testing and determined the required concurrent resource capacity needs and adjusted the heap settings accordingly. OutOfMemoryError's Are Not Always Caused By Memory Leaks http://www.jinspired.com/products/jxinsight/outofmemoryexceptions.html regards, William
  24. Re: Good heap size?[ Go to top ]

    I normally set the max heap size between 512M and 768M but this is a personal preference with very little supporting evidence other than the fact that most application servers ship with similar defaults within their scripts.
    Thanks a lot for this answer!
    But I would expect that anyone deploying into a production environment would have already done some pre-production testing and determined the required concurrent resource capacity needs and adjusted the heap settings accordingly.
    Pre-production testing is done in the end. To define a "good heap size" per VM would be a great help for system planning. For JDK 1.5 I found the following advice by Sun (http://java.sun.com/docs/hotspot/gc5.0/gc_tuning_5.html):
    Unless you have problems with pauses, try granting as much memory as possible to the virtual machine.
  25. Hi William,
    OutOfMemoryError's Are Not Always Caused By Memory Leaks.
    True! It could be a footprint problem (too much memory consumption in general) or a load issue (too many requests). There are even more subtle problems, e.g. with the semi-spaces and a full tenured generation. In reality memory leaks are comparatively easy to detect with the tool if the leak can grow over time (Dominator Tree). However, in a Java application server with hundreds of deployed services and applications, memory footprint problems are far more pressing. For that reason the tool allows you to group the memory consumption by class loader. This is a very effective way to find out about the memory consumption of the deployed components, as they all usually get their own class loader. About the load issue, the tool offers OQL to query e.g. all requests. You can then see their memory consumption, but this often doesn't help because the thread holds on to the memory; this can be uncovered with a command which gives you the retained memory of all GC roots held per thread. Regards, Vedran
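
    Purely as an illustration of the OQL idea mentioned above, a query for all request objects might look like the following; the class name is hypothetical and the exact OQL syntax may differ between versions of the tool:

        SELECT * FROM INSTANCEOF com.example.HttpRequestImpl

    The returned objects can then be examined for their shallow and retained sizes.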
  26. Hi Vedran, I wanted to make it clear that the posting of the performance insight article was in relation to the GC and capacity planning discussions within this thread. Before one moves an application into production, an assessment of the object allocation resource needs to be performed for most of the business transactions within the application. Once the object creation costs (counts and bytes) have been assessed, a transaction profile mix is created with expected concurrent workloads and data volumes. This, I would argue, should be a much more common activity within the development and testing life-cycle than investigating heaps for memory leaks. Sadly in practice this is not the case. Most performance testing teams focus simply on loading a server in order to determine ceilings rather than trying to model and understand the software and system execution models, including resource consumption and management. The problem with this, which I have stated over and over, is that one has limited the knowledge acquisition which would help when production problems do occur. With this we get operations screaming at the testing teams for not performing sufficient performance or capacity testing when real problems do arise that do not fit the artificial test scenarios created. Resolution of the problem is protracted because it all looks like a black box and it's anyone's guess what the problem is. William
  27. Hi Vedran,

    I wanted to make it clear that the posting of the performance insight article was in relation to the GC and capacity planning discussions within this thread.

    Before one moves an application into production, an assessment of the object allocation resource needs to be performed for most of the business transactions within the application. Once the object creation costs (counts and bytes) have been assessed, a transaction profile mix is created with expected concurrent workloads and data volumes. This, I would argue, should be a much more common activity within the development and testing life-cycle than investigating heaps for memory leaks. Sadly in practice this is not the case. Most performance testing teams focus simply on loading a server in order to determine ceilings rather than trying to model and understand the software and system execution models, including resource consumption and management. The problem with this, which I have stated over and over, is that one has limited the knowledge acquisition which would help when production problems do occur. With this we get operations screaming at the testing teams for not performing sufficient performance or capacity testing when real problems do arise that do not fit the artificial test scenarios created. Resolution of the problem is protracted because it all looks like a black box and it's anyone's guess what the problem is.

    William
    Hi William, I fully agree with most of your statements. I think performance is very often considered too late in the process. Only doing performance tests at the end and then trying to reach your performance goals can be very difficult, time consuming and therefore costly. Still, even if you have a model, and your model is complex, you might want to test your model against reality. You will need a tool like ours to be able to test whether you really consume as much memory as your model predicts. I think it is also a little bit surprising that so far there has not been so much focus in our industry on memory consumption analysis, neither in Java nor in any other programming language. Only one or maybe two other tools, apart from ours, come to my mind that would really try to make memory consumption analysis of enterprise applications feasible. Regards, Markus Kohler PS: please make JXInsight work on our application server :)
  28. Hi Markus, I believe one of the reasons for the stagnation in memory analysis techniques has been due to the approach used by tools in performing the data collection. I am delighted that the trend at the moment is to focus on analyzing the heap dumps generated by the JVMs themselves instead of creating embedded agents to collect and stream the data to a console - mainly because the memory profilers crashed the JVMs when the data set was enterprise-like, which is not acceptable in a production environment. I myself have considered adding heap dump analysis to JXInsight since the formats of the dumps have become documented and stable and the contents more complete. Not to worry - we are focused at this moment on other areas that have no alternative solutions at present. I will be trying out the tool as soon as I have a nice test case to work with. Does it ship with any existing dumps I can use to explore the feature set? -William PS: I will see what we can do to get the integration moving. I even wrote a JVMInsight extension to help with determining the class loader hierarchy of the SAP NetWeaver product. http://blog.jinspired.com/?p=56
  29. Hi Markus,

    I believe one of the reasons for the stagnation in memory analysis techniques has been due to the approach used by tools in performing the data collection. I am delighted that the trend at the moment is to focus on analyzing the heap dumps generated by the JVMs themselves instead of creating embedded agents to collect and stream the data to a console - mainly because the memory profilers crashed the JVMs when the data set was enterprise-like, which is not acceptable in a production environment. I myself have considered adding heap dump analysis to JXInsight since the formats of the dumps have become documented and stable and the contents more complete. Not to worry - we are focused at this moment on other areas that have no alternative solutions at present.

    I will be trying out the tool as soon as I have a nice test case to work with. Does it ship with any existing dumps I can use to explore the feature set?

    -William

    PS: I will see what we can do to get the integration moving. I even wrote a JVMInsight extension to help with determining the class loader hierarchy of the SAP NetWeaver product. http://blog.jinspired.com/?p=56
    I fully agree that the profiling API based heap dump implementations were pretty useless in a production environment. At least on 32 bit you would very often run out of memory and crash the VM. I think we don't ship a really interesting heap dump, but we may produce one and put it on the wiki. I will check. Regards, Markus Kohler
  30. Evaluation conclusion[ Go to top ]

    You SAP guys should acquire JINSPIRED before Oracle does.
  31. Re: Evaluation conclusion[ Go to top ]

    Hi "Rex Guildo", Does your comment mean, that you like both tools ? ;) Gruss, Markus
  32. Re: Evaluation conclusion[ Go to top ]

    Does your comment mean, that you like both tools ?
    Yes. This is my new CIS stack (alphabetic order):
    - GCviewer
    - JXInsight
    - SAP Memory Analyzer
    - SendSignal
    Thanks to all for your contributions! (If you know "the good heap size", don't hesitate to tell me)
  33. Re: Evaluation conclusion[ Go to top ]

    Does your comment mean, that you like both tools ?

    Yes. This is my new CIS stack (alphabetic order):

    - GCviewer
    - JXInsight
    - SAP Memory Analyzer
    - SendSignal


    Thanks to all for your contributions!
    (If you know "the good heap size", don't hesitate to tell me)
    Sounds like a good combination to me :) Although I said a general rule for memory settings doesn't really work well, I will give it a try: I assume you run only your Java application on the machine. On 32 bit, get as much memory as your OS can use (3 Gbyte on Windows, for example). If you want to run with one node on the machine, set your heap size as high as you can (usually around 1200 Mbyte, -Xms = -Xmx). Of course you have to ensure that you don't get into swapping. Or consider using 2 instances of your application server running on one machine and set the heap size accordingly (1024 Mbyte should work). On 64 bit, set the heap size as high as you can, given your physical memory size. Using more than one node usually doesn't seem to have an advantage on 64 bit. Regards, Markus
  34. Hi William, We ship a small heap dump. After you started the tool, you will first see the Welcome Page. To get to the example heap dump do the following steps : Welcome Page -> Tutorials -> Basic Analysis -> Step 1: Open a heap dump -> Click to perform Regards, Markus
  35. Re: Good heap size?[ Go to top ]

    Hi Rex,

    I do not normally set the heap max size so high for "typical" web applications, as I prefer to have a larger number of JVMs on 2 or more physical servers, ensuring the application can tolerate both hardware and software failures. Again this is highly dependent on the application workload (data volumes and concurrency levels). If requests start allocating 100M-200M during processing then I will revisit my defaults. By partitioning more and creating smaller-sized JVMs the application as a whole can be less impacted by GC cycles. That is assuming that there is no communication and co-ordination between the partitions (nodes).

    Strangely enough we do have many popular Java community sites that run on a single server (hardware) with just a single web container (software). In general not very reliable sites but no one seems to be bothered or astonished.

    - William
    We found that on 32-bit machines it can make sense to have more than one JVM running because of the typical limitations of a 32-bit OS. The physically addressable memory might be 4 Gbyte and you may want to run 2 JVMs with (almost) 2 Gbyte each. On the other hand you will waste some memory because of classes being loaded into memory more than once and other JVM overhead. Because on 64 bit there's effectively no such memory limitation, you might not want to run more than one JVM per machine. Especially if you can use the Concurrent Mark and Sweep collector, you will not have any big problems with long GC pauses any more. Regards, Markus
  36. Re: It's really fast![ Go to top ]

    Another quite nice tool for GC-Analysis is the GCViewer: http://www.tagtraum.com/gcviewer.html the thread dump analyzer Samurai also has some GC Views builtin: http://yusuke.homeip.net/samurai/?english - Ingo
  37. Re: It's really fast![ Go to top ]

    For generating heap dumps you may also use the SendSignal tool: http://www.latenighthacking.com/projects/2003/sendSignal
    Forget that, it only supports thread dumps ...
  38. Re: It's really fast![ Go to top ]

    Hi "Rex Gildo",
    For generating heap dumps you may also use the SendSignal tool: http://www.latenighthacking.com/projects/2003/sendSignal

    Forget that, it only supports thread dumps ...
    No problem. It should work because the signal is the same. You only have to add the option -XX:+HeapDumpOnCtrlBreak to your JVM settings. Check my blog here https://weblogs.sdn.sap.com/pub/wlg/3800. We actually ship a similar command with our J2EE server called sapntkill.exe. You may want to check my blog here https://weblogs.sdn.sap.com/pub/wlg/4425, where I describe how to do thread dumps with it. Regards, Markus
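
    Pulling Markus's hint together into a hedged sketch (the process id and application jar are placeholders, and the SendSignal invocation is assumed from the tool's description rather than taken from this thread):

        java -XX:+HeapDumpOnCtrlBreak -jar myapp.jar
        SendSignal 1234

    Here 1234 stands for the JVM's process id; on receiving the ctrl_break signal the VM writes an .hprof heap dump that the Memory Analyzer can open.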
  39. Hey, nice tool, quite difficult to install though (at least on linux)... Especially handy is the "Calculate Retained Size" option to find big objects and maybe mem leaks. regards, Ingo
  40. Dominator Tree[ Go to top ]

    Hi Ingo, Thanks for your positive feedback. Have you tried the dominator tree view? With this view you can easily find the biggest objects (in terms of retained size) and also easily analyse why they are big. Regards, Markus
    Hey,

    nice tool, quite difficult to install though (at least on linux)...

    Especially handy is the "Calculate Retained Size" option to find big objects and maybe mem leaks.

    regards,

    Ingo
  41. Re: Dominator Tree[ Go to top ]

    Hi Markus, yes I've tried it - quite nice to see the content of e.g. big hash maps. But one suggestion: in my test dump I have a problematic map (which caused an OOM) with about 500,000 objects in it, and selecting to display all of them never comes back. Maybe you could split the display into fractions of, let's say, about 1000 objects? Just like e.g. the YourKit Profiler does? kind regards, Ingo
  42. Hi, cool tool... BUT: where is the update site?! I have seen that there is no native code in the install package for the profiler, so it should be no additional work to provide the plugin for all platforms. Why must I install an additional Eclipse environment? :-( Why does SAP only support Windows 32-bit?
  43. Hi, I'm currently using RAD 6.0 and I am not able to use RAD's memory analyzer/profiler due to some AgentController plugin issue. Does anyone know if I could use SAP Memory Analyzer on RAD, since RAD is based on Eclipse? Thanks JB
  44. Hi, I don't know on which version of Eclipse RAD has been built. If it's based on Eclipse 3.2 it might work. You just have to copy the SAP Memory Analyzer plugins to the RAD plugins directory. Beware that you do that at your own risk. I would back up the RAD installation beforehand. Regards, Markus
  45. This tool is great, thank you[ Go to top ]

    I worked a lot with JProbe and OptimizeIt. I also tried JProfiler and YourKit. But from now on this tool is my favorite for finding memory leaks. It is really fast and handles large heap dumps easily. It's easy to use as well, and it presents the information in a way that is easy to understand. In other tools one usually gets so many paths to one object that it is difficult to find the critical one. Three things could make it even better: JProbe has the feature to remove a reference to see whether a loitering object can be garbage collected afterwards. This I find very helpful, because sometimes there still exists another path that prevents the object from being garbage collected. In the inspector, the table with the instance attributes should have sortable columns. This should be pretty easy to do, but would be so helpful when having a large number of attributes. Support for the IBM heap dump is very desirable as well. Or does anybody know about a converter? Regards, Artur
  46. Hi Artur, Thanks a lot for your compliments! Regarding the JProbe feature, I think we have something more powerful with the "immediate dominators" functionality. Check my colleague's blog entry here: https://www.sdn.sap.com/irj/sdn/weblogs?blog=/pub/wlg/6851 Support of the IBM heap dump is currently difficult, because there's different information in these dumps than in the Sun hprof dumps and because the format is not documented (at least to my knowledge). Regards, Markus
    I worked a lot with JProbe and OptimizeIt.
    I also tried JProfiler and YourKit.
    But from now on this tool is my favorite for finding memory leaks.

    It is really fast and handles large heap dumps easily.
    It's easy to use as well, and it presents the information in a way that is easy to understand.
    In other tools one usually gets so many paths to one object
    that it is difficult to find the critical one.

    Three things could make it even better:
    JProbe has the feature to remove a reference to see whether
    a loitering object can be garbage collected afterwards.
    This I find very helpful, because sometimes there still exists another path that prevents the object from being garbage collected.

    In the inspector, the table with the instance attributes should have sortable columns. This should be pretty easy to do, but would be so helpful when having a large number of attributes.

    Support for the IBM heap dump is very desirable as well.
    Or does anybody know about a converter?

    Regards,
    Artur
  47. Hi, yes, has anybody written a converter from IBM System Dump/DTFJ API -> HPROF binary heap dumps??? (NOT PHD - that format does not contain enough information for a conversion to HPROF binary heap dumps. System dumps do, but to my knowledge it is not the format that is specified by IBM, but an API for it, called DTFJ.) Vedran
  48. Hi Artur, Thank you for the good feedback and suggestions. I wanted to comment on this one:
    JProbe has the feature to remove a reference to see whether a loitering object can be garbage collected afterwards. This I find very helpful, because sometimes there still exists another path that prevents the object from being garbage collected.
    Indeed this is very helpful. And we have already done something in this direction (though it is not included in the version currently available for download). What we have is the possibility to specify, when using the "Paths from the GC roots" view, one or several class/field pairs defining which paths should not be taken into account. For example, if you have found that the reference through MyClass.myField should not be there, then you can calculate the paths from the GC roots, excluding the ones going through MyClass/myField. Do you think this is enough? Or do you have some more suggestions? Regards, Krum
  49. Re: Enhancements[ Go to top ]

    Hi Krum, I reckon the filter you describe would do the job in many cases, though actually removing references based on instances is more powerful than doing so based on class/field pairs. Imagine you have a PropertyChangeListener that causes the memory leak. Filtering out PropertyChangeSupport/listeners would then filter out a large set of objects. But if there is another PropertyChangeListener reachable by a totally different path, you would not be able to find it -- and I have had these cases. There is one more little feature I would find useful: in the "Paths from the GC roots" view the button "Find Next Paths" always searches for the next 30 paths. How about a second button "Find all paths", or making the number configurable somewhere? Regards, Artur
  50. Hi, Has anyone come across a NPE when trying to load a heap dump? I get the following:

    !SESSION 2007-07-02 13:45:02.848 -----------------------------------------------
    eclipse.buildId=I20070608-1718
    java.version=1.6.0_01
    java.vendor=Sun Microsystems Inc.
    BootLoader constants: OS=linux, ARCH=x86_64, WS=gtk, NL=en_GB
    Command-line arguments: -os linux -ws gtk -arch x86_64

    !ENTRY com.sap.tools.memory.ui.core 1 0 2007-07-02 13:47:37.247
    !MESSAGE Heap /sbcimp/dyn/logfiles/jetpac/MS-TFI1-JeT_Stm2.hprof contains 36,440,149 objects

    !ENTRY org.eclipse.core.jobs 4 2 2007-07-02 13:57:59.952
    !MESSAGE An internal error occurred during: "parsing /sbcimp/dyn/logfiles/jetpac/MS-TFI1-JeT_Stm2.hprof".
    !STACK 0
    java.lang.NullPointerException
        at com.sap.tools.memory.snapshot.hprof.HprofParserHandlerImpl.cleanupGarbage(HprofParserHandlerImpl.java:501)
        at com.sap.tools.memory.snapshot.hprof.HprofParserHandlerImpl.createSnapshot(HprofParserHandlerImpl.java:321)
        at com.sap.tools.memory.snapshot.SnapshotFactory.parse(SnapshotFactory.java:402)
        at com.sap.tools.memory.snapshot.SnapshotFactory.createSnapshot(SnapshotFactory.java:113)
        at com.sap.tools.memory.ui.core.internal.ParseHeapDumpJob.run(ParseHeapDumpJob.java:40)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
  51. Hi Scott, I have analyzed the heap dump you sent me, and found that the problems appeared because the format of the hprof dump generated by jmap had some (in my opinion incorrect) differences compared to the dumps the VMs on the different platforms produce on OutOfMemoryError or on a ctrl_break event. I have implemented some changes, so that our future versions are able to work with jmap-produced dumps despite the mentioned differences. Thank you for pointing us to the problem. Regards, Krum
  52. Just wondered if you've ever heard of FindRoots? The idea of using the dominator tree looks strangely familiar :) Your tool looks interesting though. We have a parser for IBM PHD files so you shouldn't need to know the format. There should be sufficient info in the PHD file to do your analysis. One advantage of PHD files is that they are much smaller than HPROF files. I believe someone is working on a PHD to HPROF converter.
  53. Hi David, Yep, the dominator tree was a major breakthrough in the tool! Thanks for commenting on it! Maybe you have seen that we use the concept of dominators not only in a dominator tree? We use dominators for:
    a.) Instant retained size per object
    b.) Dominator tree showing the biggest distinct objects
    c.) Grouping of the dominator tree to isolate aggregation patterns
    d.) Following the dominators with an exclude pattern to learn about the object which keeps an object or a set of objects alive
    e.) Getting an absurdly quick approximation/lower bound for the retained size of a set of objects
    If you are interested, in my blog I compiled a short history of how we exploited the dominators and how we stumbled over them in a research paper while trying to solve the problem of computing the retained size per single object instantly. Regards, Vedran
  54. IBM Heap Dumps[ Go to top ]

    Hi David, About the PHD files, as far as we know, PHD files miss information about class loaders, GC roots and field values. Unfortunately class loaders are very important for grouping memory consumption per deployment unit (see my blog also on this). GC roots are very important for GC simulation and dominators. I read that unreferenced objects can be interpreted as GC roots, but GC roots don’t have to be unreferenced. I am not a PHD expert, I have to admit, but from what I saw I just don’t have the correct GC information for a proper interpretation of the data. The missing field values just kill a lot of our features, e.g. name resolvers to better/quicker grasp what the data means (e.g. names for class loaders, values of Strings). Furthermore we use the field values to look out for aggregation patterns, e.g. grouping objects by their “semantic key/value” and looking out for systematic activity. System dumps contain that missing information and DTFJ exposes it. At least that’s my understanding. That’s the reason why I hope for someone to write a converter from system dumps/DTFJ to HPROF binary heap dumps. Regards, Vedran