Hi, I have a problem with the gc performance in production.

    The environment is as follows:
    Red Hat Linux 9.0
    2 GB of memory
    JBoss 3.2.3

    I am using these parameters:
    -XX:MaxNewSize=256m (same as NewSize)

    The busiest hours are around 19:00-20:00, but depending on the day the problem shows up at 22:00, 1:00, 18:30... It doesn't happen at the same hour every time, and it doesn't happen daily. The system begins to respond slowly and finally we have to restart JBoss.

    Looking at the GC log with the tool "HPjtune", we can see a lot of GC activity (scavenge and full GC), and the heap size increases with each GC.
    The system then calls an old-generation GC, and after each old GC the heap size drops to a much lower level, but that level keeps getting bigger until finally all collections are old GCs; there are no scavenge GCs at all.
    Finally we have to restart JBoss when the heap size is about 700 MB.

    When there is no problem, the graph shows scavenge GC and full GC (about 50% each), no old GC; the maximum heap size is 200 MB, and when no users are on the system the heap level is about 50 MB.
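    For reference, GC logs like the ones HPjtune reads come from the standard HotSpot logging options (the log file name here is just our choice):

```
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:gc.log
```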

    Could anybody help me?

    Thanks in advance
  2. GC and Heap Size settings

    We have been playing around with the parameters that I explained before.

    We changed Xms to 1024m, and after several hours running with these settings we got:
    java.lang.OutOfMemoryError: unable to create new native thread

    Then we decided to reduce Xmx to 1024m instead of 1536m, and Xms to 880m.
    We have 2 GB of memory, and I think that reducing the maximum Java heap size will
    leave more room for the native heap, and this change should prevent any OutOfMemoryErrors
    caused by native-heap exhaustion.

    Our server runs only JBoss; the 2 GB is shared between the operating system and JBoss.
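    As a rough sanity check of the "smaller Java heap leaves more native room" reasoning, the arithmetic can be sketched like this (the 2 GB, 1024m heap, 128m perm, 1024k stack, and the 256 MB OS allowance are all assumed figures, not measurements):

```java
public class NativeRoomSketch {
    public static void main(String[] args) {
        long ramMb = 2048;   // physical memory (assumed)
        long heapMb = 1024;  // -Xmx
        long permMb = 128;   // -XX:MaxPermSize (assumed)
        long osMb = 256;     // rough allowance for the OS and JBoss native code (assumed)

        // Memory left over for thread stacks and other native allocations
        long nativeRoomMb = ramMb - heapMb - permMb - osMb;

        // With -Xss1024k each thread stack reserves about 1 MB,
        // so this gives a crude upper bound on the thread count
        long stackKb = 1024;
        long maxThreads = nativeRoomMb * 1024 / stackKb;

        System.out.println("native room ~" + nativeRoomMb + " MB");
        System.out.println("thread upper bound ~" + maxThreads);
    }
}
```

    This is only back-of-the-envelope arithmetic, but it explains why a 1536m heap on a 2 GB box leaves very little room for new native threads.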

    Now we are testing these settings.

    Could anybody tell me if these settings are OK?

    We have been testing our application with JProbe and JProfiler and we haven't found a memory leak.

    About Xss, I don't know exactly whether 1024k is too much or not. We are using the default ulimit values.
    In our application we create threads for specific functionality,
    but I don't know whether we are using a lot of them or not.
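    If per-thread stack consumption turns out to be the issue, note that besides the global -Xss flag the standard library also lets you hint a stack size per thread via the four-argument Thread constructor (available since JDK 1.4; the JVM is free to ignore the hint). A minimal sketch:

```java
public class StackHintSketch {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                System.out.println("worker ran");
            }
        };
        // The fourth argument is a stack-size hint in bytes (256 KB here);
        // passing 0 means "use the platform/-Xss default"
        Thread t = new Thread(null, work, "small-stack-worker", 256 * 1024);
        t.start();
        t.join();
        System.out.println("done");
    }
}
```

    For an application that spawns many threads, smaller per-thread stacks can noticeably raise the ceiling on the number of native threads the process can hold.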

    We have 3 problems:

    1. OutOfMemoryError: unable to create new native thread: explained above.
    2. StackOverflowError: this appears sometimes, and JBoss dies.
    3. Abnormal GC behaviour: explained in the previous post.
  3. GC and Heap Size settings

    Answering the questions below might help:

    Why are you setting the Xss size? Can you not leave it to the VM/OS to use the default thread stack size?

    I believe you are running a web site on your JBoss. In that case, why didn't you choose the 'Throughput Collector' instead of the 'Concurrent Low Pause Collector'?

    Can you not leave it to the GC algorithm to determine the number of GC threads, rather than specifying 2?

    Your young generation size (220m) is smaller than the tenured generation. The default ratio of 1:2 should be okay, so you might be good doing away with the options NewSize and MaxNewSize.

    To explain this further (and, if my understanding is correct, these are the figures at the startup of your server):

    Young generation = 220m
    Each survivor space = 220/(8+2) = 22m
    Eden = 220 - 2*22 = 176m
    Tenured = 880 - 220 = 660m
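    These figures follow directly from the generation formulas (the 880m heap, 220m young generation, and SurvivorRatio=8 are taken from earlier in the thread); a quick check:

```java
public class GenerationMath {
    public static void main(String[] args) {
        int heap = 880;        // -Xms in MB
        int young = 220;       // NewSize/MaxNewSize in MB
        int survivorRatio = 8; // -XX:SurvivorRatio: eden is 8x one survivor space

        // young = eden + 2 survivors, and eden = survivorRatio * survivor,
        // so young = (survivorRatio + 2) * survivor
        int survivor = young / (survivorRatio + 2); // 22
        int eden = young - 2 * survivor;            // 176
        int tenured = heap - young;                 // 660

        System.out.println("survivor=" + survivor + "m eden=" + eden
                + "m tenured=" + tenured + "m");
    }
}
```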

    This shows that your eden space is too low when compared to the tenured space. Try removing the NewSize and MaxNewSize numbers. Also, you can do away with the survivor ratio, because the default ratio of 1:32 should be good.

    Set your -XX:PermSize and -XX:MaxPermSize to 128m.

    Finally, this is what I propose (unless you want to go with the Concurrent low pause collector):

    -Xms768m
    -Xmx1024m
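    Combined with the PermSize suggestion above, the whole proposal would look roughly like this startup line (everything else left at JVM defaults):

```
java -Xms768m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=128m
```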

    Try this out and all the best.


    You should run jvmstat and watch the rate of data moving from eden through the survivor spaces into the tenured (old) generation.

    Your eden should be sized somewhere a little above the maximum garbage created by servicing requests, that is, data that becomes garbage as soon as the request has been serviced.

    Your tenured generation should act as a buffer for medium-lived data that exists beyond a single request, but not for the life of the application.

    All up, running jvmstat on the VM will show whether your eden and tenured spaces are too small compared to the old gen. You don't want eden larger than your tenured generation unless your application actually fits that profile, that is, more temporary data than medium-lived data.

    The best thing you can do is to start looking at your Java packages, work out how much data you have and what kind of lifecycle that data may have, and start splitting your application up into different areas. Then incrementally test it with various jvmstat options, tune the system so that a minimum of data moves from eden to tenured, and absolutely try to get to a point where no data makes it into the old gen after the system has loaded its cached/reference data set.
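    A hedged example of watching those rates with the jvmstat tools on a 1.4.2 VM (visualgc here; the PID is a placeholder for the JBoss process id):

```
visualgc <jboss-pid>
```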

  5. Hi, we also observed similar behaviour, with GC running only in the old generation, when using "-XX:+UseConcMarkSweepGC". Try to run your application without this setting if you can afford a somewhat longer pause whenever a full GC runs.

    "OutOfMemory: unable to create new native thread" --
    You generally get this error message if the JVM faces problem while increasing its upper limit after a GC cycle. This happens if you are having very less memory (RAM & Swap) in the hardware. To avoid this 1) keep both Xms and Xmx as same and 2) make sure that you have sufficient memory on your box 3) Increase your min & max of permanent generation from default to 128MB (Default values are min :0 and max :64MB)
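    Points 1 and 3 above translate into flags along these lines (the 128 MB figure is the one suggested in this post):

```
-Xms1024m -Xmx1024m -XX:PermSize=128m -XX:MaxPermSize=128m
```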

    In general, try to go with the JVM's default settings (Xss, young generation, etc.).
  6. java.lang.OutOfMemoryError - this problem makes Java less attractive to use. We have played with all the Java parameters but still do not have a satisfactory result.

    As a result we have a cluster with 5 Tomcats, which runs on one computer. Is it possible to use all of the physical memory, and the virtual memory as well, and not get such errors at all?

  7. Did you try just -Xincgc? This is a very aggressive garbage collector, and it runs very well on 1.4.2. The drawback is that it can degrade performance, and it has limited GC performance on Tiger (Java 5).
  8. Use JBoss 4.0.4.