Discussions

Performance and scalability: max heap size on x86 (Linux, Windows) = 1960 MB?

  1. I think I've found a severe scalability issue with Java on 32-bit Linux.
    I'm trying to tune up a Linux server as a J2EE machine and want to exploit all 6 GB of memory it has. However, I keep running up against a problem: the JVM's max heap (-Xmx) can't exceed 1960 MB. I have tested this with Sun's JDK 1.4.1_02, Blackdown 1.4.1, JRockit 8, and IBM's 1.4.0. I have set up Red Hat 8.0 to use the high-memory kernel and used ulimit to remove all relevant memory limits. However, I still can't set -Xmx (max heap size) above 1900 MB without getting "could not reserve enough space for object heap" as an error.
    For an explanation of the cause, see http://developer.java.sun.com/developer/bugParade/bugs/4435069.html
    Has anyone found a way around this?
    Most documents relate to JDK 1.3, but the problem appears to be the same in JDK 1.4+. (A quick way to verify the ceiling a given JVM actually honours is sketched below.)
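
    A minimal check, in case it's useful to anyone testing different -Xmx values (the class name is made up; Runtime.maxMemory() is available as of 1.4):

        // MaxHeapCheck.java; run as, e.g.: java -Xmx1900m MaxHeapCheck
        // Prints the heap ceiling the JVM actually granted.
        public class MaxHeapCheck {
            public static void main(String[] args) {
                long maxBytes = Runtime.getRuntime().maxMemory();
                System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
            }
        }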
  2. Clustering, anyone?

    The most common approach to resolving heap-related issues (most commonly centered on balancing the impact of GC in JVMs that did not support concurrent collection) has been to deploy multiple server instances, each with a manageable heap size (in this case, something <= 1960m). Yes, this complicates things (possibly needlessly), but it doesn't appear that anyone will move on the root issue anytime soon.

    Out of curiosity, which app server are you using?

    cramer
  3. which app server = tomcat

    I'm using Tomcat 4.1.24-LE-jdk1.4 as the app server.
    Mainly because I work in a university, so all university-centric Java apps tend to get tested on Tomcat first (and quite often only tested on Tomcat).
  4. What do you need that much heap size for?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  5. I need a large heap size because I'm using Apache Cocoon as a webapp to transform large XML documents on the fly. These documents are transformed to PDF using the Apache FOP functionality rolled into Cocoon.

    Cocoon tries as far as possible to use SAX event streams, but when it comes to FOP transformations it seems to need to read the FO into a large DOM tree. This sucks up a lot of memory, and it's also pretty processor-intensive. (The difference is sketched at the end of this post.)

    The current record is a 919-page (1.6 MB) PDF generated on the fly. Admittedly, in production I don't see transformations getting this large, but it is a useful test.
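
    To make the SAX-versus-DOM point concrete, here is a rough JAXP sketch (the file names are placeholders, and this only mimics the shape of what Cocoon/FOP do, not their actual APIs):

        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMResult;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class FoPipelineSketch {
            public static void main(String[] args) throws Exception {
                Transformer t = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource("doc-to-fo.xsl"));

                // Streaming: the XSLT result is serialised as it is produced,
                // rather than being held as a tree in memory.
                t.transform(new StreamSource("big-document.xml"),
                            new StreamResult(System.out));

                // Buffered: the same transform into a DOM holds the entire
                // result tree in memory at once; this is the shape of the
                // problem when FOP materialises its FO tree for 900+ pages.
                t.transform(new StreamSource("big-document.xml"),
                            new DOMResult());
            }
        }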
  6. Wouldn't it be better to run multiple JVMs, each with a more moderate heap size? Load balance with a "mod_..." on Apache?

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  7. I just profiled one of the PDF generations and it took up 530 MB of memory, about a third of the max heap, so I need a large max heap. My worry is that a developer will dream up a generator that will need more than 2 GB of heap for one request. Clustering would not get round this. The ideal would be the ability to assign more than 2 GB to the heap.
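
    (For anyone who wants a rough measurement without a profiler, the runtime can give a before/after reading; generatePdf below is a made-up stand-in for the real Cocoon/FOP call:)

        public class HeapDelta {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                rt.gc(); // only a hint, but it makes the baseline less noisy
                long before = rt.totalMemory() - rt.freeMemory();

                Object result = generatePdf(); // placeholder for the real pipeline call

                long after = rt.totalMemory() - rt.freeMemory();
                System.out.println("Approx. heap used: "
                        + ((after - before) / (1024 * 1024)) + " MB, result=" + (result != null));
            }

            // Hypothetical stand-in: allocates ~100 MB to mimic a large FO tree.
            static Object generatePdf() {
                byte[][] chunks = new byte[100][];
                for (int i = 0; i < chunks.length; i++) {
                    chunks[i] = new byte[1024 * 1024];
                }
                return chunks;
            }
        }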

    Talking to some of my academic colleagues, some science apps would definitely need more than 2 GB, so this rules out Java on Linux. If the Solaris 4 GB limit is true, it may rule SPARC out too. I'm not sure what the 64-bit architectures are like for Java and max heap.

    A clustered architecture would be better in terms of load management, but sometimes you just need a large heap for one process.
  8. I wasn't even talking about clustering ... otherwise I'd point you to Coherence ;-) ... I meant more of a stateless farm of commodity servers. You could still do that, with RMI (or some form of single-threaded request model) per JVM, to allow close to 2 GB of heap per request.
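
    A minimal shape for such a per-JVM worker, assuming plain RMI (all names here are made up, and on JDK 1.4 you would still run rmic to generate the stub class):

        import java.rmi.Remote;
        import java.rmi.RemoteException;
        import java.rmi.registry.LocateRegistry;
        import java.rmi.registry.Registry;
        import java.rmi.server.UnicastRemoteObject;

        // One worker JVM handles one big transform at a time, so nearly
        // the whole ~2 GB heap is available to a single request.
        interface PdfWorker extends Remote {
            byte[] render(String xmlUri) throws RemoteException;
        }

        public class PdfWorkerServer implements PdfWorker {
            // synchronized gives the crude single-threaded request model:
            // only one render occupies the heap at any moment.
            public synchronized byte[] render(String xmlUri) throws RemoteException {
                return new byte[0]; // placeholder for the real FOP call
            }

            public static void main(String[] args) throws Exception {
                PdfWorker stub = (PdfWorker) UnicastRemoteObject.exportObject(
                        new PdfWorkerServer(), 0);
                Registry registry = LocateRegistry.createRegistry(1099);
                registry.bind("PdfWorker", stub);
            }
        }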

    OTOH, AFAIK with 64-bit JVMs (Solaris/SPARC, for example) you don't have a 2 GB limit. But I've never tried to allocate an 8 GB heap with the 64-bit Sun JVMs, so I cannot be certain. There are 64-bit JVMs from BEA now for Itanium, from Sun for Solaris, and coming pretty soon for the AMD chip, which is pretty cheap for lots of commodity computing power. As for 64-bit JVMs on PA-RISC/HP-UX or POWER/AIX, I don't know; we only test their 32-bit versions right now, and I haven't heard anything specific on whether they even have 64-bit JVMs, but I'm guessing there is something available there.

    Peace,

    Cameron Purdy
    Tangosol, Inc.
    Coherence: Easily share live data across a cluster!
  9. Other XSL-FO engines?

    Cal,

    I'll admit right off the bat that I'm not that familiar with Cocoon or (especially) FOP, but a few minutes browsing the FOP site brought me to a memory-specific bit of wisdom, containing the following:

    "One of FOP's stated design goals is to be able to process input of arbitrary size. Addressing this goal is one of the prime motivations behind the FOP Redesign."

    In addition to exploring configuration, platform, and deployment alternatives, perhaps there are other XSL-FO engines with better memory management that support Cocoon or can be wired to it with minimal effort. Feel free to disregard this if you've already done this homework, but here are a couple that I found with a quick-and-dirty Google search: the XEP Rendering Engine, IBM alphaWorks XSL-FO Composer, and XML Mill. There are also tools like iText that are simply libraries for generating PDFs programmatically (a taste of which is sketched below).
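
    For a taste of the iText route (this uses the classic com.lowagie.text API; treat it as a sketch rather than anything wired into Cocoon):

        import java.io.FileOutputStream;

        import com.lowagie.text.Document;
        import com.lowagie.text.Paragraph;
        import com.lowagie.text.pdf.PdfWriter;

        // Builds a PDF directly, page by page, with no intermediate FO tree.
        public class ITextSketch {
            public static void main(String[] args) throws Exception {
                Document document = new Document();
                PdfWriter.getInstance(document, new FileOutputStream("report.pdf"));
                document.open();
                for (int page = 1; page <= 919; page++) {
                    document.add(new Paragraph("Content for page " + page));
                    document.newPage();
                }
                document.close();
            }
        }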

    Probably not what you wanted to hear, but I hope it helps.

    cramer

    P.S. -- BTW, as of 1.4, Java supports 64-bit SPARC architectures. I haven't personally verified large heap support, but this should provide access to heaps larger than 2 GB or 4 GB.