Terracotta strong-ARCs the competition with the next release of BigMemory


  1. Today Terracotta announced the latest release of BigMemory. The press release makes the claims you’d expect from Terracotta, namely that BigMemory can help your programs access tens of terabytes of in-memory data at microsecond speeds. Of course, that’s not really anything new in the world of Terracotta.

    “In previous releases we were doing 300 to 500 gigs of memory per instance. Now we are going north of a terabyte per instance. It’s tuned more, it’s faster and it’s lower latency. But that’s not necessarily the big news that TheServerSide readers who are familiar with Terracotta will be interested in,” says Terracotta CEO Ari Zilka.

    No, what’s particularly interesting about this release is the introduction of a technology Terracotta has named Automatic Resource Control (ARC), a new facility that is now built into the Terracotta product line. Terracotta says this is a game changer, providing the ability to dynamically and directly allocate memory across multiple caches.

    “BigMemory, together with automatic resource control and fencing, ensures that certain data won’t steal cache from another cache. Without that discrete control, you’re at the mercy of the engine to keep the system as fast as it can be,” says Ari.

    With ARC, you can specify that a certain piece of data be placed permanently in a certain cache, where it can’t get pushed out or demoted to a different caching layer.

    “With ARC, you can’t bust data out of the cache. Now, you can pin a cached piece of data to a particular layer, saying ‘keep this in L1, never in L2.’ You can keep cached data pinned to a node, or pinned in BigMemory. And if I stop the JVM and come back up, it will pull it all back into the pertinent cache from Terracotta before serving requests.”
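    The pinning described above surfaced in the Ehcache configuration that BigMemory builds on. As a rough illustration only: the element and attribute names below follow the Ehcache 2.5-era XML schema as best recalled, and the cache name is invented, so verify both against the documentation for your actual release.

```xml
<!-- Hypothetical ehcache.xml fragment (Ehcache 2.5-era schema).
     Pins every entry of this cache so ARC never evicts or demotes it. -->
<cache name="referenceData" maxEntriesLocalHeap="10000" eternal="true">
  <!-- store="inCache": keep pinned entries somewhere in the cache's tiers;
       "localMemory" instead keeps them in heap/off-heap on the local node. -->
  <pinning store="inCache"/>
</cache>
```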

    Furthermore, this new release provides the ability to essentially ‘fence off’ your various caches. “With one cache fenced off from another, you’re assured that your application is not stealing from one cache to feed another.” Ari asserts that this type of fencing fends off the unpredictability of non-fenced caches, where response times can suffer significantly when one cache tramples on another cache’s data.
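    To see why fencing matters, here is a minimal, self-contained sketch of the idea (not Terracotta’s implementation, just the concept): two logical caches either share one eviction pool, so a burst of writes to one can evict the other’s data, or each gets its own fenced-off budget that the other cannot touch.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of cache "fencing": with a shared pool, a burst of
// writes into one cache evicts another cache's entries; with per-cache
// budgets (fencing), each cache's data is safe from its neighbors.
public class FencingSketch {

    // A tiny LRU cache capped at maxEntries, evicting in access order.
    static <K, V> Map<K, V> lruCache(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        // Unfenced: one shared 4-entry pool backs both logical caches.
        Map<String, String> shared = lruCache(4);
        shared.put("users:1", "alice");           // entry from cache A
        for (int i = 0; i < 4; i++) {             // burst from cache B...
            shared.put("sessions:" + i, "s" + i);
        }
        // ...evicts cache A's entry:
        System.out.println(shared.containsKey("users:1"));  // prints false

        // Fenced: each cache has its own private 4-entry budget.
        Map<String, String> users = lruCache(4);
        Map<String, String> sessions = lruCache(4);
        users.put("users:1", "alice");
        for (int i = 0; i < 4; i++) {
            sessions.put("sessions:" + i, "s" + i);
        }
        // B's burst cannot reach A's budget:
        System.out.println(users.containsKey("users:1"));   // prints true
    }
}
```

    The unpredictability Ari describes is exactly the first case: cache A did nothing wrong, yet its hot data vanished because a neighbor got busy.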

    With BigMemory now serving terabytes of memory per instance, ARC providing the ability to pin important pieces of data to a particular cache, and the facility for fencing off one cache from another, Terracotta is hoping to change the game in the world of data caching. If the product lives up to its press release, it looks like they just might.


  2. Yawn