GemStone Systems has announced GemFire, an in-memory object caching system pluggable into any JVM. GemFire is intended to provide a zero-latency object cache for applications running on multiple JVMs on the same machine. GemFire uses operating-system-style shared-memory techniques, which offer real-time performance by eliminating disk and network bottlenecks.
- Posted by: Floyd Marinescu
- Posted on: March 21 2002 03:54 EST
Beaverton, Ore., March 20, 2002 - GemStone Systems today announced GemFire, zero-latency performance software that significantly improves the performance and scalability of real-time business applications, enabling customers to make split-second business decisions, immediately adapt to market changes and increase user and system productivity. GemFire is the first product to allow data to be shared across multiple Java™ virtual machines at the application level using shared-memory techniques, which offer breakthrough performance by eliminating disk and network bottlenecks.
GemFire will make it easy for developers to integrate shared-memory technology incrementally into existing Java, C and C++ architectures without overhaul to the application, and build it into new applications without having to learn the intricacies of shared-memory programming. GemFire expresses shared-memory capabilities in familiar Java terms, which increases developer productivity and enables faster deployment of business applications.
"We are excited to leverage our 20 years of experience developing performance software to bring the power and speed of shared-memory computing to our customers and the development community," said Dan Ware, President and CEO of GemStone Systems. "Our industry-leading customers demand zero-latency performance to ensure they can exceed the expectations of their customers, and to give them the competitive edge to be increasingly profitable."
GemFire Offers Features to Enhance Performance, Administration and Development
GemFire Increases Performance:
· Uses shared memory for sharing data, object change notification, thread synchronization and garbage collection across multiple virtual machines to significantly increase the performance of applications
· Enables the creation, access or manipulation of shared data from thousands of concurrently executing application components at in-memory speeds
· Optimizes general-purpose programming commands for fast, safe performance in high-volume environments
· Reduces unnecessary database communication on a customer’s systems by as much as 98% and is over 40x faster
GemFire Reduces Administration Time and Costs:
· Automatic garbage collection of the shared-memory object space
· Console for configuration, starting and stopping GemFire
· Unobtrusive, fast performance monitoring of both system and application-defined statistics
· Can be introduced into an existing architecture incrementally without overhaul to the application
GemFire Enables Faster Time to Market for Java Developers:
· Expresses shared memory capabilities in familiar Java form, so developers don’t need to learn shared-memory programming
· Provides an off-the-shelf software component that can be incrementally integrated into existing architectures and built into new applications
· Extends the simple Java synchronization model without additional programming
· Delivers high performance with simple programming, ensuring high developer productivity and fast deployment of business applications
GemStone has been developing high-performance, distributed object technology for 20 years, and GemFire, which is built on the core technologies of GemStone/J™ and GemStone Facets™, is the next evolution in providing zero-latency performance, which is becoming a requirement for businesses today.
GemFire will be demonstrated for the first time at the 2002 Worldwide Java Developers Conference, JavaOne, in San Francisco (Booth # 1619). The product will enter the beta cycle this summer. For more information about GemFire, download a technical white paper at www.gemstone.com.
About GemStone Systems
GemStone Systems’ technology enables customers to make split-second business decisions and immediately adapt to market change. Our zero-latency performance technology significantly increases the performance and manageability of real-time business applications, and increases user and system productivity. Building on 20 years’ experience with high-performance, highly scalable systems software, this revolutionary technology supports multiple languages, integrates with existing architectures and can be built into new applications. Our industry-leading customers like JPMorgan Chase, Intercontinental Exchange, UBS Warburg, Bell South Telecommunications and Orient Overseas Container Line depend on our technology and expertise to build scalable, mission-critical applications that have set the standard in their industries, and given them the competitive edge to be increasingly profitable.
Headquartered in Beaverton, Oregon, GemStone sells and supports its solutions through its US sales force and a worldwide network of distributors. For detailed product and company information, please visit www.gemstone.com.
- On a single machine? by Perrin Harkins on March 21 2002 13:49 EST
- On a single machine? by Cameron Purdy on March 21 2002 20:18 EST
- On a single machine? by Shon Schetnan on March 21 2002 21:14 EST
- On a single machine? by Mileta Cekovic on March 22 2002 03:43 EST
- On a single machine? by Matthias Ernst on March 24 2002 12:19 EST
- On a single machine? by Cary Bloom on March 26 2002 01:22 EST
- On a single machine? by Stu Charlton on March 27 2002 13:28 EST
So all it does is share data on a single machine? That's not exactly difficult. You could do that with simple serialization and a dbm file, or numerous other ways that have been around for decades. Why don't they try to help with something that's actually hard, like sharing synchronized data across a cluster?
Perrin: "Why don't they try to help with something that's actually hard, like sharing synchronized data across a cluster?"
That sounds familiar ;-)
Our Coherence software provides a synchronized shared data store among any number of JVM processes on any number of physical servers.
Gemstone does more than share mere "data". Let’s talk about the difficulty of sharing objects across multiple VMs. Sharing data is easy. Sharing memory, objects and an application environment isn't. GemStone does this and has been doing this for a very long time.
Gemstone does this and much more... It is always sad to see how few people in industry understand what GemStone does.
No. Gemstone has provided object-level persistence across multiple JVMs, regardless of the host the JVM is running on, for years.
I believe the main difference between this product and previous products is that this product can be used on any VM, not just Gemstone's VM.
As I have understood it, they are using shared memory (maybe the API from JDK 1.4) to make inter-VM communication on a single machine as fast as possible. I believe it is much faster than using UDP (as in other products).
Although the usefulness of inter-VM communication on a single machine could be questionable (put all applications in a single VM, if possible), it could be used for applications that do not scale well on multi-processor machines (for example WebLogic ;-) ).
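GemFire's internals aren't public, but the JDK 1.4 mechanism the poster alludes to can be sketched with java.nio memory-mapped files: two JVMs on the same machine that map the same file see each other's writes at memory speed. A minimal sketch, assuming nothing about GemFire itself (the file name is illustrative, and both mappings live in one process here just to keep the demo self-contained):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class SharedMapDemo {
    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("shared", ".dat");
        f.deleteOnExit();

        // Each "process" (simulated here by two independent channels)
        // maps the same region of the same file into its address space.
        RandomAccessFile writer = new RandomAccessFile(f, "rw");
        RandomAccessFile reader = new RandomAccessFile(f, "rw");
        MappedByteBuffer out = writer.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);
        MappedByteBuffer in  = reader.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, 1024);

        out.putInt(0, 42);                // a write through one mapping...
        System.out.println(in.getInt(0)); // ...is visible through the other: prints 42

        writer.close();
        reader.close();
    }
}
```

Both mappings back onto the same OS pages, so no serialization or socket hop is involved; that is the basic reason a shared-memory path can beat UDP between VMs on one machine.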
That's been Gemstone's argument for a long time. VMs do not scale to infinity. Today's garbage collectors, for example, are fine for up to 1 GB. So if you want to scale further, run a bunch of VMs on a large, single machine and use Gemstone as a shared transactional object cache.
I'm not sure whether the usage of shared memory is the right way. In our case, the shared-memory settings GemStone/J needed in the past broke other applications we used too.

And from the programmer's point of view, all shared-memory calls are system calls, and if you have too many of them on a multiprocessor machine, the overall system performance will go down.

Has anybody tried it on a Solaris multiprocessor machine?
Are you talking about the Solaris config directive
set shmsys:shminfo_shmmax=<some number of bytes> ?
This is a Solaris OS thing. I vaguely remember some other product having an issue, but this is an OS thing - basically telling the kernel to allow processes to share memory, and how much. It's not really fair to say that this is a GemStone problem or that we 'broke other applications'.
If you are going to have a true multiprocessor system, you are going to have context switches, and yes, this takes time. Because of this basic fact, you don't get double the performance when you go from one CPU to two, regardless of your application. Are you saying that the OS, in particular, does not handle multiprocessor use very well? Are you advocating racks of single-processor machines? Hope this doesn't sound inflammatory - it's not meant to. I just feel this is one of the prices of multiprocessor architectures, but in the end, it's a better price to pay than sticking a network between each CPU.
Sorry for the very late reply. You are right, I'm talking about those kernel configuration parameters. And my intention was not to make this into a GemStone problem; it was more meant as a hint to try all the components you need with these shared-memory settings, because it can be a killer.

We had good results with one VM on a 10-processor machine, and could see with proctool that the VM indeed uses all CPUs. But nevertheless I agree with you that it is better to have multiple VMs with a smaller number of threads.

All the best!
I've had terrible experiences with WebSphere scalability ... since you profess to know so much about app server scalability, can you help me out?
If you don't have anything substantial to contribute, kindly refrain from posting crap.
Sharing data on a single machine is easy?
Sure, with one user.
I've seen trading systems using gigabytes of UNIX shared memory and C++, and the memory & threading bugs are still killing the system 4 years after release.
GemStone has long been a leader in transactional caching, this is just extending that expertise into high-performance temporary caches.
Perhaps our press release falls a little short of explaining why and how one would use our new product, GemFire. Please forgive us - it's a new product, so we are learning how to explain it.
At the core of GemFire is a shared object domain in memory. This object domain can be shared across multiple VMs and eventually from C/C++ applications. Unlike our Facets product, GemFire will not initially support sharing these objects across multiple machines, but we may consider that in future releases. In my experience, there are three main topics that need to be explained in regards to GemFire:
Why multiple processes? Can't we do everything in one VM?
Well, you can try, but it is just not practical. Even as VMs become more robust, experience doesn't show them to really take advantage of multiple processors well. Two, maybe; 24, no. At the risk of starting the age-old war of 'Are processes easier to control than threads?', my vote is for processes. There are more ways to start, stop, examine and debug applications that are in separate processes than if you throw them into one VM. Another issue with the single VM is what happens when/if you get an OutOfMemoryError. In my book, when this happens, I don't trust any thread in that VM. Who knows in what critical area that error was thrown. But the best argument for multiple processes is the flexibility and resiliency of the architecture in a 24x7 operation. You may need to take certain parts of your system offline independently, or scale different services by adding more instances of one service independently. GemFire gives you 'tightly linked' memory for performance with 'loosely linked' memory for resiliency.
Is it hard to do shared memory?
Across processes, yes. Sun has its 'intimate shared memory' model, but that is VERY low level and doesn't provide any services. To have a shared-memory model, you need process(es) to create (independently of any one VM), monitor and manage the memory. Then, if you are going to put an object model on top, you have to manage how those objects are going to be stored, updated and shared. One of the biggest problems is implementing a way to synchronize the threads across the VMs and serialize their access to updating objects so that updates are always atomic. You also need to consider garbage collection and page fragmentation.
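The atomic-update problem described above can be made concrete with a small sketch. This is not GemStone's implementation (which works directly against shared memory); it just shows, using JDK 1.4's FileLock for OS-level cross-process mutual exclusion, why a read-modify-write on shared state must be serialized. The counter file name is illustrative:

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;

public class AtomicCounter {
    public static void main(String[] args) throws Exception {
        // Any process on the machine may open and map this same file.
        File f = new File("counter.dat");
        RandomAccessFile raf = new RandomAccessFile(f, "rw");
        FileChannel ch = raf.getChannel();
        MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4);

        // FileLock gives OS-level mutual exclusion across processes;
        // without it, two processes could interleave the read and the
        // write below, losing one of the increments.
        FileLock lock = ch.lock();
        try {
            int n = buf.getInt(0);
            buf.putInt(0, n + 1); // read-modify-write, atomic w.r.t. other lockers
        } finally {
            lock.release();
        }
        raf.close();
        f.delete();
    }
}
```

A shared object cache has to provide this kind of serialization for every update, plus notification and garbage collection, at a much finer grain and far lower cost than a file lock.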
When would you use GemFire?
Firstly, GemFire is not implemented by serializing the objects and sending them across sockets between the processes. It is designed for situations that need much faster and much finer-grained sharing than that. But it's more than just sharing objects. Many applications that share objects could use the simpler approaches that are already out there. GemFire really shines when it comes to synchronizing threads that are accessing the object domain, registering listeners and sending notification events. It does this very, very fast with extremely fine-grained (i.e. object instance, not topic) resolution. Typical applications include trading systems, auction systems, telecommunications systems, billing systems and collaborative gaming systems. These scenarios all have the need for extremely fast event notification and synchronization.
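The per-instance listener model described above can be sketched in plain Java. This is an in-process illustration only; the names (SharedObject, ChangeListener) are hypothetical and are not GemFire's actual API, and in GemFire the update and the notification would cross VM boundaries through shared memory:

```java
import java.util.ArrayList;
import java.util.List;

public class NotifyDemo {
    // Hypothetical names, sketching per-instance (not topic-based) notification.
    interface ChangeListener { void changed(SharedObject source); }

    static class SharedObject {
        private final List<ChangeListener> listeners = new ArrayList<ChangeListener>();
        private int value;

        void addListener(ChangeListener l) { listeners.add(l); }

        // Update and notify atomically; listeners fire per object instance.
        synchronized void setValue(int v) {
            value = v;
            for (ChangeListener l : listeners) l.changed(this);
        }
        synchronized int getValue() { return value; }
    }

    public static void main(String[] args) {
        SharedObject price = new SharedObject();
        price.addListener(new ChangeListener() {
            public void changed(SharedObject s) {
                System.out.println("price changed to " + s.getValue());
            }
        });
        price.setValue(100); // prints: price changed to 100
    }
}
```

Registering on the instance itself, rather than subscribing to a topic, is what the post means by fine-grained resolution: only parties interested in that exact object are notified.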
I hope this helps explain GemFire more clearly.