GemStone Systems has announced the availability of GemFire Real-Time Performance 1.0. GemFire integrates data sources across different JVMs as a single system image, providing applications with a unified view of shared data at in-memory speeds.
Check out GemFire and read the press release.
GemStone Announces Gemfire 1.0 Real-Time Cache (13 messages)
- Posted by: Floyd Marinescu
- Posted on: October 24 2002 13:25 EDT
Threaded Messages (13)
- GemStone Announces Gemfire 1.0 Real-Time Cache by Sacha Labourey on October 24 2002 14:33 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Guglielmo Lichtner on October 24 2002 19:33 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Dean Sheehan on October 25 2002 04:58 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Cameron Purdy on October 25 2002 08:27 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Dean Sheehan on October 25 2002 11:49 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Sean Broadley on October 27 2002 19:53 EST
- GemStone Announces Gemfire 1.0 Real-Time Cache by Guglielmo Lichtner on October 25 2002 13:05 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Dean Sheehan on October 25 2002 13:43 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Guglielmo Lichtner on October 25 2002 15:56 EDT
- GemStone Announces Gemfire 1.0 Real-Time Cache by Guglielmo Lichtner on October 28 2002 09:52 EST
- GemStone Announces Gemfire 1.0 Real-Time Cache by David Brown on October 28 2002 18:00 EST
- GemStone Announces Gemfire 1.0 Real-Time Cache by Aleksandar Milenovic on October 30 2002 07:23 EST
- GemStone Announces Gemfire 1.0 Real-Time Cache by Cameron Purdy on October 30 2002 09:36 EST
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Sacha Labourey
- Posted on: October 24 2002 14:33 EDT
- in response to Floyd Marinescu
I've had the chance to see a presentation of this product: this is really awesome!
For those of you who appreciate the beauty of distributed caches, take a look at it: from a pure technology standpoint, it is really impressive.
The cache layer will really become more and more important in the future, IMHO: it will become a central player.
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Guglielmo Lichtner
- Posted on: October 24 2002 19:33 EDT
- in response to Sacha Labourey
Umm. I looked at the data sheet and "the in-memory speed" is just for apps on the same machine. It was too good to be true.
What is really missing from the market right now is a Java-based synchronous communication solution, something that can run over gigabit Ethernet or some other low-latency system-area network. I mean something that can send packets without giving up control to the kernel. This would allow sharing data in a matter of microseconds instead of milliseconds.
And it has to be cheap. Oracle RAC does something like this, and DB2 on z/OS has definitely been doing it for years, but those solutions are out of the reach of the average J2EE project.
Failing that, the best thing a cache vendor can do right now is come up with innovative, effective concurrency-control ideas; then you can probably avoid sending a lot of packets anyhow. This is yet another offering with only optimistic or pessimistic concurrency control.
P.S. This post must be like a lightning rod for Cameron ...
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Dean Sheehan
- Posted on: October 25 2002 04:58 EDT
- in response to Guglielmo Lichtner
There are Java-based distributed synchronised caching technologies - notice I said synchronised rather than synchronous; more on that later. You refer to one of them, Coherence from Tangasol; there is also Livestore from Isocra, the company I represent, and various other solutions as well.
I think gigabit Ethernet and not relinquishing to the kernel are red herrings. If it runs over 10Mb it runs over 100Mb... You are never going to be able to send packets without going through an O/S system call unless you write your own network drivers, and the benefits of avoiding the kernel are minuscule compared to all of the other costs in a system.
Making it a synchronous communication channel is also a bad idea, as this introduces more latency into the system (you definitely want to relinquish to the kernel if you are waiting, so that your CPU can be doing something useful). Whether the communication is sync or async is irrelevant. The important end-user concern should be the failure modes of the cache instances during their synchronisation, and the implications for corrupt or stale data. Livestore manages these failure modes by acknowledging the benefits of the underlying database and optionally writing control data through the database in order to ensure durability of the synchronisation messages in a failure scenario. Note that the database is not the main route for cache synchronisation messages, just the durable store behind them should they be missed.
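By way of illustration only, the control-data pattern looks roughly like this (a sketch of the general idea, not Livestore's actual API; every class and method name below is made up, with the database and the sync channel replaced by in-memory stand-ins): write durably first, broadcast second, and let a node that missed a broadcast replay from the durable table.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DurableSyncSketch {
    // Stand-in for a database table of durable control records.
    static final Map<Long, String> controlTable = new HashMap<>();
    // Stand-in for the fast (but unreliable) cache-sync channel.
    static final List<Long> channel = new ArrayList<>();

    static long nextSeq = 0;

    // Publish an update: durably record it first, then broadcast.
    static void publish(String key, String value) {
        long seq = nextSeq++;
        controlTable.put(seq, key + "=" + value);  // durable write-through
        channel.add(seq);                          // then the fast broadcast
    }

    // A node that missed broadcasts after lastSeen recovers from the table.
    static List<String> recover(long lastSeen) {
        List<String> missed = new ArrayList<>();
        for (long seq = lastSeen + 1; seq < nextSeq; seq++) {
            missed.add(controlTable.get(seq));
        }
        return missed;
    }

    public static void main(String[] args) {
        publish("price", "10");
        publish("price", "11");
        // A node that only saw seq 0 replays seq 1 from the control table.
        System.out.println(recover(0)); // prints [price=11]
    }
}
```

The point of the ordering is that a lost broadcast can never lose an update: the worst case is a stale reader that catches up from the durable record.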
On concurrency, I like the idea of finding something between pessimistic and optimistic, but I'm not sure users will accept 'doggymistic'. To be serious for a moment, my current belief is that distributed cache scalability can only be achieved using optimistic concurrency control, but it is an area that Isocra will be putting a lot of research into in the coming months.
Cheers,
- Dean
[email protected]
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Cameron Purdy
- Posted on: October 25 2002 08:27 EDT
- in response to Dean Sheehan
Guglielmo : "P.S. This post must be like a lightning rod for Cameron ..."
I'm certainly following the thread with interest ;-) ... however the topic is about the Gemfire release. GemFire is a unique product; I haven't seen anything else in the Java space built around a single-machine shared memory model, and shared memory can obviously be a lot quicker than TCP/IP for example. For very large servers, this opens some interesting possibilities; remember, you aren't typically worried about an e15k or a superdome going down, so you don't necessarily need the multi-machine clustering. Besides, GemStone has already announced their intention to add multi-machine clustering to GemFire in the future, probably on top of JMS or using something like Coherence or Javagroups.
Also, the relationship with JBoss (note Sacha Labourey's comment above) has been good for raising awareness of the product. I'm still curious about how it is priced, but the accounts we've typically seen GemStone products in are the type that aren't too concerned about price.
My opinion on this market is pretty simple: There's a lot of untapped potential, since basically every high-end system has at least one need in this area. We're seeing first-year ROI of over 500% in a lot of cases, and I've spoken at length with Mike Nastos at GemStone, and he's mentioned some similar findings of their own. There's also a growing trend in this market for vendors that complement each other to work together to provide more robust solutions; we're working to build relationships with companies like GemStone, Isocra, JDO vendors, etc. Obviously, the real winner is the customer that now has some really amazing solutions available off-the-shelf. I think that's really the strength of Java -- not the language, not the JVM, but the companies that have decided to participate in the market, to compete in a field that demands interoperability and where there's a critical mass of solutions and integrators and developers and consultants and writers and ....
Anyway, congrats to GemStone on the release!
Dean: "Coherence from Tangasol"
Actually, this was the real "lightning rod" ... it's "Tangosol" ;-)
Peace,
Cameron Purdy
Tangosol, Inc.
Coherence: Easily share live data across a cluster!
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Dean Sheehan
- Posted on: October 25 2002 11:49 EDT
- in response to Cameron Purdy
Cameron,
Apology for the misspelling of Tangosol - I'm known for my typos!
Cheers,
- Dean
[email protected]
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Sean Broadley
- Posted on: October 27 2002 19:53 EST
- in response to Cameron Purdy
<quote>
GemStone has already announced their intention to add multi-machine clustering to GemFire in the future, probably on top of JMS or using something like Coherence or Javagroups.
</quote>
I can't point to a technical reason why JMS is a completely bad idea here, but it just seems wrong.
You want something that provides low latency and high throughput of messages in a clustered environment. That's not what JMS was intended for.
JMS specifies an interface and programming model for 'reliable message queue' products, where enterprise-level subsystems (usually loosely coupled) wish to send notifications to each other in a reliable, asynchronous fashion.
Sean
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Guglielmo Lichtner
- Posted on: October 25 2002 13:05 EDT
- in response to Dean Sheehan
"I think gigabit ethernet and not relinquishing to the kernel are red-hearings. If it runs over 10Mb it runs over 100Mb ... You are never going to be able to send packets without going through an O/S system call unless you write your own network drivers and the benefits of avoiding the kernel are miniscule compared to all of the other costs in a system."
I respectfully disagree. A substantial chunk of the time, the machine is running kernel code to copy data back and forth. This is not really a problem at 10 Mbps or 100 Mbps, but at 1000 Mbps, since the MTU is still 1500 bytes, the machine spends a LOT of time copying data from the JVM to the kernel to the network card (and back), and doing context switches (saving process state and restoring it).
I can give you references if you like.
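A quick back-of-the-envelope calculation of the packet rates involved (illustrative arithmetic only, ignoring frame and header overhead):

```java
public class PacketRate {
    public static void main(String[] args) {
        long mtuBytes = 1500;
        long bitsPerPacket = mtuBytes * 8;  // 12,000 bits per full-size frame
        long[] linkRatesMbps = {10, 100, 1000};
        for (long mbps : linkRatesMbps) {
            long packetsPerSecond = (mbps * 1_000_000L) / bitsPerPacket;
            System.out.println(mbps + " Mbps: " + packetsPerSecond + " packets/s");
        }
        // Prints 833, 8333 and 83333 packets/s for 10, 100 and 1000 Mbps.
        // At gigabit speed that is ~83,000 copies (and potential context
        // switches) per second, which is where the kernel time goes.
    }
}
```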
"... my current belief is that distributed cached scalability can only be achieved when using an optimistic concurrency control ..."
I see it this way: optimistic and pessimistic concurrency control arise from the assumption that we should aim for serializability of transactions (hence two-phase locking or multiversion two-phase locking). But the reason behind the serializability assumption is that if you are a DBMS vendor you can't assume anything about what the transactions will do, and therefore you need a generic mechanism. This is definitely a start. But the way to the highest transaction throughput is semantic concurrency control. Transactions should talk to each other and resolve conflicts in intelligent ways. Right now the dialogue between two different transactions is elementary, and no longer adequate.
Guglielmo
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Dean Sheehan
- Posted on: October 25 2002 13:43 EDT
- in response to Guglielmo Lichtner
Guglielmo,
I agree that as the Mbit rate goes up, the cost of getting in and out of Java increases relatively, and at some point this becomes an important cost in the system, but I've generally found other, more serious issues first. If you have a reference for more information I would be keen to follow it up. Any thoughts on whether the new NIO implementations get anywhere near helping in this area? I'm guessing they don't.
Interesting view on concurrency control. I agree that having transactions, or rather their governing business processes, negotiate with each other over the correct result of their concurrent updates is the best way, but I fail to see how this could increase scalability with respect to distributed data stores (caches).
Ta,
- Dean
deans at isocra dot com
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Guglielmo Lichtner
- Posted on: October 25 2002 15:56 EDT
- in response to Dean Sheehan
"Interesting view on concurrency control. I agree that having transactions, or rather their governing business processes, negotiate with each other over the correct result of their concurrent updates is the best way but I fail to see how this could increase the scalability with respect to distributed data stores (caches)."
If you use optimistic concurrency control, you will have to deal with rollbacks. If you use pessimistic concurrency control, you have to deal with liveness (threads waiting) and deadlock. Obviously optimistic concurrency control is more scalable than pessimistic, because you don't get these bottlenecks where everybody is waiting for the same data item. But if you have long-running transactions, then rollbacks are not acceptable either. The only way out of this problem is to get into what the transactions actually do and how they can resolve their conflicts in constructive ways.
When you think about it, it takes time to roll back, so in a way rolling back is the same as blocking: time is wasted either way.
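To make the rollback-versus-blocking point concrete, here is a minimal sketch of optimistic concurrency control over one versioned entry (illustrative names only, not any vendor's API): each failed compare-and-set is the moral equivalent of a rollback followed by a retry, and no writer ever blocks.

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticEntry {
    // Immutable (version, value) pair so a snapshot is always consistent.
    static final class Versioned {
        final long version;
        final int value;
        Versioned(long version, int value) { this.version = version; this.value = value; }
    }

    private final AtomicReference<Versioned> entry =
            new AtomicReference<>(new Versioned(0, 0));

    // Apply an increment optimistically, counting the retries
    // (each failed compareAndSet is effectively a rollback).
    public int incrementWithRetry() {
        int rollbacks = 0;
        while (true) {
            Versioned snapshot = entry.get();                   // optimistic read
            Versioned proposed = new Versioned(snapshot.version + 1, snapshot.value + 1);
            if (entry.compareAndSet(snapshot, proposed)) {      // commit if unchanged
                return rollbacks;
            }
            rollbacks++;                                        // conflict: retry
        }
    }

    public int value() { return entry.get().value; }

    public static void main(String[] args) throws InterruptedException {
        OptimisticEntry e = new OptimisticEntry();
        Thread[] writers = new Thread[4];
        for (int i = 0; i < writers.length; i++) {
            writers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) e.incrementWithRetry();
            });
            writers[i].start();
        }
        for (Thread t : writers) t.join();
        System.out.println(e.value()); // prints 4000: no update is lost
    }
}
```

Under contention the retries consume time just as blocking would, which is exactly the "rolling back is the same as blocking" observation above.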
Guglielmo
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Guglielmo Lichtner
- Posted on: October 28 2002 09:52 EST
- in response to Dean Sheehan
"If you have a reference for more information I would be keen to follow it up. Any thoughts on whether the new NIO implementations get anywhere near helping in this area, I'm guessing they don't."
The new I/O was needed to be able to do TCP/IP multiplexing, but it still goes through the TCP stack.
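For reference, the JDK 1.4 Selector API does let one thread service many channels; the sketch below (illustrative names, written against the standard java.nio API) shows the multiplexing, and also why it doesn't remove the kernel cost: every read is still an ordinary TCP socket read.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;

public class NioMultiplexDemo {

    // Accept one connection and read one message through a Selector.
    static String roundTrip() throws IOException {
        Selector selector = Selector.open();

        // Non-blocking server socket registered for accept events.
        ServerSocketChannel server = ServerSocketChannel.open();
        server.socket().bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        // An ordinary client connects and writes five bytes.
        SocketChannel client = SocketChannel.open(server.socket().getLocalSocketAddress());
        client.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8)));

        // One thread services any number of channels via select(), but each
        // read below still goes through the kernel's TCP stack.
        String received = null;
        while (received == null) {
            selector.select();
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel accepted = server.accept();
                    accepted.configureBlocking(false);
                    accepted.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    int n = ((SocketChannel) key.channel()).read(buf);
                    if (n > 0) {
                        buf.flip();
                        received = StandardCharsets.UTF_8.decode(buf).toString();
                    }
                }
            }
            selector.selectedKeys().clear();
        }
        client.close();
        server.close();
        selector.close();
        return received;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints "hello"
    }
}
```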
Here are some resources that I have found interesting. If you would like to discuss any of this (some people enjoy talking about this stuff), email me at lichtner at bway dot net:
http://www.nersc.gov/research/FTG/via/
http://aspen.ucs.indiana.edu/CandCPandE/jg99papers/C436JGFSIchangFINAL/c436newchang.pdf
http://www.cs.berkeley.edu/~mdw/proj/seda/
http://state-threads.sourceforge.net/docs/st.html
Guglielmo
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: David Brown
- Posted on: October 28 2002 18:00 EST
- in response to Guglielmo Lichtner
You guys are all bringing up great points - things we at GemStone have been considering and working on for some time. It's obvious that one product will not suit everyone's needs: some people need the ultimate in speed, some need fine-grained updates, some need geographical distribution, others need integration with EAI, XML, databases - the list goes on. We do have extensive plans to build on the GemFire product line, taking advantage of partner integrations, emerging specifications and advances in hardware.
This announcement was for the GA of GemFire 1.0, which has some interesting uses on its own, particularly if you have a number of processes, Java and/or C, that need to co-ordinate activity against a shared object domain.
I'd be happy to answer questions about the product roadmap on an individual basis.
David Brown
david dot brown at gemstone dot com
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Aleksandar Milenovic
- Posted on: October 30 2002 07:23 EST
- in response to David Brown
Hi,
Really interesting product. Although I am wondering: what would the benefits be compared with OODBs, like Versant? Most of them offer a two-level distributed cache, with notifications when an object has been updated - and persistence, of course.
Do you have any performance benchmarking information, such as read/update/create times as a function of the number of accessing processes/threads and the number of listeners (Observers)?
Thanks,
Aleksandar
GemStone Announces Gemfire 1.0 Real-Time Cache
- Posted by: Cameron Purdy
- Posted on: October 30 2002 09:36 EST
- in response to Aleksandar Milenovic
Aleksandar: "I am wandering what would be the benefits comparing with OODBs, like Versant?"
GemStone's products (such as GemStone/J and Facets) are OODBMSs. They definitely compare to products such as Versant. GemFire is a little bit different in its approach, and there are no direct comparables (shared memory for multiple processes on a single server).
Peace,
Cameron Purdy
Tangosol, Inc.
Coherence: Easily share live data across a cluster!