- Data caching
News: Oracle Coherence vs. GigaSpaces XAP
What are the differences between two of the largest Java run-time data caching and computational grid products? Gojko Adzic worked with both technologies over a year and presents an unbiased analysis of both that will help you understand their differences and advantages to assist your selection of one technology over the other for a specific problem domain. The article provides a thorough comparison of both products in these areas:
- Posted by: Eugene Ciurana
- Posted on: June 11 2009 10:47 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Joseph Ottinger on June 11 2009 11:02 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Ilya Sterin on June 11 2009 11:46 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Pablo Ruggia on June 11 2009 13:29 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Shay Hassidim on June 11 2009 01:37 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Nati Shalom on June 11 2009 04:36 EDT
- Melodramatic? by Nikita Ivanov on June 11 2009 06:15 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Cameron Purdy on June 12 2009 10:19 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Brian Oliver on June 14 2009 04:34 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Nati Shalom on June 15 2009 06:45 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Cameron Purdy on June 16 2009 10:10 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Nati Shalom on June 16 2009 06:12 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Cameron Purdy on June 18 2009 12:03 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Billy Newport on June 18 2009 02:45 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Nati Shalom on June 18 2009 05:54 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Cameron Purdy on June 18 2009 09:47 EDT
- Oracle Coherence Claims -- Are you kidding me?? by Gideon Low on February 08 2010 01:17 EST
- Re: Oracle Coherence vs. GigaSpaces XAP by Cameron Purdy on June 11 2009 02:27 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by Nikita Ivanov on June 11 2009 04:18 EDT
- Re: Oracle Coherence vs. GigaSpaces XAP by John Davies on June 12 2009 12:21 EDT
It's not a bad writeup on Gojko's part, although he gets the notification mechanism in GigaSpaces wrong; he also oversimplifies his conclusion somewhat, IMO (although I do agree on general principles). Terracotta is an interesting comparison point for these two. (This was brought up in the comments, as was a correction to the GigaSpaces notification mechanism.) To me, TC offers a similar solution to Coherence, albeit transparently to the client code; that said, I doubt I'm the typical DSO user.
"GigaSpaces deploys data to a fixed number of partitions and fixes it for the lifetime of the data space. If a machine goes down, a backup partition will take over, and on clouds you can even have a new machine instance started up for you automatically, but you cannot increase or decrease the number of partitions after the grid has started."

This is not really true, or at least I don't remember it being a constraint; it's been a year since I last touched GS. Our system was able to deploy more data partitions on different nodes on the fly, and the data got repartitioned/moved around when we added and/or removed nodes. This was a big requirement for us, as we needed to deploy on EC2 and scale the grid up and down during the week for scalability and cost reasons, and GS handled it very well. I do remember that this configuration wasn't that straightforward; I think you had to set the maximum number of nodes you would allow in the grid, and you could then scale up to that number. Maybe someone from the GS team and/or Joe can comment on that.
I'm not 100% sure, but I think the trick goes like this: deploy 15 partitions to 5 machines. If you see that the partitions are using too much memory (detected, for example, via an SLA), you can get a new machine and "migrate" some partitions to it. You can therefore grow from 5 machines to 15. So the sentence should read: "the number of partitions is fixed, but not the number of machines".
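Pablo's trick can be sketched in a few lines. This is an illustrative sketch only (plain Python, not GigaSpaces code): with a fixed partition count, a key always maps to the same partition, and scaling only re-assigns whole partitions to machines.

```python
# Sketch: fixed partition count, variable machine count.
PARTITIONS = 15  # fixed for the lifetime of the data space

def partition_for(key):
    # A key always lands in the same partition, regardless of
    # how many machines currently host the grid.
    return hash(key) % PARTITIONS

def assign(machines):
    # Spread the fixed set of partitions round-robin across
    # however many machines exist right now.
    return {p: machines[p % len(machines)] for p in range(PARTITIONS)}

five = assign(["m%d" % i for i in range(5)])      # 3 partitions per machine
fifteen = assign(["m%d" % i for i in range(15)])  # 1 partition per machine
```

Growing from 5 machines to 15 only moves whole partitions to new hosts; individual keys are never re-hashed, which is what makes the migration cheap.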
You are right, Pablo. See this for details: http://www.gigaspaces.com/wiki/display/SBP/Capacity+Planning

Shay Hassidim, Deputy CTO, GigaSpaces
You are right, Pablo. Adding to Shay, Pablo and Ilya... you can see my detailed response on this specific point:

"As far as I know this is the same with both products, only that we use explicit partition instances and Coherence uses implicit logical partitions. In both cases dynamic scaling means changing the number of running partitions per JVM container (GSC in our terminology). You can start with 100 partitions even if you have two machines, and spread those partitions as soon as more resources become available. When a machine goes down, the system will not wait until a new machine becomes available; it will scale down to the existing containers, as long as it detects that there is enough memory and CPU capacity available. I'm not sure how scaling down works with Coherence, but one thing to check is whether scaling down could lead to out-of-memory issues when there is not enough capacity on the available machines.

I would describe the main difference between the two approaches as black-box (Coherence) vs. white-box (GS). In our philosophy, clustering behavior should be managed in the same consistent way across the entire application stack, which means that when we manage a partition, or when we scale the application, we scale not just the data but the business logic, messaging and any other component that needs to be associated with it. Our experience showed that the black-box approach is simpler to get started with, but can become fairly complex once you start to deal with scaling on other layers of the application, such as the business logic, messaging or web layer. In many cases this leads to different clustering models across the application tiers, which means more moving parts, more complexity, etc. For example, in our case, if a data grid container or a web container crashes, the process of maintaining high availability is exactly the same for both.
As of the XAP 7.0 release we also added the ability for users to write their own custom SLA and scaling behavior melodramatically - See reference here. This will enable you to monitor the entire application behavior and decide which thresholds should trigger scaling out or down, automate the entire deployment, and manage self-healing when there is a failure. This wouldn't be possible if we had taken the black-box approach."

Based on a comment in a follow-up response by one Coherence user, my assumption seems to be accurate: "Coherence's partition count is also fixed and cannot be changed while the cluster is running."

Nati S., GigaSpaces
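A custom SLA rule of the kind Nati describes boils down to a threshold check over a monitored metric. The sketch below is purely illustrative (the names are made up; this is not the XAP 7.0 API):

```python
# Hypothetical scaling rule: watch memory utilization and decide
# whether to scale out, scale in, or hold. Real SLA frameworks would
# also debounce decisions and consider CPU, latency, etc.
def scaling_decision(used_memory_fraction, high=0.75, low=0.30):
    if used_memory_fraction > high:
        return "scale-out"   # threshold breached: provision another container
    if used_memory_fraction < low:
        return "scale-in"    # plenty of headroom: release a container
    return "hold"
```

The point of making such a rule user-defined is exactly the white-box argument above: the same threshold logic can watch business-logic and messaging containers, not just the data grid.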
Nati,

"scaling behavior melodramatically..."

This is the first time I'm seeing it in such a context :)

Cheers,
-- Nikita Ivanov
GridGain - Open Cloud Platform
Oops... it should have been: "As of the XAP 7.0 release we also added the ability for users to write their own custom SLA and scaling behavior dynamically - See reference here."

Nikita, thanks for pointing this out.

Nati S., GigaSpaces
Nati, All I know is that I can download Coherence, unzip it, and start up 100 (or 200, etc.) machines running it without changing the config, and it will automatically and dynamically scale out the data management across those machines, and it's worked that way for seven years now. (You probably remember that we pioneered the concept, since copied with varying success by a few dozen open source and commercial packages, including memcached and gigaspaces.) Peace, Cameron Purdy Oracle Coherence: Data Grid for Java, .NET and C++
Cameron, I never argued that the world didn't exist before Coherence came along. I'm sure you believe that it didn't, and if that is the case I'm happy for you. I didn't even argue that Coherence is easier to get started with (my comment explicitly says so). Having said that, dealing with dynamic scaling requires that you consider things like:

1. What if I mistakenly started a new instance - would that trigger rebalancing? In many cases I would like to control when rebalancing happens. What options do I have to do that?
2. What if I want to make sure that primaries and backups will be spread between the primary and disaster-recovery sites, i.e., that a primary and its backup will not live in the same data center, even if they are running on two separate machines?
3. If my data partition fails over, how do I handle fail-over of the business logic that is attached to it?
4. What if I need to scale down - what will happen to my data if a machine goes down and the existing machines are already at full capacity?
5. How do I know where every piece of my data is running, and when it moves where? Sometimes I'll need to trigger other activities as a result of such an event; how do I do that?

These are just a few of the questions one would need to consider when choosing between the two approaches. We tend to believe that giving control and visibility is core, even if it comes at a cost in initial user experience (and we're making continuous progress to keep that cost at a minimum). Anyway, if you compare the two approaches I'm sure you'll see quite a difference in the philosophy, and obviously in the implementation behind it. I'm not sure the two of us are the best judges of which one is better, as there are clearly pros and cons to each approach, and both seem to be successful. Nati S., GigaSpaces
Hey Nati, I think we can all agree on one thing: these products, like other data grid products, often have very different philosophies, vastly different implementation strategies and definitely different terminology. The differences often outnumber the similarities, so much so that it's basically pointless to compare small technical features (let's call them "knobs"), even when those "knobs" have exactly the same "names". E.g., a Replicated Cache in Coherence is not the same as a Replicated Cache in Gemstone's GemFire. As there are few standards in this space, and given that no one will admit to copying ideas, what applies to GigaSpaces (or some other product) does not necessarily apply to Coherence, and vice versa. Thus comparing "knobs" is only useful when the "knobs" are at least similar and have the same function (or unless one has a fascination with "knobs"). Your statement:
"Coherence's partition count is also fixed and cannot be changed while the cluster is running"

clearly demonstrates this point. It's pretty clear that you don't understand how Coherence manages "partitions", their purpose, and/or how customers use them. Please don't think for a moment that your statements describe a limitation/weakness of Coherence compared to GigaSpaces or any other implementation. Time and time again this has been proved not to be the case. Unfortunately GigaSpaces (and associated sales/partners, etc.) continues to bring it up as a big thing. I'm guessing it's to gain some form of leverage... fair enough. If that's the case, it's called competition. But let's at least be clear about it up front: your knowledge is very limited in this area with respect to Coherence, and making the statement bold does not increase your authority on the matter. You're correct, though: when using Coherence out of the box, the partition count for an individual partitioned service is set to a default (you can call it fixed if you like). What you fail to mention (possibly deliberately, possibly because you don't know, possibly because you don't want anyone to know) is that in Coherence:

a) Partitions are not bound to individual servers, i.e., you can and typically do have more partitions than there are physical or virtual servers in a cluster. Coherence works out how best to allocate and manage partitions so that, in the event a server is lost, no information is lost... something I believe most data grid vendors (apart from Oracle) still require developers to understand and/or carefully manage by hand (or it's done via some proxy/GUI wizard). Fundamentally this means the "intelligence" is being provided by the "developer", as it's not "built into" the product.

b) The number of partitions by default in Coherence is pretty high (257 partitions) compared with all other products.
I.e., Coherence doesn't suggest or force an initial model where there is "one partition per server". So unlike most products (and I think GigaSpaces is like this), where starting a 3-server system gives you a correlated number of partitions (i.e., 3 partitions), Coherence is different: it starts by default with a suitably high number of partitions, even if you have just one server! Consequently, if you add another server to a Coherence cluster (at runtime, without requiring a GUI), you don't force developers/operations people to start thinking about reconfiguring partitions (until you go past 257, but then again that is a deployment decision, not one that limits the number of servers in a cluster). I.e., you can have a cluster much larger than the partition count, and sometimes this is actually desirable! The thing is this: why would anyone put developers in a position where they have to continually re-configure/re-partition their data as servers join/leave a cluster? It just makes "operational scalability" a pain and error-prone. I guess this is why so many products in this space require GUIs just to start them. The infrastructure should be able to at least do this for you. Well, that's our belief. See... a different philosophy.

c) The partitioned "services", which make use of the partition count and other information, are dynamically re-loadable and thus dynamically configurable (if you really must) - it's just that almost no one does this, as the defaults are often sufficient! Why change something if there is no need? Why make it harder to use?

d) Part of a pre-production checklist is setting the partition count to a suitable level so that you don't have to change it later. Again, most people never change the default.

e) Partitions are only used to organize information and are not related to capacity.

f) Having more partitions than the server count is often a very good thing: it increases parallelism and makes better use of multi-core systems!
Think of it this way: developers using out-of-the-box Coherence can comfortably scale a system up or down (dynamically, on the fly) from 1 to 257 servers (physical or virtual, cloud or no cloud) without a GUI and without even thinking about partitions or requiring some console to manage this. While I think this is a good thing, others may disagree. For small systems (with just a few servers) it's trivial to manage servers; with large systems (10s, 100s or 1000s of servers spread around the world), the last thing you want to do is fiddle with partitions (or remember to). Having:

a) been a Coherence customer for nearly 4 years (in a variety of positions and companies),
b) had to evaluate all of these products, including GigaSpaces (in that time),
c) personally assisted several GigaSpaces customers in the past few years to migrate onto Coherence (for various technical and business reasons, scalability being one),
d) worked with some of the largest GigaSpaces customers (they told me), and
e) talked with probably thousands of engineers about Coherence and partition management,

I can honestly say no one has ever found partition-count configuration in Coherence to be a drop-dead issue, especially previous GigaSpaces customers. Regards -- Brian, Global Solutions Architect, Oracle Coherence (and former customer)
Brian, you got me a bit lost. It looks to me like you either misinterpreted a large part of my previous comments, or I didn't make myself clear enough. Let me try again. The points I was trying to make are:

1. Both products support dynamic clustering.
2. Both products use a fixed number of logical partitions and rebalance those partitions across the available JVMs.

The main differences between the two approaches to dynamic scaling are:

1. Coherence performs the migration of logical partitions implicitly whenever a new JVM joins the cluster - the user has no control or visibility as to where each partition is running and when it is moving. I referred to this model as the black-box approach.
2. GigaSpaces, on the other hand, uses a generic container model and enables re-location of partitions between containers either automatically, based on a user-defined SLA (memory, CPU, etc.), or even manually.

The areas in which those differences become important are the ones I pointed out earlier, such as cases where you want to keep primary and backup separate across disaster-recovery sites. I didn't quite get your view as to how Coherence can handle any of those scenarios; I'd appreciate it if you could elaborate, in case I missed something. The comment about the fixed number of partitions was actually a quote from a Coherence user on the original post. You seem to have mistakenly interpreted that as an attempt to point out a limitation of Coherence. What I was saying is that we use that exact same model - I therefore found your entire argument justifying why a fixed number of partitions is a good thing quite amusing, as we don't really have an argument there. One last comment: in case it wasn't clear, the writer of this article selected GigaSpaces at the end of the day to run his application. You can read the details of his selection in his direct response.
If I had to summarize his comment, I would put it the following way: Coherence and GigaSpaces both provide first-class data grid solutions. For pure caching scenarios, Coherence seems to be simpler to use. For distributed transaction processing in which the data grid is part of the solution, GigaSpaces is a better fit, as we provide a more complete solution due to the native Space-Based Architecture support. Nati S., GigaSpaces
"Coherence performs the migration of logical partitions implicitly whenever a new JVM joins the cluster - a user has no control or visibility as to where each partition is running and when it is moving. I referred to this model as the black-box approach."

I don't mean to be pedantic, but I want to correct a few of your statements:

* When a new JVM that is configured to manage a certain set of partitioned data joins a cluster that already hosts that data, the new JVM uses a greedy algorithm to request that the other servers asynchronously transfer responsibilities to it in order to load-balance the cluster. That allows the application to keep running at close to full speed while data is migrated incrementally (and in a load-balanced manner) to the new server.
* There is complete visibility into the partition locations. From any server, at any time, it is possible to determine the number of partitions for a particular cache, the number of backups for those partitions, and the location of each partition and of each synchronous replica (backup) of that partition.
* There is complete visibility into the partitioning decisions. Partitions have full life-cycle events that applications can listen and react to.

The one thing you were correct about is that Coherence does not delegate HA decisions related to partitioning to the application code. In other words, Coherence makes partitioning decisions autonomically in order to ensure high availability at all times, and does not ask the application code to participate in those decisions. The downside of this choice is that application code cannot control which servers manage which partitions. The upside is high availability, which has always been our primary goal with Coherence (and arguably the primary reason for our market success).
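The greedy hand-off described above can be sketched roughly like this (a toy model in Python, not Coherence internals): the joining member repeatedly pulls one partition from whichever member currently owns the most, so only about P/N of the partitions move when the Nth server joins.

```python
def rebalance(ownership, new_server):
    """Greedy: the new server pulls partitions one at a time from the
    most-loaded member until it holds its fair share."""
    ownership[new_server] = []
    total = sum(len(parts) for parts in ownership.values())
    target = total // len(ownership)
    while len(ownership[new_server]) < target:
        # Ask the currently most-loaded member to donate a partition.
        donor = max(ownership, key=lambda s: len(ownership[s]))
        ownership[new_server].append(ownership[donor].pop())
    return ownership

# 257 partitions (the Coherence default mentioned earlier in the
# thread) spread over 3 servers, then a 4th server joins:
own = {"s1": list(range(0, 86)),
       "s2": list(range(86, 172)),
       "s3": list(range(172, 257))}
rebalance(own, "s4")
```

Because each transfer is an independent, incremental request, the cluster can keep serving reads and writes while rebalancing, which is the property Cameron highlights.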
"Coherence and GigaSpaces provide first-class data grid solutions. For pure caching scenarios, Coherence seems to be simpler to use. For distributed transaction processing in which the data grid is part of the solution, GigaSpaces is a better fit, as we provide a more complete solution due to the native Space-Based Architecture support."

I chuckle at your continued attempts to label Coherence a caching solution. Admittedly, the overwhelming market demand (for eight years now) has been for distributed caching functionality; probably 90% or more of our deployments make at least some use of our distributed caching functionality. An intermediate category that you missed is the "distributed data management" functionality, such as you find in HTTP session management, JMS message stores, and many other use cases. Only a small percentage of our customers make use of the high-end Coherence Data Grid capabilities, which enable application logic and data to be collocated (data moved to logic, or logic moved to data) and executed in parallel in large-scale environments. While we have had the good fortune to be very successful in this high-end data grid market, it's still relatively young (we only introduced these capabilities 4-5 years ago) and there's obviously still room for other companies like GigaSpaces to enter.
"If that wasn't clear, the writer of this article selected GigaSpaces at the end of the day to run his application."

Congratulations! Next time, you're buying the beer ;-) Peace, Cameron Purdy, Oracle Coherence: Data Grid for Java, .NET and C++
Cameron, thanks for the detailed response; that was the type of information I was looking for. When I used the term "control", I mainly meant the ability to control the cluster's behavior (as opposed to its configuration). Maybe you can clarify the following points:
1. Is there a way to ensure data migration happens only when there is a real need for it (not merely when a new JVM joins the cluster)?
2. Is there a way to ensure that a primary and its backup will not be provisioned in the same data center?
3. What will happen to my data if a machine goes down and the existing machines are already at full capacity?
"Congratulations! Next time, you're buying the beer ;-)"

With pleasure...
Nati -

1. Is there a way to control whether data migration will happen only when there is a real need for it (not when a new JVM joins the cluster)?

It's going to be hard to explain this to you, since GigaSpaces has such a different architecture, resembling more of a federated system (e.g. routing by the clients, a la memcached) than a cluster (a la Coherence). In Coherence, the location and load-balancing of data is performed asynchronously and transparently to the application, without data loss or corruption, and without relaxing write consistency. When a server joins the cluster and is configured to manage data, that is what it does; if there were "no real need for it", the server wouldn't be there.

2. Is there a way to control that primary and backup will not be provisioned in the same data center?

Yes, but we do not suggest clustering be used across two data centers (again, because our clustering is not just a bunch of servers federated by clients, but an actual cluster). Our Push Replication feature is typically used instead; see the Oracle Incubator for more details. High availability across a WAN is certainly one of the main selling points of our data grid edition, since (we've been told by customers) it's the only solution for achieving HA data grids across multiple data centers.

3. What will happen to my data if a machine goes down and the existing machines are already at full capacity?

Running GigaSpaces, or running Coherence? ;-) Coherence supports off-heap storage, overflow, etc., and has for years. Normally, though, you'd use an N+1 capacity plan, and (e.g. with Oracle Enterprise Manager or WebLogic Liquid Operations Control) that would ensure there was always an extra server, even if one went down. Honestly though, if you're running a system at full capacity, you should not be running systems .. ;-) Peace, Cameron Purdy, Oracle Coherence: Data Grid for Java, .NET and C++
Or just avoid all this and buy IBM WebSphere eXtreme Scale :) It's the other 'only' product that does multi-data center replication...
Cameron, thanks again for the detailed response. It would be refreshing if, for once, you could leave your pompous tone out.
"GigaSpaces has such a different architecture, resembling more of a federated system (e.g. routing by the clients, a la memcached)"

Correction: with GigaSpaces we balance between client- and server-side logic, where load-balancing is done on the client side and replication and write consistency are handled on the server side. The advantage is that we can guarantee a single network hop for each read operation. As for consistency: as the writer noted in his post, we also support transaction integrity, i.e., we will roll back the state of all objects if the transaction fails - something the writer found lacking in the case of Coherence.
"it's the only solution for achieving HA data grids across multiple data centers."

Referring to an incubator project (which is outside your product) while claiming that you provide the only solution for WAN doesn't sound convincing to me. In any case, I would encourage you to check your sources with regard to the above statement; you couldn't be more wrong.
"with Oracle Enterprise Manager or WebLogic Liquid Operations Control) that would ensure that there was always an extra server, even if one went down"

We believe cluster automation and orchestration are core capabilities that should be tightly integrated with the application and cluster management. They should also be part of your development and testing environment. I have found that many external orchestration tools were designed for integration purposes but are not well suited for cluster management. Having said that, it is a valid solution if you're willing to carry the complexity (and cost) associated with it.
"Honestly though, if you're running a system at full capacity, you should not be running systems .. ;-)"

Fair point - a better approach in such cases would be to avoid getting into a potential overflow situation in the first place and simply wait until resources are available to deal with the failure. In our case such resources would be provisioned automatically, which brings me back to my point above about the importance of keeping cluster automation and orchestration tightly integrated. Nati S., GigaSpaces
"It would be refreshing if, for once, you could leave your pompous tone out."

Seriously, I've seen your questions at least a dozen times before. Your salesperson or salespeople feed them to potential customers all the time as FUD. Honestly, it gets really old after a while. Nonetheless, despite being bear-baited by you (for the n-th time :-p), you are still correct: I should have done the right thing and taken the high road in my response.
"The advantage is that we can guarantee a single network hop for each read operation."

Yes, of course. I have to assume you knew that's true of Coherence as well.
"As for consistency, as the writer noted in his post, we also support transaction integrity, i.e., we will roll back the state of all objects if the transaction fails - something the writer found lacking in the case of Coherence."

First, rolling things back doesn't equate to consistency. Unless something has changed in the past month, GigaSpaces doesn't provide any guarantees for consistent reads across two servers, for example. (I was told that it doesn't even provide consistency guarantees across two partitions on the same server, but I've never verified that claim.) To your point, though: we do not provide XA transactions in our current product. What we do provide is once-and-only-once guarantees on operations, which enable applications to compose transactions for XTP systems (where distributed 2PC just isn't going to cut it). It's more work, but the throughput is several orders of magnitude higher. As but one throughput example, benchmarks of distributed transactions in GigaSpaces have been pretty clearly documented. (For the record, that's 208 TPS, and in his testing, he saw a factor of 3x to 6x penalty.)
"Referring to an incubator project (that is outside of your product) .."

It may be "outside" of the coherence.jar file, but it's a published project (you know, like Tomcat or Eclipse is published ;-), built by our product team and widely used by our customers for some time now.
".. saying that you provide the only solution for WAN .."

That's not exactly what I said, but I can't be any less subtle without posting a story that would qualify as FUD, so I'll humbly retract my statement and admit that I shouldn't have said it in the first place.
"We believe cluster automation and orchestration are core capabilities that should be tightly integrated with the application and cluster management."

Nati, you're being disingenuous. Just point out that your product includes this capability now and that you think your product is better as a result; honestly, you don't have to pretend to be objective when you're the CTO of a competing company. I don't mind you liking your own product; in fact, I'd be a bit worried if you didn't!
".. a better approach in such cases would be to avoid getting into a potential overflow situation in the first place and simply wait until resources are available to deal with the failure. In our case such resources would be provisioned automatically .."

Yes. We have a rule-driven management engine that does just that as part of Coherence Suite (which used to be called WebLogic Application Grid). Even before that, though, we watched customers do this for years; I'm sure you've heard of DataSynapse FabricServer, for example. Peace, Cameron Purdy, Oracle Coherence: Data Grid for Java, .NET and C++
Cameron, I'm trying to think of a good way to close this thread. I would start by reminding you that it all began with a statement that all other products are basically clones of Coherence. Later you admitted that GigaSpaces is fundamentally different. You also opened with your strongest selling point, i.e. dynamic scaling; I hope that by now you realize there is more than one way to implement this and other capabilities. I think the writer of this article outlines, in a fairly balanced manner, the strengths and weaknesses of each product based on his experience, so we'll need to give him credit for that. In case it wasn't clear: I want you and your team to know that I hold a lot of respect for you personally and for the Coherence team in general. I think the competition between the two companies leads both teams to excel and innovate, and that's what competition should do. I do hope we will all learn to take the "high road" and keep a constructive dialogue. After all, we all have things to learn from each other, and as you can see, there is always more than one way to approach a given requirement. That in itself shouldn't keep any of us from claiming that "our own high road" is better than the other :) One other thing that crossed my mind while writing these lines is that the timing might be right to start thinking about standardizing data grid semantics and APIs... Have a great weekend. Nati S., GigaSpaces. P.S. Your argument on consistency and transaction support in GigaSpaces is incorrect for the most part; I'll leave that statement open-ended for now, otherwise we'll keep dragging each other through this endless discussion.
> I would start by reminding you that it all started by a statement that all other products are basically a clone of Coherence.

Not to be argumentative, but I was responding to what I considered to be your inane quoting and bolding (and thank God not ALL-CAPS-ING) of irrelevant information about Coherence that seemed to serve no purpose other than FUD. Going back to my response, I simply pointed out that we pioneered the concept of the dynamically partitioned HA data grid. If I'm wrong, I'm certain you could post a link to show otherwise (Ari, for example, claims that he built it all himself in college in 1994, and mentioned that PARC had it all in the 1970s, which is pretty definitive proof in and of itself if you ask me ;-). Note that we didn't patent it. We didn't threaten to sue you for doing something similar. I really don't understand what's so evil about coming up with something interesting and being proud of having done so. I hope you are as proud of the things that you've built.
> Later you admitted that GigaSpaces is fundamentally different.

Nati, I honestly don't see a contradiction here. Don't you think that the architecture of GigaSpaces is fundamentally different?
> I think that the competition between the two companies leads both teams to excel and innovate ..

On this, you and I have always agreed. The competition has certainly not hurt Coherence sales, and I assume (based on the 25+ competitors that have entered the space since 2001) that the overall market is growing and doing pretty well. Peace, Cameron Purdy Oracle Coherence: Data Grid for Java, .NET and C++
Hi Cameron, I should have posted a reply when I first saw this; sorry for the delay. When you say:

> (For the record, that's 208 TPS, and in his testing, he saw a factor of 3x to 6x penalty.)

you are correct, but the full quote from my blog entry is:

> "These performance figures have little to do with fully optimized GigaSpaces performance. As I mentioned above, there are faster ways to do these writes than the simple approach I used for these tests. What the results do indicate, however, is that you can expect to pay a 3x – 6x performance cost for using distributed transactions over independent writes."

I'm sure that neither of us would want to give readers the impression that typical local transaction rates in well-written and properly deployed GigaSpaces applications are in the region of hundreds of transactions per second, when we know that thousands of transactions per second are more typical of GigaSpaces. -Dan
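The 3x – 6x figure quoted above is roughly what a simple round-trip model of two-phase commit predicts. The model below is a back-of-envelope assumption for illustration only, not a GigaSpaces benchmark or API:

```java
public class TxnCostModel {
    // Back-of-envelope model (an assumption, not measured data): an
    // independent write to a partitioned grid costs one network round trip
    // to the owning partition; a distributed (two-phase-commit) transaction
    // adds a prepare round and a commit round for each participating
    // partition on top of the write itself.
    static int roundTrips(boolean transactional, int participants) {
        return transactional ? 1 + 2 * participants : 1;
    }

    public static void main(String[] args) {
        // One participant already triples the per-operation cost; two or
        // three participants land in the 5x-7x range, bracketing the
        // observed 3x-6x penalty for distributed transactions.
        System.out.println(roundTrips(false, 1)); // independent write: 1
        System.out.println(roundTrips(true, 1));  // 1-participant txn: 3
        System.out.println(roundTrips(true, 2));  // 2-participant txn: 5
    }
}
```

The model ignores lock hold times and batching, both of which shift real numbers in either direction, which is consistent with Dan's point that faster write strategies exist.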
Cameron Purdy wrote:

> Our Push Replication feature is typically used instead; see the Oracle Incubator for more details. High availability across a WAN is certainly one of our main selling points for our data grid edition, since (we've been told by customers) it's the only solution for achieving HA data grids across multiple data centers

My goodness! I don't track TSS for a while and it's "anything goes" time? I personally designed and wrote (with help from my brilliant colleague Barry Oglesby) the alpha of GemStone's WAN connectivity/replication, which won us one of our biggest competitive enterprise customer deals in a bake-off with Mr. Shalom's product/team (YOUR Coherence was eliminated from consideration before hands-on work started). The testing in this bake-off was the most comprehensive I had ever seen, with WAN simulators separating 3 distinct "sites", and each site running upwards of 50 processes to simulate true production loads. Latency averages and outliers were measured and correlated for all 150 processes to confirm steady performance under high data rates. GemFire passed all tests with flying colors--including failover/failback with zero data loss--after only THREE DAYS of on-site testing work with the customer. Unfortunately, the testing dragged on for three extra weeks as our competitor kept asking for "extra time". Furthermore, the testing criteria had to be dumbed down TWICE (things like not testing OS failure & thus backup-file buffered data loss, which GemFire does prevent) because the customer had to keep the appearance of a fair apples-to-apples competitive bake-off. In the end, said competitor never did complete all the requirements of the testing even with all the extra time. GemFire's WAN replication technology was GA'd soon after (that's 2005, FIVE YEARS AGO).
It is deployed at several of the world's biggest financial institutions, over global networks connecting data centers in all the world's major financial markets, running both super high-throughput data like market data quotes AND zero-data-loss tolerance data such as trade executions, and is generally regarded as one of the most reliable and easy-to-use replication technologies around. Coherence's "only solution" is still in the "incubator" stage. Nice one. For Pete's sake Cameron, how can you spew this BS and keep a straight face? Either you're willfully ignoring facts you don't like to hear or you have a VERY selective way of assimilating information about competitors. You can hide behind "our customer tells us" . . . pretty lame. RAISE YOUR GAME! There was a period of time when the three leading products in our space (now known as Distributed Caching Platforms for those that don't track the analysts closely) were in a tight race for Best-of-Breed status. Having continued with a whole lot of in-the-trenches customer development work encompassing all of the toughest aspects of scaling reliable distributed systems, and recently having specifically worked on the migration of an application from Coherence to GemFire, I can say with full confidence that GemFire is now--by a very wide margin in the case of Coherence--the best DCP product on the market. This is not to say that the other products don't have their place, or that they aren't quite capable of delivering value to their customers, but the kind of BS that passes for marketing/evangelism/FUD that I quoted from in this posting has no place anywhere. Period. There is a reason why the most demanding customers license GemFire even when Oracle's salespeople add Coherence into their Enterprise database sales deals for free. Nati, Cameron, Ari . . . if you feel like challenging me on ANY technical reason why you don't think my statement is true, BRING IT ON!!
But please, Cameron, debate with the respect that all who have worked hard in this space deserve, and leave out the hyperbole. Cheers, Gideon Principal Architect -- Customer Facing Solutions GemStone Systems gideon.low-AT-gemstone.com
Hi Gideon -
No need to get your panties in a twist ;-)
Unfortunately, I don't get to look at your software, and no one has ever described to me that they're using GemStone in this manner.
> Coherence's "only solution" is still in the "incubator" stage. Nice one.
I'm not sure what you are getting at, but it sounds like you are attempting to be insulting. Our "incubator" is a way of delivering high-value customizable frameworks to our customers, and every single project in our incubator is used by customers in production. IMHO our incubator projects are quite mature, as evidenced by their wide adoption and their relatively low rate of patching ;-)
> I personally designed and wrote [..] GemStone's WAN connectivity/replication, which won us one of our biggest competitive enterprise customer deals in a bake-off with Mr. Shalom's product/team (YOUR Coherence was eliminated from consideration before hands-on work started).
Gideon, you really are unbelievable. I can only assume you're talking about a financial institution whose initials are ML, since I don't know any other big deal in that timeframe that we didn't win. Unfortunately, Tangosol was never involved with that deal. (I'll email you some of the sordid details; they don't belong in public.) As you know, when that financial institution finally took a look at Coherence, they bought it and completely switched all their applications to Coherence. Also, when that bank was purchased, the purchasing bank switched their applications to Coherence too (from the other vendor that you mentioned). They're a great partner and customer, and we're working hard to take great care of them. I just visited with them this past month, and they already have dozens of large systems running successfully on Coherence.
> I can say with full confidence that GemFire is now--by a very wide margin in the case of Coherence--the best DCP product on the market.
I'm glad that you're proud of your company and software, and as always, I'm glad to have you as a competitor.
Cameron Purdy | Oracle Coherence
> I'm not 100% sure, but I think that trick goes like this ..

Or with Coherence, just add servers ;-) Peace, Cameron Purdy Oracle Coherence: Data Grid for Java, .NET and C++
No tricks with GemFire Enterprise either. Add more servers and it expands to take advantage of the new capacity. Cheers Sudhir Menon GemFire Enterprise: The Elastic data fabric
> I'm not 100% sure, but I think that trick goes like this ..

Or in the upcoming GridGain 3.0, just say to add 3 more instances on the cloud: GridFactory.getGrid().controlCloud(3, "ec2", "my-image"); Regards, -- Nikita Ivanov GridGain - Open Cloud Platform
Since the cloud API has been mentioned… With the GigaSpaces Cloud Computing Framework (CCF), you don't just add instances on the cloud in a dynamic manner, you deploy a complete end-to-end application. Within a single click / API call / command you start machines on the cloud, deploy HTTP load-balancers, deploy web servers, deploy databases, deploy the data grid, and deploy your services. You can scale every tier of your application dynamically (not only the Data-Grid or the Compute-Grid!), and survive any failure of the system components (not just the data grid). CCF allows you to run multiple instances of your application simultaneously, or different versions co-existing side by side. For more details see: http://www.gigaspaces.com/wiki/display/CCF/CCF4XAP+Documentation+Home If you are looking for a nice example of how GigaSpaces can scale any tier of the application in a dynamic manner, see how GigaSpaces can scale a Mule-based application (and also make it totally resilient with continuous high availability): http://www.gigaspaces.com/wiki/display/SBP/Mule+ESB+Example Shay
A rather unfortunate title. I've written many articles (here) and talked about these two technologies over the years, but I would never put one up against the other like that. I think Cameron (CTO and founder of Tangosol) and Nati (CTO and founder of GigaSpaces) would be the first to agree, firstly that their technology is undoubtedly the best, but secondly that they are both very different in their approach and the problems they solve. Comparing these two in the same space has about as much logic as comparing C++ and Java: there's an overlap, but they are both specialist tools that solve different problems. I've always sought a good reference that uses both technologies to demonstrate their differences and how they could be made to complement each other, but sadly most banks (where they're most widely used) tend to have one or the other and rarely both. Of course I'd have to add GemStone and Terracotta to the list in this regard, and maybe even Hazelcast and GridGain. It's interesting that now, because some of the banks have merged recently, the technologies have in fact been forced together, but the winning choice is usually a political one as managers compete for the remaining jobs. I find it difficult to think of a server-side application without thinking in terms of the master-worker pattern, and obviously GigaSpaces fits that beautifully; on the other hand, it's difficult to imagine scaling a cache or persistence layer without Coherence. -John-
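The master-worker pattern mentioned above can be shown in a vendor-neutral way with plain java.util.concurrent. In a space-based product the in-process pool would be replaced by a distributed task space and remote workers, but the shape (master publishes tasks, workers process them, master collects results) is the same; the class and method names here are invented for the sketch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MasterWorkerSketch {
    // Master-worker in miniature: the caller (master) fans tasks out to a
    // pool of workers and gathers the partial results. A data grid replaces
    // the pool with a distributed space, but the control flow is identical.
    static int sumOfSquares(List<Integer> tasks, int workers) {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (Integer t : tasks) {
                futures.add(pool.submit(() -> t * t)); // worker: process one task
            }
            int total = 0;
            for (Future<Integer> f : futures) {
                total += f.get();                      // master: collect results
            }
            return total;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        // 1 + 4 + 9 + 16 = 30, computed by two concurrent workers
        System.out.println(sumOfSquares(List.of(1, 2, 3, 4), 2));
    }
}
```

The "data-aware" variants discussed in this thread additionally route each task to the node that already holds the data it needs, avoiding the network hop that this local sketch cannot show.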
John, you bring up some good points. Not only is it an unfortunate title, the discussion itself obscures the fact that there are other products like GemFire, which have been widely adopted in financial services and other verticals and are used on a daily basis in grid computing environments where scaling down is as important as scaling up. It also obscures the fact that automatically moving data around without taking into account the workload and the kind of data that is active at the time the new capacity becomes available (or goes away) can create more problems. Jags points to approaches taken by GemFire in his recent blog post here.

Nati does highlight something important: any time you move beyond a laptop-based POC into deployments of hundreds of servers, explicit user control and policy-driven scaling do play an important role, and I know for a fact that GemFire uses that approach to scaling, redundancy recovery, balancing out partitions, etc., with reasonable defaults. Integrating the master-worker pattern into distributed data management products like GemFire, Giga, Coherence, etc. has certainly been one of the more innovative things in this space (of course it is a matter of debate whether god commanded a CTO to do so first and the others just followed, but we can leave that out for now), and even here, dynamic policy-driven rebalancing which takes into account all of the current workload on the cluster is what provides the most optimal performance.

I think a data container that is capable of linear scale and an implementation of a data-aware master-worker pattern are integral parts of any first-class product in this space, and being good at one or the other is necessary but not sufficient. GemFire certainly provides both of these. You don't have to take my word for it; you can read the details on the community portal. Cheers, Sudhir Menon GemFire: The Enterprise Data Fabric
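The rebalancing being debated here can be sketched in its most naive form: when a server joins, move only as many partitions as needed to roughly even out the per-server counts. Real products layer workload- and data-awareness on top of this, as the post above argues; everything in the sketch (names, the minimal-move heuristic) is illustrative, not any vendor's algorithm:

```java
public class RebalanceSketch {
    // Naive minimal-move rebalance: owner[p] is the server owning partition p.
    // When the server count changes, reassign just enough partitions from
    // overloaded servers to underloaded ones so counts differ by at most one.
    static int[] rebalance(int[] owner, int servers) {
        int[] plan = owner.clone();
        int[] count = new int[servers];
        for (int o : plan) count[o]++;
        int target = plan.length / servers; // floor of the even share
        for (int p = 0; p < plan.length; p++) {
            int from = plan[p];
            if (count[from] > target + 1) {          // overloaded owner
                for (int s = 0; s < servers; s++) {
                    if (count[s] < target) {         // underloaded destination
                        plan[p] = s;
                        count[from]--;
                        count[s]++;
                        break;
                    }
                }
            }
        }
        return plan;
    }

    public static void main(String[] args) {
        // 8 partitions on 2 servers; a third, empty server joins.
        int[] owner = {0, 0, 0, 0, 1, 1, 1, 1};
        int[] plan = rebalance(owner, 3);
        int moved = 0;
        for (int p = 0; p < owner.length; p++) {
            if (plan[p] != owner[p]) moved++;
        }
        System.out.println("moved " + moved + " of " + owner.length);
    }
}
```

Even this toy version shows the trade-off under discussion: it minimizes data movement but is blind to which partitions are hot, which is exactly the gap workload-aware policies are meant to close.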
Agreeing with you - let's also make sure that there is a clear distinction between prototypes and actual production examples. It's not only an intellectual exercise.