Azul Systems and Gemstone announce partnership



  1. Azul Systems and Gemstone announce partnership (9 messages)

    Azul Systems and GemStone have announced a strategic alliance aimed at providing data grids that deliver near-instantaneous response times on very large data loads, claiming upwards of 40,000 writes per second and more than 300 GB of memory-resident business data per node. Azul Systems will provide GemStone's customers with large amounts of highly scalable CPU and memory capacity to scale grid nodes and enable co-location of processing and data on the Azul Compute Appliances. GemStone's GemFire will provide data caching, distribution, and consistency management, along with advanced and automatic failover, failback, and self-healing capabilities. The two companies have launched a marketing initiative to identify customers who would benefit from the collaboration, as well as a research and development initiative under which GemFire has been optimized to deploy on Azul computing devices. It is also worth noting that developers who use GemFire have a short path to leveraging Azul's compute appliances: relying on GemFire for synchronization and caching makes migrating to Azul to scale up very easy.
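    The replication and failover behavior described above can be pictured with a toy sketch. This is a hypothetical model, not GemFire's actual API: every write is applied to all replicas, so a read can fail over to any surviving node.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of a fully replicated cache: each "node" holds a complete copy
// of the data, so any surviving node can serve a read after a failure.
public class ReplicatedCache {
    private final List<Map<String, String>> nodes = new ArrayList<>();

    public ReplicatedCache(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) nodes.add(new HashMap<>());
    }

    // A write is applied to every live replica (a real grid would do this
    // over the network and manage consistency; here it is a plain loop).
    public void put(String key, String value) {
        for (Map<String, String> node : nodes)
            if (node != null) node.put(key, value);
    }

    // Read from the preferred node, failing over to any live replica.
    public String get(String key, int preferredNode) {
        Map<String, String> node = nodes.get(preferredNode);
        if (node == null)
            for (Map<String, String> candidate : nodes)
                if (candidate != null) { node = candidate; break; }
        return node == null ? null : node.get(key);
    }

    // Simulate a node crash by discarding its copy of the data.
    public void failNode(int index) { nodes.set(index, null); }
}
```

    A real product adds the hard parts this sketch ignores: network transport, membership, consistency management, and the failback and self-healing the announcement mentions.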

    Threaded Messages (9)

  2. Any comparative data on running Coherence or GigaSpaces on Azul hardware? I'm just wondering how much of this is Gemstone and how much is Azul... Best, Nikita Ivanov. GridGain - Grid Computing Made Simple
  3. Any comparative data on running Coherence [..] on Azul hardware?
    Tangosol (now Oracle) and Azul have partnered closely on the engineering side for years to ensure the best possible performance for Coherence applications. Azul has added a number of optimizations for Coherence to improve the interconnects between co-located Data Grid engines (multiple grid members on a single Azul box) as well as Azul-distributed Data Grid engines (multiple grid members spread across multiple Azul boxes). The end result is massive scalability across any number of Azul boxes without paying the I/O latency penalty associated with the Azul proxy architecture. Having never actually seen the Gemfire product in use, I'd be reticent to comment on how we compare. Peace, Cameron Purdy Oracle Coherence: The Java Data Grid
  4. Having never actually seen the Gemfire product in use, I'd be reticent to comment on how we compare.
    Not to take away anything from Gemstone, but the numbers mentioned in the original post are meaningless unless compared with something similar. I can easily imagine numbers much better or worse... (especially on Azul hardware) Best, Nikita Ivanov. GridGain - Grid Computing Made Simple
  5. I'm just wondering how much of this is Gemstone and how much is Azul...
    In a highly concurrent environment like Azul's, to maximize the benefits, we (GemStone) spent time carrying out large-scale benchmarks and tuning both the configuration and the implementation for scenarios like hundreds of threads firehosing updates into a large, fully replicated cache hosted on Azul compute pools. You want to take advantage of techniques like "DirectPath" provided by Azul to replicate to other Azul nodes, analyze and refactor contention points that would otherwise not show up in a standard multi-core deployment, and use smarter buffering and parallel I/O to capitalize on the multiple network interfaces. Cheers! - Jags Ramnarayan
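    A classic example of the kind of contention point Jags describes is a single hot counter: on a many-core machine it becomes a bottleneck that a profile on commodity hardware never shows. A minimal, self-contained illustration (not GemFire code) is swapping a single atomic for `java.util.concurrent.atomic.LongAdder`, which stripes updates across internal cells and sums them on read:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class ContentionDemo {
    // Many threads hammering one counter: with a plain AtomicLong every
    // increment contends on a single cache line. LongAdder stripes the
    // updates across cells and sums them on read, trading read cost for
    // write scalability -- the kind of refactoring that only pays off
    // (and only shows up in profiles) at high core counts.
    static long countWithAdder(int threads, int incrementsPerThread) {
        LongAdder total = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                for (int i = 0; i < incrementsPerThread; i++) total.increment();
            });
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total.sum();
    }

    public static void main(String[] args) {
        System.out.println(countWithAdder(8, 100_000)); // expect 800000
    }
}
```

    The result is identical either way; the difference is throughput under contention, which is exactly what the firehose benchmark above would expose.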
  6. Well done guys

    Hi Jags, long time no hear. Congrats on the partnership; I heard good things from the Azul guys when they were testing with GemStone.

    To answer someone else's earlier question about performance numbers: these are, as has already been stated, pretty useless without detailed context, and even then are only of limited value. I know the big Azul box will manage one billion (10^9) reads a second from a Java map while simultaneously writing 10 million a second. Of what? Who cares? It's an impressive number. A few months ago I did a talk in New York with Azul's CTO (Gil); via a dial-up connection we demonstrated over 130,000 complex SWIFT messages being parsed a second. Again, a relatively useless figure, because the largest SWIFT user in the world only puts through about 2 million a day (they use our software, of course). So, being more practical: software tuned to run on an Azul box will put through some incredibly impressive figures. The medium boxes were an order of magnitude faster than the fastest Intel box we had, and Intel lent us that box last year to benchmark their new 4-core chips for their press releases.

    I used GemFire some time ago and it stood up well against Tangosol and GigaSpaces. Of course Nati will tell you GigaSpaces is better, Cam will tell you Coherence is better, and Jags will tell you GemFire is the best. They're all right, of course, depending on what your requirements are and how you run the tests. Each of them will quote you a client that will swear they tested all three and X was by far the best product; each will give you another client where they displaced X in favour of Y. They each have their own specialities, and they do those better than their competitors. I've compared these at Symposium talks; they're all easy to use and run rings around a J2EE architecture.

    Both GigaSpaces and Tangosol (now Oracle) have been working with Azul for some time now, so I'd like to welcome GemStone to the group. I have to say that they wouldn't have got in with Azul at this stage unless they were up in the big boys' league. -John-
  7. Re: Well done guys

    Very nice response; it is good to see this kind of technical response to sensitive technical issues. Keep that kind of spirit up :) Thanks
  8. We (Azul) believe that data grids are a natural fit to run on Azul compute appliances. All of the main solutions – including GemStone, Coherence, GigaSpaces and Terracotta – have great potential on the Azul platform. By combining our tremendous memory and CPU scalability with the functionality of data grid software, customers can achieve greater data scalability and application performance through much tighter locality of processing and data. We have been working very closely with GemStone, and their solution has been specifically optimized to leverage all the capabilities Azul has to offer – including high concurrency, memory scalability, Pauseless Garbage Collection and DirectPath. These optimizations make it a very compelling offering on Azul. To date, we have not had the same depth of engineering engagement with the other vendors, so it's difficult to say how their solutions would compare. However, we certainly welcome further collaboration with these other vendors to achieve the same level of optimization for Azul. Gaetan Castelein, Azul Systems
  9. When I read about RAM sizes like these, I can easily dream of loading the whole database into memory and just forgetting about RDBMS tuning. The database is still there on the hard drive, but with a good cache provided by the ORM tool, it would get transparently loaded into memory without any changes to the application... sorry, I'm off topic here :) Regards, Paul Casal Sr Developer - The Enterprise Open Source Billing System
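    Paul's scenario, where the database stays on disk but the working set lives in memory, is essentially a read-through cache, which is what an ORM second-level cache or a data grid cache loader does. A minimal sketch (hypothetical names, not any particular ORM's API):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Minimal read-through cache: on a miss, the loader (standing in for the
// RDBMS round trip) is consulted once and the result is kept in memory,
// so repeated reads of the same key never touch the database again.
public class ReadThroughCache {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> loader;
    private int misses = 0;

    public ReadThroughCache(Function<String, String> loader) {
        this.loader = loader;
    }

    public String get(String key) {
        return cache.computeIfAbsent(key, k -> {
            misses++;                 // only hit the "database" on a miss
            return loader.apply(k);
        });
    }

    // Number of times the backing store was actually consulted.
    public int misses() { return misses; }
}
```

    With enough RAM the cache effectively becomes the whole database image Paul describes; the hard part, as the next message points out, is the write side.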
  10. Hi Paul, the performance of a database is not based solely on read access. I assume your database in a grid would eventually need to write some data to storage. What about the log manager? Maybe the grid could eventually give such durability guarantees, but in a different sense than how we perceive it today. I think we are more likely to see the merging of data grids and compute grids, with some data grid solutions becoming commoditized and commonplace in many enterprise services and applications. Maybe eventually the distinction between local memory access, remote memory access, and offline storage access will dissipate completely, but with such transparency there will be issues as long as average-case performance figures deviate enough to be noticeable at the user level. I am not really qualified to comment on the future of grids, but I do know a thing or two about databases and tuning. William
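    William's point about the log manager is the crux: in-memory reads are easy, but durability means every write must first reach a persistent log before the in-memory state changes, and recovery means replaying that log. The write-ahead discipline can be sketched in a few lines (a toy model, with a list standing in for the durable log file):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MiniWal {
    private final List<String> log = new ArrayList<>();      // stand-in for a durable log file
    private final Map<String, String> memory = new HashMap<>();

    // Write-ahead rule: append to the log before mutating in-memory state,
    // so a crash between the two steps loses nothing that was acknowledged.
    public void put(String key, String value) {
        log.add(key + "=" + value);
        memory.put(key, value);
    }

    // Recovery: replay the saved log in order to rebuild the in-memory map.
    public static MiniWal recover(List<String> savedLog) {
        MiniWal wal = new MiniWal();
        for (String entry : savedLog) {
            int i = entry.indexOf('=');
            wal.memory.put(entry.substring(0, i), entry.substring(i + 1));
        }
        wal.log.addAll(savedLog);
        return wal;
    }

    public String get(String key) { return memory.get(key); }
    public List<String> logEntries() { return log; }
}
```

    The sequential log append is what bounds write throughput, which is why the write numbers in grid benchmarks look so different once real durability is required.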