Caching Made Easy - The fastest round trip is the one that doesn't happen



  1. Understanding Caching Scenarios by Joseph B. Ottinger

    Caching has been a hot topic on TheServerSide as of late, and it's no wonder why: the introduction of an efficient caching solution can be one of the most cost-effective ways of improving application performance.

    In the following article, Joseph B. Ottinger shares his insight into the most important points to reckon with when entertaining the idea of introducing a caching mechanism into an architecture. From defining the most pertinent concepts, to introducing words like "memoization" that nobody has ever heard of before, the article is a must-read for anyone looking at the enterprise caching space.

    Understanding Caching Scenarios

    It probably bears mentioning that Joe himself is an employee of GigaSpaces, and as such, his insights might lean a tad towards the GigaSpaces solution, but overall, the article is both insightful and largely unbiased.

    "Caching systems are, by their nature, fairly simple, yet can have many applications. However, it’s important to consider exactly where and how they’re used – and using the right cache, with the right capabilities, can affect your end architecture in many advantageous ways."

    Threaded Messages (5)

  2. Usual problem....

    ...with short intros like that is that they leave a lot behind (and that's not Joe's fault). This topic deserves a good paper to even scratch the surface. Moreover, a great deal in data grids (a.k.a. distributed caches) depends on the vendor, as all solutions are extremely vendor-specific. In a good sense, I can argue about each and every point he makes, and a few are downright invalid from my point of view (again, MY point of view).


    If Cameron or I were to write about our individual products, readers would be surprised how much even basic terminology and approaches can differ - and therefore quasi-generalized introductions like this are of small value, IMHO.


    Kudos for putting it together, though, as it is a good intro to GigaSpaces cache design.



    Nikita Ivanov.

    GridGain = Compute + Data + Cloud

  3. Nice Intro!

    This is a nice introduction for beginners. Caching is such a large topic and very much contextual. Frankly speaking, I have been using all the topologies/approaches mentioned in various projects without even knowing their names. As I said, it is contextual, so which approach to use depends on your application's requirements and design/architecture.

    My two cents would be not to blindly follow others, but to design your cache per your requirements. No single solution can fit all scenarios.

  4. Good introduction article to the core features, Joe. 

    Of course, it would be a lot more appealing if it offered a vendor-neutral view or covered several vendors.

    A few points:

    Hibernate L2 caching: it is primitive at best and only well suited if your entire cache is colocated on the O/R mapping node. Any query is first executed on the DB, and then Hibernate fetches the objects one key at a time from the cache. That works well if each of those fetches is local. With a highly scalable, partitioned cache, that is hardly the case: you can essentially turn a single RPC call into (n+1) RPC calls, where 'n' is the result set size. Arghh!
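    The (n+1) pattern above can be sketched as a toy that simply counts remote calls; the class and method names here are invented for illustration and are not Hibernate's or any vendor's actual API:

```java
import java.util.*;

// Toy cache that counts "remote" calls, to contrast a per-key fetch
// pattern (one RPC per key) with a batched fetch (one RPC total).
class RpcCountingCache {
    int rpcCalls = 0;
    private final Map<Long, String> store = new HashMap<>();

    RpcCountingCache(Map<Long, String> data) { store.putAll(data); }

    // One RPC per key - this is the per-object fetch pattern.
    String get(Long key) { rpcCalls++; return store.get(key); }

    // One RPC for the whole batch of keys.
    Map<Long, String> getAll(Collection<Long> keys) {
        rpcCalls++;
        Map<Long, String> out = new HashMap<>();
        for (Long k : keys) out.put(k, store.get(k));
        return out;
    }
}

public class NPlusOneDemo {
    public static void main(String[] args) {
        Map<Long, String> data = Map.of(1L, "a", 2L, "b", 3L, "c");
        List<Long> idsFromDbQuery = List.of(1L, 2L, 3L); // 1 RPC: the DB query itself

        RpcCountingCache perKey = new RpcCountingCache(data);
        for (Long id : idsFromDbQuery) perKey.get(id);   // n RPCs, one per key
        System.out.println("per-key total: " + (1 + perKey.rpcCalls));  // 1 + n = 4

        RpcCountingCache batched = new RpcCountingCache(data);
        batched.getAll(idsFromDbQuery);                  // 1 RPC for all keys
        System.out.println("batched total: " + (1 + batched.rpcCalls)); // 1 + 1 = 2
    }
}
```

    The point is only the call count: the per-key path grows linearly with the result set, while a batch-capable cache API keeps the round trips constant.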

    On beyond caching: event notification semantics based on continuous queries, durable subscription support and ordering semantics are important. 
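    A continuous query can be thought of as a standing subscription: the subscriber first receives the currently matching entries, and then ordered notifications for later changes that match its predicate. The following is a minimal, vendor-neutral sketch; all names are invented for illustration and do not correspond to any real product's API:

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal sketch of continuous-query semantics: a subscription is a
// predicate plus a delivery list, seeded with current matches and then
// appended to (in registration order) as new matching values arrive.
class ContinuousQueryCache<K, V> {
    private final Map<K, V> store = new LinkedHashMap<>();
    private final List<Map.Entry<Predicate<V>, List<V>>> subscriptions = new ArrayList<>();

    // Register a continuous query: returns the list that will receive
    // matches, pre-populated with the currently matching values.
    List<V> subscribe(Predicate<V> filter) {
        List<V> delivered = new ArrayList<>();
        for (V v : store.values()) if (filter.test(v)) delivered.add(v);
        subscriptions.add(Map.entry(filter, delivered));
        return delivered;
    }

    void put(K key, V value) {
        store.put(key, value);
        // Notify subscribers in registration order (ordering semantics).
        for (var sub : subscriptions)
            if (sub.getKey().test(value)) sub.getValue().add(value);
    }

    public static void main(String[] args) {
        ContinuousQueryCache<String, Integer> prices = new ContinuousQueryCache<>();
        List<Integer> expensive = prices.subscribe(p -> p > 100);
        prices.put("AAA", 150);  // matches -> delivered to the subscriber
        prices.put("BBB", 90);   // no match -> ignored
        System.out.println(expensive);  // [150]
    }
}
```

    Durable subscriptions and delivery guarantees are exactly where real products diverge; this sketch only shows the query-then-keep-listening shape.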

    Consider expanding the discussion to include support for selective or full replication across multiple clusters over WAN boundaries.

    Persistence and recovery: important when considering a cache that will host a large volume of data. You need the caching system to persist the data in parallel and offer quick parallel recovery without requiring a complete reload from the database. Imagine the cost of doing that if the cluster were to be bounced.

    Elasticity: the ability to dynamically expand and/or contract capacity.



  5. Thanks.

    Funny: I agree. But then again, the first stop for a vendor-neutral discussion refused to participate - and I didn't want to get it wrong. (I was trying to make sure that I understood everything properly, so I wasn't saying things that weren't true.) After that, well, heck - I have an employer, the concepts are fairly general, and I *do* have a point-of-view myself...

  6. Caching benchmark report

    An interesting caching benchmark report can be found here: