Isocra announces the release of its new product, livestore, which is simple to integrate yet addresses all of these problem areas. livestore is a transparent JDBC distributed data cache that holds data local to the application server. The result is the elimination of time-consuming and resource-hungry round trips over the network to fetch data from a physical database, resulting in order-of-magnitude speed increases for frequently or recently accessed data.
Unlike other caching solutions, livestore provides code free integration which means that applications do not have to be specially adapted to take advantage of the locally held data. The product can be slotted into an existing or new application by simply pointing the application at livestore instead of at the existing JDBC driver. None of the security and integrity features of the application are compromised and, because of livestore's J2EE and JDBC standards compliance, there is no technology lock-in.
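As a rough illustration of what "code free integration" could look like in practice (the `jdbc:livestore:` URL prefix and the helper below are invented for this example, not taken from Isocra's documentation), the only change would be the JDBC URL the application hands to its driver:

```java
public class DriverSwap {
    // The URL the application used against the plain JDBC driver.
    static final String DB_URL = "jdbc:oracle:thin:@dbhost:1521:APP";

    // Hypothetical livestore scheme: prefix the existing URL so the cache
    // driver knows which real driver to delegate misses and writes to.
    static String livestoreUrl(String realUrl) {
        return realUrl.replaceFirst("^jdbc:", "jdbc:livestore:");
    }
}
```

Everything else in the application (SQL, transactions, security) would stay exactly as before, which is the point the announcement is making.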
To read more about livestore and download a free trial copy, go to the Isocra website.
This sounds interesting. But as I said before:
In the last month, so many new tools and packages have been announced. How is anyone going to keep up with all of this: where to find a use for each one, and who to get back to in case of a bug?
Isn't it a little overwhelming? Not that there's anything wrong with that :)
shs l: "When and who and how is one going to keep up with all this, where to find the use, how to get back to someone in case of any bug."
Isocra Livestore is a commercial product. If you find a bug, you tell them, because they were the ones that took your money.
As for where to find the use, that is a no-brainer with a caching JDBC driver. I think this is a great idea ... it's similar to what TimesTen is doing (JDBC acceleration) but I'm placing my bets on companies like Isocra that can actually deliver software for less than $100k per server. If this software works as advertised, it could be pretty big.
Easily share live data across a cluster!
The website talks about the database being the definitive version of the data; however, one question I would have is: how does the application ensure that the cache contains the latest and greatest data on subsequent reads?
If I do an update through JDBC, does livestore figure out what changed and update the cache? If a transaction is rolled back does livestore similarly keep the cache in check? What about clustered environments, does livestore maintain the cache integrity for reads across all nodes?
This product sounds like it would be most useful in a web application whose underlying data does not change much, but not that great in a "web application" type environment where data changes frequently. Is that a correct assumption?
Sometimes you post too quickly; ignore the question on caching, I see they deal with it on the website. Still curious about updates and invalidating the cache, though.
livestore looks at all the SQL going through all the JDBC connections to the livestore instances in a cluster. When an update comes along, the database and, if possible, the cache are updated. In the rare cases where it's not possible to update the cache and the affected row or rows are present, those rows are flushed from the cache. Obviously, we don't like flushing the cache, but sometimes we just have to. There's a more complete discussion of when we do this in the product reference guide. The cache synchronisation takes care of keeping all the caches in sync with these changes.
In general, livestore is very conservative about making sure the data in the cache are valid.
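A minimal sketch of that update-or-flush policy, with invented names (this is not livestore's actual implementation): an update the cache can interpret is mirrored locally, while one it cannot interpret simply evicts the affected row so the next read falls through to the database.

```java
import java.util.HashMap;
import java.util.Map;

class WriteThroughCache {
    private final Map<Integer, String> rows = new HashMap<>();

    void load(int id, String value) { rows.put(id, value); } // filled on a read-miss

    // Simple update we can interpret: keep the cached copy in sync.
    void applyUpdate(int id, String newValue) {
        if (rows.containsKey(id)) rows.put(id, newValue);
    }

    // Update too complex to interpret: flush the row rather than risk
    // serving stale data.
    void flushRow(int id) { rows.remove(id); }

    String get(int id) { return rows.get(id); } // null means "go to the database"
}
```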
Very cool looking product.
Quick question: does Livestore's caching assume that the only data-access route is through livestore? How would livestore deal with a "diamond"-like scenario where some data is accessed via the livestore JDBC driver, then a user comes in through, say, the RDBMS management tool and deletes that data, and now a user comes back through livestore? How does livestore detect the cache inconsistency?
Personified Technologies LLC
Livestore does assume that it will be notified of any changes made to the data it is caching. If the application making the changes is a Java application, the easiest way to make sure this happens is to have it use Livestore itself. It can be configured to take up next to no memory, and just send changes to other Livestores through the clustering system.
If it is not a Java application, it is still possible to have it feed changes to the rest of the Livestore cluster, although the APIs are not public and we would need to help you with that. If you don't have access to the source code (for example, the app is something like a database admin tool), it is still possible to make sure livestore is informed of the change using database triggers.
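One hedged sketch of the trigger route (all names invented): a database trigger appends (table, primary key) entries to a change-log table, and a small poller on the Java side drains that log and evicts the matching cache rows. Here the log and the cache are stood in for by in-memory collections so the idea is self-contained:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

class ChangeLogPoller {
    // Stand-in for a trigger-populated change-log table: "table:pk" entries.
    private final Deque<String> changeLog = new ArrayDeque<>();
    // Stand-in for the cache: the set of "table:pk" rows currently cached.
    private final Set<String> cachedRows = new HashSet<>();

    void cacheRow(String tablePk) { cachedRows.add(tablePk); }

    // What the database trigger does on an out-of-band write.
    void externalWrite(String tablePk) { changeLog.add(tablePk); }

    // Periodically drain the log and evict anything that was changed
    // behind the cache's back.
    void poll() {
        String entry;
        while ((entry = changeLog.poll()) != null) cachedRows.remove(entry);
    }

    boolean isCached(String tablePk) { return cachedRows.contains(tablePk); }
}
```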
That's pretty amazing. We launched a beta of a very similar product just a few weeks ago.
Looks like you've beat us to the post. Knowing who we're up against I know you're going to be a tough act to follow!
Best of luck with Livestore.
Chief Engineer, phWorks
Tel: +44 (0) 20 7511 0737
Fax: +44 (0) 870 458 1627
Thanks for the kind comments. We found out about your PhDataCache product a couple of weeks ago and it does have some similarity to livestore.
However, I think I'm right in saying that it isn't a write-through distributed cache like livestore but a timed repository for infrequently changing result sets. Obviously useful, though in different situations from livestore.
Good luck with the product.
Another quick question:
What does livestore do in the case of complex queries, like ones that use complex joins, aggregates, inline queries, and DB vendor and user-defined functions? What about triggers that perform side-effect updates, stored procedures that return result sets...
I think JDBC caching is effective only if your queries don't use any of the stuff mentioned.
And caching of result sets that do not strictly map to a specific table leads to either cache-size explosion or poor performance.
Livestore focuses on accelerating the kinds of queries made by the vast majority of J2EE applications. Complex queries are answered correctly by delegating them to the database. Livestore can answer joins, including outer joins, from cache, but stored procedure queries and queries using functions or subqueries are delegated to the database in this version of livestore.
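The delegation decision described above could be sketched as a simple classifier over the SQL text. This is a toy heuristic, not livestore's real parser: it routes stored procedure calls, function calls, and subqueries to the database and keeps plain selects and joins eligible for the cache.

```java
import java.util.regex.Pattern;

class QueryRouter {
    // Crude markers for SQL this sketch would delegate to the database:
    // "{call ...}" stored procedures, "(SELECT ..." subqueries, and
    // "name(" function invocations. A real product would parse properly.
    private static final Pattern DELEGATE = Pattern.compile(
            "\\{\\s*call|\\(\\s*select|\\w+\\s*\\(",
            Pattern.CASE_INSENSITIVE);

    static boolean answerFromCache(String sql) {
        return !DELEGATE.matcher(sql).find();
    }
}
```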
Similarly, writes made via a livestore connection are always passed on to the database. This ensures the write makes it safely to the database. In the majority of cases livestore will reflect the changes made by the SQL in its internal cache as well. In the few cases where the write is too complex or it is being made by a stored procedure, livestore will flush the minimal amount of data from its cache to ensure that the application always gets correct data.
If you read the technical briefing on the website, you'll see that livestore does not cache result sets. Rather, it is a partial in-memory database in its own right. There is therefore no combinatorial explosion to worry about.
Thank you for your answers. Livestore is surely a good product for accelerating database applications with a minimal amount of effort.
I only wanted to point out that the JDBC layer is not an optimal place for caching from a pure performance point of view (the object layer is much more promising). But, sure, from an ROI point of view, it is very good.
I agree that in a perfect world everyone would be using a sensible object approach leaving the execution environment for those objects, and the services that support them such as caching, to the vendors.
There are CMP, BMP, session beans, servlets and JSPs making direct JDBC calls, JDO, vendor O/R mapping, in-house O/R mapping tools and countless other things I can't even categorise, and livestore can sit below all of them without impacting them or their portability to non-livestore platforms.
99.99% of the performance gain from caching comes from avoiding the network, and therefore the cost of representation transformation, such as O/R mapping, is negligible. I would postulate, with no hard facts you understand, that it is possible for pure relational data access to outperform object-based access in some circumstances, and vice versa in others.
Dean, you miss the most important point regarding your answer to Mileta ;)
If you cache at the object level, the cache layer is absolutely not transparent to the developer. To some extent it can be more or less hidden (in the O/R layer, for example), but if you want to leverage an object cache system you will have to query your objects using specific patterns (no bulk loading, etc.).
IMO, the beauty of a product like Livestore is that it is really transparent for the developer; it does NOT impact the code.
OK, I missed the "without impacting them" part, so you did not really miss it ;)
I have a quick question regarding the cache when in a clustered environment.
Is the cache across the cluster guaranteed to be up to date? By this I mean: if two nodes in the cluster were both updating the same row at the same time, would one update fail? If so, how is this achieved? Does the developer have to do anything?
livestore's cache is guaranteed to be up to date within the limits of network latency. In the cases where one application attempts to modify a row of data that has already been remotely modified but its synchronisation message is in-flight, optimistic locking will catch the race and raise an exception.
livestore automatically manages the addition of optimistic locking by transforming the application's SQL queries and updates to retrieve and check version information, leaving the application untouched.
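That SQL transformation can be illustrated with a toy rewrite. The version-column name `ls_version` is invented for the example (the actual column naming is not given in this thread): the update gains a version increment and a version check, so a concurrent modification makes the statement match zero rows and the race is detected.

```java
class OptimisticRewrite {
    // Turn  "UPDATE t SET x = ? WHERE id = ?"  into
    // "UPDATE t SET x = ?, ls_version = ls_version + 1
    //  WHERE id = ? AND ls_version = ?"
    static String addVersionCheck(String sql) {
        return sql.replaceFirst("(?i)\\sWHERE\\s",
                ", ls_version = ls_version + 1 WHERE ")
                + " AND ls_version = ?";
    }
}
```

If the `ls_version` check fails, zero rows are updated and the driver can raise the optimistic-locking exception described above.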
Thanks Dean for the quick reply.
Your answer states that version information about the data is stored somewhere, and this versioning allows Livestore to perform optimistic locking. Is the version data kept in a separate table in the database?
The version information is stored in a column within the application's own tables rather than on a separate table requiring a join.
A future version of livestore will support more "unusual" forms of optimistic locking, such as timestamp, modified columns, all columns, and off-table storage, but experience has shown that these are the exception rather than the rule.