To cache data objects (be they value objects or entity beans) that are frequently read, yet represent mutable data (read-mostly). This problem is not so difficult when only one server (JVM) is in use, but is much more complicated when applied to a cluster of servers. The Active Clustered Expiry Cache can solve this problem.
This pattern is somewhat similar to the Seppuku pattern published by Dimitri Rakitine. I'd consider Seppuku to be a more specific (and very cool) variation of the ACE Cache pattern, involving Read-Only Entity Beans and WebLogic Server.
This pattern is J2EE compliant and vendor neutral (although the Seppuku pattern is a neat one for Weblogic users).
Typically, in a data-driven application, some data is read more frequently than other data. Although most DBMSs will cache queries for such data in memory, enabling fast retrieval, it is often desirable to have something even faster: an in-memory cache in the server tier itself (app server or even web server).
This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface, which either returns the cached data or retrieves (and caches) it from the DB. Changing this data goes through the same interface, invalidating the cached data.
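As a rough sketch of that single-node, read-through case (the class and method names here are my own, not from any particular framework; the Loader stands in for the real DAO/DB call):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal read-through cache for the single-JVM case.
public class SingleNodeCache {
    public interface Loader { Object load(Object key); }

    private final Map<Object, Object> cache = new HashMap<>();
    private final Loader loader;

    public SingleNodeCache(Loader loader) { this.loader = loader; }

    // Reads go through here: return the cached copy, or load and cache it.
    public synchronized Object get(Object key) {
        Object value = cache.get(key);
        if (value == null) {
            value = loader.load(key);
            cache.put(key, value);
        }
        return value;
    }

    // Writers call this after changing the underlying data,
    // so the next read reloads a fresh copy from the DB.
    public synchronized void expire(Object key) {
        cache.remove(key);
    }
}
```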
This is also easy to accomplish in a cluster, if the data is truly read-only. Then each node will have its own instance (singleton) of the cache, populating it as necessary. No expiration (a term used throughout this text in lieu of "invalidation") is necessary, since the data never changes. In addition, a cache such as this can have a "timeout", so that items will only be stale for a maximum time.
Where this becomes difficult is when a clustered environment is necessary (high load, failover) and the data is mutable. In a nutshell, changes to the data must be reflected in ALL caches, so that stale reads do not occur. Ensuring that all nodes are notified synchronously has performance problems of its own, both from network traffic and from contention between notifiers (publishers) and the caches being notified (subscribers). However, it is still very desirable to have asynchronous expiration across the cluster, so that all caches will be synced in a "timely" manner. Hence the "smart" cache; it is aware of its peers, and keeps in sync.
This restriction means that the cached data must be considered read-only. Because the caches are expired asynchronously, there is a small interval of time when the data is stale. This is fine for data that is only to be read for output; after all, the request for such data could have come a split second earlier. But, if cached data is read, then the application decides (using a cached read which is (slightly) stale) to change the data, then we are violating ACIDity. The goal here is NOT to build an ACID, cluster-wide, in-memory data store; the DB and application server vendors are counted on to provide that kind of functionality.
Use the ACE Cache when
1) Data is "read-mostly"
2) Application server tier is clustered.
3) Data is read by many simultaneous requests
4) Data is not usually changed (at runtime) through other means (e.g. direct SQL by an admin, other kinds of applications)
DataObject: The data object itself
This could be an entity bean or separate value object.
DataObjectKey: A key object, satisfying equals() and hashCode(), to uniquely retrieve the DataObject
This could be an EJB Primary Key, or just any key class
Cache: Used to store the data objects, mapping DataObjectKey to DataObjects. Best performance if a singleton, and must be synchronized appropriately.
This could be backed by a Map, or possibly an application server's entity bean cache (e.g. WL Read-Only beans).
For some stripped down interfaces, see the end of the text. I might expose more implementation code later on, but it uses many of my libraries of utilities, and dragging all of that in here would make this post quite a novel!
The Cache has reference to the DAO (Data Access Object) logic, whether embedded within an EJB or not. If the cache is queried, and the DataObject does not exist, then the DataObject is created (and cached). This means that different instances of the cache (in different processes) will be populated differently (this can be exploited, especially when dealing with user-specific data).
These DataObjects are for reading only, so if they are entity beans, they should be read-only beans. If they are value objects, they need to have a flag set so that they cannot be "saved". What is more, since these DataObject instances are shared between all callers of the Cache, the DataObjects need to be *immutable*, either by only implementing a read-only interface, or by throwing exceptions (usually RuntimeExceptions) when mutating methods are called.
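A minimal sketch of the flag-based approach, with an invented value object whose mutators refuse writes once it is flagged read-only (all names here are illustrative, not from the pattern's actual codebase):

```java
// A shared, cached instance would be flagged read-only before being
// handed out to callers; mutating it afterwards is programmer error.
public class ProductValueObject {
    private String name;
    private boolean readOnly;

    public ProductValueObject(String name) { this.name = name; }

    public void flagReadOnly() { readOnly = true; }
    public boolean isReadOnly() { return readOnly; }

    public String getName() { return name; }

    public void setName(String name) {
        if (readOnly) {
            // RuntimeException subclass: no recovery, this is a coding bug.
            throw new IllegalStateException("value object is read-only");
        }
        this.name = name;
    }
}
```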
The expiration logic is also tied to the Cache object. The Cache subscribes to a JMS Topic (or is referenced by a MessageDrivenBean). Then, when a value object is "saved" or an entity bean's ejbStore() method is called, the Cache is expired for that particular DataObjectKey. The Cache then publishes the DataObjectKey (to all caches but itself). The listeners (onMessage()) then expire that key (and value) from their cache. So, asynchronously, all Caches across the cluster are brought into sync.
If using WL read-only entity beans, the link between the Cache and the expiry logic already exists (see Seppuku). If using Value Objects, one way to integrate the expiration logic is to have each Value Object keep reference to the Cache. Then, when the Value Object is "saved", the (local) Cache is expired, and then the remote Caches are expired asynchronously (and actively).
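Here is a stripped-down sketch of that expiry flow. To keep it self-contained, a plain in-process list of peers stands in for the JMS Topic; in the real pattern the loop body would be a topic publish, and onExpireMessage() would be each subscriber's onMessage(). All names are illustrative:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch only: a static peer list models the Topic's subscriber set.
public class PeerCache {
    private static final List<PeerCache> peers = new ArrayList<>();
    private final Map<Object, Object> cache = new HashMap<>();

    public PeerCache() { peers.add(this); }

    public void put(Object key, Object value) { cache.put(key, value); }
    public boolean contains(Object key) { return cache.containsKey(key); }

    // Called when a value object is "saved" (or ejbStore() fires):
    // keep the local copy fresh, then notify every peer but ourselves.
    public void save(Object key, Object newValue) {
        cache.put(key, newValue);
        for (PeerCache peer : peers) {
            if (peer != this) {
                peer.onExpireMessage(key);  // a topic publish in the JMS version
            }
        }
    }

    // What the JMS listener would do on receiving an expiry notification.
    public void onExpireMessage(Object key) {
        cache.remove(key);
    }
}
```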
1. The DB will be queried much less often by read requests.
2. There will be much less object creation in the application servers.
3. There may be small latencies in read-only data propagating across the cluster.
4. The cache may take up significant heap space in the application server.
Implementation Issues Beyond The Scope of the Pattern (I can comment on these separately):
1. Managing cache size (e.g. LRU scheme, different caches for different DataObjects or not)
2. Trading off granular caching with coarse data retrieval (hard or soft references between value objects?). Some REALLY cool stuff here.
3. Proper synchronization of the Cache and the DAO within
4. Strategies/frameworks for making DataObjects immutable when needed, and for integrating this pattern with data access control (permissions)
* Cache interface, to hide our many implementations:

public interface ICache {
    // Hook to tell the cache what to do if it does not contain the requested item
    public Object miss(Object aKey);

    public void flush();
    public void expire(Object aKey);
    public void hit(Object aKey);
    public Object get(Object aKey);
    public void add(Object aKey, Object aValue);
    public void addAll(Map aMap);
    public boolean contains(Object aKey);
}
* Extends the Cache interface to provide methods for bulk access, hitting, missing, and expiry:

public interface IBulkCache extends ICache {
    public Map getAll(Collection aColl);
    public Map missAll(Collection aColl);
    public void hitAll(Collection aColl);
    public void expireAll(Collection aColl);
}
/** Just a tag interface; implementations might expose ids, perhaps.
 *  equals() and hashCode(), and also compareTo() implementations are important. */
public interface IValueObjectKey {
}
// This one has dependencies on lots of other stuff, so don't try to compile it.
public interface IValueObject {
    public void flagReadOnly();
    public boolean isReadOnly();
    public void flagImmutable();
    public boolean isImmutable();
    public boolean isImmutableCapable();
    public boolean isCacheable();
    public boolean isValueObjectCloneable();
    public boolean isSanitized();
    public void sanitize();
    public IValueObjectKey getValueObjectKey();
    public Object clone();
    public IValueObject cloneDeep();
    public void save();

    // This other part deals with the Value Object graph, important for getting expiry right.
    public boolean isValueObjectReferencesEnabled();

    // Value object references and value object reference lists:
    public Collection getValueObjectReferences()
        throws ValueObjectReferencesNotEnabledException;

    // Flat collection of the value objects:
    public Collection getReferencedValueObjects()
        throws ValueObjectReferencesNotEnabledException;

    // All contained value objects, including this one:
    public Collection getValueObjectsDeep()
        throws ValueObjectReferencesNotEnabledException;

    // All contained value objects, including this one:
    public Map getValueObjectMapDeep()
        throws ValueObjectReferencesNotEnabledException;
}
I read your article. It's informative and very well written; even a fresh guy (new to design patterns) like me was able to understand most of it.
But I am not able to grasp concept behind the terms "value objects" and "read-only entity beans". If you can just give a brief on these concepts, it will be helpful.
Also, I was wondering about the need for so many methods in IValueObject?
I'm glad you found the article worthwhile. Here is just a bit of background on the two terms you mentioned:
Read-Only Entity Beans: Some application servers, like Weblogic, support labeling an EB as read-only, by which is meant that ejbLoad() will only be called once (or only once every specified time interval) and that ejbStore() is never called. Check out the WL 6.1 docs
Value Objects: A J2EE design pattern
where requests are not handed references to EBs, but rather data container objects (which could implement the same business interface as the EB) in order to achieve more efficient data retrieval. Extensions of this pattern allow for these containers to be modified and then "saved", which can map back down to EB calls.
As for the number of methods in IValueObject: This could be whittled down quite a bit to get an implementation up and running. I just decided to include some of the methods I use to keep track of a graph of DataObjects that is retrieved (and maybe modified and saved), and also to govern immutability, cloneability, and cacheability.
Hope this helps,
Thank you very much for the answers.
It will be greatly appreciated if you can give some advice for beginners like me on what it takes to become masters in this field.
I've tried to build the same kind of cluster-wide/aware cache system using exactly the same approach you have described in your pattern. I'm not sure whether this problem is specific to WebLogic 6.1, but the JMS Topic in WL 6.1 is not capable of failing over, hence a single point of failure here.
A JMS topic is hosted on an instance of a JMS server running on one instance of WL on one of the clustered machines. If that server happens to go down, none of the other servers in the cluster are able to publish any updates to the other, still-surviving servers.
I think Lawrence wanted to avoid implementation specifics to make his pattern portable. WebLogic 6.x JMS implementation supports multicast (or you can use JavaGroups instead), so there is no single point of failure.
Anyway, WebLogic 6.1 supports this kind of non-transactional distributed caching already - see readMostlyImproved example at Seppuku
You pose an interesting question. Dimitri is correct, I wanted to try and keep implementation specifics such as application server vendor out of the pattern. However, the problem you mention is still a problem.
One solution is this: you can set up polling threads in each server that check the JMS server for a heartbeat, and then flush the entire cache if the server is thought to be down. Of course, this goes against the J2EE spec, so pick your poison.
One comment on multicast: I don't think multicast JMS and JavaGroups messages are considered to be "guaranteed delivery". These cache notification messages MUST be delivered or else. But maybe these mechanisms are "reliable enough" for it to be a better solution.
One last thing about the JMS server. I ran the code from which this pattern evolved on WL, and I found better JMS performance when running JMS on a standalone server, as opposed to having one EJB server handle the JMS (could be on the same box, just different process).
Great pattern, best thing I've seen here in a while :)
I've used something similar in a non-EJB environment for light-weight web-server clusters running with plain JSP/Servlets and got excellent performance. Anyway, the point is I didn't rely on JMS (too heavy) so I did multicast myself, and I'd like to comment about the level of delivery assurance.
The reason the delivery is not assured is that UDP doesn't guarantee packet delivery. However, if your entire cluster is hosted within the same LAN (usually it's in the same building, and even in the same room) the number of hops is small and delivery is almost completely guaranteed. If you really need assurance you can send a packet two or three times (with a retry counter in it). These packets are so light-weight, I didn't see any performance difference. Jini discovery protocols use this technique, if you want a source code sample.
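To make the retry-counter idea concrete, here is one hypothetical wire format for such an expiry datagram (the encoding is my own invention, not from any product): the sender transmits the same notification two or three times with the attempt counter incremented, and receivers use the message id to drop the duplicates.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative expiry-notification payload for UDP multicast.
public class ExpiryPacket {
    public final long messageId;  // lets receivers discard repeated sends
    public final int attempt;     // which resend this is (0..retries-1)
    public final String key;      // the DataObjectKey to expire

    public ExpiryPacket(long messageId, int attempt, String key) {
        this.messageId = messageId;
        this.attempt = attempt;
        this.key = key;
    }

    // 8 bytes id + 4 bytes attempt + UTF-8 key bytes.
    public byte[] encode() {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(12 + keyBytes.length);
        buf.putLong(messageId).putInt(attempt).put(keyBytes);
        return buf.array();
    }

    public static ExpiryPacket decode(byte[] data) {
        ByteBuffer buf = ByteBuffer.wrap(data);
        long id = buf.getLong();
        int attempt = buf.getInt();
        byte[] keyBytes = new byte[buf.remaining()];
        buf.get(keyBytes);
        return new ExpiryPacket(id, attempt,
                new String(keyBytes, StandardCharsets.UTF_8));
    }
}
```

The DatagramSocket send/receive loop is omitted; the interesting part is only that the duplicate-suppression state lives in the receiver, keyed by messageId.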
Another point worth noting is that in some ways it's better to do the invalidation only when the updating transaction commits. This can be done using a JTA Synchronization interface. In a web-server you don't have a transaction manager and I wrote a simple one myself, but in the app-server scenario it shouldn't be a big problem.
You make some very interesting points.
About multicast delivery, that is a very good point that UDP over the LAN should be almost 100% reliable. I might try that out and see how much more load can be supported before the cache "falls behind" sending expiry notifications, as Gene pointed out above.
We tried the JavaGroup implementation at one point, had some problems with it, and then never came back to it. Might be good to revisit that.
About expiring upon commit, I agree that it is a better policy (otherwise, rolled-back TXs will expire needlessly, although this would have a variable impact depending on the application).
I actually accomplished that behavior using a technique that is implementation-specific.
The application in which I used this pattern used BMP entity beans for all updates, and stand-alone DAO-JDBC for pure reads. I embedded the expiration logic in the EB such that only when ejbStore() was called did the expiration happen. Furthermore, delay-updates-until-end-of-txn was enabled, so that this was only called upon commit.
I thought about doing Bean Managed Transactions and tying in to JTA, but never got around to it. Can you elaborate on what was involved with your implementation?
I am not familiar with JavaGroups. Does it implement JMS on top of UDP multicast? If so, could you please direct me to some information?
About the txn point:
You can't get the notifications you need using JTA interfaces portably with EJB (up to and including 2.0). EJB only gives you access to UserTransaction, while what you actually need is Transaction. Once you get a hold of a Transaction, your cache can register a Synchronization with it to be notified of commit. You can get a Transaction with vendor specific interfaces. I know for sure WebLogic provides one (although I can't remember the exact method... should be easy to find in the docs). As I mentioned in my post, I didn't work in an EJB environment anyway, so getting the Transaction didn't place any portability constraints on my code.
However, if I were to implement the same kind of functionality with EJB, I would probably use a different solution which is more portable. Stateful session beans can receive notifications of transaction events by implementing SessionSynchronization. If you use a facade session bean to wrap your beans, you can make it receive the txn events and pass them on to the cache. The down-side of this is that the facade will have to be stateful.
If you don't want to make the facade stateful, there is one more alternative I can think of. If you know for sure that your facade isn't going to receive a client transaction context (usually the case), you can make it invoke EJBContext.getRollbackOnly at the end of each method. Then it can react appropriately based on the return value. However, here there is a tricky part: if your facade throws a system exception (that is, the exception is thrown somewhere along the thread, not including calls to other EBs) the transaction will be rolled back, but only after your method finishes (i.e., when the container gets the exception). So you have to pay special attention to that, and catch every possible RuntimeException and RemoteException... that can get quite cumbersome, I imagine.
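Sketching the commit-time idea without the javax.ejb wiring (so all names here are stand-ins): invalidations are queued during the transaction and only published when the equivalent of SessionSynchronization.afterCompletion(true) fires, so rolled-back work never expires anything.

```java
import java.util.ArrayList;
import java.util.List;

// Buffers expiry notifications until transaction outcome is known.
public class TxExpiryBuffer {
    public interface Publisher { void publishExpiry(Object key); }

    private final List<Object> pending = new ArrayList<>();
    private final Publisher publisher;

    public TxExpiryBuffer(Publisher publisher) { this.publisher = publisher; }

    // Called as data changes inside the transaction.
    public void queueExpiry(Object key) { pending.add(key); }

    // In the EJB version this would be driven by
    // SessionSynchronization.afterCompletion(boolean committed).
    public void afterCompletion(boolean committed) {
        if (committed) {
            for (Object key : pending) {
                publisher.publishExpiry(key);  // e.g. the JMS topic publish
            }
        }
        pending.clear();  // a rollback discards the queued expirations
    }
}
```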
As a final note, this pattern is very efficient and recommended, but it is not portable as per EJB spec (1.1, 2.0). It isn't portable because of many problems, but the main problem, which I think is unsolvable, is that you can't make different clients use the same entity bean instance without reloading in between. You just can't, and the vendor can't either, because there are no "read-only" transactions in current EJB. I can highlight some specific sections in the spec if that's useful to someone. However, this is one of the few cases where I would go ahead and completely break the spec, because the performance advantages you get with special-purpose read-mostly caches are just too big to let go (IMHO). Also, most vendors will in fact provide these read-only transactions soon (many already do), for exactly the same reason. There is also no "legal" way to hold a singleton, so even if you completely give up entity beans and use DAO and value objects, you still run into portability problems... However, the chance that these problems will actually break something in your code is very small (IMO).
Thanks for the comments about the possible TX pitfalls. Very interesting stuff. You've obviously used this pattern in an application that involved enough rollbacks; my application rarely encounters these, hence my easy but <100% solution ... :)
As a more general comment, I think the complexity of implementing this pattern (as we have shown in this thread) shows that it shouldn't really be a developer-pattern after all, but a vendor-pattern. I wonder how close the rumored transactional cache in WL 7.0 works (how it is implemented and how it performs).
I totally agree with you on the benefits of a read-mostly cache, especially when most of the time these objects are just rendered in a JSP and that's it. We got big, big performance improvements. Of course, as I noted in the pattern, frequency of cache hit and by what kind of request can make this invaluable or just a small bonus, depending.
One comment about the read-only/read-write interface problem. The way I implemented it was to keep the value object read-only interface as the return type of the method. Writing callers can cast if they want to. Of course, this isn't ideal, but I think it is better than implementing more accessors (with different names than the read-only methods) to return the read-write interfaces.
This will lead to more RuntimeExceptions (including ClassCastExceptions due to programmer error), but those kind of exceptions are OK in development I think.
Check out JavaGroups at SourceForge. It is based on IP Multicast. We got it up and running with test cases, but had trouble sending messages with our own classes (NoClassDefFoundError, although you would think that it would just treat the message as a byte payload).
I guess I am responding to all 3 of your posts at once! Too bad TSS doesn't offer a Thread.join() ;) Floyd?
Hi, I've been trying to come up to speed on caching within an EJB app server and I must say it is quite frustrating, even in the case of caching 'mostly-read' data within a single app server.
I hope someone can reconcile these two statements:
From the description of the pattern
"This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface, which either returns the cached data or retrieves (and caches) it from the DB. Changing this data goes through the same interface, invalidating the cached data. "
and from the upcoming book "EJB Design Patterns" (Oct 3rd - EJB Strategies, Tips and Idioms), under the section
"Using Java Singletons is ok - if used correctly"
"There is nothing wrong with using a Singleton class, as long as developers DO NOT use it in read-write fashion, in which case EJB threads calling in may need to be blocked. It is this type of behaviour that the spec is trying to protect against. Using a singleton for read-only behaviour, or any type of service that can allow EJB’s to access it independently of one another is fine."
Since you would occasionally write to the cache to update it that goes against the advice of how to use singletons. Probably in practice updating with the singleton works out just fine, since you are getting such an important functionality and shouldn't worry about sticking to the spec 100%.
Comments/advice are greatly appreciated.
Mark: "Hi, I've been trying to come up to speed on caching within an EJB app server and I must say it is quite frustrating, even in the case of caching 'mostly-read' data within a single app server."
That is the simple case, and it isn't too bad as long as you stick with a good pattern and don't try to get too fancy. An MRU cache extending Hashtable (etc.) isn't too hard to put together in an afternoon. The real question is using it in conjunction with things like EJBs, particularly when there are transactions involved. Pardon me for advertising, but that's why we're adding JTA support into our caching products.
Mark: "I hope someone can reconcile these two statements: ... 'This is easy enough in the case of a single node server (no cluster). The application is designed so that reading this data goes through a singleton cache interface ...' and ... 'There is nothing wrong with using a Singleton class, as long as developers DO NOT use it in read-write fashion, in which case EJB threads calling in may need to be blocked.' ... Since you would occasionally write to the cache to update it that goes against the advice of how to use singletons."
It is not an issue. Some of the warnings and proscriptions in the EJB spec are a bit moribund (?) or minimally anal. IMHO That's because the EJB spec comes from the world of "the container knows best and the developer should be a moron". (Not that IBM had anything to do with it ;-)
If you look at it the positive way, you could say "The container manages the threads and all shared objects for you so you should not have to worry about synchronizing." That's probably a better way to look at it, until you cheat and use a singleton, in which case the limitations in the spec _have to_ go out the window.
Our local hashed caching implementation uses minimal synchronization (no sync required on reads for example) and has notifications and automatic cached entry expiry. I've pasted in the JavaDoc below to give you some ideas. One of the things that I strongly believe in is using existing interfaces when (a) they are accepted and (b) they are applicable. As a result, we use java.util.Map as the basis for all of our caching implementations.
Our clustered caching implementation can use our local hashed caching implementation (or any other java.util.Map) in our upcoming 1.1 release, so the API remains unchanged (java.util.Map), the notifications remain the same, but it works transparently whether local or clustered.
One other thing to look at is the caching JSR from Oracle. IMHO it is hopelessly complex but it's just my opinion and no one seems to agree with me on this one ;-) ... here's the link: http://www.jcp.org/jsr/detail/107.jsp
A generic cache manager.
The implementation is thread safe and uses a combination of Most Recently Used (MRU) and Most Frequently Used (MFU) caching strategies.
The cache is size-limited, which means that once it reaches its maximum size ("high-water mark") it prunes itself (to its "low-water mark"). The cache high- and low-water-marks are measured in terms of "units", and each cached item by default uses one unit. All of the cache constructors, except for the default constructor, require the maximum number of units to be passed in. To change the number of units that each cache entry uses, either set the Units property of the cache entry, or extend the Cache implementation so that the inner Entry class calculates its own unit size. To determine the current, high-water and low-water sizes of the cache, use the cache object's Units, HighUnits and LowUnits properties. The HighUnits and LowUnits properties can be changed, even after the cache is in use. To specify the LowUnits value as a percentage when constructing the cache, use the extended constructor taking the percentage-prune-level.
Each cached entry expires after one hour by default. To alter this behavior, use a constructor that takes the expiry-millis; for example, an expiry-millis value of 10000 will expire entries after 10 seconds. The ExpiryDelay property can also be set once the cache is in use, but it will not affect the expiry of previously cached items.
The cache can optionally be flushed on a periodic basis by setting the FlushDelay property or scheduling a specific flush time by setting the FlushTime property.
Cache hit statistics can be obtained from the CacheHits, CacheMisses and HitProbability read-only properties. The statistics can be reset by invoking resetHitStatistics. The statistics are automatically reset when the cache is cleared (the clear method).
The Cache implements the ObservableMap interface, meaning it provides event notifications to any interested listener for each insert, update and delete, including those that occur when the cache is pruned or entries are automatically expired.
This implementation is designed to support extension through inheritance. When overriding the inner Entry class, the Cache.instantiateEntry factory method must be overridden to instantiate the correct Entry sub-class. To override the one-unit-per-entry default behavior, extend the inner Entry class and override the calculateUnits method.
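As a much simpler illustration of the size-limited idea (not the product described above, just the JDK), LinkedHashMap can be bent into a small LRU cache that prunes its eldest entry once a high-water mark is exceeded:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal size-limited LRU cache: access-ordered, evicts eldest on overflow.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;  // the "high-water mark"

    public LruCache(int maxEntries) {
        super(16, 0.75f, true);  // accessOrder=true: get() refreshes recency
        this.maxEntries = maxEntries;
    }

    // Called by LinkedHashMap after each put; returning true prunes.
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxEntries;
    }
}
```

A combined MRU/MFU strategy like the one described above needs more bookkeeping than this, but the pruning hook is the same shape.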
I mostly agree with Cameron's sentiment regarding these techniques and the spec. I think the point should also be made, however, that this is one reason why this kind of functionality needs to be at a lower level (i.e. container), so as not to violate any spec.
FYI - we released our 1.1 version today with the size-limited cache features.
Lawrence: "Is an LRU or LFU limited cache the best way to go for your usage pattern?"
We went with a combination ... both are generally "good" but if you have to choose one or the other you will occasionally end up with very non-optimal caches. OTOH if you combine both, you end up with very few cases that are "less good", and very few "poor" cases, except when the cache is just too darned small. ;-)
Lawrence: "As far as access control, integrating an access control check with the API is not really that interesting, but I found it to be convenient. The only issue arises if permissions are cached with a data object, which makes it not sharable across many reading clients."
That's a nice idea. Did you base it on a pattern that you found in a core Java class?
No, it was a home-brewed idea. Basically, you supplied a request object to the API, which not only contained the right key(s) for the data, but also your security principal, essentially. It also contained all the access parameters, like read-only/read-write, whether you wanted it to be immutable (and therefore didn't have to clone it), etc. etc.
Internally we'd do the permission checks (we had our own data-driven thing, cool, but very complicated) and then, if possible, return the shared immutable instance. But if you were getting your own instance to mutate, then your associated permissions were stored in that instance, and the framework would stop you from doing things you weren't allowed to do at the business method level, without waiting for a "save".
So we did *all* of our data-related permission checks at this layer. Worked pretty well to keep code clean.
WLS 6.1 does not support failover of a JMS server. The JMS server is a non-clusterable object. The MDBs can be clustered and pinned to the one JMS server in the cluster hosting the topic.
In WLS 6.0 there was a tweak by which you could target the JMS server to the cluster, but it would be bound to only one of the instances, and in the event of that instance failing, another JMS instance came up on one of the WLS servers in the cluster.
So it didn't provide load balancing then either, but at least we could force some failover support.
But now in WLS 6.1 there is neither load balancing nor failover.
The "load balancing" described using JMS connection pools is a misnomer; at best it gives cluster-wide accessibility to JMS through transparent re-routing of the client call to the JMS server.
I hope this explains the problem.
Hi, I have to ask: is it the proper approach to throw an exception when some code tries to modify the read-only data?
I think the approach should simply be to block such code's execution and return an error message. Please comment, as this addresses basic exception handling.
The best way to implement these immutable objects is to not have any public methods that change the state of the object (i.e. mutate or reassign any member variables), or return a reference to any mutable member variable.
But if you have a business object interface that has both accessing and mutating methods, and business logic that deals with this interface, then you are kind of stuck. The only way to get by without rewriting all of the client code is to throw RuntimeExceptions (hopefully some subclass thereof) out of the mutating methods. After all, this is what RuntimeExceptions are for: cases where no recovery at all is possible, because it is *programmer error* (trying to write to a read-only object). Of course, the caveat to all of this is that you only catch these errors at runtime, even if it is development runtime.
If you are starting from scratch or are willing to change some client code, then I recommend the following approach:
Divide the business interface into a read-only interface and a read-write interface (extending the read-only interface). Have your implementation only return objects that implement the interface required.
If the implementation of the object is too tied up to separate, then you can always have a dynamic proxy front the implementation class, exposing either the read-write interface, or the read-only interface, depending on what you want. This way, even if a client programmer tries to downcast to the read-write interface, it won't work.
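A small sketch of that dynamic-proxy approach using java.lang.reflect.Proxy (the account interfaces are invented for illustration). The proxy class implements only the read-only interface, so even a deliberate downcast to the read/write interface fails:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ReadOnlyViews {
    public interface ReadableAccount { int getBalance(); }
    public interface WritableAccount extends ReadableAccount { void setBalance(int b); }

    public static class AccountImpl implements WritableAccount {
        private int balance;
        public int getBalance() { return balance; }
        public void setBalance(int b) { balance = b; }
    }

    // Front the real object with a proxy exposing only ReadableAccount;
    // calls are simply delegated to the underlying implementation.
    public static ReadableAccount readOnlyView(final WritableAccount target) {
        return (ReadableAccount) Proxy.newProxyInstance(
            ReadableAccount.class.getClassLoader(),
            new Class<?>[] { ReadableAccount.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method m, Object[] args)
                        throws Throwable {
                    return m.invoke(target, args);
                }
            });
    }
}
```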
Hope this helps,
Just read another one of Lawrence's post...
I have a note about the strategy of using a read-only interface and extending it into a read/write interface. A problem arises when you need to return another value object from one of the getter methods. In the read-only interface, you should return the read-only interface of the object you are returning. But in the read/write interface, you would like to return the read/write version. This makes perfect sense as far as OO concepts go, because you are actually returning a sub-type of the read interface.
However, current versions of the Java language do not allow this kind of covariant return type. See Bug id 4144488 in Sun's Bug Database (actually this is an RFE, not a bug).
So currently you can either declare that you return the read interface, and then cast it to the read/write one (bad practice, cumbersome) or try some other approaches. I can describe my own approach if anyone is interested. It is partially driven by my own cache design, which I think is somewhat different from Lawrence's.
Heh... Sorry about the fragmentation...
One last note:
"Furthermore, delay-updates-until-end-of-txn was enabled, so that this was only called upon commit."
This is not accurate. The container potentially calls ejbStore() on many different EBs before committing. One of the subsequent ejbStore() calls may fail (for instance, if the DB gives some error, even unexpectedly, like "out of segment space" or "can't serialize transaction"). Such an error will abort the transaction.
Another, perhaps bigger, problem occurs when the invalidation messages reach the target very quickly. With UDP multicast it is nearly real-time, so messages get around almost instantly. If you notify the other caches before you commit, they may refresh their cached copy before you have committed and see old values. If your data is truly read-mostly, the caches may not get another chance to refresh themselves for a long while.
Great pattern, but does it work in practice and production??? :-)
Actually this pattern was deployed on Kiko 4? months ago. If you look at the architectural diagram, page 1:
This cache spans both the web and app clusters (used by both) and drastically reduces hits to the database tier. It's this very pattern that gave Kiko a huge performance boost 4 months ago.
Dimitri, Lawrence and I were well aware of your Seppuku ReadMostly pattern and in fact studied it as a reference. However we wanted a more generic cache that's independent of Weblogic, hence the motivation for the ACE Cache.
Of course, there are some limitations to this cache that can be improved upon. One is partial failover and full failover detection and handling. Another is load handling under extreme volume, which of course is dependent on the messaging provider. But I see a lot of immediate benefits to this pattern, and its annoyances can be worked out in the long run.
Hello there guys,
Sorry to bug you here.
Please help me. I am implementing Ed Roman's Mastering EJB (the first book) online business system on WebLogic 6.1. All his beans work fine (entity CMP, stateful wrappers, stateless ones) through clients (simple Java test classes), but as soon as I start using exactly the same code in servlets deployed on the same WebLogic server, I always get an UnexpectedException when invoking methods on the entity or stateful wrapper beans.
double price = product.getBasePrice(); // works fine
quote.addProduct( product );
double price = product.getBasePrice(); // UnexpectedException: failed to invoke on method
It looks like "product", which is the CMP entity bean's remote interface, gets detached from its bean instance immediately after I pass it to "quote", which is a stateful wrapper bean holding a Vector of those "products" to operate on later.
Please explain why it happens, if you can.
Thanking you in advance,
Very neat pattern.
I have a user object that is frequently accessed and is read/write. I need to use a clustered environment and would like to take advantage of this pattern. Is there any way I can ensure that the user will see consistency of the updates?
The only way to be absolutely sure of a consistent view is to do all the expiration synchronously (the A in ACE cache stands for Asynchronous).
If you can tolerate possibly slightly stale data for *reads* to the client, then ACE is for you. You can use tricks to make sure that requests from the same client go to the same node in the cluster, to minimize the chance that a client sees some stale, some current data.
However, if this is not acceptable, then something more along the lines of Cameron's product (which I've yet to test but hear good things about) is more up your alley. Or else rely on the DB for now.
Indeed a very useful and practical design.
We have implemented the same pattern, but with a slight tweak, for the following reasons:
1) For occasional writes, using JMS for cache invalidation is a heavyweight solution. Some kind of multicast mechanism would be a simpler, more portable, and app-server-independent solution.
2) Since this pattern focuses on achieving an app-server-independent solution to the problem of "read-mostly" data, I think we should reconsider the suitability of JMS for cache expiry.
This is for the following reasons:
1) It is heavyweight.
2) Fault tolerance: even in WebLogic 6.1, JMS is not clusterable. The MDBs and the JMS connections are clusterable, but the JMS server hosting the destinations is not. I tried to explain this particular problem in this same thread some time back. So if that particular WLS instance goes down, the complete caching logic falls apart.
3) Portability of the code: WebSphere 4.0.x does not support JMS, nor does it have MDBs; it has to be integrated with MQSeries or something else. To compensate for the missing MDBs, we had to implement session beans that did the same job, but the deployment had to specify which beans performed this job, etc. So essentially we had two sets of classes, each app-server specific.
4) We have achieved the implementation of this pattern using multicasting for cache expiry. When ejbStore is called for the database object, the Broadcaster sends out a message to all the listeners (the cache value object implements the Listener interface). On receiving the message, the listener, i.e. the cache, invalidates that particular record or object.
5) We wrote this expiry solution from the ground up and did not rely on JavaGroups. Some level of reliability has been added on top of UDP.
6) The solution is portable, lightweight, and fault tolerant.
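A minimal in-process sketch of the broadcaster/listener contract described in point 4. The names (ExpiryListener, ExpiryBroadcaster, NodeCache) are made up, and dispatch here is a plain method call; in the real solution the broadcaster would write a datagram to a UDP multicast group and each node's listener thread would pick it up:

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

// The cache value object implements this to receive expiry notifications.
interface ExpiryListener {
    void onExpire(Object key);
}

// Stand-in for the multicast broadcaster: ejbStore() of the changed
// database object calls broadcastExpiry() with the record's key.
class ExpiryBroadcaster {
    private final List<ExpiryListener> listeners = new CopyOnWriteArrayList<ExpiryListener>();

    void register(ExpiryListener listener) { listeners.add(listener); }

    void broadcastExpiry(Object key) {
        for (ExpiryListener listener : listeners) {
            listener.onExpire(key);   // each cache drops its stale copy
        }
    }
}

// One node's cache: on receiving the message, it invalidates that record.
class NodeCache implements ExpiryListener {
    private final Set<Object> cachedKeys = new HashSet<Object>();

    synchronized void put(Object key) { cachedKeys.add(key); }
    synchronized boolean contains(Object key) { return cachedKeys.contains(key); }
    public synchronized void onExpire(Object key) { cachedKeys.remove(key); }
}
```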
If you have any comments on this, please share them.
Yes, we were aware of the heavyweight nature of JMS and the non-clusterable JMS in WL 6.1. Hence, with a suggestion from Dimitri, we looked at JavaGroups (sourceforge.net), and I wrote a MessageFrameworkAdapter that allows any client (including this SmartCache) to plug-and-play between JMS, JavaGroups and JSDT. Once a vendor gets something right, we simply plug in the new implementation without changing the client code.
BTW, In the last few days I had a chance to play with the latest product by Cameron's company - http://www.tangosol.com/products-clustering.jsp
and it looks very promising. It is a distributed cache implementation with some very cool features. (On the coolness scale, I think Tangosol's code-morphing product is still the coolest, though ;-)).
I have the same question about how to keep a stateful EJB synchronized in a cluster
This is not only interesting for clustered caches.
Hasn't someone submitted a pattern yet for keeping things in sync in a cluster?
I am thinking, for example, of the 'pattern' where you register flags (you give each a name and a value of 'true' or 'false') in a stateful session bean, and other EJBs access this bean (quickly, through a local interface), for example to know whether they have to do something exceptional like refreshing.
First, consider that you might not need this at all. The whole point to a Stateful SB is that there is one instance of it for a given client, and your client stub can always find it (or its backup). So why do you need to manually replicate data within it elsewhere?
But, you could be accessing this stateful EJB within the app-tier, and want to have the optimization of a local call to get this stateful SB.
In this case, you could use the general ACE strategy for Stateful SBs as long as they have a method of sending messages to each other, be it JMS or something more low level like JavaGroups or a custom IP multicast implementation. Note: You can't use local interfaces for this communication, since they must go from node to node in the cluster!
You will also need to have on-the-fly topic registration or message filtering to cut down on noise if you have many sets of SBs communicating.
The discussion as to whether this kind of optimization should be transparently provided by the vendor (optionally or enforced by the spec) is a totally different discussion. Stay tuned for a post on that from me either here, or on BEA's developer site (there is a thread of that nature right now).
Dimitri (or Cameron, if you have been paying attention to this thread), maybe you could share with us a little more detail about how this Tangosol worked in your evaluation.
BTW, is this Cameron's replicated cache, or the distributed cache (by which I took to mean cached objects not being replicated on all nodes, but sent over the wire sometimes)?
Obviously a replicated cache has the best raw performance, since objects are on the heap already. But a distributed cache could be more scalable, since more data can be cached overall with the same memory footprint in each node.
Also about synchronous vs asynchronous caching:
From the brief description, it seems like Tangosol is synchronous, and with transactional support to boot (VERY cool).
Gene and I (and another one of our old coworkers) always talked about the cluster-wide transactional cache being the "Holy Grail" of enterprise application architecture.
But the response of our DB guy was always "well then that would be almost an in-memory DB, like TimesTen" or another such product, with the benefits of direct Java object storage, but without things like indexing and sophisticated querying (just hashing primary keys, basically, which must be Strings in the Tangosol product).
So I am curious as to how far along this path a product like Cameron's goes ... from one end of the spectrum (my ACE Cache pattern) to the other (in-memory DB). And I can't wait to hear about how well the synchronous expiry performs.
Lawrence: "is this Cameron's replicated cache, or the distributed cache (by which I took to mean cached objects not being replicated on all nodes, but sent over the wire sometimes)?"
The Coherence product is a replicated cache. Constellation is the distributed cache, but it will not be available until late Q1/2002. (Distributing a transactional cache is VERY hard. It will tie your mind into knots! ;-)
Lawrence: "Obviously a replicated cache has the best raw performance, since objects are on the heap already. But a distributed cache could be more scalable, since more data can be cached overall with the same memory footprint in each node."
Exactly! With Constellation, we should be able to manage literally terabytes of data without ever hitting the disk. (Our TCMP (Tangosol Cluster Management Protocol) infrastructure can theoretically support thousands of servers in a cluster, although we don't have that much hardware to test with!)
Lawrence: "Also about synchronous vs asynchronous caching: From the brief description, it seems like Tangosol is synchronous, and with transactional support to boot (VERY cool)."
It is actually both. If you do a "dirty" read, it is async. If you do a locked read, it could be synchronous: the issuer for the particular resource must issue the lock, and that could require a sync'd network request.
Lawrence: "Gene and I (and another one of our old coworkers) always talked about the cluster-wide transactional cache being the "Holy Grail" of enterprise application architecture."
You took the words right out of my mouth. To be able to semi-linearly scale up a transactional architecture and provide data integrity and failover to boot is just the coolest thing! That's exactly where we are headed.
Lawrence: "But the response of our DB guy was always "well then that would be almost an in-memory DB, like TimesTen" or another such product, with the benefits of direct Java object storage, but without things like indexing and sophisticated querying (just hashing primary keys, basically, which must be Strings in the Tangosol product)."
Exactly. In Constellation, the cached objects are XML. XML is already supported in Coherence ... the doc states that objects must be Serializable, but we also support our own XmlSerializable and XmlElement interfaces (see our online doc) which we can expose as DOM objects. (We think that XML is a much better way to go for object state / persistence than JDBC, and once you have a good XML schema, it's relatively obvious how to put a JDBC access layer on top of it.)
Coherence doesn't support a cache limit and automatic expiry (yet?), but if you send me your email address I will send you the source for our in-process (non-clustered) cache that has these features (doc'd online at http://www.tangosol.com/downloads/javadoc/com/tangosol/util/Cache.html).
The Coherence 1.0 download (requires registration), overview and FAQ pages are at:
BTW The latest Coherence build (build 22) is fully self-configuring (but still fully manually configurable using an XML config file) and has a built-in command line test so you can actually see it working without integrating it into your app or app server.
I liked your article very much; it was very helpful! I am new to JMS and like its concept. So if I am trying to cache, say, someone's mailbox or a bunch of messages from a mail server, and want to avoid accessing the server each time to retrieve mail, what would a good approach be for designing a cache based on your caching model? Should I construct an MDB and cache a message each time there is an update on the mailbox? Messages that do not change can expire from the cache after a certain period; only those that change should be updated in the cache, and new ones should be added. What would you suggest? I want to implement the pattern you are suggesting. Or is there already an implementation I can get? Thanks
I would love some feedback on the following.
It is more closely linked to the Seppuku pattern than to ACE, but Seppuku did not get its own (highly merited) column in the patterns section...
- It should work in any container that supports read-only entity beans (and require no more than that).
- I'm a firm believer in Tyler's "true power of entity beans": read-mostly entities are a fact of life, and JDBC sucks.
- (!!) Synchronized invalidation of the read-only beans when a read-write bean changes the data.
In other words,
- I don't have a cluster of servers to tell to refresh its state.
- I would prefer talking to the read-only bean after the transaction of the update is finished so that I am sure the db has been touched and that my read-only bean will see the latest data.
- I don't want to use JMS or any messaging/asynchronous feature; I want to make a direct/synchronous call. I do not want the situation where a client updates the data (in one Tx), then reads the data (with no Tx or in another Tx), and still sees the old value because JMS/... was not quick enough.
Proposal (Review of Gal's proposal)
In every setter method of the entity bean (just after the line "dirty = true;" ;-) ), I create/call a stateful session bean and pass it the home of the read-only entity bean and the pk.
This stateful session bean implements javax.ejb.SessionSynchronization.
In its afterCompletion() method, it calls the read-only bean's findByPrimaryKey() and invalidate() methods. It also throws a RuntimeException to seppuku itself.
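To illustrate the ordering this proposal relies on, here is a self-contained model of the afterCompletion() hook. The SessionSynchronization interface below mirrors only the relevant method of javax.ejb.SessionSynchronization, and InvalidatorBean is a made-up stand-in for the stateful session bean; the list of invalidated keys stands in for the fbpk()/invalidate() calls on the read-only bean:

```java
import java.util.ArrayList;
import java.util.List;

// Mirrors the relevant callback of javax.ejb.SessionSynchronization.
interface SessionSynchronization {
    void afterCompletion(boolean committed);
}

class InvalidatorBean implements SessionSynchronization {
    private final List<Object> pendingKeys = new ArrayList<Object>();
    private final List<Object> invalidated;   // stand-in for invalidate() on the read-only bean

    InvalidatorBean(List<Object> invalidated) { this.invalidated = invalidated; }

    // Called from the entity bean's setter (right after "dirty = true;").
    void mark(Object primaryKey) { pendingKeys.add(primaryKey); }

    // The container calls this after the transaction finishes, so the DB has
    // already been touched; only a commit should expire the read-only beans,
    // since a rollback changed nothing.
    public void afterCompletion(boolean committed) {
        if (committed) {
            invalidated.addAll(pendingKeys);
        }
        pendingKeys.clear();
    }
}
```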
If you access data via a local data manager that goes either directly to the database or to an app server, then you can implement a local/remote caching system. JCS, in jakarta-turbine-stratum, is a flexible distributed caching system that is useful in this pattern.
I really appreciate your pattern and agree that it should be provided by app server vendors. But in the mean time we mortals need to address the issue. A couple of questions for you:
1. You recommend that the cache should be "synchronized appropriately". Could you share the implementation? I'd think the ideal would be to allow simultaneous access for objects with different keys, and use the double-checked locking idiom for clients attempting to read the same object. We may also need to synchronize on the Cache singleton itself when inserting new keys. What do you think?
2. You've mentioned a few issues that are beyond the scope of the pattern. I am specifically interested in the topics "different caches for different DataObjects or not" and "integrating this pattern with data access control (permissions)".
I'd appreciate any details.
In response to your interesting questions:
1. Synchronization of the cache is a tricky issue. What I ended up doing was synchronizing on the key (or actually the key's singleton "lock" object) regardless of whether doing a read or write. Of course, if a cached read was the result, the lock was held for a very short period of time.
Now, the reason I did this (lock on read as well as write) was API driven. I wanted this to be a transparent cache, where clients that needed data simply tried to read from the cache, and the cache would either have it and return it, or get it, cache it, and return it. Otherwise, the client has to worry about checking to see if the cache has it, and if it does not, then getting it, and populating the cache, all hoping not to race against another client thread. I judged the overhead of obtaining a monitor and additional contention to be worth this simplification.
What are you referring to by the double-check synchronization idiom? If you mean lazily synchronizing by prechecking a condition to see if a synchronized operation needs to be done, be aware that this approach has problems when implemented in Java. If you read this article, or many other similar ones out there (search Google for "double-checked locking"), it might become clearer why I always had to synchronize.
You are correct in that the only way to allow for new keys is to briefly synchronize the entire cache, since the keys must have matching singleton lock objects.
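For the curious, here is a bare-bones sketch of the per-key locking scheme described above. The Loader callback and class names are invented; real code would read through to the DB, and would also need a policy for evicting unused lock objects:

```java
import java.util.HashMap;
import java.util.Map;

// Read-through cache with one singleton lock object per key. Both reads and
// writes synchronize on the key's lock; the lock map itself is synchronized
// only briefly, when a new key's lock is created.
class ReadThroughCache {
    interface Loader {                 // hypothetical fetch-from-DB callback
        Object load(Object key);
    }

    private final Map<Object, Object> data = new HashMap<Object, Object>();
    private final Map<Object, Object> locks = new HashMap<Object, Object>();
    private final Loader loader;

    ReadThroughCache(Loader loader) { this.loader = loader; }

    private Object lockFor(Object key) {
        synchronized (locks) {         // brief global lock, only to mint new key locks
            Object lock = locks.get(key);
            if (lock == null) {
                lock = new Object();
                locks.put(key, lock);
            }
            return lock;
        }
    }

    Object get(Object key) {
        synchronized (lockFor(key)) {  // always lock, even on a hit
            Object value;
            synchronized (data) { value = data.get(key); }
            if (value == null) {
                value = loader.load(key);          // miss: read through to the source
                synchronized (data) { data.put(key, value); }
            }
            return value;
        }
    }

    void expire(Object key) {
        synchronized (lockFor(key)) {
            synchronized (data) { data.remove(key); }
        }
    }
}
```

This gives the transparent API described above: clients just call get(), and the cache either returns the cached value or loads and caches it, without racing other client threads on the same key.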
I'd be interested to hear others' ideas on how to synchronize this kind of read-through cache.
2. I don't want to dive into deep discussions about these other issues here, but in brief, this is all about caching strategy. Is an LRU or LFU limited cache the best way to go for your usage pattern? Is usage/caching priority similar across all your data objects, or different?
As far as access control, integrating an access control check with the API is not really that interesting, but I found it to be convenient. The only issue arises if permissions are cached with a data object, which makes it not sharable across many reading clients.
Hope this helps.
First of all, I appreciate your wonderful work for the developer community. I find the patterns a bit hard to follow because I'm new to EJB. I'm now working on a project in which we have to develop stateful session beans that query the database. The database has 40 million records. The user gives his search criteria, and the session bean has to pull the data from the database and show it to the user (in his browser) page-wise. The user may then click NEXT/PREV or page numbers (like Google) to view a particular page. Now my question is: how do we make the data available to the bean immediately? If the bean accesses the database for each and every request (i.e. when the user clicks NEXT/PREV), it will be time consuming. Instead, is there any method by which we can place all the selected records in memory so the bean can look for the data in memory (not in the database)? It's like a simple form of caching. How can this be achieved? (NB: if EJB supported threads, we could do it by pulling data in a separate process, but EJB does not recommend using threads.)
Expecting valuable hints from you,
Khalil Ahamed Munavary (khahmed at apis dot dhl dot com)
Well, you definitely wouldn't want to put all the records in memory. But as you say, you wouldn't want to fetch one record at a time either, as that would hurt performance.
I suggest a Page By Page Iterator design (see Sun's J2EE Design Patterns), where you retrieve the right number of rows at a time (probably using a JDBC query).
Remember that caching is only useful if many (say, more than 10) requests can use the cached data before it needs to be refreshed. So only if many clients are going to browse these records would you want to even try to cache the record sets (a "page" in the Page-by-Page Iterator) in your application.
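As a rough sketch of the page-at-a-time idea: compute the window for the requested page and push it into the query, so each NEXT/PREV click fetches only one page of rows. The LIMIT/OFFSET syntax is vendor-specific (Oracle uses ROWNUM, for instance), and the table and column names below are made up; the resulting SQL would be run through an ordinary JDBC Statement.

```java
// Builds the SQL for one "page" of a result set, page-by-page-iterator style.
class PageQuery {
    // pageNumber is 1-based; pageSize is the number of rows per page.
    static String pageSql(String baseQuery, int pageSize, int pageNumber) {
        int offset = pageSize * (pageNumber - 1);   // rows to skip before this page
        return baseQuery + " LIMIT " + pageSize + " OFFSET " + offset;
    }
}
```

For example, page 3 with 20 rows per page skips the first 40 rows and fetches the next 20, so only one page ever sits in memory per request.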
This pattern ties in very well with Sun's recent Java Data Objects specification. The JDO specification allows for transparent persistence in a backend-neutral manner. See Sun's JDO page
for more info about JDO.
By using JDO and either a session bean or entity bean facade pattern, you can structure your application such that all database reads bypass the application server container altogether without dealing with JDBC. (I'm sure the [insert appserver vendor name here] folks will love this concept...) This brings you the alleged transparency of using CMP entity beans without the cost or the bulky requirements of the specification.
Further, if your JDO vendor is clever (see SolarMetric's Kodo JDO
for example), then you might be able to get high-performance caching to boot. Kodo JDO has a distributed cache that was designed to make exactly this type of pattern happen behind-the-scenes. In fact, Kodo JDO's implementation allows you to even bypass the app server for some writes, so you can minimize your app server cluster to just enough machines to run your system-critical code in robust fashion, and use cheap non-container machines for most of your scalability needs.
Sounds pretty nice, but please check on the FAQ URL (http://faq.solarmetric.com:8080/) ... I couldn't access it.
Thanks for your post, Patrick. A couple comments:
ACE is a pattern, not an implementation. As such, it can be tied to any persistence mechanism, be it JDO or EJB entity beans, or straight JDBC for that matter.
Of course, one of my many points is that this pattern should be provided by vendors, "under the hood". I think we can all see that JDO can take advantage of this pattern just as much as an EJB container.
Your idea of bypassing the container for writes to save $ is an interesting one; I can't say that I've heard it before. But of course, we *are* paying for the JDO implementation, right?
I have implemented a similar cache framework, using JavaGroups for broadcasting. I came across this pattern only today; had I been here a few weeks earlier, I could have saved a considerable amount of time.
Please comment on this,
My cache consists of read only and read-write objects.
1. In the case of read-write objects, the objects have to be expired after a time interval. I do this by setting the creation time in the CacheKey [IValueObjectKey]. A cleaner thread polls the keys periodically and checks whether each object has lived longer than the expiry time; if so, it removes it. Is there a better way of doing this?
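For reference, the timestamp-plus-sweeper scheme in point 1 might look like the sketch below (names invented). The sweep takes the current time as a parameter so it can be driven by a cleaner thread, a Timer, or a test; a lazier alternative is to check the timestamp on each get() and drop the polling thread altogether.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

// Each entry records its creation time; a periodic sweep removes entries
// that have lived longer than the TTL.
class ExpiringCache {
    private static class Entry {
        final Object value;
        final long createdAt;
        Entry(Object value, long createdAt) { this.value = value; this.createdAt = createdAt; }
    }

    private final Map<Object, Entry> entries = new HashMap<Object, Entry>();
    private final long ttlMillis;

    ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    synchronized void put(Object key, Object value, long now) {
        entries.put(key, new Entry(value, now));
    }

    synchronized Object get(Object key) {
        Entry e = entries.get(key);
        return e == null ? null : e.value;
    }

    // The cleaner thread calls this periodically with the current time.
    synchronized void sweep(long now) {
        for (Iterator<Map.Entry<Object, Entry>> it = entries.entrySet().iterator(); it.hasNext(); ) {
            if (now - it.next().getValue().createdAt > ttlMillis) {
                it.remove();   // entry outlived its TTL
            }
        }
    }
}
```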
2. In the case of read-only objects, the objects can be changed only by the administrator. These objects store configuration details and are referenced by almost all the components [EJBs] in the server. Now, the actual problem: for each session, which spawns multiple requests, I need the read-only objects to be consistent. That is, even if the read-only objects are changed by the administrator, the reference held by the session in the first request should give me the same details in the 2nd or 3rd request, until the session expires.
My system flow is like this,
1. The user enters his username and password and submits the jsp [request 1].
2. My authentication components accept the request, store any session info by creating a read-write object, and get the Config [read-only object] from the cache.
3. Component does the necessary processing, and assuming that the authentication fails, prompts the user to retry entering the username and password.
4. The user enters the username and password again and submits the JSP [request 2].
5. The authentication components accept the request, retrieve any session info from the read-write objects, and get the config again from the cache.
6. Does the necessary processing.
Now what if the administrator changes the config between requests 1 and 2? The config got from the cache in step 2 and step 5 is different.
One solution is for each session to create a new copy of the config and store it in the read-write object along with the session info. Then in step 5 the component gets the config from the read-write object rather than the cache. But by making multiple copies I'll be wasting memory. Is there a better way to do this?
In your pattern, what are hit(Object key) and miss(Object key) used for? Also, can you please elaborate on the value object graph methods in your IValueObject interface?
Following on from this discussion, has anyone ever considered the applicability of JavaSpaces? I have a similar problem in replicating data across a clustered environment, but the objects I deal with need to be read/write.
I have so far only thought about using JavaSpaces for this, but I have applied them to a similar problem: implementing a 'blackboard' architecture for parallel processing across a shared set of objects. It works really neatly!
I will be looking into JavaSpaces and developing a proof of concept, so I will keep you informed of the findings. But in the meantime, any ideas, comments, suggestions...?
"I have a similar problem in replicating data across a clustered environment but the objects I deal with need to be read/write."
If you get a few extra cycles, could you do a quick comparison (performance and ease of implementation) between JavaSpaces and Coherence? I haven't done much work with JavaSpaces yet, but it looks promising. Drop me an email (cpurdy at tangosol dot com) and I can provide a full development license, etc.
"3. There may be small latencies in read-only data propagating across the cluster. "
Is there any way to compensate for the above deficiency using some sort of feedback loop?
"Is there any way to compensate for the above deficiency [latencies in read-only data propagating] using some sort of feedback loop?"
Yes, by using optimistic locking (aka the "version number pattern"): keep a counter that distinguishes each written state of your object, and verify the counters of all relevant objects using direct database access right before committing. I think BEA 7 does that now, so the vendors have started to relieve us of some of this systems programming.
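The version-number idea can be sketched in a few lines (class and method names invented). A real implementation would do the check inside the committing transaction, e.g. an `UPDATE ... WHERE version = ?` that reports zero rows updated when the read was stale:

```java
// Optimistic locking in miniature: every successful write bumps a counter,
// and a commit is rejected if the counter has moved since the data was read.
class Versioned {
    private Object value;
    private long version;   // distinguishes each written state

    synchronized long readVersion() { return version; }
    synchronized Object read() { return value; }

    // Returns false (and changes nothing) if another writer got in between.
    synchronized boolean commit(long expectedVersion, Object newValue) {
        if (version != expectedVersion) {
            return false;   // stale read detected; caller must re-read and retry
        }
        value = newValue;
        version++;
        return true;
    }
}
```

A reader that acted on stale cached data thus fails at commit time and can re-read the current state and retry, which compensates for the propagation latency.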
This is a very interesting pattern. We need to build a caching mechanism that would contain mutable objects, which is how I hit upon this article.
I have read your article and would like to try it out. The problem I am facing is that I am not able to understand the functionality of some of the APIs mentioned in it, e.g. miss(), hitAll(), missAll() and a few more in the IValueObject interface. Would it be possible for you to share the cache framework in more depth?
Can you also direct me to some more reading material on cache design, LRU algorithms, etc.?
Thanks in advance