What is the correct way to deal with the following problem:
Let's say we have 2 clients, C1 and C2.
C1 finds entity bean B
C2 removes B
C1 calls B.foo() and gets NoSuchEntityException
Transactions can help, but we're trying not to use them for read-only methods, to avoid deadlocks and increase throughput. Catching NoSuchEntityException doesn't look like a good idea either, especially since it leads to a transaction rollback. Consider that C1 runs a method with the 'Supports' attribute: the method can be called with or without a transaction. In the latter case we can recover; in the former we only waste resources. Is there any standard solution?
I think the correct answer depends on your specifics. Is the disappearing bean required for the operation? If so, you might want to wrap the NoSuchEntityException and throw something more meaningful. If it isn't, then you'll need to isolate the operation on the second bean in its own transaction context when a transaction is in progress.
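Here's a minimal sketch of the wrap-and-rethrow idea. Since this can't run outside a container, the EJB exception is stood in for by a local class, and the order/method names are made up for illustration:

```java
// Stand-in for javax.ejb.NoSuchEntityException (not available outside a container).
class NoSuchEntityException extends RuntimeException {}

// A meaningful application exception the caller can actually act on.
class OrderNotFoundException extends Exception {
    OrderNotFoundException(String id, Throwable cause) {
        super("Order " + id + " no longer exists", cause);
    }
}

public class WrapDemo {
    // Simulated entity access that fails the way the container would
    // when the entity was removed by another client.
    static String loadOrder(String id) {
        throw new NoSuchEntityException();
    }

    // The session bean translates the low-level failure into a
    // domain-level one, preserving the original as the cause.
    static String describeOrder(String id) throws OrderNotFoundException {
        try {
            return loadOrder(id);
        } catch (NoSuchEntityException e) {
            throw new OrderNotFoundException(id, e);
        }
    }

    public static void main(String[] args) {
        try {
            describeOrder("42");
        } catch (OrderNotFoundException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point is that the client of the session bean never sees a container-level exception, only one that says what actually went wrong in business terms.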
In this particular case I can catch this exception and continue processing. But the pattern
-find the bean
-call its methods
is very common, and carefully catching NoSuchEntityException everywhere is not an option. I understand that dozens of our session beans performing read-only operations can crash at any moment. This is frustrating...
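Since the find-then-call pattern recurs everywhere, one option (a sketch only, with hypothetical names) is to route it through a single helper so the vanished-entity case is handled in one place instead of in dozens of session beans:

```java
import java.util.concurrent.Callable;

// Hypothetical exception representing "the entity vanished between find and call".
class EntityVanishedException extends RuntimeException {}

public class FindCallHelper {
    // Re-runs the whole find-and-call operation when the entity disappeared
    // underneath it, up to a fixed number of attempts.
    static <T> T callWithRetry(Callable<T> op, int attempts) throws Exception {
        if (attempts <= 0) throw new IllegalArgumentException("attempts must be > 0");
        EntityVanishedException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return op.call();              // performs find + method call
            } catch (EntityVanishedException e) {
                last = e;                      // removed concurrently: try again
            }
        }
        throw last;                            // still gone after N attempts
    }
}
```

Whether a retry makes sense depends on the use case; for a hard-deleted entity the retry will simply fail again, so this mainly helps when the find can legitimately return a different, still-existing entity.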
In the hope that I understand your question the right way, here is my view of the situation (see also section 10.5.10 of the EJB 2.0 spec): the container has the option to use transactions or some optimistic strategy for your problem. There is no 'standard' solution that I know of (if there is, I'd be glad to hear about it).
If you choose not to use transactions, then you have to resort to optimistic locking of some sort (otherwise you lose correctness altogether, which is probably not what you want). However, research has shown that this leads to many rejected requests for hot-spot data. Moreover, you lose fairness: there is no guarantee that whoever comes first is also served first.
The pessimistic approach (i.e., transactions and locks) is usually better for such hot-spot cases: clients wait for locks, but at least they aren't rejected every time, and there is some fairness (first come, first served). If you use Oracle underneath, it may even use past images to give the foo() method its read-only data, without actually blocking the remove at all.
So: even if you don't do updates, transactions can be useful, if only to ensure correctness of reads in the presence of concurrent deletes ;-) On the other hand, optimistic locking is usually better if you have low concurrency and hence a low probability of conflict in the first place, at the price of frequent failures under higher concurrency...
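To make the optimistic trade-off concrete, here is a tiny in-memory model of version-based optimistic locking (the usual "version column" scheme; the class and field names are invented). A writer succeeds only if nobody changed the row since it was read, so under contention some requests get rejected, exactly as described above:

```java
// Minimal in-memory model of optimistic locking with a version counter.
public class OptimisticRow {
    private int version = 0;
    private String data = "initial";

    // A client reads the row and remembers the version it saw.
    synchronized int readVersion() { return version; }
    synchronized String value()    { return data; }

    // The update is rejected (returns false) when the version has moved on,
    // i.e. some other client committed a change in the meantime.
    synchronized boolean update(int expectedVersion, String newData) {
        if (version != expectedVersion) return false;
        version++;
        data = newData;
        return true;
    }
}
```

With two clients reading the same version, the first update wins and the second is rejected; the loser must re-read and retry, which is cheap at low concurrency and wasteful on hot-spot rows.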
Guy Pardon, JTA transactions for J2SE and J2EE
Thanks, Guy, your answer really helps. I was hoping that I had missed something and that this common situation had some solution more lightweight than transactions.
We're using JBoss & MySQL, and the JBoss documentation explicitly says to keep transactions as short-lived and fine-grained as possible. We ran into deadlock problems when we tried to start a transaction on every action from the high-level facades, and had to change our view of their applicability.
It is interesting that JBoss can mark beans/methods as read-only, but that only excludes the bean from transactions. So if I mark B as read-only, and C1 and C2 use transactions to ensure isolation, will I still get an exception because B won't be enlisted? :) (I guess it is more subtle than that...)
I'll try optimistic locking (instance per transaction), since B is a read-only bean and there won't be any collisions. And the general conclusion is that we're not in a fairy tale and there are no ideal solutions ;)
1. Increasing the isolation level should give some control, as would using "select for update" to acquire row-level locks.
2. A non-J2EE way to solve this is to lock objects, either via the database or via a lock table. This would stop clients from removing data that has already been found.
3. Instead of hard-deleting data, use a soft delete or archive so that the records do not disappear; they remain available for reads but can no longer be edited. I do this a lot when I need history, i.e. I change the state from, say, "active" to "archived". Note that archived data is generally hidden by queries that filter it out.
Hope some of this helps.
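A minimal sketch of option 3 (soft delete), with an in-memory map standing in for the table and invented field names: rows carry a state instead of being removed, the default lookup filters archived rows out, and archived rows become read-only.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class SoftDeleteStore {
    private final Map<String, String> state = new HashMap<>(); // id -> "active" | "archived"
    private final Map<String, String> data  = new HashMap<>();

    void insert(String id, String value) {
        data.put(id, value);
        state.put(id, "active");
    }

    // "Soft delete": the record stays readable but is no longer editable.
    void archive(String id) { state.put(id, "archived"); }

    // Normal queries hide archived rows.
    Optional<String> findActive(String id) {
        return "active".equals(state.get(id))
                ? Optional.of(data.get(id))
                : Optional.empty();
    }

    // History/audit view still sees everything.
    Optional<String> findIncludingArchived(String id) {
        return Optional.ofNullable(data.get(id));
    }

    // Archived rows are read-only: updates are refused.
    boolean update(String id, String value) {
        if (!"active".equals(state.get(id))) return false;
        data.put(id, value);
        return true;
    }
}
```

In SQL terms this is just a status column plus a `WHERE status = 'active'` predicate on the normal queries; whether it helps here depends on the EJB-level behavior discussed below.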
I can use row-level locks/table locks, but then I will end up with RemoveExceptions instead of NoSuchEntityException, and thus have to invent some remove-fail-retry logic.
A soft delete won't work at all, because after you call B.remove() the container *must* throw NoSuchEntityException on every attempt to use any method of B, as per the EJB spec, despite the fact that the entity record is still in the DB. The B reference will be invalidated.
Is it possible for you to modify the entity bean code? Then you could catch the NoSuchEntityException and do something more sensible.
Erm... I don't understand. Yes, I can certainly modify the entity bean code, but it is the EJB container that throws NoSuchEntityException. Catching it in the client (session) bean is one of the options I described in my original post.
It is the *bean* that throws the exception; the container only handles it. But if you are using CMP then the deployment tools generate the bean code, so you are right, it is of no use.
So, one way would be to use BMP instead of CMP. Only your bean methods then run in separate transactions...
- Only your bean methods then run in separate transactions... -
That is of course not true; entity beans always use CMT.
Sure, you are right: a bean must throw NoSuchEntityException if, say, ejbStore fails (and we're using BMP). I mixed it up with NoSuchObjectException, which is thrown by the container (section 6.7.1). And in the case of manual control I'd have to catch that one as well.