My project has 80 session beans, each with a corresponding remote interface defined for access. We are planning to have a common class do the lookup for all the individual modules. I am stuck between two designs:
1. Get the references of all the remote interfaces in one go (when the common class is instantiated) and store them in a Hashtable. When an individual module needs a reference, I can just take it from the Hashtable and return it.
2. Look up the Home interface each time, get the remote interface, and return it to the calling function.
Which approach will be better? Would it be a problem if two modules use the same reference at the same instant of time?
I would recommend an approach similar to your first option: have one container class that holds the references for all your beans, which the other classes can access. This design will save a lot of resources, i.e. a trip to the JNDI server each time will be avoided. As for two modules using the same session bean, don't worry about it; that is not your job. There is no concurrency issue with session beans; each module will get a different instance of your session bean.
Reply back for more clarification.
But just imagine this situation. I have one remote interface reference which is shared by two objects.
obj1 --- <common remote ref>
obj2 --- <common remote ref>
But the objects are working in parallel. Suppose, when the object 'obj1' calls the server, it will be assigned a session bean instance (at least until the method call finishes, in the stateless case).
<common remote ref> ---> session bean 1
Suppose this thread is pre-empted and the thread of the second object starts, which again calls a remote method using the same reference. Now, will a new session bean instance be bound to this remote interface reference, or will it call the method on session bean 1?
If a new session bean instance is created, then what about the method being called by 'obj1' on the first session bean instance?
Storing one reference for each of the remote interfaces is not a good design. It will nullify the pooling of session beans provided by the EJB container, and there is also a probability of two different clients using the same session bean, which should be avoided since session beans are not thread safe. You can store the references for each of the home objects instead, which would save the JNDI lookup time, and your client can use the home object to retrieve a remote interface whenever it requires one.
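A minimal sketch of that kind of home-interface cache might look like this, with the JNDI lookup abstracted behind a tiny interface so the caching logic stands alone. All class and method names here are hypothetical (not from any poster's code); in a real client the Lookup implementation would wrap InitialContext.lookup plus PortableRemoteObject.narrow.

```java
import java.util.Hashtable;
import java.util.Map;

// Sketch of a "service locator" that caches home interfaces so the
// JNDI server is hit only once per bean, not once per call.
public class HomeCache {
    // Stands in for the real JNDI lookup (hypothetical interface).
    public interface Lookup {
        Object lookup(String jndiName);
    }

    private final Map homes = new Hashtable(); // Hashtable is synchronized
    private final Lookup lookup;

    public HomeCache(Lookup lookup) {
        this.lookup = lookup;
    }

    // Return the cached home, performing the lookup on first use only.
    public synchronized Object getHome(String jndiName) {
        Object home = homes.get(jndiName);
        if (home == null) {
            home = lookup.lookup(jndiName);
            homes.put(jndiName, home);
        }
        return home;
    }
}
```

The callers then ask the cached home for a fresh remote interface whenever they need one, which preserves the container's pooling.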
If the session bean is stateless, option 1 is OK.
Otherwise, use the approach posted above.
I support the second approach, as severe pooling problems can be expected with the first design.
What happens if you are using a clustered environment and one of the servers crashes, so the bean references become invalid? How do you handle failover?
Client proxy will handle it transparently.
If a server fails, the client proxy MIGHT handle it transparently. For instance, WebLogic 5.1.0:
Entity Bean
If the server dies mid method call or between method calls, you will get an exception. You can re-look up the bean using the home interface (which is cluster aware) and you will get a new instance, started on a new server for you. (Any mid-way transaction on the server that died is rolled back.) WebLogic keeps one instance of an entity bean for each primary key in the cluster and uses server affinity as long as that server is alive.
Stateful Session Bean
For the same situation as above, you will have to re-look up against the home interface. Note that any state information stored in the bean on the other server will be lost. These beans are server-affine. Note, this has changed in WebLogic 6.0, I believe.
Stateless Session Bean
1) If the bean is idempotent (Probably spelt that wrong, but never mind!)
This means that calling the method twice will NOT lead to multiple updates to some data repository where only a single update should occur. Under this situation the client proxy will transparently fail over, even MID method call.
2) If the bean is not marked as idempotent, then it will fail over transparently between method calls, but will fail mid method call with an exception.
I wouldn't cache the remote interfaces in the client. Cache the home interfaces instead, using either option above (or a combination of the two.) You can quite happily have two threads call a create() method on the home interface at the same time, they'll get different bean instances back.
Your client app can store the remote interface references for as long as it likes, but you should keep them with the thread that is using them.
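To make the point concrete, here is a small self-contained sketch, with plain Java objects standing in for the EJB home and remote stubs (all names hypothetical): two threads calling create() on the one shared home each get their own distinct instance back.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Demonstrates why sharing the *home* is safe: each create() call
// hands back a distinct "remote" instance, so parallel callers
// never step on each other's bean.
public class CreatePerCallerDemo {
    // Stands in for a cached EJBHome (hypothetical).
    public static class FakeHome {
        private final AtomicInteger ids = new AtomicInteger();
        public FakeRemote create() {          // like OrderHome.create()
            return new FakeRemote(ids.incrementAndGet());
        }
    }

    // Stands in for the EJBObject stub (hypothetical).
    public static class FakeRemote {
        public final int id;
        public FakeRemote(int id) { this.id = id; }
    }

    public static void main(String[] args) throws Exception {
        final FakeHome home = new FakeHome();     // one shared home
        final FakeRemote[] got = new FakeRemote[2];
        Thread t1 = new Thread() { public void run() { got[0] = home.create(); } };
        Thread t2 = new Thread() { public void run() { got[1] = home.create(); } };
        t1.start(); t2.start();
        t1.join(); t2.join();
        // Each thread holds its own reference; neither is shared.
        System.out.println(got[0] != got[1]);
    }
}
```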
Update is not idempotent..right?
Updates are most often not idempotent, this is true. But if your update calls some form of stored procedure that detects duplicates and handles them inside the database, then you are OK marking the bean idempotent.
It's a marker to the server more than anything else. It will happily let you mark any stateless session bean as idempotent. It's down to you to decide whether the criteria apply, and it's down to you to work out how to make the criteria apply; the server couldn't really care less.
Stored procedures are "stateless", so how can one "detect duplicates"? I mean, how can it remember whether a call is a "retry" or an unrelated new call? Can you give an example? Thanks.
A stored procedure something like this... (I may have the syntax wrong, it's been a while. This is for Sybase.)
create proc insertRow @key int, @value varchar(255) as
-- See if the row exists, holding the shared lock so that we don't
-- let anything else get in. (Note: the transaction isolation
-- level must be correctly set when running this proc.)
if not exists (select 1 from table holdlock where key = @key)
    insert table (key, value) values (@key, @value)
You could also add some error handling to this proc if you want to run at a more concurrent isolation level: catch the duplicate key error code and simply choose to ignore it. I definitely can't remember the syntax for that, though! :-)
The downside here is that you can't make any distinction between genuine errors (Duplicate should not have happened) and errors you don't care about. Unless that is you use a different proc under different circumstances, but some people don't like that.
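The same "catch the duplicate and ignore it" idea can also live on the Java client side of the call. A hedged sketch, with the actual JDBC/stored-procedure call hidden behind a hypothetical interface: the error code 2601 is assumed here to be Sybase's duplicate-key error for a unique index (check your server's documentation), and all names are made up for illustration.

```java
import java.sql.SQLException;

// Makes an insert idempotent from the caller's side: a duplicate-key
// failure on a retry is swallowed, every other error still propagates.
public class IdempotentInsert {
    // Stands in for the JDBC call to the stored procedure (hypothetical).
    public interface RowInserter {
        void insertRow(int key, String value) throws SQLException;
    }

    // Assumed Sybase duplicate-key error code; verify against your server.
    public static final int DUPLICATE_KEY = 2601;

    public static void insertIgnoringDuplicates(RowInserter db, int key,
                                                String value)
            throws SQLException {
        try {
            db.insertRow(key, value);
        } catch (SQLException e) {
            if (e.getErrorCode() != DUPLICATE_KEY) {
                throw e;             // genuine error: propagate it
            }
            // Duplicate key on a retry: the row is already there, ignore.
        }
    }
}
```

This has the same downside noted above: the code cannot distinguish a harmless retry from a genuinely erroneous duplicate.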
The downside here is that you can't make any distinction between genuine errors (Duplicate should not have happened) and errors you don't care about
That is exactly what I am questioning: the stored procedure cannot tell whether it is a "retry" or an unrelated new call.
If it is a new call and duplication is detected, the whole transaction should be aborted in most cases, unless..
But EJB is for mission-critical applications, right? :-)
An update is an update... I still doubt an update can be made idempotent.
cache the home interfaces instead of the remotes.
I would extend option 1 to look up serialized handles to the remote references, get the references using the handles, and store them in the Hashtable.
If the references are invalidated, catch the exception, do a home lookup, get the reference, and store it in the Hashtable. At the same time, get the handle and serialize it for later use.
This should be faster than looking up the Home references every time the app starts.
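A rough sketch of that recovery path, with the home lookup hidden behind a hypothetical callback so the caching and refresh logic stand alone (in a real client the callback would use the Handle, or fall back to home.create(), and all names here are made up):

```java
import java.util.Hashtable;
import java.util.Map;

// Caches remote references; when a cached reference turns out to be
// invalid (e.g. a RemoteException from a crashed server), refresh()
// replaces it with a freshly obtained one.
public class RemoteCache {
    // Stands in for "get the reference back from the handle, or
    // re-create it via the home interface" (hypothetical).
    public interface Refresher {
        Object createFresh(String name);
    }

    private final Map remotes = new Hashtable();
    private final Refresher refresher;

    public RemoteCache(Refresher refresher) {
        this.refresher = refresher;
    }

    // Return the cached reference, obtaining it on first use.
    public synchronized Object get(String name) {
        Object ref = remotes.get(name);
        if (ref == null) {
            ref = refresher.createFresh(name);
            remotes.put(name, ref);
        }
        return ref;
    }

    // Call this after catching an exception on the cached reference:
    // drops the stale entry and caches a replacement.
    public synchronized Object refresh(String name) {
        Object ref = refresher.createFresh(name);
        remotes.put(name, ref);
        return ref;
    }
}
```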