I have a stateless session bean with TX_BEAN_MANAGED methods and TX_SERIALIZABLE isolation level. I am not calling UserTransaction.begin() in the method, but I am getting connections from a DataSource. I am not doing any updates/inserts/deletes either; it is read-only.
Are the connections that I obtain within the method associated with any transaction?
If I read a row from this method, will there be a lock on that row (since the isolation level is SERIALIZABLE), and for how long?
(If app server info is required: it is WebSphere and DB2 on NT.)
Thanks for your help!
Bean-managed means the code is taking care of things, in which case, since you don't call tx.begin(), the answer to your question is no: you will not be in a transaction.
I'm not really sure that what you are doing will give you what you want. First, why is this bean TX_BEAN_MANAGED? If the logical transaction spans a single method call, then simply make it container-managed. The only real reasons for having a bean-managed TX are that the operation requires several separate transactions in a single method, or to create a long-running transaction that spans multiple client calls. The first can be avoided through refactoring: use a container-managed TX and multiple methods with the TX_REQUIRES_NEW attribute. The second, creating a long-running TX, should be avoided if at all possible.
In the scenario that you describe, the database connection will not be associated with a global TX. There will, however, be a transaction at the database level. The duration of this transaction and the locks on rows depend on the autocommit flag setting. If autocommit is true then the lock is held until the result set has been read. If false, then the lock is held until your code calls Connection.commit(). There will be row locking: rows will have a read lock. If any other concurrent user has uncommitted changes on one of the rows that you want to read, then your process will block. If you are making a single SELECT request with no backward cursor movement, and the SELECT is not repeated, then SERIALIZABLE is way over the top. Use READ_COMMITTED for a single SELECT query.
The main problem that I see with your approach is that no global transaction is being started. If your bean does not call any others then you will probably get away with it, but you really should begin the UserTransaction before you get the connection from the DataSource. This ensures that any call you make to other beans will be able to share your TX context.
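A minimal sketch of that ordering, assuming an EJB-style stateless bean; the class name, the JNDI name "jdbc/MyDS", and the SessionContext field are placeholders invented for illustration, not taken from the original posts. This only runs inside a J2EE container:

```java
import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

// Sketch only (requires a container): begin the UserTransaction BEFORE
// obtaining the connection, so the connection is enlisted in the global TX
// and any beans called from here can share the same TX context.
public class ReadOnlyBeanSketch /* implements SessionBean ... */ {
    private javax.ejb.SessionContext ctx;  // set by the container in setSessionContext()

    public void doRead() throws Exception {
        UserTransaction tx = ctx.getUserTransaction();
        tx.begin();
        Connection con = null;
        try {
            // "jdbc/MyDS" is a hypothetical JNDI name
            DataSource ds = (DataSource) new InitialContext().lookup("jdbc/MyDS");
            con = ds.getConnection();      // enlisted in the global TX
            // ... read-only JDBC work; do NOT call con.commit()/rollback() here ...
            tx.commit();                   // the TX manager ends the transaction
        } catch (Exception e) {
            tx.rollback();
            throw e;
        } finally {
            if (con != null) con.close();
        }
    }
}
```

Note the begin() comes first and the bean never touches Connection.commit()/rollback(); the transaction manager does that when the global TX ends.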
I think you should reconsider your design. Why is the bean TX_BEAN_MANAGED? If you need multiple transactions in a single client call, then use a container-managed TX and refactor the bean, placing the separate transactions in separate methods marked TX_REQUIRES_NEW. If you are using TX_BEAN_MANAGED so that the transaction can span multiple client calls, then just don't do it.
Secondly, reconsider the use of SERIALIZABLE. If you are issuing a single SELECT statement and reading the result set once in the forward direction, then you do not require repeatable reads or protection from phantoms. READ_COMMITTED is all that is required.
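For reference, these are the two levels as java.sql.Connection constants. The snippet below just prints the standard JDBC values; on a raw (non-JTA) connection you would pick the cheaper level with setTransactionIsolation(), as shown in the comment:

```java
import java.sql.Connection;

// Prints the java.sql.Connection constants for the two isolation levels
// discussed above. SERIALIZABLE buys repeatable reads and phantom
// protection, which a single forward-only SELECT does not need.
public class IsolationLevels {
    public static void main(String[] args) {
        System.out.println("TRANSACTION_READ_COMMITTED = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("TRANSACTION_SERIALIZABLE   = " + Connection.TRANSACTION_SERIALIZABLE);
        // On a raw JDBC connection, the cheaper level would be selected with:
        // con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
    }
}
```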
Thanks a lot. That explains most of the problems that occurred during my testing. Actually, I am debugging some EJBs that already exist.
But I have the following questions on DB connections:
1) When we use a DataSource with JTA, are we supposed to use methods of java.sql.Connection like commit(), rollback(), setAutoCommit(), etc.? In my case, since there is no transaction, I assume that it is allowed.
2) Now, I tried changing the isolation level to READ_COMMITTED. Then in another EJB method (which has a JTA transaction associated), 'select for update' doesn't keep a lock on the row. Any idea about the minimum isolation level required for 'select for update' to work as I expect? Does it depend on the database?
3) I have another issue in which a simple INSERT statement into a table gives me deadlocks. This is again within a JTA user transaction, with several transactions going on in parallel. The isolation level is READ_COMMITTED. The table has a composite primary key (2 columns).
1) When using a JTA-enabled DataSource you should not use any of those methods. Connections obtained from the DataSource looked up in JNDI should already have autocommit set to false. Commit and rollback are called by the transaction manager when the global transaction is committed, not by the bean code. I think that not setting up a UserTransaction indicates a poor design and may be harmful, but I honestly don't know enough about the WebSphere implementation to say 'do not do this'.
2) Not really sure what you're asking here. First, the behaviour of locking is highly database dependent. To answer this question you must provide more information about the timing and activity of each concurrent transaction. My limited understanding of DB2 is that by selecting a row you get a shared lock. If you then update the row in the same TX, the lock must be promoted to an exclusive lock. Issuing SELECT FOR UPDATE means that the read will immediately obtain an exclusive lock. If you work with READ_COMMITTED, then your SELECT will block on any row that is exclusively locked by another TX.
3) Deadlock is possible in any transaction that locks more than a single resource. Higher isolation levels involve more locks. What are the other concurrent transactions doing? What isolation level are they operating under?
Do you have a DBA on the project and have you spoken to them about this?
Here is more explanation:
2) Concurrent transactions (more specifically, different threads running the same transaction code) are trying to read a row, modify the data, and then update the row. So to prevent deadlock, a SELECT FOR UPDATE on that row is done first, followed by the modification of the data and the UPDATE. If the isolation level is TX_SERIALIZABLE, this works as expected. If I change the isolation level to READ_COMMITTED, then the second transaction doesn't wait for the first thread to complete the update.
3) We have tracked down the SQL in which the deadlock is happening. It is an INSERT into a table, as I described. This is the only transaction that accesses that table, but there are multiple instances of the same transaction, since a number of users are trying to execute it at the same time. Other concurrent transactions don't access this table at all.
I tried scenario (2) and found a few interesting things. First, READ_COMMITTED will work. Second, the update (U) lock is only applied to the row when it is directly accessed. That is, the row lock only appears when you access a field of the ResultSet's current row. Executing the SELECT FOR UPDATE is not enough; calling ResultSet.next() is not enough. The row only locks when a column is read. Obviously DB2 is trying to hold the lock for the shortest possible time. Third, use separate Statement objects for the SELECT and the UPDATE.
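The read-modify-write pattern from (2), with the separate Statement objects mentioned above, might look roughly like this. The table and column names ("stock", "qty", "item_id") are invented for illustration, and the snippet assumes a live connection `con` enlisted in the transaction:

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch only (needs a live DB2 connection `con` inside a transaction):
// one Statement for the SELECT FOR UPDATE, a second for the UPDATE.
Statement selStmt = con.createStatement();
ResultSet rs = selStmt.executeQuery(
    "SELECT qty FROM stock WHERE item_id = 42 FOR UPDATE");  // hypothetical table
int qty = 0;
if (rs.next()) {
    qty = rs.getInt("qty");   // per the observation above, the U lock is
}                             // actually taken when the column is read
rs.close();
selStmt.close();

// Separate Statement object for the UPDATE
PreparedStatement updStmt = con.prepareStatement(
    "UPDATE stock SET qty = ? WHERE item_id = 42");
updStmt.setInt(1, qty - 1);
updStmt.executeUpdate();
updStmt.close();
// The commit comes from the global TX (or con.commit() outside JTA).
```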
As far as the deadlock goes you're going to have to do some investigation on this one. I tried using the scenario from (2) with READ_COMMITTED and with SERIALIZABLE isolation. With READ_COMMITTED the table is locked using an IX lock and the individual rows obtain an update (U) lock. The IX table lock allows other threads access to the table provided they negotiate using row locks. With SERIALIZABLE isolation the entire table is locked for update (U). Note that this means that only a single thread has access to the entire table for the duration of its transaction.
To debug your problem I suggest you start a DB2 CLP session logged in as DB2SYSADM (probably Administrator) and execute the following 2 commands.
1) UPDATE MONITOR SWITCHES USING LOCK ON
2) GET SNAPSHOT FOR LOCKS ON mydbname
The issue looks solved.
We didn't make any database changes. Instead, we changed the isolation level to READ_COMMITTED and started transactions wherever we were not doing so earlier.