EJB programming & troubleshooting: Concurrent transactions

  1. Concurrent transactions (5 messages)


    I have a table, table_serial, in my database which I use to generate primary keys for inserting records into other tables. I do not want the query/update on this table to be part of the transactions that insert into the other tables. So I use the JDBC APIs directly: I suspend the connection used to insert into the other tables, resume the connection to query and update the table_serial table, commit that transaction, and then resume the connection to insert into the other tables. However, I am required to use the UserTransaction object / container-managed transactions. Is there a way to do this while keeping the transactions on the table_serial table out of the scope of the main transactions?

    I have implemented a similar scheme in a multi-threaded C program, where it works beautifully, and the Java program also works; but as I understand it, it is advisable to use the UserTransaction object or container-managed transactions.

    Any pointers to this are welcome.
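    The usual container-managed way to get this behavior is to put the table_serial access behind a separate method (for example on a stateless session bean) whose transaction attribute is RequiresNew: the container suspends the caller's transaction, runs the key allocation in its own transaction, commits it, and then resumes the caller. A sketch of the relevant ejb-jar.xml fragment follows; the bean and method names are invented for illustration, not taken from the original post:

    ```xml
    <assembly-descriptor>
      <container-transaction>
        <method>
          <!-- hypothetical stateless session bean wrapping table_serial -->
          <ejb-name>SerialGeneratorBean</ejb-name>
          <method-name>getNextKey</method-name>
        </method>
        <!-- the allocation runs and commits in its own transaction, -->
        <!-- independent of whether the caller later commits or rolls back -->
        <trans-attribute>RequiresNew</trans-attribute>
      </container-transaction>
    </assembly-descriptor>
    ```

    With this in place the caller never touches JDBC transaction demarcation itself; it simply invokes getNextKey() from within its own container-managed transaction.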


    Threaded Messages (5)

  2. Concurrent transactions

    In my implementation, I used a CMP entity bean. The bean is mapped to a table with two columns: ID, which identifies the serial-number sequence, and SERIAL, which holds the current number. The bean has a business method, getNextSerial(), and a finder method, findBySNID(id). When I need a serial number, I find the bean and invoke the getNextSerial() method. The back-end database is Oracle and the application server is WLS 6.1.

    I set the transaction attribute for that finder method to Mandatory, and the getNextSerial() method to Supports. WLS 6.1 has a special isolation-level setting for Oracle, READ-COMMITTED-FOR-UPDATE, which locks a row as soon as it is accessed. That entity bean is only used from other EJBs, because the entity bean needs to be part of an existing transaction. It works fine.
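    For reference, that WLS 6.1 isolation level is configured per bean in weblogic-ejb-jar.xml. A sketch is below; the ejb-name is a placeholder for whatever the serial entity bean is called in your own descriptors, and the exact element layout should be checked against the WLS 6.1 DTD:

    ```xml
    <weblogic-ejb-jar>
      <transaction-isolation>
        <!-- Oracle-specific level: locks each row on first access -->
        <isolation-level>TRANSACTION_READ_COMMITTED_FOR_UPDATE</isolation-level>
        <method>
          <ejb-name>SerialBean</ejb-name>
          <method-name>*</method-name>
        </method>
      </transaction-isolation>
    </weblogic-ejb-jar>
    ```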
  3. Concurrent transactions

    Question: Can you please explain the usage of Mandatory? I thought Mandatory meant a transaction has to exist prior to making the entity-bean invocation. Don't we want the primary-key generation/fetch to take place in its own (short-lived) transaction? In which case, wouldn't RequiresNew be more appropriate?

    Thanks in advance.
  4. Concurrent transactions

    In our case, we wanted the serial numbers to be continuous. For example, AG2002010100001 is the first serial number generated for the prefix AG on Jan 1st, 2002. The second one must be AG2002010100002, and so on. So we did not make the serial-number generator perform its operations in its own transaction: when something goes wrong, the number has to be rolled back too. Surely this is a bottleneck for the whole system. If someone gets a number and the transaction is not committed until 3 minutes later, the serial number for AG stays locked and no other client can get the next serial number for AG. So I asked the programmers to fetch the next serial number just before the last step of the whole operation. I could not think of a better solution.

    So if continuous serial numbers are not your concern, you surely can make that transaction RequiresNew or Required. However, I don't think RequiresNew makes much sense. RequiresNew means a new transaction will be started, and the original transaction won't care whether the new transaction is committed or rolled back. (Think about it: if it's important to know whether the sub-transaction succeeded, why bother creating a sub-transaction? One transaction would be enough.) I don't think that fits the nature of a serial-number generator.
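    To make the trade-off concrete, here is a container-free sketch (all class and method names are invented for illustration). The allocator stands in for a RequiresNew generator: each increment "commits" on its own, so a caller whose transaction later rolls back simply leaves a gap in the committed sequence:

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    // Stand-in for a key generator whose work runs in its own transaction
    // (RequiresNew): once a serial is handed out, the caller's rollback
    // cannot return it.
    class SerialAllocator {
        private final AtomicLong next = new AtomicLong(1);

        long nextSerial() {
            // "Commits" immediately; never undone by the caller.
            return next.getAndIncrement();
        }
    }

    class GapDemo {
        public static void main(String[] args) {
            SerialAllocator allocator = new SerialAllocator();

            long first = allocator.nextSerial(); // caller's tx #1 commits
            long lost  = allocator.nextSerial(); // caller's tx #2 rolls back,
                                                 // so this serial is never used
            long third = allocator.nextSerial(); // caller's tx #3 commits

            // Committed records carry 1 and 3; 2 is the gap.
            System.out.println(first + " " + lost + " " + third);
        }
    }
    ```

    This is exactly why RequiresNew is a poor fit when gap-free numbering is a hard requirement, and a perfectly good fit when it is not.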

    Dright Ho
  5. Concurrent transactions

    In my case, it's not necessary to have serial numbers without gaps. Gaps are OK with me; I use the primary-key field mainly for replication (it's an Informix database). Since my program is multi-threaded, the threads pick up ranges of primary keys based on their time-slice. However, it's important that transactions on table_serial always succeed; if not, the program has to exit and send an NMS trap.
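    The range-per-thread approach described above can be sketched without a container or Informix. In this sketch (all names and the block size are invented), the shared counter stands in for one short, always-committed transaction on table_serial; each thread reserves a whole block in one step and then hands out keys locally without touching the shared state again:

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    // Block (hi/lo style) key allocation, assuming gaps are acceptable:
    // unused keys in an abandoned block are simply lost.
    class BlockKeyGenerator {
        private static final int BLOCK_SIZE = 100;

        // Stand-in for the table_serial row: bumped once per reserved block.
        private final AtomicLong nextBlockStart = new AtomicLong(0);

        // Per-thread cursor into the currently reserved block: {next, limit}.
        private final ThreadLocal<long[]> block =
            ThreadLocal.withInitial(() -> new long[] {0, 0});

        long nextKey() {
            long[] b = block.get();
            if (b[0] >= b[1]) {
                // Block exhausted: one short "transaction" reserves the next range.
                b[0] = nextBlockStart.getAndAdd(BLOCK_SIZE);
                b[1] = b[0] + BLOCK_SIZE;
            }
            return b[0]++;
        }
    }

    class BlockDemo {
        public static void main(String[] args) {
            BlockKeyGenerator gen = new BlockKeyGenerator();
            System.out.println(gen.nextKey()); // first key of the first block
            System.out.println(gen.nextKey()); // next key, no shared access
        }
    }
    ```

    The attraction is that contention on the serial table drops by a factor of BLOCK_SIZE, at the cost of gaps whenever a thread or the process dies with a partly used block.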
  6. Concurrent transactions

    My suggestion of RequiresNew was only in regard to not keeping the table locked for longer than necessary, which, as you said, is a drawback of your implementation.

    Unless I'm missing something, the original "problem" (assuming consecutive keys are not a requirement, and I would surmise most systems should not have such a requirement) is that some keys may be "lost" if the original transaction fails.

    It would be nice to know why a system should impose a consecutive-primary-key requirement. Surely primary keys should not carry any business meaning, in which case all they need to be is unique, no?