EJB design: what primary key pattern should I use?
Hi, I'm a fairly new EJB programmer and I'm looking for an entity EJB primary key pattern that works with all databases. Throughput isn't very important but...
- Posted by: Enrique Rodriguez
- Posted on: October 02 2001 16:13 EDT
I prefer a pattern that is suitable for all databases over one tuned for throughput.
Does anybody have any advice???
Thanks, and pardon my English.
- what primary key pattern should I use? by Joost van de Wijgerd on October 03 2001 06:01 EDT
- RE: what primary key pattern should I use? by David Jones on October 04 2001 09:47 EDT
- RE: what primary key pattern should I use? by Jeroen van Erp on October 08 2001 08:02 EDT
- RE: what primary key pattern should I use? by Richard Kenyon on October 09 2001 03:15 EDT
RE: what primary key pattern should I use? by Jeroen van Erp on October 09 2001 03:51 EDT
RE: what primary key pattern should I use? by Richard Kenyon on October 09 2001 06:18 EDT
- RE: what primary key pattern should I use? by Jeroen van Erp on October 10 2001 03:58 EDT
Depending on your needs, the easiest thing is to use a random generator that generates random longs. This is of course only suitable if you are not in a clustered environment, but it works with every database I know of.
There is a chance, however small it may seem, that a duplicate key appears! So don't forget to check that the generated key is not already present, or catch a DuplicateKeyException and re-generate.
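A minimal sketch of this random-long approach. The in-memory `Set` below is a hypothetical stand-in for the database uniqueness check (a failed `findByPrimaryKey`, or catching `DuplicateKeyException` on `create`); it exists only to keep the example self-contained:

```java
import java.security.SecureRandom;
import java.util.HashSet;
import java.util.Set;

// The Set is a made-up stand-in for the real database lookup;
// in an entity bean you would instead catch DuplicateKeyException
// on create() and retry.
class RandomKeyGenerator {
    private final SecureRandom random = new SecureRandom();
    private final Set<Long> issuedKeys = new HashSet<>();

    // Keep drawing random longs until we find one not already in use.
    synchronized long nextKey() {
        long candidate;
        do {
            candidate = random.nextLong();
        } while (!issuedKeys.add(candidate)); // add() returns false on a duplicate
        return candidate;
    }
}
```

The retry loop is what makes the small duplicate probability harmless in practice.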
After reviewing a number of patterns for unique primary key generation for the Open For Business project we decided on a rather creative one that you don't hear much about.
We use a table in the database with two columns: the name of the PK series, and the current value. A bank of values is retrieved whenever needed to make it run faster overall.
With this approach there is still the problem that two servers hitting the database server might "collide" and get overlapping primary key banks. To avoid this we use an ethernet style collision detection algorithm that goes something like this:
1. get the current value from the database
2. use an update statement to add your bank size to the value in the database in a single call (i.e., value = value + 10)
3. get the current value again from the database and make sure it has only incremented by 10; if so, there was no collision and you have your bank of keys
4. if the value incremented by a number other than 10, you collided with something, so wait a random amount of time and try again
With only 2-4 servers hitting the same database you will rarely have collisions, but when they do happen they should resolve pretty quickly.
This approach is completely database independent and only relies on simple selects and updates, and of course an insert to get a row into the database for the key series name.
David E. Jones
The Open For Business Project
jonesde at ofbiz dot org
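The bank-and-collision-detection steps above might be sketched as follows. The class names are made up, and `KeyTable` is an in-memory stand-in for the real `key_table` row: each of its methods is atomic on its own, like a single auto-committed SQL statement, but no lock is held between calls, which is exactly the window the collision check has to cope with:

```java
import java.util.Random;

// Hypothetical stand-in for the key_table row. Each method is atomic on
// its own (one auto-committed statement), but no lock spans two calls.
class KeyTable {
    private long value = 0;
    synchronized long select() { return value; }   // SELECT value FROM key_table ...
    synchronized void add(long n) { value += n; }  // UPDATE key_table SET value = value + n
}

class BankAllocator {
    static final long BANK_SIZE = 10;
    private final KeyTable table;
    private final Random backoff = new Random();

    BankAllocator(KeyTable table) { this.table = table; }

    // Returns the first key of a freshly reserved bank of BANK_SIZE keys.
    long reserveBank() {
        while (true) {
            long before = table.select();          // 1. read the current value
            table.add(BANK_SIZE);                  // 2. add the bank size in one statement
            long after = table.select();           // 3. read it again
            if (after == before + BANK_SIZE) {
                // No collision: keys before+1 .. before+BANK_SIZE are ours.
                return before + 1;
            }
            try {                                  // 4. collided: wait a random time, retry
                Thread.sleep(backoff.nextInt(50));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```

Note that a failed attempt still consumed a bank, so the key series ends up with gaps; that is harmless if keys only need to be unique, not contiguous.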
This is indeed a far better scheme than the one given above. We also figured this would be a logical way to go for our EJB application. What we are looking at at the moment is whether it is possible to make the look-up-and-add-10 action atomic, so that a (complicated) collision detection algorithm becomes obsolete.
You should never get collisions if this pattern is implemented correctly. Only one 'process' can update the database at any time, so as long as you do a :-
UPDATE key_table SET next_id = next_id + bank_size
SELECT next_id FROM key_table
You should be OK.
I definitely do not see why this should avoid collisions. As I see it, two processes/beans can interleave between the update and the select:
1: does its update, finishes it (next_id = next_id + 10)
2: does its update, finishes it (next_id = next_id + 20)
2: reads next_id, gets next_id + 20 (internally subtracts 10, for the 10 keys reserved)
1: reads next_id, gets next_id + 20 (internally subtracts 10, for the 10 keys reserved)
Result: both process 1 and process 2 will now generate keys with values next_id + 11 to next_id + 20, and both will generate a DuplicateKeyException.
This can only be avoided by using atomic actions or collision detection schemes. I prefer atomic actions, i.e. read and write without the possibility of interleaving.
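The interleaving above can be replayed deterministically with two simulated "processes" against a shared counter (a sketch; the class names are made up, and each call is atomic on its own like one auto-committed statement):

```java
// Hypothetical in-memory next_id counter: each call is atomic, but
// nothing stops calls from interleaving across processes.
class SharedCounter {
    private long nextId = 0;
    synchronized void add(long n) { nextId += n; }  // UPDATE ... SET next_id = next_id + n
    synchronized long read() { return nextId; }     // SELECT next_id ...
}

class InterleavingDemo {
    // Replays the four interleaved steps; returns the bank start both
    // processes end up believing is theirs (or -1 if they differ).
    static long collidingBankStart() {
        SharedCounter db = new SharedCounter();
        db.add(10);                // 1: process 1 finishes its update
        db.add(10);                // 2: process 2 finishes its update
        long p2 = db.read() - 10;  // 2: reads next_id, subtracts 10
        long p1 = db.read() - 10;  // 1: reads next_id, subtracts 10
        // Both now hand out keys p+1 .. p+10 and will eventually
        // trigger a DuplicateKeyException.
        return (p1 == p2) ? p1 : -1;
    }
}
```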
I may be missing something here, but surely the update would lock the record until the end of the transaction?
Yes, but normally the end of an update transaction is the update itself; a transaction does not span 2 SQL commands. So actually two transactions have been specified.
I will read up on it again this weekend; I have a good example in one of my database books from university.
We are waiting for this example!!!
This will help me a lot, thank you!!!
> Yes, but normally the end of an update transaction is the update itself; a transaction does not span 2 SQL commands. So actually two transactions have been specified.
Rubbish :-) I have to disagree here - a transaction is a series of SQL commands (updates/reads) that occur between a start and a commit! In EJB terms, the transaction can be user-managed or handled by the container.
To prove this, I created a small test EJB that did the following:-
display message "starting"
update a record by key
display message "sleeping"
sleep for 30 seconds
display message "reading"
read the record by key
display message "finished"
I then ran two simultaneous clients, each calling the EJB.
The results proved that the 'UPDATE' locked the record until the end of the transaction.
This is standard SQL behaviour as far as I know, not specific to Sybase (on which I ran the test).
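A rough simulation of that experiment, with a `ReentrantLock` standing in for the row lock that the `UPDATE` acquires and the `COMMIT` releases (this only illustrates the locking behaviour; it is not real JDBC, and the class names are made up):

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical simulation: the ReentrantLock plays the role of the
// row lock taken by UPDATE and released at COMMIT.
class LockedRow {
    final ReentrantLock rowLock = new ReentrantLock();
    void update() { rowLock.lock(); }    // UPDATE: acquire the row lock
    void commit() { rowLock.unlock(); }  // COMMIT: release it
}

class RowLockDemo {
    // Ask from a *separate* thread whether a second client's UPDATE
    // could proceed right now (ReentrantLock is reentrant, so asking
    // from the owning thread would always succeed).
    static boolean secondClientCanUpdate(LockedRow row) {
        final boolean[] acquired = {false};
        Thread other = new Thread(() -> {
            if (row.rowLock.tryLock()) {
                acquired[0] = true;
                row.rowLock.unlock();
            }
        });
        other.start();
        try {
            other.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return acquired[0];
    }
}
```

While the first client sleeps inside its open transaction, the second client's update blocks; once the first client commits, it goes through.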
Agreed! But you DO have to specify the COMMIT in that case ;) Standard behaviour is auto-commit, which means: one SQL statement, one commit.