Many databases and some application servers weaken serializability with their so-called isolation levels.
"So-called"? Aren't they defined in the SQL standard? SQL2 (SQL-92) defined the four standard isolation levels more than a decade ago.
The weaker isolation levels are not the same from one database to another.
The actual implementation of the SERIALIZABLE isolation level (well, of the SERIALIZABLE keyword) can differ as well, and some databases may not support it at all.
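As a quick illustration of that last point, some engines do not even accept the standard statement. SQLite, for example, has no SET TRANSACTION ISOLATION LEVEL at all; a minimal sketch using Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
try:
    # The SQL-92 syntax; PostgreSQL, Oracle, MySQL, etc. accept it
    # (with differing semantics), but SQLite rejects it outright.
    conn.execute("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE")
    supported = True
except sqlite3.OperationalError:
    supported = False  # syntax error: SQLite has no such statement
print(supported)
```

And even where the keyword is accepted, behavior varies: Oracle's SERIALIZABLE is really snapshot isolation, and PostgreSQL's was too before version 9.1.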
This requires you to reason about using inconsistent data, and that is hard. You have to use application knowledge to argue that a transaction reading an inconsistent, possibly to be rolled-back value doesn't matter to the correctness of the application.
Unless the transaction isolation level is READ UNCOMMITTED (i.e., dirty reads are allowed), the value is absolutely not "possibly to be rolled-back". If you are talking about inconsistent updates or phantom data, this applies only when the second transaction makes multiple reads. That matters for multiuser desktop applications, but I would say it makes little difference for a web application.

A database transaction is usually not kept open for a whole client session, precisely to increase concurrency (this article is exactly about that). So each HTTP request usually starts a new database transaction and commits it immediately (possibly within the same application transaction; this is where EJB may help). Each such transaction usually makes only one read or one update. Thus it makes no difference whether the value changes afterwards, because it is read only once, and it is a committed value unless for some strange reason the DBA set the isolation level to READ UNCOMMITTED.
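The per-request pattern described above can be sketched as follows (a minimal sketch using sqlite3; the cars table and its columns are hypothetical):

```python
import sqlite3

# Hypothetical schema: one row per car, with a reservation flag.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cars (id INTEGER PRIMARY KEY,"
           " reserved INTEGER NOT NULL DEFAULT 0)")
db.execute("INSERT INTO cars (reserved) VALUES (0), (0)")
db.commit()

def handle_request(conn):
    """One HTTP request = one short transaction: a single read,
    committed immediately, so the value read is a committed value."""
    with conn:  # commits on success, rolls back on exception
        (available,) = conn.execute(
            "SELECT COUNT(*) FROM cars WHERE reserved = 0").fetchone()
    return available

print(handle_request(db))  # 2
```

Because the transaction never rereads the row, there is no window in which a later rollback by another transaction could contradict what this request already returned.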
In the example, if a transaction reserves the last car, a second transaction can observe that fact and conclude there are no cars available. But if later we compensate for the first transaction by canceling the reservation, the second transaction has observed an inconsistent state.
To notice the change in the data, the second transaction must be long-lived and must reread the data. Why would we design a long-lived, multi-read transaction if we have already decided to go with short ones for updates? More realistically, the second transaction will start, observe that there are no cars available, and return to the user with "Nothing available, try again later". Or, if the database designer were a little smarter, the second transaction would read a "Reserved but not confirmed" flag and tell the user "Some cars may be available later, try in N minutes", where N = M - K, M being the average time a user spends from starting a reservation to finishing it, and K the length of the current user session.
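The smarter response suggested above can be sketched in a few lines (all names and the minimum of 1 minute are hypothetical; the N = M - K formula is from the text):

```python
def retry_hint(free_cars, unconfirmed, avg_reservation_min, session_min):
    """Turn availability counts into a user-facing message."""
    if free_cars > 0:
        return "Cars available"
    if unconfirmed > 0:
        # N = M - K: average reservation time minus current session length,
        # floored at 1 minute (assumption) so we never suggest "0 minutes".
        n = max(avg_reservation_min - session_min, 1)
        return f"Some cars may be available later, try in {n} minutes"
    return "Nothing available, try again later"

print(retry_hint(0, 3, avg_reservation_min=10, session_min=4))
# Some cars may be available later, try in 6 minutes
```

Note that this needs no long-lived transaction: the flag is read once, in the same short transaction that checked availability.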
[By using HLS,] if at any point a transaction rolls back, the CompletionSignalSet is responsible for ensuring that the enclosing activity must fail, triggering any remaining compensation transactions.
This support from the application server is nice, but as far as I can see, nothing changes at the database level. That means that if the compensating database transaction fails, the system becomes inconsistent. Thus HLS is just a convenient framework; it does not solve the original problem.
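The failure mode described above is easy to demonstrate: a compensating transaction is just another database transaction, so if it fails, the earlier committed change persists (a minimal sketch; the reservations schema is hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE reservations (car_id INTEGER, confirmed INTEGER)")

with db:  # forward transaction: reserve the car; commits successfully
    db.execute("INSERT INTO reservations VALUES (1, 0)")

try:
    with db:  # compensating transaction: cancel the reservation...
        db.execute("DELETE FROM reservations WHERE car_id = 1")
        raise RuntimeError("connection lost")  # ...but it fails mid-way
except RuntimeError:
    pass  # the 'with db' block rolled the compensation back

count = db.execute("SELECT COUNT(*) FROM reservations").fetchone()[0]
print(count)  # 1: the reservation survived; the system is inconsistent
```

No application-server framework can change this on its own; at best it can keep retrying the compensation or flag the inconsistency for manual repair.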