"Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> wrote:
> In my first reply I wondered if the presence of concurrent "read
> committed" transactions would somehow affect the correctness of the
> algorithm, as the authors don't mention that.
Yeah, I was concerned about that, too. In thinking it through I've
convinced myself that there is a choice in implementation, which seems
to have a pretty obvious winner.
(1) If the READ COMMITTED and SNAPSHOT isolation levels are left
unchanged, there would be a behavioral difference between this
technique and strict two-phase locking (S2PL) implementations of
serializable transactions. With S2PL, even READ COMMITTED
transactions can only view the database in a state which is
consistent with some serial application of SERIALIZABLE transactions.
Under the algorithm from this paper, with the other isolation levels
unchanged, the only way to view the database in a state coherent with
the SERIALIZABLE transactions is to use a SERIALIZABLE transaction
yourself.
(2) Promote everything to SERIALIZABLE by having all transactions,
regardless of the isolation level requested, take out SIREAD locks
and check for unsafe access patterns. Strictly speaking, this would
conform to the SQL standard, since an implementation is free to
promote a request for any isolation level to a stricter one; however,
it hardly seems useful.
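For completeness: the closest user-visible approximation of (2)
without touching the engine would be to make SERIALIZABLE the
default, though that only changes the default rather than promoting
explicit requests, so it's not quite the same thing:

SET default_transaction_isolation TO 'serializable';
-- or for the whole cluster, in postgresql.conf:
--   default_transaction_isolation = 'serializable'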
So, I think the only sane thing to do in this regard is to document
that the guarantees provided to non-serializable transactions differ
from those of blocking implementations of SERIALIZABLE.
-Kevin