Simon Riggs <simon(at)2ndQuadrant(dot)com> wrote:
> It's common to find applications that have some transactions
> explicitly coded to use SERIALIZABLE mode, while the rest are in
> the default mode READ COMMITTED. So common that TPC-E benchmark
> has been written as a representation of such workloads.
I would be willing to bet that any such implementations assume S2PL,
and would not prevent anomalies as expected unless all transactions
are serializable.
> The reason this is common is that some transactions require
> SERIALIZABLE as a "fix" for transaction problems.
That is a mode of thinking which doesn't work if serializable provides
only the guarantees required by the standard, although many people
assume it provides more. It does *not* guarantee blocking on
conflicts, and it does not require that transactions appear to have
executed in the order of successful commit. It requires only that
concurrently running any mix of serializable transactions produce a
result consistent with some one-at-a-time execution of those
transactions. Rolling back transactions to prevent violations of that
guarantee is allowed. I don't see any
guarantees about how serializable transactions interact with
non-serializable transactions beyond each transaction not seeing any
of the phenomena prohibited for its isolation level.
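
To make that concrete, here is a minimal sketch of the classic
write-skew case, using a hypothetical "doctors" table: under SSI
neither transaction blocks the other, but one of them is rolled back
rather than allowing a result which is inconsistent with every serial
ordering:

  -- hypothetical setup
  CREATE TABLE doctors (name text PRIMARY KEY, on_call boolean NOT NULL);
  INSERT INTO doctors VALUES ('alice', true), ('bob', true);

  -- session 1
  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT count(*) FROM doctors WHERE on_call;               -- sees 2
  UPDATE doctors SET on_call = false WHERE name = 'alice';  -- no blocking

  -- session 2, concurrently
  BEGIN ISOLATION LEVEL SERIALIZABLE;
  SELECT count(*) FROM doctors WHERE on_call;               -- also sees 2
  UPDATE doctors SET on_call = false WHERE name = 'bob';    -- no blocking
  COMMIT;                                                   -- succeeds

  -- session 1
  COMMIT;   -- fails with a serialization failure, e.g.
  -- ERROR:  could not serialize access due to read/write dependencies
  -- among transactions

Exactly which of the two transactions is cancelled, and at which
statement, can vary; the guarantee is only that whatever commits is
consistent with some serial order of those transactions.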
> If you alter the default_transaction_isolation then you will break
> applications like this, so it is not a valid way to turn off SSI.
I don't follow you here. What would break? In what fashion? Since
the standard allows any isolation level to provide more strict
transaction isolation than required, it would be conforming to
*only* support serializable transactions, regardless of the level
requested. Not a good idea for some workloads from a performance
perspective, but it would be conforming, and any application which
doesn't work correctly with that is not written to the standard.
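
For reference, raising the default only affects transactions which
don't request a level explicitly; a quick psql-style sketch of the
interaction:

  -- raise the default for transactions which don't specify a level
  SET default_transaction_isolation = 'serializable';

  -- a transaction relying on the default now runs serializable
  BEGIN;
  SHOW transaction_isolation;            -- serializable
  COMMIT;

  -- a transaction which explicitly requests a level still gets it
  BEGIN ISOLATION LEVEL READ COMMITTED;
  SHOW transaction_isolation;            -- read committed
  COMMIT;

Either way, no transaction gets weaker isolation than it asked for,
which is all the standard requires.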
-Kevin