From: Alex <zhihui(dot)fan1213(at)gmail(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Serialization questions
Date: 2019-08-20 08:47:37
Message-ID: CAKU4AWqS2BQ19ppRE=pWiiJSA-DgvrvKLmMmdXo648c=okkDQw@mail.gmail.com
Lists: pgsql-hackers
Before digging into how Postgres implements the Serializable isolation
level (I have read many papers related to it), I have a question about how
it should behave.

I mainly read the ideas from
https://www.postgresql.org/docs/11/transaction-iso.html:
> In fact, this isolation level works exactly the same as Repeatable Read
> except that it monitors for conditions which could make execution of a
> concurrent set of serializable transactions behave in a manner inconsistent
> with all possible serial (one at a time) executions of those transactions.
In Repeatable Read, every statement uses the snapshot taken at transaction
start. Is the same true at the Serializable isolation level?
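For concreteness, here is the kind of interleaving I have in mind (a
write-skew sketch; the `doctors` table and names are hypothetical, following
the on-call example commonly used in the SSI literature):

```sql
-- Session 1                                  -- Session 2
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM doctors
  WHERE on_call;          -- sees 2
                                              BEGIN ISOLATION LEVEL REPEATABLE READ;
                                              SELECT count(*) FROM doctors
                                                WHERE on_call;  -- also sees 2
UPDATE doctors SET on_call = false
  WHERE name = 'alice';
                                              UPDATE doctors SET on_call = false
                                                WHERE name = 'bob';
COMMIT;
                                              COMMIT;  -- under REPEATABLE READ both
                                                       -- commits succeed, leaving
                                                       -- nobody on call
-- Under SERIALIZABLE, each session reads the same snapshot as above,
-- but one of the two commits is instead aborted with
-- SQLSTATE 40001 (serialization_failure).
```

So my understanding is that the snapshot behavior is the same and only the
monitoring/abort behavior differs, but I would like to confirm that.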
> When relying on Serializable transactions to prevent anomalies, it is
> important that any data read from a permanent user table not be considered
> valid until the transaction which read it has successfully committed. This
> is true even for read-only transactions ...
What does "not be considered valid" mean? And if the reader is a read-only
transaction (call it T1), I would think it is fine to let other transactions
do anything with T1's read set, since their changes are invisible to T1
(which uses the transaction start time as every statement's timestamp).
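To make my question concrete, here is a sketch of the read-only anomaly
from the SSI literature (the `control`/`receipts` schema is hypothetical,
not from the docs), which seems to be the case the sentence is warning
about:

```sql
-- T2 adds a receipt to the currently open batch (not yet committed):
BEGIN ISOLATION LEVEL SERIALIZABLE;
INSERT INTO receipts (batch, amount)
  SELECT current_batch, 10.00 FROM control;

-- T3 closes the batch and commits first:
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE control SET current_batch = current_batch + 1;
COMMIT;

-- T1, read-only, reports on the batch T3 just closed:
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY;
SELECT current_batch FROM control;        -- sees the batch as closed
SELECT sum(amount) FROM receipts
  WHERE batch = (SELECT current_batch - 1 FROM control);
COMMIT;

-- If T1's report omits T2's receipt but T2 later commits into that
-- closed batch, no serial order of T1, T2, T3 produces what T1 saw;
-- SERIALIZABLE must abort one of them (serialization_failure) rather
-- than let T1's result stand.
```

Is this the reason even a read-only transaction's reads are not "valid"
until it commits, or is there more to it?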
Thanks