From: Florian Pflug <fgp(at)phlo(dot)org>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: WIP fix proposal for bug #6123
Date: 2011-08-03 16:27:28
Message-ID: 16343994-F9B2-4795-BC6D-734DF50132EE@phlo.org
Lists: pgsql-hackers
On Aug 3, 2011, at 17:55, Robert Haas wrote:
> On that note, I think in some ways the problems we're hitting here are
> very much like serialization anomalies.
Yeah, I had the same feeling of familiarity ;-)
> If the user updates a tuple
> based on its PK and sets some non-PK field to a constant, and while
> we're doing that, a BEFORE trigger updates any field in the tuple
> other than the PK, then in theory it seems we ought to be able to
> reconcile the updates. It feels like the result ought to be the same
> as if we'd simply run the BEFORE-trigger update to completion, and
> then run the main update. However, if the BEFORE-trigger modifies any
> columns that the main update examined while constructing its value for
> NEW, then the updates can't be serialized.
Going down that road opens the door to a *lot* of subtle semantic
differences between currently equivalent queries. For example,
UPDATE T SET f=f, a=1
would behave differently than
UPDATE T SET a=1
because in the first case the new row would depend on the old row's
value of "f", while in the second case it doesn't.
> There's no way to get the
> same result as if you'd done either one of them first, because they
> are inextricably intertwined.
>
> In practice, my hand-wavy reference to "reconciling the updates" is a
> problem because of the way the trigger interface works. It feels
> pretty impossible to decide that we're going to do the update, but
> with some other random values we dredged up from some other update
> replacing some of the ones the user explicitly handed to us. But if
> the trigger could return an indication of which column values it
> wished to override, then it seems to me that we could piece together a
> reasonable set of semantics. It's not exactly clear how to make that
> work, though.
I dunno, that all still feels awfully complex. As you said yourself,
this case is quite similar to a serialization anomaly. Taking that
correspondence further, that reconciliation of updates is pretty much
what the EPQ machinery does in READ COMMITTED mode. Now, we ourselves
have warned users in the past to *not* use READ COMMITTED mode if they
do complex updates (e.g., use UPDATE ... FROM ...), because the behaviour
of that reconciliation machinery in the presence of concurrent updates
is extremely hard to predict. I thus don't believe that it's a good idea
to introduce similarly complex behaviour in other parts of the system -
and particularly not if you cannot disable it by switching to another
isolation level.
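The kind of statement that warning is about is, roughly (hypothetical
table names):

-- READ COMMITTED: if a matching row in "accounts" is updated
-- concurrently, EvalPlanQual re-checks the join against the latest
-- version of that row while still using the originally-read rows from
-- "deposits", so the outcome may match neither "fully before" nor
-- "fully after" the concurrent transaction
UPDATE accounts a
   SET balance = a.balance + d.amount
  FROM deposits d
 WHERE a.id = d.account_id;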
Simply throwing an error, on the other hand, makes the behaviour simple
to understand and explain.
best regards,
Florian Pflug