From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Andres Freund <andres(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: INSERT...ON DUPLICATE KEY LOCK FOR UPDATE
Date: 2013-09-23 19:49:50
Message-ID: CA+TgmobHNkPRcAWuh2S7ftJE4DKzzG_yn+e4qu1kuAB04jieqQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, Sep 20, 2013 at 8:48 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> On Tue, Sep 17, 2013 at 9:29 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> On Sat, Sep 14, 2013 at 6:27 PM, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
>>> Note that today there is no guarantee that the original waiter for a
>>> duplicate-inserting xact to complete will be the first one to get a
>>> second chance
>
>> ProcLockWakeup() only wakes as many waiters from the head of the queue
>> as can all be granted the lock without any conflicts. So I don't
>> think there is a race condition in that path.
>
> Right, but what about XactLockTableWait() itself? It only acquires a
> ShareLock on the xid of the got-there-first inserter that potentially
> hasn't yet committed/aborted.
That's an interesting point. As you pointed out in later emails, that
case is handled for heap tuple locks, but btree uniqueness conflicts
are a different kettle of fish.
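For anyone following along, the race being discussed can be simulated in miniature outside PostgreSQL. This is purely an illustrative sketch, not PostgreSQL code: the event stands in for the ShareLock on the inserter's xid that XactLockTableWait() blocks on, and all names here are invented. The point is that every waiter is released at once when the xid lock goes away, and nothing preserves the original arrival order for the second attempt:

```python
import threading

# Illustrative simulation (NOT PostgreSQL internals): waiters that merely
# block until a transaction finishes, as XactLockTableWait() does via a
# ShareLock on the xid, are all released together and then race; the
# first-arrived waiter has no guaranteed first claim on the second chance.

xact_done = threading.Event()   # stands in for the inserter's xid lock
claim_guard = threading.Lock()
winner = []                     # whichever waiter re-claims the value first

def waiter(name):
    xact_done.wait()            # "XactLockTableWait": block until commit/abort
    with claim_guard:           # race to re-attempt the insert
        if not winner:
            winner.append(name)

threads = [threading.Thread(target=waiter, args=(f"w{i}",))
           for i in range(4)]
for t in threads:
    t.start()
xact_done.set()                 # "inserter" commits; everyone wakes at once
for t in threads:
    t.join()

# Exactly one waiter wins, but which one is scheduler-dependent.
print("winner:", winner[0])
```

Contrast this with ProcLockWakeup(), which wakes waiters from the head of an ordered queue, which is why the heap tuple lock path doesn't have this problem.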
> Yeah, you're right. As I mentioned to Andres already, when row locking
> happens and there is this kind of conflict, my approach is to retry
> from scratch (go right back to before value lock acquisition) in the
> sort of scenario that generally necessitates EvalPlanQual() looping,
> or to throw a serialization failure where that's appropriate. After an
> unsuccessful attempt at row locking there could well be an interim
> wait for another xact to finish, before retrying (at read committed
> isolation level). This is why I think that value locking/retrying
> should be cheap, and should avoid bloat if at all possible.
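As an aside for readers, the retry-from-scratch loop described above can be caricatured with an in-memory toy. This is a loose sketch, not PostgreSQL internals; the dict, the locks, and every name in it are invented stand-ins for the heap, row locks, and btree value locks:

```python
import threading

# Toy sketch (NOT PostgreSQL code) of the retry-from-scratch shape described
# above: take a "value lock" on the key, try to insert; if a conflicting row
# exists, release the value lock and lock/update the row instead; if the row
# has meanwhile vanished, go right back to before value lock acquisition.

rows = {}                       # key -> value (stands in for the heap)
rows_guard = threading.Lock()   # stands in for row-level locking
value_locks = {}                # key -> lock (stands in for btree value locks)
value_locks_guard = threading.Lock()

def _value_lock(key):
    with value_locks_guard:
        return value_locks.setdefault(key, threading.Lock())

def upsert(key, value):
    while True:                          # retry from scratch on conflict
        with _value_lock(key):           # value lock: no concurrent insert of key
            with rows_guard:
                if key not in rows:
                    rows[key] = value    # plain insert path
                    return "inserted"
        # A conflicting row existed: value lock released, lock the row instead.
        with rows_guard:
            if key in rows:
                rows[key] = value        # row lock + update path
                return "updated"
        # Row vanished between the two steps (e.g. concurrent delete):
        # loop back to before value lock acquisition and try again.

print(upsert("k", 1))   # inserted
print(upsert("k", 2))   # updated
```

The loop is why value locking needs to be cheap: at read committed a session may go around it more than once before it either inserts, updates, or (at higher isolation levels) raises a serialization failure instead of retrying.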
>
> Forgive me if I'm making a leap here, but it seems like what you're
> saying is that the semantics of upsert that one might naturally expect
> are *arguably* fundamentally impossible, because they entail
> potentially locking a row that isn't current to your snapshot,
Precisely.
> and you cannot throw a serialization failure at read committed.
Not sure that's true, but at least it might not be the most desirable behavior.
> I respectfully
> suggest that that exact definition of upsert isn't a useful one,
> because other snapshot isolation/MVCC systems operating within the
> same constraints must have the same issues, and yet they manage to
> implement something that could be called upsert that people seem happy
> with.
Yeah. I wonder how they do that.
> I wouldn't go that far. The number of possible additional primitives
> that are useful isn't that high, unless we decide that LWLocks are
> going to be a fundamentally different thing, which I consider
> unlikely.
I'm not convinced, but we can save that argument for another day.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company