From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: foreign key locks, 2nd attempt
Date: 2012-01-31 13:17:40
Message-ID: CA+Tgmob6FQUPHA_Shgnwj5oFwQc4wUTFkzwJWS5PAPhoLXKqyA@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jan 30, 2012 at 6:48 PM, Noah Misch <noah(at)leadboat(dot)com> wrote:
> On Tue, Jan 24, 2012 at 03:47:16PM -0300, Alvaro Herrera wrote:
>> The biggest item remaining is the point you raised about multixactid
>> wraparound. This is closely related to multixact truncation and the way
>> checkpoints are to be handled. If we think that MultiXactId wraparound
>> is possible, and we need to involve autovacuum to keep it at bay, then I
>
> To prove it possible, we need to prove there exists some sequence of operations
> consuming N xids and M > N multixactids. Have N transactions key-lock N-1
> rows apiece, then have each of them key-lock one of the rows locked by each
> other transaction. This consumes N xids and N(N-1) multixactids. I believe
> you could construct a workload with N! multixact usage, too.
>
> Existence proofs are one thing, real workloads another. My unsubstantiated
> guess is that multixactid use will not overtake xid use in bulk on a workload
> not specifically tailored to do so. So, I think it's enough to notice it,
> refuse to assign a new multixactid, and tell the user to clear the condition
> with a VACUUM FREEZE of all databases. Other opinions would indeed be useful.
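
For concreteness, the N = 3 version of that scenario looks to me
something like this (I'm writing FOR KEY SHARE for the patch's key-lock
mode; table and row numbers are made up):

    -- table t(id int primary key) holding rows 1..6; three sessions
    S1: BEGIN; SELECT * FROM t WHERE id IN (1, 2) FOR KEY SHARE;
    S2: BEGIN; SELECT * FROM t WHERE id IN (3, 4) FOR KEY SHARE;
    S3: BEGIN; SELECT * FROM t WHERE id IN (5, 6) FOR KEY SHARE;
    -- now each session key-locks one row already held by each other session
    S1: SELECT * FROM t WHERE id IN (3, 5) FOR KEY SHARE;
    S2: SELECT * FROM t WHERE id IN (1, 6) FOR KEY SHARE;
    S3: SELECT * FROM t WHERE id IN (2, 4) FOR KEY SHARE;

Each of the six rows ends up with two lockers, so recording them takes
on the order of N(N-1) = 6 multixacts against only 3 xids.
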
I suspect you are right that it is unlikely, but OTOH that sounds like
an extremely painful recovery procedure. We probably don't need to
put a ton of thought into handling this case as efficiently as
possible, but I think we would do well to avoid situations that could
lead to, basically, a full-cluster shutdown. If that happens to one
of my customers I expect to lose the customer.

I have a couple of other concerns about this patch:

1. I think it's probably fair to assume that this is going to be a
huge win in cases where it avoids deadlocks or lock waits. But is
there a worst case where we don't avoid that but still add a lot of
extra multi-xact lookups? What's the worst case we can imagine and
how pathological does the workload have to be to tickle that case?
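
The mechanism I'm imagining there (hypothetical schema, and based on my
possibly-wrong reading that an updating xid can now end up inside a
multixact alongside the lockers) is something like:

    CREATE TABLE parent (id int PRIMARY KEY, hits bigint DEFAULT 0);
    CREATE TABLE child  (parent_id int REFERENCES parent, payload text);

    -- session 1: the FK check key-locks parent row 1 and holds it
    BEGIN;
    INSERT INTO child (parent_id, payload) VALUES (1, 'x');

    -- session 2: a non-key update no longer has to wait, so the updating
    -- xid gets recorded in a multixact together with the locker
    UPDATE parent SET hits = hits + 1 WHERE id = 1;

    -- until that row version is pruned or frozen, visibility checks on it
    -- presumably need a multixact member lookup rather than a plain xid test
    SELECT hits FROM parent WHERE id = 1;

Of course that particular case is one where we dodge a lock wait, so
maybe the lookups are a fair price; the question is whether some
workload keeps paying them without getting the benefit.
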
2. What algorithm did we end up using to fix the set of key columns,
and is there any user configuration that can or needs to happen there?
Do we handle cleanly the case where the set of key columns is changed
by DDL?
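
(By the latter I mean something like the following, assuming the key
columns are derived from the table's unique constraints:

    CREATE TABLE t (a int, b int, c int);
    ALTER TABLE t ADD CONSTRAINT t_a_key UNIQUE (a);       -- a is a key column
    -- ... rows get key-locked while "key" means column a ...
    ALTER TABLE t ADD CONSTRAINT t_bc_key UNIQUE (b, c);   -- now b and c are too
    ALTER TABLE t DROP CONSTRAINT t_a_key;                 -- and a no longer is

Locks taken under the old definition could still be sitting in
multixacts when the definition changes.)
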
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company