From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: ALTER TABLE .. DETACH PARTITION CONCURRENTLY
Date: 2021-04-23 17:12:19
Message-ID: 20210423171219.GA26578@alvherre.pgsql
Lists: pgsql-hackers
On 2021-Apr-23, Amit Langote wrote:
> Back in the 1st session:
>
> end;
> insert into fk select generate_series(1, 10000);
> INSERT 0 10000
> Time: 47400.792 ms (00:47.401)
I guess I was wrong about that ... the example I tried didn't have
thousands of partitions, and I used debug print-outs to show when a new
partdesc was being rebuilt; only six rebuilds were occurring. I'm not
sure why my case behaves so differently from yours, but the timing
clearly shows this is not good.
> I am afraid we may have to fix this in the code after all, because
> there does not seem a good way to explain this away in the
> documentation.
Yeah, looking at this case, I agree that it needs a fix of some kind.
> If I read correctly, you did try an approach of caching the
> PartitionDesc that we currently don't, no?
I think the patch I posted was too simple. I think a real fix requires
us to keep track of exactly in what way the partdesc is outdated, so
that we can compare it to the current situation when deciding whether to
use that partdesc or build a new one. For example, we could keep track
of a list of OIDs of detached partitions (and it simplifies things a lot
if we allow only a single partition in this situation, because then it's
either zero OIDs or one OID); or we can save the Xmin of the
pg_inherits tuple for the detached partition (and we can compare that
Xmin to our current active snapshot to decide whether to use that
partdesc or not).
I'll experiment with this a little more and propose a patch later today.
I don't think it's too much of a problem to state that you need to
finalize one detach before you can do the next one. After all, with
regular detach, you'd have the tables exclusively locked so you can't do
them in parallel anyway. (It also increases the chances that people
will notice and finalize detach operations that would otherwise go
unnoticed.)
--
Álvaro Herrera 39°49'30"S 73°17'W