From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: ladayaroslav(at)yandex(dot)ru
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #15946: "duplicate key" error on ANALYZE of table partitions in transaction
Date: 2019-08-10 14:23:16
Message-ID: 618.1565446996@sss.pgh.pa.us
Lists: pgsql-bugs
PG Bug reporting form <noreply(at)postgresql(dot)org> writes:
> Running this:
> ...
> Throws this error:
> ERROR: duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
> DETAIL: Key (starelid, staattnum, stainherit)=(61056, 1, f) already exists.
Hm, you don't need all the fancy partitioning stuff:
regression=# create table t as select generate_series(1,10) x;
SELECT 10
regression=# begin;
BEGIN
regression=# analyze t, t;
ERROR: duplicate key value violates unique constraint "pg_statistic_relid_att_inh_index"
DETAIL: Key (starelid, staattnum, stainherit)=(35836, 1, f) already exists.
It appears to work fine without the BEGIN:
regression=# analyze t, t;
ANALYZE
but then
regression=# begin;
BEGIN
regression=# analyze t, t;
ERROR: tuple already updated by self
I think the conclusion is that if we aren't using per-table
transactions, we'd better do a CommandCounterIncrement between
tables in vacuum()'s loop.
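[For illustration, a minimal sketch of the kind of change being suggested,
assuming vacuum()'s per-relation loop in src/backend/commands/vacuum.c looks
roughly like the simplified version below.  The helper name
process_one_relation() and the exact loop shape are invented for the example;
only CommandCounterIncrement() (declared in access/xact.h) is the real call
being proposed, and this is not an actual patch from this thread.]

/*
 * Illustrative sketch only -- not a committed patch.  It shows where a
 * CommandCounterIncrement() could go in vacuum()'s loop over the relations
 * named in one VACUUM/ANALYZE command, for the case where we are inside an
 * outer transaction block and hence not using a transaction per table.
 */
#include "postgres.h"

#include "access/xact.h"        /* CommandCounterIncrement() */
#include "commands/vacuum.h"    /* VacuumParams */
#include "nodes/parsenodes.h"   /* VacuumRelation */
#include "nodes/pg_list.h"      /* List, ListCell, foreach */

/* Hypothetical stand-in for the real vacuum_rel()/analyze_rel() calls. */
static void process_one_relation(VacuumRelation *vrel, VacuumParams *params);

static void
vacuum_loop_sketch(List *relations, VacuumParams *params, bool use_own_xacts)
{
    ListCell   *lc;

    foreach(lc, relations)
    {
        VacuumRelation *vrel = lfirst_node(VacuumRelation, lc);

        process_one_relation(vrel, params);

        if (use_own_xacts)
        {
            /*
             * Per-table transactions: committing after each table already
             * makes its new pg_statistic rows visible before we move on,
             * which is why "analyze t, t" works outside a BEGIN block.
             */
        }
        else
        {
            /*
             * Shared outer transaction: without this, a later ANALYZE of
             * the same table in the list can't see the pg_statistic rows
             * the earlier one just wrote, so it either inserts a duplicate
             * key or re-updates a tuple already updated by this command.
             */
            CommandCounterIncrement();
        }
    }
}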
regards, tom lane