From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Phill Kenoyer <pgsql(at)c0de(dot)net>, Glen Eustace <geustace(at)godzone(dot)net(dot)nz>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Primary Key Problems
Date: 2001-12-19 00:56:32
Message-ID: 19852.1008723392@sss.pgh.pa.us
Lists: pgsql-admin
Phill and Glen,
We've just tracked down one mechanism that allows duplicate rows to be
spawned --- see http://fts.postgresql.org/db/mw/msg.html?mid=1078374
and following discussion. In the example given by Brian Hirt, VACUUM's
creation of a duplicate row causes a unique-key violation to be
reported, but I think if he'd made the indexes in the other order,
the error would go undetected, leaving duplicate rows in the table.
What I'm currently puzzling over is whether this bug explains your
recent problem reports, or whether there are still more bugs lurking.
The bug is actually fairly general: checking the validity of a tuple
while a VACUUM is in process on the table can lead to the tuple being
marked good when it shouldn't be. But I do not currently see any way
to trigger the bug other than the one Brian reported, namely creating
a functional index with a function that tries to scan its own table.
Neither of you mentioned having done any such thing in your reports,
but I wonder whether you'd ever had such an index on the tables that
you saw problems with.
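For illustration, a self-referencing functional index of the sort Brian described would look roughly like this (the table and function names are invented for the example, not taken from his report):

    -- hypothetical table; any column usable in a functional index will do
    CREATE TABLE mytab (id integer PRIMARY KEY, val text);

    -- the function scans the very table the index is built on, which is
    -- what triggers the problem; it has to be declared immutable (cachable)
    -- to be accepted in an index definition, even though it really isn't
    CREATE FUNCTION self_scan(integer) RETURNS bigint AS
      'SELECT count(*) FROM mytab WHERE id = $1'
      LANGUAGE sql IMMUTABLE;

    CREATE INDEX mytab_self_idx ON mytab (self_scan(id));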
regards, tom lane