From: | Михаил Кечинов <kechinoff(at)gmail(dot)com> |
---|---|
To: | Greg Stark <gsstark(at)mit(dot)edu> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: DELETE ERROR: tuple concurrently updated |
Date: | 2009-12-29 12:24:12 |
Message-ID: | c8f0e27c0912290424r4745c779m3f05df4975203f0d@mail.gmail.com |
Lists: | pgsql-general |
Good. Now I get this error:
docs=# REINDEX TABLE document;
ERROR: could not create unique index "pkey_document"
DETAIL: Table contains duplicated values.
So I have a primary key, yet the table contains several rows with the same "numdoc",
even though "numdoc" is the primary key and must be unique.
I can't drop the pkey because other tables reference it with foreign keys:
docs=# alter table document drop constraint pkey_document;
NOTICE: constraint fk_search_document_vid_numdoc on table
ref_search_document_vid depends on index pkey_document
ERROR: cannot drop constraint pkey_document on table document because other
objects depend on it
HINT: Use DROP ... CASCADE to drop the dependent objects too.
So I can drop the pkey only together with the foreign keys, which is bad. Is
there any method to clean up the pkey without doing that? When I try to delete
one of the duplicate documents, I get the same error: tuple concurrently updated
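A common way to remove duplicate rows in PostgreSQL without dropping the primary key is to delete all but one physical row per key, using the system column ctid to tell otherwise-identical rows apart. A minimal sketch, assuming the duplicates differ only in ctid:

```sql
-- For each duplicated numdoc, keep the row with the lowest ctid
-- and delete the other physical copies.
DELETE FROM document a
USING document b
WHERE a.numdoc = b.numdoc
  AND a.ctid > b.ctid;
```

Note that in this situation the delete itself is what fails with "tuple concurrently updated", so the underlying TOAST/index damage would likely need to be repaired before a query like this can run.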
...
2009/12/29 Greg Stark <gsstark(at)mit(dot)edu>
> On Tue, Dec 29, 2009 at 9:41 AM, Михаил Кечинов <kechinoff(at)gmail(dot)com>
> wrote:
> > When I try to delete one row from database (for example):
> > delete from document where numdoc = 901721617
> > I have this error:
> > ERROR: tuple concurrently updated
> > SQL state: XX000
> > I know that no one else is deleting this row at the same time.
> > What does this error mean?
>
> So this error can only come from a normal SQL-level delete if there is
> associated TOAST data being deleted as well. In that case the TOAST
> data must already be marked deleted -- which shouldn't be
> possible.
>
> It sounds like you have a database where some writes from earlier
> transactions reached the database and others didn't. That can happen
> if you take an inconsistent backup (without using pg_start_backup())
> or if the drive you're using confirmed writes before crashing but
> didn't actually write them.
>
> You might be able to get somewhat further by reindexing the TOAST
> table for this table. To do so do "REINDEX TABLE document". But note
> that you could run into further errors from the missing toast data.
>
> --
> greg
>
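Greg's suggestion can also be targeted more narrowly: the TOAST table behind a given table can be looked up in the system catalogs and reindexed directly. A sketch, assuming the table is named "document" (the pg_toast name in the comment is a placeholder, not a real OID):

```sql
-- Find the TOAST table that stores out-of-line values for "document"
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE relname = 'document';
-- The result can then be reindexed on its own, e.g.:
-- REINDEX TABLE pg_toast.pg_toast_NNNNN;
```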
--
Михаил Кечинов
http://www.mkechinov.ru