From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: maxim(dot)boguk <maxim(dot)boguk(at)gmail(dot)com>
Cc: Pg Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: BUG #6393: cluster sometime fail under heavy concurrent write load
Date: 2012-01-11 15:24:50
Message-ID: 1326295419-sup-3436@alvh.no-ip.org
Lists: pgsql-bugs
Excerpts from maxim.boguk's message of Tue Jan 10 23:00:59 -0300 2012:
> The following bug has been logged on the website:
>
> Bug reference: 6393
> Logged by: Maxim Boguk
> Email address: maxim(dot)boguk(at)gmail(dot)com
> PostgreSQL version: 9.0.6
> Operating system: Linux Ubuntu
> Description:
>
> I have a table under heavy write load on PostgreSQL 9.0.6, and sometimes (not
> always, but with more than a 50% chance) I get the following error during CLUSTER:
>
> db=# cluster public.enqueued_mail;
> ERROR: duplicate key value violates unique constraint
> "pg_toast_119685646_index"
> DETAIL: Key (chunk_id, chunk_seq)=(119685590, 0) already exists.
>
> The chunk_id is different each time.
>
> No uncommon datatypes exist in the table.
>
> I am currently working on a reproducible test case (it seems to require 2-3
> open write transactions on the table).
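A reproduction attempt along the reported lines might look like the sketch below. The table and column definitions are hypothetical (the original schema was not posted); the only requirement assumed is a TOAST-able column wide enough that CLUSTER has to rewrite the toast table:

```sql
-- Hypothetical setup: wide text values force out-of-line TOAST storage,
-- so CLUSTER must rebuild the toast table and its unique index.
CREATE TABLE enqueued_mail (id serial PRIMARY KEY, body text);
INSERT INTO enqueued_mail (body)
SELECT repeat(md5(g::text), 500)          -- ~16 kB per row, gets TOASTed
FROM generate_series(1, 10000) g;

-- Sessions 1 and 2 (separate psql connections): open write transactions
-- that keep updating wide rows, and leave them uncommitted.
BEGIN;
UPDATE enqueued_mail
SET body = repeat(md5(random()::text), 500)
WHERE id % 100 = 0;
-- (transaction intentionally left open)

-- Session 3: attempt the cluster while the writers above are in flight.
CLUSTER enqueued_mail USING enqueued_mail_pkey;
```

Note that, as pointed out in the reply below, session 3 should simply block waiting for the lock rather than run concurrently with the writers, which is why a more detailed description of the workload was requested.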
I don't see how this can happen at all, given that CLUSTER grabs an
exclusive lock on the table in question. A better example illustrating
what you're really doing would be useful.
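For reference, the exclusive lock in question can be observed from a second session while CLUSTER is running; a minimal sketch (assuming the table name from the report):

```sql
-- Run in a separate session while CLUSTER public.enqueued_mail is active:
SELECT l.locktype, l.mode, l.granted
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
WHERE c.relname = 'enqueued_mail';
-- CLUSTER holds AccessExclusiveLock on the table, which conflicts with
-- every other lock mode, so no writer can hold an open transaction on
-- the table once CLUSTER has acquired its lock.
```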
--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support