From: Christoph Berg <christoph(dot)berg(at)credativ(dot)de>
To: Harry Ambrose <harry(dot)ambrose(at)gmail(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, adsj(at)novozymes(dot)com, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: ERROR: unexpected chunk number 0 (expected 1) for toast value 76753264 in pg_toast_10920100
Date: 2017-08-30 11:17:30
Message-ID: 20170830111730.ma2wyxwqrvssbdbq@msg.df7cb.de
Lists: pgsql-general
Re: Harry Ambrose 2017-08-25 <CAK4Knu86q98fWNzYntsZk1uGca6F129wLShmrkzyb2EZkSMDWQ(at)mail(dot)gmail(dot)com>
> Hi All,
>
> Sorry to open this can of worms again. However, we are still struggling
> with this issue across quite a large amount of our estate.
Hi,

we've just seen exactly this error on a customer database running
9.5.3 (postgresql95.x86_64 9.5.3-2PGDG.rhel6). Luckily just one tuple
was affected.
Symptoms were:
# select text from k... where id = 719764749;
ERROR: unexpected chunk number 0 (expected 1) for toast value 3347468184 in pg_toast_922511637
The toast table itself was perfectly ok, with a single chunk:
# select * from pg_toast.pg_toast_922511637 where chunk_id = 3347468184;
chunk_id | 3347468184
chunk_seq | 0
chunk_data | ...valid text string... (with bytea_output = 'escape')
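As a side note, a generic way to check whether a toast value's chunk sequence is complete is to compare the chunk count against the highest chunk_seq. This is only a sketch using the table and chunk_id from this incident; substitute your own toast table OID and value:

```sql
-- Sketch: flag toast values whose chunks are missing or duplicated.
-- For a healthy value, chunk_seq runs 0..n-1, so count(*) = max(chunk_seq) + 1.
SELECT chunk_id,
       count(*)           AS chunks_present,
       max(chunk_seq) + 1 AS chunks_expected
FROM   pg_toast.pg_toast_922511637
WHERE  chunk_id = 3347468184
GROUP  BY chunk_id
HAVING count(*) <> max(chunk_seq) + 1;  -- any row returned indicates a gap or duplicate
```

In this case the query would return no rows, consistent with the toast table itself being intact.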
Updating or deleting the field/row didn't work:
# update k... set text = '...same text as above...' where id = 719764749;
ERROR: XX000: tuple concurrently updated
LOCATION: simple_heap_delete, heapam.c:3171
# delete from k... where id = 719764749;
ERROR: XX000: tuple concurrently updated
LOCATION: simple_heap_delete, heapam.c:3171
The problem persisted over the last two weeks (eventually noticed by
pg_dump starting to fail between August 15th and 18th). The server was
started on July 26th.
Besides the ERRORing statements above, I didn't do anything to actively
resolve it; suddenly, SELECTing the original row just worked again.
According to pg_stat_user_tables, autovacuum didn't kick in. I can't
say whether any backends had been open for the full two weeks; at the
time the problem resolved itself, the oldest backend was from August
27th.
If xmin/xmax/multixact... data from this server is interesting, I can
extract it on request.
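For anyone collecting the same data, a sketch of the system columns one might capture for the affected row and its toast chunks (`k...` stands for the table name truncated in the report above; replace it with the real name):

```sql
-- Sketch: visibility information for the base row...
SELECT ctid, xmin, xmax, id
FROM   k...              -- truncated table name from the report
WHERE  id = 719764749;

-- ...and for its toast chunks.
SELECT ctid, xmin, xmax, chunk_id, chunk_seq
FROM   pg_toast.pg_toast_922511637
WHERE  chunk_id = 3347468184;
```

Comparing xmin/xmax across the base row and its chunks can hint at whether the error stemmed from a visibility/snapshot issue rather than actual on-disk corruption.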
Christoph