| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Willy-Bas Loos <willybas(at)gmail(dot)com> |
| Cc: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: cache lookup failed for index |
| Date: | 2016-06-28 18:46:05 |
| Message-ID: | 25865.1467139565@sss.pgh.pa.us |
| Lists: | pgsql-general |
Willy-Bas Loos <willybas(at)gmail(dot)com> writes:
> [ pg_dump sometimes fails with ]
> pg_dump: [archiver (db)] query failed: ERROR: cache lookup failed for
> index 231808363
This wouldn't be too surprising if you're constantly creating and dropping
indexes. There's a small window between where pg_dump starts its
transaction and where it's able to acquire lock on each table; but since
it's working from a transaction-start-time view of the catalogs, it would
still expect the table to have all the indexes it did at the start.
If you've got a lot of DDL going on, maybe the window wouldn't even be
that small: pg_dump's attempt to lock some previous table might've blocked
for a while due to DDL on that one.
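
For illustration only, here is a minimal sketch of that race, not pg_dump's exact queries. The table name t and index name t_some_idx are hypothetical; the OID is the one from the error above, and the exact error behavior of pg_get_indexdef() can vary by server version:

    -- Session A: pg_dump's backend (simplified)
    BEGIN ISOLATION LEVEL REPEATABLE READ;   -- catalog snapshot is taken here
    SELECT indexrelid FROM pg_index
     WHERE indrelid = 't'::regclass;         -- snapshot still lists the index, say OID 231808363

    -- Session B: application DDL, committing before session A has locked t
    DROP INDEX t_some_idx;

    -- Session A, after finally acquiring its lock on t, uses the OID it collected:
    SELECT pg_get_indexdef(231808363);       -- syscache lookup sees current catalogs,
                                             -- where the index is gone, so this can fail with
                                             -- ERROR: cache lookup failed for index 231808363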
regards, tom lane