From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Postgresql crashing during pg_dump
Date: 2021-12-22 14:43:00
Message-ID: 2330a009-51ee-9552-b1c6-809b49d0efac@gmail.com
Lists: pgsql-general
On 12/22/21 8:40 AM, Tom Lane wrote:
> Paulo Silva <paulojjs(at)gmail(dot)com> writes:
>> I have a huge table with 141456059 records on a PostgreSQL 10.18 database.
>> When I try to do a pg_dump on that table, postgresql gives a segfault,
>> displaying this message:
>> 2021-12-22 14:08:03.437 UTC [15267] LOG: server process (PID 25854) was
>> terminated by signal 11: Segmentation fault
> What this sounds like is corrupt data somewhere in that table.
>
> There's some advice about dealing with such cases here:
>
> https://wiki.postgresql.org/wiki/Corruption
>
> If this is extremely valuable data, you might prefer to hire somebody
> who specializes in data recovery, rather than trying to handle it
> yourself. I'd still follow the wiki page's "first response" advice,
> ie take a physical backup ASAP.
COPY the table in PK ranges to narrow down the offending record?
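
Something along these lines, as a rough sketch (assumes an integer PK column
named "id"; the database, table, and range size below are placeholders):

#!/bin/bash
# COPY the table out in fixed-size PK ranges; the range whose COPY crashes
# the backend contains the corrupt tuple, and that range can then be
# bisected further with smaller steps.
DB=mydb
TABLE=big_table
STEP=1000000
MAX=142000000              # a bit above the table's highest id

for (( lo=0; lo<MAX; lo+=STEP )); do
    hi=$(( lo + STEP ))
    echo "copying id range [$lo, $hi) ..."
    psql -d "$DB" -c "COPY (SELECT * FROM $TABLE WHERE id >= $lo AND id < $hi) TO STDOUT WITH (FORMAT csv)" \
        > "/tmp/${TABLE}_${lo}.csv" \
        || echo "range [$lo, $hi) failed -- bisect this range"
done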
--
Angular momentum makes the world go 'round.