From: Stefan Froehlich <postgresql(at)froehlich(dot)priv(dot)at>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: server process (PID 2964738) was terminated by signal 11: Segmentation fault
Date: 2022-11-07 10:17:16
Message-ID: 20221107101716.GA20949@static.231.150.9.176.clients.your-server.de
Lists: pgsql-general
On Sun, Nov 06, 2022 at 09:48:32AM -0500, Tom Lane wrote:
> Stefan Froehlich <postgresql(at)froehlich(dot)priv(dot)at> writes:
> > | # create extension amcheck;
> > | # select oid, relname from pg_class where relname ='faultytablename_pkey';
> > | [returns oid 537203]
> > | # select bt_index_check(537203, true);
> > | server closed the connection unexpectedly
> Another idea is to try using contrib/pageinspect to examine each
> page of the table. Its output is just gobbledegook to most
> people, but there's a good chance it'd fail visibly on the
> corrupted page(s).
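For the archives: such a page-by-page scan with pageinspect would look
roughly like this (relation name and page numbers are placeholders for
the real ones):

| # create extension pageinspect;
| # select pg_relation_size('faultytablename') / current_setting('block_size')::int;
| [returns the number of pages, say N]
| # select * from heap_page_items(get_raw_page('faultytablename', 0));
| [repeat for pages 1 .. N-1; a corrupted page should error out visibly]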
Fortunately, I was able to identify a window of 100 records (out of
25 million) containing all the errors. After deleting and re-inserting
those records, everything seems to be OK (at least, pg_dump and
"reindex database" run without errors).
I suspect a bad RAM module is the root of the problem. We'll see.
Side question: if it is possible to fix this by simply deleting and
re-creating such records, is it really necessary for the server to
*core* *dump*? A switch that adds additional safety checks (at the
cost of performance) would make troubleshooting not only much faster
but also non-invasive for the other databases on the same server.
Bye,
Stefan