From: TAKATSUKA Haruka <harukat(at)sraoss(dot)co(dot)jp>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: fix "Success" error messages
Date: 2019-11-22 03:23:50
Message-ID: 20191122122350.b5fbcc6189791f471ba594a9@sraoss.co.jp
Lists: pgsql-hackers
On Thu, 21 Nov 2019 10:40:36 +0100
Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> wrote:
> On 2019-11-21 02:42, TAKATSUKA Haruka wrote:
> > FATAL: could not access status of transaction ..
> > DETAIL: Could not read from file (pg_clog/.... or pg_xact/....) ...: Success.
> >
> > This error caused the server to fail to start during recovery.
> > I got a report that it happened repeatedly on a newly generated
> > standby cluster. I advised them to check the low-level server
> > environment.
> >
> > However, in addition to improving the message, should we retry
> > reading the rest of the data when a read returns too few bytes?
> > What about a limited number of retries instead of a full retry loop?
>
> If we thought that would help, there are probably hundreds or more other
> places where we read files that would need to be fixed up in the same
> way. That doesn't seem reasonable.
>
> Also, it is my understanding that short reads can in practice only
> happen if the underlying storage is having a serious problem, so
> retrying wouldn't actually help much.
OK, I understand.
In our case, the standby DB cluster space is on DRBD.
I will report the exact conditions under which it occurs if we find them.
thanks,
Haruka Takatsuka