| From: | Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp> |
|---|---|
| To: | Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
| Cc: | Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: pg_verify_checksums failure with hash indexes |
| Date: | 2018-08-29 11:10:15 |
| Message-ID: | 20180829201015.d9d4fde2748910e86a13c0da@sraoss.co.jp |
| Lists: | pgsql-hackers |
On Wed, 29 Aug 2018 16:01:53 +0530
Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > By the way, I think we could also fix this by clearing the header information of the last
> > page instead of setting a checksum on the unused page, although I am not sure which way
> > is better.
> >
>
> I think that can complicate the WAL logging of this operation, which we can handle
> easily with log_newpage, and it sounds quite hacky.
> The fix I have posted seems better, but I am open to suggestions.
Thank you for your explanation. I understand that this approach could make the
code complicated, so I think the way you posted is better.
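For reference, here is a minimal sketch of how a freshly initialized page can be
WAL-logged with log_newpage_buffer(), the buffer-based variant of log_newpage
mentioned above. This is an illustrative sketch, not the actual patch: the
relation `rel` and block number `blkno` are assumed to be in scope, and the
locking/critical-section framing follows the usual pattern for WAL-logged page
modifications.

```c
/*
 * Sketch only (not the posted fix): initialize a page and emit a
 * full-page image to the WAL via log_newpage_buffer(), so that tools
 * like pg_verify_checksums see a properly checksummed page.
 */
Buffer      buf;
Page        page;

/* Read the block zeroed and already exclusively locked. */
buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno, RBM_ZERO_AND_LOCK, NULL);
page = BufferGetPage(buf);

START_CRIT_SECTION();

/* Initialize as an empty hash page (special space sized by _hash_pageinit). */
_hash_pageinit(page, BufferGetPageSize(buf));

MarkBufferDirty(buf);

if (RelationNeedsWAL(rel))
    log_newpage_buffer(buf, true);  /* true: page has standard layout */

END_CRIT_SECTION();

UnlockReleaseBuffer(buf);
```

The appeal of this approach, as noted above, is that log_newpage_buffer emits a
single full-page-image record, so no new WAL record type or redo logic is needed.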
Regards,
--
Yugo Nagata <nagata(at)sraoss(dot)co(dot)jp>