From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: anisimow(dot)d(at)gmail(dot)com, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: BUG #17462: Invalid memory access in heapam_tuple_lock
Date: 2022-04-11 16:48:51
Message-ID: CAH2-Wzmxcidt0u4_7pfXZ_ExEOKa_Epb-b3vAcM_D899WAO2rA@mail.gmail.com
Lists: pgsql-bugs

On Mon, Apr 11, 2022 at 9:35 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > The other backend's page defragmentation step (from pruning)
> > would render our backend's HeapTuple pointer invalid. Presumably it
> > would just look like an invalid/non-matching xmin in our backend, at
> > the point of control flow that Valgrind complains about
> > (heapam_handler.c:509).
>
> Right, but there are other accesses below, and in any case match
> failure isn't necessarily the right thing.
That's what I meant -- it very likely would have been a match if the
same scenario played out, but without any concurrent pruning. With a
concurrent prune, xmin won't ever be a match (barring a
near-miraculous coincidence). That behavior is definitely wrong, but
also quite subtle (compared to what might happen if we got past the
xmin/xmax check). I think that that explains why it took this long to
notice the bug.
--
Peter Geoghegan