From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: margay fails assertion in stats/dsa/dsm code
Date: 2022-06-28 18:04:32
Message-ID: CA+TgmoYGNgRwN=pVK70GC=CtU8sUXwNC1admiH2H4iUjEC=F3g@mail.gmail.com
Lists: pgsql-hackers
On Thu, Jun 2, 2022 at 8:06 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> I know that on Solaris we use dynamic_shared_memory=posix. The other
> Solaris/Sparc system is wrasse, and it's not doing this. I don't see
> it yet, but figured I'd report this much to the list in case someone
> else does.
My first thought was that the return value of the call to
dsm_impl_op() at the end of dsm_attach() is not checked and that maybe
it was returning NULL, but it seems like whoever wrote
dsm_impl_posix() was pretty careful to ereport(elevel, ...) in every
failure path, and elevel is ERROR here, so I don't see any issue. My
second thought was that maybe control had escaped from dsm_attach()
due to an error before we got to the step where we actually map the
segment, but then the dsm_segment * would never be returned to the
caller. Maybe they could retrieve it later using dsm_find_mapping(),
but that function has no callers in core.
So I'm kind of stumped too, but did you by any chance check whether
there are any DSM-related messages in the logs before the assertion
failure?
--
Robert Haas
EDB: http://www.enterprisedb.com