From: Andres Freund <andres(at)anarazel(dot)de>
To: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
Cc: Rahila Syed <rahilasyed90(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Enhancing Memory Context Statistics Reporting
Date: 2024-10-29 14:51:57
Message-ID: hi23wbergcrdxzvoibpmiu3vpgkn7pop5mn4zqepfoah3h3w4j@hiltn5pw4f3r
Lists: pgsql-hackers
Hi,
On 2024-10-26 16:14:25 +0200, Alvaro Herrera wrote:
> > A fixed-size shared memory block, currently accommodating 30 records,
> > is used to store the statistics.
>
> Hmm, would it make sense to use dynamic shared memory for this?
+1
> The publishing backend could dsm_create one DSM chunk of the exact size that
> it needs, pass the dsm_handle to the consumer, and then have it be destroyed
> once it's been read.
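For reference, that handoff would look roughly like the following untested
sketch (serialize_context_stats/read_context_stats are hypothetical helpers,
and how the handle actually reaches the consumer is left out):

/* publisher: size the segment exactly for the stats it wants to report */
dsm_segment *seg = dsm_create(needed_size, 0);
serialize_context_stats(dsm_segment_address(seg));	/* hypothetical */
dsm_handle	handle = dsm_segment_handle(seg);
/* ... hand "handle" to the consumer somehow ... */

/* consumer: attach, read, detach */
dsm_segment *seg2 = dsm_attach(handle);
read_context_stats(dsm_segment_address(seg2));		/* hypothetical */
dsm_detach(seg2);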
I'd probably just make it a dshash table or such, keyed by the pid, pointing
to a dsa allocation with the stats.
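Something like this untested sketch (the entry layout, the tranche id, and the
stats_table/stats_dsa variables are illustrative; dshash_parameters is shown
as on current master, where it includes a copy_function):

typedef struct MemCtxStatsEntry
{
	int			pid;			/* hash key */
	int			nstats;
	dsa_pointer stats;			/* dsa_allocate'd array of per-context stats */
} MemCtxStatsEntry;

static const dshash_parameters memctx_stats_params = {
	sizeof(int),
	sizeof(MemCtxStatsEntry),
	dshash_memcmp,
	dshash_memhash,
	dshash_memcpy,
	LWTRANCHE_MEMCTX_STATS		/* illustrative tranche */
};

/* publisher: replace any previous report for our pid */
bool		found;
MemCtxStatsEntry *entry = dshash_find_or_insert(stats_table, &MyProcPid, &found);
if (found && DsaPointerIsValid(entry->stats))
	dsa_free(stats_dsa, entry->stats);
entry->stats = dsa_allocate(stats_dsa, nstats * sizeof(MemoryContextCounters));
entry->nstats = nstats;
/* fill in dsa_get_address(stats_dsa, entry->stats) here */
dshash_release_lock(stats_table, entry);

/* consumer: look up by pid, copy out under the partition lock */
entry = dshash_find(stats_table, &target_pid, false);
if (entry != NULL)
{
	/* copy out of dsa_get_address(stats_dsa, entry->stats) ... */
	dshash_release_lock(stats_table, entry);
}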
> That way you don't have to define an arbitrary limit
> of any size. (Maybe you could keep a limit to how much is published in
> shared memory and spill the rest to disk, but I think such a limit should be
> very high[1], so that it's unlikely to take effect in normal cases.)
>
> [1] This is very arbitrary of course, but 1 MB gives enough room for
> some 7000 contexts, which should cover normal cases.
Agreed. I can see a point in a limit for extreme cases, but spilling to disk
doesn't seem particularly useful.
Greetings,
Andres Freund