From: Andres Freund <andres(at)anarazel(dot)de>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: "Imseih (AWS), Sami" <simseih(at)amazon(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: allow changing autovacuum_max_workers without restarting
Date: 2024-06-18 20:43:34
Message-ID: 20240618204334.i5ar2fie4vbb3fhm@awork3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2024-06-18 14:00:00 -0500, Nathan Bossart wrote:
> On Mon, Jun 03, 2024 at 04:24:27PM -0700, Andres Freund wrote:
> > On 2024-06-03 14:28:13 -0500, Nathan Bossart wrote:
> >> On Mon, Jun 03, 2024 at 12:08:52PM -0700, Andres Freund wrote:
> >> > Why do we think that increasing the number of PGPROC slots, heavyweight locks
> >> > etc by 256 isn't going to cause issues? That's not an insubstantial amount of
> >> > memory to dedicate to something that will practically never be used.
> >>
> >> I personally have not observed problems with these kinds of bumps in
> >> resource usage, although I may be biased towards larger systems where it
> >> doesn't matter as much.
> >
> > IME it matters *more* on larger systems. Or at least used to, I haven't
> > experimented with this in quite a while.
> >
> > It's possible that we improved a bunch of things sufficiently for this to not
> > matter anymore.
>
> I'm curious if there is something specific you would look into to verify
> this. IIUC one concern is the lock table not fitting into L3. Is there
> anything else? Any particular workloads you have in mind?
That was the main thing I was thinking of.
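To make that concern concrete, here is a rough, hedged sketch (not from the original mail): per the PostgreSQL documentation, the heavyweight lock table is sized as roughly max_locks_per_transaction times the number of backend slots, so 256 extra slots grow it proportionally.

```python
# Hedged back-of-envelope sketch (illustrative, not an exact accounting):
# the lock table holds about max_locks_per_transaction * (max_connections +
# max_prepared_transactions) entries, so extra backend slots enlarge it.
MAX_LOCKS_PER_TRANSACTION = 64  # PostgreSQL default

def extra_lock_table_entries(extra_backend_slots: int) -> int:
    """Approximate additional lock table entries for extra backend slots."""
    return MAX_LOCKS_PER_TRANSACTION * extra_backend_slots

print(extra_lock_table_entries(256))  # 16384 extra entries for 256 slots
```

A proportionally larger table is more likely to fall out of L3 cache on lock-heavy workloads, which is the concern being discussed.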
But I think I just thought of one more: it's going to *substantially* increase
the resource usage for TAP tests. Right now Cluster.pm has
# conservative settings to ensure we can run multiple postmasters:
print $conf "shared_buffers = 1MB\n";
print $conf "max_connections = 10\n";
for nodes that allow streaming.
Adding 256 extra backend slots increases the shared memory usage from ~5MB to
~18MB.
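As a sanity check on those figures, here is simple arithmetic on the numbers quoted above (a derived illustration, not a new measurement):

```python
# Derive the per-slot shared memory cost implied by the ~5MB -> ~18MB
# figures above; the division is illustrative, not an exact PGPROC size.
base_mb = 5        # approximate shared memory before the bump
bumped_mb = 18     # approximate shared memory with 256 extra backend slots
extra_slots = 256

per_slot_kb = (bumped_mb - base_mb) * 1024 / extra_slots
print(round(per_slot_kb))  # roughly 52 KB of shared memory per extra slot
```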
I just don't see much point in reserving 256 worker slots, to be honest. I
can't think of any practical system where it makes sense to use this many (nor
do I think it will become reasonable in the next 10 years), and it's just
going to waste memory and startup time for everyone.
Nor does it make sense to me to have the max autovac workers be independent of
max_connections.
Greetings,
Andres Freund