From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Tweaking DSM and DSA limits
Date: 2019-06-20 18:20:27
Message-ID: CA+TgmoZBojjidtcfSMmgAqBMX61XTtrq2=SW_gcxuMOEadd5Xw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> It's currently set to 4, but I now think that was too cautious. It
> tries to avoid fragmentation (that is, memory allocated, and in some
> cases committed by the operating system, that we don't turn out to
> need) by ramping up slowly, but it's pretty wasteful of slots.
> Perhaps it should be set to 2?
+1. I think I said at the time that I thought that was too cautious...
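To put numbers on it, here's a quick throwaway program (not dsa.c
code -- the 1MB starting size and the roughly-1GB target are just
assumptions for illustration) that counts how many segments a big
area consumes at each setting:

    #include <stdio.h>

    /*
     * Toy simulation, not actual dsa.c code: count the segments
     * needed to reach roughly 1GB when we create "n" segments at
     * each power-of-two size, starting from a 1MB segment.
     */
    static int
    segments_needed(long goal_mb, int n)
    {
        long    size = 1, total = 0;
        int     nsegs = 0;

        while (total < goal_mb)
        {
            for (int i = 0; i < n && total < goal_mb; i++)
            {
                total += size;
                nsegs++;
            }
            size *= 2;
        }
        return nsegs;
    }

    int
    main(void)
    {
        printf("N=4: %d segments\n", segments_needed(1000, 4));
        printf("N=2: %d segments\n", segments_needed(1000, 2));
        return 0;
    }

With those assumptions it prints 32 segments at the current setting
and 18 at the proposed one, so roughly half the slots consumed for a
large area.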
> Perhaps also the number of slots per backend should be dynamic, so
> that you have the option to increase it from the current hard-coded
> value of 2 if you don't want to increase max_connections but find
> yourself running out of slots (this GUC was a request from Andres but
> the name was made up by me -- if someone has a better suggestion I'm
> all ears).
I am not convinced that we really need to GUC-ify this. How about
just bumping the value up from 2 to, say, 5? Between the preceding
change and this one we ought to buy ourselves more than a 4x
improvement, and if that is not enough then we can ask whether
raising max_connections is a reasonable workaround, and if that's
still not enough then we can revisit this idea, or maybe come up with
something better. The problem I have with a GUC here is that nobody
without a PhD in PostgreSQL-ology will have any clue how to set it,
and while that's good for your employment prospects and mine, it's
not so great for PostgreSQL users generally.
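For the record, the arithmetic I'm doing is based on dsm.c sizing its
control segment as a fixed number of slots plus a per-backend
allowance; the 64 fixed slots and 100 backends below are assumptions
for illustration, not a statement of what any given release does:

    #include <stdio.h>

    /* Back-of-the-envelope DSM slot budget under the two settings. */
    int
    main(void)
    {
        int max_backends = 100; /* roughly max_connections + workers */
        int fixed_slots = 64;   /* assumed fixed allowance */

        printf("at 2 slots/backend: %d\n", fixed_slots + 2 * max_backends);
        printf("at 5 slots/backend: %d\n", fixed_slots + 5 * max_backends);
        return 0;
    }

That's 264 versus 564 slots for the same max_connections setting.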
As Andres observed off-list, it would also be a good idea to allow
things that are going to gobble memory, like hash joins, to have some
input into how much memory gets allocated. Maybe preallocating the
expected size of the hash is too aggressive -- estimates can be wrong,
and the actual size could be much smaller. But maybe we should
allocate at least, say, 1/64th of that amount, and act as if
DSA_NUM_SEGMENTS_AT_EACH_SIZE == 1 until the cumulative memory
allocation is more than 25% of the expected size. So if we think it's
gonna be 1GB, start by allocating 16MB and double the size of each
allocation thereafter until we get to at least 256MB allocated. Then
we'd have 16MB + 32MB + 64MB + 128MB + 256MB + 256MB + 512MB, i.e.
seven segments, instead of the 32 required currently or the 18
required with DSA_NUM_SEGMENTS_AT_EACH_SIZE == 2.
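Roughly what I have in mind, as a standalone sketch -- the names and
the exact placement of the 25% test are mine, not working dsa.c code:

    #include <stdio.h>

    /*
     * Sketch of the proposed policy, sizes in MB.  Start at 1/64th of
     * the expected total, double after every segment until at least
     * 25% of the expected total is allocated, then fall back to
     * creating at_each_size segments at each size.
     */
    int
    main(void)
    {
        long    expected = 1024;        /* expected total, e.g. 1GB */
        long    size = expected / 64;   /* first segment: 16MB */
        long    total = 0;
        int     nsegs = 0;
        int     at_each_size = 2;       /* proposed new constant */

        while (total < expected)
        {
            int     done_at_size = 0;

            do
            {
                total += size;
                nsegs++;
                done_at_size++;
                printf("segment %d: %ldMB (cumulative %ldMB)\n",
                       nsegs, size, total);
            } while (total < expected &&
                     total >= expected / 4 &&  /* past 25%: normal */
                     done_at_size < at_each_size);

            size *= 2;
        }
        printf("%d segments\n", nsegs);
        return 0;
    }

Which prints exactly the segment sequence above and finishes with 7
segments.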
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company