From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Tweaking DSM and DSA limits
Date: 2019-10-20 23:21:52
Message-ID: CA+hUKGKCSh4GARZrJrQZwqs5SYp0xDMRr9Bvb+HQzJKvRgL6ZA@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jun 21, 2019 at 6:52 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
> On 2019-06-20 14:20:27 -0400, Robert Haas wrote:
> > On Tue, Jun 18, 2019 at 9:08 PM Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
> > > Perhaps also the number of slots per backend should be dynamic, so
> > > that you have the option to increase it from the current hard-coded
> > > value of 2 if you don't want to increase max_connections but find
> > > yourself running out of slots (this GUC was a request from Andres but
> > > the name was made up by me -- if someone has a better suggestion I'm
> > > all ears).
> >
> > I am not convinced that we really need to GUC-ify this. How about
> > just bumping the value up from 2 to say 5?
>
> I'm not sure either. Although it's not great if the only way out for a
> user hitting this is to increase max_connections... But we should really
> increase the default.
Ok, hard-to-explain GUC abandoned. Here is a patch that just adjusts
the two constants: DSM's slot array now allows for 5 slots per
connection (up from 2), and DSA now doubles its segment size after
every two segments (rather than after every four).
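For reference, here's roughly what the first patch boils down to (a
sketch only, assuming the usual constant names in dsm.c and dsa.c;
the attached patch is authoritative):

    /* src/backend/storage/ipc/dsm.c: DSM slots reserved per connection */
    #define PG_DYNSHMEM_SLOTS_PER_BACKEND   5   /* was 2 */

    /* src/backend/utils/mmgr/dsa.c: segments created at each size before
     * the backing segment size doubles */
    #define DSA_NUM_SEGMENTS_AT_EACH_SIZE   2   /* was 4 */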
> > As Andres observed off-list, it would also be a good idea to allow
> > things that are going to gobble memory like hash joins to have some
> > input into how much memory gets allocated. Maybe preallocating the
> > expected size of the hash is too aggressive -- estimates can be wrong,
> > and it could be much smaller.
>
> At least for the case of the hashtable itself, we allocate that at the
> predicted size immediately. So a mis-estimation wouldn't change
> anything. For the entries, yea, something like you suggest would make
> sense.
At the moment the 32KB chunks are used as parallel granules for
various work (inserting, repartitioning, rebucketing). I could
certainly allocate a much bigger piece based on estimates, and then
invent another kind of chunk inside that, or keep the existing
layering but find a way to hint to DSA what allocation stream to
expect in the future so it can get bigger underlying chunks ready.
One problem is that it'd result in large, odd-sized memory segments,
whereas the current scheme uses power-of-two sizes and might be more
amenable to a later segment-reuse scheme; or maybe that doesn't really
matter.
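Just to illustrate the growth pattern that the DSA constant controls
(illustrative only, not part of the patch, and assuming DSA's usual
1MB initial segment size), here's a toy program showing how the
backing segment sizes grow when the size doubles after every two
segments:

    #include <stdio.h>

    int
    main(void)
    {
        const int   segs_at_each_size = 2;      /* 4 before the patch */
        size_t      segment_size = 1024 * 1024; /* assumed initial size */
        size_t      total = 0;
        int         i;

        for (i = 0; i < 12; i++)
        {
            total += segment_size;
            printf("segment %2d: %6zu kB, running total %8zu kB\n",
                   i, segment_size / 1024, total / 1024);

            /* double after every N segments, keeping power-of-two sizes */
            if ((i + 1) % segs_at_each_size == 0)
                segment_size *= 2;
        }
        return 0;
    }

Doubling twice as often means a given total allocation is covered by
fewer, larger segments, so it burns through the per-connection DSM
slots more slowly.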
I have a long wish list of improvements I'd like to investigate in
this area, subject for future emails, but while I'm making small
tweaks, here's another small thing: there is no "wait event" reported
while POSIX shm is being allocated (in the kernel sense) by
posix_fallocate() on Linux, unlike the equivalent I/O when file-backed
segments are filled with write() calls. Let's just reuse the same
wait event, so that you can see what's going on in pg_stat_activity.
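Roughly what the second patch does (a sketch modelled on the existing
write() path in dsm_impl.c, with names assumed; see the attachment for
the real change) is to bracket the posix_fallocate() call with the
wait event we already use for zero-filling:

    /* Reuse the existing fill-zero wait event while the kernel
     * allocates the POSIX shm backing, so it's visible in
     * pg_stat_activity. */
    pgstat_report_wait_start(WAIT_EVENT_DSM_FILL_ZERO_WRITE);
    do
    {
        /* posix_fallocate() returns an errno value directly */
        rc = posix_fallocate(fd, 0, size);
    } while (rc == EINTR);
    pgstat_report_wait_end();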
Attachment | Content-Type | Size
---|---|---
0001-Adjust-the-constants-used-to-reserve-DSM-segment-slo.patch | application/octet-stream | 2.3 KB
0002-Report-time-spent-in-posix_fallocate-as-a-wait-event.patch | application/octet-stream | 1.2 KB