From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Jan Wieck <jan(at)wi3ck(dot)info>, Postgres hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Limiting memory allocation
Date: 2022-05-18 15:19:35
Message-ID: 20220518151935.GI9030@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> Stephen Frost <sfrost(at)snowman(dot)net> writes:
> > On Tue, May 17, 2022 at 18:12 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> >> ulimit might be interesting to check into as well. The last time I
> >> looked, it wasn't too helpful for this on Linux, but that was years ago.
>
> > Unfortunately I really don’t think anything here has materially changed in
> > a way which would help us. This would also apply across all of PG’s
> > processes and I would think it’d be nice to differentiate between user
> > backends running away and sucking up a ton of memory vs backend processes
> > that shouldn’t be constrained in this way.
>
> It may well be that they've not fixed its shortcomings, but the claim
> that it couldn't be applied selectively is nonsense. See setrlimit(2),
> which we already use successfully (AFAIK) to set stack space on a
> per-process basis.
Yeah, that thought wasn't quite properly formed, sorry for the confusion.
That it's per-process is actually the issue, unless we were to split
up what we're given evenly across max_connections or such, which might
work but would surely end up wasting an unfortunate amount of memory.
Consider:
shared_buffers = 8G
max_memory = 8G
max_connections = 1000 (for easy math)
With setrlimit(2), we could set RLIMIT_AS at the start of every user
backend to 8G + 8G/1000 (8M) + some fudge for code, stack, etc.,
meaning each process would only be allowed about 8M of memory for
work space. If only perhaps 10 backends are actually running, that
leaves over 7G of memory that PG should be able to use, but can't.
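To make the arithmetic concrete, here's a rough sketch of that static
split. The constants mirror the example configuration above, and
set_backend_as_limit() plus the fudge value are made up for
illustration, not anything that exists in the tree:

#include <sys/resource.h>
#include <stdio.h>

/* Hypothetical values mirroring the example configuration above. */
#define SHARED_BUFFERS_BYTES  (8UL << 30)       /* shared_buffers = 8G */
#define MAX_MEMORY_BYTES      (8UL << 30)       /* max_memory = 8G */
#define MAX_CONNECTIONS       1000
#define FUDGE_BYTES           (256UL << 20)     /* code, stack, etc. */

/*
 * Called early in each user backend: cap the address space at the
 * shared segment plus an even 1/max_connections slice of max_memory.
 */
static int
set_backend_as_limit(void)
{
	struct rlimit rl;

	rl.rlim_cur = SHARED_BUFFERS_BYTES
		+ MAX_MEMORY_BYTES / MAX_CONNECTIONS
		+ FUDGE_BYTES;
	rl.rlim_max = rl.rlim_cur;

	if (setrlimit(RLIMIT_AS, &rl) != 0)
	{
		perror("setrlimit(RLIMIT_AS)");
		return -1;
	}
	return 0;
}

Each backend gets its ~8M slice no matter how few of them exist, so
with only 10 running, over 7G of max_memory can never be handed out.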
Maybe we could track the actual memory usage of already-running
processes, take that into account when starting new ones, and even
allow a process to raise its limit when it hits it, depending on what
else is going on in the system. But I'm really not sure all of that
would end up being much more efficient than directly tracking
allocations ourselves and failing when we hit the limit, and it sure
seems like it'd be a lot more complicated.
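For comparison, the direct-tracking approach might look something like
the following. This is only a sketch under big assumptions: a single
budget counter (which in reality would have to live in shared memory,
with per-process attribution) and hypothetical budgeted_alloc() /
budgeted_free() wrappers, not an actual proposal for mcxt.c:

#include <stdatomic.h>
#include <stdlib.h>

/*
 * Cluster-wide memory budget.  In a real implementation this counter
 * would live in shared memory so every backend sees the same total;
 * a plain static is used here only to keep the sketch self-contained.
 */
static _Atomic size_t total_allocated;
static const size_t max_total_memory = 8UL << 30;  /* "max_memory = 8G" */

/* Reserve against the shared budget before allocating; fail if over. */
void *
budgeted_alloc(size_t size)
{
	size_t	old = atomic_fetch_add(&total_allocated, size);
	void   *ptr;

	if (old + size > max_total_memory)
	{
		/* Over budget: back out the reservation and report failure. */
		atomic_fetch_sub(&total_allocated, size);
		return NULL;
	}

	ptr = malloc(size);
	if (ptr == NULL)
		atomic_fetch_sub(&total_allocated, size);  /* undo on OS failure */
	return ptr;
}

void
budgeted_free(void *ptr, size_t size)
{
	free(ptr);
	atomic_fetch_sub(&total_allocated, size);
}

The appeal is that the check happens exactly where the allocation
happens, with no up-front guessing about how much each backend needs.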
Thanks,
Stephen