Re: Add the ability to limit the amount of memory that can be allocated to backends.

From: James Hunter <james(dot)hunter(dot)pg(at)gmail(dot)com>
To: Jim Nasby <jnasby(at)upgrade(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Jeremy Schneider <schneider(at)ardentperf(dot)com>, "Anton A(dot) Melnikov" <a(dot)melnikov(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, Stephen Frost <sfrost(at)snowman(dot)net>, reid(dot)thompson(at)crunchydata(dot)com, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2025-01-08 23:57:39
Message-ID: CAJVSvF7pJMLwQNFcG2dcY0ZgUb_5pVxp6+bkzy3Bid_4vRDong@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jan 6, 2025 at 1:07 PM Jim Nasby <jnasby(at)upgrade(dot)com> wrote:
>
> I’ve been saying “workload management” for lack of a better term, but my initial suggestion upthread was to simply stop allowing new transactions to start if global work_mem consumption exceeded some threshold. That’s simplistic enough that I wouldn’t really consider it “workload management”. Maybe “deferred execution” would be a better name. The only other thing it’d need is a timeout on how long a new transaction could sit in limbo.

Yes, this seems like a good thing to do, but we need to handle
"work_mem", by itself, first.

The problem is that it's just too easy for a query to blow up work_mem
consumption, almost instantaneously. By the time we notice that we're
low on working memory, pausing new transactions may not be sufficient.
We could already be in the middle of a giant Hash Join, for example,
and the hash table is just going to continue to grow...
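To illustrate the failure mode above, here is a toy Python sketch (all names and numbers are made up, not PostgreSQL code): admission control is only checked when a transaction starts, so once a hash build is underway, deferring new transactions does nothing to stop the running query's allocations from blowing past the global threshold.

```python
# Toy model: a global working-memory threshold that gates *new*
# transactions, and a hash build that allocates per-row without
# ever re-checking the threshold mid-query.

GLOBAL_LIMIT = 100  # arbitrary units of working memory


def run_hash_build(build_side_rows, per_row_cost=1):
    """Simulate one in-progress hash build under admission control."""
    used = 0
    admitting_new_txns = True
    peak = 0
    for _ in range(build_side_rows):
        if used >= GLOBAL_LIMIT:
            # New transactions would now be deferred...
            admitting_new_txns = False
        # ...but the already-running build keeps allocating anyway.
        used += per_row_cost
        peak = max(peak, used)
    return peak, admitting_new_txns


peak, admitting = run_hash_build(build_side_rows=250)
print(peak, admitting)  # peak far exceeds GLOBAL_LIMIT; gate closed too late
```

The point of the sketch: the gate closes (admitting becomes False) well before the build finishes, yet peak usage still ends up 2.5x the threshold, which is why an in-progress query needs its own limit.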

Before we can solve the problem you describe, we need to be able to
limit the work_mem consumption by an in-progress query.

James
