From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: James Hunter <james(dot)hunter(dot)pg(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, "Anton A(dot) Melnikov" <a(dot)melnikov(at)postgrespro(dot)ru>, Andres Freund <andres(at)anarazel(dot)de>, Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, Stephen Frost <sfrost(at)snowman(dot)net>, reid(dot)thompson(at)crunchydata(dot)com, Arne Roland <A(dot)Roland(at)index(dot)de>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, vignesh C <vignesh21(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, "stephen(dot)frost" <stephen(dot)frost(at)crunchydata(dot)com>
Subject: Re: Add the ability to limit the amount of memory that can be allocated to backends.
Date: 2024-12-30 23:12:23
Message-ID: CAApHDvpxVWGaQK=qKvert7_N5S84yjKjDsPr9pM_Y2+httN9-A@mail.gmail.com
Lists: pgsql-hackers
On Sat, 28 Dec 2024 at 08:14, James Hunter <james(dot)hunter(dot)pg(at)gmail(dot)com> wrote:
> 2. We use this backend_work_mem to "adjust" work_mem values used by
> the executor. (I don't care about the optimizer right now -- optimizer
> just does its best to predict what will happen at runtime.)
While I do want to see improvements in this area, I think "don't care
about the optimizer" is going to cause performance issues. The
problem is that the optimizer takes into account what work_mem is set
to when calculating the costs of work_mem-consuming node types. See
costsize.c for usages of "work_mem". If you go and reduce the amount
of memory a given node can consume after the costs have been applied,
then we may end up in a situation where some other plan would have
suited much better.
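
To make that concrete, here's a standalone, simplified sketch of the
kind of decision the planner's costing makes. The function name,
constants and formula below are made up for illustration; they're not
the real cost_sort() code:

    #include <math.h>

    /* rough stand-ins for the planner cost GUCs; not the real formula */
    static const double cpu_operator_cost = 0.0025;
    static const double seq_page_cost = 1.0;
    #define SKETCH_BLCKSZ 8192

    /* hypothetical helper, loosely modelled on cost_sort() in costsize.c */
    static double
    sketch_sort_cost(double tuples, double width, long work_mem_kb)
    {
        double input_bytes = tuples * width;
        double work_mem_bytes = (double) work_mem_kb * 1024.0;
        double cpu_cost = 2.0 * cpu_operator_cost * tuples *
                          log2(tuples < 2.0 ? 2.0 : tuples);

        if (input_bytes <= work_mem_bytes)
            return cpu_cost;    /* costed as an in-memory quicksort */

        /* doesn't fit: charge extra page I/O for an external merge sort */
        return cpu_cost +
               seq_page_cost * 2.0 * ceil(input_bytes / SKETCH_BLCKSZ);
    }

If the executor later gets only a fraction of the work_mem the plan was
costed with, the cheap in-memory branch above no longer reflects what
will actually run, and a plan that avoided the sort might have won.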
There's also the problem of what to do when you chop work_mem down so
far that the remaining size is just a pitiful chunk. For now, work_mem
can't go below 64 kilobytes. You might think it's very unlikely that
it'd be chopped down that far, but with partition-wise join and
partition-wise aggregate, we could end up using a work_mem per
partition, and if you have thousands of partitions then you might end
up reducing work_mem by quite a large amount.
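As a made-up example: with a 256MB per-backend budget, a partition-wise
aggregate over 4096 partitions would leave each partition's hash table
just 64 kilobytes, which is already at the current floor.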
I think the best solution to this is the memory grant stuff I talked
about in [1]. That does require figuring out which nodes will consume
their work_mem concurrently, so the infrastructure you talked about for
doing that would be a good step towards it, but it's probably not the
most difficult part of that idea.
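
For what it's worth, that "figure out the concurrent consumers" step
could start out as something like the following standalone sketch. The
PlanNode type and function below are invented for illustration and are
not PostgreSQL code, and as I said, the counting isn't the hard bit:

    #include <stddef.h>

    /* invented for illustration only; not PostgreSQL code */
    typedef enum { NODE_SCAN, NODE_SORT, NODE_HASHJOIN, NODE_AGG } NodeKind;

    typedef struct PlanNode
    {
        NodeKind            kind;
        struct PlanNode    *lefttree;
        struct PlanNode    *righttree;
    } PlanNode;

    /*
     * Count plan nodes that will hold a work_mem-sized allocation at run
     * time.  Naively counting every such node over-estimates concurrency:
     * a Sort may have released its memory before some later node needs
     * its own allocation, and figuring that out is the difficult part.
     */
    static int
    count_mem_consumers(const PlanNode *node)
    {
        int     this_node;

        if (node == NULL)
            return 0;

        this_node = (node->kind == NODE_SORT ||
                     node->kind == NODE_HASHJOIN ||
                     node->kind == NODE_AGG) ? 1 : 0;

        return this_node +
               count_mem_consumers(node->lefttree) +
               count_mem_consumers(node->righttree);
    }

Dividing the backend's budget by that count gives a per-node grant,
which is exactly where the "chopped down to a pitiful chunk" problem
above comes from once the count gets large.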
I definitely encourage work in this area, but I think what you're
proposing might just be swapping one problem for another.
David