From: | "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com> |
---|---|
To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Stephen Frost <sfrost(at)snowman(dot)net>, Peter Geoghegan <pg(at)bowt(dot)ie>, Jeff Davis <pgsql(at)j-davis(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Bruce Momjian <bruce(at)momjian(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Default setting for enable_hashagg_disk |
Date: | 2020-07-11 00:47:03 |
Message-ID: | CAKFQuwa2gwLa0b+mQv5r5A_Q0XWsA2=1zQ+Z5m4pQprxh-aM4Q@mail.gmail.com |
Lists: pgsql-docs pgsql-hackers
On Fri, Jul 10, 2020 at 5:16 PM David Rowley <dgrowleyml@gmail.com> wrote:
> Stephen mentions in [1] that:
> > Users who are actually hit by this in a negative way
> > have an option- increase work_mem to reflect what was actually happening
> > already.
>
> Peter is not a fan of that idea, which can only be due to the fact
> that it will also increase the maximum memory consumption allowed by
> other nodes in the plan too.
That isn't the only reason for me: the main advantage of hash_mem is that
we get to set a default to some multiple greater than 1.0, so that an
upgrade to v13 has a region where behavior similar to v12 is effectively
maintained. I have no feel for whether that should be 2.0, 4.0, or
something else, but 2.0 seemed small and I chose to use a power of 2.
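For illustration, here is a sketch of how I picture the knob working. The
GUC name and exact semantics are assumptions on my part, since the
proposal is still being discussed:

    -- assumes hash_mem is a multiplier applied on top of work_mem
    SET work_mem = '64MB';   -- per-node budget for sorts and other consumers
    SET hash_mem = 2.0;      -- hash nodes may use 2.0 * work_mem = 128MB
    -- hash aggregates would spill to disk only past 128MB, preserving
    -- v12-like in-memory behavior for modest estimate overshoots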
> My concern is that if we do hash_mem and
> have that control the memory allowances for Hash Joins and Hash Aggs,
> then that solution is just as good as Stephen's idea when the plan
> only contains Hash Joins and Hash Aggs.
>
> As much as I do want to see us get something to allow users some
> reasonable way to get the same performance as they're used to, I'm
> concerned that giving users something that works for many of the use
> cases is not really going to be as good as giving them something that
> works in all their use cases. A user who has a partitioned table
> with a good number of partitions and partition-wise joins enabled
> might not like it if their Hash Join plan suddenly consumes hash_mem *
> nPartitions when they've set hash_mem to 10x of work_mem due to some
> other plan that requires that to maintain PG12's performance in PG13.
>
I don't know enough about the hash join dynamics to comment there, but if
an admin goes in and changes the system default to 10x in lieu of a
targeted fix for a query that actually needs work_mem increased to 10
times its current value to work properly, I'd say that would be a poor
decision. Absent hash_mem, they wouldn't update work_mem on their system
to 10x its current value in order to upgrade to v13; they'd set work_mem
for that query specifically. The same should happen here.
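Concretely, the targeted fix is the usual one (standard syntax; the size
here is made up for illustration):

    BEGIN;
    SET LOCAL work_mem = '640MB';  -- raise the budget for this transaction only
    SELECT ...;                    -- the one query that needs 10x the memory
    COMMIT;                        -- work_mem reverts automatically afterwards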
Frankly, if admins are on top of their game, measuring and monitoring
query performance and memory consumption, they would be able to operate in
our "do nothing" mode by setting the default for hash_mem to 1.0 and just
doling out memory via work_mem as they always have. That said, setting
hash_mem to 10x for that single query would reduce their risk of OOM (none
of the work_mem-consulting nodes would be increased), so having the GUC
would be a net win should they avail themselves of it.
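Put in config terms (again assuming the multiplier semantics under
discussion, with made-up values):

    ALTER SYSTEM SET hash_mem = 1.0;  -- cluster-wide "do nothing" mode:
                                      -- hash nodes get exactly work_mem
    SELECT pg_reload_conf();          -- pick up the new default

    -- then, for the single problematic query:
    BEGIN;
    SET LOCAL hash_mem = 10.0;  -- only hash nodes get 10 * work_mem; the
                                -- other work_mem-consulting nodes are unchanged
    SELECT ...;
    COMMIT;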
The multiplier seems strictly better than relying on work_mem alone, i.e.,
doing nothing; the only detracting factor is one more GUC. Even if one
wants to argue that the solution is ugly or imperfect, the current state
seems worse, and a more perfect option doesn't seem worth waiting for. The
multiplier won't make every single upgrade a non-event, but it provides a
more than sufficient amount of control and, in the worst case, can be
effectively ignored by setting it to 1.0.
Is there some reason to think that having this multiplier with a
conservative default of 2.0 would cause an actual problem, and would that
scenario likely have caused an OOM anyway in v12? Given that "work_mem can
be used many many times in a single query", I'm having trouble imagining
such a problem.
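To put rough numbers on it (all made up, purely illustrative):

    work_mem = 64MB, plan with 8 work_mem-consulting nodes, 2 of them hash:
      v12 worst case:           8 * 64MB             = 512MB
      v13 with hash_mem = 2.0:  6 * 64MB + 2 * 128MB = 640MB
      doubling work_mem itself: 8 * 128MB            = 1024MB

The multiplier's worst case grows by far less than raising work_mem across
the board would allow.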
David J.