Re: Worse performance with higher work_mem?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Israel Brewster <ijbrewster(at)alaska(dot)edu>
Cc: "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Worse performance with higher work_mem?
Date: 2020-01-14 00:19:37
Message-ID: 5228.1578961177@sss.pgh.pa.us
Lists: pgsql-general

Israel Brewster <ijbrewster(at)alaska(dot)edu> writes:
> In looking at the explain analyze output, I noticed that it had an “external merge Disk” sort going on, accounting for about 1 second of the runtime (explain analyze output here: https://explain.depesz.com/s/jx0q). Since the machine has plenty of RAM available, I went ahead and increased the work_mem parameter. Whereupon the query plan got much simpler, and performance of said query completely tanked, increasing to about 15.5 seconds runtime (https://explain.depesz.com/s/Kl0S), most of which was in a HashAggregate.
> How can I fix this? Thanks.

Well, the brute-force way not to get that plan is "set enable_hashagg =
false". But it'd likely be a better idea to try to improve the planner's
rowcount estimates. The problem here seems to be lack of stats for
either "time_bucket('1 week', read_time)" or "read_time::date".
In the case of the latter, do you really need a coercion to date?
If it's a timestamp column, I'd think not. As for the former,
if the table doesn't get a lot of updates then creating an expression
index on that expression might be useful.
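For instance, a minimal sketch (the table name "readings" is a
placeholder, since the actual table isn't shown in this thread):

    -- time_bucket() must be IMMUTABLE to be usable in an index
    -- (TimescaleDB's is). Once the index exists, ANALYZE collects
    -- statistics on the indexed expression, which is what gives the
    -- planner a usable rowcount estimate for the GROUP BY.
    CREATE INDEX readings_week_bucket_idx
        ON readings (time_bucket('1 week', read_time));
    ANALYZE readings;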

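And if you do fall back on the brute-force route, you can scope it to
a single query rather than changing the setting cluster-wide;
something like:

    BEGIN;
    SET LOCAL enable_hashagg = off;  -- reverts at COMMIT/ROLLBACK
    -- ... run the slow query here ...
    COMMIT;
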
regards, tom lane
