From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: Peter Hussey <peter(at)labkey(dot)com>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Questions on query planner, join types, and work_mem
Date: 2010-07-28 04:39:27
Message-ID: 2239.1280291967@sss.pgh.pa.us
Lists: pgsql-performance
Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
>> Well, the issue you're hitting is that the executor is dividing the
>> query into batches to keep the size of the in-memory hash table below
>> work_mem. The planner should expect that and estimate the cost of
>> the hash technique appropriately, but seemingly it's failing to do so.
> Hmm, I wasn't aware that hash joins worked this way wrt work_mem. Is
> this visible in the explain output?
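(Illustrative aside, not part of the original exchange: a minimal sketch of the batching described above, using hypothetical tables. Lowering work_mem below the size of the inner relation's hash table forces the executor to split the hash into multiple batches spilled to temp files.)

    -- hypothetical tables; any hash join whose inner side outgrows work_mem will do
    SET work_mem = '1MB';
    EXPLAIN ANALYZE
    SELECT *
    FROM orders o
    JOIN customers c ON c.id = o.customer_id;
    -- with work_mem this small the Hash node is built in several batches;
    -- rerunning with a much larger work_mem should bring it back to a single batch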
As of 9.0, any significant difference between "Hash Batches" and
"Original Hash Batches" would be a cue that the planner blew the
estimate. For Peter's problem, we're just going to have to look
to see if the estimated cost changes in a sane way between the
small-work_mem and large-work_mem cases.
regards, tom lane
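(Editorial illustration, not from the original message: in 9.0's text-format EXPLAIN ANALYZE the batch counts appear on the Hash node roughly as in the fragment below; the plan numbers are invented. "Hash Batches" and "Original Hash Batches" are the corresponding field names in the non-text output formats.)

    ->  Hash  (cost=14425.00..14425.00 rows=500000 width=244) (actual time=1120.503..1120.503 rows=500000 loops=1)
          Buckets: 4096  Batches: 16 (originally 4)  Memory Usage: 4011kB

When the actual batch count is much higher than the "originally" planned one, the planner underestimated the hash table size. For the cost comparison suggested above, run plain EXPLAIN on the same (hypothetical) query under both settings and compare the hash join's estimated total cost:

    SET work_mem = '1MB';
    EXPLAIN SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
    SET work_mem = '512MB';
    EXPLAIN SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
    -- if batching is being costed sanely, the estimated cost should drop
    -- noticeably at the larger work_mem setting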