From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Hussey <peter(at)labkey(dot)com>, pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Questions on query planner, join types, and work_mem
Date: 2010-07-28 04:01:51
Message-ID: 1280289550-sup-6155@alvh.no-ip.org
Lists: pgsql-performance

Excerpts from Tom Lane's message of Tue Jul 27 20:05:02 -0400 2010:
> Peter Hussey <peter(at)labkey(dot)com> writes:
> > 2) How is work_mem used by a query execution?
>
> Well, the issue you're hitting is that the executor is dividing the
> query into batches to keep the size of the in-memory hash table below
> work_mem. The planner should expect that and estimate the cost of
> the hash technique appropriately, but seemingly it's failing to do so.
> Since you didn't provide EXPLAIN ANALYZE output, though, it's hard
> to be sure.
Hmm, I wasn't aware that hash joins worked this way with respect to work_mem. Is
this visible in the EXPLAIN output? If it's something subtle (like an
increased total cost), may I suggest that it'd be a good idea to make it
explicit somehow in the machine-readable outputs?
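
To illustrate what I mean, here is a minimal sketch of how one might look
for the batching; the table and column names are hypothetical, and it
assumes a server new enough to report per-node hash details (Buckets,
Batches, Memory Usage) and to accept the parenthesized EXPLAIN options:

    SET work_mem = '1MB';            -- deliberately small, to encourage batching
    EXPLAIN (ANALYZE, FORMAT JSON)   -- machine-readable output
    SELECT *
    FROM orders o
    JOIN customers c ON c.id = o.customer_id;

If the Hash node reports more than one batch, the executor had to spill to
disk to stay under work_mem; it would be nice to be able to compare that
against what the planner expected.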