Re: hash join hashtable size and work_mem

From: "Timothy J(dot) Kordas" <tkordas(at)greenplum(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: hash join hashtable size and work_mem
Date: 2007-03-14 17:28:12
Message-ID: 45F830AC.9030900@greenplum.com
Lists: pgsql-hackers

Tom Lane wrote:
> If the planner has correctly predicted the number of rows, the table
> loading should be about NTUP_PER_BUCKET in either regime. Are you
> sure you aren't just wishing that NTUP_PER_BUCKET were smaller?

Maybe I do wish NTUP_PER_BUCKET were smaller. But I don't think that's the
whole story.

The planner estimates definitely play a role in my concern here. For a
mis-estimated inner relation, the current calculation can over-subscribe
the hash table even when more work_mem is available (that is, there are too
many hash collisions *and* memory isn't being used to the fullest extent
allowed).

I've been tracking the number of tuples which land in each bucket, and I'd
like to see that number go down as I increase work_mem.

For the same data, I would expect a hash join run with a work_mem of 256MB
to be faster than one run with 32MB, even if the inner relation is only 30MB.

The implementation I've been experimenting with actually takes the average
of the current calculation (ntuples/10) and the spill version
(work_mem/(tupsize * 10)).
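
For concreteness, here's a minimal sketch of that averaging rule. The names
(choose_nbuckets, hash_table_bytes) are mine for illustration, not the
actual ExecChooseHashTableSize() code; NTUP_PER_BUCKET is 10 as in the
current source:

    #define NTUP_PER_BUCKET 10

    /*
     * Sketch only: average the estimate-driven bucket count with the
     * memory-driven one, so extra work_mem buys more buckets even when
     * the planner's row estimate is low.
     */
    static int
    choose_nbuckets(double ntuples, int tupsize, long hash_table_bytes)
    {
        /* current in-memory rule: size buckets for the estimated rows */
        double est_buckets = ntuples / NTUP_PER_BUCKET;

        /* spill rule: size buckets for whatever fits in work_mem */
        double mem_buckets = (double) hash_table_bytes /
                             ((double) tupsize * NTUP_PER_BUCKET);

        return (int) ((est_buckets + mem_buckets) / 2.0);
    }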

-Tim
