From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Timothy J. Kordas" <tkordas(at)greenplum(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: hash join hashtable size and work_mem
Date: 2007-03-14 17:48:44
Message-ID: 26289.1173894524@sss.pgh.pa.us
Lists: pgsql-hackers
"Timothy J. Kordas" <tkordas(at)greenplum(dot)com> writes:
> I would expect for the same data a hash-join with a work_mem of 256MB to run
> faster than one run with 32MB; even if the inner relation is only 30MB.
Once you get to the point where each tuple is in a different bucket, it
is clearly impossible for further increases in hashtable size to improve
matters. All you can do is waste RAM and cache lines.
Now if we set NTUP_PER_BUCKET = 1, we would not be exactly at that critical
point, because of uneven bucket loading and other factors ... but I
question whether there's enough incremental improvement available to
justify making the hashtable much larger than that.
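
To make that concrete, here is a minimal C sketch of the bucket-count
arithmetic (a hypothetical illustration only; it is not PostgreSQL's
actual ExecChooseHashTableSize, and the helper names are invented). It
shows why buckets beyond roughly ntuples / NTUP_PER_BUCKET buy nothing:
once the expected chain length reaches about one, extra buckets only
consume RAM and cache lines.

    /* hypothetical sketch, not PostgreSQL source */
    #include <stdio.h>

    #define NTUP_PER_BUCKET 1   /* one tuple per bucket, as proposed above */

    /* round up to the next power of 2, as hash tables commonly require */
    static long next_pow2(long n)
    {
        long p = 1;
        while (p < n)
            p <<= 1;
        return p;
    }

    /* pick a bucket count targeting NTUP_PER_BUCKET tuples per bucket */
    static long choose_nbuckets(long ntuples)
    {
        long nbuckets = (ntuples + NTUP_PER_BUCKET - 1) / NTUP_PER_BUCKET;
        return next_pow2(nbuckets);
    }

    int main(void)
    {
        long ntuples = 1000000;   /* size of the inner relation, say */
        long nbuckets = choose_nbuckets(ntuples);
        printf("tuples=%ld nbuckets=%ld expected chain length=%.2f\n",
               ntuples, nbuckets, (double) ntuples / nbuckets);
        return 0;
    }

With NTUP_PER_BUCKET = 1 the expected chain length printed above is
already at or below one, so growing nbuckets further (e.g. to soak up a
larger work_mem) cannot reduce the number of tuples examined per probe;
it only spreads the same entries across more cache lines.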
regards, tom lane