From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Bruno Wolff III <bruno(at)wolff(dot)to>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: Solving hash table overrun problems
Date: 2005-03-04 16:12:43
Message-ID: 27970.1109952763@sss.pgh.pa.us
Lists: pgsql-hackers

Bruno Wolff III <bruno(at)wolff(dot)to> writes:
> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Bruno Wolff III <bruno(at)wolff(dot)to> writes:
>>> If K is way low this could be very slow.
>>
>> How so? You're not concerned about the time to do the division itself,
>> are you?
> No, rather having lots of entries in the same hash buckets.
That won't happen because we are going to set K with an eye to the
maximum number of rows we intend to hold in memory (given work_mem).
With the addition of the dynamic batch splitting logic, that number
of rows is actually reasonably accurate.
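
To make that concrete, here is roughly the sizing arithmetic involved
(a sketch only; choose_nbuckets and the ten-tuples-per-bucket target
are made up for illustration, not actual executor code):

    #include <limits.h>

    /*
     * Pick the bucket count K from work_mem (in kB, as in the backend)
     * and an estimated tuple width, so that each bucket averages only
     * a few tuples when the table is at its memory limit.
     */
    int
    choose_nbuckets(long work_mem_kb, int est_tuple_width)
    {
        long    max_tuples = (work_mem_kb * 1024L) / est_tuple_width;
        long    nbuckets = max_tuples / 10 + 1;   /* ~10 tuples/bucket */
        long    k = 1;

        /* round up to a power of 2 so bucket selection can use a mask */
        while (k < nbuckets && k < INT_MAX / 2)
            k <<= 1;
        return (int) k;
    }

With K chosen that way, spreading the hash codes across K buckets keeps
the expected chain length short even when the table is at its limit.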
The only way this scheme can really lose badly is if there are large
numbers of tuples with exactly the same hash code, so that no matter how
much we increase N we can't split up the bucketload. This is a risk for
*any* hashing scheme, however. In practice we have to rely on the
planner to not choose hashing when there are only a few distinct values
for the key.
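
To see why, consider the usual way a single hash code is split into a
bucket number and a batch number (a sketch, with both counts assumed to
be powers of 2; this shows the general technique, not the patch itself):

    #include <stdint.h>

    /*
     * Low-order bits select the bucket; the next bits up select the
     * batch.  Two tuples with identical hash codes therefore land in
     * the same bucket *and* the same batch no matter how large nbatch
     * grows, so increasing N cannot split them apart.
     */
    void
    bucket_and_batch(uint32_t hashvalue, int nbuckets, int nbatch,
                     int *bucketno, int *batchno)
    {
        *bucketno = hashvalue & (nbuckets - 1);
        *batchno = (hashvalue / nbuckets) & (nbatch - 1);
    }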
> I just noticed that it wasn't mentioned that an overflow could occur at this
> step.
It can't, because we aren't loading the outer tuples into the hash
table. We are just considering them one at a time and probing for
matches.
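
In outline, the probe side looks like this (all names here are invented
for illustration; the point is only that outer tuples are consumed one
at a time and then discarded):

    #include <stdint.h>
    #include <stdio.h>

    #define NBUCKETS 8                      /* power of 2, so masking works */

    typedef struct Entry
    {
        uint32_t    key;
        struct Entry *next;
    } Entry;

    static Entry *buckets[NBUCKETS];

    static uint32_t
    hash_u32(uint32_t k)
    {
        return k * 2654435761u;             /* Knuth multiplicative hash */
    }

    /* build phase: only *inner* tuples go into the hash table */
    static void
    insert_inner(Entry *e)
    {
        uint32_t    b = hash_u32(e->key) & (NBUCKETS - 1);

        e->next = buckets[b];
        buckets[b] = e;
    }

    /* probe phase: each outer key is checked and then forgotten */
    static void
    probe_outer(uint32_t key)
    {
        uint32_t    b = hash_u32(key) & (NBUCKETS - 1);
        Entry      *e;

        for (e = buckets[b]; e != NULL; e = e->next)
            if (e->key == key)
                printf("match: %u\n", key);
        /* nothing retained: the outer side cannot overrun work_mem */
    }

    int
    main(void)
    {
        static Entry inner[] = {{10, NULL}, {20, NULL}, {30, NULL}};
        int         i;

        for (i = 0; i < 3; i++)
            insert_inner(&inner[i]);
        probe_outer(20);                    /* prints "match: 20" */
        probe_outer(99);                    /* no match, no state kept */
        return 0;
    }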
regards, tom lane