From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: A better way than tweaking NTUP_PER_BUCKET
Date: 2013-06-22 22:48:45
Message-ID: CA+U5nM+aTcGSYc=fcNFUUDiYi7Gp3EGBXHJWkq6xm-mMhQXdrQ@mail.gmail.com
Lists: pgsql-hackers
On 22 June 2013 21:40, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> I'm actually not a huge fan of this as it's certainly not cheap to do. If it
> can be shown to be better than an improved heuristic then perhaps it would
> work but I'm not convinced.
We need two heuristics, it would seem:
* an initial heuristic to overestimate the number of buckets when we
have sufficient memory to do so
* a heuristic to determine whether it is cheaper to rebuild a dense
hash table into a better one.
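As a strawman for the first heuristic: something along these lines, I think. This is not PostgreSQL code, just a minimal sketch; the function and parameter names (choose_nbuckets, mem_budget, bucket_ptr_size) are illustrative, and the 2x padding factor is an assumption for discussion:

```c
#include <stddef.h>

/* Round up to the next power of 2 (hash join buckets are a power of 2). */
static size_t
next_pow2(size_t n)
{
    size_t p = 1;

    while (p < n)
        p <<= 1;
    return p;
}

/* Largest power of 2 that is <= n. */
static size_t
pow2_floor(size_t n)
{
    size_t p = 1;

    while (p * 2 <= n)
        p *= 2;
    return p;
}

/*
 * Sketch of the initial heuristic: overestimate the bucket count, aiming
 * for ~1 tuple per bucket rather than NTUP_PER_BUCKET (historically 10),
 * padding the planner's row estimate by 2x, but never asking for more
 * bucket headers than the memory budget allows.
 */
static size_t
choose_nbuckets(double est_rows, size_t mem_budget, size_t bucket_ptr_size)
{
    size_t wanted = next_pow2((size_t) (est_rows * 2.0) + 1);
    size_t max_fit = pow2_floor(mem_budget / bucket_ptr_size);

    return wanted < max_fit ? wanted : max_fit;
}
```

So with ample memory we get the padded estimate, and with a tight budget the bucket array simply stops growing at whatever fits.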
Although I like Heikki's rebuild approach, we can't do it at every 2x
overstretch. Given that large underestimates exist, we'd end up rehashing
5-12 times, which seems bad. Better to let the hash table build and
then re-hash once, if we can see it will be useful.
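The "re-hash once, if useful" decision could be a simple cost comparison once the build is complete: relinking every tuple has a known cost, and we know how many probe-side comparisons a denser table would save. Again a sketch only, not a proposed patch; the names and the flat per-operation costs are placeholders, not PostgreSQL's actual cost model:

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * After the hash table is fully built, decide once whether a rebuild
 * pays off.  Rebuilding costs roughly one relink per stored tuple;
 * it saves roughly (avg_chain - 1) comparisons per outer-side probe,
 * assuming we rebuild down to ~1 tuple per bucket.
 */
static bool
rebuild_worthwhile(size_t ntuples, size_t nbuckets,
                   double est_outer_rows,
                   double relink_cost, double compare_cost)
{
    double avg_chain = (double) ntuples / (double) nbuckets;

    if (avg_chain <= 1.0)
        return false;           /* already dense enough; nothing to gain */

    {
        double savings = est_outer_rows * (avg_chain - 1.0) * compare_cost;
        double cost = (double) ntuples * relink_cost;

        return savings > cost;
    }
}
```

With a badly underestimated inner side and a large outer side the savings dominate quickly, so the single rebuild would fire exactly in the cases that hurt today.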
OK?
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services