From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Teodor Sigaev <teodor(at)sigaev(dot)ru>, David Rowley <dgrowleyml(at)gmail(dot)com>, Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hash_create API changes (was Re: speedup tidbitmap patch: hash BlockNumber)
Date: 2014-12-20 04:03:55
Message-ID: 5494F52B.7060008@BlueTreble.com
Lists: pgsql-hackers
On 12/19/14, 5:13 PM, Tom Lane wrote:
> Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> writes:
>> On 12/18/14, 5:00 PM, Jim Nasby wrote:
>>> 2201582 20 -- Mostly LOCALLOCK and Shared Buffer
>
>> Started looking into this; perhaps https://code.google.com/p/fast-hash/ would be worth looking at, though it requires uint64.
>
>> It also occurs to me that we're needlessly shoving a lot of 0's into the hash input by using RelFileNode and ForkNumber. RelFileNode includes the tablespace Oid, which is pointless here because relid is unique per-database. We also have very few forks and typically care about very few databases. If we crammed dbid and ForkNum together that gets us down to 12 bytes, which at minimum saves us the trip through the case logic. I suspect it also means we could eliminate one of the mix() calls.
>
> I don't see this working. The lock table in shared memory can surely take
> no such shortcuts. We could make a backend's locallock table omit fields
> that are predictable within the set of objects that backend could ever
> lock, but (1) this doesn't help unless we can reduce the tag size for all
> LockTagTypes, which we probably can't, and (2) having the locallock's tag
> be different from the corresponding shared tag would be a mess too.
> I think dealing with (2) might easily eat all the cycles we could hope to
> save from a smaller hash tag ... and that's not even considering the added
> logical complexity and potential for bugs.
I think we may be thinking different things here...
I'm not suggesting we change BufferTag or BufferLookupEnt; clearly we can't simply throw away any of the fields I was talking about (well, except possibly the tablespace ID; AFAICT that's completely redundant for searching because relid is UNIQUE).
What I am thinking is not using all of those fields in their raw form to calculate the hash value, i.e. something analogous to:
hash_any(SharedBufHash, ((rot(forkNum, 2) | dbNode) ^ relNode) << 32 | blockNum)
Perhaps that actual code wouldn't work, but I don't see why we couldn't do something similar... am I missing something?
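To make that concrete, here's a rough sketch of the kind of packing and mixing I have in mind. It's illustrative only: pack_buftag64/mix64/hash_buftag are names I just made up, the bit layout is a guess, and the mixer constants are arbitrary; none of this is the actual BufferTag or hash_any code.

#include <stdint.h>

/*
 * Illustrative only: fold the interesting parts of a buffer tag into a
 * single 64-bit value before hashing, instead of feeding the full
 * 20-byte struct through hash_any().
 */
static inline uint64_t
pack_buftag64(uint32_t dbNode, uint32_t relNode,
              uint32_t forkNum, uint32_t blockNum)
{
    /* rotate forkNum left 2 bits, fold in dbNode and relNode */
    uint32_t hi = (((forkNum << 2) | (forkNum >> 30)) | dbNode) ^ relNode;

    return ((uint64_t) hi << 32) | blockNum;
}

/* generic xor-shift-multiply finalizer; the exact constants aren't important here */
static inline uint64_t
mix64(uint64_t h)
{
    h ^= h >> 23;
    h *= UINT64_C(0x2127599bf4325c37);
    h ^= h >> 47;
    return h;
}

static inline uint32_t
hash_buftag(uint32_t dbNode, uint32_t relNode,
            uint32_t forkNum, uint32_t blockNum)
{
    uint64_t h = mix64(pack_buftag64(dbNode, relNode, forkNum, blockNum));

    /* the hash table wants 32 bits; fold the halves together */
    return (uint32_t) (h ^ (h >> 32));
}

The point being that one multiply-and-shift mix over a packed 64-bit value ought to be cheaper than running all 20 bytes through the full Jenkins mix/final sequence.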
> Switching to a different hash algorithm could be feasible, perhaps.
> I think we're likely stuck with Jenkins hashing for hashes that go to
> disk, but hashes for dynahash tables don't do that.
Yeah, I plan on testing the performance of fast-hash for HASH_BLOBS just to see how it compares.
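For testing, my rough plan is just to wire an alternative function in through dynahash's HASH_FUNCTION path rather than HASH_BLOBS, something like the sketch below. fasthash64() stands in for whatever we'd import from the fast-hash project (it's not in our tree today), and the table definition is only a placeholder:

#include "postgres.h"
#include "utils/hsearch.h"

/* assumed import from the fast-hash project; not in our tree today */
extern uint64 fasthash64(const void *buf, size_t len, uint64 seed);

/* adapter matching dynahash's HashValueFunc signature */
static uint32
fasthash_blobs(const void *key, Size keysize)
{
    uint64      h = fasthash64(key, keysize, 0);

    /* dynahash wants 32 bits; fold the halves together */
    return (uint32) h ^ (uint32) (h >> 32);
}

typedef struct TestEntry
{
    uint64      key;            /* any fixed-size binary key */
    int         payload;
} TestEntry;

static HTAB *
create_test_table(void)
{
    HASHCTL     ctl;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(uint64);
    ctl.entrysize = sizeof(TestEntry);
    ctl.hash = fasthash_blobs;

    /* HASH_FUNCTION instead of HASH_BLOBS, so our function is used rather than tag_hash */
    return hash_create("fast-hash test", 1024, &ctl,
                       HASH_ELEM | HASH_FUNCTION);
}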
Why would we be stuck with Jenkins hashing for on-disk data? pg_upgrade, or something else?
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com