From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Oleg Bartunov <obartunov(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Teodor Sigaev <teodor(at)sigaev(dot)ru>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development Hackers <pgsql-hackers(at)postgresql(dot)org>, Maciek Sakrejda <maciek(at)heroku(dot)com>
Subject: Re: jsonb and nested hstore
Date: 2014-03-11 01:19:45
Message-ID: CAM3SWZQXkntrDL7x0=+GxFAbY9TXhekcVGKENT63tphX=8HziA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Mar 10, 2014 at 4:19 AM, Alexander Korotkov
<aekorotkov(at)gmail(dot)com> wrote:
> Here it is.
So it looks like what you have here is analogous to the other problems
that I fixed with both GiST and GIN. That isn't surprising, and this
does fix my test case. I'm not terribly happy about the lack of
explanation for the hashing in that loop, though. Why use COMP_CRC32()
at all, for one thing?
Why do this for non-primitive jsonb hashing?
COMP_CRC32(stack->hash_state, PATH_SEPARATOR, 1);
Where PATH_SEPARATOR is:
#define PATH_SEPARATOR ("\0")
Actually, come to think of it, why not just use one hashing function
everywhere? i.e., jsonb_hash(PG_FUNCTION_ARGS)? It's already very
similar. Pretty much every hash operator support function 1 (i.e. a
particular type's hash function) is implemented with hash_any(). Can't
we just do the same here? In any case it isn't obvious why the
requirements for those two things (the hashing mechanism used by the
jsonb_hash_ops GIN opclass, and the hash operator class support
function 1 hash function) cannot be the same thing.
--
Peter Geoghegan