From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: "Jonathan S(dot) Katz" <jkatz(at)postgresql(dot)org>, David Rowley <dgrowleyml(at)gmail(dot)com>, exclusion(at)gmail(dot)com, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18240: Undefined behaviour in cash_mul_flt8() and friends
Date: 2023-12-25 15:58:59
Message-ID: 1851211.1703519939@sss.pgh.pa.us
Lists: pgsql-bugs

Michael Paquier <michael(at)paquier(dot)xyz> writes:
> Looking at that, I can see that Peter has added a few tests to test
> the predictability of plans generated with non-hashable types, and
> that these are based on money. See 6dd8b0080787.
> One possible pick to replace money for these tests is tsvector, that
> cannot be hashed and has an equality operator. Perhaps these had
> better be replaced anyway?
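(A quick sanity check of that claim about tsvector, for what it's worth; the hash AM OID 405 is hard-coded here just as in the query below:)
select count(*) from pg_operator
  where oprname = '=' and oprleft = 'tsvector'::regtype;
-- expect 1: tsvector does have an equality operator
select count(*) from pg_opclass
  where opcmethod = 405 and opcintype = 'tsvector'::regtype;
-- expect 0: but no hash opclass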
Hm. A quick check in pg_opclass shows that these are the types that
currently have btree but not hash opclasses:
# select ob.opcintype::regtype
    from (select * from pg_opclass where opcmethod = 403) ob
    left join (select * from pg_opclass where opcmethod = 405) oh
      using (opcintype)
    where oh.opcintype is null;
opcintype
-------------
bit
bit varying
money
tsvector
tsquery
(5 rows)
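For illustration, the plan-shape property those tests are after looks roughly like this (table name invented, not from the regression tests):
create temp table cash_tab (c money);
explain (costs off) select distinct c from cash_tab;
-- with no hash opclass for money, the expected shape is Unique over
-- a Sort; HashAggregate is never an option for the planner here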
I'm a little nervous about using tsvector or tsquery, as it seems
pretty plausible that somebody would get around to making hash
support for them someday. Perhaps the same argument could be made
about bit or varbit, but I'd bet a lot less on that happening,
as those are backwater-ish types (not even in the standard
anymore, IIRC). So I'd think about using one of those.
Or we could build a for-test-purposes-only datatype, but that
could require a lot of scaffolding work.
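If varbit were picked, the same property ought to hold; a rough sketch only, not the actual test from 6dd8b0080787:
create temp table bv1 (x bit varying);
create temp table bv2 (x bit varying);
explain (costs off)
  select x from bv1 union select x from bv2;
-- varbit has no hash opclass either, so UNION's duplicate removal
-- has to go through a sort-based path rather than hashing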
regards, tom lane