From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Why hash OIDs?
Date: 2018-09-02 20:41:51
Message-ID: CA+TgmoaUyT5CW6v6E6ro=NyFxkb0=Si8ocDsAhxVgtFtcO8ACQ@mail.gmail.com
Lists: pgsql-hackers
On Tue, Aug 28, 2018 at 8:02 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> I think this argument is a red herring TBH. The example Robert shows is
> of *zero* interest for dynahash or catcache, unless it's taking only the
> low order 3 bits of the OID for the bucket number. But actually we'll
> increase the table size proportionally to the number of entries, so
> that you can't have say 1000 table entries without at least 10 bits
> being used for the bucket number. That means that you'd only have
> trouble if those 1000 tables all had OIDs exactly 1K (or some multiple
> of that) apart. Such a case sounds quite contrived from here.
Hmm. I was thinking that it was a problem if the number of OIDs
consumed per table was a FACTOR of 1000, not just if it was a MULTIPLE
of 1000. I mean, if it's, say, 4, then with the bucket number taken
from the low-order bits, those OIDs can only ever land in every fourth
bucket, which means three-quarters of your hash table buckets are
unused, which seems poor. But maybe it's not really a big enough
problem in practice for us to care? Dunno.
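
To make that concrete, here is a minimal standalone sketch of the
scenario (the constants are assumptions for illustration: a
1024-bucket table, i.e. Tom's 10 bits, OIDs starting at 16384 and
spaced 4 apart, and bucket selection by low-order bits; this is not
actual dynahash or catcache code):

/*
 * Illustrative sketch, not PostgreSQL code: if the bucket number is
 * taken from the low-order bits of unhashed OIDs, then OIDs spaced a
 * power of two apart can reach only a fraction of the buckets.
 */
#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

#define NBUCKETS 1024			/* 2^10, matching the 10-bit example */

int
main(void)
{
	bool		used[NBUCKETS] = {false};
	int			nused = 0;

	/* 1000 hypothetical table OIDs, spaced 4 apart */
	for (uint32_t oid = 16384; oid < 16384 + 4 * 1000; oid += 4)
	{
		int			bucket = oid & (NBUCKETS - 1);	/* low-order bits only */

		if (!used[bucket])
		{
			used[bucket] = true;
			nused++;
		}
	}

	printf("%d of %d buckets used (%.0f%% empty)\n",
		   nused, NBUCKETS, 100.0 * (NBUCKETS - nused) / NBUCKETS);
	return 0;
}

Running it prints "256 of 1024 buckets used (75% empty)", i.e. the
three-quarters figure above.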
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company