From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Why hash OIDs?
Date: 2018-08-28 14:09:01
Message-ID: CA+TgmoYWYvA6fNH6b-rSRqY5ACxX3y2ZMf_hhkvuOSuiMU882w@mail.gmail.com
Lists: pgsql-hackers
On Mon, Aug 27, 2018 at 10:12 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
> Huh? Oids between, say, 1 and FirstNormalObjectId, are vastly more
> common than the rest. And even after that, individual tables get large
> clusters of sequential values to the global oid counter.
Sure, but if you get a large cluster of sequential values, a straight
mod-N bucket mapping works just fine: consecutive keys land in
consecutive buckets. I think the bigger problem is that you might get
a large cluster of values separated by exactly a power of 2. For
instance, say each table has one serial column and one index:
rhaas=# create table a (x serial primary key);
CREATE TABLE
rhaas=# create table b (x serial primary key);
CREATE TABLE
rhaas=# select 'a'::regclass::oid, 'b'::regclass::oid;
oid | oid
-------+-------
16422 | 16430
(1 row)
If you have a lot of tables like that, bad things are going to happen
to your hash table.
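For what it's worth, the collision pattern is easy to demonstrate with
a quick sketch (Python here for brevity; the 64-bucket table size and
the stride of 8 are made-up illustration values, not anything taken
from the PostgreSQL source):

```python
# Hypothetical example: compare how sequential OIDs and OIDs spaced by
# a power of two spread across a power-of-two-sized bucket array under
# a straight mod-N mapping (no hashing).
N_BUCKETS = 64  # assumed power-of-two table size, for illustration only

# A burst of sequential OIDs, as handed out by the global OID counter:
sequential = [16384 + i for i in range(100)]

# OIDs spaced 8 apart, as when every table consumes a fixed number of
# OIDs (sequence, table, index, ...):
spaced = [16422 + 8 * i for i in range(100)]

seq_buckets = {oid % N_BUCKETS for oid in sequential}
spaced_buckets = {oid % N_BUCKETS for oid in spaced}

print(len(seq_buckets))     # 64 -- sequential keys cover every bucket
print(len(spaced_buckets))  # 8  -- spaced keys hit only N/gcd(8, N) buckets
```

Because the stride (8) shares a factor with the bucket count (64), the
spaced keys can only ever reach 64/8 = 8 of the 64 buckets, so those 8
chains grow 8x longer than they should.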
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company