From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | greg(at)turnstep(dot)com |
Cc: | pgsql-general(at)postgresql(dot)org, blindsey(at)cog(dot)ufl(dot)edu |
Subject: | Re: postgres metadata |
Date: | 2003-11-27 03:31:45 |
Message-ID: | 19737.1069903905@sss.pgh.pa.us |
Lists: | pgsql-general |
greg(at)turnstep(dot)com writes:
> The problem is that the oid column has no "unique" constraint ...
unless you add one, viz:
create unique index mytable_oids on mytable (oid);
which is de rigueur for any table you intend to rely on OID as an
identifier for. The index is needed not only to ensure uniqueness
but as a mechanism for fast access to a particular row by OID.
You should be aware though that once the OID counter wraps around (every
4 billion OIDs) there is a small chance that a newly-created OID will
duplicate a prior entry, resulting in a "duplicate key" failure in a
transaction that really didn't do anything wrong. If you have a moral
aversion to writing retry loops in your client code then this will
disgust you. My own take on it is that there are enough reasons why you
will need retry loops that one more shouldn't bug you.
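The retry loop described above can be sketched as follows. This is a hedged, self-contained illustration, not real PostgreSQL client code: `try_insert` and `DuplicateKeyError` stand in for whatever INSERT your driver issues and whatever unique-violation error it raises, and the OID counter is simulated to force one post-wraparound collision.

```python
# Sketch of a client-side retry loop for "duplicate key" failures.
# try_insert() is a stand-in for a real INSERT through your driver;
# it is simulated here so the example runs on its own.

class DuplicateKeyError(Exception):
    """Stands in for the driver's unique-violation error."""

existing_oids = {1001, 1002}  # pretend these OIDs are already taken

def try_insert(oid):
    if oid in existing_oids:
        raise DuplicateKeyError(oid)
    existing_oids.add(oid)

def insert_with_retry(next_oid, max_retries=5):
    oid = next_oid()
    for attempt in range(max_retries):
        try:
            try_insert(oid)
            return oid
        except DuplicateKeyError:
            oid = next_oid()  # counter advances; collisions are rare
    raise RuntimeError("too many duplicate-OID collisions")

# Simulated OID counter whose first value collides with an existing
# row, mimicking a counter that has wrapped around.
counter = iter([1001, 1003])
print(insert_with_retry(lambda: next(counter)))  # inserts 1003
```

In real client code the retry would re-issue the whole transaction, since the failed INSERT aborts it; the loop structure is the same.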
These comments generally apply to SERIAL and the other alternatives
Greg mentioned, as well. The only differences are how fast the
identifiers get eaten and how far it is to the wraparound point ...
regards, tom lane