From: brad <brad(at)kieser(dot)net>
To: Paul Caskey <paul(at)nmxs(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>, Postgres Users <pgsql-general(at)postgresql(dot)org>
Subject: Re: 4 billion record limit?
Date: 2000-07-26 10:10:05
Message-ID: 397EB8FD.39EDDDD5@kieser.net
Lists: pgsql-general pgsql-novice
The implications of the OIDs not wrapping are immense. We have some extremely
active databases that will easily reach this limit in two or three years. For
some applications, such as e-commerce, dumping and then reinserting the rows is
not an option for large databases, due to the 24-hour nature of their work.
This is a much more complex problem than it at first seems: "tripping up" over
old records with low OIDs still presents a problem, yet if the system is changed
to try to manage a list of available OIDs, it will then hit performance problems.
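Just to make the scale concrete, here is a tiny standalone C illustration (my own toy example, nothing to do with the actual backend code) of how a 32-bit unsigned counter behaves: the whole space is about 4.29 billion values, and adding one to the maximum silently wraps back to zero.

    /* Standalone illustration only -- not PostgreSQL source. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t oid = UINT32_MAX;          /* 4,294,967,295: the top of the OID space */

        printf("OID space size: %u\n", (unsigned) oid);
        oid = oid + 1;                      /* unsigned overflow wraps to 0 */
        printf("After wraparound: %u\n", (unsigned) oid);
        return 0;
    }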
Simply waiting for 64-bit numbers is rather inelegant and also presumes usage
parameters for the database... remember Bill Gates saying that he couldn't
foresee anyone needing more than 640K of RAM? Besides which, PostgreSQL is the
best DB around... there's a high standard to maintain!
Some initial ideas:
a) The system remains working the way it does until a configurable cutoff
point (a percentage of OIDs remaining), at which point the DBA can either switch
on some sort of OID "garbage collection" and take the associated performance
hit, or dump the data and reinsert it to pack the OIDs (see the first sketch
after this list).
GARBAGE COLLECTION:
b) The system could wrap around, keeping an internal pointer of where it is
in the OID chain. It could scan the OIDs sequentially, find the first free
OID, and then store that position as the new starting point for the next
time an OID is needed (see the second sketch after this list).
c) An OID compression utility could be written that doesn't require bringing
the DB down (but would obviously have a performance hit). As running this
utility would be a known impact, provision could be made, or the knock-on
effect budgeted and accounted for, while not bringing the entire DB to a halt.
d) OIDs themselves could form a list. The OID system as it stands now could
be the "default mode", but if the OIDs run out, then a new OID list is
started. This means OID usage would then work along the same lines as Intel's
segmented memory addressing (see the third sketch after this list). Personally
I hate it, but it does work.
e) OIDs could be scrapped in favour of some other system.
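For (a), deciding when the cutoff has been reached could be as simple as the following sketch (purely illustrative; OID_SPACE, oid_cutoff_reached and the example counter value are all made-up names, not backend symbols):

    /* Sketch for idea (a): warn once a configurable percentage of the
     * 32-bit OID space has been consumed. All names are made up. */
    #include <stdio.h>
    #include <stdint.h>

    #define OID_SPACE 4294967295u           /* 2^32 - 1 */

    static int oid_cutoff_reached(uint32_t current_oid, double cutoff_percent)
    {
        double used_percent = 100.0 * (double) current_oid / (double) OID_SPACE;
        return used_percent >= cutoff_percent;
    }

    int main(void)
    {
        uint32_t current = 3900000000u;     /* example value of the OID counter */

        if (oid_cutoff_reached(current, 90.0))
            printf("OID usage past 90%%: start garbage collection or plan a dump/reload\n");
        return 0;
    }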
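For (b), the allocator would look roughly like this (again only a sketch: oid_in_use() is a stand-in for whatever catalogue lookup would really be needed, and next_oid would have to be persisted across restarts):

    /* Sketch for idea (b): wrap around and scan forward for the first
     * free OID, remembering where the scan left off. */
    #include <stdint.h>
    #include <stdbool.h>

    static uint32_t next_oid = 1;           /* would be persisted in reality */

    /* Placeholder: a real version would consult the catalogues/indexes. */
    static bool oid_in_use(uint32_t oid)
    {
        (void) oid;
        return false;
    }

    static uint32_t allocate_oid(void)
    {
        /* Skip 0 (reserved as invalid) and any OID still held by an old row;
         * the unsigned increment wraps back to the start automatically. */
        while (next_oid == 0 || oid_in_use(next_oid))
            next_oid++;
        return next_oid++;
    }

    int main(void)
    {
        uint32_t a = allocate_oid();
        uint32_t b = allocate_oid();
        return (a == b);                    /* 0 (success) when the two OIDs differ */
    }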
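And for (d), the segment:offset idea amounts to widening the identifier with a list number; a trivial sketch (types and names invented for illustration):

    /* Sketch for idea (d): an ordinary 32-bit OID plus the number of the
     * OID list it belongs to, in the spirit of segment:offset addressing. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t list;      /* which OID list (segment) */
        uint32_t oid;       /* ordinary 32-bit OID within that list */
    } ListedOid;

    /* Flatten to one 64-bit value for comparisons and ordering. */
    static uint64_t listed_oid_key(ListedOid id)
    {
        return ((uint64_t) id.list << 32) | id.oid;
    }

    int main(void)
    {
        ListedOid x = { 1, 42 };            /* second list, OID 42 */
        printf("key = %llu\n", (unsigned long long) listed_oid_key(x));
        return 0;
    }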
Well, that's my two pence worth.
Brad
Paul Caskey wrote:
> Tom Lane wrote:
> >
> > Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au> writes:
> > > Paul Caskey wrote:
> > >> 1. This implies a hard limit of 4 billion records on a server, right?
> >
> > > Basically, yes.
> >
> > It's only a hard limit if your application assumes OIDs are unique.
> > If you don't assume that, then I think it's not a big problem.
> >
> > It's possible (though obviously not especially likely) that you might
> > get OID collisions in the system tables after an OID-counter wraparound.
>
> This implies they do wrap around. So they are reused? Chris said no, but
> you're saying yes.
>
> (Maybe they wrap around "by accident", by adding one to MAXINT, which will
> give zero on an unsigned int, I believe. Will the system choke on zero?
> Has anyone tested this wraparound?)
>
> I will not have 4 billion records in one table or even one database. But
> on a large server with many databases, it is conceivable to have 4 billion
> records on one machine. With a lot of insert/delete activity, over a few
> years, it is certainly conceivable to have 4 billion inserts. If the oids
> don't wrap, I have a problem. I can ignore it for a long time, but it
> will loom, like Y2K. :-)
>
> Even if they do wrap, if I have some old records lying around with a low
> OIDs, they will trip me up. Like you said, these are "the outer limits",
> but I'm thinking ahead.
>
> Someone suggested in private that I pg_dump/restore all my data to
> "repack" the oids which start around 1700 on a fresh database. Thanks for
> that idea. Also thanks, Tom, for the sanity check Re: terabytes of data
> with 4 billion records. It's still possible, especially in coming years.
> It would be a big feather in PG's cap to "fully support" 64-bit platforms
> such as IRIX and Solaris (finally) and, coming soon to a theater near you,
> Linux on IA-64.
>
> --
> Paul Caskey paul(at)nmxs(dot)com 505-255-1999
> New Mexico Software 5041 Indian School NE Albuquerque, NM 87110
> --