From: "Ara Anjargolian" <ara(at)jargol(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: big integer design question
Date: 2004-01-26 05:51:07
Message-ID: 000701c3e3d0$65c64060$6401a8c0@charterpipeline.net
Lists: pgsql-general
Hi,
I am developing the DB code for what I hope will become a rather
large/popular web application. The design uses a unified object model,
with all object_ids coming from one master sequence (suffice it to say
it must be this way to meet other design goals).
Being the worrier that I am, I am concerned that we may some day
overflow the integer currently holding the object_id.
So my question is, keeping performance and feasibility in mind: would
it be better to make all the table and cross-type index query changes
needed to move to a big integer from the get-go, or to wait and see
if/when we reach, say, a billion on our ID sequence and worry about it
then?
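For what it's worth, a minimal sketch of the "from the get-go" option might look like this (table and sequence names are illustrative, not from the actual schema):

```sql
-- Hypothetical sketch: one master sequence feeding bigint object_ids.
CREATE SEQUENCE master_object_id_seq;

CREATE TABLE objects (
    object_id bigint PRIMARY KEY
        DEFAULT nextval('master_object_id_seq')
);

-- Every other object table would take its IDs from the same sequence:
CREATE TABLE users (
    object_id bigint PRIMARY KEY
        DEFAULT nextval('master_object_id_seq'),
    name text NOT NULL
);
```

Note that PostgreSQL sequences are already 64-bit internally (nextval returns a bigint), so only the column types and indexes would need to change. One caveat on older releases such as 7.4: the planner would not use an index for a cross-type comparison, so a query like `WHERE object_id = 42` against a bigint column needed an explicit cast (`WHERE object_id = 42::bigint`) or quoted literal to use the index.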
Any and all opinions, insights, and experiences on this topic would
be much appreciated.
Sincerely,
Ara Anjargolian