From: Thomas Lockhart <lockhart(at)alumni(dot)caltech(dot)edu>
To: Paul Caskey <paul(at)nmxs(dot)com>
Cc: Postgres Users <pgsql-general(at)postgresql(dot)org>
Subject: Re: 4 billion record limit?
Date: 2000-07-28 02:14:21
Message-ID: 3980EC7D.D49CF7F0@alumni.caltech.edu
Lists: pgsql-general pgsql-novice
> FWIW, I checked into MySQL, and as far as I can tell, they have nothing
> like this implicit 4 billion transactional "limit". So maybe competitive
> spirit will drive the postgres hackers to fix this problem sooner than
> later. ;-)
We have *never* had a report of anyone actually hitting this 4-billion limit, and theoretical problems usually go into the long-term development plan, not onto the "OHMYGODITSBROKEN" list.
- Thomas
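
For readers wondering where the 4-billion figure comes from: it is the range of a 32-bit counter (2^32 = 4,294,967,296), such as PostgreSQL's transaction IDs and OIDs. The short Python sketch below only illustrates that arithmetic; the one-million-rows-per-day rate is an assumption chosen for illustration, not a figure from this thread.

```python
# Minimal sketch (not from the thread): where the "4 billion" ceiling comes
# from, assuming it refers to a 32-bit counter such as PostgreSQL's
# transaction IDs / OIDs, and how long a hypothetical steady insert rate
# would take to reach it.
LIMIT = 2 ** 32  # 4,294,967,296 distinct values in a 32-bit counter

rows_per_day = 1_000_000  # assumed workload, purely illustrative
days_to_wrap = LIMIT / rows_per_day

print(f"32-bit counter covers {LIMIT:,} values")
print(f"At {rows_per_day:,} rows/day, it takes about {days_to_wrap / 365:.1f} years to exhaust")
```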