| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Jeffery Collins <collins(at)onyx-technologies(dot)com> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: Very large table... |
| Date: | 2000-06-14 23:30:31 |
| Message-ID: | 16230.961025431@sss.pgh.pa.us |
| Lists: | pgsql-general |
Jeffery Collins <collins(at)onyx-technologies(dot)com> writes:
> Does anyone have any experience with very large postgresql tables? By
> very large, I mean a table with ~38 million records, each record will
> have between 80 and 128 bytes (we are not sure of some column sizes yet)
> in ~10 columns with probably 3 btree-indexes. Basically the table will
> hold all of the Postal Service deliverable addresses in the US in a
> somewhat compressed form.
> My concerns are in the area of performance and robustness.
Should be OK as long as you are using a recent release (preferably 7.0).
Our support for tables over 2 gig used to be a little flaky, but it's
been wrung out...
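
[Editorial note appended to the archive: a rough back-of-envelope sizing sketch for the table described in the question. The per-tuple overhead figure below is an assumption typical of PostgreSQL heap tuples of that era, not something stated in the thread; it shows why the old 2-gigabyte table limit Tom mentions is relevant here.]

```python
# Rough size estimate for ~38 million rows of 80-128 bytes of user data.
# ROW_OVERHEAD is an assumed per-tuple header/alignment cost (~40 bytes),
# not a figure from the thread.

ROWS = 38_000_000
ROW_DATA_MIN, ROW_DATA_MAX = 80, 128   # bytes of user data per row (from the question)
ROW_OVERHEAD = 40                      # assumed per-tuple overhead

def gib(nbytes):
    """Convert a byte count to GiB."""
    return nbytes / 2**30

heap_min = ROWS * (ROW_DATA_MIN + ROW_OVERHEAD)
heap_max = ROWS * (ROW_DATA_MAX + ROW_OVERHEAD)

print(f"estimated heap size: {gib(heap_min):.1f} - {gib(heap_max):.1f} GiB")
# Either bound is well past 2 GB, so the table depends on the
# large-table support Tom says has been wrung out in recent releases.
```

Even before counting the three btree indexes, the heap alone lands in the 4-6 GiB range, several times the 2-gig threshold.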
regards, tom lane