From: Francesco Dalla Ca' <f(dot)dallaca(at)cineca(dot)it>
To: pgsql-admin(at)postgresql(dot)org
Subject: postgres theoretical and (best-)practical size limits
Date: 2005-06-30 10:03:47
Message-ID: 42C3C382.5020108@cineca.it
Lists: pgsql-admin
I have some questions about the subject.
In the PostgreSQL FAQ:
http://www.postgresql.org/docs/faqs.FAQ.html#4.4
"...
4.4) What is the maximum size for a row, a table, and a database?
These are the limits:
Maximum size for a database? unlimited (32 TB databases exist)
Maximum size for a table? 32 TB
Maximum size for a row? 1.6TB
Maximum size for a field? 1 GB
Maximum number of rows in a table? unlimited
Maximum number of columns in a table? 250-1600 depending on
column types
Maximum number of indexes on a table? unlimited
..."
1) How is the maximum size of a table computed?
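My guess (please correct me if I am wrong) is that the 32 TB figure simply
comes from addressing the table with 32-bit block numbers of 8 kB each (the
default block size). A quick back-of-the-envelope sketch in Python, where
both figures are my assumptions:

    # Back-of-the-envelope check of the 32 TB table limit, assuming
    # 32-bit block numbers and the default 8 kB page size.
    BLOCK_SIZE = 8 * 1024        # assumed default page size in bytes
    MAX_BLOCKS = 2 ** 32         # assumed 32-bit block numbers

    max_table_bytes = BLOCK_SIZE * MAX_BLOCKS
    print(max_table_bytes / 2 ** 40, "TB")   # prints 32.0 TB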
In the manual:
http://www.postgresql.org/docs/8.0/interactive/sql-createtable.html
"...
A table cannot have more than 1600 columns. (In practice, the effective
limit is lower because of tuple-length constraints.)
..."
2) What's the tuple-length constraint? (and what's the max tuple-length?)
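My (possibly wrong) understanding is that a row normally has to fit on a
single 8 kB page, with oversized variable-length values moved out of line
by TOAST, while fixed-width columns cannot be moved out; that would explain
why the effective column limit is lower than 1600. A rough estimate in
Python, where all the overhead figures are assumptions on my part:

    # Rough estimate of how many fixed-width 8-byte columns could fit
    # in a single 8 kB heap page; all overhead figures are assumptions.
    PAGE_SIZE = 8 * 1024     # assumed default page size
    PAGE_HEADER = 24         # assumed page header size
    LINE_POINTER = 4         # assumed per-row item pointer in the page
    TUPLE_HEADER = 24        # assumed per-row tuple header (with padding)
    COLUMN_WIDTH = 8         # e.g. a bigint or float8 column

    usable = PAGE_SIZE - PAGE_HEADER - LINE_POINTER - TUPLE_HEADER
    print(usable // COLUMN_WIDTH, "eight-byte columns per single-page row")
    # prints roughly 1000, well below the 1600-column hard limit

Is that roughly how the tuple-length constraint works?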
3) These are theoretical limits, but is there any literature (articles,
examples, case studies) that addresses the practical performance
degradation that comes from using large tables with large amounts of data?
(e.g.: maximum suggested table size, maximum rows, maximum tuple size...)
Thanks for the answers and best regards.