From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Curt Sampson <cjs(at)cynic(dot)net>
Cc: jtp <john(at)akadine(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: general design question
Date: 2002-04-20 03:37:35
Message-ID: 18031.1019273855@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
Curt Sampson <cjs(at)cynic(dot)net> writes:
> However, for tables that are already narrow, you may get little
> performance gain, or in some cases performance may even get worse,
> not to mention your data size blowing up bigger. Postgres has a
> quite high per-tuple overhead (31 bytes or more) so splitting small
> tables can actually cause growth and make things slower, if you
> frequently access both tables.
Right. The *minimum* row overhead in Postgres is 36 bytes (32-byte
tuple header plus 4-byte line pointer). Moreover, the actual data space
will be rounded up to the next MAXALIGN boundary, either 4 or 8 bytes
depending on your platform. On an 8-byte-MAXALIGN platform like mine,
a table containing a single int4 column will actually occupy 44 bytes
per row. Ouch. So database designs involving lots of narrow tables
are not to be preferred over designs with a few wide tables.
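If you want to sanity-check this on your own installation, here is a
rough sketch of how to measure bytes-per-row from the system catalogs.
(This assumes the default 8K block size; the figure will come out
somewhat above 44 bytes for a single-int4 table, because it also
amortizes page headers and unused space within each page.)

    CREATE TABLE narrow (i int4);
    INSERT INTO narrow VALUES (1);
    -- repeat the next statement a dozen times or so; each run
    -- doubles the row count, giving a meaningful sample:
    INSERT INTO narrow SELECT i FROM narrow;
    -- VACUUM ANALYZE updates relpages and reltuples in pg_class:
    VACUUM ANALYZE narrow;
    -- 8192 is the default block size (BLCKSZ):
    SELECT (relpages * 8192.0) / reltuples AS approx_bytes_per_row
      FROM pg_class WHERE relname = 'narrow';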
AFAIK, all databases have nontrivial per-row overheads; PG might be
a bit worse than average, but this is a significant issue no matter
which DB you use.
regards, tom lane