From: ptjm(at)interlog(dot)com (Patrick TJ McPhee)
To: pgsql-general(at)postgresql(dot)org
Subject: Re: [OT] "advanced" database design (long)
Date: 2008-02-05 03:59:33
Message-ID: 13qfnp53p5osaeb@corp.supernews.com
Lists: pgsql-general
In article <33c6269f0802032014i3878ec3co4488b4835ef1e3d8(at)mail(dot)gmail(dot)com>,
Alex Turner <armtuk(at)gmail(dot)com> wrote:
%
% I'm not a database expert, but wouldn't
%
% create table attribute (
%   attribute_id int,
%   attribute text
% )
%
% create table value (
%   value_id int,
%   value text
% )
%
% create table attribute_value (
%   entity_id int,
%   attribute_id int,
%   value_id int
% )
%
% give you a lot fewer pages to load than building a table with say 90 columns
% in it that are all null, which would result in better rather than worse
% performance?
Suppose you want one row of data. Say it's one of the rows where the
columns aren't all null. You look up 90 rows in attribute_value, then
90 rows in attribute, then 90 rows in value. You're probably looking at
3-6 pages of index data, and then somewhere between 3 and 270 pages of
table data, all for one logical row.
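To make that concrete, here is roughly the query such a design forces for a
single entity, sketched with the table and column names from the quoted
schema (the entity_id value is made up for illustration):

```sql
-- Reconstructing one logical row under the proposed EAV schema:
-- every non-null attribute costs one row in each of the three tables.
SELECT a.attribute, v.value
  FROM attribute_value av
  JOIN attribute a ON a.attribute_id = av.attribute_id
  JOIN value     v ON v.value_id     = av.value_id
 WHERE av.entity_id = 42;   -- hypothetical entity id
```

With 90 non-null attributes, that fetches 90 rows from each of the three
tables, where the wide-table design fetches a single tuple.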
--
Patrick TJ McPhee
North York Canada
ptjm(at)interlog(dot)com