From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hilary Forbes <hforbes(at)dmr(dot)co(dot)uk>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: Record size
Date: 2003-06-11 13:11:50
Message-ID: 10301.1055337110@sss.pgh.pa.us
Lists: pgsql-sql
Hilary Forbes <hforbes(at)dmr(dot)co(dot)uk> writes:
> Now the question. Does the number of fields that you have in a record,
> and indeed their size, affect performance on a SELECT statement when
> only a subset of fields is being extracted? So suppose that I have
> f1 integer
> f2 varchar(4)
> f3 varchar(20)
> f4 text
> and f4 contains reams of data (e.g. 4k). If I just want f1 and f2,
> will the performance of the SELECT statement be affected by the fact
> that f4 is large in physical size? If so, I would be better off having
> two parallel tables, one with fields f1,f2,f3 and one with f1,f4, as
> most of the time I don't want to read in f4.
Most of the possible benefit applies automatically, because large values
of f4 will be "toasted" (moved out of line). I don't think it's worth
contorting your table structure for. You might care to run some
experiments to verify that theory, though. (But update first;
experiments against 7.0 don't necessarily prove anything about 7.3 ...)
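
A minimal sketch of such an experiment (assuming the table is called
mytable, with the columns from the question; EXPLAIN ANALYZE exists as
of 7.2):

    -- f4 values beyond roughly 2kB are stored out of line in the TOAST
    -- table, so a query that skips f4 never has to fetch that data
    EXPLAIN ANALYZE SELECT f1, f2 FROM mytable;

    -- length(f4) forces the large values to be fetched and detoasted,
    -- which is where the extra cost of the wide column would show up
    EXPLAIN ANALYZE SELECT f1, length(f4) FROM mytable;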
> As a secondary question, presumably it is better to have a permanently compiled view in the database defined as
> CREATE VIEW myview as SELECT f1,f2,f3 from mytable
> rather than issuing the query each time direct to the underlying table?
There's essentially no difference in performance. Views are not
pre-optimized.
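
A quick way to see that for yourself (a sketch, reusing the names from
the question): views are macro-expanded by the rewriter before planning,
so both EXPLAINs below should show the same plan.

    CREATE VIEW myview AS SELECT f1, f2, f3 FROM mytable;

    -- identical plans: the view definition is substituted into the
    -- query at rewrite time, then planned like any ordinary query
    EXPLAIN SELECT f1, f2 FROM myview;
    EXPLAIN SELECT f1, f2 FROM mytable;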
regards, tom lane