From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: John Moore <postgres(at)tinyvital(dot)com>
Cc: Postgresql Admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Performance impact of record sizes
Date: 2002-07-04 20:39:01
Message-ID: 200207042039.g64Kd1S10376@candle.pha.pa.us
Lists: pgsql-admin
John Moore wrote:
>
> >If you don't want a limit, use TEXT. Long values are automatically
> >stored in TOAST tables to avoid performance problems with sequential
> >scans over long row values.
>
>
> Thanks...
>
> I wasn't quite clear enough in my question.... I am focused on OLTP
> performance, and in my case the vast majority of the rows will have only a
> few hundred bytes in that column, while a few (1%?) may be larger.
>
> I assume that internally there is a fixed page size (by which I mean cache
> buffer size or disk read size) for normal records. In my case, 99% of the
> physical rows should be short, and would fit easily in whatever that size
> is. So I *suspect* I want to keep the data in the physical row, rather than
> using TEXT and having it stored separately from the record. The question
> is... are there any unexpected consequences? For example, if I have a whole
> bunch of rows with, say, a 10K varchar field which is only populated with a
> hundred or two bytes each, will it perform just as well as if that field
> was defined as a 200 byte field?
>
> A related question: is it more expensive to use varchar than fixed char
> fields? I assume some additional work in physically unpacking the record.
>
> My past experience is with Informix, and a lot with very old versions where
> views were high cost, and so were varchars. Likewise, you didn't want your
> physical row to exceed the size of a physical page if you could avoid it.
When the row length nears or exceeds the page size (8 kB), the longer
column values are automatically pushed into the associated TOAST table.
This is all done transparently, no matter what length you declare for
the column, and performance stays good: if you have a 100 MB value in a
column and do a sequential scan, that 100 MB is read only when the
column containing it is actually accessed.
Specifically, it is not the declared length of the column that counts,
but the total length of each individual row, that determines whether
_some_ of the column values are moved out to the TOAST table.
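To make that concrete, here is a minimal sketch (the table and column names are hypothetical) showing that the TOAST decision is per-row, not per-column-definition, and how to locate a table's TOAST relation in the system catalogs:

```sql
-- Hypothetical table: the notes column has no declared length limit.
CREATE TABLE orders (
    id     serial PRIMARY KEY,
    status char(1),
    notes  text   -- a few hundred bytes in 99% of rows, occasionally large
);

-- The short value stays inline in the main heap page; only rows whose
-- total length approaches the 8 kB page size have their long values
-- compressed and/or moved out to the TOAST table.
INSERT INTO orders (status, notes) VALUES ('A', 'a few hundred bytes of notes');
INSERT INTO orders (status, notes) VALUES ('B', repeat('x', 100000));

-- The TOAST table associated with "orders" is recorded in pg_class:
SELECT reltoastrelid
FROM pg_class
WHERE relname = 'orders';
```

Because the threshold is evaluated row by row, the mostly-short rows described above are stored entirely inline and a sequential scan over them never touches the TOAST table; only the rare oversized rows pay the extra lookup.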
Does that answer your question?
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026