From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: John Moore <postgres(at)tinyvital(dot)com>
Cc: Postgresql Admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Performance impact of record sizes
Date: 2002-07-04 19:47:28
Message-ID: 200207041947.g64JlSM04801@candle.pha.pa.us
Lists: pgsql-admin
John Moore wrote:
> We need to store text data that is typically just a hundred or so
> bytes but in some cases may extend to a few thousand. Our current field
> is a varchar of 1024, which is not large enough. The key data in the
> same record is fixed-size and much smaller.
>
> Our application is primarily transaction oriented, which means that records
> will normally be fetched via random access, not sequential scans.
>
> The question is: what size thresholds exist? I assume that there is a
> "page" size over which the record will be split into more than one. What is
> that size, and does the spill cost any more or less than if I had split the
> record into two or more individual records to hold the same data?
>
> Obviously, the easiest thing for me to do is just set the varchar to
> something big (say, 10K), but I don't want to do this without understanding
> the OLTP performance impact.
>
If you don't want a limit, use TEXT. Long values are automatically
stored in TOAST tables to avoid performance problems with sequential
scans over long row values.
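
As a rough sketch of what that looks like (the table and column names
here are made up, not from your schema):

    -- TEXT has no declared length limit; short values are stored
    -- inline in the row, just as a short varchar would be.
    CREATE TABLE messages (
        id    integer PRIMARY KEY,
        body  text
    );

    -- Rows live in 8 kB pages and cannot span pages.  Once a row
    -- grows past roughly 2 kB, its long attributes are compressed
    -- and/or moved out of line into the table's TOAST table, leaving
    -- only a small pointer in the main row.  A random single-row
    -- fetch pays the extra TOAST lookup only when the value is
    -- actually long.

On disk, varchar(10240) and TEXT are stored the same way; the declared
limit just adds a length check, so short values cost the same under
OLTP either way.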
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026