Performance impact of record sizes

From: John Moore <wx-chase(at)tinyvital(dot)com>
To: pgsql admin <pgsql-admin(at)postgresql(dot)org>
Subject: Performance impact of record sizes
Date: 2002-07-04 18:24:33
Message-ID: 5.1.1.6.2.20020704111915.04499de8@pop3.norton.antivirus
Lists: pgsql-admin

We have a need to store text data which is typically just a hundred or so
bytes, but which in some cases may extend to a few thousand. Our current field
is a varchar(1024), which is not large enough. The key data in the same record
is fixed-size and much smaller.

Our application is primarily transaction-oriented, which means that records
will normally be fetched via random access rather than sequential scans.

The question is: what size thresholds exist? I assume that there is a
"page" size above which the record will be split across more than one page.
What is that size, and does the spill cost any more or less than if I had
split the data into two or more individual records myself to hold the same data?
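(For reference, I'm assuming the page size is whatever the server reports;
something like the following should show it, though I may have the parameter
name wrong, so please correct me if there's a better way to check:

    -- Show the server's page/block size; I believe the stock build uses 8192 bytes.
    SHOW block_size;

)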

Obviously, the easiest thing for me to do is just set the varchar to
something big (say, 10K), but I don't want to do that without understanding
the OLTP performance impact.
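
For concreteness, the change I have in mind is roughly the following (table
and column names are just placeholders, not our real schema):

    -- Current shape of the table (simplified)
    CREATE TABLE notes (
        note_id  integer PRIMARY KEY,
        body     varchar(1024)
    );

    -- Candidate change: the same table with a much wider varchar (or plain text)
    CREATE TABLE notes_wide (
        note_id  integer PRIMARY KEY,
        body     varchar(10240)
    );

Would the wider column behave any differently for rows that stay small, or
does the cost only appear when a row actually exceeds the page threshold?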

Thanks in advance

John Moore

http://www.tinyvital.com/personal.html

UNITED WE STAND
