From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Bruce Momjian" <bruce(at)momjian(dot)us>
Cc: <pgsql-hackers(at)postgresql(dot)org>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Subject: Re: TOAST usage setting
Date: 2007-05-29 15:23:38
Message-ID: 874plvpsat.fsf@oxford.xeocode.com
Lists: pgsql-hackers
"Bruce Momjian" <bruce(at)momjian(dot)us> writes:
> Gregory Stark wrote:
>> "Bruce Momjian" <bruce(at)momjian(dot)us> writes:
>>
>> > I tested TOAST using a method similar to the above method against CVS
>> > HEAD, with default shared_buffers = 32MB and no assert()s. I created
>> > backends with power-of-2 settings for TOAST_TUPLES_PER_PAGE (4 (default),
>> > 8, 16, 32, 64) which gives TOAST/non-TOAST breakpoints of 2k(default),
>> > 1k, 512, 256, and 128, roughly.
>> >
>> > The results are here:
>> >
>> > http://momjian.us/expire/TOAST/
>> >
>> > Strangely, 128 bytes seems to be the break-even point for TOAST and
>> > non-TOAST, even for sequential scans of the entire heap touching all
>> > long row values. I am somewhat confused why TOAST has faster access
>> > than inline heap data.
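(For reference, those breakpoints are roughly just the 8 kB block size divided
by TOAST_TUPLES_PER_PAGE; the actual macros also subtract page and item header
overhead. A minimal standalone sketch of the arithmetic, not the PostgreSQL
source:)

    /* Sketch: approximate TOAST threshold for each TOAST_TUPLES_PER_PAGE
     * setting, assuming the default 8 kB block size and ignoring the page
     * header overhead the real macros subtract. */
    #include <stdio.h>

    #define BLCKSZ 8192

    int
    main(void)
    {
        int settings[] = {4, 8, 16, 32, 64};
        int i;

        for (i = 0; i < 5; i++)
            printf("TOAST_TUPLES_PER_PAGE = %2d  ->  threshold ~%d bytes\n",
                   settings[i], BLCKSZ / settings[i]);
        return 0;
    }
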
Is your database initialized with C locale (i.e. a single-byte database
encoding)? If so, length(text) is optimized so that it never has to detoast:
    /* Single-byte encoding: character length equals byte length, which can be
     * read from the varlena/TOAST pointer header without fetching the data. */
    if (pg_database_encoding_max_length() == 1)
        PG_RETURN_INT32(toast_raw_datum_size(str) - VARHDRSZ);
Also, I think you have to run this for small data sets like you have as well as
for large data sets, where the random-access seek time of TOAST will really hurt.
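Something along these lines would do it, with an md5() over the column so the
value really is detoasted, timing both a sequential pass and a random
single-row fetch (a hypothetical libpq harness; the table toast_test and the
columns id/val are just placeholders, not anything from your test setup):

    /* Hypothetical benchmark sketch: time a query that actually detoasts the
     * long values, once as a sequential scan and once as a random fetch. */
    #include <stdio.h>
    #include <sys/time.h>
    #include <libpq-fe.h>

    static double
    now_sec(void)
    {
        struct timeval tv;

        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    static double
    run_query(PGconn *conn, const char *sql)
    {
        double    start = now_sec();
        PGresult *res = PQexec(conn, sql);

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return now_sec() - start;
    }

    int
    main(void)
    {
        PGconn *conn = PQconnectdb("dbname=test");

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* md5() has to read the whole value, so it always detoasts,
         * unlike length() in a single-byte encoding. */
        printf("seq scan:     %.3f s\n",
               run_query(conn, "SELECT sum(length(md5(val))) FROM toast_test"));
        printf("random fetch: %.3f s\n",
               run_query(conn, "SELECT md5(val) FROM toast_test"
                               " WHERE id = (random() * 100000)::int"));

        PQfinish(conn);
        return 0;
    }
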
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com