From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Zeugswetter Andreas ADI SD <ZeugswetterA(at)spardat(dot)at>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: TOAST usage setting
Date: 2007-05-30 17:15:53
Message-ID: 87ejky9qra.fsf@oxford.xeocode.com
Lists: pgsql-hackers
"Bruce Momjian" <bruce(at)momjian(dot)us> writes:
> Uh, am I supposed to be running more TOAST tests? Would someone explain
> what they want tested?
If you want my opinion, I would say we need two tests:
1) For TOAST_TUPLE_TARGET:
We need to rerun the test scripts you already have, but at sizes that cause
actual disk i/o. The real cost of TOAST lies in the random-access seeks, and
your tests all fit in memory, so they're missing that.
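To be concrete about what I mean, the data generation would look something
like this (the table name and row count are only illustrative, and SET
STORAGE EXTERNAL keeps the values from being compressed away, so they really
land in the toast table):

    -- build a table whose toast data comfortably exceeds physical RAM
    -- (roughly 16GB of 4k values here; scale the row count to your machine)
    CREATE TABLE toast_io_test (id int, v text);
    ALTER TABLE toast_io_test ALTER COLUMN v SET STORAGE EXTERNAL;
    INSERT INTO toast_io_test
        SELECT i, repeat('x', 4096)
        FROM generate_series(1, 4000000) AS g(i);

Then run your existing queries against that instead of the in-memory tables.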
2) And for TOAST_MAX_CHUNK_SIZE:
Set TOAST_MAX_CHUNK_SIZE to 8k and TOAST_TUPLE_TARGET to 4097, and store a
large table (larger than RAM) of 4069-byte values (and verify that that's
creating two chunks for each tuple). Test how long it takes to do a
sequential scan with hashtext(). Compare that to the above with
TOAST_MAX_CHUNK_SIZE set to 4k (and verify that the toast table is much
smaller in this configuration).
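Spelled out, the setup and the chunk check could look like this (the table
name is made up and you have to look up the real pg_toast relation yourself,
so take it as a sketch, not a script):

    CREATE TABLE chunk_test (v text);
    -- store the values verbatim, no compression
    ALTER TABLE chunk_test ALTER COLUMN v SET STORAGE EXTERNAL;
    INSERT INTO chunk_test
        SELECT repeat('x', 4069) FROM generate_series(1, 4000000);
    -- find the backing toast table for chunk_test
    SELECT reltoastrelid::regclass FROM pg_class WHERE relname = 'chunk_test';
    -- all chunks of one value share a chunk_id, so this gives chunks per value
    SELECT count(*)::float8 / count(DISTINCT chunk_id) AS chunks_per_value
      FROM pg_toast.pg_toast_NNNNN;  -- NNNNN comes from the query above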
Actually, I think we need to do the latter of these first, because if it
shows that bloating the toast table is faster than chopping the data up into
finer chunks, then we'll want to set TOAST_MAX_CHUNK_SIZE to 8k, and your
tests above will have to be rerun.
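Either way the measurement is the same for both builds, something along these
lines (same caveat about the made-up names):

    \timing
    -- sequential scan that forces a detoast of every value
    SELECT sum(hashtext(v)) FROM chunk_test;
    -- toast table size, to compare the 8k build against the 4k build
    SELECT pg_size_pretty(pg_relation_size(reltoastrelid))
      FROM pg_class WHERE relname = 'chunk_test';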
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com