From: | "Zeugswetter Andreas ADI SD" <ZeugswetterA(at)spardat(dot)at> |
---|---|
To: | "Bruce Momjian" <bruce(at)momjian(dot)us> |
Cc: | "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>, "Gregory Stark" <stark(at)enterprisedb(dot)com>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: TOAST usage setting |
Date: | 2007-06-08 10:07:23 |
Message-ID: | E1539E0ED7043848906A8FF995BDA579021B342F@m0143.s-mxs.net |
Lists: pgsql-hackers
> My next suggestion would be to leave EXTERN_TUPLES_PER_PAGE as is, but:
> Split data wider than a page into page-sized chunks as long as they fill whole pages.
> Split the rest with EXTERN_TUPLES_PER_PAGE (4) as now.
> This would not waste more space than currently, but would improve performance for very wide columns.
>
> I can try to do a patch if you think that is a good idea; I can't do a lot of testing though.
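For concreteness, a minimal sketch of the hybrid split in the quoted proposal. All names and sizes below are invented for illustration (FULLPAGE_CHUNK_SIZE, SMALL_CHUNK_SIZE, store_chunk are hypothetical stand-ins); the real patch would work inside toast_save_datum with the actual page-layout constants.

#include <stdio.h>

/* hypothetical sizes, for illustration only */
#define FULLPAGE_CHUNK_SIZE 8000   /* ~ largest chunk that fills one page */
#define SMALL_CHUNK_SIZE    2000   /* ~ current EXTERN_TUPLES_PER_PAGE=4 size */

/* stand-in for inserting one chunk tuple into the toast table */
static void
store_chunk(const char *p, int len)
{
    (void) p;
    printf("chunk of %d bytes\n", len);
}

static void
toast_split_datum(const char *data, int len)
{
    int offset = 0;

    /* page-sized chunks as long as they fill whole pages ... */
    while (len - offset >= FULLPAGE_CHUNK_SIZE)
    {
        store_chunk(data + offset, FULLPAGE_CHUNK_SIZE);
        offset += FULLPAGE_CHUNK_SIZE;
    }

    /* ... and the rest split with the existing smaller chunk size */
    while (offset < len)
    {
        int sz = len - offset < SMALL_CHUNK_SIZE ? len - offset : SMALL_CHUNK_SIZE;

        store_chunk(data + offset, sz);
        offset += sz;
    }
}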
I have a PoC patch running, but it is larger than expected because of the size checks during read (toast_fetch_datum_slice is not done yet, but would be straightforward).
Also, the pg_control variable toast_max_chunk_size would need to be renamed, and it would have to reflect both the EXTERN_TUPLES_PER_PAGE (4) number and the fact that full-page chunks are used (otherwise the chunk-size checks and slicing could not work as they do now).
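To illustrate why both sizes have to be recorded: with mixed chunk sizes, mapping a byte offset to a chunk number (which is what toast_fetch_datum_slice has to do) needs the full-page size and the small size together. A hypothetical helper, reusing the invented sizes from the sketch above:

/* hypothetical: layout is nfull full-page chunks, then small ones */
static int
offset_to_chunkno(int offset, int total_len)
{
    int nfull = total_len / FULLPAGE_CHUNK_SIZE;    /* full-page chunks */

    if (offset < nfull * FULLPAGE_CHUNK_SIZE)
        return offset / FULLPAGE_CHUNK_SIZE;

    /* past the full-page region, fall back to the small chunk size */
    return nfull + (offset - nfull * FULLPAGE_CHUNK_SIZE) / SMALL_CHUNK_SIZE;
}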
Should I pursue this, keep it for 8.4, or dump it?
The downside of this concept is that chunks smaller than a full page still get split into the smaller pieces, and on real data the < ~8k chunks may well outnumber the > ~8k ones.
The upside is that I do not see a better solution that would keep slicing cheap and still lower the overhead even for pathological cases.
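A back-of-the-envelope comparison of the per-datum tuple counts (the sizes here are rough assumptions, not the exact TOAST constants): for a 100 kB datum, the current scheme needs about 52 small chunks, while the proposed one needs 12 full-page chunks plus 4 small ones.

#include <stdio.h>

/* rough sizes, for illustration only */
#define SMALL_CHUNK 2000        /* ~ current EXTERN_TUPLES_PER_PAGE=4 size */
#define FULLPAGE    8000        /* ~ one chunk filling a whole page */

int
main(void)
{
    int datum = 100 * 1024;     /* a 100 kB datum */
    int now = (datum + SMALL_CHUNK - 1) / SMALL_CHUNK;
    int full = datum / FULLPAGE;
    int rest = (datum % FULLPAGE + SMALL_CHUNK - 1) / SMALL_CHUNK;

    /* prints: current 52 chunks, proposed 12 full-page + 4 small */
    printf("current %d chunks, proposed %d full-page + %d small\n",
           now, full, rest);
    return 0;
}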
Andreas