From: James Mansion <james(at)mansionfamily(dot)plus(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: full_page_write and also compressed logging
Date: 2008-04-18 19:55:31
Message-ID: 4808FCB3.7090904@mansionfamily.plus.com
Lists: pgsql-performance
Has there ever been any analysis regarding the redundant write overhead
of full page writes?
I'm wondering if one could regard an 8k page as 64 128-byte paragraphs
(or 32 256-byte paragraphs), each represented by a bit in a word. When a
page is dirtied by changes, some record is kept of which paragraphs were
affected. Then you could just incrementally dump the pre-image of each
newly dirtied paragraph as you go, and the logging cost for dirtied pages
would be much lower in the case of scattered updates.
(I was also wondering about just doing pre-images based on changed byte
ranges, but the approach above is probably faster, doesn't dump the same
range twice, and may fit the existing flow more directly.)
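Roughly the sort of bookkeeping I have in mind is sketched below. This is
purely illustrative C, not PostgreSQL code; the names (Page, mark_dirty,
dirty_paras) are made up, and a real implementation would hook into the
existing buffer/WAL machinery rather than a standalone struct.

/* Sketch: track which 128-byte "paragraphs" of an 8 kB page have been
 * dirtied, one bit per paragraph in a 64-bit word, so only newly dirtied
 * paragraphs need a pre-image written to the log. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE  8192
#define PARA_SIZE  128
#define NUM_PARAS  (PAGE_SIZE / PARA_SIZE)   /* 64 - fits in a uint64_t */

typedef struct
{
    unsigned char data[PAGE_SIZE];
    uint64_t      dirty_paras;   /* bit i set => paragraph i already logged */
} Page;

/* Mark the paragraphs covering [offset, offset+len) and return a mask of
 * those that were not previously dirty; only these need pre-images. */
static uint64_t
mark_dirty(Page *page, size_t offset, size_t len)
{
    size_t   first = offset / PARA_SIZE;
    size_t   last  = (offset + len - 1) / PARA_SIZE;
    uint64_t newly = 0;

    for (size_t i = first; i <= last; i++)
    {
        uint64_t bit = UINT64_C(1) << i;

        if (!(page->dirty_paras & bit))
        {
            newly |= bit;
            page->dirty_paras |= bit;
            /* A real implementation would copy PARA_SIZE bytes starting at
             * page->data + i * PARA_SIZE into the log record here. */
        }
    }
    return newly;
}

int main(void)
{
    Page p;
    memset(&p, 0, sizeof(p));

    /* A scattered 20-byte update touches paragraphs 3 and 4 ... */
    printf("newly dirtied mask: %#llx\n",
           (unsigned long long) mark_dirty(&p, 500, 20));
    /* ... and a second update to the same region logs nothing extra. */
    printf("newly dirtied mask: %#llx\n",
           (unsigned long long) mark_dirty(&p, 505, 10));
    return 0;
}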
Also - has any attempt been made to push log writes through a cheap
compressor, such as zlib on its lowest setting or one like Jeff Bonwick's
for ZFS
(http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/os/compress.c)?
This would work well for largely textual tables (and I suspect a lot of
integer data too).
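By "cheap compressor" I mean nothing fancier than the following; this is a
toy example, not PostgreSQL code, and the buffer contents are just a
stand-in for a largely textual log payload (build with -lz):

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* Stand-in for a repetitive, mostly textual WAL write buffer. */
    static const char wal_buf[] =
        "INSERT INTO orders (customer, item) VALUES ('alice', 'widget');"
        "INSERT INTO orders (customer, item) VALUES ('alice', 'widget');"
        "INSERT INTO orders (customer, item) VALUES ('bob',   'widget');";

    uLong  src_len  = (uLong) sizeof(wal_buf);
    uLongf dest_len = compressBound(src_len);   /* worst-case output size */
    Bytef  dest[1024];

    if (dest_len > sizeof(dest))
        return 1;

    /* Z_BEST_SPEED (level 1) trades ratio for speed - the point is to
     * shrink the volume of full-page/pre-image data without burning much
     * CPU in the log-write path. */
    if (compress2(dest, &dest_len, (const Bytef *) wal_buf, src_len,
                  Z_BEST_SPEED) != Z_OK)
        return 1;

    printf("raw %lu bytes -> compressed %lu bytes\n",
           (unsigned long) src_len, (unsigned long) dest_len);
    return 0;
}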
James