>>> On Mon, Sep 24, 2007 at 4:17 PM, in message
<46F7E335.EE98.0025.0@wicourts.gov>, "Kevin Grittner"
<Kevin.Grittner@wicourts.gov> wrote:
>>>>> On Thu, Sep 6, 2007 at 7:03 PM, in message
>> <1189123422.9243.29.camel@dogma.ljc.laika.com>, Jeff Davis <pgsql@j-davis.com>
>> wrote:
>>>
>>> I think ... there's still room for a simple tool that can zero out
>>> the meaningless data in a partially-used WAL segment before compression.
>
> so I'm looking for advice, direction, and suggestions before I get started.

Lacking any suggestions, I plowed ahead with something that satisfies
our needs.  A first, rough version is attached.  It'll save us from buying
another drawer of drives, so it was worth a few hours of research to figure
out how to do it.
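
In case it helps to see the shape of the thing without digging into the
attachment, here is a minimal sketch of the filter idea.  The page-header
layout it assumes (a bare 64-bit page address at a fixed offset in each
8kB page) is made up purely for illustration; a real version has to read
the actual WAL page header definitions from the backend includes.

/*
 * Sketch only: read a WAL segment on stdin one 8kB page at a time,
 * zero any page that does not belong to the current segment (i.e. a
 * page left over from the recycled file this segment overwrote), and
 * write the result to stdout so gzip sees long runs of zeros.
 */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define PAGE_SIZE 8192

int
main(void)
{
	unsigned char page[PAGE_SIZE];
	uint64_t	seg_start = 0;
	uint64_t	pageno = 0;
	size_t		n;

	while ((n = fread(page, 1, PAGE_SIZE, stdin)) == PAGE_SIZE)
	{
		uint64_t	pageaddr;

		/* hypothetical header: page address stored at byte offset 8 */
		memcpy(&pageaddr, page + 8, sizeof(pageaddr));

		if (pageno == 0)
			seg_start = pageaddr;		/* trust the first page */
		else if (pageaddr != seg_start + pageno * PAGE_SIZE)
			memset(page, 0, PAGE_SIZE);	/* stale page from recycling */

		if (fwrite(page, 1, PAGE_SIZE, stdout) != PAGE_SIZE)
		{
			perror("fwrite");
			return 1;
		}
		pageno++;
	}
	if (n != 0)
	{
		fprintf(stderr, "input was not a whole number of %d-byte pages\n",
				PAGE_SIZE);
		return 1;
	}
	return 0;
}

Streaming it a page at a time like this keeps memory use trivial and makes
it easy to drop into a pipeline ahead of gzip.
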
If anyone spots any obvious defects, please let me know. We'll be running
about 50,000 WAL files through it today or tomorrow; if any problems turn
up in that process I'll repost with a fix.

Given the lack of response to my previous post, I'll assume it's not worth
the effort to polish it up much further; but if others are interested in
using it, I'll make some time for that.

Adding this to the pipe in our archive script (roughly as sketched below)
not only saves disk space but also reduces CPU time overall, since gzip
usually has less work to do on a partially-used segment.  For WAL files
that switch because they are full, the extra pass does push the CPU time
up slightly, from about 0.8s to about 1.0s.
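
For reference, the hookup is nothing exotic; with a made-up filter name of
clear_xlog_tail (the name, paths, and variables below are placeholders, not
what our script actually uses), the relevant piece of the pipe is roughly:

	clear_xlog_tail < "$WAL_PATH" | gzip > "$ARCHIVE_DIR/$WAL_FILE.gz"

A real archive script should of course also check the exit status of both
commands and refuse to overwrite an existing archive file before reporting
success back to the archiver.
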
-Kevin