From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: checkpoint_timeout and archive_timeout
Date: 2012-09-10 23:03:44
Message-ID: CAMkU=1z3=w+ESmK+K8k0m6YMNgV+uCz9fv3B-MUJtrtWmxeZrw@mail.gmail.com
Lists: pgsql-general
On Thu, Aug 16, 2012 at 9:30 AM, Kevin Grittner
<Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
>
>> So a server that is completely free of
>> user activity will still generate an endless stream of WAL files,
>> averaging one file per max(archive_timeout, checkpoint_timeout).
>> That comes out to one 16MB file per hour (since it is not possible
>> to set checkpoint_timeout > 1h) which seems a bit much when
>> absolutely no user-data changes are occurring.
>
...
>
> BTW, that's also why I wrote the pg_clearxlogtail utility (source
> code on pgfoundry). We pipe our archives through that and gzip
> which changes this to an endless stream of 16KB files. Those three
> orders of magnitude can make all the difference. :-)
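For reference, a minimal sketch of the pipeline Kevin describes, as it
might be wired into postgresql.conf. It assumes pg_clearxlogtail acts as
a stdin-to-stdout filter and that /archive is the destination directory;
neither detail is spelled out in the thread.

    # hypothetical archive_command -- %p is the WAL segment path, %f its file name
    archive_command = 'pg_clearxlogtail < %p | gzip > /archive/%f.gz'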
Thanks. Do you put pg_clearxlogtail and gzip into the archive_command
itself, or just do a simple copy into the archive and then have a cron
job do the compression later? I'm not really sure what the failure
modes are for a pipeline built into the archive_command.
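One concrete failure mode: a plain shell pipeline reports only the exit
status of its last stage, so a failure in pg_clearxlogtail could go
unnoticed and a truncated segment could be archived as if it succeeded.
A hedged sketch of a wrapper that surfaces a failure in any stage (the
script name, archive path, and filter behavior of pg_clearxlogtail are
assumptions, not details from the thread):

    #!/bin/bash
    # archive_wal.sh -- hypothetical wrapper; wired up in postgresql.conf as
    #   archive_command = '/usr/local/bin/archive_wal.sh %p %f'
    set -o pipefail                  # pipeline fails if ANY stage fails
    dest="/archive/$2.gz"
    test ! -f "$dest" || exit 1      # never silently overwrite an archived segment
    if ! pg_clearxlogtail < "$1" | gzip > "$dest"; then
        rm -f "$dest"                # drop the partial file so a retry starts clean
        exit 1                       # nonzero exit makes PostgreSQL retry the segment
    fi

Since PostgreSQL keeps a WAL segment and retries whenever
archive_command returns nonzero, the main design goal is simply to make
sure every failure path does return nonzero.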
Thanks,
Jeff