From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Jesper Krogh <jesper(at)krogh(dot)cc>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: wal-size limited to 16MB - Performance issue for subsequent backup
Date: 2014-10-21 21:12:11
Message-ID: CAMkU=1w8H_NhWfaTFJVx2Ya53e6dH1iwEYzBRh2oxtZtuuxs3A@mail.gmail.com
Lists: pgsql-hackers
On Mon, Oct 20, 2014 at 12:03 PM, <jesper(at)krogh(dot)cc> wrote:
> Hi.
>
> One of our "production issues" is that the system generates lots of
> WAL files - lots being 151952 files over the last 24h, which is about
> 2.4TB worth of WAL. I wouldn't say that is an issue by itself, as the
> system does indeed work fine. We do subsequently gzip the files to
> limit actual disk usage, which shrinks them to roughly 30-50% of their
> original size.
>
> That being said, along comes the backup, scheduled once a day, which
> tries to read off these WAL files. To the backup they look like "an
> awful lot of small files"; our backup utilizes a single thread to read
> them and levels off at 30-40MB/s from a 21-drive RAID50 of rotating
> drives, which is quite bad. That causes a daily incremental run to take
> on the order of 24h. Differential backups, which pick up larger deltas,
> and full backups are even worse.
>
Why not have archive_command (which gets the files while they are still
cached) put the files directly into their final destination on the backup
server?
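For example, one possible setting (a minimal sketch, assuming the backup
destination is reachable as a filesystem mount at /mnt/backup/wal, which is
a placeholder; the overwrite guard follows the example in the PostgreSQL
docs):

    # postgresql.conf
    archive_command = 'test ! -f /mnt/backup/wal/%f && cp %p /mnt/backup/wal/%f'

Since archive_command runs right after a segment is completed, the data is
still in the OS cache, and the backup never has to re-read 150k small files
from the rotating drives later.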
> Suggestions are welcome. An archive_command/restore_command that could
> combine/split WAL segments might be the easiest workaround, but how
> about crash-safety?
>
>
I think you would just have to combine them by looking at the file name and
seeking to the corresponding spot in the large file (rather than simply
appending to it), so that if the archive_command fails and gets rerun, the
data still ends up in the correct place. I don't see what other crash-safety
issues you would have beyond the ones you already have. You would want to do
the compression after combining, not before, so that all segments are of
predictable size.
It should be pretty easy as long as you want your combined files to consist
of either 16 or 256 (or 255 in older versions) WAL files.
You would have to pass through unchanged any files not matching the filename
pattern of ordinary WAL files, as in the sketch below.
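Here is a rough sketch in Python of such an archive_command helper. It is
hypothetical: the name combine_wal.py, the choice of 16 segments per
combined file, and the combined-file naming scheme are all illustrative,
and the segment-number arithmetic assumes the 9.3+ convention of 256
segments per logical log file.

    #!/usr/bin/env python3
    """Pack 16MB WAL segments into larger files at an offset derived from
    the segment name, so a rerun after a failed archive attempt rewrites
    the same bytes instead of appending a duplicate copy."""
    import os
    import re
    import shutil
    import sys

    SEG_SIZE = 16 * 1024 * 1024   # default WAL segment size
    SEGS_PER_FILE = 16            # 16 segments -> one 256MB combined file

    # Ordinary WAL segment names are 24 hex digits: timeline, log, seg.
    WAL_NAME = re.compile(r'^[0-9A-F]{24}$')

    def archive(src, fname, dest_dir):
        if not WAL_NAME.match(fname):
            # History/backup-label files etc.: pass through unchanged.
            shutil.copy(src, os.path.join(dest_dir, fname))
            return
        tli, log, seg = fname[:8], fname[8:16], fname[16:24]
        # 9.3+ numbering: 256 segments (00..FF) per logical log file.
        segno = int(log, 16) * 0x100 + int(seg, 16)
        group, slot = divmod(segno, SEGS_PER_FILE)
        combined = os.path.join(dest_dir, '%s.%012X.cwal' % (tli, group))
        # Open without truncating so earlier slots survive; create if new.
        mode = 'r+b' if os.path.exists(combined) else 'w+b'
        with open(combined, mode) as out:
            out.seek(slot * SEG_SIZE)   # same name always lands in the same spot
            with open(src, 'rb') as segf:
                shutil.copyfileobj(segf, out)
            out.flush()
            os.fsync(out.fileno())      # don't report success before it's durable

    if __name__ == '__main__':
        archive(sys.argv[1], sys.argv[2], sys.argv[3])

You would then point archive_command at it, e.g.
archive_command = 'combine_wal.py %p %f /mnt/backup/wal', and gzip each
combined file only once all of its slots have been written.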
Cheers,
Jeff