From: Glen Parker <glenebob(at)nwlink(dot)com>
To: postgres general <pgsql-general(at)postgresql(dot)org>
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>
Subject: Re: WAL archiving to network drive
Date: 2008-08-22 00:57:56
Message-ID: 48AE0F14.5060505@nwlink.com
Lists: pgsql-general
Greg Smith wrote:
> On Wed, 20 Aug 2008, Glen Parker wrote:
> The database will continue accumulating WAL segments it can't recycle if
> the archiver keeps failing, which can cause the size of the pg_xlog
> directory (often mounted into a separate, smaller partition or disk) to
> increase dramatically. You do not want to be the guy who caused the
> database to go down because the xlog disk filled after some network
> mount flaked out. I've seen that way too many times in WAN environments
> where the remote location was unreachable for days, due to natural
> disaster for example, and since under normal operation pg_xlog never got
> very big it wasn't sized for that.
>
> It will also slow things down a bit under heavy write loads, as every
> segment change will result in creating a new segment file rather than
> re-using an old one.
So you advocate archiving the WAL files from a small xlog volume to a
larger local volume. Why not just make the xlog volume large enough to
handle overruns, since you obviously have the space? The cost of copying
each WAL file from one place to another on the local machine FAR
outweighs the extra overhead incurred when WAL files must be created
rather than recycled.
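That said, for anyone worried about the disk-filling scenario Greg
describes, a simple cron check is one way to catch a backlog early.
This is just an illustrative sketch, not something from my setup; the
data directory path, threshold, and alert address are all invented:

    #!/bin/sh
    # Count WAL segments waiting to be archived (each has a .ready
    # marker under archive_status) and complain if the backlog grows.
    PGXLOG=/var/lib/pgsql/data/pg_xlog    # assumed 8.x data dir layout
    THRESHOLD=50                          # assumed alert threshold
    ready=`ls "$PGXLOG"/archive_status/*.ready 2>/dev/null | wc -l`
    if [ "$ready" -gt "$THRESHOLD" ]; then
        echo "WAL archive backlog: $ready segments" \
            | mail -s "WAL backlog on `hostname`" dba@example.com
    fi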
Also, you mention days of downtime, natural disasters, and a WAN. My
DBMS and archive machines are in the same room. If I had to deal with
separate locations, I'd build more safety into the system. In fact, in
a way, I have: my WALs are archived immediately to another machine,
where they are (hours later) sent to tape in batches, which are then
taken off-site, emulating your decoupled system to some extent.
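To make the "archived immediately to another machine" step concrete,
here is a rough sketch of the kind of script that can sit behind
archive_command. It is not my actual script; the host name and
directories are invented. The important parts are the copy-then-rename
and the non-zero exit on failure, so Postgres keeps the segment and
retries:

    #!/bin/sh
    # Invoked as: archive_command = '/usr/local/bin/archive_wal.sh "%p" "%f"'
    WAL_PATH="$1"    # %p: path to the segment, relative to the data dir
    WAL_NAME="$2"    # %f: bare segment file name
    ARCHIVE_HOST=archive.example.com    # assumed archive machine
    ARCHIVE_DIR=/wal_archive            # assumed destination directory
    # Copy to a temp name, then rename, so a half-copied file is never
    # mistaken for a complete segment on the archive side.
    scp "$WAL_PATH" "$ARCHIVE_HOST:$ARCHIVE_DIR/$WAL_NAME.tmp" || exit 1
    ssh "$ARCHIVE_HOST" \
        "mv '$ARCHIVE_DIR/$WAL_NAME.tmp' '$ARCHIVE_DIR/$WAL_NAME'" || exit 1
    exit 0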
> OK, maybe you're smarter than that and used a separate script. DBAs are
> also not happy changing a script that gets called by the database every
> couple of minutes, and as soon as there's more than one piece involved
> it can be difficult to do an atomic update of said script.
Yes, I'm smarter than that, and I'm also the DBA, so I don't mind much ;-)
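On the atomic-update point: since a rename on the same filesystem is
atomic on POSIX, swapping in a new copy of the script in one step is
straightforward. A sketch, with invented file names:

    # Stage the new version on the same filesystem, then rename over
    # the old one; callers see either the old script or the new one,
    # never a partially written mix.
    cp archive_wal.sh.new /usr/local/bin/.archive_wal.sh.tmp
    chmod 755 /usr/local/bin/.archive_wal.sh.tmp
    mv /usr/local/bin/.archive_wal.sh.tmp /usr/local/bin/archive_wal.sh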
-Glen