From: Peter Brunnengräber <pbrunnen(at)bccglobal(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: Hot_standby WAL archiving question
Date: 2016-05-02 15:53:52
Message-ID: 1206916146.2714.1462204427454.JavaMail.pbrunnen@Station8.local
Lists: pgsql-admin
Hello all,
I have been setting up an active/standby PostgreSQL 9.2 cluster using Corosync. The Corosync documentation has me enable WAL archiving with "archive_command = 'cp %p /db/data/postgresql/9.2/pg_archive/%f'", I assume so that the standby can catch up via log shipping (async) before switching over to streaming replication (sync). But as I push data through the database, I notice that pg_archive keeps growing, and I am worried about running out of disk space.
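For reference, the archiving-related settings on the master look roughly like this (a sketch; the archive path is from my setup, and the max_wal_senders value is just an example):

```
# postgresql.conf (master) -- sketch of the archiving-related settings
wal_level = hot_standby          # required for a hot standby in 9.2
archive_mode = on
archive_command = 'cp %p /db/data/postgresql/9.2/pg_archive/%f'
max_wal_senders = 3              # example value; allows streaming connections from the standby
```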
Per the PostgreSQL documentation, I set "wal_keep_segments", but I believe this only affects the segments kept in pg_xlog, not what archive_command has already copied to the archive.
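In other words, my understanding is that a setting like the following (the value 32 is an arbitrary example, not a recommendation) only controls pg_xlog retention:

```
# postgresql.conf (master) -- sketch
# Retains segments in pg_xlog for streaming standbys; at 16 MB per segment,
# 32 segments is roughly 512 MB. Files already archived by archive_command
# are not affected by this setting.
wal_keep_segments = 32
```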
So, since I am not keeping the archived WAL segments for long-term use, is it safe to set "archive_cleanup_command = 'pg_archivecleanup /db/data/postgresql/9.2/pg_archive %r'" on the master? And will archive_cleanup_command even take effect there? I have only read about it being used on slaves, to clean up what has already been applied.
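For context, this is how the documentation shows archive_cleanup_command being used, i.e. on the standby rather than the master (a sketch; the primary_conninfo host is hypothetical):

```
# recovery.conf (standby) -- sketch of the documented usage
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432'   # hypothetical connection string
restore_command = 'cp /db/data/postgresql/9.2/pg_archive/%f %p'
# %r is the name of the oldest file still required for a restart;
# pg_archivecleanup removes everything older than it from the archive
archive_cleanup_command = 'pg_archivecleanup /db/data/postgresql/9.2/pg_archive %r'
```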
Thank you!
-With kind regards,
Peter Brunnengräber