From: bricklen <bricklen(at)gmail(dot)com>
To: Mark Steben <mark(dot)steben(at)drivedominion(dot)com>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: amount of WAL logs larger than expected
Date: 2016-05-09 18:39:29
Message-ID: CAGrpgQ-ptD_h2i0VAVfqQOrX9pyQHEfhj3Wftr+sVKdOWX_34A@mail.gmail.com
Lists: pgsql-admin
On Mon, May 9, 2016 at 11:23 AM, Mark Steben <mark(dot)steben(at)drivedominion(dot)com>
wrote:
> We run postgres 9.2.12 and run vacuum on a 540GB database.
>
> I've attached a 'show all' for your reference. With checkpoint_segments
> set at 128 and checkpoint_completion_target set at 0.9, I wouldn't expect
> the number of logs in pg_xlog to climb much over 400. But when we run
> vacuum, the number can climb to over 5000 and threatens to blow out on
> space. Is there something else I should be looking at that could be
> causing this unexpected number of logs?
>
> Our server is also the master for a slony1.2.2.3 slave and for a hot
> standby server.
>
>
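As a sanity check on the ~400 figure: the 9.2 docs give the usual soft cap on pg_xlog segments as (2 + checkpoint_completion_target) * checkpoint_segments + 1, so the expectation is reasonable. A quick sketch of the arithmetic:

```shell
# Soft cap on pg_xlog segments in PostgreSQL 9.2:
#   (2 + checkpoint_completion_target) * checkpoint_segments + 1
CHECKPOINT_SEGMENTS=128
CHECKPOINT_COMPLETION_TARGET=0.9

CAP=$(awk -v s="$CHECKPOINT_SEGMENTS" -v t="$CHECKPOINT_COMPLETION_TARGET" \
    'BEGIN { printf "%d", (2 + t) * s + 1 }')
echo "expected segment cap: $CAP"                 # 372 segments
echo "approx size in MB:    $((CAP * 16))"        # 16 MB per segment by default
```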
Is the hot standby in a different network or across the WAN? Have you
checked for bandwidth saturation? Also, have a look at the directory the hot
standby is receiving WALs in, and check whether the most recent ones have
current timestamps. If the WALs that are arriving have much older timestamps
than what is being generated on the primary, that could indicate slow
transfer. For example, I had an issue recently where 4k WALs built up on the
primary during a large ETL process. It took a few hours to ship those
(compressed) WALs over the WAN to the replica's data centre.
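A minimal sketch of that check — the paths below are assumptions, so substitute your own data directory and the directory your standby receives WALs into:

```shell
# Paths are assumptions -- replace with your actual directories.
PGXLOG=/var/lib/postgresql/9.2/main/pg_xlog
STANDBY_WAL=/var/lib/postgresql/wal_archive

# Newest WAL segments on the primary (run on the primary)
ls -lt "$PGXLOG" 2>/dev/null | head -n 5

# Newest WALs received on the standby (run on the standby); if these
# timestamps lag well behind the primary's, transfer is the bottleneck
ls -lt "$STANDBY_WAL" 2>/dev/null | head -n 5

# Count WAL segments currently held in pg_xlog (16 MB each by default)
ls "$PGXLOG" 2>/dev/null | grep -E '^[0-9A-F]{24}$' | wc -l
```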