From: | "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> |
---|---|
To: | "Jay Levitt *EXTERN*" <jay(dot)levitt(at)gmail(dot)com>, <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: High checkpoint_segments |
Date: | 2012-02-15 08:14:25 |
Message-ID: | D960CB61B694CF459DCFB4B0128514C2077EBC45@exadv11.host.magwien.gv.at |
Lists: pgsql-general
Jay Levitt wrote:
> We need to do a few bulk updates as Rails migrations. We're a typical
> read-mostly web site, so at the moment, our checkpoint settings and WAL
> are all default (3 segments, 5 min, 16MB), and updating a million rows
> takes 10 minutes due to all the checkpointing.
>
> We have no replication or hot standbys. As a consumer-web startup, with
> no SLA, and not a huge database, and if we ever do have to recover from
> downtime it's OK if it takes longer... is there a reason NOT to always
> run with something like checkpoint_segments = 1000, as long as I leave
> the timeout at 5m?
There's nothing wrong with the idea, except for the amount of WAL that
gets kept around and the huge checkpoints that can stall your system
for a while in a worst-case scenario. You can't get rid of checkpoint
I/O completely.
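
For reference, these are the knobs in postgresql.conf that Jay is
talking about; the values shown are just the defaults from his mail,
not a recommendation:

    # checkpoint-related settings in postgresql.conf (pre-9.5 naming)
    checkpoint_segments = 3      # force a checkpoint after this many 16MB WAL segments
    checkpoint_timeout = 5min    # force a checkpoint after this much time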
I'd tune it to a more conservative value, maybe 30 or at most 100, and
see if that solves your problem. Check the statistics to see whether
your checkpoints are time-driven or not: once almost all checkpoints
are time-driven, raising checkpoint_segments further won't do anything
for you.
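
For example, the cumulative counters in the pg_stat_bgwriter view
(available since 8.3) distinguish the two kinds of checkpoints; a
quick sketch:

    -- checkpoints triggered by checkpoint_timeout versus
    -- requested ones (e.g. because checkpoint_segments filled up)
    SELECT checkpoints_timed, checkpoints_req
    FROM pg_stat_bgwriter;

If checkpoints_req stays close to zero while the bulk update runs,
checkpoint_segments is already high enough.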
Yours,
Laurenz Albe