From: Vick Khera <vivek(at)khera(dot)org>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: exceptionally large UPDATE
Date: 2010-10-29 14:21:14
Message-ID: AANLkTikQRvni6ZmeHFJ7KBi-5aYsHYk8N-c1BZ-tJVZ7@mail.gmail.com
Lists: pgsql-general
On Thu, Oct 28, 2010 at 1:06 PM, Ivan Sergio Borgonovo
<mail(at)webthatworks(dot)it> wrote:
> What I'm planning to do is:
> max_connections = 5
> shared_buffers = 240M
> work_mem = 90MB
> maintenance_work_mem = 1GB
> max_fsm_pages = 437616
> max_fsm_relations = 1200
> checkpoint_segments = 70
> default_statistics_target = 30
> #log_min_duration_statement = 1000
>
default_statistics_target = 100 is the default in newer PostgreSQL
releases, and with good reason... try that.
If you boost your checkpoint_segments, also increase
checkpoint_timeout and raise checkpoint_completion_target (something
like 0.8 would be good, depending on how fast your disks are) to
smooth out your I/O, i.e., to keep it from bursting all at once when
a checkpoint fires. Is 5 connections really enough for you?
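As a rough sketch, those checkpoint settings might look like this in
postgresql.conf (the values here are illustrative, not a tested
recommendation for your hardware):

```
# postgresql.conf -- checkpoint smoothing (illustrative values)
checkpoint_segments = 70            # as planned above
checkpoint_timeout = 15min          # default is 5min; longer spaces checkpoints out
checkpoint_completion_target = 0.8  # spread checkpoint writes over 80% of the interval
```

Reload the config (pg_ctl reload) after editing; checkpoint settings
do not require a full restart.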
And like I said before, you can set the work_mem and/or
maintenance_work_mem on a per-connection basis as needed, so for your
big update you can increase those values just during that work without
affecting the rest of the system.
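For example, in the session that runs the big update (table and
values below are hypothetical placeholders, not from the original
thread):

```sql
-- Raise memory limits for this session only; examples, not recommendations.
SET work_mem = '256MB';
SET maintenance_work_mem = '1GB';

-- ... run the exceptionally large UPDATE here ...

-- Return this session to the server-wide defaults afterwards.
RESET work_mem;
RESET maintenance_work_mem;
```

SET only affects the current session, so other connections keep the
conservative values from postgresql.conf.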