From: Vick Khera <vivek(at)khera(dot)org>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: exceptionally large UPDATE
Date: 2010-10-28 12:58:34
Message-ID: AANLkTi==_bprDq-yhYCuRBUuESuAh0nHAFom4bvpFuhM@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
<mail(at)webthatworks(dot)it> wrote:
> I'm increasing maintenance_work_mem to 180MB just before recreating
> the gin index. Should it be more?
>
You can do this on a per-connection basis; there is no need to alter the config
file. At the psql prompt (or from your script) just execute:

    SET maintenance_work_mem = '180MB';
If you've got the RAM, just use more of it. I'd suspect your server
has plenty of it, so use it! When I reindex, I often give it 1 or 2
GB. If you can fit the whole table into that much space, you're going
to go really really fast.
Also, if you are going to update that many rows, you may want to
increase your checkpoint_segments. Increasing that helps a *lot* when
you're loading big data, so I would expect it to help with big updates
as well. I suppose it depends on how wide your rows are: 1.5 million
rows is really not all that big unless you have lots and lots of text
columns.
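If it helps, a rough sketch of what the postgresql.conf change might look like
on a 9.0-era server; the value is illustrative, not tuned for your workload,
and checkpoint_segments cannot be set per session:

    # postgresql.conf -- illustrative value only
    checkpoint_segments = 32    # default is 3; fewer forced checkpoints during bulk writes

Then reload the server config (pg_ctl reload, or run SELECT pg_reload_conf()
as a superuser); no restart is needed for this setting.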