From: Ivan Sergio Borgonovo <mail(at)webthatworks(dot)it>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: exceptionally large UPDATE
Date: 2010-10-28 17:06:13
Message-ID: 20101028190613.68466262@dawn.webthatworks.it
Lists: pgsql-general
On Thu, 28 Oct 2010 08:58:34 -0400
Vick Khera <vivek(at)khera(dot)org> wrote:
> On Wed, Oct 27, 2010 at 10:26 PM, Ivan Sergio Borgonovo
> <mail(at)webthatworks(dot)it> wrote:
> > I'm increasing maintenance_work_mem to 180MB just before
> > recreating the gin index. Should it be more?
> >
>
> You can do this on a per-connection basis; no need to alter the
> config file. At the psql prompt (or via your script) just execute
> the query
>
> SET maintenance_work_mem = '180MB';
> If you've got the RAM, just use more of it. I'd suspect your server
> has plenty of it, so use it! When I reindex, I often give it 1 or
> 2 GB. If you can fit the whole table into that much space, you're
> going to go really really fast.
> Also, if you are going to update that many rows you may want to
> increase your checkpoint_segments. Increasing that helps a *lot*
> when you're loading big data, so I would expect updating big data
> may also be helped. I suppose it depends on how wide your rows
> are. 1.5 Million rows is really not all that big unless you have
> lots and lots of text columns.
Actually I'm pretty happy with the performance of the DB under
normal circumstances; I never investigated too much whether I could
squeeze more out of it. But when I have to deal with such "huge"
updates the performance is painful.
You made me start to wonder if I could improve performance even
under normal load. But right now I have to take care of this huge
update ("huge" only relative to the performance I'm getting right
now).
The things I've changed compared to the stock configuration were:
max_connections = 100
shared_buffers = 240MB
work_mem = 42MB
maintenance_work_mem = 180MB # normally 40MB
# these two were raised following autovacuum's hints in the logs
max_fsm_pages = 437616
max_fsm_relations = 1200
checkpoint_segments = 35
random_page_cost = 3.0
default_statistics_target = 30
log_min_duration_statement = 1000
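To double check what's actually in effect I can query pg_settings
from psql, something like:

  SELECT name, setting, unit, source
  FROM pg_settings
  WHERE name IN ('shared_buffers', 'work_mem',
                 'maintenance_work_mem', 'checkpoint_segments');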
The box is also running Apache; the average total length of text
per row should be around 1KB on the largest table, so roughly 1.5GB
of text over the ~1.5 million rows.
What I'm planning to do is:
max_connections = 5
shared_buffers = 240MB
work_mem = 90MB
maintenance_work_mem = 1GB
max_fsm_pages = 437616
max_fsm_relations = 1200
checkpoint_segments = 70
default_statistics_target = 30
#log_min_duration_statement = 1000
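For the per-session parameters I could also just follow Vick's
suggestion and set them in the update script itself rather than in
postgresql.conf. A rough sketch (table and index names are just
placeholders):

  SET maintenance_work_mem = '1GB';
  SET work_mem = '90MB';
  DROP INDEX idx_items_fts;  -- the gin index, recreated after the update
  -- ... the big UPDATE runs here ...
  CREATE INDEX idx_items_fts ON items USING gin (fts_vector);
  ANALYZE items;

shared_buffers and max_connections would still need a change in
postgresql.conf and a restart; IIRC checkpoint_segments just needs a
reload.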
Any improvement?
--
Ivan Sergio Borgonovo
http://www.webthatworks.it