From: hubert depesz lubaczewski <depesz(at)gmail(dot)com>
To: Aaron Turner <synfinatic(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: 10+hrs vs 15min because of just one index
Date: 2006-02-10 09:00:34
Message-ID: 9e4684ce0602100100x3d75554apb9decc3d08af52bd@mail.gmail.com
Lists: pgsql-performance
On 2/10/06, Aaron Turner <synfinatic(at)gmail(dot)com> wrote:
> So I'm trying to figure out how to optimize my PG install (8.0.3) to
> get better performance without dropping one of my indexes.
> Basically, I have a table of 5M records with 3 columns:
> pri_key (SERIAL)
> data char(48)
> groupid integer
> There is an additional unique index on the data column.
> The problem is that when I update the groupid column for all the
> records, the query takes over 10hrs (after that I just canceled the
> update). Looking at iostat, top, vmstat shows I'm horribly disk IO
> bound (for data not WAL, CPU 85-90% iowait) and not swapping.
> Dropping the unique index on data (which isn't used in the query)
> brings the update down to about 15 minutes.
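Just for reference, a minimal sketch of the table as you describe it (the table and index names here are hypothetical; the column definitions come from your mail):

    CREATE TABLE foo (
        pri_key SERIAL PRIMARY KEY,  -- assuming this is the primary key
        data    char(48),            -- fixed-width payload
        groupid integer              -- the column being mass-updated
    );

    -- the extra unique index: every updated row version must also be
    -- inserted here, which is what makes the full-table update so slow
    CREATE UNIQUE INDEX foo_data_key ON foo (data);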
For such a large update I would suggest a different scenario: split the update into batches (10,000 or 50,000 rows at a time) and, for each batch, do:

    update the batch
    vacuum the table

When all batches are done, reindex the table. Every updated row leaves a dead tuple behind, and vacuuming between batches lets that space be reused instead of letting the table bloat; the final reindex takes care of the index bloat. It should work much better.
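A rough sketch of the loop, continuing with the hypothetical table foo from above and assuming the new groupid value is 42 (batch by pri_key range so each UPDATE stays small):

    -- one batch: touch only the rows in the current pri_key range
    UPDATE foo
       SET groupid = 42
     WHERE pri_key >= 1 AND pri_key < 10001;

    -- reclaim the dead row versions before the next batch; note that
    -- VACUUM cannot run inside a transaction block, so drive the loop
    -- from psql or a shell script rather than a single function
    VACUUM foo;

    -- ...repeat UPDATE + VACUUM for 10001-20000, 20001-30000, and so on...

    -- after the last batch, rebuild the now-bloated indexes in one pass
    REINDEX TABLE foo;

With 5M rows that is about 500 batches of 10,000, which is easy to script.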
depesz