From: Alan Hodgson <ahodgson(at)simkin(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: 121+ million record table perf problems
Date: 2007-05-18 19:08:41
Message-ID: 200705181208.41163@hal.medialogik.com
Lists: pgsql-performance
On Friday 18 May 2007 11:51, "Joshua D. Drake" <jd(at)commandprompt(dot)com> wrote:
> > The update query that started this all I had to kill after 17hours. It
> > should have updated all 121+ million records. That brought my select
> > count down to 19 minutes, but still a far cry from acceptable.
You're going to want to drop all your indexes before trying to update 121
million records. Updates in PostgreSQL are really quite slow, mostly due
to all the index updates. Drop indexes, do the updates, create a primary
key, cluster the table on that key to free up the dead space, then recreate
the rest of the indexes. That's about as fast as you can get that process.
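
A minimal sketch of that sequence (all table, column, and index names here
are hypothetical; adapt them to your schema):

  ALTER TABLE big_table DROP CONSTRAINT big_table_pkey;  -- drops the PK's index too
  DROP INDEX big_table_foo_idx;
  DROP INDEX big_table_bar_idx;

  UPDATE big_table SET some_col = 0;                     -- the 121+ million row update

  ALTER TABLE big_table ADD PRIMARY KEY (id);
  CLUSTER big_table_pkey ON big_table;                   -- rewrites the table, reclaiming the dead space
                                                         -- (newer releases also spell this
                                                         --  CLUSTER big_table USING big_table_pkey)
  CREATE INDEX big_table_foo_idx ON big_table (foo);
  CREATE INDEX big_table_bar_idx ON big_table (bar);
  ANALYZE big_table;                                     -- refresh planner statistics when done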
Of course, doing anything big on one disk is also going to be slow, no
matter what you do. I don't think a table scan should take 19 minutes,
though, not for 121 million records. You should be able to get at least
60-70 MB/sec out of anything modern. I can only assume your disk is
thrashing doing something else at the same time as the select.
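
As a rough sanity check (again with a hypothetical table name), compare the
on-disk heap size to your sequential read rate:

  SELECT pg_size_pretty(pg_relation_size('big_table'));

If that reports, say, ~15 GB, then at 60-70 MB/sec a full scan should finish
in roughly 4 minutes; a 19-minute count(*) suggests the disk is spending most
of its time on something other than the sequential read.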
--
"We can no more blame our loss of freedom on Congressmen than we can
prostitution on pimps. Both simply provide broker services for their
customers." -- Dr. Walter Williams