From: Francisco Reyes <lists(at)stringsutils(dot)com>
To: Jonathan Blitz <jb(at)anykey(dot)co(dot)il>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Adding and filling new column on big table
Date: 2006-05-30 15:58:38
Message-ID: cone.1149004718.536638.48418.1000@zoraida.natserv.net
Lists: pgsql-performance
Jonathan Blitz writes:
> I just gave up in the end and left it with NULL as the default value.
Could you do the updates in batches instead of trying to do them all at
once?
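A batched update could be sketched like this (the table name "bigtable", key "id", and column "newcol" are hypothetical stand-ins; pick a batch size that fits your hardware):

```sql
-- Hypothetical schema: bigtable(id primary key, ..., newcol added with NULL default).
-- Update a limited chunk per round so each transaction stays small.
UPDATE bigtable
   SET newcol = 0
 WHERE id IN (SELECT id
                FROM bigtable
               WHERE newcol IS NULL
               LIMIT 10000);

-- Plain VACUUM (not FULL) between rounds reclaims the dead tuples
-- left behind by the batch without taking an exclusive lock.
VACUUM bigtable;

-- Repeat until the UPDATE reports 0 rows affected.
```
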
Have you ever done a VACUUM FULL on this table?
> There were, in fact, over 2 million rows in the table rather than 1/4 of a
> million so that was part of the problem.
What hardware?
I have a dual CPU opteron with 4GB of RAM and 8 disks in RAID 10 (SATA).
Doing an update on a 5 million record table took quite a while, but it did
finish. :-)
I did a vacuum full before and after, though. That many updates tends to
slow down operations on the table afterwards unless you vacuum it.
Based on what you wrote, it sounds as if you tried a few times and may have
killed the process; those aborted attempts would certainly slow down operations
on that table unless you did a vacuum full.
I wonder if running vacuum analyze against the table as the updates are
running would be of any help.