From: Richard Huxton <dev(at)archonet(dot)com>
To: Chris <dmagick(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pgsql performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: update query taking too long
Date: 2007-06-28 06:39:46
Message-ID: 468357B2.1040205@archonet.com
Lists: pgsql-performance
Chris wrote:
> Tom Lane wrote:
>> Any foreign keys leading to or from that table?
>
> Nope :(
>
>> 3.5 million row updates are not exactly gonna be instantaneous anyway,
>> but only FK checks or really slow user-written triggers would make it
>> take upwards of an hour ...
>
> No triggers, functions.
Of course you really want a trigger on this, since presumably domainname
should always be kept in sync with emailaddress. But that's not the
immediate issue.
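For what it's worth, a minimal sketch of such a trigger (the table name
"customers" is just a placeholder for your real table, and I'm assuming the
same substring() extraction as in the test further down):

-- Placeholder table name; the extraction rule is an assumption too.
CREATE OR REPLACE FUNCTION sync_domainname() RETURNS trigger AS $$
BEGIN
    -- Derive domainname from emailaddress on every insert/update.
    NEW.domainname := substring(NEW.emailaddress
                                from position('@' in NEW.emailaddress));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER customers_sync_domainname
    BEFORE INSERT OR UPDATE ON customers
    FOR EACH ROW EXECUTE PROCEDURE sync_domainname();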
> Table is pretty basic.
>
> I have a few indexes (one on the primary key, one on emailaddress etc)
> but the 'domainname' column is a new one not referenced by any of the
> indexes.
>
> FWIW (while the other update is still going in another window):
What's saturated? Is the system I/O limited or CPU limited? You *should*
be limited by the write speed of your disk with something simple like this.
What happens if you do the following?
CREATE TABLE email_upd_test (id SERIAL, email text, domainname text,
PRIMARY KEY (id));
INSERT INTO email_upd_test (email) SELECT n::text || '@' || n::text FROM
(SELECT generate_series(1,1000000) AS n) AS numbers;
ANALYSE email_upd_test;
\timing
UPDATE email_upd_test SET domainname=substring(email from position('@'
in email));
UPDATE 1000000
Time: 35056.125 ms
That 35 seconds is on a single IDE disk, with no particular tuning done on
that box.
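As an aside, if you want domainname to hold just the part after the '@'
(rather than including the '@', as the substring() above does), split_part()
is a tidy alternative (again, just a guess at what you're after):

UPDATE email_upd_test SET domainname = split_part(email, '@', 2);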
--
Richard Huxton
Archonet Ltd