From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: mlw <markw(at)mohawksoft(dot)com>
Cc: Hackers List <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Transactions vs speed.
Date: 2001-01-14 04:11:57
Message-ID: 12015.979445517@sss.pgh.pa.us
Lists: pgsql-hackers
mlw <markw(at)mohawksoft(dot)com> writes:
> Take this update:
> update table set field = 'X' ;
> This is a very expensive function when the table has millions of rows,
> it takes over an hour. If I dump the database, and process the data with
> perl, then reload the data, it takes minutes. Most of the time is used
> creating indexes.
Hm. CREATE INDEX is well known to be faster than incremental building/
updating of indexes, but I didn't think it was *that* much faster.
Exactly what indexes do you have on this table? Exactly how many
minutes is "minutes", anyway?
You might consider some hack like
drop inessential indexes;
UPDATE;
recreate dropped indexes;
"inessential" being any index that's not UNIQUE (or even the UNIQUE
ones, if you don't mind finding out about uniqueness violations at
the end).
Might be a good idea to do a VACUUM before rebuilding the indexes, too.
It won't save time in this process, but it'll be cheaper to do it then
rather than later.
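For instance, a rough sketch of the sequence (table and index names here
are made up; substitute your own schema):

    -- drop the non-unique (inessential) indexes first
    DROP INDEX foo_field_idx;
    DROP INDEX foo_other_idx;

    -- do the bulk update
    UPDATE foo SET field = 'X';

    -- reclaim the dead tuples left behind by the update
    VACUUM foo;

    -- rebuild the dropped indexes
    CREATE INDEX foo_field_idx ON foo (field);
    CREATE INDEX foo_other_idx ON foo (other_col);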
regards, tom lane
PS: I doubt transactions have anything to do with it.