From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Feite Brekeveld <feite(dot)brekeveld(at)osiris-it(dot)nl>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: slow update but have an index
Date: 2001-08-17 14:18:43
Message-ID: 29467.998057923@sss.pgh.pa.us
Lists: pgsql-general
Feite Brekeveld <feite(dot)brekeveld(at)osiris-it(dot)nl> writes:
> fields. So I made a dump, and hacked the dump into SQL statements like:
> update accounting set status = 'C' where seqno = 1566385;
> ....
> and the other 74,000
> This is awfully slow. How come ? The index on the seqno should give
> speedy access to the record.
I see you've already solved your problem, but for the archives here
are a couple of suggestions:
1. Make sure you actually *are* getting an index scan --- use EXPLAIN
on the query to check. If not, have you VACUUM ANALYZEd lately?
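To illustrate that check, something along these lines would do (table and column names are taken from the quoted query; the exact EXPLAIN output format varies by version):

```sql
-- Refresh planner statistics first, in case they are stale
VACUUM ANALYZE accounting;

-- Verify the plan shows an index scan on seqno, not a sequential scan
EXPLAIN UPDATE accounting SET status = 'C' WHERE seqno = 1566385;
```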
2. Wrap the series of commands into a single transaction to avoid
per-update transaction overhead:
BEGIN;
update ...;
update ...;
...
COMMIT;
This can save a good deal of disk activity, since each commit forces
an fsync.
3. Rethink whether you can't accomplish the same thing in fewer SQL
commands. The overhead of parsing and planning a query is way more
than the time taken to find and update one single record. So, the
more work done per command, the better.
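As a sketch of that third suggestion: rows receiving the same new value can share one command, and a larger batch of (seqno, status) pairs can be staged in a scratch table and applied with PostgreSQL's UPDATE ... FROM join syntax. The table `pending_updates` here is a hypothetical name for such a scratch table:

```sql
-- Many rows getting the same value: one command instead of thousands
UPDATE accounting SET status = 'C'
WHERE seqno IN (1566385, 1566386, 1566390);

-- Or stage (seqno, status) pairs in a scratch table (e.g. via COPY),
-- then apply them all in a single joined UPDATE
UPDATE accounting
SET status = pending_updates.status
FROM pending_updates
WHERE accounting.seqno = pending_updates.seqno;
```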
regards, tom lane