From: | "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | "Bill Thoen" <bthoen(at)gisnet(dot)com> |
Cc: | "pgsql-general General" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: UPDATE and Indexes and Performance |
Date: | 2008-10-15 18:25:28 |
Message-ID: | dcc563d10810151125m62e23d56tc6932d15e4cafcfc@mail.gmail.com |
Lists: pgsql-general
On Wed, Oct 15, 2008 at 10:42 AM, Bill Thoen <bthoen(at)gisnet(dot)com> wrote:
> Does PG (8.1) ever use existing indexes when executing an UPDATE?
>
> I've got some tables with millions of records, and whenever I update a
> column that involves most or all of the records, the EXPLAIN command
> seems to indicate that it isn't using the pre-existing indexes. This
> results in a slow update, which is further slowed by the presence of
> indexes. So when doing a large update, should I just drop the indexes
> first, or is there some good reason to keep them?
You're assuming that the seq scan is what's making it slow. You can
always use the enable_xxx settings to turn off sequential scans and so
on, to see whether it runs faster with an index. Also, you might have a
tuning issue, in which case indexed lookups would actually be faster.
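For example, a rough way to compare plans (the table and column names
here are hypothetical, and note that EXPLAIN ANALYZE actually runs the
update, so wrap it in a transaction you roll back):

    BEGIN;
    SET LOCAL enable_seqscan = off;
    EXPLAIN ANALYZE UPDATE parcels SET status = 'done' WHERE county_id = 42;
    ROLLBACK;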
If you're hitting every record, it's probably best to do a seq scan,
since index scans, as previously mentioned, hit both the index and the
table.
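If you do go the drop-the-indexes route from your question, the usual
pattern is roughly the following (index, table, and column names are
made up for illustration):

    DROP INDEX parcels_county_idx;
    UPDATE parcels SET status = 'done';
    CREATE INDEX parcels_county_idx ON parcels (county_id);
    ANALYZE parcels;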
What's your work_mem set to? What about random_page_cost,
effective_cache_size, and shared_buffers?
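You can check the current values from psql with something like:

    SHOW work_mem;
    SHOW random_page_cost;
    SHOW effective_cache_size;
    SHOW shared_buffers;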