From: "Spiegelberg, Greg" <gspiegelberg(at)cranel(dot)com>
To: "Craig A. James" <cjames(at)modgraph-usa(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: DELETE with filter on ctid
Date: 2007-04-10 13:37:34
Message-ID: 82E74D266CB9B44390D3CCE44A781ED90B6ADE@POSTOFFICE.cranel.local
Lists: pgsql-performance
Craig,
I'm not using a TEMP TABLE in this DELETE; however, I have tried an
ANALYZE prior to the DELETE, and it hardly makes a dent in the time.
Please look at the other follow-up email I just sent for full details.
Greg
-----Original Message-----
From: Craig A. James [mailto:cjames(at)modgraph-usa(dot)com]
Sent: Monday, April 09, 2007 5:58 PM
To: Spiegelberg, Greg
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] DELETE with filter on ctid
Spiegelberg, Greg wrote:
> We have a query which generates a small set of rows (~1,000) which are
> to be used in a DELETE on the same table. The problem we have is that
> we need to join on 5 different columns and it takes far too long.
You may have encountered the same problem I did: you *must* run ANALYZE
on a temporary table before you use it in another query. It's surprising
that this is true even for very small tables (a few hundred to a few
thousand rows), but it is. I had a case where I created a "scratch"
table like yours, and the before/after-ANALYZE performance was the
difference between 30 seconds and a few milliseconds for the same query.
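A minimal sketch of that pattern: the table and column names below are hypothetical, and it assumes PostgreSQL's DELETE ... USING syntax.

```sql
-- Collect the keys of the rows to delete into a scratch table.
CREATE TEMP TABLE doomed AS
SELECT col1, col2, col3, col4, col5
FROM big_table
WHERE some_filter;

-- Without this, the planner has no statistics for the temp table
-- and may pick a poor plan for the join in the DELETE below.
ANALYZE doomed;

-- Delete by joining on all five columns.
DELETE FROM big_table b
USING doomed d
WHERE b.col1 = d.col1
  AND b.col2 = d.col2
  AND b.col3 = d.col3
  AND b.col4 = d.col4
  AND b.col5 = d.col5;
```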
Craig