From: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
To: Gregory Stark <stark(at)enterprisedb(dot)com>
Cc: "Abraham, Danny" <danny_abraham(at)bmc(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Chunk Delete
Date: 2007-11-18 14:18:23
Message-ID: 1195395503.11052.6.camel@PCD12478
Lists: pgsql-general
On Thu, 2007-11-15 at 17:13 +0000, Gregory Stark wrote:
> DELETE
> FROM atable AS x
> USING (SELECT ctid FROM atable LIMIT 50000) AS y
> WHERE x.ctid = y.ctid;
Have you tried to EXPLAIN this one? Last time I tried something
similar, it went for a sequential scan on atable with a filter on
ctid. The other form, using "where ctid = any (array(select ctid
from ..." (see my previous post forwarding Tom's suggestion), went
for a ctid scan, which should be orders of magnitude faster than the
sequential scan for big tables and small chunks.
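For reference, a sketch of the alternative form being described (the
table name atable and the chunk size are placeholders carried over
from the example above; the exact query Tom suggested is in the
earlier post):

```sql
-- Delete one chunk of rows by ctid; the ARRAY(...) subquery is
-- materialized first, so the outer DELETE can use a Tid Scan
-- instead of a sequential scan with a ctid filter.
DELETE FROM atable
WHERE ctid = ANY (ARRAY(SELECT ctid FROM atable LIMIT 50000));
```

Prefixing the statement with EXPLAIN should show a Tid Scan node for
this form, as opposed to a Seq Scan for the USING/join variant.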
Cheers,
Csaba.