From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: mARK bLOORE <mbloore(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: how to delete many rows from a huge table?
Date: 2009-05-26 05:45:24
Message-ID: dcc563d10905252245r1c00c032s3149909795098a86@mail.gmail.com
Lists: pgsql-general
On Mon, May 25, 2009 at 1:29 PM, mARK bLOORE <mbloore(at)gmail(dot)com> wrote:
> The table listing what I want to delete has just the key values.
>
> If I just do
>
> DELETE FROM backlinks b USING bad_links bl WHERE b.key = bl.key;
>
> then it grinds for an hour or so and runs out of memory.
>
> If I do
>
> DELETE FROM backlinks b WHERE b.key IN (SELECT bl.key FROM bad_links
> bl LIMIT 40 OFFSET 0);
>
> it finishes in milliseconds, but if I set the LIMIT to 50, it runs for
> minutes until I kill it. EXPLAIN says that it is doing a sequential
> scan on several of the partitions in the 50 case, but not the 40.
> Auto-vacuuming is running on the DB.
>
> If I put that DELETE into a procedure and loop on the offset, it acts
> like the 50 case, even if I set the LIMIT to 1.
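
One approach to the loop described above is to drop OFFSET entirely: OFFSET n still has to scan and discard the first n rows on every pass, so each iteration gets slower. A sketch of a client-driven batch loop instead, assuming `key` is unique in bad_links (the batch size of 1000 is an arbitrary starting point):

```sql
-- Delete one batch of matching rows, then remove those keys from
-- bad_links so the next pass picks up a fresh batch. ORDER BY makes
-- the two LIMIT subqueries select the same set of keys.
BEGIN;
DELETE FROM backlinks b
 WHERE b.key IN (SELECT bl.key FROM bad_links bl ORDER BY bl.key LIMIT 1000);
DELETE FROM bad_links
 WHERE key IN (SELECT key FROM bad_links ORDER BY key LIMIT 1000);
COMMIT;
-- Repeat from the client until both DELETEs report 0 rows.
```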
What does EXPLAIN show for those three queries? I'm guessing that
cranking up work_mem may help.
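
For example, work_mem can be raised for the current session only and the plan re-checked before actually running anything (256MB is an assumed value; tune it to the machine's free RAM):

```sql
-- Session-local setting; does not touch postgresql.conf.
SET work_mem = '256MB';
EXPLAIN
DELETE FROM backlinks b USING bad_links bl WHERE b.key = bl.key;
-- With enough work_mem you'd hope to see a hash join on key rather
-- than a nested loop driving seq scans over the partitions.
```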