From: "Peter J. Holzer" <hjp-pgsql(at)hjp(dot)at>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Help with large delete
Date: 2022-04-16 14:47:20
Message-ID: 20220416144720.bzpqnjmno324ucb3@hjp.at
Lists: pgsql-general
On 2022-04-16 08:25:56 -0500, Perry Smith wrote:
> Currently I have one table that mimics a file system. Each entry has
> a parent_id and a base name where parent_id is an id in the table that
> must exist in the table or be null with cascade on delete.
>
> I’ve started a delete of a root entry with about 300,000 descendants.
> The table currently has about 22M entries and I’m adding about 1600
> entries per minute still. Eventually there will not be massive
> amounts of entries being added and the table will be mostly static.
>
> I started the delete before from a terminal that got detached. So I
> killed that process and started it up again from a terminal less
> likely to get detached.
>
> My question is basically how can I make life easier for Postgres?
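The setup described above can be sketched as follows. Table and column names (`fs_entry`, `basename`) are my guesses, not from the original post, and I'm using SQLite via Python's `sqlite3` only because it is self-contained and implements the same self-referential `ON DELETE CASCADE` semantics as PostgreSQL:

```python
# Hypothetical sketch of the schema described in the quoted mail: one
# table mimicking a file system, where parent_id references the same
# table and deletes cascade down to all descendants.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this; PostgreSQL doesn't
conn.execute("""
    CREATE TABLE fs_entry (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES fs_entry(id) ON DELETE CASCADE,
        basename  TEXT NOT NULL
    )
""")
# A tiny tree: root -> subdir -> file
conn.executemany(
    "INSERT INTO fs_entry (id, parent_id, basename) VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "subdir"), (3, 2, "file.txt")],
)
# Deleting the root entry removes every descendant in one statement.
conn.execute("DELETE FROM fs_entry WHERE id = 1")
remaining = conn.execute("SELECT count(*) FROM fs_entry").fetchone()[0]
print(remaining)  # 0
```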
Deleting 300k rows doesn't sound that bad. Neither does recursively
finding those 300k rows, although if you have a very biased distribution
(many nodes with only a few children, but some with hundreds of
thousands or even millions of children), PostgreSQL may not find a good
plan.
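To see how many rows such a delete would touch, the descendants can be enumerated with a recursive CTE. The `WITH RECURSIVE` query below works verbatim in PostgreSQL; again I'm running it under SQLite via `sqlite3` only to keep the example self-contained, and the names are guesses:

```python
# Count the descendants of a root node before deleting it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fs_entry (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER,
        basename  TEXT NOT NULL
    )
""")
conn.executemany(
    "INSERT INTO fs_entry VALUES (?, ?, ?)",
    [(1, None, "root"), (2, 1, "a"), (3, 1, "b"), (4, 2, "c"), (5, None, "other")],
)
(n_descendants,) = conn.execute("""
    WITH RECURSIVE tree(id) AS (
        SELECT id FROM fs_entry WHERE id = ?          -- the root being deleted
        UNION ALL
        SELECT f.id FROM fs_entry f JOIN tree t ON f.parent_id = t.id
    )
    SELECT count(*) - 1 FROM tree                     -- exclude the root itself
""", (1,)).fetchone()
print(n_descendants)  # 3
```

(In PostgreSQL, an index on `parent_id` is usually what keeps both this recursive lookup and the cascaded DELETE's internal foreign-key checks from degenerating into repeated sequential scans.)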
So, as almost always when performance is an issue:
* What exactly are you doing?
* What is the execution plan?
* How long does it take?
hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp(at)hjp(dot)at  |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"
Next message: Tom Lane (2022-04-16 15:33:57), Re: Help with large delete
Previous message: Rob Sargent (2022-04-16 13:50:00), Re: Help with large delete