From: Michael Lewis <mlewis(at)entrata(dot)com>
To: "Dirschel, Steve" <steve(dot)dirschel(at)thomsonreuters(dot)com>
Cc: "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Troubleshooting a long running delete statement
Date: 2021-10-07 04:44:17
Message-ID: CAHOFxGoG1knuZUioBu-FjL-daF9t6LY1MchQ=F9U6fApW+OJaQ@mail.gmail.com
Lists: pgsql-performance
On Wed, Oct 6, 2021 at 12:00 PM Dirschel, Steve <
steve(dot)dirschel(at)thomsonreuters(dot)com> wrote:
> Here is what I could see in Postgres:
>
> - When I did an explain on the delete I could see it was full scanning
> the table. I did a full scan of the table interactively in less than 1
> second, so the long runtime was not due to the full table scan.
>
>
If finding the rows is fast, but actually deleting them is slow and perhaps
won't even finish, I would strongly consider adding a where clause such
that only a small fraction of the deletes is done (perhaps in a
transaction that gets rolled back), and run explain (analyze, buffers) on
that modified command. Yes, the planner may decide to use an index to find
which rows to delete, but if finding the rows was already fast and it is
the non-obvious work that we want to profile, then it should be fine to do
1% of the deletes and see how they perform and where the time goes.
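A minimal sketch of that approach, assuming a hypothetical table `my_table` with an integer `id` column (substitute your real table and the original delete predicate; the modulo trick is just one way to sample roughly 1% of the rows):

```sql
-- Run the sampled delete inside a transaction so nothing is lost.
BEGIN;

EXPLAIN (ANALYZE, BUFFERS)
DELETE FROM my_table
WHERE status = 'expired'   -- hypothetical original predicate
  AND id % 100 = 0;        -- restrict to ~1% of the matching rows

-- Discard the deletes; we only wanted the plan and the timing/buffer stats.
ROLLBACK;
```

The ANALYZE option actually executes the statement, and BUFFERS shows how much I/O each plan node did, so time spent outside the scan itself (index maintenance, trigger firing, foreign-key checks) becomes visible.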