Re: Temporarily very slow planning time after a big delete

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: walter(at)carezone(dot)com
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Temporarily very slow planning time after a big delete
Date: 2019-05-21 01:00:38
Message-ID: CAKJS1f8NOLra_4itso0TGgY=1rZ3y8vjAcEjBjduEFs1m+ea2Q@mail.gmail.com
Lists: pgsql-performance

On Tue, 21 May 2019 at 12:44, Walter Smith <walter(at)carezone(dot)com> wrote:
>
> We had a mysterious (to us) slowdown today that I'm hoping someone can explain just based on PG's principles of operation. It got better by itself so it seems like it was "normal" behavior -- I just don't know what behavior it was exhibiting.
>
> We have a table of user notifications containing about 80 million rows. It gets a lot of inserts continually, and is cleaned up once a day. There are no updates. In all history there have been about 330 million rows created.
>
> Today we deleted about 15 million rows in one transaction from this table. Immediately afterwards, a particular SELECT started running very slowly -- 500 to 3000 ms rather than the usual <1ms.
>
> We did an EXPLAIN ANALYZE on this select and it was still doing an index scan as usual. The *planning time* for the query is what had gotten slow. The query itself was still executing in <1ms.

It would be good to know which version you're running here. It
sounds very much like get_actual_variable_range() is the culprit.
Basically, if a constant value that's being used by the planner to
determine row estimates falls outside the statistics' histogram, and a
btree index exists that we can use to look up the actual bound of the
data, then we do so in that function. If you've just deleted a bunch
of rows, then that index scan may have to traverse a bunch of dead
tuples before finding the first live tuple. This code has changed a
few times recently -- see commits fccebe421 and 3ca930fc3 -- which is
why your version is of interest.

Apart from that, if you want to confirm that's the issue and you just
want it fixed, just VACUUM the table. You should likely be doing that
anyway directly after your bulk delete.
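For anyone wanting to see the effect for themselves, a sketch along
these lines should reproduce it (the table and column names here are
invented for illustration, not taken from the original report):

```sql
-- 1. A bulk delete leaves many dead tuples at one end of the index.
DELETE FROM notifications WHERE created_at < now() - interval '30 days';

-- 2. A query whose WHERE-clause constant falls outside the histogram
--    makes the planner probe the btree index for the actual bound,
--    scanning past the dead tuples first. Watch "Planning Time" in
--    the output -- the execution time itself stays fast.
EXPLAIN ANALYZE
SELECT * FROM notifications WHERE created_at < '2019-01-01';

-- 3. VACUUM removes the dead index entries, after which planning
--    time should return to normal.
VACUUM notifications;
```

The planning-time line in the EXPLAIN ANALYZE output before and after
the VACUUM is the thing to compare.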

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
