Re: Specific query taking time to process

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Duncan Whitham <dwhitham(at)zaizi(dot)com>
Cc: Fahiz Mohamed <fahiz(at)netwidz(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Michael Lewis <mlewis(at)entrata(dot)com>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Specific query taking time to process
Date: 2020-01-30 16:35:50
Message-ID: 22539.1580402150@sss.pgh.pa.us
Lists: pgsql-performance

Duncan Whitham <dwhitham(at)zaizi(dot)com> writes:
> We now only need 1 environment as we can replicate the performance problem
> on a copy of live – snapshot/restore from AWS of live. We now have a vacuum
> analyse running every night on the 3 tables in question on live – to
> eliminate bloat and inaccurate stats as the root of the problem.

Hmm, doesn't seem like that's getting the job done. I can see at
least one serious misestimate in these plans:

> ->  Bitmap Heap Scan on alf_node_aspects aspect_1
>       (cost=3420.59..418372.63 rows=163099 width=8)
>       (actual time=1.402..5.243 rows=4909 loops=1)
>     Recheck Cond: (qname_id = 251)

It doesn't seem to me that such a simple condition ought to be
misestimated by a factor of 30, so either you need to crank up
the stats target for this column or you need to analyze the
table more often.
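
For reference, a minimal sketch of both fixes (the target of 1000 here is
just illustrative; the right value depends on how skewed the column is):

    -- raise the sample size ANALYZE uses for this one column
    ALTER TABLE alf_node_aspects ALTER COLUMN qname_id SET STATISTICS 1000;
    -- re-collect statistics so the planner sees the new target
    ANALYZE alf_node_aspects;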

The other rowcount estimates don't seem so awful, but this one is
contributing to the planner thinking that "SubPlan 1" is going to
be very expensive, which probably accounts for it trying to avoid
what's actually a cheap plan.
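
One way to check what the planner is working from is to look at the stats
ANALYZE stored for the column; a sketch (qname_id = 251 will only show up
in most_common_vals if the sample saw it often enough):

    -- inspect the collected statistics for the misestimated column
    SELECT n_distinct, most_common_vals, most_common_freqs
    FROM pg_stats
    WHERE tablename = 'alf_node_aspects' AND attname = 'qname_id';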

regards, tom lane
