Re: Bad plan on a huge table query

From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Daniel Cristian Cruz <danielcristian(at)gmail(dot)com>
Cc: Alban Hertroys <haramrae(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Bad plan on a huge table query
Date: 2013-03-21 22:10:36
Message-ID: CAMkU=1yiprngkJhMB-ebhwZO2YhEfLWi9buGs8a65q9GhCzeAA@mail.gmail.com
Lists: pgsql-general

On Thu, Mar 21, 2013 at 12:30 PM, Daniel Cristian Cruz <
danielcristian(at)gmail(dot)com> wrote:

>
>
>> Are you using autovacuum? If so, you probably need to tune it more
>> aggressively. For the short term, running an ANALYSE on those tables should
>> at least get you more accurate query plans.
>>
>
> I've done it; with default_statistics_target on 1000, 100 and 200 (left it
> on 200, which was production config too).
>

You are doing a manual ANALYZE each time you change
default_statistics_target, right?
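
For example, something along these lines after each change (the 200 here is
just whatever value you are currently testing):

set default_statistics_target = 200;
analyze aula_confirmacao;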

Can you do an "analyze verbose aula_confirmacao" and see if the output is
as expected?

What happens if you do:

explain (analyze, buffers)
select count(*) from aula_confirmacao where
inicio::DATE BETWEEN DATE_TRUNC('YEAR', CURRENT_TIMESTAMP) AND
CURRENT_TIMESTAMP;

From your explain plan http://explain.depesz.com/s/GDJn, step 9, the row
estimate for that simple query appears to be off by a factor of about
23 (101508/4442), yet there is no apparent reason for it to be that bad
other than bad statistics. There are no other filters, so cross-column
correlations can't be throwing it off. So why is it so far off?
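
If you want to double-check when that table actually got analyzed and how
many live rows the stats system thinks it has, something like this should
show it (just the standard stats view, nothing specific to your schema):

select relname, n_live_tup, last_analyze, last_autoanalyze
from pg_stat_user_tables
where relname = 'aula_confirmacao';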

Also, it would be nice to see:

select * from pg_stats where tablename = 'idx_aula_confirmacao_2' \x\g\x

(which I assume is a function-based index)
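
If that guess is right, the index is presumably defined roughly like this
(the exact expression is a guess on my part), which is why pg_stats keys
those statistics by the index name rather than the table name:

create index idx_aula_confirmacao_2 on aula_confirmacao ((inicio::date));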

Cheers,

Jeff
