Re: 'Interesting' prepared statement slowdown on large table join

From: Shaun Thomas <sthomas(at)peak6(dot)com>
To: "Prodan, Andrei" <Andrei(dot)Prodan(at)awinta(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: 'Interesting' prepared statement slowdown on large table join
Date: 2011-05-11 15:55:27
Message-ID: 4DCAB16F.2060300@peak6.com
Lists: pgsql-performance

On 05/11/2011 06:08 AM, Prodan, Andrei wrote:

> Index Scan using attr_name_value on big_table (cost=0.00..22.85
> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)

Holy inaccurate statistics, Batman!

Try increasing your statistics target for attr_name and attr_value in
your big table. I know you said you set it to 300 on party_id, but what
happened here is that the optimizer thought this particular name/value
combo in your big table would return only 4 rows, when it actually
returned over 900,000 -- it was horribly, horribly wrong.
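
Something along these lines should do it (the 1000 below is just a
starting point, not a magic number -- tune it to your data):

  ALTER TABLE big_table ALTER COLUMN attr_name SET STATISTICS 1000;
  ALTER TABLE big_table ALTER COLUMN attr_value SET STATISTICS 1000;
  ANALYZE big_table;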

You might think about bumping up your default_statistics_target anyway
to prevent problems like this in general, but definitely increase it on
those two columns and re-analyze. My guess is that your big_table is big
enough that each ANALYZE gets a different random sample of the various
attr_name and attr_value combinations, so occasionally it samples too
few of a common combo and starts badly skewing query plans.
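
If you do go the global route, that's the default_statistics_target
setting in postgresql.conf, followed by a reload and an ANALYZE (300 is
just an example value; per-column SET STATISTICS still overrides it):

  default_statistics_target = 300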

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas(at)peak6(dot)com
