From: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | Steve McLellan <smclellan(at)mintel(dot)com> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Query performance over a large proportion of data |
Date: | 2009-03-10 23:19:53 |
Message-ID: | dcc563d10903101619j4c668570m1e3fdc9a539f7acd@mail.gmail.com |
Lists: | pgsql-performance |
On Tue, Mar 10, 2009 at 3:12 PM, Steve McLellan <smclellan(at)mintel(dot)com> wrote:
>
> Nested Loop (cost=466.34..192962.24 rows=15329 width=12) (actual
> time=13653.238..31332.113 rows=131466 loops=1)
Both your query plans end with this nested loop join, which accounts for
about half of the query's runtime. Notice that the row estimate is off by
a factor of about 10 here (15,329 estimated vs. 131,466 actual), which
means a nested loop might not be such a good choice for this. Try
increasing default_statistics_target and re-analyzing to see if that
helps. 1000 is the max; you can give that a shot right off to see if it
helps. If it does, drop it back down until the estimates start to go off
again, and stop there.
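As a sketch of what that looks like (table and column names here are hypothetical, not from your plan), you can either raise the target session-wide or just for the column whose distribution the planner is misjudging:

```sql
-- Session-wide test (hypothetical names; pick the value that fixes the estimate):
SET default_statistics_target = 1000;
ANALYZE;

-- Or raise it for just the skewed column, so only that table pays the
-- larger ANALYZE/planning cost:
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 1000;
ANALYZE some_table;
```

Then re-run EXPLAIN ANALYZE and compare the estimated vs. actual row counts on that nested loop.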
For a quicker test, you can set enable_nestloop = off in a psql session
and then run the query by hand to see if that helps.
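That test only affects your current session, so it's safe to try; something like (the SELECT is a stand-in for your actual query):

```sql
-- Disable nested loops for this session only, then re-run the slow query:
SET enable_nestloop = off;
EXPLAIN ANALYZE SELECT ...;  -- your original query here

-- Restore the default when done:
RESET enable_nestloop;
```

If the plan with nested loops disabled is much faster, that confirms the bad row estimate is steering the planner wrong; the stats-target fix above is the durable solution rather than leaving this off globally.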