From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Matthew Wakeling <matthew(at)flymine(dot)org>, pgsql-performance(at)postgresql(dot)org
Subject: Re: merge join killing performance
Date: 2010-05-19 20:27:05
Message-ID: AANLkTimfo4qryWUFj6sYRSlQrFZpgLaqB3__Vt0y2Rdl@mail.gmail.com
Lists: pgsql-hackers, pgsql-performance
On Wed, May 19, 2010 at 10:53 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Matthew Wakeling <matthew(at)flymine(dot)org> writes:
>> On Tue, 18 May 2010, Scott Marlowe wrote:
>>> Aggregate (cost=902.41..902.42 rows=1 width=4)
>>> -> Merge Join (cost=869.97..902.40 rows=1 width=4)
>>> Merge Cond: (f.eid = ev.eid)
>>> -> Index Scan using files_eid_idx on files f
>>> (cost=0.00..157830.39 rows=3769434 width=8)
>
>> Okay, that's weird. How is the cost of the merge join only 902, when the
>> cost of one of the branches is 157830, and there is no LIMIT?
>
> It's apparently estimating (wrongly) that the merge join won't have to
> scan very much of "files" before it can stop because it finds an eid
> value larger than any eid in the other table. So the issue here is an
> inexact stats value for the max eid.
I changed stats target to 1000 for that field and still get the bad plan.
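For reference, raising a per-column statistics target and refreshing the stats
is normally done along these lines (the table and column names below are
assumptions taken from the plan above, not the exact commands I ran):

    -- bump the number of samples kept for this column's statistics
    ALTER TABLE files ALTER COLUMN eid SET STATISTICS 1000;
    -- recollect stats so the planner sees the new, finer-grained histogram
    ANALYZE files;
    -- the same can be done for eid on the other side of the join (aliased ev above)

Even with that, the planner still picks the merge join here.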