From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Csaba Nagy <nagy(at)ecircle-ag(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Searching for the cause of a bad plan
Date: 2007-09-24 17:55:22
Message-ID: 1190656522.4181.228.camel@ebony.site
Lists: pgsql-performance
On Mon, 2007-09-24 at 16:04 +0200, Csaba Nagy wrote:
> On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:
> > Csaba, please can you copy that data into fresh tables, re-ANALYZE and
> > then re-post the EXPLAINs, with stats data.
>
> Well, I can of course. I actually tried to generate some random data
> with similar record count and relations between the tables (which I'm
> not sure I succeeded at), without the extra columns, but it was happily
> yielding the nested loop plan. So I guess I really have to copy the
> whole data (several tens of GB).
>
> But from my very limited understanding of what information is available
> for the planner, I thought that the record count estimated for the join
> between table_a and table_b1 on column b should be something like
>
> (estimated record count in table_a for value "a") * (weight of "b" range
> covered by table_b1 and table_a in common) / (weight of "b" range
> covered by table_a)
There's no such code that I'm aware of. It sounds like a good idea,
though. I'm sure we could do something with the histogram values, but
we don't in the default selectivity functions.
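
For concreteness, here is a minimal sketch of the estimate Csaba
describes, assuming "weight" means the width of a column's observed
[min, max] range with values spread uniformly inside it. The function
name and all numbers are hypothetical, and this is not PostgreSQL's
actual selectivity code:

```python
# Illustrative sketch (not PostgreSQL source): estimate join output rows
# by overlapping the b-ranges of table_a and table_b1, as proposed above.

def estimate_join_rows(rows_a_for_value_a, b_min_a, b_max_a,
                       b_min_b1, b_max_b1):
    """Estimate rows from joining table_a (filtered to value "a") with
    table_b1 on column b, assuming b is uniform within each table's
    observed [min, max] range (a hypothetical simplification)."""
    # Width of table_a's b-range; guard against a zero-width range.
    range_a = b_max_a - b_min_a
    if range_a <= 0:
        return rows_a_for_value_a  # degenerate range: assume full overlap
    # Width of the overlap between the two tables' b-ranges.
    overlap = max(0, min(b_max_a, b_max_b1) - max(b_min_a, b_min_b1))
    # (rows for value "a") * (shared b-range) / (table_a's b-range)
    return rows_a_for_value_a * overlap / range_a

# Hypothetical numbers: 10,000 rows of table_a match value "a"; table_a's
# b values span [0, 1000] while table_b1's span [0, 100], so only a tenth
# of table_a's b-range can find join partners in table_b1.
print(estimate_join_rows(10_000, 0, 1000, 0, 100))  # -> 1000.0
```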
--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com