From: Frank Schoep <frank(at)ffnn(dot)nl>
To: Michael Stone <mstone+postgres(at)mathom(dot)us>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Bad planner decision - bitmap scan instead of index
Date: 2007-08-17 15:59:30
Message-ID: AB3E3291-7CC3-4EFD-A755-6E1D91BFEADB@ffnn.nl
Lists: pgsql-performance
On Aug 17, 2007, at 5:23 PM, Michael Stone wrote:
> On Fri, Aug 17, 2007 at 10:43:18AM +0200, Frank Schoep wrote:
>> On Aug 17, 2007, at 9:28 AM, hubert depesz lubaczewski wrote:
>> (cost=0.00..37612.76 rows=14221 width=48) (actual
>> time=0.125..13.686 rows=2000 loops=1)
> [snip]
>> I'm not an expert at how the planner decides which query plan to use,
>
> Neither am I. :) I do notice that the estimated number of rows is
> significantly larger than the real number; you may want to bump up
> your statistics a bit to see if it can estimate better.
I think the actual row count of 2000 comes from the LIMIT (100)
and OFFSET (1900) clauses. All ~14K matching rows have to be sorted,
but only the first 2000 (OFFSET + LIMIT) have to actually be fetched
before PostgreSQL can satisfy the request.
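To make that concrete, here's a sketch of the query shape under discussion (the table and column names beyond 'letter' are my assumptions, not from the thread):

```sql
-- Hypothetical query shape for this thread: before the executor can
-- discard the first 1900 rows and return the next 100, it must
-- produce OFFSET + LIMIT = 1900 + 100 = 2000 rows in sorted order.
-- That is why the plan shows "actual ... rows=2000" even though the
-- estimate covers all ~14K matching rows.
SELECT *
FROM   songs                -- hypothetical table name
WHERE  letter = 'A'         -- the 'letter' column from the thread
ORDER  BY name              -- hypothetical sort column
LIMIT  100 OFFSET 1900;
```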
A few weeks ago I set default_statistics_target to 50 to try to
nudge the planner into making better judgments, but apparently this
doesn't influence it in the desired way.
Should I try upping that value even more? I took 50 because the
'letter' column only has uppercase letters or digits (36 different
values). 50 seemed a good value for reasonable estimates.
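For what it's worth, instead of raising default_statistics_target globally, the target can be raised for just the one column and then re-analyzed — a minimal sketch, assuming a table name (the table here is hypothetical; only the 'letter' column is from the thread):

```sql
-- Raise the statistics target for the 'letter' column only, rather
-- than changing default_statistics_target for the whole cluster.
ALTER TABLE songs ALTER COLUMN letter SET STATISTICS 100;

-- Rebuild the statistics so the planner actually sees the new target.
ANALYZE songs;
```

The new statistics only take effect after the ANALYZE; changing the target alone does nothing until the table is re-analyzed.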
Sincerely,
Frank