Re: Use of Functional Indexes and Planner estimates

From: Greg Stark <gsstark(at)mit(dot)edu>
To: "Scott Marlowe" <smarlowe(at)qwest(dot)net>
Cc: "Russell Smith" <mr-russ(at)pws(dot)com(dot)au>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Use of Functional Indexes and Planner estimates
Date: 2004-06-09 15:59:41
Message-ID: 87n03cvi1e.fsf@stark.xeocode.com
Lists: pgsql-performance


"Scott Marlowe" <smarlowe(at)qwest(dot)net> writes:

> > -> Seq Scan on rules
> > (cost=0.00..22296.32 rows=11294 width=12)
> > (actual time=540.149..2047.308 rows=1 loops=1)

> Simple, the planner is choosing a sequential scan when it should be
> choosing an index scan. This is usually because random_page_cost is set
> too high, at the default of 4. Try settings between 1.2 and 2.x or so
> to see how that helps. Be sure and test with various queries of your
> own to be sure you've got about the right setting.

Unless you make random_page_cost about .0004 (4/11294) it isn't going to cost
this query right. (That's a joke, don't do it :) The real problem is that the
planner thinks there are 11,000 records matching the where clause when in fact
there is only 1.
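
If you want to double-check that an index scan would actually win here, you
can force one for a single session and compare timings. A rough sketch, using
the table and columns from the plan above (the ... stands for whatever your
real where clause is):

set enable_seqscan = off;   -- session-only, just for testing
explain analyze
select rulename, redirect
  from rules
 where ...;
reset enable_seqscan;

If the forced index plan isn't actually faster, no amount of fiddling with
random_page_cost will help; the row estimate is the thing to fix.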

If you know an upper bound on how many records the query should be finding,
you might try a kludge involving putting a LIMIT inside the group by, i.e.
something like

select rulename, redirect
  from (select rulename, redirect
          from ...
         where ...
         limit 100) as kludge
 group by rulename, redirect

This would at least tell the planner not to expect more than 100 rows and to
take the plan likely to produce the first 100 rows fastest.

But this has the disadvantage of uglifying your code and introducing an
arbitrary limit. When 7.5 comes out you'll want to rip this out.
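
(As I understand it, the reason 7.5 should make the kludge unnecessary is
that ANALYZE is slated to collect statistics on expression indexes, so the
planner would get a real estimate for the functional predicate. Something
like the following, with made-up names just to illustrate:

create index rules_func_idx on rules (lower(rulename));
analyze rules;   -- on 7.5 this should also gather stats on lower(rulename)

After that the plain query should estimate close to the true row count and
pick the index scan on its own.)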

--
greg
