From: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>
To: Michael Brusser <michael(at)synchronicity(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pgsql-Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Slow query - index not used
Date: 2004-01-21 13:42:54
Message-ID: Pine.LNX.4.44.0401211435120.17713-100000@zigo.dhs.org
Lists: pgsql-hackers
On Wed, 21 Jan 2004, Michael Brusser wrote:
> So 'rows' values are incorrect.
You can increase the statistics target for that column with ALTER TABLE
and probably get better estimates.
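For example (the table and column names here are just placeholders for
your own):

  ALTER TABLE mytable ALTER COLUMN mycolumn SET STATISTICS 100;
  ANALYZE mytable;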
> Also looking at queries with 'KnowledgeBase'
> and 'OtherParam' - does seq. scan make sense?
>
> I mean 'rows' has value of about 5000 records from the total of 75000
> records on the table.
It doesn't sound so strange to me. I don't know exactly what limits
postgresql uses, but it probably needs to fetch every page in the table to
find all those 5000 records. If it has to do that, then an index scan
would not help much (it might even make things slower).
It's easy to test what happens if it does an index scan instead of the
seq. scan. Just do SET enable_seqscan TO false; before you try the query.
Then you can compare the times with and without the index scan.
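Something like this (the query is just a placeholder, substitute one of
your own):

  SET enable_seqscan TO false;
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE mycolumn = 'KnowledgeBase';
  SET enable_seqscan TO true;
  EXPLAIN ANALYZE SELECT * FROM mytable WHERE mycolumn = 'KnowledgeBase';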
Remember, even if it finds a row in the index, it still has to fetch the
actual row from the table as well. So if it needs to fetch all pages from
the table, the total amount of IO is "all of the table" + "the relevant
part of the index". Whether that is faster or not depends on such things
as whether the data is already cached in memory. Setting
effective_cache_size correctly lets postgresql take into account how much
file cache you have, which can affect the plan.
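For example, in postgresql.conf (the value is just an illustration; the
setting is measured in 8 kB disk pages, so this is roughly 1 GB):

  # assume the OS file cache holds about 1 GB of postgres data
  effective_cache_size = 131072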
--
/Dennis Björklund