From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: index usage (and foreign keys/triggers)
Date: 2003-02-28 16:47:09
Message-ID: 87heaomjxu.fsf@stark.dyndns.tv
Lists: pgsql-general
"scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> Well, my experience has been that accidentally picking an index lookup
> when a sequential scan would be better may cost up to twice as much as the
> seq scan, due to the slower random access. And usually these are queries
> you expect to be slow anyway, like something that selects 90% of a 10M row
> table.
My experience is similar. Usually when it picks an index scan where a sequential
scan would be better, further experimentation shows the two are roughly the
same anyway. Whereas when it fails the other way it can be very, very bad for
an OLTP system like a web server.
> But, making the mistake the other way can be much more costly. I've
> watched queries that took 30 or more seconds with a seq scan drop to sub
> second with indexes. So, don't be afraid about dropping below 2.
However, I've just tried something new and it seems to be helping. I tried
raising cpu_tuple_cost instead. It now chooses nested loops with index lookups
far more aggressively, even when random_page_cost isn't insanely low.
I've actually raised cpu_tuple_cost to 0.1. That's a factor of 10 higher than
the default of 0.01. That seems to be what is required to get the ratio of
costs between nested loops and merge joins to line up with the ratio of actual
times.
I wonder if other people see similar behaviour?
--
greg