From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Christoph Haller <ch(at)rodos(dot)fzk(dot)de>, paul(at)tmsl(dot)demon(dot)co(dot)uk (Paul Thomas), pgsql-sql(at)postgresql(dot)org
Subject: Re: Index not used - now me
Date: 2004-02-09 16:38:42
Message-ID: 20551.1076344722@sss.pgh.pa.us
Lists: pgsql-sql
Greg Stark <gsstark(at)mit(dot)edu> writes:
> You could also try lowering random_page_cost. Some people find as low as 1.2
> or so to be useful, but that would almost certainly be lying to postgres about
> the costs of random access and would cause it to use index scans aggressively
> even when they're not faster.
If the database is small enough to mostly fit in kernel disk cache, then
a very low random_page_cost is telling the truth, not lying. One might
also think about raising the various cpu_xxx cost parameters
(cpu_tuple_cost, cpu_index_tuple_cost, cpu_operator_cost), since CPU
effort becomes proportionally much more significant when "I/O" only
requires a trip to kernel buffers.
However, Christoph's example looks to me like a classic case of testing
a toy table and assuming the results will extrapolate to production-size
cases. This table is small enough that it hardly matters which method
the planner chooses. I'd counsel being very wary about adjusting the
cost settings based on only this example.
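The reliable way to find out is to load the table to something like
production size, ANALYZE it, and compare the two plans' actual runtimes
directly. A rough sketch, with a made-up table and query standing in
for the real ones:

    -- Hypothetical table and query; substitute the real ones.
    -- Load the table to roughly production size first.
    ANALYZE orders;
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    SET enable_seqscan = off;   -- force the planner onto the index plan
    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
    SET enable_seqscan = on;

If the forced index scan doesn't actually win at a realistic row count,
no amount of cost-parameter fiddling should make the planner prefer it.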
regards, tom lane