From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
Cc: PostGreSQL <pgsql-general(at)postgresql(dot)org>
Subject: Re: Forcing use of indexes
Date: 2003-04-03 05:09:33
Message-ID: 25901.1049346573@sss.pgh.pa.us
Lists: pgsql-general
Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my> writes:
> If my O/S has a cache of say 1GB and my DB is < 1GB and is totally in cache
> would setting effective_cache_size to 1GB make the optimizer decide on
> index usage just as setting random_page_cost to 1?
I don't feel like going through the equations at the moment (it's open
source; read for yourself), but certainly if table+index are smaller than
effective_cache_size the cost estimate should be pretty low.
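[For readers who do want the equations: the planner's cache-size adjustment follows the Mackert-Lohman approximation described in the comments of src/backend/optimizer/path/costsize.c. The sketch below is an illustrative reconstruction of that piecewise formula as given in those comments, not the actual PostgreSQL source; variable names follow the comments (T = table pages, b = cache size in pages, i.e. effective_cache_size).]

```python
def pages_fetched(tuples, T, b):
    """Rough sketch of the Mackert-Lohman estimate of how many
    table pages an index scan must actually read.

    tuples -- number of index entries to be fetched (Ns)
    T      -- number of pages in the table
    b      -- buffer cache size in pages (effective_cache_size)
    """
    Ns = float(tuples)
    T = float(T)
    b = float(b)
    if T <= b:
        # Whole table fits in cache: repeated visits to a page are free,
        # and we can never fetch more than T distinct pages.
        return min(2.0 * T * Ns / (2.0 * T + Ns), T)
    lim = 2.0 * T * b / (2.0 * T - b)
    if Ns <= lim:
        # Table larger than cache, but few enough fetches that
        # the cache still absorbs most repeats.
        return 2.0 * T * Ns / (2.0 * T + Ns)
    # Cache overflows: beyond the limit, a fraction (T-b)/T of
    # further fetches miss the cache.
    return b + (Ns - lim) * (T - b) / T
```

[The point Tom makes falls out directly: when T is well below b (table plus index fully cached), the estimated page fetches, and hence the random-I/O component of the cost, stay small, which favors the index scan.]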
> If random page cost is high but so is effective_cache_size does postgresql
> use sequential scans first time round and then index scans second time
> round if everything cached?
No, there is no notion of "first time round" vs "second time round".
> But the main thing is: is it hard for the optimizer to tell whether a
> DB/table/index is completely in effective_cache_size?
It knows the table & index size as last recorded by VACUUM. This might
not match up with current reality, of course...
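[Those VACUUM-recorded sizes are visible in the pg_class system catalog as relpages and reltuples; 'mytable' and 'mytable_pkey' below are placeholder names:]

```sql
-- Table and index sizes as last recorded by VACUUM (or ANALYZE)
SELECT relname, relkind, relpages, reltuples
FROM pg_class
WHERE relname IN ('mytable', 'mytable_pkey');
```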
regards, tom lane