From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Oliver Elphick <olly(at)lfix(dot)co(dot)uk>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Script to compute random page cost
Date: 2002-09-10 15:28:23
Message-ID: 200209101528.g8AFSOW14990@candle.pha.pa.us
Lists: pgsql-hackers
Oliver Elphick wrote:
> Available memory (512M) exceeds the total database size, so sequential
> and random are almost the same for the second and subsequent runs.
>
> Since, in production, I would hope to have all active tables permanently
> in RAM, would there be a case for my using a page cost of 1 on the
> assumption that no disk reads would be needed?
Yes, in your case random_page_cost would be 1 once the data gets into
RAM.
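
For a fully cached setup like the one described, the cost can be lowered either per session or in postgresql.conf. A minimal sketch (the value 1 follows the advice above; tune to taste):

```sql
-- Per-session override, handy for testing planner behavior:
SET random_page_cost = 1;

-- Or set it cluster-wide in postgresql.conf:
-- random_page_cost = 1
```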
In fact, that is why I used only /data/base for testing, so installations
where the data can be loaded into RAM will see lower random page costs.
I could just create a random file and test on that, but it wouldn't be
the same.
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073