From: | Bruce Momjian <bruce(at)momjian(dot)us> |
---|---|
To: | Greg Smith <greg(at)2ndQuadrant(dot)com> |
Cc: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: random_page_cost vs seq_page_cost |
Date: | 2012-02-08 00:58:28 |
Message-ID: | 20120208005828.GB17580@momjian.us |
Lists: | pgsql-hackers |
On Tue, Feb 07, 2012 at 05:06:18PM -0500, Greg Smith wrote:
> On 02/07/2012 03:23 PM, Bruce Momjian wrote:
> >Where did you see that there will be an improvement in the 9.2
> >documentation? I don't see an improvement.
>
> I commented that I'm hoping for an improvement in the documentation
> of how much timing overhead impacts attempts to measure this area
> better. That's from the "add timing of buffer I/O requests" feature
> submission. I'm not sure if Bene read too much into that or not; I
> didn't mean to imply that the docs around random_page_cost have
> gotten better.
>
> This particular complaint is extremely common though, seems to pop
> up on one of the lists a few times each year. Your suggested doc
> fix is fine as a quick one, but I think it might be worth expanding
> further on this topic. Something discussing SSDs seems due here
> too. Here's a first draft of a longer discussion, to be inserted
> just after where it states the default value is 4.0:
I was initially concerned that tuning advice in this part of the docs
would look out of place, but now see the 25% shared_buffers
recommendation, and it looks fine, so we are OK. (Should we caution
against more than 8GB of shared buffers? I don't see that in the docs.)
I agree we are overdue for a better explanation of random_page_cost, so
I agree with your direction. I did a little word-smithing to tighten up
your text; feel free to discard what you don't like:
Random access to mechanical disk storage is normally much more than
four times as expensive as sequential access. However, a lower default
is used (4.0) because the majority of random accesses to disk, such as
indexed reads, are assumed to be in cache. The default value can be
thought of as modeling random access as 40 times slower than
sequential, while expecting 90% of random reads to be cached.
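The arithmetic behind that model can be checked directly. The 40x ratio and the 90% cache rate are the figures from the text above; the simplification that a cached read costs effectively nothing relative to a page fetch is my assumption for this sketch:

```python
# Effective random_page_cost implied by the model described in the text:
# a random disk read costs 40x a sequential page read, but 90% of
# random reads hit cache (treated here as effectively free).
raw_random_cost = 40.0   # cost relative to one sequential page read
cache_hit_rate = 0.90

effective_cost = (1 - cache_hit_rate) * raw_random_cost
print(effective_cost)  # 4.0 -- the default random_page_cost
```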
If you believe a 90% cache rate is an incorrect assumption
for your workload, you can increase random_page_cost to better
reflect the true cost of random storage reads. Correspondingly,
if your data is likely to be completely in cache, such as when
the database is smaller than the total server memory, decreasing
random_page_cost can be appropriate. Storage that has a low random
read cost relative to sequential, e.g. solid-state drives, might
also be better modeled with a lower value for random_page_cost.
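As a concrete illustration of the advice above, an adjustment might look like this in postgresql.conf (the specific value is illustrative only, not a recommendation from this thread):

```
# postgresql.conf -- lower random_page_cost when the database fits in
# RAM or sits on storage with cheap random reads (e.g. SSDs)
random_page_cost = 1.1
```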
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ It's impossible for everything to be true. +