Re: postgresql.conf (Proposed settings)

From: "Zeugswetter Andreas SB SD" <ZeugswetterA(at)spardat(dot)at>
To: "mlw" <markw(at)mohawksoft(dot)com>, "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: postgresql.conf (Proposed settings)
Date: 2001-11-21 10:38:56
Message-ID: 46C15C39FEB2C44BA555E356FBCD6FA41EB424@m0114.s-mxs.net
Lists: pgsql-hackers


> The random_page_cost is changed because of an assumption that the
> bigger systems will be more busy. The busier a machine is with I/O,
> the lower the differential between a sequential and a random access.
> ("Sequential" to the application is less likely to be sequential on
> the physical disk.)

I think this reasoning is valid, but wouldn't we then rather need
something like a scan_page_cost that would need to be raised? Or are the
CPU costs so small that only the ratio between sequential and random
page costs matters?

> I'd like to open a debate about the benefit/cost of shared_buffers.
> The question is: "Will postgres' management of shared buffers
> outperform the OS cache? Is there a point of diminishing returns on
> the number of buffers? If so, what is it?"

I think the main point of PostgreSQL's own buffers is to account for
"dirty" pages; for everything else we use OS files and can thus rely on
OS file caching. If your application is update intensive, you should
have sufficient buffers to hold most of the pages dirtied between
checkpoints. Does that make sense?
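
As a rough sketch (the numbers are made up, only the parameter names
are real), an update-heavy box might then be sized along these lines:

    shared_buffers = 8192          # 8 kB pages, ~64 MB: room for the pages dirtied between checkpoints
    checkpoint_segments = 16       # let dirty pages accumulate in shared buffers between checkpoints
    checkpoint_timeout = 300       # seconds between forced checkpoints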

Andreas
