From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Thomas Finneid <tfinneid(at)fcon(dot)no>
Cc: jd(at)commandprompt(dot)com, pgsql-performance(at)postgresql(dot)org
Subject: Re: explanation of some configs
Date: 2009-02-07 16:36:32
Message-ID: 603c8f070902070836u7e8b2cfbrf20c6e704992d85d@mail.gmail.com
Lists: pgsql-performance
>>> effective_cache_size
>>
>> This is just a hint to tell the planner how much cache will generally be
>> available.
>
> ok, but available for what?
The documentation on these parameters is really very good.
http://www.postgresql.org/docs/8.3/interactive/runtime-config-query.html
http://www.postgresql.org/docs/8.3/interactive/runtime-config-resource.html
effective_cache_size doesn't actually reserve any memory, but it
enables the planner to know something about what will probably happen
when attempting to execute queries (and therefore generate better
plans).
>> The number should be reflective of your shared buffers +
>> available operating system cache. If your database server runs PostgreSQL
>> only, you can generally set this very high, e.g. 75% of available RAM. If
>> not, then you need to tone it down.
>
> So that means, if I have 8GB RAM and have set shared_buffers to e.g. 4GB, I
> should set effective_cache_size to at least 4GB, otherwise the planner
> will assume I don't have as much memory available, so it would be sort of
> pointless to set shared_buffers to 4GB?
You might want to look at pgtune as a starting point for tuning these settings.
http://pgfoundry.org/projects/pgtune/
But, yes, if you read the documentation (links above) it says to make
effective_cache_size equal to shared_buffers plus however much of the
operating system disk cache you expect PostgreSQL to get.
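To make that concrete, here is a minimal sketch of the relevant
postgresql.conf lines for a hypothetical 8GB machine dedicated to
PostgreSQL (the numbers are illustrative assumptions, not a
recommendation for your workload):

    # hypothetical 8GB dedicated database server
    shared_buffers = 2GB           # memory PostgreSQL allocates for its own buffer cache
    effective_cache_size = 6GB     # shared_buffers plus the OS cache you expect to get;
                                   # a planner hint only -- no memory is reserved

A larger effective_cache_size makes the planner more willing to choose
index scans, because it assumes repeated page fetches are likely to be
satisfied from cache.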
>>> checkpoint_segments
>>> - specifies the number of segments?
>>
>> The number of segments that will be used before a checkpoint is forced.
>
> So to sum it up:
>
> - Specifies the number of memory segments the WAL will use before a
> checkpoint occurs. (A checkpoint guarantees the data has been written to
> disk, including dirty pages.)
> - A segment is 16MB and the number of segments actually used is dynamic.
> - If this number is too low or a transaction is large, PG will spend more
> time performing checkpoint operations, which decreases performance.
I think this is pretty much right. The WAL logs are always flushed to
disk right away (unless you fool with the synchronous_commit or fsync
parameters), so you are not at risk of losing data even if the server
crashes before the next checkpoint. But the checkpoints keep you from
accumulating too much WAL (which eats disk space and makes recovery
slower in the event of a crash).
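For a rough sense of the disk-space side (my own back-of-the-envelope
numbers, not from the thread):

    # hypothetical value; WAL segments are 16MB each
    checkpoint_segments = 32       # a checkpoint is forced after 32 * 16MB = 512MB of WAL;
                                   # the 8.3 docs estimate pg_xlog at up to about
                                   # (2 + checkpoint_completion_target) * checkpoint_segments + 1
                                   # files, i.e. roughly 1.3GB here with the default target of 0.5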
> Q1:
>
> So checkpoint_timeout is then just another way of expressing the same?
> I.e. to ensure that if the segments have not been filled, which would force a
> checkpoint, a checkpoint is at least forced after the specified time has elapsed?
http://www.postgresql.org/docs/8.3/interactive/wal-configuration.html
"The server's background writer process will automatically perform a
checkpoint every so often. A checkpoint is created every
checkpoint_segments log segments, or every checkpoint_timeout seconds,
whichever comes first."
> Q2:
>
> So how does this relate to WAL buffers? Based on the above explanation, it
> seems to me that wal_buffers are not needed.
wal_buffers are in-memory buffers that hold WAL that has not yet been
flushed to disk. WAL segments are files that hold the WAL that has been
written to disk but not yet recycled (perhaps because the
corresponding data blocks haven't yet been written out).
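A minimal sketch of the distinction, with assumed values:

    wal_buffers = 1MB              # shared memory for WAL generated but not yet written out
    # WAL segment files (16MB each, under pg_xlog/) hold WAL that has already been
    # written to disk; they are kept until the data they protect has been
    # checkpointed and the files can be recycled.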
...Robert