| From: | pinker <pinker(at)onet(dot)eu> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Checkpoints tuning |
| Date: | 2014-10-23 13:24:00 |
| Message-ID: | 1414070640656-5824026.post@n5.nabble.com |
| Lists: | pgsql-performance |
I have saved data from the pg_stat_bgwriter view, following Greg Smith's advice
from his book:
select now(), * from pg_stat_bgwriter;
and then aggregated the data with a query from his book as well.
checkpoint_segments was initially 30; the next day I increased it to 200, and
the results changed:
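In case it is relevant, this is roughly how I am capturing the snapshots (the
table name bgwriter_snapshot is just an example of mine, not the exact setup
from the book):

```sql
-- One-off: create a table holding timestamped copies of pg_stat_bgwriter
-- (the table name is illustrative only).
CREATE TABLE bgwriter_snapshot AS
SELECT now() AS snapshot_time, * FROM pg_stat_bgwriter;

-- Afterwards, e.g. from cron, append a new snapshot at regular intervals:
INSERT INTO bgwriter_snapshot
SELECT now(), * FROM pg_stat_bgwriter;
```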
<http://postgresql.1045698.n5.nabble.com/file/n5824026/Auswahl_235.png>
Now the percentage of checkpoints required because of the number of segments is
higher, and the share of buffers written directly by backends is also too high -
I assume that is not what should happen.
I'm also not sure how to interpret the correlation between allocated and written
data: is a larger amount of data written per second a good sign?
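To show which numbers I mean, this is roughly the kind of aggregation I run
between the oldest and newest snapshot (a sketch assuming the bgwriter_snapshot
table above, the default 8 kB block size, and no stats reset in between; not the
literal query from the book):

```sql
WITH s AS (
  SELECT max(snapshot_time) - min(snapshot_time)           AS elapsed,
         max(checkpoints_timed)  - min(checkpoints_timed)  AS ckpt_timed,
         max(checkpoints_req)    - min(checkpoints_req)    AS ckpt_req,
         max(buffers_checkpoint) - min(buffers_checkpoint) AS buf_checkpoint,
         max(buffers_clean)      - min(buffers_clean)      AS buf_bgwriter,
         max(buffers_backend)    - min(buffers_backend)    AS buf_backend,
         max(buffers_alloc)      - min(buffers_alloc)      AS buf_alloc
  FROM bgwriter_snapshot
)
SELECT -- share of checkpoints triggered by running out of segments
       round(100.0 * ckpt_req
             / nullif(ckpt_timed + ckpt_req, 0), 1)         AS pct_checkpoints_req,
       -- share of buffer writes done directly by backends
       round(100.0 * buf_backend
             / nullif(buf_checkpoint + buf_bgwriter + buf_backend, 0), 1)
                                                            AS pct_written_by_backends,
       -- allocation and write rates, assuming 8 kB blocks
       round((buf_alloc * 8192.0
              / extract(epoch FROM elapsed) / 1024)::numeric, 1)
                                                            AS alloc_kb_per_sec,
       round(((buf_checkpoint + buf_bgwriter + buf_backend) * 8192.0
              / extract(epoch FROM elapsed) / 1024)::numeric, 1)
                                                            AS written_kb_per_sec
FROM s;
```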
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Björn Wittich | 2014-10-24 05:16:48 | Re: extremly bad select performance on huge table |
| Previous Message | Björn Wittich | 2014-10-22 15:13:42 | Re: extremly bad select performance on huge table |