| From: | Janning Vygen <vygen(at)kicktipp(dot)de> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org |
| Subject: | suggestion: log_statement = sample |
| Date: | 2009-03-16 13:26:56 |
| Message-ID: | 200903161426.56662.vygen@kicktipp.de |
| Lists: | pgsql-general |
Hi,
we run a large database on moderate hardware. Disks are usually the slowest
part, so we do not log every statement. When we do, our I/O wait and CPU usage
increase by 10%. Too much for peak times!
It would be nice if you could say:
log_statement = sample
sample_rate = 100
This would give you a good sample for analyzing your database usage. Of course,
log_min_duration_statement helps a lot, as it shows you your slowest queries.
But with a tool like Hibernate, you often have the problem of issuing many,
many small statements like "SELECT * FROM table WHERE id = ?".
These don't show up in the log with any reasonable log_min_duration_statement setting.
With my proposal, every 100th query would be logged, and you would get a
detailed view of your database usage without excessive disk I/O. Of course, it
should be combinable with log_min_duration_statement.
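To make that combination concrete, here is a sketch of how the settings might
look in postgresql.conf; log_min_duration_statement is an existing parameter
(the 500 ms value is just for illustration), while log_statement = sample and
sample_rate are only proposed in this mail:

```
# Existing parameter: always log statements slower than 500 ms.
log_min_duration_statement = 500

# Proposed parameters (hypothetical, not in PostgreSQL today):
# additionally log every 100th statement, so that fast but
# frequent queries also show up in the log.
log_statement = sample
sample_rate = 100
```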
What do you think about it?
Kind regards,
Janning