From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Matt Kelly <mkellycs(at)gmail(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Exposing the stats snapshot timestamp to SQL
Date: 2015-02-20 03:01:12
Message-ID: 32461.1424401272@sss.pgh.pa.us
Lists: pgsql-hackers
Matt Kelly <mkellycs(at)gmail(dot)com> writes:
>> Yeah. The only use-case that's been suggested is detecting an
>> unresponsive stats collector, and the main timestamp should be plenty for
>> that.
> The problem with doing highly granular snapshots is that the postgres
> counters are monotonically increasing, but only when stats are published.
> Currently you have no option except to divide by the delta of now()
> between the polling intervals. If you poll every 2 seconds the max error
> is about .5/2, or 25%. It makes reading those numbers a bit noisy. Using
> (snapshot_timestamp_new - snapshot_timestamp_old) as the denominator in
> that calculation should help to smooth out that noise and show a clearer
> picture.
Ah, interesting! Thanks for pointing out another use case.
regards, tom lane
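A minimal SQL sketch of the two use cases discussed above, assuming the
timestamp ends up exposed as pg_stat_get_snapshot_timestamp() (the exact
name is an assumption here, not settled by this message) and using
hypothetical :old_*/:new_* placeholders for values kept from the previous
polling interval:

    -- Use case 1: detect an unresponsive stats collector by checking how
    -- stale the current stats snapshot is.
    SELECT now() - pg_stat_get_snapshot_timestamp() AS snapshot_age;

    -- Use case 2: sample a counter together with the snapshot timestamp
    -- on each polling interval...
    SELECT pg_stat_get_snapshot_timestamp() AS snap_ts,
           xact_commit
    FROM pg_stat_database
    WHERE datname = current_database();

    -- ...and compute the rate between two samples using the snapshot
    -- timestamp delta as the denominator, rather than the delta of now():
    --   rate = (:new_xact_commit - :old_xact_commit)
    --          / extract(epoch from (:new_snap_ts - :old_snap_ts))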