From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Greg Stark <stark(at)mit(dot)edu>, John R Pierce <pierce(at)hogranch(dot)com>, PostgreSQL Bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: log_checkpoints, microseconds
Date: 2014-04-10 19:20:15
Message-ID: 20785.1397157615@sss.pgh.pa.us
Lists: pgsql-bugs
Peter Geoghegan <pg(at)heroku(dot)com> writes:
> On Thu, Apr 10, 2014 at 11:45 AM, Greg Stark <stark(at)mit(dot)edu> wrote:
>> I think his point is that to go from microseconds to msec (which I
>> think should just be "ms" btw) you want to multiply by 1000 not
>> divide.
> Right.
I think you're both wrong. 1000 usec = 1 msec, not the other way round.
> Or just use the INSTR_TIME_GET_MILLISEC() macro to begin with,
> and do neither.
The code appears to be trying to track stats at the microsecond
level. The printout is following a policy decision that we prefer
to report units of msec, but that does not mean that we shouldn't
keep microsecond precision internally.
In short, I see nothing that needs fixed here.
regards, tom lane