From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Eisentraut <peter(at)eisentraut(dot)org>
Cc: "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, Hannu Krosing <hannuk(at)google(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: What is a typical precision of gettimeofday()?
Date: 2024-06-19 16:36:34
Message-ID: 130996.1718814994@sss.pgh.pa.us
Lists: pgsql-hackers
Peter Eisentraut <peter(at)eisentraut(dot)org> writes:
> AFAICT, pg_test_timing doesn't use gettimeofday(), so this doesn't
> really address the original question.
It's not exactly hard to make it do so (see attached).
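For illustration only (the attached patch itself is not reproduced in the archive text, so this is a sketch of the idea rather than the actual diff): the change amounts to sourcing instr_time's clock readings from gettimeofday() instead of clock_gettime(), scaled up to the nanosecond representation that recent instr_time code uses.

#include <stdint.h>
#include <sys/time.h>

/*
 * Hypothetical helper, not the attached patch: read the wall clock
 * via gettimeofday() and return nanoseconds so the value can feed
 * the same downstream arithmetic as a clock_gettime() reading.
 */
static inline int64_t
gettimeofday_ns(void)
{
    struct timeval tv;

    gettimeofday(&tv, NULL);
    /* gettimeofday() only resolves microseconds, hence the * 1000 */
    return (int64_t) tv.tv_sec * 1000000000 + (int64_t) tv.tv_usec * 1000;
}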
I tried this on several different machines, and my conclusion is that
gettimeofday() reports full microsecond precision on any platform
anybody is likely to be running PG on today. Even my one surviving
pet dinosaur, NetBSD 10 on PowerPC Mac (mamba), shows results like
this:
$ ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 901.41 ns
Histogram of timing durations:
   < us   % of total      count
      1     10.46074     348148
      2     89.51495    2979181
      4      0.00574        191
      8      0.00430        143
     16      0.00691        230
     32      0.00376        125
     64      0.00012          4
    128      0.00303        101
    256      0.00027          9
    512      0.00009          3
   1024      0.00009          3
I also modified pg_test_timing to measure nanoseconds not
microseconds (second patch attached), and got this:
$ ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 805.50 ns
Histogram of timing durations:
   < ns   % of total      count
      1     19.84234     739008
      2      0.00000          0
      4      0.00000          0
      8      0.00000          0
     16      0.00000          0
     32      0.00000          0
     64      0.00000          0
    128      0.00000          0
    256      0.00000          0
    512      0.00000          0
   1024     80.14013    2984739
   2048      0.00078         29
   4096      0.00658        245
   8192      0.00290        108
  16384      0.00252         94
  32768      0.00250         93
  65536      0.00016          6
 131072      0.00185         69
 262144      0.00008          3
 524288      0.00008          3
1048576      0.00008          3
confirming that when the result changes it generally does so by 1usec.
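For reference, the measurement-granularity change is conceptually tiny (a sketch under the assumption that pg_test_timing's loop buckets on INSTR_TIME_GET_MICROSEC(); the attached patch may differ in detail):

/* In pg_test_timing's measurement loop, bucket on nanoseconds: */
prev = cur;
INSTR_TIME_SET_CURRENT(temp);
cur = INSTR_TIME_GET_NANOSEC(temp);    /* was INSTR_TIME_GET_MICROSEC */
diff = cur - prev;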
Applying just the second patch, I find that clock_gettime on this
old hardware seems to be limited to 1us resolution, but on my more
modern machines (Mac M1, x86_64) it can tick at 40ns or less.
Even a Raspberry Pi 4 shows
$ ./pg_test_timing
Testing timing overhead for 3 seconds.
Per loop time including overhead: 69.12 ns
Histogram of timing durations:
   < ns   % of total      count
      1      0.00000          0
      2      0.00000          0
      4      0.00000          0
      8      0.00000          0
     16      0.00000          0
     32      0.00000          0
     64     37.59583   16317040
    128     62.38568   27076131
    256      0.01674       7265
    512      0.00002          8
   1024      0.00000          0
   2048      0.00000          0
   4096      0.00153        662
   8192      0.00019         83
  16384      0.00001          3
  32768      0.00001          5
suggesting that the clock_gettime resolution is better than 64 ns.
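A quick cross-check one can run (illustrative, not part of the thread): clock_getres() reports the resolution the kernel advertises for a given clock, which can then be compared against what the histogram actually observes.

#include <stdio.h>
#include <time.h>

int
main(void)
{
    struct timespec res;

    /* Ask the kernel what granularity it claims for CLOCK_MONOTONIC */
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        printf("CLOCK_MONOTONIC resolution: %ld ns\n", (long) res.tv_nsec);
    return 0;
}

The advertised figure is often coarser or finer than the tick the histogram reveals, which is why measuring, as pg_test_timing does, matters.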
So I concur with Hannu that it's time to adjust pg_test_timing to
resolve nanoseconds not microseconds. I gather he's created a
patch that does more than mine below, so I'll wait for that.
regards, tom lane
Attachments:
  use-gettimeofday-for-instr_time.patch   (text/x-diff, 747 bytes)
  measure-nsec-in-pg_test_timing.patch    (text/x-diff, 1.0 KB)