From: Peter Eisentraut <peter(at)eisentraut(dot)org>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: What is a typical precision of gettimeofday()?
Date: 2024-03-19 08:28:37
Message-ID: be0339cc-1ae1-4892-9445-8e6d8995a44d@eisentraut.org
Lists: pgsql-hackers

Over in the thread discussing the addition of UUIDv7 support [0], there
is some uncertainty about what timestamp precision one can expect from
gettimeofday().

UUIDv7 uses milliseconds since the Unix epoch, and can optionally use up
to 12 additional bits of timestamp precision (see [1]); alternatively,
it can use a counter instead of the extra precision. The current patch
uses the counter method "because of portability concerns" (per a source
code comment).
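
For illustration, here is a minimal sketch (mine, not from the patch;
uuidv7_timestamp_parts is a made-up name) of how those optional 12 bits
could be filled from the sub-millisecond part of a gettimeofday()
reading, assuming the call actually delivers microsecond precision:

#include <stdio.h>
#include <stdint.h>
#include <sys/time.h>

static void
uuidv7_timestamp_parts(uint64_t *unix_ts_ms, uint16_t *extra_bits)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);

	/* milliseconds since the Unix epoch, as per the draft's unix_ts_ms */
	*unix_ts_ms = (uint64_t) tv.tv_sec * 1000 + tv.tv_usec / 1000;

	/* scale the 0..999 microsecond remainder onto 12 bits (0..4095) */
	*extra_bits = (uint16_t) (((uint32_t) (tv.tv_usec % 1000) * 4096) / 1000);
}

int
main(void)
{
	uint64_t	ms;
	uint16_t	extra;

	uuidv7_timestamp_parts(&ms, &extra);
	printf("unix_ts_ms = %llu, extra 12 bits = %u\n",
		   (unsigned long long) ms, extra);
	return 0;
}

Whether filling those bits this way is actually better than the counter
of course depends on the answer to the question below.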

I feel that we don't actually have any concrete information about this
portability concern. Does anyone know what precision we can expect from
gettimeofday()? Can we usually expect full microsecond precision?
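
One crude way to check empirically, rather than relying on
documentation, would be to sample gettimeofday() in a tight loop and
report the smallest nonzero step observed; a throwaway sketch along
these lines:

#include <stdio.h>
#include <sys/time.h>

int
main(void)
{
	struct timeval prev,
				cur;
	long		min_step = -1;
	int			i;

	gettimeofday(&prev, NULL);
	for (i = 0; i < 1000000; i++)
	{
		long		step;

		gettimeofday(&cur, NULL);
		step = (cur.tv_sec - prev.tv_sec) * 1000000L
			+ (cur.tv_usec - prev.tv_usec);
		/* keep the smallest strictly positive difference seen so far */
		if (step > 0 && (min_step < 0 || step < min_step))
			min_step = step;
		prev = cur;
	}
	printf("smallest observed step: %ld us\n", min_step);
	return 0;
}

On a system whose clock only ticks at, say, 1 ms or 10 ms granularity,
the minimum step should reveal that; on a system with a true
microsecond-resolution clock it would typically print 1 us. Results
from different platforms would be interesting.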

[0]:
https://www.postgresql.org/message-id/flat/CAAhFRxitJv=yoGnXUgeLB_O+M7J2BJAmb5jqAT9gZ3bij3uLDA(at)mail(dot)gmail(dot)com
[1]:
https://datatracker.ietf.org/doc/html/draft-ietf-uuidrev-rfc4122bis#section-6.2-5.6.1
