From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Parag Paul <parag(dot)paul(at)gmail(dot)com>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Issue with the PRNG used by Postgres
Date: 2024-04-11 01:52:59
Message-ID: 65063.1712800379@sss.pgh.pa.us
Lists: pgsql-hackers
I wrote:
> I'm not worried about it being slower, but about whether it could
> report "stuck spinlock" in cases where the existing code succeeds.
On fourth thought ... the number of tries to acquire the lock, or
in this case number of tries to observe the lock free, is not
NUM_DELAYS but NUM_DELAYS * spins_per_delay. Decreasing
spins_per_delay should therefore increase the risk of unexpected
"stuck spinlock" failures. And finish_spin_delay will decrement
spins_per_delay in any cycle where we slept at least once.
It's plausible therefore that this coding with finish_spin_delay
inside the main wait loop puts more downward pressure on
spins_per_delay than the algorithm is intended to cause.
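For reference, the logic in s_lock.c goes roughly like this (a
simplified sketch; the randomized backoff and some details are
elided, so consult the real source rather than this):

    /* sketch of src/backend/storage/lmgr/s_lock.c, simplified */
    #define NUM_DELAYS          1000
    #define MIN_SPINS_PER_DELAY 10
    #define MAX_SPINS_PER_DELAY 1000
    #define MIN_DELAY_USEC      1000L

    static int spins_per_delay = 100;    /* adapted at runtime */

    void
    perform_spin_delay(SpinDelayStatus *status)
    {
        SPIN_DELAY();            /* CPU-specific pause */

        /* Sleep once per spins_per_delay tries, so the overall try
         * budget before "stuck spinlock" is
         * NUM_DELAYS * spins_per_delay. */
        if (++(status->spins) >= spins_per_delay)
        {
            if (++(status->delays) > NUM_DELAYS)
                s_lock_stuck(status->file, status->line, status->func);
            if (status->cur_delay == 0)      /* first sleep? */
                status->cur_delay = MIN_DELAY_USEC;
            pg_usleep(status->cur_delay);    /* backoff growth elided */
            status->spins = 0;
        }
    }

    void
    finish_spin_delay(SpinDelayStatus *status)
    {
        if (status->cur_delay == 0)
        {
            /* never slept: assume SMP, allow more spinning */
            if (spins_per_delay < MAX_SPINS_PER_DELAY)
                spins_per_delay = Min(spins_per_delay + 100,
                                      MAX_SPINS_PER_DELAY);
        }
        else
        {
            /* slept at least once: shave the spin count */
            if (spins_per_delay > MIN_SPINS_PER_DELAY)
                spins_per_delay = Max(spins_per_delay - 1,
                                      MIN_SPINS_PER_DELAY);
        }
    }

So a caller that reaches finish_spin_delay after sleeping knocks one
off spins_per_delay every time through, which both shortens future
spin phases and shrinks the total try budget.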
I kind of wonder whether the premises finish_spin_delay is written
on even apply anymore, given that nobody except some buildfarm
dinosaurs runs Postgres on single-processor hardware anymore.
Maybe we should rip out the whole mechanism and hard-wire
spins_per_delay at 1000 or so.
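Ripping it out would amount to something like this (hypothetical
sketch only, not a worked-out patch):

    /* hypothetical: no runtime adaptation at all */
    #define SPINS_PER_DELAY 1000   /* assume multiprocessor everywhere */

    void
    finish_spin_delay(SpinDelayStatus *status)
    {
        /* nothing left to adapt */
    }

with perform_spin_delay reading the constant instead of the global.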
Less drastically, I wonder if we should call finish_spin_delay
at all in these off-label uses of perform_spin_delay. What
we're trying to measure there is the behavior of TAS() spin loops,
and I'm not sure that what LWLockWaitListLock and the bufmgr
callers are doing should be assumed to have timing behavior
identical to that.
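For context, the off-label pattern looks roughly like this
(simplified from LWLockWaitListLock in lwlock.c; stats and error
paths elided):

    SpinDelayStatus delayStatus;

    while (true)
    {
        /* always try once to take the lock with an atomic op */
        old_state = pg_atomic_fetch_or_u32(&lock->state, LW_FLAG_LOCKED);
        if (!(old_state & LW_FLAG_LOCKED))
            break;              /* got it */

        /* then spin with plain reads until it looks free again */
        init_local_spin_delay(&delayStatus);
        while (old_state & LW_FLAG_LOCKED)
        {
            perform_spin_delay(&delayStatus);
            old_state = pg_atomic_read_u32(&lock->state);
        }
        /* this feeds plain-read wait timing, not TAS() timing,
         * back into spins_per_delay */
        finish_spin_delay(&delayStatus);
    }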
regards, tom lane