From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com>
Cc: Michail Nikolaev <michail(dot)nikolaev(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, Alexander Korotkov <aekorotkov(at)gmail(dot)com>, reshkekirill <reshkekirill(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Slow standby snapshot
Date: 2022-11-22 16:28:02
Message-ID: 3460130.1669134482@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com> writes:
> We seem to have replaced one magic constant with another, so not sure
> if this is autotuning, but I like it much better than what we had
> before (i.e. better than my prev patch).

Yeah, the magic constant is still magic, even if it looks like it's
not terribly sensitive to the exact value.

> 1. I was surprised that you removed the limits on size and just had
> the wasted work limit. If there is no read traffic that will mean we
> hardly ever compress, which means the removal of xids at commit will
> get slower over time. I would prefer that we forced compression on a
> regular basis, such as every time we process an XLOG_RUNNING_XACTS
> message (every 15s), as well as when we hit certain size limits.
> 2. If there is lots of read traffic but no changes flowing, it would
> also make sense to force compression when the startup process goes
> idle rather than wait for the work to be wasted first.

If we do those things, do we need a wasted-work counter at all?

I still suspect that 90% of the problem is the max_connections
dependency in the existing heuristic, given that you have to push
max_connections to the moon before it becomes a measurable problem.
If we do

-   if (nelements < 4 * PROCARRAY_MAXPROCS ||
-       nelements < 2 * pArray->numKnownAssignedXids)
+   if (nelements < 2 * pArray->numKnownAssignedXids)

and then add the forced compressions you suggest, where
does that put us?
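
For scale, here is a throwaway standalone calculation (the numbers are
invented for illustration, and PROCARRAY_MAXPROCS is approximated as
max_connections plus a few auxiliary slots):

    #include <stdio.h>

    int
    main(void)
    {
        int     max_connections = 10000;    /* pushed to the moon */
        int     procarray_maxprocs = max_connections + 16;
        int     numKnownAssignedXids = 100; /* few live entries */
        int     nelements;

        /* existing heuristic: skip compression while either test holds */
        for (nelements = numKnownAssignedXids;; nelements++)
            if (!(nelements < 4 * procarray_maxprocs ||
                  nelements < 2 * numKnownAssignedXids))
                break;
        printf("existing heuristic first compresses at %d\n", nelements);

        /* proposed heuristic: only the wasted-fraction test */
        for (nelements = numKnownAssignedXids;; nelements++)
            if (!(nelements < 2 * numKnownAssignedXids))
                break;
        printf("proposed heuristic first compresses at %d\n", nelements);

        return 0;
    }

That prints 40064 versus 200: with only 100 live XIDs, the existing test
lets the array grow two hundred times larger before compressing.
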
Also, if we add more forced compressions, it seems like we should have
a short-circuit for a forced compression where there's nothing to do.
So more or less like

    nelements = head - tail;
    if (!force)
    {
        /* routine call: compress only once half the entries are dead */
        if (nelements < 2 * pArray->numKnownAssignedXids)
            return;
    }
    else
    {
        /* forced call: bail out only if there is nothing to reclaim */
        if (nelements == pArray->numKnownAssignedXids)
            return;
    }

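(In the forced case, nelements == numKnownAssignedXids means every slot
between tail and head is still valid, so compressing would be a no-op.)
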
I'm also wondering why there's not an

    Assert(compress_index == pArray->numKnownAssignedXids);

after the loop, to make sure our numKnownAssignedXids tracking
is sane.
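
For reference, the loop in question looks roughly like this (paraphrasing
KnownAssignedXidsCompress in procarray.c from memory, so the details may
be off), with the Assert placed after it:

    /* slide the still-valid entries down to the front of the array */
    compress_index = 0;
    for (i = tail; i < head; i++)
    {
        if (KnownAssignedXidsValid[i])
        {
            KnownAssignedXids[compress_index] = KnownAssignedXids[i];
            KnownAssignedXidsValid[compress_index] = true;
            compress_index++;
        }
    }

    /* proposed sanity check: every live XID should have been copied */
    Assert(compress_index == pArray->numKnownAssignedXids);

    pArray->tailKnownAssignedXids = 0;
    pArray->headKnownAssignedXids = compress_index;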

            regards, tom lane