From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Cc: Michael Paquier <michael(at)paquier(dot)xyz>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Gregory Smith <gregsmithpgsql(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, david(dot)christensen(at)crunchydata(dot)com
Subject: Re: pgbench logging broken by time logic changes
Date: 2021-07-08 17:15:24
Message-ID: alpine.DEB.2.22.394.2107081510410.170227@pseudo
Lists: pgsql-hackers

Hello Thomas,
> Thanks! This doesn't seem to address the complaint, though. Don't
> you need to do something like this? (See also attached.)
>
> + initStats(&aggs, start - (start + epoch_shift) % 1000000);
ISTM that this is: (start + epoch_shift) / 1000000 * 1000000
> That should reproduce what pgbench 13 does implicitly when it uses
> time(NULL).
I understand that you are shifting the aggregate internal start time to
epoch, however ISTM that other points in the program are not shifted
consistently with this, eg the while comparison in doLog? Also if start
time is log shifted, then it should not be shifted again when printed (in
logAgg). Attached version tries to be consistent.
> Namely, it rewinds to the start of the current *wall clock* second, so
> that all future aggregates also start at round number wall clock times,
> at the cost of making the first aggregate miss out on a fraction of a
> second.
ISTM that it was already wall clock time, but not epoch wall clock.
I'm okay with realigning aggregates on full seconds.
> I wonder if some of the confusion on the other thread about the final
> aggregate[1] was due to this difference.
Dunno. The parallel execution with threads is a pain when handling details.
> By rounding down, we get a "head start" (because the first aggregate is
> short), so we usually manage to record the expected number of aggregates
> before time runs out.
Fine with me if everything is consistent.
> It's a race though. Your non-rounding version was more likely
> to lose the race and finish before the final expected aggregate was
> logged, so you added code to force a final aggregate to be logged.
ISTM that we always want to force it, because some modes can have low tps, and
the set of aggregates should be "full".
> Do I have this right? I'm not entirely sure how useful a partial final
> aggregate is
If you ask for 10 seconds run with 1 aggregate per second, you expect to
see (at least, about) 10 lines, and I want to ensure that, otherwise
people will ask questions, tools will have to look for special cases,
missing rows, whatever, and it will be a pain there. We want to produce
something simple, consistent, reliable, that tools can depend on.
> (it's probably one you have to throw away, like the first one, no?
> Isn't it better if we only have to throw away the first one?).
This should be the user decision to drop it or not, not the tool producing
it, IMO.
> I'm not sure, but if we keep that change, a couple of very minor nits:
> I found the "tx" parameter name a little confusing. Do you think it's
> clearer if we change it to "final" (with inverted sense)?
I agree that tx is not a very good name, but the inversion does not look
right to me. The "normal" behavior is to accumulate.
> For the final aggregate, shouldn't we call doLog() only if agg->cnt > 0?
No, I think that we should want to have all aggregates, even with zeros,
so that the user can expect a deterministic number of lines.
> I think I'd be inclined to take that change back out though, making this
> patch very small and net behaviour like pgbench 13, if you agree with my
> explanation for why you had to add it and why it's not actually
> necessary with the fixed rounding shown above. (And perhaps in v15 we
> might consider other ideas like using hi-res times in the log and not
> rounding, etc, a topic for later.)
I think that I'm mostly okay.
> I don't really see the value in the test that checks that $delay falls
> in the range 1.5s - 2.5s and then ignores the result. If it hangs
> forever, we'll find out about it, and otherwise no human or machine
> will ever care about that test. I removed it from this version. Were
> you really attached to it?
YES, REALLY! It would just have caught quite a few of the regressions we
are trying to address here. I want it there even if ignored, because I'll
look for it to avoid regressions in the future. If the test is actually
removed, recreating it is a pain. If you really want to deactivate it,
use if(0), but PLEASE leave it there so that it can be reactivated very
simply; maintaining a test outside of the tree is worse.
Also, if farm logs show that it is okay on all animals, it can be switched
on by removing the ignore trick.
> I made some very minor language tweaks in comments (we don't usually
> shorten "benchmark" to "bench" in English, "series" keeps the -s in
> singular (blame the Romans), etc).
Thanks! My English is kind of fuzzy in the details:-)
> I think we should make it clear when we mean the *Unix* epoch (a
> comment "switch to epoch" isn't meaningful on its own, to me at
> least), so I changed that in a few places.
Ok.
Attached v16:
- tries to be consistent wrt epoch & aggregates, aligning to Unix epoch
as you suggested.
- renames tx as accumulate, but does not invert it.
- always shows aggregates so that the user can depend on the output,
even if stats are zero, because ISTM that cleverness must be avoided.
- put tests back, even if ignored, because I really want them available
easily.
When/if you get to commit this patch, eventually, do not forget that I'm
pushing forward fixes contributed by others, including Kyotaro Horiguchi
and Yugo Nagata.
--
Fabien.
Attachment: pgbench-log-fix-16.patch (text/x-diff, 9.9 KB)