From: Tatsuo Ishii <ishii(at)sraoss(dot)co(dot)jp>
To: nagata(at)sraoss(dot)co(dot)jp
Cc: coelho(at)cri(dot)ensmp(dot)fr, thomas(dot)munro(at)gmail(dot)com, m(dot)polyakova(at)postgrespro(dot)ru, alvherre(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org, teodor(at)sigaev(dot)ru
Subject: Re: [HACKERS] WIP aPatch: Pgbench Serialization and deadlock errors
Date: 2021-07-07 07:11:23
Message-ID: 20210707.161123.574070522694073225.t-ishii@gmail.com
Lists: pgsql-hackers
> Indeed, as Ishii-san pointed out, some users might not want to terminate
> retrying transactions due to -T. However, the actual negative effect is only
> printing the number of failed transactions. The other results that users want
> to know, such as tps, are almost unaffected because they are measured only for
> transactions processed successfully. Actually, the percentage of failed
> transactions is very small, only 0.347%.
Well, "that's very little, let's ignore it" is not, technically, the right
direction IMO.
> In the existing behaviour, running transactions are never terminated due to
> the -T option. However, ISTM that this is based on the assumption that the
> latency of each transaction is small and that the benchmark will be able to
> finish soon. On the other hand, when transactions can be retried unlimitedly,
> it may take longer than expected, and we cannot guarantee that the benchmark
> will finish successfully in limited time. Therefore, terminating the benchmark
> by giving up retrying the transaction after the time expires seems reasonable
> under unlimited retries.
That's not necessarily true in practice. By the time -T is about to
expire, the transactions have all finished in finite time, as you can see
from the results I showed. So it's reasonable to expect that the very last
cycle of the benchmark will finish in finite time as well.
Of course, if a benchmark cycle takes infinite time, this will be a
problem. However, the same can be said of non-retry benchmarks:
theoretically it is possible that *one* benchmark cycle takes
forever. In that case the only solution is to hit ^C to terminate
pgbench. Why can't we make the same assumption in the --max-tries=0
case?
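For concreteness, the two modes under discussion could be exercised with
invocations along these lines (a sketch only: it assumes a database "testdb"
exists, and the --max-tries option spelling follows the patch being discussed;
the isolation level is raised via PGOPTIONS so that serialization failures can
actually occur with the default script):

```shell
# Bounded retries: each transaction that fails with a serialization or
# deadlock error is retried up to 10 times, within a 60-second run.
PGOPTIONS='-c default_transaction_isolation=repeatable\ read' \
  pgbench --max-tries=10 -T 60 testdb

# Unlimited retries (--max-tries=0): the contested case, where a
# transaction still retrying when -T expires has no retry cap.
PGOPTIONS='-c default_transaction_isolation=repeatable\ read' \
  pgbench --max-tries=0 -T 60 testdb
```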
> In the sense that we don't terminate running transactions forcibly, this
> doesn't change the existing behaviour.
This statement seems to depend on your personal assumption.
I still don't understand why you think that a non-zero --max-tries case
will *certainly* finish in finite time whereas the --max-tries=0 case
will not.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp