From: Tatsuo Ishii <ishii(at)sraoss(dot)co(dot)jp>
To: nagata(at)sraoss(dot)co(dot)jp
Cc: coelho(at)cri(dot)ensmp(dot)fr, thomas(dot)munro(at)gmail(dot)com, m(dot)polyakova(at)postgrespro(dot)ru, alvherre(at)2ndquadrant(dot)com, pgsql-hackers(at)postgresql(dot)org, teodor(at)sigaev(dot)ru
Subject: Re: [HACKERS] WIP Patch: Pgbench Serialization and deadlock errors
Date: 2021-07-07 12:50:16
Message-ID: 20210707.215016.2091144688016450280.t-ishii@gmail.com
Lists: pgsql-hackers
>> Well, "that's very little, let's ignore it" is not technically a right
>> direction IMO.
>
> Hmm, it seems to me these failures are ignorable because, with regard to
> failures due to -T, they occur only in the last transaction of each client
> and do not affect results such as the TPS and latency of successfully
> processed transactions.
> (Although I am not sure in what sense you use the word "technically"...)

"My application button does not respond once in 100 times. It's just a
1% error rate. You should ignore it." I would say this attitude is not
technically correct.

> However, maybe I am missing something. Could you please tell me what you
> think the actual harm to users of failures due to -D is?

I don't know why you are referring to -D.
>> That's necessarily true in practice. By the time when -T is about to
>> expire, transactions are all finished in finite time as you can see
>> the result I showed. So it's reasonable that the very last cycle of
>> the benchmark will finish in finite time as well.
>
> Your script may finish in finite time, but others may not.
That's why I said "practically". In other words, "in most cases the
scenario will finish in finite time".

> Indeed, it is possible that the execution of a query takes a long or even
> infinite time. However, the cause would be a problematic query in the
> custom script or some other problem occurring on the server side. These
> are not problems of pgbench, and pgbench itself cannot control them. On
> the other hand, an unlimited number of tries is a behaviour specified by
> the pgbench option, so I think pgbench itself should internally avoid
> problems caused by its own behaviour. That is, if max-tries=0 could cause
> an infinite or much longer benchmark time than the user expected due to
> too many retries, I think pgbench should avoid it.

I would say it's the user's responsibility to avoid an infinitely running
benchmark. Remember, pgbench is a tool for serious users, not for
novice users.

Or, we could terminate the last cycle of the benchmark, regardless of
whether it is retrying or not, once -T expires. This would make pgbench
behave much more consistently.
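To illustrate the proposal, here is a minimal sketch (not pgbench's actual
code; the function name and return values are hypothetical) of a retry loop
in which max-tries=0 means unlimited retries, but the -T deadline caps the
loop so the last cycle cannot retry forever:

```python
import time

def run_transaction_with_retries(execute, deadline, max_tries=0):
    """Retry a transaction on serialization/deadlock failure.

    max_tries=0 stands in for pgbench's unlimited-retries setting;
    deadline (a time.monotonic() value) stands in for -T expiry and
    bounds the retry loop so the benchmark always terminates.
    """
    tries = 0
    while True:
        tries += 1
        if execute():
            return ("success", tries)
        # Stop retrying once the benchmark duration has expired...
        if time.monotonic() >= deadline:
            return ("terminated", tries)
        # ...or when a finite retry limit has been reached.
        if max_tries and tries >= max_tries:
            return ("failed", tries)
```

With the deadline check placed before the retry, a transaction that keeps
failing is reported as terminated rather than retried past -T.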
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp