From: Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>
To: Marko Tiikkaja <marko(at)joh(dot)to>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pgbench vs. SERIALIZABLE
Date: 2013-05-19 06:43:26
Message-ID: alpine.DEB.2.02.1305190737170.7438@localhost6.localdomain6
Lists: pgsql-hackers
>> Should it give up trying under some conditions, say there are more errors
>> than transactions?
>
> I don't really see the point of that. I can't think of a scenario where you
> would get too many serialization errors to even finish the pgbench test.
My point is really to avoid, in principle, a potential infinite loop under
option -t in these conditions, for instance if all transactions are failing
because of a table lock. If pgbench is simply killed, I'm not sure you get
a report.
> At any rate, as proposed, this would fail horribly if the very first
> transaction fails, or the second transaction fails twice, etc..
Yep. Or maybe add some more options to control the expected behavior on
transaction failures? --stop-client-on-fail (current behavior),
--keep-trying-indefinitely, --stop-client-after=<nfails>... or nothing, if
this is not a problem :-)
--
Fabien.