From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>, Jesper Pedersen <jesper(dot)pedersen(at)redhat(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: Speedup twophase transactions
Date: 2016-12-27 04:31:28
Message-ID: CAB7nPqSoq-sSozNhNTg6XJ=0H9YsxCZ15=0hfPOAN2GbhdwW-w@mail.gmail.com
Lists: pgsql-hackers
On Tue, Dec 27, 2016 at 12:59 PM, Stas Kelvich <s(dot)kelvich(at)postgrespro(dot)ru> wrote:
> Standard config with increased shared_buffers. I think the most significant
> impact on the recovery speed here is on the client side, namely the time
> between prepare and commit. Right now I’m using a pgbench script that issues
> commit right after prepare. It’s also possible to put a sleep between prepare
> and commit and increase the number of connections to thousands. That will
> probably be the worst case: the majority of prepared transactions will be
> moved to files.
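[For reference, a minimal sketch of the kind of pgbench custom script described
above; the file name and numbers are illustrative, not taken from the thread.
Each client prepares and then immediately commits, building the GID from
:client_id (pgbench substitutes :variables textually, including inside the
literal), which stays unique as long as each client has at most one prepared
transaction in flight. It assumes the pgbench tables are initialized with
"pgbench -i" and that max_connections and max_prepared_transactions cover the
client count.]

cat > twophase.sql <<'EOF'
\set aid random(1, 100000 * :scale)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + 1 WHERE aid = :aid;
PREPARE TRANSACTION 'p:client_id';
-- add "\sleep 100 ms" here to widen the prepare-to-commit window
COMMIT PREPARED 'p:client_id';
EOF
pgbench -n -c 100 -j 4 -T 60 -f twophase.sql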
I think that it would be a good idea to actually test that in terms of
pure recovery time, i.e. with no clients: just use a base backup and make
it recover X prepared transactions that have created Y checkpoints, after
dropping the OS cache (or restarting the server).
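[A rough sketch of that kind of measurement; the paths, counts, and WAL setup
are illustrative assumptions, not from the thread. It presumes archiving is
enabled (archive_mode = on, archive_command copying completed segments into
/tmp/walarchive) and max_prepared_transactions is at least the number of
prepared transactions generated.]

pg_basebackup -D /tmp/backup                  # base backup taken before the workload

psql -c "CREATE TABLE t2pc (id int)"
for i in $(seq 1 1000); do                    # X prepared transactions...
    psql -c "BEGIN; INSERT INTO t2pc VALUES ($i); PREPARE TRANSACTION 'p$i';"
done
for i in $(seq 1 10); do                      # ...with Y checkpoints while they are pending
    psql -c "CHECKPOINT"
done
psql -c "SELECT pg_switch_xlog()"             # push the last segment to the archive (pg_switch_wal on v10+)

pg_ctl stop -m fast -D "$PGDATA"              # stop the primary
sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

cat > /tmp/backup/recovery.conf <<'EOF'
restore_command = 'cp /tmp/walarchive/%f %p'
EOF

# Pure recovery time, no clients attached: how long until the backup has
# replayed the WAL, including the 1000 still-pending prepared transactions.
time pg_ctl start -w -t 3600 -D /tmp/backup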
--
Michael