From: Michael Paquier <michael(at)paquier(dot)xyz>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Michael Banck <michael(dot)banck(at)credativ(dot)de>, Stephen Frost <sfrost(at)snowman(dot)net>, Fabien COELHO <coelho(at)cri(dot)ensmp(dot)fr>, David Steele <david(at)pgmasters(dot)net>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Online verification of checksums
Date: 2018-09-29 09:20:33
Message-ID: 20180929092033.GE1823@paquier.xyz
Lists: pgsql-hackers
On Sat, Sep 29, 2018 at 10:51:23AM +0200, Tomas Vondra wrote:
> One more thought - when running similar tools on a live system, it's
> usually a good idea to limit the impact by throttling the throughput. As
> the verification runs in an independent process, it can't reuse the
> vacuum-like cost limit directly, but perhaps it could do something
> similar? Like, limit the number of blocks read/second, or so?
When it comes to such parameters, throttling with a value in bytes (kB
or MB, of course) rather than a number of blocks speaks more to the
user.  The past experience with checkpoint_segments, which got replaced
by the byte-based max_wal_size, is one example of that.  Converting that
value to a number of blocks internally would definitely make the most
sense.  +1 for this idea.
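
To make the conversion concrete, here is a minimal, self-contained
sketch (in C, as such a tool would be) of what a byte-based limit could
look like once mapped to blocks internally.  The function names and the
hard-coded 8kB BLCKSZ are assumptions for illustration only, not taken
from any existing tool:

```c
/*
 * Minimal sketch of byte-based throttling for a standalone checker,
 * along the lines discussed above.  throttle_init() and
 * throttle_block() are hypothetical names, and BLCKSZ is hard-coded
 * to the usual 8kB PostgreSQL block size.
 */
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

#define BLCKSZ 8192

/* Microseconds to pause after each block read. */
static uint64_t sleep_per_block_us = 0;

/* Convert a user-supplied limit in kB/s into a per-block delay. */
static void
throttle_init(uint64_t limit_kb_per_sec)
{
	uint64_t	bytes_per_sec = limit_kb_per_sec * 1024;
	uint64_t	blocks_per_sec = bytes_per_sec / BLCKSZ;

	if (blocks_per_sec > 0)
		sleep_per_block_us = 1000000 / blocks_per_sec;
	else
		sleep_per_block_us = 1000000;	/* floor at one block per second */
}

/* Call after each block read to enforce the configured rate. */
static void
throttle_block(void)
{
	if (sleep_per_block_us > 0)
		usleep((useconds_t) sleep_per_block_us);
}

int
main(void)
{
	throttle_init(10 * 1024);	/* 10MB/s -> 1280 blocks/s, ~781us/block */
	printf("sleeping %lu us per block\n",
		   (unsigned long) sleep_per_block_us);
	throttle_block();	/* would follow each block read in the scan loop */
	return 0;
}
```

Note that a fixed sleep per block ignores the time spent doing the read
itself, so real throughput would land a bit below the target; a
vacuum-cost-style budget that sleeps only once a number of blocks has
been consumed would track the limit more closely.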
--
Michael