From: | Stephen Frost <sfrost(at)snowman(dot)net> |
---|---|
To: | Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com> |
Cc: | Andres Freund <andres(at)anarazel(dot)de>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: WAL insert delay settings |
Date: | 2019-02-14 16:02:24 |
Message-ID: | CAOuzzgrJ1bYh9amgz8jfV2bZESLkXVWFyio7-jYxMmEpsfw5hA@mail.gmail.com |
Lists: | pgsql-hackers |
Greetings,
On Thu, Feb 14, 2019 at 10:15 Peter Eisentraut <
peter(dot)eisentraut(at)2ndquadrant(dot)com> wrote:
> On 14/02/2019 11:03, Tomas Vondra wrote:
> > But if you add extra sleep() calls somewhere (say because there's also
> > limit on WAL throughput), it will affect how fast VACUUM works in
> > general. Yet it'll continue with the cost-based throttling, but it will
> > never reach the limits. Say you do another 20ms sleep somewhere.
> > Suddenly it means it only does 25 rounds/second, and the actual write
> > limit drops to 4 MB/s.
>
> I think at a first approximation, you probably don't want to add WAL
> delays to vacuum jobs, since they are already slowed down, so the rate
> of WAL they produce might not be your first problem. The problem is
> more things like CREATE INDEX CONCURRENTLY that run at full speed.
>
> That leads to an alternative idea of expanding the existing cost-based
> vacuum delay system to other commands.
>
> We could even enhance the cost system by taking WAL into account as an
> additional factor.
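Tomas's arithmetic above can be sketched as a toy model (this is not PostgreSQL code; the per-round write quota of 0.16 MB is an assumed figure chosen to match the 4 MB/s example, not an actual GUC value):

```python
# Toy model: a cost-based delay loop does a fixed quota of writes per
# round, then sleeps. Any extra sleep added per round lengthens the
# cycle and lowers the effective write rate proportionally.

def effective_rate(quota_mb_per_round, delay_ms, extra_sleep_ms=0.0):
    """Upper bound on write throughput (MB/s) when each round of work
    is followed by delay_ms plus any extra sleep; the work itself is
    assumed to take negligible time."""
    rounds_per_sec = 1000.0 / (delay_ms + extra_sleep_ms)
    return quota_mb_per_round * rounds_per_sec

print(effective_rate(0.16, 20))       # 50 rounds/s -> 8.0 MB/s
print(effective_rate(0.16, 20, 20))   # extra 20 ms: 25 rounds/s -> 4.0 MB/s
```

This is the interaction Tomas describes: the extra sleep is invisible to the cost accounting, yet it halves the rate the cost-based limits were tuned to deliver.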
This is really what I was thinking: let's not have multiple independent
ways of slowing down maintenance and similar jobs to reduce their impact on
I/O to the heap and to WAL.
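A minimal sketch of what folding WAL volume into the existing cost accounting might look like; the names and coefficients here (cost_page_dirty, cost_wal_byte, the 8 kB WAL-per-page figure) are hypothetical illustrations, not actual PostgreSQL GUCs or behavior:

```python
# Hypothetical sketch: a single cost balance that charges both page
# I/O and WAL bytes, sleeping whenever the balance crosses the limit.
# All parameter names and values are illustrative assumptions.

class CostDelay:
    def __init__(self, cost_limit=200, cost_page_dirty=20,
                 cost_wal_byte=0.01, delay_ms=20):
        self.cost_limit = cost_limit
        self.cost_page_dirty = cost_page_dirty
        self.cost_wal_byte = cost_wal_byte
        self.delay_ms = delay_ms
        self.balance = 0.0
        self.sleeps = 0

    def account(self, pages_dirtied=0, wal_bytes=0):
        # Charge both heap I/O and WAL volume against one budget.
        self.balance += pages_dirtied * self.cost_page_dirty
        self.balance += wal_bytes * self.cost_wal_byte
        if self.balance >= self.cost_limit:
            self.sleeps += 1        # a real loop would sleep delay_ms here
            self.balance = 0.0

cd = CostDelay()
for _ in range(100):
    # Each unit of work dirties one page and emits ~8 kB of WAL,
    # costing 20 + 81.92 = 101.92, so the limit trips every 2 units.
    cd.account(pages_dirtied=1, wal_bytes=8192)
print(cd.sleeps)  # 50
```

The point of a single balance is exactly what Stephen argues for: one throttle that sees both kinds of load, so WAL-heavy work slows down through the same mechanism instead of stacking independent sleeps.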
Thanks!
Stephen
| From | Date | Subject |
---|---|---|---|
Next Message | Andres Freund | 2019-02-14 16:06:24 | Re: WAL insert delay settings |
Previous Message | Alvaro Herrera | 2019-02-14 16:02:09 | Re: Early WIP/PoC for inlining CTEs |