Re: cost delay brainstorming

From: Andres Freund <andres(at)anarazel(dot)de>
To: Nathan Bossart <nathandbossart(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: cost delay brainstorming
Date: 2024-06-18 20:32:38
Message-ID: 20240618203238.3wqhu722ozyux3do@awork3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2024-06-18 13:50:46 -0500, Nathan Bossart wrote:
> Have we ruled out further adjustments to the cost parameters as a first
> step?

I'm not against that, but it doesn't address the issue that with the current
logic one set of values just isn't going to fit both a 60MB database that's
allowed to burst to 100 IOPS and a 60TB database that has multiple 1M-IOPS
NVMe drives.
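To make that gap concrete, here's a back-of-the-envelope sketch (Python, not
PostgreSQL source) of the I/O budget implied by the current defaults
(vacuum_cost_limit = 200, autovacuum_vacuum_cost_delay = 2ms,
vacuum_cost_page_miss = 2, vacuum_cost_page_dirty = 20, 8kB blocks),
ignoring the time vacuum spends doing actual work:

```python
# Rough model: vacuum accrues cost until it hits vacuum_cost_limit,
# then sleeps for the cost delay, so the steady-state budget is
# approximately COST_LIMIT cost units per DELAY_MS of wall time.
COST_LIMIT = 200      # vacuum_cost_limit (default)
DELAY_MS = 2.0        # autovacuum_vacuum_cost_delay in ms (default)
PAGE_MISS = 2         # vacuum_cost_page_miss (default)
PAGE_DIRTY = 20       # vacuum_cost_page_dirty (default)
BLOCK_SIZE = 8192     # bytes per page (default build)

cost_per_sec = COST_LIMIT * (1000.0 / DELAY_MS)          # 100,000 units/s

read_pages_per_sec = cost_per_sec / PAGE_MISS            # 50,000 pages/s
read_mb_per_sec = read_pages_per_sec * BLOCK_SIZE / 1e6  # ~410 MB/s

dirty_pages_per_sec = cost_per_sec / PAGE_DIRTY          # 5,000 pages/s
dirty_mb_per_sec = dirty_pages_per_sec * BLOCK_SIZE / 1e6  # ~41 MB/s

print(f"read budget:  {read_mb_per_sec:.0f} MB/s")
print(f"dirty budget: {dirty_mb_per_sec:.0f} MB/s")
```

Roughly 50k misses/s is already far past the burstable instance's 100 IOPS,
yet a small fraction of what a multi-drive NVMe array can sustain, so no
single pair of values fits both ends.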

That said, the fact that vacuum_cost_page_hit is 1 and vacuum_cost_page_miss
is 2 just doesn't make much sense aesthetically. There's a far bigger
multiplier in actual costs than that...

> If you are still recommending that folks raise it and never recommending
> that folks lower it, ISTM that our defaults might still not be in the right
> ballpark. The autovacuum_vacuum_cost_delay adjustment you reference (commit
> cbccac3) is already 5 years old, so maybe it's worth another look.

Adjusting cost delay much lower doesn't make much sense imo. It's already only
2ms on a variable with 1ms granularity. We could increase the resolution, but
sleeping for much shorter often isn't that cheap (hardware timers have to be
set up each time, and because the intervals are so short they can't be
coalesced with other timers) and/or barely gives the CPU time to switch to
other tasks.
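A quick illustrative sketch (Python, timings are OS- and load-dependent) of
why sub-millisecond sleeps buy little: the relative overshoot of a short
sleep request tends to be much larger than for a 2ms one, so finer delay
resolution doesn't translate into proportionally finer pacing:

```python
import time

def measure_sleep(requested_s, iterations=50):
    """Average wall time actually spent in a sleep of requested_s seconds."""
    total = 0.0
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(requested_s)
        total += time.perf_counter() - start
    return total / iterations

for requested in (0.002, 0.0001):  # 2 ms vs. 0.1 ms
    actual = measure_sleep(requested)
    overshoot_pct = (actual / requested - 1.0) * 100.0
    print(f"requested {requested*1e3:.2f} ms, "
          f"got {actual*1e3:.3f} ms ({overshoot_pct:+.0f}% overshoot)")
```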

So we'd have to increase the cost limit.

Greetings,

Andres Freund
