Re: Direct I/O

From: Andres Freund <andres(at)anarazel(dot)de>
To: Noah Misch <noah(at)leadboat(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Direct I/O
Date: 2023-04-09 21:45:16
Message-ID: 20230409214516.htl4vok3sxtb2wu2@awork3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2023-04-08 21:29:54 -0700, Noah Misch wrote:
> On Sat, Apr 08, 2023 at 11:08:16AM -0700, Andres Freund wrote:
> > On 2023-04-07 23:04:08 -0700, Andres Freund wrote:
> > > There were some failures in CI (e.g. [1], and perhaps also bf, didn't yet
> > > check), about "no unpinned buffers available". I was worried for a moment
> > > that this could actually be related to the bulk extension patch.
> > >
> > > But it looks like it's older - and not caused by direct_io support (except by
> > > way of the test existing). I reproduced the issue locally by setting s_b even
> > > lower, to 16, and made the ERROR a PANIC.
> > >
> > > [backtrace]
>
> I get an ERROR, not a PANIC:

What I meant is that I changed the code to use PANIC, to make it easier to get
a backtrace.

> > > If you look at log_newpage_range(), it's not surprising that we get this error
> > > - it pins up to 32 buffers at once.
> > >
> > > Afaics log_newpage_range() originates in 9155580fd5fc, but this caller is from
> > > c6b92041d385.
>
> > > Do we care about fixing this in the backbranches? Probably not, given there
> > > haven't been user complaints?
>
> I would not. This is only going to come up where the user goes out of the way
> to use near-minimum shared_buffers.

It's not *just* that scenario. With a few concurrent connections you can get
into problematic territory even with halfway reasonable shared buffers.
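
To make the arithmetic concrete: log_newpage_range() pins and
share-locks up to XLR_MAX_BLOCK_ID (32) buffers before writing a single
WAL record, so at s_b=16 one caller alone wants twice the entire pool,
and a handful of concurrent callers can drain a much larger one. A
condensed sketch of the batching loop (from memory, not verbatim from
the tree; error paths and PageSetLSN() omitted):

    while (blkno < endblk)
    {
        Buffer      bufpack[XLR_MAX_BLOCK_ID];
        int         nbufs = 0;

        /* Collect a batch; every buffer collected here stays pinned. */
        while (nbufs < XLR_MAX_BLOCK_ID && blkno < endblk)
        {
            Buffer      buf = ReadBufferExtended(rel, forknum, blkno,
                                                 RBM_NORMAL, NULL);

            LockBuffer(buf, BUFFER_LOCK_SHARE);
            if (!PageIsNew(BufferGetPage(buf)))
                bufpack[nbufs++] = buf; /* pinned until the batch is logged */
            else
                UnlockReleaseBuffer(buf);
            blkno++;
        }

        /* One WAL record for the whole batch, then drop the pins. */
        XLogBeginInsert();
        for (int i = 0; i < nbufs; i++)
            XLogRegisterBuffer(i, bufpack[i], flags);
        XLogInsert(RM_XLOG_ID, XLOG_FPI);
        for (int i = 0; i < nbufs; i++)
            UnlockReleaseBuffer(bufpack[i]);
    }

Once the pool's unpinned buffers run out mid-batch, the next
ReadBufferExtended() is what throws "no unpinned buffers available".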

> > Here's a quick prototype of this approach.
>
> This looks fine. I'm not enthusiastic about incurring post-startup cycles to
> cater to allocating less than 512k*max_connections of shared buffers, but I
> expect the cycles in question are negligible here.

Yea, I can't imagine it'd matter, compared to the other costs. Arguably it'd
allow us to crank up the maximum batch size further, even.
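
For the archives, the rough shape of that approach, as a hypothetical
sketch rather than the posted patch (the LimitAdditionalPins()-style
helper and its placement here are assumptions, modeled on the helper the
bulk extension work added to bufmgr.c):

    /*
     * Hypothetical sketch, not the actual prototype: clamp how many
     * buffers one batch may keep pinned to what the buffer manager says
     * this backend can still safely take, instead of unconditionally
     * using XLR_MAX_BLOCK_ID (32).
     */
    uint32      batch_size = XLR_MAX_BLOCK_ID;

    LimitAdditionalPins(&batch_size);   /* roughly NBuffers / max backends */
    batch_size = Max(batch_size, 1);    /* always make some progress */

The collecting loop would then test nbufs < batch_size rather than
nbufs < XLR_MAX_BLOCK_ID; cranking the maximum batch size up later
would just mean raising the initial cap.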

Greetings,

Andres Freund
