From: Andres Freund <andres(at)anarazel(dot)de>
To: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pgsql: Compute XID horizon for page level index vacuum on primary.
Date: 2019-03-30 21:45:05
Message-ID: 9AE2E8AF-8968-4774-8D6E-4D0A3B4FC19D@anarazel.de
Lists: pgsql-committers pgsql-hackers
Hi,
On March 30, 2019 5:33:12 PM EDT, Thomas Munro <thomas(dot)munro(at)gmail(dot)com> wrote:
>On Sun, Mar 31, 2019 at 8:20 AM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
>> On Sat, Mar 30, 2019 at 8:44 AM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> > Overall I'm inclined to think that we're making the same mistake here
>> > that we did with work_mem, namely, assuming that you can control a
>> > bunch of different prefetching behaviors with a single GUC and things
>> > will be OK. Let's just create a new GUC for this and default it to 10
>> > or something and go home.
>>
>> I agree. If you invent a new GUC, then everybody notices, and it
>> usually has to be justified quite rigorously. There is a strong
>> incentive to use an existing GUC, if only because the problem that
>> this creates is harder to measure than the supposed problem that it
>> avoids. This can perversely work against the goal of making the system
>> easy to use. Stretching the original definition of a GUC is bad.
>>
>> I take issue with the general assumption that not adding a GUC at
>> least makes things easier for users. In reality, it depends entirely
>> on the situation at hand.
>
>I'm not sure I understand why this is any different from the bitmap
>heapscan case though, or in fact why we are adding 10 in this case.
>In both cases we will soon be reading the referenced buffers, and it
>makes sense to queue up prefetch requests for the blocks if they
>aren't already in shared buffers. In both cases, the number of
>prefetch requests we want to send to the OS is somehow linked to the
>number of I/O requests we think the OS can handle concurrently
>(since that's one factor determining how fast it drains them), but
>it's not necessarily the same as that number, AFAICS. It's useful to
>queue some number of prefetch requests even if you have no IO
>concurrency at all (a single old school spindle), just because the OS
>will chew on that queue in the background while we're also doing
>stuff, which is probably what that "+ 10" is expressing. But that
>seems to apply to bitmap heapscan too, doesn't it?
The index page deletion code does work on behalf of multiple backends; bitmap scans don't. If your system is busy, it makes sense to limit the resource usage of per-backend work, but not really of work on shared resources like page reuse. A bit like work_mem vs maintenance_work_mem.
Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.