Tuning threshold for BAS_BULKREAD (large tables)

From: "Jamison, Kirk" <k(dot)jamison(at)jp(dot)fujitsu(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Tuning threshold for BAS_BULKREAD (large tables)
Date: 2019-01-22 07:35:10
Message-ID: D09B13F772D2274BB348A310EE3027C6412928@g01jpexmbkw24
Lists: pgsql-general

Hi,

I have a source code-related question on BufferAccessStrategyType BAS_BULKREAD.
Currently, this access strategy is applied internally when a sequential scan reads a table larger than 1/4 of shared_buffers:
src/backend/access/heap/heapam.c:initscan()

    if (!RelationUsesLocalBuffers(scan->rs_rd) &&
        scan->rs_nblocks > NBuffers / 4)
        ...
        /* During a rescan, keep the previous strategy object. */
        if (scan->rs_strategy == NULL)
            scan->rs_strategy = GetAccessStrategy(BAS_BULKREAD);

Users can tune their shared_buffers size, but they are not able to tune this threshold.
I'm wondering how it affects the current workload when a table is larger than shared_buffers.
Does PostgreSQL really cache such large tables in shared buffers? How does it affect the other pages already stored in shared buffers?

Oracle also has a closely related user parameter that allocates space for large tables in the buffer cache:
https://docs.oracle.com/database/121/VLDBG/GUID-A553169D-C6CD-443E-88C3-B746D5E32923.htm#VLDBG14145

I'd like to ask how PostgreSQL has optimized large-table scans, e.g. with synchronized sequential scans.
If it would be beneficial, I'm wondering whether it would also be helpful in PostgreSQL for users to be able to tune this threshold instead of relying on the hardcoded value (NBuffers / 4).
