pgsql: Fix incorrect calculation in BlockRefTableEntryGetBlocks.

From: Robert Haas <rhaas(at)postgresql(dot)org>
To: pgsql-committers(at)lists(dot)postgresql(dot)org
Subject: pgsql: Fix incorrect calculation in BlockRefTableEntryGetBlocks.
Date: 2024-04-05 17:50:41
Message-ID: E1rsni5-000fUn-Ae@gemulon.postgresql.org

Fix incorrect calculation in BlockRefTableEntryGetBlocks.

The previous formula was incorrect in the case where the function's
nblocks argument was a multiple of BLOCKS_PER_CHUNK, which happens
whenever a relation segment file is exactly 512MB or exactly 1GB in
length. In such cases, the formula would calculate a stop_offset of
0 rather than 65536, resulting in modified blocks in the second half
of a 1GB file, or all the modified blocks in a 512MB file, being
omitted from the incremental backup.
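To make the arithmetic concrete, here is a minimal standalone sketch of the
failure mode; this is not the actual blkreftable.c code, and it assumes
BLOCKS_PER_CHUNK is 65536, as implied by "0 rather than 65536" above. A stop
offset computed as a plain modulo wraps to 0 exactly when the final chunk is
completely full, so those blocks fall out of the range being scanned; one way
to express a corrected stop offset is to map the zero case back to
BLOCKS_PER_CHUNK.

    #include <stdio.h>

    #define BLOCKS_PER_CHUNK 65536	/* assumed value, per the text above */

    /* Buggy form: wraps to 0 when nblocks is an exact multiple of the chunk size. */
    static unsigned int
    stop_offset_buggy(unsigned int nblocks)
    {
        return nblocks % BLOCKS_PER_CHUNK;
    }

    /* Corrected form: a completely full final chunk yields BLOCKS_PER_CHUNK, not 0. */
    static unsigned int
    stop_offset_fixed(unsigned int nblocks)
    {
        unsigned int off = nblocks % BLOCKS_PER_CHUNK;

        return (nblocks > 0 && off == 0) ? BLOCKS_PER_CHUNK : off;
    }

    int
    main(void)
    {
        /* 512MB segment of 8kB blocks = 65536 blocks: exactly one full chunk */
        printf("512MB: buggy=%u fixed=%u\n",
               stop_offset_buggy(65536), stop_offset_fixed(65536));
        /* 1GB segment = 131072 blocks: the second chunk is also exactly full */
        printf("1GB:   buggy=%u fixed=%u\n",
               stop_offset_buggy(131072), stop_offset_fixed(131072));
        return 0;
    }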

Reported off-list by Tomas Vondra and Jakub Wartak.

Discussion: http://postgr.es/m/CA+TgmoYwy_KHp1-5GYNmVa=zdeJWhNH1T0SBmEuvqQNJEHj1Lw@mail.gmail.com

Branch
------
master

Details
-------
https://git.postgresql.org/pg/commitdiff/55a5ee30cd65886ff0a2e7ffef4ec2816fbec273

Modified Files
--------------
src/common/blkreftable.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
