From 55a5ee30cd65886ff0a2e7ffef4ec2816fbec273 Mon Sep 17 00:00:00 2001
From: Robert Haas
Date: Fri, 5 Apr 2024 13:39:29 -0400
Subject: [PATCH] Fix incorrect calculation in BlockRefTableEntryGetBlocks.

The previous formula was incorrect in the case where the function's
nblocks argument was a multiple of BLOCKS_PER_CHUNK, which happens
whenever a relation segment file is exactly 512MB or exactly 1GB in
length. In such cases, the formula would calculate a stop_offset of
0 rather than 65536, resulting in modified blocks in the second half
of a 1GB file, or all the modified blocks in a 512MB file, being
omitted from the incremental backup.

Reported off-list by Tomas Vondra and Jakub Wartak.

Discussion: http://postgr.es/m/CA+TgmoYwy_KHp1-5GYNmVa=zdeJWhNH1T0SBmEuvqQNJEHj1Lw@mail.gmail.com
---
 src/common/blkreftable.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/src/common/blkreftable.c b/src/common/blkreftable.c
index bfa6f7ab5d8..845b5d1dc46 100644
--- a/src/common/blkreftable.c
+++ b/src/common/blkreftable.c
@@ -410,7 +410,11 @@ BlockRefTableEntryGetBlocks(BlockRefTableEntry *entry,
 		if (chunkno == start_chunkno)
 			start_offset = start_blkno % BLOCKS_PER_CHUNK;
 		if (chunkno == stop_chunkno - 1)
-			stop_offset = stop_blkno % BLOCKS_PER_CHUNK;
+		{
+			Assert(stop_blkno > chunkno * BLOCKS_PER_CHUNK);
+			stop_offset = stop_blkno - (chunkno * BLOCKS_PER_CHUNK);
+			Assert(stop_offset <= BLOCKS_PER_CHUNK);
+		}
 
 		/*
 		 * Handling differs depending on whether this is an array of offsets
-- 
2.39.5