multirange_recv and BlockRefTableReaderNextRelation were incautious
about multiplying a possibly-large integer by a factor more than 1
and then using it as an allocation size. This is harmless on 64-bit
systems where we'd compute a size exceeding MaxAllocSize and then
fail, but on 32-bit systems we could overflow size_t, leading to an
undersized allocation and a buffer overrun.
Fix these places by using palloc_array() instead of a handwritten
multiplication. (In HEAD, some of them were fixed already, but
none of that work got back-patched at the time.)
In addition, BlockRefTableReaderNextRelation passes the same value
to BlockRefTableRead's "int length" parameter. In 64-bit frontend
builds, palloc_array() permits a larger array size than the backend's
allocator would, potentially allowing that parameter to
overflow. Add an explicit check to forestall that and keep the
behavior the same cross-platform.
Reported-by: Xint Code
Author: Tom Lane <tgl@sss.pgh.pa.us>
Backpatch-through: 14
Security: CVE-2026-6473
Oid mltrngtypoid = PG_GETARG_OID(1);
int32 typmod = PG_GETARG_INT32(2);
MultirangeIOData *cache;
- uint32 range_count;
+ int32 range_count;
RangeType **ranges;
MultirangeType *ret;
StringInfoData tmpbuf;
cache = get_multirange_io_data(fcinfo, mltrngtypoid, IOFunc_receive);
range_count = pq_getmsgint(buf, 4);
+ /* palloc_array will enforce a more-or-less-sane range_count value */
ranges = palloc_array(RangeType *, range_count);
initStringInfo(&tmpbuf);
return false;
}
+ /*
+ * Sanity-check the nchunks value. In the backend, palloc_array would
+ * enforce this anyway (with a more generic error message); but in
+ * frontend it would not, potentially allowing BlockRefTableRead's length
+ * parameter to overflow.
+ */
+ if (sentry.nchunks > MaxAllocSize / sizeof(uint16))
+ {
+ reader->error_callback(reader->error_callback_arg,
+ "file \"%s\" has oversized chunk size array",
+ reader->error_filename);
+ return false;
+ }
+
/* Read chunk size array. */
if (reader->chunk_size != NULL)
pfree(reader->chunk_size);