From: Frantisek Sumsal
Date: Sun, 12 Apr 2026 13:02:11 +0000 (+0200)
Subject: journal: limit decompress_blob() output to DATA_SIZE_MAX
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=31d360fb0b28859aba891aaefb1452f820a5861a;p=thirdparty%2Fsystemd.git

journal: limit decompress_blob() output to DATA_SIZE_MAX

We already have checks in place during compression that limit the size of
the data we compress, so compressed objects shouldn't decompress to
anything larger than DATA_SIZE_MAX unless the journal file has been
tampered with. Let's make this explicit and apply that limit to all our
decompress_blob() calls in the journal-handling code.

One scenario this should prevent is opening and verifying a journal file
that carries a compression bomb in its payload:

$ ls -lh test.journal
-rw-rw-r--+ 1 fsumsal fsumsal 1.2M Apr 12 15:07 test.journal
$ systemd-run --user --wait --pipe -- build-local/journalctl --verify --file=$PWD/test.journal
Running as unit: run-p682422-i4875779.service
000110: Invalid hash (00000000 vs. 11e4948d73bdafdd)
000110: Invalid object contents: Bad message
File corruption detected at /home/fsumsal/repos/@systemd/systemd/test.journal:272 (of 1249896 bytes, 0%).
FAIL: /home/fsumsal/repos/@systemd/systemd/test.journal (Bad message)
Finished with result: exit-code
Main processes terminated with: code=exited, status=1/FAILURE
Service runtime: 48.051s
CPU time consumed: 47.941s
Memory peak: 8G (swap: 0B)

The same should, in theory, be possible with just `journalctl --file=`,
but the reproducer would be a bit more involved (I haven't tried it yet).

Lastly, the change in journal-remote is mostly hardening, as the maximum
input size to decompress_blob() there is bounded by MHD's connection
memory limit (set to JOURNAL_SERVER_MEMORY_MAX, which is 128 KiB at the
time of writing), so the possible output size there is already quite
limited (e.g. ~800-900 MiB for xz-compressed data).
---

diff --git a/src/journal-remote/journal-remote-main.c b/src/journal-remote/journal-remote-main.c
index fbb53cc42fd..9cb84bbe1e6 100644
--- a/src/journal-remote/journal-remote-main.c
+++ b/src/journal-remote/journal-remote-main.c
@@ -286,7 +286,7 @@ static int process_http_upload(
                 _cleanup_free_ char *buf = NULL;
                 size_t buf_size;
 
-                r = decompress_blob(source->compression, upload_data, *upload_data_size, (void **) &buf, &buf_size, 0);
+                r = decompress_blob(source->compression, upload_data, *upload_data_size, (void **) &buf, &buf_size, DATA_SIZE_MAX);
                 if (r < 0)
                         return mhd_respondf(connection, r, MHD_HTTP_BAD_REQUEST, "Decompression of received blob failed.");
 
diff --git a/src/libsystemd/sd-journal/journal-file.c b/src/libsystemd/sd-journal/journal-file.c
index 54c647d75b8..a4b270ceb72 100644
--- a/src/libsystemd/sd-journal/journal-file.c
+++ b/src/libsystemd/sd-journal/journal-file.c
@@ -1971,7 +1971,7 @@ static int maybe_decompress_payload(
                         return 1;
                 }
 
-                r = decompress_blob(compression, payload, size, &f->compress_buffer, &rsize, 0);
+                r = decompress_blob(compression, payload, size, &f->compress_buffer, &rsize, DATA_SIZE_MAX);
                 if (r < 0)
                         return r;
 
diff --git a/src/libsystemd/sd-journal/journal-verify.c b/src/libsystemd/sd-journal/journal-verify.c
index 1b3bf3fca35..11532c5f8a8 100644
--- a/src/libsystemd/sd-journal/journal-verify.c
+++ b/src/libsystemd/sd-journal/journal-verify.c
@@ -126,7 +126,7 @@ static int hash_payload(JournalFile *f, Object *o, uint64_t offset, const uint8_
                 _cleanup_free_ void *b = NULL;
                 size_t b_size;
 
-                r = decompress_blob(c, src, size, &b, &b_size, 0);
+                r = decompress_blob(c, src, size, &b, &b_size, DATA_SIZE_MAX);
                 if (r < 0) {
                         error_errno(offset, r, "%s decompression failed: %m",
                                     compression_to_string(c));