From: Christopher Faulet
Date: Tue, 3 Feb 2026 06:54:11 +0000 (+0100)
Subject: MEDIUM: compression: Be sure to never compress more than a chunk at once
X-Git-Tag: v3.4-dev5~44
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=f82ace414b0388745df2018b71c2313d4b41fe46;p=thirdparty%2Fhaproxy.git

MEDIUM: compression: Be sure to never compress more than a chunk at once

When the compression is performed, a trash chunk is used. So be sure to
never compress more data than the trash size; otherwise the compression
could fail. Today this cannot happen, but with the large-buffers support
on channels it could become an issue.

Note that this part should be reviewed to evaluate whether a larger chunk
should also be used to perform the compression, maybe via an option.
---

diff --git a/src/flt_http_comp.c b/src/flt_http_comp.c
index 002d074eb..79dbcb84a 100644
--- a/src/flt_http_comp.c
+++ b/src/flt_http_comp.c
@@ -302,6 +302,10 @@ comp_http_payload(struct stream *s, struct filter *filter, struct http_msg *msg,
 			last = 0;
 			v.len = len;
 		}
+		if (v.len > b_size(&trash)) {
+			last = 0;
+			v.len = b_size(&trash);
+		}
 		ret = htx_compression_buffer_add_data(st, v.ptr, v.len, &trash, dir);
 		if (ret < 0 || htx_compression_buffer_end(st, &trash, last, dir) < 0)