From: Daniel Stenberg
Date: Sun, 27 Sep 2009 21:34:13 +0000 (+0000)
Subject: - I introduced a maximum limit for received HTTP headers. It is controlled by
X-Git-Tag: curl-7_19_7~109
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=8646cecb785e8ac426527daedc1eb35e27f2edca;p=thirdparty%2Fcurl.git

- I introduced a maximum limit for received HTTP headers. It is controlled by
  the define CURL_MAX_HTTP_HEADER, which is even exposed in the public header
  file to allow users to fairly easily rebuild libcurl with a modified limit.
  The rationale for a fixed limit is that libcurl realloc()s a buffer to be
  able to put a full header into it, so that it can call the header callback
  with the entire header, but that also risks getting it into trouble if a
  server by mistake or willingly sends a header that is more or less without
  an end. The limit is set to 100K.
---

diff --git a/CHANGES b/CHANGES
index 6b68f6cee4..af62b60662 100644
--- a/CHANGES
+++ b/CHANGES
@@ -6,6 +6,16 @@ Changelog
 
+Daniel Stenberg (27 Sep 2009)
+- I introduced a maximum limit for received HTTP headers. It is controlled by
+  the define CURL_MAX_HTTP_HEADER which is even exposed in the public header
+  file to allow for users to fairly easy rebuild libcurl with a modified
+  limit. The rationale for a fixed limit is that libcurl is realloc()ing a
+  buffer to be able to put a full header into it, so that it can call the
+  header callback with the entire header, but that also risk getting it into
+  trouble if a server by mistake or willingly sends a header that is more or
+  less without an end. The limit is set to 100K.
+
 Daniel Stenberg (26 Sep 2009)
 - John P. McCaskey posted a bug report that showed how libcurl did wrong when
   saving received cookies with no given path, if the path in the request had a

diff --git a/RELEASE-NOTES b/RELEASE-NOTES
index b8b46a61e6..2035a93bfd 100644
--- a/RELEASE-NOTES
+++ b/RELEASE-NOTES
@@ -11,6 +11,7 @@ This release includes the following changes:
 
  o -T. is now for non-blocking uploading from stdin
  o SYST handling on FTP for OS/400 FTP server cases
+ o libcurl refuses to read a single HTTP header longer than 100K
 
 This release includes the following bugfixes:

diff --git a/include/curl/curl.h b/include/curl/curl.h
index 34da873b68..4b79eca9f4 100644
--- a/include/curl/curl.h
+++ b/include/curl/curl.h
@@ -178,6 +178,15 @@ typedef int (*curl_progress_callback)(void *clientp,
    time for those who feel adventurous. */
 #define CURL_MAX_WRITE_SIZE 16384
 #endif
+
+#ifndef CURL_MAX_HTTP_HEADER
+/* The only reason to have a max limit for this is to avoid the risk of a bad
+   server feeding libcurl with a never-ending header that will cause reallocs
+   infinitely */
+#define CURL_MAX_HTTP_HEADER (100*1024)
+#endif
+
+
 /* This is a magic return code for the write callback that, when returned,
    will signal libcurl to pause receiving on the current transfer. */
 #define CURL_WRITEFUNC_PAUSE 0x10000001

diff --git a/lib/transfer.c b/lib/transfer.c
index 37e45002be..405add25d0 100644
--- a/lib/transfer.c
+++ b/lib/transfer.c
@@ -752,12 +752,22 @@ static CURLcode header_append(struct SessionHandle *data,
                               struct SingleRequest *k,
                               size_t length)
 {
-  if(k->hbuflen + length >= data->state.headersize) {
+  if(k->hbuflen + length >= data->state.headersize) {
     /* We enlarge the header buffer as it is too small */
     char *newbuff;
     size_t hbufp_index;
-    size_t newsize=CURLMAX((k->hbuflen+ length)*3/2,
-                           data->state.headersize*2);
+    size_t newsize;
+
+    if(k->hbuflen + length > CURL_MAX_HTTP_HEADER) {
+      /* The reason to have a max limit for this is to avoid the risk of a bad
+         server feeding libcurl with a never-ending header that will cause
+         reallocs infinitely */
+      failf (data, "Avoided giant realloc for header (max is %d)!",
+             CURL_MAX_HTTP_HEADER);
+      return CURLE_OUT_OF_MEMORY;
+    }
+
+    newsize=CURLMAX((k->hbuflen+ length)*3/2, data->state.headersize*2);
     hbufp_index = k->hbufp - data->state.headerbuff;
     newbuff = realloc(data->state.headerbuff,
                       newsize);
     if(!newbuff) {