When getting multiple URLs, curl didn't properly reset the byte counter
after a successful transfer, so if the subsequent transfer failed it
would wrongly use the previous byte counter and behave badly (segfault)
because of that. The code assumes that the byte counter and the 'stream'
pointer are kept in sync.
Reported by: Jon Sargeant
Bug: http://curl.haxx.se/bug/view.cgi?id=3028241
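The invariant described above can be sketched in a minimal, self-contained write callback. This is an illustration, not curl's actual code: the `OutStruct` fields mirror the patch below, but `tmpfile()` stands in for opening the real output file, and the reset step between URLs is shown explicitly.

```c
#include <stdio.h>

/* Hypothetical, simplified per-URL output state; the field names
 * mirror the patch below but this is only a sketch. */
struct OutStruct {
  FILE *stream;   /* opened lazily, on the first write */
  size_t bytes;   /* bytes written for the CURRENT transfer only */
};

/* Write callback in the fwrite() signature: returns the number of
 * items consumed; anything else signals failure to the caller. */
static size_t my_fwrite(const void *buffer, size_t sz, size_t nmemb,
                        void *userp)
{
  struct OutStruct *out = userp;
  size_t rc;
  if(!out->stream) {
    out->bytes = 0;            /* nothing written yet for this URL */
    out->stream = tmpfile();   /* stand-in for opening the real file */
    if(!out->stream)
      return 0;                /* failure */
  }
  rc = fwrite(buffer, sz, nmemb, out->stream);
  if(rc == nmemb)
    out->bytes += sz * nmemb;  /* counter tracks this stream only */
  return rc;
}

/* Between URLs, BOTH fields must be reset together, which is the
 * invariant the commit message describes. */
static void reset_for_next_url(struct OutStruct *out)
{
  if(out->stream)
    fclose(out->stream);
  out->stream = NULL;          /* open again when needed */
  out->bytes = 0;              /* reset byte counter */
}
```

If only `stream` were reset and `bytes` left stale, a failure in the next transfer would report, or act on, a byte count belonging to the previous one.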
o threaded resolver: fix timeout issue
o multi: fix condition that remove timers before trigger
o examples: add curl_multi_timeout
This release includes the following known bugs:

 o --retry: access violation with URL part sets continued
const size_t err_rc = (sz * nmemb) ? 0 : 1;
if(!out->stream) {
+ out->bytes = 0; /* nothing written yet */
if(!out->filename) {
warnf(config, "Remote filename has no length!\n");
return err_rc; /* Failure */
rc = fwrite(buffer, sz, nmemb, out->stream);
- if((sz * nmemb) == rc) {
+ if((sz * nmemb) == rc)
/* we added this amount of data to the output */
out->bytes += (sz * nmemb);
- }
if(config->readbusy) {
config->readbusy = FALSE;
}
else {
outs.stream = NULL; /* open when needed */
+ outs.bytes = 0; /* reset byte counter */
}
}
infdopen=FALSE;