Make sure not to destroy bucket brigades that have been created by earlier
filters. Otherwise the pool cleanups would be removed, causing potential
memory leaks later on.
Submitted by: Stefan Fritsch
Reviewed by: sf, rpluem, pgollucci
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/branches/2.2.x@916627 13f79535-47bb-0310-9956-ffa450edef68
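As a minimal sketch of the pattern this commit enforces (the filter name and
context handling below are illustrative, not part of the patch): a filter that
takes ownership of the data it receives should empty the incoming brigade with
apr_brigade_cleanup() rather than apr_brigade_destroy(), because that brigade
may have been created, and may still be reused, by an earlier filter whose
pool cleanup must stay registered.

    #include "httpd.h"
    #include "util_filter.h"
    #include "apr_buckets.h"

    /* Hypothetical output filter: moves all incoming buckets into its own
     * long-lived brigade and passes that brigade downstream. */
    static apr_status_t example_output_filter(ap_filter_t *f,
                                              apr_bucket_brigade *bb)
    {
        apr_bucket_brigade *out = f->ctx;
        apr_bucket *e;

        if (out == NULL) {
            /* Our own brigade, created once and reused; apr_brigade_create()
             * registers a cleanup on f->c->pool. */
            out = f->ctx = apr_brigade_create(f->c->pool, f->c->bucket_alloc);
        }

        /* Take the caller's buckets without touching the caller's brigade
         * structure itself. */
        while (!APR_BRIGADE_EMPTY(bb)) {
            e = APR_BRIGADE_FIRST(bb);
            APR_BUCKET_REMOVE(e);
            APR_BRIGADE_INSERT_TAIL(out, e);
        }

        /* Empty (already empty here) but keep the caller's brigade and its
         * pool cleanup intact; apr_brigade_destroy(bb) would unregister that
         * cleanup and could leak buckets the creator adds later. */
        apr_brigade_cleanup(bb);

        return ap_pass_brigade(f->next, out);
    }

Creating the private brigade once in f->ctx and emptying it between
invocations, rather than destroying and recreating it, follows the same reuse
pattern the hunks below rely on.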
access control is still vulnerable, unless using OpenSSL >= 0.9.8l.
[Joe Orton, Ruediger Pluem, Hartmut Keil <Hartmut.Keil adnovum.ch>]
+ *) core: Fix potential memory leaks by making sure to not destroy
+ bucket brigades that have been created by earlier filters.
+ [Stefan Fritsch]
+
*) mod_authnz_ldap: Add AuthLDAPBindAuthoritative to allow Authentication to
try other providers in the case of an LDAP bind failure.
PR 46608 [Justin Erenkrantz, Joe Schaefer, Tony Stevenson]
2.2.x Patch: http://people.apache.org/~minfrin/httpd-cache-thundering.patch
+1: minfrin, jim, pgollucci
- * core: Make sure to not destroy bucket brigades that have been created
- by earlier filters. Otherwise the pool cleanups would be removed causing
- potential memory leaks later on.
- Trunk patch: http://svn.apache.org/viewvc?view=revision&revision=821477
- 2.2.x patch: http://people.apache.org/~sf/avoid_apr_brigade_destroy-2.2.x.diff
- +1: sf, rpluem, pgollucci
-
PATCHES PROPOSED TO BACKPORT FROM TRUNK:
[ New proposals should be added at the end of the list ]
APR_BRIGADE_INSERT_TAIL(bsend, e);
/* we're done with the original content - all of our data is in bsend. */
- apr_brigade_destroy(bb);
+ apr_brigade_cleanup(bb);
/* send our multipart output */
return ap_pass_brigade(f->next, bsend);
ctx = f->ctx = apr_pcalloc(r->pool, sizeof(header_filter_ctx));
}
else if (ctx->headers_sent) {
- apr_brigade_destroy(b);
+ apr_brigade_cleanup(b);
return OK;
}
}
ap_pass_brigade(f->next, b2);
if (r->header_only) {
- apr_brigade_destroy(b);
+ apr_brigade_cleanup(b);
ctx->headers_sent = 1;
return OK;
}
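The header_only and headers-already-sent branches above discard the response
body; a short sketch of the failure mode they avoid (with an assumed,
illustrative filter name) looks like this:

    /* Illustrative only: why apr_brigade_destroy() is wrong for a brigade
     * owned by an earlier filter. */
    static apr_status_t discard_body_filter(ap_filter_t *f,
                                            apr_bucket_brigade *bb)
    {
        if (f->r->header_only) {
            /* apr_brigade_destroy(bb) would also remove the pool cleanup
             * registered when the creator called apr_brigade_create(); if
             * the creator keeps reusing bb afterwards, its buckets are
             * never destroyed when the pool goes away. */
            apr_brigade_cleanup(bb);   /* delete the buckets, keep the brigade */
            return OK;
        }
        return ap_pass_brigade(f->next, bb);
    }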
/* Create a temporary brigade as a means
* of concatenating a bunch of buckets together
*/
+ temp_brig = apr_brigade_create(f->c->pool,
+ f->c->bucket_alloc);
if (last_merged_bucket) {
/* If we've concatenated together small
* buckets already in a previous pass,
* these buckets, so that the content
* in them doesn't have to be copied again.
*/
- apr_bucket_brigade *bb;
- bb = apr_brigade_split(b,
- APR_BUCKET_NEXT(last_merged_bucket));
- temp_brig = b;
- b = bb;
- }
- else {
- temp_brig = apr_brigade_create(f->c->pool,
- f->c->bucket_alloc);
+ APR_BRIGADE_PREPEND(b, temp_brig);
+ brigade_move(temp_brig, b, APR_BUCKET_NEXT(last_merged_bucket));
}
temp = APR_BRIGADE_FIRST(b);
logio_add_bytes_out(c, bytes_sent);
}
- apr_brigade_destroy(b);
+ apr_brigade_cleanup(b);
/* drive cleanups for resources which were set aside
* this may occur before or after termination of the request which
"core_output_filter: writing data to the network");
if (more)
- apr_brigade_destroy(more);
+ apr_brigade_cleanup(more);
/* No need to check for SUCCESS, we did that above. */
if (!APR_STATUS_IS_EAGAIN(rv)) {