* Close the connection in case an EOC bucket was seen
* In case we see an EOC bucket and there was an error bucket before, use its
  status as the status for the request. This should ensure proper status
  logging in the access log.
* We need to set r->status on each call after we have noticed an EOC, as
  data bucket generators like ap_die might have changed the status
  code. But we know better in this case and insist on the status
  code that we have seen in the error bucket.
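The error-bucket bookkeeping described above can be modeled as a minimal standalone sketch. The struct and function names below are simplified stand-ins, not httpd's actual API; only the decision logic mirrors the filter:

```c
#include <assert.h>

/* Illustrative model of the outerror filter context (simplified;
 * the real context lives in httpd's http filter code). */
typedef struct {
    int seen_eoc;     /* 1 once an EOC bucket has passed through */
    int first_error;  /* status of the first error bucket, 0 if none */
} outerror_ctx;

/* Record only the first error bucket's status; later ones are ignored. */
static void note_error_bucket(outerror_ctx *ctx, int status)
{
    if (!ctx->first_error) {
        ctx->first_error = status;
    }
}

/* After an EOC was seen, insist on the first error's status even if
 * later data bucket generators (e.g. ap_die) changed it. */
static int effective_status(const outerror_ctx *ctx, int current_status)
{
    if (ctx->seen_eoc && ctx->first_error) {
        return ctx->first_error;
    }
    return current_status;
}
```

Because the override is re-applied on every invocation once `seen_eoc` is set, the first error's status wins even when filters further down the chain rewrite `r->status`.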
* Keep track of the number of keepalives we processed on this connection.
* Report a broken backend in case reading the response line failed on the
  first request on this connection; otherwise we assume we have just run
  into a keepalive race and the backend is still healthy.
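The choice above reduces to a small pure function. This is an illustrative standalone model; the constants are redefined here for self-containment (httpd defines them in httpd.h), and the function name is invented for the sketch:

```c
#include <assert.h>

/* Illustrative stand-ins for httpd's status constants. */
#define OK 0
#define HTTP_INTERNAL_SERVER_ERROR 500

/* Model of the return-code choice when the backend closed the
 * connection before sending a response line. `keepalives` counts
 * requests already answered on this backend connection. */
static int backend_close_result(int keepalives)
{
    if (keepalives > 0) {
        /* Earlier requests succeeded: assume a keepalive race and
         * treat the backend as healthy. */
        return OK;
    }
    /* Failure on the very first request: the backend looks broken,
     * so signal an unrecoverable server error. */
    return HTTP_INTERNAL_SERVER_ERROR;
}
```

In the real patch the `keepalives > 0` branch still marks the backend connection for closing; only the error signaling differs between the two cases.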
* Add Changelog for r1899451, r1899454, r1899562, r1899564, r1899584
Submitted by: rpluem
Reviewed by: jim
Github: closes #314
git-svn-id: https://svn.apache.org/repos/asf/httpd/httpd/branches/2.4.x@1900755 13f79535-47bb-0310-9956-ffa450edef68
-*- coding: utf-8 -*-
Changes with Apache 2.4.54
+ *) Implement full auto status ("key: value" type status output).
+ Especially not only status summary counts for certificates and
+ OCSP stapling but also lists. Auto status format is similar to
+ what was used for mod_proxy_balancer.
+ [Rainer Jung]
+
+ *) mod_md: fixed a bug leading to failed transfers for OCSP
+ stapling information when more than 6 certificates needed
+ updates in the same run. [Stefan Eissing]
+
+ *) mod_proxy: Set a status code of 502 in case the backend just closed the
+ connection in reply to our forwarded request. [Ruediger Pluem]
+
+ *) mod_md: a possible NULL pointer deref was fixed in
+ the JSON code for persisting time periods (start+end).
+ Fixes #282 on mod_md's github.
+ Thanks to @marcstern for finding this.
+
+ *) mod_heartmonitor: Set the documented default value
+ "10" for HeartbeatMaxServers instead of "0". With "0"
+ no shared memory slotmem was initialized. [Rainer Jung]
+
+ *) mod_md: added support for managing certificates via a
+     local tailscale daemon for users of that secure networking.
+ This gives trusted certificates for tailscale assigned
+ domain names in the *.ts.net space.
+ [Stefan Eissing]
+
Changes with Apache 2.4.53
*) SECURITY: CVE-2022-23943: mod_sed: Read/write beyond bounds
2.4.x patch: svn merge -c 1898453 ^/httpd/httpd/trunk .
+1: jailletc36, ylavic, rpluem
- *) mod_proxy: Set a status code of 502 in case the backend just closed the
- connection in reply to our forwarded request.
- Trunk version of patch:
- https://svn.apache.org/r1899451
- https://svn.apache.org/r1899454
- https://svn.apache.org/r1899562
- https://svn.apache.org/r1899564
- https://svn.apache.org/r1899584
- https://svn.apache.org/r1899886
- Backport version for 2.4.x of patch:
- https://patch-diff.githubusercontent.com/raw/apache/httpd/pull/314.diff
- Can be applied via apply_backport_pr.sh 314
- +1: rpluem, ylavic, jim
-
PATCHES PROPOSED TO BACKPORT FROM TRUNK:
[ New proposals should be added at the end of the list ]
+++ /dev/null
- *) Implement full auto status ("key: value" type status output).
- Especially not only status summary counts for certificates and
- OCSP stapling but also lists. Auto status format is similar to
- what was used for mod_proxy_balancer.
- [Rainer Jung]
+++ /dev/null
- *) mod_md: fixed a bug leading to failed transfers for OCSP
- stapling information when more than 6 certificates needed
- updates in the same run. [Stefan Eissing]
+++ /dev/null
- *) mod_md: added support for managing certificates via a
-     local tailscale daemon for users of that secure networking.
- This gives trusted certificates for tailscale assigned
- domain names in the *.ts.net space.
- [Stefan Eissing]
\ No newline at end of file
+++ /dev/null
- *) mod_md: a possible NULL pointer deref was fixed in
- the JSON code for persisting time periods (start+end).
- Fixes #282 on mod_md's github.
- Thanks to @marcstern for finding this.
+++ /dev/null
- *) mod_heartmonitor: Set the documented default value
- "10" for HeartbeatMaxServers instead of "0". With "0"
- no shared memory slotmem was initialized. [Rainer Jung]
/* Context struct for ap_http_outerror_filter */
typedef struct {
int seen_eoc;
+ int first_error;
} outerror_filter_ctx_t;
/* Filter to handle any error buckets on output */
/* stream aborted and we have not ended it yet */
r->connection->keepalive = AP_CONN_CLOSE;
}
+ /*
+ * Memorize the status code of the first error bucket for possible
+ * later use.
+ */
+ if (!ctx->first_error) {
+ ctx->first_error = ((ap_bucket_error *)(e->data))->status;
+ }
continue;
}
/* Detect EOC buckets and memorize this in the context. */
if (AP_BUCKET_IS_EOC(e)) {
+ r->connection->keepalive = AP_CONN_CLOSE;
ctx->seen_eoc = 1;
}
}
* EOS bucket.
*/
if (ctx->seen_eoc) {
+ /*
+ * Set the request status to the status of the first error bucket.
+ * This should ensure that we log an appropriate status code in
+ * the access log.
+ * We need to set r->status on each call after we noticed an EOC as
+ * data bucket generators like ap_die might have changed the status
+ * code. But we know better in this case and insist on the status
+ * code that we have seen in the error bucket.
+ */
+ if (ctx->first_error) {
+ r->status = ctx->first_error;
+ }
for (e = APR_BRIGADE_FIRST(b);
e != APR_BRIGADE_SENTINEL(b);
e = APR_BUCKET_NEXT(e))
ap_pass_brigade(r->output_filters, bb);
/* Mark the backend connection for closing */
backend->close = 1;
- /* Need to return OK to avoid sending an error message */
- return OK;
+ if (origin->keepalives) {
+ /* We already had a request on this backend connection and
+ * might just have run into a keepalive race. Hence we
+ * think positive and assume that the backend is fine and
+ * we do not need to signal an error on backend side.
+ */
+ return OK;
+ }
+ /*
+ * This happened on our first request on this connection to the
+ * backend. This indicates something fishy with the backend.
+ * Return HTTP_INTERNAL_SERVER_ERROR to signal an unrecoverable
+ * server error. We do not worry about r->status code and a
+ * possible error response here as the ap_http_outerror_filter
+ * will fix all of this for us.
+ */
+ return HTTP_INTERNAL_SERVER_ERROR;
}
if (!c->keepalives) {
ap_log_rerror(APLOG_MARK, APLOG_DEBUG, 0, r, APLOGNO(01105)
backend->close = 1;
origin->keepalive = AP_CONN_CLOSE;
}
+ else {
+ /*
+ * Keep track of the number of keepalives we processed on this
+ * connection.
+ */
+ origin->keepalives++;
+ }
+
} else {
/* an http/0.9 response */
backasswards = 1;