From: Michał Kępień
Date: Tue, 6 Mar 2018 10:33:16 +0000 (+0100)
Subject: Fix a race between "rndc reconfig" and waiting for a ./DNSKEY fetch to complete
X-Git-Tag: v9.13.0~108^2~1
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=012ca0a27dc2058090d97027076cf09b1bb2474a;p=thirdparty%2Fbind9.git

Fix a race between "rndc reconfig" and waiting for a ./DNSKEY fetch to
complete

Calling nextpart() after reconfiguring ns1 is not safe, because the
expected log message may appear in ns5/named.run before nextpart() is
run.  With the TTL for ./DNSKEY set to 20 seconds, ns5 will refresh it
after 10 seconds, by which time wait_for_log() will already have
failed.  This results in a false negative.

However, just calling nextpart() before reconfiguring ns1 would
introduce a different problem: if ns5 refreshed ./DNSKEY between these
two steps, the subsequent wait_for_log() call would return immediately
as it would come across the log message about a failure while
refreshing ./DNSKEY instead of the expected success.  This in turn
would result in a different false negative as the root key would still
be uninitialized by the time "rndc secroots" is called.

Prevent both kinds of false negatives by:

  - calling nextpart() before reconfiguring ns1, in order to prevent
    the first case described above,

  - looking for a more specific log message, in order to prevent the
    second case described above.

Also look for a more specific log message in the first part of the
relevant check, not to fix any problem, but just to emphasize that a
different fetch result is expected in that case.

With these tweaks in place, if a (failed) ./DNSKEY refresh is scheduled
between nextpart() and reconfiguring ns1, wait_for_log() will just wait
for two more seconds (one "hour"), at which point another refresh
attempt will be made that will succeed.
---

diff --git a/bin/tests/system/mkeys/tests.sh b/bin/tests/system/mkeys/tests.sh
index 2f7327ecd6f..ae07d76bc2d 100644
--- a/bin/tests/system/mkeys/tests.sh
+++ b/bin/tests/system/mkeys/tests.sh
@@ -739,7 +739,7 @@ rm -f ns5/managed-keys.bind*
 cp ns5/named2.args ns5/named.args
 nextpart ns5/named.run > /dev/null
 $PERL $SYSTEMTESTTOP/start.pl --noclean --restart --port ${PORT} . ns5
-wait_for_log "Returned from key fetch in keyfetch_done()" ns5/named.run
+wait_for_log "Returned from key fetch in keyfetch_done() for '.': failure" ns5/named.run
 mkeys_secroots_on 5
 grep '; initializing managed' ns5/named.secroots > /dev/null 2>&1 || ret=1
 # ns1 should still REFUSE queries from ns5, so resolving should be impossible
@@ -750,9 +750,9 @@ grep "status: SERVFAIL" dig.out.ns5.a.test$n > /dev/null || ret=1
 # Allow queries from ns5 to ns1
 copy_setports ns1/named3.conf.in ns1/named.conf
 rm -f ns1/root.db.signed.jnl
-mkeys_reconfig_on 1
 nextpart ns5/named.run > /dev/null
-wait_for_log "Returned from key fetch in keyfetch_done()" ns5/named.run
+mkeys_reconfig_on 1
+wait_for_log "Returned from key fetch in keyfetch_done() for '.': success" ns5/named.run
 mkeys_secroots_on 5
 grep '; managed' ns5/named.secroots > /dev/null 2>&1 || ret=1
 # ns1 should not longer REFUSE queries from ns5, so managed keys should be