Ondřej Surý [Fri, 15 May 2026 07:51:18 +0000 (09:51 +0200)]
[9.18] fix: test: Fix flaky reclimit test
The max-types-per-name cache eviction tests were flaky because two test steps were missing a sleep between queries, causing TTL-based cache verification to fail when both queries completed within the same second.
Backport of MR !11782
Merge branch 'backport-ondrej/fix-flaky-reclimit-9.18' into 'bind-9.18'
The cache verification in steps 11 and 15 checks that the TTL has
decreased from its initial value to confirm the response was served
from cache, but the sleep between the two queries was missing. Both
queries could complete within the same second, leaving the TTL
unchanged and causing the test to incorrectly conclude the entry was
not cached.
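The failure mode can be sketched numerically: DNS TTLs have one-second granularity, so a cache check that expects the TTL to have decreased needs at least one full second between the two queries. (`ttl_after` below is an illustrative model, not the test's actual dig invocation.)

```python
def ttl_after(initial_ttl, inserted_at, queried_at):
    """TTL reported for a cached entry; DNS TTLs tick in whole seconds."""
    return initial_ttl - int(queried_at - inserted_at)

# Both queries complete within the same second: the TTL looks unchanged
# and the "served from cache" check (ttl < initial) spuriously fails.
assert ttl_after(300, 1000.0, 1000.4) == 300

# With a sleep of at least one second, the decrease becomes observable.
assert ttl_after(300, 1000.0, 1001.2) == 299
```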
Ondřej Surý [Fri, 15 May 2026 07:50:52 +0000 (09:50 +0200)]
[9.18] chg: usr: Fall back to TCP on a UDP response with a mismatched query id
BIND used to wait silently for the correct DNS message id on a UDP fetch
even after receiving a response from the expected server with the wrong
id, leaving room for off-path spoofing attempts to keep guessing within
that window. The resolver now retries the fetch over TCP on the first
such response, and a new MismatchTCP statistics counter tracks how
often the fallback fires.
Closes #5449
Backport of MR !12023
Merge branch 'backport-5449-immediate-tcp-fallback-on-id-mismatch-9.18' into 'bind-9.18'
Ondřej Surý [Thu, 14 May 2026 10:20:19 +0000 (12:20 +0200)]
Switch UDP fetches to TCP on the first response with a wrong query id
Until now, the dispatcher silently dropped UDP responses from the
expected peer that carried the wrong DNS message id and kept listening
for the correct id to arrive within the read timeout. An off-path
attacker who knows the destination address and source port of an
outgoing fetch could exploit that quiet retry window to flood the
resolver with guessed responses; with a gigabit link the per-query
success probability grows linearly with the number of guesses that
arrive before the legitimate answer or the timeout.
Treat any such mismatch as a possible spoofing attempt and let the
resolver immediately retry the same query over TCP, the same control
path the truncation handler already uses.
Add a resolver statistics counter, exposed as 'queries retried over TCP
after a response with mismatched query id' in rndc stats and as
'MismatchTCP' in the statistics channel.
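The dispatch decision described above can be sketched as follows (hypothetical helper and return values, not BIND's internal API):

```python
def handle_udp_response(expected_qid, response_qid):
    """Sketch of the new policy: a UDP response from the expected peer with
    the wrong DNS message id is treated as a possible spoofing attempt, and
    the fetch is immediately retried over TCP instead of silently waiting
    for the correct id until the read timeout."""
    if response_qid == expected_qid:
        return "accept"
    return "retry-over-tcp"  # also bumps the MismatchTCP counter

assert handle_udp_response(0x1A2B, 0x1A2B) == "accept"
assert handle_udp_response(0x1A2B, 0x9999) == "retry-over-tcp"
```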
The global RUNNER_SCRIPT_TIMEOUT: 55m in the parent pipeline was being
forwarded to the stress and tsan:stress child pipelines, where forwarded
yaml variables outrank job-level variables. That caused stress jobs with
BIND_STRESS_TESTS_RUN_TIME >= 60 to be killed at 55 minutes, regardless
of the per-job RUNNER_SCRIPT_TIMEOUT set in the generated child config.
Set forward:yaml_variables: false on both trigger jobs; the generated
configs already declare every variable they need.
Assisted-by: Claude:claude-opus-4-7
Backport of MR !12012
Merge branch 'backport-mnowak/fix-stress-test-script-timeout-9.18' into 'bind-9.18'
Michal Nowak [Wed, 13 May 2026 09:44:26 +0000 (11:44 +0200)]
Selectively inherit yaml vars in stress trigger jobs
The parent's global RUNNER_SCRIPT_TIMEOUT: 55m was reaching the stress
and tsan:stress child pipelines via inherited yaml variables, where
inherited values outrank the child's job-level variables. That caused
stress jobs with BIND_STRESS_TESTS_RUN_TIME >= 60 to be killed at 55
minutes, regardless of the per-job RUNNER_SCRIPT_TIMEOUT set in the
generated child config.
Use inherit:variables with a positive list on both trigger jobs:
inherit only CI_REGISTRY_IMAGE so the parent's registry override
(needed for image pulls in the child) flows through, while keeping
RUNNER_SCRIPT_TIMEOUT (and other globals) out of the child pipeline's
variable scope. The per-job RUNNER_SCRIPT_TIMEOUT values set by the
generated child config now take effect.
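The shape of the fix, sketched as a GitLab CI fragment (job and artifact names are illustrative, not the actual pipeline's):

```yaml
# Only CI_REGISTRY_IMAGE crosses into the child pipeline; the parent's
# global RUNNER_SCRIPT_TIMEOUT stays out of the child's variable scope,
# so the per-job timeouts in the generated config take effect.
stress:
  inherit:
    variables:
      - CI_REGISTRY_IMAGE  # registry override needed for image pulls in the child
  trigger:
    include:
      - artifact: stress-pipeline.yml
        job: generate-stress-config
    strategy: depend
```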
Michal Nowak [Wed, 25 Mar 2026 12:31:49 +0000 (13:31 +0100)]
Set RUNNER_SCRIPT_TIMEOUTs
Sometimes jobs can get stuck and be terminated by GitLab, leaving us
without artefacts that could contain useful information about why the
job got stuck.
Mark Andrews [Wed, 17 Aug 2022 01:13:41 +0000 (11:13 +1000)]
tsiggss: regenerate kerberos credentials
The existing set of kerberos credentials used deprecated algorithms
which are not supported by some implementations in FIPS mode.
Regenerate the saved credentials using more modern algorithms.
Added tsiggss/krb/setup.sh which sets up a test KDC with the required
principals for the system test to work. The tsiggss system test
needs to be run once with this active and KRB5_CONFIG appropriately
set. See tsiggss/tests.sh for an example of how to do this.
Michał Kępień [Mon, 11 May 2026 15:46:55 +0000 (17:46 +0200)]
[9.18] chg: ci: Add commit link and diff to RPM build job logs
The output of update_rpms.py is terse, making it difficult to verify its
actions. Add a commit link and "git show" output to the log of every CI
job running the update_rpms.py script in "build" mode to facilitate
double-checking its actions.
Backport of MR !11828
Merge branch 'backport-michal/add-commit-link-and-diff-to-rpm-build-job-logs-9.18' into 'bind-9.18'
Michał Kępień [Mon, 11 May 2026 15:41:50 +0000 (17:41 +0200)]
Add commit link and diff to RPM build job logs
The output of update_rpms.py is terse, making it difficult to verify its
actions. Add a commit link and "git show" output to the log of every CI
job running the update_rpms.py script in "build" mode to facilitate
double-checking its actions.
Michał Kępień [Mon, 11 May 2026 14:27:39 +0000 (16:27 +0200)]
[9.18] fix: ci: Increase GIT_DEPTH for the "assign-milestones" job
Cloning tags with the default GIT_DEPTH of 1 prevents the milestone
assignment script from identifying any merge requests that are included
in a given release. Fix by increasing GIT_DEPTH to an arbitrary value
that is high enough for practical purposes.
The GIT_DEPTH CI variable defaults to 1 for all jobs through the
top-level "variables" key. Explicitly setting it to 1 in job
definitions is unnecessary and may cause confusion. Remove these
redundant assignments.
Backport of MR !11996
Merge branch 'backport-michal/fix-assign-milestones-job-9.18' into 'bind-9.18'
Michał Kępień [Mon, 11 May 2026 14:07:47 +0000 (16:07 +0200)]
Remove redundant "GIT_DEPTH: 1" assignments
The GIT_DEPTH CI variable defaults to 1 for all jobs through the
top-level "variables" key. Explicitly setting it to 1 in job
definitions is unnecessary and may cause confusion. Remove these
redundant assignments.
Michał Kępień [Mon, 11 May 2026 14:07:47 +0000 (16:07 +0200)]
Increase GIT_DEPTH for the "assign-milestones" job
Cloning tags with the default GIT_DEPTH of 1 prevents the milestone
assignment script from identifying any merge requests that are included
in a given release. Fix by increasing GIT_DEPTH to an arbitrary value
that is high enough for practical purposes.
Michał Kępień [Mon, 11 May 2026 08:14:13 +0000 (10:14 +0200)]
[9.18] fix: ci: Fix triggering rules for the "publish-cleanup" job
The "publish-cleanup" tag pipeline job is currently created for all
security releases, including BIND -S releases, but it depends on the
"publish" job, which is only created for open source releases. This
breaks CI configuration for BIND -S tags, preventing pipelines from
getting created for such tags altogether. Fix by only creating the
"publish-cleanup" job in tag pipelines for open source security
releases.
Backport of MR !11992
Merge branch 'backport-michal/fix-triggering-rules-for-the-publish-cleanup-job-9.18' into 'bind-9.18'
Michał Kępień [Mon, 11 May 2026 08:07:38 +0000 (10:07 +0200)]
Fix triggering rules for the "publish-cleanup" job
The "publish-cleanup" tag pipeline job is currently created for all
security releases, including BIND -S releases, but it depends on the
"publish" job, which is only created for open source releases. This
breaks CI configuration for BIND -S tags, preventing pipelines from
getting created for such tags altogether. Fix by only creating the
"publish-cleanup" job in tag pipelines for open source security
releases.
Michał Kępień [Thu, 7 May 2026 16:08:34 +0000 (18:08 +0200)]
[9.18] chg: ci: Mark merged security fixes as "Not released yet"
Adjust the triggering rules for the "merged-metadata" CI job so that
merge requests merged into security-* branches are automatically
assigned to the "Not released yet" milestone, just like merge requests
targeting public branches. This enables merge requests containing
security fixes to be correctly processed by release automation scripts.
Backport of MR !11984
Merge branch 'backport-pspacek/extend-not-released-yet-milestone-9.18' into 'bind-9.18'
Petr Špaček [Tue, 5 May 2026 13:04:36 +0000 (15:04 +0200)]
Mark merged security fixes as "Not released yet"
Adjust the triggering rules for the "merged-metadata" CI job so that
merge requests merged into security-* branches are automatically
assigned to the "Not released yet" milestone, just like merge requests
targeting public branches. This enables merge requests containing
security fixes to be correctly processed by release automation scripts.
Michał Kępień [Thu, 7 May 2026 15:55:32 +0000 (17:55 +0200)]
[9.18] chg: ci: Enable automatic backports for security fixes
Ensure the "backports" CI job is created when new changes are merged
into security-* branches. This enables using backport automation for
security fixes.
Backport of MR !11938
Merge branch 'backport-michal/extend-automatic-backports-9.18' into 'bind-9.18'
Michał Kępień [Thu, 7 May 2026 15:45:35 +0000 (17:45 +0200)]
Enable automatic backports for security fixes
Ensure the "backports" CI job is created when new changes are merged
into security-* branches. This enables using backport automation for
security fixes.
Aram Sargsyan [Wed, 6 May 2026 21:10:08 +0000 (21:10 +0000)]
[9.18] fix: usr: Fix a bug in allow-query/allow-transfer catalog zone custom properties
The :iscman:`named` process could terminate unexpectedly when
processing a catalog zone with an invalid ``allow-query`` or
``allow-transfer`` custom property (i.e. having a non-APL type)
coexisting with the valid property. This has been fixed.
Closes #5941
Backport of MR !11954
Merge branch 'backport-5941-catz-catz_process_apl-bug-fix-9.18' into 'bind-9.18'
Aram Sargsyan [Mon, 4 May 2026 22:34:01 +0000 (22:34 +0000)]
Fix a bug in catz_process_apl()
The allow-transfer/allow-query catalog zone custom properties support
only APL RRtypes. All other types are correctly rejected by the
catz_process_apl() function. However, after an APL RRtype has been
processed by that function, attempting to process another (non-APL)
RRtype triggers an assertion failure in the function's prologue,
because `*aclbp != NULL` (i.e. an APL has already been processed).
Move the type checking before the affected REQUIRE assertion.
Aram Sargsyan [Wed, 6 May 2026 19:24:21 +0000 (19:24 +0000)]
[9.18] fix: usr: Fix a memory leak issue in the catalog zones
The :iscman:`named` process could leak small amounts of memory
when processing a catalog zone entry which had defined custom
primary servers with TSIG keys using both the regular ``primaries``
custom property syntax and the legacy alternative syntax (``masters``)
at the same time. This has been fixed.
Closes #5943
Backport of MR !11951
Merge branch 'backport-5943-catz-primaries-tsig-key-name-leak-fix-9.18' into 'bind-9.18'
Ondřej Surý [Wed, 6 May 2026 06:27:16 +0000 (08:27 +0200)]
[9.18] fix: usr: Prevent a crash when using both dns64 and filter-aaaa
An assertion failure could be triggered if both `dns64` and the `filter-aaaa` plugin were in use simultaneously. This happened if the plugin triggered a second recursion process, which then attempted to store DNS64 state information in a pointer that had already been set by the original recursion process. This has been fixed.
Closes #5854
Backport of MR !11949
Merge branch 'backport-5854-dns64-aaaaok-9.18' into 'bind-9.18'
Evan Hunt [Mon, 4 May 2026 05:00:39 +0000 (22:00 -0700)]
Clear dns64_aaaaok immediately after use
The DNS64 state information stored in client->query.dns64_aaaaok
could cause an assertion failure in query_respond() if the server
was configured in such a way as to trigger a new recursion before
the query had been reset - for example, by using the filter-aaaa
plugin, which may need to recurse to find out whether an A record
exists.
This has been addressed by clearing DNS64 state information
immediately after the call to query_filter64().
Ondřej Surý [Tue, 5 May 2026 18:22:33 +0000 (20:22 +0200)]
[9.18] fix: usr: Reject record sets too large to serve in DNS
When BIND was asked to store a record set whose total size exceeds
what fits in a DNS message, it would allocate memory and build the
structure, then fail later at response time. Such oversized record
sets are now rejected at the time of storage with an error, avoiding
wasted work on data that can never be served.
Backport of MR !11963
Merge branch 'backport-ondrej/harden-buflen-overflow-9.18' into 'bind-9.18'
dns_rdataslab_fromrdataset(), dns_rdataslab_merge() and
dns_rdataslab_subtract() summed per-record storage into an
unsigned int with no upper-bound check. An RRset whose total
encoded size exceeds DNS_RDATA_MAXLENGTH cannot fit in a DNS
message and is unservable; building its in-memory representation
only burns memory on data that will fail at response time, and at
the upper bound the running sum could in theory wrap.
Cap the running total at DNS_RDATA_MAXLENGTH and return ISC_R_NOSPACE
when exceeded. Update the rbtdb cache memory-purge test to use a
record size that fits within the new limit.
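A simplified model of the new bound, assuming DNS_RDATA_MAXLENGTH is 65535 (the largest length a DNS message can carry); the accounting here is illustrative, not BIND's actual slab layout:

```python
DNS_RDATA_MAXLENGTH = 65535  # largest size representable in a DNS message

def slab_size(record_sizes):
    """Sum per-record storage with an upper-bound check, in the spirit of
    the new cap in dns_rdataslab_fromrdataset() (sketch only)."""
    total = 0
    for size in record_sizes:
        total += size
        if total > DNS_RDATA_MAXLENGTH:
            return None  # ISC_R_NOSPACE: unservable, reject at storage time
    return total

assert slab_size([1000] * 10) == 10000
assert slab_size([60000, 10000]) is None  # could never fit in a DNS message
```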
Ondřej Surý [Tue, 5 May 2026 07:09:35 +0000 (09:09 +0200)]
[9.18] fix: dev: Tidy up the cleanup path in check_signer()
When check_signer() processed a DNSKEY whose public-key data could not
be parsed, the early return on the parse error skipped the cleanup of
the cloned signature rdataset. In every code path that currently
reaches this function the cloned rdataset holds no resources, so no
memory was actually leaked, but the cleanup is restructured so the
parse and the iteration cannot diverge again.
Closes #5869
Backport of MR !11957
Merge branch 'backport-5869-fix-memory-leak-in-check_signer-9.18' into 'bind-9.18'
The cloned signature rdataset was not disassociated on the early
return taken when dns_dnssec_keyfromrdata() fails to parse the DNSKEY
public-key data. In every current caller val->sigrdataset reaches
check_signer() rdatalist-backed, so dns_rdataset_clone() copies the
struct without taking any reference and dns_rdataset_disassociate()
is a no-op -- no memory is actually leaked today. Hoist the key
parse out of the per-RRSIG loop and let the function fall through
to a single cleanup path, so the parse and the iteration cannot
diverge again.
Ondřej Surý [Tue, 5 May 2026 05:07:40 +0000 (07:07 +0200)]
[9.18] fix: usr: Prevent crafted queries from degrading RRL performance
With response rate limiting enabled, an attacker sending queries from many
spoofed source addresses could steer entries into the same slot of the
internal rate-limit table and slow down query processing on the affected
server. The table now uses a per-process keyed hash so the placement of
entries cannot be predicted or influenced from the network.
Closes #5906
Backport of MR !11950
Merge branch 'backport-5906-rrl-hash-collision-dos-9.18' into 'bind-9.18'
The previous hash_key() was a deterministic, unkeyed (<<1) + add over the
key words. An off-path attacker could invert it offline and submit
queries whose source /24, qname hash, and qtype map to a single bucket;
under chaining this turns every lookup into an O(N) walk under
rrl->lock and starves legitimate query processing on the very feature
deployed to mitigate DoS.
Replace it with isc_hash32(), which is HalfSipHash-2-4 keyed by a
per-process random seed, so collision sets cannot be precomputed.
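The difference between the two schemes can be sketched as follows; the keyed function below is a plain multiplicative mix seeded per process, purely to illustrate keying, not HalfSipHash-2-4 or isc_hash32():

```python
import secrets

def unkeyed_hash(words):
    """The old deterministic mix, (h << 1) + word over the key words.
    Anyone can run it offline and craft inputs that share a bucket."""
    h = 0
    for w in words:
        h = ((h << 1) + w) & 0xFFFFFFFF
    return h

# Per-process random seed, in the spirit of the keyed replacement.
SEED = secrets.randbits(32)

def keyed_hash(words):
    h = SEED
    for w in words:
        h = (h * 0x9E3779B1 + w) & 0xFFFFFFFF
    return h

# Two distinct inputs that always collide under the unkeyed mix ...
assert unkeyed_hash([2, 0]) == unkeyed_hash([1, 2]) == 4
# ... but are separated once placement depends on a secret seed.
assert keyed_hash([2, 0]) != keyed_hash([1, 2])
```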
Michał Kępień [Thu, 30 Apr 2026 20:42:43 +0000 (22:42 +0200)]
[9.18] fix: ci: Use "git push --force-with-lease" for autorebases
If a merge request is merged to an autorebased branch while it is
getting rebased, the "git push -f" command at the end of the autorebase
job will cause the contents of that merge request to be silently deleted
from Git history even though the merge request will still be (correctly)
shown as "merged" by GitLab.
Use "git push --force-with-lease" instead to prevent force-pushing the
rebased version of the branch if it is pushed to after its pre-rebase
version is fetched by the autorebase job. Report such an event
accordingly. For simplicity, no retries are attempted as the problem is
expected to be resolved by the next autorebase and the chances of this
scenario happening in practice are already low to begin with.
Backport of MR !11939
Merge branch 'backport-michal/use-git-push-force-with-lease-for-autorebases-9.18' into 'bind-9.18'
Michał Kępień [Thu, 30 Apr 2026 20:19:59 +0000 (22:19 +0200)]
Use "git push --force-with-lease" for autorebases
If a merge request is merged to an autorebased branch while it is
getting rebased, the "git push -f" command at the end of the autorebase
job will cause the contents of that merge request to be silently deleted
from Git history even though the merge request will still be (correctly)
shown as "merged" by GitLab.
Use "git push --force-with-lease" instead to prevent force-pushing the
rebased version of the branch if it is pushed to after its pre-rebase
version is fetched by the autorebase job. Report such an event
accordingly. For simplicity, no retries are attempted as the problem is
expected to be resolved by the next autorebase and the chances of this
scenario happening in practice are already low to begin with.
Michał Kępień [Thu, 30 Apr 2026 11:28:06 +0000 (13:28 +0200)]
[9.18] new: ci: Set up automatic rebasing for security-* branches
Introduce a set of private branches containing only security fixes that
are automatically rebased onto the corresponding open source branches
whenever new changes are merged. Each rebase triggers a basic build,
failing the CI job if the build breaks.
When a security-* branch is rebased, create a CI pipeline for its new
revision and rebase its corresponding bind-9.x-sub branch (if it exists)
on top of it, creating a rebase chain.
Report any failures in the process via Mattermost.
These changes enable treating security fixes similarly to other code
changes, without deferring merges all the way until release prep.
Backport of MR !11930
Merge branch 'backport-michal/autorebase-chain-9.18' into 'bind-9.18'
Michał Kępień [Thu, 30 Apr 2026 09:58:55 +0000 (11:58 +0200)]
Set up automatic rebasing for security-* branches
Introduce a set of private branches containing only security fixes that
are automatically rebased onto the corresponding open source branches
whenever new changes are merged. Each rebase triggers a basic build,
failing the CI job if the build breaks.
When a security-* branch is rebased, create a CI pipeline for its new
revision and rebase its corresponding bind-9.x-sub branch (if it exists)
on top of it, creating a rebase chain.
Report any failures in the process via Mattermost.
These changes enable treating security fixes similarly to other code
changes, without deferring merges all the way until release prep.
[9.18] fix: usr: Prevent malicious DNSSEC zones from exhausting validator CPU
A DNSSEC-signed zone could publish a DNSKEY with an unusually large
RSA public exponent and force any validator resolving names in that
zone to spend disproportionate CPU verifying signatures. The
validator now rejects such DNSKEYs, matching the limit already
applied to keys read from files or HSMs.
Closes #5881
Backport of MR !11917
Merge branch 'backport-5881-rsa-exponent-keytrap-cpu-amplification-9.18' into 'bind-9.18'
Reject RSA DNSKEYs with oversize public exponents at parse time
The wire-format RSA DNSKEY parser was the only key path with no upper
bound on the public exponent — opensslrsa_parse and opensslrsa_fromlabel
already cap at RSA_MAX_PUBEXP_BITS. An attacker-controlled DNSKEY could
therefore force a validator to compute s^e mod n with e up to ~|n| bits,
amplifying every verify by ~120x for typical 2048-bit moduli (OpenSSL
itself only caps the exponent for moduli above 3072 bits). Apply the
same bit-count cap to wire-format keys.
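The cap amounts to a bit-length check on the exponent. A hedged sketch (the numeric value of RSA_MAX_PUBEXP_BITS below is an assumption for illustration, not taken from the source):

```python
RSA_MAX_PUBEXP_BITS = 35  # assumed value of the cap; illustrative only

def exponent_acceptable(e):
    """Reject RSA public exponents wider than the bit-count cap, as the
    wire-format DNSKEY parser now does (sketch)."""
    return e.bit_length() <= RSA_MAX_PUBEXP_BITS

assert exponent_acceptable(65537)             # F4, the common exponent
assert not exponent_acceptable(2**2047 + 1)   # ~|n|-bit exponent: rejected
```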
[9.18] fix: dev: Fix swapped arguments in redirect2() single-label branch
On a recursive resolver with nxdomain-redirect configured, an
NXDOMAIN result for a query whose qname is the root could corrupt
the view's nxdomain-redirect target, after which the redirect
feature stopped working for every subsequent query in that view
until named was restarted.
Closes #5908
Backport of MR !11908
Merge branch 'backport-5908-query-redirect2-name-copy-arg-swap-9.18' into 'bind-9.18'
Fix swapped arguments in redirect2() single-label branch
For a query whose qname is the root, the labels==1 branch in
redirect2() called dns_name_copy(redirectname, view->redirectzone)
with arguments reversed, overwriting the view-global
nxdomain-redirect target with the empty redirectname rather than
copying the configured target into the per-query lookup name. After
the corruption, view->redirectzone names the root, so
dns_name_issubdomain() makes redirect2() short-circuit for every
subsequent query and the nxdomain-redirect feature stops working
until named is restarted.
Triggering this needs the resolver to receive an NXDOMAIN for the
root from upstream, which does not happen in normal DNS operation.
Swap the arguments to match the dns_name_copy(source, dest)
signature. Add a system test that issues a root query through the
nxdomain-redirect resolver and verifies the redirect feature still
works for a normal NXDOMAIN-producing query afterwards.
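The argument swap can be modeled in a few lines (`name_copy` is a stand-in for dns_name_copy(source, dest), not the real C function):

```python
def name_copy(source, dest):
    """Stands in for dns_name_copy(source, dest): copy source into dest."""
    dest.clear()
    dest.extend(source)

# view->redirectzone holds the configured nxdomain-redirect target;
# redirectname is the per-query lookup name (empty for a root qname).
view_redirectzone = list("redirect.example")
redirectname = []

# Buggy labels==1 branch: arguments reversed, so the empty per-query name
# overwrites the view-global target and later redirects short-circuit.
name_copy(redirectname, view_redirectzone)
assert view_redirectzone == []

# Correct order: the configured target flows into the per-query name.
view_redirectzone = list("redirect.example")
name_copy(view_redirectzone, redirectname)
assert redirectname == list("redirect.example")
```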
`rndc-confgen -A hmac-sha384` and `-A hmac-sha512` documented a `-b`
range of 1..1024, but any value above 512 aborted on hardened builds
instead of producing a key. The full advertised range now works.
Closes #5903
Backport of MR !11903
Merge branch 'backport-5903-hmac-generate-stack-overflow-9.18' into 'bind-9.18'
Size HMAC key generation buffers to the maximum block size
hmac_generate() declared its on-stack nonce buffer as
unsigned char data[ISC_MAX_MD_SIZE], i.e. 64 bytes. That is the maximum
digest size, but the buffer is filled up to the algorithm's HMAC block
size, which is 128 bytes for SHA-384 and SHA-512. Asking rndc-confgen
for an HMAC-SHA-384 or HMAC-SHA-512 key with -b > 512 (the documented
range allows up to 1024) wrote past the end of the stack buffer; on
hardened builds this aborted with a stack-smash detector firing
instead of producing a key.
Use the existing ISC_MAX_BLOCK_SIZE (128) for the buffer so the full
1..1024 range advertised by -A hmac-sha{384,512} works as documented.
The matching key_rawsecret[64] in confgen's generate_key() is enlarged
the same way so the generated key fits when dumped to the buffer.
Add a system test that exercises rndc-confgen across the previously
overflowing keysizes; with -Db_sanitize=address it caught the abort
before the fix.
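Python's hashlib exposes the same digest-size/block-size distinction and confirms the numbers in the description (`nonce_buffer_ok` is an illustrative helper, not confgen code):

```python
import hashlib

# The nonce buffer is filled up to the HMAC *block* size, not the digest
# size. For SHA-384/SHA-512 those differ: 128-byte blocks vs 48/64-byte
# digests, so a digest-sized buffer overflows for keys longer than 512 bits.
assert hashlib.sha512().digest_size == 64   # old buffer bound (ISC_MAX_MD_SIZE)
assert hashlib.sha512().block_size == 128   # needed bound (ISC_MAX_BLOCK_SIZE)
assert hashlib.sha384().block_size == 128

def nonce_buffer_ok(keybits, bufsize):
    """Does a keybits-bit HMAC key fit the on-stack nonce buffer? (sketch)"""
    return keybits // 8 <= bufsize

assert not nonce_buffer_ok(1024, 64)  # -b 1024 overflowed the 64-byte buffer
assert nonce_buffer_ok(1024, 128)     # fits once sized to the block size
```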
[9.18] fix: usr: Fix suppressed missing-glue check in named-checkzone
named-checkzone and named-checkconf -z silently skipped the
missing-glue check for any NS name that had already triggered an
extra-AAAA-glue warning, so zones missing required A glue could pass
validation and be deployed with broken delegations.
Backport of MR !11899
Merge branch 'backport-ondrej/check-tool-err-glue-code-collision-9.18' into 'bind-9.18'
Resolve ERR_MISSING_GLUE / ERR_EXTRA_AAAA value collision
Both constants were defined as 5. The symbol table used by checkns() to
deduplicate log messages keys on (name, error_code), so logging an
extra-AAAA error caused logged() to also return true for the
missing-glue check, silently skipping the entire missing-glue block for
the same name in named-checkzone and named-checkconf -z.
Convert the ERR_* defines to an auto-numbered enum so the compiler
guarantees the values stay pairwise distinct.
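The dedup interaction can be modeled with Python's `enum` standing in for the auto-numbered C enum (names mirror the described ERR_* constants; the helper is illustrative):

```python
from enum import IntEnum, auto

class ZoneCheckError(IntEnum):
    """Auto-numbered stand-in for the ERR_* enum: values are guaranteed
    pairwise distinct, unlike the old #defines (both were 5)."""
    ERR_EXTRA_AAAA = auto()
    ERR_MISSING_GLUE = auto()

logged = set()

def log_once(name, error_code):
    """Mirrors checkns()'s dedup table keyed on (name, error_code)."""
    key = (name, error_code)
    if key in logged:
        return False
    logged.add(key)
    return True

# With distinct codes, logging an extra-AAAA error no longer suppresses
# the missing-glue check for the same NS name.
assert log_once("ns1.example", ZoneCheckError.ERR_EXTRA_AAAA)
assert log_once("ns1.example", ZoneCheckError.ERR_MISSING_GLUE)

# With both constants defined as 5 (the old bug), the second check was
# silently skipped for that name:
assert log_once("ns2.example", 5)      # old ERR_EXTRA_AAAA == 5
assert not log_once("ns2.example", 5)  # old ERR_MISSING_GLUE == 5: skipped
```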
[9.18] new: doc: Add AI coding assistants guidance to CONTRIBUTING.md
Adapted from the Linux kernel's Documentation/process/coding-assistants.rst
to the BIND 9 context. Adds three subsections under the existing
"Guidelines for Tool-Generated Content" section:
- Licensing and legal requirements (MPL-2.0, SPDX identifiers).
- Signed-off-by and Developer Certificate of Origin: AI agents must
not add Signed-off-by trailers; only the human submitter may
certify the DCO.
- Attribution: the Assisted-by: AGENT_NAME:MODEL_VERSION trailer
for recording AI involvement, with an explicit prohibition on
AI-added Co-Authored-By trailers (Co-Authored-By designates a
human co-author who shares responsibility).
Backport of MR !11888
Merge branch 'backport-ondrej/coding-assistants-doc-9.18' into 'bind-9.18'
Add AI coding assistants guidance to CONTRIBUTING.md
Adapted from the Linux kernel's Documentation/process/coding-assistants.rst
to the BIND 9 context. Adds three subsections under the existing
"Guidelines for Tool-Generated Content" section:
- Licensing and legal requirements (MPL-2.0, SPDX identifiers).
- Signed-off-by and Developer Certificate of Origin: AI agents must
not add Signed-off-by trailers; only the human submitter may
certify the DCO.
- Attribution: the Assisted-by: AGENT_NAME:MODEL_VERSION trailer
for recording AI involvement, with an explicit prohibition on
AI-added Co-Authored-By trailers (Co-Authored-By designates a
human co-author who shares responsibility).
[9.18] fix: usr: Fix named crash when processing SIG records in dynamic updates
Previously, :iscman:`named` could abort if a client sent a dynamic update containing a SIG record (the legacy signature type) to a zone configured with an update-policy. The `dns_db_findrdataset` function had an overly strict precondition that prevented SIG records from being looked up; it was triggered while processing an UPDATE request and could be hit remotely by any client permitted to send updates. This has been fixed by ensuring that SIG records are handled consistently with RRSIG records during update processing.
Closes #5818
Backport of MR !11864
Merge branch 'backport-5818-fix-update-of-sig-9.18' into 'bind-9.18'
Make sure the nameserver correctly handles SIG records in the
prerequisites of the dynamic update. The first check is to ensure that
the prerequisites are not examined prior to checking the credentials.
The second test case checks that the SIG present prerequisite is
examined and therefore refuses the update. Also, this should not trigger
an assertion failure in dns__db_findrdataset() (whose REQUIRE() only
accepted dns_rdatatype_rrsig when the covers parameter was set).
Add AXFR regression test for SIG covers preservation
diff.c rdata_covers() runs on both dns_diff_apply (IXFR, ns/update.c
dynamic updates) and dns_diff_load (AXFR). After the previous commit
refused SIG and NXT in dynamic updates, the AXFR path remains the
most natural way to drive legacy SIG records into a secondary's zone
DB and regression-gate the rdata_covers() fix.
The test adds ans11 as an AsyncDnsServer primary for a small zone
whose AXFR carries two SIG rdatas at the same owner with different
covered types (A, MX) and different TTLs (600, 1200), and declares
ns6 a secondary of that zone. With the bug present, dns_diff_load
groups both tuples at typepair (SIG, 0) and the MX-covering record
inherits the first-seen TTL (600); the fix keeps them at (SIG, A)
and (SIG, MX) with their original TTLs.
rndc dumpdb -zones on the secondary is used to inspect stored state
directly, because the wire-level SIG query response merges
same-(owner,type,class) RRs and masks the per-rdataset TTLs.
SIG (24) and NXT (30) are obsolete DNSSEC record types, superseded by
RRSIG and NSEC in RFC 3755. Allowing them through dynamic update
exposes two distinct bugs that the surrounding GL#5818 work already
fixes as defense-in-depth:
- dns__db_findrdataset() used to REQUIRE that (covers == 0 ||
type == RRSIG), which aborts named when a SIG update reaches the
prescan foreach_rr() call. Fixed to accept dns_rdatatype_issig().
- diff.c rdata_covers() used to test only RRSIG, dropping the
covered-type field for SIG rdatas; the zone DB then filed every
SIG rdataset under typepair (SIG, 0) instead of
(SIG, covered_type) and follow-up adds collided at that bucket.
Fixed to use dns_rdatatype_issig().
Both underlying bugs are still reachable via inbound zone transfer
(diff.c rdata_covers() runs from both dns_diff_apply on the IXFR path
and dns_diff_load on the AXFR path), so the type-helper fixes above
remain necessary. For the dynamic-update path, the simplest and
safest posture is to refuse SIG and NXT outright at the front door in
ns/update.c, alongside the existing NSEC/NSEC3/non-apex-RRSIG
refusals. KEY remains permitted because it is still used to carry
public keys for SIG(0) transaction authentication.
The existing tcp-self SIG regression test is repointed to assert
REFUSED on the SIG add, a symmetric NXT test is added, and the
SIG-via-dyn-update covers-bucket test is removed because it is no
longer reachable through this entry point; AXFR-based coverage of
diff.c rdata_covers() follows in a separate commit.
Add regression test for SIG covers being dropped in dns_diff_apply
rdata_covers() in lib/dns/diff.c tests `type == dns_rdatatype_rrsig`
instead of dns_rdatatype_issig(), so for a legacy SIG (24) rdata it
returns 0 and the covered type is discarded on the dynamic-update /
IXFR path. The zone DB then files every SIG rdataset under typepair
(SIG, 0) instead of (SIG, covered_type), and a follow-up add with a
different covers field and a different TTL collides at that bucket,
trips DNS_DBADD_EXACTTTL in qpzone, returns DNS_R_NOTEXACT, and comes
back to the client as SERVFAIL.
The new test adds a PTR to establish the node (tcp-self requires the
client IP's reverse form to equal the owner), then two SIG updates
with different covers and different TTLs; on a buggy build the second
update is SERVFAIL and named logs `dns_diff_apply: .../SIG/IN: add
not exact`. The test is expected to pass once rdata_covers() is
switched to dns_rdatatype_issig(), matching the fix already adopted
for dns__db_findrdataset() on this branch and the helper pattern used
in master.c, xfrout.c, and qpcache.c.
Fix dropped covers field for SIG records in dns_diff_apply
rdata_covers() in lib/dns/diff.c discriminated only on
dns_rdatatype_rrsig (46) and returned 0 for the legacy SIG (24), so
the covered-type field was silently discarded on the dynamic-update
and IXFR paths. Every SIG rdataset was then filed in the zone DB
under typepair (SIG, 0) instead of (SIG, covered_type); a second SIG
add with a different covers field and a different TTL collided at that
bucket, tripped DNS_DBADD_EXACTTTL in qpzone, returned
DNS_R_NOTEXACT, and came back to the client as SERVFAIL.
Use dns_rdatatype_issig() here so both SIG and RRSIG carry their
covers through the diff, matching the helper pattern already used in
lib/dns/master.c, lib/ns/xfrout.c, lib/dns/qpcache.c, and the
dns__db_findrdataset() REQUIRE that the surrounding merge request
just relaxed.
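A minimal model of the bucketing, under the assumption that the zone DB keys each rdataset on a (type, covers) pair (numeric type codes from the DNS RFCs; `typepair` is illustrative, not BIND's representation):

```python
SIG, RRSIG, A, MX = 24, 46, 1, 15

def issig(rdtype):
    """Mirrors dns_rdatatype_issig(): true for both SIG (24) and RRSIG (46)."""
    return rdtype in (SIG, RRSIG)

def typepair(rdtype, covers, fixed=True):
    """Bucket key used when filing an rdataset in the zone DB (sketch).
    The buggy rdata_covers() kept the covers field only for RRSIG."""
    keep_covers = issig(rdtype) if fixed else rdtype == RRSIG
    return (rdtype, covers if keep_covers else 0)

# Buggy: both SIG rdatasets collapse into one bucket, so a follow-up add
# with a different TTL trips DNS_DBADD_EXACTTTL and yields DNS_R_NOTEXACT.
assert typepair(SIG, A, fixed=False) == typepair(SIG, MX, fixed=False) == (SIG, 0)

# Fixed: SIG keeps its covered type, just like RRSIG.
assert typepair(SIG, A) == (SIG, A)
assert typepair(SIG, MX) == (SIG, MX)
```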
Mark Andrews [Tue, 7 Apr 2026 14:39:57 +0000 (16:39 +0200)]
Fix assertion failure in dns_db_findrdataset() for SIG records
dns__db_findrdataset() had a REQUIRE() that only accepted
dns_rdatatype_rrsig when the covers parameter was set. A dynamic
update containing a SIG record (type 24) would trigger this
assertion, crashing named. Use dns_rdatatype_issig() to accept
both SIG and RRSIG.
Tom Krizek [Wed, 13 Mar 2024 17:18:42 +0000 (18:18 +0100)]
Move conftest log initialization to conftest.py
Initializing the conftest logging upon importing the isctest package
isn't practical when there are standalone pieces which can be used
outside of the testing framework, such as the asyncdnsserver module.
[9.18] fix: dev: Fix inverted gethostname() check in rndc status
The replacement of named_os_gethostname() with raw gethostname()
inverted the success check: the "localhost" fallback runs on success,
and on failure the uninitialized hostname buffer is read by snprintf(),
leaking stack memory via the rndc status reply.
Closes #5889
Backport of MR !11879
Merge branch 'backport-5889-fix-gethostname-inverted-check-9.18' into 'bind-9.18'
When named_os_gethostname() was replaced with raw gethostname(), the
success/failure polarity was flipped: the fallback to "localhost" now
runs on success and the hostname buffer is left uninitialized on
failure. In the failure path, snprintf() then reads the uninitialized
stack buffer, disclosing stack contents via the rndc status reply.
[9.18] new: ci: Extend the prepare-release-announcement job to post release links
The prepare-release-announcement job is now extended so that after
creating the announcement MR, it posts a message with links to the newly
released versions to Mattermost.
Backport of MR !11860
Merge branch 'backport-andoni/extend-prepare-release-announcement-with-urls-message-9.18' into 'bind-9.18'
Extend the prepare-release-announcement job to post release links
The prepare-release-announcement job is now extended so that after
creating the announcement MR, it posts a message with links to the newly
released versions to Mattermost.
[9.18] new: doc: Document opt-in 🤖 marker for agent-authored issues and MRs
Add short notes in CONTRIBUTING.md telling automated agents to append
🤖 to the title of issues and merge requests so they can be routed
through the streamlined agent triage/merge process.
Backport of MR !11861
Merge branch 'backport-ondrej/agent-contributing-9.18' into 'bind-9.18'
Document opt-in 🤖 marker for agent-authored issues and MRs
Add short notes in CONTRIBUTING.md telling automated agents to append
🤖 to the title of issues and merge requests so they can be routed
through the streamlined agent triage/merge process.
[9.18] chg: ci: Test development version of libuv in CI
Recently, a broken version of libuv was released, breaking BIND on
several platforms. The offending commit
(https://github.com/libuv/libuv/issues/5030) was on the development
branch for months, but we didn't notice.
In nightly pipelines, build the current 'main' (actually 'v1.x') branch
of libuv and run the unit and system tests against it.
Backport of MR !11647
Merge branch 'backport-stepan/prelease-testing-for-libuv-9.18' into 'bind-9.18'
Štěpán Balážik [Mon, 9 Mar 2026 16:26:13 +0000 (17:26 +0100)]
Test development version of libuv in CI
Recently, a broken version of libuv was released, breaking BIND on
several platforms. The offending commit [1] was on the development
branch for months, but we didn't notice.
In nightly pipelines, build the current 'main' (actually 'v1.x') branch
of libuv and run the unit and system tests against it.
[1] https://github.com/libuv/libuv/issues/5030
When processing a catalog zone member's primaries definition, if a
TXT record contained an invalid TSIG key name, dns_name_free was
incorrectly called, triggering an assertion failure.
This has been fixed.
Closes #5858
Backport of MR !11832
Merge branch 'backport-5858-remove-unnecessary-dns-name-free-call-9.18' into 'bind-9.18'
Mark Andrews [Fri, 10 Apr 2026 03:07:26 +0000 (13:07 +1000)]
Remove unnecessary dns_name_free call
When processing a catalog zone member's primaries definition, if a
TXT record contained an invalid TSIG key name, dns_name_free was
incorrectly called, triggering an assertion failure.
This has been fixed.
Mark Andrews [Fri, 10 Apr 2026 08:07:49 +0000 (18:07 +1000)]
[9.18] fix: usr: Fix zone verification of NSEC3 signed zones
Previously, when computing the compressed bitmap during verification of an NSEC3-signed zone, an undersized buffer was used, resulting in an out-of-bounds write if the bitmap contained too many active windows. This affected NSEC3-signed mirror zones, `dnssec-signzone`, and `dnssec-verifyzone`. This has been fixed.
Closes #5834
Backport of MR !11804
Merge branch 'backport-5834-fix-cbm-size-9.18' into 'bind-9.18'
Michał Kępień [Thu, 9 Apr 2026 11:48:49 +0000 (13:48 +0200)]
[9.18] fix: ci: Purge distros token in a separate CI job
The "publish" job runs on a dedicated, locked-down runner that lacks the
Python modules necessary to execute the manage_distros_token.py script.
Instead of deleting the token within the "publish" job, purge it in a
separate job that automatically runs on the "base" image after the
"publish" job succeeds. Define "rules" for the new job so that the
token is only deleted for security releases, as it should have been
initially.
Backport of MR !11817
Merge branch 'backport-michal/purge-distros-token-in-a-separate-ci-job-9.18' into 'bind-9.18'
Michał Kępień [Thu, 9 Apr 2026 11:23:57 +0000 (13:23 +0200)]
Purge distros token in a separate CI job
The "publish" job runs on a dedicated, locked-down runner that lacks the
Python modules necessary to execute the manage_distros_token.py script.
Instead of deleting the token within the "publish" job, purge it in a
separate job that automatically runs on the "base" image after the
"publish" job succeeds. Define "rules" for the new job so that the
token is only deleted for security releases, as it should have been
initially.
Mark Andrews [Thu, 9 Apr 2026 02:07:11 +0000 (12:07 +1000)]
[9.18] fix: doc: nsupdate does not handle zero length RDATA well
Nsupdate does not distinguish between a non-existent RDATA field and
an empty one when determining which action is desired. This only
affects a few record types, such as APL, that allow an empty RDATA
field. Document the workaround of using the '\# 0' form for entering
these specific records, e.g.
# delete the APL RRset
update delete IN APL
# delete the APL record with a zero length rdata
update delete IN APL \# 0
Closes #5835
Backport of MR !11775
Merge branch 'backport-5835-nsupdate-doc-zero-length-rdata-how-to-9.18' into 'bind-9.18'
Mark Andrews [Tue, 31 Mar 2026 01:26:42 +0000 (12:26 +1100)]
nsupdate does not handle zero length RDATA well
Nsupdate does not distinguish between a non-existent RDATA field and
an empty one when determining which action is desired. This only
affects a few record types, such as APL, that allow an empty RDATA
field. Document the workaround of using the '\# 0' form for entering
these specific records, e.g.
# delete the APL RRset
update delete IN APL
# delete the APL record with a zero length rdata
update delete IN APL \# 0
Mark Andrews [Tue, 7 Apr 2026 21:57:36 +0000 (07:57 +1000)]
[9.18] fix: test: Check exit status of dig and nsupdate in nsupdate system test
Add missing failure checks to six dig and nsupdate invocations in the nsupdate system test so that command failures are properly caught instead of being silently ignored.
Backport of MR !11811
Merge branch 'backport-marka/check-return-codes-in-nsupdate-test-9.18' into 'bind-9.18'
The name 'isdelegation()' was confusing. The function does not check
whether the message is a delegation, but whether the denial-of-existence
proofs in the message prove a referral to an unsigned zone.
The name 'is_unsecure_referral()' is more appropriate.
Revert isdelegation() to return boolean value again
isdelegation() was changed to return an isc_result_t because the idea
was to have a separate return value, DNS_R_NSEC3ITERRANGE, to signal to
the caller that the proof could not be verified because of too many
iterations in the NSEC3 record, or perhaps ISC_R_UNEXPECTED for a more
generic cause of verification not being done.
But this would make error handling more fragile, and all we care about
is whether we can reliably say the NS bit was not set.
If we cannot reliably say so, we have to treat it as an insecure
referral.
Since the answer is either yes or no, we can revert to returning a
boolean value.