Ondřej Surý [Fri, 15 May 2026 06:03:16 +0000 (08:03 +0200)]
fix: test: Fix flaky reclimit test
The max-types-per-name cache eviction tests were flaky because two test
steps were missing a sleep between queries, causing TTL-based cache
verification to fail when both queries completed within the same second.
Merge branch 'ondrej/fix-flaky-reclimit' into 'main'
The cache verification in steps 11 and 15 checks that the TTL has
decreased from its initial value to confirm the response was served
from cache, but the sleep between the two queries was missing. Both
queries could complete within the same second, leaving the TTL
unchanged and causing the test to incorrectly conclude the entry was
not cached.
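The TTL-based check can be sketched as below; this is a minimal simulation with a fake cache (the real test drives dig against named and parses the answer's TTL), showing why a sub-second gap between the two queries makes the comparison fail:

```python
# Minimal sketch of the TTL-based cache check. FakeCache is a stand-in
# for named's cache; a DNS answer reports the remaining TTL truncated
# to whole seconds, so two queries in the same second see equal TTLs.
import time

class FakeCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.stored = time.monotonic()

    def query(self):
        # Remaining TTL, truncated to whole seconds like a DNS answer.
        return self.ttl - int(time.monotonic() - self.stored)

cache = FakeCache(ttl=3600)
first = cache.query()
time.sleep(1.1)          # the missing sleep: without it, both queries
second = cache.query()   # can land in the same second and compare equal
assert second < first    # TTL decreased => answer was served from cache
```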
Ondřej Surý [Fri, 15 May 2026 05:48:26 +0000 (07:48 +0200)]
chg: dev: Skip in-domain nameservers that have no glue
A referral that names a nameserver inside the delegated zone but
provides no address for it leaves the resolver unable to reach that
server. named now logs "missing mandatory glue for <name>" at notice
level and skips the nameserver.
Merge branch 'ondrej/dont-store-missing-in-domain-glue-ns' into 'main'
Ondřej Surý [Wed, 6 May 2026 10:37:03 +0000 (12:37 +0200)]
Drop in-domain NS without glue from the delegation set
Pull the dns_message_findname() lookups into cache_delegglue() and
cache_delegglue6() so each helper now owns its glue lookup and returns
the number of addresses cached. cache_delegns() splits referrals into
two cases: in-domain (the NS name is below the delegation point) and
sibling/in-bailiwick.
An in-domain NS without glue is unresolvable by definition - the
resolver would have to ask the very server it's trying to find. Log
"missing mandatory glue" at notice level and skip the deleg entirely
rather than leaving an unusable entry in the set. A new
dns_delegset_freedeleg() undoes a fresh dns_delegset_allocdeleg() so
the rest of the delegation set is preserved.
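The in-domain vs. sibling split described above can be sketched with plain string names (the real code compares dns_name_t values against the delegation point; helper names here are illustrative):

```python
# Sketch of the referral split: an in-domain NS (name at or below the
# delegation point) without a glue address is unresolvable and skipped.
def is_indomain(ns_name: str, deleg_point: str) -> bool:
    ns = ns_name.lower().rstrip(".")
    deleg = deleg_point.lower().rstrip(".")
    return ns == deleg or ns.endswith("." + deleg)

def usable_nameservers(deleg_point, ns_names, glue):
    kept = []
    for ns in ns_names:
        if is_indomain(ns, deleg_point) and not glue.get(ns):
            # corresponds to the "missing mandatory glue for <name>" skip
            continue
        kept.append(ns)
    return kept

servers = usable_nameservers(
    "example.org.",
    ["ns1.example.org.", "ns.other.net."],
    {"ns.other.net.": []},   # sibling: resolvable without glue
)
# ns1.example.org. is in-domain with no glue, so only the sibling remains
```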
Ondřej Surý [Fri, 15 May 2026 04:57:00 +0000 (06:57 +0200)]
chg: usr: Fall back to TCP on a UDP response with a mismatched query id
BIND used to wait silently for the correct DNS message id on a UDP fetch
even after receiving a response from the expected server with the wrong
id, leaving room for off-path spoofing attempts to keep guessing within
that window. The resolver now retries the fetch over TCP on the first
such response, and a new MismatchTCP statistics counter tracks how
often the fallback fires.
Closes #5449
Merge branch '5449-immediate-tcp-fallback-on-id-mismatch' into 'main'
Ondřej Surý [Thu, 14 May 2026 10:20:19 +0000 (12:20 +0200)]
Switch UDP fetches to TCP on the first response with a wrong query id
Until now, the dispatcher silently dropped UDP responses from the
expected peer that carried the wrong DNS message id and kept listening
for the correct id to arrive within the read timeout. An off-path
attacker who knows the destination address and source port of an
outgoing fetch could exploit that quiet retry window to flood the
resolver with guessed responses; with a gigabit link the per-query
success probability grows linearly with the number of guesses that
arrive before the legitimate answer or the timeout.
Treat any such mismatch as a possible spoofing attempt and let the
resolver immediately retry the same query over TCP, the same control
path the truncation handler already uses.
Add a resolver statistics counter, exposed as 'queries retried over TCP
after a response with mismatched query id' in rndc stats and as
'MismatchTCP' in the statistics channel.
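The decision described above can be sketched as a toy function; the names are illustrative, not the real dispatcher API:

```python
# Toy decision function: a UDP response from the expected server with
# the wrong message id now triggers an immediate TCP retry instead of
# silently waiting within the read timeout.
def on_udp_response(expected_id: int, resp_id: int, stats: dict) -> str:
    if resp_id == expected_id:
        return "accept"
    # possible spoofing attempt: reuse the truncation-handler control
    # path and retry the same query over TCP
    stats["MismatchTCP"] = stats.get("MismatchTCP", 0) + 1
    return "retry-over-tcp"

stats = {}
assert on_udp_response(0x1234, 0x1234, stats) == "accept"
assert on_udp_response(0x1234, 0xDEAD, stats) == "retry-over-tcp"
```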
Ondřej Surý [Thu, 14 May 2026 06:52:58 +0000 (08:52 +0200)]
fix: dev: Fix data race during rndc dumpdb or zone load
Running 'rndc dumpdb' against a server with zones, or an asynchronous
zone load, had a timing window in which the operation's completion
could fire before the server had finished registering the operation,
occasionally leading to a crash. The completion is now delivered after
the registration is in place.
Closes #5952
Merge branch '5952-fix-masterdump-async-ctx-race' into 'main'
Ondřej Surý [Fri, 8 May 2026 05:46:03 +0000 (07:46 +0200)]
Fix data race in async master dump/load context publication
Bouncing the offload itself to the target loop let the after-work
callback fire on the target thread and run the user's done callback
before the calling thread had published *dctxp / *lctxp. Enqueue on
the calling loop and bounce only the done callback instead, so the
publish is sequenced before the cross-thread hand-off by construction
and cannot be reintroduced by reordering the entry-point body.
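The sequencing argument above can be sketched with a queue standing in for the target loop (the context and callback names are illustrative):

```python
# Sketch of the fix: publish the context pointer on the calling thread
# first, then bounce only the done callback to the other thread. The
# publish is therefore sequenced before the cross-thread hand-off.
import threading
import queue

target_loop = queue.Queue()
published = {}

def start_dump(done):
    ctx = {"state": "ready"}
    published["dctxp"] = ctx             # publish *dctxp first...
    target_loop.put(lambda: done(ctx))   # ...then hand off the callback

results = []
def done(ctx):
    # by construction the publish happened-before this runs
    results.append(published["dctxp"] is ctx)

start_dump(done)
worker = threading.Thread(target=lambda: target_loop.get()())
worker.start()
worker.join()
```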
Mark Andrews [Thu, 14 May 2026 00:00:21 +0000 (10:00 +1000)]
Disable output escaping in bind9.xsl
The statistics charts were not displaying on some browsers (e.g. Chrome)
due to '>' being escaped as '&gt;'. Use disable-output-escaping="yes" to
turn this off.
Colin Vidal [Wed, 13 May 2026 20:31:32 +0000 (22:31 +0200)]
fix: test: Fix cyclic glues (again)
The previous fix `ed90d578b3a98f45eb8bc09966e9c4ab870a156d` used
`wait_for_line()` by mistake; the test aims to wait for two log lines
to be printed before continuing.
In principle `wait_for_all()` would do, but `running` should always be
printed first, so `wait_for_sequence()` seems to be the right fit here.
Merge branch 'colin/fix-cyclic-glues-again' into 'main'
Colin Vidal [Wed, 13 May 2026 13:20:35 +0000 (15:20 +0200)]
Fix cyclic glues (again)
The previous fix `ed90d578b3a98f45eb8bc09966e9c4ab870a156d` used
`wait_for_line()` by mistake: the test aims to wait for two log lines
to be printed before continuing, not to continue as soon as one of
them is printed.
Instead, `wait_for_all()` is used, since the order of the two expected
log lines is not guaranteed.
The global RUNNER_SCRIPT_TIMEOUT: 55m in the parent pipeline was being
forwarded to the stress and tsan:stress child pipelines, where forwarded
yaml variables outrank job-level variables. That caused stress jobs with
BIND_STRESS_TESTS_RUN_TIME >= 60 to be killed at 55 minutes, regardless
of the per-job RUNNER_SCRIPT_TIMEOUT set in the generated child config.
Set forward:yaml_variables: false on both trigger jobs; the generated
configs already declare every variable they need.
Assisted-by: Claude:claude-opus-4-7
Merge branch 'mnowak/fix-stress-test-script-timeout' into 'main'
Michal Nowak [Wed, 13 May 2026 09:44:26 +0000 (11:44 +0200)]
Selectively inherit yaml vars in stress trigger jobs
The parent's global RUNNER_SCRIPT_TIMEOUT: 55m was reaching the stress
and tsan:stress child pipelines via inherited yaml variables, where
inherited values outrank the child's job-level variables. That caused
stress jobs with BIND_STRESS_TESTS_RUN_TIME >= 60 to be killed at 55
minutes, regardless of the per-job RUNNER_SCRIPT_TIMEOUT set in the
generated child config.
Use inherit:variables with a positive list on both trigger jobs:
inherit only CI_REGISTRY_IMAGE so the parent's registry override
(needed for image pulls in the child) flows through, while keeping
RUNNER_SCRIPT_TIMEOUT (and other globals) out of the child pipeline's
variable scope. The per-job RUNNER_SCRIPT_TIMEOUT values set by the
generated child config now take effect.
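A minimal sketch of the trigger-job stanza described above; the job name, the artifact path, and the generator job name are illustrative, while inherit:variables and CI_REGISTRY_IMAGE are the mechanism the commit describes:

```yaml
stress:
  trigger:
    include:
      - artifact: stress-pipeline.yml   # generated child config
        job: generate-stress-config
  inherit:
    variables:              # positive list: only this global crosses over
      - CI_REGISTRY_IMAGE
  # The parent's global RUNNER_SCRIPT_TIMEOUT: 55m no longer reaches the
  # child, so the per-job values in the generated config take effect.
```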
Michal Nowak [Wed, 25 Mar 2026 12:31:49 +0000 (13:31 +0100)]
Set RUNNER_SCRIPT_TIMEOUTs
Sometimes jobs can get stuck and be terminated by GitLab, leaving us
without artefacts that could contain useful information about why the
job got stuck.
Colin Vidal [Tue, 12 May 2026 14:42:43 +0000 (16:42 +0200)]
fix: test: Fix cyclic_glue system test
The cyclic_glue system test was waiting for `running` log after
an `rndc reload` command, but wasn't waiting for the log saying a
specific zone which changed has been reloaded `zone <zone>/IN: loaded`.
As a result, the test could randomly fails. This is now fixed.
Closes #5953
Merge branch '5953-fix-cyclic-glue-test' into 'main'
Colin Vidal [Tue, 12 May 2026 12:42:35 +0000 (14:42 +0200)]
Fix cyclic_glue system test
The cyclic_glue system test waited for the `running` log line after an
`rndc reload` command, but did not wait for the log line indicating
that the changed zone had been reloaded (`zone <zone>/IN: loaded`).
As a result, the test could randomly fail. This is now fixed.
Ondřej Surý [Tue, 12 May 2026 14:17:59 +0000 (16:17 +0200)]
chg: usr: Cap glue records cached from a referral
named cached every glue record from a referral, retaining far more
than resolution will ever use. The number of nameservers and
addresses kept per referral is now bounded in the delegation database.
Closes #5701
Merge branch '5701-limit-the-number-of-GLUE-records' into 'main'
Ondřej Surý [Wed, 6 May 2026 10:35:22 +0000 (12:35 +0200)]
Cap glue records cached from a referral
The resolver populated the delegation database with every NS RR and
every glue address from a referral, with no aggregate bound. Resolution
only ever uses the first max-delegation-servers NS owners and a handful
of addresses per NS, so anything beyond that is dead memory.
Stop the NS loop in cache_delegns() at view->max_delegation_servers and
cap each glue rdataset at DELEG_MAX_GLUES_PER_NS (20) addresses, so each
NS owner contributes at most 20 A and 20 AAAA glues.
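The two caps described above can be sketched as follows; the constant names mirror the commit, and the max-delegation-servers value here is an illustrative placeholder for the view-level setting:

```python
# Sketch of the two bounds: stop the NS loop at max-delegation-servers
# and cap each glue rdataset at DELEG_MAX_GLUES_PER_NS addresses per
# address family (at most 20 A and 20 AAAA glues per NS owner).
MAX_DELEGATION_SERVERS = 5     # illustrative view-level value
DELEG_MAX_GLUES_PER_NS = 20

def cache_referral(ns_owners, glue_by_ns):
    cached = {}
    for ns in ns_owners[:MAX_DELEGATION_SERVERS]:   # stop the NS loop
        addrs = glue_by_ns.get(ns, {"A": [], "AAAA": []})
        cached[ns] = {
            "A": addrs["A"][:DELEG_MAX_GLUES_PER_NS],
            "AAAA": addrs["AAAA"][:DELEG_MAX_GLUES_PER_NS],
        }
    return cached

result = cache_referral(
    [f"ns{i}.example." for i in range(10)],
    {"ns0.example.": {"A": [f"192.0.2.{i}" for i in range(30)],
                      "AAAA": []}},
)
```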
Michał Kępień [Mon, 11 May 2026 15:43:55 +0000 (17:43 +0200)]
chg: ci: Add commit link and diff to RPM build job logs
The output of update_rpms.py is terse, making it difficult to verify its
actions. Add a commit link and "git show" output to the log of every CI
job running the update_rpms.py script in "build" mode to facilitate
double-checking its actions.
Merge branch 'michal/add-commit-link-and-diff-to-rpm-build-job-logs' into 'main'
Michał Kępień [Mon, 11 May 2026 15:41:50 +0000 (17:41 +0200)]
Add commit link and diff to RPM build job logs
The output of update_rpms.py is terse, making it difficult to verify its
actions. Add a commit link and "git show" output to the log of every CI
job running the update_rpms.py script in "build" mode to facilitate
double-checking its actions.
Michał Kępień [Mon, 11 May 2026 14:23:16 +0000 (16:23 +0200)]
fix: ci: Increase GIT_DEPTH for the "assign-milestones" job
Cloning tags with the default GIT_DEPTH of 1 prevents the milestone
assignment script from identifying any merge requests that are included
in a given release. Fix by increasing GIT_DEPTH to an arbitrary value
that is high enough for practical purposes.
The GIT_DEPTH CI variable defaults to 1 for all jobs through the
top-level "variables" key. Explicitly setting it to 1 in job
definitions is unnecessary and may cause confusion. Remove these
redundant assignments.
Merge branch 'michal/fix-assign-milestones-job' into 'main'
Michał Kępień [Mon, 11 May 2026 14:07:47 +0000 (16:07 +0200)]
Remove redundant "GIT_DEPTH: 1" assignments
The GIT_DEPTH CI variable defaults to 1 for all jobs through the
top-level "variables" key. Explicitly setting it to 1 in job
definitions is unnecessary and may cause confusion. Remove these
redundant assignments.
Michał Kępień [Mon, 11 May 2026 14:07:47 +0000 (16:07 +0200)]
Increase GIT_DEPTH for the "assign-milestones" job
Cloning tags with the default GIT_DEPTH of 1 prevents the milestone
assignment script from identifying any merge requests that are included
in a given release. Fix by increasing GIT_DEPTH to an arbitrary value
that is high enough for practical purposes.
Michal Nowak [Mon, 11 May 2026 13:34:30 +0000 (15:34 +0200)]
new: test: Add isctest.transfer.transfer_message() helper and convert tests
Add a new helper function, `isctest.transfer.transfer_message()`, to
`bin/tests/system/isctest/transfer.py` that generates the log message
produced by `xfrin_log()` in `lib/dns/xfrin.c` for an incoming zone
transfer:
transfer of '<zone>/IN' from <source_ns>#<port>: <msg>
The explicit use of `port` matches current shell system usage.
- zone - zone name without class (e.g. "example.com")
- source_ns - IP string, or None to wildcard the source address
- msg - the transfer-level message
(e.g. "Transfer status: success")
- port - integer source port, or None to wildcard the port number
When both source_ns and port are concrete values a plain str is returned
and `wait_for_line()` treats it as a literal substring match. Whenever
either is `None` a compiled `re.Pattern` is returned, with the unknown part
replaced by a constrained wildcard:
- source_ns=None, port=None -> from .*#[0-9]+:
- source_ns=None, port=53 -> from .*#53:
- source_ns="1.2.3.4", port=None -> from 1.2.3.4#[0-9]+:
- source_ns="1.2.3.4", port=N -> "from 1.2.3.4#N:" (plain str)
The port wildcard is [0-9]+ (not .*) because a port is always numeric.
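The wildcard mapping above can be sketched as a hypothetical re-implementation (the real helper lives in bin/tests/system/isctest/transfer.py; this only mirrors the described return-type and wildcard rules):

```python
# Sketch: plain str when both source_ns and port are concrete, otherwise
# a compiled re.Pattern with .* for the address and [0-9]+ for the port.
import re

def transfer_message(zone, source_ns, msg, port):
    if source_ns is not None and port is not None:
        # both concrete: plain str, matched as a literal substring
        return f"transfer of '{zone}/IN' from {source_ns}#{port}: {msg}"
    src = re.escape(source_ns) if source_ns is not None else ".*"
    prt = str(port) if port is not None else "[0-9]+"  # ports are numeric
    return re.compile(
        rf"transfer of '{re.escape(zone)}/IN' from {src}#{prt}: "
        rf"{re.escape(msg)}"
    )

plain = transfer_message("example.com", "10.53.0.1",
                         "Transfer status: success", 5300)
anysrc = transfer_message("example.com", None,
                          "Transfer status: success", 53)
```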
Convert all hard-coded transfer log patterns in the Python system tests
to use transfer_message().
Notable cases:
- `mirror_root_zone`: source_ns=None (live internet, any root server),
port=53.
- `cipher_suites`: source_ns="10.53.0.1", port=None (each zone transfers
over a different TLS port).
- `test_under_signed_transfer`: parametrize gains a boolean xfrin_msg
flag to distinguish messages that go through xfrin_log() from
lower-level TSIG errors that do not.
Testing
-------
All system tests pass under `pytest -n auto`. The `mirror_root_zone`
live-internet test was also verified separately with
`CI_ENABLE_LIVE_INTERNET_TESTS=1`.
LLM usage
---------
This commit was produced in an interactive session with Claude Code
(Claude Sonnet 4.6), guided step by step by a human reviewer.
Closes #5735
Merge branch '5735-make-transfer-message-formatter' into 'main'
Michal Nowak [Mon, 11 May 2026 11:24:22 +0000 (13:24 +0200)]
Add isctest.transfer.transfer_message() helper and convert tests
Add a new helper function, isctest.transfer.transfer_message(), to
bin/tests/system/isctest/transfer.py that generates the log message
produced by xfrin_log() in lib/dns/xfrin.c for an incoming zone
transfer:
transfer of '<zone>/IN' from <source_ns>#<port>: <msg>
The helper always returns a compiled re.Pattern. source_ns and port
each accept None to match any source address / port. msg accepts
either a plain str (regex-escaped automatically) or a compiled
re.Pattern (spliced into the regex as-is), so callers that need regex
syntax in the message part can pass Re(r"...") without having to
wrap the whole result.
source_ns is passed through re.escape() when provided, so dots in
IPv4 addresses (e.g. "10.53.0.1") match a literal dot rather than
any character.
Convert the existing call sites across the system tests to use the
new helper.
Alessio Podda [Mon, 11 May 2026 12:52:17 +0000 (12:52 +0000)]
chg: dev: Make dns_glue_t private to qpzone
The dns_glue struct currently contains four dns_rdataset structs to hold
the glue. These structs are over 100 bytes each because they need to be
able to hold data for multiple types of databases.
Since the dns_glue_t type is only used by qpzone, we can instead hold pointers
to the vecheaders directly, and only bind the vecheaders to the
rdatasets when adding the glue to the message.
This leads to a 33% memory reduction in some authoritative benchmarks.
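The delayed-binding idea above can be sketched in miniature; class and method names are illustrative stand-ins for dns_glue_t, the vecheaders, and the message-rendering step:

```python
# Sketch: keep lightweight header references in the glue record and
# materialize the heavyweight rdataset objects only when the glue is
# added to a message.
class Rdataset:
    """Stands in for the >100-byte dns_rdataset struct."""
    def __init__(self, header):
        self.header = header

class Glue:
    def __init__(self, a_header, aaaa_header):
        # store bare header pointers instead of embedded rdatasets
        self.a_header = a_header
        self.aaaa_header = aaaa_header

    def bind_to_message(self, message):
        # bind the headers to rdatasets only at render time
        for header in (self.a_header, self.aaaa_header):
            if header is not None:
                message.append(Rdataset(header))

msg = []
Glue("A-header", None).bind_to_message(msg)
```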
Alessio Podda [Sat, 14 Feb 2026 21:20:41 +0000 (22:20 +0100)]
Delay binding glue to rdataset
The dns_glue struct currently contains four dns_rdataset structs to hold
the glue. These structs are over 100 bytes each because they need to be
able to hold data for multiple types of databases.
Since the dns_glue_t type is only used by qpzone, we can instead hold
pointers to the vecheaders directly, and only bind the vecheaders to
the rdatasets when adding the glue to the message.
The dns_glue_t, dns_gluelist_t and dns_glue_additionaldata_ctx types are
only used in qpzone.c. This commit moves them to the private header
qpzone_p.h.
This is done in preparation of a followup commit that will refactor them
to use types that are private to qpzone.
Michał Kępień [Mon, 11 May 2026 08:09:09 +0000 (10:09 +0200)]
fix: ci: Fix triggering rules for the "publish-cleanup" job
The "publish-cleanup" tag pipeline job is currently created for all
security releases, including BIND -S releases, but it depends on the
"publish" job, which is only created for open source releases. This
breaks CI configuration for BIND -S tags, preventing pipelines from
getting created for such tags altogether. Fix by only creating the
"publish-cleanup" job in tag pipelines for open source security
releases.
Merge branch 'michal/fix-triggering-rules-for-the-publish-cleanup-job' into 'main'
Michał Kępień [Mon, 11 May 2026 08:07:38 +0000 (10:07 +0200)]
Fix triggering rules for the "publish-cleanup" job
The "publish-cleanup" tag pipeline job is currently created for all
security releases, including BIND -S releases, but it depends on the
"publish" job, which is only created for open source releases. This
breaks CI configuration for BIND -S tags, preventing pipelines from
getting created for such tags altogether. Fix by only creating the
"publish-cleanup" job in tag pipelines for open source security
releases.
Michał Kępień [Thu, 7 May 2026 16:05:37 +0000 (18:05 +0200)]
chg: ci: Mark merged security fixes as "Not released yet"
Adjust the triggering rules for the "merged-metadata" CI job so that
merge requests merged into security-* branches are automatically
assigned to the "Not released yet" milestone, just like merge requests
targeting public branches. This enables merge requests containing
security fixes to be correctly processed by release automation scripts.
Merge branch 'pspacek/extend-not-released-yet-milestone' into 'main'
Petr Špaček [Tue, 5 May 2026 13:04:36 +0000 (15:04 +0200)]
Mark merged security fixes as "Not released yet"
Adjust the triggering rules for the "merged-metadata" CI job so that
merge requests merged into security-* branches are automatically
assigned to the "Not released yet" milestone, just like merge requests
targeting public branches. This enables merge requests containing
security fixes to be correctly processed by release automation scripts.
Michał Kępień [Thu, 7 May 2026 15:51:36 +0000 (17:51 +0200)]
chg: ci: Enable automatic backports for security fixes
Ensure the "backports" CI job is created when new changes are merged
into security-* branches. This enables using backport automation for
security fixes.
Merge branch 'michal/extend-automatic-backports' into 'main'
Michał Kępień [Thu, 7 May 2026 15:45:35 +0000 (17:45 +0200)]
Enable automatic backports for security fixes
Ensure the "backports" CI job is created when new changes are merged
into security-* branches. This enables using backport automation for
security fixes.
Evan Hunt [Wed, 6 May 2026 20:48:11 +0000 (20:48 +0000)]
fix: dev: Check validator name when adding EDE text
When a validator is being shut down, the associated name
`val->name` is set to NULL. This could cause a crash if a worker
thread subsequently added an EDE code with `val->name` in the
extra text.
`validator_addede()` now checks whether the name is NULL before
trying to add it to the extra text.
Closes #5613
Merge branch 'each-validator-log-after-shutdown' into 'main'
Evan Hunt [Fri, 1 May 2026 18:12:54 +0000 (11:12 -0700)]
check for val->name == NULL when adding EDE text
When a validator is being shut down, the associated name
`val->name` is set to NULL. This could cause a crash if a worker
thread subsequently added an EDE code to the response containing
val->name in the extra text.
`validator_addede()` now checks whether the name is NULL before
trying to add it to the extra text.
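The guard described above can be sketched as follows; the dict-based validator and the tuple list are illustrative, not the real C structures:

```python
# Sketch of the fix: when the validator's name has been cleared during
# shutdown, still record the EDE code but skip the extra text.
def validator_addede(val, code, edes):
    if val.get("name") is None:
        # validator is shutting down; no name to put in the extra text
        edes.append((code, None))
        return
    edes.append((code, f"for {val['name']}"))

edes = []
validator_addede({"name": "example.com"}, 9, edes)
validator_addede({"name": None}, 9, edes)   # post-shutdown: no crash
```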
Aram Sargsyan [Wed, 6 May 2026 19:36:35 +0000 (19:36 +0000)]
fix: usr: Fix a bug in allow-query/allow-transfer catalog zone custom properties
The :iscman:`named` process could terminate unexpectedly when
processing a catalog zone with an invalid ``allow-query`` or
``allow-transfer`` custom property (i.e. having a non-APL type)
coexisting with the valid property. This has been fixed.
Closes #5941
Merge branch '5941-catz-catz_process_apl-bug-fix' into 'main'
Aram Sargsyan [Mon, 4 May 2026 22:34:01 +0000 (22:34 +0000)]
Fix a bug in catz_process_apl()
The allow-transfer/allow-query catalog zone custom properties support
only APL RRtypes. All other types are correctly rejected by the
catz_process_apl() function. However, when an APL RRtype has been
processed by that function and another (non-APL) RRtype is then
processed, an assertion failure occurs in the prologue of the function
because `*aclbp != NULL` (i.e. an APL has already been processed).
Move the type-checking code before the affected REQUIRE assertion.
Aram Sargsyan [Wed, 6 May 2026 18:18:58 +0000 (18:18 +0000)]
fix: usr: Fix a memory leak issue in the catalog zones
The :iscman:`named` process could leak small amounts of memory
when processing a catalog zone entry which had defined custom
primary servers with TSIG keys using both the regular ``primaries``
custom property syntax and the legacy alternative syntax (``masters``)
at the same time. This has been fixed.
Closes #5943
Merge branch '5943-catz-primaries-tsig-key-name-leak-fix' into 'main'
Ondřej Surý [Wed, 6 May 2026 04:46:42 +0000 (06:46 +0200)]
fix: usr: Prevent a crash when using both dns64 and filter-aaaa
An assertion failure could be triggered if both `dns64` and the
`filter-aaaa` plugin were in use simultaneously. This happened if the
plugin triggered a second recursion process, which then attempted to
store DNS64 state information in a pointer that had already been set
by the original recursion process. This has been fixed.
Evan Hunt [Mon, 4 May 2026 05:00:39 +0000 (22:00 -0700)]
Clear dns64_aaaaok immediately after use
The DNS64 state information stored in client->query.dns64_aaaaok
could cause an assertion failure in query_respond() if the server
was configured in such a way as to trigger a new recursion before
the query had been reset - for example, by using the filter-aaaa
plugin, which may need to recurse to find out whether an A record
exists.
This has been addressed by clearing DNS64 state information
immediately after the call to query_filter64().
Evan Hunt [Tue, 5 May 2026 23:19:59 +0000 (23:19 +0000)]
fix: dev: Fix a stack use-after-free in qpzone
In previous_closest_nsec(), a new qpreader was opened to search the NSEC
tree. It was possible for that to be used to update a QP iterator object
owned by the caller, and then be destroyed when the function returned.
This qpreader object isn't necessary anymore; since namespaces were
added to the QP trie in commit 15653c54a0, we can now just reuse the
existing reader for the main tree.
Evan Hunt [Mon, 4 May 2026 23:10:49 +0000 (16:10 -0700)]
Fix a stack use-after-free in qpzone
In previous_closest_nsec(), a new qpreader was opened to search the NSEC
tree. It was possible for that to be used to update a QP iterator object
owned by the caller, and then be destroyed when the function returned.
This qpreader object isn't necessary anymore; since namespaces were
added to the QP trie in commit 15653c54a0, we can now just reuse the
existing reader for the main tree.
Ondřej Surý [Tue, 5 May 2026 20:27:46 +0000 (22:27 +0200)]
fix: usr: Fix a crash when reconfiguring while an NTA is being rechecked
When named was reconfigured or shut down while a negative trust anchor
was being rechecked against authoritative servers, the in-flight recheck
could outlive the view that owned it and cause `named` to crash. This
has been fixed.
Evan Hunt [Mon, 4 May 2026 07:05:27 +0000 (00:05 -0700)]
Hold a reference to the NTA table for the lifetime of each NTA
Each dns__nta_t now references its parent ntatable in nta_create() and
releases it in dns__nta_destroy(). This avoids a use-after-free in
fetch_done() and other callbacks that dereference nta->ntatable: the
ntatable could otherwise be released by view destruction while an
in-flight resolver fetch still holds a reference to the NTA.
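The refcounting pattern above can be sketched as follows; class and method names are illustrative stand-ins for the real API:

```python
# Sketch of the fix: each NTA attaches to its table at create time and
# detaches at destroy time, so callbacks can still safely dereference
# nta.ntatable even after the owning view releases its reference.
class NtaTable:
    def __init__(self):
        self.refs = 1        # the view's reference
        self.alive = True

    def attach(self):
        self.refs += 1

    def detach(self):
        self.refs -= 1
        if self.refs == 0:
            self.alive = False   # the real code frees the table here

class Nta:
    def __init__(self, table):
        table.attach()           # nta_create(): hold the table alive
        self.ntatable = table

    def destroy(self):
        self.ntatable.detach()   # dns__nta_destroy()

table = NtaTable()
nta = Nta(table)
table.detach()       # view goes away while a recheck is in flight
assert table.alive   # fetch_done() can still use nta.ntatable safely
nta.destroy()        # last reference gone; table is freed now
```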
Ondřej Surý [Tue, 5 May 2026 19:06:43 +0000 (21:06 +0200)]
fix: dev: Handle KSR files with DNSKEY records before any header
A DNSKEY record appearing before the first ';; KeySigningRequest'
header in a KSR file made dnssec-ksr abort on an internal assertion
instead of producing a structured error, killing pipelines that
fed it crafted or corrupted input. The tool now exits with a
fatal error naming the file and line.
Closes #5914
Merge branch '5914-dnssec-ksr-rdatalist-null-insist' into 'main'
Replace INSIST in KSR DNSKEY parser with a structured error
A DNSKEY record appearing before any ';; KeySigningRequest' header
in a KSR file made dnssec-ksr abort on INSIST(rdatalist != NULL),
which is the wrong tool for a malformed-input case. Issue a fatal()
naming the file and line instead so pipelines see a clean exit
status and an actionable message; the now-unreachable NULL check on
the rdatalist->ttl update goes away too.
Ondřej Surý [Tue, 5 May 2026 16:15:19 +0000 (18:15 +0200)]
fix: usr: Reject record sets too large to serve in DNS
When BIND was asked to store a record set whose total size exceeds
what fits in a DNS message, it would allocate memory and build the
structure, then fail later at response time. Such oversized record
sets are now rejected at the time of storage with an error, avoiding
wasted work on data that can never be served.
Merge branch 'ondrej/harden-buflen-overflow' into 'main'
makeslab(), makevec(), dns_rdatavec_merge() and dns_rdatavec_subtract()
summed per-record storage into an unsigned int with no upper-bound
check. An RRset whose total encoded size exceeds DNS_RDATA_MAXLENGTH
cannot fit in a DNS message and is unservable; building its in-memory
representation only burns memory on data that will fail at response
time, and at the upper bound the running sum could in theory wrap.
Cap the running total at DNS_RDATA_MAXLENGTH and return ISC_R_NOSPACE
when exceeded. Update the qpdb cache memory-purge test to use a
record size that fits within the new limit.
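The running-total cap can be sketched as below; DNS_RDATA_MAXLENGTH is 65535, and the None return stands in for ISC_R_NOSPACE:

```python
# Sketch of the bound: sum per-record storage sizes and reject any
# total that no DNS message could ever carry.
DNS_RDATA_MAXLENGTH = 65535

def total_storage(record_sizes):
    total = 0
    for size in record_sizes:
        total += size
        if total > DNS_RDATA_MAXLENGTH:
            return None          # ISC_R_NOSPACE in the real code
    return total

assert total_storage([1000] * 60) == 60000   # fits, accepted
assert total_storage([1000] * 70) is None    # unservable, rejected early
```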
Ondřej Surý [Tue, 5 May 2026 08:49:37 +0000 (10:49 +0200)]
rem: dev: Remove obsolete KEY record flags deprecated by RFC 3445
KEY resource records originally defined NOAUTH, NOCONF, EXTENDED, and
ENTITY flags that were removed by RFC 3445 back in 2002. BIND still
carried code to parse and emit them, including the additional two-octet
flags field that followed when the EXTENDED bit was set. That handling
has been removed and the affected bit positions are now reserved.
Dropping the extended-flags handling also eliminates a possible crash
that could be reached when signing a zone containing an invalid key.
Closes #5900
Merge branch '5900-remove-keyflag-extended' into 'main'
Mark Andrews [Thu, 30 Apr 2026 23:06:36 +0000 (09:06 +1000)]
Remove remaining RFC 3445 KEY flags
RFC 3445 also eliminated the DNS_KEYTYPE_NOAUTH, DNS_KEYTYPE_NOCONF,
and DNS_KEYOWNER_ENTITY flags. With NOAUTH and NOCONF gone, the
concept of NOKEY can no longer be expressed in KEY records.
DNS_KEYOWNER_ENTITY was already unused as of 22d688f656 but still
defined; that is now also removed.
The DNS_KEYFLAG_EXTENDED flag was only legitimate for type KEY
and was eliminated by RFC 3445. Dropping the extended-flags
handling in pub_compare() also fixes a possible crash when
signing a zone whose journal contains a crafted DNSKEY: a
6-byte record with the EXTENDED bit set produced a memmove()
length that underflowed and ran off a stack buffer.
Ondřej Surý [Mon, 4 May 2026 12:58:42 +0000 (14:58 +0200)]
fix: usr: Prevent crafted queries from degrading RRL performance
With response rate limiting enabled, an attacker sending queries from many
spoofed source addresses could steer entries into the same slot of the
internal rate-limit table and slow down query processing on the affected
server. The table now uses a per-process keyed hash so the placement of
entries cannot be predicted or influenced from the network.
Closes #5906
Merge branch '5906-rrl-hash-collision-dos' into 'main'
The previous hash_key() was a deterministic, unkeyed (<<1) + add over the
key words. An off-path attacker could invert it offline and submit
queries whose source /24, qname hash, and qtype map to a single bucket;
under chaining this turns every lookup into an O(N) walk under
rrl->lock and starves legitimate query processing on the very feature
deployed to mitigate DoS.
Replace it with isc_hash32(), which is HalfSipHash-2-4 keyed by a
per-process random seed, so collision sets cannot be precomputed.
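The contrast above can be sketched in Python, with keyed blake2b standing in for HalfSipHash-2-4 (the bucket count and key material are illustrative):

```python
# Sketch: the old shift-and-add hash is deterministic and invertible
# offline, so an attacker can precompute colliding keys; a keyed hash
# with a per-process random seed makes slot placement unpredictable.
import hashlib
import os

def unkeyed_hash(words):
    h = 0
    for w in words:
        h = ((h << 1) + w) & 0xFFFFFFFF   # invertible with pen and paper
    return h

SEED = os.urandom(16)   # per-process key, unknown to the network

def keyed_hash(data: bytes) -> int:
    digest = hashlib.blake2b(data, key=SEED, digest_size=4).digest()
    return int.from_bytes(digest, "little")

BUCKETS = 1024
slot = keyed_hash(b"192.0.2.0/24|qname-hash|qtype") % BUCKETS
```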
Ondřej Surý [Fri, 1 May 2026 06:18:44 +0000 (08:18 +0200)]
fix: dev: Avoid named assertion failure during parent-NS lookups when none exist
Configuring the root zone as a signed primary with parental agents (or with
notify-on-cds-changes) caused named to exit on an internal assertion as soon
as the DS-publication machinery tried to look up the parent NS RRset — the root
has no parent. The lookup is now short-circuited cleanly.
Similar, a zone with no NS records in the parent caused named to exit in the same way.
Closes #5910
Merge branch '5910-nsfetch-start-root-domain-assertion' into 'main'
Once the walk reaches the root, splitting one more label off would
trip an internal assertion and abort named. Stop cleanly with
ISC_R_NOTFOUND so the dispatcher cancels the fetch. Only reachable
through misconfiguration (root configured as a primary with parental
agents, or a parent zone that NODATAs its own NS).
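The label walk above can be sketched with string names (the real code splits dns_name_t labels; the return value stands in for ISC_R_NOTFOUND):

```python
# Sketch of the short-circuit: walking up one label at a time must stop
# cleanly at the root instead of tripping an assertion.
ISC_R_NOTFOUND = "not found"

def parent_name(name: str):
    """Split one label off; at the root there is nothing left to split."""
    name = name.rstrip(".")
    if name == "":
        return ISC_R_NOTFOUND   # stop cleanly; dispatcher cancels fetch
    _, _, rest = name.partition(".")
    return rest + "."

assert parent_name("www.example.org.") == "example.org."
assert parent_name("org.") == "."
assert parent_name(".") == ISC_R_NOTFOUND   # root: no parent to fetch
```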
This is required to AXFR and verify the root zone, and it makes no
difference for non-root zones (dnssec-verify takes an FQDN or makes the
provided name absolute).
Add a test case where the root zone has dnssec-policy configured, with
checkds enabled. This is a silly case because the root does not have
any parent NS records, but it should not crash the server.
The same is true for zones that do not have parent NS records, but
eventually they will hit the same code path.
Ondřej Surý [Fri, 1 May 2026 05:50:38 +0000 (07:50 +0200)]
chg: dev: Catch rare named crash in recursive resolution earlier for diagnosis
A rare crash has been observed in named while it is resolving upstream nameserver
addresses for a recursive query, surfacing as a segmentation fault with no immediate
clue as to the cause. This change adds internal consistency checks so that a future
occurrence of the same condition aborts named with a diagnostic message at the point
the inconsistency arises, rather than corrupting state and crashing later in
an unrelated location.
Closes #5602
Merge branch '5602-adb-find-sanity-checks' into 'main'
Ondřej Surý [Fri, 1 May 2026 04:44:06 +0000 (06:44 +0200)]
Assert adb find loop-affinity invariant at lifetime entry points
The dns_adbfind_t lifetime model has no reference counting; storage
liveness is held together by find->lock and the FIND_EVENT_SENT
idempotency flag, plus an unwritten cross-module rule that all
non-trivial operations on a find run on find->loop. If a caller
violates that rule, the unlock-relock window in dns_adb_cancelfind
(and similar paths) becomes a use-after-free and we crash later
inside libpthread on a corrupted mutex.
Add REQUIREs at dns_adb_cancelfind, dns_adb_destroyfind and
find_sendevent so a violation aborts at the offending call site
rather than silently freeing storage another loop is still touching.
Also poison find->magic with ~DNS_ADBFIND_MAGIC in free_adbfind so
DNS_ADBFIND_VALID catches reuse-after-free at the next public entry
point instead of letting the dangling pointer reach the mutex code.
Ondřej Surý [Fri, 1 May 2026 05:19:57 +0000 (07:19 +0200)]
fix: dev: Harden dig's EDNS option parsing against malformed replies
dig's parser for EDNS options in a DNS reply now stops cleanly when an
option declares a length that runs past the end of the option data,
rather than trusting the upstream OPT-record validator to reject the
reply first. This is a defensive change; behavior is unchanged in
practice.
Merge branch 'ondrej/dig-process-opt-edns-optlen-oob' into 'main'
Bound EDNS option length in dig's process_opt() walk
process_opt() reads the per-option (optcode, optlen) header from the
OPT rdata and then advances the buffer by optlen, both for the COOKIE
branch (via process_cookie()) and for any other optcode. The walk
itself never compared optlen to the buffer remainder; the only reason
it cannot trip the isc_buffer_forward() REQUIRE today is that
fromwire_opt() (lib/dns/rdata/generic/opt_41.c) already validates each
option's length against the rdata bounds before the rdataset is
handed back, so process_opt() never sees a self-inconsistent rdata.
That upstream guarantee is fine, but it leaves the local walker
trusting an invariant established elsewhere. Add a defensive check
that just stops the walk when a future caller (a cached message, an
alternate parser, a refactor of the OPT validator) hands process_opt()
a buffer where optlen would run past the end.
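The shape of the defensive check can be sketched as below. This is a simplified walk over the EDNS option wire layout (2-byte code, 2-byte length, data), not dig's actual `process_opt()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Illustrative walk over EDNS options packed as
 * (2-byte optcode, 2-byte optlen, optlen bytes of data), big-endian.
 * Returns the number of well-formed options seen before the walk stops.
 */
static unsigned int
walk_edns_options(const uint8_t *buf, size_t len) {
	unsigned int count = 0;
	size_t pos = 0;

	while (len - pos >= 4) {
		uint16_t optlen = (uint16_t)((buf[pos + 2] << 8) |
					     buf[pos + 3]);
		pos += 4;
		/*
		 * Defensive bound: stop cleanly if optlen would run past
		 * the end of the buffer, instead of trusting an upstream
		 * validator to have rejected the rdata already.
		 */
		if (optlen > len - pos) {
			break;
		}
		pos += optlen;
		count++;
	}
	return count;
}
```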
Michał Kępień [Thu, 30 Apr 2026 20:34:55 +0000 (22:34 +0200)]
fix: ci: Use "git push --force-with-lease" for autorebases
If a merge request is merged to an autorebased branch while it is
getting rebased, the "git push -f" command at the end of the autorebase
job will cause the contents of that merge request to be silently deleted
from Git history even though the merge request will still be (correctly)
shown as "merged" by GitLab.
Use "git push --force-with-lease" instead to prevent force-pushing the
rebased version of the branch if it is pushed to after its pre-rebase
version is fetched by the autorebase job. Report such an event
accordingly. For simplicity, no retries are attempted as the problem is
expected to be resolved by the next autorebase and the chances of this
scenario happening in practice are already low to begin with.
Merge branch 'michal/use-git-push-force-with-lease-for-autorebases' into 'main'
Michał Kępień [Thu, 30 Apr 2026 20:19:59 +0000 (22:19 +0200)]
Use "git push --force-with-lease" for autorebases
If a merge request is merged to an autorebased branch while it is
getting rebased, the "git push -f" command at the end of the autorebase
job will cause the contents of that merge request to be silently deleted
from Git history even though the merge request will still be (correctly)
shown as "merged" by GitLab.
Use "git push --force-with-lease" instead to prevent force-pushing the
rebased version of the branch if it is pushed to after its pre-rebase
version is fetched by the autorebase job. Report such an event
accordingly. For simplicity, no retries are attempted as the problem is
expected to be resolved by the next autorebase and the chances of this
scenario happening in practice are already low to begin with.
fix: usr: Reject negative and out-of-range TTLs in dnssec-* tools
The dnssec-* tools accepted negative and out-of-range values for TTL
flags such as dnssec-keygen -L, dnssec-signzone -t and
dnssec-settime -L, silently turning them into TTLs of around 136 years
in the resulting key or zone files. The flag values are now validated
and rejected with a clear "TTL must be non-negative" or "TTL out of
range" error.
Closes #5923
Merge branch '5923-dnssectool-strtottl-negative-ttl-accepted' into 'main'
Reject negative and out-of-range TTLs in dnssec-* tools
strtottl() parsed the operator's TTL string with strtol() and assigned
the long directly to dns_ttl_t (uint32_t) with no sign or ERANGE
check. The only validation was the "no digits parsed" branch, so a
fully-consumed "-1" became UINT32_MAX (~136 years) and was silently
written into DNSKEY/key files by dnssec-keygen -L, dnssec-signzone -t,
dnssec-settime -L, etc. Any signing pipeline interpolating the TTL
from a variable could mint a key with a multi-decade TTL and never see
an error.
Switch to strtoul(), reject a leading '-' explicitly (strtoul silently
negates), check errno == ERANGE, and reject values exceeding
UINT32_MAX before handing the result to time_units(). The pre-existing
multiplication wrap inside time_units() is tracked separately.
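The hardened parse can be sketched like this. `parse_ttl()` is an illustrative stand-in for strtottl(); the real function additionally hands trailing unit suffixes to time_units(), which this sketch simply rejects:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Reject a leading '-' explicitly (strtoul silently negates it),
 * check errno == ERANGE, and bound the result to UINT32_MAX before
 * assigning to a 32-bit TTL.
 */
static bool
parse_ttl(const char *s, uint32_t *ttlp) {
	char *end = NULL;
	unsigned long v;

	if (s == NULL || *s == '-') {
		return false; /* "TTL must be non-negative" */
	}
	errno = 0;
	v = strtoul(s, &end, 10);
	if (end == s || *end != '\0') {
		return false; /* no digits parsed, or trailing junk */
	}
	if (errno == ERANGE || v > UINT32_MAX) {
		return false; /* "TTL out of range" */
	}
	*ttlp = (uint32_t)v;
	return true;
}
```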
Colin Vidal [Thu, 30 Apr 2026 14:31:21 +0000 (16:31 +0200)]
fix: test: Fix `cyclic_glue` system test
The `cyclic_glue` system test was not explicitly waiting for the dump to
complete. As a result, the test could read an outdated dump file and
perform assertions on stale database state. Fix this by waiting for the
`dumpdb` command to finish before reading `named_dump.db`.
Merge branch 'colin/fix-cyclic_glue-test' into 'main'
Colin Vidal [Thu, 30 Apr 2026 13:18:03 +0000 (14:18 +0100)]
Fix `cyclic_glue` system test
The `cyclic_glue` system test was not explicitly waiting for the dump to
complete. As a result, the test could read an outdated dump file and
perform assertions on stale database state. Fix this by waiting for the
`dumpdb` command to finish before reading `named_dump.db`.
fix: dev: Reject RSA DNSKEYs with degenerate modulus
A crafted DNSKEY rdata whose declared exponent length consumed the
whole buffer produced an RSA key with no modulus, which dnssec-importkey
accepted as valid and wrote to a .private file with no key material.
The wire-format parser now rejects RSA public keys with a modulus
smaller than 512 bits, the lowest legitimate size across the RSA
DNSSEC algorithms.
Closes #5920
Merge branch '5920-opensslrsa-fromdns-zero-modulus-accepted' into 'main'
Reject RSA DNSKEYs with degenerate modulus at parse time
The wire-format RSA DNSKEY parser used the residual rdata length after
the exponent as the modulus length, with no positive lower bound. A
crafted DNSKEY whose declared exponent length consumed the whole buffer
produced n = 0; the BN_bin2bn(_, 0, _) returned a non-NULL BIGNUM, the
NULL-check passed, and dnssec-importkey -f wrote out a "valid" key with
no key material. RSASHA1 also bypassed the algorithm-specific lower
bound in opensslrsa_createctx (which only checks an upper bound for the
SHA1 algorithms), so the degenerate key reached the verify path with
whatever behaviour the linked OpenSSL exhibits for n = 0.
Add OPENSSLRSA_MIN_MODULUS_BITS = 512 (the lowest legitimate modulus
across the RSA DNSSEC algorithms per RFC 5702) and reject smaller
moduli at parse time in opensslrsa_fromdns, opensslrsa_parse, and
opensslrsa_fromlabel — the same three load paths where the existing
exponent upper-bound check lives.
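The degenerate-modulus case falls out of the RFC 3110 key-data layout: a 1-byte exponent length (or a zero byte followed by a 2-byte length), the exponent, then the modulus as whatever remains. A minimal sketch of the length arithmetic, without the OpenSSL plumbing (`rsa_key_modulus_ok` and the 512-bit floor are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MIN_MODULUS_BITS 512 /* illustrative floor, as in the commit */

/*
 * Parse the RFC 3110 exponent-length header and check that the
 * residual modulus is not degenerate. Returns false when the declared
 * exponent length consumes the whole buffer (modulus of zero bytes)
 * or the modulus is below the floor.
 */
static bool
rsa_key_modulus_ok(const uint8_t *key, size_t len) {
	size_t explen, hdr;

	if (len < 1) {
		return false;
	}
	if (key[0] != 0) {
		explen = key[0];
		hdr = 1;
	} else {
		if (len < 3) {
			return false;
		}
		explen = ((size_t)key[1] << 8) | key[2];
		hdr = 3;
	}
	if (explen == 0 || explen > len - hdr) {
		return false;
	}
	/* whatever remains after the exponent is the modulus */
	return (len - hdr - explen) * 8 >= MIN_MODULUS_BITS;
}
```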
fix: usr: Fix dig -x crash on excessively long arguments
dig -x crashed with a segmentation fault rather than printing an
error when given an argument with thousands of dot-separated
components. dig -x now rejects such inputs cleanly with "Invalid IP
address".
Closes #5917
Merge branch '5917-dig-reverse-octets-stack-overflow' into 'main'
reverse_octets() recursed once per dot, with depth bounded only by
ARG_MAX (~2 MiB on Linux), so feeding dig -x a deep input like
'1.1.1.…1' busted the call stack and crashed the tool with SIGSEGV
instead of a structured error. The transformation it performs is
purely textual (split on '.', emit components in reverse), so the
recursion was never load-bearing.
Walk the input once into a fixed-size array of label slices, capped at
DNS_NAME_MAXLABELS (which is the most we could ever fit into the
result buffer anyway), then iterate the array in reverse to write the
output. Inputs with more than DNS_NAME_MAXLABELS labels now return
DNS_R_NAMETOOLONG, which dig.c surfaces as 'Invalid IP address' and
exit 1. Drop the unnecessary (int) casts on ptrdiff_t/size_t lengths
while at it.
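The iterative rewrite can be sketched as below. `reverse_labels()` and `MAX_LABELS` are illustrative stand-ins for the real reverse_octets() and DNS_NAME_MAXLABELS:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_LABELS 128 /* stand-in for DNS_NAME_MAXLABELS */

/*
 * Split the input on '.' into a fixed-size array of label slices in a
 * single pass, then emit the slices in reverse. Returns false (the
 * DNS_R_NAMETOOLONG case) when there are too many labels or the output
 * buffer is too small; no recursion, so stack depth is constant.
 */
static bool
reverse_labels(const char *in, char *out, size_t outlen) {
	struct {
		const char *p;
		size_t len;
	} labels[MAX_LABELS];
	size_t n = 0, used = 0;
	const char *start = in;

	for (const char *p = in;; p++) {
		if (*p == '.' || *p == '\0') {
			if (n == MAX_LABELS) {
				return false; /* too many labels */
			}
			labels[n].p = start;
			labels[n].len = (size_t)(p - start);
			n++;
			start = p + 1;
			if (*p == '\0') {
				break;
			}
		}
	}
	for (size_t i = n; i-- > 0;) {
		size_t need = labels[i].len + (used > 0 ? 1 : 0);
		if (used + need + 1 > outlen) {
			return false;
		}
		if (used > 0) {
			out[used++] = '.';
		}
		memcpy(out + used, labels[i].p, labels[i].len);
		used += labels[i].len;
	}
	out[used] = '\0';
	return true;
}
```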
Michał Kępień [Thu, 30 Apr 2026 10:17:32 +0000 (12:17 +0200)]
new: ci: Set up automatic rebasing for security-* branches
Introduce a set of private branches containing only security fixes that
are automatically rebased onto the corresponding open source branches
whenever new changes are merged. Each rebase triggers a basic build,
failing the CI job if the build breaks.
When a security-* branch is rebased, create a CI pipeline for its new
revision and rebase its corresponding bind-9.x-sub branch (if it exists)
on top of it, creating a rebase chain.
Report any failures in the process via Mattermost.
These changes enable treating security fixes similarly to other code
changes, without deferring merges all the way until release prep.
Merge branch 'michal/autorebase-chain' into 'main'
Michał Kępień [Thu, 30 Apr 2026 09:58:55 +0000 (11:58 +0200)]
Set up automatic rebasing for security-* branches
Introduce a set of private branches containing only security fixes that
are automatically rebased onto the corresponding open source branches
whenever new changes are merged. Each rebase triggers a basic build,
failing the CI job if the build breaks.
When a security-* branch is rebased, create a CI pipeline for its new
revision and rebase its corresponding bind-9.x-sub branch (if it exists)
on top of it, creating a rebase chain.
Report any failures in the process via Mattermost.
These changes enable treating security fixes similarly to other code
changes, without deferring merges all the way until release prep.
fix: usr: Stop delv from aborting on a malformed query name
delv aborts with SIGABRT instead of exiting cleanly when given a query
name that fails wire-format conversion (e.g. a label longer than 63
octets). After this change delv prints the parse error and exits with
a normal failure code.
Closes #5916
Merge branch '5916-delv-run-resolve-null-detach-abort' into 'main'
run_resolve allocates dns_client_t late, but the cleanup epilogue
called dns_client_detach() unconditionally. When convert_name() or
dns_client_create() failed first, the detach hit a NULL client and
the REQUIRE(DNS_CLIENT_VALID) inside it aborted the process with
SIGABRT instead of a clean error exit.
Guard the detach with a NULL check. Add a digdelv test that runs
delv on a query name whose first label exceeds 63 octets and
asserts the process does not exit 134.
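The shape of the fix, reduced to a sketch (the types and `client_detach()` here are illustrative, not libdns's API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct client {
	int refs;
} client_t;

/* REQUIRE-style precondition: aborts if handed a NULL client. */
static void
client_detach(client_t **cp) {
	assert(cp != NULL && *cp != NULL);
	free(*cp);
	*cp = NULL;
}

static int
run_resolve(int fail_early) {
	client_t *client = NULL;
	int result = 0;

	if (fail_early) {
		result = -1;
		goto cleanup; /* client was never allocated */
	}
	client = calloc(1, sizeof(*client));
cleanup:
	if (client != NULL) { /* the added NULL guard */
		client_detach(&client);
	}
	return result;
}
```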
fix: usr: prevent malicious DNSSEC zones from exhausting validator CPU
A DNSSEC-signed zone could publish a DNSKEY with an unusually large
RSA public exponent and force any validator resolving names in that
zone to spend disproportionate CPU verifying signatures. The
validator now rejects such DNSKEYs, matching the limit already
applied to keys read from files or HSMs.
Closes #5881
Merge branch '5881-rsa-exponent-keytrap-cpu-amplification' into 'main'
Reject RSA DNSKEYs with oversize public exponents at parse time
The wire-format RSA DNSKEY parser was the only key path with no upper
bound on the public exponent — opensslrsa_parse and opensslrsa_fromlabel
already cap at RSA_MAX_PUBEXP_BITS. An attacker-controlled DNSKEY could
therefore force a validator to compute s^e mod n with e up to ~|n| bits,
amplifying every verify by ~120x for typical 2048-bit moduli (OpenSSL
itself only caps the exponent for moduli above 3072 bits). Apply the
same bit-count cap to wire-format keys.
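The cap is a bit-count comparison on the parsed exponent. A minimal sketch without the OpenSSL BIGNUM machinery (`bits_of()` mimics what BN_num_bits() reports; the cap value here is illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_PUBEXP_BITS 35 /* illustrative cap, in the spirit of \
			    * RSA_MAX_PUBEXP_BITS */

/* Bit length of a big-endian byte string, ignoring leading zeros. */
static size_t
bits_of(const uint8_t *buf, size_t len) {
	size_t i = 0, bits;

	while (i < len && buf[i] == 0) {
		i++;
	}
	if (i == len) {
		return 0;
	}
	bits = (len - i - 1) * 8;
	for (uint8_t b = buf[i]; b != 0; b >>= 1) {
		bits++;
	}
	return bits;
}

/*
 * Reject an RSA public exponent whose bit count exceeds the cap, as
 * the wire-format parser now does for attacker-supplied DNSKEYs.
 */
static bool
exponent_ok(const uint8_t *exp, size_t explen) {
	return bits_of(exp, explen) <= MAX_PUBEXP_BITS;
}
```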
fix: usr: prevent rare named crash when notifies are cancelled
Under heavy load, named could occasionally crash when a queued
outbound notify or zone refresh was cancelled at the moment it
was being sent — for example, while a zone was being reloaded or
removed. The race that caused the crash is now prevented.
Closes #5915
Merge branch '5915-ratelimiter-dequeue-tick-uaf' into 'main'
isc__ratelimiter_tick() and isc_ratelimiter_shutdown() each pulled
events out of rl->pending into a function-local list, dropped the
mutex, and then iterated. ISC_LIST_APPEND leaves the link in the
LINKED state, so a concurrent isc_ratelimiter_dequeue() saw an
event as still queued, called ISC_LIST_UNLINK against rl->pending —
which patched the prev/next of the local list — and freed the
event before dispatch finished, producing either an INSIST in the
unlink macro or a use-after-free in the dispatch loop.
isc_async_run() is a non-blocking wfcq enqueue, so there is no
benefit to dropping the mutex around it. Unlink each event and
hand it to isc_async_run() while still holding rl->lock; the
existing ISC_LINK_LINKED check in dequeue then correctly
distinguishes "still queued and cancellable" from "already taken".
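The locking discipline of the fix can be sketched as follows. The list and `dispatch()` are illustrative stand-ins (dispatch corresponds to the non-blocking isc_async_run() enqueue, so there is no cost to calling it under the lock):

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

typedef struct event {
	struct event *prev, *next;
	int linked; /* mirrors ISC_LINK_LINKED */
} event_t;

typedef struct {
	pthread_mutex_t lock;
	event_t *head;
} ratelimiter_t;

static void
dispatch(event_t *ev) {
	(void)ev; /* non-blocking enqueue in the real code */
}

/*
 * Unlink each event and hand it off while still holding the lock, so a
 * concurrent dequeue that checks the LINKED state can never see an
 * event as "still queued" after it has been taken for dispatch.
 */
static void
tick(ratelimiter_t *rl) {
	pthread_mutex_lock(&rl->lock);
	while (rl->head != NULL) {
		event_t *ev = rl->head;
		rl->head = ev->next; /* unlink under the lock */
		if (rl->head != NULL) {
			rl->head->prev = NULL;
		}
		ev->prev = ev->next = NULL;
		ev->linked = 0; /* dequeue now sees "already taken" */
		dispatch(ev);   /* still holding rl->lock */
	}
	pthread_mutex_unlock(&rl->lock);
}
```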
fix: dev: free per-command rndc state when response serialisation fails
When isccc_cc_towire failed while building an rndc reply,
control_respond returned without releasing the per-command request,
response, HMAC secret copy, and text buffer. They were eventually
freed when the connection closed, but until then the HMAC key copy
stayed in named's memory. The failure path now goes through the
same cleanup label as every other error.
Closes #5913
Merge branch '5913-controlconf-control-respond-cleanup-leak' into 'main'
Run conn_cleanup on isccc_cc_towire failure in control_respond
The bare return left conn->secret, conn->response, conn->request, and
conn->text pinned until the connection itself was torn down — every
other error in the function reaches conn_cleanup via goto, and the
success path falls into the same label, so the towire-failure return
was the lone outlier. Send it through the existing cleanup path.
testgen existed only to let the rndc system test generate large
response payloads. It accepted an unbounded count and was reachable
from read-only control channels, so any read-only rndc client could
drive named into memory exhaustion. The command and its supporting
test helper are gone; remaining rndc commands already produce
non-trivial responses, so transport coverage is preserved.
Closes #5911
Merge branch '5911-rndc-testgen-32bit-truncation-memory-exhaustion' into 'main'
testgen existed solely to let the rndc system test exercise large
response payloads — it has no operator value, accepts an unbounded
count, and could be invoked by any read-only rndc client to drive
named into memory exhaustion. Drop the command, the gencheck helper
that validated its output, and the buffer-size loop in the rndc
system test; the remaining rndc subcommands already produce
non-trivial responses, so the framing path stays exercised.