Štěpán Balážik [Wed, 4 Feb 2026 17:17:17 +0000 (18:17 +0100)]
Make default_algorithm accessible through a fixture and method
Importing a pytest fixture trips up static analysis tools, so move
default_algorithm to conftest.py and use it instead of os.environ
accesses in various system tests.
For use outside of test functions, use Algorithm.default().
Štěpán Balážik [Fri, 30 Jan 2026 16:06:56 +0000 (17:06 +0100)]
Clone the bind9-qa repo to the project root in CI jobs
Cloning to a stable location allows clearer handling of paths when
calling scripts from CI jobs.
`unit:gcc:tarball` and `system:gcc:tarball` do `cd bind-*` in
`before_script`, which led to the `bind9-qa` directory ending up in
a different place in exactly those two jobs, and that made reasoning
about paths in `.system_test_common` and `.unit_test_common` tricky.
Ondřej Surý [Fri, 20 Feb 2026 13:07:13 +0000 (14:07 +0100)]
chg: usr: Optimize the TCP source port selection on Linux
Enable a socket option on the outgoing TCP sockets to allow faster selection of the source <address,port> tuple for different destination <address,port> tuples when source port utilization approaches 70-80%.
Merge branch 'improve-selection-of-outgoing-TCP-port' into 'main'
Implement IP_LOCAL_PORT_RANGE socket option for Linux
For Linux >= 6.8:
Since 2023, Linux has introduced a change to the IP_LOCAL_PORT_RANGE
socket option that eliminates the need for the random window
shifting (implemented as a fallback in the next commit).
By setting the IP_LOCAL_PORT_RANGE option, we tell the kernel to use a
better approach to source port selection.
For Linux < 6.8:
This implements source port selection by randomly shifting a range,
leveraging the IP_LOCAL_PORT_RANGE socket option. The network manager is
initialized with the ephemeral port range (on startup and on reconfig),
and then for every outgoing TCP connection we define a custom port range
(1000 ports) and randomly shift that range within the system range.
This helps the kernel by reducing the search space to the custom window
between <random_offset, random_offset + 1000>.
Since 2015, Linux has introduced a new socket option to overcome TCP
limitations: When an application needs to force a source IP on an active
TCP socket it has to use bind(IP, port=x). As most applications do not
want to deal with already-used ports, x is often set to 0, meaning the
kernel is in charge of finding an available port. But the kernel does
not yet know whether this socket is going to be a listener or be
connected. The IP_BIND_ADDRESS_NO_PORT socket option asks the kernel to
ignore the 0 port provided by the application in bind(IP, port=0) and
only remember the given IP address. The port will be automatically
chosen at connect() time, in a way that allows sharing a source port as
long as the 4-tuples are unique.
Enable IP_BIND_ADDRESS_NO_PORT on the outgoing TCP sockets to overcome
this TCP limitation.
Ondřej Surý [Thu, 19 Feb 2026 11:05:58 +0000 (12:05 +0100)]
Remove return value from isc_net_getudpportrange()
The function was already marked as never failing, always returning
ISC_R_SUCCESS, so there was a lot of dead code around checking whether
the result would be ISC_R_SUCCESS. This has been cleaned up.
Ondřej Surý [Fri, 20 Feb 2026 11:51:41 +0000 (12:51 +0100)]
fix: usr: Fix read UAF in BIND9 dns_client_resolve() via DNAME Response
An attacker controlling a malicious DNS server returns a DNAME record;
the client code stores a pointer to resp->foundname, frees the response
structure, and then uses the dangling pointer in dns_name_fullcompare(),
possibly causing an invalid match. Only `delv` is affected. This has
been fixed.
Closes #5728
Merge branch '5728-heap-uaf-in-bind9-dns_client_resolve-via-dname-response' into 'main'
Ondřej Surý [Fri, 20 Feb 2026 10:58:13 +0000 (11:58 +0100)]
Fix read UAF in BIND9 dns_client_resolve() via DNAME Response
An attacker controlling a malicious DNS server returns a DNAME record;
the client code stores a pointer to resp->foundname, frees the response
structure, and then uses the dangling pointer in dns_name_fullcompare(),
possibly causing an invalid match. Only `delv` is affected. This has
been fixed.
Mark Andrews [Wed, 1 Oct 2025 04:49:33 +0000 (14:49 +1000)]
Check notify with bad notify source address and tsig
named was asserting when the notify source address was not available
and TSIG was being used. Check this scenario by adding a nameserver
to the zone which is configured to use a non-existent source address
and a blackholed destination address, with a TSIG configured via a
server clause for that destination address.
Ondřej Surý [Thu, 19 Feb 2026 12:44:28 +0000 (13:44 +0100)]
Don't retry notify over TCP if it could not succeed
Prevent retrying the notify over TCP when the source address is not
available, when the source and destination address families mismatch,
or when the destination address has been blackholed. Properly log the
hard notify failures.
Ondřej Surý [Thu, 19 Feb 2026 12:44:23 +0000 (13:44 +0100)]
Fix assertion failure when sending notify fails over UDP
When dns_request_create() fails in notify_send_toaddr(), the TSIG key
was not cleared when retrying over TCP, causing an assertion failure.
Set the TSIG key to NULL in the dns_message to prevent the assertion
failure.
Ondřej Surý [Wed, 18 Feb 2026 14:08:08 +0000 (15:08 +0100)]
chg: nil: Remove dns_rdataslab_merge() and friends
After the split into dns_rdataslab and dns_rdatavec, the
dns_rdataslab_merge() function was unused, and it suffered from the same
data race as fixed in the previous commit. Instead of fixing it, just
remove the function, along with a bunch of other unused functions, from
the dns_rdataslab unit.
Merge branch 'ondrej/cleanup-dns_rdataslab' into 'main'
Ondřej Surý [Wed, 17 Dec 2025 09:01:06 +0000 (10:01 +0100)]
Use offsetof() instead of pointer arithmetic to get slabheader
In rdataset_getheader(), a cast of the raw buffer to dns_slabheader_t
and pointer arithmetic were used to get the start of the slabheader
structure. Use the more correct offsetof(dns_slabheader_t, raw) to
calculate the start of the dns_slabheader_t from the flexible member
raw[].
Ondřej Surý [Wed, 17 Dec 2025 08:53:38 +0000 (09:53 +0100)]
Move the count of items in the slabheader from raw data to struct
The count of items was stored as the first two bytes of the raw data.
Instead of reading it from the raw header, move the item count into the
structure itself.
This requires the flexible member raw[] to be aligned to the size of a
pointer, to prevent unaligned access to the start of the header from the
rdataset_getheader() function, which casts raw[] to dns_slabheader_t.
Ondřej Surý [Tue, 16 Dec 2025 10:43:34 +0000 (11:43 +0100)]
Cleanup the unused members of dns_slabheader_t
After the rdataslab -> rdataslab,rdatavec split, there were a couple of
unused struct members. Remove all the unused members and reorder the
remaining ones to eliminate the padding holes, thus reducing the
dns_slabheader_t and dns_slabtop_t structure sizes.
Ondřej Surý [Tue, 16 Dec 2025 10:37:15 +0000 (11:37 +0100)]
Remove dns_rdataslab_merge() and friends
After the split into dns_rdataslab and dns_rdatavec, the
dns_rdataslab_merge() function was unused, and it suffered from the same
data race as fixed in the previous commit. Instead of fixing it, just
remove the function, along with a bunch of other unused functions, from
the dns_rdataslab unit.
Michał Kępień [Fri, 13 Feb 2026 13:27:10 +0000 (14:27 +0100)]
Implement a response handler that forwards queries
Add a new response handler, ForwarderHandler, which enables forwarding
all queries to another DNS server. To simplify implementation, always
forward queries to the target server via UDP, even if they are
originally received using a different transport protocol.
Michał Kępień [Fri, 13 Feb 2026 13:27:10 +0000 (14:27 +0100)]
Log the server socket receiving each query
Extend AsyncDnsServer._log_query() and AsyncDnsServer._log_response() so
that they also log the <address, port> tuple for the socket on which a
given query was received. Trim the signatures of those methods by
taking advantage of all the information contained in the QueryContext
instances passed to them.
Michał Kępień [Fri, 13 Feb 2026 13:27:10 +0000 (14:27 +0100)]
Store server socket information in QueryContext
Extend the QueryContext class with a field holding the <address, port>
tuple for the socket on which a given query was received. This will
enable query handlers to act upon that information in arbitrary ways.
```
dnshost.example. 300 NS ns.dnshost.example.
ns.dnshost.example. 300 A 1.2.3.4
```
And then the child-side of `foo.example.`:
```
foo.example 3600 NS ns.dnshost.example.
a.foo.example 300 A 5.6.7.8
```
While there is a zone misconfiguration (the TTL of the delegation and glue doesn't match between the parent and the child), it is possible to resolve `a.foo.example` on a cold-cache resolver. However, after the `ns.dnshost.example.` glue expired, resolution would fail with a "fetch loop detected" error. This is now fixed.
Colin Vidal [Fri, 30 Jan 2026 16:09:18 +0000 (17:09 +0100)]
fetch loop detection improvements
The fetch loop detection occurred in two places: when
`dns_resolver_createfetch()` is invoked (walking up the parent fetch
chain and stopping the fetch if a parent fetch has the same qname and
qtype) and right after calling `dns_adb_findname()` in the resolver
(stopping the fetch if the current fetch has the same name as the ADB
lookup, and the ADB lookup needs to fetch it).
Regarding fetch loop detection at the `dns_resolver_createfetch()`
entry, there are cases where both qname and qtype are the same but the
zonecut is different. The fetch will then query different name servers
and get different responses. For instance, take the following
parent-side delegation (both for `foo.example.` and `dnshost.example.`):
dnshost.example. 300 NS ns.dnshost.example.
ns.dnshost.example. 300 A 1.2.3.4
Then the child-side of `foo.example.`:
foo.example 3600 NS ns.dnshost.example.
a.foo.example 300 A 5.6.7.8
Obviously, there is a misconfiguration between the parent side and the
child side of `dnshost.example` (the TTL mismatch), but this happens in
practice.
Because the resolver is currently child-centric, the parent-side
delegation's glue for `dnshost.example.` will be overridden by the
child side of the delegation. Once both A records expire, the resolver
will attempt to find the A RRs but will start from the `foo.example.`
zonecut, as the delegation itself is still valid.
Then the resolver will attempt to resolve `ns.dnshost.example.`, still
using the `foo.example.` zonecut, which will immediately trigger another
attempt to resolve `ns.foo.example.` (because the A RR is expired). This
is, however, _not_ a loop, because the second attempt will have the
`dnshost.example.` zonecut. And that changes everything, because the
resolver detects that the A name is in-domain and passes a flag to ADB
so that `dns_view_find()` won't use the cache. As a result, the zonecut
will be `.`, and the hints (root servers) will be queried instead.
From that point, they'll return the parent-side delegation, which
includes the glue for `ns.dnshost.example/A`, and the resolution can
continue. Previously, this wasn't possible because a loop would be
detected on the second attempt to look up `ns.foo.example/A`, resulting
in a SERVFAIL.
Now, the loop detection is relaxed: a loop is detected only if the
qname, qtype _and_ zonecut are all equal.
This commit also changes the way the loop detection after
`dns_adb_createfind()` works. In the same example above, there would
be two ADB fetches with the same name but with two different ADB flags
(the first one without DNS_ADB_STARTATZONE, the second one with that
flag). It means that there will be two fetches out of those two ADB
lookups, both legitimate, and not a loop (i.e. it won't be stuck). To
differentiate a find which has a pending fetch (which could come from
another find the current find has been attached to), a new find option,
`DNS_ADBFIND_STARTEDFETCH`, is introduced, which indicates that the
current find has started a fetch.
That way, if a find doesn't have the `DNS_ADBFIND_STARTEDFETCH` option
but has pending fetches, we know it is a find attached to a similar
find, so this is a loop. Otherwise, with `DNS_ADBFIND_STARTEDFETCH`, we
know that even if there is a pending fetch, this is not a loop, as the
fetch has just been started.
Colin Vidal [Mon, 2 Feb 2026 12:50:38 +0000 (13:50 +0100)]
extends named -T so ADB settings can be tweaked
ADB entry window and ADB min cache time can be tweaked using `named -T
adbentrywindow=<unsigned int>` and `named -T adbmincache=<unsigned
int>`.
While those values don't need to be exposed to the operator, being able
to tweak them is needed to system test ADB behaviors without having to
wait as long as the default values require.
Colin Vidal [Tue, 10 Feb 2026 08:25:09 +0000 (09:25 +0100)]
chg: dev: resolver: refactoring of the dns_fetchresponse_t handling
Instead of cloning fetch responses immediately after inserting them at the head of the `fetch_response` list, defer cloning until the events are actually sent.
This makes it possible to:
- Remove the `fctx->cloned` state;
- Simplify the code by eliminating explicit calls to `clone_result()`;
- Remove the logic that enforced having a fetch response with a `sigrdataset` at the head of the list;
- Remove (just a bit of) locking in some places.
The fetch result is stored directly in new `fctx` properties, but there is no memory increase, as those are grouped in an anonymous struct used in a union alongside another (bigger) anonymous struct wrapping properties used only by qmin fetches (and, in the case of a qmin fetch, those fetch result properties are not needed).
Merge branch 'colin/resolver-cloneresults' into 'main'
Evan Hunt [Sat, 17 Jan 2026 04:59:08 +0000 (20:59 -0800)]
use a union for resp and qmin data
It's potentially confusing to use "resp_rdataset" for QNAME
minimization, but we can make it a union and have resp.rdataset
and qmin.rdataset use the same memory.
We can save even more space by using the same union to combine
qminname and resp_foundname and access them as qmin.name and
resp.foundname.
Colin Vidal [Fri, 16 Jan 2026 11:14:58 +0000 (12:14 +0100)]
resolver: remove `qminrrset`, `qminsigrrset` from fctx
Two rdataset properties, `qminrrset` and `qminsigrrset`, are removed
from the fetch context. They are only used as temporary storage for the
result of the qmin query, and are immediately detached in `resume_qmin`
once the query is over.
As an alternative, use `resp_rdataset` and `resp_sigrdataset`
instead; those are not needed for storing the response data until
after resume_qmin() is over.
Colin Vidal [Thu, 15 Jan 2026 16:36:50 +0000 (17:36 +0100)]
resolver: copy fetch responses and send events in one go
Instead of first copying query response data into each fetch response
and then iterating again to send the response to the caller, perform
both operations in one go.
Colin Vidal [Thu, 15 Jan 2026 15:30:59 +0000 (16:30 +0100)]
resolver: simplify fetch response handling
There is no longer a need to decide whether a fetch response should be
prepended or appended to the fetch response list. As query response data
is stored directly in the fetch context object, responses containing a
sigrdataset no longer need to be ordered first. Remove the code
implementing this logic.
Additionally, the distinction between `fetchstate_done` and
`fetchstate_sendevents` is no longer needed. New `dns_fetchresponse_t`
clients can be attached to the fetch context at any time until
`fctx__done()` is called, since there is no dependency on the first
fetch response in the list. This simplifies the code and reduces
(just a bit) the locking usage.
Colin Vidal [Thu, 15 Jan 2026 13:47:46 +0000 (14:47 +0100)]
resolver: temporarily store query answer in fetch context
Query answers are now stored in dedicated fetch context properties,
instead of using `ISC_LIST_HEAD(fctx->resps)`.
This reduces lock critical section usage in some places, and enables
further simplifications. (In particular, it removes the need for special
logic to prepend a fetch response to the list when it contains a
sigrdataset.)
Colin Vidal [Thu, 15 Jan 2026 09:42:13 +0000 (10:42 +0100)]
resolver: Defer cloning of fetch responses until events are sent
Instead of cloning fetch responses immediately after writing to the
head of the fetch response list, defer cloning until the events are
actually sent.
This removes the need for the `fctx->cloned` state. However, a new
fetch state value, fetchstate_sentevents, is introduced and occurs
after fetchstate_done. To prevent new fetch responses from being
prepended after the head is written but before cloning occurs,
fetchstate_done is now set at all call sites that previously invoked
`clone_results()`.
Ondřej Surý [Mon, 9 Feb 2026 10:05:20 +0000 (11:05 +0100)]
fix: usr: Fix NULL Pointer Dereference in QP-trie Cache add()
When RRSIG(rdtype) was independently cached before the RDATA for the
rdtype itself, named would crash on the subsequent query for the RDATA
itself. This has been fixed.
ISC would like to thank Vitaly Simonovich for bringing this
vulnerability to our attention.
Closes #5738
Merge branch '5738-null-pointer-dereference-in-qpcache-add' into 'main'
Ondřej Surý [Sat, 7 Feb 2026 04:19:48 +0000 (05:19 +0100)]
Fix NULL Pointer Dereference in QP-trie Cache add()
When RRSIG(rdtype) was independently cached before the RDATA for the
rdtype itself, named would crash on the subsequent query for the RDATA
itself. This has been fixed.
ISC would like to thank Vitaly Simonovich for bringing this
vulnerability to our attention.
Ondřej Surý [Fri, 6 Feb 2026 17:34:00 +0000 (18:34 +0100)]
fix: nil: Release gnamebuf also on the error path
In dst_gssapi_acceptctx(), the gnamebuf could leak a little bit of
memory if dns_name_fromtext() were ever to fail; this would require a
Kerberos principal with an invalid DNS name.
Closes #5737
Merge branch '5737-memory-leak-in-dst_gssapi_acceptctx-on-dns_name_fromtext-failure' into 'main'
Ondřej Surý [Fri, 6 Feb 2026 16:50:55 +0000 (17:50 +0100)]
Release gnamebuf also on the error path
In dst_gssapi_acceptctx(), the gnamebuf could leak a little bit of
memory if dns_name_fromtext() were ever to fail; this would require a
Kerberos principal with an invalid DNS name.
Mark Andrews [Fri, 6 Feb 2026 01:52:55 +0000 (12:52 +1100)]
Record query time for all dnstap responses
The description in the protobuf specification is not a list of request
types to process, but rather a list of examples qualifying the
description of whether the time indicates when the message was received
or sent.
Nicki Křížek [Thu, 29 Jan 2026 10:42:37 +0000 (11:42 +0100)]
Allow re-run of kasp test case on all FreeBSDs
Previously, the issue where kasp.test_kasp_case[secondary.kasp] fails
due to a timeout had only been occasionally observed on FreeBSD 13
in our CI. It seems to have come back on FreeBSD 15.
A lingering `sizeof` from the prototype era of !11094 caused the
key-wipe in `isc_hmac_key_destroy` to use `sizeof(key->len)` instead of
`key->len` for the length argument of `isc_safe_memwipe`.
This results in zero bytes being written past the end of HMAC keys that
are shorter than 4 bytes. As such, the overflow can only be visible with
keys shorter than 32 bits, which is beyond broken, and creating such
keys is only possible in testing.
Therefore, this change is *not* a security fix, since the conditions are
never reachable in any imaginable deployment scenario.
Builds that use OpenSSL >=3.0 are unaffected as the `sizeof` was only
remaining in pre-3.0 builds.
Closes #5732
Merge branch '5732-invalid-params-to-isc_safe_memwipe' into 'main'
Aydın Mercan [Thu, 5 Feb 2026 12:01:52 +0000 (15:01 +0300)]
wipe hmac keys correctly pre-3.0 libcrypto
A lingering `sizeof` from the prototype era of !11094 caused the
key-wipe in `isc_hmac_key_destroy` to use `sizeof(key->len)` instead of
`key->len` for the length argument of `isc_safe_memwipe`.
This results in zero bytes being written past the end of HMAC keys that
are shorter than 4 bytes. As such, the overflow can only be visible with
keys shorter than 32 bits, which is beyond broken, and creating such
keys is only possible in testing.
Therefore, this change is *not* a security fix, since the conditions are
never reachable in any imaginable deployment scenario.
Builds that use OpenSSL >=3.0 are unaffected as the `sizeof` was only
remaining in pre-3.0 builds.