tcpbench
tcpbytesanswered
tcpclient
-tcpclientimeouts
+tcpclienttimeouts
tcpclientthreads
tcpcurrentconnections
tcpdiedreaddingresponse
redjack
reentrantly
refman
-refreh
refuseds
reid
reimplementation
--- /dev/null
+Building packages
+=================
+
+PowerDNS uses the pdns-builder tool to generate packages for its products. The actual workflow can be found in the [builder-support](https://github.com/PowerDNS/pdns/tree/master/builder-support) directory of the git repository.
+The [build-tags.yml](https://github.com/PowerDNS/pdns/blob/master/.github/workflows/build-tags.yml) workflow automatically builds packages when a tag is pushed, so there is no need to trigger a manual build for releases. Doing so manually would also be worse from a provenance point of view, where full automation is always better.
+
+Building packages on your own computer
+--------------------------------------
+
+This requires a working Docker installation.
+
+1. Clone our git repo (`git clone https://github.com/PowerDNS/pdns.git`)
+2. Check out the version you want; this can be a git tag like `dnsdist-1.8.1`, a git commit ID, or a branch
+3. Update submodules (`git submodule update --init --recursive`)
+4. Execute `builder/build.sh` to see what arguments it supports
+5. Then run `builder/build.sh` with the arguments you want (for example, `builder/build.sh -m recursor debian-bookworm`)
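
For convenience, the five steps above can be sketched as a single POSIX shell script. This is an illustrative sketch, not an official helper: the default ref, target and product below are examples only, and by default the script just prints the commands it would run (set `DRY_RUN=0` to execute them).

```shell
#!/bin/sh
# Sketch of the local build steps. Requires git and a working Docker
# installation when actually executing. With DRY_RUN unset or set to 1,
# commands are only printed; DRY_RUN=0 runs them for real.
set -e

run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

build_pdns() {
    ref="${1:-dnsdist-1.8.1}"       # git tag, branch, or commit ID
    target="${2:-debian-bookworm}"  # OS target passed to builder/build.sh
    module="${3:-dnsdist}"          # product to build (-m argument)
    run git clone https://github.com/PowerDNS/pdns.git
    run cd pdns
    run git checkout "$ref"
    run git submodule update --init --recursive
    run builder/build.sh -m "$module" "$target"
}

build_pdns "$@"
```

Run it as, for example, `DRY_RUN=0 sh build-local.sh dnsdist-1.8.1 debian-bookworm dnsdist`; `builder/build.sh` itself lists the products and targets it supports.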
+
+Building packages from GitHub actions
+-------------------------------------
+
+You can build packages from your own fork of the PowerDNS repository. Go to the [PowerDNS/pdns](https://github.com/PowerDNS/pdns) repository and click on `Fork` at the top right of the screen. When asked whether you would like to copy only the master branch, say no; otherwise you will not be able to build packages from tagged releases. If you have already forked only the master branch and have not made any modifications to your fork, the easiest fix is to delete the fork and recreate it.
+
+On your fork, go to the `Actions` tab. You will be greeted by a message stating `Workflows aren’t being run on this forked repository`. You can click `I understand my workflows, go ahead and enable them`.
+
+Please be aware that by default some of the workflows are executed once every day, and enabling them will consume billing time out of your GitHub Actions quota. At the moment, however, GitHub disables these in forks by default: `This scheduled workflow is disabled because scheduled workflows are disabled by default in forks`.
+
+On the left side, click on `Trigger specific package build`.
+
+Locate the `Run workflow` dropdown on the top right side of the screen, inside the blue region stating `This workflow has a workflow_dispatch event trigger.` Clicking it will open a menu with several options:
+- `Branch`: you can keep `master` here, unless you need to build for an operating system which is not in the list, in which case you will have to create a new branch and add the required file(s) for this OS. See `Adding a new OS` below.
+- `Product to build`: select the product you want to build packages for, for example `dnsdist`
+- `OSes to build for, space separated`: list one or more OSes you want to build packages for, for example `ubuntu-focal`
+- `git ref to checkout`: the exact version you want to build. It can be the name of a branch, a git tag or a git commit ID. Most likely you will want to build from a tagged release, like `dnsdist-1.8.1`.
+- `is this a release build?`: Keep `NO`
+
+Click `Run workflow` to start the build.
+
+If you reload the page, you should now see your build in progress as a `Trigger specific package build` workflow run. It will take some time to finish, but you can look at the progress by clicking on it.
+
+Once it's done, you can retrieve the generated package in the list of artifacts on the `Summary` page of the workflow run, by clicking on the `Summary` link on the top right of the screen.
+
+Adding a new OS to the list
+---------------------------
+
+Adding a new OS is usually easy, provided that it does not differ too much from an existing one. For example, to add support for Debian Bookworm (already present in the current repository), one had to:
+
+Copy the existing instructions for Debian Buster:
+```
+cp builder-support/dockerfiles/Dockerfile.target.debian-buster builder-support/dockerfiles/Dockerfile.target.debian-bookworm
+```
+
+In the new `builder-support/dockerfiles/Dockerfile.target.debian-bookworm` file, replace every occurrence of `debian-buster` with `debian-bookworm`, and of `debian:buster` with `debian:bookworm`.
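
Both substitutions can be done with `sed`; shown here as a filter on an illustrative sample line so the effect is visible (to edit the copied Dockerfile in place, apply the same two expressions with GNU `sed -i` to the file itself):

```shell
# The two replacements from the text, applied to a sample input line.
printf 'FROM debian:buster AS debian-buster\n' |
  sed -e 's/debian-buster/debian-bookworm/g' \
      -e 's/debian:buster/debian:bookworm/g'
# → FROM debian:bookworm AS debian-bookworm
```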
+
+Create symbolic links for the amd64 and arm64 versions:
+```
+ln -s builder-support/dockerfiles/Dockerfile.target.debian-bookworm builder-support/dockerfiles/Dockerfile.target.debian-bookworm-amd64
+ln -s builder-support/dockerfiles/Dockerfile.target.debian-bookworm builder-support/dockerfiles/Dockerfile.target.debian-bookworm-arm64
+```
+
+Then add the new target to the list of OSes in the `.github/workflows/builder-dispatch.yml` workflow file:
+```
+default: >-
+ el-7
+ el-8
+ el-9
+ debian-buster
+ debian-bullseye
+ debian-bookworm
+ ubuntu-focal
+ ubuntu-jammy
+```
+
+If release packages should be automatically built for this new target, then `.github/workflows/build-packages.yml` has to be updated as well:
+```
+default: >-
+ el-7
+ el-8
+ el-9
+ debian-buster
+ debian-bullseye
+ debian-bookworm
+ ubuntu-focal
+ ubuntu-jammy
+```
+
+Don't forget to update the list of hashes later in the same file:
+```
+pkghashes-el-7: ${{ steps.pkghashes.outputs.pkghashes-el-7 }}
+pkghashes-el-8: ${{ steps.pkghashes.outputs.pkghashes-el-8 }}
+pkghashes-el-9: ${{ steps.pkghashes.outputs.pkghashes-el-9 }}
+pkghashes-debian-buster: ${{ steps.pkghashes.outputs.pkghashes-debian-buster }}
+pkghashes-debian-bullseye: ${{ steps.pkghashes.outputs.pkghashes-debian-bullseye }}
+pkghashes-debian-bookworm: ${{ steps.pkghashes.outputs.pkghashes-debian-bookworm }}
+pkghashes-ubuntu-focal: ${{ steps.pkghashes.outputs.pkghashes-ubuntu-focal }}
+pkghashes-ubuntu-jammy: ${{ steps.pkghashes.outputs.pkghashes-ubuntu-jammy }}
+```
+++ /dev/null
-#!/bin/sh
-
-set -x
-context=''
-# poor mans option parsing
-if [ -n "$1" ]; then
- if [ "$1" != "odbc" ]; then
- echo "invalid argument"
- exit 1
- fi
- context=odbc
- if [ -n "$2" ]; then
- echo "too many arguments"
- exit 1
- fi
-fi
-
-export PDNS=/usr/sbin/pdns_server
-export PDNS2=$PDNS
-export SDIG=/usr/bin/sdig
-export NSEC3DIG=/usr/bin/nsec3dig
-export NOTIFY=/usr/bin/pdns_notify
-export SAXFR=/usr/bin/saxfr
-export ZONE2SQL=/usr/bin/zone2sql
-export ZONE2JSON=/usr/bin/zone2json
-export PDNSUTIL=/usr/bin/pdnsutil
-export PDNSCONTROL=/usr/bin/pdns_control
-
-export GEM_HOME=${PWD}/gems
-mkdir -p $GEM_HOME
-export PATH="${GEM_HOME}/bin:$PATH"
-
-if [ -z "$context" ]; then
- cd modules/remotebackend
- ruby -S bundle install
- cd ../../
-fi
-
-MODULES=""
-
-for dir in /usr/lib/x86_64-linux-gnu/pdns /usr/lib64/pdns; do
- if [ -d $dir ]; then
- MODULES=$dir
- break
- fi
-done
-[ -z $MODULES ] && echo "No module directory found" >&2 && exit 1
-
-# Symlink the modules on the system
-cd regression-tests/modules
-for backend in *.so; do
- ln -sf $MODULES/$backend $backend
-done
-
-cd ..
-
-EXITCODE=0
-
-if [ -z "$context" ]; then
- export geoipregion=oc geoipregionip=1.2.3.4
- ./timestamp ./start-test-stop 5300 bind-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 bind-dnssec-both || EXITCODE=1
-
- # No PKCS#11 in packages
- #SETUP_SOFTHSM=y ./timestamp ./start-test-stop 5300 bind-dnssec-pkcs11 || EXITCODE=1
- ./timestamp ./start-test-stop 5300 bind-dnssec-nsec3-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 bind-dnssec-nsec3-optout-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 bind-dnssec-nsec3-narrow || EXITCODE=1
- ./timestamp ./start-test-stop 5300 bind-hybrid-nsec3 || EXITCODE=1
-
- # Adding extra IPs to docker containers in not supported :(
- #./timestamp ./start-test-stop 5300 geoipbackend || EXITCODE=1
- #./timestamp ./start-test-stop 5300 geoipbackend-nsec3-narrow || EXITCODE=1
-
- ./timestamp ./start-test-stop 5300 gmysql-nodnssec-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gmysql-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gmysql-nsec3-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gmysql-nsec3-optout-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gmysql-nsec3-narrow || EXITCODE=1
-
- ./timestamp ./start-test-stop 5300 gpgsql-nodnssec-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gpgsql-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gpgsql-nsec3-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gpgsql-nsec3-optout-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gpgsql-nsec3-narrow || EXITCODE=1
-
- ./timestamp ./start-test-stop 5300 gsqlite3-nodnssec-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gsqlite3-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gsqlite3-nsec3-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gsqlite3-nsec3-optout-both || EXITCODE=1
- ./timestamp ./start-test-stop 5300 gsqlite3-nsec3-narrow || EXITCODE=1
-
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-pipe || EXITCODE=1
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-pipe-dnssec || EXITCODE=1
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-unix || EXITCODE=1
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-unix-dnssec || EXITCODE=1
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-http || EXITCODE=1
- ./timestamp timeout 120s ./start-test-stop 5300 remotebackend-http-dnssec || EXITCODE=1
-
- ./timestamp timeout 120s ./start-test-stop 5300 lua2
- ./timestamp timeout 120s ./start-test-stop 5300 lua2-dnssec
-
- # No 0MQ in the PowerDNS packages
- #./timestamp timeout 120s ./start-test-stop 5300 remotebackend-zeromq || EXITCODE=1
- #./timestamp timeout 120s ./start-test-stop 5300 remotebackend-zeromq-dnssec || EXITCODE=1
-
- ./timestamp ./start-test-stop 5300 tinydns || EXITCODE=1
-
- cd ../regression-tests.nobackend/
-
- ./runtests || EXITCODE=1
-elif [ "$context" = "odbc" ]; then
- cat > ~/.odbc.ini << __EOF__
-[pdns-sqlite3-1]
-Driver = SQLite3
-Database = $(pwd)/pdns.sqlite3
-
-[pdns-sqlite3-2]
-Driver = SQLite3
-Database = $(pwd)/pdns.sqlite32
-
-[pdns-mssql]
-Driver=FreeTDS
-Trace=No
-Server=pdns-odbc-regress-sql-1.database.windows.net
-Port=1433
-Database=pdns
-TDS_Version=7.1
-ClientCharset=UTF-8
-__EOF__
-
- set +x
- . ~/.mssql-credentials
- set -x
- export GODBC_SQLITE3_DSN=pdns-sqlite3-1
- ./timestamp timeout 120s ./start-test-stop 5300 godbc_sqlite3-nodnssec || EXITCODE=1
- export GODBC_MSSQL_DSN=pdns-mssql
- export GODBC_MSSQL_USERNAME
- export GODBC_MSSQL_PASSWORD
- ./timestamp timeout 3600s ./start-test-stop 5300 godbc_mssql-nodnssec || EXITCODE=1
- ./timestamp timeout 3600s ./start-test-stop 5300 godbc_mssql || EXITCODE=1
- ./timestamp timeout 3600s ./start-test-stop 5300 godbc_mssql-nsec3 || EXITCODE=1
- ./timestamp timeout 3600s ./start-test-stop 5300 godbc_mssql-nsec3-optout || EXITCODE=1
- ./timestamp timeout 3600s ./start-test-stop 5300 godbc_mssql-nsec3-narrow || EXITCODE=1
-fi
-
-exit $EXITCODE
``lmdb-filename``
^^^^^^^^^^^^^^^^^
-Path to the LMDB file (e.g. */var/spool/powerdns/pdns.lmdb*)
+Path to the LMDB file (e.g. */var/lib/powerdns/pdns.lmdb*)
.. warning::
On systemd systems,
Changelogs for 4.8.x
====================
+.. changelog::
+ :version: 4.8.3
+ :released: 5th of October 2023
+
+ This is release 4.8.3 of the Authoritative Server.
+
+ Please review the :doc:`Upgrade Notes <../upgrading>` before upgrading from versions < 4.8.x.
+
+ This release contains one new feature (``default-catalog-zone``), one bugfix (in ixfrdist), and a workaround for a bug in the MySQL client libraries.
+
+ .. change::
+ :tags: Improvements
+ :pullreq: 13271
+
+ smysql: stop explicitly setting MYSQL_OPT_RECONNECT to 0
+
+ .. change::
+ :tags: New Features
+ :pullreq: 13240
+
+ add default-catalog-zone setting
+
+ .. change::
+ :tags: Bug Fixes
+ :pullreq: 13316
+
+ ixfrdist: set AA=1 on SOA responses
+
.. changelog::
:version: 4.8.2
:released: 7th of September 2023
--full-histogram <msec> Write out histogram with specified bin-size to 'full-histogram'
--log-histogram Write out a log-histogram of response times to 'log-histogram'
--no-servfail-stats Remove servfail responses from latency statistics
+--port                  The source and destination port to consider. The default is to look at packets from and to ports 53 and 5300.
--servfail-tree Figure out subtrees that generate servfails.
--stats-dir <directory> Drop statistics files in this directory. Defaults to ./
-l, --load-stats Emit per-second load statistics (questions, answers, outstanding).
-# To generate requirements.txt, run:
+# To generate requirements.txt, install pip-tools and run:
# pip-compile --generate-hashes requirements.in
Sphinx>=1.5.0,!=1.8.0,<2.0
--hash=sha256:6c2dfe6ca282d75f66df333869bb0ce7331c01b475db6809ff9d107b7cdfe04b
# via sphinxcontrib-openapi
sphinxcontrib-openapi @ https://github.com/PowerDNS/sphinxcontrib-openapi/archive/refs/heads/use-jsondomain-pdns-py3.10-noscm.zip \
- --hash=sha256:7314b6a453d8c397d45a284255adbb55b7ba464f5f2ace32da4d08941ed76b2d
+ --hash=sha256:ad6659a5e86e4899386d249f1eae18a459175a92da0dd851ce20f8b8ff6569b0
# via -r requirements.in
sphinxcontrib-serializinghtml==1.1.5 \
--hash=sha256:352a9a00ae864471d3a7ead8d7d79f5fc0b57e8b3f95e9867eb9eb28999b92fd \
-@ 86400 IN SOA pdns-public-ns1.powerdns.com. peter\.van\.dijk.powerdns.com. 2023091802 10800 3600 604800 10800
+@ 86400 IN SOA pdns-public-ns1.powerdns.com. peter\.van\.dijk.powerdns.com. 2023100501 10800 3600 604800 10800
@ 3600 IN NS pdns-public-ns1.powerdns.com.
@ 3600 IN NS pdns-public-ns2.powerdns.com.
auth-4.8.0.security-status 60 IN TXT "1 OK"
auth-4.8.1.security-status 60 IN TXT "1 OK"
auth-4.8.2.security-status 60 IN TXT "1 OK"
+auth-4.8.3.security-status 60 IN TXT "1 OK"
; Auth Debian
auth-3.4.1-2.debian.security-status 60 IN TXT "3 Upgrade now, see https://doc.powerdns.com/3/security/powerdns-advisory-2015-01/ and https://doc.powerdns.com/3/security/powerdns-advisory-2015-02/ and https://doc.powerdns.com/3/security/powerdns-advisory-2016-02/ and https://doc.powerdns.com/3/security/powerdns-advisory-2016-03/ and https://doc.powerdns.com/3/security/powerdns-advisory-2016-04/ and https://doc.powerdns.com/3/security/powerdns-advisory-2016-05/"
~~~~~~~~~~~~~~~
Various settings have been renamed.
-Their old names still work in 4.5.x, but will be removed in the release after it.
+Their old names still work in 4.5.x, but will be removed in a release after it.
* :ref:`setting-allow-unsigned-supermaster` is now :ref:`setting-allow-unsigned-autoprimary`
* :ref:`setting-master` is now :ref:`setting-primary`
}
}
-HTTPConnector::~HTTPConnector() {}
+HTTPConnector::~HTTPConnector() = default;
void HTTPConnector::addUrlComponent(const Json& parameters, const string& element, std::stringstream& ss)
{
std::string sparam;
- if (parameters[element] != Json())
+ if (parameters[element] != Json()) {
ss << "/" << YaHTTP::Utility::encodeURL(asString(parameters[element]), false);
+ }
}
-std::string HTTPConnector::buildMemberListArgs(std::string prefix, const Json& args)
+std::string HTTPConnector::buildMemberListArgs(const std::string& prefix, const Json& args)
{
std::stringstream stream;
stream << prefix << "[" << YaHTTP::Utility::encodeURL(pair.first, false) << "]=";
}
else {
- stream << prefix << "[" << YaHTTP::Utility::encodeURL(pair.first, false) << "]=" << YaHTTP::Utility::encodeURL(this->asString(pair.second), false);
+ stream << prefix << "[" << YaHTTP::Utility::encodeURL(pair.first, false) << "]=" << YaHTTP::Utility::encodeURL(HTTPConnector::asString(pair.second), false);
}
stream << "&";
}
else if (method == "createSlaveDomain") {
addUrlComponent(parameters, "ip", ss);
addUrlComponent(parameters, "domain", ss);
- if (parameters["account"].is_null() == false && parameters["account"].is_string()) {
+ if (!parameters["account"].is_null() && parameters["account"].is_string()) {
req.POST()["account"] = parameters["account"].string_value();
}
req.preparePost();
req.body = out;
}
else {
- std::stringstream url, content;
+ std::stringstream url;
+ std::stringstream content;
// call url/method.suffix
url << d_url << "/" << input["method"].string_value() << d_url_suffix;
req.setup("POST", url.str());
int HTTPConnector::send_message(const Json& input)
{
- int rv, ec, fd;
+ int rv = 0;
+ int ec = 0;
+ int fd = 0;
std::vector<std::string> members;
std::string method;
// perform request
YaHTTP::Request req;
- if (d_post)
+ if (d_post) {
post_requestbuilder(input, req);
- else
+ }
+ else {
restful_requestbuilder(input["method"].string_value(), input["parameters"], req);
+ }
rv = -1;
req.headers["connection"] = "Keep-Alive"; // see if we can streamline requests (not needed, strictly speaking)
}
}
- if (rv == 1)
+ if (rv == 1) {
return rv;
+ }
this->d_socket.reset();
// connect using tcp
- struct addrinfo *gAddr, *gAddrPtr, hints;
+ struct addrinfo* gAddr = nullptr;
+ struct addrinfo* gAddrPtr = nullptr;
+ struct addrinfo hints
+ {
+ };
std::string sPort = std::to_string(d_port);
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
// try to connect to each address.
gAddrPtr = gAddr;
- while (gAddrPtr) {
+ while (gAddrPtr != nullptr) {
try {
d_socket = std::make_unique<Socket>(gAddrPtr->ai_family, gAddrPtr->ai_socktype, gAddrPtr->ai_protocol);
d_addr.setSockaddr(gAddrPtr->ai_addr, gAddrPtr->ai_addrlen);
g_log << Logger::Error << "While writing to HTTP endpoint " << d_addr.toStringWithPort() << ": exception caught" << std::endl;
}
- if (rv > -1)
+ if (rv > -1) {
break;
+ }
d_socket.reset();
gAddrPtr = gAddrPtr->ai_next;
}
YaHTTP::AsyncResponseLoader arl;
YaHTTP::Response resp;
- if (d_socket == nullptr)
+ if (d_socket == nullptr) {
return -1; // cannot receive :(
+ }
char buffer[4096];
int rd = -1;
- time_t t0;
+ time_t t0 = 0;
arl.initialize(&resp);
try {
- t0 = time((time_t*)NULL);
- while (arl.ready() == false && (labs(time((time_t*)NULL) - t0) <= timeout)) {
+ t0 = time((time_t*)nullptr);
+ while (!arl.ready() && (labs(time((time_t*)nullptr) - t0) <= timeout)) {
rd = d_socket->readWithTimeout(buffer, sizeof(buffer), timeout);
- if (rd == 0)
+ if (rd == 0) {
throw NetworkError("EOF while reading");
- if (rd < 0)
+ }
+ if (rd < 0) {
throw NetworkError(std::string(strerror(rd)));
+ }
arl.feed(std::string(buffer, rd));
}
// timeout occurred.
- if (arl.ready() == false)
+ if (!arl.ready()) {
throw NetworkError("timeout");
+ }
}
catch (NetworkError& ne) {
d_socket.reset();
int rv = -1;
std::string err;
output = Json::parse(resp.body, err);
- if (output != nullptr)
+ if (output != nullptr) {
return resp.body.size();
+ }
g_log << Logger::Error << "Cannot parse JSON reply: " << err << endl;
return rv;
PipeConnector::~PipeConnector()
{
- int status;
+ int status = 0;
// just in case...
- if (d_pid == -1)
+ if (d_pid == -1) {
return;
+ }
- if (!waitpid(d_pid, &status, WNOHANG)) {
+ if (waitpid(d_pid, &status, WNOHANG) == 0) {
kill(d_pid, 9);
waitpid(d_pid, &status, 0);
}
- if (d_fd1[1]) {
+ if (d_fd1[1] != 0) {
close(d_fd1[1]);
}
}
void PipeConnector::launch()
{
// no relaunch
- if (d_pid > 0 && checkStatus())
+ if (d_pid > 0 && checkStatus()) {
return;
+ }
std::vector<std::string> v;
split(v, command, boost::is_any_of(" "));
std::vector<const char*> argv(v.size() + 1);
- argv[v.size()] = 0;
+ argv[v.size()] = nullptr;
- for (size_t n = 0; n < v.size(); n++)
+ for (size_t n = 0; n < v.size(); n++) {
argv[n] = v[n].c_str();
+ }
signal(SIGPIPE, SIG_IGN);
- if (access(argv[0], X_OK)) // check before fork so we can throw
+ if (access(argv[0], X_OK) != 0) { // check before fork so we can throw
throw PDNSException("Command '" + string(argv[0]) + "' cannot be executed: " + stringerror());
+ }
- if (pipe(d_fd1) < 0 || pipe(d_fd2) < 0)
+ if (pipe(d_fd1) < 0 || pipe(d_fd2) < 0) {
throw PDNSException("Unable to open pipe for coprocess: " + string(strerror(errno)));
+ }
- if ((d_pid = fork()) < 0)
+ if ((d_pid = fork()) < 0) {
throw PDNSException("Unable to fork for coprocess: " + stringerror());
- else if (d_pid > 0) { // parent speaking
+ }
+ if (d_pid > 0) { // parent speaking
close(d_fd1[0]);
setCloseOnExec(d_fd1[1]);
close(d_fd2[1]);
setCloseOnExec(d_fd2[0]);
- if (!(d_fp = std::unique_ptr<FILE, int (*)(FILE*)>(fdopen(d_fd2[0], "r"), fclose)))
+ if (!(d_fp = std::unique_ptr<FILE, int (*)(FILE*)>(fdopen(d_fd2[0], "r"), fclose))) {
throw PDNSException("Unable to associate a file pointer with pipe: " + stringerror());
- if (d_timeout)
- setbuf(d_fp.get(), 0); // no buffering please, confuses poll
+ }
+ if (d_timeout != 0) {
+ setbuf(d_fp.get(), nullptr); // no buffering please, confuses poll
+ }
}
- else if (!d_pid) { // child
+ else if (d_pid == 0) { // child
signal(SIGCHLD, SIG_DFL); // silence a warning from perl
close(d_fd1[1]);
close(d_fd2[0]);
// stdin & stdout are now connected, fire up our coprocess!
- if (execv(argv[0], const_cast<char* const*>(argv.data())) < 0) // now what
+ if (execv(argv[0], const_cast<char* const*>(argv.data())) < 0) { // now what
exit(123);
+ }
/* not a lot we can do here. We shouldn't return because that will leave a forked process around.
no way to log this either - only thing we can do is make sure that our parent catches this soonest! */
this->send(msg);
msg = nullptr;
- if (this->recv(msg) == false) {
+ if (!this->recv(msg)) {
g_log << Logger::Error << "Failed to initialize coprocess" << std::endl;
}
}
line.append(1, '\n');
unsigned int sent = 0;
- int bytes;
+ int bytes = 0;
// writen routine - socket may not accept al data in one go
while (sent < line.size()) {
bytes = write(d_fd1[1], line.c_str() + sent, line.length() - sent);
- if (bytes < 0)
+ if (bytes < 0) {
throw PDNSException("Writing to coprocess failed: " + std::string(strerror(errno)));
+ }
sent += bytes;
}
while (1) {
receive.clear();
- if (d_timeout) {
+ if (d_timeout != 0) {
int ret = waitForData(fileno(d_fp.get()), 0, d_timeout * 1000);
- if (ret < 0)
+ if (ret < 0) {
throw PDNSException("Error waiting on data from coprocess: " + stringerror());
- if (!ret)
+ }
+ if (ret == 0) {
throw PDNSException("Timeout waiting for data from coprocess");
+ }
}
- if (!stringfgets(d_fp.get(), receive))
+ if (!stringfgets(d_fp.get(), receive)) {
throw PDNSException("Child closed pipe");
+ }
s_output.append(receive);
// see if it can be parsed
output = Json::parse(s_output, err);
- if (output != nullptr)
+ if (output != nullptr) {
return s_output.size();
+ }
}
return 0;
}
-bool PipeConnector::checkStatus()
+bool PipeConnector::checkStatus() const
{
- int status;
+ int status = 0;
int ret = waitpid(d_pid, &status, WNOHANG);
- if (ret < 0)
+ if (ret < 0) {
throw PDNSException("Unable to ascertain status of coprocess " + std::to_string(d_pid) + " from " + std::to_string(getpid()) + ": " + string(strerror(errno)));
- else if (ret) {
+ }
+ if (ret != 0) {
if (WIFEXITED(status)) {
int exitStatus = WEXITSTATUS(status);
throw PDNSException("Coprocess exited with code " + std::to_string(exitStatus));
int sig = WTERMSIG(status);
string reason = "CoProcess died on receiving signal " + std::to_string(sig);
#ifdef WCOREDUMP
- if (WCOREDUMP(status))
+ if (WCOREDUMP(status)) {
reason += ". Dumped core";
+ }
#endif
throw PDNSException(reason);
if (value["result"] == Json()) {
throw PDNSException("No 'result' field in response from remote process");
}
- else if (value["result"].is_bool() && boolFromJson(value, "result", false) == false) {
+ if (value["result"].is_bool() && !boolFromJson(value, "result", false)) {
retval = false;
}
for (const auto& message : value["log"].array_items()) {
this->d_connstr = getArg("connection-string");
this->d_dnssec = mustDo("dnssec");
- this->d_index = -1;
- this->d_trxid = 0;
build();
}
-RemoteBackend::~RemoteBackend() {}
+RemoteBackend::~RemoteBackend() = default;
bool RemoteBackend::send(Json& value)
{
std::map<std::string, std::string> options;
// connstr is of format "type:options"
- size_t pos;
- pos = d_connstr.find_first_of(":");
- if (pos == std::string::npos)
+ size_t pos = 0;
+ pos = d_connstr.find_first_of(':');
+ if (pos == std::string::npos) {
throw PDNSException("Invalid connection string: malformed");
+ }
type = d_connstr.substr(0, pos);
opts = d_connstr.substr(pos + 1);
// find out some options and parse them while we're at it
for (const auto& opt : parts) {
- std::string key, val;
+ std::string key;
+ std::string val;
// make sure there is something else than air in the option...
- if (opt.find_first_not_of(" ") == std::string::npos)
+ if (opt.find_first_not_of(" ") == std::string::npos) {
continue;
+ }
// split it on '='. if not found, we treat it as "yes"
pos = opt.find_first_of("=");
*/
void RemoteBackend::lookup(const QType& qtype, const DNSName& qdomain, int zoneId, DNSPacket* pkt_p)
{
- if (d_index != -1)
+ if (d_index != -1) {
throw PDNSException("Attempt to lookup while one running");
+ }
string localIP = "0.0.0.0";
string remoteIP = "0.0.0.0";
string realRemote = "0.0.0.0/0";
- if (pkt_p) {
+ if (pkt_p != nullptr) {
localIP = pkt_p->getLocal().toString();
realRemote = pkt_p->getRealRemote().toString();
remoteIP = pkt_p->getInnerRemote().toString();
{"method", "lookup"},
{"parameters", Json::object{{"qtype", qtype.toString()}, {"qname", qdomain.toString()}, {"remote", remoteIP}, {"local", localIP}, {"real-remote", realRemote}, {"zone-id", zoneId}}}};
- if (this->send(query) == false || this->recv(d_result) == false) {
+ if (!this->send(query) || !this->recv(d_result)) {
return;
}
// OK. we have result parameters in result. do not process empty result.
- if (d_result["result"].is_array() == false || d_result["result"].array_items().size() < 1)
+ if (!d_result["result"].is_array() || d_result["result"].array_items().empty()) {
return;
+ }
d_index = 0;
}
bool RemoteBackend::list(const DNSName& target, int domain_id, bool include_disabled)
{
- if (d_index != -1)
+ if (d_index != -1) {
throw PDNSException("Attempt to lookup while one running");
+ }
Json query = Json::object{
{"method", "list"},
{"parameters", Json::object{{"zonename", target.toString()}, {"domain_id", domain_id}, {"include_disabled", include_disabled}}}};
- if (this->send(query) == false || this->recv(d_result) == false)
+ if (!this->send(query) || !this->recv(d_result)) {
return false;
- if (d_result["result"].is_array() == false || d_result["result"].array_items().size() < 1)
+ }
+ if (!d_result["result"].is_array() || d_result["result"].array_items().empty()) {
return false;
+ }
d_index = 0;
return true;
bool RemoteBackend::get(DNSResourceRecord& rr)
{
- if (d_index == -1)
+ if (d_index == -1) {
return false;
+ }
rr.qtype = stringFromJson(d_result["result"][d_index], "qtype");
rr.qname = DNSName(stringFromJson(d_result["result"][d_index], "qname"));
rr.content = stringFromJson(d_result["result"][d_index], "content");
rr.ttl = d_result["result"][d_index]["ttl"].int_value();
rr.domain_id = intFromJson(d_result["result"][d_index], "domain_id", -1);
- if (d_dnssec)
- rr.auth = intFromJson(d_result["result"][d_index], "auth", 1);
- else
- rr.auth = 1;
+ if (d_dnssec) {
+ rr.auth = (intFromJson(d_result["result"][d_index], "auth", 1) != 0);
+ }
+ else {
+ rr.auth = true;
+ }
rr.scopeMask = d_result["result"][d_index]["scopeMask"].int_value();
d_index++;
bool RemoteBackend::getBeforeAndAfterNamesAbsolute(uint32_t id, const DNSName& qname, DNSName& unhashed, DNSName& before, DNSName& after)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "getBeforeAndAfterNamesAbsolute"},
{"parameters", Json::object{{"id", Json(static_cast<double>(id))}, {"qname", qname.toString()}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
unhashed = DNSName(stringFromJson(answer["result"], "unhashed"));
before.clear();
after.clear();
- if (answer["result"]["before"] != Json())
+ if (answer["result"]["before"] != Json()) {
before = DNSName(stringFromJson(answer["result"], "before"));
- if (answer["result"]["after"] != Json())
+ }
+ if (answer["result"]["after"] != Json()) {
after = DNSName(stringFromJson(answer["result"], "after"));
+ }
return true;
}
{"method", "getAllDomainMetadata"},
{"parameters", Json::object{{"name", name.toString()}}}};
- if (this->send(query) == false)
+ if (!this->send(query)) {
return false;
+ }
meta.clear();
Json answer;
// not mandatory to implement
- if (this->recv(answer) == false)
+ if (!this->recv(answer)) {
return true;
+ }
for (const auto& pair : answer["result"].object_items()) {
if (pair.second.is_array()) {
- for (const auto& val : pair.second.array_items())
+ for (const auto& val : pair.second.array_items()) {
meta[pair.first].push_back(asString(val));
+ }
}
else {
meta[pair.first].push_back(asString(pair.second));
{"method", "getDomainMetadata"},
{"parameters", Json::object{{"name", name.toString()}, {"kind", kind}}}};
- if (this->send(query) == false)
+ if (!this->send(query)) {
return false;
+ }
meta.clear();
Json answer;
// not mandatory to implement
- if (this->recv(answer) == false)
+ if (!this->recv(answer)) {
return true;
+ }
if (answer["result"].is_array()) {
- for (const auto& row : answer["result"].array_items())
+ for (const auto& row : answer["result"].array_items()) {
meta.push_back(row.string_value());
+ }
}
else if (answer["result"].is_string()) {
meta.push_back(answer["result"].string_value());
{"parameters", Json::object{{"name", name.toString()}, {"kind", kind}, {"value", meta}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
return boolFromJson(answer, "result", false);
}
bool RemoteBackend::getDomainKeys(const DNSName& name, std::vector<DNSBackend::KeyData>& keys)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "getDomainKeys"},
{"parameters", Json::object{{"name", name.toString()}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
keys.clear();
bool RemoteBackend::removeDomainKey(const DNSName& name, unsigned int id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "removeDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"id", static_cast<int>(id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::addDomainKey(const DNSName& name, const KeyData& key, int64_t& id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "addDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"key", Json::object{{"flags", static_cast<int>(key.flags)}, {"active", key.active}, {"published", key.published}, {"content", key.content}}}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
id = answer["result"].int_value();
return id >= 0;
bool RemoteBackend::activateDomainKey(const DNSName& name, unsigned int id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "activateDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"id", static_cast<int>(id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::deactivateDomainKey(const DNSName& name, unsigned int id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "deactivateDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"id", static_cast<int>(id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::publishDomainKey(const DNSName& name, unsigned int id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "publishDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"id", static_cast<int>(id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::unpublishDomainKey(const DNSName& name, unsigned int id)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "unpublishDomainKey"},
{"parameters", Json::object{{"name", name.toString()}, {"id", static_cast<int>(id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::doesDNSSEC()
bool RemoteBackend::getTSIGKey(const DNSName& name, DNSName& algorithm, std::string& content)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "getTSIGKey"},
{"parameters", Json::object{{"name", name.toString()}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
algorithm = DNSName(stringFromJson(answer["result"], "algorithm"));
content = stringFromJson(answer["result"], "content");
bool RemoteBackend::setTSIGKey(const DNSName& name, const DNSName& algorithm, const std::string& content)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "setTSIGKey"},
{"parameters", Json::object{{"name", name.toString()}, {"algorithm", algorithm.toString()}, {"content", content}}}};
Json answer;
- if (connector->send(query) == false || connector->recv(answer) == false)
- return false;
-
- return true;
+ return connector->send(query) && connector->recv(answer);
}
bool RemoteBackend::deleteTSIGKey(const DNSName& name)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "deleteTSIGKey"},
{"parameters", Json::object{{"name", name.toString()}}}};
Json answer;
- if (connector->send(query) == false || connector->recv(answer) == false)
- return false;
-
- return true;
+ return connector->send(query) && connector->recv(answer);
}
bool RemoteBackend::getTSIGKeys(std::vector<struct TSIGKey>& keys)
{
// no point doing dnssec if it's not supported
- if (d_dnssec == false)
+ if (!d_dnssec) {
return false;
+ }
Json query = Json::object{
{"method", "getTSIGKeys"},
{"parameters", Json::object{}}};
Json answer;
- if (connector->send(query) == false || connector->recv(answer) == false)
+ if (!connector->send(query) || !connector->recv(answer)) {
return false;
+ }
for (const auto& jsonKey : answer["result"].array_items()) {
struct TSIGKey key;
{
di.id = intFromJson(obj, "id", -1);
di.zone = DNSName(stringFromJson(obj, "zone"));
- for (const auto& master : obj["masters"].array_items())
- di.masters.push_back(ComboAddress(master.string_value(), 53));
+ for (const auto& master : obj["masters"].array_items()) {
+ di.masters.emplace_back(master.string_value(), 53);
+ }
di.notified_serial = static_cast<unsigned int>(doubleFromJson(obj, "notified_serial", 0));
di.serial = static_cast<unsigned int>(obj["serial"].number_value());
di.last_check = static_cast<time_t>(obj["last_check"].number_value());
- string kind = "";
+ string kind;
if (obj["kind"].is_string()) {
kind = stringFromJson(obj, "kind");
}
bool RemoteBackend::getDomainInfo(const DNSName& domain, DomainInfo& di, bool /* getSerial */)
{
- if (domain.empty())
+ if (domain.empty()) {
return false;
+ }
+
Json query = Json::object{
{"method", "getDomainInfo"},
{"parameters", Json::object{{"name", domain.toString()}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
this->parseDomainInfo(answer["result"], di);
return true;
{"parameters", Json::object{{"id", static_cast<double>(id)}, {"serial", static_cast<double>(serial)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false) {
+ if (!this->send(query) || !this->recv(answer)) {
g_log << Logger::Error << kBackendId << " Failed to execute RPC for RemoteBackend::setNotified(" << id << "," << serial << ")" << endl;
}
}
{"method", "superMasterBackend"},
{"parameters", Json::object{{"ip", ip}, {"domain", domain.toString()}, {"nsset", rrset}}}};
- *ddb = 0;
+ *ddb = nullptr;
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
// we are the backend
*ddb = this;
}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::replaceRRSet(uint32_t domain_id, const DNSName& qname, const QType& qtype, const vector<DNSResourceRecord>& rrset)
{"parameters", Json::object{{"domain_id", static_cast<double>(domain_id)}, {"qname", qname.toString()}, {"qtype", qtype.toString()}, {"trxid", static_cast<double>(d_trxid)}, {"rrset", json_rrset}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
-
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::feedRecord(const DNSResourceRecord& rr, const DNSName& ordername, bool /* ordernameIsNSEC3 */)
}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true; // XXX FIXME this API should not return 'true' I think -ahu
+ return this->send(query) && this->recv(answer); // XXX FIXME this API should not return 'true' I think -ahu
}
bool RemoteBackend::feedEnts(int domain_id, map<DNSName, bool>& nonterm)
{
Json::array nts;
- for (const auto& t : nonterm)
+ for (const auto& t : nonterm) {
nts.push_back(Json::object{
{"nonterm", t.first.toString()},
{"auth", t.second}});
+ }
Json query = Json::object{
{"method", "feedEnts"},
};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::feedEnts3(int domain_id, const DNSName& domain, map<DNSName, bool>& nonterm, const NSEC3PARAMRecordContent& ns3prc, bool narrow)
{
Json::array nts;
- for (const auto& t : nonterm)
+ for (const auto& t : nonterm) {
nts.push_back(Json::object{
{"nonterm", t.first.toString()},
{"auth", t.second}});
+ }
Json query = Json::object{
{"method", "feedEnts3"},
};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::startTransaction(const DNSName& domain, int domain_id)
{
- this->d_trxid = time((time_t*)NULL);
+ this->d_trxid = time(nullptr);
Json query = Json::object{
{"method", "startTransaction"},
{"parameters", Json::object{{"domain", domain.toString()}, {"domain_id", domain_id}, {"trxid", static_cast<double>(d_trxid)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false) {
+ if (!this->send(query) || !this->recv(answer)) {
d_trxid = -1;
return false;
}
return true;
}
+
bool RemoteBackend::commitTransaction()
{
- if (d_trxid == -1)
+ if (d_trxid == -1) {
return false;
+ }
Json query = Json::object{
{"method", "commitTransaction"},
d_trxid = -1;
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true;
+ return this->send(query) && this->recv(answer);
}
bool RemoteBackend::abortTransaction()
{
- if (d_trxid == -1)
+ if (d_trxid == -1) {
return false;
+ }
Json query = Json::object{
{"method", "abortTransaction"},
d_trxid = -1;
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
- return false;
- return true;
+ return this->send(query) && this->recv(answer);
}
string RemoteBackend::directBackendCmd(const string& querystr)
{"parameters", Json::object{{"query", querystr}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return "backend command failed";
+ }
return asString(answer["result"]);
}
{"parameters", Json::object{{"pattern", pattern}, {"maxResults", maxResults}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return false;
+ }
- if (answer["result"].is_array() == false)
+ if (!answer["result"].is_array()) {
return false;
+ }
for (const auto& row : answer["result"].array_items()) {
DNSResourceRecord rr;
rr.content = stringFromJson(row, "content");
rr.ttl = row["ttl"].int_value();
rr.domain_id = intFromJson(row, "domain_id", -1);
- if (d_dnssec)
- rr.auth = intFromJson(row, "auth", 1);
- else
+ if (d_dnssec) {
+ rr.auth = (intFromJson(row, "auth", 1) != 0);
+ }
+ else {
rr.auth = 1;
+ }
rr.scopeMask = row["scopeMask"].int_value();
result.push_back(rr);
}
{"parameters", Json::object{{"include_disabled", include_disabled}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return;
+ }
- if (answer["result"].is_array() == false)
+ if (!answer["result"].is_array()) {
return;
+ }
for (const auto& row : answer["result"].array_items()) {
DomainInfo di;
};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return;
+ }
- if (answer["result"].is_array() == false)
+ if (!answer["result"].is_array()) {
return;
+ }
for (const auto& row : answer["result"].array_items()) {
DomainInfo di;
};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false)
+ if (!this->send(query) || !this->recv(answer)) {
return;
+ }
- if (answer["result"].is_array() == false)
+ if (!answer["result"].is_array()) {
return;
+ }
for (const auto& row : answer["result"].array_items()) {
DomainInfo di;
{"parameters", Json::object{{"id", static_cast<double>(domain_id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false) {
+ if (!this->send(query) || !this->recv(answer)) {
g_log << Logger::Error << kBackendId << " Failed to execute RPC for RemoteBackend::setStale(" << domain_id << ")" << endl;
}
}
{"parameters", Json::object{{"id", static_cast<double>(domain_id)}}}};
Json answer;
- if (this->send(query) == false || this->recv(answer) == false) {
+ if (!this->send(query) || !this->recv(answer)) {
g_log << Logger::Error << kBackendId << " Failed to execute RPC for RemoteBackend::setFresh(" << domain_id << ")" << endl;
}
}
}
catch (...) {
g_log << Logger::Error << kBackendId << " Unable to instantiate a remotebackend!" << endl;
- return 0;
+ return nullptr;
};
}
class Connector
{
public:
- virtual ~Connector(){};
+ virtual ~Connector() = default;
bool send(Json& value);
bool recv(Json& value);
virtual int send_message(const Json& input) = 0;
virtual int recv_message(Json& output) = 0;
protected:
- string asString(const Json& value)
+ static string asString(const Json& value)
{
- if (value.is_number())
+ if (value.is_number()) {
return std::to_string(value.int_value());
- if (value.is_bool())
+ }
+ if (value.is_bool()) {
return (value.bool_value() ? "1" : "0");
- if (value.is_string())
+ }
+ if (value.is_string()) {
return value.string_value();
+ }
throw JsonException("Json value not convertible to String");
};
};
{
public:
UnixsocketConnector(std::map<std::string, std::string> options);
- virtual ~UnixsocketConnector();
- virtual int send_message(const Json& input);
- virtual int recv_message(Json& output);
+ ~UnixsocketConnector() override;
+ int send_message(const Json& input) override;
+ int recv_message(Json& output) override;
private:
ssize_t read(std::string& data);
{
public:
HTTPConnector(std::map<std::string, std::string> options);
- ~HTTPConnector();
+ ~HTTPConnector() override;
- virtual int send_message(const Json& input);
- virtual int recv_message(Json& output);
+ int send_message(const Json& input) override;
+ int recv_message(Json& output) override;
private:
std::string d_url;
bool d_post_json;
void restful_requestbuilder(const std::string& method, const Json& parameters, YaHTTP::Request& req);
void post_requestbuilder(const Json& input, YaHTTP::Request& req);
- void addUrlComponent(const Json& parameters, const string& element, std::stringstream& ss);
- std::string buildMemberListArgs(std::string prefix, const Json& args);
+ static void addUrlComponent(const Json& parameters, const string& element, std::stringstream& ss);
+ static std::string buildMemberListArgs(const std::string& prefix, const Json& args);
std::unique_ptr<Socket> d_socket;
ComboAddress d_addr;
std::string d_host;
{
public:
ZeroMQConnector(std::map<std::string, std::string> options);
- virtual ~ZeroMQConnector();
- virtual int send_message(const Json& input);
- virtual int recv_message(Json& output);
+ ~ZeroMQConnector() override;
+ int send_message(const Json& input) override;
+ int recv_message(Json& output) override;
private:
void connect();
{
public:
PipeConnector(std::map<std::string, std::string> options);
- ~PipeConnector();
+ ~PipeConnector() override;
- virtual int send_message(const Json& input);
- virtual int recv_message(Json& output);
+ int send_message(const Json& input) override;
+ int recv_message(Json& output) override;
private:
void launch();
- bool checkStatus();
+ [[nodiscard]] bool checkStatus() const;
std::string command;
std::map<std::string, std::string> options;
- int d_fd1[2], d_fd2[2];
+ int d_fd1[2]{}, d_fd2[2]{};
int d_pid;
int d_timeout;
std::unique_ptr<FILE, int (*)(FILE*)> d_fp{nullptr, fclose};
{
public:
RemoteBackend(const std::string& suffix = "");
- ~RemoteBackend();
+ ~RemoteBackend() override;
void lookup(const QType& qtype, const DNSName& qdomain, int zoneId = -1, DNSPacket* pkt_p = nullptr) override;
bool get(DNSResourceRecord& rr) override;
std::unique_ptr<Connector> connector;
bool d_dnssec;
Json d_result;
- int d_index;
- int64_t d_trxid;
+ int d_index{-1};
+ int64_t d_trxid{0};
std::string d_connstr;
bool send(Json& value);
bool recv(Json& value);
- void makeErrorAndThrow(Json& value);
+ static void makeErrorAndThrow(Json& value);
- string asString(const Json& value)
+ static string asString(const Json& value)
{
- if (value.is_number())
+ if (value.is_number()) {
return std::to_string(value.int_value());
- if (value.is_bool())
+ }
+ if (value.is_bool()) {
return (value.bool_value() ? "1" : "0");
- if (value.is_string())
+ }
+ if (value.is_string()) {
return value.string_value();
+ }
throw JsonException("Json value not convertible to String");
};
- bool asBool(const Json& value)
+ static bool asBool(const Json& value)
{
- if (value.is_bool())
+ if (value.is_bool()) {
return value.bool_value();
+ }
try {
string val = asString(value);
- if (val == "0")
+ if (val == "0") {
return false;
- if (val == "1")
+ }
+ if (val == "1") {
return true;
+ }
}
catch (const JsonException&) {
};
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
*/
+#include <string>
+#include <pdns/dnsbackend.hh>
+
DNSBackend::KeyData k1 = {std::string("Private-key-format: v1.2\nAlgorithm: 5 (RSASHA1)\nModulus: qpe9fxlN4dBT38cLPWtqljZhcJjbqRprj9XsYmf2/uFu4kA5sHYrlQY7H9lpzGJPRfOAfxShBpKs1AVaVInfJQ==\nPublicExponent: AQAB\nPrivateExponent: Ad3YogzXvVDLsWuAfioY571QlolbdTbzVlhLEMLD6dSRx+xcZgw6c27ak2HAH00iSKTvqK3AyeaK8Eqy/oJ5QQ==\nPrime1: wo8LZrdU2y0xLGCeLhwziQDDtTMi18NEIwlx8tUPnhs=\nPrime2: 4HcuFqgo7NOiXFvN+V2PT+QaIt2+oi6D2m8/qtTDS78=\nExponent1: GUdCoPbi9JM7l1t6Ud1iKMPLqchaF5SMTs0UXAuous8=\nExponent2: nzgKqimX9f1corTAEw0pddrwKyEtcu8ZuhzFhZCsAxM=\nCoefficient: YGNxbulf5GTNiIu0oNKmAF0khNtx9layjOPEI0R4/RY="), 1, 257, true, true};
DNSBackend::KeyData k2 = {std::string("Private-key-format: v1.2\nAlgorithm: 5 (RSASHA1)\nModulus: tY2TAMgL/whZdSbn2aci4wcMqohO24KQAaq5RlTRwQ33M8FYdW5fZ3DMdMsSLQUkjGnKJPKEdN3Qd4Z5b18f+w==\nPublicExponent: AQAB\nPrivateExponent: BB6xibPNPrBV0PUp3CQq0OdFpk9v9EZ2NiBFrA7osG5mGIZICqgOx/zlHiHKmX4OLmL28oU7jPKgogeuONXJQQ==\nPrime1: yjxe/iHQ4IBWpvCmuGqhxApWF+DY9LADIP7bM3Ejf3M=\nPrime2: 5dGWTyYEQRBVK74q1a64iXgaNuYm1pbClvvZ6ccCq1k=\nExponent1: TwM5RebmWeAqerzJFoIqw5IaQugJO8hM4KZR9A4/BTs=\nExponent2: bpV2HSmu3Fvuj7jWxbFoDIXlH0uJnrI2eg4/4hSnvSk=\nCoefficient: e2uDDWN2zXwYa2P6VQBWQ4mR1ZZjFEtO/+YqOJZun1Y="), 2, 256, true, true};
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
{
RemotebackendSetup()
{
- be = 0;
+ be = nullptr;
try {
// setup minimum arguments
::arg().set("module-dir") = "./.libs";
BOOST_TEST_MESSAGE("Cannot start remotebackend: " << ex.reason);
};
}
- ~RemotebackendSetup() {}
+ ~RemotebackendSetup() = default;
};
BOOST_GLOBAL_FIXTURE(RemotebackendSetup);
BOOST_TEST_MESSAGE("Testing list method");
be->list(DNSName("unit.test."), -1);
- while (be->get(rr))
+ while (be->get(rr)) {
record_count++;
+ }
BOOST_CHECK_EQUAL(record_count, 5); // number of records our test domain has
}
BOOST_AUTO_TEST_CASE(test_method_setDomainMetadata)
{
std::vector<std::string> meta;
- meta.push_back("VALUE");
+ meta.emplace_back("VALUE");
BOOST_TEST_MESSAGE("Testing setDomainMetadata method");
BOOST_CHECK(be->setDomainMetadata(DNSName("unit.test."), "TEST", meta));
}
BOOST_TEST_MESSAGE("Testing alsoNotifies method");
be->alsoNotifies(DNSName("unit.test."), &alsoNotifies);
BOOST_CHECK_EQUAL(alsoNotifies.size(), 1);
- if (alsoNotifies.size() > 0)
+ if (!alsoNotifies.empty()) {
BOOST_CHECK_EQUAL(alsoNotifies.count("192.0.2.1"), 1);
+ }
BOOST_CHECK(be->setDomainMetadata(DNSName("unit.test."), "ALSO-NOTIFY", std::vector<std::string>()));
}
BOOST_CHECK_EQUAL(meta.size(), 1);
// in case we got more than one value, which would be unexpected
// but not fatal
- if (meta.size() > 0)
+ if (!meta.empty()) {
BOOST_CHECK_EQUAL(meta[0], "VALUE");
+ }
}
BOOST_AUTO_TEST_CASE(test_method_getAllDomainMetadata)
BOOST_CHECK_EQUAL(meta.size(), 1);
// in case we got more than one value, which would be unexpected
// but not fatal
- if (meta.size() > 0)
+ if (!meta.empty()) {
BOOST_CHECK_EQUAL(meta["TEST"][0], "VALUE");
+ }
}
BOOST_AUTO_TEST_CASE(test_method_addDomainKey)
{
BOOST_TEST_MESSAGE("Testing addDomainKey method");
- int64_t id;
+ int64_t id = 0;
be->addDomainKey(DNSName("unit.test."), k1, id);
BOOST_CHECK_EQUAL(id, 1);
be->addDomainKey(DNSName("unit.test."), k2, id);
BOOST_AUTO_TEST_CASE(test_method_getBeforeAndAfterNamesAbsolute)
{
- DNSName unhashed, before, after;
+ DNSName unhashed;
+ DNSName before;
+ DNSName after;
BOOST_TEST_MESSAGE("Testing getBeforeAndAfterNamesAbsolute method");
be->getBeforeAndAfterNamesAbsolute(-1, DNSName("middle.unit.test."), unhashed, before, after);
BOOST_AUTO_TEST_CASE(test_method_setTSIGKey)
{
- std::string algorithm, content;
+ std::string algorithm;
+ std::string content;
BOOST_TEST_MESSAGE("Testing setTSIGKey method");
BOOST_CHECK_MESSAGE(be->setTSIGKey(DNSName("unit.test."), DNSName("hmac-md5."), "kp4/24gyYsEzbuTVJRUMoqGFmN3LYgVDzJ/3oRSP7ys="), "did not return true");
}
BOOST_AUTO_TEST_CASE(test_method_deleteTSIGKey)
{
- std::string algorithm, content;
+ std::string algorithm;
+ std::string content;
BOOST_TEST_MESSAGE("Testing deleteTSIGKey method");
BOOST_CHECK_MESSAGE(be->deleteTSIGKey(DNSName("unit.test.")), "did not return true");
}
std::vector<struct TSIGKey> keys;
BOOST_TEST_MESSAGE("Testing getTSIGKeys method");
be->getTSIGKeys(keys);
- BOOST_CHECK(keys.size() > 0);
- if (keys.size() > 0) {
+ BOOST_CHECK(!keys.empty());
+ if (!keys.empty()) {
BOOST_CHECK_EQUAL(keys[0].name.toString(), "test.");
BOOST_CHECK_EQUAL(keys[0].algorithm.toString(), "NULL.");
BOOST_CHECK_EQUAL(keys[0].key, "NULL");
{
DNSResourceRecord rr;
std::vector<DNSResourceRecord> nsset;
- DNSBackend* dbd;
+ DNSBackend* dbd = nullptr;
BOOST_TEST_MESSAGE("Testing superMasterBackend method");
rr.qname = DNSName("example.com.");
rr.content = "ns2.example.com.";
nsset.push_back(rr);
- BOOST_CHECK(be->superMasterBackend("10.0.0.1", DNSName("example.com."), nsset, NULL, NULL, &dbd));
+ BOOST_CHECK(be->superMasterBackend("10.0.0.1", DNSName("example.com."), nsset, nullptr, nullptr, &dbd));
// let's see what we got
BOOST_CHECK_EQUAL(dbd, be);
be->getUpdatedMasters(result, catalogs, hashes);
- BOOST_REQUIRE(result.size() > 0);
+ BOOST_REQUIRE(!result.empty());
di = result.at(0);
BOOST_CHECK_EQUAL(di.zone.toString(), "master.test.");
{
auto data = input.dump() + "\n";
int rv = this->write(data);
- if (rv == -1)
+ if (rv == -1) {
return -1;
+ }
return rv;
}
int UnixsocketConnector::recv_message(Json& output)
{
- int rv;
- std::string s_output, err;
-
- struct timeval t0, t;
-
- gettimeofday(&t0, NULL);
+ int rv = 0;
+ std::string s_output;
+ std::string err;
+
+ struct timeval t0
+ {
+ };
+ struct timeval t
+ {
+ };
+
+ gettimeofday(&t0, nullptr);
memcpy(&t, &t0, sizeof(t0));
s_output = "";
while ((t.tv_sec - t0.tv_sec) * 1000 + (t.tv_usec - t0.tv_usec) / 1000 < this->timeout) {
int avail = waitForData(this->fd, 0, this->timeout * 500); // use half the timeout as poll timeout
- if (avail < 0) // poll error
+ if (avail < 0) { // poll error
return -1;
+ }
if (avail == 0) { // timeout
- gettimeofday(&t, NULL);
+ gettimeofday(&t, nullptr);
continue;
}
rv = this->read(s_output);
- if (rv == -1)
+ if (rv == -1) {
return -1;
+ }
if (rv > 0) {
// see if it can be parsed
output = Json::parse(s_output, err);
- if (output != nullptr)
+ if (output != nullptr) {
return s_output.size();
+ }
}
- gettimeofday(&t, NULL);
+ gettimeofday(&t, nullptr);
}
close(fd);
ssize_t UnixsocketConnector::read(std::string& data)
{
- ssize_t nread;
+ ssize_t nread = 0;
char buf[1500] = {0};
reconnect();
- if (!connected)
+ if (!connected) {
return -1;
+ }
nread = ::read(this->fd, buf, sizeof buf);
// just try again later...
- if (nread == -1 && errno == EAGAIN)
+ if (nread == -1 && errno == EAGAIN) {
return 0;
+ }
if (nread == -1 || nread == 0) {
connected = false;
size_t pos = 0;
reconnect();
- if (!connected)
+ if (!connected) {
return -1;
+ }
while (pos < data.size()) {
ssize_t written = ::write(fd, &data.at(pos), data.size() - pos);
close(fd);
return -1;
}
- else {
- pos = pos + static_cast<size_t>(written);
- }
+ pos = pos + static_cast<size_t>(written);
}
return pos;
}
void UnixsocketConnector::reconnect()
{
- struct sockaddr_un sock;
- int rv;
+ struct sockaddr_un sock
+ {
+ };
+ int rv = 0;
- if (connected)
+ if (connected) {
return; // no point reconnecting if connected...
+ }
connected = true;
g_log << Logger::Info << "Reconnecting to backend" << std::endl;
return;
}
- if (makeUNsockaddr(path, &sock)) {
+ if (makeUNsockaddr(path, &sock) != 0) {
g_log << Logger::Error << "Unable to create UNIX domain socket: Path '" << path << "' is not a valid UNIX socket path." << std::endl;
return;
}
this->send(msg);
msg = nullptr;
- if (this->recv(msg) == false) {
+ if (!this->recv(msg)) {
g_log << Logger::Warning << "Failed to initialize backend" << std::endl;
close(fd);
this->connected = false;
this->send(msg);
msg = nullptr;
- if (this->recv(msg) == false) {
+ if (!this->recv(msg)) {
g_log << Logger::Error << "Failed to initialize zeromq" << std::endl;
throw PDNSException("Failed to initialize zeromq");
}
};
-ZeroMQConnector::~ZeroMQConnector() {}
+ZeroMQConnector::~ZeroMQConnector() = default;
int ZeroMQConnector::send_message(const Json& input)
{
// message was not sent
g_log << Logger::Error << "Cannot send to " << this->d_endpoint << ": " << zmq_strerror(errno) << std::endl;
}
- else
+ else {
return line.size();
+ }
}
}
}
// we have an event
if ((item.revents & ZMQ_POLLIN) == ZMQ_POLLIN) {
string data;
- size_t msg_size;
+ size_t msg_size = 0;
zmq_msg_init(&message);
// read something
if (zmq_msg_recv(&message, this->d_sock.get(), ZMQ_NOBLOCK) > 0) {
data.assign(reinterpret_cast<const char*>(zmq_msg_data(&message)), msg_size);
zmq_msg_close(&message);
output = Json::parse(data, err);
- if (output != nullptr)
+ if (output != nullptr) {
rv = msg_size;
- else
+ }
+ else {
g_log << Logger::Error << "Cannot parse JSON reply from " << this->d_endpoint << ": " << err << endl;
+ }
break;
}
- else if (errno == EAGAIN) {
+ if (errno == EAGAIN) {
continue; // try again
}
- else {
- break;
- }
+ break;
}
}
}
{"snmp-master-socket", "snmp-daemon-socket"},
{"xpf-allow-from", "Proxy Protocol"},
{"xpf-rr-code", "Proxy Protocol"},
+ {"allow-unsigned-supermaster", "allow-unsigned-autoprimary"},
+ {"master", "primary"},
+ {"slave-cycle-interval", "xfr-cycle-interval"},
+ {"slave-renotify", "secondary-do-renotify"},
+ {"slave", "secondary"},
+ {"superslave", "autosecondary"},
+ {"domain-metadata-cache-ttl", "zone-metadata-cache-ttl"},
};
// NOLINTNEXTLINE(readability-convert-member-functions-to-static): accesses d_log (compiled out in auth, hence clang-tidy message)
str << base << "tcpdiedreadingquery" << ' ' << front->tcpDiedReadingQuery.load() << " " << now << "\r\n";
str << base << "tcpdiedsendingresponse" << ' ' << front->tcpDiedSendingResponse.load() << " " << now << "\r\n";
str << base << "tcpgaveup" << ' ' << front->tcpGaveUp.load() << " " << now << "\r\n";
- str << base << "tcpclientimeouts" << ' ' << front->tcpClientTimeouts.load() << " " << now << "\r\n";
+ str << base << "tcpclienttimeouts" << ' ' << front->tcpClientTimeouts.load() << " " << now << "\r\n";
str << base << "tcpdownstreamtimeouts" << ' ' << front->tcpDownstreamTimeouts.load() << " " << now << "\r\n";
str << base << "tcpcurrentconnections" << ' ' << front->tcpCurrentConnections.load() << " " << now << "\r\n";
str << base << "tcpmaxconcurrentconnections" << ' ' << front->tcpMaxConcurrentConnections.load() << " " << now << "\r\n";
}
}
catch (const std::exception& e) {
- errlog("Got an exception in client connection from %s: %s", conn.getClient().toStringWithPort(), e.what());
+ infolog("Got an exception in client connection from %s: %s", conn.getClient().toStringWithPort(), e.what());
}
}
t.detach();
}
catch (const std::exception& e) {
- errlog("Control connection died: %s", e.what());
+ infolog("Control connection died: %s", e.what());
}
}
}
auto slow = g_dynblockNMG.getCopy();
struct timespec now;
gettime(&now);
- boost::format fmt("%-24s %8d %8d %-10s %-20s %s\n");
- g_outputBuffer = (fmt % "What" % "Seconds" % "Blocks" % "Warning" % "Action" % "Reason").str();
+ boost::format fmt("%-24s %8d %8d %-10s %-20s %-10s %s\n");
+ g_outputBuffer = (fmt % "What" % "Seconds" % "Blocks" % "Warning" % "Action" % "eBPF" % "Reason").str();
for (const auto& e : slow) {
if (now < e.second.until) {
uint64_t counter = e.second.blocks;
if (g_defaultBPFFilter && e.second.bpf) {
counter += g_defaultBPFFilter->getHits(e.first.getNetwork());
}
- g_outputBuffer += (fmt % e.first.toString() % (e.second.until.tv_sec - now.tv_sec) % counter % (e.second.warning ? "true" : "false") % DNSAction::typeToString(e.second.action != DNSAction::Action::None ? e.second.action : g_dynBlockAction) % e.second.reason).str();
+ g_outputBuffer += (fmt % e.first.toString() % (e.second.until.tv_sec - now.tv_sec) % counter % (e.second.warning ? "true" : "false") % DNSAction::typeToString(e.second.action != DNSAction::Action::None ? e.second.action : g_dynBlockAction) % (g_defaultBPFFilter && e.second.bpf ? "*" : "") % e.second.reason).str();
}
}
auto slow2 = g_dynblockSMT.getCopy();
string dom("empty");
if (!node.d_value.domain.empty())
dom = node.d_value.domain.toString();
- g_outputBuffer += (fmt % dom % (node.d_value.until.tv_sec - now.tv_sec) % node.d_value.blocks % (node.d_value.warning ? "true" : "false") % DNSAction::typeToString(node.d_value.action != DNSAction::Action::None ? node.d_value.action : g_dynBlockAction) % node.d_value.reason).str();
+ g_outputBuffer += (fmt % dom % (node.d_value.until.tv_sec - now.tv_sec) % node.d_value.blocks % (node.d_value.warning ? "true" : "false") % DNSAction::typeToString(node.d_value.action != DNSAction::Action::None ? node.d_value.action : g_dynBlockAction) % "" % node.d_value.reason).str();
}
});
});
g_frontends.push_back(std::move(cs));
}
- catch (std::exception& e) {
- errlog(e.what());
- g_outputBuffer = "Error: " + string(e.what()) + "\n";
+ catch (const std::exception& e) {
+ errlog("Error during addDNSCryptBind() processing: %s", e.what());
+ g_outputBuffer = "Error during addDNSCryptBind() processing: " + string(e.what()) + "\n";
}
});
result = g_dohlocals.at(index);
}
else {
- errlog("Error: trying to get DOH frontend with index %zu but we only have %zu frontend(s)\n", index, g_dohlocals.size());
+ errlog("Error: trying to get DOH frontend with index %d but we only have %d frontend(s)\n", index, g_dohlocals.size());
g_outputBuffer = "Error: trying to get DOH frontend with index " + std::to_string(index) + " but we only have " + std::to_string(g_dohlocals.size()) + " frontend(s)\n";
}
}
catch (const std::exception& e) {
g_outputBuffer = "Error while trying to get DOH frontend with index " + std::to_string(index) + ": " + string(e.what()) + "\n";
- errlog("Error while trying to get DOH frontend with index %zu: %s\n", index, string(e.what()));
+ errlog("Error while trying to get DOH frontend with index %d: %s\n", index, string(e.what()));
}
#else
g_outputBuffer="DNS over HTTPS support is not present!\n";
result = g_tlslocals.at(index)->getContext();
}
else {
- errlog("Error: trying to get TLS context with index %zu but we only have %zu context(s)\n", index, g_tlslocals.size());
+ errlog("Error: trying to get TLS context with index %d but we only have %d context(s)\n", index, g_tlslocals.size());
g_outputBuffer = "Error: trying to get TLS context with index " + std::to_string(index) + " but we only have " + std::to_string(g_tlslocals.size()) + " context(s)\n";
}
}
catch (const std::exception& e) {
g_outputBuffer = "Error while trying to get TLS context with index " + std::to_string(index) + ": " + string(e.what()) + "\n";
- errlog("Error while trying to get TLS context with index %zu: %s\n", index, string(e.what()));
+ errlog("Error while trying to get TLS context with index %d: %s\n", index, string(e.what()));
}
#else
g_outputBuffer="DNS over TLS support is not present!\n";
result = g_tlslocals.at(index);
}
else {
- errlog("Error: trying to get TLS frontend with index %zu but we only have %zu frontends\n", index, g_tlslocals.size());
+ errlog("Error: trying to get TLS frontend with index %d but we only have %d frontends\n", index, g_tlslocals.size());
g_outputBuffer = "Error: trying to get TLS frontend with index " + std::to_string(index) + " but we only have " + std::to_string(g_tlslocals.size()) + " frontend(s)\n";
}
}
catch (const std::exception& e) {
g_outputBuffer = "Error while trying to get TLS frontend with index " + std::to_string(index) + ": " + string(e.what()) + "\n";
- errlog("Error while trying to get TLS frontend with index %zu: %s\n", index, string(e.what()));
+ errlog("Error while trying to get TLS frontend with index %d: %s\n", index, string(e.what()));
}
#else
g_outputBuffer="DNS over TLS support is not present!\n";
auto result = dnsdist::metrics::declareCustomMetric(name, type, description, customName ? std::optional<std::string>(*customName) : std::nullopt);
if (result) {
g_outputBuffer += *result + "\n";
- errlog("%s", *result);
+ errlog("Error in declareMetric: %s", *result);
return false;
}
return true;
auto result = dnsdist::metrics::incrementCustomCounter(name, step ? *step : 1);
if (const auto* errorStr = std::get_if<dnsdist::metrics::Error>(&result)) {
g_outputBuffer = *errorStr + "'\n";
- errlog("%s", *errorStr);
+ errlog("Error in incMetric: %s", *errorStr);
return static_cast<uint64_t>(0);
}
return std::get<uint64_t>(result);
auto result = dnsdist::metrics::decrementCustomCounter(name, step ? *step : 1);
if (const auto* errorStr = std::get_if<dnsdist::metrics::Error>(&result)) {
g_outputBuffer = *errorStr + "'\n";
- errlog("%s", *errorStr);
+ errlog("Error in decMetric: %s", *errorStr);
return static_cast<uint64_t>(0);
}
return std::get<uint64_t>(result);
auto result = dnsdist::metrics::setCustomGauge(name, value);
if (const auto* errorStr = std::get_if<dnsdist::metrics::Error>(&result)) {
g_outputBuffer = *errorStr + "'\n";
- errlog("%s", *errorStr);
+ errlog("Error in setMetric: %s", *errorStr);
return 0.;
}
return std::get<double>(result);
auto result = dnsdist::metrics::getCustomMetric(name);
if (const auto* errorStr = std::get_if<dnsdist::metrics::Error>(&result)) {
g_outputBuffer = *errorStr + "'\n";
- errlog("%s", *errorStr);
+ errlog("Error in getMetric: %s", *errorStr);
return 0.;
}
return std::get<double>(result);
if (g_tcpStatesDumpRequested > 0) {
/* no race here, we took the lock so it can only be increased in the meantime */
--g_tcpStatesDumpRequested;
- errlog("Dumping the TCP states, as requested:");
+ infolog("Dumping the TCP states, as requested:");
data.mplexer->runForAllWatchedFDs([](bool isRead, int fd, const FDMultiplexer::funcparam_t& param, struct timeval ttd)
{
struct timeval lnow;
gettimeofday(&lnow, nullptr);
if (ttd.tv_sec > 0) {
- errlog("- Descriptor %d is in %s state, TTD in %d", fd, (isRead ? "read" : "write"), (ttd.tv_sec-lnow.tv_sec));
+ infolog("- Descriptor %d is in %s state, TTD in %d", fd, (isRead ? "read" : "write"), (ttd.tv_sec-lnow.tv_sec));
}
else {
- errlog("- Descriptor %d is in %s state, no TTD set", fd, (isRead ? "read" : "write"));
+ infolog("- Descriptor %d is in %s state, no TTD set", fd, (isRead ? "read" : "write"));
}
if (param.type() == typeid(std::shared_ptr<IncomingTCPConnectionState>)) {
auto state = boost::any_cast<std::shared_ptr<IncomingTCPConnectionState>>(param);
- errlog(" - %s", state->toString());
+ infolog(" - %s", state->toString());
}
#ifdef HAVE_NGHTTP2
else if (param.type() == typeid(std::shared_ptr<IncomingHTTP2Connection>)) {
auto state = boost::any_cast<std::shared_ptr<IncomingHTTP2Connection>>(param);
- errlog(" - %s", state->toString());
+ infolog(" - %s", state->toString());
}
#endif /* HAVE_NGHTTP2 */
else if (param.type() == typeid(std::shared_ptr<TCPConnectionToBackend>)) {
auto conn = boost::any_cast<std::shared_ptr<TCPConnectionToBackend>>(param);
- errlog(" - %s", conn->toString());
+ infolog(" - %s", conn->toString());
}
else if (param.type() == typeid(TCPClientThreadData*)) {
- errlog(" - Worker thread pipe");
+ infolog(" - Worker thread pipe");
}
});
- errlog("The TCP/DoT client cache has %d active and %d idle outgoing connections cached", t_downstreamTCPConnectionsManager.getActiveCount(), t_downstreamTCPConnectionsManager.getIdleCount());
+ infolog("The TCP/DoT client cache has %d active and %d idle outgoing connections cached", t_downstreamTCPConnectionsManager.getActiveCount(), t_downstreamTCPConnectionsManager.getIdleCount());
}
}
}
}
catch (const std::exception& e) {
- errlog("Error in TCP worker thread: %s", e.what());
+ warnlog("Error in TCP worker thread: %s", e.what());
}
}
}
static bool apiWriteConfigFile(const string& filebasename, const string& content)
{
if (!g_apiReadWrite) {
- errlog("Not writing content to %s since the API is read-only", filebasename);
+ warnlog("Not writing content to %s since the API is read-only", filebasename);
return false;
}
output << "# TYPE " << frontsbase << "tcpdiedsendingresponse " << "counter" << "\n";
output << "# HELP " << frontsbase << "tcpgaveup " << "Amount of TCP connections terminated after too many attempts to get a connection to the backend" << "\n";
output << "# TYPE " << frontsbase << "tcpgaveup " << "counter" << "\n";
- output << "# HELP " << frontsbase << "tcpclientimeouts " << "Amount of TCP connections terminated by a timeout while reading from the client" << "\n";
- output << "# TYPE " << frontsbase << "tcpclientimeouts " << "counter" << "\n";
+ output << "# HELP " << frontsbase << "tcpclienttimeouts " << "Amount of TCP connections terminated by a timeout while reading from the client" << "\n";
+ output << "# TYPE " << frontsbase << "tcpclienttimeouts " << "counter" << "\n";
output << "# HELP " << frontsbase << "tcpdownstreamtimeouts " << "Amount of TCP connections terminated by a timeout while reading from the backend" << "\n";
output << "# TYPE " << frontsbase << "tcpdownstreamtimeouts " << "counter" << "\n";
output << "# HELP " << frontsbase << "tcpcurrentconnections " << "Amount of current incoming TCP connections from clients" << "\n";
output << frontsbase << "tcpdiedreadingquery" << label << front->tcpDiedReadingQuery.load() << "\n";
output << frontsbase << "tcpdiedsendingresponse" << label << front->tcpDiedSendingResponse.load() << "\n";
output << frontsbase << "tcpgaveup" << label << front->tcpGaveUp.load() << "\n";
- output << frontsbase << "tcpclientimeouts" << label << front->tcpClientTimeouts.load() << "\n";
+ output << frontsbase << "tcpclienttimeouts" << label << front->tcpClientTimeouts.load() << "\n";
output << frontsbase << "tcpdownstreamtimeouts" << label << front->tcpDownstreamTimeouts.load() << "\n";
output << frontsbase << "tcpcurrentconnections" << label << front->tcpCurrentConnections.load() << "\n";
output << frontsbase << "tcpmaxconcurrentconnections" << label << front->tcpMaxConcurrentConnections.load() << "\n";
auto nmg = g_dynblockNMG.getLocal();
struct timespec now;
gettime(&now);
- for (const auto& e: *nmg) {
- if(now < e.second.until ) {
- Json::object thing{
- {"reason", e.second.reason},
- {"seconds", (double)(e.second.until.tv_sec - now.tv_sec)},
- {"blocks", (double)e.second.blocks},
- {"action", DNSAction::typeToString(e.second.action != DNSAction::Action::None ? e.second.action : g_dynBlockAction) },
- {"warning", e.second.warning }
- };
- obj.emplace(e.first.toString(), thing);
+ for (const auto& entry: *nmg) {
+ if (!(now < entry.second.until)) {
+ continue;
+ }
+ uint64_t counter = entry.second.blocks;
+ if (entry.second.bpf && g_defaultBPFFilter) {
+ counter += g_defaultBPFFilter->getHits(entry.first.getNetwork());
}
+ Json::object thing{
+ {"reason", entry.second.reason},
+ {"seconds", static_cast<double>(entry.second.until.tv_sec - now.tv_sec)},
+ {"blocks", static_cast<double>(counter)},
+ {"action", DNSAction::typeToString(entry.second.action != DNSAction::Action::None ? entry.second.action : g_dynBlockAction)},
+ {"warning", entry.second.warning},
+ {"ebpf", entry.second.bpf}
+ };
+ obj.emplace(entry.first.toString(), thing);
}
auto smt = g_dynblockSMT.getLocal();
smt->visit([&now,&obj](const SuffixMatchTree<DynBlock>& node) {
- if(now <node.d_value.until) {
- string dom("empty");
- if(!node.d_value.domain.empty())
- dom = node.d_value.domain.toString();
- Json::object thing{
- {"reason", node.d_value.reason},
- {"seconds", (double)(node.d_value.until.tv_sec - now.tv_sec)},
- {"blocks", (double)node.d_value.blocks},
- {"action", DNSAction::typeToString(node.d_value.action != DNSAction::Action::None ? node.d_value.action : g_dynBlockAction) }
- };
- obj.emplace(dom, thing);
+ if (!(now < node.d_value.until)) {
+ return;
+ }
+ string dom("empty");
+ if (!node.d_value.domain.empty()) {
+ dom = node.d_value.domain.toString();
}
+ Json::object thing{
+ {"reason", node.d_value.reason},
+ {"seconds", static_cast<double>(node.d_value.until.tv_sec - now.tv_sec)},
+ {"blocks", static_cast<double>(node.d_value.blocks)},
+ {"action", DNSAction::typeToString(node.d_value.action != DNSAction::Action::None ? node.d_value.action : g_dynBlockAction)},
+ {"ebpf", node.d_value.bpf}
+ };
+ obj.emplace(dom, thing);
});
#endif /* DISABLE_DYNBLOCKS */
Json my_json = obj;
obj.emplace(std::get<0>(entry).toString(), thing );
}
}
+ if (g_defaultBPFFilter) {
+ auto nmg = g_dynblockNMG.getLocal();
+ for (const auto& entry: *nmg) {
+ if (!(now < entry.second.until) || !entry.second.bpf) {
+ continue;
+ }
+ uint64_t counter = entry.second.blocks + g_defaultBPFFilter->getHits(entry.first.getNetwork());
+ Json::object thing{
+ {"reason", entry.second.reason},
+ {"seconds", static_cast<double>(entry.second.until.tv_sec - now.tv_sec)},
+ {"blocks", static_cast<double>(counter)},
+ {"action", DNSAction::typeToString(entry.second.action != DNSAction::Action::None ? entry.second.action : g_dynBlockAction)},
+ {"warning", entry.second.warning},
+ };
+ obj.emplace(entry.first.toString(), thing);
+ }
+ }
#endif /* HAVE_EBPF */
Json my_json = obj;
resp.body = my_json.dump();
{"id", num++},
{"creationOrder", (double)a.d_creationOrder},
{"uuid", boost::uuids::to_string(a.d_id)},
+ {"name", a.d_name},
{"matches", (double)a.d_rule->d_matches},
{"rule", a.d_rule->toString()},
{"action", a.d_action->toString()},
vinfolog("Webserver thread died with parse error exception while processing a request from %s: %s", conn.getClient().toStringWithPort(), e.what());
}
catch (const std::exception& e) {
- errlog("Webserver thread died with exception while processing a request from %s: %s", conn.getClient().toStringWithPort(), e.what());
+ vinfolog("Webserver thread died with exception while processing a request from %s: %s", conn.getClient().toStringWithPort(), e.what());
}
catch (...) {
- errlog("Webserver thread died with exception while processing a request from %s", conn.getClient().toStringWithPort());
+ vinfolog("Webserver thread died with exception while processing a request from %s", conn.getClient().toStringWithPort());
}
}
t.detach();
}
catch (const std::exception& e) {
- errlog("Had an error accepting new webserver connection: %s", e.what());
+ vinfolog("Had an error accepting new webserver connection: %s", e.what());
}
}
}
}
catch (const std::exception& e) {
if (remote && response.size() > 0 && static_cast<size_t>(response.size()) > sizeof(dnsheader)) {
- infolog("Backend %s sent us a response with id %d that did not parse: %s", remote->d_config.remote.toStringWithPort(), ntohs(dh->id), e.what());
+ vinfolog("Backend %s sent us a response with id %d that did not parse: %s", remote->d_config.remote.toStringWithPort(), ntohs(dh->id), e.what());
}
++dnsdist::metrics::g_stats.nonCompliantResponses;
if (remote) {
(*f)();
secondsToWaitLog = 0;
}
- catch(const std::exception &e) {
+ catch (const std::exception &e) {
if (secondsToWaitLog <= 0) {
- infolog("Error during execution of maintenance function: %s", e.what());
+ warnlog("Error during execution of maintenance function: %s", e.what());
secondsToWaitLog = 61;
}
secondsToWaitLog -= interval;
}
if (!d_beQuiet) {
- warnlog("Inserting %sdynamic block for %s for %d seconds: %s", warning ? "(warning) " :"", requestor.toString(), rule.d_blockDuration, rule.d_blockReason);
+ warnlog("Inserting %s%sdynamic block for %s for %d seconds: %s", warning ? "(warning) " :"", bpf ? "eBPF " : "", requestor.toString(), rule.d_blockDuration, rule.d_blockReason);
}
}
return false;
}
}
- catch(const std::exception& e) {
- warnlog("Error while looking up key '%s' from LMDB file '%s', database '%s': %s", key, d_fname, d_dbName, e.what());
+ catch (const std::exception& e) {
+ vinfolog("Error while looking up key '%s' from LMDB file '%s', database '%s': %s", key, d_fname, d_dbName, e.what());
}
return false;
}
return false;
}
}
- catch(const std::exception& e) {
- warnlog("Error while looking up key '%s' from LMDB file '%s', database '%s': %s", key, d_fname, d_dbName, e.what());
+ catch (const std::exception& e) {
+ vinfolog("Error while looking up key '%s' from LMDB file '%s', database '%s': %s", key, d_fname, d_dbName, e.what());
}
return false;
}
return false;
}
}
- catch(const std::exception& e) {
+ catch (const std::exception& e) {
vinfolog("Error while looking up a range from LMDB file '%s', database '%s': %s", d_fname, d_dbName, e.what());
}
return false;
d_nextCheck = now + d_refreshDelay;
d_refreshing.clear();
}
- catch(...) {
+ catch (...) {
d_refreshing.clear();
throw;
}
}
}
}
- catch(const std::exception& e) {
- warnlog("Error while looking up key '%s' from CDB file '%s': %s", key, d_fname, e.what());
+ catch (const std::exception& e) {
+ vinfolog("Error while looking up key '%s' from CDB file '%s': %s", key, d_fname, e.what());
}
return false;
}
return (*cdb)->keyExists(key);
}
}
- catch(const std::exception& e) {
- warnlog("Error while looking up key '%s' from CDB file '%s': %s", key, d_fname, e.what());
+ catch (const std::exception& e) {
+ vinfolog("Error while looking up key '%s' from CDB file '%s': %s", key, d_fname, e.what());
}
return false;
}
ctx.addNewCertificate(cert, privateKey);
}
}
- catch(const std::exception& e) {
- errlog(e.what());
- g_outputBuffer="Error: "+string(e.what())+"\n";
+ catch (const std::exception& e) {
+ errlog("Error generating a DNSCrypt certificate: %s", e.what());
+ g_outputBuffer = "Error generating a DNSCrypt certificate: " + string(e.what()) + "\n";
}
});
}
}
catch (const std::exception& e) {
- errlog(e.what());
- g_outputBuffer = "Error: " + string(e.what()) + "\n";
+ errlog("Error generating a DNSCrypt certificate: %s", e.what());
+ g_outputBuffer = "Error generating a DNSCrypt certificate: " + string(e.what()) + "\n";
}
});
g_outputBuffer = "Provider fingerprint is: " + DNSCryptContext::getProviderFingerprint(publicKey) + "\n";
}
catch (const std::exception& e) {
- errlog(e.what());
- g_outputBuffer = "Error: " + string(e.what()) + "\n";
+ errlog("Error generating a DNSCrypt provider key: %s", e.what());
+ g_outputBuffer = "Error generating a DNSCrypt provider key: " + string(e.what()) + "\n";
}
sodium_memzero(privateKey, sizeof(privateKey));
g_outputBuffer = "Provider fingerprint is: " + DNSCryptContext::getProviderFingerprint(publicKey) + "\n";
}
catch (const std::exception& e) {
- errlog(e.what());
- g_outputBuffer = "Error: " + string(e.what()) + "\n";
+ errlog("Error getting a DNSCrypt provider fingerprint: %s", e.what());
+ g_outputBuffer = "Error getting a DNSCrypt provider fingerprint: " + string(e.what()) + "\n";
}
});
#endif
}
}
catch (const std::exception& e) {
- infolog("Exception when processing IO for incoming DoH connection from %s: %s", d_ci.remote.toStringWithPort(), e.what());
+ vinfolog("Exception when processing IO for incoming DoH connection from %s: %s", d_ci.remote.toStringWithPort(), e.what());
d_connectionDied = true;
stopIO();
}
{
auto* conn = static_cast<IncomingHTTP2Connection*>(user_data);
- if (error_code == 0) {
- return 0;
- }
-
- auto stream = conn->d_currentStreams.find(stream_id);
- if (stream == conn->d_currentStreams.end()) {
- /* we don't care, then */
- return 0;
- }
-
- struct timeval now
- {
- };
- gettimeofday(&now, nullptr);
- auto request = std::move(stream->second);
- conn->d_currentStreams.erase(stream->first);
-
+ conn->d_currentStreams.erase(stream_id);
return 0;
}
conn->writeToSocket(true);
}
-bool IncomingHTTP2Connection::isIdle() const
-{
- return getConcurrentStreamsCount() == 0;
-}
-
void IncomingHTTP2Connection::stopIO()
{
d_ioState->reset();
gettimeofday(&now, nullptr);
if (newState == IOState::NeedRead) {
- if (isIdle()) {
+ /* use the idle TTL if the handshake has been completed (and proxy protocol payload received, if any),
+ and we have processed at least one query, otherwise we use the shorter read TTL */
+ if ((d_state == State::waitingForQuery || d_state == State::idle) && (d_queriesCount > 0 || d_currentQueriesCount > 0)) {
ttd = getIdleClientReadTTD(now);
}
else {
std::unique_ptr<DOHUnitInterface> getDOHUnit(uint32_t streamID) override;
void stopIO();
- bool isIdle() const;
uint32_t getConcurrentStreamsCount() const;
void updateIO(IOState newState, const FDMultiplexer::callbackfunc_t& callback);
void handleIOError();
if (g_dohStatesDumpRequested > 0) {
/* no race here, we took the lock so it can only be increased in the meantime */
--g_dohStatesDumpRequested;
- errlog("Dumping the DoH client states, as requested:");
+ infolog("Dumping the DoH client states, as requested:");
data.mplexer->runForAllWatchedFDs([](bool isRead, int fd, const FDMultiplexer::funcparam_t& param, struct timeval ttd) {
struct timeval lnow;
gettimeofday(&lnow, nullptr);
if (ttd.tv_sec > 0) {
- errlog("- Descriptor %d is in %s state, TTD in %d", fd, (isRead ? "read" : "write"), (ttd.tv_sec - lnow.tv_sec));
+ infolog("- Descriptor %d is in %s state, TTD in %d", fd, (isRead ? "read" : "write"), (ttd.tv_sec - lnow.tv_sec));
}
else {
- errlog("- Descriptor %d is in %s state, no TTD set", fd, (isRead ? "read" : "write"));
+ infolog("- Descriptor %d is in %s state, no TTD set", fd, (isRead ? "read" : "write"));
}
if (param.type() == typeid(std::shared_ptr<DoHConnectionToBackend>)) {
auto conn = boost::any_cast<std::shared_ptr<DoHConnectionToBackend>>(param);
- errlog(" - %s", conn->toString());
+ infolog(" - %s", conn->toString());
}
else if (param.type() == typeid(DoHClientThreadData*)) {
- errlog(" - Worker thread pipe");
+ infolog(" - Worker thread pipe");
}
});
- errlog("The DoH client cache has %d active and %d idle outgoing connections cached", t_downstreamDoHConnectionsManager.getActiveCount(), t_downstreamDoHConnectionsManager.getIdleCount());
+ infolog("The DoH client cache has %d active and %d idle outgoing connections cached", t_downstreamDoHConnectionsManager.getActiveCount(), t_downstreamDoHConnectionsManager.getIdleCount());
}
}
}
catch (const std::exception& e) {
- errlog("Error in outgoing DoH thread: %s", e.what());
+ warnlog("Error in outgoing DoH thread: %s", e.what());
}
}
}
int securityStatus = std::stoi(split.first);
std::string securityMessage = split.second;
- if(securityStatus == 1 && !g_secPollDone) {
- warnlog("Polled security status of version %s at startup, no known issues reported: %s", std::string(VERSION), securityMessage);
+ if (securityStatus == 1 && !g_secPollDone) {
+ infolog("Polled security status of version %s at startup, no known issues reported: %s", std::string(VERSION), securityMessage);
}
- if(securityStatus == 2) {
+ if (securityStatus == 2) {
errlog("PowerDNS DNSDist Security Update Recommended: %s", securityMessage);
}
else if(securityStatus == 3) {
g_secPollDone = true;
return;
}
- catch(const std::exception& e) {
+ catch (const std::exception& e) {
if (releaseVersion) {
warnlog("Error while retrieving the security update for version %s: %s", version, e.what());
}
> bd = getBind(0)
> bd:attachFilter(bpf)
-:program:`dnsdist` also supports adding dynamic, expiring blocks to a BPF filter::
+:program:`dnsdist` also supports adding dynamic, expiring blocks to a BPF filter:
+
+.. code-block:: lua
+
+ bpf = newBPFFilter({ipv4MaxItems=1024, ipv6MaxItems=1024, qnamesMaxItems=1024})
+ setDefaultBPFFilter(bpf)
+ local dbr = dynBlockRulesGroup()
+ dbr:setQueryRate(20, 10, "Exceeded query rate", 60)
+
+ function maintenance()
+ dbr:apply()
+ end
+
+This will dynamically block all hosts that exceeded 20 queries/s as measured over the past 10 seconds, and the dynamic block will last for 60 seconds.
+
+Since 1.6.0, the default BPF filter set via :func:`setDefaultBPFFilter` will automatically get used when a "drop" dynamic block is inserted via a :ref:`DynBlockRulesGroup`, which provides a better way to combine dynamic blocks with eBPF filtering.
+Before that, it was possible to use the :func:`addBPFFilterDynBlocks` method instead:
+
+.. code-block:: lua
+
+ -- this is a legacy method, please see above for DNSdist >= 1.6.0
bpf = newBPFFilter({ipv4MaxItems=1024, ipv6MaxItems=1024, qnamesMaxItems=1024})
setDefaultBPFFilter(bpf)
dbpf = newDynBPFFilter(bpf)
dbpf:purgeExpired()
end
-This will dynamically block all hosts that exceeded 20 queries/s as measured over the past 10 seconds, and the dynamic block will last for 60 seconds.
-
The dynamic eBPF blocks and the number of queries they blocked can be seen in the web interface and retrieved from the API. Note however that eBPF dynamic objects need to be registered before they appear in the web interface or the API, using the :func:`registerDynBPFFilter` function::
registerDynBPFFilter(dbpf)
They can be unregistered at a later point using the :func:`unregisterDynBPFFilter` function.
-
-Since 1.6.0, the default BPF filter set via :func:`setDefaultBPFFilter` will automatically get used when a "drop" dynamic block is inserted via a :ref:`DynBlockRulesGroup`, which provides a better way to combine dynamic blocks with eBPF filtering.
+Since 1.8.2, the metrics for the BPF filter registered via :func:`setDefaultBPFFilter` are exported as well.
Requirements
------------
It will get called every second, and from this function you can set rules to block traffic based on statistics.
More precisely, the thread handling the :func:`maintenance` function will sleep for one second between each invocation, so if the function takes several seconds to complete it will not be invoked exactly every second.
-As an example::
+As an example:
+
+.. code-block:: lua
+
+ local dbr = dynBlockRulesGroup()
+ dbr:setQueryRate(20, 10, "Exceeded query rate", 60)
function maintenance()
- addDynBlocks(exceedQRate(20, 10), "Exceeded query rate", 60)
+ dbr:apply()
end
This will dynamically block all hosts that exceeded 20 queries/s as measured over the past 10 seconds, and the dynamic block will last for 60 seconds.
+:ref:`DynBlockRulesGroup`, introduced in 1.3.0, is a much more efficient way of processing dynamic blocks. Before that, it was possible to use :meth:`addDynBlocks` instead:
+
+.. code-block:: lua
+
+ -- this is a legacy method, please see above for DNSdist >= 1.3.0
+ function maintenance()
+ addDynBlocks(exceedQRate(20, 10), "Exceeded query rate", 60)
+ end
+
Dynamic blocks in force are displayed with :func:`showDynBlocks` and can be cleared with :func:`clearDynBlocks`.
They return a table whose key is a :class:`ComboAddress` object, representing the client's source address, and whose value is an integer representing the number of queries matching the corresponding condition (for example the qtype for :func:`exceedQTypeRate`, rcode for :func:`exceedServFails`).
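As a sketch of what this looks like in practice (assuming dnsdist's Lua environment, where :func:`infolog` and the ``toString`` method on :class:`ComboAddress` are available):

.. code-block:: lua

    -- sketch: inspect the table returned by an exceed*() function before
    -- handing it to addDynBlocks()
    function maintenance()
      local offenders = exceedQRate(20, 10)
      for addr, count in pairs(offenders) do
        -- addr is a ComboAddress, count the number of matching queries
        infolog(addr:toString() .. " sent " .. count .. " queries over the last 10 seconds")
      end
      addDynBlocks(offenders, "Exceeded query rate", 60)
    end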
designed to make the processing of multiple rate-limiting rules faster by walking the query and response buffers only once
for each invocation, instead of once per existing `exceed*()` invocation.
-For example, instead of having something like:
-
-.. code-block:: lua
-
- function maintenance()
- addDynBlocks(exceedQRate(30, 10), "Exceeded query rate", 60)
- addDynBlocks(exceedNXDOMAINs(20, 10), "Exceeded NXD rate", 60)
- addDynBlocks(exceedServFails(20, 10), "Exceeded ServFail rate", 60)
- addDynBlocks(exceedQTypeRate(DNSQType.ANY, 5, 10), "Exceeded ANY rate", 60)
- addDynBlocks(exceedRespByterate(1000000, 10), "Exceeded resp BW rate", 60)
- end
-
The new syntax would be:
.. code-block:: lua
dbr:apply()
end
+Before 1.3.0 the legacy syntax was:
+
+.. code-block:: lua
+
+ function maintenance()
+ -- this example is using legacy methods, please see above for DNSdist >= 1.3.0
+ addDynBlocks(exceedQRate(30, 10), "Exceeded query rate", 60)
+ addDynBlocks(exceedNXDOMAINs(20, 10), "Exceeded NXD rate", 60)
+ addDynBlocks(exceedServFails(20, 10), "Exceeded ServFail rate", 60)
+ addDynBlocks(exceedQTypeRate(DNSQType.ANY, 5, 10), "Exceeded ANY rate", 60)
+ addDynBlocks(exceedRespByterate(1000000, 10), "Exceeded resp BW rate", 60)
+ end
+
The old syntax would walk the query buffer twice and the response buffer three times, while the new syntax walks each only once.
It also reuses the same internal table to keep track of the source IPs, reducing CPU usage.
# TYPE dnsdist_frontend_tcpdiedsendingresponse counter
# HELP dnsdist_frontend_tcpgaveup Amount of TCP connections terminated after too many attempts to get a connection to the backend
# TYPE dnsdist_frontend_tcpgaveup counter
- # HELP dnsdist_frontend_tcpclientimeouts Amount of TCP connections terminated by a timeout while reading from the client
- # TYPE dnsdist_frontend_tcpclientimeouts counter
+ # HELP dnsdist_frontend_tcpclienttimeouts Amount of TCP connections terminated by a timeout while reading from the client
+ # TYPE dnsdist_frontend_tcpclienttimeouts counter
# HELP dnsdist_frontend_tcpdownstreamtimeouts Amount of TCP connections terminated by a timeout while reading from the backend
# TYPE dnsdist_frontend_tcpdownstreamtimeouts counter
# HELP dnsdist_frontend_tcpcurrentconnections Amount of current incoming TCP connections from clients
dnsdist_frontend_tcpdiedreadingquery{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpdiedsendingresponse{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpgaveup{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
- dnsdist_frontend_tcpclientimeouts{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
+ dnsdist_frontend_tcpclienttimeouts{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpdownstreamtimeouts{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpcurrentconnections{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpmaxconcurrentconnections{frontend="127.0.0.1:853",proto="TCP (DNS over TLS)",thread="0"} 0
dnsdist_frontend_tcpdiedreadingquery{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpdiedsendingresponse{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpgaveup{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
- dnsdist_frontend_tcpclientimeouts{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
+ dnsdist_frontend_tcpclienttimeouts{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpdownstreamtimeouts{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpcurrentconnections{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpmaxconcurrentconnections{frontend="[::1]:443",proto="TCP (DNS over HTTPS)",thread="0"} 0
dnsdist_frontend_tcpdiedreadingquery{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
dnsdist_frontend_tcpdiedsendingresponse{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
dnsdist_frontend_tcpgaveup{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
- dnsdist_frontend_tcpclientimeouts{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
+ dnsdist_frontend_tcpclienttimeouts{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
dnsdist_frontend_tcpdownstreamtimeouts{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
dnsdist_frontend_tcpcurrentconnections{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
dnsdist_frontend_tcpmaxconcurrentconnections{frontend="127.0.0.1:53",proto="TCP",thread="0"} 0
Block a set of addresses with ``message`` for (optionally) a number of seconds.
The default number of seconds to block for is 10.
+ Since 1.3.0, the use of a :ref:`DynBlockRulesGroup` is a much more efficient way of doing the same thing.
:param addresses: set of Addresses as returned by an exceed function
:param string message: The message to show next to the blocks
This is the eBPF equivalent of :func:`addDynBlocks`, blocking a set of addresses for (optionally) a number of seconds, using an eBPF dynamic filter.
The default number of seconds to block for is 10.
+ Since 1.6.0, the use of a :ref:`DynBlockRulesGroup` is a much more efficient way of doing the same thing.
:param addresses: set of Addresses as returned by an :ref:`exceed function <exceedfuncs>`
:param DynBPFFilter dynbpf: The dynamic eBPF filter to use
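Putting the pieces together, a minimal sketch for the legacy (pre-1.6.0) approach, reusing the ``newBPFFilter``, ``newDynBPFFilter`` and ``exceedQRate`` functions shown elsewhere in this documentation:

.. code-block:: lua

    bpf = newBPFFilter({ipv4MaxItems=1024, ipv6MaxItems=1024, qnamesMaxItems=1024})
    dbpf = newDynBPFFilter(bpf)

    function maintenance()
      -- block offenders in the eBPF filter for 60 seconds
      addBPFFilterDynBlocks(exceedQRate(20, 10), dbpf, 60, "Exceeded query rate")
      dbpf:purgeExpired()
    end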
.. versionadded:: 1.4.0
Return a new KeyValueStore object associated to the corresponding CDB database. The modification time
- of the CDB file will be checked every 'refrehDelay' second and the database re-opened if needed.
+ of the CDB file will be checked every 'refreshDelay' seconds and the database re-opened if needed.
:param string filename: The path to an existing CDB database
:param int refreshDelay: The delay in seconds between two checks of the database modification time. 0 means disabled
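As a sketch of how such a store can be used (``KeyValueStoreLookupRule`` and ``KeyValueLookupKeySourceIP`` are assumed from the key-value store support; the file path is of course an example):

.. code-block:: lua

    -- open the CDB file, re-checking its modification time every 30 seconds,
    -- and drop queries whose source IP is listed in it
    kvs = newCDBKVStore('/path/to/list.cdb', 30)
    addAction(KeyValueStoreLookupRule(kvs, KeyValueLookupKeySourceIP()), DropAction())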
.. function:: setTCPRecvTimeout(num)
- Set the read timeout on TCP connections from the client, in seconds
+ Set the read timeout on TCP connections from the client, in seconds. Defaults to 2
:param int num:
.. function:: setTCPSendTimeout(num)
- Set the write timeout on TCP connections from the client, in seconds
+ Set the write timeout on TCP connections from the client, in seconds. Defaults to 2
:param int num:
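For example, to give slow clients more time than the 2-second default:

.. code-block:: lua

    setTCPRecvTimeout(5)
    setTCPSendTimeout(5)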
return 0;
}
catch (const std::exception& e) {
- errlog("DOH Handler function failed with error %s", e.what());
+ vinfolog("DOH Handler function failed with error: '%s'", e.what());
return 0;
}
}
processDOHQuery(std::move(dohUnit), false);
}
catch (const std::exception& e) {
- errlog("Error while processing query received over DoH: %s", e.what());
+ vinfolog("Error while processing query received over DoH: %s", e.what());
}
catch (...) {
- errlog("Unspecified error while processing query received over DoH");
+ vinfolog("Unspecified error while processing query received over DoH");
}
}
}
dohUnit = std::move(*tmp);
}
catch (const std::exception& e) {
- errlog("Error reading a DOH internal response: %s", e.what());
+ warnlog("Error reading a DOH internal response: %s", e.what());
return;
}
bouw = bouw + "</table>";
$("#downstreams").html(bouw);
- bouw='<table width="100%"><tr align=left><th>#</th><th align=left>Rule</th><th>Action</th><th>Matches</th></tr>';
+ bouw='<table width="100%"><tr align=left><th>#</th><th align=left>Name</th><th align=left>Rule</th><th>Action</th><th>Matches</th></tr>';
if(data["rules"].length) {
$.each(data["rules"], function(a,b) {
- bouw = bouw + ("<tr align=left><td>"+b["id"]+"</td><td align=left>"+b["rule"]+"</td><td>"+b["action"]+"</td>");
+ bouw = bouw + ("<tr align=left><td>"+b["id"]+"</td><td align=left>"+b["name"]+"</td><td align=left>"+b["rule"]+"</td><td>"+b["action"]+"</td>");
bouw = bouw + ("<td>"+b["matches"]+"</td></tr>");
});
}
bouw = bouw + "</table>";
$("#rules").html(bouw);
- bouw='<table width="100%"><tr align=left><th>#</th><th align=left>Response Rule</th><th>Action</th><th>Matches</th></tr>';
+ bouw='<table width="100%"><tr align=left><th>#</th><th align=left>Name</th><th align=left>Response Rule</th><th>Action</th><th>Matches</th></tr>';
if(data["response-rules"].length) {
$.each(data["response-rules"], function(a,b) {
- bouw = bouw + ("<tr align=left><td>"+b["id"]+"</td><td align=left>"+b["rule"]+"</td><td>"+b["action"]+"</td>");
+ bouw = bouw + ("<tr align=left><td>"+b["id"]+"</td><td align=left>"+b["name"]+"</td><td align=left>"+b["rule"]+"</td><td>"+b["action"]+"</td>");
bouw = bouw + ("<td>"+b["matches"]+"</td></tr>");
});
}
$.ajax({ url: 'jsonstat?command=dynblocklist', type: 'GET', dataType: 'json', jsonp: false,
success: function(data) {
- var bouw='<table width="100%"><tr align=left><th>Dyn blocked netmask</th><th>Seconds</th><th>Blocks</th><th align=left>Reason</th></tr>';
+ var bouw='<table width="100%"><tr align=left><th>Dyn blocked netmask</th><th>Seconds</th><th>Blocks</th><th>eBPF</th><th align=left>Reason</th></tr>';
var gotsome=false;
$.each(data, function(a,b) {
- bouw=bouw+("<tr><td>"+a+"</td><td>"+b.seconds+"</td><td>"+b.blocks+"</td><td>"+b.reason+"</td></tr>");
+ bouw=bouw+("<tr><td>"+a+"</td><td>"+b.seconds+"</td><td>"+b.blocks+"</td><td>"+b.ebpf+"</td><td>"+b.reason+"</td></tr>");
gotsome=true;
});
errorRaised = true;
}
- bool errorRaised{false};
+ std::atomic<bool> errorRaised{false};
};
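The hunk above upgrades `errorRaised` to `std::atomic<bool>` because the flag is set from the query-processing thread and checked from the test thread. A standalone sketch of that pattern (names are illustrative, not the test-suite code itself):

```cpp
#include <atomic>
#include <thread>

// Sketch: a flag written by one thread and read by another must be
// std::atomic, otherwise the concurrent accesses are a data race.
static bool flagVisibleAfterJoin()
{
  std::atomic<bool> errorRaised{false};
  std::thread worker([&errorRaised] { errorRaised.store(true); });
  worker.join(); // join() synchronizes, so the store is visible below
  return errorRaised.load();
}
```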
struct DummyCrossProtocolQuery : public CrossProtocolQuery
}
BOOST_CHECK(holder->empty());
- BOOST_CHECK(sender->errorRaised);
+ BOOST_CHECK(sender->errorRaised.load());
holder->stop();
}
}
BOOST_CHECK(holder->empty());
- BOOST_CHECK(sender->errorRaised);
+ BOOST_CHECK(sender->errorRaised.load());
holder->stop();
}
("filter-name,f", po::value<string>(), "Do statistics only for queries within this domain")
("load-stats,l", po::value<string>()->default_value(""), "if set, emit per-second load statistics (questions, answers, outstanding)")
("no-servfail-stats", "Don't include servfails in response time stats")
+ ("port", po::value<uint16_t>()->default_value(0), "Only consider packets with this source or destination port. By default, ports 53 and 5300 are considered")
("servfail-tree", "Figure out subtrees that generate servfails")
("stats-dir", po::value<string>()->default_value("."), "Directory where statistics will be saved")
("write-failures,w", po::value<string>()->default_value(""), "if set, write weird packets to this PCAP file")
std::unordered_set<ComboAddress, ComboAddress::addressOnlyHash> requestors, recipients, rdnonra;
typedef vector<pair<time_t, LiveCounts> > pcounts_t;
pcounts_t pcounts;
+ const uint16_t port = g_vm["port"].as<uint16_t>();
OPTRecordContent::report();
for(unsigned int fno=0; fno < files.size(); ++fno) {
EDNSOpts edo;
while(pr.getUDPPacket()) {
- if((ntohs(pr.d_udp->uh_dport)==5300 || ntohs(pr.d_udp->uh_sport)==5300 ||
- ntohs(pr.d_udp->uh_dport)==53 || ntohs(pr.d_udp->uh_sport)==53) &&
- pr.d_len > 12) {
- try {
- if((pr.d_ip->ip_v == 4 && !doIPv4) || (pr.d_ip->ip_v == 6 && !doIPv6))
- continue;
- if(pr.d_ip->ip_v == 4) {
- uint16_t frag = ntohs(pr.d_ip->ip_off);
- if((frag & IP_MF) || (frag & IP_OFFMASK)) { // more fragments or IS a fragment
- fragmented++;
- continue;
- }
- }
- uint16_t qtype;
- DNSName qname((const char*)pr.d_payload, pr.d_len, 12, false, &qtype);
- struct dnsheader header;
- memcpy(&header, (struct dnsheader*)pr.d_payload, 12);
-
- if(haveRDFilter && header.rd != rdFilter) {
- rdFilterMismatch++;
- continue;
- }
-
- if(!filtername.empty() && !qname.isPartOf(filtername)) {
- nameMismatch++;
+ if (pr.d_len <= 12) {
+ // non-DNS ip
+ nonDNSIP++;
+ continue;
+ }
+ if (port > 0 &&
+ (ntohs(pr.d_udp->uh_dport) != port && ntohs(pr.d_udp->uh_sport) != port)) {
+ // non-DNS ip
+ nonDNSIP++;
+ continue;
+ }
+
+ if (port == 0 &&
+ (ntohs(pr.d_udp->uh_dport) != 5300 && ntohs(pr.d_udp->uh_sport) != 5300 &&
+ ntohs(pr.d_udp->uh_dport) != 53 && ntohs(pr.d_udp->uh_sport) != 53)) {
+ // non-DNS ip
+ nonDNSIP++;
+ continue;
+ }
+
+ try {
+ if ((pr.d_ip->ip_v == 4 && !doIPv4) || (pr.d_ip->ip_v == 6 && !doIPv6)) {
+ continue;
+ }
+
+ if (pr.d_ip->ip_v == 4) {
+ uint16_t frag = ntohs(pr.d_ip->ip_off);
+ if((frag & IP_MF) || (frag & IP_OFFMASK)) { // more fragments or IS a fragment
+ fragmented++;
continue;
}
+ }
+ uint16_t qtype;
+ DNSName qname((const char*)pr.d_payload, pr.d_len, 12, false, &qtype);
+ struct dnsheader header;
+ memcpy(&header, (struct dnsheader*)pr.d_payload, 12);
+
+ if(haveRDFilter && header.rd != rdFilter) {
+ rdFilterMismatch++;
+ continue;
+ }
+
+ if(!filtername.empty() && !qname.isPartOf(filtername)) {
+ nameMismatch++;
+ continue;
+ }
+
+ if(!header.qr) {
+ uint16_t udpsize, z;
+ if(getEDNSUDPPayloadSizeAndZ((const char*)pr.d_payload, pr.d_len, &udpsize, &z)) {
+ edns++;
+ if(z & EDNSOpts::DNSSECOK)
+ dnssecOK++;
+ if(header.cd)
+ dnssecCD++;
+ if(header.ad)
+ dnssecAD++;
+ }
+ }
+
+ if(pr.d_ip->ip_v == 4)
+ ++ipv4DNSPackets;
+ else
+ ++ipv6DNSPackets;
- if(!header.qr) {
- uint16_t udpsize, z;
- if(getEDNSUDPPayloadSizeAndZ((const char*)pr.d_payload, pr.d_len, &udpsize, &z)) {
- edns++;
- if(z & EDNSOpts::DNSSECOK)
- dnssecOK++;
- if(header.cd)
- dnssecCD++;
- if(header.ad)
- dnssecAD++;
+ if(pr.d_pheader.ts.tv_sec != lastsec) {
+ LiveCounts lc;
+ if(lastsec) {
+ lc.questions = queries;
+ lc.answers = answers;
+ lc.outstanding = liveQuestions();
+
+ LiveCounts diff = lc - lastcounts;
+ pcounts.emplace_back(pr.d_pheader.ts.tv_sec, diff);
+ }
+ lastsec = pr.d_pheader.ts.tv_sec;
+ lastcounts = lc;
+ }
+
+ if(lowestTime) { lowestTime = min((time_t)lowestTime, (time_t)pr.d_pheader.ts.tv_sec); }
+ else { lowestTime = pr.d_pheader.ts.tv_sec; }
+ highestTime=max((time_t)highestTime, (time_t)pr.d_pheader.ts.tv_sec);
+
+ QuestionIdentifier qi=QuestionIdentifier::create(pr.getSource(), pr.getDest(), header, qname, qtype);
+
+ if(!header.qr) { // question
+ // cout<<"Query "<<qi<<endl;
+ if(!header.rd)
+ nonRDQueries++;
+ queries++;
+
+ ComboAddress rem = pr.getSource();
+ rem.sin4.sin_port=0;
+ requestors.insert(rem);
+
+ QuestionData& qd=statmap[qi];
+
+ if(!qd.d_firstquestiontime.tv_sec)
+ qd.d_firstquestiontime=pr.d_pheader.ts;
+ else {
+ auto delta=makeFloat(pr.d_pheader.ts - qd.d_firstquestiontime);
+ // cout<<"Reuse of "<<qi<<", delta t="<<delta<<", count="<<qd.d_qcount<<endl;
+ if(delta > 2.0) {
+ // cout<<"Resetting old entry for "<<qi<<", too old"<<endl;
+ qd.d_qcount=0;
+ qd.d_answercount=0;
+ qd.d_firstquestiontime=pr.d_pheader.ts;
}
}
+ if(qd.d_qcount++)
+ reuses++;
+ }
+ else { // answer
+ // cout<<"Response "<<qi<<endl;
+ rcodes[header.rcode]++;
+ answers++;
+ if(header.rd && !header.ra) {
+ rdNonRAAnswers++;
+ rdnonra.insert(pr.getDest());
+ }
- if(pr.d_ip->ip_v == 4)
- ++ipv4DNSPackets;
- else
- ++ipv6DNSPackets;
+ if(header.ra) {
+ ComboAddress rem = pr.getDest();
+ rem.sin4.sin_port=0;
+ recipients.insert(rem);
+ }
- if(pr.d_pheader.ts.tv_sec != lastsec) {
- LiveCounts lc;
- if(lastsec) {
- lc.questions = queries;
- lc.answers = answers;
- lc.outstanding = liveQuestions();
+ QuestionData& qd=statmap[qi];
+ if(!qd.d_qcount) {
+ // cout<<"Untracked answer: "<<qi<<endl;
+ untracked++;
+ }
- LiveCounts diff = lc - lastcounts;
- pcounts.emplace_back(pr.d_pheader.ts.tv_sec, diff);
- }
- lastsec = pr.d_pheader.ts.tv_sec;
- lastcounts = lc;
+ qd.d_answercount++;
+
+ if(qd.d_qcount) {
+ uint32_t usecs= (pr.d_pheader.ts.tv_sec - qd.d_firstquestiontime.tv_sec) * 1000000 +
+ (pr.d_pheader.ts.tv_usec - qd.d_firstquestiontime.tv_usec) ;
+
+ // cout<<"Usecs for "<<qi<<": "<<usecs<<endl;
+ if(!noservfailstats || header.rcode != 2)
+ cumul[usecs]++;
+
+ if(header.rcode != 0 && header.rcode!=3)
+ errorresult++;
+ ComboAddress rem = pr.getDest();
+ rem.sin4.sin_port=0;
+
+ if(doServFailTree)
+ root.submit(qname, header.rcode, pr.d_len, false, rem);
}
- if(lowestTime) { lowestTime = min((time_t)lowestTime, (time_t)pr.d_pheader.ts.tv_sec); }
- else { lowestTime = pr.d_pheader.ts.tv_sec; }
- highestTime=max((time_t)highestTime, (time_t)pr.d_pheader.ts.tv_sec);
-
- QuestionIdentifier qi=QuestionIdentifier::create(pr.getSource(), pr.getDest(), header, qname, qtype);
-
- if(!header.qr) { // question
- // cout<<"Query "<<qi<<endl;
- if(!header.rd)
- nonRDQueries++;
- queries++;
-
- ComboAddress rem = pr.getSource();
- rem.sin4.sin_port=0;
- requestors.insert(rem);
-
- QuestionData& qd=statmap[qi];
-
- if(!qd.d_firstquestiontime.tv_sec)
- qd.d_firstquestiontime=pr.d_pheader.ts;
- else {
- auto delta=makeFloat(pr.d_pheader.ts - qd.d_firstquestiontime);
- // cout<<"Reuse of "<<qi<<", delta t="<<delta<<", count="<<qd.d_qcount<<endl;
- if(delta > 2.0) {
- // cout<<"Resetting old entry for "<<qi<<", too old"<<endl;
- qd.d_qcount=0;
- qd.d_answercount=0;
- qd.d_firstquestiontime=pr.d_pheader.ts;
- }
- }
- if(qd.d_qcount++)
- reuses++;
- }
- else { // answer
- // cout<<"Response "<<qi<<endl;
- rcodes[header.rcode]++;
- answers++;
- if(header.rd && !header.ra) {
- rdNonRAAnswers++;
- rdnonra.insert(pr.getDest());
- }
-
- if(header.ra) {
- ComboAddress rem = pr.getDest();
- rem.sin4.sin_port=0;
- recipients.insert(rem);
- }
-
- QuestionData& qd=statmap[qi];
- if(!qd.d_qcount) {
- // cout<<"Untracked answer: "<<qi<<endl;
- untracked++;
- }
-
- qd.d_answercount++;
-
- if(qd.d_qcount) {
- uint32_t usecs= (pr.d_pheader.ts.tv_sec - qd.d_firstquestiontime.tv_sec) * 1000000 +
- (pr.d_pheader.ts.tv_usec - qd.d_firstquestiontime.tv_usec) ;
-
- // cout<<"Usecs for "<<qi<<": "<<usecs<<endl;
- if(!noservfailstats || header.rcode != 2)
- cumul[usecs]++;
-
- if(header.rcode != 0 && header.rcode!=3)
- errorresult++;
- ComboAddress rem = pr.getDest();
- rem.sin4.sin_port=0;
-
- if(doServFailTree)
- root.submit(qname, header.rcode, pr.d_len, false, rem);
- }
-
- if(!qd.d_qcount || qd.d_qcount == qd.d_answercount) {
- // cout<<"Clearing state for "<<qi<<endl<<endl;
- statmap.erase(qi);
- }
- else {
- // cout<<"State for qi remains open, qcount="<<qd.d_qcount<<", answercount="<<qd.d_answercount<<endl;
- }
- }
- }
- catch(std::exception& e) {
- if(verbose)
- cout<<"error parsing packet: "<<e.what()<<endl;
-
- if(pw)
- pw->write();
- parsefail++;
- continue;
- }
+ if(!qd.d_qcount || qd.d_qcount == qd.d_answercount) {
+ // cout<<"Clearing state for "<<qi<<endl<<endl;
+ statmap.erase(qi);
+ }
+ else {
+ // cout<<"State for qi remains open, qcount="<<qd.d_qcount<<", answercount="<<qd.d_answercount<<endl;
+ }
+ }
}
- else { // non-DNS ip
- nonDNSIP++;
+ catch(std::exception& e) {
+ if(verbose)
+ cout<<"error parsing packet: "<<e.what()<<endl;
+
+ if(pw)
+ pw->write();
+ parsefail++;
+ continue;
}
}
- cout<<"PCAP contained "<<pr.d_correctpackets<<" correct packets, "<<pr.d_runts<<" runts, "<< pr.d_oversized<<" oversize, "<<pr.d_nonetheripudp<<" non-UDP.\n";
+ cout<<"PCAP contained "<<pr.d_correctpackets<<" correct packets, "<<pr.d_runts<<" runts, "<< pr.d_oversized<<" oversize, "<<pr.d_nonetheripudp<<" non-UDP.\n";
}
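The dnsscope hunk above replaces the hard-coded 53/5300 test with the new `--port` option: a non-zero port matches either endpoint, while the default 0 keeps the old 53/5300 behaviour. The selection rule reduces to this standalone predicate (a sketch, not the dnsscope code itself):

```cpp
#include <cstdint>

// Sketch of the packet-selection rule introduced above: keep a packet
// when either endpoint matches the requested port, or, with port 0
// (the default), when either endpoint is 53 or 5300.
static bool keepPacket(uint16_t port, uint16_t sport, uint16_t dport)
{
  if (port > 0) {
    return sport == port || dport == port;
  }
  return sport == 53 || dport == 53 || sport == 5300 || dport == 5300;
}
```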
/*
Netmask(const ComboAddress& network, uint8_t bits=0xff): d_network(network)
{
d_network.sin4.sin_port = 0;
- setBits(network.isIPv4() ? std::min(bits, static_cast<uint8_t>(32)) : std::min(bits, static_cast<uint8_t>(128)));
+ setBits(bits);
}
Netmask(const sockaddr_in* network, uint8_t bits = 0xff): d_network(network)
{
d_network.sin4.sin_port = 0;
- setBits(std::min(bits, static_cast<uint8_t>(32)));
+ setBits(bits);
}
Netmask(const sockaddr_in6* network, uint8_t bits = 0xff): d_network(network)
{
d_network.sin4.sin_port = 0;
- setBits(std::min(bits, static_cast<uint8_t>(128)));
+ setBits(bits);
}
void setBits(uint8_t value)
{
- d_bits = value;
+ d_bits = d_network.isIPv4() ? std::min(value, static_cast<uint8_t>(32U)) : std::min(value, static_cast<uint8_t>(128U));
if (d_bits < 32) {
d_mask = ~(0xFFFFFFFF >> d_bits);
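The Netmask change above moves the family-dependent clamping out of the constructors and into `setBits`, so every caller gets it. In isolation the clamp is just (sketch under the assumption of a 32-bit IPv4 / 128-bit IPv6 maximum, matching the unit tests added later in this diff):

```cpp
#include <algorithm>
#include <cstdint>

// Standalone sketch of the clamp now applied inside setBits(): an
// out-of-range prefix length is reduced to the address-family maximum.
static uint8_t clampBits(bool isIPv4, uint8_t bits)
{
  const uint8_t maxBits = isIPv4 ? uint8_t{32} : uint8_t{128};
  return std::min(bits, maxBits);
}
```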
struct ZoneData
{
- ZoneData(Logr::log_t log, const std::string& zone) :
+ ZoneData(const std::shared_ptr<Logr::Logger>& log, const std::string& zone) :
d_log(log),
d_zone(zone),
d_now(time(nullptr)) {}
// Maybe use a SuffixMatchTree?
std::set<DNSName> d_delegations;
- Logr::log_t d_log;
+ std::shared_ptr<Logr::Logger> d_log;
DNSName d_zone;
time_t d_now;
- bool isRRSetAuth(const DNSName& qname, QType qtype) const;
- void parseDRForCache(DNSRecord& dr);
- pdns::ZoneMD::Result getByAXFR(const RecZoneToCache::Config&, pdns::ZoneMD&);
- pdns::ZoneMD::Result processLines(const std::vector<std::string>& lines, const RecZoneToCache::Config& config, pdns::ZoneMD&);
+ [[nodiscard]] bool isRRSetAuth(const DNSName& qname, QType qtype) const;
+ void parseDRForCache(DNSRecord& resourceRecord);
+ pdns::ZoneMD::Result getByAXFR(const RecZoneToCache::Config& config, pdns::ZoneMD& zonemd);
+ pdns::ZoneMD::Result processLines(const std::vector<std::string>& lines, const RecZoneToCache::Config& config, pdns::ZoneMD& zonemd);
void ZoneToCache(const RecZoneToCache::Config& config);
- vState dnssecValidate(pdns::ZoneMD&, size_t& zonemdCount) const;
+ vState dnssecValidate(pdns::ZoneMD& zonemd, size_t& zonemdCount) const;
};
bool ZoneData::isRRSetAuth(const DNSName& qname, QType qtype) const
break;
}
delegatedZone.chopOff();
- if (delegatedZone == g_rootdnsname || delegatedZone == d_zone)
+ if (delegatedZone == g_rootdnsname || delegatedZone == d_zone) {
break;
+ }
}
return !isDelegated;
}
-void ZoneData::parseDRForCache(DNSRecord& dr)
+void ZoneData::parseDRForCache(DNSRecord& dnsRecord)
{
- if (dr.d_class != QClass::IN) {
+ if (dnsRecord.d_class != QClass::IN) {
return;
}
- const auto key = pair(dr.d_name, dr.d_type);
+ const auto key = pair(dnsRecord.d_name, dnsRecord.d_type);
- dr.d_ttl += d_now;
+ dnsRecord.d_ttl += d_now;
- switch (dr.d_type) {
+ switch (dnsRecord.d_type) {
case QType::NSEC:
case QType::NSEC3:
break;
case QType::RRSIG: {
- const auto& rr = getRR<RRSIGRecordContent>(dr);
- const auto sigkey = pair(key.first, rr->d_type);
+ const auto rrsig = getRR<RRSIGRecordContent>(dnsRecord);
+ const auto sigkey = pair(key.first, rrsig->d_type);
auto found = d_sigs.find(sigkey);
if (found != d_sigs.end()) {
- found->second.push_back(rr);
+ found->second.push_back(rrsig);
}
else {
vector<shared_ptr<const RRSIGRecordContent>> sigsrr;
- sigsrr.push_back(rr);
+ sigsrr.push_back(rrsig);
d_sigs.insert({sigkey, sigsrr});
}
break;
}
case QType::NS:
- if (dr.d_name != d_zone) {
- d_delegations.insert(dr.d_name);
+ if (dnsRecord.d_name != d_zone) {
+ d_delegations.insert(dnsRecord.d_name);
}
break;
default:
auto found = d_all.find(key);
if (found != d_all.end()) {
- found->second.push_back(dr);
+ found->second.push_back(dnsRecord);
}
else {
- vector<DNSRecord> v;
- v.push_back(dr);
- d_all.insert({key, v});
+ vector<DNSRecord> dnsRecords;
+ dnsRecords.push_back(dnsRecord);
+ d_all.insert({key, dnsRecords});
}
}
ComboAddress primary = ComboAddress(config.d_sources.at(0), 53);
uint16_t axfrTimeout = config.d_timeout;
size_t maxReceivedBytes = config.d_maxReceivedBytes;
- const TSIGTriplet tt = config.d_tt;
+ const TSIGTriplet tsigTriplet = config.d_tt;
ComboAddress local = config.d_local;
if (local == ComboAddress()) {
local = pdns::getQueryLocalAddress(primary.sin4.sin_family, 0);
}
- AXFRRetriever axfr(primary, d_zone, tt, &local, maxReceivedBytes, axfrTimeout);
+ AXFRRetriever axfr(primary, d_zone, tsigTriplet, &local, maxReceivedBytes, axfrTimeout);
Resolver::res_t nop;
vector<DNSRecord> chunk;
time_t axfrStart = time(nullptr);
time_t axfrNow = time(nullptr);
- while (axfr.getChunk(nop, &chunk, (axfrStart + axfrTimeout - axfrNow))) {
- for (auto& dr : chunk) {
+ while (axfr.getChunk(nop, &chunk, (axfrStart + axfrTimeout - axfrNow)) != 0) {
+ for (auto& dnsRecord : chunk) {
if (config.d_zonemd != pdns::ZoneMD::Config::Ignore) {
- zonemd.readRecord(dr);
+ zonemd.readRecord(dnsRecord);
}
- parseDRForCache(dr);
+ parseDRForCache(dnsRecord);
}
axfrNow = time(nullptr);
if (axfrNow < axfrStart || axfrNow - axfrStart > axfrTimeout) {
}
}
if (config.d_zonemd != pdns::ZoneMD::Config::Ignore) {
- bool validationDone, validationSuccess;
+ bool validationDone = false;
+ bool validationSuccess = false;
zonemd.verify(validationDone, validationSuccess);
- d_log->info("ZONEMD digest validation", "validationDone", Logging::Loggable(validationDone),
+ d_log->info(Logr::Info, "ZONEMD digest validation", "validationDone", Logging::Loggable(validationDone),
"validationSuccess", Logging::Loggable(validationSuccess));
if (!validationDone) {
return pdns::ZoneMD::Result::NoValidationDone;
{
std::vector<std::string> lines;
#ifdef HAVE_LIBCURL
- MiniCurl mc;
+ MiniCurl miniCurl;
ComboAddress local = config.d_local;
- std::string reply = mc.getURL(config.d_sources.at(0), nullptr, local == ComboAddress() ? nullptr : &local, config.d_timeout, false, true);
+ std::string reply = miniCurl.getURL(config.d_sources.at(0), nullptr, local == ComboAddress() ? nullptr : &local, static_cast<int>(config.d_timeout), false, true);
if (config.d_maxReceivedBytes > 0 && reply.size() > config.d_maxReceivedBytes) {
// We should actually detect this *during* the GET
throw std::runtime_error("Retrieved data exceeds maxReceivedBytes");
zpt.setMaxIncludes(0);
while (zpt.get(drr)) {
- DNSRecord dr(drr);
+ DNSRecord dnsRecord(drr);
if (config.d_zonemd != pdns::ZoneMD::Config::Ignore) {
- zonemd.readRecord(dr);
+ zonemd.readRecord(dnsRecord);
}
- parseDRForCache(dr);
+ parseDRForCache(dnsRecord);
}
if (config.d_zonemd != pdns::ZoneMD::Config::Ignore) {
- bool validationDone, validationSuccess;
+ bool validationDone = false;
+ bool validationSuccess = false;
zonemd.verify(validationDone, validationSuccess);
- d_log->info("ZONEMD digest validation", "validationDone", Logging::Loggable(validationDone),
+ d_log->info(Logr::Info, "ZONEMD digest validation", "validationDone", Logging::Loggable(validationDone),
"validationSuccess", Logging::Loggable(validationSuccess));
if (!validationDone) {
return pdns::ZoneMD::Result::NoValidationDone;
{
zonemdCount = 0;
- SyncRes sr({d_now, 0});
- sr.setDoDNSSEC(true);
- sr.setDNSSECValidationRequested(true);
+ SyncRes resolver({d_now, 0});
+ resolver.setDoDNSSEC(true);
+ resolver.setDNSSECValidationRequested(true);
dsmap_t dsmap; // Actually a set
- vState dsState = sr.getDSRecords(d_zone, dsmap, false, 0, "");
+ vState dsState = resolver.getDSRecords(d_zone, dsmap, false, 0, "");
if (dsState != vState::Secure) {
return dsState;
}
skeyset_t dnsKeys;
sortedRecords_t records;
- if (zonemd.getDNSKEYs().size() == 0) {
+ if (zonemd.getDNSKEYs().empty()) {
return vState::BogusUnableToGetDNSKEYs;
}
for (const auto& key : zonemd.getDNSKEYs()) {
return dnsKeyState;
}
- if (validKeys.size() == 0) {
+ if (validKeys.empty()) {
return vState::BogusNoValidDNSKEY;
}
const auto& nsec3s = zonemd.getNSEC3s();
cspmap_t csp;
- vState nsecValidationStatus;
+ vState nsecValidationStatus = vState::Indeterminate;
- if (nsecs.records.size() > 0 && nsecs.signatures.size() > 0) {
+ if (!nsecs.records.empty() && !nsecs.signatures.empty()) {
+ // Validate the NSEC
nsecValidationStatus = validateWithKeySet(d_now, d_zone, nsecs.records, nsecs.signatures, validKeys, std::nullopt);
csp.emplace(std::pair(d_zone, QType::NSEC), nsecs);
}
- else if (nsec3s.records.size() > 0 && nsec3s.signatures.size() > 0) {
+ else if (!nsec3s.records.empty() && !nsec3s.signatures.empty()) {
// Validate NSEC3PARAMS
records.clear();
for (const auto& rec : zonemd.getNSEC3Params()) {
}
nsecValidationStatus = validateWithKeySet(d_now, d_zone, records, zonemd.getRRSIGs(), validKeys, std::nullopt);
if (nsecValidationStatus != vState::Secure) {
- d_log->info("NSEC3PARAMS records did not validate");
+ d_log->info(Logr::Warning, "NSEC3PARAMS records did not validate");
return nsecValidationStatus;
}
+ // Validate the NSEC3
csp.emplace(std::pair(zonemd.getNSEC3Label(), QType::NSEC3), nsec3s);
}
else {
- d_log->info("No NSEC(3) records and/or RRSIGS found to deny ZONEMD");
+ d_log->info(Logr::Warning, "No NSEC(3) records and/or RRSIGS found to deny ZONEMD");
return vState::BogusInvalidDenial;
}
if (nsecValidationStatus != vState::Secure) {
- d_log->info("zone NSEC(3) record does not validate");
+ d_log->info(Logr::Warning, "zone NSEC(3) record does not validate");
return nsecValidationStatus;
}
auto denial = getDenial(csp, d_zone, QType::ZONEMD, false, false, std::nullopt, true);
if (denial == dState::NXQTYPE) {
- d_log->info("Validated denial of absence of ZONEMD record");
+ d_log->info(Logr::Info, "Validated denial of existence of ZONEMD record");
return vState::Secure;
}
- d_log->info("No ZONEMD record, but NSEC(3) record does not deny it");
+ d_log->info(Logr::Warning, "No ZONEMD record, but NSEC(3) record does not deny it");
return vState::BogusInvalidDenial;
}
void ZoneData::ZoneToCache(const RecZoneToCache::Config& config)
{
if (config.d_sources.size() > 1) {
- d_log->info("Multiple sources not yet supported, using first");
+ d_log->info(Logr::Warning, "Multiple sources not yet supported, using first");
}
if (config.d_dnssec == pdns::ZoneMD::Config::Require && (g_dnssecmode == DNSSECMode::Off || g_dnssecmode == DNSSECMode::ProcessNoValidate)) {
auto zonemd = pdns::ZoneMD(DNSName(config.d_zone));
pdns::ZoneMD::Result result = pdns::ZoneMD::Result::OK;
if (config.d_method == "axfr") {
- d_log->info("Getting zone by AXFR");
+ d_log->info(Logr::Info, "Getting zone by AXFR");
result = getByAXFR(config, zonemd);
}
else {
vector<string> lines;
if (config.d_method == "url") {
- d_log->info("Getting zone by URL");
+ d_log->info(Logr::Info, "Getting zone by URL");
lines = getURL(config);
}
else if (config.d_method == "file") {
- d_log->info("Getting zone from file");
+ d_log->info(Logr::Info, "Getting zone from file");
lines = getLinesFromFile(config.d_sources.at(0));
}
result = processLines(lines, config, zonemd);
// Validate DNSKEYs and ZONEMD, rest of records are validated on-demand by SyncRes
if (config.d_dnssec == pdns::ZoneMD::Config::Require || (g_dnssecmode != DNSSECMode::Off && g_dnssecmode != DNSSECMode::ProcessNoValidate && config.d_dnssec != pdns::ZoneMD::Config::Ignore)) {
- size_t zonemdCount;
+ size_t zonemdCount = 0;
auto validationStatus = dnssecValidate(zonemd, zonemdCount);
- d_log->info("ZONEMD record related DNSSEC validation", "validationStatus", Logging::Loggable(validationStatus),
+ d_log->info(Logr::Info, "ZONEMD record related DNSSEC validation", "validationStatus", Logging::Loggable(validationStatus),
"zonemdCount", Logging::Loggable(zonemdCount));
if (config.d_dnssec == pdns::ZoneMD::Config::Require && validationStatus != vState::Secure) {
throw PDNSException("ZONEMD required DNSSEC validation failed");
switch (qtype) {
case QType::NSEC:
case QType::NSEC3:
- break;
case QType::RRSIG:
break;
default: {
vector<shared_ptr<const RRSIGRecordContent>> sigsrr;
- auto it = d_sigs.find(key);
- if (it != d_sigs.end()) {
- sigsrr = it->second;
+ auto iter = d_sigs.find(key);
+ if (iter != d_sigs.end()) {
+ sigsrr = iter->second;
}
bool auth = isRRSetAuth(qname, qtype);
// Same decision as updateCacheFromRecords() (we do not test for NSEC since we skip those completely)
ZoneData data(log, config.d_zone);
data.ZoneToCache(config);
state.d_waittime = config.d_refreshPeriod;
- log->info("Loaded zone into cache", "refresh", Logging::Loggable(state.d_waittime));
+ log->info(Logr::Info, "Loaded zone into cache", "refresh", Logging::Loggable(state.d_waittime));
}
catch (const PDNSException& e) {
log->error(Logr::Error, e.reason, "Unable to load zone into cache, will retry", "exception", Logging::Loggable("PDNSException"), "refresh", Logging::Loggable(state.d_waittime));
log->error(Logr::Error, e.what(), "Unable to load zone into cache, will retry", "exception", Logging::Loggable("std::runtime_error"), "refresh", Logging::Loggable(state.d_waittime));
}
catch (...) {
- log->info("Unable to load zone into cache, will retry", "refresh", Logging::Loggable(state.d_waittime));
+ log->info(Logr::Error, "Unable to load zone into cache, will retry", "refresh", Logging::Loggable(state.d_waittime));
}
state.d_lastrun = time(nullptr);
- return;
}
TSIGTriplet d_tt; // Authentication data
size_t d_maxReceivedBytes{0}; // Maximum size
time_t d_retryOnError{60}; // Retry on error
- time_t d_refreshPeriod{24 * 3600}; // Time between refetch
+ time_t d_refreshPeriod{static_cast<time_t>(24 * 3600)}; // Time between refetch
uint32_t d_timeout{20}; // timeout in seconds
pdns::ZoneMD::Config d_zonemd{pdns::ZoneMD::Config::Validate};
pdns::ZoneMD::Config d_dnssec{pdns::ZoneMD::Config::Validate};
{
time_t d_lastrun{0};
time_t d_waittime{0};
- uint64_t d_generation;
+ uint64_t d_generation{0};
};
static void maintainStates(const map<DNSName, Config>&, map<DNSName, State>&, uint64_t mygeneration);
else if (res == GNUTLS_E_AGAIN) {
return IOState::NeedWrite;
}
- warnlog("Warning, non-fatal error while writing to TLS connection: %s", gnutls_strerror(res));
+ vinfolog("Warning, non-fatal error while writing to TLS connection: %s", gnutls_strerror(res));
}
}
while (pos < toWrite);
else if (res == GNUTLS_E_AGAIN) {
return IOState::NeedRead;
}
- warnlog("Warning, non-fatal error while writing to TLS connection: %s", gnutls_strerror(res));
+ vinfolog("Warning, non-fatal error while writing to TLS connection: %s", gnutls_strerror(res));
}
}
while (pos < toRead);
BOOST_CHECK(all < empty);
BOOST_CHECK(empty > full);
BOOST_CHECK(full < empty);
+
+ /* invalid (too large) mask */
+ {
+ Netmask invalidMaskV4("192.0.2.1/33");
+ BOOST_CHECK_EQUAL(invalidMaskV4.getBits(), 32U);
+ BOOST_CHECK(invalidMaskV4.getNetwork() == ComboAddress("192.0.2.1"));
+ Netmask invalidMaskV6("fe80::92fb:a6ff:fe4a:51da/129");
+ BOOST_CHECK_EQUAL(invalidMaskV6.getBits(), 128U);
+ BOOST_CHECK(invalidMaskV6.getNetwork() == ComboAddress("fe80::92fb:a6ff:fe4a:51da"));
+ }
+ {
+ Netmask invalidMaskV4(ComboAddress("192.0.2.1"), 33);
+ BOOST_CHECK_EQUAL(invalidMaskV4.getBits(), 32U);
+ BOOST_CHECK(invalidMaskV4.getNetwork() == ComboAddress("192.0.2.1"));
+ Netmask invalidMaskV6(ComboAddress("fe80::92fb:a6ff:fe4a:51da"), 129);
+ BOOST_CHECK_EQUAL(invalidMaskV6.getBits(), 128U);
+ BOOST_CHECK(invalidMaskV6.getNetwork() == ComboAddress("fe80::92fb:a6ff:fe4a:51da"));
+ }
}
static std::string NMGOutputToSorted(const std::string& str)
}
else {
SLOG(g_log<<Logger::Info<<"Set effective user id to "<<uid<<endl,
- g_slog->withName("runtime")->info("Set effective user", "uid", Logging::Loggable(uid)));
+ g_slog->withName("runtime")->info(Logr::Info, "Set effective user", "uid", Logging::Loggable(uid)));
}
}
}
import base64
from datetime import datetime, timedelta
import os
+import sys
import time
import unittest
import dns
Advanced: A question with ECS version larger than 0 yields BADVERS
"""
+ if sys.version_info >= (3, 11) and sys.version_info < (3, 12):
+ raise unittest.SkipTest("Test skipped, see https://github.com/PowerDNS/pdns/pull/12912")
+
name = 'ednsversionrule.advanced.tests.powerdns.com.'
query = dns.message.make_query(name, 'A', 'IN', use_edns=1)
+++ /dev/null
-#!/usr/bin/env bash
-
-"$@" 2>&1 | ts "[%F %T]"
-
-exit ${PIPESTATUS[0]}
c.run('curl -LO http://s3-us-west-1.amazonaws.com/umbrella-static/top-1m.csv.zip')
c.run('unzip top-1m.csv.zip -d .')
c.run('chmod +x /opt/pdns-recursor/bin/* /opt/pdns-recursor/sbin/*')
- c.run(f'DNSBULKTEST=/usr/bin/dnsbulktest RECURSOR=/opt/pdns-recursor/sbin/pdns_recursor RECCONTROL=/opt/pdns-recursor/bin/rec_control THRESHOLD=95 TRACE=no ./timestamp ./recursor-test 5300 100 {threads} {mthreads} {shards}')
+ c.run(f'DNSBULKTEST=/usr/bin/dnsbulktest RECURSOR=/opt/pdns-recursor/sbin/pdns_recursor RECCONTROL=/opt/pdns-recursor/bin/rec_control THRESHOLD=95 TRACE=no ./recursor-test 5300 100 {threads} {mthreads} {shards}')
@task
def install_swagger_tools(c):