This commit fixes a crash in generic TLS stream code, which could be
reproduced during some runs of the 'sslyze' tool.
The intention of this commit is twofold.
Firstly, it ensures that the TLS socket object cannot be destroyed too
early. Now it is being deleted alongside the underlying TCP socket
object.
Secondly, it ensures that the TLS socket object cannot be destroyed as
a result of calling 'tls_do_bio()' (the primary function which
performs encryption/decryption during IO), as the rest of the code did
not expect that. This code path is now fixed.
Resolve "RPZ rpz-nsip rules seem not to understand stub and static-stub zones and don't handle DNS_R_GLUE result well ..."
Closes #3232
See merge request isc-projects/bind9!6037
RPZ NSIP and NSDNAME checks were failing with "unrecognized NS
rpz_rrset_find() failed: glue" when static or static-stub zones
were used to resolve the query name.
Add tests using stub and static-stub zones that are expected to
be filtered and not filtered against NSIP and NSDNAME rules:
- stub and static-stub queries are expected to be filtered
- stub-nomatch and static-stub-nomatch queries are expected to pass
named_config_getdefault() was missing 'void' in the function
definition. This broke the build with clang-15, which complained that
the declaration (with 'void' in the argument list) did not match the
definition (without it).
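The mismatch looks roughly like this (return type simplified and the
body hypothetical; the point is the empty parameter list):

/* Declaration (a proper prototype), e.g. in the header: */
const char *named_config_getdefault(void);

/* Old definition: the empty () is a non-prototype (K&R-style)
 * parameter list, which clang-15 no longer accepts as matching the
 * prototype above. */
const char *
named_config_getdefault() {
	return (defaultconf); /* 'defaultconf' is hypothetical */
}

/* Fixed definition: */
const char *
named_config_getdefault(void) {
	return (defaultconf);
}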
Somewhere in the move from netmgr/uv-compat.h to uv.c, the
uv_os_getenv() implementation was lost. Restore the implementation,
so we can support Debian stretch for a couple more months.
From the ld man page:
When creating a dynamically linked executable, using the -E option or
the --export-dynamic option causes the linker to add all symbols to
the dynamic symbol table. The dynamic symbol table is the set of
symbols which are visible from dynamic objects at run time.
This should allow backtrace(3) to fully resolve symbols when
creating a backtrace on an assertion failure.
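For illustration, this is the sort of backtrace(3) usage involved (a
sketch, not the actual BIND assertion handler); without
--export-dynamic, the symbols[] strings typically contain only raw
addresses instead of function names:

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* Print a symbolized stack trace to stderr, e.g. from an assertion
 * failure handler. */
static void
print_backtrace(void) {
	void *frames[32];
	int depth = backtrace(frames, 32);
	char **symbols = backtrace_symbols(frames, depth);

	if (symbols == NULL) {
		return;
	}
	for (int i = 0; i < depth; i++) {
		fprintf(stderr, "%s\n", symbols[i]);
	}
	free(symbols);
}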
Add a #if to make it clear that struct xrdata->order is only used
in DNS_RDATASET_FIXED mode.
Re-order some variable declarations to merge two #if blocks into one.
As we are going to use libuv outside of the netmgr, we need the shims to
be readily available for the rest of the codebase.
Move the "netmgr/uv-compat.h" to <isc/uv.h> and netmgr/uv-compat.c to
uv.c, and as a rule of thumb, the users of libuv should include
<isc/uv.h> instead of <uv.h> directly.
Additionally, merge netmgr/uverr2result.c into uv.c and rename the
single function from isc__nm_uverr2result() to isc_uverr2result().
Move the netmgr socket related functions from netmgr/netmgr.c and
netmgr/uv-compat.c to netmgr/socket.c, so they are all present in
the same place. Adjust the names of a couple of internal functions
accordingly.
These checks have been redundant since the `rbtdb64` implementation
was removed in 2018 (commit 784087390a). It isn't possible to create
a zone that uses `database "rbt64"` now that the `rbt64` database
implementation has been removed, so the checks will always fail.
Sometimes the compiler is unable to see that the `empty` variable was
initialized by the call to is_empty(), which can cause a build
failure; I encountered this with CFLAGS=-Os. So get rid of it and use
the result from `is_empty()` instead.
The rbtnode->rpz flag was left behind when rbt and rpz were disentangled
by CHANGES #4576. Removing it makes the comment above correct again.
This reduces the flags so they fit in a 32 bit word again. On 64
bit systems there is still padding so it doesn't change the size
of an rbtnode. On 32 bit systems it reduces an rbtnode by 4 bytes.
Default paths were not substituted correctly when a Python-only build
was used, i.e. it affected only ReadTheDocs. The incorrect rst_epilog
was overridden by the Makefile for all "ordinary" builds.
This error was introduced by 3f78c60539.
Related: !5815
The dig commands appear to be failing unexpectedly on some platforms
when rate limiting kicks in and the response is dropped. Correct
behaviour should be for dig to retry the query. Set +qr and capture
stdout and stderr of each of the dig commands involved.
3034 next = ISC_LIST_NEXT(query, link);
3035 } else {
3036 next = NULL;
3037 }
CID 352554 (#1 of 1): Dereference before null check (REVERSE_INULL)
check_after_deref: Null-checking connectquery suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
3038 if (connectquery != NULL) {
3039 query_detach(&connectquery);
3040 }
In '_check_apex_dnskey' we check, for each key (KEY1 to KEY4),
whether it is present in the DNSKEY RRset when it should be.
However, we only grep the dig output for the first seven fields
(owner, ttl, class, type, flags, protocol, algorithm), which can be
the same for different keys.
For example, KEY1 may be a KSK predecessor and KEY2 a KSK successor;
the DNSKEY records for these two keys are identical up to the public
key field. This can cause test failures if KEY1 needs to be present
but KEY2 does not, because when grepping for KEY2 we will falsely
detect the key as present (the grep matches KEY1).
Fix the function by grepping for the first seven fields in the
corresponding key file and retrieving the public key part, then
grepping for that in the dig output.
It might be useful to display the built-in configuration with all its
values. It should make it easier to test which default values have
changed in a new release.
Related: #1326
- var_decl: Declaring variable "tbuf" without initializer
- assign: Assigning: "target.base" = "tbuf", which points to
uninitialized data
- assign: Assigning: "r.base" = "target.base", which points to
uninitialized data
I expect it would always initialize the length correctly. Add a
simple initialization to silence Coverity.
Coverity detected issues:
- var_decl: Declaring variable "diff" without initializer.
- uninit_use_in_call: Using uninitialized value "diff.tuples.head" when
calling "dns_diff_clear".
The parser ensures that new-zones-directory has a qstring parameter
before execution can reach this place, so dir == NULL should never
happen with any configuration. Replace the silent check with an
INSIST.
FALLTHROUGH is a copy of how it is defined in <isc/util.h>.
UNREACHABLE follows the model used in the MacOS
/usr/include/c++/v1/cstdlib header to determine whether
__builtin_unreachable is available.
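A sketch of what the two macros can expand to (the exact feature
tests in <isc/util.h> and in the cstdlib model differ):

#include <stdlib.h>

/* Annotate deliberate switch-case fallthrough. */
#if defined(__GNUC__) && __GNUC__ >= 7
#define FALLTHROUGH __attribute__((fallthrough))
#else
#define FALLTHROUGH
#endif

/* Mark a branch the optimizer may assume is never taken; fall back
 * to abort() when __builtin_unreachable() is unavailable. */
#if defined(__GNUC__) || defined(__clang__)
#define UNREACHABLE() __builtin_unreachable()
#else
#define UNREACHABLE() abort()
#endif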
SIG and RRSIG records for private algorithms are supposed to contain
the name / OID of the algorithm used to generate them at the start
of the signature field.
It was discovered that MariaDB 10 didn't define LIBMARIADB, leading
to compilation errors of the MySQL DLZ modules on Debian stretch.
Use MARIADB_BASE_VERSION instead, which is defined in all tested
MariaDB versions.
The DNS catalog zones draft version 5 document requires that catalog
zones consumers must reset the member zone's internal zone state when
its unique label changes (either within the same catalog zone or
during change of ownership performed using the "coo" property).
BIND already behaves like that, and, in fact, doesn't support keeping
the zone state during change of ownership even if the unique label
has been kept the same, because BIND always removes the member zone
and adds it back during unique label renaming or change of ownership.
Document the described behavior and add a log message to inform when
unique label renaming occurs.
Add a system test case with unique label renaming.
We check that the `rdclass` is of type IN in the
`dns_catz_update_process()` function, and all the other static
functions where similar checks exist are called after (and as a result
of) that function being called, so those checks are effectively
redundant.
There is already a check for the missing version property case
(catalog-bad1.example), and this new test should result in the same
outcome, but differs in that a version record exists in the zone but
is of the wrong type (A instead of the expected TXT).
According to DNS catalog zones draft version 5 document, the CLASS field
of every RR in a catalog zone MUST be IN.
Add a new check in the catz system test to verify that a non-IN class
catalog zone (in this case CH) fails to load.
BIND does not support having a non-IN class RR in an IN class zone,
or a non-IN class zone in an IN class view, so to verify that BIND
respects the mentioned restriction we must try to add a non-IN class
catalog zone and check that it does not succeed.
The `named` configuration files had to be restructured to put all the
zones inside views, which also resulted in some corresponding changes
in the tests.sh script.
When parsing the configuration file, log a warning message in the
configure_view() function when encountering a `catalog-zones`
option in a view with a non-IN rdata class.
The DNS catalog zones draft version 5 document describes various
situations in which a catalog zone must be considered "broken" and
not be processed.
Implement those checks in catz.c and add corresponding system tests.
Add DNS extended errors 3 (Stale Answer) and 19 (Stale NXDOMAIN Answer)
to responses. Add extra text with the reason why the stale answer was
returned.
To test, we need to change the configuration so that, for the first
set of tests, the stale-refresh-time window does not interfere with
the expected extended errors.
In zone.c, "me" strings were defined for functions that could be
traced with the "ENTER" macro.
Use __func__, which is defined by the compiler and is less prone to
copy-and-paste errors.
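An illustrative before/after of the pattern (zone_debuglog() and its
argument order are assumptions based on context):

/* Before: every traced function carried a hand-written "me" string,
 * which silently goes stale when code is copied between functions. */
#define ENTER zone_debuglog(zone, me, 1, "enter")

/* After: the compiler always supplies the correct function name. */
#undef ENTER
#define ENTER zone_debuglog(zone, __func__, 1, "enter")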
The shutdown test sends 'rndc status' commands in parallel with
'rndc stop'. A new rndc connection arriving will reference the ACL
environment to see whether the client is allowed to connect.
Commit c0995bc380 added a mutex lock to ns_interfacemgr_getaclenv(),
but if the new connection arrives while the interfaces are being
purged during shutdown, that lock is already being held. If the
connection event slips in ahead of one of the netmgr's "stop
listening" events on a worker thread, a deadlock can occur.
The fix is to not hold the interfacemgr lock while shutting down
interfaces, only while actually traversing the interface list to
identify interfaces needing shutdown.
Previously, fctx_done() detached the fctx but did not clear the
pointer passed into it from the caller. In some conditions, when
rctx_done() was reached while waiting for a validator to complete,
fctx_done() could be called twice on the same fetch, causing a double
detach. fctx_done() now clears the fctx pointer, to reduce the chances
of such mistakes.
There is a possibility for `udp_recv()` to be called with `eresult`
being `ISC_R_SUCCESS` but with an already-deactivated `resp`, which
can happen when the request has been canceled in the meantime.
This commit ensures that write callbacks are getting called only after
the data has been sent via the network.
Without this fix, a write callback could get called before the
encrypted data had actually been sent to the network; instead, it
would get called right after the data had been passed to OpenSSL
(i.e. encrypted).
Most likely, the issue did not reveal itself often because the
callback call was asynchronous, so in most cases it would have been
called after the data had been sent, but that was not guaranteed by
the code logic.
Also, this commit removes one memory allocation (netievent) from a hot
path, as there is no need to call this callback asynchronously
anymore.
There was a query_detach() call missing in dig, which could lead to
dig hanging on TLS context creation errors. This commit fixes that.
The error was introduced because the Strict TLS implementation was
initially made on top of an older version of the code, where this
extra query_detach() call was not needed.
This seems to be the most appropriate way to ensure consistency
between release tarballs and the public presentation on ReadTheDocs.
The previous attempt, removing the docutils constraint and relying on
pip's dependency solver to pick the same packages as in CI, was
flawed: RTD installs a slightly different set of packages, so it was
inherently unreliable.
As a result, RTD pulled in sphinx-rtd-theme==0.4.3 while CI had
1.0.0, and this inconsistency caused the Table of Contents in the
Release Notes to render incorrectly. The previous solution was to
downgrade docutils to < 0.17, but I think we should rather pin exact
versions.
For the long history of messing with versions, read also
isc-projects/bind9@2a8eda0084, isc-projects/images@d4435b97be,
isc-projects/bind9@6a2daddf5b.
There was an error in the AX_PROG_CC_FOR_BUILD macro that cached the
literal name of the cache variable `saved_ac_cv_c_compiler_gnu`
instead of the value of said variable, breaking consecutive runs of
the ./configure script with caching enabled.
Currently the CI images we use to build docs (which subsequently get
into release tarballs) are using docutils 0.17.1, the latest version
that fulfills the Sphinx 4.5.0 requirement for docutils < 0.18.
The old requirement for docutils < 0.17 was causing a discrepancy
between the way we build release artifacts and the docs on
ReadTheDocs.org, which uses doc/arm/requirements.txt from our repo.
Remove the limit for RTD in the hope that it will pull the latest
permissible version of docutils.
For the long history of messing with the docutils version, read also
isc-projects/images@d4435b97be, isc-projects/bind9@6a2daddf5b.
In dns_adb_cancelfind(), we need to release the find lock and
then acquire the bucket and find locks in that order, for
consistency with the locking hierarchy elsewhere. Previously we
were only acquiring the bucket lock.
Also rewrote the function for better readability.
Man pages for dig/mdig/delv used `.. option:: +[no]bla` to describe
two options at once, but very old Sphinx does not support the [] in
option names.
The solution is to split the negative and positive options into the
`+bla, +nobla` form. In the end it improves readability, because it
transforms hard-to-read strings with double brackets from
`+[no]subnet=addr[/prefix-length]` to
`+subnet=addr[/prefix-length], +nosubnet`.
As a side effect it also allows easier linking to dig/mdig/delv
options using their names directly, instead of always overriding the
link target to the `+[no]bla` form.
Transformation was done using regex:
s/:: +\[no\]\(.*\)/:: +\1, +no\1
... and a manual review around occurrences matching the regex
+no.*=
Fixes: #3301
The connect()ed UDP socket provides feedback on a variety of ICMP
errors (e.g. port unreachable), which BIND can then use to decide what
to do with the errors (report them to the client, try again with a
different nameserver, etc.). However, Linux's implementation does not
report what it considers "transient" conditions, defined as:
destination host unreachable, destination network unreachable, source
route failed, and message too big.
Explicitly enable IP_RECVERR / IPV6_RECVERR (via libuv uv_udp_bind()
flag) to learn about ICMP destination network/host unreachable.
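At the socket level, the libuv flag amounts to roughly the following
(a sketch with error handling omitted; UV_UDP_LINUX_RECVERR is the
corresponding libuv bind flag):

#include <netinet/in.h>
#include <sys/socket.h>

/* Enable extended ICMP error reporting on a UDP socket on Linux. */
static void
enable_recverr(int fd, int family) {
	int on = 1;

	if (family == AF_INET) {
		(void)setsockopt(fd, IPPROTO_IP, IP_RECVERR, &on,
				 sizeof(on));
	} else {
		(void)setsockopt(fd, IPPROTO_IPV6, IPV6_RECVERR, &on,
				 sizeof(on));
	}
}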
When we compile with a libuv version that provides some capabilities
via flags passed to, e.g., uv_udp_listen() or uv_udp_bind(), a call
with such flags would fail with invalid arguments when an older libuv
version that doesn't understand a flag available at compile time is
linked at runtime.
Enforce a minimal libuv version when flags were available at compile
time but are not available at runtime. This check is less strict than
enforcing the runtime libuv version to be the same as or higher than
the compile-time libuv version.
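A sketch of such a guard, using libuv's compile-time UV_VERSION_HEX
macro and runtime uv_version(); the 1.42.0 threshold for the UDP
RECVERR flag is an assumption here:

#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

/* If a flag was visible at compile time, require a runtime libuv
 * that also understands it, instead of requiring runtime >=
 * compile-time across the board. */
static void
check_libuv_version(void) {
#if UV_VERSION_HEX >= 0x012a00 /* 1.42.0 */
	if (uv_version() < 0x012a00) {
		fprintf(stderr, "libuv too old at runtime: %s\n",
			uv_version_string());
		exit(EXIT_FAILURE);
	}
#endif
}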
The interfacemgr and the .route socket were being detached while the
network manager had a pending read on the socket. Instead of detaching
from the socket, we need to cancel the read, which in turn will detach
the route socket and the associated interfacemgr.
Sphinx "standard domain" provides directive types ".. program::" and
".. option::" to create link anchor for a program name + option combination.
These can be referenced using :ref:`program option` syntax.
The problem is that Sphinx 1.8.5 (e.g. in Ubuntu 18.04) generates
conflicting link targets if a page contains two option directives
starting with the same word, e.g.:
.. program:: dnssec-settime
.. option:: -P date
.. option:: -P ds date
The reason is that the option directive consumes only the first word
as the "option name" (-P) and all the rest is considered the "option
argument" (date, ds date). Newer versions of Sphinx (e.g. 4.5.0)
handle this by creating numbered link anchors, but older versions
warn, and the BIND build system turns the warning into a hard error.
To handle that, we use the method recommended by a Sphinx maintainer:
https://github.com/sphinx-doc/sphinx/issues/10218#issuecomment-1059925508
As a bonus, it provides more accurate link anchors for sub-options.
Alternatives considered:
- Replacing standard domain definition of .. option - causes more
problems, see BIND issue #3294.
- Removing hyperlinks for options - that would be a step back.
Fixes: #3295
Instead of checking if we need to re-seed for every isc_random call,
seed the random number generator in the libisc global initializer
and the per-thread initializer.
It used to require two 32-bit integer divisions to get a random number
less than some limit. Now we use Daniel Lemire's "nearly-divisionless"
algorithm for unbiased bounded random numbers, which requires one
64-bit integer multiply in the usual case, and one 32-bit integer
division in rare slow cases. Even the slow cases are faster than
before; there are also fewer branches.
I think this algorithm is exceptionally beautiful. It also has more
clever tricks than lines of code, so I have done my best to explain
how it works.
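For reference, a self-contained sketch of the nearly-divisionless
algorithm (random32() stands in for the underlying uniform 32-bit
generator):

#include <stdint.h>

uint32_t random32(void); /* the underlying 32-bit generator */

/* Return a uniform value in [0, limit): usually one 64-bit multiply;
 * the single 32-bit division runs only on the rare slow path. */
static uint32_t
random_uniform(uint32_t limit) {
	uint64_t m = (uint64_t)random32() * (uint64_t)limit;
	uint32_t l = (uint32_t)m;

	if (l < limit) {
		uint32_t t = -limit % limit; /* == 2^32 mod limit */
		while (l < t) {
			m = (uint64_t)random32() * (uint64_t)limit;
			l = (uint32_t)m;
		}
	}

	return ((uint32_t)(m >> 32));
}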
dns__adb_attach() had an assertion that prevented attaching to a
dns_adb if the dns_adb was shutting down. There was a race between
checking for .exiting in dns_adb_createfind() and creating
new_adbfind(): another thread could have set .exiting to true between
the check and the creation.
Remove the assertion and allow attaching to a dns_adb even while it is
shutting down. The dns_adb shutting down would be noticed only moments
later, when any other callback is called.
The rctx_chaseds() function calls dns_resolver_createfetch(), passing
fctx->task as the target task to run resume_dslookup() from. This
breaks task-based serialization of events as fctx->task is the task that
the dns_resolver_createfetch() caller wants to receive its fetch
completion event in; meanwhile, intermediate fetches started by the
resolver itself (e.g. related to QNAME minimization) must use
res->buckets[bucketnum].task instead. This discrepancy may cause
trouble if the resume_dslookup() callback happens to be run concurrently
with e.g. fctx_doshutdown().
Fix by passing the correct task to dns_resolver_createfetch() in
rctx_chaseds().
BIND 9 plugins are installed using Automake's pkglib_LTLIBRARIES stanza,
which causes the relevant shared objects to be placed in the
$(libdir)/@PACKAGE@/ directory, where @PACKAGE@ is expanded to the
lowercase form of the first argument passed to AC_INIT(), i.e. "bind".
Meanwhile, NAMED_PLUGINDIR - the preprocessor macro that the
ns_plugin_expandpath() function uses for determining the absolute path
to a plugin for which only a filename has been provided (rather than a
path) - is set to $(libdir)/named. This discrepancy breaks loading
plugins using just their filenames. Fix the issue (and also prevent it
from reoccurring) by setting NAMED_PLUGINDIR to $(pkglibdir).
The Debian 11 (bullseye) Docker image, which GitLab CI uses for building
documentation, currently contains the following package versions:
- Sphinx 4.5.0
- sphinx-rtd-theme 1.0.0
- docutils 0.17.1
Regenerate the man pages to match contents produced in a Sphinx
environment using the above package versions. This is necessary to
prevent the "docs" GitLab CI job from failing.
PyLint 2.13.7 reports the following error:
bin/tests/system/doth/conftest.py:34:28: E0601: Using variable 'stderr' before assignment (used-before-assignment)
The reason the current code has not caused problems before is that
invoking gnutls-cli with just the --logfile=/dev/null argument causes it
to always return with a non-zero exit code, either due to the option not
being supported or due to the hostname argument not being provided. In
other words, the 'except' branch has always been taken. PyLint is
obviously right on a syntactical level, though.
Instead of relying on a less than obvious code flow (where the 'except'
branch is always taken), rework the flagged code by employing
subprocess.run(..., check=False) instead of subprocess.check_output(),
making exception handling redundant.
While this issue was investigated, it was also noticed that
subprocess.check_output() was incorrectly used as a context manager:
Popen objects are context managers, but subprocess.check_output() and
subprocess.run() are not. Fix by dropping the relevant 'with'
statement.
Commit 3ec5d2d6ed added a Python-based
name server (bin/tests/system/digdelv/ans8/ans.py) to the "digdelv"
system test, but did not update bin/tests/system/Makefile.am to ensure
Python is present in the test environment before the "digdelv" system
test is run. Update bin/tests/system/Makefile.am to enforce that
requirement.
configure.ac currently requires Python 3.4 for running Python-based
system tests. Meanwhile, there are some features in Python 3.6+ that we
would like to use for making our Python code cleaner (e.g. f-strings).
Update the minimum Python version required for running Python-based
system tests to 3.6, noting that:
- Python 3.4 has reached end-of-life on March 18th, 2019.
- Python 3.5 has reached end-of-life on September 5th, 2020.
Since version 5.0.0, decay-based purging is the only available dirty
page cleanup mechanism in jemalloc. It relies on so-called tickers,
which are simple data structures used for ensuring that certain actions
are taken "once every N times". Ticker data (state) is stored in a
thread-specific data structure called tsd in jemalloc parlance. Ticks
are triggered when extents are allocated and deallocated. Once every
1000 ticks, jemalloc attempts to release some of the dirty pages hanging
around (if any). This allows memory use to be kept in check over time.
This dirty page cleanup mechanism has a quirk. If the first
allocator-related action for a given thread is a free(), a
minimally-initialized tsd is set up which does not include ticker data.
When that thread subsequently calls *alloc(), the tsd transitions to its
nominal state, but due to a certain flag being set during minimal tsd
initialization, ticker data remains unallocated. This prevents
decay-based dirty page purging from working, effectively enabling memory
exhaustion over time. [1]
The quirk described above has been addressed (by moving ticker state to
a different structure) in jemalloc's development branch [2], but not in
any numbered jemalloc version released to date (the latest one being
5.2.1 as of this writing).
Work around the problem by ensuring that every thread spawned by
isc_thread_create() starts with a malloc() call. Avoid immediately
calling free() for the dummy allocation to prevent an optimizing
compiler from stripping away the malloc() + free() pair altogether.
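A sketch of the workaround (the isc__trampoline_t layout and field
names are illustrative, not the actual struct):

#include <stdlib.h>

typedef struct isc__trampoline {
	void *(*start)(void *);
	void *arg;
	void *jemalloc_enforce_init;
} isc__trampoline_t;

/* Thread entry point: malloc() first so jemalloc sets up a full tsd
 * (including ticker state) for this thread; free() the dummy block
 * only after the real work, so the compiler cannot elide the
 * malloc() + free() pair. */
static void *
isc__trampoline_run(void *arg) {
	isc__trampoline_t *t = arg;
	void *ret;

	t->jemalloc_enforce_init = malloc(8);
	ret = t->start(t->arg);
	free(t->jemalloc_enforce_init);

	return (ret);
}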
An alternative implementation of this workaround was considered that
used a pair of isc_mem_create() + isc_mem_destroy() calls instead of
malloc() + free(), enabling the change to be fully contained within
isc__trampoline_run() (i.e. to not touch struct isc__trampoline), as the
compiler is not allowed to strip away arbitrary function calls.
However, that solution was eventually dismissed as it triggered
ThreadSanitizer reports when tools like dig, nsupdate, or rndc exited
abruptly without waiting for all worker threads to finish their work.
[1] https://github.com/jemalloc/jemalloc/issues/2251
[2] c259323ab3
Resolve "Improve functions parameter validation in lib/dns/message.c to prevent accessing the -1 index of an array"
Closes #2898
See merge request isc-projects/bind9!5824
dns_message_findname and dns_message_sectiontotext incorrectly
accepted DNS_SECTION_ANY. If DNS_SECTION_ANY was passed, the section
array could be incorrectly accessed at index (-1).
dns_message_pseudosectiontotext and dns_message_pseudosectiontoyaml
incorrectly accepted DNS_PSEUDOSECTION_ANY. These functions are
designed to process a single section.
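In outline, the added validation amounts to something like this
(DNS_SECTION_ANY is -1; the exact REQUIRE expressions in
lib/dns/message.c may differ):

/* Reject DNS_SECTION_ANY before 'section' can be used to index
 * msg->sections[] at -1. */
REQUIRE(section > DNS_SECTION_ANY && section < DNS_SECTION_MAX);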
There were two problems in the notify system test when it waited for
log messages to appear: the shellcheck refactoring introduced a call
to `wait_for_log` with a regex, but `wait_for_log` only supports fixed
strings, so it always ran for the full 45 second timeout; and the new
test to ensure that notify messages time out failed to reset the
nextpart pointer, so if the notify messages timed out before the test
ran, it would fail to see them.
This change adds a `wait_for_log_re` helper that matches a regex, and
uses it where appropriate in the notify system test, which stops the
test from waiting longer than necessary; and it resets the nextpart
pointer so that the notify timeout test works reliably.
Closes #3275
When TASKMGR_TRACE=1 is defined, the task and event objects carry
detailed tracing information about the function, file, line, and
backtrace (to the extent tracked by gcc) where they were created.
At exit, when there are unfinished tasks, they are printed along
with the detailed information.
The only place where isc_task_sendto() was used was in the
dns_resolver unit, where the "sendto" part was actually a no-op,
because dns_resolver uses bound tasks. Remove the isc_task_sendto()
and isc_task_sendtoanddetach() functions in favor of using bound tasks
created with isc_task_create_bound().
Additionally, cache the number of running netmgr threads (nworkers)
locally to reduce the number of function calls.
For some applications, it's useful not to listen on a full battery of
threads. Add a workers argument to all isc_nm_listen*() functions and
convenience ISC_NM_LISTEN_ONE and ISC_NM_LISTEN_ALL macros.
dns_rdata_fromtext and dns_rdata_fromwire now check that there is
a valid name or OID at the start of the keydata when the key algorithm
is PRIVATEDNS or PRIVATEOID, respectively.
dns_rdata_totext now prints out the OID if the algorithm is
PRIVATEOID.
Prime the cache with a negative cache DS entry, then make a query for
a name beneath that entry. This will cause the DS entry to be
retrieved as part of the validation process. Each RRset in the ncache
entry will be validated and the trust level for each will be updated.
dig previously set an exit code of 9 when a TCP connection failed
or when a UDP connection timed out, but when the server address is
localhost, it's possible for a UDP query to fail with
ISC_R_CONNREFUSED. That code path didn't update the exit code, causing
dig to exit with status 0. We now set the exit code to 9 in this
failure case.
Catalog zones change of ownership is a special mechanism to
facilitate controlled migration of a member zone from one catalog to
another. It is implemented using a catalog zones property named "coo"
and is documented in the DNS catalog zones draft version 5 document.
Implement the feature using a new hash table in the catalog zone
structure, which holds the added "coo" properties for the catalog zone
(each containing the target catalog zone's name); the key for the hash
table is the member zone's name for which the "coo" property is being
created.
Change some log messages to have consistent zone name quoting types.
Update the ARM with change of ownership documentation and usage
examples.
Add tests which check the newly added features.
When there are multiple record datasets in a database node of a catalog
zone, and BIND encounters a soft error during processing of a dataset,
it breaks from the loop and doesn't process the other datasets in the
node.
There are cases when this is not desired. For example, the catalog
zones draft version 5 states that there must be a TXT RRset named
`version.$CATZ` with exactly one RR, but it doesn't rule out
non-TXT RRsets named `version.$CATZ` existing alongside the TXT one.
If one exists, we get a processing error and do not continue the loop
to process the TXT RRset coming next.
Remove the "break" statement to continue processing all record datasets.
When processing a new or updated catalog zone, the record datasets
from the database are being processed in order. This creates a
problem because we need to know the version of the catalog zone
schema to process some of the records differently, but we do not
know the version until the 'version' record gets processed.
Find the 'version' record and process it first, only then iterate over
the database to process the rest, making sure not to process the
'version' record twice.
According to DNS catalog zones draft version 5 document, catalog
zone custom properties must be placed under the "ext" label.
Make necessary changes to support the new custom properties syntax in
catalog zones with version "2" of the schema.
Change the default catalog zones schema version from "1" to "2" in
ARM to prepare for the new features and changes which come starting
from this commit in order to support the latest DNS catalog zones draft
document.
Restructure parts of the ARM and rename the term catalog zone
"option" to "custom property" to better reflect the terms used in the
draft.
Change the version of 'catalog1.zone.' catalog zone in the "catz" system
test to "2", and leave the version of 'catalog2.zone.' catalog zone at
version "1" to test both versions.
Add tests to check that the new syntax works only with the new schema
version, and that the old syntax works only with the legacy schema
version catalog zones.
In `+nssearch` mode `dig` starts the next query of the followup lookup
using `start_udp()` or `start_tcp()` calls without waiting for the
previous query to complete.
In UDP mode that happens in the `send_done()` callback of the
previous query, but in TCP mode it happens in the `start_tcp()` call
of the previous query (recursion), which doesn't work because
`start_tcp()` attaches `lookup->current_query` to the query it is
starting, so a recursive call results in an assertion failure.
Make the TCP mode start the next query in `send_done()`, just like
the UDP mode does. By that time the `lookup->current_query` has
already been detached by the `tcp_connected()`/`udp_ready()`
callbacks.
Mention in the DNSSEC guide in the "revert to unsigned" recipe that you
can publish CDS and CDNSKEY DELETE records to remove the corresponding
DS records from the parent zone.
Update the function that synchronizes the CDS and CDNSKEY DELETE
records. It now allows for the possibility that the CDS DELETE record
is published and the CDNSKEY DELETE record is not, and vice versa.
Also update how 'dns_dnssec_syncdelete()' is called in zone.c.
With KASP, we still maintain the DELETE records ourselves. Otherwise,
we publish the CDS and CDNSKEY DELETE records only if they are added
to the zone. We do still check that these records can be signed by a
KSK.
This change will allow users to add a CDS and/or CDNSKEY DELETE record
manually, without BIND removing them on the next zone sign.
Note that this commit removes the check whether the key is a KSK;
this check is redundant because the same check is also made in
'dst_key_is_signing()' when the role is set to DST_BOOL_KSK.
Add a test case for a dynamically added CDS DELETE record and make
sure it is not removed when signing the zone. This happens because
BIND maintains CDS and CDNSKEY publishing and it will only allow
CDS DELETE records if the zone is transitioning to insecure. This is
a state that can be identified when using KASP through 'dnssec-policy',
but not when using 'auto-dnssec'.
Commit bf3fffff67 added a Python-based
name server (bin/tests/system/forward/ans11/ans.py) to the "forward"
system test, but did not update bin/tests/system/Makefile.am to ensure
Python is present in the test environment before the "forward" system
test is run. Update bin/tests/system/Makefile.am to enforce that
requirement.
Due to a typo in the code, ADB entries were unlinked from their entry
buckets during shutdown if they had a nonzero reference count. They
were only supposed to be unlinked if the reference count was exactly
one (that being the reference held by the bucket itself).
Implement TCP support in the `ans11` Python-based DNS server.
Implement a control command channel in `ans11` to support an optional
silent mode of operation, which, when enabled, will ignore incoming
queries.
In the added check, make `ans11` the NS server of
"a.root-servers.nil." for `ns3`, so it uses `ans11` (in silent mode)
for regular (non-forwarded) name resolutions.
This will trigger the "hung fetch" scenario, which was causing `named`
to crash.
- Check that an NS in an authority section returned from a forwarder
which is above the name in a configured "forward first" or "forward
only" zone (i.e., net/NS in a response from a forwarder configured for
local.net) is not cached.
- Test that a DNAME for a parent domain will not be cached when sent
in a response from a forwarder configured to answer for a child.
- Check that glue is rejected if its name falls below that of a zone
configured locally.
- Check that extra out-of-bailiwick data in the answer section is
not cached (this was already working correctly, but was not explicitly
tested before).
The mctx, zonetask and loadtask pools were being destroyed in the
shutdown function, where in theory a dangling zone could still be
attached to them.
Move the isc_mem_put() calls on the pools to the destroy() function.
There are a couple of files that modify the behaviour of named when
started via bin/tests/system/start.pl. Add those files as CC0-1.0 to
.reuse/dep5 as they are just empty placeholders.
Add a test case to check for lingering TCP sockets stuck in the
CLOSE_WAIT state. This can happen if a client sends some garbage after
its first query.
The system test runs the reproducer script and then sends another TCP
query to the resolver. The resolver is configured to allow one TCP
client only. If BIND has its TCP socket stuck in CLOSE_WAIT, it does
not have the resources available to answer the second query.
Note: A better test would be to check that the named daemon does not
have a TCP socket stuck in CLOSE_WAIT, for example with netstat. When
running this test locally, you can examine named with netstat
manually. But since netstat is platform-specific, it is not a good
candidate for a system test.
If you, if you could return, don't let it burn.
Do you have to let it linger?
- Cranberries
This allows GitLab to show a nice summary for individual tests/test
directories and to expose the results in the GitLab API for
consumption elsewhere.
A catch: as of GitLab 14.7.7, the detailed results are stored
only in artifacts and thus expire. All consumers (including the API)
need to be "fast enough" to get the data before they disappear.
This also forces us to always store the artifacts instead of storing
them only on failure.
There are a couple of problems with dns_request_createvia(): a UDP
retry count of zero means unlimited retries (it should mean no
retries), and the overall request timeout is not enforced. The
combination of these bugs means that requests can be retried forever.
This change alters calls to dns_request_createvia() to avoid the
infinite retry bug by providing an explicit retry count. Previously,
the calls specified infinite retries and relied on the limit implied
by the overall request timeout and the UDP timeout (which did not work
because the overall timeout is not enforced). The `udpretries`
argument is also changed to be the number of retries; previously, zero
was interpreted as infinity because of an underflow to UINT_MAX, which
appeared to be a mistake. And `mdig` is updated to match the change in
retry accounting.
The bug could be triggered by zone maintenance queries, including
NOTIFY messages, DS parental checks, refresh SOA queries and stub zone
nameserver lookups. It could also occur with `nsupdate -r 0`.
(But `mdig` had its own code to avoid the bug.)
Implement reference counting for TLS contexts, Resolve #3122 "DoT stops working after "rndc reconfigure" when running named as non-root"
Closes #3122
See merge request isc-projects/bind9!6087
This commit makes use of isc_nmsocket_set_tlsctx(). Now, instead of
recreating TLS-enabled listeners (including the underlying TCP
listener sockets), only the TLS context in use is replaced.
This commit adds isc_nmsocket_set_tlsctx(), an asynchronous function
that replaces the TLS context within a given TLS-enabled listener
socket object. It is based on the newly added reference counting
functionality.
The intention of adding this function is to make it possible to
replace a TLS context without recreating the whole socket object,
including the underlying TCP listener socket, as a BIND process might
not have enough permissions to re-create it fully on reconfiguration.
The implementation is done on top of the reference counting
functionality found in OpenSSL/LibreSSL, which makes it possible to
avoid wrapping the object.
Adding this function allows using reference counting for TLS contexts
throughout BIND 9's codebase.
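A sketch of what attaching to a refcounted TLS context looks like on
top of OpenSSL's native SSL_CTX reference count (OpenSSL 1.1.0+; the
function name is illustrative):

#include <stdlib.h>
#include <openssl/ssl.h>

/* Take another reference on an existing SSL_CTX instead of wrapping
 * it in a separate refcounted object; SSL_CTX_free() later drops one
 * reference and destroys the context only when the count reaches
 * zero. */
static void
tlsctx_attach(SSL_CTX *src, SSL_CTX **targetp) {
	if (SSL_CTX_up_ref(src) != 1) {
		abort(); /* refcounting is expected to succeed */
	}
	*targetp = src;
}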
After some back and forth, it was decided to match the configuration
option with Unbound ("so-reuseport"), PowerDNS ("reuseport") and/or
nginx ("reuseport").
As far as I can determine, the order of operations is not important.
*** CID 351372: Concurrent data access violations (ATOMICITY)
/lib/isc/timer.c: 227 in timer_purge()
221 LOCK(&timer->lock);
222 if (!purged) {
223 /*
224 * The event has already been executed, but not
225 * yet destroyed.
226 */
>>> CID 351372: Concurrent data access violations (ATOMICITY)
>>> Using an unreliable value of "event" inside the second locked section. If the data that "event" depends on was changed by another thread, this use might be incorrect.
227 timerevent_unlink(timer, event);
228 }
229 }
230 }
231
232 void
*** CID 351371: Null pointer dereferences (REVERSE_INULL)
/lib/dns/adb.c: 2615 in dns_adb_createfind()
2609 /*
2610 * Copy out error flags from the name structure into the find.
2611 */
2612 find->result_v4 = find_err_map[adbname->fetch_err];
2613 find->result_v6 = find_err_map[adbname->fetch6_err];
2614
>>> CID 351371: Null pointer dereferences (REVERSE_INULL)
>>> Null-checking "find" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
2615 if (find != NULL) {
2616 if (want_event) {
2617 INSIST((find->flags & DNS_ADBFIND_ADDRESSMASK) != 0);
2618 isc_task_attach(task, &(isc_task_t *){ NULL });
2619 find->event.ev_sender = task;
2620 find->event.ev_action = action;
The error code path handling the `ISC_R_CANCELED` code lacks a
`clear_current_lookup()` call, without which dig hangs indefinitely
when handling the error.
Add the missing call to account for all references to the lookup so
it can be destroyed.
In the `send_udp()` and `launch_next_query()` functions, when calling
`dighost_printmessage()` to print detailed information about the
sent query, dig always prints the data of the first query in the
lookup's queries list.
The first query in the list can already be finished, with its handles
freed, and accessing this information results in an assertion failure.
Print the current query's information instead.
Previously, HAVE_SO_REUSEPORT_LB was defined only in the private
netmgr-int.h header file, making the configuration of load-balanced
sockets inoperable.
Move the missing HAVE_SO_REUSEPORT_LB define to isc/netmgr.h and add
the missing isc_nm_getloadbalancesockets() implementation.
Previously, the option to enable kernel load balancing of the sockets
was always enabled when supported by the operating system (SO_REUSEPORT
on Linux and SO_REUSEPORT_LB on FreeBSD).
It was reported that in scenarios where the networking threads are
also responsible for processing long-running tasks (like RPZ
processing, CATZ processing, or large zone transfers), this could lead
to intermittent brownouts for some clients, because the thread
assigned by the operating system might be busy. In such scenarios, the
overall performance would be better served by threads competing over
the sockets, because the idle threads can pick up the incoming
traffic.
Add new configuration option (`load-balance-sockets`) to allow enabling
or disabling the load balancing of the sockets.
Previously, the RPZ updates ran quantized on the main nm_worker
loops. As the quantum was set to 1024, this could lead to service
interruptions when a large RPZ update was processed.
Change the RPZ update process to run as offloaded work. The update
and cleanup loops were refactored to lock the maintenance lock as
little as possible and for the shortest periods of time, and the db
iterator is paused for every iteration, so we don't hold the rbtdb
tree lock for prolonged periods of time.
Previously, dns_rpz_add() was passed a dns_rpz_zones_t and an index
into the .zones array. Because we actually attach to the
dns_rpz_zone_t, we should be using the local pointer instead of
passing the index and "finding" the dns_rpz_zone_t again.
Additionally, dns_rpz_add() and dns_rpz_delete() were used only inside
rpz.c, so make them static.
Do a general cleanup of lib/dns/rpz.c style:
* Removed deprecated and unused functions
* Unified dns_rpz_zone_t naming to rpz
* Unified dns_rpz_zones_t naming to rpzs
* Add and use rpz_attach() and rpz_attach_rpzs() functions
* Shuffled variables to be more local (cppcheck cleanup)
Now that the dns_aclenv_t has properly rwlocked .localhost and
.localnets members, we can remove the task exclusive mode use from the
ns_interfacemgr. Some light related cleanup has also been done.
Previously, in order to modify the .localhost and .localnets members
of the dns_aclenv, all other processing on the netmgr loops needed to
be stopped using the task exclusive mode. Add an isc_rwlock to the
dns_aclenv, so any modifications to .localhost and .localnets can be
done under the write lock.
In recv_done(), when dig decides to start the lookup's next query in
line using `start_udp()` or `start_tcp()`, and for some reason no
queries get started, dig doesn't cancel the lookup.
This can occur, for example, when there are two queries in the lookup,
one with a regular IP address and another with an IPv4-mapped IPv6
address. When the regular IP address fails to serve the query, its
`recv_done()` callback starts the next query in line (in this case the
one with the mapped IP address), but because `dig` doesn't connect to
such IP addresses, and there are no other queries in the list, no new
queries are started and the lookup keeps hanging.
After calling `start_udp()` or `start_tcp()` in `recv_done()`, check
whether there are no pending/working queries and, if so, cancel the
lookup instead of only detaching from the current query.
The reference counting and isc_timer_attach()/isc_timer_detach()
semantics are misleading, because they cannot be used under normal
conditions. Typically, the object that uses the timer is passed as the
argument to the timer itself. This means that when the caller uses
`isc_timer_detach()`, it needs the timer to stop, but
isc_timer_detach() stops it only if this is the last reference.
Unfortunately, this also means that if the timer is attached elsewhere
and the timer fires, it will most likely cause a use-after-free,
because the object used in the timer no longer exists.
Remove the reference counting from the isc_timer unit, remove the
isc_timer_attach() function, and rename isc_timer_detach() to
isc_timer_destroy() to better reflect how the API needs to be used.
The only caveat is that an already-executed event must be destroyed
before isc_timer_destroy() is called, because the timer is no longer
attached to .ev_destroy_arg.
Previously, the task privileged mode was used only when named was
starting up and loading the zones from disk as the "first" thing to
do. The privileged task was set up with quantum == 2, which made the
taskmgr/netmgr spin around the privileged queue, processing two events
at a time.
The same effect can be achieved by setting the quantum to UINT_MAX
(i.e. practically unlimited) for the loadzone task, hence the
privileged task mode was removed in favor of just processing all the
events on the loadzone task in a single task_run().
Instead of passing the number of workers to the dns_zonemgr manually,
get the number of nm threads using the new isc_nm_getnworkers() call.
Additionally, remove the isc_pool API and manage the arrays of memory
contexts, zonetasks and loadtasks directly in the zonemgr.
After switching to per-thread resources in the zonemgr, performance
decreased because the memory context, zonetask and loadtask were
picked from the pool at random.
Pin each zone to a single threadid (.tid) and align the memory
context, zonetask and loadtask to be the same; this sets the hard
affinity of the zone to the netmgr thread.
The zone counting in named was used to properly size the zonemgr
resources (memory contexts, zonetasks and loadtasks). Since this is no
longer the case, remove the whole zone counting from named.
Previously, the zonemgr created 1 task per 100 zones and 1 memory
context per 1000 zones (with a minimum of 10 tasks and 2 memory
contexts) to reduce the contention between threads.
Instead of reducing the contention by having many resources, create a
per-nm_thread memory context, loadtask and zonetask, and spread the
zones across just these per-thread resources.
Note: this commit alone decreases performance when loading zones by a
couple of seconds (in the case of 1M zones), and thus there's more
work in this whole MR fixing the performance.
Previously, the zone timer was not stopped before detaching the
timer. This could lead to a data race where the timer post_event()
could fire before the timer was detached, but the event would then be
executed after the zone was already destroyed.
This was not noticed before because the timing or the ordering of the
actions was different, but it was now causing assertion failures in
the libns tests.
Properly stop the zone timer before detaching the timer object from the
dns_zone.
When we are loading the zones, set the quantum to UINT_MAX, which
makes task_run process all tasks at once. After the zone loading is
finished, the quantum is dropped back to 1 so as not to block the
server when we are loading new zones after reconfiguration.
Add isc_task_setquantum() function that modifies quantum for the future
isc_task_run() invocations.
NOTE: The current isc_task_run() caches the task->quantum into a local
variable and therefore the current event loop is not affected by any
quantum change.
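A sketch of the intended usage (zmgr->loadtask and the exact call
sites are assumptions):

#include <limits.h>

/* During initial startup, let the loadzone task drain its whole
 * queue in a single isc_task_run() invocation. */
isc_task_setquantum(zmgr->loadtask, UINT_MAX);

/* After initial loading has finished, restore a quantum of 1 so
 * later reconfiguration-time zone loads don't block the server. */
isc_task_setquantum(zmgr->loadtask, 1);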
The isc_task_purge() and isc_task_purgerange() functions were now
unused, so sweep the task.c file. Additionally, remove the unused
ISC_EVENTATTR_NOPURGE event attribute.
The isc_task_purgerange() was walking through all events on the task
to find a matching event. Instead, use ISC_LINK_LINKED to find whether
the event is active.
Clean up the related isc_task_unsend() and isc_task_unsendrange()
functions, which were not used anywhere.
Adding an extra val & 0xffff in the isc_hash_bits32() macro in the
hot path significantly reduced performance. Turn the macro into a
static inline function matching the previous hash_32() function used
to compute a hashval matching the hashtable->bits.
A test case in the 'resolver' system test relied on logged output
that would only be present when query tracing was enabled, as in
developer builds. That test case is now disabled when query tracing
is not available. Thanks to Anton Castelli.
The `udp_ready()` and `tcp_connected()` functions in dighost.c are
used for similar purposes for UDP and TCP respectively.
Synchronize the `udp_ready()` function entry code to behave like
`tcp_connected()` by adding input validation, debug messages and
early exit code when `cancel_now` is `true`.
When finishing the NSSEARCH task and there are no more follow-up
lookups to start, dig does not destroy the last lookup, which causes
it to hang indefinitely.
Rename the unused `first_pass` member of `dig_query_t` to `started`
and make it `true` in the first callback after `start_udp()` or
`start_tcp()` of the query to indicate that the query has been
started.
Create a new `check_if_queries_done()` function to check whether
all of the queries inside a lookup have been started and finished,
or canceled.
Use the mentioned function in the TRACE code block in `recv_done()`
to check whether the current query is the last one in the lookup and
cancel the lookup in that case to free the resources.
the line "$GENERATE 19-28/2147483645 $ CNAME x" should generate
a single CNAME with the owner "19.example.com", but prior to the
overflow bug it generated several CNAMEs, half of them with large
negative values.
we now test for the bugfix by using "named-checkzone -D" and
grepping for a single CNAME in the output.
The value of 'i' in generate could overflow when adding 'step' to
it in the 'for' loop. Use an unsigned int for 'i', which gives an
additional bit and prevents the overflow. The inputs are both
less than 2^31 and the result will be less than 2^32-1.
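A minimal demonstration of why the extra bit suffices (standalone
sketch, not the $GENERATE code itself):

#include <stdio.h>

int
main(void) {
	/* $GENERATE 19-28/2147483645: start and step are both below
	 * 2^31, so with an unsigned int the sum 19 + 2147483645
	 * stays below 2^32-1 and cannot wrap; with a signed int it
	 * overflows to a negative value and the loop keeps going. */
	unsigned int start = 19, stop = 28, step = 2147483645;

	for (unsigned int i = start; i <= stop; i += step) {
		printf("%u CNAME x\n", i); /* prints only "19 CNAME x" */
	}

	return (0);
}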
Ensure the update zone name is mentioned in the NOTAUTH error message
in the server log, so that it is easier to track down problematic
update clients. There are two cases: either the update zone is
unrelated to any of the server's zones (previously no zone was
mentioned); or the update zone is a subdomain of one or more of the
server's zones (previously the name of the irrelevant parent zone was
misleadingly logged).
Closes #3209
The .lock, .exiting and .excl members were not used for anything
other than starting task exclusive mode, setting .exiting to true,
and ending exclusive mode.
Remove all the stray members and dead code, eliminating the task
exclusive mode use from ns_clientmgr.
The ADB previously used separate reference counters for internal
and external references, plus additional counters for ADB find
and namehook objects, and used all these counters to coordinate
its shutdown process, which was a multi-stage affair involving
a sequence of control events.
It also used a complex interlocking set of static functions for
referencing, dereferencing, linking, unlinking, and cleaning up various
internal objects; these functions returned boolean values to their
callers to indicate what additional processing was needed.
The changes in the previous two commits destabilized this fragile
system in a way that was difficult to recover from, so in this commit
we refactor all of it. The dns_adb and dns_adbentry objects now use
conventional attach and detach functions for reference counting, and
the shutdown process is much more straightforward. Instead of
handling shutdown asynchronously, we can just destroy the ADB when
references reach zero.
In addition, ADB locking has been simplified. Instead of a
single `find_{name,entry}_and_lock()` function which searches for
a name or entry's hash bucket, locks it, and then searches for the
name or entry in the bucket, we now use one function to find the
bucket (leaving it to the caller to do the locking) and another to
find the name or entry. Instead of locking the entire ADB when
modifying hash tables, we now use read-write locks around the
specific hash table. The only remaining need for adb->lock
is when modifying the `whenshutdown` list.
Comments throughout the module have been improved.
Replace adb->{names,entries} and related arrays (indexed by hashed
bucket) with isc_ht hash tables storing the new struct
adb{name,entry}bucket_t, which wraps all the variables that were
originally stored in arrays indexed by "bucket" number directly in
struct dns_adb.
Previously, the task exclusive mode was used to grow the internal
arrays used to store the name and entry objects. The isc_ht hash
tables are now protected by an isc_rwlock instead, and thus the usage
of the task exclusive mode has been removed from the dns_adb.
Co-authored-by: Ondřej Surý <ondrej@isc.org>
the use of "result" as a variable name for a boolean return value
was confusing; all 'result' variables that are not isc_result_t
have been renamed to 'ret'.
The static function print_dns_name() was a duplicate of
dns_name_print(), so it has been replaced with that.
Changed INSIST to REQUIRE where appropriate, and added NULL
initialization for pointer variables.
The isc_appctx_t in dns_client was used to wait for
dns_client_startresolve() to finish its processing (the
resolve_done() task callback).
This has been replaced with a standard bool+cond+lock combination,
removing the need for isc_appctx_t altogether.
Previously, the isc_ht API would always take the key as a literal input
to the hashing function. Change the isc_ht_init() function to take an
'options' argument, in which ISC_HT_CASE_SENSITIVE or _INSENSITIVE can
be specified, to determine whether to use case-sensitive hashing in
isc_hash32() when hashing the key.
Fibonacci hashing was implemented in four separate places (rbt.c,
rbtdb.c, resolver.c, zone.c). This commit combines them into a single
implementation. The hash_32() function is now replaced with
isc_hash_bits32().
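For reference, a sketch of 32-bit Fibonacci hashing of the kind being
consolidated (the constant is 2^32 divided by the golden ratio;
details of the real isc_hash_bits32() may differ):

#include <stdint.h>

/* Multiply by 2^32/phi and keep the top 'bits' bits (0 < bits < 32);
 * this spreads sequential keys well across a power-of-two table
 * without needing a modulo. */
static inline uint32_t
hash_bits32(uint32_t val, unsigned int bits) {
	return ((uint32_t)(val * 2654435769U) >> (32 - bits));
}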
In a couple of places, we missed replacing INSIST(0) or
ISC_UNREACHABLE() on some branches with UNREACHABLE(). Replace all
remaining ISC_UNREACHABLE() and INSIST(0) calls with UNREACHABLE().
This commit adds notes to the CHANGES file and the release notes
about supporting remote TLS certificate verification and Strict and
Mutual TLS transport connection verification.
Mention that some old cryptographic library versions lack the
functionality to implement ignoring the Subject field (and thus the
Common Name) when establishing DoT connections.
This commit extends the 'doth' system test with a set of Strict/Mutual
TLS related checks.
This commit also makes each doth NS instance use its own TLS
certificate that includes FQDN, IPv4, and IPv6 addresses, issued using
a common Certificate Authority, instead of ad-hoc certs.
Extend servers initialisation timeout to 60 seconds to improve the
tests stability in the CI as certain configurations could fail to
initialise on time under load.
This commit updates the reference manual with short descriptions of
different TLS authentication modes, as mentioned in the RFC 9103,
Section 9.3 (Opportunistic TLS, Strict TLS, Mutual TLS), and mentions
how these authentication modes can be achieved via BIND's
configuration file.
This commit adds support for Strict/Mutual TLS into BIND. It does so
by implementing the backing code for 'hostname' and 'ca-file' options
of the 'tls' statement. The commit also updates the documentation
accordingly.
This commit adds support for Strict/Mutual TLS to dig.
The new command-line options and their behaviour are modelled after
kdig (+tls-ca, +tls-hostname, +tls-certfile, +tls-keyfile) for
compatibility reasons. That is, using +tls-* is sufficient to enable
DoT in dig, implying +tls-ca.
If there is no other DNS transport specified via command-line,
specifying any of +tls-* options makes dig use DoT. In this case, its
behaviour is the same as if +tls-ca is specified: that is, the remote
peer's certificate is verified using the platform-specific
intermediate CA certificates store. This behaviour is introduced for
compatibility with kdig.
This commit adds support for the ISC_R_TLSBADPEERCERT error code,
which is supposed to be used to signal TLS peer certificate
verification errors in dig and other code.
The support for this error code is added to our TLS and TLS DNS
implementations.
This commit also adds the isc_nm_verify_tls_peer_result_string()
function, which is supposed to be used to get a textual description
of the reason for getting an ISC_R_TLSBADPEERCERT error.
This commit adds support for keeping CA certificate stores associated
with TLS contexts. The intention is to keep one reusable store per
set of related TLS contexts.
This commit adds a set of functions that can be used to implement
Strict and Mutual TLS:
* isc_tlsctx_load_client_ca_names();
* isc_tlsctx_load_certificate();
* isc_tls_verify_peer_result_string();
* isc_tlsctx_enable_peer_verification().
This commit adds a set of high-level utility functions to manipulate
the certificate stores. The stores are needed to implement TLS
certificate verification efficiently.
There is a possible code path that uses the uninitialized `bname`
character array while logging an error message.
Initialize the `bname` buffer earlier in the function.
Also, change the initialization routine to use a helper function.
A successful call to `dns_rdata_tostruct()` expects an accompanying
call to `dns_rdata_freestruct()` to free up any memory that could have
been allocated during the first call.
In catz.c there are several places where the `dns_rdata_freestruct()`
call is skipped.
Add the missing cleanup routines.
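The pairing looks like this in outline (a sketch with a hypothetical
helper, not the catz.c code itself):

#include <isc/result.h>
#include <dns/rdata.h>
#include <dns/rdatastruct.h>

/* Every successful dns_rdata_tostruct() is balanced by a
 * dns_rdata_freestruct() on all exit paths. */
static isc_result_t
process_txt(dns_rdata_t *rdata) {
	dns_rdata_txt_t txt;
	isc_result_t result;

	result = dns_rdata_tostruct(rdata, &txt, NULL);
	if (result != ISC_R_SUCCESS) {
		return (result); /* nothing was allocated on failure */
	}

	/* ... use txt ... */

	dns_rdata_freestruct(&txt);
	return (ISC_R_SUCCESS);
}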
Because of the "goto" in the "if" body the "else" part is unnecessary
and adds another level of indentation.
Cleanup the code to not have the "else" part.
Catz logs a warning message when it is told to modify a zone which
was not added by the current catalog zone.
When logging the warning, distinguish the two cases: when the zone
was not added by any catalog zone at all, and when the zone was
added by a different catalog zone.
The dnssec-settime -p and -up options print times in asctime() and
UNIX time_t formats, respectively. The asctime() format can also be
found inside K*.key public key files. Key files also contain times in
the YYYYMMDDHHMMSS format that can be used in timing parameter
options.
The dnssec-settime -p and -up time formats are now acceptable in
timing parameter options to dnssec-settime and dnssec-keygen, so it is
no longer necessary to parse key files to retrieve times that are
acceptable in timing parameter options.
The ns_client_t .shuttingdown member was practically dead code.
It would be set to true only in the ns__client_put() function,
meaning that we had detached from all the ns_client_t .*handles and
the ns_client_t object was being freed:
client->magic = 0;
client->shuttingdown = true;
[...]
isc_mem_put(manager->ctx, client, sizeof(*client))
Meanwhile the ns_client_t object is accessed like this:
isc_nmhandle_detach(&client->fetchhandle);
client->query.attributes &= ~NS_QUERYATTR_RECURSING;
client->state = NS_CLIENTSTATE_WORKING;
qctx_init(client, &devent, 0, &qctx);
client_shuttingdown = ns_client_shuttingdown(client);
if (fetch_canceled || fetch_answered || client_shuttingdown) {
[...]
}
Even if the isc_nmhandle_detach(...) was the last handle detach, it
would mean that immediately after calling isc_nmhandle_detach() we
would cause a use-after-free, because the ns_client_t is destroyed
immediately after .shuttingdown is set to true.
Similar code in query_hookresume() already noted this:
/*
* This event is running under a client task, so it's safe to detach
* the fetch handle. And it should be done before resuming query
* processing below, since that may trigger another recursion or
* asynchronous hook event.
*/
Previously, it was possible to assign a bit of memory space in the
nmhandle to store the client data. This was complicated and prevented
further refactoring of the isc_nmhandle_t caching (future work).
Instead of caching the data in the nmhandle, allocate the hot-path
ns_client_t objects from per-thread clientmgr memory context and just
assign it to the isc_nmhandle_t via isc_nmhandle_set().
The ns_client_t is always attached to an ns_clientmgr_t, which has an
associated memory context, server context, task, and threadid. Use
those directly from the ns_clientmgr_t instead of keeping extra copies
in ns_client_t, making ns_client_t sleeker and leaner.
Additionally, remove some stray ns_client_t struct members that were not
used anywhere.
Some ancient versions of clang reported a false positive about
uninitialized memory use (see https://bugs.llvm.org/show_bug.cgi?id=14461).
Since clang 4.0.1 has long been obsolete, just remove the workarounds.
Historically, the inline keyword was a strong suggestion to the compiler
that it should inline the function marked inline. As compilers became
better at optimising, this functionality has receded, and using inline
as a suggestion to inline a function is obsolete. The compiler will
happily ignore it and inline something else entirely if it finds that's
a better optimisation.
Therefore, remove all occurrences of the inline keyword from static
functions inside a single compilation unit and leave the decision
whether to inline a function entirely to the compiler.
NOTE: We keep using the inline keyword when the purpose is to change
the linkage behaviour.
C11 has built-in support for the _Noreturn function specifier, with a
convenience noreturn macro defined in the <stdnoreturn.h> header.
Replace the ISC_NORETURN macro with C11 noreturn, falling back to
__attribute__((noreturn)) if the C11 support is not complete.
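A sketch of what the fallback can look like (the exact feature-test
conditions and the example function are illustrative):

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L && \
        !defined(__STDC_NO_NORETURN__)
#include <stdnoreturn.h>
#else /* fall back to the GCC/Clang attribute */
#define noreturn __attribute__((noreturn))
#endif

noreturn void
fatal(const char *msg); /* a function that never returns */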
Previously, the unreachable code paths would have to be tagged with:
INSIST(0);
ISC_UNREACHABLE();
There were also older parts of the code that used a comment annotation:
/* NOTREACHED */
Unify the handling of unreachable code paths to just use:
UNREACHABLE();
The UNREACHABLE() macro now asserts when reached and also uses
__builtin_unreachable() when that builtin is available in the compiler.
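A sketch of the macro's shape (the configure-time detection macro
shown here is an assumption, not the actual build system check):

#ifdef HAVE_BUILTIN_UNREACHABLE /* assumed configure check */
#define UNREACHABLE()                    \
        do {                             \
                INSIST(0);               \
                __builtin_unreachable(); \
        } while (0)
#else
#define UNREACHABLE() INSIST(0)
#endif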
Gcc 7+ and Clang 10+ implement __attribute__((fallthrough)), which is
an explicit version of the /* FALLTHROUGH */ comment we are currently
using.
Add and apply FALLTHROUGH macro that uses the attribute if available,
but does nothing on older compilers.
In one case (lib/dns/zone.c), using the macro revealed that we were
using the /* FALLTHROUGH */ comment in the wrong place; remove that
comment.
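A sketch of the macro and its use (the detection condition and the
switch statement are illustrative):

#if defined(__has_attribute) && __has_attribute(fallthrough)
#define FALLTHROUGH __attribute__((fallthrough))
#else
#define FALLTHROUGH /* FALLTHROUGH */
#endif

switch (op) {
case OP_LOG:
        log_it();
        FALLTHROUGH; /* intentionally continues into OP_RUN */
case OP_RUN:
        run_it();
        break;
}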
Instead of passing the "workers" variable back and forth along with
the single isc_nm_t instance, add an isc_nm_getnworkers() function
that returns the number of netmgr threads that are running.
Change the ns_interfacemgr and ns_taskmgr to utilize the newly acquired
knowledge.
From an attacker's point of view, a VLA declaration is essentially a
primitive for performing arbitrary arithmetic on the stack pointer. If
the attacker can control the size of a VLA they have a very powerful
tool for causing memory corruption.
To mitigate this kind of attack, and the more general class of stack
clash vulnerabilities, C compilers insert extra code when allocating a
VLA to probe the growing stack one page at a time. If these probes hit
the stack guard page, the program will crash.
From the point of view of a C programmer, there are a few things to
consider about VLAs:
* If it is important to handle allocation failures in a controlled
manner, don't use VLAs. You can use VLAs if it is OK for
unreasonable inputs to cause an uncontrolled crash.
* If the VLA is known to be smaller than some known fixed size,
use a fixed-size array and a run-time check to ensure it is large
enough (see the sketch after this list). This will be more efficient
than the compiler's stack probes, which need to cope with
arbitrary-size VLAs.
* If the VLA might be large, allocate it on the heap. The heap
allocator can allocate multiple pages in one shot, whereas the
stack clash probes work one page at a time.
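A sketch of the fixed-size-array pattern from the second point above
(the constant, the function, and its contents are illustrative):

#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define MAX_SIG_LEN 1024 /* illustrative upper bound */

static bool
check_sig(const unsigned char *sig, size_t siglen) {
        unsigned char buf[MAX_SIG_LEN]; /* fixed size, not buf[siglen] */

        if (siglen > sizeof(buf)) {
                return (false); /* controlled failure, no stack probes */
        }
        memcpy(buf, sig, siglen);
        /* ... verify using buf ... */
        return (true);
}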
Most of the existing uses of VLAs in BIND are in test code where they
are benign, but there was one instance in `named`, in the GSS-TSIG
verification code, which has now been removed.
This commit adjusts the style guide and the C compiler flags to allow
VLAs in test code but not elsewhere.
In the GSS-TSIG verification code there was an alarming
variable-length array whose size came off the network, from the
signature in the request. It turned out to be safe, because the caller
had previously checked that the signature had a reasonable size.
However, the safety checks are in the generic TSIG implementation, and
the risky VLA usage was in the GSS-specific code, and they are
separated by the DST indirection layer, so it wasn't immediately
obvious that the risky VLA was in fact safe.
In fact this risky VLA was completely unnecessary, because the GSS
signature can be verified in place without being copied to the stack,
like the message covered by the signature. The `REGION_TO_GBUFFER()`
macro backwardly assigns the region in its left argument to the GSS
buffer in its right argument; this is just a pointer and length
conversion, without copying any data. The `gss_verify_mic()` call uses
both message and signature GSS buffers in a read-only manner.
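The conversion amounts to roughly this (a sketch of the macro and its
read-only use; the surrounding variables are illustrative):

#include <gssapi/gssapi.h>

/* Sketch: view an isc_region_t as a gss_buffer_desc without copying. */
#define REGION_TO_GBUFFER(r, gb)          \
        do {                              \
                (gb).length = (r).length; \
                (gb).value = (r).base;    \
        } while (0)

gss_buffer_desc gmessage, gsig;
REGION_TO_GBUFFER(message_region, gmessage);
REGION_TO_GBUFFER(sig_region, gsig);
/* both buffers are only read by gss_verify_mic() */
gret = gss_verify_mic(&minor, gctx, &gmessage, &gsig, NULL);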
Rework the "ans8" server in the "digdelv" system test to support various
modes of operations using a control channel.
The supported modes are:
1. `silent` (do not respond)
2. `close` (UDP: same as `silent`; TCP: also close the connection)
3. `servfail` (always respond with `SERVFAIL`)
4. `unstable` (constantly switch between `silent` and `servfail`)
Add multiple tests to check the handling of both TCP and UDP socket
error scenarios in dig/host.
When encountering a TCP connection error while trying to initiate a
connection to a server, dig erroneously cancels the lookup even when
there are other server(s) to try, which results in an assertion failure.
Cancel the lookup only when there are no more queries left in the
lookup's queries list (i.e. `next` is NULL).
Add a test to check whether dig tries the next query/server after
a connection error.
Add a test to check whether dig tries the next query/server after
one or more (default: 3) connection/request timeouts.
When timing out or encountering other types of socket errors during a
query, dig doesn't try to perform the lookup using the other servers
that exist in the lookup's queries list.
After the configured number of timeout retries, or after a socket
error, check whether there are other queries/servers in the lookup's
queries list, and start the next one if it exists, instead of
unconditionally failing.
This test ensures that `dig` retries with another attempt after a
timed-out request, and that it does not crash when the retried
request returns a SERVFAIL result. See [GL #3020] for the latter
issue.
When a query times out and `dig` (or `host`) creates a new query
to resend the request, the new query is prepended to the lookup's
queries list, which can cause confusion later, making `dig` (or
`host`) believe that there is another new query in the list when it
is actually the old one that timed out. That mistake results in an
assertion failure.
That can happen, in particular, when a timed-out request is retried
and returns a SERVFAIL result, recursion is enabled, and the `+nofail`
option was used with `dig` (which is the default behavior in `host`,
unless the `-s` option is provided).
Fix the problem by inserting the query just after the current,
timed-out query, instead of prepending to the list.
Before calling start_udp(), detach `l->current_query`, as is done
in another place in the function.
Slightly update a couple of debug messages to make them more
consistent.
After a query results in SERVFAIL and there is another registered
query in the lookup's queries list, `dig` starts the next query to
try another server, but it also erroneously reports doing so when the
current query is at the head of the list, even if there is no other
query in the list to try.
Use the same condition for both decisions, and after starting the next
query, jump to the "detach_query" label instead of "next_lookup",
because there is no need to start the next lookup after we just started
a query in the current lookup.
When the max-transfer-*-out timeouts were reintroduced, the log message
about starting the timer was erroneously left at ISC_LOG_ERROR.
Change the log level of said message to ISC_LOG_DEBUG(1).
clang-format-15 has a new option, InsertBraces, that can add missing
braces around single-line statements. Use that to our advantage,
without switching to a not-yet-released LLVM version, to add missing
braces in a couple of places.
As incremental rehashing has been added to the isc_ht implementation,
we need to test whether the rehashing works.
Update the isc_ht unit test to test:
* preinitialized hash table large enough to hold all the elements
* smallest hash table that fully grows to hold all the elements
* partially preinitialized hash table that grows
* iterating while rehashing is in progress
Previously, incremental hash table resizing was implemented for the
dns_rbt_t hash table implementation. Using that as a base, implement
incremental hash table resizing for the isc_ht API hashtables as well:
1. During the resize, allocate the new hash table, but keep the old
table unchanged.
2. In each lookup, delete, or iterator operation, check both tables.
3. Perform insertion operations only in the new table.
4. At each insertion also move <r> elements from the old table to
the new table.
5. When all elements are removed from the old table, deallocate it.
To ensure that the old table is completely copied over before the new
table itself needs to be enlarged, it is necessary to increase the
size of the table by a factor of at least (<r> + 1)/<r> during resizing.
In our implementation <r> is equal to 1.
The downside of this approach is that the old table and the new table
could stay in memory for longer when there are no new insertions into
the hash table for prolonged periods of time as the incremental
rehashing happens only during the insertions.
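A condensed, self-contained sketch of steps 2-4 above (this is
illustrative chaining-hash-table code, not the actual isc_ht
internals; all type and field names are invented for the example):

#include <stdlib.h>

typedef struct node {
        struct node *next;
        unsigned int key;
} node_t;

typedef struct {
        node_t **oldtab, **newtab;
        size_t oldsize, newsize, oldcount;
} iht_t;

static node_t *
find_in(node_t **tab, size_t size, unsigned int key) {
        for (node_t *n = tab[key % size]; n != NULL; n = n->next) {
                if (n->key == key) {
                        return (n);
                }
        }
        return (NULL);
}

/* Step 2: while rehashing, lookups must consult both tables. */
node_t *
iht_find(iht_t *ht, unsigned int key) {
        node_t *n = find_in(ht->newtab, ht->newsize, key);
        if (n == NULL && ht->oldtab != NULL) {
                n = find_in(ht->oldtab, ht->oldsize, key);
        }
        return (n);
}

/* Steps 3 and 4: insert only into the new table, then move one
 * (<r> = 1) element from the old table per insertion. */
void
iht_insert(iht_t *ht, node_t *n) {
        n->next = ht->newtab[n->key % ht->newsize];
        ht->newtab[n->key % ht->newsize] = n;

        if (ht->oldtab == NULL) {
                return;
        }
        for (size_t i = 0; i < ht->oldsize; i++) {
                node_t *m = ht->oldtab[i];
                if (m != NULL) {
                        ht->oldtab[i] = m->next;
                        m->next = ht->newtab[m->key % ht->newsize];
                        ht->newtab[m->key % ht->newsize] = m;
                        ht->oldcount--;
                        break;
                }
        }
        if (ht->oldcount == 0) { /* step 5: old table fully drained */
                free(ht->oldtab);
                ht->oldtab = NULL;
        }
}

A real implementation would keep a cursor into the old table instead
of scanning it from the start on every insertion.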
The fetch can be in the shutting down state when resume_dslookup() is
trying to operate on it.
This is also a security issue, because a malicious actor can set up a
name server which delays certain queries in such a way that the fetch
will time out and shut down, which will cause named to crash.
Add a check to see if the fetch has the shutting-down attribute set,
and cancel any further operations on it in that case.
A similar bug had been fixed earlier for the resume_qmin() function,
see [GL #966].
This is an optimisation: we can skip a lot of pointless work when we
know there is a DNAME there.
When we have a partial match and a DNAME above the QNAME, the closest
encloser has the same owner name as the DNAME and will have the DNAME
bit set in the type map, and we wouldn't use it because we would
return the DNAME + RRSIG(DNAME) instead.
So there is no point in looking for it, nor in attempting to check
that it is valid for the QNAME.
When sock->closehandle_cb is set, we need to run nmhandle_detach_cb()
asynchronously to ensure correct order of multiple packets processing in
the isc__nm_process_sock_buffer(). When not run asynchronously, it
would cause:
a) out-of-order processing of the return codes from processbuffer();
b) stack growth because the next TCP DNS message read callback will
be called from within the current TCP DNS message read callback.
The sock->closehandle_cb is set to isc__nm_resume_processing() for TCP
sockets which calls isc__nm_process_sock_buffer(). If the read callback
(called from isc__nm_process_sock_buffer()->processbuffer()) doesn't
attach to the nmhandle (f.e. because it wants to drop the processing or
we send the response directly via uv_try_write()), the
isc__nm_resume_processing() (via .closehandle_cb) would call
isc__nm_process_sock_buffer() recursively.
The below shortened code path shows how the stack can grow:
1: ns__client_request(handle, ...);
2: isc_nm_tcpdns_sequential(handle);
3: ns_query_start(client, handle);
4: query_lookup(qctx);
5: query_send(qctcx->client);
6: isc__nmhandle_detach(&client->reqhandle);
7: nmhandle_detach_cb(&handle);
8: sock->closehandle_cb(sock); // isc__nm_resume_processing
9: isc__nm_process_sock_buffer(sock);
10: processbuffer(sock); // isc__nm_tcpdns_processbuffer
11: isc_nmhandle_attach(req->handle, &handle);
12: isc__nm_readcb(sock, req, ISC_R_SUCCESS);
13: isc__nm_async_readcb(NULL, ...);
14: uvreq->cb.recv(...); // ns__client_request
Instead, if 'sock->closehandle_cb' is set, we need to detach the
handle asynchronously in 'isc__nmhandle_detach', so that line 8 in
the code flow above does not start this recursion. This ensures the
correct order when processing multiple packets in the function
'isc__nm_process_sock_buffer()' and prevents the stack growth.
When not run asynchronously, the out-of-order processing leaves the
first TCP socket open until all requests on the stream have been
processed.
If the pipelining is disabled on the TCP via `keep-response-order`
configuration option, named would keep the first socket in lingering
CLOSE_WAIT state when the client sends an incomplete packet and then
closes the connection from the client side.
'setup_delegation' depends on 'foundname' being the value returned
by 'dns_rbt_findnode' in the cache and 'find_coveringnsec' was
modifying 'foundname' when a covering NSEC was not found.
When caching glue, we need to ensure that there is no closer
source of truth for the name. If the owner name for the glue
record would be answered by a locally configured zone, do not
cache.
When caching additional and glue data *not* from a forwarder, we must
check that there is no "forward only" clause covering the owner name
that would take precedence. Such names would normally be allowed by
bailiwick rules, but a "forward only" zone introduces a new bailiwick
scope.
If we are using a forwarder, in addition to checking that names to
be cached are subdomains of the forwarded namespace, we must also
check that there are no subsidiary forwarded namespaces which would
take precedence. To be safe, we don't cache any responses if the
forwarding configuration has changed since the query was sent.
Commit 4ca74eee49 updated the zone grammar such that the zone
statement is printed with the valid options per zone type.
This commit is a follow-up, putting back the ZONE heading and adding
a note that these zone statements may also be put inside the view
statement.
It is tricky to actually print the zone statements inside the view
statement, so we decided to add a note saying that this is possible.
Change the isc_interval_t implementation from a separate data type
with a separate implementation to a shim on top of isc_time_t.
The distinction between isc_interval_t and isc_time_t has been kept
because they are semantically different - isc_interval_t is relative
and isc_time_t is absolute - but this allows isc_time_t and
isc_interval_t to be freely interchangeable, e.g. turning this:
isc_time_t *t1;
isc_interval_t *interval;
isc_time_t *t2;
isc_interval_set(interval, isc_time_seconds(t2), isc_time_nanoseconds(t2));
isc_time_subtract(t1, interval, t2);
to just:
isc_time_t *t1;
isc_interval_t *interval;
isc_time_t *t2;
isc_time_subtract(t1, t2, interval);
without introducing a whole set of new functions.
The isc_timer_reset() now works only with intervals for 'once' timers.
This makes the API almost 1:1 compatible with the libuv timers making
the further refactoring possible.
There were two places where the expires argument (an absolute
isc_time_t value) was being used. Both places have been converted to
use a relative interval argument in preparation for the simplification
and refactoring of the isc_timer API.
The isc_timer_create() function was a bit conflated: it could be used
to create a timer and start it at the same time. As there was a
single place where this was done before (see the previous commit for
nta.c), this was cleaned up and the isc_timer_create() function was
changed to only create a new timer.
nta.c was the only place where an active timer was created directly,
instead of first creating an inactive timer and then starting it with
isc_timer_reset().
Change the code to create inactive timer first, so we can refactor the
isc_timer_create() function.
While backporting !5934, I noticed a copy&paste mistake in the TSIG
chapter of the ARM.
The incorrect reference was introduced by the "Add hyperlinks from
program options to definition in man pages" commit, but it is not
worth creating a separate MR for that when the backport is not merged
yet.
(cherry picked from commit 4daef4a2a7)
Replace :manpage: with :iscman: to generate internal hyperlinks. That
way the reader can use the links even when offline, and they jump to
man pages for the same version.
Formerly, the HTML version of the man pages did not have links in the
See Also section, because the :manpage: role in Sphinx can generate
only external hyperlinks - and we do not have that enabled.
Enabling the Sphinx :manpage: linking could reliably create hyperlinks
only to external URLs, but that would take users to another version
of the docs.
Generated by:
find bin -name '*.rst' | xargs sed -i -e 's/:manpage:`\([^(]\+\)(\([0-9]\))`/:iscman:`\1(\2) <\1>`/g'
+ hand-edit to revert change for mmencode reference which is
not provided in our source tree.
Use the new role :iscman: to replace all occurrences of ``binary``
with :iscman:`binary`, creating a hyperlink to the manual page.
Generated using:
find bin -name *.rst | xargs fgrep --files-with-matches '.. iscman' | xargs -I{} -n1 basename {} .rst > /tmp/progs
for PROG in $(cat /tmp/progs); do find -name '*.rst' | xargs sed -i -e "s/\`\`$PROG\`\`/:iscman:\`$PROG\`/g"; done
Additional hand-edits were done, mainly around filter-aaaa and
filter-a, which are program names and option names at the same time.
A couple more edits were needed to fix .rst syntax broken by the
automatic replacement.
Sphinx has its own :program: syntax for referring to program names.
Use it for self-references in manual pages. These self-references are
not clickable and not as eye-catching as links, which is a good thing.
There is no point in attracting attention to ``dig`` several times on
a single page dedicated to dig itself.
Substituted automatically using:
find bin -name *.rst | xargs fgrep --files-with-matches '.. program' | xargs -n1 bash /tmp/repl.sh
With /tmp/repl.sh being:
BASE=$(basename "$1" .rst)
sed -i -e "s/\`\`$BASE\`\`/:program:\`$BASE\`/g" "$1"
The new directive and role "iscman" allow tagging & referencing man
pages in our source tree. Essentially it is just namespacing for ISC
man pages, but it comes with a couple of benefits.
Differences from the .. _man_program label we formerly used:
- Does not expand :ref:`man_program` into the full text of the page header.
- Generates an index entry with the category "manual page".
- The rendering style is closer to the ubiquitous one produced
by ``named`` syntax.
Differences from Sphinx built-in :manpage: role:
- Supports all builders with support for cross-references.
- Generates internal links (unlike :manpage: which generates external
URLs).
- Checks that the target exists within our source tree.
A side-effect of the hyperlinking is that typos in program and option
names are now detected by Sphinx.
Candidate -options were detected using:
find -name *.rst | xargs grep '``-[^`]'
and then modified from ``-o`` to :option:`-o` using regex
s/``\(-[^`]\+\)``/:option:`\1`/
+ manual modifications where necessary.
Non-hyphenated options were detected by looking at context around
program names:
find bin -name *.rst | xargs -I{} -n1 basename {} .rst | sort -u
and grepping for program name with trailing whitespace.
Stand-alone program names like ``named`` are not hyperlinked in this
commit.
The markup allows referencing individual options, and also makes them
more legible (no more thin red text on gray background).
Most of the work was done using regexes:
s/^``-\(.*\)``$/.. option:: -\1\r/
s/^``+\(.*\)``$/.. option:: +\1\r/
on bin/**/*.rst files along with visual inspection and hand-edits,
mostly for positional arguments.
Regex for rndc.rst:
s/^``\(.*\)``/.. option:: \1\r/
+ hand edits to remove extra asterisk and whitespace here and there.
Since pytest itself skips tests using dnspython if the latter is not
available, also using Automake conditionals for silently skipping
pytest-based tests requiring dnspython is redundant and hides
information. Allow all pytest-based tests requiring dnspython to be run
whenever pytest itself is available, in order to ensure test skipping is
done in a uniform manner.
Note that the above reasoning only applies to pytest-based tests, so
similar adjustments were not made for shell-based tests using Python
scripts that require dnspython ("chain", "cookie", "dnssec", "qmin").
The ability to conveniently mark tests which should only be run when the
CI_ENABLE_ALL_TESTS environment variable is set seems to be useful on a
general level and therefore it should not be limited to the "timeouts"
system test, where it is currently used.
pytest documentation [1] suggests reusing commonly used test markers
by putting them all in a single Python module which then has to be
imported by test files that want to use the markers defined therein.
Follow that
advice by creating a new bin/tests/system/pytest_custom_markers.py
Python module containing the relevant marker definitions.
Note that "import pytest_custom_markers" works from a test-specific
subdirectory because pytest modifies sys.path so that it contains the
paths to all parent directories containing a conftest.py file (and
bin/tests/system/ is one). PyLint does not like that, though, so add a
relevant PyLint suppression.
The above changes make bin/tests/system/timeouts/conftest.py redundant,
so remove it.
[1] https://docs.pytest.org/en/7.0.x/how-to/skipping.html#id1
Ensure all "import dns.*" statements are always placed after
pytest.importorskip('dns') calls, in order to allow the latter to
fulfill their purpose. Explicitly import all dnspython modules used by
each dnspython-based test to avoid relying on nested imports. Replace
function-scoped imports with global imports to reduce code duplication.
The intended purpose of the @pytest.mark.dnspython{,2} decorators was to
cause dnspython-based tests to be skipped if dnspython is not available
(or not recent enough). However, a number of system tests employing
those decorators contain global "import dns.resolver" statements which
trigger ImportError exceptions during test initialization if dnspython
is not available. In other words, the @pytest.mark.dnspython{,2}
decorators serve no useful purpose.
Currently, whenever a Python-based test requires dnspython, that
requirement applies to all tests in a given *.py file. Given that,
employ global pytest.importorskip() calls to ensure dnspython-based
parts of various system tests are skipped when dnspython is not
available. Remove all occurrences of the @pytest.mark.dnspython{,2}
decorators (and all associated code) to prevent confusion.
The intended purpose of the @pytest.mark.requests decorator was to cause
Python-based parts of the "statschannel" system test to be skipped if
the requests Python module is not available. However, both
tests-json.py and tests-xml.py contain a global "import requests"
statement which triggers ImportError exceptions during test
initialization if the requests module is not available. In other words,
the @pytest.mark.requests decorator serves no useful purpose.
Since all tests in both tests-json.py and tests-xml.py depend on the
requests Python module, employ pytest.importorskip() to ensure the
Python-based parts of the "statschannel" system test are skipped when
the requests module is not available. Remove all occurrences of the
@pytest.mark.requests decorator (and all associated code) to prevent
confusion.
All tests in bin/tests/system/statschannel/tests-xml.py require libxml2
support to be enabled in BIND 9 at build-time. Instead of applying the
same pytest.mark.skipif() decorator to every test in that file, set the
'pytestmark' global accordingly in order to immediately skip all tests
in tests-xml.py if libxml2 support is not compiled in.
Remove all occurrences of the @pytest.mark.xml decorator (and all
associated code) from the "statschannel" system test as the
xml.etree.ElementTree module is a part of the Python standard library
since Python 2.5 (so checking whether it is available is redundant) and
checking for libxml2 support in the tested BIND 9 build is already
handled by setting the 'pytestmark' global accordingly.
All tests in bin/tests/system/statschannel/tests-json.py require json-c
support to be enabled in BIND 9 at build-time. Instead of applying the
same pytest.mark.skipif() decorator to every test in that file, set the
'pytestmark' global accordingly in order to immediately skip all tests
in tests-json.py if json-c support is not compiled in.
Remove all occurrences of the @pytest.mark.json decorator (and all
associated code) from the "statschannel" system test as the json module
is a part of the Python standard library since Python 2.6 (so checking
whether it is available is redundant) and checking for json-c support in
the tested BIND 9 build is already handled by setting the 'pytestmark'
global accordingly.
Also remove a related excerpt from bin/tests/system/rpzextra/conftest.py
as it is a copy-paste artifact that serves no purpose in the "rpzextra"
system test.
The "statschannel" system test contains two Python helper modules:
- generic.py: test functions directly invoked by both tests-json.py
and test-xml.py,
- helper.py: helper functions invoked by test functions in generic.py.
The above logic for splitting helper functions into Python modules
prevents selective test skipping from working due to unconditional
import statements being present in both helper modules. For example, if
dnspython is not available on the test host, tests-json.py imports
generic.py, which in turn imports helper.py, which in turn attempts to
import various dnspython modules, triggering ImportError exceptions
during test initialization. Various decorators used for some tests
(like @pytest.mark.dnspython) suggest that such a scenario should be
handled gracefully, but that is not the case - modifying the test
collection in conftest.py does not prevent pytest from failing due to
import errors.
Fix by moving helper functions around to achieve a different split:
- generic.py: helper functions only relying on the Python standard
library,
- generic_dnspython.py: helper functions requiring dnspython.
Only two tests in tests-{json,xml}.py need dnspython to work
(test_traffic_json(), test_traffic_xml()). Since all
dnspython-dependent code is now present in generic_dnspython.py, employ
pytest.importorskip() in those two tests to ensure they can be
selectively skipped when dnspython is not available. Adjust other code
to account for the revised Python helper module layout. Remove all
occurrences of the @pytest.mark.dnspython decorator (and all associated
code) from the "statschannel" system test to prevent confusion.
The find invocation used by the bin/tests/system/get_ports.sh script
("find . -maxdepth 1 -mindepth 1 -type d") assumes the list of
directories in bin/tests/system/ remains unchanged throughout the run
time of a single system test suite. With pytest in use and the
conftest.py file now present in bin/tests/system/, that assumption is no
longer true as a __pycache__ directory may be created when the first
pytest-based test is started. Since the list of names returned by the
above find invocation serves as a fixed-size array of "port range
slots", any changes to that list during a system test suite run may lead
to port assignment collisions [1].
Fix by making the find invocation more nuanced, so that it only returns
names of directories containing test code. Squash a grep / cut pipeline
into a single awk invocation.
[1] see commit 31e5ca4bd9
Most Python-based system tests need to know which ports were assigned to
a given test by bin/tests/system/get_ports.sh. This is currently
handled by inspecting the values of various environment variables (set
by bin/tests/system/run.sh) and passing the port numbers to Python
scripts via pytest fixtures. However, this glue code has so far been
copy-pasted into each system test using it, rather than reused.
Since pytest also looks for conftest.py files in parent directories,
move commonly used fixtures to bin/tests/system/conftest.py. Set the
scope of all the moved fixtures to "session" as their return values are
only based on environment variables, so there is no point in recreating
them for every test requesting them. Adjust test code accordingly.
Building BIND 9 with an older version of BIND 9 installed would result
in a build failure. Fix the last two remaining cases where
<prog>_CFLAGS was being used, leading to the wrong order of build
flags on the command line.
We now run both the docs and docs:tarball jobs at the same time, and
keeping artifacts for a longer period of time is a waste.
Artifacts for the docs job have to be kept for a long period of time
because they are used by scripts behind the bind.isc.org web site.
The docs:tarball job is deemed cheap enough to run all the time, and
it catches omissions in the dist targets of Makefiles.
MR !5254 was missing changes to the dist target in the Makefile and
broke the docs build from the tarball without us noticing during the
pipeline run on the MR; it manifested itself only in scheduled
pipelines, which include the docs:tarball job.
In httpd.c, the send callback can directly call the read callback
without calling isc_nm_resumeread(). When the per-send timeout was
added, this could lead to a use-after-free when shutting down named.
Clean up the way we attach to .readhandle and .sendhandle, so there's
assurance that .readhandle is always non-NULL when reading and
.sendhandle is always non-NULL when sending.
Additionally, it was found that the implementation ignored the
"Connection: close" header and worked only accidentally, by closing
the connection after the first read from the TCP socket. This has
also been fixed.
Previously, established TCP connections (both client and server)
would be closed gracefully, waiting for the write timeout.
Don't wait for TCP connections to shut down gracefully; reset them
directly for a faster shutdown.
Previously, there was a single per-socket write timer that would get
restarted for every new write. This turned out to be insufficient
because the other side could keep resetting the timer and never read
back the responses.
Change the single write timer to a per-send timer, which in turn
resets the TCP connection on the first send timeout.
Remove outdated command references from ARM section
3.3.1. Tools for Use With the Name Server Daemon
and replace them with links to man pages.
Fixes: #2799
Both utilities were included as one man page, but this caused a problem:
the Sphinx directive .. include was used twice on the same file, which
prevented us from using labels (or anything with a unique identifier)
in the man pages. This effectively prevented linking to them.
Splitting the man pages allows us to solve the linking problems and
also clearly makes the text easier to follow, because it does not
mention two tools at the same time.
This change causes duplication of text but, given the frequency of
changes to these tools, I think it is acceptable. I've considered
deduplication using smaller .rst snippets included into both man
pages, but it would require more sed scripting to handle defaults
etc., and I think it would be way too complex a solution for this
problem.
Related: #2799
Both utilities were included as one man page, but this caused a problem:
the Sphinx directive .. include was used twice on the same file, which
prevented us from using labels (or anything with a unique identifier)
in the man pages. This effectively prevented linking to them.
Splitting the man pages allows us to solve the linking problems and
also clearly makes the text easier to follow, because it does not
mention two tools at the same time.
This change causes duplication of text but, given the frequency of
changes to these tools, I think it is acceptable.
Related: #2799
When dig was built without IDN support, it reported an error if the
+noidnin and/or +noidnout options were used. This meant the options
were not useful for a script that wants a consistent lack of IDN
translation regardless of how BIND is built.
Make dig complain about lack of built-in IDN support only when the
user asks for IDN translation.
Closes#3188
The C17 standard deprecated the ATOMIC_VAR_INIT() macro (see [1]).
Follow suit and remove the ATOMIC_VAR_INIT() usage in favor of a
simple assignment of the value, as this is what all supported
stdatomic.h implementations do anyway:
* MacOSX.plaform: #define ATOMIC_VAR_INIT(__v) {__v}
* Gcc stdatomic.h: #define ATOMIC_VAR_INIT(VALUE) (VALUE)
1. http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p1138r0.pdf
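The replacement is mechanical; for example (an illustrative variable):

#include <stdatomic.h>

/* before */
static atomic_uint_fast32_t refs = ATOMIC_VAR_INIT(1);
/* after */
static atomic_uint_fast32_t refs = 1;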
Previously, the function(s) in the commit subject could fail for
various reasons - mostly allocation failures, or other functions
returning a return code other than ISC_R_SUCCESS. Now, the
aforementioned function(s) can never fail and always return
ISC_R_SUCCESS.
Change the function(s) to return void and remove the extra checks in
the code that uses them.
Previously, the socket code would set the TCPv6 maximum segment size
to the minimum value to prevent IP fragmentation for TCP. This was
not yet implemented for the network manager.
Implement network manager functions to set and use the minimum MTU
socket option, set the TCP_MAXSEG socket option for both IPv4 and
IPv6, and use those to clamp the TCP maximum segment size for the
TCP, TCPDNS and TLSDNS layers in the network manager to 1220 bytes;
that is, 1280 (IPv6 minimum link MTU) minus 40 (IPv6 fixed header)
minus 20 (TCP fixed header).
We already rely on a similar value for UDP to prevent IP
fragmentation, and it makes sense to use the same value for IPv4 and
IPv6, because modern networks are required to support IPv6 packet
sizes. If there's a need for smaller TCP segment values, the MTU on
the interfaces needs to be properly configured.
The IPV6_USE_MIN_MTU socket option directs the IP layer to limit the
IPv6 packet size to the minimum required supported MTU from the base
IPv6 specification, i.e. 1280 bytes. Many implementations of TCP
running over IPv6 neglect to check the IPV6_USE_MIN_MTU value when
performing MSS negotiation and when constructing a TCP segment despite
MSS being defined to be the MTU less the IP and TCP header sizes (60
bytes for IPv6). This leads to oversized IPv6 packets being sent,
resulting in unintended Path Maximum Transmission Unit Discovery
(PMTUD) being performed and in fragmented IPv6 packets being sent.
Add and use a function to set socket option to limit the MTU on IPv6
sockets to the minimum MTU (1280) both for UDP and TCP.
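A sketch of the two socket options described in the last two commits,
in plain BSD-socket terms (the helper function is illustrative; the
netmgr wraps this differently):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

static void
clamp_mtu(int fd, int family) {
        int mss = 1220; /* 1280 - 40 (IPv6 header) - 20 (TCP header) */

#ifdef IPV6_USE_MIN_MTU
        if (family == AF_INET6) {
                int on = 1;
                /* Limit IPv6 packets to the 1280-byte minimum MTU. */
                (void)setsockopt(fd, IPPROTO_IPV6, IPV6_USE_MIN_MTU,
                                 &on, sizeof(on));
        }
#endif
#ifdef TCP_MAXSEG
        /* Advertise a TCP maximum segment size of 1220 bytes. */
        (void)setsockopt(fd, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss));
#endif
}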
For each algorithm there must be keys performing the KSK and
ZSK roles. After reading the keys from named.conf, check that
each algorithm present has both roles. A CSK implicitly has both
roles.
bad-ksk-without-zsk.conf only has a ksk defined, without a
matching zsk for the same algorithm.
bad-zsk-without-ksk.conf only has a zsk defined, without a
matching ksk for the same algorithm.
bad-unpaired-keys.conf has two keys of different algorithms,
one ksk-only and the other zsk-only.
When get_dispatch() returns an error code, the dns_request_createraw()
function jumps to the `cleanup` label, which leaves a previous
attachment to the `request` pointer undetached.
Fix the issue by jumping to the `detach` label instead.
The AX_PROG_CC_FOR_BUILD implementation to find a native CC compiler is
slightly better because it uses AC_PROG_CC and AC_PROG_CPP to find the
native compiler instead of just defaulting to `gcc` as AX_CC_FOR_BUILD
does.
AX_PROG_CC_FOR_BUILD also sets BUILD_EXEEXT that we already use in the
Makefile.am for `lib/dns/gen` while AX_CC_FOR_BUILD uses
EXEEXT_FOR_BUILD.
Formerly, the gen.h header contained a compatibility layer between
Win32 and POSIX platforms. Since we have already dropped the Win32
build, we can merge gen.h into gen.c, as the header file is not used
elsewhere.
The current implementation of isc_queue uses the Michael-Scott
lock-free queue, which in turn uses hazard pointers. It was
discovered that, given the way we use isc_queue, such a complicated
mechanism isn't really needed, because most of the time we either
execute the work directly when on the nmthread (in the case of UDP)
or schedule the work from the matching nmthread.
Replace the current implementation of isc_queue with a simple locked
ISC_LIST. There's a slight improvement: since moving the whole list
is very lightweight, we move the queue into a new list before we
start the processing, locking just for the move and not for every
single item on the list.
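The pattern looks roughly like this (an illustrative sketch with
pthreads; the type and function names are not the actual netmgr code):

#include <pthread.h>
#include <stddef.h>

typedef struct job {
        struct job *next;
} job_t;

typedef struct {
        pthread_mutex_t lock;
        job_t *head;
} queue_t;

static void
run_job(job_t *job) {
        (void)job; /* process one netievent (illustrative stub) */
}

static void
drain_queue(queue_t *q) {
        job_t *list;

        /* Moving the whole list is cheap, so hold the lock only for
         * the move, not for every single item. */
        pthread_mutex_lock(&q->lock);
        list = q->head;
        q->head = NULL;
        pthread_mutex_unlock(&q->lock);

        while (list != NULL) {
                job_t *next = list->next;
                run_job(list);
                list = next;
        }
}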
NOTE: There's room for future improvement: since we don't guarantee
the order in which the netievents are processed, we could have two
lists - one unlocked, used when scheduling the work from the matching
thread, and one locked, used from non-matching threads.
If the dns_request send callback is delayed, the dst API would get
deinitialized and the subsequent detach from the TSIG key would cause
an assertion failure.
Shut down the isc_managers early, and only then dereference the dst
objects when cleaning up the resources used by nsupdate.
The order in which the netievents are processed on the network manager
loop is not guaranteed. Therefore the recv/read callback can come
earlier than the send/write callback.
The dns_request API wasn't ready for this reordering and was
destroying the dns_request_t object before the send callback had been
called.
Add an additional attach/detach in the req_send()/req_senddone()
functions to make sure we don't destroy the dns_request_t while it's
still referenced by an asynchronous call.
For reference, the _cancel_lookup() function iterates through
the lookup's queries list and detaches them. In the ideal scenario,
that should be the last reference and the query will be destroyed
after that, but it is also possible that we are still expecting a
callback, which also holds a reference (for example, _cancel_lookup()
could have been called from recv_done(), when send_done() was still
not executed).
The start_udp() and start_tcp() functions are currently designed in
slightly different ways: start_udp() creates a new query attachment,
`connectquery`, to be used in the callback function, while
start_tcp() does not. That is a bug, but it is hidden by the fact
that when the query is erroneously destroyed prematurely (before
_cancel_lookup() is called) as a result, it also gets de-listed from
the lookup's queries list, so _cancel_lookup() doesn't even try to
detach it.
For better understanding, here's an illustration of the query's
references count changes, and from where it was changed:
UDP
---
1. _new_query() -> refcount = 1 (initial)
2. start_udp() -> refcount = 2 (lookup->current_query)
3. start_udp() -> refcount = 3 (connectquery)
4. udp_ready() -> refcount = 4 (readquery)
5. udp_ready() -> refcount = 5 (sendquery)
6. udp_ready() -> refcount = 4 (lookup->current_query)
7. udp_ready() -> refcount = 3 (connectquery)
8. send_done() -> refcount = 2 (sendquery)
9. recv_done() -> refcount = 1 (readquery)
10. _cancel_lookup() -> refcount = 0 (initial)
11. the query gets destroyed and removed from `lookup->q`
TCP, fortunate scenario
-----------------------
1. _new_query() -> refcount = 1 (initial)
2. start_tcp() -> refcount = 2 (lookup->current_query)
3. launch_next_query() -> refcount = 3 (readquery)
4. launch_next_query() -> refcount = 4 (sendquery)
5. tcp_connected() -> refcount = 3 (lookup->current_query)
6. tcp_connected() -> refcount = 2 (bug, there was no connectquery)
7. send_done() -> refcount = 1 (sendquery)
8. recv_done() -> refcount = 0 (readquery)
9. the query gets prematurely destroyed and removed from `lookup->q`
10. _cancel_lookup() -> the query is not in `lookup->q`
TCP, unfortunate scenario, revealing the bug
--------------------------------------------
1. _new_query() -> refcount = 1 (initial)
2. start_tcp() -> refcount = 2 (lookup->current_query)
3. launch_next_query() -> refcount = 3 (readquery)
4. launch_next_query() -> refcount = 4 (sendquery)
5. tcp_connected() -> refcount = 3 (lookup->current_query)
6. tcp_connected() -> refcount = 2 (bug, there was no connectquery)
7. recv_done() -> refcount = 1 (readquery)
8. _cancel_lookup() -> refcount = 0 (the query was in `lookup->q`)
9. we hit an assertion here when trying to destroy the query, because
sendhandle is not detached (which is done by send_done()).
10. send_done() -> this never happens
This commit does the following:
1. Add a `connectquery` attachment in start_tcp(), like done in
start_udp().
2. Add missing _cancel_lookup() calls for error scenarios, which
were possibly missing because before fixing the bug, calling
_cancel_lookup() and then calling query_detach() would cause
an assertion.
3. Log a debug message and call isc_nm_cancelread(query->readhandle)
for every query in the lookup from inside the _cancel_lookup()
function, like it is done in _cancel_all().
4. Add a `canceled` property for the query which becomes `true` when
the lookup (and subsequently, its queries) are canceled.
5. Use the `canceled` property in the network manager callbacks to
know that the query was canceled, and act like `eresult` was equal
to `ISC_R_CANCELED`.
There was a missing UNLOCK_LOOKUP in the recv_done() callback when
the operation had been canceled. That omission could result in a
deadlock situation.
BIND unconditionally uses shims for BN_GENCB_new(), BN_GENCB_free(),
and BN_GENCB_get_arg() for all LibreSSL versions and, correctly, for
OpenSSL <1.1.0 versions.
This breaks LibreSSL compilation starting with LibreSSL 3.5.0.
Use an autoconf check instead to determine whether this family of
functions is available.
LibreSSL 3.5.0 fails to compile with these shims. We could have just
removed the LibreSSL check from the pre-processor condition, but it
seems that these shims are no longer needed because all the supported
versions of OpenSSL and LibreSSL have those functions.
According to EVP_ENCRYPTINIT(3) manual page in LibreSSL,
EVP_CIPHER_CTX_new() and EVP_CIPHER_CTX_free() first appeared in
OpenSSL 0.9.8b, and have been available since OpenBSD 4.5.
the "zone" clause can be documented using, for instance,
`cfg_test --zonegrammar primary", which prints only
options that are valid in primary zones. this was not
the method being used when generating the named.conf
man page; instead, "zone" was documented with all possible
options, and no zone types at all.
this commit removes "zone" from the generic documentation
and adds include statements in named.conf.rst so that
correct zone grammars will be included in the man page.
when parsing key pairs, if the '=' character fell at max_token,
a protective INSIST preventing buffer overrun could be triggered.
Attempt to grow the buffer immediately before the INSIST.
Also removed an unnecessary INSIST on the opening double quote
of the key buffer pair.
Resolve "Issue 45110 by ClusterFuzz-External: bind9:dns_master_load_fuzzer: Undefined-shift in soa_get"
Closes#3176
See merge request isc-projects/bind9!5909
By default, C promotes short unsigned values to signed int, which
leads to undefined behaviour when the value is shifted by too much.
Force unsigned arithmetic to be performed by explicitly casting to
an unsigned type.
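For example (a sketch of the class of fix; the helper is
illustrative, not the actual soa_get code):

#include <stdint.h>

uint32_t
get32(const unsigned char *p) {
        /* Without the casts, p[0] is promoted to (signed) int, and
         * shifting a value with the top bit set by 24 is undefined. */
        return (((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
                ((uint32_t)p[2] << 8) | (uint32_t)p[3]);
}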
isc__nmsocket_reset() was missing a case for raw TCP sockets (used
by RNDC and DoH), which would cause an assertion failure when the
write timeout was triggered.
TCP sockets are now also properly handled in isc__nmsocket_reset().
mem_maybedup() calls isc_mem_allocate() if an mctx is supplied,
but that can no longer fail, so now the only way mem_maybedup()
could return NULL is if it was given a NULL source address by the
caller. this commit adds a REQUIRE to prevent that scenario, and
cleans up all the calling code that previously checked for NULL
return values.
this function is mostly used in rdata tostruct() implementations, so
the documentation for dns_rdata_tostruct() has been updated to
remove 'ISC_R_NOMEMORY' as a possible return value.
"masters" and "default-masters" are now flagged so they will
not be included in the named.conf man page, despite being
accepted as valid options by the parser for backward
compatibiility.
... along with dns_rdataclass_fromtext and dns_rdatatype_fromtext
Most of the test binary is a modified named-rrchecker. Main differences:
- reads a single RR and exits
- does not refuse meta classes and RR types
We actually do have some fromtext code for meta-things, so erroring
out in named-rrchecker would prevent us from testing this code.
The corpus has examples of all currently supported RR types. I did
not do any minimization.
In the future, use the command
diff -U0 \
<(sed -n -e 's/^.*fromtext_\(.*\)(.*$/\1/p' lib/dns/code.h | \
sort) \
<(ls fuzz/dns_rdata_fromtext.in/)
to check for missing RR types.
When an isc__nm_uvreq_t gets deactivated, it can be put onto an array
stack to be reused later, to save some initialization time.
Unfortunately, this might hide some use-after-free errors.
Disable the inactive uvreq caching when compiled with Address or
Thread Sanitizer.
When an isc_nmhandle_t gets deactivated, it can be put onto an array
stack to be reused later, to save some initialization time.
Unfortunately, this might hide some use-after-free errors.
Disable the inactive handle caching when compiled with Address or
Thread Sanitizer.
The isc__nmsocket_t has a locked array of isc_nmhandle_t that is not
used for anything. isc__nmhandle_get() adds the isc_nmhandle_t to
the locked array (resizing it if necessary), and the handle is
removed when isc_nmhandle_put() finally destroys it. That's all it
does, so it serves no useful purpose.
Remove the .ah_handles, .ah_size, and .ah_frees members of the
isc__nmsocket_t and .ah_pos member of the isc_nmhandle_t struct.
When the TCP, TCPDNS or TLSDNS connection times out, the isc__nm_uvreq_t
would be pushed into sock->inactivereqs before the uv_tcp_connect()
callback finishes. Because the isc__nmsocket_t keeps the list of
inactive isc__nm_uvreq_t, this would cause use-after-free only when the
sock->inactivereqs is full (which could never happen because the failure
happens in connection timeout callback) or when the sock->inactivereqs
mechanism is completely removed (f.e. when running under Address or
Thread Sanitizer).
Delay the isc__nm_uvreq_t deallocation until the connection callback
runs, and only signal that the connection callback should be called
by shutting down the libuv socket from the connection timeout
callback.
Commit aab691d512 did not fix all possible
scenarios in which the ns_statscounter_recursclients counter underflows.
The solution implemented therein can be ineffective e.g. when CNAME
chaining happens with prefetching enabled.
Here is an example recursive resolution scenario in which the
ns_statscounter_recursclients counter can underflow with the current
logic in effect:
1. Query processing starts, the answer is not found in the cache, so
recursion is started. The NS_CLIENTATTR_RECURSING attribute is set.
ns_statscounter_recursclients is incremented (Δ = +1).
2. Recursion completes, returning a CNAME. client->recursionquota is
non-NULL, so the NS_CLIENTATTR_RECURSING attribute remains set.
ns_statscounter_recursclients is decremented (Δ = 0).
3. Query processing restarts.
4. The current QNAME (the target of the CNAME from step 2) is found in
the cache, with a TTL low enough to trigger a prefetch.
5. query_prefetch() attaches to client->recursionquota.
ns_statscounter_recursclients is not incremented because
query_prefetch() does not do that (Δ = 0).
6. Query processing restarts.
7. The current QNAME (the target of the CNAME from step 4) is not found
in the cache, so recursion is started. client->recursionquota is
already attached to (since step 5) and the NS_CLIENTATTR_RECURSING
attribute is set (since step 1), so ns_statscounter_recursclients is
not incremented (Δ = 0).
8. The prefetch from step 5 completes. client->recursionquota is
detached from in prefetch_done(). ns_statscounter_recursclients is
not decremented because prefetch_done() does not do that (Δ = 0).
9. Recursion for the current QNAME completes. client->recursionquota
is already detached from, i.e. set to NULL (since step 8), and the
NS_CLIENTATTR_RECURSING attribute is set (since step 1), so
ns_statscounter_recursclients is decremented (Δ = -1).
Another possible scenario is that after step 7, recursion for the target
of the CNAME from step 4 completes before the prefetch for the CNAME
itself. fetch_callback() then notices that client->recursionquota is
non-NULL and decrements ns_statscounter_recursclients, even though
client->recursionquota was attached to by query_prefetch() and therefore
not accompanied by an incrementation of ns_statscounter_recursclients.
The net result is also an underflow.
Instead of trying to properly handle all possible orderings of events
set into motion by normal recursion and prefetch-triggered recursion,
adjust ns_statscounter_recursclients whenever the recursive clients
quota is successfully attached to or detached from. Remove the
NS_CLIENTATTR_RECURSING attribute altogether as its only purpose is made
obsolete by this change.
Commit b6d40b3c4e removed most uses of the
'fctx' variable from the rctx_dispfail() function: it is now only needed
by the FCTXTRACE3() macro. However, when --enable-querytrace is not in
effect, that macro evaluates to a list of UNUSED() macros that does not
include "UNUSED(fctx);". This triggers the following compilation
warning when building without --enable-querytrace:
resolver.c: In function 'rctx_dispfail':
resolver.c:7888:21: warning: unused variable 'fctx' [-Wunused-variable]
7888 | fetchctx_t *fctx = rctx->fctx;
| ^~~~
Fix by adding "UNUSED(fctx);" lines to all FCTXTRACE*() macros. This is
safe to do because all of those macros use the 'fctx' local variable, so
there is no danger of introducing new errors caused by use of undeclared
identifiers.
Due to a bug in gcc-11, the build fails when AddressSanitizer is
enabled. Downgrading the -Wstringop-overread to just a warning in the
gcc:asan build allows the code to compile.
The keep-response-order option has been obsoleted. In this commit,
remove the keep-response-order ACL map (rendering the option a
no-op), the call to isc_nm_sequential(), and the now-unused
isc_nm_sequential() function itself.
The keep-response-order option was introduced when TCP pipelining
was added to BIND 9, as a failsafe for possibly non-compliant
clients.
Declare keep-response-order obsolete, as all DNS clients should
either support out-of-order processing or not send more DNS queries
until the DNS response for the previous one has been received.
There was an artificial limit of 23 on the number of simultaneous
pipelined queries in a single TCP connection. The new network
manager is capable of handling "unlimited" queries (limited only by
the TCP read buffer size), similar to the "unlimited" handling of
DNS queries received over UDP.
Don't limit the number of TCP queries that we can process within a
single TCP read callback.
Extend the timeouts system test to ensure that the maximum outgoing
transfer time (max-transfer-time-out) and maximum outgoing transfer
idle time (max-transfer-idle-out) work as expected. This is done by
lowering the limits to 5/1 minutes and testing that the connection is
dropped while sleeping between the individual XFR messages.
While refactoring libns to use the new network manager, the
max-transfer-*-out options were not implemented and became
non-operational.
Reimplement the max-transfer-idle-out functionality using the write
timer and max-transfer-time-out using the new isc_nm_timer API.
While refactoring lib/ns/xfrout.c, it was discovered that the
.shutdown and .shutdown_arg members of the ns_client_t structure are
unused.
Remove the unused members and the associated code that was using them
in ns_xfrout.
bin/tests/system/makejournal needs to ignore DNS_R_SEENINCLUDE
when calling dns_db_load(), otherwise it cannot generate a journal
for a zone file with a $INCLUDE statement.
When an invalid DNS message is received, there was a handling
mechanism for DoH that would be called to return a proper HTTP
response.
Reuse this mechanism and reset the TCP connection when the client is
blackholed, the DNS message is completely bogus, or the ns_client
receives a response instead of a query.
- certain TCP result codes, including ISC_R_EOF and
ISC_R_CONNECTIONRESET, were being mapped to ISC_R_SHUTTINGDOWN
before calling the response handler in tcp_recv_cancelall().
the result codes should be passed through to the response handler
without being changed.
- the response handlers, resquery_response() and req_response(), had
code to return immediately when encountering ISC_R_EOF, but this is
not the correct behavior; that should only happen in the case of
ISC_R_CANCELED, when it was the caller that canceled the operation.
- ISC_R_CONNECTIONRESET was not being caught in rctx_dispfail().
- removed code in rctx_dispfail() to retry queries without EDNS
when receiving ISC_R_EOF; this is now treated the same as any
other connection failure.
Use the isc_nmhandle_setwritetimeout() function in the netmgr unit
test to allow more time for writing and reading the responses,
because some of the intervals used in the unit tests are really
small, leaving little room for any delays.
In some situations (unit test and forthcoming XFR timeouts MR), we need
to modify the write timeout independently of the read timeout. Add
an isc_nmhandle_setwritetimeout() function that can be called before
isc_nm_send() to specify a custom write timeout interval.
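Usage might look roughly like this (a sketch; the exact signatures,
the callback names, and the 30-second value are assumptions):

/* Allow up to 30 seconds for this particular send, independently
 * of the read timeout, then send as usual. */
isc_nmhandle_setwritetimeout(handle, 30 * 1000);
isc_nm_send(handle, &region, send_done_cb, cbarg);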
Extend the timeouts system test with a check that bursts queries for a
large TXT record and never reads any responses back, filling up the
server's TCP write buffer. The test should work with the default
wmem_max value on Linux (208k).
When the outgoing TCP write buffers are full because the other party is
not reading the data, uv_write() could wait indefinitely on the
uv_loop and never call the callback. Add a new write timer that uses
the `tcp-idle-timeout` value to interrupt the TCP connection when we
are unable to send data for the defined period of time.
The uv_tcp_close_reset() function was added in libuv 1.32.0 and since we
support older libuv releases, we have to add a shim uv_tcp_close_reset()
implementation loosely based on libuv.
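A hedged sketch of the general approach such a shim can take on POSIX
systems (SO_LINGER with a zero timeout makes closing the socket send an
RST); the function name and error handling here are simplified and not
the actual implementation:

#include <sys/socket.h>
#include <uv.h>

static int
compat_tcp_close_reset(uv_tcp_t *handle, uv_close_cb close_cb) {
	struct linger sl = { .l_onoff = 1, .l_linger = 0 };
	uv_os_fd_t fd;
	int r = uv_fileno((uv_handle_t *)handle, &fd);

	if (r != 0) {
		return (r);
	}
	/* A zero linger timeout turns close() into an abortive RST. */
	if (setsockopt(fd, SOL_SOCKET, SO_LINGER, &sl, sizeof(sl)) != 0) {
		return (UV_EINVAL);
	}
	uv_close((uv_handle_t *)handle, close_cb);
	return (0);
}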
Before adding the write timer, we have to rename the generic
sock->timer to sock->read_timer. We don't touch the function names, to
limit the impact of the refactoring.
There was a bug in the checking of the "blackhole" ACL in
dns_request_create*(), causing an address to be treated as included
in the ACL if it was explicitly *excluded*. Thus, leaving "blackhole"
unset had no effect, but setting it to "none" would cause any
destination addresses to be rejected for dns_request purposes. This
would cause zone transfer requests and SOA queries to fail, among
other things.
The bug has been fixed, and "blackhole { none; };" was added to the
xfer system test as a regression test.
When a resolver priming attempt completes, the following message is
currently logged:
resolver priming query complete
This message is identical for both successful and failed priming
attempts. Consider the following log excerpts:
- successful priming attempt:
10-Feb-2022 11:33:11.272 all zones loaded
10-Feb-2022 11:33:11.272 running
10-Feb-2022 11:33:19.722 resolver priming query complete
- failed priming attempt:
10-Feb-2022 11:33:29.978 all zones loaded
10-Feb-2022 11:33:29.978 running
10-Feb-2022 11:33:38.432 timed out resolving '_.org/A/IN': 2001:500:9f::42#53
10-Feb-2022 11:33:38.522 timed out resolving './NS/IN': 2001:500:9f::42#53
10-Feb-2022 11:33:42.132 timed out resolving '_.org/A/IN': 2001:500:12::d0d#53
10-Feb-2022 11:33:42.285 timed out resolving './NS/IN': 2001:500:12::d0d#53
10-Feb-2022 11:33:44.685 resolver priming query complete
Include the result of each priming attempt in the relevant log message
to give the administrator better insight into named's resolver priming
process.
The UV_RUNTIME_CHECK() macro requires keeping the function name in
sync, like this:
r = func(...);
UV_RUNTIME_CHECK(func, r);
Add a semantic patch to keep the function name and the return variable
in sync with the previous line.
When libuv functions fail, they return an error code that can be
useful for more detailed debugging. Currently, we usually just check
whether the return value is 0 and invoke an assertion failure if it
isn't, throwing away the details of why the call failed.
Unfortunately, such failures often happen on more exotic platforms.
Add a UV_RUNTIME_CHECK() macro that can be used to print a more
detailed error message (via uv_strerror()) before abruptly ending the
execution of the program with the assertion.
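A minimal sketch of what such a macro can look like, assuming a plain
abort()-based failure path rather than the actual libisc fatal-error
helper:

#include <stdio.h>
#include <stdlib.h>
#include <uv.h>

/* Abort with the libuv error text when 'ret' is non-zero. */
#define UV_RUNTIME_CHECK(func, ret)                                 \
	if ((ret) != 0) {                                           \
		fprintf(stderr, "%s:%d: %s failed: %s\n", __FILE__, \
			__LINE__, #func, uv_strerror(ret));         \
		abort();                                            \
	}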
Add a note to the DNSSEC guide and to the ARM reference that a ZSK/KSK
pair used for signing your zone should have the same algorithm.
This commit also updates the 'dnssec-policy/keys' example to use the
slightly more modern 'rsasha256' algorithm.
For users it's not really important whether an RFC is an Internet
Standard, Proposed Standard, or Experimental. RFCs are now regrouped
into the "Protocol" and "Best Current Practice" categories, plus a
catch-all "FYI" category.
After the build system refactoring, we no longer call AM_PROG_CC_C_O
because it is an obsolescent macro. According to the Automake manual,
`AC_PROG_CC` was rewritten in Automake 1.14 so that the call is no
longer required; thus we need to require at least Automake version
1.14.
In Autoconf, AC_INIT() accepts a bug-report address for reporting
issues (e.g. when the test suite fails). Instead of providing a
generic email address, change this to the address where issues should
be reported, with the default Bug template applied.
The task exclusive mode stops all processing (tasks and networking IO)
except the designated exclusive task events. This has an impact on the
operation of the server. Add log messages indicating when we start
and when we end the exclusive task mode.
There are reported occurrences where the statistics counters underflow
and start reporting nonsense.
Add a check for the underflow when ``named`` is compiled in the
developer mode.
Replace the hard-coded paths for various BIND 9 files (configuration,
pid, etc.) in the man pages and ARM with compile-time values using the
sphinx-build replace system.
This is more complicated than it sounds, because the reStructuredText
specification doesn't allow |substitutions| inside ``code-blocks``, so
for each specific file we had to create its own substitution, which is
sub-optimal, but it is the only way to do this without adding a Sphinx
extension.
The isc_thread_setaffinity call was removed in !5265 and we are not
going to restore it because it was proven that the performance is better
without it. Additionally, remove the already disabled cpu system test.
The isc_thread_setconcurrency() function is unused, and calling
pthread_setconcurrency() on Linux has no effect; it was originally
added because of Solaris in 2001, and its last use was removed when
taskmgr was refactored to run on top of netmgr in !4918.
The "suppr-lsan.txt" file needs to be referenced with GitLab-specific
variable, otherwise AddressSanitizer won't find it outside the
"isc-projects" project group.
This has been introduced in 8a4f098dee.
When more than one token is initialized in SoftHSMv2,
care must be taken to correctly identify them.
Use a SoftHSMv2 token label which will uniquely identify the
token used for this test.
Use the "--token-label" parameter for the `pkcs11-tool` program
to make sure that it finds and uses the correct token.
The 'id' variable is either keyfromlabel-ksk or keyfromlabel-zsk and is
set in the 'keygen' and 'keyfromlabel' functions. It should not be used
outside these functions.
This test was originally in the pkcs11 system test. The crash it
covers happened in BIND 9's native PKCS#11 code, which was removed in
9.17, so strictly speaking the test is no longer needed. Nevertheless,
it doesn't hurt to keep the test case.
Add a system test for engine_pkcs11 interactions that replaces the
tests that are done in the native PKCS#11 system test.
The native PKCS#11 code was removed in 9.17, but the pkcs11 system
test was not ported over.
The "directory" configuration options affects the configuration listed
after the directive but not before which may affect ``include``
directive with relative file paths.
When the isc_quota_attach_cb() API returns ISC_R_QUOTA (meaning the
hard quota was reached), accept_connection() would return without
logging a message that the quota had been reached.
Change the connection callback to log the quota-reached message.
Formerly parental-agents grammar was an exception and it did not
auto-generate itself from source code. From now on it is generated using
the same mechanism as other grammars.
For consistency with the rest of the system, I've also renamed the
grammar file and the link anchors from "parentals" to
"parental-agents".
Technically this is fixup for commit
0311705d4b.
Related: !5234
The missing `::` in the .rst files caused the grammar sections in the
docs to render empty.
The `::` was accidentally removed in an unrelated commit
58bd26b6cf which was supposed to update
only copyright headers.
Fixes: #3120
DLZ modules no longer support being built without threads,
so the "#if PTHREADS" conditionals were no longer necessary,
and were also causing errors in some of the modules due to
PTHREADS no longer being defined in dlz_pthread.h.
Apparently we forgot about DLZ when updating the DNS_CLIENTINFO_VERSION
constant for ECS, which has been at value "3" since ECS was introduced.
The code in example drivers and tests now hardcodes version numbers
2 (without ECS) and 3 (with ECS) depending on what a given code path
requires.
this brings DNS_CLIENTINFO_VERSION into line with the subscription
branch so that fixes applied to clientinfo processing can also be
applied to the main branch without diverging.
The system test suite used to have sequential system tests, which
couldn't run in parallel with the rest of the system tests. As there
are no such tests anymore, the underlying infrastructure can be
dropped.
"parallel.sh" script was used on Windows to run system tests in
parallel. Since Windows support was removed from BIND 9, the script is
not needed anymore.
testsummary.sh was not updated after the build system rewrite to
Autotools, and would need to be fixed to produce the test summary and
the core dump, assertion failure, and ThreadSanitizer reports.
Given that all of this is already provided by Autotools and run.sh,
there's little use for the testsummary.sh script, and it should be
dropped.
The IBM POWER architecture has an L1 cache line size of 128 bytes.
Take advantage of that on that architecture and do not force the more
common value of 64. When it is possible to detect a higher value, use
that value instead. Keep the default at 64.
In the RPZ documentation, there's a mistake where it states that the
default behavior is disabled by setting `qname-wait-recurse yes;`,
while in fact it's the opposite: `qname-wait-recurse no;`.
This affects only the RST documentation.
runall.sh was mainly used on Windows, and as Windows support was
removed from the "main" branch, the script is not needed anymore.
Also, remove the bin/tests/system/README text on running multiple
system test suites simultaneously with runall.sh, as that support was
not present in the script anyway.
bin/tests/system/setup.sh just executes the setup.sh script of a
particular system test in the directory of that system test. This does
not seem useful enough to keep maintaining it.
The stopall.sh script takes almost 2 minutes to go through all test
subdirectories (due to a sleep in stop.pl) and does not seem to be an
efficient way to stop manually started tests.
The keyfromlabel system ECDSA tests sometimes fail. When this happens
the ZSK and KSK key id values differ by 1, which is an indication that
the same key is used for both DNSKEY records.
When the private key is retrieved with 'ENGINE_load_private_key()', the
public key is already set. But sometimes that key differs from the key
which was retrieved with 'ENGINE_load_public_key()'.
The libp11 source code uses the ID to find the key, and without IDs
all the keys are "equal", so it returns the first key in the array of
enumerated keys instead of the matching key. In our test we didn't use
'--id', just '--label'. With this change, the system test should no
longer fail intermittently.
Note this is only an issue for ECDSA keys, not RSA keys.
These memory leaks are a known issue in libp11. From Timo Teras:
The relevant code is:
https://github.com/OpenSC/libp11/blob/master/src/eng_front.c#L114-L123
The authors of libp11 did not get the locking right and decided
that having intentional memory leaks is better than risking a deadlock.
The leak logs indicate that it is the cached structures that should
have been freed.
These are not run-time leaks, so suppressing them is probably
okay.
Add a missing system test for dnssec-keyfromlabel. Test, for various
algorithms, that we can generate key files from a key that is stored
in an HSM, and that those keys can be used for signing with
dnssec-signzone.
GitLab CI needs to know about some environment variables that tell it
where OpenSSL and SoftHSM2 are installed. This is done in the
image, making the prepare-softhsm2.sh script obsolete.
The SoftHSM2 module location is system specific.
- Removed all code that only runs under CYGWIN, and made all
code that doesn't run under CYGWIN non-optional.
- Removed the $TP variable which was used to add optional
trailing dots to filenames; they're no longer optional.
- Removed references to pssuspend and dos2unix.
- No need to use environment variables for diff and kill.
- Removed uses of "tr -d '\r'"; this was a workaround for
a cygwin regex bug that is no longer needed.
Commit c787a539d2 fixed a certain class of
intermittent system test failures caused by named instances unable to
restart. The root cause was bin/tests/system/stop.pl returning without
waiting for a named instance to remove its lock file.
Later on, it turned out that the above change caused other issues on
Windows due to the way named handles signals on that platform. Commit
761ba4514f intended to address those
issues by making the server_lock_file() subroutine in
bin/tests/system/stop.pl return an empty value on Windows, in order to
prevent the script from waiting for lock file cleanup on that platform.
Note, however, that Windows detection in that subroutine is limited to
checking whether the CYGWIN environment variable is set.
While that environment variable was not set on Unix-like systems before
commit 761ba4514f, another commit
(a33237f070, merged a few weeks later)
changed that by setting the CYGWIN environment variable to an empty
value on Unix-like systems. This made the defined($ENV{'CYGWIN'}) check
in server_lock_file() return true, inadvertently preventing
bin/tests/system/stop.pl from waiting for lock file removal before
exiting on Unix-like systems and therefore reintroducing the original
issue.
Fix by making server_lock_file() only return an empty value when the
CYGWIN environment variable is set to a non-empty value (which is what
bin/tests/system/conf.sh.win32 does). Adjust a similar check in the
pid_file_exists() subroutine in the same way for consistency.
The echo_*() and cat_*() functions in bin/tests/system/conf.sh.common
call the "read" builtin command without specifying the field separator
to use. This results in leading whitespace getting stripped from each
line of the texts passed to those functions, which mangles e.g. pytest
output, hindering test failure troubleshooting.
Address by setting IFS to an empty value for the "read" calls used in
the aforementioned helper functions.
The bin/tests/system/start.pl script truncates the named.run file for a
given named instance unless it is invoked with the --restart
command-line option. Ever since Python-based tests were introduced,
bin/tests/system/run.sh may start named instances used by a given system
test multiple times within a single run, causing the
bin/tests/system/start.pl script to truncate some of the log files
written during the test. This makes troubleshooting certain test
failures hard or even impossible.
Fix by calling bin/tests/system/start.pl with the --restart command-line
option for every start_servers() invocation except the first one.
TLS clients can have their clock a short time in the past, which will
result in them not being able to validate the certificate.
Setting the "not before" property 5 minutes in the past accommodates
some possible clock skew across systems.
dns_dlzcreate() fails to free the memory allocated for dlzname
when an error occurs.
Free dlzname's memory (acquired earlier with isc_mem_strdup())
by calling isc_mem_free() before returning an error code.
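A hedged sketch of the error-path pattern described above; the names
mirror the description, but the real dns_dlzcreate() cleanup code is
not reproduced here:

#include <isc/mem.h>
#include <isc/result.h>

static isc_result_t
cleanup_dlzname(isc_mem_t *mctx, char *dlzname, isc_result_t result) {
	if (result != ISC_R_SUCCESS && dlzname != NULL) {
		/* dlzname was acquired with isc_mem_strdup(). */
		isc_mem_free(mctx, dlzname);
	}
	return (result);
}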
When failure is expected, the `rndc` command in the catz system test
is being called directly instead of using a function, i.e.:
$RNDC -c ../common/rndc.conf -s 10.53.0.2 -p 9953 reconfig \
> /dev/null 2>&1 && ret=1
... instead of:
rndccmd 10.53.0.2 reconfig && ret=1
This is done to suppress messages like "lt-rndc: 'reconfig' failed:
failure" appearing in the message log of the test, because failure
is actually expected, and the appearance of that message can be
confusing.
The port value used in this case is not correct, causing the
`rndc reload` command to fail. This error was not detected earlier
only because the failure of the command is actually expected; the
failure happens for the "wrong" reason, but the test still passes.
Fix the error by using the existing variable instead of the fixed
number.
Test the view reverting code by introducing a faulty dlz configuration
in named.conf and using `rndc reconfig` to check if named handles the
situation correctly.
We use "dlz" because the dlz processing code is located in an ideal
place in the view configuration function for the test to cover the
view reverting code.
This test is specifically added to the catz system test to additionally
cover the catz reconfiguration during the mentioned failed
reconfiguration attempt.
When a zone is being configured with a new view, the catalog zones
structure will also be linked to that view. Later on, in case of some
error, should the zone be reverted to the previous view, the link
between the catalog zones structure and the view won't be reverted.
Change the dns_zone_setviewrevert() function so it calls
dns_zone_catz_enable() during a zone revert, which will reset the
link between `catzs` and view.
Separate the locked parts of dns_zone_catz_enable() and
dns_zone_catz_disable() functions into static functions. This will
let us perform those tasks from the other parts of the module while
the zone is locked, avoiding one pair of additional unlocking and
locking operations.
If a view configuration error occurs during a named reconfiguration
procedure, BIND can end up having twin views (old and new), with some
zones and internal structures attached to the old one, and others
attached to the new one, which essentially creates chaos.
Implement some additional view reverting mechanisms to avoid the
situation described above:
1. Revert rpz configuration.
2. Revert catz configuration.
3. Revert zones to view attachments.
There were three RFCs listed in the list of "RFCs we implement" but
missing from the ARM.
Command to compare lists in the two documents:
diff <(grep -o '^ RFC[0-9]\+' doc/misc/rfc-compliance | sed -e 's/[^0-9]//g' | sort -n) <(grep '^:rfc:`' doc/arm/general.rst | sed -e 's/^.*`\([0-9]*\)`.*$/\1/' | sort -n)
The Supported Platforms section is now really only about platforms,
not libraries. Libraries were moved to the Building BIND section.
We now have a section for required libraries and a second one for
optional features. The wordy explanations were taken verbatim from the
original README.md.
Converted using pandoc 2.14.2-9 on Arch Linux:
$ pandoc --shift-heading-level-by=-1 -f markdown -t rst README.md > doc/arm/build.rst
Plus a hand-edit to remove sections other than Building BIND 9, fix
misindented section headers, and add a standard copyright header.
Converted using pandoc 2.14.2-9 on Arch Linux:
$ pandoc -f markdown -t rst PLATFORMS.md > PLATFORMS.rst
The pandoc-generated copyright header was subsequently replaced with
the usual one for .rst files.
For reproducible builds, we use the last modification time of the
CHANGES file. This works pretty well, unless the builds are made in
different timezones.
Use the UTC option to the date command to make the builds
reproducible.
On some systems, glibc can return 0 instead of the cache-line size to
indicate that the cache line sizes cannot be determined. This is a
comment from the glibc source code:
/* In general we cannot determine these values. Therefore we
return zero which indicates that no information is
available. */
As the goal of the check is to determine whether the L1 cache line
size is still 64 (the value we would use anyway if the sysconf() call
were not available), we can also ignore the invalid values returned by
the sysconf() call.
As far as I can tell, it is some leftover from the times when Sphinx
docs were introduced (commit 9fb6d11abb).
It seems like it is not referenced from anywhere.
Extend the error message displayed when support for DNS over HTTPS is
requested but libnghttp2 is unavailable at build time, in order to help
the user find a way out of such a situation.
The terms "DNS over HTTPS" and "DNS over TLS" should be hyphenated when
they are used as adjectives and non-hyphenated otherwise. Ensure all
occurrences of these terms in the source tree follow the above rule.
(CHANGES and release notes are intentionally left intact.)
Tweak a related ARM snippet, fixing a typo in the process.
If isc_app_run() gets interrupted by a signal, the global 'rndc_task'
variable may already be detached from (set to NULL) by the time the
outstanding netmgr callbacks are run. This triggers an assertion
failure in isc_task_shutdown(). However, explicitly calling
isc_task_shutdown() from rndc code is redundant because it does not use
isc_task_onshutdown() and the task_shutdown() function gets
automatically called anyway when the task manager gets destroyed (after
isc_app_run() returns). Remove the redundant isc_task_shutdown() calls
to prevent crashes after receiving a signal.
rndc_recvdone() is not treating the ISC_R_CANCELED result code as a
request to stop data processing, which may cause a crash when trying to
dereference ccmsg->buffer. Fix by ensuring ISC_R_CANCELED results in an
early exit from rndc_recvdone().
Make sure the logic for handling ISC_R_CANCELED in rndc_recvnonce()
matches the one present in rndc_recvdone() to ensure consistent behavior
between these two sibling functions.
Sometimes serving a query or two might fail in the test due to the
listeners not being reinitialised in time. This commit makes the test
suite wait for the reconfiguration message in the log file to detect
when the reconfiguration request has completed.
gnutls-cli is tricky to script around as it immediately closes the
server connection when its standard input is closed. This prevents
simple shell-based I/O redirection from being used for capturing the DNS
response sent over a TLS connection and the workarounds for this issue
employ non-standard utilities like "timeout".
Instead of resorting to clever shell hacks, reimplement the relevant
check in Python. Exit immediately upon receiving a valid DNS response
or when gnutls-cli exits in order to decrease the test's run time.
Employ dnspython to avoid the need for storing DNS queries in binary
files and to improve test readability. Capture more diagnostic output
to facilitate troubleshooting. Use a pytest fixture instead of an
Autoconf macro to keep test requirements localized.
Some operating systems (OpenBSD and DragonFly BSD) don't restrict the
IPv6 sockets to sending and receiving IPv6 packets only. Explicitly
enable the IPV6_V6ONLY socket option on the IPv6 sockets to prevent
failures from using the IPv4-mapped IPv6 address.
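For illustration, the socket-level call involved is roughly the
following (the netmgr applies it through its own socket-setup path):

#include <netinet/in.h>
#include <sys/socket.h>

/*
 * Restrict an IPv6 socket to IPv6 traffic only, so it cannot be
 * reached via IPv4-mapped IPv6 addresses.
 */
static int
set_v6only(int fd) {
	int on = 1;
	return (setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &on,
			   sizeof(on)));
}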
The server_send_error_response() function is supposed to be used only
in case of failures and never in case of legitimate requests. Ensure
that ISC_HTTP_ERROR_SUCCESS is never passed there by mistake.
1. 10 seconds is an unfortunate pick because that reintroduces the
problem described in commit 5307bf64 (for an earlier check).
Change the +tries=3 +timeout=10 to +tries=2 +time=15, so that we
minimize the risk of dig missing any responses sent by the server in
the first 15 seconds while also increasing our chances of the
response arriving in time on machines under heavy load and allowing
it a single retry in case things go awry.
2. The comment about TCP above was misleading: as painfully proven by
GitLab CI, using TCP is no guarantee of receiving a response in a
timely manner. It may help a bit, but it is certainly not a 100%
reliable solution.
Change the dig invocation to just use UDP like in the two prior
tests for consistency (and revise that comment accordingly).
The resolver system test often exhibited intermittent failures;
increase the timeout from the default 5 seconds to 10 seconds to give
dig more leeway to provide an answer.
The detection of MUSL libc via the autoconf $host variable turned out
to be unreliable.
Convert the autoconf check from $host detection to actually detect
the padding used in the struct msghdr.
The Linux kernel diverges from the POSIX specification for two members
of struct msghdr, making them size_t sized (instead of int and
socklen_t). In glibc, the developers decided to follow the kernel.
However, the MUSL developers used padding in the struct and kept the
members defined according to POSIX.
This creates a problem: because libuv doesn't use the recvmmsg()
library call (where the padding members are correctly zeroed) and
instead invokes the syscall directly, the struct msghdr is passed to
the kernel with enormous values in those two members (because of the
random junk in the padding members), and the syscall thus fails with
EMSGSIZE.
Disable UDP recvmmsg support on systems with MUSL libc until libuv
starts zeroing the struct msghdr before passing it to the syscall.
Previously, netmgr/udp.c tried to detect recvmmsg support in libuv
with #ifdef UV_UDP_<foo> preprocessor checks. However, because the
UV_UDP_<foo> values are not preprocessor macros but enum members, the
detection didn't work. As a consequence, the code didn't have access
to the information that it had received the final chunk of the
recvmmsg batch, and it tried to free the uvbuf every time.
Fortunately, isc__nm_free_uvbuf() had a kludge that detected attempts
to free the middle of the receive buffer, so the code worked.
However, libuv 1.37.0 changed the way recvmmsg is enabled from
implicit to explicit, and we checked for yet another enum member's
presence with a preprocessor macro, so in fact libuv recvmmsg support
was never enabled with libuv >= 1.37.0.
This commit changes the preprocessor checks to Autoconf checks for the
declarations, so the detection now works again. On top of that, it's
now possible to clean up the alloc_cb and free_uvbuf functions,
because the information about whether we can or cannot free the buffer
is now available to us.
When named is shutting down, the zone event callbacks could
re-schedule the stub and refresh events, leading to an assertion
failure. Handle the ISC_R_SHUTTINGDOWN event state gracefully by
bailing out.
In 2000, old BIND instances (BIND 8?) would return FORMERR if the SOA is
included in the NOTIFY.
Remove the workaround that detected the state and resent the NOTIFY
without SOA record.
Clients can cache the TLS certificates and refuse to accept
another one with the same serial number from the same issuer.
Generate a random serial number for the self-signed certificates
instead of using a fixed value.
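A hedged OpenSSL sketch of one way to generate such a serial number;
whether the tests create their certificates through the library or
through command-line tooling is not shown here:

#include <openssl/bn.h>
#include <openssl/x509.h>

/* Set a 64-bit random serial number on a certificate. */
static int
set_random_serial(X509 *cert) {
	BIGNUM *bn = BN_new();
	int ok = 0;

	if (bn != NULL &&
	    BN_rand(bn, 64, BN_RAND_TOP_ANY, BN_RAND_BOTTOM_ANY) == 1 &&
	    BN_to_ASN1_INTEGER(bn, X509_get_serialNumber(cert)) != NULL) {
		ok = 1;
	}
	BN_free(bn);
	return (ok);
}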
GnuTLS, NSS, and possibly other TLS libraries currently fail to work
with the compressed point conversion form supported by OpenSSL.
Use the uncompressed point conversion form for better compatibility.
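In OpenSSL terms, the switch boils down to the point-conversion form
on the EC key; a sketch (the object the real code sets this on may
differ):

#include <openssl/ec.h>

/* Emit EC public keys in uncompressed form for interoperability. */
static void
use_uncompressed_form(EC_KEY *eckey) {
	EC_KEY_set_conv_form(eckey, POINT_CONVERSION_UNCOMPRESSED);
}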
When the dispatch code in libdns was refactored, the netmgr was
changed to return ISC_R_SHUTTINGDOWN when the netmgr is shutting down,
and ISC_R_CANCELED is now reserved only for situations where the
callback was canceled by the caller.
This change wasn't reflected in the controlconf.c channel which was
still looking for ISC_R_CANCELED as the shutdown event.
This commit converts the license handling to adhere to the REUSE
specification. It specifically:
1. Adds the used licenses to the LICENSES/ directory.
2. Adds an "isc" template for adding the copyright boilerplate.
3. Changes all source files to include a copyright and SPDX license
header; this includes all the C sources, documentation, zone files,
and configuration files. There are notes in the doc/dev/copyrights
file on how to add correct headers to new files.
4. Handles the rest that can't be modified, via the .reuse/dep5 file.
The binary (or otherwise unmodifiable) files could have the license
placed next to them in a <foo>.license file, but this would lead to a
cluttered repository, and most of the files handled in the .reuse/dep5
file are system test files.
Instead of checking for the licenses in the misc step, add a separate
job that uses the upstream-provided image with the reuse tool
installed, and run `reuse lint` from that separate job.
The copyright handling has long been obsolete; the work is covered as
a whole by the COPYING/LICENSE file even if a specific file doesn't
have a copyright header.
The important thing to remember here is that any work is covered by
copyright law, and by explicitly giving it a license we provide extra
rights to the users of the work.
The isc__nm_tcp_resumeread() function was using the maybe_enqueue
function to enqueue the netmgr event, which could cause the read
callback to be executed immediately if there was enough data waiting
in the TCP queue.
If that happened, the read callback would be called before the
previous read callback had finished, and the worker receive buffer
would still be marked "in use", causing an assertion failure.
This would affect only raw TCP channels, e.g. rndc and HTTP
statistics.
Change RSASHA1 to $DEFAULT_ALGORITHM to be FIPS compliant.
There is one RSASHA1 occurrence left, to test that dynamically adding
an NSEC3PARAM record to an NSEC-only zone fails.
Update the autosign system test with new expected behavior.
The 'nozsk.example' zone should have its expired zone signatures
deleted and replaced with signatures generated with the KSK.
The 'inaczsk.example' zone should have its expired zone signatures
deleted and replaced with signatures generated with the KSK.
In both scenarios, signatures are deleted, not retained, so the
"retaining signatures" warning should not be logged.
Furthermore, this commit fixes a test bug where the 'awk' command
always returned 0.
Finally, this commit adds a test case for an offline KSK, for the zone
'noksk.example'. In this case the expired signatures should be retained
(despite the zone being bogus, but resigning the DNSKEY RRset with the
ZSK won't help here).
In some cases we want to keep expired signatures. For example, if the
KSK is offline, we don't want to fall back to signing with the ZSK.
We could remove the signatures, but in any case we end up with a broken
zone.
The change made for GL #763 prevented the behavior of signing the
DNSKEY RRset with the ZSK if the KSK was offline (and signatures were
expired).
That change altered the definition of "having both keys": if one key
is offline, we still consider ourselves to have both keys, so we don't
fall back to signing with the ZSK if the KSK is offline.
The change also works the other way: if the ZSK is offline, we don't
fall back to signing with the KSK.
This commit fixes that, so we only fall back to signing zone RRsets
with the KSK, not to signing key RRsets with the ZSK.
BIND can log this warning:
zone example.ch/IN (signed): Key example.ch/ECDSAP256SHA256/56340
missing or inactive and has no replacement: retaining signatures.
This log can happen when BIND tries to remove signatures because they
are about to expire or to be resigned. These RRsets may be signed with
the KSK if the ZSK files have been removed from disk. When we have
created a new ZSK, we can replace the signatures created by the KSK
with signatures from the new ZSK.
It complains about the KSK being missing or inactive, but actually it
takes the key id from the RRSIG.
The warning is logged if BIND detects the private ZSK file is missing.
The warning is logged even if we were able to delete the signature.
With the change from this commit it only logs this warning if it is not
okay to delete the signature.
When the signed version of an inline-signed zone is dumped to disk, the
serial number of the unsigned version of the zone is stored in the
raw-format header so that the contents of the signed zone can be
resynchronized after named restart if the unsigned zone file is modified
while named is not running.
In order for the serial number of the unsigned zone to be determined
during the dump, zone->raw must be set to a non-NULL value. This should
always be the case as long as the signed version of the zone is used for
anything by named.
However, a scenario exists in which the signed version of the zone has
zone->raw set to NULL while it is being dumped:
1. Zone dump is requested; zone_dump() is invoked.
2. Another zone dump is already in progress, so the dump gets deferred
until I/O is available (see zonemgr_getio()).
3. The last external reference to the zone is released.
zone_shutdown() gets queued to the zone's task.
4. I/O becomes available for zone dumping. zone_gotwritehandle() gets
queued to the zone's task.
5. The zone's task runs zone_shutdown(). zone->raw gets set to NULL.
6. The zone's task runs zone_gotwritehandle(). zone->raw is determined
to be NULL, causing the serial number of the unsigned version of the
zone to be omitted from the raw-format dump of the signed zone file.
Note that the naïve solution - deferring the dns_zone_detach() call for
zone->raw until zone_free() gets called for the secure version of the
zone - does not work because it leads to a chicken-and-egg problem when
the inline-signed zone is about to get freed: the raw zone holds a weak
reference to the secure zone and that reference does not get released
until the reference count for the raw zone reaches zero, which in turn
would not happen until all weak references to the secure zone were
released.
Defer detaching from zone->raw in zone_shutdown() if the zone is in the
process of being dumped to disk. Ensure zone->raw gets detached from
after the dump is finished if detaching gets deferred. Prevent zone
dumping from being requeued upon failure if the zone is in the process
of being cleaned up as it opens up possibilities for the zone->raw
reference to leak, triggering a shutdown hang.
All signed zone files present in bin/tests/system/inline/ns8 should
contain the unsigned serial number in the raw-format header. Add a
check to ensure that is the case. Extend the dnssec-signzone command
line in ns8/sign.sh with the -L option to allow the zones initially
signed there to pass the newly added check. Add another zone to the
configuration for the ns8 named instance to ensure the check also passes
when multiple zones are inline-signed by a single named instance.
isc_queue_new() was using dirty tricks to allocate the head and tail
members of the struct aligned to the cacheline. We can now use
isc_mem_get_aligned() to allocate the structure cacheline-aligned
directly.
Use ISC_OS_CACHELINE_SIZE (64) instead of the arbitrary ALIGNMENT
(128); one cacheline size is enough to prevent false sharing.
Clean up the unused max_threads variable; there was actually no limit
on the maximum number of threads, as this was changed a while ago.
The hazard pointers implementation was a bit frivolous with memory
usage, allocating memory based on maximum constants rather than on
actual usage.
Make the retired list use exactly the memory needed for the specified
number of hazard pointers. This reduces the memory used by hazard
pointers to one quarter in our specific case, because we only use a
single HP in the queue implementation (as opposed to allocating memory
for HP_MAX_HPS = 4).
Previously, the alignment to prevent false sharing was double the
cacheline size. This was copied from the ConcurrencyFreaks
implementation, but one cacheline size is enough to prevent false
sharing, so we use that now to save a few bytes of memory.
The top level hazard pointers and retired list arrays are now not
aligned to the cacheline size - they are read-only for the whole
life-time of the isc_hp object. Only hp (hazard pointer) and
rl (retired list) array members are allocated aligned to the cacheline
size to avoid false sharing between threads.
Clean up the HP_MAX_HPS and HP_THRESHOLD_R constants from the paper,
because we don't use them in the code. HP_THRESHOLD_R was 0, so the
check whether the retired list size was smaller than that value was
basically dead code.
There are some situations where having aligned allocations would be
useful, so that we don't have to play tricks with padding the data to
the cacheline size.
Add isc_mem_{get,put,reget,putanddetach}_aligned() functions that take
the alignment and size as the last arguments, mimicking the POSIX
posix_memalign() function on systems with jemalloc (see the
documentation on MALLOCX_ALIGN() for more details). On systems
without jemalloc, those functions are the same as the non-aligned
variants.
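A hypothetical usage sketch; the (context, size, alignment) argument
order is an assumption here, not a statement of the actual prototype:

#include <isc/mem.h>
#include <isc/os.h>

/* Allocate a block aligned to the L1 cacheline size. */
static void *
alloc_cacheline_aligned(isc_mem_t *mctx, size_t size) {
	return (isc_mem_get_aligned(mctx, size, ISC_OS_CACHELINE_SIZE));
}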
Add library ctor and dtor for isc_os compilation unit which initializes
the numbers of the CPUs and also checks whether L1 cacheline size is
really 64 if the sysconf() call is available.
The zt_destroy() function was missing isc_refcount_destroy() on the two
reference counters. The isc_refcount_destroy() adds proper memory
ordering on destroy and also ensures that the reference counters have
been zeroed before destroying the object.
Commit 308bc46a59 introduced a change to
the view_flushanddetach() function which makes the latter access
view->zonetable without holding view->lock. As confirmed by TSAN, this
enables races between threads on view->zonetable accesses.
Swap the view->zonetable pointer under the view lock and detach the
swapped-out local dns_zt_t later, once the view lock has been
released.
This commit also changes the dns_zt interfaces so that setting the
zonetable "flush" flag is a separate operation from dns_zt_detach();
e.g. instead of doing:
if (view->flush) {
dns_zt_flushanddetach(&zt);
} else {
dns_zt_detach(&zt);
}
the code is now:
if (view->flush) {
dns_zt_flush(zt);
}
dns_zt_detach(&zt);
making the code more consistent with how we handle flushing and
detaching dns_zone_t pointers from the view.
While doing code review, it was found that taskmgr->exiting is set
under taskmgr->lock, but accessed under taskmgr->excl_lock in
isc_task_beginexclusive().
Additionally, before the change that moved running the tasks to the
netmgr, the task_ready() subroutine of isc_task_detach() would lock
mgr->lock, requiring mgr->excl to be protected by mgr->excl_lock
to prevent a deadlock in the code. After !4918 was merged, this is
no longer true, and we can remove taskmgr->excl_lock and use
taskmgr->lock in its stead.
Solve both issues by removing taskmgr->excl_lock and exclusively using
taskmgr->lock to protect both taskmgr->excl and taskmgr->exiting,
which now doesn't need to be an atomic_bool, because it's always
accessed from within the locked section.
The isc_taskmgr_excltask() would return ISC_R_NOTFOUND either when the
exclusive task was not set (yet) or when the taskmgr is shutting down
and the exclusive task has been already cleared.
Distinguish between the two states and return ISC_R_SHUTTINGDOWN when
the taskmgr is being shut down instead of ISC_R_NOTFOUND.
If a catz event is scheduled while the task manager was being
shut down, task-exclusive mode is unavailable. This needs to be
handled as an error rather than triggering an assertion.
When the signed version of an inline-signed zone is dumped to disk, the
serial number of the unsigned version of the zone is written in the
raw-format header so that the contents of the signed zone can be
resynchronized after named restart if the unsigned zone file is
modified while named is not running (see RT #26676).
In order for the serial number of the unsigned zone to be determined
during the dump, zone->raw must be set to a non-NULL value. This
should always be the case as long as the signed version of the zone is
used for anything by named.
However, under certain circumstances the zone->raw could be set to NULL
while the zone is being dumped.
Defer detaching from zone->raw in zone_shutdown() if the zone is in the
process of being dumped to disk.
For consistency with similar functions, rename `pcache` to `cachep`,
call a separate destroy function when references reach 0, and add
a missing call to isc_refcount_destroy().
The doc/arm/conf.py Sphinx configuration file specifies
doc/arm/isc-logo.pdf as the logo to use in the PDF files produced.
Since doc/arm/isc-logo.pdf is not currently included in source tarballs
produced using "make dist", attempting to build documentation in PDF
format using a source tarball results in the following error being
raised:
Sphinx error:
logo file 'isc-logo.pdf' does not exist
Ensure doc/arm/isc-logo.pdf is included in source tarballs produced
using "make dist", so that the BIND 9 ARM can be successfully built in
PDF format using just the source tarball.
The existing "docs" GitLab CI job operates on a Git repository rather
than a source tarball. This prevents it from detecting issues caused by
files missing from source tarballs. Add a new GitLab CI job similar to
the "docs" one, but using a source tarball rather than a Git repository.
Extract YAML bits used by multiple job definitions into anchors to avoid
code duplication. Drop the "allow_failure: false" key in the process as
it is the implicit default for non-manual jobs. Replace the
"artifacts:paths" key with "artifacts:untracked" in order to include all
untracked files in the artifact archive for each documentation-building
job; this allows tarball-based artifacts to be properly captured and
also facilitates troubleshooting failed jobs.
A kasp structure was not detached when looking to see if there
was an existing kasp structure with the same name, causing memory
to be leaked. Fixed by calling dns_kasp_detach() to release the
reference.
Add a comment explaining the purpose of setting the "today" variable in
Sphinx invocations to prevent confusion caused by the absence of that
variable from reStructuredText sources.
Drop the -A command-line option from the sphinx-build invocation for
EPUB output as "today" is already set in the ALLSPHINXOPTS variable.
Some Sphinx variables used in the ARM are only set in Makefile.docs.
This works fine when building the ARM using "make", but does not work
with Read the Docs, which only looks at conf.py files.
Since Read the Docs does not run ./configure, renaming conf.py to
conf.py.in and using Autoconf output variables is not a feasible
solution.
Instead, extend doc/arm/conf.py with some Python code which processes
configure.ac using regular expressions and sets the relevant Sphinx
variables accordingly. As this solution also works fine when building
the ARM using "make", drop the relevant -D options from the list of
sphinx-build options used for building the ARM in Makefile.docs.
Note that the man_SPHINXOPTS counterparts of the removed -D switches are
left intact because doc/man/conf.py is a separate Sphinx project which
is only processed using "make" and duplicating the Python code added to
doc/arm/conf.py by this commit would be inelegant.
This commit enables client-side TLS context reuse for zone transfers
over TLS. That, in turn, makes it possible to use the internal session
cache associated with the contexts, allowing the TLS connections to be
established faster and requiring fewer resources by not going through
the full TLS handshake procedure.
Previously, we would recreate the context on every connection, making
TLS session resumption impossible.
Also, this change lays down a foundation for Strict TLS (where the
client validates the server certificate), as the TLS context cache can
be extended to store additional data required for validation (like the
intermediate CA chain).
Using the TLS context cache for server-side contexts could reduce the
number of contexts to initialise in the configurations when e.g. the
same 'tls' entry is used in multiple 'listen-on' statements for the
same DNS transport, binding to multiple IP addresses.
In such a case, only one TLS context will be created, instead of a
context per IP address, which could reduce the initialisation time, as
initialising even a non-ephemeral TLS context introduces some delay,
which can be *visually* noticeable by log activity.
Also, this change lays down a foundation for Mutual TLS (where the
server validates a client certificate, in addition to the client
validating the server), as the TLS context cache can be extended to
store additional data required for validation (like the intermediate
CA chain).
In addition to the above, the change ensures that the contexts are not
changed after initialisation, as such a practice is frowned upon.
Previously we would set the supported ALPN tags within
isc_nm_listenhttp() and isc_nm_listentlsdns(); we do not do that for
client-side contexts, so that appears to be an oversight. Now we set
the supported ALPN tags right after server-side context creation,
similarly to how we do it for client-side ones.
This commit adds a TLS context object cache implementation. The
intention of having this object is manifold:
- In the case of client-side contexts: allow reusing the previously
created contexts to employ the context-specific TLS session resumption
cache. That enables XoT connections to be re-established faster and
with fewer resources by not going through the full TLS handshake
procedure.
- In the case of server-side contexts: reduce the number of contexts
created on startup. That could reduce startup time in cases where
there are many "listen-on" statements referring to a smaller number of
`tls` statements, especially when "ephemeral" certificates are
involved.
- The long-term goal is to provide in-memory storage for additional
data associated with the certificates, like the runtime
representation (X509_STORE) of the intermediate CA certificate bundle
for Strict TLS/Mutual TLS ("ca-file").
Commit 9ee60e7a17 erroneously introduced
duplicate conditions to several existing conditional statements
responsible for determining error codes passed to connection callbacks
upon failure. Fix the affected expressions to ensure connection
callbacks are invoked with:
- the ISC_R_SHUTTINGDOWN error code when a global netmgr shutdown is
in progress,
- the ISC_R_CANCELED error code when a specific operation has been
canceled.
This does not fix any known bugs, it only adjusts the changes introduced
by commit 9ee60e7a17 so that they match
its original intent.
Commit 9ee60e7a17 enabled netmgr shutdown
to cause read callbacks for active control channel sockets to be invoked
with the ISC_R_SHUTTINGDOWN result code. However, control channel code
only recognizes ISC_R_CANCELED as an indicator of an in-progress netmgr
shutdown (which was correct before the above commit). This discrepancy
enables the following scenario to happen in rare cases:
1. A control channel request is received and responded to. libuv
manages to write the response to the TCP socket, but the completion
callback (control_senddone()) is yet to be invoked.
2. Server shutdown is initiated. All TCP sockets are shut down, which
i.a. causes control_recvmessage() to be invoked with the
ISC_R_SHUTTINGDOWN result code. As the result code is not
ISC_R_CANCELED, control_recvmessage() does not set
listener->controls->shuttingdown to 'true'.
3. control_senddone() is called with the ISC_R_SUCCESS result code. As
neither listener->controls->shuttingdown is 'true' nor is the result
code ISC_R_CANCELED, reading is resumed on the control channel
socket. However, this read can never be completed because the read
callback on that socket was cleared when the TCP socket was shut
down. This causes a reference on the socket's handle to be held
indefinitely, leading to a hang upon shutdown.
Ensure listener->controls->shuttingdown is also set to 'true' when
control_recvmessage() is invoked with the ISC_R_SHUTTINGDOWN result
code. This ensures the send completion callback does not resume reading
after the control channel socket is shut down.
"buster" jobs are now only going to be run in scheduled pipelines.
"--without-gssapi" ./configure option of "bullseye" before it became
the base image is dropped from "bullseye"-the-base-image because it
reduces gcov coverage by 0.38 % (651 lines) and is used in Debian 9
"stretch".
A customary method of exporting TLS pre-master secrets used by a piece
of software (for debugging purposes, e.g. to examine decrypted traffic
in a packet sniffer) is to set the SSLKEYLOGFILE environment variable to
the path to the file in which this data should be logged.
In order to enable writing any data to a file using the logging
framework provided by libisc, a logging channel needs to be defined and
the relevant logging category needs to be associated with it. Since the
SSLKEYLOGFILE variable is only expected to contain a path, some defaults
for the logging channel need to be assumed. Add a new function,
named_log_setdefaultsslkeylogfile(), for setting up those implicit
defaults, which are equivalent to the following logging configuration:
channel default_sslkeylogfile {
file "${SSLKEYLOGFILE}" versions 10 size 100m suffix timestamp;
};
category sslkeylog {
default_sslkeylogfile;
};
This ensures TLS pre-master secrets do not use up more than about 1 GB
of disk space, which should be enough to hold debugging data for the
most recent 1 million TLS connections.
As these values are arguably not universally appropriate for all
deployment environments, a way for overriding them needs to exist.
Suppress creation of the default logging channel for TLS pre-master
secrets when the SSLKEYLOGFILE variable is set to the string "config".
This enables providing custom logging configuration for the relevant
category via the "logging" stanza. (Note that it would have been
simpler to only skip setting up the default logging channel for TLS
pre-master secrets if the SSLKEYLOGFILE environment variable is not set
at all. However, libisc only logs pre-master secrets if that variable
is set. Detecting a "magic" string enables the SSLKEYLOGFILE
environment variable to serve as a single control for both enabling TLS
pre-master secret collection and potentially also indicating where and
how they should be exported.)
The SSL_CTX_set_keylog_callback() function is a fairly recent OpenSSL
addition, having first appeared in version 1.1.1. Add a configure.ac
check for the availability of that function to prevent build errors on
older platforms. Sort similar checks alphabetically.
This makes the SSLKEYLOGFILE mechanism a silent no-op on unsupported
platforms, which is considered acceptable for a debugging feature.
Generate log messages containing TLS pre-master secrets when the
SSLKEYLOGFILE environment variable is set. This only ensures such
messages are prepared using the right logging category and passed to
libisc for further processing.
The TLS pre-master secret logging callback needs to be set on a
per-context basis, so ensure it happens for both client-side and
server-side TLS contexts.
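For reference, the OpenSSL hook involved looks roughly like this; the
callback body below is a stand-in for the libisc logging call, not the
actual code:

#include <stdio.h>
#include <stdlib.h>
#include <openssl/ssl.h>

/* Each 'line' is one NSS-format key log entry. */
static void
keylog_cb(const SSL *ssl, const char *line) {
	(void)ssl;
	fprintf(stderr, "%s\n", line); /* stand-in for libisc logging */
}

static void
setup_keylog(SSL_CTX *ctx) {
	/* Only arm the callback when key logging was requested. */
	if (getenv("SSLKEYLOGFILE") != NULL) {
		SSL_CTX_set_keylog_callback(ctx, keylog_cb);
	}
}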
TLS pre-master secrets will be dumped to disk using the logging
framework provided by libisc. Add a new logging category for this type
of debugging data in order to enable exporting it to a dedicated
channel. Derive the name of the new category from the name of the
relevant environment variable, SSLKEYLOGFILE.
Commit 2ececf2c dropped the dependency of the "respdiff" and
"respdiff-third-party" jobs on the "tarball-create" job because these
jobs don't need to depend on it (e.g., for its artifacts). This,
however, meant that the respdiff jobs were no longer started
out-of-order; artifacts from all the "Build" stage jobs plus the
"unit:gcc:buster:amd64" job were downloaded to the project directory
and caused problems with compilation.
Originally, the dependency on "tarball-create" was added in
04f8b65a to indicate that respdiff "is meant to operate on two
different BIND versions". It seems that the intent didn't work out,
and we had better make it obvious that the respdiff jobs don't depend
on any other job and should be run out-of-order.
This commit removes unused listen-on statements from the ns3 instance
in order to reduce the startup time. That should help with occasional
system test initialisation hiccups in the CI, which happen because the
required instances cannot initialise in time.
Due to the fact that the primary nameserver creates a lot of TLS
contexts, its reconfiguration could take too much time in the CI,
leading to spurious test failures, while in reality it works just
fine.
This commit adds a separate instance for this test which does not use
ephemeral keys (these are costly to generate) and creates a minimal
number of TLS contexts.
ECDSA P-256 performs considerably better than the previously used
4096-bit RSA (can be observed using `openssl speed`), and, according
to RFC 6605, provides a security level comparable to 3072-bit RSA.
Previously, the whole of isc_mempool_get() and isc_mempool_put() would
be replaced by simpler versions when run with the address sanitizer.
Change the code to limit the fillcount to 1 and the freemax to 0.
This change makes isc_mempool_get() always allocate and use a single
new item, and isc_mempool_put() always return the item to the
allocator.
Support for FreeBSD 11.4, the last FreeBSD 11.x release, ended on
September 30, 2021.
The "--with-readline" ./configure option has been added to gcc:sid:amd64
CI job; otherwise, it would be lost with the FreeBSD 11 removal.
Link: https://www.freebsd.org/security/unsupported/
OpenSSL 3.0.1 does not accept 0 as a digest buffer length when
calling EVP_DigestSignFinal as it now checks that the digest buffer
length is large enough for the digest. Pass the digest buffer
length instead.
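An illustrative sketch of the changed call; the surrounding BIND code
is simplified away:

#include <openssl/evp.h>

/*
 * 'siglen' must hold the buffer size on input; passing 0 worked
 * before OpenSSL 3.0.1 but is now rejected.
 */
static int
finish_signature(EVP_MD_CTX *mdctx, unsigned char *sigbuf,
		 size_t sigbufsize, size_t *siglenp) {
	size_t siglen = sigbufsize; /* previously initialized to 0 */
	if (EVP_DigestSignFinal(mdctx, sigbuf, &siglen) != 1) {
		return (0);
	}
	*siglenp = siglen;
	return (1);
}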
The order of the directories with the reference and test BIND 9
versions is now reversed for respdiff.sh.
Drop unnecessary dependency on the tarball-create job.
The data.mdb file is more than 10 GB and makes artifact downloads take
an unnecessarily long time.
It was discovered that NAME_FREEMAX and RDATASET_FREEMAX were based on
NAME_FILLCOUNT and RDATASET_FILLCOUNT respectively, multiplied by 8,
and then when used in isc_mempool_setfreemax(), the value would again
be multiplied by 32.
Keep the 8 multiplier in the #define and remove the 32 multiplier as it
was kept in error. The default fillcount can fit 99.99% of the requests
under normal circumstances, so we don't need to keep that many free
items on the mempool.
reference counting of ns_interface objects has not been used
since the clientmgr cleanup in #2433, and it no longer really
makes sense now - when we want to destroy an interface on a
rescan, we want it to be destroyed, not kept active by some
other caller. so ns_interface_attach() has been removed,
ns_interface_detach() has been replaced with a static
interface_destroy(), and do_scan() has been simplified
accordingly.
previously, if "listen-on-v6" was set to "none", then every
time a scan saw an IPv6 address it would appear to be a new
one. this commit retains all known interfaces in a list
and sets a flag in the ones that are listening, so that
configured interfaces that have been seen before will be
recognized as such.
as an incidental fix, the ns__interfacemgr_getif() and _nextif()
functions have been removed since they were never used.
This commit modifies the NetLink handling code in such a way
that the contents of the messages we are interested in are checked
for local address changes only. This helps to avoid spurious
interface re-scans.
The 'route_recv' log messages are also reduced from DEBUG(3) to
DEBUG(9).
The memory context created in the clientmgr context was missing a name,
so it was nameless in the memory context statistics.
Set the clientmgr memory context name to "clientmgr".
Every cppcheck update brings the cost of addressing new false positives
in the BIND 9 source code while not reaping any benefits in case of
identified issues with the code.
The 850e9e59bf commit intended to recreate
the HTTPS and TLS interfaces during reconfiguration, but they were
also being recreated during regular interface re-scans.
Make sure the HTTPS and TLS interfaces are recreated only during
reconfiguration.
For DoH and DoT listeners, a reconfiguration event triggers a creation
of a new 'SSL_CTX' TLS context, and a destruction of the old one.
The network manager, though, keeps using the old context which causes
errors.
During interface scanning, when a matching existing interface is found,
reuse it only when it doesn't have a TLS context, otherwise shut it down
and recreate with a new TLS context.
A surprising "IO error" is returned when a directory name is given
instead of a named.conf file; the directory name can be passed to
named-checkconf or to an include statement. Make a simple change to
return "Invalid file" instead. Still not precise, but a much better
error message is returned.
Fix for rhbz#490837.
Mutex debugging code (used when the ISC_MUTEX_DEBUG preprocessor macro
is set to 1 and PTHREAD_MUTEX_ERRORCHECK is defined) has been broken for
the past 3 years (since commit 2f3eee5a4f)
and nobody complained, which is a strong indication that this code is
not being used these days any more. External tools for detecting
locking issues are already wired into various GitLab CI checks. Drop
all code depending on the ISC_MUTEX_DEBUG preprocessor macro being set.
Mutex profiling code (used when the ISC_MUTEX_PROFILE preprocessor macro
is set to 1) has been broken for the past 3 years (since commit
0bed9bfc28) and nobody complained, which
is a strong indication that this code is not being used these days any
more. External tools for both measuring performance and detecting
locking issues are already wired into various GitLab CI checks. Drop
all code depending on the ISC_MUTEX_PROFILE preprocessor macro being
set.
the 'dipsatchmgr->state' was never set, so the MGR_IS_SHUTTINGDOWN
macro was always false. both of these have been removed.
renamed the 'dispatch->state' field to 'tcpstate' to make its purpose
less ambiguous.
changed an FCTXTRACE log message from "response did not match question"
to the more correctly descriptive "invalid question section".
When a non-matching DNS response is received by the resolver,
it calls dns_dispatch_getnext() to resume reading. This is necessary
for UDP but not for TCP, because TCP connections automatically
resume reading after any valid DNS response.
This commit adds a 'tcpreading' flag to TCP dispatches, so that
`dispatch_getnext()` can be called multiple times without subsequent
calls having any effect.
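A minimal sketch of such a guard, with illustrative names rather than
the actual dispatch structures:

#include <stdbool.h>

/* Illustrative sketch: 'tcpdisp_t' and start_read() are stand-ins,
 * not the real BIND 9 dispatch code. */
typedef struct tcpdisp {
    bool tcpreading; /* true while a read is already pending */
} tcpdisp_t;

static void
start_read(tcpdisp_t *disp) {
    (void)disp; /* stand-in for resuming the netmgr read */
}

static void
dispatch_getnext_tcp(tcpdisp_t *disp) {
    if (disp->tcpreading) {
        return; /* repeated calls are harmless no-ops */
    }
    disp->tcpreading = true;
    start_read(disp);
}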
On FreeBSD, the pthread primitives are not allocated solely on the
stack; part of the object lives on the heap. A missing
pthread_*_destroy causes the heap memory to grow, and with short-lived
objects it's possible to run out of memory.
Properly destroy the leaking mutex (worker->lock) and
the leaking condition (sock->cond).
Previously, we set the number of the hazard pointers to be 4 times the
number of workers because the dispatch ran on the old socket code.
Since the old socket code was removed there's a smaller number of
threads, namely:
- 1 main thread
- 1 timer thread
- <n> netmgr threads
- <n> threadpool threads
Set the number of hazard pointers to 2 + 2 * workers.
Previously, the isc_hp_init() could not lower the value of
isc__hp_max_threads, but because of a mistake isc__hp_max_threads
would be set to HP_MAX_THREADS (e.g. 128 threads), so it was always
128. This resulted in increased memory usage even when a small number
of workers was in use.
Change the default value of isc__hp_max_threads to be 1.
Additionally, enforce the max_hps value in isc_hp_new() to be smaller
than or equal to HP_MAX_HPS. The only user is isc_queue, which uses
just 1 hazard pointer, so it's only a theoretical issue.
It's unclear if we are going to keep it or not, so let's mark it as
deprecated for good measure. It's easier to un-deprecate it than the
other way around.
the lifetime expiry timer for the fetch context was removed
when we switched to using in-band netmgr timeouts. however,
it turns out some dependency loops can occur between a fetch
and the ADB or the validator; these deadlocks were formerly broken
when the timer fired, and now there's no timer. we can fix these
errors individually, but in the meantime we don't want the server
to get hung at shutdown because of dangling fetches.
this commit puts back a single timer, which fires two seconds
after the fetch should have completed, and shuts it down. it also
logs a message at level INFO so we know about the problems when
they occur.
A number of DNS implementations produce NSEC records with bad type
maps that don't contain types that exist at the name, leading to
NODATA responses being synthesized instead of the records in the
zone. NSEC records with these bad type maps often have the NSEC
'next' field set to '\000.QNAME'. We look for the first label of
this pattern.
e.g.
example.com NSEC \000.example.com SOA NS NSEC RRSIG
example.com RRSIG NSEC ...
example.com SOA ...
example.com RRSIG SOA ...
example.com NS ...
example.com RRSIG NS ...
example.com A ...
example.com RRSIG A ...
A is missing from the type map.
This introduces a temporary option 'reject-000-label' to control
this behaviour.
This sets as many server options as possible at once to detect
cut-and-paste bugs when implementing new server options in peer.c.
Most of the accessor functions are similar and it is easy to miss
updating a macro name or structure element name when adding new
accessor functions.
checkconf/setup.sh is there to minimise the difference to branches
with optional server options where the list is updated at runtime.
'server <prefix> { broken-nsec yes; };' can now be used to stop
NSEC records from negative responses from servers in the given
prefix being cached and hence available to synth-from-dnssec.
1) when, after processing a node, there were no headers that
contained active records.
When
if (check_stale_header(node, header, &locktype, lock, &search,
                       &header_prev))
succeeds or
if (EXISTS(header) && !ANCIENT(header))
fails for all entries in the list, leading to 'empty_node' remaining
true.
If there are no active records we know nothing about the
current state of the name, so we treat it as ISC_R_NOTFOUND.
2) when there was a covering NOQNAME proof found or all the
active headers were negative.
When
if (header->noqname != NULL &&
header->trust == dns_trust_secure)
succeeds or
if (!NEGATIVE(header))
never succeeds. Under these conditions there could (should, in the
found_noqname case) be a covering NSEC earlier in the tree.
The old code rejected NSEC records that proved the wildcard name
existed (exists). The new code rejects NSEC records that prove that
the wildcard name exists and that the type exists (exists && data),
but accepts NSEC records that prove only that the wildcard name
exists.
With the above change, query_synthnxdomain (renamed
query_synthnxdomainnodata) already took the NSEC records and added the
correct records to the message body for NXDOMAIN or NODATA responses.
The only additional change needed was to ensure the correct RCODE is
set.
dns_nsec_noexistnodata now checks that RRSIG and NSEC are
present in the type map. Both types should be present in
a correctly constructed NSEC record. This check is in
addition to similar checks in resolver.c and validator.c.
dns_db_nodecount can now be used to get counts from the auxiliary
rbt databases. The existing node count is returned by
tree=dns_dbtree_main; the nsec and nsec3 node counts are returned by
dns_dbtree_nsec and dns_dbtree_nsec3 respectively.
"black lies" differ from "white lies" in that the owner name of the
NSEC record matches the QNAME and the intent is to return NODATA
instead of NXDOMAIN for all types. Caching this NSEC does not lead
to unexpected behaviour on synthesis when the QNAME matches the
NSEC owner which it does for the the general "white lie" response.
"black lie" QNAME NSEC \000.QNAME NSEC RRSIG
"white lie" QNAME- NSEC QNAME+ NSEC RRSIG
where QNAME- is a name that is close to QNAME but sorts before QNAME
and QNAME+ is a that is close to QNAME but sorts after QNAME.
Black lies are safe to cache as they don't bring into existence
names that are not intended to exist. "Black lies" intentional change
NXDOMAIN to NODATA. "White lies" bring QNAME- into existence and named
would synthesis NODATA for QNAME+ if it is queried for that name
instead of discovering the, presumable, NXDOMAIN response.
Note rejection NSEC RRsets with NEXT names starting with the label
'\000' renders this change ineffective (see reject-000-label).
construct a test zone which contains a minimal NSEC record,
emit priming queries for this record, and then check that
a response that would be synthesised from it isn't.
Note when synthesising answers involving wildcards we look in the
cache multiple times, once for the QNAME and once for the wildcard
name, which is constructed by looking at the names from the covering
NSEC returned by the QNAME miss.
this improves the performance of looking for NSEC and RRSIG(NSEC)
records in the cache by skipping lots of nodes in the main trees
in the cache that don't have these records present. This is a
simplified version of previous_closest_nsec(), which uses the same
underlying mechanism to look for NSEC and RRSIG(NSEC) records in
authoritative zones.
The auxiliary NSEC tree was already being maintained as a side effect
of looking for the covering NSEC in large zones, where there can be
lots of glue records that need to be skipped. Nodes are added
to the tree whenever a NSEC record is added to the primary tree.
They are removed when the corresponding node is removed from the
primary tree.
Having nodes in the NSEC tree w/o NSEC records in the primary tree
should not impact synth-from-dnssec efficiency, as such a node would
have held the NSEC we would have needed to synthesise the
response. Removing the node when the NSEC RRset expires would only
cause rbtdb to return a NSEC which would be rejected at a higher
level.
Previously, when a TCP accept failed, we logged a message at the
ISC_LOG_ERROR level. One common way this can happen is that a client
hits the TCP client quota and is put on hold, and by the time it is
resumed the client has already given up and closed the TCP connection.
In such a case, named would log:
TCP connection failed: socket is not connected
This message was quite confusing because it doesn't actually say that
it's related to accepting a TCP connection, and it logs everything at
the ISC_LOG_ERROR level.
Change the log message to "Accepting TCP connection failed" and for
specific error states lower the severity of the log message to
ISC_LOG_INFO.
The TCP connection reset test starts a mock UDP and TCP server which
always returns an empty DNS answer with the TC bit set over UDP, and
resets the TCP connection after five seconds.
When tested without the fix, the DNS query to 10.53.0.2 times out and
the ns2 server hangs at shutdown.
A TCP connection may be held open past its proper timeout if it's
receiving a stream of DNS responses that don't match any queries.
In this case, we now check whether the oldest query should have timed
out.
When the outgoing TCP dispatch times out an active response, we might
still receive the answer during the lifetime of the connection.
Previously, we would just ignore any non-matching DNS answers, which
would allow the server to feed us otherwise valid DNS answers and keep
the connection open.
Add a counter for timed-out DNS queries over TCP and tear down the
whole TCP connection if we receive an unexpected number of DNS
answers.
Previously, when an invalid DNS message was received over TCP, we
threw the garbage DNS message away and continued looking for a valid
DNS message that would match our outgoing queries. This logic makes
sense for UDP, because anyone can send DNS messages over UDP.
Change the logic so that the TCP connection is closed when we receive
garbage, because the other side is acting maliciously.
When an outgoing TCP connection was prematurely terminated (e.g. with
a connection reset), the dispatch code would not clean up the
resources used by such a connection, leading to dangling
dns_dispentry_t entries.
Add an idna test that checks whether non-letter characters like _ and
* are preserved when IDN is enabled. This wasn't the case when
UseSTD3ASCIIRules was enabled, e.g. the _ from _tcp would get mangled
to tcp.
Disable IDN2_USE_STD3_ASCII_RULES in the libidn2 conversion because it
broke encoding some non-letter but valid domain names like _tcp or *.
This reverts commit ef8aa91740.
This change is made in particular to address an issue with the 'doth'
system tests where servers are unable to initialise in time in the CI
system under high load (this happened particularly often for the
Debian Buster cross32 configuration).
The right solution is, of course, to (re)use TLS contexts sparingly;
right now we create too many of them.
XoT: add support for client-side TLS parameters for incoming XFRs, add 'tls' name configuration validation on secondaries
See merge request isc-projects/bind9!5602
There was a logical bug when setting the list of enabled TLS
protocols, which could lead to a crash (an abort()) on systems with
ancient OpenSSL versions.
The problem was due to the fact that we were INSIST()ing on support
for all of the TLS versions, when only the ones mentioned in the
configuration needed to be checked.
This commit ensures that the 'tls' name specified in the 'primaries'
clause of a 'zone' statement is a valid one.
Prior to that such a name would be silently accepted, leading to
silent XFRs-via-TLS failures.
This commit adds support for client-side TLS parameters to XoT.
Prior to this commit all client-side TLS contexts were using default
parameters only, ignoring the options from the BIND's configuration
file.
Currently, the following 'tls' parameters are supported:
- protocols;
- ciphers;
- prefer-server-ciphers.
Resolve "The list of fetches at the end of 'rndc recursing' output is very poorly explained in the ARM - what does 'allowed' mean?"
Closes#2850
See merge request isc-projects/bind9!5388
This commit adds a new system-test: transport-acl system test. It is
intended to test the new, extended syntax for ACLs, the one where port
or transport protocol can be specified. Currently, it includes the
tests only using allow-transfer statement, as this extended syntax is
used only there, at least for now.
This commit updates both the reference manual and release notes with
the information that 'allow-transfer' has been extended with
additional "port" and "transport" options.
This commit extends the 'doth' system test to verify that the new
extended 'allow-transfer' option syntax featuring 'port' and
'transport' parameters is supported and works as expected. That is, it
restricts the primary server to allow zone transfers only via XoT.
In addition, it extends the 'checkconf' test with more
configuration file examples featuring the new syntax.
This commit completes the integration of the new, extended ACL syntax
featuring 'port' and 'transport' options.
The runtime presentation and ACL loading code are extended to allow
the syntax to be used beyond the 'allow-transfer' option (e.g. in
'acl' definitions and other 'allow-*' options) and can be used to
ultimately extend the ACL support with transport-only
ACLs (e.g. 'transport-acl tls-acl port 853 transport tls'). But, due
to the fundamental nature of such a change, it has not been completed
as part of the 9.17.X release series, which is close to reaching 9.18
stable release status; that means we do not have enough time to fully
test it.
The complete integration is planned as a part of 9.19.X release
series.
The code was manually verified to work as expected by temporarily
enabling the extended syntax for 'acl' statements and 'allow-query'
options, including ACL merging and negated ACLs.
This commit extends ACL syntax handling code with 'port' and
'transport' options. Currently, the extended syntax is available only
for allow-transfer options.
This commit adds an isc_nm_socket_type() function which can be used to
obtain a handle's socket type.
This change obsoletes isc_nm_is_tlsdns_handle() and
isc_nm_is_http_handle(). However, it was decided to keep the latter as
we eventually might end up supporting multiple HTTP versions.
This commit disables the unused 'tls' clause options. Some backing
code exists for these, but their values are not really used anywhere,
nor are there sufficient syntax tests for them.
These options are only disabled temporarily, until TLS certificate
verification gets implemented.
Resolve#3022: DoH: dig eventually aborts on ALPN negotiation failure when issuing a DoH query (because of dangling handles)
Closes#3022
See merge request isc-projects/bind9!5590
This commit makes the TLS stream code not issue a mostly useless
debug log message on error during TLS I/O. This message was cluttering
logs a lot, as it can be generated on (almost) any non-clean TLS
connection termination, even in cases when the actual query
completed successfully. Nor does it provide much value for end users,
yet it could occasionally be seen when using dig and quite often when
running BIND on a publicly available network interface.
This commit removes unneeded isc__nmsocket_prep_destroy() call on ALPN
negotiation failure, which was eventually causing the TLS handle to
leak.
This call is not needed, as not attaching to the transport (TLS)
handle should be enough. At this point it seems like a kludge from
earlier days of the TLS code.
This prevents a direct leak in OPENSSL_init_crypto (called from
OPENSSL_init_ssl).
Add shim version of OPENSSL_cleanup because it is missing in LibreSSL on
OpenBSD.
Use relative names when adding the SOA record and a long domain
name, to create an SOA RR whose wire format is longer than
the initial buffer allocation in dns_sdlz_putrr.
The parsing loop needs to process ISC_R_NOSPACE to properly
size the buffer. If the result is still ISC_R_NOSPACE at the end
of the parsing loop, set the result to DNS_R_SERVFAIL.
In file included from rdata.c:602:
In file included from ./code.h:88:
./rdata/in_1/svcb_64.c:259:9: warning: array subscript is of type 'char' [-Wchar-subscripts]
if (!isdigit(*region->base)) {
^~~~~~~~~~~~~~~~~~~~~~
/usr/include/sys/ctype_inline.h:51:44: note: expanded from macro 'isdigit'
#define isdigit(c) ((int)((_ctype_tab_ + 1)[(c)] & _CTYPE_D))
^~~~
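The conventional fix for this class of warning is to cast the argument
to unsigned char before passing it to the ctype macros; a minimal
illustration of the pattern (not the exact change made in svcb_64.c):

#include <ctype.h>

static int
starts_with_digit(const char *p) {
    /* cast through unsigned char so a negative 'char' value cannot
     * index outside the ctype lookup table */
    return (isdigit((unsigned char)*p));
}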
This commit makes the 'doth' system test skip HTTP headers check when
curl version is new enough but was compiled without HTTP/2 support.
This should fix the 'doth' system test for macOS systems using
macports.
This commit fixes a peculiar corner case in the client-side DoT code
because of which a crash could occur during a zone transfer: a junk
DNS message sent at the end of a zone transfer via TLS would trigger
the crash (abort).
This commit, hopefully, fixes that.
Also, this commit adds similar changes to the TCP DNS code, as it
shares the same origin and most of the logic.
When a UDP dispatch receives a mismatched response, it checks whether
there is still enough time to wait for the correct one to arrive before
the timeout fires. If there is not, the result code is set to
ISC_R_TIMEDOUT, but it is not subsequently used anywhere as 'response'
is set to NULL a few lines earlier. This results in the higher-level
read callback (resquery_response() in case of resolver code) not being
called. However, shortly afterwards, a few levels up the call chain,
isc__nm_udp_read_cb() calls isc__nmsocket_timer_stop() on the dispatch
socket, effectively disabling read timeout handling for that socket.
Combined with the fact that reading is not restarted in such a case
(e.g. by calling dispatch_getnext() from udp_recv()), this leads to the
higher-level query structure remaining referenced indefinitely because
the dispatch socket it uses will neither be read from nor closed due to
a timeout. This in turn causes fetch contexts to linger around
indefinitely, which in turn i.a. prevents certain cache nodes (those
containing rdatasets used by fetch contexts, like fctx->nameservers)
from being cleaned.
Fix by making sure the higher-level callback does get invoked with the
ISC_R_TIMEDOUT result code when udp_recv() determines there is no more
time left to receive the correct UDP response before the timeout fires.
This allows the higher-level callback to clean things up, preventing the
reference leak described above.
The following scenario triggers a "named" crash:
1. Configure a catalog zone.
2. Start "named".
3. Comment out the "catalog-zone" clause.
4. Run `rndc reconfig`.
5. Uncomment the "catalog-zone" clause.
6. Run `rndc reconfig` again.
Implement the required cleanup of the in-memory catalog zone during
the first `rndc reconfig`, so that the second `rndc reconfig` can
find it in the expected state.
the resolver test checks that the correct number of fetches are
sent for NS rrsets of a given size, but it formerly did so by
counting queries received by the authoritative server, which could
result in an off-by-one count if one of the queries had been resent
due to a timeout or a port number collision.
this commit changes the test to count fetches initiated by the
resolver, which should prevent the intermittent test failure, and
is the actual datum we were interested in anyway.
opensslecdsa_fromdns() already rejects too short ECDSA public keys.
Make it also reject too long ones. Remove an assignment made redundant
by this change.
raw_key_to_ossl() assumes fixed ECDSA private key sizes (32 bytes for
ECDSAP256SHA256, 48 bytes for ECDSAP384SHA384). Meanwhile, in rare
cases, ECDSAP256SHA256 private keys are representable in 31 bytes or
less (similarly for ECDSAP384SHA384) and that is how they are then
stored in the "PrivateKey" field of the key file. Nevertheless,
raw_key_to_ossl() always calls BN_bin2bn() with a fixed length argument,
which in the cases mentioned above leads to erroneously interpreting
uninitialized memory as a part of the private key. This results in the
latter being malformed and broken signatures being generated. Address
by using the key length provided by the caller rather than a fixed one.
Apply the same change to public key parsing code for consistency, adding
an INSIST() to prevent buffer overruns.
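A minimal sketch of the length-respecting conversion; the helper name
and the max_len guard are illustrative, not the actual
raw_key_to_ossl() code:

#include <stddef.h>
#include <openssl/bn.h>

/* Convert a variable-length big-endian private key to a BIGNUM using
 * the caller-supplied length, so a short (e.g. 31-byte)
 * ECDSAP256SHA256 key never causes uninitialized memory to be read
 * past the end of the stored key material. */
static BIGNUM *
privkey_to_bn(const unsigned char *buf, size_t len, size_t max_len) {
    if (len == 0 || len > max_len) { /* INSIST()-style sanity check */
        return (NULL);
    }
    return (BN_bin2bn(buf, (int)len, NULL));
}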
Most of the test zones in the dnssec system test can be verified.
Use -z when only a single key is being used so that the verifier
knows that only a single key is in use.
The method used to generate a test zone with multiple NSEC and
NSEC3 chains was incorrect. Multiple calls to dnssec-signzone
with multiple parameters are not additive. Extract the chain on
each run, then add them to the final signed zone instance.
when processing a mismatched response, we call dns_dispatch_getnext().
If that fails, for example because of a timeout, fctx_done() is called,
which cancels all queries. This triggers a crash afterward when
fctx_cancelquery() is called, and is unnecessary since fctx_done()
would have been called later anyway.
When dns_adb is shutting down, first the adb->shutting_down flag is
set and then a task is created that runs shutdown_stage2(), which sets
the shutdown flag on names and entries. However, when
dns_adb_createfind() is called, only the individual shutdown flags
were being checked, and the global adb->shutting_down flag was not.
Because of that it was possible for a different thread to slip in and
create a new find between the dns_adb_shutdown() and dns_adb_detach()
calls, but before the shutdown_stage2() task completes. This was
detected by ThreadSanitizer as a data race, because the zonetable
might have already been detached by the dns_view shutdown process and
simultaneously accessed by dns_adb_createfind().
This commit converts adb->shutting_down to an atomic_bool to avoid
taking the global adb lock when creating the find.
Add a new parameter to 'ns_client_t' to store a potential extended DNS
error. Reset it when the client request ends or is put back.
Add defines for all well-known info-codes.
Update the number of DNS_EDNSOPTIONS that we are willing to set.
Create a new function to set the extended error for a client reply.
The documentation was inconsistent with the code. The new description
for cookie-algorithm now reflects the current behavior.
The following two commits are the relevant code changes to this
section of docs: afa81ee4a912f313
Change 5756 (GL #2854) introduced build errors when using
'configure --disable-doh'. To fix this, isc_nm_is_http_handle() is
now defined in all builds, not just builds that have DoH enabled.
Missing code comments were added both for that function and for
isc_nm_is_tlsdns_handle().
The GitLab feature
https://docs.gitlab.com/ee/ci/pipelines/settings.html#auto-cancel-redundant-pipelines
can automatically cancel jobs which operate on outdated code, i.e. on
branches which received new commits while jobs with an older set of
commits are still running. For this feature to work, jobs have to be
configured with the boolean setting interruptible: true.
I think practically all of our current CI jobs can be cancelled,
so the option is now on by default for all jobs.
This is an almost minimal prototype to show how to use the
python-hypothesis library in a system test. It does not fully replace
the existing shell-based system test for wildcards.
Resolve#2854: DoH: Assign HTTP responses freshness lifetime according to the smallest TTL found in the Answer section
Closes#2854
See merge request isc-projects/bind9!5493
This commit makes BIND set the "max-age" value of the "Cache-Control"
HTTP header to the minimal TTL from the Answer section for positive
answers, as RFC 8484 advises in section 5.1.
We calculate the minimal TTL as a side effect of rendering the
response DNS message, so it does not change the code flow much, nor
should it have any measurable negative impact on the performance.
For negative answers, the "max-age" value is set using the TTL and
SOA-minimum values from an SOA record in the Authority section.
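A hedged sketch of the idea, with illustrative helper names (the
actual rendering and HTTP code in BIND differ):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Track the smallest TTL seen while rendering the Answer section,
 * starting from UINT32_MAX, then derive "max-age" from it. */
static uint32_t
track_min_ttl(uint32_t min_ttl, uint32_t rr_ttl) {
    return (rr_ttl < min_ttl ? rr_ttl : min_ttl);
}

static int
format_cache_control(char *buf, size_t len, uint32_t min_ttl) {
    if (min_ttl == UINT32_MAX) {
        min_ttl = 0; /* no answer records were rendered */
    }
    return (snprintf(buf, len, "Cache-Control: max-age=%" PRIu32,
                     min_ttl));
}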
This commit adds an isc_nm_set_min_answer_ttl() function which is
intended to be used to give a hint to the underlying transport
regarding the answer TTL.
The interface is intentionally kept generic because over time more
transports might benefit from this functionality, but currently it is
intended for DoH, to set the "max-age" value within the
"Cache-Control" HTTP header (as recommended in RFC 8484, section 5.1
"Cache Interaction").
It is a no-op for other DNS transports for the time being.
The version number for the XML statistics channel was not incremented
correctly after removal of isc_socket code in
a55589f881, and the JSON version number
was not incremented at all.
Check to see whether there are outstanding requests in the
httpd receive buffer after sending the response, and if so,
process them.
Test that pipelined requests are handled by sending multiple
minimal HTTP/1.1 using netcat (nc) and checking that we get
back the same number of responses.
Remember the amount of space consumed by the HTTP headers, then
move any trailing data to the start of the httpd->recvbuf once
we have finished processing the request.
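A minimal sketch of that compaction step, with illustrative names
rather than the actual httpd fields:

#include <string.h>

/* After a request occupying 'consumed' bytes has been processed,
 * slide any pipelined trailing data to the front of the receive
 * buffer so the next request starts at offset zero. */
static void
compact_recvbuf(unsigned char *recvbuf, size_t *recvlen,
                size_t consumed) {
    size_t remaining = *recvlen - consumed;
    if (remaining > 0) {
        memmove(recvbuf, recvbuf + consumed, remaining);
    }
    *recvlen = remaining;
}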
if an incoming HTTP request is incomplete, but nothing else is clearly
wrong with it, the stats channel continues reading to see if there's
more coming. the buffer length was not being processed correctly in
this case. also, the server state was not reset correctly when the
request was complete, so subsequent requests could be appended to
the first buffer instead of being treated as new.
in addition to fixing the above problems, this commit also increases
the size of the httpd request buffer from 1024 to 4096, because some
browsers send a lot of headers.
A typo introduced in f3f1cab05e prevents
execution of the dns_name_copy-with-result.spatch. The replacement
should end with a semicolon, not a colon:
plus: parse error:
File "cocci/dns_name_copy-with-result.spatch", line 28, column 23, charpos = 421
around = ':',
whole content = + dns_name_copy(E1, E2):
1) if 'key->external' is set we just need to call
dst__privstruct_writefile
2) the cleanup of 'bufs' was incorrect, as 'i' doesn't reflect the
current index into 'bufs'. Use a simple for loop.
This review was triggered by Coverity reporting a buffer overrun
on 'bufs'.
'dh' was being assigned to key->keydata.dh too soon, which could
result in a memory leak on error. Moved the assignment of
key->keydata.dh until after 'dh' was fully initialized.
Coverity was reporting dead code on the error path cleaning up 'dh'
which triggered this review.
'make dist' omits lib/dns/tests/comparekeys/ (added in
7101afa23c) from the release tarball it
creates, which makes the unit:gcc:tarball CI job permanently fail in
the dst unit test.
Be less strict regarding "tls" statements in the configuration file by allowing both "key-file" and "cert-file" be omitted
See merge request isc-projects/bind9!5546
In the 9.17.19 release, "tls" statement verification code was added.
The code was too strict and assumed that every such statement should
have both "cert-file" and "key-file" specified. This turned out to be
a regression, as in some cases we plan to use the "tls" statement to
specify TLS connection parameters.
This commit fixes this behaviour; now a "tls" statement should either
have both "cert-file" and "key-file" specified, or both should be
omitted.
It was used only as a guard against an unused variable declaration,
but the surrounding code depends on strtok_r being defined
unconditionally, so there is no point in guarding the variable.
Glibc documentation suggests it is obsolete anyway, and e.g. the Meson
build system decided to ignore it. It seems to be required only by the
old Solaris compiler, and OpenIndiana uses gcc.
It's a major PITA trying to guess what exactly clang-format has
changed, so now CI stores a patch file with the changes, which can be
applied locally if needed.
PyLint 2.11 reports a new warning, C0209 (consider-using-f-string).
Since f-strings are only available in Python 3.6+, existing scripts
cannot be updated to use this feature just yet because they would stop
working with older Python versions. Instead, disable PyLint warning
C0209 for the time being. Sort all disabled warnings in .pylintrc.
GL #2308 was originally referenced by CHANGES entry 5727. However, the
corresponding code change turned out to be flawed and had to be reverted
in BIND 9.17.19, causing CHANGES entry 5727 to be turned into a
placeholder on the release branch.
Commit 63145fb1d3 subsequently addressed
the flaw, so the fix for GL #2308 will be included in BIND 9.17.20.
Move the relevant CHANGES entry to reflect that.
Previously, when the lame cache was disabled by setting lame-ttl to 0,
lame answer detection was disabled as well. In this commit, we enable
lame response detection even when the lame cache is disabled. This
enables stopping answer processing early rather than going through the
whole answer processing flow.
The lame-ttl cache is implemented in ADB as a per-server locked
linked list "indexed" by <qname,qtype>. This list has to be walked
every time there's a new query or a new record is added into the lame
cache. A determined attacker can use this to degrade the performance
of the resolver.
Resolver testing has shown that disabling the lame cache has little
impact on resolver performance, and it's a minimal viable defense
against this kind of attack.
Unless configured with the `no-deprecated` option, OpenSSL 3.0.0
still has the deprecated APIs present and will throw warnings during
compilation when they are used.
Make sure that the old APIs are being used only with the older versions
of OpenSSL.
OpenSSL 3 deprecates most of the DH* family and associated APIs.
Reimplement the existing functionality using a newer set of APIs
which will be used when compiling/linking with OpenSSL 3.0.0 or newer
versions.
OpenSSL 3 deprecates most of the RSA* family and associated APIs.
Reimplement the existing functionality using a newer set of APIs
which will be used when compiling/linking with OpenSSL 3.0.0 or newer
versions.
OpenSSL 3 deprecates most of the EC* family and associated APIs.
Reimplement the existing functionality using a newer set of APIs
which will be used when compiling/linking with OpenSSL 3.0.0 or newer
versions.
EVP_PKEY_eq() is the replacement with a smaller result range (0, 1)
instead of (-1, 0, 1). EVP_PKEY_cmp() is mapped to EVP_PKEY_eq() when
building with older versions of OpenSSL.
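A minimal sketch of such a mapping, using the usual
OPENSSL_VERSION_NUMBER convention (illustrative, not the exact BIND 9
shim):

#include <openssl/evp.h>
#include <openssl/opensslv.h>

/* On OpenSSL < 3.0 map the new name onto EVP_PKEY_cmp(); callers only
 * test for a result of 1 (equal), so the wider (-1, 0, 1) range of
 * the old function does not matter here. */
#if OPENSSL_VERSION_NUMBER < 0x30000000L
#define EVP_PKEY_eq(a, b) EVP_PKEY_cmp(a, b)
#endif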
The EVP_MD_CTX_new() and EVP_MD_CTX_free() functions are renamed APIs
which were previously available as EVP_MD_CTX_create() and
EVP_MD_CTX_destroy() respectively, which means that we can use them
instead of providing our own shim functions.
OpenSSL 3.0.0 deprecates the ERR_get_error_line_data() function.
Use ERR_get_error_all() instead of ERR_get_error_line_data() and create
a shim to use the old variant for the older OpenSSL versions which don't
have the newer ERR_get_error_all().
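A hedged sketch of what such a shim could look like; setting 'func' to
an empty string is an assumption, since pre-3.0 error records do not
carry function names:

#include <openssl/err.h>
#include <openssl/opensslv.h>

#if OPENSSL_VERSION_NUMBER < 0x30000000L
static unsigned long
ERR_get_error_all(const char **file, int *line, const char **func,
                  const char **data, int *flags) {
    if (func != NULL) {
        *func = ""; /* not recorded by pre-3.0 OpenSSL */
    }
    return (ERR_get_error_line_data(file, line, data, flags));
}
#endif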
OpenSSL 3.0.0 deprecates the EVP_MD_CTX_md() function.
Use EVP_MD_CTX_get0_md() instead of EVP_MD_CTX_md() and create a shim
to use the old variant for the older OpenSSL versions which don't have
the newer EVP_MD_CTX_get0_md().
OpenSSL 3.0.0 deprecates many low level API functions.
In preparation for the future support of linking BIND with OpenSSL 3.0.0
without the deprecated API functions, change the configure.ac script to
use functions which are available on all supported versions of OpenSSL
and LibreSSL.
The dst_key_pubcompare() and dst_key_compare() functions didn't have a
unit test; add unit tests which compare the same keys, different keys,
and, where possible, similar keys with a manually altered parameter.
dst_key_pubcompare() internally uses the *_todns() functions of the
lib/dns/openssl*_link.c modules.
dst_key_compare() internally uses the *_compare() functions of the
lib/dns/openssl*_link.c modules.
Duplicate catalog zone entries caused an assertion failure
in named during configuration. This is now a soft error
that is detected earlier by named and also by named-checkconf.
Update the nsec3 system tests to use the new default values. Change
the policy for "nsec3-other" so that we still have a test case for
non-zero salt length.
When using 'nsec3param' in 'dnssec-policy' and no specific parameters
are provided, default to zero additional iterations and no salt, as
recommended by draft-ietf-dnsop-nsec3-guidance.
For the sake of running ASAN and TSAN jobs with the latest stable GCC,
replace "base image" (Debian Buster with GCC 8.3.0) with Fedora 34 image
with GCC 11.
Depending upon when the directory is sampled, there may be 2
(oldest version removed and rename / reopen in progress) or
3 old versions of the log file.
It was found that the original commit adding setmodtime() was
incompletely squashed: there was a double check for
DNS_ZONEFLG_NEEDDUMP instead of a check for DNS_ZONEFLG_NEEDDUMP and
DNS_ZONEFLG_DUMPING.
Change the duplicate check to DNS_ZONEFLG_DUMPING.
Add a lame delegation to lame.example.org with only an A record
in the additional section; on failure, this will trigger a retry
with AAAA, which will loop. Test that dig returns SERVFAIL, in
addition to confirming that named doesn't hang on shutdown.
If an ADB find is started on behalf of a resolver fetch, and fails to
find any addresses but has a pending resolver fetch associated with it,
then we need to check whether the fetch it's waiting on is the one
that created it. If so, it can never finish and needs to be terminated.
The NAME_FETCH_A and NAME_FETCH_AAAA macros were meant to be
boolean, indicating whether the pointers were set or not, while
the NAME_FETCH_V4 and NAME_FETCH_V6 macros were meant to return
the pointer values. The latter were only used as booleans, so
they've been removed in favor of the former.
Also did some style cleanup and removed an unreachable code block.
there was a race possible in which a dispatch was put into
the 'connected' state before it had a TCP handle attached,
which could cause an assertion failure in dns_dispatch_gettcp().
The isc_time_add() and isc_time_subtract() functions didn't have a
unit test; add a unit test with a couple of edge-case vectors to check
whether overflow and underflow are correctly handled.
Use the __builtin_uadd_overflow() and __builtin_usub_overflow() for
overflow checks in isc_time_add() and isc_time_subtract(). This
generates more efficient and safe code.
The isc_time_add() could overflow when t.seconds + i.seconds == UINT_MAX
and t.nanoseconds + i.nanoseconds >= NS_PER_S.
Fix the overflow in isc_time_add(), and simplify the ISC_R_RANGE checks
both in isc_time_add() and isc_time_subtract() functions.
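A hedged sketch of the overflow-checked addition, assuming (as
isc_time does) that both nanosecond fields are already below NS_PER_S;
the types and names are illustrative:

#include <stdbool.h>

#define NS_PER_S 1000000000U

static bool
time_add(unsigned int ts, unsigned int tns, unsigned int is,
         unsigned int ins, unsigned int *rs, unsigned int *rns) {
    unsigned int secs, nsecs;
    if (__builtin_uadd_overflow(ts, is, &secs)) {
        return (false); /* would exceed UINT_MAX: ISC_R_RANGE */
    }
    nsecs = tns + ins; /* both < NS_PER_S, so this cannot overflow */
    if (nsecs >= NS_PER_S) {
        nsecs -= NS_PER_S;
        if (__builtin_uadd_overflow(secs, 1, &secs)) {
            return (false); /* the carry overflows: ISC_R_RANGE */
        }
    }
    *rs = secs;
    *rns = nsecs;
    return (true);
}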
The qmin system test was printing spurious output. On investigation,
the test case turned out to be both broken and ineffective: its
expectations were wrong, and it was printing the output because its
wrong expectations were not met, and those failed expectations were
not causing a test failure. All of this has been corrected.
The ReferenceRole class is only available in Sphinx >= 2.0.0, which
makes building BIND 9 documentation impossible with older Sphinx
versions:
Running Sphinx v1.7.6
Configuration error:
There is a programable error in your configuration file:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/sphinx/config.py", line 161, in __init__
execfile_(filename, config)
File "/usr/lib/python3.6/site-packages/sphinx/util/pycompat.py", line 150, in execfile_
exec_(code, _globals)
File "conf.py", line 21, in <module>
from sphinx.util.docutils import ReferenceRole
ImportError: cannot import name 'ReferenceRole'
Work around the problem by defining a stub version of the ReferenceRole
class if the latter cannot be imported. This allows documentation
(without GitLab hyperlinks in release notes) to be built with older
Sphinx versions.
The fctxbucket_t properly attaches to the fetchctx_t, so it can safely
use its memory context. Save a little bit of memory by removing the
separate memory context from fctxbucket_t.
Using proper attach/detach functions for the fetch context
instead of fctx_increference() and _decreference() makes
it easier to debug reference counting errors in the resolver.
Fixed several such errors that were found as a result.
it is possible for udp_recv_cb() to fire after the socket
is already shutting down and statichandle is NULL; we need to
create a temporary handle in this case.
it was possible for the route socket's udp_recv() callback to fire
after the interfacemgr was detached, causing an assertion failure.
this has now been fixed by referencing the interfacemgr when setting up
the route socket, and dereferencing it when shutting it down.
The statistics system test sometimes needs a pause to wait for the
expected stats to be reported.
Also, the test for priming queries was ineffective; the result of
the grep was not being checked.
The catz system test included a test case that was looking for a single
answer record after an update, when it should have been looking for two.
The test usually passed because of timing - the first dig usually got a
response before the update was completed - but occasionally the update
processed fast enough for the test to fail. On investigation, it turned
out to be the test that was wrong.
The digdelv system test has a test case in which stderr was
included in the dig output. When trace logging was in use,
this confused the grep and caused a spurious test failure.
The autoconf script prints the compiler version used at the end of the
configure script. The Solaris native compiler doesn't support
--version, and -V has to be used, which in turn isn't supported by
GCC/Clang. Detect which version flag has to be used and call $CC with
it.
Some of the libns unit tests override the isc_nmhandle_attach() and
_detach() functions. This causes a failure in ns_interface_create()
if a route socket is being used, so we add a parameter to disable it.
route/netlink sockets don't have stats counters associated with them,
so it's now necessary to check whether socket stats exist before
incrementing or decrementing them. rather than relying on the caller
for this, we now just pass the socket and an index, and the correct
stats counter will be updated if it exists.
isc_nm_routeconnect() opens a route/netlink socket, then calls a
connect callback, much like isc_nm_udpconnect(), with a handle that
can then be monitored for network changes.
Internally the socket is treated as a UDP socket, since route/netlink
sockets follow the datagram contract.
After support for route/netlink sockets is merged, not all sockets
will have stats counters associated with them, so it's now necessary
to check whether socket stats exist before incrementing or decrementing
them. rather than relying on the caller for this, we now just pass the
socket and an index, and the correct stats counter will be updated if
it exists.
Update the 'catz' system test by adding tests that update a
catalog zone (catalog1.example) while preserving existing entries
(increasing the SOA serial), then check that the catalog zone has
transferred and that the existing entries have not accidentally been
removed as a consequence (the updated zone content can still be
returned).
After receiving a new version of a catalog zone it is required
to merge it with the old version.
The algorithm walks through the new version's hash table and applies
the following logic:
1. If an entry from the new version does not exist in the old
version, then it's a new entry, add the entry to the `toadd` hash
table.
2. If the zone does not exist in the set of configured zones, because
it was deleted via rndc delzone or it was removed from another
catalog zone instance, then add it to the `toadd` hash table to be
reinstantiated.
3. If an entry from the new version also exists in the old version,
but is modified, then add the entry to the `tomod` hash table, then
remove it from the old version's hash table.
4. If an entry from the new version also exists in the old version and
is the same (unmodified) then just remove it from the old version's
hash table.
The algorithm then deletes all the remaining zones which still exist
in the old version's hash table (because only the ones that don't
exist in the new version should now remain there), then adds the ones
that were added to the `toadd`, and modifies the ones that were added
to the `tomod`, completing the merge.
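A self-contained toy model of the merge logic above, using linear-scan
arrays as stand-ins for the hash tables (illustrative only, not the
dns_catz implementation):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    const char *name;
    int serial;   /* stands in for the entry's content */
    bool present; /* still present in this table? */
} entry_t;

static entry_t *
find(entry_t *tab, size_t n, const char *name) {
    for (size_t i = 0; i < n; i++) {
        if (tab[i].present && strcmp(tab[i].name, name) == 0) {
            return (&tab[i]);
        }
    }
    return (NULL);
}

static bool
zone_is_configured(const char *name) {
    (void)name;
    return (true); /* assume nothing was removed via rndc delzone */
}

int
main(void) {
    entry_t oldver[] = { { "a", 1, true }, { "b", 1, true },
                         { "c", 1, true } };
    entry_t newver[] = { { "a", 1, true }, { "b", 2, true },
                         { "d", 1, true } };

    for (size_t i = 0; i < 3; i++) {
        entry_t *o = find(oldver, 3, newver[i].name);
        if (o == NULL) {
            printf("toadd: %s\n", newver[i].name); /* case 1 */
        } else if (!zone_is_configured(newver[i].name)) {
            printf("toadd: %s\n", newver[i].name); /* case 2 */
            o->present = false;
        } else if (o->serial != newver[i].serial) {
            printf("tomod: %s\n", newver[i].name); /* case 3 */
            o->present = false;
        } else {
            o->present = false; /* case 4: unchanged */
        }
    }
    /* entries left in the old version no longer exist: delete them */
    for (size_t i = 0; i < 3; i++) {
        if (oldver[i].present) {
            printf("todel: %s\n", oldver[i].name);
        }
    }
    return (0);
}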
During a recent refactoring, the step where the entry should be
removed from the old version's hash table under condition (4) above
was accidentally omitted, so the unmodified zones were remaining
in the old version's hash table and consequently being deleted.
The new rules compare the target name in PTR and SRV records against
the machine name embedded in the kerberos principal. This can be
used to further restrict what PTR and SRV records can be added or
deleted via dynamic updates if desired.
The librpz.h header defined LIBRPZ_LIKELY() and LIBRPZ_UNLIKELY()
macros that were actually unused in the code. Remove the macros and
the autoconf check for __builtin_expect().
The __builtin_expect() can be used to provide the compiler with branch
prediction information. The GCC manual says[1] on the subject:
In general, you should prefer to use actual profile feedback for
this (-fprofile-arcs), as programmers are notoriously bad at
predicting how their programs actually perform.
Stop using __builtin_expect() and ISC_LIKELY() and ISC_UNLIKELY() macros
to provide the branch prediction information as the performance testing
shows that named performs better when the __builtin_expect() is not
being used.
1. https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html#index-_005f_005fbuiltin_005fexpect
pytest was failing because it was testing features that had
not been configured. test to see if those features have been
configured before running the tests.
due to comparing logfile suffixes as 32 bit rather than 64 bit
integers, logfiles with timestamp suffixes that should have been
removed when rolling could be left in place. this has been fixed.
the logfileconfig system test did not conform to the style of
other tests, and was difficult to read and maintain. it has
been cleaned up and simplified in several ways:
- named.args used when appropriate so that named can be started with
specified command line arguments, instead of having it launched
directly from tests.sh
- unused root zone removed from named configuration
- an existing directory used instead of using 'mkdir' to create one
- dnssec-validation disabled to stop the server sending unnecessary queries
incidental fix: removed leftover debugging printfs from logconf.c.
This commit removes a superfluous call to isc_tlsctx_free() which was
leading to a double free() error in the case of a TLS listener
creation failure.
The call is superfluous because the TLS context object is supposed to
be destroyed in ns_listenelt_destroy() only.
Unify the header guard style and replace the inconsistent include guards
with #pragma once.
The #pragma once is widely and very well supported in all compilers that
BIND 9 supports, and #pragma once was already in use in several new or
refactored headers.
Using a simpler method will also allow us to automate header guard
checks, as this is simpler to check programmatically.
For reference, here are the reasons for the change taken from
Wikipedia[1]:
> In the C and C++ programming languages, #pragma once is a non-standard
> but widely supported preprocessor directive designed to cause the
> current source file to be included only once in a single compilation.
>
> Thus, #pragma once serves the same purpose as include guards, but with
> several advantages, including: less code, avoidance of name clashes,
> and sometimes improvement in compilation speed. On the other hand,
> #pragma once is not necessarily available in all compilers and its
> implementation is tricky and might not always be reliable.
1. https://en.wikipedia.org/wiki/Pragma_once
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* dns_zone_setprimaries()
* dns_zone_setparentals()
* dns_zone_setalsonotify()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* dns_view_adddelegationonly()
* dns_view_excludedelegationonly()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* dns_ssutable_addrule()
* dns_ssutable_create()
* dns_ssutable_createdlz()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* dns_resolver_addalternate()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* name_duporclone()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* build_event()
With isc_mem_get() and dns_name_dup() no longer being able to fail, some
functions can now only return ISC_R_SUCCESS. Change the return type to
void for the following function(s):
* dns_catz_options_copy()
* dns_catz_options_setdefault()
* dns_catz_entry_new()
* dns_catz_entry_copy()
POSIX.1-2008 changed the st_atim, st_mtim, and st_ctim members of the
struct stat from time_t to struct timespec, but not all operating
systems have implemented this version of the standard, and some
historically deviated by including their own nanosecond-precision
fields in the structure.
The autoconf script used to include <sys/fcntl.h> which contradicts
POSIX.1 as it mandates <sys/stat.h> inclusion. Change the autoconf
check to include <sys/stat.h>.
Also fix the missing AC_MSG_RESULT([yes/no]) in the check.
Replace some "master/slave" terminology in the code with the preferred
"primary/secondary" keywords. This also changes user output such as
log messages, and fixes a typo ("seconary") in cfg_test.c.
There are still some references to "master" and "slave" for various
reasons:
- The old syntax can still be used as a synonym.
- The master syntax is kept when it refers to master files and formats.
- This commit replaces mainly keywords that are local. If "master" or
"slave" is used in, for example, a structure that is used all over the
place, it is considered out of scope for the moment.
Replace most "master/slave" terminology in tests with the preferred
"primary/secondary", with the following exceptions:
- When testing the old syntax
- When master is used in master file and master file format terms
- When master is used in hostmaster or postmaster terms
- When master used in legacy domain names (for example in dig.batch)
- When there is no replacement (for example default-masters)
Originally, the hash table used in the RBT database would be resized
when it reached a certain number of elements (defined by overcommit).
This was causing resolution brownouts for busy resolvers, because the
rehashing could take several seconds to complete. This was mitigated
by pre-allocating the hash table in the RBT database used for caching
to be large enough, as determined by max-cache-size. The downside of
this solution was that the pre-allocated hash table could take a
significant chunk of memory even when the resolver cache was otherwise
empty, because the default value for max-cache-size is 90% of
available memory.
Implement incremental resizing[1] to perform the rehashing gradually:
1. During the resize, allocate the new hash table, but keep the old
table unchanged.
2. In each lookup or delete operation, check both tables.
3. Perform insertion operations only in the new table.
4. At each insertion also move r elements from the old table to the new
table.
5. When all elements are removed from the old table, deallocate it.
To ensure that the old table is completely copied over before the new
table itself needs to be enlarged, it is necessary to increase the
size of the table by a factor of at least (r + 1)/r during resizing.
In our implementation r is equal to 1.
The downside of this approach is that the old table and the new table
could stay in memory for longer when there are no new insertions into
the hash table for prolonged periods of time as the incremental
rehashing happens only during the insertions.
The upside of this approach is that it's no longer necessary to
pre-allocate large hash table, because the RBT hash table rehashing
doesn't cause resolution brownouts anymore and thus we can use the
memory as needed.
1. https://en.m.wikipedia.org/wiki/Hash_table#Dynamic_resizing
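A hedged sketch of the scheme with r = 1, using a generic chained hash
table rather than the actual rbtdb code:

#include <stdlib.h>
#include <string.h>

typedef struct node {
    struct node *next;
    const char *key;
} node_t;

typedef struct {
    node_t **newtab;
    size_t newsize;
    node_t **oldtab; /* non-NULL only while a resize is in progress */
    size_t oldsize;
    size_t migrated; /* old buckets fully drained so far */
} ht_t;

static size_t
hash(const char *key) {
    size_t h = 5381;
    while (*key != '\0') {
        h = h * 33 + (unsigned char)*key++;
    }
    return (h);
}

/* step 4: move one entry from the old table into the new one */
static void
migrate_one(ht_t *ht) {
    if (ht->oldtab == NULL) {
        return;
    }
    while (ht->migrated < ht->oldsize &&
           ht->oldtab[ht->migrated] == NULL) {
        ht->migrated++;
    }
    if (ht->migrated == ht->oldsize) {
        free(ht->oldtab); /* step 5: old table is empty */
        ht->oldtab = NULL;
        return;
    }
    node_t *n = ht->oldtab[ht->migrated];
    ht->oldtab[ht->migrated] = n->next;
    size_t b = hash(n->key) % ht->newsize;
    n->next = ht->newtab[b];
    ht->newtab[b] = n;
}

/* step 3: insert only into the new table, then migrate one entry */
static void
insert(ht_t *ht, node_t *n) {
    size_t b = hash(n->key) % ht->newsize;
    n->next = ht->newtab[b];
    ht->newtab[b] = n;
    migrate_one(ht);
}

/* step 2: lookups probe both tables during a resize */
static node_t *
lookup(ht_t *ht, const char *key) {
    for (node_t *n = ht->newtab[hash(key) % ht->newsize]; n != NULL;
         n = n->next) {
        if (strcmp(n->key, key) == 0) {
            return (n);
        }
    }
    if (ht->oldtab != NULL) {
        for (node_t *n = ht->oldtab[hash(key) % ht->oldsize];
             n != NULL; n = n->next) {
            if (strcmp(n->key, key) == 0) {
                return (n);
            }
        }
    }
    return (NULL);
}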
The documentation and feature-test were using '--with-idn' but the
configure script doesn't recognize this option. The correct option to
enable IDN support is '--with-libidn2'.
Add a test to encode a unicode sequence that encodes differently with
UseSTD3ASCIIRules=false, which is the default with idn2 >= 2.0.3, and
with UseSTD3ASCIIRules=true, which is what should be used to encode
hostnames and domains.
libidn2 defaults to UseSTD3ASCIIRules=false. That allows arbitrary ASCII
characters to show up in the toASCII output, including space and
underscore. Enable IDN2_USE_STD3_ASCII_RULES to the libidn2 conversion
to disallow additional characters from the conversion (see Validity
Criteria[1]).
The AX_CHECK_JEMALLOC() m4 macro sets the JEMALLOC_CFLAGS variable, not
JEMALLOC_CPPFLAGS. Furthermore, the JEMALLOC_CFLAGS and JEMALLOC_LIBS
variables should only be included in the build flags if jemalloc was
successfully configured. Tweak lib/isc/Makefile.am accordingly.
Remove the dynamic registration of result codes. Convert isc_result_t
from unsigned + #defines into a 32-bit enum type in the grand unified
<isc/result.h> header. Keep the existing values of the result codes,
even at the expense of the description and identifier tables being
unnecessarily large.
Additionally, add a couple of:
switch (result) {
[...]
default:
break;
}
statements where the compiler now complains about missing enum values
in the switch statement.
Previously, when using a compiler without support for static
assertions, the STATIC_ASSERT() macro would be replaced with a runtime
assertion.
Change the STATIC_ASSERT() macro to a version that's compile time
assertion even when using pre-C11 compilers.
Courtesy of Joseph Quinsey: https://godbolt.org/z/K9RvWS
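A minimal sketch of the pre-C11 trick (in the spirit of the link
above, not copied from it): a false condition produces a negative
array size in a typedef, which every C compiler must reject at compile
time:

#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 201112L
#define STATIC_ASSERT(cond, msg) _Static_assert(cond, msg)
#else
#define SA_CAT(a, b) a##b
#define SA_NAME(line) SA_CAT(static_assert_line_, line)
#define STATIC_ASSERT(cond, msg) \
    typedef char SA_NAME(__LINE__)[(cond) ? 1 : -1]
#endif

STATIC_ASSERT(sizeof(int) >= 2, "int is too small");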
Both <isccc/util.h> and <isc/util.h> defined DE_CONST() macro. As
<isccc/util.h> header includes <isc/util.h>, remove the macro from
<isccc/util.h> header.
There's a value mismatch between the return type of dns_rrl(), which
is dns_rrl_result_t, and ISC_R_SUCCESS, which belongs to isc_result_t.
This works incidentally, because DNS_RRL_RESULT_OK == ISC_R_SUCCESS.
This would break when we change isc_result_t to an enum in a
subsequent commit. Change the value to match the type.
Renamed some functions for clarity and readability:
- dns_dispatch_addresponse() -> dns_dispatch_add()
- dns_dispatch_removeresponse() -> dns_dispatch_done()
The dns_dispatch_cancel() function now calls dns_dispatch_done()
directly, so it is no longer ever necessary to call both functions.
dns_dispatch_cancel() is used to terminate dispatch connections
that are still pending, while dns_dispatch_done() is used when they
are complete.
The build would fail if the OpenSSL libraries were not in default
include path because we include <openssl/opensslv.h> header in
lib/bind9/check.c. Add $(OPENSSL_CFLAGS) to lib/bind9/Makefile.am.
This commit fixes a crash in the DoT code when it was attempting to
call a read callback in the later stages of the connection, when the
callback is no longer available.
It also fixes [GL #2884] (the back-trace provided in the bug report is
exactly the same as was seen when fixing this problem).
This commit makes dig fail with an error in case a zone transfer is
attempted over a connection where ALPN was not negotiated. All other
request types will work fine.
This commit makes the code handling incoming zone transfers verify
that they are allowed to be done over the underlying connection. As a
result, the check ensures that the "dot" ALPN token has been
negotiated over the underlying connection.
This commit makes BIND verify that zone transfers are allowed to be
done over the underlying connection. Currently, it makes sense only
for DoT, but the code is deliberately made to be protocol-agnostic.
The intention of having this function is to have a predicate to check
if a zone transfer can be performed over the given handle. In most
cases we can assume that we can do zone transfers over any stream
transport except DoH, but this assumption does not hold for zone
transfers over DoT (XoT), as RFC 9103 requires ALPN to happen, which
might not be the case for all deployments of DoT.
The 'listenlist_test', 'notify_test', and 'query_test' tests failed
when the descriptor limit was 256 on macOS 11.6 with 8 CPUs. On the
test platform the limit needed to be increased to ~400. Increase
the limit to at least 1024 to give some headroom.
as libdns is no longer exported, it's not necessary to have
init and shutdown functions. the only purpose they served
was to create a private mctx and run dst_lib_init(), which
can be called directly instead.
- serve-stale: dig wasn't always running in the background when it
should. some of the serve-stale test cases are based on groups of dig
calls running simultaneously in the background: the test pauses and
resumes running after 'wait'. in some cases the final call to dig in a
group wasn't in the background, and this sometimes caused delays that
affected later test results. in another case, a test was simplified
and made more reliable by running dig in the foreground, removing a
sleep.
- serve-stale: The extension of the dig timeout period from 10 to 11
seconds in commit 5307bf64ce was left undone in a few places and has
now been completed.
- serve-stale: Resolver-query-timeout was set incorrectly. a comment
above a test case in serve-stale/tests.sh says: "We configured a long
value of 30 seconds for resolver-query-timeout," but
resolver-query-timeout was actually set to 10, not 30. this is now
fixed.
- rpz: Force retransfer of the fast-expire zone, to ensure it's fully
loaded in ns3; previously it could have been left unloaded if ns5
wasn't up yet when ns3 attempted the zone transfer.
- statistics: The TCP4SendErr counter is incremented when a TCP
dispatch is canceled while sending. depending on test timing, this may
have happened by the time the statistics are dumped. worked around by
ignoring that stat counter when checking for errors.
- hooks: Add a prereq.sh script to prevent running under TSAN.
- zero: Disabled the servfail cache so that SERVFAIL is reported only
when there actually is a failure, not repeatedly every time the same
query is sent.
- Prevent shutdown races: attach/detach to dns_resolver in dns_fetch_t
and fctx_t; delay destruction of fctx when finds are still active;
reference the fctx while canceling; reverse the order of
fctx_destroy() and empty_bucket().
- Don't resend queries if fetches have been canceled.
- It's possible for fctx_doshutdown() to run before a TCP connection
has completed. if the query is not on the queries list, then it is not
canceled, but the adbaddrinfo is freed. when tcp_connected() runs
later, the query is in an inconsistent state. to fix this, we add the
query to queries before running dns_dispatch_connect(), instead of in
the connect callback.
- Combined the five fctx_cleanup* functions into a single one.
- Added comments and changed some names to make this code easier to
understand.
udp_recv() will call dispatch_getnext() if the message received is
invalid or doesn't match; we need to reduce the timeout each time this
happens so we can't be starved forever by someone sending garbage
packets.
- disp_connected() has been split into two functions,
udp_connected() (which takes 'resp' as an argument) and
tcp_connected() (which takes 'disp', and calls the connect callbacks
for all pending resps).
- In dns_dispatch_connect(), if a connection is already open, we need to
detach the dispentry immediately because we won't be running
tcp_connected().
- dns_dispatch_cancel() also now calls the connect callbacks for
pending TCP responses, and the response callbacks for open TCP
connections waiting on read.
- If udp_connected() runs after dns_dispatch_cancel() has been called,
ensure that the caller's connect callback is run.
- If a UDP connection fails with EADDRINUSE, we try again up to five
times with a different local port number before giving up.
- If a TCP connection is canceled while still pending connection, the
connect timeout may still fire. we attach the dispatch before
connecting to ensure that it won't be detached too soon in this case.
- The dispentry is no longer removed from the pending list when
deactivating, so that the connect callback can still be run if
dns_dispatch_removeresponse() was run while the connecting was
pending.
- Rewrote dns_dispatch_gettcp() to avoid a data race.
- startrecv() and dispatch_getnext() can be called with a NULL resp when
using TCP.
- Refactored udp_recv() and tcp_recv() and added result logging.
- EOF is now treated the same as CANCELED in response callbacks.
- ISC_R_SHUTTINGDOWN is sent to the response callbacks for all resps if
tcp_recv() is triggered by a netmgr shutdown. (response callbacks
are *not* sent by udp_recv() in this case.)
- startrecv() and getnext() have been rewritten.
- Don't set TCP flag when connecting a UDP dispatch.
- Prevent TCP connections from trying to connect twice.
- dns_dispatch_gettcp() can now find a matching TCP dispatch that has
  not yet fully connected, and attach to it. When the connection is
completed, the connect callbacks are run for all of the pending
entries.
- An atomic 'state' variable is now used for connection state instead of
attributes.
- When dns_dispatch_cancel() is called on a TCP dispatch entry, only
  that one entry is canceled. The dispatch itself should not be shut
down until there are no dispatch entries left associated with it.
- Other incidental cleanup, including removing DNS_DISPATCHATTR_IPV4 and
_IPV6 (they were being set in the dispatch attributes but never used),
cleaning up dns_requestmgr_create(), and renaming dns_dispatch_read()
to the more descriptive dns_dispatch_resume().
- It is no longer necessary to pass a 'timeout' callback to
dns_dispatch_addresponse(); timeouts are handled directly by the
'response' callback instead.
- The netmgr handle is no longer passed to dispatch callbacks, since
  they don't (and can't) use it. Instead, dispatch_cb_t now takes a
result code, region, and argument.
- Cleaned up timeout-related tests in dispatch_test.c.
- Responses received by the dispatch are no longer sent to the caller
  via a task event, but via a netmgr-style recv callback. The 'action'
parameter to dns_dispatch_addresponse() is now called 'response' and
is called directly from udp_recv() or tcp_recv() when a valid response
has been received.
- All references to isc_task and isc_taskmgr have been removed from
dispatch functions.
- All references to dns_dispatchevent_t have been removed and the type
has been deleted.
- Added a task to the resolver response context, to be used for fctx
events.
- When the caller cancels an operation, the response handler will be
called with ISC_R_CANCELED; it can abort immediately since the caller
will presumably have taken care of cleanup already.
- Cleaned up attach/detach in resquery and request.
Remove the debugging printfs. (leaving this as a separate commit rather
than squashing it so we can restore it in the future if we ever need it
again.)
Since every dispsock was associated with a dispentry anyway (though not
always vice versa), the members of dispsock have been combined into
dispentry, which is now reference-counted. dispentry objects are now
attached before connecting and detached afterward to prevent races
between the connect callback and dns_dispatch_removeresponse().
Dispatch and dispatchmgr objects are now reference counted as well, and
the shutdown process has been simplified. Reference counting of
resquery and request objects has also been cleaned up significantly.
dns_dispatch_cancel() now flags a dispentry as having been canceled, so
that if the connect callback runs after cancellation, it will not
initiate a read.
The isblackholed() function has been simplified.
- The `timeout_action` parameter to dns_dispatch_addresponse() has been
  replaced with a netmgr callback that is called when a dispatch read
  times out. This callback may optionally reset the read timer and
resume reading.
- Added a function to convert isc_interval to milliseconds (see the
  sketch after this list); this is used to translate fctx->interval
  into a value that can be passed to dns_dispatch_addresponse() as the
  timeout.
- Note that netmgr timeouts are accurate to the millisecond, so code to
check whether a timeout has been reached cannot rely on microsecond
accuracy.
- If serve-stale is configured, then a timeout received by the resolver
may trigger it to return stale data, and then resume waiting for the
  read timeout. This is no longer based on a separate stale timer.
- The code for canceling requests in request.c has been altered so that
it can run asynchronously.
- TCP timeout events apply to the dispatch, which may be shared by
  multiple queries. Since in the event of a timeout we have no query ID
to use to identify the resp we wanted, we now just send the timeout to
the oldest query that was pending.
- There was some additional refactoring in the resolver: combining
fctx_join() and fctx_try_events() into one function to reduce code
duplication, and using fixednames in fetchctx and fetchevent.
- Incidental fix: new_adbaddrinfo() can't return NULL anymore, so the
code can be simplified.
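The interval-to-milliseconds conversion mentioned in the list above
can be sketched as follows (the function name is illustrative; the
isc_interval_t fields are real):

#include <stdint.h>
#include <isc/time.h> /* isc_interval_t: seconds + nanoseconds */

static uint64_t
interval_to_ms(const isc_interval_t *interval) {
        /* Truncate to milliseconds; netmgr timeouts are accurate to
         * the millisecond, as noted above. */
        return ((uint64_t)interval->seconds * 1000 +
                interval->nanoseconds / 1000000);
}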
The flow of operations in dispatch is changing and will now be similar
for both UDP and TCP queries:
1) Call dns_dispatch_addresponse() to assign a query ID and register
that we'll be listening for a response with that ID soon. The
parameters for this function include callback functions to inform the
caller when the socket is connected and when the message has been
sent, as well as a task action that will be sent when the response
arrives. (Later this could become a netmgr callback, but at this
stage, to minimize disruption to the calling code, we continue to use
isc_task for the response event.) On successful completion of this
function, a dispatch entry object will be instantiated.
2) Call dns_dispatch_connect() on the dispatch entry. This runs
isc_nm_udpconnect() or isc_nm_tcpdnsconnect(), as needed, and begins
listening for responses. The caller is informed via a callback
function when the connection is established.
3) Call dns_dispatch_send() on the dispatch entry. This runs
isc_nm_send() to send a request.
4) Call dns_dispatch_removeresponse() to terminate listening and close
the connection.
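A hedged sketch of this call sequence (the parameter lists and type
names are assumptions based on the description above; only the
ordering of the calls is taken from it):

isc_result_t result;
dns_dispentry_t *resp = NULL; /* dispatch entry (type name assumed) */
uint16_t id;

/* 1) assign a query ID and register the callbacks */
result = dns_dispatch_addresponse(disp, &peer, connected_cb, sent_cb,
                                  response_action, arg, &id, &resp);
/* 2) connect (UDP or TCP) and begin listening */
if (result == ISC_R_SUCCESS) {
        result = dns_dispatch_connect(resp);
}
/* 3) send the request once connected */
if (result == ISC_R_SUCCESS) {
        dns_dispatch_send(resp, &region);
}
/* ... the response arrives via the registered task action ... */
/* 4) stop listening and close the connection */
dns_dispatch_removeresponse(&resp, NULL);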
Implementation comments below:
- As we are using netmgr buffers now, the code to send the length in
  TCP queries has been removed; that is handled by the netmgr.
- TCP dispatches can be used by multiple simultaneous queries, so
dns_dispatch_connect() now checks whether the dispatch is already
connected before calling isc_nm_tcpdnsconnect() again.
- Running dns_dispatch_getnext() from a non-network thread caused a
crash due to assertions in the netmgr read functions that appear to be
  unnecessary now. The assertions have been removed.
- fctx->nqueries was formerly incremented when the connection was
successful, but is now incremented when the query is started and
decremented if the connection fails.
- It's no longer necessary for each dispatch to have a pool of tasks, so
there's now a single task per dispatch.
- Dispatch code to avoid UDP ports already in use has been removed.
- dns_resolver and dns_request have been modified to use netmgr callback
  functions instead of task events. Some additional changes were needed
to handle shutdown processing correctly.
- Timeout processing is not yet fully converted to use netmgr timeouts.
- Fixed a lock order cycle reported by TSAN (view -> zone -> adb -> view)
  by calling dns_zt functions without holding the view lock.
- The read timer must always be stopped when reading stops.
- Read callbacks can now call isc_nm_read() again in TCP, TCPDNS and
TLSDNS; previously this caused an assertion.
- The wrong failure code could be sent after a UDP recv failure because
  the if statements were in the wrong order. The check for a NULL
address needs to be after the check for an error code, otherwise the
result will always be set to ISC_R_EOF.
- When aborting a read or connect because the netmgr is shutting down,
use ISC_R_SHUTTINGDOWN. (ISC_R_CANCELED is now reserved for when the
read has been canceled by the caller.)
- A new function isc_nmhandle_timer_running() has been added, enabling a
callback to check whether the timer has been reset after processing a
timeout.
- Incidental netmgr fix: always use isc__nm_closing() instead of
  referencing sock->mgr->closing directly.
- Corrected a few comments that used outdated function names.
Previously isc_nm_read() required references on the handle to be at
least 2, under the assumption that it would only ever be called from a
connect or accept callback. However, it can also be called from a read
callback, in which case the reference count might be only 1.
We now use dns_dispatch_cancel() for this purpose. NOTE: The caller
still has to track whether there are pending send or connect events in
the dispatch or dispatch entry; later this should be moved into the
dispatch module as well.
Also removed some public dns_dispatch_*() API calls that are no longer
used outside dispatch itself.
dns_dispatch_connect() connects a dispatch socket (for TCP) or a
dispatch entry socket (for UDP). This is the next step in moving all
uses of the isc_socket code into the dispatch module.
This API is temporary; it needs to be cleaned up further so that it can
be called the same way for both TCP and UDP.
Continuing the effort to move all uses of the isc_socket API into
dispatch.c, this commit removes the dns_tcpmsg module entirely, as
dispatch was its only caller, and moves the parts of its functionality
that were being used into the dispatch module.
This code will be removed when we switch to using netmgr TCPDNS.
Previously, creation of TCP dispatches differed from UDP in that a TCP
dispatch was created to attach to an existing socket, whereas a UDP
dispatch would be created in a vacuum and sockets would be opened on
demand when a transaction was initiated.
We are moving as much socket code as possible into the dispatch module,
so that it can be replaced with a netmgr version as easily as
possible. (This will also have the side effect of making TCP and UDP
dispatches more similar.)
As a step in that direction, this commit changes
dns_dispatch_createtcp() so that it creates the TCP socket.
- Many dispatch attributes can be set implicitly instead of being passed
  in. We can infer whether to set DNS_DISPATCHATTR_TCP or _UDP from
  whether we're calling dns_dispatch_createtcp() or _createudp(). We
  can also infer DNS_DISPATCHATTR_IPV4 or _IPV6 from the addresses or
the socket that were passed in.
- We no longer use dup'd sockets in UDP dispatches, so the 'dup_socket'
parameter has been removed from dns_dispatch_createudp(), along with
  the code implementing it. Also removed isc_socket_dup() since it no
longer has any callers.
- The 'buffersize' parameter was ignored and has now been removed;
buffersize is now fixed at 4096.
- Maxbuffers and maxrequests don't need to be passed in on every call to
dns_dispatch_createtcp() and _createudp().
In all current uses, the value for mgr->maxbuffers will either be
raised once from its default of 20000 to 32768, or else left
alone. (Passing in a value lower than 20000 does not lower it.) There
isn't enough difference between these values for there to be any need
to configure this.
The value for disp->maxrequests controls both the quota of concurrent
requests for a dispatch and also the size of the dispatch socket
memory pool. It's not clear that this quota is necessary at all. The
memory pool size currently starts at 32768, but is sometimes lowered
to 4096, which is definitely unnecessary.
This commit sets both values permanently to 32768.
- Previously TCP dispatches allocated their own separate QID table,
  which didn't incorporate a port table. This commit removes
  per-dispatch QID tables and shares the same table between all
  dispatches. Since dispatches are created for each TCP socket, this may
  speed up the dispatch allocation process. There may be a slight
increase in lock contention since all dispatches are sharing a single
QID table, but since TCP sockets are used less often than UDP
sockets (which were already sharing a QID table), it should not be a
substantial change.
- The dispatch port table was being used to determine whether a port was
already in use; if so, then a UDP socket would be bound with
  REUSEADDR. This commit removes the port table, and always binds UDP
sockets that way.
Currently the netmgr doesn't support unconnected, shared UDP sockets, so
there's no reason to retain that functionality in the dispatcher prior
to porting to the netmgr.
In this commit, the DNS_DISPATCHATTR_EXCLUSIVE attribute has been
removed as it is now non-optional; UDP dispatches are always exclusive.
Code implementing non-exclusive UDP dispatches has been removed.
dns_dispatch_getentrysocket() now always returns the dispsocket for UDP
dispatches and the dispatch socket for TCP dispatches.
There is no longer any need to search for existing dispatches from
dns_dispatch_getudp(), so the 'mask' option has been removed, and the
function renamed to the more descriptive dns_dispatch_createudp().
- style cleanup
- removed NULL checks in places where they are not currently needed
- use isc_refcount for dispatch reference counting
- revised code flow for readability
- remove some #ifdefs that are no longer relevant
- remove unused struct members
- removed unnecessary function parameters
- use C99 struct initialization
The DNS_REQUESTOPT_SHARE flag was added when client-side pipelining of
TCP queries was implemented. There was no need to make it optional;
forcing it to be in effect for all requests simplifies the code.
- UDP buffersize is now established when creating dispatch manager
and is always set to 4096.
- Set up the default port range in dispatchmgr before setting the magic
number.
- Magic is not set until dispatchmgr is fully created.
- DNS_DISPATCHATTR_CANREUSE was never set. The code that implements it
has been removed.
- DNS_DISPATCHOPT_FIXEDID and DNS_DISPATCHATTR_FIXEDID were both
defined, but only the DISPATCHOPT was ever set; it appears the
DISPATCHATTR was added accidentally.
- DNS_DISPATCHATTR_NOLISTEN was set but never used.
Resolve#2795, #2796: implement TLS configuration options to make it possible to specify supported TLS versions and implement perfect forward secrecy for DoH and DoT
Closes#2796 and #2795
See merge request isc-projects/bind9!5444
We have to mention that every option within a "tls" clause has
defaults that are out of our control, as some platforms have means for
defining encryption policies globally for any application on the
system. In order to comply with these policies, we must not modify
TLS context settings unless the options specified within a "tls"
clause require us to do so.
This commit adds the ability to enable or disable stateless TLS
session resumption tickets (see RFC 5077). The motivation for having
this ability is twofold.
Firstly, these tickets are encrypted by the server, and the algorithm
might be weaker than the algorithm negotiated during the TLS session
establishment (it is in general the case for TLSv1.2, but the generic
principle applies to TLSv1.3 as well, despite it having better ciphers
for session tickets). Thus, they might compromise Perfect Forward
Secrecy.
Secondly, disabling them might be necessary if the same TLS key/cert
pair is supposed to be used by multiple servers to achieve, e.g., load
balancing: the session ticket key by default gets generated at
runtime, so achieving successful session resumption in this case
would have required using a shared key.
The proper alternative to having the ability to disable stateless TLS
session resumption tickets is to implement a proper session tickets
key rollover mechanism so that key rotation might be performed
often (e.g. once an hour) to not compromise forward secrecy while
retaining the associated performance benefits. That is much more work,
though. On the other hand, having the ability to disable session
tickets allows having a deployable configuration right now in the
cases when either forward secrecy is wanted or sharing the TLS
key/cert pair between multiple servers is needed (or both).
This commit adds support for enforcing the preference of server
ciphers over client ones. This way, the server attains control over
the cipher priority and thus can choose stronger ciphers when a
client prioritises weaker ciphers over stronger ones, which is
beneficial when trying to achieve Perfect Forward Secrecy.
This commit adds support for setting TLS cipher list string in the
format specified in the OpenSSL
documentation (https://www.openssl.org/docs/man1.1.1/man1/ciphers.html).
The syntax of the cipher list is verified so that specifying the wrong
string will prevent the configuration from being loaded.
This commit adds support for loading DH-parameters (Diffie-Hellman
parameters) via the new "dhparam-file" option within "tls" clause. In
particular, Diffie-Hellman parameters are needed to enable the range
of forward-secrecy-enabled ciphers for TLSv1.2, which are getting
silently disabled otherwise.
This commit adds the ability to specify allowed TLS protocols versions
within the "tls" clause. If an unsupported TLS protocol version is
specified in a file, the configuration file will not pass
verification.
Also, this commit adds strict checks for "tls" clause verification,
in particular:
- it ensures that loading configuration files containing duplicated
"tls" clauses is not allowed;
- it ensures that loading configuration files containing "tls" clauses
missing "cert-file" or "key-file" is not allowed;
- it ensures that loading configuration files containing "tls" clauses
named as "ephemeral" or "none" is not allowed.
Previously, a missing/deleted zone which was referenced by a catalog
zone caused a crash during a reload.
This commit makes `named` ignore the fact that the zone is missing,
and makes sure to restore it later on.
It was discovered that named could crash due to a segmentation fault
when jemalloc was in use and memory allocation failed. This was not
intended to happen as jemalloc's "xmalloc" option was set to "true" in
the "malloc_conf" configuration variable. However, that variable was
only set after jemalloc was already done with parsing it, which
effectively caused setting that variable to have no effect.
While investigating this issue, it was also discovered that enabling the
"xmalloc" option makes jemalloc use a slow processing path, decreasing
its performance by about 25%. [1]
Additionally, further testing (carried out after fixing the way
"malloc_conf" was set) revealed that the non-default configuration
options do not have any measurable effect on either authoritative or
recursive DNS server performance.
Replace code setting various jemalloc options to non-default values with
assertion checks of mallocx()/rallocx() return values.
[1] https://github.com/jemalloc/jemalloc/pull/523
This commit fixes a heap use-after-free when checking BIND's
configuration files for errors with http clauses. The old code
was unnecessarily copying the http element name and freeing
it too early. The name is now used directly.
zone.c:integrity_checks() acquires a read lock while iterating the
zone database, and calls zone_check_mx() which acquires another
read lock. If another thread tries to acquire a write lock in the
meantime, it can deadlock. Calling dns_dbiterator_pause() to release
the first read lock prevents this.
check for type "master" / "slave" at the same time as checking
for "primary" / "secondary" as we step through the maps.
Checking "primary" then "master" or "master" then "primary" does
not work as the synomym is not checked for to stop the search.
Similarly with "secondary" and "slave".
On TCPDNS/TLSDNS read callback, the socket buffer could be reallocated
if the received contents would be larger than the buffer. The existing
code would not preserve the contents of the existing buffer, which led
to the loss of already received data.
This commit replaces the isc_mem_put()+isc_mem_get() calls with
isc_mem_reget() to preserve the existing contents of the socket buffer.
The netmgr has an internal cache for freed active handles. This cache
was allocated using the isc_mem_allocate()/isc_mem_free() API because
it was simpler to reallocate the cache when we needed to grow it. The
new isc_mem_reget() function can be used here, reducing the need to use
the isc_mem_allocate() API, which is a tad slower than the
isc_mem_get() API.
Previously, we could not use isc_mem_reallocate() for growing the
buffer dynamically, because the memory was allocated using the
isc_mem_get()/isc_mem_put() API. With the introduction of the
isc_mem_reget() function, we can grow/shrink the memory directly
without always moving it around, as the allocator might have reserved
some extra space after the initial allocation.
Previously, zero-sized allocations would return a NULL pointer, and
the caller had to make sure not to dereference such a pointer. The C
standard defines zero-sized calls to malloc() as implementation
specific, and jemalloc's mallocx() with zero size is undefined
behaviour. This complicated the code, as it had to handle such cases
in a special manner in all allocator and deallocator functions.
For realloc(), the situation is even more complicated: in C standards
up to C11 the behavior is implementation defined, and some
implementations free the original pointer while others do not. C17
(via DR400) deprecates such usage, and since C23 the behaviour is
undefined.
This commit changes the helper mem_get(), mem_put() and mem_realloc()
functions to grow zero-sized allocations from 0 to sizeof(void *).
This way we get the predictable behaviour that all allocations always
return a valid pointer.
The isc_mem_get() and isc_mem_put() functions leave the memory
allocation size tracking to the users of the API, while
isc_mem_allocate() and isc_mem_free() track the sizes internally.
This allowed isc_mem_reallocate() to manipulate memory allocations
made by the latter set of functions, but not the former.
This commit introduces the isc_mem_reget(ctx, old_ptr, old_size,
new_size) function, which operates on memory allocations with external
size tracking, completing the API.
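A usage sketch following the signature given above (the surrounding
buffer management is hypothetical):

unsigned char *buf = isc_mem_get(mctx, 512);
/* ... 512 bytes turn out to be too small ... */
buf = isc_mem_reget(mctx, buf, 512, 1024); /* contents preserved */
/* ... */
isc_mem_put(mctx, buf, 1024);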
Previously, the Makefiles for mysql and mysqldyn DLZ modules were
generated from autoconf to get CFLAGS and LIBS for MariaDB or MySQL
libraries. The static Makefiles use a simpler method by calling
`mysql_config` directly from the Makefile.
The old-style DLZ drivers were already marked as no longer actively
maintained and expected to be removed eventually. With the new automake
build system, the old-style DLZ drivers were not updated, and instead of
putting an effort into something that's not being maintained, let's
rather remove the unmaintained code.
Closes: #2814
The map masterfile-format is very fragile, and it needs an API bump
every time the RBTDB data structures change. Also, while testing it, we
found out that files larger than 2GB weren't loading and nobody
noticed, and loading many map files was also failing (subject to
kernel limits).
Thus we are marking the masterfile-format type 'map' as deprecated and
to be removed in the next stable BIND 9 release.
The Debian 10 (buster) Docker image, which GitLab CI uses for building
documentation, currently contains the following package versions:
- Sphinx 4.2.0
- sphinx-rtd-theme 1.0.0
- docutils 0.17.1
Regenerate the man pages to match contents produced in a Sphinx
environment using the above package versions. This is necessary to
prevent the "docs" GitLab CI job from failing.
"cache-file" was already documented as intended for testing
purposes only and not to be used, so we can remove it without
waiting. this commit marks the option as "ancient", and
removes all the documentation and implementing code, including
dns_cache_setfilename() and dns_cache_dump().
it also removes the documentation for the '-x cachefile`
parameter to named, which had already been removed, but the man
page was not updated at the time.
Address the following warnings reported by PyLint 2.10.2:
************* Module conf
doc/arm/conf.py:90:10: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/arm/conf.py:92:12: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/arm/conf.py:93:9: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/arm/conf.py:143:31: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/man/conf.py:33:10: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/man/conf.py:38:12: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
doc/man/conf.py:39:9: W1406: The u prefix for strings is no longer necessary in Python >=3.0 (redundant-u-string-prefix)
Address the following warnings reported by PyLint 2.10.2:
************* Module tests-checkds
bin/tests/system/checkds/tests-checkds.py:70:9: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
bin/tests/system/checkds/tests-checkds.py:120:13: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
bin/tests/system/checkds/tests-checkds.py:206:17: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
************* Module yamlget
bin/tests/system/digdelv/yamlget.py:22:5: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
************* Module stress_http_quota
bin/tests/system/doth/stress_http_quota.py:131:13: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
************* Module tests-rpz-passthru-logging
bin/tests/system/rpzextra/tests-rpz-passthru-logging.py:40:9: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
bin/tests/system/rpzextra/tests-rpz-passthru-logging.py:44:9: W1514: Using open without explicitly specifying an encoding (unspecified-encoding)
Add an item to the release checklist to make sure regression tests
reproducing publicly disclosed security issues are eventually merged
into each maintained branch.
Discourage using a single source port in general, and document that
the source port cannot be the same as the listening port. This applies
to
query-source, transfer-source, notify-source, parental-source, and their
respective IPv6 counterparts.
- when transfer-source(-v6), query-source(-v6), notify-source(-v6)
or parental-source(-v6) are specified with a port number, issue a
warning.
- when the port specified is the same as the DNS listener port (i.e.,
53, or whatever was specified as "port" in "options"), issue a fatal
error.
- check that "port" is in range. (previously this was only checked
by named, not by named-checkconf.)
- added checkconf tests.
- incidental fix: removed dead code in check.c:bind9_check_namedconf().
(note: if the DNS port is specified on the command line with "named -p",
that is not conveyed to libbind9, so these checks will not take it into
account.)
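For illustration, a configuration along these lines (addresses and
ports hypothetical) would now trigger the new checks:

options {
        port 53;
        /* warning: an explicit source port is discouraged */
        query-source address 10.53.0.1 port 5300;
        /* fatal error: source port matches the DNS listener port */
        /* transfer-source 10.53.0.1 port 53; */
};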
The ns3->ns2 forwarding is now done using the IPv6 addresses, so we also
test that the query-source-v6 address is still operational after the
removal of the interface adjustment.
Previously, named would run with a configuration
where *-source-v6 (notify-source-v6, transfer-source-v6 and
query-source-v6) address and port could be simultaneously used for
listening. This is no longer true for BIND 9.16+ and the code that
would do interface adjustments would unexpectedly disable listening on
TCP for such interfaces.
This commit removes the code that would adjust listening interfaces
for addresses/ports configured in *-source-v6 option.
Until we have a system test that would directly test the engine_pkcs11
integration, we need to disable the system tests that enabled native
PKCS#11 in the CI because it's currently broken.
The native PKCS#11 support has been removed in favour of the better
maintained, more performant, and easier-to-use OpenSSL PKCS#11 engine
from the OpenSC project.
when "checking lame server clients are dropped below the hard limit",
periodically a query is sent for a name for which the server is
authoritative, to verify that legitimate queries can still be
processed while the server is dealing with a flood of lame delegation
queries. those queries used the same dig options as elsewhere in the
fetchlimit test, including "+tries=1 +timeout=1". on slow systems, a
1-second timeout may be insufficient to get an answer even if the server
is behaving well. this commit increases the timeout for the check
queries to 2 seconds in hopes that will be enough to eliminate test
failures in CI.
Document that the interval on new RRSIG records is randomly
chosen between the limits specified by sig-validity-interval.
Document the operations performed when this occurs.
- fixed a size comparison using "signed int" that failed if the file
size was more than 2GB, since that was treated as a negative number.
- incidentally renamed deserialize32() to just deserialize(); we no
  longer have separate 32-bit and 64-bit rbtdb implementations.
Dependencies regression: Re-enable some common TLS-related options for non-DoH builds, making DoT usable in them
See merge request isc-projects/bind9!5377
This commit fixes a regression introduced at
ea80bcc41c. Some options that are
common to both DoH and DoT were mistakenly disabled for non-DoH
builds. That was a mistake, because DoH does not imply DoT and vice
versa. Not fixing this would make DoT functionality inaccessible
without DoH.
This commit modifies the MTU of the loopback interface on
Linux systems to 1500, so that oversized UDP packets can
trigger EMSGSIZE errors, and tests that named handles
such errors correctly.
Note that the loopback MTU size has not yet been modified
for other platforms.
This commit ensures that DoH (and DoT) functionality also works well
via IPv6.
The changes were made because it turned out that dig could not make
DoH queries against an IPv6 address. These tests ensure that such a
bug will not remain unnoticed.
The commit also increases the servers' startup timeout to 25 seconds
because the initial timeout of 14 seconds was too short to generate
(!) eight 4096 bit ephemeral RSA certificates on a heavily loaded CI
runner in some pipeline runs.
This commit replaces ad-hoc code for DoH connect URI construction with
isc_nm_http_makeuri(), making it handle IPv6 addresses properly (among
other things).
This commit adds a new function, isc_nm_http_makeuri(), which is
supposed
to unify DoH URI construction throughout the codebase.
It handles IPv6 addresses, hostnames, and IPv6 addresses given as
hostnames properly, and replaces similar ad-hoc code in the codebase.
- removed unused functions
- changed some public functions to static that are never called
from outside client.c
- removed unused types and function prototypes
- renamed dns_client_destroy() to dns_client_detach()
Previous versions of BIND 9 exported their internal libraries so that
they could be used by third-party applications more easily. Certain
library functions were altered from specific BIND-only behavior to more
generic behavior when used by other applications.
This commit removes the function isc_lib_register() that was used by
external applications to enable the functionality.
Bump the map zonefile version number to avoid an assertion
failure when loading map files from versions of BIND prior to
the most recent change to the in-memory structure of zone
databases.
The test server now has tcp-idle-timeout set to 5 seconds and
tcp-keepalive-timeout set to 7, so queries that follow a 6-second sleep
should either succeed or fail depending on whether the keepalive option
was sent.
This commit removes isc__nm_tcpdns_keepalive() and
isc__nm_tlsdns_keepalive(); keepalive for these protocols and
for TCP will now be set directly from isc_nmhandle_keepalive().
Protocols that have an underlying TCP socket (i.e., TLS stream
and HTTP) now have protocol-specific routines, called by
isc_nmhandle_keepalive(), to set the keepalive value on the
underlying socket.
Previously, receiving a keepalive option had no effect on how
long named would keep the connection open; there was a place to
configure the keepalive timeout but it was never used. This commit
corrects that.
This also fixes an error in isc__nm_{tcp,tls}dns_keepalive()
in which the sense of a REQUIRE test was reversed; previously this
error had not been noticed because the functions were not being
used.
- fix some duplicated and out-of-order prototypes declared in
netmgr-int.h
- rename isc_nm_tcpdns_keepalive to isc__nm_tcpdns_keepalive as
it's for internal use
The removed function 'newchain(a, b)' was almost the same as calling
!chain_equal(a, b), varying only in the amount of data compared
in the non-fixed-length data portion of the given chain nodes.
A third argument 'data_size' has been introduced into 'chain_equal'
function in order to allow it to know how many bytes to compare in the
variable-length data portion of the chain nodes.
A helper function 'chain_length(e)' has been introduced to allow
easy calculation of the total length of the non-fixed-length data part
of chain nodes.
Check the thread below for more details:
https://gitlab.isc.org/isc-projects/bind9/-/merge_requests/291#note_12184
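A self-contained sketch of the resulting helpers (the node layout here
is illustrative, not the real rbtdb chain type); the removed
newchain(a, b) then corresponds to !chain_equal(a, b, len) with a
smaller 'len':

#include <stdbool.h>
#include <string.h>

typedef struct chain {
        unsigned int level;     /* fixed-length portion (example) */
        size_t datalen;         /* length of the variable part */
        unsigned char data[64]; /* variable-length portion */
} chain_t;

/* chain_length(e): total length of the variable-length data part. */
static size_t
chain_length(const chain_t *e) {
        return (e->datalen);
}

/* chain_equal(a, b, data_size): compare the fixed-length fields plus
 * 'data_size' bytes of the variable-length data portion. */
static bool
chain_equal(const chain_t *a, const chain_t *b, size_t data_size) {
        return (a->level == b->level &&
                memcmp(a->data, b->data, data_size) == 0);
}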
This commit changes the DoH code in such a way that it makes no
assumptions regarding which headers are expected to be processed
first. In particular, the code expected the :method: pseudo-header to
be processed early, which might not be true.
Add a statschannel test case to confirm that when keys are removed
(in this case because of a dnssec-policy change), the corresponding
dnssec-sign stats are cleared and are no longer shown in the
statistics.
Clear the key slots for dnssec-sign statistics for keys that are
removed. This way, the number of slots will stabilize to the maximum
key usage in a zone and will not grow every time a key rollover is
triggered.
Add a test case that has more than four keys (the initial number of
key slots that are created for dnssec-sign statistics). We shouldn't
be expecting weird values.
This fixes some errors in the manykeys zone configuration (keys
were created for algorithm RSASHA256, but the policy expected RSASHA1,
and the zone was not allowing dynamic updates).
This also fixes an error in the calls to 'zones-json.pl': the perl
script expects an index number where the zone can be found, rather
than the zone name.
We have introduced dnssec-sign statistics to the zone statistics. This
introduced an operational issue because when using zone-statistics
full, the memory usage was going through the roof. We fixed this by
allocating just four key slots per zone. If a zone exceeds the
number of keys for example through a key rollover, the keys will be
rotated out on a FIFO basis.
This works for most cases, and fixes the immediate problem of high
memory usage, but if you sign your zone with many, many keys, or
sign with a ZSK/KSK double-algorithm strategy, you may experience weird
statistics. A better strategy is to grow the number of key slots per
zone on key rollover events.
That is what this commit is doing: instead of rotating the four slots
to track sign statistics, named now grows the number of key slots
during a key rollover (or via some other method that introduces new
keys).
Add a new function to resize the number of counters in a statistics
counter structure. This will be needed when we keep track of DNSSEC
sign statistics and new keys are introduced due to a rollover.
Add a simple stats unit test that tests the existing library functions
isc_stats_ncounters, isc_stats_increment, isc_stats_decrement,
isc_stats_set, and isc_stats_update_if_greater.
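A hedged sketch of what such a test might exercise (the argument
orders shown are assumptions based on the function names above; INSIST
comes from <isc/util.h>):

isc_stats_t *stats = NULL;

/* assumed signature: (mctx, &stats, ncounters) */
isc_stats_create(mctx, &stats, 4);
INSIST(isc_stats_ncounters(stats) == 4);

isc_stats_increment(stats, 0);              /* counter 0 -> 1 */
isc_stats_decrement(stats, 0);              /* counter 0 -> 0 */
isc_stats_set(stats, 100, 1);               /* counter 1 = 100 */
isc_stats_update_if_greater(stats, 1, 200); /* counter 1 -> 200 */
isc_stats_detach(&stats);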
After a reload, if the zone hasn't changed, this will log a
DNS_R_UNCHANGED error. This should not be at error level because it
happens on every reload.
Add a test case for migrating CSK to dnssec-policy. The keymgr has no
way of telling that the key is used as a CSK, but if there is only one
key to migrate it is going to assume it must be a CSK.
Previously, when dnssec-cds copied CDS records to make DS records,
its -a algorithm option did not have any effect. This means that if
the child zone is signed with older software that generates SHA-1 CDS
records, dnssec-cds would (by default) create SHA-1 DS records in
violation of RFC 8624.
This change makes the dnssec-cds -a option apply to CDS records as
well as CDNSKEY records. In the CDS case, the -a algorithms are the
acceptable subset of possible CDS algorithms. If none of the CDS
records are acceptable, dnssec-cds tries to generate DS records from
CDNSKEY records.
Instead of disabling fragmentation on the UDP sockets, we now disable
Path MTU Discovery by setting the IP(V6)_MTU_DISCOVER socket option to
IP_PMTUDISC_OMIT on Linux and by disabling the IP(V6)_DONTFRAG socket
option on FreeBSD. This option sets DF=0 in the IP header and also
ignores Path MTU Discovery.
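The corresponding socket options can be sketched as follows (constants
as documented in ip(7) on Linux and ip(4) on FreeBSD; the IPV6_*
equivalents work the same way; error handling omitted):

#include <netinet/in.h>
#include <sys/socket.h>

static void
udp_disable_pmtud(int fd) {
#if defined(IP_MTU_DISCOVER) && defined(IP_PMTUDISC_OMIT)
        /* Linux: set DF=0 and ignore the discovered path MTU. */
        int action = IP_PMTUDISC_OMIT;
        (void)setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &action,
                         sizeof(action));
#elif defined(IP_DONTFRAG)
        /* FreeBSD: clear the don't-fragment behaviour instead. */
        int off = 0;
        (void)setsockopt(fd, IPPROTO_IP, IP_DONTFRAG, &off, sizeof(off));
#endif
}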
As additional mitigation on Linux, we recommend setting
net.ipv4.ip_no_pmtu_disc to Mode 3:
Mode 3 is a hardend pmtu discover mode. The kernel will only accept
fragmentation-needed errors if the underlying protocol can verify
them besides a plain socket lookup. Current protocols for which pmtu
events will be honored are TCP, SCTP and DCCP as they verify
e.g. the sequence number or the association. This mode should not be
enabled globally but is only intended to secure e.g. name servers in
namespaces where TCP path mtu must still work but path MTU
information of other protocols should be discarded. If enabled
globally this mode could break other protocols.
The client->rcode_override was originally created to force the server
to send SERVFAIL in some cases when it would normally have sent FORMERR.
More recently, it was used in a3ba95116e
commit (part of GL #2790) to force the sending of a TC=1 NOERROR
response, triggering a retry via TCP, when a UDP packet could not be
sent due to ISC_R_MAXSIZE.
This ran afoul of a pre-existing INSIST in ns_client_error() when
RRL was in use. The INSIST was based on the assumption that
ns_client_error() could never result in a non-error rcode. As
that assumption is no longer valid, the INSIST has been removed.
The additional processing method has been expanded to take the
owner name of the record, as HTTPS and SVCB need it to process "."
in service form.
The additional section callback can now return the RRset that was
added. We use this when adding CNAMEs. Previously, the recursion
would stop if it detected that a record you added already exists. With
CNAMEs this rule doesn't work, as you ultimately care about the RRset
at the target of the CNAME and not the presence of the CNAME itself.
Returning the record allows the caller to restart with the target
name. As CNAMEs can form loops, loop protection was added.
As HTTPS and SVCB can produce infinite chains, we prevent this by
tracking recursion depth and stopping if we go too deep.
Add a test case for GL #2845 where a zone is in two views, one base
view and one "in-view" and that zone is using an $INCLUDE. Make sure
that there is a jnl file (have ixfr-from-differences enabled and do a
dynamic update). Then freeze and make updates in the included file
(this requires the test.db file also to be updated because 'rndc freeze'
causes the zone file to be overwritten). Finally reload and ensure that
the edit in the included file has been loaded.
string.endswith("label.sequence") doesn't check for the implict
period before "label.sequence" when matching longer strings.
"foo.label.sequence" should match but "foolabel.sequence shouldn't".
When looking up a zonecut in cache, we use 'dns_rbt_findnode' to find
the closest matching node. This function however does not take into
account stale nodes. When we do find a stale node and use it, this
has implications for subsequent lookups. For example, this may break
QNAME minimization because we are using a deeper zonecut than we should
have.
Check the header for staleness; if it is stale and stale entries are
not accepted, look for the deepest zonecut from this node up.
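A self-contained sketch of the upward walk (node_t and its flags are
illustrative stand-ins for the rbtdb internals):

#include <stdbool.h>
#include <stddef.h>

typedef struct node node_t;
struct node {
        node_t *parent;
        bool stale;      /* header marked stale */
        bool is_zonecut; /* node holds NS data */
};

/* Walk from the found node toward the root until a usable zonecut
 * (non-stale, or stale but accepted) is found. */
static node_t *
deepest_usable_zonecut(node_t *n, bool stale_ok) {
        while (n != NULL) {
                if (n->is_zonecut && (stale_ok || !n->stale)) {
                        return (n);
                }
                n = n->parent;
        }
        return (NULL);
}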
There are some occurrences where we check if a header exists in the
rbtdb. These cases require that the header is also not marked as
ancient (aka ready for cleanup). These cases involve finding certain
data in cache.
Add test cases for GL #2665: The QNAME minimization (if enabled) should
also occur on the second query, after the RRsets have expired from
cache. BIND will still have the entries in cache, but marked stale.
These stale entries should not prevent the resolver from minimizing
the QNAME. We query for the test domain a.b.stale. in all cases (QNAME
minimization off, strict mode, and relaxed mode) and expect it to
behave the same the second time we have a stale delegation structure in
cache.
This commit fixes doh_recv_send(), which would occasionally fail
because it did not wait for all responses to be sent, making the
check of the ssends value not pass.
This commit changes TLS stream behaviour in such a way that it is now
optimised for small writes. When there is a need to write 512 bytes
or less, we can avoid calling the memory allocator at the expense of a
possibly slight increase in memory usage. In the case of larger
writes, the behaviour remains unchanged.
At this point, memory copying is not required. It was probably a
workaround for some problem in the earlier days of DoH; now it appears
to be a waste of CPU cycles.
This commit significantly simplifies the code in http_send_outgoing(),
which was unnecessarily complicated: it was dealing with multiple
statically and dynamically allocated buffers, making it extremely
hard to follow, and it performed unnecessary memory copying in some
situations. This commit fixes these issues while retaining the
high-level buffering logic.
When an HTTP/2 client terminates a session, it means that it is about
to close the underlying connection. However, we were not doing that.
As a result, with the latest changes to the test suite, which limit
the number of requests per transport connection, the tests using
quota would hang for quite a while. This commit fixes that.
This commit ensures that only a limited number of requests is going to
be sent over a single HTTP/2 connection. Before that change was
introduced, it was possible to complete all of the planned sends via
only one transport connection, which undermines the purpose of the
tests using the quota facility.
The function should not be called here because it is, in general,
supposed to be called at the end of the transport level callbacks to
perform I/O, and thus, calling it here is clearly a mistake because it
breaks other code expectations. As a result of the call to
http_do_bio() from within isc__nm_http_request() the unit tests were
running slower than expected in some situations.
In this particular situation http_do_bio() is going to be called at
the end of the transport_connect_cb() (initially), or http_readcb(),
sending all of the scheduled requests at once.
This change affects only the test suite because it is the only place
in the codebase where isc__nm_http_request() is used in order to
ensure that the server is able to handle multiple HTTP/2 streams at
once.
This commit fixes a crash in DoH caused by the transport handle being
detached too early when sending outgoing data.
We need to attach to the session->handle earlier because, as an
indirect result of nghttp2_session_mem_send(), the session might
get closed and the handle detached. However, there still might be
some outgoing data to handle. Besides, even when the underlying socket
was closed via the handle, we should still attempt to send outgoing
data via isc_nm_send() to let it call the write callback passed to
http_send_outgoing().
This commit gets rid of custom code taking care of response buffering
by replacing the custom code with isc_buffer_t. Also, it gets rid of
unnecessary memory copying when sending a response.
This commit replaces the ad-hoc 64K buffer for incoming POST data with
an isc_buffer_t backed by a dynamically allocated buffer sized
according to the value in the "Content-Length" header.
The commit replaces an ad-hoc incoming DNS-message buffer in the
client-side DoH code with isc_buffer_t.
The commit also fixes a timing issue in the unit tests revealed by the
change.
This commit replaces a static ad-hoc HTTP/2 session temporary buffer
with a realloc-able isc_buffer_t object, which is allocated on an
as-needed basis, lowering the memory consumption somewhat. The buffer
is needed in very rare cases, so allocating it prematurely is not
wise.
Also, it fixes a bug in http_readcb() where the ad-hoc buffer was
improperly used, leading to a situation where data from the receive
regions could be processed twice, while unprocessed data would never
be processed.
When copying metadata from one dst_key to another, if the source
dst_key had a boolean metadata field unset, the destination dst_key
would have a numeric metadata field unset instead.
This means that if a key has KSK or ZSK unset, we may be clearing the
Predecessor or Successor metadata in the destination dst_key.
Add a test case to the dnssec system test to check that:
- a zone with a prepublished key is only signed with the active key.
- a zone with an inactive key but valid signatures retains those
signatures and does not add signatures from successor key.
- signatures are swapped in a zone when signatures of predecessor
inactive key are within the refresh interval.
When signing with a ZSK, check if it has a predecessor. If so, and if
the predecessor key is sane (same algorithm, key id matches predecessor
value, is zsk), check if the RRset is signed with this key. If so, skip
signing with this successor key. Otherwise, do sign with the successor
key.
This change means we also need to apply the interval to keys that are
not actively signing. In other words, 'expired' is always
'isc_serial_gt(now + cycle, rrsig.timeexpire)'.
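A hedged sketch of the skip-signing decision (the predicates are
illustrative; the 'expired' expression follows the commit text, with
isc_serial_gt() approximated by serial-number arithmetic):

#include <stdbool.h>
#include <stdint.h>

/* Should signing this RRset with the successor ZSK be skipped? */
static bool
skip_successor_signing(bool predecessor_sane, bool signed_by_pred,
                       uint32_t now, uint32_t cycle,
                       uint32_t sig_timeexpire) {
        /* expired == isc_serial_gt(now + cycle, rrsig.timeexpire) */
        bool expired = (int32_t)((now + cycle) - sig_timeexpire) > 0;
        return (predecessor_sane && signed_by_pred && !expired);
}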
Fix a print style issue ("removing signature by ..." was untabbed).
In the "Migrating from NSEC to NSEC3" section, it says:
dnssec-policy "standard" {
nsec3param iterations optout no salt-length 16;
};
There should be an integer after "iterations". Based on the following
text, the number of iterations should be 10.
This commit gets rid of RW locks in a hot path of the DoH code. In the
original design, it was implied that we add new endpoints after the
HTTP listener was created. Such a design implies some locking. We do
not need such flexibility, though. Instead, we could build a set of
endpoints before the HTTP listener gets created. Such a design does
not need RW locks at all.
This commit increases the idle TCP timeout to let the DoH quota system
test pass on some platforms (namely FreeBSD 11). It turned out to run
slow enough on the CI under load for the idle TCP timeout to kick in.
This commit refactors the DoH quota system test to make it more
reliable.
The test tries to establish dummy TCP connections to stress the quota
one by one instead of in bulk until the BIND instance cannot answer
queries anymore. This design is better because the test itself does
not need to be aware of the actual quota size.
respdiff needs to be run regularly to identify problems with query
response discrepancies sooner than after tagging a release.
The MAX_DISAGREEMENTS_PERCENTAGE variable is set to 0.5 on the main
branch to make room for a greater number of response disagreements
between a relatively old baseline version and the Development Version.
On an isc_mem water change, the old water_t structure could be used
after free. Instead of introducing reference counting on the hot path,
we introduce an additional constraint on isc_mem_setwater(): once it
is set for the first time, any additional calls have to be made with
the same water and water_arg arguments.
Increasing the nodelock count had a major impact on the memory
footprint in scenarios where multiple rbtdb structures would be
created, such as hosting many zones in a single server.
This reverts commit 0344684385 and sets
the nodelock count to previously used values.
Since the forced removal of gcc:sid:i386 in 0aacabc6, we lacked a 32-bit
environment to build and test BIND 9 in the CI. gcc:buster:amd64cross32
adds an environment to cross-compile BIND 9 to 32 bits on a Debian
Buster amd64 image with 32-bit BIND 9 dependencies. The commit also
adds sanity checks to ensure that compiled objects are not of the
build platform
triplet type.
The support for stat.pl's --restart option was incomplete in run.sh.
This change makes sure it's handled properly and that the named.run
file is not removed by clean.sh when the --restart option is used.
When named failed to start and produced a core dump, the core file
wasn't processed by GDB because the run.sh script exited immediately.
This remedies the limitation, simplifies the surrounding code, and
makes the script shellcheck-clean.
Using an anchor lets the user see the full command logged in GitLab
CI:
${CONFIGURE} --disable-maintainer-mode --enable-developer ...
instead of a folded multi-line command when a literal block is used:
${CONFIGURE} \ # collapsed multi-line command
Make DoH-quota separate and configurable, make it possible to limit the number of HTTP/2 streams per connection
See merge request isc-projects/bind9!5036
The system tests stress out the DoH quota by opening many TCP
connections and then running dig instances against the "overloaded"
server to perform some queries. The processes cannot make any
resolutions because the quota is exceeded. Then the opened connections
are getting closed in random order allowing the queries to proceed.
This commit makes the number of concurrent HTTP/2 streams per
connection configurable as a means of fighting DDoS attacks. As soon
as the limit is reached, BIND terminates the whole session.
The commit adds a global configuration
option (http-streams-per-connection) which can be overridden in an
http <name> {...} statement as follows:
http local-http-server {
...
streams-per-connection 100;
...
};
For now the default value is 100, which should be enough (e.g. NGINX
uses 128, but it is a full-featured web server). When using lower
numbers (e.g. ~70), it is possible to hit the limit with
e.g. flamethrower.
This commit adds support for the http-listener-clients global option,
as well as the ability to override the default in an HTTP server
description, like:
http local-http-server {
...
listener-clients 100;
...
};
This way we have the ability to specify a per-listener active
connections quota globally and then override it when required. This is
exactly what AT&T requested from us: they wanted the ability to
specify the quota globally and then override it for specific IPs.
This change makes such a configuration possible.
It makes sense: for example, one could have different quotas for
internal and external clients. Or, for example, one could use BIND's
internal ability to serve encrypted DoH with some sane quota value for
internal clients, while having un-encrypted DoH listener without quota
to put BIND behind a load balancer doing TLS offloading for external
clients.
Moreover, the code no longer shares the quota with TCP, which made
little sense anyway (see the tcp-clients option) because of the way
DoH clients interact: they tend to keep idle open connections for
longer periods of time, preventing TCP and TLS clients from being
served. Hence the need for a separate, generally larger, quota for
them.
Also, the change makes any option within "http <name> { ... };"
statement optional, making it easier to override only required default
options.
By default, the DoH connections are limited to 300 per listener. I
hope that it is a good initial guesstimate.
This commit adds the code (and some tests) which allows verifying the
validity of HTTP paths, both in incoming HTTP requests and in BIND's
configuration file.
Extend the "chain" system test with AUTHORITY section checks for signed,
secure delegations. This complements the checks for signed, insecure
delegations added by commit 26ec4b9a89.
Extend the existing AUTHORITY section checks for signed, insecure
delegations to ensure nonexistence of DS RRsets in such responses.
Adjust comments accordingly.
Ensure dig failures cause the "chain" system test to fail.
It has been noticed that commit 7a87bf468b
not only fixed NSEC record handling in signed, insecure delegations
prepared using both wildcard expansion and CNAME chaining - it also
inadvertently fixed DS record handling in signed, secure delegations
of that flavor. This is because the 'rdataset' variable in the relevant
location in query_addds() can be either a DS RRset or an NSEC RRset.
Update a code comment in query_addds() to avoid confusion.
Update the comments describing the purpose of query_addds() so that they
also mention NSEC(3) records.
If we have a CDS or CDNSKEY we at least need to have a DNSKEY with the
same algorithm published and signing the CDS RRset. The same goes for
CDNSKEY, of course.
This relaxes the zone_cdscheck function: before, the CDS or CDNSKEY
had to match a DNSKEY; now only the algorithm has to match.
This allows a provider in a multisigner model to update the CDS/CDNSKEY
RRset in the zone that is served by the other provider.
Add tests to the nsupdate system test to make sure that CDS and/or
CDNSKEY that match an algorithm in the DNSKEY RRset are allowed. Also
add tests that updates are rejected if the algorithm does not match.
Remove the now redundant test cases from the dnssec system test.
Update the checkzone system test: Change the algorithm of the CDS and
CDNSKEY records so that the zone is still rejected.
An unhandled code path left GET query string data uninitialised (equal
to NULL) and led to a crash during the request's base64 data
decoding. This commit fixes that.
As we don't set the thread affinity, the CPU test would consistently
fail. Disable it, but don't remove it, as we might restore setting the
affinity in future versions of BIND 9.
It was discovered that setting the thread affinity on both the netmgr
and netthread threads led to inconsistent recursive performance,
because sometimes the netmgr and netthread threads would compete over
a single resource and sometimes not.
Removing the affinity setting causes a slight dip in authoritative
performance of around 5% (the measured range was from 3.8% to 7.8%),
but the recursive performance is now consistently good.
On OpenBSD and more generally on platforms without either jemalloc or
malloc_(usable_)size, we need to increase the alignment for the memory
to sizeof(max_align_t), as with plain sizeof(void *) the compiled code
would crash when accessing the returned memory.
It was discovered that on some platforms (e.g. Alpine Linux with musl)
the result of the isc_os_ncpus() call differs when called before and
after we drop privileges. This commit changes isc_os_ncpus() to cache
the result from the first call and thus always return the same value
during the runtime of named. The first call to isc_os_ncpus() is made
as soon as possible, on library initialization.
The isc_mem_get(), isc_mem_allocate() and isc_mem_reallocate()
functions can return a NULL pointer when the allocation size is zero.
Remove the nonnull attribute from the functions' declarations.
This stems from the following definition in the C11 standard:
> If the size of the space requested is zero, the behavior is
> implementation-defined: either a null pointer is returned, or the
> behavior is as if the size were some nonzero value, except that the
> returned pointer shall not be used to access an object.
In this case, we return NULL, as it's easier to detect errors when
accessing a pointer from a zero-sized allocation, which should
obviously never happen.
In the rallocx() shim for OpenBSD (that's the only platform that doesn't
have malloc_size() or malloc_usable_size() equivalent), the newly
allocated size was missing the extra size_t member for storing the
allocation size, leading to a size_t-sized overflow at the end of the
reallocated memory chunk.
Coverity reported an untrusted allocation size (CID 331088,
TAINTED_SCALAR, #3 of 3) in the journal code: journal_read_xhdr()
taints xhdr.size, which is assigned to size and then passed to
isc_mem_get() as an allocation size:

    result = journal_read_xhdr(j1, &xhdr);
    if (rewrite && result == ISC_R_NOMORE) {
            break;
    }
    CHECK(result);

    size = xhdr.size;
    buf = isc_mem_get(mctx, size);

Ensure that tainted values are properly sanitized by checking that
their values are within a permissible range.
In the jemalloc merge request, we missed the fact that ah_frees and
ah_handles are reallocated, which is not compatible with using
isc_mem_get() for allocation and isc_mem_put() for deallocation. This
commit reverts that part and restores the use of isc_mem_allocate()
and isc_mem_free().
The proper way to disable the water limit in an isc_mem context is
to call:
isc_mem_setwater(ctx, NULL, NULL, 0, 0);
This ensures that the old water callback is called with ISC_MEM_LOWATER
if the callback was previously called with ISC_MEM_HIWATER.
Historically, there were some places where the limits were disabled by
calling:
isc_mem_setwater(ctx, water, water_arg, 0, 0);
which would also call the old callback, but additionally causes the
water_t to be allocated and an extra check to be executed, because the
water callback is not NULL.
This commit unifies the calls that disable water to the preferred form.
The Address and Thread Sanitizers both intercept the malloc calls and
using the extended jemalloc API interferes with that. This commit
disables the use of jemalloc for both ASAN and TSAN enabled builds to
eliminate both false positives and false negatives.
Previously, isc_mem_allocate() and isc_mem_free() were used for the
isc_mem_total test, but since we now use the real allocation
size (sallocx, malloc_size, malloc_usable_size) to track allocations,
it's impossible to get the test value right. Changing the test to use
isc_mem_get() and isc_mem_put(), which use the exact size provided,
makes the test work again on all platforms, even when jemalloc is not
being used.
It was discovered that softhsm2.4 has a bug that causes an invalid
free() to be called when unloading the libsofthsm.so.2 library. As the
native PKCS#11 API is scheduled to be removed in the 9.17+ releases,
we can safely just disable jemalloc for this particular build.
This commit refactors the water mechanism in the isc_mem API to use a
single pointer to a water_t structure that can be swapped with an
atomic_exchange operation, instead of keeping four separate
values (water, water_arg, hi_water, lo_water) in the flat namespace.
This reduces the need for locking and prevents a race in which water
and water_arg could become desynchronized.
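A rough sketch of the refactoring (type and field names are assumed):

    #include <stdatomic.h>
    #include <stddef.h>

    typedef void (*isc_mem_water_t)(void *arg, int message);

    typedef struct water {
            isc_mem_water_t water;     /* callback */
            void           *water_arg; /* callback argument */
            size_t          hi_water;
            size_t          lo_water;
    } water_t;

    /*
     * In the memory context the four separate fields become a single
     *     _Atomic(water_t *) water;
     * and installing a new configuration is one operation:
     *     water_t *old = atomic_exchange(&ctx->water, new);
     * so the callback and its argument can never be observed in a
     * mixed old/new state.
     */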
Calls to the jemalloc extended API with size == 0 end up in undefined
behaviour. This commit makes isc_mem_get() and friends more
POSIX-aligned:
If size is 0, either a null pointer or a unique pointer that can be
successfully passed to free() shall be returned.
We picked the easier route (which was already supported in the old
code) and return NULL on calls to the API where size == 0.
This commit adds support for systems where the jemalloc library is not
available as a package; here's a quick summary:
* On Linux - jemalloc is usually available as a package; if
configured --without-jemalloc, a shim is used around
malloc(), free(), realloc() and malloc_usable_size()
* On macOS - jemalloc is available from Homebrew or MacPorts; if
configured --without-jemalloc, a shim is used around
malloc(), free(), realloc() and malloc_size()
* On FreeBSD - jemalloc is *the* system allocator; we just need
to check for the <malloc_np.h> header to get access to the
non-standard API
* On NetBSD - jemalloc is *the* system allocator; we just need to
check for the <jemalloc/jemalloc.h> header to get access to the
non-standard API
* On a system hostile to users and developers (read: OpenBSD) - the
jemalloc API is emulated by using a ((size_t *)ptr)[-1] field to hold
the size information. The OpenBSD developers care only for
themselves, so why should we care about speed on OpenBSD?
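A sketch of that OpenBSD emulation (simplified; as noted in an earlier
commit, the real shim also has to pad the header to max_align_t for
alignment):

    #include <stdlib.h>

    static void *
    shim_alloc(size_t size) {
            size_t *p = malloc(sizeof(size_t) + size);
            if (p == NULL) {
                    return NULL;
            }
            p[0] = size;  /* remember the allocation size */
            return &p[1]; /* caller sees the memory after the header */
    }

    /* Emulates sallocx()/malloc_usable_size(): recover the size. */
    static size_t
    shim_size(void *ptr) {
            return ((size_t *)ptr)[-1];
    }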
- isc_mempool_get() can no longer fail; when there are no more objects
in the pool, more are always allocated. Checking for a NULL return is
no longer necessary.
- the isc_mempool_setmaxalloc() and isc_mempool_getmaxalloc() functions
are no longer used and have been removed.
Current mempools are kind of hybrid structures - they serve two
purposes:
1. a mempool with a lock is basically a static-sized allocator with
pre-allocated free items
2. a mempool without a lock is a doubly-linked list of preallocated
items
The first kind of usage could easily be replaced with jemalloc
small-size arena objects and thread-local caches.
The second usage not so much, and we need to keep it (in
lib/dns/message.c) for performance reasons.
Previously, we only had the capability to trace mempool gets and puts,
but for debugging it's sometimes also important to keep track of how
many memory pools get created and destroyed, and where. This commit
adds such tracking capability.
isc_mem_allocate() comes with an additional cost because of the memory
tracking. In this commit, we replace its usage with isc_mem_get();
since we track the allocated sizes anyway, it's also possible to
replace isc_mem_free() with isc_mem_put().
The jemalloc non-standard API fits nicely with our memory contexts, so
just rewrite the memory context internals to use the non-public API.
There's just one caveat - since we no longer track the size of the
allocation for the isc_mem_allocate()/isc_mem_free() combination, we
need to use sallocx() to get the real allocation size in both the
allocator and the deallocator, because otherwise the sizes would not
match.
The ISC_MEM_DEBUGSIZE and ISC_MEM_DEBUGCTX flags did sanity checks
that the size and memory context matched on the memory returned to the
allocator. Those are no longer needed now that most of the allocator
is being replaced with jemalloc.
There's a global variable called `malloc_conf` that can be used to
configure jemalloc behaviour at program startup. We use the following
configuration:
* xmalloc:true - abort-on-out-of-memory enabled.
* background_thread:true - enable internal background worker threads
to handle purging asynchronously.
* metadata_thp:auto - do not use transparent huge pages (THP) for
internal metadata initially, but jemalloc may begin to do so when
metadata usage reaches a certain level.
* dirty_decay_ms:30000 - approximate time in milliseconds from the
creation of a set of unused dirty pages until an equivalent set of
unused dirty pages is purged and/or reused.
* muzzy_decay_ms:30000 - approximate time in milliseconds from the
creation of a set of unused muzzy pages until an equivalent set of
unused muzzy pages is purged and/or reused.
More information about the specific meaning can be found in the jemalloc
manpage or online at http://jemalloc.net/jemalloc.3.html
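For illustration, jemalloc reads these options from the documented
malloc_conf global during allocator initialization; the string below
simply mirrors the list above:

    /* Compile-time jemalloc configuration. */
    const char *malloc_conf = "xmalloc:true,background_thread:true,"
                              "metadata_thp:auto,dirty_decay_ms:30000,"
                              "muzzy_decay_ms:30000";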
jemalloc is a scalable, high-performance allocator; this is the first
in a series of commits that will add jemalloc as a memory allocator
for BIND 9.
This commit adds a configure.ac check and Makefile modifications to
use jemalloc as the BIND 9 allocator.
Previously, we only had the capability to trace memory gets and puts,
but for debugging it's sometimes also important to keep track of how
many memory contexts get created and destroyed, and where. This commit
adds such tracking capability.
This commit makes BIND return HTTP status codes for malformed or
too-small requests.
The DNS request processing code would ignore such requests. Such an
approach works well for other DNS transports, but it does not make
much sense for HTTP, as it does not allow completing the
request/response sequence.
Suppose execution has reached the point where DNS message handling
code has been called. In that case, it means that the HTTP request has
been successfully processed, and, thus, we are expected to respond to
it either with a message containing some DNS payload or at least to
return an error status code. This commit ensures that BIND behaves
this way.
This error code fits better than the more generic "Internal Server
Error" (500), which implies that the problem is on the server side.
Also, do not end the whole HTTP/2 session on a bad request.
We were too strict regarding the value and presence of the "Accept"
HTTP header, slightly breaking compatibility with the specification.
According to RFC8484, a client SHOULD add an "Accept" header to its
requests but MUST be able to handle the "application/dns-message"
media type regardless of the value of the header. That basically
suggests we can ignore its value.
Besides, verifying the value of the "Accept" header is a bit tricky
because it could contain multiple media types, thus requiring proper
parsing. That is doable but does not provide us with any benefits.
Among other things, not verifying the value also fixes compatibility
with clients that advertise multiple media types as supported, which
we should accept. For example, a perfectly valid request could contain
"application/dns-message", "application/*", and "*/*" in the "Accept"
header value; previously, we would treat such a request as invalid.
This commit fixes BIND hanging when browsers end HTTP/2 streams
prematurely (for example, by sending RST_STREAM). It ensures that
isc__nmsocket_prep_destroy() is called for an HTTP/2 stream, allowing
it to be properly disposed of.
The problem was impossible to reproduce using dig or DoH benchmarking
software (e.g. flamethrower) because these do not tend to end HTTP/2
streams prematurely.
This commit adds two new autoconf options, `--enable-doh` (enabled by
default) and `--with-libnghttp2` (mandatory when DoH is enabled).
When DoH support is disabled, the library is not linked in, and support
for the http(s) protocol is disabled in the netmgr, named, and dig.
If a control channel listener was configured with more than one
key algorithm, message verification would be attempted with each
algorithm in turn. If the first key failed due to a wrong
signature length, the entire verification process was aborted,
rather than continuing on to try another key.
The isc/platform.h header was left empty after its remaining contents
were moved either to config.h or to appropriate headers; this is just
the final cleanup commit.
The last remaining defines, needed for platforms without NAME_MAX and
PATH_MAX (I'm looking at you, GNU Hurd), were moved to isc/dir.h where
they are predominantly used.
ISC_STRERRORSIZE was defined in the isc/platform.h header because the
value differed between Windows and POSIX platforms. Now that Windows
is gone, move the define to where it belongs.
The function 'private_type_record()' is now used in multiple system
setup scripts and should be moved to the common configuration script
conf.sh.common.
The old approach, where each zone structure has its own mutex and a
thread needs to obtain multiple locks to do safe keyfile I/O
operations, led to a race condition ending in a possible deadlock.
Consider a zone in two views. Each such zone is stored in a separate
zone structure. A thread that needs to read or write the key files for
this zone needs to obtain both mutexes in separate structures. If
another thread is working on the same zone in a different view, they
race to get the locks. It would be possible for thread1 to grab the
lock of the zone in view1 while thread2 wins the race for the lock
of the zone in view2. Now both threads try to get the other lock,
while both locks are already held.
Ideally, when a thread wants to do key file operations, it only needs
to lock a single mutex. This commit introduces a key management hash
table, stored in the zonemgr structure. Each time a zone is being
managed, an object is added to the hash table (and removed when the
zone is being released). This object is identified by the zone name
and contains a mutex that needs to be locked prior to reading or
writing key files.
(cherry-picked from commit ef4619366d49efd46f9fae5f75c4a67c246ba2e6)
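A rough sketch of the per-zone-name entry (names and layout are
hypothetical):

    #include <pthread.h>

    /* One entry per zone name, shared by all views of that zone and
     * stored in a hash table owned by the zonemgr. */
    typedef struct keymgmt_entry {
            pthread_mutex_t lock; /* held around key file I/O */
            unsigned int    refs; /* managed zones using this entry */
    } keymgmt_entry_t;

    /* A thread doing key file I/O looks the entry up by zone name
     * and takes this one mutex, no matter how many views contain
     * the zone, so the two-mutex deadlock can no longer occur. */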
Similar to notify, add code to send and keep track of checkds requests.
On every zone_rekey event, we will check the DS at parental agents
(but we will only actually query parental agents if there is a DS
scheduled to be published/withdrawn).
On a zone_rekey event, we will first clear the ongoing checkds
requests. Reset the counter to avoid prematurely continuing the KSK
rollover.
This has the risk that if zone_rekey events happen too soon after each
other, there are redundant DS queries to the parental agents. But
if TTLs and the configured durations in the dnssec-policy are sane (as
in not ridiculously short), the chance of this happening is low.
This code gathers DNSSEC keys from key files and from the DNSKEY RRset.
It is used for the 'rndc dnssec -status' command, but will also be
needed for "checkds". Turn it into a function.
Change the static function 'get_ksk_zsk' to a library function that
can be used to determine the role of a dst_key. Add checks that the
boolean parameters used to store the role are not NULL. Rename it to
'dst_key_role'.
Add a Pytest-based system test for the 'checkds' feature. There is
one nameserver (ns9, because it should be started last) that has
several zones configured with dnssec-policy. The zones are set in such
a state that they are waiting for DS publication or DS withdrawal.
Several other name servers act as parent servers that either have
the DS for these zones published, or not. One server in the mix is
there to test a badly configured parental agent.
There are tests for DS publication, DS publication error handling,
DS withdrawal, and DS withdrawal error handling.
The tests ensure that the zone is DNSSEC-valid and that the
DSPublish/DSRemoved key metadata is set (or not, in the case of the
error handling).
It does not test whether the rollover continues; this is already
tested in the kasp system test (which uses 'rndc dnssec -checkds' to
set the DSPublish/DSRemoved key metadata).
Add checks for the "parental-agents" configuration, checking for the
option being used with the wrong type of zone (it is only allowed for
primaries and secondaries), duplicate definitions, duplicate
references, and undefined parental clauses (the name referenced in the
zone clause does not have a matching "parental-agents" clause).
When performing the 'setnsec3param' task, zones that are not loaded
will have their task rescheduled. We should do this only if the zone
load is still pending; this prevents zones that failed to load from
getting stuck in a busy wait and causing a hang on shutdown.
Add a zone to the configuration file that uses NSEC3 with dnssec-policy
and fails to load. This will cause setnsec3param to go into a busy wait
and will cause a hang on shutdown.
The Makefile.tests was modifying the global AM_CFLAGS and LDADD and
could accidentally cause /usr/include to be listed before the internal
libraries, which is known to cause problems if headers from a previous
version of BIND 9 are installed on the build machine.
We should drop the HISTORY file because it's confusing and the same
information is covered by the release notes for .0 releases (or at
least it should be).
Remove references to the HISTORY file and update the README to tell
people to look somewhere else.
This was written down in the outdated doc/dev/release documentation.
Since the rest of that file can go, add these steps to a separate file
and update it to current standards (e.g. use git commands).
The util/, doc/design/, and doc/dev/ directories included a couple of
tools and documents that were completely outdated, because they either
referred to a VCS we no longer use (CVS) or described processes that
have been redesigned and are documented elsewhere.
When backporting the Don't Fragment UDP socket option, it was noticed
that the edns-udp-size probing uses 1432 as one of the values to be
probed, while the documentation recommended 1400 as the safe value.
As the safe value can be anywhere in the 1400-1500 interval, the
documentation has been changed to match the probed value, so we do not
skip it.
- add an 'nsupdate -C' option to override resolv.conf file for nsupdate
- set resolv.conf to use two test servers, the first one of which will
return REFUSED for a query for 'example'.
When nsupdate sends an SOA query to a resolver and it fails with
REFUSED, nsupdate will now try the next server rather than aborting
the update completely.
In DNS Flag Day 2020, we started setting the DF (Don't Fragment)
socket option on the UDP sockets. It turned out that this code was
incomplete, leading to dropped outgoing UDP packets.
This has now been remedied, so it is possible to disable
fragmentation on the UDP sockets again, as the sending error is now
handled by sending back an empty response with the TC (truncated) bit
set.
This reverts commit 66eefac78c.
When fragmentation is disabled on UDP sockets, the uv_udp_send()
call can fail with UV_EMSGSIZE for messages larger than the path MTU.
Previously, this error ended with the response simply being discarded.
This commit adds proper handling of this case: on such an error, a new
DNS response with the truncated (TC) bit set is generated and sent to
the client.
This change allows us to disable fragmentation on the UDP sockets
again.
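In wire-format terms, the recovery step amounts to something like the
following sketch (the actual netmgr plumbing is more involved; the
helper name is mine):

    #include <stdint.h>

    #define DNS_WIRE_FLAG_TC 0x0200 /* flags live in header bytes 2-3 */

    /* Mark a wire-format DNS message as truncated before resending a
     * header-plus-question-only response. */
    static void
    mark_truncated(unsigned char *wire) {
            uint16_t flags = (uint16_t)((wire[2] << 8) | wire[3]);
            flags |= DNS_WIRE_FLAG_TC;
            wire[2] = (unsigned char)(flags >> 8);
            wire[3] = (unsigned char)(flags & 0xff);
    }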
Add three more test cases that detect a configuration error if the
key-directory is inherited but has the same value for a zone in a
different view with a deviating DNSSEC policy.
This commit adds a unit test that tests the private
rdataset_getownercase() and rdataset_setownercase() methods from
rbtdb.c. The test sets up minimal mock dns_rbtdb_t and dns_rbtdbnode_t
data structures.
As the rbtdb methods are generally hidden behind layers and layers, we
include "rbtdb.c" directly from rbtdb_test.c, and thus we can use the
private methods and data structures directly. This also opens up the
opportunity to add more unit tests for the rbtdb private functions
without going through all the layers.
This check intermittently failed:
I:serve-stale:check not in cache longttl.example times out...
I:serve-stale:failed
This corresponds to this query in the test:
$DIG -p ${PORT} +tries=1 +timeout=3 @10.53.0.3 longttl.example TXT
Looking at the dig output for a failed test, the query actually got a
response from the authoritative server (in one specific example the
query time was 2991 msec, close to 3 seconds).
After doing the query for the test, we enable the authoritative
server after a sleep of three seconds. If we bump this sleep to 4
seconds, the race will be more in favor of the query timing out,
making it unlikely that this test will fail intermittently.
Also bump the subsequent wait_for_log checks by one second.
In rdataset_setownercase() and rdataset_getownercase(), we now use
the tolower()/toupper()/isupper() functions appropriately instead of
rolling our own code.
Previously, we would set the locale at a global level, which could
possibly lead to different behaviour in underlying functions. In this
commit, we change the code to use the system locale only when calling
the libidn2 functions, and reset the locale back to "POSIX" when
exiting the libidn2 code.
Expand the description of mirror zones in the ARM by adding a brief
discussion of how the validation process works for AXFR and IXFR. Move
the paragraph mentioning the "file" option higher up. Apply minor
stylistic and whitespace-related tweaks to the relevant section of the
ARM.
Apply minor stylistic and whitespace-related tweaks to the
descriptions of the "tcp-receive-buffer", "udp-receive-buffer",
"tcp-send-buffer", and "udp-send-buffer" options in the ARM.
The ARM contains typos in the names of the following two options:
- "tcp-receive-buffer"
- "udp-receive-buffer"
Fix the ARM so that it contains proper option names.
Improve the description of the "max-cache-size" option in the ARM by
focusing on its meaning for multiple views and on its default value.
Add a mention of hash table preallocation.
This commit adds a set of tests to verify that BIND will not crash
when certain opcodes are sent over DoT or DoH, leading to the network
handle in question being marked as sequential.
Previously, each protocol (TCPDNS, TLSDNS) had its own function to
disable pipelining on the connection. An oversight could lead to an
assertion failure when an opcode other than query arrived over a
non-TCPDNS protocol, because the isc_nm_tcpdns_sequential() function
would be called on a non-TCPDNS socket. This commit removes the
per-protocol functions and refactors the code to have and use a common
isc_nm_sequential() function that either disables pipelining on the
socket or handles the request in a protocol-specific manner.
Currently, it ignores the call for HTTP sockets and triggers an
assertion failure for protocols where it doesn't make sense to call
the function at all.
The built-in "_bind" view does not allow recursion and therefore does
not need a large cache database. However, as "max-cache-size" is not
explicitly set for that view in the default configuration, it inherits
that setting from global options. Set "max-cache-size" for the built-in
"_bind" view to a fixed value (2 MB, i.e. the smallest allowed value) to
prevent needlessly preallocating memory for its cache RBT hash table.
Currently the implicit default for the "max-cache-size" option is "90%".
As this option is inherited by all configured views, using multiple
views can lead to memory exhaustion over time due to overcommitment.
The "max-cache-size 90%;" default also causes cache RBT hash tables to
be preallocated for every configured view, which does not really make
sense for views which do not allow recursion.
To limit this problem's potential for causing operational issues, use a
minimal-sized cache for views which do not allow recursion and do not
have "max-cache-size" explicitly set (either in global configuration or
in view configuration).
For configurations which include multiple views allowing recursion,
adjusting "max-cache-size" appropriately is still left to the operator.
When locking key files for a zone, we iterate over all the views and
lock a mutex inside the zone structure. However, if we encounter an
in-view zone, we will try to lock the key files twice, once for the
home view and once for the in-view view. This leads to a deadlock,
because one thread is trying to acquire the same lock twice.
When "max-cache-size" is changed to "unlimited" (or "0") for a running
named instance (using "rndc reconfig"), the hash table size limit for
each affected cache DB is not reset to the maximum possible value,
preventing those hash tables from being allowed to grow as a result of
new nodes being added.
Extend dns_rbt_adjusthashsize() to interpret "size" set to 0 as a signal
to remove any previously imposed limits on the hash table size. Adjust
API documentation for dns_db_adjusthashsize() accordingly. Move the
call to dns_db_adjusthashsize() from dns_cache_setcachesize() so that it
also happens when "size" is set to 0.
Upon creation, each dns_rbt_t structure has its "maxhashbits" field
initialized to the value of the RBT_HASH_MAX_BITS preprocessor macro,
i.e. 32. When the dns_rbt_adjusthashsize() function is called for the
first time for a given RBT (for cache RBTs, this happens when they are
first created, i.e. upon named startup), it lowers the value of the
"maxhashbits" field to the number of bits required to index the
requested number of hash table slots. When a larger hash table size is
subsequently requested, the value of the "maxhashbits" field should be
increased accordingly, up to RBT_HASH_MAX_BITS. However, the loop in
the rehash_bits() function currently ensures that the number of bits
necessary to index the resized hash table will not be larger than
rbt->maxhashbits instead of RBT_HASH_MAX_BITS, preventing the hash table
from being grown once the "maxhashbits" field of a given dns_rbt_t
structure is set to any value lower than RBT_HASH_MAX_BITS.
Fix by tweaking the loop guard condition in the rehash_bits() function
so that it compares the new number of bits used for indexing the hash
table against RBT_HASH_MAX_BITS rather than rbt->maxhashbits.
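In effect, the fix amounts to changing one comparison (identifiers
follow the description above, not necessarily the actual source):

    #define RBT_HASH_MAX_BITS 32
    #define HASHSIZE(bits)    ((size_t)1 << (bits))

    static unsigned int
    rehash_bits(unsigned int hashbits, size_t newcount) {
            unsigned int newbits = hashbits + 1;
            /* The flawed guard compared newbits against the (possibly
             * already lowered) "maxhashbits" field; comparing against
             * the hard limit lets the table keep growing. */
            while (newcount >= HASHSIZE(newbits) &&
                   newbits < RBT_HASH_MAX_BITS)
            {
                    newbits++;
            }
            return newbits;
    }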
The timeout originally picked for "rndc status" invocations (2 seconds)
in the test attempting to reproduce a deadlock caused by running
multiple "rndc addzone", "rndc modzone", and "rndc delzone" commands
concurrently causes intermittent failures of the "addzone" system test
in GitLab CI. Increase the timeout to 10 seconds to make such failures
less probable. Adjust code comments accordingly.
BIND 9.17+ now requires C11 support from the compiler, so we can
safely drop most of the stdatomic.h shims from
lib/isc/unix/include/stdatomic.h.
This commit removes support for clang atomic builtins (clang >= 3.6.0
includes the stdatomic.h header) and for GCC __sync builtins.
The only compatibility shim that remains is support for the __atomic
builtins for GCC >= 4.7.0, since CentOS 7 still includes only GCC 4.8.1
and the proper stdatomic.h header was only introduced in GCC >= 4.9.
The warning was produced by an ASAN build:
runtime error: null pointer passed as argument 2, which is declared to
never be null
This commit fixes it by checking if nghttp2_session_mem_send() has
actually returned anything.
Resolve "ThreadSanitizer: data race lib/isc/task.c:435 in task_send (unprotected access to `task->threadid`)"
Closes#2739
See merge request isc-projects/bind9!5149
This change sets the mentioned fields properly and gets rid of kludges
added in the times when we were keeping pointers to isc_sockaddr_t
instead of copies. Among other things, it helps to avoid a situation
where garbage appears instead of an address in dig output.
We cannot use DoH for zone transfers. According to RFC8484 a DoH
request contains exactly one DNS message (see Section 6: Definition of
the "application/dns-message" Media Type,
https://datatracker.ietf.org/doc/html/rfc8484#section-6). This makes
DoH unsuitable for zone transfers as often (and usually!) these need
more than one DNS message, especially for larger zones.
As zone transfers over DoH are not (yet) standardised, nor discussed
in RFC8484, the best thing we can do is to return "not implemented."
Technically DoH can be used to transfer small zones which fit in one
message, but that is not enough for the generic case.
Also, this commit makes the server-side DoH code ensure that multiple
responses cannot be sent over one HTTP/2 stream. In HTTP/2, one stream
is mapped to one request/response transaction. The write callback is
now called with a failure error code in such a case.
Support a situation in the header-processing callback where the
client-side code receives a belated response, or part of one. That can
happen when the HTTP/2 session was already closed but there was still
response data from the server in flight. The other client-side nghttp2
callbacks already handled this case.
The bug became apparent after HTTP/2 write buffering was added,
leading to rare unit test failures.
This commit ensures that sock->h2.connect.cstream gets nullified when
the object in question is deleted. This fixes a nasty crash in dig,
exposed when receiving large responses, that led to double free()ing.
Also, it refactors how the client-side code keeps track of client
streams, (hopefully) preventing similar errors from appearing in the
future.
This commit makes the NM code report HTTP as a stream protocol. This
makes it possible to handle large responses properly, e.g.:
dig +https @127.0.0.1 A cmts1-dhcp.longlines.com
When answering a query requires wildcard expansion, the AUTHORITY
section of the response needs to include NSEC(3) record(s) proving that
the QNAME does not exist.
When a response to a query is an insecure delegation, the AUTHORITY
section needs to include an NSEC(3) proof that no DS record exists at
the parent side of the zone cut.
These two conditions combined trip up the NSEC part of the logic
contained in query_addds(), which expects the NS RRset to be owned by
the first name found in the AUTHORITY section of a delegation response.
This may not always be true, for example if wildcard expansion causes an
NSEC record proving QNAME nonexistence to be added to the AUTHORITY
section before the delegation is added to the response. In such a case,
named incorrectly omits the NSEC record proving nonexistence of QNAME
from the AUTHORITY section.
The same block of code is affected by another flaw: if the same NSEC
record proves nonexistence of both the QNAME and the DS record at the
parent side of the zone cut, this NSEC record will be added to the
AUTHORITY section twice.
Fix by looking for the NS RRset in the entire AUTHORITY section and
adding the NSEC record to the delegation using query_addrrset() (which
handles duplicate RRset detection).
Add a set of system tests which check the contents of the AUTHORITY
section for signed, insecure delegation responses constructed from CNAME
records and wildcards, both for zones using NSEC and NSEC3.
Instead of checking the value of the variable modified two lines earlier
(the number of SOA records present at the apex of the old version of the
zone), one of the RUNTIME_CHECK() assertions in zone_postload() checks
the number of SOA records present at the apex of the new version of the
zone, which is already checked before. Fix the assertion by making it
check the correct variable.
The Windows support has been completely removed from the source tree
and BIND 9 now no longer supports native compilation on Windows.
We might consider reviewing mingw-w64 port if contributed by external
party, but no development efforts will be put into making BIND 9 compile
and run on Windows again.
When named restarts, it examines signed zones and checks whether the
current denial-of-existence strategy matches the dnssec-policy. If
not, it schedules the creation of a new NSEC(3) chain.
However, on startup the zone database may not have been read yet,
fooling BIND into thinking that the denial-of-existence chain needs to
be created. This results in a replacement of the previous NSEC(3)
chain.
Change the code so that if the NSEC3PARAM lookup failed (the result
was neither ISC_R_SUCCESS nor ISC_R_NOTFOUND), we will try again
later. The nsec3param structure has additional variables to signal
whether the lookup is postponed. We also need to save the signal if an
explicit resalt was requested.
In addition to the two added boolean variables, we add a variable to
store the NSEC3PARAM rdata. This may have a yet-to-be-determined salt
value. We can't create the private data yet because there may be a
mismatch between the salt length and the NULL salt value.
Add a test case where 'named' is restarted and ensure that an already
signed zone does not change its NSEC3 parameters.
The test case first tests the current zone and saves the used salt
value. Then after restart it checks if the salt (and other parameters)
are the same as before the restart.
This test case changes 'set_nsec3param'. It will now reset the salt
value, and when checking for NSEC3PARAM we will store the salt and
use it when testing the NXDOMAIN response. This does mean that for
every test case we now have to call 'set_nsec3param' explicitly (and
cannot omit it because it is the same as for the previous zone).
Finally, some echo output was slightly changed to make debugging
friendlier.
When we rewrote the zone dumping to use the separate threadpool, the
dumping would hold the read lock for the entire time the zone was
being dumped.
When combined with an incoming IXFR that tries to acquire the write
lock on the same rwlock, we would end up blocking all the other
readers.
In this commit, we pause the dbiterator every time we get the next
record, before dumping it to disk.
Make sure an incoming IXFR containing an SOA record which is not placed
at the apex of the transferred zone does not result in a broken version
of the zone being served by named and/or a subsequent crash.
While cleaning up the usage of the HAVE_UV_<func> macros, we forgot to
clean up HAVE_UV_UDP_CONNECT and HAVE_UV_TRANSLATE_SYS_ERROR in the
actual code, and this was causing the Windows build to fail on
uv_udp_send(), because the socket was already connected and we were
falsely assuming that it was not.
The platforms with autoconf support were not affected, because we were
still checking for the functions from configure.
This commit adds the ability to consolidate HTTP/2 write requests if
there is already one in flight. If that is the case, the code
consolidates multiple subsequent write requests into a larger one,
allowing the network to be utilised more efficiently by creating
larger TCP packets as well as by reducing TLS record overhead (by
creating large TLS records instead of multiple small ones).
This optimisation is especially effective for clients creating many
concurrent HTTP/2 streams over a transport connection at once. This
way, the code can create a small number of multi-kilobyte requests
instead of many 50-120 byte ones.
In fact, it turned out to work so well that I had to add a work-around
to the code to ensure compatibility with flamethrower, which, at
the time of writing, does not support TLS records larger than two
kilobytes. Now the code tries to flush the write buffer after 1.5
kilobytes, which is still pretty adequate for our use case.
Essentially, this commit implements a recommendation given by nghttp2
library:
https://nghttp2.org/documentation/nghttp2_session_mem_send.html
Add a call to posix_fadvise() to indicate to the kernel that `named`
won't be needing the dumped zone files any time soon, using:
* POSIX_FADV_DONTNEED - The specified data will not be accessed in the
near future.
Notes:
POSIX_FADV_DONTNEED attempts to free cached pages associated with the
specified region. This is useful, for example, while streaming large
files. A program may periodically request the kernel to free cached
data that has already been used, so that more useful cached pages are
not discarded instead.
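A minimal sketch of the call as it might be issued after a dump
completes (the helper name is mine; the hint is purely advisory, so
the return value can be ignored):

    #include <fcntl.h>

    static void
    drop_page_cache(int fd) {
            /* len == 0 applies the advice to the whole file. */
            (void)posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }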
Previously, dumping zones to files was quantized so that it would not
slow down network IO processing. With the introduction of the network
manager's asynchronous threadpools, we can move the IO-intensive work
to that API and no longer have to quantize the work, as the file IO
won't block anything except other zone-dumping processes.
libuv has support for running long-running tasks in dedicated
threadpools, so they don't affect networking IO.
This commit adds the isc_nm_work_enqueue() wrapper, which wraps around
the libuv API and runs the work on top of the associated worker loop.
The only limitation is that the function must be called from inside a
network manager thread, so the call to the function should be wrapped
inside a (bound) task.
Instead of having a configure check for every missing function that
was added in a later version of libuv, we now use UV_VERSION_HEX to
decide whether we need the shim or not.
The uv_req_get_data() and uv_req_set_data() functions were introduced in
libuv >= 1.19.0, so we need to add compatibility shims with older libuv
versions.
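For example, shims for those two accessors might look like this;
libuv 1.19.0 is 0x011300 in UV_VERSION_HEX terms (this mirrors the
upstream signatures, but the exact shim in the tree may differ):

    #include <uv.h>

    #if UV_VERSION_HEX < 0x011300 /* libuv < 1.19.0 */
    static inline void *
    uv_req_get_data(const uv_req_t *req) {
            return req->data;
    }

    static inline void
    uv_req_set_data(uv_req_t *req, void *data) {
            req->data = data;
    }
    #endif /* UV_VERSION_HEX < 0x011300 */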
Commit bdb777b2a2 updated the man pages
to contents produced using:
- Sphinx 4.0.2
- sphinx-rtd-theme 0.5.2
- docutils 0.17.1
However, sphinx-rtd-theme 0.5.2 is incompatible with versions 0.17+ of
the docutils package. This problem was addressed in the Docker image
used for building man pages by downgrading the docutils package to
version 0.16.
Regenerate the man pages again, this time using:
- Sphinx 4.0.2
- sphinx-rtd-theme 0.5.2
- docutils 0.16
This is necessary to prevent the "docs" GitLab CI job from failing.
Rather than having an expensive 'expired' field (formerly 'stale_ttl')
in the rdataset structure, which is only used for printing a comment
on ancient RRsets, reuse the TTL field of the RRset.
Commit a83c8cb0af updated masterdump so
that stale records in "rndc dumpdb" output no longer show 0 TTLs. In
this commit we change the name of the `rdataset->stale_ttl` field to
`rdataset->expired` to make its purpose clearer, and set it to zero in
cases where it's unused.
Add 'rbtdb->serve_stale_ttl' to various checks so that stale records
are not purged from the cache when they've been stale for RBTDB_VIRTUAL
(300) seconds.
Increment 'ns_statscounter_usedstale' when a stale answer is used.
Note: There was a question of whether 'overmem_purge' should be
purging ancient records, instead of stale ones. It is left as purging
stale records, since stale records could take up the majority of the
cache.
This submission is copyrighted Akamai Technologies, Inc. and provided
under an MPL 2.0 license.
This commit was originally authored by Kevin Chen, and was updated by
Matthijs Mekking to match recent serve-stale developments.
Once we resume a query, we should clear DNS_FETCHOPT_TRYSTALE_ONTIMEOUT
from the options, to prevent triggering the stale-answer-client-timeout
on subsequent fetches.
If we don't, this may cause a crash, for example when prefetch is
triggered after a query restart.
Add a test case where a client request is received and the stale
timeout occurs, but no stale data is served because there is no entry
in the cache; the client is then served an authoritative answer once
the background fetch completes. This ensures that a stale timeout only
affects a subsequent response if the client was already answered.
When a serve-stale answer has been sent, the client continues waiting
for a proper answer. If a final completion event for the client does
arrive, it can just be cleaned up without sending a response, similar
to a canceled fetch.
- send a query for an AAAA which will be resolved as a mapped A
- disable authoritative responses
- wait for the negative AAAA response to become stale
- send another query, wait for the stale answer
- re-enable authoritative responses so that a real answer arrives
- currently, this triggers an assertion in query.c
On some platforms, the __attribute__ constructor and destructor don't
take priorities, and compilation fails; one such platform is macOS.
For this reason, the constructors/destructors in libisc were reworked
to not use priorities: there is now a single constructor and
destructor that calls the appropriate routines in the correct order.
This commit removes the extra priority, because it's no longer needed
and it also breaks compilation on macOS with GCC 10.
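A sketch of the reworked approach (the subsystem hooks are
illustrative, not the actual libisc routines):

    /* Illustrative subsystem hooks. */
    static void isc__mem_initialize(void) {}
    static void isc__mem_shutdown(void) {}
    static void isc__uv_initialize(void) {}
    static void isc__uv_shutdown(void) {}

    /* One unprioritized constructor/destructor pair; the ordering
     * lives in the function bodies instead of in constructor
     * priorities, which some platforms (e.g. macOS) do not support. */
    static void __attribute__((constructor))
    isc__initialize(void) {
            isc__mem_initialize();
            isc__uv_initialize();
    }

    static void __attribute__((destructor))
    isc__shutdown(void) {
            isc__uv_shutdown(); /* reverse order of initialization */
            isc__mem_shutdown();
    }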
In the shutdown system test, multiple queries are sent to a resolver
instance; in the meantime, we terminate that same resolver process,
either via rndc stop or a SIGTERM signal. This means the resolver may
not be able to answer all those queries, since it has initiated the
shutdown process.
The dnspython library raises a dns.resolver.NoNameservers exception
when a resolver object fails to receive an answer from the specified
list of nameservers (the resolver.nameservers list). We need to handle
this exception, as it is something that may happen: since we asked the
resolver to terminate, it may not answer clients even if an answer is
available, because the operation will be canceled.
Configuring with --enable-mutex-atomics flagged these incorrectly
initialised variables on systems where pthread_mutex_init doesn't
just zero out the structure.
The array holding the pointers to clientmgr objects was created big
enough to hold the actual clientmgr objects, not just the pointers.
This commit fixes the size to be just ncpus * sizeof(pointer).
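In effect (names approximated):

    /* Before: sized for whole structures. */
    clientmgrs = isc_mem_get(mctx, ncpus * sizeof(ns_clientmgr_t));

    /* After: sized for the pointers only. The idiom
     * ncpus * sizeof(clientmgrs[0]) would have avoided the bug. */
    clientmgrs = isc_mem_get(mctx, ncpus * sizeof(ns_clientmgr_t *));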
The isc_nmiface_t type held just a single isc_sockaddr_t, so we got
rid of the datatype and use a plain isc_sockaddr_t everywhere
isc_nmiface_t was used before. This means less type-casting and a
shorter path to access the isc_sockaddr_t members.
At the same time, instead of keeping the reference to the isc_sockaddr_t
that was passed to us when we start listening, we will keep a local
copy. This prevents the data race on destruction of the ns_interface_t
objects where pending nmsockets could reference the sockaddr of already
destroyed ns_interface_t object.
* dns_journal_next() leaves the read point in the journal after the
transaction header so journal_seek() should be inside the loop.
* we need to recover from transaction header inconsistencies
Additionally when correcting for <size, serial0, serial1, 0> the
correct consistency check is isc_serial_gt() rather than
isc_serial_ge(). All instances updated.
BIND installation should be done by setting DESTDIR during "make
install", not by setting the prefix via ./configure.
Make sure that installation with DESTDIR=<PATH> works by checking that
the named binary and its respective man page were installed and that
the well-known BIND 9 directories - and only those - are present in
DESTDIR.
Also rename the install path variable from BIND_INSTALL_PATH to
INSTALL_PATH, to avoid a namespace clash with stress tests which use
the BIND_INSTALL_PATH variable to configure the path to the BIND 9
binaries.
Ubuntu 16.04 (Xenial Xerus) is reaching End of Standard Support in April
2021 thus we are removing it from the list of supported platforms and
replacing it with Ubuntu 18.04 LTS (Bionic Beaver).
According to the measurements (recorded on GL!5085), a fillcount of 2
for the namepool and a fillcount of 4 for the rdspool can fit 99.99%
of requests in the tested scenarios.
This was discovered by perf-recording a one-second recursive test
using flamethrower, where the initial malloc lit up like a flare.
Previously, as a way of reducing contention between threads, a
clientmgr object would be created for each interface/IP address.
With tasks being more strictly bound to netmgr workers, this is no
longer needed and we can just create a clientmgr object per worker
queue (ncpus). Each clientmgr object then has a single task and a
single memory context.
Similarly, the resolver code would create hundreds of memory contexts
just on resolver setup. The contention will be reduced directly in
the allocator, so for now just attach to the view memory instead of
creating a separate memory context for each bucket.
Since a client object is bound to a netmgr handle, each client
will always be processed by the same netmgr worker, so we can
simplify the code by binding client->task to the same thread as
the client. Since ns__client_request() now runs in the same event
loop as the client->task events, it is no longer necessary to pause
the task manager before launching them.
Also removed some functions in isc_task that were not used.
The number of memory contexts created in the clientmgr was enormous.
It could easily create thousands of memory contexts, because the
formula was:
nprotocols * ncpus * ninterfaces * CLIENT_NMCTXS_PERCPU (8)
The original goal was to reduce contention when allocating memory,
but, as it turned out, the number of memory contexts allocated did not
actually reduce contention at all.
This commit removes the whole mctxpool and just uses the mctx from the
clientmgr, as the contention will be reduced directly in the
allocator.
Running gcc:tarball CI job for merge requests is consistent with how we
run gcc:out-of-tree CI job and should help identify problems with the
build system during the review process, not once merged during daily
runs. For the sake of time, unit and system tests associated with the
gcc:tarball CI job are excluded from merge requests.
It's a common pattern to spawn CI jobs only for pipelines triggered by
schedules, tags, and the web. Add an anchor so that the rules are not
repeated.
The idle timeout for rndc connections was set to 10 seconds, but this
caused intermittent failures of the 'rndc' system test on slow
platforms, since 'rndc reconfig' could time out before reconfiguration
was complete.
This commit restores the original timeout value of 60 seconds, which
was changed inadvertently after rndc was updated to use the network
manager.
Even with this change, however, the test can still time out under
TSAN, because loading the huge zone can take a very long time (upwards
of two minutes). So the test is modified here to generate a smaller
zone file when running under TSAN.
dns_name_copy() has been replaced nearly everywhere with
dns_name_copynf(). This commit changes the last two uses of
the original function. Afterward, we can remove the old
dns_name_copy() implementation and replace it with _copynf().
dns_message_gettempname() returns an initialized name with a dedicated
buffer, associated with a dns_fixedname object. Using dns_name_copynf()
to write a name into this object will actually copy the name data
from a source name. dns_name_clone() merely points target->ndata to
source->ndata, so it is faster, but it can lead to a use-after-free if
the source is freed before the target object is released via
dns_message_puttempname().
In a few places, clone was being used where copynf should have been;
this is now fixed.
As a side note, no memory was lost, because the ndata buffer used in
the dns_fixedname_t is internal to the structure, and is freed when
the dns_fixedname_t is freed regardless of the .ndata contents.
When executed in "legacy mode" (i.e. without the '-r' parameter)
run.sh invokes make with a modified environment.
SYSTEMTEST_FORCE_COLOR is now preserved for use by the individual test
scripts.
CYGWIN is now preserved for named, as it controls behavior relating to
crash reporting.
This restores legacy behavior in bin/tests/system where running:
SYSTEMTEST_NO_CLEAN=1 ./run.sh <testname>
would run the test and preserve the output files.
This has been broken since the change that has run.sh invoke "make",
due to SYSTEMTEST_NO_CLEAN not being preserved in the environment
that's set up for "make".
Another option would be to completely remove SYSTEMTEST_NO_CLEAN.
This seems to be the only behavior-changing environment variable
not accounted for in the call to "make".
I don't think this needs a CHANGES entry.
The default value of the "man_make_section_directory" Sphinx option was
changed in Sphinx 4.0.1, which broke building man pages in maintainer
mode as the shell code in doc/man/Makefile.am expects man pages to be
built in doc/man/_build/man/, not doc/man/_build/man/<section_number>/.
The aforementioned change in defaults was reverted in Sphinx 4.0.2, but
this issue should still be prevented from reoccurring in the future.
Ensure that by explicitly setting the "man_make_section_directory"
option to False.
The man pages produced by Sphinx 4.0.2 are slightly different than those
produced by Sphinx 3.5.4. As Sphinx 4.0.2 is now used in GitLab CI,
update all doc/man/*in files so that they reflect what that version of
Sphinx produces, in order to prevent GitLab CI job failures.
The last rdataset_getownercase() change left the code in a state where
it was a mix of micro-optimizations (manual loop unrolling,
complicated bit shifts) and code that would always rewrite the
character even if it stayed the same after transformation.
This commit makes sure that we modify only the characters that
actually need to change, removes the manual loop unrolling, and
replaces the weird bit arithmetic with a simple shift and bit-and.
dns_message_gettempname() now returns a pointer to an initialized
name associated with a dns_fixedname_t object. it is no longer
necessary to allocate a buffer for temporary names associated with
the message object.
Also, add "set -e" to all shell scripts of the views test to exit when
any command fails or is unknown, e.g., this on OpenBSD:
tests.sh[174]: seq: not found
The seq command is not defined in the POSIX standard and is missing on
OpenBSD. Given that the system test code is meant to be POSIX-compliant
replace it with a shell construct.
This function has never been used since it was added to the source tree
by commit 686b27bfd3 back in 1999. As
the dns_zoneflg_t type is only defined in lib/dns/zone.c, no function
external to that file would be able to use dns_zone_setflag() properly
anyway - the DNS_ZONE_SETFLAG() and DNS_ZONE_CLRFLAG() macros should be
used instead. Zone options that can be set from outside zone.c are set
using dns_zone_setoption().
Add two tests to make sure named-checkconf catches key-directory issues
where a zone in multiple views uses the same directory but has
different dnssec-policies. One test sets the key-directory specifically,
the other inherits the default key-directory (NULL, aka the working
directory).
Also update the good.conf test to allow zones in different views
with the same key-directory if they use the same dnssec-policy.
Also allow zones in different views with different key-directories if
they use different dnssec-policies.
Also allow zones in different views with the same key-directories if
only one view uses a dnssec-policy (the other is set to "none").
Also allow zones in different views with the same key-directories if
neither view uses a dnssec-policy (the zone in both views has the
dnssec-policy set to "none").
Don't allow the same zone with different dnssec-policies in separate
views to have the same key-directory.
Track zones plus key-directory in a symtab, and if there is a match,
check the offending zone's dnssec-policy name. If the name is "none"
(there is no kasp for the offending zone), or if the name is the same
(the zone shares keys), it is fine; otherwise it is an error (zones
in views using different policies cannot share the same
key-directory).
Resolve "Misleading diagnostic in update_soa_serial indicates BIND will use increment but it doesn't"
Closes#2696
See merge request isc-projects/bind9!5029
If dns_updatemethod_date is used, make sure that the returned method
is only set to dns_updatemethod_increment if the new serial does not
encode the current day (YYYYMMDDXX).
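A sketch of the corrected reporting logic (the helper and variable
names are hypothetical):

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical helper: today's date as YYYYMMDD. */
    static uint32_t
    yyyymmdd_today(void) {
            time_t now = time(NULL);
            struct tm tm;
            gmtime_r(&now, &tm);
            return (uint32_t)((tm.tm_year + 1900) * 10000 +
                              (tm.tm_mon + 1) * 100 + tm.tm_mday);
    }

    /* Date serials have the form YYYYMMDDXX, so dividing by 100
     * strips the two-digit counter:
     *     if (newserial / 100 == yyyymmdd_today()) {
     *             *usedmethod = dns_updatemethod_date;
     *     } else {
     *             *usedmethod = dns_updatemethod_increment;
     *     }
     */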
PyLint 2.8.2 reports the following suggestions for two Python scripts
used in the system test suite:
************* Module tests_rndc_deadlock
bin/tests/system/addzone/tests_rndc_deadlock.py:71:4: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
************* Module tests-shutdown
bin/tests/system/shutdown/tests-shutdown.py:68:4: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
bin/tests/system/shutdown/tests-shutdown.py:154:8: R1732: Consider using 'with' for resource-allocating operations (consider-using-with)
Implement the above suggestions by using
concurrent.futures.ThreadPoolExecutor() and subprocess.Popen() as
context managers.
Add an item to the CVE issue template which calls for drafting the
security advisory early in the security incident handling process. The
intention is to ensure there is enough time to review and polish ISC
security advisories before they get published.
Tweak the release checklist to make sure we carefully consider all
confidential issues before opening them up to the public. This change
is intended as a safeguard against accidentally disclosing too much
information about a security vulnerability before our users get a chance
to patch it.
Instead of using a fixed quantum, this commit adds an atomic counter
for the number of items on each queue and uses the number of
netievents scheduled to run as the maximum number of netievents for a
single process_queue() run.
This prevents endless loops when a netievent schedules more netievents
onto the same loop, without us having to pick a "magic" number for the
quantum.
OpenBSD changed the name of the pytest script from py.test-3 in OpenBSD
6.8 to py.test in OpenBSD 6.9.
The py.test-3 name which was added in d5562a3e for the sake of OpenBSD
and CentOS is still required for CentOS.
This commit adds a new configuration option to set the receive and
send buffer sizes on the TCP and UDP netmgr sockets. The default is
`0`, which doesn't set any value and just uses the value set by the
operating system.
There's no magic value here - set it too small and the performance
will drop; set it too large and the buffers can fill up with queries
that have already timed out on the client side and whose answers
nobody is interested in, which would just make the server clog up even
more by producing useless work.
`netstat -su` can be used on POSIX systems to monitor the receive
and send buffer errors.
Unit test run for out-of-tree builds used to fail to find
masterXX.data.in files:
/usr/bin/perl -w /builds/mnowak/bind9/lib/dns/tests/mkraw.pl < testdata/master/master12.data.in > testdata/master/master12.data
/bin/bash: testdata/master/master12.data.in: No such file or directory
make[4]: *** [Makefile:1910: testdata/master/master12.data] Error 1
The outgoing UDP socket selection would pick an uninitialized child
socket on Windows, because we have more netmgr workers than we have
listening sockets. This commit fixes the selection by keeping the
outgoing socket the same, so it always runs on an existing socket.
The initial intent was to limit the number of concurrent streams to
100, but due to an error in reading the documentation, the limit was
set to the maximum possible number of streams per session.
This could lead to security issues, e.g. a remote attacker could take
down the BIND instance by creating lots of sessions over a low number
of transport connections. This commit fixes that.
We should not call nghttp2_session_terminate_session() in server-side
code after all of the active HTTP/2 streams are processed. The
underlying transport connection is expected to remain open at least
for some time in this case, for new HTTP/2 requests to arrive. That is
what flamethrower was expecting, and it makes perfect sense from the
HTTP/2 perspective.
During the stress testing, it was discovered that the default netmgr
quantum of 128 is not enough and there was a performance drop for TCP on
FreeBSD. Bumping the default quantum to 1024 solves the performance
issue and is still enough to prevent the endless loops.
All privileged tasks are complete by the time we return from
isc_task_endexclusive(), so it makes sense to reset the taskmgr
mode to non-privileged right then.
We were clearing the pointer to the taskmgr as soon as
isc_taskmgr_destroy() was called, before all tasks had finished.
Unfortunately, some tasks would use the global named_g_taskmgr object
from inside their events, and this would cause either a data race or a
NULL pointer dereference.
This commit fixes the data race by moving the destruction of the
referenced pointer to the point after all tasks are finished.
Network manager events that require interlock (pause, resume, listen)
are now always executed in the same worker thread, mgr->workers[0],
to prevent races.
"stoplistening" events no longer require interlock.
- ensure isc_nm_pause() and isc_nm_resume() work the same whether
run from inside or outside of the netmgr.
- promote 'stop' events to the priority event level so they can
run while the netmgr is pausing or paused.
- when pausing, drain the priority queue before acquiring an
interlock; this prevents a deadlock when another thread is waiting
for us to complete a task.
- release interlock after pausing, reacquire it when resuming, so
that stop events can happen.
Some incidental changes:
- use a function to enqueue pause and resume events (this was part of
a different change attempt that didn't work out; I kept it because I
thought it was more readable).
- make mgr->nworkers a signed int to remove some annoying integer
casts.
The netmgr listening, stoplistening, pausing and resuming functions
now use barriers for synchronization, which makes the code much simpler.
isc/barrier.h defines isc_barrier macros as a front-end for uv_barrier
on platforms where that works, and pthread_barrier where it doesn't
(including TSAN builds).
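A sketch of the front-end idea (the selection condition is
simplified; the real header also has to detect the other platforms
where uv_barrier misbehaves):

    #include <pthread.h>
    #include <uv.h>

    /* TSAN builds fall back to pthread_barrier. */
    #if defined(__SANITIZE_THREAD__)
    #define isc_barrier_t          pthread_barrier_t
    #define isc_barrier_init(b, n) pthread_barrier_init(b, NULL, n)
    #define isc_barrier_wait(b)    pthread_barrier_wait(b)
    #define isc_barrier_destroy(b) pthread_barrier_destroy(b)
    #else
    #define isc_barrier_t          uv_barrier_t
    #define isc_barrier_init(b, n) uv_barrier_init(b, n)
    #define isc_barrier_wait(b)    uv_barrier_wait(b)
    #define isc_barrier_destroy(b) uv_barrier_destroy(b)
    #endif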
When isc__nm_http_stoplistening() is run from inside the netmgr, we
need to make sure it's run synchronously. This commit is just a
band-aid though, as the desired behavior for isc_nm_stoplistening()
is not always the same:
1. When run by an outside user of the interface, the call must be
synchronous, i.e. the calling code expects the call to really stop
listening on the interfaces.
2. But if there's a call from listen<proto> when listening fails,
it needs to be scheduled to run asynchronously, because
isc_nm_listen<proto> is being run in a paused (interlocked)
netmgr thread and we could get stuck.
The proper solution would be to make isc_nm_stoplistening()
behave like uv_close(), i.e., to have a proper callback.
All zone loading tasks have the privileged flag, but we only want
them to run as privileged tasks when the server is being initialized;
if we privilege them the rest of the time, the server may hang for a
long time after a reload/reconfig. So now we call isc_taskmgr_setmode()
to turn privileged execution mode on or off in the task manager.
isc_task_privileged() returns true if the task's privilege flag is
set *and* the taskmgr is in privileged execution mode. This is used
to determine in which netmgr event queue the task should be run.
This works around a couple of races where current_lookup would already
be detached while dig is shutting down but is still processing the
pending reads.
The start_udp() function didn't properly attach to the query, and thus
a callback with ISC_R_CANCELED would end with wrong accounting on the
query object.
Usually, this doesn't happen, because the underlying libuv API
uv_udp_connect() is synchronous, but isc_nm_udpconnect() can return
ISC_R_CANCELED when it is called while the netmgr is shutting down.
There was a theoretical possibility of clogging up the queue
processing with an endless loop, where the netievent currently being
processed would schedule a new netievent that would get processed
immediately. This wasn't such a problem when only netmgr netievents
were processed, but with the addition of the tasks, there are at least
two situations where this could happen:
1. In lib/dns/zone.c:setnsec3param(), the task would get re-enqueued
when the zone was not yet fully loaded.
2. Tasks have an internal quantum for the maximum number of isc_events
to be processed; when the task quantum is reached, the task would get
rescheduled and then immediately processed by the netmgr queue
processing.
As the isc_queue doesn't have a mechanism to atomically move the queue,
this commit adds a mechanism to quantize the queue, so enqueueing new
netievents will never stop processing other uv_loop_t events.
The default quantum size is 128.
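A self-contained sketch of the quantized draining idea (the queue here
is a plain linked list, not the real isc_queue):
```
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define QUANTUM 128

typedef struct event { struct event *next; int id; } event_t;
typedef struct queue { event_t *head, *tail; } queue_t;

static void enqueue(queue_t *q, int id) {
	event_t *ev = calloc(1, sizeof(*ev));
	ev->id = id;
	if (q->tail != NULL) q->tail->next = ev; else q->head = ev;
	q->tail = ev;
}

/* Returns true if the quantum was exhausted; the caller then
 * reschedules this function on the next loop iteration instead of
 * spinning here forever. */
static bool process_quantum(queue_t *q) {
	for (int n = 0; n < QUANTUM; n++) {
		event_t *ev = q->head;
		if (ev == NULL) return false;        /* queue drained */
		q->head = ev->next;
		if (q->head == NULL) q->tail = NULL;
		printf("processing event %d\n", ev->id);
		/* a handler may enqueue(q, ...) here; the new event is
		 * picked up on a later pass, never extending this one
		 * past the quantum */
		free(ev);
	}
	return true;                                 /* yield to the loop */
}

int main(void) {
	queue_t q = { NULL, NULL };
	for (int i = 0; i < 300; i++) enqueue(&q, i);
	int full_passes = 0;
	while (process_quantum(&q)) full_passes++;
	printf("drained after %d full passes\n", full_passes); /* 2 */
	return 0;
}
```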
Since the queue used in the network manager allows items to be enqueued
more than once, tasks are now reference-counted around task_ready()
and task_run(). task_ready() now has a public API wrapper,
isc_task_ready(), that the netmgr can use to reschedule processing
of a task if the quantum has been reached.
Incidental changes: Cleaned up some unused fields left in isc_task_t
and isc_taskmgr_t after the last refactoring, and changed atomic
flags to atomic_bools for easier manipulation.
With taskmgr running on top of netmgr, the ordering of the task and
netmgr shutdown interaction was wrong: previously, isc_taskmgr_destroy()
waited until all tasks were properly shut down and detached. This
responsibility was moved to netmgr, so we now need to do the following:
1. Shutdown all the tasks - this schedules all shutdown events onto
the netmgr queue
2. Shutdown the netmgr - this also makes sure all the tasks and
events are properly executed
3. Shutdown the taskmgr - this now waits for all the tasks to finish
running before returning
4. Shutdown the netmgr - this call waits for all the netmgr netievents
to finish before returning
This solves the race when the taskmgr object would be destroyed before
all the tasks were finished running in the netmgr loops.
Previously, netmgr, taskmgr, timermgr and socketmgr all had their own
isc_<*>mgr_create() and isc_<*>mgr_destroy() functions. The new
isc_managers_create() and isc_managers_destroy() functions fold all four
into a single call and make sure the objects are created and destroyed
in the correct order.
Especially now, when taskmgr runs on top of netmgr, the correct order is
important, and with the code duplicated in many places it was easy to
make a mistake.
The former isc_<*>mgr_create() and isc_<*>mgr_destroy() functions were
made private and a single call to isc_managers_create() and
isc_managers_destroy() is required at the program startup / shutdown.
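A schematic of folding the four create/destroy pairs into one call,
with placeholder manager types (the real isc_managers API and its exact
ordering details live in libisc):
```
#include <stdio.h>
#include <stdlib.h>

typedef struct mgr { const char *name; } mgr_t;

static mgr_t *mgr_create(const char *name) {
	mgr_t *m = malloc(sizeof(*m));
	m->name = name;
	printf("created %s\n", name);
	return m;
}

static void mgr_destroy(mgr_t **mp) {
	printf("destroyed %s\n", (*mp)->name);
	free(*mp);
	*mp = NULL;
}

static void managers_create(mgr_t **netmgr, mgr_t **taskmgr,
			    mgr_t **timermgr, mgr_t **socketmgr) {
	*netmgr = mgr_create("netmgr");   /* taskmgr runs on top of this */
	*taskmgr = mgr_create("taskmgr");
	*timermgr = mgr_create("timermgr");
	*socketmgr = mgr_create("socketmgr");
}

static void managers_destroy(mgr_t **netmgr, mgr_t **taskmgr,
			     mgr_t **timermgr, mgr_t **socketmgr) {
	/* strict reverse order: netmgr must outlive taskmgr */
	mgr_destroy(socketmgr);
	mgr_destroy(timermgr);
	mgr_destroy(taskmgr);
	mgr_destroy(netmgr);
}

int main(void) {
	mgr_t *netmgr, *taskmgr, *timermgr, *socketmgr;
	managers_create(&netmgr, &taskmgr, &timermgr, &socketmgr);
	managers_destroy(&netmgr, &taskmgr, &timermgr, &socketmgr);
	return 0;
}
```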
Fix flawed DoH unit tests logic and some corner cases in the DoH code. Fix doh_test failure on FreeBSD 13.0
Closes#2632
See merge request isc-projects/bind9!5005
Under some circumstances the server-side session could get finished
while there were still active HTTP/2 streams, leading to
isc_nm_httpsocket object leaks.
This commit fixes this behaviour and refactors failed_read_cb()
to allow better code reuse.
This commit fixes a situation when a cstream object could get unlinked
from the list as a result of a cstream->read_cb call. Thus, unlinking
it after the call could crash the program.
... the last handle has been detached after calling the write
callback. That makes it possible to detach from the underlying socket
and not keep the socket object alive for too long. This issue was
causing TLS tests with quota to fail because the quota might not have
been detached in time (because it was still referenced by the underlying
TCP socket).
One could say that this commit is an ideological continuation of:
513cdb52ec.
This way we create fewer netievent objects, not bombarding the NM with
messages in the case of numerous low-level errors (like too many open
files) in e.g. unit tests.
This change ensures that a TCP connect callback is called from within
the context of a worker thread in case of a low-level error when
descriptors cannot be created (e.g. when there are too many open file
descriptors).
When looking for key files, we could use isdigit rather than checking
if the character is within the range [0-9].
Use (unsigned char) cast to ensure the value is representable in the
unsigned char type (as suggested by the isdigit manpage).
Change " & 0xff" occurrences to the recommended (unsigned char) type
cast.
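For illustration, a small standalone example of the recommended cast
(the key file name is made up):
```
#include <ctype.h>
#include <stdio.h>

int main(void) {
	/* isdigit()'s argument must be representable as an unsigned
	 * char (or be EOF); passing a plain char that happens to be
	 * negative is undefined behaviour, hence the cast. */
	const char *name = "Kexample.+013+24848.private"; /* hypothetical */
	for (const char *p = name; *p != '\0'; p++) {
		if (isdigit((unsigned char)*p)) { /* not isdigit(*p) */
			putchar(*p);
		}
	}
	putchar('\n'); /* prints: 01324848 */
	return 0;
}
```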
Just like with dynamic and/or inline-signing zones, check that no two
or more zone configurations set the same filename. In these cases,
the zone files are not read-only, and named-checkconf should catch
a configuration where multiple zone statements write to the same file.
Add some bad configuration tests where KASP zones reference the same
zone file.
Update the good-kasp test to allow two zones to configure the same
file name with dnssec-policy none.
When we introduced "dnssec-policy insecure" we could have removed the
'strcmp' check for "none", because if it was set to "none", the 'kasp'
variable would have been set to NULL.
Add a test for default.kasp checking that if we remove the private key
file, no successor key is created for it. We need to update the kasp
script to deal with a missing private key; if this is the case, skip
checks for private key files.
Add a test with a zone for which the private key of the ZSK is missing.
Add a test with a zone for which the private key of the KSK is missing.
BIND 9 is smart about when to sign with what key. If a key is offline,
BIND will delete the old signature anyway if there is another key to
sign the RRset with.
With KASP we don't want to fall back to the KSK if the ZSK is missing,
only for the SOA RRset. If the KSK is missing but we do have a ZSK,
deleting the signature is fine. Otherwise it depends on whether we use
KASP or not. Update the 'delsig_ok' function to reflect that.
When checking the current DNSSEC state against the policy, consider
offline keys. If we didn't find an active key, check if the key is
offline by checking the public key list. If there is a match in the
public key list (the key data is retrieved from the .key and the
.state files), treat the key as offline and don't create a successor
key for it.
The rndc command 'dnssec -status' only considered keys from
'dns_dnssec_findmatchingkeys' which only includes keys with accessible
private keys. Change it so that offline keys are also listed in the
status.
The function 'dns_dnssec_keylistfromrdataset()' creates a keylist from
the DNSKEY RRset. If we attempt to read the private key, we also store
the key state. However, if the private key is offline, the key state
will not be stored. To fix this, first attempt to read the public key
file. If then reading the private key file fails, and we do have a
public key, add that to the keylist, with appropriate state. If we
also failed to read the public key file, add the DNSKEY to the keylist,
as we did before.
The kasp system test performs a couple of checks for each zone to make
sure the zone is signed correctly. To avoid test failures caused by
timing issues, there is first a check to ensure the zone is done
signing, 'wait_for_done_signing'. This function holds off the DNSSEC
checks until a "zone_rekey done" log message is seen for a specific
key.
Unfortunately this is not sufficient to avoid test failures due to
timing issues, because there is a small amount of time in between this
log message and the newly signed zone actually being served.
Therefore, in 'check_apex', retry for three seconds the DNSKEY query
check. After that, additional checks should pass without retries,
because at that point we know for sure the zone has been resigned with
the expected keys.
Also reduce the number of redundant 'check_signatures' calls.
This commit adds support for generating backtraces on Windows and
refactors the isc_backtrace API to match the Linux/BSD API (without
the isc_ prefix)
* isc_backtrace_gettrace() was renamed to isc_backtrace(), the third
argument was removed and the return type was changed to int
* isc_backtrace_symbols() was added
* isc_backtrace_symbols_fd() was added and used as appropriate
On Windows, iocompletionport_createthreads() didn't use
isc_thread_create() to create new threads for processing IO, but just
the plain CreateThread() function, which completely circumvented the
isc_trampoline mechanism that initializes the global isc_tid_v. This
led to a segmentation fault in the isc_hp API because '-1' isn't a
valid index into the hazard pointer array.
This commit changes the iocompletionport_createthreads() to use
isc_thread_create() instead of CreateThread() to properly initialize
isc_tid_v.
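A sketch of the trampoline idea in portable C (pthreads stand in for
the Windows thread API; the real isc_trampoline differs in detail):
```
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static _Thread_local int isc_tid_v = -1; /* -1 == never initialized */
static atomic_int next_tid;

typedef struct {
	void *(*func)(void *);
	void *arg;
} trampoline_arg_t;

/* every thread enters here, so the thread-local id is initialized
 * before the real entry point runs; creating a thread behind the
 * wrapper's back leaves the id at -1 */
static void *trampoline(void *raw) {
	trampoline_arg_t ta = *(trampoline_arg_t *)raw;
	free(raw);
	isc_tid_v = atomic_fetch_add(&next_tid, 1); /* valid index now */
	return ta.func(ta.arg);
}

static int thread_create(pthread_t *thr, void *(*func)(void *), void *arg) {
	trampoline_arg_t *ta = malloc(sizeof(*ta));
	ta->func = func;
	ta->arg = arg;
	return pthread_create(thr, NULL, trampoline, ta);
}

static void *worker(void *arg) {
	(void)arg;
	printf("worker tid = %d\n", isc_tid_v); /* >= 0: safe array index */
	return NULL;
}

int main(void) {
	pthread_t thr;
	thread_create(&thr, worker, NULL);
	pthread_join(thr, NULL);
	return 0;
}
```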
The nsupdate system test did not record failures from the
'update_test.pl' Perl script. This was because the 'ret' value was
not being saved outside the '{ $PERL ... || ret=1 } cat_i' scope.
Change this piece to store the output in a separate file and then
cat its contents. Now the 'ret' value is being saved.
Also record failures in 'update_test.pl' if sending the update
failed.
Add missing 'n' increments to 'nsupdate/test.sh' to keep track of
test numbers.
By default readthedocs.org uses Sphinx 1.8.5, but MR !4563 introduced a
dependency on the ReferenceRole class, which is available only in
Sphinx 2.0.0.
The path to doc/arm/requirements.txt needs to be configured in
readthedocs.org.
Add a test case when a dnssec-policy is reconfigured to "none",
without setting it to "insecure" first. This is unsupported behavior,
but we want to make sure the behavior is somewhat expected. The
zone should remain signed (but will go bogus once the signatures
expire).
Update the ARM to mention the new built-in "insecure" policy. Update
the DNSSEC guide recipe "Revert to unsigned" to add the additional
step of reconfiguring the zone to "insecure" (instead of immediately
setting it to "none").
While it is meant to be used for transitioning a zone to insecure,
add a test case where a zone uses the "insecure" policy immediately.
The zone will go through DNSSEC maintenance, but the outcome should
be the same as 'dnssec-policy none;', that is the zone should be
unsigned.
The tests for going insecure should be changed to use the built-in
"insecure" policy.
The function that checks dnssec status output should again check
for the special case "none".
Add a new built-in policy "insecure", to be used to gracefully unsign
a zone. Previously you could just remove the 'dnssec-policy'
configuration from your zone statement, or set it to "none".
The built-in policy "none" (or not configured) now actually means
no DNSSEC maintenance for the corresponding zone. So if you
immediately reconfigure your zone from whatever policy to "none",
your zone will temporarily be seen as bogus by validating resolvers.
This means we can remove the functions 'dns_zone_use_kasp()' and
'dns_zone_secure_to_insecure()' again. We also no longer have to
check for the existence of key state files to figure out if a zone
is transitioning to insecure.
* The location of the digest type field has changed to where the
reserved field was.
* The reserved field is now called scheme and is where the digest
type field was.
* Digest type 2 has been defined (SHA256).
dnstap_test produces TSAN errors which originate in libfstrm.so. Unless
libfstrm is TSAN-clean or a workaround is placed in the libfstrm
sources, suppressing TSAN reports coming from libfstrm is necessary to
test DNSTAP under TSAN.
All platforms but OpenBSD have the dnstap dependencies readily
available in their respective repositories, so dnstap can be tested
there. Given that the majority of images have the dnstap dependencies
available, it seems fitting to make dnstap enabled by default.
The pytest "cacheprovider" plugin produces a .cache/v/cache/lastfailed
file, which holds a Python dictionary structure with failed tests.
However, on Ubuntu 16.04 (Xenial) the file is created even though the
test passed and the file contains just an empty dictionary ("{}").
Given that we are not interested in this feature, disabling the
"cacheprovider" plugin globally and removing per-test removals of the
.cache directory seems like the best course of action.
Define a :gl: Sphinx role that takes a GitLab issue/MR number as an
argument and creates a hyperlink to the relevant ISC GitLab URL. This
makes it easy to reach ISC GitLab pages directly from the release notes.
Make all GitLab references in the release notes use the new Sphinx role.
When `named` hung on startup, it was killed with SIGKILL,
leaving us with no information about the state the process was in.
This commit changes the start.pl script to send SIGABRT instead, so we
can properly collect and process the coredump from the hung named
process.
When reducing the number of NSEC3 iterations to 150, commit
aa26cde2ae added tests for dnssec-policy
to check that a too high iteration count is a configuration failure.
The test is not sufficient because 151 was always too high for
ECDSAP256SHA256. The test should check for a different algorithm.
There was an existing test case that checks for NSEC3 iterations.
Update the test with the new maximum values.
Update the code in 'kaspconf.c' to allow at most 150 iterations.
[CVE-2021-25215] Properly answer queries for DNAME records that require the DNAME to be processed to resolve itself
See merge request isc-private/bind9!253
When answering a query, named should never attempt to add the same RRset
to the ANSWER section more than once. However, such a situation may
arise when chasing DNAME records: one of the DNAME records placed in the
ANSWER section may turn out to be the final answer to a client query,
but there is no way to know that in advance. Tweak the relevant INSIST
assertion in query_respond() so that it handles this case properly.
qctx->rdataset is freed later anyway, so there is no need to clean it up
in query_respond().
If a zone transfer results in a zone not having any NS records, named
stops serving it because such a zone is broken. Do the same if an
incoming zone transfer results in a zone lacking an SOA record at the
apex or containing more than one SOA record.
An IXFR containing SOA records with owner names different than the
transferred zone's origin can result in named serving a version of that
zone without an SOA record at the apex. This causes a RUNTIME_CHECK
assertion failure the next time such a zone is refreshed. Fix by
immediately rejecting a zone transfer (either an incremental or
non-incremental one) upon detecting an SOA record not placed at the apex
of the transferred zone.
While working on the serve-stale backports, I noticed the following
oddities:
1. In the serve-stale system test, in one case we keep track of the
time how long it took for dig to complete. In commit
aaed7f9d8c, the code removed the
exception to check for result == ISC_R_SUCCESS on stale found
answers, and adjusted the test accordingly. This failed to update
the time tracking accordingly. Move the t1/t2 time track variables
back around the two dig commands to ensure the lookups resolved
faster than the resolver-query-timeout.
2. We can remove the setting of NS_QUERYATTR_STALEOK and
DNS_RDATASETATTR_STALE_ADDED on the "else if (stale_timeout)"
code path, because they are added later when we know we have
actually found a stale answer on a stale timeout lookup.
3. We should clear the NS_QUERYATTR_STALEOK flag from the client
query attributes instead of DNS_RDATASETATTR_STALE_ADDED (that
flag is set on the rdataset attributes).
4. In 'bin/named/config.c' we should set the configuration options
in alphabetical order.
5. In the ARM, in the backports we have added "(stale)" between
"cached" and "RRset" to make more clear a stale RRset may be
returned in this scenario.
Exerting excessive I/O load on the host running system tests should be
avoided in order to limit the number of false positives reported by the
system test suite. In some cases, running named with "-d 99" (which is
the default for system tests) results in a massive amount of logs being
generated, most of which are useless. Implement a log file size check
to draw developers' attention to overly verbose named instances used in
system tests. The warning threshold of 200,000 lines was chosen
arbitrarily.
The regression test for CVE-2020-8620 causes a lot of useless messages
to be logged. However, globally decreasing the log level for the
affected named instance would be a step too far as debugging information
may be useful for troubleshooting other checks in the "tcp" system test.
Starting a separate named instance for a single check should be avoided
when possible and thus is also not a good solution. As a compromise,
run "rndc trace 1" for the affected named instance before starting the
regression test for CVE-2020-8620.
The system test framework starts all named instances with the "-d 99"
command line option (unless it is overridden by a named.args file in a
given instance's working directory). This causes a lot of log messages
to be written to named.run files - currently over 5 million lines for a
single test suite run. While debugging information preserved in the log
files is essential for troubleshooting intermittent test failures, some
system tests involve sending hundreds or even thousands of queries,
which causes the relevant log files to explode in size. When multiple
tests (or even multiple test suites) are run in parallel, excessive
logging contributes considerably to the I/O load on the test host,
increasing the odds of intermittent test failures getting triggered.
Decrease the debug level for the seven most verbose named instances:
- use "-d 3" for ns2 in the "cacheclean" system test (it is the lowest
logging level at which the test still passes without the need to
apply any changes to tests.sh),
- use "-d 1" for the other six named instances.
This roughly halves the number of lines logged by each test suite run
while still leaving enough information in the logs to allow at least
basic troubleshooting in case of test failures.
This approach was chosen as it results in a greater decrease in the
number of lines logged than running all named instances with "-d 3",
without causing any test failures.
The malloc attribute allows the compiler to perform some optimizations
on functions that behave like malloc/calloc, such as assuming that the
returned pointer does not alias other pointers.
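A small illustration (the macro name here is illustrative, not the one
used in the codebase):
```
#include <stdlib.h>
#include <string.h>

#if defined(__GNUC__) || defined(__clang__)
#define ATTR_MALLOC __attribute__((malloc))
#else
#define ATTR_MALLOC
#endif

ATTR_MALLOC
static void *my_allocate(size_t size) {
	void *p = malloc(size);
	if (p == NULL) {
		abort();
	}
	return p; /* fresh storage: aliases nothing the caller holds */
}

int main(void) {
	char *buf = my_allocate(16);
	memcpy(buf, "hello", 6);
	free(buf);
	return 0;
}
```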
There is no possibility for mpctx->items to be NULL at the point where
the code was removed: since we enforce that fillcount > 0, if
mpctx->items == NULL when isc_mempool_get() is called, we will
allocate fillcount more items and add them to the mpctx->items list.
The _query_detach function was incorrectly unlinking the query object
from the lookup->q query list; this made it impossible to follow a
failed query lookup with the next one in the list (possibly using a
separate resolver), as the link to the next query in the list was
dissolved.
Fix by unlinking the node only when the query object is about to be
destroyed, i.e. when there are no more references to the object.
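A self-contained sketch of the fix's shape (the types are stand-ins
for dig's query/lookup structures, not the real ones):
```
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct query {
	atomic_uint references;
	struct query *next; /* link in the lookup's query list */
	int id;
} query_t;

static void query_detach(query_t **qp, query_t **list_head) {
	query_t *q = *qp;
	*qp = NULL;
	if (atomic_fetch_sub(&q->references, 1) > 1) {
		return; /* other references remain: keep the node linked
			 * so the next query is still reachable */
	}
	/* last reference gone: only now unlink and destroy */
	for (query_t **cur = list_head; *cur != NULL; cur = &(*cur)->next) {
		if (*cur == q) {
			*cur = q->next;
			break;
		}
	}
	free(q);
}

int main(void) {
	query_t *head = calloc(1, sizeof(*head));
	query_t *second = calloc(1, sizeof(*second));
	head->id = 1; head->next = second; second->id = 2;
	atomic_init(&head->references, 2);   /* list + active caller */
	atomic_init(&second->references, 1); /* list only, for brevity */

	query_t *ref = head;
	query_detach(&ref, &head);           /* refs remain: still linked */
	printf("next after failure: %d\n", head->next->id); /* 2 */

	ref = head;
	query_detach(&ref, &head);           /* last ref: unlinked, freed */
	printf("new head: %d\n", head->id);  /* 2 */
	query_t *r2 = head;
	query_detach(&r2, &head);
	return 0;
}
```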
When the keymgr needs to create new keys, it is possible it needs to
create multiple keys. The keymgr checks for keyid conflicts with
already existing keys, but it should also check against the keys it
has just created.
GitLab CI pipelines do not currently include a Linux job that would have
GSSAPI support disabled. Add the "--without-gssapi" option to the
./configure invocation on Debian 9 to address that deficiency and also
to continuously test that build-time switch.
If "tkey-gssapi-credential" is set in the configuration and GSSAPI
support is not available, named will refuse to start. As the test
system framework does not support starting named instances
conditionally, ensure that "tkey-gssapi-credential" is only present in
named.conf if GSSAPI support is available.
as with TLS, the destruction of a client stream on failed read
needs to be conditional: if we reached failed_read_cb() as a
result of a timeout on a timer which has subsequently been
reset, the stream must not be closed.
Add a check to the "dnssec" system test which ensures that RRSIG(SOA)
RRsets present anywhere else than at the zone apex are automatically
removed after a zone containing such RRsets is loaded.
If there happens to be an RRSIG(SOA) that is not at the zone apex
for any reason, it should not be considered a stopping condition
for incremental zone signing.
the destruction of the socket in tls_failed_read_cb() needs to be
conditional; if reached due to a timeout on a timer that has
subsequently been reset, the socket must not be destroyed.
this is similar in structure to the UDP timeout recovery test.
this commit adds a new mechanism to the netmgr test allowing the
listen socket to accept incoming TCP connections but never send
a response. this forces the client to time out on read.
when running read callbacks, if the event result is not ISC_R_SUCCESS,
the callback is always run asynchronously. this is a problem on timeout,
because there's no chance to reset the timer before the socket has
already been destroyed. this commit allows read callbacks to run
synchronously for both ISC_R_SUCCESS and ISC_R_TIMEDOUT result codes.
this test sets up a server socket that listens for UDP connections
but never responds. the client will always time out; it should retry
five times before giving up.
The test spawns 4 parallel workers that keep adding, modifying and
deleting zones, while the main thread repeatedly checks whether rndc
status responds within a reasonable period.
While environment and timing issues may affect the test, in most
test cases the deadlock that was taking place before the fix used to
trigger in less than 7 seconds on a machine with at least 2 cores.
What follows is a description of the steps that were leading to the deadlock:
1. `do_addzone` calls `isc_task_beginexclusive`.
2. `isc_task_beginexclusive` waits for (N_WORKERS - 1) halted tasks,
this blocks waiting for those (no. workers -1) workers to halt.
```
isc_task_beginexclusive(isc_task_t *task0) {
	...
	while (manager->halted + 1 < manager->workers) {
		wake_all_queues(manager);
		WAIT(&manager->halt_cond, &manager->halt_lock);
	}
```
3. It is possible that in `task.c / dispatch()` a worker is running a
task event, if that event blocks it will not allow this worker to
halt.
4. `do_addzone` acquires `LOCK(&view->new_zone_lock);`,
5. `rmzone` event is called from some worker's `dispatch()`, `rmzone`
blocks waiting for the same lock.
6. `do_addzone` calls `isc_task_beginexclusive`.
7. Deadlock triggered, since:
- `rmzone` is waiting for the lock.
- `isc_task_beginexclusive` is waiting for (no. workers - 1) workers
to be halted.
- since the `rmzone` event is blocked, it won't allow the worker to halt.
To fix this, the do_addzone code was updated to call
isc_task_beginexclusive before the lock is acquired; locking is
postponed to the nearest required place, and the same is done for
isc_task_beginexclusive.
The same could happen with rndc modzone, so that was addressed as well.
Four named instances in the "nsupdate" system test have GSS-TSIG support
enabled. All of them currently use "tkey-gssapi-keytab". Configure two
of them with "tkey-gssapi-credential" to test that option.
As "tkey-gssapi-keytab" and "tkey-gssapi-credential" both provide the
same functionality, no test modifications are required. The difference
between the two options is that the value of "tkey-gssapi-keytab" is an
explicit path to the keytab file to acquire credentials from, while the
value of "tkey-gssapi-credential" is the name of the principal whose
credentials should be used; those credentials are looked up in the
keytab file expected by the Kerberos library, i.e. /etc/krb5.keytab by
default. The path to the default keytab file can be overridden by
setting the KRB5_KTNAME environment variable. Utilize that variable to
use existing keytab files with the "tkey-gssapi-credential" option.
The KRB5_KTNAME environment variable should not interfere with the
"tkey-gssapi-keytab" option. Nevertheless, rename one of the keytab
files used with "tkey-gssapi-keytab" to something else than the contents
of the KRB5_KTNAME environment variable in order to make sure that both
"tkey-gssapi-keytab" and "tkey-gssapi-credential" are actually tested.
This mostly removes stuff that's either deprecated, obsolete or not used
at all:
* Update the minimal autoconf version to 2.69
* AC_PROG_CC_C99 is deprecated, just use AC_PROG_CC as we require C11
anyway
* AC_HEADER_TIME is deprecated, both <sys/time.h> and <time.h> can be
included at the same time, and we don't use the macros that
AC_HEADER_TIME defines anywhere
* AC_HEADER_STDC checks for ISO C90 and we require at least C11
* Replace AC_TRY_*([]) with AC_*_IFELSE([AC_LANG_PROGRAM()])
* Update m4/ax_check_openssl.m4 from serial 10 to serial 11
* Update m4/ax_gcc_func_attribute.m4 from serial 10 to serial 13
* Update m4/ax_pthread.m4 from serial 24 to serial 30
* Add early AC_CANONICAL_TARGET call to prevent warning from AX_PTHREAD
This commit changes the taskmgr to run the individual tasks on the
netmgr internal workers. While an effort has been put into keeping the
taskmgr interface intact, a couple of changes have been made:
* The taskmgr has no concept of universal privileged mode - rather the
tasks are either privileged or unprivileged (normal). The privileged
tasks are run as the first thing when the netmgr is unpaused. There
are now four different queues in the netmgr:
1. priority queue - netievents on the priority queue are run even when
the taskmgr enters exclusive mode and the netmgr is paused. This is
needed to properly start listening on the interfaces, free
resources and resume.
2. privileged task queue - only privileged tasks are queued here and
this is the first queue that gets processed when the network manager
is unpaused using isc_nm_resume(). All netmgr workers need to
clean the privileged task queue before they all proceed to normal
operation. Both task queues are also processed when the workers are
finishing.
3. task queue - only (traditional) tasks are scheduled here, and this
queue along with the privileged task queue is processed when the
netmgr workers are finishing. This is needed to process the task
shutdown events.
4. normal queue - this is the queue with netmgr events, e.g. reading,
sending, callbacks; pretty much everything else is processed here.
* The isc_taskmgr_create() now requires an initialized netmgr (isc_nm_t)
object.
* The isc_nm_destroy() function now waits indefinitely, but it
will print out the active objects when in tracing mode
(-DNETMGR_TRACE=1 and -DNETMGR_TRACE_VERBOSE=1); the netmgr has been
made a little more asynchronous and it might take a longer time to
shut down all the active networking connections.
* Previously, isc_nm_stoplistening() was a synchronous operation.
This has been changed: isc_nm_stoplistening() now just schedules
the child sockets to stop listening and exits. This was needed to
prevent a deadlock, as the (traditional) tasks are now executed on
the netmgr threads.
* The socket selection logic in isc__nm_udp_send() was flawed, but
fortunately it was broken, so we never hit the problem of creating
a uvreq_t on the socket from the nmhandle_t while a different socket
could be picked up later, resulting in the send callback being run
on a socket with a different threadid than the one currently running.
When we were reading from the xfrin socket and the transfer was shut
down, the shutdown function would call `xfrin_fail()`, which in turn
calls `xfrin_cancelio()`; that causes the read callback to be invoked
with the `ISC_R_CANCELED` status code, which caused yet another
`xfrin_fail()` call.
The fix here is to ensure that `xfrin_fail()` is run only once,
using better synchronization on the xfr->shuttingdown flag.
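A minimal sketch of the run-once pattern (the flag and function names
mirror the description above, but the code is illustrative):
```
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool shuttingdown;

static void xfr_fail(int result) {
	bool expected = false;
	/* atomically flip the flag; only the first caller wins */
	if (!atomic_compare_exchange_strong(&shuttingdown, &expected,
					    true)) {
		return; /* someone already failed the transfer */
	}
	printf("failing transfer once, result=%d\n", result);
	/* cancel I/O here; the resulting canceled-read callback will
	 * call xfr_fail() again, which is now a harmless no-op */
}

int main(void) {
	atomic_init(&shuttingdown, false);
	xfr_fail(1);
	xfr_fail(2); /* no-op */
	return 0;
}
```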
Since all the libraries are internal now, just clean up the ISCAPI
remnants in the isc_socket, isc_task and isc_timer APIs. This means
there's one less layer, as the following changes have been done:
* struct isc_socket and struct isc_socketmgr have been removed
* struct isc__socket and struct isc__socketmgr have been renamed
to struct isc_socket and struct isc_socketmgr
* struct isc_task and struct isc_taskmgr have been removed
* struct isc__task and struct isc__taskmgr have been renamed
to struct isc_task and struct isc_taskmgr
* struct isc_timer and struct isc_timermgr have been removed
* struct isc__timer and struct isc__timermgr have been renamed
to struct isc_timer and struct isc_timermgr
* All the associated code that dealt with typing isc_<foo>
to isc__<foo> and back has been removed.
When resolve.c was moved from lib/samples to bin/tests/system, the
resolve.vcxproj.in would still contain old paths to the directory
root. This commit adds one more ..\ to match the directory depth.
Additionally, fixup the path in BINDInstall.vcxproj.in to be
bin/tests/system and not bin/tests/samples.
When setnsec3param() is scheduled from zone_postload(), there is no
guarantee that `zone->db` is set yet. Thus when setnsec3param() is
called, we need to check for the existence of `zone->db` and
reschedule the task, because calling `rss_post()` on a zone with an
empty `.db` ends up being a no-op (the function just returns).
Previously, the taskmgr, timermgr and socketmgr had a constructor
variant that would create the mgr on top of an existing appctx. This
was no longer true, and isc_<*>mgr was just calling isc_<*>mgr_create()
directly without any extra code.
This commit just cleans up the extra function.
"resolve" is used by the resolver system tests, and I'm not
certain whether delv exercises the same code, so rather than
remove it, I moved it to bin/tests/system.
sample code for export libraries is no longer needed and
this code is not used for any internal tests. also, sample-gai.c
had already been removed but there were some dangling references.
the libdns client API is no longer being maintained for
external use, we can remove the code that isn't being used
internally, as well as the related tests.
Too much logic was crammed inside dns_journal_rollforward(), which made
it harder to follow. dns_journal_rollforward() was refactored to work
over an already-opened journal, and some of the previous logic was
moved to the new static zone_journal_rollforward(), which separates the
journal "rollforward" logic from the "zone" logic.
When dns_journal_rollforward() returned ISC_R_RECOVERABLE, the
distinction between 'up to date' and 'success' was lost; as a
consequence, zone_needdump() was called, writing out the zone file when
it shouldn't have been. This change restores that distinction. Adjust
the system test to reflect the visible changes.
It fixes a corner case which was causing dig to print annoying
messages like:
14-Apr-2021 18:48:37.099 SSL error in BIO: 1 TLS error (errno:
0). Arguments: received_data: (nil), send_data: (nil), finish: false
even when all the data was properly processed.
Before this fix, underlying TCP sockets could remain open for longer
than actually required, causing unit tests to fail with lots of
ISC_R_TOOMANYOPENFILES errors.
The change also enables graceful SSL shutdown (before, that would
happen only in the case when isc_nm_cancelread() was called).
This commit merges the TLS tests into the common Network Manager unit
test suite and extends the unit test framework to include support for
additional "ping-pong" style tests where all data can be sent over a
smaller number of connections (the behaviour of the old test
suite). The tests for TCP and TLS were extended to make use of the new
mode, as this mode better translates to how the code is used in DoH.
Both TLS and TCP tests now share most of the unit test code, as they
are expected to function similarly from a user's perspective anyway.
In addition to the above, the TLS test suite was extended to include
TLS tests using the connection quota facility.
Due to the lack of "match-clients" clauses in ns4/named2.conf.in, the
same view is incorrectly chosen for all queries received by ns4 in the
"keymgr2kasp" system test. This causes only one version of the
"view-rsasha256.kasp" zone to actually be checked. Add "match-clients"
clauses to ns4/named2.conf.in to ensure the test really checks what it
claims to.
Use identical view names ("ext", "int") in ns4/named.conf.in and
ns4/named2.conf.in so that it is easier to quickly identify the
differences between these two files.
Update tests.sh to account for the above changes. Also fix a copy-paste
error in a comment to prevent confusion.
The test case for a zone with a missing include file was wrong for two
reasons:
1. It was loading the wrong file (master5 instead of master6)
2. It did not actually set the $ret variable to 1 if the test failed
(it should default to ret=1 and clear the variable if the
appropriate log is found).
Add a test case for inline-signing for a zone with an $INCLUDE
statement. There is already a test for a missing include file, this
one adds a test for a zone with an include file that does exist.
Test if the record in the included file is loaded.
The draft says that the NSEC(3) TTL must have the same TTL value
as the minimum of the SOA MINIMUM field and the SOA TTL. This was
always the intended behaviour.
Update the zone structure to also track the SOA TTL. Whenever we
use the MINIMUM value to determine the NSEC(3) TTL, use the minimum
of MINIMUM and SOA TTL instead.
There is no specific test for this; however, two tests needed adjusting
because they would otherwise fail: they were testing for NSEC3 records
including the TTL. Update these checks to use 600 (the SOA TTL),
rather than 3600 (the SOA MINIMUM).
It is more intuitive to have the countdown 'max-stale-ttl' as the
RRset TTL, instead of a 0 TTL. This information was already available
in a comment ("; stale (will be retained for x more seconds"), but
Support suggested putting it in the TTL field instead.
Before binding an RRset, check the time and see if this record is
stale (or perhaps even ancient). Marking a header stale or ancient
happens only when looking up an RRset in cache, but binding an RRset
can also happen on other occasions (for example when dumping the
database).
Check the time and compare it to the header. If according to the
time the entry is stale but not ancient, set the STALE attribute.
If according to the time it is ancient, set the ANCIENT attribute.
We could mark the header stale or ancient here, but that requires
locking, so instead we only compare the current time against
the rdh_ttl.
Adjust the test to check the dump-db output before querying for data.
In the dumped file the entry should be marked as stale, even though no
cache lookup has happened since the initial query.
When introducing change 5149, "rndc dumpdb" started to print a line
above a stale RRset, indicating how long the data will be retained.
At that time, I thought it should also be possible to load
a cache from file. But if a TTL has a value of 0 (because it is stale),
stale entries wouldn't be loaded from file. So, I added the
'max-stale-ttl' to TTL values, and adjusted the $DATE accordingly.
Since we actually don't have a "load cache from file" feature, this
is premature and is causing confusion among operators. This commit
changes the 'max-stale-ttl' adjustments.
A check in the serve-stale system test is added for a non-stale
RRset (longttl.example) to make sure the TTL in cache is sensible.
Also, the comment above stale RRsets could contain nonsensical
values. A possible reason why this may happen is when the RRset was
marked as stale but the 'max-stale-ttl' has passed (so it is actually
an RRset awaiting cleanup). This would cause the "will be retained"
value to be negative, but since it is stored in a uint32_t, you would
get a nonsensical value (e.g. 4294362497).
To mitigate this, we now also check that the header is not ancient.
In addition, we check whether the stale_ttl would be negative, and if
so we set it to 0. Most likely this will not happen, because the
header would already have been marked ancient, but there is a possible
race condition where 'rdh_ttl + serve_stale_ttl' has passed but the
header has not been checked for staleness.
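A small sketch of the guarded computation (the helper name and
parameters are hypothetical; rdh_ttl is treated as an absolute expiry
time, as in the description above):
```
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t stale_retention(uint32_t rdh_ttl, uint32_t serve_stale_ttl,
				uint32_t now, bool ancient) {
	if (ancient) {
		return 0; /* awaiting cleanup: do not print a retention */
	}
	uint32_t expiry = rdh_ttl + serve_stale_ttl;
	if (expiry < now) {
		return 0; /* would be negative: clamp instead of wrapping */
	}
	return expiry - now;
}

int main(void) {
	/* header went stale at t=1000, serve-stale window of 600s */
	printf("%u\n", stale_retention(1000, 600, 1200, false)); /* 400 */
	printf("%u\n", stale_retention(1000, 600, 1700, false)); /* 0 */
	return 0;
}
```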
When system tests are run on Windows, they are assigned port ranges that
are 100 ports wide and start from port number 5000. This is a different
port assignment method than the one used on Unix systems. Drop the "-p"
command line option from bin/tests/system/run.sh invocations used for
starting system tests on Windows to unify the port assignment method
used across all operating systems.
The get_ports.sh script is used for determining the range of ports a
given system test should use. It first determines the start of the port
range to return (the base port); it can either be specified explicitly
by the caller or chosen randomly. Subsequent ports are picked
sequentially, starting from the base port. To ensure no single port is
used by multiple tests, a state file (get_ports.state) containing the
last assigned port is maintained by the script. Concurrent access to
the state file is protected by a lock file (get_ports.lock); if one
instance of the script holds the lock file while another instance tries
to acquire it, the latter retries its attempt to acquire the lock file
after sleeping for 1 second; this retry process can be repeated up to 10
times before the script returns an error.
There are some problems with this approach:
- the sleep period in case of failure to acquire the lock file is
fixed, which leads to a "thundering herd" type of problem, where
(depending on how processes are scheduled by the operating system)
multiple system tests try to acquire the lock file at the same time
and subsequently sleep for 1 second, only for the same situation to
likely happen the next time around,
- the lock file is being locked and then unlocked for every single
port assignment made, not just once for the entire range of ports a
system test should use; in other words, the lock file is currently
locked and unlocked 13 times per system test; this increases the
odds of the "thundering herd" problem described above preventing a
system test from getting one or more ports assigned before the
maximum retry count is reached (assuming multiple system tests are
run in parallel); it also enables the range of ports used by a given
system test to be non-sequential (which is a rather cosmetic issue,
but one that can make log interpretation harder than necessary when
test failures are diagnosed),
- both issues described above cause unnecessary delays when multiple
system tests are started in parallel (due to high lock file
contention among the system tests being started),
- maintaining a state file requires ensuring proper locking, which
complicates the script's source code.
Rework the get_ports.sh script so that it assigns non-overlapping port
ranges to its callers without using a state file or a lock file:
- add a new command line switch, "-t", which takes the name of the
system test to assign ports for,
- ensure every instance of get_ports.sh knows how many ports all
system tests which form the test suite are going to need in total
(based on the number of subdirectories found in bin/tests/system/),
- in order to ensure all instances of get_ports.sh work on the same
global port range (so that no port range collisions happen), a
stable (throughout the expected run time of a single system test
suite) base port selection method is used instead of the random one;
specifically, the base port, unless specified explicitly using the
"-p" command line switch, is derived from the number of hours which
passed since the Unix Epoch time,
- use the name of the system test to assign ports for (passed via the
new "-t" command line switch) as a unique index into the global
system test range, to ensure all system tests use disjoint port
ranges.
The fromhex.pl script needs to be copied from the source directory to
the build directory before any test is run, otherwise the out-of-tree
build fails to find it. Given that the script is used only in system
tests, move it to bin/tests/system/.
Even if a call to gss_accept_sec_context() fails, it might still cause a
GSS-API response token to be allocated and left for the caller to
release. Make sure the token is released before an early return from
dst_gssapi_acceptctx().
Update the system test to include a recoverable managed.keys journal
created with <size,serial0,serial1,0> transactions, and test that it
has been updated as part of the startup process.
Previously, dns_journal_begin_transaction() could reserve the wrong
amount of space. We now check that the transaction is internally
consistent when upgrading/downgrading a journal, and we also handle
bad transaction headers.
Instead of journal_write(), use the correct call, journal_write_xhdr(),
to write the dummy transaction header; it looks at j->header_ver1 to
determine which transaction header format to write, instead of always
writing a zero-filled journal_rawxhdr_t header.
The isc_nm_tlsdnsconnect() call could end up with two connect callbacks
called when the timeout fired and the TCP connection was aborted,
but the TLS handshake was not complete yet. isc__nm_connecttimeout_cb()
forgot to clean up sock->tls.pending_req when the connect callback was
called with ISC_R_TIMEDOUT, leading to a second callback running later.
A new argument has been added to the isc__nm_*_failed_connect_cb and
isc__nm_*_failed_read_cb functions, to indicate whether the callback
needs to run asynchronously or not.
We already skip most of the recv_send tests in CI because they are
too timing-sensitive to be run in an overloaded environment. This
commit adds a similar change to tls_test before we merge tls_test into
netmgr_test.
if a test failed at the beginning of nm_teardown(), the function
would abort before isc_nm_destroy() or isc_tlsctx_free() were reached;
we would then abort when nm_setup() was run for the next test case.
rearranging the teardown function prevents this problem.
The isc_nm_*connect() functions were refactored to always return the
connection status via the connect callback instead of sometimes returning
the hard failure directly (for example, when the socket could not be
created, or when the network manager was shutting down).
This commit changes the connect functions in all the network manager
modules, and also makes the necessary refactoring changes in places
where the connect functions are called.
dig previously ran isc_nm_udpconnect() three times before giving
up, to work around a freebsd bug that caused connect() to return
a spurious transient EADDRINUSE. this commit moves the retry code
into the network manager itself, so that isc_nm_udpconnect() no
longer needs to return a result code.
The TCP module has been updated to use the generic functions from
netmgr.c instead of its own local copies. This brings the module
mostly up to par with the TCPDNS and TLSDNS modules.
Several problems were discovered and fixed after the change in
the connection timeout in the previous commits:
* In TLSDNS, the connection callback was not called at all under some
circumstances when the TCP connection had been established, but the
TLS handshake hadn't been completed yet. Additional checks have
been put in place so that tls_cycle() will end early when the
nmsocket is invalidated by the isc__nm_tlsdns_shutdown() call.
* In TCP, TCPDNS and TLSDNS, new connections would be established
even when the network manager was shutting down. The new
call isc__nm_closing() has been added and is used to bail out
early even before uv_tcp_connect() is attempted.
Similarly to the read timeout, it's now possible to recover from an
ISC_R_TIMEDOUT event by restarting the timer from the connect callback.
The change here also fixes platforms that are missing the socket()
option to set the TCP connection timeout, by moving the timeout code
into user space. On platforms that support setting the connect timeout
via a socket option, the timeout has been hardcoded to 2 minutes (the
maximum value of tcp-initial-timeout).
Previously, when the client timed out on read, the client socket would
be automatically closed and destroyed when the nmhandle was detached.
This commit changes the logic so that it's possible for the callback to
recover from the ISC_R_TIMEDOUT event by restarting the timer. This is
done by calling isc_nmhandle_settimeout(), which prevents the timeout
handling code from destroying the socket; instead, it continues to wait
for data.
One specific use case for multiple timeouts is serve-stale - the client
socket can be created with a shorter timeout (as specified with
stale-answer-client-timeout), so we can serve the requestor a stale
answer but keep the original query running for a longer time.
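A schematic sketch of a read callback recovering from a timeout (the
types and the settimeout stand-in are placeholders, not the real
netmgr API):
```
#include <stdio.h>

typedef struct handle { /* stand-in for the netmgr handle */
	unsigned int timeout_ms;
	int retries;
} handle_t;

/* restarting the timer from the callback signals "keep waiting" to
 * the timeout-handling code, so the socket is not destroyed */
static void handle_settimeout(handle_t *h, unsigned int ms) {
	h->timeout_ms = ms;
	printf("timer restarted: %u ms\n", ms);
}

#define R_SUCCESS 0
#define R_TIMEDOUT 1

static void read_cb(handle_t *h, int eresult) {
	if (eresult == R_TIMEDOUT && h->retries-- > 0) {
		handle_settimeout(h, 500); /* recover: wait for data again */
		return;
	}
	if (eresult != R_SUCCESS) {
		printf("giving up, socket will be closed\n");
		return;
	}
	printf("data received\n");
}

int main(void) {
	handle_t h = { .timeout_ms = 500, .retries = 2 };
	read_cb(&h, R_TIMEDOUT); /* recovered */
	read_cb(&h, R_TIMEDOUT); /* recovered */
	read_cb(&h, R_TIMEDOUT); /* retries exhausted */
	return 0;
}
```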
The full netmgr test suite is unstable when run in CI due to various
timing issues. Previously, we enabled the full test suite only when
the CI_ENABLE_ALL_TESTS environment variable was set, but that went
against the original intent of running the full suite when an
individual developer runs it locally.
This change disables the full test suite only when running in CI and
CI_ENABLE_ALL_TESTS is not set.
Using "stale-answer-client-timeout" turns out to have unforeseen
negative consequences, and thus it is better to disable the feature
by default for the time being.
Fix a race between the zone_maintenance and dns_zone_notifyreceive
functions: zone_maintenance was attempting to read a zone flag by
calling DNS_ZONE_FLAG(zone, flag) while dns_zone_notifyreceive was
updating a flag in the same zone by calling DNS_ZONE_SETFLAG(zone, ...).
The code reading the flag in zone_maintenance was not protected by the
zone's lock; to avoid the race, the zone's lock is now acquired
before an attempt to read the zone flag is made.
When a unit test binary hangs, the GitLab CI job in which it is run is
stuck until its run time limit is exceeded. Furthermore, it is not
trivial to determine which test(s) hung in a given GitLab CI job based
on its log. To prevent these issues, enforce a run time limit on every
binary executed by the lib/unit-test-driver.sh script. Use a timeout of
5 minutes for consistency with older BIND 9 branches, which employed
Kyua for running unit tests. Report an exit code of 124 when the run
time limit is exceeded for a unit test binary, for consistency with the
"timeout" tool included in GNU coreutils.
See "BUGS" section at:
https://www.openssl.org/docs/man1.1.1/man3/SSL_get_error.html
It is mentioned there that when TLS status equals SSL_ERROR_SYSCALL
AND errno == 0 it means that underlying transport layer returned EOF
prematurely. However, we are managing the transport ourselves, so we
should just resume reading from the TCP socket.
It seems that this case has been handled properly on modern versions
of OpenSSL. That being said, the situation goes in line with the
manual: it is briefly mentioned there that SSL_ERROR_SYSCALL might be
returned not only in a case of low-level errors (like system call
failures).
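A fragment sketching that decision (only the SSL_ERROR_SYSCALL branch
is the point here; the surrounding I/O driving is elided):
```
#include <errno.h>
#include <openssl/ssl.h>

/* returns: 1 = resume reading from the transport,
 *          0 = this SSL operation completed,
 *         -1 = hard error */
int handle_ssl_result(SSL *ssl, int ret) {
	switch (SSL_get_error(ssl, ret)) {
	case SSL_ERROR_NONE:
		return 0;
	case SSL_ERROR_WANT_READ:
	case SSL_ERROR_WANT_WRITE:
		return 1; /* need more transport data / write space */
	case SSL_ERROR_SYSCALL:
		if (errno == 0) {
			/* premature EOF from the transport layer; we
			 * drive the transport ourselves, so just resume
			 * reading from the TCP socket */
			return 1;
		}
		return -1;
	default:
		return -1;
	}
}
```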
When we are recursing, RPZ processing is not allowed. But when we are
performing a lookup due to "stale-answer-client-timeout", we are still
recursing. This effectively means that RPZ processing is disabled on
such a lookup.
In this case, bail out of the "stale-answer-client-timeout" lookup and
wait for recursion to complete, as we can't perform the RPZ rewrite
rules reliably.
The dboption DNS_DBFIND_STALEONLY caused confusion because it implies
we are looking for stale data **only** and ignore any active RRsets in
the cache. Rename it to DNS_DBFIND_STALETIMEOUT as it is more clear
the option is related to a lookup due to "stale-answer-client-timeout".
Rename other usages of "staleonly", instead use "lookup due to...".
Also rename related function and variable names.
When doing a staleonly lookup we don't want to fall back to recursion.
After all, there are obviously problems with recursion, otherwise we
wouldn't be doing a staleonly lookup.
When resuming from recursion however, we should restore the
RECURSIONOK flag, allowing future required lookups for this client
to recurse.
When implementing "stale-answer-client-timeout", we decided that
we should only return positive answers prematurely to clients. A
negative response is not useful, and in that case it is better to
wait for the recursion to complete.
To do so, we check the result and if it is not ISC_R_SUCCESS, we
decide that it is not good enough. However, there are more return
codes that could lead to a positive answer (e.g. CNAME chains).
This commit removes the exception and now uses the same logic that
other stale lookups use to determine if we found a useful stale
answer (stale_found == true).
This means we can simplify two test cases in the serve-stale system
test: nodata.example is no longer treated differently than data.example.
The NS_QUERYATTR_ANSWERED attribute is to prevent sending a response
twice. Without the attribute, this may happen if a staleonly lookup
found a useful answer and sends a response to the client, and later
recursion ends and also tries to send a response.
The attribute was also used to mask adding a duplicate RRset. This is
considered harmful. When we created a response to the client with a
stale only lookup (regardless of whether we actually sent the response),
we should clear the rdatasets that were added during that lookup.
Mark such rdatasets with a new attribute,
DNS_RDATASETATTR_STALE_ADDED. Set a query attribute,
NS_QUERYATTR_STALEOK, if we may have added rdatasets during a stale
only lookup. Before creating a response on a normal lookup, check
whether we can expect rdatasets to have been added during a staleonly
lookup. If so, clear the rdatasets from the message that have the
DNS_RDATASETATTR_STALE_ADDED attribute set.
With stale-answer-client-timeout, we may send a response to the client,
but we may want to hold on to the network manager handle, because
recursion is going on in the background, or we need to refresh a
stale RRset.
Simplify the setting of 'nodetach':
* During a staleonly lookup we should not detach the nmhandle, so just
set it prior to 'query_lookup()'.
* During a staleonly "stalefirst" lookup set the 'nodetach' to true
if we are going to refresh the RRset.
Now there is no longer the need to clear the 'nodetach' if we go
through the "dbfind_stale", "stale_refresh_window", or "stale_only"
paths.
When doing a staleonly lookup, ignore active RRsets from cache. If we
don't, we may add a duplicate RRset to the message, and hit an
assertion failure in query.c because adding the duplicate RRset to the
ANSWER section failed.
This can happen on a race condition. When a client query is received,
the recursion is started. When 'stale-answer-client-timeout' triggers
around the same time the recursion completes, the following sequence
of events may happen:
1. Queue the "try stale" fetch_callback() event to the client task.
2. Add the RRsets from the authoritative response to the cache.
3. Queue the "fetch complete" fetch_callback() event to the client task.
4. Execute the "try stale" fetch_callback(), which retrieves the
just-inserted RRset from the database.
5. In "ns_query_done()" we are still recursing, but the "staleonly"
query attribute has already been cleared. In other words, the
query will resume when recursion ends (it already has ended but is
still on the task queue).
6. Execute the "fetch complete" fetch_callback(). It finds the answer
from recursion in the cache again and tries to add the duplicate to
the answer section.
This commit changes the logic for finding stale answers in the cache,
such that on "stale_only" lookups actually only stale RRsets are
considered. It refactors the code so that code paths for "dbfind_stale",
"stale_refresh_window", and "stale_only" are more clear.
First we call some generic code that applies in all three cases:
formatting the domain name for logging purposes, incrementing the
trystale stats, and checking whether we actually found stale data that
we can use.
The "dbfind_stale" lookup will return SERVFAIL if we didn't found a
usable answer, otherwise we will continue with the lookup
(query_gotanswer()). This is no different as before the introduction of
"stale-answer-client-timeout" and "stale-refresh-time".
The "stale_refresh_window" lookup is similar to the "dbfind_stale"
lookup: return SERVFAIL if we didn't found a usable answer, otherwise
continue with the lookup (query_gotanswer()).
Finally the "stale_only" lookup.
If the "stale_only" lookup was triggered because of an actual client
timeout (stale-answer-client-timeout > 0), and if database lookup
returned a stale usable RRset, trigger a response to the client.
Otherwise return and wait until the recursion completes (or the
resolver query times out).
If the "stale_only" lookup is a "stale-anwer-client-timeout 0" lookup,
preferring stale data over a lookup. In this case if there was no stale
data, or the data was not a positive answer, retry the lookup with the
stale options cleared, a.k.a. a normal lookup. Otherwise, continue
with the lookup (query_gotanswer()) and refresh the stale RRset. This
will trigger a response to the client, but will not detach the handle
because a fetch will be created to refresh the RRset.
The stale-answer-client-timeout feature introduced a dependency on
when a client may be detached from the handle. The dboption
DNS_DBFIND_STALEONLY was reused to track this attribute. This overloads
the meaning of this database option and actually introduced a bug,
because the option was checked in other places. In particular, in
'ns_query_done()' there is a check for 'RECURSING(qctx->client) &&
(!QUERY_STALEONLY(&qctx->client->query) || ...', and the condition is
satisfied because recursion has not completed yet while
DNS_DBFIND_STALEONLY has already been cleared by that time (in
query_lookup()), because we found a useful answer and we should detach
the client from the handle after sending the response.
Add a new boolean to the client structure to keep track of whether
detaching the client from the handle is allowed. It is only disallowed
if we are in a staleonly lookup and didn't find a useful answer.
This commit fixes a crash in dig when it encounters an unexpected
header value. The bug was introduced at some point late in the last DoH
development cycle. It also refactors the relevant code a little bit to
ensure better incoming data validation for client-side DoH
connections.
Tag the libraries with check_ to prevent them from being installed
by "make install". Additionally, "make check" requires the .so files
to be created, which requires .lai files to be constructed, which in
turn requires -rpath <dir> as part of "linking" the .la file.
The gcc:tarball CI job may identify problems with tarballs created by
"make dist" of the tarball-create CI job. Enabling the gcc:tarball CI
job in web-triggered pipelines provides developers with a test vector.
Some man pages (e.g. dnstap-read.1, named-nzd2nzf.1) should only be
installed conditionally (when the relevant features are enabled in a
given BIND 9 build). This is achieved using Automake conditionals.
However, while all source reStructuredText files are included in
tarballs produced by "make dist" (distribution tarballs) as they should
be, the list of pre-generated man pages included in distribution
tarballs incorrectly depends on the ./configure switches used for the
build for which "make dist" is run. Meanwhile, distribution tarballs
should always contain all the files necessary to build any flavor of
BIND 9.
Here is an example scenario which fails to work as intended:
autoreconf -i
./configure --disable-maintainer-mode
make dist
tar --extract --file bind-9.17.11.tar.xz
cd bind-9.17.11
./configure --disable-maintainer-mode --enable-dnstap
make
Fix by always including pre-generated versions of all conditionally
installed man pages in EXTRA_DIST. While this may cause some of them to
appear in EXTRA_DIST more than once (depending on the ./configure
switches used for the build for which "make dist" is run), it seems to
not be a problem for Automake.
add matching macros to pass arguments from called methods
to generic methods. This will reduce the amount of work
required when extending methods.
Also clean up unnecessary UNUSED declarations.
util.h requires the ISC_CONSTRUCTOR definition, which depends on
config.h inclusion, but config.h is not included from isc/util.h (or
any other header). As a result, isc/util.h fails hard when used without
including BIND's config.h.
Move the check to the C file where ISC_CONSTRUCTOR is used, and ensure
config.h is included there.
Added tests to ensure that dig won't retry sending a query over tcp
(+tcp) when a TCP connection is closed prematurely (EOF is read) if
either +tries=1 or retry=0 is specified on the command line.
Now that a premature EOF on TCP connections takes +tries and +retry
into account, the dig system tests handling TCP EOF with +tries=1
expected dig to make a second attempt at the TCP query, which no longer
happens.
To make the tests work as expected, the +tries value was adjusted to 2,
making them behave as before the dig update.
Before this commit, a premature EOF (connection closed) on TCP queries
caused dig to automatically attempt to send the query again, even
if +tries=1 or +retry=0 was provided on the command line.
This commit fixes the problem by taking into account the number of
retries specified by the user when processing a premature EOF on TCP
connections.
Add kasp.sh to the list of scripts copied from the source directory to
the build directory before any test is run. This will fix
the out-of-tree test failures introduced in commit
ecb073bdd6 on the 'main' branch.
When calling "rndc dnssec -checkds", it may take some milliseconds
before the appropriate changes have been written to the state file.
Add retry_quiet mechanisms to allow the write operation to finish.
Also retry_quiet the check for the next key event. A "rndc dnssec"
command may trigger a zone_rekey event and this will write out
a new "next key event" log line, but it may take a bit longer
than expected in the tests.
Call 'dns_zone_rekey' after a 'rndc dnssec -checkds' or 'rndc dnssec
-rollover' command is received, because such a command may influence
the next key event. Updating the keys immediately avoids unnecessary
rollover delays.
The kasp system test no longer needs to call 'rndc loadkeys' after
a 'rndc dnssec -checkds' or 'rndc dnssec -rollover' command.
CDS/CDNSKEY DELETE records are only useful if they are signed;
otherwise the parent cannot verify these RRsets anyway. So once the DS
has been removed (and this has been signaled to BIND), we can remove the
DNSKEY and RRSIG records, and at that point we can also remove the
CDS/CDNSKEY records.
Change the 'check_keys' function to try three times. Some intermittent
kasp test failures occur because we are inspecting the key files
before the actual change has happened. The 'retry_quiet' approach allows
a bit more time for the write operation to finish.
This MR introduces a new system test, 'keymgr2kasp', to test
migration to 'dnssec-policy'. It moves some existing tests from
the 'kasp' system test into the new one.
Also a common script, 'kasp.sh', to be used in kasp-specific tests,
is introduced.
The 'keymgr_key_init()' function initializes key states if they have
not been set previously. It looks at the key timing metadata and uses
the given times to determine whether a state should be set to
RUMOURED or OMNIPRESENT.
However, the DNSKEY and ZRRSIG states were mixed up: When looking
at the Activate timing metadata we should set the ZRRSIG state, and
when looking at the Published timing metadata we should set the
DNSKEY state.
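A simplified sketch of the corrected mapping; the types and helper
names are invented for illustration, the real logic lives in
'keymgr_key_init()':

    #include <time.h>

    typedef enum { HIDDEN, RUMOURED, OMNIPRESENT } keystate_t;

    static keystate_t
    state_from_time(time_t when, time_t now, time_t ttl) {
            if (when > now) {
                    return HIDDEN; /* not yet in the zone */
            }
            /* long enough in the zone to have propagated everywhere? */
            return (now - when > ttl) ? OMNIPRESENT : RUMOURED;
    }

    static void
    init_states(time_t published, time_t activate, time_t now, time_t ttl,
                keystate_t *dnskey_state, keystate_t *zrrsig_state) {
            /* Published drives DNSKEY; Activate drives ZRRSIG
             * (previously these two were swapped). */
            *dnskey_state = state_from_time(published, now, ttl);
            *zrrsig_state = state_from_time(activate, now, ttl);
    }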
Add two test zones that migrate to dnssec-policy. Test if the key
states are set accordingly given the timing metadata.
The rumoured.kasp zone has its Publish/Active/SyncPublish times set
not too long ago so the key states should be set to RUMOURED. The
omnipresent.kasp zone has its Publish/Active/SyncPublish times set
long enough to set the key states to OMNIPRESENT.
Slightly change the init_migration_keys function to set the
key lifetime to "none" (legacy keys don't have a lifetime). Then, in the
test case, set the expected key lifetime explicitly.
This commit is somewhat editorial, as it does not introduce anything
new nor fix anything.
The layout in keymgr2kasp/tests.sh has been changed, with the
intention of making it clearer where a test scenario begins and ends.
The publication time of some ZSKs has been changed, to make a clearer
distinction between publication time and activation time.
The kasp system test was getting pretty large, and more tests are on
the way. Time to split up. Move tests that are related to migrating
to dnssec-policy to a separate directory 'keymgr2kasp'.
The named-checkzone tool can also be invoked as named-compilezone. Make
sure a man page is installed for that alias. Move and rename the
"man_named-checkzone" label to prevent a Sphinx duplicate label warning
from being raised (see commit 84862e96c1
for more information).
The named-nzd2nzf utility is only built and installed for LMDB-enabled
builds. Adjust the relevant Makefile.am file to make sure the
named-nzd2nzf.1 man page is also only built and installed for
LMDB-enabled builds.
The dnstap-read utility is only built and installed for dnstap-enabled
builds. Adjust the relevant Makefile.am file to make sure the
dnstap-read.1 man page is also only built and installed for
dnstap-enabled builds.
Issue #2575 was merged to 9.16 only as change 5603, but a placeholder
was not added to CHANGES in the main branch. This commit adds the
placeholder and renumbers the two subsequent changes.
Resolve "dig -u is extremely inaccurate, especially on machines with the kernel timer tick set at 100Hz"
Closes#2592
See merge request isc-projects/bind9!4826
The TIME_NOW macro calls isc_time_now, which uses CLOCK_REALTIME_COARSE
to get the current time. This is perfectly fine for millisecond
resolution; however, when the user requests microsecond resolution, they
are going to get very inaccurate results. This is especially true on a
server-class machine where the clock tick may be set to 100 Hz.
This changes dig to use the new TIME_NOW_HIRES macro, which uses
CLOCK_MONOTONIC_RAW; it is more expensive, but gets the *actual*
current time rather than the time at the last kernel tick.
The current isc_time_now uses CLOCK_REALTIME_COARSE, which only updates
on a timer tick. This clock is generally fine for millisecond accuracy,
but on servers with 100 Hz clocks it is nowhere near accurate
enough for microsecond accuracy.
This commit adds a new isc_time_now_hires function that uses
CLOCK_REALTIME, which gives the actual current time, though it is
somewhat more expensive to call. When microsecond accuracy is required,
spending the extra resources is justified.
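The difference is easy to observe with a small standalone program;
CLOCK_REALTIME_COARSE is Linux-specific, and on a 100 Hz kernel its
reported resolution is typically around 10 ms:

    #include <stdio.h>
    #include <time.h>

    int
    main(void) {
            struct timespec res;
    #ifdef CLOCK_REALTIME_COARSE
            clock_getres(CLOCK_REALTIME_COARSE, &res);
            printf("coarse resolution:   %ld ns\n", res.tv_nsec);
    #endif
            clock_getres(CLOCK_REALTIME, &res);
            printf("realtime resolution: %ld ns\n", res.tv_nsec);
            return 0;
    }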
In CMocka versions older than 1.1.3, the skip() function would cause
the whole unit test to abort when CMOCKA_TEST_ABORT is set. As this is a
problem only on Debian 9 Stretch and Ubuntu 16.04 Xenial, we simply
require CMocka >= 1.1.3, disable the unit testing on Debian 9 Stretch
until we can pull libcmocka-dev from stretch-backports, and remove
Ubuntu 16.04 Xenial from the CI as it is reaching End of Standard
Support at the end of April 2021.
When NETMGR_TRACE(_VERBOSE) is enabled, the build would fail on some
non-Linux, non-glibc platforms, so:
* Use the <inttypes.h> print macros, because uint_fast32_t is not always
unsigned long
* The header <execinfo.h> is not available on non-glibc systems, thus
this commit adds dummy backtrace() and backtrace_symbols_fd() functions
for platforms without HAVE_BACKTRACE
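The fallback can be as simple as the following sketch (the exact shape
of the stubs in BIND may differ):

    #if !defined(HAVE_BACKTRACE)
    /* Dummy stand-ins for platforms without <execinfo.h>:
     * capture no frames and print nothing. */
    static int
    backtrace(void **buffer, int size) {
            (void)buffer;
            (void)size;
            return 0;
    }

    static void
    backtrace_symbols_fd(void *const *buffer, int size, int fd) {
            (void)buffer;
            (void)size;
            (void)fd;
    }
    #endif /* !HAVE_BACKTRACE */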
The netmgr unit tests were designed to push the system to its limits
by sending as many queries as possible in a busy loop from multiple
threads. This mostly works with UDP, but with stateful protocols, where
establishing the connection takes more time, the tests failed quite
often in the CI. On FreeBSD this happened more often, because the
socket() call would fail spuriously, making the problem even worse.
This commit does several things to improve reliability:
* The return value of isc_nm_<proto>connect() is always checked, and
the connection is retried when scheduling it fails
* The busy while loop has been slowed down with usleep(1000); so that
the netmgr threads can schedule the work and get it executed.
* The isc_thread_yield() was replaced with usleep(1000); also to allow
the other threads to do any work.
* Instead of waiting on just one variable, we wait for multiple
variables to reach the final value
* We are wrapping the netmgr operations (connects, reads, writes,
accepts) with reference counting and waiting for all the callbacks to
be accounted for.
This has two effects:
a) the isc_nm_t is always clean of active sockets and handles when
destroyed, so it will prevent the spurious INSIST(references == 1)
from isc_nm_destroy()
b) the unit test now ensures that all the callbacks are always called
when they should be called, so any stuck test means that there was
a missing callback call and it is always a real bug
These changes allow us to remove the workaround that would not run
certain tests on systems without port load-balancing.
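The accounting pattern looks roughly like this sketch; the counter and
function names are invented, the real tests use their own helpers:

    #include <stdatomic.h>
    #include <unistd.h>

    static atomic_uint_fast32_t scheduled;
    static atomic_uint_fast32_t completed;

    static void
    op_scheduled(void) { /* called before starting an operation */
            atomic_fetch_add(&scheduled, 1);
    }

    static void
    op_callback(void) { /* called from every read/write/accept callback */
            atomic_fetch_add(&completed, 1);
    }

    static void
    wait_for_all_callbacks(void) {
            while (atomic_load(&completed) < atomic_load(&scheduled)) {
                    usleep(1000); /* let the netmgr worker threads run */
            }
    }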
In tls_error(), we now call isc__nm_tlsdns_failed_read() instead of
just stopping the timer and the reads on the socket. This allows us to
properly clean up any pending operations on the socket.
When shutting down, calling the isc__nm_failed_connect_cb() was delayed
until the connect callback would be called. It turned out that the
connect callback might not get called at all when the socket is being
shut down. Call the failed_connect_cb() directly in the
tlsdns_shutdown() instead of waiting for the connect callback to call it.
After a partial write, the tls.senddata buffer would be rearranged to
contain only the data that wasn't sent, and the len member would be made
shorter, which would lead to an attempt to free only part of the
socket's tls.senddata buffer.
The tlsdns_cycle() might call uv_write() to write data to the socket;
when this happens and the socket is shut down before the callback
completes, the uvreq structure was not freed, because the callback would
be called with a non-zero status code.
RFC 7828 specifies the keepalive interval as a 16-bit value in
units of 100 milliseconds, and the tcp-*-timeout configuration options
follow suit. Units of 100 milliseconds are very unintuitive, and while
we can't change the configuration and presentation format, we should not
follow this weird unit in the API.
This commit changes the isc_nm_(get|set)timeouts() functions to work
with milliseconds; values are converted to milliseconds before being
passed to these functions, not just internally.
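The conversion itself is trivial; a standalone sketch of the arithmetic
at the configuration boundary:

    #include <inttypes.h>
    #include <stdio.h>

    /* Convert an RFC 7828-style value (units of 100 ms, as used by the
     * tcp-*-timeout options) into plain milliseconds for the API. */
    static uint32_t
    units_to_ms(uint32_t units) {
            return units * 100;
    }

    int
    main(void) {
            uint32_t idle_cfg = 300; /* "tcp-idle-timeout 300" == 30 s */
            printf("idle timeout: %" PRIu32 " ms\n", units_to_ms(idle_cfg));
            return 0;
    }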
The udp, tcpdns, and tlsdns code contained a lot of cut-and-paste code,
or code that was very similar, which made the stack harder to maintain:
any change to one protocol would have to be copied to the others.
In this commit, we merge the common parts into common functions
under the isc__nm_<foo> namespace and keep only the small differences
based on the socket type.
After the TCPDNS refactoring, the initial and idle timers were broken,
and only the tcp-initial-timeout was applied, for the whole lifetime of
the TCP connection.
This broke any TCP connection that took longer than tcp-initial-timeout;
most often this would affect large zone AXFRs.
This commit changes the timeout logic in this way:
* When a TCP connection is accepted, tcp-initial-timeout is applied
and the timer is started
* While we are processing and/or sending any DNS message, the timer is
stopped
* When we stop processing all DNS messages, tcp-idle-timeout
is applied and the timer is started again
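Sketched in code, the intended timer life cycle looks like this; the
type and function names are illustrative, not the actual netmgr API:

    #include <stdint.h>

    typedef struct conn {
            uint32_t tcp_initial_timeout; /* ms */
            uint32_t tcp_idle_timeout;    /* ms */
    } conn_t;

    /* Hypothetical helpers standing in for the real timer calls. */
    static void
    timer_start(conn_t *conn, uint32_t ms) {
            (void)conn;
            (void)ms;
    }

    static void
    timer_stop(conn_t *conn) {
            (void)conn;
    }

    static void
    on_accept(conn_t *conn) {
            timer_start(conn, conn->tcp_initial_timeout);
    }

    static void
    on_message_active(conn_t *conn) {
            timer_stop(conn); /* no timeout while processing/sending */
    }

    static void
    on_all_messages_done(conn_t *conn) {
            timer_start(conn, conn->tcp_idle_timeout);
    }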
The system tests were missing coverage for tcp-initial-timeout
and tcp-idle-timeout.
This commit adds a new "timeouts" system test with:
* A test that waits longer than tcp-initial-timeout and then checks
whether the socket was closed
* A test that sends and receives a DNS message, then waits longer than
tcp-initial-timeout but shorter than tcp-idle-timeout, then
sends a DNS message again, then waits longer than tcp-idle-timeout
and checks whether the socket was closed
* A similar test, but bursting 25 DNS messages, then waiting longer than
tcp-initial-timeout and shorter than tcp-idle-timeout, then doing a
second 25-message burst
* A check that a transfer taking longer than tcp-initial-timeout
succeeds
Add a test for freezing, manually updating, and then thawing a dynamic
zone with "dnssec-policy". In the kasp system test we add parameters
to the "update_is_signed" check to indicate the expected IP addresses
for the labels "a" and "d". If set to '-', the test is skipped.
After nsupdating the dynamic.kasp zone, we revert the update (with
nsupdate) and update the zone again, but now with the freeze/thaw
approach.
Dynamic zones with dnssec-policy could not be thawed, because KASP
zones were always considered dynamic. But a dynamic KASP zone should
also check whether updates are disabled.
This commit fixes the loading of certificate chain files so that the
full chain can be sent to clients which require it for
verification. Before this fix, only the topmost certificate was
loaded from the chain and sent to clients, preventing some of them
(e.g. the Windows 10 DoH client) from performing certificate validation.
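With OpenSSL, the difference comes down to which loader is used; both
calls below are standard OpenSSL API, and the file name is illustrative:

    #include <openssl/ssl.h>

    static int
    load_server_cert(SSL_CTX *ctx) {
            /* SSL_CTX_use_certificate_file() loads only the first (leaf)
             * certificate from the file; any intermediates after it are
             * ignored, which breaks strict verifiers such as the
             * Windows 10 DoH client. */

            /* SSL_CTX_use_certificate_chain_file() loads the leaf plus
             * all intermediate certificates that follow it in the PEM
             * file, so the full chain is sent to clients. */
            return SSL_CTX_use_certificate_chain_file(ctx, "fullchain.pem");
    }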
The transport should also be detached when we skip a master; otherwise
named will crash when sending a SOA query to the next master over TLS,
because the transport must be NULL when we enter
'dns_view_gettransport'.
When we query the resolver for a domain name in a zone for which one
or more fetches are already outstanding, we could potentially hit the
fetch limits. If so, recursion fails immediately for the incoming query,
and if serve-stale is enabled, we may try to return a stale answer.
If the resolver is also authoritative for the parent zone (for
example the root zone), a delegation is found first, but we then
check the cache for a better response.
Nothing is found in the cache, so we try to recurse to find the
answer to the query.
Because of fetch-limits 'dns_resolver_createfetch()' returns an error,
which 'ns_query_recurse()' propagates to the caller,
'query_delegation_recurse()'.
Because serve-stale is enabled, 'query_usestale()' is called,
setting 'qctx->db' to the cache db, but leaving 'qctx->version'
untouched. Now 'query_lookup()' is called to search for stale data
in the cache database with a non-NULL 'qctx->version'
(which is set to a zone db version), and thus we hit an assertion
in rbtdb.
This crash was introduced in 'main' by commit
8bcd7fe69e.
Commit 9fb6d11abb (which converted BIND 9
documentation from DocBook to Sphinx) inadvertently removed a paragraph
from the description of the "max-ixfr-ratio" option. Add the missing
paragraph back.
Unfortunately, it's not possible to disable Pull Requests on the
mirrored repository on GitHub, so this commit adds an external action
that closes any newly opened Issues or Pull Requests instead of letting
them rot unnoticed.
- use a value less than 2^32 for DNS_ZONEFLG_FIXJOURNAL; a larger value
could cause problems in some build environments. The zone flag
DNS_ZONEFLG_DIFFONRELOAD, which was no longer in use, has now been
deleted and its value reused for _FIXJOURNAL.
*** CID 329157: Null pointer dereferences (REVERSE_INULL)
/lib/dns/journal.c: 754 in journal_open()
748 j->header.index_size * sizeof(journal_rawpos_t));
749 }
750 if (j->index != NULL) {
751 isc_mem_put(j->mctx, j->index,
752 j->header.index_size * sizeof(journal_pos_t));
753 }
CID 329157: Null pointer dereferences (REVERSE_INULL)
Null-checking "j->filename" suggests that it may be null, but it has already been dereferenced on all paths leading to the check.
754 if (j->filename != NULL) {
755 isc_mem_free(j->mctx, j->filename);
756 }
757 if (j->fp != NULL) {
758 (void)isc_stdio_close(j->fp);
759 }
- rename dot to doth, as it now covers both dot and doh.
- merge xot into doth, as it's closely related.
- add long-lived key and cert files (expiring in 2121).
- add tests with https-get, https-post, http-plain, alternate
endpoints, and both static and ephemeral TLS configuration.
- incidentally fix a memory leak in dig that occurred if +https
was specified more than once.
It is advisable to disable Nagle's algorithm for HTTP/2 connections,
because multiple HTTP/2 streams are multiplexed over one transport
connection, so delays in delivering small packets could bring down
performance for the whole session. HTTP/2 is meant to be
used this way.
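With libuv, which the netmgr is built on, disabling Nagle's algorithm
is a single standard call; a sketch:

    #include <uv.h>

    static void
    tune_http2_socket(uv_tcp_t *sock) {
            /* Send small HTTP/2 frames immediately instead of letting
             * the kernel coalesce them; many streams share this single
             * TCP connection, so the extra latency hurts all of them. */
            (void)uv_tcp_nodelay(sock, 1);
    }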
When called from within the context of a network thread,
isc_nm_tlsconnect() hangs: it waits for the socket's
result code to be updated, but that update is supposed to happen
asynchronously in the network thread, and if we're already blocking
in the network thread, it can never occur.
We can kluge around this by setting the socket result code
early; this works for most clients (including "dig"), but it causes
inconsistent behaviors that manifest as test failures in the DoH unit
test.
So we kluged around it even more by setting the socket result code
early *only when running in the network thread*. We need a better
solution for this problem, but this will do for now.
This commit makes the server-side code polite.
It fixes the error-handling code on the server side and fixes the
returning of error codes in responses (there was a nasty bug which could
potentially crash the server).
Also, in this commit we limit the maximum POST request data size to 96K
and the maximum processed header data size to 128K (which should be
enough to handle any GET request).
If these limits are surpassed, the server terminates the request with
RST_STREAM without responding with an error code. Otherwise it politely
responds with an error code.
This commit also limits the number of concurrent HTTP/2 streams per
transport connection on the server to 100 (the default advised by
nghttp2).
Ideally, these parameters should be configurable both globally and per
HTTP endpoint description in the configuration file, but for now
putting sane limits in place should be enough.
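Advertising the stream limit uses the standard nghttp2 settings
machinery; a sketch, with the session variable assumed to exist:

    #include <nghttp2/nghttp2.h>

    static int
    limit_concurrent_streams(nghttp2_session *session) {
            nghttp2_settings_entry iv[] = {
                    { NGHTTP2_SETTINGS_MAX_CONCURRENT_STREAMS, 100 },
            };

            /* Queues a SETTINGS frame advertising the limit to the peer. */
            return nghttp2_submit_settings(session, NGHTTP2_FLAG_NONE, iv, 1);
    }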
- style, cleanup, and removal of unnecessary code.
- combined isc_nm_http_add_endpoint() and isc_nm_http_add_doh_endpoint()
into one function, renamed isc_http_endpoint().
- moved isc_nm_http_connect_send_request() into doh_test.c as a helper
function; removed it from the public API.
- renamed isc_http2 and isc_nm_http2 types and functions to just isc_http
and isc_nm_http, for consistency with other existing names.
- shortened a number of long names.
- the caller is now responsible for determining the peer address
in isc_nm_httpconnect(); this eliminates the need to parse the URI
and the dependency on an external resolver.
- the caller is also now responsible for creating the SSL client context,
for consistency with isc_nm_tlsdnsconnect().
- added setter functions for HTTP/2 ALPN. instead of setting up ALPN in
isc_tlsctx_createclient(), we now have a function
isc_tlsctx_enable_http2client_alpn() that can be run from
isc_nm_httpconnect().
- refactored isc_nm_httprequest() into separate read and send functions.
when isc_nm_send() or isc_nm_read() is called on an http socket, the
call is stored until a corresponding isc_nm_read() or _send() arrives;
when we have both halves of the pair, the HTTP request is initiated.
- isc_nm_httprequest() is renamed isc__nm_http_request() for use as an
internal helper function by the DoH unit test. (eventually doh_test
should be rewritten to use read and send, and this function should
be removed.)
- added implementations of isc__nm_tls_settimeout() and
isc__nm_http_settimeout().
- increased NGHTTP2 header block length for client connections to 128K.
- use isc_mem_t for internal memory allocations inside nghttp2, to
help track memory leaks.
- send "Cache-Control" header in requests and responses. (note:
currently we try to bypass HTTP caching proxies, but ideally we should
interact with them: https://tools.ietf.org/html/rfc8484#section-5.1)
A simple typecast to size_t should be enough to silence the warning on
ARMv7, even though the code is in fact correct, because readlen is
checked for being < 0 in the block before the warning.
The C standard doesn't define char as signed or unsigned; it can
be either, depending on the underlying architecture. It turns out
that while it's usually a signed type, on arm64 it is
unsigned.
isc_commandline_parse() returns an int; just use that instead of a char.
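The pitfall is the same one known from getopt(); a standalone sketch
using getopt() to stay self-contained:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(int argc, char **argv) {
            /* Correct: int matches the return type, so the -1 sentinel
             * compares as expected on every architecture. */
            int ch;

            while ((ch = getopt(argc, argv, "ab:")) != -1) {
                    printf("option: %c\n", ch);
            }

            /* Buggy variant: with "char ch", on platforms where plain
             * char is unsigned (e.g. arm64), (char)-1 becomes 255 and
             * "ch != -1" never becomes false. */
            return 0;
    }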
Tests that version 1 journal files containing version 1 transaction
headers are rolled forward correctly on server startup, then updated
into version 2 journals. Also checks journal file consistency and
'max-journal-size' behavior.
'named-journalprint -x' now prints the journal's index table and
the offset of each transaction in the journal, so that index consistency
can be confirmed.
When the 'max-ixfr-ratio' option was added, journal transaction
headers were revised to include a count of RRs in each transaction.
This made it impossible to read old journal files after an upgrade.
This branch restores the ability to read version 1 transaction
headers: when rolling forward or printing journal contents, if
the wrong transaction header format is found, we can switch formats.
When dns_journal_rollforward() detects a version 1 transaction
header, it returns DNS_R_RECOVERABLE. This triggers zone_postload()
to force a rewrite of the journal file in the new format, and
also to schedule a dump of the zone database with minimal delay.
Journal repair is done by dns_journal_compact(), which rewrites
the entire journal, ignoring 'max-journal-size'. The journal size is
corrected later.
Newly created journal files now have "BIND LOG V9.2" in their headers
instead of "BIND LOG V9". Files with the new version string cannot be
read using the old transaction header format. Note that this means
newly created journal files will be rejected by older versions of named.
named-journalprint now takes a "-x" option, causing it to print
transaction header information before each delta, including its
format version.
Call the libisc isc__initialize() constructor and isc__shutdown()
destructor from DllMain() instead of duplicating code between
those functions and DllMain().
When AddressSanitizer is in use, disable the internal mempool
implementation and redirect isc_mempool_get() to isc_mem_get()
(and similarly isc_mempool_put() to isc_mem_put()). This is the method
recommended by the AddressSanitizer authors for tracking allocations and
deallocations, instead of custom poison/unpoison code (see
https://github.com/google/sanitizers/wiki/AddressSanitizerManualPoisoning).
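A sketch of the redirection; the macro bodies and struct fields below
are assumptions about the internals, not the actual definitions:

    /* Under AddressSanitizer, bypass the pool so the sanitizer sees
     * every allocation and free individually. */
    #if defined(__SANITIZE_ADDRESS__)
    #define isc_mempool_get(pool) \
            isc_mem_get((pool)->mctx, (pool)->size)
    #define isc_mempool_put(pool, mem) \
            isc_mem_put((pool)->mctx, (mem), (pool)->size)
    #endif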
- [ ] ***(Support)*** Publish links to downloads on ISC website.
- [ ] ***(Support)*** Write release email to *bind-announce*.
- [ ] ***(Support)*** Write email to *bind-users* (if a major release).
- [ ] ***(Support)*** Send eligible customers updated links to the Subscription Edition (update the -S edition delivery tickets, even if those links were provided earlier via an ASN ticket).
- [ ] ***(Support)*** Update tickets in case of waiting support customers.
- [ ] ***(QA)*** Build and test any outstanding private packages.
- [ ] ***(QA)*** Build public packages (`*.deb`, RPMs).
- [ ] ***(SwEng)*** Build Debian/Ubuntu packages.
- [ ] ***(SwEng)*** Update Docker images.
- [ ] ***(QA)*** Inform Marketing of the release.
- [ ] ***(QA)*** Update the internal [BIND release dates wiki page](https://wiki.isc.org/bin/view/Main/BindReleaseDates) when the public announcement has been made.
- [ ] ***(Marketing)*** Post short note to Twitter.
- [ ] ***(Marketing)*** Update [Wikipedia entry for BIND](https://en.wikipedia.org/wiki/BIND).
- [ ] ***(Marketing)*** Write blog article (if a major release).
- [ ] ***(QA)*** Ensure all new tags are annotated and signed.
- [ ] ***(QA)*** Push tags for the published releases to the public repository.
- [ ] ***(QA)*** Merge the automatically prepared `prep 9.x.y` commit which updates `version` and documentation on the release branch into the relevant maintenance branch (`v9_x`).
- [ ] ***(QA)*** For each maintained branch, update the `BIND_BASELINE_VERSION` variable for the `abi-check` job in `.gitlab-ci.yml` to the latest published BIND version tag for a given branch.
- [ ] ***(QA)*** Prepare empty release notes for the next set of releases.
- [ ] ***(QA)*** Merge published release tags (non-linearly) back into their relevant development/maintenance branches.
- [ ] ***(QA)*** Sanitize confidential issues which are assigned to the current release milestone and do not describe a security vulnerability, then make them public.
- [ ] ***(QA)*** Sanitize confidential issues which are assigned to older release milestones and describe security vulnerabilities, then make them public if appropriate[^2].
- [ ] ***(QA)*** Update QA tools used in GitLab CI (e.g. Flake8, PyLint) by modifying the relevant `Dockerfile`.
[^1]: If not, use the time remaining until the tagging deadline to ensure all outstanding issues are either resolved or moved to a different milestone.
[^2]: As a rule of thumb, security vulnerabilities which have reproducers merged to the public repository are considered okay for full disclosure.
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:
(a) You must give any other recipients of the Work or Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.
You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
This Exception is an additional permission under section 7 of the GNU General Public License, version 3 ("GPLv3"). It applies to a given file that bears a notice placed by the copyright holder of the file stating that the file is governed by GPLv3 along with this Exception.
The purpose of this Exception is to allow distribution of Autoconf's typical output under terms of the recipient's choice (including proprietary).
0. Definitions.
"Covered Code" is the source or object code of a version of Autoconf that is a covered work under this License.
"Normally Copied Code" for a version of Autoconf means all parts of its Covered Code which that version can copy from its code (i.e., not from its input file) into its minimally verbose, non-debugging and non-tracing output.
"Ineligible Code" is Covered Code that is not Normally Copied Code.
1. Grant of Additional Permission.
You have permission to propagate output of Autoconf, even if such propagation would otherwise violate the terms of GPLv3. However, if by modifying Autoconf you cause any Ineligible Code of the version you received to become Normally Copied Code of your modified version, then you void this Exception for the resulting covered work. If you convey that resulting covered work, you must remove this Exception in accordance with the second paragraph of Section 7 of GPLv3.
2. No Weakening of Autoconf Copyleft.
The availability of this Exception does not imply any general presumption that third-party software is unaffected by the copyleft requirements of the license of Autoconf.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Copyright (c) <year> <owner>. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations.
Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program.
You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License.
c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program.
In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License.
7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances.
It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation.
10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
one line to give the program's name and an idea of what it does. Copyright (C) yyyy name of author
This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker.
signature of Ty Coon, 1 April 1989 Ty Coon, President of Vice
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”. “Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code” means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A “Major Component”, in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, “normally used” refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, “control” includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License “or any later version” applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the “copyright” line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an “about box”.
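As a rough sketch of that startup behaviour in C (the program name "Gnomovision", the version, "year", and "name of author" are the hypothetical placeholders used by the license text itself, and the banner-printing approach is only one possible design), the notice can be emitted only when the program is actually talking to a terminal:
#include <stdio.h>
#include <unistd.h>

/*
 * Print the short interactive-mode notice suggested by the license.
 * All names and numbers here are the license's own placeholders.
 */
static void
print_license_banner(void) {
	printf("Gnomovision version 69, Copyright (C) year name of author\n");
	printf("Gnomovision comes with ABSOLUTELY NO WARRANTY; "
	       "for details type `show w'.\n");
	printf("This is free software, and you are welcome to redistribute "
	       "it\nunder certain conditions; type `show c' for details.\n");
}

int
main(void) {
	/* Only show the banner when standard input is a terminal. */
	if (isatty(STDIN_FILENO)) {
		print_license_banner();
	}
	/* ... the rest of the program, including `show w'/`show c' ... */
	return (0);
}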
You should also get your employer (if you work as a programmer) or school, if any, to sign a “copyright disclaimer” for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.
Copyright (c) 2004-2010 by Internet Systems Consortium, Inc. ("ISC")
Copyright (c) 1995-2003 by Internet Software Consortium
Permission to use, copy, modify, and/or distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND ISC DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL ISC BE LIABLE FOR ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
As a special exception to the GNU General Public License, if you distribute this file as part of a program that contains a configuration script generated by Autoconf, you may include it under the same distribution terms that you use for the rest of that program.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
1.1. "Contributor" means each individual or legal entity that creates, contributes to the creation of, or owns Covered Software.
1.2. "Contributor Version" means the combination of the Contributions of others (if any) used by a Contributor and that particular Contributor's Contribution.
1.3. "Contribution" means Covered Software of a particular Contributor.
1.4. "Covered Software" means Source Code Form to which the initial Contributor has attached the notice in Exhibit A, the Executable Form of such Source Code Form, and Modifications of such Source Code Form, in each case including portions thereof.
1.5. "Incompatible With Secondary Licenses" means
(a) that the initial Contributor has attached the notice described in Exhibit B to the Covered Software; or
(b) that the Covered Software was made available under the terms of version 1.1 or earlier of the License, but not also under the terms of a Secondary License.
1.6. "Executable Form" means any form of the work other than Source Code Form.
1.7. "Larger Work" means a work that combines Covered Software with other material, in a separate file or files, that is not Covered Software.
1.8. "License" means this document.
1.9. "Licensable" means having the right to grant, to the maximum extent possible, whether at the time of the initial grant or subsequently, any and all of the rights conveyed by this License.
1.10. "Modifications" means any of the following:
(a) any file in Source Code Form that results from an addition to, deletion from, or modification of the contents of Covered Software; or
(b) any new file in Source Code Form that contains any Covered Software.
1.11. "Patent Claims" of a Contributor means any patent claim(s), including without limitation, method, process, and apparatus claims, in any patent Licensable by such Contributor that would be infringed, but for the grant of the License, by the making, using, selling, offering for sale, having made, import, or transfer of either its Contributions or its Contributor Version.
1.12. "Secondary License" means either the GNU General Public License, Version 2.0, the GNU Lesser General Public License, Version 2.1, the GNU Affero General Public License, Version 3.0, or any later versions of those licenses.
1.13. "Source Code Form" means the form of the work preferred for making modifications.
1.14. "You" (or "Your") means an individual or a legal entity exercising rights under this License. For legal entities, "You" includes any entity that controls, is controlled by, or is under common control with You. For purposes of this definition, "control" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of more than fifty percent (50%) of the outstanding shares or beneficial ownership of such entity.
2. License Grants and Conditions
2.1. Grants
Each Contributor hereby grants You a world-wide, royalty-free, non-exclusive license:
(a) under intellectual property rights (other than patent or trademark) Licensable by such Contributor to use, reproduce, make available, modify, display, perform, distribute, and otherwise exploit its Contributions, either on an unmodified basis, with Modifications, or as part of a Larger Work; and
(b) under Patent Claims of such Contributor to make, use, sell, offer for sale, have made, import, and otherwise transfer either its Contributions or its Contributor Version.
2.2. Effective Date
The licenses granted in Section 2.1 with respect to any Contribution become effective for each Contribution on the date the Contributor first distributes such Contribution.
2.3. Limitations on Grant Scope
The licenses granted in this Section 2 are the only rights granted under this License. No additional rights or licenses will be implied from the distribution or licensing of Covered Software under this License. Notwithstanding Section 2.1(b) above, no patent license is granted by a Contributor:
(a) for any code that a Contributor has removed from Covered Software; or
(b) for infringements caused by: (i) Your and any other third party's modifications of Covered Software, or (ii) the combination of its Contributions with other software (except as part of its Contributor Version); or
(c) under Patent Claims infringed by Covered Software in the absence of its Contributions.
This License does not grant any rights in the trademarks, service marks, or logos of any Contributor (except as may be necessary to comply with the notice requirements in Section 3.4).
2.4. Subsequent Licenses
No Contributor makes additional grants as a result of Your choice to distribute the Covered Software under a subsequent version of this License (see Section 10.2) or under the terms of a Secondary License (if permitted under the terms of Section 3.3).
2.5. Representation
Each Contributor represents that the Contributor believes its Contributions are its original creation(s) or it has sufficient rights to grant the rights to its Contributions conveyed by this License.
2.6. Fair Use
This License is not intended to limit any rights You have under applicable copyright doctrines of fair use, fair dealing, or other equivalents.
2.7. Conditions
Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted in Section 2.1.
3. Responsibilities
3.1. Distribution of Source Form
All distribution of Covered Software in Source Code Form, including any Modifications that You create or to which You contribute, must be under the terms of this License. You must inform recipients that the Source Code Form of the Covered Software is governed by the terms of this License, and how they can obtain a copy of this License. You may not attempt to alter or restrict the recipients' rights in the Source Code Form.
3.2. Distribution of Executable Form
If You distribute Covered Software in Executable Form then:
(a) such Covered Software must also be made available in Source Code Form, as described in Section 3.1, and You must inform recipients of the Executable Form how they can obtain a copy of such Source Code Form by reasonable means in a timely manner, at a charge no more than the cost of distribution to the recipient; and
(b) You may distribute such Executable Form under the terms of this License, or sublicense it under different terms, provided that the license for the Executable Form does not attempt to limit or alter the recipients' rights in the Source Code Form under this License.
3.3. Distribution of a Larger Work
You may create and distribute a Larger Work under terms of Your choice, provided that You also comply with the requirements of this License for the Covered Software. If the Larger Work is a combination of Covered Software with a work governed by one or more Secondary Licenses, and the Covered Software is not Incompatible With Secondary Licenses, this License permits You to additionally distribute such Covered Software under the terms of such Secondary License(s), so that the recipient of the Larger Work may, at their option, further distribute the Covered Software under the terms of either this License or such Secondary License(s).
3.4. Notices
You may not remove or alter the substance of any license notices (including copyright notices, patent notices, disclaimers of warranty, or limitations of liability) contained within the Source Code Form of the Covered Software, except that You may alter any license notices to the extent required to remedy known factual inaccuracies.
3.5. Application of Additional Terms
You may choose to offer, and to charge a fee for, warranty, support, indemnity or liability obligations to one or more recipients of Covered Software. However, You may do so only on Your own behalf, and not on behalf of any Contributor. You must make it absolutely clear that any such warranty, support, indemnity, or liability obligation is offered by You alone, and You hereby agree to indemnify every Contributor for any liability incurred by such Contributor as a result of warranty, support, indemnity or liability terms You offer. You may include additional disclaimers of warranty and limitations of liability specific to any jurisdiction.
4. Inability to Comply Due to Statute or Regulation
If it is impossible for You to comply with any of the terms of this License with respect to some or all of the Covered Software due to statute, judicial order, or regulation then You must: (a) comply with the terms of this License to the maximum extent possible; and (b) describe the limitations and the code they affect. Such description must be placed in a text file included with all distributions of the Covered Software under this License. Except to the extent prohibited by statute or regulation, such description must be sufficiently detailed for a recipient of ordinary skill to be able to understand it.
5. Termination
5.1. The rights granted under this License will terminate automatically if You fail to comply with any of its terms. However, if You become compliant, then the rights granted under this License from a particular Contributor are reinstated (a) provisionally, unless and until such Contributor explicitly and finally terminates Your grants, and (b) on an ongoing basis, if such Contributor fails to notify You of the non-compliance by some reasonable means prior to 60 days after You have come back into compliance. Moreover, Your grants from a particular Contributor are reinstated on an ongoing basis if such Contributor notifies You of the non-compliance by some reasonable means, this is the first time You have received notice of non-compliance with this License from such Contributor, and You become compliant prior to 30 days after Your receipt of the notice.
5.2. If You initiate litigation against any entity by asserting a patent infringement claim (excluding declaratory judgment actions, counter-claims, and cross-claims) alleging that a Contributor Version directly or indirectly infringes any patent, then the rights granted to You by any and all Contributors for the Covered Software under Section 2.1 of this License shall terminate.
5.3. In the event of termination under Sections 5.1 or 5.2 above, all end user license agreements (excluding distributors and resellers) which have been validly granted by You or Your distributors under this License prior to termination shall survive termination.
6. Disclaimer of Warranty
Covered Software is provided under this License on an "as is" basis, without warranty of any kind, either expressed, implied, or statutory, including, without limitation, warranties that the Covered Software is free of defects, merchantable, fit for a particular purpose or non-infringing. The entire risk as to the quality and performance of the Covered Software is with You. Should any Covered Software prove defective in any respect, You (not any Contributor) assume the cost of any necessary servicing, repair, or correction. This disclaimer of warranty constitutes an essential part of this License. No use of any Covered Software is authorized under this License except under this disclaimer.
7. Limitation of Liability
Under no circumstances and under no legal theory, whether tort (including negligence), contract, or otherwise, shall any Contributor, or anyone who distributes Covered Software as permitted above, be liable to You for any direct, indirect, special, incidental, or consequential damages of any character including, without limitation, damages for lost profits, loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses, even if such party shall have been informed of the possibility of such damages. This limitation of liability shall not apply to liability for death or personal injury resulting from such party's negligence to the extent applicable law prohibits such limitation. Some jurisdictions do not allow the exclusion or limitation of incidental or consequential damages, so this exclusion and limitation may not apply to You.
8. Litigation
Any litigation relating to this License may be brought only in the courts of a jurisdiction where the defendant maintains its principal place of business and such litigation shall be governed by laws of that jurisdiction, without reference to its conflict-of-law provisions. Nothing in this Section shall prevent a party's ability to bring cross-claims or counter-claims.
9. Miscellaneous
This License represents the complete agreement concerning the subject matter hereof. If any provision of this License is held to be unenforceable, such provision shall be reformed only to the extent necessary to make it enforceable. Any law or regulation which provides that the language of a contract shall be construed against the drafter shall not be used to construe this License against a Contributor.
10. Versions of the License
10.1. New Versions
Mozilla Foundation is the license steward. Except as provided in Section 10.3, no one other than the license steward has the right to modify or publish new versions of this License. Each version will be given a distinguishing version number.
10.2. Effect of New Versions
You may distribute the Covered Software under the terms of the version of the License under which You originally received the Covered Software, or under the terms of any subsequent version published by the license steward.
10.3. Modified Versions
If you create software not governed by this License, and you want to create a new license for such software, you may create and use a modified version of this License if you rename the license and remove any references to the name of the license steward (except to note that such modified license differs from this License).
10.4. Distributing Source Code Form that is Incompatible With Secondary Licenses
If You choose to distribute Source Code Form that is Incompatible With Secondary Licenses under the terms of this version of the License, the notice described in Exhibit B of this License must be attached.
Exhibit A - Source Code Form License Notice
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, you can obtain one at https://mozilla.org/MPL/2.0/.
If it is not possible or desirable to put the notice in a particular file, then You may include the notice in a location (such as a LICENSE file in a relevant directory) where a recipient would be likely to look for such a notice.
You may add additional accurate notices of copyright ownership.
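One common way to attach the Exhibit A notice is in the header comment of each source file; a sketch assuming a C source file follows, where the file name is hypothetical:
/*
 * widget.c -- hypothetical example file.
 *
 * This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, you can obtain one at https://mozilla.org/MPL/2.0/.
 */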
Exhibit B - "Incompatible With Secondary Licenses" Notice
This Source Code Form is "Incompatible With Secondary Licenses", as defined by the Mozilla Public License, v. 2.0.