OpenSSL 1.x Engine support has been deprecated in OpenSSL 3.x and is
going to be removed. Remove the OpenSSL Engine support in favor of
OpenSSL Providers.
dns_difftuple_create() could only return success, so change its return
type to void and clean up all the calls to it. Other functions that
only returned a result value because of it have been cleaned up in the
same way.
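A hedged sketch of the resulting call-site cleanup (the argument list
is abbreviated and illustrative, not the exact signature):

    /* before: callers propagated a result that could only ever be
     * ISC_R_SUCCESS */
    result = dns_difftuple_create(mctx, DNS_DIFFOP_ADD, name, ttl,
                                  &rdata, &tuple);
    if (result != ISC_R_SUCCESS) {
            goto failure;   /* dead code: creation could not fail */
    }

    /* after: the call simply cannot fail */
    dns_difftuple_create(mctx, DNS_DIFFOP_ADD, name, ttl, &rdata, &tuple);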
The tsig unit test was only checking that everything went OK; it was
not testing whether the error paths work. Add two more unit tests: one
uses a time outside of the TSIG skew, and the other trashes the
signature with random data.
When automatic-interface-scan is disabled, the route socket was still
being opened. Add a new API to connect to / disconnect from the route
socket only as needed.
Additionally, move the block that disables periodic interface rescans
to a place where it actually has access to the configuration values.
Previously, the values were being checked before the configuration was
loaded.
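A minimal sketch of the intended call pattern (the function and
variable names here are hypothetical, not the actual API):

    /* hypothetical names, for illustration only */
    if (automatic_interface_scan) {
            route_socket_connect(interfacemgr);    /* open only when rescans are enabled */
    } else {
            route_socket_disconnect(interfacemgr); /* keep it closed otherwise */
    }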
Previously there was a gap from 301 to 9999: values in that range were
converted to 10000. Now there is no such gap.
The settimeout_belowmin test was checking the behavior of a value in
that gap. As there is no gap left, the minimum is 301 and anything
below that is converted to seconds as before. For this check to still
test the "below minimum" behavior, change the value from 9000 to 300.
Update the settimeout_overmax value test too so that it logically
aligns with the minimum value test.
ALPN values are defined as 1*255OCTET in RFC 9460. commatxt_fromtext
was not rejecting invalid inputs produced by missing a level of
escaping; these were only caught later by dns_rdata_fromwire on
reception.
These inputs should have been rejected:

    svcb  in svcb 1 1.svcb alpn=\,abc
    svcb1 in svcb 1 1.svcb alpn=a\,\,abc

and generated 00 03 61 62 63 and 01 61 00 02 61 62 63 respectively.
Correct inputs that include commas in the alpn require double escaping:

    svcb  in svcb 1 1.svcb alpn=\\,abc
    svcb1 in svcb 1 1.svcb alpn=a\\,\\,abc

and generate 04 2C 61 62 63 and 06 61 2C 2C 61 62 63 respectively.
As ns_query_init() can no longer fail, remove the error paths,
especially in ns__client_setup(), where we no longer have to decide
what to do with the connection if setting up the client fails. It
could not fail even before, but now that is formal.
View matching on an incoming query checks the query's signature, which
can be a CPU-heavy task for a SIG(0)-signed message. Implement an
asynchronous mode of the view-matching function which uses the
offloaded signature-checking facilities, and use it for incoming
queries.
In order to protect against a malicious DNS client that sends many
queries with SIG(0)-signed messages, add a quota of simultaneously
running SIG(0) checks.
This protection can only help when named is using more than one worker
thread. For example, if named is running with the '-n 4' option and
'sig0checks-quota 2;' is used, named will make sure not to use more
than 2 workers for SIG(0) signature checks in parallel, leaving the
other workers free to serve the remaining clients that do not use
SIG(0)-signed messages.
That limitation is going to change when SIG(0) signature checks are
offloaded to "slow" threads in a future commit.
The 'sig0checks-quota-exempt' ACL option can be used to exempt certain
clients from the quota requirements based on their IP or network
addresses.
The 'sig0checks-quota-maxwait-ms' option defines the maximum amount of
time for named to wait for quota to become available. If no quota
becomes available during that time, named answers the client with
DNS_R_REFUSED.
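A hedged named.conf sketch combining these options (placement at the
options level and the values themselves are illustrative):

    options {
            sig0checks-quota 2;
            sig0checks-quota-exempt { 192.0.2.0/24; };
            sig0checks-quota-maxwait-ms 100;
    };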
Replace the ISC_LIST-based deadnodes implementation with isc_queue,
which is wait-free: we do not have to acquire either the tree lock or
the node lock to append nodes to the queue, and the cleaning process
can splice the list into a local copy without locking the queue.
Currently there is little benefit to this, as we need to hold those
locks anyway, but it will be ready for the future move to an RCU-based
implementation.
To align the cleaning with our event-loop-based model, remove the
hardcoded node-lock count and use the number of event loops instead.
This way, each event loop can have its own cleaning as part of the
process. Use uniform random numbers to spread the nodes evenly across
the buckets (instead of hashing the domain name).
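For illustration, assigning a node to a bucket could look roughly like
this (a sketch; apart from isc_random_uniform() the names are
assumptions):

    /* spread nodes evenly over the per-loop buckets with a uniform
     * random number instead of hashing the owner name */
    node->locknum = isc_random_uniform(qpdb->node_lock_count);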
when the cache is over its memory limit, we purge from the LRU list
until we have freed approximately the amount of memory about to be
added. this approximation could fail because the memory allocated for
node names was not being counted.
add a dns_name_size() function so we can look up the size of node
names, then add that to the purgesize calculation.
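a sketch of the adjusted accounting (apart from dns_name_size(), the
names here are assumptions):

    /* count the owner name's allocation along with the node and the
     * rdataset header when estimating how much to evict from the LRU */
    purgesize = sizeof(*node) + dns_name_size(node->name) +
                rdataset_size(newheader);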
- change dns_qpdata_t to qpcnode_t (QP cache node), and dns_qpdb_t to
qpcache_t, as these types are only accessed locally.
- also change qpdata_t in qpzone.c to qpznode_t (QP zone node), for
consistency.
- make the refcount declarations for qpcnode_t and qpznode_t static,
using the new ISC_REFCOUNT_STATIC macros.
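for example, the static variant could be used roughly like this (a
sketch; the exact macro spelling and the destructor name are
assumptions):

    /* reference-counting helpers visible only inside this unit */
    ISC_REFCOUNT_STATIC_DECL(qpcnode);
    ISC_REFCOUNT_STATIC_IMPL(qpcnode, qpcnode_destroy);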
under some circumstances it was possible for the iterator to
be set to the first leaf in a set of twigs, when it should have
been set to the last.
a unit test has been added to test this scenario. if there is a tree
containing the following values: {".", "abb.", "abc."}, and we query
for "acb.", previously the iterator would be positioned at "abb."
instead of "abc.".
the tree structure is:

    branch (offset 1, ".")
        branch (offset 3, ".ab")
            leaf (".abb")
            leaf (".abc")
we find the branch with offset 3 (indicating that its twigs differ
from each other in the third position of the label, "abB" vs "abC").
but the search key differs from the found keys at position 2
("aC" vs "aB"). we look up the bit value in position 3 of the
search key ("B"), and incorrectly follow it onto the wrong twig
("abB").
to correct for this, we need to check for the case where the search
key is greater than the found key in a position earlier than the
branch offset. if it is, then we need to pop from the current leaf
to its parent, and get the greatest leaf from there.
a further change is needed to ensure that we don't do this twice: when
we've moved to a new leaf and the point of difference between it and
the search key is even earlier than before, then we're definitely at a
predecessor node and there's no need to continue the loop.
This commit makes the dispatch_test use the same timeouts as the
network manager tests. We do that because the old values appear to be
too small for our heavily loaded CI machines, leading to spurious
failures. The network manager tests are much more stable in this
situation, and they use somewhat larger timeout values.
We use smaller connection timeouts for the tests which are expected to
time out, so that they do not wait for too long.
isc_loop() can now take its place.
This also requires changes to the test harness: instead of running the
setup and teardown outside of the main loop, we now schedule them to
run on the loop (via isc_loop_setup() and isc_loop_teardown()). This
is needed because the new isc_loop() call has to run on an active
event loop, whereas isc_loop_current() (and variants like
isc_loop_main()) previously worked even outside of the loop, because
it needed only isc_tid() and not the full loop (which was mainly true
for the main thread).
if we have a method to get the running loop, similar to how isc_tid()
gets the current thread ID, we can simplify loop and loopmgr
initialization.
remove most uses of isc_loop_current() in favor of isc_loop().
in some places where that was the only reason to pass loopmgr,
remove loopmgr from the function parameters.
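a sketch of the change in code that already runs on a loop thread:

    /* before: the loop manager had to be threaded through just to
     * find the loop we are running on */
    isc_loop_t *loop = isc_loop_current(loopmgr);

    /* after: ask for the running loop directly, so loopmgr can be
     * dropped from such call paths */
    isc_loop_t *loop = isc_loop();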
When fixing up the iterator, if every leaf on the current branch is
greater than the one we wanted, we go back to the parent branch and
iterate back to the predecessor from that point.
But if there are no previous leaves left, the queried name precedes
the entire range of names in the database, and we would just move the
iterator one step back and continue from there.
This could end up in a loop, because the queried name precedes the
entire range of names and so none of those names is the predecessor of
the queried name.
now that "qpzone" databases are available for use in zones, we no
longer need to retain the zone semantics in the "qp" database.
all zone-specific code has been removed from QPDB, and "configure
--with-zonedb" once again takes two values, rbt and qp.
some database API methods that are never used with a cache have
been removed from qpdb.c and qp-cachedb.c; these include newversion,
closeversion, subtractrdataset, and nodefullname.
add database API method implementations needed to iterate and dump
a qpzone database to a file (createiterator, allrdatasets and
attachversion, plus dbiterator and rdatasetiter methods).
named-checkzone -D can now dump the contents of most zones,
but zone cuts are not correctly detected.
by default, QPDB is the database used by named and all tools and
unit tests. the old default of RBTDB can now be restored by using
"configure --with-zonedb=rbt --with-cachedb=rbt".
some tests have been fixed so they will work correctly with either
database.
CHANGES and release notes have been updated to reflect this change.
replace the string "rbt" throughout BIND with "qp" so that
qpdb databases will be used by default instead of rbtdb.
rbtdb databases can still be used by specifying "database rbt;"
in a zone statement.
- the DNS_DB_NSEC3ONLY and DNS_DB_NONSEC3 flags are mutually
exclusive; it never made sense to set both at the same time.
to enforce this, it is now a fatal error to do so (see the sketch
after this list). the dbiterator implementation has been cleaned up
to remove code that treated the two as independent: if nonsec3 is
true, we can be certain nsec3only is false, and vice versa.
- previously, iterating a database backwards omitted
NSEC3 records even if DNS_DB_NONSEC3 had not been set. this
has been corrected.
- when an iterator reaches the origin node of the NSEC3 tree, we
need to skip over it and go to the next node in the sequence.
the NSEC3 origin node is there for housekeeping purposes and
never contains data.
- the dbiterator_test unit test has been expanded, and several
incorrect expectations have been fixed. (for example, the expected
number of iterations has been reduced by one; we were previously
counting the NSEC3 origin node, and we should not have been doing so.)
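a sketch of the check enforcing the first point above (placed wherever
the iterator options are received; REQUIRE is the usual fatal
assertion):

    /* setting both flags was never meaningful; treat it as a fatal
     * programming error */
    REQUIRE((options & (DNS_DB_NSEC3ONLY | DNS_DB_NONSEC3)) !=
            (DNS_DB_NSEC3ONLY | DNS_DB_NONSEC3));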
- create_node() in rbt.c cannot fail
- the dns_rbt_*name() functions, which are wrappers around
dns_rbt_[add|find|delete]node(), were never used except in tests.
this change isn't really necessary since RBT is likely to go away
eventually anyway. but keeping the API as simple as possible while it
persists is a good thing, and may reduce confusion while QPDB is being
developed from RBTDB code.
when the QPDB is implemented, we will need to have both qpdb_p.h and
rbtdb_p.h. in order to prevent name collisions or code duplication,
this commit adds a generic private header file, db_p.h, containing
structures and macros that will be used by both databases.
some functions and structs have been renamed to more specifically refer
to the RBT database, in order to avoid namespace collision with similar
things that will be needed by the QPDB later.
The case-insensitive matching in isc_ht was essentially broken: only
the hash value computation was case-insensitive, while the key
comparison was always case-sensitive.
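The essence of the fix, as a hedged sketch (isc_ht's real node layout
and helpers differ; strncasecmp() from <strings.h> stands in for
whatever case-insensitive comparison is actually used):

    /* a lookup matches only when both the hash and the key comparison
     * honour the table's case sensitivity */
    bool match = node->hashval == hashval && node->keysize == keysize &&
                 (case_sensitive
                          ? memcmp(node->key, key, keysize) == 0
                          : strncasecmp((const char *)node->key,
                                        (const char *)key, keysize) == 0);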
In the benchmarks, DNS_QPGC_ALL was trying too hard to clean up the QP
and was slowing it down too much. Use DNS_QPGC_MAYBE instead, which we
are going to use anyway for a more realistic load; this also makes the
reported memory usage match real workloads.
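In the benchmark this amounts to roughly the following (assuming the
compaction entry point is dns_qp_compact()):

    /* prefer opportunistic garbage collection over a full sweep */
    dns_qp_compact(qp, DNS_QPGC_MAYBE);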