The dns_message_create() function cannot soft-fail (all memory
allocations either succeed or cause an abort), so change the function
to return void and clean up the call sites.
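A minimal before/after sketch of a typical call site (assuming the
dns_message_create(mctx, intent, &msgp) signature used by this code):

    /* Before: the result had to be checked. */
    result = dns_message_create(mctx, DNS_MESSAGE_INTENTPARSE, &message);
    if (result != ISC_R_SUCCESS) {
            goto cleanup;
    }

    /* After: the call either succeeds or aborts, so no result handling. */
    dns_message_create(mctx, DNS_MESSAGE_INTENTPARSE, &message);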
dns_message_t objects are now handled with reference counting
semantics, so dns_message_destroy() is no longer called directly;
dns_message_detach() must be called instead.
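A hedged sketch of the reference-counting usage (pointer names are
illustrative):

    dns_message_t *msg = NULL, *ref = NULL;

    dns_message_create(mctx, DNS_MESSAGE_INTENTPARSE, &msg); /* refcount 1 */
    dns_message_attach(msg, &ref);                           /* refcount 2 */

    dns_message_detach(&ref);  /* refcount 1 */
    dns_message_detach(&msg);  /* refcount 0: the message is destroyed */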
This commit fixes the problems that arose when moving the dns_message_t
object from fetchctx_t to the query structure.
Since the lifetime of a query object differs from that of a fetchctx,
and the dns_message_t object held by the query may still be in use by
some external module (e.g. the validator) even after the query has been
destroyed, proper handling of references to the message was added in
this commit to avoid accessing an already destroyed object.
Specifically, in rctx_done(), a reference to the message is attached at
the beginning of the function and detached at the end, because a
possible call to fctx_cancelquery() would release the dns_message_t
object while a subsequent call to rctx_nextserver() or rctx_chaseds()
still requires a valid pointer to the same object.
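A rough sketch of the pattern (parameter and field names approximate
the description above, not the exact code):

    static void
    rctx_done(respctx_t *rctx, isc_result_t result) {
            dns_message_t *message = NULL;

            UNUSED(result);

            /*
             * Hold our own reference: fctx_cancelquery() may release
             * the query's reference and destroy the query itself.
             */
            dns_message_attach(rctx->query->rmessage, &message);

            /* ... fctx_cancelquery() may be called here ... */

            /* 'message' is still valid for rctx_nextserver() or
             * rctx_chaseds(). */

            dns_message_detach(&message);
    }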
In valcreate(), a new reference is attached to the message object; this
ensures that if the corresponding query object is destroyed before the
validator attempts to access the message, no invalid pointer access
occurs.
In validated(), a new reference to the message must be attached, since
the validator object is destroyed at the beginning of the function and
the message is still needed later in the same function.
The rctx_nextserver() and rctx_chaseds() functions were adapted to take
a new parameter of type dns_message_t *, so that they receive a valid
reference to the dns_message_t; accessing the message via the response
context (rctx->query->rmessage) could otherwise yield an already
released reference if the query had been canceled.
The assertion failure REQUIRE(msg->state == DNS_SECTION_ANY), triggered
by calling dns_message_setclass() from resquery_response() in
resolver.c, was caused by incorrect management of the dns_message_t
objects used to process responses to the queries issued by the
resolver.
Before the fix, a resolver's fetch context (fetchctx_t) held a pointer
to the message, and that same reference was reused across all attempts
to resolve the query (trying the next server, etc.). For this to work,
the message object had its state reset between iterations, marking it
as ready for new processing.
The problem arose in a scenario with many different forwarders
configured: management of the dns_message_t object's state lacked
proper synchronization, which led to an invalid dns_message_t state in
resquery_response().
Instead of adding unnecessarily complex code to synchronize the object,
the dns_message_t object was moved from the fetchctx_t structure to the
query structure, where it belongs, since each query produces a
response; this way, whenever a new query is created, an associated
dns_message_t is also created.
This commit deals mainly with moving the dns_message_t object from fetchctx_t
to the query structure.
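Schematically (structure contents abbreviated; the real structures have
many more members):

    /* Before: one response message shared by all queries of a fetch. */
    struct fetchctx {
            /* ... */
            dns_message_t *rmessage;
    };

    /* After: each query owns the message used to parse its response,
     * created and destroyed together with the query. */
    struct query {
            /* ... */
            dns_message_t *rmessage;
    };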
Previously, the $systest directory was being removed for out-of-tree
builds at the end of each system test. Because of that, running tests
which depend on compiled objects was breaking subsequent "make check"
invocations:
make: Target 'check' not remade because of errors.
Making all in dyndb/driver
/bin/bash: line 20: cd: dyndb/driver: No such file or directory
Making all in dlzexternal/driver
/bin/bash: line 20: cd: dlzexternal/driver: No such file or directory
Address this by first removing build/test artifacts for a given test
and then removing empty directories inside (and potentially including)
$systest.
Make sure the new job does not get run for every pipeline as it is not
expected to break often and it is similar enough to other system test
jobs. Change the name of the variable holding the path to the
out-of-tree build directory to a more generic one.
isc_nm_pause(), isc_nm_resume(), and the finishing of the nm_thread()
from nm_destroy() have been refactored so that they all use netievents
instead of directly touching the worker structure members. This allows
us to remove most of the locking, as the .paused and .finished members
are always accessed from the matching nm_thread.
When shutting down the nm_thread(), instead of issuing uv_stop(), we
just shut down the .async handle, so all uv_loop_t events are properly
finished first and uv_run() ends gracefully with no outstanding active
handles in the loop.
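A hypothetical sketch of the idea (the netievent type and helper names
below are illustrative, not the exact netmgr API):

    /* Old approach: another thread flips the flag under a lock. */
    LOCK(&worker->lock);
    worker->paused = true;
    UNLOCK(&worker->lock);

    /* New approach: enqueue an event; the matching nm_thread() picks
     * it up from its own loop, so .paused is only ever touched by that
     * thread and no lock is needed. */
    isc__netievent_t *ievent = isc__nm_get_netievent_pause(mgr);
    isc__nm_enqueue_ievent(worker, ievent);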
Since Mac OS X 10.1, Mach-O object files are by default built with a
so-called two-level namespace, which prevents BIND unit tests that
attempt to override the implementations of certain library functions
from working as intended. This feature can be disabled by
passing the "-flat_namespace" flag to the linker. Fix unit tests
affected by this issue on macOS by adding "-flat_namespace" to LDFLAGS
used for building all object files on that operating system (it is not
enough to only set that flag for the unit test executables).
As currently used in the BIND source tree, the --wrap linker option is
redundant because:
- static builds are no longer supported,
- there is no need to wrap around existing functions - what is
actually required (at least for now) is to replace them altogether
in unit tests,
- only functions exposed by shared libraries linked into unit test
binaries are currently being replaced.
Given the above, providing alternative implementations of the functions
to be overridden directly in lib/ns/tests/nstest.c is much simpler than
using the --wrap linker option. Drop the code detecting support for the
latter from configure.ac, simplify the relevant Makefile.am, and remove
lib/ns/tests/wrap.c, updating lib/ns/tests/nstest.c accordingly (this
is harmless for unit tests which do not call the overridden functions).
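The override mechanism is plain symbol interposition, sketched below
(the function name is hypothetical, not an actual libns/libisc symbol):

    /*
     * lib/ns/tests/nstest.c (sketch): defining a function with the
     * same name as one exported by a shared library linked into the
     * test binary makes the dynamic linker bind calls to this
     * definition instead of the library's one; no --wrap is needed.
     */
    isc_result_t
    isc_example_function(void *arg) {
            UNUSED(arg);
            /* behavior required by the unit tests */
            return (ISC_R_SUCCESS);
    }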
RPZ rules cannot be fully relied upon until the summary RPZ database is
updated after an "rndc reload". Wait until the relevant message is
logged after an "rndc reload" to prevent false positives in the
"rpzrecurse" system test caused by the RPZ rules not yet being in effect
by the time ns3 is queried.
The typical sequence of events for AAAA queries which trigger recursion
for an A RRset at the same name is as follows:
1. Original query context is created.
2. An AAAA RRset is found in cache.
3. Client-specific data is allocated from the filter-aaaa memory pool.
4. Recursion is triggered for an A RRset.
5. Original query context is torn down.
6. Recursion for an A RRset completes.
7. A second query context is created.
8. Client-specific data is retrieved from the filter-aaaa memory pool.
9. The response to be sent is processed according to configuration.
10. The response is sent.
11. Client-specific data is returned to the filter-aaaa memory pool.
12. The second query context is torn down.
However, steps 6-12 are not executed if recursion for an A RRset is
canceled. Thus, if named is in the process of recursing for A RRsets
when a shutdown is requested, the filter-aaaa memory pool will have
outstanding allocations which will never be released. This in turn
leads to a crash, since a memory pool must not have any outstanding
allocations by the time isc_mempool_destroy() is called.
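A minimal illustration of that invariant (assuming the isc_mempool API;
exact signatures vary between BIND versions, and filter_data_t stands
in for the plugin's per-client data):

    isc_mempool_t *pool = NULL;

    isc_mempool_create(mctx, sizeof(filter_data_t), &pool);

    void *data = isc_mempool_get(pool); /* step 3: per-client allocation */
    /* ... */
    isc_mempool_put(pool, data);        /* step 11: must happen for every get */

    /* Asserts (and aborts) if any allocation is still outstanding. */
    isc_mempool_destroy(&pool);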
Fix this by creating a stub query context whenever fetch_callback() is
called, including for cancellation events. When the qctx is destroyed,
it ensures the client is detached and the plugin memory is freed.
By changing the check in 'rdatasetiter_first' and 'rdatasetiter_next'
from "now > header->rdh_ttl" to "now - RBTDB_VIRTUAL > header->rdh_ttl"
we include expired rdataset entries so that they can be used for
"rndc dumpdb -expired".
Minor changes are:
- Replace the "$RNDCCMD dumpdb" logic with "rndc_dumpdb" from
conf.sh.common (it does the same thing).
- Update a comment to match the grep calls below it (the comment said
the rest should be expired, while the grep calls indicate they are
still in the cache; the comment now explains why).
The message buffer passed to ns__client_request() is only valid for
the life of the ns__client_request() call. Save a copy of it when we
recurse or process an update, as ns__client_request() will return
before those operations complete.
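A hedged sketch of the idea (the client member names are illustrative):

    isc_region_t r;

    /*
     * The caller's buffer disappears when ns__client_request()
     * returns, so take a private copy that lives as long as the
     * client does.
     */
    isc_buffer_usedregion(buffer, &r);
    client->buffer = isc_mem_get(client->mctx, r.length);
    memmove(client->buffer, r.base, r.length);
    isc_buffer_init(&client->tbuffer, client->buffer, r.length);
    isc_buffer_add(&client->tbuffer, r.length);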
The dotat() function has been changed to send the TAT query
asynchronously: the data is initialized first and the TAT send is then
scheduled to happen asynchronously, so there is no lock-order loop (see
the sketch after the list below).
This breaks the following lock-order loops:
zone->lock (dns_zone_setviewcommit) while holding view->lock
(dns_view_setviewcommit)
keytable->lock (dns_keytable_find) while holding zone->lock
(zone_asyncload)
view->lock (dns_view_findzonecut) while holding keytable->lock
(dns_keytable_forall)
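A hypothetical sketch of the deferral pattern used by dotat() (the
event type, argument structure, and handler name are illustrative):

    /* dotat(): while the keytable/zone locks are held, only copy the
     * data the TAT query needs and schedule the actual work. */
    struct dotat_arg *arg = isc_mem_get(view->mctx, sizeof(*arg));
    arg->view = view;
    arg->task = task;

    isc_event_t *event = isc_event_allocate(view->mctx, keytable,
                                            TAT_SEND_EVENT, /* illustrative */
                                            tat_send, arg, sizeof(*event));
    isc_task_send(task, &event);

    /* tat_send() then runs on the task with none of the above locks
     * held, so it can safely call dns_view_findzonecut() and create
     * the TAT fetch without a lock-order loop. */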
As query_prefetch() or query_rpzfetch() could be called during a
"regular" fetch, we need to introduce separate storage for attaching
the nmhandle while prefetching the records. The query_prefetch() and
query_rpzfetch() functions are guarded against re-entry by the
.query.prefetch member of ns_client_t, so we can reuse the same
.prefetchhandle for both.
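A hedged sketch (member names follow the description above; the
surrounding query_prefetch()/query_rpzfetch() code is omitted):

    /* Guarded against re-entry: only one prefetch at a time. */
    if (client->query.prefetch != NULL) {
            return;
    }

    /* Keep the client's nmhandle alive for the duration of the
     * prefetch, in storage separate from the handle owned by the
     * regular fetch. The same member is reused by query_prefetch()
     * and query_rpzfetch(). */
    isc_nmhandle_attach(handle, &client->prefetchhandle);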