The number of queries used in the burst can be reduced, as the
fetch limit is set very low (1).
The dig command in 'wait_for_fetchlimits()' should time out sooner,
as we expect a SERVFAIL to be returned promptly.
Serve-stale can be enabled before the fetch limits are hit. This
reduces the chance that the resolver queries time out and the fetch
count is reset. The chance of that happening is already slim,
because 'resolver-query-timeout' is 10 seconds, but it is better to
let the data become stale first rather than doing so while
attempting to resolve.
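For illustration, the shortened timeout could look like this in the
dig invocation (a minimal sketch; the address, port, and qname are
placeholders, not the test's actual values):

    # one try, short timeout: the SERVFAIL caused by the fetch
    # limit should arrive well within this window
    dig +tries=1 +timeout=2 @10.53.0.1 -p 5300 data.example TXT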
(cherry picked from commit 00f575e7ef)
Add a test case for when fetch limits are reached and we have stale
data in the cache.
This test starts with a positive answer for 'data.example/TXT' in
cache.
1. Reload named.conf to set fetch limits.
2. Disable responses from the authoritative server.
3. Send a batch of queries to the resolver until the fetch limits
are hit. We can detect this by looking at the response RCODE:
at some point we will see SERVFAIL responses (see the sketch
after this list).
4. At that point we will turn on serve-stale.
5. Clients should see stale answers now.
6. An incoming query should not set the stale-refresh-time window,
so a following query should still get a stale answer because of a
resolver failure (and not because it was in the stale-refresh-time
window).
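A minimal sketch of steps 3 and 4 (server address, port, and the
burst size are illustrative; 'rndc serve-stale on' is the runtime
switch for serve-stale):

    # query in a burst until SERVFAIL shows up, then enable
    # serve-stale
    i=0
    while [ "$i" -lt 20 ]; do
        dig +tries=1 +timeout=2 @10.53.0.1 -p 5300 data.example TXT \
            > dig.out.$i
        grep -q "SERVFAIL" dig.out.$i && break
        i=$((i + 1))
    done
    rndc -s 10.53.0.1 -p 9953 serve-stale on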
(cherry picked from commit 11b74fc176)
The 'legacy-keys.kasp' test checks that a zone with key files but
no state files yet is signed correctly. This test is expanded to
cover the case where old key files still exist in the key
directory. This covers bug #2406, where keys with their "Delete"
timing metadata set are picked up by the keymgr as active keys.
Fix the 'legacy-keys.kasp' test by creating the right key files
(for zone 'legacy-keys.kasp', not 'legacy.kasp').
Use a unique policy for this zone, using shorter lifetimes.
Create two more keys for the zone, and use 'dnssec-settime' to set
the timing metadata in the past, far enough back that the keys
should not be considered by the keymgr.
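A sketch of how such a backdated predecessor key might be created
(the algorithm and offsets are illustrative; 'dnssec-settime -I'
and '-D' set the "Inactive" and "Delete" timing metadata):

    # create a key, then push all of its timing metadata far
    # enough into the past that the keymgr should ignore it
    KEY=$(dnssec-keygen -a ECDSAP256SHA256 legacy-keys.kasp)
    dnssec-settime -P now-2y -A now-2y -I now-1y -D now-1y "$KEY"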
Update the 'key_unused()' test function to consider keys with their
"Delete" timing metadata in the past as unused.
Extend the test to ensure that the keys to be used are not the old
predecessor keys (with their "Delete" timing metadata in the past).
Update the test so that the checks performed are consistent with the
newly configured policy.
(cherry picked from commit d4b2b7072d)
This commit adds four tests for the new option (a configuration
sketch follows the list):
1. Test default configuration of stale-answer-client-timeout, a
value of 1.8 seconds, with stale-refresh-time disabled.
2. Test disabling of stale-answer-client-timeout.
3. Test stale-answer-client-timeout with a value of zero. In this
case we take advantage of a log entry which shows that a stale
answer was promptly used before an attempt to refresh the RRset
was made. We also check, by re-enabling a disabled authoritative
server, that the RRset was successfully refreshed after that.
4. Test stale-answer-client-timeout 0 with stale-refresh-time 4. In
this test we want to ensure several things:
- If we have a stale RRset entry in cache, a request must be
promptly answered with this data, while BIND must also attempt
to refresh the RRset in the background.
- If the attempt to refresh the RRset times out, the RRset must
have its stale-refresh-time window activated.
- If a new request for the same RRset arrives, it must be
promptly answered with stale data, since stale-refresh-time is
active for this RRset; in this case no attempt to refresh the
RRset is made.
- Enable the authoritative server and ensure that the RRset was
not refreshed, honoring stale-refresh-time.
- Wait for the stale-refresh-time window to pass, then send
another request for the same RRset; this time we expect the
stale cache entry to be hit, due to
stale-answer-client-timeout 0.
- Send another request; this time we expect the answer to be an
active RRset, since it must have been refreshed during the
previous request.
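A sketch of the options these tests revolve around
(stale-answer-client-timeout is in milliseconds, the others in
seconds; the file name and values are illustrative):

    cat > stale.conf << EOF
    options {
        stale-cache-enable yes;
        stale-answer-enable yes;
        stale-answer-client-timeout 1800;  // 1.8 seconds, the default
        stale-refresh-time 4;              // as in test 4
    };
    EOF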
(cherry picked from commit 35fd039d03)
After the addition of stale-answer-client-timeout, a test broke
because it expected the following behavior:
1. Prime cache data.example txt.
2. Disable authoritative server.
3. Send a query for data.example txt.
4. Recursive server will time out and answer from cache with stale RRset.
5. Recursive server will activate stale-refresh-time due to the previous
failure in attempting to refresh the RRset.
6. Send a query for data.example txt.
7. Expect stale answer from cache due to stale-refresh-time
window being active, even if authoritative server is up.
The problem is that in step 4, due to the new option
stale-answer-client-timeout, the recursive server answers with
stale data before the actual fetch completes.
Since the original fetch is still running in the background, if we
re-enable the authoritative server during that time, the RRset will
actually be successfully refreshed, and the stale-refresh-time
window will not be activated.
The next queries will then fail because they expect the TTL of the
RRset to match the one in the stale cache, not the one just
refreshed.
To solve this, we explicitly disable stale-answer-client-timeout
for this test, as it is not the feature we are interested in
testing here anyway.
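The override amounts to one line in the test's options block (a
sketch; besides an integer value, "disabled" and "off" are the
accepted keywords):

    cat > override.conf << EOF
    options {
        // answer only after the fetch fails, as the test expects
        stale-answer-client-timeout disabled;
    };
    EOF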
(cherry picked from commit a12bf4b61b)
the taskset command used for the cpu system test seems
to be failing under vmware, causing a test failure. we
can try the taskset command and skip the test if it doesn't
work.
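a sketch of the probe (echo_i and the skip exit code follow the
system test framework's conventions, which are assumptions here):

    # skip the test if taskset cannot pin this shell to a cpu
    if ! taskset -c 0 true > /dev/null 2>&1; then
        echo_i "taskset not working, skipping test"
        exit 255
    fi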
(cherry picked from commit a8a49bb783)
this changes most visible uses of master/slave terminology in tests.sh
and most uses of 'type master' or 'type slave' in named.conf files.
files in the checkconf test were not updated in order to confirm that
the old syntax still works. rpzrecurse was also left mostly unchanged
to avoid interference with DNSRPS.
(cherry picked from commit e43b3c1fa1)
it is now an error to have two primaries lists with the same
name. this is true regardless of whether the "primaries" or
"masters" keywords were used to define them.
(cherry picked from commit f619708bbf)
as "type primary" is preferred over "type master" now, it makes
sense to make "primaries" available as a synonym too.
added a correctness check to ensure "primaries" and "masters"
cannot both be used in the same zone.
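a sketch of the synonym in use, and of the combination the new
check rejects (names and addresses are illustrative):

    cat > zones.conf << EOF
    zone "ok.example" {
        type secondary;
        primaries { 10.53.0.1; };   // new spelling
    };
    zone "broken.example" {
        type secondary;
        primaries { 10.53.0.1; };
        masters { 10.53.0.2; };     // error: both in one zone
    };
    EOF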
(cherry picked from commit 16e14353b1)
The mkeys system test started to fail after introducing support for
zones transitioning to unsigned without going bogus. This is
because there was a bug in the code: if you reconfigure a zone and
remove the "auto-dnssec" option, the zone is still
DNSSEC-maintained, because zoneconf.c makes no call to
'dns_zone_setkeyopt()' if the configuration option is absent
(cfg_map_get(zoptions, "auto-dnssec", &obj) returns an error).
The mkeys system test implicitly relied on this bug: initially the
root zone is DNSSEC-maintained; at some point the test needs to
reset the root zone in order to prepare for some tests with bad
signatures. Because it needs to inject a bad signature,
'auto-dnssec' is removed from the configuration.
The test passed, but for the wrong reasons:
I:mkeys:reset the root server
I:mkeys:reinitialize trust anchors
I:mkeys:check positive validation (18)
The 'check positive validation' test works because the zone is
still DNSSEC-maintained: the DNSSEC records in the signed root zone
file on disk are being ignored.
After fixing the bug and introducing the graceful transition to
insecure, the root zone is no longer DNSSEC-maintained after the
reconfig. The zone now explicitly needs to be reloaded, because
otherwise the 'check positive validation' test runs against an old
version of the zone (the one with all the revoked keys), and the
test will obviously fail.
(cherry picked from commit 2fc42b598b)
Configure "none" as a builtin policy. Change the 'cfg_kasp_fromconfig'
api so that the 'name' will determine what policy needs to be
configured.
When transitioning a zone from secure to insecure, there will be
cases when a zone with no DNSSEC policy (dnssec-policy none) should
still be using KASP. When key state files are available, this is an
indication that the zone was once DNSSEC-signed but has been
reconfigured to become insecure.
If we did not run the keymgr, named would abruptly remove the
DNSSEC records from the zone, making the zone bogus. Therefore,
change the code such that a zone will use KASP if there is a valid
dnssec-policy configured, or if there are state files available.
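A sketch of the reconfiguration (zone name and file are
illustrative): with key state files present from the signed period,
named keeps running the keymgr for this zone instead of abruptly
dropping its DNSSEC records.

    cat > zone.conf << EOF
    zone "example" {
        type primary;
        file "example.db";
        dnssec-policy none;   // going insecure, gracefully
    };
    EOF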
(cherry picked from commit cf420b2af0)
Add two test zones that will be reconfigured to go insecure by
setting the 'dnssec-policy' option to 'none'.
One zone uses inline-signing (implicitly, through dnssec-policy);
the other is a dynamic zone.
Two tweaks to the kasp system test are required: we need to set
when to expect the CDS/CDNSKEY Delete records, and we need to know
when we are dealing with a dynamic zone (the logs to look for
differ slightly: inline-signing prints "(signed)" after the zone
name, dynamic zones do not).
(cherry picked from commit fa2e4e66b0)
Add a test to check that BIND 9 honors the CPU affinity mask. This
requires some changes to the start script to construct the named
command line.
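A sketch of how the start script might build the command (the
variable and mask are illustrative, not the script's actual names):

    # pin named to the requested cpus when an affinity mask is given
    if [ -n "$cpumask" ]; then
        taskset "$cpumask" named -c named.conf &
    else
        named -c named.conf &
    fi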
(cherry picked from commit f1a097964c)
This commit extends the perl Configure script to also check for
libssl in addition to libcrypto, and changes the vcxproj source
files to link with both libcrypto and libssl.
The traceback files could overwrite each other on systems which do not
use different core dump file names for different processes. Prevent
that by writing the traceback file to the same directory as the core
dump file.
These changes still do not prevent the operating system from overwriting
a core dump file if the same binary crashes multiple times in the same
directory and core dump files are named identically for different
processes.
(cherry picked from commit 6428fc26af)
Upon request from Mark, change the configuration of salt to salt
length.
Introduce a new function 'dns_zone_checknsec3param' that can be
used upon reconfiguration to check whether the existing NSEC3
parameters are in sync with the configuration. If the salt in use
matches the configured salt length, don't change the NSEC3
parameters.
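A sketch of the resulting syntax (values are illustrative): the
policy now carries a salt length, and the actual salt is generated.

    cat > policy.conf << EOF
    dnssec-policy "nsec3" {
        nsec3param iterations 0 optout no salt-length 8;
    };
    EOF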
(cherry picked from commit 6f97bb6b1f)
NSEC3 is not backwards compatible with key algorithms that existed
before the RFC 5155 specification was published.
(cherry picked from commit 00c5dabea3)
Check the 'nsec3param' configuration for the number of iterations.
The maximum number of iterations allowed is based on the key size
(see https://tools.ietf.org/html/rfc5155#section-10.3).
Check the 'nsec3param' configuration for a correct salt. If the
string is not "-" or hex-based, it is a bad salt.
(cherry picked from commit 7039c5f805)
Implement support for NSEC3 in dnssec-policy. Store the configuration
in kasp objects. When configuring a zone, call 'dns_zone_setnsec3param'
to queue an nsec3param event. This will ensure that any previous
chains will be removed and a chain according to the dnssec-policy is
created.
Add tests for dnssec-policy zones that use the new 'nsec3param'
option, as well as for changing to new values, changing to NSEC,
and changing from NSEC.
(cherry picked from commit 114af58ee2)
The synthesised CNAME is not supposed to be followed when the
QTYPE is CNAME or ANY, as the lookup is satisfied by the CNAME
record.
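A sketch (names are illustrative), assuming a DNAME at
'dname.example' so that names below it synthesize a CNAME:

    # the synthesized CNAME itself satisfies this query; its
    # target must not be chased
    dig @10.53.0.1 -p 5300 foo.dname.example CNAME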
(cherry picked from commit e980affba0)
Add one test that checks the behavior when serve-stale is enabled
via configuration (as opposed to enabled via rndc).
Add one test that checks the behavior when stale-refresh-time is
disabled (set to 0).
Using the same value for 'stale-answer-ttl' as the authoritative
TTL makes it hard to differentiate between a response from the
stale cache and a response from the authoritative server.
Change the stale-answer-ttl from 2 to 4, so that it differs from
the authoritative TTL.
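A sketch of the relevant options after the change (serve-stale
enabled via configuration; values follow the text above):

    cat > serve-stale.conf << EOF
    options {
        stale-cache-enable yes;
        stale-answer-enable yes;   // enabled via configuration, not rndc
        stale-answer-ttl 4;        // no longer matches the authoritative TTL
    };
    EOF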
The strategy of running many dig commands in parallel and waiting
for the respective output files to be non-empty was resulting in
random, hard-to-reproduce test failures: the subsequent reading of
a file could fail because its contents had not yet been fully
flushed.
Instead of checking whether the output files are non-empty, we now
wait for the dig processes to finish (see the sketch below).
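A sketch of the new approach (qname, address, and count are
illustrative):

    # run the digs in the background, then wait for all of them
    # to exit before reading any output file
    for i in 1 2 3 4; do
        dig @10.53.0.1 -p 5300 data.example TXT > dig.out.$i &
    done
    wait
    grep "status: NOERROR" dig.out.1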
This test works as follows:
- Query for the data.example RRset.
- Sleep until its TTL expires (2 secs).
- Disable the authoritative server.
- Query for data.example again.
- Since the server is down, the answer comes from the stale cache,
which has a configured stale-answer-ttl of 3 seconds.
- Enable the authoritative server.
- Query for data.example again.
- Since the last query before re-enabling the authoritative server
failed, and since 'stale-refresh-time' seconds have not yet
elapsed, the answer should come from the stale cache and not from
the authoritative server.
Before the stale-refresh-time feature, the system test for ancient
RRsets was loosely based on the average time the previous tests and
queries were taking, and thus not very precise.
After the addition of stale-refresh-time, the system test for
ancient RRsets started to fail, since the queries for stale records
(with a low max-stale-ttl) no longer took the time of a full
resolution: the answers now came from the cache, because the RRsets
were stale and within the stale-refresh-time window after the
previous resolution failure.
To handle this, the correct time to wait before an RRset becomes
ancient is calculated from the max-stale-ttl configuration plus the
TTL set in the RRsets used in the tests (ans2/ans.pl).
Then, before sending queries for ancient RRsets, we check whether
we need to sleep long enough to ensure those RRsets will be marked
as ancient (see the sketch below).
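A sketch of the computation (values are illustrative; $t0 is
assumed to have been recorded when the RRsets were cached):

    max_stale_ttl=10
    ttl=2
    ancient_after=$((max_stale_ttl + ttl))
    elapsed=$(($(date +%s) - t0))
    # sleep only for whatever part of the window is still left
    if [ "$elapsed" -lt "$ancient_after" ]; then
        sleep $((ancient_after - elapsed))
    fi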
RFC 8767 recommends that attempts to refresh be made no more
frequently than every 30 seconds.
Added a check to named-checkconf, which warns if values below the
default are found in the configuration.
BIND will also log the same warning when loading the configuration.
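A sketch of a configuration that now draws the warning (the exact
message text is not reproduced here):

    cat > low.conf << EOF
    options {
        stale-refresh-time 20;   // below the recommended 30s floor
    };
    EOF
    named-checkconf low.conf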
Before this update, BIND would attempt a full recursive resolution
for each query received if the requested RRset had an expired TTL.
Only if the resolution failed for some reason would BIND then check
for a stale RRset in the cache (if 'stale-cache-enable' and
'stale-answer-enable' are on).
The problem with this approach is that if an authoritative server
is unreachable or failing to respond, it is very unlikely that the
problem will be fixed within the next few seconds.
A better approach to improve performance in those cases is to
record the moment at which a resolution failed and, if new queries
arrive for that same RRset, to try to respond directly from the
stale cache, for a window of time configured via
'stale-refresh-time'.
Only when this interval expires do we try a normal refresh of the
RRset.
The logic behind this commit is as follows:
- In query.c / query_gotanswer(), if the check of the 'result'
variable falls through to the default case, an error is assumed to
have happened, and a call to 'query_usestale()' is made to check
whether serving of stale RRsets is enabled in the configuration.
- If serving of stale answers is enabled, a flag is turned on in
the query context to look for stale records (query.c:6839):
    qctx->client->query.dboptions |= DNS_DBFIND_STALEOK;
- A call to query_lookup() is then made again; inside it, a call to
'dns_db_findext()' is made, which in turn invokes rbtdb.c /
cache_find().
- In rbtdb.c / cache_find(), the important bit of this change is
the call to 'check_stale_header()', a function that yields true if
we should skip the stale entry, or false if we should consider it.
- In check_stale_header() we now check whether the
DNS_DBFIND_STALEOK option is set. If it is, we know that this new
search for stale records was made due to a failure in a normal
resolution, so we keep track of the time at which the failure
occurred (rbtdb.c:4559):
    header->last_refresh_fail_ts = search->now;
- In check_stale_header(), if DNS_DBFIND_STALEOK is not set, then
we know this is a normal lookup; if the record is stale and the
query time is within the last failure time plus the
stale-refresh-time window, we return false so that cache_find()
knows it can consider this stale RRset entry for the response.
The last additions are two new methods in the database interface:
- setservestale_refresh
- getservestale_refresh
These were added so that rbtdb can be aware of the value set in the
configuration option, since at that level we have no access to the
view object.