When configuring the same dnssec-policy for two zones with the same
name but in different views, there is a race condition over which zone
runs the keymgr first. If they run sequentially, only one set of keys
is created; if they run in parallel, two sets of keys are created.
Lock the kasp when looking for keys and running the key manager. This
way, only one key set is created for the same zone in different views.
The dnssec-policy does not implement sharing keys between different
zones.
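As a minimal sketch of the idea, with hypothetical helper names such as
keys_exist() and keymgr_run() (these are stand-ins, not the actual BIND
code):

```c
/* Sketch only: hypothetical types and helpers, not the actual BIND code. */
#include <pthread.h>
#include <stdbool.h>

typedef struct zone zone_t;

typedef struct kasp {
	pthread_mutex_t lock; /* serializes key management for this policy */
} kasp_t;

bool keys_exist(zone_t *zone, kasp_t *kasp); /* stand-in */
void keymgr_run(zone_t *zone, kasp_t *kasp); /* stand-in */

/*
 * Hold the kasp lock across both the key lookup and the key manager run,
 * so that the same zone configured in two views ends up with a single
 * key set instead of racing to create two.
 */
void
zone_rekey(zone_t *zone, kasp_t *kasp) {
	pthread_mutex_lock(&kasp->lock);
	if (!keys_exist(zone, kasp)) {
		keymgr_run(zone, kasp); /* creates the keys only once */
	}
	pthread_mutex_unlock(&kasp->lock);
}
```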
(cherry picked from commit e0bdff7ecd)
The test case for zsk-retired was missing the actual checks. Add
them and fix the set_policy call to expect three keys.
(cherry picked from commit 2e4b55de85)
Some comments started with a lowercase letter. Capitalize them to
be more consistent with the rest of the comments.
Add some newlines between `set_*` calls and check calls, also to be
more consistent with the other test cases.
(cherry picked from commit 7e54dd74f9)
We may be checking the algorithm steps too fast: the reconfig
command may still be in progress. Make sure the zones are signed
and loaded by digging the NSEC records for these zones.
(cherry picked from commit d16520532f)
Algorithm rollover waited too long before introducing zone
signatures. It waited to make sure all signatures were resigned,
but when introducing a new algorithm, all signatures are resigned
immediately. Only add the sign delay if there is a predecessor key.
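In other words, roughly (hypothetical names; the real logic lives in
the key manager):

```c
/* Sketch only: hypothetical names, not the actual keymgr code. */
#include <stdbool.h>
#include <time.h>

/*
 * The time at which all zone signatures can be considered made with the
 * new key: add the re-sign delay only when the key replaces a
 * predecessor; a brand-new algorithm re-signs the whole zone at once.
 */
time_t
zrrsig_ready_time(time_t key_active, bool has_predecessor,
		  time_t sign_delay) {
	return has_predecessor ? key_active + sign_delay : key_active;
}
```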
(cherry picked from commit 28506159f0)
Algorithm rollover was stuck on submitting DS because keymgr thought
it would move to an invalid state. It did not match the current
key because it checked it against the current key in the next state.
Fix this by checking the current key against the desired state, not
the existing state.
(cherry picked from commit a8542b8cab)
Add a test case for algorithm rollover. This is triggered by
changing the dnssec-policy. A new nameserver ns6 is introduced
for tests related to dnssec-policy changes.
This requires a slight change in check_next_key_event to only
check the last occurrence. Also, change the debug log message in
lib/dns/zone.c to deal with checks when no next scheduled key event
exists (and default to loadkeys interval 3600).
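A rough sketch of that fallback, with hypothetical names (the actual
message and timer handling are in lib/dns/zone.c):

```c
/* Sketch only: hypothetical names; the real code is in lib/dns/zone.c. */
#include <stdio.h>
#include <time.h>

#define LOADKEYS_INTERVAL 3600 /* seconds, assumed default */

/*
 * Log when the next key event is due; if no key event is scheduled,
 * fall back to the loadkeys interval so the test's log check still has
 * something deterministic to match.
 */
void
log_next_key_event(time_t now, time_t nexttime) {
	time_t when = (nexttime != 0) ? nexttime : now + LOADKEYS_INTERVAL;
	printf("next key event in %ld seconds\n", (long)(when - now));
}
```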
(cherry picked from commit 88ebe9581b)
The zone 'step6.ksk-doubleksk.autosign' is configured but is neither
set up nor tested. Remove the unneeded configured zone.
(cherry picked from commit cc2afe853b)
Algorithm rollover will require four keys, so introduce KEY4.
It also requires looking at key files for multiple algorithms, so
change how key ids are retrieved to be algorithm agnostic (adjusting
count checks). The algorithm will be verified in check_key, so
relaxing 'get_keyids' is fine.
Replace '${_alg_num}' with '$(key_get KEY[1-4] ALG_NUM)' in checks
to deal with multiple algorithms.
(cherry picked from commit 00ced2d2e7)
OpenBSD virtual machines seem to be affected particularly badly by other
activity happening on the host. This causes trouble around release
time: when multiple tags are pushed to the repository, a large number of
jobs is started concurrently on all CI runners. In extreme cases, this
causes the system test suite to run for about an hour (!) on OpenBSD
VMs, with multiple tests failing. We investigated the test artifacts
for all such cases in the past and the outcome was always the same: test
failures were caused by extremely slow I/O on the guest. We tried
various tricks to work around this problem, but nothing helped.
Given the above, stop running OpenBSD system test jobs for pending BIND
releases to prevent the results of these jobs from affecting the
assessment of a given release's readiness for publication. This change
does not affect OpenBSD build jobs. OpenBSD system test jobs will still
be run for scheduled and web-requested pipelines, to make sure we catch
any severe issues with test code on that platform sooner or later.
(cherry picked from commit 7b002cea83)
There is a failure mode which gets triggered on heavily loaded
systems. A key change is scheduled in 5 seconds to make ZSK2 inactive
and ZSK3 active, but `named` takes more than 5 seconds to progress
from `rndc loadkeys` to the query check. At this time the SOA RRset
is already signed by the new ZSK which is not expected to be active
at that point yet.
Split up the checks to test the case where RRsets are correctly signed
with the offline KSK (whose signature is maintained) and the active
ZSK. On the first run, RRsets should be signed with the still-active
ZSK2; on the second run, with the newly active ZSK3.
(cherry picked from commit aebb2aaa0f)
This commit slightly simplifies the lock management within the dns_resolver_prime()
and prime_done() functions by turning the resolver's "priming" attribute into an
atomic_bool and by leaving only one object dependent on the "primelock" lock,
namely the "primefetch" attribute.
By having the attribute "priming" as an atomic type, it save us from having to
use a lock just to test if priming is on or off for the given resolver context
object, within "dns_resolver_prime" function.
The "primelock" lock is still necessary, since dns_resolver_prime() function
internally calls dns_resolver_createfetch(), and whenever this function
succeeds it registers an event in the task manager which could be called by
another thread, namely the "prime_done" function, and this function is
responsible for disposing the "primefetch" attribute in the resolver object,
also for resetting "priming" attribute to false.
It is important that the invariant "priming == false implies primefetch == NULL"
is maintained, so that any thread calling dns_resolver_prime() knows for sure
that if the "priming" attribute is false, the "primefetch" attribute is also
NULL, and a new fetch context can be created to fulfill this purpose and
assigned to the "primefetch" attribute under the protection of the lock.
To honor the explanation above, dns_resolver_prime() is implemented as follows
(see the sketch below):
1. Atomically check the "priming" attribute for the given resolver context.
2. If "priming" is false, assume that "primefetch" is NULL (this is
   ensured by the prime_done() implementation), acquire the "primelock"
   lock, create a new fetch context, and update the "primefetch" pointer
   to point to the newly allocated fetch context.
3. If "priming" is true, assume that the job is already in progress;
   no locks are acquired and there is nothing else to do.
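A minimal sketch of that flow, assuming simplified stand-in types (the real
code uses BIND's resolver structures and dns_resolver_createfetch()):

```c
/* Sketch only: simplified stand-ins for the real resolver types. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct fetch {
	int unused;
} fetch_t;

typedef struct resolver {
	atomic_bool priming;	   /* lock-free "is priming in progress?" */
	pthread_mutex_t primelock; /* protects primefetch only */
	fetch_t *primefetch;
} resolver_t;

void
resolver_prime(resolver_t *res) {
	/* Step 1: atomically check (and claim) "priming". */
	bool expected = false;
	if (!atomic_compare_exchange_strong(&res->priming, &expected, true)) {
		/* Step 3: already priming; no locks taken, nothing to do. */
		return;
	}

	/*
	 * Step 2: "priming" was false, so "primefetch" is NULL (the
	 * invariant); create the fetch and publish it under "primelock".
	 */
	pthread_mutex_lock(&res->primelock);
	res->primefetch = calloc(1, sizeof(*res->primefetch));
	pthread_mutex_unlock(&res->primelock);
}
```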
To keep the previous invariant consistent, prime_done() is implemented as
follows (again, see the sketch below):
1. Acquire the "primelock" lock.
2. Keep a reference to the current "primefetch" object.
3. Reset the "primefetch" attribute to NULL.
4. Release the "primelock" lock.
5. Atomically update the "priming" attribute to false.
6. Destroy the "primefetch" object by using the temporary reference.
This ensures that if "priming" is false, "primefetch" was already reset to NULL.
It doesn't make any difference that the "priming" attribute is not protected
by a lock, since the visible state of this variable depends on the calling
order of the dns_resolver_prime() and prime_done() functions.
As an example, suppose that instead of using an atomic for the "priming"
attribute we employed a lock to protect it.
Now suppose that prime_done() is called by thread A, which is then preempted
before acquiring the lock, thus not yet resetting "priming" to false.
In parallel, suppose thread B is scheduled and calls dns_resolver_prime();
it acquires the lock, sees that "priming" is true, and therefore considers
that this resolver object is already priming and does no further work.
Conversely, if the lock was acquired in the other order, thread B would see
that "priming" is false (since prime_done() acquired the lock first and set
"priming" to false) and would initiate a priming fetch for this resolver.
An atomic variable doesn't change this behavior: it behaves exactly the same,
depending on the function call order, except that it avoids having to use a
lock.
There should be no side effects from this change: the previous implementation
used the resolver's more general "lock" mutex, which is used in far more
contexts, but within dns_resolver_prime() and prime_done() it was only used to
protect the "primefetch" and "priming" attributes, and those attributes are
not used in any of the other critical sections protected by that lock.
HAVE_UV_IMPORT and other config.h macros must not be set unconditionally
because no existing libuv release exposes uv_import() and/or uv_export()
yet. Windows builds not passing an explicit path to libuv to
win32utils/Configure are currently broken because of this, so comment
out the offending lines and describe when the aforementioned config.h
macros should be set.
(cherry picked from commit 57b430b8ca)
Fix a race between ns_client_killoldestquery and ns_client_endrequest:
killoldestquery takes a client from the `recursing` list while endrequest
destroys the client object, after which killoldestquery operates on a
destroyed client object. Prevent this by holding the reclist lock while
cancelling the query.
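A minimal sketch of the fix, with simplified stand-in types (the real code
uses BIND's ns_client and client manager structures):

```c
/* Sketch only: simplified stand-ins for the real ns_client structures. */
#include <pthread.h>
#include <stddef.h>

typedef struct client client_t;

typedef struct clientmgr {
	pthread_mutex_t reclock; /* protects the "recursing" list */
	client_t *recursing;	 /* oldest recursing client (simplified) */
} clientmgr_t;

void cancel_query(client_t *client); /* stand-in for the actual cancel */

/*
 * Cancel the oldest recursing query while still holding the reclist
 * lock, so endrequest cannot destroy the client object between the
 * list lookup and the cancellation.
 */
void
killoldestquery(clientmgr_t *mgr) {
	pthread_mutex_lock(&mgr->reclock);
	if (mgr->recursing != NULL) {
		cancel_query(mgr->recursing);
	}
	pthread_mutex_unlock(&mgr->reclock);
}
```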
(cherry picked from commit df3dbdff81)