Before running the keymgr, first make sure that the existing keys are
present in the new key list. If they are not, treat this as an
operational error where the keys have been taken offline (temporarily),
possibly unintentionally.
(cherry picked from commit 5fdad05a8a)
By default we log a rekey failure on debug level. We should probably
change the log level to error. We make an exception for when the zone
is not loaded yet; at startup it often happens that a rekey is run
before the zone is fully loaded.
(cherry picked from commit 68b840c731527e01699afaf084559152124b717a)
When signatures were not added because too many types already existed
at a node, the diff was not cleaned up; this led to a memory leak being
reported at shutdown.
(cherry picked from commit 2825bdb1ae5be801e7ed603ba2455ed9a308f1f7)
Previously, the number of RR types for a single owner name was limited
only by the maximum number of types (64k). As the data structure that
holds the RR types for a database node is just a linked list, and there
are places where we walk through the whole list (again and again),
adding a large number of RR types for a single owner name would slow
down the processing of that name (database node).
Add a configurable limit to cap the number of RR types for a single
owner. This is enforced at the database (rbtdb, qpzone, qpcache) level
and configured with the new max-types-per-name configuration option,
which can be set globally, per-view, and per-zone.
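A minimal named.conf sketch of the global form of the new option (the
limit value below is purely illustrative, not a recommended default):

    options {
        max-types-per-name 100;
    };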
(cherry picked from commit 00d16211d6368b99f070c1182d8c76b3798ca1db)
Previously, the number of RRs in an RRset was internally unlimited.
As the data structure that holds the RRs is just a linked list, and
there are places where we walk through all of the RRs, adding an RRset
with a huge number of RRs would slow down the processing of that
RRset.
Add a configurable limit to cap the number of RRs in a single RRset.
This is enforced at the database (rbtdb, qpzone, qpcache) level and
configured with the new max-records-per-type configuration option,
which can be set globally, per-view, and per-zone.
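As a hedged illustration of the per-zone form, the new option can also
be set inside a zone statement (the zone name and limit value are
illustrative only):

    zone "example.com" {
        type primary;
        file "example.com.db";
        max-records-per-type 500;
    };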
(cherry picked from commit 3fbd21f69a1bcbd26c4c00920e7b0a419e8762fc)
When named is started with -4 or -6 and the primaries for a zone
do not have an IPv4 or IPv6 address, respectively, issue a log message.
(cherry picked from commit 2cd4303249)
The `axfr_makedb()` function didn't set the loop on the newly created
database, effectively disabling delayed cleaning of that database.
Move the database creation into the dns_zone API, which knows all the
gory details of creating a new database suitable for the zone.
(cherry picked from commit 3310cac2b0)
If the zone is signed in a different way than with 'dnssec-policy',
use the legacy way of jittering signatures, that is, calculate the
jitter by taking the two values of 'sig-validity-interval' and
subtracting the second value from the first.
Having a value higher than 'signatures-validity' does not make sense
and should be treated as a configuration error.
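As a hedged, worked example of the legacy calculation described above
(the values are chosen arbitrarily):

    sig-validity-interval 30 23;
    // per the legacy scheme, the jitter span is derived as
    // 30 - 23 = 7 (the first value minus the second)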
(cherry picked from commit c3d8932f79)
When calculating the RRSIG validity, jitter is now derived from the
config option rather than from the refresh value.
(cherry picked from commit 67f403a423)
Previously, rbtdb->task had a quantum of 1 because it was originally
used just for freeing RBTDB contents, which can happen on a "best
effort" basis (it does not need to be prioritized). However, when tree
pruning was implemented, it also started sending events to that task,
allowing the latter to become clogged up with a significant event
backlog, because only a single RBTDB node was pruned per event.
To prioritize tree pruning (as it is necessary for enforcing the
configured memory use limit for the cache memory context), create a
second task with a virtually unlimited quantum (UINT_MAX) and send the
tree-pruning events to this new task, to ensure that all nodes scheduled
for pruning will be processed before further nodes are queued in a
similar fashion.
This change enables dropping the prunenodes list and restoring the
originally-used logic that allocates and sends a separate event for each
node to prune.
Checking the DS at the parent only happens if dns_zone_getdnsseckeys()
returns success. However, if this function somehow fails, it can also
prevent the keymgr from running.
Before the check-DS functionality was added, the keymgr would only run
if 'dns_dnssec_findmatchingkeys()' did not return an error (i.e., it
returned either ISC_R_SUCCESS or ISC_R_NOTFOUND). After this change,
the correct result code is used again.
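A hedged sketch of the intended gating (the call arguments are elided
and the surrounding control flow is illustrative, not copied from the
actual change):

    result = dns_dnssec_findmatchingkeys(...); /* arguments elided */
    if (result != ISC_R_SUCCESS && result != ISC_R_NOTFOUND) {
            /* a real error: do not run the keymgr */
            goto failure;
    }
    /* ISC_R_SUCCESS or ISC_R_NOTFOUND: safe to run the keymgr */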
(cherry picked from commit 07c2acf15d)
Add a function that checks whether a DNSKEY, CDNSKEY, or CDS record
belongs to a key that is being used for signing.
(cherry picked from commit 3b6e9a5fa7)
The following code block repeats quite often:
    if (rdata.type == dns_rdatatype_dnskey ||
        rdata.type == dns_rdatatype_cdnskey ||
        rdata.type == dns_rdatatype_cds)
Introduce a new function to reduce the repetition.
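A hedged sketch of such a helper (the function name is illustrative;
the rdata type constants are the existing ones from <dns/rdatatype.h>):

    static bool
    is_zone_key_type(dns_rdatatype_t type) {
            /* DNSKEY, CDNSKEY and CDS records describe signing keys */
            return (type == dns_rdatatype_dnskey ||
                    type == dns_rdatatype_cdnskey ||
                    type == dns_rdatatype_cds);
    }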
(cherry picked from commit ef58f2444f)
If the DNSKEY, CDNSKEY, or CDS RRsets had different TTLs, then the
filtering of these RRsets resulted in dns_diff_apply() failing with
"not exact". Identify tuple pairs that are just TTL changes and
allow them through the filter.
(cherry picked from commit d601a90ea3)
If the TTLs of the DNSKEY, CDNSKEY, and CDS records do not match the
dnskey-ttl value, update them by removing all the records and
re-adding them with the correct TTL.
(cherry picked from commit dcb7799061)
Just remove the key from consideration, since it is being removed.
The old code could leak a key reference because dst_free_key() was not
called every time we continued. This simplification addresses that as
well.
(cherry picked from commit a3d0476d17)
When kasp support was added, 'inception' was used as a proxy for
'now', which resulted in signatures not being generated, or the wrong
signatures being generated. 'inception' is the time to be set in the
signatures being generated and is usually in the past, to allow for
clock skew; 'now' determines which keys are to be used for signing.
(cherry picked from commit 6066e41948)
When transferring in a non-inline-signing secondary for the first time,
we previously never set the value of zone->loadtime, so it remained
zero. This caused a test failure in the statschannel system test,
and that test case was temporarily disabled. The value is now set
correctly and the test case has been reinstated.
(cherry picked from commit 9643281453)
Drop the timeout before resending a UDP request from 15 seconds to 5
seconds, and add 1 second to the total time to allow the reply to the
third request to arrive. This speeds up the time it takes for named
to recover from a lost packet when refreshing a zone, and to determine
that a primary is down.
(cherry picked from commit 29f399797d)
Update the 'set_resigntime()' function so that raw versions of
inline-signing zones are not scheduled to be re-signed.
Also update the check in the same function for whether the zone is
dynamic; there is an existing function, 'dns_zone_isdynamic()', that
does a similar thing and is more complete.
Also, in 'zone_postload()', check that the zone is not the raw version
of an inline-signing zone before calculating the next resign time.
(cherry picked from commit 741ce2d07a)
This refactors the code flow in got_transfer_quota() to not use the
CHECK() macro, as it really obfuscates the control flow here.
(cherry picked from commit 00cb151f8e)
The 'result' variable should be reset to ISC_R_NOTFOUND again,
because otherwise a log message could be logged about not being
able to get the TLS configuration, based on the 'result' value
from the previous calls to get the TSIG key.
(cherry picked from commit 6cab7fc627)
The dns_zone_catz_enable_db() and dns_zone_catz_disable_db()
functions can race with similar operations in the catz module
because there is no synchronization between the threads.
Add catz functions which use the view's catalog zones' lock
when registering/unregistering the database update notify callback,
and use those functions in the dns_zone module, instead of doing it
directly.
(cherry picked from commit 6f1f5fc307)
This commit ensures that access to the TLS context cache within the
zone manager is properly synchronised.
Previously there was a possibility for it to get unexpectedly
NULLified for a brief moment by a call to
dns_zonemgr_set_tlsctx_cache() from one thread while it was being
accessed from another (e.g. from got_transfer_quota()). This behaviour
could lead to the server abort()ing on configuration reload (under
very rare circumstances).
That behaviour has been fixed.
(cherry picked from commit 0b95cf74ff)
The zone_resigninc() function does not check the validity of
'zone->db', which can crash named if the zone was unloaded earlier,
for example with "rndc delete".
Check that 'zone->db' is not 'NULL' before attaching to it, as is done
in the zone_sign() and zone_nsec3chain() functions, which can
similarly be called by zone maintenance.
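A hedged sketch of the guard described above (the lock macros and
field names follow the general pattern used in lib/dns/zone.c, but are
not copied from the actual change):

    dns_db_t *db = NULL;

    ZONEDB_LOCK(&zone->dblock, isc_rwlocktype_read);
    if (zone->db != NULL) {
            dns_db_attach(zone->db, &db);
    }
    ZONEDB_UNLOCK(&zone->dblock, isc_rwlocktype_read);

    if (db == NULL) {
            /* the zone has been unloaded, e.g. by "rndc delete" */
            return;
    }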
(cherry picked from commit fae0930eb8)
After dns_xfrin was changed to use the network manager, the maximum
global (max-transfer-time-in) and idle (max-transfer-idle-in) times for
incoming transfers became inoperative because the implementation was
missing.
Restore this functionality by implementing the timers for the incoming
transfers.
(cherry picked from commit d2377f8e04)
If the zone already has existing NSEC/NSEC3 chains, then zone_sign()
needs to continue to use them. If there are no chains, then use the
kasp setting, otherwise generate an NSEC chain.
(cherry picked from commit 4b55201459)
This reverts commit a719647023.
The commit introduced a data race, because dns_db_endload() is called
after unfreezing the zone.
(not cherry picked from commit 593dea871a)
The zone_postload() function can fail and unregister the callbacks.
Call dns_db_endload() only after calling zone_postload() to make
sure that the registered update-notify callbacks are not called
when the zone loading has failed during zone_postload().
Also, don't ignore the return value of zone_postload().
(cherry picked from commit ed268b46f1)
The kasp pointers in dns_zone_t should consistently be changed with
dns_kasp_attach() and dns_kasp_detach(), so that the usage is balanced.
(cherry picked from commit b41882cc75)
When there are multiple managed trust anchors we need to know the
name of the trust anchor that is failing. Extend the error message
to include the trust anchor name.
(cherry picked from commit fb7b7ac495)
It was possible for a managed trust anchor needing to send a key
refresh query to be unable to do so because an authoritative zone
was not yet loaded. This has been corrected by delaying the
synchronization of managed-keys zones until after all zones are
loaded.
(cherry picked from commit bafbbd2465)
It is allowed to point parental-agents to a resolver. Therefore, the
RD bit should be set on requests.
Upon receiving a DS response, ensure that the message has either the
AA or the RA bit set.
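A hedged sketch of the response check (the flag constants exist in
<dns/message.h>; the surrounding logic is illustrative):

    /*
     * Accept the DS response only from an authoritative server (AA)
     * or from a resolver that answered the recursive query (RA).
     */
    if ((message->flags & (DNS_MESSAGEFLAG_AA | DNS_MESSAGEFLAG_RA)) == 0) {
            goto failure;
    }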
(cherry picked from commit e34722ed43)
Detaching the views in zone_shutdown() could lead to a
lock-order inversion between adb->namelocks[bucket], adb->lock,
view->lock, and zone->lock. Detach the views outside of the
zone-locked section.
(cherry picked from commit 978a0ef84c)
The reference counting and the isc_timer_attach()/isc_timer_detach()
semantics are misleading, because they cannot be used under normal
conditions. A timer is usually created with the object it operates on
passed as the argument to the timer itself. This means that when the
caller uses `isc_timer_detach()`, it needs the timer to stop, but
isc_timer_detach() stops the timer only if this is the last reference.
Unfortunately, this also means that if the timer is attached elsewhere
and fires, the result will most likely be a use-after-free, because
the object used by the timer no longer exists.
Remove the reference counting from the isc_timer unit, remove the
isc_timer_attach() function, and rename isc_timer_detach() to
isc_timer_destroy() to better reflect how the API needs to be used.
The only caveat is that an already-executed event must be destroyed
before isc_timer_destroy() is called, because the timer is no longer
attached to .ev_destroy_arg.
(cherry picked from commit ae01ec2823)
When we are loading the zones, set the quantum to UINT_MAX, which makes
task_run process all tasks at once. After the zone loading is finished,
the quantum is dropped back to 1, so as not to block the server when we
are loading new zones after reconfiguration.
(cherry picked from commit 87c4c24cde)
The .view (and possibly .prev_view) would be kept attached to the
removed zone until the zone was fully removed from memory in
zone_free(). If this process was delayed because the server was busy
with something else, like constant `rndc reconfig` runs, it could take
seconds to detach the view, possibly keeping multiple dead views in
memory. This could quickly lead to massive memory bloat.
Release the views early, in the zone_shutdown() call, instead of
waiting until the zone is freed.
(cherry picked from commit 13bb821280)
The dns_zonemgr_releasezone() function makes the decision to destroy
'zmgr' (based on its reference count, after decreasing it) inside
a lock, and then destroys the object outside of the lock.
This causes a race with dns_zonemgr_detach(), which could destroy
the object in the meantime.
Change dns_zonemgr_releasezone() to detach from 'zmgr' and destroy
the object (if needed) using dns_zonemgr_detach(), outside of the
lock.
(cherry picked from commit c1fc212253)