We no longer accept copying DNSSEC records from the raw zone to
the secure zone, so update the kasp system test that relies on this
accordingly.
Also add more debugging and store the dnssec-verify results in a file.
(cherry picked from commit 57ea9e08c6)
Add a kasp system test that reconfigures a dnssec-policy zone from
maintaining DNSSEC records directly in the zone to using inline-signing.
Add a similar test case to the nsec3 system test, testing the same
thing but now with NSEC3 in use.
(cherry picked from commit 9018fbb205)
When named starts, it creates an empty KEYDATA record in the managed-keys
zone as a placeholder, then schedules a key refresh. If the key refresh
fails for some reason (e.g. connectivity problems), named will load the
placeholder key into secroots as a trusted key during the next startup,
which will break the chain of trust, and named will never recover from
that state until managed-keys.bind and managed-keys.bind.jnl files are
manually deleted before (re)starting named again.
Before calling load_secroots(), check that we are not dealing with a
placeholder.
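A minimal sketch of the check, assuming the placeholder is a KEYDATA
record with no key material and zeroed timing fields, inside a loop over
the managed-keys records (the exact test in named may differ):

    dns_rdata_keydata_t keydata;
    /* ... keydata has been unpacked from the managed-keys zone ... */

    /* Skip the placeholder KEYDATA record created at startup so it
     * is never loaded into secroots as a trusted key. */
    if (keydata.datalen == 0 && keydata.refresh == 0 &&
        keydata.addhd == 0 && keydata.removehd == 0)
    {
            continue; /* placeholder: no key material yet */
    }
    load_secroots(server, keyname, &keydata);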
(cherry picked from commit 354ae2d7e3)
Add a dnssec test to make sure that named can correctly process a
managed-keys zone with a placeholder KEYDATA record.
(cherry picked from commit 8c48eabbc1)
Because dns_resolver_createfetch() locks the view, it was necessary
to unlock the zone in zone_refreshkeys() before calling it in order
to maintain the lock order, and relock afterward. This permitted a race
with dns_zone_synckeyzone().
This commit moves the call to dns_resolver_createfetch() into a separate
function which is called asynchronously after the zone has been
unlocked.
The keyfetch object now attaches to the zone to ensure that
it won't be shut down before the asynchronous call completes.
This necessitated refactoring dns_zone_detach() so it always runs
unlocked. For managed zones it schedules zone_shutdown() to
run asynchronously; for unmanaged zones there is no task.
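A rough sketch of the new pattern, with a hypothetical event type and
callback name (the actual code in dns/zone.c differs in detail):

    /* Keep the zone alive until the asynchronous callback runs. */
    dns_zone_attach(zone, &kfetch->zone);

    /* Schedule the dns_resolver_createfetch() call to run from the
     * zone's task, after the zone lock has been released;
     * DNS_EVENT_KEYFETCH and do_keyfetch() are illustrative names. */
    event = isc_event_allocate(zone->mctx, zone, DNS_EVENT_KEYFETCH,
                               do_keyfetch, kfetch, sizeof(*event));
    UNLOCK_ZONE(zone);
    isc_task_send(zone->task, &event);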
The NetBSD system allocator is in fact based on jemalloc, but it
doesn't export the extended interface, so we can't use it. Remove
the jemalloc enforcement for NetBSD.
(cherry picked from commit feea72414b)
The dupsigs test is prone to failing on slow CI machines
because the first test can occur before the zone is fully
signed.
Instead of just waiting ten seconds arbitrarily, we now
check every second and allow up to 30 seconds before giving
up.
(cherry picked from commit d9b85cbaae)
Use the ALGORITHM_SET option to use a randomly selected default
algorithm in this test. Make sure the test works by using variables
instead of
hard-coding values.
(cherry picked from commit f65f276f98)
Use the get_algorithms.py script to detect supported algorithms and
select random algorithms to use for the tests.
Make sure to load common.conf.sh after the KEYGEN env var is exported.
(cherry picked from commit 69b608ee9f)
Multiple algorithm sets can be defined in this script. These can be
selected via the ALGORITHM_SET environment variable. For compatibility
reasons, "stable" set contains the currently used algorithms, since our
system tests need some changes before being compatible with randomly
selected algorithms.
The script operates similarly to get_ports.py: environment
variables are created and then printed out as `export NAME=VALUE`
commands, to be interpreted by the shell. Once we support the pytest
runner for system tests, this should be a fixture instead.
(cherry picked from commit 5f480c8485)
Certain variables have to be exported in order for the system tests to
work. It makes little sense to export the variables in one place/script
while they're defined in another place.
Since it does no harm, export all the variables to make the behaviour
more predictable and consistent. Previously, some variables were
exported as environment variables, while others were just shell
variables which could be used once the configuration was sourced from
another script. However, they wouldn't be exposed to spawned processes.
For simplicity's sake (and for the upcoming effort to run system tests
with pytest), export all variables that are used. The TESTS,
PARALLEL_UNIX and SUBDIRS variables are automake-specific, aren't used
anywhere else, and thus are not exported.
(cherry picked from commit 37d14c69c0)
The only variable really needed for the script to work is the path to
the $KEYGEN binary. Allow setting this via an environment variable to
avoid loading conf.sh (and causing a chicken-and-egg problem). Also make
testcrypto.sh executable to allow its use from conf.sh.
(cherry picked from commit bb1c6bbdc7)
There are three levels for the port value, with increasing
priority:
1. The default ports, defined by the 'port' and 'tls-port' config
   options.
2. The primaries-level default port: primaries port <number> { ... };
3. The primaries element-level port: primaries { <address> port
   <number>; ... };
In 'named_config_getipandkeylist()', the 'def_port' and 'def_tlsport'
variables are extracted from level 1, and the 'port' variable is
extracted from level 2. Currently, if the latter is unset, it defaults
to the default port ('def_port' or 'def_tlsport', depending on the
transport used), but it then overrides the level 2 port setting for the
next primaries in the list.
Update the code such that we inherit the port only if the level 3 port
is not set, and inherit from the default ports if the level 2 port is
also not set.
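In pseudo-C, the intended inheritance looks roughly like this (the
variable names are illustrative, not the actual ones used in
'named_config_getipandkeylist()'):

    /* For each primaries element, pick the most specific port. */
    if (element_port_is_set) {
            port = element_port;                       /* level 3 */
    } else if (list_port_is_set) {
            port = list_port;                          /* level 2 */
    } else {
            port = using_tls ? def_tlsport : def_port; /* level 1 */
    }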
(cherry picked from commit 72d3bf8e4e)
Add a test case verifying that if the first primary fails, the
fallback to a second primary over plain DNS works. This is mainly to
test that the port configuration inheritance works correctly.
(cherry picked from commit 622a499027)
Add a couple of tests that verify the serve-stale behavior when
stale-answer-client-timeout is set to 0 and a (stale) CNAME record is
queried.
Related #3517
Make the Sphinx version listed in doc/arm/requirements.txt match the
version currently used in GitLab CI, so that Read the Docs builds the
documentation using the same Python software versions as those used in
GitLab CI.
(cherry picked from commit a8f0ab7df6)
The "prefetch" setting is in "defaultconf" so it cannot fail, use
INSIST to confirm that.
The 'trigger' and 'eligible' variables are now prefixed with
'prefetch_' and their declaration moved to an upper level, because
there is no more additional code block after this change.
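The check, roughly (surrounding code simplified):

    const cfg_obj_t *obj = NULL;
    isc_result_t result;

    /* "prefetch" is present in defaultconf, so the lookup cannot
     * fail. */
    result = named_config_get(maps, "prefetch", &obj);
    INSIST(result == ISC_R_SUCCESS);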
(cherry picked from commit 0227565cf1)
For the prefetch "trigger" parameter ARM states that when a cache
record with a lower TTL value is encountered during query processing,
it is refreshed. But in reality, the record is refreshed when the TTL
value is lower or equal to the configured "trigger" value.
Fix the documentation to make it match with with the code.
(cherry picked from commit ef344b1f52)
The ARM states that the "eligibility" TTL is the smallest original TTL
value that is accepted for a record to be eligible for prefetching,
but the code implementing the condition doesn't behave in that
manner in the edge case when the TTL is equal to the configured
eligibility value.
Fix the code to check that the TTL is greater than or equal to the
configured eligibility value, instead of strictly greater than it.
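Together with the trigger behavior described above, the two prefetch
conditions now read roughly as follows (a sketch; the field and
function names approximate the ones in the resolver and query code):

    /* Eligibility (checked when caching): mark the record as
     * prefetchable when its original TTL is >= the configured
     * value; before this fix the comparison was '>'. */
    if (rdataset->ttl >= res->view->prefetch_eligible) {
            rdataset->attributes |= DNS_RDATASETATTR_PREFETCH;
    }

    /* Trigger (checked during query processing): refresh the
     * record once its remaining TTL is <= the trigger value. */
    if (rdataset->ttl <= client->view->prefetch_trigger) {
            query_prefetch(client, fname, rdataset);
    }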
(cherry picked from commit 863f51466e)
The test triggers a prefetch, but fails to check whether it actually
happened, which prevented it from catching a bug when the record's
TTL value matches the configured prefetch eligibility value.
Check that the prefetch happened by comparing the TTL values.
(cherry picked from commit 89fa9a6592)
For UDP queries, after calling dns_adb_beginudpfetch() in fctx_query(),
make sure that dns_adb_endudpfetch() is also called on the error path,
in order to adjust the quota back.
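A sketch of the intended pairing; send_udp_query() is a hypothetical
stand-in for the actual send logic in fctx_query():

    dns_adb_beginudpfetch(fctx->adb, addrinfo);
    result = send_udp_query(fctx, query); /* hypothetical stand-in */
    if (result != ISC_R_SUCCESS) {
            /* Error path: adjust the UDP fetch quota back. */
            dns_adb_endudpfetch(fctx->adb, addrinfo);
            goto cleanup;
    }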
(cherry picked from commit 5da79e2be0)
It is currently possible that dns_adb_endudpfetch() is not
called in fctx_cancelquery() for a UDP query, which results
in quotas not being adjusted back.
Always call dns_adb_endudpfetch() for UDP queries.
(cherry picked from commit e4569373ca)
In the cleanup code of the fctx_query() function, there is a code path
where 'query' is still linked to 'fctx' while it is being destroyed.
Make sure that 'query' is unlinked before destroying it.
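Schematically (the destroy helper name is illustrative):

    /* Cleanup path: 'query' may still be linked to the fctx. */
    if (ISC_LINK_LINKED(query, link)) {
            ISC_LIST_UNLINK(fctx->queries, query, link);
    }
    fctx_destroyquery(&query); /* hypothetical destroy helper */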
(cherry picked from commit ac889684c7)
For tests where the TCP connection might get interrupted abruptly,
replace nc with curl, as the data sent from the server to the client
might get lost because of the abrupt TCP connection close. This happens
when the TCP connection gets closed while the large request is still
being sent to the server.
As we already require curl for other system tests, replace the nc usage
in the statschannel test with curl, which actually understands the
HTTP/1.1 protocol, so the same connection is reused for sending the
consecutive requests, but without client-side "pipelining".
For the record, the server doesn't support parallel processing of
pipelined requests, so "pipelining" is a bit of a misnomer here; what
we are actually testing is that we process all requests received in a
single TCP read callback.
(cherry picked from commit cd0e5c5784)
The statschannel truncated test still sometimes terminates abruptly
without returning the answer to the first query. This might happen
when the second process_request() discovers there's not enough space
before the sending is complete, and the connection is terminated before
the client gets the data.
Change isc_http so that it pauses reading when it receives data and
resumes it only after the sending has completed, or when there's an
incomplete request waiting for more data.
This makes the request processing slightly less efficient, but also
less taxing for the server, because previously all requests that had
been received via a single TCP read would be processed in a loop and
the sends would be queued after the read callback had processed the
full buffer.
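The flow, sketched with the netmgr pause/resume calls (the callback
names here are illustrative):

    static void
    httpd_request(isc_nmhandle_t *handle, isc_result_t result,
                  isc_region_t *region, void *arg) {
            /* ... parse the request from 'region' ... */

            /* Stop reading while the response is being sent. */
            isc_nm_pauseread(handle);

            /* ... queue the send with httpd_senddone() ... */
    }

    static void
    httpd_senddone(isc_nmhandle_t *handle, isc_result_t result,
                   void *arg) {
            /* The send has completed (or an incomplete request
             * still needs more data): resume reading. */
            isc_nm_resumeread(handle);
    }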
(cherry picked from commit 13959781cb)
Now that the artificial limit on the recv buffer has been removed, the
current system test always fails, because it tests whether the
truncation has happened.
Add a test that sending more than 10 headers causes the connection to
be closed, and a test that sending a huge HTTP request causes the
connection to be closed.
(cherry picked from commit cad2706cce)
Rewrite the isc_httpd to be more robust.
1. Replace the hand-crafted HTTP request parser with picohttpparser for
   parsing whole HTTP/1.0 and HTTP/1.1 requests. Limit the number of
   allowed headers to 10 (an arbitrary number).
2. Replace the hand-crafted URL parser with isc_url_parse for parsing
the URL from the HTTP request.
3. Increase the receive buffer to match the isc_netmgr buffers, so we
can at least receive two full isc_nm_read()s. This makes the
truncation processing much simpler.
4. Process the received buffer from a single isc_nm_read() in a single
loop and schedule the sends to be independent of each other.
The first two changes make the code simpler and rely on existing
libraries that we already had (isc_url, based on nodejs) or that are
used elsewhere (picohttpparser).
The latter two changes remove the artificial "truncation" limit on
parsing multiple requests. Now only a request that has too many
headers (currently 10) or is too big (i.e. the receive buffer fills up
without reaching the end of the request) will end the connection.
We can be lenient with the limits here, because the statschannel
is by definition private and access must be allowed only to
administrators of the server. There are no timers, no rate-limiting, no
upper limit on the number of requests that can be served, etc.
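For illustration, this is roughly how the picohttpparser API is
driven; 'buf', 'len' and 'last_len' stand for the receive buffer
state, and the header cap mirrors the limit described above:

    const char *method, *path;
    size_t method_len, path_len;
    int minor_version;
    struct phr_header headers[10]; /* the arbitrary 10-header cap */
    size_t num_headers = sizeof(headers) / sizeof(headers[0]);

    int ret = phr_parse_request(buf, len, &method, &method_len,
                                &path, &path_len, &minor_version,
                                headers, &num_headers, last_len);
    if (ret > 0) {
            /* Complete request parsed; 'ret' is its byte length. */
    } else if (ret == -2) {
            /* Incomplete request: wait for more data, or close the
             * connection if the receive buffer is already full. */
    } else {
            /* Parse error (including too many headers): close. */
    }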
(cherry picked from commit beecde7120)