Currently, the outgoing UDP sockets have SO_REUSEADDR (SO_REUSEPORT on
BSDs) enabled, which allows multiple UDP sockets to bind to the same
address+port. There is one caveat though: only a single socket (the
last one bound) receives all the incoming traffic. This in turn could
cause an incoming DNS message to be matched against the wrong
dns_dispatch and dropped.
Disable SO_REUSEADDR on the outgoing UDP sockets. This needs to be
done explicitly because `uv_udp_open()` silently enables the option on
the socket.
(cherry picked from commit eec30c33c2)
We currently set SO_INCOMING_CPU incorrectly, and testing by Ondrej
shows that fixing the issue and setting affinities is worse than letting
the kernel schedule threads without constraints. So we should not set
SO_INCOMING_CPU anymore.
(cherry picked from commit 8b8149cdd2)
If the operating system UDP queue gets full and the outgoing UDP sending
starts to be delayed, BIND 9 could exhibit memory spikes as it tries to
enqueue all the outgoing UDP messages. As those are not going to be
delivered anyway (as we argued when we stopped enlarging the operating
system send and receive buffers), try to send the UDP messages directly
using `uv_udp_try_send()` and if that fails, drop the outgoing UDP
message.
(cherry picked from commit b576c4c977)
When a TLS (TLSstream) connection was accepted, the child listening
socket was not attached to sock->server and thus could have been freed
before all the accepted connections were actually closed.
In turn, this would cause us to call isc_tls_free() too soon, causing
cascading errors in pending SSL_read_ex() calls in the accepted
connections.
Properly attach and detach the child listening socket when accepting
and closing the server connections.
(cherry picked from commit 684f3eb8e6)
This commit ensures that we restart reading only when all DNS data in
the input buffer has been processed, so that we do not get into a
situation where the buffer is overrun.
Be more aggressive when throttling reads: when the outgoing TCP data
cannot be sent synchronously with uv_try_write(), start throttling the
reading immediately instead of waiting for the send buffers to fill
up. This should not affect well-behaved clients that read the data on
the other end of the TCP connection.
(cherry picked from commit bc3e713317)
This commit ensures that socket objects use smaller sizes for their
internal request and handle pools. That prevents the memory allocator
from thrashing.
When a peer is not reading the data we are sending, it was possible
for the TLS DNS code to end up in a situation where it would
indefinitely reschedule send requests, effectively turning the
'uv_loop' into a busy loop that consumed CPU cycles in endless
attempts to send outgoing data.
The main reason was that only one send buffer was dedicated to sends:
the code would re-queue sends until that buffer was empty, which would
never happen when the remote side was not reading data.
That seems like an omission from the early days of the Network
Manager, as it is quite simple to make the code use multiple buffers
for sends. Doing so ultimately breaks the cycle of futile send request
rescheduling.
As a side effect, this commit also gets rid of one memory copying on a
hot path.
Due to an omission, it was possible to un-throttle a TCP connection
previously throttled because the peer was not reading back the data we
were sending.
In particular, that affected the DoH code, but it could also affect
other transports (current or future) that pause/resume reading
according to their internal state.
(cherry picked from commit d228aa8bbb944fbd04baf22d151fde5c33561e26)
A single TCP read can produce as many DNS messages as 64k divided by
the minimum size of a DNS message. This can clog the processing thread
and thrash the memory allocator, because we may need to do as many as
~20k allocations in a single UV loop tick.
Limit the number of DNS messages processed in a single UV loop tick to
just one DNS message, and limit the number of outstanding DNS messages
back to 23. This effectively limits the number of pipelined DNS
messages to that number (the limit we already had before).
This reverts commit 780a89012d.
When a TCP client does not read the DNS messages sent to it, the TCP
sends inside named would accumulate and cause degradation of the
service. Throttle reading from the TCP socket when we accumulate
enough DNS data to be sent. Currently the limit is set so that a
single largest-possible DNS message can fit into the buffer.
(cherry picked from commit 26006f7b44474819fac2a76dc6cd6f69f0d76828)
This commit ensures that an HTTP endpoints set reference is stored in
a socket object associated with an HTTP/2 stream instead of
referencing the global set stored inside a listener.
This helps to prevent an issue like follows:
1. BIND is configured to serve DoH clients;
2. A client is connected and one or more HTTP/2 streams are
created. Internal pointers now point to the data in the
associated HTTP endpoints set;
3. BIND is reconfigured - the new endpoints set object is created and
promoted to all listeners;
4. The old pointers to the HTTP endpoints set data are now invalid.
Instead of referencing a global object that is updated on
re-configuration, we now store a local reference, which prevents the
endpoints set objects from going out of scope prematurely.
(cherry picked from commit b9b5d0c01a3a546c4a6a8b3bff8ae9dd31fee224)
It was reported that an HTTP/2 session might get closed or even
deleted before all asynchronous processing has been completed.
This commit addresses that: we now avoid using the object when we do
not need it, specifically check that the pointers used are not 'NULL',
and ensure that there is at least one reference to the session object
while we are processing incoming data.
This commit makes the code more resilient to such issues in the
future.
(cherry picked from commit 0cca550dff403c6100b7c0da8f252e7967765ba7)
If uv_tcp_close_reset() returns an error code, this means the
reset_shutdown callback has not been issued, so do it now.
(cherry picked from commit c40e5c8653)
Failing to accept TCP/TLS connections in 9.18 detaches the quota in
isc__nm_failed_accept_cb, causing the TCP4Clients and TCP6Clients
statistics to not decrease during cleanup.
Fix this by increasing the counter only after the point where the
accept can no longer fail, but before the point at which handling
statistics through the client's socket is no longer valid.
The statistics channel does not expose the current number of connected
TCP clients, only the highwater mark. Therefore, users did not have an
easy means of collecting statistics about TCP clients served over
time. This information could only be approximated via a separate
mechanism, using rndc to look at how much of the TCP quota was filled.
In order to expose the exact current count of connected TCP clients
(tracked by the "tcp-clients" quota) as a statistics counter, an
extra, dedicated Network Manager callback would need to be
implemented for that purpose (a counterpart of ns__client_tcpconn()
that would be run when a TCP connection is torn down), which is
inefficient. Instead, track the number of currently-connected TCP
clients separately for IPv4 and IPv6, as Network Manager statistics.
(cherry picked from commit 2690dc48d3)
Change the taskmgr (and thus netmgr) in a way that it supports fast and
slow task queues. The fast queue is used for incoming DNS traffic and
it will pass the processing to the slow queue for sending outgoing DNS
messages and processing resolver messages.
In the future, more tasks might get moved to the slow queues, so that
cached and authoritative DNS traffic can be handled without being
slowed down by operations that take longer to process.
When connecting to a remote party, the TLS DNS code could process more
than one message at a time, despite the expectation that we should
stop after every DNS message.
Every DNS message is handled and consumed from the input buffer by
isc__nm_process_sock_buffer(). However, as opposed to the TCP DNS
code, it can be called more than once when processing incoming data
from a server (see tls_cycle_input()). That, in turn, means that we
can process more than one message at a time. Some higher-level code
might not expect that, as it breaks the contract.
In particular, in the original report this happened during an
isc__nm_async_tlsdnsshutdown() call: when shutting down, multiple
calls to tls_cycle() are possible (each possibly leading to an
isc__nm_process_sock_buffer() call). If there are any unprocessed
messages left, the read callback will be called for each of them even
though it is not expected, as there was no preceding isc_nm_read()
call.
To keep TCP DNS and TLS DNS code in sync, we make a similar change to
it as well, although it should not matter.
In TLS DNS, it was possible for an 'isc__nm_uvreq_t' object to be used
twice when that was not intended, leading to use-after-free errors
and, in turn, to abort()s and/or crashes.
In particular, in the older version of the code, the following was
happening:
* 'tlsdns_send_direct()' is eventually called from
'isc__nm_async_tlsdnssend()' when sending data;
* 'tlsdns_send_direct()' could fail. If it fails early, closer to the
beginning of the function, then the 'isc__nm_uvreq_t' object is passed
to 'isc__nm_failed_send_cb()' and eventually gets destroyed normally,
without any negative consequences;
* However, if 'tlsdns_send_direct()' fails due to the call to
'tls_cycle()', then the send request object is re-queued to be
processed asynchronously later by a call to 'tlsdns_send_enqueue()'.
* The error-processing code in 'tlsdns_send_direct()' was written
without accounting for the possibility that the send request object
had been re-queued in 'tlsdns_send_direct()'; thus, it was still
passed to 'isc__nm_failed_send_cb()' for asynchronous error
processing.
* As a result of asynchronous processing initiated by
`tlsdns_send_enqueue()` the 'isc__nm_uvreq_t' send request object gets
properly destroyed eventually.
* Then, as a result of asynchronous error processing initiated by the
'isc__nm_failed_send_cb()' call, the send request object is used a
second time. However, it was destroyed at the previous step.
So we have a (relatively) obvious logical mistake here. Why
'tls_cycle()' was failing is of no particular importance, as I can see
many ways for it to fail at both the TLS and networking levels. In
particular, during testing with 'dnsperf', I encountered the following
two cases:
1. 'result == ISC_R_TLSERROR' - the remote side was wrapping up,
causing errors on TLS level (e.g. handshake has not been completed
yet);
2. 'result == ISC_R_CONNECTIONRESET' - the remote side closed the
connection after we had initiated the asynchronous send request
operation and by the time its processing started.
One could ask: why had we not encountered the error before? Because
'send()' is, in general, a faster operation than 'read()', we were
lucky enough not to hit this; it took dnsperf to make our code fail in
the particular way that triggers the problematic code path. It
requires closing the connection shortly after we have made an
asynchronous 'send()' attempt and passed the data for processing to
the libuv code (or, to be precise, down our technology stack).
This commit changes TLS DNS so that partial writes by the
SSL_write_ex() function are taken into account properly. Now, before
doing encryption, we are flushing the buffers for outgoing encrypted
data.
The problem is fairly complicated and originates from the fact that it
is somewhat hard to understand by reading the documentation if and
when partial writes are supported/enabled or not, and one can get a
false impression that they are not supported or enabled by
default (https://www.openssl.org/docs/man3.1/man3/SSL_write_ex.html). I
have added a lengthy comment about that into the code because it will
be more useful there. The documentation on this topic is vague and
hard to follow.
The main point is that when SSL_write_ex() fails with
SSL_ERROR_WANT_WRITE, the OpenSSL code tells us that we need to flush
the outgoing buffers and then call SSL_write_ex() again with exactly
the same arguments in order to continue as partial write could have
happened on the previous call to SSL_write_ex() (that is not hard to
verify by calling BIO_pending(sock->tls.app_rbio) before and after the
call to SSL_write_ex() and comparing the returned values). This aspect
was not taken into account in the code.
Now, one can wonder how that could have led to the behaviour seen in
the #4255 bug report. In particular, how could we lose one message and
send another twice? That is where things get interesting. One needs to
keep two things in mind:
Firstly, the possibility that two (or more) subsequent SSL_write_ex()
calls will be done with exactly the same arguments is very high (the
code does not guarantee that in any way, but in practice, that happens
a lot).
Secondly, dnsperf (the software that helped us trigger the bug)
bombarded the test server with messages containing exactly the same
data. The only difference between the responses is the message IDs,
which are found close to the start of a message.
So, that is what was going on in the older version of the code:
1. During one of the isc_nm_send() calls, the SSL_write_ex() call
fails with SSL_ERROR_WANT_WRITE. A partial write has happened, though,
and we have written the part of the message that contains the message
ID (e.g. 2014). Nevertheless, we have rescheduled the complete send
operation asynchronously via a call to tlsdns_send_enqueue().
2. While the asynchronous request has not yet completed, we try to
send the next message (e.g. with ID 2015). The next isc_nm_send() or
re-queued send calls SSL_write_ex() with EXACTLY the same arguments as
the previous call. That is, we act as if we want to complete the
previously failed SSL_write_ex() attempt (see the "Warnings" section
of the OpenSSL documentation:
https://www.openssl.org/docs/man3.1/man3/SSL_write_ex.html). This way,
we already have the start of a message containing the previous
ID (2014 in our case) but complete the write request with the rest of
the data from the current write attempt. However, as the responses
differ only in message ID, we end up sending a valid (properly
structured) DNS message, but with the ID of the previous one. The
message with the ID from the current send request will never be sent,
as the code thinks it is sending it now (that is how we send the
message with ID 2014 instead of 2015, as in our example, making the
message with ID 2015 never be sent).
3. At some point later, the asynchronous send request (rescheduled in
the first step) completes without an error, sending a second message
with the same ID (2014).
It took exhausting the SSL write buffers (so that a data encryption
attempt cannot be completed in one operation) via long DoT streams to
exhibit the behaviour described above. The exhaustion happened because
we had not been flushing the buffers often enough (especially in the
case of multiple subsequent writes).
In my opinion, the origin of the problem can be described as follows:
we made wrong guesses based on poorly written documentation.
The detach (and possibly close) netmgr events can cause additional
callbacks to be called when under exclusive mode. The detach can
trigger the next queued TCP query to be processed, and close will call
the configured close callback.
Move the detach and close netmgr events from the priority queue to the
normal queue, as detaching from and closing the sockets can wait for
the exclusive mode to be over.
Tasks can block for a long time, especially when used by tools in
interactive mode. Update the event loop's time to avoid unexpected
errors when processing later events during the same callback.
For example, newly started timers can fire too early because the
current time is stale. See the note about uv_update_time() on the
https://docs.libuv.org/en/v1.x/timer.html#c.uv_timer_start page.
Seemingly by omission, the sockstop netievent used by multi-layer
sockets was not a high-priority event, as it should have
been (similarly to other socket types).
In particular, that could make BIND get stuck on reconfiguration after
a DoH listener is removed from the configuration.
This commit fixes that.
The intention behind 'isc__nmsocket_stop()' was that the function
sends notifications to every worker thread, making them synchronise on
a barrier, and then the initiating thread waits on it, too. This way
we ensure that no other operation will start while we are shutting
down the listener.
However, due to a mistake, we have been passing the wrong worker
pointer into isc__nm_async_sockstop() from within the context of the
worker thread that initiated the shutdown. While we have effectively
not been using the pointer in this case, it could cause maintenance
issues later. This commit fixes that.
This commit contains the backport of the behaviour for handling TLS
connect callbacks when wrapping up.
The current behaviour has not caused any problems for us yet, but we
are changing it to stay on the safe side.
This commit ensures that 'sock->tls.pending_req' is not nullified
during the TLS connection timeout callback, as that would prevent the
connection callback from being called when connecting was not
successful.
We expect 'isc__nm_failed_connect_cb()' to be called from
'isc__nm_tlsdns_shutdown()' when establishing a connection was not
successful, but with 'sock->tls.pending_req' nullified that will not
happen.
The removed code was most likely required in older iterations of the
NM, but it seems to me that now it does only harm. One of its
well-known pronounced effects is leading to irrecoverable zone
transfer hangs over TLS.
libuv support for receiving multiple UDP messages in a single system
call (recvmmsg()) has been tweaked several times between libuv versions
1.35.0 and 1.40.0. Mixing and matching libuv versions within that span
may lead to assertion failures and is therefore considered harmful, so
try to limit the potential damage by preventing users from mixing libuv
versions with distinct sets of recvmmsg()-related flags.
(cherry picked from commit 735d09bffe)
The implementation of UDP recvmmsg in libuv 1.35 and 1.36 is
incomplete and could cause assertion failures under certain
circumstances.
Modify the configure and runtime checks to report a fatal error when
trying to compile or run with the affected versions.
(cherry picked from commit 251f411fc3)
Return an 'isc_result_t' value instead of 'bool' to indicate the
actual failure. Rename the function to something that does not suggest
a boolean result. Update the places where the API function is used to
check for the result code instead of a boolean value.
(cherry picked from commit 41dc48bfd7)
This reverts commit f17f5e831b, which made the following change:
> The TLSDNS transport was not honouring the single read callback for
> TLSDNS client. It would call the read callbacks repeatedly in case the
> single TLS read would result in multiple DNS messages in the decoded
> buffer.
It turns out that this change broke XoT, so we are reverting it until
we figure out a proper fix that keeps the design promise without
breaking XoT at the same time.
DSCP has not been fully working since the network manager was
introduced in 9.16, and has been completely broken since 9.18.
This seems to have caused very few difficulties for anyone,
so we have now marked it as obsolete and removed the
implementation.
To ensure that old config files don't fail, the code to parse
dscp key-value pairs is still present, but a warning is logged
that the feature is obsolete and should not be used. Nothing is
done with configured values, and there is no longer any
range checking.
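For illustration, a legacy configuration fragment such as the
following (hypothetical values) still parses, but the dscp value is
ignored and a warning that the feature is obsolete is logged:

```
options {
	// dscp is parsed for compatibility but no longer has any effect
	listen-on port 53 dscp 46 { any; };
};
```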
(cherry picked from commit 916ea26ead)
This commit ensures that send callbacks are always called from within
the context of their worker thread, even in the case of a
shutting-down/inactive socket, just like the TCP transport does, with
which TLS attempts to be as compatible as possible.
(cherry picked from commit 2bfc079946)
This commit changes the ISC_R_NOTCONNECTED error code to
ISC_R_CANCELLED when attempting to start reading data on a
shutting-down socket, in order to make its behaviour compatible with
that of TCP and not break the common code in the unit tests.
(cherry picked from commit ef659365ce)
This commit adds a check for whether 'sock->recv_cb' has been
nullified during the call to 'sock->recv_cb'. That could happen, e.g.,
via an indirect call to 'isc_nmhandle_close()' from within the
callback when wrapping up.
In this case, close the TLS connection.
(cherry picked from commit bed5e2bb08)
The TLSDNS transport was not honouring the single read callback for
TLSDNS client. It would call the read callbacks repeatedly in case the
single TLS read would result in multiple DNS messages in the decoded
buffer.
(cherry picked from commit e3c628d562)
It was possible for the accept callback to be called after listener
shutdown. In such a case the callback pointer is NULL, leading to a
segmentation fault. This commit fixes that.
This commit introduces a primitive, isc__nmsocket_stop(), which shuts
down a multilayered socket, ensuring the proper order of operations.
The shared data within the socket object can be destroyed after the
call completes, as it is guaranteed not to be used from within the
context of other worker threads.
(cherry picked from commit 5ab2c0ebb3)