Update CONTRIBUTING.md and developer doc

Include the recent changes such as:
- changes to running system tests
- gitlab development workflow
- changelog and release note process
This commit is contained in:
Nicki Křížek
2024-11-26 16:33:33 +01:00
parent d3765a5f35
commit 39485c1f70
2 changed files with 33 additions and 82 deletions


@@ -43,8 +43,8 @@ The code review process is a dialog between the original author and the
reviewer. Code inspection, including documentation and tests, is part of
this. Compiling and running the resulting code should be done in most
cases, even for trivial changes, to ensure that it works as intended. In
particular, a full regression test (`make` `check`) must be run for every
modification so that unexpected side-effects are identified.
particular, all checks in the CI pipeline must pass for every modification
so that unexpected side-effects are identified.
When a problem or concern is found by the reviewer, these comments are
placed on the merge request in GitLab so the author can respond.
@@ -78,18 +78,25 @@ Documentation is also reviewed. This includes all user-facing text,
including log messages, manual pages, user manuals and sometimes even
comments; they must be clearly written and consistent with existing style.
#### GitLab development workflow
Every change is ultimately submitted as a GitLab merge request (MR) and reviewed
there. The specifics of the workflow are documented in [BIND development
workflow](https://gitlab.isc.org/isc-projects/bind9/-/wikis/BIND-development-workflow).
Take note of the section about MR title and description, which are used to
generate changelog entries and release notes. These are also subject to the
review process.
#### Steps in code review:
* Read the diff
* Read accompanying notes in the ticket
* Apply the diff to the appropriate branch
* Run `configure` (using at least `--enable-developer`)
* Build
* Read the documentation, if any
* Read the tests
* Run the tests
* Ensure the CI passes
<br>(In some cases it may be appropriate to run tests against code
from before the change to ensure that they fail as expected.)
* Review the MR description and title (refer to GitLab development workflow)
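The build-and-check portion of the steps above can be sketched as a short
shell session (illustrative only; the branch name and `-j` value are
placeholders, and `--enable-developer` is the minimum recommended here):

    $ git checkout <mr-branch>
    $ ./configure --enable-developer
    $ make -j4
    $ make check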
#### Things we look for
@@ -128,73 +135,16 @@ tests and documentation will reduce delay.
### <a name="testing"></a> Testing
#### <a name="systest"></a> Running system tests
When you submit a merge request, it triggers a CI pipeline which executes unit
and system tests on various platforms. You should pay attention to any failures,
as some can only occur in specific environments. Getting the CI to pass is a
good start when preparing the merge request for the review.
To enable system tests to work, we first need to create the test loopback
interfaces (as root):
#### <a name="systest"></a> System tests
$ cd bin/tests/system
$ sudo sh ifconfig.sh up
$ cd ../../..
To run the tests, build BIND (be sure to use `--with-cmocka` to run unit
tests), then run `make` `check`. An easy way to check the results:
$ make check 2>&1 | tee /tmp/check.out
$ grep -A 10 'Testsuite summary' /tmp/check.out
This will show all of the test results. One or two "R:SKIPPED" is okay; if
there are a lot of them, then you probably forgot to create the loopback
interfaces in the previous step. (NOTE: the summary of tests that appears at
the end of `make` `check` only summarizes the system test results, not the
unit tests, so you can't rely on it to catch everything.)
To run only the system tests, omitting unit tests:
$ make test
To run an individual system test:
$ make -C bin/tests/system/ check TESTS=<testname> V=1
Or:
$ TESTS= make -e all check
$ cd bin/tests/system
$ sh run.sh <testname>
System tests are in separate directories under `bin/tests/system`.
For example, the "dnssec" test is in `bin/tests/system/dnssec`.
#### Writing system tests
The following standard files are found in system test directories:
- `prereq.sh`: run at the beginning to determine whether the test can be run at all; if not, we see R:SKIPPED
- `setup.sh`: sets up the preconditions for the tests
- `tests.sh`: runs all the test cases. A non-zero return value results in R:FAIL
- `ns[X]`: these subdirectories contain test name servers that can be
queried or can interact with each other. (For example, `ns1` might be
running as a root server, `ns2` as a TLD server, and `ns3` as a recursive
resolver.) The value of X indicates the address the server listens on:
for example, `ns2` listens on 10.53.0.2, and `ns4` on 10.53.0.4. All test
servers use port 5300 so they don't need to run as root. All servers
log at the highest debug level, and the logs are captured in the file
`nsX/named.run`.
- `ans[X]`: like `ns[X]`, but these are simple mock name servers
implemented in perl; they are generally programmed to misbehave in ways
`named` wouldn't, so as to exercise `named`'s ability to interoperate with
badly behaved name servers. Logs, if any, are captured in `ansX/ans.run`.
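As a hypothetical illustration (not taken from the source tree), a minimal
`tests.sh` following these conventions might look like the sketch below.
Helper functions such as `echo_i` and variables such as `DIG` and `PORT`
are assumed to be provided by `conf.sh`:

    . ../conf.sh

    status=0
    n=1

    echo_i "checking that ns1 answers an SOA query ($n)"
    ret=0
    $DIG @10.53.0.1 -p "$PORT" example. SOA > dig.out.test$n || ret=1
    grep "status: NOERROR" dig.out.test$n > /dev/null || ret=1
    [ $ret -eq 0 ] || echo_i "failed"
    status=$((status + ret))

    echo_i "exit status: $status"
    [ $status -eq 0 ] || exit 1

A non-zero exit status here is what produces R:FAIL for the test.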
All test scripts source the file `bin/tests/system/conf.sh` (which is
generated by `configure` from `conf.sh.in`). This script provides
functions and variables pointing to the binaries under test; for example,
`DIG` contains the path to `dig` in the build tree being tested, `RNDC`
points to `rndc`, `SIGNZONE` to `dnssec-signzone`, etc.
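For example (illustrative; `PORT` and `CONTROLPORT` are assumed to be set
by `conf.sh`), a test script sources `conf.sh` and invokes the binaries
under test through these variables rather than any system-installed copies:

    . ../conf.sh
    $DIG @10.53.0.2 -p "$PORT" example. AXFR
    $RNDC -s 10.53.0.2 -p "$CONTROLPORT" status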
If you want to run the system tests locally, please refer to [BIND9 System Test
Framework](bin/tests/system/README.md) for information about running and writing
system tests.
#### <a name="unittest"></a> Building unit tests