author    Jack Lloyd <[email protected]>  2019-06-29 04:21:20 -0400
committer Jack Lloyd <[email protected]>  2019-06-29 04:47:17 -0400
commit    50f3a247881a1fc5293ddf2357594d1461e476b4 (patch)
tree      2ac52e1f594a7ab7bb36cdf2f948baa8c61980ff
parent    da1dec41fa66c34765eaafc583b4ddb70a3c139d (diff)
Document test and CI systems
-rw-r--r--  doc/dev_ref/contents.rst               |   2
-rw-r--r--  doc/dev_ref/continuous_integration.rst |  85
-rw-r--r--  doc/dev_ref/fuzzing.rst                |   2
-rw-r--r--  doc/dev_ref/mistakes.rst               |   4
-rw-r--r--  doc/dev_ref/reading_list.rst           |   6
-rw-r--r--  doc/dev_ref/release_process.rst        |   7
-rw-r--r--  doc/dev_ref/test_framework.rst         | 314
7 files changed, 414 insertions, 6 deletions
diff --git a/doc/dev_ref/contents.rst b/doc/dev_ref/contents.rst
index 05adaa331..6505762e1 100644
--- a/doc/dev_ref/contents.rst
+++ b/doc/dev_ref/contents.rst
@@ -9,6 +9,8 @@ contributions to the library
contributing
configure
+ test_framework
+ continuous_integration
fuzzing
release_process
todo
diff --git a/doc/dev_ref/continuous_integration.rst b/doc/dev_ref/continuous_integration.rst
new file mode 100644
index 000000000..504435c75
--- /dev/null
+++ b/doc/dev_ref/continuous_integration.rst
@@ -0,0 +1,85 @@
+Continuous Integration and Automated Testing
+===============================================
+
+CI Build Script
+----------------
+
+The Travis and AppVeyor builds are orchestrated using the script
+``src/scripts/ci_build.py``. This makes it easy to reproduce the CI
+build steps on a local machine.
+
+A separate repo https://github.com/randombit/botan-ci-tools holds
+binaries which are used by the CI.
+
+Travis CI
+-----------
+
+https://travis-ci.org/randombit/botan
+
+This is the primary CI, and tests the Linux, macOS, and iOS builds. Among other
+things it runs tests under valgrind, cross-compiles to several architectures
+(currently ARM, PowerPC and MIPS), performs a MinGW build, and runs a build
+that produces the coverage report.
+
+The Travis configuration is in ``src/scripts/ci/travis.yml``, which executes a
+setup script ``src/scripts/ci/setup_travis.sh`` to install needed packages.
+Then ``src/scripts/ci_build.py`` is invoked.
+
+AppVeyor
+----------
+
+https://ci.appveyor.com/project/randombit/botan
+
+Runs a build/test cycle using MSVC on Windows. Like Travis, it uses
+``src/scripts/ci_build.py``. The AppVeyor setup script is in
+``src/scripts/ci/setup_appveyor.bat``.
+
+The AppVeyor build uses ``sccache`` as a compiler cache. Since ``sccache`` is
+not available in the AppVeyor images, the build uses a precompiled copy checked
+into the ``botan-ci-tools`` repo.
+
+Kullo CI
+----------
+
+This was the initial CI system and tests Linux, macOS, Windows, and Android
+builds. Notably, it is currently the only CI system used by Botan which has an
+Android build enabled. It does not use ``ci_build.py``. This system is
+maintained by @webmaster128.
+
+LGTM
+---------
+
+https://lgtm.com/projects/g/randombit/botan/
+
+An automated linter that is integrated with GitHub. It automatically checks
+each incoming PR. It also supports custom queries and alerts, which would
+likely be useful.
+
+Coverity
+---------
+
+https://scan.coverity.com/projects/624
+
+An automated source code scanner. Use of the Coverity scanner is rate-limited:
+sometimes it is very slow to produce a new report, and occasionally the service
+goes offline for days or weeks at a time. New reports are kicked off manually
+by rebasing the ``coverity_scan`` branch against the most recent master and
+force pushing it.
+
+Sonar
+-------
+
+https://sonarcloud.io/dashboard?id=botan
+
+Sonar is another software quality scanner. Unfortunately a recent update of
+their scanner caused it to take over an hour to produce a report, which led to
+Travis CI timeouts, so it has been disabled. It should be re-enabled to run on
+demand in the same way Coverity is.
+
+OSS-Fuzz
+----------
+
+https://github.com/google/oss-fuzz/
+
+OSS-Fuzz is a distributed fuzzer run by Google. Every night, each library fuzzer
+in ``src/fuzzer`` is built and run on many machines with any findings reported
+by email.
diff --git a/doc/dev_ref/fuzzing.rst b/doc/dev_ref/fuzzing.rst
index 519bae4e1..46e60fb53 100644
--- a/doc/dev_ref/fuzzing.rst
+++ b/doc/dev_ref/fuzzing.rst
@@ -88,4 +88,4 @@ have the signature:
``void fuzz(const uint8_t in[], size_t len)``
-After adding your fuzzer, rerun `./configure.py` and build.
+After adding your fuzzer, rerun ``./configure.py`` and build.
diff --git a/doc/dev_ref/mistakes.rst b/doc/dev_ref/mistakes.rst
index 03b2c7905..6ea9ea927 100644
--- a/doc/dev_ref/mistakes.rst
+++ b/doc/dev_ref/mistakes.rst
@@ -1,6 +1,6 @@
-Mistakes
-===========
+Mistakes Were Made
+===================
These are mistakes made early on in the project's history which are difficult to
fix now, but mentioned in the hope they may serve as an example for others.
diff --git a/doc/dev_ref/reading_list.rst b/doc/dev_ref/reading_list.rst
index b39046803..1b27d05d6 100644
--- a/doc/dev_ref/reading_list.rst
+++ b/doc/dev_ref/reading_list.rst
@@ -21,8 +21,8 @@ Implementation Techniques
for aes_ssse3.
* "Elliptic curves and their implementation" Langley
- http://www.imperialviolet.org/2010/12/04/ecc.html
- Describes sparse representations for ECC math
+ http://www.imperialviolet.org/2010/12/04/ecc.html
+ Describes sparse representations for ECC math
Random Number Generation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -50,7 +50,7 @@ Public Key Side Channels
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.1028&rep=rep1&type=pdf
* "Resistance against Differential Power Analysis for Elliptic Curve Cryptosystems"
- Coron http://www.jscoron.fr/publications/dpaecc.pdf
+ Coron http://www.jscoron.fr/publications/dpaecc.pdf
* "Further Results and Considerations on Side Channel Attacks on RSA"
Klima, Rosa https://eprint.iacr.org/2002/071
diff --git a/doc/dev_ref/release_process.rst b/doc/dev_ref/release_process.rst
index d1418777f..405801c81 100644
--- a/doc/dev_ref/release_process.rst
+++ b/doc/dev_ref/release_process.rst
@@ -1,6 +1,10 @@
Release Process and Checklist
========================================
+Releases are done quarterly, normally on the first non-holiday Monday
+of January, April, July and October. A feature freeze goes into effect
+starting 9 days before the release.
+
.. highlight:: shell
.. note::
@@ -88,6 +92,9 @@ Don't forget to also push tags::
Build The Windows Installer
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+.. note::
+ We haven't distributed Windows binaries for some time.
+
On Windows, run ``configure.py`` to setup a build::
$ python ./configure.py --cc=msvc --cpu=$ARCH --distribution-info=unmodified
diff --git a/doc/dev_ref/test_framework.rst b/doc/dev_ref/test_framework.rst
new file mode 100644
index 000000000..241c51bd1
--- /dev/null
+++ b/doc/dev_ref/test_framework.rst
@@ -0,0 +1,314 @@
+Test Framework
+================
+
+Botan uses a custom-built test framework. Some portions of it are
+quite similar to assertion-based test frameworks such as Catch or
+Gtest, but it also includes many features which are well suited for
+testing cryptographic algorithms.
+
+The intent is that the test framework and the test suite evolve
+symbiotically; as a general rule of thumb, if a new function would make
+the implementation of just two distinct tests simpler, it is worth
+adding to the framework on the assumption that it will prove useful again.
+Feel free to propose changes to the test system.
+
+When writing a new test, three key classes are used, namely ``Test``,
+``Test::Result``, and ``Text_Based_Test``. A ``Test`` (or
+``Text_Based_Test``) runs and returns one or more ``Test::Result``.
+
+Namespaces in Test
+-------------------
+
+The test code lives in a distinct namespace (``Botan_Tests``) and all
+code in the tests which calls into the library should use the
+namespace prefix ``Botan::`` rather than a ``using namespace``
+declaration. This makes it easier to see where the test is actually
+invoking the library, and makes it easier to reuse test code for
+applications.
+
+Test Data
+-----------
+
+The test framework is heavily data driven. As of this writing, there
+is about 1 MiB of test code and 17 MiB of test data. For most (though
+certainly not all) tests, it is better to add a data file representing
+the inputs and outputs, and run the tests over it. Data-driven tests
+make adding or editing tests easier, for example by writing scripts
+which produce new test data and output it in the expected format.
+
+Test
+--------
+
+.. cpp:class:: Test
+
+ .. cpp:function:: virtual std::vector<Test::Result> run() = 0
+
+ This is the key function of a ``Test``: it executes and returns a
+ list of results. Almost all other functions on ``Test`` are
+ static functions which just serve as helper functions for ``run``.
+
+ .. cpp:function:: static std::string read_data_file(const std::string& path)
+
+      Read the contents of a data file and return them as a string.
+
+ .. cpp:function:: static std::vector<uint8_t> read_binary_data_file(const std::string& path)
+
+      Read the contents of a data file and return them as a vector of
+      bytes.
+
+ .. cpp:function:: static std::string data_file(const std::string& what)
+
+      An alternative to ``read_data_file`` and ``read_binary_data_file``;
+      use it only as a last resort, typically for library APIs which
+      themselves accept a filename rather than a data blob.
+
+   .. cpp:function:: static bool run_long_tests()
+
+      Returns true if the user gave the option ``--run-long-tests``. Use
+      this to gate particularly time-intensive tests.
+
+ .. cpp:function:: static Botan::RandomNumberGenerator& rng()
+
+      Returns a reference to a fast, non-cryptographic random number
+      generator. It is deterministically seeded with the seed logged by
+      the test runner, so it is possible to reproduce results in
+      "random" tests.
+
+Tests are registered using the macro ``BOTAN_REGISTER_TEST``, which
+takes two arguments: the name of the test and the name of the test class.
+For example, given a ``Test`` subclass named ``MyTest``, use::
+
+ BOTAN_REGISTER_TEST("mytest", MyTest);
+
+All test names should contain only lowercase letters, numbers, and
+underscores.
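+
+To illustrate how these pieces fit together, here is a minimal sketch of a
+complete test, placed in the ``Botan_Tests`` namespace. It uses only the
+interfaces documented above; the class name, test name, and the checks
+themselves are hypothetical placeholders rather than code taken from the
+test suite::
+
+   class MyTest final : public Test
+      {
+      public:
+         std::vector<Test::Result> run() override
+            {
+            Test::Result result("My topic");
+
+            // A trivial placeholder check; a real test would call into the library
+            result.test_eq("string comparison", std::string("abc"), std::string("abc"));
+
+            if(Test::run_long_tests())
+               {
+               // Particularly slow checks should be gated behind --run-long-tests
+               result.confirm("slow check", true);
+               }
+
+            return {result};
+            }
+      };
+
+   BOTAN_REGISTER_TEST("mytest", MyTest);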
+
+Test::Result
+-------------
+
+.. cpp:class:: Test::Result
+
+   A ``Test::Result`` records one or more tests on a particular topic
+   (say "AES-128/CBC" or "ASN.1 date parsing"). Most of the test functions
+   return true or false depending on whether the test was successful; this
+   allows running conditional blocks based on the results of earlier tests::
+
+ if(result.test_eq("first value", produced, expected))
+ {
+ // further tests that rely on the initial test being correct
+ }
+
+   Only the most commonly used functions on ``Test::Result`` are documented
+   here; see the header ``tests.h`` for more.
+
+ .. cpp:function:: Test::Result(const std::string& who)
+
+ Create a test report on a particular topic. This will be displayed in the
+ test results.
+
+ .. cpp:function:: bool test_success()
+
+ Report a test that was successful.
+
+ .. cpp:function:: bool test_success(const std::string& note)
+
+ Report a test that was successful, including some comment.
+
+ .. cpp:function:: bool test_failure(const std::string& err)
+
+ Report a test failure of some kind. The error string will be logged.
+
+ .. cpp:function:: bool test_failure(const std::string& what, const std::string& error)
+
+ Report a test failure of some kind, with a description of what failed and
+ what the error was.
+
+ .. cpp:function:: void test_failure(const std::string& what, const uint8_t buf[], size_t buf_len)
+
+ Report a test failure due to some particular input, which is provided as
+ arguments. Normally this is only used if the test was using some
+ randomized input which unexpectedly failed, since if the input is
+ hardcoded or from a file it is easier to just reference the test number.
+
+ .. cpp:function:: bool test_eq(const std::string& what, const std::string& produced, const std::string& expected)
+
+      Compare two strings for equality.
+
+ .. cpp:function:: bool test_ne(const std::string& what, const std::string& produced, const std::string& expected)
+
+      Compare two strings for non-equality.
+
+ .. cpp:function:: bool test_eq(const char* producer, const std::string& what, \
+ const uint8_t produced[], size_t produced_len, \
+ const uint8_t expected[], size_t expected_len)
+
+ Compare two arrays for equality.
+
+ .. cpp:function:: bool test_ne(const char* producer, const std::string& what, \
+ const uint8_t produced[], size_t produced_len, \
+ const uint8_t expected[], size_t expected_len)
+
+ Compare two arrays for non-equality.
+
+ .. cpp:function:: bool test_eq(const std::string& producer, const std::string& what, \
+ const std::vector<uint8_t>& produced, \
+ const std::vector<uint8_t>& expected)
+
+ Compare two vectors for equality.
+
+ .. cpp:function:: bool test_ne(const std::string& producer, const std::string& what, \
+ const std::vector<uint8_t>& produced, \
+ const std::vector<uint8_t>& expected)
+
+ Compare two vectors for non-equality.
+
+ .. cpp:function:: bool confirm(const std::string& what, bool expr)
+
+ Test that some expression evaluates to ``true``.
+
+ .. cpp:function:: template<typename T> bool test_not_null(const std::string& what, T* ptr)
+
+ Verify that the pointer is not null.
+
+ .. cpp:function:: bool test_lt(const std::string& what, size_t produced, size_t expected)
+
+ Test that ``produced`` < ``expected``.
+
+ .. cpp:function:: bool test_lte(const std::string& what, size_t produced, size_t expected)
+
+ Test that ``produced`` <= ``expected``.
+
+ .. cpp:function:: bool test_gt(const std::string& what, size_t produced, size_t expected)
+
+ Test that ``produced`` > ``expected``.
+
+ .. cpp:function:: bool test_gte(const std::string& what, size_t produced, size_t expected)
+
+ Test that ``produced`` >= ``expected``.
+
+ .. cpp:function:: bool test_throws(const std::string& what, std::function<void ()> fn)
+
+ Call a function and verify it throws an exception of some kind.
+
+ .. cpp:function:: bool test_throws(const std::string& what, const std::string& expected, std::function<void ()> fn)
+
+ Call a function and verify it throws an exception of some kind
+ and that the exception message exactly equals ``expected``.
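+
+   As an illustrative sketch (the names and values are hypothetical), several
+   of the checks above might be combined in a fragment of a test's ``run``
+   function like this::
+
+      Test::Result result("Example checks");
+
+      const std::vector<uint8_t> produced = { 0xDE, 0xAD };
+      const std::vector<uint8_t> expected = { 0xDE, 0xAD };
+      result.test_eq("example producer", "vector comparison", produced, expected);
+
+      result.test_lte("output size sanity check", produced.size(), 16);
+
+      // Verify that some operation throws; here the lambda simply throws directly
+      result.test_throws("an exception is thrown",
+                         []() { throw std::runtime_error("failed"); });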
+
+Text_Based_Test
+-----------------
+
+A ``Text_Based_Test`` runs tests that are produced from a text file
+with a particular format, which looks somewhat like an INI file::
+
+ # Comments begin with # and continue to end of line
+ [Header]
+ # Test 1
+ Key1 = Value1
+ Key2 = Value2
+
+ # Test 2
+ Key1 = Value1
+ Key2 = Value2
+
+.. cpp:class:: VarMap
+
+   An object of this type is passed to each invocation of the text-based test.
+   It is used to access the test variables. All accesses take a key, which is
+   one of the strings passed to the constructor of ``Text_Based_Test``.
+   Accesses are either required (``get_req_foo``), in which case an exception
+   is thrown if the key is not set, or optional (``get_opt_foo``), in which
+   case the test provides a default value which is returned if the key was not
+   set for this particular instance of the test.
+
+ .. cpp:function:: std::vector<uint8_t> get_req_bin(const std::string& key) const
+
+ Return a required binary string. The input is assumed to be hex encoded.
+
+ .. cpp:function:: std::vector<uint8_t> get_opt_bin(const std::string& key) const
+
+ Return an optional binary string. The input is assumed to be hex encoded.
+
+   .. cpp:function:: std::vector<std::vector<uint8_t>> get_req_bin_list(const std::string& key) const
+
+      Return a required list of binary strings. The inputs are assumed to be
+      hex encoded.
+
+ .. cpp:function:: Botan::BigInt get_req_bn(const std::string& key) const
+
+ Return a required BigInt. The input can be decimal or (with "0x" prefix) hex encoded.
+
+ .. cpp:function:: Botan::BigInt get_opt_bn(const std::string& key, const Botan::BigInt& def_value) const
+
+ Return an optional BigInt. The input can be decimal or (with "0x" prefix) hex encoded.
+
+ .. cpp:function:: std::string get_req_str(const std::string& key) const
+
+ Return a required text string.
+
+ .. cpp:function:: std::string get_opt_str(const std::string& key, const std::string& def_value) const
+
+ Return an optional text string.
+
+ .. cpp:function:: size_t get_req_sz(const std::string& key) const
+
+ Return a required integer. The input should be decimal.
+
+ .. cpp:function:: size_t get_opt_sz(const std::string& key, const size_t def_value) const
+
+ Return an optional integer. The input should be decimal.
+
+.. cpp:class:: Text_Based_Test : public Test
+
+ .. cpp:function:: Text_Based_Test(const std::string& input_file, \
+ const std::string& required_keys, \
+ const std::string& optional_keys = "")
+
+      This constructor is called by the subclass, which passes the path to
+      the input data file along with comma-separated lists of the required
+      and optional keys.
+
+      .. note::
+         The final element of required_keys is the "output key", that is
+         the key which signifies the boundary between one test and the next.
+         When this key is seen, ``run_one_test`` will be invoked. In the
+         test input file, this key must always appear last for any particular
+         test. All the other keys may appear in any order.
+
+ .. cpp:function:: Test::Result run_one_test(const std::string& header, \
+ const VarMap& vars)
+
+      Runs a single test and returns its result. The ``header``
+ parameter gives the value (if any) set in a ``[Header]`` block.
+ This can be useful to distinguish several types of tests within a
+ single file, for example "[Valid]" and "[Invalid]".
+
+ .. cpp:function:: bool clear_between_callbacks() const
+
+ By default this function returns ``false``. If it returns
+ ``true``, then when processing the data in the file, variables
+ are not cleared between tests. This can be useful when several
+ tests all use some common parameters.
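+
+As a sketch of how these pieces combine, a data-driven test might look like
+the following. The file name, key names, and the toy XOR operation being
+"tested" are hypothetical placeholders; only the interfaces documented above
+are used::
+
+   // A hypothetical test of a toy XOR operation, driven by a (hypothetical)
+   // data file xor.vec containing Key, In and Out entries
+   class XOR_KAT_Tests final : public Text_Based_Test
+      {
+      public:
+         // "Out" is the final required key, so it marks the end of each test
+         XOR_KAT_Tests() : Text_Based_Test("xor.vec", "Key,In,Out") {}
+
+         Test::Result run_one_test(const std::string& header, const VarMap& vars) override
+            {
+            Test::Result result("XOR KAT " + header);
+
+            const std::vector<uint8_t> key      = vars.get_req_bin("Key");
+            const std::vector<uint8_t> input    = vars.get_req_bin("In");
+            const std::vector<uint8_t> expected = vars.get_req_bin("Out");
+
+            // Stands in for a call into the library functionality under test
+            std::vector<uint8_t> produced(input);
+            for(size_t i = 0; i != produced.size() && i != key.size(); ++i)
+               produced[i] ^= key[i];
+
+            result.test_eq("xor_kat", "expected output", produced, expected);
+            return result;
+            }
+      };
+
+   BOTAN_REGISTER_TEST("xor_kat", XOR_KAT_Tests);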
+
+Test Runner
+-------------
+
+If you are simply writing a new test there should be no need to modify
+the runner, but it can be useful to be aware of its capabilities.
+
+The runner can run tests concurrently across many cores. By default
+single-threaded execution is used, but you can use the ``--test-threads``
+option to specify the number of threads to use. If you use
+``--test-threads=0`` then the runner will probe the number of active CPUs
+and use that (but limited to at most 16). If you want to run across many
+cores on a large machine, explicitly specify a thread count. The speedup
+is close to linear.
+
+The RNG used in the tests is deterministic, and the seed is logged for each
+execution. You can cause the random sequence to repeat using the
+``--drbg-seed`` option.
+
+.. note::
+ Currently the RNG is seeded just once at the start of execution. So you
+ must run the exact same sequence of tests as the original test run in
+ order to get reproducible results.
+
+If you are trying to track down a bug that happens only occasionally, two very
+useful options are ``--test-runs`` and ``--abort-on-first-fail``. The first
+takes an integer and runs the specified test cases that many times. The second
+causes abort to be called on the very first failed test. This is sometimes
+useful when tracing a memory corruption bug.