| Commit message | Author | Age | Files | Lines |
The Cipher_Mode::update API is more general than is needed just to
support ciphers (this is because it was previously an API of
Transform, which before 8b85b780515 was Cipher_Mode's base class).
Defines a less general interface `process` which either processes the
blocks in-place, producing exactly as much output as there was input,
or (the SIV/CCM case) saves the entire message for processing in `finish`.
These two uses cover all current and anticipated cipher modes.
Leaves `update` for compatibility with existing callers; all that is
needed is an inline function forwarding to `process`.
Removes the return type from `start` - in all cipher implementations
this always returned an empty vector.
Adds a BOTAN_ARG_CHECK macro; right now BOTAN_ASSERT is being used
for argument checking in some places, which is not right at all.
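A minimal sketch of the forwarding shim described above; the class, buffer type, and the ARG_CHECK macro here are illustrative stand-ins for the Botan names (Cipher_Mode, secure_vector, BOTAN_ARG_CHECK), not the library's actual declarations:
```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Stand-in for BOTAN_ARG_CHECK: unlike an assertion, argument checks
// stay active in release builds and throw instead of aborting.
#define ARG_CHECK(expr, msg) \
   do { if(!(expr)) throw std::invalid_argument(msg); } while(0)

class Example_Cipher_Mode
   {
   public:
      virtual ~Example_Cipher_Mode() = default;

      // Processes whole blocks in-place; output length equals input length.
      // Modes like SIV/CCM would instead buffer everything until finish().
      virtual size_t process(uint8_t buf[], size_t size) = 0;

      // Compatibility wrapper: existing callers of update() keep working,
      // but it is just an inline forward to process().
      void update(std::vector<uint8_t>& buffer, size_t offset = 0)
         {
         ARG_CHECK(offset <= buffer.size(), "update: offset out of range");
         process(buffer.data() + offset, buffer.size() - offset);
         }
   };
```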
|
which recently landed on master.
|
Makes TLS::Channel::m_callbacks a reference, so deriving from TLS::Callbacks works.
Splits the compat (std::function-based) interface out into Compat_Callbacks.
This avoids the overhead of empty std::functions when using the virtual
interface, and ensures the virtual interface actually works, since there is no
callback path that does not involve a vtable lookup.
Renames the TLS::Callbacks functions. Since the idea is that an owning
class will often pass *this as the callbacks argument, it is good to namespace
the virtual functions so they do not conflict with other names chosen by
the class; specifically, prefixes all callback functions with tls_.
Reverts the changes to the old-style alert callback (which keeps its no-longer-used
data/len params) so no API changes are required for old code. The new Callbacks
interface continues to receive just the alert code itself.
Switches the CLI tls_client to the virtual function interface for testing.
Inlines tls_server_handshake_state.h - it is only used in tls_server.cpp.
Fixes tests - one test appeared to create a new client object, but the object
was not actually being used, and when enabled it failed because the queues
were not being emptied in between. So, fix that.
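As a rough sketch of the intended usage pattern (an owning class inherits the callbacks interface and passes *this to the channel), assuming the tls_-prefixed hook names and signatures of later Botan 2.x releases, which may differ slightly from the revision described here:
```cpp
#include <botan/tls_callbacks.h>

// The owning class derives from TLS::Callbacks; the tls_ prefix keeps the
// virtual hooks from clashing with the class's own method names.
class My_Connection : public Botan::TLS::Callbacks
   {
   public:
      void tls_emit_data(const uint8_t data[], size_t size) override
         {
         // write the wire data to the socket (omitted)
         }

      void tls_record_received(uint64_t seq_no, const uint8_t data[], size_t size) override
         {
         // hand decrypted application data to the application (omitted)
         }

      void tls_alert(Botan::TLS::Alert alert) override
         {
         // react to the alert code, e.g. log alert.type_string() (omitted)
         }

      bool tls_session_established(const Botan::TLS::Session&) override
         {
         return false; // do not cache the session in this sketch
         }
   };
```
A TLS::Client or TLS::Server would then be constructed with the My_Connection instance as its Callbacks& argument, so every callback goes through the vtable rather than a std::function.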
|
Changes the reseed interval logic to count calls to `randomize` rather than
bytes, to match SP 800-90A.
Changes the RNG reseeding API: there is no implicit reference to the
global entropy sources within the RNGs anymore; the entropy sources
must be supplied with the API call. Adds support for reseeding directly
from another RNG (such as a system or hardware RNG).
Stateful_RNG keeps optional references to both an RNG and a set of
entropy sources. During a reseed, both sources are used if set.
These can be provided to the HMAC_DRBG constructor.
For HMAC_DRBG, SP 800-90A requires that we output no more than 2**16 bytes
per DRBG request. We treat requests longer than that as if the caller
had instead made several sequential maximum-length requests. This
means it is possible for one or more reseeds to trigger even in the
course of generating a single (long) output (generate a 256-bit key
and use ChaCha or HKDF if this is a problem).
Adds RNG::randomize_with_ts_input, which takes timestamps and uses them
as the additional_data DRBG field. Stateful_RNG overrides this to also
include the process ID and the reseed counter; AutoSeeded_RNG's
`randomize` uses this.
Officially deprecates RNG::make_rng and the Serialized_RNG constructor
which creates an AutoSeeded_RNG. With these removed, it would be
possible to perform a build with no AutoSeeded_RNG/HMAC_DRBG at all
(eg, for applications which only use the system RNG).
Tests courtesy of @cordney in GH PRs #598 and #600.
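A minimal sketch of the request-splitting rule just described, with the reseed and generate steps stubbed out; the function and counter names are hypothetical, and only the 2**16-byte cap and the per-request counting come from the description above:
```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

static const size_t MAX_BYTES_PER_REQUEST = 65536; // 2**16, per SP 800-90A

void randomize_split(uint8_t out[], size_t len,
                     size_t& reseed_counter, size_t reseed_interval)
   {
   while(len > 0)
      {
      const size_t this_req = std::min(len, MAX_BYTES_PER_REQUEST);

      if(reseed_counter >= reseed_interval)
         {
         // pull from the supplied entropy sources and/or underlying RNG
         // (omitted), then reset the counter
         reseed_counter = 0;
         }

      // produce this_req bytes of DRBG output into out (omitted);
      // each chunk counts as one request toward the reseed interval
      ++reseed_counter;

      out += this_req;
      len -= this_req;
      }
   }
```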
Adds a Stateful_RNG base class which handles reseeding after some
amount of output (configurable at instantiation time, defaulting to
the build.h value) as well as detecting forks (just using pid
comparisons, so still vulnerable to pid wraparound). Implemented
by HMAC_RNG and HMAC_DRBG. I did not update X9.31 since its
underlying RNG should already be fork-safe and handle reseeding
at the appropriate time, given that a new block is taken from the
underlying RNG (for the datetime vector) for each block of
output.
Adds RNG::randomize_with_input, which for most PRNGs is just a
call to add_entropy followed by randomize. However, for HMAC_DRBG
it is used for additional input. Adds tests for HMAC_DRBG with AD
from the CAVS file.
RNG::add_entropy is implemented by System_RNG now, as both
CryptGenRandom and /dev/urandom support receiving application-provided
data.
The AutoSeeded_RNG underlying type is currently selectable in
build.h and defaults to HMAC_DRBG(SHA-256). AutoSeeded_RNG
provides additional input with each output request, consisting of
the current pid, a counter, and a timestamp (unless the application
explicitly calls randomize_with_input, in which case we just take
what it provided). This is the same hedge used in HMAC_RNG's
output PRF.
AutoSeeded_RNG is part of the base library now and cannot be
compiled out.
Removes the Entropy_Accumulator type (which just served to bridge
between the RNG and the entropy source); instead the
Entropy_Source is passed a reference to the RNG being reseeded,
and it can call add_entropy with whatever it can come up with.
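A rough sketch of the pid-comparison fork detection mentioned above (as noted, this remains vulnerable to pid wraparound); the class and member names are hypothetical and the DRBG internals are stubbed out:
```cpp
#include <cstddef>
#include <cstdint>
#include <sys/types.h>
#include <unistd.h>   // getpid(), POSIX

class Fork_Checking_RNG
   {
   public:
      void randomize(uint8_t out[], size_t len)
         {
         // A differing pid means we are in a forked child that inherited
         // the parent's RNG state, so force a reseed before any output.
         if(m_last_pid != ::getpid())
            reseed();

         generate(out, len); // actual DRBG output (omitted)
         }

   private:
      void reseed()
         {
         // pull fresh entropy into the DRBG state (omitted) ...
         m_last_pid = ::getpid();
         }

      void generate(uint8_t[], size_t) { /* omitted */ }

      pid_t m_last_pid = ::getpid();
   };
```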
Handles fork checking for HMAC_RNG and HMAC_DRBG.
AutoSeeded_RNG change - switches to HMAC_DRBG as the default.
Starts removing the I/O buffer from the entropy poller.
Updates the default RNG poll bits to 256.
Fixes the McEliece test, which was using the wrong RNG API.
Updates docs.
|
With these fixes the implementation is now compatible with bouncycastle, and it should operate
as specified in "DHIES: An encryption scheme based on Diffie-Hellman Problem" or in BSI
technical guideline TR-02102-1.
In addition to the already-present XOR encryption/decryption mode, it is now possible to use DLIES with a block cipher.
Previously the input to the KDF was the concatenation of the (ephemeral) public key
and the secret value derived by the key agreement operation:
```
secure_vector<byte> vz(m_my_key.begin(), m_my_key.end());
vz += m_ka.derive_key(0, m_other_key).bits_of();
const size_t K_LENGTH = length + m_mac_keylen;
secure_vector<byte> K = m_kdf->derive_key(K_LENGTH, vz);
```
I don't know why it was implemented like this, but now the input to the KDF is only the secret value obtained from the key agreement operation.
Furthermore, the order of the output was changed from {public key, tag, ciphertext} to {public key, ciphertext, tag}.
Multiple test vectors were added, generated with bouncycastle and some with botan itself.
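For contrast, a sketch of what the corrected KDF input looks like under the change just described: only the agreed secret is fed to the KDF. The member names follow the old snippet above; the rest is an assumption, not the actual patch:
```cpp
// The ephemeral public key is no longer prepended; only the secret value
// from the key agreement goes into the KDF.
secure_vector<byte> z = m_ka.derive_key(0, m_other_key).bits_of();
const size_t K_LENGTH = length + m_mac_keylen;
secure_vector<byte> K = m_kdf->derive_key(K_LENGTH, z);
```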
|
Caused Curve25519 tests to fail when compiled by Clang on ARM, and may have
affected other 32-bit platforms.
GH #532
|
Adds ChaCha8 support.
|
GCM is defined as having a 32-bit counter, but CTR_BE incremented the
counter across the entire block. This caused incorrect results if
a very large message (2**39 bits) was processed, or if the GHASH-derived
nonce ended up having a counter field near 2**32.
Thanks to Juraj Somorovsky for the bug report and repro.
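A small sketch of the distinction, assuming the usual GCM layout of a 16-byte counter block whose last four bytes are a big-endian counter; the function name is illustrative:
```cpp
#include <cstddef>
#include <cstdint>

// GCM-style increment: only the final 32 bits wrap; the first 12 bytes
// (derived from the nonce / GHASH) must stay fixed.
void increment_gcm_counter(uint8_t block[16])
   {
   for(size_t i = 16; i > 12; --i)
      {
      if(++block[i-1] != 0) // stop once a byte does not wrap around to zero
         break;
      }
   }

// The bug corresponded to carrying across the whole block, i.e. letting
// the loop run from i = 16 all the way down to 1.
```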
Adds copyright notices for Juraj Somorovsky and Christian Mainka of Hackmanit
for the changes in 7c7fcecbe6a and 6d327f879c
Add Policy::check_peer_key_acceptable which lets the app set an arbitrary
callback for examining keys - both the end entity signature keys from
certificates and the peer PFS public keys. Default impl checks that the
algorithm size matches the min keylength. This centralizes this logic
and lets the application do interesting things.
Adds a policy for ECDSA group size checks.
Increases default policy minimums to 2048 RSA and 256 ECC.
(Maybe I'm an optimist after all.)
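A sketch of an application override, assuming the method shape this grew into in later Botan 2.x releases (a const virtual taking the peer's Public_Key and throwing on rejection); treat the details as illustrative:
```cpp
#include <botan/pk_keys.h>
#include <botan/tls_exceptn.h>
#include <botan/tls_policy.h>

// Reject peer keys (certificate signature keys and PFS exchange keys alike)
// whose estimated strength falls below a chosen bound.
class Strict_Policy : public Botan::TLS::Policy
   {
   public:
      void check_peer_key_acceptable(const Botan::Public_Key& key) const override
         {
         if(key.estimated_strength() < 128)
            throw Botan::TLS::TLS_Exception(
               Botan::TLS::Alert::INSUFFICIENT_SECURITY,
               "Peer " + key.algo_name() + " key is too weak");
         }
   };
```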
|
With sufficient squinting, Transform provided an abstract base
interface that covered both cipher modes and compression algorithms.
However it mapped onto neither of them particularly well. In addition,
this API had the same problem that has made me dislike the Pipe/Filter
API: given a Transform&, what does it do when you put bits in? Maybe
it encrypts. Maybe it compresses. It's a floor wax and a dessert topping!
Currently the Cipher_Mode interface is left mostly unchanged, with the
APIs previously on Transform just moved down the type hierarchy. I
think there are some definite improvements possible here, with respect to
handling of in-place encryption, but that is left for a later commit.
The compression API is split into two types, Compression_Algorithm and
Decompression_Algorithm. Compression_Algorithm's start() call takes
the compression level, allowing varying compression levels with a single
object. Flushing the compression state is moved to a bool param on
`Compression_Algorithm::update`. All the nonsense about compression
algorithms having zero-length nonces, input granularity rules, etc.
as a result of using the Transform interface goes away.
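A short usage sketch of the split-out compression API, assuming the zlib module is enabled and the make_compressor factory / method signatures of later Botan 2.x releases:
```cpp
#include <botan/compression.h>
#include <botan/secmem.h>
#include <cstddef>
#include <memory>

// One-shot, in-place compression of a buffer. For streaming, intermediate
// chunks would go through comp->update(chunk, 0, flush) and only the last
// chunk through finish().
void compress_in_place(Botan::secure_vector<uint8_t>& buf, size_t level = 9)
   {
   std::unique_ptr<Botan::Compression_Algorithm> comp(Botan::make_compressor("zlib"));
   if(!comp)
      return; // zlib support not compiled into this build

   comp->start(level); // compression level chosen per message at start()
   comp->finish(buf);  // buf now holds the complete compressed stream
   }
```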
|
OpenSSL sends an empty record before each new data record in TLS v1.0
to randomize the IV, as a countermeasure to the BEAST attack. Most
implementations use 1/(n-1) splitting for this instead.
Bug introduced with the const time changes in 1.11.23
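For reference, a sketch of the 1/(n-1) splitting most implementations use instead of OpenSSL's empty-record trick; this is purely illustrative and not the code path touched by this fix:
```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Split an outgoing TLS 1.0 CBC plaintext into a 1-byte record followed by
// the remaining n-1 bytes; the first record's ciphertext then serves as an
// unpredictable IV for the second, countering BEAST.
std::pair<std::vector<uint8_t>, std::vector<uint8_t>>
split_for_beast(const std::vector<uint8_t>& plaintext)
   {
   if(plaintext.size() <= 1)
      return { plaintext, {} };

   return { std::vector<uint8_t>(plaintext.begin(), plaintext.begin() + 1),
            std::vector<uint8_t>(plaintext.begin() + 1, plaintext.end()) };
   }
```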
|
Fixes GH #460
Closes GH #474
[ci skip]