Commit message

Adding ChaCha8 support
|
GCM is defined as having a 32-bit counter, but CTR_BE incremented the
counter across the entire block. This caused incorrect results if
a very large message (2**39 bits) was processed, or if the GHASH-derived
nonce ended up having a counter field near 2**32.
Thanks to Juraj Somorovsky for the bug report and repro.
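
As a rough illustration of the difference (a minimal sketch, not the actual CTR_BE or GCM code): GCM's counter block ends in a 32-bit big-endian counter, and only those last four bytes are supposed to wrap, while a generic big-endian counter mode carries into the rest of the block.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    using Block = std::array<uint8_t, 16>;

    // GCM-style increment: only the last 4 bytes (a 32-bit big-endian
    // counter) change; wrapping does not carry into the nonce-derived part.
    void inc32(Block& ctr) {
       for (int i = 15; i >= 12; --i)
          if (++ctr[i] != 0)
             return;
    }

    // Full-block big-endian increment: a carry out of the low 32 bits
    // spills into byte 11 and beyond, diverging from the GCM definition.
    void inc128(Block& ctr) {
       for (int i = 15; i >= 0; --i)
          if (++ctr[i] != 0)
             return;
    }

    int main() {
       Block a{};
       a[12] = a[13] = a[14] = a[15] = 0xFF;  // counter field about to wrap
       Block b = a;
       inc32(a);
       inc128(b);
       std::printf("byte 11 after inc32: %02x, after inc128: %02x\n", a[11], b[11]);
       return 0;
    }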
|
warnings.
|
With sufficient squinting, Transform provided an abstract base
interface that covered both cipher modes and compression algorithms.
However, it mapped onto neither of them particularly well. In addition,
this API had the same problem that has made me dislike the Pipe/Filter
API: given a Transform&, what does it do when you put bits in? Maybe
it encrypts. Maybe it compresses. It's a floor wax and a dessert topping!
Currently the Cipher_Mode interface is left mostly unchanged, with the
APIs previously on Transform just moved down the type hierarchy. I
think there are some definite improvements possible here, wrt handling
of in-place encryption, but those are left for a later commit.
The compression API is split into two types, Compression_Algorithm and
Decompression_Algorithm. Compression_Algorithm's start() call takes
the compression level, allowing varying compression levels with a single
object. And flushing the compression state becomes a bool param on
`Compression_Algorithm::update`. All the nonsense WRT compression
algorithms having zero-length nonces, input granularity rules, etc.
as a result of using the Transform interface goes away.
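
As a sketch of the shape described above (plain std::vector standing in for the library's buffer types; the real declarations differ in their details):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    class Compression_Algorithm {
       public:
          virtual ~Compression_Algorithm() = default;

          // start() takes the compression level, so one object can be
          // reused for differently tuned compression runs.
          virtual void start(size_t level) = 0;

          // Flushing the internal state is just a bool parameter, rather
          // than a special convention inherited from Transform.
          virtual void update(std::vector<uint8_t>& buf, bool flush = false) = 0;

          virtual void finish(std::vector<uint8_t>& buf) = 0;
    };

    class Decompression_Algorithm {
       public:
          virtual ~Decompression_Algorithm() = default;

          virtual void start() = 0;
          virtual void update(std::vector<uint8_t>& buf) = 0;
          virtual void finish(std::vector<uint8_t>& buf) = 0;
    };

    // A do-nothing "compressor" just to show the calling pattern.
    class Null_Compression final : public Compression_Algorithm {
       public:
          void start(size_t) override {}
          void update(std::vector<uint8_t>&, bool) override {}
          void finish(std::vector<uint8_t>&) override {}
    };

    int main() {
       Null_Compression comp;
       std::vector<uint8_t> buf = { 'h', 'i' };
       comp.start(9);            // per-object compression level
       comp.update(buf, true);   // flush == true
       comp.finish(buf);
       std::printf("%zu bytes\n", buf.size());
       return 0;
    }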
|
explicit.
|
In some cases this can offer better optimization via devirtualization,
and it lets the user know the class is not intended for derivation.
Some discussion in GH #402
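
A small illustration of the devirtualization point, using hypothetical class names rather than the library's:

    #include <cstdio>

    class Hash {
       public:
          virtual ~Hash() = default;
          virtual const char* name() const = 0;
    };

    // Marking the leaf class final tells the compiler no further overrides
    // can exist, so calls through a pointer or reference whose static type
    // is SHA_256 can be turned into direct (even inlined) calls.
    class SHA_256 final : public Hash {
       public:
          const char* name() const override { return "SHA-256"; }
    };

    int main() {
       SHA_256 h;
       std::printf("%s\n", h.name());  // candidate for devirtualization
       return 0;
    }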
|
Having the code diffused all over the place was ugly and would
not scale well to multiple alternative providers.
GH #368
|
The tests previously used 4 to 6 different schemes internally (the vec file
reader framework, Catch, the old InSiTo Boost.Test tests, the PK/BigInt tests
which escaped the rewrite in 1.11.7, plus a number of one-offs). Converge on a
design that works everywhere, and update all the things.
Also fix a few bugs found by the test changes: the SHA-512-256 name was
incorrect, the OpenSSL RC4 name was incorrect, and the signature of the FFI
function botan_pubkey_destroy was wrong.
|
For RSA, RC4, and ECDSA, put the OpenSSL versions in the same directory
as the base version. They just rely on a macro check for the openssl
module to decide whether OpenSSL should be used.
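
The check amounts to a guard like the following sketch; the macro is assumed to be the one the openssl module defines, and the function is purely illustrative:

    #include <cstdio>

    #if defined(BOTAN_HAS_OPENSSL)

    // OpenSSL-backed implementation, compiled only when the module is enabled
    void register_openssl_rc4() {
       std::puts("OpenSSL RC4 provider registered");
    }

    #else

    // openssl module not enabled in this build; nothing to register
    void register_openssl_rc4() {}

    #endif

    int main() {
       register_openssl_rc4();
       return 0;
    }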
|
Only user-visible change is the removal of get_byte.h
|
Previously we were relying on the type destructors to pull in
the relevant objects. However, that fails in many simple cases
where the object is never deleted.
For every type involved in the algo registry, add static create
and providers functions that access the algo registry. Modify
lookup.h to be inline and call those functions, and move
a few of them to sub-headers (eg, get_pbkdf going to pbkdf.h).
Accessing the registry thus goes through the same file
that handles the initialization, so there is no way to end up
with missing objects.
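
A simplified sketch of the resulting shape, with hypothetical types and a toy registry standing in for the real one:

    #include <cstdio>
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>
    #include <vector>

    class BlockCipher {
       public:
          virtual ~BlockCipher() = default;
          virtual std::string name() const = 0;

          // Static factories defined in the same translation unit that does
          // the registrations, so any caller necessarily pulls those in.
          static std::unique_ptr<BlockCipher> create(const std::string& algo_spec);
          static std::vector<std::string> providers(const std::string& algo_spec);
    };

    namespace {

    class AES_128 final : public BlockCipher {
       public:
          std::string name() const override { return "AES-128"; }
    };

    using Maker = std::function<std::unique_ptr<BlockCipher>()>;

    std::map<std::string, Maker>& registry() {
       static std::map<std::string, Maker> reg = {
          { "AES-128", [] { return std::make_unique<AES_128>(); } },
       };
       return reg;
    }

    }  // namespace

    std::unique_ptr<BlockCipher> BlockCipher::create(const std::string& algo_spec) {
       auto i = registry().find(algo_spec);
       return (i != registry().end()) ? i->second() : nullptr;
    }

    std::vector<std::string> BlockCipher::providers(const std::string& algo_spec) {
       return registry().count(algo_spec) ? std::vector<std::string>{ "base" }
                                          : std::vector<std::string>{};
    }

    // lookup.h-style free function becomes a thin inline wrapper, so callers
    // route through the same code that handles the registrations.
    inline std::unique_ptr<BlockCipher> get_block_cipher(const std::string& name) {
       return BlockCipher::create(name);
    }

    int main() {
       auto cipher = get_block_cipher("AES-128");
       std::printf("%s\n", cipher ? cipher->name().c_str() : "not found");
       return 0;
    }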
|
The support problems from having static libraries not work in the
obvious way would be endless trouble. Instead, have each set of
registrations tag along in a source file for the basic type, at the
cost of some extra ifdefs. On shared libs this is harmless -
everything is going into the shared object anyway. With static libs,
this means pulling in a single block cipher pulls in the text of all
of them. But that's still strictly better than the amalgamation
(which really does pull in everything), and it works (unlike the status quo).
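
Conceptually the registrations end up in the base type's own source file behind feature ifdefs, along these lines (the macros and demo function are illustrative, not the library's):

    #include <cstdio>

    #define DEMO_HAS_AES 1
    // #define DEMO_HAS_SERPENT 1

    // Lives in the base type's source file (e.g. the block cipher base),
    // so linking that in pulls in every enabled registration, even from
    // a static library.
    void register_block_ciphers() {
    #if defined(DEMO_HAS_AES)
       std::puts("registering AES-128/192/256");
    #endif

    #if defined(DEMO_HAS_SERPENT)
       std::puts("registering Serpent");
    #endif
    }

    int main() {
       register_block_ciphers();
       return 0;
    }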
|
Uninitialized variables, missing divide-by-zero checks, a missing
virtual destructor, etc. The only serious thing is a bug in the TLS maximum
fragment decoder; missing breaks in a switch statement meant the receiver
would treat any negotiated max fragment as a 4k limit.
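
The fragment decoder issue is the classic missing-break pattern; a sketch (not the actual decoder), using the RFC 6066 code points 1 through 4 for limits of 2^9 through 2^12 bytes:

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    // Broken: without break statements every case falls through to the
    // last one, so any negotiated code ends up reported as the 4k limit.
    size_t max_fragment_broken(uint8_t code) {
       size_t limit = 0;
       switch(code) {
          case 1: limit = 512;
          case 2: limit = 1024;
          case 3: limit = 2048;
          case 4: limit = 4096;
       }
       return limit;
    }

    // Fixed: each code maps to exactly one limit.
    size_t max_fragment_fixed(uint8_t code) {
       switch(code) {
          case 1: return 512;
          case 2: return 1024;
          case 3: return 2048;
          case 4: return 4096;
          default: return 0;
       }
    }

    int main() {
       std::printf("code 1: broken=%zu fixed=%zu\n",
                   max_fragment_broken(1), max_fragment_fixed(1));
       return 0;
    }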
|
Fix two memory leaks (in TLS and modes) caused by calling get_foo and
then cloning the result before saving it (leaking the original object),
a holdover from the conversion between construction techniques in 1.11.14.
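
The leak pattern looks roughly like this, with stand-in types for the get_foo factories and clone() methods involved:

    #include <cstdio>
    #include <memory>

    class HashFunction {
       public:
          virtual ~HashFunction() = default;
          virtual HashFunction* clone() const = 0;
    };

    class SHA_256 final : public HashFunction {
       public:
          HashFunction* clone() const override { return new SHA_256; }
    };

    // get_foo()-style factory: the caller owns the returned object.
    HashFunction* get_hash() { return new SHA_256; }

    int main() {
       // Leaky holdover: clone the factory result and keep only the clone;
       // the object returned by get_hash() itself is never deleted.
       std::unique_ptr<HashFunction> leaky(get_hash()->clone());

       // Fixed: simply take ownership of the factory result directly.
       std::unique_ptr<HashFunction> fixed(get_hash());

       std::puts("done");
       return 0;
    }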
|
No performance impact afaict.
|
Convert all uses of Algorithm_Factory and the engines to using Algo_Registry.
The shared pool of entropy sources remains but is moved to EntropySource.
With that and a few remaining initializations (default OIDs and aliases)
moved elsewhere, the global state is empty and init and shutdown are no-ops.
Remove almost all of the headers and code for handling the global
state, except LibraryInitializer, which remains as a compatibility stub.
Update seeding for blinding so only one hacky almost-global RNG
instance needs to be set up, instead of one per pubkey use (it uses
either the system RNG, or an AutoSeeded_RNG if the system RNG is not
available).
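
The blinding RNG fallback amounts to something like this sketch, with stand-in classes and a runtime flag in place of the real compile-time availability check:

    #include <cstdio>
    #include <memory>

    class RandomNumberGenerator {
       public:
          virtual ~RandomNumberGenerator() = default;
          virtual const char* name() const = 0;
    };

    class System_RNG final : public RandomNumberGenerator {
       public:
          const char* name() const override { return "system"; }
    };

    class AutoSeeded_RNG final : public RandomNumberGenerator {
       public:
          const char* name() const override { return "autoseeded"; }
    };

    // One shared RNG instance for blinding: prefer the system RNG when the
    // platform provides one, otherwise fall back to an AutoSeeded_RNG.
    RandomNumberGenerator& blinding_rng(bool have_system_rng) {
       static std::unique_ptr<RandomNumberGenerator> rng(
          have_system_rng
             ? static_cast<RandomNumberGenerator*>(new System_RNG)
             : static_cast<RandomNumberGenerator*>(new AutoSeeded_RNG));
       return *rng;
    }

    int main() {
       std::printf("blinding RNG: %s\n", blinding_rng(true).name());
       return 0;
    }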
|
Update license header line to specify the terms and refer to the file,
neither of which it included before.
|
draft-irtf-cfrg-chacha20-poly1305-03
|
|
|