Commit message log
- Modules now default to internal headers instead of public ones, so that
  making a new public API is a visible and intentional choice. Brings the
  public header count from over 300 to around 150. Also removes the
  deprecated tls_blocking interface.
- When the hash and group sizes differ, our conversion was sometimes
  different from the standard one. Closes #2415.
- Add const-time annotations to the gcd implementation.
- OSS-Fuzz 21115.
- Also make low_zero_bits constant time.
- Gives a small but measurable speedup (~1-2%) for RSA and ECDSA.
- Based on profiling RSA key generation.
- Deprecate some crufty functions. Optimize binary encoding/decoding.
- Use ct_is_zero instead of a more complicated construction, and avoid a
  duplicated size check/resize; Data::set_word will handle it.
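As an aside (not part of the commit message), the kind of branch-free test
that ct_is_zero names can be sketched as follows; treat the helper name and
shape as illustrative, not as the library's exact code:

```cpp
#include <cstdint>

// Illustrative sketch only: a branch-free is-zero test in the spirit of
// ct_is_zero. Returns an all-ones mask when x == 0 and all-zeros otherwise,
// with no data-dependent branch.
inline uint64_t ct_is_zero_mask(uint64_t x)
   {
   // (x | (0 - x)) has its top bit set exactly when x is nonzero
   const uint64_t nonzero_bit = (x | (0 - x)) >> 63;
   return nonzero_bit - 1;   // 1 -> 0, 0 -> all-ones
   }
```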
- Previously we unpoisoned the input to high_bit, but this is no longer
  required. The output, however, should still be unpoisoned.
- They get compiled as const-time on x86-64 with GCC, but I don't think that
  can be totally relied on. It is an improvement in any case, and also
  faster, because we compute it recursively.
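For illustration (an assumed shape, not the library's exact code), the
recursive bisection idea for a branch-free high_bit looks roughly like this;
whether the compiler keeps it branch-free is exactly the caveat the entry
raises:

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative sketch: compute the 1-based index of the highest set bit by
// halving the search range at each step, using masks rather than branches.
// Returns 0 for x == 0. The sequence of operations is data-independent,
// though the compiler is not obliged to keep it that way.
inline size_t ct_high_bit(uint64_t x)
   {
   size_t hb = 0;
   for(size_t s = 32; s > 0; s /= 2)
      {
      const uint64_t upper = x >> s;
      // nz is 1 if the upper half is nonzero, else 0 (branch-free)
      const uint64_t nz = (upper | (0 - upper)) >> 63;
      hb += static_cast<size_t>(nz) * s;   // the high bit lies above position s
      x >>= (nz * s);                      // keep searching in that half
      }
   return hb + static_cast<size_t>(x);     // x is now 0 or 1
   }
```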
- This is still leaky, but much less than before.
- It is stupid and slow (~50-100x slower than the variable-time version), but
  still useful for protecting critical algorithms. Not currently used;
  waiting for OSS-Fuzz to test it for a while before we commit to it.
- In particular, comparisons, calculating the significant words, and mod_sub
  are const-time now.
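A sketch of what a constant-time comparison over word arrays typically looks
like (an assumed shape, not the library's actual code): every word is
visited and the answer falls out of the final borrow, so the timing does not
depend on where the operands first differ.

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative sketch: constant-time test for x < y over little-endian
// arrays of 32-bit words of equal length n. All n words are processed and
// there is no data-dependent branch; the result is the final borrow of the
// subtraction x - y.
inline uint32_t ct_is_less(const uint32_t x[], const uint32_t y[], size_t n)
   {
   uint64_t borrow = 0;
   for(size_t i = 0; i != n; ++i)
      {
      const uint64_t d = static_cast<uint64_t>(x[i]) - y[i] - borrow;
      borrow = d >> 63;   // 1 if this word subtraction underflowed
      }
   return static_cast<uint32_t>(borrow);   // 1 if x < y, else 0
   }
```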
- And forbid zero-length substrings, which did not work correctly anyway.
- Instigated by finding a bug where BigInt::encode with decimal output would
  often have a leading '0' character. That is papered over in the IO
  operator, but was exposed by botan_mp_to_str, which calls BigInt::encode
  directly. Split BigInt::encode/decode into two versions, one taking the
  Base argument and the other using the (previously default) binary base,
  with a view to eventually deprecating the versions taking a base. Add
  BigInt::to_dec_string() and BigInt::to_hex_string().
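A short usage sketch of the new accessors named above (the header path and
the exact hex output formatting are assumed from the library's usual
conventions):

```cpp
#include <botan/bigint.h>
#include <iostream>

int main()
   {
   Botan::BigInt n("0x1F4");   // 500

   // Decimal form, without the spurious leading '0' the old path produced
   std::cout << n.to_dec_string() << "\n";   // "500"

   // Hexadecimal form of the same value
   std::cout << n.to_hex_string() << "\n";

   return 0;
   }
```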
- This eliminates an issue identified in the paper "Prime and Prejudice:
  Primality Testing Under Adversarial Conditions" by Albrecht, Massimo,
  Paterson and Somorovsky, where DL_Group::verify_group with strong=false
  would accept a composite q with probability 1/4096. That is exactly the
  documented error bound, but still unfortunate.
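For context (added here, not part of the original message), the 1/4096
figure is just the generic Miller-Rabin error bound: a composite survives
each round with probability at most 1/4, which matches the documented figure
for t = 6 rounds:

```latex
\Pr[\text{composite } q \text{ accepted}]
  \;\le\; \left(\tfrac{1}{4}\right)^{t}
  \;=\; \left(\tfrac{1}{4}\right)^{6}
  \;=\; \tfrac{1}{4096}
```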
- Improves P-256 a bit.
- Avoids needless allocations for expressions like x - 1 or y <= 4.
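The idea behind that, sketched with an invented toy type (not the library's
BigInt): provide operator overloads that take a plain machine word, so a
small constant never has to be wrapped in a heap-allocating big-integer
temporary.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Toy illustration only: a word overload means x - 1 and y <= 4 touch the
// machine word directly instead of first allocating a temporary Num(1) or
// Num(4).
class Num
   {
   public:
      using word = uint64_t;

      explicit Num(word w) : m_limbs(1, w) {}   // allocates a limb vector

      Num operator-(word w) const
         {
         Num r(*this);
         // toy borrow propagation; assumes the overall value is >= w
         for(size_t i = 0; i != r.m_limbs.size(); ++i)
            {
            const word before = r.m_limbs[i];
            r.m_limbs[i] -= w;
            if(before >= w)
               break;   // no borrow needed from the next limb
            w = 1;      // borrow one from the next limb
            }
         return r;
         }

      bool operator<=(word w) const
         {
         for(size_t i = 1; i < m_limbs.size(); ++i)
            if(m_limbs[i] != 0)
               return false;   // value needs more than one limb, so > w
         return m_limbs[0] <= w;
         }

   private:
      std::vector<word> m_limbs;
   };
```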
- See also GH #986.
- Since the point is public, all the values are also, so this reduces
  pressure on the mlock allocator and may (slightly) help performance
  through cache read-ahead. The downside is that cache-based side channels
  become slightly easier (versus the data being stored in discontiguous
  vectors), but we shouldn't rely on that in any case. And having the data
  in a single array makes a masked table lookup easier to arrange.
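The masked lookup it refers to can be sketched as follows (an assumed shape
for illustration): with the precomputed points flattened into one array, the
entry at a secret index is selected by touching every entry and combining
them under a mask, so the memory access pattern does not depend on the
index.

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative sketch: constant-time selection of one entry from a flat
// table of multi-word values. Every entry is read; only the one whose index
// matches secret_index contributes, via an all-ones mask computed without
// branching.
void ct_table_select(uint64_t out[], const uint64_t table[],
                     size_t num_entries, size_t words_per_entry,
                     size_t secret_index)
   {
   for(size_t w = 0; w != words_per_entry; ++w)
      out[w] = 0;

   for(size_t i = 0; i != num_entries; ++i)
      {
      const uint64_t diff = static_cast<uint64_t>(i ^ secret_index);
      const uint64_t mask = ((diff | (0 - diff)) >> 63) - 1;   // all-ones iff i == secret_index

      for(size_t w = 0; w != words_per_entry; ++w)
         out[w] |= table[i * words_per_entry + w] & mask;
      }
   }
```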
- Use BOTAN_MP_WORD_BITS consistently.
- Precompute the multiples of the prime and then subtract directly.
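One plausible reading of that, sketched with a single-word toy (the real
code works on multi-word values, and the class name here is invented): if an
earlier step yields a small quotient estimate c, the reduction becomes one
subtraction of a precomputed c*p rather than a loop subtracting p up to c
times.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Toy illustration (single machine word; assumes c * p fits in a word and
// that v - c*p lands in [0, p)). The table of multiples is built once per
// prime and reused for every reduction.
class PrimeMultiples
   {
   public:
      PrimeMultiples(uint64_t p, size_t count)
         {
         m_multiples.reserve(count);
         for(size_t c = 0; c != count; ++c)
            m_multiples.push_back(static_cast<uint64_t>(c) * p);
         }

      // Reduce v given the quotient estimate c: a single direct subtraction
      uint64_t subtract_multiple(uint64_t v, size_t c) const
         {
         return v - m_multiples.at(c);
         }

   private:
      std::vector<uint64_t> m_multiples;
   };
```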
- OSS-Fuzz 6570 flagged an issue with slow modular exponentiation. It turned
  out the problem was not in the library's version but in the simple
  square-and-multiply algorithm. Computing g^x % p with all three integers
  being dense (high Hamming weight) numbers took about 1.5 seconds on a fast
  machine, with almost all of the time taken by the Barrett reductions. With
  these changes, the same testcase now takes only a tiny fraction of a
  second.
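To see why a dense exponent is the worst case, here is the square-and-
multiply pattern in toy machine-word form (an illustration with an invented
helper, not the code in question): every set exponent bit costs an extra
multiply-and-reduce, and in the original testcase each of those reductions
was a multi-word Barrett reduction.

```cpp
#include <cstdint>

// Toy illustration of square-and-multiply modular exponentiation (requires
// GCC/Clang for unsigned __int128; assumes p > 1). For a b-bit exponent
// there are roughly b squarings plus one extra multiplication per set bit,
// so a dense exponent nearly doubles the number of reductions performed.
inline uint64_t powmod_square_and_multiply(uint64_t g, uint64_t x, uint64_t p)
   {
   auto mulmod = [p](uint64_t a, uint64_t b) -> uint64_t
      {
      return static_cast<uint64_t>((static_cast<unsigned __int128>(a) * b) % p);
      };

   uint64_t result = 1;
   uint64_t base = g % p;

   while(x > 0)
      {
      if(x & 1)
         result = mulmod(result, base);   // one extra multiply per set bit
      base = mulmod(base, base);          // one squaring per exponent bit
      x >>= 1;
      }

   return result;
   }
```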
- Improves ECDSA times by 2-3%.
- Makes a 4-6% difference for ECDSA.
- No shared state.
- Generally speaking, reinterpret_cast is sketchy stuff, but the special
  case of char*/uint8_t* is both common and safe. By isolating those, the
  remaining (likely sketchy) cases are easier to grep for.
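A sketch of the kind of wrapper this isolates the casts into (the helper
names are assumed for illustration, not necessarily the library's exact
ones):

```cpp
#include <cstdint>

// The char* / uint8_t* conversion is common and well-defined, so giving it
// a named wrapper keeps the remaining reinterpret_casts (the ones that
// actually deserve scrutiny) easy to grep for.
inline const uint8_t* cast_char_ptr_to_uint8(const char* s)
   {
   return reinterpret_cast<const uint8_t*>(s);
   }

inline const char* cast_uint8_ptr_to_char(const uint8_t* b)
   {
   return reinterpret_cast<const char*>(b);
   }
```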