Commit messages
|
Instead use two specialized algorithms, one for an odd modulus and the
other for a power-of-2 modulus, then combine the results using CRT.
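A minimal sketch of the recombination step, using fixed-width integers instead of Botan's BigInt; the moduli, the test value, and the inv_mod_2k helper are purely illustrative.

```cpp
// Toy CRT recombination for n = m_odd * 2^k (Garner's formula), assuming
// the two residues came from the two specialized reduction paths.
#include <cstdint>
#include <cstdio>

// inverse of an odd 'a' modulo 2^k via Newton/Hensel lifting
static uint64_t inv_mod_2k(uint64_t a, unsigned k) {
   uint64_t x = 1;                     // correct modulo 2
   for(unsigned i = 1; i < k; ++i)
      x *= 2 - a * x;                  // each step doubles the correct bits
   return (k == 64) ? x : (x & ((uint64_t(1) << k) - 1));
}

int main() {
   const uint64_t m_odd = 23;          // odd part of the modulus
   const unsigned k = 5;               // power-of-2 part is 2^5 = 32
   const uint64_t two_k = uint64_t(1) << k;

   const uint64_t x  = 555;            // value being reduced mod n = 736
   const uint64_t r1 = x % m_odd;      // result of the odd-modulus path
   const uint64_t r2 = x % two_k;      // result of the power-of-2 path

   // Garner: x mod n = r1 + m_odd * ((r2 - r1) * m_odd^-1 mod 2^k)
   const uint64_t t = ((r2 - r1) * inv_mod_2k(m_odd, k)) & (two_k - 1);
   const uint64_t recombined = r1 + m_odd * t;

   std::printf("%llu == %llu\n",
               (unsigned long long)recombined,
               (unsigned long long)(x % (m_odd * two_k)));
   return 0;
}
```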
|
On Linux x86-64 improves RSA-2048 by ~20% (was 1500/s now 1800/s)
and RSA-3072 by ~6% (was 630/s now 670/s).
|
Signed-off-by: Nuno Goncalves <[email protected]>
|
The previous version leaked some (minimal) information through the loop
bounds.
|
If the application caches the PK_Signer or similar, then the
performance is basically identical to what is done now.
However, for applications which create a new PK_Signer object per
signature, this improves performance by about 30%. Notably this
includes the TLS layer.
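A usage-level sketch of the two patterns being contrasted, assuming the Botan 2.x PK_Signer interface; the padding name and key type here are just examples.

```cpp
#include <botan/auto_rng.h>
#include <botan/pubkey.h>
#include <botan/rsa.h>
#include <cstdint>
#include <vector>

void sign_many(const std::vector<std::vector<uint8_t>>& msgs) {
   Botan::AutoSeeded_RNG rng;
   Botan::RSA_PrivateKey key(rng, 2048);

   // Pattern A: construct the signer once and reuse it; the per-signature
   // cost is just the signing operation itself.
   Botan::PK_Signer signer(key, rng, "EMSA4(SHA-256)");
   for(const auto& m : msgs) {
      auto sig = signer.sign_message(m, rng);
      (void)sig;
   }

   // Pattern B: a fresh PK_Signer per signature (as e.g. the TLS layer
   // did); the setup work in the constructor is repeated every time.
   for(const auto& m : msgs) {
      Botan::PK_Signer per_msg(key, rng, "EMSA4(SHA-256)");
      auto sig = per_msg.sign_message(m, rng);
      (void)sig;
   }
}
```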
|
On its own gives a modest speedup (3-5%) to RSA sign/decrypt, and it
is needed for another more complicated optimization.
|
Keys smaller than 384 bits are trivially breakable, but that's true
for 512 bits as well, so there is no reason to draw the line there. Just
require 5 bits, since the smallest legal RSA key is 3*5; that also handles
the integer overflow warning from Coverity, which was the original reason
for it.
GH #1953
|
Both threads called Modular_Reducer::reduce on m, which caused the
cached significant-words result to be written twice in an unsynchronized
way. By calling it once beforehand, the value is computed and cached, so
no additional writes occur.
Found with helgrind.
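The fix pattern, shown on a made-up LazyNumber type rather than on Botan's classes: force the lazily cached value once before the worker threads start, so afterwards they only read it.

```cpp
// Generic illustration (not Botan's code): a lazily cached value that two
// worker threads would otherwise both compute and write is forced once,
// up front, so the workers never write it.
#include <cstddef>
#include <optional>
#include <thread>
#include <vector>

struct LazyNumber {
   std::vector<unsigned> words;
   mutable std::optional<size_t> cached_sig_words; // unsynchronized cache

   size_t sig_words() const {
      if(!cached_sig_words) {                // two threads writing here race
         size_t n = words.size();
         while(n > 0 && words[n - 1] == 0)
            --n;
         cached_sig_words = n;
      }
      return *cached_sig_words;
   }
};

void use_in_two_threads(const LazyNumber& m) {
   m.sig_words();   // the fix: compute and cache once before spawning

   std::thread t1([&] { (void)m.sig_words(); });  // now a pure read
   std::thread t2([&] { (void)m.sig_words(); });
   t1.join();
   t2.join();
}
```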
|
In the long ago when I wrote the Barrett code I must have missed that
Barrett works for any input < 2^(2k), where k is the modulus size in
bits rounded up to a whole number of words. Fixing this has several nice
effects: it is faster, because it replaces a multiprecision comparison
with a single size_t compare, and the branch no longer reveals
information about the input or modulus, only their word lengths, which
are not considered sensitive.
Fixing this allows reverting the change made in a57ce5a4fd2, and now
RSA signing is even slightly faster than in 2.8, rather than 30% slower.
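A toy single-word rendering of that bound (Botan's code is multiprecision; the 32-bit modulus and the test inputs are only for illustration): with mu = floor(2^(2k)/m) precomputed, the same quotient estimate is valid for any x < 2^(2k), so only a small fixed correction is needed and no value-dependent switch to a different algorithm occurs.

```cpp
// Toy single-word Barrett reduction, not Botan's implementation.
// Uses the GCC/Clang unsigned __int128 extension.
#include <cassert>
#include <cstdint>
#include <initializer_list>

static uint64_t barrett_reduce(uint64_t x, uint32_t m, uint64_t mu) {
   // quotient estimate: floor(x * mu / 2^64), never larger than x / m
   const uint64_t q = (uint64_t)(((unsigned __int128)x * mu) >> 64);
   uint64_t r = x - q * (uint64_t)m;   // r < 2*m by the Barrett bound
   // classic Barrett needs at most two conditional subtractions; real
   // constant-time code would mask these rather than branch
   if(r >= m) r -= m;
   if(r >= m) r -= m;
   return r;
}

int main() {
   const uint32_t m = 0xFFFFFFFBu;     // arbitrary 32-bit (k-bit) modulus
   const uint64_t mu = (uint64_t)(((unsigned __int128)1 << 64) / m);
   for(uint64_t x : {0ULL, 12345ULL, 0xFFFFFFFFFFFFFFFFULL})
      assert(barrett_reduce(x, m, mu) == x % m);
   return 0;
}
```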
|
Barrett will branch to a different (and slower) algorithm if the input
is larger than the square of the modulus. This branch can be detected
by a side channel.
For RSA we need to compute m % p and m % q to get CRT started. Being
able to detect whether m > q*q (assuming q is the smaller prime) allows a
binary search on the secret prime. This attack is blocked by input
blinding, but it still seems dangerous. Unfortunately, changing to use
the generic constant-time modulo instead of Barrett introduces a rather
severe performance regression in RSA signing.
In SM2, reduce k-r*x modulo the order before multiplying it with (x-1)^-1.
Otherwise the need for the slow modulo versus Barrett leaks information
about k and/or x.
|
Instead require that the inputs already be reduced. For RSA-CRT, use
Barrett, which is constant time already. For SRP6 the inputs were not
reduced; use the Barrett hook available in DL_Group.
|
This is not exhaustive. See GH #1733
|
Needed for https://github.com/strongswan/strongswan/pull/109
|
Spawning the thread off as quickly as possible helps performance
slightly, especially with larger moduli.
|
See #1606 for discussion
|
Let DER_Encoder write to a user-specified vector instead of only to an
internal vector. This allows encoding to a std::vector without having
to first write to a locked vector and then copy out the result.
Add an ASN1_Object::BER_encode convenience method. It replaces
X509_Object::BER_encode, which had the same logic but was restricted to
a subtype, and it replaces many cases where DER_Encoder was used just
to encode a single object (X509_DN, AlgorithmIdentifier, etc).
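A short usage sketch, assuming the Botan 2.x der_enc.h interface described above; the BigInt pair is an arbitrary payload.

```cpp
#include <botan/bigint.h>
#include <botan/der_enc.h>
#include <cstdint>
#include <vector>

std::vector<uint8_t> encode_pair(const Botan::BigInt& a, const Botan::BigInt& b) {
   std::vector<uint8_t> out;        // ordinary, caller-owned storage
   Botan::DER_Encoder enc(out);     // encoder appends directly into 'out'
   enc.start_cons(Botan::SEQUENCE)
         .encode(a)
         .encode(b)
      .end_cons();
   return out;                      // no copy out of a locked vector needed
}

// And for a single ASN1_Object (an X509_DN, AlgorithmIdentifier, ...):
//    std::vector<uint8_t> bits = obj.BER_encode();
```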
|
Add a new function dedicated to generating RSA primes.
Don't test for p.bits() > bits until the very end; that rarely happens,
and deferring the check speeds up prime generation quite noticeably.
Add Miller-Rabin error probabilities for 1/2**128, which again
speeds up RSA keygen and DL param gen quite a bit.
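A generic illustration of that ordering using 64-bit integers; this is not Botan's generate_rsa_prime, and the sieve primes, round count, and RNG are stand-ins.

```cpp
// Cheap rejections run first, Miller-Rabin (the expensive part, with enough
// rounds for the target error bound such as 2^-128) runs only on survivors,
// and the rarely-true bit-length check is deferred to the very end.
// Assumes 16 <= bits <= 64; uses GCC/Clang extensions (__int128, clz).
#include <cstdint>
#include <initializer_list>
#include <numeric>
#include <random>

static uint64_t pow_mod(uint64_t b, uint64_t e, uint64_t m) {
   unsigned __int128 r = 1, x = b % m;
   for(; e; e >>= 1, x = x * x % m)
      if(e & 1)
         r = r * x % m;
   return (uint64_t)r;
}

// one Miller-Rabin round to base a (2 <= a <= n-2), n odd
static bool mr_round(uint64_t n, uint64_t a) {
   uint64_t d = n - 1;
   unsigned s = 0;
   while((d & 1) == 0) { d >>= 1; ++s; }
   uint64_t x = pow_mod(a, d, n);
   if(x == 1 || x == n - 1)
      return true;
   for(unsigned i = 1; i < s; ++i) {
      x = (uint64_t)((unsigned __int128)x * x % n);
      if(x == n - 1)
         return true;
   }
   return false;
}

uint64_t gen_prime_for_rsa(unsigned bits, uint64_t e, std::mt19937_64& rng) {
   const auto divisible_by_small_prime = [](uint64_t v) {
      for(uint64_t q : {3ULL, 5ULL, 7ULL, 11ULL, 13ULL})
         if(v % q == 0)
            return true;
      return false;
   };

   for(;;) {
      uint64_t p = rng() >> (64 - bits);           // random 'bits'-bit value
      p |= (uint64_t(1) << (bits - 1)) | 1;        // top bit set and odd

      // cheap checks first, stepping the candidate until they pass
      while(divisible_by_small_prime(p) || std::gcd(p - 1, e) != 1)
         p += 2;

      // expensive probabilistic test only for the survivors
      const size_t rounds = 8;                     // stand-in for the round
      bool prime = true;                           // count giving 2^-128
      for(size_t i = 0; prime && i < rounds; ++i)
         prime = mr_round(p, 2 + rng() % (p - 3));
      if(!prime)
         continue;

      // only now check whether stepping pushed p past the requested size;
      // this almost never happens, so testing it earlier is wasted work
      const unsigned bitlen = 64 - (unsigned)__builtin_clzll(p);
      if(bitlen > bits)
         continue;

      return p;
   }
}
```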
|
In the case of RSA encryption/verification the public exponent is...
public. So we don't need to carefully guard against side channels
that leak the exponent.
Improves RSA verification performance by 50% or more.
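A toy version of the distinction (not Botan's implementation): exponentiation whose branching depends only on the bits of a public exponent such as e = 65537.

```cpp
// Plain square-and-multiply; branching on the exponent bits is acceptable
// only because the exponent is public (RSA encryption/verification).
// Secret exponents still need the constant-time path.
#include <cstdint>

uint64_t pow_mod_public_exponent(uint64_t base, uint64_t e, uint64_t m) {
   unsigned __int128 result = 1;       // GCC/Clang extension, m > 1 assumed
   unsigned __int128 x = base % m;
   while(e > 0) {
      if(e & 1)                        // data-dependent branch on public e
         result = result * x % m;
      x = x * x % m;
      e >>= 1;
   }
   return (uint64_t)result;
}
```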
|
Additional paranoia never hurt.
|
Improves perf by about 15%
|
Unused and not exposed to higher levels. RSA and ElGamal both check
their inputs vs the system parameters (n, p) after decoding.
|
They allowed even e, another leftover from Rabin-Williams
|
This is a holdover from Rabin-Williams support and just confusing
in RSA-specific code.
|
Done by a perl script which converted all classes to final, followed
by selective reversion where it caused compilation failures.
|
ISO C++ reserves names with double underscores in them.
Closes #512
|
Defined in build.h, all equal to BOTAN_DLL, so this ties into the
existing system for exporting symbols.
|
* fixes for deprecated constructions in C++11 and later (explicit rule of 3/5, or implicit rule of 0, and other violations)
* `= default` specifier instead of `{}` in some places (probably all)
* removal of unreachable code (for example `return` after `throw`)
* removal of functions that are visible only within their compilation unit but never used
* `throw()` specifier replaced with `BOTAN_NOEXCEPT`
* removal of unneeded semicolons
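Generic before/after examples for the items above (not the actual diffs); BOTAN_NOEXCEPT is the library's portability macro for noexcept, defined here only to keep the sketch self-contained.

```cpp
#ifndef BOTAN_NOEXCEPT
  #define BOTAN_NOEXCEPT noexcept   // stand-in for Botan's macro
#endif

class Widget {
   public:
      Widget() = default;
      ~Widget() = default;                   // before: ~Widget() {}

      // rule of 3/5 made explicit: once one special member is declared,
      // default (or delete) the others rather than leaving them implicit
      Widget(const Widget&) = default;
      Widget& operator=(const Widget&) = default;

      // before: void swap(Widget& other) throw();   (deprecated spec)
      void swap(Widget& other) BOTAN_NOEXCEPT;

      int run(bool fail) {
         if(fail)
            throw 1;   // before: an unreachable `return` followed the throw
         return 1;
      }

   private:
      int m_value = 0;
};
```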
|
BER_Decoder::end_cons() already performs the verify_end() check, so
calling verify_end() explicitly is redundant.
Signed-off-by: Nuno Goncalves <[email protected]>
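A sketch of the pattern being cleaned up, assuming the Botan 2.x BER_Decoder chaining interface; the decoded field is a stand-in.

```cpp
#include <botan/ber_dec.h>
#include <botan/bigint.h>
#include <cstdint>
#include <vector>

void decode_wrapper(const std::vector<uint8_t>& ber, Botan::BigInt& value) {
   Botan::BER_Decoder dec(ber);
   dec.start_cons(Botan::SEQUENCE)
         .decode(value)
         // .verify_end()   // redundant: end_cons() makes the same check
      .end_cons();
}
```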
|
with prob=128 during sampling and we should check with the same prob
|
Renames a couple of functions for somewhat better name consistency,
e.g. make_u32bit becomes make_uint32. The old typedefs remain for now
since lots of application code probably uses them.
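A sketch of what a helper with this name typically does, packing four bytes big-endian into a uint32_t; treat the exact signature as approximate rather than as Botan's definition.

```cpp
#include <cstdint>

constexpr uint32_t make_uint32(uint8_t b0, uint8_t b1, uint8_t b2, uint8_t b3) {
   return (uint32_t(b0) << 24) | (uint32_t(b1) << 16) |
          (uint32_t(b2) <<  8) |  uint32_t(b3);
}

static_assert(make_uint32(0xDE, 0xAD, 0xBE, 0xEF) == 0xDEADBEEF, "big-endian packing");
```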
|
Changes all the Public_Key derived classes ctors to take a
std::vector instead of a secure_vector for the DER encoded
public key bits. There is no point in transporting a public
key in secure storage. (GH #768)
|
Adds new Private_Key::private_key_info() that returns
a PKCS#8 PrivateKeyInfo structure. Renames the current
Private_Key::pkcs8_private_key() to private_key_bits().
BER_encode() just invokes private_key_info().
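A hedged sketch of the relationship described here, wrapping the raw private_key_bits() in a PKCS#8 PrivateKeyInfo (version, AlgorithmIdentifier, OCTET STRING); the DER_Encoder calls follow the Botan 2.x interface, but this is not the library's actual implementation.

```cpp
#include <botan/alg_id.h>
#include <botan/der_enc.h>
#include <botan/secmem.h>
#include <cstdint>

Botan::secure_vector<uint8_t>
wrap_as_private_key_info(const Botan::AlgorithmIdentifier& alg_id,
                         const Botan::secure_vector<uint8_t>& key_bits) {
   Botan::secure_vector<uint8_t> out;
   Botan::DER_Encoder enc(out);
   enc.start_cons(Botan::SEQUENCE)
         .encode(size_t(0))                        // version
         .encode(alg_id)                           // privateKeyAlgorithm
         .encode(key_bits, Botan::OCTET_STRING)    // privateKey
      .end_cons();
   return out;
}
```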
|
Adds new Public_Key::subject_public_key() that returns
an X.509 SubjectPublicKey structure. Renames the current
Public_Key::x509_subject_public_key() to public_key_bits().
BER_encode() just invokes subject_public_key().
|
Add Public_Key::key_length, usable for policy checking (as in
TLS::Policy::check_peer_key_acceptable).
Remove Public_Key::max_input_bits, because it didn't actually make much
sense for most algorithms.
Remove message_parts and message_part_size from PK_Ops.
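A sketch of the intended policy-check use; the minimum sizes below are made up for illustration and are not Botan's defaults.

```cpp
#include <botan/pk_keys.h>
#include <cstddef>
#include <stdexcept>
#include <string>

void check_peer_key_acceptable(const Botan::Public_Key& key) {
   const std::string algo = key.algo_name();
   const size_t keylength = key.key_length();

   // illustrative thresholds only
   size_t minimum = 0;
   if(algo == "RSA" || algo == "DH")
      minimum = 2048;   // modulus bits
   else if(algo == "ECDH" || algo == "ECDSA")
      minimum = 255;    // field bits

   if(keylength < minimum)
      throw std::runtime_error("Peer " + algo + " key of " +
                               std::to_string(keylength) +
                               " bits rejected by policy");
}
```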
|
Also part of Algo_Registry and not needed after #668
|
Rarely expected, and often causes performance problems, especially for private keys.
Instead, applications should call check_key explicitly to validate keys when
necessary.
Note this removal doesn't apply to checks like the ECDH on-the-curve test, where a
check on the public key is required for the security of our own key.
Updates most APIs to remove the RNG arguments where they are no longer required. The
exception is the PKCS8 interface; pending further work there (see GH #685), it just
ignores the RNG argument for now.
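A usage sketch of the explicit check, assuming the Botan 2.x check_key(rng, strong) interface:

```cpp
#include <botan/auto_rng.h>
#include <botan/pk_keys.h>
#include <stdexcept>

void validate_if_needed(const Botan::Private_Key& key, bool thorough) {
   Botan::AutoSeeded_RNG rng;
   // 'strong' selects the more expensive consistency tests
   if(!key.check_key(rng, /*strong=*/thorough))
      throw std::invalid_argument("Key failed validation");
}
```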