| Commit message (Collapse) | Author | Age | Files | Lines |
code as well as the code for handling PKCS #10 requests.
Add a new option --disable-modules which allows disabling any
set of modules that would normally be autoloaded.
Rename the Botan feature test macros from BOTAN_EXT_BLAH to BOTAN_HAS_BLAH,
which will be much more sensible, especially once everything is done in this
fashion (e.g., BOTAN_HAS_BLOWFISH or BOTAN_HAS_RSA, etc)
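A minimal sketch of how downstream code might test a feature under the new naming. BOTAN_HAS_BLOWFISH comes from the commit above; the helper function is purely illustrative, and in a real build the macro would be defined (or not) by Botan's generated configuration header:

```cpp
// Illustrative only: BOTAN_HAS_BLOWFISH would normally be set by Botan's
// generated build configuration. Compiled standalone, nothing defines it,
// so this reports the feature as absent.
bool blowfish_available() {
#if defined(BOTAN_HAS_BLOWFISH)
    return true;
#else
    return false;
#endif
}
```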
and the TLS v1.0 PRF. These were removed from Botan in v1.4.5.
Initially I had felt that since these protocols were specific to SSL/TLS they
should be placed in Ajisai (an SSL/TLS library based on Botan). However, upon
further reflection I realized it is quite possible that other alternate
implementations of SSL/TLS based on Botan would be desirable, so to make
that (very slightly) easier I am adding the SSL/TLS functions back to Botan,
where other SSL/TLS libraries can use them directly.
want to inline the CMAC computation in EAX mode.
Also optimize CMAC::final_result slightly. Write only to the state directly,
instead of also to the write buffer (this should help L1 data caching), and
avoid what was basically a no-op where we zeroized part of a buffer and
then xored it against another buffer.
script. It includes all primes <= 11351.
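The generating script itself is not shown; as a rough model of what it computes, here is a sieve of Eratosthenes collecting every prime up to and including 11351 (the function name is ours):

```cpp
#include <cstdint>
#include <vector>

// Sketch of the table's contents: all primes <= 11351 via a
// straightforward sieve of Eratosthenes.
std::vector<uint32_t> primes_up_to(uint32_t limit) {
    std::vector<bool> composite(limit + 1, false);
    std::vector<uint32_t> primes;
    for (uint32_t n = 2; n <= limit; ++n) {
        if (composite[n])
            continue;
        primes.push_back(n);
        // Mark every multiple of n, starting at n*n, as composite
        for (uint64_t m = uint64_t(n) * n; m <= limit; m += n)
            composite[m] = true;
    }
    return primes;
}
```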
the mp_asm64 module. It is called only on systems like UltraSPARC which
have 64-bit registers/ALU but no native 64x64->128 bit multiplication
operation.
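As a C++ model of the operation described (names are ours, not the module's actual interface), a 64x64->128 bit multiply can be synthesized from four 32x32->64 products:

```cpp
#include <cstdint>
#include <utility>

// Portable sketch of a 64x64->128 bit multiply built from 32-bit halves,
// the kind of routine needed on a CPU with 64-bit registers but no
// native widening multiply. Returns {high word, low word}.
std::pair<uint64_t, uint64_t> mul64x64_128(uint64_t a, uint64_t b) {
    const uint64_t MASK = 0xFFFFFFFF;
    const uint64_t a_lo = a & MASK, a_hi = a >> 32;
    const uint64_t b_lo = b & MASK, b_hi = b >> 32;

    const uint64_t lo_lo = a_lo * b_lo;
    const uint64_t hi_lo = a_hi * b_lo;
    const uint64_t lo_hi = a_lo * b_hi;
    const uint64_t hi_hi = a_hi * b_hi;

    // Middle terms: the sum provably fits in 64 bits, so no carry is lost
    const uint64_t mid = (lo_lo >> 32) + (hi_lo & MASK) + lo_hi;

    const uint64_t high = (hi_lo >> 32) + (mid >> 32) + hi_hi;
    const uint64_t low  = (mid << 32) | (lo_lo & MASK);
    return std::make_pair(high, low);
}
```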
Blowfish Sboxes into one 1024-word array and index into them at
offsets. On my x86-64 machine there is no real difference between the
two, but on a register-constrained processor like x86 it may make a large
difference, since the x86 has a much easier time indexing off a single
address held in a register rather than 4 distinct ones.
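A hypothetical illustration of the fused layout (placeholder values, not the real Blowfish constants): the F function indexes one table at fixed offsets instead of four distinct arrays:

```cpp
#include <cstdint>

// Illustrative fused table: S1 = S+0, S2 = S+256, S3 = S+512, S4 = S+768.
// Real code would fill this with the Blowfish Sbox constants.
uint32_t S[1024];

// Blowfish's F function against the fused layout: one base address plus
// compile-time offsets, rather than four separate array addresses.
uint32_t blowfish_f(uint32_t x) {
    return ((S[       (x >> 24)         ] +
             S[256 + ((x >> 16) & 0xFF) ]) ^
             S[512 + ((x >>  8) & 0xFF) ]) +
             S[768 + ( x         & 0xFF) ];
}
```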
however now instead it takes a HashFunction pointer, which it deletes
in its destructor.
Why the change? For one, lookup.h, while seemingly a bunch of standalone
functions, actually calls into a large mass of global state (in short, it
is icky). I have a plan in mind for removing much of this while still
providing a high level interface (actually hopefully better than now),
here is just the start.
Calling clone() on a LubyRackoff object will now return a new object
with a clone() of the HashFunction. Previously we called get_hash on
the name, which goes through the whole global lookup bit. This is also
good since if you construct one with (say) an OpenSSL provided hash,
clones of it will now also use that implementation.
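The ownership and clone behavior described above can be sketched with simplified stand-in classes (these are not Botan's actual interfaces):

```cpp
#include <string>

// Simplified stand-ins for the interfaces discussed above.
class HashFunction {
public:
    virtual ~HashFunction() {}
    virtual std::string name() const = 0;
    virtual HashFunction* clone() const = 0;
};

class ExampleHash : public HashFunction {
public:
    std::string name() const override { return "ExampleHash"; }
    HashFunction* clone() const override { return new ExampleHash; }
};

class LubyRackoff {
public:
    // Takes ownership of the pointer; deletes it in the destructor.
    explicit LubyRackoff(HashFunction* h) : hash(h) {}
    ~LubyRackoff() { delete hash; }

    // Clones the held hash directly instead of re-looking it up by name,
    // so a provider-specific hash implementation survives cloning.
    LubyRackoff* clone() const { return new LubyRackoff(hash->clone()); }

    std::string hash_name() const { return hash->name(); }

private:
    LubyRackoff(const LubyRackoff&);            // non-copyable: owns a raw pointer
    LubyRackoff& operator=(const LubyRackoff&);
    HashFunction* hash;
};
```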
on x86, x86-64, and m68k and not other platforms. Something about the
memory model I'm hitting? Valgrind shows nothing. Rather than struggle with
it further, for minimal gain, I'm reverting. If someone ever does
figure it out, this will be easy to reapply.
|
| |
|
| |
|
|
|
|
| |
pointer used over and over again in MGF1::mask.
move in there. Make it a subclass of std::bad_alloc instead of
Botan::Exception (this may prove to be a design mistake).
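A minimal sketch of that design (the class name and message here are illustrative, since the fragment above does not show them): deriving from std::bad_alloc means generic alllocation-failure handlers catch this type as well.

```cpp
#include <new>

// Illustrative exception type in the style described above: it IS-A
// std::bad_alloc, so a catch(const std::bad_alloc&) handler catches it.
class Memory_Exhaustion : public std::bad_alloc {
public:
    const char* what() const noexcept override {
        return "Ran out of memory, allocation failed";
    }
};
```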
around lack of restricted pointers
shows a 35% speedup on my Core2 with G++ vs the previous version.
bigint_simple_mul and bigint_simple_sqr. Examining these
functions made it clear inlining would be beneficial, so these two
functions have been moved from an anonymous namespace into mp_mulop.cpp
(to allow assembly versions).
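A hedged C++ model of what a "simple" (schoolbook) multiply looks like; the word size and exact signature are our assumptions, chosen so the 64-bit intermediate cannot overflow:

```cpp
#include <cstddef>
#include <cstdint>

typedef uint32_t word;  // word size is an assumption for this sketch

// Schoolbook ("simple") multiply in the spirit of bigint_simple_mul:
// z[] must hold x_size + y_size words and start zeroed.
void simple_mul(word z[], const word x[], size_t x_size,
                const word y[], size_t y_size) {
    for (size_t i = 0; i != x_size; ++i) {
        uint64_t carry = 0;
        for (size_t j = 0; j != y_size; ++j) {
            // 32x32->64 product plus prior digit plus carry: fits in 64 bits
            const uint64_t t = uint64_t(x[i]) * y[j] + z[i + j] + carry;
            z[i + j] = word(t);
            carry = t >> 32;
        }
        z[i + y_size] = word(carry);
    }
}
```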
  word carry = bigint_add3_nc(workspace+N, z0, N, z1, N);
  carry += bigint_add2_nc(z + N2, N, workspace + N, N);
  bigint_add2_nc(z + N + N2, N2, &carry, 1);
It turns out quite a bit can be shared among these function calls.
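The two primitives in that sequence can be modeled as follows (signatures inferred from the snippet, 32-bit words are an assumption): both return the final carry word, as the snippet's use of the return value shows.

```cpp
#include <cstddef>
#include <cstdint>

typedef uint32_t word;  // word size is an assumption for this sketch

// z[] = x[] + y[], returning the final carry (assumes x_size >= y_size).
word bigint_add3_nc(word z[], const word x[], size_t x_size,
                    const word y[], size_t y_size) {
    uint64_t carry = 0;
    for (size_t i = 0; i != x_size; ++i) {
        carry += uint64_t(x[i]) + (i < y_size ? y[i] : 0);
        z[i] = word(carry);
        carry >>= 32;
    }
    return word(carry);
}

// x[] += y[], returning the final carry (assumes x_size >= y_size).
word bigint_add2_nc(word x[], size_t x_size, const word y[], size_t y_size) {
    uint64_t carry = 0;
    for (size_t i = 0; i != x_size; ++i) {
        carry += uint64_t(x[i]) + (i < y_size ? y[i] : 0);
        x[i] = word(carry);
        carry >>= 32;
    }
    return word(carry);
}
```

Note how the last quoted call, bigint_add2_nc(z + N + N2, N2, &carry, 1), reuses the in-place variant to propagate a single carry word across N2 words.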
prototyping and testing the x86-64 assembly version in C)
According to most profiles, bigint_monty_redc alone is responsible for
30%-50% of RSA, DSA, and DH benchmarks. So it seems worth tinkering with a bit.