| Commit message | Author | Age | Files | Lines |
|
IA-64 (and, hypothetically, any other 64 bit CPU Visual C++ might
target in the future).
|
yet tested.
|
and '9e16b5a133480199541647fe245b79b059c9d5ca'
|
Fix a bug that would cause a harmless but bogus macro to be generated
in build.h if you used --enable-sse2
Add --enable-movbe to turn on a macro marking movbe as available
|
which PRF they want to use. The old interface just calls this new
version with alg_id set to 0, which selects HMAC(SHA-1), previously
the only supported PRF.
Assign new codepoints for HMAC(SHA-256) and CMAC(Blowfish) to allow
their use with passhash9.
Have the generate+check tests run a test for each supported PRF.
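A minimal usage sketch of the widened interface described above; the parameter order, the work factor argument, and the header names are assumptions rather than details taken from this commit:

```cpp
#include <botan/passhash9.h>
#include <botan/auto_rng.h>
#include <iostream>

int main()
{
    Botan::AutoSeeded_RNG rng;

    // alg_id selects the PRF; 0 means HMAC(SHA-1), the previous default.
    // HMAC(SHA-256) and CMAC(Blowfish) would use the newly assigned codepoints.
    std::string hash = Botan::generate_passhash9("correct horse", rng,
                                                 10 /* work factor */,
                                                 0  /* alg_id */);

    std::cout << Botan::check_passhash9("correct horse", hash) << "\n";
}
```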
|
MAC. If it is, use it as the PRF. Otherwise assume it is a hash
function and use it with HMAC. Instead of instantiating the HMAC
directly, go through the algorithm factory.
Add a test using PBKDF2 with CMAC(Blowfish); Blowfish mainly because
it supports arbitrarily large keys, and also because its required
4 KiB of S-box tables would make cracking with hardware or GPUs
rather expensive. Have not confirmed
this vector against any other implementation because I don't know of
any other implementation of PBKDF2 that supports MACs other than HMAC.
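A rough sketch of what using a non-HMAC PRF could look like; `get_mac`, the PKCS5_PBKDF2 constructor taking ownership of the MAC, and the `derive_key` argument order are assumptions about the 1.9-era API, not code from this commit:

```cpp
#include <botan/botan.h>
#include <botan/pbkdf2.h>
#include <botan/lookup.h>

int main()
{
    Botan::LibraryInitializer init;

    // Assumed: get_mac() returns a newly allocated MessageAuthenticationCode
    // and PKCS5_PBKDF2 takes ownership of it.
    Botan::PKCS5_PBKDF2 pbkdf(Botan::get_mac("CMAC(Blowfish)"));

    const Botan::byte salt[8] = { 0 };

    // Assumed argument order: output length, passphrase, salt, salt length,
    // iteration count.
    Botan::OctetString key = pbkdf.derive_key(16, "passphrase",
                                              salt, sizeof(salt), 10000);
}
```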
|
so for compatibility with keys that were encrypted with an empty
passphrase we probably want to support it as well.
In PBKDF2, don't reject empty passphrases out of hand; simply call
set_key and if the underlying MAC cannot use the key, throw an
informative exception. This will also be more helpful in the case that
someone tries using another MAC (say, CMAC) with a block cipher that
only supports keys of specific sizes.
In HMAC, allow zero-length keys. This is not ideal in the sense that
it lets the user do something dumb, but a 1-byte key would be pretty
dumb as well and we already allowed that.
Add a test vector using an empty passphrase generated by OpenSSL
|
passed a ref and having to allocate a new stream object, a little bit
cleaner, I think.
|
rotations in the code. This reduces the number of cache lines
potentially accessed in the first round from 64 to 16 (assuming 64
byte cache lines). On average, about 10 cache lines will actually be
accessed, assuming a uniform distribution of the inputs, so there
definitely is still a timing channel here, just a somewhat smaller
one.
I experimented with using the 256-element table for all rounds, but it
reduced performance significantly and I'm not sure the benefit is
worth the cost.
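An illustrative sketch of the technique, not Botan's actual code: the other three classic T-table lookups are recovered from a single 256-entry, 1 KiB table by rotating its 32-bit entries, so the first round can touch at most 16 cache lines of 64 bytes instead of 64 lines spread over four tables:

```cpp
#include <stdint.h>

extern const uint32_t TE0[256]; // single 1 KiB table, assumed defined elsewhere

static inline uint32_t rotr32(uint32_t x, int n)
{
    return (x >> n) | (x << (32 - n));
}

// One output column of the first round. The rotation direction depends on
// how the table's bytes are laid out, so treat this as a sketch of the idea.
static inline uint32_t first_round_column(uint8_t a, uint8_t b, uint8_t c,
                                          uint8_t d, uint32_t round_key)
{
    return TE0[a] ^ rotr32(TE0[b], 8) ^ rotr32(TE0[c], 16) ^
           rotr32(TE0[d], 24) ^ round_key;
}
```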
|
algorithm had changed to AES-256. This was wrong; it actually changed
to AES-128. However, in retrospect AES-256 is probably a reasonable
move (in particular for the 4 extra rounds; the related-key attacks
possible against AES-256 are probably not viable since we generate the
key using PBKDF2), so update the 1.9.4 changelog to correctly indicate
the change made in that release, and also modify PKCS #8 to actually
use AES-256.
|
supports epi64x in 64-bit mode.
|
causes obnoxious problems under MinGW.
|
to pointers-to-functions (which, admittedly, is undefined in ISO C++,
but doing this is required to use dlopen). Using the dumb hammer of a
C-style cast works, though.
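A short sketch of the cast in question; the library path and symbol name are placeholders:

```cpp
#include <dlfcn.h>

// dlsym() returns a void*, and ISO C++ does not define converting that to a
// pointer-to-function; a plain C-style cast is the pragmatic escape hatch
// when static_cast/reinterpret_cast are rejected by a given front end.
typedef void (*entry_fn)();

void* load_and_run(const char* lib, const char* symbol)
{
    void* handle = dlopen(lib, RTLD_LAZY);
    if(!handle)
        return 0;

    entry_fn fn = (entry_fn)dlsym(handle, symbol); // C-style cast on purpose
    if(fn)
        fn();

    return handle; // caller may dlclose() it later
}
```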
|
namespace, but this causes backwards compat problems, since cryptobox
is already in 1.8, and also it's likely that other functions along
these lines will be useful at some point (eg using RSA encryption
instead of a passphrase for the key transfer).
|
reasons, Intel C++ rejects
`const __m128i foo = _mm_set_epi64x(...)`
though it will accept it if you use one of the _mm_set1 variants.
And Visual C++ doesn't know about _mm_set_epi64x() in 32-bit mode for
similarly dumb reasons - it works fine compiling for 64 bit but for
whatever reason they don't offer this function when compiling as 32
bit. Unfortunately there isn't a good way to specify that it's OK with
a particular compiler on one arch but not another, so just disable it
globally for the time being. The workaround for VC++ is probably to
use _mm_set_epi32 and break up the input values into 32 bit chunks.
ICC is a lost cause I fear.
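The suggested Visual C++ workaround would look roughly like this helper (the name is made up): split each 64-bit value into 32-bit halves and use `_mm_set_epi32`, whose most-significant argument comes first:

```cpp
#include <emmintrin.h>
#include <stdint.h>

// Equivalent of _mm_set_epi64x(hi, lo), built from 32-bit pieces for
// compilers that lack the 64-bit form when targeting 32-bit x86.
static inline __m128i set_epi64x_compat(uint64_t hi, uint64_t lo)
{
    return _mm_set_epi32((int32_t)(hi >> 32), (int32_t)(hi & 0xFFFFFFFF),
                         (int32_t)(lo >> 32), (int32_t)(lo & 0xFFFFFFFF));
}
```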
|
constant time and on a Nehalem is significantly faster than the table
based version. This implementation technique was invented by Mike
Hamburg and described in a paper in CHES 2009 "Accelerating AES with
Vector Permute Instructions". This code is basically a translation of
his public domain x86-64 assembly code into intrinsics.
Todo: Add support for AES-192 and AES-256; this just requires
implementing the key schedules.
Currently only tested on an i7 with GCC (32 and 64 bit code);
testing/optimization on 32-bit processors with SSSE3 like the Atom,
and with Visual C++ and other compilers, are also todos.
|
fine with latest SVN.
|
an .S file is, so allow it for x86-64. Tested/works with Clang SVN.
|
x86-64, then enable SSE2 anyway because we know any x86-64 processor
does have SSE2, and the OS has to support it because it's part of the
standard ABIs.
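Whether applied at configure time or at runtime, the reasoning boils down to a check along these lines (a sketch only; the function and the runtime probe are illustrative, not Botan's actual names):

```cpp
bool cpuid_reports_sse2(); // hypothetical runtime CPUID probe, defined elsewhere

bool sse2_available()
{
#if defined(__x86_64__) || defined(_M_X64)
    return true; // SSE2 is part of the baseline x86-64 ABI, so no probe needed
#else
    return cpuid_reports_sse2();
#endif
}
```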
|
errors can result due to not getting the C++ runtime library.
|
into global_state.{h,cpp}. Move all of the functions into a new
namespace Global_State_Management, though exposing global_state() into
the Botan namespace for compatibility.
Also add new functions global_state_exists and
set_global_state_unless_set which may be helpful in certain tricky
initialization scenarios (eg when an application using botan also uses
a library which may or may not itself use botan).
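A sketch of the tricky-initialization scenario mentioned above, using the new helper; the headers and exact signatures are assumptions, only the function names come from the text:

```cpp
#include <botan/init.h>
#include <botan/global_state.h>

// A library that uses Botan but may be embedded in an application that also
// initializes Botan: only set up the global state if nobody has done so yet.
void my_library_startup()
{
    if(!Botan::Global_State_Management::global_state_exists())
        Botan::LibraryInitializer::initialize();
}
```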
|
and 'a4d88442d5f6b8554234c7f7468856868919b614'
|
especially in a multithreaded environment, and doesn't seem like a
useful operation to support.
(In principle, we could support this by adding a clone() call to
Algorithm_Cache, which would in turn call clone on each of its held
prototype objects, plus adding a clone to Engine. Doesn't seem worth the
bother, though.)
|
16 KiB buffer. Also reorder the parameters to make somewhat more sense, with the
first arguments being totally mandatory and the later ones potentially optional.
Provide an inlined version matching the old interface that just forwards to the
new call, marking it as deprecated.
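The compatibility shim described above follows a common pattern, sketched here with made-up names rather than the actual function this commit touched:

```cpp
#include <cstddef>
#include <iosfwd>

// New interface: mandatory arguments first, buffer size defaulted to 16 KiB.
void copy_stream(std::istream& in, std::ostream& out,
                 std::size_t buf_size = 16 * 1024);

// Old parameter order kept as a deprecated inline forwarder to the new call.
#if defined(__GNUC__)
__attribute__((deprecated))
#endif
inline void copy_stream(std::size_t buf_size, std::istream& in, std::ostream& out)
{
    copy_stream(in, out, buf_size);
}
```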
|
reason to say `class Engine*` later on.
|
Linux, Solaris, and the BSDs.
Solaris and BSD are untested, but it seems like they should work.
Using libdl on Solaris is seemingly only required in Solaris 9 and
earlier, but 10 has a stub library so it should work there as well.
|
it relies on dyn_load which should be the sole source for this kind of
stuff, since dyn_engine itself does not touch the OS level APIs.
|
expected value is 20100728 (ie, today). This will allow checking for
and/or working around interface changes.
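For example, code on either side of the module boundary could compare against that value; the macro name below is hypothetical, only the expected value 20100728 comes from the text:

```cpp
// Hypothetical guard; the real macro name is whatever the module build defines.
#if defined(BOTAN_MODULE_INTERFACE_VERSION) && (BOTAN_MODULE_INTERFACE_VERSION != 20100728)
  #error "Module was built against an incompatible engine interface version"
#endif
```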
|
caches; this might be useful for applications which are, say,
particularly sensitive to memory usage.
|
The library initializer runs some self tests; this brings objects for
a few select types (AES, SHA-1, etc) into the caches. Later on, when
we add a dynamic engine, the engines aren't requeried because the
cache has hits. So, for instance, a dlopen'ed engine that provided
AES-128 would not actually be used unless you called on the algo
factory with a provider of "blah" - even using set_preferred_provider
would have no effect, because that's just a request.
Add a new function to Algorithm_Cache, clear_cache, which just deletes
everything that is currently loaded (this is 90% of the destructor).
Then call this on each cache in Algorithm_Factory when a new Engine is
loaded. In normal use, this should be very fast because on init the
engines are loaded one after another so clear_cache() won't do much
work at all, but it ensures that if you load an engine later on in
runtime it will always be found. It does have the downside that the
app will then requery each Engine for each new algo after this point,
but I think typically loading a dynamic engine will happen very early
on so this won't be too much of a hassle. (And even if it happens in
the middle of execution, everything still works; it just means some
overhead the first time you ask for algo X).
|
rather than before. Otherwise, we run into a problem with dynamically
loaded engines: the engine will be deleted (and thus, the external
library unloaded) before calling the destructors on any objects which
may have been cached, so we jump to a now invalid address instead of
the destructor code.
|
the system dynamic linker (if any). Currently it only supports dlopen,
and is only enabled on Linux. It will almost certainly work on BSDs
and Solaris as well, though, and should be easy to extend to support
Win32-style dynamic loading.
Also add a new engine, Dynamically_Loaded_Engine, which loads up a new
Engine object from a shared library/DLL.
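A usage sketch under assumptions: that the engine takes the shared object path in its constructor, that the header is `dyn_engine.h`, and that `Algorithm_Factory::add_engine` is how it gets registered; none of these details are spelled out above:

```cpp
#include <botan/botan.h>
#include <botan/dyn_engine.h> // header name assumed

int main()
{
    Botan::LibraryInitializer init;

    // Load an Engine implementation from a shared library and hand it to the
    // algorithm factory; the path is just an example.
    Botan::global_state().algorithm_factory().add_engine(
        new Botan::Dynamically_Loaded_Engine("./libmy_botan_engine.so"));
}
```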
|
(slightly) better.
|
and 'ada4c9893d70affd8934ab9664e390087feab3c9'
|
Rename CPUID::has_aes_intel to has_aes_ni.
|