* exceptions thrown in end_msg (for instance in CBC decryption when the
  padding is bad) more or less screwed up the pipe completely. Allowing
  reset here at least provides an escape hatch.
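
What recovery looks like with this change, as a minimal sketch against the
1.x Pipe interface; the surrounding setup and the choice of printing the
error are illustrative, not the library's prescribed usage:

```cpp
#include <botan/pipe.h>
#include <exception>
#include <iostream>
#include <vector>

// Sketch: recover a Pipe after end_msg() throws (e.g. on bad CBC padding)
void decrypt_into(Botan::Pipe& pipe, const std::vector<Botan::byte>& ciphertext)
   {
   try
      {
      pipe.start_msg();
      pipe.write(&ciphertext[0], ciphertext.size());
      pipe.end_msg(); // may throw here if the padding check fails
      }
   catch(std::exception& e)
      {
      std::cerr << "Decryption failed: " << e.what() << "\n";
      pipe.reset(); // the escape hatch: put the pipe back into a usable state
      }
   }
```
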
* -ivb_rdrnd_cpuid option to toggle the bit off and on. Fortunately on
  Intel processors the bit we were actually checking is also enabled by
  Ivy Bridge. However it is also used on AMD Bulldozer processors to
  signal half-precision floating point support, so we could false
  positive there.
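
For reference, RDRAND support is reported by CPUID leaf 1, ECX bit 30,
while the neighboring bit 29 is the half-precision (F16C) flag described
above. A sketch of the corrected check, using the GCC/Clang <cpuid.h>
helper:

```cpp
#include <cpuid.h> // GCC/Clang CPUID helper
#include <cstdio>

// RDRAND support is CPUID leaf 1, ECX bit 30. The neighboring bit 29 is
// F16C (half-precision floats), which AMD Bulldozer also sets.
bool cpu_has_rdrand()
   {
   unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
   if(!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
      return false; // CPUID leaf 1 not available
   return (ecx & (1u << 30)) != 0;
   }

int main()
   {
   std::printf("rdrand: %s\n", cpu_has_rdrand() ? "yes" : "no");
   return 0;
   }
```
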
* didn't work on older GCC/binutils. Instead hardcode the expression for
  rdrand %eax, which should work everywhere. Also, avoid including
  immintrin.h unless we're going to use it, to sidestep problems with
  older compilers that lack that header (this caused build failures
  under GCC 3.4.6).
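
The hardcoded expression in question: rdrand %eax encodes as the bytes
0F C7 F0, with the carry flag reporting whether a value was returned. A
sketch of that inline asm variant (the function name is illustrative):

```cpp
#include <cstdint>

// rdrand %eax hardcoded as raw bytes (0F C7 F0) so that assemblers
// predating the mnemonic still accept it; CF reports success
bool rdrand32(uint32_t& out)
   {
   unsigned char ok = 0;
   asm volatile(".byte 0x0F, 0xC7, 0xF0\n\t" // rdrand %eax
                "setc %1"
                : "=a"(out), "=qm"(ok)
                :
                : "cc");
   return ok == 1;
   }
```
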
* isn't working here anyway, but also broke DSA servers.
* caused huge performance issues with DSA/ECDSA signing.
* implement Camellia's F function. Roughly a 60-80% speedup on Nehalem.
* processors. Tested using SDE on Linux with GCC 4.6, Intel C++ 11.1,
  and Clang 3.0, all using the inline asm variant. I do not know if
  current Visual C++ has the intrinsics available or not, so it's only
  marked as available for those compilers at the moment.
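
A sketch of the resulting compiler dispatch: include immintrin.h and use
the _rdrand32_step intrinsic only where it is known to exist, otherwise
fall back to the hardcoded encoding sketched above. The feature macro here
is a stand-in, not Botan's actual configuration flag:

```cpp
#include <cstdint>

#if defined(USE_RDRAND_INTRINSIC) // illustrative macro, not Botan's real flag
  #include <immintrin.h>          // only pulled in when it will be used
#endif

bool rdrand32(uint32_t& out); // the inline asm fallback sketched above

bool read_rdrand(uint32_t& out)
   {
#if defined(USE_RDRAND_INTRINSIC)
   unsigned int r = 0;
   if(_rdrand32_step(&r) != 1) // intrinsic returns 1 on success
      return false;
   out = r;
   return true;
#else
   return rdrand32(out); // hardcoded 0F C7 F0 encoding
#endif
   }
```
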
* Camellia exposed by the OpenSSL module is parameterized by the key
  length, much as AES is, while the version in the main source uses a
  single name/type for all variants. For consistency, convert to using
  a key-length-parameterized name in our version as well. In the future
  this might allow for better loop unrolling, etc., but currently we
  don't make use of that.
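
Roughly what the key-length-parameterized naming amounts to, as a
hypothetical sketch; the base class here is a stand-in, not Botan's
actual block cipher interface:

```cpp
// Hypothetical sketch: one type per key size, mirroring AES_128/AES_192/
// AES_256, instead of a single "Camellia" type covering all variants
struct Block_Cipher_Sketch
   {
   virtual const char* name() const = 0;
   virtual ~Block_Cipher_Sketch() {}
   };

struct Camellia_128 : Block_Cipher_Sketch
   { const char* name() const { return "Camellia-128"; } };
struct Camellia_192 : Block_Cipher_Sketch
   { const char* name() const { return "Camellia-192"; } };
struct Camellia_256 : Block_Cipher_Sketch
   { const char* name() const { return "Camellia-256"; } };
```
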
* was broken, and after fixing that and trying to compile the module it
  became clear that the Qt mutex did not work at all with recent Qt
  versions. Taking this as a clear indicator that it is not being used,
  remove it.
* All reported by Patrick Pelletier.
* and '50fa70d871f837c3c3338fabf5fb45649669aabf'
* list of maintainer mode flags. It produces some very useful warnings,
  but also a lot of noisy junk that I really don't care about.
* and 'bc49da394c675517b140a404c19094020d6e9d40'
* rather than one past the end. Reported by Stuart Maclean on the
  mailing list.
* Much faster, especially when using 8192-bit groups as OpenSSL does by
  default. Use BOTAN_DLL symbol visibility macros.
* for this.
  Add a new function that identifies a named SRP group from the N/g
  params - this is important as we need to verify the SRP groups; the
  easiest way to do that is to force them to be a known/published value.
  Add the 1536, 3072, 4096, 6144, and 8192 bit groups from RFC 5054.
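
A sketch of such an identification function, checking the supplied N/g
against the published groups; the "modp/srp/<bits>" strings follow
DL_Group's naming convention, but the function shape itself is
illustrative:

```cpp
#include <botan/bigint.h>
#include <botan/dl_group.h>
#include <cstddef>
#include <stdexcept>
#include <string>

// Sketch: recognize a named SRP group by comparing (N, g) against the
// known/published parameter sets
std::string srp_group_name(const Botan::BigInt& N, const Botan::BigInt& g)
   {
   const char* names[] = {
      "modp/srp/1024", "modp/srp/1536", "modp/srp/2048", "modp/srp/3072",
      "modp/srp/4096", "modp/srp/6144", "modp/srp/8192"
   };

   for(size_t i = 0; i != sizeof(names) / sizeof(names[0]); ++i)
      {
      Botan::DL_Group group(names[i]);
      if(group.get_p() == N && group.get_g() == g)
         return names[i]; // matches a published group
      }

   throw std::invalid_argument("Unknown SRP group parameters");
   }
```

Comparing the actual parameter values, rather than trusting a name sent
on the wire, is what makes the verification meaningful.
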
* convert using bytes.decode, but that's not available in Python 2.5 and
  there doesn't seem to be a good way to test for it at runtime. Instead
  use a slight hack of calling subprocess with universal_newlines=True,
  which causes Py3k subprocess to assume the output is UTF-8 and decode
  accordingly (this should be fine in these cases since monotone will
  output a hex string and GCC will just output a version number). On
  Python 2 it's mostly ignored (especially as we call strip on the
  result anyway).
* on decoding by default, and add a comment showing how to enable it for
  encoding.
* 16*1024 to an argument that treated those values as KiB, it took the
  RNG ~3 seconds to create 16 MiB of data to randomize the input. Change
  to 16. Also cap the value that can be passed to --buf-size to 1024,
  for a 1 MiB buffer.
* how much we ask for on the basis of how many bits we're counting each
  byte as contributing. Change /dev/*random estimate to 7 bits per byte.
  Small cleanup in HMAC_RNG.
* list of directory names (without the open DIRs) plus the one currently
  active dir.
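
A sketch of that structure, assuming POSIX dirent; the class name and
layout are illustrative:

```cpp
#include <dirent.h>
#include <string>
#include <vector>

// Sketch: pending directories are held as names, not open handles, so the
// walk needs only one DIR* (one file descriptor) at any given moment.
// (d_type is a glibc extension; a portable version would use stat().)
class Directory_Walker
   {
   public:
      explicit Directory_Walker(const std::string& root) :
         m_cur(0) { m_pending.push_back(root); }

      ~Directory_Walker() { if(m_cur) ::closedir(m_cur); }

      // Next file path, or an empty string once the walk is finished
      std::string next_file()
         {
         while(m_cur || !m_pending.empty())
            {
            if(!m_cur)
               {
               m_cur_dir = m_pending.back();
               m_pending.pop_back();
               m_cur = ::opendir(m_cur_dir.c_str()); // the one active dir
               continue;
               }

            dirent* entry = ::readdir(m_cur);
            if(!entry)
               {
               ::closedir(m_cur);
               m_cur = 0;
               continue;
               }

            const std::string name = entry->d_name;
            if(name == "." || name == "..")
               continue;

            const std::string path = m_cur_dir + "/" + name;
            if(entry->d_type == DT_DIR)
               m_pending.push_back(path); // remember the name only
            else if(entry->d_type == DT_REG)
               return path;
            }
         return "";
         }

   private:
      std::vector<std::string> m_pending; // directory names awaiting a visit
      std::string m_cur_dir;              // the currently active directory
      DIR* m_cur;                         // sole open directory handle
   };
```
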
* of giving /dev/random, EGD, and CryptoAPI a full 8 bits per byte of
  entropy, estimate at 6 bits.
  In the proc walker, allow more files to be read, read more of any
  particular file, and count each bit for 1/10 as much as before.
  Reading more of the file seems especially valuable, as some files are
  quite random, whereas others are very static, and this should ensure
  we read more of the actually unpredictable inputs.
  Prefer /dev/random over /dev/urandom
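
The crediting scheme in miniature, as a hypothetical sketch; the names
are illustrative, not Botan's actual accumulator API:

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sketch of the crediting scheme: every byte read is mixed
// into the pool, but each source only counts toward the polling goal at
// a conservative rate
class Entropy_Accumulator_Sketch
   {
   public:
      explicit Entropy_Accumulator_Sketch(double goal_bits) :
         m_goal(goal_bits), m_collected(0) {}

      void add(const uint8_t input[], size_t length, double bits_per_byte)
         {
         mix_into_pool(input, length);          // always mix everything in
         m_collected += length * bits_per_byte; // credit conservatively
         }

      bool polling_goal_achieved() const { return m_collected >= m_goal; }

   private:
      void mix_into_pool(const uint8_t[], size_t) { /* e.g. an HMAC update */ }

      double m_goal, m_collected;
   };

// e.g. acc.add(buf, got, 6.0); // /dev/random: credited at 6 bits per byte
// e.g. acc.add(buf, got, 0.1); // a proc file: heavily discounted
```
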
* waiting for a full kilobyte. This is for the benefit of DSA/ECDSA,
  which want a call to add_entropy to update the state in some way,
  passing just a hash input which might be as small as 20 bytes.
* Cassidy, sent to the mailing list.
* by TLS (relies on the finished message check). Add a class for reading
  files created by GnuTLS's srptool.
* loop (size_t overflow), likely causing a segfault. Not exploitable as
  far as I can tell, beyond the obvious crashing.
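
A generic illustration of this bug class, not the actual affected code:

```cpp
#include <cstddef>

// If len is 0, the unsigned expression len - 1 wraps around to SIZE_MAX,
// so this loop effectively never terminates and reads far past buf
void process(const unsigned char buf[], size_t len)
   {
   for(size_t i = 0; i < len - 1; ++i)
      { /* ... buf[i] ... */ }
   }

// Fixed form: move the arithmetic so nothing is subtracted from zero
void process_fixed(const unsigned char buf[], size_t len)
   {
   for(size_t i = 0; i + 1 < len; ++i)
      { /* ... */ }
   }
```
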
* If the default value is a list we will append to it instead of
  overwriting it. (Previously, multiple define targets 'worked' with the
  last one winning as the values were progressively overwritten.)
  This might be useful for other things, compiler warning options maybe?
* in the Client_Hello parser. Works, tested with GnuTLS command line
  client.
* interface but it's a plausible start. Will probably have more insights
  after adding TLS hooks.
* an amalgamation and the app is compiled in Unicode mode.