Combine the fast and slow polls into a single poll() operation.
Instead of being given a buffer to write output into, the EntropySource is
passed an Entropy_Accumulator. This handles the RLE encoding that xor_into_buf
used to do. It also contains a cached I/O buffer, so entropy sources do not
each need to allocate memory for that on every poll. When data is added to
the accumulator, the source specifies an estimate of the number of bits of
entropy per byte, as a double; this estimate is tracked in the accumulator.
Once the estimated entropy hits a target (set by the constructor), the
accumulator's member function predicate polling_goal_achieved flips to true.
This signals to the PRNG that it can stop polling sources; polls that take a
long time also check this flag periodically and return immediately once it
is set.
The Win32 and BeOS entropy sources have been updated, but blindly; testing
is needed.
The test_es example program has been modified: it now polls twice and outputs
the XOR of the two collected results, which helps show whether the output is
consistent across polls (not a good thing). I have noticed that on the Unix
entropy source there are occasionally many 0x00 bytes in the output, which is
not optimal. This also needs to be investigated.
The RLE is not actually RLE anymore. It works well for non-random inputs
(ASCII text, etc), but I noticed that when /dev/random output was fed into
it, the output buffer would end up as RR01RR01RR01, where RR is a random byte
and 01 is the run count.
The buffer sizing also needs to be examined carefully. It might be useful to
choose a prime number for the size of the buffer that data is XORed into, to
help ensure an even distribution of entropy across the entire buffer space.
Or: feed it all into a hash function?
This change should (perhaps with further modifications) help with the
concerns Zack W raised about the RNG on the monotone-dev list.
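To illustrate the interface described above, here is a minimal, self-contained
sketch of an accumulator of this shape. It is not Botan's Entropy_Accumulator;
the class name, the add() and io_buffer() methods, and the 256-byte pool size
are assumptions made for the example, with only polling_goal_achieved taken
from the description above.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Minimal sketch of the accumulator idea; all names here are illustrative.
class EntropyAccumulatorSketch {
public:
    explicit EntropyAccumulatorSketch(double goal_bits)
        : goal_bits_(goal_bits), collected_bits_(0), pool_(256, 0), pool_pos_(0) {}

    // Cached scratch buffer so sources need not allocate one on every poll
    std::vector<uint8_t>& io_buffer(size_t size) {
        buffer_.resize(size);
        return buffer_;
    }

    // A source adds data together with its own estimate of entropy per byte
    void add(const uint8_t data[], size_t length, double entropy_bits_per_byte) {
        for (size_t i = 0; i != length; ++i) {
            pool_[pool_pos_] ^= data[i];                // XOR into a fixed-size pool
            pool_pos_ = (pool_pos_ + 1) % pool_.size();
        }
        collected_bits_ += entropy_bits_per_byte * static_cast<double>(length);
    }

    // Long-running polls check this periodically and return early once it is true
    bool polling_goal_achieved() const { return collected_bits_ >= goal_bits_; }

private:
    double goal_bits_, collected_bits_;
    std::vector<uint8_t> pool_, buffer_;
    size_t pool_pos_;
};
```

XORing into a fixed-size pool keeps the accumulator's memory use bounded no
matter how much data a source produces, which is also why the pool sizing
question raised above matters.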
using SHA-224, SHA-256, and RIPEMD-160
using hashes SHA-224, SHA-256, SHA-384, SHA-512, RIPEMD-128, RIPEMD-160,
and Whirlpool.
Crypto++ 5.5.2 on motoko (x86-64 Gentoo)
SHA-384, and SHA-512 generated using Crypto++ 5.5.2
has many engine variants, etc. Instead use CRC32, which tends to work and
not be surprising.
easy to measure
which is a reasonable ordering
I'm seeing one failure on Core2, which I have not diagnosed at all.
A number of tests are #if'ed out. Many were rubbed out in the original
InSiTo version; others I commented out due to changed or removed APIs.
benchmark
was not the right place to keep track of this information. Also modify
all Algorithm_Factory constructor functions to take, instead of a SCAN_Name,
a pair of std::strings: the SCAN name and an optional provider name. If a
provider is specified, either that provider will be used or the request will
fail. Otherwise, the library makes a best-effort choice, based either on
user-set algorithm implementation settings (combine with benchmark.h to
choose the fastest implementation at runtime) or, if none are set, on a
static ordering (preset in static_provider_weight in prov_weight.cpp, though
it would be nice to make this easier to toggle).
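To make the selection policy concrete, here is a small self-contained sketch
of the decision described above. It is illustrative only: the function name,
the weight map, and the empty-string failure convention are assumptions, not
the actual Algorithm_Factory or prov_weight.cpp code.

```cpp
#include <map>
#include <string>

// Sketch of the lookup policy: an explicitly requested provider is used or
// the request fails; otherwise the highest-weighted registered provider wins.
std::string choose_provider(const std::map<std::string, size_t>& weights,
                            const std::string& requested_provider) {
    if (!requested_provider.empty()) {
        // Explicit request: use that provider or fail (empty string = failure)
        return weights.count(requested_provider) ? requested_provider : "";
    }

    // Best effort: the weights stand in for either benchmark results or a
    // static ordering along the lines of static_provider_weight
    std::string best;
    size_t best_weight = 0;
    for (std::map<std::string, size_t>::const_iterator i = weights.begin();
         i != weights.end(); ++i) {
        if (i->second >= best_weight) {
            best_weight = i->second;
            best = i->first;
        }
    }
    return best;
}
```

For example, with weights of 1 for "core" and 2 for "openssl" and no requested
provider, this picks "openssl"; explicitly requesting a provider that is not
registered fails rather than falling back.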
compatibility.
Add some missing info.txts
Add a new class AutoSeeded_RNG, a RandomNumberGenerator that wraps up the
logic formerly in RandomNumberGenerator::make_rng; make_rng now in fact just
returns a new AutoSeeded_RNG object.
AutoSeeded_RNG is a bit more convenient (see the sketch below) because:
- There is no need to use auto_ptr
- There is no need to dereference (the same syntax works everywhere, which is
an underestimated advantage imo)
Also move the code from timer/timer_base to timer/
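The convenience argument is roughly the following. This is a self-contained
sketch, not the real Botan classes: StubRNG and the *Sketch names are
stand-ins, and std::unique_ptr is used here in place of the auto_ptr
mentioned above.

```cpp
#include <cstddef>
#include <cstdint>
#include <memory>

// Stand-in for the RandomNumberGenerator hierarchy; not Botan code.
class StubRNG {
public:
    virtual ~StubRNG() {}
    virtual void randomize(uint8_t out[], size_t len) {
        for (size_t i = 0; i != len; ++i)
            out[i] = 0;  // placeholder "randomness" for the sketch
    }
};

// Factory style: the caller receives an owned heap object, holds it in a
// smart pointer, and uses pointer syntax for every call.
std::unique_ptr<StubRNG> make_rng_sketch() {
    return std::unique_ptr<StubRNG>(new StubRNG);
}

// Wrapper style: a concrete class that builds the underlying RNG in its
// constructor and forwards calls, so it can simply live on the stack.
class AutoSeededRNGSketch : public StubRNG {
public:
    AutoSeededRNGSketch() : rng_(make_rng_sketch()) {}
    void randomize(uint8_t out[], size_t len) override { rng_->randomize(out, len); }
private:
    std::unique_ptr<StubRNG> rng_;
};

int main() {
    uint8_t buf[16];

    std::unique_ptr<StubRNG> rng1 = make_rng_sketch();  // factory: pointer syntax
    rng1->randomize(buf, sizeof(buf));

    AutoSeededRNGSketch rng2;                            // wrapper: value syntax
    rng2.randomize(buf, sizeof(buf));
    return 0;
}
```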
bad results, especially noticeable with fast algorithms and long test times.
the wrong one in some situation or another. Just print milliseconds no matter
what. It is also easier to read and compare if everything is in the same
unit, obviously.
several are failing with an uncaught exception.
The test failures may be due to the fact that ECDSA's support for EAC is not
included at the moment, and the CVC code that attempts to use it is #if'ed
out; that certainly can't help, in any case. The exception is a decoding
error, so this seems quite plausible.
brackets)
(tests by Falko Strenzke)
virtuals)
faster than OpenSSL's - I hope that is not a fluke in the benchmark program)
test.