| Commit message | Author | Age | Files | Lines |
|
|
|
|
| |
Pretty much useless and unused, except for listing the module names in
build.h; the short versions totally suffice for that.
|
|
|
|
|
|
| |
just too fragile and not that useful. Something like Java's checked exceptions
might be nice, but simply killing the process entirely if an unexpected
exception is thrown is not exactly useful for something trying to be robust.
|
|\
| |
| |
| |
| |
| | |
c5ae189464f6ef16e3ce73ea7c563412460d76a3)
to branch 'net.randombit.botan' (head e2b95b6ad31c7539cf9ac0ebddb1d80bf63b5b21)
|
| |
| |
| |
| |
| |
| |
| | |
- rounding.h (round_up, round_down)
- workfactor.h (dl_work_factor)
- timer.h (system_time)
And update all users of the previous util.h
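For reference, the rounding helpers have roughly these semantics
(illustrative only; the actual declarations in rounding.h may differ):

  #include <cstddef>

  // Illustrative semantics only, not the real rounding.h declarations
  inline std::size_t round_up(std::size_t n, std::size_t align_to)
     {
     // smallest multiple of align_to that is >= n
     if(n % align_to)
        n += align_to - (n % align_to);
     return n;
     }

  inline std::size_t round_down(std::size_t n, std::size_t align_to)
     {
     // largest multiple of align_to that is <= n
     return n - (n % align_to);
     }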
|
| |
| |
| |
| | |
is enabled in the build.
|
|/
|
|
| |
is being used and not Randpool.
|
|
|
|
|
|
| |
the info.txt files with the right module dependencies.
Apply it across the codebase.
|
|
|
|
|
|
|
|
|
|
|
| |
When a reseed is attempted, up to poll_bits attempts will be made, running
in order through the set of available sources. So for instance if poll_bits
is set to the default 256, then up to 256 polls will be performed (some of
which might not provide any entropy, of course) before stopping; if the
accumulator's goal is achieved before that point, the polling stops early.
This should greatly help to resolve the recent rash of PRNG unseeded problems
some people have been having.
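Roughly, the reseed loop looks like this (illustrative sketch; the
Entropy_Accumulator/EntropySource stand-ins here are not the real classes):

  #include <cstddef>
  #include <vector>

  // Minimal stand-ins for the interfaces mentioned in this log
  struct Entropy_Accumulator
     {
     explicit Entropy_Accumulator(std::size_t goal_bits) :
        goal(goal_bits), collected(0) {}
     bool polling_goal_achieved() const { return collected >= goal; }
     std::size_t goal, collected;
     };

  struct EntropySource
     {
     virtual void poll(Entropy_Accumulator& accum) = 0;
     virtual ~EntropySource() {}
     };

  // Up to poll_bits polls, running in order through the available
  // sources, stopping early once the accumulator's goal is achieved
  void reseed_sketch(std::vector<EntropySource*>& sources,
                     std::size_t poll_bits)
     {
     if(sources.empty())
        return;

     Entropy_Accumulator accum(poll_bits);

     for(std::size_t i = 0; i != poll_bits; ++i)
        {
        if(accum.polling_goal_achieved())
           break; // enough estimated entropy collected

        sources[i % sources.size()]->poll(accum); // may contribute nothing
        }

     // ...whatever was gathered is then mixed into the PRNG state
     }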
|
|
|
|
|
|
| |
/dev/urandom
/dev/random
/dev/srandom (OpenBSD-specific)
|
|
|
|
|
| |
rotate.h, or when it was not needed at all. Remove or change the includes
as needed.
|
|
|
|
|
| |
with the version in earlier releases. Rickard Bondesson pointed out on the
mailing list that this was a problem.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
up during the Fedora submission review, that each source file include some
text about the license. One handy Perl script later and each file now has
the line
Distributed under the terms of the Botan license
after the copyright notices.
While I was in there modifying every file anyway, I also stripped out the
remainder of the block comments (lots of asterisks before and after the
text); this is a stylistic thing I picked up when I was first learning C++,
but in retrospect it is not a good style, as the structure makes it harder
to modify comments (with the result that comments become fewer, shorter,
and less likely to be updated, which are not good things).
|
|
|
|
|
|
|
|
|
| |
Instead simply consider the PRNG seeded if a poll kicked off from reseed
met its goal, or if the user adds data.
Doing anything else prevents creating (for instance) a PRNG seeded with
64 bits of entropy, which is unsafe for some purposes (key generation)
but quite possibly safe enough for others (generating salts and such).
|
|
|
|
|
| |
techniques, with the one using BufferedComputation being the new
subclass with the charming name Entropy_Accumulator_BufferedComputation.
|
|
|
|
|
|
|
|
| |
a new member function rekey, calling it from both reseed and add_entropy.
Previously add_entropy worked without this because the PRNG would reseed
itself automatically if it was not seeded at the point when randomize()
was called, but once this behavior was removed it was necessary to ensure
a rekey kicked off, if appropriate, when add_entropy was called.
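Schematically (names other than reseed/add_entropy/rekey are made up for
illustration):

  #include <cstddef>
  typedef unsigned char byte;

  // Minimal illustrative class, not the real HMAC_RNG; it only shows
  // that both reseed() and add_entropy() now funnel through rekey()
  class RNG_Sketch
     {
     public:
        void reseed(std::size_t poll_bits)
           {
           // ... poll the registered entropy sources into the extractor ...
           (void)poll_bits;
           rekey(); // derive a new PRF key if the poll met its goal
           }

        void add_entropy(const byte input[], std::size_t length)
           {
           // ... feed the user-provided material into the extractor MAC ...
           (void)input; (void)length;
           rekey(); // previously this relied on the automatic reseed
                    // inside randomize()
           }

     private:
        void rekey()
           {
           // ... if enough material was collected, set the new PRF key
           //     and mark the RNG as seeded ...
           }
     };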
|
|
|
|
|
|
|
|
|
|
|
|
| |
Since both Randpool and HMAC_RNG fed the input into a MAC anyway, this
works nicely. (It would be nicer to use tr1::function but, argh, don't
want to fully depend on TR1 quite yet. C++0x cannot come soon enough).
This avoids the need for run-length encoding; everything is simply dumped
as-is into the MAC. This also ensures the buffer is not a potential narrow
pipe
for the entropy (for instance, one might imagine an entropy source which
outputs one random byte every 16 bytes, and the rest some repeating pattern -
using a 16 byte buffer, you would only get 8 bits of entropy total, no matter
how many times you sampled).
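The difference, sketched with a stand-in for the BufferedComputation-style
update() interface:

  #include <cstddef>
  typedef unsigned char byte;

  // Stand-in for the BufferedComputation interface (update() feeds
  // bytes into a MAC or hash); illustrative only
  struct BufferedComputation
     {
     virtual void update(const byte in[], std::size_t length) = 0;
     virtual ~BufferedComputation() {}
     };

  // Old approach (the narrow pipe): XOR polled data into a small fixed
  // buffer, so the buffer size caps how much entropy a poll can keep
  void accumulate_via_buffer(byte buf[16], const byte poll_data[],
                             std::size_t len)
     {
     for(std::size_t i = 0; i != len; ++i)
        buf[i % 16] ^= poll_data[i];
     }

  // New approach: hand every polled byte directly to the MAC, so no
  // intermediate buffer limits the contribution of a poll
  void accumulate_via_mac(BufferedComputation& mac, const byte poll_data[],
                          std::size_t len)
     {
     mac.update(poll_data, len);
     }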
|
|
|
|
| |
randomize, or PRNG_Unseeded will be thrown.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Combine the fast and slow polls, into a single poll() operation.
Instead of being given a buffer to write output into, the EntropySource is
passed an Entropy_Accumulator. This handles the RLE encoding that xor_into_buf
used to do. It also contains a cached I/O buffer so entropy sources do not
individually need to allocate memory for that with each poll. When data
is added to the accumulator, the source specifies an estimate of the number
of bits of entropy per byte, as a double. This is tracked in the accumulator.
Once the estimated entropy hits a target (set by the constructor), the
accumulator's member function predicate polling_goal_achieved flips to true.
This signals to the PRNG that it can stop polling sources; polls that take
a long time also check this flag periodically and return immediately.
The Win32 and BeOS entropy sources have been updated, but blindly; testing
is needed.
The test_es example program has been modified: now it polls twice and outputs
the XOR of the two collected results. That helps show if the output is consistent
across polls (which would not be a good thing). I have noticed that with
the Unix entropy source, occasionally there are many 0x00 bytes in the
output, which is not optimal.
This also needs to be investigated.
The RLE is not actually RLE anymore. It works well for non-random inputs
(ASCII text, etc), but I noticed that when /dev/random output was fed into
it, the output buffer would end up being RR01RR01RR01, where RR is a random
byte and 01 is the byte count.
The buffer sizing also needs to be examined carefully. It might be useful
to choose a prime number for the size to XOR stuff into, to help ensure an
even distribution of entropy across the entire buffer space. Or: feed it
all into a hash function?
This change should (perhaps with further modifications) help WRT the
concerns Zack W raised about the RNG on the monotone-dev list.
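A compressed illustration of the accumulator interface (member names follow
the description above, everything else is approximate):

  #include <cstddef>
  #include <vector>
  typedef unsigned char byte;

  // Sources add data plus an estimate of bits of entropy per byte; once
  // the running estimate reaches the goal set at construction time,
  // polling_goal_achieved() flips to true and pollers can stop early
  class Entropy_Accumulator_Sketch
     {
     public:
        explicit Entropy_Accumulator_Sketch(std::size_t goal_bits) :
           entropy_goal(goal_bits), collected(0), io_buffer(4096) {}

        // Cached I/O buffer so individual sources need not allocate one
        std::vector<byte>& get_io_buffer() { return io_buffer; }

        bool polling_goal_achieved() const
           { return collected >= entropy_goal; }

        void add(const byte data[], std::size_t length,
                 double est_bits_per_byte)
           {
           (void)data; // the real class also mixes the data into its state
           collected += length * est_bits_per_byte;
           }

     private:
        double entropy_goal;
        double collected;
        std::vector<byte> io_buffer;
     };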
|
|
|
|
|
| |
entropy source will realistically be able to provide even 768 bits of entropy,
so this is probably still overkill.
|
|
|
|
| |
randomness data after the contents have been fed into the MAC.
|
|
|
|
|
|
| |
As with HMAC_RNG, instead assume one bit of conditional entropy per byte
of polled material. Since they are no longer used, drop the entropy
estimation routines entirely.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Boaz Barak, Shai Halevi: A model and architecture for pseudo-random
generation with applications to /dev/random. ACM Conference on Computer and
Communications Security 2005.
which I was referred to by Hugo Krawczyk.
Changes include:
Remove the entropy estimation. This is a major point of Barak and
Halevi's paper: the entropy we want to estimate is the conditional
entropy of the collected data from the point of view of an
unknown attacker. Obviously this cannot be computed! Instead
HMAC_RNG simply counts each byte of sampled data as one bit of
estimated entropy.
Increase the reseed threshold from 2^14 to 2^20 outputs, and
change the fast poll during generation from once every 1024
outputs to once every 65536 outputs (though the fast poll might
not trigger that often, if output lengths are very large -
however this doesn't really matter much, and with the X9.31
wrapper it does kick off exactly every 2^16 outputs). The paper
also has some good arguments why it is better to reseed rarely,
making sure you have collected a large amount of (hopefully)
unguessable state.
Remove a second HMAC PRF operation which was only being done to
destroy the previous K value. Considering K has a short lifetime anyway,
that seems excessive (and it really hurt performance).
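The output/reseed accounting is roughly (illustrative only, constants per
the description above):

  #include <cstddef>
  typedef unsigned char byte;

  // Illustrative scheduling/accounting only; not the real HMAC_RNG
  class HMAC_RNG_Sketch
     {
     public:
        HMAC_RNG_Sketch() : since_reseed(0), since_fast_poll(0) {}

        void randomize(byte out[], std::size_t length)
           {
           since_reseed += length;
           since_fast_poll += length;

           if(since_reseed >= (1U << 20))         // reseed rarely but fully
              { reseed(); since_reseed = since_fast_poll = 0; }
           else if(since_fast_poll >= (1U << 16)) // cheap fast poll
              { fast_poll(); since_fast_poll = 0; }

           (void)out; // ... produce 'length' bytes of PRF output here ...
           }

     private:
        void reseed()
           {
           // Each polled byte is credited with exactly one bit of
           // (conditional) entropy; no estimator function is consulted
           }
        void fast_poll() {}

        std::size_t since_reseed, since_fast_poll;
     };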
|
|
|
|
|
|
| |
It turned out many files were including base.h merely to get other
includes (like types.h, secmem.h, and exceptn.h). Those have been changed
to directly include the files containing the declarations that code needs.
|
| |
|
|
|
|
|
| |
Generate new XTS (extractor salt) values using PRF outputs rather than the
clock.
|
|
|
|
|
|
|
|
|
|
|
|
| |
to randomize(), at the start of the function. After that it will
generate as many outputs as needed. The counter cannot overflow,
as only up to 2**32 bytes can be requested per call to
RandomNumberGenerator::randomize, whereas HMAC_RNG can generate 32
bytes (256 bits) per counter value and uses a 32-bit counter.
The PRF is 'stepped' once after the call to
RandomNumberGenerator::randomize is completed. This reduces the
window of exposure for data that was already output by the RNG.
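Sketch of the idea (counter placement and block size per the description
above, otherwise illustrative):

  #include <cstddef>
  #include <cstring>
  typedef unsigned char byte;

  // The counter is reset at the start of randomize(); each counter value
  // covers one 32-byte PRF block, so a 32-bit counter cannot overflow
  // within the at-most-2^32 bytes of a single call
  void randomize_sketch(byte out[], std::size_t length)
     {
     unsigned int counter = 0; // 32-bit counter, reset each call

     while(length)
        {
        // block = PRF(K, ... || counter); 32 bytes per counter value
        byte block[32] = { 0 }; // placeholder for the PRF output
        std::size_t take = (length < sizeof(block)) ? length : sizeof(block);
        std::memcpy(out, block, take);
        out += take;
        length -= take;
        ++counter;
        }

     // Afterwards the PRF is 'stepped' once (roughly K = PRF(K, label)),
     // shrinking the window in which already-produced output could be
     // recovered from the internal state
     }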
|
| |
|
|
|
|
|
| |
implementation), remove freestanding estimate_entropy function, change
Randpool to use entropy estimator.
|
| |
|
| |
|
|
|
|
|
|
|
| |
the constructor. This avoids repeatedly resetting it on each reseed if
HMAC_RNG is used without entropy sources, relying only on
application-provided entropy. This is very slightly more efficient, and
the code for reseed becomes a bit clearer.
|
|
|
|
|
|
|
|
| |
available in the build. If neither is available, the constructor will
throw an exception.
As before, the underlying RNG will be wrapped in an X9.31 PRNG using
AES-256 as the block cipher (if X9.31 is enabled in the build).
|
|
|
|
|
|
|
|
| |
"On Extract-then-Expand Key Derivation Functions and an HMAC-based KDF".
While it has much smaller state than Randpool (256-512 bits, typically,
versus 4096 bits commonly used in Randpool), the more formal design
analysis seems attractive (and realistically if the RNG can manage to
contain 256 bits of conditional entropy, that is more than sufficient).
|
|
|
|
| |
the underlying PRNG's reseed was a success.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
is requested, Randpool will first do a fast poll on each entropy
source that has been registered. It will count these poll results
towards the collected entropy count, with a maximum of 96
contributed bits of entropy per poll (only /dev/random reaches
this; others typically measure at 50-60 bits), and a maximum of 256 bits
for the combined contribution of all the fast polls.
Then it will attempt slow polls of all devices until it thinks enough
entropy has been collected (using the rather naive entropy_estimate
function). It will count any slow poll for no more than 256 bits (100 or
so is typical for every poll but /dev/random), and will attempt to collect
at least 512 bits of (estimated/guessed) entropy.
This tends to cause Randpool to use significantly more
sources. Previously it was common, especially on systems with a
/dev/random, for only one or a few sources to be used. This
change helps assure that even if /dev/random and company are
broken or compromised the RNG output remains secure (assuming at
least some amount of entropy unguessable by the attacker can be
collected via other sources).
Also change AutoSeeded_RNG to do an automatic poll/seed when it is
created.
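The bookkeeping for a reseed pass, roughly (stubs, not the actual Randpool
code):

  #include <cstddef>
  #include <vector>

  // Stand-in source type; the polls return an estimated bit count
  struct Source
     {
     std::size_t fast_poll() { return 0; } // illustrative stub
     std::size_t slow_poll() { return 0; } // illustrative stub
     };

  std::size_t cap(std::size_t bits, std::size_t limit)
     { return (bits > limit) ? limit : bits; }

  // Fast polls first (<= 96 bits credited each, <= 256 bits combined),
  // then slow polls (<= 256 bits each) until 512 estimated bits are hit
  void randpool_reseed_sketch(std::vector<Source>& sources)
     {
     std::size_t estimate = 0;

     std::size_t fast_total = 0;
     for(std::size_t i = 0; i != sources.size(); ++i)
        fast_total += cap(sources[i].fast_poll(), 96);
     estimate += cap(fast_total, 256);

     for(std::size_t i = 0; i != sources.size() && estimate < 512; ++i)
        estimate += cap(sources[i].slow_poll(), 256);

     // ...the polled material itself is mixed into the pool as it is
     //    collected; only the bookkeeping is shown here
     }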
|
|
|
|
|
|
| |
implementations
to decouple from knowing about RandomNumberGenerator).
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Add a new class AutoSeeded_RNG that is a RandomNumberGenerator that wraps
up the logic formerly in RandomNumberGenerator::make_rng. make_rng in
fact now just returns a new AutoSeeded_RNG object.
AutoSeeded_RNG is a bit more convenient because
- No need to use auto_ptr
- No need to dereference (same syntax everywhere - it's an underestimated
advantage imo)
Also move the code from timer/timer_base to timer/
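For comparison (header names assumed for this era of the tree and may
differ):

  #include <memory>
  #include <botan/rng.h>      // assumed header names
  #include <botan/auto_rng.h>

  using namespace Botan;

  void example()
     {
     byte buf[16] = { 0 };

     // Before: make_rng() handed back a heap-allocated RNG
     std::auto_ptr<RandomNumberGenerator> rng(RandomNumberGenerator::make_rng());
     rng->randomize(buf, sizeof(buf));

     // After: a plain object with the same interface - no auto_ptr,
     // no dereferencing, the same syntax as any other local object
     AutoSeeded_RNG rng2;
     rng2.randomize(buf, sizeof(buf));
     }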
|
| |
|
| |
|
|
|
|
|
| |
them modules now. In any case there is no distinction so info.txt seems
better.
|
|
|