up during the Fedora submission review, that each source file include some
text about the license. One handy Perl script later, and each file now has
the line
    Distributed under the terms of the Botan license
after the copyright notices.
While I was in there modifying every file anyway, I also stripped out the
remainder of the block comments (lots of asterisks before and after the
text); this is a stylistic thing I picked up when I was first learning C++,
but in retrospect it is not a good style, as the structure makes comments
harder to modify (with the result that comments become fewer, shorter, and
less likely to be updated, none of which are good things).
and 'fc89152d6d99043fb9ed1e9f2569fde3fee419e5'
Make the fast poll significantly more pessimistic/realistic about how
many bits of randomness we're getting from getrusage and stat.
Don't stop execing programs just because the desired poll bit count is
under 128. Simply poll until either the accumulator says we're done or we
run out of sources. The assumption is that the poll won't be run at all
unless it is necessary (es_unix comes late in the list of sources to use,
since it is pretty slow).
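A minimal sketch of that loop, assuming the Entropy_Accumulator interface
from the poll() rework described below (run_program and sources are
illustrative names, not the actual Botan code):

    // Illustrative only: exec each configured program until the
    // accumulator reports the goal is met or the source list runs out.
    void Unix_EntropySource::poll(Entropy_Accumulator& accum)
       {
       for(size_t i = 0; i != sources.size(); ++i)
          {
          if(accum.polling_goal_achieved())
             break; // enough entropy collected: stop early

          run_program(sources[i], accum); // exec it, feed output to accum
          }
       }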
Also, change the wait time to bits/16 milliseconds. For instance, if 64
bits of entropy are requested, the reader will wait at most 4 ms in the
select loop.
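As a sketch, the timeout computation and select() usage might look like
this (timed_read and its parameters are illustrative, not Botan's actual
function):

    #include <sys/select.h>
    #include <unistd.h>

    // Sketch, not the actual Botan code: read from an already-open fd,
    // waiting at most bits/16 milliseconds for data to become available.
    ssize_t timed_read(int fd, unsigned char* buf, size_t buf_len, size_t bits)
       {
       const size_t wait_ms = bits / 16; // e.g. 64 bits requested -> 4 ms

       struct timeval timeout;
       timeout.tv_sec  = wait_ms / 1000;
       timeout.tv_usec = (wait_ms % 1000) * 1000;

       fd_set read_set;
       FD_ZERO(&read_set);
       FD_SET(fd, &read_set);

       if(::select(fd + 1, &read_set, 0, 0, &timeout) <= 0)
          return 0; // timed out or failed: report no bytes rather than block

       return ::read(fd, buf, buf_len);
       }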
inputs might end up not contributing anything to the count even when they
should. This was particularly noticeable with the proc walker - it uses an
estimate of .01 bits/byte, so if the file was < 100 bytes it would not
count for anything at all.
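The underlying arithmetic problem: with an integer count, length * 0.01
truncates to zero for any file under 100 bytes. A minimal sketch of
tracking the credit as a double instead (class and member names are
hypothetical; the typedefs mirror Botan's):

    typedef unsigned char byte;   // Botan's typedefs, repeated for the sketch
    typedef unsigned int  u32bit;

    // Hypothetical sketch: credit entropy as a double so many small inputs
    // still add up. An integer counter computes u32bit(length * 0.01) == 0
    // for every file under 100 bytes, losing the contribution entirely.
    class Accumulator_Sketch
       {
       public:
          Accumulator_Sketch() : collected_bits(0) {}

          void add(const byte bytes[], u32bit length, double bits_per_byte)
             {
             mix_into_pool(bytes, length);              // however the pool mixes
             collected_bits += length * bits_per_byte;  // fractional credit kept
             }

       private:
          void mix_into_pool(const byte[], u32bit) { /* ... */ }
          double collected_bits;
       };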
techniques, with the one using BufferedComputation being the new
subclass with the charming name Entropy_Accumulator_BufferedComputation.
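Presumably the subclass just forwards every chunk into the
BufferedComputation; a sketch under that assumption (the base-class
interface is the one from the poll() rework described below):

    // Sketch: an accumulator variant that feeds all collected bytes into a
    // BufferedComputation (a MAC or hash) instead of XORing into a buffer.
    class Entropy_Accumulator_BufferedComputation : public Entropy_Accumulator
       {
       public:
          Entropy_Accumulator_BufferedComputation(BufferedComputation& sink,
                                                  u32bit goal) :
             Entropy_Accumulator(goal), entropy_sink(sink) {}

       private:
          void add_bytes(const byte bytes[], u32bit length)
             {
             entropy_sink.update(bytes, length); // just feed the MAC/hash
             }

          BufferedComputation& entropy_sink;
       };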
entropy, the proc walker will read about 256K bytes. This seems plenty
sufficient to me.
achieved.
the buffer.
Since both Randpool and HMAC_RNG fed the input into a MAC anyway, this
works nicely. (It would be nicer to use tr1::function but, argh, don't
want to fully depend on TR1 quite yet. C++0x cannot come soon enough.)
This avoids the need to do run length encoding; it just dumps everything
as-is into the MAC. This ensures the buffer is not a potential narrow pipe
for the entropy (for instance, one might imagine an entropy source which
outputs one random byte every 16 bytes, and the rest some repeating pattern -
using a 16 byte buffer, you would only get 8 bits of entropy total, no matter
how many times you sampled).
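A sketch of how a reseed might wire that up (the method and member names
here are hypothetical, built on the accumulator subclass sketched above):

    // Hypothetical reseed: every byte an entropy source produces goes
    // straight into the MAC, so no intermediate buffer can act as a
    // narrow pipe for the entropy.
    void HMAC_RNG::reseed(u32bit poll_bits)
       {
       Entropy_Accumulator_BufferedComputation accum(*extractor, poll_bits);

       for(u32bit i = 0; i != entropy_sources.size(); ++i)
          {
          entropy_sources[i]->poll(accum);
          if(accum.polling_goal_achieved())
             break;
          }

       // ... then derive the new internal state from extractor->final() ...
       }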
Combine the fast and slow polls into a single poll() operation.
Instead of being given a buffer to write output into, the EntropySource is
passed an Entropy_Accumulator. This handles the RLE encoding that
xor_into_buf used to do. It also contains a cached I/O buffer, so entropy
sources do not individually need to allocate memory for that with each
poll. When data is added to the accumulator, the source specifies an
estimate of the number of bits of entropy per byte, as a double. This is
tracked in the accumulator. Once the estimated entropy hits a target (set
by the constructor), the accumulator's member function predicate
polling_goal_achieved flips to true. This signals to the PRNG that it can
stop polling sources; polls that take a long time also periodically check
this flag and return immediately once it is set.
The Win32 and BeOS entropy sources have been updated, but blindly; testing
is needed.
The test_es example program has been modified: now it polls twice and
outputs the XOR of the two collected results. That helps show if the
output is consistent across polls (which would not be a good thing). I
have noticed that with the Unix entropy source there are occasionally many
0x00 bytes in the output, which is not optimal. This also needs to be
investigated.
The RLE is not actually RLE anymore. It works well for non-random inputs
(ASCII text, etc), but I noticed that when /dev/random output was fed into
it, the output buffer would end up being RR01RR01RR01, where RR is a
random byte and 01 is the byte count.
The buffer sizing also needs to be examined carefully. It might be useful
to choose a prime number for the size to XOR stuff into, to help ensure an
even distribution of entropy across the entire buffer space. Or: feed it
all into a hash function?
This change should (perhaps with further modifications) help WRT the
concerns Zack W raised about the RNG on the monotone-dev list.
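Piecing that description together, the accumulator's shape is plausibly
something like the following (a reconstruction from the message above, not
the verbatim header; byte, u32bit, SecureVector, and MemoryRegion are
Botan's own types):

    // Reconstruction of the interface described above: sources call add()
    // with data plus a bits-per-byte estimate; the running estimate drives
    // the polling_goal_achieved() predicate that lets polls stop early.
    class Entropy_Accumulator
       {
       public:
          explicit Entropy_Accumulator(u32bit goal_bits) :
             entropy_goal(goal_bits), collected_bits(0) {}

          virtual ~Entropy_Accumulator() {}

          // Polls (and the PRNG driving them) check this to stop early
          bool polling_goal_achieved() const
             { return collected_bits >= entropy_goal; }

          // Cached I/O buffer, so sources don't allocate one per poll
          MemoryRegion<byte>& get_io_buffer(u32bit size)
             { io_buffer.create(size); return io_buffer; }

          void add(const void* bytes, u32bit length, double bits_per_byte)
             {
             add_bytes(static_cast<const byte*>(bytes), length);
             collected_bits += length * bits_per_byte;
             }

       private:
          // How the bytes are actually mixed in is up to the subclass
          virtual void add_bytes(const byte bytes[], u32bit length) = 0;

          SecureVector<byte> io_buffer;
          u32bit entropy_goal;
          double collected_bits;
       };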
we call stat. Apparently on 32-bit Linux (or at least on Ubuntu
8.04/x86), struct stat has some padding bytes which are not
written to by the syscall; valgrind doesn't realize that this
is OK, and warns about uninitialized memory access when we read
the contents of the struct. Since this data is then fed into the
PRNG, the PRNG state and output become tainted, which makes
valgrind's output rather useless.
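The fix is presumably just to clear the struct before the call; a sketch
(the function wrapper and the 0.1 estimate are illustrative):

    #include <sys/stat.h>
    #include <cstring>

    // Sketch: zeroing the struct first means the padding bytes the kernel
    // never writes are defined memory as far as valgrind is concerned.
    void stat_and_feed(const char* path, Entropy_Accumulator& accum)
       {
       struct stat file_stat;
       std::memset(&file_stat, 0, sizeof(file_stat));

       if(::stat(path, &file_stat) == 0)
          accum.add(&file_stat, sizeof(file_stat), 0.1); // estimate illustrative
       }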
on Solaris 10 with GCC 3.4.3.
First, remove the definition of _XOPEN_SOURCE_EXTENDED=1 in mmap_mem.cpp
and unix_cmd.cpp, because apparently on Solaris defining this macro breaks
C++ compilation entirely with GCC:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6395191
In es_egd.cpp and es_dev.cpp, include <fcntl.h> to get the declaration of
open(), which is apparently where open(2) lives on Solaris - this matches
the include that the *BSD man pages for open(2) show, though AFAIK the
BSDs all compiled fine without it (probably due to greater efforts by the
*BSD developers to stay source-compatible with Linux systems).
I have not been able to test these changes personally on Solaris, but
Rickard reports that with these changes everything compiles OK.
Update lib version to 1.8.0-pre. ZOMG. Finally.
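For reference, the portable include set for open(2) (the open() call shown
is only an example use):

    #include <sys/types.h>
    #include <sys/stat.h>
    #include <fcntl.h>   // POSIX home of open(); required on Solaris

    // Example use; O_NONBLOCK so reading the device never stalls the poll
    int fd = ::open("/dev/urandom", O_RDONLY | O_NONBLOCK);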
just continue on instead of returning the length of the buffer recv'ed
from EGD.
than the value we gave it. This is pretty unlikely... also caused an
annoying warning with some versions of GCC b/c it couldn't figure out
the signed/unsigned comparison was safe in this case.
tries to get an amount corresponding to the size of the output buffer,
specifically 128 times the output size. So, assuming we have enough working
sources, each output byte will be the XOR of (at least) 128 bytes of text
from the programs' output. (Though RLE may reduce that somewhat.)
pollers that grab basic statistical data to 32 bytes.
and use xor_into_buf. Completely untested, though it looks clean aside
from the missing BeOS headers and functions when I try to compile on Linux.
a Buffered_EntropySource. Data used in the poll is directly accumulated
into the output buffer using XOR, wrapping around as needed. The
implementation uses xor_into_buf from xor_buf.h.
This is simpler and more convincingly secure than the method used by
Buffered_EntropySource. In particular, with Buffered_EntropySource the
collected data persists in the buffer much longer than needed. It is also
much harder for entropy sources to signal errors or a failure to collect
data using Buffered_EntropySource. And, with the simple xor_into_buf
function, it is actually quite easy to remove without major changes.
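A sketch of what xor_into_buf plausibly looks like (a reconstruction
consistent with the description, not the verbatim xor_buf.h code):

    // XOR in_len bytes into buf, advancing a persistent write offset and
    // wrapping at buf_len, so input longer than the buffer folds onto it.
    void xor_into_buf(byte buf[], u32bit buf_len, u32bit& buf_i,
                      const byte in[], u32bit in_len)
       {
       for(u32bit i = 0; i != in_len; ++i)
          {
          buf[buf_i] ^= in[i];
          buf_i = (buf_i + 1) % buf_len;
          }
       }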
was too slow; it was noticeably slowing down AutoSeeded_RNG. Reduce the
amount of output gathered to 32 times the size of the output buffer,
and instead of using Buffered_EntropySource, just XOR the read file
data directly into the output buffer. Read up to 4096 bytes per file, but
only count the first 128 towards the total goal (/proc/config.gz being
a major culprit - large, random-looking, and entirely or almost static).
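A sketch of the per-file step inside the walker's read loop (the variable
names are illustrative; the 4096/128 constants are the ones stated above):

    // Everything read is mixed in, but at most the first 128 bytes per
    // file are credited toward the goal, so a large pseudo-random but
    // static file (e.g. /proc/config.gz) cannot satisfy the poll alone.
    byte io_buf[4096];
    const ssize_t got = ::read(fd, io_buf, sizeof(io_buf));

    if(got > 0)
       {
       xor_into_buf(output, output_len, output_i, io_buf,
                    static_cast<u32bit>(got));
       credited_bytes += std::min<ssize_t>(got, 128); // cap the credit
       }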
close the fds in the entropy source destructor.
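Presumably along these lines (the class and member names are guesses):

    #include <unistd.h>

    // Sketch: the devices stay open across polls; the destructor becomes
    // the one place the cached file descriptors are closed.
    Device_EntropySource::~Device_EntropySource()
       {
       for(size_t i = 0; i != devices.size(); ++i)
          ::close(devices[i]);
       }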
It turned out many files were including base.h merely to get other
includes (like types.h, secmem.h, and exceptn.h). Those have been changed
to directly include the files containing the declarations that code needs.
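For example, in the style the message describes (include only what the
file actually uses; the comments are illustrative):

    // Before: #include <botan/base.h>   // dragged in everything
    // After: pull in only the needed declarations
    #include <botan/types.h>    // byte, u32bit
    #include <botan/secmem.h>   // SecureVector
    #include <botan/exceptn.h>  // exception classes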
is requested, Randpool will first do a fast poll on each entropy
source that has been registered. It will count these poll results
towards the collected entropy count, with a maximum of 96
contributed bits of entropy per poll (only /dev/random reaches
this; others typically measure at 50-60 bits), and a maximum of
256 bits for the summed contribution of the fast polls.
Then it will attempt slow polls of all devices until it thinks enough
entropy has been collected (using the rather naive entropy_estimate
function). It will count any slow poll for no more than 256 bits (100 or
so is typical for every poll but /dev/random), and will attempt to collect
at least 512 bits of (estimated/guessed) entropy.
This tends to cause Randpool to use significantly more
sources. Previously it was common, especially on systems with a
/dev/random, for only one or a few sources to be used. This
change helps assure that even if /dev/random and company are
broken or compromised, the RNG output remains secure (assuming at
least some amount of entropy unguessable by the attacker can be
collected via other sources).
Also change AutoSeeded_RNG to do an automatic poll/seed when it is
created.
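The two-phase strategy, as a sketch (the constants are the ones stated
above; fast_poll/slow_poll returning an estimated bit count, and the
function wrapper itself, are assumptions):

    #include <algorithm>
    #include <vector>

    // Sketch of the reseed strategy: capped fast polls first, then slow
    // polls until the 512-bit target is (believed to be) met.
    u32bit gather_seed_material(std::vector<EntropySource*>& sources)
       {
       u32bit collected = 0;

       u32bit fast_total = 0;
       for(size_t i = 0; i != sources.size(); ++i)
          fast_total += std::min<u32bit>(fast_poll(*sources[i]), 96);
       collected += std::min<u32bit>(fast_total, 256); // fast polls: 256 cap

       for(size_t i = 0; i != sources.size() && collected < 512; ++i)
          collected += std::min<u32bit>(slow_poll(*sources[i]), 256);

       return collected; // target: at least 512 estimated bits
       }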
implementations
to decouple from knowing about RandomNumberGenerator).
fd327b29aa542e0ad5ff6d37d8392321670f0369)
to branch 'net.randombit.botan.modularized' (head 3f8d05493d4b192243fdc8a7f518ed1013c3be54)
them modules now. In any case there is no distinction so info.txt seems
better.