path: root/src/entropy
Commit message | Author | Age | Files | Lines
* Remove the 'realname' attribute on all modules and cc/cpu/os info files.lloyd2009-10-298-16/+0
| Pretty much useless and unused, except for listing the module names in
| build.h, and the short versions totally suffice for that.
* Add support for GNU/Hurdlloyd2009-10-072-0/+2
|
* Add support for Dragonfly BSD (a fork of FreeBSD).lloyd2009-07-253-0/+3
| Contributed by Patrick Georgi
* Two changes to proc_walk:lloyd2009-07-251-2/+2
| Don't read any file that is not world-readable. This avoids trouble when
| running as root, since on Linux various special files can cause odd
| interactions and/or blocking behavior when read (for instance /proc/kmsg).
| The assumption is that no such files are world-readable. This also avoids
| any issue of reading data that is potentially sensitive.
|
| Instead of reading the first 1 KB of each file, only read the first 128
| bytes. This prevents large files (like /proc/config.gz or /proc/kallsyms)
| from swamping the input buffer; these inputs are pretty static and
| shouldn't count for much. Reducing to 128 bytes causes a poll to read
| about 400 different files, rather than ~30.
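The read policy described above can be sketched as follows. This is an illustration only, not Botan's actual proc_walk code; the function name and constants are hypothetical, though the 128-byte cap and the world-readable check follow the commit message.

```cpp
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>

// Hypothetical sketch: skip files that are not world-readable, and
// read at most the first 128 bytes of each file. Returns bytes read,
// or -1 if the file was skipped or unreadable.
ssize_t read_proc_file(const char* path, unsigned char* out, size_t out_len)
   {
   struct stat st;
   if(::stat(path, &st) != 0)
      return -1;

   // Only read world-readable files: avoids odd special files when
   // running as root, and any data that might be sensitive.
   if(!(st.st_mode & S_IROTH))
      return -1;

   int fd = ::open(path, O_RDONLY | O_NOCTTY);
   if(fd < 0)
      return -1;

   const size_t MAX_READ = 128; // was 1024 before this change
   ssize_t got = ::read(fd, out, out_len < MAX_READ ? out_len : MAX_READ);
   ::close(fd);
   return got;
   }
```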
* Fix some unused variable nits pointed out by icc 10.1lloyd2009-07-211-1/+1
|
* Move some files around to break up dependencies between directorieslloyd2009-07-163-0/+12
|
* static_cast a double before returning it as a u32bit to avoid a warninglloyd2009-07-101-1/+1
| with some older versions of gcc
* Fix a subtle bug in the /dev/*random reader. The maximum ms wait time waslloyd2009-07-021-2/+3
| set to 1000 ms (scaling based on amount of data requested). At 1000 ms
| exactly, we would form a timeval of 0 seconds and 1000000 usecs (ie, 1
| second). Linux was fine with this, but FreeBSD 7.0's select was returning
| EINVAL. Fix things to properly create the timeval so that everyone is happy.
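The fix above amounts to splitting the millisecond wait into whole seconds and sub-second microseconds, since POSIX permits implementations to reject a timeval with tv_usec >= 1000000. A minimal sketch (the helper name is illustrative, not Botan's code):

```cpp
#include <sys/select.h>

// Build a timeval from a total wait expressed in milliseconds,
// splitting into whole seconds plus a remainder so that tv_usec
// never reaches 1000000 (which FreeBSD's select() rejects with
// EINVAL, even though Linux accepted it).
struct timeval timeval_from_ms(long wait_ms)
   {
   struct timeval tv;
   tv.tv_sec  = wait_ms / 1000;          // whole seconds
   tv.tv_usec = (wait_ms % 1000) * 1000; // always < 1000000
   return tv;
   }
```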
* Minor hackery to deal with win32 library dependencieslloyd2009-07-022-2/+2
|
* Changes to /dev/*random poller - read up to 48 bytes, and wait longer in ↵lloyd2009-06-091-3/+2
| select loop (up to a second)
* Many source files included bit_ops.h when what was really desired waslloyd2009-05-131-1/+0
| rotate.h, or when it was not needed at all. Remove or change the includes
| as needed.
* Thomas Moschny passed along a request from the Fedora packagers which camelloyd2009-03-3018-23/+59
| up during the Fedora submission review, that each source file include
| some text about the license. One handy Perl script later, and each file
| now has the line
|
|     Distributed under the terms of the Botan license
|
| after the copyright notices.
|
| While I was in there modifying every file anyway, I also stripped out the
| remainder of the block comments (lots of asterisks before and after the
| text); this is a stylistic thing I picked up when I was first learning
| C++, but in retrospect it is not a good style, as the structure makes it
| harder to modify comments (with the result that comments become fewer,
| shorter, and are less likely to be updated, which are not good things).
* merge of '93d8e162df445b607d3085d0f966f4e7b286108a'lloyd2009-01-313-23/+38
|\ and 'fc89152d6d99043fb9ed1e9f2569fde3fee419e5'
| * In es_unix, two changeslloyd2009-01-311-6/+3
| | Make the fast poll significantly more pessimistic/realistic about how
| | many bits of randomness we're getting from getrusage and stat.
| |
| | Don't cut out from execing programs if the desired poll bits is under
| | 128. Simply poll until either the accumulator says we're done or we
| | run out of sources. The assumption is that the poll won't be run at
| | all unless it is necessary (es_unix comes late in the list of sources
| | to use, since it is pretty slow).
| * Recast to byte pointer in Entropy_Accumulator before passing to add_byteslloyd2009-01-311-4/+4
| |
| * Change the max amount read from /dev/*random to 128 bits.lloyd2009-01-311-9/+4
| | Also, change the wait time to bits/16 milliseconds. For instance, if
| | 64 bits of entropy are requested, the reader will wait at most 4 ms in
| | the select loop.
| * Track the collected entropy as a double instead of a unsigned int. Otherwiselloyd2009-01-311-3/+5
| | inputs might end up not contributing anything to the count even when
| | they should. This was particularly noticeable with the proc walker -
| | it uses an estimate of .01 bits / byte, so if the file was < 100 bytes
| | it would not count for anything at all.
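The truncation problem described above can be shown with a few lines. This is an illustration with made-up function names, not Botan's code: at 0.01 bits per byte, a 64-byte file contributes 0.64 bits, which an unsigned int accumulator rounds down to zero on every sample.

```cpp
#include <cstddef>

// Two accumulators for the same samples: one truncating each
// contribution to an unsigned int, one keeping the fractional bits.
unsigned int collected_int = 0;
double collected_dbl = 0.0;

void add_sample_int(std::size_t length, double bits_per_byte)
   {
   // 64 * 0.01 = 0.64 -> truncated to 0, every single time
   collected_int += static_cast<unsigned int>(length * bits_per_byte);
   }

void add_sample_dbl(std::size_t length, double bits_per_byte)
   {
   collected_dbl += length * bits_per_byte; // fractions accumulate
   }
```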
| * Make Entropy_Accumulator a pure virtual to allow other accumulationlloyd2009-01-311-5/+26
| | techniques, with the one using BufferedComputation being the new
| | subclass with the charming name Entropy_Accumulator_BufferedComputation.
* | Compilation fixes for the Win32 entropy sources.lloyd2009-01-282-4/+4
|/
* Double the static estimate in es_ftw. To collect 256 bits of estimatedlloyd2009-01-281-1/+1
| entropy, the proc walker will read about 256K bytes. This seems plenty
| sufficient to me.
* In the BeOS entropy poll, quit the loop early if the polling goal waslloyd2009-01-281-0/+3
| achieved.
* Go back to entropy bits per byte, instead of total estimated entropy oflloyd2009-01-281-4/+4
| the buffer.
* Have Entropy_Accumulator dump everything into a BufferedComputation.lloyd2009-01-273-103/+23
| Since both Randpool and HMAC_RNG fed the input into a MAC anyway, this
| works nicely. (It would be nicer to use tr1::function but, argh, don't
| want to fully depend on TR1 quite yet. C++0x cannot come soon enough.)
|
| This avoids requiring to do run length encoding; it just dumps everything
| as-is into the MAC. This ensures the buffer is not a potential narrow
| pipe for the entropy (for instance, one might imagine an entropy source
| which outputs one random byte every 16 bytes, and the rest some repeating
| pattern - using a 16 byte buffer, you would only get 8 bits of entropy
| total, no matter how many times you sampled).
* Check in a branch with a major redesign on how entropy polling is performed.lloyd2009-01-2720-442/+510
| Combine the fast and slow polls into a single poll() operation. Instead
| of being given a buffer to write output into, the EntropySource is
| passed an Entropy_Accumulator. This handles the RLE encoding that
| xor_into_buf used to do. It also contains a cached I/O buffer, so
| entropy sources do not individually need to allocate memory for that
| with each poll.
|
| When data is added to the accumulator, the source specifies an estimate
| of the number of bits of entropy per byte, as a double. This is tracked
| in the accumulator. Once the estimated entropy hits a target (set by the
| constructor), the accumulator's member function predicate
| polling_goal_achieved flips to true. This signals to the PRNG that it
| can stop polling sources; polls that take a long time also periodically
| check this flag and return immediately.
|
| The Win32 and BeOS entropy sources have been updated, but blindly;
| testing is needed.
|
| The test_es example program has been modified: now it polls twice and
| outputs the XOR of the two collected results. That helps show if the
| output is consistent across polls (not a good thing). I have noticed on
| the Unix entropy source, occasionally there are many 0x00 bytes in the
| output, which is not optimal. This also needs to be investigated.
|
| The RLE is not actually RLE anymore. It works well for non-random inputs
| (ASCII text, etc), but I noticed that when /dev/random output was fed
| into it, the output buffer would end up being RR01RR01RR01, where RR is
| a random byte and 01 is the byte count.
|
| The buffer sizing also needs to be examined carefully. It might be
| useful to choose a prime number for the size to XOR stuff into, to help
| ensure an even distribution of entropy across the entire buffer space.
| Or: feed it all into a hash function?
|
| This change should (perhaps with further modifications) help WRT the
| concerns Zack W raised about the RNG on the monotone-dev list.
* In the Unix entropy source fast poll, clear the stat buf beforelloyd2009-01-031-0/+1
| we call stat. Apparently on 32-bit Linux (or at least on Ubuntu
| 8.04/x86), struct stat has some padding bytes, which are not written to
| by the syscall, but valgrind doesn't realize that this is OK and warns
| about uninitialized memory access when we read the contents of the
| struct. Since this data is then fed into the PRNG, the PRNG state and
| output becomes tainted, which makes valgrind's output rather useless.
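The workaround is one memset before the syscall; a minimal sketch (the wrapper name is illustrative, not Botan's code):

```cpp
#include <sys/stat.h>
#include <cstring>

// Zero the whole struct stat before the syscall so any padding bytes
// the kernel does not write are still defined memory when the struct
// is later fed into the PRNG (keeps valgrind's taint tracking clean).
int checked_stat(const char* path, struct stat* buf)
   {
   std::memset(buf, 0, sizeof(*buf)); // clears padding bytes too
   return ::stat(path, buf);
   }
```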
* Rickard Bondesson reported on botan-devel about some problems buildinglloyd2008-12-023-11/+7
| on Solaris 10 with GCC 3.4.3.
|
| First, remove the definition of _XOPEN_SOURCE_EXTENDED=1 in mmap_mem.cpp
| and unix_cmd.cpp, because apparently on Solaris defining this macro
| breaks C++ compilation entirely with GCC:
|     http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6395191
|
| In es_egd.cpp and es_dev.cpp, include <fcntl.h> to get the declaration
| of open(), which is apparently where open(2) lives on Solaris - this
| matches the include the *BSD man pages for open(2) show, though AFAIK
| the BSDs all compiled fine without it (probably due to greater efforts
| to be source-compatible with Linux systems by *BSD developers).
|
| I have not been able to test these changes personally on Solaris, but
| Rickard reports that with these changes everything compiles OK.
|
| Update lib version to 1.8.0-pre. ZOMG. Finally.
* If the read succeeded in EGD_EntropySource::slow_poll, the loop wouldlloyd2008-11-251-0/+2
| just continue on instead of returning the length of the buffer recv'ed
| from EGD.
* In es_ftw, remove check for if the return value of read() is largerlloyd2008-11-241-1/+1
| than the value we gave it. This is pretty unlikely... also caused an
| annoying warning with some versions of GCC b/c it couldn't figure out
| the signed/unsigned comparison was safe in this case.
* Modify es_ftw to use xor_into_buflloyd2008-11-231-4/+4
|
* Previously es_unix would always try to get 16K, then return. Now itlloyd2008-11-231-4/+3
| tries to get an amount corresponding with the size of the output buffer,
| specifically 128 times the output size. So, assuming we have enough
| working sources, each output byte will be the XOR of (at least) 128
| bytes of text from the output programs. (Though RLE may reduce that
| somewhat.)
* Limit the output size of fast polls by the BeOS, Unix, and Win32 entropylloyd2008-11-233-0/+3
| pollers that grab basic statistical data to 32 bytes.
* Compile fixlloyd2008-11-231-2/+2
|
* Remove now unused buf_es modulelloyd2008-11-233-134/+0
|
* Update BeOS entropy poller to also derive directly from EntropySourcelloyd2008-11-233-35/+46
| and use xor_into_buf. Completely untested, though it looks clean besides
| missing the BeOS headers+funcs if I try to compile on Linux.
* Fix return types in declarationlloyd2008-11-231-2/+3
|
* Convert Win32 stats polling entropy source to use xor_into_buf. Untested.lloyd2008-11-233-44/+59
|
* Fix indexing of ids array. Don't zeroize stat/rusage bufs before uselloyd2008-11-231-8/+4
|
* Use template version of xor_into_buf wherever useful in es_unix.cpplloyd2008-11-231-3/+3
|
* Use template version of xor_into_buf in es_unixlloyd2008-11-231-1/+1
|
* Change unix_procs entropy source to be a plain EntropySource instead oflloyd2008-11-233-24/+47
| a Buffered_EntropySource. Data used in the poll is directly accumulated
| into the output buffer using XOR, wrapping around as needed. The
| implementation uses xor_into_buf from xor_buf.h
|
| This is simpler and more convincingly secure than the method used by
| Buffered_EntropySource. In particular, the collected data is persisted
| in the buffer there much longer than needed. It is also much harder for
| entropy sources to signal errors or a failure to collect data using
| Buffered_EntropySource. And, with the simple xor_into_buf function, it
| is actually quite easy to remove without major changes.
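The XOR-with-wraparound accumulation described above can be sketched like this. It is an illustration of the technique, not the exact xor_into_buf from xor_buf.h (whose signature may differ):

```cpp
#include <cstddef>

// XOR incoming bytes into the output buffer, wrapping around when the
// input is longer than the buffer. buf_i carries the write position
// across calls, so successive polls keep folding data in.
void xor_into_buf(unsigned char* buf, std::size_t buf_len,
                  std::size_t& buf_i,
                  const unsigned char* in, std::size_t in_len)
   {
   for(std::size_t j = 0; j != in_len; ++j)
      {
      buf[buf_i] ^= in[j];
      buf_i = (buf_i + 1) % buf_len; // wrap around as needed
      }
   }
```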
* Remove dep on buf_es in proc_walk info.txtlloyd2008-11-211-4/+0
|
* Last minute es_ftw optimizations / logic changes. Performance of seedinglloyd2008-11-212-35/+27
| was too slow; it was noticeably slowing down AutoSeeded_RNG. Reduce the
| amount of output gathered to 32 times the size of the output buffer,
| and instead of using Buffered_EntropySource, just xor the read file
| data directly into the output buffer. Read up to 4096 bytes per file,
| but only count the first 128 towards the total goal (/proc/config.gz
| being a major culprit - large, random looking, and entirely or almost
| static).
* Remove debug printflloyd2008-11-211-1/+0
|
* Cache socket descriptors in EGD entropy source, instead of creating each polllloyd2008-11-212-50/+97
|
* Reduce /dev/random poll times: 5ms for fast, 20 for slowlloyd2008-11-101-2/+2
|
* The device reader constructors were being called too soon. Insteadlloyd2008-11-102-19/+40
| close the fds in the entropy source destructor.
* Split base.h into block_cipher.h and stream_cipher.hlloyd2008-11-081-0/+2
| It turned out many files were including base.h merely to get other
| includes (like types.h, secmem.h, and exceptn.h). Those have been
| changed to directly include the files containing the declarations that
| code needs.
* Cache device descriptors in Device_EntropySourcelloyd2008-11-072-34/+45
|
* Add fast_poll implementationlloyd2008-11-042-3/+12
|
* Substantially change Randpool's reseed logic. Now when a reseedlloyd2008-10-2713-16/+55
| is requested, Randpool will first do a fast poll on each entropy source
| that has been registered. It will count these poll results towards the
| collected entropy count, with a maximum of 96 contributed bits of
| entropy per poll (only /dev/random reaches this; others measure at
| 50-60 bits typically), and a maximum of 256 bits for the sum
| contribution of the fast polls.
|
| Then it will attempt slow polls of all devices until it thinks enough
| entropy has been collected (using the rather naive entropy_estimate
| function). It will count any slow poll for no more than 256 bits (100
| or so is typical for every poll but /dev/random), and will attempt to
| collect at least 512 bits of (estimated/guessed) entropy.
|
| This tends to cause Randpool to use significantly more sources.
| Previously it was common, especially on systems with a /dev/random, for
| only one or a few sources to be used. This change helps assure that
| even if /dev/random and company are broken or compromised, the RNG
| output remains secure (assuming at least some amount of entropy
| unguessable by the attacker can be collected via other sources).
|
| Also change AutoSeeded_RNG to do an automatic poll/seed when it is
| created.
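The reseed schedule above can be sketched as follows. The caps and goal (96, 256, 512) come from the commit message; the classes and function are illustrative stand-ins, not Randpool's actual code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative source interface: polls return estimated bits gathered.
struct EntropySource
   {
   virtual double fast_poll() = 0;
   virtual double slow_poll() = 0;
   virtual ~EntropySource() {}
   };

// Sketch of the reseed schedule: fast polls first (96 bits max each,
// 256 bits max in total), then slow polls (256 bits max each) until at
// least 512 estimated bits have been gathered or sources run out.
double reseed(std::vector<EntropySource*>& sources)
   {
   const double FAST_POLL_CAP = 96, FAST_TOTAL_CAP = 256;
   const double SLOW_POLL_CAP = 256, GOAL = 512;

   double fast_bits = 0;
   for(std::size_t i = 0; i != sources.size(); ++i)
      fast_bits += std::min(sources[i]->fast_poll(), FAST_POLL_CAP);

   double bits = std::min(fast_bits, FAST_TOTAL_CAP);

   for(std::size_t i = 0; i != sources.size() && bits < GOAL; ++i)
      bits += std::min(sources[i]->slow_poll(), SLOW_POLL_CAP);

   return bits; // estimated entropy collected
   }
```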