so application code can check for the specific API it expects, without having
to keep track of the versions in which APIs x, y, and z changed. Arbitrarily
set all current API versions to 20131128.
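For illustration, a compile-time check in application code might look like the
following sketch; the feature macro name (BOTAN_HAS_TLS) is only a stand-in for
whichever API the application actually depends on.

```cpp
#include <botan/build.h>

// Hypothetical example: require at least the API revision identified by the
// 20131128 datestamp. BOTAN_HAS_TLS is illustrative; substitute the feature
// macro the application really needs.
#if !defined(BOTAN_HAS_TLS) || (BOTAN_HAS_TLS < 20131128)
  #error "This application requires the 20131128 TLS API"
#endif
```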
| |
Fix a few nullptr and cast warnings.
| |
too large to fit in an fd_set.
Read at least 128 bits even if the poll is asking for less.
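As a sketch of the constraint being worked around here (not the library's
actual code): select() can only watch descriptors below FD_SETSIZE, so a
source whose descriptor is too large has to be skipped rather than passed to
FD_SET().

```cpp
#include <sys/select.h>

// FD_SET() on a descriptor >= FD_SETSIZE is undefined behavior, so an
// entropy source with such a descriptor cannot be polled via select().
bool watchable_fd(int fd)
   {
   return fd >= 0 && fd < FD_SETSIZE;
   }
```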
| |
style cast in secmem.h
| |
using a custom allocator. Currently our allocator just does new/delete
with a memset before deletion, and the mmap and mlock allocators have
been removed.
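A minimal sketch of that idea (the names are illustrative, not the library's
actual declarations): a standard-style allocator that allocates with
new/delete and memsets the block before freeing it.

```cpp
#include <cstddef>
#include <cstring>
#include <new>

template<typename T>
struct scrubbing_allocator
   {
   using value_type = T;

   T* allocate(std::size_t n)
      { return static_cast<T*>(::operator new(n * sizeof(T))); }

   void deallocate(T* p, std::size_t n)
      {
      // Wipe before release. A real implementation would use a wipe the
      // compiler cannot optimize away; plain memset is just the sketch.
      std::memset(p, 0, n * sizeof(T));
      ::operator delete(p);
      }
   };
```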
| |
how much we ask for on the basis of how many bits we're counting each
byte as contributing. Change /dev/*random estimate to 7 bits per byte.
Small cleanup in HMAC_RNG.
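In outline, the scaling amounts to something like the following (illustrative
names, not the actual HMAC_RNG code):

```cpp
#include <cstddef>

// If each byte read from the source is only credited with est_bits_per_byte
// bits of entropy (7 for /dev/*random after this change), request enough
// bytes to cover the number of bits the poll actually wants.
std::size_t bytes_to_request(std::size_t bits_wanted,
                             std::size_t est_bits_per_byte = 7)
   {
   return (bits_wanted + est_bits_per_byte - 1) / est_bits_per_byte;
   }
```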
| |
of giving /dev/random, EGD, and CryptoAPI a full 8 bits per byte of
entropy, estimate at 6 bits.
In the proc walker, allow more files to be read, read more of any
particular file, and count each bit for 1/10 as much as before.
Reading more of the file seems especially valuable, as some files are
quite random, whereas others are very static, and this should ensure
we read more of the actually unpredictable inputs.
Prefer /dev/random over /dev/urandom
| |
already. Reported by Jeremy C. Reed <[email protected]>
| |
around a bug in FreeBSD 6.1, which is long EOL.
If we can't figure out the CPU in configure.py and we're running verbosely,
dump the entire list of CPUs we know about.
Some doc cleanups.
Rename the 'beos' target to 'haiku', since testing shows that
botan can't compile under the old BeOS GCC 2.95 anyway.
Remove the call to idle_time in the stats entropy source - it causes a
crash on Haiku R1-alpha2 somewhere inside a system DLL. I didn't
bother debugging it beyond looking at the backtrace.
Add a 'bepc' alias for i386 as that is what Haiku reports its
processor as.
Fix the install dirs to match Haiku R1, though apparently they will
change in R2 anyway when they add package management.
Enable use of gmtime_r on Haiku.
| |
representation (rather than in an iterator context); instead use &buf[0],
which works for both MemoryRegion and std::vector.
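A small illustration of the idiom (using std::vector here; MemoryRegion is
Botan's own buffer type, and dump_hex is just a made-up consumer of a raw
pointer):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// When a raw pointer to the contents is needed, &buf[0] works for any
// contiguous container with operator[], whereas buf.begin() only yields a
// pointer if the iterator type happens to be one.
static void dump_hex(const std::uint8_t* data, std::size_t len)
   {
   for(std::size_t i = 0; i != len; ++i)
      std::printf("%02X", static_cast<unsigned>(data[i]));
   std::printf("\n");
   }

int main()
   {
   const std::vector<std::uint8_t> buf = { 0xDE, 0xAD, 0xBE, 0xEF };
   if(!buf.empty())
      dump_hex(&buf[0], buf.size());   // rather than buf.begin()
   }
```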
| |
Fixes for the amalgamation generator for internal headers.
Remove BOTAN_DLL exporting macros from all internal-only headers;
the classes/functions there don't need to be exported, and
avoiding the PIC/GOT indirection can be a big win.
Add missing BOTAN_DLLs where necessary, mostly in gfpmath and cvc.
For GCC, use -fvisibility=hidden and set BOTAN_DLL to the
visibility __attribute__ to export those classes/functions.
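In rough outline the export scheme looks like this (the class names are
illustrative and the exact conditionals in build.h may differ):

```cpp
// When the library is built with -fvisibility=hidden, only declarations
// tagged with the default-visibility attribute are exported from the
// shared object; internal-only headers simply omit the macro.
#if defined(__GNUC__)
  #define BOTAN_DLL __attribute__((visibility("default")))
#else
  #define BOTAN_DLL
#endif

class BOTAN_DLL Public_Thing      // part of the exported API
   {
   public:
      void do_something();
   };

class Internal_Thing              // internal: stays hidden, no PLT/GOT cost
   {
   public:
      void helper();
   };
```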
| |
Remove support for (unused) modset settings.
Move tss, fpe, cryptobox, and aont to new dir constructs
| |
Pretty much useless and unused, except for listing the module names in
build.h, and the short versions totally suffice for that.
| |
Contributed by Patrick Georgi
| |
set to 1000 ms (scaling based on the amount of data requested). At 1000 ms
exactly, we would form a timeval of 0 seconds and 1000000 usecs (i.e., 1 second).
Linux was fine with this, but FreeBSD 7.0's select was returning EINVAL.
Fix things to properly create the timeval so that everyone is happy.
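The corrected construction is essentially the following (a sketch of the
idea rather than the exact code):

```cpp
#include <sys/time.h>

// Split a millisecond wait into seconds and microseconds so that tv_usec
// stays below 1000000; FreeBSD's select() returns EINVAL otherwise.
struct timeval make_timeout(long wait_ms)
   {
   struct timeval tv;
   tv.tv_sec  = wait_ms / 1000;
   tv.tv_usec = (wait_ms % 1000) * 1000;
   return tv;
   }
```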
| |
select loop (up to a second)
| |
up during the Fedora submission review, that each source file include some
text about the license. One handy Perl script later and each file now has
the line
Distributed under the terms of the Botan license
after the copyright notices.
While I was in there modifying every file anyway, I also stripped out the
remainder of the block comments (lots of asterisks before and after the
text); this is a stylistic thing I picked up when I was first learning C++,
but in retrospect it is not a good style, as the structure makes it harder
to modify comments (with the result that comments become fewer, shorter, and
less likely to be updated, which are not good things).
| |
Also, change the wait time to bits/16 milliseconds. For instance, if 64
bits of entropy are requested, the reader will wait at most 4 ms in the
select loop.
| |
Combine the fast and slow polls into a single poll() operation.
Instead of being given a buffer to write output into, the EntropySource is
passed an Entropy_Accumulator. This handles the RLE encoding that xor_into_buf
used to do. It also contains a cached I/O buffer so entropy sources do not
individually need to allocate memory for that with each poll. When data
is added to the accumulator, the source specifies an estimate of the number
of bits of entropy per byte, as a double. This is tracked in the accumulator.
Once the estimated entropy hits a target (set by the constructor), the
accumulator's member function predicate polling_goal_achieved flips to true.
This signals to the PRNG that it can stop polling sources; polls that take
a long time also check this flag periodically and return immediately.
The Win32 and BeOS entropy sources have been updated, but blindly; testing
is needed.
The test_es example program has been modified: it now polls twice and outputs
the XOR of the two collected results. That helps show if the output is consistent
across polls (not a good thing). I have noticed that on the Unix entropy source
there are occasionally many 0x00 bytes in the output, which is not optimal.
This also needs to be investigated.
The RLE is not actually RLE anymore. It works well for non-random inputs
(ASCII text, etc.), but I noticed that when /dev/random output was fed into
it, the output buffer would end up being RR01RR01RR01, where RR is a random
byte and 01 is the byte count.
The buffer sizing also needs to be examined carefully. It might be useful
to choose a prime number for the size to XOR stuff into, to help ensure an
even distribution of entropy across the entire buffer space. Or: feed it
all into a hash function?
This change should (perhaps with further modifications) help WRT the
concerns Zack W raised about the RNG on the monotone-dev list.
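A stripped-down sketch of the accumulator interface as described above; the
member names follow the text (add, polling_goal_achieved), but the RLE coding
and cached I/O buffer details are omitted and the real class surely differs.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

class Entropy_Accumulator
   {
   public:
      // The polling goal (in estimated bits) is fixed at construction.
      explicit Entropy_Accumulator(double goal_bits) : m_goal_bits(goal_bits) {}

      // An EntropySource hands in bytes along with its own estimate of how
      // many bits of entropy each byte contributes.
      void add(const void* bytes, std::size_t length, double est_bits_per_byte)
         {
         const std::uint8_t* b = static_cast<const std::uint8_t*>(bytes);
         m_collected.insert(m_collected.end(), b, b + length);
         m_estimated_bits += est_bits_per_byte * length;
         }

      // Sources and the PRNG's polling loop check this to stop early.
      bool polling_goal_achieved() const
         { return m_estimated_bits >= m_goal_bits; }

   private:
      double m_goal_bits;
      double m_estimated_bits = 0;
      std::vector<std::uint8_t> m_collected;
   };
```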
| |
on Solaris 10 with GCC 3.4.3.
First, remove the definition of _XOPEN_SOURCE_EXTENDED=1 in mmap_mem.cpp
and unix_cmd.cpp, because apparently on Solaris defining this macro breaks
C++ compilation entirely with GCC:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6395191
In es_egd.cpp and es_dev.cpp, include <fcntl.h> to get the declaration of
open(), which is apparently where open(2) lives on Solaris; this matches
the include that the *BSD man pages for open(2) show, though AFAIK the BSDs
all compiled fine without it (probably due to greater efforts by *BSD
developers to be source-compatible with Linux systems).
I have not been able to test these changes personally on Solaris, but
Rickard reports that with these changes everything compiles OK.
Update lib version to 1.8.0-pre. ZOMG. Finally.
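The include in question, with a typical call for context (illustrative, not
the exact source; the open() flags here are just an example):

```cpp
#include <fcntl.h>   // declares open() on Solaris (and per POSIX generally)

int open_entropy_device(const char* path)
   {
   return ::open(path, O_RDONLY | O_NONBLOCK);   // returns -1 on failure
   }
```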
| |
close the fds in the entropy source destructor.
| |
is requested, Randpool will first do a fast poll on each entropy
source that has been registered. It will count these poll results
towards the collected entropy count, with a maximum of 96
contributed bits of entropy per poll (only /dev/random reaches
this; others typically measure at 50-60 bits), and a maximum of
256 bits for the total contribution of the fast polls.
Then it will attempt slow polls of all devices until it thinks enough
entropy has been collected (using the rather naive entropy_estimate
function). It will count any slow poll for no more than 256 bits (100 or
so is typical for every poll but /dev/random), and will attempt to collect
at least 512 bits of (estimated/guessed) entropy.
This tends to cause Randpool to use significantly more
sources. Previously it was common, especially on systems with a
/dev/random, for only one or a few sources to be used. This
change helps ensure that even if /dev/random and company are
broken or compromised, the RNG output remains secure (assuming at
least some amount of entropy unguessable by the attacker can be
collected via other sources).
Also change AutoSeeded_RNG to do an automatic poll/seed when it is
created.
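Purely as an illustration of the accounting rules described above (all names
here are hypothetical stand-ins, not the Randpool implementation):

```cpp
#include <algorithm>
#include <vector>

using PollFn = double (*)();   // one poll, returning its estimated bits

double gather_entropy(const std::vector<PollFn>& fast_polls,
                      const std::vector<PollFn>& slow_polls)
   {
   double total = 0;

   double fast_total = 0;
   for(PollFn poll : fast_polls)
      fast_total += std::min(poll(), 96.0);    // at most 96 bits per fast poll
   total += std::min(fast_total, 256.0);       // at most 256 bits from fast polls

   for(PollFn poll : slow_polls)
      {
      if(total >= 512)                         // stop once the 512-bit goal is met
         break;
      total += std::min(poll(), 256.0);        // at most 256 bits per slow poll
      }

   return total;
   }
```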
| |
implementations to decouple from knowing about RandomNumberGenerator).
| |
them modules now. In any case there is no distinction so info.txt seems
better.