|
|
|
|
|
|
how much we ask for on the basis of how many bits we're counting each
byte as contributing. Change /dev/*random estimate to 7 bits per byte.
Small cleanup in HMAC_RNG.
|
|
|
|
|
waiting for a full kilobyte. This is for the benefit of DSA/ECDSA
which want a call to add_entropy to update the state in some way,
passing just a hash input which might be as small as 20 bytes.
|
the parameters of the key length. Instead define a new function which
returns a simple object which contains this information.
This definitely breaks backwards compatibility, though only with code
that directly manipulates low-level objects like BlockCipher*s, which
is probably relatively rare.
Also remove some deprecated accessor functions from lookup.h. It turns
out block_size_of and output_size_of are being used in the TLS code; I
need to remove them from there before I can delete these entirely.
Really they didn't make much sense, because they assumed all
implementations of a particular algorithm will have the same
specifications, which is not necessarily true, especially WRT key
length. It is much safer (and probably simpler) to first retrieve an
instance of the actual object you are going to use and then ask it
directly.
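For illustration, a hypothetical shape for such a key-length object and query
(the names Key_Length_Specification and key_spec below are placeholders, not
necessarily the ones actually used):

    #include <cstddef>

    class Key_Length_Specification
       {
       public:
          Key_Length_Specification(size_t min_k, size_t max_k, size_t mod_k = 1) :
             min_keylen(min_k), max_keylen(max_k), keylen_mod(mod_k) {}

          // Does this particular implementation accept a key of this length?
          bool valid_keylength(size_t length) const
             {
             return (length >= min_keylen &&
                     length <= max_keylen &&
                     length % keylen_mod == 0);
             }

       private:
          size_t min_keylen, max_keylen, keylen_mod;
       };

    // Each object then reports its own constraints, e.g.
    //    Key_Length_Specification BlockCipher::key_spec() const;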
|
Add RandomNumberGenerator::random_vec, which takes a length n and
returns a new SecureVector with randomized contents of that size. This
nicely covers most of the cases where randomize was being called on a
vector, and is a little cleaner in the code as well: instead of
vec.resize(length);
rng.randomize(&vec[0], vec.size());
we just write
vec = rng.random_vec(length);
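For illustration, a minimal sketch of how such a helper could be implemented
(assuming Botan's SecureVector/byte types; not necessarily the exact code):

    SecureVector<byte> RandomNumberGenerator::random_vec(size_t bytes)
       {
       SecureVector<byte> output(bytes);   // zero-filled buffer of the requested size
       if(bytes)
          this->randomize(&output[0], output.size());   // overwrite with RNG output
       return output;
       }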
|
|
|
|
representation (rather than in an iterator context), instead use &buf[0],
which works for both MemoryRegion and std::vector
|
harmonising MemoryRegion with std::vector:
The MemoryRegion::clear() function would zeroise the buffer, but keep
the memory allocated and the size unchanged. This is very different
from STL's clear(), which is basically equivalent to what is called
destroy() in MemoryRegion. So to be able to replace MemoryRegion with
a std::vector, we have to rename destroy() to clear() and we have to
expose the current functionality of clear() in some other way, since
vector doesn't support this operation. Do so by adding a global
function named zeroise() which takes a MemoryRegion and zeroes it.
Remove clear() to ensure all callers are updated.
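For illustration, one possible shape for the zeroise() helper (a sketch only;
the real code may reuse an existing clear_mem-style utility):

    #include <cstring>

    template<typename T>
    void zeroise(MemoryRegion<T>& vec)
       {
       if(vec.size())
          std::memset(&vec[0], 0, vec.size() * sizeof(T));   // wipe contents, keep size unchanged
       }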
|
|
|
|
|
|
|
|
|
entirely. add_entropy() just adds the input into the extractor; if
more than 1024 bytes of input have been added by the user since the
last reseed, then force a reseed. Until that point, the data simply
accumulates in the extractor, which is fast and helps ensure a large
block of data is input when we finally do reseed.
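A rough sketch of that accumulate-then-reseed policy (member and parameter
names here are illustrative, not the actual ones):

    void HMAC_RNG::add_entropy(const byte input[], size_t length)
       {
       extractor->update(input, length);   // fold the input straight into the extractor MAC
       user_input_len += length;

       if(user_input_len >= 1024)          // enough user input since the last reseed?
          {
          user_input_len = 0;
          reseed(256);                     // force a reseed so the accumulated material gets used
          }
       }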
|
|
|
contents of all SSL/TLS handshake messages into the PRNG input.
|
|
|
|
to be named differently from add_entropy to deal with odd C++
overloading/virtual rules.
|
to mix in with the user input.
Check that the prf and extractor are compatible.
For the initial PRF key, use all zeros of the appropriate size,
and for the initial XTS key, use PRF("Botan HMAC_RNG XTS"). This
ensures that only the one fixed key size is ever used with either
the prf or extractor objects, allowing you to use, say,
HMAC(SHA-256)+CMAC(AES-256), or even CMAC(AES-128)+CMAC(AES-128)
as the PRFs in the RNG.
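A heavily hedged sketch of that keying scheme (every member name and call
signature below is an assumption, not the actual interface):

    void HMAC_RNG::initial_keying()
       {
       // PRF key: all zeros of a single fixed size (here, the PRF's output length)
       SecureVector<byte> zeros(prf->OUTPUT_LENGTH);
       prf->set_key(&zeros[0], zeros.size());

       // Extractor ("XTS") key: run a fixed label through the freshly keyed PRF,
       // so the extractor also only ever sees one key size
       prf->update("Botan HMAC_RNG XTS");
       SecureVector<byte> xts_key = prf->final();
       extractor->set_key(&xts_key[0], xts_key.size());
       }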
|
PRNG everywhere. The removal of the global PRNG was motivated by a
desire to remove the global library state entirely. However, the real
point of this was to remove the use of globally visible _mutable_
state; of the mutable state, the PRNG is probably the least important,
and the most useful to share. And it seems unlikely that thread
contention would be a major issue in the PRNG.
Add back a global PRNG to Library_State. Use lazy initialization, so
apps that don't ever use a PRNG don't need a seeding step. Then have
AutoSeeded_RNG call that global PRNG.
Offer once again
RandomNumberGenerator& Library_State::global_rng();
which returns a reference to the global PRNG.
This RNG object serializes access to itself with a mutex.
Remove the hack known as Blinding::choose_nonce; instead use the
global PRNG to choose a blinding nonce.
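As a sketch, the mutex serialization might look roughly like this (class and
member names are illustrative):

    class Serialized_RNG : public RandomNumberGenerator
       {
       public:
          void randomize(byte out[], size_t len)
             {
             Mutex_Holder lock(mutex);   // serialize every call into the shared PRNG
             rng->randomize(out, len);
             }

          // add_entropy, reseed, etc. would be forwarded the same way

       private:
          Mutex* mutex;
          RandomNumberGenerator* rng;
       };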
|
|
|
including loadstor.h actually just needed get_byte and nothing else.
|
|
|
of AES-NI instructions, etc, in the PRNGs.
|
bswap.h); too many external apps rely on loadstor.h existing.
Define 64-bit generic bswap in terms of 32-bit bswap, since it's
not much slower if 32-bit is also generic, and much faster if
it's not. This may be quite helpful on 32-bit x86 in particular.
Change formulation of generic 32-bit bswap. It may be faster or
slower depending on the CPU, especially the latency and throughput
of rotate instructions, but should be faster on an ideally
superscalar processor with rotate instructions (i.e., what I expect
future CPUs to look more like).
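For reference, a self-contained sketch of both ideas (not the library's exact
code):

    #include <stdint.h>

    inline uint32_t rotate_left(uint32_t x, int shift)
       { return (x << shift) | (x >> (32 - shift)); }

    // Generic 32-bit swap using two independent rotates plus masks; on a
    // superscalar CPU with cheap rotates the two rotates can run in parallel.
    inline uint32_t reverse_bytes(uint32_t x)
       {
       return (rotate_left(x, 8)  & 0x00FF00FFU) |
              (rotate_left(x, 24) & 0xFF00FF00U);
       }

    // 64-bit swap built from the 32-bit version: swap each half, then exchange
    // the halves. If the 32-bit swap maps to a native bswap, this stays fast.
    inline uint64_t reverse_bytes(uint64_t x)
       {
       const uint32_t hi = static_cast<uint32_t>(x >> 32);
       const uint32_t lo = static_cast<uint32_t>(x);
       return (static_cast<uint64_t>(reverse_bytes(lo)) << 32) | reverse_bytes(hi);
       }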
|
Fixes for the amalgamation generator for internal headers.
Remove BOTAN_DLL exporting macros from all internal-only headers;
the classes/functions there don't need to be exported, and
avoiding the PIC/GOT indirection can be a big win.
Add missing BOTAN_DLLs where necessary, mostly gfpmath and cvc
For GCC, use -fvisibility=hidden and set BOTAN_DLL to the
visibility __attribute__ to export those classes/functions.
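As a sketch of the visibility scheme (the exact macro wiring is an assumption;
the real definition lives in the generated build configuration):

    #if defined(__GNUC__)
      /* library compiled with -fvisibility=hidden, so the public API opts back in */
      #define BOTAN_DLL __attribute__((visibility("default")))
    #else
      #define BOTAN_DLL
    #endif

    class BOTAN_DLL Public_Algorithm { /* exported from the shared object */ };

    class Internal_Helper { /* internal-only header: no BOTAN_DLL, no GOT indirection */ };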
|
QueryPerformanceCounter, into an entropy source hres_timer. Its
results, if any, do not count as contributing entropy to the poll.
Convert the other (monotonic/fixed epoch) timers to a single function
get_nanoseconds_clock(), living in time.h, which statically chooses
the 'best' timer type (clock_gettime, gettimeofday, std::clock, in
that order depending on what is available). Add feature test macros
for clock_gettime and gettimeofday.
Remove the Timer class and timer.h. Remove the Timer& argument to the
algorithm benchmark function.
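For illustration, a possible shape for get_nanoseconds_clock() (the
feature-test macro names are assumptions):

    #include <stdint.h>
    #include <ctime>

    #if defined(BOTAN_TARGET_OS_HAS_CLOCK_GETTIME)
      #include <time.h>
    #elif defined(BOTAN_TARGET_OS_HAS_GETTIMEOFDAY)
      #include <sys/time.h>
    #endif

    uint64_t get_nanoseconds_clock()
       {
    #if defined(BOTAN_TARGET_OS_HAS_CLOCK_GETTIME)
       struct timespec ts;
       clock_gettime(CLOCK_REALTIME, &ts);
       return 1000000000ULL * ts.tv_sec + ts.tv_nsec;
    #elif defined(BOTAN_TARGET_OS_HAS_GETTIMEOFDAY)
       struct timeval tv;
       gettimeofday(&tv, 0);
       return 1000000000ULL * tv.tv_sec + 1000ULL * tv.tv_usec;
    #else
       return (1000000000ULL * std::clock()) / CLOCKS_PER_SEC;   // coarse fallback
    #endif
       }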
|
|
|
|
|
|
|
containers (specifically vector).
Rename is_empty to empty
Remove has_items
Rename create to resize
|
|
|
|
Pretty much useless and unused, except for listing the module names in
build.h and the short versions totally suffice for that.
|
|
|
|
|
just too fragile and not that useful. Something like Java's checked exceptions
might be nice, but simply killing the process entirely if an unexpected
exception is thrown is not exactly useful for something trying to be robust.

c5ae189464f6ef16e3ce73ea7c563412460d76a3)
to branch 'net.randombit.botan' (head e2b95b6ad31c7539cf9ac0ebddb1d80bf63b5b21)

- rounding.h (round_up, round_down)
- workfactor.h (dl_work_factor)
- timer.h (system_time)
And update all users of the previous util.h

is enabled in the build.
|
|
is being used and not Randpool.
|
|
|
|
|
the info.txt files with the right module dependencies.
Apply it across the codebase.
|
When a reseed is attempted, up to poll_bits attempts will be made, running
in order through the set of available sources. So for instance if poll_bits
is set to the default 256, then up to 256 polls will be performed (some of
which might not provide any entropy, of course) before stopping; if the
accumulator's goal is achieved before that point, then the polling stops
early.
This should greatly help to resolve the recent rash of PRNG unseeded problems
some people have been having.
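A rough sketch of that bounded polling loop (illustrative only; the names
follow this and nearby messages):

    void HMAC_RNG::reseed(size_t poll_bits)
       {
       Entropy_Accumulator accum(poll_bits);   // entropy target set by the constructor

       for(size_t i = 0; i != poll_bits; ++i)
          {
          if(entropy_sources.empty() || accum.polling_goal_achieved())
             break;   // stop as soon as the estimate hits the goal

          // Run through the registered sources in order, wrapping around
          entropy_sources[i % entropy_sources.size()]->poll(accum);
          }

       // ... feed the accumulated material into the extractor and rekey ...
       }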
|
|
|
|
|
/dev/urandom
/dev/random
/dev/srandom (OpenBSD-specific)
|
|
|
|
rotate.h, or when it was not needed at all. Remove or change the includes
as needed.
|
|
|
|
with the version in earlier releases. Rickard Bondesson pointed out that
this was a problem on the mailing list.
|
up during the Fedora submission review, that each source file include some
text about the license. One handy Perl script later and each file now has
the line
Distributed under the terms of the Botan license
after the copyright notices.
While I was in there modifying every file anyway, I also stripped out the
remainder of the block comments (lots of asterisks before and after the
text); this is a stylistic thing I picked up when I was first learning C++,
but in retrospect it is not a good style, as the structure makes it harder
to modify comments (with the result that comments become fewer, shorter and
are less likely to be updated, which are not good things).
|
|
|
|
|
|
|
|
Instead simply consider the PRNG seeded if a poll kicked off from reseed
met its goal, or if the user adds data.
Doing anything else prevents creating (for instance) a PRNG seeded with
64 bits of entropy, which is unsafe for some purposes (key generation)
but quite possibly safe enough for others (generating salts and such).
|
|
|
|
techniques, with the one using BufferedComputation being the new
subclass with the charming name Entropy_Accumulator_BufferedComputation.
|
|
|
|
|
|
|
a new member function rekey, calling it from both reseed and add_entropy.
Previously add_entropy worked without this because the PRNG would reseed
itself automatically if it was not seeded at the point when randomize() was
called, but once this was removed it was necessary to ensure a rekey kicked
off, if appropriate, when calling add_entropy.
|
Since both Randpool and HMAC_RNG fed the input into a MAC anyway, this
works nicely. (It would be nicer to use tr1::function but, argh, don't
want to fully depend on TR1 quite yet. C++0x cannot come soon enough.)
This avoids the need to do run-length encoding; it just dumps everything
as-is into the MAC. This ensures the buffer is not a potential narrow pipe
for the entropy (for instance, one might imagine an entropy source which
outputs one random byte every 16 bytes, and the rest some repeating pattern -
using a 16 byte buffer, you would only get 8 bits of entropy total, no matter
how many times you sampled).
|
|
|
randomize, or PRNG_Unseeded will be thrown.
|
Combine the fast and slow polls into a single poll() operation.
Instead of being given a buffer to write output into, the EntropySource is
passed an Entropy_Accumulator. This handles the RLE encoding that xor_into_buf
used to do. It also contains a cached I/O buffer so entropy sources do not
individually need to allocate memory for that with each poll. When data
is added to the accumulator, the source specifies an estimate of the number
of bits of entropy per byte, as a double. This is tracked in the accumulator.
Once the estimated entropy hits a target (set by the constructor), the
accumulator's member function predicate polling_goal_achieved flips to true.
This signals to the PRNG that it can stop polling sources; polls that take
a long time also check this flag periodically and return immediately when
it is set.
The Win32 and BeOS entropy sources have been updated, but blindly; testing
is needed.
The test_es example program has been modified: now it polls twice and outputs
the XOR of the two collected results. That helps show whether the output is
consistent across polls (which would not be a good thing). I have noticed
that with the Unix entropy source, occasionally there are many 0x00 bytes in
the output, which is not optimal. This also needs to be investigated.
The RLE is not actually RLE anymore. It works well for non-random inputs
(ASCII text, etc.), but I noticed that when /dev/random output was fed into
it, the output buffer would end up being RR01RR01RR01, where RR is a random
byte and 01 is the byte count.
The buffer sizing also needs to be examined carefully. It might be useful
to choose a prime number for the size to XOR stuff into, to help ensure an
even distribution of entropy across the entire buffer space. Or: feed it
all into a hash function?
This change should (perhaps with further modifications) help WRT the
concerns Zack W raised about the RNG on the monotone-dev list.
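For illustration, roughly how a source would use the new interface (signatures
are assumed, and read_some_device_bytes is a hypothetical helper):

    void Example_EntropySource::poll(Entropy_Accumulator& accum)
       {
       byte buf[128];

       // Long-running polls check the goal flag and stop early once it flips
       while(!accum.polling_goal_achieved())
          {
          const size_t got = read_some_device_bytes(buf, sizeof(buf));
          if(got == 0)
             break;

          // Pass the bytes along with this source's own entropy estimate
          // (here, a conservative 0.25 bits per byte)
          accum.add(buf, got, 0.25);
          }
       }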
|
|
|
|
entropy source will realistically be able to provide even 768 bits of entropy,
so this is probably overkill even still.
|
|
|
randomness data after the contents have been fed into the MAC.
|
|
|
|
|
As with HMAC_RNG, instead assume one bit of conditional entropy per byte
of polled material. Since they are no longer used, drop the entropy
estimation routines entirely.