| Commit message | Author | Age | Files | Lines |
| |
| |
and all CPU-specific implementations now depend on the appropriate engine
module.
The most common problem with this previously was that the SSE2 module was built,
but the sole SSE2 code (SHA-1) was not (for instance, on an i686). This would
cause a compile warning about the unused request object.
Preventing unused engines from being built will also (very slightly) speed
up the lookup process on most systems.
|
| |
up during the Fedora submission review, that each source file include some
text about the license. One handy Perl script later and each file now has
the line
Distributed under the terms of the Botan license
after the copyright notices.
While I was in there modifying every file anyway, I also stripped out the
remainder of the block comments (lots of asterisks before and after the
text); this is a stylistic habit I picked up when I was first learning C++,
but in retrospect it is not a good style, as the structure makes it harder
to modify comments (with the result that comments become fewer, shorter, and
less likely to be updated, none of which is a good thing).
|
| |
to have been so! Change MDx_HashFunction::hash to a new compress_n,
which hashes an arbitrary number of blocks. I had thought this might
reduce a bit of loop overhead, but the results were far better than I
anticipated: a speedup across the board of about 2%, and very
noticeable (+10%) increases for MD4 and Tiger (probably because both
of those have so few instructions in each iteration of the
compression function). A rough sketch of the interface change follows
the numbers below.
Before:
SHA-1:
amd64: 211.9 MiB/s
core: 210.0 MiB/s
sse2: 295.2 MiB/s
MD4: 476.2 MiB/s
MD5: 355.2 MiB/s
SHA-256: 99.8 MiB/s
SHA-512: 151.4 MiB/s
RIPEMD-128: 326.9 MiB/s
RIPEMD-160: 225.1 MiB/s
Tiger: 214.8 MiB/s
Whirlpool: 38.4 MiB/s
After:
SHA-1:
amd64: 215.6 MiB/s
core: 213.8 MiB/s
sse2: 299.9 MiB/s
MD4: 528.4 MiB/s
MD5: 368.8 MiB/s
SHA-256: 103.9 MiB/s
SHA-512: 156.8 MiB/s
RIPEMD-128: 334.8 MiB/s
RIPEMD-160: 229.7 MiB/s
Tiger: 240.7 MiB/s
Whirlpool: 38.6 MiB/s
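To illustrate the shape of the change, here is a minimal sketch of a
compress_n-style interface. This is not the actual Botan code: the class
and member names (other than compress_n itself) are made up for the
example. The point is that the buffering layer hands every run of whole
blocks to a single compress_n call, so per-call overhead is paid once per
run instead of once per block.

// Sketch only: illustrative names, not Botan's real classes.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

class BlockHasher // hypothetical stand-in for MDx_HashFunction
   {
   public:
      explicit BlockHasher(size_t block_size) :
         buffer(block_size), position(0) {}
      virtual ~BlockHasher() {}

      // Feed input; all complete blocks go to compress_n in one call.
      void update(const uint8_t input[], size_t length)
         {
         const size_t block = buffer.size();

         // Top up a partially filled block first
         if(position > 0)
            {
            const size_t take = std::min(length, block - position);
            std::memcpy(&buffer[position], input, take);
            position += take;
            input += take;
            length -= take;

            if(position < block)
               return; // still no complete block buffered

            compress_n(&buffer[0], 1);
            position = 0;
            }

         // Hash all remaining whole blocks with a single call
         const size_t full_blocks = length / block;
         if(full_blocks > 0)
            {
            compress_n(input, full_blocks);
            input += full_blocks * block;
            length -= full_blocks * block;
            }

         // Buffer any tail for the next update() or final padding
         std::memcpy(&buffer[0], input, length);
         position = length;
         }

   private:
      // Derived hashes process 'blocks' consecutive blocks starting at
      // 'input', keeping the chaining state in registers across the run.
      virtual void compress_n(const uint8_t input[], size_t blocks) = 0;

      std::vector<uint8_t> buffer;
      size_t position;
   };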
|
| |
them to be individually requested as providers on lookup.
|
| |
the current version.
|
| |
rather than silently replacing the C++ versions. Instead they are silently
replaced (currently, at least) at the lookup level: the choice is made based
on which feature macros are set, so the best implementation for the current
build configuration is selected. So you can have (and benchmark) MD5 and
MD5_IA32 directly against each other in the same program with no hassle,
but if you ask for "MD5", you might get an MD5 or you might get an MD5_IA32.
Also make the canonical asm names (which aren't guarded by C++ namespaces)
of the form botan_<algo>_<arch>_<func> as in botan_sha160_ia32_compress,
to avoid namespace collisions.
This change has another bonus: in many cases it should be possible to
derive the asm specializations directly from the original implementation,
saving some code (and of course, logically, SHA_160_IA32 is a SHA_160, just
one with a faster implementation of the compression function, so this seems
reasonable anyway).
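As a rough sketch of what such a derivation could look like (illustrative
only: the class members and the exact signature of the extern "C" routine
are assumptions for the example, not the actual Botan sources, and linking
would of course require the assembly object that provides the symbol):

// Sketch only: the asm routine is assumed to process one 64-byte block.
#include <cstddef>
#include <cstdint>

// Flat C-named symbol provided by the assembly module, following the
// botan_<algo>_<arch>_<func> naming scheme described above.
extern "C" void botan_sha160_ia32_compress(uint32_t digest[5],
                                           const uint8_t input[64],
                                           uint32_t workspace[80]);

namespace Botan {

class SHA_160 // portable implementation, heavily abbreviated
   {
   public:
      virtual ~SHA_160() {}
   protected:
      virtual void compress_n(const uint8_t input[], size_t blocks)
         {
         // portable C++ compression loop elided in this sketch
         (void)input; (void)blocks;
         }
      uint32_t digest[5];
      uint32_t W[80];
   };

// The specialization is a SHA_160: it only swaps in the faster
// compression function, inheriting padding, buffering and output
// encoding unchanged from the base class.
class SHA_160_IA32 : public SHA_160
   {
   protected:
      void compress_n(const uint8_t input[], size_t blocks) override
         {
         for(size_t i = 0; i != blocks; ++i)
            botan_sha160_ia32_compress(digest, input + 64*i, W);
         }
   };

}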
|
| |
them modules now. In any case there is no distinction, so info.txt seems
better.
|
| |
class).
Add many missing modinfo.txts that I had not checked in. Oops.
|
| |
hash/sha1_amd64 and cipher/serpent_ia32.
The remaining code in the asm/ dir is for BigInt, so rename it to bigint/ in
preparation for all (or most) of BigInt being modularized.
|