path: root/src
Commit message [Author, Date, Files, Lines -/+]
* Slight cleanups in the Altivec detection code for readability. [lloyd, 2009-10-29, 1 file, -5/+12]
* Add a new looping load_be / load_le for loading large arrays at once, and change some of the hash functions to use it as low hanging fruit. Probably could use further optimization (it just unrolls x4 currently), but merely having it as syntax is good, since it allows optimizing many functions at once (e.g. using SSE2 to do 4-way byteswaps). [lloyd, 2009-10-29, 11 files, -49/+104]
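A minimal sketch of what such a looping little-endian load might look like; the names, the byte typedef, and the x4 unrolling factor are assumptions for illustration, not Botan's actual loadstor code.

```cpp
// Minimal sketch of a looping load_le, unrolled x4 to match the note
// above. Illustrative only, not Botan's implementation.
#include <stdint.h>
#include <stddef.h>
typedef uint8_t byte;

template<typename T>
inline T load_le_one(const byte in[])
   {
   T out = 0;
   for(size_t i = 0; i != sizeof(T); ++i)
      out |= static_cast<T>(in[i]) << (8*i);
   return out;
   }

template<typename T>
inline void load_le(T out[], const byte in[], size_t count)
   {
   while(count >= 4) // unrolled x4
      {
      out[0] = load_le_one<T>(in + 0*sizeof(T));
      out[1] = load_le_one<T>(in + 1*sizeof(T));
      out[2] = load_le_one<T>(in + 2*sizeof(T));
      out[3] = load_le_one<T>(in + 3*sizeof(T));
      out += 4; in += 4*sizeof(T); count -= 4;
      }
   for(size_t i = 0; i != count; ++i)
      out[i] = load_le_one<T>(in + i*sizeof(T));
   }
```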
* Fix cpuid with icc (tested with 11.1). Document SHA optimizations, AltiVec runtime checking, fixes for cpuid for both icc and msvc. [lloyd, 2009-10-29, 1 file, -2/+2]
* propagate from branch 'net.randombit.botan' (head 4fd7eb9630271d3c1dfed21987ef864680d4ce7b) to branch 'net.randombit.botan.general-simd' (head 91df868149cdc4754d340e6103028acc82182609) [lloyd, 2009-10-29, 28 files, -964/+1719]
| * Clean up prep00_15 - same speed on Core2 [lloyd, 2009-10-29, 1 file, -16/+10]
| * Clean up the SSE2 SHA-1 code quite a bit, make better use of C++ features and also make it stylistically much closer to the standard SHA-1 code. [lloyd, 2009-10-29, 2 files, -308/+267]
| * Format for easier reading [lloyd, 2009-10-29, 1 file, -31/+43]
| * Small cleanups (remove tab characters, change macros to fit the rest of the code stylistically, etc.) [lloyd, 2009-10-29, 1 file, -123/+121]
| * Give each version of SIMD_32 a public bswap() [lloyd, 2009-10-29, 3 files, -11/+29]
| * Add a new enabled() function to each of the SIMD_32 instantiations, which returns true if they might plausibly work. The AltiVec and SSE2 versions call into CPUID; the scalar version always works. [lloyd, 2009-10-29, 3 files, -1/+9]
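A minimal sketch of the enabled() idea; the CPUID helpers are only declared here for illustration (have_altivec is the detection added in the next entry).

```cpp
// Sketch only: each SIMD_32 backend reports whether it can plausibly
// run. The CPUID helpers are assumed to be defined elsewhere.
namespace CPUID
   {
   bool has_sse2();      // assumed x86 CPUID-based check
   bool have_altivec();  // assumed PowerPC check (see the entry below)
   }

class SIMD_SSE2    { public: static bool enabled() { return CPUID::has_sse2(); } };
class SIMD_Altivec { public: static bool enabled() { return CPUID::have_altivec(); } };
class SIMD_Scalar  { public: static bool enabled() { return true; } };
```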
| * No ||= operator! [lloyd, 2009-10-29, 1 file, -7/+7]
| * Add CPUID::have_altivec for AltiVec runtime detection. Relies on mfspr emulation/trapping by the kernel, which works on (at least) Linux and NetBSD. [lloyd, 2009-10-29, 2 files, -0/+61]
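A hedged sketch of how PVR-based detection along these lines can look: it assumes the kernel traps and emulates mfspr from user mode (as the entry notes for Linux and NetBSD), and the model-number list is illustrative rather than exhaustive.

```cpp
// Sketch only: read the Processor Version Register (SPR 287) and check
// the model field against cores known to have AltiVec. The PVR values
// listed are illustrative, not a complete table.
bool have_altivec_via_pvr()
   {
#if defined(__powerpc__) || defined(__ppc__)
   unsigned int pvr = 0;
   asm volatile("mfspr %0, 287" : "=r" (pvr));
   pvr >>= 16; // keep the processor model field

   return (pvr == 0x000C ||                  // 7400 (G4)
           pvr == 0x8000 || pvr == 0x8001 || // 745x (G4)
           pvr == 0x0039 || pvr == 0x003C);  // 970, 970FX (G5)
#else
   return false;
#endif
   }
```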
| * Rename sse2 engine to simd [lloyd, 2009-10-29, 2 files, -2/+2]
| * Use register writes in the Altivec code for stores, because Altivec's handling of unaligned writes is messy as hell. If writes are batched, this is somewhat easier to deal with. [lloyd, 2009-10-29, 1 file, -7/+16]
| * Kill realnames on new modules not in mainline [lloyd, 2009-10-29, 3 files, -5/+0]
| * propagate from branch 'net.randombit.botan' (head 54d2cc7b00ecd5f41295e147d23ab6d294309f61) to branch 'net.randombit.botan.general-simd' (head 9cb1b5f00bfefd05cd9555489db34e6d86867aca) [lloyd, 2009-10-29, 22 files, -621/+1322]
| | * propagate from branch 'net.randombit.botan' (head 8fb69dd1c599ada1008c4cab2a6d502cbcc468e0) to branch 'net.randombit.botan.general-simd' (head c05c9a6d398659891fb8cca170ed514ea7e6476d) [lloyd, 2009-10-29, 22 files, -621/+1322]
| | | * Rename SSE2 stuff to be generally SIMD since it supports at least SSE2 and Altivec (though Altivec is seemingly slower ATM...) [lloyd, 2009-10-29, 16 files, -135/+126]
| | | * Add copyright + license on the new SIMD files [lloyd, 2009-10-28, 4 files, -2/+14]
| | | * propagate from branch 'net.randombit.botan' (head bf629b13dd132b263e76a72b7eca0f7e4ab19aac) to branch 'net.randombit.botan.general-simd' (head f731cff08ff0d04c062742c0c6cfcc18856400ea) [lloyd, 2009-10-28, 12 files, -404/+1101]
| | | | * Add an AltiVec SIMD_32 implementation [lloyd, 2009-10-28, 1 file, -0/+178]
          Tested and works for Serpent and XTEA on a PowerPC 970 running Gentoo with GCC 4.3.4.
          Uses the GCC syntax for creating literal values instead of the Motorola syntax [{1,2,3,4} instead of (1,2,3,4)].
          In tests so far, this is much, much slower than either the standard scalar code or the SIMD-in-scalar-registers code. It looks like for whatever reason GCC is refusing to inline the function
              SIMD_Altivec(__vector unsigned int input) { reg = input; }
          and calls it with a branch hundreds of times in each function. I don't know if this is the entire reason it's slower, but it definitely can't be helping.
          The code handles unaligned loads OK but assumes stores are to an aligned address. This will fail drastically some day, and needs to be fixed to either use scalar stores, which (most?) PPCs will handle (if slowly), or batch the loads and stores so we can work across the loads. Considering the code so far loads 4 vectors of data in one go, this would probably be a big win (and also for loads, since instead of doing 8 loads for 4 registers only 5 are needed).
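A minimal illustration of the two literal syntaxes and the constructor mentioned above; the class is a skeleton assuming GCC with -maltivec, not the actual Botan header.

```cpp
// Illustrative sketch of the constructs described above, not Botan's
// simd_altivec.h.
#include <altivec.h>

class SIMD_Altivec
   {
   public:
      // The constructor the entry says GCC was failing to inline
      SIMD_Altivec(__vector unsigned int input) { reg = input; }

      static SIMD_Altivec example_literal()
         {
         // GCC brace syntax for a vector literal; the Motorola form
         // would be written (__vector unsigned int)(1, 2, 3, 4)
         __vector unsigned int v = {1, 2, 3, 4};
         return SIMD_Altivec(v);
         }

   private:
      __vector unsigned int reg;
   };
```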
| | | | * Define SSE rotate_right in terms of rotate_left, and load_be in terms of load_le + bswap [lloyd, 2009-10-28, 1 file, -3/+2]
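In SSE2 intrinsics the idea looks roughly like this; function names are illustrative, not the Botan source.

```cpp
// Sketch of the two identities: rotr(x, r) == rotl(x, 32 - r), and a
// big-endian load == unaligned little-endian load followed by a bswap.
#include <emmintrin.h>

template<int R>
static inline __m128i rotl_epi32(__m128i x)
   {
   return _mm_or_si128(_mm_slli_epi32(x, R), _mm_srli_epi32(x, 32 - R));
   }

template<int R>
static inline __m128i rotr_epi32(__m128i x)
   {
   return rotl_epi32<32 - R>(x); // rotate_right via rotate_left
   }

static inline __m128i bswap_epi32(__m128i v)
   {
   // Swap bytes within each 16-bit lane, then swap the 16-bit halves
   v = _mm_or_si128(_mm_slli_epi16(v, 8), _mm_srli_epi16(v, 8));
   return _mm_or_si128(_mm_slli_epi32(v, 16), _mm_srli_epi32(v, 16));
   }

static inline __m128i load_be_4x32(const void* in)
   {
   __m128i le = _mm_loadu_si128(static_cast<const __m128i*>(in));
   return bswap_epi32(le); // load_be in terms of load_le + bswap
   }
```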
| | | | * Add XTEA decryption [lloyd, 2009-10-26, 1 file, -11/+47]
| | | | * Add subtraction operators to SIMD_32 classes, needed for XTEA decrypt [lloyd, 2009-10-26, 2 files, -0/+26]
| | | | * Add a wrapper for a set of SSE2 operations with convenient syntax for 4x32 operations. Also add a pure scalar code version. [lloyd, 2009-10-26, 11 files, -404/+862]
          Convert Serpent to use this new interface, and add an implementation of XTEA in SIMD.
          The wrappers plus the scalar version allow SIMD-ish code to work on all platforms. This is often a win due to better ILP being visible to the processor (as with the recent XTEA optimizations). The only real danger is register starvation, mostly an issue on x86 these days. So it may (or may not) be a win to consolidate the standard C++ versions and the SIMD versions together.
          Future work:
          - Add AltiVec/VMX version
          - Maybe also for ARM's NEON extension? Less pressing, I would think.
          - Convert SHA-1 code to use SIMD_32
          - Add XTEA SIMD decryption (currently only encrypt)
          - Change SSE2 engine to SIMD_engine
          - Modify configure.py to set BOTAN_TARGET_CPU_HAS_[SSE2|ALTIVEC|NEON|XXX] macros
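A rough sketch of the "pure scalar code version" idea: the same 4x32 interface backed by ordinary integer registers, so SIMD-style code compiles everywhere. Names and the operator set are illustrative, not Botan's full SIMD_32 interface.

```cpp
// Sketch of a SIMD-in-scalar-registers 4x32 wrapper; illustrative only.
#include <stdint.h>

class SIMD_Scalar
   {
   public:
      SIMD_Scalar(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
         { R0 = a; R1 = b; R2 = c; R3 = d; }

      SIMD_Scalar operator+(const SIMD_Scalar& o) const
         { return SIMD_Scalar(R0 + o.R0, R1 + o.R1, R2 + o.R2, R3 + o.R3); }

      SIMD_Scalar operator-(const SIMD_Scalar& o) const
         { return SIMD_Scalar(R0 - o.R0, R1 - o.R1, R2 - o.R2, R3 - o.R3); }

      SIMD_Scalar operator^(const SIMD_Scalar& o) const
         { return SIMD_Scalar(R0 ^ o.R0, R1 ^ o.R1, R2 ^ o.R2, R3 ^ o.R3); }

   private:
      uint32_t R0, R1, R2, R3;
   };
```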
* Unroll SHA-1's expansion loop from x4 to x8; ~7% faster on Core2 [lloyd, 2009-10-29, 1 file, -1/+5]
* Unroll the expansion loop in both SHA-2 implementations by 8. On a Core2, SHA-256 gets ~7% faster, SHA-512 ~10%. [lloyd, 2009-10-29, 2 files, -13/+29]
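For reference, unrolling a SHA-256 message-expansion loop by 8 has roughly this shape (standard sigma functions shown); a sketch, not the Botan implementation.

```cpp
// Sketch of an x8-unrolled SHA-256 message expansion over the usual
// 64-word schedule W[].
#include <stdint.h>
#include <stddef.h>

static inline uint32_t rotr32(uint32_t x, int r) { return (x >> r) | (x << (32 - r)); }
static inline uint32_t sigma0(uint32_t x) { return rotr32(x, 7) ^ rotr32(x, 18) ^ (x >> 3); }
static inline uint32_t sigma1(uint32_t x) { return rotr32(x, 17) ^ rotr32(x, 19) ^ (x >> 10); }

void expand_sha256(uint32_t W[64])
   {
   for(size_t i = 16; i != 64; i += 8) // unrolled x8
      {
      W[i+0] = sigma1(W[i-2]) + W[i-7] + sigma0(W[i-15]) + W[i-16];
      W[i+1] = sigma1(W[i-1]) + W[i-6] + sigma0(W[i-14]) + W[i-15];
      W[i+2] = sigma1(W[i+0]) + W[i-5] + sigma0(W[i-13]) + W[i-14];
      W[i+3] = sigma1(W[i+1]) + W[i-4] + sigma0(W[i-12]) + W[i-13];
      W[i+4] = sigma1(W[i+2]) + W[i-3] + sigma0(W[i-11]) + W[i-12];
      W[i+5] = sigma1(W[i+3]) + W[i-2] + sigma0(W[i-10]) + W[i-11];
      W[i+6] = sigma1(W[i+4]) + W[i-1] + sigma0(W[i-9])  + W[i-10];
      W[i+7] = sigma1(W[i+5]) + W[i+0] + sigma0(W[i-8])  + W[i-9];
      }
   }
```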
* Kill straggling realnames [lloyd, 2009-10-29, 2 files, -4/+0]
* Hurd file was missing the txt extension; must have missed it before? [lloyd, 2009-10-29, 1 file, -0/+0]
* Remove the 'realname' attribute on all modules and cc/cpu/os info files. Pretty much useless and unused, except for listing the module names in build.h, and the short versions totally suffice for that. [lloyd, 2009-10-29, 233 files, -469/+0]
* propagate from branch 'net.randombit.botan.1_8' (head 3158f8272a3582dd44dfb771665eb71f7d005339) to branch 'net.randombit.botan' (head bf629b13dd132b263e76a72b7eca0f7e4ab19aac) [lloyd, 2009-10-28, 334 files, -2878/+8169]
| * Indent fix [lloyd, 2009-10-26, 1 file, -1/+1]
| * Add ; after call to VC++'s __cpuid, not a macro [lloyd, 2009-10-25, 1 file, -1/+1]
| * Cast the u32bit output array to an int* when calling the VC++ intrinsic, since it passes signed ints for whatever reason. Ensure CALL_CPUID is always defined (previously, it would not be if on an x86 but compiled with something other than GCC, ICC, VC++). [lloyd, 2009-10-25, 1 file, -3/+6]
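A hedged sketch of the two points above (the cast for the VC++ intrinsic, and a fallback so the macro is always defined); macro and type names are illustrative, and the GCC/ICC inline-asm branch is omitted.

```cpp
// Illustrative only: __cpuid under VC++ takes int[4], so a u32bit buffer
// needs a cast; on unknown compilers a stub keeps CALL_CPUID defined.
#include <stdint.h>
typedef uint32_t u32bit;

#if defined(_MSC_VER)
  #include <intrin.h>
  #define CALL_CPUID(type, out) \
     __cpuid(reinterpret_cast<int*>(out), (type))
#else
  /* Fallback so CALL_CPUID is always defined, even if no known
     compiler/arch combination matched */
  #define CALL_CPUID(type, out) \
     do { (out)[0] = (out)[1] = (out)[2] = (out)[3] = 0; } while(0)
#endif

// Usage sketch
inline bool cpu_has_sse2()
   {
   u32bit regs[4] = { 0 };
   CALL_CPUID(1, regs);
   return (regs[3] & (1 << 26)) != 0; // EDX bit 26 = SSE2
   }
```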
| * Kill stdio include [lloyd, 2009-10-23, 1 file, -2/+0]
| * Use new load/store ops in xtea x4 code [lloyd, 2009-10-23, 1 file, -12/+6]
| * Add new store_[l|b]e variants taking 8 values. Add new load options that are passed a number of variables by reference, setting them all at once. Will allow for batching operations (e.g. using SIMD operations to do 128-bit wide bswaps) for future optimizations. [lloyd, 2009-10-23, 1 file, -16/+108]
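A sketch of the by-reference load idea; names are assumptions rather than Botan's exact signatures, and the matching 8-value store variants are omitted.

```cpp
// Sketch only: one call fills eight words by reference, so a backend
// could later batch the work (e.g. a 128-bit wide bswap).
#include <stdint.h>
#include <stddef.h>
typedef uint8_t byte;

template<typename T>
inline T load_le_word(const byte in[], size_t off)
   {
   T out = 0;
   for(size_t i = 0; i != sizeof(T); ++i)
      out |= static_cast<T>(in[off*sizeof(T) + i]) << (8*i);
   return out;
   }

template<typename T>
inline void load_le(const byte in[],
                    T& x0, T& x1, T& x2, T& x3,
                    T& x4, T& x5, T& x6, T& x7)
   {
   x0 = load_le_word<T>(in, 0); x1 = load_le_word<T>(in, 1);
   x2 = load_le_word<T>(in, 2); x3 = load_le_word<T>(in, 3);
   x4 = load_le_word<T>(in, 4); x5 = load_le_word<T>(in, 5);
   x6 = load_le_word<T>(in, 6); x7 = load_le_word<T>(in, 7);
   }
```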
| * Simply unrolling the loop in XTEA and processing 4 blocks worth of data at a time more than doubles performance (from 38 MB/s to 90 MB/s on Core2 Q6600). Could do even better with SIMD, I'm sure, but this is fast and easy, and works everywhere. Probably will hurt on 32-bit x86 from the register pressure. [lloyd, 2009-10-23, 1 file, -0/+70]
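The shape of such a 4-way unroll, as a sketch: four independent blocks are advanced per round step so more instruction-level parallelism is visible. The function name and argument layout are illustrative, not Botan's xtea.cpp.

```cpp
// Sketch of processing four XTEA blocks per loop iteration; L[i]/R[i]
// are the left/right words of block i, key[] is the 128-bit key.
#include <stdint.h>
#include <stddef.h>

void xtea_encrypt_x4(uint32_t L[4], uint32_t R[4], const uint32_t key[4])
   {
   const uint32_t delta = 0x9E3779B9;
   uint32_t sum = 0;

   for(size_t r = 0; r != 32; ++r)
      {
      const uint32_t k0 = sum + key[sum & 3];
      L[0] += (((R[0] << 4) ^ (R[0] >> 5)) + R[0]) ^ k0;
      L[1] += (((R[1] << 4) ^ (R[1] >> 5)) + R[1]) ^ k0;
      L[2] += (((R[2] << 4) ^ (R[2] >> 5)) + R[2]) ^ k0;
      L[3] += (((R[3] << 4) ^ (R[3] >> 5)) + R[3]) ^ k0;

      sum += delta;

      const uint32_t k1 = sum + key[(sum >> 11) & 3];
      R[0] += (((L[0] << 4) ^ (L[0] >> 5)) + L[0]) ^ k1;
      R[1] += (((L[1] << 4) ^ (L[1] >> 5)) + L[1]) ^ k1;
      R[2] += (((L[2] << 4) ^ (L[2] >> 5)) + L[2]) ^ k1;
      R[3] += (((L[3] << 4) ^ (L[3] >> 5)) + L[3]) ^ k1;
      }
   }
```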
| * Increase the internal buffer size of the Hex coder/decoder, and put it into a named constant instead of being magic. Move from 64 bytes to 256. [lloyd, 2009-10-23, 1 file, -3/+5]
    This was necessary to allow Pipe(new Hex_Decoder, filter, ...) to give filter a sufficiently large input block. It would be nicer if the filter itself (in this case, ECB_Decryption, but others apply as well) was smart enough to buffer on its own. It might also be useful if code could query what parallelism a block cipher provided and modify its actions accordingly.
| * Remove all exception specifications. The way these are designed in C++ is just too fragile and not that useful. Something like Java's checked exceptions might be nice, but simply killing the process entirely if an unexpected exception is thrown is not exactly useful for something trying to be robust. [lloyd, 2009-10-22, 121 files, -140/+140]
| * Enable CPUID on x86 (checking wrong macro name) [lloyd, 2009-10-21, 1 file, -1/+1]
| * Format, add names to params in header [lloyd, 2009-10-19, 1 file, -3/+7]
| * Add theoretical support for Clang/LLVM. The current Gentoo clang ebuild doesn't seem to work with C++ at all, so this is untested. [lloyd, 2009-10-19, 1 file, -0/+46]
| * Also enable x86 asm word_add [lloyd, 2009-10-15, 1 file, -8/+0]
| * Enable x86-64 asm word_add [lloyd, 2009-10-15, 1 file, -8/+0]
| * merge of '5cfca720d4ca8d1e8f6946c7d9b4a8a6943094d0' and '8cc9c08544c0f1f1dba7c7a8da51d1657b1c7df8' [lloyd, 2009-10-15, 27 files, -428/+445]
| | * Similar treatment for OFB, which is also just a plain stream cipher [lloyd, 2009-10-14, 7 files, -100/+148]
| | * Convert CTR_BE from a Filter to a StreamCipher. Must wrap in a StreamCipher_Filter to pass it directly to a Pipe now. [lloyd, 2009-10-14, 10 files, -217/+224]
| | * Cleanups/random changes in the stream cipher code [lloyd, 2009-10-14, 14 files, -111/+73]
        - Remove encrypt, decrypt - replace by cipher() and cipher1()
        - Remove seek() - not well supported/tested, I want to redo with a new interface once CTR and OFB modes become stream ciphers
        - Rename resync to set_iv()
        - Remove StreamCipher::IV_LENGTH and add StreamCipher::valid_iv_length() to allow multiple IV lengths (as for instance Turing allows, as would Salsa20 if XSalsa20 were supported)
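A sketch of the interface shape these changes describe; the function names come from the entry above, but the exact signatures shown here are approximations, not the actual Botan header.

```cpp
// Minimal illustrative StreamCipher-style base class: cipher()/cipher1()
// replace encrypt/decrypt, resync becomes set_iv(), and valid_iv_length()
// allows more than one IV size.
#include <stdint.h>
#include <stddef.h>
typedef uint8_t byte;

class StreamCipher
   {
   public:
      // XOR the keystream against in[] producing out[]
      virtual void cipher(const byte in[], byte out[], size_t len) = 0;

      // In-place variant
      void cipher1(byte buf[], size_t len) { cipher(buf, buf, len); }

      // Replaces the old resync(); ciphers may accept several IV lengths
      virtual void set_iv(const byte iv[], size_t iv_len) = 0;
      virtual bool valid_iv_length(size_t iv_len) const { return (iv_len == 0); }

      virtual ~StreamCipher() {}
   };
```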
| * Avoid using word_add() in gfp_element.cpp; it was actually more complex than necessary, and was tickling a bug in the asm versions because of the constant 0. [lloyd, 2009-10-15, 1 file, -1/+3]