Commit message | Author | Age | Files | Lines
* In creating X.509 certificates and PKCS #10 requests, let (actually: require) | lloyd | 2009-11-09 | 10 | -39/+91
| | | | | | | the user to specify the hash function to use, instead of always using SHA-1. This was a sensible default a few years ago, when there wasn't a ~2^60 attack on SHA-1 and support for SHA-2 was pretty much nil, but using something else makes a lot more sense these days.
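For illustration, a call through the new explicit-hash API might look like the sketch below. The signature shown matches the 1.9-era self-signed certificate helper as best as can be reconstructed from this log, so treat the option fields and argument order as assumptions:

    #include <botan/x509self.h>
    #include <botan/rsa.h>

    using namespace Botan;

    X509_Certificate make_cert(RandomNumberGenerator& rng)
       {
       RSA_PrivateKey key(rng, 2048);

       X509_Cert_Options opts;
       opts.common_name = "example.com";
       opts.country = "US";

       // The hash is now an explicit argument rather than an implied SHA-1
       return X509::create_self_signed_cert(opts, key, "SHA-256", rng);
       }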
* Clean up aes_128_key_expansion | lloyd | 2009-11-06 | 1 | -24/+18
|
* Respect --with-isa when choosing what to enable | lloyd | 2009-11-06 | 1 | -3/+4
|
* GCC doesn't know what Nehalem or Westmere are, though it does know about | lloyd | 2009-11-06 | 1 | -0/+3
| | | | | the AES and PCLMUL instructions. Oddness. For the time being, compile Nehalem and Westmere as Core2 + extras, probably close enough.
* Remove the name of an unused length field | lloyd | 2009-11-06 | 1 | -1/+1
|
* Add a new need_isa marker for info.txt that lets a module depend | lloyd | 2009-11-06 | 6 | -25/+31
| | | | | | | | | | | | on a particular ISA extension rather than a list of CPUs. Much easier to edit and audit, too. Add markers on the AES-NI code and SHA-1/SSE2. Serpent and XTEA don't need it because they are generic and only depend on simd_32, which will silently swap in a scalar version if SSE2/AltiVec isn't enabled (since it turns out that on superscalar processors just doing 4 blocks in parallel can be a win even in GPRs). Add pentium3 to the list of CPUs with rdtsc; it was missing. Odd!
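As a sketch, the new marker in a module's info.txt might look like this. The need_isa marker itself is from this commit; the define name and the ISA token spelling are illustrative assumptions:

    # hypothetical excerpt from an AES-NI module's info.txt
    define AES_NI

    need_isa aes_ni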
* Add a complete but untested AES-128 using the AES-NI intrinsics. | lloyd | 2009-11-06 | 3 | -68/+147
| | | | | | | | | | | | | | | | | | From looking at how key gen works in particular, it seems easiest to provide only AES-128, AES-192, and AES-256 and not a general AES class that can accept any key length. This also has the bonus of allowing full loop unrolling which may be a win (how much so will depend on the latency/throughput of the AES instructions which is currently unknown). No block interleaving, though of course it works very nicely here, simply due to the desire to keep things simple until what is currently here can actually be tested. (Intel has an emulator that is supposed to work but just crashes on my machine...) I'm not entirely sure if byte swapping is required. Intel has a white paper out that suggests it isn't (and really it would have been stupid of them to not build this into the aes instructions), but who knows. If it turns out to be necessary there is a pretty fast bswap instruction for SSE anyway.
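A minimal sketch of the core of such an implementation using the Intel intrinsics, with the 11 round keys assumed already expanded (e.g. via _mm_aeskeygenassist_si128). This is illustrative, not the code the commit added; the real code fully unrolls the loop, as the message discusses:

    #include <wmmintrin.h>   // AES-NI intrinsics; compile with -maes

    // Encrypt one block with pre-expanded AES-128 round keys.
    __m128i aes128_encrypt_block(__m128i block, const __m128i K[11])
       {
       block = _mm_xor_si128(block, K[0]);        // initial key addition
       for(int i = 1; i != 10; ++i)
          block = _mm_aesenc_si128(block, K[i]);  // one full AES round
       return _mm_aesenclast_si128(block, K[10]); // final round omits MixColumns
       }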
* Stub for AES class using Intel's AES-NI instructions and an engine for | lloyd | 2009-11-06 | 7 | -0/+238
| | | | | providing it. Also stubs in the engine for VIA's AES instructions, but needs CPUID checking also.
* The default_submodel option was used by configure.pl, but configure.py | lloyd | 2009-11-06 | 17 | -39/+8
| | | | | | ignores it; unless it can detect (or is asked to use) a specific model, it compiles for the baseline ISA. Remove the default_submodel entries from the arch files.
* The code for handling SIMD ISA extensions actually works fine for general | lloyd | 2009-11-06 | 6 | -35/+44
| | | | | | | | ISA extensions (Intel's AES-NI, for instance), so change everything to reflect that. Also rename some of the amd64 models, and add entries for k10, nehalem, and westmere processors.
* Make it possible to explicitly enable SIMD extensions. | lloyd | 2009-11-06 | 1 | -19/+28
| | | | | | | | | There is no point, as far as I can see, in being able to explicitly disable a SIMD or other ISA extension, because if you are compiling for that particular CPU the compiler might well choose to insert CPU-specific instructions anyway. For instance, if one is compiling on a P4 but wants to disable SSE2, the right thing to do is compile for (say) an i686, which ensures that no P4 instructions will be emitted.
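As a usage sketch: the --with-isa option from the "Respect --with-isa" commit above is the enable knob; the exact value syntax shown here is an assumption:

    # build for a generic i686 but explicitly enable the SSE2 code paths
    ./configure.py --cpu=i686 --with-isa=sse2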
* Tick to 1.9.3-dev | lloyd | 2009-11-06 | 6 | -37/+27
| | | | | Rename BOTAN_UNALIGNED_LOADSTOR_OK to BOTAN_UNALIGNED_MEMORY_ACCESS_OK which is somewhat more clear as to the point.
* Generate SIMD macro flags for build.h from data in build-data/arch for | lloyd | 2009-11-06 | 6 | -6/+70
| | | | | | SSE2, SSSE3, NEON, and AltiVec. Add entries for Intel Atom, POWER6 and POWER7, and the Cortex A8 and A9.
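The result is presumably a set of simple defines in build.h; a hypothetical excerpt follows. The SSE2 macro name appears in these commits, and the others follow the BOTAN_TARGET_CPU_HAS_[SSE2|ALTIVEC|NEON|...] pattern mentioned in the SIMD_32 wrapper commit below:

    /* hypothetical build.h excerpt for an SSSE3-capable x86-64 target */
    #define BOTAN_TARGET_CPU_HAS_SSE2
    #define BOTAN_TARGET_CPU_HAS_SSSE3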
* Add an andc operation in SSE2 and AltiVec; may be useful for Serpent sboxes | lloyd | 2009-11-04 | 4 | -4/+22
|
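The wrinkle here is operand order: SSE2's _mm_andnot_si128(a, b) computes (~a) & b, while AltiVec's vec_andc(a, b) computes a & ~b. A sketch using the AltiVec-style convention (whether the actual wrapper uses this argument order is an assumption):

    #include <emmintrin.h>

    // andc(x, y) = x & ~y, following AltiVec's vec_andc operand order.
    // _mm_andnot_si128 complements its first argument, so swap them.
    inline __m128i andc(__m128i x, __m128i y)
       {
       return _mm_andnot_si128(y, x);
       }

Having this as a single primitive can save a separate NOT in boolean-heavy code like Serpent's sboxes.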
* Set BOTAN_TARGET_CPU_HAS_SSE2 macro if amd64. Not set at all for any 32-bit | lloyd | 2009-11-04 | 1 | -0/+3
| | | | | x86 currently. This should be fixed. But it's an improvement over having to always set it manually, at least.
* Indent and avoid one extra assignment | lloyd | 2009-11-04 | 1 | -3/+2
|
* propagate from branch 'net.randombit.botan.1_8' (head … | lloyd | 2009-11-03 | 559 | -6939/+13364
|\ | | | | | | | | | | 6e8c18515725a70923b34118951252723dd4c29a) to branch 'net.randombit.botan' (head 77ba4ea5a4be36d6d029bcc852b2271edff0d679)
| * propagate from branch 'net.randombit.botan.1_8' (head … (tag: 1.9.2) | lloyd | 2009-11-03 | 2 | -2/+3
| |\ | | | | | | | | | | | | | | | a101c8c86b755a666c72baf03154230e09e0667e) to branch 'net.randombit.botan' (head 948905e3872b6f5904686533c6aa87d38ff90a71)
| * | Update for 1.9.2 release 2009-11-03 | lloyd | 2009-11-03 | 4 | -11/+5
| | |
| * | Convert the rest of the hash functions to use the array-based load instructions. | lloyd | 2009-11-03 | 5 | -40/+41
| | | | | | | | | | | | | | | | | | | | | I'm not totally happy with this - in particular, in all cases the size is a compile-time constant - it would be nice to make use of this via template metaprogramming. Also, for matching-endian loads a straight memcpy would do the work, which would probably be even faster.
| * | Slight cleanups in the Altivec detection code for readability. | lloyd | 2009-10-29 | 1 | -5/+12
| | |
| * | Add a new looping load_be / load_le for loading large arrays at once, and | lloyd | 2009-10-29 | 11 | -49/+104
| | | | | | | | | | | | | | | | | | | | | | | | change some of the hash functions to use it as low-hanging fruit. Probably could use further optimization (it just unrolls x4 currently), but merely having the syntax is good, as it allows optimizing many functions at once (e.g. using SSE2 to do 4-way byteswaps).
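A sketch of the shape of such a loop, simplified (no unrolling, and the host-endian macro is a stand-in for whatever the build system defines). It also realizes the straight-memcpy idea from the hash-conversion commit above:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Load 'count' big-endian 32-bit words from 'in' into 'out'.
    void load_be(uint32_t out[], const uint8_t in[], size_t count)
       {
    #if defined(HOST_IS_BIG_ENDIAN) // assumption: stand-in for the real macro
       std::memcpy(out, in, count * sizeof(uint32_t)); // matching endianness: straight copy
    #else
       for(size_t i = 0; i != count; ++i)
          out[i] = (uint32_t(in[4*i])   << 24) |
                   (uint32_t(in[4*i+1]) << 16) |
                   (uint32_t(in[4*i+2]) <<  8) |
                   (uint32_t(in[4*i+3]));
    #endif
       }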
| * | Fix cpuid with icc (tested with 11.1) | lloyd | 2009-10-29 | 2 | -2/+5
| | | | | | | | | | | | | | | Document SHA optimizations, AltiVec runtime checking, fixes for cpuid for both icc and msvc.
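For context, the portability issue is that each compiler exposes cpuid differently; a sketch of the two main routes (MSVC's __cpuid and GCC's <cpuid.h> are real interfaces; how icc is routed between them per platform is glossed over here):

    #include <cstdint>

    #if defined(_MSC_VER)
      #include <intrin.h>   // __cpuid
    #else
      #include <cpuid.h>    // __get_cpuid (GCC and GCC-compatible compilers)
    #endif

    // Return the EDX feature word of CPUID leaf 1 (bit 26 = SSE2)
    uint32_t cpuid_leaf1_edx()
       {
    #if defined(_MSC_VER)
       int regs[4] = { 0 };
       __cpuid(regs, 1);
       return static_cast<uint32_t>(regs[3]);
    #else
       unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
       __get_cpuid(1, &eax, &ebx, &ecx, &edx);
       return edx;
    #endif
       }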
| * | propagate from branch 'net.randombit.botan' (head … | lloyd | 2009-10-29 | 30 | -964/+1723
| |\ \ | | | | | | | | | | | | | | | | | | | | 4fd7eb9630271d3c1dfed21987ef864680d4ce7b) to branch 'net.randombit.botan.general-simd' (head 91df868149cdc4754d340e6103028acc82182609)
| | * | Clean up prep00_15 - same speed on Core2 | lloyd | 2009-10-29 | 1 | -16/+10
| | | |
| | * | Clean up the SSE2 SHA-1 code quite a bit, make better use of C++ features | lloyd | 2009-10-29 | 2 | -308/+267
| | | | | | | | | | | | | | | | and also make it stylistically much closer to the standard SHA-1 code.
| | * | Format for easier reading | lloyd | 2009-10-29 | 1 | -31/+43
| | | |
| | * | Small cleanups (remove tab characters, change macros to fit the rest of | lloyd | 2009-10-29 | 1 | -123/+121
| | | | | | | | | | | | | | | | the code stylistically, etc)
| | * | Give each version of SIMD_32 a public bswap() | lloyd | 2009-10-29 | 3 | -11/+29
| | | |
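SSE2 has no byte shuffle (pshufb arrives with SSSE3), so a 4x32 bswap is typically built from 16-bit shifts; one standard construction follows (whether this matches the commit's code is an assumption). The AltiVec version can instead use vec_perm, and the scalar version a rotate-and-mask:

    #include <emmintrin.h>

    // Byteswap each of the four 32-bit lanes of x.
    inline __m128i bswap_4x32(__m128i x)
       {
       // Swap adjacent bytes within each 16-bit lane
       __m128i t = _mm_or_si128(_mm_slli_epi16(x, 8), _mm_srli_epi16(x, 8));
       // Swap the 16-bit halves of each 32-bit lane
       return _mm_or_si128(_mm_slli_epi32(t, 16), _mm_srli_epi32(t, 16));
       }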
| | * | Add new function enabled() to each of the SIMD_32 instantiations which | lloyd | 2009-10-29 | 3 | -1/+9
| | | | | | | | | | | | | | | | | | | | returns true if they might plausibly work. AltiVec and SSE2 versions call into CPUID, scalar version always works.
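Presumably something along these lines (a sketch; the method naming is an assumption patterned on the CPUID::have_altivec commit below):

    namespace CPUID { bool have_altivec(); }  // runtime detection, added below

    struct SIMD_Altivec
       {
       static bool enabled() { return CPUID::have_altivec(); }  // CPUID check
       };

    struct SIMD_Scalar
       {
       static bool enabled() { return true; }  // plain integer code always works
       };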
| | * | No ||= operator! | lloyd | 2009-10-29 | 1 | -7/+7
| | | |
| | * | Add CPUID::have_altivec for AltiVec runtime detection. | lloyd | 2009-10-29 | 3 | -0/+63
| | | | | | | | | | | | | | | | | | | | Relies on mfspr emulation/trapping by the kernel, which works on (at least) Linux and NetBSD.
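The idea: executing an AltiVec instruction on a non-AltiVec CPU raises SIGILL, but reading a privileged SPR with mfspr traps to the kernel, which on Linux and NetBSD emulates the read. A PowerPC-only sketch (SPR 287 is the processor version register; the model list here is abbreviated, so treat the specific PVR values as assumptions to verify):

    static bool altivec_check_pvr()
       {
       unsigned int pvr = 0;

       // SPR 287 = PVR; privileged, but the kernel traps and
       // emulates the read from user space
       __asm__ __volatile__("mfspr %0, 287" : "=r" (pvr));

       const unsigned short model = static_cast<unsigned short>(pvr >> 16);

       // e.g. 0x000C = 7400, 0x8000 = 7450, 0x0039 = 970 (abbreviated list)
       return (model == 0x000C || model == 0x8000 || model == 0x0039);
       }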
| | * | Rename sse2 engine to simd | lloyd | 2009-10-29 | 2 | -2/+2
| | | |
| | * | Use register writes in the Altivec code for stores because Altivec's handling | lloyd | 2009-10-29 | 1 | -7/+16
| | | | | | | | | | | | | | | | | | | | | | | | for unaligned writes is messy as hell. If writes are batched this is somewhat easier to deal with (somewhat).
| | * | Kill realnames on new modules not in mainline | lloyd | 2009-10-29 | 3 | -5/+0
| | | |
| | * | propagate from branch 'net.randombit.botan' (head … | lloyd | 2009-10-29 | 23 | -621/+1324
| | |\ \ | | | | | | | | | | | | | | | | | | | | | | | | | 54d2cc7b00ecd5f41295e147d23ab6d294309f61) to branch 'net.randombit.botan.general-simd' (head 9cb1b5f00bfefd05cd9555489db34e6d86867aca)
| | | * \ propagate from branch 'net.randombit.botan' (head … | lloyd | 2009-10-29 | 23 | -621/+1324
| | | |\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | 8fb69dd1c599ada1008c4cab2a6d502cbcc468e0) to branch 'net.randombit.botan.general-simd' (head c05c9a6d398659891fb8cca170ed514ea7e6476d)
| | | | * | Rename SSE2 stuff to be generally SIMD since it supports at least SSE2 | lloyd | 2009-10-29 | 16 | -135/+126
| | | | | | | | | | | | | | | | | | | | | | | | and Altivec (though Altivec is seemingly slower ATM...)
| | | | * | Add copyright + license on the new SIMD files | lloyd | 2009-10-28 | 4 | -2/+14
| | | | | |
| | | | * | Document SIMD changes | lloyd | 2009-10-28 | 1 | -0/+2
| | | | | |
| | | | * | propagate from branch 'net.randombit.botan' (head … | lloyd | 2009-10-28 | 12 | -404/+1101
| | | | |\ \ | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | bf629b13dd132b263e76a72b7eca0f7e4ab19aac) to branch 'net.randombit.botan.general-simd' (head f731cff08ff0d04c062742c0c6cfcc18856400ea)
| | | | | * | Add an AltiVec SIMD_32 implementation. Tested and works for Serpent and XTEA | lloyd | 2009-10-28 | 1 | -0/+178
| | | | | | | on a PowerPC 970 running Gentoo with GCC 4.3.4. Uses the GCC syntax for creating literal values instead of the Motorola syntax [{1,2,3,4} instead of (1,2,3,4)]. In tests so far, this is much, much slower than either the standard scalar code or the SIMD-in-scalar-registers code. It looks like for whatever reason GCC is refusing to inline the function SIMD_Altivec(__vector unsigned int input) { reg = input; } and calls it with a branch hundreds of times in each function. I don't know if this is the entire reason it's slower, but it definitely can't be helping. The code handles unaligned loads OK but assumes stores are to an aligned address. This will fail drastically some day, and needs to be fixed to either use scalar stores, which (most?) PPCs will handle (if slowly), or batch the loads and stores so we can work across them. Considering the code so far loads 4 vectors of data in one go, this would probably be a big win (and also for loads, since instead of doing 8 loads for 4 registers only 5 are needed).
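The two literal syntaxes mentioned, for reference (the Motorola form is shown commented out, since stock GCC rejects it):

    // GCC syntax, as used by this code (requires -maltivec):
    __vector unsigned int x = {1, 2, 3, 4};

    // Motorola syntax, accepted by some other compilers:
    // __vector unsigned int y = (__vector unsigned int)(1, 2, 3, 4);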
| | | | | * | Define SSE rotate_right in terms of rotate left, and load_be in terms | lloyd | 2009-10-28 | 1 | -3/+2
| | | | | | | | | | | | | | | | | | | | | | | | | | | | of load_le + bswap
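The identities involved, sketched for SSE2 (which has no rotate instruction, so rotate-left is itself two shifts and an OR; valid for 0 < N < 32):

    #include <emmintrin.h>

    template<int N>
    inline __m128i rotl_4x32(__m128i x)   // rotate each 32-bit lane left by N
       {
       return _mm_or_si128(_mm_slli_epi32(x, N), _mm_srli_epi32(x, 32 - N));
       }

    template<int N>
    inline __m128i rotr_4x32(__m128i x)   // rotr(x, N) == rotl(x, 32 - N)
       {
       return rotl_4x32<32 - N>(x);
       }

load_be then falls out as load_le followed by the 4x32 bswap shown earlier.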
| | | | | * | Add XTEA decryption | lloyd | 2009-10-26 | 1 | -11/+47
| | | | | | |
| | | | | * | Add subtraction operators to SIMD_32 classes, needed for XTEA decrypt | lloyd | 2009-10-26 | 2 | -0/+26
| | | | | | |
| | | | | * | Add a wrapper for a set of SSE2 operations with convenient syntax for 4x32 | lloyd | 2009-10-26 | 11 | -404/+862
| | | | | | | operations. Also add a pure scalar code version. Convert Serpent to use this new interface, and add an implementation of XTEA in SIMD. The wrappers plus the scalar version allow SIMD-ish code to work on all platforms. This is often a win due to better ILP being visible to the processor (as with the recent XTEA optimizations). The only real danger is register starvation, mostly an issue on x86 these days. So it may (or may not) be a win to consolidate the standard C++ versions and the SIMD versions together. Future work:
| | | | | | | - Add AltiVec/VMX version
| | | | | | | - Maybe also for ARM's NEON extension? Less pressing, I would think.
| | | | | | | - Convert SHA-1 code to use SIMD_32
| | | | | | | - Add XTEA SIMD decryption (currently only encrypt)
| | | | | | | - Change SSE2 engine to SIMD_engine
| | | | | | | - Modify configure.py to set BOTAN_TARGET_CPU_HAS_[SSE2|ALTIVEC|NEON|XXX] macros
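A sketch of the wrapper idea: the same 4x32 interface backed by SSE2 here, with a scalar struct of four words as a drop-in replacement. The name and exact operation set are illustrative, not the class this commit added:

    #include <emmintrin.h>

    // 4x32 SIMD wrapper: ciphers like Serpent and XTEA are written
    // against this interface and process four blocks in parallel.
    class SIMD_4x32
       {
       public:
          explicit SIMD_4x32(__m128i v) : reg(v) {}

          SIMD_4x32 operator+(SIMD_4x32 o) const
             { return SIMD_4x32(_mm_add_epi32(reg, o.reg)); }
          SIMD_4x32 operator^(SIMD_4x32 o) const
             { return SIMD_4x32(_mm_xor_si128(reg, o.reg)); }
          SIMD_4x32 operator&(SIMD_4x32 o) const
             { return SIMD_4x32(_mm_and_si128(reg, o.reg)); }

          // A scalar version exposes the same interface backed by a
          // plain uint32_t[4], so generic code builds everywhere.
       private:
          __m128i reg;
       };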
| * | | | | | Unroll SHA-1's expansion loop from x4 to x8; ~7% faster on Core2 | lloyd | 2009-10-29 | 1 | -1/+5
| | | | | | |
| * | | | | | Unroll the expansion loop in both SHA-2 implementations by 8. On a Core2, | lloyd | 2009-10-29 | 2 | -13/+29
| |/ / / / / | | | | | | | | | | | | | | | | | | SHA-256 gets ~7% faster, SHA-512 ~10%.
| * / / / / Kill straggling realnames | lloyd | 2009-10-29 | 2 | -4/+0
| |/ / / /
| * | | | Hurd file was missing txt extension, must have missed it before? | lloyd | 2009-10-29 | 1 | -0/+0
| | | | |