| author | lloyd <[email protected]> | 2007-05-31 03:25:19 +0000 |
|---|---|---|
| committer | lloyd <[email protected]> | 2007-05-31 03:25:19 +0000 |
| commit | 55608e7dd1aa593944f967f2549564e4f42b654e (patch) | |
| tree | ec2ec03a762a6dac82eb608487d5394370135624 /src/sha256.cpp | |
| parent | 22ecdc45a0efa4c444d0b7010b7cd743aeb68c57 (diff) | |
Write functions to handle loading and saving words a block at a time, taking into
account endian differences.
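The load_be / store_be helpers referenced here are the ones used in the diff below. As a rough illustration of the portable (non-cast) form of such helpers — a minimal sketch only, not the actual code in Botan's loadstor.h — a big-endian load/store pair for 32-bit words might look like this:

```cpp
#include <cstddef>
#include <cstdint>

typedef unsigned char byte;
typedef std::uint32_t u32bit;

// Load the j-th word of a byte array, interpreting the bytes as big-endian.
template<typename T>
inline T load_be(const byte in[], std::size_t j)
   {
   in += j * sizeof(T);
   T out = 0;
   for(std::size_t k = 0; k != sizeof(T); ++k)
      out = static_cast<T>((out << 8) | in[k]);
   return out;
   }

// Store a 32-bit word into a byte array in big-endian order.
inline void store_be(u32bit in, byte out[4])
   {
   out[0] = static_cast<byte>(in >> 24);
   out[1] = static_cast<byte>(in >> 16);
   out[2] = static_cast<byte>(in >>  8);
   out[3] = static_cast<byte>(in);
   }
```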
The current code does not take advantage of the knowledge of which endianness
we are running on; an optimization suggested by Yves Jerschow is to use (unsafe)
casts to speed up the load/store operations. This turns out to provide large
performance increases (30% or more) in some cases.
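The cast-based shortcut mentioned above is not part of this commit; the following is only a hypothetical sketch of the idea, with WORDS_BIGENDIAN standing in for whatever configure-time endianness macro the build might define:

```cpp
#include <cstddef>
#include <cstdint>

typedef unsigned char byte;
typedef std::uint32_t u32bit;

// Hypothetical sketch of the cast-based fast path (not part of this commit).
// On a big-endian target the input bytes already have the in-memory layout of
// a u32bit, so the j-th word can be read directly instead of being assembled
// byte by byte. The cast is "unsafe" in that it assumes suitable alignment
// and sidesteps strict-aliasing rules.
#if defined(WORDS_BIGENDIAN) // assumed configure-time macro, for illustration
inline u32bit load_be(const byte in[], std::size_t j)
   {
   return reinterpret_cast<const u32bit*>(in)[j];
   }
#else
inline u32bit load_be(const byte in[], std::size_t j)
   {
   // Portable fallback: assemble the big-endian word a byte at a time.
   return (static_cast<u32bit>(in[4*j  ]) << 24) |
          (static_cast<u32bit>(in[4*j+1]) << 16) |
          (static_cast<u32bit>(in[4*j+2]) <<  8) |
          (static_cast<u32bit>(in[4*j+3]));
   }
#endif
```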
Even without the unsafe casts, this version seems to average a few percent
faster, probably because the longer loading loops have been partially or
fully unrolled.
This also makes the code implementing low-level algorithms like ciphers and
hashes a bit more succinct.
Diffstat (limited to 'src/sha256.cpp')
-rw-r--r-- | src/sha256.cpp | 6
1 file changed, 3 insertions, 3 deletions
diff --git a/src/sha256.cpp b/src/sha256.cpp
index 1a98d4560..ae9849a57 100644
--- a/src/sha256.cpp
+++ b/src/sha256.cpp
@@ -47,7 +47,7 @@ inline void F1(u32bit A, u32bit B, u32bit C, u32bit& D,
 void SHA_256::hash(const byte input[])
    {
    for(u32bit j = 0; j != 16; ++j)
-      W[j] = make_u32bit(input[4*j], input[4*j+1], input[4*j+2], input[4*j+3]);
+      W[j] = load_be<u32bit>(input, j);
    for(u32bit j = 16; j != 64; ++j)
       W[j] = sigma(W[j- 2], 17, 19, 10) + W[j- 7] +
              sigma(W[j-15],  7, 18,  3) + W[j-16];
@@ -99,8 +99,8 @@ void SHA_256::hash(const byte input[])
 *************************************************/
 void SHA_256::copy_out(byte output[])
    {
-   for(u32bit j = 0; j != OUTPUT_LENGTH; ++j)
-      output[j] = get_byte(j % 4, digest[j/4]);
+   for(u32bit j = 0; j != OUTPUT_LENGTH; j += 4)
+      store_be(digest[j/4], output + j);
    }
 
 /*************************************************
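For readers comparing the two copy_out loops above: both write the digest words in big-endian byte order, the old one a byte at a time via get_byte (where byte 0 is the most significant byte), the new one a word at a time via store_be. A small self-contained check of that equivalence, using sketched stand-ins for the helpers rather than Botan's actual headers, might look like:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

typedef unsigned char byte;
typedef std::uint32_t u32bit;

// Byte 0 is the most significant byte, matching big-endian output order.
inline byte get_byte(std::size_t n, u32bit x)
   {
   return static_cast<byte>(x >> (8 * (3 - n)));
   }

// Write one 32-bit word as four big-endian bytes.
inline void store_be(u32bit in, byte out[4])
   {
   out[0] = get_byte(0, in); out[1] = get_byte(1, in);
   out[2] = get_byte(2, in); out[3] = get_byte(3, in);
   }

int main()
   {
   const u32bit digest[2] = { 0x01234567, 0x89ABCDEF };
   const u32bit OUTPUT_LENGTH = 8;
   byte old_out[8], new_out[8];

   for(u32bit j = 0; j != OUTPUT_LENGTH; ++j)      // old loop: byte at a time
      old_out[j] = get_byte(j % 4, digest[j/4]);
   for(u32bit j = 0; j != OUTPUT_LENGTH; j += 4)   // new loop: word at a time
      store_be(digest[j/4], new_out + j);

   for(u32bit j = 0; j != OUTPUT_LENGTH; ++j)      // both produce identical output
      assert(old_out[j] == new_out[j]);
   return 0;
   }
```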