| Commit message | Author | Age | Files | Lines |
The test_fuzzers.py script is very slow, especially on CI. Add a mode
where the harness accepts many files on the command line and tests
each of them in turn. This is hundreds of times faster, as it avoids
all the fork/exec overhead.
It has the downside that you can't tell which input caused a crash, so
retain the old behavior behind a --one-at-a-time option for debugging work.
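The batch mode described above can be sketched roughly as follows. This is illustrative only: `test_one_input`, `run_inputs`, and `write_demo_file` are hypothetical names, and the real harness is a script driving fuzzer binaries rather than this in-process stub.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Stand-in for the real fuzzer entry point (an actual harness would
// invoke something like a LLVMFuzzerTestOneInput-style target):
int test_one_input(const uint8_t* data, size_t len) {
   (void)data;
   (void)len;
   return 0;
}

// Batch mode: feed every named file to the target within one process,
// avoiding a fork/exec per input. Returns how many files were tested.
size_t run_inputs(const std::vector<std::string>& paths) {
   size_t tested = 0;
   for(const auto& path : paths) {
      std::FILE* f = std::fopen(path.c_str(), "rb");
      if(f == nullptr)
         continue; // skip unreadable inputs
      std::vector<uint8_t> buf(65536);
      const size_t got = std::fread(buf.data(), 1, buf.size(), f);
      std::fclose(f);
      test_one_input(buf.data(), got);
      ++tested;
   }
   return tested;
}

// Helper used only for the self-test below:
bool write_demo_file(const std::string& path, const std::string& contents) {
   std::FILE* f = std::fopen(path.c_str(), "wb");
   if(f == nullptr)
      return false;
   std::fwrite(contents.data(), 1, contents.size(), f);
   std::fclose(f);
   return true;
}
```

The downside noted above is visible here: a crash in `test_one_input` kills the whole batch without identifying which file triggered it, which is why the one-file-per-process mode is kept for debugging.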
Coverage is the slowest build; moving it up puts it into the initial
tranche of builds, so it finishes before the overall run ends.
It is slower to start up, and the overall build ends up waiting for these
last 2 builds. By running them at the front of the line they can overlap
with other builds.
Running them all takes a long time, especially in CI, and doesn't
really add much.
The cache size increases will continue until hit rate improves.
An apparently undocumented side effect of a small git pull depth: if
more than N new commits are pushed to master while an earlier build is
running, the old build starts failing, because when CI does the pull it
does not find the commit it is building within the checked-out tree.
Actual bug, flagged by Coverity
Flagged by Coverity
This skips putting the git revision in the build.h header. Since that
value changes on every commit, including it effectively disables
ccache's direct mode (which is faster than preprocessor mode) and also
prevents any caching of the amalgamation file (since version.cpp
expands the macro).
Even 600M is not sufficient for the coverage build
Using the phrase "timestamp" makes it sound like it has some relation
to the wall clock, which it does not.
No reason for these to be inlined
Only used in one place, where constant time doesn't matter, but it
can't hurt. Remove low_bit; it can be replaced by ctz.
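As a sketch of the replacement (using the GCC/Clang builtin; the function name here is illustrative, and the 1-based index convention is an assumption about the old helper's contract):

```cpp
#include <cstdint>

// Hypothetical low_bit replacement: returns the 1-based index of the
// lowest set bit, or 0 for an input of 0, derived from a
// count-trailing-zeros primitive. Not constant-time, which is
// acceptable at the single remaining call site.
uint32_t low_bit_via_ctz(uint32_t n) {
   if(n == 0)
      return 0;
   // __builtin_ctz counts trailing zero bits (GCC/Clang builtin);
   // adding 1 converts that count to a 1-based bit index.
   return static_cast<uint32_t>(__builtin_ctz(n)) + 1;
}
```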
Reading the system timestamp first caused every event to get a few
hundred cycles tacked onto it. This only mattered when the thing being
tested was very fast.
Still insufficient for debug builds
They get compiled as constant-time on x86-64 with GCC, but I don't
think this can be totally relied on; it is an improvement in any case.
It is also faster, because we compute it recursively.
With compression disabled, the cache is too small for builds that use
debug info, which causes a 100% miss rate.
I couldn't get anything to link with PGI, but at least it builds again.
The decoding leaked some information about the delimiter index
due to copying only exactly input_len - delim_idx bytes. I can't
articulate a specific attack that would work here, but it is easy
enough to fix this to run in const time instead, where all bytes
are accessed regardless of the length of the padding.
CT::copy_out is O(n^2) and thus terrible, but in practice it is only
used with RSA decryption, and multiplication is also O(n^2) with the
modulus size, so a few extra cycles here doesn't matter much.
It was only needed for one case, which is easily hardcoded. Include
rotate.h in all the source files that actually use rotr/rotl but
previously picked it up implicitly via the loadstor.h -> bswap.h ->
rotate.h include chain.
Since the CPU is the main bottleneck for the build, this is likely not helping.
Add tests for is_power_of_2
Using the Montgomery ladder for operator* was introduced in ca155a7e54;
previous versions did something different, which was itself vulnerable
to side channels, but not to the same issue as CVE-2018-20187.
As doing so means that information about the high bits of the scalar can leak
via timing since the loop bound depends on the length of the scalar. An attacker
who has such information can perform a more efficient brute force attack (using
Pollard's rho) than would be possible otherwise.
Found by Ján Jančár (@J08nY) using ECTester (https://github.com/crocs-muni/ECTester)
CVE-2018-20187
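The shape of the fix can be illustrated with plain integers as a stand-in for the point arithmetic (this is not Botan's actual ladder code): the loop always runs for the full, public bit length of the group order, never for the bit length of the secret scalar.

```cpp
#include <cstdint>

// Double-and-add with a fixed iteration count: order_bits is public,
// so the loop bound leaks nothing about the scalar's magnitude.
// (Integer multiply models scalar multiplication of a group element.)
uint64_t scalar_mul_fixed_bits(uint64_t base, uint64_t scalar, unsigned order_bits) {
   uint64_t acc = 0;
   // Always run order_bits iterations, even when the scalar's high
   // bits are zero:
   for(unsigned i = order_bits; i-- > 0;) {
      acc *= 2;                               // "double"
      const uint64_t bit = (scalar >> i) & 1;
      acc += base * bit;                      // "add", selected by multiply, not branch
   }
   return acc;
}
```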
This doesn't matter much, but it causes confusing valgrind output
during const-time checking, since valgrind distinguishes between the
two possible conditional returns.
We know the lookup table size is some power of 2; unrolling a bit
allows more IPC.
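A rough sketch of the idea (illustrative, not Botan's actual table-scan code, and assuming a table of at least four entries): because the size is a power of two, the scan can step four entries per iteration, giving the CPU independent load-and-mask chains to execute in parallel.

```cpp
#include <cstddef>
#include <cstdint>

// 0x00/0xFF mask from an equality test, as used in constant-time scans:
static uint8_t eq_mask(size_t a, size_t b) {
   return static_cast<uint8_t>(-static_cast<int8_t>(a == b));
}

// Constant-time scan of a table whose size is a power of two (>= 4):
// the four unrolled accesses per iteration are independent of each
// other, so the CPU can overlap their loads and logical ops.
uint8_t ct_table_lookup(const uint8_t* table, size_t table_size, size_t secret_idx) {
   uint8_t acc = 0;
   for(size_t i = 0; i < table_size; i += 4) {
      acc |= table[i + 0] & eq_mask(i + 0, secret_idx);
      acc |= table[i + 1] & eq_mask(i + 1, secret_idx);
      acc |= table[i + 2] & eq_mask(i + 2, secret_idx);
      acc |= table[i + 3] & eq_mask(i + 3, secret_idx);
   }
   return acc;
}
```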
The code is easier to understand, and it may let the CPU interleave the
loads and logical ops better. Slightly faster on my machine.
Improves ECDSA signing by 15%