author | Richard Yao <[email protected]> | 2023-02-28 20:28:50 -0500
---|---|---
committer | GitHub <[email protected]> | 2023-02-28 17:28:50 -0800
commit | d634d20d1be31dfa8cf06ef2dc96285baf81a2fb |
tree | 6b3fce1881a9aa5bb88250af783aeffceb8b5814 /module/icp/algs |
parent | 2f76797ad9b630751fad9b10aa7aa4a1737c4300 |
icp: Prevent compilers from optimizing away memset() in gcm_clear_ctx()
The recently merged commit f58e513f7408f353bf0151fdaf235d4e062e8950 was
intended to zero sensitive data before returning from encryption
functions, to harden the code against theoretical information leaks.
Unfortunately, the method it used is optimized away by the compiler, so
some information still leaks. This was confirmed by counting the
function calls in the disassembly.
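As an illustration only (this code is not from the patch), the pattern a
dead store elimination pass removes looks like the following: the zeroing
store is the last access to the buffer, so the compiler may prove it has
no observable effect and drop it.

#include <string.h>

/*
 * Hypothetical illustration: key[] is never read after the memset(), so
 * at -O2 a dead store elimination pass is free to drop the call and
 * leave the key material behind on the stack.
 */
void
use_key_example(void)
{
	unsigned char key[32];

	/* ... derive and use the key here ... */

	memset(key, 0, sizeof (key));	/* dead store: may be optimized away */
}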
After studying how the OpenBSD, FreeBSD and Linux kernels handle this,
and looking at our disassembly, I decided on a two-factor approach to
protect us from compiler dead store elimination passes.
The first factor is to stop trying to inline gcm_clear_ctx(). GCC does
not actually inline it in the first place, and testing suggests that
dead store elimination passes become more aggressive, in a bad way,
when inlining is forced, so we accept that and move gcm_clear_ctx()
into a C file.
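A minimal sketch of the shape of that change, assuming the previous
definition lived in a header as an inline function (the prior location is
not shown in this diff, and the type/header names here are illustrative):

/* Illustrative stand-in; the real gcm_ctx_t comes from the ICP headers. */
typedef struct gcm_ctx gcm_ctx_t;

/* Header: declaration only, with no inline hint. */
void gcm_clear_ctx(gcm_ctx_t *ctx);

/* modes.c: out-of-line definition (the full body is in the diff below). */
void
gcm_clear_ctx(gcm_ctx_t *ctx)
{
	(void) ctx;
	/* explicit_memset() the sensitive fields ... */
}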
The second factor is to implement an explicit_memset() function based on
the technique used by `secure_zero_memory()` in FreeBSD's blake2
implementation, which coincidentally is functionally identical to the
one used by Linux. The source of this technique appears to be an LLVM
bug report:
https://llvm.org/bugs/show_bug.cgi?id=15495
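In isolation, the technique is a memset() followed by an empty asm
statement that takes the buffer pointer as an input and declares a
"memory" clobber, so the compiler must assume the zeroed bytes can still
be observed. The version actually added to modes.c appears in the diff
below; this is a standalone sketch with a hypothetical caller.

#include <string.h>
#include <stddef.h>

/*
 * Standalone sketch of the barrier technique: the empty asm consumes the
 * pointer and clobbers memory, so the preceding stores cannot be treated
 * as dead.
 */
static void *
explicit_memset(void *s, int c, size_t n)
{
	memset(s, c, n);
	__asm__ __volatile__("" :: "r"(s) : "memory");
	return (s);
}

int
main(void)
{
	unsigned char key[32];

	/* ... pretend key material was derived and used here ... */
	explicit_memset(key, 0, sizeof (key));	/* not eliminated as a dead store */
	return (0);
}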
Unlike both FreeBSD and Linux, we explicitly avoid the inline keyword.
This is based on my observation that GCC's dead store elimination pass
becomes more aggressive when inlining is forced, and on the assumption
that it is equally aggressive when the compiler decides to inline a
call on its own.
Disassembly of GCC's output confirms that all 6 memset() calls remain
present with this patch applied.
Reviewed-by: Attila Fülöp <[email protected]>
Reviewed-by: Brian Behlendorf <[email protected]>
Signed-off-by: Richard Yao <[email protected]>
Closes #14544
Diffstat (limited to 'module/icp/algs')
-rw-r--r-- | module/icp/algs/modes/modes.c | 40 |
1 file changed, 40 insertions, 0 deletions
diff --git a/module/icp/algs/modes/modes.c b/module/icp/algs/modes/modes.c
index 60fc84c1b..6f6649b3b 100644
--- a/module/icp/algs/modes/modes.c
+++ b/module/icp/algs/modes/modes.c
@@ -154,3 +154,43 @@ crypto_free_mode_ctx(void *ctx)
 		kmem_free(ctx, sizeof (gcm_ctx_t));
 	}
 }
+
+static void *
+explicit_memset(void *s, int c, size_t n)
+{
+	memset(s, c, n);
+	__asm__ __volatile__("" :: "r"(s) : "memory");
+	return (s);
+}
+
+/*
+ * Clear sensitive data in the context and free allocated memory.
+ *
+ * ctx->gcm_remainder may contain a plaintext remainder. ctx->gcm_H and
+ * ctx->gcm_Htable contain the hash sub key which protects authentication.
+ * ctx->gcm_pt_buf contains the plaintext result of decryption.
+ *
+ * Although extremely unlikely, ctx->gcm_J0 and ctx->gcm_tmp could be used for
+ * a known plaintext attack, they consist of the IV and the first and last
+ * counter respectively. If they should be cleared is debatable.
+ */
+void
+gcm_clear_ctx(gcm_ctx_t *ctx)
+{
+	explicit_memset(ctx->gcm_remainder, 0, sizeof (ctx->gcm_remainder));
+	explicit_memset(ctx->gcm_H, 0, sizeof (ctx->gcm_H));
+#if defined(CAN_USE_GCM_ASM)
+	if (ctx->gcm_use_avx == B_TRUE) {
+		ASSERT3P(ctx->gcm_Htable, !=, NULL);
+		memset(ctx->gcm_Htable, 0, ctx->gcm_htab_len);
+		kmem_free(ctx->gcm_Htable, ctx->gcm_htab_len);
+	}
+#endif
+	if (ctx->gcm_pt_buf != NULL) {
+		memset(ctx->gcm_pt_buf, 0, ctx->gcm_pt_buf_len);
+		vmem_free(ctx->gcm_pt_buf, ctx->gcm_pt_buf_len);
+	}
+	/* Optional */
+	explicit_memset(ctx->gcm_J0, 0, sizeof (ctx->gcm_J0));
+	explicit_memset(ctx->gcm_tmp, 0, sizeof (ctx->gcm_tmp));
+}