Linux 4.19.45
From 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 Mon Sep 17 00:00:00 2001
From: Eric Biggers <ebiggers@google.com>
Date: Tue, 9 Apr 2019 23:46:32 -0700
Subject: crypto: arm64/aes-neonbs - don't access already-freed walk.iv

From: Eric Biggers <ebiggers@google.com>

commit 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 upstream.

If the user-provided IV needs to be aligned to the algorithm's
alignmask, then skcipher_walk_virt() copies the IV into a new aligned
buffer walk.iv.  But skcipher_walk_virt() can fail afterwards, and then
if the caller unconditionally accesses walk.iv, it's a use-after-free.

xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
by this despite unconditionally accessing walk.iv.  However, this is more
subtle than desired, and unconditionally accessing walk.iv has caused a
real problem in other algorithms.  Thus, update xts-aes-neonbs to start
checking the return value of skcipher_walk_virt().

Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
Cc: <stable@vger.kernel.org> # v4.11+
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/arm64/crypto/aes-neonbs-glue.c | 2 ++
 1 file changed, 2 insertions(+)

--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -304,6 +304,8 @@ static int __xts_crypt(struct skcipher_r
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, false);
+	if (err)
+		return err;
 
 	kernel_neon_begin();
 	neon_aes_ecb_encrypt(walk.iv, walk.iv, ctx->twkey, ctx->key.rounds, 1);