+From 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 Mon Sep 17 00:00:00 2001
+From: Eric Biggers <ebiggers@google.com>
+Date: Tue, 9 Apr 2019 23:46:32 -0700
+Subject: crypto: arm64/aes-neonbs - don't access already-freed walk.iv
+
+From: Eric Biggers <ebiggers@google.com>
+
+commit 4a8108b70508df0b6c4ffa4a3974dab93dcbe851 upstream.
+
+If the user-provided IV needs to be aligned to the algorithm's
+alignmask, then skcipher_walk_virt() copies the IV into a new aligned
+buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
+if the caller unconditionally accesses walk.iv, it's a use-after-free.
+
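+A minimal sketch of the buggy pattern (illustrative only; do_crypt()
+is a placeholder, not a function from any of these drivers):
+
+	struct skcipher_walk walk;
+	int err;
+
+	err = skcipher_walk_virt(&walk, req, true);
+	/* If skcipher_walk_virt() failed after copying the IV, the
+	 * aligned buffer has already been freed, so reading walk.iv
+	 * here is a use-after-free: */
+	do_crypt(ctx, walk.iv);
+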
+xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected
+by this despite unconditionally accessing walk.iv. However, this is
+more subtle than desired, and unconditionally accessing walk.iv has caused a
+real problem in other algorithms. Thus, update xts-aes-neonbs to start
+checking the return value of skcipher_walk_virt().
+
+Fixes: 1abee99eafab ("crypto: arm64/aes - reimplement bit-sliced ARM/NEON implementation for arm64")
+Cc: <stable@vger.kernel.org> # v4.11+
+Signed-off-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/arm64/crypto/aes-neonbs-glue.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/arch/arm64/crypto/aes-neonbs-glue.c
++++ b/arch/arm64/crypto/aes-neonbs-glue.c
+@@ -307,6 +307,8 @@ static int __xts_crypt(struct skcipher_r
+ int err;
+
+ err = skcipher_walk_virt(&walk, req, true);
++ if (err)
++ return err;
+
+ kernel_neon_begin();
+
+From edaf28e996af69222b2cb40455dbb5459c2b875a Mon Sep 17 00:00:00 2001
+From: Eric Biggers <ebiggers@google.com>
+Date: Tue, 9 Apr 2019 23:46:30 -0700
+Subject: crypto: salsa20 - don't access already-freed walk.iv
+
+From: Eric Biggers <ebiggers@google.com>
+
+commit edaf28e996af69222b2cb40455dbb5459c2b875a upstream.
+
+If the user-provided IV needs to be aligned to the algorithm's
+alignmask, then skcipher_walk_virt() copies the IV into a new aligned
+buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
+if the caller unconditionally accesses walk.iv, it's a use-after-free.
+
+salsa20-generic doesn't set an alignmask, so currently it isn't affected
+by this despite unconditionally accessing walk.iv. However, this is
+more subtle than desired, and it was actually broken prior to the alignmask
+being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup
+and convert to skcipher API").
+
+Since salsa20-generic does not update the IV and does not need any IV
+alignment, update it to use the IV supplied by the caller (req->iv
+upstream; desc->info in the blkcipher API used in this backport)
+instead of walk.iv.
+
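+A minimal sketch of the resulting pattern (names taken from the patch
+below; desc->info is the caller-owned IV in the blkcipher API, so it
+stays valid regardless of whether the walk was set up successfully):
+
+	blkcipher_walk_init(&walk, dst, src, nbytes);
+	err = blkcipher_walk_virt_block(desc, &walk, 64);
+
+	/* Use the caller's IV (desc->info) instead of walk.iv, so
+	 * correctness no longer depends on the walk setup having
+	 * succeeded. */
+	salsa20_ivsetup(ctx, desc->info);
+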
+Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ crypto/salsa20_generic.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/crypto/salsa20_generic.c
++++ b/crypto/salsa20_generic.c
+@@ -186,7 +186,7 @@ static int encrypt(struct blkcipher_desc
+ blkcipher_walk_init(&walk, dst, src, nbytes);
+ err = blkcipher_walk_virt_block(desc, &walk, 64);
+
+- salsa20_ivsetup(ctx, walk.iv);
++ salsa20_ivsetup(ctx, desc->info);
+
+ while (walk.nbytes >= 64) {
+ salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,