From: Greg Kroah-Hartman
Date: Sat, 18 May 2019 07:01:48 +0000 (+0200)
Subject: 4.9-stable patches
X-Git-Tag: v4.9.178~19
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=77a0bc7069bd033fb84fb8fcf9f1ebaed79f3558;p=thirdparty%2Fkernel%2Fstable-queue.git

4.9-stable patches

added patches:
	crypto-arm-aes-neonbs-don-t-access-already-freed-walk.iv.patch
	crypto-gcm-fix-error-return-code-in-crypto_gcm_create_common.patch
	crypto-gcm-fix-incompatibility-between-gcm-and-gcm_base.patch
	crypto-salsa20-don-t-access-already-freed-walk.iv.patch
---

diff --git a/queue-4.9/crypto-arm-aes-neonbs-don-t-access-already-freed-walk.iv.patch b/queue-4.9/crypto-arm-aes-neonbs-don-t-access-already-freed-walk.iv.patch
new file mode 100644
index 00000000000..37af6c3e02e
--- /dev/null
+++ b/queue-4.9/crypto-arm-aes-neonbs-don-t-access-already-freed-walk.iv.patch
@@ -0,0 +1,52 @@
+From 767f015ea0b7ab9d60432ff6cd06b664fd71f50f Mon Sep 17 00:00:00 2001
+From: Eric Biggers
+Date: Tue, 9 Apr 2019 23:46:31 -0700
+Subject: crypto: arm/aes-neonbs - don't access already-freed walk.iv
+
+From: Eric Biggers
+
+commit 767f015ea0b7ab9d60432ff6cd06b664fd71f50f upstream.
+
+If the user-provided IV needs to be aligned to the algorithm's
+alignmask, then skcipher_walk_virt() copies the IV into a new aligned
+buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
+if the caller unconditionally accesses walk.iv, it's a use-after-free.
+
+arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't
+affected by this despite unconditionally accessing walk.iv. However
+this is more subtle than desired, and it was actually broken prior to
+the alignmask being removed by commit cc477bf64573 ("crypto: arm/aes -
+replace bit-sliced OpenSSL NEON code"). Thus, update xts-aes-neonbs to
+start checking the return value of skcipher_walk_virt().
+
+Fixes: e4e7f10bfc40 ("ARM: add support for bit sliced AES using NEON instructions")
+Cc: stable@vger.kernel.org # v3.13+
+Signed-off-by: Eric Biggers
+Signed-off-by: Herbert Xu
+Signed-off-by: Greg Kroah-Hartman
+
+
+---
+ arch/arm/crypto/aesbs-glue.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/arch/arm/crypto/aesbs-glue.c
++++ b/arch/arm/crypto/aesbs-glue.c
+@@ -265,6 +265,8 @@ static int aesbs_xts_encrypt(struct blkc
+ 
+ 	blkcipher_walk_init(&walk, dst, src, nbytes);
+ 	err = blkcipher_walk_virt_block(desc, &walk, 8 * AES_BLOCK_SIZE);
++	if (err)
++		return err;
+ 
+ 	/* generate the initial tweak */
+ 	AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
+@@ -289,6 +291,8 @@ static int aesbs_xts_decrypt(struct blkc
+ 
+ 	blkcipher_walk_init(&walk, dst, src, nbytes);
+ 	err = blkcipher_walk_virt_block(desc, &walk, 8 * AES_BLOCK_SIZE);
++	if (err)
++		return err;
+ 
+ 	/* generate the initial tweak */
+ 	AES_encrypt(walk.iv, walk.iv, &ctx->twkey);
diff --git a/queue-4.9/crypto-gcm-fix-error-return-code-in-crypto_gcm_create_common.patch b/queue-4.9/crypto-gcm-fix-error-return-code-in-crypto_gcm_create_common.patch
new file mode 100644
index 00000000000..6c718bc6294
--- /dev/null
+++ b/queue-4.9/crypto-gcm-fix-error-return-code-in-crypto_gcm_create_common.patch
@@ -0,0 +1,35 @@
+From 9b40f79c08e81234d759f188b233980d7e81df6c Mon Sep 17 00:00:00 2001
+From: Wei Yongjun
+Date: Mon, 17 Oct 2016 15:10:06 +0000
+Subject: crypto: gcm - Fix error return code in crypto_gcm_create_common()
+
+From: Wei Yongjun
+
+commit 9b40f79c08e81234d759f188b233980d7e81df6c upstream.
+
+Fix to return error code -EINVAL from the invalid alg ivsize error
+handling case instead of 0, as done elsewhere in this function.
+
+Signed-off-by: Wei Yongjun
+Signed-off-by: Herbert Xu
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ crypto/gcm.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/crypto/gcm.c
++++ b/crypto/gcm.c
+@@ -670,11 +670,11 @@ static int crypto_gcm_create_common(stru
+ 	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+ 
+ 	/* We only support 16-byte blocks. */
++	err = -EINVAL;
+ 	if (crypto_skcipher_alg_ivsize(ctr) != 16)
+ 		goto out_put_ctr;
+ 
+ 	/* Not a stream cipher? */
+-	err = -EINVAL;
+ 	if (ctr->base.cra_blocksize != 1)
+ 		goto out_put_ctr;
+ 
diff --git a/queue-4.9/crypto-gcm-fix-incompatibility-between-gcm-and-gcm_base.patch b/queue-4.9/crypto-gcm-fix-incompatibility-between-gcm-and-gcm_base.patch
new file mode 100644
index 00000000000..3eea63aee90
--- /dev/null
+++ b/queue-4.9/crypto-gcm-fix-incompatibility-between-gcm-and-gcm_base.patch
@@ -0,0 +1,137 @@
+From f699594d436960160f6d5ba84ed4a222f20d11cd Mon Sep 17 00:00:00 2001
+From: Eric Biggers
+Date: Thu, 18 Apr 2019 14:43:02 -0700
+Subject: crypto: gcm - fix incompatibility between "gcm" and "gcm_base"
+
+From: Eric Biggers
+
+commit f699594d436960160f6d5ba84ed4a222f20d11cd upstream.
+
+GCM instances can be created by either the "gcm" template, which only
+allows choosing the block cipher, e.g. "gcm(aes)"; or by "gcm_base",
+which allows choosing the ctr and ghash implementations, e.g.
+"gcm_base(ctr(aes-generic),ghash-generic)".
+
+However, a "gcm_base" instance prevents a "gcm" instance from being
+registered using the same implementations. Nor will the instance be
+found by lookups of "gcm". This can be used as a denial of service.
+Moreover, "gcm_base" instances are never tested by the crypto
+self-tests, even if there are compatible "gcm" tests.
+
+The root cause of these problems is that instances of the two templates
+use different cra_names. Therefore, fix these problems by making
+"gcm_base" instances set the same cra_name as "gcm" instances, e.g.
+"gcm(aes)" instead of "gcm_base(ctr(aes-generic),ghash-generic)".
+
+This requires extracting the block cipher name from the name of the ctr
+algorithm. It also requires starting to verify that the algorithms are
+really ctr and ghash, not something else entirely. But it would be
+bizarre if anyone were actually using non-gcm-compatible algorithms with
+gcm_base, so this shouldn't break anyone in practice.
+
+Fixes: d00aa19b507b ("[CRYPTO] gcm: Allow block cipher parameter")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers
+Signed-off-by: Herbert Xu
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ crypto/gcm.c | 34 +++++++++++-----------------------
+ 1 file changed, 11 insertions(+), 23 deletions(-)
+
+--- a/crypto/gcm.c
++++ b/crypto/gcm.c
+@@ -616,7 +616,6 @@ static void crypto_gcm_free(struct aead_
+ 
+ static int crypto_gcm_create_common(struct crypto_template *tmpl,
+ 				    struct rtattr **tb,
+-				    const char *full_name,
+ 				    const char *ctr_name,
+ 				    const char *ghash_name)
+ {
+@@ -657,7 +656,8 @@ static int crypto_gcm_create_common(stru
+ 		goto err_free_inst;
+ 
+ 	err = -EINVAL;
+-	if (ghash->digestsize != 16)
++	if (strcmp(ghash->base.cra_name, "ghash") != 0 ||
++	    ghash->digestsize != 16)
+ 		goto err_drop_ghash;
+ 
+ 	crypto_set_skcipher_spawn(&ctx->ctr, aead_crypto_instance(inst));
+@@ -669,24 +669,24 @@ static int crypto_gcm_create_common(stru
+ 
+ 	ctr = crypto_spawn_skcipher_alg(&ctx->ctr);
+ 
+-	/* We only support 16-byte blocks. */
++	/* The skcipher algorithm must be CTR mode, using 16-byte blocks. */
+ 	err = -EINVAL;
+-	if (crypto_skcipher_alg_ivsize(ctr) != 16)
++	if (strncmp(ctr->base.cra_name, "ctr(", 4) != 0 ||
++	    crypto_skcipher_alg_ivsize(ctr) != 16 ||
++	    ctr->base.cra_blocksize != 1)
+ 		goto out_put_ctr;
+ 
+-	/* Not a stream cipher? */
+-	if (ctr->base.cra_blocksize != 1)
++	err = -ENAMETOOLONG;
++	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME,
++		     "gcm(%s", ctr->base.cra_name + 4) >= CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	err = -ENAMETOOLONG;
+ 	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+ 		     "gcm_base(%s,%s)", ctr->base.cra_driver_name,
+ 		     ghash_alg->cra_driver_name) >=
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		goto out_put_ctr;
+ 
+-	memcpy(inst->alg.base.cra_name, full_name, CRYPTO_MAX_ALG_NAME);
+-
+ 	inst->alg.base.cra_flags = (ghash->base.cra_flags |
+ 				    ctr->base.cra_flags) & CRYPTO_ALG_ASYNC;
+ 	inst->alg.base.cra_priority = (ghash->base.cra_priority +
+@@ -728,7 +728,6 @@ static int crypto_gcm_create(struct cryp
+ {
+ 	const char *cipher_name;
+ 	char ctr_name[CRYPTO_MAX_ALG_NAME];
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	cipher_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(cipher_name))
+@@ -738,12 +737,7 @@ static int crypto_gcm_create(struct cryp
+ 	    CRYPTO_MAX_ALG_NAME)
+ 		return -ENAMETOOLONG;
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm(%s)", cipher_name) >=
+-	    CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, "ghash");
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, "ghash");
+ }
+ 
+ static struct crypto_template crypto_gcm_tmpl = {
+@@ -757,7 +751,6 @@ static int crypto_gcm_base_create(struct
+ {
+ 	const char *ctr_name;
+ 	const char *ghash_name;
+-	char full_name[CRYPTO_MAX_ALG_NAME];
+ 
+ 	ctr_name = crypto_attr_alg_name(tb[1]);
+ 	if (IS_ERR(ctr_name))
+@@ -767,12 +760,7 @@ static int crypto_gcm_base_create(struct
+ 	if (IS_ERR(ghash_name))
+ 		return PTR_ERR(ghash_name);
+ 
+-	if (snprintf(full_name, CRYPTO_MAX_ALG_NAME, "gcm_base(%s,%s)",
+-		     ctr_name, ghash_name) >= CRYPTO_MAX_ALG_NAME)
+-		return -ENAMETOOLONG;
+-
+-	return crypto_gcm_create_common(tmpl, tb, full_name,
+-					ctr_name, ghash_name);
++	return crypto_gcm_create_common(tmpl, tb, ctr_name, ghash_name);
+ }
+ 
+ static struct crypto_template crypto_gcm_base_tmpl = {
diff --git a/queue-4.9/crypto-salsa20-don-t-access-already-freed-walk.iv.patch b/queue-4.9/crypto-salsa20-don-t-access-already-freed-walk.iv.patch
new file mode 100644
index 00000000000..2e94806e513
--- /dev/null
+++ b/queue-4.9/crypto-salsa20-don-t-access-already-freed-walk.iv.patch
@@ -0,0 +1,45 @@
+From edaf28e996af69222b2cb40455dbb5459c2b875a Mon Sep 17 00:00:00 2001
+From: Eric Biggers
+Date: Tue, 9 Apr 2019 23:46:30 -0700
+Subject: crypto: salsa20 - don't access already-freed walk.iv
+
+From: Eric Biggers
+
+commit edaf28e996af69222b2cb40455dbb5459c2b875a upstream.
+
+If the user-provided IV needs to be aligned to the algorithm's
+alignmask, then skcipher_walk_virt() copies the IV into a new aligned
+buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then
+if the caller unconditionally accesses walk.iv, it's a use-after-free.
+
+salsa20-generic doesn't set an alignmask, so currently it isn't affected
+by this despite unconditionally accessing walk.iv. However this is more
+subtle than desired, and it was actually broken prior to the alignmask
+being removed by commit b62b3db76f73 ("crypto: salsa20-generic - cleanup
+and convert to skcipher API").
+
+Since salsa20-generic does not update the IV and does not need any IV
+alignment, update it to use req->iv instead of walk.iv.
+
+Fixes: 2407d60872dd ("[CRYPTO] salsa20: Salsa20 stream cipher")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers
+Signed-off-by: Herbert Xu
+Signed-off-by: Greg Kroah-Hartman
+
+
+---
+ crypto/salsa20_generic.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/crypto/salsa20_generic.c
++++ b/crypto/salsa20_generic.c
+@@ -186,7 +186,7 @@ static int encrypt(struct blkcipher_desc
+ 	blkcipher_walk_init(&walk, dst, src, nbytes);
+ 	err = blkcipher_walk_virt_block(desc, &walk, 64);
+ 
+-	salsa20_ivsetup(ctx, walk.iv);
++	salsa20_ivsetup(ctx, desc->info);
+ 
+ 	while (walk.nbytes >= 64) {
+ 		salsa20_encrypt_bytes(ctx, walk.dst.virt.addr,
diff --git a/queue-4.9/series b/queue-4.9/series
index c513208171c..51e4167592f 100644
--- a/queue-4.9/series
+++ b/queue-4.9/series
@@ -31,3 +31,7 @@ btrfs-do-not-start-a-transaction-at-iterate_extent_inodes.patch
 bcache-fix-a-race-between-cache-register-and-cacheset-unregister.patch
 bcache-never-set-key_ptrs-of-journal-key-to-0-in-journal_reclaim.patch
 ipmi-ssif-compare-block-number-correctly-for-multi-part-return-messages.patch
+crypto-gcm-fix-error-return-code-in-crypto_gcm_create_common.patch
+crypto-gcm-fix-incompatibility-between-gcm-and-gcm_base.patch
+crypto-salsa20-don-t-access-already-freed-walk.iv.patch
+crypto-arm-aes-neonbs-don-t-access-already-freed-walk.iv.patch
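
Appended editor's note (not part of the queue commit): the crux of the gcm/gcm_base fix
above is the cra_name derivation, where a ctr algorithm named "ctr(aes)" yields the
instance name "gcm(aes)" by validating the "ctr(" prefix and reusing the ctr name's
trailing ")". The small userspace C program below is a minimal sketch of that string
logic only; MAX_ALG_NAME and derive_gcm_name() are illustrative stand-ins, not kernel
API (the kernel uses CRYPTO_MAX_ALG_NAME and open-codes the strncmp()/snprintf() calls
shown in the diff).

#include <stdio.h>
#include <string.h>

#define MAX_ALG_NAME 128	/* stand-in for the kernel's CRYPTO_MAX_ALG_NAME */

/* Hypothetical helper mirroring the checks added in crypto_gcm_create_common():
 * the ctr algorithm's name must really be "ctr(...)", and the derived name
 * must fit in the buffer. */
static int derive_gcm_name(char *out, const char *ctr_name)
{
	if (strncmp(ctr_name, "ctr(", 4) != 0)
		return -1;	/* the kernel returns -EINVAL here */
	/* "gcm(" plus everything after "ctr(", reusing the trailing ")" */
	if (snprintf(out, MAX_ALG_NAME, "gcm(%s", ctr_name + 4) >= MAX_ALG_NAME)
		return -1;	/* the kernel returns -ENAMETOOLONG here */
	return 0;
}

int main(void)
{
	char name[MAX_ALG_NAME];

	if (derive_gcm_name(name, "ctr(aes)") == 0)
		printf("%s\n", name);	/* prints: gcm(aes) */
	return 0;
}

The >= comparison is deliberate: snprintf() returns the length the output would have
needed, so a return value equal to the buffer size already indicates truncation.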