From: Greg Kroah-Hartman
Date: Wed, 27 Mar 2024 15:00:05 +0000 (+0100)
Subject: 6.1-stable patches
X-Git-Tag: v6.7.12~198
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=2810e86cdb04d807f0c233b8c34ec713f1d0d8c5;p=thirdparty%2Fkernel%2Fstable-queue.git

6.1-stable patches

added patches:
	init-kconfig-lower-gcc-version-check-for-warray-bounds.patch
	kvm-svm-flush-pages-under-kvm-lock-to-fix-uaf-in-svm_register_enc_region.patch
	kvm-x86-mark-target-gfn-of-emulated-atomic-instruction-as-dirty.patch
---

diff --git a/queue-6.1/init-kconfig-lower-gcc-version-check-for-warray-bounds.patch b/queue-6.1/init-kconfig-lower-gcc-version-check-for-warray-bounds.patch
new file mode 100644
index 00000000000..938e554bead
--- /dev/null
+++ b/queue-6.1/init-kconfig-lower-gcc-version-check-for-warray-bounds.patch
@@ -0,0 +1,64 @@
+From 3e00f5802fabf2f504070a591b14b648523ede13 Mon Sep 17 00:00:00 2001
+From: Kees Cook
+Date: Fri, 23 Feb 2024 09:08:27 -0800
+Subject: init/Kconfig: lower GCC version check for -Warray-bounds
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Kees Cook
+
+commit 3e00f5802fabf2f504070a591b14b648523ede13 upstream.
+
+We continue to see false positives from -Warray-bounds even in GCC 10,
+which is getting reported in a few places[1] still:
+
+security/security.c:811:2: warning: `memcpy' offset 32 is out of the bounds [0, 0] [-Warray-bounds]
+
+Lower the GCC version check from 11 to 10.
+
+Link: https://lkml.kernel.org/r/20240223170824.work.768-kees@kernel.org
+Reported-by: Lu Yao
+Closes: https://lore.kernel.org/lkml/20240117014541.8887-1-yaolu@kylinos.cn/
+Link: https://lore.kernel.org/linux-next/65d84438.620a0220.7d171.81a7@mx.google.com [1]
+Signed-off-by: Kees Cook
+Reviewed-by: Paul Moore
+Cc: Ard Biesheuvel
+Cc: Christophe Leroy
+Cc: Greg Kroah-Hartman
+Cc: "Gustavo A. R. Silva"
+Cc: Johannes Weiner
+Cc: Marc Aurèle La France
+Cc: Masahiro Yamada
+Cc: Nathan Chancellor
+Cc: Nhat Pham
+Cc: Petr Mladek
+Cc: Randy Dunlap
+Cc: Suren Baghdasaryan
+Cc:
+Signed-off-by: Andrew Morton
+Signed-off-by: Greg Kroah-Hartman
+---
+ init/Kconfig | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/init/Kconfig
++++ b/init/Kconfig
+@@ -902,14 +902,14 @@ config CC_IMPLICIT_FALLTHROUGH
+ 	default "-Wimplicit-fallthrough=5" if CC_IS_GCC && $(cc-option,-Wimplicit-fallthrough=5)
+ 	default "-Wimplicit-fallthrough" if CC_IS_CLANG && $(cc-option,-Wunreachable-code-fallthrough)
+ 
+-# Currently, disable gcc-11+ array-bounds globally.
++# Currently, disable gcc-10+ array-bounds globally.
+ # It's still broken in gcc-13, so no upper bound yet.
+-config GCC11_NO_ARRAY_BOUNDS
++config GCC10_NO_ARRAY_BOUNDS
+ 	def_bool y
+ 
+ config CC_NO_ARRAY_BOUNDS
+ 	bool
+-	default y if CC_IS_GCC && GCC_VERSION >= 110000 && GCC11_NO_ARRAY_BOUNDS
++	default y if CC_IS_GCC && GCC_VERSION >= 100000 && GCC10_NO_ARRAY_BOUNDS
+ 
+ #
+ # For architectures that know their GCC __int128 support is sound
diff --git a/queue-6.1/kvm-svm-flush-pages-under-kvm-lock-to-fix-uaf-in-svm_register_enc_region.patch b/queue-6.1/kvm-svm-flush-pages-under-kvm-lock-to-fix-uaf-in-svm_register_enc_region.patch
new file mode 100644
index 00000000000..6b31ad2a2b1
--- /dev/null
+++ b/queue-6.1/kvm-svm-flush-pages-under-kvm-lock-to-fix-uaf-in-svm_register_enc_region.patch
@@ -0,0 +1,67 @@
+From 5ef1d8c1ddbf696e47b226e11888eaf8d9e8e807 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson
+Date: Fri, 16 Feb 2024 17:34:30 -0800
+Subject: KVM: SVM: Flush pages under kvm->lock to fix UAF in svm_register_enc_region()
+
+From: Sean Christopherson
+
+commit 5ef1d8c1ddbf696e47b226e11888eaf8d9e8e807 upstream.
+
+Do the cache flush of converted pages in svm_register_enc_region() before
+dropping kvm->lock to fix use-after-free issues where region and/or its
+array of pages could be freed by a different task, e.g. if userspace has
+__unregister_enc_region_locked() already queued up for the region.
+
+Note, the "obvious" alternative of using local variables doesn't fully
+resolve the bug, as region->pages is also dynamically allocated. I.e. the
+region structure itself would be fine, but region->pages could be freed.
+
+Flushing multiple pages under kvm->lock is unfortunate, but the entire
+flow is a rare slow path, and the manual flush is only needed on CPUs that
+lack coherency for encrypted memory.
+
+Fixes: 19a23da53932 ("Fix unsynchronized access to sev members through svm_register_enc_region")
+Reported-by: Gabe Kirkpatrick
+Cc: Josh Eads
+Cc: Peter Gonda
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson
+Message-Id: <20240217013430.2079561-1-seanjc@google.com>
+Signed-off-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/x86/kvm/svm/sev.c | 16 +++++++++-------
+ 1 file changed, 9 insertions(+), 7 deletions(-)
+
+--- a/arch/x86/kvm/svm/sev.c
++++ b/arch/x86/kvm/svm/sev.c
+@@ -1958,20 +1958,22 @@ int sev_mem_enc_register_region(struct k
+ 		goto e_free;
+ 	}
+ 
+-	region->uaddr = range->addr;
+-	region->size = range->size;
+-
+-	list_add_tail(&region->list, &sev->regions_list);
+-	mutex_unlock(&kvm->lock);
+-
+ 	/*
+ 	 * The guest may change the memory encryption attribute from C=0 -> C=1
+ 	 * or vice versa for this memory range. Lets make sure caches are
+ 	 * flushed to ensure that guest data gets written into memory with
+-	 * correct C-bit.
++	 * correct C-bit. Note, this must be done before dropping kvm->lock,
++	 * as region and its array of pages can be freed by a different task
++	 * once kvm->lock is released.
+ 	 */
+ 	sev_clflush_pages(region->pages, region->npages);
+ 
++	region->uaddr = range->addr;
++	region->size = range->size;
++
++	list_add_tail(&region->list, &sev->regions_list);
++	mutex_unlock(&kvm->lock);
++
+ 	return ret;
+ 
+ e_free:
diff --git a/queue-6.1/kvm-x86-mark-target-gfn-of-emulated-atomic-instruction-as-dirty.patch b/queue-6.1/kvm-x86-mark-target-gfn-of-emulated-atomic-instruction-as-dirty.patch
new file mode 100644
index 00000000000..94e0306e185
--- /dev/null
+++ b/queue-6.1/kvm-x86-mark-target-gfn-of-emulated-atomic-instruction-as-dirty.patch
@@ -0,0 +1,62 @@
+From 910c57dfa4d113aae6571c2a8b9ae8c430975902 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson
+Date: Wed, 14 Feb 2024 17:00:03 -0800
+Subject: KVM: x86: Mark target gfn of emulated atomic instruction as dirty
+
+From: Sean Christopherson
+
+commit 910c57dfa4d113aae6571c2a8b9ae8c430975902 upstream.
+
+When emulating an atomic access on behalf of the guest, mark the target
+gfn dirty if the CMPXCHG by KVM is attempted and doesn't fault. This
+fixes a bug where KVM effectively corrupts guest memory during live
+migration by writing to guest memory without informing userspace that the
+page is dirty.
+
+Marking the page dirty got unintentionally dropped when KVM's emulated
+CMPXCHG was converted to do a user access. Before that, KVM explicitly
+mapped the guest page into kernel memory, and marked the page dirty during
+the unmap phase.
+
+Mark the page dirty even if the CMPXCHG fails, as the old data is written
+back on failure, i.e. the page is still written. The value written is
+guaranteed to be the same because the operation is atomic, but KVM's ABI
+is that all writes are dirty logged regardless of the value written. And
+more importantly, that's what KVM did before the buggy commit.
+
+Huge kudos to the folks on the Cc list (and many others), who did all the
+actual work of triaging and debugging.
+
+Fixes: 1c2361f667f3 ("KVM: x86: Use __try_cmpxchg_user() to emulate atomic accesses")
+Cc: stable@vger.kernel.org
+Cc: David Matlack
+Cc: Pasha Tatashin
+Cc: Michael Krebs
+base-commit: 6769ea8da8a93ed4630f1ce64df6aafcaabfce64
+Reviewed-by: Jim Mattson
+Link: https://lore.kernel.org/r/20240215010004.1456078-2-seanjc@google.com
+Signed-off-by: Sean Christopherson
+Signed-off-by: Greg Kroah-Hartman
+---
+ arch/x86/kvm/x86.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -7758,6 +7758,16 @@ static int emulator_cmpxchg_emulated(str
+ 
+ 	if (r < 0)
+ 		return X86EMUL_UNHANDLEABLE;
++
++	/*
++	 * Mark the page dirty _before_ checking whether or not the CMPXCHG was
++	 * successful, as the old value is written back on failure. Note, for
++	 * live migration, this is unnecessarily conservative as CMPXCHG writes
++	 * back the original value and the access is atomic, but KVM's ABI is
++	 * that all writes are dirty logged, regardless of the value written.
++	 */
++	kvm_vcpu_mark_page_dirty(vcpu, gpa_to_gfn(gpa));
++
+ 	if (r)
+ 		return X86EMUL_CMPXCHG_FAILED;
+ 
diff --git a/queue-6.1/series b/queue-6.1/series
index 7ef4842725e..8092f380071 100644
--- a/queue-6.1/series
+++ b/queue-6.1/series
@@ -149,3 +149,6 @@ netfilter-nf_tables-disallow-anonymous-set-with-timeout-flag.patch
 netfilter-nf_tables-reject-constant-set-with-timeout.patch
 drivers-hv-vmbus-calculate-ring-buffer-size-for-more-efficient-use-of-memory.patch
 xfrm-avoid-clang-fortify-warning-in-copy_to_user_tmpl.patch
+init-kconfig-lower-gcc-version-check-for-warray-bounds.patch
+kvm-x86-mark-target-gfn-of-emulated-atomic-instruction-as-dirty.patch
+kvm-svm-flush-pages-under-kvm-lock-to-fix-uaf-in-svm_register_enc_region.patch