--- /dev/null
+From magnus.karlsson@gmail.com Thu May 26 14:13:37 2022
+From: Magnus Karlsson <magnus.karlsson@gmail.com>
+Date: Wed, 25 May 2022 09:19:53 +0200
+Subject: [PATCH 5.15] ice: fix crash at allocation failure
+To: gregkh@linuxfoundation.org, sashal@kernel.org, stable@vger.kernel.org, maciej.fijalkowski@intel.com, bjorn@kernel.org
+Cc: Magnus Karlsson <magnus.karlsson@intel.com>, Jeff Shaw <jeffrey.b.shaw@intel.com>
+Message-ID: <20220525071953.27755-1-magnus.karlsson@gmail.com>
+
+
+From: Magnus Karlsson <magnus.karlsson@intel.com>
+
+Fix a crash in the zero-copy driver that occurs when it fails to
+allocate buffers from user-space. This crash can easily be triggered
+by a malicious program that does not provide any buffers in the fill
+ring for the kernel to use.
+
+Note that this bug does not exist upstream, since the batched buffer
+allocation interface introduced in 5.16 replaced this code.
+
+Reported-by: Jeff Shaw <jeffrey.b.shaw@intel.com>
+Tested-by: Jeff Shaw <jeffrey.b.shaw@intel.com>
+Signed-off-by: Magnus Karlsson <magnus.karlsson@intel.com>
+Acked-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/intel/ice/ice_xsk.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/intel/ice/ice_xsk.c
++++ b/drivers/net/ethernet/intel/ice/ice_xsk.c
+@@ -378,7 +378,7 @@ bool ice_alloc_rx_bufs_zc(struct ice_rin
+
+ do {
+ *xdp = xsk_buff_alloc(rx_ring->xsk_pool);
+- if (!xdp) {
++ if (!*xdp) {
+ ok = false;
+ break;
+ }
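
The hunk above fixes a pointer-vs-pointee mix-up: xdp is a double pointer
(the allocation is stored through *xdp), so the check "if (!xdp)" can never
fire, and a failed xsk_buff_alloc() leaves a NULL entry that is later used as
a real buffer. Below is a minimal standalone sketch of the same pattern; it
is not the kernel code, and alloc_one() and fill_bufs() are hypothetical
stand-ins for xsk_buff_alloc() and ice_alloc_rx_bufs_zc().

/*
 * Minimal standalone sketch of the bug class fixed above; this is NOT
 * the kernel code.  alloc_one() and fill_bufs() are hypothetical
 * stand-ins for xsk_buff_alloc() and ice_alloc_rx_bufs_zc().
 */
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in allocator: returns NULL once the fake pool is exhausted. */
static void *alloc_one(int *pool_left)
{
	if (*pool_left <= 0)
		return NULL;
	(*pool_left)--;
	return malloc(64);
}

static bool fill_bufs(void **bufs, int count, int *pool_left)
{
	void **slot = bufs;
	bool ok = true;
	int i;

	for (i = 0; i < count; i++, slot++) {
		*slot = alloc_one(pool_left);
		/*
		 * The broken check tested the out-parameter itself
		 * ("if (!slot)"), which is never NULL, so a failed
		 * allocation went undetected and the NULL entry was
		 * later used as a real buffer.  Checking the stored
		 * result ("!*slot") catches the failure.
		 */
		if (!*slot) {
			ok = false;
			break;
		}
	}
	return ok;
}

int main(void)
{
	void *bufs[8] = { NULL };
	int pool_left = 4;	/* fewer buffers available than requested */
	int i;

	if (!fill_bufs(bufs, 8, &pool_left))
		printf("allocation failure detected; ring only partially filled\n");

	for (i = 0; i < 8; i++)
		free(bufs[i]);	/* free(NULL) is a no-op */
	return 0;
}

Running the sketch with pool_left smaller than the requested count mirrors a
user-space program that leaves the fill ring empty: the loop must stop at the
first NULL instead of handing it to the rest of the driver.
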
--- /dev/null
+From 9f46c187e2e680ecd9de7983e4d081c3391acc76 Mon Sep 17 00:00:00 2001
+From: Paolo Bonzini <pbonzini@redhat.com>
+Date: Fri, 20 May 2022 13:48:11 -0400
+Subject: KVM: x86/mmu: fix NULL pointer dereference on guest INVPCID
+
+From: Paolo Bonzini <pbonzini@redhat.com>
+
+commit 9f46c187e2e680ecd9de7983e4d081c3391acc76 upstream.
+
+With shadow paging enabled, the INVPCID instruction results in a call
+to kvm_mmu_invpcid_gva. If INVPCID is executed with CR0.PG=0, the
+invlpg callback is not set and the result is a NULL pointer dereference.
+Fix it trivially by checking for mmu->invlpg before every call.
+
+There are other possibilities:
+
+- check for CR0.PG, because KVM (like all Intel processors after P5)
+ flushes guest TLB on CR0.PG changes so that INVPCID/INVLPG are a
+ nop with paging disabled
+
+- check for EFER.LMA, because KVM syncs and flushes when switching
+ MMU contexts outside of 64-bit mode
+
+All of these are tricky, so go for the simple solution. This is CVE-2022-1789.
+
+Reported-by: Yongkang Jia <kangel@zju.edu.cn>
+Cc: stable@vger.kernel.org
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+[fix conflict due to missing b9e5603c2a3accbadfec570ac501a54431a6bdba]
+Signed-off-by: Vegard Nossum <vegard.nossum@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/mmu/mmu.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kvm/mmu/mmu.c
++++ b/arch/x86/kvm/mmu/mmu.c
+@@ -5396,14 +5396,16 @@ void kvm_mmu_invpcid_gva(struct kvm_vcpu
+ uint i;
+
+ if (pcid == kvm_get_active_pcid(vcpu)) {
+- mmu->invlpg(vcpu, gva, mmu->root_hpa);
++ if (mmu->invlpg)
++ mmu->invlpg(vcpu, gva, mmu->root_hpa);
+ tlb_flush = true;
+ }
+
+ for (i = 0; i < KVM_MMU_NUM_PREV_ROOTS; i++) {
+ if (VALID_PAGE(mmu->prev_roots[i].hpa) &&
+ pcid == kvm_get_pcid(vcpu, mmu->prev_roots[i].pgd)) {
+- mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
++ if (mmu->invlpg)
++ mmu->invlpg(vcpu, gva, mmu->prev_roots[i].hpa);
+ tlb_flush = true;
+ }
+ }
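
The guarded-callback pattern applied above can be shown in a standalone
sketch; it is not the KVM code, and mmu_ctx and invlpg_cb below are
hypothetical stand-ins for struct kvm_mmu and its ->invlpg hook, which is
left unset while the guest runs with CR0.PG=0.

/*
 * Standalone sketch of the guarded-callback pattern applied above; this
 * is NOT the KVM code.  mmu_ctx and invlpg_cb are hypothetical stand-ins
 * for struct kvm_mmu and its ->invlpg hook.
 */
#include <stdint.h>
#include <stdio.h>

struct mmu_ctx {
	/* Optional hook; left NULL when there is nothing to invalidate. */
	void (*invlpg_cb)(uint64_t gva);
};

static void shadow_invlpg(uint64_t gva)
{
	printf("invalidating mapping for gva 0x%llx\n",
	       (unsigned long long)gva);
}

static void invpcid_gva(struct mmu_ctx *mmu, uint64_t gva)
{
	/*
	 * Guard every call through the hook.  Calling it unconditionally
	 * is exactly the NULL pointer dereference the patch fixes when a
	 * guest executes INVPCID with paging disabled.
	 */
	if (mmu->invlpg_cb)
		mmu->invlpg_cb(gva);
}

int main(void)
{
	struct mmu_ctx paging_off = { .invlpg_cb = NULL };
	struct mmu_ctx paging_on = { .invlpg_cb = shadow_invlpg };

	invpcid_gva(&paging_off, 0x1000);	/* hook absent: safely skipped */
	invpcid_gva(&paging_on, 0x1000);	/* hook present: runs */
	return 0;
}

The extra check costs one branch per call but keeps kvm_mmu_invpcid_gva safe
in every paging mode, which is the simple solution the commit message prefers
over the CR0.PG and EFER.LMA checks it discusses.
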