--- /dev/null
+From 811c363645b33e6e22658634329e95f383dfc705 Mon Sep 17 00:00:00 2001
+From: Hao Sun <sunhao.th@gmail.com>
+Date: Wed, 1 Nov 2023 13:33:51 +0100
+Subject: bpf: Fix check_stack_write_fixed_off() to correctly spill imm
+
+From: Hao Sun <sunhao.th@gmail.com>
+
+commit 811c363645b33e6e22658634329e95f383dfc705 upstream.
+
+In check_stack_write_fixed_off(), the imm value is cast to u32 before
+being spilled to the stack. Therefore, the sign information is lost, and
+the range information is incorrect when it is loaded from the stack
+again.
+
+For the following prog:
+0: r2 = r10
+1: *(u64*)(r2 -40) = -44
+2: r0 = *(u64*)(r2 - 40)
+3: if r0 s<= 0xa goto +2
+4: r0 = 1
+5: exit
+6: r0 = 0
+7: exit
+
+The verifier gives:
+func#0 @0
+0: R1=ctx(off=0,imm=0) R10=fp0
+0: (bf) r2 = r10 ; R2_w=fp0 R10=fp0
+1: (7a) *(u64 *)(r2 -40) = -44 ; R2_w=fp0 fp-40_w=4294967252
+2: (79) r0 = *(u64 *)(r2 -40) ; R0_w=4294967252 R2_w=fp0
+fp-40_w=4294967252
+3: (c5) if r0 s< 0xa goto pc+2
+mark_precise: frame0: last_idx 3 first_idx 0 subseq_idx -1
+mark_precise: frame0: regs=r0 stack= before 2: (79) r0 = *(u64 *)(r2 -40)
+3: R0_w=4294967252
+4: (b7) r0 = 1 ; R0_w=1
+5: (95) exit
+verification time 7971 usec
+stack depth 40
+processed 6 insns (limit 1000000) max_states_per_insn 0 total_states 0
+peak_states 0 mark_read 0
+
+So remove the incorrect cast: since the imm field is declared as s32 and
+__mark_reg_known() takes u64, imm is correctly sign extended by the
+compiler.
+
+Fixes: ecdf985d7615 ("bpf: track immediate values written to stack by BPF_ST instruction")
+Cc: stable@vger.kernel.org
+Signed-off-by: Hao Sun <sunhao.th@gmail.com>
+Acked-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Acked-by: Eduard Zingerman <eddyz87@gmail.com>
+Link: https://lore.kernel.org/r/20231101-fix-check-stack-write-v3-1-f05c2b1473d5@gmail.com
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/verifier.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -2534,7 +2534,7 @@ static int check_stack_write_fixed_off(s
+ insn->imm != 0 && env->bpf_capable) {
+ struct bpf_reg_state fake_reg = {};
+
+- __mark_reg_known(&fake_reg, (u32)insn->imm);
++ __mark_reg_known(&fake_reg, insn->imm);
+ fake_reg.type = SCALAR_VALUE;
+ save_register_state(state, spi, &fake_reg, size);
+ } else if (reg && is_spillable_regtype(reg->type)) {
--- /dev/null
+From 291d044fd51f8484066300ee42afecf8c8db7b3a Mon Sep 17 00:00:00 2001
+From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Date: Thu, 2 Nov 2023 13:39:03 +0800
+Subject: bpf: Fix precision tracking for BPF_ALU | BPF_TO_BE | BPF_END
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+
+commit 291d044fd51f8484066300ee42afecf8c8db7b3a upstream.
+
+BPF_END and BPF_NEG have a different specification for the source bit in
+the opcode compared to other ALU/ALU64 instructions: it is either
+reserved or used to specify the byte swap endianness. In both cases the
+source bit does not encode the source operand location, and src_reg is a
+reserved field.
+
+backtrack_insn() currently does not differentiate BPF_END and BPF_NEG
+from other ALU/ALU64 instructions, which leads to r0 being incorrectly
+marked as precise when processing BPF_ALU | BPF_TO_BE | BPF_END
+instructions. This commit teaches backtrack_insn() to correctly mark
+precision in such cases.
+
+While precision tracking of BPF_NEG and other BPF_END instructions is
+already correct and does not need fixing, this commit opts to process
+all BPF_NEG and BPF_END instructions within the same if-clause, to
+better align with the convention used in the verifier (e.g.
+check_alu_op).
+
+Fixes: b5dc0163d8fd ("bpf: precise scalar_value tracking")
+Cc: stable@vger.kernel.org
+Reported-by: Mohamed Mahmoud <mmahmoud@redhat.com>
+Closes: https://lore.kernel.org/r/87jzrrwptf.fsf@toke.dk
+Tested-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Tested-by: Tao Lyu <tao.lyu@epfl.ch>
+Acked-by: Eduard Zingerman <eddyz87@gmail.com>
+Signed-off-by: Shung-Hsi Yu <shung-hsi.yu@suse.com>
+Link: https://lore.kernel.org/r/20231102053913.12004-2-shung-hsi.yu@suse.com
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/verifier.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/kernel/bpf/verifier.c
++++ b/kernel/bpf/verifier.c
+@@ -1841,7 +1841,12 @@ static int backtrack_insn(struct bpf_ver
+ if (class == BPF_ALU || class == BPF_ALU64) {
+ if (!(*reg_mask & dreg))
+ return 0;
+- if (opcode == BPF_MOV) {
++ if (opcode == BPF_END || opcode == BPF_NEG) {
++ /* sreg is reserved and unused
++ * dreg still need precision before this insn
++ */
++ return 0;
++ } else if (opcode == BPF_MOV) {
+ if (BPF_SRC(insn->code) == BPF_X) {
+ /* dreg = sreg
+ * dreg needs precision after this insn
--- /dev/null
+From d6800af51c76b6dae20e6023bbdc9b3da3ab5121 Mon Sep 17 00:00:00 2001
+From: Nicolas Saenz Julienne <nsaenz@amazon.com>
+Date: Tue, 17 Oct 2023 15:51:02 +0000
+Subject: KVM: x86: hyper-v: Don't auto-enable stimer on write from user-space
+
+From: Nicolas Saenz Julienne <nsaenz@amazon.com>
+
+commit d6800af51c76b6dae20e6023bbdc9b3da3ab5121 upstream.
+
+Don't apply the stimer's counter side effects when modifying its
+value from user-space, as this may trigger spurious interrupts.
+
+For example:
+ - The stimer is configured in auto-enable mode.
+ - The stimer's count is set and the timer enabled.
+ - The stimer expires, an interrupt is injected.
+ - The VM is live migrated.
+ - The stimer config and count are deserialized, auto-enable is ON, the
+ stimer is re-enabled.
+ - The stimer expires right away, and injects an unwarranted interrupt.
+
+Cc: stable@vger.kernel.org
+Fixes: 1f4b34f825e8 ("kvm/x86: Hyper-V SynIC timers")
+Signed-off-by: Nicolas Saenz Julienne <nsaenz@amazon.com>
+Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
+Link: https://lore.kernel.org/r/20231017155101.40677-1-nsaenz@amazon.com
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/hyperv.c | 10 ++++++----
+ 1 file changed, 6 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kvm/hyperv.c
++++ b/arch/x86/kvm/hyperv.c
+@@ -674,10 +674,12 @@ static int stimer_set_count(struct kvm_v
+
+ stimer_cleanup(stimer);
+ stimer->count = count;
+- if (stimer->count == 0)
+- stimer->config.enable = 0;
+- else if (stimer->config.auto_enable)
+- stimer->config.enable = 1;
++ if (!host) {
++ if (stimer->count == 0)
++ stimer->config.enable = 0;
++ else if (stimer->config.auto_enable)
++ stimer->config.enable = 1;
++ }
+
+ if (stimer->config.enable)
+ stimer_mark_pending(stimer, false);
--- /dev/null
+From 2770d4722036d6bd24bcb78e9cd7f6e572077d03 Mon Sep 17 00:00:00 2001
+From: "Maciej S. Szmigiero" <maciej.szmigiero@oracle.com>
+Date: Thu, 19 Oct 2023 18:06:57 +0200
+Subject: KVM: x86: Ignore MSR_AMD64_TW_CFG access
+
+From: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
+
+commit 2770d4722036d6bd24bcb78e9cd7f6e572077d03 upstream.
+
+A Hyper-V enabled Windows Server 2022 KVM VM cannot be started on a Zen1
+Ryzen since it crashes at boot with SYSTEM_THREAD_EXCEPTION_NOT_HANDLED +
+STATUS_PRIVILEGED_INSTRUCTION (in other words, because of an unexpected
+#GP in the guest kernel).
+
+This is because Windows tries to set bit 8 in MSR_AMD64_TW_CFG and can't
+handle receiving a #GP when doing so.
+
+Give this MSR the same treatment that commit 2e32b7190641
+("x86, kvm: Add MSR_AMD64_BU_CFG2 to the list of ignored MSRs") gave
+MSR_AMD64_BU_CFG2, under the justification that this MSR is relevant on
+bare metal only (although apparently it was then needed for Linux
+guests, not Windows as in this case).
+
+With this change, the aforementioned guest setup is able to finish booting
+successfully.
+
+This issue can be reproduced either on a Summit Ridge Ryzen (with
+just "-cpu host") or on a Naples EPYC (with "-cpu host,stepping=1" since
+EPYC is ordinarily stepping 2).
+
+Alternatively, userspace could solve the problem by using MSR filters, but
+forcing every userspace to define a filter isn't very friendly and doesn't
+add much, if any, value. The only potential hiccup is if one of these
+"baremetal-only" MSRs ever requires actual emulation and/or has F/M/S
+specific behavior. But if that happens, then KVM can still punt *that*
+handling to userspace since userspace MSR filters "win" over KVM's default
+handling.
+
+Signed-off-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/1ce85d9c7c9e9632393816cf19c902e0a3f411f1.1697731406.git.maciej.szmigiero@oracle.com
+[sean: call out MSR filtering alternative]
+Signed-off-by: Sean Christopherson <seanjc@google.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/msr-index.h | 1 +
+ arch/x86/kvm/x86.c | 2 ++
+ 2 files changed, 3 insertions(+)
+
+--- a/arch/x86/include/asm/msr-index.h
++++ b/arch/x86/include/asm/msr-index.h
+@@ -505,6 +505,7 @@
+ #define MSR_AMD64_CPUID_FN_1 0xc0011004
+ #define MSR_AMD64_LS_CFG 0xc0011020
+ #define MSR_AMD64_DC_CFG 0xc0011022
++#define MSR_AMD64_TW_CFG 0xc0011023
+
+ #define MSR_AMD64_DE_CFG 0xc0011029
+ #define MSR_AMD64_DE_CFG_LFENCE_SERIALIZE_BIT 1
+--- a/arch/x86/kvm/x86.c
++++ b/arch/x86/kvm/x86.c
+@@ -3132,6 +3132,7 @@ int kvm_set_msr_common(struct kvm_vcpu *
+ case MSR_AMD64_PATCH_LOADER:
+ case MSR_AMD64_BU_CFG2:
+ case MSR_AMD64_DC_CFG:
++ case MSR_AMD64_TW_CFG:
+ case MSR_F15H_EX_CFG:
+ break;
+
+@@ -3485,6 +3486,7 @@ int kvm_get_msr_common(struct kvm_vcpu *
+ case MSR_AMD64_BU_CFG2:
+ case MSR_IA32_PERF_CTL:
+ case MSR_AMD64_DC_CFG:
++ case MSR_AMD64_TW_CFG:
+ case MSR_F15H_EX_CFG:
+ /*
+ * Intel Sandy Bridge CPUs must support the RAPL (running average power
--- /dev/null
+From 5e538fce33589da6d7cb2de1445b84d3a8a692f7 Mon Sep 17 00:00:00 2001
+From: Vikash Garodia <quic_vgarodia@quicinc.com>
+Date: Thu, 10 Aug 2023 07:55:01 +0530
+Subject: media: venus: hfi: add checks to perform sanity on queue pointers
+
+From: Vikash Garodia <quic_vgarodia@quicinc.com>
+
+commit 5e538fce33589da6d7cb2de1445b84d3a8a692f7 upstream.
+
+Read and write pointers are used to track the packet index in the memory
+shared between video driver and firmware. There is a possibility of OOB
+access if the read or write pointer goes beyond the queue memory size.
+Add checks for the read and write pointer to avoid OOB access.
+
+Cc: stable@vger.kernel.org
+Fixes: d96d3f30c0f2 ("[media] media: venus: hfi: add Venus HFI files")
+Signed-off-by: Vikash Garodia <quic_vgarodia@quicinc.com>
+Signed-off-by: Stanimir Varbanov <stanimir.k.varbanov@gmail.com>
+Signed-off-by: Hans Verkuil <hverkuil-cisco@xs4all.nl>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/media/platform/qcom/venus/hfi_venus.c | 10 ++++++++++
+ 1 file changed, 10 insertions(+)
+
+--- a/drivers/media/platform/qcom/venus/hfi_venus.c
++++ b/drivers/media/platform/qcom/venus/hfi_venus.c
+@@ -206,6 +206,11 @@ static int venus_write_queue(struct venu
+
+ new_wr_idx = wr_idx + dwords;
+ wr_ptr = (u32 *)(queue->qmem.kva + (wr_idx << 2));
++
++ if (wr_ptr < (u32 *)queue->qmem.kva ||
++ wr_ptr > (u32 *)(queue->qmem.kva + queue->qmem.size - sizeof(*wr_ptr)))
++ return -EINVAL;
++
+ if (new_wr_idx < qsize) {
+ memcpy(wr_ptr, packet, dwords << 2);
+ } else {
+@@ -273,6 +278,11 @@ static int venus_read_queue(struct venus
+ }
+
+ rd_ptr = (u32 *)(queue->qmem.kva + (rd_idx << 2));
++
++ if (rd_ptr < (u32 *)queue->qmem.kva ||
++ rd_ptr > (u32 *)(queue->qmem.kva + queue->qmem.size - sizeof(*rd_ptr)))
++ return -EINVAL;
++
+ dwords = *rd_ptr >> 2;
+ if (!dwords)
+ return -EINVAL;
--- /dev/null
+From ea142e590aec55ba40c5affb4d49e68c713c63dc Mon Sep 17 00:00:00 2001
+From: Nicholas Piggin <npiggin@gmail.com>
+Date: Thu, 19 Oct 2023 01:34:23 +1000
+Subject: powerpc/perf: Fix disabling BHRB and instruction sampling
+
+From: Nicholas Piggin <npiggin@gmail.com>
+
+commit ea142e590aec55ba40c5affb4d49e68c713c63dc upstream.
+
+When the PMU is disabled, MMCRA is not updated to disable BHRB and
+instruction sampling. This can lead to those features remaining enabled,
+which can slow down a real or emulated CPU.
+
+Fixes: 1cade527f6e9 ("powerpc/perf: BHRB control to disable BHRB logic when not used")
+Cc: stable@vger.kernel.org # v5.9+
+Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
+Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
+Link: https://msgid.link/20231018153423.298373-1-npiggin@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/perf/core-book3s.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/arch/powerpc/perf/core-book3s.c
++++ b/arch/powerpc/perf/core-book3s.c
+@@ -1289,8 +1289,7 @@ static void power_pmu_disable(struct pmu
+ /*
+ * Disable instruction sampling if it was enabled
+ */
+- if (cpuhw->mmcr.mmcra & MMCRA_SAMPLE_ENABLE)
+- val &= ~MMCRA_SAMPLE_ENABLE;
++ val &= ~MMCRA_SAMPLE_ENABLE;
+
+ /* Disable BHRB via mmcra (BHRBRD) for p10 */
+ if (ppmu->flags & PPMU_ARCH_31)
+@@ -1301,7 +1300,7 @@ static void power_pmu_disable(struct pmu
+ * instruction sampling or BHRB.
+ */
+ if (val != mmcra) {
+- mtspr(SPRN_MMCRA, mmcra);
++ mtspr(SPRN_MMCRA, val);
+ mb();
+ isync();
+ }
--- /dev/null
+From 381fdb73d1e2a48244de7260550e453d1003bb8e Mon Sep 17 00:00:00 2001
+From: Kees Cook <keescook@chromium.org>
+Date: Fri, 6 Oct 2023 21:09:28 -0700
+Subject: randstruct: Fix gcc-plugin performance mode to stay in group
+
+From: Kees Cook <keescook@chromium.org>
+
+commit 381fdb73d1e2a48244de7260550e453d1003bb8e upstream.
+
+The performance mode of the gcc-plugin randstruct was shuffling struct
+members outside of the cache-line groups. Limit the range to the
+specified group indexes.
+
+Cc: linux-hardening@vger.kernel.org
+Cc: stable@vger.kernel.org
+Reported-by: Lukas Loidolt <e1634039@student.tuwien.ac.at>
+Closes: https://lore.kernel.org/all/f3ca77f0-e414-4065-83a5-ae4c4d25545d@student.tuwien.ac.at
+Fixes: 313dd1b62921 ("gcc-plugins: Add the randstruct plugin")
+Signed-off-by: Kees Cook <keescook@chromium.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ scripts/gcc-plugins/randomize_layout_plugin.c | 11 ++++++++---
+ 1 file changed, 8 insertions(+), 3 deletions(-)
+
+--- a/scripts/gcc-plugins/randomize_layout_plugin.c
++++ b/scripts/gcc-plugins/randomize_layout_plugin.c
+@@ -209,12 +209,14 @@ static void partition_struct(tree *field
+
+ static void performance_shuffle(tree *newtree, unsigned long length, ranctx *prng_state)
+ {
+- unsigned long i, x;
++ unsigned long i, x, index;
+ struct partition_group size_group[length];
+ unsigned long num_groups = 0;
+ unsigned long randnum;
+
+ partition_struct(newtree, length, (struct partition_group *)&size_group, &num_groups);
++
++ /* FIXME: this group shuffle is currently a no-op. */
+ for (i = num_groups - 1; i > 0; i--) {
+ struct partition_group tmp;
+ randnum = ranval(prng_state) % (i + 1);
+@@ -224,11 +226,14 @@ static void performance_shuffle(tree *ne
+ }
+
+ for (x = 0; x < num_groups; x++) {
+- for (i = size_group[x].start + size_group[x].length - 1; i > size_group[x].start; i--) {
++ for (index = size_group[x].length - 1; index > 0; index--) {
+ tree tmp;
++
++ i = size_group[x].start + index;
+ if (DECL_BIT_FIELD_TYPE(newtree[i]))
+ continue;
+- randnum = ranval(prng_state) % (i + 1);
++ randnum = ranval(prng_state) % (index + 1);
++ randnum += size_group[x].start;
+ // we could handle this case differently if desired
+ if (DECL_BIT_FIELD_TYPE(newtree[randnum]))
+ continue;
--- /dev/null
+From 8e3ed9e786511ad800c33605ed904b9de49323cf Mon Sep 17 00:00:00 2001
+From: Chandrakanth patil <chandrakanth.patil@broadcom.com>
+Date: Tue, 3 Oct 2023 16:30:18 +0530
+Subject: scsi: megaraid_sas: Increase register read retry count from 3 to 30 for selected registers
+
+From: Chandrakanth patil <chandrakanth.patil@broadcom.com>
+
+commit 8e3ed9e786511ad800c33605ed904b9de49323cf upstream.
+
+In BMC environments with concurrent access to multiple registers, certain
+registers occasionally yield a value of 0 even after 3 retries due to
+hardware errata. As a fix, we have extended the retry count from 3 to 30.
+
+The same errata applies to the mpt3sas driver, and a similar patch has
+been accepted. Please find more details in the mpt3sas patch reference
+link.
+
+Link: https://lore.kernel.org/r/20230829090020.5417-2-ranjan.kumar@broadcom.com
+Fixes: 272652fcbf1a ("scsi: megaraid_sas: add retry logic in megasas_readl")
+Cc: stable@vger.kernel.org
+Signed-off-by: Chandrakanth patil <chandrakanth.patil@broadcom.com>
+Signed-off-by: Sumit Saxena <sumit.saxena@broadcom.com>
+Link: https://lore.kernel.org/r/20231003110021.168862-2-chandrakanth.patil@broadcom.com
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/megaraid/megaraid_sas_base.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/scsi/megaraid/megaraid_sas_base.c
++++ b/drivers/scsi/megaraid/megaraid_sas_base.c
+@@ -248,13 +248,13 @@ u32 megasas_readl(struct megasas_instanc
+ * Fusion registers could intermittently return all zeroes.
+ * This behavior is transient in nature and subsequent reads will
+ * return valid value. As a workaround in driver, retry readl for
+- * upto three times until a non-zero value is read.
++ * up to thirty times until a non-zero value is read.
+ */
+ if (instance->adapter_type == AERO_SERIES) {
+ do {
+ ret_val = readl(addr);
+ i++;
+- } while (ret_val == 0 && i < 3);
++ } while (ret_val == 0 && i < 30);
+ return ret_val;
+ } else {
+ return readl(addr);
--- /dev/null
+From 3c978492c333f0c08248a8d51cecbe5eb5f617c9 Mon Sep 17 00:00:00 2001
+From: Ranjan Kumar <ranjan.kumar@broadcom.com>
+Date: Fri, 20 Oct 2023 16:28:49 +0530
+Subject: scsi: mpt3sas: Fix loop logic
+
+From: Ranjan Kumar <ranjan.kumar@broadcom.com>
+
+commit 3c978492c333f0c08248a8d51cecbe5eb5f617c9 upstream.
+
+The retry loop continues to iterate until the count reaches 30, even
+after receiving the correct value. Exit the loop as soon as a non-zero
+value is read.
+
+Fixes: 4ca10f3e3174 ("scsi: mpt3sas: Perform additional retries if doorbell read returns 0")
+Cc: stable@vger.kernel.org
+Signed-off-by: Ranjan Kumar <ranjan.kumar@broadcom.com>
+Link: https://lore.kernel.org/r/20231020105849.6350-1-ranjan.kumar@broadcom.com
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/mpt3sas/mpt3sas_base.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/scsi/mpt3sas/mpt3sas_base.c
++++ b/drivers/scsi/mpt3sas/mpt3sas_base.c
+@@ -216,8 +216,8 @@ _base_readl_ext_retry(const volatile voi
+
+ for (i = 0 ; i < 30 ; i++) {
+ ret_val = readl(addr);
+- if (ret_val == 0)
+- continue;
++ if (ret_val != 0)
++ break;
+ }
+
+ return ret_val;
tools-power-turbostat-fix-a-knl-bug.patch
cifs-spnego-add-in-host_key_len.patch
cifs-fix-check-of-rc-in-function-generate_smb3signin.patch
+media-venus-hfi-add-checks-to-perform-sanity-on-queue-pointers.patch
+powerpc-perf-fix-disabling-bhrb-and-instruction-sampling.patch
+randstruct-fix-gcc-plugin-performance-mode-to-stay-in-group.patch
+bpf-fix-check_stack_write_fixed_off-to-correctly-spill-imm.patch
+bpf-fix-precision-tracking-for-bpf_alu-bpf_to_be-bpf_end.patch
+scsi-mpt3sas-fix-loop-logic.patch
+scsi-megaraid_sas-increase-register-read-retry-rount-from-3-to-30-for-selected-registers.patch
+x86-cpu-hygon-fix-the-cpu-topology-evaluation-for-real.patch
+kvm-x86-hyper-v-don-t-auto-enable-stimer-on-write-from-user-space.patch
+kvm-x86-ignore-msr_amd64_tw_cfg-access.patch
--- /dev/null
+From ee545b94d39a00c93dc98b1dbcbcf731d2eadeb4 Mon Sep 17 00:00:00 2001
+From: Pu Wen <puwen@hygon.cn>
+Date: Mon, 14 Aug 2023 10:18:26 +0200
+Subject: x86/cpu/hygon: Fix the CPU topology evaluation for real
+
+From: Pu Wen <puwen@hygon.cn>
+
+commit ee545b94d39a00c93dc98b1dbcbcf731d2eadeb4 upstream.
+
+Hygon processors with a model ID > 3 have CPUID leaf 0xB correctly
+populated and don't need the fixed package ID shift workaround. The fixup
+is also incorrect when running in a guest.
+
+Fixes: e0ceeae708ce ("x86/CPU/hygon: Fix phys_proc_id calculation logic for multi-die processors")
+Signed-off-by: Pu Wen <puwen@hygon.cn>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Cc: <stable@vger.kernel.org>
+Link: https://lore.kernel.org/r/tencent_594804A808BD93A4EBF50A994F228E3A7F07@qq.com
+Link: https://lore.kernel.org/r/20230814085112.089607918@linutronix.de
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/hygon.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kernel/cpu/hygon.c
++++ b/arch/x86/kernel/cpu/hygon.c
+@@ -89,8 +89,12 @@ static void hygon_get_topology(struct cp
+ if (!err)
+ c->x86_coreid_bits = get_count_order(c->x86_max_cores);
+
+- /* Socket ID is ApicId[6] for these processors. */
+- c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
++ /*
++ * Socket ID is ApicId[6] for the processors with model <= 0x3
++ * when running on host.
++ */
++ if (!boot_cpu_has(X86_FEATURE_HYPERVISOR) && c->x86_model <= 0x3)
++ c->phys_proc_id = c->apicid >> APICID_SOCKET_ID_BIT;
+
+ cacheinfo_hygon_init_llc_id(c, cpu);
+ } else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {