--- /dev/null
+From edbbaae42a56f9a2b39c52ef2504dfb3fb0a7858 Mon Sep 17 00:00:00 2001
+From: Shay Drory <shayd@nvidia.com>
+Date: Tue, 6 Aug 2024 10:20:44 +0300
+Subject: genirq/irqdesc: Honor caller provided affinity in alloc_desc()
+
+From: Shay Drory <shayd@nvidia.com>
+
+commit edbbaae42a56f9a2b39c52ef2504dfb3fb0a7858 upstream.
+
+Currently, whenever a caller is providing an affinity hint for an
+interrupt, the allocation code uses it to calculate the node and copies the
+cpumask into irq_desc::affinity.
+
+If the affinity for the interrupt is not marked 'managed' then the startup
+of the interrupt ignores irq_desc::affinity and uses the system default
+affinity mask.
+
+Prevent this by setting the IRQD_AFFINITY_SET flag for the interrupt in the
+allocator, which causes irq_setup_affinity() to use irq_desc::affinity on
+interrupt startup if the mask contains an online CPU.
+
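+For context, irq_setup_affinity() only honors irq_desc::affinity when the
+interrupt is either managed or has IRQD_AFFINITY_SET. A condensed sketch
+of that logic (paraphrased from kernel/irq/manage.c; details vary by
+kernel version):
+
+  struct cpumask *set = irq_default_affinity;
+
+  if (irqd_affinity_is_managed(&desc->irq_data) ||
+      irqd_has_set(&desc->irq_data, IRQD_AFFINITY_SET)) {
+          /* Honor the caller provided mask if it has an online CPU */
+          if (cpumask_intersects(desc->irq_common_data.affinity,
+                                 cpu_online_mask))
+                  set = desc->irq_common_data.affinity;
+          else
+                  irqd_clear(&desc->irq_data, IRQD_AFFINITY_SET);
+  }
+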
+[ tglx: Massaged changelog ]
+
+Fixes: 45ddcecbfa94 ("genirq: Use affinity hint in irqdesc allocation")
+Signed-off-by: Shay Drory <shayd@nvidia.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: <stable@vger.kernel.org>
+Link: https://lore.kernel.org/all/20240806072044.837827-1-shayd@nvidia.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/irq/irqdesc.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/kernel/irq/irqdesc.c
++++ b/kernel/irq/irqdesc.c
+@@ -493,6 +493,7 @@ static int alloc_descs(unsigned int star
+ flags = IRQD_AFFINITY_MANAGED |
+ IRQD_MANAGED_SHUTDOWN;
+ }
++ flags |= IRQD_AFFINITY_SET;
+ mask = &affinity->mask;
+ node = cpu_to_node(cpumask_first(mask));
+ affinity++;
--- /dev/null
+From d73f0f49daa84176c3beee1606e73c7ffb6af8b2 Mon Sep 17 00:00:00 2001
+From: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
+Date: Fri, 9 Aug 2024 12:32:24 +0530
+Subject: irqchip/xilinx: Fix shift out of bounds
+
+From: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
+
+commit d73f0f49daa84176c3beee1606e73c7ffb6af8b2 upstream.
+
+The device tree property 'xlnx,kind-of-intr' is sanity-checked to ensure
+that the bitmask contains only set bits which are in the range of the
+number of interrupts supported by the controller.
+
+The check is done by shifting the mask right by the number of supported
+interrupts and checking the result for zero.
+
+The data type of the mask is u32 and the number of supported interrupts is
+up to 32. In case of 32 interrupts the shift is out of bounds, resulting in
+a mismatch warning. The out of bounds condition is also reported by UBSAN:
+
+ UBSAN: shift-out-of-bounds in irq-xilinx-intc.c:332:22
+ shift exponent 32 is too large for 32-bit type 'unsigned int'
+
+Fix it by promoting the mask to u64 for the test.
+
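+As a minimal illustration of the undefined behavior (hypothetical values,
+not taken from the driver):
+
+  u32 mask = 0xffffffff;
+  unsigned int nr_irq = 32;
+
+  mask >> nr_irq;         /* UB: shift count equals the width of u32 */
+  (u64)mask >> nr_irq;    /* well-defined, evaluates to 0 as intended */
+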
+Fixes: d50466c90724 ("microblaze: intc: Refactor DT sanity check")
+Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/all/1723186944-3571957-1-git-send-email-radhey.shyam.pandey@amd.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/irqchip/irq-xilinx-intc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/irqchip/irq-xilinx-intc.c
++++ b/drivers/irqchip/irq-xilinx-intc.c
+@@ -188,7 +188,7 @@ static int __init xilinx_intc_of_init(st
+ irqc->intr_mask = 0;
+ }
+
+- if (irqc->intr_mask >> irqc->nr_irq)
++ if ((u64)irqc->intr_mask >> irqc->nr_irq)
+ pr_warn("irq-xilinx: mismatch in kind-of-intr param\n");
+
+ pr_info("irq-xilinx: %pOF: num_irq=%d, edge=0x%x\n",
--- /dev/null
+From 7d4df2dad312f270d62fecb0e5c8b086c6d7dcfc Mon Sep 17 00:00:00 2001
+From: Andrey Konovalov <andreyknvl@gmail.com>
+Date: Mon, 29 Jul 2024 04:21:58 +0200
+Subject: kcov: properly check for softirq context
+
+From: Andrey Konovalov <andreyknvl@gmail.com>
+
+commit 7d4df2dad312f270d62fecb0e5c8b086c6d7dcfc upstream.
+
+When collecting coverage from softirqs, KCOV uses in_serving_softirq() to
+check whether the code is running in the softirq context. Unfortunately,
+in_serving_softirq() is > 0 even when the code is running in the hardirq
+or NMI context for hardirqs and NMIs that happened during a softirq.
+
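+To illustrate, consider a hardirq that fires while a softirq handler is
+running. preempt_count then carries both the softirq-serving bit and a
+hardirq count, so conceptually:
+
+  in_serving_softirq();  /* true  - the softirq bit is still set      */
+  in_hardirq();          /* true  - we are executing the hardirq      */
+  in_softirq_really();   /* false - the new helper below filters this */
+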
+As a result, if a softirq handler contains a remote coverage collection
+section and a hardirq with another remote coverage collection section
+happens while that softirq is being handled, KCOV incorrectly detects a
+nested softirq coverage collection section and prints a WARNING, as
+reported by syzbot.
+
+This issue was exposed by commit a7f3813e589f ("usb: gadget: dummy_hcd:
+Switch to hrtimer transfer scheduler"), which switched dummy_hcd to using
+hrtimer and made the timer's callback be executed in the hardirq context.
+
+Change the related checks in KCOV to account for this behavior of
+in_serving_softirq() and make KCOV ignore remote coverage collection
+sections in the hardirq and NMI contexts.
+
+This prevents the WARNING printed by syzbot but does not fix the inability
+of KCOV to collect coverage from __usb_hcd_giveback_urb() when dummy_hcd
+is in use (caused by a7f3813e589f); a separate patch is required for that.
+
+Link: https://lkml.kernel.org/r/20240729022158.92059-1-andrey.konovalov@linux.dev
+Fixes: 5ff3b30ab57d ("kcov: collect coverage from interrupts")
+Signed-off-by: Andrey Konovalov <andreyknvl@gmail.com>
+Reported-by: syzbot+2388cdaeb6b10f0c13ac@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=2388cdaeb6b10f0c13ac
+Acked-by: Marco Elver <elver@google.com>
+Cc: Alan Stern <stern@rowland.harvard.edu>
+Cc: Aleksandr Nogikh <nogikh@google.com>
+Cc: Alexander Potapenko <glider@google.com>
+Cc: Dmitry Vyukov <dvyukov@google.com>
+Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Cc: Marcello Sylvester Bauer <sylv@sylv.io>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/kcov.c | 15 ++++++++++++---
+ 1 file changed, 12 insertions(+), 3 deletions(-)
+
+--- a/kernel/kcov.c
++++ b/kernel/kcov.c
+@@ -151,6 +151,15 @@ static void kcov_remote_area_put(struct
+ list_add(&area->list, &kcov_remote_areas);
+ }
+
++/*
++ * Unlike in_serving_softirq(), this function returns false when called during
++ * a hardirq or an NMI that happened in the softirq context.
++ */
++static inline bool in_softirq_really(void)
++{
++ return in_serving_softirq() && !in_hardirq() && !in_nmi();
++}
++
+ static notrace bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+ {
+ unsigned int mode;
+@@ -160,7 +169,7 @@ static notrace bool check_kcov_mode(enum
+ * so we ignore code executed in interrupts, unless we are in a remote
+ * coverage collection section in a softirq.
+ */
+- if (!in_task() && !(in_serving_softirq() && t->kcov_softirq))
++ if (!in_task() && !(in_softirq_really() && t->kcov_softirq))
+ return false;
+ mode = READ_ONCE(t->kcov_mode);
+ /*
+@@ -822,7 +831,7 @@ void kcov_remote_start(u64 handle)
+
+ if (WARN_ON(!kcov_check_handle(handle, true, true, true)))
+ return;
+- if (!in_task() && !in_serving_softirq())
++ if (!in_task() && !in_softirq_really())
+ return;
+
+ local_irq_save(flags);
+@@ -963,7 +972,7 @@ void kcov_remote_stop(void)
+ int sequence;
+ unsigned long flags;
+
+- if (!in_task() && !in_serving_softirq())
++ if (!in_task() && !in_softirq_really())
+ return;
+
+ local_irq_save(flags);
--- /dev/null
+From 6d45e1c948a8b7ed6ceddb14319af69424db730c Mon Sep 17 00:00:00 2001
+From: Waiman Long <longman@redhat.com>
+Date: Tue, 6 Aug 2024 13:46:47 -0400
+Subject: padata: Fix possible divide-by-0 panic in padata_mt_helper()
+
+From: Waiman Long <longman@redhat.com>
+
+commit 6d45e1c948a8b7ed6ceddb14319af69424db730c upstream.
+
+We are hit with a not easily reproducible divide-by-0 panic in padata.c at
+bootup time.
+
+ [ 10.017908] Oops: divide error: 0000 [#1] PREEMPT SMP NOPTI
+ [ 10.017908] CPU: 26 PID: 2627 Comm: kworker/u1666:1 Not tainted 6.10.0-15.el10.x86_64 #1
+ [ 10.017908] Hardware name: Lenovo ThinkSystem SR950 [7X12CTO1WW]/[7X12CTO1WW], BIOS [PSE140J-2.30] 07/20/2021
+ [ 10.017908] Workqueue: events_unbound padata_mt_helper
+ [ 10.017908] RIP: 0010:padata_mt_helper+0x39/0xb0
+ :
+ [ 10.017963] Call Trace:
+ [ 10.017968] <TASK>
+ [ 10.018004] ? padata_mt_helper+0x39/0xb0
+ [ 10.018084] process_one_work+0x174/0x330
+ [ 10.018093] worker_thread+0x266/0x3a0
+ [ 10.018111] kthread+0xcf/0x100
+ [ 10.018124] ret_from_fork+0x31/0x50
+ [ 10.018138] ret_from_fork_asm+0x1a/0x30
+ [ 10.018147] </TASK>
+
+Looking at the padata_mt_helper() function, the only way a divide-by-0
+panic can happen is when ps->chunk_size is 0. Given the way chunk_size is
+initialized in padata_do_multithreaded(), it can be 0 when the min_chunk
+in the passed-in padata_mt_job structure is 0.
+
+Fix this divide-by-0 panic by making sure that chunk_size will be at least
+1 no matter what the input parameters are.
+
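+For reference, the division that trips is the roundup() in
+padata_mt_helper(), which carves the job into chunk_size units roughly as
+follows (condensed from kernel/padata.c; details vary by kernel version):
+
+  /* roundup(x, y) divides by y, so ps->chunk_size == 0 faults here */
+  size = roundup(start + 1, ps->chunk_size) - start;
+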
+Link: https://lkml.kernel.org/r/20240806174647.1050398-1-longman@redhat.com
+Fixes: 004ed42638f4 ("padata: add basic support for multithreaded jobs")
+Signed-off-by: Waiman Long <longman@redhat.com>
+Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
+Cc: Steffen Klassert <steffen.klassert@secunet.com>
+Cc: Waiman Long <longman@redhat.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/padata.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+--- a/kernel/padata.c
++++ b/kernel/padata.c
+@@ -508,6 +508,13 @@ void __init padata_do_multithreaded(stru
+ ps.chunk_size = max(ps.chunk_size, job->min_chunk);
+ ps.chunk_size = roundup(ps.chunk_size, job->align);
+
++ /*
++ * chunk_size can be 0 if the caller sets min_chunk to 0. So force it
++ * to at least 1 to prevent divide-by-0 panic in padata_mt_helper().
++ */
++ if (!ps.chunk_size)
++ ps.chunk_size = 1U;
++
+ list_for_each_entry(pw, &works, pw_list)
+ queue_work(system_unbound_wq, &pw->pw_work);
+
--- /dev/null
+From b34ce4a59cfe9cd0d6f870e6408e8ec88a964585 Mon Sep 17 00:00:00 2001
+From: Hans de Goede <hdegoede@redhat.com>
+Date: Wed, 17 Jul 2024 22:03:32 +0200
+Subject: power: supply: axp288_charger: Fix constant_charge_voltage writes
+
+From: Hans de Goede <hdegoede@redhat.com>
+
+commit b34ce4a59cfe9cd0d6f870e6408e8ec88a964585 upstream.
+
+info->max_cv is in millivolts, divide the microvolt value being written
+to constant_charge_voltage by 1000 *before* clamping it to info->max_cv.
+
+Before this fix the code always tried to set constant_charge_voltage
+to max_cv / 1000 = 4 millivolt, which ends up setting it to 4.1V, the
+lowest supported value.
+
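+As a worked example, assume a (hypothetical) max_cv of 4350mV and a write
+of 4200000uV to constant_charge_voltage:
+
+  /* before: clamp in the wrong unit, then scale */
+  scaled_val = min(4200000, 4350);             /* = 4350           */
+  scaled_val = DIV_ROUND_CLOSEST(4350, 1000);  /* = 4 -> 4.1V      */
+
+  /* after: scale to millivolts first, then clamp */
+  scaled_val = DIV_ROUND_CLOSEST(4200000, 1000); /* = 4200         */
+  scaled_val = min(4200, 4350);                  /* = 4200 -> 4.2V */
+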
+Fixes: 843735b788a4 ("power: axp288_charger: axp288 charger driver")
+Cc: stable@vger.kernel.org
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+Link: https://lore.kernel.org/r/20240717200333.56669-1-hdegoede@redhat.com
+Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/power/supply/axp288_charger.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -371,8 +371,8 @@ static int axp288_charger_usb_set_proper
+ dev_warn(&info->pdev->dev, "set charge current failed\n");
+ break;
+ case POWER_SUPPLY_PROP_CONSTANT_CHARGE_VOLTAGE:
+- scaled_val = min(val->intval, info->max_cv);
+- scaled_val = DIV_ROUND_CLOSEST(scaled_val, 1000);
++ scaled_val = DIV_ROUND_CLOSEST(val->intval, 1000);
++ scaled_val = min(scaled_val, info->max_cv);
+ ret = axp288_charger_set_cv(info, scaled_val);
+ if (ret < 0)
+ dev_warn(&info->pdev->dev, "set charge voltage failed\n");
--- /dev/null
+From 81af7f2342d162e24ac820c10e68684d9f927663 Mon Sep 17 00:00:00 2001
+From: Hans de Goede <hdegoede@redhat.com>
+Date: Wed, 17 Jul 2024 22:03:33 +0200
+Subject: power: supply: axp288_charger: Round constant_charge_voltage writes down
+
+From: Hans de Goede <hdegoede@redhat.com>
+
+commit 81af7f2342d162e24ac820c10e68684d9f927663 upstream.
+
+Round constant_charge_voltage writes down to the nearest supported lower
+value, rather than rounding them up to the nearest supported higher value.
+
+This fixes e.g. writing 4250000 resulting in a value of 4350000, which
+might be dangerous; instead, writing 4250000 will now result in a safe
+4200000 value.
+
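+With the reordered comparisons, the 4250mV write from the example above
+walks the ladder like this:
+
+  cv = 4250;  /* 4250000uV scaled to mV by the previous patch */
+  /* old: cv <= 4100? no; cv <= 4150? no; cv <= 4200? no -> else -> 4350 */
+  /* new: cv >= 4350? no; cv >= 4200? yes -> 4200 */
+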
+Fixes: 843735b788a4 ("power: axp288_charger: axp288 charger driver")
+Cc: stable@vger.kernel.org
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+Link: https://lore.kernel.org/r/20240717200333.56669-2-hdegoede@redhat.com
+Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/power/supply/axp288_charger.c | 18 +++++++++---------
+ 1 file changed, 9 insertions(+), 9 deletions(-)
+
+--- a/drivers/power/supply/axp288_charger.c
++++ b/drivers/power/supply/axp288_charger.c
+@@ -168,18 +168,18 @@ static inline int axp288_charger_set_cv(
+ u8 reg_val;
+ int ret;
+
+- if (cv <= CV_4100MV) {
+- reg_val = CHRG_CCCV_CV_4100MV;
+- cv = CV_4100MV;
+- } else if (cv <= CV_4150MV) {
+- reg_val = CHRG_CCCV_CV_4150MV;
+- cv = CV_4150MV;
+- } else if (cv <= CV_4200MV) {
++ if (cv >= CV_4350MV) {
++ reg_val = CHRG_CCCV_CV_4350MV;
++ cv = CV_4350MV;
++ } else if (cv >= CV_4200MV) {
+ reg_val = CHRG_CCCV_CV_4200MV;
+ cv = CV_4200MV;
++ } else if (cv >= CV_4150MV) {
++ reg_val = CHRG_CCCV_CV_4150MV;
++ cv = CV_4150MV;
+ } else {
+- reg_val = CHRG_CCCV_CV_4350MV;
+- cv = CV_4350MV;
++ reg_val = CHRG_CCCV_CV_4100MV;
++ cv = CV_4100MV;
+ }
+
+ reg_val = reg_val << CHRG_CCCV_CV_BIT_POS;
ntp-safeguard-against-time_constant-overflow.patch
timekeeping-fix-bogus-clock_was_set-invocation-in-do_adjtimex.patch
serial-core-check-uartclk-for-zero-to-avoid-divide-by-zero.patch
+kcov-properly-check-for-softirq-context.patch
+irqchip-xilinx-fix-shift-out-of-bounds.patch
+genirq-irqdesc-honor-caller-provided-affinity-in-alloc_desc.patch
+power-supply-axp288_charger-fix-constant_charge_voltage-writes.patch
+power-supply-axp288_charger-round-constant_charge_voltage-writes-down.patch
+tracing-fix-overflow-in-get_free_elt.patch
+padata-fix-possible-divide-by-0-panic-in-padata_mt_helper.patch
+x86-mtrr-check-if-fixed-mtrrs-exist-before-saving-them.patch
--- /dev/null
+From bcf86c01ca4676316557dd482c8416ece8c2e143 Mon Sep 17 00:00:00 2001
+From: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
+Date: Mon, 5 Aug 2024 13:59:22 +0800
+Subject: tracing: Fix overflow in get_free_elt()
+
+From: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
+
+commit bcf86c01ca4676316557dd482c8416ece8c2e143 upstream.
+
+"tracing_map->next_elt" in get_free_elt() is at risk of overflowing.
+
+Once it overflows, new elements can still be inserted into the tracing_map
+even though the maximum number of elements (`max_elts`) has been reached.
+Continuing to insert elements after the overflow could result in the
+tracing_map containing "tracing_map->max_size" elements, leaving no empty
+entries.
+If any attempt is made to insert an element into a full tracing_map using
+`__tracing_map_insert()`, it will cause an infinite loop with preemption
+disabled, leading to a CPU hang problem.
+
+Fix this by preventing any further increments to "tracing_map->next_elt"
+once it reaches "tracing_map->max_elts".
+
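+atomic_fetch_add_unless(v, a, u) adds a to *v unless *v == u, and returns
+the old value either way, so next_elt now saturates at max_elts:
+
+  idx = atomic_fetch_add_unless(&map->next_elt, 1, map->max_elts);
+  if (idx < map->max_elts)  /* permanently false once the map is full */
+          ...
+
+This is also why next_elt now starts at 0 instead of -1: the old value
+returned by atomic_fetch_add_unless() is used directly as the index,
+whereas atomic_inc_return() returned the incremented value.
+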
+Cc: stable@vger.kernel.org
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Fixes: 08d43a5fa063e ("tracing: Add lock-free tracing_map")
+Co-developed-by: Cheng-Jui Wang <cheng-jui.wang@mediatek.com>
+Link: https://lore.kernel.org/20240805055922.6277-1-Tze-nan.Wu@mediatek.com
+Signed-off-by: Cheng-Jui Wang <cheng-jui.wang@mediatek.com>
+Signed-off-by: Tze-nan Wu <Tze-nan.Wu@mediatek.com>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/tracing_map.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/kernel/trace/tracing_map.c
++++ b/kernel/trace/tracing_map.c
+@@ -454,7 +454,7 @@ static struct tracing_map_elt *get_free_
+ struct tracing_map_elt *elt = NULL;
+ int idx;
+
+- idx = atomic_inc_return(&map->next_elt);
++ idx = atomic_fetch_add_unless(&map->next_elt, 1, map->max_elts);
+ if (idx < map->max_elts) {
+ elt = *(TRACING_MAP_ELT(map->elts, idx));
+ if (map->ops && map->ops->elt_init)
+@@ -699,7 +699,7 @@ void tracing_map_clear(struct tracing_ma
+ {
+ unsigned int i;
+
+- atomic_set(&map->next_elt, -1);
++ atomic_set(&map->next_elt, 0);
+ atomic64_set(&map->hits, 0);
+ atomic64_set(&map->drops, 0);
+
+@@ -783,7 +783,7 @@ struct tracing_map *tracing_map_create(u
+
+ map->map_bits = map_bits;
+ map->max_elts = (1 << map_bits);
+- atomic_set(&map->next_elt, -1);
++ atomic_set(&map->next_elt, 0);
+
+ map->map_size = (1 << (map_bits + 1));
+ map->ops = ops;
--- /dev/null
+From 919f18f961c03d6694aa726c514184f2311a4614 Mon Sep 17 00:00:00 2001
+From: Andi Kleen <ak@linux.intel.com>
+Date: Wed, 7 Aug 2024 17:02:44 -0700
+Subject: x86/mtrr: Check if fixed MTRRs exist before saving them
+
+From: Andi Kleen <ak@linux.intel.com>
+
+commit 919f18f961c03d6694aa726c514184f2311a4614 upstream.
+
+MTRRs have an obsolete fixed variant for fine grained caching control
+of the 640K-1MB region that uses separate MSRs. This fixed variant has
+a separate capability bit in the MTRR capability MSR.
+
+So far all x86 CPUs which support MTRR have this separate bit set, so it
+went unnoticed that mtrr_save_state() does not check the capability bit
+before accessing the fixed MTRR MSRs.
+
+However, on a CPU that does not support the fixed MTRR capability this
+results in a #GP. The #GP itself is harmless because the RDMSR fault is
+handled gracefully, but it results in a WARN_ON().
+
+Add the missing capability check to prevent this.
+
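+The capability in question is bit 8 (FIX) of the IA32_MTRRCAP MSR (0xfe),
+which the kernel caches in mtrr_state.have_fixed, roughly as follows
+(sketch; the actual code lives in the MTRR init path):
+
+  rdmsr(MSR_MTRRcap, lo, hi);
+  mtrr_state.have_fixed = !!(lo & BIT(8));  /* fixed-range MTRRs present */
+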
+Fixes: 2b1f6278d77c ("[PATCH] x86: Save the MTRRs of the BSP before booting an AP")
+Signed-off-by: Andi Kleen <ak@linux.intel.com>
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/all/20240808000244.946864-1-ak@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/cpu/mtrr/mtrr.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
++++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
+@@ -816,7 +816,7 @@ void mtrr_save_state(void)
+ {
+ int first_cpu;
+
+- if (!mtrr_enabled())
++ if (!mtrr_enabled() || !mtrr_state.have_fixed)
+ return;
+
+ first_cpu = cpumask_first(cpu_online_mask);