From: Sasha Levin Date: Fri, 26 Sep 2025 10:57:07 +0000 (-0400) Subject: Fixes for all trees X-Git-Tag: v5.4.300~46 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=dd8d85f328501b0d953120e2a70973eb0ae39e7c;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for all trees Signed-off-by: Sasha Levin --- diff --git a/queue-5.10/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-5.10/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..8d57ba9a09 --- /dev/null +++ b/queue-5.10/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From 142cb2570d38be9a4a2804efac650d30c87d4a35 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. + +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 0186b3992b95f..8b02ead72a88c 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -142,7 +142,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -172,7 +172,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-5.10/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-5.10/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..b805d21d45 --- /dev/null +++ b/queue-5.10/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From 7f1a6d6f71a11d7320041007eb796ba04fb58abc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. 
+ +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index 134eda66f0dcf..e759d940977a8 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -867,7 +867,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -879,12 +878,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-5.10/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-5.10/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..734142aa87 --- /dev/null +++ b/queue-5.10/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From d74d5f21649ef06c41e81323a970bf0eb9eec496 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. +This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 149ba2e39a965..ff0daac63819e 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2792,6 +2792,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). 
++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2814,21 +2823,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). +- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-5.10/series b/queue-5.10/series index 8bf9204b9d..e796710456 100644 --- a/queue-5.10/series +++ b/queue-5.10/series @@ -91,3 +91,6 @@ alsa-usb-audio-convert-comma-to-semicolon.patch alsa-usb-audio-fix-build-with-config_input-n.patch usb-core-add-0x-prefix-to-quirks-debug-output.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch diff --git a/queue-5.15/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-5.15/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..78aa71334f --- /dev/null +++ b/queue-5.15/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From d7624ce853cea3f8734726953cf69cfb9ac937d8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. 
+ +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index b5130e7be8263..4eeef01a5a835 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -161,7 +161,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -191,7 +191,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-5.15/bpf-reject-bpf_timer-for-preempt_rt.patch b/queue-5.15/bpf-reject-bpf_timer-for-preempt_rt.patch new file mode 100644 index 0000000000..3ab0b08e43 --- /dev/null +++ b/queue-5.15/bpf-reject-bpf_timer-for-preempt_rt.patch @@ -0,0 +1,43 @@ +From f7cf35cf642ce1459e0a7591fabeea09b7149722 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:39 +0800 +Subject: bpf: Reject bpf_timer for PREEMPT_RT + +From: Leon Hwang + +[ Upstream commit e25ddfb388c8b7e5f20e3bf38d627fb485003781 ] + +When enable CONFIG_PREEMPT_RT, the kernel will warn when run timer +selftests by './test_progs -t timer': + +BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 + +In order to avoid such warning, reject bpf_timer in verifier when +PREEMPT_RT is enabled. + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + kernel/bpf/verifier.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 89b4fa815a9ba..4b7c9a60a7352 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -5071,6 +5071,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno, + verbose(env, "verifier bug. Two map pointers in a timer helper\n"); + return -EFAULT; + } ++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) { ++ verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); ++ return -EOPNOTSUPP; ++ } + meta->map_uid = reg->map_uid; + meta->map_ptr = map; + return 0; +-- +2.51.0 + diff --git a/queue-5.15/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-5.15/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..a6297735ae --- /dev/null +++ b/queue-5.15/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From 40ab5f628fc3e1a3bd62ad89bce50a2f895c0635 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. 
+ +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index 68ad7da5c07e0..e21b73315b986 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -863,7 +863,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -875,12 +874,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-5.15/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-5.15/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..41f8bdda96 --- /dev/null +++ b/queue-5.15/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From c2cf9ba46642d1791ba3454ea7b7e8ba0b721963 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. +This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index addd20bf6be08..060a85e5a7d3f 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2853,6 +2853,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). 
++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2874,21 +2883,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). +- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-5.15/series b/queue-5.15/series index dfd4c4dacc..e016c65efe 100644 --- a/queue-5.15/series +++ b/queue-5.15/series @@ -110,3 +110,7 @@ alsa-usb-audio-convert-comma-to-semicolon.patch alsa-usb-audio-fix-build-with-config_input-n.patch usb-core-add-0x-prefix-to-quirks-debug-output.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch +bpf-reject-bpf_timer-for-preempt_rt.patch diff --git a/queue-5.4/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-5.4/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..aab781b9fa --- /dev/null +++ b/queue-5.4/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From c0358d09bcfa7cba27811cb2b64dec7791a5a445 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. 
+ +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index b99b1b235348c..087c9d16118b5 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -869,7 +869,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -881,12 +880,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-5.4/series b/queue-5.4/series index 80d96f2e09..5d972ce8f2 100644 --- a/queue-5.4/series +++ b/queue-5.4/series @@ -61,3 +61,4 @@ alsa-usb-audio-convert-comma-to-semicolon.patch alsa-usb-audio-fix-build-with-config_input-n.patch usb-core-add-0x-prefix-to-quirks-debug-output.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch diff --git a/queue-6.1/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-6.1/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..475f44a46d --- /dev/null +++ b/queue-6.1/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From d96f7300d408773b19b9f96485263a65987130df Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. 
+ +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 86af7115ac60c..e05a1029975af 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -227,7 +227,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -257,7 +257,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-6.1/bpf-reject-bpf_timer-for-preempt_rt.patch b/queue-6.1/bpf-reject-bpf_timer-for-preempt_rt.patch new file mode 100644 index 0000000000..8e4df087a9 --- /dev/null +++ b/queue-6.1/bpf-reject-bpf_timer-for-preempt_rt.patch @@ -0,0 +1,43 @@ +From 7e0afb3a275f97bd1de38786cc731056ab3f0cb1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:39 +0800 +Subject: bpf: Reject bpf_timer for PREEMPT_RT + +From: Leon Hwang + +[ Upstream commit e25ddfb388c8b7e5f20e3bf38d627fb485003781 ] + +When enable CONFIG_PREEMPT_RT, the kernel will warn when run timer +selftests by './test_progs -t timer': + +BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 + +In order to avoid such warning, reject bpf_timer in verifier when +PREEMPT_RT is enabled. + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + kernel/bpf/verifier.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index ead1811534a0d..276a0de9a1bb2 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -5733,6 +5733,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno, + verbose(env, "verifier bug. Two map pointers in a timer helper\n"); + return -EFAULT; + } ++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) { ++ verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); ++ return -EOPNOTSUPP; ++ } + meta->map_uid = reg->map_uid; + meta->map_ptr = map; + return 0; +-- +2.51.0 + diff --git a/queue-6.1/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-6.1/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..d21a16854c --- /dev/null +++ b/queue-6.1/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From ef8bf6598e47d8bce212334690eb94972aa5fad5 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. 
+ +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index cc43c9c5e38c5..92a3f28bea87a 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -862,7 +862,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -874,12 +873,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-6.1/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-6.1/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..6c95e67481 --- /dev/null +++ b/queue-6.1/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From 69892e02c00fb428e2e35c9a4fc47774469ce173 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. +This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 90bdccab1dffb..de67d9c6c9c68 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2894,6 +2894,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). 
++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2915,21 +2924,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). +- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-6.1/mm-add-folio_expected_ref_count-for-reference-count-.patch b/queue-6.1/mm-add-folio_expected_ref_count-for-reference-count-.patch new file mode 100644 index 0000000000..835809a631 --- /dev/null +++ b/queue-6.1/mm-add-folio_expected_ref_count-for-reference-count-.patch @@ -0,0 +1,148 @@ +From dbf3fc0eae1eb6830d04fbae8c45d1a8be705d79 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 30 Apr 2025 10:01:51 +0000 +Subject: mm: add folio_expected_ref_count() for reference count calculation + +From: Shivank Garg + +[ Upstream commit 86ebd50224c0734d965843260d0dc057a9431c61 ] + +Patch series " JFS: Implement migrate_folio for jfs_metapage_aops" v5. + +This patchset addresses a warning that occurs during memory compaction due +to JFS's missing migrate_folio operation. The warning was introduced by +commit 7ee3647243e5 ("migrate: Remove call to ->writepage") which added +explicit warnings when filesystem don't implement migrate_folio. + +The syzbot reported following [1]: + jfs_metapage_aops does not implement migrate_folio + WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline] + WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007 + Modules linked in: + CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) + Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 + RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline] + RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007 + +To fix this issue, this series implement metapage_migrate_folio() for JFS +which handles both single and multiple metapages per page configurations. + +While most filesystems leverage existing migration implementations like +filemap_migrate_folio(), buffer_migrate_folio_norefs() or +buffer_migrate_folio() (which internally used folio_expected_refs()), +JFS's metapage architecture requires special handling of its private data +during migration. To support this, this series introduce the +folio_expected_ref_count(), which calculates external references to a +folio from page/swap cache, private data, and page table mappings. + +This standardized implementation replaces the previous ad-hoc +folio_expected_refs() function and enables JFS to accurately determine +whether a folio has unexpected references before attempting migration. 
+ +Implement folio_expected_ref_count() to calculate expected folio reference +counts from: +- Page/swap cache (1 per page) +- Private data (1) +- Page table mappings (1 per map) + +While originally needed for page migration operations, this improved +implementation standardizes reference counting by consolidating all +refcount contributors into a single, reusable function that can benefit +any subsystem needing to detect unexpected references to folios. + +The folio_expected_ref_count() returns the sum of these external +references without including any reference the caller itself might hold. +Callers comparing against the actual folio_ref_count() must account for +their own references separately. + +Link: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299 [1] +Link: https://lkml.kernel.org/r/20250430100150.279751-1-shivankg@amd.com +Link: https://lkml.kernel.org/r/20250430100150.279751-2-shivankg@amd.com +Signed-off-by: David Hildenbrand +Signed-off-by: Shivank Garg +Suggested-by: Matthew Wilcox +Co-developed-by: David Hildenbrand +Cc: Alistair Popple +Cc: Dave Kleikamp +Cc: Donet Tom +Cc: Jane Chu +Cc: Kefeng Wang +Cc: Zi Yan +Signed-off-by: Andrew Morton +Stable-dep-of: 98c6d259319e ("mm/gup: check ref_count instead of lru before migration") +[ Take the new function in mm.h, removing "const" from its parameter to stop + build warnings; but avoid all the conflicts of using it in mm/migrate.c. ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + include/linux/mm.h | 54 ++++++++++++++++++++++++++++++++++++++++++++++ + 1 file changed, 54 insertions(+) + +diff --git a/include/linux/mm.h b/include/linux/mm.h +index 9e17670de8483..3bf7823e10979 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -1782,6 +1782,60 @@ static inline int folio_estimated_sharers(struct folio *folio) + return page_mapcount(folio_page(folio, 0)); + } + ++/** ++ * folio_expected_ref_count - calculate the expected folio refcount ++ * @folio: the folio ++ * ++ * Calculate the expected folio refcount, taking references from the pagecache, ++ * swapcache, PG_private and page table mappings into account. Useful in ++ * combination with folio_ref_count() to detect unexpected references (e.g., ++ * GUP or other temporary references). ++ * ++ * Does currently not consider references from the LRU cache. If the folio ++ * was isolated from the LRU (which is the case during migration or split), ++ * the LRU cache does not apply. ++ * ++ * Calling this function on an unmapped folio -- !folio_mapped() -- that is ++ * locked will return a stable result. ++ * ++ * Calling this function on a mapped folio will not result in a stable result, ++ * because nothing stops additional page table mappings from coming (e.g., ++ * fork()) or going (e.g., munmap()). ++ * ++ * Calling this function without the folio lock will also not result in a ++ * stable result: for example, the folio might get dropped from the swapcache ++ * concurrently. ++ * ++ * However, even when called without the folio lock or on a mapped folio, ++ * this function can be used to detect unexpected references early (for example, ++ * if it makes sense to even lock the folio and unmap it). ++ * ++ * The caller must add any reference (e.g., from folio_try_get()) it might be ++ * holding itself to the result. ++ * ++ * Returns the expected folio refcount. 
++ */ ++static inline int folio_expected_ref_count(struct folio *folio) ++{ ++ const int order = folio_order(folio); ++ int ref_count = 0; ++ ++ if (WARN_ON_ONCE(folio_test_slab(folio))) ++ return 0; ++ ++ if (folio_test_anon(folio)) { ++ /* One reference per page from the swapcache. */ ++ ref_count += folio_test_swapcache(folio) << order; ++ } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) { ++ /* One reference per page from the pagecache. */ ++ ref_count += !!folio->mapping << order; ++ /* One reference from PG_private. */ ++ ref_count += folio_test_private(folio); ++ } ++ ++ /* One reference per page table mapping. */ ++ return ref_count + folio_mapcount(folio); ++} + + #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE + static inline int arch_make_page_accessible(struct page *page) +-- +2.51.0 + diff --git a/queue-6.1/mm-folio_may_be_lru_cached-unless-folio_test_large.patch b/queue-6.1/mm-folio_may_be_lru_cached-unless-folio_test_large.patch new file mode 100644 index 0000000000..9c9aaa6eff --- /dev/null +++ b/queue-6.1/mm-folio_may_be_lru_cached-unless-folio_test_large.patch @@ -0,0 +1,134 @@ +From 9d8e55c4a430063f90fef6718cf00f80a0f546a6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:23:15 -0700 +Subject: mm: folio_may_be_lru_cached() unless folio_test_large() + +From: Hugh Dickins + +[ Upstream commit 2da6de30e60dd9bb14600eff1cc99df2fa2ddae3 ] + +mm/swap.c and mm/mlock.c agree to drain any per-CPU batch as soon as a +large folio is added: so collect_longterm_unpinnable_folios() just wastes +effort when calling lru_add_drain[_all]() on a large folio. + +But although there is good reason not to batch up PMD-sized folios, we +might well benefit from batching a small number of low-order mTHPs (though +unclear how that "small number" limitation will be implemented). + +So ask if folio_may_be_lru_cached() rather than !folio_test_large(), to +insulate those particular checks from future change. Name preferred to +"folio_is_batchable" because large folios can well be put on a batch: it's +just the per-CPU LRU caches, drained much later, which need care. + +Marked for stable, to counter the increase in lru_add_drain_all()s from +"mm/gup: check ref_count instead of lru before migration". 
+ +Link: https://lkml.kernel.org/r/57d2eaf8-3607-f318-e0c5-be02dce61ad0@google.com +Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region") +Signed-off-by: Hugh Dickins +Suggested-by: David Hildenbrand +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: Will Deacon +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Resolved conflicts in mm/swap.c; left "page" parts of mm/mlock.c as is ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + include/linux/swap.h | 10 ++++++++++ + mm/gup.c | 4 ++-- + mm/mlock.c | 2 +- + mm/swap.c | 4 ++-- + 4 files changed, 15 insertions(+), 5 deletions(-) + +diff --git a/include/linux/swap.h b/include/linux/swap.h +index add47f43e568e..3eecf97dfbb8d 100644 +--- a/include/linux/swap.h ++++ b/include/linux/swap.h +@@ -392,6 +392,16 @@ void lru_cache_add(struct page *); + void mark_page_accessed(struct page *); + void folio_mark_accessed(struct folio *); + ++static inline bool folio_may_be_lru_cached(struct folio *folio) ++{ ++ /* ++ * Holding PMD-sized folios in per-CPU LRU cache unbalances accounting. ++ * Holding small numbers of low-order mTHP folios in per-CPU LRU cache ++ * will be sensible, but nobody has implemented and tested that yet. ++ */ ++ return !folio_test_large(folio); ++} ++ + extern atomic_t lru_disable_count; + + static inline bool lru_cache_disabled(void) +diff --git a/mm/gup.c b/mm/gup.c +index e1f125af9c844..b02993c9a8cdf 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1990,13 +1990,13 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (drained == 0 && ++ if (drained == 0 && folio_may_be_lru_cached(folio) && + folio_ref_count(folio) != + folio_expected_ref_count(folio) + 1) { + lru_add_drain(); + drained = 1; + } +- if (drained == 1 && ++ if (drained == 1 && folio_may_be_lru_cached(folio) && + folio_ref_count(folio) != + folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); +diff --git a/mm/mlock.c b/mm/mlock.c +index 7032f6dd0ce19..3bf9e1d263da4 100644 +--- a/mm/mlock.c ++++ b/mm/mlock.c +@@ -256,7 +256,7 @@ void mlock_folio(struct folio *folio) + + folio_get(folio); + if (!pagevec_add(pvec, mlock_lru(&folio->page)) || +- folio_test_large(folio) || lru_cache_disabled()) ++ !folio_may_be_lru_cached(folio) || lru_cache_disabled()) + mlock_pagevec(pvec); + local_unlock(&mlock_pvec.lock); + } +diff --git a/mm/swap.c b/mm/swap.c +index 85aa04fc48a67..e0fdf25350002 100644 +--- a/mm/swap.c ++++ b/mm/swap.c +@@ -249,8 +249,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn) + static void folio_batch_add_and_move(struct folio_batch *fbatch, + struct folio *folio, move_fn_t move_fn) + { +- if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) && +- !lru_cache_disabled()) ++ if (folio_batch_add(fbatch, folio) && ++ folio_may_be_lru_cached(folio) && !lru_cache_disabled()) + return; + folio_batch_move_lru(fbatch, move_fn); + } +-- +2.51.0 + diff --git a/queue-6.1/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch b/queue-6.1/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch new file mode 100644 index 0000000000..35fdfd9974 --- /dev/null +++ 
b/queue-6.1/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch @@ -0,0 +1,143 @@ +From f3510950ae8503be698fe4f732bba8b93085cbed Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:15:03 -0700 +Subject: mm/gup: check ref_count instead of lru before migration + +From: Hugh Dickins + +[ Upstream commit 98c6d259319ecf6e8d027abd3f14b81324b8c0ad ] + +Patch series "mm: better GUP pin lru_add_drain_all()", v2. + +Series of lru_add_drain_all()-related patches, arising from recent mm/gup +migration report from Will Deacon. + +This patch (of 5): + +Will Deacon reports:- + +When taking a longterm GUP pin via pin_user_pages(), +__gup_longterm_locked() tries to migrate target folios that should not be +longterm pinned, for example because they reside in a CMA region or +movable zone. This is done by first pinning all of the target folios +anyway, collecting all of the longterm-unpinnable target folios into a +list, dropping the pins that were just taken and finally handing the list +off to migrate_pages() for the actual migration. + +It is critically important that no unexpected references are held on the +folios being migrated, otherwise the migration will fail and +pin_user_pages() will return -ENOMEM to its caller. Unfortunately, it is +relatively easy to observe migration failures when running pKVM (which +uses pin_user_pages() on crosvm's virtual address space to resolve stage-2 +page faults from the guest) on a 6.15-based Pixel 6 device and this +results in the VM terminating prematurely. + +In the failure case, 'crosvm' has called mlock(MLOCK_ONFAULT) on its +mapping of guest memory prior to the pinning. Subsequently, when +pin_user_pages() walks the page-table, the relevant 'pte' is not present +and so the faulting logic allocates a new folio, mlocks it with +mlock_folio() and maps it in the page-table. + +Since commit 2fbb0c10d1e8 ("mm/munlock: mlock_page() munlock_page() batch +by pagevec"), mlock/munlock operations on a folio (formerly page), are +deferred. For example, mlock_folio() takes an additional reference on the +target folio before placing it into a per-cpu 'folio_batch' for later +processing by mlock_folio_batch(), which drops the refcount once the +operation is complete. Processing of the batches is coupled with the LRU +batch logic and can be forcefully drained with lru_add_drain_all() but as +long as a folio remains unprocessed on the batch, its refcount will be +elevated. + +This deferred batching therefore interacts poorly with the pKVM pinning +scenario as we can find ourselves in a situation where the migration code +fails to migrate a folio due to the elevated refcount from the pending +mlock operation. + +Hugh Dickins adds:- + +!folio_test_lru() has never been a very reliable way to tell if an +lru_add_drain_all() is worth calling, to remove LRU cache references to +make the folio migratable: the LRU flag may be set even while the folio is +held with an extra reference in a per-CPU LRU cache. + +5.18 commit 2fbb0c10d1e8 may have made it more unreliable. Then 6.11 +commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding +to LRU batch") tried to make it reliable, by moving LRU flag clearing; but +missed the mlock/munlock batches, so still unreliable as reported. + +And it turns out to be difficult to extend 33dfe9204f29's LRU flag +clearing to the mlock/munlock batches: if they do benefit from batching, +mlock/munlock cannot be so effective when easily suppressed while !LRU. 
+ +Instead, switch to an expected ref_count check, which was more reliable +all along: some more false positives (unhelpful drains) than before, and +never a guarantee that the folio will prove migratable, but better. + +Note on PG_private_2: ceph and nfs are still using the deprecated +PG_private_2 flag, with the aid of netfs and filemap support functions. +Although it is consistently matched by an increment of folio ref_count, +folio_expected_ref_count() intentionally does not recognize it, and ceph +folio migration currently depends on that for PG_private_2 folios to be +rejected. New references to the deprecated flag are discouraged, so do +not add it into the collect_longterm_unpinnable_folios() calculation: but +longterm pinning of transiently PG_private_2 ceph and nfs folios (an +uncommon case) may invoke a redundant lru_add_drain_all(). And this makes +easy the backport to earlier releases: up to and including 6.12, btrfs +also used PG_private_2, but without a ref_count increment. + +Note for stable backports: requires 6.16 commit 86ebd50224c0 ("mm: +add folio_expected_ref_count() for reference count calculation"). + +Link: https://lkml.kernel.org/r/41395944-b0e3-c3ac-d648-8ddd70451d28@google.com +Link: https://lkml.kernel.org/r/bd1f314a-fca1-8f19-cac0-b936c9614557@google.com +Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region") +Signed-off-by: Hugh Dickins +Reported-by: Will Deacon +Closes: https://lore.kernel.org/linux-mm/20250815101858.24352-1-will@kernel.org/ +Acked-by: Kiryl Shutsemau +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Clean cherry-pick now into this tree ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 599c6b9453166..44e5fe2535d0e 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1990,7 +1990,8 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (!folio_test_lru(folio) && drain_allow) { ++ if (drain_allow && folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); + drain_allow = false; + } +-- +2.51.0 + diff --git a/queue-6.1/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch b/queue-6.1/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch new file mode 100644 index 0000000000..08792da459 --- /dev/null +++ b/queue-6.1/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch @@ -0,0 +1,88 @@ +From 507210a0fb041814d1eb8a685773c861715ea411 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:16:53 -0700 +Subject: mm/gup: local lru_add_drain() to avoid lru_add_drain_all() + +From: Hugh Dickins + +[ Upstream commit a09a8a1fbb374e0053b97306da9dbc05bd384685 ] + +In many cases, if collect_longterm_unpinnable_folios() does need to drain +the LRU cache to release a reference, the cache in question is on this +same CPU, and much more efficiently drained by a preliminary local +lru_add_drain(), than the later cross-CPU lru_add_drain_all(). 
+ +Marked for stable, to counter the increase in lru_add_drain_all()s from +"mm/gup: check ref_count instead of lru before migration". Note for clean +backports: can take 6.16 commit a03db236aebf ("gup: optimize longterm +pin_user_pages() for large folio") first. + +Link: https://lkml.kernel.org/r/66f2751f-283e-816d-9530-765db7edc465@google.com +Signed-off-by: Hugh Dickins +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: Will Deacon +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Resolved minor conflicts ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 15 +++++++++++---- + 1 file changed, 11 insertions(+), 4 deletions(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 44e5fe2535d0e..e1f125af9c844 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1968,7 +1968,7 @@ static unsigned long collect_longterm_unpinnable_pages( + { + unsigned long i, collected = 0; + struct folio *prev_folio = NULL; +- bool drain_allow = true; ++ int drained = 0; + + for (i = 0; i < nr_pages; i++) { + struct folio *folio = page_folio(pages[i]); +@@ -1990,10 +1990,17 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (drain_allow && folio_ref_count(folio) != +- folio_expected_ref_count(folio) + 1) { ++ if (drained == 0 && ++ folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { ++ lru_add_drain(); ++ drained = 1; ++ } ++ if (drained == 1 && ++ folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); +- drain_allow = false; ++ drained = 2; + } + + if (folio_isolate_lru(folio)) +-- +2.51.0 + diff --git a/queue-6.1/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch b/queue-6.1/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch new file mode 100644 index 0000000000..d6b87c7adc --- /dev/null +++ b/queue-6.1/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch @@ -0,0 +1,114 @@ +From 06d126dc03e9dcce2b88b658e470497da46126a6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 15:13:14 +0200 +Subject: mm/gup: revert "mm: gup: fix infinite loop within + __get_longterm_locked" + +From: David Hildenbrand + +[ Upstream commit 517f496e1e61bd169d585dab4dd77e7147506322 ] + +After commit 1aaf8c122918 ("mm: gup: fix infinite loop within +__get_longterm_locked") we are able to longterm pin folios that are not +supposed to get longterm pinned, simply because they temporarily have the +LRU flag cleared (esp. temporarily isolated). + +For example, two __get_longterm_locked() callers can race, or +__get_longterm_locked() can race with anything else that temporarily +isolates folios. + +The introducing commit mentions the use case of a driver that uses +vm_ops->fault to insert pages allocated through cma_alloc() into the page +tables, assuming they can later get longterm pinned. These pages/ folios +would never have the LRU flag set and consequently cannot get isolated. +There is no known in-tree user making use of that so far, fortunately. + +To handle that in the future -- and avoid retrying forever to +isolate/migrate them -- we will need a different mechanism for the CMA +area *owner* to indicate that it actually already allocated the page and +is fine with longterm pinning it. 
The LRU flag is not suitable for that. + +Probably we can lookup the relevant CMA area and query the bitmap; we only +have have to care about some races, probably. If already allocated, we +could just allow longterm pinning) + +Anyhow, let's fix the "must not be longterm pinned" problem first by +reverting the original commit. + +Link: https://lkml.kernel.org/r/20250611131314.594529-1-david@redhat.com +Fixes: 1aaf8c122918 ("mm: gup: fix infinite loop within __get_longterm_locked") +Signed-off-by: David Hildenbrand +Closes: https://lore.kernel.org/all/20250522092755.GA3277597@tiffany/ +Reported-by: Hyesoo Yu +Reviewed-by: John Hubbard +Cc: Jason Gunthorpe +Cc: Peter Xu +Cc: Zhaoyang Huang +Cc: Aijun Sun +Cc: Alistair Popple +Cc: +Signed-off-by: Andrew Morton +[ Revert v6.1.129 commit c986a5fb15ed ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 14 ++++++++++---- + 1 file changed, 10 insertions(+), 4 deletions(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 37c55e61460e2..599c6b9453166 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1961,14 +1961,14 @@ struct page *get_dump_page(unsigned long addr) + /* + * Returns the number of collected pages. Return value is always >= 0. + */ +-static void collect_longterm_unpinnable_pages( ++static unsigned long collect_longterm_unpinnable_pages( + struct list_head *movable_page_list, + unsigned long nr_pages, + struct page **pages) + { ++ unsigned long i, collected = 0; + struct folio *prev_folio = NULL; + bool drain_allow = true; +- unsigned long i; + + for (i = 0; i < nr_pages; i++) { + struct folio *folio = page_folio(pages[i]); +@@ -1980,6 +1980,8 @@ static void collect_longterm_unpinnable_pages( + if (folio_is_longterm_pinnable(folio)) + continue; + ++ collected++; ++ + if (folio_is_device_coherent(folio)) + continue; + +@@ -2001,6 +2003,8 @@ static void collect_longterm_unpinnable_pages( + NR_ISOLATED_ANON + folio_is_file_lru(folio), + folio_nr_pages(folio)); + } ++ ++ return collected; + } + + /* +@@ -2093,10 +2097,12 @@ static int migrate_longterm_unpinnable_pages( + static long check_and_migrate_movable_pages(unsigned long nr_pages, + struct page **pages) + { ++ unsigned long collected; + LIST_HEAD(movable_page_list); + +- collect_longterm_unpinnable_pages(&movable_page_list, nr_pages, pages); +- if (list_empty(&movable_page_list)) ++ collected = collect_longterm_unpinnable_pages(&movable_page_list, ++ nr_pages, pages); ++ if (!collected) + return 0; + + return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages, +-- +2.51.0 + diff --git a/queue-6.1/series b/queue-6.1/series index 2e99d193ee..cf5fd9bec0 100644 --- a/queue-6.1/series +++ b/queue-6.1/series @@ -15,3 +15,13 @@ alsa-usb-audio-add-dsd-support-for-comtrue-usb-audio.patch alsa-usb-audio-move-mixer_quirks-min_mute-into-commo.patch alsa-usb-audio-add-mute-tlv-for-playback-volumes-on-.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch +mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch +mm-add-folio_expected_ref_count-for-reference-count-.patch +mm-gup-check-ref_count-instead-of-lru-before-migrati.patch +mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch +mm-folio_may_be_lru_cached-unless-folio_test_large.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +smb-server-don-t-use-delayed_work-for-post_recv_cred.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch +bpf-reject-bpf_timer-for-preempt_rt.patch diff --git 
a/queue-6.1/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch b/queue-6.1/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch new file mode 100644 index 0000000000..b2f7b28259 --- /dev/null +++ b/queue-6.1/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch @@ -0,0 +1,103 @@ +From e5d59a9a52f60c249015e6125abfa76fd0b2e183 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 8 Aug 2025 17:55:17 +0200 +Subject: smb: server: don't use delayed_work for post_recv_credits_work + +From: Stefan Metzmacher + +[ Upstream commit 1cde0a74a7a8951b3097417847a458e557be0b5b ] + +If we are using a hardcoded delay of 0 there's no point in +using delayed_work it only adds confusion. + +The client also uses a normal work_struct and now +it is easier to move it to the common smbdirect_socket. + +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 18 ++++++++---------- + 1 file changed, 8 insertions(+), 10 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 323b8a401a8c0..84b5b2f5df998 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -147,7 +147,7 @@ struct smb_direct_transport { + wait_queue_head_t wait_send_pending; + atomic_t send_pending; + +- struct delayed_work post_recv_credits_work; ++ struct work_struct post_recv_credits_work; + struct work_struct send_immediate_work; + struct work_struct disconnect_work; + +@@ -365,8 +365,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id) + + spin_lock_init(&t->lock_new_recv_credits); + +- INIT_DELAYED_WORK(&t->post_recv_credits_work, +- smb_direct_post_recv_credits); ++ INIT_WORK(&t->post_recv_credits_work, ++ smb_direct_post_recv_credits); + INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work); + INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work); + +@@ -393,7 +393,7 @@ static void free_transport(struct smb_direct_transport *t) + atomic_read(&t->send_pending) == 0); + + cancel_work_sync(&t->disconnect_work); +- cancel_delayed_work_sync(&t->post_recv_credits_work); ++ cancel_work_sync(&t->post_recv_credits_work); + cancel_work_sync(&t->send_immediate_work); + + if (t->qp) { +@@ -609,8 +609,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc) + wake_up_interruptible(&t->wait_send_credits); + + if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count)) +- mod_delayed_work(smb_direct_wq, +- &t->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &t->post_recv_credits_work); + + if (data_length) { + enqueue_reassembly(t, recvmsg, (int)data_length); +@@ -767,8 +766,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + st->count_avail_recvmsg += queue_removed; + if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) { + spin_unlock(&st->receive_credit_lock); +- mod_delayed_work(smb_direct_wq, +- &st->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &st->post_recv_credits_work); + } else { + spin_unlock(&st->receive_credit_lock); + } +@@ -795,7 +793,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + static void smb_direct_post_recv_credits(struct work_struct *work) + { + struct 
smb_direct_transport *t = container_of(work, +- struct smb_direct_transport, post_recv_credits_work.work); ++ struct smb_direct_transport, post_recv_credits_work); + struct smb_direct_recvmsg *recvmsg; + int receive_credits, credits = 0; + int ret; +@@ -1676,7 +1674,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t) + goto out_err; + } + +- smb_direct_post_recv_credits(&t->post_recv_credits_work.work); ++ smb_direct_post_recv_credits(&t->post_recv_credits_work); + return 0; + out_err: + put_recvmsg(t, recvmsg); +-- +2.51.0 + diff --git a/queue-6.12/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch b/queue-6.12/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch new file mode 100644 index 0000000000..aab93eaa8f --- /dev/null +++ b/queue-6.12/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch @@ -0,0 +1,45 @@ +From 0591cf542123259f1d196fac81c384bb05af0e30 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 30 Aug 2025 22:37:50 +0200 +Subject: ARM: dts: kirkwood: Fix sound DAI cells for OpenRD clients + +From: Jihed Chaibi + +[ Upstream commit 29341c6c18b8ad2a9a4a68a61be7e1272d842f21 ] + +A previous commit changed the '#sound-dai-cells' property for the +kirkwood audio controller from 1 to 0 in the kirkwood.dtsi file, +but did not update the corresponding 'sound-dai' property in the +kirkwood-openrd-client.dts file. + +This created a mismatch, causing a dtbs_check validation error where +the dts provides one cell (<&audio0 0>) while the .dtsi expects zero. + +Remove the extraneous cell from the 'sound-dai' property to fix the +schema validation warning and align with the updated binding. + +Fixes: e662e70fa419 ("arm: dts: kirkwood: fix error in #sound-dai-cells size") +Signed-off-by: Jihed Chaibi +Reviewed-by: Krzysztof Kozlowski +Signed-off-by: Gregory CLEMENT +Signed-off-by: Sasha Levin +--- + arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +index d4e0b8150a84c..cf26e2ceaaa07 100644 +--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts ++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +@@ -38,7 +38,7 @@ + simple-audio-card,mclk-fs = <256>; + + simple-audio-card,cpu { +- sound-dai = <&audio0 0>; ++ sound-dai = <&audio0>; + }; + + simple-audio-card,codec { +-- +2.51.0 + diff --git a/queue-6.12/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-6.12/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..95958d8f0e --- /dev/null +++ b/queue-6.12/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From 8aad6858077ac841820b564ebc8ffaaa6202e71b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. 
+ +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 40e847bc0b7f8..62cf525ab714b 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -283,7 +283,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -313,7 +313,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-6.12/bpf-check-the-helper-function-is-valid-in-get_helper.patch b/queue-6.12/bpf-check-the-helper-function-is-valid-in-get_helper.patch new file mode 100644 index 0000000000..2980533451 --- /dev/null +++ b/queue-6.12/bpf-check-the-helper-function-is-valid-in-get_helper.patch @@ -0,0 +1,65 @@ +From 3dfa7be1bf16e1e26d4d7a7c26be6b7791f5ea47 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 22:06:55 +0200 +Subject: bpf: Check the helper function is valid in get_helper_proto + +From: Jiri Olsa + +[ Upstream commit e4414b01c1cd9887bbde92f946c1ba94e40d6d64 ] + +kernel test robot reported verifier bug [1] where the helper func +pointer could be NULL due to disabled config option. + +As Alexei suggested we could check on that in get_helper_proto +directly. Marking tail_call helper func with BPF_PTR_POISON, +because it is unused by design. + + [1] https://lore.kernel.org/oe-lkp/202507160818.68358831-lkp@intel.com + +Reported-by: kernel test robot +Reported-by: syzbot+a9ed3d9132939852d0df@syzkaller.appspotmail.com +Suggested-by: Alexei Starovoitov +Signed-off-by: Jiri Olsa +Signed-off-by: Daniel Borkmann +Acked-by: Paul Chaignon +Acked-by: Daniel Borkmann +Link: https://lore.kernel.org/bpf/20250814200655.945632-1-jolsa@kernel.org +Closes: https://lore.kernel.org/oe-lkp/202507160818.68358831-lkp@intel.com +Signed-off-by: Sasha Levin +--- + kernel/bpf/core.c | 5 ++++- + kernel/bpf/verifier.c | 2 +- + 2 files changed, 5 insertions(+), 2 deletions(-) + +diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c +index 9380e0fd5e4af..1f51c8f20722e 100644 +--- a/kernel/bpf/core.c ++++ b/kernel/bpf/core.c +@@ -2953,7 +2953,10 @@ EXPORT_SYMBOL_GPL(bpf_event_output); + + /* Always built-in helper functions. */ + const struct bpf_func_proto bpf_tail_call_proto = { +- .func = NULL, ++ /* func is unused for tail_call, we set it to pass the ++ * get_helper_proto check ++ */ ++ .func = BPF_PTR_POISON, + .gpl_only = false, + .ret_type = RET_VOID, + .arg1_type = ARG_PTR_TO_CTX, +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 24ae8f33e5d76..6e22abf3326b6 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -10465,7 +10465,7 @@ static int get_helper_proto(struct bpf_verifier_env *env, int func_id, + return -EINVAL; + + *ptr = env->ops->get_func_proto(func_id, env->prog); +- return *ptr ? 0 : -EINVAL; ++ return *ptr && (*ptr)->func ? 
0 : -EINVAL; + } + + static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn, +-- +2.51.0 + diff --git a/queue-6.12/bpf-reject-bpf_timer-for-preempt_rt.patch b/queue-6.12/bpf-reject-bpf_timer-for-preempt_rt.patch new file mode 100644 index 0000000000..66724aef86 --- /dev/null +++ b/queue-6.12/bpf-reject-bpf_timer-for-preempt_rt.patch @@ -0,0 +1,43 @@ +From 1eeeef08ff7e54730571d9c7444507fb181c1c12 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:39 +0800 +Subject: bpf: Reject bpf_timer for PREEMPT_RT + +From: Leon Hwang + +[ Upstream commit e25ddfb388c8b7e5f20e3bf38d627fb485003781 ] + +When enable CONFIG_PREEMPT_RT, the kernel will warn when run timer +selftests by './test_progs -t timer': + +BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 + +In order to avoid such warning, reject bpf_timer in verifier when +PREEMPT_RT is enabled. + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + kernel/bpf/verifier.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 6e22abf3326b6..1829f62a74a9e 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -7799,6 +7799,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno, + verbose(env, "verifier bug. Two map pointers in a timer helper\n"); + return -EFAULT; + } ++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) { ++ verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); ++ return -EOPNOTSUPP; ++ } + meta->map_uid = reg->map_uid; + meta->map_ptr = map; + return 0; +-- +2.51.0 + diff --git a/queue-6.12/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch b/queue-6.12/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch new file mode 100644 index 0000000000..b43c8923c3 --- /dev/null +++ b/queue-6.12/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch @@ -0,0 +1,54 @@ +From 4e513ce8714c8ecf2f0efff94c118486171ac5a6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 2 Sep 2025 11:34:10 +0100 +Subject: btrfs: don't allow adding block device of less than 1 MB + +From: Mark Harmstone + +[ Upstream commit 3d1267475b94b3df7a61e4ea6788c7c5d9e473c4 ] + +Commit 15ae0410c37a79 ("btrfs-progs: add error handling for +device_get_partition_size_fd_stat()") in btrfs-progs inadvertently +changed it so that if the BLKGETSIZE64 ioctl on a block device returned +a size of 0, this was no longer seen as an error condition. + +Unfortunately this is how disconnected NBD devices behave, meaning that +with btrfs-progs 6.16 it's now possible to add a device you can't +remove: + + # btrfs device add /dev/nbd0 /root/temp + # btrfs device remove /dev/nbd0 /root/temp + ERROR: error removing device '/dev/nbd0': Invalid argument + +This check should always have been done kernel-side anyway, so add a +check in btrfs_init_new_device() that the new device doesn't have a size +less than BTRFS_DEVICE_RANGE_RESERVED (i.e. 1 MB). 
+ +Reviewed-by: Qu Wenruo +Signed-off-by: Mark Harmstone +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/volumes.c | 5 +++++ + 1 file changed, 5 insertions(+) + +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index 58e0cac5779dd..ce991a8390466 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -2699,6 +2699,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path + goto error; + } + ++ if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) { ++ ret = -EINVAL; ++ goto error; ++ } ++ + if (fs_devices->seeding) { + seeding_dev = true; + down_write(&sb->s_umount); +-- +2.51.0 + diff --git a/queue-6.12/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-6.12/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..c5f0f9019f --- /dev/null +++ b/queue-6.12/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From 387f95a9a7ddbbdf0c49017e8de68e97b0414055 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. + +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index 2b7dd359f27b7..8569178b66df7 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-6.12/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-6.12/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..89ea53ac15 --- /dev/null +++ b/queue-6.12/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From eca9cbf73450437bbb60c5a863528ea383ddf5d0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. 
+This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index bd55c23563035..9600a96f91176 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2973,6 +2973,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). ++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2994,21 +3003,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). 
+- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-6.12/firmware-imx-add-stub-functions-for-scmi-misc-api.patch b/queue-6.12/firmware-imx-add-stub-functions-for-scmi-misc-api.patch new file mode 100644 index 0000000000..451007bdac --- /dev/null +++ b/queue-6.12/firmware-imx-add-stub-functions-for-scmi-misc-api.patch @@ -0,0 +1,68 @@ +From 57e784b63654c55bd3722d7a200f4ae6fd8c61d6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Aug 2025 15:00:30 +0800 +Subject: firmware: imx: Add stub functions for SCMI MISC API + +From: Peng Fan + +[ Upstream commit b2461e20fa9ac18b1305bba5bc7e22ebf644ea01 ] + +To ensure successful builds when CONFIG_IMX_SCMI_MISC_DRV is not enabled, +this patch adds static inline stub implementations for the following +functions: + + - scmi_imx_misc_ctrl_get() + - scmi_imx_misc_ctrl_set() + +These stubs return -EOPNOTSUPP to indicate that the functionality is not +supported in the current configuration. This avoids potential build or +link errors in code that conditionally calls these functions based on +feature availability. + +This patch also drops the changes in commit 540c830212ed ("firmware: imx: +remove duplicate scmi_imx_misc_ctrl_get()"). + +The original change aimed to simplify the handling of optional features by +removing conditional stubs. However, the use of conditional stubs is +necessary when CONFIG_IMX_SCMI_MISC_DRV is n, while consumer driver is +set to y. + +This is not a matter of preserving legacy patterns, but rather to ensure +that there is no link error whether for module or built-in. 
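For illustration, a rough sketch (not part of the patch) of a consumer that builds with either configuration; the caller name and the written value are made up, while the API and the SCMI_IMX_CTRL_SAI4_MCLK control ID come from the sm.h context in this patch:

    #include <linux/errno.h>
    #include <linux/firmware/imx/sm.h>

    /* hypothetical consumer */
    static int demo_enable_sai4_mclk(void)
    {
            int ret;

            /* resolves to the real call when IMX_SCMI_MISC_DRV is enabled,
             * or to the static inline stub (-EOPNOTSUPP) when it is n */
            ret = scmi_imx_misc_ctrl_set(SCMI_IMX_CTRL_SAI4_MCLK, 1);
            if (ret == -EOPNOTSUPP)
                    return 0;       /* SCMI MISC support not built in */

            return ret;
    }

With the stubs in place, such a consumer (module or built-in) still links when CONFIG_IMX_SCMI_MISC_DRV is disabled; without them, that configuration fails at link time.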
+ +Fixes: 0b4f8a68b292 ("firmware: imx: Add i.MX95 MISC driver") +Reviewed-by: Cristian Marussi +Signed-off-by: Peng Fan +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + include/linux/firmware/imx/sm.h | 12 ++++++++++++ + 1 file changed, 12 insertions(+) + +diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h +index 9b85a3f028d1b..61f7a02b05009 100644 +--- a/include/linux/firmware/imx/sm.h ++++ b/include/linux/firmware/imx/sm.h +@@ -17,7 +17,19 @@ + #define SCMI_IMX_CTRL_SAI4_MCLK 4 /* WAKE SAI4 MCLK */ + #define SCMI_IMX_CTRL_SAI5_MCLK 5 /* WAKE SAI5 MCLK */ + ++#if IS_ENABLED(CONFIG_IMX_SCMI_MISC_DRV) + int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val); + int scmi_imx_misc_ctrl_set(u32 id, u32 val); ++#else ++static inline int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val) ++{ ++ return -EOPNOTSUPP; ++} ++#endif + + #endif +-- +2.51.0 + diff --git a/queue-6.12/series b/queue-6.12/series index df670d801f..a00ff30610 100644 --- a/queue-6.12/series +++ b/queue-6.12/series @@ -25,3 +25,14 @@ alsa-usb-audio-add-mute-tlv-for-playback-volumes-on-.patch net-sfp-add-quirk-for-flypro-copper-sfp-module.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch hid-amd_sfh-add-sync-across-amd-sfh-work-functions.patch +firmware-imx-add-stub-functions-for-scmi-misc-api.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +smb-server-don-t-use-delayed_work-for-post_recv_cred.patch +smb-server-use-disable_work_sync-in-transport_rdma.c.patch +bpf-check-the-helper-function-is-valid-in-get_helper.patch +btrfs-don-t-allow-adding-block-device-of-less-than-1.patch +wifi-virt_wifi-fix-page-fault-on-connect.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch +bpf-reject-bpf_timer-for-preempt_rt.patch diff --git a/queue-6.12/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch b/queue-6.12/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch new file mode 100644 index 0000000000..e532ac32bd --- /dev/null +++ b/queue-6.12/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch @@ -0,0 +1,103 @@ +From 5a7cceeb2fc700d53fdaed3bb6d761025fcffdfd Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 8 Aug 2025 17:55:17 +0200 +Subject: smb: server: don't use delayed_work for post_recv_credits_work + +From: Stefan Metzmacher + +[ Upstream commit 1cde0a74a7a8951b3097417847a458e557be0b5b ] + +If we are using a hardcoded delay of 0 there's no point in +using delayed_work it only adds confusion. + +The client also uses a normal work_struct and now +it is easier to move it to the common smbdirect_socket. 
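As a side note, a minimal sketch of the resulting pattern (names are hypothetical, only the standard <linux/workqueue.h> API is assumed): with a fixed delay of 0, mod_delayed_work(wq, &dwork, 0) queues the callback immediately, which is exactly what queue_work() expresses directly:

    #include <linux/workqueue.h>

    /* hypothetical context, mirroring smb_direct_transport */
    struct demo_transport {
            struct work_struct post_recv_credits_work;  /* was delayed_work */
    };

    static void demo_post_recv_credits(struct work_struct *work)
    {
            struct demo_transport *t =
                    container_of(work, struct demo_transport,
                                 post_recv_credits_work);
            /* repost receive credits for t ... */
    }

    static void demo_init(struct demo_transport *t)
    {
            INIT_WORK(&t->post_recv_credits_work, demo_post_recv_credits);
    }

    static void demo_kick(struct demo_transport *t)
    {
            /* equivalent of the old mod_delayed_work(wq, &work, 0) */
            queue_work(system_wq, &t->post_recv_credits_work);
    }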
+ +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 18 ++++++++---------- + 1 file changed, 8 insertions(+), 10 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 2fc689f99997e..8f5a393828065 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -147,7 +147,7 @@ struct smb_direct_transport { + wait_queue_head_t wait_send_pending; + atomic_t send_pending; + +- struct delayed_work post_recv_credits_work; ++ struct work_struct post_recv_credits_work; + struct work_struct send_immediate_work; + struct work_struct disconnect_work; + +@@ -366,8 +366,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id) + + spin_lock_init(&t->lock_new_recv_credits); + +- INIT_DELAYED_WORK(&t->post_recv_credits_work, +- smb_direct_post_recv_credits); ++ INIT_WORK(&t->post_recv_credits_work, ++ smb_direct_post_recv_credits); + INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work); + INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work); + +@@ -399,7 +399,7 @@ static void free_transport(struct smb_direct_transport *t) + atomic_read(&t->send_pending) == 0); + + cancel_work_sync(&t->disconnect_work); +- cancel_delayed_work_sync(&t->post_recv_credits_work); ++ cancel_work_sync(&t->post_recv_credits_work); + cancel_work_sync(&t->send_immediate_work); + + if (t->qp) { +@@ -614,8 +614,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc) + wake_up_interruptible(&t->wait_send_credits); + + if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count)) +- mod_delayed_work(smb_direct_wq, +- &t->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &t->post_recv_credits_work); + + if (data_length) { + enqueue_reassembly(t, recvmsg, (int)data_length); +@@ -772,8 +771,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + st->count_avail_recvmsg += queue_removed; + if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) { + spin_unlock(&st->receive_credit_lock); +- mod_delayed_work(smb_direct_wq, +- &st->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &st->post_recv_credits_work); + } else { + spin_unlock(&st->receive_credit_lock); + } +@@ -800,7 +798,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + static void smb_direct_post_recv_credits(struct work_struct *work) + { + struct smb_direct_transport *t = container_of(work, +- struct smb_direct_transport, post_recv_credits_work.work); ++ struct smb_direct_transport, post_recv_credits_work); + struct smb_direct_recvmsg *recvmsg; + int receive_credits, credits = 0; + int ret; +@@ -1681,7 +1679,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t) + goto out_err; + } + +- smb_direct_post_recv_credits(&t->post_recv_credits_work.work); ++ smb_direct_post_recv_credits(&t->post_recv_credits_work); + return 0; + out_err: + put_recvmsg(t, recvmsg); +-- +2.51.0 + diff --git a/queue-6.12/smb-server-use-disable_work_sync-in-transport_rdma.c.patch b/queue-6.12/smb-server-use-disable_work_sync-in-transport_rdma.c.patch new file mode 100644 index 0000000000..33e445a463 --- /dev/null +++ 
b/queue-6.12/smb-server-use-disable_work_sync-in-transport_rdma.c.patch @@ -0,0 +1,48 @@ +From 797d5a0d396e115d894a6a5bb4d7dec0f8f26dfc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 13 Aug 2025 08:48:42 +0200 +Subject: smb: server: use disable_work_sync in transport_rdma.c + +From: Stefan Metzmacher + +[ Upstream commit f7f89250175e0a82e99ed66da7012e869c36497d ] + +This makes it safer during the disconnect and avoids +requeueing. + +It's ok to call disable_work[_sync]() more than once. + +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 8f5a393828065..d059c890d1428 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -398,9 +398,9 @@ static void free_transport(struct smb_direct_transport *t) + wait_event(t->wait_send_pending, + atomic_read(&t->send_pending) == 0); + +- cancel_work_sync(&t->disconnect_work); +- cancel_work_sync(&t->post_recv_credits_work); +- cancel_work_sync(&t->send_immediate_work); ++ disable_work_sync(&t->disconnect_work); ++ disable_work_sync(&t->post_recv_credits_work); ++ disable_work_sync(&t->send_immediate_work); + + if (t->qp) { + ib_drain_qp(t->qp); +-- +2.51.0 + diff --git a/queue-6.12/wifi-virt_wifi-fix-page-fault-on-connect.patch b/queue-6.12/wifi-virt_wifi-fix-page-fault-on-connect.patch new file mode 100644 index 0000000000..ef4a26fd82 --- /dev/null +++ b/queue-6.12/wifi-virt_wifi-fix-page-fault-on-connect.patch @@ -0,0 +1,43 @@ +From 077be732daa40391ecf9f38aeb0312e08c7cc760 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 19:19:29 +0800 +Subject: wifi: virt_wifi: Fix page fault on connect + +From: James Guan + +[ Upstream commit 9c600589e14f5fc01b8be9a5d0ad1f094b8b304b ] + +This patch prevents page fault in __cfg80211_connect_result()[1] +when connecting a virt_wifi device, while ensuring that virt_wifi +can connect properly. + +[1] https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ + +Closes: https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ +Signed-off-by: James Guan +Link: https://patch.msgid.link/20250910111929.137049-1-guan_yufei@163.com +[remove irrelevant network-manager instructions] +Signed-off-by: Johannes Berg +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/virtual/virt_wifi.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c +index 4ee3740804667..a77a27c36bdbe 100644 +--- a/drivers/net/wireless/virtual/virt_wifi.c ++++ b/drivers/net/wireless/virtual/virt_wifi.c +@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work) + priv->is_connected = true; + + /* Schedules an event that acquires the rtnl lock. */ +- cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0, ++ cfg80211_connect_result(priv->upperdev, ++ priv->is_connected ? 
fake_router_bssid : NULL, ++ NULL, 0, NULL, 0, + status, GFP_KERNEL); + netif_carrier_on(priv->upperdev); + } +-- +2.51.0 + diff --git a/queue-6.16/amd-amdkfd-correct-mem-limit-calculation-for-small-a.patch b/queue-6.16/amd-amdkfd-correct-mem-limit-calculation-for-small-a.patch new file mode 100644 index 0000000000..7071350b7b --- /dev/null +++ b/queue-6.16/amd-amdkfd-correct-mem-limit-calculation-for-small-a.patch @@ -0,0 +1,118 @@ +From 1b730d2cca49f47eaec4bf6b5e2f44d8a2351d33 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 20 Aug 2025 16:10:51 +0800 +Subject: amd/amdkfd: correct mem limit calculation for small APUs + +From: Yifan Zhang + +[ Upstream commit 53503556273a5ead8b75534085e2dcb46e96f883 ] + +Current mem limit check leaks some GTT memory (reserved_for_pt +reserved_for_ras + adev->vram_pin_size) for small APUs. + +Since carveout VRAM is tunable on APUs, there are three case +regarding the carveout VRAM size relative to GTT: + +1. 0 < carveout < gtt + apu_prefer_gtt = true, is_app_apu = false + +2. carveout > gtt / 2 + apu_prefer_gtt = false, is_app_apu = false + +3. 0 = carveout + apu_prefer_gtt = true, is_app_apu = true + +It doesn't make sense to check below limitation in case 1 +(default case, small carveout) because the values in the below +expression are mixed with carveout and gtt. + +adev->kfd.vram_used[xcp_id] + vram_needed > + vram_size - reserved_for_pt - reserved_for_ras - + atomic64_read(&adev->vram_pin_size) + +gtt: kfd.vram_used, vram_needed, vram_size +carveout: reserved_for_pt, reserved_for_ras, adev->vram_pin_size + +In case 1, vram allocation will go to gtt domain, skip vram check +since ttm_mem_limit check already cover this allocation. + +Signed-off-by: Yifan Zhang +Reviewed-by: Mario Limonciello +Signed-off-by: Alex Deucher +(cherry picked from commit fa7c99f04f6dd299388e9282812b14e95558ac8e) +Signed-off-by: Sasha Levin +--- + .../gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 44 ++++++++++++++----- + 1 file changed, 32 insertions(+), 12 deletions(-) + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +index 260165bbe3736..b16cce7c22c37 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c +@@ -213,19 +213,35 @@ int amdgpu_amdkfd_reserve_mem_limit(struct amdgpu_device *adev, + spin_lock(&kfd_mem_limit.mem_limit_lock); + + if (kfd_mem_limit.system_mem_used + system_mem_needed > +- kfd_mem_limit.max_system_mem_limit) ++ kfd_mem_limit.max_system_mem_limit) { + pr_debug("Set no_system_mem_limit=1 if using shared memory\n"); ++ if (!no_system_mem_limit) { ++ ret = -ENOMEM; ++ goto release; ++ } ++ } + +- if ((kfd_mem_limit.system_mem_used + system_mem_needed > +- kfd_mem_limit.max_system_mem_limit && !no_system_mem_limit) || +- (kfd_mem_limit.ttm_mem_used + ttm_mem_needed > +- kfd_mem_limit.max_ttm_mem_limit) || +- (adev && xcp_id >= 0 && adev->kfd.vram_used[xcp_id] + vram_needed > +- vram_size - reserved_for_pt - reserved_for_ras - atomic64_read(&adev->vram_pin_size))) { ++ if (kfd_mem_limit.ttm_mem_used + ttm_mem_needed > ++ kfd_mem_limit.max_ttm_mem_limit) { + ret = -ENOMEM; + goto release; + } + ++ /*if is_app_apu is false and apu_prefer_gtt is true, it is an APU with ++ * carve out < gtt. 
In that case, VRAM allocation will go to gtt domain, skip ++ * VRAM check since ttm_mem_limit check already cover this allocation ++ */ ++ ++ if (adev && xcp_id >= 0 && (!adev->apu_prefer_gtt || adev->gmc.is_app_apu)) { ++ uint64_t vram_available = ++ vram_size - reserved_for_pt - reserved_for_ras - ++ atomic64_read(&adev->vram_pin_size); ++ if (adev->kfd.vram_used[xcp_id] + vram_needed > vram_available) { ++ ret = -ENOMEM; ++ goto release; ++ } ++ } ++ + /* Update memory accounting by decreasing available system + * memory, TTM memory and GPU memory as computed above + */ +@@ -1626,11 +1642,15 @@ size_t amdgpu_amdkfd_get_available_memory(struct amdgpu_device *adev, + uint64_t vram_available, system_mem_available, ttm_mem_available; + + spin_lock(&kfd_mem_limit.mem_limit_lock); +- vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id) +- - adev->kfd.vram_used_aligned[xcp_id] +- - atomic64_read(&adev->vram_pin_size) +- - reserved_for_pt +- - reserved_for_ras; ++ if (adev->apu_prefer_gtt && !adev->gmc.is_app_apu) ++ vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id) ++ - adev->kfd.vram_used_aligned[xcp_id]; ++ else ++ vram_available = KFD_XCP_MEMORY_SIZE(adev, xcp_id) ++ - adev->kfd.vram_used_aligned[xcp_id] ++ - atomic64_read(&adev->vram_pin_size) ++ - reserved_for_pt ++ - reserved_for_ras; + + if (adev->apu_prefer_gtt) { + system_mem_available = no_system_mem_limit ? +-- +2.51.0 + diff --git a/queue-6.16/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch b/queue-6.16/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch new file mode 100644 index 0000000000..0ee985290f --- /dev/null +++ b/queue-6.16/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch @@ -0,0 +1,45 @@ +From 6b96b000c179399c1cfc919ecb7d5d7140bf2d07 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 30 Aug 2025 22:37:50 +0200 +Subject: ARM: dts: kirkwood: Fix sound DAI cells for OpenRD clients + +From: Jihed Chaibi + +[ Upstream commit 29341c6c18b8ad2a9a4a68a61be7e1272d842f21 ] + +A previous commit changed the '#sound-dai-cells' property for the +kirkwood audio controller from 1 to 0 in the kirkwood.dtsi file, +but did not update the corresponding 'sound-dai' property in the +kirkwood-openrd-client.dts file. + +This created a mismatch, causing a dtbs_check validation error where +the dts provides one cell (<&audio0 0>) while the .dtsi expects zero. + +Remove the extraneous cell from the 'sound-dai' property to fix the +schema validation warning and align with the updated binding. 
+ +Fixes: e662e70fa419 ("arm: dts: kirkwood: fix error in #sound-dai-cells size") +Signed-off-by: Jihed Chaibi +Reviewed-by: Krzysztof Kozlowski +Signed-off-by: Gregory CLEMENT +Signed-off-by: Sasha Levin +--- + arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +index d4e0b8150a84c..cf26e2ceaaa07 100644 +--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts ++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +@@ -38,7 +38,7 @@ + simple-audio-card,mclk-fs = <256>; + + simple-audio-card,cpu { +- sound-dai = <&audio0 0>; ++ sound-dai = <&audio0>; + }; + + simple-audio-card,codec { +-- +2.51.0 + diff --git a/queue-6.16/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-6.16/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..34dbc8810c --- /dev/null +++ b/queue-6.16/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From a8177cae77e878e8607cba6b7a6ce97091d4159e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. + +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 948b88cf5e9df..305c2912e90f7 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -298,7 +298,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -328,7 +328,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-6.16/arm64-dts-rockchip-fix-the-headphone-detection-on-th.patch b/queue-6.16/arm64-dts-rockchip-fix-the-headphone-detection-on-th.patch new file mode 100644 index 0000000000..576e67a126 --- /dev/null +++ b/queue-6.16/arm64-dts-rockchip-fix-the-headphone-detection-on-th.patch @@ -0,0 +1,43 @@ +From 42f211a08c2cddc7be72ad6c46ac9dd8dd86941f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 4 Sep 2025 03:01:50 +0000 +Subject: arm64: dts: rockchip: Fix the headphone detection on the orangepi 5 + +From: Jimmy Hon + +[ Upstream commit 0f860eef417df93eb0ae70bbfa8d26cb7e29244d ] + +The logic of the headphone detect pin seems to be inverted, with this +change headphones actually output sound when plugged in. + +Does not need workaround of using pin-switches to enable output. + +Verified by checking /sys/kernel/debug/gpio. 
+ +Fixes: ae46756faff8 ("arm64: dts: rockchip: analog audio on Orange Pi 5") +Signed-off-by: Jimmy Hon +Link: https://lore.kernel.org/r/20250904030150.986042-1-honyuenkwun@gmail.com +Signed-off-by: Heiko Stuebner +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi | 3 +-- + 1 file changed, 1 insertion(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi +index 4fedc50cce8c8..11940c77f2bd0 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3588s-orangepi-5.dtsi +@@ -42,9 +42,8 @@ + simple-audio-card,bitclock-master = <&masterdai>; + simple-audio-card,format = "i2s"; + simple-audio-card,frame-master = <&masterdai>; +- simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_LOW>; ++ simple-audio-card,hp-det-gpios = <&gpio1 RK_PD5 GPIO_ACTIVE_HIGH>; + simple-audio-card,mclk-fs = <256>; +- simple-audio-card,pin-switches = "Headphones"; + simple-audio-card,routing = + "Headphones", "LOUT1", + "Headphones", "ROUT1", +-- +2.51.0 + diff --git a/queue-6.16/bpf-check-the-helper-function-is-valid-in-get_helper.patch b/queue-6.16/bpf-check-the-helper-function-is-valid-in-get_helper.patch new file mode 100644 index 0000000000..0e3a283dac --- /dev/null +++ b/queue-6.16/bpf-check-the-helper-function-is-valid-in-get_helper.patch @@ -0,0 +1,65 @@ +From 3ddc9da2fd0054adc0760899ea905b26f58fd990 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 22:06:55 +0200 +Subject: bpf: Check the helper function is valid in get_helper_proto + +From: Jiri Olsa + +[ Upstream commit e4414b01c1cd9887bbde92f946c1ba94e40d6d64 ] + +kernel test robot reported verifier bug [1] where the helper func +pointer could be NULL due to disabled config option. + +As Alexei suggested we could check on that in get_helper_proto +directly. Marking tail_call helper func with BPF_PTR_POISON, +because it is unused by design. + + [1] https://lore.kernel.org/oe-lkp/202507160818.68358831-lkp@intel.com + +Reported-by: kernel test robot +Reported-by: syzbot+a9ed3d9132939852d0df@syzkaller.appspotmail.com +Suggested-by: Alexei Starovoitov +Signed-off-by: Jiri Olsa +Signed-off-by: Daniel Borkmann +Acked-by: Paul Chaignon +Acked-by: Daniel Borkmann +Link: https://lore.kernel.org/bpf/20250814200655.945632-1-jolsa@kernel.org +Closes: https://lore.kernel.org/oe-lkp/202507160818.68358831-lkp@intel.com +Signed-off-by: Sasha Levin +--- + kernel/bpf/core.c | 5 ++++- + kernel/bpf/verifier.c | 2 +- + 2 files changed, 5 insertions(+), 2 deletions(-) + +diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c +index 829f0792d8d83..17e5cf18da1ef 100644 +--- a/kernel/bpf/core.c ++++ b/kernel/bpf/core.c +@@ -3013,7 +3013,10 @@ EXPORT_SYMBOL_GPL(bpf_event_output); + + /* Always built-in helper functions. */ + const struct bpf_func_proto bpf_tail_call_proto = { +- .func = NULL, ++ /* func is unused for tail_call, we set it to pass the ++ * get_helper_proto check ++ */ ++ .func = BPF_PTR_POISON, + .gpl_only = false, + .ret_type = RET_VOID, + .arg1_type = ARG_PTR_TO_CTX, +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 4fd89659750b2..d6782efd25734 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -11206,7 +11206,7 @@ static int get_helper_proto(struct bpf_verifier_env *env, int func_id, + return -EINVAL; + + *ptr = env->ops->get_func_proto(func_id, env->prog); +- return *ptr ? 0 : -EINVAL; ++ return *ptr && (*ptr)->func ? 
0 : -EINVAL; + } + + static int check_helper_call(struct bpf_verifier_env *env, struct bpf_insn *insn, +-- +2.51.0 + diff --git a/queue-6.16/bpf-reject-bpf_timer-for-preempt_rt.patch b/queue-6.16/bpf-reject-bpf_timer-for-preempt_rt.patch new file mode 100644 index 0000000000..9fbd09a05a --- /dev/null +++ b/queue-6.16/bpf-reject-bpf_timer-for-preempt_rt.patch @@ -0,0 +1,43 @@ +From 543c513b0d564bd52db76c1e51b3c59635aef112 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:39 +0800 +Subject: bpf: Reject bpf_timer for PREEMPT_RT + +From: Leon Hwang + +[ Upstream commit e25ddfb388c8b7e5f20e3bf38d627fb485003781 ] + +When enable CONFIG_PREEMPT_RT, the kernel will warn when run timer +selftests by './test_progs -t timer': + +BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 + +In order to avoid such warning, reject bpf_timer in verifier when +PREEMPT_RT is enabled. + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + kernel/bpf/verifier.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index d6782efd25734..a6338936085ae 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -8405,6 +8405,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno, + verifier_bug(env, "Two map pointers in a timer helper"); + return -EFAULT; + } ++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) { ++ verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); ++ return -EOPNOTSUPP; ++ } + meta->map_uid = reg->map_uid; + meta->map_ptr = map; + return 0; +-- +2.51.0 + diff --git a/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch b/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch new file mode 100644 index 0000000000..e03e21a25c --- /dev/null +++ b/queue-6.16/btrfs-don-t-allow-adding-block-device-of-less-than-1.patch @@ -0,0 +1,54 @@ +From b4cbca440070641199920a2d73a6abd95c9a9ec7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 2 Sep 2025 11:34:10 +0100 +Subject: btrfs: don't allow adding block device of less than 1 MB + +From: Mark Harmstone + +[ Upstream commit 3d1267475b94b3df7a61e4ea6788c7c5d9e473c4 ] + +Commit 15ae0410c37a79 ("btrfs-progs: add error handling for +device_get_partition_size_fd_stat()") in btrfs-progs inadvertently +changed it so that if the BLKGETSIZE64 ioctl on a block device returned +a size of 0, this was no longer seen as an error condition. + +Unfortunately this is how disconnected NBD devices behave, meaning that +with btrfs-progs 6.16 it's now possible to add a device you can't +remove: + + # btrfs device add /dev/nbd0 /root/temp + # btrfs device remove /dev/nbd0 /root/temp + ERROR: error removing device '/dev/nbd0': Invalid argument + +This check should always have been done kernel-side anyway, so add a +check in btrfs_init_new_device() that the new device doesn't have a size +less than BTRFS_DEVICE_RANGE_RESERVED (i.e. 1 MB). 
+ +Reviewed-by: Qu Wenruo +Signed-off-by: Mark Harmstone +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/volumes.c | 5 +++++ + 1 file changed, 5 insertions(+) + +diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c +index f475b4b7c4578..817d3ef501ec4 100644 +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -2714,6 +2714,11 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path + goto error; + } + ++ if (bdev_nr_bytes(file_bdev(bdev_file)) <= BTRFS_DEVICE_RANGE_RESERVED) { ++ ret = -EINVAL; ++ goto error; ++ } ++ + if (fs_devices->seeding) { + seeding_dev = true; + down_write(&sb->s_umount); +-- +2.51.0 + diff --git a/queue-6.16/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-6.16/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..11317486aa --- /dev/null +++ b/queue-6.16/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From f666c0f8d135a5d977332fe4607a14f120eb4970 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. + +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index 2b7dd359f27b7..8569178b66df7 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-6.16/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-6.16/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..f2f94b55f5 --- /dev/null +++ b/queue-6.16/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From 3ddd87608e0e0616cdd2e05bde5599c3433ed9e9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. 
+This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 628f5b633b61f..b2da1cda4cebd 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2956,6 +2956,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). ++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2977,21 +2986,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). +- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-6.16/drm-amdkfd-fix-p2p-links-bug-in-topology.patch b/queue-6.16/drm-amdkfd-fix-p2p-links-bug-in-topology.patch new file mode 100644 index 0000000000..956a821e39 --- /dev/null +++ b/queue-6.16/drm-amdkfd-fix-p2p-links-bug-in-topology.patch @@ -0,0 +1,40 @@ +From 66ce4e29390a4f90aaba4e74ac77433790fe1cb3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Aug 2025 09:50:49 -0400 +Subject: drm/amdkfd: fix p2p links bug in topology + +From: Eric Huang + +[ Upstream commit ce42a3b581a9db10765eb835840b04dbe7972135 ] + +When creating p2p links, KFD needs to check XGMI link +with two conditions, hive_id and is_sharing_enabled, +but it is missing to check is_sharing_enabled, so add +it to fix the error. 
+ +Signed-off-by: Eric Huang +Acked-by: Alex Deucher +Signed-off-by: Alex Deucher +(cherry picked from commit 36cc7d13178d901982da7a122c883861d98da624) +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdkfd/kfd_topology.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c +index 4ec73f33535eb..720b20e842ba4 100644 +--- a/drivers/gpu/drm/amd/amdkfd/kfd_topology.c ++++ b/drivers/gpu/drm/amd/amdkfd/kfd_topology.c +@@ -1587,7 +1587,8 @@ static int kfd_dev_create_p2p_links(void) + break; + if (!dev->gpu || !dev->gpu->adev || + (dev->gpu->kfd->hive_id && +- dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id)) ++ dev->gpu->kfd->hive_id == new_dev->gpu->kfd->hive_id && ++ amdgpu_xgmi_get_is_sharing_enabled(dev->gpu->adev, new_dev->gpu->adev))) + goto next; + + /* check if node(s) is/are peer accessible in one direction or bi-direction */ +-- +2.51.0 + diff --git a/queue-6.16/firmware-imx-add-stub-functions-for-scmi-cpu-api.patch b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-cpu-api.patch new file mode 100644 index 0000000000..dfe14adf73 --- /dev/null +++ b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-cpu-api.patch @@ -0,0 +1,67 @@ +From d9908c4bde9489fd119fadfbdda01cc388e78bed Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Aug 2025 15:00:32 +0800 +Subject: firmware: imx: Add stub functions for SCMI CPU API + +From: Peng Fan + +[ Upstream commit 222accf05fc42f68ae02065d9c1542c20315118b ] + +To ensure successful builds when CONFIG_IMX_SCMI_CPU_DRV is not enabled, +this patch adds static inline stub implementations for the following +functions: + + - scmi_imx_cpu_start() + - scmi_imx_cpu_started() + - scmi_imx_cpu_reset_vector_set() + +These stubs return -EOPNOTSUPP to indicate that the functionality is not +supported in the current configuration. This avoids potential build or +link errors in code that conditionally calls these functions based on +feature availability. 
+ +Fixes: 1055faa5d660 ("firmware: imx: Add i.MX95 SCMI CPU driver") +Reviewed-by: Cristian Marussi +Signed-off-by: Peng Fan +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + include/linux/firmware/imx/sm.h | 18 ++++++++++++++++++ + 1 file changed, 18 insertions(+) + +diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h +index 6e700e455934e..1817df9aceac8 100644 +--- a/include/linux/firmware/imx/sm.h ++++ b/include/linux/firmware/imx/sm.h +@@ -33,10 +33,28 @@ static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val) + } + #endif + ++#if IS_ENABLED(CONFIG_IMX_SCMI_CPU_DRV) + int scmi_imx_cpu_start(u32 cpuid, bool start); + int scmi_imx_cpu_started(u32 cpuid, bool *started); + int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start, bool boot, + bool resume); ++#else ++static inline int scmi_imx_cpu_start(u32 cpuid, bool start) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_cpu_started(u32 cpuid, bool *started) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_cpu_reset_vector_set(u32 cpuid, u64 vector, bool start, ++ bool boot, bool resume) ++{ ++ return -EOPNOTSUPP; ++} ++#endif + + enum scmi_imx_lmm_op { + SCMI_IMX_LMM_BOOT, +-- +2.51.0 + diff --git a/queue-6.16/firmware-imx-add-stub-functions-for-scmi-lmm-api.patch b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-lmm-api.patch new file mode 100644 index 0000000000..b21c87ce5a --- /dev/null +++ b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-lmm-api.patch @@ -0,0 +1,63 @@ +From b0f8871c9b899ede333ab2c4670b7444853287d0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Aug 2025 15:00:31 +0800 +Subject: firmware: imx: Add stub functions for SCMI LMM API + +From: Peng Fan + +[ Upstream commit 3fb91b5c86d0fb5ff6f65c30a4f20193166e22fe ] + +To ensure successful builds when CONFIG_IMX_SCMI_LMM_DRV is not enabled, +this patch adds static inline stub implementations for the following +functions: + + - scmi_imx_lmm_operation() + - scmi_imx_lmm_info() + - scmi_imx_lmm_reset_vector_set() + +These stubs return -EOPNOTSUPP to indicate that the functionality is not +supported in the current configuration. This avoids potential build or +link errors in code that conditionally calls these functions based on +feature availability. 
+ +Fixes: 7242bbf418f0 ("firmware: imx: Add i.MX95 SCMI LMM driver") +Reviewed-by: Cristian Marussi +Signed-off-by: Peng Fan +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + include/linux/firmware/imx/sm.h | 17 +++++++++++++++++ + 1 file changed, 17 insertions(+) + +diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h +index 67fb1d624d285..6e700e455934e 100644 +--- a/include/linux/firmware/imx/sm.h ++++ b/include/linux/firmware/imx/sm.h +@@ -48,7 +48,24 @@ enum scmi_imx_lmm_op { + #define SCMI_IMX_LMM_OP_FORCEFUL 0 + #define SCMI_IMX_LMM_OP_GRACEFUL BIT(0) + ++#if IS_ENABLED(CONFIG_IMX_SCMI_LMM_DRV) + int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags); + int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info); + int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector); ++#else ++static inline int scmi_imx_lmm_operation(u32 lmid, enum scmi_imx_lmm_op op, u32 flags) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_lmm_info(u32 lmid, struct scmi_imx_lmm_info *info) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_lmm_reset_vector_set(u32 lmid, u32 cpuid, u32 flags, u64 vector) ++{ ++ return -EOPNOTSUPP; ++} ++#endif + #endif +-- +2.51.0 + diff --git a/queue-6.16/firmware-imx-add-stub-functions-for-scmi-misc-api.patch b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-misc-api.patch new file mode 100644 index 0000000000..3a4a8714b8 --- /dev/null +++ b/queue-6.16/firmware-imx-add-stub-functions-for-scmi-misc-api.patch @@ -0,0 +1,69 @@ +From c648257653ea0b36086e1f11137566a420d8caa3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Aug 2025 15:00:30 +0800 +Subject: firmware: imx: Add stub functions for SCMI MISC API + +From: Peng Fan + +[ Upstream commit b2461e20fa9ac18b1305bba5bc7e22ebf644ea01 ] + +To ensure successful builds when CONFIG_IMX_SCMI_MISC_DRV is not enabled, +this patch adds static inline stub implementations for the following +functions: + + - scmi_imx_misc_ctrl_get() + - scmi_imx_misc_ctrl_set() + +These stubs return -EOPNOTSUPP to indicate that the functionality is not +supported in the current configuration. This avoids potential build or +link errors in code that conditionally calls these functions based on +feature availability. + +This patch also drops the changes in commit 540c830212ed ("firmware: imx: +remove duplicate scmi_imx_misc_ctrl_get()"). + +The original change aimed to simplify the handling of optional features by +removing conditional stubs. However, the use of conditional stubs is +necessary when CONFIG_IMX_SCMI_MISC_DRV is n, while consumer driver is +set to y. + +This is not a matter of preserving legacy patterns, but rather to ensure +that there is no link error whether for module or built-in. 
+ +Fixes: 0b4f8a68b292 ("firmware: imx: Add i.MX95 MISC driver") +Reviewed-by: Cristian Marussi +Signed-off-by: Peng Fan +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + include/linux/firmware/imx/sm.h | 12 ++++++++++++ + 1 file changed, 12 insertions(+) + +diff --git a/include/linux/firmware/imx/sm.h b/include/linux/firmware/imx/sm.h +index a8a17eeb7d907..67fb1d624d285 100644 +--- a/include/linux/firmware/imx/sm.h ++++ b/include/linux/firmware/imx/sm.h +@@ -18,8 +18,20 @@ + #define SCMI_IMX_CTRL_SAI4_MCLK 4 /* WAKE SAI4 MCLK */ + #define SCMI_IMX_CTRL_SAI5_MCLK 5 /* WAKE SAI5 MCLK */ + ++#if IS_ENABLED(CONFIG_IMX_SCMI_MISC_DRV) + int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val); + int scmi_imx_misc_ctrl_set(u32 id, u32 val); ++#else ++static inline int scmi_imx_misc_ctrl_get(u32 id, u32 *num, u32 *val) ++{ ++ return -EOPNOTSUPP; ++} ++ ++static inline int scmi_imx_misc_ctrl_set(u32 id, u32 val) ++{ ++ return -EOPNOTSUPP; ++} ++#endif + + int scmi_imx_cpu_start(u32 cpuid, bool start); + int scmi_imx_cpu_started(u32 cpuid, bool *started); +-- +2.51.0 + diff --git a/queue-6.16/nfs-protect-against-eof-page-pollution.patch b/queue-6.16/nfs-protect-against-eof-page-pollution.patch new file mode 100644 index 0000000000..bc711331b2 --- /dev/null +++ b/queue-6.16/nfs-protect-against-eof-page-pollution.patch @@ -0,0 +1,194 @@ +From 12b75dc708dd32d03baa5c0aab7dc1be27703b4a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 4 Sep 2025 18:46:16 -0400 +Subject: NFS: Protect against 'eof page pollution' + +From: Trond Myklebust + +[ Upstream commit b1817b18ff20e69f5accdccefaf78bf5454bede2 ] + +This commit fixes the failing xfstest 'generic/363'. + +When the user mmaps() an area that extends beyond the end of file, and +proceeds to write data into the folio that straddles that eof, we're +required to discard that folio data if the user calls some function that +extends the file length. 
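The failure mode can be pictured with a short userspace sketch (a hypothetical
reproducer, not taken from generic/363 itself): dirty bytes beyond EOF through
a shared mapping, then extend the file; the newly exposed range must read back
as zeroes, not as the stale data left in the page cache folio.

    /* Hypothetical reproducer sketch; error handling omitted. */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("f", O_RDWR | O_CREAT | O_TRUNC, 0644);
            char *p;

            ftruncate(fd, 1000);            /* EOF inside the first page */
            p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            memset(p + 2000, 0xaa, 100);    /* write into the folio, past EOF */
            ftruncate(fd, 3000);            /* extend the file */
            /* bytes 1000..2999 must now read back as zeroes, not 0xaa */
            return 0;
    }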
+ +Signed-off-by: Trond Myklebust +Signed-off-by: Sasha Levin +--- + fs/nfs/file.c | 33 +++++++++++++++++++++++++++++++++ + fs/nfs/inode.c | 9 +++++++-- + fs/nfs/internal.h | 2 ++ + fs/nfs/nfs42proc.c | 14 +++++++++++--- + fs/nfs/nfstrace.h | 1 + + 5 files changed, 54 insertions(+), 5 deletions(-) + +diff --git a/fs/nfs/file.c b/fs/nfs/file.c +index a16a619fb8c33..8cc39a73faff8 100644 +--- a/fs/nfs/file.c ++++ b/fs/nfs/file.c +@@ -28,6 +28,7 @@ + #include + #include + #include ++#include + #include + #include + +@@ -279,6 +280,37 @@ nfs_file_fsync(struct file *file, loff_t start, loff_t end, int datasync) + } + EXPORT_SYMBOL_GPL(nfs_file_fsync); + ++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from, ++ loff_t to) ++{ ++ struct folio *folio; ++ ++ if (from >= to) ++ return; ++ ++ folio = filemap_lock_folio(mapping, from >> PAGE_SHIFT); ++ if (IS_ERR(folio)) ++ return; ++ ++ if (folio_mkclean(folio)) ++ folio_mark_dirty(folio); ++ ++ if (folio_test_uptodate(folio)) { ++ loff_t fpos = folio_pos(folio); ++ size_t offset = from - fpos; ++ size_t end = folio_size(folio); ++ ++ if (to - fpos < end) ++ end = to - fpos; ++ folio_zero_segment(folio, offset, end); ++ trace_nfs_size_truncate_folio(mapping->host, to); ++ } ++ ++ folio_unlock(folio); ++ folio_put(folio); ++} ++EXPORT_SYMBOL_GPL(nfs_truncate_last_folio); ++ + /* + * Decide whether a read/modify/write cycle may be more efficient + * then a modify/write/read cycle when writing to a page in the +@@ -353,6 +385,7 @@ static int nfs_write_begin(struct file *file, struct address_space *mapping, + + dfprintk(PAGECACHE, "NFS: write_begin(%pD2(%lu), %u@%lld)\n", + file, mapping->host->i_ino, len, (long long) pos); ++ nfs_truncate_last_folio(mapping, i_size_read(mapping->host), pos); + + fgp |= fgf_set_order(len); + start: +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index a32cc45425e28..f6b448666d419 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -710,6 +710,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry, + { + struct inode *inode = d_inode(dentry); + struct nfs_fattr *fattr; ++ loff_t oldsize = i_size_read(inode); + int error = 0; + + nfs_inc_stats(inode, NFSIOS_VFSSETATTR); +@@ -725,7 +726,7 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry, + if (error) + return error; + +- if (attr->ia_size == i_size_read(inode)) ++ if (attr->ia_size == oldsize) + attr->ia_valid &= ~ATTR_SIZE; + } + +@@ -773,8 +774,12 @@ nfs_setattr(struct mnt_idmap *idmap, struct dentry *dentry, + } + + error = NFS_PROTO(inode)->setattr(dentry, fattr, attr); +- if (error == 0) ++ if (error == 0) { ++ if (attr->ia_valid & ATTR_SIZE) ++ nfs_truncate_last_folio(inode->i_mapping, oldsize, ++ attr->ia_size); + error = nfs_refresh_inode(inode, fattr); ++ } + nfs_free_fattr(fattr); + out: + trace_nfs_setattr_exit(inode, error); +diff --git a/fs/nfs/internal.h b/fs/nfs/internal.h +index 0ef0fc6aba3b3..ae4d039c10d3a 100644 +--- a/fs/nfs/internal.h ++++ b/fs/nfs/internal.h +@@ -438,6 +438,8 @@ int nfs_file_release(struct inode *, struct file *); + int nfs_lock(struct file *, int, struct file_lock *); + int nfs_flock(struct file *, int, struct file_lock *); + int nfs_check_flags(int); ++void nfs_truncate_last_folio(struct address_space *mapping, loff_t from, ++ loff_t to); + + /* inode.c */ + extern struct workqueue_struct *nfsiod_workqueue; +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index 48ee3d5d89c4a..4b0e35a0d89dd 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -138,6 +138,7 @@ int 
nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len) + .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ALLOCATE], + }; + struct inode *inode = file_inode(filep); ++ loff_t oldsize = i_size_read(inode); + int err; + + if (!nfs_server_capable(inode, NFS_CAP_ALLOCATE)) +@@ -146,7 +147,11 @@ int nfs42_proc_allocate(struct file *filep, loff_t offset, loff_t len) + inode_lock(inode); + + err = nfs42_proc_fallocate(&msg, filep, offset, len); +- if (err == -EOPNOTSUPP) ++ ++ if (err == 0) ++ nfs_truncate_last_folio(inode->i_mapping, oldsize, ++ offset + len); ++ else if (err == -EOPNOTSUPP) + NFS_SERVER(inode)->caps &= ~(NFS_CAP_ALLOCATE | + NFS_CAP_ZERO_RANGE); + +@@ -184,6 +189,7 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len) + .rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_ZERO_RANGE], + }; + struct inode *inode = file_inode(filep); ++ loff_t oldsize = i_size_read(inode); + int err; + + if (!nfs_server_capable(inode, NFS_CAP_ZERO_RANGE)) +@@ -192,9 +198,11 @@ int nfs42_proc_zero_range(struct file *filep, loff_t offset, loff_t len) + inode_lock(inode); + + err = nfs42_proc_fallocate(&msg, filep, offset, len); +- if (err == 0) ++ if (err == 0) { ++ nfs_truncate_last_folio(inode->i_mapping, oldsize, ++ offset + len); + truncate_pagecache_range(inode, offset, (offset + len) -1); +- if (err == -EOPNOTSUPP) ++ } else if (err == -EOPNOTSUPP) + NFS_SERVER(inode)->caps &= ~NFS_CAP_ZERO_RANGE; + + inode_unlock(inode); +diff --git a/fs/nfs/nfstrace.h b/fs/nfs/nfstrace.h +index 7a058bd8c566e..1e4dc632f1800 100644 +--- a/fs/nfs/nfstrace.h ++++ b/fs/nfs/nfstrace.h +@@ -267,6 +267,7 @@ DECLARE_EVENT_CLASS(nfs_update_size_class, + TP_ARGS(inode, new_size)) + + DEFINE_NFS_UPDATE_SIZE_EVENT(truncate); ++DEFINE_NFS_UPDATE_SIZE_EVENT(truncate_folio); + DEFINE_NFS_UPDATE_SIZE_EVENT(wcc); + DEFINE_NFS_UPDATE_SIZE_EVENT(update); + DEFINE_NFS_UPDATE_SIZE_EVENT(grow); +-- +2.51.0 + diff --git a/queue-6.16/nfsv4.2-protect-copy-offload-and-clone-against-eof-p.patch b/queue-6.16/nfsv4.2-protect-copy-offload-and-clone-against-eof-p.patch new file mode 100644 index 0000000000..e27fb9849b --- /dev/null +++ b/queue-6.16/nfsv4.2-protect-copy-offload-and-clone-against-eof-p.patch @@ -0,0 +1,93 @@ +From 1905ba1ef8faa64a61f953b1b0c1ebd295b7acb0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 6 Sep 2025 10:25:35 -0400 +Subject: NFSv4.2: Protect copy offload and clone against 'eof page pollution' + +From: Trond Myklebust + +[ Upstream commit b2036bb65114c01caf4a1afe553026e081703c8c ] + +The NFSv4.2 copy offload and clone functions can also end up extending +the size of the destination file, so they too need to call +nfs_truncate_last_folio(). + +Reported-by: Olga Kornievskaia +Signed-off-by: Trond Myklebust +Signed-off-by: Sasha Levin +--- + fs/nfs/nfs42proc.c | 19 +++++++++++++------ + 1 file changed, 13 insertions(+), 6 deletions(-) + +diff --git a/fs/nfs/nfs42proc.c b/fs/nfs/nfs42proc.c +index 4b0e35a0d89dd..6a0b5871ba3b0 100644 +--- a/fs/nfs/nfs42proc.c ++++ b/fs/nfs/nfs42proc.c +@@ -363,22 +363,27 @@ static int process_copy_commit(struct file *dst, loff_t pos_dst, + + /** + * nfs42_copy_dest_done - perform inode cache updates after clone/copy offload +- * @inode: pointer to destination inode ++ * @file: pointer to destination file + * @pos: destination offset + * @len: copy length ++ * @oldsize: length of the file prior to clone/copy + * + * Punch a hole in the inode page cache, so that the NFS client will + * know to retrieve new data. 
+ * Update the file size if necessary, and then mark the inode as having + * invalid cached values for change attribute, ctime, mtime and space used. + */ +-static void nfs42_copy_dest_done(struct inode *inode, loff_t pos, loff_t len) ++static void nfs42_copy_dest_done(struct file *file, loff_t pos, loff_t len, ++ loff_t oldsize) + { ++ struct inode *inode = file_inode(file); ++ struct address_space *mapping = file->f_mapping; + loff_t newsize = pos + len; + loff_t end = newsize - 1; + +- WARN_ON_ONCE(invalidate_inode_pages2_range(inode->i_mapping, +- pos >> PAGE_SHIFT, end >> PAGE_SHIFT)); ++ nfs_truncate_last_folio(mapping, oldsize, pos); ++ WARN_ON_ONCE(invalidate_inode_pages2_range(mapping, pos >> PAGE_SHIFT, ++ end >> PAGE_SHIFT)); + + spin_lock(&inode->i_lock); + if (newsize > i_size_read(inode)) +@@ -411,6 +416,7 @@ static ssize_t _nfs42_proc_copy(struct file *src, + struct nfs_server *src_server = NFS_SERVER(src_inode); + loff_t pos_src = args->src_pos; + loff_t pos_dst = args->dst_pos; ++ loff_t oldsize_dst = i_size_read(dst_inode); + size_t count = args->count; + ssize_t status; + +@@ -485,7 +491,7 @@ static ssize_t _nfs42_proc_copy(struct file *src, + goto out; + } + +- nfs42_copy_dest_done(dst_inode, pos_dst, res->write_res.count); ++ nfs42_copy_dest_done(dst, pos_dst, res->write_res.count, oldsize_dst); + nfs_invalidate_atime(src_inode); + status = res->write_res.count; + out: +@@ -1252,6 +1258,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f, + struct nfs42_clone_res res = { + .server = server, + }; ++ loff_t oldsize_dst = i_size_read(dst_inode); + int status; + + msg->rpc_argp = &args; +@@ -1286,7 +1293,7 @@ static int _nfs42_proc_clone(struct rpc_message *msg, struct file *src_f, + /* a zero-length count means clone to EOF in src */ + if (count == 0 && res.dst_fattr->valid & NFS_ATTR_FATTR_SIZE) + count = nfs_size_to_loff_t(res.dst_fattr->size) - dst_offset; +- nfs42_copy_dest_done(dst_inode, dst_offset, count); ++ nfs42_copy_dest_done(dst_f, dst_offset, count, oldsize_dst); + status = nfs_post_op_update_inode(dst_inode, res.dst_fattr); + } + +-- +2.51.0 + diff --git a/queue-6.16/selftests-bpf-skip-timer-cases-when-bpf_timer-is-not.patch b/queue-6.16/selftests-bpf-skip-timer-cases-when-bpf_timer-is-not.patch new file mode 100644 index 0000000000..90783bdfba --- /dev/null +++ b/queue-6.16/selftests-bpf-skip-timer-cases-when-bpf_timer-is-not.patch @@ -0,0 +1,115 @@ +From 1c016b5823de5313cfd4c152bf95acf8cec694b4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:40 +0800 +Subject: selftests/bpf: Skip timer cases when bpf_timer is not supported + +From: Leon Hwang + +[ Upstream commit fbdd61c94bcb09b0c0eb0655917bf4193d07aac1 ] + +When enable CONFIG_PREEMPT_RT, verifier will reject bpf_timer with +returning -EOPNOTSUPP. + +Therefore, skip test cases when errno is EOPNOTSUPP. 
+ +cd tools/testing/selftests/bpf +./test_progs -t timer +125 free_timer:SKIP +456 timer:SKIP +457/1 timer_crash/array:SKIP +457/2 timer_crash/hash:SKIP +457 timer_crash:SKIP +458 timer_lockup:SKIP +459 timer_mim:SKIP +Summary: 5/0 PASSED, 6 SKIPPED, 0 FAILED + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-3-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + tools/testing/selftests/bpf/prog_tests/free_timer.c | 4 ++++ + tools/testing/selftests/bpf/prog_tests/timer.c | 4 ++++ + tools/testing/selftests/bpf/prog_tests/timer_crash.c | 4 ++++ + tools/testing/selftests/bpf/prog_tests/timer_lockup.c | 4 ++++ + tools/testing/selftests/bpf/prog_tests/timer_mim.c | 4 ++++ + 5 files changed, 20 insertions(+) + +diff --git a/tools/testing/selftests/bpf/prog_tests/free_timer.c b/tools/testing/selftests/bpf/prog_tests/free_timer.c +index b7b77a6b29799..0de8facca4c5b 100644 +--- a/tools/testing/selftests/bpf/prog_tests/free_timer.c ++++ b/tools/testing/selftests/bpf/prog_tests/free_timer.c +@@ -124,6 +124,10 @@ void test_free_timer(void) + int err; + + skel = free_timer__open_and_load(); ++ if (!skel && errno == EOPNOTSUPP) { ++ test__skip(); ++ return; ++ } + if (!ASSERT_OK_PTR(skel, "open_load")) + return; + +diff --git a/tools/testing/selftests/bpf/prog_tests/timer.c b/tools/testing/selftests/bpf/prog_tests/timer.c +index d66687f1ee6a8..56f660ca567ba 100644 +--- a/tools/testing/selftests/bpf/prog_tests/timer.c ++++ b/tools/testing/selftests/bpf/prog_tests/timer.c +@@ -86,6 +86,10 @@ void serial_test_timer(void) + int err; + + timer_skel = timer__open_and_load(); ++ if (!timer_skel && errno == EOPNOTSUPP) { ++ test__skip(); ++ return; ++ } + if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load")) + return; + +diff --git a/tools/testing/selftests/bpf/prog_tests/timer_crash.c b/tools/testing/selftests/bpf/prog_tests/timer_crash.c +index f74b82305da8c..b841597c8a3a3 100644 +--- a/tools/testing/selftests/bpf/prog_tests/timer_crash.c ++++ b/tools/testing/selftests/bpf/prog_tests/timer_crash.c +@@ -12,6 +12,10 @@ static void test_timer_crash_mode(int mode) + struct timer_crash *skel; + + skel = timer_crash__open_and_load(); ++ if (!skel && errno == EOPNOTSUPP) { ++ test__skip(); ++ return; ++ } + if (!ASSERT_OK_PTR(skel, "timer_crash__open_and_load")) + return; + skel->bss->pid = getpid(); +diff --git a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c +index 1a2f99596916f..eb303fa1e09af 100644 +--- a/tools/testing/selftests/bpf/prog_tests/timer_lockup.c ++++ b/tools/testing/selftests/bpf/prog_tests/timer_lockup.c +@@ -59,6 +59,10 @@ void test_timer_lockup(void) + } + + skel = timer_lockup__open_and_load(); ++ if (!skel && errno == EOPNOTSUPP) { ++ test__skip(); ++ return; ++ } + if (!ASSERT_OK_PTR(skel, "timer_lockup__open_and_load")) + return; + +diff --git a/tools/testing/selftests/bpf/prog_tests/timer_mim.c b/tools/testing/selftests/bpf/prog_tests/timer_mim.c +index 9ff7843909e7d..c930c7d7105b9 100644 +--- a/tools/testing/selftests/bpf/prog_tests/timer_mim.c ++++ b/tools/testing/selftests/bpf/prog_tests/timer_mim.c +@@ -65,6 +65,10 @@ void serial_test_timer_mim(void) + goto cleanup; + + timer_skel = timer_mim__open_and_load(); ++ if (!timer_skel && errno == EOPNOTSUPP) { ++ test__skip(); ++ return; ++ } + if (!ASSERT_OK_PTR(timer_skel, "timer_skel_load")) + goto cleanup; + +-- +2.51.0 + diff --git a/queue-6.16/selftests-fs-mount-notify-fix-compilation-failure.patch 
b/queue-6.16/selftests-fs-mount-notify-fix-compilation-failure.patch new file mode 100644 index 0000000000..93d8d21f55 --- /dev/null +++ b/queue-6.16/selftests-fs-mount-notify-fix-compilation-failure.patch @@ -0,0 +1,109 @@ +From 0aa2fcaa67d128972ef7cb1351231360b4cd3876 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 13 Aug 2025 11:16:47 +0800 +Subject: selftests/fs/mount-notify: Fix compilation failure. + +From: Xing Guo + +[ Upstream commit e51bd0e595476c1527bb0b4def095a6fd16b2563 ] + +Commit c6d9775c2066 ("selftests/fs/mount-notify: build with tools include +dir") introduces the struct __kernel_fsid_t to decouple dependency with +headers_install. The commit forgets to define a macro for __kernel_fsid_t +and it will cause type re-definition issue. + +Signed-off-by: Xing Guo +Link: https://lore.kernel.org/20250813031647.96411-1-higuoxing@gmail.com +Acked-by: Amir Goldstein +Closes: https://lore.kernel.org/oe-lkp/202508110628.65069d92-lkp@intel.com +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + .../mount-notify/mount-notify_test.c | 17 ++++++++--------- + .../mount-notify/mount-notify_test_ns.c | 18 ++++++++---------- + 2 files changed, 16 insertions(+), 19 deletions(-) + +diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c +index 63ce708d93ed0..e4b7c2b457ee7 100644 +--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c ++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test.c +@@ -2,6 +2,13 @@ + // Copyright (c) 2025 Miklos Szeredi + + #define _GNU_SOURCE ++ ++// Needed for linux/fanotify.h ++typedef struct { ++ int val[2]; ++} __kernel_fsid_t; ++#define __kernel_fsid_t __kernel_fsid_t ++ + #include + #include + #include +@@ -10,20 +17,12 @@ + #include + #include + #include ++#include + + #include "../../kselftest_harness.h" + #include "../statmount/statmount.h" + #include "../utils.h" + +-// Needed for linux/fanotify.h +-#ifndef __kernel_fsid_t +-typedef struct { +- int val[2]; +-} __kernel_fsid_t; +-#endif +- +-#include +- + static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX"; + + static const int mark_cmds[] = { +diff --git a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c +index 090a5ca65004a..9f57ca46e3afa 100644 +--- a/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c ++++ b/tools/testing/selftests/filesystems/mount-notify/mount-notify_test_ns.c +@@ -2,6 +2,13 @@ + // Copyright (c) 2025 Miklos Szeredi + + #define _GNU_SOURCE ++ ++// Needed for linux/fanotify.h ++typedef struct { ++ int val[2]; ++} __kernel_fsid_t; ++#define __kernel_fsid_t __kernel_fsid_t ++ + #include + #include + #include +@@ -10,21 +17,12 @@ + #include + #include + #include ++#include + + #include "../../kselftest_harness.h" +-#include "../../pidfd/pidfd.h" + #include "../statmount/statmount.h" + #include "../utils.h" + +-// Needed for linux/fanotify.h +-#ifndef __kernel_fsid_t +-typedef struct { +- int val[2]; +-} __kernel_fsid_t; +-#endif +- +-#include +- + static const char root_mntpoint_templ[] = "/tmp/mount-notify_test_root.XXXXXX"; + + static const int mark_types[] = { +-- +2.51.0 + diff --git a/queue-6.16/series b/queue-6.16/series index bcea188ea0..a41d69177e 100644 --- a/queue-6.16/series +++ b/queue-6.16/series @@ -35,3 +35,23 @@ net-sfp-add-quirk-for-flypro-copper-sfp-module.patch 
ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch hid-cp2112-fix-setter-callbacks-return-value.patch hid-amd_sfh-add-sync-across-amd-sfh-work-functions.patch +arm64-dts-rockchip-fix-the-headphone-detection-on-th.patch +firmware-imx-add-stub-functions-for-scmi-misc-api.patch +firmware-imx-add-stub-functions-for-scmi-lmm-api.patch +firmware-imx-add-stub-functions-for-scmi-cpu-api.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +smb-server-don-t-use-delayed_work-for-post_recv_cred.patch +smb-server-use-disable_work_sync-in-transport_rdma.c.patch +bpf-check-the-helper-function-is-valid-in-get_helper.patch +selftests-fs-mount-notify-fix-compilation-failure.patch +btrfs-don-t-allow-adding-block-device-of-less-than-1.patch +nfs-protect-against-eof-page-pollution.patch +nfsv4.2-protect-copy-offload-and-clone-against-eof-p.patch +drm-amdkfd-fix-p2p-links-bug-in-topology.patch +amd-amdkfd-correct-mem-limit-calculation-for-small-a.patch +wifi-virt_wifi-fix-page-fault-on-connect.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch +bpf-reject-bpf_timer-for-preempt_rt.patch +selftests-bpf-skip-timer-cases-when-bpf_timer-is-not.patch diff --git a/queue-6.16/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch b/queue-6.16/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch new file mode 100644 index 0000000000..0d1f70c15b --- /dev/null +++ b/queue-6.16/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch @@ -0,0 +1,103 @@ +From a5ca0169a295b8ce5aeb28bccb6c1c0c68d06f1e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 8 Aug 2025 17:55:17 +0200 +Subject: smb: server: don't use delayed_work for post_recv_credits_work + +From: Stefan Metzmacher + +[ Upstream commit 1cde0a74a7a8951b3097417847a458e557be0b5b ] + +If we are using a hardcoded delay of 0 there's no point in +using delayed_work it only adds confusion. + +The client also uses a normal work_struct and now +it is easier to move it to the common smbdirect_socket. 
+ +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 18 ++++++++---------- + 1 file changed, 8 insertions(+), 10 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 6550bd9f002c2..10a6b4ed1a037 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -148,7 +148,7 @@ struct smb_direct_transport { + wait_queue_head_t wait_send_pending; + atomic_t send_pending; + +- struct delayed_work post_recv_credits_work; ++ struct work_struct post_recv_credits_work; + struct work_struct send_immediate_work; + struct work_struct disconnect_work; + +@@ -367,8 +367,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id) + + spin_lock_init(&t->lock_new_recv_credits); + +- INIT_DELAYED_WORK(&t->post_recv_credits_work, +- smb_direct_post_recv_credits); ++ INIT_WORK(&t->post_recv_credits_work, ++ smb_direct_post_recv_credits); + INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work); + INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work); + +@@ -400,7 +400,7 @@ static void free_transport(struct smb_direct_transport *t) + atomic_read(&t->send_pending) == 0); + + cancel_work_sync(&t->disconnect_work); +- cancel_delayed_work_sync(&t->post_recv_credits_work); ++ cancel_work_sync(&t->post_recv_credits_work); + cancel_work_sync(&t->send_immediate_work); + + if (t->qp) { +@@ -615,8 +615,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc) + wake_up_interruptible(&t->wait_send_credits); + + if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count)) +- mod_delayed_work(smb_direct_wq, +- &t->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &t->post_recv_credits_work); + + if (data_length) { + enqueue_reassembly(t, recvmsg, (int)data_length); +@@ -773,8 +772,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + st->count_avail_recvmsg += queue_removed; + if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) { + spin_unlock(&st->receive_credit_lock); +- mod_delayed_work(smb_direct_wq, +- &st->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &st->post_recv_credits_work); + } else { + spin_unlock(&st->receive_credit_lock); + } +@@ -801,7 +799,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + static void smb_direct_post_recv_credits(struct work_struct *work) + { + struct smb_direct_transport *t = container_of(work, +- struct smb_direct_transport, post_recv_credits_work.work); ++ struct smb_direct_transport, post_recv_credits_work); + struct smb_direct_recvmsg *recvmsg; + int receive_credits, credits = 0; + int ret; +@@ -1734,7 +1732,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t) + goto out_err; + } + +- smb_direct_post_recv_credits(&t->post_recv_credits_work.work); ++ smb_direct_post_recv_credits(&t->post_recv_credits_work); + return 0; + out_err: + put_recvmsg(t, recvmsg); +-- +2.51.0 + diff --git a/queue-6.16/smb-server-use-disable_work_sync-in-transport_rdma.c.patch b/queue-6.16/smb-server-use-disable_work_sync-in-transport_rdma.c.patch new file mode 100644 index 0000000000..8cb16de8ae --- /dev/null +++ 
b/queue-6.16/smb-server-use-disable_work_sync-in-transport_rdma.c.patch @@ -0,0 +1,48 @@ +From 8b80bbba036033c9cc99c3a84261e71a33158fd7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 13 Aug 2025 08:48:42 +0200 +Subject: smb: server: use disable_work_sync in transport_rdma.c + +From: Stefan Metzmacher + +[ Upstream commit f7f89250175e0a82e99ed66da7012e869c36497d ] + +This makes it safer during the disconnect and avoids +requeueing. + +It's ok to call disable_work[_sync]() more than once. + +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 10a6b4ed1a037..74dfb6496095d 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -399,9 +399,9 @@ static void free_transport(struct smb_direct_transport *t) + wait_event(t->wait_send_pending, + atomic_read(&t->send_pending) == 0); + +- cancel_work_sync(&t->disconnect_work); +- cancel_work_sync(&t->post_recv_credits_work); +- cancel_work_sync(&t->send_immediate_work); ++ disable_work_sync(&t->disconnect_work); ++ disable_work_sync(&t->post_recv_credits_work); ++ disable_work_sync(&t->send_immediate_work); + + if (t->qp) { + ib_drain_qp(t->qp); +-- +2.51.0 + diff --git a/queue-6.16/wifi-virt_wifi-fix-page-fault-on-connect.patch b/queue-6.16/wifi-virt_wifi-fix-page-fault-on-connect.patch new file mode 100644 index 0000000000..a914a6d168 --- /dev/null +++ b/queue-6.16/wifi-virt_wifi-fix-page-fault-on-connect.patch @@ -0,0 +1,43 @@ +From 79c1dbcf496da2618938186d2e976395f70cb8d6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 19:19:29 +0800 +Subject: wifi: virt_wifi: Fix page fault on connect + +From: James Guan + +[ Upstream commit 9c600589e14f5fc01b8be9a5d0ad1f094b8b304b ] + +This patch prevents page fault in __cfg80211_connect_result()[1] +when connecting a virt_wifi device, while ensuring that virt_wifi +can connect properly. + +[1] https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ + +Closes: https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ +Signed-off-by: James Guan +Link: https://patch.msgid.link/20250910111929.137049-1-guan_yufei@163.com +[remove irrelevant network-manager instructions] +Signed-off-by: Johannes Berg +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/virtual/virt_wifi.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c +index 1fffeff2190ca..4eae89376feb5 100644 +--- a/drivers/net/wireless/virtual/virt_wifi.c ++++ b/drivers/net/wireless/virtual/virt_wifi.c +@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work) + priv->is_connected = true; + + /* Schedules an event that acquires the rtnl lock. */ +- cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0, ++ cfg80211_connect_result(priv->upperdev, ++ priv->is_connected ? 
fake_router_bssid : NULL, ++ NULL, 0, NULL, 0, + status, GFP_KERNEL); + netif_carrier_on(priv->upperdev); + } +-- +2.51.0 + diff --git a/queue-6.6/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch b/queue-6.6/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch new file mode 100644 index 0000000000..a60b2199c0 --- /dev/null +++ b/queue-6.6/arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch @@ -0,0 +1,45 @@ +From 6ec6e4bcf103005c72d1c11285e772492033013c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 30 Aug 2025 22:37:50 +0200 +Subject: ARM: dts: kirkwood: Fix sound DAI cells for OpenRD clients + +From: Jihed Chaibi + +[ Upstream commit 29341c6c18b8ad2a9a4a68a61be7e1272d842f21 ] + +A previous commit changed the '#sound-dai-cells' property for the +kirkwood audio controller from 1 to 0 in the kirkwood.dtsi file, +but did not update the corresponding 'sound-dai' property in the +kirkwood-openrd-client.dts file. + +This created a mismatch, causing a dtbs_check validation error where +the dts provides one cell (<&audio0 0>) while the .dtsi expects zero. + +Remove the extraneous cell from the 'sound-dai' property to fix the +schema validation warning and align with the updated binding. + +Fixes: e662e70fa419 ("arm: dts: kirkwood: fix error in #sound-dai-cells size") +Signed-off-by: Jihed Chaibi +Reviewed-by: Krzysztof Kozlowski +Signed-off-by: Gregory CLEMENT +Signed-off-by: Sasha Levin +--- + arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +index d4e0b8150a84c..cf26e2ceaaa07 100644 +--- a/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts ++++ b/arch/arm/boot/dts/marvell/kirkwood-openrd-client.dts +@@ -38,7 +38,7 @@ + simple-audio-card,mclk-fs = <256>; + + simple-audio-card,cpu { +- sound-dai = <&audio0 0>; ++ sound-dai = <&audio0>; + }; + + simple-audio-card,codec { +-- +2.51.0 + diff --git a/queue-6.6/arm64-dts-imx8mp-correct-thermal-sensor-index.patch b/queue-6.6/arm64-dts-imx8mp-correct-thermal-sensor-index.patch new file mode 100644 index 0000000000..7c40441676 --- /dev/null +++ b/queue-6.6/arm64-dts-imx8mp-correct-thermal-sensor-index.patch @@ -0,0 +1,50 @@ +From 23fd8d21137416ca415f092ba5b8fecbf1079e84 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Sep 2025 11:01:09 +0800 +Subject: arm64: dts: imx8mp: Correct thermal sensor index + +From: Peng Fan + +[ Upstream commit a50342f976d25aace73ff551845ce89406f48f35 ] + +The TMU has two temperature measurement sites located on the chip. The +probe 0 is located inside of the ANAMIX, while the probe 1 is located near +the ARM core. This has been confirmed by checking with HW design team and +checking RTL code. + +So correct the {cpu,soc}-thermal sensor index. 
+ +Fixes: 30cdd62dce6b ("arm64: dts: imx8mp: Add thermal zones support") +Signed-off-by: Peng Fan +Reviewed-by: Frank Li +Signed-off-by: Shawn Guo +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/freescale/imx8mp.dtsi | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/freescale/imx8mp.dtsi b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +index 69b213ed7a594..7f7bd8477efde 100644 +--- a/arch/arm64/boot/dts/freescale/imx8mp.dtsi ++++ b/arch/arm64/boot/dts/freescale/imx8mp.dtsi +@@ -228,7 +228,7 @@ + cpu-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 0>; ++ thermal-sensors = <&tmu 1>; + trips { + cpu_alert0: trip0 { + temperature = <85000>; +@@ -258,7 +258,7 @@ + soc-thermal { + polling-delay-passive = <250>; + polling-delay = <2000>; +- thermal-sensors = <&tmu 1>; ++ thermal-sensors = <&tmu 0>; + trips { + soc_alert0: trip0 { + temperature = <85000>; +-- +2.51.0 + diff --git a/queue-6.6/bpf-reject-bpf_timer-for-preempt_rt.patch b/queue-6.6/bpf-reject-bpf_timer-for-preempt_rt.patch new file mode 100644 index 0000000000..84ada022e8 --- /dev/null +++ b/queue-6.6/bpf-reject-bpf_timer-for-preempt_rt.patch @@ -0,0 +1,43 @@ +From e261a534c714f1c086b3d2aafa14f256f45c7a33 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 20:57:39 +0800 +Subject: bpf: Reject bpf_timer for PREEMPT_RT + +From: Leon Hwang + +[ Upstream commit e25ddfb388c8b7e5f20e3bf38d627fb485003781 ] + +When enable CONFIG_PREEMPT_RT, the kernel will warn when run timer +selftests by './test_progs -t timer': + +BUG: sleeping function called from invalid context at kernel/locking/spinlock_rt.c:48 + +In order to avoid such warning, reject bpf_timer in verifier when +PREEMPT_RT is enabled. + +Signed-off-by: Leon Hwang +Link: https://lore.kernel.org/r/20250910125740.52172-2-leon.hwang@linux.dev +Signed-off-by: Alexei Starovoitov +Signed-off-by: Sasha Levin +--- + kernel/bpf/verifier.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 7d6ee41f4b4f4..a6f825b7fbe6c 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -7546,6 +7546,10 @@ static int process_timer_func(struct bpf_verifier_env *env, int regno, + verbose(env, "verifier bug. Two map pointers in a timer helper\n"); + return -EFAULT; + } ++ if (IS_ENABLED(CONFIG_PREEMPT_RT)) { ++ verbose(env, "bpf_timer cannot be used for PREEMPT_RT.\n"); ++ return -EOPNOTSUPP; ++ } + meta->map_uid = reg->map_uid; + meta->map_ptr = map; + return 0; +-- +2.51.0 + diff --git a/queue-6.6/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch b/queue-6.6/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch new file mode 100644 index 0000000000..c0dd54d978 --- /dev/null +++ b/queue-6.6/can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch @@ -0,0 +1,52 @@ +From c7e044cacf7099b778497931e3abc858acac0d99 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Aug 2025 13:26:37 +0200 +Subject: can: rcar_can: rcar_can_resume(): fix s2ram with PSCI + +From: Geert Uytterhoeven + +[ Upstream commit 5c793afa07da6d2d4595f6c73a2a543a471bb055 ] + +On R-Car Gen3 using PSCI, s2ram powers down the SoC. After resume, the +CAN interface no longer works, until it is brought down and up again. + +Fix this by calling rcar_can_start() from the PM resume callback, to +fully initialize the controller instead of just restarting it. 
+ +Signed-off-by: Geert Uytterhoeven +Link: https://patch.msgid.link/699b2f7fcb60b31b6f976a37f08ce99c5ffccb31.1755165227.git.geert+renesas@glider.be +Signed-off-by: Marc Kleine-Budde +Signed-off-by: Sasha Levin +--- + drivers/net/can/rcar/rcar_can.c | 8 +------- + 1 file changed, 1 insertion(+), 7 deletions(-) + +diff --git a/drivers/net/can/rcar/rcar_can.c b/drivers/net/can/rcar/rcar_can.c +index f5aa5dbacaf21..1f26aba620b98 100644 +--- a/drivers/net/can/rcar/rcar_can.c ++++ b/drivers/net/can/rcar/rcar_can.c +@@ -861,7 +861,6 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + { + struct net_device *ndev = dev_get_drvdata(dev); + struct rcar_can_priv *priv = netdev_priv(ndev); +- u16 ctlr; + int err; + + if (!netif_running(ndev)) +@@ -873,12 +872,7 @@ static int __maybe_unused rcar_can_resume(struct device *dev) + return err; + } + +- ctlr = readw(&priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_SLPM; +- writew(ctlr, &priv->regs->ctlr); +- ctlr &= ~RCAR_CAN_CTLR_CANM; +- writew(ctlr, &priv->regs->ctlr); +- priv->can.state = CAN_STATE_ERROR_ACTIVE; ++ rcar_can_start(ndev); + + netif_device_attach(ndev); + netif_start_queue(ndev); +-- +2.51.0 + diff --git a/queue-6.6/cpufreq-initialize-cpufreq-based-invariance-before-s.patch b/queue-6.6/cpufreq-initialize-cpufreq-based-invariance-before-s.patch new file mode 100644 index 0000000000..819c10284c --- /dev/null +++ b/queue-6.6/cpufreq-initialize-cpufreq-based-invariance-before-s.patch @@ -0,0 +1,84 @@ +From c2c143b88290300bd31131e6a1939ea64eb1578b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 18 Sep 2025 11:15:52 +0100 +Subject: cpufreq: Initialize cpufreq-based invariance before subsys + +From: Christian Loehle + +[ Upstream commit 8ffe28b4e8d8b18cb2f2933410322c24f039d5d6 ] + +commit 2a6c72738706 ("cpufreq: Initialize cpufreq-based +frequency-invariance later") postponed the frequency invariance +initialization to avoid disabling it in the error case. +This isn't locking safe, instead move the initialization up before +the subsys interface is registered (which will rebuild the +sched_domains) and add the corresponding disable on the error path. + +Observed lockdep without this patch: +[ 0.989686] ====================================================== +[ 0.989688] WARNING: possible circular locking dependency detected +[ 0.989690] 6.17.0-rc4-cix-build+ #31 Tainted: G S +[ 0.989691] ------------------------------------------------------ +[ 0.989692] swapper/0/1 is trying to acquire lock: +[ 0.989693] ffff800082ada7f8 (sched_energy_mutex){+.+.}-{4:4}, at: rebuild_sched_domains_energy+0x30/0x58 +[ 0.989705] + but task is already holding lock: +[ 0.989706] ffff000088c89bc8 (&policy->rwsem){+.+.}-{4:4}, at: cpufreq_online+0x7f8/0xbe0 +[ 0.989713] + which lock already depends on the new lock. + +Fixes: 2a6c72738706 ("cpufreq: Initialize cpufreq-based frequency-invariance later") +Signed-off-by: Christian Loehle +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/cpufreq/cpufreq.c | 20 +++++++++++--------- + 1 file changed, 11 insertions(+), 9 deletions(-) + +diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c +index 30d8f2ada0f17..76b0b9e6309b9 100644 +--- a/drivers/cpufreq/cpufreq.c ++++ b/drivers/cpufreq/cpufreq.c +@@ -2950,6 +2950,15 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + goto err_null_driver; + } + ++ /* ++ * Mark support for the scheduler's frequency invariance engine for ++ * drivers that implement target(), target_index() or fast_switch(). 
++ */ ++ if (!cpufreq_driver->setpolicy) { ++ static_branch_enable_cpuslocked(&cpufreq_freq_invariance); ++ pr_debug("cpufreq: supports frequency invariance\n"); ++ } ++ + ret = subsys_interface_register(&cpufreq_interface); + if (ret) + goto err_boost_unreg; +@@ -2971,21 +2980,14 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data) + hp_online = ret; + ret = 0; + +- /* +- * Mark support for the scheduler's frequency invariance engine for +- * drivers that implement target(), target_index() or fast_switch(). +- */ +- if (!cpufreq_driver->setpolicy) { +- static_branch_enable_cpuslocked(&cpufreq_freq_invariance); +- pr_debug("supports frequency invariance"); +- } +- + pr_debug("driver %s up and running\n", driver_data->name); + goto out; + + err_if_unreg: + subsys_interface_unregister(&cpufreq_interface); + err_boost_unreg: ++ if (!cpufreq_driver->setpolicy) ++ static_branch_disable_cpuslocked(&cpufreq_freq_invariance); + remove_boost_sysfs_file(); + err_null_driver: + write_lock_irqsave(&cpufreq_driver_lock, flags); +-- +2.51.0 + diff --git a/queue-6.6/mm-add-folio_expected_ref_count-for-reference-count-.patch b/queue-6.6/mm-add-folio_expected_ref_count-for-reference-count-.patch new file mode 100644 index 0000000000..ad9e13230a --- /dev/null +++ b/queue-6.6/mm-add-folio_expected_ref_count-for-reference-count-.patch @@ -0,0 +1,149 @@ +From af0fb81fe325b7c950b9ae06ec5600cc4e23e822 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 30 Apr 2025 10:01:51 +0000 +Subject: mm: add folio_expected_ref_count() for reference count calculation + +From: Shivank Garg + +[ Upstream commit 86ebd50224c0734d965843260d0dc057a9431c61 ] + +Patch series " JFS: Implement migrate_folio for jfs_metapage_aops" v5. + +This patchset addresses a warning that occurs during memory compaction due +to JFS's missing migrate_folio operation. The warning was introduced by +commit 7ee3647243e5 ("migrate: Remove call to ->writepage") which added +explicit warnings when filesystem don't implement migrate_folio. + +The syzbot reported following [1]: + jfs_metapage_aops does not implement migrate_folio + WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 fallback_migrate_folio mm/migrate.c:953 [inline] + WARNING: CPU: 1 PID: 5861 at mm/migrate.c:955 move_to_new_folio+0x70e/0x840 mm/migrate.c:1007 + Modules linked in: + CPU: 1 UID: 0 PID: 5861 Comm: syz-executor280 Not tainted 6.15.0-rc1-next-20250411-syzkaller #0 PREEMPT(full) + Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 + RIP: 0010:fallback_migrate_folio mm/migrate.c:953 [inline] + RIP: 0010:move_to_new_folio+0x70e/0x840 mm/migrate.c:1007 + +To fix this issue, this series implement metapage_migrate_folio() for JFS +which handles both single and multiple metapages per page configurations. + +While most filesystems leverage existing migration implementations like +filemap_migrate_folio(), buffer_migrate_folio_norefs() or +buffer_migrate_folio() (which internally used folio_expected_refs()), +JFS's metapage architecture requires special handling of its private data +during migration. To support this, this series introduce the +folio_expected_ref_count(), which calculates external references to a +folio from page/swap cache, private data, and page table mappings. + +This standardized implementation replaces the previous ad-hoc +folio_expected_refs() function and enables JFS to accurately determine +whether a folio has unexpected references before attempting migration. 
+ +Implement folio_expected_ref_count() to calculate expected folio reference +counts from: +- Page/swap cache (1 per page) +- Private data (1) +- Page table mappings (1 per map) + +While originally needed for page migration operations, this improved +implementation standardizes reference counting by consolidating all +refcount contributors into a single, reusable function that can benefit +any subsystem needing to detect unexpected references to folios. + +The folio_expected_ref_count() returns the sum of these external +references without including any reference the caller itself might hold. +Callers comparing against the actual folio_ref_count() must account for +their own references separately. + +Link: https://syzkaller.appspot.com/bug?extid=8bb6fd945af4e0ad9299 [1] +Link: https://lkml.kernel.org/r/20250430100150.279751-1-shivankg@amd.com +Link: https://lkml.kernel.org/r/20250430100150.279751-2-shivankg@amd.com +Signed-off-by: David Hildenbrand +Signed-off-by: Shivank Garg +Suggested-by: Matthew Wilcox +Co-developed-by: David Hildenbrand +Cc: Alistair Popple +Cc: Dave Kleikamp +Cc: Donet Tom +Cc: Jane Chu +Cc: Kefeng Wang +Cc: Zi Yan +Signed-off-by: Andrew Morton +Stable-dep-of: 98c6d259319e ("mm/gup: check ref_count instead of lru before migration") +[ Take the new function in mm.h, removing "const" from its parameter to stop + build warnings; but avoid all the conflicts of using it in mm/migrate.c. ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + include/linux/mm.h | 55 ++++++++++++++++++++++++++++++++++++++++++++++ + 1 file changed, 55 insertions(+) + +diff --git a/include/linux/mm.h b/include/linux/mm.h +index b97d8a691b28b..ba77f08900ca2 100644 +--- a/include/linux/mm.h ++++ b/include/linux/mm.h +@@ -2156,6 +2156,61 @@ static inline int folio_estimated_sharers(struct folio *folio) + return page_mapcount(folio_page(folio, 0)); + } + ++/** ++ * folio_expected_ref_count - calculate the expected folio refcount ++ * @folio: the folio ++ * ++ * Calculate the expected folio refcount, taking references from the pagecache, ++ * swapcache, PG_private and page table mappings into account. Useful in ++ * combination with folio_ref_count() to detect unexpected references (e.g., ++ * GUP or other temporary references). ++ * ++ * Does currently not consider references from the LRU cache. If the folio ++ * was isolated from the LRU (which is the case during migration or split), ++ * the LRU cache does not apply. ++ * ++ * Calling this function on an unmapped folio -- !folio_mapped() -- that is ++ * locked will return a stable result. ++ * ++ * Calling this function on a mapped folio will not result in a stable result, ++ * because nothing stops additional page table mappings from coming (e.g., ++ * fork()) or going (e.g., munmap()). ++ * ++ * Calling this function without the folio lock will also not result in a ++ * stable result: for example, the folio might get dropped from the swapcache ++ * concurrently. ++ * ++ * However, even when called without the folio lock or on a mapped folio, ++ * this function can be used to detect unexpected references early (for example, ++ * if it makes sense to even lock the folio and unmap it). ++ * ++ * The caller must add any reference (e.g., from folio_try_get()) it might be ++ * holding itself to the result. ++ * ++ * Returns the expected folio refcount. 
++ */ ++static inline int folio_expected_ref_count(struct folio *folio) ++{ ++ const int order = folio_order(folio); ++ int ref_count = 0; ++ ++ if (WARN_ON_ONCE(folio_test_slab(folio))) ++ return 0; ++ ++ if (folio_test_anon(folio)) { ++ /* One reference per page from the swapcache. */ ++ ref_count += folio_test_swapcache(folio) << order; ++ } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) { ++ /* One reference per page from the pagecache. */ ++ ref_count += !!folio->mapping << order; ++ /* One reference from PG_private. */ ++ ref_count += folio_test_private(folio); ++ } ++ ++ /* One reference per page table mapping. */ ++ return ref_count + folio_mapcount(folio); ++} ++ + #ifndef HAVE_ARCH_MAKE_PAGE_ACCESSIBLE + static inline int arch_make_page_accessible(struct page *page) + { +-- +2.51.0 + diff --git a/queue-6.6/mm-folio_may_be_lru_cached-unless-folio_test_large.patch b/queue-6.6/mm-folio_may_be_lru_cached-unless-folio_test_large.patch new file mode 100644 index 0000000000..4514551992 --- /dev/null +++ b/queue-6.6/mm-folio_may_be_lru_cached-unless-folio_test_large.patch @@ -0,0 +1,152 @@ +From 1042cb89ec4aa3a7c170a37fc0ea348cdb571464 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:23:15 -0700 +Subject: mm: folio_may_be_lru_cached() unless folio_test_large() + +From: Hugh Dickins + +[ Upstream commit 2da6de30e60dd9bb14600eff1cc99df2fa2ddae3 ] + +mm/swap.c and mm/mlock.c agree to drain any per-CPU batch as soon as a +large folio is added: so collect_longterm_unpinnable_folios() just wastes +effort when calling lru_add_drain[_all]() on a large folio. + +But although there is good reason not to batch up PMD-sized folios, we +might well benefit from batching a small number of low-order mTHPs (though +unclear how that "small number" limitation will be implemented). + +So ask if folio_may_be_lru_cached() rather than !folio_test_large(), to +insulate those particular checks from future change. Name preferred to +"folio_is_batchable" because large folios can well be put on a batch: it's +just the per-CPU LRU caches, drained much later, which need care. + +Marked for stable, to counter the increase in lru_add_drain_all()s from +"mm/gup: check ref_count instead of lru before migration". 
+ +Link: https://lkml.kernel.org/r/57d2eaf8-3607-f318-e0c5-be02dce61ad0@google.com +Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region") +Signed-off-by: Hugh Dickins +Suggested-by: David Hildenbrand +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: Will Deacon +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Resolved conflicts in mm/swap.c ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + include/linux/swap.h | 10 ++++++++++ + mm/gup.c | 4 ++-- + mm/mlock.c | 6 +++--- + mm/swap.c | 4 ++-- + 4 files changed, 17 insertions(+), 7 deletions(-) + +diff --git a/include/linux/swap.h b/include/linux/swap.h +index cb25db2a93dd1..d7a5b7817987d 100644 +--- a/include/linux/swap.h ++++ b/include/linux/swap.h +@@ -375,6 +375,16 @@ void folio_add_lru_vma(struct folio *, struct vm_area_struct *); + void mark_page_accessed(struct page *); + void folio_mark_accessed(struct folio *); + ++static inline bool folio_may_be_lru_cached(struct folio *folio) ++{ ++ /* ++ * Holding PMD-sized folios in per-CPU LRU cache unbalances accounting. ++ * Holding small numbers of low-order mTHP folios in per-CPU LRU cache ++ * will be sensible, but nobody has implemented and tested that yet. ++ */ ++ return !folio_test_large(folio); ++} ++ + extern atomic_t lru_disable_count; + + static inline bool lru_cache_disabled(void) +diff --git a/mm/gup.c b/mm/gup.c +index 5be764395e046..53154b63295ab 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1975,13 +1975,13 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (drained == 0 && ++ if (drained == 0 && folio_may_be_lru_cached(folio) && + folio_ref_count(folio) != + folio_expected_ref_count(folio) + 1) { + lru_add_drain(); + drained = 1; + } +- if (drained == 1 && ++ if (drained == 1 && folio_may_be_lru_cached(folio) && + folio_ref_count(folio) != + folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); +diff --git a/mm/mlock.c b/mm/mlock.c +index 06bdfab83b58a..6858095c20dd9 100644 +--- a/mm/mlock.c ++++ b/mm/mlock.c +@@ -256,7 +256,7 @@ void mlock_folio(struct folio *folio) + + folio_get(folio); + if (!folio_batch_add(fbatch, mlock_lru(folio)) || +- folio_test_large(folio) || lru_cache_disabled()) ++ !folio_may_be_lru_cached(folio) || lru_cache_disabled()) + mlock_folio_batch(fbatch); + local_unlock(&mlock_fbatch.lock); + } +@@ -279,7 +279,7 @@ void mlock_new_folio(struct folio *folio) + + folio_get(folio); + if (!folio_batch_add(fbatch, mlock_new(folio)) || +- folio_test_large(folio) || lru_cache_disabled()) ++ !folio_may_be_lru_cached(folio) || lru_cache_disabled()) + mlock_folio_batch(fbatch); + local_unlock(&mlock_fbatch.lock); + } +@@ -300,7 +300,7 @@ void munlock_folio(struct folio *folio) + */ + folio_get(folio); + if (!folio_batch_add(fbatch, folio) || +- folio_test_large(folio) || lru_cache_disabled()) ++ !folio_may_be_lru_cached(folio) || lru_cache_disabled()) + mlock_folio_batch(fbatch); + local_unlock(&mlock_fbatch.lock); + } +diff --git a/mm/swap.c b/mm/swap.c +index 42082eba42de3..8fde1a27aa482 100644 +--- a/mm/swap.c ++++ b/mm/swap.c +@@ -220,8 +220,8 @@ static void folio_batch_move_lru(struct folio_batch *fbatch, move_fn_t move_fn) + static void 
folio_batch_add_and_move(struct folio_batch *fbatch, + struct folio *folio, move_fn_t move_fn) + { +- if (folio_batch_add(fbatch, folio) && !folio_test_large(folio) && +- !lru_cache_disabled()) ++ if (folio_batch_add(fbatch, folio) && ++ folio_may_be_lru_cached(folio) && !lru_cache_disabled()) + return; + folio_batch_move_lru(fbatch, move_fn); + } +-- +2.51.0 + diff --git a/queue-6.6/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch b/queue-6.6/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch new file mode 100644 index 0000000000..8456ad7e51 --- /dev/null +++ b/queue-6.6/mm-gup-check-ref_count-instead-of-lru-before-migrati.patch @@ -0,0 +1,143 @@ +From cce3cb77bbd08258295b23dedc33b4808e3fd016 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:15:03 -0700 +Subject: mm/gup: check ref_count instead of lru before migration + +From: Hugh Dickins + +[ Upstream commit 98c6d259319ecf6e8d027abd3f14b81324b8c0ad ] + +Patch series "mm: better GUP pin lru_add_drain_all()", v2. + +Series of lru_add_drain_all()-related patches, arising from recent mm/gup +migration report from Will Deacon. + +This patch (of 5): + +Will Deacon reports:- + +When taking a longterm GUP pin via pin_user_pages(), +__gup_longterm_locked() tries to migrate target folios that should not be +longterm pinned, for example because they reside in a CMA region or +movable zone. This is done by first pinning all of the target folios +anyway, collecting all of the longterm-unpinnable target folios into a +list, dropping the pins that were just taken and finally handing the list +off to migrate_pages() for the actual migration. + +It is critically important that no unexpected references are held on the +folios being migrated, otherwise the migration will fail and +pin_user_pages() will return -ENOMEM to its caller. Unfortunately, it is +relatively easy to observe migration failures when running pKVM (which +uses pin_user_pages() on crosvm's virtual address space to resolve stage-2 +page faults from the guest) on a 6.15-based Pixel 6 device and this +results in the VM terminating prematurely. + +In the failure case, 'crosvm' has called mlock(MLOCK_ONFAULT) on its +mapping of guest memory prior to the pinning. Subsequently, when +pin_user_pages() walks the page-table, the relevant 'pte' is not present +and so the faulting logic allocates a new folio, mlocks it with +mlock_folio() and maps it in the page-table. + +Since commit 2fbb0c10d1e8 ("mm/munlock: mlock_page() munlock_page() batch +by pagevec"), mlock/munlock operations on a folio (formerly page), are +deferred. For example, mlock_folio() takes an additional reference on the +target folio before placing it into a per-cpu 'folio_batch' for later +processing by mlock_folio_batch(), which drops the refcount once the +operation is complete. Processing of the batches is coupled with the LRU +batch logic and can be forcefully drained with lru_add_drain_all() but as +long as a folio remains unprocessed on the batch, its refcount will be +elevated. + +This deferred batching therefore interacts poorly with the pKVM pinning +scenario as we can find ourselves in a situation where the migration code +fails to migrate a folio due to the elevated refcount from the pending +mlock operation. 
+ +Hugh Dickins adds:- + +!folio_test_lru() has never been a very reliable way to tell if an +lru_add_drain_all() is worth calling, to remove LRU cache references to +make the folio migratable: the LRU flag may be set even while the folio is +held with an extra reference in a per-CPU LRU cache. + +5.18 commit 2fbb0c10d1e8 may have made it more unreliable. Then 6.11 +commit 33dfe9204f29 ("mm/gup: clear the LRU flag of a page before adding +to LRU batch") tried to make it reliable, by moving LRU flag clearing; but +missed the mlock/munlock batches, so still unreliable as reported. + +And it turns out to be difficult to extend 33dfe9204f29's LRU flag +clearing to the mlock/munlock batches: if they do benefit from batching, +mlock/munlock cannot be so effective when easily suppressed while !LRU. + +Instead, switch to an expected ref_count check, which was more reliable +all along: some more false positives (unhelpful drains) than before, and +never a guarantee that the folio will prove migratable, but better. + +Note on PG_private_2: ceph and nfs are still using the deprecated +PG_private_2 flag, with the aid of netfs and filemap support functions. +Although it is consistently matched by an increment of folio ref_count, +folio_expected_ref_count() intentionally does not recognize it, and ceph +folio migration currently depends on that for PG_private_2 folios to be +rejected. New references to the deprecated flag are discouraged, so do +not add it into the collect_longterm_unpinnable_folios() calculation: but +longterm pinning of transiently PG_private_2 ceph and nfs folios (an +uncommon case) may invoke a redundant lru_add_drain_all(). And this makes +easy the backport to earlier releases: up to and including 6.12, btrfs +also used PG_private_2, but without a ref_count increment. + +Note for stable backports: requires 6.16 commit 86ebd50224c0 ("mm: +add folio_expected_ref_count() for reference count calculation"). 
+ +Link: https://lkml.kernel.org/r/41395944-b0e3-c3ac-d648-8ddd70451d28@google.com +Link: https://lkml.kernel.org/r/bd1f314a-fca1-8f19-cac0-b936c9614557@google.com +Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages allocated from CMA region") +Signed-off-by: Hugh Dickins +Reported-by: Will Deacon +Closes: https://lore.kernel.org/linux-mm/20250815101858.24352-1-will@kernel.org/ +Acked-by: Kiryl Shutsemau +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Clean cherry-pick now into this tree ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 497d7ce43d393..00ac2df7164c3 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1975,7 +1975,8 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (!folio_test_lru(folio) && drain_allow) { ++ if (drain_allow && folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); + drain_allow = false; + } +-- +2.51.0 + diff --git a/queue-6.6/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch b/queue-6.6/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch new file mode 100644 index 0000000000..5384a7a5ae --- /dev/null +++ b/queue-6.6/mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch @@ -0,0 +1,88 @@ +From ae0d143998dd7c94b150a24e484e905d51fdbce1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 8 Sep 2025 15:16:53 -0700 +Subject: mm/gup: local lru_add_drain() to avoid lru_add_drain_all() + +From: Hugh Dickins + +[ Upstream commit a09a8a1fbb374e0053b97306da9dbc05bd384685 ] + +In many cases, if collect_longterm_unpinnable_folios() does need to drain +the LRU cache to release a reference, the cache in question is on this +same CPU, and much more efficiently drained by a preliminary local +lru_add_drain(), than the later cross-CPU lru_add_drain_all(). + +Marked for stable, to counter the increase in lru_add_drain_all()s from +"mm/gup: check ref_count instead of lru before migration". Note for clean +backports: can take 6.16 commit a03db236aebf ("gup: optimize longterm +pin_user_pages() for large folio") first. 
+ +Link: https://lkml.kernel.org/r/66f2751f-283e-816d-9530-765db7edc465@google.com +Signed-off-by: Hugh Dickins +Acked-by: David Hildenbrand +Cc: "Aneesh Kumar K.V" +Cc: Axel Rasmussen +Cc: Chris Li +Cc: Christoph Hellwig +Cc: Jason Gunthorpe +Cc: Johannes Weiner +Cc: John Hubbard +Cc: Keir Fraser +Cc: Konstantin Khlebnikov +Cc: Li Zhe +Cc: Matthew Wilcox (Oracle) +Cc: Peter Xu +Cc: Rik van Riel +Cc: Shivank Garg +Cc: Vlastimil Babka +Cc: Wei Xu +Cc: Will Deacon +Cc: yangge +Cc: Yuanchu Xie +Cc: Yu Zhao +Cc: +Signed-off-by: Andrew Morton +[ Resolved minor conflicts ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 15 +++++++++++---- + 1 file changed, 11 insertions(+), 4 deletions(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 00ac2df7164c3..5be764395e046 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1953,7 +1953,7 @@ static unsigned long collect_longterm_unpinnable_pages( + { + unsigned long i, collected = 0; + struct folio *prev_folio = NULL; +- bool drain_allow = true; ++ int drained = 0; + + for (i = 0; i < nr_pages; i++) { + struct folio *folio = page_folio(pages[i]); +@@ -1975,10 +1975,17 @@ static unsigned long collect_longterm_unpinnable_pages( + continue; + } + +- if (drain_allow && folio_ref_count(folio) != +- folio_expected_ref_count(folio) + 1) { ++ if (drained == 0 && ++ folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { ++ lru_add_drain(); ++ drained = 1; ++ } ++ if (drained == 1 && ++ folio_ref_count(folio) != ++ folio_expected_ref_count(folio) + 1) { + lru_add_drain_all(); +- drain_allow = false; ++ drained = 2; + } + + if (!folio_isolate_lru(folio)) +-- +2.51.0 + diff --git a/queue-6.6/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch b/queue-6.6/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch new file mode 100644 index 0000000000..00bbe994b2 --- /dev/null +++ b/queue-6.6/mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch @@ -0,0 +1,114 @@ +From ab31ff1f9d5264fede3a6035290c213d9e264f5b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 15:13:14 +0200 +Subject: mm/gup: revert "mm: gup: fix infinite loop within + __get_longterm_locked" + +From: David Hildenbrand + +[ Upstream commit 517f496e1e61bd169d585dab4dd77e7147506322 ] + +After commit 1aaf8c122918 ("mm: gup: fix infinite loop within +__get_longterm_locked") we are able to longterm pin folios that are not +supposed to get longterm pinned, simply because they temporarily have the +LRU flag cleared (esp. temporarily isolated). + +For example, two __get_longterm_locked() callers can race, or +__get_longterm_locked() can race with anything else that temporarily +isolates folios. + +The introducing commit mentions the use case of a driver that uses +vm_ops->fault to insert pages allocated through cma_alloc() into the page +tables, assuming they can later get longterm pinned. These pages/ folios +would never have the LRU flag set and consequently cannot get isolated. +There is no known in-tree user making use of that so far, fortunately. + +To handle that in the future -- and avoid retrying forever to +isolate/migrate them -- we will need a different mechanism for the CMA +area *owner* to indicate that it actually already allocated the page and +is fine with longterm pinning it. The LRU flag is not suitable for that. + +Probably we can lookup the relevant CMA area and query the bitmap; we only +have have to care about some races, probably. 
If already allocated, we +could just allow longterm pinning) + +Anyhow, let's fix the "must not be longterm pinned" problem first by +reverting the original commit. + +Link: https://lkml.kernel.org/r/20250611131314.594529-1-david@redhat.com +Fixes: 1aaf8c122918 ("mm: gup: fix infinite loop within __get_longterm_locked") +Signed-off-by: David Hildenbrand +Closes: https://lore.kernel.org/all/20250522092755.GA3277597@tiffany/ +Reported-by: Hyesoo Yu +Reviewed-by: John Hubbard +Cc: Jason Gunthorpe +Cc: Peter Xu +Cc: Zhaoyang Huang +Cc: Aijun Sun +Cc: Alistair Popple +Cc: +Signed-off-by: Andrew Morton +[ Revert v6.6.79 commit 933b08c0edfa ] +Signed-off-by: Hugh Dickins +Signed-off-by: Sasha Levin +--- + mm/gup.c | 14 ++++++++++---- + 1 file changed, 10 insertions(+), 4 deletions(-) + +diff --git a/mm/gup.c b/mm/gup.c +index 29c719b3ab31e..497d7ce43d393 100644 +--- a/mm/gup.c ++++ b/mm/gup.c +@@ -1946,14 +1946,14 @@ struct page *get_dump_page(unsigned long addr) + /* + * Returns the number of collected pages. Return value is always >= 0. + */ +-static void collect_longterm_unpinnable_pages( ++static unsigned long collect_longterm_unpinnable_pages( + struct list_head *movable_page_list, + unsigned long nr_pages, + struct page **pages) + { ++ unsigned long i, collected = 0; + struct folio *prev_folio = NULL; + bool drain_allow = true; +- unsigned long i; + + for (i = 0; i < nr_pages; i++) { + struct folio *folio = page_folio(pages[i]); +@@ -1965,6 +1965,8 @@ static void collect_longterm_unpinnable_pages( + if (folio_is_longterm_pinnable(folio)) + continue; + ++ collected++; ++ + if (folio_is_device_coherent(folio)) + continue; + +@@ -1986,6 +1988,8 @@ static void collect_longterm_unpinnable_pages( + NR_ISOLATED_ANON + folio_is_file_lru(folio), + folio_nr_pages(folio)); + } ++ ++ return collected; + } + + /* +@@ -2078,10 +2082,12 @@ static int migrate_longterm_unpinnable_pages( + static long check_and_migrate_movable_pages(unsigned long nr_pages, + struct page **pages) + { ++ unsigned long collected; + LIST_HEAD(movable_page_list); + +- collect_longterm_unpinnable_pages(&movable_page_list, nr_pages, pages); +- if (list_empty(&movable_page_list)) ++ collected = collect_longterm_unpinnable_pages(&movable_page_list, ++ nr_pages, pages); ++ if (!collected) + return 0; + + return migrate_longterm_unpinnable_pages(&movable_page_list, nr_pages, +-- +2.51.0 + diff --git a/queue-6.6/series b/queue-6.6/series index a341579cab..d22d5539ff 100644 --- a/queue-6.6/series +++ b/queue-6.6/series @@ -19,3 +19,15 @@ alsa-usb-audio-add-dsd-support-for-comtrue-usb-audio.patch alsa-usb-audio-move-mixer_quirks-min_mute-into-commo.patch alsa-usb-audio-add-mute-tlv-for-playback-volumes-on-.patch ib-mlx5-fix-obj_type-mismatch-for-srq-event-subscrip.patch +mm-gup-revert-mm-gup-fix-infinite-loop-within-__get_.patch +mm-add-folio_expected_ref_count-for-reference-count-.patch +mm-gup-check-ref_count-instead-of-lru-before-migrati.patch +mm-gup-local-lru_add_drain-to-avoid-lru_add_drain_al.patch +mm-folio_may_be_lru_cached-unless-folio_test_large.patch +arm64-dts-imx8mp-correct-thermal-sensor-index.patch +arm-dts-kirkwood-fix-sound-dai-cells-for-openrd-clie.patch +cpufreq-initialize-cpufreq-based-invariance-before-s.patch +smb-server-don-t-use-delayed_work-for-post_recv_cred.patch +wifi-virt_wifi-fix-page-fault-on-connect.patch +can-rcar_can-rcar_can_resume-fix-s2ram-with-psci.patch +bpf-reject-bpf_timer-for-preempt_rt.patch diff --git a/queue-6.6/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch 
b/queue-6.6/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch new file mode 100644 index 0000000000..0e59f250b1 --- /dev/null +++ b/queue-6.6/smb-server-don-t-use-delayed_work-for-post_recv_cred.patch @@ -0,0 +1,103 @@ +From ca53a1af48c6e7739b65410e5f8f0e7ef2e492db Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 8 Aug 2025 17:55:17 +0200 +Subject: smb: server: don't use delayed_work for post_recv_credits_work + +From: Stefan Metzmacher + +[ Upstream commit 1cde0a74a7a8951b3097417847a458e557be0b5b ] + +If we are using a hardcoded delay of 0 there's no point in +using delayed_work it only adds confusion. + +The client also uses a normal work_struct and now +it is easier to move it to the common smbdirect_socket. + +Cc: Namjae Jeon +Cc: Steve French +Cc: Tom Talpey +Cc: linux-cifs@vger.kernel.org +Cc: samba-technical@lists.samba.org +Fixes: 0626e6641f6b ("cifsd: add server handler for central processing and tranport layers") +Signed-off-by: Stefan Metzmacher +Acked-by: Namjae Jeon +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/server/transport_rdma.c | 18 ++++++++---------- + 1 file changed, 8 insertions(+), 10 deletions(-) + +diff --git a/fs/smb/server/transport_rdma.c b/fs/smb/server/transport_rdma.c +index 3720304d67929..504e2a1cf33b8 100644 +--- a/fs/smb/server/transport_rdma.c ++++ b/fs/smb/server/transport_rdma.c +@@ -147,7 +147,7 @@ struct smb_direct_transport { + wait_queue_head_t wait_send_pending; + atomic_t send_pending; + +- struct delayed_work post_recv_credits_work; ++ struct work_struct post_recv_credits_work; + struct work_struct send_immediate_work; + struct work_struct disconnect_work; + +@@ -366,8 +366,8 @@ static struct smb_direct_transport *alloc_transport(struct rdma_cm_id *cm_id) + + spin_lock_init(&t->lock_new_recv_credits); + +- INIT_DELAYED_WORK(&t->post_recv_credits_work, +- smb_direct_post_recv_credits); ++ INIT_WORK(&t->post_recv_credits_work, ++ smb_direct_post_recv_credits); + INIT_WORK(&t->send_immediate_work, smb_direct_send_immediate_work); + INIT_WORK(&t->disconnect_work, smb_direct_disconnect_rdma_work); + +@@ -399,7 +399,7 @@ static void free_transport(struct smb_direct_transport *t) + atomic_read(&t->send_pending) == 0); + + cancel_work_sync(&t->disconnect_work); +- cancel_delayed_work_sync(&t->post_recv_credits_work); ++ cancel_work_sync(&t->post_recv_credits_work); + cancel_work_sync(&t->send_immediate_work); + + if (t->qp) { +@@ -614,8 +614,7 @@ static void recv_done(struct ib_cq *cq, struct ib_wc *wc) + wake_up_interruptible(&t->wait_send_credits); + + if (is_receive_credit_post_required(receive_credits, avail_recvmsg_count)) +- mod_delayed_work(smb_direct_wq, +- &t->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &t->post_recv_credits_work); + + if (data_length) { + enqueue_reassembly(t, recvmsg, (int)data_length); +@@ -772,8 +771,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + st->count_avail_recvmsg += queue_removed; + if (is_receive_credit_post_required(st->recv_credits, st->count_avail_recvmsg)) { + spin_unlock(&st->receive_credit_lock); +- mod_delayed_work(smb_direct_wq, +- &st->post_recv_credits_work, 0); ++ queue_work(smb_direct_wq, &st->post_recv_credits_work); + } else { + spin_unlock(&st->receive_credit_lock); + } +@@ -800,7 +798,7 @@ static int smb_direct_read(struct ksmbd_transport *t, char *buf, + static void smb_direct_post_recv_credits(struct work_struct *work) + { + struct smb_direct_transport *t = container_of(work, +- struct smb_direct_transport, 
post_recv_credits_work.work); ++ struct smb_direct_transport, post_recv_credits_work); + struct smb_direct_recvmsg *recvmsg; + int receive_credits, credits = 0; + int ret; +@@ -1681,7 +1679,7 @@ static int smb_direct_prepare_negotiation(struct smb_direct_transport *t) + goto out_err; + } + +- smb_direct_post_recv_credits(&t->post_recv_credits_work.work); ++ smb_direct_post_recv_credits(&t->post_recv_credits_work); + return 0; + out_err: + put_recvmsg(t, recvmsg); +-- +2.51.0 + diff --git a/queue-6.6/wifi-virt_wifi-fix-page-fault-on-connect.patch b/queue-6.6/wifi-virt_wifi-fix-page-fault-on-connect.patch new file mode 100644 index 0000000000..06d3612b8d --- /dev/null +++ b/queue-6.6/wifi-virt_wifi-fix-page-fault-on-connect.patch @@ -0,0 +1,43 @@ +From c216e1713d2a7e3bf4198237706517bc8b95f21d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 10 Sep 2025 19:19:29 +0800 +Subject: wifi: virt_wifi: Fix page fault on connect + +From: James Guan + +[ Upstream commit 9c600589e14f5fc01b8be9a5d0ad1f094b8b304b ] + +This patch prevents page fault in __cfg80211_connect_result()[1] +when connecting a virt_wifi device, while ensuring that virt_wifi +can connect properly. + +[1] https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ + +Closes: https://lore.kernel.org/linux-wireless/20250909063213.1055024-1-guan_yufei@163.com/ +Signed-off-by: James Guan +Link: https://patch.msgid.link/20250910111929.137049-1-guan_yufei@163.com +[remove irrelevant network-manager instructions] +Signed-off-by: Johannes Berg +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/virtual/virt_wifi.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/wireless/virtual/virt_wifi.c b/drivers/net/wireless/virtual/virt_wifi.c +index fb4d95a027fef..2977b30c6d593 100644 +--- a/drivers/net/wireless/virtual/virt_wifi.c ++++ b/drivers/net/wireless/virtual/virt_wifi.c +@@ -277,7 +277,9 @@ static void virt_wifi_connect_complete(struct work_struct *work) + priv->is_connected = true; + + /* Schedules an event that acquires the rtnl lock. */ +- cfg80211_connect_result(priv->upperdev, requested_bss, NULL, 0, NULL, 0, ++ cfg80211_connect_result(priv->upperdev, ++ priv->is_connected ? fake_router_bssid : NULL, ++ NULL, 0, NULL, 0, + status, GFP_KERNEL); + netif_carrier_on(priv->upperdev); + } +-- +2.51.0 +