--- /dev/null
+From e9acda52fd2ee0cdca332f996da7a95c5fd25294 Mon Sep 17 00:00:00 2001
+From: Nikolay Aleksandrov <razor@blackwall.org>
+Date: Fri, 23 Jan 2026 14:06:59 +0200
+Subject: bonding: fix use-after-free due to enslave fail after slave array update
+
+From: Nikolay Aleksandrov <razor@blackwall.org>
+
+commit e9acda52fd2ee0cdca332f996da7a95c5fd25294 upstream.
+
+Fix a use-after-free which happens due to enslave failure after the new
+slave has been added to the array. Since the new slave can be used for Tx
+immediately, we can use it after it has been freed by the enslave error
+cleanup path which frees the allocated slave memory. The slave array
+update is supposed to happen last, when further enslave failures are not
+expected. Move it after the XDP setup to avoid any problems.
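The ordering problem can be sketched in plain userspace C (a model only; `tx_array`, `enslave_broken` and `enslave_fixed` are hypothetical names, not the driver's): publishing the new slave where the Tx path can see it before the last point of failure leaves a dangling pointer on the error path, while publishing last cannot.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

struct slave { int id; };

/* The Tx path picks its slave from this array at any moment. */
static struct slave *tx_array[1];

/* Broken ordering: publish before a step that can still fail. */
static int enslave_broken(struct slave *s, int later_step_fails)
{
	tx_array[0] = s;		/* immediately visible to Tx */
	if (later_step_fails) {
		free(s);		/* error path frees it: tx_array[0] dangles */
		return -1;
	}
	return 0;
}

/* Fixed ordering: publish only after every step that can fail. */
static int enslave_fixed(struct slave *s, int later_step_fails)
{
	if (later_step_fails) {
		free(s);		/* never published, so no reader can race */
		return -1;
	}
	tx_array[0] = s;
	return 0;
}
```

In the real driver the "publish" step corresponds to bond_update_slave_arr(), which this fix moves below the XDP setup, the last enslave step that can fail.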
+
+It is very easy to reproduce the problem with a simple xdp_pass prog:
+ ip l add bond1 type bond mode balance-xor
+ ip l set bond1 up
+ ip l set dev bond1 xdp object xdp_pass.o sec xdp_pass
+ ip l add dumdum type dummy
+
+Then run in parallel:
+ while :; do ip l set dumdum master bond1 1>/dev/null 2>&1; done;
+ mausezahn bond1 -a own -b rand -A rand -B 1.1.1.1 -c 0 -t tcp "dp=1-1023, flags=syn"
+
+The crash happens almost immediately:
+ [ 605.602850] Oops: general protection fault, probably for non-canonical address 0xe0e6fc2460000137: 0000 [#1] SMP KASAN NOPTI
+ [ 605.602916] KASAN: maybe wild-memory-access in range [0x07380123000009b8-0x07380123000009bf]
+ [ 605.602946] CPU: 0 UID: 0 PID: 2445 Comm: mausezahn Kdump: loaded Tainted: G B 6.19.0-rc6+ #21 PREEMPT(voluntary)
+ [ 605.602979] Tainted: [B]=BAD_PAGE
+ [ 605.602998] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
+ [ 605.603032] RIP: 0010:netdev_core_pick_tx+0xcd/0x210
+ [ 605.603063] Code: 48 89 fa 48 c1 ea 03 80 3c 02 00 0f 85 3e 01 00 00 48 b8 00 00 00 00 00 fc ff df 4c 8b 6b 08 49 8d 7d 30 48 89 fa 48 c1 ea 03 <80> 3c 02 00 0f 85 25 01 00 00 49 8b 45 30 4c 89 e2 48 89 ee 48 89
+ [ 605.603111] RSP: 0018:ffff88817b9af348 EFLAGS: 00010213
+ [ 605.603145] RAX: dffffc0000000000 RBX: ffff88817d28b420 RCX: 0000000000000000
+ [ 605.603172] RDX: 00e7002460000137 RSI: 0000000000000008 RDI: 07380123000009be
+ [ 605.603199] RBP: ffff88817b541a00 R08: 0000000000000001 R09: fffffbfff3ed8c0c
+ [ 605.603226] R10: ffffffff9f6c6067 R11: 0000000000000001 R12: 0000000000000000
+ [ 605.603253] R13: 073801230000098e R14: ffff88817d28b448 R15: ffff88817b541a84
+ [ 605.603286] FS: 00007f6570ef67c0(0000) GS:ffff888221dfa000(0000) knlGS:0000000000000000
+ [ 605.603319] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ [ 605.603343] CR2: 00007f65712fae40 CR3: 000000011371b000 CR4: 0000000000350ef0
+ [ 605.603373] Call Trace:
+ [ 605.603392] <TASK>
+ [ 605.603410] __dev_queue_xmit+0x448/0x32a0
+ [ 605.603434] ? __pfx_vprintk_emit+0x10/0x10
+ [ 605.603461] ? __pfx_vprintk_emit+0x10/0x10
+ [ 605.603484] ? __pfx___dev_queue_xmit+0x10/0x10
+ [ 605.603507] ? bond_start_xmit+0xbfb/0xc20 [bonding]
+ [ 605.603546] ? _printk+0xcb/0x100
+ [ 605.603566] ? __pfx__printk+0x10/0x10
+ [ 605.603589] ? bond_start_xmit+0xbfb/0xc20 [bonding]
+ [ 605.603627] ? add_taint+0x5e/0x70
+ [ 605.603648] ? add_taint+0x2a/0x70
+ [ 605.603670] ? end_report.cold+0x51/0x75
+ [ 605.603693] ? bond_start_xmit+0xbfb/0xc20 [bonding]
+ [ 605.603731] bond_start_xmit+0x623/0xc20 [bonding]
+
+Fixes: 9e2ee5c7e7c3 ("net, bonding: Add XDP support to the bonding driver")
+Signed-off-by: Nikolay Aleksandrov <razor@blackwall.org>
+Reported-by: Chen Zhen <chenzhen126@huawei.com>
+Closes: https://lore.kernel.org/netdev/fae17c21-4940-5605-85b2-1d5e17342358@huawei.com/
+CC: Jussi Maki <joamaki@gmail.com>
+CC: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Daniel Borkmann <daniel@iogearbox.net>
+Link: https://patch.msgid.link/20260123120659.571187-1-razor@blackwall.org
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Tested-by: Yunseong Kim <yunseong.kim@est.tech>
+Signed-off-by: Yunseong Kim <yunseong.kim@est.tech>
+Reviewed-by: David Nyström <david.nystrom@est.tech>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/bonding/bond_main.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -2349,9 +2349,6 @@ skip_mac_set:
+ unblock_netpoll_tx();
+ }
+
+- if (bond_mode_can_use_xmit_hash(bond))
+- bond_update_slave_arr(bond, NULL);
+-
+ if (!slave_dev->netdev_ops->ndo_bpf ||
+ !slave_dev->netdev_ops->ndo_xdp_xmit) {
+ if (bond->xdp_prog) {
+@@ -2385,6 +2382,9 @@ skip_mac_set:
+ bpf_prog_inc(bond->xdp_prog);
+ }
+
++ if (bond_mode_can_use_xmit_hash(bond))
++ bond_update_slave_arr(bond, NULL);
++
+ bond_xdp_set_features(bond_dev);
+
+ slave_info(bond_dev, slave_dev, "Enslaving as %s interface with %s link\n",
--- /dev/null
+From 4470f30e8a0ef1d8759d446fca79a6599a256b1a Mon Sep 17 00:00:00 2001
+From: Martin Michaelis <code@mgjm.de>
+Date: Thu, 23 Apr 2026 15:54:11 -0600
+Subject: io_uring/kbuf: support min length left for incremental buffers
+
+From: Martin Michaelis <code@mgjm.de>
+
+Incrementally consumed buffer rings are generally fully consumed, but
+it's quite possible that the application has a minimum size it needs to
+meet to avoid truncation. Currently that minimum limit is 1 byte, but
+this should be a setting that is the hands of the application. For
+recvmsg multishot, a prime use case for incrementally consumed buffers,
+the application may get spurious -EFAULT returned at the end of an
+incrementally consumed buffer, as less space is available than the
+headers need.
+
+Grab a u32 field in struct io_uring_buf_reg, which the application can
+use to inform the kernel of the minimum size that should be available
+in an incrementally consumed buffer. If less than that is available,
+the current buffer is fully processed and the next one will be picked.
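The semantics can be modeled in userspace C (a sketch; `buffer_kept` is a made-up helper, not the kernel function): the kernel stores `min_left - 1` as `min_left_sub_one`, so the commit check `buf_len > bl->min_left_sub_one` degenerates to the old `buf_len > 0` test when no minimum was registered.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Model of the decision in io_kbuf_inc_commit(): keep reusing the
 * current incremental buffer only if more than min_left_sub_one bytes
 * remain, i.e. at least min_left bytes. A min_left of 0 (not set)
 * stores 0 and reduces to the old "any bytes left" behaviour.
 */
static int buffer_kept(uint32_t bytes_left, uint32_t min_left)
{
	uint32_t min_left_sub_one = min_left ? min_left - 1 : 0;

	return bytes_left > min_left_sub_one;
}
```

Note the uapi change keeps struct io_uring_buf_reg the same size: `__u64 resv[3]` (24 bytes) becomes `__u32 min_left` plus `__u32 resv[5]` (also 24 bytes).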
+
+Cc: stable@vger.kernel.org
+Fixes: ae98dbf43d75 ("io_uring/kbuf: add support for incremental buffer consumption")
+Link: https://github.com/axboe/liburing/issues/1433
+Signed-off-by: Martin Michaelis <code@mgjm.de>
+[axboe: write commit message, change io_buffer_list member name]
+Reviewed-by: Gabriel Krisman Bertazi <krisman@suse.de>
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+(cherry picked from commit 7deba791ad495ce1d7921683f4f7d1190fa210d1)
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/uapi/linux/io_uring.h | 3 ++-
+ io_uring/kbuf.c | 8 +++++++-
+ io_uring/kbuf.h | 7 +++++++
+ 3 files changed, 16 insertions(+), 2 deletions(-)
+
+--- a/include/uapi/linux/io_uring.h
++++ b/include/uapi/linux/io_uring.h
+@@ -758,7 +758,8 @@ struct io_uring_buf_reg {
+ __u32 ring_entries;
+ __u16 bgid;
+ __u16 flags;
+- __u64 resv[3];
++ __u32 min_left;
++ __u32 resv[5];
+ };
+
+ /* argument for IORING_REGISTER_PBUF_STATUS */
+--- a/io_uring/kbuf.c
++++ b/io_uring/kbuf.c
+@@ -47,7 +47,7 @@ static bool io_kbuf_inc_commit(struct io
+ this_len = min_t(u32, len, buf_len);
+ buf_len -= this_len;
+ /* Stop looping for invalid buffer length of 0 */
+- if (buf_len || !this_len) {
++ if (buf_len > bl->min_left_sub_one || !this_len) {
+ WRITE_ONCE(buf->addr, READ_ONCE(buf->addr) + this_len);
+ WRITE_ONCE(buf->len, buf_len);
+ return false;
+@@ -727,6 +727,10 @@ int io_register_pbuf_ring(struct io_ring
+ if (reg.ring_entries >= 65536)
+ return -EINVAL;
+
++ /* minimum left byte count is a property of incremental buffers */
++ if (!(reg.flags & IOU_PBUF_RING_INC) && reg.min_left)
++ return -EINVAL;
++
+ bl = io_buffer_get_list(ctx, reg.bgid);
+ if (bl) {
+ /* if mapped buffer ring OR classic exists, don't allow */
+@@ -747,6 +751,8 @@ int io_register_pbuf_ring(struct io_ring
+ if (!ret) {
+ bl->nr_entries = reg.ring_entries;
+ bl->mask = reg.ring_entries - 1;
++ if (reg.min_left)
++ bl->min_left_sub_one = reg.min_left - 1;
+ if (reg.flags & IOU_PBUF_RING_INC)
+ bl->flags |= IOBL_INC;
+
+--- a/io_uring/kbuf.h
++++ b/io_uring/kbuf.h
+@@ -38,6 +38,13 @@ struct io_buffer_list {
+ __u16 flags;
+
+ atomic_t refs;
++
++ /*
++ * minimum required amount to be left to reuse an incrementally
++ * consumed buffer. If less than this is left at consumption time,
++ * buffer is done and head is incremented to the next buffer.
++ */
++ __u32 min_left_sub_one;
+ };
+
+ struct io_buffer {
--- /dev/null
+From 8bbde987c2b84f80da0853f739f0a920386f8b99 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Mon, 6 Apr 2026 17:31:52 -0700
+Subject: mm/damon/core: disallow time-quota setting zero esz
+
+From: SeongJae Park <sj@kernel.org>
+
+commit 8bbde987c2b84f80da0853f739f0a920386f8b99 upstream.
+
+When the throughput of a DAMOS scheme is very slow, DAMOS time quota can
+make the effective size quota smaller than damon_ctx->min_region_sz. In
+that case, damos_apply_scheme() will skip applying the action, because the
+action is tried at region level, which requires >=min_region_sz size.
+That is, the quota is effectively exceeded for the quota charge window.
+
+Because no action will be applied, the total_charged_sz and
+total_charged_ns are also not updated. damos_set_effective_quota() will
+try to update the effective size quota before starting the next charge
+window. However, because the total_charged_sz and total_charged_ns have
+not been updated, the throughput and effective size quota are also not changed.
+Since effective size quota can only be decreased, other effective size
+quota update factors including DAMOS quota goals and size quota cannot
+make any change, either.
+
+As a result, the scheme is unexpectedly deactivated until the user notices
+and mitigates the situation. The users can mitigate this situation by
+changing the time quota online or re-installing the scheme. While the
+mitigation is somewhat straightforward, finding the situation would be
+challenging, because DAMON does not provide good observability for that.
+Even if such observability is provided, doing the additional monitoring
+and the mitigation is somewhat cumbersome and not aligned to the intention
+of the time quota. The time quota was intended to help reduce the user's
+administration overhead.
+
+Fix the problem by setting the time quota-modified effective size quota
+to be at least min_region_sz, always.
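A sketch of the clamp (not the kernel code; DAMON_MIN_REGION is assumed to be 4096 here purely for illustration): without it, a near-zero measured throughput yields an esz below one minimum-sized region, no region qualifies for the action, the charge stats never move, and esz can never grow back.

```c
#include <assert.h>

#define DAMON_MIN_REGION 4096UL	/* illustrative value, not the real constant */

/* Time-quota part of the effective size quota calculation, with the clamp. */
static unsigned long effective_sz_quota(unsigned long throughput_per_ms,
					unsigned long quota_ms)
{
	unsigned long esz = throughput_per_ms * quota_ms;

	/* the fix: never let the time quota shrink esz below one region */
	return esz < DAMON_MIN_REGION ? DAMON_MIN_REGION : esz;
}
```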
+
+The issue was discovered [1] by sashiko.
+
+Link: https://lore.kernel.org/20260407003153.79589-1-sj@kernel.org
+Link: https://lore.kernel.org/20260405192504.110014-1-sj@kernel.org [1]
+Fixes: 1cd243030059 ("mm/damon/schemes: implement time quota")
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Cc: <stable@vger.kernel.org> # 5.16.x
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/damon/core.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1577,6 +1577,7 @@ static void damos_set_effective_quota(st
+ esz = min(throughput * quota->ms, esz);
+ else
+ esz = throughput * quota->ms;
++ esz = max(DAMON_MIN_REGION, esz);
+ }
+
+ if (quota->sz && quota->sz < esz)
--- /dev/null
+From 4262c53236977de3ceaa3bf2aefdf772c9b874dd Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Thu, 15 Jan 2026 07:20:41 -0800
+Subject: mm/damon/core: implement damon_kdamond_pid()
+
+From: SeongJae Park <sj@kernel.org>
+
+commit 4262c53236977de3ceaa3bf2aefdf772c9b874dd upstream.
+
+Patch series "mm/damon: hide kdamond and kdamond_lock from API callers".
+
+'kdamond' and 'kdamond_lock' fields were initially exposed to DAMON API
+callers for flexible synchronization and use cases. As the DAMON API
+became somewhat complicated compared to the early days, keeping those
+exposed could only encourage the API callers to invent more creative but
+complicated and difficult-to-debug use cases.
+
+Fortunately, DAMON API callers didn't invent that many creative use
+cases. There exist only two uses of 'kdamond' and 'kdamond_lock': finding
+whether the kdamond is actively running, and getting the pid of the
+kdamond. For the first use case, a dedicated API function, namely
+'damon_is_running()' is provided, and all DAMON API callers are using the
+function for the use case. Hence only the second use case is where the
+fields are directly being used by DAMON API callers.
+
+To prevent future invention of complicated and erroneous use cases of the
+fields, hide the fields from the API callers. For that, provide a new
+dedicated DAMON API function for the remaining use case, namely
+damon_kdamond_pid(), migrate DAMON API callers to use the new function,
+and mark the fields as private fields.
+
+
+This patch (of 5):
+
+'kdamond' and 'kdamond_lock' are directly being used by DAMON API callers
+for getting the pid of the corresponding kdamond. To discourage invention
+of creative but complicated and erroneous new usages of the fields that
+require careful synchronization, implement a new API function that can
+simply be used without manual synchronization.
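The accessor pattern can be sketched with a userspace pthread model (all names here are hypothetical; the kernel version takes ctx->kdamond_lock): the worker pointer is only dereferenced under the lock that guards it, and a negative errno is returned when no worker is running.

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

struct worker { int pid; };

struct ctx {
	pthread_mutex_t lock;
	struct worker *worker;	/* NULL when not running */
};

/* Return the worker pid, or -EINVAL if the context is not running. */
static int ctx_worker_pid(struct ctx *c)
{
	int pid = -EINVAL;

	pthread_mutex_lock(&c->lock);
	if (c->worker)
		pid = c->worker->pid;
	pthread_mutex_unlock(&c->lock);
	return pid;
}
```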
+
+Link: https://lkml.kernel.org/r/20260115152047.68415-1-sj@kernel.org
+Link: https://lkml.kernel.org/r/20260115152047.68415-2-sj@kernel.org
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/damon.h | 1 +
+ mm/damon/core.c | 17 +++++++++++++++++
+ 2 files changed, 18 insertions(+)
+
+--- a/include/linux/damon.h
++++ b/include/linux/damon.h
+@@ -778,6 +778,7 @@ static inline unsigned int damon_max_nr_
+
+ int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive);
+ int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
++int damon_kdamond_pid(struct damon_ctx *ctx);
+
+ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
+ unsigned long *start, unsigned long *end);
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1163,6 +1163,23 @@ int damon_stop(struct damon_ctx **ctxs,
+ return err;
+ }
+
++/**
++ * damon_kdamond_pid() - Return pid of a given DAMON context's worker thread.
++ * @ctx: The DAMON context in question.
++ *
++ * Return: pid if @ctx is running, negative error code otherwise.
++ */
++int damon_kdamond_pid(struct damon_ctx *ctx)
++{
++ int pid = -EINVAL;
++
++ mutex_lock(&ctx->kdamond_lock);
++ if (ctx->kdamond)
++ pid = ctx->kdamond->pid;
++ mutex_unlock(&ctx->kdamond_lock);
++ return pid;
++}
++
+ /*
+ * Reset the aggregated monitoring results ('nr_accesses' of each region).
+ */
--- /dev/null
+From b98b7ff6025ae82570d4915e083f0cbd8d48b3cf Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Sun, 19 Apr 2026 09:10:01 -0700
+Subject: mm/damon/lru_sort: detect and use fresh enabled and kdamond_pid values
+
+From: SeongJae Park <sj@kernel.org>
+
+commit b98b7ff6025ae82570d4915e083f0cbd8d48b3cf upstream.
+
+DAMON_LRU_SORT updates the 'enabled' and 'kdamond_pid' parameter values,
+which represent the running status of its kdamond, when the user
+explicitly requests start/stop of the kdamond. The kdamond can, however,
+be stopped without an explicit user request, in the following three
+events.
+
+1. ctx->regions_score_histogram allocation failure at beginning of the
+ execution,
+2. damon_commit_ctx() failure due to invalid user input, and
+3. damon_commit_ctx() failure due to its internal allocation failures.
+
+Hence, if the kdamond is stopped by the above three events, the values of
+the status parameters can be stale. Users could see the stale values and
+be confused. This is already bad, but the real consequence is worse.
+DAMON_LRU_SORT avoids unnecessary damon_start() and damon_stop() calls
+based on the 'enabled' parameter value. And the update of 'enabled'
+parameter value depends on the damon_start() and damon_stop() call
+results. Hence, once the kdamond has stopped by the unintentional events,
+the user cannot restart the kdamond before the system reboot. For
+example, the issue can be reproduced via below steps.
+
+ # cd /sys/module/damon_lru_sort/parameters
+ #
+ # # start DAMON_LRU_SORT
+ # echo Y > enabled
+ # ps -ef | grep kdamond
+ root 806 2 0 17:53 ? 00:00:00 [kdamond.0]
+ root 808 803 0 17:53 pts/4 00:00:00 grep kdamond
+ #
+ # # commit wrong input to stop kdamond without an explicit stop request
+ # echo 3 > addr_unit
+ # echo Y > commit_inputs
+ bash: echo: write error: Invalid argument
+ #
+ # # confirm kdamond is stopped
+ # ps -ef | grep kdamond
+ root 811 803 0 17:53 pts/4 00:00:00 grep kdamond
+ #
+ # # users can now see stale status
+ # cat enabled
+ Y
+ # cat kdamond_pid
+ 806
+ #
+ # # even after fixing the wrong parameter,
+ # # kdamond cannot be restarted.
+ # echo 1 > addr_unit
+ # echo Y > enabled
+ # ps -ef | grep kdamond
+ root 815 803 0 17:54 pts/4 00:00:00 grep kdamond
+
+The problem will only rarely happen in real and common setups for the
+following reasons. The allocation failures are unlikely in such setups
+since those allocations are arguably too small to fail. Also sane users
+in real production environments may not commit wrong input parameters.
+But once it happens, the consequence is quite bad. And the bug is a bug.
+
+The issue stems from the fact that there are multiple events that can
+change the status, and following all the events is challenging.
+Dynamically detect and use the fresh status for the parameters when those
+are requested.
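The stale-cache failure mode can be sketched in C (a model, not the module code): an explicitly maintained flag misses stop events the user never requested, while a status derived from the worker state on every read cannot go stale.

```c
#include <assert.h>
#include <stdbool.h>

static bool worker_running;	/* ground truth, can change asynchronously */
static bool cached_enabled;	/* only updated on explicit start/stop */

static void user_start(void)
{
	worker_running = true;
	cached_enabled = true;
}

/* e.g. a failed damon_commit_ctx() stops the worker behind our back */
static void worker_dies(void)
{
	worker_running = false;	/* nothing updates cached_enabled */
}

static bool enabled_cached(void) { return cached_enabled; }	/* can go stale */
static bool enabled_fresh(void)  { return worker_running; }	/* never stale */
```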
+
+Link: https://lore.kernel.org/20260419161003.79176-3-sj@kernel.org
+Fixes: 40e983cca927 ("mm/damon: introduce DAMON-based LRU-lists Sorting")
+Co-developed-by: Liew Rui Yan <aethernet65535@gmail.com>
+Signed-off-by: Liew Rui Yan <aethernet65535@gmail.com>
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Cc: <stable@vger.kernel.org> # 6.0.x
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/damon.h | 1
+ mm/damon/core.c | 16 +++++++++
+ mm/damon/lru_sort.c | 88 +++++++++++++++++++++++++++++++-------------------
+ 3 files changed, 73 insertions(+), 32 deletions(-)
+
+--- a/include/linux/damon.h
++++ b/include/linux/damon.h
+@@ -778,6 +778,7 @@ static inline unsigned int damon_max_nr_
+
+ int damon_start(struct damon_ctx **ctxs, int nr_ctxs, bool exclusive);
+ int damon_stop(struct damon_ctx **ctxs, int nr_ctxs);
++bool damon_is_running(struct damon_ctx *ctx);
+ int damon_kdamond_pid(struct damon_ctx *ctx);
+
+ int damon_set_region_biggest_system_ram_default(struct damon_target *t,
+--- a/mm/damon/core.c
++++ b/mm/damon/core.c
+@@ -1164,6 +1164,22 @@ int damon_stop(struct damon_ctx **ctxs,
+ }
+
+ /**
++ * damon_is_running() - Returns if a given DAMON context is running.
++ * @ctx: The DAMON context to see if running.
++ *
++ * Return: true if @ctx is running, false otherwise.
++ */
++bool damon_is_running(struct damon_ctx *ctx)
++{
++ bool running;
++
++ mutex_lock(&ctx->kdamond_lock);
++ running = ctx->kdamond != NULL;
++ mutex_unlock(&ctx->kdamond_lock);
++ return running;
++}
++
++/**
+ * damon_kdamond_pid() - Return pid of a given DAMON context's worker thread.
+ * @ctx: The DAMON context in question.
+ *
+--- a/mm/damon/lru_sort.c
++++ b/mm/damon/lru_sort.c
+@@ -111,15 +111,6 @@ module_param(monitor_region_start, ulong
+ static unsigned long monitor_region_end __read_mostly;
+ module_param(monitor_region_end, ulong, 0600);
+
+-/*
+- * PID of the DAMON thread
+- *
+- * If DAMON_LRU_SORT is enabled, this becomes the PID of the worker thread.
+- * Else, -1.
+- */
+-static int kdamond_pid __read_mostly = -1;
+-module_param(kdamond_pid, int, 0400);
+-
+ static struct damos_stat damon_lru_sort_hot_stat;
+ DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_lru_sort_hot_stat,
+ lru_sort_tried_hot_regions, lru_sorted_hot_regions,
+@@ -239,60 +230,93 @@ static int damon_lru_sort_turn(bool on)
+ {
+ int err;
+
+- if (!on) {
+- err = damon_stop(&ctx, 1);
+- if (!err)
+- kdamond_pid = -1;
+- return err;
+- }
++ if (!on)
++ return damon_stop(&ctx, 1);
+
+ err = damon_lru_sort_apply_parameters();
+ if (err)
+ return err;
+
+- err = damon_start(&ctx, 1, true);
+- if (err)
+- return err;
+- kdamond_pid = ctx->kdamond->pid;
+- return 0;
++ return damon_start(&ctx, 1, true);
++}
++
++static bool damon_lru_sort_enabled(void)
++{
++ if (!ctx)
++ return false;
++ return damon_is_running(ctx);
+ }
+
+ static int damon_lru_sort_enabled_store(const char *val,
+ const struct kernel_param *kp)
+ {
+- bool is_enabled = enabled;
+- bool enable;
+ int err;
+
+- err = kstrtobool(val, &enable);
++ err = kstrtobool(val, &enabled);
+ if (err)
+ return err;
+
+- if (is_enabled == enable)
++ if (damon_lru_sort_enabled() == enabled)
+ return 0;
+
+ /* Called before init function. The function will handle this. */
+ if (!ctx)
+- goto set_param_out;
++ return 0;
+
+- err = damon_lru_sort_turn(enable);
+- if (err)
+- return err;
++ return damon_lru_sort_turn(enabled);
++}
+
+-set_param_out:
+- enabled = enable;
+- return err;
++static int damon_lru_sort_enabled_load(char *buffer,
++ const struct kernel_param *kp)
++{
++ return sprintf(buffer, "%c\n", damon_lru_sort_enabled() ? 'Y' : 'N');
+ }
+
+ static const struct kernel_param_ops enabled_param_ops = {
+ .set = damon_lru_sort_enabled_store,
+- .get = param_get_bool,
++ .get = damon_lru_sort_enabled_load,
+ };
+
+ module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
+ MODULE_PARM_DESC(enabled,
+ "Enable or disable DAMON_LRU_SORT (default: disabled)");
+
++static int damon_lru_sort_kdamond_pid_store(const char *val,
++ const struct kernel_param *kp)
++{
++ /*
++ * kdamond_pid is read-only, but kernel command line could write it.
++ * Do nothing here.
++ */
++ return 0;
++}
++
++static int damon_lru_sort_kdamond_pid_load(char *buffer,
++ const struct kernel_param *kp)
++{
++ int kdamond_pid = -1;
++
++ if (ctx) {
++ kdamond_pid = damon_kdamond_pid(ctx);
++ if (kdamond_pid < 0)
++ kdamond_pid = -1;
++ }
++ return sprintf(buffer, "%d\n", kdamond_pid);
++}
++
++static const struct kernel_param_ops kdamond_pid_param_ops = {
++ .set = damon_lru_sort_kdamond_pid_store,
++ .get = damon_lru_sort_kdamond_pid_load,
++};
++
++/*
++ * PID of the DAMON thread
++ *
++ * If DAMON_LRU_SORT is enabled, this becomes the PID of the worker thread.
++ * Else, -1.
++ */
++module_param_cb(kdamond_pid, &kdamond_pid_param_ops, NULL, 0400);
++
+ static int damon_lru_sort_handle_commit_inputs(void)
+ {
+ int err;
--- /dev/null
+From 64a140afa5ed1c6f5ba6d451512cbdbbab1ba339 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Sun, 19 Apr 2026 09:10:00 -0700
+Subject: mm/damon/reclaim: detect and use fresh enabled and kdamond_pid values
+
+From: SeongJae Park <sj@kernel.org>
+
+commit 64a140afa5ed1c6f5ba6d451512cbdbbab1ba339 upstream.
+
+Patch series "mm/damon/modules: detect and use fresh status", v3.
+
+DAMON modules including DAMON_RECLAIM, DAMON_LRU_SORT and DAMON_STAT
+commonly expose the kdamond running status via their parameters. Under
+certain scenarios including wrong user inputs and memory allocation
+failures, those parameter values can be stale. It can confuse users. For
+DAMON_RECLAIM and DAMON_LRU_SORT, it even makes the kdamond unable to be
+restarted before the system reboot.
+
+The problem comes from the fact that there are multiple events for the
+status changes, and it is difficult to follow all the scenarios. Fix
+the issue by detecting and using the status on demand, instead of using a
+cached status that is difficult to keep updated.
+
+Patches 1-3 fix the bugs in DAMON_RECLAIM, DAMON_LRU_SORT and DAMON_STAT
+in the order.
+
+
+This patch (of 3):
+
+DAMON_RECLAIM updates the 'enabled' and 'kdamond_pid' parameter values,
+which represent the running status of its kdamond, when the user
+explicitly requests start/stop of the kdamond. The kdamond can, however,
+be stopped without an explicit user request, in the following three
+events.
+
+1. ctx->regions_score_histogram allocation failure at beginning of the
+ execution,
+2. damon_commit_ctx() failure due to invalid user input, and
+3. damon_commit_ctx() failure due to its internal allocation failures.
+
+Hence, if the kdamond is stopped by the above three events, the values of
+the status parameters can be stale. Users could see the stale values and
+be confused. This is already bad, but the real consequence is worse.
+DAMON_RECLAIM avoids unnecessary damon_start() and damon_stop() calls
+based on the 'enabled' parameter value. And the update of 'enabled'
+parameter value depends on the damon_start() and damon_stop() call
+results. Hence, once the kdamond has stopped by the unintentional events,
+the user cannot restart the kdamond before the system reboot. For
+example, the issue can be reproduced via below steps.
+
+ # cd /sys/module/damon_reclaim/parameters
+ #
+ # # start DAMON_RECLAIM
+ # echo Y > enabled
+ # ps -ef | grep kdamond
+ root 806 2 0 17:53 ? 00:00:00 [kdamond.0]
+ root 808 803 0 17:53 pts/4 00:00:00 grep kdamond
+ #
+ # # commit wrong input to stop kdamond without an explicit stop request
+ # echo 3 > addr_unit
+ # echo Y > commit_inputs
+ bash: echo: write error: Invalid argument
+ #
+ # # confirm kdamond is stopped
+ # ps -ef | grep kdamond
+ root 811 803 0 17:53 pts/4 00:00:00 grep kdamond
+ #
+ # # users can now see stale status
+ # cat enabled
+ Y
+ # cat kdamond_pid
+ 806
+ #
+ # # even after fixing the wrong parameter,
+ # # kdamond cannot be restarted.
+ # echo 1 > addr_unit
+ # echo Y > enabled
+ # ps -ef | grep kdamond
+ root 815 803 0 17:54 pts/4 00:00:00 grep kdamond
+
+The problem will only rarely happen in real and common setups for the
+following reasons. The allocation failures are unlikely in such setups
+since those allocations are arguably too small to fail. Also sane users
+in real production environments may not commit wrong input parameters.
+But once it happens, the consequence is quite bad. And the bug is a bug.
+
+The issue stems from the fact that there are multiple events that can
+change the status, and following all the events is challenging.
+Dynamically detect and use the fresh status for the parameters when those
+are requested.
+
+Link: https://lore.kernel.org/20260419161003.79176-1-sj@kernel.org
+Link: https://lore.kernel.org/20260419161003.79176-2-sj@kernel.org
+Fixes: e035c280f6df ("mm/damon/reclaim: support online inputs update")
+Co-developed-by: Liew Rui Yan <aethernet65535@gmail.com>
+Signed-off-by: Liew Rui Yan <aethernet65535@gmail.com>
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Cc: <stable@vger.kernel.org> # 5.19.x
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/damon/reclaim.c | 88 +++++++++++++++++++++++++++++++++--------------------
+ 1 file changed, 56 insertions(+), 32 deletions(-)
+
+--- a/mm/damon/reclaim.c
++++ b/mm/damon/reclaim.c
+@@ -137,15 +137,6 @@ module_param(monitor_region_end, ulong,
+ static bool skip_anon __read_mostly;
+ module_param(skip_anon, bool, 0600);
+
+-/*
+- * PID of the DAMON thread
+- *
+- * If DAMON_RECLAIM is enabled, this becomes the PID of the worker thread.
+- * Else, -1.
+- */
+-static int kdamond_pid __read_mostly = -1;
+-module_param(kdamond_pid, int, 0400);
+-
+ static struct damos_stat damon_reclaim_stat;
+ DEFINE_DAMON_MODULES_DAMOS_STATS_PARAMS(damon_reclaim_stat,
+ reclaim_tried_regions, reclaimed_regions, quota_exceeds);
+@@ -247,60 +238,93 @@ static int damon_reclaim_turn(bool on)
+ {
+ int err;
+
+- if (!on) {
+- err = damon_stop(&ctx, 1);
+- if (!err)
+- kdamond_pid = -1;
+- return err;
+- }
++ if (!on)
++ return damon_stop(&ctx, 1);
+
+ err = damon_reclaim_apply_parameters();
+ if (err)
+ return err;
+
+- err = damon_start(&ctx, 1, true);
+- if (err)
+- return err;
+- kdamond_pid = ctx->kdamond->pid;
+- return 0;
++ return damon_start(&ctx, 1, true);
++}
++
++static bool damon_reclaim_enabled(void)
++{
++ if (!ctx)
++ return false;
++ return damon_is_running(ctx);
+ }
+
+ static int damon_reclaim_enabled_store(const char *val,
+ const struct kernel_param *kp)
+ {
+- bool is_enabled = enabled;
+- bool enable;
+ int err;
+
+- err = kstrtobool(val, &enable);
++ err = kstrtobool(val, &enabled);
+ if (err)
+ return err;
+
+- if (is_enabled == enable)
++ if (damon_reclaim_enabled() == enabled)
+ return 0;
+
+ /* Called before init function. The function will handle this. */
+ if (!ctx)
+- goto set_param_out;
++ return 0;
+
+- err = damon_reclaim_turn(enable);
+- if (err)
+- return err;
++ return damon_reclaim_turn(enabled);
++}
+
+-set_param_out:
+- enabled = enable;
+- return err;
++static int damon_reclaim_enabled_load(char *buffer,
++ const struct kernel_param *kp)
++{
++ return sprintf(buffer, "%c\n", damon_reclaim_enabled() ? 'Y' : 'N');
+ }
+
+ static const struct kernel_param_ops enabled_param_ops = {
+ .set = damon_reclaim_enabled_store,
+- .get = param_get_bool,
++ .get = damon_reclaim_enabled_load,
+ };
+
+ module_param_cb(enabled, &enabled_param_ops, &enabled, 0600);
+ MODULE_PARM_DESC(enabled,
+ "Enable or disable DAMON_RECLAIM (default: disabled)");
+
++static int damon_reclaim_kdamond_pid_store(const char *val,
++ const struct kernel_param *kp)
++{
++ /*
++ * kdamond_pid is read-only, but kernel command line could write it.
++ * Do nothing here.
++ */
++ return 0;
++}
++
++static int damon_reclaim_kdamond_pid_load(char *buffer,
++ const struct kernel_param *kp)
++{
++ int kdamond_pid = -1;
++
++ if (ctx) {
++ kdamond_pid = damon_kdamond_pid(ctx);
++ if (kdamond_pid < 0)
++ kdamond_pid = -1;
++ }
++ return sprintf(buffer, "%d\n", kdamond_pid);
++}
++
++static const struct kernel_param_ops kdamond_pid_param_ops = {
++ .set = damon_reclaim_kdamond_pid_store,
++ .get = damon_reclaim_kdamond_pid_load,
++};
++
++/*
++ * PID of the DAMON thread
++ *
++ * If DAMON_RECLAIM is enabled, this becomes the PID of the worker thread.
++ * Else, -1.
++ */
++module_param_cb(kdamond_pid, &kdamond_pid_param_ops, NULL, 0400);
++
+ static int damon_reclaim_handle_commit_inputs(void)
+ {
+ int err;
--- /dev/null
+From 2adc8664018c1cc595c7c0c98474a33c7fe32a85 Mon Sep 17 00:00:00 2001
+From: Miguel Ojeda <ojeda@kernel.org>
+Date: Sun, 26 Apr 2026 16:42:01 +0200
+Subject: rust: allow `clippy::collapsible_if` globally
+
+From: Miguel Ojeda <ojeda@kernel.org>
+
+commit 2adc8664018c1cc595c7c0c98474a33c7fe32a85 upstream.
+
+Similar to `clippy::collapsible_match` (globally allowed in the previous
+commit), the `clippy::collapsible_if` lint [1] can make code harder to
+read in certain cases.
+
+Thus just let developers decide on their own.
+
+In addition, remove the existing `expect` we had.
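For illustration (not kernel code), this is the shape the lint warns about; both functions are equivalent, and the nested form is sometimes the more readable one, which is why the choice is now left to the developer:

```rust
// Nested form: what `clippy::collapsible_if` would flag.
fn nested(a: bool, b: bool) -> u32 {
    if a {
        if b {
            return 1;
        }
    }
    0
}

// Collapsed form: what the lint suggests instead.
fn collapsed(a: bool, b: bool) -> u32 {
    if a && b {
        return 1;
    }
    0
}
```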
+
+Cc: stable@vger.kernel.org # Needed in 6.12.y and later (Rust is pinned in older LTSs).
+Suggested-by: Gary Guo <gary@garyguo.net>
+Link: https://lore.kernel.org/rust-for-linux/DGROP5CHU1QZ.1OKJRAUZXE9WC@garyguo.net/
+Link: https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_if [1]
+Reviewed-by: Gary Guo <gary@garyguo.net>
+Link: https://patch.msgid.link/20260426144201.227108-2-ojeda@kernel.org
+Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Makefile | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/Makefile
++++ b/Makefile
+@@ -453,6 +453,7 @@ export rust_common_flags := --edition=20
+ -Wrust_2018_idioms \
+ -Wunreachable_pub \
+ -Wclippy::all \
++ -Aclippy::collapsible_if \
+ -Aclippy::collapsible_match \
+ -Wclippy::ignored_unit_patterns \
+ -Wclippy::mut_mut \
--- /dev/null
+From 838d852da8503372f3a1779bfbd1ccb93153ab4e Mon Sep 17 00:00:00 2001
+From: Miguel Ojeda <ojeda@kernel.org>
+Date: Sun, 26 Apr 2026 16:42:00 +0200
+Subject: rust: allow `clippy::collapsible_match` globally
+
+From: Miguel Ojeda <ojeda@kernel.org>
+
+commit 838d852da8503372f3a1779bfbd1ccb93153ab4e upstream.
+
+The `clippy::collapsible_match` lint [1] can make code harder to read
+in certain cases [2], e.g.
+
+ CLIPPY P rust/libmacros.so - due to command line change
+ warning: this `if` can be collapsed into the outer `match`
+ --> rust/pin-init/internal/src/helpers.rs:91:17
+ |
+ 91 | / if nesting == 1 {
+ 92 | | impl_generics.push(tt.clone());
+ 93 | | impl_generics.push(tt);
+ 94 | | skip_until_comma = false;
+ 95 | | }
+ | |_________________^
+ |
+ = help: for further information visit https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_match
+ = note: `-W clippy::collapsible-match` implied by `-W clippy::all`
+ = help: to override `-W clippy::all` add `#[allow(clippy::collapsible_match)]`
+ help: collapse nested if block
+ |
+ 90 ~ TokenTree::Punct(p) if skip_until_comma && p.as_char() == ','
+ 91 ~ && nesting == 1 => {
+ 92 | impl_generics.push(tt.clone());
+ 93 | impl_generics.push(tt);
+ 94 | skip_until_comma = false;
+ 95 ~ }
+ |
+
+The lint does not have much upside -- when the suggestion may be a good
+one, it would still read fine when nested anyway. And it is the kind of
+lint that may easily bias people to just apply the suggestion instead
+of allowing it.
+
+[ In addition, as Gary points out [3], the suggestion is also wrong [4] and
+ in the process of being fixed [5], possibly for Rust 1.97.0:
+
+ Link: https://lore.kernel.org/rust-for-linux/DI3YV94TH9I3.1SOHW51552497@garyguo.net/ [3]
+ Link: https://github.com/rust-lang/rust-clippy/issues/16875 [4]
+ Link: https://github.com/rust-lang/rust-clippy/pull/16878 [5]
+
+ - Miguel ]
+
+Thus just let developers decide on their own.
+
+Cc: stable@vger.kernel.org # Needed in 6.12.y and later (Rust is pinned in older LTSs).
+Link: https://rust-lang.github.io/rust-clippy/master/index.html#collapsible_match [1]
+Link: https://lore.kernel.org/rust-for-linux/CANiq72nWYJna_hdFxjQCQZK6yJBrr1Mb86iKavivV0U0BgufeA@mail.gmail.com/ [2]
+Reviewed-by: Gary Guo <gary@garyguo.net>
+Link: https://patch.msgid.link/20260426144201.227108-1-ojeda@kernel.org
+Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Makefile | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/Makefile
++++ b/Makefile
+@@ -453,6 +453,7 @@ export rust_common_flags := --edition=20
+ -Wrust_2018_idioms \
+ -Wunreachable_pub \
+ -Wclippy::all \
++ -Aclippy::collapsible_match \
+ -Wclippy::ignored_unit_patterns \
+ -Wclippy::mut_mut \
+ -Wclippy::needless_bitwise_bool \
tracefs-fix-default-permissions-not-being-applied-on-initial-mount.patch
fbcon-avoid-oob-font-access-if-console-rotation-fails.patch
rust-pin-init-fix-incorrect-accessor-reference-lifetime.patch
+mm-damon-core-disallow-time-quota-setting-zero-esz.patch
+mm-damon-core-implement-damon_kdamond_pid.patch
+mm-damon-lru_sort-detect-and-use-fresh-enabled-and-kdamond_pid-values.patch
+mm-damon-reclaim-detect-and-use-fresh-enabled-and-kdamond_pid-values.patch
+rust-allow-clippy-collapsible_match-globally.patch
+rust-allow-clippy-collapsible_if-globally.patch
+bonding-fix-use-after-free-due-to-enslave-fail-after-slave-array-update.patch
+io_uring-kbuf-support-min-length-left-for-incremental-buffers.patch