--- /dev/null
+From prvs=3555e8f33=farbere@amazon.com Wed Sep 24 22:24:47 2025
+From: Eliav Farber <farbere@amazon.com>
+Date: Wed, 24 Sep 2025 20:23:02 +0000
+Subject: minmax: add in_range() macro
+To: <linux@armlinux.org.uk>, <richard@nod.at>, <anton.ivanov@cambridgegreys.com>, <johannes@sipsolutions.net>, <dave.hansen@linux.intel.com>, <luto@kernel.org>, <peterz@infradead.org>, <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <x86@kernel.org>, <hpa@zytor.com>, <tony.luck@intel.com>, <qiuxu.zhuo@intel.com>, <mchehab@kernel.org>, <james.morse@arm.com>, <rric@kernel.org>, <harry.wentland@amd.com>, <sunpeng.li@amd.com>, <Rodrigo.Siqueira@amd.com>, <alexander.deucher@amd.com>, <christian.koenig@amd.com>, <Xinhui.Pan@amd.com>, <airlied@gmail.com>, <daniel@ffwll.ch>, <evan.quan@amd.com>, <james.qian.wang@arm.com>, <liviu.dudau@arm.com>, <mihail.atanassov@arm.com>, <brian.starkey@arm.com>, <maarten.lankhorst@linux.intel.com>, <mripard@kernel.org>, <tzimmermann@suse.de>, <robdclark@gmail.com>, <quic_abhinavk@quicinc.com>, <dmitry.baryshkov@linaro.org>, <sean@poorly.run>, <jdelvare@suse.com>, <linux@roeck-us.net>, <linus.walleij@linaro.org>, <dmitry.torokhov@gmail.com>, <maz@kernel.org>, <wens@csie.org>, <jernej.skrabec@gmail.com>, <samuel@sholland.org>, <agk@redhat.com>, <snitzer@kernel.org>, <dm-devel@redhat.com>, <rajur@chelsio.com>, <davem@davemloft.net>, <edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>, <peppe.cavallaro@st.com>, <alexandre.torgue@foss.st.com>, <joabreu@synopsys.com>, <mcoquelin.stm32@gmail.com>, <krzysztof.kozlowski@linaro.org>, <malattia@linux.it>, <hdegoede@redhat.com>, <markgross@kernel.org>, <artur.paszkiewicz@intel.com>, <jejb@linux.ibm.com>, <martin.petersen@oracle.com>, <sakari.ailus@linux.intel.com>, <gregkh@linuxfoundation.org>, <fei1.li@intel.com>, <clm@fb.com>, <josef@toxicpanda.com>, <dsterba@suse.com>, <jack@suse.com>, <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <dushistov@mail.ru>, <luc.vanoostenryck@gmail.com>, <rostedt@goodmis.org>, <mhiramat@kernel.org>, <pmladek@suse.com>, <senozhatsky@chromium.org>, <andriy.shevchenko@linux.intel.com>, <linux@rasmusvillemoes.dk>, <minchan@kernel.org>, <ngupta@vflare.org>, <akpm@linux-foundation.org>, <yoshfuji@linux-ipv6.org>, <dsahern@kernel.org>, <pablo@netfilter.org>, <kadlec@netfilter.org>, <fw@strlen.de>, <jmaloy@redhat.com>, <ying.xue@windriver.com>, <andrii@kernel.org>, <mykolal@fb.com>, <ast@kernel.org>, <daniel@iogearbox.net>, <martin.lau@linux.dev>, <song@kernel.org>, <yhs@fb.com>, <john.fastabend@gmail.com>, <kpsingh@kernel.org>, <sdf@google.com>, <haoluo@google.com>, <jolsa@kernel.org>, <shuah@kernel.org>, <keescook@chromium.org>, <wad@chromium.org>, <willy@infradead.org>, <farbere@amazon.com>, <sashal@kernel.org>, <ruanjinjie@huawei.com>, <quic_akhilpo@quicinc.com>, <David.Laight@ACULAB.COM>, <herve.codina@bootlin.com>, <linux-arm-kernel@lists.infradead.org>, <linux-kernel@vger.kernel.org>, <linux-um@lists.infradead.org>, <linux-edac@vger.kernel.org>, <amd-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>, <linux-arm-msm@vger.kernel.org>, <freedreno@lists.freedesktop.org>, <linux-hwmon@vger.kernel.org>, <linux-input@vger.kernel.org>, <linux-sunxi@lists.linux.dev>, <linux-media@vger.kernel.org>, <netdev@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>, <platform-driver-x86@vger.kernel.org>, <linux-scsi@vger.kernel.org>, <linux-staging@lists.linux.dev>, <linux-btrfs@vger.kernel.org>, <linux-ext4@vger.kernel.org>, <linux-sparse@vger.kernel.org>, <linux-mm@kvack.org>, <netfilter-devel@vger.kernel.org>, <coreteam@netfilter.org>, <tipc-discussion@lists.sourceforge.net>, <bpf@vger.kernel.org>, <linux-kselftest@vger.kernel.org>, <stable@vger.kernel.org>
+Message-ID: <20250924202320.32333-2-farbere@amazon.com>
+
+From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+
+[ Upstream commit f9bff0e31881d03badf191d3b0005839391f5f2b ]
+
+Patch series "New page table range API", v6.
+
+This patchset changes the API used by the MM to set up page table entries.
+The four APIs are:
+
+ set_ptes(mm, addr, ptep, pte, nr)
+ update_mmu_cache_range(vma, addr, ptep, nr)
+ flush_dcache_folio(folio)
+ flush_icache_pages(vma, page, nr)
+
+flush_dcache_folio() isn't technically new, but no architecture
+implemented it, so I've done that for them. The old APIs remain around
+but are mostly implemented by calling the new interfaces.
+
+The new APIs are based around setting up N page table entries at once.
+The N entries belong to the same PMD, the same folio and the same VMA, so
+ptep++ is a legitimate operation, and locking is taken care of for you.
+Some architectures can do a better job of it than just a loop, but I have
+hesitated to make too deep a change to architectures I don't understand
+well.
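+
+For an architecture that only provides the classic single-entry set_pte(),
+the generic fallback boils down to a simple loop.  A simplified sketch of
+what the common helper looks like (assuming the architecture defines
+PFN_PTE_SHIFT so consecutive ptes differ by one pfn, and omitting the
+lazy-MMU and page-table-check hooks of the real version):
+
+ static inline void set_ptes(struct mm_struct *mm, unsigned long addr,
+                             pte_t *ptep, pte_t pte, unsigned int nr)
+ {
+         for (;;) {
+                 set_pte(ptep, pte);
+                 if (--nr == 0)
+                         break;
+                 ptep++;
+                 /* the nr entries map consecutive pages of one folio */
+                 pte = __pte(pte_val(pte) + (1UL << PFN_PTE_SHIFT));
+         }
+ }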
+
+One thing I have changed in every architecture is that PG_arch_1 is now a
+per-folio bit instead of a per-page bit when used for dcache clean/dirty
+tracking. This was something that would have to happen eventually, and it
+makes sense to do it now rather than iterate over every page involved in a
+cache flush and figure out if it needs to happen.
+
+The point of all this is better performance, and Fengwei Yin has measured
+improvement on x86. I suspect you'll see improvement on your architecture
+too. Try the new will-it-scale test mentioned here:
+https://lore.kernel.org/linux-mm/20230206140639.538867-5-fengwei.yin@intel.com/
+You'll need to run it on an XFS filesystem and have
+CONFIG_TRANSPARENT_HUGEPAGE set.
+
+This patchset is the basis for much of the anonymous large folio work
+being done by Ryan, so it's received quite a lot of testing over the last
+few months.
+
+This patch (of 38):
+
+Determine if a value lies within a range more efficiently (subtraction +
+comparison vs two comparisons and an AND). It also has useful (under some
+circumstances) behaviour if the range exceeds the maximum value of the
+type. Convert all the conflicting definitions of in_range() within the
+kernel; some can use the generic definition while others need their own
+definition.
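+
+The efficiency comes from unsigned wraparound: when val < start, the
+subtraction wraps to a huge value that can never be smaller than len, so a
+single comparison covers both bounds.  A stand-alone illustration of the two
+forms (the function names are only for illustration; they mirror the
+in_range32() helper added below):
+
+ /* one subtraction and one comparison */
+ static bool range_check_fast(u32 val, u32 start, u32 len)
+ {
+         return (val - start) < len;    /* wraps around when val < start */
+ }
+
+ /* two comparisons and an AND; note that start + len itself can overflow */
+ static bool range_check_naive(u32 val, u32 start, u32 len)
+ {
+         return val >= start && val < start + len;
+ }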
+
+Link: https://lkml.kernel.org/r/20230802151406.3735276-1-willy@infradead.org
+Link: https://lkml.kernel.org/r/20230802151406.3735276-2-willy@infradead.org
+Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Eliav Farber <farbere@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm/mm/pageattr.c | 6 +-
+ drivers/gpu/drm/arm/display/include/malidp_utils.h | 2
+ drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c | 24 +++++------
+ drivers/gpu/drm/msm/adreno/a6xx_gmu.c | 6 --
+ drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c | 18 ++++----
+ drivers/virt/acrn/ioreq.c | 4 -
+ fs/btrfs/misc.h | 2
+ fs/ext2/balloc.c | 2
+ fs/ext4/ext4.h | 2
+ fs/ufs/util.h | 6 --
+ include/linux/minmax.h | 27 +++++++++++++
+ lib/logic_pio.c | 3 -
+ net/netfilter/nf_nat_core.c | 6 +-
+ net/tipc/core.h | 2
+ net/tipc/link.c | 10 ++--
+ tools/testing/selftests/bpf/progs/get_branch_snapshot.c | 4 -
+ 16 files changed, 65 insertions(+), 59 deletions(-)
+
+--- a/arch/arm/mm/pageattr.c
++++ b/arch/arm/mm/pageattr.c
+@@ -25,7 +25,7 @@ static int change_page_range(pte_t *ptep
+ return 0;
+ }
+
+-static bool in_range(unsigned long start, unsigned long size,
++static bool range_in_range(unsigned long start, unsigned long size,
+ unsigned long range_start, unsigned long range_end)
+ {
+ return start >= range_start && start < range_end &&
+@@ -63,8 +63,8 @@ static int change_memory_common(unsigned
+ if (!size)
+ return 0;
+
+- if (!in_range(start, size, MODULES_VADDR, MODULES_END) &&
+- !in_range(start, size, VMALLOC_START, VMALLOC_END))
++ if (!range_in_range(start, size, MODULES_VADDR, MODULES_END) &&
++ !range_in_range(start, size, VMALLOC_START, VMALLOC_END))
+ return -EINVAL;
+
+ return __change_memory_common(start, size, set_mask, clear_mask);
+--- a/drivers/gpu/drm/arm/display/include/malidp_utils.h
++++ b/drivers/gpu/drm/arm/display/include/malidp_utils.h
+@@ -35,7 +35,7 @@ static inline void set_range(struct mali
+ rg->end = end;
+ }
+
+-static inline bool in_range(struct malidp_range *rg, u32 v)
++static inline bool malidp_in_range(struct malidp_range *rg, u32 v)
+ {
+ return (v >= rg->start) && (v <= rg->end);
+ }
+--- a/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
++++ b/drivers/gpu/drm/arm/display/komeda/komeda_pipeline_state.c
+@@ -305,12 +305,12 @@ komeda_layer_check_cfg(struct komeda_lay
+ if (komeda_fb_check_src_coords(kfb, src_x, src_y, src_w, src_h))
+ return -EINVAL;
+
+- if (!in_range(&layer->hsize_in, src_w)) {
++ if (!malidp_in_range(&layer->hsize_in, src_w)) {
+ DRM_DEBUG_ATOMIC("invalidate src_w %d.\n", src_w);
+ return -EINVAL;
+ }
+
+- if (!in_range(&layer->vsize_in, src_h)) {
++ if (!malidp_in_range(&layer->vsize_in, src_h)) {
+ DRM_DEBUG_ATOMIC("invalidate src_h %d.\n", src_h);
+ return -EINVAL;
+ }
+@@ -452,14 +452,14 @@ komeda_scaler_check_cfg(struct komeda_sc
+ hsize_out = dflow->out_w;
+ vsize_out = dflow->out_h;
+
+- if (!in_range(&scaler->hsize, hsize_in) ||
+- !in_range(&scaler->hsize, hsize_out)) {
++ if (!malidp_in_range(&scaler->hsize, hsize_in) ||
++ !malidp_in_range(&scaler->hsize, hsize_out)) {
+ DRM_DEBUG_ATOMIC("Invalid horizontal sizes");
+ return -EINVAL;
+ }
+
+- if (!in_range(&scaler->vsize, vsize_in) ||
+- !in_range(&scaler->vsize, vsize_out)) {
++ if (!malidp_in_range(&scaler->vsize, vsize_in) ||
++ !malidp_in_range(&scaler->vsize, vsize_out)) {
+ DRM_DEBUG_ATOMIC("Invalid vertical sizes");
+ return -EINVAL;
+ }
+@@ -574,13 +574,13 @@ komeda_splitter_validate(struct komeda_s
+ return -EINVAL;
+ }
+
+- if (!in_range(&splitter->hsize, dflow->in_w)) {
++ if (!malidp_in_range(&splitter->hsize, dflow->in_w)) {
+ DRM_DEBUG_ATOMIC("split in_w:%d is out of the acceptable range.\n",
+ dflow->in_w);
+ return -EINVAL;
+ }
+
+- if (!in_range(&splitter->vsize, dflow->in_h)) {
++ if (!malidp_in_range(&splitter->vsize, dflow->in_h)) {
+ DRM_DEBUG_ATOMIC("split in_h: %d exceeds the acceptable range.\n",
+ dflow->in_h);
+ return -EINVAL;
+@@ -624,13 +624,13 @@ komeda_merger_validate(struct komeda_mer
+ return -EINVAL;
+ }
+
+- if (!in_range(&merger->hsize_merged, output->out_w)) {
++ if (!malidp_in_range(&merger->hsize_merged, output->out_w)) {
+ DRM_DEBUG_ATOMIC("merged_w: %d is out of the accepted range.\n",
+ output->out_w);
+ return -EINVAL;
+ }
+
+- if (!in_range(&merger->vsize_merged, output->out_h)) {
++ if (!malidp_in_range(&merger->vsize_merged, output->out_h)) {
+ DRM_DEBUG_ATOMIC("merged_h: %d is out of the accepted range.\n",
+ output->out_h);
+ return -EINVAL;
+@@ -866,8 +866,8 @@ void komeda_complete_data_flow_cfg(struc
+ * input/output range.
+ */
+ if (dflow->en_scaling && scaler)
+- dflow->en_split = !in_range(&scaler->hsize, dflow->in_w) ||
+- !in_range(&scaler->hsize, dflow->out_w);
++ dflow->en_split = !malidp_in_range(&scaler->hsize, dflow->in_w) ||
++ !malidp_in_range(&scaler->hsize, dflow->out_w);
+ }
+
+ static bool merger_is_available(struct komeda_pipeline *pipe,
+--- a/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
++++ b/drivers/gpu/drm/msm/adreno/a6xx_gmu.c
+@@ -680,12 +680,6 @@ struct block_header {
+ u32 data[];
+ };
+
+-/* this should be a general kernel helper */
+-static int in_range(u32 addr, u32 start, u32 size)
+-{
+- return addr >= start && addr < start + size;
+-}
+-
+ static bool fw_block_mem(struct a6xx_gmu_bo *bo, const struct block_header *blk)
+ {
+ if (!in_range(blk->addr, bo->iova, bo->size))
+--- a/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb3/cxgb3_main.c
+@@ -2126,7 +2126,7 @@ static const struct ethtool_ops cxgb_eth
+ .set_link_ksettings = set_link_ksettings,
+ };
+
+-static int in_range(int val, int lo, int hi)
++static int cxgb_in_range(int val, int lo, int hi)
+ {
+ return val < 0 || (val <= hi && val >= lo);
+ }
+@@ -2162,19 +2162,19 @@ static int cxgb_siocdevprivate(struct ne
+ return -EINVAL;
+ if (t.qset_idx >= SGE_QSETS)
+ return -EINVAL;
+- if (!in_range(t.intr_lat, 0, M_NEWTIMER) ||
+- !in_range(t.cong_thres, 0, 255) ||
+- !in_range(t.txq_size[0], MIN_TXQ_ENTRIES,
++ if (!cxgb_in_range(t.intr_lat, 0, M_NEWTIMER) ||
++ !cxgb_in_range(t.cong_thres, 0, 255) ||
++ !cxgb_in_range(t.txq_size[0], MIN_TXQ_ENTRIES,
+ MAX_TXQ_ENTRIES) ||
+- !in_range(t.txq_size[1], MIN_TXQ_ENTRIES,
++ !cxgb_in_range(t.txq_size[1], MIN_TXQ_ENTRIES,
+ MAX_TXQ_ENTRIES) ||
+- !in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES,
++ !cxgb_in_range(t.txq_size[2], MIN_CTRL_TXQ_ENTRIES,
+ MAX_CTRL_TXQ_ENTRIES) ||
+- !in_range(t.fl_size[0], MIN_FL_ENTRIES,
++ !cxgb_in_range(t.fl_size[0], MIN_FL_ENTRIES,
+ MAX_RX_BUFFERS) ||
+- !in_range(t.fl_size[1], MIN_FL_ENTRIES,
++ !cxgb_in_range(t.fl_size[1], MIN_FL_ENTRIES,
+ MAX_RX_JUMBO_BUFFERS) ||
+- !in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
++ !cxgb_in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
+ MAX_RSPQ_ENTRIES))
+ return -EINVAL;
+
+--- a/drivers/virt/acrn/ioreq.c
++++ b/drivers/virt/acrn/ioreq.c
+@@ -351,7 +351,7 @@ static bool handle_cf8cfc(struct acrn_vm
+ return is_handled;
+ }
+
+-static bool in_range(struct acrn_ioreq_range *range,
++static bool acrn_in_range(struct acrn_ioreq_range *range,
+ struct acrn_io_request *req)
+ {
+ bool ret = false;
+@@ -389,7 +389,7 @@ static struct acrn_ioreq_client *find_io
+ list_for_each_entry(client, &vm->ioreq_clients, list) {
+ read_lock_bh(&client->range_lock);
+ list_for_each_entry(range, &client->range_list, list) {
+- if (in_range(range, req)) {
++ if (acrn_in_range(range, req)) {
+ found = client;
+ break;
+ }
+--- a/fs/btrfs/misc.h
++++ b/fs/btrfs/misc.h
+@@ -8,8 +8,6 @@
+ #include <linux/math64.h>
+ #include <linux/rbtree.h>
+
+-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
+-
+ static inline void cond_wake_up(struct wait_queue_head *wq)
+ {
+ /*
+--- a/fs/ext2/balloc.c
++++ b/fs/ext2/balloc.c
+@@ -36,8 +36,6 @@
+ */
+
+
+-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
+-
+ struct ext2_group_desc * ext2_get_group_desc(struct super_block * sb,
+ unsigned int block_group,
+ struct buffer_head ** bh)
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -3804,8 +3804,6 @@ static inline void set_bitmap_uptodate(s
+ set_bit(BH_BITMAP_UPTODATE, &(bh)->b_state);
+ }
+
+-#define in_range(b, first, len) ((b) >= (first) && (b) <= (first) + (len) - 1)
+-
+ /* For ioend & aio unwritten conversion wait queues */
+ #define EXT4_WQ_HASH_SZ 37
+ #define ext4_ioend_wq(v) (&ext4__ioend_wq[((unsigned long)(v)) %\
+--- a/fs/ufs/util.h
++++ b/fs/ufs/util.h
+@@ -11,12 +11,6 @@
+ #include <linux/fs.h>
+ #include "swab.h"
+
+-
+-/*
+- * some useful macros
+- */
+-#define in_range(b,first,len) ((b)>=(first)&&(b)<(first)+(len))
+-
+ /*
+ * functions used for retyping
+ */
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -5,6 +5,7 @@
+ #include <linux/build_bug.h>
+ #include <linux/compiler.h>
+ #include <linux/const.h>
++#include <linux/types.h>
+
+ /*
+ * min()/max()/clamp() macros must accomplish three things:
+@@ -192,6 +193,32 @@
+ */
+ #define clamp_val(val, lo, hi) clamp_t(typeof(val), val, lo, hi)
+
++static inline bool in_range64(u64 val, u64 start, u64 len)
++{
++ return (val - start) < len;
++}
++
++static inline bool in_range32(u32 val, u32 start, u32 len)
++{
++ return (val - start) < len;
++}
++
++/**
++ * in_range - Determine if a value lies within a range.
++ * @val: Value to test.
++ * @start: First value in range.
++ * @len: Number of values in range.
++ *
++ * This is more efficient than "if (start <= val && val < (start + len))".
++ * It also gives a different answer if @start + @len overflows the size of
++ * the type by a sufficient amount to encompass @val. Decide for yourself
++ * which behaviour you want, or prove that start + len never overflow.
++ * Do not blindly replace one form with the other.
++ */
++#define in_range(val, start, len) \
++ ((sizeof(start) | sizeof(len) | sizeof(val)) <= sizeof(u32) ? \
++ in_range32(val, start, len) : in_range64(val, start, len))
++
+ /**
+ * swap - swap values of @a and @b
+ * @a: first value
+--- a/lib/logic_pio.c
++++ b/lib/logic_pio.c
+@@ -20,9 +20,6 @@
+ static LIST_HEAD(io_range_list);
+ static DEFINE_MUTEX(io_range_mutex);
+
+-/* Consider a kernel general helper for this */
+-#define in_range(b, first, len) ((b) >= (first) && (b) < (first) + (len))
+-
+ /**
+ * logic_pio_register_range - register logical PIO range for a host
+ * @new_range: pointer to the IO range to be registered.
+--- a/net/netfilter/nf_nat_core.c
++++ b/net/netfilter/nf_nat_core.c
+@@ -242,7 +242,7 @@ static bool l4proto_in_range(const struc
+ /* If we source map this tuple so reply looks like reply_tuple, will
+ * that meet the constraints of range.
+ */
+-static int in_range(const struct nf_conntrack_tuple *tuple,
++static int nf_in_range(const struct nf_conntrack_tuple *tuple,
+ const struct nf_nat_range2 *range)
+ {
+ /* If we are supposed to map IPs, then we must be in the
+@@ -291,7 +291,7 @@ find_appropriate_src(struct net *net,
+ &ct->tuplehash[IP_CT_DIR_REPLY].tuple);
+ result->dst = tuple->dst;
+
+- if (in_range(result, range))
++ if (nf_in_range(result, range))
+ return 1;
+ }
+ }
+@@ -523,7 +523,7 @@ get_unique_tuple(struct nf_conntrack_tup
+ if (maniptype == NF_NAT_MANIP_SRC &&
+ !(range->flags & NF_NAT_RANGE_PROTO_RANDOM_ALL)) {
+ /* try the original tuple first */
+- if (in_range(orig_tuple, range)) {
++ if (nf_in_range(orig_tuple, range)) {
+ if (!nf_nat_used_tuple(orig_tuple, ct)) {
+ *tuple = *orig_tuple;
+ return;
+--- a/net/tipc/core.h
++++ b/net/tipc/core.h
+@@ -197,7 +197,7 @@ static inline int less(u16 left, u16 rig
+ return less_eq(left, right) && (mod(right) != mod(left));
+ }
+
+-static inline int in_range(u16 val, u16 min, u16 max)
++static inline int tipc_in_range(u16 val, u16 min, u16 max)
+ {
+ return !less(val, min) && !more(val, max);
+ }
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -1624,7 +1624,7 @@ next_gap_ack:
+ last_ga->bgack_cnt);
+ }
+ /* Check against the last Gap ACK block */
+- if (in_range(seqno, start, end))
++ if (tipc_in_range(seqno, start, end))
+ continue;
+ /* Update/release the packet peer is acking */
+ bc_has_acked = true;
+@@ -2252,12 +2252,12 @@ static int tipc_link_proto_rcv(struct ti
+ strncpy(if_name, data, TIPC_MAX_IF_NAME);
+
+ /* Update own tolerance if peer indicates a non-zero value */
+- if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
++ if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+ l->tolerance = peers_tol;
+ l->bc_rcvlink->tolerance = peers_tol;
+ }
+ /* Update own priority if peer's priority is higher */
+- if (in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
++ if (tipc_in_range(peers_prio, l->priority + 1, TIPC_MAX_LINK_PRI))
+ l->priority = peers_prio;
+
+ /* If peer is going down we want full re-establish cycle */
+@@ -2300,13 +2300,13 @@ static int tipc_link_proto_rcv(struct ti
+ l->rcv_nxt_state = msg_seqno(hdr) + 1;
+
+ /* Update own tolerance if peer indicates a non-zero value */
+- if (in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
++ if (tipc_in_range(peers_tol, TIPC_MIN_LINK_TOL, TIPC_MAX_LINK_TOL)) {
+ l->tolerance = peers_tol;
+ l->bc_rcvlink->tolerance = peers_tol;
+ }
+ /* Update own prio if peer indicates a different value */
+ if ((peers_prio != l->priority) &&
+- in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
++ tipc_in_range(peers_prio, 1, TIPC_MAX_LINK_PRI)) {
+ l->priority = peers_prio;
+ rc = tipc_link_fsm_evt(l, LINK_FAILURE_EVT);
+ }
+--- a/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
++++ b/tools/testing/selftests/bpf/progs/get_branch_snapshot.c
+@@ -15,7 +15,7 @@ long total_entries = 0;
+ #define ENTRY_CNT 32
+ struct perf_branch_entry entries[ENTRY_CNT] = {};
+
+-static inline bool in_range(__u64 val)
++static inline bool gbs_in_range(__u64 val)
+ {
+ return (val >= address_low) && (val < address_high);
+ }
+@@ -31,7 +31,7 @@ int BPF_PROG(test1, int n, int ret)
+ for (i = 0; i < ENTRY_CNT; i++) {
+ if (i >= total_entries)
+ break;
+- if (in_range(entries[i].from) && in_range(entries[i].to))
++ if (gbs_in_range(entries[i].from) && gbs_in_range(entries[i].to))
+ test1_hits++;
+ else if (!test1_hits)
+ wasted_entries++;
--- /dev/null
+From linux-staging+bounces-34577-greg=kroah.com@lists.linux.dev Wed Sep 24 22:27:13 2025
+From: Eliav Farber <farbere@amazon.com>
+Date: Wed, 24 Sep 2025 20:23:04 +0000
+Subject: minmax: deduplicate __unconst_integer_typeof()
+To: <linux@armlinux.org.uk>, <richard@nod.at>, <anton.ivanov@cambridgegreys.com>, <johannes@sipsolutions.net>, <dave.hansen@linux.intel.com>, <luto@kernel.org>, <peterz@infradead.org>, <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <x86@kernel.org>, <hpa@zytor.com>, <tony.luck@intel.com>, <qiuxu.zhuo@intel.com>, <mchehab@kernel.org>, <james.morse@arm.com>, <rric@kernel.org>, <harry.wentland@amd.com>, <sunpeng.li@amd.com>, <Rodrigo.Siqueira@amd.com>, <alexander.deucher@amd.com>, <christian.koenig@amd.com>, <Xinhui.Pan@amd.com>, <airlied@gmail.com>, <daniel@ffwll.ch>, <evan.quan@amd.com>, <james.qian.wang@arm.com>, <liviu.dudau@arm.com>, <mihail.atanassov@arm.com>, <brian.starkey@arm.com>, <maarten.lankhorst@linux.intel.com>, <mripard@kernel.org>, <tzimmermann@suse.de>, <robdclark@gmail.com>, <quic_abhinavk@quicinc.com>, <dmitry.baryshkov@linaro.org>, <sean@poorly.run>, <jdelvare@suse.com>, <linux@roeck-us.net>, <linus.walleij@linaro.org>, <dmitry.torokhov@gmail.com>, <maz@kernel.org>, <wens@csie.org>, <jernej.skrabec@gmail.com>, <samuel@sholland.org>, <agk@redhat.com>, <snitzer@kernel.org>, <dm-devel@redhat.com>, <rajur@chelsio.com>, <davem@davemloft.net>, <edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>, <peppe.cavallaro@st.com>, <alexandre.torgue@foss.st.com>, <joabreu@synopsys.com>, <mcoquelin.stm32@gmail.com>, <krzysztof.kozlowski@linaro.org>, <malattia@linux.it>, <hdegoede@redhat.com>, <markgross@kernel.org>, <artur.paszkiewicz@intel.com>, <jejb@linux.ibm.com>, <martin.petersen@oracle.com>, <sakari.ailus@linux.intel.com>, <gregkh@linuxfoundation.org>, <fei1.li@intel.com>, <clm@fb.com>, <josef@toxicpanda.com>, <dsterba@suse.com>, <jack@suse.com>, <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <dushistov@mail.ru>, <luc.vanoostenryck@gmail.com>, <rostedt@goodmis.org>, <mhiramat@kernel.org>, <pmladek@suse.com>, <senozhatsky@chromium.org>, <andriy.shevchenko@linux.intel.com>, <linux@rasmusvillemoes.dk>, <minchan@kernel.org>, <ngupta@vflare.org>, <akpm@linux-foundation.org>, <yoshfuji@linux-ipv6.org>, <dsahern@kernel.org>, <pablo@netfilter.org>, <kadlec@netfilter.org>, <fw@strlen.de>, <jmaloy@redhat.com>, <ying.xue@windriver.com>, <andrii@kernel.org>, <mykolal@fb.com>, <ast@kernel.org>, <daniel@iogearbox.net>, <martin.lau@linux.dev>, <song@kernel.org>, <yhs@fb.com>, <john.fastabend@gmail.com>, <kpsingh@kernel.org>, <sdf@google.com>, <haoluo@google.com>, <jolsa@kernel.org>, <shuah@kernel.org>, <keescook@chromium.org>, <wad@chromium.org>, <willy@infradead.org>, <farbere@amazon.com>, <sashal@kernel.org>, <ruanjinjie@huawei.com>, <quic_akhilpo@quicinc.com>, <David.Laight@ACULAB.COM>, <herve.codina@bootlin.com>, <linux-arm-kernel@lists.infradead.org>, <linux-kernel@vger.kernel.org>, <linux-um@lists.infradead.org>, <linux-edac@vger.kernel.org>, <amd-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>, <linux-arm-msm@vger.kernel.org>, <freedreno@lists.freedesktop.org>, <linux-hwmon@vger.kernel.org>, <linux-input@vger.kernel.org>, <linux-sunxi@lists.linux.dev>, <linux-media@vger.kernel.org>, <netdev@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>, <platform-driver-x86@vger.kernel.org>, <linux-scsi@vger.kernel.org>, <linux-staging@lists.linux.dev>, <linux-btrfs@vger.kernel.org>, <linux-ext4@vger.kernel.org>, <linux-sparse@vger.kernel.org>, <linux-mm@kvack.org>, <netfilter-devel@vger.kernel.org>, <coreteam@netfilter.org>, <tipc-discussion@lists.sourceforge.net>, <bpf@vger.kernel.org>, <linux-kselftest@vger.kernel.org>, <stable@vger.kernel.org>
+Message-ID: <20250924202320.32333-4-farbere@amazon.com>
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+[ Upstream commit 5e57418a2031cd5e1863efdf3d7447a16a368172 ]
+
+It appears that compiler_types.h already has an implementation of
+__unconst_integer_typeof(), called __unqual_scalar_typeof(). Use it
+instead of the copy.
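+
+For scalar types the two helpers behave the same: __unqual_scalar_typeof()
+likewise strips the const/volatile qualifiers via _Generic().  A small
+illustration (inside a function; not part of the patch):
+
+ const volatile int x = 4;
+ __unqual_scalar_typeof(x) y = x;       /* y is a plain, assignable 'int' */
+ y = 5;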
+
+Link: https://lkml.kernel.org/r/20230911154913.4176033-1-andriy.shevchenko@linux.intel.com
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Acked-by: Herve Codina <herve.codina@bootlin.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Eliav Farber <farbere@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/minmax.h | 25 ++-----------------------
+ 1 file changed, 2 insertions(+), 23 deletions(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -169,27 +169,6 @@
+ #define max_t(type, x, y) __careful_cmp(max, (type)(x), (type)(y))
+
+ /*
+- * Remove a const qualifier from integer types
+- * _Generic(foo, type-name: association, ..., default: association) performs a
+- * comparison against the foo type (not the qualified type).
+- * Do not use the const keyword in the type-name as it will not match the
+- * unqualified type of foo.
+- */
+-#define __unconst_integer_type_cases(type) \
+- unsigned type: (unsigned type)0, \
+- signed type: (signed type)0
+-
+-#define __unconst_integer_typeof(x) typeof( \
+- _Generic((x), \
+- char: (char)0, \
+- __unconst_integer_type_cases(char), \
+- __unconst_integer_type_cases(short), \
+- __unconst_integer_type_cases(int), \
+- __unconst_integer_type_cases(long), \
+- __unconst_integer_type_cases(long long), \
+- default: (x)))
+-
+-/*
+ * Do not check the array parameter using __must_be_array().
+ * In the following legit use-case where the "array" passed is a simple pointer,
+ * __must_be_array() will return a failure.
+@@ -203,13 +182,13 @@
+ * 'int *buff' and 'int buff[N]' types.
+ *
+ * The array can be an array of const items.
+- * typeof() keeps the const qualifier. Use __unconst_integer_typeof() in order
++ * typeof() keeps the const qualifier. Use __unqual_scalar_typeof() in order
+ * to discard the const qualifier for the __element variable.
+ */
+ #define __minmax_array(op, array, len) ({ \
+ typeof(&(array)[0]) __array = (array); \
+ typeof(len) __len = (len); \
+- __unconst_integer_typeof(__array[0]) __element = __array[--__len]; \
++ __unqual_scalar_typeof(__array[0]) __element = __array[--__len];\
+ while (__len--) \
+ __element = op(__element, __array[__len]); \
+ __element; })
--- /dev/null
+From linux-staging+bounces-34578-greg=kroah.com@lists.linux.dev Wed Sep 24 22:28:06 2025
+From: Eliav Farber <farbere@amazon.com>
+Date: Wed, 24 Sep 2025 20:23:05 +0000
+Subject: minmax: fix indentation of __cmp_once() and __clamp_once()
+To: <linux@armlinux.org.uk>, <richard@nod.at>, <anton.ivanov@cambridgegreys.com>, <johannes@sipsolutions.net>, <dave.hansen@linux.intel.com>, <luto@kernel.org>, <peterz@infradead.org>, <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <x86@kernel.org>, <hpa@zytor.com>, <tony.luck@intel.com>, <qiuxu.zhuo@intel.com>, <mchehab@kernel.org>, <james.morse@arm.com>, <rric@kernel.org>, <harry.wentland@amd.com>, <sunpeng.li@amd.com>, <Rodrigo.Siqueira@amd.com>, <alexander.deucher@amd.com>, <christian.koenig@amd.com>, <Xinhui.Pan@amd.com>, <airlied@gmail.com>, <daniel@ffwll.ch>, <evan.quan@amd.com>, <james.qian.wang@arm.com>, <liviu.dudau@arm.com>, <mihail.atanassov@arm.com>, <brian.starkey@arm.com>, <maarten.lankhorst@linux.intel.com>, <mripard@kernel.org>, <tzimmermann@suse.de>, <robdclark@gmail.com>, <quic_abhinavk@quicinc.com>, <dmitry.baryshkov@linaro.org>, <sean@poorly.run>, <jdelvare@suse.com>, <linux@roeck-us.net>, <linus.walleij@linaro.org>, <dmitry.torokhov@gmail.com>, <maz@kernel.org>, <wens@csie.org>, <jernej.skrabec@gmail.com>, <samuel@sholland.org>, <agk@redhat.com>, <snitzer@kernel.org>, <dm-devel@redhat.com>, <rajur@chelsio.com>, <davem@davemloft.net>, <edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>, <peppe.cavallaro@st.com>, <alexandre.torgue@foss.st.com>, <joabreu@synopsys.com>, <mcoquelin.stm32@gmail.com>, <krzysztof.kozlowski@linaro.org>, <malattia@linux.it>, <hdegoede@redhat.com>, <markgross@kernel.org>, <artur.paszkiewicz@intel.com>, <jejb@linux.ibm.com>, <martin.petersen@oracle.com>, <sakari.ailus@linux.intel.com>, <gregkh@linuxfoundation.org>, <fei1.li@intel.com>, <clm@fb.com>, <josef@toxicpanda.com>, <dsterba@suse.com>, <jack@suse.com>, <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <dushistov@mail.ru>, <luc.vanoostenryck@gmail.com>, <rostedt@goodmis.org>, <mhiramat@kernel.org>, <pmladek@suse.com>, <senozhatsky@chromium.org>, <andriy.shevchenko@linux.intel.com>, <linux@rasmusvillemoes.dk>, <minchan@kernel.org>, <ngupta@vflare.org>, <akpm@linux-foundation.org>, <yoshfuji@linux-ipv6.org>, <dsahern@kernel.org>, <pablo@netfilter.org>, <kadlec@netfilter.org>, <fw@strlen.de>, <jmaloy@redhat.com>, <ying.xue@windriver.com>, <andrii@kernel.org>, <mykolal@fb.com>, <ast@kernel.org>, <daniel@iogearbox.net>, <martin.lau@linux.dev>, <song@kernel.org>, <yhs@fb.com>, <john.fastabend@gmail.com>, <kpsingh@kernel.org>, <sdf@google.com>, <haoluo@google.com>, <jolsa@kernel.org>, <shuah@kernel.org>, <keescook@chromium.org>, <wad@chromium.org>, <willy@infradead.org>, <farbere@amazon.com>, <sashal@kernel.org>, <ruanjinjie@huawei.com>, <quic_akhilpo@quicinc.com>, <David.Laight@ACULAB.COM>, <herve.codina@bootlin.com>, <linux-arm-kernel@lists.infradead.org>, <linux-kernel@vger.kernel.org>, <linux-um@lists.infradead.org>, <linux-edac@vger.kernel.org>, <amd-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>, <linux-arm-msm@vger.kernel.org>, <freedreno@lists.freedesktop.org>, <linux-hwmon@vger.kernel.org>, <linux-input@vger.kernel.org>, <linux-sunxi@lists.linux.dev>, <linux-media@vger.kernel.org>, <netdev@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>, <platform-driver-x86@vger.kernel.org>, <linux-scsi@vger.kernel.org>, <linux-staging@lists.linux.dev>, <linux-btrfs@vger.kernel.org>, <linux-ext4@vger.kernel.org>, <linux-sparse@vger.kernel.org>, <linux-mm@kvack.org>, <netfilter-devel@vger.kernel.org>, <coreteam@netfilter.org>, <tipc-discussion@lists.sourceforge.net>, <bpf@vger.kernel.org>, <linux-kselftest@vger.kernel.org>, <stable@vger.kernel.org>
+Cc: Christoph Hellwig <hch@infradead.org>, "Jason A. Donenfeld" <Jason@zx2c4.com>, Linus Torvalds <torvalds@linux-foundation.org>
+Message-ID: <20250924202320.32333-5-farbere@amazon.com>
+
+From: David Laight <David.Laight@ACULAB.COM>
+
+[ Upstream commit f4b84b2ff851f01d0fac619eadef47eb41648534 ]
+
+Remove the extra indentation and align continuation markers.
+
+Link: https://lkml.kernel.org/r/bed41317a05c498ea0209eafbcab45a5@AcuMS.aculab.com
+Signed-off-by: David Laight <david.laight@aculab.com>
+Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Cc: Christoph Hellwig <hch@infradead.org>
+Cc: Jason A. Donenfeld <Jason@zx2c4.com>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Eliav Farber <farbere@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/minmax.h | 30 +++++++++++++++---------------
+ 1 file changed, 15 insertions(+), 15 deletions(-)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -46,11 +46,11 @@
+ #define __cmp(op, x, y) ((x) __cmp_op_##op (y) ? (x) : (y))
+
+ #define __cmp_once(op, x, y, unique_x, unique_y) ({ \
+- typeof(x) unique_x = (x); \
+- typeof(y) unique_y = (y); \
+- static_assert(__types_ok(x, y), \
+- #op "(" #x ", " #y ") signedness error, fix types or consider u" #op "() before " #op "_t()"); \
+- __cmp(op, unique_x, unique_y); })
++ typeof(x) unique_x = (x); \
++ typeof(y) unique_y = (y); \
++ static_assert(__types_ok(x, y), \
++ #op "(" #x ", " #y ") signedness error, fix types or consider u" #op "() before " #op "_t()"); \
++ __cmp(op, unique_x, unique_y); })
+
+ #define __careful_cmp(op, x, y) \
+ __builtin_choose_expr(__is_constexpr((x) - (y)), \
+@@ -60,16 +60,16 @@
+ #define __clamp(val, lo, hi) \
+ ((val) >= (hi) ? (hi) : ((val) <= (lo) ? (lo) : (val)))
+
+-#define __clamp_once(val, lo, hi, unique_val, unique_lo, unique_hi) ({ \
+- typeof(val) unique_val = (val); \
+- typeof(lo) unique_lo = (lo); \
+- typeof(hi) unique_hi = (hi); \
+- static_assert(__builtin_choose_expr(__is_constexpr((lo) > (hi)), \
+- (lo) <= (hi), true), \
+- "clamp() low limit " #lo " greater than high limit " #hi); \
+- static_assert(__types_ok(val, lo), "clamp() 'lo' signedness error"); \
+- static_assert(__types_ok(val, hi), "clamp() 'hi' signedness error"); \
+- __clamp(unique_val, unique_lo, unique_hi); })
++#define __clamp_once(val, lo, hi, unique_val, unique_lo, unique_hi) ({ \
++ typeof(val) unique_val = (val); \
++ typeof(lo) unique_lo = (lo); \
++ typeof(hi) unique_hi = (hi); \
++ static_assert(__builtin_choose_expr(__is_constexpr((lo) > (hi)), \
++ (lo) <= (hi), true), \
++ "clamp() low limit " #lo " greater than high limit " #hi); \
++ static_assert(__types_ok(val, lo), "clamp() 'lo' signedness error"); \
++ static_assert(__types_ok(val, hi), "clamp() 'hi' signedness error"); \
++ __clamp(unique_val, unique_lo, unique_hi); })
+
+ #define __careful_clamp(val, lo, hi) ({ \
+ __builtin_choose_expr(__is_constexpr((val) - (lo) + (hi)), \
--- /dev/null
+From prvs=3555e8f33=farbere@amazon.com Wed Sep 24 22:25:22 2025
+From: Eliav Farber <farbere@amazon.com>
+Date: Wed, 24 Sep 2025 20:23:03 +0000
+Subject: minmax: Introduce {min,max}_array()
+To: <linux@armlinux.org.uk>, <richard@nod.at>, <anton.ivanov@cambridgegreys.com>, <johannes@sipsolutions.net>, <dave.hansen@linux.intel.com>, <luto@kernel.org>, <peterz@infradead.org>, <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <x86@kernel.org>, <hpa@zytor.com>, <tony.luck@intel.com>, <qiuxu.zhuo@intel.com>, <mchehab@kernel.org>, <james.morse@arm.com>, <rric@kernel.org>, <harry.wentland@amd.com>, <sunpeng.li@amd.com>, <Rodrigo.Siqueira@amd.com>, <alexander.deucher@amd.com>, <christian.koenig@amd.com>, <Xinhui.Pan@amd.com>, <airlied@gmail.com>, <daniel@ffwll.ch>, <evan.quan@amd.com>, <james.qian.wang@arm.com>, <liviu.dudau@arm.com>, <mihail.atanassov@arm.com>, <brian.starkey@arm.com>, <maarten.lankhorst@linux.intel.com>, <mripard@kernel.org>, <tzimmermann@suse.de>, <robdclark@gmail.com>, <quic_abhinavk@quicinc.com>, <dmitry.baryshkov@linaro.org>, <sean@poorly.run>, <jdelvare@suse.com>, <linux@roeck-us.net>, <linus.walleij@linaro.org>, <dmitry.torokhov@gmail.com>, <maz@kernel.org>, <wens@csie.org>, <jernej.skrabec@gmail.com>, <samuel@sholland.org>, <agk@redhat.com>, <snitzer@kernel.org>, <dm-devel@redhat.com>, <rajur@chelsio.com>, <davem@davemloft.net>, <edumazet@google.com>, <kuba@kernel.org>, <pabeni@redhat.com>, <peppe.cavallaro@st.com>, <alexandre.torgue@foss.st.com>, <joabreu@synopsys.com>, <mcoquelin.stm32@gmail.com>, <krzysztof.kozlowski@linaro.org>, <malattia@linux.it>, <hdegoede@redhat.com>, <markgross@kernel.org>, <artur.paszkiewicz@intel.com>, <jejb@linux.ibm.com>, <martin.petersen@oracle.com>, <sakari.ailus@linux.intel.com>, <gregkh@linuxfoundation.org>, <fei1.li@intel.com>, <clm@fb.com>, <josef@toxicpanda.com>, <dsterba@suse.com>, <jack@suse.com>, <tytso@mit.edu>, <adilger.kernel@dilger.ca>, <dushistov@mail.ru>, <luc.vanoostenryck@gmail.com>, <rostedt@goodmis.org>, <mhiramat@kernel.org>, <pmladek@suse.com>, <senozhatsky@chromium.org>, <andriy.shevchenko@linux.intel.com>, <linux@rasmusvillemoes.dk>, <minchan@kernel.org>, <ngupta@vflare.org>, <akpm@linux-foundation.org>, <yoshfuji@linux-ipv6.org>, <dsahern@kernel.org>, <pablo@netfilter.org>, <kadlec@netfilter.org>, <fw@strlen.de>, <jmaloy@redhat.com>, <ying.xue@windriver.com>, <andrii@kernel.org>, <mykolal@fb.com>, <ast@kernel.org>, <daniel@iogearbox.net>, <martin.lau@linux.dev>, <song@kernel.org>, <yhs@fb.com>, <john.fastabend@gmail.com>, <kpsingh@kernel.org>, <sdf@google.com>, <haoluo@google.com>, <jolsa@kernel.org>, <shuah@kernel.org>, <keescook@chromium.org>, <wad@chromium.org>, <willy@infradead.org>, <farbere@amazon.com>, <sashal@kernel.org>, <ruanjinjie@huawei.com>, <quic_akhilpo@quicinc.com>, <David.Laight@ACULAB.COM>, <herve.codina@bootlin.com>, <linux-arm-kernel@lists.infradead.org>, <linux-kernel@vger.kernel.org>, <linux-um@lists.infradead.org>, <linux-edac@vger.kernel.org>, <amd-gfx@lists.freedesktop.org>, <dri-devel@lists.freedesktop.org>, <linux-arm-msm@vger.kernel.org>, <freedreno@lists.freedesktop.org>, <linux-hwmon@vger.kernel.org>, <linux-input@vger.kernel.org>, <linux-sunxi@lists.linux.dev>, <linux-media@vger.kernel.org>, <netdev@vger.kernel.org>, <linux-stm32@st-md-mailman.stormreply.com>, <platform-driver-x86@vger.kernel.org>, <linux-scsi@vger.kernel.org>, <linux-staging@lists.linux.dev>, <linux-btrfs@vger.kernel.org>, <linux-ext4@vger.kernel.org>, <linux-sparse@vger.kernel.org>, <linux-mm@kvack.org>, <netfilter-devel@vger.kernel.org>, <coreteam@netfilter.org>, <tipc-discussion@lists.sourceforge.net>, <bpf@vger.kernel.org>, <linux-kselftest@vger.kernel.org>, <stable@vger.kernel.org>
+Cc: Andy Shevchenko <andy.shevchenko@gmail.com>, Christophe Leroy <christophe.leroy@csgroup.eu>
+Message-ID: <20250924202320.32333-3-farbere@amazon.com>
+
+From: Herve Codina <herve.codina@bootlin.com>
+
+[ Upstream commit c952c748c7a983a8bda9112984e6f2c1f6e441a5 ]
+
+Introduce min_array() (resp. max_array()) in order to get the
+minimum (resp. maximum) of the values present in an array.
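+
+Typical use inside a function (the macros accept const arrays as well as
+plain pointers; len must be non-zero; the values are illustrative):
+
+ static const int temps[] = { 23, 19, 31, 27 };
+ int coldest = min_array(temps, ARRAY_SIZE(temps));    /* 19 */
+ int hottest = max_array(temps, ARRAY_SIZE(temps));    /* 31 */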
+
+Signed-off-by: Herve Codina <herve.codina@bootlin.com>
+Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
+Reviewed-by: Christophe Leroy <christophe.leroy@csgroup.eu>
+Link: https://lore.kernel.org/r/20230623085830.749991-8-herve.codina@bootlin.com
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Eliav Farber <farbere@amazon.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/minmax.h | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
+ 1 file changed, 64 insertions(+)
+
+--- a/include/linux/minmax.h
++++ b/include/linux/minmax.h
+@@ -168,6 +168,70 @@
+ */
+ #define max_t(type, x, y) __careful_cmp(max, (type)(x), (type)(y))
+
++/*
++ * Remove a const qualifier from integer types
++ * _Generic(foo, type-name: association, ..., default: association) performs a
++ * comparison against the foo type (not the qualified type).
++ * Do not use the const keyword in the type-name as it will not match the
++ * unqualified type of foo.
++ */
++#define __unconst_integer_type_cases(type) \
++ unsigned type: (unsigned type)0, \
++ signed type: (signed type)0
++
++#define __unconst_integer_typeof(x) typeof( \
++ _Generic((x), \
++ char: (char)0, \
++ __unconst_integer_type_cases(char), \
++ __unconst_integer_type_cases(short), \
++ __unconst_integer_type_cases(int), \
++ __unconst_integer_type_cases(long), \
++ __unconst_integer_type_cases(long long), \
++ default: (x)))
++
++/*
++ * Do not check the array parameter using __must_be_array().
++ * In the following legit use-case where the "array" passed is a simple pointer,
++ * __must_be_array() will return a failure.
++ * --- 8< ---
++ * int *buff
++ * ...
++ * min = min_array(buff, nb_items);
++ * --- 8< ---
++ *
++ * The first typeof(&(array)[0]) is needed in order to support arrays of both
++ * 'int *buff' and 'int buff[N]' types.
++ *
++ * The array can be an array of const items.
++ * typeof() keeps the const qualifier. Use __unconst_integer_typeof() in order
++ * to discard the const qualifier for the __element variable.
++ */
++#define __minmax_array(op, array, len) ({ \
++ typeof(&(array)[0]) __array = (array); \
++ typeof(len) __len = (len); \
++ __unconst_integer_typeof(__array[0]) __element = __array[--__len]; \
++ while (__len--) \
++ __element = op(__element, __array[__len]); \
++ __element; })
++
++/**
++ * min_array - return minimum of values present in an array
++ * @array: array
++ * @len: array length
++ *
++ * Note that @len must not be zero (empty array).
++ */
++#define min_array(array, len) __minmax_array(min, array, len)
++
++/**
++ * max_array - return maximum of values present in an array
++ * @array: array
++ * @len: array length
++ *
++ * Note that @len must not be zero (empty array).
++ */
++#define max_array(array, len) __minmax_array(max, array, len)
++
+ /**
+ * clamp_t - return a value clamped to a given range using a given type
+ * @type: the type of variable to use
--- /dev/null
+From 41cddf83d8b00f29fd105e7a0777366edc69a5cf Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Mon, 10 Feb 2025 17:13:17 +0100
+Subject: mm/migrate_device: don't add folio to be freed to LRU in migrate_device_finalize()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: David Hildenbrand <david@redhat.com>
+
+commit 41cddf83d8b00f29fd105e7a0777366edc69a5cf upstream.
+
+If migration succeeded, we called
+folio_migrate_flags()->mem_cgroup_migrate() to migrate the memcg from the
+old to the new folio. This will set memcg_data of the old folio to 0.
+
+Similarly, if migration failed, memcg_data of the dst folio is left unset.
+
+If we call folio_putback_lru() on such folios (memcg_data == 0), we will
+add the folio to be freed to the LRU, making memcg code unhappy. Running
+the hmm selftests:
+
+ # ./hmm-tests
+ ...
+ # RUN hmm.hmm_device_private.migrate ...
+ [ 102.078007][T14893] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0x7ff27d200 pfn:0x13cc00
+ [ 102.079974][T14893] anon flags: 0x17ff00000020018(uptodate|dirty|swapbacked|node=0|zone=2|lastcpupid=0x7ff)
+ [ 102.082037][T14893] raw: 017ff00000020018 dead000000000100 dead000000000122 ffff8881353896c9
+ [ 102.083687][T14893] raw: 00000007ff27d200 0000000000000000 00000001ffffffff 0000000000000000
+ [ 102.085331][T14893] page dumped because: VM_WARN_ON_ONCE_FOLIO(!memcg && !mem_cgroup_disabled())
+ [ 102.087230][T14893] ------------[ cut here ]------------
+ [ 102.088279][T14893] WARNING: CPU: 0 PID: 14893 at ./include/linux/memcontrol.h:726 folio_lruvec_lock_irqsave+0x10e/0x170
+ [ 102.090478][T14893] Modules linked in:
+ [ 102.091244][T14893] CPU: 0 UID: 0 PID: 14893 Comm: hmm-tests Not tainted 6.13.0-09623-g6c216bc522fd #151
+ [ 102.093089][T14893] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-2.fc40 04/01/2014
+ [ 102.094848][T14893] RIP: 0010:folio_lruvec_lock_irqsave+0x10e/0x170
+ [ 102.096104][T14893] Code: ...
+ [ 102.099908][T14893] RSP: 0018:ffffc900236c37b0 EFLAGS: 00010293
+ [ 102.101152][T14893] RAX: 0000000000000000 RBX: ffffea0004f30000 RCX: ffffffff8183f426
+ [ 102.102684][T14893] RDX: ffff8881063cb880 RSI: ffffffff81b8117f RDI: ffff8881063cb880
+ [ 102.104227][T14893] RBP: 0000000000000000 R08: 0000000000000005 R09: 0000000000000000
+ [ 102.105757][T14893] R10: 0000000000000001 R11: 0000000000000002 R12: ffffc900236c37d8
+ [ 102.107296][T14893] R13: ffff888277a2bcb0 R14: 000000000000001f R15: 0000000000000000
+ [ 102.108830][T14893] FS: 00007ff27dbdd740(0000) GS:ffff888277a00000(0000) knlGS:0000000000000000
+ [ 102.110643][T14893] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ [ 102.111924][T14893] CR2: 00007ff27d400000 CR3: 000000010866e000 CR4: 0000000000750ef0
+ [ 102.113478][T14893] PKRU: 55555554
+ [ 102.114172][T14893] Call Trace:
+ [ 102.114805][T14893] <TASK>
+ [ 102.115397][T14893] ? folio_lruvec_lock_irqsave+0x10e/0x170
+ [ 102.116547][T14893] ? __warn.cold+0x110/0x210
+ [ 102.117461][T14893] ? folio_lruvec_lock_irqsave+0x10e/0x170
+ [ 102.118667][T14893] ? report_bug+0x1b9/0x320
+ [ 102.119571][T14893] ? handle_bug+0x54/0x90
+ [ 102.120494][T14893] ? exc_invalid_op+0x17/0x50
+ [ 102.121433][T14893] ? asm_exc_invalid_op+0x1a/0x20
+ [ 102.122435][T14893] ? __wake_up_klogd.part.0+0x76/0xd0
+ [ 102.123506][T14893] ? dump_page+0x4f/0x60
+ [ 102.124352][T14893] ? folio_lruvec_lock_irqsave+0x10e/0x170
+ [ 102.125500][T14893] folio_batch_move_lru+0xd4/0x200
+ [ 102.126577][T14893] ? __pfx_lru_add+0x10/0x10
+ [ 102.127505][T14893] __folio_batch_add_and_move+0x391/0x720
+ [ 102.128633][T14893] ? __pfx_lru_add+0x10/0x10
+ [ 102.129550][T14893] folio_putback_lru+0x16/0x80
+ [ 102.130564][T14893] migrate_device_finalize+0x9b/0x530
+ [ 102.131640][T14893] dmirror_migrate_to_device.constprop.0+0x7c5/0xad0
+ [ 102.133047][T14893] dmirror_fops_unlocked_ioctl+0x89b/0xc80
+
+Likely, nothing else goes wrong: putting the last folio reference will
+remove the folio from the LRU again. So besides memcg complaining, adding
+the folio to be freed to the LRU is just an unnecessary step.
+
+The new flow resembles what we have in migrate_folio_move(): add the dst
+to the lru, remove migration ptes, unlock and unref dst.
+
+Link: https://lkml.kernel.org/r/20250210161317.717936-1-david@redhat.com
+Fixes: 8763cb45ab96 ("mm/migrate: new memory migration helper for use with device memory")
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Cc: Jérôme Glisse <jglisse@redhat.com>
+Cc: John Hubbard <jhubbard@nvidia.com>
+Cc: Alistair Popple <apopple@nvidia.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/migrate_device.c | 13 ++++---------
+ 1 file changed, 4 insertions(+), 9 deletions(-)
+
+--- a/mm/migrate_device.c
++++ b/mm/migrate_device.c
+@@ -854,20 +854,15 @@ void migrate_device_finalize(unsigned lo
+ dst = src;
+ }
+
++ if (!folio_is_zone_device(dst))
++ folio_add_lru(dst);
+ remove_migration_ptes(src, dst, false);
+ folio_unlock(src);
+-
+- if (folio_is_zone_device(src))
+- folio_put(src);
+- else
+- folio_putback_lru(src);
++ folio_put(src);
+
+ if (dst != src) {
+ folio_unlock(dst);
+- if (folio_is_zone_device(dst))
+- folio_put(dst);
+- else
+- folio_putback_lru(dst);
++ folio_put(dst);
+ }
+ }
+ }
--- /dev/null
+From 58bf8c2bf47550bc94fea9cafd2bc7304d97102c Mon Sep 17 00:00:00 2001
+From: Kefeng Wang <wangkefeng.wang@huawei.com>
+Date: Mon, 26 Aug 2024 14:58:12 +0800
+Subject: mm: migrate_device: use more folio in migrate_device_finalize()
+
+From: Kefeng Wang <wangkefeng.wang@huawei.com>
+
+commit 58bf8c2bf47550bc94fea9cafd2bc7304d97102c upstream.
+
+Saves a couple of calls to compound_head() and remove last two callers of
+putback_lru_page().
+
+Link: https://lkml.kernel.org/r/20240826065814.1336616-5-wangkefeng.wang@huawei.com
+Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
+Reviewed-by: Vishal Moola (Oracle) <vishal.moola@gmail.com>
+Reviewed-by: Alistair Popple <apopple@nvidia.com>
+Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
+Cc: David Hildenbrand <david@redhat.com>
+Cc: Jonathan Corbet <corbet@lwn.net>
+Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
+Cc: Zi Yan <ziy@nvidia.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/migrate_device.c | 41 ++++++++++++++++++++++-------------------
+ 1 file changed, 22 insertions(+), 19 deletions(-)
+
+--- a/mm/migrate_device.c
++++ b/mm/migrate_device.c
+@@ -829,42 +829,45 @@ void migrate_device_finalize(unsigned lo
+ unsigned long i;
+
+ for (i = 0; i < npages; i++) {
+- struct folio *dst, *src;
++ struct folio *dst = NULL, *src = NULL;
+ struct page *newpage = migrate_pfn_to_page(dst_pfns[i]);
+ struct page *page = migrate_pfn_to_page(src_pfns[i]);
+
++ if (newpage)
++ dst = page_folio(newpage);
++
+ if (!page) {
+- if (newpage) {
+- unlock_page(newpage);
+- put_page(newpage);
++ if (dst) {
++ folio_unlock(dst);
++ folio_put(dst);
+ }
+ continue;
+ }
+
+- if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE) || !newpage) {
+- if (newpage) {
+- unlock_page(newpage);
+- put_page(newpage);
++ src = page_folio(page);
++
++ if (!(src_pfns[i] & MIGRATE_PFN_MIGRATE) || !dst) {
++ if (dst) {
++ folio_unlock(dst);
++ folio_put(dst);
+ }
+- newpage = page;
++ dst = src;
+ }
+
+- src = page_folio(page);
+- dst = page_folio(newpage);
+ remove_migration_ptes(src, dst, false);
+ folio_unlock(src);
+
+- if (is_zone_device_page(page))
+- put_page(page);
++ if (folio_is_zone_device(src))
++ folio_put(src);
+ else
+- putback_lru_page(page);
++ folio_putback_lru(src);
+
+- if (newpage != page) {
+- unlock_page(newpage);
+- if (is_zone_device_page(newpage))
+- put_page(newpage);
++ if (dst != src) {
++ folio_unlock(dst);
++ if (folio_is_zone_device(dst))
++ folio_put(dst);
+ else
+- putback_lru_page(newpage);
++ folio_putback_lru(dst);
+ }
+ }
+ }
--- /dev/null
+From nathan@kernel.org Mon Sep 29 15:28:48 2025
+From: Nathan Chancellor <nathan@kernel.org>
+Date: Mon, 22 Sep 2025 14:15:50 -0700
+Subject: s390/cpum_cf: Fix uninitialized warning after backport of ce971233242b
+To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Cc: stable@vger.kernel.org, linux-s390@vger.kernel.org, Nathan Chancellor <nathan@kernel.org>
+Message-ID: <20250922-6-6-s390-cpum_cf-fix-uninit-err-v1-1-5183aa9af417@kernel.org>
+
+From: Nathan Chancellor <nathan@kernel.org>
+
+Upstream commit ce971233242b ("s390/cpum_cf: Deny all sampling events by
+counter PMU"), backported to 6.6 as commit d660c8d8142e ("s390/cpum_cf:
+Deny all sampling events by counter PMU"), implicitly depends on the
+unconditional initialization of err to -ENOENT added by upstream
+commit aa1ac98268cd ("s390/cpumf: Fix double free on error in
+cpumf_pmu_event_init()"). The latter change is missing from 6.6,
+resulting in an instance of -Wuninitialized, which is fairly obvious
+from looking at the actual diff.
+
+ arch/s390/kernel/perf_cpum_cf.c:858:10: warning: variable 'err' is uninitialized when used here [-Wuninitialized]
+ 858 | return err;
+ | ^~~
+
+Commit aa1ac98268cd ("s390/cpumf: Fix double free on error in
+cpumf_pmu_event_init()") depends on commit c70ca298036c ("perf/core:
+Simplify the perf_event_alloc() error path"), which is a part of a much
+larger series unsuitable for stable.
+
+Extract the unconditional initialization of err to -ENOENT from
+commit aa1ac98268cd ("s390/cpumf: Fix double free on error in
+cpumf_pmu_event_init()") and apply it to 6.6 as a standalone change to
+resolve the warning.
+
+Fixes: d660c8d8142e ("s390/cpum_cf: Deny all sampling events by counter PMU")
+Signed-off-by: Nathan Chancellor <nathan@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/s390/kernel/perf_cpum_cf.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+--- a/arch/s390/kernel/perf_cpum_cf.c
++++ b/arch/s390/kernel/perf_cpum_cf.c
+@@ -552,15 +552,13 @@ static int cpumf_pmu_event_type(struct p
+ static int cpumf_pmu_event_init(struct perf_event *event)
+ {
+ unsigned int type = event->attr.type;
+- int err;
++ int err = -ENOENT;
+
+ if (type == PERF_TYPE_HARDWARE || type == PERF_TYPE_RAW)
+ err = __hw_perf_event_init(event, type);
+ else if (event->pmu->type == type)
+ /* Registered as unknown PMU */
+ err = __hw_perf_event_init(event, cpumf_pmu_event_type(event));
+- else
+- return -ENOENT;
+
+ if (unlikely(err) && event->destroy)
+ event->destroy(event);
mm-hugetlb-fix-folio-is-still-mapped-when-deleted.patch
fbcon-fix-integer-overflow-in-fbcon_do_set_font.patch
fbcon-fix-oob-access-in-font-allocation.patch
+s390-cpum_cf-fix-uninitialized-warning-after-backport-of-ce971233242b.patch
+mm-migrate_device-use-more-folio-in-migrate_device_finalize.patch
+mm-migrate_device-don-t-add-folio-to-be-freed-to-lru-in-migrate_device_finalize.patch
+minmax-add-in_range-macro.patch
+minmax-introduce-min-max-_array.patch
+minmax-deduplicate-__unconst_integer_typeof.patch
+minmax-fix-indentation-of-__cmp_once-and-__clamp_once.patch