A race condition is possible in stable_page_flags() when user-space reads
/proc/kpageflags concurrently with a folio split. This may lead to oopses
or BUG_ON()s being triggered.
To fix this, this commit uses snapshot_page() in stable_page_flags() so
that it works with stable page and folio snapshots instead.
Note that stable_page_flags() makes use of some functions that require the
original page or folio pointer to work properly (e.g. is_free_buddy_page()
and folio_test_idle()). Since those functions can't be used on the page
snapshot, we replace their usage with flags that were set by
snapshot_page() for this purpose.
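For illustration, the replacement pattern might look like this sketch (the
struct page_snapshot 'flags' member and the PAGE_SNAPSHOT_PG_BUDDY /
PAGE_SNAPSHOT_PG_IDLE names are assumptions based on the description
above, not necessarily the merged code):

	struct page_snapshot ps;
	u64 u = 0;

	snapshot_page(&ps, page);
	if (ps.flags & PAGE_SNAPSHOT_PG_BUDDY)	/* was: is_free_buddy_page(page) */
		u |= 1 << KPF_BUDDY;
	if (ps.flags & PAGE_SNAPSHOT_PG_IDLE)	/* was: folio_test_idle(folio) */
		u |= 1 << KPF_IDLE;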
Link: https://lkml.kernel.org/r/52c16c0f00995a812a55980c2f26848a999a34ab.1752499009.git.luizcap@redhat.com
Signed-off-by: Luiz Capitulino <luizcap@redhat.com>
Reviewed-by: Shivank Garg <shivankg@amd.com>
Tested-by: Harry Yoo <harry.yoo@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: SeongJae Park <sj@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Currently, the call to folio_precise_page_mapcount() from kpage_read() can
race with a folio split. When the race happens, we trigger a
VM_BUG_ON_FOLIO() in folio_entire_mapcount() (see splat below).
This commit fixes this race by using snapshot_page() so that we retrieve
the folio mapcount using a folio snapshot.
This commit refactors __dump_page() into snapshot_page().
snapshot_page() tries to take a faithful snapshot of a page and its folio
representation. The snapshot is returned in the struct page_snapshot
parameter along with additional flags that are best retrieved at snapshot
creation time to reduce race windows.
This function is intended to be used by callers that need a stable
representation of a struct page and struct folio so that pointers or page
information doesn't change while working on a page.
The idea and original implementation of snapshot_page() comes from Matthew
Wilcox with suggestions for improvements from David Hildenbrand. All bugs
and misconceptions are mine.
mm/memory: introduce is_huge_zero_pfn() and use it in vm_normal_page_pmd()
Patch series "mm: introduce snapshot_page()", v3.
This series introduces snapshot_page(), a helper function that can be used
to create a snapshot of a struct page and its associated struct folio.
This function is intended to help callers with a consistent view of a
folio while reducing the chance of encountering partially updated or
inconsistent state, such as during folio splitting, which could lead to
crashes and BUG_ON()s being triggered.
This patch (of 4):
Let's avoid working with the PMD when not required. If
vm_normal_page_pmd() were called on something that is not a present
pmd, it would already be a bug (the pfn would possibly be garbage).
While at it, let's support passing in any pfn covered by the huge zero
folio by masking off PFN bits -- which should be rather cheap.
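A minimal sketch of what such a helper could look like, assuming the
existing huge_zero_pfn global and a naturally aligned huge zero folio (so
masking the low PFN bits recovers its base PFN):

	static inline bool is_huge_zero_pfn(unsigned long pfn)
	{
		/* any pfn covered by the huge zero folio maps to its base pfn */
		return READ_ONCE(huge_zero_pfn) == (pfn & ~(HPAGE_PMD_NR - 1UL));
	}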
Kemeng Shi [Thu, 22 May 2025 12:25:54 +0000 (20:25 +0800)]
mm: swap: remove stale comment in cluster_alloc_swap_entry()
As cluster_next_cpu was already dropped, the associated comment is stale
now.
Link: https://lkml.kernel.org/r/20250522122554.12209-5-shikemeng@huaweicloud.com
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kemeng Shi [Thu, 22 May 2025 12:25:53 +0000 (20:25 +0800)]
mm: swap: fix potential buffer overflow in setup_clusters()
In setup_swap_map(), we only ensure badpages are in range (0, last_page].
As maxpages might be < last_page, setup_clusters() will encounter a buffer
overflow when a badpage is >= maxpages.
Only call inc_cluster_info_page() for badpages which are < maxpages to fix
the issue.
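For illustration, the guard could look like this sketch (loop shape
assumed, not the exact merged code):

	for (i = 0; i < swap_header->info.nr_badpages; i++) {
		unsigned int page_nr = swap_header->info.badpages[i];

		/* cluster_info[] is sized by maxpages; skip badpages beyond it */
		if (page_nr >= maxpages)
			continue;
		inc_cluster_info_page(si, cluster_info, page_nr);
	}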
Link: https://lkml.kernel.org/r/20250522122554.12209-4-shikemeng@huaweicloud.com
Fixes: b843786b0bd0 ("mm: swapfile: fix SSD detection with swapfile on btrfs")
Signed-off-by: Kemeng Shi <shikemeng@huaweicloud.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kairui Song <kasong@tencent.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kemeng Shi [Thu, 22 May 2025 12:25:52 +0000 (20:25 +0800)]
mm: swap: correctly use maxpages in swapon syscall to avoid potential deadloop
We use maxpages from read_swap_header() to initialize swap_info_struct,
however the maxpages might be reduced in setup_swap_extents(), and si->max
is assigned the reduced maxpages from setup_swap_extents().
Obviously, this could lead to memory waste as we allocated memory based on
the larger maxpages; besides, it could lead to a potential deadloop as
follows:
1) When calling setup_clusters() with the larger maxpages, unavailable
pages within range [si->max, larger maxpages) are not accounted for with
inc_cluster_info_page(). As a result, these pages are assumed
available but can not be allocated. The cluster containing these pages
can be moved to the frag_clusters list after all of its available pages
were allocated.
2) When the cluster mentioned in 1) is the only cluster in the
frag_clusters list, cluster_alloc_swap_entry() assumes an order-0
allocation will never fail and enters a deadloop, repeatedly trying
to allocate a page from the only cluster in frag_clusters, which
contains no actually available page.
Call setup_swap_extents() to get the final maxpages before
swap_info_struct initialization to fix the issue.
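A sketch of the reordering (call shapes simplified for illustration):

	/* before: allocations were sized from the header's maxpages, which
	 * setup_swap_extents() could later shrink via si->max */
	maxpages = read_swap_header(si, swap_header, inode);
	nr_extents = setup_swap_extents(si, &span);	/* may reduce si->max */
	maxpages = si->max;				/* use the final value */
	/* ... only now size swap_map / cluster_info from maxpages ... */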
After this change, span will include badblocks and become a larger
value, which I think is the correct value:
In summary, there are two kinds of swapfile_activate operations.
1. Filesystem style: Treat all blocks as logically contiguous and find
usable physical extents in the logical range. In this way, si->pages will
be the actual usable physical blocks and span will be "1 + highest_block -
lowest_block".
2. Block device style: Treat all blocks as physically contiguous and add
only one single extent. In this way, si->pages will be si->max and
span will be "si->pages - 1". Actually, si->pages and si->max are only
used in block device style and the span value is set from si->pages. As a
result, the span value in block device style will become a larger value,
as you mentioned.
I think the larger value is correct based on:
1. The span value in filesystem style is "1 + highest_block -
lowest_block", which is the range covering all possible physical blocks
including the badblocks.
2. For block device style, si->pages is the actual usable block number
and is already in pr_info. The original span value before this patch
also referred to the usable block number, which is redundant in pr_info.
Kemeng Shi [Thu, 22 May 2025 12:25:51 +0000 (20:25 +0800)]
mm: swap: move nr_swap_pages counter decrement from folio_alloc_swap() to swap_range_alloc()
Patch series "Some randome fixes and cleanups to swapfile".
Patch 0-3 are some random fixes. Patch 4 is a cleanup. More details can
be found in respective patches.
This patch (of 4):
When folio_alloc_swap() encounters a failure in either
mem_cgroup_try_charge_swap() or add_to_swap_cache(), the nr_swap_pages
counter is not decremented for the allocated entry. However, the following
put_swap_folio() will increase the nr_swap_pages counter without a paired
decrement and lead to an imbalance.
Move nr_swap_pages decrement from folio_alloc_swap() to swap_range_alloc()
to pair the nr_swap_pages counting.
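A sketch of the pairing, assuming the existing atomic nr_swap_pages
counter (swap_range_free() does the matching atomic_long_add()):

	static void swap_range_alloc(struct swap_info_struct *si,
				     unsigned long offset, unsigned int nr_entries)
	{
		/* moved here from folio_alloc_swap(): every allocated range
		 * now decrements the counter exactly once */
		atomic_long_sub(nr_entries, &nr_swap_pages);
		/* ... existing range bookkeeping ... */
	}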
SeongJae Park [Thu, 17 Jul 2025 05:54:46 +0000 (22:54 -0700)]
mm/damon/sysfs: implement refresh_ms file internal work
Only the minimum file operations for the refresh_ms file have been
implemented so far. Further implement its designed behavior, namely the
periodic update of the essential files' content, using repeat mode
damon_call().
If a non-zero value is written to the file, update the DAMON sysfs files
for auto-tuned monitoring intervals, DAMOS stats, and auto-tuned DAMOS
quota values, which are essential to monitor in most DAMON use cases. The
user-written non-zero value becomes the time delay between updates. If
zero is written to the file, the periodic refresh is disabled.
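A rough sketch of the wiring (the callback name and the refresh
bookkeeping are assumptions; the damon_call_control usage follows the
description above):

	static int damon_sysfs_repeat_refresh_fn(void *data)
	{
		struct damon_sysfs_kdamond *kdamond = data;

		/* last_refresh_jiffies: assumed per-kdamond bookkeeping */
		if (!kdamond->refresh_ms ||
		    time_before(jiffies, last_refresh_jiffies +
				msecs_to_jiffies(kdamond->refresh_ms)))
			return 0;
		last_refresh_jiffies = jiffies;
		/* ... update tuned intervals, DAMOS stats, effective quotas ... */
		return 0;
	}

	static struct damon_call_control refresh_control = {
		.fn = damon_sysfs_repeat_refresh_fn,
		.repeat = true,
	};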
SeongJae Park [Thu, 17 Jul 2025 05:54:45 +0000 (22:54 -0700)]
mm/damon/sysfs: implement refresh_ms file under kdamond directory
Patch series "mm/damon/sysfs: support periodic and automated stats
update".
DAMON sysfs interface provides files for reading DAMON internal status
including auto-tuned monitoring intervals, DAMOS stats, DAMOS action
applied regions, and auto-tuned DAMOS effective quota. Among those,
auto-tuned monitoring intervals, DAMOS stats and auto-tuned DAMOS
effective quota are essential for common DAMON/S use cases.
The contents of the files are not automatically updated, though. Users
should manually request updates of the contents by writing a special
command to 'state' file of each kdamond directory. This interface is good
for minimizing overhead, but causes the below problems.
First, the usage is cumbersome. This is arguably not a big problem, since
the user-space tool (damo) can do this instead of the user.
Second, it can be too slow. The update request is not handled directly by
the sysfs interface but by the kdamond thread, and kdamond threads wake up
only once per sampling interval. Hence if the sampling interval is not
short, each update request could take a long time. The recommended sampling
interval setup is asking DAMON to automatically tune it, within a range
between 5 milliseconds and 10 seconds. On production systems it is not
very rare to have a few seconds sampling interval as a result of the
auto-tuning, so this can disturb observing DAMON internal status.
Finally, parallel update requests can conflict with each other. When
parallel update requests are received, DAMON sysfs interface simply
returns -EBUSY to one of the requests. The DAMON user-space tool hence
implements its own backoff mechanism, but this can make the operation
even slower.
Introduce a new sysfs file, namely refresh_ms, for asking DAMON sysfs
interface to repeat the update of the above mentioned essential contents
with a user-specified time delay. If a non-zero value is written to the
file, the DAMON sysfs interface does the updates for essential DAMON
internal status, including auto-tuned monitoring intervals, DAMOS stats,
and auto-tuned DAMOS quotas, using the user-written value as the time
delay.
In other words, it is similar to periodically writing
'update_schemes_stats', 'update_schemes_effective_quotas', and
'update_tuned_intervals' keywords to the 'state' file. If zero is written
to the file, the automatic refresh is disabled.
This patch (of 4):
Implement a new DAMON sysfs file named 'refresh_ms' under each kdamond
directory. The file will be used as a control knob for the automatic
refresh of a few DAMON internal status files. This commit implements only
the minimum file operations, though. The automatic refresh feature will be
implemented by the following commit.
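The file operations could look roughly like this sketch (the refresh_ms
field type and the exact sysfs plumbing are assumptions):

	static ssize_t refresh_ms_show(struct kobject *kobj,
			struct kobj_attribute *attr, char *buf)
	{
		struct damon_sysfs_kdamond *kdamond = container_of(kobj,
				struct damon_sysfs_kdamond, kobj);

		return sysfs_emit(buf, "%u\n", kdamond->refresh_ms);
	}

	static ssize_t refresh_ms_store(struct kobject *kobj,
			struct kobj_attribute *attr, const char *buf, size_t count)
	{
		struct damon_sysfs_kdamond *kdamond = container_of(kobj,
				struct damon_sysfs_kdamond, kobj);
		int err = kstrtouint(buf, 0, &kdamond->refresh_ms);

		return err ? err : count;
	}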
memcg->socket_pressure is initialised with jiffies when the memcg is
created.
Once vmpressure detects that the cgroup is under memory pressure, the
field is updated with jiffies + HZ to signal the fact to the socket layer
and suppress memory allocation for one second.
Otherwise, the field is not updated.
mem_cgroup_under_socket_pressure() uses time_before() to check if jiffies
is less than memcg->socket_pressure, and this has a bug on 32-bit kernels.
	if (time_before(jiffies, memcg->socket_pressure))
		return true;
As time_before() casts the final result to long, the acceptable delta
between two timestamps is 2 ^ (BITS_PER_LONG - 1).
On 32-bit kernels with CONFIG_HZ=1000, this is about 24 days.
I don't have a real 32-bit machine, so this is a result on QEMU, but
with/without the u64 jiffies patch, the time spent in
mem_cgroup_under_socket_pressure() was 1~5us and I didn't see any
measurable delta.
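A sketch of the wrap-safe variant that the measurement above refers to,
assuming socket_pressure becomes a u64 holding 64-bit jiffies:

	if (time_before64(get_jiffies_64(), READ_ONCE(memcg->socket_pressure)))
		return true;

time_before64() compares full 64-bit values, so the ~24-day wrap on 32-bit
kernels with CONFIG_HZ=1000 no longer yields false positives.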
Link: https://lkml.kernel.org/r/20250717194645.1096500-1-kuniyu@google.com
Fixes: 8e8ae645249b ("mm: memcontrol: hook up vmpressure to socket pressure")
Signed-off-by: Kuniyuki Iwashima <kuniyu@google.com>
Reported-by: Neal Cardwell <ncardwell@google.com>
Suggested-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Shakeel Butt <shakeel.butt@linux.dev>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <ncardwell@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Since commit 4b634918384c ("arm64/mm: Close theoretical race where stale
TLB entry remains valid"), all arches that use tlbbatch for reclaim
(arm64, riscv, x86) implement arch_flush_tlb_batched_pending() with a
flush_tlb_mm().
So let's simplify by removing the unnecessary abstraction and doing the
flush_tlb_mm() directly in flush_tlb_batched_pending(). This effectively
reverts commit db6c1f6f236d ("mm/tlbbatch: introduce
arch_flush_tlb_batched_pending()").
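After this change the function reads roughly like the sketch below
(batching counters as in the existing code, simplified):

	void flush_tlb_batched_pending(struct mm_struct *mm)
	{
		int batch = atomic_read(&mm->tlb_flush_batched);
		int pending = batch & TLB_FLUSH_BATCH_PENDING_MASK;
		int flushed = batch >> TLB_FLUSH_BATCH_FLUSHED_SHIFT;

		if (pending != flushed) {
			flush_tlb_mm(mm);	/* was: arch_flush_tlb_batched_pending(mm) */
			/* mark these pending flushes as done */
			atomic_cmpxchg(&mm->tlb_flush_batched, batch,
				       pending | (pending << TLB_FLUSH_BATCH_FLUSHED_SHIFT));
		}
	}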
Link: https://lkml.kernel.org/r/20250609103132.447370-1-ryan.roberts@arm.com
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Suggested-by: Will Deacon <will@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Will Deacon <will@kernel.org>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Alexandre Ghiti <alex@ghiti.fr>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Anthony Yznaga [Wed, 16 Jul 2025 01:26:11 +0000 (18:26 -0700)]
mm: drop hugetlb_free_pgd_range()
There are no longer any callers of hugetlb_free_pgd_range().
Link: https://lkml.kernel.org/r/20250716012611.10369-4-anthony.yznaga@oracle.com
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Anthony Yznaga [Wed, 16 Jul 2025 01:26:10 +0000 (18:26 -0700)]
mm: remove call to hugetlb_free_pgd_range()
With the removal of the last arch-specific implementation of
hugetlb_free_pgd_range(), hugetlb VMAs no longer need special handling
when freeing page tables.
Link: https://lkml.kernel.org/r/20250716012611.10369-3-anthony.yznaga@oracle.com
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Anthony Yznaga [Wed, 16 Jul 2025 01:26:09 +0000 (18:26 -0700)]
sparc64: remove hugetlb_free_pgd_range()
Patch series "drop hugetlb_free_pgd_range()".
For all architectures that support hugetlb except for sparc,
hugetlb_free_pgd_range() just calls free_pgd_range(). It turns out the
sparc implementation is essentially identical to free_pgd_range() and can
be removed. Remove it and update free_pgtables() to treat hugetlb VMAs
the same as others.
This patch (of 3):
The sparc implementation of hugetlb_free_pgd_range() is identical to
free_pgd_range() with the exception of checking for and skipping possible
leaf entries at the PUD and PMD levels.
These checks are unnecessary because any huge pages have been freed and
their PTEs cleared by the time the page tables needed to map them are
freed.
While some huge page sizes do populate the page table with multiple PTEs,
they are correctly cleared by huge_ptep_get_and_clear().
To verify this, libhugetlbfs tests were run for 64K, 8M, and 256M page
sizes with an instrumented kernel on a qemu guest modified to support the
256M page size. The same tests were used to verify no regressions after
applying this patch and were also run on x86 for both 2M and 1G page
sizes.
Link: https://lkml.kernel.org/r/20250716012611.10369-1-anthony.yznaga@oracle.com
Link: https://lkml.kernel.org/r/20250716012611.10369-2-anthony.yznaga@oracle.com
Signed-off-by: Anthony Yznaga <anthony.yznaga@oracle.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Acked-by: Oscar Salvador <osalvador@suse.de>
Cc: Alexander Gordeev <agordeev@linux.ibm.com>
Cc: Alexandre Ghiti <alexghiti@rivosinc.com>
Cc: Andreas Larsson <andreas@gaisler.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: David Hildenbrand <david@redhat.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Will Deacon <will@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/shmem: writeout free swap if swap_writeout() reactivates
If swap_writeout() returns AOP_WRITEPAGE_ACTIVATE (for example, because
zswap cannot compress and memcg disables writeback), there is no virtue in
keeping that folio in swap cache and holding the swap allocation:
have shmem_writeout() switch it back to shmem page cache before returning.
Folio lock is held, and folio->memcg_data remains set throughout, so there
is no need to get into any memcg or memsw charge complications:
swap_free_nr() and delete_from_swap_cache() do as much as is needed (but
beware the race with shmem_free_swap() when inode truncated or evicted).
Doing the same for an anonymous folio is harder, since it will usually
have been unmapped, with references to the swap left in the page tables.
Adding a function to remap the folio would be fun, but not worthwhile
unless it has other uses, or an urgent bug with anon is demonstrated.
[hughd@google.com: use shmem_recalc_inode() rather than open coding, per Baolin]
Link: https://lkml.kernel.org/r/101a7d89-290c-545d-8a6d-b1174ed8b1e5@google.com
Link: https://lkml.kernel.org/r/5c911f7a-af7a-5029-1dd4-2e00b66d565c@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: David Rientjes <rientjes@google.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Kairui Song <ryncsn@gmail.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/shmem: hold shmem_swaplist spinlock (not mutex) much less
A flamegraph (from an MGLRU load) showed shmem_writeout()'s use of the
global shmem_swaplist_mutex worryingly hot: improvement is long overdue.
3.1 commit 6922c0c7abd3 ("tmpfs: convert shmem_writepage and enable swap")
apologized for extending shmem_swaplist_mutex across add_to_swap_cache(),
and hoped to find another way: yes, there may be lots of work to allocate
radix tree nodes in there. Then 6.15 commit b487a2da3575 ("mm, swap:
simplify folio swap allocation") will have made it worse, by moving
shmem_writeout()'s swap allocation under that mutex too (but the worrying
flamegraph was observed even before that change).
There's a useful comment about the page lock no longer protecting from
eviction once moved to swap cache: but it holds good until
shmem_delete_from_page_cache() replaces the page pointer by a swap entry,
so move the swaplist add between them.
We would much prefer to take the global lock once per inode than once per
page: given the possible races with shmem_unuse() pruning when !swapped
(and other tasks racing to swap other pages out or in), try the swaplist
add whenever swapped was incremented from 0 (but inode may already be on
the list - only unuse and evict bother to remove it).
This technique is more subtle than it looks (we're avoiding the very lock
which would make it easy), but works: whereas an unlocked list_empty()
check runs a risk of the inode being unqueued and left off the swaplist
forever, swapoff only completing when the page is faulted in or removed.
The need for a sleepable mutex went away in 5.1 commit b56a2d8af914 ("mm:
rid swapoff of quadratic complexity"): a spinlock works better now.
This commit is certain to take shmem_swaplist_mutex out of contention, and
has been seen to make a practical improvement (but there is likely to have
been an underlying issue which made its contention so visible).
Link: https://lkml.kernel.org/r/87beaec6-a3b0-ce7a-c892-1e1e5bd57aa3@google.com
Signed-off-by: Hugh Dickins <hughd@google.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Tested-by: David Rientjes <rientjes@google.com>
Reviewed-by: Kairui Song <kasong@tencent.com>
Cc: Baoquan He <bhe@redhat.com>
Cc: Barry Song <21cnbao@gmail.com>
Cc: Chris Li <chrisl@kernel.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:56:00 +0000 (17:56 +0100)]
tools/testing/selftests: extend mremap_test to test multi-VMA mremap
Now that we have added the ability to move multiple VMAs at once, assert
that this functions correctly, both overwriting VMAs and moving backwards
and forwards with merge and VMA invalidation.
Additionally assert that page tables are correctly propagated by setting
random data and reading it back.
Link: https://lkml.kernel.org/r/139074a24a011ca4ed52498a7fa2080024b43917.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:59 +0000 (17:55 +0100)]
mm/mremap: permit mremap() move of multiple VMAs
Historically we've made it a uAPI requirement that mremap() may only
operate on a single VMA at a time.
For instances where VMAs need to be resized, this makes sense, as it
becomes very difficult to determine what a user actually wants should they
indicate a desire to expand or shrink the size of multiple VMAs (truncate?
Adjust sizes individually? Some other strategy?).
However, in instances where a user is moving VMAs, it is restrictive to
disallow this.
This is especially the case when an anonymous mapping's remap may or may
not be mergeable depending on whether VMAs have or have not been faulted
due to anon_vma assignment and folio index alignment with vma->vm_pgoff.
Often this can result in surprising impact where a moved region is
faulted, then moved back and a user fails to observe a merge from
otherwise compatible, adjacent VMAs.
This change allows such cases to work without the user having to be
cognizant of whether a prior mremap() move or other VMA operations have
resulted in VMA fragmentation.
We only permit this for mremap() operations that do NOT change the size of
the VMA and DO specify MREMAP_MAYMOVE | MREMAP_FIXED.
Should no VMA exist in the range, -EFAULT is returned as usual.
If a VMA move spans a single VMA, then there is no functional change.
Otherwise, we place additional requirements upon VMAs:
* They must not have a userfaultfd context associated with them - this
requires dropping the lock to notify users, and we want to perform the
operation with the mmap write lock held throughout.
* If file-backed, they cannot have a custom get_unmapped_area handler -
this might result in MREMAP_FIXED not being honoured, which could result
in unexpected positioning of VMAs in the moved region.
There may be gaps in the range of VMAs that are moved:
            X          Y                          X          Y
          <--->       <->                       <--->       <->
 |-------|     |-----|   |-----|       |-------|     |-----|   |-----|
 |   A   |     |  B  |   |  C  | --->  |   A'  |     |  B' |   |  C' |
 |-------|     |-----|   |-----|       |-------|     |-----|   |-----|
    addr                                  new_addr
The move will preserve the gaps between each VMA.
Note that any failures encountered will result in a partial move. Since
an mremap() can fail at any time, this might result in only some of the
VMAs being moved.
Note that failures are very rare and typically require an out of memory
condition or a mapping limit condition to be hit, assuming the VMAs being
moved are valid.
We don't try to assess ahead of time whether VMAs are valid according to
the multi VMA rules, as it would be rather unusual for a user to mix
uffd-enabled VMAs and/or VMAs which map unusual driver mappings that
specify custom get_unmapped_area() handlers in an aggregate operation.
So we optimise for the far, far more likely case of the operation being
entirely permissible.
In the case of the move of a single VMA, the above conditions are
permitted. This makes the behaviour identical for a single VMA as before.
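A hedged userspace illustration (requires a kernel with this change; older
kernels fail the multi-VMA move with EFAULT). It creates two VMAs
separated by a gap and moves the whole span in one call:

	#define _GNU_SOURCE
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		long ps = 4096;	/* assume 4 KiB pages for brevity */
		char *src = mmap(NULL, 4 * ps, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		char *dst = mmap(NULL, 4 * ps, PROT_NONE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (src == MAP_FAILED || dst == MAP_FAILED)
			return 1;
		strcpy(src, "A");
		strcpy(src + 3 * ps, "C");
		munmap(src + ps, ps);	/* punch a gap: two VMAs remain */
		munmap(dst, 4 * ps);	/* free the target span for MREMAP_FIXED */

		/* same old/new size, MAYMOVE|FIXED: both VMAs move, gap kept */
		if (mremap(src, 4 * ps, 4 * ps,
			   MREMAP_MAYMOVE | MREMAP_FIXED, dst) == MAP_FAILED) {
			perror("mremap");
			return 1;
		}
		printf("%s %s\n", dst, dst + 3 * ps);	/* prints: A C */
		return 0;
	}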
Link: https://lkml.kernel.org/r/8cab2f2c202c4208bdfdb562635748bea6eb37bf.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:58 +0000 (17:55 +0100)]
mm/mremap: clean up mlock populate behaviour
When an mlock()'d VMA is expanded, we need to populate the expanded region
to maintain the contract that all mlock()'d memory is present (albeit with
some period after the mmap unlock where the expanded part of the mapping
remains unfaulted).
The current implementation is very unclear, so make it absolutely explicit
under what circumstances we do this.
Link: https://lkml.kernel.org/r/2358b0006baa9cab83db4259817794f16fe1992e.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:56 +0000 (17:55 +0100)]
mm/mremap: check remap conditions earlier
When we expand or move a VMA, this requires a number of additional checks
to be performed.
Make it really obvious under what circumstances these checks must be
performed and aggregate all the checks in one place by invoking this in
check_prep_vma().
We have to adjust the checks to account for shrink + move operations by
checking new_len <= old_len rather than new_len == old_len.
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:55 +0000 (17:55 +0100)]
mm/mremap: use an explicit uffd failure path for mremap
Right now it appears that the code is relying upon the returned
destination address having bits outside PAGE_MASK to indicate whether an
error value is specified, and decrementing the increased refcount on the
uffd ctx if so.
This is not a safe means of determining an error value, so instead, be
specific. It makes far more sense to do so in a dedicated error path, so
add mremap_userfaultfd_fail() for this purpose and use this when an error
arises.
A vm_userfaultfd_ctx is not established until we are at the point where
mremap_userfaultfd_prep() is invoked in copy_vma_and_data(), so this is a
no-op until this happens.
That is - uffd remap notification only occurs if the VMA is actually moved
- at which point a UFFD_EVENT_REMAP event is raised.
No errors can occur after this point currently, though it's certainly not
guaranteed this will always remain the case, and we mustn't rely on this.
However, the reason for needing to handle this case is that, when an error
arises on a VMA move at the point of adjusting page tables, we revert this
operation, and propagate the error.
At this point, it is not correct to raise a uffd remap event, and we must
handle it.
This refactoring makes it abundantly clear what we are doing.
We assume vrm->new_addr is always valid, which a prior change made the
case even for mremap() invocations which don't move the VMA; however,
given no uffd context would be set up in this case, it's immaterial to
this change anyway.
No functional change intended.
Link: https://lkml.kernel.org/r/a70e8a1f7bce9f43d1431065b414e0f212297297.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:54 +0000 (17:55 +0100)]
mm/mremap: cleanup post-processing stage of mremap
Separate out the uffd bits so it's clear what's happening.
Don't bother setting vrm->mmap_locked after unlocking, because after this
we are done anyway.
The only time we drop the mmap lock is on VMA shrink, at which point
vrm->new_len will be < vrm->old_len and the operation will not be
performed anyway, so move this code out of the if (vrm->mmap_locked)
block.
All addresses returned by mremap() are page-aligned, so the
offset_in_page() check on ret seems only to be incorrectly trying to
detect whether an error occurred - explicitly check for this.
No functional change intended.
Link: https://lkml.kernel.org/r/ebb8f29650b8e343fe98fefc67b3a61a24d1e0f1.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
We are currently checking some things later, and some things immediately.
Aggregate the checks and avoid ones that need not be made.
Simplify things by aligning lengths immediately. Defer setting the delta
parameter until later, which removes some duplicate code in the hugetlb
case.
We can safely perform the checks moved from mremap_to() to
check_mremap_params() because:
* If we set a new address via vrm_set_new_addr(), then this is guaranteed
to not overlap nor to position the new VMA past TASK_SIZE, so there's no
need to check these later.
* We can simply page align lengths immediately. We do not need to check for
overlap nor TASK_SIZE sanity after hugetlb alignment as this asserts
addresses are huge-aligned, then huge-aligns lengths, rounding down. This
means any existing overlap would have already been caught.
Moving things around like this lays the groundwork for subsequent changes
to permit operations on batches of VMAs.
No functional change intended.
Link: https://lkml.kernel.org/r/c862d625c98b1abd861c406f2bfad8baf3287f83.1752770784.git.lorenzo.stoakes@oracle.com
Signed-off-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Jann Horn <jannh@google.com>
Cc: Liam Howlett <liam.howlett@oracle.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Rik van Riel <riel@surriel.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Lorenzo Stoakes [Thu, 17 Jul 2025 16:55:51 +0000 (17:55 +0100)]
mm/mremap: perform some simple cleanups
Patch series "mm/mremap: permit mremap() move of multiple VMAs", v4.
Historically we've made it a uAPI requirement that mremap() may only
operate on a single VMA at a time.
For instances where VMAs need to be resized, this makes sense, as it
becomes very difficult to determine what a user actually wants should they
indicate a desire to expand or shrink the size of multiple VMAs (truncate?
Adjust sizes individually? Some other strategy?).
However, in instances where a user is moving VMAs, it is restrictive to
disallow this.
This is especially the case when an anonymous mapping's remap may or may
not be mergeable depending on whether VMAs have or have not been faulted
due to anon_vma assignment and folio index alignment with vma->vm_pgoff.
Often this can result in surprising impact where a moved region is faulted,
then moved back and a user fails to observe a merge from otherwise
compatible, adjacent VMAs.
This change allows such cases to work without the user having to be
cognizant of whether a prior mremap() move or other VMA operations have
resulted in VMA fragmentation.
In order to do this, this series performs a large amount of refactoring,
most pertinently grouping sanity checks together, separating those that
check input parameters from those relating to VMAs.
We also simplify the post-mmap-lock-drop processing for uffd and mlock()'d
VMAs.
With this done, we can then fairly straightforwardly implement this
functionality.
This works exclusively for mremap() invocations which specify
MREMAP_FIXED. It is not compatible with VMAs which use userfaultfd, as the
notification of the userland fault handler would require us to drop the
mmap lock.
It is also not compatible with file-backed mappings with customised
get_unmapped_area() handlers as these may not honour MREMAP_FIXED.
The input and output address ranges must not overlap. We carefully
account for moves which would result in VMA iterator invalidation.
While there can be gaps between VMAs in the input range, there can be no
gap before the first VMA in the range.
This patch (of 10):
We const-ify the vrm flags parameter to indicate this will never change.
We rename resize_is_valid() to remap_is_valid(), as this function does not
apply only to cases where we resize, so referring to resize here is simply
confusing.
We remove the BUG() from mremap_at(), as we should not BUG() unless we are
certain it'll result in system instability.
We rename vrm_charge() to vrm_calc_charge() to make it clear this simply
calculates the charged number of pages rather than actually adjusting any
state.
We update the comment for vrm_implies_new_addr() to explain that
MREMAP_DONTUNMAP does not require a set address, but will always be moved.
Additionally consistently use 'res' rather than 'ret' for result values.
Lorenzo Stoakes [Mon, 14 Jul 2025 13:58:39 +0000 (14:58 +0100)]
mm/vma: refactor vma_modify_flags_name() to vma_modify_name()
The single instance in which we use this function doesn't actually need to
change VMA flags, so remove this parameter and update the caller
accordingly.
mm/mglru: stop try_to_inc_min_seq() if min_seq[type] has not increased
In try_to_inc_min_seq(), if min_seq[type] has not increased, in other
words min_seq[type] == lrugen->min_seq[type], then we should return
directly to avoid unnecessary overhead later.
Corollary: If min_seq[type] of both anonymous and file is not increased,
try_to_inc_min_seq() will fail.
Proof:
It is known that min_seq[type] has not increased, that is, min_seq[type]
is equal to lrugen->min_seq[type]. Then:
case 1: min_seq[type] has not been reassigned before the
min_seq[type] <= lrugen->min_seq[type] judgment.
Then the subsequent min_seq[type] <= lrugen->min_seq[type] judgment
will always be true.
case 2: min_seq[type] is reassigned to seq before the
min_seq[type] <= lrugen->min_seq[type] judgment.
Then at least the condition min_seq[type] > seq must be met
before min_seq[type] is reassigned to seq.
That is to say, before the reassignment, lrugen->min_seq[type] > seq
is met, and then min_seq[type] = seq.
Then the following min_seq[type] (seq) <= lrugen->min_seq[type] judgment
is always true.
Therefore, in try_to_inc_min_seq(), if min_seq[type] of both anonymous
and file has not increased, we can return false directly to avoid
unnecessary overhead.
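The early return could be as simple as this sketch (placement inside
try_to_inc_min_seq() and the exact condition are assumptions based on the
reasoning above):

	/* min_seq[type] did not advance; if this holds for both anon and
	 * file, the function can return false without further work */
	if (min_seq[type] == lrugen->min_seq[type])
		continue;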
Link: https://lkml.kernel.org/r/20250703023946.65315-1-jiahao.kernel@gmail.com
Signed-off-by: Hao Jia <jiahao1@lixiang.com>
Suggested-by: Yuanchu Xie <yuanchu@google.com>
Reviewed-by: Axel Rasmussen <axelrasmussen@google.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kinsey Ho <kinseyho@google.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
Cc: Yu Zhao <yuzhao@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Ranjan Kumar [Tue, 24 Jun 2025 06:16:49 +0000 (11:46 +0530)]
scsi: Fix sas_user_scan() to handle wildcard and multi-channel scans
sas_user_scan() did not fully process wildcard channel scans
(SCAN_WILD_CARD) when a transport-specific user_scan() callback was
present. Only channel 0 would be scanned via user_scan(), while the
remaining channels were skipped, potentially missing devices.
The updated sas_user_scan() handles channel 0 via the transport-specific
scan and, if successful, iteratively scans the remaining channels (1 to
shost->max_channel) via scsi_scan_host_selected(). This ensures complete
wildcard scanning without affecting transport-specific scanning behavior.
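A sketch of the resulting flow (sas_user_scan_channel() is a hypothetical
name for the SAS-specific per-channel scan; scsi_scan_host_selected() is
the existing generic scanner):

	static int sas_user_scan(struct Scsi_Host *shost, uint channel,
				 uint id, u64 lun)
	{
		int res;

		if (channel != SCAN_WILD_CARD)
			return sas_user_scan_channel(shost, channel, id, lun);

		/* wildcard: SAS-specific scan of channel 0 first ... */
		res = sas_user_scan_channel(shost, 0, id, lun);
		if (res)
			return res;
		/* ... then generic scan of all remaining channels */
		for (channel = 1; channel <= shost->max_channel; channel++) {
			res = scsi_scan_host_selected(shost, channel, id, lun,
						      SCSI_SCAN_MANUAL);
			if (res)
				return res;
		}
		return 0;
	}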
scsi: target: core: Generate correct identifiers for PR OUT transport IDs
Fix target_parse_pr_out_transport_id() to return a string representing
the transport ID in a human-readable format (e.g., naa.xxxxxxxx...) for
various SCSI protocol types (SAS, FCP, SRP, SBP).
Previously, the function returned a pointer to the raw binary buffer,
which was incorrectly compared against human-readable strings, causing
comparisons to fail. Now, the function writes a properly formatted
string into a buffer provided by the caller. The output format depends
on the transport protocol:
====================
selftests: drv-net: tso: fix issues with tso selftest
There are a couple of issues with the tso selftest.
- Features required for test cases are detected by searching the set
of active features at test start, so if a feature is supported by
hw, but disabled, the test will report that the feature under test
is not available and fail.
- The vxlan test cases do not use the correct ip link flags based on
the gso feature under test.
- The non-tunneled tso6 test case is showing up with the wrong name.
With all patches applied, the test output is:
# Detected qstat for LSO wire-packets
TAP version 13
1..14
ok 1 tso.ipv4
# Testing with mangleid enabled
ok 2 tso.vxlan4_ipv4
ok 3 tso.vxlan4_ipv6
# Testing with mangleid enabled
ok 4 tso.vxlan_csum4_ipv4
ok 5 tso.vxlan_csum4_ipv6
# Testing with mangleid enabled
ok 6 tso.gre4_ipv4
ok 7 tso.gre4_ipv6
ok 8 tso.ipv6
# Testing with mangleid enabled
ok 9 tso.vxlan6_ipv4
ok 10 tso.vxlan6_ipv6
# Testing with mangleid enabled
ok 11 tso.vxlan_csum6_ipv4
ok 12 tso.vxlan_csum6_ipv6
# Testing with mangleid enabled
ok 13 tso.gre6_ipv4
ok 14 tso.gre6_ipv6
# Totals: pass:14 fail:0 xfail:0 xpass:0 skip:0 error:0
====================
Daniel Zahka [Wed, 23 Jul 2025 18:47:38 +0000 (11:47 -0700)]
selftests: drv-net: tso: fix non-tunneled tso6 test case name
The non-tunneled tso6 test case was showing up as:
ok 8 tso.ipv4
This is because of the way test_builder() uses the inner_ipver arg in
test naming, and how test_info is iterated over in main(). Given that
some tunnels that are not yet supported, e.g. ipip or sit, only support ipv4 or
ipv6 as the inner network protocol, I think the best fix here is to
call test_builder() in separate branches for tunneled and non-tunneled
tests, and to make supported inner l3 types an explicit attribute of
tunnel test cases.
# Detected qstat for LSO wire-packets
TAP version 13
1..14
ok 1 tso.ipv4
# Testing with mangleid enabled
ok 2 tso.vxlan4_ipv4
ok 3 tso.vxlan4_ipv6
# Testing with mangleid enabled
ok 4 tso.vxlan_csum4_ipv4
ok 5 tso.vxlan_csum4_ipv6
# Testing with mangleid enabled
ok 6 tso.gre4_ipv4
ok 7 tso.gre4_ipv6
ok 8 tso.ipv6
# Testing with mangleid enabled
ok 9 tso.vxlan6_ipv4
ok 10 tso.vxlan6_ipv6
# Testing with mangleid enabled
ok 11 tso.vxlan_csum6_ipv4
ok 12 tso.vxlan_csum6_ipv6
# Testing with mangleid enabled
ok 13 tso.gre6_ipv4
ok 14 tso.gre6_ipv6
# Totals: pass:14 fail:0 xfail:0 xpass:0 skip:0 error:0
Daniel Zahka [Wed, 23 Jul 2025 18:47:37 +0000 (11:47 -0700)]
selftests: drv-net: tso: fix vxlan tunnel flags to get correct gso_type
When vxlan is used with ipv6 as the outer network header, the correct
ip link parameters for achieving the SKB_GSO_UDP_TUNNEL gso type are
"udp6zerocsumtx udp6zerocsumrx". Otherwise the gso type will be
SKB_GSO_UDP_TUNNEL_CSUM.
This bug was the reason for the second of the three possible
run_one_stream() invocations, so that can be deleted as well. We only
need to test with the feature off and on.
Daniel Zahka [Wed, 23 Jul 2025 18:47:36 +0000 (11:47 -0700)]
selftests: drv-net: tso: enable test cases based on hw_features
tso.py uses the active features at the time of test execution
as the set of available gso features to test. This means if a gso
feature is supported but toggled off at test start, the test will be
skipped with a "Device does not support {feature}" message.
Instead, we can enumerate the set of toggleable features by capturing
the driver's hw_features bitmap. To avoid configuration side-effects
from running the test, we also snapshot the wanted_features flag set
before making any feature changes, and then attempt to restore the
same set of wanted_features before test exit.
====================
selftests: drv-net: Fix and improve command requirement checking
This series fixes remote command checking and cleans up command
requirement calls across tests.
The first patch fixes require_cmd() incorrectly checking commands
locally even when remote=True was specified due to a missing host
parameter.
The second patch makes require_cmd() usage explicit about local/remote
requirements, avoiding unnecessary test failures and consolidating
duplicate calls.
====================
Gal Pressman [Wed, 23 Jul 2025 13:54:54 +0000 (16:54 +0300)]
selftests: drv-net: Make command requirements explicit
Make require_cmd() calls explicit about whether commands are needed
locally, remotely, or both.
Since require_cmd() defaults to local=True, tests should explicitly set
local=False when commands are only needed remotely.
- socat: Set local=False since it's only needed on remote hosts.
- iperf3: Use single call with both local=True and remote=True since
it's needed on both hosts.
This avoids unnecessary test failures when commands are missing locally
but available remotely where actually needed, and consolidates a
duplicate require_cmd() call into single call that checks both hosts.
Fixes: 0d0f4174f6c8 ("selftests: drv-net: add a simple TSO test")
Fixes: f1e68a1a4a40 ("selftests: drv-net: add require_XYZ() helpers for validating env")
Fixes: c76bab22e920 ("selftests: drv-net: rss_input_xfrm: Check test prerequisites before running")
Reviewed-by: Nimrod Oren <noren@nvidia.com>
Signed-off-by: Gal Pressman <gal@nvidia.com>
Link: https://patch.msgid.link/20250723135454.649342-3-gal@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
When building, the following warnings will appear.
"
pci_irq.c: In function ‘mlx5_ctrl_irq_request’:
pci_irq.c:494:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
pci_irq.c: In function ‘mlx5_irq_request_vector’:
pci_irq.c:561:1: warning: the frame size of 1040 bytes is larger than 1024 bytes [-Wframe-larger-than=]
eq.c: In function ‘comp_irq_request_sf’:
eq.c:897:1: warning: the frame size of 1080 bytes is larger than 1024 bytes [-Wframe-larger-than=]
irq_affinity.c: In function ‘irq_pool_request_irq’:
irq_affinity.c:74:1: warning: the frame size of 1048 bytes is larger than 1024 bytes [-Wframe-larger-than=]
"
These warnings indicate that the stack frame size exceeds 1024 bytes in
these functions.
To resolve this, instead of allocating large memory buffers on the stack,
it is better to use kvzalloc to allocate memory dynamically on the heap.
This approach reduces stack usage and eliminates these frame size warnings.
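The fix pattern is roughly this sketch (shown for the irq_affinity_desc
case; exact fields in the mlx5 functions may differ):

	struct irq_affinity_desc *af_desc;

	/* was: struct irq_affinity_desc af_desc; -- ~1 KiB on the stack */
	af_desc = kvzalloc(sizeof(*af_desc), GFP_KERNEL);
	if (!af_desc)
		return ERR_PTR(-ENOMEM);
	cpumask_copy(&af_desc->mask, cpu_online_mask);
	/* ... request the irq using af_desc ... */
	kvfree(af_desc);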
Mike Christie [Mon, 21 Jul 2025 18:51:45 +0000 (13:51 -0500)]
scsi: target: iblock: Allow iblock devices to be shared
We might be running a local application that also interacts with the
backing device. In this setup we have some clustering type of software
that manages the owner of it, so we don't want the kernel to restrict
us. This patch allows the user to control whether the driver gets
exclusive access.
Seunghui Lee [Thu, 17 Jul 2025 08:12:13 +0000 (17:12 +0900)]
scsi: ufs: core: Use link recovery when h8 exit fails during runtime resume
If the h8 exit fails during the runtime resume process, the runtime thread
enters runtime suspend immediately and the error handler operates at the
same time. It becomes stuck and cannot be recovered through the error
handler. To fix this, use link recovery instead of the error handler.
Fixes: 4db7a2360597 ("scsi: ufs: Fix concurrency of error handler and other error recovery paths")
Signed-off-by: Seunghui Lee <sh043.lee@samsung.com>
Link: https://lore.kernel.org/r/20250717081213.6811-1-sh043.lee@samsung.com
Reviewed-by: Bean Huo <beanhuo@micron.com>
Acked-by: Bart Van Assche <bvanassche@acm.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
====================
Use enum to represent the NAPI threaded state
Instead of using 0/1 to represent the NAPI threaded states use enum
(disabled/enabled) to represent the NAPI threaded states.
This patch series is a subset of patches from the following patch series:
https://lore.kernel.org/20250718232052.1266188-1-skhawaja@google.com
The first 3 patches are being sent separately as per the feedback to
replace the usage of 0/1 as NAPI threaded states with enum. See:
https://lore.kernel.org/20250721164856.1d2208e4@kernel.org
====================
Instead of using '0' and '1' for napi threaded state use an enum with
'disabled' and 'enabled' states.
Tested:
./tools/testing/selftests/net/nl_netdev.py
TAP version 13
1..7
ok 1 nl_netdev.empty_check
ok 2 nl_netdev.lo_check
ok 3 nl_netdev.page_pool_check
ok 4 nl_netdev.napi_list_check
ok 5 nl_netdev.dev_set_threaded
ok 6 nl_netdev.napi_set_threaded
ok 7 nl_netdev.nsim_rxq_reset_down
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
net: Use netif_threaded_enable instead of netif_set_threaded in drivers
Prepare for adding an enum type for NAPI threaded states by adding
netif_threaded_enable API. De-export the existing netif_set_threaded API
and only use it internally. Update existing drivers to use
netif_threaded_enable instead of the de-exported netif_set_threaded.
Note that dev_set_threaded used by mt76 debugfs file is unchanged.
Steven Rostedt [Wed, 23 Jul 2025 19:41:45 +0000 (15:41 -0400)]
tracing: Call trace_ftrace_test_filter() for the event
The trace event filter bootup self test tests a bunch of filter logic
against the ftrace_test_filter event, but does not actually call the
event. Work is being done to cause a warning if an event is defined but
not used. To quiet the warning, call the trace event under an if statement
where it is disabled so it doesn't get optimized out.
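Possibly along the lines of this sketch, using the generated
trace_<event>_enabled() helper (illustrative; not necessarily the exact
fix):

	/* compiled in and thus "used", but only called while the event is
	 * disabled, when the tracepoint body is a no-op */
	if (!trace_ftrace_test_filter_enabled())
		trace_ftrace_test_filter(/* test values */);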
The invocation of iscsi_put_conn() in iscsi_iter_destroy_conn_fn() is
used to free the initial reference counter of iscsi_cls_conn. For
non-qla4xxx cases, the ->destroy_conn() callback (e.g.,
iscsi_conn_teardown) will call iscsi_remove_conn() and iscsi_put_conn()
to remove the connection from the children list of the session and free
the connection at last. However for qla4xxx, that is not the case. The
->destroy_conn() callback of qla4xxx will keep the connection in the
session conn_list and doesn't use iscsi_put_conn() to free the initial
reference counter. Therefore, it seems necessary to keep the
iscsi_put_conn() in iscsi_iter_destroy_conn_fn(); otherwise, there
will be a memory leak.
John Garry [Tue, 15 Jul 2025 11:15:35 +0000 (11:15 +0000)]
scsi: aacraid: Stop using PCI_IRQ_AFFINITY
When PCI_IRQ_AFFINITY is set for calling pci_alloc_irq_vectors(), it
means interrupts are spread around the available CPUs. It also means that
the interrupts become managed, which means that an interrupt is shutdown
when all the CPUs in the interrupt affinity mask go offline.
Using managed interrupts in this way means that we should ensure that
completions should not occur on HW queues where the associated interrupt
is shutdown. This is typically achieved by ensuring only CPUs which are
online can generate IO completion traffic to the HW queue which they are
mapped to (so that they can also serve completion interrupts for that HW
queue).
The problem in the driver is that a CPU can generate completions to a HW
queue whose interrupt may be shutdown, as the CPUs in the HW queue
interrupt affinity mask may be offline. This can cause IOs to never
complete and hang the system. The driver maintains its own CPU <-> HW
queue mapping for submissions, see aac_fib_vector_assign(), but this does
not reflect the CPU <-> HW queue interrupt affinity mapping.
Commit 9dc704dcc09e ("scsi: aacraid: Reply queue mapping to CPUs based on
IRQ affinity") tried to remedy this issue by mapping CPUs properly to HW
queue interrupts. However this was later reverted in commit c5becf57dd56
("Revert "scsi: aacraid: Reply queue mapping to CPUs based on IRQ
affinity"") - it seems that there were other reports of hangs. I guess
that this was due to some implementation issue in the original commit or
maybe a HW issue.
Fix the very original hang by just not using managed interrupts, i.e. by
not setting PCI_IRQ_AFFINITY. In this way, all CPUs will be in each HW
queue's affinity mask, so no completion problems should arise if any CPUs
go offline.
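The change amounts to dropping the flag from the vector allocation,
roughly (aacraid's actual min/max vector arguments elided):

	/* was: managed, affinity-spread vectors */
	pci_alloc_irq_vectors(pdev, min_vecs, max_vecs,
			      PCI_IRQ_MSIX | PCI_IRQ_AFFINITY);
	/* now: unmanaged vectors; every CPU stays in each queue's mask */
	pci_alloc_irq_vectors(pdev, min_vecs, max_vecs, PCI_IRQ_MSIX);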
Signed-off-by: John Garry <john.g.garry@oracle.com>
Link: https://lore.kernel.org/r/20250715111535.499853-1-john.g.garry@oracle.com
Closes: https://lore.kernel.org/linux-scsi/20250618192427.3845724-1-jmeneghi@redhat.com/
Reviewed-by: John Meneghini <jmeneghi@redhat.com>
Tested-by: John Meneghini <jmeneghi@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Jakub Kicinski [Fri, 25 Jul 2025 01:12:54 +0000 (18:12 -0700)]
Merge tag 'for-net-next-2025-07-23' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next
Luiz Augusto von Dentz says:
====================
bluetooth-next pull request for net-next:
core:
- hci_sync: fix double free in 'hci_discovery_filter_clear()'
- hci_event: Mask data status from LE ext adv reports
- hci_devcd_dump: fix out-of-bounds via dev_coredumpv
- ISO: add socket option to report packet seqnum via CMSG
- hci_event: Add support for handling LE BIG Sync Lost event
- ISO: Support SCM_TIMESTAMPING for ISO TS
- hci_core: Add PA_LINK to distinguish BIG sync and PA sync connections
- hci_sock: Reset cookie to zero in hci_sock_free_cookie()
drivers:
- btusb: Add new VID/PID 0489/e14e for MT7925
- btusb: Add a new VID/PID 2c7c/7009 for MT7925
- btusb: Add RTL8852BE device 0x13d3:0x3618
- btusb: Add support for variant of RTL8851BE (USB ID 13d3:3601)
- btusb: Add USB ID 3625:010b for TP-LINK Archer TX10UB Nano
- btusb: QCA: Support downloading custom-made firmwares
- btusb: Add one more ID 0x28de:0x1401 for Qualcomm WCN6855
- nxp: add support for supply and reset
- btnxpuart: Add support for 4M baudrate
- btnxpuart: Correct the Independent Reset handling after FW dump
- btnxpuart: Add uevents for FW dump and FW download complete
- btintel: Define a macro for Intel Reset vendor command
- btintel_pcie: Support Function level reset
- btintel_pcie: Add support for device 0x4d76
- btintel_pcie: Make driver wait for alive interrupt
- btintel_pcie: Fix Alive Context State Handling
- hci_qca: Enable ISO data packet RX
* tag 'for-net-next-2025-07-23' of git://git.kernel.org/pub/scm/linux/kernel/git/bluetooth/bluetooth-next: (42 commits)
Bluetooth: Add PA_LINK to distinguish BIG sync and PA sync connections
Bluetooth: hci_event: Mask data status from LE ext adv reports
Bluetooth: btintel_pcie: Fix Alive Context State Handling
Bluetooth: btintel_pcie: Make driver wait for alive interrupt
Bluetooth: hci_devcd_dump: fix out-of-bounds via dev_coredumpv
Bluetooth: hci_sync: fix double free in 'hci_discovery_filter_clear()'
Bluetooth: btusb: Add one more ID 0x28de:0x1401 for Qualcomm WCN6855
Bluetooth: btusb: Sort WCN6855 device IDs by VID and PID
Bluetooth: btusb: QCA: Support downloading custom-made firmwares
Bluetooth: btnxpuart: Add uevents for FW dump and FW download complete
Bluetooth: btnxpuart: Correct the Independent Reset handling after FW dump
Bluetooth: ISO: Support SCM_TIMESTAMPING for ISO TS
Bluetooth: ISO: add socket option to report packet seqnum via CMSG
Bluetooth: btintel: Define a macro for Intel Reset vendor command
Bluetooth: Fix typos in comments
Bluetooth: RFCOMM: Fix typos in comments
Bluetooth: aosp: Fix typo in comment
Bluetooth: hci_bcm4377: Fix typo in comment
Bluetooth: btrtl: Fix typo in comment
Bluetooth: btmtk: Fix typo in log string
...
====================
We've added 3 non-merge commits during the last 3 day(s) which contain
a total of 4 files changed, 40 insertions(+), 15 deletions(-).
The main changes are:
1) Improved verifier error message for incorrect narrower load from
pointer field in ctx, from Paul Chaignon.
2) Disabled migration in nf_hook_run_bpf to address a syzbot report,
from Kuniyuki Iwashima.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next:
selftests/bpf: Test invalid narrower ctx load
bpf: Reject narrower access to pointer ctx fields
bpf: Disable migration in nf_hook_run_bpf().
====================
Stephen Rothwell [Mon, 21 Jul 2025 06:15:57 +0000 (16:15 +1000)]
sprintf.h requires stdarg.h
In file included from drivers/crypto/intel/qat/qat_common/adf_pm_dbgfs_utils.c:4:
include/linux/sprintf.h:11:54: error: unknown type name 'va_list'
   11 | __printf(2, 0) int vsprintf(char *buf, const char *, va_list);
      |                                                      ^~~~~~~
include/linux/sprintf.h:1:1: note: 'va_list' is defined in header '<stdarg.h>'; this is probably fixable by adding '#include <stdarg.h>'
Link: https://lkml.kernel.org/r/20250721173754.42865913@canb.auug.org.au
Fixes: 39ced19b9e60 ("lib/vsprintf: split out sprintf() and friends")
Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Andriy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Akinobu Mita [Sat, 19 Jul 2025 11:26:04 +0000 (20:26 +0900)]
resource: fix false warning in __request_region()
A warning is raised when __request_region() detects a conflict with a
resource whose resource.desc is IORES_DESC_DEVICE_PRIVATE_MEMORY.
But this warning is only valid for the iomem_resource tree.
The hmem device resource reuses resource.desc to store the NUMA node id,
which can cause spurious warnings.
This warning appeared on a machine with multiple CXL memory expanders,
where one of the NUMA node ids is 6, the same as the value of
IORES_DESC_DEVICE_PRIVATE_MEMORY.
In this environment the warning was merely spurious, but it looked like a
real problem at first sight, so it's better to fix it.
This change fixes it by restricting the warning to iomem_resource only.
This also adds a missing new line to the warning message.
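A minimal sketch of the restricted check (variable names assumed from the
__request_region() signature; the warning text is illustrative):

  /* the desc check is only meaningful for the iomem_resource tree */
  if (parent == &iomem_resource &&
      conflict->desc == IORES_DESC_DEVICE_PRIVATE_MEMORY)
	pr_warn("Unaddressable device %s %pR conflicts with %pR\n",
		conflict->name, conflict, res);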
Link: https://lkml.kernel.org/r/20250719112604.25500-1-akinobu.mita@gmail.com Fixes: 7dab174e2e27 ("dax/hmem: Move hmem device registration to dax_hmem.ko") Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Reviewed-by: Dan Williams <dan.j.williams@intel.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
SeongJae Park [Sat, 19 Jul 2025 18:19:32 +0000 (11:19 -0700)]
mm/damon/core: commit damos_quota_goal->nid
DAMOS quota goals use the 'nid' field when the metric is
DAMOS_QUOTA_NODE_MEM_{USED,FREE}_BP, but the goal commit function does not
update the goal's nid field. Fix it.
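The fix is presumably a one-line copy in the goal commit helper (a sketch;
the surrounding function is not quoted here):

  /* when committing src's parameters onto dst, carry the node id too */
  dst->nid = src->nid;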
Link: https://lkml.kernel.org/r/20250719181932.72944-1-sj@kernel.org Fixes: 0e1c773b501f ("mm/damon/core: introduce damos quota goal metrics for memory node utilization") [6.16.x] Signed-off-by: SeongJae Park <sj@kernel.org> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
====================
tools: ynl-gen: print setters for multi-val attrs
ncdevmem seems to prepare the queue attributes manually.
This is not ideal; YNL should be providing helpers for this.
Make YNL output allocation and setter helpers for multi-val attrs.
Jakub Kicinski [Wed, 23 Jul 2025 17:10:46 +0000 (10:10 -0700)]
selftests: drv-net: devmem: use new mattr ynl helpers
Use the just-added YNL helpers instead of manually setting
"_present" bits in the queue attrs. Compile tested only.
Reviewed-by: Donald Hunter <donald.hunter@gmail.com> Acked-by: Stanislav Fomichev <sdf@fomichev.me> Acked-by: Mina Almasry <almasrymina@google.com> Link: https://patch.msgid.link/20250723171046.4027470-6-kuba@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 23 Jul 2025 17:10:45 +0000 (10:10 -0700)]
tools: ynl-gen: print setters for multi-val attrs
For basic types we "flatten" setters. If a request "a" has a simple
nest "b" with value "val" we print helpers like:
  req_set_b_val(struct a *req, int val)
  {
	req->_present.b = 1;
	req->b._present.val = 1;
	req->b.val = val;
  }
This is not possible for multi-attr because those objects have to be
allocated dynamically by the user. Print "object level" setters so that a
user preparing the object doesn't have to futz with the presence bits
and other YNL internals; an example sketch follows below.
Add the ability to pass in the variable name to generated setters.
Using "req" here doesn't feel right: while the attr is part of a request,
it's not the request itself, so it seems cleaner to call it "obj".
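Following the "a"/"b"/"val" naming from the example above, an object-level
setter might look like this sketch (the exact generated shape may differ):

  b_set_val(struct b *obj, int val)
  {
	obj->_present.val = 1;
	obj->val = val;
  }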
Jakub Kicinski [Wed, 23 Jul 2025 17:10:44 +0000 (10:10 -0700)]
tools: ynl-gen: print alloc helper for multi-val attrs
In general YNL provides allocation and free helpers for types.
For pure nested structs which are used as multi-attr (and therefore
have to be allocated dynamically) we already print a free helper,
as it's needed when freeing the containing struct.
Add printing of the alloc helper for consistency. The helper
takes the number of entries to allocate as an argument, e.g.:
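The example itself is elided in this log; a plausible sketch, reusing the
"b" type name from the earlier patches:

  struct b *b_alloc(unsigned int n)
  {
	return calloc(n, sizeof(struct b));
  }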
Jakub Kicinski [Wed, 23 Jul 2025 17:10:43 +0000 (10:10 -0700)]
tools: ynl-gen: move free printing to the print_type_full() helper
Just to avoid making the main function even more enormous: before adding
more things to print, move the free printing into the helper that already
prints the type.
Jakub Kicinski [Fri, 25 Jul 2025 00:25:42 +0000 (17:25 -0700)]
Merge tag 'wireless-next-2025-07-24' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next
Johannes Berg says:
====================
Another wireless update:
- rtw89:
- STA+P2P concurrency
- support for USB devices RTL8851BU/RTL8852BU
- ath9k: OF support
- ath12k:
- more EHT/Wi-Fi 7 features
- encapsulation/decapsulation offload
- iwlwifi: some FIPS interoperability
- brcm80211: support SDIO 43751 device
- rt2x00: better DT/OF support
- cfg80211/mac80211:
- improved S1G support
- beacon monitor for MLO
* tag 'wireless-next-2025-07-24' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless-next: (199 commits)
ssb: use new GPIO line value setter callbacks for the second GPIO chip
wifi: Fix typos
wifi: brcmsmac: Use str_true_false() helper
wifi: brcmfmac: fix EXTSAE WPA3 connection failure due to AUTH TX failure
wifi: brcm80211: Remove yet more unused functions
wifi: brcm80211: Remove more unused functions
wifi: brcm80211: Remove unused functions
wifi: iwlwifi: Revert "wifi: iwlwifi: remove support of several iwl_ppag_table_cmd versions"
wifi: iwlwifi: check validity of the FW API range
wifi: iwlwifi: don't export symbols that we shouldn't
wifi: iwlwifi: mld: use spec link id and not FW link id
wifi: iwlwifi: mld: decode EOF bit for AMPDUs
wifi: iwlwifi: Remove support for rx OMI bandwidth reduction
wifi: iwlwifi: stop supporting iwl_omi_send_status_notif ver 1
wifi: iwlwifi: remove SC2F firmware support
wifi: iwlwifi: mvm: Remove NAN support
wifi: iwlwifi: mld: avoid outdated reorder buffer head_sn
wifi: iwlwifi: mvm: avoid outdated reorder buffer head_sn
wifi: iwlwifi: disable certain features for fips_enabled
wifi: iwlwifi: mld: support channel survey collection for ACS scans
...
====================
GCC appears to have kind of fragile inlining heuristics, in the
sense that it can change whether or not it inlines something based on
optimizations. It looks like the kcov instrumentation being added to (or,
in this case, removed from) a function changes the optimization results,
and some functions marked "inline" are _not_ inlined. In that case,
we end up with __init code calling a function not marked __init, and we
get the build warnings I'm trying to eliminate in the coming patch that
adds __no_sanitize_coverage to __init functions.
This problem is somewhat fragile (though using either __always_inline
or __init will deterministically solve it), but we've tripped over
this before with GCC and the solution has usually been to just use
__always_inline and move on.
For x86 this means forcing several functions to be inline with
__always_inline.
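The pattern is simply the following (hypothetical function name; only the
inlining annotation changes, the body stays as-is):

  /* was "static inline"; GCC sometimes declined to inline it, leaving
   * __init code calling a non-__init out-of-line copy */
  static __always_inline void startup_helper(void)
  {
	/* body unchanged */
  }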
GCC appears to have kind of fragile inlining heuristics, in the
sense that it can change whether or not it inlines something based on
optimizations. It looks like the kcov instrumentation being added to (or,
in this case, removed from) a function changes the optimization results,
and some functions marked "inline" are _not_ inlined. In that case,
we end up with __init code calling a function not marked __init, and we
get the build warnings I'm trying to eliminate in the coming patch that
adds __no_sanitize_coverage to __init functions.
This problem is somewhat fragile (though using either __always_inline
or __init will deterministically solve it), but we've tripped over
this before with GCC and the solution has usually been to just use
__always_inline and move on.
For arm64 this requires forcing one ACPI function to be inlined with
__always_inline.
Merge tag 'pci-v6.16-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Pull pci fix from Bjorn Helgaas:
- Create pwrctrl devices only when we need them, i.e., when
CONFIG_PCI_PWRCTRL is enabled.
This allows brcmstb to work around a pwrctrl regression by
disabling CONFIG_PCI_PWRCTRL (Manivannan Sadhasivam)
* tag 'pci-v6.16-fixes-4' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci:
PCI/pwrctrl: Create pwrctrl devices only when CONFIG_PCI_PWRCTRL is enabled
clk: tegra: periph: Make tegra_clk_periph_ops static
Reduce symbol visibility by converting tegra_clk_periph_ops to static.
Remove the extern declaration from clk.h, as the symbol is now locally
scoped to clk-periph.c.
Brian Masney [Thu, 10 Jul 2025 21:10:45 +0000 (17:10 -0400)]
clk: imx: scu: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
This driver also implements both the determine_rate() and round_rate()
clk ops, and the round_rate() clk ops is deprecated. When both are
defined, clk_core_determine_round_nolock() from the clk core will only
use the determine_rate() clk ops, so let's remove the round_rate() clk
ops since it's unused.
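For reference, the mechanical shape of the conversion is roughly the
following (hypothetical names; foo_pick_rate() stands in for each driver's
existing rate-selection logic):

  /* before */
  static long foo_round_rate(struct clk_hw *hw, unsigned long rate,
			     unsigned long *prate)
  {
	return foo_pick_rate(rate, *prate);
  }

  /* after */
  static int foo_determine_rate(struct clk_hw *hw,
				struct clk_rate_request *req)
  {
	req->rate = foo_pick_rate(req->rate, req->best_parent_rate);
	return 0;
  }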
Brian Masney [Thu, 10 Jul 2025 21:10:44 +0000 (17:10 -0400)]
clk: imx: pllv4: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:43 +0000 (17:10 -0400)]
clk: imx: pllv3: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:42 +0000 (17:10 -0400)]
clk: imx: pllv2: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:41 +0000 (17:10 -0400)]
clk: imx: pll14xx: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:40 +0000 (17:10 -0400)]
clk: imx: pfd: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:39 +0000 (17:10 -0400)]
clk: imx: frac-pll: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:38 +0000 (17:10 -0400)]
clk: imx: fracn-gppll: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:37 +0000 (17:10 -0400)]
clk: imx: fixup-div: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
The change to call fixup_div->ops->determine_rate() instead of
fixup_div->ops->round_rate() was done by hand.
Mark Brown [Thu, 24 Jul 2025 22:17:01 +0000 (23:17 +0100)]
ASoC: fsl_xcvr: get channel status data in two cases
Merge series from Shengjiu Wang <shengjiu.wang@nxp.com>:
There are two different cases for getting the channel status data:
1. When a PHY is present, firmware runs on the M core and fills the
channel status into a RAM space; the driver needs to read it from RAM.
2. Without a PHY, the channel status needs to be obtained from registers.
Brian Masney [Thu, 10 Jul 2025 21:10:36 +0000 (17:10 -0400)]
clk: imx: cpu: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
Brian Masney [Thu, 10 Jul 2025 21:10:35 +0000 (17:10 -0400)]
clk: imx: busy: convert from round_rate() to determine_rate()
The round_rate() clk ops is deprecated, so migrate this driver from
round_rate() to determine_rate() using the Coccinelle semantic patch
on the cover letter of this series.
The change to call busy->div_ops->determine_rate() instead of
busy->div_ops->round_rate() was done by hand.
Brian Masney [Thu, 10 Jul 2025 21:10:34 +0000 (17:10 -0400)]
clk: imx: composite-93: remove round_rate() in favor of determine_rate()
This driver implements both the determine_rate() and round_rate() clk
ops, and the round_rate() clk ops is deprecated. When both are defined,
clk_core_determine_round_nolock() from the clk core will only use the
determine_rate() clk ops, so let's remove the round_rate() clk ops since
it's unused.
Brian Masney [Thu, 10 Jul 2025 21:10:33 +0000 (17:10 -0400)]
clk: imx: composite-8m: remove round_rate() in favor of determine_rate()
This driver implements both the determine_rate() and round_rate() clk
ops, and the round_rate() clk ops is deprecated. When both are defined,
clk_core_determine_round_nolock() from the clk core will only use the
determine_rate() clk ops, so let's remove the round_rate() clk ops since
it's unused.
Paul E. McKenney [Wed, 23 Jul 2025 23:13:45 +0000 (16:13 -0700)]
selftests/pidfd: Fix duplicate-symbol warnings for SCHED_ CPP symbols
The pidfd selftests run in userspace and include both userspace and kernel
header files. On some distros (for example, CentOS), this results in
duplicate-symbol warnings in allmodconfig builds, while on other distros
(for example, Ubuntu) it does not.
Therefore, use #undef to get rid of the userspace definitions in favor
of the kernel definitions.
Other ways of handling this include splitting up the selftest code so
that the userspace definitions go into one translation unit and the
kernel definitions into another (which might or might not be feasible)
or to adjust compiler command-line options to suppress the warnings
(which might or might not be desirable).
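A sketch of the approach (the SCHED_ symbol names here are illustrative,
not the exact list from the patch):

  #include <sched.h>		/* userspace definitions */

  /* drop the userspace copies so the kernel definitions below win */
  #undef SCHED_FIFO
  #undef SCHED_RR

  #include <linux/sched.h>	/* kernel definitions */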
[ paulmck: Apply Shuah Khan feedback. ]
Link: https://lore.kernel.org/r/cc7e4fe7-299f-4bf3-af46-df6551d61997@paulmck-laptop Signed-off-by: Paul E. McKenney <paulmck@kernel.org> Reviewed-by: Shuah Khan <skhan@linuxfoundation.org> Cc: Christian Brauner <brauner@kernel.org> Cc: <linux-kselftest@vger.kernel.org> Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
Stephen Boyd [Thu, 24 Jul 2025 22:03:54 +0000 (15:03 -0700)]
Merge tag 'thead-clk-for-v6.17-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/fustini/linux into clk-thead
Pull one more T-HEAD clk driver update from Drew Fustini:
Yao Zi has fixed an issue where the c910 mux clk could end up as an
orphan in CCF when the bootloader reparents it to the c910-i0 mux clk.
The solution is to refactor the handling of mux clocks by embedding a
clk_mux structure directly in ccu_mux. This allows the mux clocks to be
registered with devm_clk_hw_register() without allocating any new clk_hw
pointer, which solves the orphan issue.
This change has been tested in linux-next. The LPi4a still boots okay
without clk_ignore_unused and peripherals like serial, emmc and ethernet
are functional. The file /sys/kernel/debug/clk/c910/clk_possible_parents
now correctly outputs: "c910-i0 cpu-pll1"
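The registration then boils down to something like this sketch (variable
and field names assumed; struct clk_mux embeds its own clk_hw):

  /* no separately allocated clk_hw: register the embedded mux's hw */
  ret = devm_clk_hw_register(dev, &cm->mux.hw);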
* tag 'thead-clk-for-v6.17-p2' of git://git.kernel.org/pub/scm/linux/kernel/git/fustini/linux:
clk: thead: th1520-ap: Describe mux clocks with clk_mux
Steven Rostedt [Mon, 21 Jul 2025 17:42:12 +0000 (13:42 -0400)]
selftests/tracing: Fix false failure of subsystem event test
The subsystem event test enables all "sched" events and makes sure there are
at least 3 different events in the output. It used to cat the entire trace
file into "wc -l", but on slow machines that could take a very long time.
To solve that, it was changed to read just the first 100 lines of the
trace file. This can cause false failures, as some events repeat so often
that the 100 examined lines may contain only a single event.
Instead, create an awk script that looks for 3 different events and exits
as soon as it finds them. This still finds the 3 events the test looks for
(eventually, if the tracing works), while exiting early once the test is
satisfied so that slower machines don't run forever.
Frank Li [Thu, 10 Jul 2025 19:13:53 +0000 (15:13 -0400)]
misc: pci_endpoint_test: Add doorbell test case
Add doorbell support with the help of three new registers:
PCI_ENDPOINT_TEST_DB_BAR, PCI_ENDPOINT_TEST_DB_OFFSET, and
PCI_ENDPOINT_TEST_DB_DATA.
The testcase works by triggering the doorbell in the Endpoint: the value
from the PCI_ENDPOINT_TEST_DB_DATA register is written to the address
provided by the PCI_ENDPOINT_TEST_DB_OFFSET register of the BAR indicated
by the PCI_ENDPOINT_TEST_DB_BAR register, and then the test waits for the
completion status from the Endpoint.
Signed-off-by: Frank Li <Frank.Li@nxp.com>
[mani: removed one spurious change and reworded the commit message] Signed-off-by: Manivannan Sadhasivam <mani@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Niklas Cassel <cassel@kernel.org> Link: https://patch.msgid.link/20250710-ep-msi-v21-7-57683fc7fb25@nxp.com
Frank Li [Thu, 10 Jul 2025 19:13:52 +0000 (15:13 -0400)]
PCI: endpoint: pci-epf-test: Add doorbell test support
Add doorbell support by allocating a dedicated BAR using the
pci_epf_alloc_doorbell() API and mapping the Endpoint MSI controller
message data address to it. The data to be written in the message address
is stored in the 'pci_epf_test_reg::doorbell_data' register. Finally, the
RC can trigger the doorbell in the Endpoint by writing the content of the
'doorbell_data' register to the offset specified in 'doorbell_offset' of
the 'doorbell_bar' BAR.
Triggering of the doorbell is detected by pci_epf_test_doorbell_handler(),
which is bound to the doorbell IRQ. On successful completion,
STATUS_DOORBELL_SUCCESS status is set in the above mentioned handler.
To avoid breaking compatibility between host and endpoint, add two new
commands: COMMAND_ENABLE_DOORBELL and COMMAND_DISABLE_DOORBELL.
The doorbell is allocated when COMMAND_ENABLE_DOORBELL command is called
and destroyed when COMMAND_DISABLE_DOORBELL is called.
This doorbell feature only works when both RC and EP drivers support it.
If one of them doesn't support the feature, the testcase will fail.
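The handler shape implied above is roughly this sketch (only the handler
and status names are taken from the text; the register lookup is assumed):

  static irqreturn_t pci_epf_test_doorbell_handler(int irq, void *data)
  {
	struct pci_epf_test *epf_test = data;
	struct pci_epf_test_reg *reg = epf_test->reg;	/* layout assumed */

	reg->status |= STATUS_DOORBELL_SUCCESS;
	return IRQ_HANDLED;
  }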
Signed-off-by: Frank Li <Frank.Li@nxp.com>
[mani: code cleanups and reworded commit message] Signed-off-by: Manivannan Sadhasivam <mani@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Niklas Cassel <cassel@kernel.org> Link: https://patch.msgid.link/20250710-ep-msi-v21-6-57683fc7fb25@nxp.com
Frank Li [Thu, 10 Jul 2025 19:13:51 +0000 (15:13 -0400)]
PCI: endpoint: Add pci_epf_align_inbound_addr() helper for inbound address alignment
Add pci_epf_align_inbound_addr() to align the inbound addresses according
to PCI BAR alignment requirements. The aligned base address and offset are
returned via 'base' and 'off' parameters.
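The prototype implied by the description is roughly (assumed, not quoted
from the patch):

  int pci_epf_align_inbound_addr(struct pci_epf *epf, enum pci_barno bar,
				 u64 addr, dma_addr_t *base, size_t *off);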
Signed-off-by: Frank Li <Frank.Li@nxp.com>
[mani: reworded kernel-doc and commit message] Signed-off-by: Manivannan Sadhasivam <mani@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Niklas Cassel <cassel@kernel.org> Link: https://patch.msgid.link/20250710-ep-msi-v21-5-57683fc7fb25@nxp.com
Frank Li [Thu, 10 Jul 2025 19:13:50 +0000 (15:13 -0400)]
PCI: endpoint: pci-ep-msi: Add checks for MSI parent and mutability
Some MSI controllers can change the address/data pair during the execution
of the irq_chip::irq_set_affinity() callback. Since the current PCI Endpoint
framework cannot support mutable MSI controllers, call the
irq_domain_is_msi_immutable() API to check whether the controller is
immutable.
Also ensure that the MSI domain is a parent MSI domain so that it can
allocate address/data pairs; a sketch of the checks follows below.
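Roughly, the added checks amount to this sketch (domain lookup and error
messages elided):

  /* a parent MSI domain is needed to allocate address/data pairs */
  if (!irq_domain_is_msi_parent(dom))
	return -EINVAL;

  /* mutable controllers may change the pair on irq_set_affinity() */
  if (!irq_domain_is_msi_immutable(dom))
	return -EINVAL;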
Signed-off-by: Frank Li <Frank.Li@nxp.com>
[mani: reworded error message and commit message] Signed-off-by: Manivannan Sadhasivam <mani@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Tested-by: Niklas Cassel <cassel@kernel.org> Link: https://patch.msgid.link/20250710-ep-msi-v21-4-57683fc7fb25@nxp.com
pm_runtime_put_autosuspend(), pm_runtime_put_sync_autosuspend(),
pm_runtime_autosuspend() and pm_request_autosuspend() now include a call
to pm_runtime_mark_last_busy(). Remove the now-redundant explicit call to
pm_runtime_mark_last_busy().
pm_runtime_put_autosuspend(), pm_runtime_put_sync_autosuspend(),
pm_runtime_autosuspend() and pm_request_autosuspend() now include a call
to pm_runtime_mark_last_busy(). Remove the now-redundant explicit call to
pm_runtime_mark_last_busy().