Peter Zijlstra [Sat, 12 Jul 2025 03:33:43 +0000 (03:33 +0000)]
locking/mutex: Rework task_struct::blocked_on
Track the blocked-on relation for mutexes, to allow following this
relation at schedule time.
  task
    | blocked-on
    v
  mutex
    | owner
    v
  task
This all will be used for tracking blocked-task/mutex chains
with the proxy-execution patch, in a similar fashion to how
priority inheritance is done with rt_mutexes.
For serialization, blocked-on is only set by the task itself
(current). Both setting and clearing it (the latter potentially by
others) are done while holding the mutex::wait_lock.
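As a minimal sketch of that rule (the helper names here are hypothetical;
the actual patch open-codes this in the mutex slow path):

  /* Illustrative only; helper names are hypothetical. */
  static inline void set_task_blocked_on(struct task_struct *p, struct mutex *m)
  {
          lockdep_assert_held(&m->wait_lock);
          WARN_ON_ONCE(p != current);     /* only the task itself sets it */
          p->blocked_on = m;
  }

  static inline void clear_task_blocked_on(struct task_struct *p, struct mutex *m)
  {
          lockdep_assert_held(&m->wait_lock);  /* clearing may be done by others */
          p->blocked_on = NULL;
  }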
[minor changes while rebasing]
[jstultz: Fix blocked_on tracking in __mutex_lock_common in error paths] Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Connor O'Brien <connoro@google.com> Signed-off-by: John Stultz <jstultz@google.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lkml.kernel.org/r/20250712033407.2383110-3-jstultz@google.com
John Stultz [Sat, 12 Jul 2025 03:33:42 +0000 (03:33 +0000)]
sched: Add CONFIG_SCHED_PROXY_EXEC & boot argument to enable/disable
Add a CONFIG_SCHED_PROXY_EXEC option, along with a boot argument
sched_proxy_exec= that can be used to disable the feature at boot
time if CONFIG_SCHED_PROXY_EXEC was enabled.
Also use this option to allow rq->donor to be different from
rq->curr.
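For illustration, a hedged sketch of how such a boot-time switch is
typically wired up (the exact parsing in the patch may differ):

  /* Sketch: the boot argument can only disable what the config enabled. */
  static bool proxy_exec_enabled = IS_ENABLED(CONFIG_SCHED_PROXY_EXEC);

  static int __init setup_proxy_exec(char *str)
  {
          bool enabled;

          if (!str || kstrtobool(str, &enabled))
                  return -EINVAL;
          proxy_exec_enabled = enabled && IS_ENABLED(CONFIG_SCHED_PROXY_EXEC);
          return 0;
  }
  early_param("sched_proxy_exec", setup_proxy_exec);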
xen/gntdev: remove struct gntdev_copy_batch from stack
When compiling the kernel with LLVM, the following warning was issued:
drivers/xen/gntdev.c:991: warning: stack frame size (1160) exceeds
limit (1024) in function 'gntdev_ioctl'
The main reason is struct gntdev_copy_batch, which is located on the
stack and is nearly 1 KiB in size.
For performance reasons it shouldn't simply be dynamically allocated
on each use, so allocate a new instance when needed and, instead of
freeing it, put it into a list of free structs anchored in struct
gntdev_priv.
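A sketch of the free-list pattern described above (the list/lock field
names are assumptions, not necessarily those used in the patch):

  /* Grab a cached batch if one is available, else allocate a fresh one. */
  static struct gntdev_copy_batch *gntdev_get_batch(struct gntdev_priv *priv)
  {
          struct gntdev_copy_batch *batch;

          mutex_lock(&priv->batch_lock);
          batch = list_first_entry_or_null(&priv->batch_list,
                                           struct gntdev_copy_batch, next);
          if (batch)
                  list_del(&batch->next);
          mutex_unlock(&priv->batch_lock);

          return batch ?: kmalloc(sizeof(*batch), GFP_KERNEL);
  }

  /* Instead of kfree(), park the batch on the per-priv free list. */
  static void gntdev_put_batch(struct gntdev_priv *priv,
                               struct gntdev_copy_batch *batch)
  {
          mutex_lock(&priv->batch_lock);
          list_add(&batch->next, &priv->batch_list);
          mutex_unlock(&priv->batch_lock);
  }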
drm/xe: Allow specifying number of extra dwords at the end of wa bb emission
Indirect context setup will need more than one.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250711160153.49833-6-tvrtko.ursulin@igalia.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
drm/xe: Track number of written dwords from workaround batch buffer emission
Indirect context setup will need to get to the number of written dwords.
Let's add it as an output parameter so it can be accessed from the finish
helper regardless of whether code is writing directly or via a shadow
buffer.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250711160153.49833-5-tvrtko.ursulin@igalia.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
drm/xe: Rename utilization workaround emission function
Lucas suggested consolidating on a slightly different naming scheme which
will align better with the upcoming additions.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Suggested-by: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Matt Roper <matthew.d.roper@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250711160153.49833-4-tvrtko.ursulin@igalia.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Group the function arguments in a struct for more readable code and easier
extension.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250711160153.49833-3-tvrtko.ursulin@igalia.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Ingo Molnar [Thu, 15 May 2025 13:27:11 +0000 (15:27 +0200)]
x86/tools: insn_sanity.c: Emit standard build success messages
The standard 'success' output of insn_decoder_test spams build logs with:
arch/x86/tools/insn_sanity: Success: decoded and checked 1000000 random instructions with 0 errors (seed:0x2e263877)
Prefix the message with the standard ' ' (two spaces) used by kbuild
to denote regular build messages, making it easier for tools to
filter out warnings and errors.
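For illustration, the change amounts to adding the prefix to the success
printf (variable names here are hypothetical):

  /* Two leading spaces mark this as a regular kbuild build message. */
  fprintf(stdout,
          "  %s: Success: decoded and checked %d random instructions with %d errors (seed:0x%x)\n",
          prog, insns, errors, seed);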
Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jürgen Groß <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Link: https://lore.kernel.org/r/20250515132719.31868-6-mingo@kernel.org
Ingo Molnar [Thu, 15 May 2025 13:27:10 +0000 (15:27 +0200)]
x86/tools: insn_decoder_test.c: Emit standard build success messages
The standard 'success' output of insn_decoder_test spams build logs with:
arch/x86/tools/insn_decoder_test: success: Decoded and checked 8258521 instructions
Prefix the message with the standard ' ' (two spaces) used by kbuild to denote
regular build messages, making it easier for tools to filter out
warnings and errors.
Signed-off-by: Ingo Molnar <mingo@kernel.org> Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de> Cc: Ard Biesheuvel <ardb@kernel.org> Cc: Arnd Bergmann <arnd@arndb.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jürgen Groß <jgross@suse.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Michal Marek <michal.lkml@markovi.net> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Link: https://lore.kernel.org/r/20250515132719.31868-5-mingo@kernel.org
Generalize the wa bb emission by splitting it into three phases - setup,
emit and finish - and extract the setup and finish steps into helpers.
This will enable using the same infrastructure for emitting the indirect
context workarounds.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250711160153.49833-2-tvrtko.ursulin@igalia.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Lucas De Marchi [Fri, 11 Jul 2025 21:49:12 +0000 (14:49 -0700)]
drm/xe: Fix missing kernel-doc
Fix warning:
Warning: drivers/gpu/drm/xe/xe_device_types.h:658 struct member 'wa_active' not described in 'xe_device'
Fixes: 661a6950e061 ("drm/xe: Add infrastructure for Device OOB workarounds") Cc: Matt Atwood <matthew.s.atwood@intel.com> Reviewed-by: Jonathan Cavitt <joanthan.cavitt@intel.com> Link: https://lore.kernel.org/r/20250711214911.2009714-2-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
xe_bo_create_from_data()'s last use was removed in 2023 by
commit 0e1a47fcabc8 ("drm/xe: Add a helper for DRM device-lifetime BO
create").
xe_rtp_match_first_gslice_fused_off()'s last use was removed in 2023 by
commit 4e124151fcfc ("drm/xe/dg2: Drop pre-production workarounds").
Remove them, along with xe_dss_mask_empty, whose last use was by
xe_rtp_match_first_gslice_fused_off().
(Xe has a bunch of other symbols that have been added but not used;
given how new it is, I've left those, as opposed to these, which
had the code that used them removed.)
Pavel Begunkov [Mon, 14 Jul 2025 10:57:23 +0000 (11:57 +0100)]
io_uring/zcrx: disallow user selected dmabuf offset and size
zcrx shouldn't be so frivolous about cutting a dmabuf sgtable and taking
a subrange into it; the dmabuf layer might not be expecting that. It
shouldn't be a problem for now, but since the zcrx dmabuf support is new
and there shouldn't be any real users, let's play safe and reject user
provided ranges into dmabufs. Also, it shouldn't be needed as userspace
should size them appropriately.
Yicong Yang [Thu, 19 Jun 2025 12:55:55 +0000 (20:55 +0800)]
drivers/perf: hisi: Support PMUs with no interrupt
We'll have PMUs that don't have an interrupt to indicate counter
overflow, but the Uncore PMU core assumes all the PMUs have an
interrupt. So handle this case in the core. The existing PMUs
won't be affected.
Junhao He [Thu, 19 Jun 2025 12:55:54 +0000 (20:55 +0800)]
drivers/perf: hisi: Relax the event number check of v2 PMUs
The supported event number range of each Uncore PMU is provided by
each driver in hisi_pmu::check_event, and out-of-range events
will be rejected. A later version with an expanded event number range
needs to register the PMU with an updated hisi_pmu::check_event
even if it's the only update, which means the expanded events
cannot be used unless the driver is updated. However, the unsupported
events won't be counted by the hardware, so we can relax the event
number check to allow the use of the expanded events.
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Junhao He <hejunhao3@huawei.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250619125557.57372-6-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Junhao He [Thu, 19 Jun 2025 12:55:53 +0000 (20:55 +0800)]
drivers/perf: hisi: Add support for HiSilicon SLLC v3 PMU driver
SLLC v3 PMU has the following changes compared to previous version:
a) update the register layout
b) update the definition of SRCID_CTRL and TGTID_CTRL registers.
To be compatible with v2, we use the maximum width (11 bits)
and mask off the extra bits for each register.
c) remove latency events (driver does not need to be adapted).
SLLC v3 PMU is identified with HID HISI0264.
Signed-off-by: Junhao He <hejunhao3@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250619125557.57372-5-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Junhao He [Thu, 19 Jun 2025 12:55:52 +0000 (20:55 +0800)]
drivers/perf: hisi: Use ACPI driver_data to retrieve SLLC PMU information
Make use of struct acpi_device_id::driver_data for version-specific
information rather than judging the version register. This helps
simplify the probe process and also makes it a bit easier to extend.
Factor out SLLC register definition to struct hisi_sllc_pmu_regs.
No functional changes intended.
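The pattern looks roughly like this (the v2 HID and entry names below are
assumptions for illustration; HISI0264 for v3 comes from this series):

  static const struct acpi_device_id hisi_sllc_pmu_acpi_match[] = {
          { "HISI0263", (kernel_ulong_t)&hisi_sllc_v2_pmu_regs },
          { "HISI0264", (kernel_ulong_t)&hisi_sllc_v3_pmu_regs },
          { }
  };

  static int hisi_sllc_pmu_probe(struct platform_device *pdev)
  {
          const struct hisi_sllc_pmu_regs *regs;

          /* acpi_device_id::driver_data; no version register read needed */
          regs = device_get_match_data(&pdev->dev);
          if (!regs)
                  return -ENODEV;
          /* ... program the PMU via regs-> offsets ... */
          return 0;
  }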
Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Junhao He <hejunhao3@huawei.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250619125557.57372-4-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Junhao He [Thu, 19 Jun 2025 12:55:51 +0000 (20:55 +0800)]
drivers/perf: hisi: Add support for HiSilicon DDRC v3 PMU driver
The HiSilicon DDRC v3 PMU has a different interrupt register offset
compared to v2. Add device information for the v3 PMU with ACPI
HID HISI0235.
Signed-off-by: Junhao He <hejunhao3@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250619125557.57372-3-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Junhao He [Thu, 19 Jun 2025 12:55:50 +0000 (20:55 +0800)]
drivers/perf: hisi: Simplify the probe process for each DDRC version
Versions 1 and 2 of the DDRC PMU also use different HIDs. Make use of
struct acpi_device_id::driver_data for version-specific information
rather than judging the version register. This helps simplify the
probe process and also makes it a bit easier to extend.
In order to support this, extend struct hisi_pmu_dev_info with
version-specific counter bits and event range.
Signed-off-by: Junhao He <hejunhao3@huawei.com> Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> Signed-off-by: Yicong Yang <yangyicong@hisilicon.com> Link: https://lore.kernel.org/r/20250619125557.57372-2-yangyicong@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Sriram R [Fri, 11 Jul 2025 09:17:04 +0000 (14:47 +0530)]
wifi: ath12k: Add support to enqueue management frame at MLD level
A multi-link client can use any link for transmissions. It can decide to
put one link in power save mode for longer periods while listening on the
other links as per MLD listen interval. Unicast management frames sent to
that link station might get dropped if that link station is in power save
mode or inactive. In such cases, the firmware can decide which link
to use.
Allow the firmware to decide which link a management frame should be
sent on by filling the hardware link with the maximum value of u32, so
that the firmware is not tied to a specific link for transmission and
the management frames are link agnostic. For QCN devices, all action
frames are marked as link agnostic. For WCN devices, if the device is
configured as an AP, all frames other than probe response frames,
authentication frames, association response frames, re-association
response frames and ADDBA response frames are marked as link agnostic;
if the device is configured as a station, all frames other than probe
request frames, authentication frames, de-authentication frames and
ADDBA response frames are marked as link agnostic.
wifi: ath12k: Enable memory profile selection for QCN9274
The QCN9274 supports two memory profiles: a default profile and a
low-memory profile. The driver signals the firmware to enable
low-memory optimizations using the QMI initialization service.
Add support to select the low-memory profile on systems with less than
512 MB RAM.
wifi: ath12k: Remove redundant TID calculation for QCN9274
Currently, the host sends the num_tids (number of TIDs (Traffic Identifiers))
value to firmware via WMI_INIT_CMD during WMI initialization. However,
the firmware does not use this value, as it determines the number of
TIDs using its own internal logic.
Hence, remove the redundant num_tids calculation logic for QCN9274.
wifi: ath12k: Add a table of parameters entries impacting memory consumption
Introduce ath12k_mem_profile_based_param structure to define
configuration parameters for both default and low-memory profiles.
Add support for enabling the low-memory profile in the follow-up
patch by making the following changes:
- Reduce sizes for transmit, receive, and monitor descriptor rings.
- Reduce transmit and receive descriptor count.
- Limit the maximum number of virtual devices (vdevs) to 9.
- Reduce the maximum number of clients supported per radio.
Centralize these parameters in the ath12k_mem_profile_based_param
structure to simplify switching between memory profiles.
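The table's shape is roughly as follows (enum and field names are
assumptions; of the values, only the vdev limit of 9 comes from the
changelog):

  enum ath12k_mem_profile {
          ATH12K_MEM_PROFILE_DEFAULT,
          ATH12K_MEM_PROFILE_LOW_512M,
  };

  struct ath12k_mem_profile_based_param {
          u32 max_vdevs;
          u32 max_clients;
          u32 num_tx_desc;
          u32 rx_ring_size;
  };

  static const struct ath12k_mem_profile_based_param ath12k_mem_params[] = {
          [ATH12K_MEM_PROFILE_DEFAULT] = {
                  /* full-size rings and descriptor counts */
          },
          [ATH12K_MEM_PROFILE_LOW_512M] = {
                  .max_vdevs = 9,
                  /* reduced rings, descriptors and client count */
          },
  };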
Shouping Wang [Fri, 11 Jul 2025 18:15:17 +0000 (19:15 +0100)]
perf/arm-ni: Support sharing IRQs within an NI instance
NI-700 has a distinct PMU interrupt output for each Clock Domain,
however some integrations may still combine these together externally.
The initial driver didn't attempt to support this, in anticipation of a
more general solution for IRQ sharing between system PMU instances, but
that's still a way off, so let's make this intermediate step for now to
at least allow sharing IRQs within an individual NI instance.
Now that CPU affinity and migration are cleaned up, it's fairly
straightforward to adopt similar logic to arm-cmn, to identify CDs with
a common interrupt and loop over them directly in the handler.
Robin Murphy [Fri, 11 Jul 2025 18:15:16 +0000 (19:15 +0100)]
perf/arm-ni: Consolidate CPU affinity handling
Since overflow interrupts from the individual PMUs are infrequent and
unlikely to coincide, and we make no attempt to balance them across
CPUs anyway, there's really not much point tracking a separate CPU
affinity per PMU. Move the CPU affinity and hotplug migration up to
the NI instance level.
nvme: fix inconsistent RCU list manipulation in nvme_ns_add_to_ctrl_list()
When inserting a namespace into the controller's namespace list, the
function uses list_add_rcu() when the namespace is inserted in the middle
of the list, but falls back to a regular list_add() when adding at the
head of the list.
This inconsistency could lead to race conditions during concurrent
access, as users might observe a partially updated list. Fix this by
consistently using list_add_rcu() in both code paths to ensure proper
RCU protection throughout the entire function.
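A sketch of the fixed insertion logic (simplified; shaped after the
function described above):

  static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
  {
          struct nvme_ns *tmp;

          list_for_each_entry_reverse(tmp, &ns->ctrl->namespaces, list) {
                  if (tmp->head->ns_id < ns->head->ns_id) {
                          list_add_rcu(&ns->list, &tmp->list);
                          return;
                  }
          }
          /* head-of-list insertion now also publishes with RCU */
          list_add_rcu(&ns->list, &ns->ctrl->namespaces);
  }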
Fixes: be647e2c76b2 ("nvme: use srcu for iterating namespace list") Signed-off-by: Zheng Qixing <zhengqixing@huawei.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
Cheng Ming Lin [Mon, 14 Jul 2025 03:10:23 +0000 (11:10 +0800)]
spi: Add check for 8-bit transfer with 8 IO mode support
The current SPI framework does not verify if the SPI device supports
8 IO mode when doing an 8-bit transfer. This patch adds a check to
ensure that if the transfer tx_nbits or rx_nbits is 8, the SPI mode must
support 8 IO. If not, an error is returned, preventing undefined behavior.
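A sketch of the added validation, mirroring the existing single/dual/quad
checks in __spi_validate() (SPI_NBITS_OCTAL is assumed to be defined as
0x08 alongside the existing SPI_TX_OCTAL/SPI_RX_OCTAL mode bits):

  if (xfer->tx_buf && xfer->tx_nbits == SPI_NBITS_OCTAL &&
      !(spi->mode & SPI_TX_OCTAL))
          return -EINVAL;
  if (xfer->rx_buf && xfer->rx_nbits == SPI_NBITS_OCTAL &&
      !(spi->mode & SPI_RX_OCTAL))
          return -EINVAL;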
ARM: rockchip: fix kernel hang during smp initialization
In order to bring up secondary CPUs, the main CPU writes trampoline
code to SRAM. The trampoline code is written while the secondary
CPUs are powered on (at least that is true for the RK3188 CPU).
Sometimes that leads to a kernel hang, probably because a secondary
CPU executes the trampoline code when the kernel doesn't expect it.
The patch moves the SRAM initialization step to the point where all
secondary CPUs are powered down.
That fixes rare hangs on RK3188:
[ 0.091568] CPU0: thread -1, cpu 0, socket 0, mpidr 80000000
[ 0.091996] rockchip_smp_prepare_cpus: ncores 4
netfilter: nf_tables: hide clash bit from userspace
It's a kernel implementation detail, at least at this time:
We can later decide to revert this patch if there is a compelling
reason, but then we should also remove the ifdef that prevents exposure
of ip_conntrack_status enum IPS_NAT_CLASH value in the uapi header.
Clash entries are not included in dumps (true for both old /proc
and ctnetlink) either. So for now exclude the clash bit when dumping.
Fixes: 7e5c6aa67e6f ("netfilter: nf_tables: add packets conntrack state to debug trace info") Link: https://lore.kernel.org/netfilter-devel/aGwf3dCggwBlRKKC@strlen.de/ Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 27 Jun 2025 14:27:51 +0000 (16:27 +0200)]
selftests: netfilter: add conntrack clash resolution test case
Add a dedicated test to exercise conntrack clash resolution path.
The test program emits 128 identical UDP packets in parallel, then reads
back replies from a socat echo server.
Also check (via conntrack -S) that the clash path was hit at least once.
Due to the racy nature of the test it's possible that, despite the
threaded program, all packets were processed in order or on the same CPU;
emit a SKIP warning in this case.
Two tests are added:
- one to test the simpler, non-nat case
- one to exercise clash resolution where packets
might have different nat transformations attached to them.
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Fri, 27 Jun 2025 14:27:50 +0000 (16:27 +0200)]
selftests: netfilter: conntrack_resize.sh: extend resize test
Extend the resize test:
- continuously dump table both via /proc and ctnetlink interfaces while
table is resized in a loop.
- if socat is available, send udp packets in addition to ping requests.
- increase/decrease the icmp and udp timeouts while resizes are happening.
This makes sure we also exercise the 'ct has expired' check that happens
on conntrack lookup.
Signed-off-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Alok Tiwari [Tue, 24 Jun 2025 19:43:40 +0000 (12:43 -0700)]
perf/cxlpmu: Fix typos in cxl_pmu.c comments and documentation
Fix several minor typo errors in comments:
- Remove duplicated word "a" in "a a VID / GroupID".
- Correct "Opcopdes" to "Opcodes" in CXL spec reference.
- Fix spelling of "implemnted" to "implemented".
Improves code readability and documentation consistency.
Alok Tiwari [Tue, 24 Jun 2025 19:43:39 +0000 (12:43 -0700)]
perf/cxlpmu: Remove unintended newline from IRQ name format string
The IRQ name format string used in devm_kasprintf() mistakenly included
a newline character "\n".
This could lead to confusing log output or misformatted names in sysfs
or debug messages.
This fix removes the newline to ensure proper IRQ naming.
Alok Tiwari [Tue, 24 Jun 2025 19:43:38 +0000 (12:43 -0700)]
perf/cxlpmu: Fix devm_kcalloc() argument order in cxl_pmu_probe()
The previous code mistakenly swapped the count and size parameters.
This fix corrects the argument order in devm_kcalloc() to follow the
conventional count, size form, avoiding potential confusion or bugs.
Alice Ryhl [Mon, 23 Jun 2025 13:57:27 +0000 (13:57 +0000)]
poll: rust: allow poll_table ptrs to be null
It's possible for a poll_table to be null. This can happen if an
end-user just wants to know if a resource has events right now without
registering a waiter for when events become available. Furthermore,
these null pointers should be handled transparently by the API, so we
should not change `from_ptr` to return an `Option`. Thus, change
`PollTable` to wrap a raw pointer rather than use a reference so that
you can pass null.
Comments mentioning `struct poll_table` are changed to just `poll_table`
since `poll_table` is a typedef. (It's a typedef because it's supposed
to be opaque.)
Reviewed-by: Benno Lossin <lossin@kernel.org> Signed-off-by: Alice Ryhl <aliceryhl@google.com>
ALSA: hda/cs35l56: Workaround bad dev-index on Lenovo Yoga Book 9i GenX
The Lenovo Yoga Book 9i GenX has the wrong values in the cirrus,dev-index
_DSD property. Add a fixup for this model to ignore the property and
hardcode the index from the I2C bus address.
The error in the cirrus,dev-index property would prevent the second amp
instance from probing. The component binding would never see all the
required instances and so there would not be a binding between
patch_realtek.c and the cs35l56 driver.
The bootloader configures a reserved memory region for the framebuffer,
which is protected by the IOMMU. The kernel-side driver is unaware of
which memory region was set up by the bootloader. In such a case, the
IOMMU tries to reference the reserved region - which is not reserved in
the kernel anymore - and this results in an unrecoverable page fault. More
information about it is provided in [1].
Add support for reserved regions using iommu_dma_get_resv_regions().
For OF supported boards, this requires defining the region in the
iommu-addresses property of the IOMMU owner's node.
Jie Zhan [Mon, 23 Jun 2025 14:34:01 +0000 (22:34 +0800)]
PM / devfreq: Add HiSilicon uncore frequency scaling driver
Add the HiSilicon uncore frequency scaling driver for Kunpeng SoCs based on
the devfreq framework. The uncore domain contains shared computing
resources, including system interconnects and L3 cache. The uncore
frequency significantly impacts the system-wide performance as well as
power consumption. This driver adds support for runtime management of
uncore frequency from kernel and userspace. The main functionality includes
setting and getting frequencies, changing frequency scaling policies, and
querying the list of CPUs whose performance is significantly related to
this uncore frequency domain, etc. The driver communicates with a platform
controller through an ACPI PCC mailbox to take the actual actions of
frequency scaling.
Co-developed-by: Lifeng Zheng <zhenglifeng1@huawei.com> Signed-off-by: Lifeng Zheng <zhenglifeng1@huawei.com> Reviewed-by: Jonathan Cameron <jonathan.cameron@huawei.com> Reviewed-by: Huisong Li <lihuisong@huawei.com> Signed-off-by: Jie Zhan <zhanjie9@hisilicon.com> Signed-off-by: Chanwoo Choi <cw00.choi@samsung.com> Link: https://patchwork.kernel.org/project/linux-pm/patch/20250623143401.4095045-3-zhanjie9@hisilicon.com/
Extend the devfreq_dev_profile to allow drivers to optionally create
device-specific sysfs ABIs together with other common devfreq ABIs under
the devfreq device path.
On SM8250 / QRB5165-RB5, using the PRR bits resets the device, most likely
because of the hyp limitations. Disable PRR support on that platform.
Fixes: 7f2ef1bfc758 ("iommu/arm-smmu: Add support for PRR bit setup") Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com> Reviewed-by: Akhil P Oommen <akhilpo@oss.qualcomm.com> Reviewed-by: Rob Clark <robin.clark@oss.qualcomm.com> Link: https://lore.kernel.org/r/20250705-iommu-fix-prr-v2-1-406fecc37cf8@oss.qualcomm.com Signed-off-by: Will Deacon <will@kernel.org>
Commit 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap
behavior") removed the last user of the macro iopte_prot. Remove the
macro definition of iopte_prot as well as three other related
definitions.
Fixes: 33729a5fc0ca ("iommu/io-pgtable-arm: Remove split on unmap behavior") Signed-off-by: Daniel Mentz <danielmentz@google.com> Reviewed-by: Liviu Dudau <liviu.dudau@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20250708211705.1567787-1-danielmentz@google.com Signed-off-by: Will Deacon <will@kernel.org>
Alexey Klimov [Fri, 13 Jun 2025 17:32:38 +0000 (18:32 +0100)]
iommu/arm-smmu-qcom: Add SM6115 MDSS compatible
Add the SM6115 MDSS compatible to the clients compatible list, as it also
needs that workaround.
Without this workaround, for example, QRB4210 RB2 which is based on
SM4250/SM6115 generates a lot of smmu unhandled context faults during
boot:
Daniel Lezcano [Wed, 9 Jul 2025 15:47:28 +0000 (17:47 +0200)]
cpuidle: psci: Fix cpuhotplug routine with PREEMPT_RT=y
Currently CPU hotplug with the PREEMPT_RT option set in the kernel is
not supported, because the underlying generic power domain functions
used in the CPU hotplug callbacks are incompatible from a locking point
of view. This situation prevents suspend-to-idle from reaching the
deepest idle state for the "cluster", as identified in the
undermentioned commit.
Use the compatible ones when PREEMPT_RT is enabled and remove the
boolean disabling the hotplug callbacks with this option.
With this change the platform can reach the deepest idle state,
allowing it to consume less power at suspend time.
PM / devfreq: Check governor before using governor->name
Commit 96ffcdf239de ("PM / devfreq: Remove redundant governor_name from
struct devfreq") removed governor_name and uses governor->name to replace
it. But devfreq->governor may be NULL, and directly dereferencing
devfreq->governor->name may cause a null pointer exception. Move the
check of the governor to before governor->name is used.
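The shape of the fix (context simplified; the surrounding code is
illustrative):

  /* Check the pointer before dereferencing governor->name. */
  if (!devfreq->governor)
          return -EINVAL;

  if (!strncmp(devfreq->governor->name, DEVFREQ_GOV_SIMPLE_ONDEMAND,
               DEVFREQ_NAME_LEN)) {
          /* ... */
  }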
Jason Gunthorpe [Fri, 11 Jul 2025 13:16:38 +0000 (10:16 -0300)]
iommu/qcom: Fix pgsize_bitmap
qcom uses the ARM_32_LPAE_S1 format which uses the ARM long descriptor
page table. Eventually arm_32_lpae_alloc_pgtable_s1() will adjust
the pgsize_bitmap with:
cfg->pgsize_bitmap &= (SZ_4K | SZ_2M | SZ_1G);
So the current declaration is nonsensical. Fix it to be just SZ_4K, which
is what it has actually been using so far. Most likely the qcom driver
copy-and-pasted the pgsize_bitmap from something using the ARM_V7S format.
Fixes: db64591de4b2 ("iommu/qcom: Remove iommu_ops pgsize_bitmap") Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org> Closes: https://lore.kernel.org/all/CA+G9fYvif6kDDFar5ZK4Dff3XThSrhaZaJundjQYujaJW978yg@mail.gmail.com/ Tested-by: Linux Kernel Functional Testing <lkft@linaro.org> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Konrad Dybcio <konrad.dybcio@oss.qualcomm.com> Link: https://lore.kernel.org/r/0-v1-65a7964d2545+195-qcom_pgsize_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Mark Brown [Mon, 14 Jul 2025 10:34:16 +0000 (11:34 +0100)]
ASoC: codec: Convert to GPIO descriptors for
Merge series from Peng Fan <peng.fan@nxp.com>:
This patchset is a pick up of patch 1,2 from [1]. And I also collect
Linus's R-b for patch 2. After this patchset, there is only one user of
of_gpio.h left in sound driver(pxa2xx-ac97).
of_gpio.h is deprecated; update the driver to use GPIO descriptors.
Patch 1 drops the legacy platform data, which no in-tree user is using.
Patch 2 converts the driver to GPIO descriptors.
Checking the DTS files that use the device, all use GPIOD_ACTIVE_LOW
polarity for reset-gpios, so all should work as expected with this patch.
(A) sets dapm->bias_level, but (B) does not.
I think we should set it in both (A) and (B).
I think it is a miss or a bug, but Samsung is the only vendor that sees a
problem with this.
I think this patch (= [1/5]) is the correct approach, but some non-Samsung
vendor might be affected by this patch-set, so I added [RFC] to this
patch-set.
Furthermore, (B) cares about both Card and Component; (A) cares about
Component only. I guess that is the reason why it is called the "force"
bias_level function. (A) is used from each driver, (B) is used from
soc-dapm only.
I'm not 100% sure though: except in special cases, each driver should
use (B), I guess?
iommu/vt-d: Deduplicate cache_tag_flush_all by reusing flush_range
The logic in cache_tag_flush_all() to iterate over cache tags and issue
TLB invalidations is largely duplicated in cache_tag_flush_range(), with
the only difference being the range parameters.
Extend cache_tag_flush_range() to handle a full address space flush when
called with start = 0 and end = ULONG_MAX. This allows
cache_tag_flush_all() to simply delegate to cache_tag_flush_range().
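The resulting delegation is essentially (a sketch; the exact signature may
differ):

  /* A full address space flush is just a maximal range flush. */
  void cache_tag_flush_all(struct dmar_domain *domain)
  {
          cache_tag_flush_range(domain, 0, ULONG_MAX, 0);
  }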
iommu/vt-d: Fix missing PASID in dev TLB flush with cache_tag_flush_all
The function cache_tag_flush_all() was originally implemented with
incorrect device TLB invalidation logic that does not handle PASID, in
commit c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers").
This causes regressions where full address space TLB invalidations occur
with a PASID attached, such as during transparent hugepage unmapping in
SVA configurations or when calling iommu_flush_iotlb_all(). In these
cases, the device receives a TLB invalidation that lacks PASID.
This incorrect logic was later extracted into
cache_tag_flush_devtlb_all(), in commit 3297d047cd7f ("iommu/vt-d:
Refactor IOTLB and Dev-IOTLB flush for batching").
The fix replaces the call to cache_tag_flush_devtlb_all() with
cache_tag_flush_devtlb_psi(), which properly handles PASID.
Jason Gunthorpe [Mon, 14 Jul 2025 04:50:26 +0000 (12:50 +0800)]
iommu/vt-d: Split paging_domain_compatible()
Make First/Second stage specific functions that follow the same pattern in
intel_iommu_domain_alloc_first/second_stage() for computing
EOPNOTSUPP. This makes the code easier to understand: if we couldn't
create a domain with the given parameters on this IOMMU instance, then we
certainly are not compatible with it.
Check superpage support directly against the per-stage cap bits and the
pgsize_bitmap.
Add a note that the force_snooping is read without locking. The locking
needs to cover the compatible check and the add of the device to the list.
First Stage and Second Stage have very different ways to deny
no-snoop. The first stage uses the PGSNP bit which is global per-PASID so
enabling requires loading new PASID entries for all the attached devices.
Second stage uses a bit per PTE, so enabling just requires telling future
maps to set the bit.
Since we now have two domain ops we can have two functions that can
directly code their required actions instead of a bunch of logic dancing
around use_first_level.
Combine domain_set_force_snooping() into the new functions since they are
the only caller.
Jason Gunthorpe [Mon, 14 Jul 2025 04:50:24 +0000 (12:50 +0800)]
iommu/vt-d: Create unique domain ops for each stage
Use the domain ops pointer to tell what kind of domain it is instead of
the internal use_first_level indication. This also protects against
wrongly using a SVA/nested/IDENTITY/BLOCKED domain type in places they
should not be.
The only remaining uses of use_first_level outside the paging domain are in
paging_domain_compatible() and intel_iommu_enforce_cache_coherency().
Thus, remove the useless sets of use_first_level in
intel_svm_domain_alloc() and intel_iommu_domain_alloc_nested(). None of
the unique ops for these domain types ever reference it on their call
chains.
Add a WARN_ON() check in domain_context_mapping_one() as it only works
with second stage.
This is preparation for iommupt which will have different ops for each of
the stages.
Create stage specific functions that check the stage specific conditions
if each stage can be supported.
Have intel_iommu_domain_alloc_paging_flags() call both stages in sequence
until one does not return EOPNOTSUPP and prefer to use the first stage if
available and suitable for the requested flags.
Move second stage only operations like nested_parent and dirty_tracking
into the second stage function for clarity.
Move initialization of the iommu_domain members into paging_domain_alloc().
Drop initialization of domain->owner as the callers all do it.
Jason Gunthorpe [Mon, 14 Jul 2025 04:50:22 +0000 (12:50 +0800)]
iommu/vt-d: Do not wipe out the page table NID when devices detach
The NID is used to control which NUMA node the memory for the page table
is allocated from. It should be a permanent property of the page table,
set when it was allocated, and not change during attach/detach of devices.
Jason Gunthorpe [Mon, 14 Jul 2025 04:50:20 +0000 (12:50 +0800)]
iommu/vt-d: Lift the __pa to domain_setup_first_level/intel_svm_set_dev_pasid()
Pass the phys_addr_t down through the call chain from the top instead of
passing a pgd_t * KVA. This moves the __pa() into
domain_setup_first_level() which is the first function to obtain the pgd
from the IOMMU page table in this call chain.
The SVA flow is also adjusted to get the pa of the mm->pgd.
iommupt will move the __pa() into iommupt code; it never shares the KVA of
the page table with the driver.
Lu Baolu [Mon, 14 Jul 2025 04:50:19 +0000 (12:50 +0800)]
iommu/vt-d: Optimize iotlb_sync_map for non-caching/non-RWBF modes
The iotlb_sync_map iommu ops allows drivers to perform necessary cache
flushes when new mappings are established. For the Intel iommu driver,
this callback specifically serves two purposes:
- To flush caches when a second-stage page table is attached to a device
whose iommu is operating in caching mode (CAP_REG.CM==1).
- To explicitly flush internal write buffers to ensure updates to memory-
resident remapping structures are visible to hardware (CAP_REG.RWBF==1).
However, in scenarios where neither caching mode nor the RWBF flag is
active, the cache_tag_flush_range_np() helper, which is called in the
iotlb_sync_map path, effectively becomes a no-op.
Despite being a no-op, cache_tag_flush_range_np() involves iterating
through all cache tags of the iommu's attached to the domain, protected
by a spinlock. This unnecessary execution path introduces overhead,
leading to a measurable I/O performance regression. On systems with NVMes
under the same bridge, performance was observed to drop from approximately
6150 MiB/s down to 4985 MiB/s.
Introduce a flag in the dmar_domain structure. This flag will only be set
when iotlb_sync_map is required (i.e., when CM or RWBF is set). The
cache_tag_flush_range_np() is called only for domains where this flag is
set. This flag, once set, is immutable, given that there won't be mixed
configurations in real-world scenarios where some IOMMUs in a system
operate in caching mode while others do not. Theoretically, the
immutability of this flag does not impact functionality.
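A sketch of the resulting short-circuit (the flag name is taken from the
changelog's intent and may differ in the patch):

  static int intel_iommu_iotlb_sync_map(struct iommu_domain *domain,
                                        unsigned long iova, size_t size)
  {
          struct dmar_domain *dmar_domain = to_dmar_domain(domain);

          /* set once at attach time, only when CAP_REG.CM or CAP_REG.RWBF */
          if (dmar_domain->iotlb_sync_map)
                  cache_tag_flush_range_np(dmar_domain, iova, iova + size - 1);
          return 0;
  }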
iommu/vt-d: Remove the CONFIG_X86 wrapping from iommu init hook
The iommu init hook is wrapped in CONFIG_X86 and is a remnant of dmar.c
from when it was common code in "drivers/pci/dmar.c". This was added in
commit 9d5ce73a64be2 ("x86: intel-iommu: Convert detect_intel_iommu to
use iommu_init hook").
Now this is built only for x86, so the config wrap can be removed.
pmdomain: samsung: Fix splash-screen handover by enforcing a sync_state
It has been reported that some Samsung platforms fail to boot with
genpd's new sync_state support.
Typically the problem exists on platforms where the bootloader turns on
the splash-screen and hands it over to be managed by the kernel. However,
at this point, it's not clear how to correctly solve the problem.
Meanwhile, to make the platforms boot again, let's add a temporary hack in
the samsung power-domain provider driver, which enforces a sync_state that
allows the power-domains to be reset before consumer devices start to be
attached.
Merge patch series "netfs: Fix use of fscache with ceph"
David Howells <dhowells@redhat.com> says:
Here are a couple of patches that fix the use of fscache with ceph:
(1) Fix the read collector to mark the write request that it creates to copy
data to the cache with NETFS_RREQ_OFFLOAD_COLLECTION so that it will run
the write collector on a workqueue as it's meant to run in the background
and the app isn't going to wait for it.
(2) Fix the read collector to wake up the copy-to-cache write request after
it sets NETFS_RREQ_ALL_QUEUED if the write request doesn't have any
subrequests left on it. ALL_QUEUED indicates that there won't be any
more subreqs coming and the collector should clean up - except that an
event is needed to trigger that, but it only gets events from subreq
termination and so the last event can beat us to setting ALL_QUEUED.
* patches from https://lore.kernel.org/20250711151005.2956810-1-dhowells@redhat.com:
netfs: Fix race between cache write completion and ALL_QUEUED being set
netfs: Fix copy-to-cache so that it performs collection with ceph+fscache
David Howells [Fri, 11 Jul 2025 15:10:01 +0000 (16:10 +0100)]
netfs: Fix race between cache write completion and ALL_QUEUED being set
When netfslib is issuing subrequests, the subrequests start processing
immediately and may complete before we reach the end of the issuing
function. At the end of the issuing function we set NETFS_RREQ_ALL_QUEUED
to indicate to the collector that we aren't going to issue any more subreqs
and that it can do the final notifications and cleanup.
Now, this isn't a problem if the request is synchronous
(NETFS_RREQ_OFFLOAD_COLLECTION is unset) as the result collection will be
done in-thread and we're guaranteed an opportunity to run the collector.
However, if the request is asynchronous, collection is primarily triggered
by the termination of subrequests queuing it on a workqueue. Now, a race
can occur here if the app thread sets ALL_QUEUED after the last subrequest
terminates.
This can happen most easily with the copy2cache code (as used by Ceph)
where, in the collection routine of a read request, an asynchronous write
request is spawned to copy data to the cache. Folios are added to the
write request as they're unlocked, but there may be a delay before
ALL_QUEUED is set as the write subrequests may complete before we get
there.
If all the write subreqs have finished by the ALL_QUEUED point, no further
events happen and the collection never happens, leaving the request
hanging.
Fix this by queuing the collector after setting ALL_QUEUED. This is a bit
heavy-handed and it may be sufficient to do it only if there are no extant
subreqs.
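In pseudo-C the ordering fix looks roughly like this
(netfs_wake_collector() stands in for whatever kick mechanism the patch
actually uses):

  /* Ensure a collector run is queued once ALL_QUEUED is visible, so a
   * request whose subrequests already completed cannot hang. */
  set_bit(NETFS_RREQ_ALL_QUEUED, &creq->flags);
  netfs_wake_collector(creq);   /* hypothetical helper: queue collection work */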
Also add a tracepoint to cross-reference both requests in a copy-to-request
operation and add a trace to the netfs_rreq tracepoint to indicate the
setting of ALL_QUEUED.
Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item") Reported-by: Max Kellermann <max.kellermann@ionos.com> Link: https://lore.kernel.org/r/CAKPOu+8z_ijTLHdiCYGU_Uk7yYD=shxyGLwfe-L7AV3DhebS3w@mail.gmail.com/ Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/20250711151005.2956810-3-dhowells@redhat.com Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Viacheslav Dubeyko <slava@dubeyko.com>
cc: Alex Markuze <amarkuze@redhat.com>
cc: Ilya Dryomov <idryomov@gmail.com>
cc: netfs@lists.linux.dev
cc: ceph-devel@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: stable@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
David Howells [Fri, 11 Jul 2025 15:10:00 +0000 (16:10 +0100)]
netfs: Fix copy-to-cache so that it performs collection with ceph+fscache
The netfs copy-to-cache that is used by Ceph with local caching sets up a
new request to write data just read to the cache. The request is started
and then left to look after itself whilst the app continues. The request
gets notified by the backing fs upon completion of the async DIO write, but
then tries to wake up the app because NETFS_RREQ_OFFLOAD_COLLECTION isn't
set - but the app isn't waiting there, and so the request just hangs.
Fix this by setting NETFS_RREQ_OFFLOAD_COLLECTION which causes the
notification from the backing filesystem to put the collection onto a work
queue instead.
Fixes: e2d46f2ec332 ("netfs: Change the read result collector to only use one work item") Reported-by: Max Kellermann <max.kellermann@ionos.com> Link: https://lore.kernel.org/r/CAKPOu+8z_ijTLHdiCYGU_Uk7yYD=shxyGLwfe-L7AV3DhebS3w@mail.gmail.com/ Signed-off-by: David Howells <dhowells@redhat.com> Link: https://lore.kernel.org/20250711151005.2956810-2-dhowells@redhat.com Reviewed-by: Paulo Alcantara (Red Hat) <pc@manguebit.org>
cc: Paulo Alcantara <pc@manguebit.org>
cc: Viacheslav Dubeyko <slava@dubeyko.com>
cc: Alex Markuze <amarkuze@redhat.com>
cc: Ilya Dryomov <idryomov@gmail.com>
cc: netfs@lists.linux.dev
cc: ceph-devel@vger.kernel.org
cc: linux-fsdevel@vger.kernel.org
cc: stable@vger.kernel.org Signed-off-by: Christian Brauner <brauner@kernel.org>
Support for overlapping domains added in commit e3589f6c81e4 ("sched:
Allow for overlapping sched_domain spans") also allowed forcefully
setting SD_OVERLAP for !NUMA domains via FORCE_SD_OVERLAP sched_feat().
Since NUMA domains had to be presumed overlapping to ensure correct
behavior, "sched_domain_topology_level::flags" was introduced. NUMA
domains added the SDTL_OVERLAP flag would ensure SD_OVERLAP was always
added during build_sched_domains() for these domains, even when
FORCE_SD_OVERLAP was off.
Condition for adding the SD_OVERLAP flag at the aforementioned commit
was as follows:
    if (tl->flags & SDTL_OVERLAP || sched_feat(FORCE_SD_OVERLAP))
            sd->flags |= SD_OVERLAP;
The FORCE_SD_OVERLAP debug feature was removed in commit af85596c74de
("sched/topology: Remove FORCE_SD_OVERLAP") which left the NUMA domains
as the exclusive users of SDTL_OVERLAP, SD_OVERLAP, and SD_NUMA flags.
Get rid of SDTL_OVERLAP and SD_OVERLAP as they have become redundant
and instead rely on SD_NUMA to detect the only overlapping domain
currently supported. Since SDTL_OVERLAP was the only user of
"tl->flags", get rid of "sched_domain_topology_level::flags" too.
Li Chen [Thu, 10 Jul 2025 10:57:09 +0000 (18:57 +0800)]
x86/smpboot: moves x86_topology to static initialize and truncate
The #ifdeffery and the initializers in build_sched_topology() are just
disgusting.
Statically initialize the domain levels in the topology array and let
build_sched_topology() invalidate the package domain level when NUMA in
package is available.
Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20250710105715.66594-4-me@linux.beauty
Li Chen [Thu, 10 Jul 2025 10:57:08 +0000 (18:57 +0800)]
x86/smpboot: remove redundant CONFIG_SCHED_SMT
On x86 CONFIG_SCHED_SMT is default y if SMP is enabled, so let's
simply drop CONFIG_SCHED_SMT.
Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20250710105715.66594-3-me@linux.beauty
Li Chen [Thu, 10 Jul 2025 10:57:07 +0000 (18:57 +0800)]
smpboot: introduce SDTL_INIT() helper to tidy sched topology setup
Define a small SDTL_INIT(maskfn, flagsfn, name) macro and use it to build the
sched_domain_topology_level array. Purely a cleanup; behaviour is unchanged.
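A plausible shape for the helper and its use (cf. the patch; SD_INIT_NAME
supplies the domain name for debugging):

  #define SDTL_INIT(maskfn, flagsfn, dname) \
          ((struct sched_domain_topology_level) \
           { .mask = maskfn, .sd_flags = flagsfn, SD_INIT_NAME(dname) })

  static struct sched_domain_topology_level default_topology[] = {
  #ifdef CONFIG_SCHED_SMT
          SDTL_INIT(cpu_smt_mask, cpu_smt_flags, SMT),
  #endif
          SDTL_INIT(cpu_cpu_mask, NULL, PKG),
          { NULL, },
  };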
Suggested-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Li Chen <chenl311@chinatelecom.cn> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Reviewed-by: K Prateek Nayak <kprateek.nayak@amd.com> Tested-by: K Prateek Nayak <kprateek.nayak@amd.com> Link: https://lore.kernel.org/r/20250710105715.66594-2-me@linux.beauty
Juri Lelli [Fri, 27 Jun 2025 11:51:17 +0000 (13:51 +0200)]
tools/sched: Add root_domains_dump.py which dumps root domains info
Root domains information is somewhat hard to access at runtime. Even
with sched_debug and sched_verbose, such information is only printed
to the kernel console when domains are modified.
Add a simple drgn script to more easily retrieve root domains
information at runtime.
Since tools/sched is a new directory, add it to MAINTAINERS as well.
Juri Lelli [Fri, 27 Jun 2025 11:51:16 +0000 (13:51 +0200)]
sched/deadline: Fix accounting after global limits change
A global limits change (sched_rt_handler() logic) currently leaves stale
and/or incorrect values in variables related to accounting (e.g.
extra_bw).
Properly clean up per runqueue variables before implementing the change
and rebuild scheduling domains (so that accounting is also properly
restored) after such a change is complete.
Juri Lelli [Fri, 27 Jun 2025 11:51:15 +0000 (13:51 +0200)]
sched/deadline: Reset extra_bw to max_bw when clearing root domains
dl_clear_root_domain() doesn't take into account the fact that per-rq
extra_bw variables retain values computed before root domain changes,
resulting in broken accounting.
Fix it by resetting extra_bw to max_bw before restoring back dl-servers
contributions.
Fixes: 2ff899e351643 ("sched/deadline: Rebuild root domain accounting after every update") Reported-by: Marcel Ziswiler <marcel.ziswiler@codethink.co.uk> Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Tested-by: Marcel Ziswiler <marcel.ziswiler@codethink.co.uk> # nuc & rock5b Link: https://lore.kernel.org/r/20250627115118.438797-3-juri.lelli@redhat.com
Juri Lelli [Fri, 27 Jun 2025 11:51:14 +0000 (13:51 +0200)]
sched/deadline: Initialize dl_servers after SMP
dl-servers are currently initialized too early at boot when CPUs are not
fully up (only boot CPU is). This results in miscalculation of per
runqueue DEADLINE variables like extra_bw (which needs a stable CPU
count).
Move initialization of dl-servers later on after SMP has been
initialized and CPUs are all online, so that CPU count is stable and
DEADLINE variables can be computed correctly.
Fixes: d741f297bceaf ("sched/fair: Fair server interface") Reported-by: Marcel Ziswiler <marcel.ziswiler@codethink.co.uk> Signed-off-by: Juri Lelli <juri.lelli@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Acked-by: Waiman Long <longman@redhat.com> Tested-by: Marcel Ziswiler <marcel.ziswiler@codethink.co.uk> # nuc & rock5b Link: https://lore.kernel.org/r/20250627115118.438797-2-juri.lelli@redhat.com
sched: Change nr_uninterruptible type to unsigned long
The commit e6fe3f422be1 ("sched: Make multiple runqueue task counters
32-bit") changed nr_uninterruptible to an unsigned int. But the
nr_uninterruptible values for each of the CPU runqueues can grow to
large numbers, sometimes exceeding INT_MAX. This is valid, if, over
time, a large number of tasks are migrated off of one CPU after going
into an uninterruptible state. Only the sum of all nr_uninterruptible
values across all CPUs yields the correct result, as explained in a
comment in kernel/sched/loadavg.c.
Change the type of nr_uninterruptible back to unsigned long to prevent
overflows, and thus the miscalculation of load average.
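The fold logic this protects looks roughly like the following (a sketch
after kernel/sched/loadavg.c; the cast width is what the type change
affects):

  long calc_load_fold_active(struct rq *this_rq, long adjust)
  {
          long nr_active, delta = 0;

          nr_active = this_rq->nr_running - adjust;
          /* per-CPU values may individually wrap; only the sum is valid */
          nr_active += (long)this_rq->nr_uninterruptible;   /* was (int) */

          if (nr_active != this_rq->calc_load_active) {
                  delta = nr_active - this_rq->calc_load_active;
                  this_rq->calc_load_active = nr_active;
          }
          return delta;
  }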
Merge patch series "refactor the iomap writeback code v5"
Christoph Hellwig <hch@lst.de> says:
This is an alternative approach to the writeback part of the
"fuse: use iomap for buffered writes + writeback" series from Joanne.
The big difference compared to Joanne's version is that I hope the
split between the generic and ioend/bio based writeback code is a bit
cleaner here. We have two methods that define the split between the
generic writeback code and the implementation of it, and all knowledge
of ioends and bios now sits below that layer.
This version passes testing on xfs, and gets as far as mainline for
gfs2 (crashes in generic/361).
* patches from https://lore.kernel.org/20250710133343.399917-1-hch@lst.de:
iomap: build the writeback code without CONFIG_BLOCK
iomap: add read_folio_range() handler for buffered writes
iomap: improve argument passing to iomap_read_folio_sync
iomap: replace iomap_folio_ops with iomap_write_ops
iomap: export iomap_writeback_folio
iomap: move folio_unlock out of iomap_writeback_folio
iomap: rename iomap_writepage_map to iomap_writeback_folio
iomap: move all ioend handling to ioend.c
iomap: add public helpers for uptodate state manipulation
iomap: hide ioends from the generic writeback code
iomap: refactor the writeback interface
iomap: cleanup the pending writeback tracking in iomap_writepage_map_blocks
iomap: pass more arguments using the iomap writeback context
iomap: header diet
iomap: build the writeback code without CONFIG_BLOCK
Allow fuse to use the iomap writeback code even when CONFIG_BLOCK is
not enabled. Do this with an ifdef instead of a separate file to keep
the iomap_folio_state local to buffered-io.c.
Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/20250710133343.399917-15-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Reviewed-by: Joanne Koong <joannelkoong@gmail.com> Signed-off-by: Christian Brauner <brauner@kernel.org>
iomap: add read_folio_range() handler for buffered writes
Add a read_folio_range() handler for buffered writes that filesystems
may pass in if they wish to provide a custom handler for synchronously
reading in the contents of a folio.
Signed-off-by: Joanne Koong <joannelkoong@gmail.com>
[hch: renamed to read_folio_range, pass less arguments] Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/20250710133343.399917-14-hch@lst.de Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
iomap: improve argument passing to iomap_read_folio_sync
Pass the iomap_iter and derive the map inside iomap_read_folio_sync
instead of in the caller, and use the more descriptive srcmap name for
the source iomap. Stop passing the offset-into-folio argument, as it
can be derived from the folio and the file offset. Rename the
variables for the offset into the file and the length to be more
descriptive and match the rest of the code.
Rename the function itself to iomap_read_folio_range to make the use
more clear.
Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/20250710133343.399917-13-hch@lst.de Reviewed-by: Joanne Koong <joannelkoong@gmail.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>
iomap: replace iomap_folio_ops with iomap_write_ops
The iomap_folio_ops are only used for buffered writes, including the zero
and unshare variants. Rename them to iomap_write_ops to better describe
the usage, and pass them through the call chain like the other operation
specific methods instead of through the iomap.
xfs_iomap_valid grows an IOMAP_HOLE check to keep the existing behavior
that never attached the folio_ops to a iomap representing a hole.
Signed-off-by: Christoph Hellwig <hch@lst.de> Link: https://lore.kernel.org/20250710133343.399917-12-hch@lst.de Acked-by: Damien Le Moal <dlemoal@kernel.org> Reviewed-by: Brian Foster <bfoster@redhat.com> Reviewed-by: Darrick J. Wong <djwong@kernel.org> Signed-off-by: Christian Brauner <brauner@kernel.org>