git.ipfire.org Git - thirdparty/linux.git/commit
author Catalin Marinas <catalin.marinas@arm.com>
Tue, 7 Apr 2026 10:28:44 +0000 (11:28 +0100)
committer Catalin Marinas <catalin.marinas@arm.com>
Fri, 10 Apr 2026 18:46:14 +0000 (19:46 +0100)
commit 0baba94a9779c13c857f6efc55807e6a45b1d4e4
tree 61ba3fde63e58c086158ffb510b74bf319bee73e
parent 2c99561016c591f4c3d5ad7d22a61b8726e79735
arm64: errata: Work around early CME DVMSync acknowledgement

C1-Pro acknowledges DVMSync messages before completing the SME/CME
memory accesses. Work around this by issuing an IPI to the affected CPUs
if they are running at EL0 with SME enabled.

Note that we avoid the local DSB in the IPI handler as the kernel runs
with SCTLR_EL1.IESB=1. This is sufficient to complete SME memory
accesses at EL0 on taking an exception to EL1. On the return to user
path, no barrier is necessary either. See the comment in
sme_set_active() and the more detailed explanation in the link below.

To avoid a potential IPI flood from malicious applications (e.g.
madvise(MADV_PAGEOUT) in a tight loop), track where a process is active
via mm_cpumask() and only interrupt those CPUs.

Link: https://lore.kernel.org/r/ablEXwhfKyJW1i7l@J2N7QTR9R3
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Mark Brown <broonie@kernel.org>
Reviewed-by: Will Deacon <will@kernel.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Documentation/arch/arm64/silicon-errata.rst
arch/arm64/Kconfig
arch/arm64/include/asm/cpucaps.h
arch/arm64/include/asm/fpsimd.h
arch/arm64/include/asm/tlbbatch.h
arch/arm64/include/asm/tlbflush.h
arch/arm64/kernel/cpu_errata.c
arch/arm64/kernel/entry-common.c
arch/arm64/kernel/fpsimd.c
arch/arm64/kernel/process.c
arch/arm64/tools/cpucaps