7 weeks ago  arm64: Add override for MPAM
Xi Ruoyao [Mon, 2 Jun 2025 04:33:21 +0000 (12:33 +0800)] 
arm64: Add override for MPAM

As the message of the commit 09e6b306f3ba ("arm64: cpufeature: discover
CPU support for MPAM") already states, if a buggy firmware fails to
either enable MPAM or emulate the trap as if it were disabled, the
kernel will just fail to boot.  While upgrading the firmware would be
the best solution, we have some hardware whose vendor has not responded
two months after we requested a firmware update.  Allow overriding the
feature so our devices don't become e-waste.
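
A sketch of how such an override typically plugs into the early
idreg-override machinery (the field and alias names here are
assumptions for illustration, not necessarily what the patch uses):

  /* arch/arm64/kernel/pi/idreg-override.c (sketch) */
  static const struct ftr_set_desc pfr0 __prel64_initconst = {
  	.name		= "id_aa64pfr0",
  	.override	= &id_aa64pfr0_override,
  	.fields		= {
  		FIELD("sve", ID_AA64PFR0_EL1_SVE_SHIFT, NULL),
  		FIELD("mpam", ID_AA64PFR0_EL1_MPAM_SHIFT, NULL), /* new */
  		{}
  	},
  };

  static const struct {
  	char	alias[FTR_ALIAS_NAME_LEN];
  	char	feature[FTR_ALIAS_OPTION_LEN];
  } aliases[] __initconst = {
  	{ "arm64.nompam",	"id_aa64pfr0.mpam=0" },	/* assumed alias */
  };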

Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Cc: Mingcong Bai <jeffbai@aosc.io>
Cc: Shaopeng Tan <tan.shaopeng@fujitsu.com>
Cc: Ben Horgan <ben.horgan@arm.com>
Signed-off-by: Xi Ruoyao <xry111@xry111.site>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20250602043723.216338-1-xry111@xry111.site
Signed-off-by: Will Deacon <will@kernel.org>
7 weeks ago  arm64/mm: Close theoretical race where stale TLB entry remains valid
Ryan Roberts [Fri, 30 May 2025 15:23:47 +0000 (16:23 +0100)] 
arm64/mm: Close theoretical race where stale TLB entry remains valid

Commit 3ea277194daa ("mm, mprotect: flush TLB if potentially racing with
a parallel reclaim leaving stale TLB entries") describes a race that,
prior to the commit, could occur between reclaim and operations such as
mprotect() when using reclaim's tlbbatch mechanism. See that commit for
details but the summary is:

"""
Nadav Amit identified a theoretical race between page reclaim and
mprotect due to TLB flushes being batched outside of the PTL being held.

He described the race as follows:

CPU0                            CPU1
----                            ----
                                user accesses memory using RW PTE
                                [PTE now cached in TLB]
try_to_unmap_one()
==> ptep_get_and_clear()
==> set_tlb_ubc_flush_pending()
                                mprotect(addr, PROT_READ)
                                ==> change_pte_range()
                                ==> [ PTE non-present - no flush ]

                                user writes using cached RW PTE
...

try_to_unmap_flush()
"""

The solution was to insert flush_tlb_batched_pending() in mprotect() and
friends to explicitly drain any pending reclaim TLB flushes. In the
modern version of this solution, arch_flush_tlb_batched_pending() is
called to do that synchronisation.

arm64's tlbbatch implementation simply issues TLBIs at queue-time
(arch_tlbbatch_add_pending()), eliding the trailing dsb(ish). The
trailing dsb(ish) is finally issued in arch_tlbbatch_flush() at the end
of the batch to wait for all the issued TLBIs to complete.

Now, the Arm ARM states:

"""
The completion of the TLB maintenance instruction is guaranteed only by
the execution of a DSB by the observer that performed the TLB
maintenance instruction. The execution of a DSB by a different observer
does not have this effect, even if the DSB is known to be executed after
the TLB maintenance instruction is observed by that different observer.
"""

arch_tlbbatch_add_pending() and arch_tlbbatch_flush() conform to this
requirement because they are called from the same task (either kswapd or
caller of madvise(MADV_PAGEOUT)), so either they are on the same CPU or
if the task was migrated, __switch_to() contains an extra dsb(ish).

HOWEVER, arm64's arch_flush_tlb_batched_pending() is also implemented as
a dsb(ish). But this may be running on a CPU remote from the one that
issued the outstanding TLBIs. So there is no architectural guarantee of
synchronisation. Therefore we are still vulnerable to the theoretical
race described in commit 3ea277194daa ("mm, mprotect: flush TLB if
potentially racing with a parallel reclaim leaving stale TLB entries").

Fix this by flushing the entire mm in arch_flush_tlb_batched_pending().
This aligns with what the other arches that implement the tlbbatch
feature do.
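
A minimal sketch of the shape of the fix, using the helper named above
(illustrative; see the patch for the exact arm64 code):

  /* arch/arm64/include/asm/tlbflush.h (sketch) */
  static inline void arch_flush_tlb_batched_pending(struct mm_struct *mm)
  {
  	/*
  	 * A local dsb(ish) cannot guarantee completion of TLBIs issued
  	 * by a remote CPU, so flush the whole mm: flush_tlb_mm() issues
  	 * its own TLBI plus the trailing DSB on *this* CPU.
  	 */
  	flush_tlb_mm(mm);
  }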

Cc: <stable@vger.kernel.org>
Fixes: 43b3dfdd0455 ("arm64: support batched/deferred tlb shootdown during page reclamation/migration")
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20250530152445.2430295-1-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
7 weeks ago  arm64: Work around convergence issue with LLD linker
Ard Biesheuvel [Sat, 31 May 2025 12:30:05 +0000 (14:30 +0200)] 
arm64: Work around convergence issue with LLD linker

LLD will occasionally error out with a '__init_end does not converge'
error if INIT_IDMAP_DIR_SIZE is defined in terms of _end, as this
results in a circular dependency.

Counter this by dimensioning the initial IDMAP page tables based on a
new boundary marker, 'kimage_limit', defined such that its value does
not change when the initdata segment is pushed over a 64k segment
boundary by changes in INIT_IDMAP_DIR_SIZE, provided the value moves by
no more than 2M between linker passes.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250531123005.3866382-2-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
7 weeks ago  arm64: Disable LLD linker ASSERT()s for the time being
Ard Biesheuvel [Thu, 29 May 2025 07:35:08 +0000 (09:35 +0200)] 
arm64: Disable LLD linker ASSERT()s for the time being

It turns out [1] that the way LLD handles ASSERT()s in the linker script
can result in spurious failures, so disable them for the newly
introduced BSS symbol export checks. Since we're not aware of any issues
with the existing assertions in vmlinux.lds.S, leave those alone for now
so that they can continue to provide useful coverage.

A linker fix [2] is due to land in version 21 of LLD.

Link: https://lore.kernel.org/r/202505261019.OUlitN6m-lkp@intel.com
Link: https://github.com/llvm/llvm-project/commit/5859863bab7f
Link: https://github.com/ClangBuiltLinux/linux/issues/2094
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Nathan Chancellor <nathan@kernel.org>
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202505261019.OUlitN6m-lkp@intel.com/
Link: https://lore.kernel.org/r/20250529073507.2984959-2-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
8 weeks ago  Merge branch 'for-next/vdso' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:54 +0000 (12:26 +0100)] 
Merge branch 'for-next/vdso' into for-next/core

* for-next/vdso:
  arm64: vdso: Use __arch_counter_get_cntvct()

8 weeks ago  Merge branch 'for-next/sme-fixes' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:43 +0000 (12:26 +0100)] 
Merge branch 'for-next/sme-fixes' into for-next/core

* for-next/sme-fixes: (35 commits)
  arm64/fpsimd: Allow CONFIG_ARM64_SME to be selected
  arm64/fpsimd: ptrace: Gracefully handle errors
  arm64/fpsimd: ptrace: Mandate SVE payload for streaming-mode state
  arm64/fpsimd: ptrace: Do not present register data for inactive mode
  arm64/fpsimd: ptrace: Save task state before generating SVE header
  arm64/fpsimd: ptrace/prctl: Ensure VL changes leave task in a valid state
  arm64/fpsimd: ptrace/prctl: Ensure VL changes do not resurrect stale data
  arm64/fpsimd: Make clone() compatible with ZA lazy saving
  arm64/fpsimd: Clear PSTATE.SM during clone()
  arm64/fpsimd: Consistently preserve FPSIMD state during clone()
  arm64/fpsimd: Remove redundant task->mm check
  arm64/fpsimd: signal: Use SMSTOP behaviour in setup_return()
  arm64/fpsimd: Add task_smstop_sm()
  arm64/fpsimd: Factor out {sve,sme}_state_size() helpers
  arm64/fpsimd: Clarify sve_sync_*() functions
  arm64/fpsimd: ptrace: Consistently handle partial writes to NT_ARM_(S)SVE
  arm64/fpsimd: signal: Consistently read FPSIMD context
  arm64/fpsimd: signal: Mandate SVE payload for streaming-mode state
  arm64/fpsimd: signal: Clear PSTATE.SM when restoring FPSIMD frame only
  arm64/fpsimd: Do not discard modified SVE state
  ...

8 weeks ago  Merge branch 'for-next/selftests' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:30 +0000 (12:26 +0100)] 
Merge branch 'for-next/selftests' into for-next/core

* for-next/selftests:
  kselftest/arm64: Set default OUTPUT path when undefined
  kselftest/arm64: fp-ptrace: Adjust to new inactive mode behaviour
  kselftest/arm64: fp-ptrace: Adjust to new VL change behaviour
  kselftest/arm64: tpidr2: Adjust to new clone() behaviour
  kselftest/arm64: fp-ptrace: Fix expected FPMR value when PSTATE.SM is changed

8 weeks ago  Merge branch 'for-next/psci' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:22 +0000 (12:26 +0100)] 
Merge branch 'for-next/psci' into for-next/core

* for-next/psci:
  firmware: psci: Fix refcount leak in psci_dt_init

8 weeks ago  Merge branch 'for-next/perf' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:15 +0000 (12:26 +0100)] 
Merge branch 'for-next/perf' into for-next/core

* for-next/perf:
  perf/arm-cmn: Add CMN S3 ACPI binding
  perf/arm-cmn: Initialise cmn->cpu earlier
  perf/amlogic: Replace smp_processor_id() with raw_smp_processor_id() in meson_ddr_pmu_create()
  perf/arm-cmn: Fix REQ2/SNP2 mixup
  perf: Do not enable by default during compile testing
  perf: arm-ni: Fix missing platform_set_drvdata()
  perf: arm-ni: Unregister PMUs on probe failure
  perf/arm-cmn: Remove CMN-600 DTC domain special case

8 weeks ago  Merge branch 'for-next/mm' into for-next/core
Will Deacon [Tue, 27 May 2025 11:26:06 +0000 (12:26 +0100)] 
Merge branch 'for-next/mm' into for-next/core

* for-next/mm:
  arm64/boot: Disallow BSS exports to startup code
  arm64/boot: Move global CPU override variables out of BSS
  arm64/boot: Move init_pgdir[] and init_idmap_pgdir[] into __pi_ namespace
  arm64: mm: Drop redundant check in pmd_trans_huge()
  arm64/mm: Permit lazy_mmu_mode to be nested
  arm64/mm: Disable barrier batching in interrupt contexts
  arm64/mm: Batch barriers when updating kernel mappings
  mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
  arm64/mm: Support huge pte-mapped pages in vmap
  mm/vmalloc: Gracefully unmap huge ptes
  mm/vmalloc: Warn on improper use of vunmap_range()
  arm64/mm: Hoist barriers out of set_ptes_anysz() loop
  arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
  arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
  mm/page_table_check: Batch-check pmds/puds just like ptes
  arm64: hugetlb: Refine tlb maintenance scope
  arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
  arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings
  arm64: Support ARM64_VA_BITS=52 when setting ARCH_MMAP_RND_BITS_MAX
  arm64/mm: Remove randomization of the linear map

8 weeks ago  Merge branch 'for-next/misc' into for-next/core
Will Deacon [Tue, 27 May 2025 11:25:58 +0000 (12:25 +0100)] 
Merge branch 'for-next/misc' into for-next/core

* for-next/misc:
  arm64/cpuinfo: only show one cpu's info in c_show()
  arm64: Extend pr_crit message on invalid FDT
  arm64: Kconfig: remove unnecessary selection of CRC32
  arm64: Add missing includes for mem_encrypt

8 weeks ago  Merge branch 'for-next/fixes' into for-next/core
Will Deacon [Tue, 27 May 2025 11:24:18 +0000 (12:24 +0100)] 
Merge branch 'for-next/fixes' into for-next/core

Merge in for-next/fixes, as subsequent improvements to our early PI
code that disallow BSS exports depend on the 'arm64_use_ng_mappings'
fix here.

* for-next/fixes:
  arm64: cpufeature: Move arm64_use_ng_mappings to the .data section to prevent wrong idmap generation
  arm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays

8 weeks ago  Merge branch 'for-next/entry' into for-next/core
Will Deacon [Tue, 27 May 2025 11:24:12 +0000 (12:24 +0100)] 
Merge branch 'for-next/entry' into for-next/core

* for-next/entry:
  arm64: el2_setup.h: Make __init_el2_fgt labels consistent, again
  arm64: Update comment regarding values in __boot_cpu_mode
  arm64/mm: Re-organise setting up FEAT_S1PIE registers PIRE0_EL1 and PIR_EL1
  arm64: enable PREEMPT_LAZY

8 weeks ago  Merge branch 'for-next/efi' into for-next/core
Will Deacon [Tue, 27 May 2025 11:24:02 +0000 (12:24 +0100)] 
Merge branch 'for-next/efi' into for-next/core

* for-next/efi:
  arm64/fpsimd: Avoid unnecessary per-CPU buffers for EFI runtime calls

8 weeks ago  Merge branch 'for-next/cpufeature' into for-next/core
Will Deacon [Tue, 27 May 2025 11:23:50 +0000 (12:23 +0100)] 
Merge branch 'for-next/cpufeature' into for-next/core

* for-next/cpufeature:
  arm64: cputype: Add cputype definition for HIP12
  arm64: Expose AIDR_EL1 via sysfs
  arm64/cpufeature: Add missing id_aa64mmfr4 feature reg update

8 weeks ago  Merge branch 'for-next/acpi' into for-next/core
Will Deacon [Tue, 27 May 2025 11:23:32 +0000 (12:23 +0100)] 
Merge branch 'for-next/acpi' into for-next/core

* for-next/acpi:
  firmware: SDEI: Allow sdei initialization without ACPI_APEI_GHES

2 months ago  arm64: cputype: Add cputype definition for HIP12
Yicong Yang [Fri, 25 Apr 2025 03:38:43 +0000 (11:38 +0800)] 
arm64: cputype: Add cputype definition for HIP12

Add the MIDR encoding for HiSilicon HIP12, which is used on HiSilicon
HIP12 SoCs.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20250425033845.57671-2-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: el2_setup.h: Make __init_el2_fgt labels consistent, again
Rob Herring (Arm) [Tue, 20 May 2025 22:27:37 +0000 (17:27 -0500)] 
arm64: el2_setup.h: Make __init_el2_fgt labels consistent, again

Commit 5b39db6037e7 ("arm64: el2_setup.h: Rename some labels to be more
diff-friendly") reworked the labels in __init_el2_fgt to say what's
skipped rather than what the target location is. The exception was
"set_fgt_" which is where registers are written. In reviewing the BRBE
additions, Will suggested "set_debug_fgt_" where HDFGxTR_EL2 are
written. Doing that would partially revert commit 5b39db6037e7 undoing
the goal of minimizing additions here, but it would follow the
convention for labels where registers are written.

So let's do both. Branches that skip something go to a "skip" label and
places that set registers have a "set" label. This results in some
double labels, but it makes things entirely consistent.

While we're here, the SME skip label was incorrectly named, so fix it.

Reported-by: Will Deacon <will@kernel.org>
Cc: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Link: https://lore.kernel.org/r/20250520-arm-brbe-v19-v22-2-c1ddde38e7f8@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  perf/arm-cmn: Add CMN S3 ACPI binding
Robin Murphy [Mon, 19 May 2025 10:56:04 +0000 (11:56 +0100)] 
perf/arm-cmn: Add CMN S3 ACPI binding

An ACPI binding for CMN S3 was not yet finalised when the driver support
was originally written, but v1.2 of DEN0093 "ACPI for Arm Components"
has at last been published; support ACPI systems using the proper HID.

Cc: stable@vger.kernel.org
Fixes: 0dc2f4963f7e ("perf/arm-cmn: Support CMN S3")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/7dafe147f186423020af49d7037552ee59c60e97.1747652164.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/boot: Disallow BSS exports to startup code
Ard Biesheuvel [Thu, 8 May 2025 11:43:32 +0000 (13:43 +0200)] 
arm64/boot: Disallow BSS exports to startup code

BSS might be uninitialized when entering the startup code, so forbid
the startup code from using any variables that live after __bss_start
in the linker map.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Link: https://lore.kernel.org/r/20250508114328.2460610-8-ardb+git@google.com
[will: Drop export of 'memstart_offset_seed', as this has been removed]
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/boot: Move global CPU override variables out of BSS
Ard Biesheuvel [Thu, 8 May 2025 11:43:31 +0000 (13:43 +0200)] 
arm64/boot: Move global CPU override variables out of BSS

Accessing BSS will no longer be permitted from the startup code in
arch/arm64/kernel/pi, as some of it executes before BSS is cleared.
Clearing BSS earlier would involve managing cache coherency explicitly
in software, which is a hassle we prefer to avoid.

So move some variables that are assigned by the startup code out of BSS
and into .data.
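
The pattern is simply to give such variables an explicit section
placement (a sketch with a hypothetical variable name):

  /* Written by startup code before BSS is zeroed, so keep it in .data. */
  u64 __section(".data") boot_cpu_override_val;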

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Link: https://lore.kernel.org/r/20250508114328.2460610-7-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/boot: Move init_pgdir[] and init_idmap_pgdir[] into __pi_ namespace
Ard Biesheuvel [Thu, 8 May 2025 11:43:30 +0000 (13:43 +0200)] 
arm64/boot: Move init_pgdir[] and init_idmap_pgdir[] into __pi_ namespace

init_pgdir[] is only referenced from the startup code, but lives after
BSS in the linker map. Before tightening the rules about accessing BSS
from startup code, move init_pgdir[] into the __pi_ namespace, so it
does not need to be exported explicitly.

For symmetry, do the same with init_idmap_pgdir[], although it lives
before BSS.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Yeoreum Yun <yeoreum.yun@arm.com>
Reviewed-by: Yeoreum Yun <yeoreum.yun@arm.com>
Link: https://lore.kernel.org/r/20250508114328.2460610-6-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  perf/arm-cmn: Initialise cmn->cpu earlier
Robin Murphy [Mon, 12 May 2025 17:11:54 +0000 (18:11 +0100)] 
perf/arm-cmn: Initialise cmn->cpu earlier

For all the complexity of handling affinity for CPU hotplug, what we've
apparently managed to overlook is that arm_cmn_init_irqs() has in fact
always been setting the *initial* affinity of all IRQs to CPU 0, not the
CPU we subsequently choose for event scheduling. Oh dear.

Cc: stable@vger.kernel.org
Fixes: 0ba64770a2f2 ("perf: Add Arm CMN-600 PMU driver")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Ilkka Koskinen <ilkka@os.amperecomputing.com>
Link: https://lore.kernel.org/r/b12fccba6b5b4d2674944f59e4daad91cd63420b.1747069914.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  kselftest/arm64: Set default OUTPUT path when undefined
tanze [Thu, 15 May 2025 05:18:39 +0000 (13:18 +0800)] 
kselftest/arm64: Set default OUTPUT path when undefined

When running 'make' in tools/testing/selftests/arm64/ without explicitly
setting the OUTPUT variable, the build system creates test directories
(e.g., /bti) in the root filesystem because OUTPUT defaults to an empty
string. This causes unintended pollution of the root directory.

Add proper handling for the OUTPUT variable, setting it to the current
directory (.) when it is not specified.

Signed-off-by: tanze <tanze@kylinos.cn>
Link: https://lore.kernel.org/r/20250515051839.3409658-1-tanze@kylinos.cn
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: Update comment regarding values in __boot_cpu_mode
Ben Horgan [Tue, 13 May 2025 12:45:25 +0000 (13:45 +0100)] 
arm64: Update comment regarding values in __boot_cpu_mode

The values stored in __boot_cpu_mode were changed without updating the
comment. Rectify that.

Signed-off-by: Ben Horgan <ben.horgan@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Link: https://lore.kernel.org/r/20250513124525.677736-1-ben.horgan@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: mm: Drop redundant check in pmd_trans_huge()
Gavin Shan [Thu, 8 May 2025 08:52:51 +0000 (18:52 +1000)] 
arm64: mm: Drop redundant check in pmd_trans_huge()

The pmd_val(pmd) check is redundant because a positive pmd_present(pmd)
implies a positive pmd_val(pmd), as their definitions below show.

  #define pmd_val(x)       ((x).pmd)
  #define pmd_present(pmd) pte_present(pmd_pte(pmd))
  #define pte_present(pte) (pte_valid(pte) || pte_present_invalid(pte))
  #define pte_valid(pte)   (!!(pte_val(pte) & PTE_VALID))
  #define pte_present_invalid(pte) \
          ((pte_val(pte) & (PTE_VALID | PTE_PRESENT_INVALID)) == PTE_PRESENT_INVALID)

pte_present() can't be true unless at least one of the flags PTE_VALID
or PTE_PRESENT_INVALID is set, in which case pmd_val(pmd) must be
positive too.

So let's drop the redundant pmd_val(pmd) check. No functional change
intended.
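
Illustratively, the simplification looks like this (a sketch; see the
patch for the exact arm64 definition):

  /* Before: the pmd_val() check duplicates what pmd_present() implies. */
  #define pmd_trans_huge(pmd) \
  	(pmd_val(pmd) && pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))

  /* After: */
  #define pmd_trans_huge(pmd) \
  	(pmd_present(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))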

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20250508085251.204282-1-gshan@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Re-organise setting up FEAT_S1PIE registers PIRE0_EL1 and PIR_EL1
Anshuman Khandual [Tue, 29 Apr 2025 05:05:11 +0000 (10:35 +0530)] 
arm64/mm: Re-organise setting up FEAT_S1PIE registers PIRE0_EL1 and PIR_EL1

mov_q cannot really move the PIE_E[0|1] macros into a general purpose
register as expected when those macro constants contain 128-bit layout
elements, which are required for D128 page tables. The primary issue is
that, for D128, PIE_E[0|1] are defined in terms of 128-bit types with
shifting and masking, which the assembler can't accommodate.

Instead pre-calculate these PIRE0_EL1/PIR_EL1 constants into asm-offsets.h
based PIE_E0_ASM/PIE_E1_ASM which can then be used in arch/arm64/mm/proc.S.

While here also drop PTE_MAYBE_NG/PTE_MAYBE_SHARED assembly overrides which
are not required any longer, as the compiler toolchains are smart enough to
compute both the PIE_[E0|E1]_ASM constants in all scenarios.
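
A sketch of the asm-offsets approach (DEFINE() is the standard kbuild
helper; for D128 the real change presumably reduces the constants to
assembler-sized 64-bit values first):

  /* arch/arm64/kernel/asm-offsets.c (sketch) */
  DEFINE(PIE_E0_ASM, PIE_E0);	/* evaluated by the C compiler */
  DEFINE(PIE_E1_ASM, PIE_E1);

proc.S can then use "mov_q x0, PIE_E0_ASM" without the assembler ever
having to evaluate the 128-bit expression itself.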

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Link: https://lore.kernel.org/r/20250429050511.1663235-1-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Permit lazy_mmu_mode to be nested
Ryan Roberts [Mon, 12 May 2025 15:03:31 +0000 (16:03 +0100)] 
arm64/mm: Permit lazy_mmu_mode to be nested

lazy_mmu_mode is not supposed to permit nesting. But in practice this
does happen with CONFIG_DEBUG_PAGEALLOC, where a page allocation inside
a lazy_mmu_mode section (such as zap_pte_range()) will change
permissions on the linear map with apply_to_page_range(), which
re-enters lazy_mmu_mode (see stack trace below).

The warning checking that nesting was not happening was previously being
triggered due to this. So let's relax by removing the warning and
tolerate nesting in the arm64 implementation. The first (inner) call to
arch_leave_lazy_mmu_mode() will flush and clear the flag such that the
remainder of the work in the outer nest behaves as if outside of lazy
mmu mode. This is safe and keeps tracking simple.

Code review suggests powerpc deals with this issue in the same way.
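
Illustratively, the tolerant leave looks like this (a sketch using the
TIF flags from the original batching commit; details may differ):

  static inline void arch_leave_lazy_mmu_mode(void)
  {
  	/*
  	 * Inner (nested) leave: emit the deferred barriers and clear
  	 * the pending flag, so the rest of the outer section behaves
  	 * as if outside of lazy mmu mode. The outer leave then finds
  	 * nothing pending and is effectively a no-op.
  	 */
  	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING)) {
  		dsb(ishst);
  		isb();
  	}
  	clear_thread_flag(TIF_LAZY_MMU);
  }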

------------[ cut here ]------------
WARNING: CPU: 6 PID: 1 at arch/arm64/include/asm/pgtable.h:89 __apply_to_page_range+0x85c/0x9f8
Modules linked in: ip_tables x_tables ipv6
CPU: 6 UID: 0 PID: 1 Comm: systemd Not tainted 6.15.0-rc5-00075-g676795fe9cf6 #1 PREEMPT
Hardware name: QEMU KVM Virtual Machine, BIOS 2024.08-4 10/25/2024
pstate: 40400005 (nZcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
pc : __apply_to_page_range+0x85c/0x9f8
lr : __apply_to_page_range+0x2b4/0x9f8
sp : ffff80008009b3c0
x29: ffff80008009b460 x28: ffff0000c43a3000 x27: ffff0001ff62b108
x26: ffff0000c43a4000 x25: 0000000000000001 x24: 0010000000000001
x23: ffffbf24c9c209c0 x22: ffff80008009b4d0 x21: ffffbf24c74a3b20
x20: ffff0000c43a3000 x19: ffff0001ff609d18 x18: 0000000000000001
x17: 0000000000000000 x16: 0000000000000000 x15: 0000000000000003
x14: 0000000000000028 x13: ffffbf24c97c1000 x12: ffff0000c43a3fff
x11: ffffbf24cacc9a70 x10: ffff0000c43a3fff x9 : ffff0001fffff018
x8 : 0000000000000012 x7 : ffff0000c43a4000 x6 : ffff0000c43a4000
x5 : ffffbf24c9c209c0 x4 : ffff0000c43a3fff x3 : ffff0001ff609000
x2 : 0000000000000d18 x1 : ffff0000c03e8000 x0 : 0000000080000000
Call trace:
 __apply_to_page_range+0x85c/0x9f8 (P)
 apply_to_page_range+0x14/0x20
 set_memory_valid+0x5c/0xd8
 __kernel_map_pages+0x84/0xc0
 get_page_from_freelist+0x1110/0x1340
 __alloc_frozen_pages_noprof+0x114/0x1178
 alloc_pages_mpol+0xb8/0x1d0
 alloc_frozen_pages_noprof+0x48/0xc0
 alloc_pages_noprof+0x10/0x60
 get_free_pages_noprof+0x14/0x90
 __tlb_remove_folio_pages_size.isra.0+0xe4/0x140
 __tlb_remove_folio_pages+0x10/0x20
 unmap_page_range+0xa1c/0x14c0
 unmap_single_vma.isra.0+0x48/0x90
 unmap_vmas+0xe0/0x200
 vms_clear_ptes+0xf4/0x140
 vms_complete_munmap_vmas+0x7c/0x208
 do_vmi_align_munmap+0x180/0x1a8
 do_vmi_munmap+0xac/0x188
 __vm_munmap+0xe0/0x1e0
 __arm64_sys_munmap+0x20/0x38
 invoke_syscall+0x48/0x104
 el0_svc_common.constprop.0+0x40/0xe0
 do_el0_svc+0x1c/0x28
 el0_svc+0x4c/0x16c
 el0t_64_sync_handler+0x10c/0x140
 el0t_64_sync+0x198/0x19c
irq event stamp: 281312
hardirqs last  enabled at (281311): [<ffffbf24c780fd04>] bad_range+0x164/0x1c0
hardirqs last disabled at (281312): [<ffffbf24c89c4550>] el1_dbg+0x24/0x98
softirqs last  enabled at (281054): [<ffffbf24c752d99c>] handle_softirqs+0x4cc/0x518
softirqs last disabled at (281019): [<ffffbf24c7450694>] __do_softirq+0x14/0x20
---[ end trace 0000000000000000 ]---

Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings")
Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Closes: https://lore.kernel.org/linux-arm-kernel/aCH0TLRQslXHin5Q@arm.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250512150333.5589-1-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Disable barrier batching in interrupt contexts
Ryan Roberts [Mon, 12 May 2025 10:22:40 +0000 (11:22 +0100)] 
arm64/mm: Disable barrier batching in interrupt contexts

Commit 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel
mappings") enabled arm64 kernels to track "lazy mmu mode" using TIF
flags in order to defer barriers until exiting the mode. At the same
time, it added warnings to check that pte manipulations were never
performed in interrupt context, because the tracking implementation
could not deal with nesting.

But it turns out that some debug features (e.g. KFENCE, DEBUG_PAGEALLOC)
do manipulate ptes in softirq context, which triggered the warnings.

So let's take the simplest and safest route and disable the batching
optimization in interrupt contexts. This makes these users no worse off
than prior to the optimization. Additionally the known offenders are
debug features that only manipulate a single PTE, so there is no
performance gain anyway.

There may be some obscure case of encrypted/decrypted DMA where
dma_free_coherent() is called from interrupt context, but again, this
is no worse off than prior to the commit.

Some options for supporting nesting were considered, but there is a
difficult to solve problem if any code manipulates ptes within interrupt
context but *outside of* a lazy mmu region. If this case exists, the
code would expect the updates to be immediate, but because the task
context may have already been in lazy mmu mode, the updates would be
deferred, which could cause incorrect behaviour. This problem is avoided
by always ensuring updates within interrupt context are immediate.
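
A sketch of the rule this implements (helper name assumed):

  static inline bool in_lazy_mmu_mode(void)
  {
  	/*
  	 * Interrupt context is never considered lazy, so any pte
  	 * manipulation from softirq/hardirq gets immediate barriers.
  	 */
  	if (in_interrupt())
  		return false;

  	return test_thread_flag(TIF_LAZY_MMU);
  }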

Fixes: 5fdd05efa1cd ("arm64/mm: Batch barriers when updating kernel mappings")
Reported-by: syzbot+5c0d9392e042f41d45c5@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/linux-arm-kernel/681f2a09.050a0220.f2294.0006.GAE@google.com/
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250512102242.4156463-1-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/cpuinfo: only show one cpu's info in c_show()
Ye Bin [Mon, 21 Apr 2025 06:29:47 +0000 (14:29 +0800)] 
arm64/cpuinfo: only show one cpu's info in c_show()

Currently, when ARM64 displays CPU information, every call to c_show()
assembles the information for all CPUs. As the number of CPUs grows, a
single call can exhaust the buffer space, causing repeated buffer
expansion and further calls to c_show().

To avoid these wasted c_show() calls, assemble only one CPU's
information each time c_show() is called.
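
A sketch of the standard /proc/cpuinfo seq_file pattern this moves
towards (illustrative, not the exact arm64 code):

  /* The iterator hands c_show() a single CPU per invocation. */
  static void *c_start(struct seq_file *m, loff_t *pos)
  {
  	*pos = cpumask_next(*pos - 1, cpu_online_mask);
  	return *pos < nr_cpu_ids ? (void *)(unsigned long)(*pos + 1) : NULL;
  }

  static int c_show(struct seq_file *m, void *v)
  {
  	unsigned long cpu = (unsigned long)v - 1;

  	seq_printf(m, "processor\t: %lu\n", cpu);	/* this CPU only */
  	return 0;
  }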

Signed-off-by: Ye Bin <yebin10@huawei.com>
Link: https://lore.kernel.org/r/20250421062947.4072855-1-yebin@huaweicloud.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Batch barriers when updating kernel mappings
Ryan Roberts [Tue, 22 Apr 2025 08:18:19 +0000 (09:18 +0100)] 
arm64/mm: Batch barriers when updating kernel mappings

Because the kernel can't tolerate page faults for kernel mappings, when
setting a valid, kernel space pte (or pmd/pud/p4d/pgd), it emits a
dsb(ishst) to ensure that the store to the pgtable is observed by the
table walker immediately. Additionally it emits an isb() to ensure that
any already speculatively determined invalid mapping fault gets
canceled.

We can improve the performance of vmalloc operations by batching these
barriers until the end of a set of entry updates.
arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() provide the
required hooks.

vmalloc improves by up to 30% as a result.

Two new TIF_ flags are created; TIF_LAZY_MMU tells us if the task is in
the lazy mode and can therefore defer any barriers until exit from the
lazy mode. TIF_LAZY_MMU_PENDING is used to remember if any pte operation
was performed while in the lazy mode that required barriers. Then when
leaving lazy mode, if that flag is set, we emit the barriers.

Since arch_enter_lazy_mmu_mode() and arch_leave_lazy_mmu_mode() are used
for both user and kernel mappings, we need the second flag to avoid
emitting barriers unnecessarily if only user mappings were updated.
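
A sketch of the scheme (flag and helper names as described above; the
real code may differ in detail):

  static inline void arch_enter_lazy_mmu_mode(void)
  {
  	set_thread_flag(TIF_LAZY_MMU);
  }

  static inline void __set_pte(pte_t *ptep, pte_t pte)
  {
  	__set_pte_nosync(ptep, pte);		/* plain pgtable store */

  	/* Only valid kernel mappings need the barriers at all. */
  	if (pte_valid_not_user(pte)) {
  		if (test_thread_flag(TIF_LAZY_MMU)) {
  			set_thread_flag(TIF_LAZY_MMU_PENDING);	/* defer */
  		} else {
  			dsb(ishst);	/* store visible to table walker */
  			isb();		/* cancel speculative faults */
  		}
  	}
  }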

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-12-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes
Ryan Roberts [Tue, 22 Apr 2025 08:18:18 +0000 (09:18 +0100)] 
mm/vmalloc: Enter lazy mmu mode while manipulating vmalloc ptes

Wrap vmalloc's pte table manipulation loops with
arch_enter_lazy_mmu_mode() / arch_leave_lazy_mmu_mode(). This provides
the arch code with the opportunity to optimize the pte manipulations.

Note that vmap_pfn() already uses lazy mmu mode since it delegates to
apply_to_page_range() which enters lazy mmu mode for both user and
kernel mappings.

These hooks will shortly be used by arm64 to improve vmalloc
performance.
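
Illustratively, the wrapped loops take this shape (a simplified sketch
of vmap_pte_range(); error handling and bookkeeping elided):

  static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
  			  unsigned long end, pgprot_t prot,
  			  struct page **pages, int *nr)
  {
  	pte_t *pte = pte_alloc_kernel(pmd, addr);

  	if (!pte)
  		return -ENOMEM;

  	arch_enter_lazy_mmu_mode();	/* arch may now defer barriers */
  	do {
  		set_pte_at(&init_mm, addr, pte,
  			   mk_pte(pages[*nr], prot));
  		(*nr)++;
  	} while (pte++, addr += PAGE_SIZE, addr != end);
  	arch_leave_lazy_mmu_mode();	/* one barrier set for the batch */

  	return 0;
  }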

Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-11-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Support huge pte-mapped pages in vmap
Ryan Roberts [Tue, 22 Apr 2025 08:18:17 +0000 (09:18 +0100)] 
arm64/mm: Support huge pte-mapped pages in vmap

Implement the required arch functions to enable use of contpte in the
vmap when VM_ALLOW_HUGE_VMAP is specified. This speeds up vmap
operations, since only one DSB and ISB need to be issued per contpte
block instead of per pte. It also reduces TLB pressure, since only a
single TLB entry is needed for the whole contpte block.

Since vmap uses set_huge_pte_at() to set the contpte, that API is now
used for kernel mappings for the first time. Although in the vmap case
we never expect it to be called to modify a valid mapping, so
clear_flush() should never be called, it's still wise to make it robust
for the kernel case, so amend the tlb flush function if the mm is for
kernel space.

Tested with vmalloc performance selftests:

  # kself/mm/test_vmalloc.sh \
  	run_test_mask=1 \
  	test_repeat_count=5 \
  	nr_pages=256 \
  	test_loop_count=100000 \
  	use_huge=1

Duration reduced from 1274243 usec to 1083553 usec on Apple M2, a 15%
reduction in time taken.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-10-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  mm/vmalloc: Gracefully unmap huge ptes
Ryan Roberts [Tue, 22 Apr 2025 08:18:16 +0000 (09:18 +0100)] 
mm/vmalloc: Gracefully unmap huge ptes

Commit f7ee1f13d606 ("mm/vmalloc: enable mapping of huge pages at pte
level in vmap") added its support by reusing the set_huge_pte_at() API,
which is otherwise only used for user mappings. But when unmapping those
huge ptes, it continued to call ptep_get_and_clear(), which is a
layering violation. To date, the only arch to implement this support is
powerpc and it all happens to work ok for it.

But arm64's implementation of ptep_get_and_clear() can not be safely
used to clear a previous set_huge_pte_at(). So let's introduce a new
arch opt-in function, arch_vmap_pte_range_unmap_size(), which can
provide the size of a (present) pte. Then we can call
huge_ptep_get_and_clear() to tear it down properly.

Note that if vunmap_range() is called with a range that starts in the
middle of a huge pte-mapped page, we must unmap the entire huge page so
the behaviour is consistent with pmd and pud block mappings. In this
case emit a warning just like we do for pmd/pud mappings.
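
A sketch of what the arm64 opt-in might look like (assumed shape; the
generic default would simply return PAGE_SIZE):

  static inline unsigned long
  arch_vmap_pte_range_unmap_size(unsigned long addr, pte_t *ptep)
  {
  	pte_t pte = __ptep_get(ptep);

  	if (!pte_present(pte) || !pte_cont(pte))
  		return PAGE_SIZE;

  	return CONT_PTE_SIZE;	/* tear down the whole contpte block */
  }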

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-9-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  mm/vmalloc: Warn on improper use of vunmap_range()
Ryan Roberts [Tue, 22 Apr 2025 08:18:15 +0000 (09:18 +0100)] 
mm/vmalloc: Warn on improper use of vunmap_range()

A call to vmalloc_huge() may cause memory blocks to be mapped at pmd or
pud level. But it is possible to subsequently call vunmap_range() on a
sub-range of the mapped memory, which partially overlaps a pmd or pud.
In this case, vmalloc unmaps the entire pmd or pud, so that the
non-overlapping portion is also unmapped. Clearly that would have a bad
outcome, but it's not something that any callers do today as far as I
can tell. So I guess it's just expected that callers will not do this.

However, it would be useful to know if this happened in future; let's
add a warning to cover the eventuality.

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-8-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Hoist barriers out of set_ptes_anysz() loop
Ryan Roberts [Tue, 22 Apr 2025 08:18:14 +0000 (09:18 +0100)] 
arm64/mm: Hoist barriers out of set_ptes_anysz() loop

set_ptes_anysz() previously called __set_pte() for each PTE in the
range, which would conditionally issue a DSB and ISB to make the new PTE
value immediately visible to the table walker if the new PTE was valid
and for kernel space.

We can do better than this; let's hoist those barriers out of the loop
so that they are only issued once at the end of the loop. We then reduce
the cost by the number of PTEs in the range.
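
A sketch of the hoisting (names taken from the surrounding series;
details may differ):

  static void __set_ptes_anysz(struct mm_struct *mm, pte_t *ptep,
  			     pte_t pte, unsigned int nr,
  			     unsigned long pgsize)
  {
  	unsigned long stride = pgsize >> PAGE_SHIFT;

  	for (;;) {
  		__set_pte_nosync(ptep, pte);	/* store only, no barriers */
  		if (--nr == 0)
  			break;
  		ptep++;
  		pte = pte_advance_pfn(pte, stride);
  	}

  	/* One DSB/ISB for the whole range rather than one per PTE. */
  	if (pte_valid_not_user(pte)) {
  		dsb(ishst);
  		isb();
  	}
  }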

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-7-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()
Ryan Roberts [Tue, 22 Apr 2025 08:18:13 +0000 (09:18 +0100)] 
arm64: hugetlb: Use __set_ptes_anysz() and __ptep_get_and_clear_anysz()

Refactor the huge_pte helpers to use the new common __set_ptes_anysz()
and __ptep_get_and_clear_anysz() APIs.

This provides two benefits: first, when page_table_check=on, hugetlb is
now properly/fully checked. Previously only the first page of a hugetlb
folio was checked. Second, instead of having to call __set_ptes(nr=1)
for each pte in a loop, the whole contiguous batch can now be set in one
go, which enables some efficiencies and cleans up the code.

One detail to note is that huge_ptep_clear_flush() was previously
calling ptep_clear_flush() for a non-contiguous pte (i.e. a pud or pmd
block mapping). This has a couple of disadvantages: first,
ptep_clear_flush() calls ptep_get_and_clear() which transparently
handles contpte. Given we only call for non-contiguous ptes, it would be
safe, but a waste of effort. It's preferable to go straight to the layer
below. However, more problematic is that ptep_get_and_clear() is for
PAGE_SIZE entries so it calls page_table_check_pte_clear() and would not
clear the whole hugetlb folio. So let's stop special-casing the non-cont
case and just rely on get_clear_contig_flush() to do the right thing for
non-cont entries.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-6-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()
Ryan Roberts [Tue, 22 Apr 2025 08:18:12 +0000 (09:18 +0100)] 
arm64/mm: Refactor __set_ptes() and __ptep_get_and_clear()

Refactor __set_ptes(), set_pmd_at() and set_pud_at() so that they are
all a thin wrapper around a new common __set_ptes_anysz(), which takes
pgsize parameter. Additionally, refactor __ptep_get_and_clear() and
pmdp_huge_get_and_clear() to use a new common
__ptep_get_and_clear_anysz() which also takes a pgsize parameter.

These changes will permit the huge_pte API to efficiently batch-set
pgtable entries and take advantage of the future barrier optimizations.
Additionally since the new *_anysz() helpers call the correct
page_table_check_*_set() API based on pgsize, this means that huge_ptes
will be able to get proper coverage. Currently the huge_pte API always
uses the pte API which assumes an entry only covers a single page.
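
The wrappers become thin shims (a sketch; the exact signatures may
differ):

  static inline void __set_ptes(struct mm_struct *mm, unsigned long addr,
  			      pte_t *ptep, pte_t pte, unsigned int nr)
  {
  	__set_ptes_anysz(mm, ptep, pte, nr, PAGE_SIZE);
  }

  static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
  			      pmd_t *pmdp, pmd_t pmd)
  {
  	__set_ptes_anysz(mm, (pte_t *)pmdp, pmd_pte(pmd), 1, PMD_SIZE);
  }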

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-5-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  mm/page_table_check: Batch-check pmds/puds just like ptes
Ryan Roberts [Tue, 22 Apr 2025 08:18:11 +0000 (09:18 +0100)] 
mm/page_table_check: Batch-check pmds/puds just like ptes

Convert page_table_check_p[mu]d_set(...) to
page_table_check_p[mu]ds_set(..., nr) to allow checking a contiguous set
of pmds/puds in single batch. We retain page_table_check_p[mu]d_set(...)
as macros that call new batch functions with nr=1 for compatibility.

arm64 is about to reorganise its pte/pmd/pud helpers to reuse more code
and to allow the implementation for huge_pte to more efficiently set
ptes/pmds/puds in batches. We need these batch-helpers to make the
refactoring possible.
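
The compatibility shims take this shape (a sketch matching the
description above):

  /* include/linux/page_table_check.h (sketch) */
  void page_table_check_pmds_set(struct mm_struct *mm, pmd_t *pmdp,
  			       pmd_t pmd, unsigned int nr);
  void page_table_check_puds_set(struct mm_struct *mm, pud_t *pudp,
  			       pud_t pud, unsigned int nr);

  /* Retained single-entry forms call the batched checkers with nr=1. */
  #define page_table_check_pmd_set(mm, pmdp, pmd) \
  	page_table_check_pmds_set(mm, pmdp, pmd, 1)
  #define page_table_check_pud_set(mm, pudp, pud) \
  	page_table_check_puds_set(mm, pudp, pud, 1)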

Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-4-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: hugetlb: Refine tlb maintenance scope
Ryan Roberts [Tue, 22 Apr 2025 08:18:10 +0000 (09:18 +0100)] 
arm64: hugetlb: Refine tlb maintenance scope

When operating on contiguous blocks of ptes (or pmds) for some hugetlb
sizes, we must honour break-before-make requirements and clear down the
block to invalid state in the pgtable then invalidate the relevant tlb
entries before making the pgtable entries valid again.

However, the tlb maintenance is currently always done assuming the worst
case stride (PAGE_SIZE), last_level (false) and tlb_level
(TLBI_TTL_UNKNOWN). We can do much better with the hinting; In reality,
we know the stride from the huge_pte pgsize, we are always operating
only on the last level, and we always know the tlb_level, again based on
pgsize. So let's start providing these hints.

Additionally, avoid tlb maintenace in set_huge_pte_at().
Break-before-make is only required if we are transitioning the
contiguous pte block from valid -> valid. So let's elide the
clear-and-flush ("break") if the pte range was previously invalid.
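
Illustratively (a sketch; __flush_tlb_range() is the existing arm64
helper, with argument values as described above):

  /* Before: worst-case hints. */
  __flush_tlb_range(vma, addr, addr + size,
  		  PAGE_SIZE, false, TLBI_TTL_UNKNOWN);

  /* After: stride and level derived from the huge_pte pgsize, and we
   * always operate on the last level here. */
  __flush_tlb_range(vma, addr, addr + size, pgsize, true, tlb_level);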

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-3-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64: hugetlb: Cleanup huge_pte size discovery mechanisms
Ryan Roberts [Tue, 22 Apr 2025 08:18:09 +0000 (09:18 +0100)] 
arm64: hugetlb: Cleanup huge_pte size discovery mechanisms

Not all huge_pte helper APIs explicitly provide the size of the
huge_pte. So the helpers have to depend on various methods to determine
the size of the huge_pte. Some of these methods are dubious.

Let's clean up the code to use preferred methods and retire the dubious
ones. The options in order of preference:

 - If size is provided as parameter, use it together with
   num_contig_ptes(). This is explicit and works for both present and
   non-present ptes.

 - If vma is provided as a parameter, retrieve size via
   huge_page_size(hstate_vma(vma)) and use it together with
   num_contig_ptes(). This is explicit and works for both present and
   non-present ptes.

 - If the pte is present and contiguous, use find_num_contig() to walk
   the pgtable to find the level and infer the number of ptes from
   level. Only works for *present* ptes.

 - If the pte is present and not contiguous, you can infer from this
   that only 1 pte needs to be operated on. This is ok if you don't care
   about the absolute size, and just want to know the number of ptes.

 - NEVER rely on resolving the PFN of a present pte to a folio and
   getting the folio's size. This is fragile at best, because there is
   nothing to stop the core-mm from allocating a folio twice as big as
   the huge_pte then mapping it across 2 consecutive huge_ptes. Or just
   partially mapping it.

Where we require that the pte is present, add warnings if not-present.

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Signed-off-by: Ryan Roberts <ryan.roberts@arm.com>
Tested-by: Luiz Capitulino <luizcap@redhat.com>
Link: https://lore.kernel.org/r/20250422081822.1836315-2-ryan.roberts@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  perf/amlogic: Replace smp_processor_id() with raw_smp_processor_id() in meson_ddr_pmu...
Anand Moon [Mon, 7 Apr 2025 06:32:03 +0000 (12:02 +0530)] 
perf/amlogic: Replace smp_processor_id() with raw_smp_processor_id() in meson_ddr_pmu_create()

The Amlogic DDR PMU driver's meson_ddr_pmu_create() function incorrectly
uses smp_processor_id(), which assumes preemption is disabled. This
leads to kernel warnings during module loading because
meson_ddr_pmu_create() can be called in a preemptible context.

This produces the following kernel warning and stack trace:
[   31.745138] [   T2289] BUG: using smp_processor_id() in preemptible [00000000] code: (udev-worker)/2289
[   31.745154] [   T2289] caller is debug_smp_processor_id+0x28/0x38
[   31.745172] [   T2289] CPU: 4 UID: 0 PID: 2289 Comm: (udev-worker) Tainted: GW 6.14.0-0-MANJARO-ARM #1 59519addcbca6ba8de735e151fd7b9e97aac7ff0
[   31.745181] [   T2289] Tainted: [W]=WARN
[   31.745183] [   T2289] Hardware name: Hardkernel ODROID-N2Plus (DT)
[   31.745188] [   T2289] Call trace:
[   31.745191] [   T2289]  show_stack+0x28/0x40 (C)
[   31.745199] [   T2289]  dump_stack_lvl+0x4c/0x198
[   31.745205] [   T2289]  dump_stack+0x20/0x50
[   31.745209] [   T2289]  check_preemption_disabled+0xec/0xf0
[   31.745213] [   T2289]  debug_smp_processor_id+0x28/0x38
[   31.745216] [   T2289]  meson_ddr_pmu_create+0x200/0x560 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[   31.745237] [   T2289]  g12_ddr_pmu_probe+0x20/0x38 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[   31.745246] [   T2289]  platform_probe+0x98/0xe0
[   31.745254] [   T2289]  really_probe+0x144/0x3f8
[   31.745258] [   T2289]  __driver_probe_device+0xb8/0x180
[   31.745261] [   T2289]  driver_probe_device+0x54/0x268
[   31.745264] [   T2289]  __driver_attach+0x11c/0x288
[   31.745267] [   T2289]  bus_for_each_dev+0xfc/0x160
[   31.745274] [   T2289]  driver_attach+0x34/0x50
[   31.745277] [   T2289]  bus_add_driver+0x160/0x2b0
[   31.745281] [   T2289]  driver_register+0x78/0x120
[   31.745285] [   T2289]  __platform_driver_register+0x30/0x48
[   31.745288] [   T2289]  init_module+0x30/0xfe0 [meson_ddr_pmu_g12 8095101c49676ad138d9961e3eddaee10acca7bd]
[   31.745298] [   T2289]  do_one_initcall+0x11c/0x438
[   31.745303] [   T2289]  do_init_module+0x68/0x228
[   31.745311] [   T2289]  load_module+0x118c/0x13a8
[   31.745315] [   T2289]  __arm64_sys_finit_module+0x274/0x390
[   31.745320] [   T2289]  invoke_syscall+0x74/0x108
[   31.745326] [   T2289]  el0_svc_common+0x90/0xf8
[   31.745330] [   T2289]  do_el0_svc+0x2c/0x48
[   31.745333] [   T2289]  el0_svc+0x60/0x150
[   31.745337] [   T2289]  el0t_64_sync_handler+0x80/0x118
[   31.745341] [   T2289]  el0t_64_sync+0x1b8/0x1c0

Replace smp_processor_id() with raw_smp_processor_id() to ensure safe
CPU ID retrieval in preemptible contexts.
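
The fix is essentially a one-liner (a sketch; the surrounding driver
code is assumed):

  /* meson_ddr_pmu_create() runs in preemptible (probe) context, so use
   * the raw variant, which does not require preemption to be disabled. */
  pmu->cpu = raw_smp_processor_id();	/* was: smp_processor_id() */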

Cc: Jiucheng Xu <jiucheng.xu@amlogic.com>
Fixes: 2016e2113d35 ("perf/amlogic: Add support for Amlogic meson G12 SoC DDR PMU driver")
Signed-off-by: Anand Moon <linux.amoon@gmail.com>
Link: https://lore.kernel.org/r/20250407063206.5211-1-linux.amoon@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  perf/arm-cmn: Fix REQ2/SNP2 mixup
Robin Murphy [Thu, 8 May 2025 15:16:40 +0000 (16:16 +0100)] 
perf/arm-cmn: Fix REQ2/SNP2 mixup

Somehow the encodings for REQ2/SNP2 channels in XP events
got mixed up... Unmix them.

CC: stable@vger.kernel.org
Fixes: 23760a014417 ("perf/arm-cmn: Add CMN-700 support")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/087023e9737ac93d7ec7a841da904758c254cb01.1746717400.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  kselftest/arm64: fp-ptrace: Adjust to new inactive mode behaviour
Mark Rutland [Thu, 8 May 2025 13:26:44 +0000 (14:26 +0100)] 
kselftest/arm64: fp-ptrace: Adjust to new inactive mode behaviour

In order to fix an ABI problem, we recently changed the way that reads
of the NT_ARM_SVE and NT_ARM_SSVE regsets behave when their
corresponding vector state is inactive.

Update the fp-ptrace test for the new behaviour.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-25-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  kselftest/arm64: fp-ptrace: Adjust to new VL change behaviour
Mark Rutland [Thu, 8 May 2025 13:26:43 +0000 (14:26 +0100)] 
kselftest/arm64: fp-ptrace: Adjust to new VL change behaviour

In order to fix an ABI problem, we recently changed the way that
changing the SVE/SME vector length affects PSTATE.SM. Historically,
changing the SME vector length would clear PSTATE.SM. Now, changing the
SME vector length preserves PSTATE.SM.

Update the fp-ptrace test for the new behaviour.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-24-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  kselftest/arm64: tpidr2: Adjust to new clone() behaviour
Mark Rutland [Thu, 8 May 2025 13:26:42 +0000 (14:26 +0100)] 
kselftest/arm64: tpidr2: Adjust to new clone() behaviour

In order to fix an ABI problem, we recently changed the way that a
clone() syscall manipulates TPIDR2 and PSTATE.ZA. Historically the child
would inherit the parent's TPIDR2 value unless CLONE_SETTLS was set, and
now the child will inherit the parent's TPIDR2 value unless CLONE_VM is
set.

Update the tpidr2 test for the new behaviour.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Richard Sandiford <richard.sandiford@arm.com>
Cc: Sander De Smalen <sander.desmalen@arm.com>
Cc: Tamas Petz <tamas.petz@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yury Khrustalev <yury.khrustalev@arm.com>
Link: https://lore.kernel.org/r/20250508132644.1395904-23-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  kselftest/arm64: fp-ptrace: Fix expected FPMR value when PSTATE.SM is changed
Mark Rutland [Thu, 8 May 2025 13:26:41 +0000 (14:26 +0100)] 
kselftest/arm64: fp-ptrace: Fix expected FPMR value when PSTATE.SM is changed

The fp-ptrace test suite expects that FPMR is set to zero when PSTATE.SM
is changed via ptrace, but ptrace has never altered FPMR in this way,
and the test logic erroneously relies upon (and has concealed) a bug
where task_fpsimd_load() would unexpectedly and non-deterministically
clobber FPMR.

Using ptrace, FPMR can only be altered by writing to the NT_ARM_FPMR
regset. The value of PSTATE.SM can be altered by writing to the
NT_ARM_SVE or NT_ARM_SSVE regsets, and/or by changing the SME vector
length (when writing to the NT_ARM_SVE, NT_ARM_SSVE, or NT_ARM_ZA
regsets), but none of these writes will change the value of FPMR.

The task_fpsimd_load() bug was introduced with the initial FPMR support
in commit:

  203f2b95a882 ("arm64/fpsimd: Support FEAT_FPMR")

The incorrect FPMR test code was introduced in commit:

  7dbd26d0b22d ("kselftest/arm64: Add FPMR coverage to fp-ptrace")

Subsequently, the task_fpsimd_load() bug was fixed in commit:

  e5fa85fce08b ("arm64/fpsimd: Don't corrupt FPMR when streaming mode changes")

... whereupon the fp-ptrace FPMR tests started failing reliably, e.g.

| # # Mismatch in saved FPMR: 915058000 != 0
| # not ok 25 SVE write, SVE 64->64, SME 64/0->64/1

Fix this by changing the test to expect that FPMR is *NOT* changed when
PSTATE.SM is changed via ptrace, matching the extant behaviour.

I've chosen to update the test code rather than modifying ptrace to zero
FPMR when PSTATE.SM changes. Not zeroing FPMR is simpler overall, and
allows the NT_ARM_FPMR regset to be handled independently from other
regsets, leaving less scope for error.

Fixes: 7dbd26d0b22d ("kselftest/arm64: Add FPMR coverage to fp-ptrace")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-22-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/fpsimd: Allow CONFIG_ARM64_SME to be selected
Mark Rutland [Thu, 8 May 2025 13:26:40 +0000 (14:26 +0100)] 
arm64/fpsimd: Allow CONFIG_ARM64_SME to be selected

Now that the known issues with SME have been addressed, allow SME to be
selected.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Richard Sandiford <richard.sandiford@arm.com>
Cc: Sander De Smalen <sander.desmalen@arm.com>
Cc: Tamas Petz <tamas.petz@arm.com>
Cc: Todd Kjos <tkjos@google.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yury Khrustalev <yury.khrustalev@arm.com>
Tested-By: Luis Machado <luis.machado@arm.com>
Link: https://lore.kernel.org/r/20250508132644.1395904-21-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/fpsimd: ptrace: Gracefully handle errors
Mark Rutland [Thu, 8 May 2025 13:26:39 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace: Gracefully handle errors

Within sve_set_common() we do not handle error conditions correctly:

* When writing to NT_ARM_SSVE, if sme_alloc() fails, the task will be
  left with task->thread.sme_state==NULL, but TIF_SME will be set and
  task->thread.fp_type==FP_STATE_SVE. This will result in a subsequent
  null pointer dereference when the task's state is loaded or otherwise
  manipulated.

* When writing to NT_ARM_SSVE, if sve_alloc() fails, the task will be
  left with task->thread.sve_state==NULL, but TIF_SME will be set,
  PSTATE.SM will be set, and task->thread.fp_type==FP_STATE_FPSIMD.
  This is not a legitimate state, and can result in various problems,
  including a subsequent null pointer dereference and/or the task
  inheriting stale streaming mode register state the next time its state
  is loaded into hardware.

* When writing to NT_ARM_SSVE, if the VL is changed but the resulting VL
  differs from that in the header, the task will be left with TIF_SME
  set, PSTATE.SM set, but task->thread.fp_type==FP_STATE_FPSIMD. This is
  not a legitimate state, and can result in various problems as
  described above.

Avoid these problems by allocating memory earlier, and by changing the
task's saved fp_type to FP_STATE_SVE before skipping register writes due
to a change of VL.

To make early returns simpler, I've moved the call to
fpsimd_flush_task_state() earlier. As the tracee's state has already
been saved, and the tracee is known to be blocked for the duration of
sve_set_common(), it doesn't matter whether this is called at the start
or the end.

For consistency I've moved the setting of TIF_SVE earlier. This will be
cleared when loading FPSIMD-only state, and so moving this has no
resulting functional change.

Note that we only allocate the memory for SVE state when SVE register
contents are provided, avoiding unnecessary memory allocations for tasks
which only use FPSIMD.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Fixes: baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
Fixes: 5d0a8d2fba50 ("arm64/ptrace: Ensure that SME is set up for target when writing SSVE state")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-20-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months ago  arm64/fpsimd: ptrace: Mandate SVE payload for streaming-mode state
Mark Rutland [Thu, 8 May 2025 13:26:38 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace: Mandate SVE payload for streaming-mode state

When a task has PSTATE.SM==1, reads of NT_ARM_SSVE are required to
always present a header with SVE_PT_REGS_SVE, and register data in SVE
format. Reads of NT_ARM_SSVE must never present register data in FPSIMD
format. Within the kernel, we always expect streaming SVE data to be
stored in SVE format.

Currently a user can write to NT_ARM_SSVE with a header presenting
SVE_PT_REGS_FPSIMD rather than SVE_PT_REGS_SVE, placing the task's
FPSIMD/SVE data into an invalid state.

To fix this we can either:

(a) Forbid such writes.
(b) Accept such writes, and immediately convert data into SVE format.

Take the simple option and forbid such writes.
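
A minimal sketch of such a check in sve_set_common(), assuming the existing
'type' and 'header' variable names:

  /* Streaming-mode state must always be written in SVE format */
  if (type == ARM64_VEC_SME &&
      (header.flags & SVE_PT_REGS_MASK) == SVE_PT_REGS_FPSIMD)
          return -EINVAL;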

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-19-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: ptrace: Do not present register data for inactive mode
Mark Rutland [Thu, 8 May 2025 13:26:37 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace: Do not present register data for inactive mode

The SME ptrace ABI is written around the incorrect assumption that
SVE_PT_REGS_FPSIMD and SVE_PT_REGS_SVE are independent bit flags, where
it is possible for both to be clear. In reality they are different
values for bit 0 of the header flags, where SVE_PT_REGS_FPSIMD is 0 and
SVE_PT_REGS_SVE is 1. In cases where code was written expecting that
neither bit flag would be set, the value is equivalent to
SVE_PT_REGS_FPSIMD.
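
For illustration, the relevant definitions (as in the arm64 ptrace UAPI
header) encode exactly that relationship:

  #define SVE_PT_REGS_MASK    (1 << 0)  /* bit 0 of user_sve_header::flags */
  #define SVE_PT_REGS_FPSIMD  0         /* payload is in FPSIMD format */
  #define SVE_PT_REGS_SVE     SVE_PT_REGS_MASK  /* payload is in SVE format */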

One consequence of this is that reads of the NT_ARM_SVE or NT_ARM_SSVE
will erroneously present data from the other mode:

* When PSTATE.SM==1, reads of NT_ARM_SVE will present a header with
  SVE_PT_REGS_FPSIMD, and FPSIMD-formatted data from streaming mode.

* When PSTATE.SM==0, reads of NT_ARM_SSVE will present a header with
  SVE_PT_REGS_FPSIMD, and FPSIMD-formatted data from non-streaming mode.

The original intent was that no register data would be provided in these
cases, as described in commit:

  e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")

Luckily, debuggers do not consume the bogus register data. Both GDB and
LLDB read the NT_ARM_SSVE regset before the NT_ARM_SVE regset, and
assume that when the NT_ARM_SSVE header presents SVE_PT_REGS_FPSIMD, it
is necessary to read register contents from the NT_ARM_SVE regset,
regardless of whether the NT_ARM_SSVE regset provided bogus register
data.

Fix the code to stop presenting register data from the inactive mode.
At the same time, make the manipulation of the flag clearer, and remove
the bogus comment from sve_set_common(). I've given this a quick spin
with GDB and LLDB, and both seem happy.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-18-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: ptrace: Save task state before generating SVE header
Mark Rutland [Thu, 8 May 2025 13:26:36 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace: Save task state before generating SVE header

As sve_init_header_from_task() consumes the saved value of PSTATE.SM and
the saved fp_type, both must be saved before the header is generated.

When generating a coredump for the current task, sve_get_common() calls
sve_init_header_from_task() before saving the task's state. Consequently
the header may be bogus, and the contents of the regset may be
misleading.

Fix this by saving the task's state before generating the header.

Fixes: e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")
Fixes: b017a0cea627 ("arm64/ptrace: Use saved floating point state type to determine SVE layout")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-17-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: ptrace/prctl: Ensure VL changes leave task in a valid state
Mark Rutland [Thu, 8 May 2025 13:26:35 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace/prctl: Ensure VL changes leave task in a valid state

Currently, vec_set_vector_length() can manipulate a task into an invalid
state as a result of a prctl/ptrace syscall which changes the SVE/SME
vector length, resulting in several problems:

(1) When changing the SVE vector length, if the task initially has
    PSTATE.ZA==1, and sve_alloc() fails to allocate memory, the task
    will be left with PSTATE.ZA==1 and sve_state==NULL. This is not a
    legitimate state, and could result in a subsequent null pointer
    dereference.

(2) When changing the SVE vector length, if the task initially has
    PSTATE.SM==1, the task will be left with PSTATE.SM==1 and
    fp_type==FP_STATE_FPSIMD. Streaming mode state always needs to be
    saved in SVE format, so this is not a legitimate state.

    Attempting to restore this state may cause a task to erroneously
    inherit stale streaming mode predicate registers and FFR contents,
    behaving non-deterministically and potentially leaving information
    from another task.

    While in this state, reads of the NT_ARM_SSVE regset will indicate
    that the registers are not stored in SVE format. For the NT_ARM_SSVE
    regset specifically, debuggers interpret this as meaning that
    PSTATE.SM==0.

(3) When changing the SME vector length, if the task initially has
    PSTATE.SM==1, the lower 128 bits of the task's streaming mode vector
    state will be migrated to non-streaming mode, rather than these bits
    being zeroed as is usually the case for changes to PSTATE.SM.

To fix the first issue, we can eagerly allocate the new sve_state and
sme_state before modifying the task. This makes it possible to handle
memory allocation failure without modifying the task state at all, and
removes the need to clear TIF_SVE and TIF_SME.

To fix the second issue, we either need to clear PSTATE.SM or not change
the saved fp_type. Given we're going to eagerly allocate sve_state and
sme_state, the simplest option is to preserve PSTATE.SM and the saved
fp_type, and consistently truncate the SVE state. This ensures that the
task always stays in a valid state, and by virtue of not exiting
streaming mode, this also sidesteps the third issue.

I believe these changes should not be problematic for realistic usage:

* When the SVE/SME vector length is changed via prctl(), syscall entry
  will have cleared PSTATE.SM. Unless the task's state has been
  manipulated via ptrace after entry, the task will have PSTATE.SM==0.

* When the SVE/SME vector length is changed via a write to the
  NT_ARM_SVE or NT_ARM_SSVE regsets, PSTATE.SM will be forced
  immediately after the length change, and new vector state will be
  copied from userspace.

* When the SME vector length is changed via a write to the NT_ARM_ZA
  regset, the (S)SVE state is clobbered today, so anyone who cares about
  the specific state would need to install this after writing to the
  NT_ARM_ZA regset.

As we need to free the old SVE state while TIF_SVE may still be set, we
cannot use sve_free(), and using kfree() directly makes it clear that
the free pairs with the subsequent assignment. As this leaves sve_free()
unused, I've removed the existing sve_free() and renamed __sve_free() to
mirror sme_free().
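
Schematically, with hypothetical local names, the resulting shape is:

  /* Allocate replacement buffers before modifying any task state */
  sve_state = kzalloc(sve_size, GFP_KERNEL);
  if (!sve_state)
          return -ENOMEM;

  /* ... truncate/convert the saved state as required ... */

  kfree(task->thread.sve_state);  /* pairs with the assignment below */
  task->thread.sve_state = sve_state;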

Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME")
Fixes: baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-16-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: ptrace/prctl: Ensure VL changes do not resurrect stale data
Mark Rutland [Thu, 8 May 2025 13:26:34 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace/prctl: Ensure VL changes do not resurrect stale data

The SVE/SME vector lengths can be changed via prctl/ptrace syscalls.
Changes to the SVE/SME vector lengths are documented as preserving the
lower 128 bits of the Z registers (i.e. the bits shared with the FPSIMD
V registers). To ensure this, vec_set_vector_length() explicitly copies
register values from a task's saved SVE state to its saved FPSIMD state
when dropping the task to FPSIMD-only.

The logic for this was not updated when FPSIMD/SVE state tracking
was changed across commits:

  baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
  a0136be443d5 ("arm64/fpsimd: Load FP state based on recorded data type")
  bbc6172eefdb ("arm64/fpsimd: SME no longer requires SVE register state")
  8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")

Since the last commit above, a task's FPSIMD/SVE state may be stored in
FPSIMD format while TIF_SVE is set, and the stored SVE state is stale.
When vec_set_vector_length() encounters this case, it will erroneously
clobber the live FPSIMD state with stale SVE state by using
sve_to_fpsimd().

Fix this by using fpsimd_sync_from_effective_state() instead.

Related issues with streaming mode state will be addressed in subsequent
patches.

Fixes: 8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-15-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Make clone() compatible with ZA lazy saving
Mark Rutland [Thu, 8 May 2025 13:26:33 +0000 (14:26 +0100)] 
arm64/fpsimd: Make clone() compatible with ZA lazy saving

Linux is intended to be compatible with userspace written to Arm's
AAPCS64 procedure call standard [1,2]. For the Scalable Matrix Extension
(SME), AAPCS64 was extended with a "ZA lazy saving scheme", where SME's
ZA tile is lazily callee-saved and caller-restored. In this scheme,
TPIDR2_EL0 indicates whether the ZA tile is live or has been saved by
pointing to a "TPIDR2 block" in memory, which has a "za_save_buffer"
pointer. This scheme has been implemented in GCC and LLVM, with
necessary runtime support implemented in glibc and bionic.

AAPCS64 does not specify how the ZA lazy saving scheme is expected to
interact with thread creation mechanisms such as fork() and
pthread_create(), which would be implemented in terms of the Linux clone
syscall. The behaviour implemented by Linux and glibc/bionic doesn't
always compose safely, as explained below.

Currently the clone syscall is implemented such that PSTATE.ZA and the
ZA tile are always inherited by the new task, and TPIDR2_EL0 is
inherited unless the 'flags' argument includes CLONE_SETTLS,
in which case TPIDR2_EL0 is set to 0/NULL. This doesn't make much sense:

(a) TPIDR2_EL0 is part of the calling convention, and changes as control
    is passed between functions. It is *NOT* used for thread local
    storage, despite superficial similarity to TPIDR_EL0, which is
    used as the TLS register.

(b) TPIDR2_EL0 and PSTATE.ZA are tightly coupled in the procedure call
    standard, and some combinations of states are illegal. In general,
    manipulating the two independently is not guaranteed to be safe.

In practice, code which is compliant with the procedure call standard
may issue a clone syscall while in the "ZA dormant" state, where
PSTATE.ZA==1 and TPIDR2_EL0 is non-null and indicates that ZA needs to
be saved. This can cause a variety of problems, including:

* If the implementation of pthread_create() passes CLONE_SETTLS, the
  new thread will start with PSTATE.ZA==1 and TPIDR2==NULL. Per the
  procedure call standard this is not a legitimate state for most
  functions. This can cause data corruption (e.g. as code may rely on
  PSTATE.ZA being 0 to guarantee that an SMSTART ZA instruction will
  zero the ZA tile contents), and may result in other undefined
  behaviour.

* If the implementation of pthread_create() does not pass CLONE_SETTLS, the
  new thread will start with PSTATE.ZA==1 and TPIDR2 pointing to a
  TPIDR2 block on the parent thread's stack. This can result in a
  variety of problems, e.g.

  - The child may write back to the parent's za_save_buffer, corrupting
    its contents.

  - The child may read from the TPIDR2 block after the parent has reused
    this memory for something else, and consequently the child may abort
    or clobber arbitrary memory.

Ideally we'd require that userspace ensures that a task is in the "ZA
off" state (with PSTATE.ZA==0 and TPIDR2_EL0==NULL) prior to issuing a
clone syscall, and have the kernel force this state for new threads.
Unfortunately, contemporary C libraries do not do this, and simply
forcing this state within the implementation of clone would break
fork().

Instead, we can bodge around this by considering the CLONE_VM flag, and
manipulate PSTATE.ZA and TPIDR2_EL0 as a pair. CLONE_VM indicates that
the new task will run in the same address space as its parent, and in
that case it doesn't make sense to inherit a stale pointer to the
parent's TPIDR2 block:

* For fork(), CLONE_VM will not be set, and it is safe to inherit both
  PSTATE.ZA and TPIDR2_EL0 as the new task will have its own copy of the
  address space, and cannot clobber its parent's stack.

* For pthread_create() and vfork(), CLONE_VM will be set, and discarding
  PSTATE.ZA and TPIDR2_EL0 for the new task doesn't break any existing
  assumptions in userspace.

Implement this behaviour for clone(). We currently inherit PSTATE.ZA in
arch_dup_task_struct(), but this does not have access to the clone
flags, so move this logic under copy_thread(). Documentation is updated
to describe the new behaviour.
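
A sketch of the resulting copy_thread() logic ('p' being the new task and
'clone_flags' taken from the clone args; treat the exact field manipulation
as illustrative):

  if (clone_flags & CLONE_VM) {
          /*
           * The child shares the parent's address space; do not inherit
           * ZA or a stale pointer to the parent's TPIDR2 block.
           */
          p->thread.svcr &= ~SVCR_ZA_MASK;
          p->thread.tpidr2_el0 = 0;
  }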

[1] https://github.com/ARM-software/abi-aa/releases/download/2025Q1/aapcs64.pdf
[2] https://github.com/ARM-software/abi-aa/blob/c51addc3dc03e73a016a1e4edf25440bcac76431/aapcs64/aapcs64.rst

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Richard Sandiford <richard.sandiford@arm.com>
Cc: Sander De Smalen <sander.desmalen@arm.com>
Cc: Tamas Petz <tamas.petz@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yury Khrustalev <yury.khrustalev@arm.com>
Acked-by: Yury Khrustalev <yury.khrustalev@arm.com>
Link: https://lore.kernel.org/r/20250508132644.1395904-14-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Clear PSTATE.SM during clone()
Mark Rutland [Thu, 8 May 2025 13:26:32 +0000 (14:26 +0100)] 
arm64/fpsimd: Clear PSTATE.SM during clone()

Currently arch_dup_task_struct() doesn't handle cases where the parent
task has PSTATE.SM==1. Since syscall entry exits streaming mode, the
parent will usually have PSTATE.SM==0, but this can be changed by ptrace
after syscall entry. When this happens, arch_dup_task_struct() will
initialise the new task into an invalid state. The new task inherits the
parent's configuration of PSTATE.SM, but fp_type is set to
FP_STATE_FPSIMD, TIF_SVE and TIF_SME may be cleared, and both sve_state and
sme_state may be set to NULL.

This can result in a variety of problems whenever the new task's state
is manipulated, including kernel NULL pointer dereferences and leaking
of streaming mode state between tasks.

When ptrace is not involved, the parent will have PSTATE.SM==0 as a
result of syscall entry, and the documentation in
Documentation/arch/arm64/sme.rst says:

| On process creation (eg, clone()) the newly created process will have
| PSTATE.SM cleared.

... so make this true by using task_smstop_sm() to exit streaming mode
in the child task, avoiding the problems above.
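
Illustratively ('dst' assumed to be the new task):

  /* Exit streaming mode in the child, as the documentation requires */
  if (thread_sm_enabled(&dst->thread))
          task_smstop_sm(dst);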

Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-13-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Consistently preserve FPSIMD state during clone()
Mark Rutland [Thu, 8 May 2025 13:26:31 +0000 (14:26 +0100)] 
arm64/fpsimd: Consistently preserve FPSIMD state during clone()

In arch_dup_task_struct() we try to ensure that the child task inherits
the FPSIMD state of its parent, but this depends on the parent task's
saved state being in FPSIMD format, which is not always the case.
Consequently the child task may inherit stale FPSIMD state in some
cases.

This can happen when the parent's state has been modified by ptrace
since syscall entry, as writes to the NT_ARM_SVE regset may save state
in SVE format. This has been possible since commit:

  bc0ee4760364 ("arm64/sve: Core task context handling")

More recently it has been possible for a task's FPSIMD/SVE state to be
saved before lazy discarding was guaranteed to occur, in which case
preemption could cause the effective FPSIMD state to be saved in SVE
format non-deterministically. This has been possible since commit:

  f130ac0ae441 ("arm64: syscall: unmask DAIF earlier for SVCs")

Fix this by saving the parent task's effective FPSIMD state into FPSIMD
format before copying the task_struct. As this requires modifying the
parent's fpsimd_state, we must save+flush the state to avoid racing with
concurrent manipulation.

Similar issues exist when the parent has streaming mode state, and will
be addressed by subsequent patches.

Fixes: bc0ee4760364 ("arm64/sve: Core task context handling")
Fixes: f130ac0ae441 ("arm64: syscall: unmask DAIF earlier for SVCs")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-12-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Remove redundant task->mm check
Mark Rutland [Thu, 8 May 2025 13:26:30 +0000 (14:26 +0100)] 
arm64/fpsimd: Remove redundant task->mm check

For historical reasons, arch_dup_task_struct() only calls
fpsimd_preserve_current_state() when current->mm is non-NULL, but this
is no longer necessary.

Historically TIF_FOREIGN_FPSTATE was only managed for user threads, and
was never set for kernel threads. At the time, various functions
attempted to avoid saving/restoring state for kernel threads by checking
task_struct::mm to try to distinguish user threads from kernel threads.

We added the current->mm check to arch_dup_task_struct() in commit:

  6eb6c80187c5 ("arm64: kernel thread don't need to save fpsimd context.")

... where the intent was to avoid pointlessly saving state for kernel
threads, which never had live state (and the saved state would never be
consumed).

Subsequently we began setting TIF_FOREIGN_FPSTATE for kernel threads,
and removed most of the task_struct::mm checks in commit:

  df3fb9682045 ("arm64: fpsimd: Eliminate task->mm checks")

... but we missed the check in arch_dup_task_struct(), which is similarly
redundant.

Remove the redundant check from arch_dup_task_struct().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-11-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: signal: Use SMSTOP behaviour in setup_return()
Mark Rutland [Thu, 8 May 2025 13:26:29 +0000 (14:26 +0100)] 
arm64/fpsimd: signal: Use SMSTOP behaviour in setup_return()

Historically the behaviour of setup_return() was nondeterministic,
depending on whether the task's FPSIMD/SVE/SME state happened to be live.
We fixed most of that in commit:

  929fa99b1215 ("arm64/fpsimd: signal: Always save+flush state early")

... but we didn't decide on how clearing PSTATE.SM should behave, and left a
TODO comment to that effect.

Use the new task_smstop_sm() helper to make this behave as if an SMSTOP
instruction was used to exit streaming mode. This would have been the
most common behaviour prior to the commit above.

Fixes: 40a8e87bb328 ("arm64/sme: Disable ZA and streaming mode when handling signals")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-10-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Add task_smstop_sm()
Mark Rutland [Thu, 8 May 2025 13:26:28 +0000 (14:26 +0100)] 
arm64/fpsimd: Add task_smstop_sm()

In a few places we want to transition a task from streaming mode to
non-streaming mode, e.g. signal delivery where we historically tried to
use an SMSTOP SM instruction.

Add a new helper to manipulate a task's state in the same way as an
SMSTOP SM instruction. I have not added a corresponding helper to
simulate the effects of SMSTART SM. Only ptrace transitions a task into
streaming mode, and ptrace has distinct semantics for such transitions.

Per ARM DDI 0487 L.a, section B1.4.6:

| RRSWFQ
| When the Effective value of PSTATE.SM is changed by any method from 0
| to 1, an entry to Streaming SVE mode is performed, and all implemented
| bits of Streaming SVE register state are set to zero.

| RKFRQZ
| When the Effective value of PSTATE.SM is changed by any method from 1
| to 0, an exit from Streaming SVE mode is performed, and in the
| newly-entered mode, all implemented bits of the SVE scalable vector
| registers, SVE predicate registers, and FFR, are set to zero.

Per ARM DDI 0487 L.a, section C5.2.9:

| On entry to or exit from Streaming SVE mode, FPMR is set to 0

Per ARM DDI 0487 L.a, section C5.2.10:

| On entry to or exit from Streaming SVE mode, FPSR.{IOC, DZC, OFC, UFC,
| IXC, IDC, QC} are set to 1 and the remaining bits are set to 0.

This means bits 0, 1, 2, 3, 4, 7, and 27 respectively, i.e. 0x0800009f
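
Putting the quoted rules together, a helper along these lines (a sketch
only; the field accesses are assumptions about the current thread_struct
layout):

  static void task_smstop_sm(struct task_struct *task)
  {
          /* Z/P/FFR, and hence the FPSIMD view of the Z regs, are zeroed */
          memset(&task->thread.uw.fpsimd_state.vregs, 0,
                 sizeof(task->thread.uw.fpsimd_state.vregs));

          task->thread.uw.fpsimd_state.fpsr = 0x0800009f; /* per C5.2.10 */
          task->thread.uw.fpmr = 0;                       /* per C5.2.9 */

          task->thread.svcr &= ~SVCR_SM_MASK;     /* exit streaming mode */
          task->thread.fp_type = FP_STATE_FPSIMD; /* state is FPSIMD-only */
  }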

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-9-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Factor out {sve,sme}_state_size() helpers
Mark Rutland [Thu, 8 May 2025 13:26:27 +0000 (14:26 +0100)] 
arm64/fpsimd: Factor out {sve,sme}_state_size() helpers

In subsequent patches we'll need to determine the SVE/SME state size for
a given SVE VL and SME VL regardless of whether a task is currently
configured with those VLs. Split the sizing logic out of
sve_state_size() and sme_state_size() so that we don't need to open-code
this logic elsewhere.

At the same time, apply minor cleanups:

* Move sve_state_size() into fpsimd.h, matching the placement of
  sme_state_size().

* Remove the feature checks from sve_state_size(). We only call
  sve_state_size() when at least one of SVE and SME are supported, and
  when either of the two is not supported, the task's corresponding
  SVE/SME vector length will be zero.
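
For illustration, the factored-out helpers might look like:

  /* Size of the SVE backing storage for the given SVE and SME VLs */
  static inline unsigned int __sve_state_size(unsigned int sve_vl,
                                              unsigned int sme_vl)
  {
          unsigned int vl = max(sve_vl, sme_vl);

          return SVE_SIG_REGS_SIZE(sve_vq_from_vl(vl));
  }

  static inline unsigned int sve_state_size(struct task_struct const *task)
  {
          return __sve_state_size(task_get_sve_vl(task),
                                  task_get_sme_vl(task));
  }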

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-8-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Clarify sve_sync_*() functions
Mark Rutland [Thu, 8 May 2025 13:26:26 +0000 (14:26 +0100)] 
arm64/fpsimd: Clarify sve_sync_*() functions

The sve_sync_{to,from}_fpsimd*() functions are intended to
extract/insert the currently effective FPSIMD state of a task regardless
of whether the task's state is saved in FPSIMD format or SVE format.
Historically they were only used by ptrace, but sve_sync_to_fpsimd() is
now used more widely, and sve_sync_from_fpsimd_zeropad() may be used
more widely in future.

When FPSIMD/SVE state tracking was changed across commits:

  baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
  a0136be443d5 ("arm64/fpsimd: Load FP state based on recorded data type")
  bbc6172eefdb ("arm64/fpsimd: SME no longer requires SVE register state")
  8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")

... sve_sync_to_fpsimd() was updated to consider task->thread.fp_type
rather than the task's TIF_SVE and PSTATE.SM, but (apparently due to an
oversight) sve_sync_from_fpsimd_zeropad() was left as-is, leaving the
two inconsistent.

Due to this, sve_sync_from_fpsimd_zeropad() may copy state from
task->thread.uw.fpsimd_state into task->thread.sve_state when
task->thread.fp_type == FP_STATE_FPSIMD. This is redundant (but benign)
as task->thread.uw.fpsimd_state is the effective state that will be
restored, and task->thread.sve_state will not be consumed. For
consistency, and to avoid the redundant work, it is better for
sve_sync_from_fpsimd_zeropad() to consider task->thread.fp_type alone,
matching sve_sync_to_fpsimd().

The naming of both functions is somewhat unfortunate, as it is unclear
when and why they copy state. It would be better to describe them in
terms of the effective state.

Considering all of the above, clean this up:

* Adjust sve_sync_from_fpsimd_zeropad() to consider
  task->thread.fp_type.

* Update comments to clarify the intended semantics/usage. I've removed
  the description that task->thread.sve_state must have been allocated,
  as this is only necessary when task->thread.fp_type == FP_STATE_SVE,
  which itself implies that task->thread.sve_state must have been
  allocated.

* Rename the functions to more clearly indicate when/why they copy
  state:

  - sve_sync_to_fpsimd() => fpsimd_sync_from_effective_state()

  - sve_sync_from_fpsimd_zeropad() => fpsimd_sync_to_effective_state_zeropad()

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: ptrace: Consistently handle partial writes to NT_ARM_(S)SVE
Mark Rutland [Thu, 8 May 2025 13:26:25 +0000 (14:26 +0100)] 
arm64/fpsimd: ptrace: Consistently handle partial writes to NT_ARM_(S)SVE

Partial writes to the NT_ARM_SVE and NT_ARM_SSVE regsets using an SVE
payload are handled inconsistently and non-deterministically. A comment
within sve_set_common() indicates that we intended that a partial write
would preserve any effective FPSIMD/SVE state which was not overwritten,
but this has never worked consistently, and during syscalls the FPSIMD
vector state may be non-deterministically preserved and may be
erroneously migrated between streaming and non-streaming SVE modes.

The simplest fix is to handle a partial write by consistently zeroing
the remaining state. As detailed below I do not believe this will
adversely affect any real usage.

Neither GDB nor LLDB attempt partial writes to these regsets, and the
documentation (in Documentation/arch/arm64/sve.rst) has always indicated
that state preservation was not guaranteed, as it says:

| The effect of writing a partial, incomplete payload is unspecified.

When the logic was originally introduced in commit:

  43d4da2c45b2 ("arm64/sve: ptrace and ELF coredump support")

... there were two potential behaviours, depending on TIF_SVE:

* When TIF_SVE was clear, all SVE state would be zeroed, excluding the
  low 128 bits of vectors shared with FPSIMD, FPSR, and FPCR.

* When TIF_SVE was set, all SVE state would be zeroed, including the
  low 128 bits of vectors shared with FPSIMD, but excluding FPSR and
  FPCR.

Note that as writing to NT_ARM_SVE would set TIF_SVE, partial writes to
NT_ARM_SVE would not be idempotent, and if a first write preserved the
low 128 bits, a subsequent (potentially identical) partial write would
discard the low 128 bits.

When support for the NT_ARM_SSVE regset was added in commit:

  e12310a0d30f ("arm64/sme: Implement ptrace support for streaming mode SVE registers")

... the above behaviour was retained for writes to the NT_ARM_SVE
regset, though writes to the NT_ARM_SSVE would always zero the SVE
registers and would not inherit FPSIMD register state. This happened as
fpsimd_sync_to_sve() only copied the FPSIMD regs when TIF_SVE was clear
and PSTATE.SM==0.

Subsequently, when FPSIMD/SVE state tracking was changed across commits:

  baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
  a0136be443d5 ("arm64/fpsimd: Load FP state based on recorded data type")
  bbc6172eefdb ("arm64/fpsimd: SME no longer requires SVE register state")
  8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")

... there was no corresponding update to the ptrace code, nor to
fpsimd_sync_to_sve(), which still considers TIF_SVE and PSTATE.SM rather
than the saved fp_type. The saved state can be in the FPSIMD format
regardless of whether TIF_SVE is set or clear, and the saved type can
change non-deterministically during syscalls. Consequently a subsequent
partial write to the NT_ARM_SVE or NT_ARM_SSVE regsets may
non-deterministically preserve the FPSIMD state, and may migrate this
state between streaming and non-streaming modes.

Clean this up by never attempting to preserve ANY state when writing an
SVE payload to the NT_ARM_SVE/NT_ARM_SSVE regsets, zeroing all relevant
state including FPSR and FPCR. This simplifies the code, makes the
behaviour deterministic, and avoids migrating state between streaming
and non-streaming modes. As above, I do not believe this should
adversely affect existing userspace applications.

At the same time, remove fpsimd_sync_to_sve(). It is no longer used,
doesn't do what its documentation implies, and gets in the way of other
cleanups and fixes.

Fixes: 43d4da2c45b2 ("arm64/sve: ptrace and ELF coredump support")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Spickett <david.spickett@arm.com>
Cc: Luis Machado <luis.machado@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: signal: Consistently read FPSIMD context
Mark Rutland [Thu, 8 May 2025 13:26:24 +0000 (14:26 +0100)] 
arm64/fpsimd: signal: Consistently read FPSIMD context

For historical reasons, restore_sve_fpsimd_context() has an open-coded
copy of the logic from read_fpsimd_context(), which is used to either
restore an FPSIMD-only context, or to merge FPSIMD state into an
SVE state when restoring an SVE+FPSIMD context. The logic is *almost*
identical.

Refactor the logic to avoid duplication and make this clearer.

This comes with two functional changes that I do not believe will be
problematic in practice:

* The user_fpsimd_state::size field will be checked in all restore paths
  that consume user_fpsimd_state. The kernel always populates this
  field when delivering a signal, and so this should contain the
  expected value unless it has been corrupted.

* If a read of user_fpsimd_state fails, we will return early without
  modifying TIF_SVE, the saved SVCR, or the saved fp_type. This will
  leave the task in a consistent state, without potentially resurrecting
  stale FPSIMD state. A read of user_fpsimd_state should never fail
  unless the structure has been corrupted or the stack has been
  unmapped.

Suggested-by: Will Deacon <will@kernel.org>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-5-mark.rutland@arm.com
[will: Ensure read_fpsimd_context() returns negative error code or zero]
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: signal: Mandate SVE payload for streaming-mode state
Mark Rutland [Thu, 8 May 2025 13:26:23 +0000 (14:26 +0100)] 
arm64/fpsimd: signal: Mandate SVE payload for streaming-mode state

Non-streaming SVE state may be preserved without an SVE payload, in
which case the SVE context only has a header with VL==0, and all state
can be restored from the FPSIMD context. Streaming SVE state is always
preserved with an SVE payload, where the SVE context header has VL!=0,
and the SVE_SIG_FLAG_SM flag is set.

The kernel never preserves an SVE context where SVE_SIG_FLAG_SM is set
without an SVE payload. However, restore_sve_fpsimd_context() doesn't
forbid restoring such a context, and will handle this case by clearing
PSTATE.SM and restoring the FPSIMD context into non-streaming mode,
which isn't consistent with the SVE_SIG_FLAG_SM flag.

Forbid this case, and mandate an SVE payload when the SVE_SIG_FLAG_SM
flag is set. This avoids an awkward ABI quirk and reduces the risk that
later rework to this code permits configuring a task with PSTATE.SM==1
and fp_type==FP_STATE_FPSIMD.
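
A minimal sketch of the new check in restore_sve_fpsimd_context(), assuming
'sve' holds the copied-in sve_context header:

  /* A streaming-mode context must carry an SVE register payload */
  if ((sve.flags & SVE_SIG_FLAG_SM) && !sve.vl)
          return -EINVAL;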

I've marked this as a fix given that we never intended to support this
case, and we don't want anyone to start relying upon the old behaviour
once we re-enable SME.

Fixes: 85ed24dad290 ("arm64/sme: Implement streaming SVE signal handling")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: signal: Clear PSTATE.SM when restoring FPSIMD frame only
Mark Rutland [Thu, 8 May 2025 13:26:22 +0000 (14:26 +0100)] 
arm64/fpsimd: signal: Clear PSTATE.SM when restoring FPSIMD frame only

On systems with SVE and/or SME, the kernel will always create SVE and
FPSIMD signal frames when delivering a signal, but a user can manipulate
signal frames such that a signal return only observes an FPSIMD signal
frame. When this happens, restore_fpsimd_context() will restore state
such that fp_type==FP_STATE_FPSIMD, but will leave PSTATE.SM as-is.

It is possible for a user to set PSTATE.SM between syscall entry and
execution of the sigreturn logic (e.g. via ptrace), and consequently the
sigreturn may result in the task having PSTATE.SM==1 and
fp_type==FP_STATE_FPSIMD.

For various reasons it is not legitimate for a task to be in a state
where PSTATE.SM==1 and fp_type==FP_STATE_FPSIMD. Portions of the user
ABI are written with the requirement that streaming SVE state is always
presented in SVE format rather than FPSIMD format, and as there is no
mechanism to permit access to only the FPSIMD subset of streaming SVE
state, streaming SVE state must always be saved and restored in SVE
format.

Fix restore_fpsimd_context() to clear PSTATE.SM when restoring an FPSIMD
signal frame without an SVE signal frame. This matches the current
behaviour when an SVE signal frame is present, but the SVE signal frame
has no register payload (e.g. as is the case on SME-only systems which
lack SVE).

This change should have no effect for applications which do not alter
signal frames (i.e. almost all applications). I do not expect
non-{malicious,buggy} applications to hide the SVE signal frame, but
I've chosen to clear PSTATE.SM rather than mandating the presence of an
SVE signal frame in case there is some legacy (non-SME) usage that I am
not currently aware of.

For context, the SME handling was originally introduced in commit:

  85ed24dad290 ("arm64/sme: Implement streaming SVE signal handling")

... and subsequently updated/fixed to handle SME-only systems in commits:

  7dde62f0687c ("arm64/signal: Always accept SVE signal frames on SME only systems")
  f26cd7372160 ("arm64/signal: Always allocate SVE signal frames on SME only systems")

Fixes: 85ed24dad290 ("arm64/sme: Implement streaming SVE signal handling")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Do not discard modified SVE state
Mark Rutland [Thu, 8 May 2025 13:26:21 +0000 (14:26 +0100)] 
arm64/fpsimd: Do not discard modified SVE state

Historically SVE state was discarded deterministically early in the
syscall entry path, before ptrace is notified of syscall entry. This
permitted ptrace to modify SVE state before and after the "real" syscall
logic was executed, with the modified state being retained.

This behaviour was changed by commit:

  8c845e2731041f0f ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")

That commit was intended to speed up workloads that used SVE by
opportunistically leaving SVE enabled when returning from a syscall.
The syscall entry logic was modified to truncate the SVE state without
disabling userspace access to SVE, and fpsimd_save_user_state() was
modified to discard userspace SVE state whenever
in_syscall(current_pt_regs()) is true, i.e. when
current_pt_regs()->syscallno != NO_SYSCALL.

Leaving SVE enabled opportunistically resulted in a couple of changes to
userspace visible behaviour which weren't described at the time, but are
logical consequences of opportunistically leaving SVE enabled:

* Signal handlers can observe the type of saved state in the signal's
  sve_context record. When the kernel only tracks FPSIMD state, the 'vq'
  field is 0 and there is no space allocated for register contents. When
  the kernel tracks SVE state, the 'vq' field is non-zero and the
  register contents are saved into the record.

  As a result of the above commit, 'vq' (and the presence of SVE
  register state) is non-deterministically zero or non-zero for a period
  of time after a syscall. The effective register state is still
  deterministic.

  Hopefully no-one relies on this being deterministic. In general,
  handlers for asynchronous events cannot expect a deterministic state.

* Similarly to signal handlers, ptrace requests can observe the type of
  saved state in the NT_ARM_SVE and NT_ARM_SSVE regsets, as this is
  exposed in the header flags. As a result of the above commit, this is
  now in a non-deterministic state after a syscall. The effective
  register state is still deterministic.

  Hopefully no-one relies on this being deterministic. In general,
  debuggers would have to handle this changing at arbitrary points
  during program flow.

Discarding the SVE state within fpsimd_save_user_state() resulted in
other changes to userspace visible behaviour which are not desirable:

* A ptrace tracer can modify (or create) a tracee's SVE state at syscall
  entry or syscall exit. As a result of the above commit, the tracee's
  SVE state can be discarded non-deterministically after modification,
  rather than being retained as it previously was.

  Note that for co-operative tracer/tracee pairs, the tracer may
  (re)initialise the tracee's state arbitrarily after the tracee sends
  itself an initial SIGSTOP via a syscall, so this affects realistic
  design patterns.

* The current_pt_regs()->syscallno field can be modified via ptrace, and
  can be altered even when the tracee is not really in a syscall,
  causing non-deterministic discarding to occur in situations where this
  was not previously possible.

Further, using current_pt_regs()->syscallno in this way is unsound:

* There are data races between readers and writers of the
  current_pt_regs()->syscallno field.

  The current_pt_regs()->syscallno field is written in interruptible
  task context using plain C accesses, and is read in irq/softirq
  context using plain C accesses. These accesses are subject to data
  races, with the usual concerns with tearing, etc.

* Writes to current_pt_regs()->syscallno are subject to compiler
  reordering.

  As current_pt_regs()->syscallno is written with plain C accesses,
  the compiler is free to move those writes arbitrarily relative to
  anything which doesn't access the same memory location.

  In theory this could break signal return, where prior to restoring the
  SVE state, restore_sigframe() calls forget_syscall(). If the write
  were hoisted after restore of some SVE state, that state could be
  discarded unexpectedly.

  In practice that reordering cannot happen in the absence of LTO (as
  cross compilation-unit function calls happen to prevent this reordering),
  and that reordering appears to be unlikely in the presence of LTO.

Additionally, since commit:

  f130ac0ae4412dbe ("arm64: syscall: unmask DAIF earlier for SVCs")

... DAIF is unmasked before el0_svc_common() sets regs->syscallno to the
real syscall number. Consequently state may be saved in SVE format prior
to this point.

Considering all of the above, current_pt_regs()->syscallno should not be
used to infer whether the SVE state can be discarded. Luckily we can
instead use cpu_fp_state::to_save to track when it is safe to discard
the SVE state:

* At syscall entry, after the live SVE register state is truncated, set
  cpu_fp_state::to_save to FP_STATE_FPSIMD to indicate that only the
  FPSIMD portion is live and needs to be saved.

* At syscall exit, once the task's state is guaranteed to be live, set
  cpu_fp_state::to_save to FP_STATE_CURRENT to indicate that TIF_SVE
  must be considered to determine which state needs to be saved.

* Whenever state is modified, it must be saved+flushed prior to
  manipulation. The state will be truncated if necessary when it is
  saved, and reloading the state will set fp_state::to_save to
  FP_STATE_CURRENT, preventing subsequent discarding.

This permits SVE state to be discarded *only* when it is known to have
been truncated (and the non-FPSIMD portions must be zero), and ensures
that SVE state is retained after it is explicitly modified.
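
A sketch of the syscall entry/exit hooks described above (the function
names are hypothetical; fpsimd_last_state and the FP_STATE_* values exist
in the current code):

  void fpsimd_syscall_enter(void)
  {
          sve_user_discard();     /* truncate the live SVE state */

          /* Only the FPSIMD portion is live and needs to be saved */
          __this_cpu_write(fpsimd_last_state.to_save, FP_STATE_FPSIMD);
  }

  void fpsimd_syscall_exit(void)
  {
          /* The task's state is live again; consult TIF_SVE when saving */
          __this_cpu_write(fpsimd_last_state.to_save, FP_STATE_CURRENT);
  }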

For backporting, note that this fix depends on the following commits:

b2482807fbd4 ("arm64/sme: Optimise SME exit on syscall entry")
f130ac0ae441 ("arm64: syscall: unmask DAIF earlier for SVCs")
929fa99b1215 ("arm64/fpsimd: signal: Always save+flush state early")

Fixes: 8c845e273104 ("arm64/sve: Leave SVE enabled on syscall if we don't context switch")
Fixes: f130ac0ae441 ("arm64: syscall: unmask DAIF earlier for SVCs")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250508132644.1395904-2-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agofirmware: SDEI: Allow sdei initialization without ACPI_APEI_GHES
Huang Yiwei [Wed, 7 May 2025 04:57:57 +0000 (12:57 +0800)] 
firmware: SDEI: Allow sdei initialization without ACPI_APEI_GHES

SDEI is usually initialized via the ACPI table, but on platforms where
ACPI is not used, the SDEI feature can still be used to handle
specific firmware calls or other customized purposes. Therefore, it
is not necessary for ARM_SDE_INTERFACE to depend on ACPI_APEI_GHES.

In commit dc4e8c07e9e2 ("ACPI: APEI: explicit init of HEST and GHES
in acpi_init()"), to make APEI ready earlier, sdei_init was moved
into acpi_ghes_init instead of being a standalone initcall, adding
ACPI_APEI_GHES dependency to ARM_SDE_INTERFACE. This restricts the
flexibility and usability of SDEI.

This patch corrects the dependency in Kconfig and splits sdei_init()
into two separate functions: sdei_init() and acpi_sdei_init().
sdei_init() will be called by arch_initcall and will only initialize
the platform driver, while acpi_sdei_init() will initialize the
device from acpi_ghes_init() when ACPI is ready. This allows the
initialization of SDEI without ACPI_APEI_GHES enabled.
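
Schematically (simplified; the real registration details differ):

  /* arch_initcall: driver registration only, usable without ACPI */
  static int __init sdei_init(void)
  {
          return platform_driver_register(&sdei_driver);
  }
  arch_initcall(sdei_init);

  /* Called from acpi_ghes_init() once ACPI is ready: create the device */
  int __init acpi_sdei_init(void)
  {
          if (acpi_disabled)
                  return 0;

          platform_device_register_simple(sdei_driver.driver.name,
                                          0, NULL, 0);
          return 0;
  }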

Fixes: dc4e8c07e9e2 ("ACPI: APEI: explicit init of HEST and GHES in apci_init()")
Cc: Shuai Xue <xueshuai@linux.alibaba.com>
Signed-off-by: Huang Yiwei <quic_hyiwei@quicinc.com>
Reviewed-by: Shuai Xue <xueshuai@linux.alibaba.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Link: https://lore.kernel.org/r/20250507045757.2658795-1-quic_hyiwei@quicinc.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: cpufeature: Move arm64_use_ng_mappings to the .data section to prevent wrong...
Yeoreum Yun [Fri, 2 May 2025 18:04:12 +0000 (19:04 +0100)] 
arm64: cpufeature: Move arm64_use_ng_mappings to the .data section to prevent wrong idmap generation

The PTE_MAYBE_NG macro sets the nG page table bit according to the value
of "arm64_use_ng_mappings". This variable is currently placed in the
.bss section. create_init_idmap() is called before the .bss section
initialisation which is done in early_map_kernel(). Therefore,
data/text prot in create_init_idmap() could be set incorrectly through
the PAGE_KERNEL -> PROT_DEFAULT -> PTE_MAYBE_NG macros.

   # llvm-objdump-21 --syms vmlinux-gcc | grep arm64_use_ng_mappings
     ffff800082f242a8 g     O .bss    0000000000000001 arm64_use_ng_mappings

The create_init_idmap() function disassembly compiled with llvm-21:

  // create_init_idmap()
  ffff80008255c058: d10103ff      sub sp, sp, #0x40
  ffff80008255c05c: a9017bfd      stp x29, x30, [sp, #0x10]
  ffff80008255c060: a90257f6      stp x22, x21, [sp, #0x20]
  ffff80008255c064: a9034ff4      stp x20, x19, [sp, #0x30]
  ffff80008255c068: 910043fd      add x29, sp, #0x10
  ffff80008255c06c: 90003fc8      adrp x8, 0xffff800082d54000
  ffff80008255c070: d280e06a      mov x10, #0x703     // =1795
  ffff80008255c074: 91400409      add x9, x0, #0x1, lsl #12 // =0x1000
  ffff80008255c078: 394a4108      ldrb w8, [x8, #0x290] ------------- (1)
  ffff80008255c07c: f2e00d0a      movk x10, #0x68, lsl #48
  ffff80008255c080: f90007e9      str x9, [sp, #0x8]
  ffff80008255c084: aa0103f3      mov x19, x1
  ffff80008255c088: aa0003f4      mov x20, x0
  ffff80008255c08c: 14000000      b 0xffff80008255c08c <__pi_create_init_idmap+0x34>
  ffff80008255c090: aa082d56      orr x22, x10, x8, lsl #11 -------- (2)

Note: (1) loads the arm64_use_ng_mappings value into w8, and (2) sets
the text or data prot with the w8 value to set the PTE_NG bit. If the .bss
section isn't initialized, x8 could include a garbage value and generate
an incorrect mapping.

Annotate arm64_use_ng_mappings as __read_mostly so that it is placed in
the .data section.
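
That is (illustrative):

  /* Lives in .data, so it is valid before the .bss section is zeroed */
  bool arm64_use_ng_mappings __read_mostly = false;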

Fixes: 84b04d3e6bdb ("arm64: kernel: Create initial ID map from C code")
Cc: stable@vger.kernel.org # 6.9.x
Tested-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Yeoreum Yun <yeoreum.yun@arm.com>
Link: https://lore.kernel.org/r/20250502180412.3774883-1-yeoreum.yun@arm.com
[catalin.marinas@arm.com: use __read_mostly instead of __ro_after_init]
[catalin.marinas@arm.com: slight tweaking of the code comment]
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2 months agoarm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays
Will Deacon [Thu, 1 May 2025 10:47:47 +0000 (11:47 +0100)] 
arm64: errata: Add missing sentinels to Spectre-BHB MIDR arrays

Commit a5951389e58d ("arm64: errata: Add newer ARM cores to the
spectre_bhb_loop_affected() lists") added some additional CPUs to the
Spectre-BHB workaround, including some new arrays for designs that
require new 'k' values for the workaround to be effective.

Unfortunately, the new arrays omitted the sentinel entry and so
is_midr_in_range_list() will walk off the end when it doesn't find a
match. With UBSAN enabled, this leads to a crash during boot when
is_midr_in_range_list() is inlined (which was more common prior to
c8c2647e69be ("arm64: Make _midr_in_range_list() an exported
function")):

 |  Internal error: aarch64 BRK: 00000000f2000001 [#1] PREEMPT SMP
 |  pstate: 804000c5 (Nzcv daIF +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
 |  pc : spectre_bhb_loop_affected+0x28/0x30
 |  lr : is_spectre_bhb_affected+0x170/0x190
 | [...]
 |  Call trace:
 |   spectre_bhb_loop_affected+0x28/0x30
 |   update_cpu_capabilities+0xc0/0x184
 |   init_cpu_features+0x188/0x1a4
 |   cpuinfo_store_boot_cpu+0x4c/0x60
 |   smp_prepare_boot_cpu+0x38/0x54
 |   start_kernel+0x8c/0x478
 |   __primary_switched+0xc8/0xd4
 |  Code: 6b09011f 54000061 52801080 d65f03c0 (d4200020)
 |  ---[ end trace 0000000000000000 ]---
 |  Kernel panic - not syncing: aarch64 BRK: Fatal exception

Add the missing sentinel entries.
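
For illustration, each affected array needs a trailing empty entry, e.g.:

  static const struct midr_range spectre_bhb_k38_list[] = {
          MIDR_ALL_VERSIONS(MIDR_CORTEX_A715),
          MIDR_ALL_VERSIONS(MIDR_CORTEX_A720),
          {},     /* sentinel, terminates is_midr_in_range_list() */
  };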

Cc: Lee Jones <lee@kernel.org>
Cc: James Morse <james.morse@arm.com>
Cc: Doug Anderson <dianders@chromium.org>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Cc: <stable@vger.kernel.org>
Reported-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Fixes: a5951389e58d ("arm64: errata: Add newer ARM cores to the spectre_bhb_loop_affected() lists")
Signed-off-by: Will Deacon <will@kernel.org>
Reviewed-by: Lee Jones <lee@kernel.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/r/20250501104747.28431-1-will@kernel.org
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2 months agoarm64/fpsimd: Avoid warning when sve_to_fpsimd() is unused
Mark Rutland [Wed, 30 Apr 2025 17:32:40 +0000 (18:32 +0100)] 
arm64/fpsimd: Avoid warning when sve_to_fpsimd() is unused

Historically fpsimd_to_sve() and sve_to_fpsimd() were (conditionally)
called by functions which were defined regardless of CONFIG_ARM64_SVE.
Hence it was necessary that both fpsimd_to_sve() and sve_to_fpsimd()
were always defined and not guarded by ifdeffery.

As a result of the removal of fpsimd_signal_preserve_current_state() in
commit:

  929fa99b1215966f ("arm64/fpsimd: signal: Always save+flush state early")

... sve_to_fpsimd() has no callers when CONFIG_ARM64_SVE=n, resulting in
a build-time warning that it is unused:

| arch/arm64/kernel/fpsimd.c:676:13: warning: unused function 'sve_to_fpsimd' [-Wunused-function]
|   676 | static void sve_to_fpsimd(struct task_struct *task)
|       |             ^~~~~~~~~~~~~
| 1 warning generated.

In contrast, fpsimd_to_sve() still has callers which are defined when
CONFIG_ARM64_SVE=n, and it would be awkward to hide this behind
ifdeffery and/or to use stub functions.

For now, suppress the warning by marking both fpsimd_to_sve() and
sve_to_fpsimd() as 'static inline', as we usually do for stub functions.
The compiler will no longer warn if either function is unused.

Aside from suppressing the warning, there should be no functional change
as a result of this patch.

Link: https://lore.kernel.org/linux-arm-kernel/20250429194600.GA26883@willie-the-truck/
Reported-by: Will Deacon <will@kernel.org>
Fixes: 929fa99b1215 ("arm64/fpsimd: signal: Always save+flush state early")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250430173240.4023627-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
2 months agoarm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings
Dev Jain [Thu, 3 Apr 2025 05:28:44 +0000 (10:58 +0530)] 
arm64: pageattr: Explicitly bail out when changing permissions for vmalloc_huge mappings

arm64 uses apply_to_page_range to change permissions for kernel vmalloc mappings,
which does not support changing permissions for block mappings. This function
will change permissions until it encounters a block mapping, and will bail
out with a warning. Since there are no reports of this triggering, it
implies that there are currently no cases of code doing a vmalloc_huge()
followed by partial permission change. But this is a footgun waiting to
go off, so let's detect it early and avoid the possibility of leaving
permissions in an intermediate state. So, explicitly disallow changing permissions
for VM_ALLOW_HUGE_VMAP mappings.
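
A sketch of the bail-out (simplified from the arm64 pageattr code):

  struct vm_struct *area = find_vm_area((void *)addr);

  /* The area may contain block mappings; permissions cannot be changed */
  if (!area || (area->flags & VM_ALLOW_HUGE_VMAP))
          return -EINVAL;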

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Signed-off-by: Dev Jain <dev.jain@arm.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20250403052844.61818-1-dev.jain@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: Extend pr_crit message on invalid FDT
Bartosz Szczepanek [Wed, 23 Apr 2025 08:48:51 +0000 (08:48 +0000)] 
arm64: Extend pr_crit message on invalid FDT

Log the size in addition to the physical and virtual addresses. This has
the potential to be helpful when the DTB exceeds the 2 MB limit.

Initialize the size to 0 so that a sane value is printed if
fixmap_remap_fdt() fails without setting it.

Signed-off-by: Bartosz Szczepanek <bsz@amazon.de>
Link: https://lore.kernel.org/r/20250423084851.26449-1-bsz@amazon.de
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: Kconfig: remove unnecessary selection of CRC32
Eric Biggers [Mon, 14 Apr 2025 17:40:18 +0000 (10:40 -0700)] 
arm64: Kconfig: remove unnecessary selection of CRC32

The selection of CRC32 by ARM64 was added by commit 7481cddf29ed
("arm64/lib: add accelerated crc32 routines") as a workaround for the
fact that, at the time, the CRC32 library functions used weak symbols to
allow architecture-specific overrides.  That only worked when CRC32 was
built-in, and thus ARM64 was made to just force CRC32 to built-in.

Now that the CRC32 library no longer uses weak symbols, that no longer
applies.  And the selection does not fulfill a user dependency either;
those all have their own selections from other options.  Therefore, the
selection of CRC32 by ARM64 is no longer necessary.  Remove it.

Note that this does not necessarily result in CRC32 no longer being set
to y, as it still tends to get selected by something else anyway.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250414174018.6359-1-ebiggers@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: Add missing includes for mem_encrypt
Jason Gunthorpe [Thu, 17 Apr 2025 16:26:16 +0000 (13:26 -0300)] 
arm64: Add missing includes for mem_encrypt

Doing:
 #include <linux/mem_encrypt.h>

Causes a bunch of compiler failures due to missing implicit includes that
don't happen on x86:

../arch/arm64/include/asm/rsi_cmds.h:117:2: error: call to undeclared library function 'memcpy' with type 'void *(void *, const void *, unsigned long)'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
  117 |         memcpy(&regs.a1, challenge, size);

../arch/arm64/include/asm/mem_encrypt.h:19:49: warning: declaration of 'struct device' will not be visible outside of this function [-Wvisibility]
   19 | static inline bool force_dma_unencrypted(struct device *dev)

../arch/arm64/include/asm/rsi_cmds.h:44:38: error: call to undeclared function 'virt_to_phys'; ISO C99 and later do not support implicit function declarations [-Wimplicit-function-declaration]
   44 |         arm_smccc_smc(SMC_RSI_REALM_CONFIG, virt_to_phys(cfg),

Add the missing includes to the arch/arm64 headers to avoid this.
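
A sketch of the kind of includes added (the exact headers are an
assumption, not copied from the patch):

  #include <linux/string.h>   /* memcpy() */
  #include <asm/memory.h>     /* virt_to_phys() */

  struct device;              /* a forward declaration suffices for a prototype */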

Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
Link: https://lore.kernel.org/r/0-v1-47aadfbd64cd+25795-arm_memenc_h_jgg@nvidia.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: Support ARM64_VA_BITS=52 when setting ARCH_MMAP_RND_BITS_MAX
Kornel Dulęba [Thu, 17 Apr 2025 11:47:54 +0000 (11:47 +0000)] 
arm64: Support ARM64_VA_BITS=52 when setting ARCH_MMAP_RND_BITS_MAX

When 52-bit virtual addressing was introduced, the logic selecting
ARCH_MMAP_RND_BITS_MAX was never updated to account for it. Because of
that, the rnd max bits knob is set to the default value of 18 when
ARM64_VA_BITS=52.
Fix this by setting ARCH_MMAP_RND_BITS_MAX to the same value that would
be used with 48-bit addressing. Higher values can't be used here
because 52-bit addressing is used only if the caller provides a hint to
mmap, with a fallback to 48-bit. The knob in question is an upper bound
for what the user can set in /proc/sys/vm/mmap_rnd_bits, which in turn
is used to determine how many random bits can be inserted into the base
address used for mmap allocations. Since 48-bit allocations are legal
with ARM64_VA_BITS=52, we need to make sure that the base address is
small enough to facilitate this.

Fixes: b6d00d47e81a ("arm64: mm: Introduce 52-bit Kernel VAs")
Signed-off-by: Kornel Dulęba <korneld@google.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20250417114754.3238273-1-korneld@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: Expose AIDR_EL1 via sysfs
Oliver Upton [Thu, 3 Apr 2025 23:16:26 +0000 (16:16 -0700)] 
arm64: Expose AIDR_EL1 via sysfs

The KVM PV ABI recently added a feature that allows the VM to discover
the set of physical CPU implementations, identified by a tuple of
{MIDR_EL1, REVIDR_EL1, AIDR_EL1}. Unlike other KVM PV features, the
expectation is that the VMM implements the hypercall instead of KVM as
it has the authoritative view of where the VM gets scheduled.

To do this, the VMM needs to know the values of these registers on any
CPU in the system. While MIDR_EL1 and REVIDR_EL1 are already exposed,
AIDR_EL1 is not. Provide it in sysfs along with the other identification
registers.
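
For example, a VMM could then sample the register from userspace roughly
as follows (the sysfs path is an assumption, mirroring the existing
midr_el1/revidr_el1 entries):

  #include <stdio.h>

  int main(void)
  {
          unsigned long long aidr;
          FILE *f = fopen("/sys/devices/system/cpu/cpu0/regs/identification/aidr_el1", "r");

          if (f && fscanf(f, "%llx", &aidr) == 1)
                  printf("cpu0 AIDR_EL1: 0x%016llx\n", aidr);
          if (f)
                  fclose(f);
          return 0;
  }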

Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20250403231626.3181116-1-oliver.upton@linux.dev
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: vdso: Use __arch_counter_get_cntvct()
Breno Leitao [Mon, 7 Apr 2025 11:33:25 +0000 (04:33 -0700)] 
arm64: vdso: Use __arch_counter_get_cntvct()

While reading how `cntvct_el0` was read in the kernel, I found that
__arch_get_hw_counter() is doing something very similar to what
__arch_counter_get_cntvct() is already doing.

Use the existing __arch_counter_get_cntvct() function instead of
duplicating similar inline assembly code in __arch_get_hw_counter().

Both functions were performing nearly identical operations to read the
cntvct_el0 register. The only difference was that
__arch_get_hw_counter() included a memory clobber in its inline
assembly, which appears unnecessary in this context.

This change simplifies the code by eliminating duplicate functionality
and improves maintainability by centralizing the counter access logic in
a single implementation.
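
A simplified, illustrative version of the shared read sequence (the
real helper additionally handles the self-synchronising counter via
alternatives):

  static inline u64 read_cntvct(void)
  {
          u64 cnt;

          /* isb orders the read against earlier instructions; note the
           * absence of a "memory" clobber, per the rationale above */
          asm volatile("isb\n\tmrs %0, cntvct_el0" : "=r" (cnt));
          return cnt;
  }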

Signed-off-by: Breno Leitao <leitao@debian.org>
Link: https://lore.kernel.org/r/20250407-arm-vdso-v1-1-7012de25b195@debian.org
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64: enable PREEMPT_LAZY
Mark Rutland [Wed, 5 Mar 2025 10:49:25 +0000 (11:49 +0100)] 
arm64: enable PREEMPT_LAZY

For an architecture to enable CONFIG_ARCH_HAS_RESCHED_LAZY, two things are
required:
1) Adding a TIF_NEED_RESCHED_LAZY flag definition
2) Checking for TIF_NEED_RESCHED_LAZY in the appropriate locations

2) is handled in a generic manner by CONFIG_GENERIC_ENTRY, which isn't
(yet) implemented for arm64. However, outside of core scheduler code,
TIF_NEED_RESCHED_LAZY only needs to be checked on a kernel exit, meaning:
o return/entry to userspace.
o return/entry to guest.

The return/entry to a guest is all handled by xfer_to_guest_mode_handle_work()
which already does the right thing, so it can be left as-is.

arm64 doesn't use common entry's exit_to_user_mode_prepare(), so update its
return to user path to check for TIF_NEED_RESCHED_LAZY and call into
schedule() accordingly.
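
Conceptually, the return-to-user check becomes something like the
following (a sketch, not the literal diff):

  if (thread_flags & (_TIF_NEED_RESCHED | _TIF_NEED_RESCHED_LAZY))
          schedule();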

Link: https://lore.kernel.org/linux-rt-users/20241216190451.1c61977c@mordecai.tesarici.cz/
Link: https://lore.kernel.org/all/xhsmh4j0fl0p3.mognet@vschneid-thinkpadt14sgen2i.remote.csb/
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
[testdrive, _TIF_WORK_MASK fixlet and changelog.]
Signed-off-by: Mike Galbraith <efault@gmx.de>
[Another round of testing; changelog faff]
Signed-off-by: Valentin Schneider <vschneid@redhat.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/r/20250305104925.189198-2-vschneid@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/cpufeature: Add missing id_aa64mmfr4 feature reg update
Yicong Yang [Sat, 29 Mar 2025 03:44:07 +0000 (11:44 +0800)] 
arm64/cpufeature: Add missing id_aa64mmfr4 feature reg update

Add missing id_aa64mmfr4 feature register check and update in
update_cpu_features(). Update the taint status as well.

Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Link: https://lore.kernel.org/r/20250329034409.21354-2-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agofirmware: psci: Fix refcount leak in psci_dt_init
Miaoqian Lin [Tue, 18 Mar 2025 15:17:12 +0000 (23:17 +0800)] 
firmware: psci: Fix refcount leak in psci_dt_init

Fix a reference counter leak in psci_dt_init() where of_node_put(np) was
missing after of_find_matching_node_and_match() when np is unavailable.
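
A sketch of the corrected error path (of_node_put(NULL) is a no-op, so
the not-found and unavailable cases can share it):

  np = of_find_matching_node_and_match(NULL, psci_of_match, &matched_np);
  if (!np || !of_device_is_available(np)) {
          of_node_put(np);        /* drop the reference taken by the lookup */
          return -ENODEV;
  }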

Fixes: d09a0011ec0d ("drivers: psci: Allow PSCI node to be disabled")
Signed-off-by: Miaoqian Lin <linmq006@gmail.com>
Reviewed-by: Gavin Shan <gshan@redhat.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20250318151712.28763-1-linmq006@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/mm: Remove randomization of the linear map
Ard Biesheuvel [Tue, 18 Mar 2025 13:49:50 +0000 (14:49 +0100)] 
arm64/mm: Remove randomization of the linear map

Since commit

  97d6786e0669 ("arm64: mm: account for hotplug memory when randomizing the linear region")

the decision whether or not to randomize the placement of the system's
DRAM inside the linear map is based on the capabilities of the CPU
rather than how much memory is present at boot time. This change was
necessary because memory hotplug may result in DRAM appearing in places
that are not covered by the linear region at all (and therefore
unusable) if the decision is solely based on the memory map at boot.

In the Android GKI kernel, which requires support for memory hotplug
and is built with a reduced virtual address space only 39 bits wide,
randomization of the linear map never happens in practice as a result.
And even on arm64 kernels built with support for 48 bit virtual
addressing, the wider PArange of recent CPUs means that linear map
randomization is slowly becoming a feature that only works on systems
that will soon be obsolete.

So let's just remove this feature. We can always bring it back in an
improved form if there is a real need for it.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Ryan Roberts <ryan.roberts@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Anshuman Khandual <anshuman.khandual@arm.com>
Cc: Kees Cook <kees@kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20250318134949.3194334-2-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: Avoid unnecessary per-CPU buffers for EFI runtime calls
Ard Biesheuvel [Tue, 18 Mar 2025 13:24:22 +0000 (14:24 +0100)] 
arm64/fpsimd: Avoid unnecessary per-CPU buffers for EFI runtime calls

The EFI specification has some elaborate rules about which runtime
services may be called while another runtime service call is already in
progress. In Linux, however, for simplicity, all EFI runtime service
invocations are serialized via the efi_runtime_lock semaphore.

This implies that calls to the helper pair arch_efi_call_virt_setup()
and arch_efi_call_virt_teardown() are serialized too, and are guaranteed
not to nest.  Furthermore, the arm64 arch code has its own spinlock to
serialize use of the EFI runtime stack, of which only a single instance
exists.

This all means that the FP/SIMD and SVE state preserve/restore logic in
__efi_fpsimd_begin() and __efi_fpsimd_end() is also serialized, and
only a single instance of the associated per-CPU variables can ever be
in use at the same time. There is therefore no need at all for per-CPU
variables here, and they can all be replaced with singleton instances.
This saves a non-trivial amount of memory on systems with many CPUs.
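
Schematically (variable name and type assumed for illustration):

  /* before: one buffer per possible CPU
   *   static DEFINE_PER_CPU(struct user_fpsimd_state, efi_fpsimd_state);
   * after: a single instance, serialized by efi_rt_lock: */
  static struct user_fpsimd_state efi_fpsimd_state;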

To be more robust against potential future changes in the core EFI code
that may invalidate the reasoning above, move the invocations of
__efi_fpsimd_begin() and __efi_fpsimd_end() into the critical section
covered by the efi_rt_lock spinlock.

Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250318132421.3155799-2-ardb+git@google.com
Signed-off-by: Will Deacon <will@kernel.org>
2 months agoarm64/fpsimd: signal: Clear TPIDR2 when delivering signals
Mark Rutland [Thu, 17 Apr 2025 19:01:13 +0000 (20:01 +0100)] 
arm64/fpsimd: signal: Clear TPIDR2 when delivering signals

Linux is intended to be compatible with userspace written to Arm's
AAPCS64 procedure call standard [1,2]. For the Scalable Matrix Extension
(SME), AAPCS64 was extended with a "ZA lazy saving scheme", where SME's
ZA tile is lazily callee-saved and caller-restored. In this scheme,
TPIDR2_EL0 indicates whether the ZA tile is live or has been saved by
pointing to a "TPIDR2 block" in memory, which has a "za_save_buffer"
pointer. This scheme has been implemented in GCC and LLVM, with
necessary runtime support implemented in glibc.

AAPCS64 does not specify how the ZA lazy saving scheme is expected to
interact with signal handling, and the behaviour that AAPCS64 currently
recommends for (sig)setjmp() and (sig)longjmp() does not always compose
safely with signal handling, as explained below.

When Linux delivers a signal, it creates signal frames which contain the
original values of PSTATE.ZA, the ZA tile, and TPIDR2_EL0. Between saving
the original state and entering the signal handler, Linux clears
PSTATE.ZA, but leaves TPIDR2_EL0 unchanged. Consequently a signal
handler can be entered with PSTATE.ZA=0 (meaning accesses to ZA will
trap), while TPIDR2_EL0 is non-null (which may indicate that ZA needs to
be lazily saved, depending on the contents of the TPIDR2 block). While
in this state, libc and/or compiler runtime code, such as longjmp(), may
attempt to save ZA. As PSTATE.ZA=0, these accesses will trap, causing
the kernel to inject a SIGILL. Note that by virtue of lazy saving
occurring in libc and/or C runtime code, this can be triggered by
application/library code which is unaware of SME.

To avoid the problem above, the kernel must ensure that signal handlers
are entered with PSTATE.ZA and TPIDR2_EL0 configured in a manner which
complies with the ZA lazy saving scheme. Practically speaking, the only
choice is to enter signal handlers with PSTATE.ZA=0 and TPIDR2_EL0=NULL.
This change should not impact SME code which does not follow the ZA lazy
saving scheme (and hence does not use TPIDR2_EL0).

An alternative approach that was considered is to have the signal
handler inherit the original values of both PSTATE.ZA and TPIDR2_EL0,
relying on lazy save/restore sequences being idempotent and capable of
racing safely. This is not safe as signal handlers must be assumed to
have a "private ZA" interface, and therefore cannot be entered with
PSTATE.ZA=1 and TPIDR2_EL0=NULL, but it is legitimate for signals to be
taken from this state.

With the kernel fixed to clear TPIDR2_EL0, there are a couple of
remaining issues (largely masked by the first issue) that must be fixed
in userspace:

(1) When a (sig)setjmp() + (sig)longjmp() pair cross a signal boundary,
    ZA state may be discarded when it needs to be preserved.

    Currently, the ZA lazy saving scheme recommends that setjmp() does
    not save ZA, and recommends that longjmp() is responsible for saving
    ZA. A call to longjmp() in a signal handler will not have visibility
    of ZA state that existed prior to entry to the signal, and when a
    longjmp() is used to bypass a usual signal return, unsaved ZA state
    will be discarded erroneously.

    To fix this, it is necessary for setjmp() to eagerly save ZA state,
    and for longjmp() to configure PSTATE.ZA=0 and TPIDR2_EL0=NULL. This
    works regardless of whether a signal boundary is crossed.

(2) When a C++ exception is thrown and crosses a signal boundary before
    it is caught, ZA state may be discarded when it needs to be
    preserved.

    AAPCS64 requires that exception handlers are entered with
    PSTATE.{SM,ZA}={0,0} and TPIDR2_EL0=NULL, with exception unwind code
    expected to perform any necessary save of ZA state.

    Where it is necessary to perform an exception unwind across an
    exception boundary, the unwind code must recover any necessary ZA
    state (along with TPIDR2) from signal frames.

Fix the kernel as described above, with setup_return() clearing
TPIDR2_EL0 when delivering a signal. Folk on CC are working on fixes for
the remaining userspace issues, including updates/fixes to the AAPCS64
specification and glibc.
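
The core of the kernel-side fix amounts to something like the following
(guard condition assumed):

  /* in setup_return(): enter the handler with a NULL TPIDR2 */
  if (system_supports_tpidr2())
          write_sysreg_s(0, SYS_TPIDR2_EL0);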

[1] https://github.com/ARM-software/abi-aa/releases/download/2025Q1/aapcs64.pdf
[2] https://github.com/ARM-software/abi-aa/blob/c51addc3dc03e73a016a1e4edf25440bcac76431/aapcs64/aapcs64.rst

Fixes: 39782210eb7e ("arm64/sme: Implement ZA signal handling")
Fixes: 39e54499280f ("arm64/signal: Include TPIDR2 in the signal context")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Daniel Kiss <daniel.kiss@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Richard Sandiford <richard.sandiford@arm.com>
Cc: Sander De Smalen <sander.desmalen@arm.com>
Cc: Tamas Petz <tamas.petz@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Yury Khrustalev <yury.khrustalev@arm.com>
Link: https://lore.kernel.org/r/20250417190113.3778111-1-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoperf: Do not enable by default during compile testing
Krzysztof Kozlowski [Thu, 17 Apr 2025 07:46:50 +0000 (09:46 +0200)] 
perf: Do not enable by default during compile testing

Enabling the compile test should not cause all drivers to be enabled
automatically, but should only allow them to be selected for compilation.

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Link: https://lore.kernel.org/r/20250417074650.81561-1-krzysztof.kozlowski@linaro.org
Signed-off-by: Will Deacon <will@kernel.org>
3 months agoperf: arm-ni: Fix missing platform_set_drvdata()
Hongbo Yao [Tue, 1 Apr 2025 05:42:48 +0000 (13:42 +0800)] 
perf: arm-ni: Fix missing platform_set_drvdata()

Add the missing platform_set_drvdata() call in arm_ni_probe(); otherwise,
calling platform_get_drvdata() in the remove path returns NULL.
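
Sketch of the fix (structure names assumed):

  /* at the end of arm_ni_probe(), once setup has succeeded: */
  platform_set_drvdata(pdev, ni);

  /* ...so the remove path gets the device back instead of NULL: */
  struct arm_ni *ni = platform_get_drvdata(pdev);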

Fixes: 4d5a7680f2b4 ("perf: Add driver for Arm NI-700 interconnect PMU")
Signed-off-by: Hongbo Yao <andy.xu@hj-micro.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20250401054248.3985814-1-andy.xu@hj-micro.com
Signed-off-by: Will Deacon <will@kernel.org>
3 months agoperf: arm-ni: Unregister PMUs on probe failure
Hongbo Yao [Thu, 3 Apr 2025 07:09:18 +0000 (15:09 +0800)] 
perf: arm-ni: Unregister PMUs on probe failure

When a resource allocation fails in one clock domain of an NI device,
we need to properly roll back all previously registered perf PMUs in
other clock domains of the same device.

Otherwise, it can lead to kernel panics.

Calling arm_ni_init+0x0/0xff8 [arm_ni] @ 2374
arm-ni ARMHCB70:00: Failed to request PMU region 0x1f3c13000
arm-ni ARMHCB70:00: probe with driver arm-ni failed with error -16
list_add corruption: next->prev should be prev (fffffd01e9698a18),
but was 0000000000000000. (next=ffff10001a0decc8).
pstate: 6340009 (nZCv daif +PAN -UAO +TCO +DIT -SSBS BTYPE=--)
pc : list_add_valid_or_report+0x7c/0xb8
lr : list_add_valid_or_report+0x7c/0xb8
Call trace:
 __list_add_valid_or_report+0x7c/0xb8
 perf_pmu_register+0x22c/0x3a0
 arm_ni_probe+0x554/0x70c [arm_ni]
 platform_probe+0x70/0xe8
 really_probe+0xc6/0x4d8
 driver_probe_device+0x48/0x170
 __driver_attach+0x8e/0x1c0
 bus_for_each_dev+0x64/0xf0
 driver_add+0x138/0x260
 bus_add_driver+0x68/0x138
 __platform_driver_register+0x2c/0x40
 arm_ni_init+0x14/0x2a [arm_ni]
 do_init_module+0x36/0x298
---[ end trace 0000000000000000 ]---
Kernel panic - not syncing: Oops - BUG: Fatal exception
SMP: stopping secondary CPUs
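
The shape of the required unwind is roughly as follows (loop bound and
field names assumed):

  /* on failure, unregister the PMUs of all previously set-up clock domains */
  for (int i = 0; i < num_registered_cds; i++)
          perf_pmu_unregister(&ni->cds[i].pmu);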

Fixes: 4d5a7680f2b4 ("perf: Add driver for Arm NI-700 interconnect PMU")
Signed-off-by: Hongbo Yao <andy.xu@hj-micro.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/20250403070918.4153839-1-andy.xu@hj-micro.com
Signed-off-by: Will Deacon <will@kernel.org>
3 months agoperf/arm-cmn: Remove CMN-600 DTC domain special case
Robin Murphy [Tue, 18 Mar 2025 14:32:10 +0000 (14:32 +0000)] 
perf/arm-cmn: Remove CMN-600 DTC domain special case

The special case for trying to infer the DTC domain for DTC-adjacent
nodes on CMN-600 is fragile and buggy - currently resulting in subtly
messed up DTC counter allocation - and the theoretical benefit it
offers to a tiny minority of use-cases arguably doesn't outweigh the
inconsistency it imposes on everyone else. Just get rid of it.

Fixes: ab33c66fd8f1 ("perf/arm-cmn: Enable per-DTC counter allocation")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/67985e39f53b56385d79a4f1264cf7f9cacedb58.1742308248.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
3 months agoarm64/fpsimd: signal: Simplify preserve_tpidr2_context()
Mark Rutland [Wed, 9 Apr 2025 16:40:10 +0000 (17:40 +0100)] 
arm64/fpsimd: signal: Simplify preserve_tpidr2_context()

During a context-switch, tls_thread_switch() reads and writes a task's
thread_struct::tpidr2_el0 field. Other code shouldn't access this field
for an active task, as such accesses would form a data-race with a
concurrent context-switch.

The usage in preserve_tpidr2_context() is suspicious, but benign as any
race with a context switch will write the same value back to
current->thread.tpidr2_el0.

Make this clearer and match restore_tpidr2_context() by using a
temporary variable instead, avoiding the (benign) data-race.
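
A sketch of the resulting pattern (the signal-frame field name is
assumed):

  /* read the live register once into a local; don't write it back into
   * thread_struct, which a concurrent context switch also accesses */
  u64 tpidr2_el0 = read_sysreg_s(SYS_TPIDR2_EL0);
  err |= __put_user(tpidr2_el0, &ctx->tpidr2);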

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-14-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: signal: Always save+flush state early
Mark Rutland [Wed, 9 Apr 2025 16:40:09 +0000 (17:40 +0100)] 
arm64/fpsimd: signal: Always save+flush state early

There are several issues with the way the native signal handling code
manipulates FPSIMD/SVE/SME state, described in detail below. These
issues largely result from races with preemption and inconsistent
handling of live state vs saved state.

Known issues with native FPSIMD/SVE/SME state management include:

* On systems with FPMR, the code to save/restore the FPMR accesses the
  register while it is not owned by the current task. Consequently, this
  may corrupt the FPMR of the current task and/or may corrupt the FPMR
  of an unrelated task. The FPMR save/restore has been broken since it
  was introduced in commit:

    8c46def44409fc91 ("arm64/signal: Add FPMR signal handling")

* On systems with SME, setup_return() modifies both the live register
  state and the saved register state regardless of whether the task's
  state is live, and without holding the cpu fpsimd context.
  Consequently:

  - This may corrupt the state of an unrelated task which has PSTATE.SM
    set and/or PSTATE.ZA set.

  - The task may enter the signal handler in streaming mode, and/or with
    ZA storage enabled unexpectedly.

  - The task may enter the signal handler in non-streaming SVE mode with
    stale SVE register state, which may have been inherited from
    streaming SVE mode unexpectedly. Where the streaming and
    non-streaming vector lengths differ, this may be packed into
    registers arbitrarily.

  This logic has been broken since it was introduced in commit:

    40a8e87bb32855b3 ("arm64/sme: Disable ZA and streaming mode when handling signals")

  Further incorrect manipulation of state was added in commits:

    ea64baacbc36a0d5 ("arm64/signal: Flush FPSIMD register state when disabling streaming mode")
    baa8515281b30861 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")

* Several restoration functions use fpsimd_flush_task_state() to discard
  the live FPSIMD/SVE/SME state while the in-memory copy is stale.

  When a subset of the FPSIMD/SVE/SME state is restored, the remainder
  may be non-deterministically reset to a stale snapshot from some
  arbitrary point in the past.

  This non-deterministic discarding was introduced in commit:

    8cd969d28fd2848d ("arm64/sve: Signal handling support")

  As of that commit, when TIF_SVE was initially clear, failure to
  restore the SVE signal frame could reset the FPSIMD registers to a
  stale snapshot.

  The pattern of discarding unsaved state was subsequently copied into
  restoration functions for some new state in commits:

    39782210eb7e8763 ("arm64/sme: Implement ZA signal handling")
    ee072cf708048c0d ("arm64/sme: Implement signal handling for ZT")

* On systems with SME/SME2, the entire FPSIMD/SVE/SME state may be
  loaded onto the CPU redundantly. Either restore_fpsimd_context() or
  restore_sve_fpsimd_context() will load the entire FPSIMD/SVE/SME state
  via fpsimd_update_current_state() before restore_za_context() and
  restore_zt_context() each discard the state via
  fpsimd_flush_task_state().

  This is purely redundant work, and not a functional bug.

To fix these issues, rework the native signal handling code to always
save+flush the current task's FPSIMD/SVE/SME state before manipulating
that state. This avoids races with preemption and ensures that state is
manipulated consistently regardless of whether it happened to be live
prior to manipulation. This largely involves:

* Using fpsimd_save_and_flush_current_state() to save+flush the state
  for both signal delivery and signal return, before the state is
  manipulated in any way.

* Removing fpsimd_signal_preserve_current_state() and updating
  preserve_fpsimd_context() to explicitly ensure that the FPSIMD state
  is up-to-date, as preserve_fpsimd_context() is the only consumer of
  the FPSIMD state during signal delivery.

* Modifying fpsimd_update_current_state() to not reload the FPSIMD state
  onto the CPU. Ideally we'd remove fpsimd_update_current_state()
  entirely, but I've left that for subsequent patches as there are a
  number of other problems with the FPSIMD<->SVE conversion helpers
  that should be addressed at the same time. For now I've removed the
  misleading comment.

For setup_return(), we need to decide (for ABI reasons) whether signal
delivery should have all the side-effects of an SMSTOP. For now I've
left a TODO comment, as there are other questions in this area that I'll
address with subsequent patches.

Fixes: 8c46def44409 ("arm64/signal: Add FPMR signal handling")
Fixes: 40a8e87bb328 ("arm64/sme: Disable ZA and streaming mode when handling signals")
Fixes: ea64baacbc36 ("arm64/signal: Flush FPSIMD register state when disabling streaming mode")
Fixes: baa8515281b3 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")
Fixes: 8cd969d28fd2 ("arm64/sve: Signal handling support")
Fixes: 39782210eb7e ("arm64/sme: Implement ZA signal handling")
Fixes: ee072cf70804 ("arm64/sme: Implement signal handling for ZT")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-13-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: signal32: Always save+flush state early
Mark Rutland [Wed, 9 Apr 2025 16:40:08 +0000 (17:40 +0100)] 
arm64/fpsimd: signal32: Always save+flush state early

There are several issues with the way the native signal handling code
manipulates FPSIMD/SVE/SME state. To fix those issues, subsequent
patches will rework the native signal handling code to always save+flush
the current task's FPSIMD/SVE/SME state before manipulating that state.

In preparation for those changes, rework the compat signal handling code
to save+flush the current task's FPSIMD state before manipulating it.

Subsequent patches will remove fpsimd_signal_preserve_current_state()
and fpsimd_update_current_state(). Compat tasks can only have FPSIMD
state, and cannot have any SVE or SME state. Thus, the SVE state
manipulation present in fpsimd_signal_preserve_current_state() and
fpsimd_update_current_state() is not necessary, and it is safe to
directly manipulate current->thread.uw.fpsimd_state once it has been
saved+flushed.

Use fpsimd_save_and_flush_current_state() to save+flush the state for
both signal delivery and signal return, before the state is manipulated
in any way. While it would be safe for compat_restore_vfp_context() to
use fpsimd_flush_task_state(current), there are several extant issues in
the native signal code resulting from incorrect use of
fpsimd_flush_task_state(), and for consistency it is preferable to use
fpsimd_save_and_flush_current_state().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-12-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Add fpsimd_save_and_flush_current_state()
Mark Rutland [Wed, 9 Apr 2025 16:40:07 +0000 (17:40 +0100)] 
arm64/fpsimd: Add fpsimd_save_and_flush_current_state()

When the current task's FPSIMD/SVE/SME state may be live on *any* CPU in
the system, special care must be taken when manipulating that state, as
this manipulation can race with preemption and/or asynchronous usage of
FPSIMD/SVE/SME (e.g. kernel-mode NEON in softirq handlers).

Even when manipulation is protected with get_cpu_fpsimd_context() and
put_cpu_fpsimd_context(), the logic necessary when the state is live on
the current CPU can be wildly different from the logic necessary when
the state is not live on the current CPU. A number of historical and
extant issues result from failing to handle these cases consistently
and/or correctly.

To make it easier to get such manipulation correct, add a new
fpsimd_save_and_flush_current_state() helper function, which ensures
that the current task's state has been saved to memory and any stale
state on any CPU has been "flushed" such that it is not live on any CPU in
the system. This will allow code to safely manipulate the saved state
without risk of races.
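
A sketch of the helper's likely shape (internals assumed):

  void fpsimd_save_and_flush_current_state(void)
  {
          if (!system_supports_fpsimd())
                  return;

          get_cpu_fpsimd_context();
          fpsimd_save_user_state();         /* write live state back to memory */
          fpsimd_flush_task_state(current); /* now no CPU owns the state */
          put_cpu_fpsimd_context();
  }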

Subsequent patches will use the new function.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-11-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Fix merging of FPSIMD state during signal return
Mark Rutland [Wed, 9 Apr 2025 16:40:06 +0000 (17:40 +0100)] 
arm64/fpsimd: Fix merging of FPSIMD state during signal return

For backwards compatibility reasons, when a signal return occurs which
restores SVE state, the effective lower 128 bits of each of the SVE
vector registers are restored from the corresponding FPSIMD vector
register in the FPSIMD signal frame, overriding the values in the SVE
signal frame. This is intended to be the case regardless of streaming
mode.

To make this happen, restore_sve_fpsimd_context() uses
fpsimd_update_current_state() to merge the lower 128 bits from the
FPSIMD signal frame into the SVE register state. Unfortunately,
fpsimd_update_current_state() performs this merging dependent upon
TIF_SVE, which is not always correct for streaming SVE register state:

* When restoring non-streaming SVE register state there is no observable
  problem, as the signal return code configures TIF_SVE and the saved
  fp_type to match before calling fpsimd_update_current_state(), which
  observes either:

  - TIF_SVE set    AND  fp_type == FP_STATE_SVE
  - TIF_SVE clear  AND  fp_type == FP_STATE_FPSIMD

* On systems which have SME but not SVE, TIF_SVE cannot be set. Thus the
  merging will never happen for the streaming SVE register state.

* On systems which have SVE and SME, TIF_SVE can be set and cleared
  independently of PSTATE.SM. Thus the merging may or may not happen for
  streaming SVE register state.

  As TIF_SVE can be cleared non-deterministically during syscalls
  (including at the start of sigreturn()), the merging may occur
  non-deterministically from the perspective of userspace.

This logic has been broken since its introduction in commit:

  85ed24dad2904f7c ("arm64/sme: Implement streaming SVE signal handling")

... at which point both fpsimd_signal_preserve_current_state() and
fpsimd_update_current_state() only checked TIF_SVE. When PSTATE.SM==1
and TIF_SVE was clear, signal delivery would place stale FPSIMD state
into the FPSIMD signal frame, and signal return would not merge this
into the restored register state.

Subsequently, signal delivery was fixed as part of commit:

  61da7c8e2a602f66 ("arm64/signal: Don't assume that TIF_SVE means we saved SVE state")

... but signal restore was not given a corresponding fix, and when
TIF_SVE was clear, signal restore would still fail to merge the FPSIMD
state into the restored SVE register state. The 'Fixes' tag did not
indicate that this had been broken since its introduction.

Fix this by merging the FPSIMD state dependent upon the saved fp_type,
matching what we (currently) do during signal delivery.
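
In other words, the merge decision keys off the saved state type rather
than TIF_SVE, roughly:

  /* sketch: merge V0-V31 into the low 128 bits of Z0-Z31 only when the
   * saved state is in SVE layout */
  if (current->thread.fp_type == FP_STATE_SVE)
          fpsimd_to_sve(current);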

As described above, when backporting this commit, it will also be
necessary to backport commit:

  61da7c8e2a602f66 ("arm64/signal: Don't assume that TIF_SVE means we saved SVE state")

... and prior to commit:

  baa8515281b30861 ("arm64/fpsimd: Track the saved FPSIMD state type separately to TIF_SVE")

... it will be necessary for fpsimd_signal_preserve_current_state() and
fpsimd_update_current_state() to consider both TIF_SVE and
thread_sm_enabled(&current->thread), in place of the saved fp_type.

Fixes: 85ed24dad290 ("arm64/sme: Implement streaming SVE signal handling")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-10-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Reset FPMR upon exec()
Mark Rutland [Wed, 9 Apr 2025 16:40:05 +0000 (17:40 +0100)] 
arm64/fpsimd: Reset FPMR upon exec()

An exec() is expected to reset all FPSIMD/SVE/SME state, and barring
special handling of the vector lengths, the state is expected to reset
to zero. This reset is handled in fpsimd_flush_thread(), which the core
exec() code calls via flush_thread().

When support was added for FPMR, no logic was added to
fpsimd_flush_thread() to reset the FPMR value, and thus it is
erroneously inherited across an exec().

Add the missing reset of FPMR.
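
A sketch of the addition to fpsimd_flush_thread() (field location
assumed):

  if (system_supports_fpmr())
          current->thread.uw.fpmr = 0;   /* don't inherit FPMR across exec() */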

Fixes: 203f2b95a882 ("arm64/fpsimd: Support FEAT_FPMR")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-9-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Avoid clobbering kernel FPSIMD state with SMSTOP
Mark Rutland [Wed, 9 Apr 2025 16:40:04 +0000 (17:40 +0100)] 
arm64/fpsimd: Avoid clobbering kernel FPSIMD state with SMSTOP

On systems with SME, a thread's kernel FPSIMD state may be erroneously
clobbered during a context switch immediately after that state is
restored. Systems without SME are unaffected.

If the CPU happens to be in streaming SVE mode before a context switch
to a thread with kernel FPSIMD state, fpsimd_thread_switch() will
restore the kernel FPSIMD state using fpsimd_load_kernel_state() while
the CPU is still in streaming SVE mode. When fpsimd_thread_switch()
subsequently calls fpsimd_flush_cpu_state(), this will execute an
SMSTOP, causing an exit from streaming SVE mode. The exit from
streaming SVE mode will cause the hardware to reset a number of
FPSIMD/SVE/SME registers, clobbering the FPSIMD state.

Fix this by calling fpsimd_flush_cpu_state() before restoring the kernel
FPSIMD state.
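
Schematically, the corrected ordering in fpsimd_thread_switch() looks
like this (call names per the text above):

  fpsimd_flush_cpu_state();       /* may execute SMSTOP, exiting streaming mode */
  fpsimd_load_kernel_state(next); /* safe: the load can no longer be clobbered */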

Fixes: e92bee9f861b ("arm64/fpsimd: Avoid erroneous elide of user state reload")
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-8-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Don't corrupt FPMR when streaming mode changes
Mark Brown [Wed, 9 Apr 2025 16:40:03 +0000 (17:40 +0100)] 
arm64/fpsimd: Don't corrupt FPMR when streaming mode changes

When the effective value of PSTATE.SM is changed from 0 to 1 or from 1
to 0 by any method, an entry or exit to/from streaming SVE mode is
performed, and hardware automatically resets a number of registers. As
of ARM DDI 0487 L.a, this means:

* All implemented bits of the SVE vector registers are set to zero.

* All implemented bits of the SVE predicate registers are set to zero.

* All implemented bits of FFR are set to zero, if FFR is implemented in
  the new mode.

* FPSR is set to 0x0000_0000_0800_009f.

* FPMR is set to 0, if FPMR is implemented.

Currently task_fpsimd_load() restores FPMR before restoring SVCR (which
is an accessor for PSTATE.{SM,ZA}), and so the restored value of FPMR
may be clobbered if the restored value of PSTATE.SM happens to differ
from the initial value of PSTATE.SM.

Fix this by moving the restore of FPMR later.
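
Schematically, the safe restore order becomes (register accessors
assumed):

  write_sysreg_s(current->thread.svcr, SYS_SVCR);    /* may flip PSTATE.SM,
                                                        which resets FPMR */
  write_sysreg_s(current->thread.uw.fpmr, SYS_FPMR); /* restore FPMR last */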

Note: this was originally posted as [1].

Fixes: 203f2b95a882 ("arm64/fpsimd: Support FEAT_FPMR")
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/linux-arm-kernel/20241204-arm64-sme-reenable-v2-2-bae87728251d@kernel.org/
[ Rutland: rewrite commit message ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20250409164010.3480271-7-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Discard stale CPU state when handling SME traps
Mark Brown [Wed, 9 Apr 2025 16:40:02 +0000 (17:40 +0100)] 
arm64/fpsimd: Discard stale CPU state when handling SME traps

The logic for handling SME traps manipulates saved FPSIMD/SVE/SME state
incorrectly, and a race with preemption can result in a task having
TIF_SME set and TIF_FOREIGN_FPSTATE clear even though the live CPU state
is stale (e.g. with SME traps enabled). This can result in warnings from
do_sme_acc() where SME traps are not expected while TIF_SME is set:

|        /* With TIF_SME userspace shouldn't generate any traps */
|        if (test_and_set_thread_flag(TIF_SME))
|                WARN_ON(1);

This is very similar to the SVE issue we fixed in commit:

  751ecf6afd6568ad ("arm64/sve: Discard stale CPU state when handling SVE traps")

The race can occur when the SME trap handler is preempted before and
after manipulating the saved FPSIMD/SVE/SME state, starting and ending on
the same CPU, e.g.

| void do_sme_acc(unsigned long esr, struct pt_regs *regs)
| {
|         // Trap on CPU 0 with TIF_SME clear, SME traps enabled
|         // task->fpsimd_cpu is 0.
|         // per_cpu_ptr(&fpsimd_last_state, 0) is task.
|
|         ...
|
|         // Preempted; migrated from CPU 0 to CPU 1.
|         // TIF_FOREIGN_FPSTATE is set.
|
|         get_cpu_fpsimd_context();
|
|         /* With TIF_SME userspace shouldn't generate any traps */
|         if (test_and_set_thread_flag(TIF_SME))
|                 WARN_ON(1);
|
|         if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
|                 unsigned long vq_minus_one =
|                         sve_vq_from_vl(task_get_sme_vl(current)) - 1;
|                 sme_set_vq(vq_minus_one);
|
|                 fpsimd_bind_task_to_cpu();
|         }
|
|         put_cpu_fpsimd_context();
|
|         // Preempted; migrated from CPU 1 to CPU 0.
|         // task->fpsimd_cpu is still 0
|         // If per_cpu_ptr(&fpsimd_last_state, 0) is still task then:
|         // - Stale HW state is reused (with SME traps enabled)
|         // - TIF_FOREIGN_FPSTATE is cleared
|         // - A return to userspace skips HW state restore
| }

Fix the case where the state is not live and TIF_FOREIGN_FPSTATE is set
by calling fpsimd_flush_task_state() to detach from the saved CPU
state. This ensures that a subsequent context switch will not reuse the
stale CPU state, and will instead set TIF_FOREIGN_FPSTATE, forcing the
new state to be reloaded from memory prior to a return to userspace.
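
Building on the code quoted above, the fix amounts to adding an
else-branch, roughly:

  if (!test_thread_flag(TIF_FOREIGN_FPSTATE)) {
          unsigned long vq_minus_one =
                  sve_vq_from_vl(task_get_sme_vl(current)) - 1;
          sme_set_vq(vq_minus_one);

          fpsimd_bind_task_to_cpu();
  } else {
          fpsimd_flush_task_state(current); /* detach from stale CPU state */
  }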

Note: this was originally posted as [1].

Fixes: 8bd7f91c03d8 ("arm64/sme: Implement traps and syscall handling for SME")
Reported-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/linux-arm-kernel/20241204-arm64-sme-reenable-v2-1-bae87728251d@kernel.org/
[ Rutland: rewrite commit message ]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-6-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Remove opportunistic freeing of SME state
Mark Rutland [Wed, 9 Apr 2025 16:40:01 +0000 (17:40 +0100)] 
arm64/fpsimd: Remove opportunistic freeing of SME state

When a task's SVE vector length (NSVL) is changed, and the task happens
to have SVCR.{SM,ZA}=={0,0}, vec_set_vector_length() opportunistically
frees the task's sme_state and clears TIF_SME.

The opportunistic freeing was added with no rationale in commit:

  d4d5be94a8787242 ("arm64/fpsimd: Ensure SME storage is allocated after SVE VL changes")

That commit fixed an unrelated problem where the task's sve_state was
freed while it could be used to store streaming mode register state,
where the fix was to re-allocate the task's sve_state.

There is no need to free and/or reallocate the task's sme_state when the
SVE vector length changes, and there is no need to clear TIF_SME. Given
the SME vector length (SVL) doesn't change, the task's sme_state remains
correctly sized.

Remove the unnecessary opportunistic freeing of the task's sme_state
when the task's SVE vector length is changed. The task's sme_state is
still freed when the SME vector length is changed.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-5-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Remove redundant SVE trap manipulation
Mark Rutland [Wed, 9 Apr 2025 16:40:00 +0000 (17:40 +0100)] 
arm64/fpsimd: Remove redundant SVE trap manipulation

When task_fpsimd_load() loads the saved FPSIMD/SVE/SME state, it
configures EL0 SVE traps by calling sve_user_{enable,disable}(). This is
unnecessary, and this is suspicious/confusing as task_fpsimd_load() does
not configure EL0 SME traps.

All calls to task_fpsimd_load() are followed by a call to
fpsimd_bind_task_to_cpu(), where the latter configures traps for SVE and
SME dependent upon the current values of TIF_SVE and TIF_SME, overriding
any trap configuration performed by task_fpsimd_load().

The sve_user_{enable,disable}() calls in task_fpsimd_load()
have been redundant (though benign) since they were introduced in
commit:

  a0136be443d51803 ("arm64/fpsimd: Load FP state based on recorded data type")

Remove the unnecessary and confusing SVE trap manipulation.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-4-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
3 months agoarm64/fpsimd: Remove unused fpsimd_force_sync_to_sve()
Mark Rutland [Wed, 9 Apr 2025 16:39:59 +0000 (17:39 +0100)] 
arm64/fpsimd: Remove unused fpsimd_force_sync_to_sve()

There have been no users of fpsimd_force_sync_to_sve() since commit:

  bbc6172eefdb276b ("arm64/fpsimd: SME no longer requires SVE register state")

Remove fpsimd_force_sync_to_sve().

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20250409164010.3480271-3-mark.rutland@arm.com
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>