Will Deacon [Thu, 12 Sep 2024 12:43:57 +0000 (13:43 +0100)]
Merge branch 'for-next/selftests' into for-next/core
* for-next/selftests:
kselftest/arm64: Fix build warnings for ptrace
kselftest/arm64: Actually test SME vector length changes via sigreturn
kselftest/arm64: signal: fix/refactor SVE vector length enumeration
Will Deacon [Thu, 12 Sep 2024 12:43:41 +0000 (13:43 +0100)]
Merge branch 'for-next/poe' into for-next/core
* for-next/poe: (31 commits)
arm64: pkeys: remove redundant WARN
kselftest/arm64: Add test case for POR_EL0 signal frame records
kselftest/arm64: parse POE_MAGIC in a signal frame
kselftest/arm64: add HWCAP test for FEAT_S1POE
selftests: mm: make protection_keys test work on arm64
selftests: mm: move fpregs printing
kselftest/arm64: move get_header()
arm64: add Permission Overlay Extension Kconfig
arm64: enable PKEY support for CPUs with S1POE
arm64: enable POE and PIE to coexist
arm64/ptrace: add support for FEAT_POE
arm64: add POE signal support
arm64: implement PKEYS support
arm64: add pte_access_permitted_no_overlay()
arm64: handle PKEY/POE faults
arm64: mask out POIndex when modifying a PTE
arm64: convert protection key into vm_flags and pgprot values
arm64: add POIndex defines
arm64: re-order MTE VM_ flags
arm64: enable the Permission Overlay Extension for EL0
...
Will Deacon [Thu, 12 Sep 2024 12:43:22 +0000 (13:43 +0100)]
Merge branch 'for-next/pkvm-guest' into for-next/core
* for-next/pkvm-guest:
arm64: smccc: Reserve block of KVM "vendor" services for pKVM hypercalls
drivers/virt: pkvm: Intercept ioremap using pKVM MMIO_GUARD hypercall
arm64: mm: Add confidential computing hook to ioremap_prot()
drivers/virt: pkvm: Hook up mem_encrypt API using pKVM hypercalls
arm64: mm: Add top-level dispatcher for internal mem_encrypt API
drivers/virt: pkvm: Add initial support for running as a protected guest
firmware/smccc: Call arch-specific hook on discovering KVM services
Will Deacon [Thu, 12 Sep 2024 12:43:08 +0000 (13:43 +0100)]
Merge branch 'for-next/mm' into for-next/core
* for-next/mm:
arm64/mm: use lm_alias() with addresses passed to memblock_free()
mm: arm64: document why pte is not advanced in contpte_ptep_set_access_flags()
arm64: Expose the end of the linear map in PHYSMEM_END
arm64: trans_pgd: mark PTE entries as valid to avoid dead kexec()
arm64/mm: Delete __init region from memblock.reserved
Will Deacon [Thu, 12 Sep 2024 12:42:57 +0000 (13:42 +0100)]
Merge branch 'for-next/misc' into for-next/core
* for-next/misc:
arm64: hibernate: Fix warning for cast from restricted gfp_t
arm64: esr: Define ESR_ELx_EC_* constants as UL
arm64: Constify struct kobj_type
arm64: smp: smp_send_stop() and crash_smp_send_stop() should try non-NMI first
arm64/sve: Remove unused declaration read_smcr_features()
arm64: mm: Remove unused declaration early_io_map()
arm64: el2_setup.h: Rename some labels to be more diff-friendly
arm64: signal: Fix some under-bracketed UAPI macros
arm64/mm: Drop TCR_SMP_FLAGS
arm64/mm: Drop PMD_SECT_VALID
Will Deacon [Thu, 12 Sep 2024 12:42:42 +0000 (13:42 +0100)]
Merge branch 'for-next/acpi' into for-next/core
* for-next/acpi:
ACPI/IORT: Add PMCG platform information for HiSilicon HIP10/11
ACPI: ARM64: add acpi_iort.h to MAINTAINERS
ACPI/IORT: Switch to use kmemdup_array()
arm64: hibernate: Fix warning for cast from restricted gfp_t
This patch fixes the following warning by adding __force
to the cast:
arch/arm64/kernel/hibernate.c:410:44: sparse: warning: cast from restricted gfp_t
Add explicit casting to prevent expansion of the 32nd bit of
a u32 into the upper half of a u64 in several places.
For example, in inject_abt64:
ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT = 0x24 << 26.
The result of this operation is an int with bit 31 set.
When this value is cast to u64 (esr is u64), the sign bit
fills the upper 32 bits.
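For illustration, a minimal sketch of the bug (constant values taken from
the example above; per the patch title, the fix wraps the constants in UL()):

    u64 esr = ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;  /* 0x24 << 26 */
    /*
     * 0x24 << 26 = 0x90000000 as a 32-bit int, i.e. a negative value;
     * converting it to u64 sign-extends it to 0xffffffff90000000.
     * With ESR_ELx_EC_DABT_LOW defined as UL(0x24), the shift is done
     * in 64-bit arithmetic and esr is 0x90000000 as intended.
     */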
Found by Linux Verification Center (linuxtesting.org) with SVACE.
FEAT_PAN3 is present whenever FEAT_S1POE is, and this WARN() was meant to
assert that. However, execute_only_pkey() is always called from mmap(),
even on a CPU without POE support.
Rather than making the WARN() conditional, just delete it.
Ilkka Koskinen [Fri, 6 Sep 2024 19:15:39 +0000 (12:15 -0700)]
perf: arm_pmuv3: Use BR_RETIRED for HW branch event if enabled
The PMU driver attempts to use PC_WRITE_RETIRED for the HW branch event,
if enabled. However, PC_WRITE_RETIRED counts only taken branches,
whereas BR_RETIRED also counts not-taken ones.
Furthermore, perf uses the HW branch event to calculate the branch miss
ratio, implying BR_RETIRED is the correct event to count.
PC_WRITE_RETIRED is kept as a fallback in case BR_RETIRED isn't
implemented.
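A hedged sketch of the selection logic (the bitmap and event-ID names
exist in the PMUv3 driver, but the exact change may differ):

    /* Prefer BR_RETIRED (all branches); fall back to the old mapping. */
    if (test_bit(ARMV8_PMUV3_PERFCTR_BR_RETIRED, cpu_pmu->pmceid_bitmap))
            hw_event = ARMV8_PMUV3_PERFCTR_BR_RETIRED;
    else
            hw_event = ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED; /* taken only */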
Robin Murphy [Wed, 4 Sep 2024 17:34:04 +0000 (18:34 +0100)]
MAINTAINERS: List Arm interconnect PMUs as supported
Whatever I may or may not have hoped for, looking after these drivers
seems to have firmly stuck as one of the responsibilities of the job Arm
pays me for, and I would still like to be aware of any other patches, so
make it official.
Robin Murphy [Wed, 4 Sep 2024 17:34:03 +0000 (18:34 +0100)]
perf: Add driver for Arm NI-700 interconnect PMU
The Arm NI-700 Network-on-Chip Interconnect has a relatively
straightforward design with a hierarchy of voltage, power, and clock
domains, where each clock domain then contains a number of interface
units and a PMU which can monitor events thereon. As such, it begets a
relatively straightforward driver to interface those PMUs with perf.
Even more so than with arm-cmn, users will require detailed knowledge of
the wider system topology in order to meaningfully analyse anything,
since the interconnect itself cannot know what lies beyond the boundary
of each inscrutably-numbered interface. Given that, for now they are
also expected to refer to the NI-700 documentation for the relevant
event IDs to provide as well. An identifier is implemented so we can
come back and add jevents if anyone really wants to.
Robin Murphy [Wed, 4 Sep 2024 17:34:02 +0000 (18:34 +0100)]
dt-bindings/perf: Add Arm NI-700 PMU
Add an initial binding for the Arm NI-700 interconnect PMU. As with the
Arm CMN family, there are already future NI products on the roadmap, so
the overall binding is named generically just in case any
non-discoverable incompatibility between generations crops up.
Robin Murphy [Wed, 4 Sep 2024 18:41:55 +0000 (19:41 +0100)]
perf/arm-cmn: Improve format attr printing
Take full advantage of our formats being stored in bitfield form, and
make the printing even more robust and simple by letting printk do all
the hard work of formatting bitlists.
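Roughly, the trick is printk's "%*pbl" bitmap-list specifier (a sketch;
the driver's actual struct layout may differ):

    unsigned long field = fmt->field;   /* assumed bitfield storage */
    /* "%*pbl" renders set bits as ranges, e.g. bits 0-3,16 -> "0-3,16" */
    return sysfs_emit(buf, "config:%*pbl\n", 64, &field);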
Robin Murphy [Wed, 4 Sep 2024 18:41:54 +0000 (19:41 +0100)]
perf/arm-cmn: Clean up unnecessary NUMA_NO_NODE check
Checking for NUMA_NO_NODE is a misleading and, on reflection, entirely
unnecessary micro-optimisation. If it ever did happen that an incoming
CPU has no NUMA affinity while the current CPU does, a questionably-
useful PMU migration isn't the biggest thing wrong with that picture...
arm64/mm: use lm_alias() with addresses passed to memblock_free()
The pointer argument to memblock_free() needs to be a linear map address, but
in mem_init() we pass __init_begin/__init_end, which are kernel image addresses.
This results in warnings when building with CONFIG_DEBUG_VIRTUAL=y:
virt_to_phys used for non-linear address: ffff800081270000 (set_reset_devices+0x0/0x10)
WARNING: CPU: 0 PID: 1 at arch/arm64/mm/physaddr.c:12 __virt_to_phys+0x54/0x70
Modules linked in:
CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.11.0-rc6-next-20240905 #5810 b1ebb0ad06653f35ce875413d5afad24668df3f3
Hardware name: FVP Base RevC (DT)
pstate: 2161402005 (nZCv daif +PAN -UAO -TCO +DIT -SSBS BTYPE=--)
pc : __virt_to_phys+0x54/0x70
lr : __virt_to_phys+0x54/0x70
sp : ffff80008169be20
...
Call trace:
__virt_to_phys+0x54/0x70
memblock_free+0x18/0x30
free_initmem+0x3c/0x9c
kernel_init+0x30/0x1cc
ret_from_fork+0x10/0x20
Fix this by having mem_init() convert the pointers via lm_alias().
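A sketch of the resulting call (lm_alias() returns the linear-map alias
of a kernel image address):

    memblock_free(lm_alias(__init_begin),
                  (unsigned long)__init_end - (unsigned long)__init_begin);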
Fixes: 1db9716d4487 ("arm64/mm: Delete __init region from memblock.reserved")
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Rong Qianfeng <rongqianfeng@vivo.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20240905152935.4156469-1-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Barry Song [Thu, 5 Sep 2024 08:11:24 +0000 (20:11 +1200)]
mm: arm64: document why pte is not advanced in contpte_ptep_set_access_flags()
According to David and Ryan, there isn't a bug here, even though we
don't advance the PTE entry, because __ptep_set_access_flags() only
uses the access flags from the entry.
However, we always check pte_same(pte, entry) using the first entry
in __ptep_set_access_flags(). This means that the checks from 1 to
nr - 1 are not comparing the same PTE indexes (thus, they always
return false), which can be a bit confusing. To clarify the code, let's
add some comments.
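A simplified sketch of the loop being documented (not the exact code):

    /*
     * 'entry' (the first pte) is deliberately not advanced: only its
     * access/dirty flags are consumed by __ptep_set_access_flags(),
     * and its internal pte_same() check can only match for i == 0.
     */
    for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE)
            __ptep_set_access_flags(vma, addr, ptep, entry, 0);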
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Barry Song <v-songbaohua@oppo.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/r/20240905081124.9576-1-21cnbao@gmail.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Expose the end of the linear map in PHYSMEM_END
The memory hot-plug and resource management code needs to know the
largest address which can fit in the linear map, so set PHYSMEM_END for
that purpose.
This fixes a crash at boot when amdgpu tries to create
DEVICE_PRIVATE_MEMORY and is given a physical address by the resource
management code that is outside the range which can have a
`struct page`.
arm64: trans_pgd: mark PTE entries as valid to avoid dead kexec()
The reasons for PTEs in the kernel direct map to be marked invalid are not
limited to kfence / debug pagealloc machinery. In particular,
memfd_secret() also steals pages with set_direct_map_invalid_noflush().
When building the transitional page tables for kexec from the current
kernel's page tables, those pages need to become regular writable pages,
otherwise, if the relocation places kexec segments over such pages, a fault
will occur during kexec, leading to the host going dark.
This patch addresses the kexec issue by marking any PTE as valid if it is
not none. While this fixes the kexec crash, it does not address the
security concern that if processes owning secret memory are not terminated
before kexec, the secret content will be mapped in the new kernel without
being scrubbed.
Rong Qianfeng [Mon, 2 Sep 2024 02:39:35 +0000 (10:39 +0800)]
arm64/mm: Delete __init region from memblock.reserved
If CONFIG_ARCH_KEEP_MEMBLOCK is enabled, the memory information in
memblock will be retained. We release the __init memory here, and
we should also delete the corresponding region in memblock.reserved,
which allows debugfs/memblock/reserved to display correct memory
information.
Robin Murphy [Mon, 2 Sep 2024 17:52:04 +0000 (18:52 +0100)]
perf/arm-cmn: Support CMN S3
CMN S3 is the latest and greatest evolution for 2024, although most of
the new features don't impact the PMU, so from our point of view it ends
up looking a lot like CMN-700 r3 still. We have some new device types to
ignore, a mildly irritating rearrangement of the register layouts, and a
scary new configuration option that makes it potentially unsafe to even
walk the full discovery tree, let alone attempt to use the PMU.
Robin Murphy [Mon, 2 Sep 2024 17:52:03 +0000 (18:52 +0100)]
dt-bindings: perf: arm-cmn: Add CMN S3
The CMN S3 PMU is functionally still very similar to CMN-700, however
while the register contents are compatible, many of them are moved to
different offsets. While this is technically discoverable by a careful
driver that understands the part number in the peripheral ID registers
(which do at least remain in the same place), a new unique compatible
seems warranted to avoid any surprises.
Robin Murphy [Mon, 2 Sep 2024 17:52:02 +0000 (18:52 +0100)]
perf/arm-cmn: Refactor DTC PMU register access
Annoyingly, we're soon going to have to cope with PMU registers moving
about. This will mostly be straightforward, except for the hard-coding
of CMN_PMU_OFFSET for the DTC PMU registers. As a first step, refactor
those accessors to allow for encapsulating a variable offset without
making a big mess all over. As a bonus, we can repack the arm_cmn_dtc
structure to accommodate the new pointer without growing any larger,
since irq_friend only encodes a range of +/-3.
Robin Murphy [Mon, 2 Sep 2024 17:52:01 +0000 (18:52 +0100)]
perf/arm-cmn: Make cycle counts less surprising
By default, CMN has automatic clock-gating with the implication that
a DTC's cycle counter may not increment while the DTC is sufficiently
idle. Given that we may have up to 4 DTCs to choose from when scheduling
a cycles event, this may potentially lead to surprising results if
trying to measure metrics based on activity in a different DTC domain
from where cycles end up being counted. Furthermore, since the details
of internal clock gating are not documented, we can't even reason about
what "active" cycles for a DTC actually mean relative to the activity of
other nodes within the same nominal DTC domain.
Make the reasonable assumption that if the user wants to count cycles,
they almost certainly want to count all of the cycles, and disable clock
gating while a DTC's cycle counter is in use.
Robin Murphy [Mon, 2 Sep 2024 17:51:59 +0000 (18:51 +0100)]
perf/arm-cmn: Ensure dtm_idx is big enough
While CMN_MAX_DIMENSION was bumped to 12 for CMN-650, that only supports
up to a 10x10 mesh, so bumping dtm_idx to 256 bits at the time worked
out OK in practice. However CMN-700 did finally support up to 144 XPs,
and thus needs a worst-case 288 bits of dtm_idx for an aggregated XP
event on a maxed-out config. Oops.
Robin Murphy [Mon, 2 Sep 2024 17:51:57 +0000 (18:51 +0100)]
perf/arm-cmn: Refactor node ID handling. Again.
The scope of the "extra device ports" configuration is not made clear by
the CMN documentation - so far we've assumed it applies globally, based
on the sole example which suggests as much. However it transpires that
this is incorrect, and the format does in fact vary based on each
individual XP's port configuration. As a consequence, we're currently
liable to decode the port/device indices from a node ID incorrectly,
thus program the wrong event source in the DTM leading to bogus event
counts, and also show device topology on the wrong ports in debugfs.
To put this right, rework node IDs yet again to carry around the
additional data necessary to decode them properly per-XP. At this point
the notion of fully decomposing an ID becomes more impractical than it's
worth, so unabstracting the XY mesh coordinates (where 2/3 users were
just debug anyway) ends up leaving things a bit simpler overall.
Joey Gouly [Thu, 22 Aug 2024 15:11:11 +0000 (16:11 +0100)]
kselftest/arm64: parse POE_MAGIC in a signal frame
Teach the signal frame parsing about the new POE frame; this avoids a
warning when it is generated.
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20240822151113.1479789-29-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Joey Gouly [Thu, 22 Aug 2024 15:11:10 +0000 (16:11 +0100)]
kselftest/arm64: add HWCAP test for FEAT_S1POE
Check that when POE is enabled, the POR_EL0 register is accessible.
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Brown <broonie@kernel.org>
Cc: Shuah Khan <shuah@kernel.org>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20240822151113.1479789-28-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Joey Gouly [Thu, 22 Aug 2024 15:11:09 +0000 (16:11 +0100)]
selftests: mm: make protection_keys test work on arm64
The encoding of the pkey register differs on arm64 from that on x86/ppc. On
those platforms, a set bit in the register disables a permission; on arm64, a
set bit indicates that the permission is allowed.
This drops two asserts of the form:
assert(read_pkey_reg() <= orig_pkey_reg);
because on arm64 this doesn't hold, due to the encoding.
The pkey must be reset to both access allow and write allow in the signal
handler. pkey_access_allow() works currently for PowerPC as the
PKEY_DISABLE_ACCESS and PKEY_DISABLE_WRITE have overlapping bits set.
Access to the uc_mcontext is abstracted, as arm64 has a different structure.
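To illustrate the encoding difference, a hedged sketch (macro names as in
the arm64/x86 headers of the time; the test's own helpers differ):

    /* x86 PKRU: a set bit removes a permission; deny all access to key 1 */
    u32 pkru = PKEY_DISABLE_ACCESS << (2 * 1);
    /* arm64 POR_EL0: a set bit grants a permission; key 1 gets R+W+X */
    u64 por_el0 = POE_RXW << (POR_BITS_PER_PKEY * 1);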
Signed-off-by: Joey Gouly <joey.gouly@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Acked-by: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/r/20240822151113.1479789-27-joey.gouly@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Joey Gouly [Thu, 22 Aug 2024 15:11:04 +0000 (16:11 +0100)]
arm64: enable POE and PIE to coexist
Permission Indirection Extension and Permission Overlay Extension can be
enabled independently.
When PIE is disabled and POE is enabled, the permissions set by POR_EL0 will be
applied on top of the permissions set in the PTE.
When both PIE and POE are enabled, the permissions set by POR_EL0 will be
applied on top of the permissions set by the PIRE0_EL1 register.
However, PIRE0_EL1 has encodings that specifically control whether the
overlay is applied.
For example:
0001 Read, Overlay applied.
1000 Read, Overlay not applied.
Switch to using the 'Overlay applied' encodings in PIRE0_EL1, so that PIE and
POE can coexist.
The new cpucap indicates whether the system supports POE. It is a
CPUCAP_BOOT_CPU_FEATURE: the boot CPU will enable POE if it has it, so
secondary CPUs must also have this feature.
Will Deacon [Wed, 4 Sep 2024 10:15:52 +0000 (11:15 +0100)]
Merge remote-tracking branch 'kvmarm/arm64-shared-6.12' into for-next/poe
Pull in the AT instruction conversion patch from the KVM arm64 tree, as
this is a shared dependency between the POE series from Joey and the AT
emulation series for Nested Virtualisation from Marc.
Will Deacon [Fri, 30 Aug 2024 13:01:50 +0000 (14:01 +0100)]
arm64: smccc: Reserve block of KVM "vendor" services for pKVM hypercalls
pKVM relies on hypercalls to expose services such as memory sharing to
protected guests. Tentatively allocate a block of 58 hypercalls (i.e.
fill the remaining space in the first 64 function IDs) for pKVM usage,
as future extensions such as pvIOMMU support, range-based memory sharing
and validation of assigned devices will require additional services.
Will Deacon [Fri, 30 Aug 2024 13:01:49 +0000 (14:01 +0100)]
drivers/virt: pkvm: Intercept ioremap using pKVM MMIO_GUARD hypercall
Hook up pKVM's MMIO_GUARD hypercall so that ioremap() and friends will
register the target physical address as MMIO with the hypervisor,
allowing guest exits to that page to be emulated by the host with full
syndrome information.
Will Deacon [Fri, 30 Aug 2024 13:01:47 +0000 (14:01 +0100)]
drivers/virt: pkvm: Hook up mem_encrypt API using pKVM hypercalls
If we detect the presence of pKVM's SHARE and UNSHARE hypercalls, then
register a backend implementation of the mem_encrypt API so that things
like DMA buffers can be shared appropriately with the host.
Marc Zyngier [Fri, 30 Aug 2024 13:01:44 +0000 (14:01 +0100)]
firmware/smccc: Call arch-specific hook on discovering KVM services
arm64 will soon require its own callback to initialise services
that are only available on this architecture. Introduce a hook
that can be overloaded by the architecture.
Mark Brown [Thu, 29 Aug 2024 17:20:09 +0000 (18:20 +0100)]
kselftest/arm64: Actually test SME vector length changes via sigreturn
The test case for SME vector length changes via sigreturn uses a bit too
much cut'n'paste and only actually changed the SVE vector length in the
test itself. Andre's recent factoring out of the initialisation code caused
this to be exposed and the test to start failing. Fix the test to actually
cover the thing it's supposed to test.
Currently users can get the Root Ports supported by the PCIe PMU from the
"bus" sysfs attribute, which indicates the PCIe bus number where the
Root Ports are located. This may be insufficient, since Root Ports
supported by different PCIe PMUs may be located on the same PCIe bus.
So additionally export the BDF range of the supported Root Ports.
The initial value of the event ctrl register is set to HISI_PCIE_INIT_SET
and then modified according to the user options. As a result, counting the
bandwidth of TLP headers only never takes effect, since HISI_PCIE_INIT_SET
configures the counter for TLP payload bandwidth. Fix this by making the
initial value of the event ctrl register 0.
Fixes: 17d573984d4d ("drivers/perf: hisi: Add TLP filter support")
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20240829090332.28756-3-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
Yicong Yang [Thu, 29 Aug 2024 09:03:30 +0000 (17:03 +0800)]
drivers/perf: hisi_pcie: Record hardware counts correctly
Currently we set the period and record it as the initial value of the
counter without checking whether it was written to the hardware
successfully. However, the counter may be unwritable if the target event
is unsupported by the device. In such a case we pass the user a wrong
count:
[start counts when setting the period]
hwc->prev_count = 0x8000000000000000
device.counter_value = 0 // the counter is not set as the period
[when user reads the counter]
event->count = device.counter_value - hwc->prev_count
= 0x8000000000000000 // wrong. should be 0.
Fix this by recording the hardware counter value correctly when setting
the period.
Fixes: 8404b0fbc7fb ("drivers/perf: hisi: Add driver for HiSilicon PCIe PMU")
Signed-off-by: Yicong Yang <yangyicong@hisilicon.com>
Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Link: https://lore.kernel.org/r/20240829090332.28756-2-yangyicong@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
James Clark [Tue, 27 Aug 2024 14:51:12 +0000 (15:51 +0100)]
drivers/perf: arm_spe: Use perf_allow_kernel() for permissions
Use perf_allow_kernel() for 'pa_enable' (physical addresses),
'pct_enable' (physical timestamps) and context IDs. This means that
perf_event_paranoid is now taken into account and LSM hooks can be used,
which is more consistent with other perf_event_open calls. For example
PERF_SAMPLE_PHYS_ADDR uses perf_allow_kernel() rather than just
perfmon_capable().
This also indirectly fixes the following error message which is
misleading because perf_event_paranoid is not taken into account by
perfmon_capable():
$ perf record -e arm_spe/pa_enable/
Error:
Access to performance monitoring and observability operations is
limited. Consider adjusting /proc/sys/kernel/perf_event_paranoid
setting ...
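A sketch of the resulting check in the event_init path (PMSCR field
names assumed from the generated sysreg definitions):

    int err;

    if (arm_spe_event_to_pmscr(event) &
        (PMSCR_EL1_PA | PMSCR_EL1_PCT | PMSCR_EL1_CX)) {
            err = perf_allow_kernel(&event->attr);
            if (err)
                    return err;
    }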
On arm64, this prctl controls access to CNTVCT_EL0, CNTVCTSS_EL0 and
CNTFRQ_EL0 via CNTKCTL_EL1.EL0VCTEN. Since this bit is also used to
implement various erratum workarounds, check whether the CPU needs
a workaround whenever we potentially need to change it.
This is needed for a correct implementation of non-instrumenting
record-replay debugging on arm64 (i.e. rr; https://rr-project.org/).
rr must trap and record any sources of non-determinism from the
userspace program's perspective so it can be replayed later. This
includes the results of syscalls as well as the results of access
to architected timers exposed directly to the program. This prctl
was originally added for x86 by commit 8fb402bccf20 ("generic, x86:
add prctl commands PR_GET_TSC and PR_SET_TSC"), and rr uses it to
trap RDTSC on x86 for the same reason.
We also considered exposing this as a PTRACE_EVENT. However, prctl
seems like a better choice for these reasons:
1) In general an in-process control seems more useful than an
out-of-process control, since anything that you would be able to
do with ptrace could also be done with prctl (tracer can inject a
call to the prctl and handle signal-delivery-stops), and it avoids
needing an additional process (which will complicate debugging
of the ptraced process since it cannot have more than one tracer,
and will be incompatible with ptrace_scope=3) in cases where that
is not otherwise necessary.
2) Consistency with x86_64. Note that on x86_64, RDTSC has been there
since the start, so it's the same situation as on arm64.
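From userspace the interface looks the same as on x86; a minimal example:

    #include <sys/prctl.h>
    #include <linux/prctl.h>

    int main(void)
    {
            /* Counter reads now raise SIGSEGV, so a recorder can trap them. */
            prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0);
            /* ... run the code being recorded ... */
            prctl(PR_SET_TSC, PR_TSC_ENABLE, 0, 0, 0); /* restore direct reads */
            return 0;
    }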
perf/dwc_pcie: Always register for PCIe bus notifier
When PCIe devices are discovered late, the driver can't find them during
init and returns without registering with the bus notifier. As a result,
devices that are discovered late can never be registered with the driver.
Register the bus notifier and the driver even if no device is found as
part of init.
perf/dwc_pcie: Fix registration issue in multi PCIe controller instances
When there are multiple instances of PCIe controllers, registration with
the perf driver fails with this error:
sysfs: cannot create duplicate filename '/devices/platform/dwc_pcie_pmu.0'
CPU: 0 PID: 166 Comm: modprobe Not tainted 6.10.0-rc2-next-20240607-dirty
Hardware name: Qualcomm SA8775P Ride (DT)
Call trace:
dump_backtrace.part.8+0x98/0xf0
show_stack+0x14/0x1c
dump_stack_lvl+0x74/0x88
dump_stack+0x14/0x1c
sysfs_warn_dup+0x60/0x78
sysfs_create_dir_ns+0xe8/0x100
kobject_add_internal+0x94/0x224
kobject_add+0xa8/0x118
device_add+0x298/0x7b4
platform_device_add+0x1a0/0x228
platform_device_register_full+0x11c/0x148
dwc_pcie_register_dev+0x74/0xf0 [dwc_pcie_pmu]
dwc_pcie_pmu_init+0x7c/0x1000 [dwc_pcie_pmu]
do_one_initcall+0x58/0x1c0
do_init_module+0x58/0x208
load_module+0x1804/0x188c
__do_sys_init_module+0x18c/0x1f0
__arm64_sys_init_module+0x14/0x1c
invoke_syscall+0x40/0xf8
el0_svc_common.constprop.1+0x70/0xf4
do_el0_svc+0x18/0x20
el0_svc+0x28/0xb0
el0t_64_sync_handler+0x9c/0xc0
el0t_64_sync+0x160/0x164
kobject: kobject_add_internal failed for dwc_pcie_pmu.0 with -EEXIST,
don't try to register things with the same name in the same directory.
This is because devices under two different controllers can have the same
bdf value.
Update the logic to use sbdf, which is a unique number even in the
multi-instance case.
Jing Zhang [Thu, 22 Aug 2024 03:33:31 +0000 (11:33 +0800)]
drivers/perf: Fix ali_drw_pmu driver interrupt status clearing
The alibaba_uncore_pmu driver forgot to clear all interrupt status bits
in the interrupt processing function. After a PMU counter overflow
interrupt occurred, an interrupt storm followed, causing the system
to hang.
Therefore, clear the correct interrupt status bits in the interrupt
handling function to fix it.
Yangyu Chen [Wed, 7 Aug 2024 02:35:18 +0000 (11:35 +0900)]
drivers/perf: apple_m1: add known PMU events
This patch adds known PMU events that can be found under /usr/share/kpep
on macOS. The m1_pmu_events and m1_pmu_event_affinity are generated from
the script [1], which consumes the plist file from Apple. And then added
these events to m1_pmu_perf_map and m1_pmu_event_attrs with Apple's
documentation [2].
Douglas Anderson [Wed, 21 Aug 2024 21:53:57 +0000 (14:53 -0700)]
arm64: smp: smp_send_stop() and crash_smp_send_stop() should try non-NMI first
When testing hard lockup handling on my sc7180-trogdor-lazor device
with pseudo-NMI enabled, with serial console enabled and with kgdb
disabled, I found that the stack crawls printed to the serial console
ended up as a jumbled mess. After rebooting, the pstore-based console
looked fine though. Also, enabling kgdb to trap the panic made the
console look fine and avoided the mess.
After a bit of tracking down, I came to the conclusion that this was
what was happening:
1. The panic path was stopping all other CPUs with
panic_other_cpus_shutdown().
2. At least one of those other CPUs was in the middle of printing to
the serial console and holding the console port's lock, which is
grabbed with "irqsave". ...but since we were stopping with an NMI
we didn't care about the "irqsave" and interrupted anyway.
3. Since we stopped the CPU while it was holding the lock it would
never release it.
4. All future calls to output to the console would end up failing to
get the lock in qcom_geni_serial_console_write(). This isn't
_totally_ unexpected at panic time but it's a code path that's not
well tested, hard to get right, and apparently doesn't work
terribly well on the Qualcomm geni serial driver.
The Qualcomm geni serial driver was fixed to be a bit better in commit 9e957a155005 ("serial: qcom-geni: Don't cancel/abort if we can't get
the port lock") but it's nice not to get into this situation in the
first place.
Taking a page from what x86 appears to do in native_stop_other_cpus(),
do this:
1. First, try to stop other CPUs with a normal IPI and wait a second.
This gives them a chance to leave critical sections.
2. If CPUs fail to stop then retry with an NMI, but give a much lower
timeout since there's no good reason for a CPU not to react quickly
to an NMI.
This works well and avoids the corrupted console and (presumably)
could help avoid other similar issues.
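In outline, the combined stop path looks something like this (a sketch;
the wait-helper name is assumed and details are simplified):

    static void stop_other_cpus(void)
    {
            /* Phase 1: normal IPI, give CPUs ~1s to leave critical sections. */
            smp_cross_call(cpu_online_mask, IPI_CPU_STOP);
            if (all_other_cpus_stopped(USEC_PER_SEC))       /* assumed helper */
                    return;
            /* Phase 2: NMI for the stragglers, with a much shorter wait. */
            smp_cross_call(cpu_online_mask, IPI_CPU_STOP_NMI);
            all_other_cpus_stopped(10 * USEC_PER_MSEC);
    }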
In order to do this, we need to do a little re-organization of our
IPIs since we don't have any more free IDs. Do what was suggested in
previous conversations and combine "stop" and "crash stop". That frees
up an IPI so now we can have a "stop" and "stop NMI".
In order to do this we also need a slight change in the way we keep
track of which CPUs still need to be stopped. We need to know
specifically which CPUs haven't stopped yet when we fall back to NMI
but in the "crash stop" case the "cpu_online_mask" isn't updated as
CPUs go down. This is why that code path had an atomic of the number
of CPUs left. Solve this by also updating the "cpu_online_mask" for
crash stops.
All of the above lets us combine the logic for "stop" and "crash stop"
code, which appeared to have a bunch of arbitrary implementation
differences.
Aside from the above change where we try a normal IPI and then an NMI,
the combined function has a few subtle differences:
* In the normal smp_send_stop(), if we fail to stop one or more CPUs
then we won't include the current CPU (the one running
smp_send_stop()) in the error message.
* In crash_smp_send_stop(), if we fail to stop some CPUs we'll print
the CPUs that we failed to stop instead of printing all _but_ the
current running CPU.
* In crash_smp_send_stop(), we will now only print "SMP: stopping
secondary CPUs" if (system_state <= SYSTEM_RUNNING).
Andre Przywara [Wed, 21 Aug 2024 16:44:01 +0000 (17:44 +0100)]
kselftest/arm64: signal: fix/refactor SVE vector length enumeration
Currently a number of SVE/SME related tests have almost identical
functions to enumerate all supported vector lengths. However over time
the copy&pasted code has diverged, allowing some bugs to creep in:
- fake_sigreturn_sme_change_vl reports a failure, not a SKIP if only
one vector length is supported (but the SVE version is fine)
- fake_sigreturn_sme_change_vl tries to set the SVE vector length, not
the SME one (but the other SME tests are fine)
- za_no_regs keeps iterating forever if only one vector length is
supported (but za_regs is correct)
Since those bugs seem to be mostly copy&paste ones, let's consolidate
the enumeration loop into one shared function, and just call that from
each test. That should fix the above bugs, and prevent similar issues
from happening again.
Fixes: 4963aeb35a9e ("kselftest/arm64: signal: Add SME signal handling tests")
Signed-off-by: Andre Przywara <andre.przywara@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20240821164401.3598545-1-andre.przywara@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Yicong Yang [Wed, 31 Jul 2024 09:26:58 +0000 (17:26 +0800)]
ACPI/IORT: Add PMCG platform information for HiSilicon HIP10/11
HiSilicon HIP10/11 platforms using the same SMMU PMCG with HIP09
and thus suffers the same erratum. List them in the PMCG platform
information list without introducing a new SMMU PMCG Model.
perf: arm_pmuv3: Add support for Armv9.4 PMU instruction counter
Armv9.4/8.9 PMU adds optional support for a fixed instruction counter
similar to the fixed cycle counter. Support for the feature is indicated
in the ID_AA64DFR1_EL1 register PMICNTR field. The counter is not
accessible in AArch32.
Existing userspace using direct counter access won't know how to handle
the fixed instruction counter, so we have to avoid using the counter
when user access is requested.
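Detection is via the new ID field; roughly (the PMICNTR shift define and
counter index used here are assumed names):

    u64 dfr1 = read_sysreg_s(SYS_ID_AA64DFR1_EL1);

    if (cpuid_feature_extract_unsigned_field(dfr1,
                                             ID_AA64DFR1_EL1_PMICNTR_SHIFT))
            set_bit(ARMV8_PMU_INSTR_IDX, cpu_pmu->cntr_mask); /* assumed */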
KVM: arm64: Refine PMU defines for number of counters
There are 2 defines for the number of PMU counters:
ARMV8_PMU_MAX_COUNTERS and ARMPMU_MAX_HWEVENTS. Both are the same
currently, but Armv9.4/8.9 increases the number of possible counters
from 32 to 33. With this change, the maximum number of counters will
differ for KVM's PMU emulation, which is PMUv3.4. Give KVM PMU emulation
its own define to decouple it from the rest of the kernel's number of
PMU counters.
The VHE PMU code needs to match the PMU driver, so switch it to use
ARMPMU_MAX_HWEVENTS instead.
KVM: arm64: pmu: Use generated define for PMSELR_EL0.SEL access
ARMV8_PMU_COUNTER_MASK is really a mask for the PMSELR_EL0.SEL register
field. Make that clear by adding a standard sysreg definition for the
register, and using it instead.
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Marc Zyngier <maz@kernel.org>
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Tested-by: James Clark <james.clark@linaro.org>
Link: https://lore.kernel.org/r/20240731-arm-pmu-3-9-icntr-v3-4-280a8d7ff465@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
KVM: arm64: pmu: Use arm_pmuv3.h register accessors
Commit df29ddf4f04b ("arm64: perf: Abstract system register accesses
away") split off PMU register accessor functions to a standalone header.
Let's use it for KVM PMU code and get rid of one copy of the ugly switch
macro.
perf: arm_pmuv3: Prepare for more than 32 counters
Various PMUv3 registers which are a mask of counters are 64-bit
registers, but the accessor functions take a u32. This has been fine, as
the upper 32 bits have been RES0 given the maximum of 32 counters prior
to Armv9.4/8.9. With Armv9.4/8.9, a 33rd counter is
added. Update the accessor functions to use a u64 instead.
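The shape of the change, shown for one accessor (a sketch, not the full
diff):

    -static inline void write_pmcntenset(u32 val)
    +static inline void write_pmcntenset(u64 val)
     {
            write_sysreg(val, pmcntenset_el0);
     }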
perf: arm_pmu: Remove event index to counter remapping
Xscale and Armv6 PMUs defined the cycle counter at 0 and event counters
starting at 1 and had 1:1 event index to counter numbering. On Armv7 and
later, this changed the cycle counter to 31 and event counters start at
0. The drivers for Armv7 and PMUv3 kept the old event index numbering
and introduced an event index to counter conversion. The conversion uses
masking to convert from event index to a counter number. This operation
relies on having at most 32 counters so that the cycle counter index 0
can be transformed to counter number 31.
Armv9.4 adds support for an additional fixed function counter
(instructions) which increases possible counters to more than 32, and
the conversion won't work anymore as a simple subtract and mask. The
primary reason for the translation (other than history) seems to be to
have a contiguous mask of counters 0-N. Keeping that would result in
more complicated index to counter conversions. Instead, store a mask of
available counters rather than just number of events. That provides more
information in addition to the number of events.
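A sketch of old versus new (the removed macro is representative; the
iteration callee is illustrative):

    /* Old: index -> counter via subtract-and-mask, assuming <= 32
     * counters (cycle counter: idx 0 -> counter 31). */
    #define ARMV8_IDX_TO_COUNTER(x) \
            (((x) - ARMV8_IDX_COUNTER0) & ARMV8_PMU_COUNTER_MASK)

    /* New: walk a bitmap of implemented counters directly. */
    for_each_set_bit(idx, cpu_pmu->cntr_mask, ARMPMU_MAX_HWEVENTS)
            armv8pmu_enable_counter(idx);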
Use of_property_present() to test for property presence rather than
of_find_property(). This is part of a larger effort to remove callers
of of_find_property() and similar functions. of_find_property() leaks
the DT struct property and data pointers which is a problem for
dynamically allocated nodes which may be freed.
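The transformation, on an illustrative caller:

    -       if (of_find_property(node, "interrupt-affinity", NULL))
    +       if (of_property_present(node, "interrupt-affinity"))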
Signed-off-by: Rob Herring (Arm) <robh@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/20240731191312.1710417-15-robh@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>