git.ipfire.org Git - thirdparty/kernel/linux.git/log
2 weeks ago  perf vendor events intel: Update alderlake events from 1.34 to 1.35
Ian Rogers [Tue, 2 Dec 2025 16:53:32 +0000 (08:53 -0800)] 
perf vendor events intel: Update alderlake events from 1.34 to 1.35

The updated events were published in:
https://github.com/intel/perfmon/commit/c74f1cefa94d224cb3338507961b59d8a2a1c4e9

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf arm_spe: Add CPU variants supporting common data source packet
Leo Yan [Thu, 13 Nov 2025 10:57:39 +0000 (10:57 +0000)] 
perf arm_spe: Add CPU variants supporting common data source packet

Add the following CPU variants to the list for data source decoding:

  - Cortex-A715 [1]
  - Cortex-A78C [2]
  - Cortex-X1 [3]
  - Cortex-X4 [4]
  - Neoverse V3 [5]

[1] https://developer.arm.com/documentation/101590/0103/Statistical-Profiling-Extension-Support/Statistical-Profiling-Extension-data-source-packet
[2] https://developer.arm.com/documentation/102226/0002/Debug-descriptions/Statistical-Profiling-Extension/implementation-defined-features-of-SPE
[3] https://developer.arm.com/documentation/101433/0102/Debug-descriptions/Statistical-Profiling-Extension/implementation-defined-features-of-SPE
[4] https://developer.arm.com/documentation/102484/0003/Statistical-Profiling-Extension-support/Statistical-Profiling-Extension-data-source-packet
[5] https://developer.arm.com/documentation/107734/0002/Statistical-Profiling-Extension-support/Statistical-Profiling-Extension-data-source-packet

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf auxtrace: Include sys/types.h for pid_t
Arnaldo Carvalho de Melo [Wed, 3 Dec 2025 14:50:03 +0000 (11:50 -0300)] 
perf auxtrace: Include sys/types.h for pid_t

In 754187ad73b73bcb ("perf build: Remove NO_AUXTRACE build option")
the sys/types.h include was removed, which broke the build on all
Alpine Linux releases, since musl libc defines pid_t via sys/types.h.
Add it back.

Fixes: 754187ad73b73bcb ("perf build: Remove NO_AUXTRACE build option")
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf test: Add kallsyms split test
Namhyung Kim [Tue, 2 Dec 2025 23:57:18 +0000 (15:57 -0800)] 
perf test: Add kallsyms split test

Create a fake root directory for /proc/{version,modules,kallsyms} in
/tmp for testing.  The kallsyms file contains a bad symbol in the
module area which would cause the main map to be split.  The test
ensures there are only two maps - the kernel and the module - and that
the initial map is found again after the module, without creating
split maps like [kernel].0 and so on.

  $ perf test -vv "split kallsyms"
   69: split kallsyms:
  --- start ---
  test child forked, pid 1016196
  try to create fake root directory
  create kernel maps from the fake root directory
  maps__set_modules_path_dir: cannot open /tmp/perf-test.Zrv6Sy/lib/modules/X.Y.Z dir
  Problems setting modules path maps, continuing anyway...
  Failed to open /tmp/perf-test.Zrv6Sy/proc/kcore. Note /proc/kcore requires CAP_SYS_RAWIO capability to access.
  Using /tmp/perf-test.Zrv6Sy/proc/kallsyms for symbols
  kernel map loaded - check symbol and map
  ---- end(0) ----
   69: split kallsyms                                                  : Ok

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Use machine->root_dir to find /proc/kallsyms
Namhyung Kim [Tue, 2 Dec 2025 23:57:17 +0000 (15:57 -0800)] 
perf tools: Use machine->root_dir to find /proc/kallsyms

This is for test functions to find the kallsyms file correctly.  They
can find the machine from the kernel maps and use its root_dir.  This
is helpful for setting up a fake /proc directory for testing.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Fallback to initial kernel map properly
Namhyung Kim [Tue, 2 Dec 2025 23:57:16 +0000 (15:57 -0800)] 
perf tools: Fallback to initial kernel map properly

In maps__split_kallsyms(), a new kernel map is assumed when a symbol
without a module is found after any module while the initial kernel
map already has some symbols.  This is because modules are expected to
be outside of the kernel map, so modules should not have symbols
within the kernel map.

For example, the following memory map shows symbols and maps.  Any
symbols in the module 1 area will go to module 1.  The main kernel
map starts at 0xffffffffbc200000.  But if any symbol in that area
belongs to a module, the next symbols after 0xffffffffbd008000 will
generate new kernel maps like [kernel].1.

   kernel address   |                     |
                    |                     |
 0xffffffffc0000000 |---------------------|
                    |     (symbols)       |
                    |        ...          |   <---  [kernel].N
 0xffffffffbc400000 |---------------------|
                    |     (symbols)       |
                    |      module 2       |   <---  bad?
 0xffffffffbc380000 |---------------------|
                    |        ...          |
                    |     (symbols)       |
                    |  [kernel.kallsyms]  |   <---  initial map
 0xffffffffbc200000 |---------------------|
                    |                     |
                    |                     |
 0xffffffffabcde000 |---------------------|
                    |     (symbols)       |
                    |      module 1       |
 0xffffffffabcd0000 |---------------------|

This is very fragile when a module has a symbol that falls into the
main kernel map for some reason.  My system has a livepatch module
with such symbols, and it created a lot of new kernel maps after those
symbols.  But the symbol may have a broken address while the later
symbols can still be found in the initial kernel map.

Let's check the symbol address in the initial map and use it if found.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Fix split kallsyms DSO counting
Namhyung Kim [Tue, 2 Dec 2025 23:57:15 +0000 (15:57 -0800)] 
perf tools: Fix split kallsyms DSO counting

It's counted twice because it's also increased after calling
maps__insert().  I guess we want to increase it only after it's added
properly.

Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 2e538c4a1847291cf ("perf tools: Improve kernel/modules symbol lookup")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Mark split kallsyms DSOs as loaded
Namhyung Kim [Tue, 2 Dec 2025 23:57:14 +0000 (15:57 -0800)] 
perf tools: Mark split kallsyms DSOs as loaded

maps__split_kallsyms() splits symbols into module DSOs if they come
from a module.  It also handles some unusual kernel symbols after
modules by creating new kernel maps like "[kernel].0".

But those are pseudo DSOs created to hold the unexpected symbols.
They should not be considered unloaded kernel DSOs.  Otherwise the
dso__load() for them will end up calling dso__load_kallsyms() and then
maps__split_kallsyms() again and again.

Reviewed-by: Ian Rogers <irogers@google.com>
Fixes: 2e538c4a1847291cf ("perf tools: Improve kernel/modules symbol lookup")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Flush remaining samples w/o deferred callchains
Namhyung Kim [Thu, 20 Nov 2025 23:48:04 +0000 (15:48 -0800)] 
perf tools: Flush remaining samples w/o deferred callchains

It's possible that some kernel samples don't have matching deferred
callchain records when the profiling session ended before the threads
came back to userspace.  Let's flush those samples before finishing
the session.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Merge deferred user callchains
Namhyung Kim [Thu, 20 Nov 2025 23:48:03 +0000 (15:48 -0800)] 
perf tools: Merge deferred user callchains

Save samples with deferred callchains in a separate list and deliver
them after merging in the user callchains.  If users don't want the
merge, they can set tool->merge_deferred_callchains to false to
prevent the behavior.

With the previous result, perf script will now show the merged callchains.

  $ perf script
  ...
  pwd    2312   121.163435:     249113 cpu/cycles/P:
          ffffffff845b78d8 __build_id_parse.isra.0+0x218 ([kernel.kallsyms])
          ffffffff83bb5bf6 perf_event_mmap+0x2e6 ([kernel.kallsyms])
          ffffffff83c31959 mprotect_fixup+0x1e9 ([kernel.kallsyms])
          ffffffff83c31dc5 do_mprotect_pkey+0x2b5 ([kernel.kallsyms])
          ffffffff83c3206f __x64_sys_mprotect+0x1f ([kernel.kallsyms])
          ffffffff845e6692 do_syscall_64+0x62 ([kernel.kallsyms])
          ffffffff8360012f entry_SYSCALL_64_after_hwframe+0x76 ([kernel.kallsyms])
              7f18fe337fa7 mprotect+0x7 (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
              7f18fe330e0f _dl_sysdep_start+0x7f (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
              7f18fe331448 _dl_start_user+0x0 (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
  ...

The old output can be obtained using the --no-merge-callchain option.
Also, perf report now gets the user callchain entries at the end.

  $ perf report --no-children --stdio -q -S __build_id_parse.isra.0
  # symbol: __build_id_parse.isra.0
       8.40%  pwd      [kernel.kallsyms]
              |
              ---__build_id_parse.isra.0
                 perf_event_mmap
                 mprotect_fixup
                 do_mprotect_pkey
                 __x64_sys_mprotect
                 do_syscall_64
                 entry_SYSCALL_64_after_hwframe
                 mprotect
                 _dl_sysdep_start
                 _dl_start_user

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf script: Display PERF_RECORD_CALLCHAIN_DEFERRED
Namhyung Kim [Thu, 20 Nov 2025 23:48:02 +0000 (15:48 -0800)] 
perf script: Display PERF_RECORD_CALLCHAIN_DEFERRED

Handle the deferred callchains in the script output.

  $ perf script
  ...
  pwd    2312   121.163435:     249113 cpu/cycles/P:
          ffffffff845b78d8 __build_id_parse.isra.0+0x218 ([kernel.kallsyms])
          ffffffff83bb5bf6 perf_event_mmap+0x2e6 ([kernel.kallsyms])
          ffffffff83c31959 mprotect_fixup+0x1e9 ([kernel.kallsyms])
          ffffffff83c31dc5 do_mprotect_pkey+0x2b5 ([kernel.kallsyms])
          ffffffff83c3206f __x64_sys_mprotect+0x1f ([kernel.kallsyms])
          ffffffff845e6692 do_syscall_64+0x62 ([kernel.kallsyms])
          ffffffff8360012f entry_SYSCALL_64_after_hwframe+0x76 ([kernel.kallsyms])
                 b00000006 (cookie) ([unknown])

  pwd    2312   121.163447: DEFERRED CALLCHAIN [cookie: b00000006]
              7f18fe337fa7 mprotect+0x7 (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
              7f18fe330e0f _dl_sysdep_start+0x7f (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)
              7f18fe331448 _dl_start_user+0x0 (/lib/x86_64-linux-gnu/ld-linux-x86-64.so.2)

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf record: Add --call-graph fp,defer option for deferred callchains
Namhyung Kim [Thu, 20 Nov 2025 23:48:01 +0000 (15:48 -0800)] 
perf record: Add --call-graph fp,defer option for deferred callchains

Add a new callchain record mode option for deferred callchains.  For now
it only works with FP (frame-pointer) mode.

And add the missing feature detection logic to clear the flag on old
kernels.

  $ perf record --call-graph fp,defer -vv true
  ...
  ------------------------------------------------------------
  perf_event_attr:
    type                             0 (PERF_TYPE_HARDWARE)
    size                             136
    config                           0 (PERF_COUNT_HW_CPU_CYCLES)
    { sample_period, sample_freq }   4000
    sample_type                      IP|TID|TIME|CALLCHAIN|PERIOD
    read_format                      ID|LOST
    disabled                         1
    inherit                          1
    mmap                             1
    comm                             1
    freq                             1
    enable_on_exec                   1
    task                             1
    sample_id_all                    1
    mmap2                            1
    comm_exec                        1
    ksymbol                          1
    bpf_event                        1
    defer_callchain                  1
    defer_output                     1
  ------------------------------------------------------------
  sys_perf_event_open: pid 162755  cpu 0  group_fd -1  flags 0x8
  sys_perf_event_open failed, error -22
  switching off deferred callchain support

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Minimal DEFERRED_CALLCHAIN support
Namhyung Kim [Thu, 20 Nov 2025 23:48:00 +0000 (15:48 -0800)] 
perf tools: Minimal DEFERRED_CALLCHAIN support

Add a new event type for deferred callchains and a new callback for
struct perf_tool.  For now it doesn't actually handle the deferred
callchains; it just marks the sample if it has
PERF_CONTEXT_USER_DEFERRED in the callchain array.

At least perf report can dump the raw data with this change.  This
actually requires the next commit to enable attr.defer_callchain, but
if you already have a data file, it'll show the following result.

  $ perf report -D
  ...
  0x2158@perf.data [0x40]: event: 22
  .
  . ... raw event: size 64 bytes
  .  0000:  16 00 00 00 02 00 40 00 06 00 00 00 0b 00 00 00  ......@.........
  .  0010:  03 00 00 00 00 00 00 00 a7 7f 33 fe 18 7f 00 00  ..........3.....
  .  0020:  0f 0e 33 fe 18 7f 00 00 48 14 33 fe 18 7f 00 00  ..3.....H.3.....
  .  0030:  08 09 00 00 08 09 00 00 e6 7a e7 35 1c 00 00 00  .........z.5....

  121163447014 0x2158 [0x40]: PERF_RECORD_CALLCHAIN_DEFERRED(IP, 0x2): 2312/2312: 0xb00000006
  ... FP chain: nr:3
  .....  0: 00007f18fe337fa7
  .....  1: 00007f18fe330e0f
  .....  2: 00007f18fe331448
  : unhandled!

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  tools headers UAPI: Sync linux/perf_event.h for deferred callchains
Namhyung Kim [Thu, 20 Nov 2025 23:47:59 +0000 (15:47 -0800)] 
tools headers UAPI: Sync linux/perf_event.h for deferred callchains

Sync with the kernel headers to support the user space changes for
deferred callchains.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Skip optional metrics in metric group list
Ian Rogers [Tue, 2 Dec 2025 17:50:07 +0000 (09:50 -0800)] 
perf jevents: Skip optional metrics in metric group list

For metric groups, skip metrics in the list that are None.  This
allows functions to optionally return metrics.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Drop duplicate pending metrics
Ian Rogers [Tue, 2 Dec 2025 17:50:06 +0000 (09:50 -0800)] 
perf jevents: Drop duplicate pending metrics

Drop adding a pending metric if there is an existing one. Ensure the
PMUs differ for hybrid systems.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Move json encoding to its own functions
Ian Rogers [Tue, 2 Dec 2025 17:50:05 +0000 (09:50 -0800)] 
perf jevents: Move json encoding to its own functions

Have dedicated encode functions rather than having them embedded in
MetricGroup. This is to provide some uniformity in the Metric ToXXX
routines.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Add threshold expressions to Metric
Ian Rogers [Tue, 2 Dec 2025 17:50:04 +0000 (09:50 -0800)] 
perf jevents: Add threshold expressions to Metric

Allow threshold expressions for metrics to be generated.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Term list fix in event parsing
Ian Rogers [Tue, 2 Dec 2025 17:50:03 +0000 (09:50 -0800)] 
perf jevents: Term list fix in event parsing

Fix events seemingly broken apart at a comma.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Support parsing negative exponents
Ian Rogers [Tue, 2 Dec 2025 17:50:02 +0000 (09:50 -0800)] 
perf jevents: Support parsing negative exponents

Support negative exponents when parsing from a json metric string by
making the numbers after the 'e' optional in the 'Event' insertion fix
up.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Allow metric groups not to be named
Ian Rogers [Tue, 2 Dec 2025 17:50:01 +0000 (09:50 -0800)] 
perf jevents: Allow metric groups not to be named

It can be convenient to have unnamed metric groups for the sake of
organizing other metrics and metric groups. An unspecified name
shouldn't contribute to the MetricGroup json value, so don't record
it.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Add descriptions to metricgroup abstraction
Ian Rogers [Tue, 2 Dec 2025 17:50:00 +0000 (09:50 -0800)] 
perf jevents: Add descriptions to metricgroup abstraction

Add a function to recursively generate metric group descriptions.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Update metric constraint support
Ian Rogers [Tue, 2 Dec 2025 17:49:59 +0000 (09:49 -0800)] 
perf jevents: Update metric constraint support

Previous metric constraints were binary, either none or don't group
when the NMI watchdog is present. Update to match the definitions in
'enum metric_event_groups' in pmu-events.h.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jevents: Allow multiple metricgroups.json files
Ian Rogers [Tue, 2 Dec 2025 17:49:58 +0000 (09:49 -0800)] 
perf jevents: Allow multiple metricgroups.json files

Allow multiple metricgroups.json files by handling any file ending
with metricgroups.json as a metricgroups file.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf ilist: Be tolerant of reading a metric on the wrong CPU
Ian Rogers [Tue, 2 Dec 2025 17:49:57 +0000 (09:49 -0800)] 
perf ilist: Be tolerant of reading a metric on the wrong CPU

This happens with hybrid machine metrics.  Be tolerant and don't let
the ilist application crash with an exception.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf python: Correct copying of metric_leader in an evsel
Ian Rogers [Tue, 2 Dec 2025 17:49:56 +0000 (09:49 -0800)] 
perf python: Correct copying of metric_leader in an evsel

Ensure the metric_leader is copied and set up correctly. In
compute_metric determine the correct metric_leader event to match the
requested CPU. Fixes the handling of metrics particularly on hybrid
machines.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf test: Add python JIT dump test
Namhyung Kim [Tue, 25 Nov 2025 08:07:47 +0000 (00:07 -0800)] 
perf test: Add python JIT dump test

Add a test case for the python interpreter like below so that we can
make sure it won't break again.  To validate the effect of build-ID
generation, it adds and removes the JIT'ed DSOs to/from the build-ID
cache for the test.

  $ perf test -vv jitdump
   84: python profiling with jitdump:
  --- start ---
  test child forked, pid 214316
  Run python with -Xperf_jit
  [ perf record: Woken up 5 times to write data ]
  [ perf record: Captured and wrote 1.180 MB /tmp/__perf_test.perf.data.XbqZNm (140 samples) ]
  Generate JIT-ed DSOs using perf inject
  Add JIT-ed DSOs to the build-ID cache
  Check the symbol containing the script name
  Found 108 matching lines
  Remove JIT-ed DSOs from the build-ID cache
  ---- end(0) ----
   84: python profiling with jitdump                                   : Ok

Cc: Pablo Galindo <pablogsal@gmail.com>
Link: https://docs.python.org/3/howto/perf_profiling.html#how-to-work-without-frame-pointers
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf jitdump: Add sym/str-tables to build-ID generation
Namhyung Kim [Tue, 25 Nov 2025 08:07:46 +0000 (00:07 -0800)] 
perf jitdump: Add sym/str-tables to build-ID generation

It was reported that python backtraces with JIT dump were broken after
the change to the built-in SHA-1 implementation.  It seems python
generates the same JIT code for each function.  They become separate
DSOs but the contents are the same; the only difference is in the
symbol name.

But this caused a problem: every JIT'ed DSO gets the same build-ID,
which confuses perf.  And it resulted in no python symbols (from the
JIT) in the output.

Looking back at the original code before the conversion, it used the
load_addr as well as the code section to distinguish each DSO.  But
it'd be better to use the contents of the symtab and strtab instead,
as it aligns with some linker behaviors.

This patch adds a buffer to save all the contents in a single place
for the SHA-1 calculation.  Probably we need to add sha1_update() or
similar to update an existing hash value with different contents and
use it here.  But that's out of scope for this change and I'd like
something that can be backported to the stable trees easily.

Reviewed-by: Ian Rogers <irogers@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Pablo Galindo <pablogsal@gmail.com>
Cc: Fangrui Song <maskray@sourceware.org>
Link: https://github.com/python/cpython/issues/139544
Fixes: e3f612c1d8f3945b ("perf genelf: Remove libcrypto dependency and use built-in sha1()")
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf test: Fix hybrid testing of event fallback test
Ian Rogers [Mon, 1 Dec 2025 23:11:36 +0000 (15:11 -0800)] 
perf test: Fix hybrid testing of event fallback test

The mem-loads-aux event exists on hybrid systems but the "cpu" PMU
does not.  This causes an event parsing error which erroneously makes
the test look like it is failing.  Avoid naming the PMU to prevent
this.  Rather than cleaning up perf.data in the directory the test is
run in, explicitly send the 'perf record' output to /dev/null and
avoid any cleanup scripts.

Fixes: fc9c17b22352 ("perf test: Add a perf event fallback test")
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
2 weeks ago  perf tools: Remove a trailing newline in the event terms
Namhyung Kim [Tue, 2 Dec 2025 23:01:31 +0000 (15:01 -0800)] 
perf tools: Remove a trailing newline in the event terms

So that it can show the correct encoding info in the JSON output.

  $ perf list -j hw
  [
  {
          "Unit": "cpu",
          "Topic": "legacy hardware",
          "EventName": "branch-instructions",
          "EventType": "Kernel PMU event",
          "BriefDescription": "Retired branch instructions [This event is an alias of branches]",
          "Encoding": "cpu/event=0xc4/"
  },
  ...

Reviewed-by: Ian Rogers <irogers@google.com>
Suggested-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf trace: Skip internal syscall arguments
Namhyung Kim [Thu, 27 Nov 2025 04:44:18 +0000 (20:44 -0800)] 
perf trace: Skip internal syscall arguments

Recent changes in the linux-next kernel add a new field to syscall
tracepoints so that they carry contents from userspace, like below.

  # cat /sys/kernel/tracing/events/syscalls/sys_enter_write/format
  name: sys_enter_write
  ID: 758
  format:
          field:unsigned short common_type;       offset:0;       size:2; signed:0;
          field:unsigned char common_flags;       offset:2;       size:1; signed:0;
          field:unsigned char common_preempt_count;       offset:3;       size:1; signed:0;
          field:int common_pid;   offset:4;       size:4; signed:1;

          field:int __syscall_nr; offset:8;       size:4; signed:1;
          field:unsigned int fd;  offset:16;      size:8; signed:0;
          field:const char * buf; offset:24;      size:8; signed:0;
          field:size_t count;     offset:32;      size:8; signed:0;
          field:__data_loc char[] __buf_val;      offset:40;      size:4; signed:0;

  print fmt: "fd: 0x%08lx, buf: 0x%08lx (%s), count: 0x%08lx", ((unsigned long)(REC->fd)),
             ((unsigned long)(REC->buf)), __print_dynamic_array(__buf_val, 1),
             ((unsigned long)(REC->count))

perf trace has a different way to handle those arguments, and this
change confuses it, making some tests fail.  Fix it by skipping the
new fields that have the "__data_loc char[]" type.

Maybe we can switch to this instead of the BPF augmentation later.

Reviewed-by: Howard Chu <howardchu95@gmail.com>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Tested-by: Steven Rostedt (Google) <rostedt@goodmis.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Howard Chu <howardchu95@gmail.com>
Reported-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf tools: Don't read build-ids from non-regular files
James Clark [Mon, 24 Nov 2025 10:59:08 +0000 (10:59 +0000)] 
perf tools: Don't read build-ids from non-regular files

Simplify the build ID reading code by removing the non-blocking option.
Having to pass the correct option to this function was fragile and a
mistake would result in a hang, see the linked fix. Furthermore,
compressed files are always opened blocking anyway, ignoring the
non-blocking option.

We also don't expect to read build IDs from non-regular files. The only
hits to this function that are non-regular are devices that won't be elf
files with build IDs, for example "/dev/dri/renderD129".

Now instead of opening these as non-blocking and failing to read, we
skip them. Even if something like a pipe or character device did have a
build ID, I don't think it would have worked because you need to call
read() in a loop, check for -EAGAIN and handle timeouts to make
non-blocking reads work.

Link: https://lore.kernel.org/linux-perf-users/20251022-james-perf-fix-dso-block-v1-1-c4faab150546@linaro.org/
Signed-off-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf vendor events riscv: add T-HEAD C920V2 JSON support
Inochi Amaoto [Tue, 14 Oct 2025 01:48:29 +0000 (09:48 +0800)] 
perf vendor events riscv: add T-HEAD C920V2 JSON support

T-HEAD C920 has a V2 iteration, which supports Sscofpmf.  The V2
iteration supports the same perf events as V1.

Reuse T-HEAD c900-legacy JSON file for T-HEAD C920V2.

Signed-off-by: Inochi Amaoto <inochiama@gmail.com>
Acked-by: Paul Walmsley <pjw@kernel.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf pmu: fix duplicate conditional statement
Anubhav Shelat [Tue, 25 Nov 2025 11:41:18 +0000 (11:41 +0000)] 
perf pmu: fix duplicate conditional statement

Remove duplicate check for PERF_PMU_TYPE_DRM_END in perf_pmu__kind.

Fixes: f0feb21e0a10 ("perf pmu: Add PMU kind to simplify differentiating")
Signed-off-by: Anubhav Shelat <ashelat@redhat.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Closes: https://lore.kernel.org/linux-perf-users/CA+G8Dh+wLx+FvjjoEkypqvXhbzWEQVpykovzrsHi2_eQjHkzQA@mail.gmail.com/
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf docs: arm-spe: Document new SPE filtering features
James Clark [Tue, 11 Nov 2025 11:37:59 +0000 (11:37 +0000)] 
perf docs: arm-spe: Document new SPE filtering features

FEAT_SPE_EFT, FEAT_SPE_FDS etc. have new user-facing format
attributes, so document them.  Also document the existing
'event_filter' bits that were missing from the doc, and the fact that
latency values are stored in the weight field.

Reviewed-by: Leo Yan <leo.yan@arm.com>
Tested-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: James Clark <james.clark@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  perf tools: Add support for perf_event_attr::config4
James Clark [Tue, 11 Nov 2025 11:37:58 +0000 (11:37 +0000)] 
perf tools: Add support for perf_event_attr::config4

perf_event_attr has gained a new field, config4, so add support for it
extending the existing configN support.

Reviewed-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Tested-by: Leo Yan <leo.yan@arm.com>
Signed-off-by: James Clark <james.clark@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
3 weeks ago  tools headers UAPI: Sync linux/perf_event.h with the kernel sources
James Clark [Tue, 11 Nov 2025 11:37:57 +0000 (11:37 +0000)] 
tools headers UAPI: Sync linux/perf_event.h with the kernel sources

To pick up the config4 changes.

Tested-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: James Clark <james.clark@linaro.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf: replace strcpy() with strncpy() in util/jitdump.c
Hrishikesh Suresh [Thu, 20 Nov 2025 04:16:10 +0000 (23:16 -0500)] 
perf: replace strcpy() with strncpy() in util/jitdump.c

Usage of strcpy() can lead to buffer overflows, so replace it with
strncpy().  The output file path is provided as a parameter and may be
restricted by the command line by default, but this defensive patch
prevents any potential overflow, making the code more robust against
future changes in input handling.

Testing:
- ran perf test from tools/perf and did not observe any regression with
  the earlier code

Signed-off-by: Hrishikesh Suresh <hrishikesh123s@gmail.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf list: Support filtering in JSON output
Namhyung Kim [Thu, 20 Nov 2025 00:47:26 +0000 (16:47 -0800)] 
perf list: Support filtering in JSON output

Like the regular output mode, JSON output should honor the command
line arguments that limit it to certain types of PMUs or events.

  $ perf list -j hw
  [
  {
          "Unit": "cpu",
          "Topic": "legacy hardware",
          "EventName": "branch-instructions",
          "EventType": "Kernel PMU event",
          "BriefDescription": "Retired branch instructions [This event is an alias of branches]",
          "Encoding": "cpu/event=0xc4\n/"
  },
  {
          "Unit": "cpu",
          "Topic": "legacy hardware",
          "EventName": "branch-misses",
          "EventType": "Kernel PMU event",
          "BriefDescription": "Mispredicted branch instructions",
          "Encoding": "cpu/event=0xc5\n/"
  },
  ...

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf list: Share print state with JSON output
Namhyung Kim [Thu, 20 Nov 2025 00:47:25 +0000 (16:47 -0800)] 
perf list: Share print state with JSON output

The JSON print state has only one field that differs (need_sep).  Add
the default print state to the JSON state and use it.  Then the 'ps'
variable can be used to update the state properly.

This is a preparation for the next commit.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf list: Print matching PMU events for --unit
Namhyung Kim [Thu, 20 Nov 2025 00:47:24 +0000 (16:47 -0800)] 
perf list: Print matching PMU events for --unit

When the --unit option is used, pmu_glob is set to the argument.  It
should match the event's PMU and display only the matching events, but
it currently also shows raw events and metrics after that.

  $ perf list --unit tool
  List of pre-defined events (to be used in -e or -M):

  tool:
    core_wide
         [1 if not SMT,if SMT are events being gathered on all SMT threads 1 otherwise 0. Unit: tool]
    duration_time
         [Wall clock interval time in nanoseconds. Unit: tool]
    has_pmem
         [1 if persistent memory installed otherwise 0. Unit: tool]
    num_cores
         [Number of cores. A core consists of 1 or more thread,with each thread being associated with a logical Linux CPU. Unit: tool]
    num_cpus
         [Number of logical Linux CPUs. There may be multiple such CPUs on a core. Unit: tool]
    ...
    rNNN                                               [Raw event descriptor]
    cpu/event=0..255,pc,edge,.../modifier              [Raw event descriptor]
         [(see 'man perf-list' or 'man perf-record' on how to encode it)]
    breakpoint//modifier                               [Raw event descriptor]
    cstate_core/event=0..0xffffffffffffffff/modifier   [Raw event descriptor]
    cstate_pkg/event=0..0xffffffffffffffff/modifier    [Raw event descriptor]
    drm_i915//modifier                                 [Raw event descriptor]
    hwmon_acpitz//modifier                             [Raw event descriptor]
    hwmon_ac//modifier                                 [Raw event descriptor]
    hwmon_bat0//modifier                               [Raw event descriptor]
    hwmon_coretemp//modifier                           [Raw event descriptor]
    ...

  Metric Groups:

  Backend: [Grouping from Top-down Microarchitecture Analysis Metrics spreadsheet]
    tma_core_bound
         [This metric represents fraction of slots where Core non-memory issues were of a bottleneck]
    tma_info_core_ilp
         [Instruction-Level-Parallelism (average number of uops executed when there is execution) per thread (logical-processor)]
    tma_info_memory_l2mpki
         [L2 cache true misses per kilo instruction for retired demand loads]
    ...

This change makes it print the tool PMU events only.

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf test all metrics: Fully ignore Default metric failures
Ian Rogers [Wed, 19 Nov 2025 19:30:47 +0000 (11:30 -0800)] 
perf test all metrics: Fully ignore Default metric failures

Determine if a metric is default from `perf list --raw-dump $m`, e.g.:
```
$ perf list --raw-dump l1_prefetch_miss_rate
Default4 l1_prefetch_miss_rate
```
If a metric fails with "not supported" or "no supported events" then
ignore these failures for default metrics. Tidy up the skip/fail
messages in the output to make them easier to spot and read.

```
$ perf test -vv "all metrics"
...
Testing llc_miss_rate
[Ignored llc_miss_rate] failed but as a Default metric this can be expected
Error: No supported events found. The LLC-loads event is not supported.
...
```

Reported-by: Thomas Richter <tmricht@linux.ibm.com>
Closes: https://lore.kernel.org/linux-perf-users/20251119104751.51960-1-tmricht@linux.ibm.com/
Reported-by: Namhyung Kim <namhyung@kernel.org>
Reported-by: James Clark <james.clark@linaro.org>
Closes: https://lore.kernel.org/lkml/aRi9xnwdLh3Dir9f@google.com/
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf evsel: Skip store_evsel_ids for non-perf-event PMUs
Ian Rogers [Fri, 14 Nov 2025 22:05:47 +0000 (14:05 -0800)] 
perf evsel: Skip store_evsel_ids for non-perf-event PMUs

The IDs are associated with perf events and not applicable to non-perf
event PMUs. The failure to generate the ids was causing perf stat
record to fail.

```
$ perf stat record -a sleep 1

 Performance counter stats for 'system wide':

            47,941      context-switches                 #      nan cs/sec  cs_per_second
              0.00 msec cpu-clock                        #      0.0 CPUs  CPUs_utilized
             3,261      cpu-migrations                   #      nan migrations/sec  migrations_per_second
               516      page-faults                      #      nan faults/sec  page_faults_per_second
         7,525,483      cpu_core/branch-misses/          #      2.3 %  branch_miss_rate
       322,069,004      cpu_core/branches/               #      nan M/sec  branch_frequency
     1,895,684,291      cpu_core/cpu-cycles/             #      nan GHz  cycles_frequency
     2,789,777,426      cpu_core/instructions/           #      1.5 instructions  insn_per_cycle
         7,074,765      cpu_atom/branch-misses/          #      3.2 %  branch_miss_rate         (49.89%)
       224,225,412      cpu_atom/branches/               #      nan M/sec  branch_frequency     (50.29%)
     2,061,679,981      cpu_atom/cpu-cycles/             #      nan GHz  cycles_frequency       (50.33%)
     2,011,242,533      cpu_atom/instructions/           #      1.0 instructions  insn_per_cycle  (50.33%)
             TopdownL1 (cpu_core)                        #      9.0 %  tma_bad_speculation
                                                         #     28.3 %  tma_frontend_bound
                                                         #     35.2 %  tma_backend_bound
                                                         #     27.5 %  tma_retiring
             TopdownL1 (cpu_atom)                        #     36.8 %  tma_backend_bound        (59.65%)
                                                         #     22.8 %  tma_frontend_bound       (59.60%)
                                                         #     11.6 %  tma_bad_speculation
                                                         #     28.8 %  tma_retiring             (59.59%)

       1.006777519 seconds time elapsed

$ perf stat report

 Performance counter stats for 'perf':

     1,013,376,154      duration_time
     <not counted>      duration_time
     <not counted>      duration_time
     <not counted>      duration_time
     <not counted>      duration_time
     <not counted>      duration_time
            47,941      context-switches
              0.00 msec cpu-clock
             3,261      cpu-migrations
               516      page-faults
         7,525,483      cpu_core/branch-misses/
       322,069,814      cpu_core/branches/
       322,069,004      cpu_core/branches/
     1,895,684,291      cpu_core/cpu-cycles/
     1,895,679,209      cpu_core/cpu-cycles/
     2,789,777,426      cpu_core/instructions/
     <not counted>      cpu_core/cpu-cycles/
     <not counted>      cpu_core/stalled-cycles-frontend/
     <not counted>      cpu_core/cpu-cycles/
     <not counted>      cpu_core/stalled-cycles-backend/
     <not counted>      cpu_core/stalled-cycles-backend/
     <not counted>      cpu_core/instructions/
     <not counted>      cpu_core/stalled-cycles-frontend/
         7,074,765      cpu_atom/branch-misses/                                                 (49.89%)
       221,679,088      cpu_atom/branches/                                                      (49.89%)
       224,225,412      cpu_atom/branches/                                                      (50.29%)
     2,061,679,981      cpu_atom/cpu-cycles/                                                    (50.33%)
     2,016,259,567      cpu_atom/cpu-cycles/                                                    (50.33%)
     2,011,242,533      cpu_atom/instructions/                                                  (50.33%)
     <not counted>      cpu_atom/cpu-cycles/
     <not counted>      cpu_atom/stalled-cycles-frontend/
     <not counted>      cpu_atom/cpu-cycles/
     <not counted>      cpu_atom/stalled-cycles-backend/
     <not counted>      cpu_atom/stalled-cycles-backend/
     <not counted>      cpu_atom/instructions/
     <not counted>      cpu_atom/stalled-cycles-frontend/
        17,145,113      cpu_core/INT_MISC.UOP_DROPPING/
    10,594,226,100      cpu_core/TOPDOWN.SLOTS/
     2,919,021,401      cpu_core/topdown-retiring/
       943,101,838      cpu_core/topdown-bad-spec/
     3,031,152,533      cpu_core/topdown-fe-bound/
     3,739,756,791      cpu_core/topdown-be-bound/
     1,909,501,648      cpu_atom/CPU_CLK_UNHALTED.CORE/                                         (60.04%)
     3,516,608,359      cpu_atom/TOPDOWN_BE_BOUND.ALL/                                          (59.65%)
     2,179,403,876      cpu_atom/TOPDOWN_FE_BOUND.ALL/                                          (59.60%)
     2,745,732,458      cpu_atom/TOPDOWN_RETIRING.ALL/                                          (59.59%)

       1.006777519 seconds time elapsed

Some events weren't counted. Try disabling the NMI watchdog:
        echo 0 > /proc/sys/kernel/nmi_watchdog
        perf stat ...
        echo 1 > /proc/sys/kernel/nmi_watchdog
```

Reported-by: James Clark <james.clark@linaro.org>
Closes: https://lore.kernel.org/lkml/ca0f0cd3-7335-48f9-8737-2f70a75b019a@linaro.org/
Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf pmu: Add PMU kind to simplify differentiating
Ian Rogers [Fri, 14 Nov 2025 22:05:46 +0000 (14:05 -0800)] 
perf pmu: Add PMU kind to simplify differentiating

Rather than using separate perf_pmu__is_xxx calls, add a notion of PMU
kind so that a single call can be used.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf header: Switch "cpu" for find_core_pmu in caps feature writing
Ian Rogers [Fri, 14 Nov 2025 22:05:45 +0000 (14:05 -0800)] 
perf header: Switch "cpu" for find_core_pmu in caps feature writing

Writing currently fails on non-x86 and hybrid CPUs. Switch to the more
regular find_core_pmu that is normally used in this case. Tested on a
hybrid Alderlake system.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf test maps: Additional maps__fixup_overlap_and_insert tests
Ian Rogers [Wed, 19 Nov 2025 05:05:55 +0000 (21:05 -0800)] 
perf test maps: Additional maps__fixup_overlap_and_insert tests

Add additional tests to the maps test covering
maps__fixup_overlap_and_insert. Change the test suite to hold more
than just one test.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf maps: Avoid RC_CHK use after free
Ian Rogers [Wed, 19 Nov 2025 05:05:54 +0000 (21:05 -0800)] 
perf maps: Avoid RC_CHK use after free

The case of __maps__fixup_overlap_and_insert where the "new" maps
covers existing mappings can create a use-after-free with reference
count checking enabled. The issue is that "pos" holds a map pointer
from maps_by_address that is put from maps_by_address but then used to
look for a map in maps_by_name (the compared map is now a
use-after-free). The issue stems from using maps__remove which redoes
some of the searches already done by __maps__fixup_overlap_and_insert,
so optimize the code (by avoiding repeated searches) and avoid the
use-after-free by inlining the appropriate removal code.

Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202511141407.f9edcfa6-lkp@intel.com
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf stat: Read tool events last
Ian Rogers [Tue, 18 Nov 2025 21:13:24 +0000 (13:13 -0800)] 
perf stat: Read tool events last

When reading a metric like memory bandwidth on multiple sockets, the
additional sockets will be on CPUs > 0. Because of the affinity
reading, the counters are read on CPU 0 along with the time, then the
later sockets are read. This can lead to the later sockets having a
bandwidth larger than is possible for the period of time. To avoid
this move the reading of tool events to occur after all other events
are read.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Synthesize memory samples for SIMD operations
Leo Yan [Wed, 12 Nov 2025 18:24:43 +0000 (18:24 +0000)] 
perf arm_spe: Synthesize memory samples for SIMD operations

Synthesize memory samples for SIMD operations (including Advanced SIMD,
SVE, and SME). To provide complete information, also generate data
source entries for SIMD operations.

Since memory operations are not limited to load and store, set
PERF_MEM_OP_STORE if the operation does not fall into these cases.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Expose SIMD information in other operations
Leo Yan [Wed, 12 Nov 2025 18:24:42 +0000 (18:24 +0000)] 
perf arm_spe: Expose SIMD information in other operations

The other operations contain SME data processing, ASE (Advanced SIMD)
and floating-point operations. Expose this info in the records.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report GCS in record
Leo Yan [Wed, 12 Nov 2025 18:24:41 +0000 (18:24 +0000)] 
perf arm_spe: Report GCS in record

Report GCS related info in records.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report memset and memcpy in records
Leo Yan [Wed, 12 Nov 2025 18:24:40 +0000 (18:24 +0000)] 
perf arm_spe: Report memset and memcpy in records

Expose memset and memcpy related info in records.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report associated info for SVE / SME operations
Leo Yan [Wed, 12 Nov 2025 18:24:39 +0000 (18:24 +0000)] 
perf arm_spe: Report associated info for SVE / SME operations

SVE / SME operations can be predicated or can be gather loads /
scatter stores; save the relevant info into the record.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report extended memory operations in records
Leo Yan [Wed, 12 Nov 2025 18:24:38 +0000 (18:24 +0000)] 
perf arm_spe: Report extended memory operations in records

Extended memory operations include atomic (AT), acquire/release (AR),
and exclusive (EXCL) operations. Save the relevant information
in the records.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report MTE allocation tag in record
Leo Yan [Wed, 12 Nov 2025 18:24:37 +0000 (18:24 +0000)] 
perf arm_spe: Report MTE allocation tag in record

Save MTE tag info in memory record.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Report register access in record
Leo Yan [Wed, 12 Nov 2025 18:24:36 +0000 (18:24 +0000)] 
perf arm_spe: Report register access in record

Record register access info for load / store operations.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Introduce data processing macro for SVE operations
Leo Yan [Wed, 12 Nov 2025 18:24:35 +0000 (18:24 +0000)] 
perf arm_spe: Introduce data processing macro for SVE operations

Introduce the ARM_SPE_OP_DP (data processing) macro as associated
information for SVE operations. For SVE register access, only
ARM_SPE_OP_SVE is set; for SVE data processing, both ARM_SPE_OP_SVE and
ARM_SPE_OP_DP are set together.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Consolidate operation types
Leo Yan [Wed, 12 Nov 2025 18:24:34 +0000 (18:24 +0000)] 
perf arm_spe: Consolidate operation types

Consolidate operation types in the following way:

(a) Extract the second-level types into separate enums.
(b) The second-level types for memory and SIMD operations are classified
    by modules. E.g., an operation may relate to general register,
    SIMD/FP, SVE, etc.
(c) The associated information gives details, e.g. whether an operation
    is a load or a store, whether it is an atomic operation, etc.

Start the enum items for the second-level types from 8 to accommodate
more entries within a 32-bit integer.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Remove unused operation types
Leo Yan [Wed, 12 Nov 2025 18:24:33 +0000 (18:24 +0000)] 
perf arm_spe: Remove unused operation types

Remove unused SVE operation types. These operations will be reintroduced
in subsequent refactoring, but with a different format.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Decode SME data processing packet
Leo Yan [Wed, 12 Nov 2025 18:24:32 +0000 (18:24 +0000)] 
perf arm_spe: Decode SME data processing packet

For SME data processing, decode its Effective vector length or Tile Size
(ETS), and print out whether it is a floating-point operation.

After:

  .  00000000:  49 00                                           SME-OTHER ETS 1024 FP
  .  00000002:  b2 18 3c d7 83 00 80 ff ff                      VA 0xffff800083d73c18
  .  0000000b:  9a 00 00                                        LAT 0 XLAT
  .  0000000e:  43 00                                           DATA-SOURCE 0

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Decode ASE and FP fields in other operation
Leo Yan [Wed, 12 Nov 2025 18:24:31 +0000 (18:24 +0000)] 
perf arm_spe: Decode ASE and FP fields in other operation

Add a check for the other operation type, which prevents incorrect
classification. Parse the ASE and FP fields.

After:

  .  0000002f:  48 06                                           OTHER ASE FP INSN-OTHER
  .  00000031:  b2 08 80 48 01 08 00 ff ff                      VA 0xffff000801488008
  .  0000003a:  9a 00 00                                        LAT 0 XLAT
  .  0000003d:  42 16                                           EV RETIRED L1D-ACCESS TLB-ACCESS

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Rename SPE_OP_PKT_IS_OTHER_SVE_OP macro
Leo Yan [Wed, 12 Nov 2025 18:24:30 +0000 (18:24 +0000)] 
perf arm_spe: Rename SPE_OP_PKT_IS_OTHER_SVE_OP macro

Rename the macro to SPE_OP_PKT_OTHER_SUBCLASS_SVE to unify naming.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Decode GCS operation
Leo Yan [Wed, 12 Nov 2025 18:24:29 +0000 (18:24 +0000)] 
perf arm_spe: Decode GCS operation

Decode a load or store from a GCS operation and the associated "common"
field.

After:

  .  00000000:  49 44                                           LD GCS COMM
  .  00000002:  b2 18 3c d7 83 00 80 ff ff                      VA 0xffff800083d73c18
  .  0000000b:  9a 00 00                                        LAT 0 XLAT
  .  0000000e:  43 00                                           DATA-SOURCE 0

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Unify operation naming
Leo Yan [Wed, 12 Nov 2025 18:24:28 +0000 (18:24 +0000)] 
perf arm_spe: Unify operation naming

Rename the extended subclass and the SVE/SME register access subclass,
so that the naming is consistent across all subclasses.

Add a log string "SVE-SME-REG" for the SVE/SME register access; this
is easier to parse.

Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf arm_spe: Fix memset subclass in operation
Leo Yan [Wed, 12 Nov 2025 18:24:27 +0000 (18:24 +0000)] 
perf arm_spe: Fix memset subclass in operation

The operation subclass is extracted from bits [7..1] of the payload.
Since bit [0] is not parsed, there is no chance to match the memset type
(0x25). As a result, the memset payload is never parsed successfully.

Instead of extracting a unified bit field, change to extract the
specific bits for each operation subclass.

Fixes: 34fb60400e32 ("perf arm-spe: Add raw decoding for SPEv1.3 MTE and MOPS load/store")
Signed-off-by: Leo Yan <leo.yan@arm.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf tool_pmu: More accurately set the cpus for tool events
Ian Rogers [Thu, 13 Nov 2025 18:05:13 +0000 (10:05 -0800)] 
perf tool_pmu: More accurately set the cpus for tool events

The user and system time events can record on different CPUs, but for
all other events a single CPU map of just CPU 0 makes sense. In
parse-events detect a tool PMU and then pass the perf_event_attr so
that the tool_pmu can return CPUs specific for the event. This avoids
a CPU map of all online CPUs being used for events like
duration_time. Avoiding this avoids the evlist CPUs containing CPUs
for which duration_time just gives 0. Minimizing the evlist CPUs can
remove unnecessary sched_setaffinity syscalls that delay metric
calculations.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf stat: Reduce scope of walltime_nsecs_stats
Ian Rogers [Thu, 13 Nov 2025 18:05:12 +0000 (10:05 -0800)] 
perf stat: Reduce scope of walltime_nsecs_stats

walltime_nsecs_stats is no longer used for counter values, so move it
into stat_config, where it controls certain things like noise
measurement.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf stat: Reduce scope of ru_stats
Ian Rogers [Thu, 13 Nov 2025 18:05:11 +0000 (10:05 -0800)] 
perf stat: Reduce scope of ru_stats

The ru_stats are used to capture user and system time stats when a
process exits. These are then applied to user and system time tool
events if their reads fail due to the process terminating. Reduce the
scope now the metric code no longer reads these values.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf stat-shadow: Read tool events directly
Ian Rogers [Thu, 13 Nov 2025 18:05:10 +0000 (10:05 -0800)] 
perf stat-shadow: Read tool events directly

When reading time values for metrics, don't use the globals updated in
builtin-stat; just read the events as regular events. The only
exception is time events, whose nanoseconds need converting to
seconds, as metrics assume times are in seconds.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf tool_pmu: Use old_count when computing count values for time events
Ian Rogers [Thu, 13 Nov 2025 18:05:09 +0000 (10:05 -0800)] 
perf tool_pmu: Use old_count when computing count values for time events

When running in interval mode every third count of a time event isn't
showing properly:
```
$ perf stat -e duration_time -a -I 1000
     1.001082862      1,002,290,425      duration_time
     2.004264262      1,003,183,516      duration_time
     3.007381401      <not counted>      duration_time
     4.011160141      1,003,705,631      duration_time
     5.014515385      1,003,290,110      duration_time
     6.018539680      <not counted>      duration_time
     7.022065321      1,003,591,720      duration_time
```
The regression came in with a different fix, found through bisection,
commit 68cb1567439f ("perf tool_pmu: Fix aggregation on
duration_time"). The issue is caused by the enabled and running time
of the event matching the old_count's and creating a delta of 0, which
is indicative of an error.

Fixes: 68cb1567439f ("perf tool_pmu: Fix aggregation on duration_time")
Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf pmu: perf_cpu_map__new_int to avoid parsing a string
Ian Rogers [Thu, 13 Nov 2025 18:05:08 +0000 (10:05 -0800)] 
perf pmu: perf_cpu_map__new_int to avoid parsing a string

Prefer perf_cpu_map__new_int(0) to perf_cpu_map__new("0") as it avoids
string parsing.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  libperf cpumap: Reduce allocations and sorting in intersect
Ian Rogers [Thu, 13 Nov 2025 18:05:07 +0000 (10:05 -0800)] 
libperf cpumap: Reduce allocations and sorting in intersect

On hybrid platforms the CPU maps are often disjoint. Rather than
copying CPUs and trimming, compute the number of common CPUs; if there
are none, exit early, otherwise copy in sorted order. This avoids
memory allocation in the disjoint case, and avoids a second malloc and
a useless sort in the previous trim cases.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf stat: Display metric-only for 0 counters
Ian Rogers [Wed, 12 Nov 2025 19:53:11 +0000 (11:53 -0800)] 
perf stat: Display metric-only for 0 counters

0 counters may occur in hypervisor settings but metric-only output is
always expected. This resolves an issue in the "perf stat STD output
linter" test.

Signed-off-by: Ian Rogers <irogers@google.com>
Tested-by: Thomas Falcon <thomas.falcon@intel.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf test: Don't fail if user rdpmc returns 0 when disabled
Ian Rogers [Wed, 12 Nov 2025 19:53:10 +0000 (11:53 -0800)] 
perf test: Don't fail if user rdpmc returns 0 when disabled

In certain hypervisor setups the value 0 may be returned, but this is
only erroneous if user rdpmc isn't disabled.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf parse-events: Add debug logging to perf_event
Ian Rogers [Wed, 12 Nov 2025 19:53:09 +0000 (11:53 -0800)] 
perf parse-events: Add debug logging to perf_event

If verbose is enabled and parse_event is called, typically by tests,
log failures.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf test: Be tolerant of missing json metric none value
Ian Rogers [Wed, 12 Nov 2025 19:53:08 +0000 (11:53 -0800)] 
perf test: Be tolerant of missing json metric none value

print_metric_only_json and print_metric_end in stat-display.c may
create a metric value of "none" which fails validation as isfloat. Add
a helper to properly validate metric numeric values.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
4 weeks ago  perf sample: Fix the wrong format specifier
liujing [Mon, 22 Sep 2025 09:50:57 +0000 (17:50 +0800)] 
perf sample: Fix the wrong format specifier

In the file tools/perf/util/cs-etm.c, queue_nr is of type unsigned
int and should be printed with %u.

Signed-off-by: liujing <liujing@cmss.chinamobile.com>
Reviewed-by: Mike Leach <mike.leach@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks ago  perf script: Fix build by removing unused evsel_script()
James Clark [Fri, 14 Nov 2025 14:06:18 +0000 (14:06 +0000)] 
perf script: Fix build by removing unused evsel_script()

The evsel_script() function is unused since the linked commit. Fix the
build by removing it.

Fixes the following compilation error:

  builtin-script.c:347:36: error: unused function 'evsel_script' [-Werror,-Wunused-function]
  static inline struct evsel_script *evsel_script(struct evsel *evsel)
                                     ^

Fixes: 3622990efaab ("perf script: Change metric format to use json metrics")
Signed-off-by: James Clark <james.clark@linaro.org>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks ago  perf vendor metrics s390: Avoid has_event(INSTRUCTIONS)
Ian Rogers [Wed, 12 Nov 2025 16:24:39 +0000 (08:24 -0800)] 
perf vendor metrics s390: Avoid has_event(INSTRUCTIONS)

The instructions event is now provided in JSON, meaning the has_event
test always succeeds. Switch to using non-legacy event names in the
affected metrics.

Reported-by: Thomas Richter <tmricht@linux.ibm.com>
Closes: https://lore.kernel.org/linux-perf-users/3e80f453-f015-4f4f-93d3-8df6bb6b3c95@linux.ibm.com/
Fixes: 0012e0fa221b ("perf jevents: Add legacy-hardware and legacy-cache json")
Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: Thomas Richter <tmricht@linux.ibm.com>
Tested-by: Thomas Richter <tmricht@linux.ibm.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks ago  perf auxtrace: Remove errno.h from auxtrace.h and fix transitive dependencies
Ian Rogers [Mon, 10 Nov 2025 01:31:52 +0000 (17:31 -0800)] 
perf auxtrace: Remove errno.h from auxtrace.h and fix transitive dependencies

errno.h isn't used in auxtrace.h so remove it and fix build failures
caused by transitive dependencies through auxtrace.h on errno.h.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks ago  perf build: Remove NO_AUXTRACE build option
Ian Rogers [Mon, 10 Nov 2025 01:31:51 +0000 (17:31 -0800)] 
perf build: Remove NO_AUXTRACE build option

The NO_AUXTRACE build option was used when the __get_cpuid feature
test failed or if it was provided on the command line. The option no
longer avoids a dependency on a library, so having it just adds
complexity to the code base. Remove CONFIG_AUXTRACE from Build files
and remove HAVE_AUXTRACE_SUPPORT by assuming it is always defined.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks ago  tool build: Remove __get_cpuid feature test
Ian Rogers [Mon, 10 Nov 2025 01:31:50 +0000 (17:31 -0800)] 
tool build: Remove __get_cpuid feature test

This feature test is no longer used, so remove it.

The function tested by the feature test is used in:
tools/power/x86/x86_energy_perf_policy/x86_energy_perf_policy.c
however, the Makefile just assumes the presence of the function and
doesn't perform a build feature test for it.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf build: Don't add NO_AUXTRACE if missing feature-get_cpuid
Ian Rogers [Mon, 10 Nov 2025 01:31:49 +0000 (17:31 -0800)] 
perf build: Don't add NO_AUXTRACE if missing feature-get_cpuid

The intel-pt code that depended on __get_cpuid is no longer present,
so remove the feature test from Makefile.config.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf intel-pt: Use the perf provided "cpuid.h"
Ian Rogers [Mon, 10 Nov 2025 01:31:48 +0000 (17:31 -0800)] 
perf intel-pt: Use the perf provided "cpuid.h"

Rather than having a feature test and include of <cpuid.h> for the
__get_cpuid function, use the cpuid function provided by
tools/perf/arch/x86/util/cpuid.h.

Signed-off-by: Ian Rogers <irogers@google.com>
Reviewed-by: James Clark <james.clark@linaro.org>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test: Add a perf event fallback test
Zide Chen [Wed, 12 Nov 2025 16:48:23 +0000 (08:48 -0800)] 
perf test: Add a perf event fallback test

This adds test cases to verify the precise ip fallback logic:

- If the system supports precise ip, then for an event given the
  maximum precision level, perf should be able to decrease precise_ip
  to find a supported level.
- The same fallback behavior should also work in more complex
  scenarios, such as event groups or when PEBS is involved.

Additional fallback tests, such as those covering missing feature cases,
can be added in the future.

Suggested-by: Ian Rogers <irogers@google.com>
Reviewed-by: Dapeng Mi <dapeng1.mi@linux.intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Align metric output without events
Namhyung Kim [Thu, 6 Nov 2025 07:28:34 +0000 (23:28 -0800)] 
perf stat: Align metric output without events

One of my concerns with the perf stat output was the alignment of the
metrics and shadow stats.  The basic output length was calculated
using COUNTS_LEN and EVNAME_LEN, but the unit length like "msec" and
the surrounding 2 spaces were missed.  I'm not sure why it's not
printed below though.

But anyway, now it shows correctly aligned metric output.

  $ perf stat true

   Performance counter stats for 'true':

             859,772      task-clock                       #    0.395 CPUs utilized
                   0      context-switches                 #    0.000 /sec
                   0      cpu-migrations                   #    0.000 /sec
                  56      page-faults                      #   65.134 K/sec
           1,075,022      instructions                     #    0.86  insn per cycle
           1,255,911      cycles                           #    1.461 GHz
             220,573      branches                         #  256.548 M/sec
               7,381      branch-misses                    #    3.35% of all branches
                          TopdownL1                        #     19.2 %  tma_retiring
                                                           #     28.6 %  tma_backend_bound
                                                           #      9.5 %  tma_bad_speculation
                                                           #     42.6 %  tma_frontend_bound

         0.002174871 seconds time elapsed                  ^
                                                           |
         0.002154000 seconds user                          |
         0.000000000 seconds sys                          here

Reviewed-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf tool_pmu: Make core_wide and target_cpu json events
Ian Rogers [Tue, 11 Nov 2025 21:22:06 +0000 (13:22 -0800)] 
perf tool_pmu: Make core_wide and target_cpu json events

For the sake of better documentation, add core_wide and target_cpu to
the tool.json. When the values of system_wide and
user_requested_cpu_list are unknown, use the values from the global
stat_config.

Example output showing how '-a' modifies the values in `perf stat`:
```
$ perf stat -e core_wide,target_cpu true

 Performance counter stats for 'true':

                 0      core_wide
                 0      target_cpu

       0.000993787 seconds time elapsed

       0.001128000 seconds user
       0.000000000 seconds sys

$ perf stat -e core_wide,target_cpu -a true

 Performance counter stats for 'system wide':

                 1      core_wide
                 1      target_cpu

       0.002271723 seconds time elapsed

$ perf list
...
tool:
  core_wide
       [1 if not SMT,if SMT are events being gathered on all SMT threads 1 otherwise 0. Unit: tool]
...
  target_cpu
       [1 if CPUs being analyzed,0 if threads/processes. Unit: tool]
...
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat csv: Update test expectations and events
Ian Rogers [Tue, 11 Nov 2025 21:22:05 +0000 (13:22 -0800)] 
perf test stat csv: Update test expectations and events

Explicitly use a metric rather than implicitly expecting '-e
instructions,cycles' to produce a metric. Use a metric with software
events to make it more compatible.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat: Update test expectations and events
Ian Rogers [Tue, 11 Nov 2025 21:22:04 +0000 (13:22 -0800)] 
perf test stat: Update test expectations and events

test_stat_record_report and test_stat_record_script used the default
output, which triggers a bug when sending metrics. As this isn't
relevant to the test, switch to using named software events.

Update the match in test_hybrid as the cycles event is now cpu-cycles
to work around potential ARM issues.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat: Update shadow test to use metrics
Ian Rogers [Tue, 11 Nov 2025 21:22:03 +0000 (13:22 -0800)] 
perf test stat: Update shadow test to use metrics

Previously '-e cycles,instructions' would implicitly create an IPC
metric. This now has to be explicit with '-M insn_per_cycle'.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test metrics: Update all metrics for possibly failing default metrics
Ian Rogers [Tue, 11 Nov 2025 21:22:02 +0000 (13:22 -0800)] 
perf test metrics: Update all metrics for possibly failing default metrics

Default metrics may use unsupported events and be ignored. These
metrics shouldn't cause metric testing to fail.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat: Update std_output testing metric expectations
Ian Rogers [Tue, 11 Nov 2025 21:22:01 +0000 (13:22 -0800)] 
perf test stat: Update std_output testing metric expectations

Make the expectations match json metrics rather than the previous hard
coded ones.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat: Ignore failures in Default[234] metricgroups
Ian Rogers [Tue, 11 Nov 2025 21:22:00 +0000 (13:22 -0800)] 
perf test stat: Ignore failures in Default[234] metricgroups

The Default[234] metric groups may contain unsupported legacy
events. Allow those metric groups to fail.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf test stat+json: Improve metric-only testing
Ian Rogers [Tue, 11 Nov 2025 21:21:59 +0000 (13:21 -0800)] 
perf test stat+json: Improve metric-only testing

When testing metric-only, pass a metric to perf rather than expecting
a hard coded metric value to be generated.

Remove keys that were really metric-only units, and instead don't
expect metric-only output to have a matching json key, as it encodes
metrics as {"metric_name", "metric_value"}.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Remove "unit" workarounds for metric-only
Ian Rogers [Tue, 11 Nov 2025 21:21:58 +0000 (13:21 -0800)] 
perf stat: Remove "unit" workarounds for metric-only

Remove code that tested the "unit" (as in KB/sec) for certain hard
coded metric values and applied workarounds.

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Sort default events/metrics
Ian Rogers [Tue, 11 Nov 2025 21:21:57 +0000 (13:21 -0800)] 
perf stat: Sort default events/metrics

To improve the readability of default events/metrics, sort the evsels
after the Default metric groups have been parsed.

Before:
```
$ perf stat -a sleep 1
 Performance counter stats for 'system wide':

            22,087      context-switches                 #      nan cs/sec  cs_per_second
             TopdownL1 (cpu_core)                 #     10.3 %  tma_bad_speculation
                                                  #     25.8 %  tma_frontend_bound
                                                  #     34.5 %  tma_backend_bound
                                                  #     29.3 %  tma_retiring
             7,829      page-faults                      #      nan faults/sec  page_faults_per_second
       880,144,270      cpu_atom/cpu-cycles/             #      nan GHz  cycles_frequency       (50.10%)
     1,693,081,235      cpu_core/cpu-cycles/             #      nan GHz  cycles_frequency
             TopdownL1 (cpu_atom)                 #     20.5 %  tma_bad_speculation
                                                  #     13.8 %  tma_retiring             (50.26%)
                                                  #     34.6 %  tma_frontend_bound       (50.23%)
        89,326,916      cpu_atom/branches/               #      nan M/sec  branch_frequency     (60.19%)
       538,123,088      cpu_core/branches/               #      nan M/sec  branch_frequency
             1,368      cpu-migrations                   #      nan migrations/sec  migrations_per_second
                                                  #     31.1 %  tma_backend_bound        (60.19%)
              0.00 msec cpu-clock                        #      0.0 CPUs  CPUs_utilized
       485,744,856      cpu_atom/instructions/           #      0.6 instructions  insn_per_cycle  (59.87%)
     3,093,112,283      cpu_core/instructions/           #      1.8 instructions  insn_per_cycle
         4,939,427      cpu_atom/branch-misses/          #      5.0 %  branch_miss_rate         (49.77%)
         7,632,248      cpu_core/branch-misses/          #      1.4 %  branch_miss_rate

       1.005084693 seconds time elapsed
```
After:
```
$ perf stat -a sleep 1
 Performance counter stats for 'system wide':

            22,165      context-switches                 #      nan cs/sec  cs_per_second
              0.00 msec cpu-clock                        #      0.0 CPUs  CPUs_utilized
             2,260      cpu-migrations                   #      nan migrations/sec  migrations_per_second
            20,476      page-faults                      #      nan faults/sec  page_faults_per_second
        17,052,357      cpu_core/branch-misses/          #      1.5 %  branch_miss_rate
     1,120,090,590      cpu_core/branches/               #      nan M/sec  branch_frequency
     3,402,892,275      cpu_core/cpu-cycles/             #      nan GHz  cycles_frequency
     6,129,236,701      cpu_core/instructions/           #      1.8 instructions  insn_per_cycle
         6,159,523      cpu_atom/branch-misses/          #      3.1 %  branch_miss_rate         (49.86%)
       222,158,812      cpu_atom/branches/               #      nan M/sec  branch_frequency     (50.25%)
     1,547,610,244      cpu_atom/cpu-cycles/             #      nan GHz  cycles_frequency       (50.40%)
     1,304,901,260      cpu_atom/instructions/           #      0.8 instructions  insn_per_cycle  (50.41%)
             TopdownL1 (cpu_core)                 #     13.7 %  tma_bad_speculation
                                                  #     23.5 %  tma_frontend_bound
                                                  #     33.3 %  tma_backend_bound
                                                  #     29.6 %  tma_retiring
             TopdownL1 (cpu_atom)                 #     32.1 %  tma_backend_bound        (59.65%)
                                                  #     30.1 %  tma_frontend_bound       (59.51%)
                                                  #     22.3 %  tma_bad_speculation
                                                  #     15.5 %  tma_retiring             (59.53%)

       1.008405429 seconds time elapsed
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Fix default metricgroup display on hybrid
Ian Rogers [Tue, 11 Nov 2025 21:21:56 +0000 (13:21 -0800)] 
perf stat: Fix default metricgroup display on hybrid

The logic to skip output of a default metric line was firing on
Alderlake and not displaying 'TopdownL1 (cpu_atom)'. Remove the
need_full_name check, as it is equivalent to the different-PMU test in
the cases we care about, merge the 'if's, and flip the evsel of the
PMU test. The 'if' now basically says: if the output matches the last
printed output, then skip the output.

Before:
```
             TopdownL1 (cpu_core)                 #     11.3 %  tma_bad_speculation
                                                  #     24.3 %  tma_frontend_bound
             TopdownL1 (cpu_core)                 #     33.9 %  tma_backend_bound
                                                  #     30.6 %  tma_retiring
                                                  #     42.2 %  tma_backend_bound
                                                  #     25.0 %  tma_frontend_bound       (49.81%)
                                                  #     12.8 %  tma_bad_speculation
                                                  #     20.0 %  tma_retiring             (59.46%)
```
After:
```
             TopdownL1 (cpu_core)                 #      8.3 %  tma_bad_speculation
                                                  #     43.7 %  tma_frontend_bound
                                                  #     30.7 %  tma_backend_bound
                                                  #     17.2 %  tma_retiring
             TopdownL1 (cpu_atom)                 #     31.9 %  tma_backend_bound
                                                  #     37.6 %  tma_frontend_bound       (49.66%)
                                                  #     18.0 %  tma_bad_speculation
                                                  #     12.6 %  tma_retiring             (59.58%)
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Remove hard coded shadow metrics
Ian Rogers [Tue, 11 Nov 2025 21:21:55 +0000 (13:21 -0800)] 
perf stat: Remove hard coded shadow metrics

Now that the metrics are encoded in common json the hard coded
printing means the metrics are shown twice. Remove the hard coded
version.

This means that when specifying events, and those events correspond
to a hard coded metric, the metric will no longer be displayed; it
will be displayed only if the metric is requested. The ad hoc printing
of the previous approach was often found frustrating, and the new
approach avoids this.

The default perf stat output on an alderlake now looks like:
```
$ perf stat -a -- sleep 1

 Performance counter stats for 'system wide':

            19,697      context-switches                 #      nan cs/sec  cs_per_second
             TopdownL1 (cpu_core)                 #     10.7 %  tma_bad_speculation
                                                  #     24.9 %  tma_frontend_bound
             TopdownL1 (cpu_core)                 #     34.3 %  tma_backend_bound
                                                  #     30.1 %  tma_retiring
             6,593      page-faults                      #      nan faults/sec  page_faults_per_second
       729,065,658      cpu_atom/cpu-cycles/             #      nan GHz  cycles_frequency       (49.79%)
     1,605,131,101      cpu_core/cpu-cycles/             #      nan GHz  cycles_frequency
                                                  #     19.7 %  tma_bad_speculation
                                                  #     14.2 %  tma_retiring             (50.14%)
                                                  #     37.3 %  tma_frontend_bound       (50.31%)
        87,302,268      cpu_atom/branches/               #      nan M/sec  branch_frequency     (60.27%)
       512,046,956      cpu_core/branches/               #      nan M/sec  branch_frequency
             1,111      cpu-migrations                   #      nan migrations/sec  migrations_per_second
                                                  #     28.8 %  tma_backend_bound        (60.26%)
              0.00 msec cpu-clock                        #      0.0 CPUs  CPUs_utilized
       392,509,323      cpu_atom/instructions/           #      0.6 instructions  insn_per_cycle  (60.19%)
     2,990,369,310      cpu_core/instructions/           #      1.9 instructions  insn_per_cycle
         3,493,478      cpu_atom/branch-misses/          #      5.9 %  branch_miss_rate         (49.69%)
         7,297,531      cpu_core/branch-misses/          #      1.4 %  branch_miss_rate

       1.006621701 seconds time elapsed
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf script: Change metric format to use json metrics
Ian Rogers [Tue, 11 Nov 2025 21:21:54 +0000 (13:21 -0800)] 
perf script: Change metric format to use json metrics

The metric format option isn't properly supported. This change
improves that by making the sample events update the counts of an
evsel, where the shadow metric code expects to read the values.  To
support printing metrics, metrics need to be found. This is done on
the first attempt to print a metric. Every metric is parsed, and then
the evsels in the metric's evlist are compared to those in perf script
using the perf_event_attr type and config. If the metric matches, then
it is added for printing. As an event in the perf script's evlist may
have more than one metric id, or a different leader for aggregation,
the first metric matched will be displayed in those cases.

An example use is:
```
$ perf record -a -e '{instructions,cpu-cycles}:S' -a -- sleep 1
$ perf script -F period,metric
...
     867817
         metric:    0.30  insn per cycle
     125394
         metric:    0.04  insn per cycle
     313516
         metric:    0.11  insn per cycle
         metric:    1.00  insn per cycle
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>
5 weeks agoperf stat: Add detail -d,-dd,-ddd metrics
Ian Rogers [Tue, 11 Nov 2025 21:21:53 +0000 (13:21 -0800)] 
perf stat: Add detail -d,-dd,-ddd metrics

Add metrics for the stat-shadow -d, -dd and -ddd events and hard coded
metrics. Remove the events, as these now come from the metrics.

Following this change a detailed perf stat output looks like:
```
$ perf stat -a -ddd -- sleep 1
 Performance counter stats for 'system wide':

            21,089      context-switches                 #      nan cs/sec  cs_per_second
             TopdownL1 (cpu_core)                 #     14.1 %  tma_bad_speculation
                                                  #     27.3 %  tma_frontend_bound       (30.56%)
             TopdownL1 (cpu_core)                 #     31.5 %  tma_backend_bound
                                                  #     27.2 %  tma_retiring             (30.56%)
             6,302      page-faults                      #      nan faults/sec  page_faults_per_second
       928,495,163      cpu_atom/cpu-cycles/
                                                  #      nan GHz  cycles_frequency       (28.41%)
     1,841,409,834      cpu_core/cpu-cycles/
                                                  #      nan GHz  cycles_frequency       (38.51%)
                                                  #     14.5 %  tma_bad_speculation
                                                  #     16.0 %  tma_retiring             (28.41%)
                                                  #     36.8 %  tma_frontend_bound       (35.57%)
       100,859,118      cpu_atom/branches/               #      nan M/sec  branch_frequency     (42.73%)
       572,657,734      cpu_core/branches/               #      nan M/sec  branch_frequency     (54.43%)
             1,527      cpu-migrations                   #      nan migrations/sec  migrations_per_second
                                                  #     32.7 %  tma_backend_bound        (42.73%)
              0.00 msec cpu-clock                        #    0.000 CPUs utilized
                                                  #      0.0 CPUs  CPUs_utilized
       498,668,509      cpu_atom/instructions/           #    0.57  insn per cycle
                                                  #      0.6 instructions  insn_per_cycle  (42.97%)
     3,281,762,225      cpu_core/instructions/           #    1.84  insn per cycle
                                                  #      1.8 instructions  insn_per_cycle  (62.20%)
         4,919,511      cpu_atom/branch-misses/          #    5.43% of all branches
                                                  #      5.4 %  branch_miss_rate         (35.80%)
         7,431,776      cpu_core/branch-misses/          #    1.39% of all branches
                                                  #      1.4 %  branch_miss_rate         (62.20%)
         2,517,007      cpu_atom/LLC-loads/              #      0.1 %  llc_miss_rate            (28.62%)
         3,931,318      cpu_core/LLC-loads/              #     40.4 %  llc_miss_rate            (45.98%)
        14,918,674      cpu_core/L1-dcache-load-misses/  #    2.25% of all L1-dcache accesses
                                                  #      nan %  l1d_miss_rate            (37.80%)
        27,067,264      cpu_atom/L1-icache-load-misses/  #   15.92% of all L1-icache accesses
                                                  #     15.9 %  l1i_miss_rate            (21.47%)
       116,848,994      cpu_atom/dTLB-loads/             #      0.8 %  dtlb_miss_rate           (21.47%)
       764,870,407      cpu_core/dTLB-loads/             #      0.1 %  dtlb_miss_rate           (15.12%)

       1.006181526 seconds time elapsed
```

Signed-off-by: Ian Rogers <irogers@google.com>
Signed-off-by: Namhyung Kim <namhyung@kernel.org>