perf/x86: Fix full width counter, counter overflow
author Peter Zijlstra (Intel) <peterz@infradead.org>
Tue, 29 Nov 2016 20:33:28 +0000 (20:33 +0000)
committer Jiri Slaby <jslaby@suse.cz>
Thu, 26 Jan 2017 16:22:17 +0000 (17:22 +0100)
commit 7f612a7f0bc13a2361a152862435b7941156b6af upstream.

Lukasz reported that perf stat counter overflow handling is broken on
KNL/SLM (Knights Landing / Silvermont).

Both these parts have full_width_write set, and that mode does indeed
have a problem. In order to deal with counter wrap, we must sample the
counter at least once every half counter period (see also the sampling
theorem) so that we can unambiguously reconstruct the count.

However, commit:

  069e0c3c4058 ("perf/x86/intel: Support full width counting")

sets the sampling interval to the full period, not half.
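
To see why this matters, here is a minimal user-space sketch. It is
only an illustration: the 48-bit width and all raw values are made up
and are not taken from KNL/SLM. Two raw reads of a wrapping counter
determine the elapsed count only modulo the counter period, so the
count can be reconstructed unambiguously only if the counter is
guaranteed to be resampled well within one period:

#include <stdio.h>
#include <stdint.h>

/* Toy model of reading a free-running 48-bit counter. */
#define CNTVAL_BITS	48
#define CNTVAL_MASK	((1ULL << CNTVAL_BITS) - 1)

/* Reconstruct how far the counter advanced between two raw reads. */
static uint64_t delta(uint64_t prev_raw, uint64_t new_raw)
{
	return (new_raw - prev_raw) & CNTVAL_MASK;
}

int main(void)
{
	uint64_t prev = 0xfffffffffff0ULL;	/* arbitrary previous read */

	/* An advance of 0x20 events and an advance of 2^48 + 0x20 events
	 * produce the very same raw reading, so if the interval between
	 * samples can reach a full period the count is ambiguous. */
	printf("advance 0x20        -> delta %#llx\n",
	       (unsigned long long)delta(prev, (prev + 0x20) & CNTVAL_MASK));
	printf("advance 2^48 + 0x20 -> delta %#llx\n",
	       (unsigned long long)delta(prev,
			(prev + (1ULL << 48) + 0x20) & CNTVAL_MASK));

	/* Capping max_period at cntval_mask >> 1, as the second hunk below
	 * does, keeps the interval between samples within half a period,
	 * so the real advance stays below 2^47 and the modular delta above
	 * identifies it uniquely. */
	return 0;
}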

Fixing that exposes another issue: we must not sign-extend the delta
value when we shift it right; the counter cannot have decremented,
after all.
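
Concretely, here is a hedged sketch of the shift arithmetic used by
x86_perf_event_update(), again with an illustrative 48-bit width and
made-up raw values (the out-of-range conversion and the right shift of
a negative value are implementation-defined in ISO C, but behave as
shown with the compilers used to build the kernel on x86):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	const int cntval_bits = 48;		/* illustrative width */
	const int shift = 64 - cntval_bits;

	/* Raw reads of a 48-bit counter that advanced by 2^47 + 16 events,
	 * wrapping past zero on the way. */
	uint64_t prev_raw_count = 0x800000000000ULL;	/* 2^47 */
	uint64_t new_raw_count  = 0x000000000010ULL;

	/* Old code: signed delta, so the right shift sign-extends. */
	int64_t sdelta = (new_raw_count << shift) - (prev_raw_count << shift);
	sdelta >>= shift;

	/* Fixed code: unsigned delta, so the right shift fills with zeroes. */
	uint64_t udelta = (new_raw_count << shift) - (prev_raw_count << shift);
	udelta >>= shift;

	/* Prints 0xffff800000000010: sign-extended into a bogus huge value
	 * once it is added to the unsigned event count. */
	printf("signed   delta: %#llx\n", (unsigned long long)sdelta);

	/* Prints 0x800000000010: the number of events that really elapsed;
	 * the counter can only have moved forward. */
	printf("unsigned delta: %#llx\n", (unsigned long long)udelta);
	return 0;
}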

With both of these issues fixed, counter overflow handling works
correctly again.

Reported-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
Tested-by: Liang, Kan <kan.liang@intel.com>
Tested-by: Odzioba, Lukasz <lukasz.odzioba@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 069e0c3c4058 ("perf/x86/intel: Support full width counting")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
arch/x86/kernel/cpu/perf_event.c
arch/x86/kernel/cpu/perf_event_intel.c

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 0271272d55d04a8ddd04a06c0f48193b3cb40811..050784bcd71ffa5b0885673c5f25cf98d56ad010 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -64,7 +64,7 @@ u64 x86_perf_event_update(struct perf_event *event)
        int shift = 64 - x86_pmu.cntval_bits;
        u64 prev_raw_count, new_raw_count;
        int idx = hwc->idx;
-       s64 delta;
+       u64 delta;
 
        if (idx == INTEL_PMC_IDX_FIXED_BTS)
                return 0;
diff --git a/arch/x86/kernel/cpu/perf_event_intel.c b/arch/x86/kernel/cpu/perf_event_intel.c
index 04e7df068f0e57e60c6caa0b39dde1d1c7ea50bb..0c6527a168f07a02ea32e21e07e6ee54fa03ba4f 100644
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2578,7 +2578,7 @@ __init int intel_pmu_init(void)
 
        /* Support full width counters using alternative MSR range */
        if (x86_pmu.intel_cap.full_width_write) {
-               x86_pmu.max_period = x86_pmu.cntval_mask;
+               x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
                x86_pmu.perfctr = MSR_IA32_PMC0;
                pr_cont("full-width counters, ");
        }