--- /dev/null
+From 4ac19ead0dfbabd8e0bfc731f507cfb0b95d6c99 Mon Sep 17 00:00:00 2001
+From: Aaron Lewis <aaronlewis@google.com>
+Date: Tue, 17 May 2022 05:12:36 +0000
+Subject: kvm: x86/pmu: Fix the compare function used by the pmu event filter
+
+From: Aaron Lewis <aaronlewis@google.com>
+
+commit 4ac19ead0dfbabd8e0bfc731f507cfb0b95d6c99 upstream.
+
+When returning from the compare function, the u64 result is truncated
+to an int. This loses the high nybble[1] of the event select, and its
+sign if that nybble is in use. Switch from a result that can end up
+being truncated to a result that can only be 1, 0, or -1.
+
+[1] bits 35:32 in the event select register, i.e. bits 11:8 of the
+    event select value.
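+
+As a minimal illustration (a standalone userspace sketch, not part of
+this patch, using made-up filter values), the truncation can report two
+different event selects as equal and can invert the sign of the result:
+
+  #include <stdint.h>
+  #include <stdio.h>
+
+  typedef uint64_t u64;
+
+  /* Old comparator: the u64 difference is truncated to an int. */
+  static int cmp_u64_buggy(const void *a, const void *b)
+  {
+          return *(u64 *)a - *(u64 *)b;
+  }
+
+  /* New comparator: the result can only be 1, 0, or -1. */
+  static int cmp_u64(const void *pa, const void *pb)
+  {
+          u64 a = *(u64 *)pa;
+          u64 b = *(u64 *)pb;
+
+          return (a > b) - (a < b);
+  }
+
+  int main(void)
+  {
+          /* Differ only in bit 32, the high nybble of the event select. */
+          u64 x = 0x100000003ULL, y = 0x000000003ULL;
+
+          printf("%d\n", cmp_u64_buggy(&x, &y)); /* 0: falsely "equal"  */
+          printf("%d\n", cmp_u64(&x, &y));       /* 1: correctly x > y  */
+
+          /* x > y, but the truncated difference 0xffffffff reads as -1. */
+          x = 0x100000000ULL;
+          y = 0x000000001ULL;
+          printf("%d\n", cmp_u64_buggy(&x, &y)); /* -1: wrong sign      */
+          printf("%d\n", cmp_u64(&x, &y));       /* 1: correctly x > y  */
+
+          return 0;
+  }
+
+The fixed comparator relies on (a > b) and (a < b) each evaluating to
+0 or 1, so their difference is exactly 1, 0, or -1 and cannot overflow
+an int.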
+
+Fixes: 7ff775aca48ad ("KVM: x86/pmu: Use binary search to check filtered events")
+Signed-off-by: Aaron Lewis <aaronlewis@google.com>
+Reviewed-by: Sean Christopherson <seanjc@google.com>
+Message-Id: <20220517051238.2566934-1-aaronlewis@google.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kvm/pmu.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+--- a/arch/x86/kvm/pmu.c
++++ b/arch/x86/kvm/pmu.c
+@@ -170,9 +170,12 @@ static bool pmc_resume_counter(struct kv
+ 	return true;
+ }
+ 
+-static int cmp_u64(const void *a, const void *b)
++static int cmp_u64(const void *pa, const void *pb)
+ {
+-	return *(__u64 *)a - *(__u64 *)b;
++	u64 a = *(u64 *)pa;
++	u64 b = *(u64 *)pb;
++
++	return (a > b) - (a < b);
+ }
+
+ void reprogram_gp_counter(struct kvm_pmc *pmc, u64 eventsel)