From: Greg Kroah-Hartman
Date: Thu, 23 Sep 2021 08:06:34 +0000 (+0200)
Subject: 4.19-stable patches
X-Git-Tag: v4.4.285~63
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=75f724fc4cb6b47f9241d971cbd722f4b5ffdf51;p=thirdparty%2Fkernel%2Fstable-queue.git

4.19-stable patches

added patches:
      kvm-remember-position-in-kvm-vcpus-array.patch
      rcu-fix-missed-wakeup-of-exp_wq-waiters.patch
---

diff --git a/queue-4.19/kvm-remember-position-in-kvm-vcpus-array.patch b/queue-4.19/kvm-remember-position-in-kvm-vcpus-array.patch
new file mode 100644
index 00000000000..c718a4b9eb1
--- /dev/null
+++ b/queue-4.19/kvm-remember-position-in-kvm-vcpus-array.patch
@@ -0,0 +1,77 @@
+From 8750e72a79dda2f665ce17b62049f4d62130d991 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Radim=20Kr=C4=8Dm=C3=A1=C5=99?=
+Date: Thu, 7 Nov 2019 07:53:42 -0500
+Subject: KVM: remember position in kvm->vcpus array
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Radim Krčmář
+
+commit 8750e72a79dda2f665ce17b62049f4d62130d991 upstream.
+
+Fetching the index of any vcpu in the kvm->vcpus array by traversing
+the entire array every time is costly.
+This patch remembers the position of each vcpu in the kvm->vcpus array
+by storing it in the vcpu_idx field of the kvm_vcpu structure.
+
+Signed-off-by: Radim Krčmář
+Signed-off-by: Nitesh Narayan Lal
+Signed-off-by: Paolo Bonzini
+[borntraeger@de.ibm.com]: backport to 4.19 (also fits for 5.4)
+Signed-off-by: Christian Borntraeger
+Acked-by: Paolo Bonzini
+Signed-off-by: Greg Kroah-Hartman
+---
+ include/linux/kvm_host.h |   11 +++--------
+ virt/kvm/kvm_main.c      |    5 +++--
+ 2 files changed, 6 insertions(+), 10 deletions(-)
+
+--- a/include/linux/kvm_host.h
++++ b/include/linux/kvm_host.h
+@@ -248,7 +248,8 @@ struct kvm_vcpu {
+         struct preempt_notifier preempt_notifier;
+ #endif
+         int cpu;
+-        int vcpu_id;
++        int vcpu_id; /* id given by userspace at creation */
++        int vcpu_idx; /* index in kvm->vcpus array */
+         int srcu_idx;
+         int mode;
+         u64 requests;
+@@ -551,13 +552,7 @@ static inline struct kvm_vcpu *kvm_get_v
+ 
+ static inline int kvm_vcpu_get_idx(struct kvm_vcpu *vcpu)
+ {
+-        struct kvm_vcpu *tmp;
+-        int idx;
+-
+-        kvm_for_each_vcpu(idx, tmp, vcpu->kvm)
+-                if (tmp == vcpu)
+-                        return idx;
+-        BUG();
++        return vcpu->vcpu_idx;
+ }
+ 
+ #define kvm_for_each_memslot(memslot, slots) \
+--- a/virt/kvm/kvm_main.c
++++ b/virt/kvm/kvm_main.c
+@@ -2751,7 +2751,8 @@ static int kvm_vm_ioctl_create_vcpu(stru
+                 goto unlock_vcpu_destroy;
+         }
+ 
+-        BUG_ON(kvm->vcpus[atomic_read(&kvm->online_vcpus)]);
++        vcpu->vcpu_idx = atomic_read(&kvm->online_vcpus);
++        BUG_ON(kvm->vcpus[vcpu->vcpu_idx]);
+ 
+         /* Now it's all set up, let userspace reach it */
+         kvm_get_kvm(kvm);
+@@ -2761,7 +2762,7 @@ static int kvm_vm_ioctl_create_vcpu(stru
+                 goto unlock_vcpu_destroy;
+         }
+ 
+-        kvm->vcpus[atomic_read(&kvm->online_vcpus)] = vcpu;
++        kvm->vcpus[vcpu->vcpu_idx] = vcpu;
+ 
+         /*
+          * Pairs with smp_rmb() in kvm_get_vcpu. Write kvm->vcpus
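
The hunks above cache each vCPU's slot in kvm->vcpus at creation time, so
kvm_vcpu_get_idx() becomes a constant-time field read instead of a linear
scan. The stand-alone C sketch below illustrates that before/after trade-off
outside the kernel; struct toy_vcpu, struct toy_vm and TOY_MAX_VCPUS are
made-up stand-ins for kvm_vcpu, kvm and the real vCPU limit, so treat it as
an illustration of the idea, not kernel code.

/*
 * Toy model (user space, not kernel code): why caching the index beats
 * rescanning the array on every lookup.
 */
#include <assert.h>
#include <stdio.h>

#define TOY_MAX_VCPUS 8 /* stand-in for the real vCPU limit */

struct toy_vcpu {
        int vcpu_id;  /* id chosen by userspace at creation */
        int vcpu_idx; /* position in the vm->vcpus[] array */
};

struct toy_vm {
        struct toy_vcpu *vcpus[TOY_MAX_VCPUS];
        int online_vcpus;
};

/* Old approach: O(n) scan of the array on every call. */
static int get_idx_by_scan(struct toy_vm *vm, struct toy_vcpu *vcpu)
{
        for (int i = 0; i < vm->online_vcpus; i++)
                if (vm->vcpus[i] == vcpu)
                        return i;
        return -1; /* the kernel BUG()s here instead of returning an error */
}

/* New approach: O(1), the index was recorded at creation time. */
static int get_idx_cached(struct toy_vcpu *vcpu)
{
        return vcpu->vcpu_idx;
}

/* Mirrors the creation path: record the slot, then publish the vcpu. */
static void create_vcpu(struct toy_vm *vm, struct toy_vcpu *vcpu, int id)
{
        assert(vm->online_vcpus < TOY_MAX_VCPUS);
        vcpu->vcpu_id = id;
        vcpu->vcpu_idx = vm->online_vcpus;
        vm->vcpus[vcpu->vcpu_idx] = vcpu;
        vm->online_vcpus++;
}

int main(void)
{
        struct toy_vm vm = { 0 };
        struct toy_vcpu a, b;

        create_vcpu(&vm, &a, 100); /* vcpu_id need not equal vcpu_idx */
        create_vcpu(&vm, &b, 7);

        assert(get_idx_by_scan(&vm, &b) == get_idx_cached(&b));
        printf("b: vcpu_id=%d vcpu_idx=%d\n", b.vcpu_id, b.vcpu_idx);
        return 0;
}

As in kvm_vm_ioctl_create_vcpu(), the index is recorded before the vCPU
pointer is published, which is what lets the lookup stay a plain field read.
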
diff --git a/queue-4.19/rcu-fix-missed-wakeup-of-exp_wq-waiters.patch b/queue-4.19/rcu-fix-missed-wakeup-of-exp_wq-waiters.patch
new file mode 100644
index 00000000000..d7f74c10065
--- /dev/null
+++ b/queue-4.19/rcu-fix-missed-wakeup-of-exp_wq-waiters.patch
@@ -0,0 +1,99 @@
+From fd6bc19d7676a060a171d1cf3dcbf6fd797eb05f Mon Sep 17 00:00:00 2001
+From: Neeraj Upadhyay
+Date: Tue, 19 Nov 2019 03:17:07 +0000
+Subject: rcu: Fix missed wakeup of exp_wq waiters
+
+From: Neeraj Upadhyay
+
+commit fd6bc19d7676a060a171d1cf3dcbf6fd797eb05f upstream.
+
+Tasks waiting within exp_funnel_lock() for an expedited grace period to
+elapse can be starved due to the following sequence of events:
+
+1.      Tasks A and B both attempt to start an expedited grace
+        period at about the same time. This grace period will have
+        completed when the lower four bits of the rcu_state structure's
+        ->expedited_sequence field are 0b'0100', for example, when the
+        initial value of this counter is zero. Task A wins, and thus
+        does the actual work of starting the grace period, including
+        acquiring the rcu_state structure's ->exp_mutex and setting the
+        counter to 0b'0001'.
+
+2.      Because task B lost the race to start the grace period, it
+        waits on ->expedited_sequence to reach 0b'0100' inside of
+        exp_funnel_lock(). This task therefore blocks on the rcu_node
+        structure's ->exp_wq[1] field, keeping in mind that the
+        end-of-grace-period value of ->expedited_sequence (0b'0100')
+        is shifted down two bits before indexing the ->exp_wq[] field.
+
+3.      Task C attempts to start another expedited grace period,
+        but blocks on ->exp_mutex, which is still held by Task A.
+
+4.      The aforementioned expedited grace period completes, so that
+        ->expedited_sequence now has the value 0b'0100'. A kworker task
+        therefore acquires the rcu_state structure's ->exp_wake_mutex
+        and starts awakening any tasks waiting for this grace period.
+
+5.      One of the first tasks awakened happens to be Task A. Task A
+        therefore releases the rcu_state structure's ->exp_mutex,
+        which allows Task C to start the next expedited grace period,
+        which causes the lower four bits of the rcu_state structure's
+        ->expedited_sequence field to become 0b'0101'.
+
+6.      Task C's expedited grace period completes, so that the lower four
+        bits of the rcu_state structure's ->expedited_sequence field now
+        become 0b'1000'.
+
+7.      The kworker task from step 4 above continues its wakeups.
+        Unfortunately, the wake_up_all() refetches the rcu_state
+        structure's ->expedited_sequence field:
+
+        wake_up_all(&rnp->exp_wq[rcu_seq_ctr(rcu_state.expedited_sequence) & 0x3]);
+
+        This results in the wakeup being applied to the rcu_node
+        structure's ->exp_wq[2] field, which is unfortunate given that
+        Task B is instead waiting on ->exp_wq[1].
+
+On a busy system, no harm is done (or at least no permanent harm is done).
+Some later expedited grace period will redo the wakeup. But on a quiet
+system, such as many embedded systems, it might be a good long time before
+there is another expedited grace period. On such embedded systems,
+this situation could therefore result in a system hang.
+
+This issue manifested as a DPM device timeout during suspend (which
+usually qualifies as a quiet time) due to a SCSI device being stuck in
+_synchronize_rcu_expedited(), with the following stack trace:
+
+        schedule()
+        synchronize_rcu_expedited()
+        synchronize_rcu()
+        scsi_device_quiesce()
+        scsi_bus_suspend()
+        dpm_run_callback()
+        __device_suspend()
+
+This commit therefore prevents such delays, timeouts, and hangs by
+making rcu_exp_wait_wake() use its "s" argument consistently instead of
+refetching from rcu_state.expedited_sequence.
+
+Fixes: 3b5f668e715b ("rcu: Overlap wakeups with next expedited grace period")
+Signed-off-by: Neeraj Upadhyay
+Signed-off-by: Paul E. McKenney
+Signed-off-by: David Chen
+Acked-by: Neeraj Upadhyay
+Signed-off-by: Greg Kroah-Hartman
+---
+ kernel/rcu/tree_exp.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/rcu/tree_exp.h
++++ b/kernel/rcu/tree_exp.h
+@@ -613,7 +613,7 @@ static void rcu_exp_wait_wake(struct rcu
+                         spin_unlock(&rnp->exp_lock);
+                 }
+                 smp_mb(); /* All above changes before wakeup. */
+-                wake_up_all(&rnp->exp_wq[rcu_seq_ctr(rsp->expedited_sequence) & 0x3]);
++                wake_up_all(&rnp->exp_wq[rcu_seq_ctr(s) & 0x3]);
+         }
+         trace_rcu_exp_grace_period(rsp->name, s, TPS("endwake"));
+         mutex_unlock(&rsp->exp_wake_mutex);
diff --git a/queue-4.19/series b/queue-4.19/series
index a206d1de2f4..c387ae0b652 100644
--- a/queue-4.19/series
+++ b/queue-4.19/series
@@ -1 +1,3 @@
 s390-bpf-fix-optimizing-out-zero-extensions.patch
+kvm-remember-position-in-kvm-vcpus-array.patch
+rcu-fix-missed-wakeup-of-exp_wq-waiters.patch
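
For reference, the exp_wq[] bucketing that the RCU changelog walks through
can be reproduced with a short stand-alone sketch. rcu_seq_ctr() below
mirrors the kernel helper of that name (the low two bits of the expedited
sequence are grace-period state, the rest is a counter), and the values 0x4
and 0x8 are the 0b'0100' and 0b'1000' snapshots from steps 2 and 6 above;
everything else is illustrative, not the kernel implementation.

/* Toy model (user space) of the wait-queue indexing described above. */
#include <stdio.h>

#define RCU_SEQ_CTR_SHIFT 2 /* low two bits of the sequence are state */

static unsigned long rcu_seq_ctr(unsigned long s)
{
        return s >> RCU_SEQ_CTR_SHIFT;
}

/* Each rcu_node has four wait queues; a grace period maps to one of them. */
static unsigned long exp_wq_index(unsigned long s)
{
        return rcu_seq_ctr(s) & 0x3;
}

int main(void)
{
        unsigned long s = 0x4;         /* snapshot the waiter blocked on (0b'0100') */
        unsigned long refetched = 0x8; /* sequence after the next GP (0b'1000') */

        /* Task B sleeps on the bucket derived from its snapshot... */
        printf("waiter sleeps on exp_wq[%lu]\n", exp_wq_index(s));
        /* ...but the buggy waker recomputed the bucket from the live counter... */
        printf("buggy wakeup hits exp_wq[%lu]\n", exp_wq_index(refetched));
        /* ...while the fixed waker reuses the same snapshot "s". */
        printf("fixed wakeup hits exp_wq[%lu]\n", exp_wq_index(s));
        return 0;
}

That off-by-one-bucket wakeup is why the one-line change above passes "s" to
rcu_seq_ctr() instead of rereading the live sequence counter.
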