From 206b92353c839c0b27a0b9bec24195f93fd6cf7a Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Tue, 26 Mar 2019 17:36:05 +0100
Subject: cpu/hotplug: Prevent crash when CPU bringup fails on CONFIG_HOTPLUG_CPU=n

From: Thomas Gleixner <tglx@linutronix.de>

commit 206b92353c839c0b27a0b9bec24195f93fd6cf7a upstream.

Tianyu reported a crash in a CPU hotplug teardown callback when booting a
kernel which has CONFIG_HOTPLUG_CPU disabled with the 'nosmt' boot
parameter.

It turns out that the SMP=y CONFIG_HOTPLUG_CPU=n case has been broken
forever whenever a bringup callback fails. Unfortunately this issue was
not recognized when the CPU hotplug code was reworked, so the shortcoming
just stayed in place.

When a bringup callback fails, the CPU hotplug code rolls back the
operation and takes the CPU offline.
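
For illustration, the rollback is a plain walk back down the hotplug state
machine. A minimal sketch of the idea, simplified from the undo_cpu_up()
helper in kernel/cpu.c:

	/* Unwind every state that was already brought up, invoking the
	 * teardown callback for each state on the way back down.
	 */
	static void undo_cpu_up(unsigned int cpu, struct cpuhp_cpu_state *st)
	{
		for (st->state--; st->state > st->target; st->state--)
			cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
	}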

The 'nosmt' command line argument uses a bringup failure to abort the
bringup of SMT sibling CPUs. This partial bringup is required due to the
MCE misdesign on Intel CPUs.
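
A sketch of how that intentional failure looks in the bringup path, modeled
on the cpu_smt_allowed() check in kernel/cpu.c (the surrounding wait-for-AP
function is elided):

	/* The freshly booted sibling is refused when SMT is disabled;
	 * returning an error makes the bringup "fail" and triggers the
	 * rollback described above.
	 */
	if (!cpu_smt_allowed(cpu))
		return -ECANCELED;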

With CONFIG_HOTPLUG_CPU=y the rollback works perfectly fine, but
CONFIG_HOTPLUG_CPU=n lacks the essential mechanisms to exercise the
low-level teardown of a CPU, including the synchronizations in various
facilities like RCU, NOHZ and others.
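
The structural reason is that the real takedown machinery in kernel/cpu.c
is only compiled in with hotplug support; roughly (a simplified sketch, not
the verbatim source):

	#ifdef CONFIG_HOTPLUG_CPU
	/* The stop_machine() based teardown, interrupt migration and the
	 * RCU/NOHZ synchronization live only inside this block.
	 */
	static int takedown_cpu(unsigned int cpu);
	#endif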

As a consequence the teardown callbacks, which must be executed on the
outgoing CPU within stop machine with interrupts disabled, are executed on
the control CPU in an interrupt-enabled and preemptible context, causing
the kernel to crash and burn. The pre-state-machine code has a different
failure mode which is more subtle, resulting in a less obvious
use-after-free crash because the control side frees resources which are
still in use by the undead CPU.

But this is not an x86-only problem. Any architecture which supports the
SMP=y HOTPLUG_CPU=n combination suffers from the same issue. It's just
less likely to be triggered because in 99.99999% of the cases all bringup
callbacks succeed.

The easy solution of making HOTPLUG_CPU mandatory for SMP does not work on
all architectures, as the following architectures have either no hotplug
support at all or not all subarchitectures support it:

 alpha, arc, hexagon, openrisc, riscv, sparc (32bit), mips (partial).

Crashing the kernel in such a situation is not an acceptable state
either.

Implement a minimal rollback variant by limiting the teardown to the point
where all regular teardown callbacks have been invoked and leave the CPU in
the 'dead' idle state. This has the following consequences (see the sketch
after this list):

 - the CPU is brought down to the point where the stop_machine takedown
   would happen.

 - the CPU stays there forever and is idle

 - The CPU is cleared in the CPU active mask, but not in the CPU online
   mask, which is a legit state.

 - Interrupts are not forced away from the CPU

 - All facilities which only look at the online mask would still see it,
   but that is the case during normal hotplug/unplug operations as well.
   It's just a (way) longer time frame.
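
A sketch of what that permanent state looks like to mask-based checks;
cpu_online() and cpu_active() are the real kernel predicates, while the
helper name is made up purely for illustration:

	/* Illustration only: a CPU parked by a failed bringup is still
	 * online but no longer active, a combination that is otherwise
	 * only seen transiently during hotplug operations.
	 */
	static bool cpu_parked_after_failed_bringup(unsigned int cpu)
	{
		return cpu_online(cpu) && !cpu_active(cpu);
	}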

This will expose issues which haven't been exposed before, or only seldom,
because now the normally transient state of being non-active but online is
a permanent state. In testing this already exposed an issue vs. work
queues, where the vmstat code schedules work on the almost dead CPU, which
ends up in an unbound workqueue and triggers 'preemptible context'
warnings. This is not a problem of this change; it merely exposes an
already existing issue. Still, this is better than crashing fully without
a chance to debug it.

This is mainly intended as a workaround for those architectures which do
not support HOTPLUG_CPU. All others should enforce HOTPLUG_CPU for SMP.

Fixes: 2e1a3483ce74 ("cpu/hotplug: Split out the state walk into functions")
Reported-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Tested-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Konrad Wilk <konrad.wilk@oracle.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Mukesh Ojha <mojha@codeaurora.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Jiri Kosina <jkosina@suse.cz>
Cc: Rik van Riel <riel@surriel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Michael Kelley <michael.h.kelley@microsoft.com>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20190326163811.503390616@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 kernel/cpu.c |   20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -533,6 +533,20 @@ static void undo_cpu_up(unsigned int cpu
 		cpuhp_invoke_callback(cpu, st->state, false, NULL, NULL);
 }
 
+static inline bool can_rollback_cpu(struct cpuhp_cpu_state *st)
+{
+	if (IS_ENABLED(CONFIG_HOTPLUG_CPU))
+		return true;
+	/*
+	 * When CPU hotplug is disabled, then taking the CPU down is not
+	 * possible because takedown_cpu() and the architecture and
+	 * subsystem specific mechanisms are not available. So the CPU
+	 * which would be completely unplugged again needs to stay around
+	 * in the current state.
+	 */
+	return st->state <= CPUHP_BRINGUP_CPU;
+}
+
 static int cpuhp_up_callbacks(unsigned int cpu, struct cpuhp_cpu_state *st,
 			      enum cpuhp_state target)
 {
@@ -543,8 +557,10 @@ static int cpuhp_up_callbacks(unsigned i
 		st->state++;
 		ret = cpuhp_invoke_callback(cpu, st->state, true, NULL, NULL);
 		if (ret) {
-			st->target = prev_state;
-			undo_cpu_up(cpu, st);
+			if (can_rollback_cpu(st)) {
+				st->target = prev_state;
+				undo_cpu_up(cpu, st);
+			}
 			break;
 		}
 	}
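
Summarizing the effect of the hunk above, the rollback decision now is:

    CONFIG_HOTPLUG_CPU | state of failing CPU  | action
    -------------------+-----------------------+----------------------------
    y                  | any                   | full rollback, CPU offline
    n                  | <= CPUHP_BRINGUP_CPU  | rollback (no stop_machine
                       |                       | teardown needed yet)
    n                  | >  CPUHP_BRINGUP_CPU  | no rollback; CPU parks in
                       |                       | idle, online but !active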