From ff17bbe0bb405ad8b36e55815d381841f9fdeebc Mon Sep 17 00:00:00 2001
From: Andy Lutomirski <luto@kernel.org>
Date: Fri, 21 Jun 2019 08:43:04 -0700
Subject: x86/vdso: Prevent segfaults due to hoisted vclock reads

From: Andy Lutomirski <luto@kernel.org>

commit ff17bbe0bb405ad8b36e55815d381841f9fdeebc upstream.

GCC 5.5.0 sometimes cleverly hoists reads of the pvclock and/or hvclock
pages before the vclock mode checks. This creates a path through
vclock_gettime() in which no vclock is enabled at all (due to disabled
TSC on old CPUs, for example) but the pvclock or hvclock page is
nevertheless read. This will segfault on bare metal.

This fixes commit 459e3a21535a ("gcc-9: properly declare the
{pv,hv}clock_page storage") in the sense that, before that commit, GCC
didn't seem to generate the offending code. There was nothing wrong
with that commit per se, and -stable maintainers should backport this to
all supported kernels regardless of whether the offending commit was
present, since the same crash could just as easily be triggered by the
phase of the moon.

On GCC 9.1.1, this doesn't seem to affect the generated code at all, so
I'm not too concerned about performance regressions from this fix.

Cc: stable@vger.kernel.org
Cc: x86@kernel.org
Cc: Borislav Petkov <bp@alien8.de>
Reported-by: Duncan Roe <duncan_roe@optusnet.com.au>
Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/entry/vdso/vclock_gettime.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -128,13 +128,24 @@ notrace static inline u64 vgetcyc(int mo
 {
 	if (mode == VCLOCK_TSC)
 		return (u64)rdtsc_ordered();
+
+	/*
+	 * For any memory-mapped vclock type, we need to make sure that gcc
+	 * doesn't cleverly hoist a load before the mode check. Otherwise we
+	 * might end up touching the memory-mapped page even if the vclock in
+	 * question isn't enabled, which will segfault. Hence the barriers.
+	 */
 #ifdef CONFIG_PARAVIRT_CLOCK
-	else if (mode == VCLOCK_PVCLOCK)
+	if (mode == VCLOCK_PVCLOCK) {
+		barrier();
 		return vread_pvclock();
+	}
 #endif
 #ifdef CONFIG_HYPERV_TSCPAGE
-	else if (mode == VCLOCK_HVCLOCK)
+	if (mode == VCLOCK_HVCLOCK) {
+		barrier();
 		return vread_hvclock();
+	}
 #endif
 	return U64_MAX;
 }