From 50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e Mon Sep 17 00:00:00 2001
From: Prateek Sood <prsood@codeaurora.org>
Date: Fri, 14 Jul 2017 19:17:56 +0530
Subject: locking/osq_lock: Fix osq_lock queue corruption

From: Prateek Sood <prsood@codeaurora.org>

commit 50972fe78f24f1cd0b9d7bbf1f87d2be9e4f412e upstream.
Fix the ordering of link creation between node->prev and prev->next in
osq_lock(). Consider a case in which the status of the optimistic spin
queue is CPU6->CPU2, where CPU6 has acquired the lock.
At this point if CPU0 comes in to acquire osq_lock, it will update the
tail count.
After the tail count update, if CPU2 starts to unqueue itself from the
optimistic spin queue, it will find an updated tail count with CPU0 and
update CPU2's node->next to NULL in osq_wait_next(), since it observes

	->tail != curr && !node->next
If reordering of CPU0's stores happens, then prev->next, prev being the
CPU2 node, would be updated to point to the CPU0 node before CPU0's own
node->prev is visible; CPU2's osq_wait_next() then clears that link
again via

	xchg(node->next, NULL)
At this point, if the next instruction

	WRITE_ONCE(next->prev, prev);

in CPU2's path is committed before the update of CPU0's node->prev =
prev, then CPU0's node->prev will point to the CPU6 node.
If CPU0 path's node->prev = prev is committed at this point, CPU0's prev
changes back to the CPU2 node, while CPU2's node->next is NULL. So if
CPU0 gets into the unqueue path of osq_lock() it will keep spinning in
an infinite loop, as the condition prev->next == node will never be
true.
Signed-off-by: Prateek Sood <prsood@codeaurora.org>
[ Added pictures, rewrote comments. ]
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: sramana@codeaurora.org
Link: http://lkml.kernel.org/r/1500040076-27626-1-git-send-email-prsood@codeaurora.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Amit Pundir <amit.pundir@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 kernel/locking/osq_lock.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -104,6 +104,19 @@ bool osq_lock(struct optimistic_spin_que
 
 	prev = decode_cpu(old);
 	node->prev = prev;
+
+	/*
+	 * osq_lock()			unqueue
+	 *
+	 * node->prev = prev		osq_wait_next()
+	 *				vvvv
+	 * prev->next = node		next->prev = prev // unqueue-C
+	 *
+	 * Here 'node->prev' and 'next->prev' are the same variable and we need
+	 * to ensure these stores happen in-order to avoid corrupting the list.
+	 */
+	smp_wmb();
+
 	WRITE_ONCE(prev->next, node);