From mgorman@suse.de Tue Jan  7 10:38:11 2014
From: Mel Gorman <mgorman@suse.de>
Date: Tue, 7 Jan 2014 14:00:47 +0000
Subject: mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates
To: gregkh@linuxfoundation.org
Cc: athorlton@sgi.com, riel@redhat.com, chegu_vinod@hp.com, Mel Gorman <mgorman@suse.de>, stable@vger.kernel.org
Message-ID: <1389103248-17617-13-git-send-email-mgorman@suse.de>

From: Mel Gorman <mgorman@suse.de>

commit af2c1401e6f9177483be4fad876d0073669df9df upstream.

According to documentation on barriers, stores issued before a LOCK can
complete after the lock, implying that it is possible for tlb_flush_pending
to be visible after a page table update.  As per the revised documentation,
this patch adds an smp_mb__before_spinlock() to guarantee the correct
ordering.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 include/linux/mm_types.h |    7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -478,7 +478,12 @@ static inline bool mm_tlb_flush_pending(
 static inline void set_tlb_flush_pending(struct mm_struct *mm)
 {
 	mm->tlb_flush_pending = true;
-	barrier();
+
+	/*
+	 * Guarantee that the tlb_flush_pending store does not leak into the
+	 * critical section updating the page tables
+	 */
+	smp_mb__before_spinlock();
 }
 /* Clearing is done after a TLB flush, which also provides a barrier. */
 static inline void clear_tlb_flush_pending(struct mm_struct *mm)
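
For illustration, below is a minimal user-space C11 analogue of the
ordering this patch enforces.  It is a sketch, not kernel code: the names
tlb_flush_pending, ptl and page_table mirror the kernel identifiers, and
atomic_thread_fence(memory_order_seq_cst) stands in for
smp_mb__before_spinlock().  The point is that an observer which sees the
"page table" update must also see the pending flag, even though the flag
is set by a plain store before the lock is taken.

	#include <pthread.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stdio.h>

	static atomic_bool tlb_flush_pending;   /* analogue of mm->tlb_flush_pending */
	static atomic_int page_table;           /* stands in for a pte update */
	static pthread_mutex_t ptl = PTHREAD_MUTEX_INITIALIZER;

	static void *updater(void *arg)
	{
		(void)arg;

		/* Plain store, as in set_tlb_flush_pending(). */
		atomic_store_explicit(&tlb_flush_pending, true,
				      memory_order_relaxed);

		/*
		 * Full fence standing in for smp_mb__before_spinlock():
		 * without it, the store above may become visible only
		 * after the page table update made inside the critical
		 * section below.
		 */
		atomic_thread_fence(memory_order_seq_cst);

		pthread_mutex_lock(&ptl);
		atomic_store_explicit(&page_table, 1, memory_order_relaxed);
		pthread_mutex_unlock(&ptl);
		return NULL;
	}

	static void *observer(void *arg)
	{
		(void)arg;

		/* An observer that sees the page table update... */
		if (atomic_load_explicit(&page_table, memory_order_relaxed)) {
			atomic_thread_fence(memory_order_acquire);
			/* ...is then guaranteed to see the pending flag. */
			if (!atomic_load_explicit(&tlb_flush_pending,
						  memory_order_relaxed))
				puts("ordering violated: update seen before flag");
		}
		return NULL;
	}

	int main(void)
	{
		pthread_t a, b;

		pthread_create(&a, NULL, updater, NULL);
		pthread_create(&b, NULL, observer, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		return 0;
	}

With the seq_cst fence removed, the relaxed flag store may be reordered
past the lock acquisition, which is exactly the window the kernel patch
closes (build with: cc -std=c11 -pthread).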