From: Mel Gorman
Date: Tue, 7 Jan 2014 14:00:47 +0000 (+0000)
Subject: mm: numa: guarantee that tlb_flush_pending updates are visible before page table...
X-Git-Tag: v3.12.7~34
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=186fa6eb6131954d17457f37283e654cb079c25b;p=thirdparty%2Fkernel%2Fstable.git

mm: numa: guarantee that tlb_flush_pending updates are visible before page table updates

commit af2c1401e6f9177483be4fad876d0073669df9df upstream.

According to the documentation on barriers, stores issued before a LOCK
operation can complete after the lock, implying that it is possible for
tlb_flush_pending to become visible only after a page table update.  As
per the revised documentation, this patch adds an smp_mb__before_spinlock()
to guarantee the correct ordering.

Signed-off-by: Mel Gorman
Acked-by: Paul E. McKenney
Reviewed-by: Rik van Riel
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Greg Kroah-Hartman
---

diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 0edf600b2b5ef..8e082f18fb6aa 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -478,7 +478,12 @@ static inline bool mm_tlb_flush_pending(struct mm_struct *mm)
 static inline void set_tlb_flush_pending(struct mm_struct *mm)
 {
 	mm->tlb_flush_pending = true;
-	barrier();
+
+	/*
+	 * Guarantee that the tlb_flush_pending store does not leak into the
+	 * critical section updating the page tables
+	 */
+	smp_mb__before_spinlock();
 }
 /* Clearing is done after a TLB flush, which also provides a barrier. */
 static inline void clear_tlb_flush_pending(struct mm_struct *mm)