From: Sasha Levin
Date: Sun, 26 Apr 2020 23:28:01 +0000 (-0400)
Subject: Fixes for 4.14
X-Git-Tag: v4.19.119~42
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=e568c765d0965de73915c8af0749df64c0f09615;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 4.14

Signed-off-by: Sasha Levin
---

diff --git a/queue-4.14/mm-slub-restore-the-original-intention-of-prefetch_f.patch b/queue-4.14/mm-slub-restore-the-original-intention-of-prefetch_f.patch
new file mode 100644
index 00000000000..1e6f5e30626
--- /dev/null
+++ b/queue-4.14/mm-slub-restore-the-original-intention-of-prefetch_f.patch
@@ -0,0 +1,56 @@
+From 0c0f73796f5527c5d65c76d1f2ac58dc6bb1ceb0 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 26 Apr 2020 09:06:17 +0200
+Subject: mm, slub: restore the original intention of prefetch_freepointer()
+
+From: Vlastimil Babka
+
+commit 0882ff9190e3bc51e2d78c3aadd7c690eeaa91d5 upstream.
+
+In SLUB, prefetch_freepointer() is used when allocating an object from
+cache's freelist, to make sure the next object in the list is cache-hot,
+since it's probable it will be allocated soon.
+
+Commit 2482ddec670f ("mm: add SLUB free list pointer obfuscation") has
+unintentionally changed the prefetch in a way where the prefetch is
+turned to a real fetch, and only the next->next pointer is prefetched.
+In case there is not a stream of allocations that would benefit from
+prefetching, the extra real fetch might add a useless cache miss to the
+allocation. Restore the previous behavior.
+
+Link: http://lkml.kernel.org/r/20180809085245.22448-1-vbabka@suse.cz
+Fixes: 2482ddec670f ("mm: add SLUB free list pointer obfuscation")
+Signed-off-by: Vlastimil Babka
+Acked-by: Kees Cook
+Cc: Daniel Micay
+Cc: Eric Dumazet
+Cc: Christoph Lameter
+Cc: Pekka Enberg
+Cc: David Rientjes
+Cc: Joonsoo Kim
+Cc: Matthias Schiffer
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Sven Eckelmann
+Signed-off-by: Sasha Levin
+---
+ mm/slub.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/mm/slub.c b/mm/slub.c
+index 3c1a16f03b2bd..481518c3f61a9 100644
+--- a/mm/slub.c
++++ b/mm/slub.c
+@@ -269,8 +269,7 @@ static inline void *get_freepointer(struct kmem_cache *s, void *object)
+ 
+ static void prefetch_freepointer(const struct kmem_cache *s, void *object)
+ {
+-	if (object)
+-		prefetch(freelist_dereference(s, object + s->offset));
++	prefetch(object + s->offset);
+ }
+ 
+ static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
+-- 
+2.20.1
+

diff --git a/queue-4.14/series b/queue-4.14/series
index 83acaae5b55..5e8d8f96c6c 100644
--- a/queue-4.14/series
+++ b/queue-4.14/series
@@ -20,3 +20,4 @@ pwm-renesas-tpu-fix-late-runtime-pm-enablement.patch
 pwm-bcm2835-dynamically-allocate-base.patch
 perf-core-disable-page-faults-when-getting-phys-addr.patch
 pci-aspm-allow-re-enabling-clock-pm.patch
+mm-slub-restore-the-original-intention-of-prefetch_f.patch