From: Vlastimil Babka
Date: Fri, 13 Sep 2024 09:08:27 +0000 (+0200)
Subject: Merge branch 'slab/for-6.12/rcu_barriers' into slab/for-next
X-Git-Tag: v6.12-rc1~162^2~1
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=a715e94dbda4ece41aac49b7b7ff8ddb55a7fe08;p=thirdparty%2Fkernel%2Flinux.git

Merge branch 'slab/for-6.12/rcu_barriers' into slab/for-next

Merge most of SLUB feature work for 6.12:

- Barrier for pending kfree_rcu() in kmem_cache_destroy() and associated
  refactoring of the destroy path (Vlastimil Babka)
- CONFIG_SLUB_RCU_DEBUG to allow KASAN catching UAF bugs in
  SLAB_TYPESAFE_BY_RCU caches (Jann Horn)
- kmem_cache_charge() for delayed kmemcg charging (Shakeel Butt)
---

a715e94dbda4ece41aac49b7b7ff8ddb55a7fe08
diff --cc mm/slub.c
index d52c88f29f69a,aa512de974e74..81cea762d0944
--- a/mm/slub.c
+++ b/mm/slub.c
@@@ -2247,15 -2334,9 +2334,15 @@@ bool slab_free_hook(struct kmem_cache *
  		rsize = (s->flags & SLAB_RED_ZONE) ? s->red_left_pad : 0;
  		memset((char *)kasan_reset_tag(x) + inuse, 0,
  		       s->size - inuse - rsize);
 +		/*
 +		 * Restore orig_size, otherwise kmalloc redzone overwritten
 +		 * would be reported
 +		 */
 +		set_orig_size(s, x, orig_size);
 +	}
  
  	/* KASAN might put x into memory quarantine, delaying its reuse. */
- 	return !kasan_slab_free(s, x, init);
+ 	return !kasan_slab_free(s, x, init, still_accessible);
  }
  
  static __fastpath_inline
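For context on the CONFIG_SLUB_RCU_DEBUG bullet: with SLAB_TYPESAFE_BY_RCU, a freed object's memory may be reused for a *new* object of the same type before an RCU grace period elapses, so lockless readers must revalidate the object after taking a reference. The sketch below is not from this commit; it is an illustrative kernel-style fragment of that canonical lookup pattern (`struct obj`, `hash_lookup()`, and `obj_put()` are hypothetical names), which is exactly the kind of code path the new KASAN coverage helps audit:

```c
/* Hedged sketch of the SLAB_TYPESAFE_BY_RCU reader pattern. */
struct obj {
	refcount_t ref;
	int key;
};

static struct obj *obj_lookup(int key)
{
	struct obj *o;

	rcu_read_lock();
	o = hash_lookup(key);			/* hypothetical lockless lookup */
	if (o && !refcount_inc_not_zero(&o->ref))
		o = NULL;			/* object is being freed */
	if (o && o->key != key) {		/* memory may have been reused */
		obj_put(o);			/* hypothetical reference drop */
		o = NULL;
	}
	rcu_read_unlock();
	return o;
}
```

Without the revalidation step, a reader can operate on a recycled object without any crash, which is why such bugs are hard to catch; CONFIG_SLUB_RCU_DEBUG lets KASAN report accesses to such memory once the grace period has actually passed.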
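For the first bullet (barrier for pending kfree_rcu() in kmem_cache_destroy()): per the commit summary, cache users freeing objects via kfree_rcu() no longer need an explicit rcu_barrier() before destroying the cache, since kmem_cache_destroy() now waits for pending kfree_rcu() callbacks itself. A hedged, illustrative fragment (the `obj_cache` cache and `rcu` field names are assumptions):

```c
static void put_obj(struct obj *o)
{
	kfree_rcu(o, rcu);		/* o->rcu is a struct rcu_head */
}

static void obj_cache_exit(void)
{
	/*
	 * Previously callers had to issue rcu_barrier() here to flush
	 * pending kfree_rcu() callbacks; after this series,
	 * kmem_cache_destroy() handles that itself.
	 */
	kmem_cache_destroy(obj_cache);
}
```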