From: Vlastimil Babka
Date: Fri, 26 Sep 2025 13:50:25 +0000 (+0200)
Subject: Merge series "slab: Re-entrant kmalloc_nolock()"
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=ca74b8cadaad4b179f77f1f4dc3d288be9a580f1;p=thirdparty%2Fkernel%2Fstable.git

Merge series "slab: Re-entrant kmalloc_nolock()"

From the cover letter [1]:

This patch set introduces kmalloc_nolock(), the next logical step
towards any-context allocation, which is necessary to remove
bpf_mem_alloc and get rid of the preallocation requirement in the BPF
infrastructure.

In production, BPF maps have grown to gigabytes in size, and
preallocation wastes memory. Allocating from any context addresses
this issue for BPF and for other subsystems that are forced to
preallocate as well.

This long task started with the introduction of alloc_pages_nolock();
then memcg and objcg were converted to operate from any context,
including NMI. This set completes the task with kmalloc_nolock(),
which builds on top of alloc_pages_nolock() and the memcg changes.
After that, the BPF subsystem will gradually adopt it everywhere.

Link: https://lore.kernel.org/all/20250909010007.1660-1-alexei.starovoitov@gmail.com/ [1]
---

ca74b8cadaad4b179f77f1f4dc3d288be9a580f1
diff --cc mm/slub.c
index c2c6b350766e2,f9f7f3942074f..a585d0ac45d40
--- a/mm/slub.c
+++ b/mm/slub.c
@@@ -2097,11 -2089,26 +2099,25 @@@ int alloc_slab_obj_exts(struct slab *sl
  	gfp &= ~OBJCGS_CLEAR_MASK;
  	/* Prevent recursive extension vector allocation */
  	gfp |= __GFP_NO_OBJ_EXT;
- 	vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
- 			   slab_nid(slab));
+ 
+ 	/*
+ 	 * Note that allow_spin may be false during early boot, which uses
+ 	 * the restricted GFP_BOOT_MASK. Because kmalloc_nolock() only
+ 	 * supports architectures with cmpxchg16b, early obj_exts will be
+ 	 * missing for very early allocations on architectures without it.
+ 	 */
+ 	if (unlikely(!allow_spin)) {
+ 		size_t sz = objects * sizeof(struct slabobj_ext);
+ 
+ 		vec = kmalloc_nolock(sz, __GFP_ZERO | __GFP_NO_OBJ_EXT,
+ 				     slab_nid(slab));
+ 	} else {
+ 		vec = kcalloc_node(objects, sizeof(struct slabobj_ext), gfp,
+ 				   slab_nid(slab));
+ 	}
  	if (!vec) {
  		/* Mark vectors which failed to allocate */
 -		if (new_slab)
 -			mark_failed_objexts_alloc(slab);
 +		mark_failed_objexts_alloc(slab);
  		return -ENOMEM;
  	}
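
[Editor's illustration, not part of the merge itself: a minimal sketch
of how a caller in a non-sleepable context, e.g. an NMI handler or a
tracing hook, might use the new API. The three-argument signature
(size, gfp flags, node) is inferred from the kmalloc_nolock() call in
the diff above; kfree_nolock() is the matching free from the same
series. struct trace_ev, record_event() and drop_event() are
hypothetical names invented for the example.]

/*
 * Illustrative sketch only.  kmalloc_nolock() never sleeps and never
 * spins on locks that an interrupted context may hold, so it is safe
 * to call from NMI; the trade-off is that it can fail more often, and
 * callers are expected to tolerate NULL rather than retry.
 */
#include <linux/numa.h>
#include <linux/overflow.h>
#include <linux/slab.h>
#include <linux/smp.h>
#include <linux/string.h>
#include <linux/timekeeping.h>
#include <linux/types.h>

struct trace_ev {			/* hypothetical event record */
	u64 ts;
	u32 cpu;
	u32 len;
	u8 data[];			/* flexible payload */
};

static struct trace_ev *record_event(const void *payload, u32 len)
{
	struct trace_ev *ev;

	ev = kmalloc_nolock(struct_size(ev, data, len), __GFP_ZERO,
			    NUMA_NO_NODE);
	if (!ev)
		return NULL;		/* no reserves, no retry loop */

	ev->ts = ktime_get_mono_fast_ns();	/* NMI-safe clock */
	ev->cpu = raw_smp_processor_id();
	ev->len = len;
	memcpy(ev->data, payload, len);
	return ev;
}

static void drop_event(struct trace_ev *ev)
{
	kfree_nolock(ev);	/* re-entrant counterpart to kfree() */
}

The contract difference from kmalloc() is the point of the series: where
kmalloc() may block or retry, kmalloc_nolock() simply fails, which is
what lets BPF drop its preallocation requirement.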