Commit
7a8e71bc619d ("mm/slab: use stride to access slabobj_ext")
defined the type of slab->stride as unsigned short, because the author
initially planned to store stride within the lower 16 bits of the
page_type field, but later stored it in unused bits in the counters
field instead.
However, limiting the stride to 2 bytes turned out to be a
serious mistake. On systems with 64k pages, order-1 pages are 128k,
which exceeds USHRT_MAX. This triggers a debug warning because
s->size is 128k while the stride, truncated to 2 bytes, becomes zero:
------------[ cut here ]------------
Warning! stride (0) != s->size (131072)
WARNING: mm/slub.c:2231 at alloc_slab_obj_exts_early.constprop.0+0x524/0x534, CPU#6: systemd-sysctl/307
Modules linked in:
CPU: 6 UID: 0 PID: 307 Comm: systemd-sysctl Not tainted 7.0.0-rc1+ #6 PREEMPTLAZY
Hardware name: IBM,9009-22A POWER9 (architected) 0x4e0202 0xf000005 of:IBM,FW950.E0 (VL950_179) hv:phyp pSeries
NIP:  c0000000008a9ac0 LR: c0000000008a9abc CTR: 0000000000000000
REGS: c0000000141f7390 TRAP: 0700   Not tainted  (7.0.0-rc1+)
MSR:  8000000000029033 <SF,EE,ME,IR,DR,RI,LE>  CR: 28004400  XER: 00000005
CFAR: c000000000279318 IRQMASK: 0
GPR00: c0000000008a9abc c0000000141f7630 c00000000252a300 c00000001427b200
GPR04: 0000000000000004 0000000000000000 c000000000278fd0 0000000000000000
GPR08: fffffffffffe0000 0000000000000000 0000000000000000 0000000022004400
GPR12: c000000000f644b0 c000000017ff8f00 0000000000000000 0000000000000000
GPR16: 0000000000000000 c0000000141f7aa0 0000000000000000 c0000000141f7a88
GPR20: 0000000000000000 0000000000400cc0 ffffffffffffffff c00000001427b180
GPR24: 0000000000000004 00000000000c0cc0 c000000004e89a20 c00000005de90011
GPR28: 0000000000010010 c00000005df00000 c000000006017f80 c00c000000177a00
NIP [c0000000008a9ac0] alloc_slab_obj_exts_early.constprop.0+0x524/0x534
LR [c0000000008a9abc] alloc_slab_obj_exts_early.constprop.0+0x520/0x534
Call Trace:
[c0000000141f7630] [c0000000008a9abc] alloc_slab_obj_exts_early.constprop.0+0x520/0x534 (unreliable)
[c0000000141f76c0] [c0000000008aafbc] allocate_slab+0x154/0x94c
[c0000000141f7760] [c0000000008b41c0] refill_objects+0x124/0x16c
[c0000000141f77c0] [c0000000008b4be0] __pcs_replace_empty_main+0x2b0/0x444
[c0000000141f7810] [c0000000008b9600] __kvmalloc_node_noprof+0x840/0x914
[c0000000141f7900] [c000000000a3dd40] seq_read_iter+0x60c/0xb00
[c0000000141f7a10] [c000000000b36b24] proc_reg_read_iter+0x154/0x1fc
[c0000000141f7a50] [c0000000009cee7c] vfs_read+0x39c/0x4e4
[c0000000141f7b30] [c0000000009d0214] ksys_read+0x9c/0x180
[c0000000141f7b90] [c00000000003a8d0] system_call_exception+0x1e0/0x4b0
[c0000000141f7e50] [c00000000000d05c] system_call_vectored_common+0x15c/0x2ec
This leads to slab_obj_ext() returning the first slabobj_ext for all
objects, which confuses the reference counting of object cgroups [1]
and memory (un)charging for memory cgroups [2].
Fortunately, the counters field has 32 unused bits instead of 16
on 64-bit CPUs, which is wide enough to hold any value of s->size.
Change the type to unsigned int.
Reported-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Closes: https://lore.kernel.org/lkml/ca241daa-e7e7-4604-a48d-de91ec9184a5@linux.ibm.com [1]
Closes: https://lore.kernel.org/all/ddff7c7d-c0c3-4780-808f-9a83268bbf0c@linux.ibm.com [2]
Fixes: 7a8e71bc619d ("mm/slab: use stride to access slabobj_ext")
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260303135722.2680521-1-harry.yoo@oracle.com
Reviewed-by: Hao Li <hao.li@linux.dev>
Tested-by: Venkat Rao Bagalkote <venkat88@linux.ibm.com>
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
* to save memory. In case ->stride field is not available,
* such optimizations are disabled.
*/
- unsigned short stride;
+ unsigned int stride;
#endif
};
};
}
#ifdef CONFIG_64BIT
-static inline void slab_set_stride(struct slab *slab, unsigned short stride)
+static inline void slab_set_stride(struct slab *slab, unsigned int stride)
{
slab->stride = stride;
}
-static inline unsigned short slab_get_stride(struct slab *slab)
+static inline unsigned int slab_get_stride(struct slab *slab)
{
return slab->stride;
}
#else
-static inline void slab_set_stride(struct slab *slab, unsigned short stride)
+static inline void slab_set_stride(struct slab *slab, unsigned int stride)
{
VM_WARN_ON_ONCE(stride != sizeof(struct slabobj_ext));
}
-static inline unsigned short slab_get_stride(struct slab *slab)
+static inline unsigned int slab_get_stride(struct slab *slab)
{
return sizeof(struct slabobj_ext);
}