mm,slub: do not special case N_NORMAL nodes for slab_nodes
Patch series "Implement numa node notifier", v7.
Memory notifier is a tool that allows consumers to get notified whenever
memory gets onlined or offlined in the system. Currently, there are 10
consumers of it, but 5 out of those 10 are only interested in getting
notifications when a numa node changes its memory state, that is, when it
goes from memoryless to memory-aware or vice versa.
This means that they get notified on every {online,offline}_pages
operation even though the numa node might not have changed its memory
state at all. This is suboptimal, and we want to decouple numa node state
changes from memory state changes.
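For illustration only (this is a simplified, hypothetical consumer, not
code from this series), such a consumer currently has to register for all
memory hotplug events and filter on status_change_nid by hand:

  static int example_memory_callback(struct notifier_block *self,
                                     unsigned long action, void *arg)
  {
          struct memory_notify *mn = arg;

          /*
           * Invoked for every {online,offline}_pages operation, but the
           * node only changed its memory state if status_change_nid
           * holds a valid node id.
           */
          if (mn->status_change_nid < 0)
                  return NOTIFY_OK;

          switch (action) {
          case MEM_ONLINE:
                  /* the node just gained its first memory */
                  break;
          case MEM_OFFLINE:
                  /* the node just lost its last memory */
                  break;
          }
          return NOTIFY_OK;
  }

  /* registered from an __init path, e.g.:
   *      hotplug_memory_notifier(example_memory_callback, 0);
   */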
While we are doing this, remove status_change_nid_normal, as the only
current user (slub) does not really need it. This allows us to further
simplify and clean up the code.
The first patch gets rid of status_change_nid_normal in slub. The second
patch implements a numa node notifier that does just that, and has those
consumers register with it, so they only get notified when they are
actually interested.
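As a rough sketch of the idea (the identifiers below, e.g.
example_node_callback(), struct node_notify and the NODE_* events, are
placeholders for illustration and not necessarily the names the series
introduces), such a consumer would then only ever see node state
transitions:

  /* Placeholder names; the real API is added by patch 2 of the series. */
  static int example_node_callback(struct notifier_block *self,
                                   unsigned long action, void *arg)
  {
          struct node_notify *nn = arg;   /* carries just the node id */

          switch (action) {
          case NODE_BECAME_MEM_AWARE:
                  /* first memory of the node got onlined */
                  break;
          case NODE_BECAME_MEMORYLESS:
                  /* last memory of the node got offlined */
                  break;
          }
          return NOTIFY_OK;
  }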
The third patch replaces the 'status_change_nid{_normal}' fields within
memory_notify with a single 'nid', as that is all the memory notifier
needs, and updates its only user (page_ext).
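Schematically (abridged, only the fields relevant here are shown), the
notifier payload thus goes from two status_change_nid fields to a plain
node id:

  /* before the series (abridged) */
  struct memory_notify {
          unsigned long start_pfn;
          unsigned long nr_pages;
          int status_change_nid_normal;   /* dropped by patch 1 */
          int status_change_nid;          /* replaced by 'nid' in patch 3 */
  };

  /* after (abridged) */
  struct memory_notify {
          unsigned long start_pfn;
          unsigned long nr_pages;
          int nid;
  };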
Consumers that are only interested in numa node state changes are:
- memory-tier
- slub
- cpuset
- hmat
- cxl
- autoweight-mempolicy
This patch (of 11):
Currently, slab_mem_going_online_callback() checks whether the node has
N_NORMAL memory before setting it in slab_nodes. While it is true that
dropping that restriction means movable-only nodes can end up in
slab_nodes, the memory waste that comes with that is negligible.
So stop checking for status_change_nid_normal and just use
status_change_nid instead, which works for both types of memory.
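In code terms, the core of the change is which field of struct
memory_notify the callback reads; illustrative only, not the literal hunk:

  @@ slab_mem_going_online_callback()
          struct memory_notify *marg = arg;
  -       int nid = marg->status_change_nid_normal;
  +       int nid = marg->status_change_nid;

          /* nothing to do if the node did not change its memory state */
          if (nid < 0)
                  return 0;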
Also, once we allocate the kmem_cache_node cache for the node in
slab_mem_going_online_callback(), we never deallocate it in
slab_mem_offline_callback() when the node goes memoryless, so we can just
get rid of slab_mem_offline_callback() altogether.
The side effects are that we will stop clearing the node from slab_nodes,
and also that kmem caches created after a node has been hot-removed will
now allocate their kmem_cache_node for the node(s) that were hot-removed,
but both of these should be negligible.
Link: https://lkml.kernel.org/r/20250616135158.450136-1-osalvador@suse.de
Link: https://lkml.kernel.org/r/20250616135158.450136-2-osalvador@suse.de
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Jonathan Cameron <Jonathan.Cameron@huawei.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>