From: Sasha Levin
Date: Wed, 26 Jun 2024 19:04:37 +0000 (-0400)
Subject: Fixes for 5.15
X-Git-Tag: v4.19.317~155
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=678121e90b88b2b4284d50758fb2dd79aa96dd48;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.15

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.15/acpi-x86-force-storaged3enable-on-more-products.patch b/queue-5.15/acpi-x86-force-storaged3enable-on-more-products.patch
new file mode 100644
index 00000000000..c48f66ef8ee
--- /dev/null
+++ b/queue-5.15/acpi-x86-force-storaged3enable-on-more-products.patch
@@ -0,0 +1,83 @@
+From ece455c66e0047a50155ce2ea5367bc005b7287e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 9 May 2024 13:45:02 -0500
+Subject: ACPI: x86: Force StorageD3Enable on more products
+
+From: Mario Limonciello
+
+[ Upstream commit e79a10652bbd320649da705ca1ea0c04351af403 ]
+
+A Rembrandt-based HP thin client is reported to have problems where
+the NVME disk isn't present after resume from s2idle.
+
+This is because the NVME disk wasn't put into D3 at suspend, and
+that happened because the StorageD3Enable _DSD was missing in the BIOS.
+
+As AMD's architecture requires that the NVME is in D3 for s2idle, adjust
+the criteria for force_storage_d3 to match *all* Zen SoCs when the FADT
+advertises low power idle support.
+
+This will ensure that any future products with this BIOS deficiency don't
+need to be added to the allow list of overrides.
+
+Cc: All applicable
+Signed-off-by: Mario Limonciello
+Acked-by: Hans de Goede
+Signed-off-by: Rafael J. Wysocki
+Signed-off-by: Sasha Levin
+---
+ drivers/acpi/x86/utils.c | 24 ++++++++++--------------
+ 1 file changed, 10 insertions(+), 14 deletions(-)
+
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index 7d6083d40bf6b..aa4f233373afc 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -179,16 +179,16 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
+ }
+
+ /*
+- * AMD systems from Renoir and Lucienne *require* that the NVME controller
++ * AMD systems from Renoir onwards *require* that the NVME controller
+ * is put into D3 over a Modern Standby / suspend-to-idle cycle.
+ *
+ * This is "typically" accomplished using the `StorageD3Enable`
+ * property in the _DSD that is checked via the `acpi_storage_d3` function
+- * but this property was introduced after many of these systems launched
+- * and most OEM systems don't have it in their BIOS.
++ * but some OEM systems still don't have it in their BIOS.
+ *
+ * The Microsoft documentation for StorageD3Enable mentioned that Windows has
+- * a hardcoded allowlist for D3 support, which was used for these platforms.
++ * a hardcoded allowlist for D3 support as well as a registry key to override
++ * the BIOS, which has been used for these cases.
+ *
+ * This allows quirking on Linux in a similar fashion.
+ *
+@@ -201,17 +201,13 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
+ * https://bugzilla.kernel.org/show_bug.cgi?id=216773
+ * https://bugzilla.kernel.org/show_bug.cgi?id=217003
+ * 2) On at least one HP system StorageD3Enable is missing on the second NVME
+- disk in the system.
++ * disk in the system.
++ * 3) On at least one HP Rembrandt system StorageD3Enable is missing on the only
++ * NVME device.
+ */
+-static const struct x86_cpu_id storage_d3_cpu_ids[] = {
+- X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 24, NULL), /* Picasso */
+- X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL), /* Renoir */
+- X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */
+- X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL), /* Cezanne */
+- {}
+-};
+-
+ bool force_storage_d3(void)
+ {
+- return x86_match_cpu(storage_d3_cpu_ids);
++ if (!cpu_feature_enabled(X86_FEATURE_ZEN))
++ return false;
++ return acpi_gbl_FADT.flags & ACPI_FADT_LOW_POWER_S0;
+ }
+--
+2.43.0
+
diff --git a/queue-5.15/acpi-x86-utils-add-picasso-to-the-list-for-forcing-s.patch b/queue-5.15/acpi-x86-utils-add-picasso-to-the-list-for-forcing-s.patch
new file mode 100644
index 00000000000..a783d4be4a8
--- /dev/null
+++ b/queue-5.15/acpi-x86-utils-add-picasso-to-the-list-for-forcing-s.patch
@@ -0,0 +1,43 @@
+From 3b69516c27a31d72a9934e1e5be8e1974ab5d1e5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 31 Mar 2023 11:08:42 -0500
+Subject: ACPI: x86: utils: Add Picasso to the list for forcing StorageD3Enable
+
+From: Mario Limonciello
+
+[ Upstream commit 10b6b4a8ac6120ec36555fd286eed577f7632e3b ]
+
+Picasso was the first APU that introduced s2idle support from AMD,
+and it was predating before vendors started to use `StorageD3Enable`
+in their firmware.
+
+Windows doesn't have problems with this hardware and NVME so it was
+likely on the list of hardcoded CPUs to use this behavior in Windows.
+
+Add it to the list for Linux to avoid NVME resume issues.
+
+Reported-by: Stuart Axon
+Link: https://gitlab.freedesktop.org/drm/amd/-/issues/2449
+Signed-off-by: Mario Limonciello
+Signed-off-by: Rafael J. Wysocki
+Stable-dep-of: e79a10652bbd ("ACPI: x86: Force StorageD3Enable on more products")
+Signed-off-by: Sasha Levin
+---
+ drivers/acpi/x86/utils.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/acpi/x86/utils.c b/drivers/acpi/x86/utils.c
+index f1dd086d0b87d..7d6083d40bf6b 100644
+--- a/drivers/acpi/x86/utils.c
++++ b/drivers/acpi/x86/utils.c
+@@ -204,6 +204,7 @@ bool acpi_device_override_status(struct acpi_device *adev, unsigned long long *s
+ disk in the system.
+ */
+ static const struct x86_cpu_id storage_d3_cpu_ids[] = {
++ X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 24, NULL), /* Picasso */
+ X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 96, NULL), /* Renoir */
+ X86_MATCH_VENDOR_FAM_MODEL(AMD, 23, 104, NULL), /* Lucienne */
+ X86_MATCH_VENDOR_FAM_MODEL(AMD, 25, 80, NULL), /* Cezanne */
+--
+2.43.0
+
diff --git a/queue-5.15/gve-add-rx-context.patch b/queue-5.15/gve-add-rx-context.patch
new file mode 100644
index 00000000000..8087ec99fa0
--- /dev/null
+++ b/queue-5.15/gve-add-rx-context.patch
@@ -0,0 +1,234 @@
+From 22c4bf3f225db7d0b3138b1ef5bd3499f34d8280 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 24 Oct 2021 11:42:36 -0700
+Subject: gve: Add RX context.
+
+From: David Awogbemila
+
+[ Upstream commit 1344e751e91092ac0cb63b194621e59d2f364197 ]
+
+This refactor moves the skb_head and skb_tail fields into a new
+gve_rx_ctx struct. This new struct will contain information about the
+current packet being processed. This is in preparation for
+multi-descriptor RX packets.
+
+Signed-off-by: David Awogbemila
+Signed-off-by: Jeroen de Borst
+Reviewed-by: Catherine Sullivan
+Signed-off-by: David S. Miller
+Stable-dep-of: 6f4d93b78ade ("gve: Clear napi->skb before dev_kfree_skb_any()")
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve.h | 13 +++-
+ drivers/net/ethernet/google/gve/gve_rx_dqo.c | 68 ++++++++++----------
+ 2 files changed, 44 insertions(+), 37 deletions(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve.h b/drivers/net/ethernet/google/gve/gve.h
+index 08f4c0595efae..822bdaff66f69 100644
+--- a/drivers/net/ethernet/google/gve/gve.h
++++ b/drivers/net/ethernet/google/gve/gve.h
+@@ -144,6 +144,15 @@ struct gve_index_list {
+ s16 tail;
+ };
+
++/* A single received packet split across multiple buffers may be
++ * reconstructed using the information in this structure.
++ */
++struct gve_rx_ctx {
++ /* head and tail of skb chain for the current packet or NULL if none */
++ struct sk_buff *skb_head;
++ struct sk_buff *skb_tail;
++};
++
+ /* Contains datapath state used to represent an RX queue. */
+ struct gve_rx_ring {
+ struct gve_priv *gve;
+@@ -208,9 +217,7 @@ struct gve_rx_ring {
+ dma_addr_t q_resources_bus; /* dma address for the queue resources */
+ struct u64_stats_sync statss; /* sync stats for 32bit archs */
+
+- /* head and tail of skb chain for the current packet or NULL if none */
+- struct sk_buff *skb_head;
+- struct sk_buff *skb_tail;
++ struct gve_rx_ctx ctx; /* Info for packet currently being processed in this ring. */
+ };
+
+ /* A TX desc ring entry */
+diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+index d947c2c334128..630f42a3037b0 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+@@ -240,8 +240,8 @@ static int gve_rx_alloc_ring_dqo(struct gve_priv *priv, int idx)
+ rx->dqo.bufq.mask = buffer_queue_slots - 1;
+ rx->dqo.complq.num_free_slots = completion_queue_slots;
+ rx->dqo.complq.mask = completion_queue_slots - 1;
+- rx->skb_head = NULL;
+- rx->skb_tail = NULL;
++ rx->ctx.skb_head = NULL;
++ rx->ctx.skb_tail = NULL;
+
+ rx->dqo.num_buf_states = min_t(s16, S16_MAX, buffer_queue_slots * 4);
+ rx->dqo.buf_states = kvcalloc(rx->dqo.num_buf_states,
+@@ -467,12 +467,12 @@ static void gve_rx_skb_hash(struct sk_buff *skb,
+
+ static void gve_rx_free_skb(struct gve_rx_ring *rx)
+ {
+- if (!rx->skb_head)
++ if (!rx->ctx.skb_head)
+ return;
+
+- dev_kfree_skb_any(rx->skb_head);
+- rx->skb_head = NULL;
+- rx->skb_tail = NULL;
++ dev_kfree_skb_any(rx->ctx.skb_head);
++ rx->ctx.skb_head = NULL;
++ rx->ctx.skb_tail = NULL;
+ }
+
+ /* Chains multi skbs for single rx packet.
+@@ -483,7 +483,7 @@ static int gve_rx_append_frags(struct napi_struct *napi,
+ u16 buf_len, struct gve_rx_ring *rx,
+ struct gve_priv *priv)
+ {
+- int num_frags = skb_shinfo(rx->skb_tail)->nr_frags;
++ int num_frags = skb_shinfo(rx->ctx.skb_tail)->nr_frags;
+
+ if (unlikely(num_frags == MAX_SKB_FRAGS)) {
+ struct sk_buff *skb;
+@@ -492,17 +492,17 @@ static int gve_rx_append_frags(struct napi_struct *napi,
+ if (!skb)
+ return -1;
+
+- skb_shinfo(rx->skb_tail)->frag_list = skb;
+- rx->skb_tail = skb;
++ skb_shinfo(rx->ctx.skb_tail)->frag_list = skb;
++ rx->ctx.skb_tail = skb;
+ num_frags = 0;
+ }
+- if (rx->skb_tail != rx->skb_head) {
+- rx->skb_head->len += buf_len;
+- rx->skb_head->data_len += buf_len;
+- rx->skb_head->truesize += priv->data_buffer_size_dqo;
++ if (rx->ctx.skb_tail != rx->ctx.skb_head) {
++ rx->ctx.skb_head->len += buf_len;
++ rx->ctx.skb_head->data_len += buf_len;
++ rx->ctx.skb_head->truesize += priv->data_buffer_size_dqo;
+ }
+
+- skb_add_rx_frag(rx->skb_tail, num_frags,
++ skb_add_rx_frag(rx->ctx.skb_tail, num_frags,
+ buf_state->page_info.page,
+ buf_state->page_info.page_offset,
+ buf_len, priv->data_buffer_size_dqo);
+@@ -556,7 +556,7 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
+ buf_len, DMA_FROM_DEVICE);
+
+ /* Append to current skb if one exists. */
+- if (rx->skb_head) {
++ if (rx->ctx.skb_head) {
+ if (unlikely(gve_rx_append_frags(napi, buf_state, buf_len, rx,
+ priv)) != 0) {
+ goto error;
+@@ -567,11 +567,11 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
+ }
+
+ if (eop && buf_len <= priv->rx_copybreak) {
+- rx->skb_head = gve_rx_copy(priv->dev, napi,
+- &buf_state->page_info, buf_len, 0);
+- if (unlikely(!rx->skb_head))
++ rx->ctx.skb_head = gve_rx_copy(priv->dev, napi,
++ &buf_state->page_info, buf_len, 0);
++ if (unlikely(!rx->ctx.skb_head))
+ goto error;
+- rx->skb_tail = rx->skb_head;
++ rx->ctx.skb_tail = rx->ctx.skb_head;
+
+ u64_stats_update_begin(&rx->statss);
+ rx->rx_copied_pkt++;
+@@ -583,12 +583,12 @@ static int gve_rx_dqo(struct napi_struct *napi, struct gve_rx_ring *rx,
+ return 0;
+ }
+
+- rx->skb_head = napi_get_frags(napi);
+- if (unlikely(!rx->skb_head))
++ rx->ctx.skb_head = napi_get_frags(napi);
++ if (unlikely(!rx->ctx.skb_head))
+ goto error;
+- rx->skb_tail = rx->skb_head;
++ rx->ctx.skb_tail = rx->ctx.skb_head;
+
+- skb_add_rx_frag(rx->skb_head, 0, buf_state->page_info.page,
++ skb_add_rx_frag(rx->ctx.skb_head, 0, buf_state->page_info.page,
+ buf_state->page_info.page_offset, buf_len,
+ priv->data_buffer_size_dqo);
+ gve_dec_pagecnt_bias(&buf_state->page_info);
+@@ -635,27 +635,27 @@ static int gve_rx_complete_skb(struct gve_rx_ring *rx, struct napi_struct *napi,
+ rx->gve->ptype_lut_dqo->ptypes[desc->packet_type];
+ int err;
+
+- skb_record_rx_queue(rx->skb_head, rx->q_num);
++ skb_record_rx_queue(rx->ctx.skb_head, rx->q_num);
+
+ if (feat & NETIF_F_RXHASH)
+- gve_rx_skb_hash(rx->skb_head, desc, ptype);
++ gve_rx_skb_hash(rx->ctx.skb_head, desc, ptype);
+
+ if (feat & NETIF_F_RXCSUM)
+- gve_rx_skb_csum(rx->skb_head, desc, ptype);
++ gve_rx_skb_csum(rx->ctx.skb_head, desc, ptype);
+
+ /* RSC packets must set gso_size otherwise the TCP stack will complain
+ * that packets are larger than MTU.
+ */
+ if (desc->rsc) {
+- err = gve_rx_complete_rsc(rx->skb_head, desc, ptype);
++ err = gve_rx_complete_rsc(rx->ctx.skb_head, desc, ptype);
+ if (err < 0)
+ return err;
+ }
+
+- if (skb_headlen(rx->skb_head) == 0)
++ if (skb_headlen(rx->ctx.skb_head) == 0)
+ napi_gro_frags(napi);
+ else
+- napi_gro_receive(napi, rx->skb_head);
++ napi_gro_receive(napi, rx->ctx.skb_head);
+
+ return 0;
+ }
+@@ -717,18 +717,18 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
+ /* Free running counter of completed descriptors */
+ rx->cnt++;
+
+- if (!rx->skb_head)
++ if (!rx->ctx.skb_head)
+ continue;
+
+ if (!compl_desc->end_of_packet)
+ continue;
+
+ work_done++;
+- pkt_bytes = rx->skb_head->len;
++ pkt_bytes = rx->ctx.skb_head->len;
+ /* The ethernet header (first ETH_HLEN bytes) is snipped off
+ * by eth_type_trans.
+ */
+- if (skb_headlen(rx->skb_head))
++ if (skb_headlen(rx->ctx.skb_head))
+ pkt_bytes += ETH_HLEN;
+
+ /* gve_rx_complete_skb() will consume skb if successful */
+@@ -741,8 +741,8 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
+ }
+
+ bytes += pkt_bytes;
+- rx->skb_head = NULL;
+- rx->skb_tail = NULL;
++ rx->ctx.skb_head = NULL;
++ rx->ctx.skb_tail = NULL;
+ }
+
+ gve_rx_post_buffers_dqo(rx);
+--
+2.43.0
+
diff --git a/queue-5.15/gve-clear-napi-skb-before-dev_kfree_skb_any.patch b/queue-5.15/gve-clear-napi-skb-before-dev_kfree_skb_any.patch
new file mode 100644
index 00000000000..fbc5843602d
--- /dev/null
+++ b/queue-5.15/gve-clear-napi-skb-before-dev_kfree_skb_any.patch
@@ -0,0 +1,69 @@
+From 8bbf589b0d69a6a3c06ce2d3d74d1eb1ebfde414 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 12 Jun 2024 00:16:54 +0000
+Subject: gve: Clear napi->skb before dev_kfree_skb_any()
+
+From: Ziwei Xiao
+
+[ Upstream commit 6f4d93b78ade0a4c2cafd587f7b429ce95abb02e ]
+
+gve_rx_free_skb incorrectly leaves napi->skb referencing an skb after it
+is freed with dev_kfree_skb_any(). This can result in a subsequent call
+to napi_get_frags returning a dangling pointer.
+
+Fix this by clearing napi->skb before the skb is freed.
+
+Fixes: 9b8dd5e5ea48 ("gve: DQO: Add RX path")
+Cc: stable@vger.kernel.org
+Reported-by: Shailend Chand
+Signed-off-by: Ziwei Xiao
+Reviewed-by: Harshitha Ramamurthy
+Reviewed-by: Shailend Chand
+Reviewed-by: Praveen Kaligineedi
+Link: https://lore.kernel.org/r/20240612001654.923887-1-ziweixiao@google.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_rx_dqo.c | 8 +++++---
+ 1 file changed, 5 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_rx_dqo.c b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+index 630f42a3037b0..8756f9cbd631e 100644
+--- a/drivers/net/ethernet/google/gve/gve_rx_dqo.c
++++ b/drivers/net/ethernet/google/gve/gve_rx_dqo.c
+@@ -465,11 +465,13 @@ static void gve_rx_skb_hash(struct sk_buff *skb,
+ skb_set_hash(skb, le32_to_cpu(compl_desc->hash), hash_type);
+ }
+
+-static void gve_rx_free_skb(struct gve_rx_ring *rx)
++static void gve_rx_free_skb(struct napi_struct *napi, struct gve_rx_ring *rx)
+ {
+ if (!rx->ctx.skb_head)
+ return;
+
++ if (rx->ctx.skb_head == napi->skb)
++ napi->skb = NULL;
+ dev_kfree_skb_any(rx->ctx.skb_head);
+ rx->ctx.skb_head = NULL;
+ rx->ctx.skb_tail = NULL;
+@@ -690,7 +692,7 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
+
+ err = gve_rx_dqo(napi, rx, compl_desc, rx->q_num);
+ if (err < 0) {
+- gve_rx_free_skb(rx);
++ gve_rx_free_skb(napi, rx);
+ u64_stats_update_begin(&rx->statss);
+ if (err == -ENOMEM)
+ rx->rx_skb_alloc_fail++;
+@@ -733,7 +735,7 @@ int gve_rx_poll_dqo(struct gve_notify_block *block, int budget)
+
+ /* gve_rx_complete_skb() will consume skb if successful */
+ if (gve_rx_complete_skb(rx, napi, compl_desc, feat) != 0) {
+- gve_rx_free_skb(rx);
++ gve_rx_free_skb(napi, rx);
+ u64_stats_update_begin(&rx->statss);
+ rx->rx_desc_err_dropped_pkt++;
+ u64_stats_update_end(&rx->statss);
+--
+2.43.0
+
diff --git a/queue-5.15/series b/queue-5.15/series
index 1af6af8c254..ab6329c4e1d 100644
--- a/queue-5.15/series
+++ b/queue-5.15/series
@@ -285,3 +285,8 @@ perf-core-fix-missing-wakeup-when-waiting-for-contex.patch
 pci-add-pci_error_response-and-related-definitions.patch
 x86-amd_nb-check-for-invalid-smn-reads.patch
 smb-client-fix-deadlock-in-smb2_find_smb_tcon.patch
+x86-mm-numa-use-numa_no_node-when-calling-memblock_s.patch
+acpi-x86-utils-add-picasso-to-the-list-for-forcing-s.patch
+acpi-x86-force-storaged3enable-on-more-products.patch
+gve-add-rx-context.patch
+gve-clear-napi-skb-before-dev_kfree_skb_any.patch
diff --git a/queue-5.15/x86-mm-numa-use-numa_no_node-when-calling-memblock_s.patch b/queue-5.15/x86-mm-numa-use-numa_no_node-when-calling-memblock_s.patch
new file mode 100644
index 00000000000..3b566c56706
--- /dev/null
+++ b/queue-5.15/x86-mm-numa-use-numa_no_node-when-calling-memblock_s.patch
@@ -0,0 +1,58 @@
+From 4c108e2a4120d33de74e043cb98d505fc261dd3e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 29 May 2024 09:42:05 +0200
+Subject: x86/mm/numa: Use NUMA_NO_NODE when calling memblock_set_node()
+
+From: Jan Beulich
+
+[ Upstream commit 3ac36aa7307363b7247ccb6f6a804e11496b2b36 ]
+
+memblock_set_node() warns about using MAX_NUMNODES, see
+
+ e0eec24e2e19 ("memblock: make memblock_set_node() also warn about use of MAX_NUMNODES")
+
+for details.
+
+Reported-by: Narasimhan V
+Signed-off-by: Jan Beulich
+Cc: stable@vger.kernel.org
+[bp: commit message]
+Signed-off-by: Borislav Petkov (AMD)
+Reviewed-by: Mike Rapoport (IBM)
+Tested-by: Paul E. McKenney
+Link: https://lore.kernel.org/r/20240603141005.23261-1-bp@kernel.org
+Link: https://lore.kernel.org/r/abadb736-a239-49e4-ab42-ace7acdd4278@suse.com
+Signed-off-by: Mike Rapoport (IBM)
+Signed-off-by: Sasha Levin
+---
+ arch/x86/mm/numa.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
+index 1a1c0c242f272..d074a1b4f976c 100644
+--- a/arch/x86/mm/numa.c
++++ b/arch/x86/mm/numa.c
+@@ -522,7 +522,7 @@ static void __init numa_clear_kernel_node_hotplug(void)
+ for_each_reserved_mem_region(mb_region) {
+ int nid = memblock_get_region_node(mb_region);
+
+- if (nid != MAX_NUMNODES)
++ if (nid != NUMA_NO_NODE)
+ node_set(nid, reserved_nodemask);
+ }
+
+@@ -642,9 +642,9 @@ static int __init numa_init(int (*init_func)(void))
+ nodes_clear(node_online_map);
+ memset(&numa_meminfo, 0, sizeof(numa_meminfo));
+ WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.memory,
+- MAX_NUMNODES));
++ NUMA_NO_NODE));
+ WARN_ON(memblock_set_node(0, ULLONG_MAX, &memblock.reserved,
+- MAX_NUMNODES));
++ NUMA_NO_NODE));
+ /* In case that parsing SRAT failed. */
+ WARN_ON(memblock_clear_hotplug(0, ULLONG_MAX));
+ numa_reset_distance();
+--
+2.43.0
+