Architectures like powerpc check for pfn_valid() in their virt_to_phys()
implementation (when CONFIG_DEBUG_VIRTUAL is enabled) [1]. Commit
d49004c5f0c1 ("arch, mm: consolidate initialization of nodes, zones and
memory map") changed the order of initialization between
hugetlb_bootmem_alloc() and free_area_init(). This means pfn_valid() can
now return false in the alloc_bootmem() path, since sparse_init() has not
yet run.
Since alloc_bootmem() uses memblock_alloc(.., MEMBLOCK_ALLOC_ACCESSIBLE),
these allocations always land below high_memory, where __pa() returns
valid physical addresses. Hence this patch converts the two callers of
virt_to_phys() in the alloc_bootmem() path to __pa() to avoid this bootup
warning:
------------[ cut here ]------------
WARNING: arch/powerpc/include/asm/io.h:879 at virt_to_phys+0x44/0x1b8, CPU#0: swapper/0
Modules linked in:
<...>
NIP [c000000000601584] virt_to_phys+0x44/0x1b8
LR [c000000004075de4] alloc_bootmem+0x144/0x1a8
Call Trace:
[c000000004d1fb50] [c000000004075dd4] alloc_bootmem+0x134/0x1a8
[c000000004d1fba0] [c000000004075fac] __alloc_bootmem_huge_page+0x164/0x230
[c000000004d1fbe0] [c000000004030bc4] alloc_bootmem_huge_page+0x44/0x138
[c000000004d1fc10] [c000000004076e48] hugetlb_hstate_alloc_pages+0x350/0x5ac
[c000000004d1fd30] [c0000000040782f0] hugetlb_bootmem_alloc+0x15c/0x19c
[c000000004d1fd70] [c00000000406d7b4] mm_core_init_early+0x7c/0xdf4
[c000000004d1ff30] [c000000004011d84] start_kernel+0xac/0xc58
[c000000004d1ffe0] [c00000000000e99c] start_here_common+0x1c/0x20
[1]: https://lore.kernel.org/linuxppc-dev/87tsv5h544.ritesh.list@gmail.com/
Link: https://lkml.kernel.org/r/b4a7d2c6c4c1dd81dddc904fc21f01303290a4b8.1772107852.git.riteshh@linux.ibm.com
Fixes: d49004c5f0c1 ("arch, mm: consolidate initialization of nodes, zones and memory map")
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
Cc: David Hildenbrand <david@kernel.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Oscar Salvador <osalvador@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
	 * extract the actual node first.
	 */
	if (m)
-		listnode = early_pfn_to_nid(PHYS_PFN(virt_to_phys(m)));
+		listnode = early_pfn_to_nid(PHYS_PFN(__pa(m)));
}

if (m) {
	/*
	 * The head struct page is used to get folio information by the HugeTLB
	 * subsystem like zone id and node id.
	 */
-	memblock_reserved_mark_noinit(virt_to_phys((void *)m + PAGE_SIZE),
+	memblock_reserved_mark_noinit(__pa((void *)m + PAGE_SIZE),
				      huge_page_size(h) - PAGE_SIZE);
	return 1;