arm/memremap: fix arch_memremap_can_ram_remap()
Commit 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to
ensure presence of linear map") added the definition of
arch_memremap_can_ram_remap() for arm[64]-specific filtering of which
pages can be used from the linear mapping. memblock_is_map_memory() was
called with the pfn of the address given to arch_memremap_can_ram_remap();
however, on arm memblock_is_map_memory() expects an address, not a pfn.
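
To make the unit mismatch concrete, here is a minimal, self-contained
sketch (plain userspace C, not the kernel code; the helper name and the
region bounds are invented for illustration): a checker that expects a
physical address is handed a pfn, so the lookup lands far below the
region it should match.

	/*
	 * Sketch of the bug: is_map_memory() stands in for
	 * memblock_is_map_memory() and takes an *address*; passing it
	 * a pfn (address >> PAGE_SHIFT) makes the check fail.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	#define PAGE_SHIFT	12
	#define PHYS_PFN(x)	((unsigned long)((x) >> PAGE_SHIFT))

	/* Example mapped region; bounds are arbitrary. */
	static bool is_map_memory(unsigned long addr)
	{
		return addr >= 0x80000000UL && addr < 0xa0000000UL;
	}

	int main(void)
	{
		unsigned long offset = 0x80100000UL; /* inside the region */

		/* Buggy: pfn 0x80100 is far below the region base. */
		printf("pfn  -> %d\n", is_map_memory(PHYS_PFN(offset)));

		/* Fixed: pass the address itself. */
		printf("addr -> %d\n", is_map_memory(offset));

		return 0;
	}

The buggy call prints 0 and the fixed call prints 1, mirroring how the
pfn-based check made memremap() think the range was outside the linear
map.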
This results in memremap() returning a newly mapped area when it
should return an address in the existing linear mapping.
Fix this by removing the address-to-pfn translation and passing the
address directly.
Fixes: 260364d112bc ("arm[64]/memremap: don't abuse pfn_valid() to ensure presence of linear map")
Signed-off-by: Ross Stutterheim <ross.stutterheim@garmin.com>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: stable@vger.kernel.org
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
 bool arch_memremap_can_ram_remap(resource_size_t offset, size_t size,
				  unsigned long flags)
 {
-	unsigned long pfn = PHYS_PFN(offset);
-
-	return memblock_is_map_memory(pfn);
+	return memblock_is_map_memory(offset);
 }