From 903e6b636ff957419a5c19368ffc685536b7c3b6 Mon Sep 17 00:00:00 2001
From: Sasha Levin <sashal@kernel.org>
Date: Mon, 25 Mar 2024 11:47:51 +0100
Subject: Revert "x86/mm/ident_map: Use gbpages only where full GB page should
 be mapped."

From: Ingo Molnar <mingo@kernel.org>

[ Upstream commit c567f2948f57bdc03ed03403ae0234085f376b7d ]

This reverts commit d794734c9bbfe22f86686dc2909c25f5ffe1a572.

While the original change tried to fix a bug, it also unintentionally broke
existing systems; see the regressions reported at:

  https://lore.kernel.org/all/3a1b9909-45ac-4f97-ad68-d16ef1ce99db@pavinjoseph.com/

Since d794734c9bbf was also marked for -stable, let's back it out before
causing more damage.
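
To see the behavioral difference, consider a request that does not cover a
full 1 GiB range. With d794734c9bbf applied, ident_pud_init() used a gbpage
only when the request covered the whole 1 GiB region, roughly (condensed
from the removed lines in the hunk below):

	use_gbpage = info->direct_gbpages &&
		     ((addr & ~PUD_MASK) == 0) &&	/* GB-aligned start */
		     ((next & ~PUD_MASK) == 0) &&	/* GB-aligned end */
		     !pud_present(*pud);		/* nothing mapped yet */

so a request such as 0x40000000..0x40200000 fell back to 2 MiB pmds. With
this revert, info->direct_gbpages alone decides: the start address is
rounded down (addr &= PUD_MASK) and the whole containing GB is mapped by a
single gbpage, restoring the pre-d794734c9bbf behavior.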

Note that due to another upstream change the revert was not 100% automatic:

  0a845e0f6348 mm/treewide: replace pud_large() with pud_leaf()
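
In this tree the conflict is only cosmetic: the hunk below removes a
pud_leaf() check where the original d794734c9bbf had added pud_large(),
i.e.:

	if (pud_leaf(*pud))	/* originally: pud_large(*pud) */
		continue;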

Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Cc: Russ Anderson <rja@hpe.com>
Cc: Steve Wahl <steve.wahl@hpe.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Link: https://lore.kernel.org/all/3a1b9909-45ac-4f97-ad68-d16ef1ce99db@pavinjoseph.com/
Fixes: d794734c9bbf ("x86/mm/ident_map: Use gbpages only where full GB page should be mapped.")
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/mm/ident_map.c | 23 +++++------------------
 1 file changed, 5 insertions(+), 18 deletions(-)

diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
index a204a332c71fc..968d7005f4a72 100644
--- a/arch/x86/mm/ident_map.c
+++ b/arch/x86/mm/ident_map.c
@@ -26,31 +26,18 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
 	for (; addr < end; addr = next) {
 		pud_t *pud = pud_page + pud_index(addr);
 		pmd_t *pmd;
-		bool use_gbpage;
 
 		next = (addr & PUD_MASK) + PUD_SIZE;
 		if (next > end)
 			next = end;
 
-		/* if this is already a gbpage, this portion is already mapped */
-		if (pud_leaf(*pud))
-			continue;
-
-		/* Is using a gbpage allowed? */
-		use_gbpage = info->direct_gbpages;
-
-		/* Don't use gbpage if it maps more than the requested region. */
-		/* at the begining: */
-		use_gbpage &= ((addr & ~PUD_MASK) == 0);
-		/* ... or at the end: */
-		use_gbpage &= ((next & ~PUD_MASK) == 0);
-
-		/* Never overwrite existing mappings */
-		use_gbpage &= !pud_present(*pud);
-
-		if (use_gbpage) {
+		if (info->direct_gbpages) {
 			pud_t pudval;
 
+			if (pud_present(*pud))
+				continue;
+
+			addr &= PUD_MASK;
 			pudval = __pud((addr - info->offset) | info->page_flag);
 			set_pud(pud, pudval);
 			continue;
--
2.43.0
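
For reference, the mapping loop in ident_pud_init() after this revert reads
as follows (reconstructed from the context and '+' lines above; the 2 MiB
pmd fallback is elided):

	for (; addr < end; addr = next) {
		pud_t *pud = pud_page + pud_index(addr);
		pmd_t *pmd;

		/* One PUD entry spans 1 GiB of address space. */
		next = (addr & PUD_MASK) + PUD_SIZE;
		if (next > end)
			next = end;

		if (info->direct_gbpages) {
			pud_t pudval;

			/* Never overwrite an existing mapping. */
			if (pud_present(*pud))
				continue;

			/*
			 * Round down so the single gbpage covers the whole
			 * 1 GiB region, even for a partial request.
			 */
			addr &= PUD_MASK;
			pudval = __pud((addr - info->offset) | info->page_flag);
			set_pud(pud, pudval);
			continue;
		}

		/* ... otherwise fall back to 2 MiB pmd mappings ... */
	}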