perf-arm_pmu-don-t-disable-counter-in-armpmu_add.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
hid-pidff-convert-infinite-length-from-linux-api-to-.patch
hid-pidff-do-not-send-effect-envelope-if-it-s-empty.patch
hid-pidff-fix-null-pointer-dereference-in-pidff_find.patch
+++ /dev/null
-From 5b511ca368ba896f40559c7639510ed199bf2a1e Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 968d7005f4a72..2f383e288c430 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -27,9 +27,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pud_t *pud = pud_page + pud_index(addr);
- pmd_t *pmd;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- if (info->direct_gbpages) {
- pud_t pudval;
-@@ -68,10 +66,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -113,10 +108,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
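The hunks above replace a mask-and-add computation of 'next' with p?d_addr_end(). Below is a minimal 64-bit userspace sketch of the difference; PUD_SIZE/PUD_MASK are stand-in constants modelled on x86-64 1 GiB PUD entries and next_manual()/next_addr_end() are local helper names chosen for illustration, not the kernel's own headers or APIs.

/*
 * Minimal 64-bit userspace sketch (not kernel code): compare the removed
 * mask-and-add 'next' computation with a p?d_addr_end()-style one.
 * PUD_SIZE/PUD_MASK mirror x86-64 1 GiB PUD entries; the helper names
 * are local stand-ins for illustration only.
 */
#include <stdint.h>
#include <stdio.h>

#define PUD_SIZE ((uint64_t)1 << 30)
#define PUD_MASK (~(PUD_SIZE - 1))

/* Removed style: mask-and-add, then clamp; the add can wrap to 0. */
static uint64_t next_manual(uint64_t addr, uint64_t end)
{
	uint64_t next = (addr & PUD_MASK) + PUD_SIZE;

	if (next > end)		/* after a wrap, 0 > end is false, so no clamp */
		next = end;
	return next;
}

/* Replacement style: a p?d_addr_end()-like comparison that survives the wrap. */
static uint64_t next_addr_end(uint64_t addr, uint64_t end)
{
	uint64_t boundary = (addr + PUD_SIZE) & PUD_MASK;

	return (boundary - 1 < end - 1) ? boundary : end;
}

int main(void)
{
	uint64_t addr = 0xffffffffc0000000ULL;	/* inside the top PUD-sized slot */
	uint64_t end  = 0xfffffffffffff000ULL;	/* range runs almost to the top */

	printf("manual:   next = %#llx\n", (unsigned long long)next_manual(addr, end));
	printf("addr_end: next = %#llx\n", (unsigned long long)next_addr_end(addr, end));
	return 0;
}

With these inputs the old-style calculation wraps to 0 and the clamp is never taken, while the addr_end-style comparison returns 'end', because 'boundary - 1' becomes the all-ones value after the wrap and therefore never compares below 'end - 1'.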
perf-arm_pmu-don-t-disable-counter-in-armpmu_add.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
hid-pidff-convert-infinite-length-from-linux-api-to-.patch
hid-pidff-do-not-send-effect-envelope-if-it-s-empty.patch
hid-pidff-fix-null-pointer-dereference-in-pidff_find.patch
+++ /dev/null
-From f915f4bd41ad619ca5a481a67c0f10b3c63028b2 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 968d7005f4a72..2f383e288c430 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -27,9 +27,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pud_t *pud = pud_page + pud_index(addr);
- pmd_t *pmd;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- if (info->direct_gbpages) {
- pud_t pudval;
-@@ -68,10 +66,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -113,10 +108,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
x86-cpu-don-t-clear-x86_feature_lahf_lm-flag-in-init.patch
perf-arm_pmu-don-t-disable-counter-in-armpmu_add.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
hid-pidff-convert-infinite-length-from-linux-api-to-.patch
hid-pidff-do-not-send-effect-envelope-if-it-s-empty.patch
hid-pidff-fix-null-pointer-dereference-in-pidff_find.patch
+++ /dev/null
-From c838fe3ec8d4e3f10a5e784fb8caad02e6aa9683 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 968d7005f4a72..2f383e288c430 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -27,9 +27,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pud_t *pud = pud_page + pud_index(addr);
- pmd_t *pmd;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- if (info->direct_gbpages) {
- pud_t pudval;
-@@ -68,10 +66,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -113,10 +108,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
perf-arm_pmu-don-t-disable-counter-in-armpmu_add.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
hid-pidff-convert-infinite-length-from-linux-api-to-.patch
hid-pidff-do-not-send-effect-envelope-if-it-s-empty.patch
hid-pidff-fix-null-pointer-dereference-in-pidff_find.patch
+++ /dev/null
-From cd3413b149bb9ca0c632f403a6ef0f6ab9e29ad9 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 968d7005f4a72..2f383e288c430 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -27,9 +27,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pud_t *pud = pud_page + pud_index(addr);
- pmd_t *pmd;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- if (info->direct_gbpages) {
- pud_t pudval;
-@@ -68,10 +66,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -113,10 +108,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
flush-console-log-from-kernel_power_off.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
zstd-increase-dynamic_bmi2-gcc-version-cutoff-from-4.patch
platform-chrome-cros_ec_lpc-match-on-framework-acpi-.patch
asoc-sof-topology-use-krealloc_array-to-replace-krea.patch
+++ /dev/null
-From 8e1fbc0ef68e8d7e62211c000c46c6342d61eee7 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 5ab7bd2f1983c..bd5d101c5c379 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pmd_t *pmd;
- bool use_gbpage;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- /* if this is already a gbpage, this portion is already mapped */
- if (pud_leaf(*pud))
-@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
flush-console-log-from-kernel_power_off.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
zstd-increase-dynamic_bmi2-gcc-version-cutoff-from-4.patch
tracing-disable-branch-profiling-in-noinstr-code.patch
platform-chrome-cros_ec_lpc-match-on-framework-acpi-.patch
+++ /dev/null
-From 7128b2f4e41a2b021024562af7b93cfd788d2b46 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 5ab7bd2f1983c..bd5d101c5c379 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pmd_t *pmd;
- bool use_gbpage;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- /* if this is already a gbpage, this portion is already mapped */
- if (pud_leaf(*pud))
-@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
cpufreq-amd-pstate-invalidate-cppc_req_cached-during.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
zstd-increase-dynamic_bmi2-gcc-version-cutoff-from-4.patch
tracing-disable-branch-profiling-in-noinstr-code.patch
platform-chrome-cros_ec_lpc-match-on-framework-acpi-.patch
+++ /dev/null
-From 271133a4928763e7abb7c4f3aad1bb3e515e0805 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index 5ab7bd2f1983c..bd5d101c5c379 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -101,9 +101,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pmd_t *pmd;
- bool use_gbpage;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- /* if this is already a gbpage, this portion is already mapped */
- if (pud_leaf(*pud))
-@@ -154,10 +152,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -199,10 +194,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
perf-arm_pmu-don-t-disable-counter-in-armpmu_add.patch
arm64-cputype-add-qcom_cpu_part_kryo_3xx_gold.patch
xen-mcelog-add-__nonstring-annotations-for-untermina.patch
-x86-mm-ident_map-fix-theoretical-virtual-address-ove.patch
zstd-increase-dynamic_bmi2-gcc-version-cutoff-from-4.patch
asoc-sof-topology-use-krealloc_array-to-replace-krea.patch
hid-pidff-convert-infinite-length-from-linux-api-to-.patch
+++ /dev/null
-From 9a43b0aa210e8eb94b82fac7690a126290d8a755 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Wed, 16 Oct 2024 14:14:55 +0300
-Subject: x86/mm/ident_map: Fix theoretical virtual address overflow to zero
-
-From: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-
-[ Upstream commit f666c92090a41ac5524dade63ff96b3adcf8c2ab ]
-
-The current calculation of the 'next' virtual address in the
-page table initialization functions in arch/x86/mm/ident_map.c
-doesn't protect against wrapping to zero.
-
-This is a theoretical issue that cannot happen currently,
-the problematic case is possible only if the user sets a
-high enough x86_mapping_info::offset value - which no
-current code in the upstream kernel does.
-
-( The wrapping to zero only occurs if the top PGD entry is accessed.
- There are no such users upstream. Only hibernate_64.c uses
- x86_mapping_info::offset, and it operates on the direct mapping
- range, which is not the top PGD entry. )
-
-Should such an overflow happen, it can result in page table
-corruption and a hang.
-
-To future-proof this code, replace the manual 'next' calculation
-with p?d_addr_end() which handles wrapping correctly.
-
-[ Backporter's note: there's no need to backport this patch. ]
-
-Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Reviewed-by: Kai Huang <kai.huang@intel.com>
-Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
-Cc: Andy Lutomirski <luto@kernel.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Link: https://lore.kernel.org/r/20241016111458.846228-2-kirill.shutemov@linux.intel.com
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/mm/ident_map.c | 14 +++-----------
- 1 file changed, 3 insertions(+), 11 deletions(-)
-
-diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
-index fe0b2e66ded93..7adf7001473e7 100644
---- a/arch/x86/mm/ident_map.c
-+++ b/arch/x86/mm/ident_map.c
-@@ -28,9 +28,7 @@ static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- pmd_t *pmd;
- bool use_gbpage;
-
-- next = (addr & PUD_MASK) + PUD_SIZE;
-- if (next > end)
-- next = end;
-+ next = pud_addr_end(addr, end);
-
- /* if this is already a gbpage, this portion is already mapped */
- if (pud_leaf(*pud))
-@@ -81,10 +79,7 @@ static int ident_p4d_init(struct x86_mapping_info *info, p4d_t *p4d_page,
- p4d_t *p4d = p4d_page + p4d_index(addr);
- pud_t *pud;
-
-- next = (addr & P4D_MASK) + P4D_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = p4d_addr_end(addr, end);
- if (p4d_present(*p4d)) {
- pud = pud_offset(p4d, 0);
- result = ident_pud_init(info, pud, addr, next);
-@@ -126,10 +121,7 @@ int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- pgd_t *pgd = pgd_page + pgd_index(addr);
- p4d_t *p4d;
-
-- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
-- if (next > end)
-- next = end;
--
-+ next = pgd_addr_end(addr, end);
- if (pgd_present(*pgd)) {
- p4d = p4d_offset(pgd, 0);
- result = ident_p4d_init(info, p4d, addr, next);
---
-2.39.5
-
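For the "page table corruption and a hang" consequence mentioned in the quoted commit message: the wrapped 'next' feeds back into the outer walk loop of kernel_ident_mapping_init(), so the following iteration restarts at address 0. Below is a small standalone simulation of that loop shape, reconstructed from the hunk context above; PGDIR_SIZE is a stand-in for x86-64 4-level paging, and the iteration cap is artificial, only to keep the demo finite.

/*
 * Standalone simulation (not kernel code) of the outer walk loop shape in
 * kernel_ident_mapping_init() with the removed 'next' computation.  The
 * iteration cap is artificial, only to keep the demo finite; the real walk
 * has no such cap.  PGDIR_SIZE mirrors x86-64 4-level paging (512 GiB).
 */
#include <stdint.h>
#include <stdio.h>

#define PGDIR_SIZE ((uint64_t)1 << 39)
#define PGDIR_MASK (~(PGDIR_SIZE - 1))

int main(void)
{
	uint64_t addr = 0xffffff8000000000ULL;	/* start of the top PGD slot */
	uint64_t end  = 0xfffffffffffff000ULL;
	uint64_t next;
	int i;

	for (i = 0; addr < end && i < 4; addr = next, i++) {
		next = (addr & PGDIR_MASK) + PGDIR_SIZE;	/* wraps to 0 here */
		if (next > end)
			next = end;
		printf("populate [%#llx, %#llx)\n",
		       (unsigned long long)addr, (unsigned long long)next);
	}
	return 0;
}

The first line printed covers the requested top-of-address-space range; the following lines show the walk starting over from 0 and marching upward, which in the real code means allocating and populating page tables far outside the requested range.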