--- /dev/null
+From 4a65dff81a04f874fa6915c7f069b4dc2c4010e4 Mon Sep 17 00:00:00 2001
+From: Georg Kohmann <geokohma@cisco.com>
+Date: Wed, 7 Oct 2020 14:53:02 +0200
+Subject: net: ipv6: Discard next-hop MTU less than minimum link MTU
+
+From: Georg Kohmann <geokohma@cisco.com>
+
+commit 4a65dff81a04f874fa6915c7f069b4dc2c4010e4 upstream.
+
+When an ICMPV6_PKT_TOOBIG message reports a next-hop MTU that is less
+than the IPv6 minimum link MTU, the estimated path MTU is reduced to the
+minimum link MTU. This behaviour breaks TAHI IPv6 Core Conformance Test
+v6LC4.1.6: Packet Too Big Less than IPv6 MTU.
+
+Referring to RFC 8201 section 4: "If a node receives a Packet Too Big
+message reporting a next-hop MTU that is less than the IPv6 minimum link
+MTU, it must discard it. A node must not reduce its estimate of the Path
+MTU below the IPv6 minimum link MTU on receipt of a Packet Too Big
+message."
+
+Drop the path MTU update if the reported MTU is less than the minimum
+link MTU.
+
+Signed-off-by: Georg Kohmann <geokohma@cisco.com>
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: WangYuli <wangyuli@uniontech.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/ipv6/route.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/net/ipv6/route.c
++++ b/net/ipv6/route.c
+@@ -2765,7 +2765,8 @@ static void __ip6_rt_update_pmtu(struct
+ if (confirm_neigh)
+ dst_confirm_neigh(dst, daddr);
+
+- mtu = max_t(u32, mtu, IPV6_MIN_MTU);
++ if (mtu < IPV6_MIN_MTU)
++ return;
+ if (mtu >= dst_mtu(dst))
+ return;
+
--- /dev/null
+From 76303ee8d54bff6d9a6d55997acd88a6c2ba63cf Mon Sep 17 00:00:00 2001
+From: Jann Horn <jannh@google.com>
+Date: Wed, 2 Jul 2025 10:32:04 +0200
+Subject: x86/mm: Disable hugetlb page table sharing on 32-bit
+
+From: Jann Horn <jannh@google.com>
+
+commit 76303ee8d54bff6d9a6d55997acd88a6c2ba63cf upstream.
+
+Only select ARCH_WANT_HUGE_PMD_SHARE on 64-bit x86.
+Page table sharing requires at least three levels because it involves
+shared references to PMD tables; 32-bit x86 has either two-level paging
+(without PAE) or three-level paging (with PAE), but even with
+three-level paging, having a dedicated PGD entry for hugetlb is only
+barely possible (because the PGD only has four entries), and it seems
+unlikely anyone's actually using PMD sharing on 32-bit.
+
+Having ARCH_WANT_HUGE_PMD_SHARE enabled on non-PAE 32-bit X86 (which
+has 2-level paging) became particularly problematic after commit
+59d9094df3d7 ("mm: hugetlb: independent PMD page table shared count"),
+since that changes `struct ptdesc` such that the `pt_mm` (for PGDs) and
+the `pt_share_count` (for PMDs) share the same union storage - and with
+2-level paging, PMDs are PGDs.
+
+(For comparison, arm64 also gates ARCH_WANT_HUGE_PMD_SHARE on the
+configuration of page tables such that it is never enabled with 2-level
+paging.)
+
+Closes: https://lore.kernel.org/r/srhpjxlqfna67blvma5frmy3aa@altlinux.org
+Fixes: cfe28c5d63d8 ("x86: mm: Remove x86 version of huge_pmd_share.")
+Reported-by: Vitaly Chikunov <vt@altlinux.org>
+Suggested-by: Dave Hansen <dave.hansen@intel.com>
+Signed-off-by: Jann Horn <jannh@google.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Acked-by: Oscar Salvador <osalvador@suse.de>
+Acked-by: David Hildenbrand <david@redhat.com>
+Tested-by: Vitaly Chikunov <vt@altlinux.org>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/all/20250702-x86-2level-hugetlb-v2-1-1a98096edf92%40google.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/Kconfig | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/Kconfig
++++ b/arch/x86/Kconfig
+@@ -95,7 +95,7 @@ config X86
+ select ARCH_USE_QUEUED_SPINLOCKS
+ select ARCH_WANT_BATCHED_UNMAP_TLB_FLUSH
+ select ARCH_WANTS_DYNAMIC_TASK_STRUCT
+- select ARCH_WANT_HUGE_PMD_SHARE
++ select ARCH_WANT_HUGE_PMD_SHARE if X86_64
+ select ARCH_WANTS_THP_SWAP if X86_64
+ select BUILDTIME_EXTABLE_SORT
+ select CLKEVT_I8253