--- /dev/null
+From c0a454b9044fdc99486853aa424e5b3be2107078 Mon Sep 17 00:00:00 2001
+From: Mark Brown <broonie@kernel.org>
+Date: Mon, 5 Sep 2022 15:22:55 +0100
+Subject: arm64/bti: Disable in kernel BTI when cross section thunks are broken
+
+From: Mark Brown <broonie@kernel.org>
+
+commit c0a454b9044fdc99486853aa424e5b3be2107078 upstream.
+
+GCC does not insert a `bti c` instruction at the beginning of a function
+when it believes that all callers reach the function through a direct
+branch[1]. Unfortunately the logic it uses to determine this is not
+sufficiently robust: for example, it does not take account of functions
+being placed in different sections which may be loaded separately, so we may
+still see thunks being generated to these functions. If that happens,
+the first instruction in the callee function will result in a Branch
+Target Exception due to the missing landing pad.
+
+While this has currently only been observed in the case of modules
+having their main code loaded sufficiently far from their init section
+to require thunks, it could potentially happen in other cases too, so the
+safest thing is to disable BTI for the kernel when building with an
+affected toolchain.
+
+[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
+
+Reported-by: D Scott Phillips <scott@os.amperecomputing.com>
+[Bits of the commit message are lifted from his report & workaround]
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Link: https://lore.kernel.org/r/20220905142255.591990-1-broonie@kernel.org
+Cc: <stable@vger.kernel.org> # v5.10+
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/Kconfig | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -1626,6 +1626,8 @@ config ARM64_BTI_KERNEL
+ depends on CC_HAS_BRANCH_PROT_PAC_RET_BTI
+ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=94697
+ depends on !CC_IS_GCC || GCC_VERSION >= 100100
++ # https://gcc.gnu.org/bugzilla/show_bug.cgi?id=106671
++ depends on !CC_IS_GCC
+ # https://github.com/llvm/llvm-project/commit/a88c722e687e6780dcd6a58718350dc76fcc4cc9
+ depends on !CC_IS_CLANG || CLANG_VERSION >= 120000
+ depends on (!FUNCTION_GRAPH_TRACER || DYNAMIC_FTRACE_WITH_REGS)
--- /dev/null
+From e89d120c4b720e232cc6a94f0fcbd59c15d41489 Mon Sep 17 00:00:00 2001
+From: Ionela Voinescu <ionela.voinescu@arm.com>
+Date: Fri, 19 Aug 2022 11:30:50 +0100
+Subject: arm64: errata: add detection for AMEVCNTR01 incrementing incorrectly
+
+From: Ionela Voinescu <ionela.voinescu@arm.com>
+
+commit e89d120c4b720e232cc6a94f0fcbd59c15d41489 upstream.
+
+The AMU counter AMEVCNTR01 (constant counter) should increment at the same
+rate as the system counter. On affected Cortex-A510 cores, AMEVCNTR01
+increments incorrectly giving a significantly higher output value. This
+results in inaccurate task scheduler utilization tracking and incorrect
+feedback on CPU frequency.
+
+Work around this problem by returning 0 when reading the affected counter
+in key locations, which prevents all users of this counter from using it
+either for frequency invariance or as the FFH reference counter. The
+effect is the same as firmware disabling the affected counters.
+
+Details on how the two features are affected by this erratum:
+
+ - AMU counters will not be used for frequency invariance for affected
+ CPUs and CPUs in the same cpufreq policy. AMUs can still be used for
+   frequency invariance for unaffected CPUs in the system. Although
+   unlikely, if no alternative method (either cpufreq based or based on
+   platform counters) can be found to support frequency invariance for
+   affected CPUs, frequency invariance will be disabled. Please check
+ the chapter on frequency invariance at
+ Documentation/scheduler/sched-capacity.rst for details of its effect.
+
+ - Given that FFH can be used to fetch either the core or the constant
+   counter values, the requirement that each of these counters return a
+   valid (!0) value is lifted. FFH is therefore considered supported if
+   there is at least one CPU that supports AMUs, independent of any
+   counters being disabled or affected by this erratum. Clarifying
+   comments are now added to the cpc_ffh_supported(), cpu_read_constcnt()
+   and cpu_read_corecnt() functions.
+
+The above is achieved through adding a new erratum: ARM64_ERRATUM_2457168.
+
+Signed-off-by: Ionela Voinescu <ionela.voinescu@arm.com>
+Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Will Deacon <will@kernel.org>
+Cc: James Morse <james.morse@arm.com>
+Link: https://lore.kernel.org/r/20220819103050.24211-1-ionela.voinescu@arm.com
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/arm64/silicon-errata.rst | 2 ++
+ arch/arm64/Kconfig | 17 +++++++++++++++++
+ arch/arm64/kernel/cpu_errata.c | 9 +++++++++
+ arch/arm64/kernel/cpufeature.c | 5 ++++-
+ arch/arm64/kernel/topology.c | 32 ++++++++++++++++++++++++++++++--
+ arch/arm64/tools/cpucaps | 1 +
+ 6 files changed, 63 insertions(+), 3 deletions(-)
+
+--- a/Documentation/arm64/silicon-errata.rst
++++ b/Documentation/arm64/silicon-errata.rst
+@@ -94,6 +94,8 @@ stable kernels.
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Cortex-A510 | #2441009 | ARM64_ERRATUM_2441009 |
+ +----------------+-----------------+-----------------+-----------------------------+
++| ARM | Cortex-A510 | #2457168 | ARM64_ERRATUM_2457168 |
+++----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Neoverse-N1 | #1188873,1418040| ARM64_ERRATUM_1418040 |
+ +----------------+-----------------+-----------------+-----------------------------+
+ | ARM | Neoverse-N1 | #1349291 | N/A |
+--- a/arch/arm64/Kconfig
++++ b/arch/arm64/Kconfig
+@@ -683,6 +683,23 @@ config ARM64_ERRATUM_2441009
+
+ If unsure, say Y.
+
++config ARM64_ERRATUM_2457168
++ bool "Cortex-A510: 2457168: workaround for AMEVCNTR01 incrementing incorrectly"
++ depends on ARM64_AMU_EXTN
++ default y
++ help
++ This option adds the workaround for ARM Cortex-A510 erratum 2457168.
++
++ The AMU counter AMEVCNTR01 (constant counter) should increment at the same rate
++ as the system counter. On affected Cortex-A510 cores AMEVCNTR01 increments
++ incorrectly giving a significantly higher output value.
++
++ Work around this problem by returning 0 when reading the affected counter in
++ key locations that results in disabling all users of this counter. This effect
++ is the same to firmware disabling affected counters.
++
++ If unsure, say Y.
++
+ config CAVIUM_ERRATUM_22375
+ bool "Cavium erratum 22375, 24313"
+ default y
+--- a/arch/arm64/kernel/cpu_errata.c
++++ b/arch/arm64/kernel/cpu_errata.c
+@@ -551,6 +551,15 @@ const struct arm64_cpu_capabilities arm6
+ ERRATA_MIDR_ALL_VERSIONS(MIDR_NVIDIA_CARMEL),
+ },
+ #endif
++#ifdef CONFIG_ARM64_ERRATUM_2457168
++ {
++ .desc = "ARM erratum 2457168",
++ .capability = ARM64_WORKAROUND_2457168,
++ .type = ARM64_CPUCAP_WEAK_LOCAL_CPU_FEATURE,
++ /* Cortex-A510 r0p0-r1p1 */
++ CAP_MIDR_RANGE(MIDR_CORTEX_A510, 0, 0, 1, 1)
++ },
++#endif
+ {
+ }
+ };
+--- a/arch/arm64/kernel/cpufeature.c
++++ b/arch/arm64/kernel/cpufeature.c
+@@ -1736,7 +1736,10 @@ static void cpu_amu_enable(struct arm64_
+ pr_info("detected CPU%d: Activity Monitors Unit (AMU)\n",
+ smp_processor_id());
+ cpumask_set_cpu(smp_processor_id(), &amu_cpus);
+- update_freq_counters_refs();
++
++ /* 0 reference values signal broken/disabled counters */
++ if (!this_cpu_has_cap(ARM64_WORKAROUND_2457168))
++ update_freq_counters_refs();
+ }
+ }
+
+--- a/arch/arm64/kernel/topology.c
++++ b/arch/arm64/kernel/topology.c
+@@ -308,12 +308,25 @@ core_initcall(init_amu_fie);
+
+ static void cpu_read_corecnt(void *val)
+ {
++ /*
++ * A value of 0 can be returned if the current CPU does not support AMUs
++ * or if the counter is disabled for this CPU. A return value of 0 at
++ * counter read is properly handled as an error case by the users of the
++ * counter.
++ */
+ *(u64 *)val = read_corecnt();
+ }
+
+ static void cpu_read_constcnt(void *val)
+ {
+- *(u64 *)val = read_constcnt();
++ /*
++ * Return 0 if the current CPU is affected by erratum 2457168. A value
++ * of 0 is also returned if the current CPU does not support AMUs or if
++ * the counter is disabled. A return value of 0 at counter read is
++ * properly handled as an error case by the users of the counter.
++ */
++ *(u64 *)val = this_cpu_has_cap(ARM64_WORKAROUND_2457168) ?
++ 0UL : read_constcnt();
+ }
+
+ static inline
+@@ -340,7 +353,22 @@ int counters_read_on_cpu(int cpu, smp_ca
+ */
+ bool cpc_ffh_supported(void)
+ {
+- return freq_counters_valid(get_cpu_with_amu_feat());
++ int cpu = get_cpu_with_amu_feat();
++
++ /*
++ * FFH is considered supported if there is at least one present CPU that
++ * supports AMUs. Using FFH to read core and reference counters for CPUs
++ * that do not support AMUs, have counters disabled or that are affected
++ * by errata, will result in a return value of 0.
++ *
++ * This is done to allow any enabled and valid counters to be read
++ * through FFH, knowing that potentially returning 0 as counter value is
++ * properly handled by the users of these counters.
++ */
++ if ((cpu >= nr_cpu_ids) || !cpumask_test_cpu(cpu, cpu_present_mask))
++ return false;
++
++ return true;
+ }
+
+ int cpc_read_ffh(int cpu, struct cpc_reg *reg, u64 *val)
+--- a/arch/arm64/tools/cpucaps
++++ b/arch/arm64/tools/cpucaps
+@@ -54,6 +54,7 @@ WORKAROUND_1418040
+ WORKAROUND_1463225
+ WORKAROUND_1508412
+ WORKAROUND_1542419
++WORKAROUND_2457168
+ WORKAROUND_CAVIUM_23154
+ WORKAROUND_CAVIUM_27456
+ WORKAROUND_CAVIUM_30115
--- /dev/null
+From 53fc7ad6edf210b497230ce74b61b322a202470c Mon Sep 17 00:00:00 2001
+From: Lu Baolu <baolu.lu@linux.intel.com>
+Date: Tue, 23 Aug 2022 14:15:55 +0800
+Subject: iommu/vt-d: Correctly calculate sagaw value of IOMMU
+
+From: Lu Baolu <baolu.lu@linux.intel.com>
+
+commit 53fc7ad6edf210b497230ce74b61b322a202470c upstream.
+
+The Intel IOMMU driver possibly selects between the first-level and the
+second-level translation tables for DMA address translation. However,
+the levels of page-table walks for the 4KB base page size are calculated
+from the SAGAW field of the capability register, which is only valid for
+the second-level page table. This causes the IOMMU driver to stop working
+if the hardware (or the emulated IOMMU) advertises only first-level
+translation capability and reports the SAGAW field as 0.
+
+Fix the above problem by considering both the first level and the second
+level when calculating the supported page-table levels.
+
+Fixes: b802d070a52a1 ("iommu/vt-d: Use iova over first level")
+Cc: stable@vger.kernel.org
+Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
+Link: https://lore.kernel.org/r/20220817023558.3253263-1-baolu.lu@linux.intel.com
+Signed-off-by: Joerg Roedel <jroedel@suse.de>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/iommu/intel/iommu.c | 28 +++++++++++++++++++++++++---
+ 1 file changed, 25 insertions(+), 3 deletions(-)
+
+--- a/drivers/iommu/intel/iommu.c
++++ b/drivers/iommu/intel/iommu.c
+@@ -542,14 +542,36 @@ static inline int domain_pfn_supported(s
+ return !(addr_width < BITS_PER_LONG && pfn >> addr_width);
+ }
+
++/*
++ * Calculate the Supported Adjusted Guest Address Widths of an IOMMU.
++ * Refer to 11.4.2 of the VT-d spec for the encoding of each bit of
++ * the returned SAGAW.
++ */
++static unsigned long __iommu_calculate_sagaw(struct intel_iommu *iommu)
++{
++ unsigned long fl_sagaw, sl_sagaw;
++
++ fl_sagaw = BIT(2) | (cap_fl1gp_support(iommu->cap) ? BIT(3) : 0);
++ sl_sagaw = cap_sagaw(iommu->cap);
++
++ /* Second level only. */
++ if (!sm_supported(iommu) || !ecap_flts(iommu->ecap))
++ return sl_sagaw;
++
++ /* First level only. */
++ if (!ecap_slts(iommu->ecap))
++ return fl_sagaw;
++
++ return fl_sagaw & sl_sagaw;
++}
++
+ static int __iommu_calculate_agaw(struct intel_iommu *iommu, int max_gaw)
+ {
+ unsigned long sagaw;
+ int agaw;
+
+- sagaw = cap_sagaw(iommu->cap);
+- for (agaw = width_to_agaw(max_gaw);
+- agaw >= 0; agaw--) {
++ sagaw = __iommu_calculate_sagaw(iommu);
++ for (agaw = width_to_agaw(max_gaw); agaw >= 0; agaw--) {
+ if (test_bit(agaw, &sagaw))
+ break;
+ }