--- /dev/null
+From 566579652bc3fcc3d9489416730e6336493126be Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 11 Jan 2024 12:56:36 +0100
+Subject: arm64: dts: broadcom: bcmbca: bcm4908: drop invalid switch cells
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Rafał Miłecki <rafal@milecki.pl>
+
+[ Upstream commit 27058b95fbb784406ea4c40b20caa3f04937140c ]
+
+The Ethernet switch node does not have addressable subnodes.
+
+This fixes:
+arch/arm64/boot/dts/broadcom/bcmbca/bcm4908-asus-gt-ac5300.dtb: ethernet-switch@0: '#address-cells', '#size-cells' do not match any of the regexes: 'pinctrl-[0-9]+'
+ from schema $id: http://devicetree.org/schemas/net/dsa/brcm,sf2.yaml#
+
+Fixes: 527a3ac9bdf8 ("arm64: dts: broadcom: bcm4908: describe internal switch")
+Signed-off-by: Rafał Miłecki <rafal@milecki.pl>
+Link: https://lore.kernel.org/r/20240111115636.12095-1-zajec5@gmail.com
+Signed-off-by: Florian Fainelli <florian.fainelli@broadcom.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi | 3 ---
+ 1 file changed, 3 deletions(-)
+
+diff --git a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+index df71348542064..a4c5a38905b03 100644
+--- a/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
++++ b/arch/arm64/boot/dts/broadcom/bcmbca/bcm4908.dtsi
+@@ -180,9 +180,6 @@
+ brcm,num-gphy = <5>;
+ brcm,num-rgmii-ports = <2>;
+
+- #address-cells = <1>;
+- #size-cells = <0>;
+-
+ ports: ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+--
+2.43.0
+
--- /dev/null
+From 5090df00a60f2e200d1579368c0d5a18a6bbcba9 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 5 Mar 2024 15:36:28 +0100
+Subject: ASoC: rockchip: i2s-tdm: Fix inaccurate sampling rates
+
+From: Luca Ceresoli <luca.ceresoli@bootlin.com>
+
+[ Upstream commit 9e2ab4b18ebd46813fc3459207335af4d368e323 ]
+
+The sample rates set by the rockchip_i2s_tdm driver in master mode are
+inaccurate by up to 5% in several cases, due to the driver's clock
+configuration logic and a nasty interaction with the Common Clock Framework.
+
+To understand what happens, here is the relevant section of the clock tree
+(slightly simplified), along with the names used in the driver:
+
+ vpll0 _OR_ vpll1 "mclk_root"
+ clk_i2s2_8ch_tx_src "mclk_parent"
+ clk_i2s2_8ch_tx_mux
+ clk_i2s2_8ch_tx "mclk" or "mclk_tx"
+
+This is what happens when playing back e.g. at 192 kHz using
+audio-graph-card (when recording the same applies, only s/tx/rx/):
+
+ 0. at probe, rockchip_i2s_tdm_set_sysclk() stores the passed frequency in
+ i2s_tdm->mclk_tx_freq (*) which is 50176000, and that is never modified
+ afterwards
+
+ 1. when playback is started, rockchip_i2s_tdm_hw_params() is called and
+ does the following two calls
+
+ 2. rockchip_i2s_tdm_calibrate_mclk():
+
+ 2a. selects mclk_root0 (vpll0) as a parent for mclk_parent
+     (mclk_tx_src), which is OK because the vpll0 rate is a good fit for
+     192000 (and submultiple) rates
+
+ 2b. sets the mclk_root frequency based on ppm calibration computations
+
+ 2c. sets mclk_tx_src to 49152000 (= 256 * 192000), which is also OK as
+ it is a multiple of the required bit clock
+
+ 3. rockchip_i2s_tdm_set_mclk()
+
+ 3a. calls clk_set_rate() to set the rate of mclk_tx (clk_i2s2_8ch_tx)
+ to the value of i2s_tdm->mclk_tx_freq (*), i.e. 50176000 which is
+ not a multiple of the sampling frequency -- this is not OK
+
+ 3a1. clk_set_rate() reacts by reparenting clk_i2s2_8ch_tx_src to
+ vpll1 -- this is not OK because the default vpll1 rate can be
+ divided to get 44.1 kHz and related rates, not 192 kHz
+
+The result is that the driver makes a lot of ad-hoc decisions about clocks
+and ends up using the wrong parent at a suboptimal rate.
+
+Step 0 is one part of the problem: unless the card driver calls set_sysclk
+at each stream start, whatever rate is set in mclk_tx_freq during boot will
+be taken and used until reboot. Moreover the driver does not care if its
+value is not a multiple of any audio frequency.
+
+Another part of the problem is that the whole reparenting and clock rate
+setting logic is conflicting with the CCF algorithms to achieve largely the
+same goal: selecting the best parent and setting the closest clock
+rate. And it turns out that a single call to clk_set_rate() on
+clk_i2s2_8ch_tx picks the correct vpll and sets the correct rate.
+
+The fix is based on removing the custom logic in the driver to select the
+parent and set the various clocks, and just let the Clock Framework do it
+all. As a side effect, the set_sysclk() op becomes useless because we now
+let the CCF compute the appropriate value for the sampling rate. It also
+implies that the whole calibration logic is now dead code and so it is
+removed along with the "PCM Clock Compensation in PPM" kcontrol, which has
+always been broken anyway. The handling of the 4 optional clocks also
+becomes dead code and is removed.
+
+The actual rates have been tested playing 30 seconds of audio at various
+sampling rates before and after this change using sox:
+
+ time play -r <sample_rate> -n synth 30 sine 950 gain -3
+
+The time reported in the table below is the 'real' value reported by the
+'time' command in the above command line.
+
+ rate before after
+ --------- ------ ------
+ 8000 Hz 30.60s 30.63s
+ 11025 Hz 30.45s 30.51s
+ 16000 Hz 30.47s 30.50s
+ 22050 Hz 30.78s 30.41s
+ 32000 Hz 31.02s 30.43s
+ 44100 Hz 30.78s 30.41s
+ 48000 Hz 29.81s 30.45s
+ 88200 Hz 30.78s 30.41s
+ 96000 Hz 29.79s 30.42s
+ 176400 Hz 27.40s 30.41s
+ 192000 Hz 29.79s 30.42s
+
+While the tests are running the clock tree confirms that:
+
+ * without the patch, vpll1 is always used and clk_i2s2_8ch_tx always
+ produces 50176000 Hz, which cannot be divided for most audio rates
+ except the slowest ones, generating inaccurate rates
+ * with the patch:
+ - for 192000 Hz vpll0 is used
+ - for 176400 Hz vpll1 is used
+ - clk_i2s2_8ch_tx always produces (256 * <rate>) Hz
+
+Tested on the RK3308 using the internal audio codec.
+
+Fixes: 081068fd6414 ("ASoC: rockchip: add support for i2s-tdm controller")
+Signed-off-by: Luca Ceresoli <luca.ceresoli@bootlin.com>
+Link: https://msgid.link/r/20240305-rk3308-audio-codec-v4-1-312acdbe628f@bootlin.com
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ sound/soc/rockchip/rockchip_i2s_tdm.c | 352 +-------------------------
+ 1 file changed, 6 insertions(+), 346 deletions(-)
+
+diff --git a/sound/soc/rockchip/rockchip_i2s_tdm.c b/sound/soc/rockchip/rockchip_i2s_tdm.c
+index 2550bd2a5e78c..2e36a97077b99 100644
+--- a/sound/soc/rockchip/rockchip_i2s_tdm.c
++++ b/sound/soc/rockchip/rockchip_i2s_tdm.c
+@@ -27,8 +27,6 @@
+ #define DEFAULT_MCLK_FS 256
+ #define CH_GRP_MAX 4 /* The max channel 8 / 2 */
+ #define MULTIPLEX_CH_MAX 10
+-#define CLK_PPM_MIN -1000
+-#define CLK_PPM_MAX 1000
+
+ #define TRCM_TXRX 0
+ #define TRCM_TX 1
+@@ -55,20 +53,6 @@ struct rk_i2s_tdm_dev {
+ struct clk *hclk;
+ struct clk *mclk_tx;
+ struct clk *mclk_rx;
+- /* The mclk_tx_src is parent of mclk_tx */
+- struct clk *mclk_tx_src;
+- /* The mclk_rx_src is parent of mclk_rx */
+- struct clk *mclk_rx_src;
+- /*
+- * The mclk_root0 and mclk_root1 are root parent and supplies for
+- * the different FS.
+- *
+- * e.g:
+- * mclk_root0 is VPLL0, used for FS=48000Hz
+- * mclk_root1 is VPLL1, used for FS=44100Hz
+- */
+- struct clk *mclk_root0;
+- struct clk *mclk_root1;
+ struct regmap *regmap;
+ struct regmap *grf;
+ struct snd_dmaengine_dai_dma_data capture_dma_data;
+@@ -78,19 +62,11 @@ struct rk_i2s_tdm_dev {
+ struct rk_i2s_soc_data *soc_data;
+ bool is_master_mode;
+ bool io_multiplex;
+- bool mclk_calibrate;
+ bool tdm_mode;
+- unsigned int mclk_rx_freq;
+- unsigned int mclk_tx_freq;
+- unsigned int mclk_root0_freq;
+- unsigned int mclk_root1_freq;
+- unsigned int mclk_root0_initial_freq;
+- unsigned int mclk_root1_initial_freq;
+ unsigned int frame_width;
+ unsigned int clk_trcm;
+ unsigned int i2s_sdis[CH_GRP_MAX];
+ unsigned int i2s_sdos[CH_GRP_MAX];
+- int clk_ppm;
+ int refcount;
+ spinlock_t lock; /* xfer lock */
+ bool has_playback;
+@@ -116,12 +92,6 @@ static void i2s_tdm_disable_unprepare_mclk(struct rk_i2s_tdm_dev *i2s_tdm)
+ {
+ clk_disable_unprepare(i2s_tdm->mclk_tx);
+ clk_disable_unprepare(i2s_tdm->mclk_rx);
+- if (i2s_tdm->mclk_calibrate) {
+- clk_disable_unprepare(i2s_tdm->mclk_tx_src);
+- clk_disable_unprepare(i2s_tdm->mclk_rx_src);
+- clk_disable_unprepare(i2s_tdm->mclk_root0);
+- clk_disable_unprepare(i2s_tdm->mclk_root1);
+- }
+ }
+
+ /**
+@@ -144,29 +114,9 @@ static int i2s_tdm_prepare_enable_mclk(struct rk_i2s_tdm_dev *i2s_tdm)
+ ret = clk_prepare_enable(i2s_tdm->mclk_rx);
+ if (ret)
+ goto err_mclk_rx;
+- if (i2s_tdm->mclk_calibrate) {
+- ret = clk_prepare_enable(i2s_tdm->mclk_tx_src);
+- if (ret)
+- goto err_mclk_rx;
+- ret = clk_prepare_enable(i2s_tdm->mclk_rx_src);
+- if (ret)
+- goto err_mclk_rx_src;
+- ret = clk_prepare_enable(i2s_tdm->mclk_root0);
+- if (ret)
+- goto err_mclk_root0;
+- ret = clk_prepare_enable(i2s_tdm->mclk_root1);
+- if (ret)
+- goto err_mclk_root1;
+- }
+
+ return 0;
+
+-err_mclk_root1:
+- clk_disable_unprepare(i2s_tdm->mclk_root0);
+-err_mclk_root0:
+- clk_disable_unprepare(i2s_tdm->mclk_rx_src);
+-err_mclk_rx_src:
+- clk_disable_unprepare(i2s_tdm->mclk_tx_src);
+ err_mclk_rx:
+ clk_disable_unprepare(i2s_tdm->mclk_tx);
+ err_mclk_tx:
+@@ -566,159 +516,6 @@ static void rockchip_i2s_tdm_xfer_resume(struct snd_pcm_substream *substream,
+ I2S_XFER_RXS_START);
+ }
+
+-static int rockchip_i2s_tdm_clk_set_rate(struct rk_i2s_tdm_dev *i2s_tdm,
+- struct clk *clk, unsigned long rate,
+- int ppm)
+-{
+- unsigned long rate_target;
+- int delta, ret;
+-
+- if (ppm == i2s_tdm->clk_ppm)
+- return 0;
+-
+- if (ppm < 0)
+- delta = -1;
+- else
+- delta = 1;
+-
+- delta *= (int)div64_u64((u64)rate * (u64)abs(ppm) + 500000,
+- 1000000);
+-
+- rate_target = rate + delta;
+-
+- if (!rate_target)
+- return -EINVAL;
+-
+- ret = clk_set_rate(clk, rate_target);
+- if (ret)
+- return ret;
+-
+- i2s_tdm->clk_ppm = ppm;
+-
+- return 0;
+-}
+-
+-static int rockchip_i2s_tdm_calibrate_mclk(struct rk_i2s_tdm_dev *i2s_tdm,
+- struct snd_pcm_substream *substream,
+- unsigned int lrck_freq)
+-{
+- struct clk *mclk_root;
+- struct clk *mclk_parent;
+- unsigned int mclk_root_freq;
+- unsigned int mclk_root_initial_freq;
+- unsigned int mclk_parent_freq;
+- unsigned int div, delta;
+- u64 ppm;
+- int ret;
+-
+- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK)
+- mclk_parent = i2s_tdm->mclk_tx_src;
+- else
+- mclk_parent = i2s_tdm->mclk_rx_src;
+-
+- switch (lrck_freq) {
+- case 8000:
+- case 16000:
+- case 24000:
+- case 32000:
+- case 48000:
+- case 64000:
+- case 96000:
+- case 192000:
+- mclk_root = i2s_tdm->mclk_root0;
+- mclk_root_freq = i2s_tdm->mclk_root0_freq;
+- mclk_root_initial_freq = i2s_tdm->mclk_root0_initial_freq;
+- mclk_parent_freq = DEFAULT_MCLK_FS * 192000;
+- break;
+- case 11025:
+- case 22050:
+- case 44100:
+- case 88200:
+- case 176400:
+- mclk_root = i2s_tdm->mclk_root1;
+- mclk_root_freq = i2s_tdm->mclk_root1_freq;
+- mclk_root_initial_freq = i2s_tdm->mclk_root1_initial_freq;
+- mclk_parent_freq = DEFAULT_MCLK_FS * 176400;
+- break;
+- default:
+- dev_err(i2s_tdm->dev, "Invalid LRCK frequency: %u Hz\n",
+- lrck_freq);
+- return -EINVAL;
+- }
+-
+- ret = clk_set_parent(mclk_parent, mclk_root);
+- if (ret)
+- return ret;
+-
+- ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, mclk_root,
+- mclk_root_freq, 0);
+- if (ret)
+- return ret;
+-
+- delta = abs(mclk_root_freq % mclk_parent_freq - mclk_parent_freq);
+- ppm = div64_u64((uint64_t)delta * 1000000, (uint64_t)mclk_root_freq);
+-
+- if (ppm) {
+- div = DIV_ROUND_CLOSEST(mclk_root_initial_freq, mclk_parent_freq);
+- if (!div)
+- return -EINVAL;
+-
+- mclk_root_freq = mclk_parent_freq * round_up(div, 2);
+-
+- ret = clk_set_rate(mclk_root, mclk_root_freq);
+- if (ret)
+- return ret;
+-
+- i2s_tdm->mclk_root0_freq = clk_get_rate(i2s_tdm->mclk_root0);
+- i2s_tdm->mclk_root1_freq = clk_get_rate(i2s_tdm->mclk_root1);
+- }
+-
+- return clk_set_rate(mclk_parent, mclk_parent_freq);
+-}
+-
+-static int rockchip_i2s_tdm_set_mclk(struct rk_i2s_tdm_dev *i2s_tdm,
+- struct snd_pcm_substream *substream,
+- struct clk **mclk)
+-{
+- unsigned int mclk_freq;
+- int ret;
+-
+- if (i2s_tdm->clk_trcm) {
+- if (i2s_tdm->mclk_tx_freq != i2s_tdm->mclk_rx_freq) {
+- dev_err(i2s_tdm->dev,
+- "clk_trcm, tx: %d and rx: %d should be the same\n",
+- i2s_tdm->mclk_tx_freq,
+- i2s_tdm->mclk_rx_freq);
+- return -EINVAL;
+- }
+-
+- ret = clk_set_rate(i2s_tdm->mclk_tx, i2s_tdm->mclk_tx_freq);
+- if (ret)
+- return ret;
+-
+- ret = clk_set_rate(i2s_tdm->mclk_rx, i2s_tdm->mclk_rx_freq);
+- if (ret)
+- return ret;
+-
+- /* mclk_rx is also ok. */
+- *mclk = i2s_tdm->mclk_tx;
+- } else {
+- if (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) {
+- *mclk = i2s_tdm->mclk_tx;
+- mclk_freq = i2s_tdm->mclk_tx_freq;
+- } else {
+- *mclk = i2s_tdm->mclk_rx;
+- mclk_freq = i2s_tdm->mclk_rx_freq;
+- }
+-
+- ret = clk_set_rate(*mclk, mclk_freq);
+- if (ret)
+- return ret;
+- }
+-
+- return 0;
+-}
+-
+ static int rockchip_i2s_ch_to_io(unsigned int ch, bool substream_capture)
+ {
+ if (substream_capture) {
+@@ -849,19 +646,17 @@ static int rockchip_i2s_tdm_hw_params(struct snd_pcm_substream *substream,
+ struct snd_soc_dai *dai)
+ {
+ struct rk_i2s_tdm_dev *i2s_tdm = to_info(dai);
+- struct clk *mclk;
+- int ret = 0;
+ unsigned int val = 0;
+ unsigned int mclk_rate, bclk_rate, div_bclk = 4, div_lrck = 64;
++ int err;
+
+ if (i2s_tdm->is_master_mode) {
+- if (i2s_tdm->mclk_calibrate)
+- rockchip_i2s_tdm_calibrate_mclk(i2s_tdm, substream,
+- params_rate(params));
++ struct clk *mclk = (substream->stream == SNDRV_PCM_STREAM_PLAYBACK) ?
++ i2s_tdm->mclk_tx : i2s_tdm->mclk_rx;
+
+- ret = rockchip_i2s_tdm_set_mclk(i2s_tdm, substream, &mclk);
+- if (ret)
+- return ret;
++ err = clk_set_rate(mclk, DEFAULT_MCLK_FS * params_rate(params));
++ if (err)
++ return err;
+
+ mclk_rate = clk_get_rate(mclk);
+ bclk_rate = i2s_tdm->frame_width * params_rate(params);
+@@ -969,96 +764,6 @@ static int rockchip_i2s_tdm_trigger(struct snd_pcm_substream *substream,
+ return 0;
+ }
+
+-static int rockchip_i2s_tdm_set_sysclk(struct snd_soc_dai *cpu_dai, int stream,
+- unsigned int freq, int dir)
+-{
+- struct rk_i2s_tdm_dev *i2s_tdm = to_info(cpu_dai);
+-
+- /* Put set mclk rate into rockchip_i2s_tdm_set_mclk() */
+- if (i2s_tdm->clk_trcm) {
+- i2s_tdm->mclk_tx_freq = freq;
+- i2s_tdm->mclk_rx_freq = freq;
+- } else {
+- if (stream == SNDRV_PCM_STREAM_PLAYBACK)
+- i2s_tdm->mclk_tx_freq = freq;
+- else
+- i2s_tdm->mclk_rx_freq = freq;
+- }
+-
+- dev_dbg(i2s_tdm->dev, "The target mclk_%s freq is: %d\n",
+- stream ? "rx" : "tx", freq);
+-
+- return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_info(struct snd_kcontrol *kcontrol,
+- struct snd_ctl_elem_info *uinfo)
+-{
+- uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER;
+- uinfo->count = 1;
+- uinfo->value.integer.min = CLK_PPM_MIN;
+- uinfo->value.integer.max = CLK_PPM_MAX;
+- uinfo->value.integer.step = 1;
+-
+- return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_get(struct snd_kcontrol *kcontrol,
+- struct snd_ctl_elem_value *ucontrol)
+-{
+- struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+- struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+-
+- ucontrol->value.integer.value[0] = i2s_tdm->clk_ppm;
+-
+- return 0;
+-}
+-
+-static int rockchip_i2s_tdm_clk_compensation_put(struct snd_kcontrol *kcontrol,
+- struct snd_ctl_elem_value *ucontrol)
+-{
+- struct snd_soc_dai *dai = snd_kcontrol_chip(kcontrol);
+- struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+- int ret = 0, ppm = 0;
+- int changed = 0;
+- unsigned long old_rate;
+-
+- if (ucontrol->value.integer.value[0] < CLK_PPM_MIN ||
+- ucontrol->value.integer.value[0] > CLK_PPM_MAX)
+- return -EINVAL;
+-
+- ppm = ucontrol->value.integer.value[0];
+-
+- old_rate = clk_get_rate(i2s_tdm->mclk_root0);
+- ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, i2s_tdm->mclk_root0,
+- i2s_tdm->mclk_root0_freq, ppm);
+- if (ret)
+- return ret;
+- if (old_rate != clk_get_rate(i2s_tdm->mclk_root0))
+- changed = 1;
+-
+- if (clk_is_match(i2s_tdm->mclk_root0, i2s_tdm->mclk_root1))
+- return changed;
+-
+- old_rate = clk_get_rate(i2s_tdm->mclk_root1);
+- ret = rockchip_i2s_tdm_clk_set_rate(i2s_tdm, i2s_tdm->mclk_root1,
+- i2s_tdm->mclk_root1_freq, ppm);
+- if (ret)
+- return ret;
+- if (old_rate != clk_get_rate(i2s_tdm->mclk_root1))
+- changed = 1;
+-
+- return changed;
+-}
+-
+-static struct snd_kcontrol_new rockchip_i2s_tdm_compensation_control = {
+- .iface = SNDRV_CTL_ELEM_IFACE_PCM,
+- .name = "PCM Clock Compensation in PPM",
+- .info = rockchip_i2s_tdm_clk_compensation_info,
+- .get = rockchip_i2s_tdm_clk_compensation_get,
+- .put = rockchip_i2s_tdm_clk_compensation_put,
+-};
+-
+ static int rockchip_i2s_tdm_dai_probe(struct snd_soc_dai *dai)
+ {
+ struct rk_i2s_tdm_dev *i2s_tdm = snd_soc_dai_get_drvdata(dai);
+@@ -1068,9 +773,6 @@ static int rockchip_i2s_tdm_dai_probe(struct snd_soc_dai *dai)
+ if (i2s_tdm->has_playback)
+ dai->playback_dma_data = &i2s_tdm->playback_dma_data;
+
+- if (i2s_tdm->mclk_calibrate)
+- snd_soc_add_dai_controls(dai, &rockchip_i2s_tdm_compensation_control, 1);
+-
+ return 0;
+ }
+
+@@ -1110,7 +812,6 @@ static int rockchip_i2s_tdm_set_bclk_ratio(struct snd_soc_dai *dai,
+ static const struct snd_soc_dai_ops rockchip_i2s_tdm_dai_ops = {
+ .hw_params = rockchip_i2s_tdm_hw_params,
+ .set_bclk_ratio = rockchip_i2s_tdm_set_bclk_ratio,
+- .set_sysclk = rockchip_i2s_tdm_set_sysclk,
+ .set_fmt = rockchip_i2s_tdm_set_fmt,
+ .set_tdm_slot = rockchip_dai_tdm_slot,
+ .trigger = rockchip_i2s_tdm_trigger,
+@@ -1433,35 +1134,6 @@ static void rockchip_i2s_tdm_path_config(struct rk_i2s_tdm_dev *i2s_tdm,
+ rockchip_i2s_tdm_tx_path_config(i2s_tdm, num);
+ }
+
+-static int rockchip_i2s_tdm_get_calibrate_mclks(struct rk_i2s_tdm_dev *i2s_tdm)
+-{
+- int num_mclks = 0;
+-
+- i2s_tdm->mclk_tx_src = devm_clk_get(i2s_tdm->dev, "mclk_tx_src");
+- if (!IS_ERR(i2s_tdm->mclk_tx_src))
+- num_mclks++;
+-
+- i2s_tdm->mclk_rx_src = devm_clk_get(i2s_tdm->dev, "mclk_rx_src");
+- if (!IS_ERR(i2s_tdm->mclk_rx_src))
+- num_mclks++;
+-
+- i2s_tdm->mclk_root0 = devm_clk_get(i2s_tdm->dev, "mclk_root0");
+- if (!IS_ERR(i2s_tdm->mclk_root0))
+- num_mclks++;
+-
+- i2s_tdm->mclk_root1 = devm_clk_get(i2s_tdm->dev, "mclk_root1");
+- if (!IS_ERR(i2s_tdm->mclk_root1))
+- num_mclks++;
+-
+- if (num_mclks < 4 && num_mclks != 0)
+- return -ENOENT;
+-
+- if (num_mclks == 4)
+- i2s_tdm->mclk_calibrate = 1;
+-
+- return 0;
+-}
+-
+ static int rockchip_i2s_tdm_path_prepare(struct rk_i2s_tdm_dev *i2s_tdm,
+ struct device_node *np,
+ bool is_rx_path)
+@@ -1609,11 +1281,6 @@ static int rockchip_i2s_tdm_probe(struct platform_device *pdev)
+ i2s_tdm->io_multiplex =
+ of_property_read_bool(node, "rockchip,io-multiplex");
+
+- ret = rockchip_i2s_tdm_get_calibrate_mclks(i2s_tdm);
+- if (ret)
+- return dev_err_probe(i2s_tdm->dev, ret,
+- "mclk-calibrate clocks missing");
+-
+ regs = devm_platform_get_and_ioremap_resource(pdev, 0, &res);
+ if (IS_ERR(regs)) {
+ return dev_err_probe(i2s_tdm->dev, PTR_ERR(regs),
+@@ -1666,13 +1333,6 @@ static int rockchip_i2s_tdm_probe(struct platform_device *pdev)
+ goto err_disable_hclk;
+ }
+
+- if (i2s_tdm->mclk_calibrate) {
+- i2s_tdm->mclk_root0_initial_freq = clk_get_rate(i2s_tdm->mclk_root0);
+- i2s_tdm->mclk_root1_initial_freq = clk_get_rate(i2s_tdm->mclk_root1);
+- i2s_tdm->mclk_root0_freq = i2s_tdm->mclk_root0_initial_freq;
+- i2s_tdm->mclk_root1_freq = i2s_tdm->mclk_root1_initial_freq;
+- }
+-
+ pm_runtime_enable(&pdev->dev);
+
+ regmap_update_bits(i2s_tdm->regmap, I2S_DMACR, I2S_DMACR_TDL_MASK,
+--
+2.43.0
+
--- /dev/null
+From 3ceba9654b3b840e8766b78b22cf632ab5f2b147 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Mar 2024 13:44:40 -0700
+Subject: bpf: report RCU QS in cpumap kthread
+
+From: Yan Zhai <yan@cloudflare.com>
+
+[ Upstream commit 00bf63122459e87193ee7f1bc6161c83a525569f ]
+
+When there is heavy load, cpumap kernel threads can be busy polling
+packets from redirect queues and prevent RCU tasks from reaching
+quiescent states. It is insufficient to just call cond_resched() in such
+a context. Periodically raising a consolidated RCU QS before
+cond_resched() fixes the problem.
+
+Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP")
+Reviewed-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Signed-off-by: Yan Zhai <yan@cloudflare.com>
+Acked-by: Paul E. McKenney <paulmck@kernel.org>
+Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Link: https://lore.kernel.org/r/c17b9f1517e19d813da3ede5ed33ee18496bb5d8.1710877680.git.yan@cloudflare.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ kernel/bpf/cpumap.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
+index 0508937048137..806a7c1b364b6 100644
+--- a/kernel/bpf/cpumap.c
++++ b/kernel/bpf/cpumap.c
+@@ -306,6 +306,7 @@ static int cpu_map_bpf_prog_run(struct bpf_cpu_map_entry *rcpu, void **frames,
+ static int cpu_map_kthread_run(void *data)
+ {
+ struct bpf_cpu_map_entry *rcpu = data;
++ unsigned long last_qs = jiffies;
+
+ complete(&rcpu->kthread_running);
+ set_current_state(TASK_INTERRUPTIBLE);
+@@ -331,10 +332,12 @@ static int cpu_map_kthread_run(void *data)
+ if (__ptr_ring_empty(rcpu->queue)) {
+ schedule();
+ sched = 1;
++ last_qs = jiffies;
+ } else {
+ __set_current_state(TASK_RUNNING);
+ }
+ } else {
++ rcu_softirq_qs_periodic(last_qs);
+ sched = cond_resched();
+ }
+
+--
+2.43.0
+
--- /dev/null
+From 2469efbf4f844d21bdba68ea4a8216e69975eacb Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 31 Jan 2024 02:54:19 -0800
+Subject: coresight: etm4x: Set skip_power_up in etm4_init_arch_data function
+
+From: Mao Jinlong <quic_jinlmao@quicinc.com>
+
+[ Upstream commit 1bbe0a247e5d72f723daeecf41596bfa99e199f1 ]
+
+skip_power_up is used in etm4_init_arch_data() when setting lpoverride,
+so its value needs to be set before that function is called and uses it.
+
+Fixes: 5214b563588e ("coresight: etm4x: Add support for sysreg only devices")
+Signed-off-by: Mao Jinlong <quic_jinlmao@quicinc.com>
+Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
+Link: https://lore.kernel.org/r/20240131105423.9519-1-quic_jinlmao@quicinc.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/hwtracing/coresight/coresight-etm4x-core.c | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/hwtracing/coresight/coresight-etm4x-core.c b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+index fda48a0afc1a5..5b1362aef14ae 100644
+--- a/drivers/hwtracing/coresight/coresight-etm4x-core.c
++++ b/drivers/hwtracing/coresight/coresight-etm4x-core.c
+@@ -1081,6 +1081,7 @@ static void etm4_init_arch_data(void *info)
+ struct etm4_init_arg *init_arg = info;
+ struct etmv4_drvdata *drvdata;
+ struct csdev_access *csa;
++ struct device *dev = init_arg->dev;
+ int i;
+
+ drvdata = init_arg->drvdata;
+@@ -1094,6 +1095,10 @@ static void etm4_init_arch_data(void *info)
+ if (!etm4_init_csdev_access(drvdata, csa))
+ return;
+
++ if (!csa->io_mem ||
++ fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up"))
++ drvdata->skip_power_up = true;
++
+ /* Detect the support for OS Lock before we actually use it */
+ etm_detect_os_lock(drvdata, csa);
+
+@@ -1952,11 +1957,6 @@ static int etm4_probe(struct device *dev, void __iomem *base, u32 etm_pid)
+ if (!drvdata->arch)
+ return -EINVAL;
+
+- /* TRCPDCR is not accessible with system instructions. */
+- if (!desc.access.io_mem ||
+- fwnode_property_present(dev_fwnode(dev), "qcom,skip-power-up"))
+- drvdata->skip_power_up = true;
+-
+ major = ETM_ARCH_MAJOR_VERSION(drvdata->arch);
+ minor = ETM_ARCH_MINOR_VERSION(drvdata->arch);
+
+--
+2.43.0
+
--- /dev/null
+From 213b48e330cea153a1bf201279d166c29b5dbebf Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 25 Jan 2023 23:31:55 +0100
+Subject: dm: address indent/space issues
+
+From: Heinz Mauelshagen <heinzm@redhat.com>
+
+[ Upstream commit 255e2646496fcbf836a3dfe1b535692f09f11b45 ]
+
+Signed-off-by: Heinz Mauelshagen <heinzm@redhat.com>
+Signed-off-by: Mike Snitzer <snitzer@kernel.org>
+Stable-dep-of: b4d78cfeb304 ("dm-integrity: align the outgoing bio in integrity_recheck")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/md/dm-cache-policy.h | 2 +-
+ drivers/md/dm-crypt.c | 2 +-
+ drivers/md/dm-integrity.c | 5 ++---
+ drivers/md/dm-log.c | 8 ++++----
+ drivers/md/dm-raid.c | 8 ++++----
+ drivers/md/dm-raid1.c | 2 +-
+ drivers/md/dm-table.c | 4 ++--
+ drivers/md/dm-thin.c | 6 +++---
+ drivers/md/dm-writecache.c | 2 +-
+ drivers/md/persistent-data/dm-btree.c | 6 +++---
+ drivers/md/persistent-data/dm-space-map-common.c | 2 +-
+ drivers/md/persistent-data/dm-space-map-common.h | 2 +-
+ 12 files changed, 24 insertions(+), 25 deletions(-)
+
+diff --git a/drivers/md/dm-cache-policy.h b/drivers/md/dm-cache-policy.h
+index 6ba3e9c91af53..8bc21d54884e9 100644
+--- a/drivers/md/dm-cache-policy.h
++++ b/drivers/md/dm-cache-policy.h
+@@ -75,7 +75,7 @@ struct dm_cache_policy {
+ * background work.
+ */
+ int (*get_background_work)(struct dm_cache_policy *p, bool idle,
+- struct policy_work **result);
++ struct policy_work **result);
+
+ /*
+ * You must pass in the same work pointer that you were given, not
+diff --git a/drivers/md/dm-crypt.c b/drivers/md/dm-crypt.c
+index e8c534b5870ac..25e51dc6e5598 100644
+--- a/drivers/md/dm-crypt.c
++++ b/drivers/md/dm-crypt.c
+@@ -2535,7 +2535,7 @@ static int crypt_set_keyring_key(struct crypt_config *cc, const char *key_string
+ type = &key_type_encrypted;
+ set_key = set_key_encrypted;
+ } else if (IS_ENABLED(CONFIG_TRUSTED_KEYS) &&
+- !strncmp(key_string, "trusted:", key_desc - key_string + 1)) {
++ !strncmp(key_string, "trusted:", key_desc - key_string + 1)) {
+ type = &key_type_trusted;
+ set_key = set_key_trusted;
+ } else {
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index e1bf91faa462b..94382e43ea506 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -2367,7 +2367,6 @@ static void dm_integrity_map_continue(struct dm_integrity_io *dio, bool from_map
+ else
+ skip_check:
+ dec_in_flight(dio);
+-
+ } else {
+ INIT_WORK(&dio->work, integrity_metadata);
+ queue_work(ic->metadata_wq, &dio->work);
+@@ -4151,7 +4150,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+ } else if (sscanf(opt_string, "block_size:%u%c", &val, &dummy) == 1) {
+ if (val < 1 << SECTOR_SHIFT ||
+ val > MAX_SECTORS_PER_BLOCK << SECTOR_SHIFT ||
+- (val & (val -1))) {
++ (val & (val - 1))) {
+ r = -EINVAL;
+ ti->error = "Invalid block_size argument";
+ goto bad;
+@@ -4477,7 +4476,7 @@ static int dm_integrity_ctr(struct dm_target *ti, unsigned int argc, char **argv
+ if (ic->internal_hash) {
+ size_t recalc_tags_size;
+ ic->recalc_wq = alloc_workqueue("dm-integrity-recalc", WQ_MEM_RECLAIM, 1);
+- if (!ic->recalc_wq ) {
++ if (!ic->recalc_wq) {
+ ti->error = "Cannot allocate workqueue";
+ r = -ENOMEM;
+ goto bad;
+diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
+index 05141eea18d3c..b7dd5a0cd58ba 100644
+--- a/drivers/md/dm-log.c
++++ b/drivers/md/dm-log.c
+@@ -756,8 +756,8 @@ static void core_set_region_sync(struct dm_dirty_log *log, region_t region,
+ log_clear_bit(lc, lc->recovering_bits, region);
+ if (in_sync) {
+ log_set_bit(lc, lc->sync_bits, region);
+- lc->sync_count++;
+- } else if (log_test_bit(lc->sync_bits, region)) {
++ lc->sync_count++;
++ } else if (log_test_bit(lc->sync_bits, region)) {
+ lc->sync_count--;
+ log_clear_bit(lc, lc->sync_bits, region);
+ }
+@@ -765,9 +765,9 @@ static void core_set_region_sync(struct dm_dirty_log *log, region_t region,
+
+ static region_t core_get_sync_count(struct dm_dirty_log *log)
+ {
+- struct log_c *lc = (struct log_c *) log->context;
++ struct log_c *lc = (struct log_c *) log->context;
+
+- return lc->sync_count;
++ return lc->sync_count;
+ }
+
+ #define DMEMIT_SYNC \
+diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
+index 7fbce214e00f5..bf833ca880bc1 100644
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -362,8 +362,8 @@ static struct {
+ const int mode;
+ const char *param;
+ } _raid456_journal_mode[] = {
+- { R5C_JOURNAL_MODE_WRITE_THROUGH , "writethrough" },
+- { R5C_JOURNAL_MODE_WRITE_BACK , "writeback" }
++ { R5C_JOURNAL_MODE_WRITE_THROUGH, "writethrough" },
++ { R5C_JOURNAL_MODE_WRITE_BACK, "writeback" }
+ };
+
+ /* Return MD raid4/5/6 journal mode for dm @journal_mode one */
+@@ -1114,7 +1114,7 @@ static int validate_raid_redundancy(struct raid_set *rs)
+ * [stripe_cache <sectors>] Stripe cache size for higher RAIDs
+ * [region_size <sectors>] Defines granularity of bitmap
+ * [journal_dev <dev>] raid4/5/6 journaling deviice
+- * (i.e. write hole closing log)
++ * (i.e. write hole closing log)
+ *
+ * RAID10-only options:
+ * [raid10_copies <# copies>] Number of copies. (Default: 2)
+@@ -3999,7 +3999,7 @@ static int raid_preresume(struct dm_target *ti)
+ }
+
+ /* Resize bitmap to adjust to changed region size (aka MD bitmap chunksize) or grown device size */
+- if (test_bit(RT_FLAG_RS_BITMAP_LOADED, &rs->runtime_flags) && mddev->bitmap &&
++ if (test_bit(RT_FLAG_RS_BITMAP_LOADED, &rs->runtime_flags) && mddev->bitmap &&
+ (test_bit(RT_FLAG_RS_GROW, &rs->runtime_flags) ||
+ (rs->requested_bitmap_chunk_sectors &&
+ mddev->bitmap_info.chunksize != to_bytes(rs->requested_bitmap_chunk_sectors)))) {
+diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
+index c38e63706d911..2327645fc0648 100644
+--- a/drivers/md/dm-raid1.c
++++ b/drivers/md/dm-raid1.c
+@@ -902,7 +902,7 @@ static struct mirror_set *alloc_context(unsigned int nr_mirrors,
+ if (IS_ERR(ms->io_client)) {
+ ti->error = "Error creating dm_io client";
+ kfree(ms);
+- return NULL;
++ return NULL;
+ }
+
+ ms->rh = dm_region_hash_create(ms, dispatch_bios, wakeup_mirrord,
+diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
+index e0367a672eabf..aabb2435070b8 100644
+--- a/drivers/md/dm-table.c
++++ b/drivers/md/dm-table.c
+@@ -72,7 +72,7 @@ static sector_t high(struct dm_table *t, unsigned int l, unsigned int n)
+ n = get_child(n, CHILDREN_PER_NODE - 1);
+
+ if (n >= t->counts[l])
+- return (sector_t) - 1;
++ return (sector_t) -1;
+
+ return get_node(t, l, n)[KEYS_PER_NODE - 1];
+ }
+@@ -1533,7 +1533,7 @@ static bool dm_table_any_dev_attr(struct dm_table *t,
+ if (ti->type->iterate_devices &&
+ ti->type->iterate_devices(ti, func, data))
+ return true;
+- }
++ }
+
+ return false;
+ }
+diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
+index 601f9e4e6234f..f24d89af7c5f0 100644
+--- a/drivers/md/dm-thin.c
++++ b/drivers/md/dm-thin.c
+@@ -1179,9 +1179,9 @@ static void process_prepared_discard_passdown_pt1(struct dm_thin_new_mapping *m)
+ discard_parent = bio_alloc(NULL, 1, 0, GFP_NOIO);
+ discard_parent->bi_end_io = passdown_endio;
+ discard_parent->bi_private = m;
+- if (m->maybe_shared)
+- passdown_double_checking_shared_status(m, discard_parent);
+- else {
++ if (m->maybe_shared)
++ passdown_double_checking_shared_status(m, discard_parent);
++ else {
+ struct discard_op op;
+
+ begin_discard(&op, tc, discard_parent);
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index c6ff43a8f0b25..a705e24d3e2b6 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -531,7 +531,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
+ req.notify.context = &endio;
+
+ /* writing via async dm-io (implied by notify.fn above) won't return an error */
+- (void) dm_io(&req, 1, ®ion, NULL);
++ (void) dm_io(&req, 1, ®ion, NULL);
+ i = j;
+ }
+
+diff --git a/drivers/md/persistent-data/dm-btree.c b/drivers/md/persistent-data/dm-btree.c
+index 1cc783d7030d8..18d949d63543b 100644
+--- a/drivers/md/persistent-data/dm-btree.c
++++ b/drivers/md/persistent-data/dm-btree.c
+@@ -726,7 +726,7 @@ static int shadow_child(struct dm_btree_info *info, struct dm_btree_value_type *
+ * nodes, so saves metadata space.
+ */
+ static int split_two_into_three(struct shadow_spine *s, unsigned int parent_index,
+- struct dm_btree_value_type *vt, uint64_t key)
++ struct dm_btree_value_type *vt, uint64_t key)
+ {
+ int r;
+ unsigned int middle_index;
+@@ -781,7 +781,7 @@ static int split_two_into_three(struct shadow_spine *s, unsigned int parent_inde
+ if (shadow_current(s) != right)
+ unlock_block(s->info, right);
+
+- return r;
++ return r;
+ }
+
+
+@@ -1216,7 +1216,7 @@ int btree_get_overwrite_leaf(struct dm_btree_info *info, dm_block_t root,
+ static bool need_insert(struct btree_node *node, uint64_t *keys,
+ unsigned int level, unsigned int index)
+ {
+- return ((index >= le32_to_cpu(node->header.nr_entries)) ||
++ return ((index >= le32_to_cpu(node->header.nr_entries)) ||
+ (le64_to_cpu(node->keys[index]) != keys[level]));
+ }
+
+diff --git a/drivers/md/persistent-data/dm-space-map-common.c b/drivers/md/persistent-data/dm-space-map-common.c
+index af800efed9f3c..4833a3998c1d9 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.c
++++ b/drivers/md/persistent-data/dm-space-map-common.c
+@@ -390,7 +390,7 @@ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
+ }
+
+ int sm_ll_find_common_free_block(struct ll_disk *old_ll, struct ll_disk *new_ll,
+- dm_block_t begin, dm_block_t end, dm_block_t *b)
++ dm_block_t begin, dm_block_t end, dm_block_t *b)
+ {
+ int r;
+ uint32_t count;
+diff --git a/drivers/md/persistent-data/dm-space-map-common.h b/drivers/md/persistent-data/dm-space-map-common.h
+index 706ceb85d6800..63d9a72e3265c 100644
+--- a/drivers/md/persistent-data/dm-space-map-common.h
++++ b/drivers/md/persistent-data/dm-space-map-common.h
+@@ -120,7 +120,7 @@ int sm_ll_lookup(struct ll_disk *ll, dm_block_t b, uint32_t *result);
+ int sm_ll_find_free_block(struct ll_disk *ll, dm_block_t begin,
+ dm_block_t end, dm_block_t *result);
+ int sm_ll_find_common_free_block(struct ll_disk *old_ll, struct ll_disk *new_ll,
+- dm_block_t begin, dm_block_t end, dm_block_t *result);
++ dm_block_t begin, dm_block_t end, dm_block_t *result);
+
+ /*
+ * The next three functions return (via nr_allocations) the net number of
+--
+2.43.0
+
--- /dev/null
+From cbee6076b3123d67dd0c7184553892d298a6c5da Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 21 Mar 2024 17:48:45 +0100
+Subject: dm-integrity: align the outgoing bio in integrity_recheck
+
+From: Mikulas Patocka <mpatocka@redhat.com>
+
+[ Upstream commit b4d78cfeb30476239cf08f4f40afc095c173d6e3 ]
+
+It is possible to set up dm-integrity with a smaller sector size than
+the logical sector size of the underlying device. In this situation,
+dm-integrity guarantees that the outgoing bios have the same alignment as
+incoming bios (so, if you create a filesystem with 4k block size,
+dm-integrity would send 4k-aligned bios to the underlying device).
+
+This guarantee was broken when integrity_recheck was implemented.
+integrity_recheck sends a bio that is aligned to ic->sectors_per_block. So
+if we set up integrity with a 512-byte sector size on a device with a 4k
+logical block size, we would be sending an unaligned bio. This triggered a
+bug in one of our internal tests.
+
+This commit fixes it by determining the actual alignment of the
+incoming bio and then making sure that the outgoing bio in
+integrity_recheck has the same alignment.
+
+Fixes: c88f5e553fe3 ("dm-integrity: recheck the integrity tag after a failure")
+Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
+Signed-off-by: Mike Snitzer <snitzer@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/md/dm-integrity.c | 12 ++++++++++--
+ 1 file changed, 10 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index aff818eb31fbb..9c9e2b50c63c3 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1709,7 +1709,6 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ struct bio_vec bv;
+ sector_t sector, logical_sector, area, offset;
+ struct page *page;
+- void *buffer;
+
+ get_area_and_offset(ic, dio->range.logical_sector, &area, &offset);
+ dio->metadata_block = get_metadata_sector_and_offset(ic, area, offset,
+@@ -1718,13 +1717,14 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ logical_sector = dio->range.logical_sector;
+
+ page = mempool_alloc(&ic->recheck_pool, GFP_NOIO);
+- buffer = page_to_virt(page);
+
+ __bio_for_each_segment(bv, bio, iter, dio->bio_details.bi_iter) {
+ unsigned pos = 0;
+
+ do {
++ sector_t alignment;
+ char *mem;
++ char *buffer = page_to_virt(page);
+ int r;
+ struct dm_io_request io_req;
+ struct dm_io_region io_loc;
+@@ -1737,6 +1737,14 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ io_loc.sector = sector;
+ io_loc.count = ic->sectors_per_block;
+
++ /* Align the bio to logical block size */
++ alignment = dio->range.logical_sector | bio_sectors(bio) | (PAGE_SIZE >> SECTOR_SHIFT);
++ alignment &= -alignment;
++ io_loc.sector = round_down(io_loc.sector, alignment);
++ io_loc.count += sector - io_loc.sector;
++ buffer += (sector - io_loc.sector) << SECTOR_SHIFT;
++ io_loc.count = round_up(io_loc.count, alignment);
++
+ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r)) {
+ dio->bi_status = errno_to_blk_status(r);
+--
+2.43.0
+
--- /dev/null
+From 390bb24f2075034853b0fb0106539ef576031fc0 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Mar 2024 18:35:06 +0100
+Subject: dm-integrity: fix a memory leak when rechecking the data
+
+From: Mikulas Patocka <mpatocka@redhat.com>
+
+[ Upstream commit 55e565c42dce81a4e49c13262d5bc4eb4c2e588a ]
+
+Memory for the "checksums" pointer will leak if the data is rechecked
+after checksum failure (because the associated kfree won't happen due
+to 'goto skip_io').
+
+Fix this by freeing the checksums memory before recheck, and just use
+the "checksum_onstack" memory for storing checksum during recheck.
+
+Fixes: c88f5e553fe3 ("dm-integrity: recheck the integrity tag after a failure")
+Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
+Signed-off-by: Mike Snitzer <snitzer@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/md/dm-integrity.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 3da4359f51645..e1bf91faa462b 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -1856,12 +1856,12 @@ static void integrity_metadata(struct work_struct *w)
+ r = dm_integrity_rw_tag(ic, checksums, &dio->metadata_block, &dio->metadata_offset,
+ checksums_ptr - checksums, dio->op == REQ_OP_READ ? TAG_CMP : TAG_WRITE);
+ if (unlikely(r)) {
++ if (likely(checksums != checksums_onstack))
++ kfree(checksums);
+ if (r > 0) {
+- integrity_recheck(dio, checksums);
++ integrity_recheck(dio, checksums_onstack);
+ goto skip_io;
+ }
+- if (likely(checksums != checksums_onstack))
+- kfree(checksums);
+ goto error;
+ }
+
+--
+2.43.0
+
--- /dev/null
+From 75f88b1005b72e0ed3ea0e1309aac1a96a341d56 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 24 Jan 2024 13:35:53 +0800
+Subject: dm io: Support IO priority
+
+From: Hongyu Jin <hongyu.jin@unisoc.com>
+
+[ Upstream commit 6e5f0f6383b4896c7e9b943d84b136149d0f45e9 ]
+
+Some IO is dispatched from a kworker with different io_context settings
+than the submitting task, so we may need to specify a priority explicitly
+to avoid losing the submitter's priority.
+
+Add IO priority parameter to dm_io() and update all callers.
+
+Co-developed-by: Yibin Ding <yibin.ding@unisoc.com>
+Signed-off-by: Yibin Ding <yibin.ding@unisoc.com>
+Signed-off-by: Hongyu Jin <hongyu.jin@unisoc.com>
+Reviewed-by: Eric Biggers <ebiggers@google.com>
+Reviewed-by: Mikulas Patocka <mpatocka@redhat.com>
+Signed-off-by: Mike Snitzer <snitzer@kernel.org>
+Stable-dep-of: b4d78cfeb304 ("dm-integrity: align the outgoing bio in integrity_recheck")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/md/dm-bufio.c | 6 +++---
+ drivers/md/dm-integrity.c | 12 ++++++------
+ drivers/md/dm-io.c | 23 +++++++++++++----------
+ drivers/md/dm-kcopyd.c | 4 ++--
+ drivers/md/dm-log.c | 4 ++--
+ drivers/md/dm-raid1.c | 6 +++---
+ drivers/md/dm-snap-persistent.c | 4 ++--
+ drivers/md/dm-verity-target.c | 2 +-
+ drivers/md/dm-writecache.c | 8 ++++----
+ include/linux/dm-io.h | 3 ++-
+ 10 files changed, 38 insertions(+), 34 deletions(-)
+
+diff --git a/drivers/md/dm-bufio.c b/drivers/md/dm-bufio.c
+index 100a6a236d92a..ec662f97ba828 100644
+--- a/drivers/md/dm-bufio.c
++++ b/drivers/md/dm-bufio.c
+@@ -614,7 +614,7 @@ static void use_dmio(struct dm_buffer *b, enum req_op op, sector_t sector,
+ io_req.mem.ptr.vma = (char *)b->data + offset;
+ }
+
+- r = dm_io(&io_req, 1, ®ion, NULL);
++ r = dm_io(&io_req, 1, ®ion, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r))
+ b->end_io(b, errno_to_blk_status(r));
+ }
+@@ -1375,7 +1375,7 @@ int dm_bufio_issue_flush(struct dm_bufio_client *c)
+
+ BUG_ON(dm_bufio_in_request());
+
+- return dm_io(&io_req, 1, &io_reg, NULL);
++ return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
+ }
+ EXPORT_SYMBOL_GPL(dm_bufio_issue_flush);
+
+@@ -1398,7 +1398,7 @@ int dm_bufio_issue_discard(struct dm_bufio_client *c, sector_t block, sector_t c
+
+ BUG_ON(dm_bufio_in_request());
+
+- return dm_io(&io_req, 1, &io_reg, NULL);
++ return dm_io(&io_req, 1, &io_reg, NULL, IOPRIO_DEFAULT);
+ }
+ EXPORT_SYMBOL_GPL(dm_bufio_issue_discard);
+
+diff --git a/drivers/md/dm-integrity.c b/drivers/md/dm-integrity.c
+index 94382e43ea506..aff818eb31fbb 100644
+--- a/drivers/md/dm-integrity.c
++++ b/drivers/md/dm-integrity.c
+@@ -579,7 +579,7 @@ static int sync_rw_sb(struct dm_integrity_c *ic, blk_opf_t opf)
+ }
+ }
+
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r))
+ return r;
+
+@@ -1089,7 +1089,7 @@ static void rw_journal_sectors(struct dm_integrity_c *ic, blk_opf_t opf,
+ io_loc.sector = ic->start + SB_SECTORS + sector;
+ io_loc.count = n_sectors;
+
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r)) {
+ dm_integrity_io_error(ic, (opf & REQ_OP_MASK) == REQ_OP_READ ?
+ "reading journal" : "writing journal", r);
+@@ -1205,7 +1205,7 @@ static void copy_from_journal(struct dm_integrity_c *ic, unsigned int section, u
+ io_loc.sector = target;
+ io_loc.count = n_sectors;
+
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r)) {
+ WARN_ONCE(1, "asynchronous dm_io failed: %d", r);
+ fn(-1UL, data);
+@@ -1532,7 +1532,7 @@ static void dm_integrity_flush_buffers(struct dm_integrity_c *ic, bool flush_dat
+ fr.io_reg.count = 0,
+ fr.ic = ic;
+ init_completion(&fr.comp);
+- r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL);
++ r = dm_io(&fr.io_req, 1, &fr.io_reg, NULL, IOPRIO_DEFAULT);
+ BUG_ON(r);
+ }
+
+@@ -1737,7 +1737,7 @@ static noinline void integrity_recheck(struct dm_integrity_io *dio, char *checks
+ io_loc.sector = sector;
+ io_loc.count = ic->sectors_per_block;
+
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r)) {
+ dio->bi_status = errno_to_blk_status(r);
+ goto free_ret;
+@@ -2774,7 +2774,7 @@ static void integrity_recalc(struct work_struct *w)
+ io_loc.sector = get_data_sector(ic, area, offset);
+ io_loc.count = n_sectors;
+
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r)) {
+ dm_integrity_io_error(ic, "reading data", r);
+ goto err;
+diff --git a/drivers/md/dm-io.c b/drivers/md/dm-io.c
+index e488b05e35fa3..ec97658387c39 100644
+--- a/drivers/md/dm-io.c
++++ b/drivers/md/dm-io.c
+@@ -295,7 +295,7 @@ static void km_dp_init(struct dpages *dp, void *data)
+ *---------------------------------------------------------------*/
+ static void do_region(const blk_opf_t opf, unsigned int region,
+ struct dm_io_region *where, struct dpages *dp,
+- struct io *io)
++ struct io *io, unsigned short ioprio)
+ {
+ struct bio *bio;
+ struct page *page;
+@@ -344,6 +344,7 @@ static void do_region(const blk_opf_t opf, unsigned int region,
+ &io->client->bios);
+ bio->bi_iter.bi_sector = where->sector + (where->count - remaining);
+ bio->bi_end_io = endio;
++ bio->bi_ioprio = ioprio;
+ store_io_and_region_in_bio(bio, io, region);
+
+ if (op == REQ_OP_DISCARD || op == REQ_OP_WRITE_ZEROES) {
+@@ -371,7 +372,7 @@ static void do_region(const blk_opf_t opf, unsigned int region,
+
+ static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
+ struct dm_io_region *where, struct dpages *dp,
+- struct io *io, int sync)
++ struct io *io, int sync, unsigned short ioprio)
+ {
+ int i;
+ struct dpages old_pages = *dp;
+@@ -388,7 +389,7 @@ static void dispatch_io(blk_opf_t opf, unsigned int num_regions,
+ for (i = 0; i < num_regions; i++) {
+ *dp = old_pages;
+ if (where[i].count || (opf & REQ_PREFLUSH))
+- do_region(opf, i, where + i, dp, io);
++ do_region(opf, i, where + i, dp, io, ioprio);
+ }
+
+ /*
+@@ -413,7 +414,7 @@ static void sync_io_complete(unsigned long error, void *context)
+
+ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+ struct dm_io_region *where, blk_opf_t opf, struct dpages *dp,
+- unsigned long *error_bits)
++ unsigned long *error_bits, unsigned short ioprio)
+ {
+ struct io *io;
+ struct sync_io sio;
+@@ -435,7 +436,7 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+ io->vma_invalidate_address = dp->vma_invalidate_address;
+ io->vma_invalidate_size = dp->vma_invalidate_size;
+
+- dispatch_io(opf, num_regions, where, dp, io, 1);
++ dispatch_io(opf, num_regions, where, dp, io, 1, ioprio);
+
+ wait_for_completion_io(&sio.wait);
+
+@@ -447,7 +448,8 @@ static int sync_io(struct dm_io_client *client, unsigned int num_regions,
+
+ static int async_io(struct dm_io_client *client, unsigned int num_regions,
+ struct dm_io_region *where, blk_opf_t opf,
+- struct dpages *dp, io_notify_fn fn, void *context)
++ struct dpages *dp, io_notify_fn fn, void *context,
++ unsigned short ioprio)
+ {
+ struct io *io;
+
+@@ -467,7 +469,7 @@ static int async_io(struct dm_io_client *client, unsigned int num_regions,
+ io->vma_invalidate_address = dp->vma_invalidate_address;
+ io->vma_invalidate_size = dp->vma_invalidate_size;
+
+- dispatch_io(opf, num_regions, where, dp, io, 0);
++ dispatch_io(opf, num_regions, where, dp, io, 0, ioprio);
+ return 0;
+ }
+
+@@ -509,7 +511,8 @@ static int dp_init(struct dm_io_request *io_req, struct dpages *dp,
+ }
+
+ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+- struct dm_io_region *where, unsigned long *sync_error_bits)
++ struct dm_io_region *where, unsigned long *sync_error_bits,
++ unsigned short ioprio)
+ {
+ int r;
+ struct dpages dp;
+@@ -520,11 +523,11 @@ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+
+ if (!io_req->notify.fn)
+ return sync_io(io_req->client, num_regions, where,
+- io_req->bi_opf, &dp, sync_error_bits);
++ io_req->bi_opf, &dp, sync_error_bits, ioprio);
+
+ return async_io(io_req->client, num_regions, where,
+ io_req->bi_opf, &dp, io_req->notify.fn,
+- io_req->notify.context);
++ io_req->notify.context, ioprio);
+ }
+ EXPORT_SYMBOL(dm_io);
+
+diff --git a/drivers/md/dm-kcopyd.c b/drivers/md/dm-kcopyd.c
+index 0ef78e56aa88c..fda51bd140ed3 100644
+--- a/drivers/md/dm-kcopyd.c
++++ b/drivers/md/dm-kcopyd.c
+@@ -572,9 +572,9 @@ static int run_io_job(struct kcopyd_job *job)
+ io_job_start(job->kc->throttle);
+
+ if (job->op == REQ_OP_READ)
+- r = dm_io(&io_req, 1, &job->source, NULL);
++ r = dm_io(&io_req, 1, &job->source, NULL, IOPRIO_DEFAULT);
+ else
+- r = dm_io(&io_req, job->num_dests, job->dests, NULL);
++ r = dm_io(&io_req, job->num_dests, job->dests, NULL, IOPRIO_DEFAULT);
+
+ return r;
+ }
+diff --git a/drivers/md/dm-log.c b/drivers/md/dm-log.c
+index b7dd5a0cd58ba..da77878cb2c02 100644
+--- a/drivers/md/dm-log.c
++++ b/drivers/md/dm-log.c
+@@ -295,7 +295,7 @@ static int rw_header(struct log_c *lc, enum req_op op)
+ {
+ lc->io_req.bi_opf = op;
+
+- return dm_io(&lc->io_req, 1, &lc->header_location, NULL);
++ return dm_io(&lc->io_req, 1, &lc->header_location, NULL, IOPRIO_DEFAULT);
+ }
+
+ static int flush_header(struct log_c *lc)
+@@ -308,7 +308,7 @@ static int flush_header(struct log_c *lc)
+
+ lc->io_req.bi_opf = REQ_OP_WRITE | REQ_PREFLUSH;
+
+- return dm_io(&lc->io_req, 1, &null_location, NULL);
++ return dm_io(&lc->io_req, 1, &null_location, NULL, IOPRIO_DEFAULT);
+ }
+
+ static int read_header(struct log_c *log)
+diff --git a/drivers/md/dm-raid1.c b/drivers/md/dm-raid1.c
+index 2327645fc0648..1004199ae77ac 100644
+--- a/drivers/md/dm-raid1.c
++++ b/drivers/md/dm-raid1.c
+@@ -273,7 +273,7 @@ static int mirror_flush(struct dm_target *ti)
+ }
+
+ error_bits = -1;
+- dm_io(&io_req, ms->nr_mirrors, io, &error_bits);
++ dm_io(&io_req, ms->nr_mirrors, io, &error_bits, IOPRIO_DEFAULT);
+ if (unlikely(error_bits != 0)) {
+ for (i = 0; i < ms->nr_mirrors; i++)
+ if (test_bit(i, &error_bits))
+@@ -543,7 +543,7 @@ static void read_async_bio(struct mirror *m, struct bio *bio)
+
+ map_region(&io, m, bio);
+ bio_set_m(bio, m);
+- BUG_ON(dm_io(&io_req, 1, &io, NULL));
++ BUG_ON(dm_io(&io_req, 1, &io, NULL, IOPRIO_DEFAULT));
+ }
+
+ static inline int region_in_sync(struct mirror_set *ms, region_t region,
+@@ -670,7 +670,7 @@ static void do_write(struct mirror_set *ms, struct bio *bio)
+ */
+ bio_set_m(bio, get_default_mirror(ms));
+
+- BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL));
++ BUG_ON(dm_io(&io_req, ms->nr_mirrors, io, NULL, IOPRIO_DEFAULT));
+ }
+
+ static void do_writes(struct mirror_set *ms, struct bio_list *writes)
+diff --git a/drivers/md/dm-snap-persistent.c b/drivers/md/dm-snap-persistent.c
+index 80b95746a43e0..eee1cd3aa3fcf 100644
+--- a/drivers/md/dm-snap-persistent.c
++++ b/drivers/md/dm-snap-persistent.c
+@@ -220,7 +220,7 @@ static void do_metadata(struct work_struct *work)
+ {
+ struct mdata_req *req = container_of(work, struct mdata_req, work);
+
+- req->result = dm_io(req->io_req, 1, req->where, NULL);
++ req->result = dm_io(req->io_req, 1, req->where, NULL, IOPRIO_DEFAULT);
+ }
+
+ /*
+@@ -244,7 +244,7 @@ static int chunk_io(struct pstore *ps, void *area, chunk_t chunk, blk_opf_t opf,
+ struct mdata_req req;
+
+ if (!metadata)
+- return dm_io(&io_req, 1, &where, NULL);
++ return dm_io(&io_req, 1, &where, NULL, IOPRIO_DEFAULT);
+
+ req.where = &where;
+ req.io_req = &io_req;
+diff --git a/drivers/md/dm-verity-target.c b/drivers/md/dm-verity-target.c
+index b48e1b59e6da4..6a707b41dc865 100644
+--- a/drivers/md/dm-verity-target.c
++++ b/drivers/md/dm-verity-target.c
+@@ -503,7 +503,7 @@ static noinline int verity_recheck(struct dm_verity *v, struct dm_verity_io *io,
+ io_loc.bdev = v->data_dev->bdev;
+ io_loc.sector = cur_block << (v->data_dev_block_bits - SECTOR_SHIFT);
+ io_loc.count = 1 << (v->data_dev_block_bits - SECTOR_SHIFT);
+- r = dm_io(&io_req, 1, &io_loc, NULL);
++ r = dm_io(&io_req, 1, &io_loc, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r))
+ goto free_ret;
+
+diff --git a/drivers/md/dm-writecache.c b/drivers/md/dm-writecache.c
+index a705e24d3e2b6..20fc84b24fc75 100644
+--- a/drivers/md/dm-writecache.c
++++ b/drivers/md/dm-writecache.c
+@@ -531,7 +531,7 @@ static void ssd_commit_flushed(struct dm_writecache *wc, bool wait_for_ios)
+ req.notify.context = &endio;
+
+ /* writing via async dm-io (implied by notify.fn above) won't return an error */
+- (void) dm_io(&req, 1, ®ion, NULL);
++ (void) dm_io(&req, 1, ®ion, NULL, IOPRIO_DEFAULT);
+ i = j;
+ }
+
+@@ -568,7 +568,7 @@ static void ssd_commit_superblock(struct dm_writecache *wc)
+ req.notify.fn = NULL;
+ req.notify.context = NULL;
+
+- r = dm_io(&req, 1, ®ion, NULL);
++ r = dm_io(&req, 1, ®ion, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r))
+ writecache_error(wc, r, "error writing superblock");
+ }
+@@ -596,7 +596,7 @@ static void writecache_disk_flush(struct dm_writecache *wc, struct dm_dev *dev)
+ req.client = wc->dm_io;
+ req.notify.fn = NULL;
+
+- r = dm_io(&req, 1, ®ion, NULL);
++ r = dm_io(&req, 1, ®ion, NULL, IOPRIO_DEFAULT);
+ if (unlikely(r))
+ writecache_error(wc, r, "error flushing metadata: %d", r);
+ }
+@@ -984,7 +984,7 @@ static int writecache_read_metadata(struct dm_writecache *wc, sector_t n_sectors
+ req.client = wc->dm_io;
+ req.notify.fn = NULL;
+
+- return dm_io(&req, 1, ®ion, NULL);
++ return dm_io(&req, 1, ®ion, NULL, IOPRIO_DEFAULT);
+ }
+
+ static void writecache_resume(struct dm_target *ti)
+diff --git a/include/linux/dm-io.h b/include/linux/dm-io.h
+index 92e7abfe04f92..70b3737052dd2 100644
+--- a/include/linux/dm-io.h
++++ b/include/linux/dm-io.h
+@@ -79,7 +79,8 @@ void dm_io_client_destroy(struct dm_io_client *client);
+ * error occurred doing io to the corresponding region.
+ */
+ int dm_io(struct dm_io_request *io_req, unsigned int num_regions,
+- struct dm_io_region *region, unsigned int long *sync_error_bits);
++ struct dm_io_region *region, unsigned int long *sync_error_bits,
++ unsigned short ioprio);
+
+ #endif /* __KERNEL__ */
+ #endif /* _LINUX_DM_IO_H */
+--
+2.43.0
+
--- /dev/null
+From 420960d16c1caef781c57cf3f0076db9aa6262b1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sat, 16 Mar 2024 13:25:20 -0300
+Subject: drm: Fix drm_fixp2int_round() making it add 0.5
+
+From: Arthur Grillo <arthurgrillo@riseup.net>
+
+[ Upstream commit 807f96abdf14c80f534c78f2d854c2590963345c ]
+
+As well noted by Pekka[1], the rounding of drm_fixp2int_round() is wrong.
+To round a number, you need to add 0.5 to it and floor the result, but
+drm_fixp2int_round() adds only 0.0000076 (1 << 15 interpreted as a 32.32
+fixed-point value, i.e. 2^15 / 2^32). Make it add 0.5.
+
+[1]: https://lore.kernel.org/all/20240301135327.22efe0dd.pekka.paalanen@collabora.com/
+
+Fixes: 8b25320887d7 ("drm: Add fixed-point helper to get rounded integer values")
+Suggested-by: Pekka Paalanen <pekka.paalanen@collabora.com>
+Reviewed-by: Harry Wentland <harry.wentland@amd.com>
+Reviewed-by: Melissa Wen <mwen@igalia.com>
+Signed-off-by: Arthur Grillo <arthurgrillo@riseup.net>
+Signed-off-by: Melissa Wen <melissa.srw@gmail.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20240316-drm_fixed-v2-1-c1bc2665b5ed@riseup.net
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ include/drm/drm_fixed.h | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/include/drm/drm_fixed.h b/include/drm/drm_fixed.h
+index 6230088428cdb..a476a406e5997 100644
+--- a/include/drm/drm_fixed.h
++++ b/include/drm/drm_fixed.h
+@@ -70,7 +70,6 @@ static inline u32 dfixed_div(fixed20_12 A, fixed20_12 B)
+ }
+
+ #define DRM_FIXED_POINT 32
+-#define DRM_FIXED_POINT_HALF 16
+ #define DRM_FIXED_ONE (1ULL << DRM_FIXED_POINT)
+ #define DRM_FIXED_DECIMAL_MASK (DRM_FIXED_ONE - 1)
+ #define DRM_FIXED_DIGITS_MASK (~DRM_FIXED_DECIMAL_MASK)
+@@ -89,7 +88,7 @@ static inline int drm_fixp2int(s64 a)
+
+ static inline int drm_fixp2int_round(s64 a)
+ {
+- return drm_fixp2int(a + (1 << (DRM_FIXED_POINT_HALF - 1)));
++ return drm_fixp2int(a + DRM_FIXED_ONE / 2);
+ }
+
+ static inline int drm_fixp2int_ceil(s64 a)
+--
+2.43.0
+
--- /dev/null
+From 46a106cb06153984c53f51298d059874de7a4a07 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 13 Mar 2024 00:27:19 +0900
+Subject: hsr: Fix uninit-value access in hsr_get_node()
+
+From: Shigeru Yoshida <syoshida@redhat.com>
+
+[ Upstream commit ddbec99f58571301679addbc022256970ca3eac6 ]
+
+KMSAN reported the following uninit-value access issue [1]:
+
+=====================================================
+BUG: KMSAN: uninit-value in hsr_get_node+0xa2e/0xa40 net/hsr/hsr_framereg.c:246
+ hsr_get_node+0xa2e/0xa40 net/hsr/hsr_framereg.c:246
+ fill_frame_info net/hsr/hsr_forward.c:577 [inline]
+ hsr_forward_skb+0xe12/0x30e0 net/hsr/hsr_forward.c:615
+ hsr_dev_xmit+0x1a1/0x270 net/hsr/hsr_device.c:223
+ __netdev_start_xmit include/linux/netdevice.h:4940 [inline]
+ netdev_start_xmit include/linux/netdevice.h:4954 [inline]
+ xmit_one net/core/dev.c:3548 [inline]
+ dev_hard_start_xmit+0x247/0xa10 net/core/dev.c:3564
+ __dev_queue_xmit+0x33b8/0x5130 net/core/dev.c:4349
+ dev_queue_xmit include/linux/netdevice.h:3134 [inline]
+ packet_xmit+0x9c/0x6b0 net/packet/af_packet.c:276
+ packet_snd net/packet/af_packet.c:3087 [inline]
+ packet_sendmsg+0x8b1d/0x9f30 net/packet/af_packet.c:3119
+ sock_sendmsg_nosec net/socket.c:730 [inline]
+ __sock_sendmsg net/socket.c:745 [inline]
+ __sys_sendto+0x735/0xa10 net/socket.c:2191
+ __do_sys_sendto net/socket.c:2203 [inline]
+ __se_sys_sendto net/socket.c:2199 [inline]
+ __x64_sys_sendto+0x125/0x1c0 net/socket.c:2199
+ do_syscall_x64 arch/x86/entry/common.c:52 [inline]
+ do_syscall_64+0x6d/0x140 arch/x86/entry/common.c:83
+ entry_SYSCALL_64_after_hwframe+0x63/0x6b
+
+Uninit was created at:
+ slab_post_alloc_hook+0x129/0xa70 mm/slab.h:768
+ slab_alloc_node mm/slub.c:3478 [inline]
+ kmem_cache_alloc_node+0x5e9/0xb10 mm/slub.c:3523
+ kmalloc_reserve+0x13d/0x4a0 net/core/skbuff.c:560
+ __alloc_skb+0x318/0x740 net/core/skbuff.c:651
+ alloc_skb include/linux/skbuff.h:1286 [inline]
+ alloc_skb_with_frags+0xc8/0xbd0 net/core/skbuff.c:6334
+ sock_alloc_send_pskb+0xa80/0xbf0 net/core/sock.c:2787
+ packet_alloc_skb net/packet/af_packet.c:2936 [inline]
+ packet_snd net/packet/af_packet.c:3030 [inline]
+ packet_sendmsg+0x70e8/0x9f30 net/packet/af_packet.c:3119
+ sock_sendmsg_nosec net/socket.c:730 [inline]
+ __sock_sendmsg net/socket.c:745 [inline]
+ __sys_sendto+0x735/0xa10 net/socket.c:2191
+ __do_sys_sendto net/socket.c:2203 [inline]
+ __se_sys_sendto net/socket.c:2199 [inline]
+ __x64_sys_sendto+0x125/0x1c0 net/socket.c:2199
+ do_syscall_x64 arch/x86/entry/common.c:52 [inline]
+ do_syscall_64+0x6d/0x140 arch/x86/entry/common.c:83
+ entry_SYSCALL_64_after_hwframe+0x63/0x6b
+
+CPU: 1 PID: 5033 Comm: syz-executor334 Not tainted 6.7.0-syzkaller-00562-g9f8413c4a66f #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/17/2023
+=====================================================
+
+If the packet type ID field in the Ethernet header is either ETH_P_PRP or
+ETH_P_HSR, but it is not followed by an HSR tag, hsr_get_skb_sequence_nr()
+reads an invalid value as a sequence number. This causes the above issue.
+
+This patch fixes the issue by returning NULL if the Ethernet header is not
+followed by an HSR tag.
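+
+For reference, a sketch of the header layout the new check guards against
+(field sizes assume the usual definitions, ETH_HLEN = 14 and HSR_HLEN = 6):
+
+	struct hsr_ethhdr {
+		struct ethhdr	ethhdr;		/* 14 bytes */
+		struct hsr_tag	hsr_tag;	/* 6 bytes, holds sequence_nr */
+	} __packed;
+
+so skb->mac_len < sizeof(struct hsr_ethhdr) means the mac header is too short
+to contain the tag that hsr_get_skb_sequence_nr() reads.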
+
+Fixes: f266a683a480 ("net/hsr: Better frame dispatch")
+Reported-and-tested-by: syzbot+2ef3a8ce8e91b5a50098@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=2ef3a8ce8e91b5a50098 [1]
+Signed-off-by: Shigeru Yoshida <syoshida@redhat.com>
+Link: https://lore.kernel.org/r/20240312152719.724530-1-syoshida@redhat.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/hsr/hsr_framereg.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/net/hsr/hsr_framereg.c b/net/hsr/hsr_framereg.c
+index 0b01998780952..e44a039e36afe 100644
+--- a/net/hsr/hsr_framereg.c
++++ b/net/hsr/hsr_framereg.c
+@@ -235,6 +235,10 @@ struct hsr_node *hsr_get_node(struct hsr_port *port, struct list_head *node_db,
+ */
+ if (ethhdr->h_proto == htons(ETH_P_PRP) ||
+ ethhdr->h_proto == htons(ETH_P_HSR)) {
++ /* Check if skb contains hsr_ethhdr */
++ if (skb->mac_len < sizeof(struct hsr_ethhdr))
++ return NULL;
++
+ /* Use the existing sequence_nr from the tag as starting point
+ * for filtering duplicate frames.
+ */
+--
+2.43.0
+
--- /dev/null
+From 2c63c0682ab7f897bc218f0a6108d062294c4fb0 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 15 Mar 2024 13:04:52 +0100
+Subject: hsr: Handle failures in module init
+
+From: Felix Maurer <fmaurer@redhat.com>
+
+[ Upstream commit 3cf28cd492308e5f63ed00b29ea03ca016264376 ]
+
+A failure during registration of the netdev notifier was not handled at
+all. A failure during netlink initialization did not unregister the netdev
+notifier.
+
+Handle failures of netdev notifier registration and netlink initialization.
+Both functions should only return negative values on failure and thereby
+lead to the hsr module not being loaded.
+
+Fixes: f421436a591d ("net/hsr: Add support for the High-availability Seamless Redundancy protocol (HSRv0)")
+Signed-off-by: Felix Maurer <fmaurer@redhat.com>
+Reviewed-by: Shigeru Yoshida <syoshida@redhat.com>
+Reviewed-by: Breno Leitao <leitao@debian.org>
+Link: https://lore.kernel.org/r/3ce097c15e3f7ace98fc7fd9bcbf299f092e63d1.1710504184.git.fmaurer@redhat.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/hsr/hsr_main.c | 15 +++++++++++----
+ 1 file changed, 11 insertions(+), 4 deletions(-)
+
+diff --git a/net/hsr/hsr_main.c b/net/hsr/hsr_main.c
+index b099c31501509..257b50124cee5 100644
+--- a/net/hsr/hsr_main.c
++++ b/net/hsr/hsr_main.c
+@@ -148,14 +148,21 @@ static struct notifier_block hsr_nb = {
+
+ static int __init hsr_init(void)
+ {
+- int res;
++ int err;
+
+ BUILD_BUG_ON(sizeof(struct hsr_tag) != HSR_HLEN);
+
+- register_netdevice_notifier(&hsr_nb);
+- res = hsr_netlink_init();
++ err = register_netdevice_notifier(&hsr_nb);
++ if (err)
++ return err;
++
++ err = hsr_netlink_init();
++ if (err) {
++ unregister_netdevice_notifier(&hsr_nb);
++ return err;
++ }
+
+- return res;
++ return 0;
+ }
+
+ static void __exit hsr_exit(void)
+--
+2.43.0
+
--- /dev/null
+From 0dd68f5ddb4595e6532840d276bcb7ab8c46075e Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 8 Jan 2024 12:19:06 +0000
+Subject: hwtracing: hisi_ptt: Move type check to the beginning of
+ hisi_ptt_pmu_event_init()
+
+From: Yang Jihong <yangjihong1@huawei.com>
+
+[ Upstream commit 06226d120a28f146abd3637799958a4dc4dbb7a1 ]
+
+When perf_init_event() calls perf_try_init_event() to init a pmu driver, it
+searches for the next pmu driver only when the return value is -ENOENT.
+Therefore, hisi_ptt_pmu_event_init() needs to check the event type at the
+beginning of the function. Otherwise, in perf-task mode, perf_try_init_event()
+returns -EOPNOTSUPP and subsequent pmu drivers are skipped, causing
+perf_init_event() to fail.
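+
+For context, a simplified sketch of the perf core behaviour described above
+(illustrative only, not the actual implementation):
+
+	/* perf_init_event(), conceptually: walk the candidate PMUs */
+	list_for_each_entry(pmu, &pmus, entry) {
+		ret = perf_try_init_event(pmu, event); /* calls pmu->event_init() */
+		if (!ret)
+			return pmu;		/* this PMU accepted the event */
+		if (ret != -ENOENT)
+			return ERR_PTR(ret);	/* any other error aborts the search */
+	}
+	return ERR_PTR(-ENOENT);
+
+Returning -EOPNOTSUPP before the type check therefore aborts the whole search
+instead of letting the next pmu driver try the event.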
+
+Fixes: ff0de066b463 ("hwtracing: hisi_ptt: Add trace function support for HiSilicon PCIe Tune and Trace device")
+Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
+Reviewed-by: Yicong Yang <yangyicong@hisilicon.com>
+Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
+Link: https://lore.kernel.org/r/20240108121906.3514820-1-yangjihong1@huawei.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/hwtracing/ptt/hisi_ptt.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/hwtracing/ptt/hisi_ptt.c b/drivers/hwtracing/ptt/hisi_ptt.c
+index 8d8fa8e8afe04..20a9cddb3723a 100644
+--- a/drivers/hwtracing/ptt/hisi_ptt.c
++++ b/drivers/hwtracing/ptt/hisi_ptt.c
+@@ -654,6 +654,9 @@ static int hisi_ptt_pmu_event_init(struct perf_event *event)
+ int ret;
+ u32 val;
+
++ if (event->attr.type != hisi_ptt->hisi_ptt_pmu.type)
++ return -ENOENT;
++
+ if (event->cpu < 0) {
+ dev_dbg(event->pmu->dev, "Per-task mode not supported\n");
+ return -EOPNOTSUPP;
+@@ -662,9 +665,6 @@ static int hisi_ptt_pmu_event_init(struct perf_event *event)
+ if (event->attach_state & PERF_ATTACH_TASK)
+ return -EOPNOTSUPP;
+
+- if (event->attr.type != hisi_ptt->hisi_ptt_pmu.type)
+- return -ENOENT;
+-
+ ret = hisi_ptt_trace_valid_filter(hisi_ptt, event->attr.config);
+ if (ret < 0)
+ return ret;
+--
+2.43.0
+
--- /dev/null
+From 468a6efc74f4364a606714c477adc62cda15edfd Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 15 Mar 2024 15:35:40 +0100
+Subject: ipv4: raw: Fix sending packets from raw sockets via IPsec tunnels
+
+From: Tobias Brunner <tobias@strongswan.org>
+
+[ Upstream commit c9b3b81716c5b92132a6c1d4ac3c48a7b44082ab ]
+
+Since the referenced commit, the xfrm_inner_extract_output() function
+uses the protocol field to determine the address family. So not setting
+it for IPv4 raw sockets meant that such packets couldn't be tunneled via
+IPsec anymore.
+
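+As a rough sketch of the behaviour described above (conceptual only; the
+branch contents and the error value are illustrative, not the exact upstream
+code):
+
+	/* xfrm_inner_extract_output(), conceptually */
+	switch (skb->protocol) {
+	case htons(ETH_P_IP):
+		/* treat the inner packet as IPv4 */
+		break;
+	case htons(ETH_P_IPV6):
+		/* treat the inner packet as IPv6 */
+		break;
+	default:
+		/* raw IPv4 skbs without skb->protocol set ended up here,
+		 * so they could no longer be tunneled via IPsec
+		 */
+		return -EAFNOSUPPORT;
+	}
+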
+IPv6 raw sockets are not affected as they already set the protocol since
+9c9c9ad5fae7 ("ipv6: set skb->protocol on tcp, raw and ip6_append_data
+genereated skbs").
+
+Fixes: f4796398f21b ("xfrm: Remove inner/outer modes from output path")
+Signed-off-by: Tobias Brunner <tobias@strongswan.org>
+Reviewed-by: David Ahern <dsahern@kernel.org>
+Reviewed-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
+Link: https://lore.kernel.org/r/c5d9a947-eb19-4164-ac99-468ea814ce20@strongswan.org
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/ipv4/raw.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/net/ipv4/raw.c b/net/ipv4/raw.c
+index 7c63b91edbf7a..ee0efd0efec40 100644
+--- a/net/ipv4/raw.c
++++ b/net/ipv4/raw.c
+@@ -348,6 +348,7 @@ static int raw_send_hdrinc(struct sock *sk, struct flowi4 *fl4,
+ goto error;
+ skb_reserve(skb, hlen);
+
++ skb->protocol = htons(ETH_P_IP);
+ skb->priority = READ_ONCE(sk->sk_priority);
+ skb->mark = sockc->mark;
+ skb->tstamp = sockc->transmit_time;
+--
+2.43.0
+
--- /dev/null
+From 32fdb892778c62b9650ce52afa9a752fccd32ee7 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sat, 3 Feb 2024 00:57:59 +0900
+Subject: kconfig: fix infinite loop when expanding a macro at the end of file
+
+From: Masahiro Yamada <masahiroy@kernel.org>
+
+[ Upstream commit af8bbce92044dc58e4cc039ab94ee5d470a621f5 ]
+
+A macro placed at the end of a file with no newline causes an infinite
+loop.
+
+[Test Kconfig]
+ $(info,hello)
+ \ No newline at end of file
+
+I realized that flex-provided input() returns 0 instead of EOF when it
+reaches the end of a file.
+
+Fixes: 104daea149c4 ("kconfig: reference environment variables directly and remove 'option env='")
+Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ scripts/kconfig/lexer.l | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/scripts/kconfig/lexer.l b/scripts/kconfig/lexer.l
+index cc386e4436834..2c2b3e6f248ca 100644
+--- a/scripts/kconfig/lexer.l
++++ b/scripts/kconfig/lexer.l
+@@ -302,8 +302,11 @@ static char *expand_token(const char *in, size_t n)
+ new_string();
+ append_string(in, n);
+
+- /* get the whole line because we do not know the end of token. */
+- while ((c = input()) != EOF) {
++ /*
++ * get the whole line because we do not know the end of token.
++ * input() returns 0 (not EOF!) when it reachs the end of file.
++ */
++ while ((c = input()) != 0) {
+ if (c == '\n') {
+ unput(c);
+ break;
+--
+2.43.0
+
--- /dev/null
+From 520ecd9117516ee55e1dc6c8b91ccc1a9c8fcebb Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 15 Mar 2024 15:55:35 -0500
+Subject: net/bnx2x: Prevent access to a freed page in page_pool
+
+From: Thinh Tran <thinhtr@linux.ibm.com>
+
+[ Upstream commit d27e2da94a42655861ca4baea30c8cd65546f25d ]
+
+Fix race condition leading to system crash during EEH error handling
+
+During EEH error recovery, the bnx2x driver's transmit timeout logic
+could cause a race condition when handling reset tasks. The
+bnx2x_tx_timeout() schedules reset tasks via bnx2x_sp_rtnl_task(),
+which ultimately leads to bnx2x_nic_unload(). In bnx2x_nic_unload()
+SGEs are freed using bnx2x_free_rx_sge_range(). However, this could
+overlap with the EEH driver's attempt to reset the device using
+bnx2x_io_slot_reset(), which also tries to free SGEs. This race
+condition can result in system crashes due to accessing freed memory
+locations in bnx2x_free_rx_sge()
+
+799 static inline void bnx2x_free_rx_sge(struct bnx2x *bp,
+800 struct bnx2x_fastpath *fp, u16 index)
+801 {
+802 struct sw_rx_page *sw_buf = &fp->rx_page_ring[index];
+803 struct page *page = sw_buf->page;
+....
+where sw_buf was set to NULL after the call to dma_unmap_page()
+by the preceding thread.
+
+ EEH: Beginning: 'slot_reset'
+ PCI 0011:01:00.0#10000: EEH: Invoking bnx2x->slot_reset()
+ bnx2x: [bnx2x_io_slot_reset:14228(eth1)]IO slot reset initializing...
+ bnx2x 0011:01:00.0: enabling device (0140 -> 0142)
+ bnx2x: [bnx2x_io_slot_reset:14244(eth1)]IO slot reset --> driver unload
+ Kernel attempted to read user page (0) - exploit attempt? (uid: 0)
+ BUG: Kernel NULL pointer dereference on read at 0x00000000
+ Faulting instruction address: 0xc0080000025065fc
+ Oops: Kernel access of bad area, sig: 11 [#1]
+ .....
+ Call Trace:
+ [c000000003c67a20] [c00800000250658c] bnx2x_io_slot_reset+0x204/0x610 [bnx2x] (unreliable)
+ [c000000003c67af0] [c0000000000518a8] eeh_report_reset+0xb8/0xf0
+ [c000000003c67b60] [c000000000052130] eeh_pe_report+0x180/0x550
+ [c000000003c67c70] [c00000000005318c] eeh_handle_normal_event+0x84c/0xa60
+ [c000000003c67d50] [c000000000053a84] eeh_event_handler+0xf4/0x170
+ [c000000003c67da0] [c000000000194c58] kthread+0x1c8/0x1d0
+ [c000000003c67e10] [c00000000000cf64] ret_from_kernel_thread+0x5c/0x64
+
+To solve this issue, we need to verify page pool allocations before
+freeing.
+
+Fixes: 4cace675d687 ("bnx2x: Alloc 4k fragment for each rx ring buffer element")
+Signed-off-by: Thinh Tran <thinhtr@linux.ibm.com>
+Reviewed-by: Jiri Pirko <jiri@nvidia.com>
+Link: https://lore.kernel.org/r/20240315205535.1321-1-thinhtr@linux.ibm.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+index d8b1824c334d3..0bc1367fd6492 100644
+--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
++++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_cmn.h
+@@ -1002,9 +1002,6 @@ static inline void bnx2x_set_fw_mac_addr(__le16 *fw_hi, __le16 *fw_mid,
+ static inline void bnx2x_free_rx_mem_pool(struct bnx2x *bp,
+ struct bnx2x_alloc_pool *pool)
+ {
+- if (!pool->page)
+- return;
+-
+ put_page(pool->page);
+
+ pool->page = NULL;
+@@ -1015,6 +1012,9 @@ static inline void bnx2x_free_rx_sge_range(struct bnx2x *bp,
+ {
+ int i;
+
++ if (!fp->page_pool.page)
++ return;
++
+ if (fp->mode == TPA_MODE_DISABLED)
+ return;
+
+--
+2.43.0
+
--- /dev/null
+From b037c062310f96a6eb557d66b6d60a4f3a3dfc71 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 12:33:42 +0300
+Subject: net: dsa: mt7530: fix handling of all link-local frames
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Arınç ÜNAL <arinc.unal@arinc9.com>
+
+[ Upstream commit 69ddba9d170bdaee1dc0eb4ced38d7e4bb7b92af ]
+
+Currently, the MT753X switches treat frames with :01-0D and :0F MAC DAs as
+regular multicast frames, therefore flooding them to user ports.
+
+On page 205, section "8.6.3 Frame filtering" of the active standard, IEEE
+Std 802.1Q™-2022, it is stated that frames with 01:80:C2:00:00:00-0F as MAC
+DA must only be propagated to C-VLAN and MAC Bridge components. That means
+VLAN-aware and VLAN-unaware bridges. On the switch designs with CPU ports,
+these frames are supposed to be processed by the CPU (software). So we make
+the switch forward them only to the CPU port, and, if received from a CPU
+port, forward them to a single port. The software is responsible for making
+the switch conform to the latter by setting a single port as the destination
+port on the special tag.
+
+This switch intellectual property cannot fully conform to this part of the
+standard. Whilst the REV_UN frame tag covers the remaining :04-0D and :0F MAC
+DAs, it also includes :22-FF, for which the scope of propagation is not
+supposed to be restricted.
+
+Set frames with :01-03 MAC DAs to be trapped to the CPU port(s). Add a
+comment for the remaining MAC DAs.
+
+Note that the ingress port must have a PVID assigned to it for the switch
+to forward untagged frames. A PVID is set by default on VLAN-aware and
+VLAN-unaware ports. However, when the network interface that pertains to
+the ingress port is attached to a vlan_filtering enabled bridge, the user
+can remove the PVID assignment from it, which would prevent the link-local
+frames from being trapped to the CPU port. I have yet to see a way to forward
+link-local frames while preventing other untagged frames from being
+forwarded too.
+
+Fixes: b8f126a8d543 ("net-next: dsa: add dsa support for Mediatek MT7530 switch")
+Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/dsa/mt7530.c | 37 +++++++++++++++++++++++++++++++++----
+ drivers/net/dsa/mt7530.h | 13 +++++++++++++
+ 2 files changed, 46 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 86c410f9fef8c..07065c1af55e4 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -998,6 +998,21 @@ static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
+ mutex_unlock(&priv->reg_mutex);
+ }
+
++/* On page 205, section "8.6.3 Frame filtering" of the active standard, IEEE Std
++ * 802.1Q™-2022, it is stated that frames with 01:80:C2:00:00:00-0F as MAC DA
++ * must only be propagated to C-VLAN and MAC Bridge components. That means
++ * VLAN-aware and VLAN-unaware bridges. On the switch designs with CPU ports,
++ * these frames are supposed to be processed by the CPU (software). So we make
++ * the switch only forward them to the CPU port. And if received from a CPU
++ * port, forward to a single port. The software is responsible of making the
++ * switch conform to the latter by setting a single port as destination port on
++ * the special tag.
++ *
++ * This switch intellectual property cannot conform to this part of the standard
++ * fully. Whilst the REV_UN frame tag covers the remaining :04-0D and :0F MAC
++ * DAs, it also includes :22-FF which the scope of propagation is not supposed
++ * to be restricted for these MAC DAs.
++ */
+ static void
+ mt753x_trap_frames(struct mt7530_priv *priv)
+ {
+@@ -1012,13 +1027,27 @@ mt753x_trap_frames(struct mt7530_priv *priv)
+ MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+ MT753X_BPDU_CPU_ONLY);
+
+- /* Trap LLDP frames with :0E MAC DA to the CPU port(s) and egress them
+- * VLAN-untagged.
++ /* Trap frames with :01 and :02 MAC DAs to the CPU port(s) and egress
++ * them VLAN-untagged.
++ */
++ mt7530_rmw(priv, MT753X_RGAC1, MT753X_R02_EG_TAG_MASK |
++ MT753X_R02_PORT_FW_MASK | MT753X_R01_EG_TAG_MASK |
++ MT753X_R01_PORT_FW_MASK,
++ MT753X_R02_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++ MT753X_R02_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++ MT753X_R01_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++ MT753X_BPDU_CPU_ONLY);
++
++ /* Trap frames with :03 and :0E MAC DAs to the CPU port(s) and egress
++ * them VLAN-untagged.
+ */
+ mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_EG_TAG_MASK |
+- MT753X_R0E_PORT_FW_MASK,
++ MT753X_R0E_PORT_FW_MASK | MT753X_R03_EG_TAG_MASK |
++ MT753X_R03_PORT_FW_MASK,
+ MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+- MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
++ MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++ MT753X_R03_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++ MT753X_BPDU_CPU_ONLY);
+ }
+
+ static int
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index a5b864fd7d60c..fa2afa67ceb07 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -71,12 +71,25 @@ enum mt753x_id {
+ #define MT753X_BPDU_EG_TAG(x) FIELD_PREP(MT753X_BPDU_EG_TAG_MASK, x)
+ #define MT753X_BPDU_PORT_FW_MASK GENMASK(2, 0)
+
++/* Register for :01 and :02 MAC DA frame control */
++#define MT753X_RGAC1 0x28
++#define MT753X_R02_EG_TAG_MASK GENMASK(24, 22)
++#define MT753X_R02_EG_TAG(x) FIELD_PREP(MT753X_R02_EG_TAG_MASK, x)
++#define MT753X_R02_PORT_FW_MASK GENMASK(18, 16)
++#define MT753X_R02_PORT_FW(x) FIELD_PREP(MT753X_R02_PORT_FW_MASK, x)
++#define MT753X_R01_EG_TAG_MASK GENMASK(8, 6)
++#define MT753X_R01_EG_TAG(x) FIELD_PREP(MT753X_R01_EG_TAG_MASK, x)
++#define MT753X_R01_PORT_FW_MASK GENMASK(2, 0)
++
+ /* Register for :03 and :0E MAC DA frame control */
+ #define MT753X_RGAC2 0x2c
+ #define MT753X_R0E_EG_TAG_MASK GENMASK(24, 22)
+ #define MT753X_R0E_EG_TAG(x) FIELD_PREP(MT753X_R0E_EG_TAG_MASK, x)
+ #define MT753X_R0E_PORT_FW_MASK GENMASK(18, 16)
+ #define MT753X_R0E_PORT_FW(x) FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
++#define MT753X_R03_EG_TAG_MASK GENMASK(8, 6)
++#define MT753X_R03_EG_TAG(x) FIELD_PREP(MT753X_R03_EG_TAG_MASK, x)
++#define MT753X_R03_PORT_FW_MASK GENMASK(2, 0)
+
+ enum mt753x_bpdu_port_fw {
+ MT753X_BPDU_FOLLOW_MFC,
+--
+2.43.0
+
--- /dev/null
+From 9c394f3e5701736a9cd08043525e42b27311088e Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 12:33:41 +0300
+Subject: net: dsa: mt7530: fix link-local frames that ingress vlan filtering
+ ports
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Arınç ÜNAL <arinc.unal@arinc9.com>
+
+[ Upstream commit e8bf353577f382c7066c661fed41b2adc0fc7c40 ]
+
+Whether VLAN-aware or not, on every VID VLAN table entry that has the CPU
+port as a member of it, frames are set to egress the CPU port with the VLAN
+tag stacked. This is so that VLAN tags can be appended after hardware
+special tag (called DSA tag in the context of Linux drivers).
+
+For user ports on a VLAN-unaware bridge, a frame ingressing the user port
+egresses the CPU port with only the special tag.
+
+For user ports on a VLAN-aware bridge, a frame ingressing the user port
+egresses the CPU port with the special tag and the VLAN tag.
+
+This causes issues with link-local frames, specifically BPDUs, because the
+software expects to receive them VLAN-untagged.
+
+There are two options to make link-local frames egress untagged: setting
+CONSISTENT or UNTAGGED on the EG_TAG bits of the relevant register.
+CONSISTENT means frames egress exactly as they ingress; that is, they egress
+with the VLAN tag they had at ingress, or egress untagged if they ingressed
+untagged. Although link-local frames are not supposed to be transmitted
+VLAN-tagged, if they are, the special tag field will be broken when they
+egress through a CPU port.
+
+BPDU egresses CPU port with VLAN tag egressing stacked, received on
+software:
+
+00:01:25.104821 AF Unknown (382365846), length 106:
+ | STAG | | VLAN |
+ 0x0000: 0000 6c27 614d 4143 0001 0000 8100 0001 ..l'aMAC........
+ 0x0010: 0026 4242 0300 0000 0000 0000 6c27 614d .&BB........l'aM
+ 0x0020: 4143 0000 0000 0000 6c27 614d 4143 0000 AC......l'aMAC..
+ 0x0030: 0000 1400 0200 0f00 0000 0000 0000 0000 ................
+
+BPDU egresses CPU port with VLAN tag egressing untagged, received on
+software:
+
+00:23:56.628708 AF Unknown (25215488), length 64:
+ | STAG |
+ 0x0000: 0000 6c27 614d 4143 0001 0000 0026 4242 ..l'aMAC.....&BB
+ 0x0010: 0300 0000 0000 0000 6c27 614d 4143 0000 ........l'aMAC..
+ 0x0020: 0000 0000 6c27 614d 4143 0000 0000 1400 ....l'aMAC......
+ 0x0030: 0200 0f00 0000 0000 0000 0000 ............
+
+BPDU egresses CPU port with VLAN tag egressing tagged, received on
+software:
+
+00:01:34.311963 AF Unknown (25215488), length 64:
+ | Mess |
+ 0x0000: 0000 6c27 614d 4143 0001 0001 0026 4242 ..l'aMAC.....&BB
+ 0x0010: 0300 0000 0000 0000 6c27 614d 4143 0000 ........l'aMAC..
+ 0x0020: 0000 0000 6c27 614d 4143 0000 0000 1400 ....l'aMAC......
+ 0x0030: 0200 0f00 0000 0000 0000 0000 ............
+
+To prevent confusing the software, force the frame to egress UNTAGGED
+instead of CONSISTENT. This way, frames can't possibly be received TAGGED
+by software which would have the special tag field broken.
+
+VLAN Tag Egress Procedure
+
+ For all frames, one of these options set the earliest in this order will
+ apply to the frame:
+
+ - EG_TAG in certain registers for certain frames.
+ This will apply to frame with matching MAC DA or EtherType.
+
+ - EG_TAG in the address table.
+ This will apply to frame at its incoming port.
+
+ - EG_TAG in the PVC register.
+ This will apply to frame at its incoming port.
+
+ - EG_CON and [EG_TAG per port] in the VLAN table.
+ This will apply to frame at its outgoing port.
+
+ - EG_TAG in the PCR register.
+ This will apply to frame at its outgoing port.
+
+ EG_TAG in certain registers for certain frames:
+
+ PPPoE Discovery_ARP/RARP: PPP_EG_TAG and ARP_EG_TAG in the APC register.
+ IGMP_MLD: IGMP_EG_TAG and MLD_EG_TAG in the IMC register.
+ BPDU and PAE: BPDU_EG_TAG and PAE_EG_TAG in the BPC register.
+ REV_01 and REV_02: R01_EG_TAG and R02_EG_TAG in the RGAC1 register.
+ REV_03 and REV_0E: R03_EG_TAG and R0E_EG_TAG in the RGAC2 register.
+ REV_10 and REV_20: R10_EG_TAG and R20_EG_TAG in the RGAC3 register.
+ REV_21 and REV_UN: R21_EG_TAG and RUN_EG_TAG in the RGAC4 register.
+
+With this change, it can be observed that a bridge interface with stp_state
+and vlan_filtering enabled will properly block ports now.
+
+Fixes: b8f126a8d543 ("net-next: dsa: add dsa support for Mediatek MT7530 switch")
+Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/dsa/mt7530.c | 23 +++++++++++++++--------
+ drivers/net/dsa/mt7530.h | 9 ++++++++-
+ 2 files changed, 23 insertions(+), 9 deletions(-)
+
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index 80b346d4d990f..86c410f9fef8c 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -1001,16 +1001,23 @@ static void mt7530_setup_port5(struct dsa_switch *ds, phy_interface_t interface)
+ static void
+ mt753x_trap_frames(struct mt7530_priv *priv)
+ {
+- /* Trap BPDUs to the CPU port(s) */
+- mt7530_rmw(priv, MT753X_BPC, MT753X_BPDU_PORT_FW_MASK,
++ /* Trap 802.1X PAE frames and BPDUs to the CPU port(s) and egress them
++ * VLAN-untagged.
++ */
++ mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_EG_TAG_MASK |
++ MT753X_PAE_PORT_FW_MASK | MT753X_BPDU_EG_TAG_MASK |
++ MT753X_BPDU_PORT_FW_MASK,
++ MT753X_PAE_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
++ MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY) |
++ MT753X_BPDU_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+ MT753X_BPDU_CPU_ONLY);
+
+- /* Trap 802.1X PAE frames to the CPU port(s) */
+- mt7530_rmw(priv, MT753X_BPC, MT753X_PAE_PORT_FW_MASK,
+- MT753X_PAE_PORT_FW(MT753X_BPDU_CPU_ONLY));
+-
+- /* Trap LLDP frames with :0E MAC DA to the CPU port(s) */
+- mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_PORT_FW_MASK,
++ /* Trap LLDP frames with :0E MAC DA to the CPU port(s) and egress them
++ * VLAN-untagged.
++ */
++ mt7530_rmw(priv, MT753X_RGAC2, MT753X_R0E_EG_TAG_MASK |
++ MT753X_R0E_PORT_FW_MASK,
++ MT753X_R0E_EG_TAG(MT7530_VLAN_EG_UNTAGGED) |
+ MT753X_R0E_PORT_FW(MT753X_BPDU_CPU_ONLY));
+ }
+
+diff --git a/drivers/net/dsa/mt7530.h b/drivers/net/dsa/mt7530.h
+index 6202b0f8c3f34..a5b864fd7d60c 100644
+--- a/drivers/net/dsa/mt7530.h
++++ b/drivers/net/dsa/mt7530.h
+@@ -63,12 +63,18 @@ enum mt753x_id {
+
+ /* Registers for BPDU and PAE frame control*/
+ #define MT753X_BPC 0x24
+-#define MT753X_BPDU_PORT_FW_MASK GENMASK(2, 0)
++#define MT753X_PAE_EG_TAG_MASK GENMASK(24, 22)
++#define MT753X_PAE_EG_TAG(x) FIELD_PREP(MT753X_PAE_EG_TAG_MASK, x)
+ #define MT753X_PAE_PORT_FW_MASK GENMASK(18, 16)
+ #define MT753X_PAE_PORT_FW(x) FIELD_PREP(MT753X_PAE_PORT_FW_MASK, x)
++#define MT753X_BPDU_EG_TAG_MASK GENMASK(8, 6)
++#define MT753X_BPDU_EG_TAG(x) FIELD_PREP(MT753X_BPDU_EG_TAG_MASK, x)
++#define MT753X_BPDU_PORT_FW_MASK GENMASK(2, 0)
+
+ /* Register for :03 and :0E MAC DA frame control */
+ #define MT753X_RGAC2 0x2c
++#define MT753X_R0E_EG_TAG_MASK GENMASK(24, 22)
++#define MT753X_R0E_EG_TAG(x) FIELD_PREP(MT753X_R0E_EG_TAG_MASK, x)
+ #define MT753X_R0E_PORT_FW_MASK GENMASK(18, 16)
+ #define MT753X_R0E_PORT_FW(x) FIELD_PREP(MT753X_R0E_PORT_FW_MASK, x)
+
+@@ -251,6 +257,7 @@ enum mt7530_port_mode {
+ enum mt7530_vlan_port_eg_tag {
+ MT7530_VLAN_EG_DISABLED = 0,
+ MT7530_VLAN_EG_CONSISTENT = 1,
++ MT7530_VLAN_EG_UNTAGGED = 4,
+ };
+
+ enum mt7530_vlan_port_attr {
+--
+2.43.0
+
--- /dev/null
+From 3f9f94a61e29cb882a2e95e059149b4467b3dcc5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 12:28:35 +0300
+Subject: net: dsa: mt7530: prevent possible incorrect XTAL frequency selection
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Arınç ÜNAL <arinc.unal@arinc9.com>
+
+[ Upstream commit f490c492e946d8ffbe65ad4efc66de3c5ede30a4 ]
+
+On MT7530, the HT_XTAL_FSEL field of the HWTRAP register stores a 2-bit
+value that represents the frequency of the crystal oscillator connected to
+the switch IC. The field is populated by the state of the ESW_P4_LED_0 and
+ESW_P3_LED_0 pins, which is done right after reset is deasserted.
+
+ ESW_P4_LED_0 ESW_P3_LED_0 Frequency
+ -----------------------------------------
+ 0 0 Reserved
+ 0 1 20MHz
+ 1 0 40MHz
+ 1 1 25MHz
+
+On MT7531, the XTAL25 bit of the STRAP register stores this. The LAN0LED0
+pin is used to populate the bit. 25MHz when the pin is high, 40MHz when
+it's low.
+
+These pins are also used with LEDs, therefore, their state can be set to
+something other than the bootstrapping configuration. For example, a link
+may be established on port 3 before the DSA subdriver takes control of the
+switch which would set ESW_P3_LED_0 to high.
+
+Currently, in mt7530_setup() and mt7531_setup(), a 1000 - 1100 usec delay is
+used between reset assertion and deassertion. Under real-life conditions, some
+switch ICs cannot always have these pins set back to the bootstrapping
+configuration within this delay before reset is deasserted. This causes the
+wrong crystal frequency to be selected, which puts the switch in a
+nonfunctional state after reset deassertion.
+
+The tests below are conducted on an MT7530 with a 40MHz crystal oscillator
+by Justin Swartz.
+
+With a cable from an active peer connected to port 3 before reset, an
+incorrect crystal frequency (0b11 = 25MHz) is selected:
+
+ [1] [3] [5]
+ : : :
+ _____________________________ __________________
+ESW_P4_LED_0 |_______|
+ _____________________________
+ESW_P3_LED_0 |__________________________
+
+ : : : :
+ : : [4]...:
+ : :
+ [2]................:
+
+[1] Reset is asserted.
+[2] Period of 1000 - 1100 usec.
+[3] Reset is deasserted.
+[4] Period of 315 usec. HWTRAP register is populated with incorrect
+ XTAL frequency.
+[5] Signals reflect the bootstrapped configuration.
+
+Increase the delay between reset_control_assert() and
+reset_control_deassert(), and gpiod_set_value_cansleep(priv->reset, 0) and
+gpiod_set_value_cansleep(priv->reset, 1) to 5000 - 5100 usec. This amount
+ensures a higher possibility that the switch IC will have these pins back
+to the bootstrapping configuration before reset deassertion.
+
+With a cable from an active peer connected to port 3 before reset, the
+correct crystal frequency (0b10 = 40MHz) is selected:
+
+ [1] [2-1] [3] [5]
+ : : : :
+ _____________________________ __________________
+ESW_P4_LED_0 |_______|
+ ___________________ _______
+ESW_P3_LED_0 |_________| |__________________
+
+ : : : : :
+ : [2-2]...: [4]...:
+ [2]................:
+
+[1] Reset is asserted.
+[2] Period of 5000 - 5100 usec.
+[2-1] ESW_P3_LED_0 goes low.
+[2-2] Remaining period of 5000 - 5100 usec.
+[3] Reset is deasserted.
+[4] Period of 310 usec. HWTRAP register is populated with bootstrapped
+ XTAL frequency.
+[5] Signals reflect the bootstrapped configuration.
+
+ESW_P3_LED_0 low period before reset deassertion:
+
+ 5000 usec
+ - 5100 usec
+ TEST RESET HOLD
+ # (usec)
+ ---------------------
+ 1 5410
+ 2 5440
+ 3 4375
+ 4 5490
+ 5 5475
+ 6 4335
+ 7 4370
+ 8 5435
+ 9 4205
+ 10 4335
+ 11 3750
+ 12 3170
+ 13 4395
+ 14 4375
+ 15 3515
+ 16 4335
+ 17 4220
+ 18 4175
+ 19 4175
+ 20 4350
+
+ Min 3170
+ Max 5490
+
+ Median 4342.500
+ Avg 4466.500
+
+Revert commit 2920dd92b980 ("net: dsa: mt7530: disable LEDs before reset").
+Changing the state of pins via reset assertion is simpler and more
+efficient than doing so by setting the LED controller off.
+
+Fixes: b8f126a8d543 ("net-next: dsa: add dsa support for Mediatek MT7530 switch")
+Fixes: c288575f7810 ("net: dsa: mt7530: Add the support of MT7531 switch")
+Co-developed-by: Justin Swartz <justin.swartz@risingedge.co.za>
+Signed-off-by: Justin Swartz <justin.swartz@risingedge.co.za>
+Signed-off-by: Arınç ÜNAL <arinc.unal@arinc9.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/dsa/mt7530.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/net/dsa/mt7530.c b/drivers/net/dsa/mt7530.c
+index b988c8a40d536..80b346d4d990f 100644
+--- a/drivers/net/dsa/mt7530.c
++++ b/drivers/net/dsa/mt7530.c
+@@ -2187,11 +2187,11 @@ mt7530_setup(struct dsa_switch *ds)
+ */
+ if (priv->mcm) {
+ reset_control_assert(priv->rstc);
+- usleep_range(1000, 1100);
++ usleep_range(5000, 5100);
+ reset_control_deassert(priv->rstc);
+ } else {
+ gpiod_set_value_cansleep(priv->reset, 0);
+- usleep_range(1000, 1100);
++ usleep_range(5000, 5100);
+ gpiod_set_value_cansleep(priv->reset, 1);
+ }
+
+@@ -2401,11 +2401,11 @@ mt7531_setup(struct dsa_switch *ds)
+ */
+ if (priv->mcm) {
+ reset_control_assert(priv->rstc);
+- usleep_range(1000, 1100);
++ usleep_range(5000, 5100);
+ reset_control_deassert(priv->rstc);
+ } else {
+ gpiod_set_value_cansleep(priv->reset, 0);
+- usleep_range(1000, 1100);
++ usleep_range(5000, 5100);
+ gpiod_set_value_cansleep(priv->reset, 1);
+ }
+
+--
+2.43.0
+
--- /dev/null
+From c6c96d42fd5e10d70e6c7773ef5343b00218c1f5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 13 Mar 2024 22:50:40 +0000
+Subject: net: ethernet: mtk_eth_soc: fix PPE hanging issue
+
+From: Daniel Golle <daniel@makrotopia.org>
+
+[ Upstream commit ea80e3ed09ab2c2b75724faf5484721753e92c31 ]
+
+A patch to resolve an issue was found in MediaTek's GPL-licensed SDK:
+In the mtk_ppe_stop() function, the PPE scan mode is not disabled before
+disabling the PPE. This can potentially lead to a hang during the process
+of disabling the PPE.
+
+Without this patch, the PPE may experience a hang during the reboot test.
+
+Link: https://git01.mediatek.com/plugins/gitiles/openwrt/feeds/mtk-openwrt-feeds/+/b40da332dfe763932a82f9f62a4709457a15dd6c
+Fixes: ba37b7caf1ed ("net: ethernet: mtk_eth_soc: add support for initializing the PPE")
+Suggested-by: Bc-bocun Chen <bc-bocun.chen@mediatek.com>
+Signed-off-by: Daniel Golle <daniel@makrotopia.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/mediatek/mtk_ppe.c | 18 +++++++++++-------
+ 1 file changed, 11 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/net/ethernet/mediatek/mtk_ppe.c b/drivers/net/ethernet/mediatek/mtk_ppe.c
+index d6eed204574a9..c64211e22ae70 100644
+--- a/drivers/net/ethernet/mediatek/mtk_ppe.c
++++ b/drivers/net/ethernet/mediatek/mtk_ppe.c
+@@ -811,7 +811,7 @@ void mtk_ppe_start(struct mtk_ppe *ppe)
+ MTK_PPE_KEEPALIVE_DISABLE) |
+ FIELD_PREP(MTK_PPE_TB_CFG_HASH_MODE, 1) |
+ FIELD_PREP(MTK_PPE_TB_CFG_SCAN_MODE,
+- MTK_PPE_SCAN_MODE_KEEPALIVE_AGE) |
++ MTK_PPE_SCAN_MODE_CHECK_AGE) |
+ FIELD_PREP(MTK_PPE_TB_CFG_ENTRY_NUM,
+ MTK_PPE_ENTRIES_SHIFT);
+ if (MTK_HAS_CAPS(ppe->eth->soc->caps, MTK_NETSYS_V2))
+@@ -895,17 +895,21 @@ int mtk_ppe_stop(struct mtk_ppe *ppe)
+
+ mtk_ppe_cache_enable(ppe, false);
+
+- /* disable offload engine */
+- ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
+- ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
+-
+ /* disable aging */
+ val = MTK_PPE_TB_CFG_AGE_NON_L4 |
+ MTK_PPE_TB_CFG_AGE_UNBIND |
+ MTK_PPE_TB_CFG_AGE_TCP |
+ MTK_PPE_TB_CFG_AGE_UDP |
+- MTK_PPE_TB_CFG_AGE_TCP_FIN;
++ MTK_PPE_TB_CFG_AGE_TCP_FIN |
++ MTK_PPE_TB_CFG_SCAN_MODE;
+ ppe_clear(ppe, MTK_PPE_TB_CFG, val);
+
+- return mtk_ppe_wait_busy(ppe);
++ if (mtk_ppe_wait_busy(ppe))
++ return -ETIMEDOUT;
++
++ /* disable offload engine */
++ ppe_clear(ppe, MTK_PPE_GLO_CFG, MTK_PPE_GLO_CFG_EN);
++ ppe_w32(ppe, MTK_PPE_FLOW_CFG, 0);
++
++ return 0;
+ }
+--
+2.43.0
+
--- /dev/null
+From 8b0ed6d3e1446b2441a480338982354d3499569f Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 13 Mar 2024 22:50:18 +0000
+Subject: net: mediatek: mtk_eth_soc: clear MAC_MCR_FORCE_LINK only when MAC is
+ up
+
+From: Daniel Golle <daniel@makrotopia.org>
+
+[ Upstream commit f1b85ef15a99f06ed48871ce933d591127d2dcc0 ]
+
+Clearing bit MAC_MCR_FORCE_LINK which forces the link down too early
+can result in MAC ending up in a broken/blocked state.
+
+Fix this by handling this bit in the .mac_link_up and .mac_link_down
+calls instead of in .mac_finish.
+
+Fixes: b8fc9f30821e ("net: ethernet: mediatek: Add basic PHYLINK support")
+Suggested-by: Mason-cw Chang <Mason-cw.Chang@mediatek.com>
+Signed-off-by: Daniel Golle <daniel@makrotopia.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/mediatek/mtk_eth_soc.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+index 17e6ac4445afc..fecf3dd22dfaa 100644
+--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+@@ -561,8 +561,7 @@ static int mtk_mac_finish(struct phylink_config *config, unsigned int mode,
+ mcr_cur = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
+ mcr_new = mcr_cur;
+ mcr_new |= MAC_MCR_IPG_CFG | MAC_MCR_FORCE_MODE |
+- MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_FORCE_LINK |
+- MAC_MCR_RX_FIFO_CLR_DIS;
++ MAC_MCR_BACKOFF_EN | MAC_MCR_BACKPR_EN | MAC_MCR_RX_FIFO_CLR_DIS;
+
+ /* Only update control register when needed! */
+ if (mcr_new != mcr_cur)
+@@ -610,7 +609,7 @@ static void mtk_mac_link_down(struct phylink_config *config, unsigned int mode,
+ phylink_config);
+ u32 mcr = mtk_r32(mac->hw, MTK_MAC_MCR(mac->id));
+
+- mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN);
++ mcr &= ~(MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK);
+ mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
+ }
+
+@@ -649,7 +648,7 @@ static void mtk_mac_link_up(struct phylink_config *config,
+ if (rx_pause)
+ mcr |= MAC_MCR_FORCE_RX_FC;
+
+- mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN;
++ mcr |= MAC_MCR_TX_EN | MAC_MCR_RX_EN | MAC_MCR_FORCE_LINK;
+ mtk_w32(mac->hw, mcr, MTK_MAC_MCR(mac->id));
+ }
+
+--
+2.43.0
+
--- /dev/null
+From 42147477307b22b03f93735020a1ce118697dd5f Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 8 May 2023 13:52:28 -1000
+Subject: net: octeontx2: Use alloc_ordered_workqueue() to create ordered
+ workqueues
+
+From: Tejun Heo <tj@kernel.org>
+
+[ Upstream commit 289f97467480266f9bd8cac7f1e05a478d523f79 ]
+
+BACKGROUND
+==========
+
+When multiple work items are queued to a workqueue, their execution order
+doesn't match the queueing order. They may get executed in any order and
+simultaneously. When fully serialized execution - one by one in the queueing
+order - is needed, an ordered workqueue should be used which can be created
+with alloc_ordered_workqueue().
+
+However, alloc_ordered_workqueue() was a later addition. Before it, an
+ordered workqueue could be obtained by creating an UNBOUND workqueue with
+@max_active==1. This originally was an implementation side-effect which was
+broken by 4c16bd327c74 ("workqueue: restore WQ_UNBOUND/max_active==1 to be
+ordered"). Because there were users that depended on the ordered execution,
+5c0338c68706 ("workqueue: restore WQ_UNBOUND/max_active==1 to be ordered")
+made workqueue allocation path to implicitly promote UNBOUND workqueues w/
+@max_active==1 to ordered workqueues.
+
+While this has worked okay, overloading the UNBOUND allocation interface
+this way creates other issues. It's difficult to tell whether a given
+workqueue actually needs to be ordered and users that legitimately want a
+min concurrency level wq unexpectedly gets an ordered one instead. With
+planned UNBOUND workqueue updates to improve execution locality and more
+prevalence of chiplet designs which can benefit from such improvements, this
+isn't a state we wanna be in forever.
+
+This patch series audits all callsites that create an UNBOUND workqueue w/
+@max_active==1 and converts them to alloc_ordered_workqueue() as necessary.
+
+WHAT TO LOOK FOR
+================
+
+The conversions are from
+
+ alloc_workqueue(WQ_UNBOUND | flags, 1, args..)
+
+to
+
+ alloc_ordered_workqueue(flags, args...)
+
+which don't cause any functional changes. If you know that fully ordered
+execution is not necessary, please let me know. I'll drop the conversion and
+instead add a comment noting the fact to reduce confusion while conversion
+is in progress.
+
+If you aren't fully sure, it's completely fine to let the conversion
+through. The behavior will stay exactly the same and we can always
+reconsider later.
+
+As there are follow-up workqueue core changes, I'd really appreciate if the
+patch can be routed through the workqueue tree w/ your acks. Thanks.
+
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Reviewed-by: Sunil Goutham <sgoutham@marvell.com>
+Cc: "David S. Miller" <davem@davemloft.net>
+Cc: Eric Dumazet <edumazet@google.com>
+Cc: Jakub Kicinski <kuba@kernel.org>
+Cc: Paolo Abeni <pabeni@redhat.com>
+Cc: Ratheesh Kannoth <rkannoth@marvell.com>
+Cc: Srujana Challa <schalla@marvell.com>
+Cc: Geetha sowjanya <gakula@marvell.com>
+Cc: netdev@vger.kernel.org
+Stable-dep-of: 7558ce0d974c ("octeontx2-pf: Use default max_active works instead of one")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 5 ++---
+ .../net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 13 +++++--------
+ .../net/ethernet/marvell/octeontx2/nic/otx2_vf.c | 5 ++---
+ 3 files changed, 9 insertions(+), 14 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 59e6442ddf4a4..a7965b457bee9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -3055,9 +3055,8 @@ static int rvu_flr_init(struct rvu *rvu)
+ cfg | BIT_ULL(22));
+ }
+
+- rvu->flr_wq = alloc_workqueue("rvu_afpf_flr",
+- WQ_UNBOUND | WQ_HIGHPRI | WQ_MEM_RECLAIM,
+- 1);
++ rvu->flr_wq = alloc_ordered_workqueue("rvu_afpf_flr",
++ WQ_HIGHPRI | WQ_MEM_RECLAIM);
+ if (!rvu->flr_wq)
+ return -ENOMEM;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 05ee55022b92c..3f044b161e8bf 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -272,8 +272,7 @@ static int otx2_pf_flr_init(struct otx2_nic *pf, int num_vfs)
+ {
+ int vf;
+
+- pf->flr_wq = alloc_workqueue("otx2_pf_flr_wq",
+- WQ_UNBOUND | WQ_HIGHPRI, 1);
++ pf->flr_wq = alloc_ordered_workqueue("otx2_pf_flr_wq", WQ_HIGHPRI);
+ if (!pf->flr_wq)
+ return -ENOMEM;
+
+@@ -584,9 +583,8 @@ static int otx2_pfvf_mbox_init(struct otx2_nic *pf, int numvfs)
+ if (!pf->mbox_pfvf)
+ return -ENOMEM;
+
+- pf->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
+- WQ_UNBOUND | WQ_HIGHPRI |
+- WQ_MEM_RECLAIM, 1);
++ pf->mbox_pfvf_wq = alloc_ordered_workqueue("otx2_pfvf_mailbox",
++ WQ_HIGHPRI | WQ_MEM_RECLAIM);
+ if (!pf->mbox_pfvf_wq)
+ return -ENOMEM;
+
+@@ -1088,9 +1086,8 @@ static int otx2_pfaf_mbox_init(struct otx2_nic *pf)
+ int err;
+
+ mbox->pfvf = pf;
+- pf->mbox_wq = alloc_workqueue("otx2_pfaf_mailbox",
+- WQ_UNBOUND | WQ_HIGHPRI |
+- WQ_MEM_RECLAIM, 1);
++ pf->mbox_wq = alloc_ordered_workqueue("otx2_pfaf_mailbox",
++ WQ_HIGHPRI | WQ_MEM_RECLAIM);
+ if (!pf->mbox_wq)
+ return -ENOMEM;
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 68fef947ccced..dcb8190de2407 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -308,9 +308,8 @@ static int otx2vf_vfaf_mbox_init(struct otx2_nic *vf)
+ int err;
+
+ mbox->pfvf = vf;
+- vf->mbox_wq = alloc_workqueue("otx2_vfaf_mailbox",
+- WQ_UNBOUND | WQ_HIGHPRI |
+- WQ_MEM_RECLAIM, 1);
++ vf->mbox_wq = alloc_ordered_workqueue("otx2_vfaf_mailbox",
++ WQ_HIGHPRI | WQ_MEM_RECLAIM);
+ if (!vf->mbox_wq)
+ return -ENOMEM;
+
+--
+2.43.0
+
--- /dev/null
+From 9a23bed9472e72b3f536ecd97521591d81ecfe64 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 15 Mar 2024 20:50:52 +0300
+Subject: net: phy: fix phy_read_poll_timeout argument type in genphy_loopback
+
+From: Nikita Kiryushin <kiryushin@ancud.ru>
+
+[ Upstream commit 32fa4366cc4da1c97b725a0066adf43c6b298f37 ]
+
+read_poll_timeout inside phy_read_poll_timeout can set val negative
+in some cases (for example, __mdiobus_read inside phy_read can return
+-EOPNOTSUPP).
+
+Supposedly, commit 4ec732951702 ("net: phylib: fix phy_read*_poll_timeout()")
+should fix problems with wrong-signed vals, but I do not see how, as val is
+passed to phy_read() as-is and it is __val (= phy_read(...)), not val, that
+is checked for sign.
+
+Change the type of val to signed to allow better error handling, as done in
+other phy_read_poll_timeout() callers. This will not fix any error handling
+by itself, but it allows, for example, modifying cond with an appropriate
+sign check, or checking the resulting val separately.
+
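+A minimal illustration of why the type matters (assuming phy_read() fails
+with a negative errno such as -EOPNOTSUPP, i.e. -95):
+
+	u16 uval = -EOPNOTSUPP;	/* truncates to 65441; the sign is lost */
+	int sval = -EOPNOTSUPP;	/* stays -95; a later sval < 0 check works */
+
+With a u16 val, no later sign check on the stored value can detect the error.
+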
+Found by Linux Verification Center (linuxtesting.org) with SVACE.
+
+Fixes: 014068dcb5b1 ("net: phy: genphy_loopback: add link speed configuration")
+Signed-off-by: Nikita Kiryushin <kiryushin@ancud.ru>
+Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
+Link: https://lore.kernel.org/r/20240315175052.8049-1-kiryushin@ancud.ru
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/phy/phy_device.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
+index 45b07004669d6..f25b0d338ca8d 100644
+--- a/drivers/net/phy/phy_device.c
++++ b/drivers/net/phy/phy_device.c
+@@ -2640,8 +2640,8 @@ EXPORT_SYMBOL(genphy_resume);
+ int genphy_loopback(struct phy_device *phydev, bool enable)
+ {
+ if (enable) {
+- u16 val, ctl = BMCR_LOOPBACK;
+- int ret;
++ u16 ctl = BMCR_LOOPBACK;
++ int ret, val;
+
+ ctl |= mii_bmcr_encode_fixed(phydev->speed, phydev->duplex);
+
+--
+2.43.0
+
--- /dev/null
+From 48cd3f8ba3b4a9a47361edaeef904310ba4106c7 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Mar 2024 13:44:37 -0700
+Subject: net: report RCU QS on threaded NAPI repolling
+
+From: Yan Zhai <yan@cloudflare.com>
+
+[ Upstream commit d6dbbb11247c71203785a2c9da474c36f4b19eae ]
+
+NAPI threads can keep polling packets under load. Currently they only call
+cond_resched() before repolling, but that is not sufficient to clear out the
+holdout of RCU tasks, which prevents BPF tracing programs from detaching for
+a long period. This can be reproduced easily with the following setup:
+
+ip netns add test1
+ip netns add test2
+
+ip -n test1 link add veth1 type veth peer name veth2 netns test2
+
+ip -n test1 link set veth1 up
+ip -n test1 link set lo up
+ip -n test2 link set veth2 up
+ip -n test2 link set lo up
+
+ip -n test1 addr add 192.168.1.2/31 dev veth1
+ip -n test1 addr add 1.1.1.1/32 dev lo
+ip -n test2 addr add 192.168.1.3/31 dev veth2
+ip -n test2 addr add 2.2.2.2/31 dev lo
+
+ip -n test1 route add default via 192.168.1.3
+ip -n test2 route add default via 192.168.1.2
+
+for i in `seq 10 210`; do
+ for j in `seq 10 210`; do
+ ip netns exec test2 iptables -I INPUT -s 3.3.$i.$j -p udp --dport 5201
+ done
+done
+
+ip netns exec test2 ethtool -K veth2 gro on
+ip netns exec test2 bash -c 'echo 1 > /sys/class/net/veth2/threaded'
+ip netns exec test1 ethtool -K veth1 tso off
+
+Then run an iperf3 client/server and a bpftrace script can trigger it:
+
+ip netns exec test2 iperf3 -s -B 2.2.2.2 >/dev/null&
+ip netns exec test1 iperf3 -c 2.2.2.2 -B 1.1.1.1 -u -l 1500 -b 3g -t 100 >/dev/null&
+bpftrace -e 'kfunc:__napi_poll{@=count();} interval:s:1{exit();}'
+
+Reporting RCU quiescent states periodically will resolve the issue.
+
+Fixes: 29863d41bb6e ("net: implement threaded-able napi poll loop support")
+Reviewed-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Signed-off-by: Yan Zhai <yan@cloudflare.com>
+Acked-by: Paul E. McKenney <paulmck@kernel.org>
+Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Link: https://lore.kernel.org/r/4c3b0d3f32d3b18949d75b18e5e1d9f13a24f025.1710877680.git.yan@cloudflare.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/core/dev.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 9a48a7e26cf46..65284eeec7de5 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -6645,6 +6645,8 @@ static int napi_threaded_poll(void *data)
+ void *have;
+
+ while (!napi_thread_wait(napi)) {
++ unsigned long last_qs = jiffies;
++
+ for (;;) {
+ bool repoll = false;
+
+@@ -6659,6 +6661,7 @@ static int napi_threaded_poll(void *data)
+ if (!repoll)
+ break;
+
++ rcu_softirq_qs_periodic(last_qs);
+ cond_resched();
+ }
+ }
+--
+2.43.0
+
--- /dev/null
+From a8cd4ceb0bfca7eb057fccbd382dcab3e0e9ca6b Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 11 Mar 2024 20:46:28 +0000
+Subject: net/sched: taprio: proper TCA_TAPRIO_TC_ENTRY_INDEX check
+
+From: Eric Dumazet <edumazet@google.com>
+
+[ Upstream commit 343041b59b7810f9cdca371f445dd43b35c740b1 ]
+
+taprio_parse_tc_entry() is not correctly checking the
+TCA_TAPRIO_TC_ENTRY_INDEX attribute:
+
+ int tc; // Signed value
+
+ tc = nla_get_u32(tb[TCA_TAPRIO_TC_ENTRY_INDEX]);
+ if (tc >= TC_QOPT_MAX_QUEUE) {
+ NL_SET_ERR_MSG_MOD(extack, "TC entry index out of range");
+ return -ERANGE;
+ }
+
+syzbot reported that it could be fed arbitrary negative values:
+
+UBSAN: shift-out-of-bounds in net/sched/sch_taprio.c:1722:18
+shift exponent -2147418108 is negative
+CPU: 0 PID: 5066 Comm: syz-executor367 Not tainted 6.8.0-rc7-syzkaller-00136-gc8a5c731fd12 #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
+Call Trace:
+ <TASK>
+ __dump_stack lib/dump_stack.c:88 [inline]
+ dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
+ ubsan_epilogue lib/ubsan.c:217 [inline]
+ __ubsan_handle_shift_out_of_bounds+0x3c7/0x420 lib/ubsan.c:386
+ taprio_parse_tc_entry net/sched/sch_taprio.c:1722 [inline]
+ taprio_parse_tc_entries net/sched/sch_taprio.c:1768 [inline]
+ taprio_change+0xb87/0x57d0 net/sched/sch_taprio.c:1877
+ taprio_init+0x9da/0xc80 net/sched/sch_taprio.c:2134
+ qdisc_create+0x9d4/0x1190 net/sched/sch_api.c:1355
+ tc_modify_qdisc+0xa26/0x1e40 net/sched/sch_api.c:1776
+ rtnetlink_rcv_msg+0x885/0x1040 net/core/rtnetlink.c:6617
+ netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2543
+ netlink_unicast_kernel net/netlink/af_netlink.c:1341 [inline]
+ netlink_unicast+0x7ea/0x980 net/netlink/af_netlink.c:1367
+ netlink_sendmsg+0xa3b/0xd70 net/netlink/af_netlink.c:1908
+ sock_sendmsg_nosec net/socket.c:730 [inline]
+ __sock_sendmsg+0x221/0x270 net/socket.c:745
+ ____sys_sendmsg+0x525/0x7d0 net/socket.c:2584
+ ___sys_sendmsg net/socket.c:2638 [inline]
+ __sys_sendmsg+0x2b0/0x3a0 net/socket.c:2667
+ do_syscall_64+0xf9/0x240
+ entry_SYSCALL_64_after_hwframe+0x6f/0x77
+RIP: 0033:0x7f1b2dea3759
+Code: 48 83 c4 28 c3 e8 d7 19 00 00 0f 1f 80 00 00 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
+RSP: 002b:00007ffd4de452f8 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
+RAX: ffffffffffffffda RBX: 00007f1b2def0390 RCX: 00007f1b2dea3759
+RDX: 0000000000000000 RSI: 00000000200007c0 RDI: 0000000000000004
+RBP: 0000000000000003 R08: 0000555500000000 R09: 0000555500000000
+R10: 0000555500000000 R11: 0000000000000246 R12: 00007ffd4de45340
+R13: 00007ffd4de45310 R14: 0000000000000001 R15: 00007ffd4de45340
+
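+A worked example of the bad path, using the value implied by the splat above
+(0x80010004 is an assumed attacker-supplied attribute value that matches the
+reported shift exponent):
+
+	u32 raw = 0x80010004;	/* nla_get_u32() result, 2147549188 */
+	int tc = raw;		/* wraps to -2147418108 as a signed int */
+	/* tc >= TC_QOPT_MAX_QUEUE is false for negative tc, so the range
+	 * check passes and tc later ends up as a negative shift exponent.
+	 */
+
+Bounding the attribute with NLA_POLICY_MAX() rejects such values during
+netlink policy validation, before taprio_parse_tc_entry() runs.
+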
+Fixes: a54fc09e4cba ("net/sched: taprio: allow user input of per-tc max SDU")
+Reported-and-tested-by: syzbot+a340daa06412d6028918@syzkaller.appspotmail.com
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Cc: Vladimir Oltean <vladimir.oltean@nxp.com>
+Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/sched/sch_taprio.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/net/sched/sch_taprio.c b/net/sched/sch_taprio.c
+index 8d5eebb2dd1b1..1d4638aa4254f 100644
+--- a/net/sched/sch_taprio.c
++++ b/net/sched/sch_taprio.c
+@@ -765,7 +765,8 @@ static const struct nla_policy entry_policy[TCA_TAPRIO_SCHED_ENTRY_MAX + 1] = {
+ };
+
+ static const struct nla_policy taprio_tc_policy[TCA_TAPRIO_TC_ENTRY_MAX + 1] = {
+- [TCA_TAPRIO_TC_ENTRY_INDEX] = { .type = NLA_U32 },
++ [TCA_TAPRIO_TC_ENTRY_INDEX] = NLA_POLICY_MAX(NLA_U32,
++ TC_QOPT_MAX_QUEUE),
+ [TCA_TAPRIO_TC_ENTRY_MAX_SDU] = { .type = NLA_U32 },
+ };
+
+--
+2.43.0
+
--- /dev/null
+From ea741b6777293955ce5583ab52ddc0d354f45c29 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 13 Mar 2024 19:37:58 +0100
+Subject: net: veth: do not manipulate GRO when using XDP
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Ignat Korchagin <ignat@cloudflare.com>
+
+[ Upstream commit d7db7775ea2e31502d46427f5efd385afc4ff1eb ]
+
+Commit d3256efd8e8b ("veth: allow enabling NAPI even without XDP") tried to fix
+the fact that GRO was not possible without XDP, because veth did not use NAPI
+without XDP. However, it also introduced the behaviour that GRO is always
+enabled when XDP is enabled.
+
+While it might be desired for most cases, it is confusing for the user at
+best, as the GRO flag suddenly changes when an XDP program is attached. It also
+introduces some complexities in state management as was partially addressed in
+commit fe9f801355f0 ("net: veth: clear GRO when clearing XDP even when down").
+
+But the biggest problem is that it is not possible to disable GRO at all when
+an XDP program is attached, which might be needed for some use cases.
+
+Fix this by not touching the GRO flag on XDP enable/disable as the code already
+supports switching to NAPI if either GRO or XDP is requested.
+
+Link: https://lore.kernel.org/lkml/20240311124015.38106-1-ignat@cloudflare.com/
+Fixes: d3256efd8e8b ("veth: allow enabling NAPI even without XDP")
+Fixes: fe9f801355f0 ("net: veth: clear GRO when clearing XDP even when down")
+Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
+Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/veth.c | 18 ------------------
+ 1 file changed, 18 deletions(-)
+
+diff --git a/drivers/net/veth.c b/drivers/net/veth.c
+index dd9f5f1461921..8dcd3b6e143b9 100644
+--- a/drivers/net/veth.c
++++ b/drivers/net/veth.c
+@@ -1444,8 +1444,6 @@ static netdev_features_t veth_fix_features(struct net_device *dev,
+ if (peer_priv->_xdp_prog)
+ features &= ~NETIF_F_GSO_SOFTWARE;
+ }
+- if (priv->_xdp_prog)
+- features |= NETIF_F_GRO;
+
+ return features;
+ }
+@@ -1542,14 +1540,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ }
+
+ if (!old_prog) {
+- if (!veth_gro_requested(dev)) {
+- /* user-space did not require GRO, but adding
+- * XDP is supposed to get GRO working
+- */
+- dev->features |= NETIF_F_GRO;
+- netdev_features_change(dev);
+- }
+-
+ peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
+ peer->max_mtu = max_mtu;
+ }
+@@ -1560,14 +1550,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
+ if (dev->flags & IFF_UP)
+ veth_disable_xdp(dev);
+
+- /* if user-space did not require GRO, since adding XDP
+- * enabled it, clear it now
+- */
+- if (!veth_gro_requested(dev)) {
+- dev->features &= ~NETIF_F_GRO;
+- netdev_features_change(dev);
+- }
+-
+ if (peer) {
+ peer->hw_features |= NETIF_F_GSO_SOFTWARE;
+ peer->max_mtu = ETH_MAX_MTU;
+--
+2.43.0
+
--- /dev/null
+From d6b6a62e62fc40d1b5079aa8c26e3d4f851388ee Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 18:51:38 +0100
+Subject: netfilter: nf_tables: do not compare internal table flags on updates
+
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+
+[ Upstream commit 4a0e7f2decbf9bd72461226f1f5f7dcc4b08f139 ]
+
+Restore skipping transaction if table update does not modify flags.
+
+Fixes: 179d9ba5559a ("netfilter: nf_tables: fix table flag updates")
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/netfilter/nf_tables_api.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index d3ba947f43761..0a86c019a75de 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -1205,7 +1205,7 @@ static int nf_tables_updtable(struct nft_ctx *ctx)
+ if (flags & ~NFT_TABLE_F_MASK)
+ return -EOPNOTSUPP;
+
+- if (flags == ctx->table->flags)
++ if (flags == (ctx->table->flags & NFT_TABLE_F_MASK))
+ return 0;
+
+ if ((nft_table_has_owner(ctx->table) &&
+--
+2.43.0
+
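+The one-liner masks out internal flag bits before comparing, so a redundant
+update is again treated as a no-op.  A standalone sketch of the idea (the
+flag values below are made up for illustration; only the masking pattern
+mirrors the patch):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    #define TABLE_F_DORMANT  0x1    /* user-settable flag (illustrative) */
+    #define TABLE_F_OWNER    0x2    /* user-settable flag (illustrative) */
+    #define TABLE_F_MASK     (TABLE_F_DORMANT | TABLE_F_OWNER)
+    #define TABLE_F_INTERNAL 0x100  /* internal bit, never sent by userspace */
+
+    /* Returns 1 when the update changes no user-visible flag and can be
+     * skipped without opening a transaction. */
+    static int update_is_noop(uint32_t requested, uint32_t current_flags)
+    {
+        return requested == (current_flags & TABLE_F_MASK);
+    }
+
+    int main(void)
+    {
+        uint32_t cur = TABLE_F_DORMANT | TABLE_F_INTERNAL;
+
+        /* Same visible flags requested again: skip (1). */
+        printf("noop: %d\n", update_is_noop(TABLE_F_DORMANT, cur));
+        /* A visible flag actually changes: proceed (0). */
+        printf("noop: %d\n", update_is_noop(0, cur));
+        return 0;
+    }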
--- /dev/null
+From 0ee28e045dadb6a9bee69ff340f27c5ecb284c0a Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Sun, 10 Mar 2024 10:02:41 +0100
+Subject: netfilter: nft_set_pipapo: release elements in clone only from
+ destroy path
+
+From: Pablo Neira Ayuso <pablo@netfilter.org>
+
+[ Upstream commit b0e256f3dd2ba6532f37c5c22e07cb07a36031ee ]
+
+The clone always provides a current view of the lookup table; use it
+to destroy the set, otherwise it is possible to destroy elements twice.
+
+This fix requires:
+
+ 212ed75dc5fb ("netfilter: nf_tables: integrate pipapo into commit protocol")
+
+which came after:
+
+ 9827a0e6e23b ("netfilter: nft_set_pipapo: release elements in clone from abort path").
+
+Fixes: 9827a0e6e23b ("netfilter: nft_set_pipapo: release elements in clone from abort path")
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/netfilter/nft_set_pipapo.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+diff --git a/net/netfilter/nft_set_pipapo.c b/net/netfilter/nft_set_pipapo.c
+index e1969209b3abb..58eca26162735 100644
+--- a/net/netfilter/nft_set_pipapo.c
++++ b/net/netfilter/nft_set_pipapo.c
+@@ -2240,8 +2240,6 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ if (m) {
+ rcu_barrier();
+
+- nft_set_pipapo_match_destroy(ctx, set, m);
+-
+ for_each_possible_cpu(cpu)
+ pipapo_free_scratch(m, cpu);
+ free_percpu(m->scratch);
+@@ -2253,8 +2251,7 @@ static void nft_pipapo_destroy(const struct nft_ctx *ctx,
+ if (priv->clone) {
+ m = priv->clone;
+
+- if (priv->dirty)
+- nft_set_pipapo_match_destroy(ctx, set, m);
++ nft_set_pipapo_match_destroy(ctx, set, m);
+
+ for_each_possible_cpu(cpu)
+ pipapo_free_scratch(priv->clone, cpu);
+--
+2.43.0
+
--- /dev/null
+From 6056d0129c1262319fb465dd50760a1a8b8586b3 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 11 Mar 2024 17:20:37 +1000
+Subject: nouveau: reset the bo resource bus info after an eviction
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Dave Airlie <airlied@redhat.com>
+
+[ Upstream commit f35c9af45ea7a4b1115b193d84858b14d13517fc ]
+
+Later attempts to refault the bo won't happen and the whole
+GPU goes out to lunch (hangs). I think Christian's refactoring of this
+code out to the driver broke this not very well tested path.
+
+Fixes: 141b15e59175 ("drm/nouveau: move io_reserve_lru handling into the driver v5")
+Cc: Christian König <christian.koenig@amd.com>
+Signed-off-by: Dave Airlie <airlied@redhat.com>
+Acked-by: Christian König <christian.koenig@amd.com>
+Signed-off-by: Danilo Krummrich <dakr@redhat.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20240311072037.287905-1-airlied@gmail.com
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/gpu/drm/nouveau/nouveau_bo.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.c b/drivers/gpu/drm/nouveau/nouveau_bo.c
+index 126b3c6e12f99..f2dca41e46c5f 100644
+--- a/drivers/gpu/drm/nouveau/nouveau_bo.c
++++ b/drivers/gpu/drm/nouveau/nouveau_bo.c
+@@ -1194,6 +1194,8 @@ nouveau_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *reg)
+ drm_vma_node_unmap(&nvbo->bo.base.vma_node,
+ bdev->dev_mapping);
+ nouveau_ttm_io_mem_free_locked(drm, nvbo->bo.resource);
++ nvbo->bo.resource->bus.offset = 0;
++ nvbo->bo.resource->bus.addr = NULL;
+ goto retry;
+ }
+
+--
+2.43.0
+
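+A tiny model of the retry path being fixed: once the cached bus mapping has
+been freed, its offset/addr fields must be cleared so the next fault really
+re-reserves the mapping (standalone sketch; the field names simply mirror
+the ones touched in the diff, everything else is made up):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    struct bus_info {
+        uint64_t offset;
+        void *addr;
+    };
+
+    static void io_mem_reserve(struct bus_info *bus)
+    {
+        /* Non-zero state is treated as "already mapped, nothing to do". */
+        if (bus->offset || bus->addr)
+            return;
+        bus->offset = 0x1000;    /* pretend a fresh mapping was set up */
+        bus->addr = (void *)(uintptr_t)0xdead1000;
+    }
+
+    static void evict(struct bus_info *bus)
+    {
+        /* io_mem_free happens here; clearing the cached state afterwards is
+         * the fix -- without it the next reserve sees stale values and
+         * skips re-mapping, and later faults never complete. */
+        bus->offset = 0;
+        bus->addr = NULL;
+    }
+
+    int main(void)
+    {
+        struct bus_info bus = { 0 };
+
+        io_mem_reserve(&bus);
+        evict(&bus);
+        io_mem_reserve(&bus);
+        printf("re-reserved offset: 0x%llx\n", (unsigned long long)bus.offset);
+        return 0;
+    }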
--- /dev/null
+From 70506692d9b79d6d42a71a88e2d7a93c3582e9d8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 30 Nov 2022 17:28:48 +0100
+Subject: nvme: add the Apple shared tag workaround to nvme_alloc_io_tag_set
+
+From: Christoph Hellwig <hch@lst.de>
+
+[ Upstream commit 93b24f579c392bac2e491fee79ad5ce5a131992e ]
+
+Add the Apple shared tag workaround to nvme_alloc_io_tag_set to prepare
+for using that helper in the PCIe driver.
+
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
+Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
+Stable-dep-of: de105068fead ("nvme: fix reconnection fail due to reserved tag allocation")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/nvme/host/core.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 98a8d90feb37d..951c8946701aa 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -5029,7 +5029,13 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ memset(set, 0, sizeof(*set));
+ set->ops = ops;
+ set->queue_depth = ctrl->sqsize + 1;
+- if (ctrl->ops->flags & NVME_F_FABRICS)
++ /*
++ * Some Apple controllers requires tags to be unique across admin and
++ * the (only) I/O queue, so reserve the first 32 tags of the I/O queue.
++ */
++ if (ctrl->quirks & NVME_QUIRK_SHARED_TAGS)
++ set->reserved_tags = NVME_AQ_DEPTH;
++ else if (ctrl->ops->flags & NVME_F_FABRICS)
+ set->reserved_tags = NVMF_RESERVED_TAGS;
+ set->numa_node = ctrl->numa_node;
+ set->flags = BLK_MQ_F_SHOULD_MERGE;
+--
+2.43.0
+
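+The quirk check deliberately comes before the fabrics check, so the Apple
+shared-tags case wins and reserves a whole admin queue's worth of tags in
+the I/O tag set.  A standalone sketch of that selection (NVME_AQ_DEPTH is
+32 as in the kernel; the fabrics value here is illustrative only):
+
+    #include <stdio.h>
+    #include <stdbool.h>
+
+    #define NVME_AQ_DEPTH      32
+    #define FABRICS_RESERVED    1   /* illustrative: connect command only */
+
+    /* Mirrors the reserved-tag selection added to the I/O tag set setup. */
+    static int io_reserved_tags(bool shared_tags_quirk, bool is_fabrics)
+    {
+        if (shared_tags_quirk)
+            return NVME_AQ_DEPTH;
+        if (is_fabrics)
+            return FABRICS_RESERVED;
+        return 0;
+    }
+
+    int main(void)
+    {
+        printf("apple quirk : %d\n", io_reserved_tags(true, false));   /* 32 */
+        printf("fabrics     : %d\n", io_reserved_tags(false, true));   /*  1 */
+        printf("plain pcie  : %d\n", io_reserved_tags(false, false));  /*  0 */
+        return 0;
+    }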
--- /dev/null
+From c7bcf6e4746fb6ed982d023690f50a52795e5580 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 11 Mar 2024 10:09:27 +0800
+Subject: nvme: fix reconnection fail due to reserved tag allocation
+
+From: Chunguang Xu <chunguang.xu@shopee.com>
+
+[ Upstream commit de105068fead55ed5c07ade75e9c8e7f86a00d1d ]
+
+We found an issue in a production environment while using NVMe over RDMA:
+admin_q reconnection failed forever even though the remote target and the
+network were fine. After digging into it, we found it was caused by an ABBA
+deadlock due to tag allocation. In this case, the tag was held by a keep-alive
+request waiting inside admin_q; as admin_q is quiesced while the controller is
+reset, the request is marked idle and will not be processed until the reset
+succeeds. As fabrics_q shares its tagset with admin_q, a tag is needed for the
+connect command when reconnecting to the remote target, but the only reserved
+tag was held by the keep-alive command waiting inside admin_q. As a result,
+admin_q could never be reconnected. Fix this by keeping two reserved tags for
+the admin queue.
+
+Fixes: ed01fee283a0 ("nvme-fabrics: only reserve a single tag")
+Signed-off-by: Chunguang Xu <chunguang.xu@shopee.com>
+Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
+Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
+Reviewed-by: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Keith Busch <kbusch@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/nvme/host/core.c | 6 ++++--
+ drivers/nvme/host/fabrics.h | 7 -------
+ 2 files changed, 4 insertions(+), 9 deletions(-)
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 951c8946701aa..d7516e99275b6 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -4971,7 +4971,8 @@ int nvme_alloc_admin_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ set->ops = ops;
+ set->queue_depth = NVME_AQ_MQ_TAG_DEPTH;
+ if (ctrl->ops->flags & NVME_F_FABRICS)
+- set->reserved_tags = NVMF_RESERVED_TAGS;
++ /* Reserved for fabric connect and keep alive */
++ set->reserved_tags = 2;
+ set->numa_node = ctrl->numa_node;
+ set->flags = BLK_MQ_F_NO_SCHED;
+ if (ctrl->ops->flags & NVME_F_BLOCKING)
+@@ -5036,7 +5037,8 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ if (ctrl->quirks & NVME_QUIRK_SHARED_TAGS)
+ set->reserved_tags = NVME_AQ_DEPTH;
+ else if (ctrl->ops->flags & NVME_F_FABRICS)
+- set->reserved_tags = NVMF_RESERVED_TAGS;
++ /* Reserved for fabric connect */
++ set->reserved_tags = 1;
+ set->numa_node = ctrl->numa_node;
+ set->flags = BLK_MQ_F_SHOULD_MERGE;
+ if (ctrl->ops->flags & NVME_F_BLOCKING)
+diff --git a/drivers/nvme/host/fabrics.h b/drivers/nvme/host/fabrics.h
+index dcac3df8a5f76..60c238caf7a97 100644
+--- a/drivers/nvme/host/fabrics.h
++++ b/drivers/nvme/host/fabrics.h
+@@ -18,13 +18,6 @@
+ /* default is -1: the fail fast mechanism is disabled */
+ #define NVMF_DEF_FAIL_FAST_TMO -1
+
+-/*
+- * Reserved one command for internal usage. This command is used for sending
+- * the connect command, as well as for the keep alive command on the admin
+- * queue once live.
+- */
+-#define NVMF_RESERVED_TAGS 1
+-
+ /*
+ * Define a host as seen by the target. We allocate one at boot, but also
+ * allow the override it when creating controllers. This is both to provide
+--
+2.43.0
+
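+The fix is essentially an accounting argument: the admin tag set must reserve
+one tag for the connect command plus one for keep-alive, otherwise a keep-alive
+request parked in the quiesced admin queue starves the connect command during
+reconnect.  A small standalone sketch of that accounting (illustrative only):
+
+    #include <stdio.h>
+    #include <stdbool.h>
+
+    /* Can the fabrics connect command get a reserved tag while a
+     * keep-alive request already holds one in the quiesced admin queue? */
+    static bool connect_can_get_tag(int reserved_tags)
+    {
+        int held_by_keep_alive = 1;
+
+        return reserved_tags - held_by_keep_alive >= 1;
+    }
+
+    int main(void)
+    {
+        printf("1 reserved tag : connect %s\n",
+               connect_can_get_tag(1) ? "succeeds" : "starves (old behaviour)");
+        printf("2 reserved tags: connect %s\n",
+               connect_can_get_tag(2) ? "succeeds (after the fix)" : "starves");
+        return 0;
+    }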
--- /dev/null
+From d55095432bc46b61b996d3a447fd093c55470ba8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 30 Nov 2022 17:27:07 +0100
+Subject: nvme: only set reserved_tags in nvme_alloc_io_tag_set for fabrics
+ controllers
+
+From: Christoph Hellwig <hch@lst.de>
+
+[ Upstream commit b794d1c2ad6d7921f2867ce393815ad31b5b5a83 ]
+
+The reserved_tags are only needed for fabrics controllers. Right now only
+fabrics drivers call this helper, so this is harmless, but we'll use it
+in the PCIe driver soon.
+
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
+Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
+Stable-dep-of: de105068fead ("nvme: fix reconnection fail due to reserved tag allocation")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/nvme/host/core.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
+index 0c088db944706..98a8d90feb37d 100644
+--- a/drivers/nvme/host/core.c
++++ b/drivers/nvme/host/core.c
+@@ -5029,7 +5029,8 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
+ memset(set, 0, sizeof(*set));
+ set->ops = ops;
+ set->queue_depth = ctrl->sqsize + 1;
+- set->reserved_tags = NVMF_RESERVED_TAGS;
++ if (ctrl->ops->flags & NVME_F_FABRICS)
++ set->reserved_tags = NVMF_RESERVED_TAGS;
+ set->numa_node = ctrl->numa_node;
+ set->flags = BLK_MQ_F_SHOULD_MERGE;
+ if (ctrl->ops->flags & NVME_F_BLOCKING)
+--
+2.43.0
+
--- /dev/null
+From 55cf00ac811bb9ac0e996cb23e64f0440f8203d8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Jan 2023 17:33:49 +0530
+Subject: octeontx2-af: add mbox for CPT LF reset
+
+From: Srujana Challa <schalla@marvell.com>
+
+[ Upstream commit f58cf765e8f5f4860ea094aa12c156d9195a4c28 ]
+
+On the OcteonTX2 SoC, the admin function (AF) is the only one with all
+privileges to configure HW and allocate resources; PFs and their VFs
+have to request the AF via mailbox for all their needs.
+This patch adds a new mailbox for CPT VFs to request a CPT LF
+reset.
+
+Signed-off-by: Srujana Challa <schalla@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Stable-dep-of: a88e0f936ba9 ("octeontx2: Detect the mbox up or down message via register")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ .../net/ethernet/marvell/octeontx2/af/mbox.h | 8 +++++
+ .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 33 +++++++++++++++++++
+ 2 files changed, 41 insertions(+)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 03ebabd616353..5decd1919de03 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -196,6 +196,7 @@ M(CPT_STATS, 0xA05, cpt_sts, cpt_sts_req, cpt_sts_rsp) \
+ M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \
+ msg_rsp) \
+ M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \
++M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \
+ /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
+ M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
+ M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
+@@ -1702,6 +1703,13 @@ struct cpt_inst_lmtst_req {
+ u64 rsvd;
+ };
+
++/* Mailbox message format to request for CPT LF reset */
++struct cpt_lf_rst_req {
++ struct mbox_msghdr hdr;
++ u32 slot;
++ u32 rsvd;
++};
++
+ struct sdp_node_info {
+ /* Node to which this PF belons to */
+ u8 node_id;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 1ed16ce515bb1..1cd34914cb86b 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -851,6 +851,39 @@ int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req,
+ return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc);
+ }
+
++int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req,
++ struct msg_rsp *rsp)
++{
++ u16 pcifunc = req->hdr.pcifunc;
++ struct rvu_block *block;
++ int cptlf, blkaddr, ret;
++ u16 actual_slot;
++ u64 ctl, ctl2;
++
++ blkaddr = rvu_get_blkaddr_from_slot(rvu, BLKTYPE_CPT, pcifunc,
++ req->slot, &actual_slot);
++ if (blkaddr < 0)
++ return CPT_AF_ERR_LF_INVALID;
++
++ block = &rvu->hw->block[blkaddr];
++
++ cptlf = rvu_get_lf(rvu, block, pcifunc, actual_slot);
++ if (cptlf < 0)
++ return CPT_AF_ERR_LF_INVALID;
++ ctl = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf));
++ ctl2 = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf));
++
++ ret = rvu_lf_reset(rvu, block, cptlf);
++ if (ret)
++ dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
++ block->addr, cptlf);
++
++ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), ctl);
++ rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), ctl2);
++
++ return 0;
++}
++
+ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
+ {
+ struct cpt_rxc_time_cfg_req req;
+--
+2.43.0
+
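+The new handler follows a save-reset-restore pattern: the two per-LF control
+registers are read before the reset and written back afterwards so the LF
+configuration survives it.  A standalone sketch of that pattern (register
+storage is simulated with an array; the values are arbitrary):
+
+    #include <stdio.h>
+    #include <stdint.h>
+    #include <string.h>
+
+    /* Simulated per-LF register file: [0] = CTL, [1] = CTL2. */
+    static uint64_t lf_regs[2] = { 0xabcd, 0x1234 };
+
+    static void lf_reset(void)
+    {
+        /* A hardware LF reset clears the per-LF configuration. */
+        memset(lf_regs, 0, sizeof(lf_regs));
+    }
+
+    static void lf_reset_preserving_ctl(void)
+    {
+        uint64_t ctl = lf_regs[0];
+        uint64_t ctl2 = lf_regs[1];
+
+        lf_reset();
+
+        /* Restore the saved configuration, as the mbox handler does. */
+        lf_regs[0] = ctl;
+        lf_regs[1] = ctl2;
+    }
+
+    int main(void)
+    {
+        lf_reset_preserving_ctl();
+        printf("CTL=0x%llx CTL2=0x%llx\n",
+               (unsigned long long)lf_regs[0],
+               (unsigned long long)lf_regs[1]);
+        return 0;
+    }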
--- /dev/null
+From 5df7fe27a02d2cb9ba699f29acc1c2990415f8b3 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Jan 2023 17:33:54 +0530
+Subject: octeontx2-af: add mbox to return CPT_AF_FLT_INT info
+
+From: Srujana Challa <schalla@marvell.com>
+
+[ Upstream commit 8299ffe3dc3dc9ac2bd60e3a8332008f03156aca ]
+
+CPT HW triggers the CPT AF FLT interrupt when CPT engines hit
+uncorrectable errors, and the AF is the one which receives the interrupt
+and recovers the engines.
+This patch adds a mailbox for CPT VFs to request info about CPT faulted
+and recovered engines.
+
+Signed-off-by: Srujana Challa <schalla@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Stable-dep-of: a88e0f936ba9 ("octeontx2: Detect the mbox up or down message via register")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ .../net/ethernet/marvell/octeontx2/af/mbox.h | 17 +++++++++
+ .../net/ethernet/marvell/octeontx2/af/rvu.h | 4 +++
+ .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 35 +++++++++++++++++++
+ 3 files changed, 56 insertions(+)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index 5decd1919de03..bbb6658420f1d 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -197,6 +197,8 @@ M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \
+ msg_rsp) \
+ M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \
+ M(CPT_LF_RESET, 0xA08, cpt_lf_reset, cpt_lf_rst_req, msg_rsp) \
++M(CPT_FLT_ENG_INFO, 0xA09, cpt_flt_eng_info, cpt_flt_eng_info_req, \
++ cpt_flt_eng_info_rsp) \
+ /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
+ M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
+ M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
+@@ -1710,6 +1712,21 @@ struct cpt_lf_rst_req {
+ u32 rsvd;
+ };
+
++/* Mailbox message format to request for CPT faulted engines */
++struct cpt_flt_eng_info_req {
++ struct mbox_msghdr hdr;
++ int blkaddr;
++ bool reset;
++ u32 rsvd;
++};
++
++struct cpt_flt_eng_info_rsp {
++ struct mbox_msghdr hdr;
++ u64 flt_eng_map[CPT_10K_AF_INT_VEC_RVU];
++ u64 rcvrd_eng_map[CPT_10K_AF_INT_VEC_RVU];
++ u64 rsvd;
++};
++
+ struct sdp_node_info {
+ /* Node to which this PF belons to */
+ u8 node_id;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index e1760f9298b17..6a39006c334d7 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -109,6 +109,8 @@ struct rvu_block {
+ u64 lfreset_reg;
+ unsigned char name[NAME_SIZE];
+ struct rvu *rvu;
++ u64 cpt_flt_eng_map[3];
++ u64 cpt_rcvrd_eng_map[3];
+ };
+
+ struct nix_mcast {
+@@ -521,6 +523,8 @@ struct rvu {
+ struct list_head mcs_intrq_head;
+ /* mcs interrupt queue lock */
+ spinlock_t mcs_intrq_lock;
++ /* CPT interrupt lock */
++ spinlock_t cpt_intr_lock;
+ };
+
+ static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 923af460db296..6fb02b93c1718 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -70,6 +70,14 @@ static irqreturn_t cpt_af_flt_intr_handler(int vec, void *ptr)
+
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), grp);
+ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val | 1ULL);
++
++ spin_lock(&rvu->cpt_intr_lock);
++ block->cpt_flt_eng_map[vec] |= BIT_ULL(i);
++ val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_STS(eng));
++ val = val & 0x3;
++ if (val == 0x1 || val == 0x2)
++ block->cpt_rcvrd_eng_map[vec] |= BIT_ULL(i);
++ spin_unlock(&rvu->cpt_intr_lock);
+ }
+ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(vec), reg);
+
+@@ -884,6 +892,31 @@ int rvu_mbox_handler_cpt_lf_reset(struct rvu *rvu, struct cpt_lf_rst_req *req,
+ return 0;
+ }
+
++int rvu_mbox_handler_cpt_flt_eng_info(struct rvu *rvu, struct cpt_flt_eng_info_req *req,
++ struct cpt_flt_eng_info_rsp *rsp)
++{
++ struct rvu_block *block;
++ unsigned long flags;
++ int blkaddr, vec;
++
++ blkaddr = validate_and_get_cpt_blkaddr(req->blkaddr);
++ if (blkaddr < 0)
++ return blkaddr;
++
++ block = &rvu->hw->block[blkaddr];
++ for (vec = 0; vec < CPT_10K_AF_INT_VEC_RVU; vec++) {
++ spin_lock_irqsave(&rvu->cpt_intr_lock, flags);
++ rsp->flt_eng_map[vec] = block->cpt_flt_eng_map[vec];
++ rsp->rcvrd_eng_map[vec] = block->cpt_rcvrd_eng_map[vec];
++ if (req->reset) {
++ block->cpt_flt_eng_map[vec] = 0x0;
++ block->cpt_rcvrd_eng_map[vec] = 0x0;
++ }
++ spin_unlock_irqrestore(&rvu->cpt_intr_lock, flags);
++ }
++ return 0;
++}
++
+ static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr)
+ {
+ struct cpt_rxc_time_cfg_req req;
+@@ -1172,5 +1205,7 @@ int rvu_cpt_init(struct rvu *rvu)
+ {
+ /* Retrieve CPT PF number */
+ rvu->cpt_pf_num = get_cpt_pf_num(rvu);
++ spin_lock_init(&rvu->cpt_intr_lock);
++
+ return 0;
+ }
+--
+2.43.0
+
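+The interrupt handler records faulted and recovered engines into per-vector
+bitmaps under a spinlock, and the mbox handler copies (and optionally clears)
+them under the same lock.  A user-space sketch of that snapshot-and-clear
+pattern, with a pthread mutex standing in for the kernel spinlock
+(an illustrative model, not the driver code):
+
+    #include <stdio.h>
+    #include <stdint.h>
+    #include <stdbool.h>
+    #include <pthread.h>
+
+    #define NVECS 3                 /* FLT0, FLT1, FLT2 */
+
+    static pthread_mutex_t intr_lock = PTHREAD_MUTEX_INITIALIZER;
+    static uint64_t flt_eng_map[NVECS];
+    static uint64_t rcvrd_eng_map[NVECS];
+
+    /* Interrupt side: record a faulted (and possibly recovered) engine. */
+    static void record_fault(int vec, int bit, bool recovered)
+    {
+        pthread_mutex_lock(&intr_lock);
+        flt_eng_map[vec] |= 1ULL << bit;
+        if (recovered)
+            rcvrd_eng_map[vec] |= 1ULL << bit;
+        pthread_mutex_unlock(&intr_lock);
+    }
+
+    /* Mbox side: copy the maps into the response, optionally clearing them. */
+    static void read_fault_info(uint64_t *flt, uint64_t *rcvrd, bool reset)
+    {
+        for (int vec = 0; vec < NVECS; vec++) {
+            pthread_mutex_lock(&intr_lock);
+            flt[vec] = flt_eng_map[vec];
+            rcvrd[vec] = rcvrd_eng_map[vec];
+            if (reset) {
+                flt_eng_map[vec] = 0;
+                rcvrd_eng_map[vec] = 0;
+            }
+            pthread_mutex_unlock(&intr_lock);
+        }
+    }
+
+    int main(void)
+    {
+        uint64_t flt[NVECS], rcvrd[NVECS];
+
+        record_fault(0, 5, true);
+        read_fault_info(flt, rcvrd, true);
+        printf("vec0 flt=0x%llx rcvrd=0x%llx\n",
+               (unsigned long long)flt[0], (unsigned long long)rcvrd[0]);
+        return 0;
+    }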
--- /dev/null
+From f0d86b49c90a554a82c96a48e6e0d0301a123b76 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Jan 2023 17:33:51 +0530
+Subject: octeontx2-af: optimize cpt pf identification
+
+From: Srujana Challa <schalla@marvell.com>
+
+[ Upstream commit 9adb04ff62f51265002c2c83e718bcf459e06e48 ]
+
+Optimize CPT PF identification in mbox handling for a faster
+mbox response by doing it once at AF driver probe instead of on
+every mbox message.
+
+Signed-off-by: Srujana Challa <schalla@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Stable-dep-of: a88e0f936ba9 ("octeontx2: Detect the mbox up or down message via register")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 8 ++++++++
+ drivers/net/ethernet/marvell/octeontx2/af/rvu.h | 2 ++
+ drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c | 13 ++++++++++---
+ 3 files changed, 20 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index d88d86bf07b03..8f5b7d14e3f7c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -1164,8 +1164,16 @@ static int rvu_setup_hw_resources(struct rvu *rvu)
+ goto nix_err;
+ }
+
++ err = rvu_cpt_init(rvu);
++ if (err) {
++ dev_err(rvu->dev, "%s: Failed to initialize cpt\n", __func__);
++ goto mcs_err;
++ }
++
+ return 0;
+
++mcs_err:
++ rvu_mcs_exit(rvu);
+ nix_err:
+ rvu_nix_freemem(rvu);
+ npa_err:
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 0b76dfa979d4e..e1760f9298b17 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -506,6 +506,7 @@ struct rvu {
+ struct ptp *ptp;
+
+ int mcs_blk_cnt;
++ int cpt_pf_num;
+
+ #ifdef CONFIG_DEBUG_FS
+ struct rvu_debugfs rvu_dbg;
+@@ -872,6 +873,7 @@ void rvu_cpt_unregister_interrupts(struct rvu *rvu);
+ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
+ int slot);
+ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc);
++int rvu_cpt_init(struct rvu *rvu);
+
+ #define NDC_AF_BANK_MASK GENMASK_ULL(7, 0)
+ #define NDC_AF_BANK_LINE_MASK GENMASK_ULL(31, 16)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 1cd34914cb86b..923af460db296 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -340,7 +340,7 @@ static int get_cpt_pf_num(struct rvu *rvu)
+
+ static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc)
+ {
+- int cpt_pf_num = get_cpt_pf_num(rvu);
++ int cpt_pf_num = rvu->cpt_pf_num;
+
+ if (rvu_get_pf(pcifunc) != cpt_pf_num)
+ return false;
+@@ -352,7 +352,7 @@ static bool is_cpt_pf(struct rvu *rvu, u16 pcifunc)
+
+ static bool is_cpt_vf(struct rvu *rvu, u16 pcifunc)
+ {
+- int cpt_pf_num = get_cpt_pf_num(rvu);
++ int cpt_pf_num = rvu->cpt_pf_num;
+
+ if (rvu_get_pf(pcifunc) != cpt_pf_num)
+ return false;
+@@ -1023,7 +1023,7 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s
+ static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr,
+ int nix_blkaddr)
+ {
+- int cpt_pf_num = get_cpt_pf_num(rvu);
++ int cpt_pf_num = rvu->cpt_pf_num;
+ struct cpt_inst_lmtst_req *req;
+ dma_addr_t res_daddr;
+ int timeout = 3000;
+@@ -1167,3 +1167,10 @@ int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc)
+
+ return 0;
+ }
++
++int rvu_cpt_init(struct rvu *rvu)
++{
++ /* Retrieve CPT PF number */
++ rvu->cpt_pf_num = get_cpt_pf_num(rvu);
++ return 0;
++}
+--
+2.43.0
+
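+The pattern here is a simple probe-time cache: the PF number is discovered
+once and stored, and the hot mbox paths read the cached value.  A trivial
+standalone sketch (the discovery function and its value below are made up):
+
+    #include <stdio.h>
+
+    /* Pretend this walks PCI config space, which is why it is worth caching. */
+    static int discover_cpt_pf_num(void)
+    {
+        return 5;       /* illustrative value */
+    }
+
+    struct rvu_model {
+        int cpt_pf_num; /* cached once at probe time */
+    };
+
+    static void probe(struct rvu_model *rvu)
+    {
+        rvu->cpt_pf_num = discover_cpt_pf_num();
+    }
+
+    /* Mbox handlers now compare against the cached value instead of
+     * re-discovering it on every message. */
+    static int is_cpt_pf(const struct rvu_model *rvu, int pf)
+    {
+        return pf == rvu->cpt_pf_num;
+    }
+
+    int main(void)
+    {
+        struct rvu_model rvu;
+
+        probe(&rvu);
+        printf("is_cpt_pf(5) = %d\n", is_cpt_pf(&rvu, 5));
+        return 0;
+    }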
--- /dev/null
+From 987481d66c149f3602b0adf575d3c53787c8fa79 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 18 Jan 2023 17:33:48 +0530
+Subject: octeontx2-af: recover CPT engine when it gets fault
+
+From: Srujana Challa <schalla@marvell.com>
+
+[ Upstream commit 07ea567d84cdf0add274d66db7c02b55b818d517 ]
+
+When a CPT engine hits uncorrectable errors, it gets halted and
+must be disabled and re-enabled. This patch adds code for that.
+
+Signed-off-by: Srujana Challa <schalla@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Stable-dep-of: a88e0f936ba9 ("octeontx2: Detect the mbox up or down message via register")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 110 +++++++++++++-----
+ 1 file changed, 80 insertions(+), 30 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+index 38bbae5d9ae05..1ed16ce515bb1 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+@@ -37,34 +37,60 @@
+ (_rsp)->free_sts_##etype = free_sts; \
+ })
+
+-static irqreturn_t rvu_cpt_af_flt_intr_handler(int irq, void *ptr)
++static irqreturn_t cpt_af_flt_intr_handler(int vec, void *ptr)
+ {
+ struct rvu_block *block = ptr;
+ struct rvu *rvu = block->rvu;
+ int blkaddr = block->addr;
+- u64 reg0, reg1, reg2;
+-
+- reg0 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(0));
+- reg1 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(1));
+- if (!is_rvu_otx2(rvu)) {
+- reg2 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(2));
+- dev_err_ratelimited(rvu->dev,
+- "Received CPTAF FLT irq : 0x%llx, 0x%llx, 0x%llx",
+- reg0, reg1, reg2);
+- } else {
+- dev_err_ratelimited(rvu->dev,
+- "Received CPTAF FLT irq : 0x%llx, 0x%llx",
+- reg0, reg1);
++ u64 reg, val;
++ int i, eng;
++ u8 grp;
++
++ reg = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(vec));
++ dev_err_ratelimited(rvu->dev, "Received CPTAF FLT%d irq : 0x%llx", vec, reg);
++
++ i = -1;
++ while ((i = find_next_bit((unsigned long *)&reg, 64, i + 1)) < 64) {
++ switch (vec) {
++ case 0:
++ eng = i;
++ break;
++ case 1:
++ eng = i + 64;
++ break;
++ case 2:
++ eng = i + 128;
++ break;
++ }
++ grp = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng)) & 0xFF;
++ /* Disable and enable the engine which triggers fault */
++ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), 0x0);
++ val = rvu_read64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng));
++ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val & ~1ULL);
++
++ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL2(eng), grp);
++ rvu_write64(rvu, blkaddr, CPT_AF_EXEX_CTL(eng), val | 1ULL);
+ }
+-
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(0), reg0);
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(1), reg1);
+- if (!is_rvu_otx2(rvu))
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(2), reg2);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(vec), reg);
+
+ return IRQ_HANDLED;
+ }
+
++static irqreturn_t rvu_cpt_af_flt0_intr_handler(int irq, void *ptr)
++{
++ return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT0, ptr);
++}
++
++static irqreturn_t rvu_cpt_af_flt1_intr_handler(int irq, void *ptr)
++{
++ return cpt_af_flt_intr_handler(CPT_AF_INT_VEC_FLT1, ptr);
++}
++
++static irqreturn_t rvu_cpt_af_flt2_intr_handler(int irq, void *ptr)
++{
++ return cpt_af_flt_intr_handler(CPT_10K_AF_INT_VEC_FLT2, ptr);
++}
++
+ static irqreturn_t rvu_cpt_af_rvu_intr_handler(int irq, void *ptr)
+ {
+ struct rvu_block *block = ptr;
+@@ -119,8 +145,10 @@ static void cpt_10k_unregister_interrupts(struct rvu_block *block, int off)
+ int i;
+
+ /* Disable all CPT AF interrupts */
+- for (i = 0; i < CPT_10K_AF_INT_VEC_RVU; i++)
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(0), ~0ULL);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(1), ~0ULL);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(2), 0xFFFF);
++
+ rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1);
+ rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1);
+
+@@ -151,7 +179,7 @@ static void cpt_unregister_interrupts(struct rvu *rvu, int blkaddr)
+
+ /* Disable all CPT AF interrupts */
+ for (i = 0; i < CPT_AF_INT_VEC_RVU; i++)
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), ~0ULL);
+ rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1);
+ rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1);
+
+@@ -172,16 +200,31 @@ static int cpt_10k_register_interrupts(struct rvu_block *block, int off)
+ {
+ struct rvu *rvu = block->rvu;
+ int blkaddr = block->addr;
++ irq_handler_t flt_fn;
+ int i, ret;
+
+ for (i = CPT_10K_AF_INT_VEC_FLT0; i < CPT_10K_AF_INT_VEC_RVU; i++) {
+ sprintf(&rvu->irq_name[(off + i) * NAME_SIZE], "CPTAF FLT%d", i);
++
++ switch (i) {
++ case CPT_10K_AF_INT_VEC_FLT0:
++ flt_fn = rvu_cpt_af_flt0_intr_handler;
++ break;
++ case CPT_10K_AF_INT_VEC_FLT1:
++ flt_fn = rvu_cpt_af_flt1_intr_handler;
++ break;
++ case CPT_10K_AF_INT_VEC_FLT2:
++ flt_fn = rvu_cpt_af_flt2_intr_handler;
++ break;
++ }
+ ret = rvu_cpt_do_register_interrupt(block, off + i,
+- rvu_cpt_af_flt_intr_handler,
+- &rvu->irq_name[(off + i) * NAME_SIZE]);
++ flt_fn, &rvu->irq_name[(off + i) * NAME_SIZE]);
+ if (ret)
+ goto err;
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1);
++ if (i == CPT_10K_AF_INT_VEC_FLT2)
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0xFFFF);
++ else
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL);
+ }
+
+ ret = rvu_cpt_do_register_interrupt(block, off + CPT_10K_AF_INT_VEC_RVU,
+@@ -208,8 +251,8 @@ static int cpt_register_interrupts(struct rvu *rvu, int blkaddr)
+ {
+ struct rvu_hwinfo *hw = rvu->hw;
+ struct rvu_block *block;
++ irq_handler_t flt_fn;
+ int i, offs, ret = 0;
+- char irq_name[16];
+
+ if (!is_block_implemented(rvu->hw, blkaddr))
+ return 0;
+@@ -226,13 +269,20 @@ static int cpt_register_interrupts(struct rvu *rvu, int blkaddr)
+ return cpt_10k_register_interrupts(block, offs);
+
+ for (i = CPT_AF_INT_VEC_FLT0; i < CPT_AF_INT_VEC_RVU; i++) {
+- snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i);
++ sprintf(&rvu->irq_name[(offs + i) * NAME_SIZE], "CPTAF FLT%d", i);
++ switch (i) {
++ case CPT_AF_INT_VEC_FLT0:
++ flt_fn = rvu_cpt_af_flt0_intr_handler;
++ break;
++ case CPT_AF_INT_VEC_FLT1:
++ flt_fn = rvu_cpt_af_flt1_intr_handler;
++ break;
++ }
+ ret = rvu_cpt_do_register_interrupt(block, offs + i,
+- rvu_cpt_af_flt_intr_handler,
+- irq_name);
++ flt_fn, &rvu->irq_name[(offs + i) * NAME_SIZE]);
+ if (ret)
+ goto err;
+- rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1);
++ rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), ~0ULL);
+ }
+
+ ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RVU,
+--
+2.43.0
+
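+Each fault vector now has its own handler, and each handler maps bit i of
+its vector's fault register to a global engine number before recovering that
+engine.  A standalone sketch of the bit walk and the vector-to-engine mapping
+(illustrative only; the recovery itself is just a printf here):
+
+    #include <stdio.h>
+    #include <stdint.h>
+
+    /* FLT0 covers engines 0-63, FLT1 64-127, FLT2 128 onwards. */
+    static int fault_bit_to_engine(int vec, int bit)
+    {
+        return vec * 64 + bit;
+    }
+
+    static void handle_fault_vector(int vec, uint64_t reg)
+    {
+        for (int i = 0; i < 64; i++) {
+            if (!(reg & (1ULL << i)))
+                continue;
+            /* In the driver this is where the engine is disabled and
+             * re-enabled to recover it. */
+            printf("recovering engine %d (vec %d, bit %d)\n",
+                   fault_bit_to_engine(vec, i), vec, i);
+        }
+    }
+
+    int main(void)
+    {
+        handle_fault_vector(1, (1ULL << 0) | (1ULL << 3)); /* engines 64, 67 */
+        return 0;
+    }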
--- /dev/null
+From 78bc39108681a32b941a5e1debd9fc290823f614 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 12 Mar 2024 12:36:22 +0530
+Subject: octeontx2-af: Use matching wake_up API variant in CGX command
+ interface
+
+From: Linu Cherian <lcherian@marvell.com>
+
+[ Upstream commit e642921dfeed1e15e73f78f2c3b6746f72b6deb2 ]
+
+Use the wake_up API instead of wake_up_interruptible, since the
+wait_event_timeout API is used for waiting on command completion:
+wait_event_timeout() sleeps in an uninterruptible state, which
+wake_up_interruptible() does not wake.
+
+Fixes: 1463f382f58d ("octeontx2-af: Add support for CGX link management")
+Signed-off-by: Linu Cherian <lcherian@marvell.com>
+Signed-off-by: Sunil Goutham <sgoutham@marvell.com>
+Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/af/cgx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+index 90be87dc105d3..e6fe599f7bf3a 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/cgx.c
+@@ -1346,7 +1346,7 @@ static irqreturn_t cgx_fwi_event_handler(int irq, void *data)
+
+ /* Release thread waiting for completion */
+ lmac->cmd_pend = false;
+- wake_up_interruptible(&lmac->wq_cmd_cmplt);
++ wake_up(&lmac->wq_cmd_cmplt);
+ break;
+ case CGX_EVT_ASYNC:
+ if (cgx_event_is_linkevent(event))
+--
+2.43.0
+
--- /dev/null
+From 887869d5a87f0e18650a11b1fdfbc5db01ca6be1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Mar 2024 14:59:58 +0530
+Subject: octeontx2-af: Use separate handlers for interrupts
+
+From: Subbaraya Sundeep <sbhatta@marvell.com>
+
+[ Upstream commit 50e60de381c342008c0956fd762e1c26408f372c ]
+
+The same interrupt handler is registered for both the PF-to-AF and
+VF-to-AF interrupt vectors, which causes a race condition: when the two
+interrupts are raised to two CPUs at the same time, both cores serve the
+same event and corrupt the data.
+
+Fixes: 7304ac4567bc ("octeontx2-af: Add mailbox IRQ and msg handlers")
+Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/af/rvu.c | 17 ++++++++++++++---
+ 1 file changed, 14 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index a7965b457bee9..a7034b47ed6c9 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2528,10 +2528,9 @@ static void rvu_queue_work(struct mbox_wq_info *mw, int first,
+ }
+ }
+
+-static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++static irqreturn_t rvu_mbox_pf_intr_handler(int irq, void *rvu_irq)
+ {
+ struct rvu *rvu = (struct rvu *)rvu_irq;
+- int vfs = rvu->vfs;
+ u64 intr;
+
+ intr = rvu_read64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT);
+@@ -2545,6 +2544,18 @@ static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
+
+ rvu_queue_work(&rvu->afpf_wq_info, 0, rvu->hw->total_pfs, intr);
+
++ return IRQ_HANDLED;
++}
++
++static irqreturn_t rvu_mbox_intr_handler(int irq, void *rvu_irq)
++{
++ struct rvu *rvu = (struct rvu *)rvu_irq;
++ int vfs = rvu->vfs;
++ u64 intr;
++
++ /* Sync with mbox memory region */
++ rmb();
++
+ /* Handle VF interrupts */
+ if (vfs > 64) {
+ intr = rvupf_read64(rvu, RVU_PF_VFPF_MBOX_INTX(1));
+@@ -2881,7 +2892,7 @@ static int rvu_register_interrupts(struct rvu *rvu)
+ /* Register mailbox interrupt handler */
+ sprintf(&rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], "RVUAF Mbox");
+ ret = request_irq(pci_irq_vector(rvu->pdev, RVU_AF_INT_VEC_MBOX),
+- rvu_mbox_intr_handler, 0,
++ rvu_mbox_pf_intr_handler, 0,
+ &rvu->irq_name[RVU_AF_INT_VEC_MBOX * NAME_SIZE], rvu);
+ if (ret) {
+ dev_err(rvu->dev,
+--
+2.43.0
+
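+The structural point of the patch is one handler per interrupt vector, so the
+PF and VF vectors firing on two CPUs at once no longer walk the same queues.
+A minimal standalone sketch of that split (illustrative only; the bodies just
+print what each handler is responsible for):
+
+    #include <stdio.h>
+
+    static void pf_mbox_irq(void)
+    {
+        /* Serves only PF -> AF mailbox interrupts. */
+        printf("servicing PF mailboxes\n");
+    }
+
+    static void vf_mbox_irq(void)
+    {
+        /* Serves only VF -> AF mailbox interrupts. */
+        printf("servicing VF mailboxes\n");
+    }
+
+    int main(void)
+    {
+        /* Before the fix a single handler did both jobs, so concurrent PF
+         * and VF interrupts could process the same events twice. */
+        pf_mbox_irq();
+        vf_mbox_irq();
+        return 0;
+    }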
--- /dev/null
+From 7f4643854caaa77e7baf5496f2eb70e8284f9df6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Mar 2024 14:59:54 +0530
+Subject: octeontx2: Detect the mbox up or down message via register
+
+From: Subbaraya Sundeep <sbhatta@marvell.com>
+
+[ Upstream commit a88e0f936ba9a301c78f6eacfd38737d003c130b ]
+
+A single interrupt line is used to receive up notifications
+and down reply messages from AF to PF (and similarly from PF to its VFs).
+The PF acts as a bridge, forwarding VF messages to the AF and sending
+responses back from the AF to the VF. When an async event such as a link
+event arrives as an up message while the PF is in the middle of forwarding
+a VF message, mailbox errors occur because the PF state machine gets
+corrupted. Since the VF is a separate driver, and the VF driver can run in
+a VM, it is not possible to serialize communication from the VF side.
+Hence, to differentiate between the types of messages at the PF, this
+patch makes the sender set the mbox data register to distinct values for
+up and down messages. The sender also checks whether the previous
+interrupt has been handled before triggering the current one, by waiting
+for the mailbox data register to become zero.
+
+Fixes: 5a6d7c9daef3 ("octeontx2-pf: Mailbox communication with AF")
+Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ .../net/ethernet/marvell/octeontx2/af/mbox.c | 43 ++++++-
+ .../net/ethernet/marvell/octeontx2/af/mbox.h | 6 +
+ .../marvell/octeontx2/af/mcs_rvu_if.c | 17 ++-
+ .../net/ethernet/marvell/octeontx2/af/rvu.c | 14 ++-
+ .../net/ethernet/marvell/octeontx2/af/rvu.h | 2 +
+ .../ethernet/marvell/octeontx2/af/rvu_cgx.c | 20 ++--
+ .../marvell/octeontx2/nic/otx2_common.h | 2 +-
+ .../ethernet/marvell/octeontx2/nic/otx2_pf.c | 113 ++++++++++++------
+ .../ethernet/marvell/octeontx2/nic/otx2_vf.c | 71 ++++++-----
+ 9 files changed, 205 insertions(+), 83 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+index 9690ac01f02c8..7d741e3ba8c51 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.c
+@@ -214,11 +214,12 @@ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid)
+ }
+ EXPORT_SYMBOL(otx2_mbox_busy_poll_for_rsp);
+
+-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
++static void otx2_mbox_msg_send_data(struct otx2_mbox *mbox, int devid, u64 data)
+ {
+ struct otx2_mbox_dev *mdev = &mbox->dev[devid];
+ struct mbox_hdr *tx_hdr, *rx_hdr;
+ void *hw_mbase = mdev->hwbase;
++ u64 intr_val;
+
+ tx_hdr = hw_mbase + mbox->tx_start;
+ rx_hdr = hw_mbase + mbox->rx_start;
+@@ -254,14 +255,52 @@ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
+
+ spin_unlock(&mdev->mbox_lock);
+
++ /* Check if interrupt pending */
++ intr_val = readq((void __iomem *)mbox->reg_base +
++ (mbox->trigger | (devid << mbox->tr_shift)));
++
++ intr_val |= data;
+ /* The interrupt should be fired after num_msgs is written
+ * to the shared memory
+ */
+- writeq(1, (void __iomem *)mbox->reg_base +
++ writeq(intr_val, (void __iomem *)mbox->reg_base +
+ (mbox->trigger | (devid << mbox->tr_shift)));
+ }
++
++void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
++{
++ otx2_mbox_msg_send_data(mbox, devid, MBOX_DOWN_MSG);
++}
+ EXPORT_SYMBOL(otx2_mbox_msg_send);
+
++void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid)
++{
++ otx2_mbox_msg_send_data(mbox, devid, MBOX_UP_MSG);
++}
++EXPORT_SYMBOL(otx2_mbox_msg_send_up);
++
++bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid)
++{
++ u64 data;
++
++ data = readq((void __iomem *)mbox->reg_base +
++ (mbox->trigger | (devid << mbox->tr_shift)));
++
++ /* If data is non-zero wait for ~1ms and return to caller
++ * whether data has changed to zero or not after the wait.
++ */
++ if (!data)
++ return true;
++
++ usleep_range(950, 1000);
++
++ data = readq((void __iomem *)mbox->reg_base +
++ (mbox->trigger | (devid << mbox->tr_shift)));
++
++ return data == 0;
++}
++EXPORT_SYMBOL(otx2_mbox_wait_for_zero);
++
+ struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+ int size, int size_rsp)
+ {
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+index bbb6658420f1d..be70269e91684 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+@@ -16,6 +16,9 @@
+
+ #define MBOX_SIZE SZ_64K
+
++#define MBOX_DOWN_MSG 1
++#define MBOX_UP_MSG 2
++
+ /* AF/PF: PF initiated, PF/VF VF initiated */
+ #define MBOX_DOWN_RX_START 0
+ #define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
+@@ -101,6 +104,7 @@ int otx2_mbox_regions_init(struct otx2_mbox *mbox, void __force **hwbase,
+ struct pci_dev *pdev, void __force *reg_base,
+ int direction, int ndevs, unsigned long *bmap);
+ void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
++void otx2_mbox_msg_send_up(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
+ int otx2_mbox_busy_poll_for_rsp(struct otx2_mbox *mbox, int devid);
+ struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
+@@ -118,6 +122,8 @@ static inline struct mbox_msghdr *otx2_mbox_alloc_msg(struct otx2_mbox *mbox,
+ return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
+ }
+
++bool otx2_mbox_wait_for_zero(struct otx2_mbox *mbox, int devid);
++
+ /* Mailbox message types */
+ #define MBOX_MSG_MASK 0xFFFF
+ #define MBOX_MSG_INVALID 0xFFFE
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+index dfd23580e3b8e..d39d86e694ccf 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/mcs_rvu_if.c
+@@ -121,13 +121,17 @@ int mcs_add_intr_wq_entry(struct mcs *mcs, struct mcs_intr_event *event)
+ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ {
+ struct mcs_intr_info *req;
+- int err, pf;
++ int pf;
+
+ pf = rvu_get_pf(event->pcifunc);
+
++ mutex_lock(&rvu->mbox_lock);
++
+ req = otx2_mbox_alloc_msg_mcs_intr_notify(rvu, pf);
+- if (!req)
++ if (!req) {
++ mutex_unlock(&rvu->mbox_lock);
+ return -ENOMEM;
++ }
+
+ req->mcs_id = event->mcs_id;
+ req->intr_mask = event->intr_mask;
+@@ -135,10 +139,11 @@ static int mcs_notify_pfvf(struct mcs_intr_event *event, struct rvu *rvu)
+ req->hdr.pcifunc = event->pcifunc;
+ req->lmac_id = event->lmac_id;
+
+- otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pf);
+- err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pf);
+- if (err)
+- dev_warn(rvu->dev, "MCS notification to pf %d failed\n", pf);
++ otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pf);
++
++ otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pf);
++
++ mutex_unlock(&rvu->mbox_lock);
+
+ return 0;
+ }
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+index 8f5b7d14e3f7c..59e6442ddf4a4 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+@@ -2114,7 +2114,7 @@ MBOX_MESSAGES
+ }
+ }
+
+-static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
++static void __rvu_mbox_handler(struct rvu_work *mwork, int type, bool poll)
+ {
+ struct rvu *rvu = mwork->rvu;
+ int offset, err, id, devid;
+@@ -2181,6 +2181,9 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
+ }
+ mw->mbox_wrk[devid].num_msgs = 0;
+
++ if (poll)
++ otx2_mbox_wait_for_zero(mbox, devid);
++
+ /* Send mbox responses to VF/PF */
+ otx2_mbox_msg_send(mbox, devid);
+ }
+@@ -2188,15 +2191,18 @@ static void __rvu_mbox_handler(struct rvu_work *mwork, int type)
+ static inline void rvu_afpf_mbox_handler(struct work_struct *work)
+ {
+ struct rvu_work *mwork = container_of(work, struct rvu_work, work);
++ struct rvu *rvu = mwork->rvu;
+
+- __rvu_mbox_handler(mwork, TYPE_AFPF);
++ mutex_lock(&rvu->mbox_lock);
++ __rvu_mbox_handler(mwork, TYPE_AFPF, true);
++ mutex_unlock(&rvu->mbox_lock);
+ }
+
+ static inline void rvu_afvf_mbox_handler(struct work_struct *work)
+ {
+ struct rvu_work *mwork = container_of(work, struct rvu_work, work);
+
+- __rvu_mbox_handler(mwork, TYPE_AFVF);
++ __rvu_mbox_handler(mwork, TYPE_AFVF, false);
+ }
+
+ static void __rvu_mbox_up_handler(struct rvu_work *mwork, int type)
+@@ -2371,6 +2377,8 @@ static int rvu_mbox_init(struct rvu *rvu, struct mbox_wq_info *mw,
+ }
+ }
+
++ mutex_init(&rvu->mbox_lock);
++
+ mbox_regions = kcalloc(num, sizeof(void *), GFP_KERNEL);
+ if (!mbox_regions) {
+ err = -ENOMEM;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+index 6a39006c334d7..a3ae21398ca74 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+@@ -525,6 +525,8 @@ struct rvu {
+ spinlock_t mcs_intrq_lock;
+ /* CPT interrupt lock */
+ spinlock_t cpt_intr_lock;
++
++ struct mutex mbox_lock; /* Serialize mbox up and down msgs */
+ };
+
+ static inline void rvu_write64(struct rvu *rvu, u64 block, u64 offset, u64 val)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+index bcb4385d0621c..d1e6b12ecfa70 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
++++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cgx.c
+@@ -232,7 +232,7 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ struct cgx_link_user_info *linfo;
+ struct cgx_link_info_msg *msg;
+ unsigned long pfmap;
+- int err, pfid;
++ int pfid;
+
+ linfo = &event->link_uinfo;
+ pfmap = cgxlmac_to_pfmap(rvu, event->cgx_id, event->lmac_id);
+@@ -250,16 +250,22 @@ static void cgx_notify_pfs(struct cgx_link_event *event, struct rvu *rvu)
+ continue;
+ }
+
++ mutex_lock(&rvu->mbox_lock);
++
+ /* Send mbox message to PF */
+ msg = otx2_mbox_alloc_msg_cgx_link_event(rvu, pfid);
+- if (!msg)
++ if (!msg) {
++ mutex_unlock(&rvu->mbox_lock);
+ continue;
++ }
++
+ msg->link_info = *linfo;
+- otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, pfid);
+- err = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, pfid);
+- if (err)
+- dev_warn(rvu->dev, "notification to pf %d failed\n",
+- pfid);
++
++ otx2_mbox_wait_for_zero(&rvu->afpf_wq_info.mbox_up, pfid);
++
++ otx2_mbox_msg_send_up(&rvu->afpf_wq_info.mbox_up, pfid);
++
++ mutex_unlock(&rvu->mbox_lock);
+ } while (pfmap);
+ }
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+index 44950c2542bb7..c15d1864a6371 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_common.h
+@@ -785,7 +785,7 @@ static inline int otx2_sync_mbox_up_msg(struct mbox *mbox, int devid)
+
+ if (!otx2_mbox_nonempty(&mbox->mbox_up, devid))
+ return 0;
+- otx2_mbox_msg_send(&mbox->mbox_up, devid);
++ otx2_mbox_msg_send_up(&mbox->mbox_up, devid);
+ err = otx2_mbox_wait_for_rsp(&mbox->mbox_up, devid);
+ if (err)
+ return err;
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index a2d8ac6204054..05ee55022b92c 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -292,8 +292,8 @@ static int otx2_pf_flr_init(struct otx2_nic *pf, int num_vfs)
+ return 0;
+ }
+
+-static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+- int first, int mdevs, u64 intr, int type)
++static void otx2_queue_vf_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
++ int first, int mdevs, u64 intr)
+ {
+ struct otx2_mbox_dev *mdev;
+ struct otx2_mbox *mbox;
+@@ -307,40 +307,26 @@ static void otx2_queue_work(struct mbox *mw, struct workqueue_struct *mbox_wq,
+
+ mbox = &mw->mbox;
+ mdev = &mbox->dev[i];
+- if (type == TYPE_PFAF)
+- otx2_sync_mbox_bbuf(mbox, i);
+ hdr = mdev->mbase + mbox->rx_start;
+ /* The hdr->num_msgs is set to zero immediately in the interrupt
+- * handler to ensure that it holds a correct value next time
+- * when the interrupt handler is called.
+- * pf->mbox.num_msgs holds the data for use in pfaf_mbox_handler
+- * pf>mbox.up_num_msgs holds the data for use in
+- * pfaf_mbox_up_handler.
++ * handler to ensure that it holds a correct value next time
++ * when the interrupt handler is called. pf->mw[i].num_msgs
++ * holds the data for use in otx2_pfvf_mbox_handler and
++ * pf->mw[i].up_num_msgs holds the data for use in
++ * otx2_pfvf_mbox_up_handler.
+ */
+ if (hdr->num_msgs) {
+ mw[i].num_msgs = hdr->num_msgs;
+ hdr->num_msgs = 0;
+- if (type == TYPE_PFAF)
+- memset(mbox->hwbase + mbox->rx_start, 0,
+- ALIGN(sizeof(struct mbox_hdr),
+- sizeof(u64)));
+-
+ queue_work(mbox_wq, &mw[i].mbox_wrk);
+ }
+
+ mbox = &mw->mbox_up;
+ mdev = &mbox->dev[i];
+- if (type == TYPE_PFAF)
+- otx2_sync_mbox_bbuf(mbox, i);
+ hdr = mdev->mbase + mbox->rx_start;
+ if (hdr->num_msgs) {
+ mw[i].up_num_msgs = hdr->num_msgs;
+ hdr->num_msgs = 0;
+- if (type == TYPE_PFAF)
+- memset(mbox->hwbase + mbox->rx_start, 0,
+- ALIGN(sizeof(struct mbox_hdr),
+- sizeof(u64)));
+-
+ queue_work(mbox_wq, &mw[i].mbox_up_wrk);
+ }
+ }
+@@ -356,8 +342,10 @@ static void otx2_forward_msg_pfvf(struct otx2_mbox_dev *mdev,
+ /* Msgs are already copied, trigger VF's mbox irq */
+ smp_wmb();
+
++ otx2_mbox_wait_for_zero(pfvf_mbox, devid);
++
+ offset = pfvf_mbox->trigger | (devid << pfvf_mbox->tr_shift);
+- writeq(1, (void __iomem *)pfvf_mbox->reg_base + offset);
++ writeq(MBOX_DOWN_MSG, (void __iomem *)pfvf_mbox->reg_base + offset);
+
+ /* Restore VF's mbox bounce buffer region address */
+ src_mdev->mbase = bbuf_base;
+@@ -547,7 +535,7 @@ static void otx2_pfvf_mbox_up_handler(struct work_struct *work)
+ end:
+ offset = mbox->rx_start + msg->next_msgoff;
+ if (mdev->msgs_acked == (vf_mbox->up_num_msgs - 1))
+- __otx2_mbox_reset(mbox, 0);
++ __otx2_mbox_reset(mbox, vf_idx);
+ mdev->msgs_acked++;
+ }
+ }
+@@ -564,8 +552,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ if (vfs > 64) {
+ intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(1));
+ otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(1), intr);
+- otx2_queue_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr,
+- TYPE_PFVF);
++ otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 64, vfs, intr);
+ if (intr)
+ trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+ vfs = 64;
+@@ -574,7 +561,7 @@ static irqreturn_t otx2_pfvf_mbox_intr_handler(int irq, void *pf_irq)
+ intr = otx2_read64(pf, RVU_PF_VFPF_MBOX_INTX(0));
+ otx2_write64(pf, RVU_PF_VFPF_MBOX_INTX(0), intr);
+
+- otx2_queue_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr, TYPE_PFVF);
++ otx2_queue_vf_work(mbox, pf->mbox_pfvf_wq, 0, vfs, intr);
+
+ if (intr)
+ trace_otx2_msg_interrupt(mbox->mbox.pdev, "VF(s) to PF", intr);
+@@ -822,20 +809,22 @@ static void otx2_pfaf_mbox_handler(struct work_struct *work)
+ struct mbox *af_mbox;
+ struct otx2_nic *pf;
+ int offset, id;
++ u16 num_msgs;
+
+ af_mbox = container_of(work, struct mbox, mbox_wrk);
+ mbox = &af_mbox->mbox;
+ mdev = &mbox->dev[0];
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ num_msgs = rsp_hdr->num_msgs;
+
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+ pf = af_mbox->pfvf;
+
+- for (id = 0; id < af_mbox->num_msgs; id++) {
++ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ otx2_process_pfaf_mbox_msg(pf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
+- if (mdev->msgs_acked == (af_mbox->num_msgs - 1))
++ if (mdev->msgs_acked == (num_msgs - 1))
+ __otx2_mbox_reset(mbox, 0);
+ mdev->msgs_acked++;
+ }
+@@ -946,12 +935,14 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ int offset, id, devid = 0;
+ struct mbox_hdr *rsp_hdr;
+ struct mbox_msghdr *msg;
++ u16 num_msgs;
+
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ num_msgs = rsp_hdr->num_msgs;
+
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+
+- for (id = 0; id < af_mbox->up_num_msgs; id++) {
++ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+
+ devid = msg->pcifunc & RVU_PFVF_FUNC_MASK;
+@@ -960,10 +951,11 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ otx2_process_mbox_msg_up(pf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
+ }
+- if (devid) {
++ /* Forward to VF iff VFs are really present */
++ if (devid && pci_num_vf(pf->pdev)) {
+ otx2_forward_vf_mbox_msgs(pf, &pf->mbox.mbox_up,
+ MBOX_DIR_PFVF_UP, devid - 1,
+- af_mbox->up_num_msgs);
++ num_msgs);
+ return;
+ }
+
+@@ -973,16 +965,49 @@ static void otx2_pfaf_mbox_up_handler(struct work_struct *work)
+ static irqreturn_t otx2_pfaf_mbox_intr_handler(int irq, void *pf_irq)
+ {
+ struct otx2_nic *pf = (struct otx2_nic *)pf_irq;
+- struct mbox *mbox;
++ struct mbox *mw = &pf->mbox;
++ struct otx2_mbox_dev *mdev;
++ struct otx2_mbox *mbox;
++ struct mbox_hdr *hdr;
++ u64 mbox_data;
+
+ /* Clear the IRQ */
+ otx2_write64(pf, RVU_PF_INT, BIT_ULL(0));
+
+- mbox = &pf->mbox;
+
+- trace_otx2_msg_interrupt(mbox->mbox.pdev, "AF to PF", BIT_ULL(0));
++ mbox_data = otx2_read64(pf, RVU_PF_PFAF_MBOX0);
++
++ if (mbox_data & MBOX_UP_MSG) {
++ mbox_data &= ~MBOX_UP_MSG;
++ otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
++
++ mbox = &mw->mbox_up;
++ mdev = &mbox->dev[0];
++ otx2_sync_mbox_bbuf(mbox, 0);
++
++ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ if (hdr->num_msgs)
++ queue_work(pf->mbox_wq, &mw->mbox_up_wrk);
++
++ trace_otx2_msg_interrupt(pf->pdev, "UP message from AF to PF",
++ BIT_ULL(0));
++ }
++
++ if (mbox_data & MBOX_DOWN_MSG) {
++ mbox_data &= ~MBOX_DOWN_MSG;
++ otx2_write64(pf, RVU_PF_PFAF_MBOX0, mbox_data);
++
++ mbox = &mw->mbox;
++ mdev = &mbox->dev[0];
++ otx2_sync_mbox_bbuf(mbox, 0);
++
++ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ if (hdr->num_msgs)
++ queue_work(pf->mbox_wq, &mw->mbox_wrk);
+
+- otx2_queue_work(mbox, pf->mbox_wq, 0, 1, 1, TYPE_PFAF);
++ trace_otx2_msg_interrupt(pf->pdev, "DOWN reply from AF to PF",
++ BIT_ULL(0));
++ }
+
+ return IRQ_HANDLED;
+ }
+@@ -3030,6 +3055,7 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ struct otx2_vf_config *config;
+ struct cgx_link_info_msg *req;
+ struct mbox_msghdr *msghdr;
++ struct delayed_work *dwork;
+ struct otx2_nic *pf;
+ int vf_idx;
+
+@@ -3038,10 +3064,21 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ vf_idx = config - config->pf->vf_configs;
+ pf = config->pf;
+
++ mutex_lock(&pf->mbox.lock);
++
++ dwork = &config->link_event_work;
++
++ if (!otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx)) {
++ schedule_delayed_work(dwork, msecs_to_jiffies(100));
++ mutex_unlock(&pf->mbox.lock);
++ return;
++ }
++
+ msghdr = otx2_mbox_alloc_msg_rsp(&pf->mbox_pfvf[0].mbox_up, vf_idx,
+ sizeof(*req), sizeof(struct msg_rsp));
+ if (!msghdr) {
+ dev_err(pf->dev, "Failed to create VF%d link event\n", vf_idx);
++ mutex_unlock(&pf->mbox.lock);
+ return;
+ }
+
+@@ -3050,7 +3087,11 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ req->hdr.sig = OTX2_MBOX_REQ_SIG;
+ memcpy(&req->link_info, &pf->linfo, sizeof(req->link_info));
+
+- otx2_sync_mbox_up_msg(&pf->mbox_pfvf[0], vf_idx);
++ otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx);
++
++ otx2_mbox_msg_send_up(&pf->mbox_pfvf[0].mbox_up, vf_idx);
++
++ mutex_unlock(&pf->mbox.lock);
+ }
+
+ static int otx2_sriov_enable(struct pci_dev *pdev, int numvfs)
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+index 404855bccb4b6..68fef947ccced 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_vf.c
+@@ -89,16 +89,20 @@ static void otx2vf_vfaf_mbox_handler(struct work_struct *work)
+ struct otx2_mbox *mbox;
+ struct mbox *af_mbox;
+ int offset, id;
++ u16 num_msgs;
+
+ af_mbox = container_of(work, struct mbox, mbox_wrk);
+ mbox = &af_mbox->mbox;
+ mdev = &mbox->dev[0];
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+- if (af_mbox->num_msgs == 0)
++ num_msgs = rsp_hdr->num_msgs;
++
++ if (num_msgs == 0)
+ return;
++
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+
+- for (id = 0; id < af_mbox->num_msgs; id++) {
++ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ otx2vf_process_vfaf_mbox_msg(af_mbox->pfvf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
+@@ -151,6 +155,7 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
+ struct mbox *vf_mbox;
+ struct otx2_nic *vf;
+ int offset, id;
++ u16 num_msgs;
+
+ vf_mbox = container_of(work, struct mbox, mbox_up_wrk);
+ vf = vf_mbox->pfvf;
+@@ -158,12 +163,14 @@ static void otx2vf_vfaf_mbox_up_handler(struct work_struct *work)
+ mdev = &mbox->dev[0];
+
+ rsp_hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+- if (vf_mbox->up_num_msgs == 0)
++ num_msgs = rsp_hdr->num_msgs;
++
++ if (num_msgs == 0)
+ return;
+
+ offset = mbox->rx_start + ALIGN(sizeof(*rsp_hdr), MBOX_MSG_ALIGN);
+
+- for (id = 0; id < vf_mbox->up_num_msgs; id++) {
++ for (id = 0; id < num_msgs; id++) {
+ msg = (struct mbox_msghdr *)(mdev->mbase + offset);
+ otx2vf_process_mbox_msg_up(vf, msg);
+ offset = mbox->rx_start + msg->next_msgoff;
+@@ -178,40 +185,48 @@ static irqreturn_t otx2vf_vfaf_mbox_intr_handler(int irq, void *vf_irq)
+ struct otx2_mbox_dev *mdev;
+ struct otx2_mbox *mbox;
+ struct mbox_hdr *hdr;
++ u64 mbox_data;
+
+ /* Clear the IRQ */
+ otx2_write64(vf, RVU_VF_INT, BIT_ULL(0));
+
++ mbox_data = otx2_read64(vf, RVU_VF_VFPF_MBOX0);
++
+ /* Read latest mbox data */
+ smp_rmb();
+
+- /* Check for PF => VF response messages */
+- mbox = &vf->mbox.mbox;
+- mdev = &mbox->dev[0];
+- otx2_sync_mbox_bbuf(mbox, 0);
++ if (mbox_data & MBOX_DOWN_MSG) {
++ mbox_data &= ~MBOX_DOWN_MSG;
++ otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
++
++ /* Check for PF => VF response messages */
++ mbox = &vf->mbox.mbox;
++ mdev = &mbox->dev[0];
++ otx2_sync_mbox_bbuf(mbox, 0);
+
+- trace_otx2_msg_interrupt(mbox->pdev, "PF to VF", BIT_ULL(0));
++ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ if (hdr->num_msgs)
++ queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
+
+- hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+- if (hdr->num_msgs) {
+- vf->mbox.num_msgs = hdr->num_msgs;
+- hdr->num_msgs = 0;
+- memset(mbox->hwbase + mbox->rx_start, 0,
+- ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
+- queue_work(vf->mbox_wq, &vf->mbox.mbox_wrk);
++ trace_otx2_msg_interrupt(mbox->pdev, "DOWN reply from PF to VF",
++ BIT_ULL(0));
+ }
+- /* Check for PF => VF notification messages */
+- mbox = &vf->mbox.mbox_up;
+- mdev = &mbox->dev[0];
+- otx2_sync_mbox_bbuf(mbox, 0);
+-
+- hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
+- if (hdr->num_msgs) {
+- vf->mbox.up_num_msgs = hdr->num_msgs;
+- hdr->num_msgs = 0;
+- memset(mbox->hwbase + mbox->rx_start, 0,
+- ALIGN(sizeof(struct mbox_hdr), sizeof(u64)));
+- queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
++
++ if (mbox_data & MBOX_UP_MSG) {
++ mbox_data &= ~MBOX_UP_MSG;
++ otx2_write64(vf, RVU_VF_VFPF_MBOX0, mbox_data);
++
++ /* Check for PF => VF notification messages */
++ mbox = &vf->mbox.mbox_up;
++ mdev = &mbox->dev[0];
++ otx2_sync_mbox_bbuf(mbox, 0);
++
++ hdr = (struct mbox_hdr *)(mdev->mbase + mbox->rx_start);
++ if (hdr->num_msgs)
++ queue_work(vf->mbox_wq, &vf->mbox.mbox_up_wrk);
++
++ trace_otx2_msg_interrupt(mbox->pdev, "UP message from PF to VF",
++ BIT_ULL(0));
+ }
+
+ return IRQ_HANDLED;
+--
+2.43.0
+
--- /dev/null
+From 1e3b5e44b10ce2064d59e13b3df9927f706ae844 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Mar 2024 14:59:57 +0530
+Subject: octeontx2-pf: Send UP messages to VF only when VF is up.
+
+From: Subbaraya Sundeep <sbhatta@marvell.com>
+
+[ Upstream commit dfcf6355f53b1796cf7fd50a4f27b18ee6a3497a ]
+
+When the PF sends link status messages to a VF, it is possible
+that by the time the link_event_task work function is executed
+the VF might have been brought down. Hence, before sending the VF link
+status message, check whether the VF is up to receive it.
+
+Fixes: ad513ed938c9 ("octeontx2-vf: Link event notification support")
+Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index a6c5f6a2dab07..7e2c30927c312 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -3062,6 +3062,9 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+ vf_idx = config - config->pf->vf_configs;
+ pf = config->pf;
+
++ if (config->intf_down)
++ return;
++
+ mutex_lock(&pf->mbox.lock);
+
+ dwork = &config->link_event_work;
+--
+2.43.0
+
--- /dev/null
+From 9c131f208f20edd21fc6708217ed31935f618723 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Mar 2024 14:59:56 +0530
+Subject: octeontx2-pf: Use default max_active works instead of one
+
+From: Subbaraya Sundeep <sbhatta@marvell.com>
+
+[ Upstream commit 7558ce0d974ced1dc07edc1197f750fe28c52e57 ]
+
+Using only one execution context for the workqueue used for PF and
+VF mailbox communication is incorrect, since multiple works are
+queued simultaneously by all the VFs and by PF link UP messages.
+Hence use the default number of execution contexts by passing zero
+as max_active to the alloc_workqueue() function. With this fix in place,
+also modify UP messages to wait until completion.
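+
+For reference, passing 0 as max_active to alloc_workqueue() selects the
+workqueue core's default concurrency limit, so several mailbox works can
+run concurrently instead of being serialized on the single execution
+context that an ordered workqueue provides.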
+
+Fixes: d424b6c02415 ("octeontx2-pf: Enable SRIOV and added VF mbox handling")
+Signed-off-by: Subbaraya Sundeep <sbhatta@marvell.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+index 3f044b161e8bf..a6c5f6a2dab07 100644
+--- a/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
++++ b/drivers/net/ethernet/marvell/octeontx2/nic/otx2_pf.c
+@@ -583,8 +583,9 @@ static int otx2_pfvf_mbox_init(struct otx2_nic *pf, int numvfs)
+ if (!pf->mbox_pfvf)
+ return -ENOMEM;
+
+- pf->mbox_pfvf_wq = alloc_ordered_workqueue("otx2_pfvf_mailbox",
+- WQ_HIGHPRI | WQ_MEM_RECLAIM);
++ pf->mbox_pfvf_wq = alloc_workqueue("otx2_pfvf_mailbox",
++ WQ_UNBOUND | WQ_HIGHPRI |
++ WQ_MEM_RECLAIM, 0);
+ if (!pf->mbox_pfvf_wq)
+ return -ENOMEM;
+
+@@ -3086,7 +3087,7 @@ static void otx2_vf_link_event_task(struct work_struct *work)
+
+ otx2_mbox_wait_for_zero(&pf->mbox_pfvf[0].mbox_up, vf_idx);
+
+- otx2_mbox_msg_send_up(&pf->mbox_pfvf[0].mbox_up, vf_idx);
++ otx2_sync_mbox_up_msg(&pf->mbox_pfvf[0], vf_idx);
+
+ mutex_unlock(&pf->mbox.lock);
+ }
+--
+2.43.0
+
--- /dev/null
+From f200d5e08e7e61f3a12297415cbe40c5a0f57456 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 14:18:16 +0000
+Subject: packet: annotate data-races around ignore_outgoing
+
+From: Eric Dumazet <edumazet@google.com>
+
+[ Upstream commit 6ebfad33161afacb3e1e59ed1c2feefef70f9f97 ]
+
+ignore_outgoing is read locklessly from dev_queue_xmit_nit()
+and packet_getsockopt().
+
+Add appropriate READ_ONCE()/WRITE_ONCE() annotations.
+
+syzbot reported:
+
+BUG: KCSAN: data-race in dev_queue_xmit_nit / packet_setsockopt
+
+write to 0xffff888107804542 of 1 bytes by task 22618 on cpu 0:
+ packet_setsockopt+0xd83/0xfd0 net/packet/af_packet.c:4003
+ do_sock_setsockopt net/socket.c:2311 [inline]
+ __sys_setsockopt+0x1d8/0x250 net/socket.c:2334
+ __do_sys_setsockopt net/socket.c:2343 [inline]
+ __se_sys_setsockopt net/socket.c:2340 [inline]
+ __x64_sys_setsockopt+0x66/0x80 net/socket.c:2340
+ do_syscall_64+0xd3/0x1d0
+ entry_SYSCALL_64_after_hwframe+0x6d/0x75
+
+read to 0xffff888107804542 of 1 bytes by task 27 on cpu 1:
+ dev_queue_xmit_nit+0x82/0x620 net/core/dev.c:2248
+ xmit_one net/core/dev.c:3527 [inline]
+ dev_hard_start_xmit+0xcc/0x3f0 net/core/dev.c:3547
+ __dev_queue_xmit+0xf24/0x1dd0 net/core/dev.c:4335
+ dev_queue_xmit include/linux/netdevice.h:3091 [inline]
+ batadv_send_skb_packet+0x264/0x300 net/batman-adv/send.c:108
+ batadv_send_broadcast_skb+0x24/0x30 net/batman-adv/send.c:127
+ batadv_iv_ogm_send_to_if net/batman-adv/bat_iv_ogm.c:392 [inline]
+ batadv_iv_ogm_emit net/batman-adv/bat_iv_ogm.c:420 [inline]
+ batadv_iv_send_outstanding_bat_ogm_packet+0x3f0/0x4b0 net/batman-adv/bat_iv_ogm.c:1700
+ process_one_work kernel/workqueue.c:3254 [inline]
+ process_scheduled_works+0x465/0x990 kernel/workqueue.c:3335
+ worker_thread+0x526/0x730 kernel/workqueue.c:3416
+ kthread+0x1d1/0x210 kernel/kthread.c:388
+ ret_from_fork+0x4b/0x60 arch/x86/kernel/process.c:147
+ ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:243
+
+value changed: 0x00 -> 0x01
+
+Reported by Kernel Concurrency Sanitizer on:
+CPU: 1 PID: 27 Comm: kworker/u8:1 Tainted: G W 6.8.0-syzkaller-08073-g480e035fc4c7 #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/29/2024
+Workqueue: bat_events batadv_iv_send_outstanding_bat_ogm_packet
+
+Fixes: fa788d986a3a ("packet: add sockopt to ignore outgoing packets")
+Reported-by: syzbot+c669c1136495a2e7c31f@syzkaller.appspotmail.com
+Closes: https://lore.kernel.org/netdev/CANn89i+Z7MfbkBLOv=p7KZ7=K1rKHO4P1OL5LYDCtBiyqsa9oQ@mail.gmail.com/T/#t
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Cc: Willem de Bruijn <willemdebruijn.kernel@gmail.com>
+Reviewed-by: Willem de Bruijn <willemb@google.com>
+Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/core/dev.c | 2 +-
+ net/packet/af_packet.c | 4 ++--
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/net/core/dev.c b/net/core/dev.c
+index 60619fe8af5fc..9a48a7e26cf46 100644
+--- a/net/core/dev.c
++++ b/net/core/dev.c
+@@ -2271,7 +2271,7 @@ void dev_queue_xmit_nit(struct sk_buff *skb, struct net_device *dev)
+ rcu_read_lock();
+ again:
+ list_for_each_entry_rcu(ptype, ptype_list, list) {
+- if (ptype->ignore_outgoing)
++ if (READ_ONCE(ptype->ignore_outgoing))
+ continue;
+
+ /* Never send packets back to the socket
+diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
+index c3117350f5fbb..7188ca8d84693 100644
+--- a/net/packet/af_packet.c
++++ b/net/packet/af_packet.c
+@@ -3981,7 +3981,7 @@ packet_setsockopt(struct socket *sock, int level, int optname, sockptr_t optval,
+ if (val < 0 || val > 1)
+ return -EINVAL;
+
+- po->prot_hook.ignore_outgoing = !!val;
++ WRITE_ONCE(po->prot_hook.ignore_outgoing, !!val);
+ return 0;
+ }
+ case PACKET_TX_HAS_OFF:
+@@ -4110,7 +4110,7 @@ static int packet_getsockopt(struct socket *sock, int level, int optname,
+ 0);
+ break;
+ case PACKET_IGNORE_OUTGOING:
+- val = po->prot_hook.ignore_outgoing;
++ val = READ_ONCE(po->prot_hook.ignore_outgoing);
+ break;
+ case PACKET_ROLLOVER_STATS:
+ if (!po->rollover)
+--
+2.43.0
+
--- /dev/null
+From 037fafff8fb16fb4fc1c462704af4aa339d08f49 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 19 Mar 2024 13:44:34 -0700
+Subject: rcu: add a helper to report consolidated flavor QS
+
+From: Yan Zhai <yan@cloudflare.com>
+
+[ Upstream commit 1a77557d48cff187a169c2aec01c0dd78a5e7e50 ]
+
+When under heavy load, network processing can run CPU-bound for many
+tens of seconds. Even in preemptible kernels (non-RT kernel), this can
+block RCU Tasks grace periods, which can cause trace-event removal to
+take more than a minute, which is unacceptably long.
+
+This commit therefore creates a new helper function that passes through
+both RCU and RCU-Tasks quiescent states every 100 milliseconds. This
+hard-coded value suffices for current workloads.
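+
+A minimal usage sketch (illustrative only, not part of this commit;
+have_more_work() and process_one_item() are placeholders):
+
+	unsigned long last_qs = jiffies;
+
+	while (have_more_work()) {
+		process_one_item();
+		rcu_softirq_qs_periodic(last_qs);
+	}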
+
+Suggested-by: Paul E. McKenney <paulmck@kernel.org>
+Reviewed-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Signed-off-by: Yan Zhai <yan@cloudflare.com>
+Reviewed-by: Paul E. McKenney <paulmck@kernel.org>
+Acked-by: Jesper Dangaard Brouer <hawk@kernel.org>
+Link: https://lore.kernel.org/r/90431d46ee112d2b0af04dbfe936faaca11810a5.1710877680.git.yan@cloudflare.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Stable-dep-of: d6dbbb11247c ("net: report RCU QS on threaded NAPI repolling")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ include/linux/rcupdate.h | 31 +++++++++++++++++++++++++++++++
+ 1 file changed, 31 insertions(+)
+
+diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
+index d2507168b9c7b..319698087d66a 100644
+--- a/include/linux/rcupdate.h
++++ b/include/linux/rcupdate.h
+@@ -268,6 +268,37 @@ do { \
+ cond_resched(); \
+ } while (0)
+
++/**
++ * rcu_softirq_qs_periodic - Report RCU and RCU-Tasks quiescent states
++ * @old_ts: jiffies at start of processing.
++ *
++ * This helper is for long-running softirq handlers, such as NAPI threads in
++ * networking. The caller should initialize the variable passed in as @old_ts
++ * at the beginning of the softirq handler. When invoked frequently, this macro
++ * will invoke rcu_softirq_qs() every 100 milliseconds thereafter, which will
++ * provide both RCU and RCU-Tasks quiescent states. Note that this macro
++ * modifies its old_ts argument.
++ *
++ * Because regions of code that have disabled softirq act as RCU read-side
++ * critical sections, this macro should be invoked with softirq (and
++ * preemption) enabled.
++ *
++ * The macro is not needed when CONFIG_PREEMPT_RT is defined. RT kernels would
++ * have more chance to invoke schedule() calls and provide necessary quiescent
++ * states. As a contrast, calling cond_resched() only won't achieve the same
++ * effect because cond_resched() does not provide RCU-Tasks quiescent states.
++ */
++#define rcu_softirq_qs_periodic(old_ts) \
++do { \
++ if (!IS_ENABLED(CONFIG_PREEMPT_RT) && \
++ time_after(jiffies, (old_ts) + HZ / 10)) { \
++ preempt_disable(); \
++ rcu_softirq_qs(); \
++ preempt_enable(); \
++ (old_ts) = jiffies; \
++ } \
++} while (0)
++
+ /*
+ * Infrastructure to implement the synchronize_() primitives in
+ * TREE_RCU and rcu_barrier_() primitives in TINY_RCU.
+--
+2.43.0
+
--- /dev/null
+From 8f8642a729413c2cb1fe26276debc14364dfda7f Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 15 Mar 2024 18:28:38 +0900
+Subject: rds: introduce acquire/release ordering in acquire/release_in_xmit()
+
+From: Yewon Choi <woni9911@gmail.com>
+
+[ Upstream commit 1422f28826d2a0c11e5240b3e951c9e214d8656e ]
+
+acquire/release_in_xmit() work as bit lock in rds_send_xmit(), so they
+are expected to ensure acquire/release memory ordering semantics.
+However, test_and_set_bit()/clear_bit() don't imply such semantics; on
+top of this, the following smp_mb__after_atomic() does not guarantee
+release ordering (the memory barrier actually should be placed before
+clear_bit()).
+
+Instead, we use clear_bit_unlock/test_and_set_bit_lock() here.
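+
+A generic sketch of the intended bit-lock pattern (illustrative only;
+BIT_NR and the flags word stand in for RDS_IN_XMIT and cp->cp_flags):
+
+	if (test_and_set_bit_lock(BIT_NR, &flags))
+		return 0;	/* acquire failed, lock already held */
+
+	/* ... critical section, i.e. rds_send_xmit() ... */
+
+	clear_bit_unlock(BIT_NR, &flags);	/* release */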
+
+Fixes: 0f4b1c7e89e6 ("rds: fix rds_send_xmit() serialization")
+Fixes: 1f9ecd7eacfd ("RDS: Pass rds_conn_path to rds_send_xmit()")
+Signed-off-by: Yewon Choi <woni9911@gmail.com>
+Reviewed-by: Michal Kubiak <michal.kubiak@intel.com>
+Link: https://lore.kernel.org/r/ZfQUxnNTO9AJmzwc@libra05
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/rds/send.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+diff --git a/net/rds/send.c b/net/rds/send.c
+index a4ba45c430d81..0005fb43f2dfa 100644
+--- a/net/rds/send.c
++++ b/net/rds/send.c
+@@ -103,13 +103,12 @@ EXPORT_SYMBOL_GPL(rds_send_path_reset);
+
+ static int acquire_in_xmit(struct rds_conn_path *cp)
+ {
+- return test_and_set_bit(RDS_IN_XMIT, &cp->cp_flags) == 0;
++ return test_and_set_bit_lock(RDS_IN_XMIT, &cp->cp_flags) == 0;
+ }
+
+ static void release_in_xmit(struct rds_conn_path *cp)
+ {
+- clear_bit(RDS_IN_XMIT, &cp->cp_flags);
+- smp_mb__after_atomic();
++ clear_bit_unlock(RDS_IN_XMIT, &cp->cp_flags);
+ /*
+ * We don't use wait_on_bit()/wake_up_bit() because our waking is in a
+ * hot path and finding waiters is very rare. We don't want to walk
+--
+2.43.0
+
--- /dev/null
+From 991384a3dea31c89d0eb426f56fe651f256d1aa3 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 8 Mar 2024 12:01:22 -0800
+Subject: rds: tcp: Fix use-after-free of net in reqsk_timer_handler().
+
+From: Kuniyuki Iwashima <kuniyu@amazon.com>
+
+[ Upstream commit 2a750d6a5b365265dbda33330a6188547ddb5c24 ]
+
+syzkaller reported a warning of netns tracker [0] followed by KASAN
+splat [1] and another ref tracker warning [1].
+
+syzkaller could not find a repro, but in the log, the only suspicious
+sequence was as follows:
+
+ 18:26:22 executing program 1:
+ r0 = socket$inet6_mptcp(0xa, 0x1, 0x106)
+ ...
+ connect$inet6(r0, &(0x7f0000000080)={0xa, 0x4001, 0x0, @loopback}, 0x1c) (async)
+
+The notable thing here is 0x4001 in connect(), which is RDS_TCP_PORT.
+
+So, the scenario would be:
+
+ 1. unshare(CLONE_NEWNET) creates a per netns tcp listener in
+ rds_tcp_listen_init().
+ 2. syz-executor connect()s to it and creates a reqsk.
+ 3. syz-executor exit()s immediately.
+ 4. netns is dismantled. [0]
+ 5. reqsk timer is fired, and UAF happens while freeing reqsk. [1]
+ 6. listener is freed after RCU grace period. [2]
+
+Basically, reqsk assumes that the listener guarantees netns safety
+until all reqsk timers are expired by holding the listener's refcount.
+However, this was not the case for kernel sockets.
+
+Commit 740ea3c4a0b2 ("tcp: Clean up kernel listener's reqsk in
+inet_twsk_purge()") fixed this issue only for per-netns ehash.
+
+Let's apply the same fix for the global ehash.
+
+[0]:
+ref_tracker: net notrefcnt@0000000065449cc3 has 1/1 users at
+ sk_alloc (./include/net/net_namespace.h:337 net/core/sock.c:2146)
+ inet6_create (net/ipv6/af_inet6.c:192 net/ipv6/af_inet6.c:119)
+ __sock_create (net/socket.c:1572)
+ rds_tcp_listen_init (net/rds/tcp_listen.c:279)
+ rds_tcp_init_net (net/rds/tcp.c:577)
+ ops_init (net/core/net_namespace.c:137)
+ setup_net (net/core/net_namespace.c:340)
+ copy_net_ns (net/core/net_namespace.c:497)
+ create_new_namespaces (kernel/nsproxy.c:110)
+ unshare_nsproxy_namespaces (kernel/nsproxy.c:228 (discriminator 4))
+ ksys_unshare (kernel/fork.c:3429)
+ __x64_sys_unshare (kernel/fork.c:3496)
+ do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
+ entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:129)
+...
+WARNING: CPU: 0 PID: 27 at lib/ref_tracker.c:179 ref_tracker_dir_exit (lib/ref_tracker.c:179)
+
+[1]:
+BUG: KASAN: slab-use-after-free in inet_csk_reqsk_queue_drop (./include/net/inet_hashtables.h:180 net/ipv4/inet_connection_sock.c:952 net/ipv4/inet_connection_sock.c:966)
+Read of size 8 at addr ffff88801b370400 by task swapper/0/0
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
+Call Trace:
+ <IRQ>
+ dump_stack_lvl (lib/dump_stack.c:107 (discriminator 1))
+ print_report (mm/kasan/report.c:378 mm/kasan/report.c:488)
+ kasan_report (mm/kasan/report.c:603)
+ inet_csk_reqsk_queue_drop (./include/net/inet_hashtables.h:180 net/ipv4/inet_connection_sock.c:952 net/ipv4/inet_connection_sock.c:966)
+ reqsk_timer_handler (net/ipv4/inet_connection_sock.c:979 net/ipv4/inet_connection_sock.c:1092)
+ call_timer_fn (./arch/x86/include/asm/jump_label.h:27 ./include/linux/jump_label.h:207 ./include/trace/events/timer.h:127 kernel/time/timer.c:1701)
+ __run_timers.part.0 (kernel/time/timer.c:1752 kernel/time/timer.c:2038)
+ run_timer_softirq (kernel/time/timer.c:2053)
+ __do_softirq (./arch/x86/include/asm/jump_label.h:27 ./include/linux/jump_label.h:207 ./include/trace/events/irq.h:142 kernel/softirq.c:554)
+ irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632 kernel/softirq.c:644)
+ sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1076 (discriminator 14))
+ </IRQ>
+
+Allocated by task 258 on cpu 0 at 83.612050s:
+ kasan_save_stack (mm/kasan/common.c:48)
+ kasan_save_track (mm/kasan/common.c:68)
+ __kasan_slab_alloc (mm/kasan/common.c:343)
+ kmem_cache_alloc (mm/slub.c:3813 mm/slub.c:3860 mm/slub.c:3867)
+ copy_net_ns (./include/linux/slab.h:701 net/core/net_namespace.c:421 net/core/net_namespace.c:480)
+ create_new_namespaces (kernel/nsproxy.c:110)
+ unshare_nsproxy_namespaces (kernel/nsproxy.c:228 (discriminator 4))
+ ksys_unshare (kernel/fork.c:3429)
+ __x64_sys_unshare (kernel/fork.c:3496)
+ do_syscall_64 (arch/x86/entry/common.c:52 arch/x86/entry/common.c:83)
+ entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:129)
+
+Freed by task 27 on cpu 0 at 329.158864s:
+ kasan_save_stack (mm/kasan/common.c:48)
+ kasan_save_track (mm/kasan/common.c:68)
+ kasan_save_free_info (mm/kasan/generic.c:643)
+ __kasan_slab_free (mm/kasan/common.c:265)
+ kmem_cache_free (mm/slub.c:4299 mm/slub.c:4363)
+ cleanup_net (net/core/net_namespace.c:456 net/core/net_namespace.c:446 net/core/net_namespace.c:639)
+ process_one_work (kernel/workqueue.c:2638)
+ worker_thread (kernel/workqueue.c:2700 kernel/workqueue.c:2787)
+ kthread (kernel/kthread.c:388)
+ ret_from_fork (arch/x86/kernel/process.c:153)
+ ret_from_fork_asm (arch/x86/entry/entry_64.S:250)
+
+The buggy address belongs to the object at ffff88801b370000
+ which belongs to the cache net_namespace of size 4352
+The buggy address is located 1024 bytes inside of
+ freed 4352-byte region [ffff88801b370000, ffff88801b371100)
+
+[2]:
+WARNING: CPU: 0 PID: 95 at lib/ref_tracker.c:228 ref_tracker_free (lib/ref_tracker.c:228 (discriminator 1))
+Modules linked in:
+Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014
+RIP: 0010:ref_tracker_free (lib/ref_tracker.c:228 (discriminator 1))
+...
+Call Trace:
+<IRQ>
+ __sk_destruct (./include/net/net_namespace.h:353 net/core/sock.c:2204)
+ rcu_core (./arch/x86/include/asm/preempt.h:26 kernel/rcu/tree.c:2165 kernel/rcu/tree.c:2433)
+ __do_softirq (./arch/x86/include/asm/jump_label.h:27 ./include/linux/jump_label.h:207 ./include/trace/events/irq.h:142 kernel/softirq.c:554)
+ irq_exit_rcu (kernel/softirq.c:427 kernel/softirq.c:632 kernel/softirq.c:644)
+ sysvec_apic_timer_interrupt (arch/x86/kernel/apic/apic.c:1076 (discriminator 14))
+</IRQ>
+
+Reported-by: syzkaller <syzkaller@googlegroups.com>
+Suggested-by: Eric Dumazet <edumazet@google.com>
+Fixes: 467fa15356ac ("RDS-TCP: Support multiple RDS-TCP listen endpoints, one per netns.")
+Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
+Reviewed-by: Eric Dumazet <edumazet@google.com>
+Link: https://lore.kernel.org/r/20240308200122.64357-3-kuniyu@amazon.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/ipv4/tcp_minisocks.c | 4 ----
+ 1 file changed, 4 deletions(-)
+
+diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
+index 42844d20da020..b3bfa1a09df68 100644
+--- a/net/ipv4/tcp_minisocks.c
++++ b/net/ipv4/tcp_minisocks.c
+@@ -357,10 +357,6 @@ void tcp_twsk_purge(struct list_head *net_exit_list, int family)
+ /* Even if tw_refcount == 1, we must clean up kernel reqsk */
+ inet_twsk_purge(net->ipv4.tcp_death_row.hashinfo, family);
+ } else if (!purged_once) {
+- /* The last refcount is decremented in tcp_sk_exit_batch() */
+- if (refcount_read(&net->ipv4.tcp_death_row.tw_refcount) == 1)
+- continue;
+-
+ inet_twsk_purge(&tcp_hashinfo, family);
+ purged_once = true;
+ }
+--
+2.43.0
+
--- /dev/null
+From 72854df69d6a49a1eb42bb5c9e8e92bfe5e07e34 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 17 Jan 2024 14:53:12 +0100
+Subject: remoteproc: stm32: Fix incorrect type assignment returned by
+ stm32_rproc_get_loaded_rsc_tablef
+
+From: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+
+[ Upstream commit c77b35ce66af25bdd6fde60b62e35b9b316ea5c2 ]
+
+The sparse tool complains about the removal of the __iomem attribute.
+
+stm32_rproc.c:660:17: warning: cast removes address space '__iomem' of expression
+
+Add '__force' to explicitly specify that the cast is intentional.
+This conversion is necessary to cast the __iomem address to a plain
+pointer, which is then managed by the remoteproc core as a pointer to a
+resource_table structure.
+
+Fixes: 8a471396d21c ("remoteproc: stm32: Move resource table setup to rproc_ops")
+Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+Link: https://lore.kernel.org/r/20240117135312.3381936-3-arnaud.pouliquen@foss.st.com
+Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/remoteproc/stm32_rproc.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 722cf1cdc2cb0..385e931603ad3 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -641,7 +641,7 @@ stm32_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
+ * entire area by overwriting it with the initial values stored in rproc->clean_table.
+ */
+ *table_sz = RSC_TBL_SIZE;
+- return (struct resource_table *)ddata->rsc_va;
++ return (__force struct resource_table *)ddata->rsc_va;
+ }
+
+ static const struct rproc_ops st_rproc_ops = {
+--
+2.43.0
+
--- /dev/null
+From 0dbaa52d403db784e07c14c5568cdafc217924de Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 17 Jan 2024 14:53:11 +0100
+Subject: remoteproc: stm32: Fix incorrect type in assignment for va
+
+From: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+
+[ Upstream commit 32381bbccba4c21145c571701f8f7fb1d9b3a92e ]
+
+The sparse tool complains about the attribute conversion between
+an __iomem void * and a void *:
+
+stm32_rproc.c:122:12: sparse: sparse: incorrect type in assignment (different address spaces) @@ expected void *va @@ got void [noderef] __iomem * @@
+stm32_rproc.c:122:12: sparse: expected void *va
+stm32_rproc.c:122:12: sparse: got void [noderef] __iomem *
+
+Add '__force' to explicitly specify that the cast is intentional.
+This conversion is necessary to cast to a virtual address pointer, used
+by the remoteproc core.
+
+Reported-by: kernel test robot <lkp@intel.com>
+Closes: https://lore.kernel.org/oe-kbuild-all/202312150052.HCiNKlqB-lkp@intel.com/
+Fixes: 13140de09cc2 ("remoteproc: stm32: add an ST stm32_rproc driver")
+Signed-off-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+Link: https://lore.kernel.org/r/20240117135312.3381936-2-arnaud.pouliquen@foss.st.com
+Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/remoteproc/stm32_rproc.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index e432febf4337b..722cf1cdc2cb0 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -119,7 +119,7 @@ static int stm32_rproc_mem_alloc(struct rproc *rproc,
+ void *va;
+
+ dev_dbg(dev, "map memory: %pad+%zx\n", &mem->dma, mem->len);
+- va = ioremap_wc(mem->dma, mem->len);
++ va = (__force void *)ioremap_wc(mem->dma, mem->len);
+ if (IS_ERR_OR_NULL(va)) {
+ dev_err(dev, "Unable to map memory region: %pad+0x%zx\n",
+ &mem->dma, mem->len);
+@@ -136,7 +136,7 @@ static int stm32_rproc_mem_release(struct rproc *rproc,
+ struct rproc_mem_entry *mem)
+ {
+ dev_dbg(rproc->dev.parent, "unmap memory: %pa\n", &mem->dma);
+- iounmap(mem->va);
++ iounmap((__force __iomem void *)mem->va);
+
+ return 0;
+ }
+--
+2.43.0
+
--- /dev/null
+From 59e5406fd618a73501269fb96e9646d4fcad4484 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 9 Jun 2023 12:45:42 +0200
+Subject: remoteproc: stm32: use correct format strings on 64-bit
+
+From: Arnd Bergmann <arnd@arndb.de>
+
+[ Upstream commit 03bd158e1535e68bcd2b1e095b0ebcad7c84bd20 ]
+
+With CONFIG_ARCH_STM32 making it into arch/arm64, a couple of format
+strings no longer work, since they rely on size_t being compatible
+with %x, or they print an 'int' using %z:
+
+drivers/remoteproc/stm32_rproc.c: In function 'stm32_rproc_mem_alloc':
+drivers/remoteproc/stm32_rproc.c:122:22: error: format '%x' expects argument of type 'unsigned int', but argument 5 has type 'size_t' {aka 'long unsigned int'} [-Werror=format=]
+drivers/remoteproc/stm32_rproc.c:122:40: note: format string is defined here
+ 122 | dev_dbg(dev, "map memory: %pa+%x\n", &mem->dma, mem->len);
+ | ~^
+ | |
+ | unsigned int
+ | %lx
+drivers/remoteproc/stm32_rproc.c:125:30: error: format '%x' expects argument of type 'unsigned int', but argument 4 has type 'size_t' {aka 'long unsigned int'} [-Werror=format=]
+drivers/remoteproc/stm32_rproc.c:125:65: note: format string is defined here
+ 125 | dev_err(dev, "Unable to map memory region: %pa+%x\n",
+ | ~^
+ | |
+ | unsigned int
+ | %lx
+drivers/remoteproc/stm32_rproc.c: In function 'stm32_rproc_get_loaded_rsc_table':
+drivers/remoteproc/stm32_rproc.c:646:30: error: format '%zx' expects argument of type 'size_t', but argument 4 has type 'int' [-Werror=format=]
+drivers/remoteproc/stm32_rproc.c:646:66: note: format string is defined here
+ 646 | dev_err(dev, "Unable to map memory region: %pa+%zx\n",
+ | ~~^
+ | |
+ | long unsigned int
+ | %x
+
+Fix up all three instances to work across architectures, and enable
+compile testing for this driver to ensure it builds everywhere.
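+
+For reference, %zx is the printk format specifier for size_t and %pad
+prints a dma_addr_t passed by reference, while the last call site drops
+%zx because RSC_TBL_SIZE is a plain int, as the compiler errors above
+show.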
+
+Reviewed-by: Arnaud Pouliquen <arnaud.pouliquen@foss.st.com>
+Acked-by: Randy Dunlap <rdunlap@infradead.org>
+Tested-by: Randy Dunlap <rdunlap@infradead.org>
+Signed-off-by: Arnd Bergmann <arnd@arndb.de>
+Stable-dep-of: 32381bbccba4 ("remoteproc: stm32: Fix incorrect type in assignment for va")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/remoteproc/Kconfig | 2 +-
+ drivers/remoteproc/stm32_rproc.c | 6 +++---
+ 2 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/remoteproc/Kconfig b/drivers/remoteproc/Kconfig
+index 1660197866531..d93113b6ffaa1 100644
+--- a/drivers/remoteproc/Kconfig
++++ b/drivers/remoteproc/Kconfig
+@@ -313,7 +313,7 @@ config ST_SLIM_REMOTEPROC
+
+ config STM32_RPROC
+ tristate "STM32 remoteproc support"
+- depends on ARCH_STM32
++ depends on ARCH_STM32 || COMPILE_TEST
+ depends on REMOTEPROC
+ select MAILBOX
+ help
+diff --git a/drivers/remoteproc/stm32_rproc.c b/drivers/remoteproc/stm32_rproc.c
+index 8746cbb1f168d..e432febf4337b 100644
+--- a/drivers/remoteproc/stm32_rproc.c
++++ b/drivers/remoteproc/stm32_rproc.c
+@@ -118,10 +118,10 @@ static int stm32_rproc_mem_alloc(struct rproc *rproc,
+ struct device *dev = rproc->dev.parent;
+ void *va;
+
+- dev_dbg(dev, "map memory: %pa+%x\n", &mem->dma, mem->len);
++ dev_dbg(dev, "map memory: %pad+%zx\n", &mem->dma, mem->len);
+ va = ioremap_wc(mem->dma, mem->len);
+ if (IS_ERR_OR_NULL(va)) {
+- dev_err(dev, "Unable to map memory region: %pa+%x\n",
++ dev_err(dev, "Unable to map memory region: %pad+0x%zx\n",
+ &mem->dma, mem->len);
+ return -ENOMEM;
+ }
+@@ -627,7 +627,7 @@ stm32_rproc_get_loaded_rsc_table(struct rproc *rproc, size_t *table_sz)
+
+ ddata->rsc_va = devm_ioremap_wc(dev, rsc_pa, RSC_TBL_SIZE);
+ if (IS_ERR_OR_NULL(ddata->rsc_va)) {
+- dev_err(dev, "Unable to map memory region: %pa+%zx\n",
++ dev_err(dev, "Unable to map memory region: %pa+%x\n",
+ &rsc_pa, RSC_TBL_SIZE);
+ ddata->rsc_va = NULL;
+ return ERR_PTR(-ENOMEM);
+--
+2.43.0
+
--- /dev/null
+From a3103645113129579852b6b41040a6646ab6d6e5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 12 Feb 2024 21:02:58 -0800
+Subject: rtc: mt6397: select IRQ_DOMAIN instead of depending on it
+
+From: Randy Dunlap <rdunlap@infradead.org>
+
+[ Upstream commit 544c42f798e1651dcb04fb0395219bf0f1c2607e ]
+
+IRQ_DOMAIN is a hidden (not user visible) symbol. Users cannot set
+it directly through "make *config", so drivers should select it instead
+of depending on it if they need it.
+Relying on it being set for a dependency is risky.
+
+Consistently using "select" or "depends on" can also help reduce
+Kconfig circular dependency issues.
+
+Therefore, change the use of "depends on" for IRQ_DOMAIN to
+"select" for RTC_DRV_MT6397.
+
+Fixes: 04d3ba70a3c9 ("rtc: mt6397: add IRQ domain dependency")
+Cc: Arnd Bergmann <arnd@arndb.de>
+Cc: Eddie Huang <eddie.huang@mediatek.com>
+Cc: Sean Wang <sean.wang@mediatek.com>
+Cc: Matthias Brugger <matthias.bgg@gmail.com>
+Cc: linux-arm-kernel@lists.infradead.org
+Cc: linux-mediatek@lists.infradead.org
+Cc: Alessandro Zummo <a.zummo@towertech.it>
+Cc: Alexandre Belloni <alexandre.belloni@bootlin.com>
+Cc: linux-rtc@vger.kernel.org
+Cc: Marc Zyngier <maz@kernel.org>
+Cc: Philipp Zabel <p.zabel@pengutronix.de>
+Cc: Peter Rosin <peda@axentia.se>
+Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
+Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
+Link: https://lore.kernel.org/r/20240213050258.6167-1-rdunlap@infradead.org
+Signed-off-by: Alexandre Belloni <alexandre.belloni@bootlin.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/rtc/Kconfig | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/rtc/Kconfig b/drivers/rtc/Kconfig
+index bb63edb507da4..87dc050ca004c 100644
+--- a/drivers/rtc/Kconfig
++++ b/drivers/rtc/Kconfig
+@@ -1843,7 +1843,8 @@ config RTC_DRV_MT2712
+
+ config RTC_DRV_MT6397
+ tristate "MediaTek PMIC based RTC"
+- depends on MFD_MT6397 || (COMPILE_TEST && IRQ_DOMAIN)
++ depends on MFD_MT6397 || COMPILE_TEST
++ select IRQ_DOMAIN
+ help
+ This selects the MediaTek(R) RTC driver. RTC is part of MediaTek
+ MT6397 PMIC. You should enable MT6397 PMIC MFD before select
+--
+2.43.0
+
--- /dev/null
+From 60b4b1916ee3b4167f903a218efbe6f82ef415fc Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 6 Mar 2024 12:31:52 +0100
+Subject: s390/vtime: fix average steal time calculation
+
+From: Mete Durlu <meted@linux.ibm.com>
+
+[ Upstream commit 367c50f78451d3bd7ad70bc5c89f9ba6dec46ca9 ]
+
+The current average steal timer calculation produces volatile and inflated
+values. The only user of this value so far is KVM, which uses it to
+decide whether or not to yield the vCPU that is seeing steal time.
+KVM compares the average steal timer to a threshold; if the threshold
+is exceeded, it does not allow CPU polling and yields the vCPU to the
+host, else it keeps the CPU by polling.
+Since KVM's steal time threshold is very low by default (10%), it is most
+likely not affected much by the bloated average steal timer values,
+because the operating region is pretty small. However, there might be
+new users in the future who rely on this number. Fix the average
+steal timer calculation by changing the formula from:
+
+ avg_steal_timer = avg_steal_timer / 2 + steal_timer;
+
+to the following:
+
+ avg_steal_timer = (avg_steal_timer + steal_timer) / 2;
+
+This ensures that avg_steal_timer is actually a naive average of steal
+timer values. It now closely follows steal timer values but of course
+in a smoother manner.
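+
+Worked example (numbers are hypothetical): with a constant steal time of
+100 per interval, the old formula converges to the fixed point of
+avg = avg / 2 + 100, i.e. avg = 200, double the real value, whereas the
+new formula converges to avg = (avg + 100) / 2, i.e. avg = 100, matching
+the actual steal time.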
+
+Fixes: 152e9b8676c6 ("s390/vtime: steal time exponential moving average")
+Signed-off-by: Mete Durlu <meted@linux.ibm.com>
+Acked-by: Heiko Carstens <hca@linux.ibm.com>
+Acked-by: Christian Borntraeger <borntraeger@linux.ibm.com>
+Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ arch/s390/kernel/vtime.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
+index 9436f3053b88c..003c926a0f4de 100644
+--- a/arch/s390/kernel/vtime.c
++++ b/arch/s390/kernel/vtime.c
+@@ -210,13 +210,13 @@ void vtime_flush(struct task_struct *tsk)
+ virt_timer_expire();
+
+ steal = S390_lowcore.steal_timer;
+- avg_steal = S390_lowcore.avg_steal_timer / 2;
++ avg_steal = S390_lowcore.avg_steal_timer;
+ if ((s64) steal > 0) {
+ S390_lowcore.steal_timer = 0;
+ account_steal_time(cputime_to_nsecs(steal));
+ avg_steal += steal;
+ }
+- S390_lowcore.avg_steal_timer = avg_steal;
++ S390_lowcore.avg_steal_timer = avg_steal / 2;
+ }
+
+ static u64 vtime_delta(void)
+--
+2.43.0
+
--- /dev/null
+From e118c195f5e774d12b90841f7809b5315483569c Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Wed, 20 Mar 2024 08:57:17 +0200
+Subject: selftests: forwarding: Fix ping failure due to short timeout
+
+From: Ido Schimmel <idosch@nvidia.com>
+
+[ Upstream commit e4137851d4863a9bdc6aabc613bcb46c06d91e64 ]
+
+The tests send 100 pings in 0.1 second intervals and force a timeout of
+11 seconds, which is borderline (especially on debug kernels), resulting
+in random failures in netdev CI [1].
+
+Fix by increasing the timeout to 20 seconds. It should not prolong the
+test unless something is wrong, in which case the test will rightfully
+fail.
+
+[1]
+ # selftests: net/forwarding: vxlan_bridge_1d_port_8472_ipv6.sh
+ # INFO: Running tests with UDP port 8472
+ # TEST: ping: local->local [ OK ]
+ # TEST: ping: local->remote 1 [FAIL]
+ # Ping failed
+ [...]
+
+Fixes: b07e9957f220 ("selftests: forwarding: Add VxLAN tests with a VLAN-unaware bridge for IPv6")
+Fixes: 728b35259e28 ("selftests: forwarding: Add VxLAN tests with a VLAN-aware bridge for IPv6")
+Reported-by: Paolo Abeni <pabeni@redhat.com>
+Closes: https://lore.kernel.org/netdev/24a7051fdcd1f156c3704bca39e4b3c41dfc7c4b.camel@redhat.com/
+Signed-off-by: Ido Schimmel <idosch@nvidia.com>
+Reviewed-by: Hangbin Liu <liuhangbin@gmail.com>
+Reviewed-by: Jiri Pirko <jiri@nvidia.com>
+Link: https://lore.kernel.org/r/20240320065717.4145325-1-idosch@nvidia.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ .../testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh | 4 ++--
+ .../testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh | 4 ++--
+ 2 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
+index ac97f07e5ce82..bd3f7d492af2b 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1d_ipv6.sh
+@@ -354,7 +354,7 @@ __ping_ipv4()
+
+ # Send 100 packets and verify that at least 100 packets hit the rule,
+ # to overcome ARP noise.
+- PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip
++ PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip
+ check_err $? "Ping failed"
+
+ tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100
+@@ -410,7 +410,7 @@ __ping_ipv6()
+
+ # Send 100 packets and verify that at least 100 packets hit the rule,
+ # to overcome neighbor discovery noise.
+- PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip
++ PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip
+ check_err $? "Ping failed"
+
+ tc_check_at_least_x_packets "dev $rp1 egress" 101 100
+diff --git a/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh b/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
+index d880df89bc8bd..e83fde79f40d0 100755
+--- a/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
++++ b/tools/testing/selftests/net/forwarding/vxlan_bridge_1q_ipv6.sh
+@@ -457,7 +457,7 @@ __ping_ipv4()
+
+ # Send 100 packets and verify that at least 100 packets hit the rule,
+ # to overcome ARP noise.
+- PING_COUNT=100 PING_TIMEOUT=11 ping_do $dev $dst_ip
++ PING_COUNT=100 PING_TIMEOUT=20 ping_do $dev $dst_ip
+ check_err $? "Ping failed"
+
+ tc_check_at_least_x_packets "dev $rp1 egress" 101 10 100
+@@ -522,7 +522,7 @@ __ping_ipv6()
+
+ # Send 100 packets and verify that at least 100 packets hit the rule,
+ # to overcome neighbor discovery noise.
+- PING_COUNT=100 PING_TIMEOUT=11 ping6_do $dev $dst_ip
++ PING_COUNT=100 PING_TIMEOUT=20 ping6_do $dev $dst_ip
+ check_err $? "Ping failed"
+
+ tc_check_at_least_x_packets "dev $rp1 egress" 101 100
+--
+2.43.0
+
--- /dev/null
+From 3ac68cfd2f1cc598c0a1b54beb9b27d767957f6c Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 19 Feb 2024 17:04:57 +0200
+Subject: serial: 8250_exar: Don't remove GPIO device on suspend
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+[ Upstream commit 73b5a5c00be39e23b194bad10e1ea8bb73eee176 ]
+
+It seems a copy&paste mistake that the suspend callback removes the GPIO
+device. There is no counterpart to this action, meaning that once
+suspended there is no GPIO device available until a full unbind-bind
+cycle is performed. Remove the suspicious GPIO device removal in suspend.
+
+Fixes: d0aeaa83f0b0 ("serial: exar: split out the exar code from 8250_pci")
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Link: https://lore.kernel.org/r/20240219150627.2101198-2-andriy.shevchenko@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/tty/serial/8250/8250_exar.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+diff --git a/drivers/tty/serial/8250/8250_exar.c b/drivers/tty/serial/8250/8250_exar.c
+index dca1abe363248..55451ff846520 100644
+--- a/drivers/tty/serial/8250/8250_exar.c
++++ b/drivers/tty/serial/8250/8250_exar.c
+@@ -714,6 +714,7 @@ static void exar_pci_remove(struct pci_dev *pcidev)
+ for (i = 0; i < priv->nr; i++)
+ serial8250_unregister_port(priv->line[i]);
+
++ /* Ensure that every init quirk is properly torn down */
+ if (priv->board->exit)
+ priv->board->exit(pcidev);
+ }
+@@ -728,10 +729,6 @@ static int __maybe_unused exar_suspend(struct device *dev)
+ if (priv->line[i] >= 0)
+ serial8250_suspend_port(priv->line[i]);
+
+- /* Ensure that every init quirk is properly torn down */
+- if (priv->board->exit)
+- priv->board->exit(pcidev);
+-
+ return 0;
+ }
+
+--
+2.43.0
+
--- /dev/null
+From df3e4bd36ccdbd4a77496142336f95d9cf468922 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 18 Jan 2024 10:22:01 -0500
+Subject: serial: max310x: fix syntax error in IRQ error message
+
+From: Hugo Villeneuve <hvilleneuve@dimonoff.com>
+
+[ Upstream commit 8ede8c6f474255b2213cccd7997b993272a8e2f9 ]
+
+Replace g with q.
+
+Helpful when grepping through source code or logs for the
+"request" keyword.
+
+Fixes: f65444187a66 ("serial: New serial driver MAX310X")
+Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com>
+Signed-off-by: Hugo Villeneuve <hvilleneuve@dimonoff.com>
+Link: https://lore.kernel.org/r/20240118152213.2644269-6-hugo@hugovil.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/tty/serial/max310x.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/tty/serial/max310x.c b/drivers/tty/serial/max310x.c
+index 163a89f84c9c2..444f89eb2d4b7 100644
+--- a/drivers/tty/serial/max310x.c
++++ b/drivers/tty/serial/max310x.c
+@@ -1459,7 +1459,7 @@ static int max310x_probe(struct device *dev, const struct max310x_devtype *devty
+ if (!ret)
+ return 0;
+
+- dev_err(dev, "Unable to reguest IRQ %i\n", irq);
++ dev_err(dev, "Unable to request IRQ %i\n", irq);
+
+ out_uart:
+ for (i = 0; i < devtype->nr; i++) {
+--
+2.43.0
+
nfs-fix-panic-when-nfs4_ff_layout_prepare_ds-fails.patch
io_uring-net-correct-the-type-of-variable.patch
comedi-comedi_test-prevent-timers-rescheduling-during-deletion.patch
+remoteproc-stm32-use-correct-format-strings-on-64-bi.patch
+remoteproc-stm32-fix-incorrect-type-in-assignment-fo.patch
+remoteproc-stm32-fix-incorrect-type-assignment-retur.patch
+usb-phy-generic-get-the-vbus-supply.patch
+tty-vt-fix-20-vs-0x20-typo-in-escsiignore.patch
+serial-max310x-fix-syntax-error-in-irq-error-message.patch
+tty-serial-samsung-fix-tx_empty-to-return-tiocser_te.patch
+arm64-dts-broadcom-bcmbca-bcm4908-drop-invalid-switc.patch
+coresight-etm4x-set-skip_power_up-in-etm4_init_arch_.patch
+kconfig-fix-infinite-loop-when-expanding-a-macro-at-.patch
+hwtracing-hisi_ptt-move-type-check-to-the-beginning-.patch
+rtc-mt6397-select-irq_domain-instead-of-depending-on.patch
+serial-8250_exar-don-t-remove-gpio-device-on-suspend.patch
+staging-greybus-fix-get_channel_from_mode-failure-pa.patch
+usb-gadget-net2272-use-irqflags-in-the-call-to-net22.patch
+asoc-rockchip-i2s-tdm-fix-inaccurate-sampling-rates.patch
+nouveau-reset-the-bo-resource-bus-info-after-an-evic.patch
+tcp-fix-new_syn_recv-handling-in-inet_twsk_purge.patch
+rds-tcp-fix-use-after-free-of-net-in-reqsk_timer_han.patch
+octeontx2-af-use-matching-wake_up-api-variant-in-cgx.patch
+s390-vtime-fix-average-steal-time-calculation.patch
+net-sched-taprio-proper-tca_taprio_tc_entry_index-ch.patch
+soc-fsl-dpio-fix-kcalloc-argument-order.patch
+tcp-fix-refcnt-handling-in-__inet_hash_connect.patch
+hsr-fix-uninit-value-access-in-hsr_get_node.patch
+nvme-only-set-reserved_tags-in-nvme_alloc_io_tag_set.patch
+nvme-add-the-apple-shared-tag-workaround-to-nvme_all.patch
+nvme-fix-reconnection-fail-due-to-reserved-tag-alloc.patch
+net-mediatek-mtk_eth_soc-clear-mac_mcr_force_link-on.patch
+net-ethernet-mtk_eth_soc-fix-ppe-hanging-issue.patch
+packet-annotate-data-races-around-ignore_outgoing.patch
+net-veth-do-not-manipulate-gro-when-using-xdp.patch
+net-dsa-mt7530-prevent-possible-incorrect-xtal-frequ.patch
+drm-fix-drm_fixp2int_round-making-it-add-0.5.patch
+vdpa_sim-reset-must-not-run.patch
+vdpa-mlx5-allow-cvq-size-changes.patch
+wireguard-receive-annotate-data-race-around-receivin.patch
+rds-introduce-acquire-release-ordering-in-acquire-re.patch
+hsr-handle-failures-in-module-init.patch
+ipv4-raw-fix-sending-packets-from-raw-sockets-via-ip.patch
+net-phy-fix-phy_read_poll_timeout-argument-type-in-g.patch
+dm-integrity-fix-a-memory-leak-when-rechecking-the-d.patch
+net-bnx2x-prevent-access-to-a-freed-page-in-page_poo.patch
+octeontx2-af-recover-cpt-engine-when-it-gets-fault.patch
+octeontx2-af-add-mbox-for-cpt-lf-reset.patch
+octeontx2-af-optimize-cpt-pf-identification.patch
+octeontx2-af-add-mbox-to-return-cpt_af_flt_int-info.patch
+octeontx2-detect-the-mbox-up-or-down-message-via-reg.patch
+net-octeontx2-use-alloc_ordered_workqueue-to-create-.patch
+octeontx2-pf-use-default-max_active-works-instead-of.patch
+octeontx2-pf-send-up-messages-to-vf-only-when-vf-is-.patch
+octeontx2-af-use-separate-handlers-for-interrupts.patch
+netfilter-nft_set_pipapo-release-elements-in-clone-o.patch
+netfilter-nf_tables-do-not-compare-internal-table-fl.patch
+rcu-add-a-helper-to-report-consolidated-flavor-qs.patch
+net-report-rcu-qs-on-threaded-napi-repolling.patch
+bpf-report-rcu-qs-in-cpumap-kthread.patch
+net-dsa-mt7530-fix-link-local-frames-that-ingress-vl.patch
+net-dsa-mt7530-fix-handling-of-all-link-local-frames.patch
+spi-spi-mt65xx-fix-null-pointer-access-in-interrupt-.patch
+selftests-forwarding-fix-ping-failure-due-to-short-t.patch
+dm-address-indent-space-issues.patch
+dm-io-support-io-priority.patch
+dm-integrity-align-the-outgoing-bio-in-integrity_rec.patch
--- /dev/null
+From 531609bb5ba0904330ee05f9e09701aa843cc13b Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 9 Feb 2024 20:34:36 +0100
+Subject: soc: fsl: dpio: fix kcalloc() argument order
+
+From: Arnd Bergmann <arnd@arndb.de>
+
+[ Upstream commit 72ebb41b88f9d7c10c5e159e0507074af0a22fe2 ]
+
+A previous bugfix added a call to kcalloc(), which starting in gcc-14
+causes a harmless warning about the argument order:
+
+drivers/soc/fsl/dpio/dpio-service.c: In function 'dpaa2_io_service_enqueue_multiple_desc_fq':
+drivers/soc/fsl/dpio/dpio-service.c:526:29: error: 'kcalloc' sizes specified with 'sizeof' in the earlier argument and not in the later argument [-Werror=calloc-transposed-args]
+ 526 | ed = kcalloc(sizeof(struct qbman_eq_desc), 32, GFP_KERNEL);
+ | ^~~~~~
+drivers/soc/fsl/dpio/dpio-service.c:526:29: note: earlier argument should specify number of elements, later size of each element
+
+Since the two are only multiplied, the order does not change the
+behavior, so just fix it now to shut up the compiler warning.
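+
+For reference, the expected calling convention is kcalloc(n, size, flags):
+the element count comes first and the per-element size second.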
+
+Dmitry independently came up with the same fix.
+
+Fixes: 5c4a5999b245 ("soc: fsl: dpio: avoid stack usage warning")
+Reported-by: Dmitry Antipov <dmantipov@yandex.ru>
+Signed-off-by: Arnd Bergmann <arnd@arndb.de>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/soc/fsl/dpio/dpio-service.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/soc/fsl/dpio/dpio-service.c b/drivers/soc/fsl/dpio/dpio-service.c
+index 1d2b27e3ea63f..b811446e0fa55 100644
+--- a/drivers/soc/fsl/dpio/dpio-service.c
++++ b/drivers/soc/fsl/dpio/dpio-service.c
+@@ -523,7 +523,7 @@ int dpaa2_io_service_enqueue_multiple_desc_fq(struct dpaa2_io *d,
+ struct qbman_eq_desc *ed;
+ int i, ret;
+
+- ed = kcalloc(sizeof(struct qbman_eq_desc), 32, GFP_KERNEL);
++ ed = kcalloc(32, sizeof(struct qbman_eq_desc), GFP_KERNEL);
+ if (!ed)
+ return -ENOMEM;
+
+--
+2.43.0
+
--- /dev/null
+From 3801f86f5e51eb6b87f358c8eccccaa9ec3e1845 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 21 Mar 2024 15:08:57 +0800
+Subject: spi: spi-mt65xx: Fix NULL pointer access in interrupt handler
+
+From: Fei Shao <fshao@chromium.org>
+
+[ Upstream commit a20ad45008a7c82f1184dc6dee280096009ece55 ]
+
+The TX buffer in spi_transfer can be a NULL pointer, so the interrupt
+handler may end up writing to the invalid memory and cause crashes.
+
+Add a check to trans->tx_buf before using it.
+
+Fixes: 1ce24864bff4 ("spi: mediatek: Only do dma for 4-byte aligned buffers")
+Signed-off-by: Fei Shao <fshao@chromium.org>
+Reviewed-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com>
+Link: https://msgid.link/r/20240321070942.1587146-2-fshao@chromium.org
+Signed-off-by: Mark Brown <broonie@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/spi/spi-mt65xx.c | 22 ++++++++++++----------
+ 1 file changed, 12 insertions(+), 10 deletions(-)
+
+diff --git a/drivers/spi/spi-mt65xx.c b/drivers/spi/spi-mt65xx.c
+index 6e95efb50acbc..f9ec8742917a6 100644
+--- a/drivers/spi/spi-mt65xx.c
++++ b/drivers/spi/spi-mt65xx.c
+@@ -787,17 +787,19 @@ static irqreturn_t mtk_spi_interrupt(int irq, void *dev_id)
+ mdata->xfer_len = min(MTK_SPI_MAX_FIFO_SIZE, len);
+ mtk_spi_setup_packet(master);
+
+- cnt = mdata->xfer_len / 4;
+- iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
+- trans->tx_buf + mdata->num_xfered, cnt);
++ if (trans->tx_buf) {
++ cnt = mdata->xfer_len / 4;
++ iowrite32_rep(mdata->base + SPI_TX_DATA_REG,
++ trans->tx_buf + mdata->num_xfered, cnt);
+
+- remainder = mdata->xfer_len % 4;
+- if (remainder > 0) {
+- reg_val = 0;
+- memcpy(®_val,
+- trans->tx_buf + (cnt * 4) + mdata->num_xfered,
+- remainder);
+- writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++ remainder = mdata->xfer_len % 4;
++ if (remainder > 0) {
++ reg_val = 0;
++ memcpy(®_val,
++ trans->tx_buf + (cnt * 4) + mdata->num_xfered,
++ remainder);
++ writel(reg_val, mdata->base + SPI_TX_DATA_REG);
++ }
+ }
+
+ mtk_spi_enable_transfer(master);
+--
+2.43.0
+
--- /dev/null
+From 4ce29bd330a101034312c2281c71eeb11cf011d7 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 4 Mar 2024 10:04:48 +0300
+Subject: staging: greybus: fix get_channel_from_mode() failure path
+
+From: Dan Carpenter <dan.carpenter@linaro.org>
+
+[ Upstream commit 34164202a5827f60a203ca9acaf2d9f7d432aac8 ]
+
+The get_channel_from_mode() function is supposed to return the channel
+which matches the mode. But it has a bug where if it doesn't find a
+matching channel then it returns the last channel. It should return
+NULL instead.
+
+Also remove an unnecessary NULL check on "channel".
+
+Fixes: 2870b52bae4c ("greybus: lights: add lights implementation")
+Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
+Reviewed-by: Rui Miguel Silva <rmfrfs@gmail.com>
+Reviewed-by: Alex Elder <elder@linaro.org>
+Link: https://lore.kernel.org/r/379c0cb4-39e0-4293-8a18-c7b1298e5420@moroto.mountain
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/staging/greybus/light.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/staging/greybus/light.c b/drivers/staging/greybus/light.c
+index 87d36948c6106..c6bd86a5335ab 100644
+--- a/drivers/staging/greybus/light.c
++++ b/drivers/staging/greybus/light.c
+@@ -100,15 +100,15 @@ static struct led_classdev *get_channel_cdev(struct gb_channel *channel)
+ static struct gb_channel *get_channel_from_mode(struct gb_light *light,
+ u32 mode)
+ {
+- struct gb_channel *channel = NULL;
++ struct gb_channel *channel;
+ int i;
+
+ for (i = 0; i < light->channels_count; i++) {
+ channel = &light->channels[i];
+- if (channel && channel->mode == mode)
+- break;
++ if (channel->mode == mode)
++ return channel;
+ }
+- return channel;
++ return NULL;
+ }
+
+ static int __gb_lights_flash_intensity_set(struct gb_channel *channel,
+--
+2.43.0
+
--- /dev/null
+From ee7c7bb49ec072232a979e8d9831c4d9ad4470cc Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 8 Mar 2024 12:01:21 -0800
+Subject: tcp: Fix NEW_SYN_RECV handling in inet_twsk_purge()
+
+From: Eric Dumazet <edumazet@google.com>
+
+[ Upstream commit 1c4e97dd2d3c9a3e84f7e26346aa39bc426d3249 ]
+
+inet_twsk_purge() uses rcu to find TIME_WAIT and NEW_SYN_RECV
+objects to purge.
+
+These objects use SLAB_TYPESAFE_BY_RCU semantics and need special
+care. We need to use refcount_inc_not_zero(&sk->sk_refcnt).
+
+Reuse the existing correct logic I wrote for TIME_WAIT,
+because both structures have common locations for
+sk_state, sk_family, and netns pointer.
+
+If, after the refcount_inc_not_zero(), the object fields no longer match
+the keys, use sock_gen_put(sk) to release the refcount.
+
+Then we can call inet_twsk_deschedule_put() for TIME_WAIT,
+inet_csk_reqsk_queue_drop_and_put() for NEW_SYN_RECV sockets,
+with BH disabled.
+
+Then we need to restart the loop because we had to drop rcu_read_lock().
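+
+This follows the generic SLAB_TYPESAFE_BY_RCU lookup pattern, sketched
+below for illustration (lookup() and keys_match() are placeholders):
+
+	rcu_read_lock();
+	sk = lookup(key);
+	if (sk && !refcount_inc_not_zero(&sk->sk_refcnt))
+		sk = NULL;	/* object is about to be freed, skip it */
+	if (sk && !keys_match(sk, key)) {
+		sock_gen_put(sk);	/* slot was recycled, drop and retry */
+		sk = NULL;
+	}
+	rcu_read_unlock();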
+
+Fixes: 740ea3c4a0b2 ("tcp: Clean up kernel listener's reqsk in inet_twsk_purge()")
+Link: https://lore.kernel.org/netdev/CANn89iLvFuuihCtt9PME2uS1WJATnf5fKjDToa1WzVnRzHnPfg@mail.gmail.com/T/#u
+Signed-off-by: Eric Dumazet <edumazet@google.com>
+Link: https://lore.kernel.org/r/20240308200122.64357-2-kuniyu@amazon.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/ipv4/inet_timewait_sock.c | 41 ++++++++++++++++-------------------
+ 1 file changed, 19 insertions(+), 22 deletions(-)
+
+diff --git a/net/ipv4/inet_timewait_sock.c b/net/ipv4/inet_timewait_sock.c
+index 1d77d992e6e77..340a8f0c29800 100644
+--- a/net/ipv4/inet_timewait_sock.c
++++ b/net/ipv4/inet_timewait_sock.c
+@@ -281,12 +281,12 @@ void __inet_twsk_schedule(struct inet_timewait_sock *tw, int timeo, bool rearm)
+ }
+ EXPORT_SYMBOL_GPL(__inet_twsk_schedule);
+
++/* Remove all non full sockets (TIME_WAIT and NEW_SYN_RECV) for dead netns */
+ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ {
+- struct inet_timewait_sock *tw;
+- struct sock *sk;
+ struct hlist_nulls_node *node;
+ unsigned int slot;
++ struct sock *sk;
+
+ for (slot = 0; slot <= hashinfo->ehash_mask; slot++) {
+ struct inet_ehash_bucket *head = &hashinfo->ehash[slot];
+@@ -295,38 +295,35 @@ void inet_twsk_purge(struct inet_hashinfo *hashinfo, int family)
+ rcu_read_lock();
+ restart:
+ sk_nulls_for_each_rcu(sk, node, &head->chain) {
+- if (sk->sk_state != TCP_TIME_WAIT) {
+- /* A kernel listener socket might not hold refcnt for net,
+- * so reqsk_timer_handler() could be fired after net is
+- * freed. Userspace listener and reqsk never exist here.
+- */
+- if (unlikely(sk->sk_state == TCP_NEW_SYN_RECV &&
+- hashinfo->pernet)) {
+- struct request_sock *req = inet_reqsk(sk);
+-
+- inet_csk_reqsk_queue_drop_and_put(req->rsk_listener, req);
+- }
++ int state = inet_sk_state_load(sk);
+
++ if ((1 << state) & ~(TCPF_TIME_WAIT |
++ TCPF_NEW_SYN_RECV))
+ continue;
+- }
+
+- tw = inet_twsk(sk);
+- if ((tw->tw_family != family) ||
+- refcount_read(&twsk_net(tw)->ns.count))
++ if (sk->sk_family != family ||
++ refcount_read(&sock_net(sk)->ns.count))
+ continue;
+
+- if (unlikely(!refcount_inc_not_zero(&tw->tw_refcnt)))
++ if (unlikely(!refcount_inc_not_zero(&sk->sk_refcnt)))
+ continue;
+
+- if (unlikely((tw->tw_family != family) ||
+- refcount_read(&twsk_net(tw)->ns.count))) {
+- inet_twsk_put(tw);
++ if (unlikely(sk->sk_family != family ||
++ refcount_read(&sock_net(sk)->ns.count))) {
++ sock_gen_put(sk);
+ goto restart;
+ }
+
+ rcu_read_unlock();
+ local_bh_disable();
+- inet_twsk_deschedule_put(tw);
++ if (state == TCP_TIME_WAIT) {
++ inet_twsk_deschedule_put(inet_twsk(sk));
++ } else {
++ struct request_sock *req = inet_reqsk(sk);
++
++ inet_csk_reqsk_queue_drop_and_put(req->rsk_listener,
++ req);
++ }
+ local_bh_enable();
+ goto restart_rcu;
+ }
+--
+2.43.0
+
--- /dev/null
+From f7c10388508411c141066413224bba86f9b10a61 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 8 Mar 2024 12:16:23 -0800
+Subject: tcp: Fix refcnt handling in __inet_hash_connect().
+
+From: Kuniyuki Iwashima <kuniyu@amazon.com>
+
+[ Upstream commit 04d9d1fc428ac9f581d55118d67e0cb546701feb ]
+
+syzbot reported a warning in sk_nulls_del_node_init_rcu().
+
+The commit 66b60b0c8c4a ("dccp/tcp: Unhash sk from ehash for tb2 alloc
+failure after check_estalblished().") tried to fix an issue where an
+unconnected socket occupies an ehash entry when bhash2 allocation fails.
+
+In such a case, we need to revert changes done by check_established(),
+which does not hold refcnt when inserting socket into ehash.
+
+So, to revert the change, we need to __sk_nulls_add_node_rcu() instead
+of sk_nulls_add_node_rcu().
+
+Otherwise, sock_put() will cause refcnt underflow and leak the socket.
+
+[0]:
+WARNING: CPU: 0 PID: 23948 at include/net/sock.h:799 sk_nulls_del_node_init_rcu+0x166/0x1a0 include/net/sock.h:799
+Modules linked in:
+CPU: 0 PID: 23948 Comm: syz-executor.2 Not tainted 6.8.0-rc6-syzkaller-00159-gc055fc00c07b #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
+RIP: 0010:sk_nulls_del_node_init_rcu+0x166/0x1a0 include/net/sock.h:799
+Code: e8 7f 71 c6 f7 83 fb 02 7c 25 e8 35 6d c6 f7 4d 85 f6 0f 95 c0 5b 41 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc e8 1b 6d c6 f7 90 <0f> 0b 90 eb b2 e8 10 6d c6 f7 4c 89 e7 be 04 00 00 00 e8 63 e7 d2
+RSP: 0018:ffffc900032d7848 EFLAGS: 00010246
+RAX: ffffffff89cd0035 RBX: 0000000000000001 RCX: 0000000000040000
+RDX: ffffc90004de1000 RSI: 000000000003ffff RDI: 0000000000040000
+RBP: 1ffff1100439ac26 R08: ffffffff89ccffe3 R09: 1ffff1100439ac28
+R10: dffffc0000000000 R11: ffffed100439ac29 R12: ffff888021cd6140
+R13: dffffc0000000000 R14: ffff88802a9bf5c0 R15: ffff888021cd6130
+FS: 00007f3b823f16c0(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
+CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+CR2: 00007f3b823f0ff8 CR3: 000000004674a000 CR4: 00000000003506f0
+DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+Call Trace:
+ <TASK>
+ __inet_hash_connect+0x140f/0x20b0 net/ipv4/inet_hashtables.c:1139
+ dccp_v6_connect+0xcb9/0x1480 net/dccp/ipv6.c:956
+ __inet_stream_connect+0x262/0xf30 net/ipv4/af_inet.c:678
+ inet_stream_connect+0x65/0xa0 net/ipv4/af_inet.c:749
+ __sys_connect_file net/socket.c:2048 [inline]
+ __sys_connect+0x2df/0x310 net/socket.c:2065
+ __do_sys_connect net/socket.c:2075 [inline]
+ __se_sys_connect net/socket.c:2072 [inline]
+ __x64_sys_connect+0x7a/0x90 net/socket.c:2072
+ do_syscall_64+0xf9/0x240
+ entry_SYSCALL_64_after_hwframe+0x6f/0x77
+RIP: 0033:0x7f3b8167dda9
+Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
+RSP: 002b:00007f3b823f10c8 EFLAGS: 00000246 ORIG_RAX: 000000000000002a
+RAX: ffffffffffffffda RBX: 00007f3b817abf80 RCX: 00007f3b8167dda9
+RDX: 000000000000001c RSI: 0000000020000040 RDI: 0000000000000003
+RBP: 00007f3b823f1120 R08: 0000000000000000 R09: 0000000000000000
+R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000001
+R13: 000000000000000b R14: 00007f3b817abf80 R15: 00007ffd3beb57b8
+ </TASK>
+
+Reported-by: syzbot+12c506c1aae251e70449@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=12c506c1aae251e70449
+Fixes: 66b60b0c8c4a ("dccp/tcp: Unhash sk from ehash for tb2 alloc failure after check_estalblished().")
+Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
+Reviewed-by: Eric Dumazet <edumazet@google.com>
+Link: https://lore.kernel.org/r/20240308201623.65448-1-kuniyu@amazon.com
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ net/ipv4/inet_hashtables.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
+index 56776e1b1de52..0ad25e6783ac7 100644
+--- a/net/ipv4/inet_hashtables.c
++++ b/net/ipv4/inet_hashtables.c
+@@ -1117,7 +1117,7 @@ int __inet_hash_connect(struct inet_timewait_death_row *death_row,
+ sock_prot_inuse_add(net, sk->sk_prot, -1);
+
+ spin_lock(lock);
+- sk_nulls_del_node_init_rcu(sk);
++ __sk_nulls_del_node_init_rcu(sk);
+ spin_unlock(lock);
+
+ sk->sk_hash = 0;
+--
+2.43.0
+
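+To illustrate the refcounting rule this fix restores, here is a minimal
+userspace sketch, not kernel code (the toy_* names are invented): the
+hashing path takes no reference, so the revert path must use the unhash
+variant that does not drop one.
+
+  #include <assert.h>
+  #include <stdio.h>
+
+  /* Toy socket with a reference count; all names here are invented. */
+  struct toy_sock { int refcnt; int hashed; };
+
+  /* Models check_established(): hashes the socket WITHOUT taking a ref. */
+  static void toy_hash_no_ref(struct toy_sock *sk) { sk->hashed = 1; }
+
+  /* Models __sk_nulls_del_node_init_rcu(): unhash only, refcnt untouched. */
+  static void toy_unhash(struct toy_sock *sk) { sk->hashed = 0; }
+
+  /* Models sk_nulls_del_node_init_rcu(): unhash AND drop one reference. */
+  static void toy_unhash_and_put(struct toy_sock *sk)
+  {
+          sk->hashed = 0;
+          sk->refcnt--;
+  }
+
+  int main(void)
+  {
+          struct toy_sock sk = { .refcnt = 1, .hashed = 0 };
+
+          toy_hash_no_ref(&sk);    /* insert path took no extra reference    */
+          toy_unhash(&sk);         /* correct revert: refcount stays balanced */
+          assert(sk.refcnt == 1);
+
+          toy_hash_no_ref(&sk);
+          toy_unhash_and_put(&sk); /* buggy revert: drops a ref never taken  */
+          printf("refcnt now %d: the caller's last reference was consumed\n",
+                 sk.refcnt);
+          return 0;
+  }
+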
--- /dev/null
+From 2aa433023598679f3078b1823132abe6873ca592 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 19 Jan 2024 10:45:08 +0000
+Subject: tty: serial: samsung: fix tx_empty() to return TIOCSER_TEMT
+
+From: Tudor Ambarus <tudor.ambarus@linaro.org>
+
+[ Upstream commit 314c2b399288f0058a8c5b6683292cbde5f1531b ]
+
+The core expects tx_empty() to return either TIOCSER_TEMT when the tx is
+empty or 0 otherwise. s3c24xx_serial_txempty_nofifo() might return
+0x4, and at least uart_get_lsr_info() tries to clear exactly
+TIOCSER_TEMT (0x01). Fix tx_empty() to return TIOCSER_TEMT.
+
+Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
+Signed-off-by: Tudor Ambarus <tudor.ambarus@linaro.org>
+Reviewed-by: Sam Protsenko <semen.protsenko@linaro.org>
+Link: https://lore.kernel.org/r/20240119104526.1221243-2-tudor.ambarus@linaro.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/tty/serial/samsung_tty.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/tty/serial/samsung_tty.c b/drivers/tty/serial/samsung_tty.c
+index aa2c51b84116f..589daed19e625 100644
+--- a/drivers/tty/serial/samsung_tty.c
++++ b/drivers/tty/serial/samsung_tty.c
+@@ -996,11 +996,10 @@ static unsigned int s3c24xx_serial_tx_empty(struct uart_port *port)
+ if ((ufstat & info->tx_fifomask) != 0 ||
+ (ufstat & info->tx_fifofull))
+ return 0;
+-
+- return 1;
++ return TIOCSER_TEMT;
+ }
+
+- return s3c24xx_serial_txempty_nofifo(port);
++ return s3c24xx_serial_txempty_nofifo(port) ? TIOCSER_TEMT : 0;
+ }
+
+ /* no modem control lines */
+--
+2.43.0
+
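+As a small userspace model of why the exact TIOCSER_TEMT value matters
+(the helper mimics the masking done in uart_get_lsr_info(); the toy_
+names are invented):
+
+  #include <stdio.h>
+
+  #define TIOCSER_TEMT 0x01   /* transmitter empty, as defined by the UAPI */
+
+  /* Models the core: it can only clear the TIOCSER_TEMT bit. */
+  static unsigned int toy_get_lsr(unsigned int tx_empty_ret, int tx_pending)
+  {
+          unsigned int result = tx_empty_ret;
+
+          if (tx_pending)
+                  result &= ~TIOCSER_TEMT;
+          return result;
+  }
+
+  int main(void)
+  {
+          /* Old driver behaviour: returns 0x4; the core cannot mask it out. */
+          printf("buggy: 0x%x\n", toy_get_lsr(0x4, 1));          /* still 0x4 */
+          /* Fixed driver behaviour: returns TIOCSER_TEMT or 0. */
+          printf("fixed: 0x%x\n", toy_get_lsr(TIOCSER_TEMT, 1)); /* 0x0       */
+          return 0;
+  }
+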
--- /dev/null
+From 774be584f7c3d10f88357d2a803fb45bafae3687 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 22 Jan 2024 12:03:17 +0100
+Subject: tty: vt: fix 20 vs 0x20 typo in EScsiignore
+
+From: Jiri Slaby (SUSE) <jirislaby@kernel.org>
+
+[ Upstream commit 0e6a92f67c8a94707f7bb27ac29e2bdf3e7c167d ]
+
+The if (c >= 20 && c <= 0x3f) test added in commit 7a99565f8732 is
+wrong: 20 is DC4 in ASCII, and it makes no sense to use that as the
+bottom limit. Instead, it should be 0x20, as in the other test added by
+that commit. This is not expected to change behaviour, because the
+interesting ASCII codes in the 20-0x20 range are already handled well
+before this if.
+
+So, for the sake of correctness, change it to 0x20 (which is SPACE).
+
+Signed-off-by: "Jiri Slaby (SUSE)" <jirislaby@kernel.org>
+Fixes: 7a99565f8732 ("vt: ignore csi sequences with intermediate characters.")
+Cc: Martin Hostettler <textshell@uchuujin.de>
+Link: https://lore.kernel.org/all/ZaP45QY2WEsDqoxg@neutronstar.dyndns.org/
+Tested-by: Helge Deller <deller@gmx.de> # parisc STI console
+Link: https://lore.kernel.org/r/20240122110401.7289-4-jirislaby@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/tty/vt/vt.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/tty/vt/vt.c b/drivers/tty/vt/vt.c
+index 981d2bfcf9a5b..9e30ef2b6eb8c 100644
+--- a/drivers/tty/vt/vt.c
++++ b/drivers/tty/vt/vt.c
+@@ -2515,7 +2515,7 @@ static void do_con_trol(struct tty_struct *tty, struct vc_data *vc, int c)
+ }
+ return;
+ case EScsiignore:
+- if (c >= 20 && c <= 0x3f)
++ if (c >= 0x20 && c <= 0x3f)
+ return;
+ vc->vc_state = ESnormal;
+ return;
+--
+2.43.0
+
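+A minimal sketch of the corrected range check, assuming the ECMA-48 byte
+classes (0x20-0x2f intermediate, 0x30-0x3f parameter, 0x40-0x7e final);
+the function name is invented:
+
+  #include <stdio.h>
+
+  /* Returns 1 while the byte should keep the parser in the "ignore CSI"
+   * state: 0x20 (SPACE) .. 0x3f ('?') are intermediate/parameter bytes. */
+  static int toy_csi_ignore_continues(int c)
+  {
+          return c >= 0x20 && c <= 0x3f;
+  }
+
+  int main(void)
+  {
+          printf("0x14 (DC4): %d\n", toy_csi_ignore_continues(0x14)); /* 0: control code */
+          printf("0x20 (SP):  %d\n", toy_csi_ignore_continues(0x20)); /* 1 */
+          printf("0x3f ('?'): %d\n", toy_csi_ignore_continues(0x3f)); /* 1 */
+          printf("0x40 ('@'): %d\n", toy_csi_ignore_continues(0x40)); /* 0: final byte */
+          return 0;
+  }
+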
--- /dev/null
+From a84967bb69a77277cc73058db11b5628974d7059 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 7 Mar 2024 18:17:34 +0000
+Subject: usb: gadget: net2272: Use irqflags in the call to net2272_probe_fin
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Colin Ian King <colin.i.king@gmail.com>
+
+[ Upstream commit 600556809f04eb3bbccd05218215dcd7b285a9a9 ]
+
+Currently the variable irqflags is being set but never used; it appears
+it should be passed in the call to net2272_probe_fin() rather than the
+hard-coded IRQF_TRIGGER_LOW. Kudos to Uwe Kleine-König for suggesting
+the fix.
+
+Cleans up clang scan build warning:
+drivers/usb/gadget/udc/net2272.c:2610:15: warning: variable 'irqflags'
+set but not used [-Wunused-but-set-variable]
+
+Fixes: ceb80363b2ec ("USB: net2272: driver for PLX NET2272 USB device controller")
+Signed-off-by: Colin Ian King <colin.i.king@gmail.com>
+Acked-by: Alan Stern <stern@rowland.harvard.edu>
+Link: https://lore.kernel.org/r/20240307181734.2034407-1-colin.i.king@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/usb/gadget/udc/net2272.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/usb/gadget/udc/net2272.c b/drivers/usb/gadget/udc/net2272.c
+index 538c1b9a28835..c42d5aa99e81a 100644
+--- a/drivers/usb/gadget/udc/net2272.c
++++ b/drivers/usb/gadget/udc/net2272.c
+@@ -2650,7 +2650,7 @@ net2272_plat_probe(struct platform_device *pdev)
+ goto err_req;
+ }
+
+- ret = net2272_probe_fin(dev, IRQF_TRIGGER_LOW);
++ ret = net2272_probe_fin(dev, irqflags);
+ if (ret)
+ goto err_io;
+
+--
+2.43.0
+
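+As a rough sketch of the underlying pattern, with the kernel types mocked
+(all names below are invented): the probe path derives the trigger flags
+from the board description and should pass that value down rather than a
+hard-coded constant.
+
+  #include <stdio.h>
+
+  /* Mocked flag values; the real ones live in <linux/interrupt.h>. */
+  #define TOY_IRQF_TRIGGER_HIGH 0x4
+  #define TOY_IRQF_TRIGGER_LOW  0x8
+
+  struct toy_resource { unsigned long flags; };
+
+  static int toy_probe_fin(unsigned long irqflags)
+  {
+          printf("requesting irq with flags 0x%lx\n", irqflags);
+          return 0;
+  }
+
+  static int toy_plat_probe(const struct toy_resource *irq_res)
+  {
+          /* Flags computed from the platform description... */
+          unsigned long irqflags = irq_res->flags;
+
+          /* ...must actually be used, not replaced by a constant. */
+          return toy_probe_fin(irqflags);
+  }
+
+  int main(void)
+  {
+          struct toy_resource res = { .flags = TOY_IRQF_TRIGGER_HIGH };
+
+          return toy_plat_probe(&res); /* a board with an active-high interrupt */
+  }
+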
--- /dev/null
+From fbaa8a5affaf4ac744bb65c6139ff24478f91cd2 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Tue, 23 Jan 2024 17:51:09 -0500
+Subject: usb: phy: generic: Get the vbus supply
+
+From: Sean Anderson <sean.anderson@seco.com>
+
+[ Upstream commit 75fd6485cccef269ac9eb3b71cf56753341195ef ]
+
+While support for working with a vbus was added, the regulator was never
+actually gotten (despite what was documented). Fix this by actually
+getting the supply from the device tree.
+
+Fixes: 7acc9973e3c4 ("usb: phy: generic: add vbus support")
+Signed-off-by: Sean Anderson <sean.anderson@seco.com>
+Link: https://lore.kernel.org/r/20240123225111.1629405-3-sean.anderson@seco.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/usb/phy/phy-generic.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/drivers/usb/phy/phy-generic.c b/drivers/usb/phy/phy-generic.c
+index 3dc5c04e7cbf9..953df04b40d40 100644
+--- a/drivers/usb/phy/phy-generic.c
++++ b/drivers/usb/phy/phy-generic.c
+@@ -265,6 +265,13 @@ int usb_phy_gen_create_phy(struct device *dev, struct usb_phy_generic *nop)
+ return -EPROBE_DEFER;
+ }
+
++ nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
++ if (PTR_ERR(nop->vbus_draw) == -ENODEV)
++ nop->vbus_draw = NULL;
++ if (IS_ERR(nop->vbus_draw))
++ return dev_err_probe(dev, PTR_ERR(nop->vbus_draw),
++ "could not get vbus regulator\n");
++
+ nop->vbus_draw = devm_regulator_get_exclusive(dev, "vbus");
+ if (PTR_ERR(nop->vbus_draw) == -ENODEV)
+ nop->vbus_draw = NULL;
+--
+2.43.0
+
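+A sketch of the optional-supply pattern the change follows, with
+ERR_PTR()/PTR_ERR() mimicked in userspace (the toy_* names are invented):
+-ENODEV means the supply is simply not described and is treated as
+absent, while any other error fails the probe.
+
+  #include <errno.h>
+  #include <stdint.h>
+  #include <stdio.h>
+
+  /* Minimal userspace stand-ins for the kernel's ERR_PTR helpers. */
+  static void *TOY_ERR_PTR(long err) { return (void *)err; }
+  static long TOY_PTR_ERR(const void *p) { return (long)(intptr_t)p; }
+  static int TOY_IS_ERR(const void *p) { return (uintptr_t)p >= (uintptr_t)-4095; }
+
+  struct toy_regulator { const char *name; };
+
+  /* Pretend lookup: 0 means found, otherwise return the simulated error. */
+  static void *toy_regulator_get(long simulated)
+  {
+          static struct toy_regulator vbus = { "vbus" };
+
+          return simulated ? TOY_ERR_PTR(simulated) : (void *)&vbus;
+  }
+
+  static int toy_probe(long simulated)
+  {
+          struct toy_regulator *vbus = toy_regulator_get(simulated);
+
+          if (TOY_IS_ERR(vbus) && TOY_PTR_ERR(vbus) == -ENODEV)
+                  vbus = NULL;                 /* supply not wired up: optional */
+          if (TOY_IS_ERR(vbus))
+                  return (int)TOY_PTR_ERR(vbus); /* real error: fail the probe  */
+
+          printf("probe ok, vbus %s\n", vbus ? "present" : "absent");
+          return 0;
+  }
+
+  int main(void)
+  {
+          toy_probe(0);                       /* supply found                  */
+          toy_probe(-ENODEV);                 /* supply absent, still succeeds */
+          return toy_probe(-EAGAIN) ? 1 : 0;  /* other failures propagate      */
+  }
+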
--- /dev/null
+From 19454a99576c953f4696125ed84d94fdd1261f56 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 16 Feb 2024 09:25:02 -0500
+Subject: vdpa/mlx5: Allow CVQ size changes
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Jonah Palmer <jonah.palmer@oracle.com>
+
+[ Upstream commit 749a4016839270163efc36ecddddd01de491a16b ]
+
+The mlx5 vDPA driver was not updating its control virtqueue size in
+set_vq_num and instead always initialized it to MLX5_CVQ_MAX_ENT (16)
+in setup_cvq_vring.
+
+QEMU would try to set the size to 64 by default; however, because the
+CVQ size was always initialized to 16, an error would be thrown when
+sending more than 16 control messages (as used-ring entry 17 is
+initialized to 0).
+For example, starting a guest with x-svq=on and then executing the
+following command would produce the error below:
+
+ # for i in {1..20}; do ifconfig eth0 hw ether XX:xx:XX:xx:XX:XX; done
+
+ qemu-system-x86_64: Insufficient written data (0)
+ [ 435.331223] virtio_net virtio0: Failed to set mac address by vq command.
+ SIOCSIFHWADDR: Invalid argument
+
+Acked-by: Dragos Tatulea <dtatulea@nvidia.com>
+Acked-by: Eugenio Pérez <eperezma@redhat.com>
+Signed-off-by: Jonah Palmer <jonah.palmer@oracle.com>
+Message-Id: <20240216142502.78095-1-jonah.palmer@oracle.com>
+Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
+Tested-by: Lei Yang <leiyang@redhat.com>
+Fixes: 5262912ef3cf ("vdpa/mlx5: Add support for control VQ and MAC setting")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/vdpa/mlx5/net/mlx5_vnet.c | 13 +++++++++----
+ 1 file changed, 9 insertions(+), 4 deletions(-)
+
+diff --git a/drivers/vdpa/mlx5/net/mlx5_vnet.c b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+index 2b7e796c48897..74d295312466f 100644
+--- a/drivers/vdpa/mlx5/net/mlx5_vnet.c
++++ b/drivers/vdpa/mlx5/net/mlx5_vnet.c
+@@ -185,8 +185,6 @@ static void teardown_driver(struct mlx5_vdpa_net *ndev);
+
+ static bool mlx5_vdpa_debug;
+
+-#define MLX5_CVQ_MAX_ENT 16
+-
+ #define MLX5_LOG_VIO_FLAG(_feature) \
+ do { \
+ if (features & BIT_ULL(_feature)) \
+@@ -1980,9 +1978,16 @@ static void mlx5_vdpa_set_vq_num(struct vdpa_device *vdev, u16 idx, u32 num)
+ struct mlx5_vdpa_net *ndev = to_mlx5_vdpa_ndev(mvdev);
+ struct mlx5_vdpa_virtqueue *mvq;
+
+- if (!is_index_valid(mvdev, idx) || is_ctrl_vq_idx(mvdev, idx))
++ if (!is_index_valid(mvdev, idx))
+ return;
+
++ if (is_ctrl_vq_idx(mvdev, idx)) {
++ struct mlx5_control_vq *cvq = &mvdev->cvq;
++
++ cvq->vring.vring.num = num;
++ return;
++ }
++
+ mvq = &ndev->vqs[idx];
+ mvq->num_ent = num;
+ }
+@@ -2512,7 +2517,7 @@ static int setup_cvq_vring(struct mlx5_vdpa_dev *mvdev)
+ u16 idx = cvq->vring.last_avail_idx;
+
+ err = vringh_init_iotlb(&cvq->vring, mvdev->actual_features,
+- MLX5_CVQ_MAX_ENT, false,
++ cvq->vring.vring.num, false,
+ (struct vring_desc *)(uintptr_t)cvq->desc_addr,
+ (struct vring_avail *)(uintptr_t)cvq->driver_addr,
+ (struct vring_used *)(uintptr_t)cvq->device_addr);
+--
+2.43.0
+
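+The shape of the fix, as a userspace sketch (the toy_* names are
+invented): remember the size negotiated through set_vq_num and use it
+when the control vring is actually initialized, instead of a
+compile-time constant.
+
+  #include <stdio.h>
+
+  #define TOY_CVQ_DEFAULT_ENT 16   /* stand-in for the old MLX5_CVQ_MAX_ENT */
+
+  struct toy_cvq { unsigned int num; };
+
+  /* set_vq_num analogue: remember what the guest/QEMU negotiated. */
+  static void toy_set_vq_num(struct toy_cvq *cvq, unsigned int num)
+  {
+          cvq->num = num;
+  }
+
+  /* setup analogue: size the ring with the negotiated value, not a constant. */
+  static void toy_setup_cvq(const struct toy_cvq *cvq)
+  {
+          printf("initializing control vring with %u entries\n", cvq->num);
+  }
+
+  int main(void)
+  {
+          struct toy_cvq cvq = { .num = TOY_CVQ_DEFAULT_ENT };
+
+          toy_set_vq_num(&cvq, 64); /* e.g. QEMU's default CVQ size */
+          toy_setup_cvq(&cvq);      /* 64, so the 17th message no longer fails */
+          return 0;
+  }
+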
--- /dev/null
+From f3a17153b46ee70eada6dd86ca35720ac0b84994 Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Fri, 9 Feb 2024 14:30:07 -0800
+Subject: vdpa_sim: reset must not run
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Steve Sistare <steven.sistare@oracle.com>
+
+[ Upstream commit 9588e7fc511f9c55b9835f14916e90ab940061b7 ]
+
+vdpasim_do_reset sets running to true, which is wrong, as it allows
+vdpasim_kick_vq to post work requests before the device has been
+configured. To fix, do not set running until VIRTIO_CONFIG_S_DRIVER_OK
+is set.
+
+Fixes: 0c89e2a3a9d0 ("vdpa_sim: Implement suspend vdpa op")
+Signed-off-by: Steve Sistare <steven.sistare@oracle.com>
+Reviewed-by: Eugenio Pérez <eperezma@redhat.com>
+Acked-by: Jason Wang <jasowang@redhat.com>
+Message-Id: <1707517807-137331-1-git-send-email-steven.sistare@oracle.com>
+Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/vdpa/vdpa_sim/vdpa_sim.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/vdpa/vdpa_sim/vdpa_sim.c b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+index 61bde476cf9c8..e7fc25bfdd237 100644
+--- a/drivers/vdpa/vdpa_sim/vdpa_sim.c
++++ b/drivers/vdpa/vdpa_sim/vdpa_sim.c
+@@ -120,7 +120,7 @@ static void vdpasim_do_reset(struct vdpasim *vdpasim)
+ for (i = 0; i < vdpasim->dev_attr.nas; i++)
+ vhost_iotlb_reset(&vdpasim->iommu[i]);
+
+- vdpasim->running = true;
++ vdpasim->running = false;
+ spin_unlock(&vdpasim->iommu_lock);
+
+ vdpasim->features = 0;
+@@ -513,6 +513,7 @@ static void vdpasim_set_status(struct vdpa_device *vdpa, u8 status)
+
+ spin_lock(&vdpasim->lock);
+ vdpasim->status = status;
++ vdpasim->running = (status & VIRTIO_CONFIG_S_DRIVER_OK) != 0;
+ spin_unlock(&vdpasim->lock);
+ }
+
+--
+2.43.0
+
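+A compact userspace model of the gating logic (the toy_* names are
+invented; only the status-bit value matches VIRTIO_CONFIG_S_DRIVER_OK):
+kicks are ignored until the driver signals DRIVER_OK.
+
+  #include <stdbool.h>
+  #include <stdio.h>
+
+  #define TOY_CONFIG_S_DRIVER_OK 4  /* same value as VIRTIO_CONFIG_S_DRIVER_OK */
+
+  struct toy_dev { unsigned char status; bool running; };
+
+  static void toy_do_reset(struct toy_dev *d)
+  {
+          d->status = 0;
+          d->running = false;       /* fixed: not runnable right after reset */
+  }
+
+  static void toy_set_status(struct toy_dev *d, unsigned char status)
+  {
+          d->status = status;
+          d->running = (status & TOY_CONFIG_S_DRIVER_OK) != 0;
+  }
+
+  static void toy_kick_vq(const struct toy_dev *d)
+  {
+          printf(d->running ? "kick: queueing work\n"
+                            : "kick: ignored, device not configured\n");
+  }
+
+  int main(void)
+  {
+          struct toy_dev dev = { 0 };
+
+          toy_do_reset(&dev);
+          toy_kick_vq(&dev);                             /* ignored */
+          toy_set_status(&dev, TOY_CONFIG_S_DRIVER_OK);  /* driver finished setup */
+          toy_kick_vq(&dev);                             /* now queues work */
+          return 0;
+  }
+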
--- /dev/null
+From d9df8b24df42583507e25b23ac043f6987264ddc Mon Sep 17 00:00:00 2001
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 14 Mar 2024 16:49:06 -0600
+Subject: wireguard: receive: annotate data-race around
+ receiving_counter.counter
+
+From: Nikita Zhandarovich <n.zhandarovich@fintech.ru>
+
+[ Upstream commit bba045dc4d996d03dce6fe45726e78a1a1f6d4c3 ]
+
+Syzkaller with KCSAN identified a data-race issue when accessing
+keypair->receiving_counter.counter. Use READ_ONCE() and WRITE_ONCE()
+annotations to mark the data race as intentional.
+
+ BUG: KCSAN: data-race in wg_packet_decrypt_worker / wg_packet_rx_poll
+
+ write to 0xffff888107765888 of 8 bytes by interrupt on cpu 0:
+ counter_validate drivers/net/wireguard/receive.c:321 [inline]
+ wg_packet_rx_poll+0x3ac/0xf00 drivers/net/wireguard/receive.c:461
+ __napi_poll+0x60/0x3b0 net/core/dev.c:6536
+ napi_poll net/core/dev.c:6605 [inline]
+ net_rx_action+0x32b/0x750 net/core/dev.c:6738
+ __do_softirq+0xc4/0x279 kernel/softirq.c:553
+ do_softirq+0x5e/0x90 kernel/softirq.c:454
+ __local_bh_enable_ip+0x64/0x70 kernel/softirq.c:381
+ __raw_spin_unlock_bh include/linux/spinlock_api_smp.h:167 [inline]
+ _raw_spin_unlock_bh+0x36/0x40 kernel/locking/spinlock.c:210
+ spin_unlock_bh include/linux/spinlock.h:396 [inline]
+ ptr_ring_consume_bh include/linux/ptr_ring.h:367 [inline]
+ wg_packet_decrypt_worker+0x6c5/0x700 drivers/net/wireguard/receive.c:499
+ process_one_work kernel/workqueue.c:2633 [inline]
+ ...
+
+ read to 0xffff888107765888 of 8 bytes by task 3196 on cpu 1:
+ decrypt_packet drivers/net/wireguard/receive.c:252 [inline]
+ wg_packet_decrypt_worker+0x220/0x700 drivers/net/wireguard/receive.c:501
+ process_one_work kernel/workqueue.c:2633 [inline]
+ process_scheduled_works+0x5b8/0xa30 kernel/workqueue.c:2706
+ worker_thread+0x525/0x730 kernel/workqueue.c:2787
+ ...
+
+Fixes: a9e90d9931f3 ("wireguard: noise: separate receive counter from send counter")
+Reported-by: syzbot+d1de830e4ecdaac83d89@syzkaller.appspotmail.com
+Signed-off-by: Nikita Zhandarovich <n.zhandarovich@fintech.ru>
+Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
+Reviewed-by: Jiri Pirko <jiri@nvidia.com>
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+---
+ drivers/net/wireguard/receive.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/net/wireguard/receive.c b/drivers/net/wireguard/receive.c
+index a176653c88616..db01ec03bda00 100644
+--- a/drivers/net/wireguard/receive.c
++++ b/drivers/net/wireguard/receive.c
+@@ -251,7 +251,7 @@ static bool decrypt_packet(struct sk_buff *skb, struct noise_keypair *keypair)
+
+ if (unlikely(!READ_ONCE(keypair->receiving.is_valid) ||
+ wg_birthdate_has_expired(keypair->receiving.birthdate, REJECT_AFTER_TIME) ||
+- keypair->receiving_counter.counter >= REJECT_AFTER_MESSAGES)) {
++ READ_ONCE(keypair->receiving_counter.counter) >= REJECT_AFTER_MESSAGES)) {
+ WRITE_ONCE(keypair->receiving.is_valid, false);
+ return false;
+ }
+@@ -318,7 +318,7 @@ static bool counter_validate(struct noise_replay_counter *counter, u64 their_cou
+ for (i = 1; i <= top; ++i)
+ counter->backtrack[(i + index_current) &
+ ((COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1)] = 0;
+- counter->counter = their_counter;
++ WRITE_ONCE(counter->counter, their_counter);
+ }
+
+ index &= (COUNTER_BITS_TOTAL / BITS_PER_LONG) - 1;
+@@ -463,7 +463,7 @@ int wg_packet_rx_poll(struct napi_struct *napi, int budget)
+ net_dbg_ratelimited("%s: Packet has invalid nonce %llu (max %llu)\n",
+ peer->device->dev->name,
+ PACKET_CB(skb)->nonce,
+- keypair->receiving_counter.counter);
++ READ_ONCE(keypair->receiving_counter.counter));
+ goto next;
+ }
+
+--
+2.43.0
+
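+A userspace analogue of the annotation, assuming READ_ONCE()/WRITE_ONCE()
+can be approximated with C11 relaxed atomics (the kernel macros are not
+usable here; all names are invented): the lockless reader and writer of
+the shared counter both use marked accesses.
+
+  #include <pthread.h>
+  #include <stdatomic.h>
+  #include <stdio.h>
+
+  /* Userspace stand-ins for the kernel's READ_ONCE()/WRITE_ONCE(). */
+  #define TOY_READ_ONCE(x)     atomic_load_explicit(&(x), memory_order_relaxed)
+  #define TOY_WRITE_ONCE(x, v) atomic_store_explicit(&(x), (v), memory_order_relaxed)
+
+  static _Atomic unsigned long long receiving_counter;
+
+  static void *toy_updater(void *arg)
+  {
+          (void)arg;
+          for (unsigned long long i = 1; i <= 100000; i++)
+                  TOY_WRITE_ONCE(receiving_counter, i);  /* rx/napi side */
+          return NULL;
+  }
+
+  static void *toy_checker(void *arg)
+  {
+          (void)arg;
+          for (int i = 0; i < 100000; i++)
+                  if (TOY_READ_ONCE(receiving_counter) >= 100000) /* worker side */
+                          break;
+          return NULL;
+  }
+
+  int main(void)
+  {
+          pthread_t a, b;
+
+          pthread_create(&a, NULL, toy_updater, NULL);
+          pthread_create(&b, NULL, toy_checker, NULL);
+          pthread_join(a, NULL);
+          pthread_join(b, NULL);
+          printf("final counter: %llu\n", TOY_READ_ONCE(receiving_counter));
+          return 0;
+  }
+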