From: Sasha Levin
Date: Mon, 31 May 2021 02:37:05 +0000 (-0400)
Subject: Fixes for 5.4
X-Git-Tag: v4.4.271~46
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=710e294fccce743ac80e722945a28624299e950e;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.4

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.4/alsa-usb-audio-scarlett2-snd_scarlett_gen2_controls_.patch b/queue-5.4/alsa-usb-audio-scarlett2-snd_scarlett_gen2_controls_.patch
new file mode 100644
index 00000000000..a6b0385fa10
--- /dev/null
+++ b/queue-5.4/alsa-usb-audio-scarlett2-snd_scarlett_gen2_controls_.patch
@@ -0,0 +1,40 @@
+From 4cbb031826a7441268cf416610e3f4e59afedc49 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 23 May 2021 02:09:00 +0800
+Subject: ALSA: usb-audio: scarlett2: snd_scarlett_gen2_controls_create() can
+ be static
+
+From: kernel test robot
+
+[ Upstream commit 2b899f31f1a6db2db4608bac2ac04fe2c4ad89eb ]
+
+sound/usb/mixer_scarlett_gen2.c:2000:5: warning: symbol 'snd_scarlett_gen2_controls_create' was not declared. Should it be static?
+
+Fixes: 265d1a90e4fb ("ALSA: usb-audio: scarlett2: Improve driver startup messages")
+Reported-by: kernel test robot
+Signed-off-by: kernel test robot
+Link: https://lore.kernel.org/r/20210522180900.GA83915@f59a3af2f1d9
+Signed-off-by: Takashi Iwai
+Signed-off-by: Sasha Levin
+---
+ sound/usb/mixer_scarlett_gen2.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sound/usb/mixer_scarlett_gen2.c b/sound/usb/mixer_scarlett_gen2.c
+index a973dd4d5bbe..7a10c9e22c46 100644
+--- a/sound/usb/mixer_scarlett_gen2.c
++++ b/sound/usb/mixer_scarlett_gen2.c
+@@ -1997,8 +1997,8 @@ static int scarlett2_mixer_status_create(struct usb_mixer_interface *mixer)
+ 	return usb_submit_urb(mixer->urb, GFP_KERNEL);
+ }
+ 
+-int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer,
+-				      const struct scarlett2_device_info *info)
++static int snd_scarlett_gen2_controls_create(struct usb_mixer_interface *mixer,
++					     const struct scarlett2_device_info *info)
+ {
+ 	int err;
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/asoc-cs35l33-fix-an-error-code-in-probe.patch b/queue-5.4/asoc-cs35l33-fix-an-error-code-in-probe.patch
new file mode 100644
index 00000000000..732cfe51295
--- /dev/null
+++ b/queue-5.4/asoc-cs35l33-fix-an-error-code-in-probe.patch
@@ -0,0 +1,36 @@
+From f7b0ea00cb6c3e0ce56931e32c53fbd5e81a6013 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 20 May 2021 08:08:24 +0300
+Subject: ASoC: cs35l33: fix an error code in probe()
+
+From: Dan Carpenter
+
+[ Upstream commit 833bc4cf9754643acc69b3c6b65988ca78df4460 ]
+
+This error path returns zero (success) but it should return -EINVAL.
+
+Fixes: 3333cb7187b9 ("ASoC: cs35l33: Initial commit of the cs35l33 CODEC driver.")
+Signed-off-by: Dan Carpenter
+Reviewed-by: Charles Keepax
+Link: https://lore.kernel.org/r/YKXuyGEzhPT35R3G@mwanda
+Signed-off-by: Mark Brown
+Signed-off-by: Sasha Levin
+---
+ sound/soc/codecs/cs35l33.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/sound/soc/codecs/cs35l33.c b/sound/soc/codecs/cs35l33.c
+index 6042194d95d3..8894369e329a 100644
+--- a/sound/soc/codecs/cs35l33.c
++++ b/sound/soc/codecs/cs35l33.c
+@@ -1201,6 +1201,7 @@ static int cs35l33_i2c_probe(struct i2c_client *i2c_client,
+ 		dev_err(&i2c_client->dev,
+ 			"CS35L33 Device ID (%X). Expected ID %X\n",
+ 			devid, CS35L33_CHIP_ID);
++		ret = -EINVAL;
+ 		goto err_enable;
+ 	}
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/asoc-cs42l42-regmap-must-use_single_read-write.patch b/queue-5.4/asoc-cs42l42-regmap-must-use_single_read-write.patch
new file mode 100644
index 00000000000..0c0a3d049b4
--- /dev/null
+++ b/queue-5.4/asoc-cs42l42-regmap-must-use_single_read-write.patch
@@ -0,0 +1,49 @@
+From 7ec9d943afbeeb59a96e4a955e53afb6cb91f9cc Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 11 May 2021 14:28:55 +0100
+Subject: ASoC: cs42l42: Regmap must use_single_read/write
+
+From: Richard Fitzgerald
+
+[ Upstream commit 0fad605fb0bdc00d8ad78696300ff2fbdee6e048 ]
+
+cs42l42 does not support standard burst transfers so the use_single_read
+and use_single_write flags must be set in the regmap config.
+
+Because of this bug, the patch:
+
+commit 0a0eb567e1d4 ("ASoC: cs42l42: Minor error paths fixups")
+
+broke cs42l42 probe() because without the use_single_* flags it causes
+regmap to issue a burst read.
+
+However, the missing use_single_* could cause problems anyway because the
+regmap cache can attempt burst transfers if these flags are not set.
+
+Fixes: 2c394ca79604 ("ASoC: Add support for CS42L42 codec")
+Signed-off-by: Richard Fitzgerald
+Acked-by: Charles Keepax
+Link: https://lore.kernel.org/r/20210511132855.27159-1-rf@opensource.cirrus.com
+Signed-off-by: Mark Brown
+Signed-off-by: Sasha Levin
+---
+ sound/soc/codecs/cs42l42.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/sound/soc/codecs/cs42l42.c b/sound/soc/codecs/cs42l42.c
+index dcd2acb2c3ce..5faf8877137a 100644
+--- a/sound/soc/codecs/cs42l42.c
++++ b/sound/soc/codecs/cs42l42.c
+@@ -398,6 +398,9 @@ static const struct regmap_config cs42l42_regmap = {
+ 	.reg_defaults = cs42l42_reg_defaults,
+ 	.num_reg_defaults = ARRAY_SIZE(cs42l42_reg_defaults),
+ 	.cache_type = REGCACHE_RBTREE,
++
++	.use_single_read = true,
++	.use_single_write = true,
+ };
+ 
+ static DECLARE_TLV_DB_SCALE(adc_tlv, -9600, 100, false);
+-- 
+2.30.2
+
diff --git a/queue-5.4/bnxt_en-include-new-p5-hv-definition-in-vf-check.patch b/queue-5.4/bnxt_en-include-new-p5-hv-definition-in-vf-check.patch
new file mode 100644
index 00000000000..f7508566548
--- /dev/null
+++ b/queue-5.4/bnxt_en-include-new-p5-hv-definition-in-vf-check.patch
@@ -0,0 +1,39 @@
+From 4f00193a225408ba8c55b60e1a7a1d33760694be Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 15 May 2021 03:25:18 -0400
+Subject: bnxt_en: Include new P5 HV definition in VF check.
+
+From: Andy Gospodarek
+
+[ Upstream commit ab21494be9dc7d62736c5fcd06be65d49df713ee ]
+
+Otherwise, some of the recently added HyperV VF IDs would not be
+recognized as VF devices and they would not initialize properly.
+
+Fixes: 7fbf359bb2c1 ("bnxt_en: Add PCI IDs for Hyper-V VF devices.")
+Reviewed-by: Edwin Peer
+Signed-off-by: Andy Gospodarek
+Signed-off-by: Michael Chan
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/broadcom/bnxt/bnxt.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+index 106f2b2ce17f..0dba28bb309a 100644
+--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
++++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+@@ -280,7 +280,8 @@ static bool bnxt_vf_pciid(enum board_idx idx)
+ {
+ 	return (idx == NETXTREME_C_VF || idx == NETXTREME_E_VF ||
+ 		idx == NETXTREME_S_VF || idx == NETXTREME_C_VF_HV ||
+-		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF);
++		idx == NETXTREME_E_VF_HV || idx == NETXTREME_E_P5_VF ||
++		idx == NETXTREME_E_P5_VF_HV);
+ }
+ 
+ #define DB_CP_REARM_FLAGS	(DB_KEY_CP | DB_IDX_VALID)
+-- 
+2.30.2
+
diff --git a/queue-5.4/bpf-set-mac_len-in-bpf_skb_change_head.patch b/queue-5.4/bpf-set-mac_len-in-bpf_skb_change_head.patch
new file mode 100644
index 00000000000..9d1ed8079a5
--- /dev/null
+++ b/queue-5.4/bpf-set-mac_len-in-bpf_skb_change_head.patch
@@ -0,0 +1,40 @@
+From 91e2ed6bc44abdf46326670d10ad89ad6e109e82 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 19 May 2021 15:47:42 +0000
+Subject: bpf: Set mac_len in bpf_skb_change_head
+
+From: Jussi Maki
+
+[ Upstream commit 84316ca4e100d8cbfccd9f774e23817cb2059868 ]
+
+The skb_change_head() helper did not set "skb->mac_len", which is
+problematic when it's used in combination with skb_redirect_peer().
+Without it, redirecting a packet from a L3 device such as wireguard to
+the veth peer device will cause skb->data to point to the middle of the
+IP header on entry to tcp_v4_rcv() since the L2 header is not pulled
+correctly due to mac_len=0.
+
+Fixes: 3a0af8fd61f9 ("bpf: BPF for lightweight tunnel infrastructure")
+Signed-off-by: Jussi Maki
+Signed-off-by: Daniel Borkmann
+Link: https://lore.kernel.org/bpf/20210519154743.2554771-2-joamaki@gmail.com
+Signed-off-by: Sasha Levin
+---
+ net/core/filter.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index 7fbb274b7fe3..108bcf600052 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -3331,6 +3331,7 @@ static inline int __bpf_skb_change_head(struct sk_buff *skb, u32 head_room,
+ 		__skb_push(skb, head_room);
+ 		memset(skb->data, 0, head_room);
+ 		skb_reset_mac_header(skb);
++		skb_reset_mac_len(skb);
+ 	}
+ 
+ 	return ret;
+-- 
+2.30.2
+
diff --git a/queue-5.4/cxgb4-avoid-accessing-registers-when-clearing-filter.patch b/queue-5.4/cxgb4-avoid-accessing-registers-when-clearing-filter.patch
new file mode 100644
index 00000000000..740cdf590b5
--- /dev/null
+++ b/queue-5.4/cxgb4-avoid-accessing-registers-when-clearing-filter.patch
@@ -0,0 +1,39 @@
+From 02644434c7a94f8088516f53be13acaa6a617214 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 19 May 2021 16:48:31 +0530
+Subject: cxgb4: avoid accessing registers when clearing filters
+
+From: Raju Rangoju
+
+[ Upstream commit 88c380df84fbd03f9b137c2b9d0a44b9f2f553b0 ]
+
+Hardware register having the server TID base can contain
+invalid values when adapter is in bad state (for example,
+due to AER fatal error). Reading these invalid values in the
+register can lead to out-of-bound memory access. So, fix
+by using the saved server TID base when clearing filters.
+
+Fixes: b1a79360ee86 ("cxgb4: Delete all hash and TCAM filters before resource cleanup")
+Signed-off-by: Raju Rangoju
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+index 64a2453e06ba..ccb28182f745 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_filter.c
+@@ -778,7 +778,7 @@ void clear_all_filters(struct adapter *adapter)
+ 			cxgb4_del_filter(dev, i, &f->fs);
+ 	}
+ 
+-	sb = t4_read_reg(adapter, LE_DB_SRVR_START_INDEX_A);
++	sb = adapter->tids.stid_base;
+ 	for (i = 0; i < sb; i++) {
+ 		f = (struct filter_entry *)adapter->tids.tid_tab[i];
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/gve-add-null-pointer-checks-when-freeing-irqs.patch b/queue-5.4/gve-add-null-pointer-checks-when-freeing-irqs.patch
new file mode 100644
index 00000000000..7b1ce764db9
--- /dev/null
+++ b/queue-5.4/gve-add-null-pointer-checks-when-freeing-irqs.patch
@@ -0,0 +1,61 @@
+From fc71ef21cc16ab996eab5a77d84d9a87f03978f5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 17 May 2021 14:08:13 -0700
+Subject: gve: Add NULL pointer checks when freeing irqs.
+
+From: David Awogbemila
+
+[ Upstream commit 5218e919c8d06279884aa0baf76778a6817d5b93 ]
+
+When freeing notification blocks, we index priv->msix_vectors.
+If we failed to allocate priv->msix_vectors (see abort_with_msix_vectors)
+this could lead to a NULL pointer dereference if the driver is unloaded.
+
+Fixes: 893ce44df565 ("gve: Add basic driver framework for Compute Engine Virtual NIC")
+Signed-off-by: David Awogbemila
+Acked-by: Willem de Brujin
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_main.c | 20 +++++++++++---------
+ 1 file changed, 11 insertions(+), 9 deletions(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 9c74251f9b6a..77e79d2939ba 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -242,20 +242,22 @@ static void gve_free_notify_blocks(struct gve_priv *priv)
+ {
+ 	int i;
+ 
+-	/* Free the irqs */
+-	for (i = 0; i < priv->num_ntfy_blks; i++) {
+-		struct gve_notify_block *block = &priv->ntfy_blocks[i];
+-		int msix_idx = i;
+-
+-		irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
+-				      NULL);
+-		free_irq(priv->msix_vectors[msix_idx].vector, block);
++	if (priv->msix_vectors) {
++		/* Free the irqs */
++		for (i = 0; i < priv->num_ntfy_blks; i++) {
++			struct gve_notify_block *block = &priv->ntfy_blocks[i];
++			int msix_idx = i;
++
++			irq_set_affinity_hint(priv->msix_vectors[msix_idx].vector,
++					      NULL);
++			free_irq(priv->msix_vectors[msix_idx].vector, block);
++		}
++		free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	}
+ 	dma_free_coherent(&priv->pdev->dev,
+ 			  priv->num_ntfy_blks * sizeof(*priv->ntfy_blocks),
+ 			  priv->ntfy_blocks, priv->ntfy_block_bus);
+ 	priv->ntfy_blocks = NULL;
+-	free_irq(priv->msix_vectors[priv->mgmt_msix_idx].vector, priv);
+ 	pci_disable_msix(priv->pdev);
+ 	kvfree(priv->msix_vectors);
+ 	priv->msix_vectors = NULL;
+-- 
+2.30.2
+
diff --git a/queue-5.4/gve-check-tx-qpl-was-actually-assigned.patch b/queue-5.4/gve-check-tx-qpl-was-actually-assigned.patch
new file mode 100644
index 00000000000..9808c06f528
--- /dev/null
+++ b/queue-5.4/gve-check-tx-qpl-was-actually-assigned.patch
@@ -0,0 +1,52 @@
+From e4adf967d175f1eaa14954aa67dca9869a22adf1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 17 May 2021 14:08:11 -0700
+Subject: gve: Check TX QPL was actually assigned
+
+From: Catherine Sullivan
+
+[ Upstream commit 5aec55b46c6238506cdf0c60cd0e42ab77a1e5e0 ]
+
+Correctly check the TX QPL was assigned and unassigned if
+other steps in the allocation fail.
+
+Fixes: f5cedc84a30d (gve: Add transmit and receive support)
+Signed-off-by: Catherine Sullivan
+Signed-off-by: David Awogbemila
+Acked-by: Willem de Bruijn
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_tx.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index d0244feb0301..30532ee28dd3 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -207,10 +207,12 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
+ 		goto abort_with_info;
+ 
+ 	tx->tx_fifo.qpl = gve_assign_tx_qpl(priv);
++	if (!tx->tx_fifo.qpl)
++		goto abort_with_desc;
+ 
+ 	/* map Tx FIFO */
+ 	if (gve_tx_fifo_init(priv, &tx->tx_fifo))
+-		goto abort_with_desc;
++		goto abort_with_qpl;
+ 
+ 	tx->q_resources =
+ 		dma_alloc_coherent(hdev,
+@@ -229,6 +231,8 @@ static int gve_tx_alloc_ring(struct gve_priv *priv, int idx)
+ 
+ abort_with_fifo:
+ 	gve_tx_fifo_release(priv, &tx->tx_fifo);
++abort_with_qpl:
++	gve_unassign_qpl(priv, tx->tx_fifo.qpl->id);
+ abort_with_desc:
+ 	dma_free_coherent(hdev, bytes, tx->desc, tx->bus);
+ 	tx->desc = NULL;
+-- 
+2.30.2
+
diff --git a/queue-5.4/gve-correct-skb-queue-index-validation.patch b/queue-5.4/gve-correct-skb-queue-index-validation.patch
new file mode 100644
index 00000000000..4012f3fb07c
--- /dev/null
+++ b/queue-5.4/gve-correct-skb-queue-index-validation.patch
@@ -0,0 +1,37 @@
+From 1e54c3871f31a28896fe6b85ca66ef88099bee1f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 17 May 2021 14:08:15 -0700
+Subject: gve: Correct SKB queue index validation.
+
+From: David Awogbemila
+
+[ Upstream commit fbd4a28b4fa66faaa7f510c0adc531d37e0a7848 ]
+
+SKBs with skb_get_queue_mapping(skb) == tx_cfg.num_queues should also be
+considered invalid.
+
+Fixes: f5cedc84a30d ("gve: Add transmit and receive support")
+Signed-off-by: David Awogbemila
+Acked-by: Willem de Brujin
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_tx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_tx.c b/drivers/net/ethernet/google/gve/gve_tx.c
+index 30532ee28dd3..b653197b34d1 100644
+--- a/drivers/net/ethernet/google/gve/gve_tx.c
++++ b/drivers/net/ethernet/google/gve/gve_tx.c
+@@ -482,7 +482,7 @@ netdev_tx_t gve_tx(struct sk_buff *skb, struct net_device *dev)
+ 	struct gve_tx_ring *tx;
+ 	int nsegs;
+ 
+-	WARN(skb_get_queue_mapping(skb) > priv->tx_cfg.num_queues,
++	WARN(skb_get_queue_mapping(skb) >= priv->tx_cfg.num_queues,
+ 	     "skb queue index out of range");
+ 	tx = &priv->tx[skb_get_queue_mapping(skb)];
+ 	if (unlikely(gve_maybe_stop_tx(tx, skb))) {
+-- 
+2.30.2
+
diff --git a/queue-5.4/gve-update-mgmt_msix_idx-if-num_ntfy-changes.patch b/queue-5.4/gve-update-mgmt_msix_idx-if-num_ntfy-changes.patch
new file mode 100644
index 00000000000..6a90ae81102
--- /dev/null
+++ b/queue-5.4/gve-update-mgmt_msix_idx-if-num_ntfy-changes.patch
@@ -0,0 +1,38 @@
+From 7d6865973f792edbfeb7ac4097d5c722b7516b21 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 17 May 2021 14:08:12 -0700
+Subject: gve: Update mgmt_msix_idx if num_ntfy changes
+
+From: David Awogbemila
+
+[ Upstream commit e96b491a0ffa35a8a9607c193fa4d894ca9fb32f ]
+
+If we do not get the expected number of vectors from
+pci_enable_msix_range, we update priv->num_ntfy_blks but not
+priv->mgmt_msix_idx. This patch fixes this so that priv->mgmt_msix_idx
+is updated accordingly.
+
+Fixes: f5cedc84a30d ("gve: Add transmit and receive support")
+Signed-off-by: David Awogbemila
+Acked-by: Willem de Bruijn
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_main.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 9b7a8db9860f..9c74251f9b6a 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -161,6 +161,7 @@ static int gve_alloc_notify_blocks(struct gve_priv *priv)
+ 		int vecs_left = new_num_ntfy_blks % 2;
+ 
+ 		priv->num_ntfy_blks = new_num_ntfy_blks;
++		priv->mgmt_msix_idx = priv->num_ntfy_blks;
+ 		priv->tx_cfg.max_queues = min_t(int, priv->tx_cfg.max_queues,
+ 						vecs_per_type);
+ 		priv->rx_cfg.max_queues = min_t(int, priv->rx_cfg.max_queues,
+-- 
+2.30.2
+
diff --git a/queue-5.4/gve-upgrade-memory-barrier-in-poll-routine.patch b/queue-5.4/gve-upgrade-memory-barrier-in-poll-routine.patch
new file mode 100644
index 00000000000..504f1563396
--- /dev/null
+++ b/queue-5.4/gve-upgrade-memory-barrier-in-poll-routine.patch
@@ -0,0 +1,48 @@
+From 15620e0d8eab76c59db57c9e1117baef82ac2eb1 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 17 May 2021 14:08:14 -0700
+Subject: gve: Upgrade memory barrier in poll routine
+
+From: Catherine Sullivan
+
+[ Upstream commit f81781835f0adfae8d701545386030d223efcd6f ]
+
+As currently written, if the driver checks for more work (via
+gve_tx_poll or gve_rx_poll) before the device posts work and the
+irq doorbell is not unmasked
+(via iowrite32be(GVE_IRQ_ACK | GVE_IRQ_EVENT, ...)) before the device
+attempts to raise an interrupt, an interrupt is lost and this could
+potentially lead to the traffic being completely halted. For
+example, if a tx queue has already been stopped, the driver won't get
+the chance to complete work and egress will be halted.
+
+We need a full memory barrier in the poll
+routine to ensure that the irq doorbell is unmasked before the driver
+checks for more work.
+
+Fixes: f5cedc84a30d ("gve: Add transmit and receive support")
+Signed-off-by: Catherine Sullivan
+Signed-off-by: David Awogbemila
+Acked-by: Willem de Brujin
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/google/gve/gve_main.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/google/gve/gve_main.c b/drivers/net/ethernet/google/gve/gve_main.c
+index 77e79d2939ba..6ea0975d74a1 100644
+--- a/drivers/net/ethernet/google/gve/gve_main.c
++++ b/drivers/net/ethernet/google/gve/gve_main.c
+@@ -121,7 +121,7 @@ static int gve_napi_poll(struct napi_struct *napi, int budget)
+ 		/* Double check we have no extra work.
+ 		 * Ensure unmask synchronizes with checking for work.
+ 		 */
+-		dma_rmb();
++		mb();
+ 		if (block->tx)
+ 			reschedule |= gve_tx_poll(block, -1);
+ 		if (block->rx)
+-- 
+2.30.2
+
diff --git a/queue-5.4/ipv6-record-frag_max_size-in-atomic-fragments-in-inp.patch b/queue-5.4/ipv6-record-frag_max_size-in-atomic-fragments-in-inp.patch
new file mode 100644
index 00000000000..f10556ca825
--- /dev/null
+++ b/queue-5.4/ipv6-record-frag_max_size-in-atomic-fragments-in-inp.patch
@@ -0,0 +1,46 @@
+From 759386dee68ef59f5ca20b5c1b29d926158d00c0 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 21 May 2021 13:21:14 -0700
+Subject: ipv6: record frag_max_size in atomic fragments in input path
+
+From: Francesco Ruggeri
+
+[ Upstream commit e29f011e8fc04b2cdc742a2b9bbfa1b62518381a ]
+
+Commit dbd1759e6a9c ("ipv6: on reassembly, record frag_max_size")
+filled the frag_max_size field in IP6CB in the input path.
+The field should also be filled in case of atomic fragments.
+
+Fixes: dbd1759e6a9c ('ipv6: on reassembly, record frag_max_size')
+Signed-off-by: Francesco Ruggeri
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/ipv6/reassembly.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/net/ipv6/reassembly.c b/net/ipv6/reassembly.c
+index c8cf1bbad74a..45ee1971d998 100644
+--- a/net/ipv6/reassembly.c
++++ b/net/ipv6/reassembly.c
+@@ -344,7 +344,7 @@ static int ipv6_frag_rcv(struct sk_buff *skb)
+ 	hdr = ipv6_hdr(skb);
+ 	fhdr = (struct frag_hdr *)skb_transport_header(skb);
+ 
+-	if (!(fhdr->frag_off & htons(0xFFF9))) {
++	if (!(fhdr->frag_off & htons(IP6_OFFSET | IP6_MF))) {
+ 		/* It is not a fragmented frame */
+ 		skb->transport_header += sizeof(struct frag_hdr);
+ 		__IP6_INC_STATS(net,
+@@ -352,6 +352,8 @@
+ 
+ 		IP6CB(skb)->nhoff = (u8 *)fhdr - skb_network_header(skb);
+ 		IP6CB(skb)->flags |= IP6SKB_FRAGMENTED;
++		IP6CB(skb)->frag_max_size = ntohs(hdr->payload_len) +
++					    sizeof(struct ipv6hdr);
+ 		return 1;
+ 	}
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/ixgbe-fix-large-mtu-request-from-vf.patch b/queue-5.4/ixgbe-fix-large-mtu-request-from-vf.patch
new file mode 100644
index 00000000000..8214c839f06
--- /dev/null
+++ b/queue-5.4/ixgbe-fix-large-mtu-request-from-vf.patch
@@ -0,0 +1,76 @@
+From 62070582e01c89740f83c7356a8a898675d8172b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 20 May 2021 11:18:35 -0700
+Subject: ixgbe: fix large MTU request from VF
+
+From: Jesse Brandeburg
+
+[ Upstream commit 63e39d29b3da02e901349f6cd71159818a4737a6 ]
+
+Check that the MTU value requested by the VF is in the supported
+range of MTUs before attempting to set the VF large packet enable,
+otherwise reject the request. This also avoids unnecessary
+register updates in the case of the 82599 controller.
+
+Fixes: 872844ddb9e4 ("ixgbe: Enable jumbo frames support w/ SR-IOV")
+Co-developed-by: Piotr Skajewski
+Signed-off-by: Piotr Skajewski
+Signed-off-by: Jesse Brandeburg
+Co-developed-by: Mateusz Palczewski
+Signed-off-by: Mateusz Palczewski
+Tested-by: Konrad Jankowski
+Signed-off-by: Tony Nguyen
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c | 16 +++++++---------
+ 1 file changed, 7 insertions(+), 9 deletions(-)
+
+diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+index 537dfff585e0..47a920128760 100644
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+@@ -467,12 +467,16 @@ static int ixgbe_set_vf_vlan(struct ixgbe_adapter *adapter, int add, int vid,
+ 	return err;
+ }
+ 
+-static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
++static int ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 max_frame, u32 vf)
+ {
+ 	struct ixgbe_hw *hw = &adapter->hw;
+-	int max_frame = msgbuf[1];
+ 	u32 max_frs;
+ 
++	if (max_frame < ETH_MIN_MTU || max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
++		e_err(drv, "VF max_frame %d out of range\n", max_frame);
++		return -EINVAL;
++	}
++
+ 	/*
+ 	 * For 82599EB we have to keep all PFs and VFs operating with
+ 	 * the same max_frame value in order to avoid sending an oversize
+@@ -533,12 +537,6 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
+ 		}
+ 	}
+ 
+-	/* MTU < 68 is an error and causes problems on some kernels */
+-	if (max_frame > IXGBE_MAX_JUMBO_FRAME_SIZE) {
+-		e_err(drv, "VF max_frame %d out of range\n", max_frame);
+-		return -EINVAL;
+-	}
+-
+ 	/* pull current max frame size from hardware */
+ 	max_frs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
+ 	max_frs &= IXGBE_MHADD_MFS_MASK;
+@@ -1249,7 +1247,7 @@ static int ixgbe_rcv_msg_from_vf(struct ixgbe_adapter *adapter, u32 vf)
+ 		retval = ixgbe_set_vf_vlan_msg(adapter, msgbuf, vf);
+ 		break;
+ 	case IXGBE_VF_SET_LPE:
+-		retval = ixgbe_set_vf_lpe(adapter, msgbuf, vf);
++		retval = ixgbe_set_vf_lpe(adapter, msgbuf[1], vf);
+ 		break;
+ 	case IXGBE_VF_SET_MACVLAN:
+ 		retval = ixgbe_set_vf_macvlan_msg(adapter, msgbuf, vf);
+-- 
+2.30.2
+
diff --git a/queue-5.4/mips-alchemy-xxs1500-add-gpio-au1000.h-header-file.patch b/queue-5.4/mips-alchemy-xxs1500-add-gpio-au1000.h-header-file.patch
new file mode 100644
index 00000000000..42f006d8449
--- /dev/null
+++ b/queue-5.4/mips-alchemy-xxs1500-add-gpio-au1000.h-header-file.patch
@@ -0,0 +1,46 @@
+From c8be3aeff7bddea7d32c1bb61ffbca5ed240dc34 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 16 May 2021 17:01:08 -0700
+Subject: MIPS: alchemy: xxs1500: add gpio-au1000.h header file
+
+From: Randy Dunlap
+
+[ Upstream commit ff4cff962a7eedc73e54b5096693da7f86c61346 ]
+
+board-xxs1500.c references 2 functions without declaring them, so add
+the header file to placate the build.
+
+../arch/mips/alchemy/board-xxs1500.c: In function 'board_setup':
+../arch/mips/alchemy/board-xxs1500.c:56:2: error: implicit declaration of function 'alchemy_gpio1_input_enable' [-Werror=implicit-function-declaration]
+   56 |  alchemy_gpio1_input_enable();
+../arch/mips/alchemy/board-xxs1500.c:57:2: error: implicit declaration of function 'alchemy_gpio2_enable'; did you mean 'alchemy_uart_enable'? [-Werror=implicit-function-declaration]
+   57 |  alchemy_gpio2_enable();
+
+Fixes: 8e026910fcd4 ("MIPS: Alchemy: merge GPR/MTX-1/XXS1500 board code into single files")
+Signed-off-by: Randy Dunlap
+Cc: Thomas Bogendoerfer
+Cc: linux-mips@vger.kernel.org
+Cc: Manuel Lauss
+Cc: Ralf Baechle
+Acked-by: Manuel Lauss
+Signed-off-by: Thomas Bogendoerfer
+Signed-off-by: Sasha Levin
+---
+ arch/mips/alchemy/board-xxs1500.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/mips/alchemy/board-xxs1500.c b/arch/mips/alchemy/board-xxs1500.c
+index c67dfe1f4997..ec35aedc7727 100644
+--- a/arch/mips/alchemy/board-xxs1500.c
++++ b/arch/mips/alchemy/board-xxs1500.c
+@@ -18,6 +18,7 @@
+ #include
+ #include
+ #include
++#include
+ #include
+ 
+ const char *get_system_type(void)
+-- 
+2.30.2
+
diff --git a/queue-5.4/mips-ralink-export-rt_sysc_membase-for-rt2880_wdt.c.patch b/queue-5.4/mips-ralink-export-rt_sysc_membase-for-rt2880_wdt.c.patch
new file mode 100644
index 00000000000..dd53ba192a9
--- /dev/null
+++ b/queue-5.4/mips-ralink-export-rt_sysc_membase-for-rt2880_wdt.c.patch
@@ -0,0 +1,53 @@
+From 60c4a821f450290e5880c1113c1cd077bae29439 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 16 May 2021 17:54:17 -0700
+Subject: MIPS: ralink: export rt_sysc_membase for rt2880_wdt.c
+
+From: Randy Dunlap
+
+[ Upstream commit fef532ea0cd871afab7d9a7b6e9da99ac2c24371 ]
+
+rt2880_wdt.c uses (well, attempts to use) rt_sysc_membase. However,
+when this watchdog driver is built as a loadable module, there is a
+build error since the rt_sysc_membase symbol is not exported.
+Export it to quell the build error.
+
+ERROR: modpost: "rt_sysc_membase" [drivers/watchdog/rt2880_wdt.ko] undefined!
+
+Fixes: 473cf939ff34 ("watchdog: add ralink watchdog driver")
+Signed-off-by: Randy Dunlap
+Cc: Guenter Roeck
+Cc: Wim Van Sebroeck
+Cc: John Crispin
+Cc: linux-mips@vger.kernel.org
+Cc: linux-watchdog@vger.kernel.org
+Acked-by: Guenter Roeck
+Signed-off-by: Thomas Bogendoerfer
+Signed-off-by: Sasha Levin
+---
+ arch/mips/ralink/of.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
+index 59b23095bfbb..4e38a905ab38 100644
+--- a/arch/mips/ralink/of.c
++++ b/arch/mips/ralink/of.c
+@@ -8,6 +8,7 @@
+ 
+ #include
+ #include
++#include
+ #include
+ #include
+ #include
+@@ -25,6 +26,7 @@
+ 
+ __iomem void *rt_sysc_membase;
+ __iomem void *rt_memc_membase;
++EXPORT_SYMBOL_GPL(rt_sysc_membase);
+ 
+ __iomem void *plat_of_remap_node(const char *node)
+ {
+-- 
+2.30.2
+
diff --git a/queue-5.4/mld-fix-panic-in-mld_newpack.patch b/queue-5.4/mld-fix-panic-in-mld_newpack.patch
new file mode 100644
index 00000000000..4bcb253f2fe
--- /dev/null
+++ b/queue-5.4/mld-fix-panic-in-mld_newpack.patch
@@ -0,0 +1,112 @@
+From 9b2f100da86e3e0d2f33d5e6ab575f63500f0114 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 16 May 2021 14:44:42 +0000
+Subject: mld: fix panic in mld_newpack()
+
+From: Taehee Yoo
+
+[ Upstream commit 020ef930b826d21c5446fdc9db80fd72a791bc21 ]
+
+mld_newpack() doesn't allow to allocate high order page,
+only order-0 allocation is allowed.
+If headroom size is too large, a kernel panic could occur in skb_put().
+
+Test commands:
+ ip netns del A
+ ip netns del B
+ ip netns add A
+ ip netns add B
+ ip link add veth0 type veth peer name veth1
+ ip link set veth0 netns A
+ ip link set veth1 netns B
+
+ ip netns exec A ip link set lo up
+ ip netns exec A ip link set veth0 up
+ ip netns exec A ip -6 a a 2001:db8:0::1/64 dev veth0
+ ip netns exec B ip link set lo up
+ ip netns exec B ip link set veth1 up
+ ip netns exec B ip -6 a a 2001:db8:0::2/64 dev veth1
+ for i in {1..99}
+ do
+ 	let A=$i-1
+ 	ip netns exec A ip link add ip6gre$i type ip6gre \
+ 	local 2001:db8:$A::1 remote 2001:db8:$A::2 encaplimit 100
+ 	ip netns exec A ip -6 a a 2001:db8:$i::1/64 dev ip6gre$i
+ 	ip netns exec A ip link set ip6gre$i up
+
+ 	ip netns exec B ip link add ip6gre$i type ip6gre \
+ 	local 2001:db8:$A::2 remote 2001:db8:$A::1 encaplimit 100
+ 	ip netns exec B ip -6 a a 2001:db8:$i::2/64 dev ip6gre$i
+ 	ip netns exec B ip link set ip6gre$i up
+ done
+
+Splat looks like:
+kernel BUG at net/core/skbuff.c:110!
+invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC KASAN PTI
+CPU: 0 PID: 7 Comm: kworker/0:1 Not tainted 5.12.0+ #891
+Workqueue: ipv6_addrconf addrconf_dad_work
+RIP: 0010:skb_panic+0x15d/0x15f
+Code: 92 fe 4c 8b 4c 24 10 53 8b 4d 70 45 89 e0 48 c7 c7 00 ae 79 83
+41 57 41 56 41 55 48 8b 54 24 a6 26 f9 ff <0f> 0b 48 8b 6c 24 20 89
+34 24 e8 4a 4e 92 fe 8b 34 24 48 c7 c1 20
+RSP: 0018:ffff88810091f820 EFLAGS: 00010282
+RAX: 0000000000000089 RBX: ffff8881086e9000 RCX: 0000000000000000
+RDX: 0000000000000089 RSI: 0000000000000008 RDI: ffffed1020123efb
+RBP: ffff888005f6eac0 R08: ffffed1022fc0031 R09: ffffed1022fc0031
+R10: ffff888117e00187 R11: ffffed1022fc0030 R12: 0000000000000028
+R13: ffff888008284eb0 R14: 0000000000000ed8 R15: 0000000000000ec0
+FS: 0000000000000000(0000) GS:ffff888117c00000(0000)
+knlGS:0000000000000000
+CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+CR2: 00007f8b801c5640 CR3: 0000000033c2c006 CR4: 00000000003706f0
+DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
+DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
+Call Trace:
+ ? ip6_mc_hdr.isra.26.constprop.46+0x12a/0x600
+ ? ip6_mc_hdr.isra.26.constprop.46+0x12a/0x600
+ skb_put.cold.104+0x22/0x22
+ ip6_mc_hdr.isra.26.constprop.46+0x12a/0x600
+ ? rcu_read_lock_sched_held+0x91/0xc0
+ mld_newpack+0x398/0x8f0
+ ? ip6_mc_hdr.isra.26.constprop.46+0x600/0x600
+ ? lock_contended+0xc40/0xc40
+ add_grhead.isra.33+0x280/0x380
+ add_grec+0x5ca/0xff0
+ ? mld_sendpack+0xf40/0xf40
+ ? lock_downgrade+0x690/0x690
+ mld_send_initial_cr.part.34+0xb9/0x180
+ ipv6_mc_dad_complete+0x15d/0x1b0
+ addrconf_dad_completed+0x8d2/0xbb0
+ ? lock_downgrade+0x690/0x690
+ ? addrconf_rs_timer+0x660/0x660
+ ? addrconf_dad_work+0x73c/0x10e0
+ addrconf_dad_work+0x73c/0x10e0
+
+Allowing high order page allocation could fix this problem.
+
+Fixes: 72e09ad107e7 ("ipv6: avoid high order allocations")
+Signed-off-by: Taehee Yoo
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/ipv6/mcast.c | 3 ---
+ 1 file changed, 3 deletions(-)
+
+diff --git a/net/ipv6/mcast.c b/net/ipv6/mcast.c
+index c875c9b6edbe..7d0a6a7c9d28 100644
+--- a/net/ipv6/mcast.c
++++ b/net/ipv6/mcast.c
+@@ -1604,10 +1604,7 @@ static struct sk_buff *mld_newpack(struct inet6_dev *idev, unsigned int mtu)
+ 		     IPV6_TLV_PADN, 0 };
+ 
+ 	/* we assume size > sizeof(ra) here */
+-	/* limit our allocations to order-0 page */
+-	size = min_t(int, size, SKB_MAX_ORDER(0, 0));
+ 	skb = sock_alloc_send_skb(sk, size, 1, &err);
+-
+ 	if (!skb)
+ 		return NULL;
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/net-bnx2-fix-error-return-code-in-bnx2_init_board.patch b/queue-5.4/net-bnx2-fix-error-return-code-in-bnx2_init_board.patch
new file mode 100644
index 00000000000..08cfdb443b2
--- /dev/null
+++ b/queue-5.4/net-bnx2-fix-error-return-code-in-bnx2_init_board.patch
@@ -0,0 +1,40 @@
+From e10cb3a9787609e2906e59f8904eaf39d6a57d07 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 15 May 2021 15:16:05 +0800
+Subject: net: bnx2: Fix error return code in bnx2_init_board()
+
+From: Zhen Lei
+
+[ Upstream commit 28c66b6da4087b8cfe81c2ec0a46eb6116dafda9 ]
+
+Fix to return -EPERM from the error handling case instead of 0, as done
+elsewhere in this function.
+
+Fixes: b6016b767397 ("[BNX2]: New Broadcom gigabit network driver.")
+Reported-by: Hulk Robot
+Signed-off-by: Zhen Lei
+Reviewed-by: Michael Chan
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/broadcom/bnx2.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/broadcom/bnx2.c b/drivers/net/ethernet/broadcom/bnx2.c
+index fbc196b480b6..c3f67d8e1093 100644
+--- a/drivers/net/ethernet/broadcom/bnx2.c
++++ b/drivers/net/ethernet/broadcom/bnx2.c
+@@ -8249,9 +8249,9 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
+ 		BNX2_WR(bp, PCI_COMMAND, reg);
+ 	} else if ((BNX2_CHIP_ID(bp) == BNX2_CHIP_ID_5706_A1) &&
+ 		!(bp->flags & BNX2_FLAG_PCIX)) {
+-
+ 		dev_err(&pdev->dev,
+ 			"5706 A1 can only be used in a PCIX bus, aborting\n");
++		rc = -EPERM;
+ 		goto err_out_unmap;
+ 	}
+ 
+-- 
+2.30.2
+
diff --git a/queue-5.4/net-dsa-fix-error-code-getting-shifted-with-4-in-dsa.patch b/queue-5.4/net-dsa-fix-error-code-getting-shifted-with-4-in-dsa.patch
new file mode 100644
index 00000000000..f25cb87cb1e
--- /dev/null
+++ b/queue-5.4/net-dsa-fix-error-code-getting-shifted-with-4-in-dsa.patch
@@ -0,0 +1,68 @@
+From 9525995b1d46d5f03aa26f10df43a79b8099be30 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 9 May 2021 22:33:38 +0300
+Subject: net: dsa: fix error code getting shifted with 4 in
+ dsa_slave_get_sset_count
+
+From: Vladimir Oltean
+
+[ Upstream commit b94cbc909f1d80378a1f541968309e5c1178c98b ]
+
+DSA implements a bunch of 'standardized' ethtool statistics counters,
+namely tx_packets, tx_bytes, rx_packets, rx_bytes. So whatever the
+hardware driver returns in .get_sset_count(), we need to add 4 to that.
+
+That is ok, except that .get_sset_count() can return a negative error
+code, for example:
+
+b53_get_sset_count
+-> phy_ethtool_get_sset_count
+   -> return -EIO
+
+-EIO is -5, and with 4 added to it, it becomes -1, aka -EPERM. One can
+imagine that certain error codes may even become positive, although
+based on code inspection I did not see instances of that.
+
+Check the error code first, if it is negative return it as-is.
+
+Based on a similar patch for dsa_master_get_strings from Dan Carpenter:
+https://patchwork.kernel.org/project/netdevbpf/patch/YJaSe3RPgn7gKxZv@mwanda/
+
+Fixes: 91da11f870f0 ("net: Distributed Switch Architecture protocol support")
+Signed-off-by: Vladimir Oltean
+Reviewed-by: Florian Fainelli
+Reviewed-by: Andrew Lunn
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/dsa/slave.c | 12 +++++++-----
+ 1 file changed, 7 insertions(+), 5 deletions(-)
+
+diff --git a/net/dsa/slave.c b/net/dsa/slave.c
+index 06f8874d53ee..75b4cd4bcafb 100644
+--- a/net/dsa/slave.c
++++ b/net/dsa/slave.c
+@@ -692,13 +692,15 @@ static int dsa_slave_get_sset_count(struct net_device *dev, int sset)
+ 	struct dsa_switch *ds = dp->ds;
+ 
+ 	if (sset == ETH_SS_STATS) {
+-		int count;
++		int count = 0;
+ 
+-		count = 4;
+-		if (ds->ops->get_sset_count)
+-			count += ds->ops->get_sset_count(ds, dp->index, sset);
++		if (ds->ops->get_sset_count) {
++			count = ds->ops->get_sset_count(ds, dp->index, sset);
++			if (count < 0)
++				return count;
++		}
+ 
+-		return count;
++		return count + 4;
+ 	}
+ 
+ 	return -EOPNOTSUPP;
+-- 
+2.30.2
+
diff --git a/queue-5.4/net-ethernet-mtk_eth_soc-fix-packet-statistics-suppo.patch b/queue-5.4/net-ethernet-mtk_eth_soc-fix-packet-statistics-suppo.patch
new file mode 100644
index 00000000000..6099ab48937
--- /dev/null
+++ b/queue-5.4/net-ethernet-mtk_eth_soc-fix-packet-statistics-suppo.patch
@@ -0,0 +1,160 @@
+From 86ce7a2b274d53455ef4b52690860d1cd134e168 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 22 May 2021 09:56:30 +0200
+Subject: net: ethernet: mtk_eth_soc: Fix packet statistics support for + MT7628/88 + +From: Stefan Roese + +[ Upstream commit ad79fd2c42f7626bdf6935cd72134c2a5a59ff2d ] + +The MT7628/88 SoC(s) have other (limited) packet counter registers than +currently supported in the mtk_eth_soc driver. This patch adds support +for reading these registers, so that the packet statistics are correctly +updated. + +Additionally the defines for the non-MT7628 variant packet counter +registers are added and used in this patch instead of using hard coded +values. + +Signed-off-by: Stefan Roese +Fixes: 296c9120752b ("net: ethernet: mediatek: Add MT7628/88 SoC support") +Cc: Felix Fietkau +Cc: John Crispin +Cc: Ilya Lipnitskiy +Cc: Reto Schneider +Cc: Reto Schneider +Cc: David S. Miller +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/mediatek/mtk_eth_soc.c | 67 ++++++++++++++------- + drivers/net/ethernet/mediatek/mtk_eth_soc.h | 24 +++++++- + 2 files changed, 66 insertions(+), 25 deletions(-) + +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +index 7e3806fd70b2..48b395b9c15a 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c +@@ -675,32 +675,53 @@ static int mtk_set_mac_address(struct net_device *dev, void *p) + void mtk_stats_update_mac(struct mtk_mac *mac) + { + struct mtk_hw_stats *hw_stats = mac->hw_stats; +- unsigned int base = MTK_GDM1_TX_GBCNT; +- u64 stats; +- +- base += hw_stats->reg_offset; ++ struct mtk_eth *eth = mac->hw; + + u64_stats_update_begin(&hw_stats->syncp); + +- hw_stats->rx_bytes += mtk_r32(mac->hw, base); +- stats = mtk_r32(mac->hw, base + 0x04); +- if (stats) +- hw_stats->rx_bytes += (stats << 32); +- hw_stats->rx_packets += mtk_r32(mac->hw, base + 0x08); +- hw_stats->rx_overflow += mtk_r32(mac->hw, base + 0x10); +- hw_stats->rx_fcs_errors += mtk_r32(mac->hw, base + 0x14); +- hw_stats->rx_short_errors += 
mtk_r32(mac->hw, base + 0x18); +- hw_stats->rx_long_errors += mtk_r32(mac->hw, base + 0x1c); +- hw_stats->rx_checksum_errors += mtk_r32(mac->hw, base + 0x20); +- hw_stats->rx_flow_control_packets += +- mtk_r32(mac->hw, base + 0x24); +- hw_stats->tx_skip += mtk_r32(mac->hw, base + 0x28); +- hw_stats->tx_collisions += mtk_r32(mac->hw, base + 0x2c); +- hw_stats->tx_bytes += mtk_r32(mac->hw, base + 0x30); +- stats = mtk_r32(mac->hw, base + 0x34); +- if (stats) +- hw_stats->tx_bytes += (stats << 32); +- hw_stats->tx_packets += mtk_r32(mac->hw, base + 0x38); ++ if (MTK_HAS_CAPS(eth->soc->caps, MTK_SOC_MT7628)) { ++ hw_stats->tx_packets += mtk_r32(mac->hw, MT7628_SDM_TPCNT); ++ hw_stats->tx_bytes += mtk_r32(mac->hw, MT7628_SDM_TBCNT); ++ hw_stats->rx_packets += mtk_r32(mac->hw, MT7628_SDM_RPCNT); ++ hw_stats->rx_bytes += mtk_r32(mac->hw, MT7628_SDM_RBCNT); ++ hw_stats->rx_checksum_errors += ++ mtk_r32(mac->hw, MT7628_SDM_CS_ERR); ++ } else { ++ unsigned int offs = hw_stats->reg_offset; ++ u64 stats; ++ ++ hw_stats->rx_bytes += mtk_r32(mac->hw, ++ MTK_GDM1_RX_GBCNT_L + offs); ++ stats = mtk_r32(mac->hw, MTK_GDM1_RX_GBCNT_H + offs); ++ if (stats) ++ hw_stats->rx_bytes += (stats << 32); ++ hw_stats->rx_packets += ++ mtk_r32(mac->hw, MTK_GDM1_RX_GPCNT + offs); ++ hw_stats->rx_overflow += ++ mtk_r32(mac->hw, MTK_GDM1_RX_OERCNT + offs); ++ hw_stats->rx_fcs_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_FERCNT + offs); ++ hw_stats->rx_short_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_SERCNT + offs); ++ hw_stats->rx_long_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_LENCNT + offs); ++ hw_stats->rx_checksum_errors += ++ mtk_r32(mac->hw, MTK_GDM1_RX_CERCNT + offs); ++ hw_stats->rx_flow_control_packets += ++ mtk_r32(mac->hw, MTK_GDM1_RX_FCCNT + offs); ++ hw_stats->tx_skip += ++ mtk_r32(mac->hw, MTK_GDM1_TX_SKIPCNT + offs); ++ hw_stats->tx_collisions += ++ mtk_r32(mac->hw, MTK_GDM1_TX_COLCNT + offs); ++ hw_stats->tx_bytes += ++ mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_L + offs); ++ stats = 
mtk_r32(mac->hw, MTK_GDM1_TX_GBCNT_H + offs); ++ if (stats) ++ hw_stats->tx_bytes += (stats << 32); ++ hw_stats->tx_packets += ++ mtk_r32(mac->hw, MTK_GDM1_TX_GPCNT + offs); ++ } ++ + u64_stats_update_end(&hw_stats->syncp); + } + +diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +index 1e9202b34d35..c0b2768b480f 100644 +--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h ++++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h +@@ -264,8 +264,21 @@ + /* QDMA FQ Free Page Buffer Length Register */ + #define MTK_QDMA_FQ_BLEN 0x1B2C + +-/* GMA1 Received Good Byte Count Register */ +-#define MTK_GDM1_TX_GBCNT 0x2400 ++/* GMA1 counter / statics register */ ++#define MTK_GDM1_RX_GBCNT_L 0x2400 ++#define MTK_GDM1_RX_GBCNT_H 0x2404 ++#define MTK_GDM1_RX_GPCNT 0x2408 ++#define MTK_GDM1_RX_OERCNT 0x2410 ++#define MTK_GDM1_RX_FERCNT 0x2414 ++#define MTK_GDM1_RX_SERCNT 0x2418 ++#define MTK_GDM1_RX_LENCNT 0x241c ++#define MTK_GDM1_RX_CERCNT 0x2420 ++#define MTK_GDM1_RX_FCCNT 0x2424 ++#define MTK_GDM1_TX_SKIPCNT 0x2428 ++#define MTK_GDM1_TX_COLCNT 0x242c ++#define MTK_GDM1_TX_GBCNT_L 0x2430 ++#define MTK_GDM1_TX_GBCNT_H 0x2434 ++#define MTK_GDM1_TX_GPCNT 0x2438 + #define MTK_STAT_OFFSET 0x40 + + /* QDMA descriptor txd4 */ +@@ -476,6 +489,13 @@ + #define MT7628_SDM_MAC_ADRL (MT7628_SDM_OFFSET + 0x0c) + #define MT7628_SDM_MAC_ADRH (MT7628_SDM_OFFSET + 0x10) + ++/* Counter / stat register */ ++#define MT7628_SDM_TPCNT (MT7628_SDM_OFFSET + 0x100) ++#define MT7628_SDM_TBCNT (MT7628_SDM_OFFSET + 0x104) ++#define MT7628_SDM_RPCNT (MT7628_SDM_OFFSET + 0x108) ++#define MT7628_SDM_RBCNT (MT7628_SDM_OFFSET + 0x10c) ++#define MT7628_SDM_CS_ERR (MT7628_SDM_OFFSET + 0x110) ++ + struct mtk_rx_dma { + unsigned int rxd1; + unsigned int rxd2; +-- +2.30.2 + diff --git a/queue-5.4/net-fec-fix-the-potential-memory-leak-in-fec_enet_in.patch b/queue-5.4/net-fec-fix-the-potential-memory-leak-in-fec_enet_in.patch new file mode 100644 index 
00000000000..a2a50302801
--- /dev/null
+++ b/queue-5.4/net-fec-fix-the-potential-memory-leak-in-fec_enet_in.patch
@@ -0,0 +1,64 @@
+From fc857eac2dc4ba55b5c269d3274e1121f66b75c5 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 12 May 2021 10:43:59 +0800
+Subject: net: fec: fix the potential memory leak in fec_enet_init()
+
+From: Fugang Duan
+
+[ Upstream commit 619fee9eb13b5d29e4267cb394645608088c28a8 ]
+
+If the memory allocation for cbd_base fails, it should
+free the memory allocated for the queues, otherwise it causes
+a memory leak.
+
+And if the memory allocation for the queues fails, it can
+return an error directly.
+
+Fixes: 59d0f7465644 ("net: fec: init multi queue date structure")
+Signed-off-by: Fugang Duan
+Signed-off-by: Joakim Zhang
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/freescale/fec_main.c | 11 +++++++++--
+ 1 file changed, 9 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
+index fd7fc6f20c9d..b1856552ab81 100644
+--- a/drivers/net/ethernet/freescale/fec_main.c
++++ b/drivers/net/ethernet/freescale/fec_main.c
+@@ -3274,7 +3274,9 @@ static int fec_enet_init(struct net_device *ndev)
+ return ret;
+ }
+
+- fec_enet_alloc_queue(ndev);
++ ret = fec_enet_alloc_queue(ndev);
++ if (ret)
++ return ret;
+
+ bd_size = (fep->total_tx_ring_size + fep->total_rx_ring_size) * dsize;
+
+@@ -3282,7 +3284,8 @@ static int fec_enet_init(struct net_device *ndev)
+ cbd_base = dmam_alloc_coherent(&fep->pdev->dev, bd_size, &bd_dma,
+ GFP_KERNEL);
+ if (!cbd_base) {
+- return -ENOMEM;
++ ret = -ENOMEM;
++ goto free_queue_mem;
+ }
+
+ /* Get the Ethernet address */
+@@ -3360,6 +3363,10 @@ static int fec_enet_init(struct net_device *ndev)
+ fec_enet_update_ethtool_stats(ndev);
+
+ return 0;
++
++free_queue_mem:
++ fec_enet_free_queue(ndev);
++ return ret;
+ }
+
+ #ifdef CONFIG_OF
+--
+2.30.2
+
diff --git
a/queue-5.4/net-hso-check-for-allocation-failure-in-hso_create_b.patch b/queue-5.4/net-hso-check-for-allocation-failure-in-hso_create_b.patch
new file mode 100644
index 00000000000..1b67615e658
--- /dev/null
+++ b/queue-5.4/net-hso-check-for-allocation-failure-in-hso_create_b.patch
@@ -0,0 +1,90 @@
+From 24e1d1ecf23208ab8d97880e86682b7f97c4a7f0 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 14 May 2021 17:24:48 +0300
+Subject: net: hso: check for allocation failure in
+ hso_create_bulk_serial_device()
+
+From: Dan Carpenter
+
+[ Upstream commit 31db0dbd72444abe645d90c20ecb84d668f5af5e ]
+
+In current kernels, small allocations never actually fail so this
+patch shouldn't affect runtime.
+
+Originally this error handling code was written with the idea that if
+the "serial->tiocmget" allocation failed, then we would continue
+operating instead of bailing out early. But in later years we added
+an unchecked dereference on the next line.
+
+ serial->tiocmget->serial_state_notification = kzalloc();
+ ^^^^^^^^^^^^^^^^^^
+
+Since these allocations are never going to fail in real life, this is
+mostly a philosophical debate, but I think bailing out early is the
+correct behavior that the user would want. And generally it's safer to
+bail as soon as an error happens.
+
+Fixes: af0de1303c4e ("usb: hso: obey DMA rules in tiocmget")
+Signed-off-by: Dan Carpenter
+Reviewed-by: Johan Hovold
+Signed-off-by: David S.
Miller +Signed-off-by: Sasha Levin +--- + drivers/net/usb/hso.c | 37 ++++++++++++++++++------------------- + 1 file changed, 18 insertions(+), 19 deletions(-) + +diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c +index 8c2c6d7921f4..0436e1661096 100644 +--- a/drivers/net/usb/hso.c ++++ b/drivers/net/usb/hso.c +@@ -2619,29 +2619,28 @@ static struct hso_device *hso_create_bulk_serial_device( + num_urbs = 2; + serial->tiocmget = kzalloc(sizeof(struct hso_tiocmget), + GFP_KERNEL); ++ if (!serial->tiocmget) ++ goto exit; + serial->tiocmget->serial_state_notification + = kzalloc(sizeof(struct hso_serial_state_notification), + GFP_KERNEL); +- /* it isn't going to break our heart if serial->tiocmget +- * allocation fails don't bother checking this. +- */ +- if (serial->tiocmget && serial->tiocmget->serial_state_notification) { +- tiocmget = serial->tiocmget; +- tiocmget->endp = hso_get_ep(interface, +- USB_ENDPOINT_XFER_INT, +- USB_DIR_IN); +- if (!tiocmget->endp) { +- dev_err(&interface->dev, "Failed to find INT IN ep\n"); +- goto exit; +- } +- +- tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL); +- if (tiocmget->urb) { +- mutex_init(&tiocmget->mutex); +- init_waitqueue_head(&tiocmget->waitq); +- } else +- hso_free_tiomget(serial); ++ if (!serial->tiocmget->serial_state_notification) ++ goto exit; ++ tiocmget = serial->tiocmget; ++ tiocmget->endp = hso_get_ep(interface, ++ USB_ENDPOINT_XFER_INT, ++ USB_DIR_IN); ++ if (!tiocmget->endp) { ++ dev_err(&interface->dev, "Failed to find INT IN ep\n"); ++ goto exit; + } ++ ++ tiocmget->urb = usb_alloc_urb(0, GFP_KERNEL); ++ if (tiocmget->urb) { ++ mutex_init(&tiocmget->mutex); ++ init_waitqueue_head(&tiocmget->waitq); ++ } else ++ hso_free_tiomget(serial); + } + else + num_urbs = 1; +-- +2.30.2 + diff --git a/queue-5.4/net-lantiq-fix-memory-corruption-in-rx-ring.patch b/queue-5.4/net-lantiq-fix-memory-corruption-in-rx-ring.patch new file mode 100644 index 00000000000..a8b682ef639 --- /dev/null +++ 
b/queue-5.4/net-lantiq-fix-memory-corruption-in-rx-ring.patch @@ -0,0 +1,70 @@ +From 8417e7f2218c9b707e4e6f276b0839c155db9b5d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 21 May 2021 16:45:58 +0200 +Subject: net: lantiq: fix memory corruption in RX ring + +From: Aleksander Jan Bajkowski + +[ Upstream commit c7718ee96dbc2f9c5fc3b578abdf296dd44b9c20 ] + +In a situation where memory allocation or dma mapping fails, an +invalid address is programmed into the descriptor. This can lead +to memory corruption. If the memory allocation fails, DMA should +reuse the previous skb and mapping and drop the packet. This patch +also increments rx drop counter. + +Fixes: fe1a56420cf2 ("net: lantiq: Add Lantiq / Intel VRX200 Ethernet driver ") +Signed-off-by: Aleksander Jan Bajkowski +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/lantiq_xrx200.c | 14 +++++++++----- + 1 file changed, 9 insertions(+), 5 deletions(-) + +diff --git a/drivers/net/ethernet/lantiq_xrx200.c b/drivers/net/ethernet/lantiq_xrx200.c +index 4e44a39267eb..6ece99e6b6dd 100644 +--- a/drivers/net/ethernet/lantiq_xrx200.c ++++ b/drivers/net/ethernet/lantiq_xrx200.c +@@ -154,6 +154,7 @@ static int xrx200_close(struct net_device *net_dev) + + static int xrx200_alloc_skb(struct xrx200_chan *ch) + { ++ dma_addr_t mapping; + int ret = 0; + + ch->skb[ch->dma.desc] = netdev_alloc_skb_ip_align(ch->priv->net_dev, +@@ -163,16 +164,17 @@ static int xrx200_alloc_skb(struct xrx200_chan *ch) + goto skip; + } + +- ch->dma.desc_base[ch->dma.desc].addr = dma_map_single(ch->priv->dev, +- ch->skb[ch->dma.desc]->data, XRX200_DMA_DATA_LEN, +- DMA_FROM_DEVICE); +- if (unlikely(dma_mapping_error(ch->priv->dev, +- ch->dma.desc_base[ch->dma.desc].addr))) { ++ mapping = dma_map_single(ch->priv->dev, ch->skb[ch->dma.desc]->data, ++ XRX200_DMA_DATA_LEN, DMA_FROM_DEVICE); ++ if (unlikely(dma_mapping_error(ch->priv->dev, mapping))) { + dev_kfree_skb_any(ch->skb[ch->dma.desc]); + ret = -ENOMEM; + 
goto skip; + } + ++ ch->dma.desc_base[ch->dma.desc].addr = mapping; ++ /* Make sure the address is written before we give it to HW */ ++ wmb(); + skip: + ch->dma.desc_base[ch->dma.desc].ctl = + LTQ_DMA_OWN | LTQ_DMA_RX_OFFSET(NET_IP_ALIGN) | +@@ -196,6 +198,8 @@ static int xrx200_hw_receive(struct xrx200_chan *ch) + ch->dma.desc %= LTQ_DESC_NUM; + + if (ret) { ++ ch->skb[ch->dma.desc] = skb; ++ net_dev->stats.rx_dropped++; + netdev_err(net_dev, "failed to allocate new rx buffer\n"); + return ret; + } +-- +2.30.2 + diff --git a/queue-5.4/net-mdio-octeon-fix-some-double-free-issues.patch b/queue-5.4/net-mdio-octeon-fix-some-double-free-issues.patch new file mode 100644 index 00000000000..66507a1c9e0 --- /dev/null +++ b/queue-5.4/net-mdio-octeon-fix-some-double-free-issues.patch @@ -0,0 +1,50 @@ +From 1ba7003b772ce5b12f41484fac3f07989789b97c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 13 May 2021 09:24:55 +0200 +Subject: net: mdio: octeon: Fix some double free issues + +From: Christophe JAILLET + +[ Upstream commit e1d027dd97e1e750669cdc0d3b016a4f54e473eb ] + +'bus->mii_bus' has been allocated with 'devm_mdiobus_alloc_size()' in the +probe function. So it must not be freed explicitly or there will be a +double free. + +Remove the incorrect 'mdiobus_free' in the error handling path of the +probe function and in remove function. + +Suggested-By: Andrew Lunn +Fixes: 35d2aeac9810 ("phy: mdio-octeon: Use devm_mdiobus_alloc_size()") +Signed-off-by: Christophe JAILLET +Reviewed-by: Russell King +Reviewed-by: Andrew Lunn +Signed-off-by: David S. 
Miller +Signed-off-by: Sasha Levin +--- + drivers/net/phy/mdio-octeon.c | 2 -- + 1 file changed, 2 deletions(-) + +diff --git a/drivers/net/phy/mdio-octeon.c b/drivers/net/phy/mdio-octeon.c +index 8327382aa568..088c73731652 100644 +--- a/drivers/net/phy/mdio-octeon.c ++++ b/drivers/net/phy/mdio-octeon.c +@@ -72,7 +72,6 @@ static int octeon_mdiobus_probe(struct platform_device *pdev) + + return 0; + fail_register: +- mdiobus_free(bus->mii_bus); + smi_en.u64 = 0; + oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN); + return err; +@@ -86,7 +85,6 @@ static int octeon_mdiobus_remove(struct platform_device *pdev) + bus = platform_get_drvdata(pdev); + + mdiobus_unregister(bus->mii_bus); +- mdiobus_free(bus->mii_bus); + smi_en.u64 = 0; + oct_mdio_writeq(smi_en.u64, bus->register_base + SMI_EN); + return 0; +-- +2.30.2 + diff --git a/queue-5.4/net-mdio-thunder-fix-a-double-free-issue-in-the-.rem.patch b/queue-5.4/net-mdio-thunder-fix-a-double-free-issue-in-the-.rem.patch new file mode 100644 index 00000000000..b409bd9cb93 --- /dev/null +++ b/queue-5.4/net-mdio-thunder-fix-a-double-free-issue-in-the-.rem.patch @@ -0,0 +1,40 @@ +From 851552cb39b4617c43561bea9fddefdacc19e598 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 13 May 2021 09:44:49 +0200 +Subject: net: mdio: thunder: Fix a double free issue in the .remove function + +From: Christophe JAILLET + +[ Upstream commit a93a0a15876d2a077a3bc260b387d2457a051f24 ] + +'bus->mii_bus' have been allocated with 'devm_mdiobus_alloc_size()' in the +probe function. So it must not be freed explicitly or there will be a +double free. + +Remove the incorrect 'mdiobus_free' in the remove function. + +Fixes: 379d7ac7ca31 ("phy: mdio-thunder: Add driver for Cavium Thunder SoC MDIO buses.") +Signed-off-by: Christophe JAILLET +Reviewed-by: Russell King +Reviewed-by: Andrew Lunn +Signed-off-by: David S. 
Miller +Signed-off-by: Sasha Levin +--- + drivers/net/phy/mdio-thunder.c | 1 - + 1 file changed, 1 deletion(-) + +diff --git a/drivers/net/phy/mdio-thunder.c b/drivers/net/phy/mdio-thunder.c +index b6128ae7f14f..1e2f57ed1ef7 100644 +--- a/drivers/net/phy/mdio-thunder.c ++++ b/drivers/net/phy/mdio-thunder.c +@@ -126,7 +126,6 @@ static void thunder_mdiobus_pci_remove(struct pci_dev *pdev) + continue; + + mdiobus_unregister(bus->mii_bus); +- mdiobus_free(bus->mii_bus); + oct_mdio_writeq(0, bus->register_base + SMI_EN); + } + pci_set_drvdata(pdev, NULL); +-- +2.30.2 + diff --git a/queue-5.4/net-netcp-fix-an-error-message.patch b/queue-5.4/net-netcp-fix-an-error-message.patch new file mode 100644 index 00000000000..6da85ef64fc --- /dev/null +++ b/queue-5.4/net-netcp-fix-an-error-message.patch @@ -0,0 +1,41 @@ +From c451d6034843a505bd5b3507bdfdc77a736fcd5f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 8 May 2021 07:38:22 +0200 +Subject: net: netcp: Fix an error message + +From: Christophe JAILLET + +[ Upstream commit ddb6e00f8413e885ff826e32521cff7924661de0 ] + +'ret' is known to be 0 here. +The expected error code is stored in 'tx_pipe->dma_queue', so use it +instead. + +While at it, switch from %d to %pe which is more user friendly. + +Fixes: 84640e27f230 ("net: netcp: Add Keystone NetCP core ethernet driver") +Signed-off-by: Christophe JAILLET +Signed-off-by: David S. 
Miller
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/ti/netcp_core.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
+index 1b2702f74455..c0025bb8a584 100644
+--- a/drivers/net/ethernet/ti/netcp_core.c
++++ b/drivers/net/ethernet/ti/netcp_core.c
+@@ -1350,8 +1350,8 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
+ tx_pipe->dma_queue = knav_queue_open(name, tx_pipe->dma_queue_id,
+ KNAV_QUEUE_SHARED);
+ if (IS_ERR(tx_pipe->dma_queue)) {
+- dev_err(dev, "Could not open DMA queue for channel \"%s\": %d\n",
+- name, ret);
++ dev_err(dev, "Could not open DMA queue for channel \"%s\": %pe\n",
++ name, tx_pipe->dma_queue);
+ ret = PTR_ERR(tx_pipe->dma_queue);
+ goto err;
+ }
+--
+2.30.2
+
diff --git a/queue-5.4/net-really-orphan-skbs-tied-to-closing-sk.patch b/queue-5.4/net-really-orphan-skbs-tied-to-closing-sk.patch
new file mode 100644
index 00000000000..ceee9f61ccd
--- /dev/null
+++ b/queue-5.4/net-really-orphan-skbs-tied-to-closing-sk.patch
@@ -0,0 +1,71 @@
+From 2a7760820a606d9d8c28efb384ed42a58f2a2939 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 11 May 2021 10:35:21 +0200
+Subject: net: really orphan skbs tied to closing sk
+
+From: Paolo Abeni
+
+[ Upstream commit 098116e7e640ba677d9e345cbee83d253c13d556 ]
+
+If the owning socket is shutting down - e.g. the sock reference
+count already dropped to 0 and only sk_wmem_alloc is keeping
+the sock alive, skb_orphan_partial() becomes a no-op.
+
+When forwarding packets over veth with GRO enabled, the above
+causes refcount errors.
+
+This change addresses the issue with a plain skb_orphan() call
+in the critical scenario.
+
+Fixes: 9adc89af724f ("net: let skb_orphan_partial wake-up waiters.")
+Signed-off-by: Paolo Abeni
+Signed-off-by: David S.
Miller +Signed-off-by: Sasha Levin +--- + include/net/sock.h | 4 +++- + net/core/sock.c | 8 ++++---- + 2 files changed, 7 insertions(+), 5 deletions(-) + +diff --git a/include/net/sock.h b/include/net/sock.h +index 4137fa178790..a0728f24ecc5 100644 +--- a/include/net/sock.h ++++ b/include/net/sock.h +@@ -2150,13 +2150,15 @@ static inline void skb_set_owner_r(struct sk_buff *skb, struct sock *sk) + sk_mem_charge(sk, skb->truesize); + } + +-static inline void skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) ++static inline __must_check bool skb_set_owner_sk_safe(struct sk_buff *skb, struct sock *sk) + { + if (sk && refcount_inc_not_zero(&sk->sk_refcnt)) { + skb_orphan(skb); + skb->destructor = sock_efree; + skb->sk = sk; ++ return true; + } ++ return false; + } + + void sk_reset_timer(struct sock *sk, struct timer_list *timer, +diff --git a/net/core/sock.c b/net/core/sock.c +index 19c178aac0ae..68f84fac63e0 100644 +--- a/net/core/sock.c ++++ b/net/core/sock.c +@@ -2026,10 +2026,10 @@ void skb_orphan_partial(struct sk_buff *skb) + if (skb_is_tcp_pure_ack(skb)) + return; + +- if (can_skb_orphan_partial(skb)) +- skb_set_owner_sk_safe(skb, skb->sk); +- else +- skb_orphan(skb); ++ if (can_skb_orphan_partial(skb) && skb_set_owner_sk_safe(skb, skb->sk)) ++ return; ++ ++ skb_orphan(skb); + } + EXPORT_SYMBOL(skb_orphan_partial); + +-- +2.30.2 + diff --git a/queue-5.4/net-sched-fix-packet-stuck-problem-for-lockless-qdis.patch b/queue-5.4/net-sched-fix-packet-stuck-problem-for-lockless-qdis.patch new file mode 100644 index 00000000000..c33aa376297 --- /dev/null +++ b/queue-5.4/net-sched-fix-packet-stuck-problem-for-lockless-qdis.patch @@ -0,0 +1,199 @@ +From 4d9637593f2fdb5747bbc66636de6b3d8c29c746 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 May 2021 11:16:59 +0800 +Subject: net: sched: fix packet stuck problem for lockless qdisc + +From: Yunsheng Lin + +[ Upstream commit a90c57f2cedd52a511f739fb55e6244e22e1a2fb ] + +Lockless qdisc has below concurrent 
problem: + cpu0 cpu1 + . . +q->enqueue . + . . +qdisc_run_begin() . + . . +dequeue_skb() . + . . +sch_direct_xmit() . + . . + . q->enqueue + . qdisc_run_begin() + . return and do nothing + . . +qdisc_run_end() . + +cpu1 enqueue a skb without calling __qdisc_run() because cpu0 +has not released the lock yet and spin_trylock() return false +for cpu1 in qdisc_run_begin(), and cpu0 do not see the skb +enqueued by cpu1 when calling dequeue_skb() because cpu1 may +enqueue the skb after cpu0 calling dequeue_skb() and before +cpu0 calling qdisc_run_end(). + +Lockless qdisc has below another concurrent problem when +tx_action is involved: + +cpu0(serving tx_action) cpu1 cpu2 + . . . + . q->enqueue . + . qdisc_run_begin() . + . dequeue_skb() . + . . q->enqueue + . . . + . sch_direct_xmit() . + . . qdisc_run_begin() + . . return and do nothing + . . . + clear __QDISC_STATE_SCHED . . + qdisc_run_begin() . . + return and do nothing . . + . . . + . qdisc_run_end() . + +This patch fixes the above data race by: +1. If the first spin_trylock() return false and STATE_MISSED is + not set, set STATE_MISSED and retry another spin_trylock() in + case other CPU may not see STATE_MISSED after it releases the + lock. +2. reschedule if STATE_MISSED is set after the lock is released + at the end of qdisc_run_end(). + +For tx_action case, STATE_MISSED is also set when cpu1 is at the +end if qdisc_run_end(), so tx_action will be rescheduled again +to dequeue the skb enqueued by cpu2. + +Clear STATE_MISSED before retrying a dequeuing when dequeuing +returns NULL in order to reduce the overhead of the second +spin_trylock() and __netif_schedule() calling. + +Also clear the STATE_MISSED before calling __netif_schedule() +at the end of qdisc_run_end() to avoid doing another round of +dequeuing in the pfifo_fast_dequeue(). 
+ +The performance impact of this patch, tested using pktgen and +dummy netdev with pfifo_fast qdisc attached: + + threads without+this_patch with+this_patch delta + 1 2.61Mpps 2.60Mpps -0.3% + 2 3.97Mpps 3.82Mpps -3.7% + 4 5.62Mpps 5.59Mpps -0.5% + 8 2.78Mpps 2.77Mpps -0.3% + 16 2.22Mpps 2.22Mpps -0.0% + +Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking") +Acked-by: Jakub Kicinski +Tested-by: Juergen Gross +Signed-off-by: Yunsheng Lin +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + include/net/sch_generic.h | 35 ++++++++++++++++++++++++++++++++++- + net/sched/sch_generic.c | 19 +++++++++++++++++++ + 2 files changed, 53 insertions(+), 1 deletion(-) + +diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h +index b2ceec7b280d..0852f3e51360 100644 +--- a/include/net/sch_generic.h ++++ b/include/net/sch_generic.h +@@ -36,6 +36,7 @@ struct qdisc_rate_table { + enum qdisc_state_t { + __QDISC_STATE_SCHED, + __QDISC_STATE_DEACTIVATED, ++ __QDISC_STATE_MISSED, + }; + + struct qdisc_size_table { +@@ -156,8 +157,33 @@ static inline bool qdisc_is_empty(const struct Qdisc *qdisc) + static inline bool qdisc_run_begin(struct Qdisc *qdisc) + { + if (qdisc->flags & TCQ_F_NOLOCK) { ++ if (spin_trylock(&qdisc->seqlock)) ++ goto nolock_empty; ++ ++ /* If the MISSED flag is set, it means other thread has ++ * set the MISSED flag before second spin_trylock(), so ++ * we can return false here to avoid multi cpus doing ++ * the set_bit() and second spin_trylock() concurrently. ++ */ ++ if (test_bit(__QDISC_STATE_MISSED, &qdisc->state)) ++ return false; ++ ++ /* Set the MISSED flag before the second spin_trylock(), ++ * if the second spin_trylock() return false, it means ++ * other cpu holding the lock will do dequeuing for us ++ * or it will see the MISSED flag set after releasing ++ * lock and reschedule the net_tx_action() to do the ++ * dequeuing. 
++ */ ++ set_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ ++ /* Retry again in case other CPU may not see the new flag ++ * after it releases the lock at the end of qdisc_run_end(). ++ */ + if (!spin_trylock(&qdisc->seqlock)) + return false; ++ ++nolock_empty: + WRITE_ONCE(qdisc->empty, false); + } else if (qdisc_is_running(qdisc)) { + return false; +@@ -173,8 +199,15 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc) + static inline void qdisc_run_end(struct Qdisc *qdisc) + { + write_seqcount_end(&qdisc->running); +- if (qdisc->flags & TCQ_F_NOLOCK) ++ if (qdisc->flags & TCQ_F_NOLOCK) { + spin_unlock(&qdisc->seqlock); ++ ++ if (unlikely(test_bit(__QDISC_STATE_MISSED, ++ &qdisc->state))) { ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ __netif_schedule(qdisc); ++ } ++ } + } + + static inline bool qdisc_may_bulk(const struct Qdisc *qdisc) +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 6e6147a81bc3..0723e7858658 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -645,8 +645,10 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc) + { + struct pfifo_fast_priv *priv = qdisc_priv(qdisc); + struct sk_buff *skb = NULL; ++ bool need_retry = true; + int band; + ++retry: + for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++) { + struct skb_array *q = band2list(priv, band); + +@@ -657,6 +659,23 @@ static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc) + } + if (likely(skb)) { + qdisc_update_stats_at_dequeue(qdisc, skb); ++ } else if (need_retry && ++ test_bit(__QDISC_STATE_MISSED, &qdisc->state)) { ++ /* Delay clearing the STATE_MISSED here to reduce ++ * the overhead of the second spin_trylock() in ++ * qdisc_run_begin() and __netif_schedule() calling ++ * in qdisc_run_end(). ++ */ ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); ++ ++ /* Make sure dequeuing happens after clearing ++ * STATE_MISSED. 
++ */
++ smp_mb__after_atomic();
++
++ need_retry = false;
++
++ goto retry;
+ } else {
+ WRITE_ONCE(qdisc->empty, true);
+ }
+--
+2.30.2
+
diff --git a/queue-5.4/net-sched-fix-tx-action-reschedule-issue-with-stoppe.patch b/queue-5.4/net-sched-fix-tx-action-reschedule-issue-with-stoppe.patch
new file mode 100644
index 00000000000..9103c03ef0a
--- /dev/null
+++ b/queue-5.4/net-sched-fix-tx-action-reschedule-issue-with-stoppe.patch
@@ -0,0 +1,120 @@
+From 309c7b159d97cafbd4fc429de75edd358a0976bb Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 14 May 2021 11:17:01 +0800
+Subject: net: sched: fix tx action reschedule issue with stopped queue
+
+From: Yunsheng Lin
+
+[ Upstream commit dcad9ee9e0663d74a89b25b987f9c7be86432812 ]
+
+The netdev queue might be stopped when the byte queue limit has
+been reached or the tx hw ring is full; net_tx_action() may still be
+rescheduled if STATE_MISSED is set, which consumes unnecessary
+cpu without dequeuing and transmitting any skb because the
+netdev queue is stopped, see qdisc_run_end().
+
+This patch fixes it by checking the netdev queue state before
+calling qdisc_run() and clearing STATE_MISSED if the netdev queue is
+stopped during qdisc_run(); the net_tx_action() is rescheduled
+again when the netdev queue is restarted, see netif_tx_wake_queue().
+
+As there is a time window between the netif_xmit_frozen_or_stopped()
+check and the STATE_MISSED clearing, during which STATE_MISSED
+may be set by a net_tx_action() scheduled by netif_tx_wake_queue(),
+set STATE_MISSED again if the netdev queue is restarted.
+
+Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking")
+Reported-by: Michal Kubecek
+Acked-by: Jakub Kicinski
+Signed-off-by: Yunsheng Lin
+Signed-off-by: David S.
Miller +Signed-off-by: Sasha Levin +--- + net/core/dev.c | 3 ++- + net/sched/sch_generic.c | 27 ++++++++++++++++++++++++++- + 2 files changed, 28 insertions(+), 2 deletions(-) + +diff --git a/net/core/dev.c b/net/core/dev.c +index 0e38b5b044b6..e226f266da9e 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -3384,7 +3384,8 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q, + + if (q->flags & TCQ_F_NOLOCK) { + rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK; +- qdisc_run(q); ++ if (likely(!netif_xmit_frozen_or_stopped(txq))) ++ qdisc_run(q); + + if (unlikely(to_free)) + kfree_skb_list(to_free); +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 2b87617d023d..9bc5cbe9809b 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -35,6 +35,25 @@ + const struct Qdisc_ops *default_qdisc_ops = &pfifo_fast_ops; + EXPORT_SYMBOL(default_qdisc_ops); + ++static void qdisc_maybe_clear_missed(struct Qdisc *q, ++ const struct netdev_queue *txq) ++{ ++ clear_bit(__QDISC_STATE_MISSED, &q->state); ++ ++ /* Make sure the below netif_xmit_frozen_or_stopped() ++ * checking happens after clearing STATE_MISSED. ++ */ ++ smp_mb__after_atomic(); ++ ++ /* Checking netif_xmit_frozen_or_stopped() again to ++ * make sure STATE_MISSED is set if the STATE_MISSED ++ * set by netif_tx_wake_queue()'s rescheduling of ++ * net_tx_action() is cleared by the above clear_bit(). ++ */ ++ if (!netif_xmit_frozen_or_stopped(txq)) ++ set_bit(__QDISC_STATE_MISSED, &q->state); ++} ++ + /* Main transmission queue. 
*/ + + /* Modifications to data participating in scheduling must be protected with +@@ -74,6 +93,7 @@ static inline struct sk_buff *__skb_dequeue_bad_txq(struct Qdisc *q) + } + } else { + skb = SKB_XOFF_MAGIC; ++ qdisc_maybe_clear_missed(q, txq); + } + } + +@@ -242,6 +262,7 @@ static struct sk_buff *dequeue_skb(struct Qdisc *q, bool *validate, + } + } else { + skb = NULL; ++ qdisc_maybe_clear_missed(q, txq); + } + if (lock) + spin_unlock(lock); +@@ -251,8 +272,10 @@ validate: + *validate = true; + + if ((q->flags & TCQ_F_ONETXQUEUE) && +- netif_xmit_frozen_or_stopped(txq)) ++ netif_xmit_frozen_or_stopped(txq)) { ++ qdisc_maybe_clear_missed(q, txq); + return skb; ++ } + + skb = qdisc_dequeue_skb_bad_txq(q); + if (unlikely(skb)) { +@@ -311,6 +334,8 @@ bool sch_direct_xmit(struct sk_buff *skb, struct Qdisc *q, + HARD_TX_LOCK(dev, txq, smp_processor_id()); + if (!netif_xmit_frozen_or_stopped(txq)) + skb = dev_hard_start_xmit(skb, dev, txq, &ret); ++ else ++ qdisc_maybe_clear_missed(q, txq); + + HARD_TX_UNLOCK(dev, txq); + } else { +-- +2.30.2 + diff --git a/queue-5.4/net-sched-fix-tx-action-rescheduling-issue-during-de.patch b/queue-5.4/net-sched-fix-tx-action-rescheduling-issue-during-de.patch new file mode 100644 index 00000000000..d808858e7d5 --- /dev/null +++ b/queue-5.4/net-sched-fix-tx-action-rescheduling-issue-during-de.patch @@ -0,0 +1,173 @@ +From 92bf22790207335a79c7e02f4734b7bc6a41328d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 May 2021 11:17:00 +0800 +Subject: net: sched: fix tx action rescheduling issue during deactivation + +From: Yunsheng Lin + +[ Upstream commit 102b55ee92f9fda4dde7a45d2b20538e6e3e3d1e ] + +Currently qdisc_run() checks the STATE_DEACTIVATED of lockless +qdisc before calling __qdisc_run(), which ultimately clear the +STATE_MISSED when all the skb is dequeued. 
If STATE_DEACTIVATED
+is set before clearing STATE_MISSED, there may be rescheduling
+of net_tx_action() at the end of qdisc_run_end(), see below:
+
+CPU0(net_tx_action)        CPU1(__dev_xmit_skb)    CPU2(dev_deactivate)
+          .                        .                       .
+          .                 set STATE_MISSED               .
+          .                __netif_schedule()              .
+          .                        .              set STATE_DEACTIVATED
+          .                        .                  qdisc_reset()
+          .                        .                       .
+          .<---------------        .               synchronize_net()
+clear __QDISC_STATE_SCHED  |       .                       .
+          .                |       .                       .
+          .                |       .              some_qdisc_is_busy()
+          .                |       .                  return *false*
+          .                |       .                       .
+test STATE_DEACTIVATED     |       .                       .
+__qdisc_run() *not* called |       .                       .
+          .                |       .                       .
+test STATE_MISSED          |       .                       .
+__netif_schedule()---------|       .                       .
+          .                        .                       .
+          .                        .                       .
+
+__qdisc_run() is not called by net_tx_action() on CPU0 because
+CPU2 has set the STATE_DEACTIVATED flag during dev_deactivate(), and
+STATE_MISSED is only cleared in __qdisc_run(); __netif_schedule()
+is called at the end of qdisc_run_end(), causing the tx action
+rescheduling problem.
+
+qdisc_run() called by net_tx_action() runs in softirq context,
+which should have the same semantics as the qdisc_run() called by
+__dev_xmit_skb() protected by rcu_read_lock_bh(). And there is a
+synchronize_net() between the STATE_DEACTIVATED flag being set and
+qdisc_reset()/some_qdisc_is_busy() in dev_deactivate(), so we can safely
+bail out for the deactivated lockless qdisc in net_tx_action(), and
+qdisc_reset() will reset all skbs not dequeued yet.
+
+So add the rcu_read_lock() explicitly to protect the qdisc_run()
+and do the STATE_DEACTIVATED checking in net_tx_action() before
+calling qdisc_run_begin(). Another option is to do the checking in
+qdisc_run_end(), but it would add unnecessary overhead for the
+non-tx_action case, because __dev_queue_xmit() will not see a qdisc
+with STATE_DEACTIVATED after synchronize_net(); such a qdisc can
+only be seen by net_tx_action() because of
+__netif_schedule(). 
+ +The STATE_DEACTIVATED checking in qdisc_run() is to avoid race +between net_tx_action() and qdisc_reset(), see: +commit d518d2ed8640 ("net/sched: fix race between deactivation +and dequeue for NOLOCK qdisc"). As the bailout added above for +deactived lockless qdisc in net_tx_action() provides better +protection for the race without calling qdisc_run() at all, so +remove the STATE_DEACTIVATED checking in qdisc_run(). + +After qdisc_reset(), there is no skb in qdisc to be dequeued, so +clear the STATE_MISSED in dev_reset_queue() too. + +Fixes: 6b3ba9146fe6 ("net: sched: allow qdiscs to handle locking") +Acked-by: Jakub Kicinski +Signed-off-by: Yunsheng Lin +V8: Clearing STATE_MISSED before calling __netif_schedule() has + avoid the endless rescheduling problem, but there may still + be a unnecessary rescheduling, so adjust the commit log. +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + include/net/pkt_sched.h | 7 +------ + net/core/dev.c | 26 ++++++++++++++++++++++---- + net/sched/sch_generic.c | 4 +++- + 3 files changed, 26 insertions(+), 11 deletions(-) + +diff --git a/include/net/pkt_sched.h b/include/net/pkt_sched.h +index cee1c084e9f4..b16f9236de14 100644 +--- a/include/net/pkt_sched.h ++++ b/include/net/pkt_sched.h +@@ -118,12 +118,7 @@ void __qdisc_run(struct Qdisc *q); + static inline void qdisc_run(struct Qdisc *q) + { + if (qdisc_run_begin(q)) { +- /* NOLOCK qdisc must check 'state' under the qdisc seqlock +- * to avoid racing with dev_qdisc_reset() +- */ +- if (!(q->flags & TCQ_F_NOLOCK) || +- likely(!test_bit(__QDISC_STATE_DEACTIVATED, &q->state))) +- __qdisc_run(q); ++ __qdisc_run(q); + qdisc_run_end(q); + } + } +diff --git a/net/core/dev.c b/net/core/dev.c +index a30878346f54..0e38b5b044b6 100644 +--- a/net/core/dev.c ++++ b/net/core/dev.c +@@ -4515,25 +4515,43 @@ static __latent_entropy void net_tx_action(struct softirq_action *h) + sd->output_queue_tailp = &sd->output_queue; + local_irq_enable(); + ++ rcu_read_lock(); ++ + while 
(head) { + struct Qdisc *q = head; + spinlock_t *root_lock = NULL; + + head = head->next_sched; + +- if (!(q->flags & TCQ_F_NOLOCK)) { +- root_lock = qdisc_lock(q); +- spin_lock(root_lock); +- } + /* We need to make sure head->next_sched is read + * before clearing __QDISC_STATE_SCHED + */ + smp_mb__before_atomic(); ++ ++ if (!(q->flags & TCQ_F_NOLOCK)) { ++ root_lock = qdisc_lock(q); ++ spin_lock(root_lock); ++ } else if (unlikely(test_bit(__QDISC_STATE_DEACTIVATED, ++ &q->state))) { ++ /* There is a synchronize_net() between ++ * STATE_DEACTIVATED flag being set and ++ * qdisc_reset()/some_qdisc_is_busy() in ++ * dev_deactivate(), so we can safely bail out ++ * early here to avoid data race between ++ * qdisc_deactivate() and some_qdisc_is_busy() ++ * for lockless qdisc. ++ */ ++ clear_bit(__QDISC_STATE_SCHED, &q->state); ++ continue; ++ } ++ + clear_bit(__QDISC_STATE_SCHED, &q->state); + qdisc_run(q); + if (root_lock) + spin_unlock(root_lock); + } ++ ++ rcu_read_unlock(); + } + + xfrm_dev_backlog(sd); +diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c +index 0723e7858658..2b87617d023d 100644 +--- a/net/sched/sch_generic.c ++++ b/net/sched/sch_generic.c +@@ -1176,8 +1176,10 @@ static void dev_reset_queue(struct net_device *dev, + qdisc_reset(qdisc); + + spin_unlock_bh(qdisc_lock(qdisc)); +- if (nolock) ++ if (nolock) { ++ clear_bit(__QDISC_STATE_MISSED, &qdisc->state); + spin_unlock_bh(&qdisc->seqlock); ++ } + } + + static bool some_qdisc_is_busy(struct net_device *dev) +-- +2.30.2 + diff --git a/queue-5.4/openvswitch-meter-fix-race-when-getting-now_ms.patch b/queue-5.4/openvswitch-meter-fix-race-when-getting-now_ms.patch new file mode 100644 index 00000000000..8e67cb15f74 --- /dev/null +++ b/queue-5.4/openvswitch-meter-fix-race-when-getting-now_ms.patch @@ -0,0 +1,57 @@ +From a482be5db3ccea5c1235d3e668e5fee690774244 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 13 May 2021 21:08:00 +0800 +Subject: openvswitch: meter: fix race when getting 
now_ms.
+
+From: Tao Liu
+
+[ Upstream commit e4df1b0c24350a0f00229ff895a91f1072bd850d ]
+
+We have observed meters working unexpectedly if traffic is 3+Gbit/s
+with multiple connections.
+
+now_ms is not protected by meter->lock, so we may get a negative
+long_delta_ms when another cpu updates meter->used; then:
+    delta_ms = (u32)long_delta_ms;
+which will be a large value.
+
+    band->bucket += delta_ms * band->rate;
+then we get a wrong band->bucket.
+
+The Open vSwitch userspace datapath fixed the same issue[1] some
+time ago, and we port the implementation to the kernel datapath.
+
+[1] https://patchwork.ozlabs.org/project/openvswitch/patch/20191025114436.9746-1-i.maximets@ovn.org/
+
+Fixes: 96fbc13d7e77 ("openvswitch: Add meter infrastructure")
+Signed-off-by: Tao Liu
+Suggested-by: Ilya Maximets
+Reviewed-by: Ilya Maximets
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/openvswitch/meter.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/net/openvswitch/meter.c b/net/openvswitch/meter.c
+index 541eea74ef7a..c37e09223cbb 100644
+--- a/net/openvswitch/meter.c
++++ b/net/openvswitch/meter.c
+@@ -460,6 +460,14 @@ bool ovs_meter_execute(struct datapath *dp, struct sk_buff *skb,
+ 	spin_lock(&meter->lock);
+ 
+ 	long_delta_ms = (now_ms - meter->used); /* ms */
++	if (long_delta_ms < 0) {
++		/* This condition means that we have several threads fighting
++		 * for a meter lock, and the one who received the packets a
++		 * bit later wins. Assuming that all racing threads received
++		 * packets at the same time to avoid overflow.
++		 */
++		long_delta_ms = 0;
++	}
+ 
+ 	/* Make sure delta_ms will not be too large, so that bucket will not
+ 	 * wrap around below. 
+-- 
+2.30.2
+
diff --git a/queue-5.4/sch_dsmark-fix-a-null-deref-in-qdisc_reset.patch b/queue-5.4/sch_dsmark-fix-a-null-deref-in-qdisc_reset.patch
new file mode 100644
index 00000000000..2fe44cf9dba
--- /dev/null
+++ b/queue-5.4/sch_dsmark-fix-a-null-deref-in-qdisc_reset.patch
@@ -0,0 +1,76 @@
+From c98b18f0256bfe2a1515036ea8dde5805c7d0fcf Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 23 May 2021 14:38:53 +0000
+Subject: sch_dsmark: fix a NULL deref in qdisc_reset()
+
+From: Taehee Yoo
+
+[ Upstream commit 9b76eade16423ef06829cccfe3e100cfce31afcd ]
+
+If Qdisc_ops->init() fails, Qdisc_ops->reset() would be called.
+When dsmark_init() (Qdisc_ops->init()) fails, it possibly doesn't
+initialize dsmark_qdisc_data->q. But dsmark_reset() (Qdisc_ops->reset())
+uses the dsmark_qdisc_data->q pointer without any NULL check,
+so a panic would occur.
+
+Test commands:
+    sysctl net.core.default_qdisc=dsmark -w
+    ip link add dummy0 type dummy
+    ip link add vw0 link dummy0 type virt_wifi
+    ip link set vw0 up
+
+Splat looks like:
+KASAN: null-ptr-deref in range [0x0000000000000018-0x000000000000001f]
+CPU: 3 PID: 684 Comm: ip Not tainted 5.12.0+ #910
+RIP: 0010:qdisc_reset+0x2b/0x680
+Code: 1f 44 00 00 48 b8 00 00 00 00 00 fc ff df 41 57 41 56 41 55 41 54
+55 48 89 fd 48 83 c7 18 53 48 89 fa 48 c1 ea 03 48 83 ec 20 <80> 3c 02
+00 0f 85 09 06 00 00 4c 8b 65 18 0f 1f 44 00 00 65 8b 1d
+RSP: 0018:ffff88800fda6bf8 EFLAGS: 00010282
+RAX: dffffc0000000000 RBX: ffff8880050ed800 RCX: 0000000000000000
+RDX: 0000000000000003 RSI: ffffffff99e34100 RDI: 0000000000000018
+RBP: 0000000000000000 R08: fffffbfff346b553 R09: fffffbfff346b553
+R10: 0000000000000001 R11: fffffbfff346b552 R12: ffffffffc0824940
+R13: ffff888109e83800 R14: 00000000ffffffff R15: ffffffffc08249e0
+FS: 00007f5042287680(0000) GS:ffff888119800000(0000)
+knlGS:0000000000000000
+CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+CR2: 000055ae1f4dbd90 CR3: 0000000006760002 CR4: 00000000003706e0
+DR0: 0000000000000000 
DR1: 0000000000000000 DR2: 0000000000000000 +DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 +Call Trace: + ? rcu_read_lock_bh_held+0xa0/0xa0 + dsmark_reset+0x3d/0xf0 [sch_dsmark] + qdisc_reset+0xa9/0x680 + qdisc_destroy+0x84/0x370 + qdisc_create_dflt+0x1fe/0x380 + attach_one_default_qdisc.constprop.41+0xa4/0x180 + dev_activate+0x4d5/0x8c0 + ? __dev_open+0x268/0x390 + __dev_open+0x270/0x390 + +Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") +Signed-off-by: Taehee Yoo +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + net/sched/sch_dsmark.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/net/sched/sch_dsmark.c b/net/sched/sch_dsmark.c +index 2b88710994d7..76ed1a05ded2 100644 +--- a/net/sched/sch_dsmark.c ++++ b/net/sched/sch_dsmark.c +@@ -406,7 +406,8 @@ static void dsmark_reset(struct Qdisc *sch) + struct dsmark_qdisc_data *p = qdisc_priv(sch); + + pr_debug("%s(sch %p,[qdisc %p])\n", __func__, sch, p); +- qdisc_reset(p->q); ++ if (p->q) ++ qdisc_reset(p->q); + sch->qstats.backlog = 0; + sch->q.qlen = 0; + } +-- +2.30.2 + diff --git a/queue-5.4/scsi-libsas-use-_safe-loop-in-sas_resume_port.patch b/queue-5.4/scsi-libsas-use-_safe-loop-in-sas_resume_port.patch new file mode 100644 index 00000000000..78f2003919b --- /dev/null +++ b/queue-5.4/scsi-libsas-use-_safe-loop-in-sas_resume_port.patch @@ -0,0 +1,51 @@ +From 5be2585584c7c03225567f315f4bbe3ccaee939c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 19 May 2021 17:20:27 +0300 +Subject: scsi: libsas: Use _safe() loop in sas_resume_port() + +From: Dan Carpenter + +[ Upstream commit 8c7e7b8486cda21269d393245883c5e4737d5ee7 ] + +If sas_notify_lldd_dev_found() fails then this code calls: + + sas_unregister_dev(port, dev); + +which removes "dev", our list iterator, from the list. This could lead to +an endless loop. We need to use list_for_each_entry_safe(). 
+ +Link: https://lore.kernel.org/r/YKUeq6gwfGcvvhty@mwanda +Fixes: 303694eeee5e ("[SCSI] libsas: suspend / resume support") +Reviewed-by: John Garry +Signed-off-by: Dan Carpenter +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + drivers/scsi/libsas/sas_port.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/scsi/libsas/sas_port.c b/drivers/scsi/libsas/sas_port.c +index 7c86fd248129..f751a12f92ea 100644 +--- a/drivers/scsi/libsas/sas_port.c ++++ b/drivers/scsi/libsas/sas_port.c +@@ -25,7 +25,7 @@ static bool phy_is_wideport_member(struct asd_sas_port *port, struct asd_sas_phy + + static void sas_resume_port(struct asd_sas_phy *phy) + { +- struct domain_device *dev; ++ struct domain_device *dev, *n; + struct asd_sas_port *port = phy->port; + struct sas_ha_struct *sas_ha = phy->ha; + struct sas_internal *si = to_sas_internal(sas_ha->core.shost->transportt); +@@ -44,7 +44,7 @@ static void sas_resume_port(struct asd_sas_phy *phy) + * 1/ presume every device came back + * 2/ force the next revalidation to check all expander phys + */ +- list_for_each_entry(dev, &port->dev_list, dev_list_node) { ++ list_for_each_entry_safe(dev, n, &port->dev_list, dev_list_node) { + int i, rc; + + rc = sas_notify_lldd_dev_found(dev); +-- +2.30.2 + diff --git a/queue-5.4/series b/queue-5.4/series index cce790d3528..dcde290bad1 100644 --- a/queue-5.4/series +++ b/queue-5.4/series @@ -132,3 +132,38 @@ drm-amd-display-disconnect-non-dp-with-no-edid.patch drm-amd-amdgpu-fix-refcount-leak.patch drm-amdgpu-fix-a-use-after-free.patch drm-amd-amdgpu-fix-a-potential-deadlock-in-gpu-reset.patch +net-netcp-fix-an-error-message.patch +net-dsa-fix-error-code-getting-shifted-with-4-in-dsa.patch +asoc-cs42l42-regmap-must-use_single_read-write.patch +vfio-ccw-check-initialized-flag-in-cp_init.patch +net-really-orphan-skbs-tied-to-closing-sk.patch +net-fec-fix-the-potential-memory-leak-in-fec_enet_in.patch 
+net-mdio-thunder-fix-a-double-free-issue-in-the-.rem.patch +net-mdio-octeon-fix-some-double-free-issues.patch +openvswitch-meter-fix-race-when-getting-now_ms.patch +tls-splice-check-splice_f_nonblock-instead-of-msg_do.patch +net-sched-fix-packet-stuck-problem-for-lockless-qdis.patch +net-sched-fix-tx-action-rescheduling-issue-during-de.patch +net-sched-fix-tx-action-reschedule-issue-with-stoppe.patch +net-hso-check-for-allocation-failure-in-hso_create_b.patch +net-bnx2-fix-error-return-code-in-bnx2_init_board.patch +bnxt_en-include-new-p5-hv-definition-in-vf-check.patch +mld-fix-panic-in-mld_newpack.patch +gve-check-tx-qpl-was-actually-assigned.patch +gve-update-mgmt_msix_idx-if-num_ntfy-changes.patch +gve-add-null-pointer-checks-when-freeing-irqs.patch +gve-upgrade-memory-barrier-in-poll-routine.patch +gve-correct-skb-queue-index-validation.patch +cxgb4-avoid-accessing-registers-when-clearing-filter.patch +staging-emxx_udc-fix-loop-in-_nbu2ss_nuke.patch +asoc-cs35l33-fix-an-error-code-in-probe.patch +bpf-set-mac_len-in-bpf_skb_change_head.patch +ixgbe-fix-large-mtu-request-from-vf.patch +scsi-libsas-use-_safe-loop-in-sas_resume_port.patch +net-lantiq-fix-memory-corruption-in-rx-ring.patch +ipv6-record-frag_max_size-in-atomic-fragments-in-inp.patch +alsa-usb-audio-scarlett2-snd_scarlett_gen2_controls_.patch +net-ethernet-mtk_eth_soc-fix-packet-statistics-suppo.patch +sch_dsmark-fix-a-null-deref-in-qdisc_reset.patch +mips-alchemy-xxs1500-add-gpio-au1000.h-header-file.patch +mips-ralink-export-rt_sysc_membase-for-rt2880_wdt.c.patch diff --git a/queue-5.4/staging-emxx_udc-fix-loop-in-_nbu2ss_nuke.patch b/queue-5.4/staging-emxx_udc-fix-loop-in-_nbu2ss_nuke.patch new file mode 100644 index 00000000000..a452e9c0062 --- /dev/null +++ b/queue-5.4/staging-emxx_udc-fix-loop-in-_nbu2ss_nuke.patch @@ -0,0 +1,49 @@ +From ffcc1838a27786c80026d39e0f95d5382f97bb8e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 19 May 2021 17:16:50 +0300 +Subject: staging: emxx_udc: fix 
loop in _nbu2ss_nuke() + +From: Dan Carpenter + +[ Upstream commit e0112a7c9e847ada15a631b88e279d547e8f26a7 ] + +The _nbu2ss_ep_done() function calls: + + list_del_init(&req->queue); + +which means that the loop will never exit. + +Fixes: ca3d253eb967 ("Staging: emxx_udc: Iterate list using list_for_each_entry") +Signed-off-by: Dan Carpenter +Link: https://lore.kernel.org/r/YKUd0sDyjm/lkJfJ@mwanda +Signed-off-by: Greg Kroah-Hartman +Signed-off-by: Sasha Levin +--- + drivers/staging/emxx_udc/emxx_udc.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/staging/emxx_udc/emxx_udc.c b/drivers/staging/emxx_udc/emxx_udc.c +index a6c893ddbf28..cc4c18c3fb36 100644 +--- a/drivers/staging/emxx_udc/emxx_udc.c ++++ b/drivers/staging/emxx_udc/emxx_udc.c +@@ -2064,7 +2064,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc, + struct nbu2ss_ep *ep, + int status) + { +- struct nbu2ss_req *req; ++ struct nbu2ss_req *req, *n; + + /* Endpoint Disable */ + _nbu2ss_epn_exit(udc, ep); +@@ -2076,7 +2076,7 @@ static int _nbu2ss_nuke(struct nbu2ss_udc *udc, + return 0; + + /* called with irqs blocked */ +- list_for_each_entry(req, &ep->queue, queue) { ++ list_for_each_entry_safe(req, n, &ep->queue, queue) { + _nbu2ss_ep_done(ep, req, status); + } + +-- +2.30.2 + diff --git a/queue-5.4/tls-splice-check-splice_f_nonblock-instead-of-msg_do.patch b/queue-5.4/tls-splice-check-splice_f_nonblock-instead-of-msg_do.patch new file mode 100644 index 00000000000..a8d4677242e --- /dev/null +++ b/queue-5.4/tls-splice-check-splice_f_nonblock-instead-of-msg_do.patch @@ -0,0 +1,75 @@ +From fe519416ea4fb245b7f10f88301d9ff499dc992e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 May 2021 11:11:02 +0800 +Subject: tls splice: check SPLICE_F_NONBLOCK instead of MSG_DONTWAIT + +From: Jim Ma + +[ Upstream commit 974271e5ed45cfe4daddbeb16224a2156918530e ] + +In tls_sw_splice_read, checkout MSG_* is inappropriate, should use +SPLICE_*, update tls_wait_data to accept 
nonblock arguments instead +of flags for recvmsg and splice. + +Fixes: c46234ebb4d1 ("tls: RX path for ktls") +Signed-off-by: Jim Ma +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + net/tls/tls_sw.c | 11 ++++++----- + 1 file changed, 6 insertions(+), 5 deletions(-) + +diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c +index 0d524ef0d8c8..cdb65aa54be7 100644 +--- a/net/tls/tls_sw.c ++++ b/net/tls/tls_sw.c +@@ -37,6 +37,7 @@ + + #include + #include ++#include + #include + + #include +@@ -1278,7 +1279,7 @@ int tls_sw_sendpage(struct sock *sk, struct page *page, + } + + static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock, +- int flags, long timeo, int *err) ++ bool nonblock, long timeo, int *err) + { + struct tls_context *tls_ctx = tls_get_ctx(sk); + struct tls_sw_context_rx *ctx = tls_sw_ctx_rx(tls_ctx); +@@ -1303,7 +1304,7 @@ static struct sk_buff *tls_wait_data(struct sock *sk, struct sk_psock *psock, + if (sock_flag(sk, SOCK_DONE)) + return NULL; + +- if ((flags & MSG_DONTWAIT) || !timeo) { ++ if (nonblock || !timeo) { + *err = -EAGAIN; + return NULL; + } +@@ -1781,7 +1782,7 @@ int tls_sw_recvmsg(struct sock *sk, + bool async_capable; + bool async = false; + +- skb = tls_wait_data(sk, psock, flags, timeo, &err); ++ skb = tls_wait_data(sk, psock, flags & MSG_DONTWAIT, timeo, &err); + if (!skb) { + if (psock) { + int ret = __tcp_bpf_recvmsg(sk, psock, +@@ -1985,9 +1986,9 @@ ssize_t tls_sw_splice_read(struct socket *sock, loff_t *ppos, + + lock_sock(sk); + +- timeo = sock_rcvtimeo(sk, flags & MSG_DONTWAIT); ++ timeo = sock_rcvtimeo(sk, flags & SPLICE_F_NONBLOCK); + +- skb = tls_wait_data(sk, NULL, flags, timeo, &err); ++ skb = tls_wait_data(sk, NULL, flags & SPLICE_F_NONBLOCK, timeo, &err); + if (!skb) + goto splice_read_end; + +-- +2.30.2 + diff --git a/queue-5.4/vfio-ccw-check-initialized-flag-in-cp_init.patch b/queue-5.4/vfio-ccw-check-initialized-flag-in-cp_init.patch new file mode 100644 index 00000000000..bb256a2b9b8 --- 
/dev/null +++ b/queue-5.4/vfio-ccw-check-initialized-flag-in-cp_init.patch @@ -0,0 +1,48 @@ +From 8b43176c71a7ec174eb740d81398d01e1e3b863f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 11 May 2021 21:56:29 +0200 +Subject: vfio-ccw: Check initialized flag in cp_init() + +From: Eric Farman + +[ Upstream commit c6c82e0cd8125d30f2f1b29205c7e1a2f1a6785b ] + +We have a really nice flag in the channel_program struct that +indicates if it had been initialized by cp_init(), and use it +as a guard in the other cp accessor routines, but not for a +duplicate call into cp_init(). The possibility of this occurring +is low, because that flow is protected by the private->io_mutex +and FSM CP_PROCESSING state. But then why bother checking it +in (for example) cp_prefetch() then? + +Let's just be consistent and check for that in cp_init() too. + +Fixes: 71189f263f8a3 ("vfio-ccw: make it safe to access channel programs") +Signed-off-by: Eric Farman +Reviewed-by: Cornelia Huck +Acked-by: Matthew Rosato +Message-Id: <20210511195631.3995081-2-farman@linux.ibm.com> +Signed-off-by: Cornelia Huck +Signed-off-by: Sasha Levin +--- + drivers/s390/cio/vfio_ccw_cp.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/drivers/s390/cio/vfio_ccw_cp.c b/drivers/s390/cio/vfio_ccw_cp.c +index 3645d1720c4b..9628e0f3add3 100644 +--- a/drivers/s390/cio/vfio_ccw_cp.c ++++ b/drivers/s390/cio/vfio_ccw_cp.c +@@ -636,6 +636,10 @@ int cp_init(struct channel_program *cp, struct device *mdev, union orb *orb) + { + int ret; + ++ /* this is an error in the caller */ ++ if (cp->initialized) ++ return -EBUSY; ++ + /* + * XXX: + * Only support prefetch enable mode now. +-- +2.30.2 +