From: Sasha Levin
Date: Fri, 10 Jan 2025 14:26:07 +0000 (-0500)
Subject: Fixes for 5.10
X-Git-Tag: v6.1.125~67
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=79b1d256140019541f8163dee09a07c7697de012;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.10

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.10/cxgb4-avoid-removal-of-uninserted-tid.patch b/queue-5.10/cxgb4-avoid-removal-of-uninserted-tid.patch
new file mode 100644
index 00000000000..b8f75f0ab8d
--- /dev/null
+++ b/queue-5.10/cxgb4-avoid-removal-of-uninserted-tid.patch
@@ -0,0 +1,42 @@
+From eb89b9aa93f4f13d5cb825e0dfd0d4892db2c76f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 3 Jan 2025 14:53:27 +0530
+Subject: cxgb4: Avoid removal of uninserted tid
+
+From: Anumula Murali Mohan Reddy
+
+[ Upstream commit 4c1224501e9d6c5fd12d83752f1c1b444e0e3418 ]
+
+During an ARP failure, the tid is not inserted, but _c4iw_free_ep()
+still attempts to remove it, which results in an error.
+This patch fixes the issue by avoiding removal of an uninserted tid.
+
+Fixes: 59437d78f088 ("cxgb4/chtls: fix ULD connection failures due to wrong TID base")
+Signed-off-by: Anumula Murali Mohan Reddy
+Signed-off-by: Potnuri Bharat Teja
+Link: https://patch.msgid.link/20250103092327.1011925-1-anumula@chelsio.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+index 720f2ca7f856..75ff6bf1b58e 100644
+--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
++++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+@@ -1800,7 +1800,10 @@ void cxgb4_remove_tid(struct tid_info *t, unsigned int chan, unsigned int tid,
+ struct adapter *adap = container_of(t, struct adapter, tids);
+ struct sk_buff *skb;
+
+- WARN_ON(tid_out_of_range(&adap->tids, tid));
++ if (tid_out_of_range(&adap->tids, tid)) {
++ dev_err(adap->pdev_dev, "tid %d out of range\n", tid);
++ return;
++ }
+
+ if (t->tid_tab[tid - adap->tids.tid_base]) {
+ t->tid_tab[tid - adap->tids.tid_base] = NULL;
+--
+2.39.5
+
diff --git a/queue-5.10/ieee802154-ca8210-add-missing-check-for-kfifo_alloc-.patch b/queue-5.10/ieee802154-ca8210-add-missing-check-for-kfifo_alloc-.patch
new file mode 100644
index 00000000000..b0446e0b905
--- /dev/null
+++ b/queue-5.10/ieee802154-ca8210-add-missing-check-for-kfifo_alloc-.patch
@@ -0,0 +1,45 @@
+From 7033209b2903fab75901792cd3a0a3548586b891 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 29 Oct 2024 19:27:12 +0100
+Subject: ieee802154: ca8210: Add missing check for kfifo_alloc() in
+ ca8210_probe()
+
+From: Keisuke Nishimura
+
+[ Upstream commit 2c87309ea741341c6722efdf1fb3f50dd427c823 ]
+
+ca8210_test_interface_init() returns the result of kfifo_alloc(),
+which can be non-zero in case of an error. The caller, ca8210_probe(),
+should check the return value and handle the error if it fails.
+
+Fixes: ded845a781a5 ("ieee802154: Add CA8210 IEEE 802.15.4 device driver")
+Signed-off-by: Keisuke Nishimura
+Reviewed-by: Simon Horman
+Reviewed-by: Miquel Raynal
+Link: https://lore.kernel.org/20241029182712.318271-1-keisuke.nishimura@inria.fr
+Signed-off-by: Stefan Schmidt
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ieee802154/ca8210.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/ieee802154/ca8210.c b/drivers/net/ieee802154/ca8210.c
+index 0ce426c0c0bf..9a082910ec59 100644
+--- a/drivers/net/ieee802154/ca8210.c
++++ b/drivers/net/ieee802154/ca8210.c
+@@ -3125,7 +3125,11 @@ static int ca8210_probe(struct spi_device *spi_device)
+ spi_set_drvdata(priv->spi, priv);
+ if (IS_ENABLED(CONFIG_IEEE802154_CA8210_DEBUGFS)) {
+ cascoda_api_upstream = ca8210_test_int_driver_write;
+- ca8210_test_interface_init(priv);
++ ret = ca8210_test_interface_init(priv);
++ if (ret) {
++ dev_crit(&spi_device->dev, "ca8210_test_interface_init failed\n");
++ goto error;
++ }
+ } else {
+ cascoda_api_upstream = NULL;
+ }
+--
+2.39.5
+
diff --git a/queue-5.10/net-802-llc-snap-oid-pid-lookup-on-start-of-skb-data.patch b/queue-5.10/net-802-llc-snap-oid-pid-lookup-on-start-of-skb-data.patch
new file mode 100644
index 00000000000..069f85020a5
--- /dev/null
+++ b/queue-5.10/net-802-llc-snap-oid-pid-lookup-on-start-of-skb-data.patch
@@ -0,0 +1,56 @@
+From 6d4d12571ae6737e00d3b8ba25f1a7cd1dffe614 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 2 Jan 2025 20:23:00 -0500
+Subject: net: 802: LLC+SNAP OID:PID lookup on start of skb data
+
+From: Antonio Pastor
+
+[ Upstream commit 1e9b0e1c550c42c13c111d1a31e822057232abc4 ]
+
+802.2+LLC+SNAP frames received by napi_complete_done() with GRO and DSA
+have skb->transport_header set two bytes short, i.e. pointing 2 bytes
+before network_header & skb->data. This was an issue because snap_rcv()
+expected the offset to point to the SNAP header (OID:PID), causing the
+packet to be dropped.
+
+A fix in llc_fixup_skb() (a024e377efed) resets transport_header for any
+LLC consumers that may care about it and stops SNAP packets from being
+dropped, but it doesn't fix the real problem: LLC and SNAP should not
+use the transport_header offset.
+
+This patch eliminates the use of the transport_header offset for the
+SNAP lookup of OID:PID so that SNAP does not rely on the offset at all.
+The offset is reset after the pull for any SNAP packet consumers that
+may (but shouldn't) use it.
+
+Fixes: fda55eca5a33 ("net: introduce skb_transport_header_was_set()")
+Signed-off-by: Antonio Pastor
+Reviewed-by: Eric Dumazet
+Link: https://patch.msgid.link/20250103012303.746521-1-antonio.pastor@gmail.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ net/802/psnap.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/net/802/psnap.c b/net/802/psnap.c
+index 4492e8d7ad20..ed6e17c8cce9 100644
+--- a/net/802/psnap.c
++++ b/net/802/psnap.c
+@@ -55,11 +55,11 @@ static int snap_rcv(struct sk_buff *skb, struct net_device *dev,
+ goto drop;
+
+ rcu_read_lock();
+- proto = find_snap_client(skb_transport_header(skb));
++ proto = find_snap_client(skb->data);
+ if (proto) {
+ /* Pass the frame on. */
+- skb->transport_header += 5;
+ skb_pull_rcsum(skb, 5);
++ skb_reset_transport_header(skb);
+ rc = proto->rcvfunc(skb, dev, &snap_packet_type, orig_dev);
+ }
+ rcu_read_unlock();
+--
+2.39.5
+
diff --git a/queue-5.10/net-hns3-initialize-reset_timer-before-hclgevf_misc_.patch b/queue-5.10/net-hns3-initialize-reset_timer-before-hclgevf_misc_.patch
new file mode 100644
index 00000000000..b72823247c7
--- /dev/null
+++ b/queue-5.10/net-hns3-initialize-reset_timer-before-hclgevf_misc_.patch
@@ -0,0 +1,45 @@
+From 566a2da5a1c692b7238153f9260fed8dfe4d0def Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 25 Oct 2024 17:29:36 +0800
+Subject: net: hns3: initialize reset_timer before hclgevf_misc_irq_init()
+
+From: Jian Shen
+
+[ Upstream commit d1c2e2961ab460ac2433ff8ad46000582abc573c ]
+
+Currently the misc irq is initialized before the reset_timer is set
+up, but the irq handler accesses the reset_timer. So initialize the
+reset_timer earlier.
+
+Fixes: ff200099d271 ("net: hns3: remove unnecessary work in hclgevf_main")
+Signed-off-by: Jian Shen
+Signed-off-by: Jijie Shao
+Signed-off-by: Paolo Abeni
+Signed-off-by: Sasha Levin
+---
+ drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+index 755935f9efc8..8193c5afe610 100644
+--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c
+@@ -2656,6 +2656,7 @@ static void hclgevf_state_init(struct hclgevf_dev *hdev)
+ clear_bit(HCLGEVF_STATE_RST_FAIL, &hdev->state);
+
+ INIT_DELAYED_WORK(&hdev->service_task, hclgevf_service_task);
++ timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ mutex_init(&hdev->mbx_resp.mbx_mutex);
+ sema_init(&hdev->reset_sem, 1);
+@@ -3279,7 +3280,6 @@ static int hclgevf_init_hdev(struct hclgevf_dev *hdev)
+ HCLGEVF_DRIVER_NAME);
+
+ hclgevf_task_schedule(hdev, round_jiffies_relative(HZ));
+- timer_setup(&hdev->reset_timer, hclgevf_reset_timer, 0);
+
+ return 0;
+
+--
+2.39.5
+
diff --git a/queue-5.10/net_sched-cls_flow-validate-tca_flow_rshift-attribut.patch b/queue-5.10/net_sched-cls_flow-validate-tca_flow_rshift-attribut.patch
new file mode 100644
index 00000000000..a1640f6b63b
--- /dev/null
+++ b/queue-5.10/net_sched-cls_flow-validate-tca_flow_rshift-attribut.patch
@@ -0,0 +1,74 @@
+From b55a7d07c0fe5e575d5317371028375837306d28 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 3 Jan 2025 10:45:46 +0000
+Subject: net_sched: cls_flow: validate TCA_FLOW_RSHIFT attribute
+
+From: Eric Dumazet
+
+[ Upstream commit a039e54397c6a75b713b9ce7894a62e06956aa92 ]
+
+syzbot found that the TCA_FLOW_RSHIFT attribute was not validated.
+Right shifting a 32-bit integer is undefined for large shift values.
+
+UBSAN: shift-out-of-bounds in net/sched/cls_flow.c:329:23
+shift exponent 9445 is too large for 32-bit type 'u32' (aka 'unsigned int')
+CPU: 1 UID: 0 PID: 54 Comm: kworker/u8:3 Not tainted 6.13.0-rc3-syzkaller-00180-g4f619d518db9 #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
+Workqueue: ipv6_addrconf addrconf_dad_work
+Call Trace:
+ <TASK>
+ __dump_stack lib/dump_stack.c:94 [inline]
+ dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
+ ubsan_epilogue lib/ubsan.c:231 [inline]
+ __ubsan_handle_shift_out_of_bounds+0x3c8/0x420 lib/ubsan.c:468
+ flow_classify+0x24d5/0x25b0 net/sched/cls_flow.c:329
+ tc_classify include/net/tc_wrapper.h:197 [inline]
+ __tcf_classify net/sched/cls_api.c:1771 [inline]
+ tcf_classify+0x420/0x1160 net/sched/cls_api.c:1867
+ sfb_classify net/sched/sch_sfb.c:260 [inline]
+ sfb_enqueue+0x3ad/0x18b0 net/sched/sch_sfb.c:318
+ dev_qdisc_enqueue+0x4b/0x290 net/core/dev.c:3793
+ __dev_xmit_skb net/core/dev.c:3889 [inline]
+ __dev_queue_xmit+0xf0e/0x3f50 net/core/dev.c:4400
+ dev_queue_xmit include/linux/netdevice.h:3168 [inline]
+ neigh_hh_output include/net/neighbour.h:523 [inline]
+ neigh_output include/net/neighbour.h:537 [inline]
+ ip_finish_output2+0xd41/0x1390 net/ipv4/ip_output.c:236
+ iptunnel_xmit+0x55d/0x9b0 net/ipv4/ip_tunnel_core.c:82
+ udp_tunnel_xmit_skb+0x262/0x3b0 net/ipv4/udp_tunnel_core.c:173
+ geneve_xmit_skb drivers/net/geneve.c:916 [inline]
+ geneve_xmit+0x21dc/0x2d00 drivers/net/geneve.c:1039
+ __netdev_start_xmit include/linux/netdevice.h:5002 [inline]
+ netdev_start_xmit include/linux/netdevice.h:5011 [inline]
+ xmit_one net/core/dev.c:3590 [inline]
+ dev_hard_start_xmit+0x27a/0x7d0 net/core/dev.c:3606
+ __dev_queue_xmit+0x1b73/0x3f50 net/core/dev.c:4434
+
+Fixes: e5dfb815181f ("[NET_SCHED]: Add flow classifier")
+Reported-by: syzbot+1dbb57d994e54aaa04d2@syzkaller.appspotmail.com
+Closes: https://lore.kernel.org/netdev/6777bf49.050a0220.178762.0040.GAE@google.com/T/#u
+Signed-off-by: Eric Dumazet
+Link: https://patch.msgid.link/20250103104546.3714168-1-edumazet@google.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ net/sched/cls_flow.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/net/sched/cls_flow.c b/net/sched/cls_flow.c
+index 87398af2715a..117c7b038591 100644
+--- a/net/sched/cls_flow.c
++++ b/net/sched/cls_flow.c
+@@ -354,7 +354,8 @@ static const struct nla_policy flow_policy[TCA_FLOW_MAX + 1] = {
+ [TCA_FLOW_KEYS] = { .type = NLA_U32 },
+ [TCA_FLOW_MODE] = { .type = NLA_U32 },
+ [TCA_FLOW_BASECLASS] = { .type = NLA_U32 },
+- [TCA_FLOW_RSHIFT] = { .type = NLA_U32 },
++ [TCA_FLOW_RSHIFT] = NLA_POLICY_MAX(NLA_U32,
++ 31 /* BITS_PER_U32 - 1 */),
+ [TCA_FLOW_ADDEND] = { .type = NLA_U32 },
+ [TCA_FLOW_MASK] = { .type = NLA_U32 },
+ [TCA_FLOW_XOR] = { .type = NLA_U32 },
+--
+2.39.5
+
diff --git a/queue-5.10/netfilter-conntrack-clamp-maximum-hashtable-size-to-.patch b/queue-5.10/netfilter-conntrack-clamp-maximum-hashtable-size-to-.patch
new file mode 100644
index 00000000000..509017bf811
--- /dev/null
+++ b/queue-5.10/netfilter-conntrack-clamp-maximum-hashtable-size-to-.patch
@@ -0,0 +1,48 @@
+From 2af1b8d943a169877ee7ac6a7d9d75456ceb632e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 8 Jan 2025 22:56:33 +0100
+Subject: netfilter: conntrack: clamp maximum hashtable size to INT_MAX
+
+From: Pablo Neira Ayuso
+
+[ Upstream commit b541ba7d1f5a5b7b3e2e22dc9e40e18a7d6dbc13 ]
+
+Use INT_MAX as the maximum size for the conntrack hashtable. Otherwise,
+it is possible to hit WARN_ON_ONCE in __kvmalloc_node_noprof() when
+resizing the hashtable because __GFP_NOWARN is unset. See:
+
+ 0708a0afe291 ("mm: Consider __GFP_NOWARN flag for oversized kvmalloc() calls")
+
+Note: hashtable resize is only possible from init_netns.
+
+Fixes: 9cc1c73ad666 ("netfilter: conntrack: avoid integer overflow when resizing")
+Signed-off-by: Pablo Neira Ayuso
+Signed-off-by: Sasha Levin
+---
+ net/netfilter/nf_conntrack_core.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c
+index f82a234ac53a..99d5d8cd3895 100644
+--- a/net/netfilter/nf_conntrack_core.c
++++ b/net/netfilter/nf_conntrack_core.c
+@@ -2435,12 +2435,15 @@ void *nf_ct_alloc_hashtable(unsigned int *sizep, int nulls)
+ struct hlist_nulls_head *hash;
+ unsigned int nr_slots, i;
+
+- if (*sizep > (UINT_MAX / sizeof(struct hlist_nulls_head)))
++ if (*sizep > (INT_MAX / sizeof(struct hlist_nulls_head)))
+ return NULL;
+
+ BUILD_BUG_ON(sizeof(struct hlist_nulls_head) != sizeof(struct hlist_head));
+ nr_slots = *sizep = roundup(*sizep, PAGE_SIZE / sizeof(struct hlist_nulls_head));
+
++ if (nr_slots > (INT_MAX / sizeof(struct hlist_nulls_head)))
++ return NULL;
++
+ hash = kvcalloc(nr_slots, sizeof(struct hlist_nulls_head), GFP_KERNEL);
+
+ if (hash && nulls)
+--
+2.39.5
+
diff --git a/queue-5.10/netfilter-nf_tables-imbalance-in-flowtable-binding.patch b/queue-5.10/netfilter-nf_tables-imbalance-in-flowtable-binding.patch
new file mode 100644
index 00000000000..85d7a4dcc64
--- /dev/null
+++ b/queue-5.10/netfilter-nf_tables-imbalance-in-flowtable-binding.patch
@@ -0,0 +1,117 @@
+From 432bcc021738b8fc2a71422de24939a4dc39189e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 2 Jan 2025 13:01:13 +0100
+Subject: netfilter: nf_tables: imbalance in flowtable binding
+
+From: Pablo Neira Ayuso
+
+[ Upstream commit 13210fc63f353fe78584048079343413a3cdf819 ]
+
+All these cases cause an imbalance between BIND and UNBIND calls:
+
+- Delete an interface from a flowtable with multiple interfaces
+
+- Add a (device to a) flowtable with --check flag
+
+- Delete a netns containing a flowtable
+
+- In an interactive nft session, create a table with owner flag and
+ flowtable inside, then quit.
+
+Fix it by calling FLOW_BLOCK_UNBIND when unregistering hooks, then
+remove the late FLOW_BLOCK_UNBIND call when destroying the flowtable.
+
+Fixes: ff4bf2f42a40 ("netfilter: nf_tables: add nft_unregister_flowtable_hook()")
+Reported-by: Phil Sutter
+Tested-by: Phil Sutter
+Signed-off-by: Pablo Neira Ayuso
+Signed-off-by: Sasha Levin
+---
+ net/netfilter/nf_tables_api.c | 15 +++++++++++----
+ 1 file changed, 11 insertions(+), 4 deletions(-)
+
+diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c
+index 28ea2ed3f337..d4c9ea4fda9c 100644
+--- a/net/netfilter/nf_tables_api.c
++++ b/net/netfilter/nf_tables_api.c
+@@ -7006,6 +7006,7 @@ static void nft_unregister_flowtable_hook(struct net *net,
+ }
+
+ static void __nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list,
+ bool release_netdev)
+ {
+@@ -7013,6 +7014,8 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+
+ list_for_each_entry_safe(hook, next, hook_list, list) {
+ nf_unregister_net_hook(net, &hook->ops);
++ flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
++ FLOW_BLOCK_UNBIND);
+ if (release_netdev) {
+ list_del(&hook->list);
+ kfree_rcu(hook, rcu);
+@@ -7021,9 +7024,10 @@ static void __nft_unregister_flowtable_net_hooks(struct net *net,
+ }
+
+ static void nft_unregister_flowtable_net_hooks(struct net *net,
++ struct nft_flowtable *flowtable,
+ struct list_head *hook_list)
+ {
+- __nft_unregister_flowtable_net_hooks(net, hook_list, false);
++ __nft_unregister_flowtable_net_hooks(net, flowtable, hook_list, false);
+ }
+
+ static int nft_register_flowtable_net_hooks(struct net *net,
+@@ -7645,8 +7649,6 @@ static void nf_tables_flowtable_destroy(struct nft_flowtable *flowtable)
+
+ flowtable->data.type->free(&flowtable->data);
+ list_for_each_entry_safe(hook, next, &flowtable->hook_list, list) {
+- flowtable->data.type->setup(&flowtable->data, hook->ops.dev,
+- FLOW_BLOCK_UNBIND);
+ list_del_rcu(&hook->list);
+ kfree_rcu(hook, rcu);
+ }
+@@ -8787,6 +8789,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ &nft_trans_flowtable_hooks(trans),
+ NFT_MSG_DELFLOWTABLE);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+@@ -8795,6 +8798,7 @@ static int nf_tables_commit(struct net *net, struct sk_buff *skb)
+ &nft_trans_flowtable(trans)->hook_list,
+ NFT_MSG_DELFLOWTABLE);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -9014,11 +9018,13 @@ static int __nf_tables_abort(struct net *net, enum nfnl_abort_action action)
+ case NFT_MSG_NEWFLOWTABLE:
+ if (nft_trans_flowtable_update(trans)) {
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable_hooks(trans));
+ } else {
+ nft_use_dec_restore(&trans->ctx.table->use);
+ list_del_rcu(&nft_trans_flowtable(trans)->list);
+ nft_unregister_flowtable_net_hooks(net,
++ nft_trans_flowtable(trans),
+ &nft_trans_flowtable(trans)->hook_list);
+ }
+ break;
+@@ -9582,7 +9588,8 @@ static void __nft_release_hook(struct net *net, struct nft_table *table)
+ list_for_each_entry(chain, &table->chains, list)
+ __nf_tables_unregister_hook(net, table, chain, true);
+ list_for_each_entry(flowtable, &table->flowtables, list)
+- __nft_unregister_flowtable_net_hooks(net, &flowtable->hook_list,
++ __nft_unregister_flowtable_net_hooks(net, flowtable,
++ &flowtable->hook_list,
+ true);
+ }
+
+--
+2.39.5
+
diff --git a/queue-5.10/series b/queue-5.10/series
index ee381655b82..4840611b8c1 100644
--- a/queue-5.10/series
+++ b/queue-5.10/series
@@ -6,3 +6,13 @@ dm-array-fix-cursor-index-when-skipping-across-block.patch
 exfat-fix-the-infinite-loop-in-exfat_readdir.patch
 asoc-mediatek-disable-buffer-pre-allocation.patch
 netfilter-nft_dynset-honor-stateful-expressions-in-s.patch
+ieee802154-ca8210-add-missing-check-for-kfifo_alloc-.patch
+net-802-llc-snap-oid-pid-lookup-on-start-of-skb-data.patch
+tcp-dccp-complete-lockless-accesses-to-sk-sk_max_ack.patch
+tcp-dccp-allow-a-connection-when-sk_max_ack_backlog-.patch
+net_sched-cls_flow-validate-tca_flow_rshift-attribut.patch
+cxgb4-avoid-removal-of-uninserted-tid.patch
+tls-fix-tls_sw_sendmsg-error-handling.patch
+net-hns3-initialize-reset_timer-before-hclgevf_misc_.patch
+netfilter-nf_tables-imbalance-in-flowtable-binding.patch
+netfilter-conntrack-clamp-maximum-hashtable-size-to-.patch
diff --git a/queue-5.10/tcp-dccp-allow-a-connection-when-sk_max_ack_backlog-.patch b/queue-5.10/tcp-dccp-allow-a-connection-when-sk_max_ack_backlog-.patch
new file mode 100644
index 00000000000..e91b5f27b04
--- /dev/null
+++ b/queue-5.10/tcp-dccp-allow-a-connection-when-sk_max_ack_backlog-.patch
@@ -0,0 +1,47 @@
+From c74eb28c711e39b3e249b3dc2f5c89c6071afdfe Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 2 Jan 2025 17:14:26 +0000
+Subject: tcp/dccp: allow a connection when sk_max_ack_backlog is zero
+
+From: Zhongqiu Duan
+
+[ Upstream commit 3479c7549fb1dfa7a1db4efb7347c7b8ef50de4b ]
+
+If the backlog of listen() is set to zero, sk_acceptq_is_full() allows
+one connection to be made, but inet_csk_reqsk_queue_is_full() does not.
+When net.ipv4.tcp_syncookies is zero, inet_csk_reqsk_queue_is_full()
+causes an immediate drop before the sk_acceptq_is_full() check in
+tcp_conn_request(), so no connection can be made at all.
+
+This patch keeps the behavior consistent with 64a146513f8f ("[NET]:
+Revert incorrect accept queue backlog changes.").
+
+Link: https://lore.kernel.org/netdev/20250102080258.53858-1-kuniyu@amazon.com/
+Fixes: ef547f2ac16b ("tcp: remove max_qlen_log")
+Signed-off-by: Zhongqiu Duan
+Reviewed-by: Kuniyuki Iwashima
+Reviewed-by: Jason Xing
+Reviewed-by: Eric Dumazet
+Link: https://patch.msgid.link/20250102171426.915276-1-dzq.aishenghu0@gmail.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ include/net/inet_connection_sock.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index 2a4bf2553476..cfb66f5a5076 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -282,7 +282,7 @@ static inline int inet_csk_reqsk_queue_len(const struct sock *sk)
+
+ static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+ {
+- return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog);
++ return inet_csk_reqsk_queue_len(sk) > READ_ONCE(sk->sk_max_ack_backlog);
+ }
+
+ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+--
+2.39.5
+
diff --git a/queue-5.10/tcp-dccp-complete-lockless-accesses-to-sk-sk_max_ack.patch b/queue-5.10/tcp-dccp-complete-lockless-accesses-to-sk-sk_max_ack.patch
new file mode 100644
index 00000000000..80acb012557
--- /dev/null
+++ b/queue-5.10/tcp-dccp-complete-lockless-accesses-to-sk-sk_max_ack.patch
@@ -0,0 +1,40 @@
+From 5ecd7fee702fefd99f3e6353bd78b09df947b21b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 31 Mar 2024 17:05:21 +0800
+Subject: tcp/dccp: complete lockless accesses to sk->sk_max_ack_backlog
+
+From: Jason Xing
+
+[ Upstream commit 9a79c65f00e2b036e17af3a3a607d7d732b7affb ]
+
+Since commit 099ecf59f05b ("net: annotate lockless accesses to
+sk->sk_max_ack_backlog") started handling sk_max_ack_backlog
+locklessly, one more function, mostly called in TCP/DCCP code, has
+been left unconverted. This patch completes the conversion.
+
+Signed-off-by: Jason Xing
+Reviewed-by: Eric Dumazet
+Link: https://lore.kernel.org/r/20240331090521.71965-1-kerneljasonxing@gmail.com
+Signed-off-by: Jakub Kicinski
+Stable-dep-of: 3479c7549fb1 ("tcp/dccp: allow a connection when sk_max_ack_backlog is zero")
+Signed-off-by: Sasha Levin
+---
+ include/net/inet_connection_sock.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/net/inet_connection_sock.h b/include/net/inet_connection_sock.h
+index f5967805c33f..2a4bf2553476 100644
+--- a/include/net/inet_connection_sock.h
++++ b/include/net/inet_connection_sock.h
+@@ -282,7 +282,7 @@ static inline int inet_csk_reqsk_queue_len(const struct sock *sk)
+
+ static inline int inet_csk_reqsk_queue_is_full(const struct sock *sk)
+ {
+- return inet_csk_reqsk_queue_len(sk) >= sk->sk_max_ack_backlog;
++ return inet_csk_reqsk_queue_len(sk) >= READ_ONCE(sk->sk_max_ack_backlog);
+ }
+
+ bool inet_csk_reqsk_queue_drop(struct sock *sk, struct request_sock *req);
+--
+2.39.5
+
diff --git a/queue-5.10/tls-fix-tls_sw_sendmsg-error-handling.patch b/queue-5.10/tls-fix-tls_sw_sendmsg-error-handling.patch
new file mode 100644
index 00000000000..fbd8d74ebaf
--- /dev/null
+++ b/queue-5.10/tls-fix-tls_sw_sendmsg-error-handling.patch
@@ -0,0 +1,46 @@
+From ef9776565a232fed1aaa60353f9b092b4b61d8f2 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 4 Jan 2025 10:29:45 -0500
+Subject: tls: Fix tls_sw_sendmsg error handling
+
+From: Benjamin Coddington
+
+[ Upstream commit b341ca51d2679829d26a3f6a4aa9aee9abd94f92 ]
+
+We've noticed that NFS can hang when using RPC over TLS on an unstable
+connection, and investigation shows that the RPC layer is stuck in a tight
+loop attempting to transmit, but forever getting -EBADMSG back from the
+underlying network. The loop begins when tcp_sendmsg_locked() returns
+-EPIPE to tls_tx_records(), but that error is converted to -EBADMSG when
+calling the socket's error reporting handler.
+
+Instead of converting errors from tcp_sendmsg_locked(), let's pass them
+along in this path. The RPC layer handles -EPIPE by reconnecting the
+transport, which prevents the endless attempts to transmit on a broken
+connection.
+
+Signed-off-by: Benjamin Coddington
+Fixes: a42055e8d2c3 ("net/tls: Add support for async encryption of records for performance")
+Link: https://patch.msgid.link/9594185559881679d81f071b181a10eb07cd079f.1736004079.git.bcodding@redhat.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ net/tls/tls_sw.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/net/tls/tls_sw.c b/net/tls/tls_sw.c
+index 46f1c19f7c60..ec57ca01b3c4 100644
+--- a/net/tls/tls_sw.c
++++ b/net/tls/tls_sw.c
+@@ -428,7 +428,7 @@ int tls_tx_records(struct sock *sk, int flags)
+
+ tx_err:
+ if (rc < 0 && rc != -EAGAIN)
+- tls_err_abort(sk, -EBADMSG);
++ tls_err_abort(sk, rc);
+
+ return rc;
+ }
+--
+2.39.5
+