From: Sasha Levin
Date: Mon, 27 Jun 2022 03:09:31 +0000 (-0400)
Subject: Fixes for 5.10
X-Git-Tag: v5.10.126~8
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=877f2630c48fa252261b031901acd195a747d659;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.10

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.10/afs-fix-dynamic-root-getattr.patch b/queue-5.10/afs-fix-dynamic-root-getattr.patch
new file mode 100644
index 00000000000..189ec36b7f0
--- /dev/null
+++ b/queue-5.10/afs-fix-dynamic-root-getattr.patch
@@ -0,0 +1,59 @@
+From 7627f73b9c500cf73b83f923c243b03c86fe2b6b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 21 Jun 2022 15:59:57 +0100
+Subject: afs: Fix dynamic root getattr
+
+From: David Howells
+
+[ Upstream commit cb78d1b5efffe4cf97e16766329dd7358aed3deb ]
+
+The recent patch to make afs_getattr consult the server didn't account
+for the pseudo-inodes employed by the dynamic root-type afs superblock
+not having a volume or a server to access, and thus an oops occurs if
+such a directory is stat'd.
+
+Fix this by checking to see if the vnode->volume pointer actually points
+anywhere before following it in afs_getattr().
+
+This can be tested by stat'ing a directory in /afs. It may be
+sufficient just to do "ls /afs" and the oops looks something like:
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000020
+ ...
+ RIP: 0010:afs_getattr+0x8b/0x14b
+ ...
+ Call Trace:
+
+ vfs_statx+0x79/0xf5
+ vfs_fstatat+0x49/0x62
+
+Fixes: 2aeb8c86d499 ("afs: Fix afs_getattr() to refetch file status if callback break occurred")
+Reported-by: Marc Dionne
+Signed-off-by: David Howells
+Reviewed-by: Marc Dionne
+Tested-by: Marc Dionne
+cc: linux-afs@lists.infradead.org
+Link: https://lore.kernel.org/r/165408450783.1031787.7941404776393751186.stgit@warthog.procyon.org.uk/
+Signed-off-by: Linus Torvalds
+Signed-off-by: Sasha Levin
+---
+ fs/afs/inode.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/fs/afs/inode.c b/fs/afs/inode.c
+index 7e7a9454bcb9..826fae22a8cc 100644
+--- a/fs/afs/inode.c
++++ b/fs/afs/inode.c
+@@ -734,7 +734,8 @@ int afs_getattr(const struct path *path, struct kstat *stat,
+
+ _enter("{ ino=%lu v=%u }", inode->i_ino, inode->i_generation);
+
+- if (!(query_flags & AT_STATX_DONT_SYNC) &&
++ if (vnode->volume &&
++ !(query_flags & AT_STATX_DONT_SYNC) &&
+ !test_bit(AFS_VNODE_CB_PROMISED, &vnode->flags)) {
+ key = afs_request_key(vnode->volume->cell);
+ if (IS_ERR(key))
+--
+2.35.1
+
diff --git a/queue-5.10/bonding-arp-monitor-spams-netdev_notify_peers-notifi.patch b/queue-5.10/bonding-arp-monitor-spams-netdev_notify_peers-notifi.patch
new file mode 100644
index 00000000000..b59b2ef1499
--- /dev/null
+++ b/queue-5.10/bonding-arp-monitor-spams-netdev_notify_peers-notifi.patch
@@ -0,0 +1,48 @@
+From e003af019cb22dcfffdc4e482afa004d16b23bc2 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 16 Jun 2022 12:32:40 -0700
+Subject: bonding: ARP monitor spams NETDEV_NOTIFY_PEERS notifiers
+
+From: Jay Vosburgh
+
+[ Upstream commit 7a9214f3d88cfdb099f3896e102a306b316d8707 ]
+
+The bonding ARP monitor fails to decrement send_peer_notif, the
+number of peer notifications (gratuitous ARP or ND) to be sent. This
+results in a continuous series of notifications.
+
+Correct this by decrementing the counter for each notification.
+
+Reported-by: Jonathan Toppins
+Signed-off-by: Jay Vosburgh
+Fixes: b0929915e035 ("bonding: Fix RTNL: assertion failed at net/core/rtnetlink.c for ab arp monitor")
+Link: https://lore.kernel.org/netdev/b2fd4147-8f50-bebd-963a-1a3e8d1d9715@redhat.com/
+Tested-by: Jonathan Toppins
+Reviewed-by: Jonathan Toppins
+Link: https://lore.kernel.org/r/9400.1655407960@famine
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ drivers/net/bonding/bond_main.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c
+index cbeb69bca0bb..9c4b45341fd2 100644
+--- a/drivers/net/bonding/bond_main.c
++++ b/drivers/net/bonding/bond_main.c
+@@ -3368,9 +3368,11 @@ static void bond_activebackup_arp_mon(struct bonding *bond)
+ if (!rtnl_trylock())
+ return;
+
+- if (should_notify_peers)
++ if (should_notify_peers) {
++ bond->send_peer_notif--;
+ call_netdevice_notifiers(NETDEV_NOTIFY_PEERS,
+ bond->dev);
++ }
+ if (should_notify_rtnl) {
+ bond_slave_state_notify(bond);
+ bond_slave_link_notify(bond);
+--
+2.35.1
+
diff --git a/queue-5.10/bpf-fix-request_sock-leak-in-sk-lookup-helpers.patch b/queue-5.10/bpf-fix-request_sock-leak-in-sk-lookup-helpers.patch
new file mode 100644
index 00000000000..9e43528c6cb
--- /dev/null
+++ b/queue-5.10/bpf-fix-request_sock-leak-in-sk-lookup-helpers.patch
@@ -0,0 +1,98 @@
+From 163296d190ebec08428c1fda3051192851907a5f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 15 Jun 2022 11:15:40 +1000
+Subject: bpf: Fix request_sock leak in sk lookup helpers
+
+From: Jon Maxwell
+
+[ Upstream commit 3046a827316c0e55fc563b4fb78c93b9ca5c7c37 ]
+
+A customer reported a request_socket leak in a Calico cloud environment. We
+found that a BPF program was doing a socket lookup which takes a refcnt on
+the socket and that it was finding the request_socket but returning the parent
+LISTEN socket via sk_to_full_sk() without decrementing the child request socket
+first, resulting in a request_sock slab object leak. This patch retains the
+existing behaviour of returning full socks to the caller but it also decrements
+the child request_socket if one is present before doing so to prevent the leak.
+
+Thanks to Curtis Taylor for all the help in diagnosing and testing this. And
+thanks to Antoine Tenart for the reproducer and patch input.
+
+v2 of this patch contains a refactor, as per Daniel Borkmann's suggestions, to
+validate RCU flags on the listen socket so that it balances with bpf_sk_release()
+and updated comments as per Martin KaFai Lau's suggestion. One small change to
+Daniel's suggestion: put "sk = sk2" under "if (sk2 != sk)" to avoid an extra
+instruction.
+
+Fixes: f7355a6c0497 ("bpf: Check sk_fullsock() before returning from bpf_sk_lookup()")
+Fixes: edbf8c01de5a ("bpf: add skc_lookup_tcp helper")
+Co-developed-by: Antoine Tenart
+Signed-off-by: Antoine Tenart
+Signed-off-by: Jon Maxwell
+Signed-off-by: Daniel Borkmann
+Tested-by: Curtis Taylor
+Cc: Martin KaFai Lau
+Link: https://lore.kernel.org/bpf/56d6f898-bde0-bb25-3427-12a330b29fb8@iogearbox.net
+Link: https://lore.kernel.org/bpf/20220615011540.813025-1-jmaxwell37@gmail.com
+Signed-off-by: Sasha Levin
+---
+ net/core/filter.c | 34 ++++++++++++++++++++++++++++------
+ 1 file changed, 28 insertions(+), 6 deletions(-)
+
+diff --git a/net/core/filter.c b/net/core/filter.c
+index d348f1d3fb8f..246947fbc958 100644
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -5982,10 +5982,21 @@ __bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ ifindex, proto, netns_id, flags);
+
+ if (sk) {
+- sk = sk_to_full_sk(sk);
+- if (!sk_fullsock(sk)) {
++ struct sock *sk2 = sk_to_full_sk(sk);
++
++ /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
++ * sock refcnt is decremented to prevent a request_sock leak.
++ */
++ if (!sk_fullsock(sk2))
++ sk2 = NULL;
++ if (sk2 != sk) {
+ sock_gen_put(sk);
+- return NULL;
++ /* Ensure there is no need to bump sk2 refcnt */
++ if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
++ WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
++ return NULL;
++ }
++ sk = sk2;
+ }
+ }
+
+@@ -6019,10 +6030,21 @@ bpf_sk_lookup(struct sk_buff *skb, struct bpf_sock_tuple *tuple, u32 len,
+ flags);
+
+ if (sk) {
+- sk = sk_to_full_sk(sk);
+- if (!sk_fullsock(sk)) {
++ struct sock *sk2 = sk_to_full_sk(sk);
++
++ /* sk_to_full_sk() may return (sk)->rsk_listener, so make sure the original sk
++ * sock refcnt is decremented to prevent a request_sock leak.
++ */
++ if (!sk_fullsock(sk2))
++ sk2 = NULL;
++ if (sk2 != sk) {
+ sock_gen_put(sk);
+- return NULL;
++ /* Ensure there is no need to bump sk2 refcnt */
++ if (unlikely(sk2 && !sock_flag(sk2, SOCK_RCU_FREE))) {
++ WARN_ONCE(1, "Found non-RCU, unreferenced socket!");
++ return NULL;
++ }
++ sk = sk2;
+ }
+ }
+
+--
+2.35.1
+
diff --git a/queue-5.10/bpf-x86-fix-tail-call-count-offset-calculation-on-bp.patch b/queue-5.10/bpf-x86-fix-tail-call-count-offset-calculation-on-bp.patch
new file mode 100644
index 00000000000..09ebe38ffcc
--- /dev/null
+++ b/queue-5.10/bpf-x86-fix-tail-call-count-offset-calculation-on-bp.patch
@@ -0,0 +1,86 @@
+From d5e3b9c9e530c4d35a55eb669643bd846f12641f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 16 Jun 2022 18:20:36 +0200
+Subject: bpf, x86: Fix tail call count offset calculation on bpf2bpf call
+
+From: Jakub Sitnicki
+
+[ Upstream commit ff672c67ee7635ca1e28fb13729e8ef0d1f08ce5 ]
+
+On x86-64 the tail call count is passed from one BPF function to another
+through %rax. Additionally, on function entry, the tail call count value
+is stored on stack right after the BPF program stack, due to register
+shortage.
+
+The stored count is later loaded from stack either when performing a tail
+call - to check if we have not reached the tail call limit - or before
+calling another BPF function in order to pass it via %rax.
+
+In the latter case, we miscalculate the offset at which the tail call count
+was stored on function entry. The JIT does not take into account that the
+allocated BPF program stack is always a multiple of 8 on x86, while the
+actual stack depth does not have to be.
+
+This leads to a load from an offset that belongs to the BPF stack, as shown
+in the example below:
+
+SEC("tc")
+int entry(struct __sk_buff *skb)
+{
+ /* Have data on stack which size is not a multiple of 8 */
+ volatile char arr[1] = {};
+ return subprog_tail(skb);
+}
+
+int entry(struct __sk_buff * skb):
+ 0: (b4) w2 = 0
+ 1: (73) *(u8 *)(r10 -1) = r2
+ 2: (85) call pc+1#bpf_prog_ce2f79bb5f3e06dd_F
+ 3: (95) exit
+
+int entry(struct __sk_buff * skb):
+ 0xffffffffa0201788: nop DWORD PTR [rax+rax*1+0x0]
+ 0xffffffffa020178d: xor eax,eax
+ 0xffffffffa020178f: push rbp
+ 0xffffffffa0201790: mov rbp,rsp
+ 0xffffffffa0201793: sub rsp,0x8
+ 0xffffffffa020179a: push rax
+ 0xffffffffa020179b: xor esi,esi
+ 0xffffffffa020179d: mov BYTE PTR [rbp-0x1],sil
+ 0xffffffffa02017a1: mov rax,QWORD PTR [rbp-0x9] !!! tail call count
+ 0xffffffffa02017a8: call 0xffffffffa02017d8 !!! is at rbp-0x10
+ 0xffffffffa02017ad: leave
+ 0xffffffffa02017ae: ret
+
+Fix it by rounding up the BPF stack depth to a multiple of 8, when
+calculating the tail call count offset on stack.
+
+Fixes: ebf7d1f508a7 ("bpf, x64: rework pro/epilogue and tailcall handling in JIT")
+Signed-off-by: Jakub Sitnicki
+Signed-off-by: Daniel Borkmann
+Acked-by: Maciej Fijalkowski
+Acked-by: Daniel Borkmann
+Link: https://lore.kernel.org/bpf/20220616162037.535469-2-jakub@cloudflare.com
+Signed-off-by: Sasha Levin
+---
+ arch/x86/net/bpf_jit_comp.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
+index a0a7ead52698..1714e85eb26d 100644
+--- a/arch/x86/net/bpf_jit_comp.c
++++ b/arch/x86/net/bpf_jit_comp.c
+@@ -1261,8 +1261,9 @@ xadd: if (is_imm8(insn->off))
+ case BPF_JMP | BPF_CALL:
+ func = (u8 *) __bpf_call_base + imm32;
+ if (tail_call_reachable) {
++ /* mov rax, qword ptr [rbp - rounded_stack_depth - 8] */
+ EMIT3_off32(0x48, 0x8B, 0x85,
+- -(bpf_prog->aux->stack_depth + 8));
++ -round_up(bpf_prog->aux->stack_depth, 8) - 8);
+ if (!imm32 || emit_call(&prog, func, image + addrs[i - 1] + 7))
+ return -EINVAL;
+ } else {
+--
+2.35.1
+
diff --git a/queue-5.10/drm-msm-dp-check-core_initialized-before-disable-int.patch b/queue-5.10/drm-msm-dp-check-core_initialized-before-disable-int.patch
new file mode 100644
index 00000000000..e3740c03f50
--- /dev/null
+++ b/queue-5.10/drm-msm-dp-check-core_initialized-before-disable-int.patch
@@ -0,0 +1,69 @@
+From 1b53935b5c20b40c6e1745bb23225064837b0041 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 6 Jun 2022 10:55:39 -0700
+Subject: drm/msm/dp: check core_initialized before disable interrupts at
+ dp_display_unbind()
+
+From: Kuogee Hsieh
+
+[ Upstream commit d80c3ba0ac247791a4ed7a0cd865a64906c8906a ]
+
+During the msm initialization phase, dp_display_unbind() will be called to
+undo the initializations done by dp_display_bind() if an error happens in
+msm_drm_bind(). In this case, the core_initialized flag has to be checked
+to make sure the clocks are on before updating the DP controller register
+to disable HPD interrupts. Otherwise the system will crash due to the NOC
+fatal error below.
+
+QTISECLIB [01f01a7ad]CNOC2 ERROR: ERRLOG0_LOW = 0x00061007
+QTISECLIB [01f01a7ad]GEM_NOC ERROR: ERRLOG0_LOW = 0x00001007
+QTISECLIB [01f0371a0]CNOC2 ERROR: ERRLOG0_HIGH = 0x00000003
+QTISECLIB [01f055297]GEM_NOC ERROR: ERRLOG0_HIGH = 0x00000003
+QTISECLIB [01f072beb]CNOC2 ERROR: ERRLOG1_LOW = 0x00000024
+QTISECLIB [01f0914b8]GEM_NOC ERROR: ERRLOG1_LOW = 0x00000042
+QTISECLIB [01f0ae639]CNOC2 ERROR: ERRLOG1_HIGH = 0x00004002
+QTISECLIB [01f0cc73f]GEM_NOC ERROR: ERRLOG1_HIGH = 0x00004002
+QTISECLIB [01f0ea092]CNOC2 ERROR: ERRLOG2_LOW = 0x0009020c
+QTISECLIB [01f10895f]GEM_NOC ERROR: ERRLOG2_LOW = 0x0ae9020c
+QTISECLIB [01f125ae1]CNOC2 ERROR: ERRLOG2_HIGH = 0x00000000
+QTISECLIB [01f143be7]GEM_NOC ERROR: ERRLOG2_HIGH = 0x00000000
+QTISECLIB [01f16153a]CNOC2 ERROR: ERRLOG3_LOW = 0x00000000
+QTISECLIB [01f17fe07]GEM_NOC ERROR: ERRLOG3_LOW = 0x00000000
+QTISECLIB [01f19cf89]CNOC2 ERROR: ERRLOG3_HIGH = 0x00000000
+QTISECLIB [01f1bb08e]GEM_NOC ERROR: ERRLOG3_HIGH = 0x00000000
+QTISECLIB [01f1d8a31]CNOC2 ERROR: SBM1 FAULTINSTATUS0_LOW = 0x00000002
+QTISECLIB [01f1f72a4]GEM_NOC ERROR: SBM0 FAULTINSTATUS0_LOW = 0x00000001
+QTISECLIB [01f21a217]CNOC3 ERROR: ERRLOG0_LOW = 0x00000006
+QTISECLIB [01f23dfd3]NOC error fatal
+
+changes in v2:
+-- drop the first patch (drm/msm: enable msm irq after all initializations are done successfully at msm_drm_init()) since the problem had been fixed by another patch
+
+Fixes: 570d3e5d28db ("drm/msm/dp: stop event kernel thread when DP unbind")
+Signed-off-by: Kuogee Hsieh
+Reviewed-by: Stephen Boyd
+Patchwork: https://patchwork.freedesktop.org/patch/488387/
+Link: https://lore.kernel.org/r/1654538139-7450-1-git-send-email-quic_khsieh@quicinc.com
+Signed-off-by: Rob Clark
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/msm/dp/dp_display.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index ebd05678a27b..47bdddb860e5 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -268,7 +268,8 @@ static void dp_display_unbind(struct device *dev, struct device *master,
+ }
+
+ /* disable all HPD interrupts */
+- dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
++ if (dp->core_initialized)
++ dp_catalog_hpd_config_intr(dp->catalog, DP_DP_HPD_INT_MASK, false);
+
+ kthread_stop(dp->ev_tsk);
+
+--
+2.35.1
+
diff --git a/queue-5.10/drm-msm-dp-deinitialize-mainlink-if-link-training-fa.patch b/queue-5.10/drm-msm-dp-deinitialize-mainlink-if-link-training-fa.patch
new file mode 100644
index 00000000000..e8d50c5ad6a
--- /dev/null
+++ b/queue-5.10/drm-msm-dp-deinitialize-mainlink-if-link-training-fa.patch
@@ -0,0 +1,198 @@
+From c45a7e4af2f2943c221e072aeb614d5be1cf2669 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 3 Nov 2020 12:49:00 -0800
+Subject: drm/msm/dp: deinitialize mainlink if link training failed
+
+From: Kuogee Hsieh
+
+[ Upstream commit 231a04fcc6cb5b0e5f72c015d36462a17355f925 ]
+
+The DP combo PHY has to be enabled to start link training. When
+link training fails, the PHY needs to be disabled so that the next
+link training can proceed smoothly at the next plug-in. This
+patch de-initializes the mainlink to disable the PHY if link training
+failed. This prevents a system crash due to
+disp_cc_mdss_dp_link_intf_clk being stuck in the "off" state. This patch
+also checks the power_on flag at dp_display_enable() and
+dp_display_disable() to avoid crashing when the cable is unplugged while
+the display is off.
+
+Signed-off-by: Kuogee Hsieh
+Signed-off-by: Rob Clark
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/msm/dp/dp_catalog.c | 2 +-
+ drivers/gpu/drm/msm/dp/dp_catalog.h | 2 +-
+ drivers/gpu/drm/msm/dp/dp_ctrl.c | 40 +++++++++++++++++++++++++++--
+ drivers/gpu/drm/msm/dp/dp_display.c | 15 ++++++++++-
+ drivers/gpu/drm/msm/dp/dp_panel.c | 2 +-
+ 5 files changed, 55 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.c b/drivers/gpu/drm/msm/dp/dp_catalog.c
+index aeca8b2ac5c6..2da6982efdbf 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.c
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.c
+@@ -572,7 +572,7 @@ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog)
+ dp_write_aux(catalog, REG_DP_DP_HPD_CTRL, DP_DP_HPD_CTRL_HPD_EN);
+ }
+
+-u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog)
++u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog)
+ {
+ struct dp_catalog_private *catalog = container_of(dp_catalog,
+ struct dp_catalog_private, dp_catalog);
+diff --git a/drivers/gpu/drm/msm/dp/dp_catalog.h b/drivers/gpu/drm/msm/dp/dp_catalog.h
+index 6d257dbebf29..176a9020a520 100644
+--- a/drivers/gpu/drm/msm/dp/dp_catalog.h
++++ b/drivers/gpu/drm/msm/dp/dp_catalog.h
+@@ -97,7 +97,7 @@ void dp_catalog_ctrl_enable_irq(struct dp_catalog *dp_catalog, bool enable);
+ void dp_catalog_hpd_config_intr(struct dp_catalog *dp_catalog,
+ u32 intr_mask, bool en);
+ void dp_catalog_ctrl_hpd_config(struct dp_catalog *dp_catalog);
+-u32 dp_catalog_hpd_get_state_status(struct dp_catalog *dp_catalog);
++u32 dp_catalog_link_is_connected(struct dp_catalog *dp_catalog);
+ u32 dp_catalog_hpd_get_intr_status(struct dp_catalog *dp_catalog);
+ void dp_catalog_ctrl_phy_reset(struct dp_catalog *dp_catalog);
+ int dp_catalog_ctrl_update_vx_px(struct dp_catalog *dp_catalog, u8 v_level,
+diff --git a/drivers/gpu/drm/msm/dp/dp_ctrl.c b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+index c83a1650437d..b9ca844ce2ad 100644
+--- a/drivers/gpu/drm/msm/dp/dp_ctrl.c
++++ b/drivers/gpu/drm/msm/dp/dp_ctrl.c
+@@ -1460,6 +1460,30 @@ static int dp_ctrl_reinitialize_mainlink(struct dp_ctrl_private *ctrl)
+ return ret;
+ }
+
++static int dp_ctrl_deinitialize_mainlink(struct dp_ctrl_private *ctrl)
++{
++ struct dp_io *dp_io;
++ struct phy *phy;
++ int ret;
++
++ dp_io = &ctrl->parser->io;
++ phy = dp_io->phy;
++
++ dp_catalog_ctrl_mainlink_ctrl(ctrl->catalog, false);
++
++ dp_catalog_ctrl_reset(ctrl->catalog);
++
++ ret = dp_power_clk_enable(ctrl->power, DP_CTRL_PM, false);
++ if (ret) {
++ DRM_ERROR("Failed to disable link clocks. ret=%d\n", ret);
++ }
++
++ phy_power_off(phy);
++ phy_exit(phy);
++
++ return 0;
++}
++
+ static int dp_ctrl_link_maintenance(struct dp_ctrl_private *ctrl)
+ {
+ int ret = 0;
+@@ -1640,8 +1664,7 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ if (rc)
+ return rc;
+
+- while (--link_train_max_retries &&
+- !atomic_read(&ctrl->dp_ctrl.aborted)) {
++ while (--link_train_max_retries) {
+ rc = dp_ctrl_reinitialize_mainlink(ctrl);
+ if (rc) {
+ DRM_ERROR("Failed to reinitialize mainlink. rc=%d\n",
+@@ -1656,6 +1679,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ break;
+ } else if (training_step == DP_TRAINING_1) {
+ /* link train_1 failed */
++ if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++ break;
++ }
++
+ rc = dp_ctrl_link_rate_down_shift(ctrl);
+ if (rc < 0) { /* already in RBR = 1.6G */
+ if (cr.lane_0_1 & DP_LANE0_1_CR_DONE) {
+@@ -1675,6 +1702,10 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ }
+ } else if (training_step == DP_TRAINING_2) {
+ /* link train_2 failed, lower lane rate */
++ if (!dp_catalog_link_is_connected(ctrl->catalog)) {
++ break;
++ }
++
+ rc = dp_ctrl_link_lane_down_shift(ctrl);
+ if (rc < 0) {
+ /* end with failure */
+@@ -1695,6 +1726,11 @@ int dp_ctrl_on_link(struct dp_ctrl *dp_ctrl)
+ */
+ if (rc == 0) /* link train successfully */
+ dp_ctrl_push_idle(dp_ctrl);
++ else {
++ /* link training failed */
++ dp_ctrl_deinitialize_mainlink(ctrl);
++ rc = -ECONNRESET;
++ }
+
+ return rc;
+ }
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 4b18ab71ae59..d504cf68283a 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -556,6 +556,11 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ if (ret) { /* link train failed */
+ hpd->hpd_high = 0;
+ dp->hpd_state = ST_DISCONNECTED;
++
++ if (ret == -ECONNRESET) { /* cable unplugged */
++ dp->core_initialized = false;
++ }
++
+ } else {
+ /* start sentinel checking in case of missing uevent */
+ dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
+@@ -827,6 +832,11 @@ static int dp_display_enable(struct dp_display_private *dp, u32 data)
+
+ dp_display = g_dp_display;
+
++ if (dp_display->power_on) {
++ DRM_DEBUG_DP("Link already setup, return\n");
++ return 0;
++ }
++
+ rc = dp_ctrl_on_stream(dp->ctrl);
+ if (!rc)
+ dp_display->power_on = true;
+@@ -859,6 +869,9 @@ static int dp_display_disable(struct dp_display_private *dp, u32 data)
+
+ dp_display = g_dp_display;
+
++ if (!dp_display->power_on)
++ return 0;
++
+ /* wait only if audio was enabled */
+ if (dp_display->audio_enabled) {
+ /* signal the disconnect event */
+@@ -1245,7 +1258,7 @@ static int dp_pm_resume(struct device *dev)
+
+ dp_catalog_ctrl_hpd_config(dp->catalog);
+
+- status = dp_catalog_hpd_get_state_status(dp->catalog);
++ status = dp_catalog_link_is_connected(dp->catalog);
+
+ if (status)
+ dp->dp_display.is_connected = true;
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 550871ba6e5a..4e8a19114e87 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -197,7 +197,7 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ if (!dp_panel->edid) {
+ DRM_ERROR("panel edid read failed\n");
+ /* check edid read fail is due to unplug */
+- if (!dp_catalog_hpd_get_state_status(panel->catalog)) {
++ if (!dp_catalog_link_is_connected(panel->catalog)) {
+ rc = -ETIMEDOUT;
+ goto end;
+ }
+--
+2.35.1
+
diff --git a/queue-5.10/drm-msm-dp-fix-connect-disconnect-handled-at-irq_hpd.patch b/queue-5.10/drm-msm-dp-fix-connect-disconnect-handled-at-irq_hpd.patch
new file mode 100644
index 00000000000..2f0799eb744
--- /dev/null
+++ b/queue-5.10/drm-msm-dp-fix-connect-disconnect-handled-at-irq_hpd.patch
@@ -0,0 +1,204 @@
+From 2bd83b3aed198f9d02012b4d2eb0b0b35148fbd2 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 18 Nov 2020 13:00:14 -0800
+Subject: drm/msm/dp: fix connect/disconnect handled at irq_hpd
+
+From: Kuogee Hsieh
+
+[ Upstream commit c58eb1b54feefc3a47fab78addd14083bc941c44 ]
+
+Some USB Type-C dongles use an irq_hpd request to perform device connection
+and disconnection. This patch adds handling of both connection and
+disconnection based on the state of hpd_state and sink_count.
+
+Changes in V2:
+-- add dp_display_handle_port_ststus_changed()
+-- fix kernel test robot complaint
+
+Changes in V3:
+-- add encoder_mode_set into struct dp_display_private
+
+Reported-by: kernel test robot
+Fixes: 26b8d66a399e ("drm/msm/dp: promote irq_hpd handle to handle link training correctly")
+Tested-by: Stephen Boyd
+Signed-off-by: Kuogee Hsieh
+Signed-off-by: Rob Clark
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/msm/dp/dp_display.c | 92 +++++++++++++++++------------
+ 1 file changed, 55 insertions(+), 37 deletions(-)
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index f1f777baa2c4..a3de1d0523ea 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -102,6 +102,8 @@ struct dp_display_private {
+ struct dp_display_mode dp_mode;
+ struct msm_dp dp_display;
+
++ bool encoder_mode_set;
++
+ /* wait for audio signaling */
+ struct completion audio_comp;
+
+@@ -306,13 +308,24 @@ static void dp_display_send_hpd_event(struct msm_dp *dp_display)
+ drm_helper_hpd_irq_event(connector->dev);
+ }
+
+-static int dp_display_send_hpd_notification(struct dp_display_private *dp,
+- bool hpd)
++
++static void dp_display_set_encoder_mode(struct dp_display_private *dp)
+ {
+- static bool encoder_mode_set;
+ struct msm_drm_private *priv = dp->dp_display.drm_dev->dev_private;
+ struct msm_kms *kms = priv->kms;
+
++ if (!dp->encoder_mode_set && dp->dp_display.encoder &&
++ kms->funcs->set_encoder_mode) {
++ kms->funcs->set_encoder_mode(kms,
++ dp->dp_display.encoder, false);
++
++ dp->encoder_mode_set = true;
++ }
++}
++
++static int dp_display_send_hpd_notification(struct dp_display_private *dp,
++ bool hpd)
++{
+ if ((hpd && dp->dp_display.is_connected) ||
+ (!hpd && !dp->dp_display.is_connected)) {
+ DRM_DEBUG_DP("HPD already %s\n", (hpd ? "on" : "off"));
+@@ -325,15 +338,6 @@ static int dp_display_send_hpd_notification(struct dp_display_private *dp,
+
+ dp->dp_display.is_connected = hpd;
+
+- if (dp->dp_display.is_connected && dp->dp_display.encoder
+- && !encoder_mode_set
+- && kms->funcs->set_encoder_mode) {
+- kms->funcs->set_encoder_mode(kms,
+- dp->dp_display.encoder, false);
+- DRM_DEBUG_DP("set_encoder_mode() Completed\n");
+- encoder_mode_set = true;
+- }
+-
+ dp_display_send_hpd_event(&dp->dp_display);
+
+ return 0;
+@@ -369,7 +373,6 @@ static int dp_display_process_hpd_high(struct dp_display_private *dp)
+
+ dp_add_event(dp, EV_USER_NOTIFICATION, true, 0);
+
+-
+ end:
+ return rc;
+ }
+@@ -386,6 +389,8 @@ static void dp_display_host_init(struct dp_display_private *dp)
+ if (dp->usbpd->orientation == ORIENTATION_CC2)
+ flip = true;
+
++ dp_display_set_encoder_mode(dp);
++
+ dp_power_init(dp->power, flip);
+ dp_ctrl_host_init(dp->ctrl, flip);
+ dp_aux_init(dp->aux);
+@@ -469,24 +474,42 @@ static void dp_display_handle_video_request(struct dp_display_private *dp)
+ }
+ }
+
+-static int dp_display_handle_irq_hpd(struct dp_display_private *dp)
++static int dp_display_handle_port_ststus_changed(struct dp_display_private *dp)
+ {
+- u32 sink_request;
+-
+- sink_request = dp->link->sink_request;
++ int rc = 0;
+
+- if (sink_request & DS_PORT_STATUS_CHANGED) {
+- if (dp_display_is_sink_count_zero(dp)) {
+- DRM_DEBUG_DP("sink count is zero, nothing to do\n");
+- return -ENOTCONN;
++ if (dp_display_is_sink_count_zero(dp)) {
++ DRM_DEBUG_DP("sink count is zero, nothing to do\n");
++ if (dp->hpd_state != ST_DISCONNECTED) {
++ dp->hpd_state = ST_DISCONNECT_PENDING;
++ dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
++ }
++ } else {
++ if (dp->hpd_state == ST_DISCONNECTED) {
++ dp->hpd_state = ST_CONNECT_PENDING;
++ rc = dp_display_process_hpd_high(dp);
++ if (rc)
++ dp->hpd_state = ST_DISCONNECTED;
+ }
++ }
++
++ return rc;
++}
++
++static int dp_display_handle_irq_hpd(struct dp_display_private *dp)
++{
++ u32 sink_request = dp->link->sink_request;
+
+- return dp_display_process_hpd_high(dp);
++ if (dp->hpd_state == ST_DISCONNECTED) {
++ if (sink_request & DP_LINK_STATUS_UPDATED) {
++ DRM_ERROR("Disconnected, no DP_LINK_STATUS_UPDATED\n");
++ return -EINVAL;
++ }
+ }
+
+ dp_ctrl_handle_sink_request(dp->ctrl);
+
+- if (dp->link->sink_request & DP_TEST_LINK_VIDEO_PATTERN)
++ if (sink_request & DP_TEST_LINK_VIDEO_PATTERN)
+ dp_display_handle_video_request(dp);
+
+ return 0;
+@@ -517,19 +540,10 @@ static int dp_display_usbpd_attention_cb(struct device *dev)
+ rc = dp_link_process_request(dp->link);
+ if (!rc) {
+ sink_request = dp->link->sink_request;
+- if (sink_request & DS_PORT_STATUS_CHANGED) {
+- /* same as unplugged */
+- hpd->hpd_high = 0;
+- dp->hpd_state = ST_DISCONNECT_PENDING;
+- dp_add_event(dp, EV_USER_NOTIFICATION, false, 0);
+- }
+-
+- rc = dp_display_handle_irq_hpd(dp);
+-
+- if (!rc && (sink_request & DS_PORT_STATUS_CHANGED)) {
+- hpd->hpd_high = 1;
+- dp->hpd_state = ST_CONNECT_PENDING;
+- }
++ if (sink_request & DS_PORT_STATUS_CHANGED)
++ rc = dp_display_handle_port_ststus_changed(dp);
++ else
++ rc = dp_display_handle_irq_hpd(dp);
+ }
+
+ return rc;
+@@ -694,6 +708,7 @@ static int dp_disconnect_pending_timeout(struct dp_display_private *dp, u32 data
+ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ {
+ u32 state;
++ int ret;
+
+ mutex_lock(&dp->event_mutex);
+
+@@ -704,7 +719,10 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+ return 0;
+ }
+
+- dp_display_usbpd_attention_cb(&dp->pdev->dev);
++ ret = dp_display_usbpd_attention_cb(&dp->pdev->dev);
++ if (ret == -ECONNRESET) { /* cable unplugged */
++ dp->core_initialized = false;
++ }
+
+ mutex_unlock(&dp->event_mutex);
+
+--
+2.35.1
+
diff --git a/queue-5.10/drm-msm-dp-fixes-wrong-connection-state-caused-by-fa.patch b/queue-5.10/drm-msm-dp-fixes-wrong-connection-state-caused-by-fa.patch
new file mode 100644
index 00000000000..fa2b46e732c
--- /dev/null
+++ b/queue-5.10/drm-msm-dp-fixes-wrong-connection-state-caused-by-fa.patch
@@ -0,0 +1,201 @@
+From b5babd50c4fd236abc1c9e0bb39e889d5e1867e2 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 3 Nov 2020 14:53:36 -0800
+Subject: drm/msm/dp: fixes wrong connection state caused by failure of link
+ train
+
+From: Kuogee Hsieh
+
+[ Upstream commit 62671d2ef24bca1e2e1709a59a5bfb5c423cdc8e ]
+
+The connection state is not set correctly when link training fails because
+the cable was unplugged in the middle of an aux channel read, or when the
+cable is plugged in while in the suspended state. This patch fixes these
+problems. It also replaces ST_SUSPEND_PENDING with ST_DISPLAY_OFF.
+
+Changes in V2:
+-- Add more information to commit message.
+
+Changes in V3:
+-- change base
+
+Changes in V4:
+-- add Fixes tag
+
+Signed-off-by: Kuogee Hsieh
+Signed-off-by: Rob Clark
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/msm/dp/dp_display.c | 42 ++++++++++++++---------------
+ drivers/gpu/drm/msm/dp/dp_panel.c | 5 ++++
+ 2 files changed, 25 insertions(+), 22 deletions(-)
+
+diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c
+index 47bdddb860e5..4b18ab71ae59 100644
+--- a/drivers/gpu/drm/msm/dp/dp_display.c
++++ b/drivers/gpu/drm/msm/dp/dp_display.c
+@@ -45,7 +45,7 @@ enum {
+ ST_CONNECT_PENDING,
+ ST_CONNECTED,
+ ST_DISCONNECT_PENDING,
+- ST_SUSPEND_PENDING,
++ ST_DISPLAY_OFF,
+ ST_SUSPENDED,
+ };
+
+@@ -531,7 +531,7 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ mutex_lock(&dp->event_mutex);
+
+ state = dp->hpd_state;
+- if (state == ST_SUSPEND_PENDING) {
++ if (state == ST_DISPLAY_OFF || state == ST_SUSPENDED) {
+ mutex_unlock(&dp->event_mutex);
+ return 0;
+ }
+@@ -553,14 +553,14 @@ static int dp_hpd_plug_handle(struct dp_display_private *dp, u32 data)
+ hpd->hpd_high = 1;
+
+ ret = dp_display_usbpd_configure_cb(&dp->pdev->dev);
+- if (ret) { /* failed */
++ if (ret) { /* link train failed */
+ hpd->hpd_high = 0;
+ dp->hpd_state = ST_DISCONNECTED;
++ } else {
++ /* start sentinel checking in case of missing uevent */
++ dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
+ }
+
+- /* start sanity checking */
+- dp_add_event(dp, EV_CONNECT_PENDING_TIMEOUT, 0, tout);
+-
+ mutex_unlock(&dp->event_mutex);
+
+ /* uevent will complete connection part */
+@@ -612,11 +612,6 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ mutex_lock(&dp->event_mutex);
+
+ state = dp->hpd_state;
+- if (state == ST_SUSPEND_PENDING) {
+- mutex_unlock(&dp->event_mutex);
+- return 0;
+- }
+-
+ if (state == ST_DISCONNECT_PENDING || state == ST_DISCONNECTED) {
+ mutex_unlock(&dp->event_mutex);
+ return 0;
+@@ -643,7 +638,7 @@ static int dp_hpd_unplug_handle(struct dp_display_private *dp, u32 data)
+ */
+ dp_display_usbpd_disconnect_cb(&dp->pdev->dev);
+
+- /* start sanity checking */
++ /* start sentinel checking in case of missing uevent */
+ dp_add_event(dp, EV_DISCONNECT_PENDING_TIMEOUT, 0, DP_TIMEOUT_5_SECOND);
+
+ /* signal the disconnect event early to ensure proper teardown */
+@@ -682,7 +677,7 @@ static int dp_irq_hpd_handle(struct dp_display_private *dp, u32 data)
+
+ /* irq_hpd can happen at either connected or disconnected state */
+ state = dp->hpd_state;
+- if (state == ST_SUSPEND_PENDING) {
++ if (state == ST_DISPLAY_OFF) {
+ mutex_unlock(&dp->event_mutex);
+ return 0;
+ }
+@@ -1119,7 +1114,7 @@ static irqreturn_t dp_display_irq_handler(int irq, void *dev_id)
+ }
+
+ if (hpd_isr_status & DP_DP_IRQ_HPD_INT_MASK) {
+- /* delete connect pending event first */
++ /* stop sentinel connect pending checking */
+ dp_del_event(dp, EV_CONNECT_PENDING_TIMEOUT);
+ dp_add_event(dp, EV_IRQ_HPD_INT, 0, 0);
+ }
+@@ -1252,13 +1247,10 @@ static int dp_pm_resume(struct device *dev)
+
+ status = dp_catalog_hpd_get_state_status(dp->catalog);
+
+- if (status) {
++ if (status)
+ dp->dp_display.is_connected = true;
+- } else {
++ else
+ dp->dp_display.is_connected = false;
+- /* make sure next resume host_init be called */
+- dp->core_initialized = false;
+- }
+
+ mutex_unlock(&dp->event_mutex);
+
+@@ -1280,6 +1272,9 @@ static int dp_pm_suspend(struct device *dev)
+
+ dp->hpd_state = ST_SUSPENDED;
+
++ /* host_init will be called at pm_resume */
++ dp->core_initialized = false;
++
+ mutex_unlock(&dp->event_mutex);
+
+ return 0;
+@@ -1412,6 +1407,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+
+ mutex_lock(&dp_display->event_mutex);
+
++ /* stop sentinel checking */
+ dp_del_event(dp_display, EV_CONNECT_PENDING_TIMEOUT);
+
+ rc = dp_display_set_mode(dp, &dp_display->dp_mode);
+@@ -1430,7 +1426,7 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+
+ state = dp_display->hpd_state;
+
+- if (state == ST_SUSPEND_PENDING)
++ if (state == ST_DISPLAY_OFF)
+ dp_display_host_init(dp_display);
+
+ dp_display_enable(dp_display, 0);
+@@ -1442,7 +1438,8 @@ int msm_dp_display_enable(struct msm_dp *dp, struct drm_encoder *encoder)
+ dp_display_unprepare(dp);
+ }
+
+- if (state == ST_SUSPEND_PENDING)
++ /* manual kick off plug event to train link */
++ if (state == ST_DISPLAY_OFF)
+ dp_add_event(dp_display, EV_IRQ_HPD_INT, 0, 0);
+
+ /* completed connection */
+@@ -1474,6 +1471,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+
+ mutex_lock(&dp_display->event_mutex);
+
++ /* stop sentinel checking */
+ dp_del_event(dp_display, EV_DISCONNECT_PENDING_TIMEOUT);
+
+ dp_display_disable(dp_display, 0);
+@@ -1487,7 +1485,7 @@ int msm_dp_display_disable(struct msm_dp *dp, struct drm_encoder *encoder)
+ /* completed disconnection */
+ dp_display->hpd_state = ST_DISCONNECTED;
+ } else {
+- dp_display->hpd_state = ST_SUSPEND_PENDING;
++ dp_display->hpd_state = ST_DISPLAY_OFF;
+ }
+
+ mutex_unlock(&dp_display->event_mutex);
+diff --git a/drivers/gpu/drm/msm/dp/dp_panel.c b/drivers/gpu/drm/msm/dp/dp_panel.c
+index 2768d1d306f0..550871ba6e5a 100644
+--- a/drivers/gpu/drm/msm/dp/dp_panel.c
++++ b/drivers/gpu/drm/msm/dp/dp_panel.c
+@@ -196,6 +196,11 @@ int dp_panel_read_sink_caps(struct dp_panel *dp_panel,
+ &panel->aux->ddc);
+ if (!dp_panel->edid) {
+ DRM_ERROR("panel edid read failed\n");
++ /* check edid read fail is due to unplug */
++ if (!dp_catalog_hpd_get_state_status(panel->catalog)) {
++ rc = -ETIMEDOUT;
++ goto end;
++ }
+
+ /* fail safe edid */
+ mutex_lock(&connector->dev->mode_config.mutex);
+--
+2.35.1
+
diff --git a/queue-5.10/drm-msm-dp-promote-irq_hpd-handle-to-handle-link-tra.patch b/queue-5.10/drm-msm-dp-promote-irq_hpd-handle-to-handle-link-tra.patch
new file mode 100644
index 00000000000..dd70228e006
--- /dev/null
+++ b/queue-5.10/drm-msm-dp-promote-irq_hpd-handle-to-handle-link-tra.patch
@@ -0,0 +1,78 @@
+From 57c6435a0b3195c904bb1792d9f50584ce87588f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 3 Nov 2020 12:49:02 -0800
+Subject: drm/msm/dp: promote irq_hpd handle to handle link training correctly
+
+From: Kuogee Hsieh
+
+[ Upstream commit 26b8d66a399e625f3aa2c02ccbab1bff2e00040c ]
+
+Some dongles require link training to be done at the irq_hpd request
+instead of the plug-in request. This patch promotes the irq_hpd handler
+to handle link training and set up hpd_state correctly.
+ +Signed-off-by: Kuogee Hsieh +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/dp/dp_display.c | 25 +++++++++++++++++++++---- + 1 file changed, 21 insertions(+), 4 deletions(-) + +diff --git a/drivers/gpu/drm/msm/dp/dp_display.c b/drivers/gpu/drm/msm/dp/dp_display.c +index d504cf68283a..f1f777baa2c4 100644 +--- a/drivers/gpu/drm/msm/dp/dp_display.c ++++ b/drivers/gpu/drm/msm/dp/dp_display.c +@@ -476,10 +476,9 @@ static int dp_display_handle_irq_hpd(struct dp_display_private *dp) + sink_request = dp->link->sink_request; + + if (sink_request & DS_PORT_STATUS_CHANGED) { +- dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); + if (dp_display_is_sink_count_zero(dp)) { + DRM_DEBUG_DP("sink count is zero, nothing to do\n"); +- return 0; ++ return -ENOTCONN; + } + + return dp_display_process_hpd_high(dp); +@@ -496,7 +495,9 @@ static int dp_display_handle_irq_hpd(struct dp_display_private *dp) + static int dp_display_usbpd_attention_cb(struct device *dev) + { + int rc = 0; ++ u32 sink_request; + struct dp_display_private *dp; ++ struct dp_usbpd *hpd; + + if (!dev) { + DRM_ERROR("invalid dev\n"); +@@ -510,10 +511,26 @@ static int dp_display_usbpd_attention_cb(struct device *dev) + return -ENODEV; + } + ++ hpd = dp->usbpd; ++ + /* check for any test request issued by sink */ + rc = dp_link_process_request(dp->link); +- if (!rc) +- dp_display_handle_irq_hpd(dp); ++ if (!rc) { ++ sink_request = dp->link->sink_request; ++ if (sink_request & DS_PORT_STATUS_CHANGED) { ++ /* same as unplugged */ ++ hpd->hpd_high = 0; ++ dp->hpd_state = ST_DISCONNECT_PENDING; ++ dp_add_event(dp, EV_USER_NOTIFICATION, false, 0); ++ } ++ ++ rc = dp_display_handle_irq_hpd(dp); ++ ++ if (!rc && (sink_request & DS_PORT_STATUS_CHANGED)) { ++ hpd->hpd_high = 1; ++ dp->hpd_state = ST_CONNECT_PENDING; ++ } ++ } + + return rc; + } +-- +2.35.1 + diff --git a/queue-5.10/drm-msm-fix-double-pm_runtime_disable-call.patch b/queue-5.10/drm-msm-fix-double-pm_runtime_disable-call.patch 
new file mode 100644 index 00000000000..95c1d4293e6 --- /dev/null +++ b/queue-5.10/drm-msm-fix-double-pm_runtime_disable-call.patch @@ -0,0 +1,70 @@ +From d5a4743e21cbb2696fbcdd03c8944560a566d3ef Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 6 Jun 2022 23:13:05 +0200 +Subject: drm/msm: Fix double pm_runtime_disable() call + +From: Maximilian Luz + +[ Upstream commit ce0db505bc0c51ef5e9ba446c660de7e26f78f29 ] + +Following commit 17e822f7591f ("drm/msm: fix unbalanced +pm_runtime_enable in adreno_gpu_{init, cleanup}"), any call to +adreno_unbind() will disable runtime PM twice, as indicated by the call +trees below: + + adreno_unbind() + -> pm_runtime_force_suspend() + -> pm_runtime_disable() + + adreno_unbind() + -> gpu->funcs->destroy() [= aNxx_destroy()] + -> adreno_gpu_cleanup() + -> pm_runtime_disable() + +Note that pm_runtime_force_suspend() is called right before +gpu->funcs->destroy() and both functions are called unconditionally. + +With recent addition of the eDP AUX bus code, this problem manifests +itself when the eDP panel cannot be found yet and probing is deferred. +On the first probe attempt, we disable runtime PM twice as described +above. This then causes any later probe attempt to fail with + + [drm:adreno_load_gpu [msm]] *ERROR* Couldn't power up the GPU: -13 + +preventing the driver from loading. + +As there seem to be scenarios where the aNxx_destroy() functions are not +called from adreno_unbind(), simply removing pm_runtime_disable() from +inside adreno_unbind() does not seem to be the proper fix. This is what +commit 17e822f7591f ("drm/msm: fix unbalanced pm_runtime_enable in +adreno_gpu_{init, cleanup}") intended to fix. Therefore, instead check +whether runtime PM is still enabled, and only disable it in that case. 
+ +Fixes: 17e822f7591f ("drm/msm: fix unbalanced pm_runtime_enable in adreno_gpu_{init, cleanup}") +Signed-off-by: Maximilian Luz +Tested-by: Bjorn Andersson +Reviewed-by: Rob Clark +Link: https://lore.kernel.org/r/20220606211305.189585-1-luzmaximilian@gmail.com +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/adreno/adreno_gpu.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/msm/adreno/adreno_gpu.c b/drivers/gpu/drm/msm/adreno/adreno_gpu.c +index 458b5b26d3c2..de8cc25506d6 100644 +--- a/drivers/gpu/drm/msm/adreno/adreno_gpu.c ++++ b/drivers/gpu/drm/msm/adreno/adreno_gpu.c +@@ -960,7 +960,8 @@ void adreno_gpu_cleanup(struct adreno_gpu *adreno_gpu) + for (i = 0; i < ARRAY_SIZE(adreno_gpu->info->fw); i++) + release_firmware(adreno_gpu->fw[i]); + +- pm_runtime_disable(&priv->gpu_pdev->dev); ++ if (pm_runtime_enabled(&priv->gpu_pdev->dev)) ++ pm_runtime_disable(&priv->gpu_pdev->dev); + + msm_gpu_cleanup(&adreno_gpu->base); + +-- +2.35.1 + diff --git a/queue-5.10/drm-msm-mdp4-fix-refcount-leak-in-mdp4_modeset_init_.patch b/queue-5.10/drm-msm-mdp4-fix-refcount-leak-in-mdp4_modeset_init_.patch new file mode 100644 index 00000000000..75d95352afa --- /dev/null +++ b/queue-5.10/drm-msm-mdp4-fix-refcount-leak-in-mdp4_modeset_init_.patch @@ -0,0 +1,50 @@ +From 63ec9d3cf23ddae0273438c786c46b84e4e0883a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 Jun 2022 15:08:38 +0400 +Subject: drm/msm/mdp4: Fix refcount leak in mdp4_modeset_init_intf + +From: Miaoqian Lin + +[ Upstream commit b9cc4598607cb7f7eae5c75fc1e3209cd52ff5e0 ] + +of_graph_get_remote_node() returns remote device node pointer with +refcount incremented, we should use of_node_put() on it +when not need anymore. +Add missing of_node_put() to avoid refcount leak. 
+ +Fixes: 86418f90a4c1 ("drm: convert drivers to use of_graph_get_remote_node") +Signed-off-by: Miaoqian Lin +Reviewed-by: Dmitry Baryshkov +Reviewed-by: Stephen Boyd +Reviewed-by: Abhinav Kumar +Patchwork: https://patchwork.freedesktop.org/patch/488473/ +Link: https://lore.kernel.org/r/20220607110841.53889-1-linmq006@gmail.com +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +index 913de5938782..b4d0bfc83d70 100644 +--- a/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c ++++ b/drivers/gpu/drm/msm/disp/mdp4/mdp4_kms.c +@@ -221,6 +221,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms, + encoder = mdp4_lcdc_encoder_init(dev, panel_node); + if (IS_ERR(encoder)) { + DRM_DEV_ERROR(dev->dev, "failed to construct LCDC encoder\n"); ++ of_node_put(panel_node); + return PTR_ERR(encoder); + } + +@@ -230,6 +231,7 @@ static int mdp4_modeset_init_intf(struct mdp4_kms *mdp4_kms, + connector = mdp4_lvds_connector_init(dev, panel_node, encoder); + if (IS_ERR(connector)) { + DRM_DEV_ERROR(dev->dev, "failed to initialize LVDS connector\n"); ++ of_node_put(panel_node); + return PTR_ERR(connector); + } + +-- +2.35.1 + diff --git a/queue-5.10/drm-msm-use-for_each_sgtable_sg-to-iterate-over-scat.patch b/queue-5.10/drm-msm-use-for_each_sgtable_sg-to-iterate-over-scat.patch new file mode 100644 index 00000000000..14c9d3e200c --- /dev/null +++ b/queue-5.10/drm-msm-use-for_each_sgtable_sg-to-iterate-over-scat.patch @@ -0,0 +1,39 @@ +From 6468c1cb01f31f0c17f82c5b877718fbd649dfbc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 13 Jun 2022 18:10:19 -0400 +Subject: drm/msm: use for_each_sgtable_sg to iterate over scatterlist + +From: Jonathan Marek + +[ Upstream commit 62b5e322fb6cc5a5a91fdeba0e4e57e75d9f4387 ] + +The dma_map_sgtable() call (used to invalidate cache) overwrites sgt->nents 
+with 1, so msm_iommu_pagetable_map maps only the first physical segment. + +To fix this problem use for_each_sgtable_sg(), which uses orig_nents. + +Fixes: b145c6e65eb0 ("drm/msm: Add support to create a local pagetable") +Signed-off-by: Jonathan Marek +Link: https://lore.kernel.org/r/20220613221019.11399-1-jonathan@marek.ca +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/msm_iommu.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c +index 22ac7c692a81..ecab6287c1c3 100644 +--- a/drivers/gpu/drm/msm/msm_iommu.c ++++ b/drivers/gpu/drm/msm/msm_iommu.c +@@ -58,7 +58,7 @@ static int msm_iommu_pagetable_map(struct msm_mmu *mmu, u64 iova, + u64 addr = iova; + unsigned int i; + +- for_each_sg(sgt->sgl, sg, sgt->nents, i) { ++ for_each_sgtable_sg(sgt, sg, i) { + size_t size = sg->length; + phys_addr_t phys = sg_phys(sg); + +-- +2.35.1 + diff --git a/queue-5.10/drm-sun4i-fix-crash-during-suspend-after-component-b.patch b/queue-5.10/drm-sun4i-fix-crash-during-suspend-after-component-b.patch new file mode 100644 index 00000000000..56327fca7a9 --- /dev/null +++ b/queue-5.10/drm-sun4i-fix-crash-during-suspend-after-component-b.patch @@ -0,0 +1,59 @@ +From a2846a01db0cef113bf2366e0615685ab145dbc2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 15 Jun 2022 00:42:53 -0500 +Subject: drm/sun4i: Fix crash during suspend after component bind failure + +From: Samuel Holland + +[ Upstream commit 1342b5b23da9559a1578978eaff7f797d8a87d91 ] + +If the component driver fails to bind, or is unbound, the driver data +for the top-level platform device points to a freed drm_device. If the +system is then suspended, the driver passes this dangling pointer to +drm_mode_config_helper_suspend(), which crashes. + +Fix this by only setting the driver data while the platform driver holds +a reference to the drm_device. 
+ +Fixes: 624b4b48d9d8 ("drm: sun4i: Add support for suspending the display driver") +Signed-off-by: Samuel Holland +Reviewed-by: Jernej Skrabec +Signed-off-by: Maxime Ripard +Link: https://lore.kernel.org/r/20220615054254.16352-1-samuel@sholland.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/sun4i/sun4i_drv.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/sun4i/sun4i_drv.c b/drivers/gpu/drm/sun4i/sun4i_drv.c +index 29861fc81b35..c5912fd53772 100644 +--- a/drivers/gpu/drm/sun4i/sun4i_drv.c ++++ b/drivers/gpu/drm/sun4i/sun4i_drv.c +@@ -71,7 +71,6 @@ static int sun4i_drv_bind(struct device *dev) + goto free_drm; + } + +- dev_set_drvdata(dev, drm); + drm->dev_private = drv; + INIT_LIST_HEAD(&drv->frontend_list); + INIT_LIST_HEAD(&drv->engine_list); +@@ -112,6 +111,8 @@ static int sun4i_drv_bind(struct device *dev) + + drm_fbdev_generic_setup(drm, 32); + ++ dev_set_drvdata(dev, drm); ++ + return 0; + + finish_poll: +@@ -128,6 +129,7 @@ static void sun4i_drv_unbind(struct device *dev) + { + struct drm_device *drm = dev_get_drvdata(dev); + ++ dev_set_drvdata(dev, NULL); + drm_dev_unregister(drm); + drm_kms_helper_poll_fini(drm); + drm_atomic_helper_shutdown(drm); +-- +2.35.1 + diff --git a/queue-5.10/erspan-do-not-assume-transport-header-is-always-set.patch b/queue-5.10/erspan-do-not-assume-transport-header-is-always-set.patch new file mode 100644 index 00000000000..dc233dde104 --- /dev/null +++ b/queue-5.10/erspan-do-not-assume-transport-header-is-always-set.patch @@ -0,0 +1,127 @@ +From 194507672adc295927502eed0857250b3d30406c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 20 Jun 2022 01:35:06 -0700 +Subject: erspan: do not assume transport header is always set + +From: Eric Dumazet + +[ Upstream commit 301bd140ed0b24f0da660874c7e8a47dad8c8222 ] + +Rewrite tests in ip6erspan_tunnel_xmit() and +erspan_fb_xmit() to not assume transport header is set. 
+ +syzbot reported: + +WARNING: CPU: 0 PID: 1350 at include/linux/skbuff.h:2911 skb_transport_header include/linux/skbuff.h:2911 [inline] +WARNING: CPU: 0 PID: 1350 at include/linux/skbuff.h:2911 ip6erspan_tunnel_xmit+0x15af/0x2eb0 net/ipv6/ip6_gre.c:963 +Modules linked in: +CPU: 0 PID: 1350 Comm: aoe_tx0 Not tainted 5.19.0-rc2-syzkaller-00160-g274295c6e53f #0 +Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.14.0-2 04/01/2014 +RIP: 0010:skb_transport_header include/linux/skbuff.h:2911 [inline] +RIP: 0010:ip6erspan_tunnel_xmit+0x15af/0x2eb0 net/ipv6/ip6_gre.c:963 +Code: 0f 47 f0 40 88 b5 7f fe ff ff e8 8c 16 4b f9 89 de bf ff ff ff ff e8 a0 12 4b f9 66 83 fb ff 0f 85 1d f1 ff ff e8 71 16 4b f9 <0f> 0b e9 43 f0 ff ff e8 65 16 4b f9 48 8d 85 30 ff ff ff ba 60 00 +RSP: 0018:ffffc90005daf910 EFLAGS: 00010293 +RAX: 0000000000000000 RBX: 000000000000ffff RCX: 0000000000000000 +RDX: ffff88801f032100 RSI: ffffffff882e8d3f RDI: 0000000000000003 +RBP: ffffc90005dafab8 R08: 0000000000000003 R09: 000000000000ffff +R10: 000000000000ffff R11: 0000000000000000 R12: ffff888024f21d40 +R13: 000000000000a288 R14: 00000000000000b0 R15: ffff888025a2e000 +FS: 0000000000000000(0000) GS:ffff88802c800000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 0000001b2e425000 CR3: 000000006d099000 CR4: 0000000000152ef0 +Call Trace: + +__netdev_start_xmit include/linux/netdevice.h:4805 [inline] +netdev_start_xmit include/linux/netdevice.h:4819 [inline] +xmit_one net/core/dev.c:3588 [inline] +dev_hard_start_xmit+0x188/0x880 net/core/dev.c:3604 +sch_direct_xmit+0x19f/0xbe0 net/sched/sch_generic.c:342 +__dev_xmit_skb net/core/dev.c:3815 [inline] +__dev_queue_xmit+0x14a1/0x3900 net/core/dev.c:4219 +dev_queue_xmit include/linux/netdevice.h:2994 [inline] +tx+0x6a/0xc0 drivers/block/aoe/aoenet.c:63 +kthread+0x1e7/0x3b0 drivers/block/aoe/aoecmd.c:1229 +kthread+0x2e9/0x3a0 kernel/kthread.c:376 +ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:302 + + +Fixes: 
d5db21a3e697 ("erspan: auto detect truncated ipv6 packets.") +Reported-by: syzbot +Signed-off-by: Eric Dumazet +Cc: William Tu +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + net/ipv4/ip_gre.c | 15 ++++++++++----- + net/ipv6/ip6_gre.c | 15 ++++++++++----- + 2 files changed, 20 insertions(+), 10 deletions(-) + +diff --git a/net/ipv4/ip_gre.c b/net/ipv4/ip_gre.c +index a7e32be8714f..6ab5c50aa7a8 100644 +--- a/net/ipv4/ip_gre.c ++++ b/net/ipv4/ip_gre.c +@@ -519,7 +519,6 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev) + int tunnel_hlen; + int version; + int nhoff; +- int thoff; + + tun_info = skb_tunnel_info(skb); + if (unlikely(!tun_info || !(tun_info->mode & IP_TUNNEL_INFO_TX) || +@@ -553,10 +552,16 @@ static void erspan_fb_xmit(struct sk_buff *skb, struct net_device *dev) + (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) + truncate = true; + +- thoff = skb_transport_header(skb) - skb_mac_header(skb); +- if (skb->protocol == htons(ETH_P_IPV6) && +- (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) +- truncate = true; ++ if (skb->protocol == htons(ETH_P_IPV6)) { ++ int thoff; ++ ++ if (skb_transport_header_was_set(skb)) ++ thoff = skb_transport_header(skb) - skb_mac_header(skb); ++ else ++ thoff = nhoff + sizeof(struct ipv6hdr); ++ if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) ++ truncate = true; ++ } + + if (version == 1) { + erspan_build_header(skb, ntohl(tunnel_id_to_key32(key->tun_id)), +diff --git a/net/ipv6/ip6_gre.c b/net/ipv6/ip6_gre.c +index 3f88ba6555ab..9e0890738d93 100644 +--- a/net/ipv6/ip6_gre.c ++++ b/net/ipv6/ip6_gre.c +@@ -944,7 +944,6 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb, + __be16 proto; + __u32 mtu; + int nhoff; +- int thoff; + + if (!pskb_inet_may_pull(skb)) + goto tx_err; +@@ -965,10 +964,16 @@ static netdev_tx_t ip6erspan_tunnel_xmit(struct sk_buff *skb, + (ntohs(ip_hdr(skb)->tot_len) > skb->len - nhoff)) + truncate = true; + +- thoff = 
skb_transport_header(skb) - skb_mac_header(skb); +- if (skb->protocol == htons(ETH_P_IPV6) && +- (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff)) +- truncate = true; ++ if (skb->protocol == htons(ETH_P_IPV6)) { ++ int thoff; ++ ++ if (skb_transport_header_was_set(skb)) ++ thoff = skb_transport_header(skb) - skb_mac_header(skb); ++ else ++ thoff = nhoff + sizeof(struct ipv6hdr); ++ if (ntohs(ipv6_hdr(skb)->payload_len) > skb->len - thoff) ++ truncate = true; ++ } + + if (skb_cow_head(skb, dev->needed_headroom ?: t->hlen)) + goto tx_err; +-- +2.35.1 + diff --git a/queue-5.10/gpio-winbond-fix-error-code-in-winbond_gpio_get.patch b/queue-5.10/gpio-winbond-fix-error-code-in-winbond_gpio_get.patch new file mode 100644 index 00000000000..dc2a72ad5df --- /dev/null +++ b/queue-5.10/gpio-winbond-fix-error-code-in-winbond_gpio_get.patch @@ -0,0 +1,45 @@ +From 64f93c369a4fcc0d33aef0d9297b5dd93ddce258 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 23 Jun 2022 11:29:48 +0300 +Subject: gpio: winbond: Fix error code in winbond_gpio_get() + +From: Dan Carpenter + +[ Upstream commit 9ca766eaea2e87b8b773bff04ee56c055cb76d4e ] + +This error path returns 1, but it should instead propagate the negative +error code from winbond_sio_enter(). 
+ +Fixes: a0d65009411c ("gpio: winbond: Add driver") +Signed-off-by: Dan Carpenter +Reviewed-by: Andy Shevchenko +Signed-off-by: Bartosz Golaszewski +Signed-off-by: Sasha Levin +--- + drivers/gpio/gpio-winbond.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +diff --git a/drivers/gpio/gpio-winbond.c b/drivers/gpio/gpio-winbond.c +index 7f8f5b02e31d..4b61d975cc0e 100644 +--- a/drivers/gpio/gpio-winbond.c ++++ b/drivers/gpio/gpio-winbond.c +@@ -385,12 +385,13 @@ static int winbond_gpio_get(struct gpio_chip *gc, unsigned int offset) + unsigned long *base = gpiochip_get_data(gc); + const struct winbond_gpio_info *info; + bool val; ++ int ret; + + winbond_gpio_get_info(&offset, &info); + +- val = winbond_sio_enter(*base); +- if (val) +- return val; ++ ret = winbond_sio_enter(*base); ++ if (ret) ++ return ret; + + winbond_sio_select_logical(*base, info->dev); + +-- +2.35.1 + diff --git a/queue-5.10/ice-ethtool-advertise-1000m-speeds-properly.patch b/queue-5.10/ice-ethtool-advertise-1000m-speeds-properly.patch new file mode 100644 index 00000000000..ad7fb8c389d --- /dev/null +++ b/queue-5.10/ice-ethtool-advertise-1000m-speeds-properly.patch @@ -0,0 +1,86 @@ +From 632f0bd453e7c491f46d518f407d920a0cfec177 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 20 Jun 2022 09:47:05 +0200 +Subject: ice: ethtool: advertise 1000M speeds properly + +From: Anatolii Gerasymenko + +[ Upstream commit c3d184c83ff4b80167e34edfc3d21df424bf27ff ] + +In current implementation ice_update_phy_type enables all link modes +for selected speed. This approach doesn't work for 1000M speeds, +because both copper (1000baseT) and optical (1000baseX) standards +cannot be enabled at once. + +Fix this, by adding the function `ice_set_phy_type_from_speed()` +for 1000M speeds. 
+ +Fixes: 48cb27f2fd18 ("ice: Implement handlers for ethtool PHY/link operations") +Signed-off-by: Anatolii Gerasymenko +Tested-by: Gurucharan (A Contingent worker at Intel) +Signed-off-by: Tony Nguyen +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/intel/ice/ice_ethtool.c | 39 +++++++++++++++++++- + 1 file changed, 38 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/ethernet/intel/ice/ice_ethtool.c b/drivers/net/ethernet/intel/ice/ice_ethtool.c +index 421fc707f80a..060897eb9cab 100644 +--- a/drivers/net/ethernet/intel/ice/ice_ethtool.c ++++ b/drivers/net/ethernet/intel/ice/ice_ethtool.c +@@ -2174,6 +2174,42 @@ ice_setup_autoneg(struct ice_port_info *p, struct ethtool_link_ksettings *ks, + return err; + } + ++/** ++ * ice_set_phy_type_from_speed - set phy_types based on speeds ++ * and advertised modes ++ * @ks: ethtool link ksettings struct ++ * @phy_type_low: pointer to the lower part of phy_type ++ * @phy_type_high: pointer to the higher part of phy_type ++ * @adv_link_speed: targeted link speeds bitmap ++ */ ++static void ++ice_set_phy_type_from_speed(const struct ethtool_link_ksettings *ks, ++ u64 *phy_type_low, u64 *phy_type_high, ++ u16 adv_link_speed) ++{ ++ /* Handle 1000M speed in a special way because ice_update_phy_type ++ * enables all link modes, but having mixed copper and optical ++ * standards is not supported. 
++ */ ++ adv_link_speed &= ~ICE_AQ_LINK_SPEED_1000MB; ++ ++ if (ethtool_link_ksettings_test_link_mode(ks, advertising, ++ 1000baseT_Full)) ++ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_T | ++ ICE_PHY_TYPE_LOW_1G_SGMII; ++ ++ if (ethtool_link_ksettings_test_link_mode(ks, advertising, ++ 1000baseKX_Full)) ++ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_KX; ++ ++ if (ethtool_link_ksettings_test_link_mode(ks, advertising, ++ 1000baseX_Full)) ++ *phy_type_low |= ICE_PHY_TYPE_LOW_1000BASE_SX | ++ ICE_PHY_TYPE_LOW_1000BASE_LX; ++ ++ ice_update_phy_type(phy_type_low, phy_type_high, adv_link_speed); ++} ++ + /** + * ice_set_link_ksettings - Set Speed and Duplex + * @netdev: network interface device structure +@@ -2310,7 +2346,8 @@ ice_set_link_ksettings(struct net_device *netdev, + adv_link_speed = curr_link_speed; + + /* Convert the advertise link speeds to their corresponded PHY_TYPE */ +- ice_update_phy_type(&phy_type_low, &phy_type_high, adv_link_speed); ++ ice_set_phy_type_from_speed(ks, &phy_type_low, &phy_type_high, ++ adv_link_speed); + + if (!autoneg_changed && adv_link_speed == curr_link_speed) { + netdev_info(netdev, "Nothing changed, exiting without setting anything.\n"); +-- +2.35.1 + diff --git a/queue-5.10/igb-fix-a-use-after-free-issue-in-igb_clean_tx_ring.patch b/queue-5.10/igb-fix-a-use-after-free-issue-in-igb_clean_tx_ring.patch new file mode 100644 index 00000000000..9025e018c39 --- /dev/null +++ b/queue-5.10/igb-fix-a-use-after-free-issue-in-igb_clean_tx_ring.patch @@ -0,0 +1,93 @@ +From 8f1811c32e8e0ecdb839a8b2c52dc20f39f28164 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 16 Jun 2022 16:13:20 +0200 +Subject: igb: fix a use-after-free issue in igb_clean_tx_ring + +From: Lorenzo Bianconi + +[ Upstream commit 3f6a57ee8544ec3982f8a3cbcbf4aea7d47eb9ec ] + +Fix the following use-after-free bug in igb_clean_tx_ring routine when +the NIC is running in XDP mode. 
The issue can be triggered redirecting +traffic into the igb NIC and then closing the device while the traffic +is flowing. + +[ 73.322719] CPU: 1 PID: 487 Comm: xdp_redirect Not tainted 5.18.3-apu2 #9 +[ 73.330639] Hardware name: PC Engines APU2/APU2, BIOS 4.0.7 02/28/2017 +[ 73.337434] RIP: 0010:refcount_warn_saturate+0xa7/0xf0 +[ 73.362283] RSP: 0018:ffffc9000081f798 EFLAGS: 00010282 +[ 73.367761] RAX: 0000000000000000 RBX: ffffc90000420f80 RCX: 0000000000000000 +[ 73.375200] RDX: ffff88811ad22d00 RSI: ffff88811ad171e0 RDI: ffff88811ad171e0 +[ 73.382590] RBP: 0000000000000900 R08: ffffffff82298f28 R09: 0000000000000058 +[ 73.390008] R10: 0000000000000219 R11: ffffffff82280f40 R12: 0000000000000090 +[ 73.397356] R13: ffff888102343a40 R14: ffff88810359e0e4 R15: 0000000000000000 +[ 73.404806] FS: 00007ff38d31d740(0000) GS:ffff88811ad00000(0000) knlGS:0000000000000000 +[ 73.413129] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +[ 73.419096] CR2: 000055cff35f13f8 CR3: 0000000106391000 CR4: 00000000000406e0 +[ 73.426565] Call Trace: +[ 73.429087] +[ 73.431314] igb_clean_tx_ring+0x43/0x140 [igb] +[ 73.436002] igb_down+0x1d7/0x220 [igb] +[ 73.439974] __igb_close+0x3c/0x120 [igb] +[ 73.444118] igb_xdp+0x10c/0x150 [igb] +[ 73.447983] ? igb_pci_sriov_configure+0x70/0x70 [igb] +[ 73.453362] dev_xdp_install+0xda/0x110 +[ 73.457371] dev_xdp_attach+0x1da/0x550 +[ 73.461369] do_setlink+0xfd0/0x10f0 +[ 73.465166] ? __nla_validate_parse+0x89/0xc70 +[ 73.469714] rtnl_setlink+0x11a/0x1e0 +[ 73.473547] rtnetlink_rcv_msg+0x145/0x3d0 +[ 73.477709] ? rtnl_calcit.isra.0+0x130/0x130 +[ 73.482258] netlink_rcv_skb+0x8d/0x110 +[ 73.486229] netlink_unicast+0x230/0x340 +[ 73.490317] netlink_sendmsg+0x215/0x470 +[ 73.494395] __sys_sendto+0x179/0x190 +[ 73.498268] ? move_addr_to_user+0x37/0x70 +[ 73.502547] ? __sys_getsockname+0x84/0xe0 +[ 73.506853] ? netlink_setsockopt+0x1c1/0x4a0 +[ 73.511349] ? 
__sys_setsockopt+0xc8/0x1d0 +[ 73.515636] __x64_sys_sendto+0x20/0x30 +[ 73.519603] do_syscall_64+0x3b/0x80 +[ 73.523399] entry_SYSCALL_64_after_hwframe+0x44/0xae +[ 73.528712] RIP: 0033:0x7ff38d41f20c +[ 73.551866] RSP: 002b:00007fff3b945a68 EFLAGS: 00000246 ORIG_RAX: 000000000000002c +[ 73.559640] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff38d41f20c +[ 73.567066] RDX: 0000000000000034 RSI: 00007fff3b945b30 RDI: 0000000000000003 +[ 73.574457] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000 +[ 73.581852] R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff3b945ab0 +[ 73.589179] R13: 0000000000000000 R14: 0000000000000003 R15: 00007fff3b945b30 +[ 73.596545] +[ 73.598842] ---[ end trace 0000000000000000 ]--- + +Fixes: 9cbc948b5a20c ("igb: add XDP support") +Signed-off-by: Lorenzo Bianconi +Reviewed-by: Jesse Brandeburg +Acked-by: Jesper Dangaard Brouer +Link: https://lore.kernel.org/r/e5c01d549dc37bff18e46aeabd6fb28a7bcf84be.1655388571.git.lorenzo@kernel.org +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/intel/igb/igb_main.c | 7 +++++-- + 1 file changed, 5 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index 5e67c9c119d2..758e468e677a 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -4813,8 +4813,11 @@ static void igb_clean_tx_ring(struct igb_ring *tx_ring) + while (i != tx_ring->next_to_use) { + union e1000_adv_tx_desc *eop_desc, *tx_desc; + +- /* Free all the Tx ring sk_buffs */ +- dev_kfree_skb_any(tx_buffer->skb); ++ /* Free all the Tx ring sk_buffs or xdp frames */ ++ if (tx_buffer->type == IGB_TYPE_SKB) ++ dev_kfree_skb_any(tx_buffer->skb); ++ else ++ xdp_return_frame(tx_buffer->xdpf); + + /* unmap skb header data */ + dma_unmap_single(tx_ring->dev, +-- +2.35.1 + diff --git a/queue-5.10/igb-make-dma-faster-when-cpu-is-active-on-the-pcie-l.patch 
b/queue-5.10/igb-make-dma-faster-when-cpu-is-active-on-the-pcie-l.patch new file mode 100644 index 00000000000..6e1df37067c --- /dev/null +++ b/queue-5.10/igb-make-dma-faster-when-cpu-is-active-on-the-pcie-l.patch @@ -0,0 +1,83 @@ +From e52659de2f7084bd6696d554f52ae08c0543ff4c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 21 Jun 2022 15:10:56 -0700 +Subject: igb: Make DMA faster when CPU is active on the PCIe link + +From: Kai-Heng Feng + +[ Upstream commit 4e0effd9007ea0be31f7488611eb3824b4541554 ] + +Intel I210 on some Intel Alder Lake platforms can only achieve ~750Mbps +Tx speed via iperf. The RR2DCDELAY shows around 0x2xxx DMA delay, which +will be significantly lower when 1) ASPM is disabled or 2) SoC package +c-state stays above PC3. When the RR2DCDELAY is around 0x1xxx the Tx +speed can reach to ~950Mbps. + +According to the I210 datasheet "8.26.1 PCIe Misc. Register - PCIEMISC", +"DMA Idle Indication" doesn't seem to tie to DMA coalesce anymore, so +set it to 1b for "DMA is considered idle when there is no Rx or Tx AND +when there are no TLPs indicating that CPU is active detected on the +PCIe link (such as the host executes CSR or Configuration register read +or write operation)" and performing Tx should also fall under "active +CPU on PCIe link" case. + +In addition to that, commit b6e0c419f040 ("igb: Move DMA Coalescing init +code to separate function.") seems to wrongly changed from enabling +E1000_PCIEMISC_LX_DECISION to disabling it, also fix that. 
+ +Fixes: b6e0c419f040 ("igb: Move DMA Coalescing init code to separate function.") +Signed-off-by: Kai-Heng Feng +Tested-by: Gurucharan (A Contingent worker at Intel) +Signed-off-by: Tony Nguyen +Link: https://lore.kernel.org/r/20220621221056.604304-1-anthony.l.nguyen@intel.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/intel/igb/igb_main.c | 12 +++++------- + 1 file changed, 5 insertions(+), 7 deletions(-) + +diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c +index 758e468e677a..4e51f4bb58ff 100644 +--- a/drivers/net/ethernet/intel/igb/igb_main.c ++++ b/drivers/net/ethernet/intel/igb/igb_main.c +@@ -9829,11 +9829,10 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) + struct e1000_hw *hw = &adapter->hw; + u32 dmac_thr; + u16 hwm; ++ u32 reg; + + if (hw->mac.type > e1000_82580) { + if (adapter->flags & IGB_FLAG_DMAC) { +- u32 reg; +- + /* force threshold to 0. */ + wr32(E1000_DMCTXTH, 0); + +@@ -9866,7 +9865,6 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) + /* Disable BMC-to-OS Watchdog Enable */ + if (hw->mac.type != e1000_i354) + reg &= ~E1000_DMACR_DC_BMC2OSW_EN; +- + wr32(E1000_DMACR, reg); + + /* no lower threshold to disable +@@ -9883,12 +9881,12 @@ static void igb_init_dmac(struct igb_adapter *adapter, u32 pba) + */ + wr32(E1000_DMCTXTH, (IGB_MIN_TXPBSIZE - + (IGB_TX_BUF_4096 + adapter->max_frame_size)) >> 6); ++ } + +- /* make low power state decision controlled +- * by DMA coal +- */ ++ if (hw->mac.type >= e1000_i210 || ++ (adapter->flags & IGB_FLAG_DMAC)) { + reg = rd32(E1000_PCIEMISC); +- reg &= ~E1000_PCIEMISC_LX_DECISION; ++ reg |= E1000_PCIEMISC_LX_DECISION; + wr32(E1000_PCIEMISC, reg); + } /* endif adapter->dmac is not disabled */ + } else if (hw->mac.type == e1000_82580) { +-- +2.35.1 + diff --git a/queue-5.10/iio-adc-vf610-fix-conversion-mode-sysfs-node-name.patch 
b/queue-5.10/iio-adc-vf610-fix-conversion-mode-sysfs-node-name.patch new file mode 100644 index 00000000000..4e64e9c570e --- /dev/null +++ b/queue-5.10/iio-adc-vf610-fix-conversion-mode-sysfs-node-name.patch @@ -0,0 +1,35 @@ +From d5c00ef6a344b8f7ebda4c495b120d044a23762c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 May 2022 11:50:26 +0300 +Subject: iio: adc: vf610: fix conversion mode sysfs node name + +From: Baruch Siach + +[ Upstream commit f1a633b15cd5371a2a83f02c513984e51132dd68 ] + +The documentation missed the "in_" prefix for this IIO_SHARED_BY_DIR +entry. + +Fixes: bf04c1a367e3 ("iio: adc: vf610: implement configurable conversion modes") +Signed-off-by: Baruch Siach +Acked-by: Haibo Chen +Link: https://lore.kernel.org/r/560dc93fafe5ef7e9a409885fd20b6beac3973d8.1653900626.git.baruch@tkos.co.il +Signed-off-by: Jonathan Cameron +Signed-off-by: Sasha Levin +--- + Documentation/ABI/testing/sysfs-bus-iio-vf610 | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/Documentation/ABI/testing/sysfs-bus-iio-vf610 b/Documentation/ABI/testing/sysfs-bus-iio-vf610 +index 308a6756d3bf..491ead804488 100644 +--- a/Documentation/ABI/testing/sysfs-bus-iio-vf610 ++++ b/Documentation/ABI/testing/sysfs-bus-iio-vf610 +@@ -1,4 +1,4 @@ +-What: /sys/bus/iio/devices/iio:deviceX/conversion_mode ++What: /sys/bus/iio/devices/iio:deviceX/in_conversion_mode + KernelVersion: 4.2 + Contact: linux-iio@vger.kernel.org + Description: +-- +2.35.1 + diff --git a/queue-5.10/iio-mma8452-fix-probe-fail-when-device-tree-compatib.patch b/queue-5.10/iio-mma8452-fix-probe-fail-when-device-tree-compatib.patch new file mode 100644 index 00000000000..2bbe2e41046 --- /dev/null +++ b/queue-5.10/iio-mma8452-fix-probe-fail-when-device-tree-compatib.patch @@ -0,0 +1,48 @@ +From 2567d285d35d663fd725ca51d47e1f37fd9e6f8d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Apr 2022 16:41:00 +0800 +Subject: iio: mma8452: fix probe fail when device tree compatible is used. 
+ +From: Haibo Chen + +[ Upstream commit fe18894930a025617114aa8ca0adbf94d5bffe89 ] + +Correct the probe logic: first check of_match_table; if there is +no match, fall back to i2c_driver.id_table. If neither matches, then +return an error. + +Fixes: a47ac019e7e8 ("iio: mma8452: Fix probe failing when an i2c_device_id is used") +Signed-off-by: Haibo Chen +Link: https://lore.kernel.org/r/1650876060-17577-1-git-send-email-haibo.chen@nxp.com +Signed-off-by: Jonathan Cameron +Signed-off-by: Sasha Levin +--- + drivers/iio/accel/mma8452.c | 12 +++++++----- + 1 file changed, 7 insertions(+), 5 deletions(-) + +diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c +index e7e280282774..67463be797de 100644 +--- a/drivers/iio/accel/mma8452.c ++++ b/drivers/iio/accel/mma8452.c +@@ -1542,11 +1542,13 @@ static int mma8452_probe(struct i2c_client *client, + mutex_init(&data->lock); + + data->chip_info = device_get_match_data(&client->dev); +- if (!data->chip_info && id) { +- data->chip_info = &mma_chip_info_table[id->driver_data]; +- } else { +- dev_err(&client->dev, "unknown device model\n"); +- return -ENODEV; ++ if (!data->chip_info) { ++ if (id) { ++ data->chip_info = &mma_chip_info_table[id->driver_data]; ++ } else { ++ dev_err(&client->dev, "unknown device model\n"); ++ return -ENODEV; ++ } + } + + data->vdd_reg = devm_regulator_get(&client->dev, "vdd"); +-- +2.35.1 + diff --git a/queue-5.10/mips-remove-repetitive-increase-irq_err_count.patch b/queue-5.10/mips-remove-repetitive-increase-irq_err_count.patch new file mode 100644 index 00000000000..774da1087d3 --- /dev/null +++ b/queue-5.10/mips-remove-repetitive-increase-irq_err_count.patch @@ -0,0 +1,61 @@ +From 9bf5774cd7914a4f2bd2650ba0b282e9c168d865 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 10 Jun 2022 19:14:20 +0800 +Subject: MIPS: Remove repetitive increase irq_err_count + +From: huhai + +[ Upstream commit c81aba8fde2aee4f5778ebab3a1d51bd2ef48e4c ] + +commit 979934da9e7a ("[PATCH] mips: update IRQ
handling for vr41xx") added +a function irq_dispatch, and it'll increase irq_err_count when the get_irq +callback returns a negative value, but the increase of irq_err_count in get_irq +itself was not removed. + +Also, modpost complains once the gpio-vr41xx driver is built as a module. + ERROR: modpost: "irq_err_count" [drivers/gpio/gpio-vr41xx.ko] undefined! + +So remove the repetitive increase of irq_err_count in the +get_irq callbacks. + +Fixes: 27fdd325dace ("MIPS: Update VR41xx GPIO driver to use gpiolib") +Fixes: 979934da9e7a ("[PATCH] mips: update IRQ handling for vr41xx") +Reported-by: k2ci +Signed-off-by: huhai +Signed-off-by: Genjian Zhang +Signed-off-by: Thomas Bogendoerfer +Signed-off-by: Sasha Levin +--- + arch/mips/vr41xx/common/icu.c | 2 -- + drivers/gpio/gpio-vr41xx.c | 2 -- + 2 files changed, 4 deletions(-) + +diff --git a/arch/mips/vr41xx/common/icu.c b/arch/mips/vr41xx/common/icu.c +index 7b7f25b4b057..9240bcdbe74e 100644 +--- a/arch/mips/vr41xx/common/icu.c ++++ b/arch/mips/vr41xx/common/icu.c +@@ -640,8 +640,6 @@ static int icu_get_irq(unsigned int irq) + + printk(KERN_ERR "spurious ICU interrupt: %04x,%04x\n", pend1, pend2); + +- atomic_inc(&irq_err_count); +- + return -1; + } + +diff --git a/drivers/gpio/gpio-vr41xx.c b/drivers/gpio/gpio-vr41xx.c +index 98cd715ccc33..8d09b619c166 100644 +--- a/drivers/gpio/gpio-vr41xx.c ++++ b/drivers/gpio/gpio-vr41xx.c +@@ -217,8 +217,6 @@ static int giu_get_irq(unsigned int irq) + printk(KERN_ERR "spurious GIU interrupt: %04x(%04x),%04x(%04x)\n", + maskl, pendl, maskh, pendh); + +- atomic_inc(&irq_err_count); +- + return -EINVAL; + } + +-- +2.35.1 + diff --git a/queue-5.10/net-sched-sch_netem-fix-arithmetic-in-netem_dump-for.patch b/queue-5.10/net-sched-sch_netem-fix-arithmetic-in-netem_dump-for.patch new file mode 100644 index 00000000000..69b89101132 --- /dev/null +++ b/queue-5.10/net-sched-sch_netem-fix-arithmetic-in-netem_dump-for.patch @@ -0,0 +1,73 @@ +From dddaf6ba4405f8163496e946c12597ca274211ce Mon Sep
17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 16 Jun 2022 16:43:36 -0700 +Subject: net/sched: sch_netem: Fix arithmetic in netem_dump() for 32-bit + platforms + +From: Peilin Ye + +[ Upstream commit a2b1a5d40bd12b44322c2ccd40bb0ec1699708b6 ] + +As reported by Yuming, currently tc always shows a latency of UINT_MAX +for netem Qdisc's on 32-bit platforms: + + $ tc qdisc add dev dummy0 root netem latency 100ms + $ tc qdisc show dev dummy0 + qdisc netem 8001: root refcnt 2 limit 1000 delay 275s 275s + ^^^^^^^^^^^^^^^^ + +Let us take a closer look at netem_dump(): + + qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency), + UINT_MAX); + +qopt.latency is __u32, psched_tdiff_t is signed long, +(psched_tdiff_t)(UINT_MAX) is negative for 32-bit platforms, so +qopt.latency is always UINT_MAX. + +Fix it by using psched_time_t (u64) instead. + +Note: confusingly, users have two ways to specify 'latency': + + 1. normally, via '__u32 latency' in struct tc_netem_qopt; + 2. via the TCA_NETEM_LATENCY64 attribute, which is s64. + +For the second case, theoretically 'latency' could be negative. This +patch ignores that corner case, since it is broken (i.e. assigning a +negative s64 to __u32) anyway, and should be handled separately. + +Thanks Ted Lin for the analysis [1].
+ +[1] https://github.com/raspberrypi/linux/issues/3512 + +Reported-by: Yuming Chen +Fixes: 112f9cb65643 ("netem: convert to qdisc_watchdog_schedule_ns") +Reviewed-by: Cong Wang +Signed-off-by: Peilin Ye +Acked-by: Stephen Hemminger +Link: https://lore.kernel.org/r/20220616234336.2443-1-yepeilin.cs@gmail.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/sched/sch_netem.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/net/sched/sch_netem.c b/net/sched/sch_netem.c +index 0c345e43a09a..adc5407fd5d5 100644 +--- a/net/sched/sch_netem.c ++++ b/net/sched/sch_netem.c +@@ -1146,9 +1146,9 @@ static int netem_dump(struct Qdisc *sch, struct sk_buff *skb) + struct tc_netem_rate rate; + struct tc_netem_slot slot; + +- qopt.latency = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->latency), ++ qopt.latency = min_t(psched_time_t, PSCHED_NS2TICKS(q->latency), + UINT_MAX); +- qopt.jitter = min_t(psched_tdiff_t, PSCHED_NS2TICKS(q->jitter), ++ qopt.jitter = min_t(psched_time_t, PSCHED_NS2TICKS(q->jitter), + UINT_MAX); + qopt.limit = q->limit; + qopt.loss = q->loss; +-- +2.35.1 + diff --git a/queue-5.10/net-tls-fix-tls_sk_proto_close-executed-repeatedly.patch b/queue-5.10/net-tls-fix-tls_sk_proto_close-executed-repeatedly.patch new file mode 100644 index 00000000000..0b795d5adbd --- /dev/null +++ b/queue-5.10/net-tls-fix-tls_sk_proto_close-executed-repeatedly.patch @@ -0,0 +1,55 @@ +From e1165c36d5c42663db4963d6413b29be2f594b89 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 20 Jun 2022 12:35:08 +0800 +Subject: net/tls: fix tls_sk_proto_close executed repeatedly + +From: Ziyang Xuan + +[ Upstream commit 69135c572d1f84261a6de2a1268513a7e71753e2 ] + +After setting the sock ktls, update ctx->sk_proto to sock->sk_prot by +tls_update(), so now ctx->sk_proto->close is tls_sk_proto_close(). When +the sock is closed, tls_sk_proto_close() is called because sock->sk_prot->close +is tls_sk_proto_close().
But ctx->sk_proto->close() will be executed later +in tls_sk_proto_close(). Thus tls_sk_proto_close() ends up being executed +repeatedly, which triggers the following bug. + +================================================================= +KASAN: null-ptr-deref in range [0x0000000000000010-0x0000000000000017] +RIP: 0010:tls_sk_proto_close+0xd8/0xaf0 net/tls/tls_main.c:306 +Call Trace: + + tls_sk_proto_close+0x356/0xaf0 net/tls/tls_main.c:329 + inet_release+0x12e/0x280 net/ipv4/af_inet.c:428 + __sock_release+0xcd/0x280 net/socket.c:650 + sock_close+0x18/0x20 net/socket.c:1365 + +Updating a proto that is the same as sock->sk_prot is incorrect. Add a proto +and sock->sk_prot equality check at the head of tls_update() to fix it. + +Fixes: 95fa145479fb ("bpf: sockmap/tls, close can race with map free") +Reported-by: syzbot+29c3c12f3214b85ad081@syzkaller.appspotmail.com +Signed-off-by: Ziyang Xuan +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + net/tls/tls_main.c | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c +index 58d22d6b86ae..9492528f5852 100644 +--- a/net/tls/tls_main.c ++++ b/net/tls/tls_main.c +@@ -787,6 +787,9 @@ static void tls_update(struct sock *sk, struct proto *p, + { + struct tls_context *ctx; + ++ if (sk->sk_prot == p) ++ return; ++ + ctx = tls_get_ctx(sk); + if (likely(ctx)) { + ctx->sk_write_space = write_space; +-- +2.35.1 + diff --git a/queue-5.10/netfilter-nftables-add-nft_parse_register_load-and-u.patch b/queue-5.10/netfilter-nftables-add-nft_parse_register_load-and-u.patch new file mode 100644 index 00000000000..9e9aaf306f3 --- /dev/null +++ b/queue-5.10/netfilter-nftables-add-nft_parse_register_load-and-u.patch @@ -0,0 +1,863 @@ +From c94eea64cd1914c27571ed0656a0c99852dbc85b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Jan 2021 17:28:18 +0100 +Subject: netfilter: nftables: add nft_parse_register_load() and use it +MIME-Version: 1.0 +Content-Type: text/plain;
charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Pablo Neira Ayuso + +[ Upstream commit 4f16d25c68ec844299a4df6ecbb0234eaf88a935 ] + +This new function combines the netlink register attribute parser +and the load validation function. + +This update requires to replace: + + enum nft_registers sreg:8; + +in many of the expression private areas otherwise compiler complains +with: + + error: cannot take address of bit-field ‘sreg’ + +when passing the register field as reference. + +Signed-off-by: Pablo Neira Ayuso +Signed-off-by: Sasha Levin +--- + include/net/netfilter/nf_tables.h | 2 +- + include/net/netfilter/nf_tables_core.h | 6 ++--- + include/net/netfilter/nft_meta.h | 2 +- + net/ipv4/netfilter/nft_dup_ipv4.c | 18 ++++++------- + net/ipv6/netfilter/nft_dup_ipv6.c | 18 ++++++------- + net/netfilter/nf_tables_api.c | 18 +++++++++++-- + net/netfilter/nft_bitwise.c | 10 ++++---- + net/netfilter/nft_byteorder.c | 6 ++--- + net/netfilter/nft_cmp.c | 8 +++--- + net/netfilter/nft_ct.c | 5 ++-- + net/netfilter/nft_dup_netdev.c | 6 ++--- + net/netfilter/nft_dynset.c | 12 ++++----- + net/netfilter/nft_exthdr.c | 6 ++--- + net/netfilter/nft_fwd_netdev.c | 18 ++++++------- + net/netfilter/nft_hash.c | 10 +++++--- + net/netfilter/nft_lookup.c | 6 ++--- + net/netfilter/nft_masq.c | 18 ++++++------- + net/netfilter/nft_meta.c | 3 +-- + net/netfilter/nft_nat.c | 35 +++++++++++--------------- + net/netfilter/nft_objref.c | 6 ++--- + net/netfilter/nft_payload.c | 4 +-- + net/netfilter/nft_queue.c | 12 ++++----- + net/netfilter/nft_range.c | 6 ++--- + net/netfilter/nft_redir.c | 18 ++++++------- + net/netfilter/nft_tproxy.c | 14 +++++------ + 25 files changed, 132 insertions(+), 135 deletions(-) + +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index b7907385a02f..06e7f84a6d12 100644 +--- a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -203,7 +203,7 @@ int nft_parse_u32_check(const struct nlattr *attr, int 
max, u32 *dest); + unsigned int nft_parse_register(const struct nlattr *attr); + int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg); + +-int nft_validate_register_load(enum nft_registers reg, unsigned int len); ++int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len); + int nft_validate_register_store(const struct nft_ctx *ctx, + enum nft_registers reg, + const struct nft_data *data, +diff --git a/include/net/netfilter/nf_tables_core.h b/include/net/netfilter/nf_tables_core.h +index 8657e6815b07..b7aff03a3f0f 100644 +--- a/include/net/netfilter/nf_tables_core.h ++++ b/include/net/netfilter/nf_tables_core.h +@@ -26,14 +26,14 @@ void nf_tables_core_module_exit(void); + struct nft_bitwise_fast_expr { + u32 mask; + u32 xor; +- enum nft_registers sreg:8; ++ u8 sreg; + enum nft_registers dreg:8; + }; + + struct nft_cmp_fast_expr { + u32 data; + u32 mask; +- enum nft_registers sreg:8; ++ u8 sreg; + u8 len; + bool inv; + }; +@@ -67,7 +67,7 @@ struct nft_payload_set { + enum nft_payload_bases base:8; + u8 offset; + u8 len; +- enum nft_registers sreg:8; ++ u8 sreg; + u8 csum_type; + u8 csum_offset; + u8 csum_flags; +diff --git a/include/net/netfilter/nft_meta.h b/include/net/netfilter/nft_meta.h +index 07e2fd507963..946fa8c83798 100644 +--- a/include/net/netfilter/nft_meta.h ++++ b/include/net/netfilter/nft_meta.h +@@ -8,7 +8,7 @@ struct nft_meta { + enum nft_meta_keys key:8; + union { + enum nft_registers dreg:8; +- enum nft_registers sreg:8; ++ u8 sreg; + }; + }; + +diff --git a/net/ipv4/netfilter/nft_dup_ipv4.c b/net/ipv4/netfilter/nft_dup_ipv4.c +index bcdb37f86a94..aeb631760eb9 100644 +--- a/net/ipv4/netfilter/nft_dup_ipv4.c ++++ b/net/ipv4/netfilter/nft_dup_ipv4.c +@@ -13,8 +13,8 @@ + #include + + struct nft_dup_ipv4 { +- enum nft_registers sreg_addr:8; +- enum nft_registers sreg_dev:8; ++ u8 sreg_addr; ++ u8 sreg_dev; + }; + + static void nft_dup_ipv4_eval(const struct nft_expr *expr, +@@ -40,16 +40,16 @@ static int 
nft_dup_ipv4_init(const struct nft_ctx *ctx, + if (tb[NFTA_DUP_SREG_ADDR] == NULL) + return -EINVAL; + +- priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]); +- err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in_addr)); ++ err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr, ++ sizeof(struct in_addr)); + if (err < 0) + return err; + +- if (tb[NFTA_DUP_SREG_DEV] != NULL) { +- priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); +- return nft_validate_register_load(priv->sreg_dev, sizeof(int)); +- } +- return 0; ++ if (tb[NFTA_DUP_SREG_DEV]) ++ err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], ++ &priv->sreg_dev, sizeof(int)); ++ ++ return err; + } + + static int nft_dup_ipv4_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/ipv6/netfilter/nft_dup_ipv6.c b/net/ipv6/netfilter/nft_dup_ipv6.c +index 8b5193efb1f1..3a00d95e964e 100644 +--- a/net/ipv6/netfilter/nft_dup_ipv6.c ++++ b/net/ipv6/netfilter/nft_dup_ipv6.c +@@ -13,8 +13,8 @@ + #include + + struct nft_dup_ipv6 { +- enum nft_registers sreg_addr:8; +- enum nft_registers sreg_dev:8; ++ u8 sreg_addr; ++ u8 sreg_dev; + }; + + static void nft_dup_ipv6_eval(const struct nft_expr *expr, +@@ -38,16 +38,16 @@ static int nft_dup_ipv6_init(const struct nft_ctx *ctx, + if (tb[NFTA_DUP_SREG_ADDR] == NULL) + return -EINVAL; + +- priv->sreg_addr = nft_parse_register(tb[NFTA_DUP_SREG_ADDR]); +- err = nft_validate_register_load(priv->sreg_addr, sizeof(struct in6_addr)); ++ err = nft_parse_register_load(tb[NFTA_DUP_SREG_ADDR], &priv->sreg_addr, ++ sizeof(struct in6_addr)); + if (err < 0) + return err; + +- if (tb[NFTA_DUP_SREG_DEV] != NULL) { +- priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); +- return nft_validate_register_load(priv->sreg_dev, sizeof(int)); +- } +- return 0; ++ if (tb[NFTA_DUP_SREG_DEV]) ++ err = nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], ++ &priv->sreg_dev, sizeof(int)); ++ ++ return err; + } + + static int 
nft_dup_ipv6_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 0c56a90c3f08..91713ecd60e9 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -8514,7 +8514,7 @@ EXPORT_SYMBOL_GPL(nft_dump_register); + * Validate that the input register is one of the general purpose + * registers and that the length of the load is within the bounds. + */ +-int nft_validate_register_load(enum nft_registers reg, unsigned int len) ++static int nft_validate_register_load(enum nft_registers reg, unsigned int len) + { + if (reg < NFT_REG_1 * NFT_REG_SIZE / NFT_REG32_SIZE) + return -EINVAL; +@@ -8525,7 +8525,21 @@ int nft_validate_register_load(enum nft_registers reg, unsigned int len) + + return 0; + } +-EXPORT_SYMBOL_GPL(nft_validate_register_load); ++ ++int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len) ++{ ++ u32 reg; ++ int err; ++ ++ reg = nft_parse_register(attr); ++ err = nft_validate_register_load(reg, len); ++ if (err < 0) ++ return err; ++ ++ *sreg = reg; ++ return 0; ++} ++EXPORT_SYMBOL_GPL(nft_parse_register_load); + + /** + * nft_validate_register_store - validate an expressions' register store +diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c +index bbd773d74377..2157970b3cd3 100644 +--- a/net/netfilter/nft_bitwise.c ++++ b/net/netfilter/nft_bitwise.c +@@ -16,7 +16,7 @@ + #include + + struct nft_bitwise { +- enum nft_registers sreg:8; ++ u8 sreg; + enum nft_registers dreg:8; + enum nft_bitwise_ops op:8; + u8 len; +@@ -169,8 +169,8 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, + + priv->len = len; + +- priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]); +- err = nft_validate_register_load(priv->sreg, priv->len); ++ err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg, ++ priv->len); + if (err < 0) + return err; + +@@ -315,8 +315,8 @@ static int nft_bitwise_fast_init(const struct 
nft_ctx *ctx, + struct nft_bitwise_fast_expr *priv = nft_expr_priv(expr); + int err; + +- priv->sreg = nft_parse_register(tb[NFTA_BITWISE_SREG]); +- err = nft_validate_register_load(priv->sreg, sizeof(u32)); ++ err = nft_parse_register_load(tb[NFTA_BITWISE_SREG], &priv->sreg, ++ sizeof(u32)); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c +index 12bed3f7bbc6..0960563cd5a1 100644 +--- a/net/netfilter/nft_byteorder.c ++++ b/net/netfilter/nft_byteorder.c +@@ -16,7 +16,7 @@ + #include + + struct nft_byteorder { +- enum nft_registers sreg:8; ++ u8 sreg; + enum nft_registers dreg:8; + enum nft_byteorder_ops op:8; + u8 len; +@@ -131,14 +131,14 @@ static int nft_byteorder_init(const struct nft_ctx *ctx, + return -EINVAL; + } + +- priv->sreg = nft_parse_register(tb[NFTA_BYTEORDER_SREG]); + err = nft_parse_u32_check(tb[NFTA_BYTEORDER_LEN], U8_MAX, &len); + if (err < 0) + return err; + + priv->len = len; + +- err = nft_validate_register_load(priv->sreg, priv->len); ++ err = nft_parse_register_load(tb[NFTA_BYTEORDER_SREG], &priv->sreg, ++ priv->len); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_cmp.c b/net/netfilter/nft_cmp.c +index 1d42d06f5b64..b529c0e86546 100644 +--- a/net/netfilter/nft_cmp.c ++++ b/net/netfilter/nft_cmp.c +@@ -18,7 +18,7 @@ + + struct nft_cmp_expr { + struct nft_data data; +- enum nft_registers sreg:8; ++ u8 sreg; + u8 len; + enum nft_cmp_ops op:8; + }; +@@ -87,8 +87,7 @@ static int nft_cmp_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + return err; + } + +- priv->sreg = nft_parse_register(tb[NFTA_CMP_SREG]); +- err = nft_validate_register_load(priv->sreg, desc.len); ++ err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len); + if (err < 0) + return err; + +@@ -211,8 +210,7 @@ static int nft_cmp_fast_init(const struct nft_ctx *ctx, + if (err < 0) + return err; + +- priv->sreg = nft_parse_register(tb[NFTA_CMP_SREG]); +- err = 
nft_validate_register_load(priv->sreg, desc.len); ++ err = nft_parse_register_load(tb[NFTA_CMP_SREG], &priv->sreg, desc.len); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c +index 7fcb73ac2e6e..dbf54cca6083 100644 +--- a/net/netfilter/nft_ct.c ++++ b/net/netfilter/nft_ct.c +@@ -28,7 +28,7 @@ struct nft_ct { + enum ip_conntrack_dir dir:8; + union { + enum nft_registers dreg:8; +- enum nft_registers sreg:8; ++ u8 sreg; + }; + }; + +@@ -608,8 +608,7 @@ static int nft_ct_set_init(const struct nft_ctx *ctx, + } + } + +- priv->sreg = nft_parse_register(tb[NFTA_CT_SREG]); +- err = nft_validate_register_load(priv->sreg, len); ++ err = nft_parse_register_load(tb[NFTA_CT_SREG], &priv->sreg, len); + if (err < 0) + goto err1; + +diff --git a/net/netfilter/nft_dup_netdev.c b/net/netfilter/nft_dup_netdev.c +index 70c457476b87..5b5c607fbf83 100644 +--- a/net/netfilter/nft_dup_netdev.c ++++ b/net/netfilter/nft_dup_netdev.c +@@ -14,7 +14,7 @@ + #include + + struct nft_dup_netdev { +- enum nft_registers sreg_dev:8; ++ u8 sreg_dev; + }; + + static void nft_dup_netdev_eval(const struct nft_expr *expr, +@@ -40,8 +40,8 @@ static int nft_dup_netdev_init(const struct nft_ctx *ctx, + if (tb[NFTA_DUP_SREG_DEV] == NULL) + return -EINVAL; + +- priv->sreg_dev = nft_parse_register(tb[NFTA_DUP_SREG_DEV]); +- return nft_validate_register_load(priv->sreg_dev, sizeof(int)); ++ return nft_parse_register_load(tb[NFTA_DUP_SREG_DEV], &priv->sreg_dev, ++ sizeof(int)); + } + + static int nft_dup_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_dynset.c b/net/netfilter/nft_dynset.c +index 58904bee1a0d..8c45e01fecdd 100644 +--- a/net/netfilter/nft_dynset.c ++++ b/net/netfilter/nft_dynset.c +@@ -16,8 +16,8 @@ struct nft_dynset { + struct nft_set *set; + struct nft_set_ext_tmpl tmpl; + enum nft_dynset_ops op:8; +- enum nft_registers sreg_key:8; +- enum nft_registers sreg_data:8; ++ u8 sreg_key; ++ u8 sreg_data; + bool 
invert; + u64 timeout; + struct nft_expr *expr; +@@ -154,8 +154,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx, + return err; + } + +- priv->sreg_key = nft_parse_register(tb[NFTA_DYNSET_SREG_KEY]); +- err = nft_validate_register_load(priv->sreg_key, set->klen); ++ err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_KEY], &priv->sreg_key, ++ set->klen); + if (err < 0) + return err; + +@@ -165,8 +165,8 @@ static int nft_dynset_init(const struct nft_ctx *ctx, + if (set->dtype == NFT_DATA_VERDICT) + return -EOPNOTSUPP; + +- priv->sreg_data = nft_parse_register(tb[NFTA_DYNSET_SREG_DATA]); +- err = nft_validate_register_load(priv->sreg_data, set->dlen); ++ err = nft_parse_register_load(tb[NFTA_DYNSET_SREG_DATA], ++ &priv->sreg_data, set->dlen); + if (err < 0) + return err; + } else if (set->flags & NFT_SET_MAP) +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index faa0844c01fb..f2b36b9c2b53 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -20,7 +20,7 @@ struct nft_exthdr { + u8 len; + u8 op; + enum nft_registers dreg:8; +- enum nft_registers sreg:8; ++ u8 sreg; + u8 flags; + }; + +@@ -403,11 +403,11 @@ static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx, + priv->type = nla_get_u8(tb[NFTA_EXTHDR_TYPE]); + priv->offset = offset; + priv->len = len; +- priv->sreg = nft_parse_register(tb[NFTA_EXTHDR_SREG]); + priv->flags = flags; + priv->op = op; + +- return nft_validate_register_load(priv->sreg, priv->len); ++ return nft_parse_register_load(tb[NFTA_EXTHDR_SREG], &priv->sreg, ++ priv->len); + } + + static int nft_exthdr_ipv4_init(const struct nft_ctx *ctx, +diff --git a/net/netfilter/nft_fwd_netdev.c b/net/netfilter/nft_fwd_netdev.c +index 3b0dcd170551..7730409f6f09 100644 +--- a/net/netfilter/nft_fwd_netdev.c ++++ b/net/netfilter/nft_fwd_netdev.c +@@ -18,7 +18,7 @@ + #include + + struct nft_fwd_netdev { +- enum nft_registers sreg_dev:8; ++ u8 sreg_dev; + }; + + static void nft_fwd_netdev_eval(const struct 
nft_expr *expr, +@@ -50,8 +50,8 @@ static int nft_fwd_netdev_init(const struct nft_ctx *ctx, + if (tb[NFTA_FWD_SREG_DEV] == NULL) + return -EINVAL; + +- priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]); +- return nft_validate_register_load(priv->sreg_dev, sizeof(int)); ++ return nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev, ++ sizeof(int)); + } + + static int nft_fwd_netdev_dump(struct sk_buff *skb, const struct nft_expr *expr) +@@ -83,8 +83,8 @@ static bool nft_fwd_netdev_offload_action(const struct nft_expr *expr) + } + + struct nft_fwd_neigh { +- enum nft_registers sreg_dev:8; +- enum nft_registers sreg_addr:8; ++ u8 sreg_dev; ++ u8 sreg_addr; + u8 nfproto; + }; + +@@ -162,8 +162,6 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx, + !tb[NFTA_FWD_NFPROTO]) + return -EINVAL; + +- priv->sreg_dev = nft_parse_register(tb[NFTA_FWD_SREG_DEV]); +- priv->sreg_addr = nft_parse_register(tb[NFTA_FWD_SREG_ADDR]); + priv->nfproto = ntohl(nla_get_be32(tb[NFTA_FWD_NFPROTO])); + + switch (priv->nfproto) { +@@ -177,11 +175,13 @@ static int nft_fwd_neigh_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- err = nft_validate_register_load(priv->sreg_dev, sizeof(int)); ++ err = nft_parse_register_load(tb[NFTA_FWD_SREG_DEV], &priv->sreg_dev, ++ sizeof(int)); + if (err < 0) + return err; + +- return nft_validate_register_load(priv->sreg_addr, addr_len); ++ return nft_parse_register_load(tb[NFTA_FWD_SREG_ADDR], &priv->sreg_addr, ++ addr_len); + } + + static int nft_fwd_neigh_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c +index 96371d878e7e..7ee6c6da50ae 100644 +--- a/net/netfilter/nft_hash.c ++++ b/net/netfilter/nft_hash.c +@@ -14,7 +14,7 @@ + #include + + struct nft_jhash { +- enum nft_registers sreg:8; ++ u8 sreg; + enum nft_registers dreg:8; + u8 len; + bool autogen_seed:1; +@@ -83,7 +83,6 @@ static int nft_jhash_init(const struct nft_ctx *ctx, + if 
(tb[NFTA_HASH_OFFSET]) + priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); + +- priv->sreg = nft_parse_register(tb[NFTA_HASH_SREG]); + priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); + + err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len); +@@ -94,6 +93,10 @@ static int nft_jhash_init(const struct nft_ctx *ctx, + + priv->len = len; + ++ err = nft_parse_register_load(tb[NFTA_HASH_SREG], &priv->sreg, len); ++ if (err < 0) ++ return err; ++ + priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS])); + if (priv->modulus < 1) + return -ERANGE; +@@ -108,8 +111,7 @@ static int nft_jhash_init(const struct nft_ctx *ctx, + get_random_bytes(&priv->seed, sizeof(priv->seed)); + } + +- return nft_validate_register_load(priv->sreg, len) && +- nft_validate_register_store(ctx, priv->dreg, NULL, ++ return nft_validate_register_store(ctx, priv->dreg, NULL, + NFT_DATA_VALUE, sizeof(u32)); + } + +diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c +index f1363b8aabba..9e87b6d39f51 100644 +--- a/net/netfilter/nft_lookup.c ++++ b/net/netfilter/nft_lookup.c +@@ -17,7 +17,7 @@ + + struct nft_lookup { + struct nft_set *set; +- enum nft_registers sreg:8; ++ u8 sreg; + enum nft_registers dreg:8; + bool invert; + struct nft_set_binding binding; +@@ -76,8 +76,8 @@ static int nft_lookup_init(const struct nft_ctx *ctx, + if (IS_ERR(set)) + return PTR_ERR(set); + +- priv->sreg = nft_parse_register(tb[NFTA_LOOKUP_SREG]); +- err = nft_validate_register_load(priv->sreg, set->klen); ++ err = nft_parse_register_load(tb[NFTA_LOOKUP_SREG], &priv->sreg, ++ set->klen); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_masq.c b/net/netfilter/nft_masq.c +index 71390b727040..9953e8053753 100644 +--- a/net/netfilter/nft_masq.c ++++ b/net/netfilter/nft_masq.c +@@ -15,8 +15,8 @@ + + struct nft_masq { + u32 flags; +- enum nft_registers sreg_proto_min:8; +- enum nft_registers sreg_proto_max:8; ++ u8 sreg_proto_min; ++ u8 sreg_proto_max; + }; + + static const 
struct nla_policy nft_masq_policy[NFTA_MASQ_MAX + 1] = { +@@ -54,19 +54,15 @@ static int nft_masq_init(const struct nft_ctx *ctx, + } + + if (tb[NFTA_MASQ_REG_PROTO_MIN]) { +- priv->sreg_proto_min = +- nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MIN]); +- +- err = nft_validate_register_load(priv->sreg_proto_min, plen); ++ err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MIN], ++ &priv->sreg_proto_min, plen); + if (err < 0) + return err; + + if (tb[NFTA_MASQ_REG_PROTO_MAX]) { +- priv->sreg_proto_max = +- nft_parse_register(tb[NFTA_MASQ_REG_PROTO_MAX]); +- +- err = nft_validate_register_load(priv->sreg_proto_max, +- plen); ++ err = nft_parse_register_load(tb[NFTA_MASQ_REG_PROTO_MAX], ++ &priv->sreg_proto_max, ++ plen); + if (err < 0) + return err; + } else { +diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c +index bf4b3ad5314c..65e231ec1884 100644 +--- a/net/netfilter/nft_meta.c ++++ b/net/netfilter/nft_meta.c +@@ -661,8 +661,7 @@ int nft_meta_set_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- priv->sreg = nft_parse_register(tb[NFTA_META_SREG]); +- err = nft_validate_register_load(priv->sreg, len); ++ err = nft_parse_register_load(tb[NFTA_META_SREG], &priv->sreg, len); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_nat.c b/net/netfilter/nft_nat.c +index 6a4a5ac88db7..db8f9116eeb4 100644 +--- a/net/netfilter/nft_nat.c ++++ b/net/netfilter/nft_nat.c +@@ -21,10 +21,10 @@ + #include + + struct nft_nat { +- enum nft_registers sreg_addr_min:8; +- enum nft_registers sreg_addr_max:8; +- enum nft_registers sreg_proto_min:8; +- enum nft_registers sreg_proto_max:8; ++ u8 sreg_addr_min; ++ u8 sreg_addr_max; ++ u8 sreg_proto_min; ++ u8 sreg_proto_max; + enum nf_nat_manip_type type:8; + u8 family; + u16 flags; +@@ -208,18 +208,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + priv->family = family; + + if (tb[NFTA_NAT_REG_ADDR_MIN]) { +- priv->sreg_addr_min = +- 
nft_parse_register(tb[NFTA_NAT_REG_ADDR_MIN]); +- err = nft_validate_register_load(priv->sreg_addr_min, alen); ++ err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MIN], ++ &priv->sreg_addr_min, alen); + if (err < 0) + return err; + + if (tb[NFTA_NAT_REG_ADDR_MAX]) { +- priv->sreg_addr_max = +- nft_parse_register(tb[NFTA_NAT_REG_ADDR_MAX]); +- +- err = nft_validate_register_load(priv->sreg_addr_max, +- alen); ++ err = nft_parse_register_load(tb[NFTA_NAT_REG_ADDR_MAX], ++ &priv->sreg_addr_max, ++ alen); + if (err < 0) + return err; + } else { +@@ -231,19 +228,15 @@ static int nft_nat_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + + plen = sizeof_field(struct nf_nat_range, min_addr.all); + if (tb[NFTA_NAT_REG_PROTO_MIN]) { +- priv->sreg_proto_min = +- nft_parse_register(tb[NFTA_NAT_REG_PROTO_MIN]); +- +- err = nft_validate_register_load(priv->sreg_proto_min, plen); ++ err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MIN], ++ &priv->sreg_proto_min, plen); + if (err < 0) + return err; + + if (tb[NFTA_NAT_REG_PROTO_MAX]) { +- priv->sreg_proto_max = +- nft_parse_register(tb[NFTA_NAT_REG_PROTO_MAX]); +- +- err = nft_validate_register_load(priv->sreg_proto_max, +- plen); ++ err = nft_parse_register_load(tb[NFTA_NAT_REG_PROTO_MAX], ++ &priv->sreg_proto_max, ++ plen); + if (err < 0) + return err; + } else { +diff --git a/net/netfilter/nft_objref.c b/net/netfilter/nft_objref.c +index 5f9207a9f485..bc104d36d3bb 100644 +--- a/net/netfilter/nft_objref.c ++++ b/net/netfilter/nft_objref.c +@@ -95,7 +95,7 @@ static const struct nft_expr_ops nft_objref_ops = { + + struct nft_objref_map { + struct nft_set *set; +- enum nft_registers sreg:8; ++ u8 sreg; + struct nft_set_binding binding; + }; + +@@ -137,8 +137,8 @@ static int nft_objref_map_init(const struct nft_ctx *ctx, + if (!(set->flags & NFT_SET_OBJECT)) + return -EINVAL; + +- priv->sreg = nft_parse_register(tb[NFTA_OBJREF_SET_SREG]); +- err = nft_validate_register_load(priv->sreg, set->klen); ++ err = 
nft_parse_register_load(tb[NFTA_OBJREF_SET_SREG], &priv->sreg, ++ set->klen); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c +index 6a8495bd08bb..b9702236d310 100644 +--- a/net/netfilter/nft_payload.c ++++ b/net/netfilter/nft_payload.c +@@ -664,7 +664,6 @@ static int nft_payload_set_init(const struct nft_ctx *ctx, + priv->base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE])); + priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET])); + priv->len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN])); +- priv->sreg = nft_parse_register(tb[NFTA_PAYLOAD_SREG]); + + if (tb[NFTA_PAYLOAD_CSUM_TYPE]) + priv->csum_type = +@@ -697,7 +696,8 @@ static int nft_payload_set_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- return nft_validate_register_load(priv->sreg, priv->len); ++ return nft_parse_register_load(tb[NFTA_PAYLOAD_SREG], &priv->sreg, ++ priv->len); + } + + static int nft_payload_set_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_queue.c b/net/netfilter/nft_queue.c +index 23265d757acb..9ba1de51ac07 100644 +--- a/net/netfilter/nft_queue.c ++++ b/net/netfilter/nft_queue.c +@@ -19,10 +19,10 @@ + static u32 jhash_initval __read_mostly; + + struct nft_queue { +- enum nft_registers sreg_qnum:8; +- u16 queuenum; +- u16 queues_total; +- u16 flags; ++ u8 sreg_qnum; ++ u16 queuenum; ++ u16 queues_total; ++ u16 flags; + }; + + static void nft_queue_eval(const struct nft_expr *expr, +@@ -111,8 +111,8 @@ static int nft_queue_sreg_init(const struct nft_ctx *ctx, + struct nft_queue *priv = nft_expr_priv(expr); + int err; + +- priv->sreg_qnum = nft_parse_register(tb[NFTA_QUEUE_SREG_QNUM]); +- err = nft_validate_register_load(priv->sreg_qnum, sizeof(u32)); ++ err = nft_parse_register_load(tb[NFTA_QUEUE_SREG_QNUM], ++ &priv->sreg_qnum, sizeof(u32)); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_range.c b/net/netfilter/nft_range.c +index 89efcc5a533d..e4a1c44d7f51 100644 
+--- a/net/netfilter/nft_range.c ++++ b/net/netfilter/nft_range.c +@@ -15,7 +15,7 @@ + struct nft_range_expr { + struct nft_data data_from; + struct nft_data data_to; +- enum nft_registers sreg:8; ++ u8 sreg; + u8 len; + enum nft_range_ops op:8; + }; +@@ -86,8 +86,8 @@ static int nft_range_init(const struct nft_ctx *ctx, const struct nft_expr *expr + goto err2; + } + +- priv->sreg = nft_parse_register(tb[NFTA_RANGE_SREG]); +- err = nft_validate_register_load(priv->sreg, desc_from.len); ++ err = nft_parse_register_load(tb[NFTA_RANGE_SREG], &priv->sreg, ++ desc_from.len); + if (err < 0) + goto err2; + +diff --git a/net/netfilter/nft_redir.c b/net/netfilter/nft_redir.c +index 2056051c0af0..ba09890dddb5 100644 +--- a/net/netfilter/nft_redir.c ++++ b/net/netfilter/nft_redir.c +@@ -14,8 +14,8 @@ + #include + + struct nft_redir { +- enum nft_registers sreg_proto_min:8; +- enum nft_registers sreg_proto_max:8; ++ u8 sreg_proto_min; ++ u8 sreg_proto_max; + u16 flags; + }; + +@@ -50,19 +50,15 @@ static int nft_redir_init(const struct nft_ctx *ctx, + + plen = sizeof_field(struct nf_nat_range, min_addr.all); + if (tb[NFTA_REDIR_REG_PROTO_MIN]) { +- priv->sreg_proto_min = +- nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MIN]); +- +- err = nft_validate_register_load(priv->sreg_proto_min, plen); ++ err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MIN], ++ &priv->sreg_proto_min, plen); + if (err < 0) + return err; + + if (tb[NFTA_REDIR_REG_PROTO_MAX]) { +- priv->sreg_proto_max = +- nft_parse_register(tb[NFTA_REDIR_REG_PROTO_MAX]); +- +- err = nft_validate_register_load(priv->sreg_proto_max, +- plen); ++ err = nft_parse_register_load(tb[NFTA_REDIR_REG_PROTO_MAX], ++ &priv->sreg_proto_max, ++ plen); + if (err < 0) + return err; + } else { +diff --git a/net/netfilter/nft_tproxy.c b/net/netfilter/nft_tproxy.c +index 242222dc52c3..37c728bdad41 100644 +--- a/net/netfilter/nft_tproxy.c ++++ b/net/netfilter/nft_tproxy.c +@@ -13,9 +13,9 @@ + #endif + + struct nft_tproxy { +- enum 
nft_registers sreg_addr:8; +- enum nft_registers sreg_port:8; +- u8 family; ++ u8 sreg_addr; ++ u8 sreg_port; ++ u8 family; + }; + + static void nft_tproxy_eval_v4(const struct nft_expr *expr, +@@ -254,15 +254,15 @@ static int nft_tproxy_init(const struct nft_ctx *ctx, + } + + if (tb[NFTA_TPROXY_REG_ADDR]) { +- priv->sreg_addr = nft_parse_register(tb[NFTA_TPROXY_REG_ADDR]); +- err = nft_validate_register_load(priv->sreg_addr, alen); ++ err = nft_parse_register_load(tb[NFTA_TPROXY_REG_ADDR], ++ &priv->sreg_addr, alen); + if (err < 0) + return err; + } + + if (tb[NFTA_TPROXY_REG_PORT]) { +- priv->sreg_port = nft_parse_register(tb[NFTA_TPROXY_REG_PORT]); +- err = nft_validate_register_load(priv->sreg_port, sizeof(u16)); ++ err = nft_parse_register_load(tb[NFTA_TPROXY_REG_PORT], ++ &priv->sreg_port, sizeof(u16)); + if (err < 0) + return err; + } +-- +2.35.1 + diff --git a/queue-5.10/netfilter-nftables-add-nft_parse_register_store-and-.patch b/queue-5.10/netfilter-nftables-add-nft_parse_register_store-and-.patch new file mode 100644 index 00000000000..27143d0208f --- /dev/null +++ b/queue-5.10/netfilter-nftables-add-nft_parse_register_store-and-.patch @@ -0,0 +1,671 @@ +From 7f24e33728d72475e5429bb8a621fd13e6667831 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 25 Jan 2021 18:27:22 +0100 +Subject: netfilter: nftables: add nft_parse_register_store() and use it +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Pablo Neira Ayuso + +[ Upstream commit 345023b0db315648ccc3c1a36aee88304a8b4d91 ] + +This new function combines the netlink register attribute parser +and the store validation function. + +This update requires replacing: + + enum nft_registers dreg:8; + +in many of the expression private areas, otherwise the compiler complains +with: + + error: cannot take address of bit-field ‘dreg’ + +when passing the register field by reference.
+ +Signed-off-by: Pablo Neira Ayuso +Signed-off-by: Sasha Levin +--- + include/net/netfilter/nf_tables.h | 8 +++--- + include/net/netfilter/nf_tables_core.h | 6 ++--- + include/net/netfilter/nft_fib.h | 2 +- + include/net/netfilter/nft_meta.h | 2 +- + net/bridge/netfilter/nft_meta_bridge.c | 5 ++-- + net/netfilter/nf_tables_api.c | 34 ++++++++++++++++++++++---- + net/netfilter/nft_bitwise.c | 13 +++++----- + net/netfilter/nft_byteorder.c | 8 +++--- + net/netfilter/nft_ct.c | 7 +++--- + net/netfilter/nft_exthdr.c | 8 +++--- + net/netfilter/nft_fib.c | 5 ++-- + net/netfilter/nft_hash.c | 17 ++++++------- + net/netfilter/nft_immediate.c | 6 ++--- + net/netfilter/nft_lookup.c | 8 +++--- + net/netfilter/nft_meta.c | 5 ++-- + net/netfilter/nft_numgen.c | 15 +++++------- + net/netfilter/nft_osf.c | 8 +++--- + net/netfilter/nft_payload.c | 6 ++--- + net/netfilter/nft_rt.c | 7 +++--- + net/netfilter/nft_socket.c | 7 +++--- + net/netfilter/nft_tunnel.c | 8 +++--- + net/netfilter/nft_xfrm.c | 7 +++--- + 22 files changed, 100 insertions(+), 92 deletions(-) + +diff --git a/include/net/netfilter/nf_tables.h b/include/net/netfilter/nf_tables.h +index 06e7f84a6d12..b9948e7861f2 100644 +--- a/include/net/netfilter/nf_tables.h ++++ b/include/net/netfilter/nf_tables.h +@@ -204,10 +204,10 @@ unsigned int nft_parse_register(const struct nlattr *attr); + int nft_dump_register(struct sk_buff *skb, unsigned int attr, unsigned int reg); + + int nft_parse_register_load(const struct nlattr *attr, u8 *sreg, u32 len); +-int nft_validate_register_store(const struct nft_ctx *ctx, +- enum nft_registers reg, +- const struct nft_data *data, +- enum nft_data_types type, unsigned int len); ++int nft_parse_register_store(const struct nft_ctx *ctx, ++ const struct nlattr *attr, u8 *dreg, ++ const struct nft_data *data, ++ enum nft_data_types type, unsigned int len); + + /** + * struct nft_userdata - user defined data associated with an object +diff --git a/include/net/netfilter/nf_tables_core.h 
b/include/net/netfilter/nf_tables_core.h +index b7aff03a3f0f..fd10a7862fdc 100644 +--- a/include/net/netfilter/nf_tables_core.h ++++ b/include/net/netfilter/nf_tables_core.h +@@ -27,7 +27,7 @@ struct nft_bitwise_fast_expr { + u32 mask; + u32 xor; + u8 sreg; +- enum nft_registers dreg:8; ++ u8 dreg; + }; + + struct nft_cmp_fast_expr { +@@ -40,7 +40,7 @@ struct nft_cmp_fast_expr { + + struct nft_immediate_expr { + struct nft_data data; +- enum nft_registers dreg:8; ++ u8 dreg; + u8 dlen; + }; + +@@ -60,7 +60,7 @@ struct nft_payload { + enum nft_payload_bases base:8; + u8 offset; + u8 len; +- enum nft_registers dreg:8; ++ u8 dreg; + }; + + struct nft_payload_set { +diff --git a/include/net/netfilter/nft_fib.h b/include/net/netfilter/nft_fib.h +index 628b6fa579cd..237f3757637e 100644 +--- a/include/net/netfilter/nft_fib.h ++++ b/include/net/netfilter/nft_fib.h +@@ -5,7 +5,7 @@ + #include + + struct nft_fib { +- enum nft_registers dreg:8; ++ u8 dreg; + u8 result; + u32 flags; + }; +diff --git a/include/net/netfilter/nft_meta.h b/include/net/netfilter/nft_meta.h +index 946fa8c83798..2dce55c736f4 100644 +--- a/include/net/netfilter/nft_meta.h ++++ b/include/net/netfilter/nft_meta.h +@@ -7,7 +7,7 @@ + struct nft_meta { + enum nft_meta_keys key:8; + union { +- enum nft_registers dreg:8; ++ u8 dreg; + u8 sreg; + }; + }; +diff --git a/net/bridge/netfilter/nft_meta_bridge.c b/net/bridge/netfilter/nft_meta_bridge.c +index 8e8ffac037cd..97805ec424c1 100644 +--- a/net/bridge/netfilter/nft_meta_bridge.c ++++ b/net/bridge/netfilter/nft_meta_bridge.c +@@ -87,9 +87,8 @@ static int nft_meta_bridge_get_init(const struct nft_ctx *ctx, + return nft_meta_get_init(ctx, expr, tb); + } + +- priv->dreg = nft_parse_register(tb[NFTA_META_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, tb[NFTA_META_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + + static struct nft_expr_type nft_meta_bridge_type; +diff 
--git a/net/netfilter/nf_tables_api.c b/net/netfilter/nf_tables_api.c +index 91713ecd60e9..3c17fadaab5f 100644 +--- a/net/netfilter/nf_tables_api.c ++++ b/net/netfilter/nf_tables_api.c +@@ -4414,6 +4414,12 @@ static int nf_tables_delset(struct net *net, struct sock *nlsk, + return nft_delset(&ctx, set); + } + ++static int nft_validate_register_store(const struct nft_ctx *ctx, ++ enum nft_registers reg, ++ const struct nft_data *data, ++ enum nft_data_types type, ++ unsigned int len); ++ + static int nf_tables_bind_check_setelem(const struct nft_ctx *ctx, + struct nft_set *set, + const struct nft_set_iter *iter, +@@ -8555,10 +8561,11 @@ EXPORT_SYMBOL_GPL(nft_parse_register_load); + * A value of NULL for the data means that its runtime gathered + * data. + */ +-int nft_validate_register_store(const struct nft_ctx *ctx, +- enum nft_registers reg, +- const struct nft_data *data, +- enum nft_data_types type, unsigned int len) ++static int nft_validate_register_store(const struct nft_ctx *ctx, ++ enum nft_registers reg, ++ const struct nft_data *data, ++ enum nft_data_types type, ++ unsigned int len) + { + int err; + +@@ -8590,7 +8597,24 @@ int nft_validate_register_store(const struct nft_ctx *ctx, + return 0; + } + } +-EXPORT_SYMBOL_GPL(nft_validate_register_store); ++ ++int nft_parse_register_store(const struct nft_ctx *ctx, ++ const struct nlattr *attr, u8 *dreg, ++ const struct nft_data *data, ++ enum nft_data_types type, unsigned int len) ++{ ++ int err; ++ u32 reg; ++ ++ reg = nft_parse_register(attr); ++ err = nft_validate_register_store(ctx, reg, data, type, len); ++ if (err < 0) ++ return err; ++ ++ *dreg = reg; ++ return 0; ++} ++EXPORT_SYMBOL_GPL(nft_parse_register_store); + + static const struct nla_policy nft_verdict_policy[NFTA_VERDICT_MAX + 1] = { + [NFTA_VERDICT_CODE] = { .type = NLA_U32 }, +diff --git a/net/netfilter/nft_bitwise.c b/net/netfilter/nft_bitwise.c +index 2157970b3cd3..47b0dba95054 100644 +--- a/net/netfilter/nft_bitwise.c ++++ 
b/net/netfilter/nft_bitwise.c +@@ -17,7 +17,7 @@ + + struct nft_bitwise { + u8 sreg; +- enum nft_registers dreg:8; ++ u8 dreg; + enum nft_bitwise_ops op:8; + u8 len; + struct nft_data mask; +@@ -174,9 +174,9 @@ static int nft_bitwise_init(const struct nft_ctx *ctx, + if (err < 0) + return err; + +- priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, priv->len); ++ err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG], ++ &priv->dreg, NULL, NFT_DATA_VALUE, ++ priv->len); + if (err < 0) + return err; + +@@ -320,9 +320,8 @@ static int nft_bitwise_fast_init(const struct nft_ctx *ctx, + if (err < 0) + return err; + +- priv->dreg = nft_parse_register(tb[NFTA_BITWISE_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, sizeof(u32)); ++ err = nft_parse_register_store(ctx, tb[NFTA_BITWISE_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, sizeof(u32)); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_byteorder.c b/net/netfilter/nft_byteorder.c +index 0960563cd5a1..9d5947ab8d4e 100644 +--- a/net/netfilter/nft_byteorder.c ++++ b/net/netfilter/nft_byteorder.c +@@ -17,7 +17,7 @@ + + struct nft_byteorder { + u8 sreg; +- enum nft_registers dreg:8; ++ u8 dreg; + enum nft_byteorder_ops op:8; + u8 len; + u8 size; +@@ -142,9 +142,9 @@ static int nft_byteorder_init(const struct nft_ctx *ctx, + if (err < 0) + return err; + +- priv->dreg = nft_parse_register(tb[NFTA_BYTEORDER_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, priv->len); ++ return nft_parse_register_store(ctx, tb[NFTA_BYTEORDER_DREG], ++ &priv->dreg, NULL, NFT_DATA_VALUE, ++ priv->len); + } + + static int nft_byteorder_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_ct.c b/net/netfilter/nft_ct.c +index dbf54cca6083..781118465d46 100644 +--- a/net/netfilter/nft_ct.c ++++ b/net/netfilter/nft_ct.c +@@ -27,7 +27,7 @@ 
struct nft_ct { + enum nft_ct_keys key:8; + enum ip_conntrack_dir dir:8; + union { +- enum nft_registers dreg:8; ++ u8 dreg; + u8 sreg; + }; + }; +@@ -499,9 +499,8 @@ static int nft_ct_get_init(const struct nft_ctx *ctx, + } + } + +- priv->dreg = nft_parse_register(tb[NFTA_CT_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ err = nft_parse_register_store(ctx, tb[NFTA_CT_DREG], &priv->dreg, NULL, ++ NFT_DATA_VALUE, len); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_exthdr.c b/net/netfilter/nft_exthdr.c +index f2b36b9c2b53..670dd146fb2b 100644 +--- a/net/netfilter/nft_exthdr.c ++++ b/net/netfilter/nft_exthdr.c +@@ -19,7 +19,7 @@ struct nft_exthdr { + u8 offset; + u8 len; + u8 op; +- enum nft_registers dreg:8; ++ u8 dreg; + u8 sreg; + u8 flags; + }; +@@ -353,12 +353,12 @@ static int nft_exthdr_init(const struct nft_ctx *ctx, + priv->type = nla_get_u8(tb[NFTA_EXTHDR_TYPE]); + priv->offset = offset; + priv->len = len; +- priv->dreg = nft_parse_register(tb[NFTA_EXTHDR_DREG]); + priv->flags = flags; + priv->op = op; + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, priv->len); ++ return nft_parse_register_store(ctx, tb[NFTA_EXTHDR_DREG], ++ &priv->dreg, NULL, NFT_DATA_VALUE, ++ priv->len); + } + + static int nft_exthdr_tcp_set_init(const struct nft_ctx *ctx, +diff --git a/net/netfilter/nft_fib.c b/net/netfilter/nft_fib.c +index 4dfdaeaf09a5..b10ce732b337 100644 +--- a/net/netfilter/nft_fib.c ++++ b/net/netfilter/nft_fib.c +@@ -86,7 +86,6 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + return -EINVAL; + + priv->result = ntohl(nla_get_be32(tb[NFTA_FIB_RESULT])); +- priv->dreg = nft_parse_register(tb[NFTA_FIB_DREG]); + + switch (priv->result) { + case NFT_FIB_RESULT_OIF: +@@ -106,8 +105,8 @@ int nft_fib_init(const struct nft_ctx *ctx, const struct nft_expr *expr, + return -EINVAL; + } + +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- 
NFT_DATA_VALUE, len); ++ err = nft_parse_register_store(ctx, tb[NFTA_FIB_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_hash.c b/net/netfilter/nft_hash.c +index 7ee6c6da50ae..f829f5289e16 100644 +--- a/net/netfilter/nft_hash.c ++++ b/net/netfilter/nft_hash.c +@@ -15,7 +15,7 @@ + + struct nft_jhash { + u8 sreg; +- enum nft_registers dreg:8; ++ u8 dreg; + u8 len; + bool autogen_seed:1; + u32 modulus; +@@ -38,7 +38,7 @@ static void nft_jhash_eval(const struct nft_expr *expr, + } + + struct nft_symhash { +- enum nft_registers dreg:8; ++ u8 dreg; + u32 modulus; + u32 offset; + }; +@@ -83,8 +83,6 @@ static int nft_jhash_init(const struct nft_ctx *ctx, + if (tb[NFTA_HASH_OFFSET]) + priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); + +- priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); +- + err = nft_parse_u32_check(tb[NFTA_HASH_LEN], U8_MAX, &len); + if (err < 0) + return err; +@@ -111,8 +109,8 @@ static int nft_jhash_init(const struct nft_ctx *ctx, + get_random_bytes(&priv->seed, sizeof(priv->seed)); + } + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, sizeof(u32)); ++ return nft_parse_register_store(ctx, tb[NFTA_HASH_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, sizeof(u32)); + } + + static int nft_symhash_init(const struct nft_ctx *ctx, +@@ -128,8 +126,6 @@ static int nft_symhash_init(const struct nft_ctx *ctx, + if (tb[NFTA_HASH_OFFSET]) + priv->offset = ntohl(nla_get_be32(tb[NFTA_HASH_OFFSET])); + +- priv->dreg = nft_parse_register(tb[NFTA_HASH_DREG]); +- + priv->modulus = ntohl(nla_get_be32(tb[NFTA_HASH_MODULUS])); + if (priv->modulus < 1) + return -ERANGE; +@@ -137,8 +133,9 @@ static int nft_symhash_init(const struct nft_ctx *ctx, + if (priv->offset + priv->modulus - 1 < priv->offset) + return -EOVERFLOW; + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, sizeof(u32)); ++ return nft_parse_register_store(ctx, 
tb[NFTA_HASH_DREG], ++ &priv->dreg, NULL, NFT_DATA_VALUE, ++ sizeof(u32)); + } + + static int nft_jhash_dump(struct sk_buff *skb, +diff --git a/net/netfilter/nft_immediate.c b/net/netfilter/nft_immediate.c +index 5c9d88560a47..d0f67d325bdf 100644 +--- a/net/netfilter/nft_immediate.c ++++ b/net/netfilter/nft_immediate.c +@@ -48,9 +48,9 @@ static int nft_immediate_init(const struct nft_ctx *ctx, + + priv->dlen = desc.len; + +- priv->dreg = nft_parse_register(tb[NFTA_IMMEDIATE_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, &priv->data, +- desc.type, desc.len); ++ err = nft_parse_register_store(ctx, tb[NFTA_IMMEDIATE_DREG], ++ &priv->dreg, &priv->data, desc.type, ++ desc.len); + if (err < 0) + goto err1; + +diff --git a/net/netfilter/nft_lookup.c b/net/netfilter/nft_lookup.c +index 9e87b6d39f51..b0f558b4fea5 100644 +--- a/net/netfilter/nft_lookup.c ++++ b/net/netfilter/nft_lookup.c +@@ -18,7 +18,7 @@ + struct nft_lookup { + struct nft_set *set; + u8 sreg; +- enum nft_registers dreg:8; ++ u8 dreg; + bool invert; + struct nft_set_binding binding; + }; +@@ -100,9 +100,9 @@ static int nft_lookup_init(const struct nft_ctx *ctx, + if (!(set->flags & NFT_SET_MAP)) + return -EINVAL; + +- priv->dreg = nft_parse_register(tb[NFTA_LOOKUP_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- set->dtype, set->dlen); ++ err = nft_parse_register_store(ctx, tb[NFTA_LOOKUP_DREG], ++ &priv->dreg, NULL, set->dtype, ++ set->dlen); + if (err < 0) + return err; + } else if (set->flags & NFT_SET_MAP) +diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c +index 65e231ec1884..a7e01e9952f1 100644 +--- a/net/netfilter/nft_meta.c ++++ b/net/netfilter/nft_meta.c +@@ -535,9 +535,8 @@ int nft_meta_get_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- priv->dreg = nft_parse_register(tb[NFTA_META_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, 
tb[NFTA_META_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + EXPORT_SYMBOL_GPL(nft_meta_get_init); + +diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c +index f1fc824f9737..722cac1e90e0 100644 +--- a/net/netfilter/nft_numgen.c ++++ b/net/netfilter/nft_numgen.c +@@ -16,7 +16,7 @@ + static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state); + + struct nft_ng_inc { +- enum nft_registers dreg:8; ++ u8 dreg; + u32 modulus; + atomic_t counter; + u32 offset; +@@ -66,11 +66,10 @@ static int nft_ng_inc_init(const struct nft_ctx *ctx, + if (priv->offset + priv->modulus - 1 < priv->offset) + return -EOVERFLOW; + +- priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]); + atomic_set(&priv->counter, priv->modulus - 1); + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, sizeof(u32)); ++ return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, sizeof(u32)); + } + + static int nft_ng_dump(struct sk_buff *skb, enum nft_registers dreg, +@@ -100,7 +99,7 @@ static int nft_ng_inc_dump(struct sk_buff *skb, const struct nft_expr *expr) + } + + struct nft_ng_random { +- enum nft_registers dreg:8; ++ u8 dreg; + u32 modulus; + u32 offset; + }; +@@ -140,10 +139,8 @@ static int nft_ng_random_init(const struct nft_ctx *ctx, + + prandom_init_once(&nft_numgen_prandom_state); + +- priv->dreg = nft_parse_register(tb[NFTA_NG_DREG]); +- +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, sizeof(u32)); ++ return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, sizeof(u32)); + } + + static int nft_ng_random_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_osf.c b/net/netfilter/nft_osf.c +index 2c957629ea66..d82677e83400 100644 +--- a/net/netfilter/nft_osf.c ++++ b/net/netfilter/nft_osf.c +@@ -6,7 +6,7 @@ + #include + + struct nft_osf { +- enum nft_registers dreg:8; ++ u8 dreg; + u8 ttl; + u32 flags; + 
}; +@@ -83,9 +83,9 @@ static int nft_osf_init(const struct nft_ctx *ctx, + priv->flags = flags; + } + +- priv->dreg = nft_parse_register(tb[NFTA_OSF_DREG]); +- err = nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, NFT_OSF_MAXGENRELEN); ++ err = nft_parse_register_store(ctx, tb[NFTA_OSF_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, ++ NFT_OSF_MAXGENRELEN); + if (err < 0) + return err; + +diff --git a/net/netfilter/nft_payload.c b/net/netfilter/nft_payload.c +index b9702236d310..01878c16418c 100644 +--- a/net/netfilter/nft_payload.c ++++ b/net/netfilter/nft_payload.c +@@ -144,10 +144,10 @@ static int nft_payload_init(const struct nft_ctx *ctx, + priv->base = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_BASE])); + priv->offset = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_OFFSET])); + priv->len = ntohl(nla_get_be32(tb[NFTA_PAYLOAD_LEN])); +- priv->dreg = nft_parse_register(tb[NFTA_PAYLOAD_DREG]); + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, priv->len); ++ return nft_parse_register_store(ctx, tb[NFTA_PAYLOAD_DREG], ++ &priv->dreg, NULL, NFT_DATA_VALUE, ++ priv->len); + } + + static int nft_payload_dump(struct sk_buff *skb, const struct nft_expr *expr) +diff --git a/net/netfilter/nft_rt.c b/net/netfilter/nft_rt.c +index 7cfcb0e2f7ee..bcd01a63e38f 100644 +--- a/net/netfilter/nft_rt.c ++++ b/net/netfilter/nft_rt.c +@@ -15,7 +15,7 @@ + + struct nft_rt { + enum nft_rt_keys key:8; +- enum nft_registers dreg:8; ++ u8 dreg; + }; + + static u16 get_tcpmss(const struct nft_pktinfo *pkt, const struct dst_entry *skbdst) +@@ -141,9 +141,8 @@ static int nft_rt_get_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- priv->dreg = nft_parse_register(tb[NFTA_RT_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, tb[NFTA_RT_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + + static int nft_rt_get_dump(struct sk_buff *skb, +diff --git 
a/net/netfilter/nft_socket.c b/net/netfilter/nft_socket.c +index 8a0125e966c8..f6d517185d9c 100644 +--- a/net/netfilter/nft_socket.c ++++ b/net/netfilter/nft_socket.c +@@ -10,7 +10,7 @@ + struct nft_socket { + enum nft_socket_keys key:8; + union { +- enum nft_registers dreg:8; ++ u8 dreg; + }; + }; + +@@ -146,9 +146,8 @@ static int nft_socket_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- priv->dreg = nft_parse_register(tb[NFTA_SOCKET_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, tb[NFTA_SOCKET_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + + static int nft_socket_dump(struct sk_buff *skb, +diff --git a/net/netfilter/nft_tunnel.c b/net/netfilter/nft_tunnel.c +index d3eb953d0333..3b27926d5382 100644 +--- a/net/netfilter/nft_tunnel.c ++++ b/net/netfilter/nft_tunnel.c +@@ -15,7 +15,7 @@ + + struct nft_tunnel { + enum nft_tunnel_keys key:8; +- enum nft_registers dreg:8; ++ u8 dreg; + enum nft_tunnel_mode mode:8; + }; + +@@ -93,8 +93,6 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx, + return -EOPNOTSUPP; + } + +- priv->dreg = nft_parse_register(tb[NFTA_TUNNEL_DREG]); +- + if (tb[NFTA_TUNNEL_MODE]) { + priv->mode = ntohl(nla_get_be32(tb[NFTA_TUNNEL_MODE])); + if (priv->mode > NFT_TUNNEL_MODE_MAX) +@@ -103,8 +101,8 @@ static int nft_tunnel_get_init(const struct nft_ctx *ctx, + priv->mode = NFT_TUNNEL_MODE_NONE; + } + +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, tb[NFTA_TUNNEL_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + + static int nft_tunnel_get_dump(struct sk_buff *skb, +diff --git a/net/netfilter/nft_xfrm.c b/net/netfilter/nft_xfrm.c +index 06d5cabf1d7c..cbbbc4ecad3a 100644 +--- a/net/netfilter/nft_xfrm.c ++++ b/net/netfilter/nft_xfrm.c +@@ -24,7 +24,7 @@ static const struct nla_policy nft_xfrm_policy[NFTA_XFRM_MAX + 1] = { + + struct nft_xfrm { + enum 
nft_xfrm_keys key:8; +- enum nft_registers dreg:8; ++ u8 dreg; + u8 dir; + u8 spnum; + }; +@@ -86,9 +86,8 @@ static int nft_xfrm_get_init(const struct nft_ctx *ctx, + + priv->spnum = spnum; + +- priv->dreg = nft_parse_register(tb[NFTA_XFRM_DREG]); +- return nft_validate_register_store(ctx, priv->dreg, NULL, +- NFT_DATA_VALUE, len); ++ return nft_parse_register_store(ctx, tb[NFTA_XFRM_DREG], &priv->dreg, ++ NULL, NFT_DATA_VALUE, len); + } + + /* Return true if key asks for daddr/saddr and current +-- +2.35.1 + diff --git a/queue-5.10/netfilter-use-get_random_u32-instead-of-prandom.patch b/queue-5.10/netfilter-use-get_random_u32-instead-of-prandom.patch new file mode 100644 index 00000000000..75cefb61479 --- /dev/null +++ b/queue-5.10/netfilter-use-get_random_u32-instead-of-prandom.patch @@ -0,0 +1,131 @@ +From 9e67ca4c394be5678d6e98be0e23e7f5ff47e8f0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 May 2022 20:15:31 +0200 +Subject: netfilter: use get_random_u32 instead of prandom + +From: Florian Westphal + +[ Upstream commit b1fd94e704571f98b21027340eecf821b2bdffba ] + +bh might occur while updating per-cpu rnd_state from user context, +i.e. the local_out path. + +BUG: using smp_processor_id() in preemptible [00000000] code: nginx/2725 +caller is nft_ng_random_eval+0x24/0x54 [nft_numgen] +Call Trace: + check_preemption_disabled+0xde/0xe0 + nft_ng_random_eval+0x24/0x54 [nft_numgen] + +Use the random driver instead; this also avoids the need for local prandom +state. Moreover, prandom now uses the random driver since d4150779e60f +("random32: use real rng for non-deterministic randomness"). + +Based on an earlier patch from Pablo Neira.
+ +Fixes: 6b2faee0ca91 ("netfilter: nft_meta: place prandom handling in a helper") +Fixes: 978d8f9055c3 ("netfilter: nft_numgen: add map lookups for numgen random operations") +Signed-off-by: Florian Westphal +Signed-off-by: Pablo Neira Ayuso +Signed-off-by: Sasha Levin +--- + net/netfilter/nft_meta.c | 13 ++----------- + net/netfilter/nft_numgen.c | 12 +++--------- + 2 files changed, 5 insertions(+), 20 deletions(-) + +diff --git a/net/netfilter/nft_meta.c b/net/netfilter/nft_meta.c +index a7e01e9952f1..44d9b38e5f90 100644 +--- a/net/netfilter/nft_meta.c ++++ b/net/netfilter/nft_meta.c +@@ -14,6 +14,7 @@ + #include + #include + #include ++#include + #include + #include + #include +@@ -32,8 +33,6 @@ + #define NFT_META_SECS_PER_DAY 86400 + #define NFT_META_DAYS_PER_WEEK 7 + +-static DEFINE_PER_CPU(struct rnd_state, nft_prandom_state); +- + static u8 nft_meta_weekday(void) + { + time64_t secs = ktime_get_real_seconds(); +@@ -267,13 +266,6 @@ static bool nft_meta_get_eval_ifname(enum nft_meta_keys key, u32 *dest, + return true; + } + +-static noinline u32 nft_prandom_u32(void) +-{ +- struct rnd_state *state = this_cpu_ptr(&nft_prandom_state); +- +- return prandom_u32_state(state); +-} +- + #ifdef CONFIG_IP_ROUTE_CLASSID + static noinline bool + nft_meta_get_eval_rtclassid(const struct sk_buff *skb, u32 *dest) +@@ -385,7 +377,7 @@ void nft_meta_get_eval(const struct nft_expr *expr, + break; + #endif + case NFT_META_PRANDOM: +- *dest = nft_prandom_u32(); ++ *dest = get_random_u32(); + break; + #ifdef CONFIG_XFRM + case NFT_META_SECPATH: +@@ -514,7 +506,6 @@ int nft_meta_get_init(const struct nft_ctx *ctx, + len = IFNAMSIZ; + break; + case NFT_META_PRANDOM: +- prandom_init_once(&nft_prandom_state); + len = sizeof(u32); + break; + #ifdef CONFIG_XFRM +diff --git a/net/netfilter/nft_numgen.c b/net/netfilter/nft_numgen.c +index 722cac1e90e0..4e43214e88de 100644 +--- a/net/netfilter/nft_numgen.c ++++ b/net/netfilter/nft_numgen.c +@@ -9,12 +9,11 @@ + #include + #include + 
#include ++#include + #include + #include + #include + +-static DEFINE_PER_CPU(struct rnd_state, nft_numgen_prandom_state); +- + struct nft_ng_inc { + u8 dreg; + u32 modulus; +@@ -104,12 +103,9 @@ struct nft_ng_random { + u32 offset; + }; + +-static u32 nft_ng_random_gen(struct nft_ng_random *priv) ++static u32 nft_ng_random_gen(const struct nft_ng_random *priv) + { +- struct rnd_state *state = this_cpu_ptr(&nft_numgen_prandom_state); +- +- return reciprocal_scale(prandom_u32_state(state), priv->modulus) + +- priv->offset; ++ return reciprocal_scale(get_random_u32(), priv->modulus) + priv->offset; + } + + static void nft_ng_random_eval(const struct nft_expr *expr, +@@ -137,8 +133,6 @@ static int nft_ng_random_init(const struct nft_ctx *ctx, + if (priv->offset + priv->modulus - 1 < priv->offset) + return -EOVERFLOW; + +- prandom_init_once(&nft_numgen_prandom_state); +- + return nft_parse_register_store(ctx, tb[NFTA_NG_DREG], &priv->dreg, + NULL, NFT_DATA_VALUE, sizeof(u32)); + } +-- +2.35.1 + diff --git a/queue-5.10/nvme-centralize-setting-the-timeout-in-nvme_alloc_re.patch b/queue-5.10/nvme-centralize-setting-the-timeout-in-nvme_alloc_re.patch new file mode 100644 index 00000000000..3f4fcce5b67 --- /dev/null +++ b/queue-5.10/nvme-centralize-setting-the-timeout-in-nvme_alloc_re.patch @@ -0,0 +1,99 @@ +From e056fe09316579b8249f484aa7aad223c295e5b2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 9 Nov 2020 16:33:42 -0800 +Subject: nvme: centralize setting the timeout in nvme_alloc_request + +From: Chaitanya Kulkarni + +[ Upstream commit 0d2e7c840b178bf9a47bd0de89d8f9182fa71d86 ] + +The function nvme_alloc_request() is called from different contexts +(I/O and Admin queue), where callers do not consider the I/O timeout when +called from I/O queue context. + +Update nvme_alloc_request() to set the default I/O and Admin timeout +value based on whether the queuedata is set or not. 
+ +Signed-off-by: Chaitanya Kulkarni +Reviewed-by: Sagi Grimberg +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 11 +++++++++-- + drivers/nvme/host/lightnvm.c | 3 ++- + drivers/nvme/host/pci.c | 2 -- + 3 files changed, 11 insertions(+), 5 deletions(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 0aa68da51ed7..4a7154cbca50 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -553,6 +553,11 @@ struct request *nvme_alloc_request(struct request_queue *q, + if (IS_ERR(req)) + return req; + ++ if (req->q->queuedata) ++ req->timeout = NVME_IO_TIMEOUT; ++ else /* no queuedata implies admin queue */ ++ req->timeout = ADMIN_TIMEOUT; ++ + req->cmd_flags |= REQ_FAILFAST_DRIVER; + nvme_clear_nvme_request(req); + nvme_req(req)->cmd = cmd; +@@ -927,7 +932,8 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, + if (IS_ERR(req)) + return PTR_ERR(req); + +- req->timeout = timeout ? timeout : ADMIN_TIMEOUT; ++ if (timeout) ++ req->timeout = timeout; + + if (buffer && bufflen) { + ret = blk_rq_map_kern(q, req, buffer, bufflen, GFP_KERNEL); +@@ -1097,7 +1103,8 @@ static int nvme_submit_user_cmd(struct request_queue *q, + if (IS_ERR(req)) + return PTR_ERR(req); + +- req->timeout = timeout ? timeout : ADMIN_TIMEOUT; ++ if (timeout) ++ req->timeout = timeout; + nvme_req(req)->flags |= NVME_REQ_USERCMD; + + if (ubuffer && bufflen) { +diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c +index 8e562d0f2c30..88a7c8eac455 100644 +--- a/drivers/nvme/host/lightnvm.c ++++ b/drivers/nvme/host/lightnvm.c +@@ -774,7 +774,8 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q, + goto err_cmd; + } + +- rq->timeout = timeout ? 
timeout : ADMIN_TIMEOUT; ++ if (timeout) ++ rq->timeout = timeout; + + if (ppa_buf && ppa_len) { + ppa_list = dma_pool_alloc(dev->dma_pool, GFP_KERNEL, &ppa_dma); +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 7de24a10dd92..f2d0148d4050 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1356,7 +1356,6 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) + return BLK_EH_RESET_TIMER; + } + +- abort_req->timeout = ADMIN_TIMEOUT; + abort_req->end_io_data = NULL; + blk_execute_rq_nowait(abort_req->q, NULL, abort_req, 0, abort_endio); + +@@ -2283,7 +2282,6 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode) + if (IS_ERR(req)) + return PTR_ERR(req); + +- req->timeout = ADMIN_TIMEOUT; + req->end_io_data = nvmeq; + + init_completion(&nvmeq->delete_done); +-- +2.35.1 + diff --git a/queue-5.10/nvme-don-t-check-nvme_req-flags-for-new-req.patch b/queue-5.10/nvme-don-t-check-nvme_req-flags-for-new-req.patch new file mode 100644 index 00000000000..e9e0beaf372 --- /dev/null +++ b/queue-5.10/nvme-don-t-check-nvme_req-flags-for-new-req.patch @@ -0,0 +1,84 @@ +From 1372f94d4eda930a943bbcb4303e05479ba1301d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 28 Feb 2021 18:06:08 -0800 +Subject: nvme: don't check nvme_req flags for new req +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Chaitanya Kulkarni + +[ Upstream commit c03fd85de293a4f65fcb94a795bf4c12a432bb6c ] + +nvme_clear_nvme_request() has a check for the flag RQF_DONTPREP and it is +called from nvme_init_request() and nvme_setup_cmd(). + +The function nvme_init_request() is called from nvme_alloc_request() +and nvme_alloc_request_qid(). From these two callers a new request is +allocated every time. For a newly allocated request RQF_DONTPREP is never +set.
Since after getting a tag, the block layer sets req->rq_flags == 0 +and never sets RQF_DONTPREP when returning the request :- + +nvme_alloc_request() + blk_mq_alloc_request() + blk_mq_rq_ctx_init() + rq->rq_flags = 0 <---- + +nvme_alloc_request_qid() + blk_mq_alloc_request_hctx() + blk_mq_rq_ctx_init() + rq->rq_flags = 0 <---- + +The block layer does set req->rq_flags, but RQF_DONTPREP is not one of +them; that one is set by the driver. + +That means we can unconditionally set RQF_DONTPREP in rq->rq_flags when +nvme_init_request()->nvme_clear_nvme_request() is called from the above +two callers. + +Move the check for RQF_DONTPREP from nvme_clear_nvme_request() into +nvme_setup_cmd(). + +This is needed since nvme_alloc_request() now gets called from the fast +path when the NVMeOF target is configured with the passthru backend, to +avoid unnecessary checks in the fast path. + +Signed-off-by: Chaitanya Kulkarni +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 11 +++++------ + 1 file changed, 5 insertions(+), 6 deletions(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index d81b0cff15e0..c42ad0b8247b 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -531,11 +531,9 @@ EXPORT_SYMBOL_NS_GPL(nvme_put_ns, NVME_TARGET_PASSTHRU); + + static inline void nvme_clear_nvme_request(struct request *req) + { +- if (!(req->rq_flags & RQF_DONTPREP)) { +- nvme_req(req)->retries = 0; +- nvme_req(req)->flags = 0; +- req->rq_flags |= RQF_DONTPREP; +- } ++ nvme_req(req)->retries = 0; ++ nvme_req(req)->flags = 0; ++ req->rq_flags |= RQF_DONTPREP; + } + + static inline unsigned int nvme_req_op(struct nvme_command *cmd) +@@ -854,7 +852,8 @@ blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, + struct nvme_ctrl *ctrl = nvme_req(req)->ctrl; + blk_status_t ret = BLK_STS_OK; + +- nvme_clear_nvme_request(req); ++ if (!(req->rq_flags & RQF_DONTPREP)) ++ nvme_clear_nvme_request(req); + + 
memset(cmd, 0, sizeof(*cmd)); + switch (req_op(req)) { +-- +2.35.1 + diff --git a/queue-5.10/nvme-mark-nvme_setup_passsthru-inline.patch b/queue-5.10/nvme-mark-nvme_setup_passsthru-inline.patch new file mode 100644 index 00000000000..c0b9e062deb --- /dev/null +++ b/queue-5.10/nvme-mark-nvme_setup_passsthru-inline.patch @@ -0,0 +1,35 @@ +From 141f518671ddd0f54feb85b59b35fadb8726c5c4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 28 Feb 2021 18:06:06 -0800 +Subject: nvme: mark nvme_setup_passsthru() inline + +From: Chaitanya Kulkarni + +[ Upstream commit 7a36604668b9b1f84126ef0342144ba5b07e518f ] + +Since nvmet_setup_passthru() function falls in fast path when called +from the NVMeOF passthru backend, make it inline. + +Signed-off-by: Chaitanya Kulkarni +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 68395dcd067c..d81b0cff15e0 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -683,7 +683,7 @@ static void nvme_assign_write_stream(struct nvme_ctrl *ctrl, + req->q->write_hints[streamid] += blk_rq_bytes(req) >> 9; + } + +-static void nvme_setup_passthrough(struct request *req, ++static inline void nvme_setup_passthrough(struct request *req, + struct nvme_command *cmd) + { + memcpy(cmd, nvme_req(req)->cmd, sizeof(*cmd)); +-- +2.35.1 + diff --git a/queue-5.10/nvme-move-the-samsung-x5-quirk-entry-to-the-core-qui.patch b/queue-5.10/nvme-move-the-samsung-x5-quirk-entry-to-the-core-qui.patch new file mode 100644 index 00000000000..08683f3ac34 --- /dev/null +++ b/queue-5.10/nvme-move-the-samsung-x5-quirk-entry-to-the-core-qui.patch @@ -0,0 +1,65 @@ +From b9ea7dd9380243f072baa1262dbc3cca32750b9e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 17 Jun 2022 10:29:42 +0200 +Subject: nvme: move the Samsung X5 quirk entry to the core quirks + +From: Christoph Hellwig + 
+[ Upstream commit e6487833182a8a0187f0292aca542fc163ccd03e ] + +This device shares the PCI ID with the Samsung 970 Evo Plus that +does not need or want the quirks. Move the quirk entry to the +core table based on the model number instead. + +Fixes: bc360b0b1611 ("nvme-pci: add quirks for Samsung X5 SSDs") +Signed-off-by: Christoph Hellwig +Reviewed-by: Pankaj Raghav +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 14 ++++++++++++++ + drivers/nvme/host/pci.c | 4 ---- + 2 files changed, 14 insertions(+), 4 deletions(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 9ec3ac367a76..af2902d70b19 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2713,6 +2713,20 @@ static const struct nvme_core_quirk_entry core_quirks[] = { + .vid = 0x1e0f, + .mn = "KCD6XVUL6T40", + .quirks = NVME_QUIRK_NO_APST, ++ }, ++ { ++ /* ++ * The external Samsung X5 SSD fails initialization without a ++ * delay before checking if it is ready and has a whole set of ++ * other problems. To make this even more interesting, it ++ * shares the PCI ID with internal Samsung 970 Evo Plus that ++ * does not need or want these quirks.
++ */ ++ .vid = 0x144d, ++ .mn = "Samsung Portable SSD X5", ++ .quirks = NVME_QUIRK_DELAY_BEFORE_CHK_RDY | ++ NVME_QUIRK_NO_DEEPEST_PS | ++ NVME_QUIRK_IGNORE_DEV_SUBNQN, + } + }; + +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 31c6938e5045..9e633f4dcec7 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -3265,10 +3265,6 @@ static const struct pci_device_id nvme_id_table[] = { + NVME_QUIRK_128_BYTES_SQES | + NVME_QUIRK_SHARED_TAGS | + NVME_QUIRK_SKIP_CID_GEN }, +- { PCI_DEVICE(0x144d, 0xa808), /* Samsung X5 */ +- .driver_data = NVME_QUIRK_DELAY_BEFORE_CHK_RDY| +- NVME_QUIRK_NO_DEEPEST_PS | +- NVME_QUIRK_IGNORE_DEV_SUBNQN, }, + { PCI_DEVICE_CLASS(PCI_CLASS_STORAGE_EXPRESS, 0xffffff) }, + { 0, } + }; +-- +2.35.1 + diff --git a/queue-5.10/nvme-pci-add-no-apst-quirk-for-kioxia-device.patch b/queue-5.10/nvme-pci-add-no-apst-quirk-for-kioxia-device.patch new file mode 100644 index 00000000000..a12ce60d5df --- /dev/null +++ b/queue-5.10/nvme-pci-add-no-apst-quirk-for-kioxia-device.patch @@ -0,0 +1,54 @@ +From 3a3cda25e4512f382d368b3115faf518f19c78d6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 5 Nov 2021 23:08:57 -0300 +Subject: nvme-pci: add NO APST quirk for Kioxia device + +From: Enzo Matsumiya + +[ Upstream commit 5a6254d55e2a9f7919ead8580d7aa0c7a382b26a ] + +This particular Kioxia device times out and aborts I/O during any load, +but it's more easily observable with discards (fstrim). + +The device also gets into a state where it is not possible to use +"nvme set-feature" to disable APST. +Booting with nvme_core.default_ps_max_latency=0 solves the issue. + +We had a dozen or so of these devices behaving this same way in +customer environments.
+ +Signed-off-by: Enzo Matsumiya +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 14 ++++++++++++++ + 1 file changed, 14 insertions(+) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index c42ad0b8247b..9ec3ac367a76 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -2699,6 +2699,20 @@ static const struct nvme_core_quirk_entry core_quirks[] = { + .vid = 0x14a4, + .fr = "22301111", + .quirks = NVME_QUIRK_SIMPLE_SUSPEND, ++ }, ++ { ++ /* ++ * This Kioxia CD6-V Series / HPE PE8030 device times out and ++ * aborts I/O during any load, but more easily reproducible ++ * with discards (fstrim). ++ * ++ * The device is left in a state where it is also not possible ++ * to use "nvme set-feature" to disable APST, but booting with ++ * nvme_core.default_ps_max_latency=0 works. ++ */ ++ .vid = 0x1e0f, ++ .mn = "KCD6XVUL6T40", ++ .quirks = NVME_QUIRK_NO_APST, + } + }; + +-- +2.35.1 + diff --git a/queue-5.10/nvme-pci-allocate-nvme_command-within-driver-pdu.patch b/queue-5.10/nvme-pci-allocate-nvme_command-within-driver-pdu.patch new file mode 100644 index 00000000000..ff584ddfbae --- /dev/null +++ b/queue-5.10/nvme-pci-allocate-nvme_command-within-driver-pdu.patch @@ -0,0 +1,78 @@ +From b29836c8a82731e832a7feda9bd92a7c809e0a1a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 17 Mar 2021 13:37:02 -0700 +Subject: nvme-pci: allocate nvme_command within driver pdu + +From: Keith Busch + +[ Upstream commit af7fae857ea22e9c2aef812e1321d9c5c206edde ] + +Except for pci, all the nvme transport drivers allocate a command within +the driver's pdu. Align pci with everyone else by allocating the nvme +command within pci's pdu and replace the .queue_rq() stack variable with +this. 
+ +Signed-off-by: Keith Busch +Reviewed-by: Jens Axboe +Reviewed-by: Sagi Grimberg +Reviewed-by: Chaitanya Kulkarni +Reviewed-by: Himanshu Madhani +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/pci.c | 11 ++++++----- + 1 file changed, 6 insertions(+), 5 deletions(-) + +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index 07a4d5d387cd..31c6938e5045 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -224,6 +224,7 @@ struct nvme_queue { + */ + struct nvme_iod { + struct nvme_request req; ++ struct nvme_command cmd; + struct nvme_queue *nvmeq; + bool use_sgl; + int aborted; +@@ -917,7 +918,7 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, + struct nvme_dev *dev = nvmeq->dev; + struct request *req = bd->rq; + struct nvme_iod *iod = blk_mq_rq_to_pdu(req); +- struct nvme_command cmnd; ++ struct nvme_command *cmnd = &iod->cmd; + blk_status_t ret; + + iod->aborted = 0; +@@ -931,24 +932,24 @@ static blk_status_t nvme_queue_rq(struct blk_mq_hw_ctx *hctx, + if (unlikely(!test_bit(NVMEQ_ENABLED, &nvmeq->flags))) + return BLK_STS_IOERR; + +- ret = nvme_setup_cmd(ns, req, &cmnd); ++ ret = nvme_setup_cmd(ns, req, cmnd); + if (ret) + return ret; + + if (blk_rq_nr_phys_segments(req)) { +- ret = nvme_map_data(dev, req, &cmnd); ++ ret = nvme_map_data(dev, req, cmnd); + if (ret) + goto out_free_cmd; + } + + if (blk_integrity_rq(req)) { +- ret = nvme_map_metadata(dev, req, &cmnd); ++ ret = nvme_map_metadata(dev, req, cmnd); + if (ret) + goto out_unmap_data; + } + + blk_mq_start_request(req); +- nvme_submit_cmd(nvmeq, &cmnd, bd->last); ++ nvme_submit_cmd(nvmeq, cmnd, bd->last); + return BLK_STS_OK; + out_unmap_data: + nvme_unmap_data(dev, req); +-- +2.35.1 + diff --git a/queue-5.10/nvme-split-nvme_alloc_request.patch b/queue-5.10/nvme-split-nvme_alloc_request.patch new file mode 100644 index 00000000000..811aa46533d --- /dev/null +++ b/queue-5.10/nvme-split-nvme_alloc_request.patch @@ -0,0 
+1,236 @@ +From 5e31391ef73df25b6fdb9a5b406d833e92a6b87e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 9 Nov 2020 18:24:00 -0800 +Subject: nvme: split nvme_alloc_request() + +From: Chaitanya Kulkarni + +[ Upstream commit 39dfe84451b4526a8054cc5a127337bca980dfa3 ] + +Right now nvme_alloc_request() allocates a request from the block layer +based on the value of the qid. When qid is set to NVME_QID_ANY it uses +blk_mq_alloc_request(), else blk_mq_alloc_request_hctx(). + +The function nvme_alloc_request() is called from different contexts. The +only place where it uses a non-NVME_QID_ANY value is for fabrics connect +commands :- + +nvme_submit_sync_cmd() NVME_QID_ANY +nvme_features() NVME_QID_ANY +nvme_sec_submit() NVME_QID_ANY +nvmf_reg_read32() NVME_QID_ANY +nvmf_reg_read64() NVME_QID_ANY +nvmf_reg_write32() NVME_QID_ANY +nvmf_connect_admin_queue() NVME_QID_ANY +nvme_submit_user_cmd() NVME_QID_ANY + nvme_alloc_request() +nvme_keep_alive() NVME_QID_ANY + nvme_alloc_request() +nvme_timeout() NVME_QID_ANY + nvme_alloc_request() +nvme_delete_queue() NVME_QID_ANY + nvme_alloc_request() +nvmet_passthru_execute_cmd() NVME_QID_ANY + nvme_alloc_request() +nvmf_connect_io_queue() QID + __nvme_submit_sync_cmd() + nvme_alloc_request() + +With passthru, nvme_alloc_request() now falls into the I/O fast path such +that blk_mq_alloc_request_hctx() never gets called, and that adds an +additional branch check in the fast path. + +Split nvme_alloc_request() into nvme_alloc_request() and +nvme_alloc_request_qid(). + +Replace each call of nvme_alloc_request() with the NVME_QID_ANY param +with a call to the newly added nvme_alloc_request() without NVME_QID_ANY. + +Replace the call to nvme_alloc_request() with the QID param with a call to +the newly added nvme_alloc_request() or nvme_alloc_request_qid(), +based on the qid value set in __nvme_submit_sync_cmd().
+ +Signed-off-by: Chaitanya Kulkarni +Reviewed-by: Logan Gunthorpe +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 52 +++++++++++++++++++++++----------- + drivers/nvme/host/lightnvm.c | 5 ++-- + drivers/nvme/host/nvme.h | 2 ++ + drivers/nvme/host/pci.c | 4 +-- + drivers/nvme/target/passthru.c | 2 +- + 5 files changed, 42 insertions(+), 23 deletions(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index 4a7154cbca50..68395dcd067c 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -538,21 +538,14 @@ static inline void nvme_clear_nvme_request(struct request *req) + } + } + +-struct request *nvme_alloc_request(struct request_queue *q, +- struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) ++static inline unsigned int nvme_req_op(struct nvme_command *cmd) + { +- unsigned op = nvme_is_write(cmd) ? REQ_OP_DRV_OUT : REQ_OP_DRV_IN; +- struct request *req; +- +- if (qid == NVME_QID_ANY) { +- req = blk_mq_alloc_request(q, op, flags); +- } else { +- req = blk_mq_alloc_request_hctx(q, op, flags, +- qid ? qid - 1 : 0); +- } +- if (IS_ERR(req)) +- return req; ++ return nvme_is_write(cmd) ? 
REQ_OP_DRV_OUT : REQ_OP_DRV_IN; ++} + ++static inline void nvme_init_request(struct request *req, ++ struct nvme_command *cmd) ++{ + if (req->q->queuedata) + req->timeout = NVME_IO_TIMEOUT; + else /* no queuedata implies admin queue */ +@@ -561,11 +554,33 @@ struct request *nvme_alloc_request(struct request_queue *q, + req->cmd_flags |= REQ_FAILFAST_DRIVER; + nvme_clear_nvme_request(req); + nvme_req(req)->cmd = cmd; ++} + ++struct request *nvme_alloc_request(struct request_queue *q, ++ struct nvme_command *cmd, blk_mq_req_flags_t flags) ++{ ++ struct request *req; ++ ++ req = blk_mq_alloc_request(q, nvme_req_op(cmd), flags); ++ if (!IS_ERR(req)) ++ nvme_init_request(req, cmd); + return req; + } + EXPORT_SYMBOL_GPL(nvme_alloc_request); + ++struct request *nvme_alloc_request_qid(struct request_queue *q, ++ struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid) ++{ ++ struct request *req; ++ ++ req = blk_mq_alloc_request_hctx(q, nvme_req_op(cmd), flags, ++ qid ? qid - 1 : 0); ++ if (!IS_ERR(req)) ++ nvme_init_request(req, cmd); ++ return req; ++} ++EXPORT_SYMBOL_GPL(nvme_alloc_request_qid); ++ + static int nvme_toggle_streams(struct nvme_ctrl *ctrl, bool enable) + { + struct nvme_command c; +@@ -928,7 +943,10 @@ int __nvme_submit_sync_cmd(struct request_queue *q, struct nvme_command *cmd, + struct request *req; + int ret; + +- req = nvme_alloc_request(q, cmd, flags, qid); ++ if (qid == NVME_QID_ANY) ++ req = nvme_alloc_request(q, cmd, flags); ++ else ++ req = nvme_alloc_request_qid(q, cmd, flags, qid); + if (IS_ERR(req)) + return PTR_ERR(req); + +@@ -1099,7 +1117,7 @@ static int nvme_submit_user_cmd(struct request_queue *q, + void *meta = NULL; + int ret; + +- req = nvme_alloc_request(q, cmd, 0, NVME_QID_ANY); ++ req = nvme_alloc_request(q, cmd, 0); + if (IS_ERR(req)) + return PTR_ERR(req); + +@@ -1174,8 +1192,8 @@ static int nvme_keep_alive(struct nvme_ctrl *ctrl) + { + struct request *rq; + +- rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, 
BLK_MQ_REQ_RESERVED, +- NVME_QID_ANY); ++ rq = nvme_alloc_request(ctrl->admin_q, &ctrl->ka_cmd, ++ BLK_MQ_REQ_RESERVED); + if (IS_ERR(rq)) + return PTR_ERR(rq); + +diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c +index 88a7c8eac455..470cef3abec3 100644 +--- a/drivers/nvme/host/lightnvm.c ++++ b/drivers/nvme/host/lightnvm.c +@@ -653,7 +653,7 @@ static struct request *nvme_nvm_alloc_request(struct request_queue *q, + + nvme_nvm_rqtocmd(rqd, ns, cmd); + +- rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0, NVME_QID_ANY); ++ rq = nvme_alloc_request(q, (struct nvme_command *)cmd, 0); + if (IS_ERR(rq)) + return rq; + +@@ -767,8 +767,7 @@ static int nvme_nvm_submit_user_cmd(struct request_queue *q, + DECLARE_COMPLETION_ONSTACK(wait); + int ret = 0; + +- rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0, +- NVME_QID_ANY); ++ rq = nvme_alloc_request(q, (struct nvme_command *)vcmd, 0); + if (IS_ERR(rq)) { + ret = -ENOMEM; + goto err_cmd; +diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h +index 95b9657cabaf..8e40a6306e53 100644 +--- a/drivers/nvme/host/nvme.h ++++ b/drivers/nvme/host/nvme.h +@@ -662,6 +662,8 @@ void nvme_start_freeze(struct nvme_ctrl *ctrl); + + #define NVME_QID_ANY -1 + struct request *nvme_alloc_request(struct request_queue *q, ++ struct nvme_command *cmd, blk_mq_req_flags_t flags); ++struct request *nvme_alloc_request_qid(struct request_queue *q, + struct nvme_command *cmd, blk_mq_req_flags_t flags, int qid); + void nvme_cleanup_cmd(struct request *req); + blk_status_t nvme_setup_cmd(struct nvme_ns *ns, struct request *req, +diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c +index f2d0148d4050..07a4d5d387cd 100644 +--- a/drivers/nvme/host/pci.c ++++ b/drivers/nvme/host/pci.c +@@ -1350,7 +1350,7 @@ static enum blk_eh_timer_return nvme_timeout(struct request *req, bool reserved) + req->tag, nvmeq->qid); + + abort_req = nvme_alloc_request(dev->ctrl.admin_q, &cmd, +- BLK_MQ_REQ_NOWAIT, 
NVME_QID_ANY); ++ BLK_MQ_REQ_NOWAIT); + if (IS_ERR(abort_req)) { + atomic_inc(&dev->ctrl.abort_limit); + return BLK_EH_RESET_TIMER; +@@ -2278,7 +2278,7 @@ static int nvme_delete_queue(struct nvme_queue *nvmeq, u8 opcode) + cmd.delete_queue.opcode = opcode; + cmd.delete_queue.qid = cpu_to_le16(nvmeq->qid); + +- req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT, NVME_QID_ANY); ++ req = nvme_alloc_request(q, &cmd, BLK_MQ_REQ_NOWAIT); + if (IS_ERR(req)) + return PTR_ERR(req); + +diff --git a/drivers/nvme/target/passthru.c b/drivers/nvme/target/passthru.c +index 8ee94f056898..d24251ece502 100644 +--- a/drivers/nvme/target/passthru.c ++++ b/drivers/nvme/target/passthru.c +@@ -244,7 +244,7 @@ static void nvmet_passthru_execute_cmd(struct nvmet_req *req) + q = ns->queue; + } + +- rq = nvme_alloc_request(q, req->cmd, 0, NVME_QID_ANY); ++ rq = nvme_alloc_request(q, req->cmd, 0); + if (IS_ERR(rq)) { + status = NVME_SC_INTERNAL; + goto out_put_ns; +-- +2.35.1 + diff --git a/queue-5.10/phy-aquantia-fix-an-when-higher-speeds-than-1g-are-n.patch b/queue-5.10/phy-aquantia-fix-an-when-higher-speeds-than-1g-are-n.patch new file mode 100644 index 00000000000..327db526f58 --- /dev/null +++ b/queue-5.10/phy-aquantia-fix-an-when-higher-speeds-than-1g-are-n.patch @@ -0,0 +1,63 @@ +From 97b3b0d71eb1ecb7a215f8cb7ece37062df99b3d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 10 Jun 2022 11:40:37 +0300 +Subject: phy: aquantia: Fix AN when higher speeds than 1G are not advertised + +From: Claudiu Manoil + +[ Upstream commit 9b7fd1670a94a57d974795acebde843a5c1a354e ] + +Even when the eth port is restricted to work with speeds not higher than 1G, +and so the eth driver is requesting the phy (via phylink) to advertise up +to 1000BASET support, the aquantia phy device is still advertising for 2.5G +and 5G speeds. +Clear these advertising defaults when requested.
+ +Cc: Ondrej Spacek +Fixes: 09c4c57f7bc41 ("net: phy: aquantia: add support for auto-negotiation configuration") +Signed-off-by: Claudiu Manoil +Link: https://lore.kernel.org/r/20220610084037.7625-1-claudiu.manoil@nxp.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/phy/aquantia_main.c | 15 ++++++++++++++- + 1 file changed, 14 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/phy/aquantia_main.c b/drivers/net/phy/aquantia_main.c +index 41e7c1432497..75a62d1cc737 100644 +--- a/drivers/net/phy/aquantia_main.c ++++ b/drivers/net/phy/aquantia_main.c +@@ -34,6 +34,8 @@ + #define MDIO_AN_VEND_PROV 0xc400 + #define MDIO_AN_VEND_PROV_1000BASET_FULL BIT(15) + #define MDIO_AN_VEND_PROV_1000BASET_HALF BIT(14) ++#define MDIO_AN_VEND_PROV_5000BASET_FULL BIT(11) ++#define MDIO_AN_VEND_PROV_2500BASET_FULL BIT(10) + #define MDIO_AN_VEND_PROV_DOWNSHIFT_EN BIT(4) + #define MDIO_AN_VEND_PROV_DOWNSHIFT_MASK GENMASK(3, 0) + #define MDIO_AN_VEND_PROV_DOWNSHIFT_DFLT 4 +@@ -230,9 +232,20 @@ static int aqr_config_aneg(struct phy_device *phydev) + phydev->advertising)) + reg |= MDIO_AN_VEND_PROV_1000BASET_HALF; + ++ /* Handle the case when the 2.5G and 5G speeds are not advertised */ ++ if (linkmode_test_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT, ++ phydev->advertising)) ++ reg |= MDIO_AN_VEND_PROV_2500BASET_FULL; ++ ++ if (linkmode_test_bit(ETHTOOL_LINK_MODE_5000baseT_Full_BIT, ++ phydev->advertising)) ++ reg |= MDIO_AN_VEND_PROV_5000BASET_FULL; ++ + ret = phy_modify_mmd_changed(phydev, MDIO_MMD_AN, MDIO_AN_VEND_PROV, + MDIO_AN_VEND_PROV_1000BASET_HALF | +- MDIO_AN_VEND_PROV_1000BASET_FULL, reg); ++ MDIO_AN_VEND_PROV_1000BASET_FULL | ++ MDIO_AN_VEND_PROV_2500BASET_FULL | ++ MDIO_AN_VEND_PROV_5000BASET_FULL, reg); + if (ret < 0) + return ret; + if (ret > 0) +-- +2.35.1 + diff --git a/queue-5.10/regmap-irq-fix-a-bug-in-regmap_irq_enable-for-type_i.patch b/queue-5.10/regmap-irq-fix-a-bug-in-regmap_irq_enable-for-type_i.patch new file mode 100644 index 
00000000000..eee5b796b90 --- /dev/null +++ b/queue-5.10/regmap-irq-fix-a-bug-in-regmap_irq_enable-for-type_i.patch @@ -0,0 +1,55 @@ +From be97683e4302df1872d23721a2b10381fefa6afd Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 20 Jun 2022 21:05:56 +0100 +Subject: regmap-irq: Fix a bug in regmap_irq_enable() for type_in_mask chips + +From: Aidan MacDonald + +[ Upstream commit 485037ae9a095491beb7f893c909a76cc4f9d1e7 ] + +When enabling a type_in_mask irq, the type_buf contents must be +AND'd with the mask of the IRQ we're enabling to avoid enabling +other IRQs by accident, which can happen if several type_in_mask +irqs share a mask register. + +Fixes: bc998a730367 ("regmap: irq: handle HW using separate rising/falling edge interrupts") +Signed-off-by: Aidan MacDonald +Link: https://lore.kernel.org/r/20220620200644.1961936-2-aidanmacdonald.0x0@gmail.com +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + drivers/base/regmap/regmap-irq.c | 5 +++-- + 1 file changed, 3 insertions(+), 2 deletions(-) + +diff --git a/drivers/base/regmap/regmap-irq.c b/drivers/base/regmap/regmap-irq.c +index 87c5c421e0f4..4466f8bdab2e 100644 +--- a/drivers/base/regmap/regmap-irq.c ++++ b/drivers/base/regmap/regmap-irq.c +@@ -220,6 +220,7 @@ static void regmap_irq_enable(struct irq_data *data) + struct regmap_irq_chip_data *d = irq_data_get_irq_chip_data(data); + struct regmap *map = d->map; + const struct regmap_irq *irq_data = irq_to_regmap_irq(d, data->hwirq); ++ unsigned int reg = irq_data->reg_offset / map->reg_stride; + unsigned int mask, type; + + type = irq_data->type.type_falling_val | irq_data->type.type_rising_val; +@@ -236,14 +237,14 @@ static void regmap_irq_enable(struct irq_data *data) + * at the corresponding offset in regmap_irq_set_type(). 
+ */ + if (d->chip->type_in_mask && type) +- mask = d->type_buf[irq_data->reg_offset / map->reg_stride]; ++ mask = d->type_buf[reg] & irq_data->mask; + else + mask = irq_data->mask; + + if (d->chip->clear_on_unmask) + d->clear_status = true; + +- d->mask_buf[irq_data->reg_offset / map->reg_stride] &= ~mask; ++ d->mask_buf[reg] &= ~mask; + } + + static void regmap_irq_disable(struct irq_data *data) +-- +2.35.1 + diff --git a/queue-5.10/revert-net-tls-fix-tls_sk_proto_close-executed-repea.patch b/queue-5.10/revert-net-tls-fix-tls_sk_proto_close-executed-repea.patch new file mode 100644 index 00000000000..1b9669ba8b1 --- /dev/null +++ b/queue-5.10/revert-net-tls-fix-tls_sk_proto_close-executed-repea.patch @@ -0,0 +1,42 @@ +From 232fee51318e1609450012d3859a6be260c1f356 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 20 Jun 2022 12:13:52 -0700 +Subject: Revert "net/tls: fix tls_sk_proto_close executed repeatedly" + +From: Jakub Kicinski + +[ Upstream commit 1b205d948fbb06a7613d87dcea0ff5fd8a08ed91 ] + +This reverts commit 69135c572d1f84261a6de2a1268513a7e71753e2. + +This commit was just papering over the issue, ULP should not +get ->update() called with its own sk_prot. Each ULP would +need to add this check. 
+ +Fixes: 69135c572d1f ("net/tls: fix tls_sk_proto_close executed repeatedly") +Signed-off-by: Jakub Kicinski +Reviewed-by: John Fastabend +Link: https://lore.kernel.org/r/20220620191353.1184629-1-kuba@kernel.org +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + net/tls/tls_main.c | 3 --- + 1 file changed, 3 deletions(-) + +diff --git a/net/tls/tls_main.c b/net/tls/tls_main.c +index 9492528f5852..58d22d6b86ae 100644 +--- a/net/tls/tls_main.c ++++ b/net/tls/tls_main.c +@@ -787,9 +787,6 @@ static void tls_update(struct sock *sk, struct proto *p, + { + struct tls_context *ctx; + +- if (sk->sk_prot == p) +- return; +- + ctx = tls_get_ctx(sk); + if (likely(ctx)) { + ctx->sk_write_space = write_space; +-- +2.35.1 + diff --git a/queue-5.10/s390-cpumf-handle-events-cycles-and-instructions-ide.patch b/queue-5.10/s390-cpumf-handle-events-cycles-and-instructions-ide.patch new file mode 100644 index 00000000000..cb85f3f8ce7 --- /dev/null +++ b/queue-5.10/s390-cpumf-handle-events-cycles-and-instructions-ide.patch @@ -0,0 +1,102 @@ +From bf42ab5d6720f9bdd0533deb21c9b29e2365e02d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 10 Jun 2022 15:19:00 +0200 +Subject: s390/cpumf: Handle events cycles and instructions identical + +From: Thomas Richter + +[ Upstream commit be857b7f77d130dbbd47c91fc35198b040f35865 ] + +Events CPU_CYCLES and INSTRUCTIONS can be submitted with two different +perf_event attribute::type values: + - PERF_TYPE_HARDWARE: when invoked via the perf tool predefined event + names cycles, cpu-cycles or instructions. + - pmu->type: when invoked via the perf tool event name cpum_cf/CPU_CYCLES/ or + cpum_cf/INSTRUCTIONS/. This invocation also selects the PMU to which + the event belongs. +Handle both types of invocation identically for events CPU_CYCLES and +INSTRUCTIONS. They address the same hardware. +The result is different when the event modifier exclude_kernel is also set. +Invocation with the event modifier for user space event counting fails.
+ +Output before: + + # perf stat -e cpum_cf/cpu_cycles/u -- true + + Performance counter stats for 'true': + + cpum_cf/cpu_cycles/u + + 0.000761033 seconds time elapsed + + 0.000076000 seconds user + 0.000725000 seconds sys + + # + +Output after: + # perf stat -e cpum_cf/cpu_cycles/u -- true + + Performance counter stats for 'true': + + 349,613 cpum_cf/cpu_cycles/u + + 0.000844143 seconds time elapsed + + 0.000079000 seconds user + 0.000800000 seconds sys + # + +Fixes: 6a82e23f45fe ("s390/cpumf: Adjust registration of s390 PMU device drivers") +Signed-off-by: Thomas Richter +Acked-by: Sumanth Korikkar +[agordeev@linux.ibm.com corrected commit ID of Fixes commit] +Signed-off-by: Alexander Gordeev +Signed-off-by: Sasha Levin +--- + arch/s390/kernel/perf_cpum_cf.c | 22 +++++++++++++++++++++- + 1 file changed, 21 insertions(+), 1 deletion(-) + +diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c +index 0eb1d1cc53a8..dddb32e53db8 100644 +--- a/arch/s390/kernel/perf_cpum_cf.c ++++ b/arch/s390/kernel/perf_cpum_cf.c +@@ -292,6 +292,26 @@ static int __hw_perf_event_init(struct perf_event *event, unsigned int type) + return err; + } + ++/* Events CPU_CYLCES and INSTRUCTIONS can be submitted with two different ++ * attribute::type values: ++ * - PERF_TYPE_HARDWARE: ++ * - pmu->type: ++ * Handle both type of invocations identical. They address the same hardware. ++ * The result is different when event modifiers exclude_kernel and/or ++ * exclude_user are also set. 
++ */ ++static int cpumf_pmu_event_type(struct perf_event *event) ++{ ++ u64 ev = event->attr.config; ++ ++ if (cpumf_generic_events_basic[PERF_COUNT_HW_CPU_CYCLES] == ev || ++ cpumf_generic_events_basic[PERF_COUNT_HW_INSTRUCTIONS] == ev || ++ cpumf_generic_events_user[PERF_COUNT_HW_CPU_CYCLES] == ev || ++ cpumf_generic_events_user[PERF_COUNT_HW_INSTRUCTIONS] == ev) ++ return PERF_TYPE_HARDWARE; ++ return PERF_TYPE_RAW; ++} ++ + static int cpumf_pmu_event_init(struct perf_event *event) + { + unsigned int type = event->attr.type; +@@ -301,7 +321,7 @@ static int cpumf_pmu_event_init(struct perf_event *event) + err = __hw_perf_event_init(event, type); + else if (event->pmu->type == type) + /* Registered as unknown PMU */ +- err = __hw_perf_event_init(event, PERF_TYPE_RAW); ++ err = __hw_perf_event_init(event, cpumf_pmu_event_type(event)); + else + return -ENOENT; + +-- +2.35.1 + diff --git a/queue-5.10/scsi-scsi_debug-fix-zone-transition-to-full-conditio.patch b/queue-5.10/scsi-scsi_debug-fix-zone-transition-to-full-conditio.patch new file mode 100644 index 00000000000..3cf4f4b8c06 --- /dev/null +++ b/queue-5.10/scsi-scsi_debug-fix-zone-transition-to-full-conditio.patch @@ -0,0 +1,81 @@ +From 49135a9739211a46a884baff351dc243c83520d6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 8 Jun 2022 10:13:02 +0900 +Subject: scsi: scsi_debug: Fix zone transition to full condition + +From: Damien Le Moal + +[ Upstream commit 566d3c57eb526f32951af15866086e236ce1fc8a ] + +When a write command to a sequential write required or sequential write +preferred zone results in the zone write pointer reaching the end of the +zone, the zone condition must be set to full AND the number of implicitly +or explicitly open zones updated to have a correct accounting for zone +resources. However, the function zbc_inc_wp() only sets the zone condition +to full without updating the open zone counters, resulting in a zone state +machine breakage.
+
+Introduce the helper function zbc_set_zone_full() and use it in
+zbc_inc_wp() to correctly transition zones to the full condition.
+
+Link: https://lore.kernel.org/r/20220608011302.92061-1-damien.lemoal@opensource.wdc.com
+Fixes: f0d1cf9378bd ("scsi: scsi_debug: Add ZBC zone commands")
+Reviewed-by: Niklas Cassel
+Acked-by: Douglas Gilbert
+Signed-off-by: Damien Le Moal
+Signed-off-by: Martin K. Petersen
+Signed-off-by: Sasha Levin
+---
+ drivers/scsi/scsi_debug.c | 22 ++++++++++++++++++++--
+ 1 file changed, 20 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/scsi/scsi_debug.c b/drivers/scsi/scsi_debug.c
+index 6b00de6b6f0e..5eb959b5f701 100644
+--- a/drivers/scsi/scsi_debug.c
++++ b/drivers/scsi/scsi_debug.c
+@@ -2746,6 +2746,24 @@ static void zbc_open_zone(struct sdebug_dev_info *devip,
+ 	}
+ }
+
++static inline void zbc_set_zone_full(struct sdebug_dev_info *devip,
++				     struct sdeb_zone_state *zsp)
++{
++	switch (zsp->z_cond) {
++	case ZC2_IMPLICIT_OPEN:
++		devip->nr_imp_open--;
++		break;
++	case ZC3_EXPLICIT_OPEN:
++		devip->nr_exp_open--;
++		break;
++	default:
++		WARN_ONCE(true, "Invalid zone %llu condition %x\n",
++			  zsp->z_start, zsp->z_cond);
++		break;
++	}
++	zsp->z_cond = ZC5_FULL;
++}
++
+ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 		       unsigned long long lba, unsigned int num)
+ {
+@@ -2758,7 +2776,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 	if (zsp->z_type == ZBC_ZONE_TYPE_SWR) {
+ 		zsp->z_wp += num;
+ 		if (zsp->z_wp >= zend)
+-			zsp->z_cond = ZC5_FULL;
++			zbc_set_zone_full(devip, zsp);
+ 		return;
+ 	}
+
+@@ -2777,7 +2795,7 @@ static void zbc_inc_wp(struct sdebug_dev_info *devip,
+ 			n = num;
+ 	}
+ 	if (zsp->z_wp >= zend)
+-		zsp->z_cond = ZC5_FULL;
++		zbc_set_zone_full(devip, zsp);
+
+ 	num -= n;
+ 	lba += n;
+--
+2.35.1
+
diff --git a/queue-5.10/selftests-netfilter-correct-pktgen_script_paths-in-n.patch b/queue-5.10/selftests-netfilter-correct-pktgen_script_paths-in-n.patch
new file mode 100644
index 00000000000..3a1cd550347
--- /dev/null
+++ b/queue-5.10/selftests-netfilter-correct-pktgen_script_paths-in-n.patch
@@ -0,0 +1,61 @@
+From 5de023d62a698101d202a4d3cad1091dd079794d Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 16 Jun 2022 15:40:46 +0800
+Subject: selftests: netfilter: correct PKTGEN_SCRIPT_PATHS in
+ nft_concat_range.sh
+
+From: Jie2x Zhou
+
+[ Upstream commit 5d79d8af8dec58bf709b3124d09d9572edd9c617 ]
+
+Before change:
+make -C netfilter
+ TEST: performance
+ net,port [SKIP]
+ perf not supported
+ port,net [SKIP]
+ perf not supported
+ net6,port [SKIP]
+ perf not supported
+ port,proto [SKIP]
+ perf not supported
+ net6,port,mac [SKIP]
+ perf not supported
+ net6,port,mac,proto [SKIP]
+ perf not supported
+ net,mac [SKIP]
+ perf not supported
+
+After change:
+ net,mac [ OK ]
+ baseline (drop from netdev hook): 2061098pps
+ baseline hash (non-ranged entries): 1606741pps
+ baseline rbtree (match on first field only): 1191607pps
+ set with 1000 full, ranged entries: 1639119pps
+ok 8 selftests: netfilter: nft_concat_range.sh
+
+Fixes: 611973c1e06f ("selftests: netfilter: Introduce tests for sets with range concatenation")
+Reported-by: kernel test robot
+Signed-off-by: Jie2x Zhou
+Signed-off-by: Pablo Neira Ayuso
+Signed-off-by: Sasha Levin
+---
+ tools/testing/selftests/netfilter/nft_concat_range.sh | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/testing/selftests/netfilter/nft_concat_range.sh b/tools/testing/selftests/netfilter/nft_concat_range.sh
+index b5eef5ffb58e..af3461cb5c40 100755
+--- a/tools/testing/selftests/netfilter/nft_concat_range.sh
++++ b/tools/testing/selftests/netfilter/nft_concat_range.sh
+@@ -31,7 +31,7 @@ BUGS="flush_remove_add reload"
+
+ # List of possible paths to pktgen script from kernel tree for performance tests
+ PKTGEN_SCRIPT_PATHS="
+-	../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
++	../../../../samples/pktgen/pktgen_bench_xmit_mode_netif_receive.sh
+ 	pktgen/pktgen_bench_xmit_mode_netif_receive.sh"
+
+ # Definition of set types:
+--
+2.35.1
+
diff --git a/queue-5.10/series b/queue-5.10/series
index cd88ccad7af..ea81f660d4b 100644
--- a/queue-5.10/series
+++ b/queue-5.10/series
@@ -18,3 +18,48 @@ dm-mirror-log-clear-log-bits-up-to-bits_per_long-boundary.patch
 usb-serial-option-add-telit-le910cx-0x1250-composition.patch
 usb-serial-option-add-quectel-em05-g-modem.patch
 usb-serial-option-add-quectel-rm500k-module-support.patch
+drm-msm-fix-double-pm_runtime_disable-call.patch
+netfilter-nftables-add-nft_parse_register_load-and-u.patch
+netfilter-nftables-add-nft_parse_register_store-and-.patch
+netfilter-use-get_random_u32-instead-of-prandom.patch
+scsi-scsi_debug-fix-zone-transition-to-full-conditio.patch
+drm-msm-use-for_each_sgtable_sg-to-iterate-over-scat.patch
+bpf-fix-request_sock-leak-in-sk-lookup-helpers.patch
+drm-sun4i-fix-crash-during-suspend-after-component-b.patch
+bpf-x86-fix-tail-call-count-offset-calculation-on-bp.patch
+phy-aquantia-fix-an-when-higher-speeds-than-1g-are-n.patch
+tipc-simplify-the-finalize-work-queue.patch
+tipc-fix-use-after-free-read-in-tipc_named_reinit.patch
+igb-fix-a-use-after-free-issue-in-igb_clean_tx_ring.patch
+bonding-arp-monitor-spams-netdev_notify_peers-notifi.patch
+net-sched-sch_netem-fix-arithmetic-in-netem_dump-for.patch
+drm-msm-mdp4-fix-refcount-leak-in-mdp4_modeset_init_.patch
+drm-msm-dp-check-core_initialized-before-disable-int.patch
+drm-msm-dp-fixes-wrong-connection-state-caused-by-fa.patch
+drm-msm-dp-deinitialize-mainlink-if-link-training-fa.patch
+drm-msm-dp-promote-irq_hpd-handle-to-handle-link-tra.patch
+drm-msm-dp-fix-connect-disconnect-handled-at-irq_hpd.patch
+erspan-do-not-assume-transport-header-is-always-set.patch
+net-tls-fix-tls_sk_proto_close-executed-repeatedly.patch
+udmabuf-add-back-sanity-check.patch
+selftests-netfilter-correct-pktgen_script_paths-in-n.patch
+x86-xen-remove-undefined-behavior-in-setup_features.patch
+mips-remove-repetitive-increase-irq_err_count.patch
+afs-fix-dynamic-root-getattr.patch
+ice-ethtool-advertise-1000m-speeds-properly.patch
+regmap-irq-fix-a-bug-in-regmap_irq_enable-for-type_i.patch
+igb-make-dma-faster-when-cpu-is-active-on-the-pcie-l.patch
+virtio_net-fix-xdp_rxq_info-bug-after-suspend-resume.patch
+revert-net-tls-fix-tls_sk_proto_close-executed-repea.patch
+nvme-centralize-setting-the-timeout-in-nvme_alloc_re.patch
+nvme-split-nvme_alloc_request.patch
+nvme-mark-nvme_setup_passsthru-inline.patch
+nvme-don-t-check-nvme_req-flags-for-new-req.patch
+nvme-pci-allocate-nvme_command-within-driver-pdu.patch
+nvme-pci-add-no-apst-quirk-for-kioxia-device.patch
+nvme-move-the-samsung-x5-quirk-entry-to-the-core-qui.patch
+gpio-winbond-fix-error-code-in-winbond_gpio_get.patch
+s390-cpumf-handle-events-cycles-and-instructions-ide.patch
+iio-mma8452-fix-probe-fail-when-device-tree-compatib.patch
+iio-adc-vf610-fix-conversion-mode-sysfs-node-name.patch
+usb-typec-wcove-drop-wrong-dependency-to-intel_soc_p.patch
diff --git a/queue-5.10/tipc-fix-use-after-free-read-in-tipc_named_reinit.patch b/queue-5.10/tipc-fix-use-after-free-read-in-tipc_named_reinit.patch
new file mode 100644
index 00000000000..c97378c6aea
--- /dev/null
+++ b/queue-5.10/tipc-fix-use-after-free-read-in-tipc_named_reinit.patch
@@ -0,0 +1,80 @@
+From ce41d42ef612169a963c1c62965bd65e006de6a6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 17 Jun 2022 08:45:51 +0700
+Subject: tipc: fix use-after-free Read in tipc_named_reinit
+
+From: Hoang Le
+
+[ Upstream commit 911600bf5a5e84bfda4d33ee32acc75ecf6159f0 ]
+
+syzbot found the following issue on:
+==================================================================
+BUG: KASAN: use-after-free in tipc_named_reinit+0x94f/0x9b0
+net/tipc/name_distr.c:413
+Read of size 8 at addr ffff88805299a000 by task kworker/1:9/23764
+
+CPU: 1 PID: 23764 Comm: kworker/1:9 Not tainted
+5.18.0-rc4-syzkaller-00878-g17d49e6e8012 #0
+Hardware name: Google Compute Engine/Google Compute Engine,
+BIOS Google 01/01/2011
+Workqueue: events tipc_net_finalize_work
+Call Trace:
+
+ __dump_stack lib/dump_stack.c:88 [inline]
+ dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
+ print_address_description.constprop.0.cold+0xeb/0x495
+mm/kasan/report.c:313
+ print_report mm/kasan/report.c:429 [inline]
+ kasan_report.cold+0xf4/0x1c6 mm/kasan/report.c:491
+ tipc_named_reinit+0x94f/0x9b0 net/tipc/name_distr.c:413
+ tipc_net_finalize+0x234/0x3d0 net/tipc/net.c:138
+ process_one_work+0x996/0x1610 kernel/workqueue.c:2289
+ worker_thread+0x665/0x1080 kernel/workqueue.c:2436
+ kthread+0x2e9/0x3a0 kernel/kthread.c:376
+ ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
+
+[...]
+==================================================================
+
+In the commit
+d966ddcc3821 ("tipc: fix a deadlock when flushing scheduled work"),
+the cancel_work_sync() function just to make sure ONLY the work
+tipc_net_finalize_work() is executing/pending on any CPU completed before
+tipc namespace is destroyed through tipc_exit_net(). But this function
+is not guaranteed the work is the last queued. So, the destroyed instance
+may be accessed in the work which will try to enqueue later.
+
+In order to completely fix, we re-order the calling of cancel_work_sync()
+to make sure the work tipc_net_finalize_work() was last queued and it
+must be completed by calling cancel_work_sync().
+
+Reported-by: syzbot+47af19f3307fc9c5c82e@syzkaller.appspotmail.com
+Fixes: d966ddcc3821 ("tipc: fix a deadlock when flushing scheduled work")
+Acked-by: Jon Maloy
+Signed-off-by: Ying Xue
+Signed-off-by: Hoang Le
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/tipc/core.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 96bfcb2986f5..7724499f516e 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -111,10 +111,9 @@ static void __net_exit tipc_exit_net(struct net *net)
+ 	struct tipc_net *tn = tipc_net(net);
+
+ 	tipc_detach_loopback(net);
++	tipc_net_stop(net);
+ 	/* Make sure the tipc_net_finalize_work() finished */
+ 	cancel_work_sync(&tn->work);
+-	tipc_net_stop(net);
+-
+ 	tipc_bcast_stop(net);
+ 	tipc_nametbl_stop(net);
+ 	tipc_sk_rht_destroy(net);
+--
+2.35.1
+
diff --git a/queue-5.10/tipc-simplify-the-finalize-work-queue.patch b/queue-5.10/tipc-simplify-the-finalize-work-queue.patch
new file mode 100644
index 00000000000..a5feef73ddd
--- /dev/null
+++ b/queue-5.10/tipc-simplify-the-finalize-work-queue.patch
@@ -0,0 +1,163 @@
+From c3ad9349952ee048a53c52c6b1951b5f3edc93d9 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 18 May 2021 10:09:08 +0800
+Subject: tipc: simplify the finalize work queue
+
+From: Xin Long
+
+[ Upstream commit be07f056396d6bb40963c45a02951c566ddeef8e ]
+
+This patch is to use "struct work_struct" for the finalize work queue
+instead of "struct tipc_net_work", as it can get the "net" and "addr"
+from tipc_net's other members and there is no need to add extra net
+and addr in tipc_net by defining "struct tipc_net_work".
+
+Note that it's safe to get net from tn->bcl as bcl is always released
+after the finalize work queue is done.
+
+Signed-off-by: Xin Long
+Acked-by: Jon Maloy
+Signed-off-by: David S. Miller
+Signed-off-by: Sasha Levin
+---
+ net/tipc/core.c | 4 ++--
+ net/tipc/core.h | 8 +-------
+ net/tipc/discover.c | 4 ++--
+ net/tipc/link.c | 5 +++++
+ net/tipc/link.h | 1 +
+ net/tipc/net.c | 15 +++------------
+ 6 files changed, 14 insertions(+), 23 deletions(-)
+
+diff --git a/net/tipc/core.c b/net/tipc/core.c
+index 40c03085c0ea..96bfcb2986f5 100644
+--- a/net/tipc/core.c
++++ b/net/tipc/core.c
+@@ -60,7 +60,7 @@ static int __net_init tipc_init_net(struct net *net)
+ 	tn->trial_addr = 0;
+ 	tn->addr_trial_end = 0;
+ 	tn->capabilities = TIPC_NODE_CAPABILITIES;
+-	INIT_WORK(&tn->final_work.work, tipc_net_finalize_work);
++	INIT_WORK(&tn->work, tipc_net_finalize_work);
+ 	memset(tn->node_id, 0, sizeof(tn->node_id));
+ 	memset(tn->node_id_string, 0, sizeof(tn->node_id_string));
+ 	tn->mon_threshold = TIPC_DEF_MON_THRESHOLD;
+@@ -112,7 +112,7 @@ static void __net_exit tipc_exit_net(struct net *net)
+
+ 	tipc_detach_loopback(net);
+ 	/* Make sure the tipc_net_finalize_work() finished */
+-	cancel_work_sync(&tn->final_work.work);
++	cancel_work_sync(&tn->work);
+ 	tipc_net_stop(net);
+
+ 	tipc_bcast_stop(net);
+diff --git a/net/tipc/core.h b/net/tipc/core.h
+index 992924a849be..73a26b0b9ca1 100644
+--- a/net/tipc/core.h
++++ b/net/tipc/core.h
+@@ -90,12 +90,6 @@ extern unsigned int tipc_net_id __read_mostly;
+ extern int sysctl_tipc_rmem[3] __read_mostly;
+ extern int sysctl_tipc_named_timeout __read_mostly;
+
+-struct tipc_net_work {
+-	struct work_struct work;
+-	struct net *net;
+-	u32 addr;
+-};
+-
+ struct tipc_net {
+ 	u8 node_id[NODE_ID_LEN];
+ 	u32 node_addr;
+@@ -150,7 +144,7 @@ struct tipc_net {
+ 	struct tipc_crypto *crypto_tx;
+ #endif
+ 	/* Work item for net finalize */
+-	struct tipc_net_work final_work;
++	struct work_struct work;
+ 	/* The numbers of work queues in schedule */
+ 	atomic_t wq_count;
+ };
+diff --git a/net/tipc/discover.c b/net/tipc/discover.c
+index d4ecacddb40c..14bc20604051 100644
+--- a/net/tipc/discover.c
++++ b/net/tipc/discover.c
+@@ -167,7 +167,7 @@ static bool tipc_disc_addr_trial_msg(struct tipc_discoverer *d,
+
+ 	/* Apply trial address if we just left trial period */
+ 	if (!trial && !self) {
+-		tipc_sched_net_finalize(net, tn->trial_addr);
++		schedule_work(&tn->work);
+ 		msg_set_prevnode(buf_msg(d->skb), tn->trial_addr);
+ 		msg_set_type(buf_msg(d->skb), DSC_REQ_MSG);
+ 	}
+@@ -307,7 +307,7 @@ static void tipc_disc_timeout(struct timer_list *t)
+ 	if (!time_before(jiffies, tn->addr_trial_end) && !tipc_own_addr(net)) {
+ 		mod_timer(&d->timer, jiffies + TIPC_DISC_INIT);
+ 		spin_unlock_bh(&d->lock);
+-		tipc_sched_net_finalize(net, tn->trial_addr);
++		schedule_work(&tn->work);
+ 		return;
+ 	}
+
+diff --git a/net/tipc/link.c b/net/tipc/link.c
+index 7a353ff62844..064fdb8e50e1 100644
+--- a/net/tipc/link.c
++++ b/net/tipc/link.c
+@@ -344,6 +344,11 @@ char tipc_link_plane(struct tipc_link *l)
+ 	return l->net_plane;
+ }
+
++struct net *tipc_link_net(struct tipc_link *l)
++{
++	return l->net;
++}
++
+ void tipc_link_update_caps(struct tipc_link *l, u16 capabilities)
+ {
+ 	l->peer_caps = capabilities;
+diff --git a/net/tipc/link.h b/net/tipc/link.h
+index fc07232c9a12..a16f401fdabd 100644
+--- a/net/tipc/link.h
++++ b/net/tipc/link.h
+@@ -156,4 +156,5 @@ int tipc_link_bc_sync_rcv(struct tipc_link *l, struct tipc_msg *hdr,
+ int tipc_link_bc_nack_rcv(struct tipc_link *l, struct sk_buff *skb,
+ 			  struct sk_buff_head *xmitq);
+ bool tipc_link_too_silent(struct tipc_link *l);
++struct net *tipc_link_net(struct tipc_link *l);
+ #endif
+diff --git a/net/tipc/net.c b/net/tipc/net.c
+index 0bb2323201da..671cb4f9d563 100644
+--- a/net/tipc/net.c
++++ b/net/tipc/net.c
+@@ -41,6 +41,7 @@
+ #include "socket.h"
+ #include "node.h"
+ #include "bcast.h"
++#include "link.h"
+ #include "netlink.h"
+ #include "monitor.h"
+
+@@ -138,19 +139,9 @@ static void tipc_net_finalize(struct net *net, u32 addr)
+
+ void tipc_net_finalize_work(struct work_struct *work)
+ {
+-	struct tipc_net_work *fwork;
++	struct tipc_net *tn = container_of(work, struct tipc_net, work);
+
+-	fwork = container_of(work, struct tipc_net_work, work);
+-	tipc_net_finalize(fwork->net, fwork->addr);
+-}
+-
+-void tipc_sched_net_finalize(struct net *net, u32 addr)
+-{
+-	struct tipc_net *tn = tipc_net(net);
+-
+-	tn->final_work.net = net;
+-	tn->final_work.addr = addr;
+-	schedule_work(&tn->final_work.work);
++	tipc_net_finalize(tipc_link_net(tn->bcl), tn->trial_addr);
+ }
+
+ void tipc_net_stop(struct net *net)
+--
+2.35.1
+
diff --git a/queue-5.10/udmabuf-add-back-sanity-check.patch b/queue-5.10/udmabuf-add-back-sanity-check.patch
new file mode 100644
index 00000000000..da5b41826c3
--- /dev/null
+++ b/queue-5.10/udmabuf-add-back-sanity-check.patch
@@ -0,0 +1,42 @@
+From dbdafb34e5a5edc48f536bdc9b78ea177d46ec43 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 20 Jun 2022 09:15:47 +0200
+Subject: udmabuf: add back sanity check
+
+From: Gerd Hoffmann
+
+[ Upstream commit 05b252cccb2e5c3f56119d25de684b4f810ba40a ]
+
+Check vm_fault->pgoff before using it. When we removed the warning, we
+also removed the check.
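[Editor's aside, separate from the quoted patch: the restored sanity check follows a standard pattern, validate the fault offset against the object size before indexing. A self-contained user-space sketch with hypothetical names, not the udmabuf driver code:]

```c
#include <stddef.h>

/* Sketch of the check the patch adds back: reject an out-of-range page
 * offset before using it as an array index. In the driver the failure
 * case returns VM_FAULT_SIGBUS; here it is modeled as NULL. */
struct buf_model {
	size_t pagecount;	/* number of valid entries in pages[] */
	int *pages;
};

static int *model_fault(struct buf_model *b, size_t pgoff)
{
	if (pgoff >= b->pagecount)	/* out of range: would be SIGBUS */
		return NULL;
	return &b->pages[pgoff];
}
```

Without the bounds test, a crafted fault offset indexes past the end of the page array, which is the out-of-bounds access the ZDI report covered.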
+
+Fixes: 7b26e4e2119d ("udmabuf: drop WARN_ON() check.")
+Reported-by: zdi-disclosures@trendmicro.com
+Suggested-by: Linus Torvalds
+Signed-off-by: Gerd Hoffmann
+Signed-off-by: Linus Torvalds
+Signed-off-by: Sasha Levin
+---
+ drivers/dma-buf/udmabuf.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/drivers/dma-buf/udmabuf.c b/drivers/dma-buf/udmabuf.c
+index cfbf10128aae..2e3b76519b49 100644
+--- a/drivers/dma-buf/udmabuf.c
++++ b/drivers/dma-buf/udmabuf.c
+@@ -26,8 +26,11 @@ static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
+ {
+ 	struct vm_area_struct *vma = vmf->vma;
+ 	struct udmabuf *ubuf = vma->vm_private_data;
++	pgoff_t pgoff = vmf->pgoff;
+
+-	vmf->page = ubuf->pages[vmf->pgoff];
++	if (pgoff >= ubuf->pagecount)
++		return VM_FAULT_SIGBUS;
++	vmf->page = ubuf->pages[pgoff];
+ 	get_page(vmf->page);
+ 	return 0;
+ }
+--
+2.35.1
+
diff --git a/queue-5.10/usb-typec-wcove-drop-wrong-dependency-to-intel_soc_p.patch b/queue-5.10/usb-typec-wcove-drop-wrong-dependency-to-intel_soc_p.patch
new file mode 100644
index 00000000000..203e3caf13c
--- /dev/null
+++ b/queue-5.10/usb-typec-wcove-drop-wrong-dependency-to-intel_soc_p.patch
@@ -0,0 +1,45 @@
+From ddc81d22215f128017e46913f9bc0e4fd57e5874 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 20 Jun 2022 13:43:16 +0300
+Subject: usb: typec: wcove: Drop wrong dependency to INTEL_SOC_PMIC
+
+From: Andy Shevchenko
+
+[ Upstream commit 9ef165406308515dcf2e3f6e97b39a1c56d86db5 ]
+
+Intel SoC PMIC is a generic name for all PMICs that are used
+on Intel platforms. In particular, INTEL_SOC_PMIC kernel configuration
+option refers to Crystal Cove PMIC, which has never been a part
+of any Intel Broxton hardware. Drop wrong dependency from Kconfig.
+
+Note, the correct dependency is satisfied via ACPI PMIC OpRegion driver,
+which the Type-C depends on.
+
+Fixes: d2061f9cc32d ("usb: typec: add driver for Intel Whiskey Cove PMIC USB Type-C PHY")
+Reported-by: Hans de Goede
+Reviewed-by: Guenter Roeck
+Reviewed-by: Heikki Krogerus
+Reviewed-by: Hans de Goede
+Signed-off-by: Andy Shevchenko
+Link: https://lore.kernel.org/r/20220620104316.57592-1-andriy.shevchenko@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman
+Signed-off-by: Sasha Levin
+---
+ drivers/usb/typec/tcpm/Kconfig | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/drivers/usb/typec/tcpm/Kconfig b/drivers/usb/typec/tcpm/Kconfig
+index 557f392fe24d..073fd2ea5e0b 100644
+--- a/drivers/usb/typec/tcpm/Kconfig
++++ b/drivers/usb/typec/tcpm/Kconfig
+@@ -56,7 +56,6 @@ config TYPEC_WCOVE
+ 	tristate "Intel WhiskeyCove PMIC USB Type-C PHY driver"
+ 	depends on ACPI
+ 	depends on MFD_INTEL_PMC_BXT
+-	depends on INTEL_SOC_PMIC
+ 	depends on BXT_WC_PMIC_OPREGION
+ 	help
+ 	  This driver adds support for USB Type-C on Intel Broxton platforms
+--
+2.35.1
+
diff --git a/queue-5.10/virtio_net-fix-xdp_rxq_info-bug-after-suspend-resume.patch b/queue-5.10/virtio_net-fix-xdp_rxq_info-bug-after-suspend-resume.patch
new file mode 100644
index 00000000000..5a6c3be2bc0
--- /dev/null
+++ b/queue-5.10/virtio_net-fix-xdp_rxq_info-bug-after-suspend-resume.patch
@@ -0,0 +1,115 @@
+From ff4bfb3156f4432f4549fb635a52f5424f2fde18 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 21 Jun 2022 13:48:44 +0200
+Subject: virtio_net: fix xdp_rxq_info bug after suspend/resume
+
+From: Stephan Gerhold
+
+[ Upstream commit 8af52fe9fd3bf5e7478da99193c0632276e1dfce ]
+
+The following sequence currently causes a driver bug warning
+when using virtio_net:
+
+ # ip link set eth0 up
+ # echo mem > /sys/power/state (or e.g. # rtcwake -s 10 -m mem)
+ # ip link set eth0 down
+
+ Missing register, driver bug
+ WARNING: CPU: 0 PID: 375 at net/core/xdp.c:138 xdp_rxq_info_unreg+0x58/0x60
+ Call trace:
+ xdp_rxq_info_unreg+0x58/0x60
+ virtnet_close+0x58/0xac
+ __dev_close_many+0xac/0x140
+ __dev_change_flags+0xd8/0x210
+ dev_change_flags+0x24/0x64
+ do_setlink+0x230/0xdd0
+ ...
+
+This happens because virtnet_freeze() frees the receive_queue
+completely (including struct xdp_rxq_info) but does not call
+xdp_rxq_info_unreg(). Similarly, virtnet_restore() sets up the
+receive_queue again but does not call xdp_rxq_info_reg().
+
+Actually, parts of virtnet_freeze_down() and virtnet_restore_up()
+are almost identical to virtnet_close() and virtnet_open(): only
+the calls to xdp_rxq_info_(un)reg() are missing. This means that
+we can fix this easily and avoid such problems in the future by
+just calling virtnet_close()/open() from the freeze/restore handlers.
+
+Aside from adding the missing xdp_rxq_info calls the only difference
+is that the refill work is only cancelled if netif_running(). However,
+this should not make any functional difference since the refill work
+should only be active if the network interface is actually up.
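[Editor's aside, separate from the quoted patch: the core idea, routing freeze/restore through the same open/close pair so register/unregister calls stay balanced, can be modeled abstractly. This is a toy illustration with invented names, not the virtio_net driver code.]

```c
#include <assert.h>

/* Toy model: the pre-patch freeze path tore the queues down without
 * unregistering, so a later close unregistered an rxq that was no longer
 * registered (the "Missing register, driver bug" warning above). Reusing
 * one open/close pair keeps the registration state balanced. */
struct nic_model { int rxq_registered; };

static void model_open(struct nic_model *n)  { n->rxq_registered = 1; } /* stands in for xdp_rxq_info_reg() */
static void model_close(struct nic_model *n) { n->rxq_registered = 0; } /* stands in for xdp_rxq_info_unreg() */

/* Patched behavior: freeze/restore simply reuse close/open. */
static void model_freeze(struct nic_model *n)  { model_close(n); }
static void model_restore(struct nic_model *n) { model_open(n); }
```

With this shape, every suspend/resume cycle leaves the device in exactly the state an up/down cycle would, so no path can unregister twice or skip a registration.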
+
+Fixes: 754b8a21a96d ("virtio_net: setup xdp_rxq_info")
+Signed-off-by: Stephan Gerhold
+Acked-by: Jesper Dangaard Brouer
+Acked-by: Jason Wang
+Link: https://lore.kernel.org/r/20220621114845.3650258-1-stephan.gerhold@kernkonzept.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ drivers/net/virtio_net.c | 25 ++++++-------------------
+ 1 file changed, 6 insertions(+), 19 deletions(-)
+
+diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
+index cbe47eed7cc3..ad9064df3deb 100644
+--- a/drivers/net/virtio_net.c
++++ b/drivers/net/virtio_net.c
+@@ -2366,7 +2366,6 @@ static const struct ethtool_ops virtnet_ethtool_ops = {
+ static void virtnet_freeze_down(struct virtio_device *vdev)
+ {
+ 	struct virtnet_info *vi = vdev->priv;
+-	int i;
+
+ 	/* Make sure no work handler is accessing the device */
+ 	flush_work(&vi->config_work);
+@@ -2374,14 +2373,8 @@ static void virtnet_freeze_down(struct virtio_device *vdev)
+ 	netif_tx_lock_bh(vi->dev);
+ 	netif_device_detach(vi->dev);
+ 	netif_tx_unlock_bh(vi->dev);
+-	cancel_delayed_work_sync(&vi->refill);
+-
+-	if (netif_running(vi->dev)) {
+-		for (i = 0; i < vi->max_queue_pairs; i++) {
+-			napi_disable(&vi->rq[i].napi);
+-			virtnet_napi_tx_disable(&vi->sq[i].napi);
+-		}
+-	}
++	if (netif_running(vi->dev))
++		virtnet_close(vi->dev);
+ }
+
+ static int init_vqs(struct virtnet_info *vi);
+@@ -2389,7 +2382,7 @@ static int init_vqs(struct virtnet_info *vi);
+ static int virtnet_restore_up(struct virtio_device *vdev)
+ {
+ 	struct virtnet_info *vi = vdev->priv;
+-	int err, i;
++	int err;
+
+ 	err = init_vqs(vi);
+ 	if (err)
+@@ -2398,15 +2391,9 @@ static int virtnet_restore_up(struct virtio_device *vdev)
+ 	virtio_device_ready(vdev);
+
+ 	if (netif_running(vi->dev)) {
+-		for (i = 0; i < vi->curr_queue_pairs; i++)
+-			if (!try_fill_recv(vi, &vi->rq[i], GFP_KERNEL))
+-				schedule_delayed_work(&vi->refill, 0);
+-
+-		for (i = 0; i < vi->max_queue_pairs; i++) {
+-			virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
+-			virtnet_napi_tx_enable(vi, vi->sq[i].vq,
+-					       &vi->sq[i].napi);
+-		}
++		err = virtnet_open(vi->dev);
++		if (err)
++			return err;
+ 	}
+
+ 	netif_tx_lock_bh(vi->dev);
+--
+2.35.1
+
diff --git a/queue-5.10/x86-xen-remove-undefined-behavior-in-setup_features.patch b/queue-5.10/x86-xen-remove-undefined-behavior-in-setup_features.patch
new file mode 100644
index 00000000000..9ee6d69727b
--- /dev/null
+++ b/queue-5.10/x86-xen-remove-undefined-behavior-in-setup_features.patch
@@ -0,0 +1,36 @@
+From f62a5b96fee94ee9b55b7d65e55cdb57b3038064 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 17 Jun 2022 11:30:37 +0100
+Subject: x86/xen: Remove undefined behavior in setup_features()
+
+From: Julien Grall
+
+[ Upstream commit ecb6237fa397b7b810d798ad19322eca466dbab1 ]
+
+1 << 31 is undefined. So switch to 1U << 31.
+
+Fixes: 5ead97c84fa7 ("xen: Core Xen implementation")
+Signed-off-by: Julien Grall
+Reviewed-by: Juergen Gross
+Link: https://lore.kernel.org/r/20220617103037.57828-1-julien@xen.org
+Signed-off-by: Juergen Gross
+Signed-off-by: Sasha Levin
+---
+ drivers/xen/features.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/xen/features.c b/drivers/xen/features.c
+index 25c053b09605..2c306de228db 100644
+--- a/drivers/xen/features.c
++++ b/drivers/xen/features.c
+@@ -29,6 +29,6 @@ void xen_setup_features(void)
+ 		if (HYPERVISOR_xen_version(XENVER_get_features, &fi) < 0)
+ 			break;
+ 		for (j = 0; j < 32; j++)
+-			xen_features[i * 32 + j] = !!(fi.submap & 1<