From: Sasha Levin
Date: Fri, 10 May 2024 21:35:07 +0000 (-0400)
Subject: Fixes for 5.15
X-Git-Tag: v4.19.314~98
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=03f5b097170f751e457a748cf529f6a258c41fed;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 5.15

Signed-off-by: Sasha Levin
---

diff --git a/queue-5.15/arm-9381-1-kasan-clear-stale-stack-poison.patch b/queue-5.15/arm-9381-1-kasan-clear-stale-stack-poison.patch
new file mode 100644
index 00000000000..6d6deeea720
--- /dev/null
+++ b/queue-5.15/arm-9381-1-kasan-clear-stale-stack-poison.patch
@@ -0,0 +1,116 @@
+From 5478b1aba1aa88262612e64325e2e815977c6fec Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 15 Apr 2024 05:21:55 +0100
+Subject: ARM: 9381/1: kasan: clear stale stack poison
+
+From: Boy.Wu
+
+[ Upstream commit c4238686f9093b98bd6245a348bcf059cdce23af ]
+
+We found the following OOB crash:
+
+[ 33.452494] ==================================================================
+[ 33.453513] BUG: KASAN: stack-out-of-bounds in refresh_cpu_vm_stats.constprop.0+0xcc/0x2ec
+[ 33.454660] Write of size 164 at addr c1d03d30 by task swapper/0/0
+[ 33.455515]
+[ 33.455767] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G O 6.1.25-mainline #1
+[ 33.456880] Hardware name: Generic DT based system
+[ 33.457555] unwind_backtrace from show_stack+0x18/0x1c
+[ 33.458326] show_stack from dump_stack_lvl+0x40/0x4c
+[ 33.459072] dump_stack_lvl from print_report+0x158/0x4a4
+[ 33.459863] print_report from kasan_report+0x9c/0x148
+[ 33.460616] kasan_report from kasan_check_range+0x94/0x1a0
+[ 33.461424] kasan_check_range from memset+0x20/0x3c
+[ 33.462157] memset from refresh_cpu_vm_stats.constprop.0+0xcc/0x2ec
+[ 33.463064] refresh_cpu_vm_stats.constprop.0 from tick_nohz_idle_stop_tick+0x180/0x53c
+[ 33.464181] tick_nohz_idle_stop_tick from do_idle+0x264/0x354
+[ 33.465029] do_idle from cpu_startup_entry+0x20/0x24
+[ 33.465769] cpu_startup_entry from rest_init+0xf0/0xf4
+[ 33.466528] rest_init from arch_post_acpi_subsys_init+0x0/0x18
+[ 33.467397]
+[ 33.467644] The buggy address belongs to stack of task swapper/0/0
+[ 33.468493] and is located at offset 112 in frame:
+[ 33.469172] refresh_cpu_vm_stats.constprop.0+0x0/0x2ec
+[ 33.469917]
+[ 33.470165] This frame has 2 objects:
+[ 33.470696] [32, 76) 'global_zone_diff'
+[ 33.470729] [112, 276) 'global_node_diff'
+[ 33.471294]
+[ 33.472095] The buggy address belongs to the physical page:
+[ 33.472862] page:3cd72da8 refcount:1 mapcount:0 mapping:00000000 index:0x0 pfn:0x41d03
+[ 33.473944] flags: 0x1000(reserved|zone=0)
+[ 33.474565] raw: 00001000 ed741470 ed741470 00000000 00000000 00000000 ffffffff 00000001
+[ 33.475656] raw: 00000000
+[ 33.476050] page dumped because: kasan: bad access detected
+[ 33.476816]
+[ 33.477061] Memory state around the buggy address:
+[ 33.477732] c1d03c00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
+[ 33.478630] c1d03c80: 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00 00 00 00
+[ 33.479526] >c1d03d00: 00 04 f2 f2 f2 f2 00 00 00 00 00 00 f1 f1 f1 f1
+[ 33.480415] ^
+[ 33.481195] c1d03d80: 00 00 00 00 00 00 00 00 00 00 04 f3 f3 f3 f3 f3
+[ 33.482088] c1d03e00: f3 f3 f3 f3 00 00 00 00 00 00 00 00 00 00 00 00
+[ 33.482978] ==================================================================
+
+We found that the root cause of this OOB is that arm does not clear
+stale stack poison in the case of cpuidle.
+
+This patch refers to arch/arm64/kernel/sleep.S to resolve this issue.
+
+From cited commit [1], which explains the problem:
+
+Functions which the compiler has instrumented for KASAN place poison on
+the stack shadow upon entry and remove this poison prior to returning.
+
+In the case of cpuidle, CPUs exit the kernel a number of levels deep in
+C code. Any instrumented functions on this critical path will leave
+portions of the stack shadow poisoned.
+
+If CPUs lose context and return to the kernel via a cold path, we
+restore a prior context saved in __cpu_suspend_enter, but the poison
+placed in the stack shadow area by function calls between this and the
+actual exit of the kernel is forgotten and never removed.
+
+Thus, (depending on stackframe layout) subsequent calls to instrumented
+functions may hit this stale poison, resulting in (spurious) KASAN
+splats to the console.
+
+To avoid this, clear any stale poison from the idle thread for a CPU
+prior to bringing a CPU online.
+
+From cited commit [2]:
+
+Extend to check for CONFIG_KASAN_STACK.
+
+[1] commit 0d97e6d8024c ("arm64: kasan: clear stale stack poison")
+[2] commit d56a9ef84bd0 ("kasan, arm64: unpoison stack only with CONFIG_KASAN_STACK")
+
+Signed-off-by: Boy Wu
+Reviewed-by: Mark Rutland
+Acked-by: Andrey Ryabinin
+Reviewed-by: Linus Walleij
+Fixes: 5615f69bc209 ("ARM: 9016/2: Initialize the mapping of KASan shadow memory")
+Signed-off-by: Russell King (Oracle)
+Signed-off-by: Sasha Levin
+---
+ arch/arm/kernel/sleep.S | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/arch/arm/kernel/sleep.S b/arch/arm/kernel/sleep.S
+index 43077e11dafda..2acf880fcc344 100644
+--- a/arch/arm/kernel/sleep.S
++++ b/arch/arm/kernel/sleep.S
+@@ -114,6 +114,10 @@ ENDPROC(cpu_resume_mmu)
+ .popsection
+ cpu_resume_after_mmu:
+ bl cpu_init @ restore the und/abt/irq banked regs
++#if defined(CONFIG_KASAN) && defined(CONFIG_KASAN_STACK)
++ mov r0, sp
++ bl kasan_unpoison_task_stack_below
++#endif
+ mov r0, #0 @ return zero on success
+ ldmfd sp!, {r4 - r11, pc}
+ ENDPROC(cpu_resume_after_mmu)
+--
+2.43.0
+
diff --git a/queue-5.15/bluetooth-fix-use-after-free-bugs-caused-by-sco_sock.patch b/queue-5.15/bluetooth-fix-use-after-free-bugs-caused-by-sco_sock.patch
new file mode 100644
index 00000000000..d30c003ac01
--- /dev/null
+++ b/queue-5.15/bluetooth-fix-use-after-free-bugs-caused-by-sco_sock.patch
@@ -0,0 +1,145 @@
+From 8b1c24a11acc593c3ae357612761f9ffeda57e64 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 25 Apr 2024 22:23:45 +0800
+Subject: Bluetooth: Fix use-after-free bugs caused by sco_sock_timeout
+
+From: Duoming Zhou
+
+[ Upstream commit 483bc08181827fc475643272ffb69c533007e546 ]
+
+When the SCO connection has been established and the SCO socket is
+then being released, timeout_work is scheduled to check whether the
+SCO disconnection has timed out. The sock will be deallocated later,
+but it is dereferenced again in sco_sock_timeout. As a result,
+use-after-free bugs will happen. The root cause is shown below:
+
+ Cleanup Thread           | Worker Thread
+ sco_sock_release         |
+  sco_sock_close          |
+   __sco_sock_close       |
+    sco_sock_set_timer    |
+     schedule_delayed_work|
+   sco_sock_kill          | (wait a time)
+    sock_put(sk) //FREE   | sco_sock_timeout
+                          |  sock_hold(sk) //USE
+
+The KASAN report triggered by POC is shown below:
+
+[ 95.890016] ==================================================================
+[ 95.890496] BUG: KASAN: slab-use-after-free in sco_sock_timeout+0x5e/0x1c0
+[ 95.890755] Write of size 4 at addr ffff88800c388080 by task kworker/0:0/7
+...
+[ 95.890755] Workqueue: events sco_sock_timeout
+[ 95.890755] Call Trace:
+[ 95.890755]
+[ 95.890755] dump_stack_lvl+0x45/0x110
+[ 95.890755] print_address_description+0x78/0x390
+[ 95.890755] print_report+0x11b/0x250
+[ 95.890755] ? __virt_addr_valid+0xbe/0xf0
+[ 95.890755] ? sco_sock_timeout+0x5e/0x1c0
+[ 95.890755] kasan_report+0x139/0x170
+[ 95.890755] ? update_load_avg+0xe5/0x9f0
+[ 95.890755] ? sco_sock_timeout+0x5e/0x1c0
+[ 95.890755] kasan_check_range+0x2c3/0x2e0
+[ 95.890755] sco_sock_timeout+0x5e/0x1c0
+[ 95.890755] process_one_work+0x561/0xc50
+[ 95.890755] worker_thread+0xab2/0x13c0
+[ 95.890755] ? pr_cont_work+0x490/0x490
+[ 95.890755] kthread+0x279/0x300
+[ 95.890755] ? pr_cont_work+0x490/0x490
+[ 95.890755] ? kthread_blkcg+0xa0/0xa0
+[ 95.890755] ret_from_fork+0x34/0x60
+[ 95.890755] ? kthread_blkcg+0xa0/0xa0
+[ 95.890755] ret_from_fork_asm+0x11/0x20
+[ 95.890755]
+[ 95.890755]
+[ 95.890755] Allocated by task 506:
+[ 95.890755] kasan_save_track+0x3f/0x70
+[ 95.890755] __kasan_kmalloc+0x86/0x90
+[ 95.890755] __kmalloc+0x17f/0x360
+[ 95.890755] sk_prot_alloc+0xe1/0x1a0
+[ 95.890755] sk_alloc+0x31/0x4e0
+[ 95.890755] bt_sock_alloc+0x2b/0x2a0
+[ 95.890755] sco_sock_create+0xad/0x320
+[ 95.890755] bt_sock_create+0x145/0x320
+[ 95.890755] __sock_create+0x2e1/0x650
+[ 95.890755] __sys_socket+0xd0/0x280
+[ 95.890755] __x64_sys_socket+0x75/0x80
+[ 95.890755] do_syscall_64+0xc4/0x1b0
+[ 95.890755] entry_SYSCALL_64_after_hwframe+0x67/0x6f
+[ 95.890755]
+[ 95.890755] Freed by task 506:
+[ 95.890755] kasan_save_track+0x3f/0x70
+[ 95.890755] kasan_save_free_info+0x40/0x50
+[ 95.890755] poison_slab_object+0x118/0x180
+[ 95.890755] __kasan_slab_free+0x12/0x30
+[ 95.890755] kfree+0xb2/0x240
+[ 95.890755] __sk_destruct+0x317/0x410
+[ 95.890755] sco_sock_release+0x232/0x280
+[ 95.890755] sock_close+0xb2/0x210
+[ 95.890755] __fput+0x37f/0x770
+[ 95.890755] task_work_run+0x1ae/0x210
+[ 95.890755] get_signal+0xe17/0xf70
+[ 95.890755] arch_do_signal_or_restart+0x3f/0x520
+[ 95.890755] syscall_exit_to_user_mode+0x55/0x120
+[ 95.890755] do_syscall_64+0xd1/0x1b0
+[ 95.890755] entry_SYSCALL_64_after_hwframe+0x67/0x6f
+[ 95.890755]
+[ 95.890755] The buggy address belongs to the object at ffff88800c388000
+[ 95.890755] which belongs to the cache kmalloc-1k of size 1024
+[ 95.890755] The buggy address is located 128 bytes inside of
+[ 95.890755] freed 1024-byte region [ffff88800c388000, ffff88800c388400)
+[ 95.890755]
+[ 95.890755] The buggy address belongs to the physical page:
+[ 95.890755] page: refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff88800c38a800 pfn:0xc388
+[ 95.890755] head: order:3 entire_mapcount:0 nr_pages_mapped:0 pincount:0
+[ 95.890755] anon flags: 0x100000000000840(slab|head|node=0|zone=1)
+[ 95.890755] page_type: 0xffffffff()
+[ 95.890755] raw: 0100000000000840 ffff888006842dc0 0000000000000000 0000000000000001
+[ 95.890755] raw: ffff88800c38a800 000000000010000a 00000001ffffffff 0000000000000000
+[ 95.890755] head: 0100000000000840 ffff888006842dc0 0000000000000000 0000000000000001
+[ 95.890755] head: ffff88800c38a800 000000000010000a 00000001ffffffff 0000000000000000
+[ 95.890755] head: 0100000000000003 ffffea000030e201 ffffea000030e248 00000000ffffffff
+[ 95.890755] head: 0000000800000000 0000000000000000 00000000ffffffff 0000000000000000
+[ 95.890755] page dumped because: kasan: bad access detected
+[ 95.890755]
+[ 95.890755] Memory state around the buggy address:
+[ 95.890755] ffff88800c387f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
+[ 95.890755] ffff88800c388000: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+[ 95.890755] >ffff88800c388080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+[ 95.890755] ^
+[ 95.890755] ffff88800c388100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+[ 95.890755] ffff88800c388180: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
+[ 95.890755] ==================================================================
+
+Fix this problem by adding a check, protected by sco_conn_lock, that
+tests whether conn->hcon is NULL, because conn->hcon is set to NULL
+when the sock is being released.
+
+Fixes: ba316be1b6a0 ("Bluetooth: schedule SCO timeouts with delayed_work")
+Signed-off-by: Duoming Zhou
+Signed-off-by: Luiz Augusto von Dentz
+Signed-off-by: Sasha Levin
+---
+ net/bluetooth/sco.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/net/bluetooth/sco.c b/net/bluetooth/sco.c
+index 57c6a4f845a32..431e09cac1787 100644
+--- a/net/bluetooth/sco.c
++++ b/net/bluetooth/sco.c
+@@ -83,6 +83,10 @@ static void sco_sock_timeout(struct work_struct *work)
+ struct sock *sk;
+
+ sco_conn_lock(conn);
++ if (!conn->hcon) {
++ sco_conn_unlock(conn);
++ return;
++ }
+ sk = conn->sk;
+ if (sk)
+ sock_hold(sk);
+--
+2.43.0
+
diff --git a/queue-5.15/bluetooth-l2cap-fix-null-ptr-deref-in-l2cap_chan_tim.patch b/queue-5.15/bluetooth-l2cap-fix-null-ptr-deref-in-l2cap_chan_tim.patch
new file mode 100644
index 00000000000..b75b4035ab5
--- /dev/null
+++ b/queue-5.15/bluetooth-l2cap-fix-null-ptr-deref-in-l2cap_chan_tim.patch
@@ -0,0 +1,136 @@
+From 94b3089e7dc2d969a8dcb11a46d139fd224b5e14 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 2 May 2024 20:57:36 +0800
+Subject: Bluetooth: l2cap: fix null-ptr-deref in l2cap_chan_timeout
+
+From: Duoming Zhou
+
+[ Upstream commit adf0398cee86643b8eacde95f17d073d022f782c ]
+
+There is a race condition between l2cap_chan_timeout() and
+l2cap_chan_del(). When we use l2cap_chan_del() to delete the
+channel, chan->conn will be set to NULL. But the conn could
+be dereferenced again in the mutex_lock() of l2cap_chan_timeout().
+As a result, a NULL pointer dereference will happen. The KASAN
+report triggered by POC is shown below:
+
+[ 472.074580] ==================================================================
+[ 472.075284] BUG: KASAN: null-ptr-deref in mutex_lock+0x68/0xc0
+[ 472.075308] Write of size 8 at addr 0000000000000158 by task kworker/0:0/7
+[ 472.075308]
+[ 472.075308] CPU: 0 PID: 7 Comm: kworker/0:0 Not tainted 6.9.0-rc5-00356-g78c0094a146b #36
+[ 472.075308] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu4
+[ 472.075308] Workqueue: events l2cap_chan_timeout
+[ 472.075308] Call Trace:
+[ 472.075308]
+[ 472.075308] dump_stack_lvl+0x137/0x1a0
+[ 472.075308] print_report+0x101/0x250
+[ 472.075308] ? __virt_addr_valid+0x77/0x160
+[ 472.075308] ? mutex_lock+0x68/0xc0
+[ 472.075308] kasan_report+0x139/0x170
+[ 472.075308] ? mutex_lock+0x68/0xc0
+[ 472.075308] kasan_check_range+0x2c3/0x2e0
+[ 472.075308] mutex_lock+0x68/0xc0
+[ 472.075308] l2cap_chan_timeout+0x181/0x300
+[ 472.075308] process_one_work+0x5d2/0xe00
+[ 472.075308] worker_thread+0xe1d/0x1660
+[ 472.075308] ? pr_cont_work+0x5e0/0x5e0
+[ 472.075308] kthread+0x2b7/0x350
+[ 472.075308] ? pr_cont_work+0x5e0/0x5e0
+[ 472.075308] ? kthread_blkcg+0xd0/0xd0
+[ 472.075308] ret_from_fork+0x4d/0x80
+[ 472.075308] ? kthread_blkcg+0xd0/0xd0
+[ 472.075308] ret_from_fork_asm+0x11/0x20
+[ 472.075308]
+[ 472.075308] ==================================================================
+[ 472.094860] Disabling lock debugging due to kernel taint
+[ 472.096136] BUG: kernel NULL pointer dereference, address: 0000000000000158
+[ 472.096136] #PF: supervisor write access in kernel mode
+[ 472.096136] #PF: error_code(0x0002) - not-present page
+[ 472.096136] PGD 0 P4D 0
+[ 472.096136] Oops: 0002 [#1] PREEMPT SMP KASAN NOPTI
+[ 472.096136] CPU: 0 PID: 7 Comm: kworker/0:0 Tainted: G B 6.9.0-rc5-00356-g78c0094a146b #36
+[ 472.096136] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu4
+[ 472.096136] Workqueue: events l2cap_chan_timeout
+[ 472.096136] RIP: 0010:mutex_lock+0x88/0xc0
+[ 472.096136] Code: be 08 00 00 00 e8 f8 23 1f fd 4c 89 f7 be 08 00 00 00 e8 eb 23 1f fd 42 80 3c 23 00 74 08 48 88
+[ 472.096136] RSP: 0018:ffff88800744fc78 EFLAGS: 00000246
+[ 472.096136] RAX: 0000000000000000 RBX: 1ffff11000e89f8f RCX: ffffffff8457c865
+[ 472.096136] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff88800744fc78
+[ 472.096136] RBP: 0000000000000158 R08: ffff88800744fc7f R09: 1ffff11000e89f8f
+[ 472.096136] R10: dffffc0000000000 R11: ffffed1000e89f90 R12: dffffc0000000000
+[ 472.096136] R13: 0000000000000158 R14: ffff88800744fc78 R15: ffff888007405a00
+[ 472.096136] FS: 0000000000000000(0000) GS:ffff88806d200000(0000) knlGS:0000000000000000
+[ 472.096136] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 472.096136] CR2: 0000000000000158 CR3: 000000000da32000 CR4: 00000000000006f0
+[ 472.096136] Call Trace:
+[ 472.096136]
+[ 472.096136] ? __die_body+0x8d/0xe0
+[ 472.096136] ? page_fault_oops+0x6b8/0x9a0
+[ 472.096136] ? kernelmode_fixup_or_oops+0x20c/0x2a0
+[ 472.096136] ? do_user_addr_fault+0x1027/0x1340
+[ 472.096136] ? _printk+0x7a/0xa0
+[ 472.096136] ? mutex_lock+0x68/0xc0
+[ 472.096136] ? add_taint+0x42/0xd0
+[ 472.096136] ? exc_page_fault+0x6a/0x1b0
+[ 472.096136] ? asm_exc_page_fault+0x26/0x30
+[ 472.096136] ? mutex_lock+0x75/0xc0
+[ 472.096136] ? mutex_lock+0x88/0xc0
+[ 472.096136] ? mutex_lock+0x75/0xc0
+[ 472.096136] l2cap_chan_timeout+0x181/0x300
+[ 472.096136] process_one_work+0x5d2/0xe00
+[ 472.096136] worker_thread+0xe1d/0x1660
+[ 472.096136] ? pr_cont_work+0x5e0/0x5e0
+[ 472.096136] kthread+0x2b7/0x350
+[ 472.096136] ? pr_cont_work+0x5e0/0x5e0
+[ 472.096136] ? kthread_blkcg+0xd0/0xd0
+[ 472.096136] ret_from_fork+0x4d/0x80
+[ 472.096136] ? kthread_blkcg+0xd0/0xd0
+[ 472.096136] ret_from_fork_asm+0x11/0x20
+[ 472.096136]
+[ 472.096136] Modules linked in:
+[ 472.096136] CR2: 0000000000000158
+[ 472.096136] ---[ end trace 0000000000000000 ]---
+[ 472.096136] RIP: 0010:mutex_lock+0x88/0xc0
+[ 472.096136] Code: be 08 00 00 00 e8 f8 23 1f fd 4c 89 f7 be 08 00 00 00 e8 eb 23 1f fd 42 80 3c 23 00 74 08 48 88
+[ 472.096136] RSP: 0018:ffff88800744fc78 EFLAGS: 00000246
+[ 472.096136] RAX: 0000000000000000 RBX: 1ffff11000e89f8f RCX: ffffffff8457c865
+[ 472.096136] RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff88800744fc78
+[ 472.096136] RBP: 0000000000000158 R08: ffff88800744fc7f R09: 1ffff11000e89f8f
+[ 472.132932] R10: dffffc0000000000 R11: ffffed1000e89f90 R12: dffffc0000000000
+[ 472.132932] R13: 0000000000000158 R14: ffff88800744fc78 R15: ffff888007405a00
+[ 472.132932] FS: 0000000000000000(0000) GS:ffff88806d200000(0000) knlGS:0000000000000000
+[ 472.132932] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+[ 472.132932] CR2: 0000000000000158 CR3: 000000000da32000 CR4: 00000000000006f0
+[ 472.132932] Kernel panic - not syncing: Fatal exception
+[ 472.132932] Kernel Offset: disabled
+[ 472.132932] ---[ end Kernel panic - not syncing: Fatal exception ]---
+
+Add a check in l2cap_chan_timeout() for whether conn is NULL,
+in order to mitigate the bug.
+
+Fixes: 3df91ea20e74 ("Bluetooth: Revert to mutexes from RCU list")
+Signed-off-by: Duoming Zhou
+Signed-off-by: Luiz Augusto von Dentz
+Signed-off-by: Sasha Levin
+---
+ net/bluetooth/l2cap_core.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/net/bluetooth/l2cap_core.c b/net/bluetooth/l2cap_core.c
+index 11bfc8737e6ce..900b352975856 100644
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -435,6 +435,9 @@ static void l2cap_chan_timeout(struct work_struct *work)
+
+ BT_DBG("chan %p state %s", chan, state_to_string(chan->state));
+
++ if (!conn)
++ return;
++
+ mutex_lock(&conn->chan_lock);
+ /* __set_chan_timer() calls l2cap_chan_hold(chan) while scheduling
+ * this work. No need to call l2cap_chan_hold(chan) here again.
+--
+2.43.0
+
diff --git a/queue-5.15/hwmon-corsair-cpro-protect-ccp-wait_input_report-wit.patch b/queue-5.15/hwmon-corsair-cpro-protect-ccp-wait_input_report-wit.patch
new file mode 100644
index 00000000000..76a3d15d1bf
--- /dev/null
+++ b/queue-5.15/hwmon-corsair-cpro-protect-ccp-wait_input_report-wit.patch
@@ -0,0 +1,96 @@
+From 484603fee7a3786659318d67e00fab370692bd9c Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 4 May 2024 11:25:03 +0200
+Subject: hwmon: (corsair-cpro) Protect ccp->wait_input_report with a spinlock
+
+From: Aleksa Savic
+
+[ Upstream commit d02abd57e79469a026213f7f5827a98d909f236a ]
+
+Through hidraw, userspace can cause a status report to be sent
+from the device. The parsing in ccp_raw_event() may happen in
+parallel to a send_usb_cmd() call (which resets the completion
+for tracking the report) if it's running on a different CPU where
+bottom half interrupts are not disabled.
+
+Add a spinlock around the complete_all() in ccp_raw_event() and
+reinit_completion() in send_usb_cmd() to prevent race issues.
+
+Fixes: 40c3a4454225 ("hwmon: add Corsair Commander Pro driver")
+Signed-off-by: Aleksa Savic
+Acked-by: Marius Zachmann
+Link: https://lore.kernel.org/r/20240504092504.24158-4-savicaleksa83@gmail.com
+Signed-off-by: Guenter Roeck
+Signed-off-by: Sasha Levin
+---
+ drivers/hwmon/corsair-cpro.c | 24 +++++++++++++++++++-----
+ 1 file changed, 19 insertions(+), 5 deletions(-)
+
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index 543a741fe5473..486fb6a8c3566 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -16,6 +16,7 @@
+ #include
+ #include
+ #include
++#include
+ #include
+
+ #define USB_VENDOR_ID_CORSAIR 0x1b1c
+@@ -77,6 +78,8 @@
+ struct ccp_device {
+ struct hid_device *hdev;
+ struct device *hwmon_dev;
++ /* For reinitializing the completion below */
++ spinlock_t wait_input_report_lock;
+ struct completion wait_input_report;
+ struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
+ u8 *cmd_buffer;
+@@ -118,7 +121,15 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
+ ccp->cmd_buffer[2] = byte2;
+ ccp->cmd_buffer[3] = byte3;
+
++ /*
++ * Disable raw event parsing for a moment to safely reinitialize the
++ * completion. Reinit is done because hidraw could have triggered
++ * the raw event parsing and marked the ccp->wait_input_report
++ * completion as done.
++ */
++ spin_lock_bh(&ccp->wait_input_report_lock);
+ reinit_completion(&ccp->wait_input_report);
++ spin_unlock_bh(&ccp->wait_input_report_lock);
+
+ ret = hid_hw_output_report(ccp->hdev, ccp->cmd_buffer, OUT_BUFFER_SIZE);
+ if (ret < 0)
+@@ -136,11 +147,12 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
+ struct ccp_device *ccp = hid_get_drvdata(hdev);
+
+ /* only copy buffer when requested */
+- if (completion_done(&ccp->wait_input_report))
+- return 0;
+-
+- memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
+- complete_all(&ccp->wait_input_report);
++ spin_lock(&ccp->wait_input_report_lock);
++ if (!completion_done(&ccp->wait_input_report)) {
++ memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
++ complete_all(&ccp->wait_input_report);
++ }
++ spin_unlock(&ccp->wait_input_report_lock);
+
+ return 0;
+ }
+@@ -515,7 +527,9 @@ static int ccp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+
+ ccp->hdev = hdev;
+ hid_set_drvdata(hdev, ccp);
++
+ mutex_init(&ccp->mutex);
++ spin_lock_init(&ccp->wait_input_report_lock);
+ init_completion(&ccp->wait_input_report);
+
+ hid_device_io_start(hdev);
+--
+2.43.0
+
diff --git a/queue-5.15/hwmon-corsair-cpro-use-a-separate-buffer-for-sending.patch b/queue-5.15/hwmon-corsair-cpro-use-a-separate-buffer-for-sending.patch
new file mode 100644
index 00000000000..ce18a1c204f
--- /dev/null
+++ b/queue-5.15/hwmon-corsair-cpro-use-a-separate-buffer-for-sending.patch
@@ -0,0 +1,78 @@
+From 146ad7690f44220a332b8f402665ae92cc0f3cf4 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 4 May 2024 11:25:01 +0200
+Subject: hwmon: (corsair-cpro) Use a separate buffer for sending commands
+
+From: Aleksa Savic
+
+[ Upstream commit e0cd85dc666cb08e1bd313d560cb4eff4d04219e ]
+
+Introduce cmd_buffer, a separate buffer for storing only
+the command that is sent to the device.
+Before this separation, the existing buffer was shared for both the
+command and the report received in ccp_raw_event(), which was copied
+into it.
+
+However, because of hidraw, the raw event parsing may be triggered
+in the middle of sending a command, resulting in gibberish being
+sent to the device. Using a separate buffer resolves this.
+
+Fixes: 40c3a4454225 ("hwmon: add Corsair Commander Pro driver")
+Signed-off-by: Aleksa Savic
+Acked-by: Marius Zachmann
+Link: https://lore.kernel.org/r/20240504092504.24158-2-savicaleksa83@gmail.com
+Signed-off-by: Guenter Roeck
+Signed-off-by: Sasha Levin
+---
+ drivers/hwmon/corsair-cpro.c | 19 ++++++++++++-------
+ 1 file changed, 12 insertions(+), 7 deletions(-)
+
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index fa6aa4fc8b521..0a9cbb556188f 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -79,6 +79,7 @@ struct ccp_device {
+ struct device *hwmon_dev;
+ struct completion wait_input_report;
+ struct mutex mutex; /* whenever buffer is used, lock before send_usb_cmd */
++ u8 *cmd_buffer;
+ u8 *buffer;
+ int target[6];
+ DECLARE_BITMAP(temp_cnct, NUM_TEMP_SENSORS);
+@@ -111,15 +112,15 @@ static int send_usb_cmd(struct ccp_device *ccp, u8 command, u8 byte1, u8 byte2,
+ unsigned long t;
+ int ret;
+
+- memset(ccp->buffer, 0x00, OUT_BUFFER_SIZE);
+- ccp->buffer[0] = command;
+- ccp->buffer[1] = byte1;
+- ccp->buffer[2] = byte2;
+- ccp->buffer[3] = byte3;
++ memset(ccp->cmd_buffer, 0x00, OUT_BUFFER_SIZE);
++ ccp->cmd_buffer[0] = command;
++ ccp->cmd_buffer[1] = byte1;
++ ccp->cmd_buffer[2] = byte2;
++ ccp->cmd_buffer[3] = byte3;
+
+ reinit_completion(&ccp->wait_input_report);
+
+- ret = hid_hw_output_report(ccp->hdev, ccp->buffer, OUT_BUFFER_SIZE);
++ ret = hid_hw_output_report(ccp->hdev, ccp->cmd_buffer, OUT_BUFFER_SIZE);
+ if (ret < 0)
+ return ret;
+
+@@ -492,7 +493,11 @@ static int ccp_probe(struct hid_device *hdev, const struct hid_device_id *id)
+ if (!ccp)
+ return -ENOMEM;
+
+- ccp->buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL);
++ ccp->cmd_buffer = devm_kmalloc(&hdev->dev, OUT_BUFFER_SIZE, GFP_KERNEL);
++ if (!ccp->cmd_buffer)
++ return -ENOMEM;
++
++ ccp->buffer = devm_kmalloc(&hdev->dev, IN_BUFFER_SIZE, GFP_KERNEL);
+ if (!ccp->buffer)
+ return -ENOMEM;
+
+--
+2.43.0
+
diff --git a/queue-5.15/hwmon-corsair-cpro-use-complete_all-instead-of-compl.patch b/queue-5.15/hwmon-corsair-cpro-use-complete_all-instead-of-compl.patch
new file mode 100644
index 00000000000..0c8f954306e
--- /dev/null
+++ b/queue-5.15/hwmon-corsair-cpro-use-complete_all-instead-of-compl.patch
@@ -0,0 +1,41 @@
+From 4741a90a04d9e742d4c4d76277aca08b6aeb1712 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 4 May 2024 11:25:02 +0200
+Subject: hwmon: (corsair-cpro) Use complete_all() instead of complete() in
+ ccp_raw_event()
+
+From: Aleksa Savic
+
+[ Upstream commit 3a034a7b0715eb51124a5263890b1ed39978ed3a ]
+
+In ccp_raw_event(), the ccp->wait_input_report completion is
+completed once. Since we're waiting for exactly one report in
+send_usb_cmd(), use complete_all() instead of complete()
+to mark the completion as spent.
+
+Fixes: 40c3a4454225 ("hwmon: add Corsair Commander Pro driver")
+Signed-off-by: Aleksa Savic
+Acked-by: Marius Zachmann
+Link: https://lore.kernel.org/r/20240504092504.24158-3-savicaleksa83@gmail.com
+Signed-off-by: Guenter Roeck
+Signed-off-by: Sasha Levin
+---
+ drivers/hwmon/corsair-cpro.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drivers/hwmon/corsair-cpro.c b/drivers/hwmon/corsair-cpro.c
+index 0a9cbb556188f..543a741fe5473 100644
+--- a/drivers/hwmon/corsair-cpro.c
++++ b/drivers/hwmon/corsair-cpro.c
+@@ -140,7 +140,7 @@ static int ccp_raw_event(struct hid_device *hdev, struct hid_report *report, u8
+ return 0;
+
+ memcpy(ccp->buffer, data, min(IN_BUFFER_SIZE, size));
+- complete(&ccp->wait_input_report);
++ complete_all(&ccp->wait_input_report);
+
+ return 0;
+ }
+--
+2.43.0
+
diff --git a/queue-5.15/ipv6-fib6_rules-avoid-possible-null-dereference-in-f.patch b/queue-5.15/ipv6-fib6_rules-avoid-possible-null-dereference-in-f.patch
new file mode 100644
index 00000000000..7df7f7fce1f
--- /dev/null
+++ b/queue-5.15/ipv6-fib6_rules-avoid-possible-null-dereference-in-f.patch
@@ -0,0 +1,92 @@
+From 907d5745abc6723cfd39066142199b5c66d9ef24 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 7 May 2024 16:31:45 +0000
+Subject: ipv6: fib6_rules: avoid possible NULL dereference in
+ fib6_rule_action()
+
+From: Eric Dumazet
+
+[ Upstream commit d101291b2681e5ab938554e3e323f7a7ee33e3aa ]
+
+syzbot is able to trigger the following crash [1],
+caused by unsafe ip6_dst_idev() use.
+
+Indeed ip6_dst_idev() can return NULL, and must always be checked.
+ +[1] + +Oops: general protection fault, probably for non-canonical address 0xdffffc0000000000: 0000 [#1] PREEMPT SMP KASAN PTI +KASAN: null-ptr-deref in range [0x0000000000000000-0x0000000000000007] +CPU: 0 PID: 31648 Comm: syz-executor.0 Not tainted 6.9.0-rc4-next-20240417-syzkaller #0 +Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024 + RIP: 0010:__fib6_rule_action net/ipv6/fib6_rules.c:237 [inline] + RIP: 0010:fib6_rule_action+0x241/0x7b0 net/ipv6/fib6_rules.c:267 +Code: 02 00 00 49 8d 9f d8 00 00 00 48 89 d8 48 c1 e8 03 42 80 3c 20 00 74 08 48 89 df e8 f9 32 bf f7 48 8b 1b 48 89 d8 48 c1 e8 03 <42> 80 3c 20 00 74 08 48 89 df e8 e0 32 bf f7 4c 8b 03 48 89 ef 4c +RSP: 0018:ffffc9000fc1f2f0 EFLAGS: 00010246 +RAX: 0000000000000000 RBX: 0000000000000000 RCX: 1a772f98c8186700 +RDX: 0000000000000003 RSI: ffffffff8bcac4e0 RDI: ffffffff8c1f9760 +RBP: ffff8880673fb980 R08: ffffffff8fac15ef R09: 1ffffffff1f582bd +R10: dffffc0000000000 R11: fffffbfff1f582be R12: dffffc0000000000 +R13: 0000000000000080 R14: ffff888076509000 R15: ffff88807a029a00 +FS: 00007f55e82ca6c0(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 0000001b31d23000 CR3: 0000000022b66000 CR4: 00000000003506f0 +DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 +DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 +Call Trace: + + fib_rules_lookup+0x62c/0xdb0 net/core/fib_rules.c:317 + fib6_rule_lookup+0x1fd/0x790 net/ipv6/fib6_rules.c:108 + ip6_route_output_flags_noref net/ipv6/route.c:2637 [inline] + ip6_route_output_flags+0x38e/0x610 net/ipv6/route.c:2649 + ip6_route_output include/net/ip6_route.h:93 [inline] + ip6_dst_lookup_tail+0x189/0x11a0 net/ipv6/ip6_output.c:1120 + ip6_dst_lookup_flow+0xb9/0x180 net/ipv6/ip6_output.c:1250 + sctp_v6_get_dst+0x792/0x1e20 net/sctp/ipv6.c:326 + sctp_transport_route+0x12c/0x2e0 net/sctp/transport.c:455 + sctp_assoc_add_peer+0x614/0x15c0 
net/sctp/associola.c:662 + sctp_connect_new_asoc+0x31d/0x6c0 net/sctp/socket.c:1099 + __sctp_connect+0x66d/0xe30 net/sctp/socket.c:1197 + sctp_connect net/sctp/socket.c:4819 [inline] + sctp_inet_connect+0x149/0x1f0 net/sctp/socket.c:4834 + __sys_connect_file net/socket.c:2048 [inline] + __sys_connect+0x2df/0x310 net/socket.c:2065 + __do_sys_connect net/socket.c:2075 [inline] + __se_sys_connect net/socket.c:2072 [inline] + __x64_sys_connect+0x7a/0x90 net/socket.c:2072 + do_syscall_x64 arch/x86/entry/common.c:52 [inline] + do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83 + entry_SYSCALL_64_after_hwframe+0x77/0x7f + +Fixes: 5e5f3f0f8013 ("[IPV6] ADDRCONF: Convert ipv6_get_saddr() to ipv6_dev_get_saddr().") +Signed-off-by: Eric Dumazet +Reviewed-by: Simon Horman +Reviewed-by: David Ahern +Link: https://lore.kernel.org/r/20240507163145.835254-1-edumazet@google.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/ipv6/fib6_rules.c | 6 +++++- + 1 file changed, 5 insertions(+), 1 deletion(-) + +diff --git a/net/ipv6/fib6_rules.c b/net/ipv6/fib6_rules.c +index 8e9e80eb0f329..a4caaead74c1d 100644 +--- a/net/ipv6/fib6_rules.c ++++ b/net/ipv6/fib6_rules.c +@@ -232,8 +232,12 @@ static int __fib6_rule_action(struct fib_rule *rule, struct flowi *flp, + rt = pol_lookup_func(lookup, + net, table, flp6, arg->lookup_data, flags); + if (rt != net->ipv6.ip6_null_entry) { ++ struct inet6_dev *idev = ip6_dst_idev(&rt->dst); ++ ++ if (!idev) ++ goto again; + err = fib6_rule_saddr(net, rule, flags, flp6, +- ip6_dst_idev(&rt->dst)->dev); ++ idev->dev); + + if (err == -EAGAIN) + goto again; +-- +2.43.0 + diff --git a/queue-5.15/net-bridge-fix-corrupted-ethernet-header-on-multicas.patch b/queue-5.15/net-bridge-fix-corrupted-ethernet-header-on-multicas.patch new file mode 100644 index 00000000000..54d3aa3f64e --- /dev/null +++ b/queue-5.15/net-bridge-fix-corrupted-ethernet-header-on-multicas.patch @@ -0,0 +1,56 @@ +From 305ba1b17cebad08cf1635d0b9e624489f477e59 Mon Sep 
17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 5 May 2024 20:42:38 +0200 +Subject: net: bridge: fix corrupted ethernet header on multicast-to-unicast + +From: Felix Fietkau + +[ Upstream commit 86b29d830ad69eecff25b22dc96c14c6573718e6 ] + +The change from skb_copy to pskb_copy unfortunately changed the data +copying to omit the ethernet header, since it was pulled before reaching +this point. Fix this by calling __skb_push/pull around pskb_copy. + +Fixes: 59c878cbcdd8 ("net: bridge: fix multicast-to-unicast with fraglist GSO") +Signed-off-by: Felix Fietkau +Acked-by: Nikolay Aleksandrov +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + net/bridge/br_forward.c | 9 +++++++-- + 1 file changed, 7 insertions(+), 2 deletions(-) + +diff --git a/net/bridge/br_forward.c b/net/bridge/br_forward.c +index 0bdd2892646db..1b66c276118a3 100644 +--- a/net/bridge/br_forward.c ++++ b/net/bridge/br_forward.c +@@ -253,6 +253,7 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb, + { + struct net_device *dev = BR_INPUT_SKB_CB(skb)->brdev; + const unsigned char *src = eth_hdr(skb)->h_source; ++ struct sk_buff *nskb; + + if (!should_deliver(p, skb)) + return; +@@ -261,12 +262,16 @@ static void maybe_deliver_addr(struct net_bridge_port *p, struct sk_buff *skb, + if (skb->dev == p->dev && ether_addr_equal(src, addr)) + return; + +- skb = pskb_copy(skb, GFP_ATOMIC); +- if (!skb) { ++ __skb_push(skb, ETH_HLEN); ++ nskb = pskb_copy(skb, GFP_ATOMIC); ++ __skb_pull(skb, ETH_HLEN); ++ if (!nskb) { + DEV_STATS_INC(dev, tx_dropped); + return; + } + ++ skb = nskb; ++ __skb_pull(skb, ETH_HLEN); + if (!is_broadcast_ether_addr(addr)) + memcpy(eth_hdr(skb)->h_dest, addr, ETH_ALEN); + +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-add-log-for-workqueue-scheduled-late.patch b/queue-5.15/net-hns3-add-log-for-workqueue-scheduled-late.patch new file mode 100644 index 00000000000..98639b03535 --- /dev/null +++ 
b/queue-5.15/net-hns3-add-log-for-workqueue-scheduled-late.patch @@ -0,0 +1,132 @@ +From d4a996a83742d648b5e3dd1dd3c463245b20b596 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 24 Nov 2021 09:06:51 +0800 +Subject: net: hns3: add log for workqueue scheduled late + +From: Yufeng Mo + +[ Upstream commit d9069dab207534d9f6f41993ee78a651733becea ] + +When the mbx or reset message arrives, the driver is informed +through an interrupt. This task can be processed only after +the workqueue is scheduled. In some cases, this workqueue +scheduling takes a long time. As a result, the mbx or reset +service task cannot be processed in time. So add some warning +messages to improve debugging efficiency for this case. + +Signed-off-by: Yufeng Mo +Signed-off-by: Guangbin Huang +Signed-off-by: David S. Miller +Stable-dep-of: 669554c512d2 ("net: hns3: direct return when receive a unknown mailbox message") +Signed-off-by: Sasha Levin +--- + .../net/ethernet/hisilicon/hns3/hclge_mbx.h | 3 +++ + .../hisilicon/hns3/hns3pf/hclge_main.c | 22 +++++++++++++++++-- + .../hisilicon/hns3/hns3pf/hclge_main.h | 2 ++ + .../hisilicon/hns3/hns3pf/hclge_mbx.c | 8 +++++++ + 4 files changed, 33 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +index 277d6d657c429..e1ba0ae055b02 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +@@ -80,6 +80,9 @@ enum hclge_mbx_tbl_cfg_subcode { + #define HCLGE_MBX_MAX_RESP_DATA_SIZE 8U + #define HCLGE_MBX_MAX_RING_CHAIN_PARAM_NUM 4 + ++#define HCLGE_RESET_SCHED_TIMEOUT (3 * HZ) ++#define HCLGE_MBX_SCHED_TIMEOUT (HZ / 2) ++ + struct hclge_ring_chain_param { + u8 ring_type; + u8 tqp_index; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 71b498aa327bb..93e55c6c4cf5e 100644 +---
a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -2855,16 +2855,20 @@ static int hclge_mac_init(struct hclge_dev *hdev) + static void hclge_mbx_task_schedule(struct hclge_dev *hdev) + { + if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) && +- !test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state)) ++ !test_and_set_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state)) { ++ hdev->last_mbx_scheduled = jiffies; + mod_delayed_work(hclge_wq, &hdev->service_task, 0); ++ } + } + + static void hclge_reset_task_schedule(struct hclge_dev *hdev) + { + if (!test_bit(HCLGE_STATE_REMOVING, &hdev->state) && + test_bit(HCLGE_STATE_SERVICE_INITED, &hdev->state) && +- !test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state)) ++ !test_and_set_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state)) { ++ hdev->last_rst_scheduled = jiffies; + mod_delayed_work(hclge_wq, &hdev->service_task, 0); ++ } + } + + static void hclge_errhand_task_schedule(struct hclge_dev *hdev) +@@ -3697,6 +3701,13 @@ static void hclge_mailbox_service_task(struct hclge_dev *hdev) + test_and_set_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state)) + return; + ++ if (time_is_before_jiffies(hdev->last_mbx_scheduled + ++ HCLGE_MBX_SCHED_TIMEOUT)) ++ dev_warn(&hdev->pdev->dev, ++ "mbx service task is scheduled after %ums on cpu%u!\n", ++ jiffies_to_msecs(jiffies - hdev->last_mbx_scheduled), ++ smp_processor_id()); ++ + hclge_mbx_handler(hdev); + + clear_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state); +@@ -4346,6 +4357,13 @@ static void hclge_reset_service_task(struct hclge_dev *hdev) + if (!test_and_clear_bit(HCLGE_STATE_RST_SERVICE_SCHED, &hdev->state)) + return; + ++ if (time_is_before_jiffies(hdev->last_rst_scheduled + ++ HCLGE_RESET_SCHED_TIMEOUT)) ++ dev_warn(&hdev->pdev->dev, ++ "reset service task is scheduled after %ums on cpu%u!\n", ++ jiffies_to_msecs(jiffies - hdev->last_rst_scheduled), ++ smp_processor_id()); ++ + down(&hdev->reset_sem); 
+ set_bit(HCLGE_STATE_RST_HANDLING, &hdev->state); + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +index ba0d41091b1da..6870ccc9d9eac 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +@@ -928,6 +928,8 @@ struct hclge_dev { + u16 hclge_fd_rule_num; + unsigned long serv_processed_cnt; + unsigned long last_serv_processed; ++ unsigned long last_rst_scheduled; ++ unsigned long last_mbx_scheduled; + unsigned long fd_bmap[BITS_TO_LONGS(MAX_FD_FILTER_NUM)]; + enum HCLGE_FD_ACTIVE_RULE_TYPE fd_active_type; + u8 fd_en; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 5182051e5414d..ab6df4c1ea0f6 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -855,6 +855,14 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + if (hnae3_get_bit(req->mbx_need_resp, HCLGE_MBX_NEED_RESP_B) && + req->msg.code < HCLGE_MBX_GET_VF_FLR_STATUS) { + resp_msg.status = ret; ++ if (time_is_before_jiffies(hdev->last_mbx_scheduled + ++ HCLGE_MBX_SCHED_TIMEOUT)) ++ dev_warn(&hdev->pdev->dev, ++ "resp vport%u mbx(%u,%u) late\n", ++ req->mbx_src_vfid, ++ req->msg.code, ++ req->msg.subcode); ++ + hclge_gen_resp_to_vf(vport, req, &resp_msg); + } + +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-add-query-vf-ring-and-vector-map-relation.patch b/queue-5.15/net-hns3-add-query-vf-ring-and-vector-map-relation.patch new file mode 100644 index 00000000000..c27edf91daf --- /dev/null +++ b/queue-5.15/net-hns3-add-query-vf-ring-and-vector-map-relation.patch @@ -0,0 +1,137 @@ +From 589e3344433668f069bb182f98631355d8b3edbf Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 9 May 2022 15:55:31 +0800 +Subject: net: hns3: add query vf ring and vector map relation + +From: Guangbin Huang + +[ 
Upstream commit a1aed456e3261c0096e36618db9aa61d5974ad16 ] + +This patch adds a new mailbox opcode to query map relation between +vf ring and vector. + +Signed-off-by: Guangbin Huang +Signed-off-by: David S. Miller +Stable-dep-of: 669554c512d2 ("net: hns3: direct return when receive a unknown mailbox message") +Signed-off-by: Sasha Levin +--- + .../net/ethernet/hisilicon/hns3/hclge_mbx.h | 1 + + .../hisilicon/hns3/hns3pf/hclge_mbx.c | 83 +++++++++++++++++++ + 2 files changed, 84 insertions(+) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +index e1ba0ae055b02..09a2a7c9fca43 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +@@ -46,6 +46,7 @@ enum HCLGE_MBX_OPCODE { + HCLGE_MBX_PUSH_PROMISC_INFO, /* (PF -> VF) push vf promisc info */ + HCLGE_MBX_VF_UNINIT, /* (VF -> PF) vf is unintializing */ + HCLGE_MBX_HANDLE_VF_TBL, /* (VF -> PF) store/clear hw table */ ++ HCLGE_MBX_GET_RING_VECTOR_MAP, /* (VF -> PF) get ring-to-vector map */ + + HCLGE_MBX_GET_VF_FLR_STATUS = 200, /* (M7 -> PF) get vf flr status */ + HCLGE_MBX_PUSH_LINK_STATUS, /* (M7 -> PF) get port link status */ +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index ab6df4c1ea0f6..839555bf4bc49 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -250,6 +250,81 @@ static int hclge_map_unmap_ring_to_vf_vector(struct hclge_vport *vport, bool en, + return ret; + } + ++static int hclge_query_ring_vector_map(struct hclge_vport *vport, ++ struct hnae3_ring_chain_node *ring_chain, ++ struct hclge_desc *desc) ++{ ++ struct hclge_ctrl_vector_chain_cmd *req = ++ (struct hclge_ctrl_vector_chain_cmd *)desc->data; ++ struct hclge_dev *hdev = vport->back; ++ u16 tqp_type_and_id; ++ int status; ++ ++ hclge_cmd_setup_basic_desc(desc, 
HCLGE_OPC_ADD_RING_TO_VECTOR, true); ++ ++ tqp_type_and_id = le16_to_cpu(req->tqp_type_and_id[0]); ++ hnae3_set_field(tqp_type_and_id, HCLGE_INT_TYPE_M, HCLGE_INT_TYPE_S, ++ hnae3_get_bit(ring_chain->flag, HNAE3_RING_TYPE_B)); ++ hnae3_set_field(tqp_type_and_id, HCLGE_TQP_ID_M, HCLGE_TQP_ID_S, ++ ring_chain->tqp_index); ++ req->tqp_type_and_id[0] = cpu_to_le16(tqp_type_and_id); ++ req->vfid = vport->vport_id; ++ ++ status = hclge_cmd_send(&hdev->hw, desc, 1); ++ if (status) ++ dev_err(&hdev->pdev->dev, ++ "Get VF ring vector map info fail, status is %d.\n", ++ status); ++ ++ return status; ++} ++ ++static int hclge_get_vf_ring_vector_map(struct hclge_vport *vport, ++ struct hclge_mbx_vf_to_pf_cmd *req, ++ struct hclge_respond_to_vf_msg *resp) ++{ ++#define HCLGE_LIMIT_RING_NUM 1 ++#define HCLGE_RING_TYPE_OFFSET 0 ++#define HCLGE_TQP_INDEX_OFFSET 1 ++#define HCLGE_INT_GL_INDEX_OFFSET 2 ++#define HCLGE_VECTOR_ID_OFFSET 3 ++#define HCLGE_RING_VECTOR_MAP_INFO_LEN 4 ++ struct hnae3_ring_chain_node ring_chain; ++ struct hclge_desc desc; ++ struct hclge_ctrl_vector_chain_cmd *data = ++ (struct hclge_ctrl_vector_chain_cmd *)desc.data; ++ u16 tqp_type_and_id; ++ u8 int_gl_index; ++ int ret; ++ ++ req->msg.ring_num = HCLGE_LIMIT_RING_NUM; ++ ++ memset(&ring_chain, 0, sizeof(ring_chain)); ++ ret = hclge_get_ring_chain_from_mbx(req, &ring_chain, vport); ++ if (ret) ++ return ret; ++ ++ ret = hclge_query_ring_vector_map(vport, &ring_chain, &desc); ++ if (ret) { ++ hclge_free_vector_ring_chain(&ring_chain); ++ return ret; ++ } ++ ++ tqp_type_and_id = le16_to_cpu(data->tqp_type_and_id[0]); ++ int_gl_index = hnae3_get_field(tqp_type_and_id, ++ HCLGE_INT_GL_IDX_M, HCLGE_INT_GL_IDX_S); ++ ++ resp->data[HCLGE_RING_TYPE_OFFSET] = req->msg.param[0].ring_type; ++ resp->data[HCLGE_TQP_INDEX_OFFSET] = req->msg.param[0].tqp_index; ++ resp->data[HCLGE_INT_GL_INDEX_OFFSET] = int_gl_index; ++ resp->data[HCLGE_VECTOR_ID_OFFSET] = data->int_vector_id_l; ++ resp->len = 
HCLGE_RING_VECTOR_MAP_INFO_LEN; ++ ++ hclge_free_vector_ring_chain(&ring_chain); ++ ++ return ret; ++} ++ + static void hclge_set_vf_promisc_mode(struct hclge_vport *vport, + struct hclge_mbx_vf_to_pf_cmd *req) + { +@@ -749,6 +824,14 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + ret = hclge_map_unmap_ring_to_vf_vector(vport, false, + req); + break; ++ case HCLGE_MBX_GET_RING_VECTOR_MAP: ++ ret = hclge_get_vf_ring_vector_map(vport, req, ++ &resp_msg); ++ if (ret) ++ dev_err(&hdev->pdev->dev, ++ "PF fail(%d) to get VF ring vector map\n", ++ ret); ++ break; + case HCLGE_MBX_SET_PROMISC_MODE: + hclge_set_vf_promisc_mode(vport, req); + break; +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-change-type-of-numa_node_mask-as-nodemask_t.patch b/queue-5.15/net-hns3-change-type-of-numa_node_mask-as-nodemask_t.patch new file mode 100644 index 00000000000..8fb606ed460 --- /dev/null +++ b/queue-5.15/net-hns3-change-type-of-numa_node_mask-as-nodemask_t.patch @@ -0,0 +1,117 @@ +From 41749f6c6a323145eae055ffc7d7376247c6b406 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 May 2024 21:42:20 +0800 +Subject: net: hns3: change type of numa_node_mask as nodemask_t + +From: Peiyang Wang + +[ Upstream commit 6639a7b953212ac51aa4baa7d7fb855bf736cf56 ] + +It provides nodemask_t to describe the numa node mask in kernel. To +improve transportability, change the type of numa_node_mask as nodemask_t. 
+ +Fixes: 38caee9d3ee8 ("net: hns3: Add support of the HNAE3 framework") +Signed-off-by: Peiyang Wang +Signed-off-by: Jijie Shao +Reviewed-by: Simon Horman +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/hnae3.h | 2 +- + drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 6 ++++-- + drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 2 +- + drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 7 ++++--- + drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h | 2 +- + 5 files changed, 11 insertions(+), 8 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +index 8dfa372df8e77..f362a2fac3c29 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +@@ -829,7 +829,7 @@ struct hnae3_handle { + struct hnae3_roce_private_info rinfo; + }; + +- u32 numa_node_mask; /* for multi-chip support */ ++ nodemask_t numa_node_mask; /* for multi-chip support */ + + enum hnae3_port_base_vlan_state port_base_vlan_state; + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index a744ebb72b137..a0a64441199c5 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -1825,7 +1825,8 @@ static int hclge_vport_setup(struct hclge_vport *vport, u16 num_tqps) + + nic->pdev = hdev->pdev; + nic->ae_algo = &ae_algo; +- nic->numa_node_mask = hdev->numa_node_mask; ++ bitmap_copy(nic->numa_node_mask.bits, hdev->numa_node_mask.bits, ++ MAX_NUMNODES); + nic->kinfo.io_base = hdev->hw.hw.io_base; + + ret = hclge_knic_setup(vport, num_tqps, +@@ -2517,7 +2518,8 @@ static int hclge_init_roce_base_info(struct hclge_vport *vport) + + roce->pdev = nic->pdev; + roce->ae_algo = nic->ae_algo; +- roce->numa_node_mask = nic->numa_node_mask; ++ 
bitmap_copy(roce->numa_node_mask.bits, nic->numa_node_mask.bits, ++ MAX_NUMNODES); + + return 0; + } +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +index 4e52a7d96483c..1ef5b4c8625a7 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +@@ -863,7 +863,7 @@ struct hclge_dev { + + u16 fdir_pf_filter_count; /* Num of guaranteed filters for this PF */ + u16 num_alloc_vport; /* Num vports this driver supports */ +- u32 numa_node_mask; ++ nodemask_t numa_node_mask; + u16 rx_buf_len; + u16 num_tx_desc; /* desc num of per tx queue */ + u16 num_rx_desc; /* desc num of per rx queue */ +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +index bd8468c2d9a68..9afb44d738c4e 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +@@ -537,7 +537,8 @@ static int hclgevf_set_handle_info(struct hclgevf_dev *hdev) + + nic->ae_algo = &ae_algovf; + nic->pdev = hdev->pdev; +- nic->numa_node_mask = hdev->numa_node_mask; ++ bitmap_copy(nic->numa_node_mask.bits, hdev->numa_node_mask.bits, ++ MAX_NUMNODES); + nic->flags |= HNAE3_SUPPORT_VF; + nic->kinfo.io_base = hdev->hw.io_base; + +@@ -2588,8 +2589,8 @@ static int hclgevf_init_roce_base_info(struct hclgevf_dev *hdev) + + roce->pdev = nic->pdev; + roce->ae_algo = nic->ae_algo; +- roce->numa_node_mask = nic->numa_node_mask; +- ++ bitmap_copy(roce->numa_node_mask.bits, nic->numa_node_mask.bits, ++ MAX_NUMNODES); + return 0; + } + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +index 5c7538ca36a76..2b216ac96914c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h ++++ 
b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.h +@@ -298,7 +298,7 @@ struct hclgevf_dev { + u16 rss_size_max; /* HW defined max RSS task queue */ + + u16 num_alloc_vport; /* num vports this driver supports */ +- u32 numa_node_mask; ++ nodemask_t numa_node_mask; + u16 rx_buf_len; + u16 num_tx_desc; /* desc num of per tx queue */ + u16 num_rx_desc; /* desc num of per rx queue */ +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-create-new-cmdq-hardware-description-struct.patch b/queue-5.15/net-hns3-create-new-cmdq-hardware-description-struct.patch new file mode 100644 index 00000000000..dbf9300ea27 --- /dev/null +++ b/queue-5.15/net-hns3-create-new-cmdq-hardware-description-struct.patch @@ -0,0 +1,140 @@ +From f2fd2a787458fa41b222e96d4405480af4856eaf Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 31 Dec 2021 18:22:32 +0800 +Subject: net: hns3: create new cmdq hardware description structure + hclge_comm_hw + +From: Jie Wang + +[ Upstream commit 0a7b6d221868be6aa3249c70ffab707a265b89d6 ] + +Currently PF and VF cmdq APIs use struct hclge(vf)_hw to describe cmdq +hardware information needed by hclge(vf)_cmd_send. There are a few +differences between its child struct hclge_cmq_ring and hclgevf_cmq_ring. +It is redundant to use two sets of structures to support the same functions. + +So this patch creates a new set of common cmdq hardware description +structures (hclge_comm_hw) to unify PF and VF cmdq functions. The struct +hclge_desc is still kept to avoid too many meaningless replacements. + +These new structures will be used to unify hclge(vf)_hw structures in PF +and VF cmdq APIs in the next patches. + +Signed-off-by: Jie Wang +Signed-off-by: Guangbin Huang +Signed-off-by: David S.
Miller +Stable-dep-of: 6639a7b95321 ("net: hns3: change type of numa_node_mask as nodemask_t") +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/Makefile | 1 + + .../hns3/hns3_common/hclge_comm_cmd.h | 55 +++++++++++++++++++ + .../hisilicon/hns3/hns3pf/hclge_cmd.h | 9 +-- + 3 files changed, 57 insertions(+), 8 deletions(-) + create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h + +diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile +index 32e24e0945f5e..33e546cef2881 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/Makefile ++++ b/drivers/net/ethernet/hisilicon/hns3/Makefile +@@ -6,6 +6,7 @@ + ccflags-y += -I$(srctree)/$(src) + ccflags-y += -I$(srctree)/drivers/net/ethernet/hisilicon/hns3/hns3pf + ccflags-y += -I$(srctree)/drivers/net/ethernet/hisilicon/hns3/hns3vf ++ccflags-y += -I$(srctree)/drivers/net/ethernet/hisilicon/hns3/hns3_common + + obj-$(CONFIG_HNS3) += hnae3.o + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h +new file mode 100644 +index 0000000000000..f1e39003ceebe +--- /dev/null ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h +@@ -0,0 +1,55 @@ ++/* SPDX-License-Identifier: GPL-2.0+ */ ++// Copyright (c) 2021-2021 Hisilicon Limited. 
++ ++#ifndef __HCLGE_COMM_CMD_H ++#define __HCLGE_COMM_CMD_H ++#include ++ ++#include "hnae3.h" ++ ++#define HCLGE_DESC_DATA_LEN 6 ++struct hclge_desc { ++ __le16 opcode; ++ __le16 flag; ++ __le16 retval; ++ __le16 rsv; ++ __le32 data[HCLGE_DESC_DATA_LEN]; ++}; ++ ++struct hclge_comm_cmq_ring { ++ dma_addr_t desc_dma_addr; ++ struct hclge_desc *desc; ++ struct pci_dev *pdev; ++ u32 head; ++ u32 tail; ++ ++ u16 buf_size; ++ u16 desc_num; ++ int next_to_use; ++ int next_to_clean; ++ u8 ring_type; /* cmq ring type */ ++ spinlock_t lock; /* Command queue lock */ ++}; ++ ++enum hclge_comm_cmd_status { ++ HCLGE_COMM_STATUS_SUCCESS = 0, ++ HCLGE_COMM_ERR_CSQ_FULL = -1, ++ HCLGE_COMM_ERR_CSQ_TIMEOUT = -2, ++ HCLGE_COMM_ERR_CSQ_ERROR = -3, ++}; ++ ++struct hclge_comm_cmq { ++ struct hclge_comm_cmq_ring csq; ++ struct hclge_comm_cmq_ring crq; ++ u16 tx_timeout; ++ enum hclge_comm_cmd_status last_status; ++}; ++ ++struct hclge_comm_hw { ++ void __iomem *io_base; ++ void __iomem *mem_base; ++ struct hclge_comm_cmq cmq; ++ unsigned long comm_state; ++}; ++ ++#endif +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +index cfbb7c51b0cb3..e07709ef239df 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +@@ -7,24 +7,17 @@ + #include + #include + #include "hnae3.h" ++#include "hclge_comm_cmd.h" + + #define HCLGE_CMDQ_TX_TIMEOUT 30000 + #define HCLGE_CMDQ_CLEAR_WAIT_TIME 200 + #define HCLGE_DESC_DATA_LEN 6 + + struct hclge_dev; +-struct hclge_desc { +- __le16 opcode; + + #define HCLGE_CMDQ_RX_INVLD_B 0 + #define HCLGE_CMDQ_RX_OUTVLD_B 1 + +- __le16 flag; +- __le16 retval; +- __le16 rsv; +- __le32 data[HCLGE_DESC_DATA_LEN]; +-}; +- + struct hclge_cmq_ring { + dma_addr_t desc_dma_addr; + struct hclge_desc *desc; +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-create-new-set-of-unified-hclge_comm_cmd_se.patch 
b/queue-5.15/net-hns3-create-new-set-of-unified-hclge_comm_cmd_se.patch new file mode 100644 index 00000000000..b6e182c11c0 --- /dev/null +++ b/queue-5.15/net-hns3-create-new-set-of-unified-hclge_comm_cmd_se.patch @@ -0,0 +1,378 @@ +From 5f43605042a699e08dc57cb34d5a330ced2cba01 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 31 Dec 2021 18:22:34 +0800 +Subject: net: hns3: create new set of unified hclge_comm_cmd_send APIs + +From: Jie Wang + +[ Upstream commit 8d307f8e8cf195921b10939dde673f1f039bd732 ] + +This patch creates a new set of unified hclge_comm_cmd_send APIs for the PF and VF +cmdq modules. Subfunctions called by hclge_comm_cmd_send are also created, +including cmdq result check, cmdq return code conversion and ring space +operation APIs. + +These new common cmdq APIs will be used to replace the old PF and VF cmdq +APIs in the next patches. + +Signed-off-by: Jie Wang +Signed-off-by: Guangbin Huang +Signed-off-by: David S. Miller +Stable-dep-of: 6639a7b95321 ("net: hns3: change type of numa_node_mask as nodemask_t") +Signed-off-by: Sasha Levin +--- + .../hns3/hns3_common/hclge_comm_cmd.c | 259 ++++++++++++++++++ + .../hns3/hns3_common/hclge_comm_cmd.h | 66 +++++ + 2 files changed, 325 insertions(+) + create mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c +new file mode 100644 +index 0000000000000..89e999248b9af +--- /dev/null ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.c +@@ -0,0 +1,259 @@ ++// SPDX-License-Identifier: GPL-2.0+ ++// Copyright (c) 2021-2021 Hisilicon Limited.
++ ++#include "hnae3.h" ++#include "hclge_comm_cmd.h" ++ ++static bool hclge_is_elem_in_array(const u16 *spec_opcode, u32 size, u16 opcode) ++{ ++ u32 i; ++ ++ for (i = 0; i < size; i++) { ++ if (spec_opcode[i] == opcode) ++ return true; ++ } ++ ++ return false; ++} ++ ++static const u16 pf_spec_opcode[] = { HCLGE_COMM_OPC_STATS_64_BIT, ++ HCLGE_COMM_OPC_STATS_32_BIT, ++ HCLGE_COMM_OPC_STATS_MAC, ++ HCLGE_COMM_OPC_STATS_MAC_ALL, ++ HCLGE_COMM_OPC_QUERY_32_BIT_REG, ++ HCLGE_COMM_OPC_QUERY_64_BIT_REG, ++ HCLGE_COMM_QUERY_CLEAR_MPF_RAS_INT, ++ HCLGE_COMM_QUERY_CLEAR_PF_RAS_INT, ++ HCLGE_COMM_QUERY_CLEAR_ALL_MPF_MSIX_INT, ++ HCLGE_COMM_QUERY_CLEAR_ALL_PF_MSIX_INT, ++ HCLGE_COMM_QUERY_ALL_ERR_INFO }; ++ ++static const u16 vf_spec_opcode[] = { HCLGE_COMM_OPC_STATS_64_BIT, ++ HCLGE_COMM_OPC_STATS_32_BIT, ++ HCLGE_COMM_OPC_STATS_MAC }; ++ ++static bool hclge_comm_is_special_opcode(u16 opcode, bool is_pf) ++{ ++ /* these commands have several descriptors, ++ * and use the first one to save opcode and return value ++ */ ++ const u16 *spec_opcode = is_pf ? pf_spec_opcode : vf_spec_opcode; ++ u32 size = is_pf ? 
ARRAY_SIZE(pf_spec_opcode) : ++ ARRAY_SIZE(vf_spec_opcode); ++ ++ return hclge_is_elem_in_array(spec_opcode, size, opcode); ++} ++ ++static int hclge_comm_ring_space(struct hclge_comm_cmq_ring *ring) ++{ ++ int ntc = ring->next_to_clean; ++ int ntu = ring->next_to_use; ++ int used = (ntu - ntc + ring->desc_num) % ring->desc_num; ++ ++ return ring->desc_num - used - 1; ++} ++ ++static void hclge_comm_cmd_copy_desc(struct hclge_comm_hw *hw, ++ struct hclge_desc *desc, int num) ++{ ++ struct hclge_desc *desc_to_use; ++ int handle = 0; ++ ++ while (handle < num) { ++ desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; ++ *desc_to_use = desc[handle]; ++ (hw->cmq.csq.next_to_use)++; ++ if (hw->cmq.csq.next_to_use >= hw->cmq.csq.desc_num) ++ hw->cmq.csq.next_to_use = 0; ++ handle++; ++ } ++} ++ ++static int hclge_comm_is_valid_csq_clean_head(struct hclge_comm_cmq_ring *ring, ++ int head) ++{ ++ int ntc = ring->next_to_clean; ++ int ntu = ring->next_to_use; ++ ++ if (ntu > ntc) ++ return head >= ntc && head <= ntu; ++ ++ return head >= ntc || head <= ntu; ++} ++ ++static int hclge_comm_cmd_csq_clean(struct hclge_comm_hw *hw) ++{ ++ struct hclge_comm_cmq_ring *csq = &hw->cmq.csq; ++ int clean; ++ u32 head; ++ ++ head = hclge_comm_read_dev(hw, HCLGE_COMM_NIC_CSQ_HEAD_REG); ++ rmb(); /* Make sure head is ready before touch any data */ ++ ++ if (!hclge_comm_is_valid_csq_clean_head(csq, head)) { ++ dev_warn(&hw->cmq.csq.pdev->dev, "wrong cmd head (%u, %d-%d)\n", ++ head, csq->next_to_use, csq->next_to_clean); ++ dev_warn(&hw->cmq.csq.pdev->dev, ++ "Disabling any further commands to IMP firmware\n"); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hw->comm_state); ++ dev_warn(&hw->cmq.csq.pdev->dev, ++ "IMP firmware watchdog reset soon expected!\n"); ++ return -EIO; ++ } ++ ++ clean = (head - csq->next_to_clean + csq->desc_num) % csq->desc_num; ++ csq->next_to_clean = head; ++ return clean; ++} ++ ++static int hclge_comm_cmd_csq_done(struct hclge_comm_hw *hw) ++{ ++ u32 head = 
hclge_comm_read_dev(hw, HCLGE_COMM_NIC_CSQ_HEAD_REG); ++ return head == hw->cmq.csq.next_to_use; ++} ++ ++static void hclge_comm_wait_for_resp(struct hclge_comm_hw *hw, ++ bool *is_completed) ++{ ++ u32 timeout = 0; ++ ++ do { ++ if (hclge_comm_cmd_csq_done(hw)) { ++ *is_completed = true; ++ break; ++ } ++ udelay(1); ++ timeout++; ++ } while (timeout < hw->cmq.tx_timeout); ++} ++ ++static int hclge_comm_cmd_convert_err_code(u16 desc_ret) ++{ ++ struct hclge_comm_errcode hclge_comm_cmd_errcode[] = { ++ { HCLGE_COMM_CMD_EXEC_SUCCESS, 0 }, ++ { HCLGE_COMM_CMD_NO_AUTH, -EPERM }, ++ { HCLGE_COMM_CMD_NOT_SUPPORTED, -EOPNOTSUPP }, ++ { HCLGE_COMM_CMD_QUEUE_FULL, -EXFULL }, ++ { HCLGE_COMM_CMD_NEXT_ERR, -ENOSR }, ++ { HCLGE_COMM_CMD_UNEXE_ERR, -ENOTBLK }, ++ { HCLGE_COMM_CMD_PARA_ERR, -EINVAL }, ++ { HCLGE_COMM_CMD_RESULT_ERR, -ERANGE }, ++ { HCLGE_COMM_CMD_TIMEOUT, -ETIME }, ++ { HCLGE_COMM_CMD_HILINK_ERR, -ENOLINK }, ++ { HCLGE_COMM_CMD_QUEUE_ILLEGAL, -ENXIO }, ++ { HCLGE_COMM_CMD_INVALID, -EBADR }, ++ }; ++ u32 errcode_count = ARRAY_SIZE(hclge_comm_cmd_errcode); ++ u32 i; ++ ++ for (i = 0; i < errcode_count; i++) ++ if (hclge_comm_cmd_errcode[i].imp_errcode == desc_ret) ++ return hclge_comm_cmd_errcode[i].common_errno; ++ ++ return -EIO; ++} ++ ++static int hclge_comm_cmd_check_retval(struct hclge_comm_hw *hw, ++ struct hclge_desc *desc, int num, ++ int ntc, bool is_pf) ++{ ++ u16 opcode, desc_ret; ++ int handle; ++ ++ opcode = le16_to_cpu(desc[0].opcode); ++ for (handle = 0; handle < num; handle++) { ++ desc[handle] = hw->cmq.csq.desc[ntc]; ++ ntc++; ++ if (ntc >= hw->cmq.csq.desc_num) ++ ntc = 0; ++ } ++ if (likely(!hclge_comm_is_special_opcode(opcode, is_pf))) ++ desc_ret = le16_to_cpu(desc[num - 1].retval); ++ else ++ desc_ret = le16_to_cpu(desc[0].retval); ++ ++ hw->cmq.last_status = desc_ret; ++ ++ return hclge_comm_cmd_convert_err_code(desc_ret); ++} ++ ++static int hclge_comm_cmd_check_result(struct hclge_comm_hw *hw, ++ struct hclge_desc *desc, ++ int num, int 
ntc, bool is_pf) ++{ ++ bool is_completed = false; ++ int handle, ret; ++ ++ /* If the command is sync, wait for the firmware to write back, ++ * if multi descriptors to be sent, use the first one to check ++ */ ++ if (HCLGE_COMM_SEND_SYNC(le16_to_cpu(desc->flag))) ++ hclge_comm_wait_for_resp(hw, &is_completed); ++ ++ if (!is_completed) ++ ret = -EBADE; ++ else ++ ret = hclge_comm_cmd_check_retval(hw, desc, num, ntc, is_pf); ++ ++ /* Clean the command send queue */ ++ handle = hclge_comm_cmd_csq_clean(hw); ++ if (handle < 0) ++ ret = handle; ++ else if (handle != num) ++ dev_warn(&hw->cmq.csq.pdev->dev, ++ "cleaned %d, need to clean %d\n", handle, num); ++ return ret; ++} ++ ++/** ++ * hclge_comm_cmd_send - send command to command queue ++ * @hw: pointer to the hw struct ++ * @desc: prefilled descriptor for describing the command ++ * @num : the number of descriptors to be sent ++ * @is_pf: bool to judge pf/vf module ++ * ++ * This is the main send command for command queue, it ++ * sends the queue, cleans the queue, etc ++ **/ ++int hclge_comm_cmd_send(struct hclge_comm_hw *hw, struct hclge_desc *desc, ++ int num, bool is_pf) ++{ ++ struct hclge_comm_cmq_ring *csq = &hw->cmq.csq; ++ int ret; ++ int ntc; ++ ++ spin_lock_bh(&hw->cmq.csq.lock); ++ ++ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hw->comm_state)) { ++ spin_unlock_bh(&hw->cmq.csq.lock); ++ return -EBUSY; ++ } ++ ++ if (num > hclge_comm_ring_space(&hw->cmq.csq)) { ++ /* If CMDQ ring is full, SW HEAD and HW HEAD may be different, ++ * need update the SW HEAD pointer csq->next_to_clean ++ */ ++ csq->next_to_clean = ++ hclge_comm_read_dev(hw, HCLGE_COMM_NIC_CSQ_HEAD_REG); ++ spin_unlock_bh(&hw->cmq.csq.lock); ++ return -EBUSY; ++ } ++ ++ /** ++ * Record the location of desc in the ring for this time ++ * which will be use for hardware to write back ++ */ ++ ntc = hw->cmq.csq.next_to_use; ++ ++ hclge_comm_cmd_copy_desc(hw, desc, num); ++ ++ /* Write to hardware */ ++ hclge_comm_write_dev(hw, 
HCLGE_COMM_NIC_CSQ_TAIL_REG, ++ hw->cmq.csq.next_to_use); ++ ++ ret = hclge_comm_cmd_check_result(hw, desc, num, ntc, is_pf); ++ ++ spin_unlock_bh(&hw->cmq.csq.lock); ++ ++ return ret; ++} +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h +index f1e39003ceebe..5164c666cae71 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_common/hclge_comm_cmd.h +@@ -7,6 +7,52 @@ + + #include "hnae3.h" + ++#define HCLGE_COMM_CMD_FLAG_NO_INTR BIT(4) ++ ++#define HCLGE_COMM_SEND_SYNC(flag) \ ++ ((flag) & HCLGE_COMM_CMD_FLAG_NO_INTR) ++ ++#define HCLGE_COMM_NIC_CSQ_TAIL_REG 0x27010 ++#define HCLGE_COMM_NIC_CSQ_HEAD_REG 0x27014 ++ ++enum hclge_comm_cmd_return_status { ++ HCLGE_COMM_CMD_EXEC_SUCCESS = 0, ++ HCLGE_COMM_CMD_NO_AUTH = 1, ++ HCLGE_COMM_CMD_NOT_SUPPORTED = 2, ++ HCLGE_COMM_CMD_QUEUE_FULL = 3, ++ HCLGE_COMM_CMD_NEXT_ERR = 4, ++ HCLGE_COMM_CMD_UNEXE_ERR = 5, ++ HCLGE_COMM_CMD_PARA_ERR = 6, ++ HCLGE_COMM_CMD_RESULT_ERR = 7, ++ HCLGE_COMM_CMD_TIMEOUT = 8, ++ HCLGE_COMM_CMD_HILINK_ERR = 9, ++ HCLGE_COMM_CMD_QUEUE_ILLEGAL = 10, ++ HCLGE_COMM_CMD_INVALID = 11, ++}; ++ ++enum hclge_comm_special_cmd { ++ HCLGE_COMM_OPC_STATS_64_BIT = 0x0030, ++ HCLGE_COMM_OPC_STATS_32_BIT = 0x0031, ++ HCLGE_COMM_OPC_STATS_MAC = 0x0032, ++ HCLGE_COMM_OPC_STATS_MAC_ALL = 0x0034, ++ HCLGE_COMM_OPC_QUERY_32_BIT_REG = 0x0041, ++ HCLGE_COMM_OPC_QUERY_64_BIT_REG = 0x0042, ++ HCLGE_COMM_QUERY_CLEAR_MPF_RAS_INT = 0x1511, ++ HCLGE_COMM_QUERY_CLEAR_PF_RAS_INT = 0x1512, ++ HCLGE_COMM_QUERY_CLEAR_ALL_MPF_MSIX_INT = 0x1514, ++ HCLGE_COMM_QUERY_CLEAR_ALL_PF_MSIX_INT = 0x1515, ++ HCLGE_COMM_QUERY_ALL_ERR_INFO = 0x1517, ++}; ++ ++enum hclge_comm_cmd_state { ++ HCLGE_COMM_STATE_CMD_DISABLE, ++}; ++ ++struct hclge_comm_errcode { ++ u32 imp_errcode; ++ int common_errno; ++}; ++ + #define HCLGE_DESC_DATA_LEN 6 + struct hclge_desc { + __le16 opcode; 
+@@ -52,4 +98,24 @@ struct hclge_comm_hw { + unsigned long comm_state; + }; + ++static inline void hclge_comm_write_reg(void __iomem *base, u32 reg, u32 value) ++{ ++ writel(value, base + reg); ++} ++ ++static inline u32 hclge_comm_read_reg(u8 __iomem *base, u32 reg) ++{ ++ u8 __iomem *reg_addr = READ_ONCE(base); ++ ++ return readl(reg_addr + reg); ++} ++ ++#define hclge_comm_write_dev(a, reg, value) \ ++ hclge_comm_write_reg((a)->io_base, reg, value) ++#define hclge_comm_read_dev(a, reg) \ ++ hclge_comm_read_reg((a)->io_base, reg) ++ ++int hclge_comm_cmd_send(struct hclge_comm_hw *hw, struct hclge_desc *desc, ++ int num, bool is_pf); ++ + #endif +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-direct-return-when-receive-a-unknown-mailbo.patch b/queue-5.15/net-hns3-direct-return-when-receive-a-unknown-mailbo.patch new file mode 100644 index 00000000000..fbb53ee1ca7 --- /dev/null +++ b/queue-5.15/net-hns3-direct-return-when-receive-a-unknown-mailbo.patch @@ -0,0 +1,47 @@ +From 21d81a20ece0049f7c2326ab6509f1c180ac8bcb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 May 2024 21:42:19 +0800 +Subject: net: hns3: direct return when receive a unknown mailbox message + +From: Jian Shen + +[ Upstream commit 669554c512d2107e2f21616f38e050d40655101f ] + +Currently, the driver didn't return when receiving an unknown +mailbox message, and continued checking whether it needed to +generate a response. It's unnecessary and may be incorrect.
+ +Fixes: bb5790b71bad ("net: hns3: refactor mailbox response scheme between PF and VF") +Signed-off-by: Jian Shen +Signed-off-by: Jijie Shao +Reviewed-by: Simon Horman +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 39848bcd77c75..1bd3d6056163b 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -1019,12 +1019,13 @@ static void hclge_mbx_request_handling(struct hclge_mbx_ops_param *param) + + hdev = param->vport->back; + cmd_func = hclge_mbx_ops_list[param->req->msg.code]; +- if (cmd_func) +- ret = cmd_func(param); +- else ++ if (!cmd_func) { + dev_err(&hdev->pdev->dev, + "un-supported mailbox message, code = %u\n", + param->req->msg.code); ++ return; ++ } ++ ret = cmd_func(param); + + /* PF driver should not reply IMP */ + if (hnae3_get_bit(param->req->mbx_need_resp, HCLGE_MBX_NEED_RESP_B) && +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-fix-port-vlan-filter-not-disabled-issue.patch b/queue-5.15/net-hns3-fix-port-vlan-filter-not-disabled-issue.patch new file mode 100644 index 00000000000..e385c70d07c --- /dev/null +++ b/queue-5.15/net-hns3-fix-port-vlan-filter-not-disabled-issue.patch @@ -0,0 +1,63 @@ +From 136d8ec90703647e2ed00cbb48062fbe45c90506 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 May 2024 21:42:23 +0800 +Subject: net: hns3: fix port vlan filter not disabled issue + +From: Yonglong Liu + +[ Upstream commit f5db7a3b65c84d723ca5e2bb6e83115180ab6336 ] + +According to hardware limitation, for device support modify +VLAN filter state but not support bypass port VLAN filter, +it should always disable the port VLAN filter. 
But the driver +enables the port VLAN filter when initializing; if no VLAN id +(except VLAN 0) has been added, the driver will disable it +in the service task. Most of the time this works fine, but there is +a time window between the service task being scheduled and the net +device being registered. So if a user adds a VLAN in this window, the +driver will not update the VLAN filter state, and the port VLAN filter +remains enabled. + +To fix the problem, if the device supports modifying the VLAN filter +state but does not support bypassing the port VLAN filter, set the port +VLAN filter to "off". + +Fixes: 184cd221a863 ("net: hns3: disable port VLAN filter when support function level VLAN filter control") +Fixes: 2ba306627f59 ("net: hns3: add support for modify VLAN filter state") +Signed-off-by: Yonglong Liu +Signed-off-by: Jijie Shao +Reviewed-by: Simon Horman +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 6a346165d5881..d58048b056781 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -10105,6 +10105,7 @@ static int hclge_set_vlan_protocol_type(struct hclge_dev *hdev) + static int hclge_init_vlan_filter(struct hclge_dev *hdev) + { + struct hclge_vport *vport; ++ bool enable = true; + int ret; + int i; + +@@ -10124,8 +10125,12 @@ static int hclge_init_vlan_filter(struct hclge_dev *hdev) + vport->cur_vlan_fltr_en = true; + } + ++ if (test_bit(HNAE3_DEV_SUPPORT_VLAN_FLTR_MDF_B, hdev->ae_dev->caps) && ++ !test_bit(HNAE3_DEV_SUPPORT_PORT_VLAN_BYPASS_B, hdev->ae_dev->caps)) ++ enable = false; ++ + return hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_PORT, +- HCLGE_FILTER_FE_INGRESS, true, 0); ++ HCLGE_FILTER_FE_INGRESS, enable, 0); + } + + static int
hclge_init_vlan_type(struct hclge_dev *hdev) +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-pf-support-get-unicast-mac-address-space-as.patch b/queue-5.15/net-hns3-pf-support-get-unicast-mac-address-space-as.patch new file mode 100644 index 00000000000..5be256abacd --- /dev/null +++ b/queue-5.15/net-hns3-pf-support-get-unicast-mac-address-space-as.patch @@ -0,0 +1,136 @@ +From be5261028ba741fee063255573c143ef03bf38b7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 14 Sep 2021 20:11:16 +0800 +Subject: net: hns3: PF support get unicast MAC address space assigned by + firmware + +From: Guangbin Huang + +[ Upstream commit e435a6b5315a05a4e4e9f77679a57fd0d679e384 ] + +Currently, there are two ways for the PF to set the unicast MAC address +space size: specified by config parameters in firmware, or set to a +default value. + +That means if the config parameters in firmware are zero, the driver will +divide the whole unicast MAC address space equally among 8 PFs. However, +in this case, much of the unicast MAC address space is wasted when the +hardware actually has fewer than 8 PFs. On the other hand, if one PF has +many more VFs than the other PFs, then each function of this PF will have +much less address space than the functions of the other PFs. + +In order to ameliorate the above two situations, introduce a third way of +unicast MAC address space assignment: firmware divides the whole unicast +MAC address space equally among the functions of all PFs, and calculates +the space size of each PF according to its function number. The PF queries +the space size with the query device specification command during +initialization. + +The third way of assignment has lower priority than specification by +config parameters and is used only when the config parameters are zero; if +firmware does not support the third way of assignment, the driver still +divides the whole unicast MAC address space equally among 8 PFs. + +Signed-off-by: Guangbin Huang +Signed-off-by: David S.
Miller +Stable-dep-of: 05eb60e9648c ("net: hns3: using user configure after hardware reset") +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/hnae3.h | 1 + + drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c | 2 ++ + .../net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h | 4 +++- + .../net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 11 ++++++++--- + 4 files changed, 14 insertions(+), 4 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hnae3.h b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +index b51afb83d023e..8dfa372df8e77 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hnae3.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hnae3.h +@@ -341,6 +341,7 @@ struct hnae3_dev_specs { + u8 max_non_tso_bd_num; /* max BD number of one non-TSO packet */ + u16 max_frm_size; + u16 max_qset_num; ++ u16 umv_size; + }; + + struct hnae3_client_ops { +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c +index 45f245b1d331c..bd801e35d51ea 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3_debugfs.c +@@ -924,6 +924,8 @@ hns3_dbg_dev_specs(struct hnae3_handle *h, char *buf, int len, int *pos) + dev_specs->max_tm_rate); + *pos += scnprintf(buf + *pos, len - *pos, "MAX QSET number: %u\n", + dev_specs->max_qset_num); ++ *pos += scnprintf(buf + *pos, len - *pos, "umv size: %u\n", ++ dev_specs->umv_size); + } + + static int hns3_dbg_dev_info(struct hnae3_handle *h, char *buf, int len) +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +index 33244472e0d0e..cfbb7c51b0cb3 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +@@ -1188,7 +1188,9 @@ struct hclge_dev_specs_1_cmd { + __le16 max_frm_size; + __le16 max_qset_num; + __le16 max_int_gl; +- u8 rsv1[18]; ++ u8 rsv0[2]; ++ __le16 
umv_size; ++ u8 rsv1[14]; + }; + + /* mac speed type defined in firmware command */ +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 598da1be22ebe..3423b8e278e3a 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -1343,8 +1343,6 @@ static void hclge_parse_cfg(struct hclge_cfg *cfg, struct hclge_desc *desc) + cfg->umv_space = hnae3_get_field(__le32_to_cpu(req->param[1]), + HCLGE_CFG_UMV_TBL_SPACE_M, + HCLGE_CFG_UMV_TBL_SPACE_S); +- if (!cfg->umv_space) +- cfg->umv_space = HCLGE_DEFAULT_UMV_SPACE_PER_PF; + + cfg->pf_rss_size_max = hnae3_get_field(__le32_to_cpu(req->param[2]), + HCLGE_CFG_PF_RSS_SIZE_M, +@@ -1420,6 +1418,7 @@ static void hclge_set_default_dev_specs(struct hclge_dev *hdev) + ae_dev->dev_specs.max_int_gl = HCLGE_DEF_MAX_INT_GL; + ae_dev->dev_specs.max_frm_size = HCLGE_MAC_MAX_FRAME; + ae_dev->dev_specs.max_qset_num = HCLGE_MAX_QSET_NUM; ++ ae_dev->dev_specs.umv_size = HCLGE_DEFAULT_UMV_SPACE_PER_PF; + } + + static void hclge_parse_dev_specs(struct hclge_dev *hdev, +@@ -1441,6 +1440,7 @@ static void hclge_parse_dev_specs(struct hclge_dev *hdev, + ae_dev->dev_specs.max_qset_num = le16_to_cpu(req1->max_qset_num); + ae_dev->dev_specs.max_int_gl = le16_to_cpu(req1->max_int_gl); + ae_dev->dev_specs.max_frm_size = le16_to_cpu(req1->max_frm_size); ++ ae_dev->dev_specs.umv_size = le16_to_cpu(req1->umv_size); + } + + static void hclge_check_dev_specs(struct hclge_dev *hdev) +@@ -1461,6 +1461,8 @@ static void hclge_check_dev_specs(struct hclge_dev *hdev) + dev_specs->max_int_gl = HCLGE_DEF_MAX_INT_GL; + if (!dev_specs->max_frm_size) + dev_specs->max_frm_size = HCLGE_MAC_MAX_FRAME; ++ if (!dev_specs->umv_size) ++ dev_specs->umv_size = HCLGE_DEFAULT_UMV_SPACE_PER_PF; + } + + static int hclge_query_dev_specs(struct hclge_dev *hdev) +@@ -1550,7 +1552,10 @@ static int hclge_configure(struct hclge_dev 
*hdev) + hdev->tm_info.num_pg = 1; + hdev->tc_max = cfg.tc_num; + hdev->tm_info.hw_pfc_map = 0; +- hdev->wanted_umv_size = cfg.umv_space; ++ if (cfg.umv_space) ++ hdev->wanted_umv_size = cfg.umv_space; ++ else ++ hdev->wanted_umv_size = hdev->ae_dev->dev_specs.umv_size; + hdev->tx_spare_buf_size = cfg.tx_spare_buf_size; + hdev->gro_en = true; + if (cfg.vlan_fliter_cap == HCLGE_VLAN_FLTR_CAN_MDF) +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-refactor-function-hclge_mbx_handler.patch b/queue-5.15/net-hns3-refactor-function-hclge_mbx_handler.patch new file mode 100644 index 00000000000..56b85bd4069 --- /dev/null +++ b/queue-5.15/net-hns3-refactor-function-hclge_mbx_handler.patch @@ -0,0 +1,496 @@ +From 84e51fb2bf44f4af859538d78963c7e12427ba81 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 16 Sep 2022 10:38:02 +0800 +Subject: net: hns3: refactor function hclge_mbx_handler() + +From: Hao Lan + +[ Upstream commit 09431ed8de874881e2d5d430042d718ae074d371 ] + +Currently, the function hclge_mbx_handler() has too many switch-case +statements, it makes this function too long. To improve code readability, +refactor this function and use lookup table instead. 
+ +Signed-off-by: Hao Lan +Signed-off-by: Guangbin Huang +Signed-off-by: Jakub Kicinski +Stable-dep-of: 669554c512d2 ("net: hns3: direct return when receive a unknown mailbox message") +Signed-off-by: Sasha Levin +--- + .../net/ethernet/hisilicon/hns3/hclge_mbx.h | 11 + + .../hisilicon/hns3/hns3pf/hclge_mbx.c | 415 ++++++++++++------ + 2 files changed, 284 insertions(+), 142 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +index 09a2a7c9fca43..debbaa1822aa0 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hclge_mbx.h +@@ -212,6 +212,17 @@ struct hclgevf_mbx_arq_ring { + __le16 msg_q[HCLGE_MBX_MAX_ARQ_MSG_NUM][HCLGE_MBX_MAX_ARQ_MSG_SIZE]; + }; + ++struct hclge_dev; ++ ++#define HCLGE_MBX_OPCODE_MAX 256 ++struct hclge_mbx_ops_param { ++ struct hclge_vport *vport; ++ struct hclge_mbx_vf_to_pf_cmd *req; ++ struct hclge_respond_to_vf_msg *resp_msg; ++}; ++ ++typedef int (*hclge_mbx_ops_fn)(struct hclge_mbx_ops_param *param); ++ + #define hclge_mbx_ring_ptr_move_crq(crq) \ + (crq->next_to_use = (crq->next_to_use + 1) % crq->desc_num) + #define hclge_mbx_tail_ptr_move_arq(arq) \ +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 839555bf4bc49..39848bcd77c75 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -774,17 +774,284 @@ static void hclge_handle_vf_tbl(struct hclge_vport *vport, + } + } + ++static int ++hclge_mbx_map_ring_to_vector_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_map_unmap_ring_to_vf_vector(param->vport, true, ++ param->req); ++} ++ ++static int ++hclge_mbx_unmap_ring_to_vector_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_map_unmap_ring_to_vf_vector(param->vport, false, ++ param->req); ++} ++ ++static int 
++hclge_mbx_get_ring_vector_map_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_get_vf_ring_vector_map(param->vport, param->req, ++ param->resp_msg); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "PF fail(%d) to get VF ring vector map\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_set_promisc_mode_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_set_vf_promisc_mode(param->vport, param->req); ++ return 0; ++} ++ ++static int hclge_mbx_set_unicast_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_set_vf_uc_mac_addr(param->vport, param->req); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "PF fail(%d) to set VF UC MAC Addr\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_set_multicast_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_set_vf_mc_mac_addr(param->vport, param->req); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "PF fail(%d) to set VF MC MAC Addr\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_set_vlan_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_set_vf_vlan_cfg(param->vport, param->req, param->resp_msg); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "PF failed(%d) to config VF's VLAN\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_set_alive_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_set_vf_alive(param->vport, param->req); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "PF failed(%d) to set VF's ALIVE\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_get_qinfo_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_get_vf_queue_info(param->vport, param->resp_msg); ++ return 0; ++} ++ ++static int hclge_mbx_get_qdepth_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_get_vf_queue_depth(param->vport, param->resp_msg); ++ return 0; ++} ++ ++static int hclge_mbx_get_basic_info_handler(struct hclge_mbx_ops_param
*param) ++{ ++ hclge_get_basic_info(param->vport, param->resp_msg); ++ return 0; ++} ++ ++static int hclge_mbx_get_link_status_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_push_vf_link_status(param->vport); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "failed to inform link stat to VF, ret = %d\n", ++ ret); ++ return ret; ++} ++ ++static int hclge_mbx_queue_reset_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_mbx_reset_vf_queue(param->vport, param->req, ++ param->resp_msg); ++} ++ ++static int hclge_mbx_reset_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_reset_vf(param->vport); ++} ++ ++static int hclge_mbx_keep_alive_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_vf_keep_alive(param->vport); ++ return 0; ++} ++ ++static int hclge_mbx_set_mtu_handler(struct hclge_mbx_ops_param *param) ++{ ++ int ret; ++ ++ ret = hclge_set_vf_mtu(param->vport, param->req); ++ if (ret) ++ dev_err(&param->vport->back->pdev->dev, ++ "VF fail(%d) to set mtu\n", ret); ++ return ret; ++} ++ ++static int hclge_mbx_get_qid_in_pf_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_get_queue_id_in_pf(param->vport, param->req, ++ param->resp_msg); ++} ++ ++static int hclge_mbx_get_rss_key_handler(struct hclge_mbx_ops_param *param) ++{ ++ return hclge_get_rss_key(param->vport, param->req, param->resp_msg); ++} ++ ++static int hclge_mbx_get_link_mode_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_get_link_mode(param->vport, param->req); ++ return 0; ++} ++ ++static int ++hclge_mbx_get_vf_flr_status_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_rm_vport_all_mac_table(param->vport, false, ++ HCLGE_MAC_ADDR_UC); ++ hclge_rm_vport_all_mac_table(param->vport, false, ++ HCLGE_MAC_ADDR_MC); ++ hclge_rm_vport_all_vlan_table(param->vport, false); ++ return 0; ++} ++ ++static int hclge_mbx_vf_uninit_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_rm_vport_all_mac_table(param->vport, true,
++ HCLGE_MAC_ADDR_UC); ++ hclge_rm_vport_all_mac_table(param->vport, true, ++ HCLGE_MAC_ADDR_MC); ++ hclge_rm_vport_all_vlan_table(param->vport, true); ++ return 0; ++} ++ ++static int hclge_mbx_get_media_type_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_get_vf_media_type(param->vport, param->resp_msg); ++ return 0; ++} ++ ++static int hclge_mbx_push_link_status_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_handle_link_change_event(param->vport->back, param->req); ++ return 0; ++} ++ ++static int hclge_mbx_get_mac_addr_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_get_vf_mac_addr(param->vport, param->resp_msg); ++ return 0; ++} ++ ++static int hclge_mbx_ncsi_error_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_handle_ncsi_error(param->vport->back); ++ return 0; ++} ++ ++static int hclge_mbx_handle_vf_tbl_handler(struct hclge_mbx_ops_param *param) ++{ ++ hclge_handle_vf_tbl(param->vport, param->req); ++ return 0; ++} ++ ++static const hclge_mbx_ops_fn hclge_mbx_ops_list[HCLGE_MBX_OPCODE_MAX] = { ++ [HCLGE_MBX_RESET] = hclge_mbx_reset_handler, ++ [HCLGE_MBX_SET_UNICAST] = hclge_mbx_set_unicast_handler, ++ [HCLGE_MBX_SET_MULTICAST] = hclge_mbx_set_multicast_handler, ++ [HCLGE_MBX_SET_VLAN] = hclge_mbx_set_vlan_handler, ++ [HCLGE_MBX_MAP_RING_TO_VECTOR] = hclge_mbx_map_ring_to_vector_handler, ++ [HCLGE_MBX_UNMAP_RING_TO_VECTOR] = hclge_mbx_unmap_ring_to_vector_handler, ++ [HCLGE_MBX_SET_PROMISC_MODE] = hclge_mbx_set_promisc_mode_handler, ++ [HCLGE_MBX_GET_QINFO] = hclge_mbx_get_qinfo_handler, ++ [HCLGE_MBX_GET_QDEPTH] = hclge_mbx_get_qdepth_handler, ++ [HCLGE_MBX_GET_BASIC_INFO] = hclge_mbx_get_basic_info_handler, ++ [HCLGE_MBX_GET_RSS_KEY] = hclge_mbx_get_rss_key_handler, ++ [HCLGE_MBX_GET_MAC_ADDR] = hclge_mbx_get_mac_addr_handler, ++ [HCLGE_MBX_GET_LINK_STATUS] = hclge_mbx_get_link_status_handler, ++ [HCLGE_MBX_QUEUE_RESET] = hclge_mbx_queue_reset_handler, ++ [HCLGE_MBX_KEEP_ALIVE] = hclge_mbx_keep_alive_handler, ++ 
[HCLGE_MBX_SET_ALIVE] = hclge_mbx_set_alive_handler, ++ [HCLGE_MBX_SET_MTU] = hclge_mbx_set_mtu_handler, ++ [HCLGE_MBX_GET_QID_IN_PF] = hclge_mbx_get_qid_in_pf_handler, ++ [HCLGE_MBX_GET_LINK_MODE] = hclge_mbx_get_link_mode_handler, ++ [HCLGE_MBX_GET_MEDIA_TYPE] = hclge_mbx_get_media_type_handler, ++ [HCLGE_MBX_VF_UNINIT] = hclge_mbx_vf_uninit_handler, ++ [HCLGE_MBX_HANDLE_VF_TBL] = hclge_mbx_handle_vf_tbl_handler, ++ [HCLGE_MBX_GET_RING_VECTOR_MAP] = hclge_mbx_get_ring_vector_map_handler, ++ [HCLGE_MBX_GET_VF_FLR_STATUS] = hclge_mbx_get_vf_flr_status_handler, ++ [HCLGE_MBX_PUSH_LINK_STATUS] = hclge_mbx_push_link_status_handler, ++ [HCLGE_MBX_NCSI_ERROR] = hclge_mbx_ncsi_error_handler, ++}; ++ ++static void hclge_mbx_request_handling(struct hclge_mbx_ops_param *param) ++{ ++ hclge_mbx_ops_fn cmd_func = NULL; ++ struct hclge_dev *hdev; ++ int ret = 0; ++ ++ hdev = param->vport->back; ++ cmd_func = hclge_mbx_ops_list[param->req->msg.code]; ++ if (cmd_func) ++ ret = cmd_func(param); ++ else ++ dev_err(&hdev->pdev->dev, ++ "un-supported mailbox message, code = %u\n", ++ param->req->msg.code); ++ ++ /* PF driver should not reply IMP */ ++ if (hnae3_get_bit(param->req->mbx_need_resp, HCLGE_MBX_NEED_RESP_B) && ++ param->req->msg.code < HCLGE_MBX_GET_VF_FLR_STATUS) { ++ param->resp_msg->status = ret; ++ if (time_is_before_jiffies(hdev->last_mbx_scheduled + ++ HCLGE_MBX_SCHED_TIMEOUT)) ++ dev_warn(&hdev->pdev->dev, ++ "resp vport%u mbx(%u,%u) late\n", ++ param->req->mbx_src_vfid, ++ param->req->msg.code, ++ param->req->msg.subcode); ++ ++ hclge_gen_resp_to_vf(param->vport, param->req, param->resp_msg); ++ } ++} ++ + void hclge_mbx_handler(struct hclge_dev *hdev) + { + struct hclge_cmq_ring *crq = &hdev->hw.cmq.crq; + struct hclge_respond_to_vf_msg resp_msg; + struct hclge_mbx_vf_to_pf_cmd *req; +- struct hclge_vport *vport; ++ struct hclge_mbx_ops_param param; + struct hclge_desc *desc; +- bool is_del = false; + unsigned int flag; +- int ret = 0; + ++ param.resp_msg = 
&resp_msg; + /* handle all the mailbox requests in the queue */ + while (!hclge_cmd_crq_empty(&hdev->hw)) { + if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) { +@@ -808,152 +1075,16 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + continue; + } + +- vport = &hdev->vport[req->mbx_src_vfid]; +- + trace_hclge_pf_mbx_get(hdev, req); + + /* clear the resp_msg before processing every mailbox message */ + memset(&resp_msg, 0, sizeof(resp_msg)); +- +- switch (req->msg.code) { +- case HCLGE_MBX_MAP_RING_TO_VECTOR: +- ret = hclge_map_unmap_ring_to_vf_vector(vport, true, +- req); +- break; +- case HCLGE_MBX_UNMAP_RING_TO_VECTOR: +- ret = hclge_map_unmap_ring_to_vf_vector(vport, false, +- req); +- break; +- case HCLGE_MBX_GET_RING_VECTOR_MAP: +- ret = hclge_get_vf_ring_vector_map(vport, req, +- &resp_msg); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "PF fail(%d) to get VF ring vector map\n", +- ret); +- break; +- case HCLGE_MBX_SET_PROMISC_MODE: +- hclge_set_vf_promisc_mode(vport, req); +- break; +- case HCLGE_MBX_SET_UNICAST: +- ret = hclge_set_vf_uc_mac_addr(vport, req); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "PF fail(%d) to set VF UC MAC Addr\n", +- ret); +- break; +- case HCLGE_MBX_SET_MULTICAST: +- ret = hclge_set_vf_mc_mac_addr(vport, req); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "PF fail(%d) to set VF MC MAC Addr\n", +- ret); +- break; +- case HCLGE_MBX_SET_VLAN: +- ret = hclge_set_vf_vlan_cfg(vport, req, &resp_msg); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "PF failed(%d) to config VF's VLAN\n", +- ret); +- break; +- case HCLGE_MBX_SET_ALIVE: +- ret = hclge_set_vf_alive(vport, req); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "PF failed(%d) to set VF's ALIVE\n", +- ret); +- break; +- case HCLGE_MBX_GET_QINFO: +- hclge_get_vf_queue_info(vport, &resp_msg); +- break; +- case HCLGE_MBX_GET_QDEPTH: +- hclge_get_vf_queue_depth(vport, &resp_msg); +- break; +- case HCLGE_MBX_GET_BASIC_INFO: +- hclge_get_basic_info(vport, &resp_msg); +- break; +- case 
HCLGE_MBX_GET_LINK_STATUS: +- ret = hclge_push_vf_link_status(vport); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "failed to inform link stat to VF, ret = %d\n", +- ret); +- break; +- case HCLGE_MBX_QUEUE_RESET: +- ret = hclge_mbx_reset_vf_queue(vport, req, &resp_msg); +- break; +- case HCLGE_MBX_RESET: +- ret = hclge_reset_vf(vport); +- break; +- case HCLGE_MBX_KEEP_ALIVE: +- hclge_vf_keep_alive(vport); +- break; +- case HCLGE_MBX_SET_MTU: +- ret = hclge_set_vf_mtu(vport, req); +- if (ret) +- dev_err(&hdev->pdev->dev, +- "VF fail(%d) to set mtu\n", ret); +- break; +- case HCLGE_MBX_GET_QID_IN_PF: +- ret = hclge_get_queue_id_in_pf(vport, req, &resp_msg); +- break; +- case HCLGE_MBX_GET_RSS_KEY: +- ret = hclge_get_rss_key(vport, req, &resp_msg); +- break; +- case HCLGE_MBX_GET_LINK_MODE: +- hclge_get_link_mode(vport, req); +- break; +- case HCLGE_MBX_GET_VF_FLR_STATUS: +- case HCLGE_MBX_VF_UNINIT: +- is_del = req->msg.code == HCLGE_MBX_VF_UNINIT; +- hclge_rm_vport_all_mac_table(vport, is_del, +- HCLGE_MAC_ADDR_UC); +- hclge_rm_vport_all_mac_table(vport, is_del, +- HCLGE_MAC_ADDR_MC); +- hclge_rm_vport_all_vlan_table(vport, is_del); +- break; +- case HCLGE_MBX_GET_MEDIA_TYPE: +- hclge_get_vf_media_type(vport, &resp_msg); +- break; +- case HCLGE_MBX_PUSH_LINK_STATUS: +- hclge_handle_link_change_event(hdev, req); +- break; +- case HCLGE_MBX_GET_MAC_ADDR: +- hclge_get_vf_mac_addr(vport, &resp_msg); +- break; +- case HCLGE_MBX_NCSI_ERROR: +- hclge_handle_ncsi_error(hdev); +- break; +- case HCLGE_MBX_HANDLE_VF_TBL: +- hclge_handle_vf_tbl(vport, req); +- break; +- default: +- dev_err(&hdev->pdev->dev, +- "un-supported mailbox message, code = %u\n", +- req->msg.code); +- break; +- } +- +- /* PF driver should not reply IMP */ +- if (hnae3_get_bit(req->mbx_need_resp, HCLGE_MBX_NEED_RESP_B) && +- req->msg.code < HCLGE_MBX_GET_VF_FLR_STATUS) { +- resp_msg.status = ret; +- if (time_is_before_jiffies(hdev->last_mbx_scheduled + +- HCLGE_MBX_SCHED_TIMEOUT)) +- 
dev_warn(&hdev->pdev->dev, +- "resp vport%u mbx(%u,%u) late\n", +- req->mbx_src_vfid, +- req->msg.code, +- req->msg.subcode); +- +- hclge_gen_resp_to_vf(vport, req, &resp_msg); +- } ++ param.vport = &hdev->vport[req->mbx_src_vfid]; ++ param.req = req; ++ hclge_mbx_request_handling(&param); + + crq->desc[crq->next_to_use].flag = 0; + hclge_mbx_ring_ptr_move_crq(crq); +- +- /* reinitialize ret after complete the mbx message processing */ +- ret = 0; + } + + /* Write back CMDQ_RQ header pointer, M7 need this pointer */ +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-refactor-hclge_cmd_send-with-new-hclge_comm.patch b/queue-5.15/net-hns3-refactor-hclge_cmd_send-with-new-hclge_comm.patch new file mode 100644 index 00000000000..8601883a19e --- /dev/null +++ b/queue-5.15/net-hns3-refactor-hclge_cmd_send-with-new-hclge_comm.patch @@ -0,0 +1,918 @@ +From f358c9d8b29c368ff7e511f3cd63c562dd5b7780 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 31 Dec 2021 18:22:35 +0800 +Subject: net: hns3: refactor hclge_cmd_send with new hclge_comm_cmd_send API +
+From: Jie Wang + +[ Upstream commit eaa5607db377a73e639162a459d8b125c6a67bfb ] + +This patch firstly uses the new hardware description struct hclge_comm_hw +as a child member of hclge_hw and deletes the original child members of +hclge_hw. All the hclge_hw variables used in the PF module are modified +according to the new hclge_hw. + +Secondly, hclge_cmd_send is refactored to use the hclge_comm_cmd_send +APIs. The old functions called by hclge_cmd_send are deleted and +hclge_cmd_send is kept to avoid too many meaningless modifications. + +Signed-off-by: Jie Wang +Signed-off-by: Guangbin Huang +Signed-off-by: David S.
Miller +Stable-dep-of: 6639a7b95321 ("net: hns3: change type of numa_node_mask as nodemask_t") +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/Makefile | 7 +- + .../hisilicon/hns3/hns3pf/hclge_cmd.c | 311 +++--------------- + .../hisilicon/hns3/hns3pf/hclge_cmd.h | 72 +--- + .../hisilicon/hns3/hns3pf/hclge_main.c | 56 ++-- + .../hisilicon/hns3/hns3pf/hclge_main.h | 10 +- + .../hisilicon/hns3/hns3pf/hclge_mbx.c | 11 +- + .../hisilicon/hns3/hns3pf/hclge_mdio.c | 4 +- + .../hisilicon/hns3/hns3pf/hclge_ptp.c | 2 +- + 8 files changed, 100 insertions(+), 373 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile +index 33e546cef2881..cb3aaf5252d07 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/Makefile ++++ b/drivers/net/ethernet/hisilicon/hns3/Makefile +@@ -16,10 +16,13 @@ hns3-objs = hns3_enet.o hns3_ethtool.o hns3_debugfs.o + hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o + + obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o +-hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_cmd.o hns3vf/hclgevf_mbx.o hns3vf/hclgevf_devlink.o ++ ++hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_cmd.o hns3vf/hclgevf_mbx.o hns3vf/hclgevf_devlink.o \ ++ hns3_common/hclge_comm_cmd.o + + obj-$(CONFIG_HNS3_HCLGE) += hclge.o + hclge-objs = hns3pf/hclge_main.o hns3pf/hclge_cmd.o hns3pf/hclge_mdio.o hns3pf/hclge_tm.o \ +- hns3pf/hclge_mbx.o hns3pf/hclge_err.o hns3pf/hclge_debugfs.o hns3pf/hclge_ptp.o hns3pf/hclge_devlink.o ++ hns3pf/hclge_mbx.o hns3pf/hclge_err.o hns3pf/hclge_debugfs.o hns3pf/hclge_ptp.o hns3pf/hclge_devlink.o \ ++ hns3_common/hclge_comm_cmd.o + + hclge-$(CONFIG_HNS3_DCB) += hns3pf/hclge_dcb.o +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c +index 9c2eeaa822944..59dd2283d25bb 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.c +@@ -11,46 
+11,24 @@ + #include "hnae3.h" + #include "hclge_main.h" + +-#define cmq_ring_to_dev(ring) (&(ring)->dev->pdev->dev) +- +-static int hclge_ring_space(struct hclge_cmq_ring *ring) +-{ +- int ntu = ring->next_to_use; +- int ntc = ring->next_to_clean; +- int used = (ntu - ntc + ring->desc_num) % ring->desc_num; +- +- return ring->desc_num - used - 1; +-} +- +-static int is_valid_csq_clean_head(struct hclge_cmq_ring *ring, int head) +-{ +- int ntu = ring->next_to_use; +- int ntc = ring->next_to_clean; +- +- if (ntu > ntc) +- return head >= ntc && head <= ntu; +- +- return head >= ntc || head <= ntu; +-} +- +-static int hclge_alloc_cmd_desc(struct hclge_cmq_ring *ring) ++static int hclge_alloc_cmd_desc(struct hclge_comm_cmq_ring *ring) + { + int size = ring->desc_num * sizeof(struct hclge_desc); + +- ring->desc = dma_alloc_coherent(cmq_ring_to_dev(ring), size, +- &ring->desc_dma_addr, GFP_KERNEL); ++ ring->desc = dma_alloc_coherent(&ring->pdev->dev, ++ size, &ring->desc_dma_addr, GFP_KERNEL); + if (!ring->desc) + return -ENOMEM; + + return 0; + } + +-static void hclge_free_cmd_desc(struct hclge_cmq_ring *ring) ++static void hclge_free_cmd_desc(struct hclge_comm_cmq_ring *ring) + { + int size = ring->desc_num * sizeof(struct hclge_desc); + + if (ring->desc) { +- dma_free_coherent(cmq_ring_to_dev(ring), size, ++ dma_free_coherent(&ring->pdev->dev, size, + ring->desc, ring->desc_dma_addr); + ring->desc = NULL; + } +@@ -59,12 +37,13 @@ static void hclge_free_cmd_desc(struct hclge_cmq_ring *ring) + static int hclge_alloc_cmd_queue(struct hclge_dev *hdev, int ring_type) + { + struct hclge_hw *hw = &hdev->hw; +- struct hclge_cmq_ring *ring = +- (ring_type == HCLGE_TYPE_CSQ) ? &hw->cmq.csq : &hw->cmq.crq; ++ struct hclge_comm_cmq_ring *ring = ++ (ring_type == HCLGE_TYPE_CSQ) ? 
&hw->hw.cmq.csq : ++ &hw->hw.cmq.crq; + int ret; + + ring->ring_type = ring_type; +- ring->dev = hdev; ++ ring->pdev = hdev->pdev; + + ret = hclge_alloc_cmd_desc(ring); + if (ret) { +@@ -96,11 +75,10 @@ void hclge_cmd_setup_basic_desc(struct hclge_desc *desc, + desc->flag |= cpu_to_le16(HCLGE_CMD_FLAG_WR); + } + +-static void hclge_cmd_config_regs(struct hclge_cmq_ring *ring) ++static void hclge_cmd_config_regs(struct hclge_hw *hw, ++ struct hclge_comm_cmq_ring *ring) + { + dma_addr_t dma = ring->desc_dma_addr; +- struct hclge_dev *hdev = ring->dev; +- struct hclge_hw *hw = &hdev->hw; + u32 reg_val; + + if (ring->ring_type == HCLGE_TYPE_CSQ) { +@@ -128,176 +106,8 @@ static void hclge_cmd_config_regs(struct hclge_cmq_ring *ring) + + static void hclge_cmd_init_regs(struct hclge_hw *hw) + { +- hclge_cmd_config_regs(&hw->cmq.csq); +- hclge_cmd_config_regs(&hw->cmq.crq); +-} +- +-static int hclge_cmd_csq_clean(struct hclge_hw *hw) +-{ +- struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw); +- struct hclge_cmq_ring *csq = &hw->cmq.csq; +- u32 head; +- int clean; +- +- head = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG); +- rmb(); /* Make sure head is ready before touch any data */ +- +- if (!is_valid_csq_clean_head(csq, head)) { +- dev_warn(&hdev->pdev->dev, "wrong cmd head (%u, %d-%d)\n", head, +- csq->next_to_use, csq->next_to_clean); +- dev_warn(&hdev->pdev->dev, +- "Disabling any further commands to IMP firmware\n"); +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); +- dev_warn(&hdev->pdev->dev, +- "IMP firmware watchdog reset soon expected!\n"); +- return -EIO; +- } +- +- clean = (head - csq->next_to_clean + csq->desc_num) % csq->desc_num; +- csq->next_to_clean = head; +- return clean; +-} +- +-static int hclge_cmd_csq_done(struct hclge_hw *hw) +-{ +- u32 head = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG); +- return head == hw->cmq.csq.next_to_use; +-} +- +-static bool hclge_is_special_opcode(u16 opcode) +-{ +- /* these commands have several descriptors, 
+- * and use the first one to save opcode and return value +- */ +- static const u16 spec_opcode[] = { +- HCLGE_OPC_STATS_64_BIT, +- HCLGE_OPC_STATS_32_BIT, +- HCLGE_OPC_STATS_MAC, +- HCLGE_OPC_STATS_MAC_ALL, +- HCLGE_OPC_QUERY_32_BIT_REG, +- HCLGE_OPC_QUERY_64_BIT_REG, +- HCLGE_QUERY_CLEAR_MPF_RAS_INT, +- HCLGE_QUERY_CLEAR_PF_RAS_INT, +- HCLGE_QUERY_CLEAR_ALL_MPF_MSIX_INT, +- HCLGE_QUERY_CLEAR_ALL_PF_MSIX_INT, +- HCLGE_QUERY_ALL_ERR_INFO +- }; +- int i; +- +- for (i = 0; i < ARRAY_SIZE(spec_opcode); i++) { +- if (spec_opcode[i] == opcode) +- return true; +- } +- +- return false; +-} +- +-struct errcode { +- u32 imp_errcode; +- int common_errno; +-}; +- +-static void hclge_cmd_copy_desc(struct hclge_hw *hw, struct hclge_desc *desc, +- int num) +-{ +- struct hclge_desc *desc_to_use; +- int handle = 0; +- +- while (handle < num) { +- desc_to_use = &hw->cmq.csq.desc[hw->cmq.csq.next_to_use]; +- *desc_to_use = desc[handle]; +- (hw->cmq.csq.next_to_use)++; +- if (hw->cmq.csq.next_to_use >= hw->cmq.csq.desc_num) +- hw->cmq.csq.next_to_use = 0; +- handle++; +- } +-} +- +-static int hclge_cmd_convert_err_code(u16 desc_ret) +-{ +- struct errcode hclge_cmd_errcode[] = { +- {HCLGE_CMD_EXEC_SUCCESS, 0}, +- {HCLGE_CMD_NO_AUTH, -EPERM}, +- {HCLGE_CMD_NOT_SUPPORTED, -EOPNOTSUPP}, +- {HCLGE_CMD_QUEUE_FULL, -EXFULL}, +- {HCLGE_CMD_NEXT_ERR, -ENOSR}, +- {HCLGE_CMD_UNEXE_ERR, -ENOTBLK}, +- {HCLGE_CMD_PARA_ERR, -EINVAL}, +- {HCLGE_CMD_RESULT_ERR, -ERANGE}, +- {HCLGE_CMD_TIMEOUT, -ETIME}, +- {HCLGE_CMD_HILINK_ERR, -ENOLINK}, +- {HCLGE_CMD_QUEUE_ILLEGAL, -ENXIO}, +- {HCLGE_CMD_INVALID, -EBADR}, +- }; +- u32 errcode_count = ARRAY_SIZE(hclge_cmd_errcode); +- u32 i; +- +- for (i = 0; i < errcode_count; i++) +- if (hclge_cmd_errcode[i].imp_errcode == desc_ret) +- return hclge_cmd_errcode[i].common_errno; +- +- return -EIO; +-} +- +-static int hclge_cmd_check_retval(struct hclge_hw *hw, struct hclge_desc *desc, +- int num, int ntc) +-{ +- u16 opcode, desc_ret; +- int handle; +- +- opcode = 
le16_to_cpu(desc[0].opcode); +- for (handle = 0; handle < num; handle++) { +- desc[handle] = hw->cmq.csq.desc[ntc]; +- ntc++; +- if (ntc >= hw->cmq.csq.desc_num) +- ntc = 0; +- } +- if (likely(!hclge_is_special_opcode(opcode))) +- desc_ret = le16_to_cpu(desc[num - 1].retval); +- else +- desc_ret = le16_to_cpu(desc[0].retval); +- +- hw->cmq.last_status = desc_ret; +- +- return hclge_cmd_convert_err_code(desc_ret); +-} +- +-static int hclge_cmd_check_result(struct hclge_hw *hw, struct hclge_desc *desc, +- int num, int ntc) +-{ +- struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw); +- bool is_completed = false; +- u32 timeout = 0; +- int handle, ret; +- +- /** +- * If the command is sync, wait for the firmware to write back, +- * if multi descriptors to be sent, use the first one to check +- */ +- if (HCLGE_SEND_SYNC(le16_to_cpu(desc->flag))) { +- do { +- if (hclge_cmd_csq_done(hw)) { +- is_completed = true; +- break; +- } +- udelay(1); +- timeout++; +- } while (timeout < hw->cmq.tx_timeout); +- } +- +- if (!is_completed) +- ret = -EBADE; +- else +- ret = hclge_cmd_check_retval(hw, desc, num, ntc); +- +- /* Clean the command send queue */ +- handle = hclge_cmd_csq_clean(hw); +- if (handle < 0) +- ret = handle; +- else if (handle != num) +- dev_warn(&hdev->pdev->dev, +- "cleaned %d, need to clean %d\n", handle, num); +- return ret; ++ hclge_cmd_config_regs(hw, &hw->hw.cmq.csq); ++ hclge_cmd_config_regs(hw, &hw->hw.cmq.crq); + } + + /** +@@ -311,43 +121,7 @@ static int hclge_cmd_check_result(struct hclge_hw *hw, struct hclge_desc *desc, + **/ + int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num) + { +- struct hclge_dev *hdev = container_of(hw, struct hclge_dev, hw); +- struct hclge_cmq_ring *csq = &hw->cmq.csq; +- int ret; +- int ntc; +- +- spin_lock_bh(&hw->cmq.csq.lock); +- +- if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) { +- spin_unlock_bh(&hw->cmq.csq.lock); +- return -EBUSY; +- } +- +- if (num > 
hclge_ring_space(&hw->cmq.csq)) { +- /* If CMDQ ring is full, SW HEAD and HW HEAD may be different, +- * need update the SW HEAD pointer csq->next_to_clean +- */ +- csq->next_to_clean = hclge_read_dev(hw, HCLGE_NIC_CSQ_HEAD_REG); +- spin_unlock_bh(&hw->cmq.csq.lock); +- return -EBUSY; +- } +- +- /** +- * Record the location of desc in the ring for this time +- * which will be use for hardware to write back +- */ +- ntc = hw->cmq.csq.next_to_use; +- +- hclge_cmd_copy_desc(hw, desc, num); +- +- /* Write to hardware */ +- hclge_write_dev(hw, HCLGE_NIC_CSQ_TAIL_REG, hw->cmq.csq.next_to_use); +- +- ret = hclge_cmd_check_result(hw, desc, num, ntc); +- +- spin_unlock_bh(&hw->cmq.csq.lock); +- +- return ret; ++ return hclge_comm_cmd_send(&hw->hw, desc, num, true); + } + + static void hclge_set_default_capability(struct hclge_dev *hdev) +@@ -401,7 +175,7 @@ static __le32 hclge_build_api_caps(void) + return cpu_to_le32(api_caps); + } + +-static enum hclge_cmd_status ++static enum hclge_comm_cmd_status + hclge_cmd_query_version_and_capability(struct hclge_dev *hdev) + { + struct hnae3_ae_dev *ae_dev = pci_get_drvdata(hdev->pdev); +@@ -433,18 +207,22 @@ hclge_cmd_query_version_and_capability(struct hclge_dev *hdev) + + int hclge_cmd_queue_init(struct hclge_dev *hdev) + { ++ struct hclge_comm_cmq *cmdq = &hdev->hw.hw.cmq; + int ret; + + /* Setup the lock for command queue */ +- spin_lock_init(&hdev->hw.cmq.csq.lock); +- spin_lock_init(&hdev->hw.cmq.crq.lock); ++ spin_lock_init(&cmdq->csq.lock); ++ spin_lock_init(&cmdq->crq.lock); ++ ++ cmdq->csq.pdev = hdev->pdev; ++ cmdq->crq.pdev = hdev->pdev; + + /* Setup the queue entries for use cmd queue */ +- hdev->hw.cmq.csq.desc_num = HCLGE_NIC_CMQ_DESC_NUM; +- hdev->hw.cmq.crq.desc_num = HCLGE_NIC_CMQ_DESC_NUM; ++ cmdq->csq.desc_num = HCLGE_NIC_CMQ_DESC_NUM; ++ cmdq->crq.desc_num = HCLGE_NIC_CMQ_DESC_NUM; + + /* Setup Tx write back timeout */ +- hdev->hw.cmq.tx_timeout = HCLGE_CMDQ_TX_TIMEOUT; ++ cmdq->tx_timeout = 
HCLGE_CMDQ_TX_TIMEOUT; + + /* Setup queue rings */ + ret = hclge_alloc_cmd_queue(hdev, HCLGE_TYPE_CSQ); +@@ -463,7 +241,7 @@ int hclge_cmd_queue_init(struct hclge_dev *hdev) + + return 0; + err_csq: +- hclge_free_cmd_desc(&hdev->hw.cmq.csq); ++ hclge_free_cmd_desc(&hdev->hw.hw.cmq.csq); + return ret; + } + +@@ -491,22 +269,23 @@ static int hclge_firmware_compat_config(struct hclge_dev *hdev, bool en) + + int hclge_cmd_init(struct hclge_dev *hdev) + { ++ struct hclge_comm_cmq *cmdq = &hdev->hw.hw.cmq; + int ret; + +- spin_lock_bh(&hdev->hw.cmq.csq.lock); +- spin_lock(&hdev->hw.cmq.crq.lock); ++ spin_lock_bh(&cmdq->csq.lock); ++ spin_lock(&cmdq->crq.lock); + +- hdev->hw.cmq.csq.next_to_clean = 0; +- hdev->hw.cmq.csq.next_to_use = 0; +- hdev->hw.cmq.crq.next_to_clean = 0; +- hdev->hw.cmq.crq.next_to_use = 0; ++ cmdq->csq.next_to_clean = 0; ++ cmdq->csq.next_to_use = 0; ++ cmdq->crq.next_to_clean = 0; ++ cmdq->crq.next_to_use = 0; + + hclge_cmd_init_regs(&hdev->hw); + +- spin_unlock(&hdev->hw.cmq.crq.lock); +- spin_unlock_bh(&hdev->hw.cmq.csq.lock); ++ spin_unlock(&cmdq->crq.lock); ++ spin_unlock_bh(&cmdq->csq.lock); + +- clear_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ clear_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + + /* Check if there is new reset pending, because the higher level + * reset may happen when lower level reset is being processed. 
+@@ -550,7 +329,7 @@ int hclge_cmd_init(struct hclge_dev *hdev) + return 0; + + err_cmd_init: +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + + return ret; + } +@@ -571,19 +350,23 @@ static void hclge_cmd_uninit_regs(struct hclge_hw *hw) + + void hclge_cmd_uninit(struct hclge_dev *hdev) + { ++ struct hclge_comm_cmq *cmdq = &hdev->hw.hw.cmq; ++ ++ cmdq->csq.pdev = hdev->pdev; ++ + hclge_firmware_compat_config(hdev, false); + +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + /* wait to ensure that the firmware completes the possible left + * over commands. + */ + msleep(HCLGE_CMDQ_CLEAR_WAIT_TIME); +- spin_lock_bh(&hdev->hw.cmq.csq.lock); +- spin_lock(&hdev->hw.cmq.crq.lock); ++ spin_lock_bh(&cmdq->csq.lock); ++ spin_lock(&cmdq->crq.lock); + hclge_cmd_uninit_regs(&hdev->hw); +- spin_unlock(&hdev->hw.cmq.crq.lock); +- spin_unlock_bh(&hdev->hw.cmq.csq.lock); ++ spin_unlock(&cmdq->crq.lock); ++ spin_unlock_bh(&cmdq->csq.lock); + +- hclge_free_cmd_desc(&hdev->hw.cmq.csq); +- hclge_free_cmd_desc(&hdev->hw.cmq.crq); ++ hclge_free_cmd_desc(&cmdq->csq); ++ hclge_free_cmd_desc(&cmdq->crq); + } +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +index e07709ef239df..303a7592bb18d 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_cmd.h +@@ -11,63 +11,18 @@ + + #define HCLGE_CMDQ_TX_TIMEOUT 30000 + #define HCLGE_CMDQ_CLEAR_WAIT_TIME 200 +-#define HCLGE_DESC_DATA_LEN 6 + + struct hclge_dev; + + #define HCLGE_CMDQ_RX_INVLD_B 0 + #define HCLGE_CMDQ_RX_OUTVLD_B 1 + +-struct hclge_cmq_ring { +- dma_addr_t desc_dma_addr; +- struct hclge_desc *desc; +- struct hclge_dev *dev; +- u32 head; +- u32 tail; +- +- u16 buf_size; +- u16 desc_num; +- int next_to_use; +- int next_to_clean; +- u8 ring_type; /* 
cmq ring type */ +- spinlock_t lock; /* Command queue lock */ +-}; +- +-enum hclge_cmd_return_status { +- HCLGE_CMD_EXEC_SUCCESS = 0, +- HCLGE_CMD_NO_AUTH = 1, +- HCLGE_CMD_NOT_SUPPORTED = 2, +- HCLGE_CMD_QUEUE_FULL = 3, +- HCLGE_CMD_NEXT_ERR = 4, +- HCLGE_CMD_UNEXE_ERR = 5, +- HCLGE_CMD_PARA_ERR = 6, +- HCLGE_CMD_RESULT_ERR = 7, +- HCLGE_CMD_TIMEOUT = 8, +- HCLGE_CMD_HILINK_ERR = 9, +- HCLGE_CMD_QUEUE_ILLEGAL = 10, +- HCLGE_CMD_INVALID = 11, +-}; +- +-enum hclge_cmd_status { +- HCLGE_STATUS_SUCCESS = 0, +- HCLGE_ERR_CSQ_FULL = -1, +- HCLGE_ERR_CSQ_TIMEOUT = -2, +- HCLGE_ERR_CSQ_ERROR = -3, +-}; +- + struct hclge_misc_vector { + u8 __iomem *addr; + int vector_irq; + char name[HNAE3_INT_NAME_LEN]; + }; + +-struct hclge_cmq { +- struct hclge_cmq_ring csq; +- struct hclge_cmq_ring crq; +- u16 tx_timeout; +- enum hclge_cmd_status last_status; +-}; +- + #define HCLGE_CMD_FLAG_IN BIT(0) + #define HCLGE_CMD_FLAG_OUT BIT(1) + #define HCLGE_CMD_FLAG_NEXT BIT(2) +@@ -1236,25 +1191,6 @@ struct hclge_caps_bit_map { + }; + + int hclge_cmd_init(struct hclge_dev *hdev); +-static inline void hclge_write_reg(void __iomem *base, u32 reg, u32 value) +-{ +- writel(value, base + reg); +-} +- +-#define hclge_write_dev(a, reg, value) \ +- hclge_write_reg((a)->io_base, reg, value) +-#define hclge_read_dev(a, reg) \ +- hclge_read_reg((a)->io_base, reg) +- +-static inline u32 hclge_read_reg(u8 __iomem *base, u32 reg) +-{ +- u8 __iomem *reg_addr = READ_ONCE(base); +- +- return readl(reg_addr + reg); +-} +- +-#define HCLGE_SEND_SYNC(flag) \ +- ((flag) & HCLGE_CMD_FLAG_NO_INTR) + + struct hclge_hw; + int hclge_cmd_send(struct hclge_hw *hw, struct hclge_desc *desc, int num); +@@ -1262,10 +1198,10 @@ void hclge_cmd_setup_basic_desc(struct hclge_desc *desc, + enum hclge_opcode_type opcode, bool is_read); + void hclge_cmd_reuse_desc(struct hclge_desc *desc, bool is_read); + +-enum hclge_cmd_status hclge_cmd_mdio_write(struct hclge_hw *hw, +- struct hclge_desc *desc); +-enum hclge_cmd_status 
hclge_cmd_mdio_read(struct hclge_hw *hw, +- struct hclge_desc *desc); ++enum hclge_comm_cmd_status hclge_cmd_mdio_write(struct hclge_hw *hw, ++ struct hclge_desc *desc); ++enum hclge_comm_cmd_status hclge_cmd_mdio_read(struct hclge_hw *hw, ++ struct hclge_desc *desc); + + void hclge_cmd_uninit(struct hclge_dev *hdev); + int hclge_cmd_queue_init(struct hclge_dev *hdev); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 93e55c6c4cf5e..a744ebb72b137 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -24,6 +24,7 @@ + #include "hclge_err.h" + #include "hnae3.h" + #include "hclge_devlink.h" ++#include "hclge_comm_cmd.h" + + #define HCLGE_NAME "hclge" + +@@ -1677,11 +1678,11 @@ static int hclge_alloc_tqps(struct hclge_dev *hdev) + * HCLGE_TQP_MAX_SIZE_DEV_V2 + */ + if (i < HCLGE_TQP_MAX_SIZE_DEV_V2) +- tqp->q.io_base = hdev->hw.io_base + ++ tqp->q.io_base = hdev->hw.hw.io_base + + HCLGE_TQP_REG_OFFSET + + i * HCLGE_TQP_REG_SIZE; + else +- tqp->q.io_base = hdev->hw.io_base + ++ tqp->q.io_base = hdev->hw.hw.io_base + + HCLGE_TQP_REG_OFFSET + + HCLGE_TQP_EXT_REG_OFFSET + + (i - HCLGE_TQP_MAX_SIZE_DEV_V2) * +@@ -1825,7 +1826,7 @@ static int hclge_vport_setup(struct hclge_vport *vport, u16 num_tqps) + nic->pdev = hdev->pdev; + nic->ae_algo = &ae_algo; + nic->numa_node_mask = hdev->numa_node_mask; +- nic->kinfo.io_base = hdev->hw.io_base; ++ nic->kinfo.io_base = hdev->hw.hw.io_base; + + ret = hclge_knic_setup(vport, num_tqps, + hdev->num_tx_desc, hdev->num_rx_desc); +@@ -2511,8 +2512,8 @@ static int hclge_init_roce_base_info(struct hclge_vport *vport) + roce->rinfo.base_vector = hdev->num_nic_msi; + + roce->rinfo.netdev = nic->kinfo.netdev; +- roce->rinfo.roce_io_base = hdev->hw.io_base; +- roce->rinfo.roce_mem_base = hdev->hw.mem_base; ++ roce->rinfo.roce_io_base = hdev->hw.hw.io_base; ++ 
roce->rinfo.roce_mem_base = hdev->hw.hw.mem_base; + + roce->pdev = nic->pdev; + roce->ae_algo = nic->ae_algo; +@@ -3366,7 +3367,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) + if (BIT(HCLGE_VECTOR0_IMPRESET_INT_B) & msix_src_reg) { + dev_info(&hdev->pdev->dev, "IMP reset interrupt\n"); + set_bit(HNAE3_IMP_RESET, &hdev->reset_pending); +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + *clearval = BIT(HCLGE_VECTOR0_IMPRESET_INT_B); + hdev->rst_stats.imp_rst_cnt++; + return HCLGE_VECTOR0_EVENT_RST; +@@ -3374,7 +3375,7 @@ static u32 hclge_check_event_cause(struct hclge_dev *hdev, u32 *clearval) + + if (BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B) & msix_src_reg) { + dev_info(&hdev->pdev->dev, "global reset interrupt\n"); +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + set_bit(HNAE3_GLOBAL_RESET, &hdev->reset_pending); + *clearval = BIT(HCLGE_VECTOR0_GLOBALRESET_INT_B); + hdev->rst_stats.global_rst_cnt++; +@@ -3513,7 +3514,7 @@ static void hclge_get_misc_vector(struct hclge_dev *hdev) + + vector->vector_irq = pci_irq_vector(hdev->pdev, 0); + +- vector->addr = hdev->hw.io_base + HCLGE_MISC_VECTOR_REG_BASE; ++ vector->addr = hdev->hw.hw.io_base + HCLGE_MISC_VECTOR_REG_BASE; + hdev->vector_status[0] = 0; + + hdev->num_msi_left -= 1; +@@ -3697,7 +3698,7 @@ static int hclge_set_all_vf_rst(struct hclge_dev *hdev, bool reset) + static void hclge_mailbox_service_task(struct hclge_dev *hdev) + { + if (!test_and_clear_bit(HCLGE_STATE_MBX_SERVICE_SCHED, &hdev->state) || +- test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state) || ++ test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state) || + test_and_set_bit(HCLGE_STATE_MBX_HANDLING, &hdev->state)) + return; + +@@ -3944,7 +3945,7 @@ static int hclge_reset_prepare_wait(struct hclge_dev *hdev) + * any mailbox handling or command to firmware is only valid + * after 
hclge_cmd_init is called. + */ +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + hdev->rst_stats.pf_rst_cnt++; + break; + case HNAE3_FLR_RESET: +@@ -4498,11 +4499,11 @@ static void hclge_get_vector_info(struct hclge_dev *hdev, u16 idx, + + /* need an extend offset to config vector >= 64 */ + if (idx - 1 < HCLGE_PF_MAX_VECTOR_NUM_DEV_V2) +- vector_info->io_addr = hdev->hw.io_base + ++ vector_info->io_addr = hdev->hw.hw.io_base + + HCLGE_VECTOR_REG_BASE + + (idx - 1) * HCLGE_VECTOR_REG_OFFSET; + else +- vector_info->io_addr = hdev->hw.io_base + ++ vector_info->io_addr = hdev->hw.hw.io_base + + HCLGE_VECTOR_EXT_REG_BASE + + (idx - 1) / HCLGE_PF_MAX_VECTOR_NUM_DEV_V2 * + HCLGE_VECTOR_REG_OFFSET_H + +@@ -5140,7 +5141,7 @@ int hclge_bind_ring_with_vector(struct hclge_vport *vport, + struct hclge_desc desc; + struct hclge_ctrl_vector_chain_cmd *req = + (struct hclge_ctrl_vector_chain_cmd *)desc.data; +- enum hclge_cmd_status status; ++ enum hclge_comm_cmd_status status; + enum hclge_opcode_type op; + u16 tqp_type_and_id; + int i; +@@ -7666,7 +7667,7 @@ static bool hclge_get_cmdq_stat(struct hnae3_handle *handle) + struct hclge_vport *vport = hclge_get_vport(handle); + struct hclge_dev *hdev = vport->back; + +- return test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ return test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + } + + static bool hclge_ae_dev_resetting(struct hnae3_handle *handle) +@@ -8864,7 +8865,7 @@ int hclge_rm_mc_addr_common(struct hclge_vport *vport, + char format_mac_addr[HNAE3_FORMAT_MAC_ADDR_LEN]; + struct hclge_dev *hdev = vport->back; + struct hclge_mac_vlan_tbl_entry_cmd req; +- enum hclge_cmd_status status; ++ enum hclge_comm_cmd_status status; + struct hclge_desc desc[3]; + + /* mac addr check */ +@@ -11450,10 +11451,11 @@ static int hclge_dev_mem_map(struct hclge_dev *hdev) + if (!(pci_select_bars(pdev, IORESOURCE_MEM) & BIT(HCLGE_MEM_BAR))) + return 0; + +- 
hw->mem_base = devm_ioremap_wc(&pdev->dev, +- pci_resource_start(pdev, HCLGE_MEM_BAR), +- pci_resource_len(pdev, HCLGE_MEM_BAR)); +- if (!hw->mem_base) { ++ hw->hw.mem_base = ++ devm_ioremap_wc(&pdev->dev, ++ pci_resource_start(pdev, HCLGE_MEM_BAR), ++ pci_resource_len(pdev, HCLGE_MEM_BAR)); ++ if (!hw->hw.mem_base) { + dev_err(&pdev->dev, "failed to map device memory\n"); + return -EFAULT; + } +@@ -11492,8 +11494,8 @@ static int hclge_pci_init(struct hclge_dev *hdev) + + pci_set_master(pdev); + hw = &hdev->hw; +- hw->io_base = pcim_iomap(pdev, 2, 0); +- if (!hw->io_base) { ++ hw->hw.io_base = pcim_iomap(pdev, 2, 0); ++ if (!hw->hw.io_base) { + dev_err(&pdev->dev, "Can't map configuration register space\n"); + ret = -ENOMEM; + goto err_clr_master; +@@ -11508,7 +11510,7 @@ static int hclge_pci_init(struct hclge_dev *hdev) + return 0; + + err_unmap_io_base: +- pcim_iounmap(pdev, hdev->hw.io_base); ++ pcim_iounmap(pdev, hdev->hw.hw.io_base); + err_clr_master: + pci_clear_master(pdev); + pci_release_regions(pdev); +@@ -11522,10 +11524,10 @@ static void hclge_pci_uninit(struct hclge_dev *hdev) + { + struct pci_dev *pdev = hdev->pdev; + +- if (hdev->hw.mem_base) +- devm_iounmap(&pdev->dev, hdev->hw.mem_base); ++ if (hdev->hw.hw.mem_base) ++ devm_iounmap(&pdev->dev, hdev->hw.hw.mem_base); + +- pcim_iounmap(pdev, hdev->hw.io_base); ++ pcim_iounmap(pdev, hdev->hw.hw.io_base); + pci_free_irq_vectors(pdev); + pci_clear_master(pdev); + pci_release_mem_regions(pdev); +@@ -11586,7 +11588,7 @@ static void hclge_reset_prepare_general(struct hnae3_ae_dev *ae_dev, + + /* disable misc vector before reset done */ + hclge_enable_vector(&hdev->misc_vector, false); +- set_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state); ++ set_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state); + + if (hdev->reset_type == HNAE3_FLR_RESET) + hdev->rst_stats.flr_rst_cnt++; +@@ -11877,7 +11879,7 @@ static int hclge_init_ae_dev(struct hnae3_ae_dev *ae_dev) + err_devlink_uninit: + 
hclge_devlink_uninit(hdev); + err_pci_uninit: +- pcim_iounmap(pdev, hdev->hw.io_base); ++ pcim_iounmap(pdev, hdev->hw.hw.io_base); + pci_clear_master(pdev); + pci_release_regions(pdev); + pci_disable_device(pdev); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +index 6870ccc9d9eac..4e52a7d96483c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +@@ -228,7 +228,6 @@ enum HCLGE_DEV_STATE { + HCLGE_STATE_MBX_HANDLING, + HCLGE_STATE_ERR_SERVICE_SCHED, + HCLGE_STATE_STATISTICS_UPDATING, +- HCLGE_STATE_CMD_DISABLE, + HCLGE_STATE_LINK_UPDATING, + HCLGE_STATE_RST_FAIL, + HCLGE_STATE_FD_TBL_CHANGED, +@@ -297,11 +296,9 @@ struct hclge_mac { + }; + + struct hclge_hw { +- void __iomem *io_base; +- void __iomem *mem_base; ++ struct hclge_comm_hw hw; + struct hclge_mac mac; + int num_vec; +- struct hclge_cmq cmq; + }; + + /* TQP stats */ +@@ -616,6 +613,11 @@ struct key_info { + #define MAX_FD_FILTER_NUM 4096 + #define HCLGE_ARFS_EXPIRE_INTERVAL 5UL + ++#define hclge_read_dev(a, reg) \ ++ hclge_comm_read_reg((a)->hw.io_base, reg) ++#define hclge_write_dev(a, reg, value) \ ++ hclge_comm_write_reg((a)->hw.io_base, reg, value) ++ + enum HCLGE_FD_ACTIVE_RULE_TYPE { + HCLGE_FD_RULE_NONE, + HCLGE_FD_ARFS_ACTIVE, +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +index 1bd3d6056163b..77c432ab7856c 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mbx.c +@@ -33,7 +33,7 @@ static int hclge_gen_resp_to_vf(struct hclge_vport *vport, + { + struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf; + struct hclge_dev *hdev = vport->back; +- enum hclge_cmd_status status; ++ enum hclge_comm_cmd_status status; + struct hclge_desc desc; + u16 resp; + +@@ -92,7 +92,7 @@ static int 
hclge_send_mbx_msg(struct hclge_vport *vport, u8 *msg, u16 msg_len, + { + struct hclge_mbx_pf_to_vf_cmd *resp_pf_to_vf; + struct hclge_dev *hdev = vport->back; +- enum hclge_cmd_status status; ++ enum hclge_comm_cmd_status status; + struct hclge_desc desc; + + if (msg_len > HCLGE_MBX_MAX_MSG_SIZE) { +@@ -745,7 +745,7 @@ static bool hclge_cmd_crq_empty(struct hclge_hw *hw) + { + u32 tail = hclge_read_dev(hw, HCLGE_NIC_CRQ_TAIL_REG); + +- return tail == hw->cmq.crq.next_to_use; ++ return tail == hw->hw.cmq.crq.next_to_use; + } + + static void hclge_handle_ncsi_error(struct hclge_dev *hdev) +@@ -1045,7 +1045,7 @@ static void hclge_mbx_request_handling(struct hclge_mbx_ops_param *param) + + void hclge_mbx_handler(struct hclge_dev *hdev) + { +- struct hclge_cmq_ring *crq = &hdev->hw.cmq.crq; ++ struct hclge_comm_cmq_ring *crq = &hdev->hw.hw.cmq.crq; + struct hclge_respond_to_vf_msg resp_msg; + struct hclge_mbx_vf_to_pf_cmd *req; + struct hclge_mbx_ops_param param; +@@ -1055,7 +1055,8 @@ void hclge_mbx_handler(struct hclge_dev *hdev) + param.resp_msg = &resp_msg; + /* handle all the mailbox requests in the queue */ + while (!hclge_cmd_crq_empty(&hdev->hw)) { +- if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) { ++ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, ++ &hdev->hw.hw.comm_state)) { + dev_warn(&hdev->pdev->dev, + "command queue needs re-initializing\n"); + return; +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c +index 1231c34f09494..63d2be4349e3e 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_mdio.c +@@ -47,7 +47,7 @@ static int hclge_mdio_write(struct mii_bus *bus, int phyid, int regnum, + struct hclge_desc desc; + int ret; + +- if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) ++ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state)) + return 0; + + hclge_cmd_setup_basic_desc(&desc, 
HCLGE_OPC_MDIO_CONFIG, false); +@@ -85,7 +85,7 @@ static int hclge_mdio_read(struct mii_bus *bus, int phyid, int regnum) + struct hclge_desc desc; + int ret; + +- if (test_bit(HCLGE_STATE_CMD_DISABLE, &hdev->state)) ++ if (test_bit(HCLGE_COMM_STATE_CMD_DISABLE, &hdev->hw.hw.comm_state)) + return 0; + + hclge_cmd_setup_basic_desc(&desc, HCLGE_OPC_MDIO_CONFIG, true); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c +index dd0750f6daa6c..0f06f95b09bc2 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_ptp.c +@@ -464,7 +464,7 @@ static int hclge_ptp_create_clock(struct hclge_dev *hdev) + } + + spin_lock_init(&ptp->lock); +- ptp->io_base = hdev->hw.io_base + HCLGE_PTP_REG_OFFSET; ++ ptp->io_base = hdev->hw.hw.io_base + HCLGE_PTP_REG_OFFSET; + ptp->ts_cfg.rx_filter = HWTSTAMP_FILTER_NONE; + ptp->ts_cfg.tx_type = HWTSTAMP_TX_OFF; + hdev->ptp = ptp; +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-refactor-hns3-makefile-to-support-hns3_comm.patch b/queue-5.15/net-hns3-refactor-hns3-makefile-to-support-hns3_comm.patch new file mode 100644 index 00000000000..362a7e37955 --- /dev/null +++ b/queue-5.15/net-hns3-refactor-hns3-makefile-to-support-hns3_comm.patch @@ -0,0 +1,98 @@ +From 44c5ee037b95265aa2cf7bf3f1b33c3e04247e36 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 31 Dec 2021 18:22:31 +0800 +Subject: net: hns3: refactor hns3 makefile to support hns3_common module + +From: Jie Wang + +[ Upstream commit 5f20be4e90e603d8967962f81ac89307fd4f8af9 ] + +Currently we plan to refactor PF and VF cmdq module. A new file folder +hns3_common will be created to store new common APIs used by PF and VF +cmdq module. Thus the PF and VF compilation process will both depends on +the hns3_common. This may cause parallel building problems if we add a new +makefile building unit. 
+ +So this patch combined the PF and VF makefile scripts to the top level +makefile to support the new hns3_common which will be created in the next +patch. + +Signed-off-by: Jie Wang +Signed-off-by: Guangbin Huang +Signed-off-by: David S. Miller +Stable-dep-of: 6639a7b95321 ("net: hns3: change type of numa_node_mask as nodemask_t") +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/Makefile | 14 +++++++++++--- + .../net/ethernet/hisilicon/hns3/hns3pf/Makefile | 12 ------------ + .../net/ethernet/hisilicon/hns3/hns3vf/Makefile | 10 ---------- + 3 files changed, 11 insertions(+), 25 deletions(-) + delete mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile + delete mode 100644 drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile + +diff --git a/drivers/net/ethernet/hisilicon/hns3/Makefile b/drivers/net/ethernet/hisilicon/hns3/Makefile +index 7aa2fac76c5e8..32e24e0945f5e 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/Makefile ++++ b/drivers/net/ethernet/hisilicon/hns3/Makefile +@@ -4,9 +4,8 @@ + # + + ccflags-y += -I$(srctree)/$(src) +- +-obj-$(CONFIG_HNS3) += hns3pf/ +-obj-$(CONFIG_HNS3) += hns3vf/ ++ccflags-y += -I$(srctree)/drivers/net/ethernet/hisilicon/hns3/hns3pf ++ccflags-y += -I$(srctree)/drivers/net/ethernet/hisilicon/hns3/hns3vf + + obj-$(CONFIG_HNS3) += hnae3.o + +@@ -14,3 +13,12 @@ obj-$(CONFIG_HNS3_ENET) += hns3.o + hns3-objs = hns3_enet.o hns3_ethtool.o hns3_debugfs.o + + hns3-$(CONFIG_HNS3_DCB) += hns3_dcbnl.o ++ ++obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o ++hclgevf-objs = hns3vf/hclgevf_main.o hns3vf/hclgevf_cmd.o hns3vf/hclgevf_mbx.o hns3vf/hclgevf_devlink.o ++ ++obj-$(CONFIG_HNS3_HCLGE) += hclge.o ++hclge-objs = hns3pf/hclge_main.o hns3pf/hclge_cmd.o hns3pf/hclge_mdio.o hns3pf/hclge_tm.o \ ++ hns3pf/hclge_mbx.o hns3pf/hclge_err.o hns3pf/hclge_debugfs.o hns3pf/hclge_ptp.o hns3pf/hclge_devlink.o ++ ++hclge-$(CONFIG_HNS3_DCB) += hns3pf/hclge_dcb.o +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile 
b/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile +deleted file mode 100644 +index d1bf5c4c0abbc..0000000000000 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/Makefile ++++ /dev/null +@@ -1,12 +0,0 @@ +-# SPDX-License-Identifier: GPL-2.0+ +-# +-# Makefile for the HISILICON network device drivers. +-# +- +-ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3 +-ccflags-y += -I $(srctree)/$(src) +- +-obj-$(CONFIG_HNS3_HCLGE) += hclge.o +-hclge-objs = hclge_main.o hclge_cmd.o hclge_mdio.o hclge_tm.o hclge_mbx.o hclge_err.o hclge_debugfs.o hclge_ptp.o hclge_devlink.o +- +-hclge-$(CONFIG_HNS3_DCB) += hclge_dcb.o +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile b/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile +deleted file mode 100644 +index 51ff7d86ee906..0000000000000 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/Makefile ++++ /dev/null +@@ -1,10 +0,0 @@ +-# SPDX-License-Identifier: GPL-2.0+ +-# +-# Makefile for the HISILICON network device drivers. +-# +- +-ccflags-y := -I $(srctree)/drivers/net/ethernet/hisilicon/hns3 +-ccflags-y += -I $(srctree)/$(src) +- +-obj-$(CONFIG_HNS3_HCLGEVF) += hclgevf.o +-hclgevf-objs = hclgevf_main.o hclgevf_cmd.o hclgevf_mbx.o hclgevf_devlink.o +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-split-function-hclge_init_vlan_config.patch b/queue-5.15/net-hns3-split-function-hclge_init_vlan_config.patch new file mode 100644 index 00000000000..7581c38a98c --- /dev/null +++ b/queue-5.15/net-hns3-split-function-hclge_init_vlan_config.patch @@ -0,0 +1,152 @@ +From 0700456ab8a8b5935982dcf8d94a6eb7087b88b8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 2 Dec 2021 16:35:57 +0800 +Subject: net: hns3: split function hclge_init_vlan_config() + +From: Jian Shen + +[ Upstream commit b60f9d2ec479966383c7f2cdf3b1a3a66c25f212 ] + +Currently the function hclge_init_vlan_config() is a bit long. +Split it to several small functions, to simplify code and +improve code readability. 
+ +Signed-off-by: Jian Shen +Signed-off-by: Guangbin Huang +Signed-off-by: David S. Miller +Stable-dep-of: f5db7a3b65c8 ("net: hns3: fix port vlan filter not disabled issue") +Signed-off-by: Sasha Levin +--- + .../hisilicon/hns3/hns3pf/hclge_main.c | 97 +++++++++++-------- + 1 file changed, 55 insertions(+), 42 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 4b0027a41f3cd..6a346165d5881 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -10102,67 +10102,80 @@ static int hclge_set_vlan_protocol_type(struct hclge_dev *hdev) + return status; + } + +-static int hclge_init_vlan_config(struct hclge_dev *hdev) ++static int hclge_init_vlan_filter(struct hclge_dev *hdev) + { +-#define HCLGE_DEF_VLAN_TYPE 0x8100 +- +- struct hnae3_handle *handle = &hdev->vport[0].nic; + struct hclge_vport *vport; + int ret; + int i; + +- if (hdev->ae_dev->dev_version >= HNAE3_DEVICE_VERSION_V2) { +- /* for revision 0x21, vf vlan filter is per function */ +- for (i = 0; i < hdev->num_alloc_vport; i++) { +- vport = &hdev->vport[i]; +- ret = hclge_set_vlan_filter_ctrl(hdev, +- HCLGE_FILTER_TYPE_VF, +- HCLGE_FILTER_FE_EGRESS, +- true, +- vport->vport_id); +- if (ret) +- return ret; +- vport->cur_vlan_fltr_en = true; +- } ++ if (hdev->ae_dev->dev_version < HNAE3_DEVICE_VERSION_V2) ++ return hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_VF, ++ HCLGE_FILTER_FE_EGRESS_V1_B, ++ true, 0); + +- ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_PORT, +- HCLGE_FILTER_FE_INGRESS, true, +- 0); +- if (ret) +- return ret; +- } else { ++ /* for revision 0x21, vf vlan filter is per function */ ++ for (i = 0; i < hdev->num_alloc_vport; i++) { ++ vport = &hdev->vport[i]; + ret = hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_VF, +- HCLGE_FILTER_FE_EGRESS_V1_B, +- true, 0); ++ HCLGE_FILTER_FE_EGRESS, true, ++ 
vport->vport_id); + if (ret) + return ret; ++ vport->cur_vlan_fltr_en = true; + } + +- hdev->vlan_type_cfg.rx_in_fst_vlan_type = HCLGE_DEF_VLAN_TYPE; +- hdev->vlan_type_cfg.rx_in_sec_vlan_type = HCLGE_DEF_VLAN_TYPE; +- hdev->vlan_type_cfg.rx_ot_fst_vlan_type = HCLGE_DEF_VLAN_TYPE; +- hdev->vlan_type_cfg.rx_ot_sec_vlan_type = HCLGE_DEF_VLAN_TYPE; +- hdev->vlan_type_cfg.tx_ot_vlan_type = HCLGE_DEF_VLAN_TYPE; +- hdev->vlan_type_cfg.tx_in_vlan_type = HCLGE_DEF_VLAN_TYPE; ++ return hclge_set_vlan_filter_ctrl(hdev, HCLGE_FILTER_TYPE_PORT, ++ HCLGE_FILTER_FE_INGRESS, true, 0); ++} + +- ret = hclge_set_vlan_protocol_type(hdev); +- if (ret) +- return ret; ++static int hclge_init_vlan_type(struct hclge_dev *hdev) ++{ ++ hdev->vlan_type_cfg.rx_in_fst_vlan_type = ETH_P_8021Q; ++ hdev->vlan_type_cfg.rx_in_sec_vlan_type = ETH_P_8021Q; ++ hdev->vlan_type_cfg.rx_ot_fst_vlan_type = ETH_P_8021Q; ++ hdev->vlan_type_cfg.rx_ot_sec_vlan_type = ETH_P_8021Q; ++ hdev->vlan_type_cfg.tx_ot_vlan_type = ETH_P_8021Q; ++ hdev->vlan_type_cfg.tx_in_vlan_type = ETH_P_8021Q; + +- for (i = 0; i < hdev->num_alloc_vport; i++) { +- u16 vlan_tag; +- u8 qos; ++ return hclge_set_vlan_protocol_type(hdev); ++} + ++static int hclge_init_vport_vlan_offload(struct hclge_dev *hdev) ++{ ++ struct hclge_port_base_vlan_config *cfg; ++ struct hclge_vport *vport; ++ int ret; ++ int i; ++ ++ for (i = 0; i < hdev->num_alloc_vport; i++) { + vport = &hdev->vport[i]; +- vlan_tag = vport->port_base_vlan_cfg.vlan_info.vlan_tag; +- qos = vport->port_base_vlan_cfg.vlan_info.qos; ++ cfg = &vport->port_base_vlan_cfg; + +- ret = hclge_vlan_offload_cfg(vport, +- vport->port_base_vlan_cfg.state, +- vlan_tag, qos); ++ ret = hclge_vlan_offload_cfg(vport, cfg->state, ++ cfg->vlan_info.vlan_tag, ++ cfg->vlan_info.qos); + if (ret) + return ret; + } ++ return 0; ++} ++ ++static int hclge_init_vlan_config(struct hclge_dev *hdev) ++{ ++ struct hnae3_handle *handle = &hdev->vport[0].nic; ++ int ret; ++ ++ ret = 
hclge_init_vlan_filter(hdev); ++ if (ret) ++ return ret; ++ ++ ret = hclge_init_vlan_type(hdev); ++ if (ret) ++ return ret; ++ ++ ret = hclge_init_vport_vlan_offload(hdev); ++ if (ret) ++ return ret; + + return hclge_set_vlan_filter(handle, htons(ETH_P_8021Q), 0, false); + } +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-use-appropriate-barrier-function-after-sett.patch b/queue-5.15/net-hns3-use-appropriate-barrier-function-after-sett.patch new file mode 100644 index 00000000000..1401a7793f4 --- /dev/null +++ b/queue-5.15/net-hns3-use-appropriate-barrier-function-after-sett.patch @@ -0,0 +1,64 @@ +From 1faf71d085a94016547e9799cd96f47ef45a98a0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 May 2024 21:42:22 +0800 +Subject: net: hns3: use appropriate barrier function after setting a bit value + +From: Peiyang Wang + +[ Upstream commit 094c281228529d333458208fd02fcac3b139d93b ] + +There is a memory barrier in followed case. When set the port down, +hclgevf_set_timmer will set DOWN in state. Meanwhile, the service task has +different behaviour based on whether the state is DOWN. Thus, to make sure +service task see DOWN, use smp_mb__after_atomic after calling set_bit(). + + CPU0 CPU1 +========================== =================================== +hclgevf_set_timer_task() hclgevf_periodic_service_task() + set_bit(DOWN,state) test_bit(DOWN,state) + +pf also has this issue. 
+ +Fixes: ff200099d271 ("net: hns3: remove unnecessary work in hclgevf_main") +Fixes: 1c6dfe6fc6f7 ("net: hns3: remove mailbox and reset work in hclge_main") +Signed-off-by: Peiyang Wang +Signed-off-by: Jijie Shao +Reviewed-by: Simon Horman +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 3 +-- + drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c | 3 +-- + 2 files changed, 2 insertions(+), 4 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index a0a64441199c5..4b0027a41f3cd 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -8156,8 +8156,7 @@ static void hclge_set_timer_task(struct hnae3_handle *handle, bool enable) + /* Set the DOWN flag here to disable link updating */ + set_bit(HCLGE_STATE_DOWN, &hdev->state); + +- /* flush memory to make sure DOWN is seen by service task */ +- smp_mb__before_atomic(); ++ smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */ + hclge_flush_link_update(hdev); + } + } +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +index 9afb44d738c4e..a41e04796b0b6 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3vf/hclgevf_main.c +@@ -2722,8 +2722,7 @@ static void hclgevf_set_timer_task(struct hnae3_handle *handle, bool enable) + } else { + set_bit(HCLGEVF_STATE_DOWN, &hdev->state); + +- /* flush memory to make sure DOWN is seen by service task */ +- smp_mb__before_atomic(); ++ smp_mb__after_atomic(); /* flush memory to make sure DOWN is seen by service task */ + hclgevf_flush_link_update(hdev); + } + } +-- +2.43.0 + diff --git a/queue-5.15/net-hns3-using-user-configure-after-hardware-reset.patch 
b/queue-5.15/net-hns3-using-user-configure-after-hardware-reset.patch new file mode 100644 index 00000000000..bc6a8865f6c --- /dev/null +++ b/queue-5.15/net-hns3-using-user-configure-after-hardware-reset.patch @@ -0,0 +1,127 @@ +From 56777bcbacb205007cdf4bf89e2f44f307c2ea71 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 7 May 2024 21:42:18 +0800 +Subject: net: hns3: using user configure after hardware reset + +From: Peiyang Wang + +[ Upstream commit 05eb60e9648cca0beeebdbcd263b599fb58aee48 ] + +When a reset occurring, it's supposed to recover user's configuration. +Currently, the port info(speed, duplex and autoneg) is stored in hclge_mac +and will be scheduled updated. Consider the case that reset was happened +consecutively. During the first reset, the port info is configured with +a temporary value cause the PHY is reset and looking for best link config. +Second reset start and use pervious configuration which is not the user's. +The specific process is as follows: + ++------+ +----+ +----+ +| USER | | PF | | HW | ++---+--+ +-+--+ +-+--+ + | ethtool --reset | | + +------------------->| reset command | + | ethtool --reset +-------------------->| + +------------------->| +---+ + | +---+ | | + | | |reset currently | | HW RESET + | | |and wait to do | | + | |<--+ | | + | | send pervious cfg |<--+ + | | (1000M FULL AN_ON) | + | +-------------------->| + | | read cfg(time task) | + | | (10M HALF AN_OFF) +---+ + | |<--------------------+ | cfg take effect + | | reset command |<--+ + | +-------------------->| + | | +---+ + | | send pervious cfg | | HW RESET + | | (10M HALF AN_OFF) |<--+ + | +-------------------->| + | | read cfg(time task) | + | | (10M HALF AN_OFF) +---+ + | |<--------------------+ | cfg take effect + | | | | + | | read cfg(time task) |<--+ + | | (10M HALF AN_OFF) | + | |<--------------------+ + | | | + v v v + +To avoid aboved situation, this patch introduced req_speed, req_duplex, +req_autoneg to store user's configuration and it only be used 
after +hardware reset and to recover user's configuration + +Fixes: f5f2b3e4dcc0 ("net: hns3: add support for imp-controlled PHYs") +Signed-off-by: Peiyang Wang +Signed-off-by: Jijie Shao +Reviewed-by: Przemek Kitszel +Reviewed-by: Simon Horman +Signed-off-by: Paolo Abeni +Signed-off-by: Sasha Levin +--- + .../ethernet/hisilicon/hns3/hns3pf/hclge_main.c | 15 +++++++++------ + .../ethernet/hisilicon/hns3/hns3pf/hclge_main.h | 3 +++ + 2 files changed, 12 insertions(+), 6 deletions(-) + +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +index 3423b8e278e3a..71b498aa327bb 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.c +@@ -1572,6 +1572,9 @@ static int hclge_configure(struct hclge_dev *hdev) + cfg.default_speed, ret); + return ret; + } ++ hdev->hw.mac.req_speed = hdev->hw.mac.speed; ++ hdev->hw.mac.req_autoneg = AUTONEG_ENABLE; ++ hdev->hw.mac.req_duplex = DUPLEX_FULL; + + hclge_parse_link_mode(hdev, cfg.speed_ability); + +@@ -3163,9 +3166,9 @@ hclge_set_phy_link_ksettings(struct hnae3_handle *handle, + return ret; + } + +- hdev->hw.mac.autoneg = cmd->base.autoneg; +- hdev->hw.mac.speed = cmd->base.speed; +- hdev->hw.mac.duplex = cmd->base.duplex; ++ hdev->hw.mac.req_autoneg = cmd->base.autoneg; ++ hdev->hw.mac.req_speed = cmd->base.speed; ++ hdev->hw.mac.req_duplex = cmd->base.duplex; + linkmode_copy(hdev->hw.mac.advertising, cmd->link_modes.advertising); + + return 0; +@@ -3198,9 +3201,9 @@ static int hclge_tp_port_init(struct hclge_dev *hdev) + if (!hnae3_dev_phy_imp_supported(hdev)) + return 0; + +- cmd.base.autoneg = hdev->hw.mac.autoneg; +- cmd.base.speed = hdev->hw.mac.speed; +- cmd.base.duplex = hdev->hw.mac.duplex; ++ cmd.base.autoneg = hdev->hw.mac.req_autoneg; ++ cmd.base.speed = hdev->hw.mac.req_speed; ++ cmd.base.duplex = hdev->hw.mac.req_duplex; + linkmode_copy(cmd.link_modes.advertising, 
hdev->hw.mac.advertising); + + return hclge_set_phy_link_ksettings(&hdev->vport->nic, &cmd); +diff --git a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +index a716027df0ed1..ba0d41091b1da 100644 +--- a/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h ++++ b/drivers/net/ethernet/hisilicon/hns3/hns3pf/hclge_main.h +@@ -275,10 +275,13 @@ struct hclge_mac { + u8 media_type; /* port media type, e.g. fibre/copper/backplane */ + u8 mac_addr[ETH_ALEN]; + u8 autoneg; ++ u8 req_autoneg; + u8 duplex; ++ u8 req_duplex; + u8 support_autoneg; + u8 speed_type; /* 0: sfp speed, 1: active speed */ + u32 speed; ++ u32 req_speed; + u32 max_speed; + u32 speed_ability; /* speed ability supported by current media */ + u32 module_type; /* sub media type, e.g. kr/cr/sr/lr */ +-- +2.43.0 + diff --git a/queue-5.15/nfc-add-kcov-annotations.patch b/queue-5.15/nfc-add-kcov-annotations.patch new file mode 100644 index 00000000000..659ed1454b9 --- /dev/null +++ b/queue-5.15/nfc-add-kcov-annotations.patch @@ -0,0 +1,139 @@ +From eee1d2cff6a400ff92406ab864b2f5f156008e16 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 30 Oct 2022 16:03:37 +0100 +Subject: nfc: Add KCOV annotations + +From: Dmitry Vyukov + +[ Upstream commit 7e8cdc97148c6ba66671e88ad9f7d434f4df3438 ] + +Add remote KCOV annotations for NFC processing that is done +in background threads. This enables efficient coverage-guided +fuzzing of the NFC subsystem. + +The intention is to add annotations to background threads that +process skb's that were allocated in syscall context +(thus have a KCOV handle associated with the current fuzz test). +This includes nci_recv_frame() that is called by the virtual nci +driver in the syscall context. + +Signed-off-by: Dmitry Vyukov +Cc: Bongsu Jeon +Cc: Krzysztof Kozlowski +Cc: netdev@vger.kernel.org +Signed-off-by: David S. 
Miller +Stable-dep-of: 19e35f24750d ("nfc: nci: Fix kcov check in nci_rx_work()") +Signed-off-by: Sasha Levin +--- + net/nfc/nci/core.c | 8 +++++++- + net/nfc/nci/hci.c | 4 +++- + net/nfc/rawsock.c | 3 +++ + 3 files changed, 13 insertions(+), 2 deletions(-) + +diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c +index 2a821f2b2ffe8..20e01f20fdcb1 100644 +--- a/net/nfc/nci/core.c ++++ b/net/nfc/nci/core.c +@@ -24,6 +24,7 @@ + #include + #include + #include ++#include + + #include "../nfc.h" + #include +@@ -1485,6 +1486,7 @@ static void nci_tx_work(struct work_struct *work) + skb = skb_dequeue(&ndev->tx_q); + if (!skb) + return; ++ kcov_remote_start_common(skb_get_kcov_handle(skb)); + + /* Check if data flow control is used */ + if (atomic_read(&conn_info->credits_cnt) != +@@ -1500,6 +1502,7 @@ static void nci_tx_work(struct work_struct *work) + + mod_timer(&ndev->data_timer, + jiffies + msecs_to_jiffies(NCI_DATA_TIMEOUT)); ++ kcov_remote_stop(); + } + } + +@@ -1510,7 +1513,8 @@ static void nci_rx_work(struct work_struct *work) + struct nci_dev *ndev = container_of(work, struct nci_dev, rx_work); + struct sk_buff *skb; + +- while ((skb = skb_dequeue(&ndev->rx_q))) { ++ for (; (skb = skb_dequeue(&ndev->rx_q)); kcov_remote_stop()) { ++ kcov_remote_start_common(skb_get_kcov_handle(skb)); + + /* Send copy to sniffer */ + nfc_send_to_raw_sock(ndev->nfc_dev, skb, +@@ -1569,6 +1573,7 @@ static void nci_cmd_work(struct work_struct *work) + if (!skb) + return; + ++ kcov_remote_start_common(skb_get_kcov_handle(skb)); + atomic_dec(&ndev->cmd_cnt); + + pr_debug("NCI TX: MT=cmd, PBF=%d, GID=0x%x, OID=0x%x, plen=%d\n", +@@ -1581,6 +1586,7 @@ static void nci_cmd_work(struct work_struct *work) + + mod_timer(&ndev->cmd_timer, + jiffies + msecs_to_jiffies(NCI_CMD_TIMEOUT)); ++ kcov_remote_stop(); + } + } + +diff --git a/net/nfc/nci/hci.c b/net/nfc/nci/hci.c +index 85b808fdcbc3a..7ac5c03176843 100644 +--- a/net/nfc/nci/hci.c ++++ b/net/nfc/nci/hci.c +@@ -14,6 +14,7 @@ + #include + 
#include + #include ++#include + + struct nci_data { + u8 conn_id; +@@ -409,7 +410,8 @@ static void nci_hci_msg_rx_work(struct work_struct *work) + const struct nci_hcp_message *message; + u8 pipe, type, instruction; + +- while ((skb = skb_dequeue(&hdev->msg_rx_queue)) != NULL) { ++ for (; (skb = skb_dequeue(&hdev->msg_rx_queue)); kcov_remote_stop()) { ++ kcov_remote_start_common(skb_get_kcov_handle(skb)); + pipe = NCI_HCP_MSG_GET_PIPE(skb->data[0]); + skb_pull(skb, NCI_HCI_HCP_PACKET_HEADER_LEN); + message = (struct nci_hcp_message *)skb->data; +diff --git a/net/nfc/rawsock.c b/net/nfc/rawsock.c +index 0ca214ab5aeff..88e37a14a7e69 100644 +--- a/net/nfc/rawsock.c ++++ b/net/nfc/rawsock.c +@@ -12,6 +12,7 @@ + #include + #include + #include ++#include + + #include "nfc.h" + +@@ -189,6 +190,7 @@ static void rawsock_tx_work(struct work_struct *work) + } + + skb = skb_dequeue(&sk->sk_write_queue); ++ kcov_remote_start_common(skb_get_kcov_handle(skb)); + + sock_hold(sk); + rc = nfc_data_exchange(dev, target_idx, skb, +@@ -197,6 +199,7 @@ static void rawsock_tx_work(struct work_struct *work) + rawsock_report_error(sk, rc); + sock_put(sk); + } ++ kcov_remote_stop(); + } + + static int rawsock_sendmsg(struct socket *sock, struct msghdr *msg, size_t len) +-- +2.43.0 + diff --git a/queue-5.15/nfc-nci-fix-kcov-check-in-nci_rx_work.patch b/queue-5.15/nfc-nci-fix-kcov-check-in-nci_rx_work.patch new file mode 100644 index 00000000000..97e77f88e61 --- /dev/null +++ b/queue-5.15/nfc-nci-fix-kcov-check-in-nci_rx_work.patch @@ -0,0 +1,44 @@ +From e943d3612f48832ca87cd787410b8e4772a43a67 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 5 May 2024 19:36:49 +0900 +Subject: nfc: nci: Fix kcov check in nci_rx_work() + +From: Tetsuo Handa + +[ Upstream commit 19e35f24750ddf860c51e51c68cf07ea181b4881 ] + +Commit 7e8cdc97148c ("nfc: Add KCOV annotations") added +kcov_remote_start_common()/kcov_remote_stop() pair into nci_rx_work(), +with an assumption that kcov_remote_stop() is called 
upon continue of +the for loop. But commit d24b03535e5e ("nfc: nci: Fix uninit-value in +nci_dev_up and nci_ntf_packet") forgot to call kcov_remote_stop() before +break of the for loop. + +Reported-by: syzbot +Closes: https://syzkaller.appspot.com/bug?extid=0438378d6f157baae1a2 +Fixes: d24b03535e5e ("nfc: nci: Fix uninit-value in nci_dev_up and nci_ntf_packet") +Suggested-by: Andrey Konovalov +Signed-off-by: Tetsuo Handa +Reviewed-by: Krzysztof Kozlowski +Link: https://lore.kernel.org/r/6d10f829-5a0c-405a-b39a-d7266f3a1a0b@I-love.SAKURA.ne.jp +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/nfc/nci/core.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/net/nfc/nci/core.c b/net/nfc/nci/core.c +index 20e01f20fdcb1..d26c21df0d283 100644 +--- a/net/nfc/nci/core.c ++++ b/net/nfc/nci/core.c +@@ -1522,6 +1522,7 @@ static void nci_rx_work(struct work_struct *work) + + if (!nci_plen(skb->data)) { + kfree_skb(skb); ++ kcov_remote_stop(); + break; + } + +-- +2.43.0 + diff --git a/queue-5.15/phonet-fix-rtm_phonet_notify-skb-allocation.patch b/queue-5.15/phonet-fix-rtm_phonet_notify-skb-allocation.patch new file mode 100644 index 00000000000..ee1857c29f1 --- /dev/null +++ b/queue-5.15/phonet-fix-rtm_phonet_notify-skb-allocation.patch @@ -0,0 +1,50 @@ +From 8a619912e147622ed20266adff8d8b1698e01cdf Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 2 May 2024 16:17:00 +0000 +Subject: phonet: fix rtm_phonet_notify() skb allocation +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Eric Dumazet + +[ Upstream commit d8cac8568618dcb8a51af3db1103e8d4cc4aeea7 ] + +fill_route() stores three components in the skb: + +- struct rtmsg +- RTA_DST (u8) +- RTA_OIF (u32) + +Therefore, rtm_phonet_notify() should use + +NLMSG_ALIGN(sizeof(struct rtmsg)) + +nla_total_size(1) + +nla_total_size(4) + +Fixes: f062f41d0657 ("Phonet: routing table Netlink interface") +Signed-off-by: Eric Dumazet +Acked-by: Rémi 
Denis-Courmont +Link: https://lore.kernel.org/r/20240502161700.1804476-1-edumazet@google.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/phonet/pn_netlink.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/net/phonet/pn_netlink.c b/net/phonet/pn_netlink.c +index 59aebe2968907..dd4c7e9a634fb 100644 +--- a/net/phonet/pn_netlink.c ++++ b/net/phonet/pn_netlink.c +@@ -193,7 +193,7 @@ void rtm_phonet_notify(int event, struct net_device *dev, u8 dst) + struct sk_buff *skb; + int err = -ENOBUFS; + +- skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct ifaddrmsg)) + ++ skb = nlmsg_new(NLMSG_ALIGN(sizeof(struct rtmsg)) + + nla_total_size(1) + nla_total_size(4), GFP_KERNEL); + if (skb == NULL) + goto errout; +-- +2.43.0 + diff --git a/queue-5.15/qibfs-fix-dentry-leak.patch b/queue-5.15/qibfs-fix-dentry-leak.patch new file mode 100644 index 00000000000..1b454b83f66 --- /dev/null +++ b/queue-5.15/qibfs-fix-dentry-leak.patch @@ -0,0 +1,38 @@ +From 42a77b2e7d1b6627afd7832a810b89e2e49444ff Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 25 Feb 2024 23:58:42 -0500 +Subject: qibfs: fix dentry leak + +From: Al Viro + +[ Upstream commit aa23317d0268b309bb3f0801ddd0d61813ff5afb ] + +simple_recursive_removal() drops the pinning references to all positives +in subtree. For the cases when its argument has been kept alive by +the pinning alone that's exactly the right thing to do, but here +the argument comes from dcache lookup, that needs to be balanced by +explicit dput(). 
+ +Fixes: e41d237818598 "qib_fs: switch to simple_recursive_removal()" +Fucked-up-by: Al Viro +Signed-off-by: Al Viro +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/qib/qib_fs.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/infiniband/hw/qib/qib_fs.c b/drivers/infiniband/hw/qib/qib_fs.c +index a0c5f3bdc3246..8665e506404f9 100644 +--- a/drivers/infiniband/hw/qib/qib_fs.c ++++ b/drivers/infiniband/hw/qib/qib_fs.c +@@ -441,6 +441,7 @@ static int remove_device_files(struct super_block *sb, + return PTR_ERR(dir); + } + simple_recursive_removal(dir, NULL); ++ dput(dir); + return 0; + } + +-- +2.43.0 + diff --git a/queue-5.15/rtnetlink-correct-nested-ifla_vf_vlan_list-attribute.patch b/queue-5.15/rtnetlink-correct-nested-ifla_vf_vlan_list-attribute.patch new file mode 100644 index 00000000000..789f76796ea --- /dev/null +++ b/queue-5.15/rtnetlink-correct-nested-ifla_vf_vlan_list-attribute.patch @@ -0,0 +1,44 @@ +From 3b6831df6ebebda5f3087218bb992f9297fcb7c3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 2 May 2024 18:57:51 +0300 +Subject: rtnetlink: Correct nested IFLA_VF_VLAN_LIST attribute validation + +From: Roded Zats + +[ Upstream commit 1aec77b2bb2ed1db0f5efc61c4c1ca3813307489 ] + +Each attribute inside a nested IFLA_VF_VLAN_LIST is assumed to be a +struct ifla_vf_vlan_info so the size of such attribute needs to be at least +of sizeof(struct ifla_vf_vlan_info) which is 14 bytes. +The current size validation in do_setvfinfo is against NLA_HDRLEN (4 bytes) +which is less than sizeof(struct ifla_vf_vlan_info) so this validation +is not enough and a too small attribute might be cast to a +struct ifla_vf_vlan_info, this might result in an out of bands +read access when accessing the saved (casted) entry in ivvl. 
+ +Fixes: 79aab093a0b5 ("net: Update API for VF vlan protocol 802.1ad support") +Signed-off-by: Roded Zats +Reviewed-by: Donald Hunter +Link: https://lore.kernel.org/r/20240502155751.75705-1-rzats@paloaltonetworks.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/core/rtnetlink.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/net/core/rtnetlink.c b/net/core/rtnetlink.c +index ef218e290dfba..d25632fbfa892 100644 +--- a/net/core/rtnetlink.c ++++ b/net/core/rtnetlink.c +@@ -2383,7 +2383,7 @@ static int do_setvfinfo(struct net_device *dev, struct nlattr **tb) + + nla_for_each_nested(attr, tb[IFLA_VF_VLAN_LIST], rem) { + if (nla_type(attr) != IFLA_VF_VLAN_INFO || +- nla_len(attr) < NLA_HDRLEN) { ++ nla_len(attr) < sizeof(struct ifla_vf_vlan_info)) { + return -EINVAL; + } + if (len >= MAX_VLAN_LIST_LEN) +-- +2.43.0 + diff --git a/queue-5.15/series b/queue-5.15/series index f84db724fd4..19cc503ad6b 100644 --- a/queue-5.15/series +++ b/queue-5.15/series @@ -105,3 +105,33 @@ bpf-sockmap-reschedule-is-now-done-through-backlog.patch bpf-sockmap-improved-check-for-empty-queue.patch asoc-meson-axg-card-fix-nonatomic-links.patch asoc-meson-axg-tdm-interface-fix-formatters-in-trigg.patch +qibfs-fix-dentry-leak.patch +xfrm-preserve-vlan-tags-for-transport-mode-software-.patch +arm-9381-1-kasan-clear-stale-stack-poison.patch +tcp-defer-shutdown-send_shutdown-for-tcp_syn_recv-so.patch +tcp-use-refcount_inc_not_zero-in-tcp_twsk_unique.patch +bluetooth-fix-use-after-free-bugs-caused-by-sco_sock.patch +bluetooth-l2cap-fix-null-ptr-deref-in-l2cap_chan_tim.patch +rtnetlink-correct-nested-ifla_vf_vlan_list-attribute.patch +hwmon-corsair-cpro-use-a-separate-buffer-for-sending.patch +hwmon-corsair-cpro-use-complete_all-instead-of-compl.patch +hwmon-corsair-cpro-protect-ccp-wait_input_report-wit.patch +phonet-fix-rtm_phonet_notify-skb-allocation.patch +nfc-add-kcov-annotations.patch +nfc-nci-fix-kcov-check-in-nci_rx_work.patch 
+net-bridge-fix-corrupted-ethernet-header-on-multicas.patch +ipv6-fib6_rules-avoid-possible-null-dereference-in-f.patch +net-hns3-pf-support-get-unicast-mac-address-space-as.patch +net-hns3-using-user-configure-after-hardware-reset.patch +net-hns3-add-log-for-workqueue-scheduled-late.patch +net-hns3-add-query-vf-ring-and-vector-map-relation.patch +net-hns3-refactor-function-hclge_mbx_handler.patch +net-hns3-direct-return-when-receive-a-unknown-mailbo.patch +net-hns3-refactor-hns3-makefile-to-support-hns3_comm.patch +net-hns3-create-new-cmdq-hardware-description-struct.patch +net-hns3-create-new-set-of-unified-hclge_comm_cmd_se.patch +net-hns3-refactor-hclge_cmd_send-with-new-hclge_comm.patch +net-hns3-change-type-of-numa_node_mask-as-nodemask_t.patch +net-hns3-use-appropriate-barrier-function-after-sett.patch +net-hns3-split-function-hclge_init_vlan_config.patch +net-hns3-fix-port-vlan-filter-not-disabled-issue.patch diff --git a/queue-5.15/tcp-defer-shutdown-send_shutdown-for-tcp_syn_recv-so.patch b/queue-5.15/tcp-defer-shutdown-send_shutdown-for-tcp_syn_recv-so.patch new file mode 100644 index 00000000000..193c0c4a5e7 --- /dev/null +++ b/queue-5.15/tcp-defer-shutdown-send_shutdown-for-tcp_syn_recv-so.patch @@ -0,0 +1,145 @@ +From 178f7c0f3e42bbb9fd11c9cbb1816ff900a25cf4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 1 May 2024 12:54:48 +0000 +Subject: tcp: defer shutdown(SEND_SHUTDOWN) for TCP_SYN_RECV sockets + +From: Eric Dumazet + +[ Upstream commit 94062790aedb505bdda209b10bea47b294d6394f ] + +TCP_SYN_RECV state is really special, it is only used by +cross-syn connections, mostly used by fuzzers. + +In the following crash [1], syzbot managed to trigger a divide +by zero in tcp_rcv_space_adjust() + +A socket makes the following state transitions, +without ever calling tcp_init_transfer(), +meaning tcp_init_buffer_space() is also not called. 
+ + TCP_CLOSE +connect() + TCP_SYN_SENT + TCP_SYN_RECV +shutdown() -> tcp_shutdown(sk, SEND_SHUTDOWN) + TCP_FIN_WAIT1 + +To fix this issue, change tcp_shutdown() to not +perform a TCP_SYN_RECV -> TCP_FIN_WAIT1 transition, +which makes no sense anyway. + +When tcp_rcv_state_process() later changes socket state +from TCP_SYN_RECV to TCP_ESTABLISH, then look at +sk->sk_shutdown to finally enter TCP_FIN_WAIT1 state, +and send a FIN packet from a sane socket state. + +This means tcp_send_fin() can now be called from BH +context, and must use GFP_ATOMIC allocations. + +[1] +divide error: 0000 [#1] PREEMPT SMP KASAN NOPTI +CPU: 1 PID: 5084 Comm: syz-executor358 Not tainted 6.9.0-rc6-syzkaller-00022-g98369dccd2f8 #0 +Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024 + RIP: 0010:tcp_rcv_space_adjust+0x2df/0x890 net/ipv4/tcp_input.c:767 +Code: e3 04 4c 01 eb 48 8b 44 24 38 0f b6 04 10 84 c0 49 89 d5 0f 85 a5 03 00 00 41 8b 8e c8 09 00 00 89 e8 29 c8 48 0f af c3 31 d2 <48> f7 f1 48 8d 1c 43 49 8d 96 76 08 00 00 48 89 d0 48 c1 e8 03 48 +RSP: 0018:ffffc900031ef3f0 EFLAGS: 00010246 +RAX: 0c677a10441f8f42 RBX: 000000004fb95e7e RCX: 0000000000000000 +RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000 +RBP: 0000000027d4b11f R08: ffffffff89e535a4 R09: 1ffffffff25e6ab7 +R10: dffffc0000000000 R11: ffffffff8135e920 R12: ffff88802a9f8d30 +R13: dffffc0000000000 R14: ffff88802a9f8d00 R15: 1ffff1100553f2da +FS: 00005555775c0380(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 00007f1155bf2304 CR3: 000000002b9f2000 CR4: 0000000000350ef0 +Call Trace: + + tcp_recvmsg_locked+0x106d/0x25a0 net/ipv4/tcp.c:2513 + tcp_recvmsg+0x25d/0x920 net/ipv4/tcp.c:2578 + inet6_recvmsg+0x16a/0x730 net/ipv6/af_inet6.c:680 + sock_recvmsg_nosec net/socket.c:1046 [inline] + sock_recvmsg+0x109/0x280 net/socket.c:1068 + ____sys_recvmsg+0x1db/0x470 net/socket.c:2803 + ___sys_recvmsg net/socket.c:2845 
[inline] + do_recvmmsg+0x474/0xae0 net/socket.c:2939 + __sys_recvmmsg net/socket.c:3018 [inline] + __do_sys_recvmmsg net/socket.c:3041 [inline] + __se_sys_recvmmsg net/socket.c:3034 [inline] + __x64_sys_recvmmsg+0x199/0x250 net/socket.c:3034 + do_syscall_x64 arch/x86/entry/common.c:52 [inline] + do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83 + entry_SYSCALL_64_after_hwframe+0x77/0x7f +RIP: 0033:0x7faeb6363db9 +Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 c1 17 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48 +RSP: 002b:00007ffcc1997168 EFLAGS: 00000246 ORIG_RAX: 000000000000012b +RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007faeb6363db9 +RDX: 0000000000000001 RSI: 0000000020000bc0 RDI: 0000000000000005 +RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000000001c +R10: 0000000000000122 R11: 0000000000000246 R12: 0000000000000000 +R13: 0000000000000000 R14: 0000000000000001 R15: 0000000000000001 + +Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") +Reported-by: syzbot +Signed-off-by: Eric Dumazet +Acked-by: Neal Cardwell +Link: https://lore.kernel.org/r/20240501125448.896529-1-edumazet@google.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/ipv4/tcp.c | 4 ++-- + net/ipv4/tcp_input.c | 2 ++ + net/ipv4/tcp_output.c | 4 +++- + 3 files changed, 7 insertions(+), 3 deletions(-) + +diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c +index c826db961fc08..1f858b3c9ac38 100644 +--- a/net/ipv4/tcp.c ++++ b/net/ipv4/tcp.c +@@ -2737,7 +2737,7 @@ void tcp_shutdown(struct sock *sk, int how) + /* If we've already sent a FIN, or it's a closed state, skip this. */ + if ((1 << sk->sk_state) & + (TCPF_ESTABLISHED | TCPF_SYN_SENT | +- TCPF_SYN_RECV | TCPF_CLOSE_WAIT)) { ++ TCPF_CLOSE_WAIT)) { + /* Clear out any half completed packets. FIN if needed. 
*/ + if (tcp_close_state(sk)) + tcp_send_fin(sk); +@@ -2848,7 +2848,7 @@ void __tcp_close(struct sock *sk, long timeout) + * machine. State transitions: + * + * TCP_ESTABLISHED -> TCP_FIN_WAIT1 +- * TCP_SYN_RECV -> TCP_FIN_WAIT1 (forget it, it's impossible) ++ * TCP_SYN_RECV -> TCP_FIN_WAIT1 (it is difficult) + * TCP_CLOSE_WAIT -> TCP_LAST_ACK + * + * are legal only when FIN has been sent (i.e. in window), +diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c +index e51b5d887c24b..52a9d7f96da43 100644 +--- a/net/ipv4/tcp_input.c ++++ b/net/ipv4/tcp_input.c +@@ -6543,6 +6543,8 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb) + + tcp_initialize_rcv_mss(sk); + tcp_fast_path_on(tp); ++ if (sk->sk_shutdown & SEND_SHUTDOWN) ++ tcp_shutdown(sk, SEND_SHUTDOWN); + break; + + case TCP_FIN_WAIT1: { +diff --git a/net/ipv4/tcp_output.c b/net/ipv4/tcp_output.c +index d8817d6c7b96f..0fb84e57a2d49 100644 +--- a/net/ipv4/tcp_output.c ++++ b/net/ipv4/tcp_output.c +@@ -3441,7 +3441,9 @@ void tcp_send_fin(struct sock *sk) + return; + } + } else { +- skb = alloc_skb_fclone(MAX_TCP_HEADER, sk->sk_allocation); ++ skb = alloc_skb_fclone(MAX_TCP_HEADER, ++ sk_gfp_mask(sk, GFP_ATOMIC | ++ __GFP_NOWARN)); + if (unlikely(!skb)) + return; + +-- +2.43.0 + diff --git a/queue-5.15/tcp-use-refcount_inc_not_zero-in-tcp_twsk_unique.patch b/queue-5.15/tcp-use-refcount_inc_not_zero-in-tcp_twsk_unique.patch new file mode 100644 index 00000000000..c9736c0636c --- /dev/null +++ b/queue-5.15/tcp-use-refcount_inc_not_zero-in-tcp_twsk_unique.patch @@ -0,0 +1,118 @@ +From 8d498020d55d045dc08a70cc4deff5c9da13930e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 1 May 2024 14:31:45 -0700 +Subject: tcp: Use refcount_inc_not_zero() in tcp_twsk_unique(). + +From: Kuniyuki Iwashima + +[ Upstream commit f2db7230f73a80dbb179deab78f88a7947f0ab7e ] + +Anderson Nascimento reported a use-after-free splat in tcp_twsk_unique() +with nice analysis. 
+ +Since commit ec94c2696f0b ("tcp/dccp: avoid one atomic operation for +timewait hashdance"), inet_twsk_hashdance() sets TIME-WAIT socket's +sk_refcnt after putting it into ehash and releasing the bucket lock. + +Thus, there is a small race window where other threads could try to +reuse the port during connect() and call sock_hold() in tcp_twsk_unique() +for the TIME-WAIT socket with zero refcnt. + +If that happens, the refcnt taken by tcp_twsk_unique() is overwritten +and sock_put() will cause underflow, triggering a real use-after-free +somewhere else. + +To avoid the use-after-free, we need to use refcount_inc_not_zero() in +tcp_twsk_unique() and give up on reusing the port if it returns false. + +[0]: +refcount_t: addition on 0; use-after-free. +WARNING: CPU: 0 PID: 1039313 at lib/refcount.c:25 refcount_warn_saturate+0xe5/0x110 +CPU: 0 PID: 1039313 Comm: trigger Not tainted 6.8.6-200.fc39.x86_64 #1 +Hardware name: VMware, Inc. VMware20,1/440BX Desktop Reference Platform, BIOS VMW201.00V.21805430.B64.2305221830 05/22/2023 +RIP: 0010:refcount_warn_saturate+0xe5/0x110 +Code: 42 8e ff 0f 0b c3 cc cc cc cc 80 3d aa 13 ea 01 00 0f 85 5e ff ff ff 48 c7 c7 f8 8e b7 82 c6 05 96 13 ea 01 01 e8 7b 42 8e ff <0f> 0b c3 cc cc cc cc 48 c7 c7 50 8f b7 82 c6 05 7a 13 ea 01 01 e8 +RSP: 0018:ffffc90006b43b60 EFLAGS: 00010282 +RAX: 0000000000000000 RBX: ffff888009bb3ef0 RCX: 0000000000000027 +RDX: ffff88807be218c8 RSI: 0000000000000001 RDI: ffff88807be218c0 +RBP: 0000000000069d70 R08: 0000000000000000 R09: ffffc90006b439f0 +R10: ffffc90006b439e8 R11: 0000000000000003 R12: ffff8880029ede84 +R13: 0000000000004e20 R14: ffffffff84356dc0 R15: ffff888009bb3ef0 +FS: 00007f62c10926c0(0000) GS:ffff88807be00000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 0000000020ccb000 CR3: 000000004628c005 CR4: 0000000000f70ef0 +PKRU: 55555554 +Call Trace: + + ? refcount_warn_saturate+0xe5/0x110 + ? __warn+0x81/0x130 + ? refcount_warn_saturate+0xe5/0x110 + ? 
report_bug+0x171/0x1a0 + ? refcount_warn_saturate+0xe5/0x110 + ? handle_bug+0x3c/0x80 + ? exc_invalid_op+0x17/0x70 + ? asm_exc_invalid_op+0x1a/0x20 + ? refcount_warn_saturate+0xe5/0x110 + tcp_twsk_unique+0x186/0x190 + __inet_check_established+0x176/0x2d0 + __inet_hash_connect+0x74/0x7d0 + ? __pfx___inet_check_established+0x10/0x10 + tcp_v4_connect+0x278/0x530 + __inet_stream_connect+0x10f/0x3d0 + inet_stream_connect+0x3a/0x60 + __sys_connect+0xa8/0xd0 + __x64_sys_connect+0x18/0x20 + do_syscall_64+0x83/0x170 + entry_SYSCALL_64_after_hwframe+0x78/0x80 +RIP: 0033:0x7f62c11a885d +Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 90 f3 0f 1e fa 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d a3 45 0c 00 f7 d8 64 89 01 48 +RSP: 002b:00007f62c1091e58 EFLAGS: 00000296 ORIG_RAX: 000000000000002a +RAX: ffffffffffffffda RBX: 0000000020ccb004 RCX: 00007f62c11a885d +RDX: 0000000000000010 RSI: 0000000020ccb000 RDI: 0000000000000003 +RBP: 00007f62c1091e90 R08: 0000000000000000 R09: 0000000000000000 +R10: 0000000000000000 R11: 0000000000000296 R12: 00007f62c10926c0 +R13: ffffffffffffff88 R14: 0000000000000000 R15: 00007ffe237885b0 + + +Fixes: ec94c2696f0b ("tcp/dccp: avoid one atomic operation for timewait hashdance") +Reported-by: Anderson Nascimento +Closes: https://lore.kernel.org/netdev/37a477a6-d39e-486b-9577-3463f655a6b7@allelesecurity.com/ +Suggested-by: Eric Dumazet +Signed-off-by: Kuniyuki Iwashima +Reviewed-by: Eric Dumazet +Link: https://lore.kernel.org/r/20240501213145.62261-1-kuniyu@amazon.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/ipv4/tcp_ipv4.c | 8 +++++++- + 1 file changed, 7 insertions(+), 1 deletion(-) + +diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c +index 0666be6b9ec93..e162bed1916ae 100644 +--- a/net/ipv4/tcp_ipv4.c ++++ b/net/ipv4/tcp_ipv4.c +@@ -153,6 +153,12 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) + if (tcptw->tw_ts_recent_stamp && 
+ (!twp || (reuse && time_after32(ktime_get_seconds(), + tcptw->tw_ts_recent_stamp)))) { ++ /* inet_twsk_hashdance() sets sk_refcnt after putting twsk ++ * and releasing the bucket lock. ++ */ ++ if (unlikely(!refcount_inc_not_zero(&sktw->sk_refcnt))) ++ return 0; ++ + /* In case of repair and re-using TIME-WAIT sockets we still + * want to be sure that it is safe as above but honor the + * sequence numbers and time stamps set as part of the repair +@@ -173,7 +179,7 @@ int tcp_twsk_unique(struct sock *sk, struct sock *sktw, void *twp) + tp->rx_opt.ts_recent = tcptw->tw_ts_recent; + tp->rx_opt.ts_recent_stamp = tcptw->tw_ts_recent_stamp; + } +- sock_hold(sktw); ++ + return 1; + } + +-- +2.43.0 + diff --git a/queue-5.15/xfrm-preserve-vlan-tags-for-transport-mode-software-.patch b/queue-5.15/xfrm-preserve-vlan-tags-for-transport-mode-software-.patch new file mode 100644 index 00000000000..16e8e799146 --- /dev/null +++ b/queue-5.15/xfrm-preserve-vlan-tags-for-transport-mode-software-.patch @@ -0,0 +1,153 @@ +From 928d1eeea263fdec821d6beeee33dfcf293b6bfa Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 23 Apr 2024 18:00:24 +1200 +Subject: xfrm: Preserve vlan tags for transport mode software GRO + +From: Paul Davey + +[ Upstream commit 58fbfecab965014b6e3cc956a76b4a96265a1add ] + +The software GRO path for esp transport mode uses skb_mac_header_rebuild +prior to re-injecting the packet via the xfrm_napi_dev. This only +copies skb->mac_len bytes of header which may not be sufficient if the +packet contains 802.1Q tags or other VLAN tags. Worse copying only the +initial header will leave a packet marked as being VLAN tagged but +without the corresponding tag leading to mangling when it is later +untagged. + +The VLAN tags are important when receiving the decrypted esp transport +mode packet after GRO processing to ensure it is received on the correct +interface. 
+
+Therefore record the full mac header length in xfrm*_transport_input for
+later use in corresponding xfrm*_transport_finish to copy the entire mac
+header when rebuilding the mac header for GRO. The skb->data pointer is
+left pointing skb->mac_len bytes after the start of the mac header as
+is expected by the network stack and network and transport header
+offsets reset to this location.
+
+Fixes: 7785bba299a8 ("esp: Add a software GRO codepath")
+Signed-off-by: Paul Davey
+Signed-off-by: Steffen Klassert
+Signed-off-by: Sasha Levin
+---
+ include/linux/skbuff.h | 15 +++++++++++++++
+ include/net/xfrm.h     |  3 +++
+ net/ipv4/xfrm4_input.c |  6 +++++-
+ net/ipv6/xfrm6_input.c |  6 +++++-
+ net/xfrm/xfrm_input.c  |  8 ++++++++
+ 5 files changed, 36 insertions(+), 2 deletions(-)
+
+diff --git a/include/linux/skbuff.h b/include/linux/skbuff.h
+index 7ed1d4472c0c8..15de91c65a09a 100644
+--- a/include/linux/skbuff.h
++++ b/include/linux/skbuff.h
+@@ -2735,6 +2735,21 @@ static inline void skb_mac_header_rebuild(struct sk_buff *skb)
+ 	}
+ }
+
++/* Move the full mac header up to current network_header.
++ * Leaves skb->data pointing at offset skb->mac_len into the mac_header.
++ * Must be provided the complete mac header length.
++ */ ++static inline void skb_mac_header_rebuild_full(struct sk_buff *skb, u32 full_mac_len) ++{ ++ if (skb_mac_header_was_set(skb)) { ++ const unsigned char *old_mac = skb_mac_header(skb); ++ ++ skb_set_mac_header(skb, -full_mac_len); ++ memmove(skb_mac_header(skb), old_mac, full_mac_len); ++ __skb_push(skb, full_mac_len - skb->mac_len); ++ } ++} ++ + static inline int skb_checksum_start_offset(const struct sk_buff *skb) + { + return skb->csum_start - skb_headroom(skb); +diff --git a/include/net/xfrm.h b/include/net/xfrm.h +index 6156ed2950f97..2e2e30d31a763 100644 +--- a/include/net/xfrm.h ++++ b/include/net/xfrm.h +@@ -1019,6 +1019,9 @@ struct xfrm_offload { + #define CRYPTO_INVALID_PACKET_SYNTAX 64 + #define CRYPTO_INVALID_PROTOCOL 128 + ++ /* Used to keep whole l2 header for transport mode GRO */ ++ __u32 orig_mac_len; ++ + __u8 proto; + __u8 inner_ipproto; + }; +diff --git a/net/ipv4/xfrm4_input.c b/net/ipv4/xfrm4_input.c +index eac206a290d05..1f50517289fd9 100644 +--- a/net/ipv4/xfrm4_input.c ++++ b/net/ipv4/xfrm4_input.c +@@ -61,7 +61,11 @@ int xfrm4_transport_finish(struct sk_buff *skb, int async) + ip_send_check(iph); + + if (xo && (xo->flags & XFRM_GRO)) { +- skb_mac_header_rebuild(skb); ++ /* The full l2 header needs to be preserved so that re-injecting the packet at l2 ++ * works correctly in the presence of vlan tags. 
++ */ ++ skb_mac_header_rebuild_full(skb, xo->orig_mac_len); ++ skb_reset_network_header(skb); + skb_reset_transport_header(skb); + return 0; + } +diff --git a/net/ipv6/xfrm6_input.c b/net/ipv6/xfrm6_input.c +index 4907ab241d6be..7dbefbb338ca5 100644 +--- a/net/ipv6/xfrm6_input.c ++++ b/net/ipv6/xfrm6_input.c +@@ -56,7 +56,11 @@ int xfrm6_transport_finish(struct sk_buff *skb, int async) + skb_postpush_rcsum(skb, skb_network_header(skb), nhlen); + + if (xo && (xo->flags & XFRM_GRO)) { +- skb_mac_header_rebuild(skb); ++ /* The full l2 header needs to be preserved so that re-injecting the packet at l2 ++ * works correctly in the presence of vlan tags. ++ */ ++ skb_mac_header_rebuild_full(skb, xo->orig_mac_len); ++ skb_reset_network_header(skb); + skb_reset_transport_header(skb); + return 0; + } +diff --git a/net/xfrm/xfrm_input.c b/net/xfrm/xfrm_input.c +index a6861832710d9..7f326a01cbcea 100644 +--- a/net/xfrm/xfrm_input.c ++++ b/net/xfrm/xfrm_input.c +@@ -400,11 +400,15 @@ static int xfrm_prepare_input(struct xfrm_state *x, struct sk_buff *skb) + */ + static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb) + { ++ struct xfrm_offload *xo = xfrm_offload(skb); + int ihl = skb->data - skb_transport_header(skb); + + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); ++ if (xo) ++ xo->orig_mac_len = ++ skb_mac_header_was_set(skb) ? 
skb_mac_header_len(skb) : 0; + skb->network_header = skb->transport_header; + } + ip_hdr(skb)->tot_len = htons(skb->len + ihl); +@@ -415,11 +419,15 @@ static int xfrm4_transport_input(struct xfrm_state *x, struct sk_buff *skb) + static int xfrm6_transport_input(struct xfrm_state *x, struct sk_buff *skb) + { + #if IS_ENABLED(CONFIG_IPV6) ++ struct xfrm_offload *xo = xfrm_offload(skb); + int ihl = skb->data - skb_transport_header(skb); + + if (skb->transport_header != skb->network_header) { + memmove(skb_transport_header(skb), + skb_network_header(skb), ihl); ++ if (xo) ++ xo->orig_mac_len = ++ skb_mac_header_was_set(skb) ? skb_mac_header_len(skb) : 0; + skb->network_header = skb->transport_header; + } + ipv6_hdr(skb)->payload_len = htons(skb->len + ihl - +-- +2.43.0 +