From: Jakub Kicinski
Date: Tue, 4 Nov 2025 02:11:43 +0000 (-0800)
Subject: Merge branch 'add-support-to-do-threaded-napi-busy-poll'

Merge branch 'add-support-to-do-threaded-napi-busy-poll'

Samiullah Khawaja says:

====================
Add support to do threaded napi busy poll

Extend the existing threaded NAPI poll support to do continuous busy
polling. This is used to continuously poll a NAPI instance and fetch
descriptors from the backing RX/TX queues for low-latency applications.

Allow threaded busy poll to be enabled through netlink, so it can be
turned on for a set of dedicated NAPIs used by low-latency
applications. Once enabled, the user can fetch the PID of the kthread
doing the NAPI polling and set its affinity, priority and scheduler
according to the low-latency requirements.

Extend the netlink interface to allow enabling/disabling threaded busy
polling at the individual NAPI level.

We use this for our AF_XDP-based hard low-latency use case with
microsecond-level latency requirements. For our use case we want low
jitter and stable latency at P99.

The following is an analysis and comparison of the available (and
compatible) busy poll interfaces for a low-latency use case with a
stable P99. This can be suitable for applications that want very low
latency at the expense of CPU usage and efficiency.

The already existing APIs (SO_BUSY_POLL and epoll) allow busy polling
a NAPI backing a socket, but the missing piece is a mechanism to busy
poll a NAPI instance in a dedicated thread while ignoring available
events or packets, regardless of the userspace API. Most existing
mechanisms are designed to work in a pattern where you poll until new
packets or events are received, after which userspace is expected to
handle them. As a result, one has to hack together a solution using a
mechanism intended to receive packets or events, not to simply NAPI
poll. NAPI threaded busy polling, on the other hand, provides this
capability natively, independent of any userspace API. This makes it
really easy to set up and manage.

For the analysis we use `xsk_rr`, an AF_XDP-based benchmarking tool.
The tool simulates a real workload as follows:

- It sends UDP packets between 2 machines.
- The client machine sends packets at a fixed frequency. To maintain
  the send frequency, we use open-loop sampling; that is, the packets
  are sent from a separate thread.
- The server replies to each packet inline, reading the packet from
  the RX ring and replying through the TX ring.
- To simulate the application processing time, we apply a configurable
  delay in usecs on the client side after a reply is received from the
  server.

The xsk_rr tool is posted separately as an RFC for
tools/testing/selftest.

We use this tool with the following NAPI polling configurations:

- Interrupts only
- SO_BUSY_POLL (inline in the same thread where the client receives
  the packet)
- SO_BUSY_POLL (separate thread on a separate core)
- Threaded NAPI busy poll
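For context, the two SO_BUSY_POLL configurations above use the
long-standing per-socket busy-poll knob rather than anything added by
this series. Below is a minimal sketch of that baseline setup, assuming
a plain UDP socket for brevity (xsk_rr itself uses AF_XDP sockets) and
a 400 usec budget that mirrors the busy_read value in the script that
follows:

```
/*
 * Baseline per-socket busy poll (pre-existing kernel API, not part of
 * this series). With SO_BUSY_POLL set, a blocking recv() on this
 * socket busy polls the NAPI instance backing it for up to
 * busy_poll_usecs before falling back to interrupt-driven waiting.
 */
#include <stdio.h>
#include <sys/socket.h>

#ifndef SO_BUSY_POLL
#define SO_BUSY_POLL 46	/* asm-generic value */
#endif

int main(void)
{
	int busy_poll_usecs = 400;	/* mirrors busy_read below */
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
		       &busy_poll_usecs, sizeof(busy_poll_usecs))) {
		perror("setsockopt(SO_BUSY_POLL)");
		return 1;
	}
	/* ... bind() and run the receive loop as usual ... */
	return 0;
}
```

Threaded NAPI busy poll needs no such per-socket setup: once enabled,
the kthread polls the NAPI instance continuously, independent of
whichever userspace API consumes the packets.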
The system is configured using the following script in all four cases:

```
echo 0 | sudo tee /sys/class/net/eth0/threaded
echo 0 | sudo tee /proc/sys/kernel/timer_migration
echo off | sudo tee /sys/devices/system/cpu/smt/control

sudo ethtool -L eth0 rx 1 tx 1
sudo ethtool -G eth0 rx 1024

echo 0 | sudo tee /proc/sys/net/core/rps_sock_flow_entries
echo 0 | sudo tee /sys/class/net/eth0/queues/rx-0/rps_cpus

# pin IRQs on CPU 2
IRQS="$(gawk '/eth0-(TxRx-)?1/ {match($1, /([0-9]+)/, arr); \
	print arr[0]}' < /proc/interrupts)"
for irq in "${IRQS}"; \
	do echo 2 | sudo tee /proc/irq/$irq/smp_affinity_list; done

echo -1 | sudo tee /proc/sys/kernel/sched_rt_runtime_us

for i in /sys/devices/virtual/workqueue/*/cpumask; \
	do echo $i; echo 1,2,3,4,5,6 > $i; done

if [[ -z "$1" ]]; then
	echo 400 | sudo tee /proc/sys/net/core/busy_read
	echo 100 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
	echo 15000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
fi

sudo ethtool -C eth0 adaptive-rx off adaptive-tx off rx-usecs 0 tx-usecs 0

if [[ "$1" == "enable_threaded" ]]; then
	echo 0 | sudo tee /proc/sys/net/core/busy_poll
	echo 0 | sudo tee /proc/sys/net/core/busy_read
	echo 100 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
	echo 15000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
	NAPI_ID=$(ynl --family netdev --output-json --do queue-get \
		--json '{"ifindex": '${IFINDEX}', "id": '0', "type": "rx"}' \
		| jq '."napi-id"')
	ynl --family netdev --do napi-set \
		--json '{"id": "'${NAPI_ID}'", "threaded": "busy-poll"}'
	NAPI_T=$(ynl --family netdev --output-json --do napi-get \
		--json '{"id": "'$NAPI_ID'"}' | jq '."pid"')
	sudo chrt -f -p 50 $NAPI_T

	# pin threaded poll thread to CPU 2
	sudo taskset -pc 2 $NAPI_T
fi

if [[ "$1" == "enable_interrupt" ]]; then
	echo 0 | sudo tee /proc/sys/net/core/busy_read
	echo 0 | sudo tee /sys/class/net/eth0/napi_defer_hard_irqs
	echo 15000 | sudo tee /sys/class/net/eth0/gro_flush_timeout
fi
```

To enable the various configurations, the script can be run as
follows:

- Interrupt Only
```