Merge tag 'net-6.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net
Pull networking fixes from Paolo Abeni:
"Including fixes from can and xfrm.
The TI regression reported last week is actually in our net-next tree;
it does not affect 6.16.
We are investigating a virtio regression which is quite hard to
reproduce - currently only our CI sporadically hits it. Hopefully it
should not be critical, and I'm not sure that an additional week would
be enough to solve it.
Current release - fix to a fix:
- sched: sch_qfq: avoid sleeping in atomic context in qfq_delete_class
Previous releases - regressions:
- xfrm:
- set transport header to fix UDP GRO handling
- delete x->tunnel as we delete x
- eth:
- mlx5: fix memory leak in cmd_exec()
- i40e: when removing VF MAC filters, avoid losing PF-set MAC
- gve: fix stuck TX queue for DQ queue format
Previous releases - always broken:
- can: fix NULL pointer deref of struct can_priv::do_set_mode
- eth:
- ice: fix a null pointer dereference in ice_copy_and_init_pkg()
- ism: fix concurrency management in ism_cmd()
- dpaa2: fix device reference count leak in MAC endpoint handling
- icssg-prueth: fix buffer allocation for ICSSG
Misc:
- selftests: mptcp: increase code coverage"
* tag 'net-6.16-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net: (34 commits)
net: hns3: default enable tx bounce buffer when smmu enabled
net: hns3: fixed vf get max channels bug
net: hns3: disable interrupt when ptp init failed
net: hns3: fix concurrent setting vlan filter issue
s390/ism: fix concurrency management in ism_cmd()
selftests: drv-net: wait for iperf client to stop sending
MAINTAINERS: Add in6.h to MAINTAINERS
selftests: netfilter: tone-down conntrack clash test
can: netlink: can_changelink(): fix NULL pointer deref of struct can_priv::do_set_mode
net/sched: sch_qfq: Avoid triggering might_sleep in atomic context in qfq_delete_class
gve: Fix stuck TX queue for DQ queue format
net: appletalk: Fix use-after-free in AARP proxy probe
net: bcmasp: Restore programming of TX map vector register
selftests: mptcp: connect: also cover checksum
selftests: mptcp: connect: also cover alt modes
e1000e: ignore uninitialized checksum word on tgp
e1000e: disregard NVM checksum on tgp when valid checksum bit is not set
ice: Fix a null pointer dereference in ice_copy_and_init_pkg()
i40e: When removing VF MAC filters, only check PF-set MAC
i40e: report VF tx_dropped with tx_errors instead of tx_discards
...
1) Optimize to hold device only for the asynchronous decryption,
where it is really needed.
From Jianbo Liu.
2) Align our inbound SA lookup to RFC 4301. Only SPI and protocol
should be used for an inbound SA lookup.
From Aakash Kumar S.
3) Skip redundant statistics update for xfrm crypto offload.
From Jianbo Liu.
Please pull or let me know if there are problems.
* tag 'ipsec-next-2025-07-23' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec-next:
xfrm: Skip redundant statistics update for crypto offload
xfrm: Duplicate SPI Handling
xfrm: hold device only for the asynchronous decryption
====================
1) Preemption fixes for xfrm_state_find.
From Sabrina Dubroca.
2) Initialize offload path also for SW IPsec GRO. This fixes a
performance regression on SW IPsec offload.
From Leon Romanovsky.
3) Fix IPsec UDP GRO for IKE packets.
From Tobias Brunner.
4) Fix transport header setting for IPcomp after decompressing.
From Fernando Fernandez Mancera.
5) Fix use-after-free when xfrmi_changelink tries to change
collect_md for an xfrm interface.
From Eyal Birger.
6) Delete the special IPcomp x->tunnel state along with the state x
to avoid refcount problems.
From Sabrina Dubroca.
Please pull or let me know if there are problems.
* tag 'ipsec-2025-07-23' of git://git.kernel.org/pub/scm/linux/kernel/git/klassert/ipsec:
Revert "xfrm: destroy xfrm_state synchronously on net exit path"
xfrm: delete x->tunnel as we delete x
xfrm: interface: fix use-after-free after changing collect_md xfrm interface
xfrm: ipcomp: adjust transport header after decompressing
xfrm: Set transport header to fix UDP GRO handling
xfrm: always initialize offload path
xfrm: state: use a consistent pcpu_id in xfrm_state_find
xfrm: state: initialize state_ptrs earlier in xfrm_state_find
====================
net: hns3: default enable tx bounce buffer when smmu enabled
The SMMU engine on the HIP09 chip has a hardware issue: the SMMU
pagetable prefetch feature may prefetch and use an invalid PTE even
though the PTE is valid at that time. This causes the device to trigger
spurious page faults. The fix of adding a SYNC command when the SMMU
maps an IOVA avoids the prefetching, but it makes NIC performance drop
sharply. Work around this by always enabling the TX bounce buffer, which
avoids mapping/unmapping on the TX path.
This issue only affects HNS3, so always enable the TX bounce buffer when
the SMMU is enabled to preserve performance.
Fixes: 295ba232a8c3 ("net: hns3: add device version to replace pci revision")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250722125423.1270673-5-shaojijie@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
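As a hedged illustration (not the hns3 code itself; the helper name is hypothetical), the decision boils down to checking whether the device sits behind a translating IOMMU/SMMU domain, since only then does the per-packet map/unmap cost on the TX path apply:

  #include <linux/iommu.h>
  #include <linux/device.h>

  /* Hypothetical helper: prefer the TX bounce-buffer (copybreak) path when
   * the NIC is behind a translating IOMMU domain, so the hot TX path copies
   * into pre-mapped buffers instead of mapping every skb.
   */
  static bool hypothetical_use_tx_bounce(struct device *dev)
  {
          struct iommu_domain *domain = iommu_get_domain_for_dev(dev);

          return domain && domain->type != IOMMU_DOMAIN_IDENTITY;
  }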
Currently, the queried maximum number of VF channels is the maximum
number of channels supported by each TC. However, the actual maximum is
the number of channels supported by the device.
Fixes: 849e46077689 ("net: hns3: add ethtool_ops.get_channels support for VF")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Hao Lan <lanhao@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250722125423.1270673-4-shaojijie@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
The vport->req_vlan_fltr_en field may be changed concurrently by
hclge_sync_vlan_fltr_state(), called from the periodic work task, and by
hclge_enable_vport_vlan_filter(), called on user configuration. This can
render the user configuration inoperative. Fix it by protecting the
field with the vport lock.
Fixes: 2ba306627f59 ("net: hns3: add support for modify VLAN filter state")
Signed-off-by: Jian Shen <shenjian15@huawei.com>
Signed-off-by: Jijie Shao <shaojijie@huawei.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250722125423.1270673-2-shaojijie@huawei.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Halil Pasic [Tue, 22 Jul 2025 16:18:17 +0000 (18:18 +0200)]
s390/ism: fix concurrency management in ism_cmd()
The s390x ISM device data sheet clearly states that only one
request-response sequence is allowable per ISM function at any point in
time. Unfortunately as of today the s390/ism driver in Linux does not
honor that requirement. This patch aims to rectify that.
This problem was discovered based on Aliaksei's bug report which states
that for certain workloads the ISM functions end up entering error state
(with PEC 2 as seen from the logs) after a while and as a consequence
connections handled by the respective function break, and for future
connection requests the ISM device is not considered -- given it is in a
dysfunctional state. During further debugging PEC 3A was observed as
well.
A kernel message like
[ 1211.244319] zpci: 061a:00:00.0: Event 0x2 reports an error for PCI function 0x61a
is a reliable indicator of the stated function entering error state
with PEC 2. Let me also point out that a kernel message like
[ 1211.244325] zpci: 061a:00:00.0: The ism driver bound to the device does not support error recovery
is a reliable indicator that the ISM function won't be auto-recovered
because the ISM driver currently lacks support for it.
On a technical level, without this synchronization, commands (inputs to
the FW) may be partially or fully overwritten (corrupted) by another CPU
trying to issue commands on the same function. There is hard evidence that
this can lead to DMB token values being used as DMB IOVAs, leading to
PEC 2 PCI events indicating invalid DMA. But this is only one of the
failure modes imaginable. In theory even completely losing one command
and executing another one twice and then trying to interpret the outputs
as if the command we intended to execute was actually executed and not
the other one is also possible. Frankly, I don't feel confident about
providing an exhaustive list of possible consequences.
Fixes: 684b89bc39ce ("s390/ism: add device driver for internal shared memory")
Reported-by: Aliaksei Makarau <Aliaksei.Makarau@ibm.com>
Tested-by: Mahanta Jambigi <mjambigi@linux.ibm.com>
Tested-by: Aliaksei Makarau <Aliaksei.Makarau@ibm.com>
Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
Reviewed-by: Alexandra Winter <wintera@linux.ibm.com>
Signed-off-by: Alexandra Winter <wintera@linux.ibm.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250722161817.1298473-1-wintera@linux.ibm.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
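A hedged sketch of the serialization the data sheet requires (structure name and locking primitive are hypothetical, not the actual ism_cmd() change): only one request-response sequence may be in flight per ISM function, so the shared command area is guarded for the whole build/issue/read cycle:

  #include <linux/mutex.h>
  #include <linux/string.h>

  struct hypothetical_ism_dev {
          struct mutex cmd_lock;  /* serializes firmware request-response cycles */
          void *cmd_area;         /* shared request/response buffer */
  };

  static int hypothetical_ism_cmd(struct hypothetical_ism_dev *ism,
                                  const void *req, size_t len)
  {
          int ret;

          mutex_lock(&ism->cmd_lock);
          memcpy(ism->cmd_area, req, len);  /* build the request */
          ret = 0;  /* issue the PCI instruction and read the response here */
          mutex_unlock(&ism->cmd_lock);
          return ret;
  }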
Merge tag 'drm-fixes-2025-07-24' of https://gitlab.freedesktop.org/drm/kernel
Pull drm fixes from Dave Airlie:
"This might just be part one, but I'm sending it a bit early as it has
two sets of reverts for regressions, one is all the gem/dma-buf
handling and another was a nouveau ioctl change.
Otherwise there is an amdgpu fix, nouveau fix and a scheduler fix.
If any other changes come in I'll follow up with another more usual
Fri/Sat MR.
gem:
- revert all the dma-buf/gem changes as there are lifetime issues
with them
nouveau:
- revert an ioctl change as it causes issues
- fix NULL ptr on fermi
bridge:
- remove extra semicolon
sched:
- remove hang causing optimisation
amdgpu:
- fix garbage in cleared vram after resume"
* tag 'drm-fixes-2025-07-24' of https://gitlab.freedesktop.org/drm/kernel:
drm/bridge: ti-sn65dsi86: Remove extra semicolon in ti_sn_bridge_probe()
Revert "drm/nouveau: check ioctl command codes better"
drm/nouveau/nvif: fix null ptr deref on pre-fermi boards
Revert "drm/gem-dma: Use dma_buf from GEM object instance"
Revert "drm/gem-shmem: Use dma_buf from GEM object instance"
Revert "drm/gem-framebuffer: Use dma_buf from GEM object instance"
Revert "drm/prime: Use dma_buf from GEM object instance"
Revert "drm/etnaviv: Use dma_buf from GEM object instance"
Revert "drm/vmwgfx: Use dma_buf from GEM object instance"
Revert "drm/virtio: Use dma_buf from GEM object instance"
drm/sched: Remove optimization that causes hang when killing dependent jobs
drm/amdgpu: Reset the clear flag in buddy during resume
selftests: drv-net: wait for iperf client to stop sending
A few packets may still be sent out during the termination of iperf
processes. These late packets cause failures in rss_ctx.py when they
arrive on queues expected to be empty.
Jakub Kicinski [Thu, 24 Jul 2025 00:52:10 +0000 (17:52 -0700)]
Merge branch 'dualpi2-patch'
Chia-Yu Chang says:
====================
DUALPI2 patch
This patch series adds DualPI Improved with a Square (DualPI2) with the
following features:
* Supports congestion controls that comply with the Prague requirements
in RFC9331 (e.g. TCP-Prague)
* Coupled dual-queue that separates the L4S traffic in a low latency
queue (L-queue), without harming remaining traffic that is scheduled
in classic queue (C-queue) due to congestion-coupling using PI2
as defined in RFC9332
* Configurable overload strategies
* Use of sojourn time to reliably estimate queue delay
* Supports ECN L4S-identifier (IP.ECN==0b*1) to classify traffic into
respective queues
For more details of DualPI2, please refer to IETF RFC9332
(https://datatracker.ietf.org/doc/html/rfc9332).
====================
Introduce the specification of tc qdisc DualPI2 stats and attributes,
which is the reference implementation of IETF RFC9332 DualQ Coupled AQM
(https://datatracker.ietf.org/doc/html/rfc9332) providing two different
queues: low latency queue (L-queue) and classic queue (C-queue).
selftests/tc-testing: Add selftests for qdisc DualPI2
Update the configuration of tc-tests and preload the DualPI2 module for
the self-tests, and add the following self-test cases for DualPI2:
Test a4c7: Create DualPI2 with default setting
Test 1ea4: Create DualPI2 with memlimit
Test 2130: Create DualPI2 with typical_rtt and max_rtt
Test 90c1: Create DualPI2 with max_rtt
Test 7b3c: Create DualPI2 with any_ect option
Test 49a3: Create DualPI2 with overflow option
Test d0a1: Create DualPI2 with drop_enqueue option
Test f051: Create DualPI2 with no_split_gso option
Test 456b: Create DualPI2 with packet step_thresh
Test 610c: Create DualPI2 with packet min_qlen_step
Test b4fa: Create DualPI2 with packet coupling_factor
Test 37f1: Create DualPI2 with packet classic_protection
DualPI2 provides L4S-type low latency and low loss to traffic that uses a
scalable congestion controller (e.g. TCP-Prague, DCTCP) without
degrading the performance of 'classic' traffic (e.g. Reno,
Cubic etc.). It is to be the reference implementation of IETF RFC9332
DualQ Coupled AQM (https://datatracker.ietf.org/doc/html/rfc9332).
Note that creating two independent queues cannot meet the goal of
DualPI2 mentioned in RFC9332: "...to preserve fairness between
ECN-capable and non-ECN-capable traffic." Further, it could even
lead to starvation of Classic traffic, which is also inconsistent
with the requirements in RFC9332: "...although priority MUST be
bounded in order not to starve Classic traffic." DualPI2 is
designed to maintain approximate per-flow fairness on L-queue and
C-queue by forming a single qdisc using the coupling factor and
scheduler between two queues.
The qdisc provides two queues called low latency and classic. It
classifies packets based on the ECN field in the IP headers. By
default it directs non-ECN and ECT(0) into the classic queue and
ECT(1) and CE into the low latency queue, as per the IETF spec.
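A hedged sketch of that classification rule (plain C, not the qdisc code): packets whose ECN field has the low bit set, i.e. ECT(1) and CE, go to the L-queue, everything else to the C-queue:

  #include <stdbool.h>

  enum { ECN_NOT_ECT = 0, ECN_ECT1 = 1, ECN_ECT0 = 2, ECN_CE = 3 };

  /* Classify on the two ECN bits of the IP TOS / Traffic Class field. */
  static bool goes_to_l_queue(unsigned char tos)
  {
          unsigned char ecn = tos & 0x3;

          return ecn == ECN_ECT1 || ecn == ECN_CE;  /* IP.ECN == 0b*1 */
  }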
Each queue runs its own AQM:
* The classic AQM is called PI2, which is similar to the PIE AQM but
more responsive and simpler. Classic traffic requires a decent
target queue (default 15ms for Internet deployment) to fully
utilize the link and to avoid high drop rates.
* The low latency AQM is, by default, a very shallow ECN marking
threshold (1ms) similar to that used for DCTCP.
The DualQ isolates the low queuing delay of the Low Latency queue
from the larger delay of the 'Classic' queue. However, from a
bandwidth perspective, flows in either queue will share out the link
capacity as if there was just a single queue. This bandwidth pooling
effect is achieved by coupling together the drop and ECN-marking
probabilities of the two AQMs.
The PI2 AQM has two main parameters in addition to its target delay.
The integral gain factor alpha is used to slowly correct any persistent
standing queue error from the target delay, while the proportional gain
factor beta is used to quickly compensate for queue changes (growth or
shrinkage). Either alpha and beta are given as parameters, or they can
be calculated by tc from alternative typical and maximum RTT parameters.
Internally, the output of a linear Proportional Integral (PI)
controller is used for both queues. This output is squared to
calculate the drop or ECN-marking probability of the classic queue.
This counterbalances the square-root rate equation of Reno/Cubic,
which is the trick that balances flow rates across the queues. For
the ECN-marking probability of the low latency queue, the output of
the base AQM is multiplied by a coupling factor. This determines the
balance between the flow rates in each queue. The default setting
makes the flow rates roughly equal, which should be generally
applicable.
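A hedged numeric sketch of that coupling (floating point for clarity; the qdisc itself uses fixed-point arithmetic and the values below are only illustrative): the linear PI output p' is squared for the Classic queue and multiplied by the coupling factor for the L-queue:

  #include <stdio.h>

  int main(void)
  {
          double p_prime = 0.05;  /* output of the linear PI controller */
          double k = 2.0;         /* coupling factor; 2 is the typical default */

          double p_classic = p_prime * p_prime;  /* C-queue drop/mark probability */
          double p_l4s = k * p_prime;            /* L-queue ECN-mark probability */

          if (p_l4s > 1.0)
                  p_l4s = 1.0;

          printf("classic=%.4f l4s=%.4f\n", p_classic, p_l4s);
          return 0;
  }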
If DUALPI2 AQM has detected overload (due to excessive non-responsive
traffic in either queue), it will switch to signaling congestion
solely using drop, irrespective of the ECN field. Alternatively, it
can be configured to limit the drop probability and let the queue
grow and eventually overflow (like tail-drop).
GSO splitting in DUALPI2 is configurable from userspace; the default
behavior is to split GSO. When running DUALPI2 on unshaped 10GigE with a
4-download-stream test, splitting GSO apart halves the latency with no
loss in throughput:
sched: Dump configuration and statistics of dualpi2 qdisc
The configuration and statistics dump of the DualPI2 Qdisc provides
information related to both queues, such as packet numbers and queuing
delays in the L-queue and C-queue, as well as general information such as
probability value, WRR credits, memory usage, packet marking counters, max
queue size, etc.
The following patch includes enqueue/dequeue for DualPI2.
sched: Struct definition and parsing of dualpi2 qdisc
DualPI2 is the reference implementation of IETF RFC9332 DualQ Coupled
AQM (https://datatracker.ietf.org/doc/html/rfc9332) providing two
queues called low latency (L-queue) and classic (C-queue). By default,
it enqueues non-ECN and ECT(0) packets into the C-queue and ECT(1) and
CE packets into the low latency queue (L-queue), as per IETF RFC9332 spec.
This patch defines the dualpi2 Qdisc structure and parsing, and the
following two patches include dumping and enqueue/dequeue for the DualPI2.
The first step towards that is splitting struct page by its individual
users, as has already been done with folio and slab. This patchset does
that for page pool.
Matthew Wilcox tried and then stopped the same work; you can see it in:
I focused on removing the page pool members in struct page this time,
not moving the allocation code of page pool from net to mm. It can be
done later if needed.
====================
Byungchul Park [Mon, 21 Jul 2025 02:18:26 +0000 (11:18 +0900)]
netmem, mlx4: access ->pp_ref_count through netmem_desc instead of page
To eliminate the use of struct page in page pool, the page pool users
should use netmem descriptor and APIs instead.
Make mlx4 access ->pp_ref_count through netmem_desc instead of page.
While at it, add helpers, pp_page_to_nmdesc() and __pp_page_to_nmdesc(),
that can be used to get a netmem_desc from a page only if it is a pp
page. For now, since netmem_desc overlays struct page, this can be
achieved by just casting; a macro using _Generic covers const casting as
well.
Plus, change page_pool_page_is_pp() to take 'const struct page *' instead
of 'struct page *', since it doesn't modify data; this additionally
covers the const case.
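A hedged, userspace-sized sketch of the _Generic trick (toy struct layouts; the kernel sidesteps the aliasing question by actually overlaying the two structs): the macro dispatches on whether the argument is a const or non-const page pointer and preserves that qualification on the returned descriptor pointer:

  #include <stdio.h>

  struct page { int refcount; };              /* toy stand-in */
  struct netmem_desc { int pp_ref_count; };   /* overlays struct page here */

  #define page_to_nmdesc(p)                                               \
          _Generic((p),                                                   \
                   const struct page *: (const struct netmem_desc *)(p),  \
                   struct page *: (struct netmem_desc *)(p))

  int main(void)
  {
          struct page pg = { .refcount = 1 };
          const struct page *cp = &pg;

          /* Both calls compile; the second yields a const descriptor pointer. */
          printf("%d %d\n", page_to_nmdesc(&pg)->pp_ref_count,
                 page_to_nmdesc(cp)->pp_ref_count);
          return 0;
  }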
Byungchul Park [Mon, 21 Jul 2025 02:18:25 +0000 (11:18 +0900)]
netmem: use netmem_desc instead of page to access ->pp in __netmem_get_pp()
To eliminate the use of the page pool fields in struct page, the page
pool code should use netmem descriptor and APIs instead.
However, __netmem_get_pp() still accesses ->pp via struct page. So
change it to use struct netmem_desc instead, since ->pp will no longer
be available in struct page.
While at it, add a helper, __netmem_to_nmdesc(), that can be used to
unsafely get pointer to netmem_desc backing the netmem_ref, only when
the netmem_ref is always backed by system memory.
To simplify struct page, the page pool members of struct page should be
moved elsewhere, allowing them to be removed from struct page.
Introduce a network memory descriptor, struct netmem_desc, to store
these members, and union it with the existing fields in struct net_iov,
which allows organizing the fields of struct net_iov.
Signed-off-by: Byungchul Park <byungchul@sk.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
Reviewed-by: Mina Almasry <almasrymina@google.com>
Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20250721021835.63939-2-byungchul@sk.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Netdevsim emulates firmware update and it takes 5 seconds to complete.
For some use cases, this is too long and unnecessary. Allow the user to
configure the time by exposing a debugfs knob to set the chunk time.
devlink: Fix excessive stack usage in rate TC bandwidth parsing
The devlink_nl_rate_tc_bw_parse function uses a large stack array for
devlink attributes, which triggers a warning about excessive stack
usage:
net/devlink/rate.c: In function 'devlink_nl_rate_tc_bw_parse':
net/devlink/rate.c:382:1: error: the frame size of 1648 bytes is larger than 1536 bytes [-Werror=frame-larger-than=]
Introduce a separate attribute set specifically for rate TC bandwidth
parsing that only contains the two attributes actually used: index
and bandwidth. This reduces the stack array from DEVLINK_ATTR_MAX
entries to just 2 entries, solving the stack usage issue.
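A hedged sketch of the approach (attribute and policy names are hypothetical, not the devlink ones): parsing into a dedicated, two-attribute policy keeps the on-stack tb[] array tiny instead of sizing it by DEVLINK_ATTR_MAX:

  #include <net/netlink.h>

  enum {
          HYP_RATE_TC_BW_UNSPEC,
          HYP_RATE_TC_BW_INDEX,
          HYP_RATE_TC_BW_BW,
          __HYP_RATE_TC_BW_MAX,
  };
  #define HYP_RATE_TC_BW_MAX (__HYP_RATE_TC_BW_MAX - 1)

  static const struct nla_policy hyp_rate_tc_bw_policy[HYP_RATE_TC_BW_MAX + 1] = {
          [HYP_RATE_TC_BW_INDEX] = { .type = NLA_U8 },
          [HYP_RATE_TC_BW_BW]    = { .type = NLA_U32 },
  };

  static int hyp_parse_tc_bw(const struct nlattr *attr,
                             struct netlink_ext_ack *extack)
  {
          /* 3 on-stack entries instead of DEVLINK_ATTR_MAX + 1. */
          struct nlattr *tb[HYP_RATE_TC_BW_MAX + 1];

          return nla_parse_nested(tb, HYP_RATE_TC_BW_MAX, attr,
                                  hyp_rate_tc_bw_policy, extack);
  }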
Update devlink selftest to use the new 'index' and 'bw' attribute names
consistent with the YAML spec.
My CC-adding automation returned nothing on a future patch to the
include/linux/in6.h file, and I went looking for why. Add the missing
in6.h entry to MAINTAINERS.
Manuel Andreas [Wed, 23 Jul 2025 15:51:20 +0000 (17:51 +0200)]
KVM: x86/xen: Fix cleanup logic in emulation of Xen schedop poll hypercalls
kvm_xen_schedop_poll does a kmalloc_array() when a VM polls the host
for more than one event channel port (nr_ports > 1).
After the kmalloc_array(), the error paths need to go through the
"out" label, but the call to kvm_read_guest_virt() does not.
Fixes: 92c58965e965 ("KVM: x86/xen: Use kvm_read_guest_virt() instead of open-coding it badly")
Reviewed-by: David Woodhouse <dwmw@amazon.co.uk>
Signed-off-by: Manuel Andreas <manuel.andreas@tum.de>
[Adjusted commit message. - Paolo]
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
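A hedged sketch of the rule the fix enforces (generic cleanup pattern, not the KVM code; the failure condition below is a stand-in for kvm_read_guest_virt() failing): once the array is allocated, every error must reach the "out" label so the allocation is freed exactly once:

  #include <linux/slab.h>
  #include <linux/errno.h>

  static int hypothetical_poll(unsigned int nr_ports)
  {
          unsigned int *ports;
          int ret;

          ports = kmalloc_array(nr_ports, sizeof(*ports), GFP_KERNEL);
          if (!ports)
                  return -ENOMEM;

          ret = -EFAULT;
          if (nr_ports > 128)     /* stand-in for a failing guest read */
                  goto out;       /* returning directly here would leak ports */

          ret = 0;                /* ... the actual polling work ... */
  out:
          kfree(ports);
          return ret;
  }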
Dave Airlie [Wed, 23 Jul 2025 20:49:38 +0000 (06:49 +1000)]
Merge tag 'drm-misc-fixes-2025-07-23' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-fixes
drm-misc-fixes for v6.16-rc8/final?:
- Revert all uses of drm_gem_object->dmabuf to
drm_gem_object->import_attach->dmabuf.
- Fix amdgpu returning BIOS cluttered VRAM after resume.
- Scheduler hang fix.
- Revert nouveau ioctl fix as it caused regressions.
- Fix null pointer deref in nouveau.
- Fix unnecessary semicolon in ti_sn_bridge_probe.
Al Viro [Sun, 20 Jul 2025 20:43:53 +0000 (16:43 -0400)]
fix the regression in ufs options parsing
A really dumb braino on rebasing and a dumber fuckup with managing #for-next
Fixes: b70cb459890b ("ufs: convert ufs to the new mount API")
Fucked-up-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Jakub Kicinski [Wed, 23 Jul 2025 01:37:23 +0000 (18:37 -0700)]
Merge branch 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux
Tariq Toukan says:
====================
mlx5-next updates 2025-07-22
The following pull-request contains common mlx5 updates
* 'mlx5-next' of git://git.kernel.org/pub/scm/linux/kernel/git/mellanox/linux:
net/mlx5: Expose cable_length field in PFCC register
net/mlx5: Add IFC bits and enums for buf_ownership
net/mlx5: Add IFC bits to support RSS for IPSec offload
====================
selftests: netfilter: tone-down conntrack clash test
The test is supposed to observe that the 'clash_resolve' stat counter
incremented (i.e., the code path was covered).
This check was incorrect: 'conntrack -S' needs to be called in the
relevant namespace, not the initial netns.
The clash resolution logic in conntrack is only exercised when multiple
packets with the same udp quadruple race. Depending on kernel config,
number of CPUs, scheduling policy etc. this might not trigger even
after several retries. Thus the script eventually returns SKIP if the
retry count is exceeded.
The udpclash tool will also exit with a failure if it did not observe
the expected number of replies.
In the script, make a note of this but do not fail anymore, just check if
the clash resolution logic triggered after all.
Remove the 'single-core' test: while unlikely, with a preemptible kernel
it should be possible to also trigger the clash resolution logic.
With this change the test will either SKIP or pass.
The hard error could be restored later once it's clear what's going on;
also dump 'conntrack -S' when some packets went missing to see if
conntrack dropped them on insert.
Fixes: 78a588363587 ("selftests: netfilter: add conntrack clash resolution test case")
Signed-off-by: Florian Westphal <fw@strlen.de>
Link: https://patch.msgid.link/20250721223652.6956-1-fw@strlen.de
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Jakub Kicinski [Wed, 23 Jul 2025 01:24:10 +0000 (18:24 -0700)]
Merge branch '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue
Tony Nguyen says:
====================
Intel Wired LAN Driver Updates 2025-07-21 (i40e, ice, e1000e)
For i40e:
Dennis Chen adjusts reporting of VF Tx dropped to a more appropriate
field.
Jamie Bainbridge fixes a check which can cause a PF set VF MAC address
to be lost.
For ice:
Haoxiang Li adds an error check in DDP load to prevent NULL pointer
dereference.
For e1000e:
Jacek Kowalski adds workarounds for issues surrounding Tiger Lake
platforms with uninitialized NVMs.
* '40GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/net-queue:
e1000e: ignore uninitialized checksum word on tgp
e1000e: disregard NVM checksum on tgp when valid checksum bit is not set
ice: Fix a null pointer dereference in ice_copy_and_init_pkg()
i40e: When removing VF MAC filters, only check PF-set MAC
i40e: report VF tx_dropped with tx_errors instead of tx_discards
====================
Paolo Abeni [Mon, 21 Jul 2025 17:20:22 +0000 (19:20 +0200)]
tcp: do not increment BeyondWindow MIB for old seq
The mentioned MIB is currently incremented even when a packet
with an old sequence number (i.e. a zero window probe) is received,
which is IMHO misleading.
Explicitly restrict such MIB increments to the relevant events.
Paolo Abeni [Mon, 21 Jul 2025 17:20:21 +0000 (19:20 +0200)]
tcp: do not set a zero size receive buffer
The nipa CI is reporting frequent failures in the mptcp_connect
self-tests.
In the failing scenarios (TCP -> MPTCP) the involved sockets are
actually plain TCP ones, as the fallback for a passive socket at 2whs
time causes the MPTCP listener to actually create a TCP socket.
The transfer gets stuck because the receive buffer is zero.
With the stronger check in place, tcp_clamp_window() can be invoked
while the TCP socket has sk_rmem_alloc == 0, and the receive buffer
will be zeroed, too.
Check for the critical condition in tcp_prune_queue() and just
drop the packet without shrinking the receiver buffer.
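A hedged sketch of the guard (simplified; the actual check lives in tcp_prune_queue() and the helper name here is made up): if nothing is charged to the socket's receive queue, pruning cannot reclaim anything, so bail out and let the caller drop the skb rather than shrinking sk_rcvbuf to zero:

  #include <net/sock.h>

  /* Hypothetical guard: with nothing charged to the receive queue, pruning
   * cannot free memory, so drop the incoming skb and keep sk_rcvbuf intact.
   */
  static bool hypothetical_should_drop_instead_of_prune(const struct sock *sk)
  {
          return atomic_read(&sk->sk_rmem_alloc) == 0;
  }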
Lama Kayal [Mon, 21 Jul 2025 07:13:17 +0000 (10:13 +0300)]
net/mlx5e: SHAMPO, Cleanup reservation size formula
The reservation size formula can be reduced to a simple evaluation of
MLX5E_SHAMPO_WQ_RESRV_SIZE. This leaves mlx5e_shampo_get_log_rsrv_size()
with one single use, which can be replaced with a macro for simplicity.
Also, mlx5e_shampo_get_log_rsrv_size() is only used within params.c, so
make it static.
Signed-off-by: Lama Kayal <lkayal@nvidia.com>
Reviewed-by: Dragos Tatulea <dtatulea@nvidia.com>
Signed-off-by: Tariq Toukan <tariqt@nvidia.com>
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/1753081999-326247-2-git-send-email-tariqt@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Fan Yu [Mon, 21 Jul 2025 03:16:07 +0000 (11:16 +0800)]
tcp: trace retransmit failures in tcp_retransmit_skb
Background
==========
When TCP retransmits a packet due to missing ACKs, the
retransmission may fail for various reasons (e.g., packets
stuck in driver queues, receiver zero windows, or routing issues).
The original tcp_retransmit_skb tracepoint:
'commit e086101b150a ("tcp: add a tracepoint for tcp retransmission")'
lacks visibility into these failure causes, making production
diagnostics difficult.
Solution
========
Add the return value ("err") to the tcp_retransmit_skb tracepoint.
This lets users know why a TCP retransmission failed, and lets them
filter retransmission failures by retval.
Compatibility description
=========================
This patch extends the tcp_retransmit_skb tracepoint
by adding a new "err" field at the end of its
existing structure (within TP_STRUCT__entry). The
compatibility implications are detailed as follows:
1) Structural compatibility for legacy user-space tools
Legacy tools/BPF programs accessing existing fields
(by offset or name) still work without modification or
recompilation. The new field is appended at the end,
preserving the original memory layout.
2) Note: semantic changes
The original tracepoint primarily focused on successfully
retransmitted packets. With this patch, the tracepoint can
now also capture packets that terminate early for specific
reasons. For accurate statistics, users should filter on
"err" to distinguish outcomes.
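A hedged, schematic fragment of what such an extension looks like in a trace header (heavily trimmed; the event name, field list, and printk format are illustrative, not the actual net/ipv4 definition, and the usual TRACE_SYSTEM/define_trace.h plumbing is omitted): the new field simply goes last in TP_STRUCT__entry:

  #include <linux/tracepoint.h>

  TRACE_EVENT(hypothetical_tcp_retransmit_skb,
          TP_PROTO(const void *skaddr, const void *skbaddr, int err),
          TP_ARGS(skaddr, skbaddr, err),
          TP_STRUCT__entry(
                  __field(const void *, skaddr)
                  __field(const void *, skbaddr)
                  __field(int, err)       /* appended last: layout preserved */
          ),
          TP_fast_assign(
                  __entry->skaddr  = skaddr;
                  __entry->skbaddr = skbaddr;
                  __entry->err     = err;
          ),
          TP_printk("skaddr=%p skbaddr=%p err=%d",
                    __entry->skaddr, __entry->skbaddr, __entry->err)
  );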
====================
selftests: drv-net: Test XDP native support
This patch series adds tests to validate XDP native support for the
PASS, DROP, ABORT, and TX actions, as well as headroom and tailroom
adjustment.
For the adjustment tests, validate support for both the extension and
shrinking cases across various packet sizes and offset values.
The pass criteria for the head/tail adjustment tests require that at
least one adjustment value works for at least one packet size. This
ensures that the variability in the maximum supported head/tail
adjustment offset across different drivers is accounted for.
The results reported in this series are based on netdevsim. However,
the series is tested against multiple other drivers including fbnic.
Note: The XDP support for fbnic will be added later.
====================
Add a test to validate headroom adjustment support for both the
extension and shrinking cases. For the extension part, eat up space from
the start of the payload data; for the shrinking part, populate the
newly available space with a tag. In user space, validate that a test
string is manipulated accordingly.
Negative and positive offset values result in shrinking and growing of
the headroom (growing and shrinking of the payload), respectively.
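A hedged sketch of the kind of XDP program such a test attaches (not the selftest's actual BPF object; the delta and tag values are illustrative): a negative delta to bpf_xdp_adjust_head() consumes headroom and grows the payload, which the program then tags:

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp")
  int xdp_adjust_head_demo(struct xdp_md *ctx)
  {
          /* Negative delta: shrink headroom, grow payload by 64 bytes. */
          if (bpf_xdp_adjust_head(ctx, -64))
                  return XDP_ABORTED;     /* driver could not honour the offset */

          void *data = (void *)(long)ctx->data;
          void *data_end = (void *)(long)ctx->data_end;

          if (data + 4 > data_end)        /* verifier-mandated bounds check */
                  return XDP_DROP;
          __builtin_memset(data, 0xaa, 4); /* tag the newly exposed bytes */
          return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";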
TAP version 13
1..9
ok 1 xdp.test_xdp_native_pass_sb
ok 2 xdp.test_xdp_native_pass_mb
ok 3 xdp.test_xdp_native_drop_sb
ok 4 xdp.test_xdp_native_drop_mb
ok 5 xdp.test_xdp_native_tx_mb
# Failed run: pkt_sz 512, ... offset 1. Reason: Adjustment failed
ok 6 xdp.test_xdp_native_adjst_tail_grow_data
ok 7 xdp.test_xdp_native_adjst_tail_shrnk_data
# Failed run: pkt_sz 512, ... offset -128. Reason: Adjustment failed
ok 8 xdp.test_xdp_native_adjst_head_grow_data
# Failed run: pkt_sz (512) > HDS threshold (0) and offset 64 > 48
ok 9 xdp.test_xdp_native_adjst_head_shrnk_data
# Totals: pass:9 fail:0 xfail:0 xpass:0 skip:0 error:0
Add a test to validate support for the two cases of tail adjustment: 1)
tail extension, and 2) tail shrinking, across different frame sizes and
offset values. For each of the two cases, test both the single- and
multi-buffer cases by choosing appropriate packet sizes.
A negative offset value results in growing of the tailroom (shrinking of
the payload), while a positive offset results in shrinking of the
tailroom (growing of the payload).
Since support for tail adjustment varies across drivers, classify the
test as passing if at least one combination of packet size and offset
from a pre-selected list results in a successful run. In case of an
unsuccessful run, report the failure and highlight the packet size and
offset values that caused the test to fail, as well as the values that
resulted in the last successful run.
Note: The growing part of this test for netdevsim may appear flaky when
the offset value is larger than 1. This behavior occurs because tailroom
is not explicitly reserved for netdevsim, with 1 being the typical
tailroom value. However, in certain cases, such as the payload being the
last in the page with additional available space, the truesize is
expanded. This also increases the tailroom, causing the test to pass
intermittently. In contrast, when tailroom is explicitly reserved, such
as in the case of fbnic, the test results are deterministic.
./drivers/net/xdp.py
TAP version 13
1..7
ok 1 xdp.test_xdp_native_pass_sb
ok 2 xdp.test_xdp_native_pass_mb
ok 3 xdp.test_xdp_native_drop_sb
ok 4 xdp.test_xdp_native_drop_mb
ok 5 xdp.test_xdp_native_tx_mb
# Failed run: ... successful run: ... offset 1. Reason: Adjustment failed
ok 6 xdp.test_xdp_native_adjst_tail_grow_data
ok 7 xdp.test_xdp_native_adjst_tail_shrnk_data
# Totals: pass:7 fail:0 xfail:0 xpass:0 skip:0 error:0
Add test to verify the XDP_TX functionality by generating traffic from a
remote node on a specific UDP port and redirecting it back to the sender.
./drivers/net/xdp.py
TAP version 13
1..5
ok 1 xdp.test_xdp_native_pass_sb
ok 2 xdp.test_xdp_native_pass_mb
ok 3 xdp.test_xdp_native_drop_sb
ok 4 xdp.test_xdp_native_drop_mb
ok 5 xdp.test_xdp_native_tx_mb
# Totals: pass:5 fail:0 xfail:0 xpass:0 skip:0 error:0
Test XDP_PASS/DROP in single buffer and multi buffer mode when
XDP native support is available.
./drivers/net/xdp.py
TAP version 13
1..4
ok 1 xdp.test_xdp_native_pass_sb
ok 2 xdp.test_xdp_native_pass_mb
ok 3 xdp.test_xdp_native_drop_sb
ok 4 xdp.test_xdp_native_drop_mb
# Totals: pass:4 fail:0 xfail:0 xpass:0 skip:0 error:0
MAC address, Link modes (supported and advertised) and eeprom data
for the Netdev interface are read from the shared firmware data.
This patch adds debugfs support for the same.
Octeontx2/CN10k MAC block supports DMAC filters. DMAC filters
can be installed on the interface through ethtool.
When a user installs a DMAC filter, the interface's MAC address
is implicitly added to the filter list. To ensure consistency,
this MAC address must be kept in sync with the pfvf->mac_addr field,
which is used to install MAC-based NPC rules.
This patch updates the pfvf->mac_addr field with the programmed MAC
address and also enables VF interfaces to install DMAC filters.
Mingming Cao [Sat, 19 Jul 2025 09:13:56 +0000 (05:13 -0400)]
ibmveth: Add multi buffers rx replenishment hcall support
This patch enables batched RX buffer replenishment in ibmveth by
using the new firmware-supported h_add_logical_lan_buffers() hcall
to submit up to 8 RX buffers in a single call, instead of repeatedly
calling the single-buffer h_add_logical_lan_buffer() hcall.
During the probe, with the patch, the driver queries ILLAN attributes
to detect IBMVETH_ILLAN_RX_MULTI_BUFF_SUPPORT bit. If the attribute is
present, rx_buffers_per_hcall is set to 8, enabling batched replenishment.
Otherwise, it defaults to 1, preserving the original upstream behavior
with no change in code flow for unsupported systems.
The core rx replenish logic remains the same. But when batching
is enabled, the driver aggregates up to 8 fully prepared descriptors
into a single h_add_logical_lan_buffers() hypercall. If any allocation
or DMA mapping fails while preparing a batch, only the successfully
prepared buffers are submitted, and the remaining are deferred for
the next replenish cycle.
If at runtime the firmware stops accepting the batched hcall, e.g.
after a Live Partition Migration (LPM) to a host that does not
support h_add_logical_lan_buffers(), the hypercall returns H_FUNCTION.
In that case, the driver transparently disables batching, resets
rx_buffers_per_hcall to 1, and falls back to the single-buffer hcall
in next future replenishments to take care of these and future buffers.
Tests were done on systems with firmware that supports, and that does
not support, the new h_add_logical_lan_buffers hcall.
On supported firmware, this reduces hypercall overhead significantly
when replenishing multiple buffers. SAR measurements showed about a 15%
improvement in packet processing rate under moderate RX load, with
heavier traffic seeing gains of more than 30%.
Signed-off-by: Mingming Cao <mmc@linux.ibm.com>
Reviewed-by: Brian King <bjking1@linux.ibm.com>
Reviewed-by: Haren Myneni <haren@linux.ibm.com>
Reviewed-by: Dave Marquardt <davemarq@linux.ibm.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250719091356.57252-1-mmc@linux.ibm.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
My previous patch ended up causing a regression for the
DRM_IOCTL_NOUVEAU_NVIF ioctl. The intention of my patch was to only
pass ioctl commands that have the correct dir/type/nr bits into the
nouveau_abi16_ioctl() function.
This turned out to be too strict, as userspace does use at least
write-only and write-read direction settings. Checking for both of these
still did not fix the issue, so the best we can do for the 6.16 release
is to revert back to what we've had since linux-3.16.
This version is still fragile, but at least it is known to work with
existing userspace. Fixing this properly requires a better understanding
of what commands are being passed from userspace in practice, and how
that relies on the undocumented (mis)behavior in nouveau_drm_ioctl().
can: netlink: can_changelink(): fix NULL pointer deref of struct can_priv::do_set_mode
Andrei Lalaev reported a NULL pointer deref when a CAN device is
restarted from Bus Off and the driver does not implement the struct
can_priv::do_set_mode callback.
There are 2 code paths that call struct can_priv::do_set_mode:
- directly, by a manual restart from user space via can_changelink()
- a delayed automatic restart after bus off (deactivated by default)
To prevent the NULL pointer dereference, refuse a manual restart or
configuring the automatic restart delay in can_changelink() and report
the error via extack to user space.
As an additional safety measure, let can_restart() return an error if
can_priv::do_set_mode is not set, instead of dereferencing it
unchecked.
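A hedged sketch of the can_changelink() side of the guard (simplified; the helper name is hypothetical): reject restart configuration when the driver has no do_set_mode callback and report the reason via extack:

  #include <linux/can/dev.h>
  #include <linux/netlink.h>
  #include <linux/errno.h>

  static int hypothetical_check_restart_support(struct can_priv *priv,
                                                struct netlink_ext_ack *extack)
  {
          if (!priv->do_set_mode) {
                  NL_SET_ERR_MSG(extack,
                                 "restart not supported: driver lacks do_set_mode");
                  return -EOPNOTSUPP;
          }
          return 0;
  }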
Xiang Mei [Thu, 17 Jul 2025 23:01:28 +0000 (16:01 -0700)]
net/sched: sch_qfq: Avoid triggering might_sleep in atomic context in qfq_delete_class
might_sleep() could be triggered in atomic context in qfq_delete_class().
qfq_destroy_class() was moved into the atomic context protected by
sch_tree_lock to avoid a race condition on qfq_aggregate. However,
might_sleep() can be triggered by qfq_destroy_class(), which introduces
sleeping in atomic context (path:
qfq_destroy_class->qdisc_put->__qdisc_destroy->lockdep_unregister_key
->might_sleep).
Since the race is on the qfq_aggregate objects, keeping
qfq_rm_from_agg() inside the lock while moving the remaining part out
solves this issue.
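A hedged sketch of that locking split (simplified; the qfq_-prefixed helpers and structs are sch_qfq.c internals, and sketch_free_class() is a hypothetical stand-in for the remaining, possibly-sleeping teardown):

  static int sketch_qfq_delete_class(struct Qdisc *sch, struct qfq_class *cl)
  {
          struct qfq_sched *q = qdisc_priv(sch);

          sch_tree_lock(sch);
          qdisc_purge_queue(cl->qdisc);
          qdisc_class_hash_remove(&q->clhash, &cl->common);
          qfq_rm_from_agg(q, cl);         /* race-sensitive part stays locked */
          sch_tree_unlock(sch);

          /* The rest of the teardown may sleep (qdisc_put() and friends),
           * so it runs only after the qdisc tree lock is released.
           */
          sketch_free_class(sch, cl);
          return 0;
  }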
Paolo Abeni [Tue, 22 Jul 2025 09:35:51 +0000 (11:35 +0200)]
Merge branch 'gve-af_xdp-zero-copy-for-dqo-rda'
Joshua Washington says:
====================
gve: AF_XDP zero-copy for DQO RDA
This patch series adds support for AF_XDP zero-copy in the DQO RDA queue
format.
XSK infrastructure is updated to re-post buffers when adding XSK pools
because XSK umem will be posted directly to the NIC, a departure from
the bounce buffer model used in GQI QPL. A registry of XSK pools is
introduced to prevent the usage of XSK pools when in copy mode.
gve: implement DQO RX datapath and control path for AF_XDP zero-copy
Add the RX datapath for AF_XDP zero-copy for DQ RDA. The RX path is
quite similar to that of the normal XDP case. Parallel methods are
introduced to properly handle XSKs instead of normal driver buffers.
To properly support posting from XSKs, queues are destroyed and
recreated, as the driver was initially making use of page pool buffers
instead of the XSK pool memory.
Expose support for AF_XDP zero-copy, as the TX and RX datapaths both
exist.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://patch.msgid.link/20250717152839.973004-6-jeroendb@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
gve: implement DQO TX datapath for AF_XDP zero-copy
In the descriptor clean path, a number of changes need to be made to
accommodate out of order completions and double completions.
The XSK stack can only handle completions being processed in order, as a
single counter is incremented in xsk_tx_completed to signify how many XSK
descriptors have been completed. Because completions can come back out
of order in DQ, a separate queue of XSK descriptors must be maintained.
This queue keeps the pending packets in the order that they were written
so that the descriptors can be counted in xsk_tx_completed in the same
order.
For double completions, a new pending packet state and type are
introduced. The new type, GVE_TX_PENDING_PACKET_DQO_XSK, plays an
analogous role to the pre-existing _SKB and _XDP_FRAME pending packet types
for XSK descriptors. The new state, GVE_PACKET_STATE_XSK_COMPLETE,
represents packets for which no more completions are expected. This
includes packets which have received a packet completion or reinjection
completion, as well as packets whose reinjection completion timer has
timed out. At this point, such packets can be counted as part of
xsk_tx_completed() and freed.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://patch.msgid.link/20250717152839.973004-5-jeroendb@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
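A hedged sketch of the in-order reporting idea (plain C, hypothetical structure, not the gve code): completions may arrive out of order, but only the contiguous run of completed entries at the head of the submission-order FIFO is counted, and that count is what would be handed to xsk_tx_completed():

  #include <stdbool.h>
  #include <stddef.h>

  #define FIFO_SLOTS 256

  struct pending_xsk_fifo {
          bool completed[FIFO_SLOTS];  /* completion seen for this slot? */
          size_t head, tail;           /* head = oldest pending, tail = next free */
  };

  /* Record a (possibly out-of-order) completion for slot 'idx'. */
  static void fifo_complete(struct pending_xsk_fifo *f, size_t idx)
  {
          f->completed[idx % FIFO_SLOTS] = true;
  }

  /* Count how many descriptors can now be reported strictly in order. */
  static size_t fifo_reap_in_order(struct pending_xsk_fifo *f)
  {
          size_t n = 0;

          while (f->head != f->tail && f->completed[f->head % FIFO_SLOTS]) {
                  f->completed[f->head % FIFO_SLOTS] = false;
                  f->head++;
                  n++;  /* caller would pass n to xsk_tx_completed() */
          }
          return n;
  }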
Relying on xsk_get_pool_from_qid to determine whether zero-copy is
enabled on a queue is erroneous, as an XSK pool is registered in
xp_assign_dev whether AF_XDP zero-copy is enabled or not. This becomes
problematic when queues are restarted in copy mode, as all RX queues
with XSKs will register a pool, causing the driver to exercise the
zero-copy codepath.
This patch adds a bitmap to keep track of which queues have zero-copy
enabled.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://patch.msgid.link/20250717152839.973004-4-jeroendb@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Having both xdp_rxq and xsk_rxq is redundant: xdp_rxq
can be used in both the zero-copy and the copy mode cases. XSK pool
memory model registration is prioritized over normal memory model
registration to ensure that memory model registration happens only once
per queue.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://patch.msgid.link/20250717152839.973004-3-jeroendb@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
gve: deduplicate xdp info and xsk pool registration logic
The XDP registration path currently has a lot of duplicated logic, which
makes changes to the codepaths unnecessarily complex. gve_reg_xsk_pool
extracts the logic of registering an XSK pool with a queue into a method
that can be used by both XDP_SETUP_XSK_POOL and gve_reg_xdp_info.
gve_unreg_xdp_info is used to undo XDP info registration in the error
path instead of explicitly unregistering the XDP info, as it is more
complete and idempotent.
This patch will be followed by other changes to the XDP registration
logic, and will simplify those changes due to the use of common methods.
Reviewed-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
Signed-off-by: Jeroen de Borst <jeroendb@google.com>
Link: https://patch.msgid.link/20250717152839.973004-2-jeroendb@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
====================
ethtool: rss: support creating and removing contexts via Netlink
This series completes support of RSS configuration via Netlink.
All functionality supported by the IOCTL is now supported by
Netlink. Future series (time allowing) will add:
- hashing on the flow label, which started this whole thing;
- pinning the RSS context to a Netlink socket for auto-cleanup.
The first patch is a leftover held back from previous series
to avoid conflicting with Gal's fix.
Next 4 patches refactor existing code to make reusing it for
context creation possible. 2 patches after that add create
and delete commands. Last but not least the test is extended.
====================
Jakub Kicinski [Thu, 17 Jul 2025 23:43:43 +0000 (16:43 -0700)]
selftests: drv-net: rss_api: context create and delete tests
Add test cases for creating and deleting contexts.
TAP version 13
1..12
ok 1 rss_api.test_rxfh_nl_set_fail
ok 2 rss_api.test_rxfh_nl_set_indir
ok 3 rss_api.test_rxfh_nl_set_indir_ctx
ok 4 rss_api.test_rxfh_indir_ntf
ok 5 rss_api.test_rxfh_indir_ctx_ntf
ok 6 rss_api.test_rxfh_nl_set_key
ok 7 rss_api.test_rxfh_fields
ok 8 rss_api.test_rxfh_fields_set
ok 9 rss_api.test_rxfh_fields_set_xfrm # SKIP no input-xfrm supported
ok 10 rss_api.test_rxfh_fields_ntf
ok 11 rss_api.test_rss_ctx_add
ok 12 rss_api.test_rss_ctx_ntf
# Totals: pass:11 fail:0 xfail:0 xpass:0 skip:1 error:0
Jakub Kicinski [Thu, 17 Jul 2025 23:43:42 +0000 (16:43 -0700)]
ethtool: rss: support removing contexts via Netlink
Implement removing additional RSS contexts via Netlink.
Technically it'd be possible to shoehorn the delete operation
into an ethnl_request_ops-compatible handler. The code ends
up longer than the open-coded version, and I think we'll need
a custom way of sending notifications at some stage (if we
allow tying the context lifetime to the netlink socket, in
the future).
Jakub Kicinski [Thu, 17 Jul 2025 23:43:41 +0000 (16:43 -0700)]
ethtool: rss: support creating contexts via Netlink
Support creating contexts via Netlink. Setting flow hashing
fields on the new context is not supported at this stage,
it can be added later.
An empty indirection table is not supported. This is a carry-over from
the IOCTL interface, where an empty indirection table meant delete. We
could repurpose the empty indirection table in Netlink, but for now
reject it via the policy to avoid confusion.
Support letting the user choose the ID for the new context. This was
not possible in IOCTL since the context ID field for the create
action had to be set to the ETH_RXFH_CONTEXT_ALLOC magic value.
Jakub Kicinski [Thu, 17 Jul 2025 23:43:39 +0000 (16:43 -0700)]
ethtool: rss: factor out populating response from context
Similarly to the previous change, factor out populating the response.
We will use this after the context has been allocated to send a
notification, so this time factor it out from the additional-context
handling rather than from the context-0 handling (for the request the
context didn't exist; for the response it does).
Jakub Kicinski [Thu, 17 Jul 2025 23:43:38 +0000 (16:43 -0700)]
ethtool: rss: factor out allocating memory for response
To ease code reuse for RSS_CREATE we'll want to prepare
struct rss_reply_data for the new context. Unfortunately
we can't depend on the existing scaffolding because the context
doesn't exist (ctx=NULL) when we start preparing. Factor out
the portion of the context-0 handling responsible for allocating
the request memory, so that we can call it directly.
Jakub Kicinski [Thu, 17 Jul 2025 23:43:36 +0000 (16:43 -0700)]
ethtool: assert that drivers with sym hash are consistent for RSS contexts
Supporting per-RSS context configuration of hashing fields but
not the hashing algorithm would complicate the code a lot.
We'd need to cross check the config against all RSS contexts.
None of the drivers need this today, so explicitly prevent
new drivers with such skewed capabilities from registering.
If such driver appears it will need to first adjust the checks
in the core.
====================
mptcp: add TCP_MAXSEG sockopt support
The TCP_MAXSEG socket option was not supported by MPTCP, mainly because
it has never been requested before. But there are still valid use-cases,
e.g. with HAProxy.
- Patch 1 is a small cleanup patch in the MPTCP sockopt file.
- Patch 2 exposes some code from TCP, to avoid duplicating it in MPTCP.
- Patch 3 adds TCP_MAXSEG sockopt support in MPTCP.
- Patch 4 is not related to the others, it fixes a typo in a comment.
Note that the new TCP_MAXSEG sockopt support has been validated by a new
packetdrill script on the MPTCP CI:
The TCP_MAXSEG socket option is currently not supported by MPTCP, mainly
because it has never been requested before. But there are still valid
use-cases, e.g. with HAProxy.
This patch adds its support in MPTCP by propagating the value to all
subflows. The get part looks at the value on the first subflow, to be as
close as possible to TCP. Only one value can be returned for the cached
MSS, so this can come only from one subflow.
Similar to mptcp_setsockopt_first_sf_only(), a generic helper
mptcp_setsockopt_all_subflows() is added to set a sockopt on each
subflow of the MPTCP socket.
Add a new member to struct mptcp_sock to store the TCP_MAXSEG value,
and return this value in getsockopt.
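A hedged userspace sketch of using the new support (assumes a kernel with this series; the MSS value is arbitrary): TCP_MAXSEG set on an MPTCP socket is propagated to every subflow:

  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>
  #include <stdio.h>
  #include <unistd.h>

  #ifndef IPPROTO_MPTCP
  #define IPPROTO_MPTCP 262   /* older libc headers may lack this */
  #endif

  int main(void)
  {
          int mss = 1200;
          int s = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);

          if (s < 0 || setsockopt(s, IPPROTO_TCP, TCP_MAXSEG, &mss, sizeof(mss)))
                  perror("mptcp TCP_MAXSEG");
          else
                  printf("TCP_MAXSEG %d will apply to all subflows\n", mss);

          if (s >= 0)
                  close(s);
          return 0;
  }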
s390/qeth: Make hw_trap sysfs attribute idempotent
Update qeth driver to allow writing an existing value to the "hw_trap"
sysfs attribute. Attempting such a write earlier resulted in -EINVAL.
In other words, make the sysfs attribute idempotent.
Luo Jie [Fri, 18 Jul 2025 13:57:48 +0000 (21:57 +0800)]
net: phy: qcom: qca807x: Enable WoL support using shared library
The Wake-on-LAN (WoL) functionality for the QCA807x series is identical
to that of the AT8031. WoL support for QCA807x is enabled by utilizing
the at8031_set_wol() function provided in the shared library.
net: usb: smsc95xx: add support for ethtool pause parameters
Implement ethtool .get_pauseparam and .set_pauseparam handlers for
configuring flow control on smsc95xx. The driver now supports enabling
or disabling transmit and receive pause frames, with or without
autonegotiation. Pause settings are applied during link-up based on
current PHY state and user configuration.
Previously, the driver used phy_get_pause() during link-up handling,
but lacked initialization and an ethtool interface to configure pause
modes. As a result, flow control support was effectively non-functional.
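A hedged, generic sketch of such an ethtool handler pair (not the actual smsc95xx code; the private struct is hypothetical): report the stored settings in .get_pauseparam and cache the new ones in .set_pauseparam, to be applied to the MAC on the next link-up:

  #include <linux/ethtool.h>
  #include <linux/netdevice.h>

  struct hyp_priv {
          u8 pause_autoneg;
          u8 pause_rx;
          u8 pause_tx;
  };

  static void hyp_get_pauseparam(struct net_device *ndev,
                                 struct ethtool_pauseparam *p)
  {
          struct hyp_priv *pd = netdev_priv(ndev);

          p->autoneg = pd->pause_autoneg;
          p->rx_pause = pd->pause_rx;
          p->tx_pause = pd->pause_tx;
  }

  static int hyp_set_pauseparam(struct net_device *ndev,
                                struct ethtool_pauseparam *p)
  {
          struct hyp_priv *pd = netdev_priv(ndev);

          pd->pause_autoneg = p->autoneg;
          pd->pause_rx = p->rx_pause;
          pd->pause_tx = p->tx_pause;
          return 0;   /* applied to the MAC on the next link-up */
  }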
For idpf:
Ahmed and Sudheer add support for flow steering via ntuple filters.
Current support is for IPv4 and TCP/UDP only.
Milena adds support for cross timestamping.
Ahmed preserves coalesce settings across resets.
For ice:
Alex adds reporting of 40GbE speed in devlink port split.
Dawid adds support for E835 devices.
Jesse refactors profile ptype processing for cleaner, more readable,
code.
Dave adds a couple of helper functions for LAG to reduce code
duplication.
For igc:
Siang adds support to configure "Default Queue" during runtime using
ethtool's Network Flow Classification (NFC) wildcard rule approach.
For igbvf:
Yuto Ohnuki removes unused fields from igbvf_adapter.
For ixgbevf:
Yuto Ohnuki removes unused fields from ixgbevf_adapter.
* '200GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
ixgbevf: remove unused fields from struct ixgbevf_adapter
igbvf: remove unused fields from struct igbvf_adapter
igc: Add wildcard rule support to ethtool NFC using Default Queue
igc: Relocate RSS field definitions to igc_defines.h
ice: breakout common LAG code into helpers
ice: convert ice_add_prof() to bitmap
ice: add E835 device IDs
ice: add 40G speed to Admin Command GET PORT OPTION
idpf: preserve coalescing settings across resets
idpf: add cross timestamping
idpf: add flow steering support
virtchnl2: add flow steering support
virtchnl2: rename enum virtchnl2_cap_rss
====================
gve_tx_timeout was calculating missed completions in a way that is only
relevant in the GQ queue format. Additionally, it was attempting to
disable device interrupts, which is not needed in either GQ or DQ queue
formats.
As a result, TX timeouts with the DQ queue format likely would have
triggered early resets without kicking the queue at all.
This patch drops the check for pending work altogether and always kicks
the queue after validating the queue has not seen a TX timeout too
recently.
Cc: stable@vger.kernel.org
Fixes: 87a7f321bb6a ("gve: Recover from queue stall due to missed IRQ")
Co-developed-by: Tim Hostetler <thostet@google.com>
Signed-off-by: Tim Hostetler <thostet@google.com>
Signed-off-by: Praveen Kaligineedi <pkaligineedi@google.com>
Signed-off-by: Harshitha Ramamurthy <hramamurthy@google.com>
Link: https://patch.msgid.link/20250717192024.1820931-1-hramamurthy@google.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Add PM suspend/resume callbacks for RZ/G3E SMARC EVK.
PM deep entry is triggered by pressing the SLEEP button, and exit is
triggered by pressing the power button.
Logs:
root@smarc-rzg3e:~# PM: suspend entry (deep)
Filesystems sync: 0.115 seconds
Freezing user space processes
Freezing user space processes completed (elapsed 0.002 seconds)
OOM killer disabled.
Freezing remaining freezable tasks
Freezing remaining freezable tasks completed (elapsed 0.001 seconds)
printk: Suspending console(s) (use no_console_suspend to debug)
NOTICE: BL2: v2.10.5(release):2.10.5/rz_soc_dev-162-g7148ba838
NOTICE: BL2: Built : 14:23:58, Jul 5 2025
NOTICE: BL2: SYS_LSI_MODE: 0x13e06
NOTICE: BL2: SYS_LSI_DEVID: 0x8679447
NOTICE: BL2: SYS_LSI_PRR: 0x0
NOTICE: BL2: Booting BL31
renesas-gbeth 15c30000.ethernet end0: Link is Down
Disabling non-boot CPUs ...
psci: CPU3 killed (polled 0 ms)
psci: CPU2 killed (polled 0 ms)
psci: CPU1 killed (polled 0 ms)
Enabling non-boot CPUs ...
Detected VIPT I-cache on CPU1
GICv3: CPU1: found redistributor 100 region 0:0x0000000014960000
CPU1: Booted secondary processor 0x0000000100 [0x412fd050]
CPU1 is up
Detected VIPT I-cache on CPU2
GICv3: CPU2: found redistributor 200 region 0:0x0000000014980000
CPU2: Booted secondary processor 0x0000000200 [0x412fd050]
CPU2 is up
Detected VIPT I-cache on CPU3
GICv3: CPU3: found redistributor 300 region 0:0x00000000149a0000
CPU3: Booted secondary processor 0x0000000300 [0x412fd050]
CPU3 is up
dwmac4: Master AXI performs fixed burst length
15c30000.ethernet end0: No Safety Features support found
15c30000.ethernet end0: IEEE 1588-2008 Advanced Timestamp supported
15c30000.ethernet end0: configuring for phy/rgmii-id link mode
dwmac4: Master AXI performs fixed burst length
15c40000.ethernet end1: No Safety Features support found
15c40000.ethernet end1: IEEE 1588-2008 Advanced Timestamp supported
15c40000.ethernet end1: configuring for phy/rgmii-id link mode
OOM killer enabled.
Restarting tasks: Starting
Restarting tasks: Done
random: crng reseeded on system resumption
PM: suspend exit
15c30000.ethernet end0: Link is Up - 1Gbps/Full - flow control rx/tx
root@smarc-rzg3e:~# ifconfig end0 192.168.10.7 up
root@smarc-rzg3e:~# ping 192.168.10.1
PING 192.168.10.1 (192.168.10.1) 56(84) bytes of data.
64 bytes from 192.168.10.1: icmp_seq=1 ttl=64 time=2.05 ms
64 bytes from 192.168.10.1: icmp_seq=2 ttl=64 time=0.928 ms
Reviewed-by: Lad Prabhakar <prabhakar.mahadev-lad.rj@bp.renesas.com>
Reviewed-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com>
Link: https://patch.msgid.link/20250717071109.8213-1-biju.das.jz@bp.renesas.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: appletalk: Fix use-after-free in AARP proxy probe
The AARP proxy‐probe routine (aarp_proxy_probe_network) sends a probe,
releases the aarp_lock, sleeps, then re-acquires the lock. During that
window an expire timer thread (__aarp_expire_timer) can remove and
kfree() the same entry, leading to a use-after-free.
==================================================================
BUG: KASAN: slab-use-after-free in aarp_proxy_probe_network+0x560/0x630 net/appletalk/aarp.c:493
Read of size 4 at addr ffff8880123aa360 by task repro/13278
The buggy address belongs to the object at ffff8880123aa300
which belongs to the cache kmalloc-192 of size 192
The buggy address is located 96 bytes inside of
freed 192-byte region [ffff8880123aa300, ffff8880123aa3c0)
Memory state around the buggy address:
 ffff8880123aa200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 ffff8880123aa280: 00 00 00 00 fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8880123aa300: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                       ^
 ffff8880123aa380: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff8880123aa400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
net: bcmasp: Restore programming of TX map vector register
On ASP versions v2.x we need to program the TX map vector register to
properly exercise end-to-end flow control, otherwise the TX engine can
either lock up, or cause the hardware-calculated checksum to be
wrong/corrupted when multiple back-to-back packets are being submitted
for transmission. This register defaults to 0, which means no flow
control is applied.