Kurt Kanzenbach [Wed, 19 Mar 2025 10:26:40 +0000 (11:26 +0100)]
igb: Link queues to NAPI instances
Link queues to NAPI instances via netdev-genl API. This is required to use
XDP/ZC busy polling. See commit 5ef44b3cb43b ("xsk: Bring back busy polling
support") for details.
This also allows users to query the info with netlink.
Reviewed-by: Joe Damato <jdamato@fastly.com>
Tested-by: Rinitha S <sx.rinitha@intel.com> (A Contingent worker at Intel)
Signed-off-by: Kurt Kanzenbach <kurt@linutronix.de>
Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
Improve the layer 4 multipath hash policy for local TCP connections:
patch 1: Select a source address that matches the nexthop device.
Due to tcp_v4_connect making separate route lookups for saddr
and route, the two can currently be inconsistent.
patch 2: Use all paths when opening multiple local TCP connections to
the same IP address and port.
patch 3: Test the behavior. Extend the fib_tests.sh testsuite with a
test that opens many connections and counts SYNs on both egress
devices, for packets matching the source address of the device.
Changelog in the individual patches
====================
Willem de Bruijn [Thu, 24 Apr 2025 14:35:20 +0000 (10:35 -0400)]
selftests/net: test tcp connection load balancing
Verify that TCP connections use both routes when connecting multiple
times to a remote service over a two nexthop multipath route.
Use socat to create the connections. Use tc prio + tc filter to
count routes taken, counting SYN packets across the two egress
devices. Also verify that the saddr matches that of the device.
To avoid flaky tests when testing inherently randomized behavior,
set a low bar and pass if even a single SYN is observed on each
device.
Willem de Bruijn [Thu, 24 Apr 2025 14:35:19 +0000 (10:35 -0400)]
ip: load balance tcp connections to single dst addr and port
Load balance new TCP connections across nexthops also when they
connect to the same service at a single remote address and port.
This affects only port-based multipath hashing:
fib_multipath_hash_policy 1 or 3.
Local connections must choose both a source address and port when
connecting to a remote service, in ip_route_connect. This
"chicken-and-egg problem" (commit 2d7192d6cbab ("ipv4: Sanitize and
simplify ip_route_{connect,newports}()")) is resolved by first
selecting a source address, by looking up a route using the zero
wildcard source port and address.
As a result multiple connections to the same destination address and
port have no entropy in fib_multipath_hash.
This is not a problem when forwarding, as skb-based hashing has a
4-tuple. Nor when establishing UDP connections, as autobind there
selects a port before reaching ip_route_connect.
Also load balance TCP, by using a random port in fib_multipath_hash.
Port assignment in inet_hash_connect is not atomic with
ip_route_connect. Thus ports are unpredictable, effectively random.
Implementation details:
Do not actually pass a random fl4_sport, as that affects not only
hashing, but routing more broadly, and can match a source port based
policy route, which existing wildcard port 0 will not. Instead,
define a new wildcard flowi flag that is used only for hashing.
Selecting a random source port is equivalent to just selecting a random
hash entirely. But for code clarity, follow the normal 4-tuple hash
process and only update this field.
fib_multipath_hash can be reached with zero sport from other code
paths, so explicitly pass this flowi flag, rather than trying to infer
this case in the function itself.
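In fib_multipath_hash() terms, the change boils down to something like the
following sketch (a minimal illustration, assuming the new wildcard flag is
named FLOWI_FLAG_ANY_SPORT as in this series; not the verbatim patch):

  /* Use a random source port for hashing when the caller signalled
   * that any sport is acceptable, instead of the zero wildcard. */
  if (fl4->flowi4_flags & FLOWI_FLAG_ANY_SPORT)
          hash_keys.ports.src = (__force __be16)get_random_u16();
  else
          hash_keys.ports.src = fl4->fl4_sport;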
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20250424143549.669426-3-willemdebruijn.kernel@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Willem de Bruijn [Thu, 24 Apr 2025 14:35:18 +0000 (10:35 -0400)]
ipv4: prefer multipath nexthop that matches source address
With multipath routes, try to ensure that packets leave on the device
that is associated with the source address.
Avoid the following tcpdump example:
veth0 Out IP 10.1.0.2.38640 > 10.2.0.3.8000: Flags [S]
veth1 Out IP 10.1.0.2.38648 > 10.2.0.3.8000: Flags [S]
Which can happen easily with the most straightforward setup:
ip addr add 10.0.0.1/24 dev veth0
ip addr add 10.1.0.1/24 dev veth1
ip route add 10.2.0.3 nexthop via 10.0.0.2 dev veth0 \
nexthop via 10.1.0.2 dev veth1
This is apparently considered WAI, based on the comment in
ip_route_output_key_hash_rcu:
* 2. Moreover, we are allowed to send packets with saddr
* of another iface. --ANK
It may be ok for some uses of multipath, but not all. For instance,
when using two ISPs, a router may drop packets with unknown source.
The behavior occurs because tcp_v4_connect makes three route
lookups when establishing a connection:
1. ip_route_connect calls to select a source address, with saddr zero.
2. ip_route_connect calls again now that saddr and daddr are known.
3. ip_route_newports calls again after a source port is also chosen.
With a route with multiple nexthops, each lookup may make a different
choice depending on available entropy to fib_select_multipath. So it
is possible for 1 to select the saddr from the first entry, but 3 to
select the second entry, leading to the above situation.
Address this by preferring a nexthop that matches the flowi4 saddr. This
will make 2 and 3 make the same choice as 1. Continue to update the
backup choice until a choice that matches saddr is found.
Do this in fib_select_multipath itself, rather than passing an fl4_oif
constraint, to avoid changing non-multipath route selection. Commit e6b45241c57a ("ipv4: reset flowi parameters on route connect") shows
how that may cause regressions.
Also read ipv4.sysctl_fib_multipath_use_neigh only once. No need to
refresh in the loop.
This does not happen in IPv6, which performs only one lookup.
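Conceptually, the selection now works like this self-contained sketch
(illustrative names and a simplified threshold scheme; not the verbatim
fib_semantics.c code):

  struct mpath {
          u32     hash_threshold; /* upper bound for hash selection */
          __be32  dev_saddr;      /* address owned by the egress dev */
  };

  /* Prefer a hash-eligible nexthop whose device owns the flow's saddr;
   * otherwise fall back to the first hash-eligible nexthop. */
  static int select_multipath(const struct mpath *nh, int num,
                              u32 hash, __be32 saddr)
  {
          int fallback = -1;
          int i;

          for (i = 0; i < num; i++) {
                  if (hash > nh[i].hash_threshold)
                          continue;
                  if (nh[i].dev_saddr == saddr)
                          return i;       /* 2 and 3 now agree with 1 */
                  if (fallback < 0)
                          fallback = i;
          }
          return fallback;
  }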
Signed-off-by: Willem de Bruijn <willemb@google.com>
Reviewed-by: David Ahern <dsahern@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Ido Schimmel <idosch@nvidia.com>
Link: https://patch.msgid.link/20250424143549.669426-2-willemdebruijn.kernel@gmail.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
The ICSSG firmware maintains a set of stats called PA_STATS.
Currently the driver only dumps 4 stats. Add support for dumping more
stats.
The offsets for the different stats are defined as macros in the
icssg_switch_map.h file. All the offsets are for Slice0; Slice1 offsets
are Slice0 + 4. The offset calculation is taken care of while reading
the stats in emac_update_hardware_stats().
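The per-slice rule amounts to something like this (a hedged sketch; the
helper name is illustrative, the offsets come from icssg_switch_map.h):

  /* Return the stat offset for a given slice: Slice1 = Slice0 + 4. */
  static u32 icssg_stat_offset(u32 slice0_offset, int slice)
  {
          return slice0_offset + slice * 4;
  }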
The statistics are documented in
Documentation/networking/device_drivers/icssg_prueth.rst
Jakub Kicinski [Mon, 28 Apr 2025 23:19:19 +0000 (16:19 -0700)]
Merge branch 'phase-out-hybrid-pci-devres-api'
Philipp Stanner says:
====================
Phase out hybrid PCI devres API
Fixes a number of minor issues with the usage of the PCI API in net.
Notably, it replaces calls to the sometimes-managed
pci_request_regions() with the always-managed pcim_request_all_regions(),
enabling us to remove that hybrid functionality from PCI.
====================
thunder_bgx's PCI device is enabled with pcim_enable_device(), a managed
devres function which ensures that the device gets disabled on driver
detach automatically.
Philipp Stanner [Fri, 25 Apr 2025 08:57:40 +0000 (10:57 +0200)]
net: thunder_bgx: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Furthermore, the PCI function being managed implies that it's not
necessary to call pci_release_regions() manually.
Remove the calls to pci_release_regions().
Replace pci_request_regions() with pcim_request_all_regions().
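The transformation these patches apply looks roughly like this (a sketch;
the region name string and the error handling are illustrative):

  /* Before: hybrid devres - managed only if pcim_enable_device() ran,
   * and error/remove paths had to call pci_release_regions() by hand. */
  err = pci_request_regions(pdev, KBUILD_MODNAME);
  ...
  pci_release_regions(pdev);

  /* After: always devres-managed; regions are released automatically. */
  err = pcim_request_all_regions(pdev, KBUILD_MODNAME);
  if (err)
          return err;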
Philipp Stanner [Fri, 25 Apr 2025 08:57:39 +0000 (10:57 +0200)]
net: mdio: thunder: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Furthermore, the PCI function being managed implies that it's not
necessary to call pci_release_regions() manually.
Remove the calls to pci_release_regions().
Replace pci_request_regions() with pcim_request_all_regions().
Philipp Stanner [Fri, 25 Apr 2025 08:57:38 +0000 (10:57 +0200)]
net: ethernet: sis900: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Replace pci_request_regions() with pcim_request_all_regions().
Signed-off-by: Philipp Stanner <phasta@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Acked-by: Daniele Venzano <venza@brownhat.org>
Link: https://patch.msgid.link/20250425085740.65304-7-phasta@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Philipp Stanner [Fri, 25 Apr 2025 08:57:37 +0000 (10:57 +0200)]
net: ethernet: natsemi: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Replace pci_request_regions() with pcim_request_all_regions().
Philipp Stanner [Fri, 25 Apr 2025 08:57:36 +0000 (10:57 +0200)]
net: tulip: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Replace pci_request_regions() with pcim_request_all_regions().
Philipp Stanner [Fri, 25 Apr 2025 08:57:35 +0000 (10:57 +0200)]
net: octeontx2: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Furthermore, the PCI function being managed implies that it's not
necessary to call pci_release_regions() manually.
Remove the calls to pci_release_regions().
Replace pci_request_regions() with pcim_request_all_regions().
Philipp Stanner [Fri, 25 Apr 2025 08:57:34 +0000 (10:57 +0200)]
net: prestera: Use pure PCI devres API
The currently used function pci_request_regions() is one of the
problematic "hybrid devres" PCI functions, which are sometimes managed
through devres, and sometimes not (depending on whether
pci_enable_device() or pcim_enable_device() has been called before).
The PCI subsystem wants to remove this behavior and, therefore, needs to
port all users to functions that don't have this problem.
Furthermore, the PCI function being managed implies that it's not
necessary to call pci_release_regions() manually.
Remove the calls to pci_release_regions().
Replace pci_request_regions() with pcim_request_all_regions().
Signed-off-by: Philipp Stanner <phasta@kernel.org>
Acked-by: Elad Nachman <enachman@marvell.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250425085740.65304-3-phasta@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
selftests: net: retry when bind returns EBUSY in xdp_helper
When binding the XDP socket, we may get EBUSY because the deferred
destructor of the XDP socket in the previous test has not been executed
yet. If that is the case, just sleep and retry a few times.
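The retry logic amounts to something like this (a hedged sketch; the retry
count and delay are illustrative):

  #include <errno.h>
  #include <unistd.h>
  #include <sys/socket.h>
  #include <linux/if_xdp.h>

  /* Retry bind() while the previous test's XSK destructor is pending. */
  static int bind_retry(int fd, const struct sockaddr_xdp *sxdp)
  {
          int i, ret = -1;

          for (i = 0; i < 10; i++) {
                  ret = bind(fd, (const struct sockaddr *)sxdp,
                             sizeof(*sxdp));
                  if (ret == 0 || errno != EBUSY)
                          break;
                  usleep(100000); /* 100 ms before the next attempt */
          }
          return ret;
  }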
====================
veth: qdisc backpressure and qdisc check refactor
This patch series addresses TX drops seen on veth devices under load,
particularly when using threaded NAPI, which is our setup in production.
The root cause is that the NAPI consumer often runs on a different CPU
than the producer. Combined with scheduling delays or simply slower
consumption, this increases the chance that the ptr_ring fills up before
packets are drained, resulting in drops from veth_xmit() (ndo_start_xmit()).
To make this easier to reproduce, we’ve created a script that sets up a
test scenario using network namespaces. The script inserts 1000 iptables
rules in the consumer namespace to slow down packet processing and
amplify the issue. Reproducer script:
This series first introduces a helper to detect no-queue qdiscs and then
uses it in the veth driver to conditionally apply qdisc-level
backpressure when a real qdisc is attached. The behavior is off by
default and opt-in, ensuring minimal impact and easy activation.
veth: apply qdisc backpressure on full ptr_ring to reduce TX drops
In production, we're seeing TX drops on veth devices when the ptr_ring
fills up. This can occur when NAPI mode is enabled, though it's
relatively rare. However, with threaded NAPI - which we use in
production - the drops become significantly more frequent.
The underlying issue is that with threaded NAPI, the consumer often runs
on a different CPU than the producer. This increases the likelihood of
the ring filling up before the consumer gets scheduled, especially under
load, leading to drops in veth_xmit() (ndo_start_xmit()).
This patch introduces backpressure by returning NETDEV_TX_BUSY when the
ring is full, signaling the qdisc layer to requeue the packet. The txq
(netdev queue) is stopped in this condition and restarted once
veth_poll() drains entries from the ring, ensuring coordination between
NAPI and qdisc.
Backpressure is only enabled when a qdisc is attached. Without a qdisc,
the driver retains its original behavior - dropping packets immediately
when the ring is full. This avoids unexpected behavior changes in setups
without a configured qdisc.
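In ndo_start_xmit terms, the idea is roughly the following (a simplified
sketch of the behavior described above, with illustrative field names; not
the verbatim veth_xmit() code):

  if (unlikely(ptr_ring_full(&rq->xdp_ring))) {
          if (has_real_qdisc) {                   /* opt-in: qdisc attached */
                  netif_tx_stop_queue(txq);       /* veth_poll() restarts it */
                  return NETDEV_TX_BUSY;          /* qdisc requeues the skb */
          }
          dev_kfree_skb_any(skb);                 /* original behavior: drop */
          return NETDEV_TX_OK;
  }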
With a qdisc in place (e.g. fq, sfq) this allows Active Queue Management
(AQM) to fairly schedule packets across flows and reduce collateral
damage from elephant flows.
A known limitation of this approach is that the full ring sits in front
of the qdisc layer, effectively forming a FIFO buffer that introduces
base latency. While AQM still improves fairness and mitigates flow
dominance, the latency impact is measurable.
In hardware drivers, this issue is typically addressed using BQL (Byte
Queue Limits), which adapts the number of in-flight bytes to what the
physical link rate can sustain. However, for virtual drivers like veth,
there is no fixed bandwidth constraint - the bottleneck is CPU availability
and the scheduler's ability to run the NAPI thread. It is unclear how
effective BQL would be in this context.
This patch serves as a first step toward addressing TX drops. Future work
may explore adapting a BQL-like mechanism to better suit virtual devices
like veth.
net: sched: generalize check for no-queue qdisc on TX queue
The "noqueue" qdisc can either be directly attached, or get default
attached if net_device priv_flags has IFF_NO_QUEUE. In both cases, the
allocated Qdisc structure gets its enqueue function pointer reset to
NULL by noqueue_init() via noqueue_qdisc_ops.
This is a common case for software virtual net_devices. For these devices
with no-queue, the transmission path in __dev_queue_xmit() will bypass
the qdisc layer, directly invoking the device driver's ndo_start_xmit()
(via dev_hard_start_xmit()). In this mode the device driver is not allowed
to ask for packets to be queued (either by returning NETDEV_TX_BUSY or by
stopping the TXQ).
The simplest and most reliable way to identify this no-queue case is by
checking if enqueue == NULL.
The vrf driver currently open-codes this check (!qdisc->enqueue). While
functionally correct, this low-level detail is better encapsulated in a
dedicated helper for clarity and long-term maintainability.
To make this behavior more explicit and reusable, this patch introduces a
new helper: qdisc_txq_has_no_queue(). The helper will also be used by the
veth driver in the next patch, which introduces optional qdisc-based
backpressure.
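A minimal sketch of what such a helper looks like (the exact kernel
version may differ in how it dereferences txq->qdisc):

  static inline bool qdisc_txq_has_no_queue(const struct netdev_queue *txq)
  {
          const struct Qdisc *qdisc = rcu_access_pointer(txq->qdisc);

          return qdisc->enqueue == NULL;
  }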
x86_64-linux-ld: drivers/net/mdio/mdio-realtek-rtl9300.o: in function `rtl9300_mdiobus_probe':
mdio-realtek-rtl9300.c:(.text+0x941): undefined reference to `devm_mdiobus_alloc_size'
x86_64-linux-ld: mdio-realtek-rtl9300.c:(.text+0x9e2): undefined reference to `__devm_mdiobus_register'
Since this is a hidden symbol, it needs to be selected by each user,
rather than the usual 'depends on'. I see that there are a few other
drivers that accidentally use 'depends on', so fix these as well for
consistency and to avoid dependency loops.
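In Kconfig terms the fix looks like this (a sketch; the driver symbol name
is illustrative, MDIO_DEVRES being the hidden symbol that provides
devm_mdiobus_alloc_size() and __devm_mdiobus_register()):

  config MDIO_REALTEK_RTL9300
          tristate "Realtek RTL9300 MDIO interface support"
          select MDIO_DEVRES    # hidden symbol: select it, don't depend on it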
====================
net: stmmac: dwmac-loongson: Add Loongson-2K3000 support
This series adds stmmac driver support for Loongson-2K3000/Loongson-3B6000M,
which introduces a new CORE ID (0x12) and a new PCI device ID (0x7a23). The
new core reduces the number of channels from 8 to 4, but checksum is
supported on all channels.
====================
Note that the first patch of the series has been merged separately as
commit f438eee2c8c9 ("net: stmmac: dwmac-loongson: Move queue number init
to common function")
net: stmmac: dwmac-loongson: Add new GMAC's PCI device ID support
Add support for a new GMAC PCI device ID (0x7a23), which is used in
Loongson-2K3000/Loongson-3B6000M. The new GMAC device uses an external PHY,
so it reuses loongson_gmac_data() like the old GMAC device (0x7a03), and
the new GMAC device still doesn't support flow control for now.
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Tested-by: Henry Chen <chenx97@aosc.io>
Tested-by: Biao Dong <dongbiao@loongson.cn>
Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Link: https://patch.msgid.link/20250424072209.3134762-4-chenhuacai@loongson.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: stmmac: dwmac-loongson: Add new multi-chan IP core support
Add support for a new multi-channel IP core (0x12), which is used in
Loongson-2K3000/Loongson-3B6000M. Compared with the 0x10 core, the new
0x12 core reduces the number of channels from 8 to 4, but checksum is
supported on all channels.
Add a "multichan" flag to loongson_data, so that we can simply use a
"if (ld->multichan)" condition rather than the complicated condition
"if (ld->loongson_id == DWMAC_CORE_MULTICHAN_V1 || ld->loongson_id ==
DWMAC_CORE_MULTICHAN_V2)".
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Henry Chen <chenx97@aosc.io>
Tested-by: Biao Dong <dongbiao@loongson.cn>
Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Link: https://patch.msgid.link/20250424072209.3134762-3-chenhuacai@loongson.cn
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
net: stmmac: socfpga: 1000BaseX support and cleanups
This small series sorts out 1000BaseX support and does a bit of cleanup
for the Lynx conversion.
Patch 1 makes sure that we set the right phy_mode when working in
1000BaseX mode, so that the internal GMII is configured correctly.
Patch 2 removes a check for phy_device upon calling fix_mac_speed(). As
the SGMII adapter may be chained to a Lynx PCS, checking for a
phy_device to be attached to the netdev before enabling the SGMII
adapter doesn't make sense, as we won't have a downstream PHY when using
1000BaseX.
Patch 3 cleans an unused field from the PCS conversion.
net: stmmac: socfpga: Remove unused pcs-mdiodev field
When dwmac-socfpga was converted to using the Lynx PCS (previously
referred to in the driver as the Altera TSE PCS),
lynx_pcs_create_mdiodev() was used to create the PCS instance.
As this function didn't exist in the early versions of the series, a
local mdiodev object was stored for PCS creation. It was never used, but
still made it into the driver, so remove it.
net: stmmac: socfpga: Don't check for phy to enable the SGMII adapter
The SGMII adapter needs to be enabled for both Cisco SGMII and 1000BaseX
operations. It doesn't make sense to check for an attached phydev here,
as we simply might not have any, in particular if we're using the
1000BaseX interface mode.
Make it so that we only re-enable the SGMII adapter when it's present, and
when we use a phy_mode that is handled by said adapter.
net: stmmac: socfpga: Enable internal GMII when using 1000BaseX
Dwmac Socfpga may be used with an instance of a Lynx / Altera TSE PCS,
in which case it gains support for 1000BaseX.
It appears that the PCS is wired to the MAC through an internal GMII
bus. Make sure that we enable the GMII_MII mode for the internal MAC when
using 1000BaseX.
net: stmmac: dwmac-loongson: Move queue number init to common function
Currently, the tx and rx queue number initialization is duplicated in
loongson_gmac_data() and loongson_gnet_data(), so move it to the common
function loongson_default_data().
This is a preparation for later patches.
Reviewed-by: Yanteng Si <si.yanteng@linux.dev>
Tested-by: Henry Chen <chenx97@aosc.io>
Tested-by: Biao Dong <dongbiao@loongson.cn>
Signed-off-by: Baoqi Zhang <zhangbaoqi@loongson.cn>
Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
Signed-off-by: David S. Miller <davem@davemloft.net>
Hangbin Liu [Thu, 24 Apr 2025 04:22:38 +0000 (04:22 +0000)]
bonding: assign random address if device address is same as bond
This change addresses a MAC address conflict issue in failover scenarios,
similar to the problem described in commit a951bc1e6ba5 ("bonding: correct
the MAC address for 'follow' fail_over_mac policy").
In fail_over_mac=follow mode, the bonding driver expects the formerly active
slave to swap MAC addresses with the newly active slave during failover.
However, under certain conditions, two slaves may end up with the same MAC
address, which breaks this policy:
1) ip link set eth0 master bond0
-> bond0 adopts eth0's MAC address (MAC0).
2) ip link set eth1 master bond0
-> eth1 is added as a backup with its own MAC (MAC1).
3) ip link set eth0 nomaster
-> eth0 is released and restores its MAC (MAC0).
-> eth1 becomes the active slave, and bond0 assigns MAC0 to eth1.
4) ip link set eth0 master bond0
-> eth0 is re-added to bond0, now both eth0 and eth1 have MAC0.
This results in a MAC address conflict and violates the expected behavior
of the failover policy.
To fix this, we assign a random MAC address to any newly added slave if
its current MAC address matches that of the bond. The original (permanent)
MAC address is saved and will be restored when the device is released
from the bond.
This ensures that each slave has a unique MAC address during failover
transitions, preserving the integrity of the fail_over_mac=follow policy.
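The core of the fix is a check at enslave time (a heavily simplified
sketch; the real patch also saves the permanent MAC and restores it when
the device leaves the bond):

  /* bond_enslave(): avoid a duplicate MAC under fail_over_mac=follow. */
  if (ether_addr_equal(slave_dev->dev_addr, bond->dev->dev_addr))
          eth_hw_addr_random(slave_dev);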
Fixes: 3915c1e8634a ("bonding: Add "follow" option to fail_over_mac")
Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
Acked-by: Jay Vosburgh <jv@jvosburgh.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
====================
io_uring/zcrx: fix selftests and add new test for rss ctx
Update io_uring zero copy receive selftest. Patch 1 does a requested
cleanup to use defer() for undoing ethtool actions during the test and
restoring the NIC under test back to its original state.
Patch 2 adds a required call to set hds_thresh to 0. This is needed for
the queue API.
Patch 3 adds a new test case for steering into RSS contexts. A real
application using io_uring zero copy receive relies on this working to
shard work across multiple queues. There seem to be some
differences/bugs between steering into RSS contexts and into individual
queues.
====================
David Wei [Fri, 25 Apr 2025 02:20:49 +0000 (19:20 -0700)]
io_uring/zcrx: selftests: add test case for rss ctx
RSS contexts are used to shard work across multiple queues for an
application using io_uring zero copy receive. Add a test case checking
that steering flows into an RSS context works.
Until I add multi-thread support to the selftest binary, this test case
only has 1 queue in the RSS context.
====================
Fix netdevsim to correctly mark NAPI IDs
This series fixes netdevsim to correctly set the NAPI ID on the skb.
This is helpful for writing tests around features that use
SO_INCOMING_NAPI_ID.
In addition to the netdevsim fix in patch 1, patches 2 & 3 do some self
test refactoring and add a test for NAPI IDs. The test itself (patch 3)
introduces a C helper because apparently python doesn't have
socket.SO_INCOMING_NAPI_ID.
Joe Damato [Thu, 24 Apr 2025 00:27:33 +0000 (00:27 +0000)]
selftests: drv-net: Test that NAPI ID is non-zero
Test that the SO_INCOMING_NAPI_ID of a network file descriptor is
non-zero. This ensures that either the core networking stack or, in some
cases like netdevsim, the driver correctly sets the NAPI ID.
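Since python lacks the constant, the C helper boils down to something like
this (a sketch; the actual helper in the tree may differ):

  #include <sys/socket.h>

  #ifndef SO_INCOMING_NAPI_ID
  #define SO_INCOMING_NAPI_ID 56          /* from asm-generic/socket.h */
  #endif

  /* Return the NAPI ID of a connected socket, or 0 on failure. */
  static unsigned int get_napi_id(int fd)
  {
          unsigned int napi_id = 0;
          socklen_t len = sizeof(napi_id);

          if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                         &napi_id, &len))
                  return 0;
          return napi_id;                 /* test asserts this is non-zero */
  }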
Joe Damato [Thu, 24 Apr 2025 00:27:31 +0000 (00:27 +0000)]
netdevsim: Mark NAPI ID on skb in nsim_rcv
Previously, nsim_rcv was not marking the NAPI ID on the skb, leading to
applications seeing a NAPI ID of 0 when using SO_INCOMING_NAPI_ID.
To add to the userland confusion, netlink appears to correctly report
the NAPI IDs for netdevsim queues but the resulting file descriptor from
a call to accept() was reporting a NAPI ID of 0.
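The fix itself is essentially a single call in the netdevsim receive path
(a sketch; the queue/NAPI field names are illustrative):

  /* nsim_rcv(): tie the skb to the RX queue's NAPI instance so that
   * SO_INCOMING_NAPI_ID reports a real ID instead of 0. */
  skb_mark_napi_id(skb, &rq->napi);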
Johannes Berg [Wed, 23 Apr 2025 15:08:08 +0000 (17:08 +0200)]
net: ethernet: mtk_wed: annotate RCU release in attach()
There are some sparse warnings in wifi, and it seems that
it's actually possible to annotate a function pointer with
__releases(), making the sparse warnings go away. That also
serves as documentation that rcu_read_unlock() must be called
in the attach method, so add that annotation.
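For illustration, the annotation can sit directly on the function pointer
(a sketch with simplified members, not the verbatim mtk_wed header):

  struct mtk_wed_ops {
          /* sparse: attach() returns with the RCU read lock released */
          int (*attach)(struct mtk_wed_device *dev) __releases(RCU);
  };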
Jakub Kicinski [Fri, 25 Apr 2025 01:21:07 +0000 (18:21 -0700)]
Merge branch 'tcp-fastopen-observability'
Jeremy Harris says:
====================
tcp: fastopen: observability
Whether TCP Fast Open was used for a connection is not reliably
observable by an accepting application when the SYN passed no data.
Fix this by noting during SYN receive processing that an acceptable Fast
Open option was used, and provide this to userland via getsockopt TCP_INFO.
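An accepting application would then query this roughly as follows (a
hedged sketch; conn_fd is assumed to be the accepted socket, and
TCPI_OPT_SYN_DATA is an existing tcpi_options bit used here as a stand-in
for the flag this series exposes):

  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/tcp.h>

  #ifndef TCPI_OPT_SYN_DATA
  #define TCPI_OPT_SYN_DATA 32            /* from linux/tcp.h */
  #endif

  struct tcp_info info;
  socklen_t len = sizeof(info);

  if (!getsockopt(conn_fd, IPPROTO_TCP, TCP_INFO, &info, &len) &&
      (info.tcpi_options & TCPI_OPT_SYN_DATA))
          printf("TCP Fast Open was used\n");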
====================
Dan Carpenter [Wed, 23 Apr 2025 08:25:45 +0000 (11:25 +0300)]
rxrpc: rxgk: Fix some reference count leaks
These paths should call rxgk_put(gk) but they don't. In the
rxgk_construct_response() function the "goto error;" will free the
"response" skb as well as calling rxgk_put(), so that's a bonus.
Fixes: 9d1d2b59341f ("rxrpc: rxgk: Implement the yfs-rxgk security class (GSSAPI)")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Acked-by: David Howells <dhowells@redhat.com>
Link: https://patch.msgid.link/aAikCbsnnzYtVmIA@stanley.mountain
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
net: ethernet: mtk_eth_soc: convert cap_bit in mtk_eth_muxc struct to u64
With commit 51a4df60db5c2 ("net: ethernet: mtk_eth_soc: convert caps in
mtk_soc_data struct to u64") the capabilities bitfield was converted to
a 64-bit value, but the cap_bit field in struct mtk_eth_muxc, which is
used to store a full bitfield (rather than a bit number, as the name
would suggest), still holds only a 32-bit value.
Change the type of cap_bit to u64 in order to avoid truncating the
bitfield, which results in path selection not working with capabilities
above the 32-bit limit.
The values currently stored in the cap_bit field include, for example:
MTK_ETH_MUX_GDM1_TO_GMAC1_ESW: BIT_ULL(18) | BIT_ULL(5)
While all those values are currently still within 32-bit boundaries,
the addition of new capabilities of MT7988 as well as future SoC's
like MT7987 will exceed them. Also, the use of a 32-bit 'int' type to
store the result of a BIT_ULL(...) is misleading.
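The fix itself is a one-type change (a sketch of the struct as described
above; other members elided):

  struct mtk_eth_muxc {
          const char      *name;
          u64             cap_bit;   /* was: int */
          int             (*set_path)(struct mtk_eth *eth, u64 path);
  };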
====================
net: bcmasp: Add v3.0 and remove v2.0
asp-v2.0 had one supported SoC that never saw the light of day.
Given that it was the first iteration of the HW, it ended up with
some one-off HW design decisions that were changed in further iterations
of the HW. We remove support to simplify the code and make it easier to
add future revisions.
Add support for asp-v3.0. asp-v3.0 reduces the feature set for cost
savings: the number of channels and network filters is reduced, and some
features and statistics are removed.
====================
The asp-v3.0 is a major HW revision that reduced the number of
channels and filters. The goal was to save cost by reducing the
feature set.
Changes for asp-v3.0:
- The number of network filters was reduced.
- The number of channels was reduced.
- EDPKT stats were removed.
- A bug with csum offload was fixed.
Add asp-v3.0 support. v3.0 is a major revision that reduces
the feature set for cost savings. We have a reduced amount of
channels and network filters.
The SoC that supported asp-v2.0 never saw the light of day. asp-v2.0 has
quirks that make the logic overly complicated. For example, asp-v2.0 is
the only revision that has a different wake-up IRQ hookup. Remove asp-v2.0
support to make supporting future HW revisions cleaner.
Cross-merge networking fixes after downstream PR (net-6.15-rc4).
This pull includes wireless and a fix to vxlan which isn't
in Linus's tree just yet. The latter creates a silent conflict /
build breakage, so merge it now to avoid causing problems.
drivers/net/vxlan/vxlan_vnifilter.c
  094adad91310 ("vxlan: Use a single lock to protect the FDB table")
  087a9eb9e597 ("vxlan: vnifilter: Fix unlocked deletion of default FDB entry")
https://lore.kernel.org/20250423145131.513029-1-idosch@nvidia.com
vxlan: vnifilter: Fix unlocked deletion of default FDB entry
When a VNI is deleted from a VXLAN device in 'vnifilter' mode, the FDB
entry associated with the default remote (assuming one was configured)
is deleted without holding the hash lock. This is wrong and will result
in a warning [1] being generated by the lockdep annotation that was
added by commit ebe642067455 ("vxlan: Create wrappers for FDB lookup").
Reproducer:
# ip link add vx0 up type vxlan dstport 4789 external vnifilter local 192.0.2.1
# bridge vni add vni 10010 remote 198.51.100.1 dev vx0
# bridge vni del vni 10010 dev vx0
Fix by acquiring the hash lock before the deletion and releasing it
afterwards. Blame the original commit that introduced the issue rather
than the one that exposed it.
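The fix is essentially a lock/unlock pair around the deletion (a sketch;
the helper name is illustrative, the lock being the single FDB table lock
referenced in the merge note above):

  spin_lock_bh(&vxlan->hash_lock);
  vxlan_fdb_delete_default(vxlan, vni);   /* now under the FDB lock */
  spin_unlock_bh(&vxlan->hash_lock);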
Jakub Kicinski [Thu, 24 Apr 2025 18:10:57 +0000 (11:10 -0700)]
Merge tag 'wireless-2025-04-24' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless
Johannes Berg says:
====================
Some more fixes, notably:
* iwlwifi: various regression and iwlmld fixes
* mac80211: fix TX frames in monitor mode
* brcmfmac: error handling for firmware load
* tag 'wireless-2025-04-24' of https://git.kernel.org/pub/scm/linux/kernel/git/wireless/wireless:
wifi: iwlwifi: restore missing initialization of async_handlers_list
wifi: brcm80211: fmac: Add error handling for brcmf_usb_dl_writeimage()
wifi: plfxlc: Remove erroneous assert in plfxlc_mac_release
wifi: iwlwifi: fix the check for the SCRATCH register upon resume
wifi: iwlwifi: don't warn if the NIC is gone in resume
wifi: iwlwifi: mld: fix BAID validity check
wifi: iwlwifi: back off on continuous errors
wifi: iwlwifi: mld: only create debugfs symlink if it does not exist
wifi: iwlwifi: mld: inform trans on init failure
wifi: iwlwifi: mld: properly handle async notification in op mode start
Revert "wifi: iwlwifi: make no_160 more generic"
Revert "wifi: iwlwifi: add support for BE213"
wifi: mac80211: restore monitor for outgoing frames
====================
Merge tag 'v6.15-p5' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
Pull crypto fixes from Herbert Xu:
- Revert acomp multibuffer tests which were buggy
- Fix off-by-one regression in new scomp code
- Lower quality setting on atmel-sha204a as it may not be random
* tag 'v6.15-p5' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6:
crypto: atmel-sha204a - Set hwrng quality to lowest possible
crypto: scomp - Fix off-by-one bug when calculating last page
Revert "crypto: testmgr - Add multibuffer acomp testing"
net: phy: marvell-88q2xxx: Enable temperature sensor for mv88q211x
The temperature sensor enabled for mv88q222x devices also functions for
mv88q211x based devices. Unify the two devices' probe functions to enable
the sensors for all devices supported by this driver.
The same oddity as for mv88q222x devices exists: the PHY link must be up
for a correct temperature reading to be reported.
# cat /sys/class/hwmon/hwmon9/temp1_input
-75000
# ifconfig end5 up
# cat /sys/class/hwmon/hwmon9/temp1_input
59000
Worth noting is that while the temperature register offsets and layout
are the same between mv88q211x and mv88q222x devices, their names in the
datasheets are different. This change keeps the mv88q222x names for the
mv88q211x support.
====================
net: stmmac: fix timestamp snapshots on dwmac1000
This is v2 of a small series containing two small fixes for the
timestamp snapshot feature on stmmac, especially on the dwmac1000 version.
Those issues have been detected on a socfpga (Cyclone V) platform. They
kind of follow the big rework sent by Maxime at the end of last year to
properly split this feature support between different versions of the
DWMAC IP.
net: stmmac: fix multiplication overflow when reading timestamp
The current way of reading a timestamp snapshot in stmmac can lead to
integer overflow, as the computation is done on 32 bits. The issue has
been observed on a dwmac-socfpga platform returning chaotic timestamp
values due to this overflow. The corresponding multiplication is done
with a MUL instruction, which returns 32-bit values. Explicitly casting
the value to 64 bits replaced the MUL with a UMLAL, which computes and
returns the result on 64 bits, and so returns the timestamps correctly.
Prevent this overflow by explicitly casting the intermediate value to
u64 to make sure that the whole computation is made on u64. While at it,
apply the same cast on the other dwmac variant (GMAC4) method for
snapshot retrieval.
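The fix is a single cast so that the multiply is done on 64 bits (a
sketch; the register macro names are illustrative):

  u64 ns;

  ns = readl(ptpaddr + PTP_ATNR);                       /* ns part */
  ns += (u64)readl(ptpaddr + PTP_ATSR) * NSEC_PER_SEC;  /* s part, 64-bit */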
net: stmmac: fix dwmac1000 ptp timestamp status offset
When a PTP interrupt occurs, the driver accesses the wrong offset to
learn about the number of available snapshots in the FIFO for dwmac1000:
it should be accessing bits 29..25, while it is currently reading bits
19..16 (those are bits about the auxiliary triggers which have generated
the timestamps). As a consequence, it does not correctly compute the
number of available snapshots, and so may not generate the
corresponding clock events if the bogus value ends up being 0.
Fix clock events generation by reading the correct bits in the timestamp
register for dwmac1000.
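The corrected readout looks roughly like this (a sketch; the register name
is illustrative, the bit positions as described above):

  /* dwmac1000: snapshot count is in bits 29..25, not 19..16. */
  num_snapshot = (readl(ioaddr + PTP_TS_STATUS) & GENMASK(29, 25)) >> 25;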
====================
ipv6: No RTNL for IPv6 routing table.
IPv6 routing tables are protected by each table's lock and work in
the interrupt context, which means we basically don't need RTNL to
modify an IPv6 routing table itself.
Currently, the control paths require RTNL because we may need to
perform device and nexthop lookups; we must prevent dev/nexthop from
going away from the netns.
This, however, can be achieved by RCU as well.
If we are in the RCU critical section while adding an IPv6 route,
synchronize_net() in __dev_change_net_namespace() and
unregister_netdevice_many_notify() guarantee that the dev will not be
moved to another netns or removed.
Also, nexthop is guaranteed not to be freed during the RCU grace period.
If we take care of the race between nexthop removal and IPv6 route
addition, we can get rid of RTNL from the control paths.
Patch 1 moves a validation for RTA_MULTIPATH earlier.
Patch 2 removes RTNL for SIOCDELRT and RTM_DELROUTE.
Patch 3 ~ 11 move validation and memory allocation earlier.
Patch 12 prevents a race between two requests for the same table.
Patch 13 & 14 prevent the nexthop race mentioned above.
Patch 15 removes RTNL for SIOCADDRT and RTM_NEWROUTE.
Test:
The script [0] lets each CPU-X create 100000 routes on table-X in a
batch.
I changed the number of routes (1000 ~ 100000 per CPU/table) and
consistently saw it finish 3x faster with this series.
[0]
mkdir tmp
NS="test"

ip netns add $NS
ip -n $NS link add veth0 type veth peer veth1
ip -n $NS link set veth0 up
ip -n $NS link set veth1 up

TABLES=()
for i in $(seq $(nproc)); do
  TABLES+=("$i")
done

ROUTES=()
for i in {1..100}; do
  for j in {1..1000}; do
    ROUTES+=("2001:$i:$j::/64")
  done
done

for TABLE in "${TABLES[@]}"; do
  (
    FILE="./tmp/batch-table-$TABLE.txt"
    > $FILE
    for ROUTE in "${ROUTES[@]}"; do
      echo "route add $ROUTE dev veth0 table $TABLE" >> $FILE
    done
  ) &
done
wait

echo "start adding routes"
START_TIME=$(date +%s%3N)

for TABLE in "${TABLES[@]}"; do
  ip -n $NS -6 -batch "./tmp/batch-table-$TABLE.txt" &
done
ipv6: Get rid of RTNL for SIOCADDRT and RTM_NEWROUTE.
Now we are ready to remove RTNL from SIOCADDRT and RTM_NEWROUTE.
The remaining things to do are
1. pass false to lwtunnel_valid_encap_type_attr()
2. use rcu_dereference_rtnl() in fib6_check_nexthop()
3. place rcu_read_lock() before ip6_route_info_create_nh().
Let's complete the RTNL-free conversion.
When each CPU-X adds 100000 routes on table-X in a batch
concurrently on c7a.metal-48xl EC2 instance with 192 CPUs,
ipv6: Protect nh->f6i_list with spinlock and flag.
We will get rid of RTNL from RTM_NEWROUTE and SIOCADDRT.
Then, we could end up adding a route tied to a dying nexthop.
The nexthop itself is not freed during the RCU grace period, but
if we link a route after __remove_nexthop_fib() is called for the
nexthop, the route will be leaked.
To avoid the race between IPv6 route addition under RCU vs nexthop
deletion under RTNL, let's add a dead flag and protect it and
nh->f6i_list with a spinlock.
__remove_nexthop_fib() acquires the nexthop's spinlock and sets
nh->dead to true, then calls ip6_del_rt() for the linked routes one by
one without the spinlock, because fib6_purge_rt() acquires it later.
While adding an IPv6 route, fib6_add() acquires the nexthop lock and
checks the dead flag just before inserting the route.
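The locking scheme is roughly (a simplified sketch of the description
above; error handling and list bookkeeping elided):

  /* nexthop removal side (__remove_nexthop_fib()): */
  spin_lock_bh(&nh->lock);
  nh->dead = true;                 /* no new routes may link to nh */
  spin_unlock_bh(&nh->lock);

  /* route insertion side (fib6_add()): */
  spin_lock_bh(&nh->lock);
  if (nh->dead)
          err = -EINVAL;           /* lost the race with removal */
  else
          list_add(&rt->nh_list, &nh->f6i_list);
  spin_unlock_bh(&nh->lock);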
ipv6: Defer fib6_purge_rt() in fib6_add_rt2node() to fib6_add().
The next patch adds per-nexthop spinlock which protects nh->f6i_list.
When rt->nh is not NULL, fib6_add_rt2node() will be called under the lock.
fib6_add_rt2node() could call fib6_purge_rt() for another route, which
could hold another nexthop lock.
Then, a deadlock could happen between two nexthops.
Let's defer fib6_purge_rt() after fib6_add_rt2node().
We will get rid of RTNL from RTM_NEWROUTE and SIOCADDRT and rely
on RCU to guarantee dev and nexthop lifetime.
Then, the RCU section will start before ip6_route_info_create_nh()
in ip6_route_multipath_add(), but ip6_route_info_create() is called
in the same loop and will sleep.
Let's split the loop into ip6_route_mpath_info_create() and
ip6_route_mpath_info_create_nh().
Note that ip6_route_info_append() is now integrated into
ip6_route_mpath_info_create_nh() because we need to call different
free functions for nexthops that passed ip6_route_info_create_nh().
In case of failure, the remaining nexthops that ip6_route_info_create_nh()
has not been called for will be freed by ip6_route_mpath_info_cleanup().
OTOH, if a nexthop passes ip6_route_info_create_nh(), it will be linked
to a local temporary list, which will be spliced back to rt6_nh_list.
In case of failure, these nexthops will be released by fib6_info_release()
in ip6_route_multipath_add().
ipv6: Preallocate rt->fib6_nh->rt6i_pcpu in ip6_route_info_create().
ip6_route_info_create_nh() will be called under RCU.
Then, fib6_nh_init() is also under RCU, but per-cpu memory allocation
is very likely to fail with GFP_ATOMIC while bulk-adding IPv6 routes,
and we would see a bunch of this message in dmesg:
percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
percpu: allocation failed, size=8 align=8 atomic=1, atomic alloc failed, no space left
Let's preallocate rt->fib6_nh->rt6i_pcpu in ip6_route_info_create().
If something fails before the original memory allocation in
fib6_nh_init(), ip6_route_info_create_nh() calls fib6_info_release(),
which releases the preallocated per-cpu memory.
Note that rt->fib6_nh->rt6i_pcpu is not preallocated when called via
ipv6_stub, so we still need alloc_percpu_gfp() in fib6_nh_init().
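The preallocation itself is one call in the sleepable path (a sketch of
the idea; the error label is illustrative):

  /* ip6_route_info_create(): sleeping allocation is still allowed here. */
  rt->fib6_nh->rt6i_pcpu = alloc_percpu_gfp(struct rt6_info *, GFP_KERNEL);
  if (!rt->fib6_nh->rt6i_pcpu)
          goto out_free;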
We will get rid of RTNL from RTM_NEWROUTE and SIOCADDRT and rely
on RCU to guarantee dev and nexthop lifetime.
Then, we want to allocate as much as possible before entering
the RCU section.
The RCU section will start in the middle of ip6_route_info_create(),
and this is problematic for ip6_route_multipath_add() that calls
ip6_route_info_create() multiple times.
Let's split ip6_route_info_create() into two parts; one for memory
allocation and another for nexthop setup.
ipv6: Move nexthop_find_by_id() after fib6_info_alloc().
We will get rid of RTNL from RTM_NEWROUTE and SIOCADDRT.
Then, we must perform two lookups for nexthop and dev under RCU
to guarantee their lifetime.
ip6_route_info_create() calls nexthop_find_by_id() first if
RTA_NH_ID is specified, and then allocates struct fib6_info.
nexthop_find_by_id() must be called under RCU, but we do not want
to use GFP_ATOMIC for memory allocation here, which would be likely
to fail in ip6_route_multipath_add().
Let's move nexthop_find_by_id() after the memory allocation so
that we can later split ip6_route_info_create() into two parts:
the sleepable part and the RCU part.
ipv6: Check GATEWAY in rtm_to_fib6_multipath_config().
In ip6_route_multipath_add(), we call rt6_qualify_for_ecmp() for each
entry. If it returns false, the request fails.
rt6_qualify_for_ecmp() returns false if either of the conditions below
is true:
1. f6i->fib6_flags has RTF_ADDRCONF
2. f6i->nh is not NULL
3. f6i->fib6_nh->fib_nh_gw_family is AF_UNSPEC
1. is unnecessary because rtm_to_fib6_config() never sets RTF_ADDRCONF
in cfg->fc_flags.
2. is equivalent to checking cfg->fc_nh_id.
3. can be replaced by checking RTF_GATEWAY in the base and each multipath
entry, because AF_INET6 is set as f6i->fib6_nh->fib_nh_gw_family only when
cfg.fc_is_fdb is true or RTF_GATEWAY is set, and the former is always
false.
These checks do not require RCU and can be done earlier.
Let's perform the equivalent checks in rtm_to_fib6_multipath_config().
addrconf_f6i_alloc() does not need validation for struct fib6_config in
ip6_route_info_create().
ip6_route_multipath_add() calls ip6_route_info_create() for multiple
routes with slightly different fib6_config instances, which are copied
from the base config passed from userspace. So, we need not validate
the same config repeatedly.
Let's move such validation into rtm_to_fib6_config().
ipv6: Get rid of RTNL for SIOCDELRT and RTM_DELROUTE.
Basically, removing an IPv6 route does not require RTNL because
the IPv6 routing tables are protected by a per-table lock.
inet6_rtm_delroute() calls nexthop_find_by_id() to check if the
nexthop specified by RTA_NH_ID exists. nexthop uses rbtree and
the top-down walk can be safely performed under RCU.
ip6_route_del() already relies on RCU and the table lock, but we
need to extend the RCU critical section a bit more to cover
__ip6_del_rt(). For example, nexthop_for_each_fib6_nh() and
inet6_rt_notify() need RCU.
Let's call nexthop_find_by_id() and __ip6_del_rt() under RCU and
get rid of RTNL from inet6_rtm_delroute() and SIOCDELRT.
Even if the nexthop is removed after rcu_read_unlock() in
inet6_rtm_delroute(), __remove_nexthop_fib() cleans up the routes
tied to the nexthop, and ip6_route_del() returns -ESRCH. So the
request was at least valid as of nexthop_find_by_id(), and it's just
a matter of timing.
Note that we need to pass false to lwtunnel_valid_encap_type_attr().
The following patches also use the newroute bool.
Note also that fib6_get_table() does not require RCU because once
allocated fib6_table is not freed until netns dismantle. I will
post a follow-up series to convert such callers to RCU-lockless
version. [0]
Shannon Nelson [Mon, 21 Apr 2025 17:46:06 +0000 (10:46 -0700)]
pds_core: make wait_context part of q_info
Make the wait_context a full part of the q_info struct rather
than a stack variable that goes away after pdsc_adminq_post()
is done so that the context is still available after the wait
loop has given up.
There was a case where a slow development firmware caused
the adminq request to time out, but then later the FW finally
finished the request and sent the interrupt. The handler tried
to complete_all() the completion context that had been created
on the stack in pdsc_adminq_post() but no longer existed.
This caused bad pointer usage, kernel crashes, and much wailing
and gnashing of teeth.
Fixes: 01ba61b55b20 ("pds_core: Add adminq processing and commands")
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250421174606.3892-5-shannon.nelson@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
pds_core: Remove unnecessary check in pds_client_adminq_cmd()
When the pds_core driver was first created there were some race
conditions around using the adminq, especially for client drivers.
To reduce the possibility of a race condition there's a check
against pf->state in pds_client_adminq_cmd(). This is problematic
for a couple of reasons:
1. The PDSC_S_INITING_DRIVER bit is set during probe, but not
cleared until after everything in probe is complete, which
includes creating the auxiliary devices. For pds_fwctl this
means it can't make any adminq commands until after pds_core's
probe is complete even though the adminq is fully up by the
time pds_fwctl's auxiliary device is created.
2. The race conditions around using the adminq have been fixed
and this path is already protected against client drivers
calling pds_client_adminq_cmd() if the adminq isn't ready,
i.e. see pdsc_adminq_post() -> pdsc_adminq_inc_if_up().
Fix this by removing the pf->state check in pds_client_adminq_cmd()
because invalid accesses to pds_core's adminq is already handled by
pdsc_adminq_post()->pdsc_adminq_inc_if_up().
Fixes: 10659034c622 ("pds_core: add the aux client API")
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250421174606.3892-4-shannon.nelson@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
pds_core: handle unsupported PDS_CORE_CMD_FW_CONTROL result
If the FW doesn't support the PDS_CORE_CMD_FW_CONTROL command
the driver might at the least print garbage and at the worst
crash when the user runs the "devlink dev info" devlink command.
This happens because the stack variable fw_list is not zero-initialized,
which results in fw_list.num_fw_slots being a garbage value from the
stack. Then the driver tries to access fw_list.fw_names[i] with
i >= ARRAY_SIZE and runs off the end of the array.
Fix this by initializing the fw_list and by not failing
completely if the devcmd fails, because other useful information
is printed via devlink dev info even if the devcmd fails.
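The zero-initialization half of the fix is a one-liner (a sketch):

  struct pds_core_fw_list_info fw_list = {};  /* num_fw_slots starts at 0 */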
Fixes: 45d76f492938 ("pds_core: set up device and adminq")
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250421174606.3892-3-shannon.nelson@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
pds_core: Prevent possible adminq overflow/stuck condition
The pds_core's adminq is protected by the adminq_lock, which prevents
more than one command from being posted onto it at any one time. This
makes it so the client drivers cannot simultaneously post adminq commands.
However, the completions happen in a different context, which means
multiple adminq commands can be posted sequentially, all waiting
on completion.
On the FW side, the backing adminq request queue is only 16 entries
long and the retry mechanism and/or overflow/stuck prevention is
lacking. This can cause the adminq to get stuck, so commands are no
longer processed and completions are no longer sent by the FW.
As an initial fix, prevent more than 16 outstanding adminq commands so
there's no way for the adminq to get stuck. This works
because the backing adminq request queue will never have more than 16
pending adminq commands, so it will never overflow. This is done by
reducing the adminq depth to 16.
Fixes: 45d76f492938 ("pds_core: set up device and adminq")
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Signed-off-by: Brett Creeley <brett.creeley@amd.com>
Signed-off-by: Shannon Nelson <shannon.nelson@amd.com>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250421174606.3892-2-shannon.nelson@amd.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
====================
net/mlx5: HWS, Improve IP version handling
This small series hardens our checks against a single matcher containing
rules that match on IPv4 and IPv6. This scenario is not supported by
hardware steering and the implementation now signals this instead of
failing silently.
Patches:
* Patch 1 forbids a single definer to match on mixed IP versions for
source and destination address.
* Patch 2 reproduces a couple of firmware checks: it forbids creating
a definer that matches on IP address without matching on IP version,
and also disallows matching on IPv6 addresses and the IPv4 IHL fields
in the same definer.
* Patch 3 forbids mixing rules that match on IPv4 and IPv6 addresses in
the same matcher. The underlying definer mechanism does not support
that.
====================
Signal clearly to the user, via an error, that mixing IPv4 and IPv6
rules in the same matcher is not supported. Previously such cases
silently failed by adding a rule that did not work correctly.
Rules can specify an IP version by one of two fields: IP version or
ethertype. At matcher creation, store whether the template matches on
any of these two fields. If yes, inspect each rule for its corresponding
match value and store the IP version inside the matcher to guard against
inconsistencies with subsequent rules.
Furthermore, also check rules for internal consistency, i.e. verify that
the ethertype and IP version match values do not contradict each other.
The logic applies to inner and outer headers independently, to account
for tunneling.
Rules that do not match on IP addresses are not affected.
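Conceptually, the per-matcher bookkeeping works like this self-contained
sketch (names and types illustrative; not the mlx5 HWS API):

  enum ip_ver { IP_VER_NONE, IP_VER_4, IP_VER_6 };

  struct matcher_state {
          enum ip_ver ip_ver;     /* set by the first rule matching on IP */
  };

  static int check_rule_ip_ver(struct matcher_state *m, enum ip_ver rule_ver)
  {
          if (rule_ver == IP_VER_NONE)
                  return 0;               /* rule ignores IP addresses */
          if (m->ip_ver == IP_VER_NONE)
                  m->ip_ver = rule_ver;   /* remember for later rules */
          else if (m->ip_ver != rule_ver)
                  return -EINVAL;         /* mixed IPv4/IPv6: unsupported */
          return 0;
  }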
Signed-off-by: Vlad Dogaru <vdogaru@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Link: https://patch.msgid.link/20250422092540.182091-4-mbloch@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Replicate some sanity checks that firmware does, since hardware steering
does not go through firmware.
When creating a definer, disallow matching on IP addresses without also
matching on IP version. The latter can be satisfied by matching either
on the version field in the IP header, or on the ethertype field.
Also refuse to match IPv4 IHL alongside IPv6.
Signed-off-by: Vlad Dogaru <vdogaru@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Link: https://patch.msgid.link/20250422092540.182091-3-mbloch@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Unify the check for IP version when creating a definer. A given matcher
is deemed to match on IPv6 if any of the higher order (>31) bits of
source or destination address mask are set.
A single packet cannot mix IP versions between source and destination
addresses, so it makes no sense that they would be decided on
independently.
Signed-off-by: Vlad Dogaru <vdogaru@nvidia.com>
Reviewed-by: Yevgeny Kliteynik <kliteyn@nvidia.com>
Signed-off-by: Mark Bloch <mbloch@nvidia.com>
Link: https://patch.msgid.link/20250422092540.182091-2-mbloch@nvidia.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Daniel Golle [Tue, 22 Apr 2025 03:10:20 +0000 (04:10 +0100)]
net: dsa: mt7530: sync driver-specific behavior of MT7531 variants
MT7531 standalone and MMIO variants found in MT7988 and EN7581 share
most basic properties. Despite that, assisted_learning_on_cpu_port and
mtu_enforcement_ingress were only applied for MT7531 but not for MT7988
or EN7581, causing the expected issues on MMIO devices.
Apply both settings equally also for MT7988 and EN7581 by moving both
assignments from mt7531_setup() to mt7531_setup_common().
This fixes unwanted flooding of packets due to unknown unicast
during DA lookup, as well as issues with heterogeneous MTU settings.