openvswitch: Merge three per-CPU structures into one
exec_actions_level is a per-CPU integer allocated at compile time.
action_fifos and flow_keys are per-CPU pointers and have their data
allocated at module init time.
There is no gain in keeping them separate: once the module is loaded,
the structures are allocated anyway.
Merge the three per-CPU variables into ovs_pcpu_storage, adapt callers.
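For reference, a minimal sketch of the merged per-CPU layout, using the names
from this commit (the actual struct in net/openvswitch/ may carry additional
fields):

  struct ovs_pcpu_storage {
          struct action_fifo action_fifos;
          struct action_flow_keys flow_keys;
          int exec_level;         /* the former exec_actions_level */
  };

  static DEFINE_PER_CPU(struct ovs_pcpu_storage, ovs_pcpu_storage);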
Cc: Aaron Conole <aconole@redhat.com>
Cc: Eelco Chaudron <echaudro@redhat.com>
Cc: Ilya Maximets <i.maximets@ovn.org>
Cc: dev@openvswitch.org
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Reviewed-by: Aaron Conole <aconole@redhat.com>
Link: https://patch.msgid.link/20250512092736.229935-8-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
xfrm: Use nested-BH locking for nat_keepalive_sk_ipv[46]
nat_keepalive_sk_ipv[46] is a per-CPU variable and relies on disabled BH
for its locking. Without per-CPU locking in local_bh_disable() on
PREEMPT_RT this data structure requires explicit locking.
Use sock_bh_locked which has a sock pointer and a local_lock_t. Use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.
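A minimal sketch of the pattern, assuming the existing struct sock_bh_locked
layout (a sock pointer plus a local_lock_t); the surrounding function is
illustrative:

  static DEFINE_PER_CPU(struct sock_bh_locked, nat_keepalive_sk_ipv4) = {
          .bh_lock = INIT_LOCAL_LOCK(bh_lock),
  };

  static void nat_keepalive_send_ipv4(void)       /* illustrative caller */
  {
          struct sock *sk;

          local_lock_nested_bh(&nat_keepalive_sk_ipv4.bh_lock);
          sk = this_cpu_read(nat_keepalive_sk_ipv4.sock);
          /* ... build and transmit the NAT keepalive packet via sk ... */
          local_unlock_nested_bh(&nat_keepalive_sk_ipv4.bh_lock);
  }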
Cc: Steffen Klassert <steffen.klassert@secunet.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-7-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
system_page_pool is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Make a struct with a page_pool member (original system_page_pool) and a
local_lock_t and use local_lock_nested_bh() for locking. This change
adds only lockdep coverage and does not alter the functional behaviour
for !PREEMPT_RT.
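A sketch of the wrapper; the struct and field names are chosen for
illustration:

  struct page_pool_bh {
          struct page_pool *pool;         /* the original system_page_pool */
          local_lock_t bh_lock;
  };

  static DEFINE_PER_CPU(struct page_pool_bh, system_page_pool) = {
          .bh_lock = INIT_LOCAL_LOCK(bh_lock),
  };

  /* users then wrap accesses in local_lock_nested_bh(&system_page_pool.bh_lock) */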
Cc: Andrew Lunn <andrew+netdev@lunn.ch>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Jesper Dangaard Brouer <hawk@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://patch.msgid.link/20250512092736.229935-6-bigeasy@linutronix.de
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
hmac_storage is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Add a local_lock_t to the data structure and use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.
ipv4/route: Use this_cpu_inc() for stats on PREEMPT_RT
The statistics are incremented with raw_cpu_inc() assuming it always
happens with bottom half disabled. Without per-CPU locking in
local_bh_disable() on PREEMPT_RT this is no longer true.
Use this_cpu_inc() on PREEMPT_RT for the increment to not worry about
preemption.
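As a sketch (the macro and per-CPU variable names are the ones used in
net/ipv4/route.c, to the best of my knowledge):

  #ifdef CONFIG_PREEMPT_RT
  #define RT_CACHE_STAT_INC(field) this_cpu_inc(rt_cache_stat.field)
  #else
  #define RT_CACHE_STAT_INC(field) raw_cpu_inc(rt_cache_stat.field)
  #endif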
net: dst_cache: Use nested-BH locking for dst_cache::cache
dst_cache::cache is a per-CPU variable and relies on disabled BH for its
locking. Without per-CPU locking in local_bh_disable() on PREEMPT_RT
this data structure requires explicit locking.
Add a local_lock_t to the data structure and use
local_lock_nested_bh() for locking. This change adds only lockdep
coverage and does not alter the functional behaviour for !PREEMPT_RT.
Octeontx2 VF, PF and AF devices communicate using a hardware
shared mailbox region, where VFs can only talk to their PF
and PFs can only talk to the AF. The AF does the entire resource
management for all PFs and VFs. The shared mbox region is used for
synchronous requests (requests from PF to AF or VF to PF) and async
notifications (notifications from AF to PFs or PF to VFs). Sending a
request from a VF to the AF involves various stages:
1. VF allocates message in shared region
2. Triggers interrupt to PF
3. PF upon receiving interrupt from VF will copy the message
from VF<->PF region to PF<->AF region
4. Triggers interrupt to AF
5. AF processes it and writes response in PF<->AF region
6. Triggers interrupt to PF
7. PF copies responses from PF<->AF region to VF<->PF region
8. Triggers interrupt to VF
9. VF reads response in VF<->PF region
Due to the various stages involved, tracepoints are used in the mailbox
code for debugging. The existing tracepoints need some improvements so
that as much information as possible can be inferred from trace logs
when debugging an issue.
This patchset tries to enhance existing tracepoints and also adds
a couple of tracepoints.
====================
Apart from netdev interface Octeontx2 PF does the following:
1. Sends its own requests to AF and receives responses from AF.
2. Receives async messages from AF.
3. Forwards VF requests to AF, sends respective responses from AF to VFs.
4. Sends async messages to VFs.
This patch adds a new tracepoint, otx2_msg_status, to display the status
of the PF with respect to mailbox handling.
octeontx2: Add pcifunc also to mailbox tracepoints
This patch adds pcifunc, which represents the PF or VF device, to the
tracepoints otx2_msg_alloc, otx2_msg_send and otx2_msg_process so that
it is easier to correlate which device allocated the message, which
device forwarded it and which device processed that message.
Also add the message ID to the otx2_msg_send tracepoint to check which
message is sent at any point in time from a device.
Shay Drory [Tue, 13 May 2025 08:19:22 +0000 (11:19 +0300)]
net: Look for bonding slaves in the bond's network namespace
Update the for_each_netdev_in_bond_rcu macro to iterate through network
devices in the bond's network namespace instead of always using
init_net. This change is safe because:
1. **Bond-Slave Namespace Relationship**: A bond device and its slaves
must reside in the same network namespace. The bond device's
namespace is established at creation time and cannot change.
2. **Slave Movement Implications**: Any attempt to move a slave device
to a different namespace automatically removes it from the bond, as
per kernel networking stack rules.
This maintains the invariant that slaves must exist in the same
namespace as their bond.
This change is part of an effort to enable Link Aggregation (LAG) to
work properly inside custom network namespaces. Previously, the macro
would only find slave devices in the initial network namespace,
preventing proper bonding functionality in custom namespaces.
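A sketch of the macro change described above (the in-tree definition lives in
include/linux/netdevice.h and may differ in detail):

  /* previously the first argument was &init_net */
  #define for_each_netdev_in_bond_rcu(bond, slave)                        \
          for_each_netdev_rcu(dev_net(bond), slave)                       \
                  if (netdev_master_upper_dev_get_rcu(slave) == (bond))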
====================
eth: fbnic: Add devlink dev flash support
fbnic supports updating firmware using signed PLDM images. PLDM images are
written into the flash. Flashing does not interrupt the operation of the
device.
Lee Trager [Mon, 12 May 2025 18:54:01 +0000 (11:54 -0700)]
eth: fbnic: Add devlink dev flash support
Add support to update the CMRT and control firmware as well as the UEFI
driver on fbnic using devlink dev flash.
Make sure the shutdown / quiescence paths like suspend take the devlink
lock to prevent them from interrupting the FW flashing process.
Signed-off-by: Lee Trager <lee@trager.us>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Reviewed-by: Simon Horman <horms@kernel.org>
Link: https://patch.msgid.link/20250512190109.2475614-6-lee@trager.us
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Lee Trager [Mon, 12 May 2025 18:54:00 +0000 (11:54 -0700)]
eth: fbnic: Add mailbox support for PLDM updates
Add three new mailbox messages to support PLDM upgrades:
* FW_START_UPGRADE - Enables driver to request starting a firmware upgrade
by specifying the component to be upgraded and its
size.
* WRITE_CHUNK - Allows firmware to request driver to send a chunk of
data at the specified offset.
* FINISH_UPGRADE - Allows firmware to cancel the upgrade process and
return an error.
Lee Trager [Mon, 12 May 2025 18:53:59 +0000 (11:53 -0700)]
eth: fbnic: Add support for multiple concurrent completion messages
Extend fbnic mailbox to support multiple concurrent completion messages at
once. This enables fbnic to support running multiple operations at once
which depend on a response from firmware via the mailbox.
Signed-off-by: Lee Trager <lee@trager.us>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Reviewed-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250512190109.2475614-4-lee@trager.us
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Lee Trager [Mon, 12 May 2025 18:53:58 +0000 (11:53 -0700)]
eth: fbnic: Accept minimum anti-rollback version from firmware
fbnic supports applying firmware which may not be rolled back. This is
implemented in firmware; however, it is useful for the driver to know the
minimum supported firmware version. This will enable the driver to validate
new firmware before it is sent to the NIC. If it is too old, the driver can
provide a clear message that the version is too old.
Lee Trager [Mon, 12 May 2025 18:53:57 +0000 (11:53 -0700)]
pldmfw: Don't require send_package_data or send_component_table to be defined
Not all drivers require send_package_data or send_component_table when
updating firmware. Instead of forcing drivers to implement a stub allow
these functions to go undefined.
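A minimal sketch of the idea, assuming the documented pldmfw_ops layout; the
helper name and surrounding plumbing are illustrative:

  static int pldm_maybe_send_package_data(struct pldmfw *context,
                                          const u8 *data, u16 length)
  {
          const struct pldmfw_ops *ops = context->ops;

          /* the callback is optional now: silently skip when not provided */
          if (!ops->send_package_data)
                  return 0;

          return ops->send_package_data(context, data, length);
  }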
Signed-off-by: Lee Trager <lee@trager.us>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Simon Horman <horms@kernel.org>
Acked-by: Jacob Keller <jacob.e.keller@intel.com>
Link: https://patch.msgid.link/20250512190109.2475614-2-lee@trager.us
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Dimitri Fedrau [Mon, 12 May 2025 12:03:42 +0000 (14:03 +0200)]
net: phy: marvell-88q2xxx: Enable temperature measurement in probe again
Enabling of the temperature sensor was moved from mv88q2xxx_hwmon_probe to
mv88q222x_config_init with the consequence that the sensor is only
usable when the PHY is configured. Enable the sensor in
mv88q2xxx_hwmon_probe as well to fix this.
Vladimir Oltean [Mon, 12 May 2025 11:44:22 +0000 (14:44 +0300)]
net: cpsw: isolate cpsw_ndo_ioctl() to just the old driver
cpsw->slaves[slave_no].phy should be equal to netdev->phydev, because it
is assigned from phy_attach_direct(). The latter is indirectly called
from the two identically named cpsw_slave_open() functions, one in
cpsw.c and another in cpsw_new.c.
Thus, the driver should not need custom logic to find the PHY, the core
can find it, and phy_do_ioctl_running() achieves exactly that.
However, that is only the case for cpsw_new and for the cpsw driver in
dual EMAC mode. This is explained in more detail in the previous commit.
Thus, allow the simpler core logic to execute for cpsw_new, and move
cpsw_ndo_ioctl() to cpsw.c.
Vladimir Oltean [Mon, 12 May 2025 11:44:21 +0000 (14:44 +0300)]
net: cpsw: convert to ndo_hwtstamp_get() and ndo_hwtstamp_set()
New timestamping API was introduced in commit 66f7223039c0 ("net: add
NDOs for configuring hardware timestamping") from kernel v6.6. It is
time to convert the two cpsw drivers to the new API, so that the
ndo_eth_ioctl() path can be removed completely.
The cpsw_hwtstamp_get() and cpsw_hwtstamp_set() methods (and their shim
definitions, for the case where CONFIG_TI_CPTS is not enabled) must have
their prototypes adjusted.
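For reference, the adjusted prototypes follow the shape of the generic
ndo_hwtstamp_get()/ndo_hwtstamp_set() callbacks (a sketch; the exact
declarations live in the driver headers):

  int cpsw_hwtstamp_get(struct net_device *dev,
                        struct kernel_hwtstamp_config *cfg);
  int cpsw_hwtstamp_set(struct net_device *dev,
                        struct kernel_hwtstamp_config *cfg,
                        struct netlink_ext_ack *extack);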
These methods are used by two drivers (cpsw and cpsw_new), with vastly
different configurations:
- cpsw has two operating modes:
- "dual EMAC" - enabled through the "dual_emac" device tree property -
creates one net_device per EMAC / slave interface (but there is no
bridging offload)
- "switch mode" - default - there is a single net_device, with two
EMACs/slaves behind it (and switching between them happens
unbeknownst to the network stack).
- cpsw_new always registers one net_device for each EMAC which doesn't
have status = "disabled". In terms of switching, it has two modes:
- "dual EMAC": default, no switching between ports, no switchdev
offload.
- "switch mode": enabled through the "switch_mode" devlink parameter,
offloads the Linux bridge through switchdev
Essentially, in 3 out of 4 operating modes, there is a bijective
relation between the net_device and the slave. Timestamping can thus be
configured on individual slaves. But in the "switch mode" of the cpsw
driver, ndo_eth_ioctl() targets a single slave, designated using the
"active_slave" device tree property.
To deal with these different cases, the common portion of the drivers,
cpsw_priv.c, has the cpsw_slave_index() function pointer, set to
separate, identically named cpsw_slave_index_priv() by the 2 drivers.
This is all relevant because cpsw_ndo_ioctl() has the old-style
phy_has_hwtstamp() logic which lets the PHY handle the timestamping
ioctls. Normally, that logic should be obsoleted by the more complex
logic in the core, which permits dynamically selecting the timestamp
provider - see dev_set_hwtstamp_phylib().
But I have doubts as to how this works for the "switch mode" of the dual
EMAC driver, because the core logic only engages if the PHY is visible
through ndev->phydev (this is set by phy_attach_direct()).
In cpsw.c, we have:
cpsw_ndo_open()
-> for_each_slave(priv, cpsw_slave_open, priv); // continues on errors
-> of_phy_connect()
-> phy_connect_direct()
-> phy_attach_direct()
OR
-> phy_connect()
-> phy_connect_direct()
-> phy_attach_direct()
The problem for "switch mode" is that the behavior of phy_attach_direct()
called twice in a row for the same net_device (once for each slave) is
probably undefined.
For sure it will overwrite dev->phydev. I don't see any explicit error
checks for this case, and even if there were, the for_each_slave() call
makes them non-fatal to cpsw_ndo_open() anyway.
I have no idea to what extent this provides a usable
result, but the point is: only the last attached PHY will be visible
in dev->phydev, and this may well be a different PHY than
cpsw->slaves[slave_no].phy for the "active_slave".
In dual EMAC mode, as well as in cpsw_new, this should not be a problem.
I don't know whether PHY timestamping is a use case for the cpsw "switch
mode" as well, and I hope that there isn't, because for the sake of
simplicity, I've decided to deliberately break that functionality, by
refusing all PHY timestamping. Keeping it would mean blocking the old
API from ever being removed. In the new dev_set_hwtstamp_phylib() API,
it is not possible to operate on a phylib PHY other than dev->phydev,
and I would very much prefer not adding that much complexity for bizarre
driver decisions.
Final point about the cpsw_hwtstamp_get() conversion: we don't need to
propagate the unnecessary "config.flags = 0;", because dev_get_hwtstamp()
provides a zero-initialized struct kernel_hwtstamp_config.
This series modifies three outstanding drivers, out of more than 100,
because their software timestamp generation is too early. The idea of this
series is derived from the brief talk [1] with Willem. In conclusion, this
series makes the software timestamp generation happen near/before kicking
the doorbell in these drivers.
Jason Xing [Sat, 10 May 2025 13:48:12 +0000 (21:48 +0800)]
net: stmmac: generate software timestamp just before the doorbell
Make sure the call to skb_tx_timestamp() is as close as possible to the
doorbell.
The patch also adjusts the order of setting SKBTX_IN_PROGRESS and
generating the software timestamp so that, without
SOF_TIMESTAMPING_OPT_TX_SWHW set, the software and hardware timestamps
do not appear in the socket's error queue at nearly the same time
(please see __skb_tstamp_tx()).
Eric Biggers [Tue, 13 May 2025 05:01:42 +0000 (22:01 -0700)]
net: apple: bmac: use crc32() instead of hand-rolled equivalent
The calculation done by bmac_crc(addr) followed by taking the low 6 bits
and reversing them is equivalent to taking the high 6 bits from
crc32(~0, addr, ETH_ALEN). Just do that instead.
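A sketch of the resulting computation (the helper name is made up for the
example):

  #include <linux/crc32.h>
  #include <linux/etherdevice.h>

  static unsigned int bmac_hash_index(const unsigned char *addr)
  {
          /* high 6 bits of the Ethernet CRC over the 6-byte address */
          return crc32(~0, addr, ETH_ALEN) >> 26;
  }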
Eelco Chaudron [Mon, 12 May 2025 08:08:24 +0000 (10:08 +0200)]
openvswitch: Stricter validation for the userspace action
This change enhances the robustness of validate_userspace() by ensuring
that all Netlink attributes are fully contained within the parent
attribute. The previous use of nla_parse_nested_deprecated() could
silently skip trailing or malformed attributes, as it stops parsing at
the first invalid entry.
By switching to nla_parse_deprecated_strict(), we make sure only fully
validated attributes are copied for later use.
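A sketch of the change, with the attribute table and policy names as used in
net/openvswitch/flow_netlink.c (to the best of my knowledge):

  /* before: stops at the first invalid attribute, silently ignoring the rest */
  error = nla_parse_nested_deprecated(a, OVS_USERSPACE_ATTR_MAX, attr,
                                      userspace_policy, NULL);

  /* after: every attribute inside the parent must be valid */
  error = nla_parse_deprecated_strict(a, OVS_USERSPACE_ATTR_MAX,
                                      nla_data(attr), nla_len(attr),
                                      userspace_policy, NULL);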
Wei Fang [Mon, 12 May 2025 06:17:01 +0000 (14:17 +0800)]
net: enetc: fix implicit declaration of function FIELD_PREP
The kernel test robot reported the following error:
drivers/net/ethernet/freescale/enetc/ntmp.c: In function 'ntmp_fill_request_hdr':
drivers/net/ethernet/freescale/enetc/ntmp.c:203:38: error: implicit
declaration of function 'FIELD_PREP' [-Wimplicit-function-declaration]
203 | cbd->req_hdr.access_method = FIELD_PREP(NTMP_ACCESS_METHOD,
| ^~~~~~~~~~
Therefore, add "bitfield.h" to ntmp_private.h to fix this issue.
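The fix amounts to including the header that provides FIELD_PREP(), i.e. in
ntmp_private.h:

  #include <linux/bitfield.h>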
Fixes: 4701073c3deb ("net: enetc: add initial netc-lib driver to support NTMP")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202505101047.NTMcerZE-lkp@intel.com/
Signed-off-by: Wei Fang <wei.fang@nxp.com>
Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Vladimir Oltean [Mon, 12 May 2025 15:44:11 +0000 (18:44 +0300)]
net: mlxsw: convert to ndo_hwtstamp_get() and ndo_hwtstamp_set()
New timestamping API was introduced in commit 66f7223039c0 ("net: add
NDOs for configuring hardware timestamping") from kernel v6.6. It is
time to convert the mlxsw driver to the new API, so that the
ndo_eth_ioctl() path can be removed completely.
The UAPI is still ioctl-only, but it's best to remove the "ioctl"
mentions from the driver in case a netlink variant appears.
Vladimir Oltean [Mon, 12 May 2025 11:24:02 +0000 (14:24 +0300)]
net: enetc: convert to ndo_hwtstamp_get() and ndo_hwtstamp_set()
New timestamping API was introduced in commit 66f7223039c0 ("net: add
NDOs for configuring hardware timestamping") from kernel v6.6. It is
time to convert the ENETC driver to the new API, so that the
ndo_eth_ioctl() path can be removed completely.
Move the enetc_hwtstamp_get() and enetc_hwtstamp_set() calls away from
enetc_ioctl() to dedicated net_device_ops for the LS1028A PF and VF
(NETC v4 does not yet implement enetc_ioctl()), adapt the prototypes and
export these symbols (enetc_ioctl() is also exported).
Jiawen Wu [Mon, 12 May 2025 10:06:52 +0000 (18:06 +0800)]
net: txgbe: Fix pending interrupt
For unknown reasons, the value of the MISC interrupt is sometimes 0 in the
IRQ handler function. In this case, wx_intr_enable() should also be
invoked to clear the interrupt. Otherwise, the next interrupt would
never be reported.
====================
net/mlx5: HWS, Complex Matchers and rehash mechanism fixes
Motivation:
----------
A matcher can match a certain set of match parameters. However,
the number and size of match params for a single matcher are
limited — all the parameters must fit within a single definer.
A common example of this limitation is IPv6 address matching, where
matching both source and destination IPs requires more bits than a
single definer can support.
SW Steering addresses this limitation by chaining multiple Steering
Table Entries (STEs) within the same matcher, where each STE matches
on a subset of the parameters.
In HW Steering, such chaining is not possible — the matcher's STEs
are managed in a hash table, and a single definer is used to calculate
the hash index for STEs.
Overview:
--------
To address this limitation in HW Steering, we introduce
*Complex Matchers*, which consist of two chained matchers. This allows
matching on twice as many parameters. Complex Matchers are filled with
*Complex Rules* — rules that are split into two parts and inserted into
their respective matchers.
The first half of the Complex Matcher is a regular matcher and points
to the second half, which is an *Isolated Matcher*. An Isolated Matcher
has its own isolated table and is accessible only by traffic coming
from the first half of the Complex Matcher.
This splitting of matchers/rules into multiple parts is transparent to
users. It is hidden behind the BWC HWS API. It becomes visible only
when dumping steering debug information, where the Complex Matcher
appears as two separate matchers: one in the user-created table and
another in its isolated table.
Implementation Details:
----------------------
All user actions are performed on the second part of the rules only.
The first part handles matching and applies two actions: modify header
(set metadata, see details below) and go-to-table (directing traffic
to the isolated table containing the isolated matcher).
Rule updates (updating rule actions) are applied to the second part
of the rule since user-provided actions are not executed in the first
matcher.
We use REG_C_6 metadata register to set and match on unique per-rule
tag (see details below).
Splitting rules into two parts introduces new challenges:
1. Invalid Combinations
Consider two rules with different matching values:
- Rule 1: A+B
- Rule 2: C+D
Splitting these rules results in invalid combinations: A+D and C+B:
any packet that matched on A will be forwarded to the 2nd matcher,
where it will try to match on B (which is legal, and it is what the
user asked for), but it will also try to match on D (which is not
what the user asked for). To resolve this, we assign unique tags
to each rule on the first matcher and match on these tags on the
second matcher:
|----------| |---------|
| A | | B, TagA |
| action: | | |
| set TagA | | |
|----------| --> |---------|
| C | | D, TagB |
| action: | | |
| set TagB | | |
|----------| |---------|
2. Duplicated Entries:
Consider two rules with overlapping values:
- Rule 1: A+B
- Rule 2: A+D
Let's split the rules into two parts as follows:
|---| |---|
| A | | B |
|---| --> |---|
| | | D |
|---| |---|
This leads to duplicated entries on the first matcher, which HWS
doesn't allow: a subsequent delete of either of the rules will delete
the only entry in the first matcher, leaving the remaining rule
broken. To address this, we use a reference count for entries in the
first matcher and delete STEs only when their refcount reaches zero.
Both challenges are resolved by having a per-matcher data structure
(implemented with rhashtable) that manages refcounts for the first part
of the rules and holds unique tags (managed via IDA) for these rules to
set and to match on the second matcher.
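An illustrative sketch of that per-matcher bookkeeping entry (all names here
are made up for the example):

  struct complex_first_part_entry {
          struct rhash_head hash_node;    /* keyed by the first-part match value */
          u32 tag;                        /* per-rule tag from an IDA, set/matched via REG_C_6 */
          refcount_t refcount;            /* number of rules sharing this first-part STE */
  };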
Limitations:
-----------
We utilize metadata register REG_C_6 in this implementation, so its
usage anywhere along the flow that might include the need for Complex
Matcher is prohibited.
The number and size of match parameters remain limited — now
constrained by what can be represented by two definers instead of one.
This architectural limitation arises from the structure of Complex
Matchers. If future requirements demand more parameters, Complex
Matchers can be extended beyond two matchers.
Additionally, there is an implementation limit of 32 match parameters
per matcher (disregarding parameter size). This limit can be lifted
if needed.
Patches:
-------
- Patches 1-3: small additions/refactoring in preparation for
Complex Matcher: exposed mlx5hws_table_ft_set_next_ft() in header,
added definer function to convert field name enum to string,
expose the polling function mlx5hws_bwc_queue_poll() in a header.
- Patch 4: in preparation for Complex Matcher, this patch adds
support for Isolated Matcher.
- Patch 5: the main patch - Complex Matchers implementation.
[2]
Patch 6: fixing the usecase where rule insertion was failing,
but rehash couldn't be initiated if the number of rules in
the table is below the rehash threshold.
Patch 7: fixing the usecase where many rules in parallel
would require rehash, due to the way the counting of rules
was done.
Patch 8: fixing the case where rules were requiring action
template extension in parallel, leading to unneeded extensions
with the same templates.
Patch 9: refactor and simplify the rehash loop.
Patch 10: dump error completion details, which helps a lot
in trying to understand what went wrong, especially during
rehash.
====================
Failing to insert/delete a rule should not happen. If it does happen,
it would be good to know at which stage it happened and what was the
failure. This patch adds printing of bad CQE details.
Reworking the rehash loop - simplifying the code and making it less
error prone:
- Instead of doing round-robin on all the queues with batch of rules in
each cycle, just go over all the queues and move all the rules that
belong to this queue.
- If at some stage of moving the rule we get a failure (which should
not happen), this can't be rolled back. So instead of aborting
rehash and leaving the matcher in a broken state, allow the loop
to continue: attempt to move the rest of the rules and delete the
old matcher. A rule that failed to move to a new matcher will lose
its match STE once the rehash is completed and the old matcher is
deleted, so the rule won't match any traffic any more. This rule's
packets will fall back to the steering pipeline w/o HW offload.
Rehash procedure will return an error, which will cause the rule
insertion to fail for the rule that started this whole rehash.
net/mlx5: HWS, fix redundant extension of action templates
When a rule is inserted into a matcher, we search for the suitable
action template. If such template is not found, action template array
is extended with the new template. However, when several threads are
performing this in parallel, there is a race - we can end up with
extending the action templates array with the same template.
This patch is doing the following:
- refactor the code to find action template index in rule create and
update, have the common code in an auxiliary function
- after locking all the queues, check again if the action template
array still needs to be extended
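A sketch of the re-check under the queue locks (function names are
illustrative):

  hws_bwc_lock_all_queues(matcher);

  /* another thread may have extended the array while we waited for the locks */
  at_idx = hws_bwc_find_at(matcher, rule_actions);
  if (at_idx < 0)
          at_idx = hws_bwc_extend_at(matcher, rule_actions);

  hws_bwc_unlock_all_queues(matcher);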
net/mlx5: HWS, fix counting of rules in the matcher
Currently the counter that counts the number of rules in a matcher is
increased only when rule insertion is completed. In a multi-threaded
usecase this can lead to a scenario where many rules are in the process
of insertion in the same matcher, while none of them has completed
the insertion and the rule counter is not updated. This results in
a rule insertion failure for many of them at the first attempt, which
leads to all of them requiring rehash and locking of all
the queue locks.
This patch fixes the case by increasing the rule counter at the
beginning of the insertion process and decreasing it in case of any failure.
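A sketch of the reordering (names illustrative):

  atomic_inc(&bwc_matcher->num_of_rules);         /* count the rule up front */
  ret = hws_bwc_rule_create(bwc_matcher, rule);
  if (ret)
          atomic_dec(&bwc_matcher->num_of_rules); /* roll back on failure */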
net/mlx5: HWS, force rehash when rule insertion failed
Rules are inserted into hash table in accordance with their hash index.
When a certain number of rules is reached, the table is rehashed:
a bigger new table is allocated and all the rules are moved there.
But sometimes a new rule can't be inserted into the hash table
because its index is full, even though the number of rules in the
table is well below the threshold. The hash function is not perfect,
so such cases are not rare. When that happens, we want to do the same
rehash, in order to increase the table size and lower the probability
for such cases.
This patch fixes the usecase where rule insertion was failing, but
rehash couldn't be initiated due to the low number of rules: it adds a
flag that denotes that a rehash is required, even if the number of rules
in the table is below the rehash threshold.
This patch adds support for Complex Matchers/Rules
Overview:
--------
A matcher can match on a certain set of match parameters. However, the
number and size of match params for a single matcher are limited: all
the parameters must fit within a single definer.
A common example of this limitation is IPv6 address matching, where
matching both source and destination IPs requires more bits than a
single definer can support.
SW Steering addresses this limitation by chaining multiple Steering
Table Entries (STEs) within the same matcher, where each STE matches
on a subset of the parameters.
In HW Steering, such chaining is not possible — the matcher's STEs
are managed in a hash table, and a single definer is used to calculate
the hash index for STEs.
To address this limitation in HW Steering, we introduce Complex
Matchers, which consist of two chained matchers. This allows matching
on twice as many parameters. Complex Matchers are filled with Complex
Rules — rules that are split into two parts and inserted into their
respective matchers.
The first half of the Complex Matcher is a regular matcher and points
to the second half, which is an Isolated Matcher. An Isolated Matcher
has its own isolated table and is accessible only by traffic coming
from the first half of the Complex Matcher.
This splitting of matchers/rules into multiple parts is transparent to
users. It is hidden under the BWC HWS API. It becomes visible only when
dumping steering debug information, where the Complex Matcher appears
as two separate matchers: one in the user-created table and another
in its isolated table.
Some implementation details:
---------------------------
All user actions are performed on the second part of the rules only.
The first part handles matching and applies two actions: modify header
(set metadata, see details below) and go-to-table (directing traffic to
the isolated table containing the isolated matcher).
Rule updates (updating rule actions) are applied to the second part of
the rule since user-provided actions are not executed in the first
matcher.
We use REG_C_6 metadata register to set and match on unique per-rule
tag (see details below).
Splitting rules into two parts introduces new challenges:
1. Invalid Combinations
Consider two rules with different matching values:
- Rule 1: A+B
- Rule 2: C+D
Let's split the rules into two parts as follows:
|---| |---|
| A | | B |
|---| --> |---|
| C | | D |
|---| |---|
Splitting these rules results in invalid combinations like A+D
and C+B.
To resolve this, we assign unique tags to each rule on the first
matcher and match these tags on the second matcher (the tag is
implemented through modify_hdr action that sets value to metadata
register REG_C_6):
|----------| |---------|
| A | | B, TagA |
| action: | | |
| set TagA | | |
|----------| --> |---------|
| C | | D, TagB |
| action: | | |
| set TagB | | |
|----------| |---------|
2. Duplicated Entries:
Consider two rules with overlapping values:
- Rule 1: A+B
- Rule 2: A+D
Let's split the rules into two parts as follows:
|---| |---|
| A | | B |
|---| --> |---|
| | | D |
|---| |---|
This leads to duplicated entries on the first matcher, which HWS
doesn't allow: a subsequent delete of either of the rules will delete
the only entry in the first matcher, leaving the remaining rule
broken.
To address this, we use a reference count for entries in the first
matcher and delete STEs only when their refcount reaches zero.
Both challenges are resolved by having a per-matcher data structure
(implemented with rhashtable) that manages refcounts for the first part
of the rules and holds unique tags (managed via IDA) for these rules to
set and to match on the second matcher.
Limitations:
-----------
We utilize metadata register REG_C_6 in this implementation, so its
usage anywhere along the steering of the flow that might include the
need for Complex Matcher is prohibited.
The number and size of match parameters remain limited — now it is
constrained by what can be represented by two definers instead of one.
This architectural limitation arises from the structure of Complex
Matchers. If future requirements demand more parameters,
Complex Matchers can be extended beyond two matchers.
Additionally, there is an implementation limit of 32 match parameters
per rule (disregarding parameter size). This limit can be lifted if
needed.
In preparation for complex matcher support, introduce the isolated
matcher.
An isolated matcher is a matcher that has its own isolated table.
It is used as the second half of a complex matcher: when a rule
is split into two parts (a complex rule), matching on the first
part sends the packet to the isolated matcher, which tries to
match on the second part. In case of a miss, the packet goes back to
the matcher's end flow table.
net/mlx5: HWS, expose polling function in header file
In preparation for the complex matcher, expose the function that
polls a queue for completion (mlx5hws_bwc_queue_poll) in a header
file, so that it can be used by the complex matcher code.
====================
amd-xgbe: add support for AMD Renoir
Add support for a new AMD Ethernet device called "Renoir". It has a new
PCI ID; add it to the current list of supported devices in the
amd-xgbe driver. Also, the BAR1 addresses cannot be used to access the
PCS registers on the Renoir platform, so use indirect addressing via SMN
instead.
====================
Raju Rangoju [Fri, 9 May 2025 15:53:23 +0000 (21:23 +0530)]
amd-xgbe: add support for new XPCS routines
Add the necessary support to enable the Renoir ethernet device. Since the
BAR1 address cannot be used to access the XPCS registers on Renoir, use
the SMN functions.
Some of the ethernet add-in cards have dual PHYs but share a single MDIO
line (between the ports). In such cases, link inconsistencies are
noticed during heavy traffic and during reboot stress tests. Using
SMN calls helps avoid such race conditions.
Raju Rangoju [Fri, 9 May 2025 15:53:21 +0000 (21:23 +0530)]
amd-xgbe: reorganize the code of XPCS access
The xgbe_{read/write}_mmd_regs_v* functions have common code which can
be moved to helper functions. Add new helper functions to calculate the
mmd_address for v1/v2 of xpcs access.
====================
tools: ynl-gen: support sub-types for binary attributes
Binary attributes have sub-type annotations which either indicate
that the binary object should be interpreted as a raw / C array of
a simple type (e.g. u32), or that it's a struct.
Use this information in the C codegen instead of outputting void *
for all binary attrs. It doesn't make a huge difference in the genl
families, but in classic Netlink there are a lot more structs.
Jakub Kicinski [Fri, 9 May 2025 15:42:13 +0000 (08:42 -0700)]
tools: ynl-gen: support struct for binary attributes
Support using a struct pointer for binary attrs. The len field is
maintained because the structs may grow with newer kernel versions, or,
which matters more, be shorter if the binary is built against a newer
uAPI than the kernel against which it's executed. Since we are storing a
pointer to a struct type, always allocate at least the amount of memory
needed by the struct per the current uAPI headers (unused memory is
zeroed). Technically users should check the length field, but per modern
ASAN checks, storing a short object under a pointer seems like a bad idea.
Jakub Kicinski [Fri, 9 May 2025 15:42:12 +0000 (08:42 -0700)]
tools: ynl-gen: auto-indent else
We auto-indent if statements (increase the indent of the subsequent
line by 1); do the same thing for else branches without a block.
There haven't been any else branches before, but we're about to add one.
Jakub Kicinski [Fri, 9 May 2025 15:42:11 +0000 (08:42 -0700)]
tools: ynl-gen: support sub-type for binary attributes
Sub-type annotation on binary attributes may indicate that the attribute
carries an array of simple types (also referred to as "C array" in docs).
Support rendering them as such in the C user code. For example for u32,
instead of:
struct {
u32 arr;
} _len;
void *arr;
render:
struct {
u32 arr;
} _count;
__u32 *arr;
Note that count is the number of elements while len was the length in bytes.
Paolo Abeni [Tue, 13 May 2025 09:13:05 +0000 (11:13 +0200)]
Merge branch 'device-memory-tcp-tx'
Mina Almasry says:
====================
Device memory TCP TX
The TX path had been dropped from the Device Memory TCP patch series
post RFCv1 [1], to make that series slightly easier to review. This
series rebases the implementation of the TX path on top of the
net_iov/netmem framework agreed upon and merged. The motivation for
the feature is thoroughly described in the docs & cover letter of the
original proposal, so I don't repeat the lengthy descriptions here, but
they are available in [1].
Full outline on usage of the TX path is detailed in the documentation
included with this series.
Test example is available via the kselftest included in the series as well.
The series is relatively small, as the TX path for this feature largely
piggybacks on the existing MSG_ZEROCOPY implementation.
Patch Overview:
---------------
1. Documentation & tests to give high level overview of the feature
being added.
2. Add netmem refcounting needed for the TX path.
3. Devmem TX netlink API.
4. Devmem TX net stack implementation.
5. Make dma-buf unbinding scheduled work to handle TX cases where it gets
freed from contexts where we can't sleep.
6. Add devmem TX documentation.
7. Add scaffolding enabling driver support for netmem_tx. Add helpers, driver
feature flag, and docs to enable drivers to declare netmem_tx support.
8. Guard netmem_tx against being enabled against drivers that don't
support it.
9. Add devmem_tx selftests. Add TX path to ncdevmem and add a test to
devmem.py.
Testing:
--------
Testing is very similar to the devmem TCP RX path. The ncdevmem test used
for the RX path is now augmented with client functionality to test the TX
path.
* Test Setup:
Kernel: net-next with this RFC and memory provider API cherry-picked
locally.
Performance results are not included with this version, unfortunately.
I'm having issues running the dma-buf exporter driver against the
upstream kernel on my test setup. The issues are specific to that
dma-buf exporter and do not affect this patch series. I plan to follow
up this series with perf fixes if the tests point to issues once they're
up and running.
Special thanks to Stan who took a stab at rebasing the TX implementation
on top of the netmem/net_iov framework merged. Parts of his proposal [2]
that are reused as-is are forked off into their own patches to give full
credit.
Mina Almasry [Thu, 8 May 2025 00:48:29 +0000 (00:48 +0000)]
selftests: ncdevmem: Implement devmem TCP TX
Add support for devmem TX in ncdevmem.
This is a combination of the ncdevmem from the devmem TCP series RFCv1
which included the TX path, and work by Stan to include the netlink API
and refactored on top of his generic memory_provider support.
Mina Almasry [Thu, 8 May 2025 00:48:24 +0000 (00:48 +0000)]
net: devmem: Implement TX path
Augment dmabuf binding to be able to handle TX. In addition to all the RX
binding state, we also create the tx_vec needed for the TX path.
Provide API for sendmsg to be able to send dmabufs bound to this device:
- Provide a new dmabuf_tx_cmsg which includes the dmabuf to send from.
- MSG_ZEROCOPY with SCM_DEVMEM_DMABUF cmsg indicates send from dma-buf.
Devmem is uncopyable, so piggyback off the existing MSG_ZEROCOPY
implementation, while disabling instances where MSG_ZEROCOPY falls back
to copying.
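A userspace sketch of the control-message usage; the cmsg struct and field
names follow the commit text and are assumptions to be checked against the
final uAPI headers:

  struct dmabuf_tx_cmsg ddmabuf = { .dmabuf_id = tx_dmabuf_id };
  char ctrl[CMSG_SPACE(sizeof(ddmabuf))] = {};
  struct msghdr msg = { .msg_control = ctrl, .msg_controllen = sizeof(ctrl) };
  struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);

  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_DEVMEM_DMABUF;
  cmsg->cmsg_len = CMSG_LEN(sizeof(ddmabuf));
  memcpy(CMSG_DATA(cmsg), &ddmabuf, sizeof(ddmabuf));

  /* msg.msg_iov entries reference data inside the bound dma-buf */
  sendmsg(sock_fd, &msg, MSG_ZEROCOPY);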
We additionally pipe the binding down to the new
zerocopy_fill_skb_from_devmem which fills a TX skb with net_iov netmems
instead of the traditional page netmems.
We also special case skb_frag_dma_map to return the dma-address of these
dmabuf net_iovs instead of attempting to map pages.
The TX path may release the dmabuf in a context where we cannot wait.
This happens when the user unbinds a TX dmabuf while there are still
references to its netmems in the TX path. In that case, the netmems will
be put_netmem'd from a context where we can't unmap the dmabuf. Resolve
this by making __net_devmem_dmabuf_binding_free schedule_work'd.
Based on work by Stanislav Fomichev <sdf@fomichev.me>. A lot of the meat
of the implementation came from devmem TCP RFC v1[1], which included the
TX path, but Stan did all the rebasing on top of netmem/net_iov.
Cc: Stanislav Fomichev <sdf@fomichev.me>
Signed-off-by: Kaiyuan Zhang <kaiyuanz@google.com>
Signed-off-by: Mina Almasry <almasrymina@google.com>
Acked-by: Stanislav Fomichev <sdf@fomichev.me>
Link: https://patch.msgid.link/20250508004830.4100853-5-almasrymina@google.com
Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Mina Almasry [Thu, 8 May 2025 00:48:22 +0000 (00:48 +0000)]
net: add get_netmem/put_netmem support
Currently net_iovs support only pp ref counts, and do not support a
page ref equivalent.
This is fine for the RX path as net_iovs are used exclusively with the
pp and only pp refcounting is needed there. The TX path, however, does not
use pp ref counts; thus, support for a get_page/put_page equivalent is
needed for netmem.
Support get_netmem/put_netmem. Check the type of the netmem before
passing it to page or net_iov specific code to obtain a page ref
equivalent.
For dmabuf net_iovs, we obtain a ref on the underlying binding. This
ensures the entire binding doesn't disappear until all the net_iovs have
been put_netmem'ed. We do not need to track the refcount of individual
dmabuf net_iovs as we don't allocate/free them from a pool similar to
what the buddy allocator does for pages.
This code is written to be extensible by other net_iov implementers.
get_netmem/put_netmem will check the type of the netmem and route it to
the correct helper:
pages -> [get|put]_page()
dmabuf net_iovs -> net_devmem_[get|put]_net_iov()
new net_iovs -> new helpers
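A minimal sketch of the dispatch for the put side (the in-tree helpers may
differ in detail):

  void put_netmem(netmem_ref netmem)
  {
          if (netmem_is_net_iov(netmem)) {
                  net_devmem_put_net_iov(netmem_to_net_iov(netmem));
                  return;
          }

          put_page(netmem_to_page(netmem));
  }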
Mina Almasry [Thu, 8 May 2025 00:48:21 +0000 (00:48 +0000)]
netmem: add niov->type attribute to distinguish different net_iov types
Later patches in the series add TX net_iovs where there is no pp
associated, so we can't rely on niov->pp->mp_ops to tell what the
type of the net_iov is.
Add a type enum to the net_iov which tells us the net_iov type.
Jakub Kicinski [Fri, 9 May 2025 21:27:51 +0000 (14:27 -0700)]
netlink: fix policy dump for int with validation callback
A recent devlink change added validation of an integer value
via NLA_POLICY_VALIDATE_FN, for sparse enums. Handle this
in the policy dump. We can't extract any info out of the callback,
so report only the type.
Fixes: 429ac6211494 ("devlink: define enum for attr types of dynamic attributes")
Reported-by: syzbot+01eb26848144516e7f0a@syzkaller.appspotmail.com
Link: https://patch.msgid.link/20250509212751.1905149-1-kuba@kernel.org
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
To align with review comments, the patch series introducing RDMA
RoCEv2 support for the Intel Infrastructure Processing Unit (IPU)
E2000 line of products is going to be submitted in three parts:
1. Modify ice to use specific and common IIDC definitions and
pass a core device info to irdma.
2. Add RDMA support to idpf and modify idpf to use specific and
common IIDC definitions and pass a core device info to irdma.
3. Add RDMA RoCEv2 support for the E2000 products, referred to as
GEN3 to irdma.
This first part is a 5 patch series based on the original
"iidc/ice/irdma: Update IDC to support multiple consumers" patch
to allow for multiple CORE PCI drivers, using the auxbus.
Patches:
1) Move header file to new name for clarity and replace ice
specific DSCP define with a kernel equivalent one in irdma
2) Unify naming convention
3) Separate header file into common and driver specific info
4) Replace ice specific DSCP define with a kernel equivalent
one in ice
5) Implement core device info struct and update drivers to use it
----------------------------------------------------------------
* 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/linux:
iidc/ice/irdma: Update IDC to support multiple consumers
ice: Replace ice specific DSCP mapping num with a kernel define
iidc/ice/irdma: Break iidc.h into two headers
iidc/ice/irdma: Rename to iidc_* convention
iidc/ice/irdma: Rename IDC header file
This series is the second part of two series for the Vertexcom driver.
It contains some improvements for the RX handling of the Vertexcom MSE102x.
====================
The function mse102x_rx_pkt_spi is used in two cases:
* initial polling to re-arm RX interrupt
* level based RX interrupt handler
Neither of them needs an open-coded retry mechanism.
In the first case the function can be called again if the return code
is IRQ_NONE. This keeps the error behavior during netdev open.
In the second case the proper retry is handled implicitly by
the SPI interrupt. So drop the retry code and simplify the receive path.
Stefan Wahren [Fri, 9 May 2025 12:04:34 +0000 (14:04 +0200)]
net: vertexcom: mse102x: Return code for mse102x_rx_pkt_spi
The MSE102x doesn't provide any interrupt register, so the only way
to handle the level interrupt is to fetch the whole packet from
the MSE102x internal buffer via SPI. So in case the interrupt
handler fails to do this, it should return IRQ_NONE. This allows
the core to disable the interrupt if the issue persists
and prevent an interrupt storm.
Stefan Wahren [Fri, 9 May 2025 12:04:33 +0000 (14:04 +0200)]
net: vertexcom: mse102x: Implement flag for valid CMD
After removal of the invalid command counter only a relevant debug
message is left, which can be cumbersome. So add a new flag to debugfs,
which indicates whether the driver has ever received a valid CMD.
This helps to differentiate between general and temporary receive
issues.
Stefan Wahren [Fri, 9 May 2025 12:04:32 +0000 (14:04 +0200)]
net: vertexcom: mse102x: Drop invalid cmd stats
There are several reasons for an invalid command response
by the MSE102x:
* SPI line interferences
* MSE102x is in reset or has no firmware
* MSE102x is busy
* no packet in MSE102x receive buffer
So the counter for invalid commands isn't very helpful without
further context. Drop the confusing statistics counter,
but keep the debug messages about "unexpected response" in order
to debug possible hardware issues.
Stefan Wahren [Fri, 9 May 2025 12:04:31 +0000 (14:04 +0200)]
net: vertexcom: mse102x: Add warning about IRQ trigger type
The example in the initial DT binding of the Vertexcom MSE 102x suggested
IRQ_TYPE_EDGE_RISING, which is wrong. So warn everyone to fix their
device tree to use a level-based IRQ.
net: phy: dp83867: use 2ns delay if not specified in DTB
Most PHY drivers default to a 2ns delay if internal delay is requested
and no value is specified. Having a default value makes sense, as it
allows a Device Tree to only care about board design (whether there are
delays on the PCB or not), and not whether the delay is added on the MAC
or the PHY side when needed.
Whether the delays are actually applied is controlled by the
DP83867_RGMII_*_CLK_DELAY_EN flags, so the behavior is only changed in
configurations that would previously be rejected with -EINVAL.
net: phy: dp83867: remove check of delay strap configuration
The check that intended to handle "rgmii" PHY mode differently to the
RGMII modes with internal delay never worked as intended:
- added in commit 2a10154abcb7 ("net: phy: dp83867: Add TI dp83867 phy"):
logic error caused the condition to always evaluate to true
- changed in commit a46fa260f6f5 ("net: phy: dp83867: Fix warning check
for setting the internal delay"): now the condition incorrectly
evaluates to false for rgmii-txid
- removed in commit 2b892649254f ("net: phy: dp83867: Set up RGMII TX
delay")
Around the time of the removal, commit c11669a2757e ("net: phy: dp83867:
Rework delay rgmii delay handling") started clearing the delay enable
flags in RGMIICTL. The change attempted to preserve the historical
behavior of not disabling internal delays with "rgmii" PHY mode and also
documented this in a comment, but due to a conflict between "Set up
RGMII TX delay" and "Rework delay rgmii delay handling", the behavior
dp83867_verify_rgmii_cfg() warned about (and that was also described in
a comment in dp83867_config_init()) disappeared in the following merge
of net into net-next in commit b4b12b0d2f02
("Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net").
While it doesn't appear that this breaking change was intentional, it
has been like this since 2019, and the new behavior to disable the delays
with "rgmii" PHY mode is generally desirable - in particular with MAC
drivers that have to fix up the delay mode, resulting in the PHY driver
not even seeing the same mode that was specified in the Device Tree.
Lad Prabhakar [Wed, 7 May 2025 17:35:50 +0000 (18:35 +0100)]
dt-bindings: net: renesas-gbeth: Add support for RZ/V2N (R9A09G056) SoC
Document support for the GBETH IP found on the Renesas RZ/V2N (R9A09G056)
SoC. The GBETH controller on the RZ/V2N SoC is functionally identical to
the one found on the RZ/V2H(P) (R9A09G057) SoC, so `renesas,rzv2h-gbeth`
will be used as a fallback compatible.
====================
selftests: net: configure rp_filter in setup_ns
Some distributions enable rp_filter globally by default, which can interfere
with various test cases. To address this, many tests explicitly disable
rp_filter within their scripts.
To avoid duplication and ensure consistent behavior across tests, this patch
moves the rp_filter configuration into setup_ns, applied immediately after a
new namespace is created. This change ensures that all namespace-based tests
inherit the appropriate rp_filter settings, simplifying individual test
scripts and improving maintainability.
====================
Hangbin Liu [Thu, 8 May 2025 08:19:08 +0000 (08:19 +0000)]
selftests: net: use setup_ns for SRv6 tests and remove rp_filter configuration
Some SRv6 tests manually set up network namespaces and disable rp_filter.
Since the setup_ns library function already handles rp_filter configuration,
convert these SRv6 tests to use setup_ns and remove the redundant rp_filter
settings.
Hangbin Liu [Thu, 8 May 2025 08:19:07 +0000 (08:19 +0000)]
selftests: net: use setup_ns for bareudp testing
Switch bareudp testing to use setup_ns, which sets up rp_filter by default.
This allows us to remove the manual rp_filter configuration from the script.
Additionally, since setup_ns handles namespace naming and cleanup, we no
longer need a separate cleanup function. We also move the trap setup earlier
in the script, before the test setup begins.
The following tests use setup_ns to create a network namespace, which
disables rp_filter immediately after namespace creation. Therefore,
it is no longer necessary to disable rp_filter again within these individual
tests.
Hangbin Liu [Thu, 8 May 2025 08:19:05 +0000 (08:19 +0000)]
selftests: net: disable rp_filter after namespace initialization
Some distributions enable rp_filter globally by default. To ensure consistent
behavior across environments, we explicitly disable it in several test cases.
This patch moves the rp_filter disabling logic to immediately after the
network namespace is initialized. With this change, individual test cases
that create namespaces via setup_ns no longer need to disable rp_filter
again.
This helps avoid redundancy and ensures test consistency.
Vladimir Oltean [Thu, 8 May 2025 21:10:42 +0000 (00:10 +0300)]
net: ixp4xx_eth: convert to ndo_hwtstamp_get() and ndo_hwtstamp_set()
New timestamping API was introduced in commit 66f7223039c0 ("net: add
NDOs for configuring hardware timestamping") from kernel v6.6. It is
time to convert the intel ixp4xx ethernet driver to the new API, so that
the ndo_eth_ioctl() path can be removed completely.
hwtstamp_get() and hwtstamp_set() are only called if netif_running()
when the code path is engaged through the legacy ioctl. As I don't
want to make an unnecessary functional change which I can't test,
preserve that restriction when going through the new operations.
When cpu_is_ixp46x() is false, the execution of SIOCGHWTSTAMP and
SIOCSHWTSTAMP falls through to phy_mii_ioctl(), which may process it in
case of a timestamping PHY, or may return -EOPNOTSUPP. In the new API,
the core handles timestamping PHYs directly and does not call the netdev
driver, so just return -EOPNOTSUPP directly for equivalent logic.
A gratuitous change I chose to do anyway is prefixing hwtstamp_get() and
hwtstamp_set() with the driver name, ixp4xx. This reflects modern coding
sensibilities, and we are touching the involved lines anyway.
The remainder of eth_ioctl() is exactly equivalent to
phy_do_ioctl_running(), so use that.
Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Link: https://patch.msgid.link/20250508211043.3388702-1-vladimir.oltean@nxp.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Having a software timestamp (along with the existing hardware one) is
useful to trace how packets flow through the stack.
mlx5e_tx_skb_update_hwts_flags() is called from the TX paths
to set up the HW timestamp; extend it to add the software one as well.
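A sketch of the extended helper, based on the shape described above (details
in en_tx.c may differ):

  static void mlx5e_tx_skb_update_hwts_flags(struct sk_buff *skb)
  {
          if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
                  skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;

          /* new: software TX timestamp, right before ringing the doorbell */
          skb_tx_timestamp(skb);
  }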
Reviewed-by: Jason Xing <kerneljasonxing@gmail.com>
Signed-off-by: Stanislav Fomichev <stfomichev@gmail.com>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Acked-by: Martin KaFai Lau <martin.lau@kernel.org>
Reviewed-by: Tariq Toukan <tariqt@nvidia.com>
Link: https://patch.msgid.link/20250508235109.585096-1-stfomichev@gmail.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Refactoring the DesignWare VLAN code and introducing support for
hardware-accelerated VLAN stripping for the dwxgmac2 IP,
the current patch set consists of the following key changes:
1) Refactoring VLAN Functions:
The first change involves moving common VLAN-related functions
of the DesignWare Ethernet MAC into a dedicated file, stmmac_vlan.c.
This refactoring aims to improve code organization and maintainability
by centralizing VLAN handling logic.
2) Renaming all the functions, symbols and macros to more
generic names and consolidating the same function pointers into one.
3) Enabling VLAN for the 10G Ethernet MAC IP:
This change enables VLAN support specifically
for the 10G Ethernet MAC IP. It leverages
the hardware capabilities of the IP to perform VLAN stripping.
Changes from previously submitted patches:
v5:
a) Divided the refactor patch into two patches:
the first patch moves the code into a separate file
and the second patch updates the symbol names.
b) Get the dwmac4 VLAN functions up to date and port them to
stmmac_vlan.c.
c) Remove the inline functions used as function pointers and
use only static function definitions.
d) Remove the outer parentheses that are not needed
on the one-line return statements.
v4:
a) Updated the commit message to explain the descriptors
behaviour on different hardware.
b) Updated the perfect_match variable with the correct
byte order.
Boon Khai Ng [Wed, 7 May 2025 06:38:12 +0000 (14:38 +0800)]
net: stmmac: dwxgmac2: Add support for HW-accelerated VLAN stripping
This patch adds support for MAC level VLAN tag stripping for the
dwxgmac2 IP. This feature can be configured through ethtool.
If the rx-vlan-offload is off, then the VLAN tag will be stripped
by the driver. If the rx-vlan-offload is on then the VLAN tag
will be stripped by the MAC.
Boon Khai Ng [Wed, 7 May 2025 06:38:11 +0000 (14:38 +0800)]
net: stmmac: stmmac_vlan: rename VLAN functions and symbol to generic symbol.
With the VLAN handling code decoupled from dwmac4 and dwxgmac2 and
consolidated in stmmac_vlan.c, functions and symbols are renamed to
use a generic prefix. This change improves code clarity and
maintainability by reflecting the shared nature of the VLAN logic,
facilitating future enhancements or reuse without being tied to
specific MAC implementations.
No functional changes are introduced in this patch.
Note: The dwxgmac2_update_vlan_hash function is not combined due
to minor differences in setting the VTFE bit. A separate fix patch
will be submitted to align its behavior with the dwmac4 driver.
Boon Khai Ng [Wed, 7 May 2025 06:38:10 +0000 (14:38 +0800)]
net: stmmac: Refactor VLAN implementation
Refactor VLAN implementation by moving common code for DWMAC4 and
DWXGMAC IPs into a separate VLAN module. VLAN implementation for
DWMAC4 and DWXGMAC differs only for CSR base address, the descriptor
for the VLAN ID and VLAN VALID bit field.
The descriptor format for VLAN is not moved to the common code due
to hardware-specific differences between DWMAC4 and DWXGMAC.
For the DWMAC4 IP, the Receive Normal Descriptor 0 (RDES0) is
formatted as follows:
31 0
------------------------ -----------------------
RDES0| Inner VLAN TAG [31:16] | Outer VLAN TAG [15:0] |
------------------------ -----------------------
For the DWXGMAC IP, the RDES0 format varies based on the
Tunneled Frame bit (TNP):
a) For Non-Tunneled Frame (TNP=0)
31 0
------------------------ -----------------------
RDES0| Inner VLAN TAG [31:16] | Outer VLAN TAG [15:0] |
------------------------ -----------------------
The logic for handling tunneled frames is not yet implemented
in the dwxgmac2_wrback_get_rx_vlan_tci() function. Therefore,
it is prudent to maintain separate functions within their
respective descriptor driver files
(dwxgmac2_descs.c and dwmac4_descs.c).
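Illustrative field accessors for the RDES0 layout shown above (the macro
names are made up for the example):

  #define RDES0_OUTER_VLAN_TAG    GENMASK(15, 0)
  #define RDES0_INNER_VLAN_TAG    GENMASK(31, 16)

  u32 rdes0 = le32_to_cpu(desc->des0);
  u16 outer_tag = FIELD_GET(RDES0_OUTER_VLAN_TAG, rdes0);
  u16 inner_tag = FIELD_GET(RDES0_INNER_VLAN_TAG, rdes0);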
Jakub Kicinski [Sat, 10 May 2025 00:09:37 +0000 (17:09 -0700)]
Merge tag 'batadv-next-pullrequest-20250509' of git://git.open-mesh.org/linux-merge
Simon Wunderlich says:
====================
This cleanup patchset includes the following patches:
- bump version strings, by Simon Wunderlich
- constify and move broadcast addr definition, Matthias Schiffer
- remove start/stop queue function for mesh-iface, by Antonio Quartulli
- switch to crc32 header for crc32c, by Sven Eckelmann
- drop unused net_namespace.h include, by Sven Eckelmann
* tag 'batadv-next-pullrequest-20250509' of git://git.open-mesh.org/linux-merge:
batman-adv: Drop unused net_namespace.h include
batman-adv: Switch to crc32 header for crc32c
batman-adv: no need to start/stop queue on mesh-iface
batman-adv: constify and move broadcast addr definition
batman-adv: Start new development cycle
====================
Vladimir Oltean [Thu, 8 May 2025 14:46:30 +0000 (17:46 +0300)]
net: mvpp2: convert to ndo_hwtstamp_get() and ndo_hwtstamp_set()
New timestamping API was introduced in commit 66f7223039c0 ("net: add
NDOs for configuring hardware timestamping") from kernel v6.6. It is
time to convert the mvpp2 driver to the new API, so that the
ndo_eth_ioctl() path can be removed completely.
Note that on the !port->hwtstamp condition, the old code used to fall
through in mvpp2_ioctl(), and return either -ENOTSUPP if !port->phylink,
or -EOPNOTSUPP, in phylink_mii_ioctl(). Keep the test for port->hwtstamp
in the newly introduced net_device_ops, but consolidate the error code
to just -EOPNOTSUPP. The other one is documented as NFS-specific; it's
best to avoid it anyway.