It was observed that when pinctrl_amd is allowed to be loaded
later in the boot process, interrupts sent to the GPIO
controller early in boot are not serviced. The kernel treats
these as spurious IRQs and disables the IRQ.
This problem was exacerbated on a system with an encrypted
partition, where the kernel module was not accessible for
an extended period of time while waiting for a passphrase.
To prevent this situation from occurring, stop allowing pinctrl-amd
to be built as a module and instead require it to be built-in
or disabled.
net: prestera: acl: use proper mask for port selector
Adjust the mask as per the packet processor documentation.
This allows 'indev' to be matched properly for clsact rules.
Fixes: 47327e198d42 ("net: prestera: acl: migrate to new vTCAM api") Signed-off-by: Maksym Glubokiy <maksym.glubokiy@plvision.eu> Signed-off-by: David S. Miller <davem@davemloft.net>
Marek Behún [Mon, 4 Jul 2022 11:36:21 +0000 (13:36 +0200)]
ARM: dts: turris-omnia: configure LED[0] pin function to link/activity
The Marvell PHY driver changes the LED[0] pin function to "On - 1000
Mbps Link, Off - Else".
Turris Omnia expects that the function is "On - Link, Blink - Activity,
Off - No link".
Use the `marvell,reg-init` DT property to change the function.
In the future, once the netdev trigger supports HW offloading, we will
be able to have this configured via the combination of PHY driver and
leds-turris-omnia driver.
Signed-off-by: Marek Behún <kabel@kernel.org> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
Both hfi1 and UML depend on x86_64, which can trigger build errors.
This driver must depend on !UML because it accesses x86_64
features that are not supported by UML.
Socket destruction flow and tls_device_down function sync against each
other using tls_device_lock and the context refcount, to guarantee the
device resources are freed via tls_dev_del() by the end of
tls_device_down.
In the following unfortunate flow, this won't happen:
- refcount is decreased to zero in tls_device_sk_destruct.
- tls_device_down starts, skips the context as refcount is zero, going
all the way until it flushes the gc work, and returns without freeing
the device resources.
- only then, tls_device_queue_ctx_destruction is called, queues the gc
work and frees the context's device resources.
Solve it by decreasing the refcount in the socket's destruction flow
under the tls_device_lock, for perfect synchronization. This does not
slow down the common destructor flow, in which the refcount
is decreased and the spinlock is acquired anyway.
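A rough sketch of the resulting shape, with simplified names (the exact list/work fields may differ), illustrating why the race window closes:

  static void tls_device_queue_ctx_destruction(struct tls_context *ctx)
  {
          unsigned long flags;

          spin_lock_irqsave(&tls_device_lock, flags);
          /* Drop the reference while holding the lock, so tls_device_down()
           * either sees a non-zero refcount or finds the cleanup already
           * queued; it can no longer slip in between the two steps. */
          if (refcount_dec_and_test(&ctx->refcount)) {
                  list_move_tail(&ctx->list, &tls_device_gc_list);
                  schedule_work(&tls_device_gc_work);  /* frees device resources */
          }
          spin_unlock_irqrestore(&tls_device_lock, flags);
  }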
Fixes: e8f69799810c ("net/tls: Add generic NIC offload infrastructure") Reviewed-by: Maxim Mikityanskiy <maximmi@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Reviewed-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Pali Rohár [Tue, 2 Nov 2021 17:12:58 +0000 (18:12 +0100)]
ARM: Marvell: Update PCIe fixup
- The code relies on rc_pci_fixup being called, which only happens
when CONFIG_PCI_QUIRKS is enabled, so add that to Kconfig. Omitting
this causes a booting failure with a non-obvious cause.
- Update rc_pci_fixup to set the class properly, copying the
more modern style from other places
- Correct the rc_pci_fixup comment
This patch just re-applies commit 1dc831bf53fd ("ARM: Kirkwood: Update
PCI-E fixup") for all other Marvell ARM platforms which have same buggy
PCIe controller and do not use pci-mvebu.c controller driver yet.
Long-term goal for these Marvell ARM platforms should be conversion to
pci-mvebu.c controller driver and removal of these fixups in arch code.
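For reference, the fixup style copied from 1dc831bf53fd looks roughly like the sketch below (the exact match condition and registration may differ per platform):

  static void rc_pci_fixup(struct pci_dev *dev)
  {
          /* Re-class the root complex as a host bridge and clear its BARs
           * so the PCI core does not try to claim or assign its resources. */
          if (dev->bus->parent == NULL && dev->devfn == 0) {
                  int i;

                  dev->class &= 0xff;
                  dev->class |= PCI_CLASS_BRIDGE_HOST << 8;
                  for (i = 0; i < DEVICE_COUNT_RESOURCE; i++) {
                          dev->resource[i].start = 0;
                          dev->resource[i].end   = 0;
                          dev->resource[i].flags = 0;
                  }
          }
  }
  DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MARVELL, PCI_ANY_ID, rc_pci_fixup);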
TLS calls skb_cow_data() on the skb it received from strparser
whenever it needs to hold onto the skb with the decrypted data.
(The alternative being decrypting directly to a user space buffer
in which case the input skb doesn't get modified or used afterwards.)
TLS needs the decrypted skb:
- almost always with TLS 1.3 (unless the new NoPad is enabled);
- when user space buffer is too small to fit the record;
- when BPF sockmap is enabled.
Most of the time the skb we get out of strparser is a clone of
a 64kB data unit coalesced by GRO. To make things worse, skb_cow_data()
tries to output a linear skb and allocates it with GFP_ATOMIC.
This occasionally fails even under moderate memory pressure.
This patch set rejigs the TLS Rx so that we don't expect decryption
in place. The decryption handlers return an skb which may or may not
be the skb from strparser. For TLS 1.3 this results in a 20-30%
performance improvement without NoPad enabled.
v2: rebase after 3d8c51b25a23 ("net/tls: Check for errors in tls_device_init")
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:35 +0000 (22:22 -0700)]
tls: rx: decrypt into a fresh skb
We currently CoW Rx skbs whenever we can't decrypt to a user
space buffer. The skbs can be enormous (64kB) and CoW does
a linear alloc which has a strong chance of failing under
memory pressure. Or even without, skb_cow_data() assumes
GFP_ATOMIC.
Allocate a new frag'd skb and decrypt into it. We finally
take advantage of the decrypted skb getting returned via
darg.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:34 +0000 (22:22 -0700)]
tls: rx: async: don't put async zc on the list
The "zero-copy" path in SW TLS will engage either for no skbs or
for all but the last. If the recvmsg parameters are right and the
socket can do ZC, we'll ZC until the iterator can't fit a full
record, at which point we'll decrypt one more record and copy
over the necessary bits to fill up the request.
The only reason we hold onto the ZC skbs which went thru the async
path until the end of recvmsg() is to count bytes. We need an accurate
count of zc'ed bytes so that we can calculate how much of the non-zc'd
data to copy. To allow freeing input skbs on the ZC path count only
how much of the list we'll need to consume.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:33 +0000 (22:22 -0700)]
tls: rx: async: hold onto the input skb
Async crypto currently benefits from the fact that we decrypt
in place. When we allow input and output to be different skbs
we will have to hang onto the input while we move to the next
record. Clone the inputs and keep them on a list.
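An illustrative sketch of that shape (the queue field name here is an assumption):

  /* Keep a private reference to the input skb until async crypto completes. */
  static int tls_strp_msg_hold_sketch(struct sock *sk, struct sk_buff *skb,
                                      struct sk_buff_head *async_hold)
  {
          struct sk_buff *clone = skb_clone(skb, sk->sk_allocation);

          if (!clone)
                  return -ENOMEM;
          __skb_queue_tail(async_hold, clone);  /* released when decryption is done */
          return 0;
  }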
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:32 +0000 (22:22 -0700)]
tls: rx: async: adjust record geometry immediately
Async crypto TLS Rx currently waits for crypto to be done
in order to strip the TLS header and trailer. Simplify
the code by moving the pointers immediately; since only
TLS 1.2 is supported here, there is no message padding.
This simplifies the decryption into a new skb in the next
patch as we don't have to worry about input vs output
skb in the decrypt_done() handler any more.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:31 +0000 (22:22 -0700)]
tls: rx: return the decrypted skb via darg
Instead of using ctx->recv_pkt after decryption read the skb
from darg.skb. This moves the decision of what the "output skb"
is to the decrypt handlers. For now after decrypt handler returns
successfully ctx->recv_pkt is simply moved to darg.skb, but it
will change soon.
Note that tls_decrypt_sg() cannot clear the ctx->recv_pkt
because it gets called to re-encrypt (i.e. by the device offload).
So we need an awkward temporary if() in tls_rx_one_record().
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:30 +0000 (22:22 -0700)]
tls: rx: read the input skb from ctx->recv_pkt
Callers always pass ctx->recv_pkt into decrypt_skb_update(),
and it propagates it to its callees. This may give someone
the false impression that those functions can accept any valid
skb containing a TLS record. That's not the case, the record
sequence number is read from the context, and they can only
take the next record coming out of the strp.
Let the functions get the skb from the context instead of
passing it in. This will also make it cleaner to return
a different skb than ctx->recv_pkt as the decrypted one
later on.
Since we're touching the definition of decrypt_skb_update()
use this as an opportunity to rename it.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:29 +0000 (22:22 -0700)]
tls: rx: factor out device darg update
I already forgot to transform darg from input to output
semantics once on the NIC inline crypto fastpath. To
avoid this happening again, create a device equivalent
of decrypt_internal(): a function responsible for decryption
and for transforming darg.
While at it rename decrypt_internal() to a hopefully slightly
more meaningful name.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:28 +0000 (22:22 -0700)]
tls: rx: remove the message decrypted tracking
We no longer allow a decrypted skb to remain linked to ctx->recv_pkt.
Anything on the list is decrypted, anything on ctx->recv_pkt needs
to be decrypted.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:26 +0000 (22:22 -0700)]
tls: rx: don't try to keep the skbs always on the list
I thought that having the skb either always on the ctx->rx_list
or on ctx->recv_pkt would simplify the handling, as we would not
have to remember to flip it from one to the other on exit paths.
This became a little harder to justify with the fix for BPF
sockmaps. Subsequent changes will make the situation even worse.
Queue the skbs only when really needed.
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
Jakub Kicinski [Fri, 15 Jul 2022 05:22:25 +0000 (22:22 -0700)]
tls: rx: allow only one reader at a time
recvmsg() in TLS gets data from the skb list (rx_list) or fresh
skbs we read from TCP via strparser. The former holds skbs which were
already decrypted for peek or decrypted and partially consumed.
tls_wait_data() only notices appearance of fresh skbs coming out
of TCP (or psock). It is possible, if there are concurrent calls
to peek() and recv(), that peek() will move the data from the input
to rx_list without recv() noticing. recv() will then read data out
of order or never wake up.
This is not a practical use case/concern, but it makes the self
tests less reliable. This patch solves the problem by allowing
only one reader in.
Because having multiple processes calling read()/peek() is not
the normal case, avoid adding a lock and instead fast-path the
single-reader case.
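A minimal sketch of the idea with made-up names (not the actual tls_sw fields): the uncontended single reader pays one atomic operation, and only a second concurrent reader sleeps.

  struct rx_reader_gate {                 /* illustrative stand-in for the rx context */
          atomic_t busy;
          wait_queue_head_t wq;
  };

  static int reader_enter(struct rx_reader_gate *g, bool nonblock)
  {
          while (atomic_cmpxchg(&g->busy, 0, 1) != 0) {
                  if (nonblock)
                          return -EAGAIN;
                  if (wait_event_interruptible(g->wq, !atomic_read(&g->busy)))
                          return -ERESTARTSYS;
          }
          return 0;                       /* we are now the only reader */
  }

  static void reader_leave(struct rx_reader_gate *g)
  {
          atomic_set(&g->busy, 0);
          wake_up(&g->wq);
  }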
Signed-off-by: Jakub Kicinski <kuba@kernel.org> Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller [Mon, 18 Jul 2022 10:19:17 +0000 (11:19 +0100)]
Merge branch 'net-smc-virt-contig-buffers'
Wen Gu says:
====================
net/smc: Introduce virtually contiguous buffers for SMC-R
On long-running enterprise production servers, high-order contiguous
memory pages are usually very rare and in most cases we can only get
fragmented pages.
When replacing TCP with SMC-R in such production scenarios, attempting
to allocate high-order physically contiguous sndbufs and RMBs may result
in frequent memory compaction, which can cause unexpected hangs
and further stability risks.
So this patch set aims to allow SMC-R link groups to use virtually
contiguous sndbufs and RMBs to avoid the potential issues mentioned above.
Whether to use physically or virtually contiguous buffers can be set
by sysctl smcr_buf_type.
Note that using virtually contiguous buffers will bring an acceptable
performance regression, which can be mainly divided into two parts:
1) regression in the data path, caused by the additional address
translation of the sndbuf by the RNIC in Tx. But in general, translating
addresses through the MTT is fast. According to qperf tests, this
regression is mostly less than 10% in latency and bandwidth.
(see patch 5/6 for details)
2) regression in the buffer initialization and destruction path, caused
by the additional MR operations on sndbufs. But thanks to the link
group buffer reuse mechanism, the impact of this regression
decreases as the number of buffer reuses increases.
Patch set overview:
- Patch 1/6 and 2/6 are mainly about simplifying and optimizing the DMA sync
operations, which reduces overhead on the data path, especially
when using virtually contiguous buffers;
- Patch 3/6 and 4/6 introduce a sysctl smcr_buf_type to set the type
of buffers in newly created link groups;
- Patch 5/6 allows SMC-R to use virtually contiguous sndbufs and RMBs,
including buffer creation, destruction, MR operation and access;
- Patch 6/6 extends the netlink attribute for the buffer type of an SMC-R link group;
v1->v2:
- Patch 5/6 fixes build issue on 32bit;
- Patch 3/6 adds description of new sysctl in smc-sysctl.rst;
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: Allow virtually contiguous sndbufs or RMBs for SMC-R
On long-running enterprise production servers, high-order contiguous
memory pages are usually very rare and in most cases we can only get
fragmented pages.
When replacing TCP with SMC-R in such production scenarios, attempting
to allocate high-order physically contiguous sndbufs and RMBs may result
in frequent memory compaction, which can cause unexpected hangs
and further stability risks.
So this patch aims to allow SMC-R link groups to use virtually
contiguous sndbufs and RMBs to avoid the potential issues mentioned above.
Whether to use physically or virtually contiguous buffers can be set
by sysctl smcr_buf_type.
Note that using virtually contiguous buffers will bring an acceptable
performance regression, which can be mainly divided into two parts:
1) regression in the data path, caused by the additional address
translation of the sndbuf by the RNIC in Tx. But in general, translating
addresses through the MTT is fast.
Taking 256KB sndbuf and RMB as an example, the comparisons in qperf
latency and bandwidth tests with physically and virtually contiguous
buffers are as follows:
[latency]
msgsize tcp smcr smcr-use-virt-buf
1 11.17 us 7.56 us 7.51 us (-0.67%)
2 10.65 us 7.74 us 7.56 us (-2.31%)
4 11.11 us 7.52 us 7.59 us ( 0.84%)
8 10.83 us 7.55 us 7.51 us (-0.48%)
16 11.21 us 7.46 us 7.51 us ( 0.71%)
32 10.65 us 7.53 us 7.58 us ( 0.61%)
64 10.95 us 7.74 us 7.80 us ( 0.76%)
128 11.14 us 7.83 us 7.87 us ( 0.47%)
256 10.97 us 7.94 us 7.92 us (-0.28%)
512 11.23 us 7.94 us 8.20 us ( 3.25%)
1024 11.60 us 8.12 us 8.20 us ( 0.96%)
2048 14.04 us 8.30 us 8.51 us ( 2.49%)
4096 16.88 us 9.13 us 9.07 us (-0.64%)
8192 22.50 us 10.56 us 11.22 us ( 6.26%)
16384 28.99 us 12.88 us 13.83 us ( 7.37%)
32768 40.13 us 16.76 us 16.95 us ( 1.16%)
65536 68.70 us 24.68 us 24.85 us ( 0.68%)
[bandwidth]
msgsize tcp smcr smcr-use-virt-buf
1 1.65 MB/s 1.59 MB/s 1.53 MB/s (-3.88%)
2 3.32 MB/s 3.17 MB/s 3.08 MB/s (-2.67%)
4 6.66 MB/s 6.33 MB/s 6.09 MB/s (-3.85%)
8 13.67 MB/s 13.45 MB/s 11.97 MB/s (-10.99%)
16 25.36 MB/s 27.15 MB/s 24.16 MB/s (-11.01%)
32 48.22 MB/s 54.24 MB/s 49.41 MB/s (-8.89%)
64 106.79 MB/s 107.32 MB/s 99.05 MB/s (-7.71%)
128 210.21 MB/s 202.46 MB/s 201.02 MB/s (-0.71%)
256 400.81 MB/s 416.81 MB/s 393.52 MB/s (-5.59%)
512 746.49 MB/s 834.12 MB/s 809.99 MB/s (-2.89%)
1024 1292.33 MB/s 1641.96 MB/s 1571.82 MB/s (-4.27%)
2048 2007.64 MB/s 2760.44 MB/s 2717.68 MB/s (-1.55%)
4096 2665.17 MB/s 4157.44 MB/s 4070.76 MB/s (-2.09%)
8192 3159.72 MB/s 4361.57 MB/s 4270.65 MB/s (-2.08%)
16384 4186.70 MB/s 4574.13 MB/s 4501.17 MB/s (-1.60%)
32768 4093.21 MB/s 4487.42 MB/s 4322.43 MB/s (-3.68%)
65536 4057.14 MB/s 4735.61 MB/s 4555.17 MB/s (-3.81%)
2) regression in the buffer initialization and destruction path, caused
by the additional MR operations on sndbufs. But thanks to the link
group buffer reuse mechanism, the impact of this regression
decreases as the number of buffer reuses increases.
Taking 256KB sndbuf and RMB as an example, the latency of some key SMC-R
buffer-related functions obtained by bpftrace is as follows:
------------
Test environment notes:
1. The above tests run on 2 VMs within the same host.
2. The NIC is a ConnectX-4Lx, using SR-IOV and passing through 2 VFs to
each VM respectively.
3. The VMs' vCPUs are bound to different physical CPUs, and the bound
physical CPUs are isolated by the `isolcpus=xxx` cmdline.
4. The NICs' queue number is set to 1.
Signed-off-by: Wen Gu <guwen@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
net/smc: Use sysctl-specified types of buffers in new link group
This patch introduces a new SMC-R specific element buf_type
in struct smc_link_group, for recording the value of sysctl
smcr_buf_type when the link group is created.
Newly created link groups will create and reuse buffers of the
type specified by buf_type.
Signed-off-by: Wen Gu <guwen@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Guangguan Wang [Thu, 14 Jul 2022 09:44:01 +0000 (17:44 +0800)]
net/smc: optimize for smc_sndbuf_sync_sg_for_device and smc_rmb_sync_sg_for_cpu
Some CPUs, such as Xeon, can guarantee DMA cache coherency,
so there is no need to use DMA sync APIs to flush the cache on such CPUs.
In order to avoid calling DMA sync APIs on the I/O path, use
dma_need_sync() to check whether an smc_buf_desc needs DMA sync
when creating the smc_buf_desc.
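A hedged sketch of the approach (struct and helper names simplified): cache the dma_need_sync() result at buffer creation and test it on the I/O path.

  struct smc_buf_desc_sketch {            /* simplified stand-in for smc_buf_desc */
          bool is_dma_need_sync;
  };

  static void smc_buf_check_need_sync(struct device *dma_dev,
                                      struct smc_buf_desc_sketch *desc,
                                      dma_addr_t dma_addr)
  {
          desc->is_dma_need_sync = dma_need_sync(dma_dev, dma_addr);
  }

  static void smc_buf_sync_for_device(struct device *dma_dev,
                                      struct smc_buf_desc_sketch *desc,
                                      dma_addr_t dma_addr, size_t len)
  {
          if (!desc->is_dma_need_sync)    /* coherent: skip the sync entirely */
                  return;
          dma_sync_single_for_device(dma_dev, dma_addr, len, DMA_TO_DEVICE);
  }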
Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Guangguan Wang [Thu, 14 Jul 2022 09:44:00 +0000 (17:44 +0800)]
net/smc: remove redundant dma sync ops
smc_ib_sync_sg_for_cpu/device are the ops used for DMA memory cache
consistency. SMC sndbufs are DMA buffers to which the CPU writes data
and from which the PCIe device reads data. So for sndbufs,
smc_ib_sync_sg_for_device is needed and smc_ib_sync_sg_for_cpu is
redundant, as the PCIe device will not write the buffers. SMC RMBs
are DMA buffers to which the PCIe device writes data and from which
the CPU reads data. So for RMBs, smc_ib_sync_sg_for_cpu is needed and
smc_ib_sync_sg_for_device is redundant, as the CPU will not write the buffers.
Signed-off-by: Guangguan Wang <guangguan.wang@linux.alibaba.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Wong Vee Khee [Thu, 14 Jul 2022 07:54:27 +0000 (15:54 +0800)]
net: stmmac: switch to use interrupt for hw crosstimestamping
With the current polling-mode implementation, there is a high chance
of hitting a timeout error when running phc2sys. Hence, update the
hardware crosstimestamping implementation to use the MAC interrupt
service routine instead of polling for the TSIS bit to be set in the
MAC Timestamp Interrupt Status register.
Cc: Richard Cochran <richardcochran@gmail.com> Signed-off-by: Wong Vee Khee <vee.khee.wong@linux.intel.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Dan Carpenter [Fri, 15 Jul 2022 10:35:18 +0000 (13:35 +0300)]
wifi: wil6210: debugfs: fix info leak in wil_write_file_wmi()
The simple_write_to_buffer() function will succeed if even a single
byte is initialized. However, we need to initialize the whole buffer
to prevent information leaks. Just use memdup_user().
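A hedged sketch of the pattern (the function name is illustrative, not the exact wil6210 handler):

  static ssize_t wmi_write_sketch(struct file *file, const char __user *buf,
                                  size_t len, loff_t *ppos)
  {
          void *cmd;

          /* memdup_user() either fills the whole kernel buffer from user
           * space or fails, so no uninitialized bytes can leak out later. */
          cmd = memdup_user(buf, len);
          if (IS_ERR(cmd))
                  return PTR_ERR(cmd);

          /* ... send the fully-initialized command ... */

          kfree(cmd);
          return len;
  }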
Fixes: ff974e408334 ("wil6210: debugfs interface to send raw WMI command") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Kalle Valo <quic_kvalo@quicinc.com> Link: https://lore.kernel.org/r/Ysg14NdKAZF/hcNG@kili
Samuel Holland [Wed, 13 Jul 2022 02:52:33 +0000 (21:52 -0500)]
pinctrl: sunxi: Add driver for Allwinner D1
This SoC contains a pinctrl with a new register layout. Use the variant
parameter to set the right register offsets. This pinctrl also increases
the number of functions per pin from 8 to 16, taking advantage of all 4
bits in the mux config field (so far, only functions 0-8 and 14-15 are
used). This increases the maximum possible number of functions.
Reviewed-by: Heiko Stuebner <heiko@sntech.de> Tested-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Samuel Holland <samuel@sholland.org> Reviewed-by: Andre Przywara <andre.przywara@arm.com> Link: https://lore.kernel.org/r/20220713025233.27248-7-samuel@sholland.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Samuel Holland [Wed, 13 Jul 2022 02:52:32 +0000 (21:52 -0500)]
pinctrl: sunxi: Make some layout parameters dynamic
Starting with the D1/D1s/T113 SoC, Allwinner changed the layout of the
pinctrl registers. This new layout widens the drive level field, which
affects the pull register offset and the overall bank size.
In order to support multiple register layouts, some of the layout
parameters need to be set based on the pinctrl variant. This requires
passing the pinctrl struct pointer to the register/offset calculation
functions.
Starting with the D1/D1s/T113 SoC, Allwinner changed the layout of the
pinctrl registers. This new layout widens the drive level field, which
affects the pull register offset and the overall bank size.
As a first step to support this, combine the register and offset
calculation functions, and refactor the math to depend on one constant
for field widths instead of three. This minimizes the code size impact
of making some of the factors dynamic.
While rewriting these functions, move them to the implementation file,
since that is the only file where they are used. And make the comment
more generic, without mentioning specific offsets/sizes.
The callers are updated to expect a shifted mask, and to use consistent
terminology (reg/shift/mask/val).
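A hedged sketch of the consolidated math (the field width is hard-coded here purely for illustration; the real code derives it from the variant):

  static void sunxi_mux_reg_shift_mask(u32 pin, u32 *reg, u32 *shift, u32 *mask)
  {
          const u32 bits_per_pin = 4;                     /* assumed field width */
          const u32 pins_per_reg = 32 / bits_per_pin;     /* 8 pins per 32-bit register */

          *reg   = (pin / pins_per_reg) * 0x04;
          *shift = (pin % pins_per_reg) * bits_per_pin;
          *mask  = GENMASK(*shift + bits_per_pin - 1, *shift);  /* already shifted */
  }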
Samuel Holland [Wed, 13 Jul 2022 02:52:30 +0000 (21:52 -0500)]
pinctrl: sunxi: Support the 2.5V I/O bias mode
H616 and newer SoCs feature a 2.5V I/O bias mode in addition to the
1.8V and 3.3V modes. This mode is entered by selecting the 3.3V level
and disabling the "withstand function".
H616 supports this capability on its main PIO only. A100 supports this
capability on both its PIO and R-PIO.
Reviewed-by: Jernej Skrabec <jernej.skrabec@gmail.com> Tested-by: Heiko Stuebner <heiko@sntech.de> Signed-off-by: Samuel Holland <samuel@sholland.org> Link: https://lore.kernel.org/r/20220713025233.27248-4-samuel@sholland.org Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
dt-bindings: pinctrl: mt8195: Add and use drive-strength-microamp
As was already done for MT8192 in commit b52e695324bb ("dt-bindings:
pinctrl: mt8192: Add drive-strength-microamp"), replace the custom
mediatek,drive-strength-adv property with the standardized pinconf
'drive-strength-microamp' one.
Similarly to the mt8192 counterpart, there's no user of property
'mediatek,drive-strength-adv', hence removing it is safe.
Fixes: 69c3d58dc187 ("dt-bindings: pinctrl: mt8195: Add mediatek,drive-strength-adv property") Signed-off-by: AngeloGioacchino Del Regno <angelogioacchino.delregno@collabora.com> Reviewed-by: Nícolas F. R. A. Prado <nfraprado@collabora.com> Reviewed-by: Rob Herring <robh@kernel.org> Link: https://lore.kernel.org/r/20220630131543.225554-1-angelogioacchino.delregno@collabora.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Jack Wang [Tue, 12 Jul 2022 10:31:13 +0000 (12:31 +0200)]
RDMA/rtrs-srv: Do not use mempool for page allocation
The mempool is for guaranteed memory allocation during
extreme VM load (see the header of mempool.c of the kernel).
But rtrs-srv allocates pages only when creating a new session.
There is no need to use the mempool.
With the removal of the mempool, rtrs-server no longer needs to reserve
a huge amount of memory, which avoids errors like this:
https://lore.kernel.org/lkml/20220620020727.GA3669@xsang-OptiPlex-9020/
Link: https://lore.kernel.org/r/20220712103113.617754-6-haris.iqbal@ionos.com Reported-by: kernel test robot <oliver.sang@intel.com> Signed-off-by: Gioh Kim <gi-oh.kim@ionos.com> Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Acked-by: Guoqing Jiang <guoqing.jiang@linux.dev> Signed-off-by: Leon Romanovsky <leon@kernel.org>
RDMA/rtrs-clt: Replace list_next_or_null_rr_rcu with an inline function
Remove the list_next_or_null_rr_rcu macro to fix the warnings below.
The macro is used only twice.
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'head' - possible side-effects?
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'ptr' - possible side-effects?
CHECK:MACRO_ARG_REUSE: Macro argument reuse 'memb' - possible side-effects?
Replace the macro with an inline function.
Fixes: 6a98d71daea1 ("RDMA/rtrs: client: main functionality") Cc: jinpu.wang@ionos.com Link: https://lore.kernel.org/r/20220712103113.617754-5-haris.iqbal@ionos.com Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Suggested-by: Jason Gunthorpe <jgg@ziepe.ca> Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Leon Romanovsky <leon@kernel.org>
RDMA/rtrs-srv: Use per-cpu variables for rdma stats
Convert server stat counters from atomic to per-cpu variables.
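A hedged sketch of the conversion (the struct layout is assumed): writers only touch their local CPU's counter, and the stats reader sums across CPUs.

  struct rtrs_srv_stats_rdma_sketch {     /* illustrative, not the exact struct */
          u64 cnt;
  };

  static void rdma_stat_inc(struct rtrs_srv_stats_rdma_sketch __percpu *s)
  {
          this_cpu_inc(s->cnt);           /* no shared atomic on the hot path */
  }

  static u64 rdma_stat_sum(struct rtrs_srv_stats_rdma_sketch __percpu *s)
  {
          u64 sum = 0;
          int cpu;

          for_each_possible_cpu(cpu)
                  sum += per_cpu_ptr(s, cpu)->cnt;
          return sum;
  }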
Link: https://lore.kernel.org/r/20220712103113.617754-4-haris.iqbal@ionos.com Signed-off-by: Santosh Kumar Pradhan <santosh.pradhan@ionos.com> Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Signed-off-by: Leon Romanovsky <leon@kernel.org>
Jack Wang [Tue, 12 Jul 2022 10:31:09 +0000 (12:31 +0200)]
RDMA/rtrs-srv: Fix modinfo output for stringify
stringify works with define, not enum.
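A small illustration of why (the names below are made up for the example): the preprocessor expands macros inside __stringify(), but an enum constant is not a macro, so its name is stringified literally.

  #include <linux/stringify.h>

  #define PROTO_VER_DEF 42
  enum { PROTO_VER_ENUM = 42 };

  /* __stringify(PROTO_VER_DEF)  expands to "42"             (what modinfo wants) */
  /* __stringify(PROTO_VER_ENUM) expands to "PROTO_VER_ENUM" (the bug)            */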
Fixes: 91fddedd439c ("RDMA/rtrs: private headers with rtrs protocol structs and helpers") Cc: jinpu.wang@ionos.com Link: https://lore.kernel.org/r/20220712103113.617754-2-haris.iqbal@ionos.com Signed-off-by: Jack Wang <jinpu.wang@ionos.com> Signed-off-by: Md Haris Iqbal <haris.iqbal@ionos.com> Reviewed-by: Aleksei Marov <aleksei.marov@ionos.com> Signed-off-by: Leon Romanovsky <leon@kernel.org>
The blamed commit changed the driver to use regmaps instead of __iomem,
but it didn't update the register offsets to be word offsets, so byte
offsets are still used.
Another issue with the same commit is that it limits the number of
registers to 32, which is incorrect: sparx5 has 64 while lan966x has 77.
Fixes: 076d9e71bcf8 ("pinctrl: ocelot: convert pinctrl to regmap") Acked-by: Colin Foster <colin.foster@in-advantage.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Link: https://lore.kernel.org/r/20220713193750.4079621-3-horatiu.vultur@microchip.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
The blamed commit introduced support for lan966x, which uses the same
pinconf_ops as sparx5. The problem is that pinconf_ops is specific to
sparx5. More precisely, the offsets of the bits in the pincfg register
are different, and lan966x doesn't support
PIN_CONFIG_INPUT_SCHMITT_ENABLE.
Fix this by making pinconf_ops more generic so that it can also be
used by lan966x. This is done by introducing 'ocelot_pincfg_data', which
contains the offsets and what is supported for each SoC.
Fixes: 531d6ab36571 ("pinctrl: ocelot: Extend support for lan966x") Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com> Reviewed-by: Andy Shevchenko <andy.shevchenko@gmail.com> Link: https://lore.kernel.org/r/20220713193750.4079621-2-horatiu.vultur@microchip.com Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:15 +0000 (18:08 -0500)]
RDMA/irdma: Fix setting of QP context err_rq_idx_valid field
Setting the err_rq_idx_valid field in the QP context when the AE source of
the AEQE is not associated with an RQ causes the firmware flush to fail.
Set the err_rq_idx_valid field in the QP context only if it is associated
with an RQ. Additionally, clean up the redundant setting of this field in
irdma_process_aeq.
Mustafa Ismail [Tue, 5 Jul 2022 23:08:14 +0000 (18:08 -0500)]
RDMA/irdma: Fix VLAN connection with wildcard address
When an application listens on a wildcard address, and there are VLAN and
non-VLAN IP addresses, iWARP connection establishment can fail if the listen
node's VLAN ID does not match.
Fix this by checking the vlan_id only if the listen node is not a wildcard
listen node.
Fixes: 146b9756f14c ("RDMA/irdma: Add connection manager") Link: https://lore.kernel.org/r/20220705230815.265-7-shiraz.saleem@intel.com Signed-off-by: Mustafa Ismail <mustafa.ismail@intel.com> Signed-off-by: Shiraz Saleem <shiraz.saleem@intel.com> Signed-off-by: Leon Romanovsky <leon@kernel.org>
Mustafa Ismail [Tue, 5 Jul 2022 23:08:13 +0000 (18:08 -0500)]
RDMA/irdma: Fix a window for use-after-free
During a destroy CQ an interrupt may cause processing of a CQE after CQ
resources are freed by irdma_cq_free_rsrc(). Fix this by moving the call
to irdma_cq_free_rsrc() after the irdma_sc_cleanup_ceqes(), which is
called under the cq_lock.
RDMA/irdma: Make resource distribution algorithm more QP oriented
Adapt the resource distribution algorithm in irdma_cfg_fpm_val to be more
QP oriented. If the configuration is too big for the available memory,
trim the MRs and PBLEs first before trimming the QPs. This also avoids
having to double the number of QPs requested as input to the algorithm for
GEN1 devices.
Christian König [Fri, 15 Jul 2022 07:57:22 +0000 (09:57 +0200)]
drm/ttm: fix locking in vmap/vunmap TTM GEM helpers
I've stumbled over this while reviewing patches for DMA-buf and it looks
like we completely messed the locking up here.
In general, most TTM functions should only be called while holding the
appropriate BO resv lock. Without this we could break the internal
buffer object state here.
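A hedged sketch of the intended locking (helper name simplified): take the BO's reservation lock around the TTM vmap call, and likewise for vunmap.

  static int gem_ttm_vmap_sketch(struct drm_gem_object *gem, struct iosys_map *map)
  {
          struct ttm_buffer_object *bo =
                  container_of(gem, struct ttm_buffer_object, base);
          int ret;

          dma_resv_lock(gem->resv, NULL);         /* TTM expects the resv lock held */
          ret = ttm_bo_vmap(bo, map);
          dma_resv_unlock(gem->resv);

          return ret;
  }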
Michael Walle [Sat, 26 Mar 2022 19:40:28 +0000 (20:40 +0100)]
ARM: dts: lan966x: fix sys_clk frequency
The sys_clk frequency is 165.625MHz. The register reference of the
Generic Clock controller lists the CPU clock as 600MHz, the DDR clock as
300MHz and the SYS clock as 162.5MHz. This is wrong. It was first
noticed during the fan driver development and it was measured and
verified via the CLK_MON output of the SoC which can be configured to
output sys_clk/64.
The core PLL settings (which drives the SYS clock) seems to be as
follows:
DIVF = 52
DIVQ = 3
DIVR = 1
With a reference clock of 25MHz, this means we have a post-divider clock
Fpfd = Fref / (DIVR + 1) = 25MHz / (1 + 1) = 12.5MHz
The resulting VCO frequency is then
Fvco = Fpfd * (DIVF + 1) * 2 = 12.5MHz * (52 + 1) * 2 = 1325MHz
And the output frequency is
Fout = Fvco / 2^DIVQ = 1325MHz / 2^3 = 165.625MHz
This all fits within the constraints of the PLL:
10MHz <= Fpfd <= 200MHz
20MHz <= Fout <= 1000MHz
1000MHz <= Fvco <= 2000MHz
Fixes: 290deaa10c50 ("ARM: dts: add DT for lan966 SoC and 2-port board pcb8291") Signed-off-by: Michael Walle <michael@walle.cc> Reviewed-by: Kavyasree Kotagiri <kavyasree.kotagiri@microchip.com> Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com> Link: https://lore.kernel.org/r/20220326194028.2945985-1-michael@walle.cc
There is no need to EXPORT the functions hwa742_update_window_async()
and omapfb_update_window_async() since they are not used anywhere inside
or outside the kernel tree.
As of v2.1.0, falcon_decode_var() contains a quirk to fix a rounding
error, as explained by Günther Kelleter on Fri, 30 Aug 1996:
This diff removes the now obsolete Falcon video option "pwrsave", and
fixes a rounding error that is triggered by the resolution switching X
server (those who use the pixel clock value 39722 in their /etc/fb.modes
should change it to 39721).
However, this causes the modified video mode returned by
falcon_decode_var() to not match the video mode returned by
falcon_encode_var(). Fix this by dropping the quirk.
Unfortunately /etc/fb.modes in fbset was never updated, so the
"640x480-60" mode still contains the wrong pixclock.
Hence this change may introduce a regression.
The pixclock values in the vga and vga70 modes are wrong, as they should
use the 25.175 MHz clock instead of the 32 MHz clock.
Swap the left and right margins to match f25.{right,left} (struct
pixel_clock declares them in a different order), and update the hsync
lengths to match what the driver programs by default.
Correct the (wrong) floating-point vrefresh value for the vga mode.
The red, green, and blue color values are 16-bit, while the external
graphics hardware registers are 8-bit.
Add the missing conversion from 16-bit to 8-bit.
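An illustrative sketch of the conversion in a .fb_setcolreg implementation (driver specifics omitted):

  static int ext_setcolreg_sketch(unsigned int regno, unsigned int red,
                                  unsigned int green, unsigned int blue,
                                  unsigned int transp, struct fb_info *info)
  {
          /* fbdev colormap entries are 16-bit; the external DAC wants 8-bit. */
          red   >>= 8;
          green >>= 8;
          blue  >>= 8;

          /* ... write the 8-bit values to the hardware palette registers ... */
          return 0;
  }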
Currently, the "inverse" option does not do anything, as it just sets a
flag which is never used afterwards.
Fix this by calling fb_invert_cmaps() instead, like other drivers do.
As this only affects the console colormap, this does not affect X.
Update the documentation to match the actual behavior.
The fb_pan_display() function in the core already takes care of
validating the panning parameters before calling the driver's
.fb_pan_display() callback, and of updating the panning state
afterwards, so there is no need to repeat that in the driver.
video: fbdev: Make *fb_setup() and *fb_init() static
The various *fb_setup() and *fb_init() functions are only used locally,
so they should be static. Most of them do not need forward
declarations, so remove these where appropriate.
The fb_pan_display() function in the core already takes care of
validating most panning parameters before calling the driver's
.fb_pan_display() callback, and of updating the panning state
afterwards, so there is no need to repeat that in the driver.
swiotlb: ensure a segment doesn't cross the area boundary
Free slots tracking assumes that slots in a segment can be allocated to
fulfill a request. This implies that slots in a segment should belong to
the same area. Although the possibility of a violation is low, it is better
to explicitly enforce that segments won't span multiple areas by adjusting the
number of slabs when configuring areas.
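A hedged sketch of the adjustment (the helper name is assumed): round the per-area slab count up to a whole number of IO_TLB_SEGSIZE segments.

  static unsigned long nslabs_align_to_areas(unsigned long nslabs,
                                             unsigned int nareas)
  {
          unsigned long per_area = DIV_ROUND_UP(nslabs, nareas);

          /* Each area holds an integral number of segments, so a segment can
           * never straddle two independently-locked areas. */
          return (unsigned long)nareas * ALIGN(per_area, IO_TLB_SEGSIZE);
  }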
Signed-off-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
default_nslabs is rounded up in two places with exactly the same comments.
Add a simple wrapper to reduce the duplicated code/comments. This is
preparatory to adding more logic to the round-up.
No functional change intended.
Signed-off-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
Commit 20347fca71a3 ("swiotlb: split up the global swiotlb lock") splits
io_tlb_mem into multiple areas. Each area has its own lock and index. The
global ones are no longer used, so remove them.
Signed-off-by: Chao Gao <chao.gao@intel.com> Signed-off-by: Christoph Hellwig <hch@lst.de>
Dan Carpenter [Fri, 15 Jul 2022 08:19:50 +0000 (11:19 +0300)]
swiotlb: fix use after free on error handling path
Don't dereference "mem" after it has been freed. Flip the
two kfree()s around to address this bug.
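The shape of the fix, sketched with the surrounding code simplified: free the member array before the structure that owns the pointer.

  if (!mem->slots) {
          kfree(mem->areas);      /* reads 'mem', so it must come first */
          kfree(mem);             /* now it is safe to free the container */
          return -ENOMEM;
  }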
Fixes: 26ffb91fa5e0 ("swiotlb: split up the global swiotlb lock") Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Christoph Hellwig <hch@lst.de>