3 years agoirqchip/mmp: Declare init functions in common header file
Ben Dooks [Sun, 24 Jul 2022 22:21:52 +0000 (23:21 +0100)] 
irqchip/mmp: Declare init functions in common header file

The functions icu_init_irq and mmp2_init_icu are exported
from this code, so declare them in the header file to avoid
the following sparse warnings:

drivers/irqchip/irq-mmp.c:248:13: warning: symbol 'icu_init_irq' was not declared. Should it be static?
drivers/irqchip/irq-mmp.c:271:13: warning: symbol 'mmp2_init_icu' was not declared. Should it be static?
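
A minimal sketch of the shared declarations this implies (the header path is
omitted here; void(void) signatures are assumed from the driver's init usage):

  /* declared once in the common mmp irqchip header, included by the users */
  void icu_init_irq(void);
  void mmp2_init_icu(void);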

Signed-off-by: Ben Dooks <ben-linux@fluff.org>
[maz: fixup commit message]
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/20220724222152.551850-1-ben-linux@fluff.org
3 years agox86/purgatory: Omit use of bin2c
Masahiro Yamada [Mon, 25 Jul 2022 02:08:12 +0000 (11:08 +0900)] 
x86/purgatory: Omit use of bin2c

The .incbin assembler directive is much faster than bin2c + $(CC).

Do similar refactoring as in

  4c0f032d4963 ("s390/purgatory: Omit use of bin2c").

Please note the .quad directive matches size_t in C (both are 8 bytes)
because the purgatory is compiled only for the 64-bit kernel
(KEXEC_FILE depends on X86_64).

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20220725020812.622255-2-masahiroy@kernel.org
3 years agox86/purgatory: Hard-code obj-y in Makefile
Masahiro Yamada [Mon, 25 Jul 2022 02:08:11 +0000 (11:08 +0900)] 
x86/purgatory: Hard-code obj-y in Makefile

arch/x86/Kbuild guards the entire purgatory/ directory, and
CONFIG_KEXEC_FILE is bool type.

$(CONFIG_KEXEC_FILE) is always 'y' when this directory is being built.

Signed-off-by: Masahiro Yamada <masahiroy@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Link: https://lore.kernel.org/r/20220725020812.622255-1-masahiroy@kernel.org
3 years agodt-bindings: arm: at91: add lan966 pcb8309 board
Horatiu Vultur [Fri, 22 Jul 2022 13:18:35 +0000 (15:18 +0200)] 
dt-bindings: arm: at91: add lan966 pcb8309 board

Add documentation for Microchip LAN9662 PCB8309.

Signed-off-by: Horatiu Vultur <horatiu.vultur@microchip.com>
Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Claudiu Beznea <claudiu.beznea@microchip.com>
Link: https://lore.kernel.org/r/20220722131836.2377720-2-horatiu.vultur@microchip.com
3 years agoALSA: hda/realtek: Enable speaker and mute LEDs for HP laptops
Kai-Heng Feng [Tue, 19 Jul 2022 14:20:14 +0000 (22:20 +0800)] 
ALSA: hda/realtek: Enable speaker and mute LEDs for HP laptops

Two more HP laptops use the cs35l41 amp for the speaker and GPIO for the
mute LEDs.

So use the existing quirk to enable them accordingly.

[ Sort the entries at the SSID order by tiwai ]

Signed-off-by: Kai-Heng Feng <kai.heng.feng@canonical.com>
Reviewed-by: Lucas Tanure <tanureal@opensource.cirrus.com>
Link: https://lore.kernel.org/r/20220719142015.244426-1-kai.heng.feng@canonical.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agoMerge branch 'for-linus' into for-next
Takashi Iwai [Mon, 25 Jul 2022 06:35:23 +0000 (08:35 +0200)] 
Merge branch 'for-linus' into for-next

Merge 5.19-rc devel branch for applying HD-audio quirk patches more
cleanly.

Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agoALSA: hda: cs35l41: Fix build error unused-function
Ren Zhijie [Mon, 25 Jul 2022 02:36:11 +0000 (10:36 +0800)] 
ALSA: hda: cs35l41: Fix build error unused-function

If CONFIG_PM_SLEEP is not set,
make ARCH=x86_64 CROSS_COMPILE=x86_64-linux-gnu- will fail like this:

sound/pci/hda/cs35l41_hda.c:583:12: error: ‘cs35l41_runtime_resume’ defined but not used [-Werror=unused-function]
 static int cs35l41_runtime_resume(struct device *dev)
            ^~~~~~~~~~~~~~~~~~~~~~
sound/pci/hda/cs35l41_hda.c:565:12: error: ‘cs35l41_runtime_suspend’ defined but not used [-Werror=unused-function]
 static int cs35l41_runtime_suspend(struct device *dev)
            ^~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
make[3]: *** [sound/pci/hda/cs35l41_hda.o] Error 1

Commit 1a3c7bb08826 ("PM: core: Add new *_PM_OPS macros, deprecate old
ones") added the new macro RUNTIME_PM_OPS; use it to fix this
unused-function problem.

Fixes: 1873ebd30cc8 ("ALSA: hda: cs35l41: Support Hibernation during Suspend")
Signed-off-by: Ren Zhijie <renzhijie2@huawei.com>
Link: https://lore.kernel.org/r/20220725023611.57055-1-renzhijie2@huawei.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agoALSA: hiface: fix repeated words in comments
wangjianli [Sun, 24 Jul 2022 07:18:29 +0000 (15:18 +0800)] 
ALSA: hiface: fix repeated words in comments

Delete the redundant word 'in'.

Signed-off-by: wangjianli <wangjianli@cdjrlc.com>
Link: https://lore.kernel.org/r/20220724071829.11117-1-wangjianli@cdjrlc.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agoALSA: usb/6fire: fix repeated words in comments
wangjianli [Sun, 24 Jul 2022 07:16:44 +0000 (15:16 +0800)] 
ALSA: usb/6fire: fix repeated words in comments

Delete the redundant word 'in'.

Signed-off-by: wangjianli <wangjianli@cdjrlc.com>
Link: https://lore.kernel.org/r/20220724071644.10630-1-wangjianli@cdjrlc.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agoALSA: asihpi: fix repeated words in comments
wangjianli [Sun, 24 Jul 2022 07:14:13 +0000 (15:14 +0800)] 
ALSA: asihpi: fix repeated words in comments

Delete the redundant word 'in'.

Signed-off-by: wangjianli <wangjianli@cdjrlc.com>
Link: https://lore.kernel.org/r/20220724071413.10085-1-wangjianli@cdjrlc.com
Signed-off-by: Takashi Iwai <tiwai@suse.de>
3 years agonvme-pci: Crucial P2 has bogus namespace ids
Tobias Gruetzmacher [Fri, 22 Jul 2022 17:05:57 +0000 (19:05 +0200)] 
nvme-pci: Crucial P2 has bogus namespace ids

This adds a quirk for the Crucial P2.

Signed-off-by: Tobias Gruetzmacher <tobias-git@23.gs>
Signed-off-by: Christoph Hellwig <hch@lst.de>
3 years agoMerge tag 'v5.19-rc7' into fixes
Michael Ellerman [Mon, 25 Jul 2022 03:49:22 +0000 (13:49 +1000)] 
Merge tag 'v5.19-rc7' into fixes

Merge v5.19-rc7 into fixes to bring in:
  d11219ad53dc ("amdgpu: disable powerpc support for the newer display engine")

3 years agopowerpc/mobility: wait for memory transfer to complete
Laurent Dufour [Wed, 13 Jul 2022 15:47:26 +0000 (17:47 +0200)] 
powerpc/mobility: wait for memory transfer to complete

In pseries_migration_partition(), loop until the memory transfer is
complete. This way the calling drmgr process will not exit earlier,
allowing callbacks to be run only once the migration is fully completed.

If the VASI state is read after the hypervisor has completed the
migration, the HCALL returns H_PARAMETER. We can safely assume that the
memory transfer is complete if this happens.

This will also allow managing the NMI watchdog state in the next commits.
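
A purely illustrative sketch of the waiting loop described above (the helper
and state names here are hypothetical, not the actual pseries code):

  /* hypothetical helper and state names; only the H_PARAMETER case mirrors the text */
  for (;;) {
          rc = read_vasi_state(session_id, &state);      /* hypothetical helper */
          if (rc == H_PARAMETER)  /* hypervisor finished the migration: done */
                  break;
          if (rc || state == VASI_STATE_COMPLETED)       /* hypothetical state */
                  break;
          msleep(500);
  }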

Signed-off-by: Laurent Dufour <ldufour@linux.ibm.com>
Reviewed-by: Nathan Lynch <nathanl@linux.ibm.com>
Reviewed-by: Nicholas Piggin <npiggin@gmail.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220713154729.80789-2-ldufour@linux.ibm.com
3 years agoselftests/powerpc/ptrace: Add peek/poke of FPRs
Michael Ellerman [Mon, 27 Jun 2022 14:02:39 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Add peek/poke of FPRs

Currently the ptrace-gpr test only tests the GET/SET(FP)REGS ptrace
APIs. But there's an alternate (older) API, called PEEK/POKEUSR.

Add some minimal testing of PEEK/POKEUSR of the FPRs. This is sufficient
to detect the bug that was fixed recently in the 32-bit ptrace FPR
handling.

Depends-on: 8e1278444446 ("powerpc/32: Fix overread/overwrite of thread_struct via ptrace")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-13-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Use more interesting values
Michael Ellerman [Mon, 27 Jun 2022 14:02:38 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Use more interesting values

The ptrace-gpr test uses fixed values to test that registers can be
read/written via ptrace. In particular it sets all GPRs to 1, which
means the test could miss some types of bugs - eg. if the kernel was
only returning the low word.

So generate some random values at startup and use those instead.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-12-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Make child errors more obvious
Michael Ellerman [Mon, 27 Jun 2022 14:02:37 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Make child errors more obvious

Use the FAIL_IF() macro so that errors in the child report a line
number, rather than just silently exiting.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-11-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Do more of ptrace-gpr in asm
Michael Ellerman [Mon, 27 Jun 2022 14:02:36 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Do more of ptrace-gpr in asm

The ptrace-gpr test includes some inline asm to load GPR and FPR
registers. It then goes back to C to wait for the parent to trace it and
then checks register contents.

The split between inline asm and C is fragile, it relies on the compiler
not using any non-volatile GPRs after the inline asm block. It also
requires a very large and unwieldy inline asm block.

So convert the logic to set registers, wait, and store registers to a
single asm function, meaning there's no window for the compiler to
intervene.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-10-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Build the ptrace-gpr test as 32-bit when possible
Michael Ellerman [Mon, 27 Jun 2022 14:02:35 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Build the ptrace-gpr test as 32-bit when possible

The ptrace-gpr test can now be built 32-bit, so do that if that's the
compiler default rather than forcing a 64-bit build.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-9-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Convert to load/store doubles
Michael Ellerman [Mon, 27 Jun 2022 14:02:34 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Convert to load/store doubles

Some of the ptrace tests check the contents of floating point
registers. Currently these use float, which is always 4 bytes, but the
ptrace API supports saving/restoring 8 bytes per register, so switch to
using doubles to exercise the code more fully.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-8-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Drop unused load_fpr_single_precision()
Michael Ellerman [Mon, 27 Jun 2022 14:02:33 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Drop unused load_fpr_single_precision()

This function is never called, drop it.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-7-mpe@ellerman.id.au
3 years agoselftests/powerpc: Add 32-bit support to asm helpers
Michael Ellerman [Mon, 27 Jun 2022 14:02:32 +0000 (00:02 +1000)] 
selftests/powerpc: Add 32-bit support to asm helpers

Add support for 32-bit builds to the asm helpers.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-6-mpe@ellerman.id.au
3 years agoselftests/powerpc: Don't save TOC by default in asm helpers
Michael Ellerman [Mon, 27 Jun 2022 14:02:31 +0000 (00:02 +1000)] 
selftests/powerpc: Don't save TOC by default in asm helpers

There are some asm helpers for creating/popping stack frames in
basic_asm.h. They always save/restore r2 (TOC pointer), but none of the
selftests change r2, so it's unnecessary to save it by default.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-5-mpe@ellerman.id.au
3 years agoselftests/powerpc: Don't save CR by default in asm helpers
Michael Ellerman [Mon, 27 Jun 2022 14:02:30 +0000 (00:02 +1000)] 
selftests/powerpc: Don't save CR by default in asm helpers

There are some asm helpers for creating/popping stack frames in
basic_asm.h. They always save/restore CR, but none of the selftests
touch non-volatile CR fields, so it's unnecessary to save them by
default.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-4-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Split CFLAGS better
Michael Ellerman [Mon, 27 Jun 2022 14:02:29 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Split CFLAGS better

Currently all ptrace tests are built 64-bit and with TM enabled.

Only the TM tests need TM enabled, so split those out into a separate
variable so that it can be specified precisely.

Split the rest of the tests into a variable, and add -m64 to CFLAGS for
those tests, so that in a subsequent patch some tests can be made to
build 32-bit.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-3-mpe@ellerman.id.au
3 years agoselftests/powerpc/ptrace: Set LOCAL_HDRS
Michael Ellerman [Mon, 27 Jun 2022 14:02:28 +0000 (00:02 +1000)] 
selftests/powerpc/ptrace: Set LOCAL_HDRS

Set LOCAL_HDRS so header changes cause rebuilds. The lib.mk logic adds
all the headers in LOCAL_HDRS as dependencies, so there's no need to
also list them explicitly.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-2-mpe@ellerman.id.au
3 years agoselftests/powerpc: Ensure 16-byte stack pointer alignment
Michael Ellerman [Mon, 27 Jun 2022 14:02:27 +0000 (00:02 +1000)] 
selftests/powerpc: Ensure 16-byte stack pointer alignment

The PUSH/POP_BASIC_STACK helpers in basic_asm.h do not ensure that the
stack pointer is always 16-byte aligned, which is required per the ABI.

Fix the macros to do the alignment if the caller fails to.

Currently only one caller passes a non-aligned size, tm_signal_self(),
which hasn't been caught in testing, presumably because it's a leaf
function.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220627140239.2464900-1-mpe@ellerman.id.au
3 years agopowerpc: Fix all occurences of duplicate words
Michael Ellerman [Mon, 18 Jul 2022 09:51:58 +0000 (19:51 +1000)] 
powerpc: Fix all occurences of duplicate words

Since commit 87c78b612f4f ("powerpc: Fix all occurences of "the the"")
fixed "the the", there's now a steady stream of patches fixing other
duplicate words.

Just fix them all at once, to save the overhead of dealing with
individual patches for each case.

This leaves a few cases of "that that", which in some contexts is
correct.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Link: https://lore.kernel.org/r/20220718095158.326606-1-mpe@ellerman.id.au
3 years agoMerge branch 'fixes' into next
Michael Ellerman [Mon, 25 Jul 2022 02:04:44 +0000 (12:04 +1000)] 
Merge branch 'fixes' into next

Bring in a build fix for GCC12 from our fixes branch.

3 years agoselftests/io_uring: test zerocopy send
Pavel Begunkov [Tue, 12 Jul 2022 20:52:51 +0000 (21:52 +0100)] 
selftests/io_uring: test zerocopy send

Add selftests for io_uring zerocopy sends and io_uring's notification
infrastructure. It's largely influenced by msg_zerocopy and uses it on
the receive side.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/03d5ec78061cf52db420f88ed0b48eb8f47ce9f7.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: enable managed frags with register buffers
Pavel Begunkov [Tue, 12 Jul 2022 20:52:50 +0000 (21:52 +0100)] 
io_uring: enable managed frags with register buffers

io_uring's registered buffer infrastructure has a performant way of
pinning pages, so let's use SKBFL_MANAGED_FRAG_REFS when our requests
are purely registered-buffer backed.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/278731d3f20caf346cfc025fbee0b4c9ee4ed751.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add zc notification flush requests
Pavel Begunkov [Tue, 12 Jul 2022 20:52:49 +0000 (21:52 +0100)] 
io_uring: add zc notification flush requests

Overlay notification control onto IORING_OP_RSRC_UPDATE (former
IORING_OP_FILES_UPDATE). It allows flushing a range of zc notifications
from slots with indexes [sqe->off, sqe->off+sqe->len). If sqe->arg is
not zero, it also copies sqe->arg as a new tag for all flushed
notifications.

Note, it doesn't flush a notification of a slot if there were no
requests attached to it (since the last flush or registration).

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/df13e2363400682a73dd9e71c3b990b8d1ff0333.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: rename IORING_OP_FILES_UPDATE
Pavel Begunkov [Tue, 12 Jul 2022 20:52:48 +0000 (21:52 +0100)] 
io_uring: rename IORING_OP_FILES_UPDATE

IORING_OP_FILES_UPDATE will be a more generic opcode serving different
resource types, so rename it to IORING_OP_RSRC_UPDATE and add subtype
handling.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0a907133907d9af3415a8a7aa1802c6aa97c03c6.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: flush notifiers after sendzc
Pavel Begunkov [Tue, 12 Jul 2022 20:52:47 +0000 (21:52 +0100)] 
io_uring: flush notifiers after sendzc

Allow flushing notifiers as part of a sendzc request by setting the
IORING_SENDZC_FLUSH flag. When the sendzc request succeeds, it will
flush the used [active] notifier.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e0b4d9a6797e2fd6092824fe42953db7a519bbc8.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: sendzc with fixed buffers
Pavel Begunkov [Tue, 12 Jul 2022 20:52:46 +0000 (21:52 +0100)] 
io_uring: sendzc with fixed buffers

Allow zerocopy sends to use fixed buffers. There is an optimisation for
this case: the network layer doesn't need to reference the pages (see
SKBFL_MANAGED_FRAG_REFS), so io_uring has to ensure the validity of the
fixed buffers until the notifier is released.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/e1d8bd1b5934e541d90c1824eb4020ae3f5f43f3.1657643355.git.asml.silence@gmail.com
[axboe: fold in 32-bit pointer cast warning fix]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: allow to pass addr into sendzc
Pavel Begunkov [Tue, 12 Jul 2022 20:52:45 +0000 (21:52 +0100)] 
io_uring: allow to pass addr into sendzc

Allow specifying an address for zerocopy sends, making them more like
sendto(2).

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/70417a8f7c5b51ab454690bae08adc0c187f89e8.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: account locked pages for non-fixed zc
Pavel Begunkov [Tue, 12 Jul 2022 20:52:44 +0000 (21:52 +0100)] 
io_uring: account locked pages for non-fixed zc

Fixed buffers are RLIMIT_MEMLOCK accounted; however, that doesn't cover
iovec-based zerocopy sends. Do the accounting on the io_uring side.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/19b6e3975440f59f1f6199c7ee7acf977b4eecdc.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: wire send zc request type
Pavel Begunkov [Tue, 12 Jul 2022 20:52:43 +0000 (21:52 +0100)] 
io_uring: wire send zc request type

Add a new io_uring opcode IORING_OP_SENDZC. The main distinction from
IORING_OP_SEND is that the user should specify a notification slot
index in sqe::notification_idx and the buffers are safe to reuse only
when the used notification is flushed and completes.
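
A hedged sketch, based only on the description above, of how userspace might
issue such a request in this series (assuming an initialised ring, a connected
socket, a buffer and a registered notification slot; the SQE field layout
follows this patch set and may differ from what was eventually merged):

  struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

  io_uring_prep_send(sqe, sockfd, buf, len, 0);  /* fill the usual send fields */
  sqe->opcode = IORING_OP_SENDZC;                /* switch to the zerocopy opcode */
  sqe->notification_idx = slot;                  /* notification slot, per this series */
  io_uring_submit(&ring);
  /* buf must stay untouched until the slot's notification is flushed and completes */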

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a80387c6a68ce9cf99b3b6ef6f71068468761fb7.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add notification slot registration
Pavel Begunkov [Tue, 12 Jul 2022 20:52:42 +0000 (21:52 +0100)] 
io_uring: add notification slot registration

Let userspace register and unregister notification slots.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a0aa8161fe3ebb2a4cc6e5dbd0cffb96e6881cf5.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add rsrc referencing for notifiers
Pavel Begunkov [Tue, 12 Jul 2022 20:52:41 +0000 (21:52 +0100)] 
io_uring: add rsrc referencing for notifiers

In preparation for zerocopy sends with fixed buffers, make notifiers
reference the rsrc node to protect the used fixed buffers. We can't just
grab it for a send request, as notifiers can outlive the requests that
used them.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3cd7a01d26837945b6982fa9cf15a63230f2ed4f.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: complete notifiers in tw
Pavel Begunkov [Tue, 12 Jul 2022 20:52:40 +0000 (21:52 +0100)] 
io_uring: complete notifiers in tw

We need a task context to post CQEs but using wq is too expensive.
Try to complete notifiers using task_work and fall back to wq if that fails.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/089799ab665b10b78fdc614ae6d59fa7ef0d5f91.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: cache struct io_notif
Pavel Begunkov [Tue, 12 Jul 2022 20:52:39 +0000 (21:52 +0100)] 
io_uring: cache struct io_notif

kmalloc'ing struct io_notif is too expensive when done frequently, so
cache them like many other resources in io_uring. Keep two lists: the
first one, from which we get notifiers, is protected by ->uring_lock.
The second, to which we queue released notifiers, is protected by
->completion_lock. Then we splice one list into the other when needed.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/9dec18f7fcbab9f4bd40b96e5ae158b119945230.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add zc notification infrastructure
Pavel Begunkov [Tue, 12 Jul 2022 20:52:38 +0000 (21:52 +0100)] 
io_uring: add zc notification infrastructure

Add the internal part of send zerocopy notifications. There are two main
structures. The first one is struct io_notif, which carries a struct
ubuf_info inside and maps 1:1 to it. io_uring will bind a number of
zerocopy send requests to it and ask to complete (aka flush) it. When it
is flushed and all attached requests and skbs complete, it'll generate
one and only one CQE. These are intended to be passed into the network
layer as struct msghdr::msg_ubuf.

The second concept is notification slots. Userspace will be able to
register an array of slots and subsequently address them by their index
in the array. Slots are independent of each other. Each slot can have
only one notifier at a time (called the active notifier) but many
notifiers over its lifetime. While active, a notifier is not going to
post any completions, but userspace can attach requests to it by
specifying the corresponding slot while issuing send zc requests.
Eventually, userspace will want to "flush" the notifier, losing any way
to attach new requests to it; however, it can use the next automatically
added notifier of this slot or of any other slot.

When the network layer is done with all enqueued skbs attached to a
notifier and doesn't need the user data specified in them, the flushed
notifier will post a CQE.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3ecf54c31a85762bf679b0a432c9f43ecf7e61cc.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: export io_put_task()
Pavel Begunkov [Tue, 12 Jul 2022 20:52:37 +0000 (21:52 +0100)] 
io_uring: export io_put_task()

Make io_put_task() available to non-core parts of io_uring; we'll need
it for the notification infrastructure.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/3686807d4c03b72e389947b0e8692d4d44334ef0.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: initialise msghdr::msg_ubuf
Pavel Begunkov [Tue, 12 Jul 2022 20:52:36 +0000 (21:52 +0100)] 
io_uring: initialise msghdr::msg_ubuf

Initialise newly added ->msg_ubuf in io_recv() and io_send().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/b8f9f263875a4a36e7f26cc5d55ebe315308f57d.1657643355.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'for-5.20/io_uring' into for-5.20/io_uring-zerocopy-send
Jens Axboe [Mon, 25 Jul 2022 00:41:03 +0000 (18:41 -0600)] 
Merge branch 'for-5.20/io_uring' into for-5.20/io_uring-zerocopy-send

* for-5.20/io_uring: (716 commits)
  io_uring: ensure REQ_F_ISREG is set async offload
  net: fix compat pointer in get_compat_msghdr()
  io_uring: Don't require reinitable percpu_ref
  io_uring: fix types in io_recvmsg_multishot_overflow
  io_uring: Use atomic_long_try_cmpxchg in __io_account_mem
  io_uring: support multishot in recvmsg
  net: copy from user before calling __get_compat_msghdr
  net: copy from user before calling __copy_msghdr
  io_uring: support 0 length iov in buffer select in compat
  io_uring: fix multishot ending when not polled
  io_uring: add netmsg cache
  io_uring: impose max limit on apoll cache
  io_uring: add abstraction around apoll cache
  io_uring: move apoll cache to poll.c
  io_uring: consolidate hash_locked io-wq handling
  io_uring: clear REQ_F_HASH_LOCKED on hash removal
  io_uring: don't race double poll setting REQ_F_ASYNC_DATA
  io_uring: don't miss setting REQ_F_DOUBLE_POLL
  io_uring: disable multishot recvmsg
  io_uring: only trace one of complete or overflow
  ...

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoMerge branch 'io_uring-zerocopy-send' of git://git.kernel.org/pub/scm/linux/kernel...
Jens Axboe [Mon, 25 Jul 2022 00:40:31 +0000 (18:40 -0600)] 
Merge branch 'io_uring-zerocopy-send' of git://git.kernel.org/pub/scm/linux/kernel/git/kuba/linux into for-5.20/io_uring-zerocopy-send

Merge prep net series for io_uring tx zc from Jakub's tree.

* 'io_uring-zerocopy-send' of git://git.kernel.org/pub/scm/linux/kernel/git/kuba/linux:
  net: fix uninitialised msghdr->sg_from_iter
  tcp: support externally provided ubufs
  ipv6/udp: support externally provided ubufs
  ipv4/udp: support externally provided ubufs
  net: introduce __skb_fill_page_desc_noacc
  net: introduce managed frags infrastructure
  net: Allow custom iter handler in msghdr
  skbuff: carry external ubuf_info in msghdr
  skbuff: add SKBFL_DONT_ORPHAN flag
  skbuff: don't mix ubuf_info from different sources
  ipv6: avoid partial copy for zc
  ipv4: avoid partial copy for zc

3 years agomm: honor FGP_NOWAIT for page cache page allocation
Jens Axboe [Fri, 1 Jul 2022 20:04:43 +0000 (14:04 -0600)] 
mm: honor FGP_NOWAIT for page cache page allocation

If we're creating a page cache page with FGP_CREAT but FGP_NOWAIT is
set, we should dial back the gfp flags to avoid frivolous blocking
which is trivial to hit in low memory conditions:

[   10.117661]  __schedule+0x8c/0x550
[   10.118305]  schedule+0x58/0xa0
[   10.118897]  schedule_timeout+0x30/0xdc
[   10.119610]  __wait_for_common+0x88/0x114
[   10.120348]  wait_for_completion+0x1c/0x24
[   10.121103]  __flush_work.isra.0+0x16c/0x19c
[   10.121896]  flush_work+0xc/0x14
[   10.122496]  __drain_all_pages+0x144/0x218
[   10.123267]  drain_all_pages+0x10/0x18
[   10.123941]  __alloc_pages+0x464/0x9e4
[   10.124633]  __folio_alloc+0x18/0x3c
[   10.125294]  __filemap_get_folio+0x17c/0x204
[   10.126084]  iomap_write_begin+0xf8/0x428
[   10.126829]  iomap_file_buffered_write+0x144/0x24c
[   10.127710]  xfs_file_buffered_write+0xe8/0x248
[   10.128553]  xfs_file_write_iter+0xa8/0x120
[   10.129324]  io_write+0x16c/0x38c
[   10.129940]  io_issue_sqe+0x70/0x1cc
[   10.130617]  io_queue_sqe+0x18/0xfc
[   10.131277]  io_submit_sqes+0x5d4/0x600
[   10.131946]  __arm64_sys_io_uring_enter+0x224/0x600
[   10.132752]  invoke_syscall.constprop.0+0x70/0xc0
[   10.133616]  do_el0_svc+0xd0/0x118
[   10.134238]  el0_svc+0x78/0xa0

Clear the IO, FS, and reclaim flags, mark the allocation as GFP_NOWAIT,
and add __GFP_NOWARN to avoid polluting dmesg with pointless allocation
failures. A caller with FGP_NOWAIT must be expected to handle the
resulting -EAGAIN return and retry from a suitable context without
NOWAIT set.
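
A minimal sketch of the gfp adjustment described above (placing it in the
FGP_CREAT allocation path of __filemap_get_folio() is an assumption):

  if (fgp_flags & FGP_NOWAIT) {
          /* no direct reclaim, no IO or FS recursion: fail fast instead of blocking */
          gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_IO | __GFP_FS);
          /* and don't warn about the (expected) allocation failures */
          gfp |= GFP_NOWAIT | __GFP_NOWARN;
  }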

Reviewed-by: Shakeel Butt <shakeelb@google.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoxfs: Add async buffered write support
Stefan Roesch [Thu, 23 Jun 2022 17:51:57 +0000 (10:51 -0700)] 
xfs: Add async buffered write support

This adds async buffered write support to XFS. Async buffered write
requests will return -EAGAIN if the ilock cannot be obtained
immediately.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-15-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoxfs: Specify lockmode when calling xfs_ilock_for_iomap()
Stefan Roesch [Thu, 23 Jun 2022 17:51:56 +0000 (10:51 -0700)] 
xfs: Specify lockmode when calling xfs_ilock_for_iomap()

This patch changes the helper function xfs_ilock_for_iomap such that the
lock mode must be passed in.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-14-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: Add tracepoint for short writes
Stefan Roesch [Thu, 16 Jun 2022 21:22:19 +0000 (14:22 -0700)] 
io_uring: Add tracepoint for short writes

This adds the io_uring_short_write tracepoint to io_uring. A short write
is issued if not all pages that are required for a write are in the page
cache and the async buffered write has to return -EAGAIN.

Signed-off-by: Stefan Roesch <shr@fb.com>
Link: https://lore.kernel.org/r/20220616212221.2024518-13-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix issue with io_write() not always undoing sb_start_write()
Jens Axboe [Fri, 24 Jun 2022 16:24:45 +0000 (10:24 -0600)] 
io_uring: fix issue with io_write() not always undoing sb_start_write()

This is actually an older issue, but we never used to hit the -EAGAIN
path before having done sb_start_write(). Make sure that we always call
kiocb_end_write() if we need to retry the write, so that we keep the
calls to sb_start_write() etc balanced.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: Add support for async buffered writes
Stefan Roesch [Thu, 16 Jun 2022 21:22:18 +0000 (14:22 -0700)] 
io_uring: Add support for async buffered writes

This enables async buffered writes in io-uring for the filesystems that
support them. Buffered writes are enabled for blocks that are already in
the page cache or can be acquired with noio.

Signed-off-by: Stefan Roesch <shr@fb.com>
Link: https://lore.kernel.org/r/20220616212221.2024518-12-shr@fb.com
[axboe: adapt to 5.20 branch]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofs: Add async write file modification handling.
Stefan Roesch [Thu, 23 Jun 2022 17:51:53 +0000 (10:51 -0700)] 
fs: Add async write file modification handling.

This adds a file_modified_async() function that returns -EAGAIN if the
request either requires removing privileges or needs to update the file
modification time. This is required for async buffered writes, so that
the request gets handled in the io worker of io-uring.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-11-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofs: Split off inode_needs_update_time and __file_update_time
Stefan Roesch [Thu, 23 Jun 2022 17:51:52 +0000 (10:51 -0700)] 
fs: Split off inode_needs_update_time and __file_update_time

This splits off the functions inode_needs_update_time() and
__file_update_time() from the function file_update_time().

This is required to support async buffered writes.
No intended functional changes in this patch.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-10-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofs: add __remove_file_privs() with flags parameter
Stefan Roesch [Thu, 23 Jun 2022 17:51:51 +0000 (10:51 -0700)] 
fs: add __remove_file_privs() with flags parameter

This adds the function __remove_file_privs, which allows the caller to
pass the kiocb flags parameter.

No intended functional changes in this patch.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-9-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agofs: add a FMODE_BUF_WASYNC flags for f_mode
Stefan Roesch [Thu, 23 Jun 2022 17:51:50 +0000 (10:51 -0700)] 
fs: add a FMODE_BUF_WASYNC flags for f_mode

This introduces the flag FMODE_BUF_WASYNC. If devices support async
buffered writes, this flag can be set. It also modifies the check in
generic_write_checks to take async buffered writes into consideration.
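
A hedged sketch of what the adjusted nowait check might look like (the exact
condition and its placement in generic_write_checks() are assumptions based on
the description above):

  if (iocb->ki_flags & IOCB_NOWAIT) {
          /* nowait needs either direct IO or a file flagged for async buffered writes */
          if (!(iocb->ki_flags & IOCB_DIRECT) &&
              !(iocb->ki_filp->f_mode & FMODE_BUF_WASYNC))
                  return -EINVAL;
  }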

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Christian Brauner (Microsoft) <brauner@kernel.org>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-8-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoiomap: Return -EAGAIN from iomap_write_iter()
Stefan Roesch [Thu, 23 Jun 2022 17:51:49 +0000 (10:51 -0700)] 
iomap: Return -EAGAIN from iomap_write_iter()

If iomap_write_iter() encounters -EAGAIN, return -EAGAIN to the caller.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-7-shr@fb.com
Reviewed-by: Christoph Hellwig <hch@lst.de>
[axboe: make the suggested ternary edit]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoiomap: Add async buffered write support
Stefan Roesch [Thu, 23 Jun 2022 17:51:48 +0000 (10:51 -0700)] 
iomap: Add async buffered write support

This adds async buffered write support to iomap.

This replaces the call to balance_dirty_pages_ratelimited() with a call
to balance_dirty_pages_ratelimited_flags(). This allows specifying
whether the write request is async or not.

In addition, this moves the above function call to the beginning of
the function. If the function call is at the end of the function and the
decision is made to throttle writes, then there is no request that
io-uring can wait on. By moving it to the beginning of the function, the
write request is not issued, but returns -EAGAIN instead. io-uring will
punt the request and process it in the io-worker.

By moving the function call to the beginning of the function, the write
throttling will happen one page later.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-6-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoiomap: Add flags parameter to iomap_page_create()
Stefan Roesch [Thu, 23 Jun 2022 17:51:47 +0000 (10:51 -0700)] 
iomap: Add flags parameter to iomap_page_create()

Add the kiocb flags parameter to the function iomap_page_create().
Depending on the value of the flags parameter it enables different gfp
flags.

No intended functional changes in this patch.

Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Link: https://lore.kernel.org/r/20220623175157.1715274-5-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agomm: Add balance_dirty_pages_ratelimited_flags() function
Jan Kara [Thu, 23 Jun 2022 17:51:46 +0000 (10:51 -0700)] 
mm: Add balance_dirty_pages_ratelimited_flags() function

This adds the helper function balance_dirty_pages_ratelimited_flags().
It adds the parameter flags to balance_dirty_pages_ratelimited().
The flags parameter is passed to balance_dirty_pages(). For async
buffered writes the flag value will be BDP_ASYNC.

If balance_dirty_pages() gets called for an async buffered write, we don't
want to wait. Instead we need to indicate to the caller that throttling
is needed so that it can stop writing and offload the rest of the write
to a context that can block.

The new helper function is also used by balance_dirty_pages_ratelimited().
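
A minimal sketch of the relationship described above (the exact return
convention of the new helper is an assumption):

  /* the old entry point keeps its behaviour by passing no flags */
  void balance_dirty_pages_ratelimited(struct address_space *mapping)
  {
          balance_dirty_pages_ratelimited_flags(mapping, 0);
  }

  /* an async buffered writer passes BDP_ASYNC and backs off instead of sleeping */
  if (balance_dirty_pages_ratelimited_flags(mapping, BDP_ASYNC))
          return -EAGAIN;  /* offload the rest of the write to a context that can block */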

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Stefan Roesch <shr@fb.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220623175157.1715274-4-shr@fb.com
[axboe: fix kerneltest bot 'ret' issue]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agomm: Move updates of dirty_exceeded into one place
Jan Kara [Thu, 23 Jun 2022 17:51:45 +0000 (10:51 -0700)] 
mm: Move updates of dirty_exceeded into one place

The transition of wb->dirty_exceeded from 0 to 1 happens before we go to
sleep in balance_dirty_pages(), while the transition from 1 to 0 happens
when exiting from balance_dirty_pages(), possibly based on old values.
This does not make a lot of sense, since wb->dirty_exceeded should simply
reflect whether wb is over the dirty limit, and so we should ratelimit
entering balance_dirty_pages() less. Move the two updates together.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Stefan Roesch <shr@fb.com>
Link: https://lore.kernel.org/r/20220623175157.1715274-3-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agomm: Move starting of background writeback into the main balancing loop
Jan Kara [Thu, 23 Jun 2022 17:51:44 +0000 (10:51 -0700)] 
mm: Move starting of background writeback into the main balancing loop

We start background writeback if we are over the background threshold
after exiting the main loop in balance_dirty_pages(). This may result in
basing the decision on already stale values (we may have slept for a
significant amount of time) and it is also inconvenient for the
refactoring needed for async dirty throttling. Move the check into the
main waiting loop.

Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Stefan Roesch <shr@fb.com>
Link: https://lore.kernel.org/r/20220623175157.1715274-2-shr@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: ensure REQ_F_ISREG is set async offload
Jens Axboe [Thu, 21 Jul 2022 15:06:47 +0000 (09:06 -0600)] 
io_uring: ensure REQ_F_ISREG is set async offload

If we're offloading requests directly to io-wq because IOSQE_ASYNC was
set in the sqe, we can miss hashing writes appropriately because we
haven't set REQ_F_ISREG yet. This can cause a performance regression
with buffered writes, as io-wq then no longer correctly serializes writes
to that file.

Ensure that we set the flags in io_prep_async_work(), which will cause
the io-wq work item to be hashed appropriately.

Fixes: 584b0180f0f4 ("io_uring: move read/write file prep state into actual opcode handler")
Link: https://lore.kernel.org/io-uring/20220608080054.GB22428@xsang-OptiPlex-9020/
Reported-by: kernel test robot <oliver.sang@intel.com>
Tested-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonet: fix compat pointer in get_compat_msghdr()
Jens Axboe [Fri, 15 Jul 2022 21:54:47 +0000 (15:54 -0600)] 
net: fix compat pointer in get_compat_msghdr()

A previous change enabled external users to copy the data before
calling __get_compat_msghdr(), but didn't modify get_compat_msghdr() or
__io_compat_recvmsg_copy_hdr() to take that into account. They are both
still passing in the __user pointer rather than the copied version.

Ensure we pass in the kernel struct, not the pointer to the user data.

Link: https://lore.kernel.org/all/46439555-644d-08a1-7d66-16f8f9a320f0@samsung.com/
Fixes: 1a3e4e94a1b9 ("net: copy from user before calling __get_compat_msghdr")
Reported-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: Don't require reinitable percpu_ref
Michal Koutný [Fri, 15 Jul 2022 17:45:01 +0000 (19:45 +0200)] 
io_uring: Don't require reinitable percpu_ref

Commit 8bb649ee1da3 ("io_uring: remove ring quiesce for
io_uring_register") removed the workflow relying on reinit/resurrection
of the percpu_ref; hence, requesting that behaviour at initialization is
a relic.

This is based on code review; it causes no real bug (and theoretically
can't). Technically it's a revert of commit 214828962dea ("io_uring:
initialize percpu refcounters using PERCU_REF_ALLOW_REINIT") but since
the flag omission is now justified, I'm not making this a revert.

Fixes: 8bb649ee1da3 ("io_uring: remove ring quiesce for io_uring_register")
Signed-off-by: Michal Koutný <mkoutny@suse.com>
Acked-by: Roman Gushchin <roman.gushchin@linux.dev>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix types in io_recvmsg_multishot_overflow
Dylan Yudaken [Fri, 15 Jul 2022 13:02:52 +0000 (06:02 -0700)] 
io_uring: fix types in io_recvmsg_multishot_overflow

io_recvmsg_multishot_overflow had incorrect types on non-x64 systems.
It also had an unnecessary INT_MAX check, which could instead be handled
by changing the type of the accumulator to int (also simplifying the
casts).

Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Fixes: a8b38c4ce724 ("io_uring: support multishot in recvmsg")
Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220715130252.610639-1-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: Use atomic_long_try_cmpxchg in __io_account_mem
Uros Bizjak [Thu, 14 Jul 2022 16:33:01 +0000 (18:33 +0200)] 
io_uring: Use atomic_long_try_cmpxchg in __io_account_mem

Use atomic_long_try_cmpxchg() instead of
atomic_long_cmpxchg(*ptr, old, new) == old in __io_account_mem().
The x86 CMPXCHG instruction returns success in the ZF flag, so this
change saves a compare after the cmpxchg (and the related move
instruction in front of the cmpxchg).

Also, atomic_long_try_cmpxchg() implicitly assigns the old *ptr value
to "old" when the cmpxchg fails, enabling further code simplifications.

No functional change intended.
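
A minimal sketch of the pattern change (the counter and variable names here
are illustrative, not necessarily those used in __io_account_mem()):

  /* before: compare the value returned by cmpxchg ourselves */
  do {
          old = atomic_long_read(&acct->pages);
          new = old + nr_pages;
  } while (atomic_long_cmpxchg(&acct->pages, old, new) != old);

  /* after: try_cmpxchg returns a bool and updates 'old' on failure */
  old = atomic_long_read(&acct->pages);
  do {
          new = old + nr_pages;
  } while (!atomic_long_try_cmpxchg(&acct->pages, &old, new));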

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Pavel Begunkov <asml.silence@gmail.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: support multishot in recvmsg
Dylan Yudaken [Thu, 14 Jul 2022 11:02:58 +0000 (04:02 -0700)] 
io_uring: support multishot in recvmsg

Similar to multishot recv, this will require provided buffers to be
used. However recvmsg is much more complex than recv as it has multiple
outputs: specifically flags, name, and control messages.

Support this by introducing a new struct io_uring_recvmsg_out with 4
fields: namelen, controllen and flags match the similar out fields in
msghdr from standard recvmsg(2), and payloadlen is the length of the
payload following the header.
This struct is placed at the start of the returned buffer. Based on what
the user specifies in struct msghdr, the next bytes of the buffer will be
name (the next msg_namelen bytes), and then control (the next
msg_controllen bytes). The payload will come at the end. The return value
in the CQE is the total used size of the provided buffer.
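
A sketch of the struct and buffer layout described above (the field order
shown here is an assumption):

  struct io_uring_recvmsg_out {
          __u32 namelen;
          __u32 controllen;
          __u32 payloadlen;
          __u32 flags;
  };

  /* provided buffer layout after completion:
   *   [struct io_uring_recvmsg_out][name: msg_namelen bytes]
   *   [control: msg_controllen bytes][payload: payloadlen bytes]
   * cqe->res is the total number of bytes of the buffer that were used.
   */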

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220714110258.1336200-4-dylany@fb.com
[axboe: style fixups, see link]
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonet: copy from user before calling __get_compat_msghdr
Dylan Yudaken [Thu, 14 Jul 2022 11:02:57 +0000 (04:02 -0700)] 
net: copy from user before calling __get_compat_msghdr

This is in preparation for multishot receive from io_uring, where it
needs to have access to the original struct user_msghdr.

Functionally this should be a no-op.

Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220714110258.1336200-3-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agonet: copy from user before calling __copy_msghdr
Dylan Yudaken [Thu, 14 Jul 2022 11:02:56 +0000 (04:02 -0700)] 
net: copy from user before calling __copy_msghdr

This is in preparation for multishot receive from io_uring, where it
needs to have access to the original struct user_msghdr.

Functionally this should be a no-op.

Acked-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220714110258.1336200-2-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: support 0 length iov in buffer select in compat
Dylan Yudaken [Fri, 8 Jul 2022 18:18:36 +0000 (11:18 -0700)] 
io_uring: support 0 length iov in buffer select in compat

Match up work done in "io_uring: allow iov_len = 0 for recvmsg and
buffer select", but for the compat code path.

Fixes: a68caad69ce5 ("io_uring: allow iov_len = 0 for recvmsg and buffer select")
Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220708181838.1495428-3-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix multishot ending when not polled
Dylan Yudaken [Fri, 8 Jul 2022 18:18:35 +0000 (11:18 -0700)] 
io_uring: fix multishot ending when not polled

If multishot is not actually polling, then return IOU_OK rather than the
result.
If the result was > 0, this will confuse things further up the call
stack which expect a return <= 0.

Fixes: 1300ebb20286 ("io_uring: multishot recv")
Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220708181838.1495428-2-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add netmsg cache
Jens Axboe [Thu, 7 Jul 2022 20:30:09 +0000 (14:30 -0600)] 
io_uring: add netmsg cache

For recvmsg/sendmsg, if they don't complete inline, we currently need
to allocate a struct io_async_msghdr for each request. This is a
somewhat large struct.

Hook up sendmsg/recvmsg to use the io_alloc_cache. This reduces the
alloc + free overhead considerably, yielding 4-5% of extra performance
running netbench.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: impose max limit on apoll cache
Jens Axboe [Thu, 7 Jul 2022 20:20:54 +0000 (14:20 -0600)] 
io_uring: impose max limit on apoll cache

Caches like this tend to grow to the peak size, and then never get any
smaller. Impose a max limit on the size, to prevent it from growing too
big.

A somewhat randomly chosen 512 is the max size we'll allow the cache
to get. If a batch of frees comes in and would bring it over that, we
simply start kfree'ing the surplus.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add abstraction around apoll cache
Jens Axboe [Thu, 7 Jul 2022 20:16:20 +0000 (14:16 -0600)] 
io_uring: add abstraction around apoll cache

In preparation for adding limits, and one more user, abstract out the
core bits of the allocation+free cache.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: move apoll cache to poll.c
Jens Axboe [Thu, 7 Jul 2022 17:18:33 +0000 (11:18 -0600)] 
io_uring: move apoll cache to poll.c

This is where it's used, move the flush handler in there.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: consolidate hash_locked io-wq handling
Pavel Begunkov [Thu, 7 Jul 2022 14:13:17 +0000 (15:13 +0100)] 
io_uring: consolidate hash_locked io-wq handling

Don't duplicate code disabling REQ_F_HASH_LOCKED for IO_URING_F_UNLOCKED
(i.e. io-wq), move the handling into __io_arm_poll_handler().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/0ff0ffdfaa65b3d536131535c3dad3c63d9b7bb0.1657203020.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: clear REQ_F_HASH_LOCKED on hash removal
Pavel Begunkov [Thu, 7 Jul 2022 14:13:16 +0000 (15:13 +0100)] 
io_uring: clear REQ_F_HASH_LOCKED on hash removal

Instead of clearing REQ_F_HASH_LOCKED while arming a poll, unset the bit
when we're removing the entry from the table in io_poll_tw_hash_eject().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/02e48bb88d6f1480c94ac2924c43ad1fbd48e92a.1657203020.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: don't race double poll setting REQ_F_ASYNC_DATA
Pavel Begunkov [Thu, 7 Jul 2022 14:13:15 +0000 (15:13 +0100)] 
io_uring: don't race double poll setting REQ_F_ASYNC_DATA

Just as with io_poll_double_prepare() setting REQ_F_DOUBLE_POLL, we can
race with the first poll entry when setting REQ_F_ASYNC_DATA. Move it
under io_poll_double_prepare().

Fixes: a18427bb2d9b ("io_uring: optimise submission side poll_refs")
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/df6920f509c11115aa2bce8b34dc5fdb0eb98920.1657203020.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: don't miss setting REQ_F_DOUBLE_POLL
Pavel Begunkov [Thu, 7 Jul 2022 14:13:14 +0000 (15:13 +0100)] 
io_uring: don't miss setting REQ_F_DOUBLE_POLL

When adding a second poll entry we should set REQ_F_DOUBLE_POLL
unconditionally. We might race with the first entry removal but that
doesn't change the rule.

Fixes: a18427bb2d9b ("io_uring: optimise submission side poll_refs")
Reported-and-tested-by: syzbot+49950ba66096b1f0209b@syzkaller.appspotmail.com
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/8b680d83ded07424db83e8745585e7a6d72826ef.1657203020.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: disable multishot recvmsg
Dylan Yudaken [Mon, 4 Jul 2022 14:01:06 +0000 (07:01 -0700)] 
io_uring: disable multishot recvmsg

recvmsg has semantics that do not make it trivial to extend to
multishot. Specifically it has user pointers and returns data in the
original parameter. In order to make this API useful these will need to be
somehow included with the provided buffers.

For now remove multishot for recvmsg as it is not useful.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220704140106.200167-1-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: only trace one of complete or overflow
Dylan Yudaken [Thu, 30 Jun 2022 09:12:31 +0000 (02:12 -0700)] 
io_uring: only trace one of complete or overflow

On overflow we see a duplicate line in the trace, and in some cases 3
lines (if the initial io_post_aux_cqe fails).
Instead, just trace once for each CQE.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-13-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix io_uring_cqe_overflow trace format
Dylan Yudaken [Thu, 30 Jun 2022 09:12:30 +0000 (02:12 -0700)] 
io_uring: fix io_uring_cqe_overflow trace format

Make the trace format consistent with io_uring_complete for cflags.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-12-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: multishot recv
Dylan Yudaken [Thu, 30 Jun 2022 09:12:29 +0000 (02:12 -0700)] 
io_uring: multishot recv

Support multishot receive for io_uring.
Typical server applications will run a loop where, for each recv CQE,
they requeue another recv/recvmsg.

This can be simplified by using the existing multishot functionality
combined with io_uring's provided buffers.
The API is to add the IORING_RECV_MULTISHOT flag to the SQE. CQEs will
then be posted (with IORING_CQE_F_MORE flag set) when data is available
and is read. Once an error occurs or the socket ends, the multishot will
be removed and a completion without IORING_CQE_F_MORE will be posted.

The benefit to this is that the recv is much more performant.
 * Subsequent receives are queued up straight away without requiring the
   application to finish a processing loop.
 * If there is more data in the socket (say the provided buffer size is
   smaller than the socket buffer) then the data is immediately
   returned, improving batching.
 * Poll is only armed once and reused, saving CPU cycles
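
A hedged userspace sketch of consuming the resulting CQEs with liburing
(arming the multishot recv itself is omitted, since the exact SQE encoding of
IORING_RECV_MULTISHOT isn't spelled out above):

  struct io_uring_cqe *cqe;

  for (;;) {
          if (io_uring_wait_cqe(&ring, &cqe))
                  break;
          if (cqe->res < 0) {
                  /* error: the multishot recv has been terminated */
          } else {
                  /* cqe->res bytes landed in the provided buffer picked for this CQE */
          }
          unsigned int more = cqe->flags & IORING_CQE_F_MORE;
          io_uring_cqe_seen(&ring, cqe);
          if (!more)
                  break;  /* no further CQEs coming; re-arm with a new multishot recv */
  }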

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-11-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix multishot accept ordering
Dylan Yudaken [Thu, 30 Jun 2022 09:12:28 +0000 (02:12 -0700)] 
io_uring: fix multishot accept ordering

Similar to multishot poll, drop multishot accept when CQE overflow occurs.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-10-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: fix multishot poll on overflow
Dylan Yudaken [Thu, 30 Jun 2022 09:12:27 +0000 (02:12 -0700)] 
io_uring: fix multishot poll on overflow

On overflow, multishot poll can still complete with the IORING_CQE_F_MORE
flag set.
If in the meantime the user clears a CQE and the poll was cancelled,
then the poll will post a CQE without the IORING_CQE_F_MORE flag (and
likely with result -ECANCELED).

However, when processing, the application will encounter the non-overflow
CQE which indicates that there will be no more events posted. Typical
userspace applications would free memory associated with the poll in this
case.
It will then subsequently receive the earlier CQE which has overflowed,
which breaks the contract given by the IORING_CQE_F_MORE flag.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-9-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add allow_overflow to io_post_aux_cqe
Dylan Yudaken [Thu, 30 Jun 2022 09:12:26 +0000 (02:12 -0700)] 
io_uring: add allow_overflow to io_post_aux_cqe

Some use cases of io_post_aux_cqe would not want to overflow as is, but
might want to change the flags/result. For example multishot receive
requires in-order CQEs, and so if there is an overflow it would need to
stop receiving until the overflow is taken care of.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-8-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add IOU_STOP_MULTISHOT return code
Dylan Yudaken [Thu, 30 Jun 2022 09:12:25 +0000 (02:12 -0700)] 
io_uring: add IOU_STOP_MULTISHOT return code

For multishot we want a way to signal the caller that multishot has
ended, but this might not be an error return.

For example, sockets return 0 when closed, which should end a multishot
recv but still post a CQE with result 0.

Introduce IOU_STOP_MULTISHOT, which does this and indicates that the
return code is stored inside req->cqe.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-7-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: clean up io_poll_check_events return values
Dylan Yudaken [Thu, 30 Jun 2022 09:12:24 +0000 (02:12 -0700)] 
io_uring: clean up io_poll_check_events return values

The values returned are a bit confusing, where 0 and 1 have implied
meaning, so add some definitions for them.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-6-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: recycle buffers on error
Dylan Yudaken [Thu, 30 Jun 2022 09:12:23 +0000 (02:12 -0700)] 
io_uring: recycle buffers on error

Rather than passing an error back to the user with a buffer attached,
recycle the buffer immediately.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-5-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: allow iov_len = 0 for recvmsg and buffer select
Dylan Yudaken [Thu, 30 Jun 2022 09:12:22 +0000 (02:12 -0700)] 
io_uring: allow iov_len = 0 for recvmsg and buffer select

When using BUFFER_SELECT there is no technical requirement that the user
actually provides iov, and this removes one copy_from_user call.

So allow iov_len to be 0.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-4-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: restore bgid in io_put_kbuf
Dylan Yudaken [Thu, 30 Jun 2022 09:12:21 +0000 (02:12 -0700)] 
io_uring: restore bgid in io_put_kbuf

Attempt to restore bgid. This is needed when recycling unused buffers as
the next time around it will want the correct bgid.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-3-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: allow 0 length for buffer select
Dylan Yudaken [Thu, 30 Jun 2022 09:12:20 +0000 (02:12 -0700)] 
io_uring: allow 0 length for buffer select

If user gives 0 for length, we can set it from the available buffer size.

Signed-off-by: Dylan Yudaken <dylany@fb.com>
Link: https://lore.kernel.org/r/20220630091231.1456789-2-dylany@fb.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: let to set a range for file slot allocation
Pavel Begunkov [Sat, 25 Jun 2022 10:55:38 +0000 (11:55 +0100)] 
io_uring: let to set a range for file slot allocation

Since recently, io_uring provides an option to allocate a file index for
operations registering fixed files. However, it's utterly unusable with
mixed approaches where, for some of the files, userspace knows better
where to place them, as it may race and users don't have any sane way to
pick a slot and hope it will not be taken.

Let userspace register a range of fixed file slots in which the
auto-allocation happens. The use case is splitting the fixed table into
two parts, where one of them is used for auto-allocation and the other
for slot-specified operations.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/66ab0394e436f38437cf7c44676e1920d09687ad.1656154403.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: add support for passing fixed file descriptors
Jens Axboe [Mon, 13 Jun 2022 10:47:02 +0000 (04:47 -0600)] 
io_uring: add support for passing fixed file descriptors

With IORING_OP_MSG_RING, one ring can send a message to another ring.
Extend that support to also allow sending a fixed file descriptor to
that ring, enabling one ring to pass a registered descriptor to another
one.

Arguments are extended to pass in:

sqe->addr3 fixed file slot in source ring
sqe->file_index fixed file slot in destination ring

IORING_OP_MSG_RING is extended to take a command argument in sqe->addr.
If set to zero (or IORING_MSG_DATA), it sends just a message like before.
If set to IORING_MSG_SEND_FD, a fixed file descriptor is sent according
to the above arguments.

Two common use cases for this are:

1) Server needs to be shut down or restarted, pass file descriptors to
   another one

2) Backend is split, and one accepts connections, while others then get
  the fd passed and handle the actual connection.

Both of those are classic SCM_RIGHTS use cases, and it's not possible to
support them with direct descriptors today.

By default, this will post a CQE to the target ring, similarly to how
IORING_MSG_DATA does it. If IORING_MSG_RING_CQE_SKIP is set, no message
is posted to the target ring. The issuer is expected to notify the
receiver side separately.
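
A hedged userspace sketch of the IORING_MSG_SEND_FD case, using only the
fields listed above (raw SQE fields; the variable names and the omission of
any liburing helpers are assumptions):

  struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);

  memset(sqe, 0, sizeof(*sqe));
  sqe->opcode     = IORING_OP_MSG_RING;
  sqe->fd         = target_ring_fd;      /* fd of the ring receiving the descriptor */
  sqe->addr       = IORING_MSG_SEND_FD;  /* command: send a fixed file, not just data */
  sqe->addr3      = src_slot;            /* fixed file slot in the source ring */
  sqe->file_index = dst_slot;            /* fixed file slot in the destination ring */
  /* IORING_MSG_RING_CQE_SKIP (its location in the sqe is not shown here) suppresses
   * the CQE normally posted to the target ring. */
  io_uring_submit(&ring);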

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: split out fixed file installation and removal
Jens Axboe [Mon, 13 Jun 2022 10:42:56 +0000 (04:42 -0600)] 
io_uring: split out fixed file installation and removal

Put it with the filetable code, which is where it belongs. While doing
so, have the helpers take a ctx rather than an io_kiocb. It doesn't make
sense to use a request, as it's not an operation on the request itself.
It applies to the ring itself.

Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: replace zero-length array with flexible-array member
Gustavo A. R. Silva [Tue, 28 Jun 2022 19:33:20 +0000 (21:33 +0200)] 
io_uring: replace zero-length array with flexible-array member

There is a regular need in the kernel to provide a way to declare
having a dynamically sized set of trailing elements in a structure.
Kernel code should always use “flexible array members”[1] for these
cases. The older style of one-element or zero-length arrays should
no longer be used[2].

[1] https://en.wikipedia.org/wiki/Flexible_array_member
[2] https://www.kernel.org/doc/html/v5.16/process/deprecated.html#zero-length-and-one-element-arrays

Link: https://github.com/KSPP/linux/issues/78
Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: remove ctx->refs pinning on enter
Pavel Begunkov [Sat, 25 Jun 2022 10:53:02 +0000 (11:53 +0100)] 
io_uring: remove ctx->refs pinning on enter

io_uring_enter() takes ctx->refs, which was previously preventing racing
with register quiesce. However, as register now doesn't touch the refs,
we can freely kill extra ctx pinning and rely on the fact that we're
holding a file reference preventing the ring from being destroyed.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/a11c57ad33a1be53541fce90669c1b79cf4d8940.1656153286.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: don't check file ops of registered rings
Pavel Begunkov [Sat, 25 Jun 2022 10:53:01 +0000 (11:53 +0100)] 
io_uring: don't check file ops of registered rings

Registered rings are by definition io_uring files, so we don't need to
additionally verify them.

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/425cd64fd885b8e329a46c205ee811987691baaf.1656153286.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>
3 years agoio_uring: remove extra TIF_NOTIFY_SIGNAL check
Pavel Begunkov [Sat, 25 Jun 2022 10:53:00 +0000 (11:53 +0100)] 
io_uring: remove extra TIF_NOTIFY_SIGNAL check

io_run_task_work() accounts for TIF_NOTIFY_SIGNAL, so there's no need
for a second check in io_run_task_work_sig().

Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
Link: https://lore.kernel.org/r/52ce41a592ad904511697f432141e5690fd4b968.1656153285.git.asml.silence@gmail.com
Signed-off-by: Jens Axboe <axboe@kernel.dk>