git.ipfire.org Git - thirdparty/linux.git/log
3 months ago  crypto: lib/Kconfig - Hide arch options from user
Herbert Xu [Thu, 27 Feb 2025 07:48:39 +0000 (15:48 +0800)] 
crypto: lib/Kconfig - Hide arch options from user

The ARCH_MAY_HAVE patch missed arm64, mips and s390.  But it may
also lead to arch options being enabled but ineffective because
of modular/built-in conflicts.

As wireguard, the primary user of all these options, is selecting the
arch options anyway, make the same selections at the lib/crypto option
level and hide the arch options from the user.

Instead of selecting them centrally from lib/crypto, simply set
the default of each arch option as suggested by Eric Biggers.

Change the Crypto API generic algorithms to select the top-level
lib/crypto options instead of the generic one as otherwise there
is no way to enable the arch options (Eric Biggers).  Introduce a
set of INTERNAL options to work around dependency cycles on the
CONFIG_CRYPTO symbol.

Fixes: 1047e21aecdf ("crypto: lib/Kconfig - Fix lib built-in failure when arch is modular")
Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Arnd Bergmann <arnd@kernel.org>
Closes: https://lore.kernel.org/oe-kbuild-all/202502232152.JC84YDLp-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: skcipher - Use restrict rather than hand-rolling accesses
Herbert Xu [Sun, 23 Feb 2025 06:27:51 +0000 (14:27 +0800)] 
crypto: skcipher - Use restrict rather than hand-rolling accesses

Rather than accessing 'alg' directly to avoid the aliasing issue
which leads to unnecessary reloads, use the __restrict keyword
to explicitly tell the compiler that there is no aliasing.

This generates equivalent if not superior code on x86 with gcc 12.

Note that in skcipher_walk_virt the alg assignment is moved after
might_sleep_if because that function is a compiler barrier and
forces a reload.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
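
As a rough standalone illustration of the effect described above (plain C,
not the kernel code itself): marking pointers __restrict lets the compiler
keep a field in a register instead of reloading it after every store made
through another pointer.

  struct alg_info { unsigned int blocksize; };

  /* Without __restrict the compiler must assume out[] may alias *alg and
   * reload alg->blocksize on every iteration; with __restrict it need not. */
  static void fill(unsigned int *__restrict out,
                   const struct alg_info *__restrict alg, int n)
  {
          for (int i = 0; i < n; i++)
                  out[i] = alg->blocksize;   /* loaded once, kept in a register */
  }
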
3 months ago  crypto: octeontx - Remove unused function otx_cpt_eng_grp_has_eng_type
Dr. David Alan Gilbert [Sun, 23 Feb 2025 00:56:46 +0000 (00:56 +0000)] 
crypto: octeontx - Remove unused function otx_cpt_eng_grp_has_eng_type

otx_cpt_eng_grp_has_eng_type() was added in 2020 by commit d9110b0b01ff
("crypto: marvell - add support for OCTEON TX CPT engine") but has
remained unused.

Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: octeontx2 - Remove unused otx2_cpt_print_uc_dbg_info
Dr. David Alan Gilbert [Sun, 23 Feb 2025 00:53:54 +0000 (00:53 +0000)] 
crypto: octeontx2 - Remove unused otx2_cpt_print_uc_dbg_info

otx2_cpt_print_uc_dbg_info() has been unused since 2023's
commit 82f89f1aa6ca ("crypto: octeontx2 - add devlink option to set
t106 mode").

Remove it, along with the get_engs_info() helper for which it was the
only user.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  dt-bindings: crypto: Convert fsl,sec-2.0 to YAML
J. Neuschäfer [Thu, 20 Feb 2025 11:57:35 +0000 (12:57 +0100)] 
dt-bindings: crypto: Convert fsl,sec-2.0 to YAML

Convert the Freescale security engine (crypto accelerator) binding from
text form to YAML. The list of compatible strings reflects what was
previously described in prose; not all combinations occur in existing
devicetrees.

Reviewed-by: Frank Li <Frank.Li@nxp.com>
Signed-off-by: J. Neuschäfer <j.ne@posteo.net>
Reviewed-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: scatterwalk - don't split at page boundaries when !HIGHMEM
Eric Biggers [Wed, 19 Feb 2025 18:23:41 +0000 (10:23 -0800)] 
crypto: scatterwalk - don't split at page boundaries when !HIGHMEM

When !HIGHMEM, the kmap_local_page() in the scatterlist walker does not
actually map anything, and the address it returns is just the address
from the kernel's direct map, where each sg entry's data is virtually
contiguous.  To improve performance, stop unnecessarily clamping data
segments to page boundaries in this case.

For now, still limit segments to PAGE_SIZE.  This is needed to prevent
preemption from being disabled for too long when SIMD is used, and to
support the alignmask case which still uses a page-sized bounce buffer.

Even so, this change still helps a lot in cases where messages cross a
page boundary.  For example, testing IPsec with AES-GCM on x86_64, the
messages are 1424 bytes which is less than PAGE_SIZE, but on the Rx side
over a third cross a page boundary.  These ended up being processed in
three parts, with the middle part going through skcipher_next_slow which
uses a 16-byte bounce buffer.  That was causing a significant amount of
overhead which unnecessarily reduced the performance benefit of the new
x86_64 AES-GCM assembly code.  This change solves the problem; all these
messages now get passed to the assembly code in one part.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: scatterwalk - remove obsolete functions
Eric Biggers [Wed, 19 Feb 2025 18:23:40 +0000 (10:23 -0800)] 
crypto: scatterwalk - remove obsolete functions

Remove various functions that are no longer used.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: skcipher - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:39 +0000 (10:23 -0800)] 
crypto: skcipher - use the new scatterwalk functions

Convert skcipher_walk to use the new scatterwalk functions.

This includes a few changes to exactly where the different parts of the
iteration happen.  For example the dcache flush that previously happened
in scatterwalk_done() now happens in scatterwalk_dst_done() or in
memcpy_to_scatterwalk().  Advancing to the next sg entry now happens
just-in-time in scatterwalk_clamp() instead of in scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  net/tls: use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:38 +0000 (10:23 -0800)] 
net/tls: use the new scatterwalk functions

Replace calls to the deprecated function scatterwalk_copychunks() with
memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), or
scatterwalk_skip() as appropriate.  The new functions generally behave
more as expected and eliminate the need to call scatterwalk_done() or
scatterwalk_pagedone().

However, the new functions intentionally do not advance to the next sg
entry right away, which would have broken chain_to_walk() which is
accessing the fields of struct scatter_walk directly.  To avoid this,
replace chain_to_walk() with scatterwalk_get_sglist() which supports the
needed functionality.

Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: x86/aegis - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:37 +0000 (10:23 -0800)] 
crypto: x86/aegis - use the new scatterwalk functions

In crypto_aegis128_aesni_process_ad(), use scatterwalk_next() which
consolidates scatterwalk_clamp() and scatterwalk_map().  Use
scatterwalk_done_src() which consolidates scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: x86/aes-gcm - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:36 +0000 (10:23 -0800)] 
crypto: x86/aes-gcm - use the new scatterwalk functions

In gcm_process_assoc(), use scatterwalk_next() which consolidates
scatterwalk_clamp() and scatterwalk_map().  Use scatterwalk_done_src()
which consolidates scatterwalk_unmap(), scatterwalk_advance(), and
scatterwalk_done().

Also rename some variables to avoid implying that anything is actually
mapped (it's not), or that the loop is going page by page (it is for
now, but nothing actually requires that to be the case).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: stm32 - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:35 +0000 (10:23 -0800)] 
crypto: stm32 - use the new scatterwalk functions

Replace calls to the deprecated function scatterwalk_copychunks() with
memcpy_from_scatterwalk(), memcpy_to_scatterwalk(), scatterwalk_skip(),
or scatterwalk_start_at_pos() as appropriate.

Cc: Alexandre Torgue <alexandre.torgue@foss.st.com>
Cc: Maxime Coquelin <mcoquelin.stm32@gmail.com>
Cc: Maxime Méré <maxime.mere@foss.st.com>
Cc: Thomas Bourgoin <thomas.bourgoin@foss.st.com>
Cc: linux-stm32@st-md-mailman.stormreply.com
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: s5p-sss - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:34 +0000 (10:23 -0800)] 
crypto: s5p-sss - use the new scatterwalk functions

s5p_sg_copy_buf() open-coded a copy from/to a scatterlist using
scatterwalk_* functions that are planned for removal.  Replace it with
the new functions memcpy_from_sglist() and memcpy_to_sglist() instead.
Also take the opportunity to replace calls to scatterwalk_map_and_copy()
in the same file; this eliminates the confusing 'out' argument.

Cc: Krzysztof Kozlowski <krzk@kernel.org>
Cc: Vladimir Zapolskiy <vz@mleia.com>
Cc: linux-samsung-soc@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: s390/aes-gcm - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:33 +0000 (10:23 -0800)] 
crypto: s390/aes-gcm - use the new scatterwalk functions

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map().  Use scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done().

Besides the new functions being a bit easier to use, this is necessary
because scatterwalk_done() is planned to be removed.

Reviewed-by: Harald Freudenberger <freude@linux.ibm.com>
Tested-by: Harald Freudenberger <freude@linux.ibm.com>
Cc: Holger Dengler <dengler@linux.ibm.com>
Cc: linux-s390@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: nx - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:32 +0000 (10:23 -0800)] 
crypto: nx - use the new scatterwalk functions

- In nx_walk_and_build(), use scatterwalk_start_at_pos() instead of a
  more complex way to achieve the same result.

- Also in nx_walk_and_build(), use the new functions scatterwalk_next()
  which consolidates scatterwalk_clamp() and scatterwalk_map(), and use
  scatterwalk_done_src() which consolidates scatterwalk_unmap(),
  scatterwalk_advance(), and scatterwalk_done().  Remove unnecessary
  code that seemed to be intended to advance to the next sg entry, which
  is already handled by the scatterwalk functions.

  Note that nx_walk_and_build() does not actually read or write the
  mapped virtual address, and thus it is misusing the scatter_walk API.
  It really should just access the scatterlist directly.  This patch
  does not try to address this existing issue.

- In nx_gca(), use memcpy_from_sglist() instead of a more complex way to
  achieve the same result.

- In various functions, replace calls to scatterwalk_map_and_copy() with
  memcpy_from_sglist() or memcpy_to_sglist() as appropriate.  Note that
  this eliminates the confusing 'out' argument (which this driver had
  tried to work around by defining the missing constants for it...)

Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: linuxppc-dev@lists.ozlabs.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: arm64 - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:31 +0000 (10:23 -0800)] 
crypto: arm64 - use the new scatterwalk functions

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.
Adjust variable naming slightly to keep things consistent.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: arm/ghash - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:30 +0000 (10:23 -0800)] 
crypto: arm/ghash - use the new scatterwalk functions

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().
Remove unnecessary code that seemed to be intended to advance to the
next sg entry, which is already handled by the scatterwalk functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: aegis - use the new scatterwalk functions
Eric Biggers [Wed, 19 Feb 2025 18:23:29 +0000 (10:23 -0800)] 
crypto: aegis - use the new scatterwalk functions

Use scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map(), and use scatterwalk_done_src() which consolidates
scatterwalk_unmap(), scatterwalk_advance(), and scatterwalk_done().

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: skcipher - use scatterwalk_start_at_pos()
Eric Biggers [Wed, 19 Feb 2025 18:23:28 +0000 (10:23 -0800)] 
crypto: skcipher - use scatterwalk_start_at_pos()

In skcipher_walk_aead_common(), use scatterwalk_start_at_pos() instead
of a sequence of scatterwalk_start(), scatterwalk_copychunks(..., 2),
and scatterwalk_done().  This is simpler and faster.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: scatterwalk - add scatterwalk_get_sglist()
Eric Biggers [Wed, 19 Feb 2025 18:23:27 +0000 (10:23 -0800)] 
crypto: scatterwalk - add scatterwalk_get_sglist()

Add a function that creates a scatterlist that represents the remaining
data in a walk.  This will be used to replace chain_to_walk() in
net/tls/tls_device_fallback.c so that it will no longer need to reach
into the internals of struct scatter_walk.

Cc: Boris Pismenny <borisp@nvidia.com>
Cc: Jakub Kicinski <kuba@kernel.org>
Cc: John Fastabend <john.fastabend@gmail.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: scatterwalk - add new functions for copying data
Eric Biggers [Wed, 19 Feb 2025 18:23:26 +0000 (10:23 -0800)] 
crypto: scatterwalk - add new functions for copying data

Add memcpy_from_sglist() and memcpy_to_sglist() which are more readable
versions of scatterwalk_map_and_copy() with the 'out' argument 0 and 1
respectively.  They follow the same argument order as memcpy_from_page()
and memcpy_to_page() from <linux/highmem.h>.  Note that in the case of
memcpy_from_sglist(), this also happens to be the same argument order
that scatterwalk_map_and_copy() uses.

The new code is also faster, mainly because it builds the scatter_walk
directly without creating a temporary scatterlist.  E.g., a 20%
performance improvement is seen for copying the AES-GCM auth tag.

Make scatterwalk_map_and_copy() be a wrapper around memcpy_from_sglist()
and memcpy_to_sglist().  Callers of scatterwalk_map_and_copy() should be
updated to call memcpy_from_sglist() or memcpy_to_sglist() directly, but
there are a lot of them so they aren't all being updated right away.

Also add functions memcpy_from_scatterwalk() and memcpy_to_scatterwalk()
which are similar but operate on a scatter_walk instead of a
scatterlist.  These will replace scatterwalk_copychunks() with the 'out'
argument 0 and 1 respectively.  Their behavior differs slightly from
scatterwalk_copychunks() in that they automatically take care of
flushing the dcache when needed, making them easier to use.

scatterwalk_copychunks() itself is left unchanged for now.  It will be
removed after its callers are updated to use other functions instead.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
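
A sketch of the intended call pattern, with the argument order inferred from
the memcpy_from_page()/memcpy_to_page() analogy above (the request fields and
the 'tag'/'authsize' variables are illustrative, not taken from a real caller):

  /* Old style: direction selected by the trailing 'out' flag. */
  scatterwalk_map_and_copy(tag, req->dst, req->cryptlen, authsize, 1);

  /* New style: the function name states the direction. */
  memcpy_to_sglist(req->dst, req->cryptlen, tag, authsize);
  memcpy_from_sglist(tag, req->src, req->cryptlen, authsize);
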
3 months ago  crypto: scatterwalk - add new functions for iterating through data
Eric Biggers [Wed, 19 Feb 2025 18:23:25 +0000 (10:23 -0800)] 
crypto: scatterwalk - add new functions for iterating through data

Add scatterwalk_next() which consolidates scatterwalk_clamp() and
scatterwalk_map().  Also add scatterwalk_done_src() and
scatterwalk_done_dst() which consolidate scatterwalk_unmap(),
scatterwalk_advance(), and scatterwalk_done() or scatterwalk_pagedone().
A later patch will remove scatterwalk_done() and scatterwalk_pagedone().

The new code eliminates the error-prone 'more' parameter.  Advancing to
the next sg entry now only happens just-in-time in scatterwalk_next().

The new code also pairs the dcache flush more closely with the actual
write, similar to memcpy_to_page().  Previously it was paired with
advancing to the next page.  This is currently causing bugs where the
dcache flush is incorrectly being skipped, usually due to
scatterwalk_copychunks() being called without a following
scatterwalk_done().  The dcache flush may have been placed where it was
in order to not call flush_dcache_page() redundantly when visiting a
page more than once.  However, that case is rare in practice, and most
architectures either do not implement flush_dcache_page() anyway or
implement it lazily where it just clears a page flag.

Another limitation of the old code was that by the time the flush
happened, there was no way to tell if more than one page needed to be
flushed.  That has been sufficient because the code goes page by page,
but I would like to optimize that on !HIGHMEM platforms.  The new code
makes this possible, and a later patch will implement this optimization.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: scatterwalk - add new functions for skipping data
Eric Biggers [Wed, 19 Feb 2025 18:23:24 +0000 (10:23 -0800)] 
crypto: scatterwalk - add new functions for skipping data

Add scatterwalk_skip() to skip the given number of bytes in a
scatter_walk.  Previously support for skipping was provided through
scatterwalk_copychunks(..., 2) followed by scatterwalk_done(), which was
confusing and less efficient.

Also add scatterwalk_start_at_pos() which starts a scatter_walk at the
given position, equivalent to scatterwalk_start() + scatterwalk_skip().
This addresses another common need in a more streamlined way.

Later patches will convert various users to use these functions.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
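
A sketch of how the new helpers are meant to be used, assuming the argument
order (walk, sg, pos) implied by the description; 'req' and 'assocoffset' are
illustrative names:

  struct scatter_walk walk;

  /* Previously this required scatterwalk_start() followed by
   * scatterwalk_copychunks(NULL, &walk, assocoffset, 2) + scatterwalk_done(). */
  scatterwalk_start(&walk, req->src);
  scatterwalk_skip(&walk, assocoffset);

  /* Equivalent one-call form: */
  scatterwalk_start_at_pos(&walk, req->src, assocoffset);
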
3 months ago  crypto: scatterwalk - move to next sg entry just in time
Eric Biggers [Wed, 19 Feb 2025 18:23:23 +0000 (10:23 -0800)] 
crypto: scatterwalk - move to next sg entry just in time

The scatterwalk_* functions are designed to advance to the next sg entry
only when there is more data from the request to process.  Compared to
the alternative of advancing after each step if !sg_is_last(sg), this
has the advantage that it doesn't cause problems if users accidentally
don't terminate their scatterlist with the end marker (which is an easy
mistake to make, and there are examples of this).

Currently, the advance to the next sg entry happens in
scatterwalk_done(), which is called after each "step" of the walk.  It
requires the caller to pass in a boolean 'more' that indicates whether
there is more data.  This works when the caller immediately knows
whether there is more data, though it adds some complexity.  However in
the case of scatterwalk_copychunks() it's not immediately known whether
there is more data, so the call to scatterwalk_done() has to happen
higher up the stack.  This is error-prone, and indeed the needed call to
scatterwalk_done() is not always made, e.g. scatterwalk_copychunks() is
sometimes called multiple times in a row.  This causes a zero-length
step to get added in some cases, which is unexpected and seems to work
only by accident.

This patch begins the switch to a less error-prone approach where the
advance to the next sg entry happens just in time instead.  For now,
that means just doing the advance in scatterwalk_clamp() if it's needed
there.  Initially this is redundant, but it's needed to keep the tree in
a working state as later patches change things to the final state.

Later patches will similarly move the dcache flushing logic out of
scatterwalk_done() and then remove scatterwalk_done() entirely.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  hwrng: Kconfig - Fix indentation of HW_RANDOM_CN10K help text
Geert Uytterhoeven [Wed, 19 Feb 2025 15:03:32 +0000 (16:03 +0100)] 
hwrng: Kconfig - Fix indentation of HW_RANDOM_CN10K help text

Change the indentation of the help text of the HW_RANDOM_CN10K symbol
from one TAB plus one space to one TAB plus two spaces, as is customary.

Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Dragan Simic <dsimic@manjaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: bpf - Add MODULE_DESCRIPTION for skcipher
Arnd Bergmann [Mon, 17 Feb 2025 12:55:55 +0000 (13:55 +0100)] 
crypto: bpf - Add MODULE_DESCRIPTION for skcipher

All modules should have a description; building with extra warnings
enabled prints this out for the bpf_crypto_skcipher module:

WARNING: modpost: missing MODULE_DESCRIPTION() in crypto/bpf_crypto_skcipher.o

Add a description line.

Fixes: fda4f71282b2 ("bpf: crypto: add skcipher to bpf crypto")
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Reviewed-by: Vadim Fedorenko <vadim.fedorenko@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
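
The fix itself is a single macro invocation in the module source; the wording
below is illustrative rather than the exact committed string:

  MODULE_DESCRIPTION("Symmetric key cipher support for the BPF crypto framework");
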
3 months ago  crypto: ahash - Set default reqsize from ahash_alg
Herbert Xu [Sun, 16 Feb 2025 03:07:24 +0000 (11:07 +0800)] 
crypto: ahash - Set default reqsize from ahash_alg

Add a reqsize field to struct ahash_alg and use it to set the
default reqsize so that algorithms with a static reqsize are
not forced to create an init_tfm function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
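
A minimal sketch of how a driver with a fixed request context size could look
with the new field (hypothetical names; previously the same effect required an
init_tfm callback that called crypto_ahash_set_reqsize()):

  static struct ahash_alg my_sha256_alg = {
          .init    = my_sha256_init,
          .update  = my_sha256_update,
          .final   = my_sha256_final,
          .digest  = my_sha256_digest,
          .reqsize = sizeof(struct my_sha256_req_ctx),    /* new field */
          .halg.digestsize = SHA256_DIGEST_SIZE,
          .halg.base = {
                  .cra_name        = "sha256",
                  .cra_driver_name = "sha256-mydrv",
                  .cra_blocksize   = SHA256_BLOCK_SIZE,
                  .cra_module      = THIS_MODULE,
          },
  };
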
3 months ago  crypto: ahash - Add virtual address support
Herbert Xu [Sun, 16 Feb 2025 03:07:22 +0000 (11:07 +0800)] 
crypto: ahash - Add virtual address support

This patch adds virtual address support to ahash.  Virtual addresses
were previously only supported through shash.  The user may choose
to use virtual addresses with ahash by calling ahash_request_set_virt
instead of ahash_request_set_crypt.

The API will take care of translating this to an SG list if necessary,
unless the algorithm declares that it supports chaining.  Therefore
in order for an ahash algorithm to support chaining, it must also
support virtual addresses directly.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: tcrypt - Restore multibuffer ahash tests
Herbert Xu [Sun, 16 Feb 2025 03:07:19 +0000 (11:07 +0800)] 
crypto: tcrypt - Restore multibuffer ahash tests

This patch is a revert of commit 388ac25efc8ce3bf9768ce7bf24268d6fac285d5.

As multibuffer ahash is coming back in the form of request chaining,
restore the multibuffer ahash tests using the new interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: hash - Add request chaining API
Herbert Xu [Sun, 16 Feb 2025 03:07:17 +0000 (11:07 +0800)] 
crypto: hash - Add request chaining API

This adds request chaining to the ahash interface.  Request chaining
allows multiple requests to be submitted in one shot.  An algorithm
can elect to receive chained requests by setting the flag
CRYPTO_ALG_REQ_CHAIN.  If this bit is not set, the API will break
up chained requests and submit them one-by-one.

A new err field is added to struct crypto_async_request to record
the return value for each individual request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: x86/ghash - Use proper helpers to clone request
Herbert Xu [Sun, 16 Feb 2025 03:07:15 +0000 (11:07 +0800)] 
crypto: x86/ghash - Use proper helpers to clone request

Rather than copying a request by hand with memcpy, use the correct
API helpers to setup the new request.  This will matter once the
API helpers start setting up chained requests as a simple memcpy
will break chaining.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: ahash - Only save callback and data in ahash_save_req
Herbert Xu [Sun, 16 Feb 2025 03:07:12 +0000 (11:07 +0800)] 
crypto: ahash - Only save callback and data in ahash_save_req

As unaligned operations are supported by the underlying algorithm,
ahash_save_req and ahash_restore_req can be greatly simplified to
only preserve the callback and data.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: inside-secure/eip93 - Correctly handle return of sg_nents_for_len
Christian Marangi [Sat, 15 Feb 2025 23:36:18 +0000 (00:36 +0100)] 
crypto: inside-secure/eip93 - Correctly handle return of sg_nents_for_len

Fix smatch warning for sg_nents_for_len return value in Inside Secure
EIP93 driver.

The return value of sg_nents_for_len() was assigned to a u32, so any
error was ignored and converted to a positive integer.

Rework the code to correctly handle the error from sg_nents_for_len()
and mute the smatch warning.

Fixes: 9739f5f93b78 ("crypto: eip93 - Add Inside Secure SafeXcel EIP-93 crypto engine support")
Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: skcipher - Zap type in crypto_alloc_sync_skcipher
Herbert Xu [Sat, 15 Feb 2025 00:57:51 +0000 (08:57 +0800)] 
crypto: skcipher - Zap type in crypto_alloc_sync_skcipher

The type needs to be zeroed as otherwise the user could use it to
allocate an asynchronous sync skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: qat - refactor service parsing logic
Małgorzata Mielnik [Fri, 14 Feb 2025 16:40:43 +0000 (16:40 +0000)] 
crypto: qat - refactor service parsing logic

The service parsing logic parses the configuration string provided by
the user via the qat/cfg_services attribute in sysfs.
The logic relies on hard-coded strings. For example, the service
"sym;asym" is also replicated as "asym;sym".
This makes the addition of new services or service combinations
complex as it requires the addition of new hard-coded strings for all
possible combinations.

This commit addresses this issue by:
 * reducing the number of internal service strings to only the basic
   service representations.
 * modifying the service parsing logic to analyze the service string
   token by token instead of comparing a whole string with patterns.
 * introducing the concept of a service mask where each service is
   represented by a single bit.
 * dividing the parsing logic into several functions to allow for code
   reuse (e.g. by sysfs-related functions).
 * introducing a new, device generation-specific function to verify
   whether the requested service combination is supported by the
   currently used device.

Signed-off-by: Małgorzata Mielnik <malgorzata.mielnik@intel.com>
Co-developed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
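
A generic, hypothetical sketch of the token-by-token parsing into a service
mask described above (names and services invented; this is not the QAT
driver's actual code):

  #define SVC_ASYM  BIT(0)
  #define SVC_SYM   BIT(1)
  #define SVC_DC    BIT(2)

  static int parse_services(const char *in, unsigned long *mask)
  {
          char *str, *cur, *token;
          int ret = 0;

          str = kstrdup(in, GFP_KERNEL);
          if (!str)
                  return -ENOMEM;

          cur = str;
          *mask = 0;
          while ((token = strsep(&cur, ";"))) {
                  if (!strcmp(token, "asym"))
                          *mask |= SVC_ASYM;
                  else if (!strcmp(token, "sym"))
                          *mask |= SVC_SYM;
                  else if (!strcmp(token, "dc"))
                          *mask |= SVC_DC;
                  else
                          ret = -EINVAL;
          }
          kfree(str);
          return ret;     /* "sym;asym" and "asym;sym" yield the same mask */
  }
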
3 months ago  crypto: qat - do not export adf_cfg_services
Giovanni Cabiddu [Fri, 14 Feb 2025 16:40:42 +0000 (16:40 +0000)] 
crypto: qat - do not export adf_cfg_services

The symbol `adf_cfg_services` is only used within the intel_qat module.
There is no need to export it.

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: skcipher - Set tfm in SYNC_SKCIPHER_REQUEST_ON_STACK
Herbert Xu [Fri, 14 Feb 2025 06:02:08 +0000 (14:02 +0800)] 
crypto: skcipher - Set tfm in SYNC_SKCIPHER_REQUEST_ON_STACK

Set the request tfm directly in SYNC_SKCIPHER_REQUEST_ON_STACK since
the tfm is already available.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
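
An illustrative call sequence under this change (tfm, src, dst, len and iv are
placeholder variables): the macro now binds the request to the tfm, so the
explicit skcipher_request_set_sync_tfm() call that used to follow the
declaration is no longer needed.

  SYNC_SKCIPHER_REQUEST_ON_STACK(req, tfm);

  skcipher_request_set_callback(req, 0, NULL, NULL);
  skcipher_request_set_crypt(req, src, dst, len, iv);
  err = crypto_skcipher_encrypt(req);
  skcipher_request_zero(req);
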
3 months ago  crypto: api - Fix larval relookup type and mask
Herbert Xu [Fri, 14 Feb 2025 02:31:25 +0000 (10:31 +0800)] 
crypto: api - Fix larval relookup type and mask

When the lookup is retried after instance construction, it uses
the type and mask from the larval, which may not match the values
used by the caller.  For example, if the caller is requesting for
a !NEEDS_FALLBACK algorithm, it may end up getting an algorithm
that needs fallbacks.

Fix this by making the caller supply the type/mask and using that
for the lookup.

Reported-by: Coiby Xu <coxu@redhat.com>
Fixes: 96ad59552059 ("crypto: api - Remove instance larval fulfilment")
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  dt-bindings: crypto: qcom-qce: Document the X1E80100 crypto engine
Abel Vesa [Thu, 13 Feb 2025 12:37:05 +0000 (14:37 +0200)] 
dt-bindings: crypto: qcom-qce: Document the X1E80100 crypto engine

Document the crypto engine on the X1E80100 Platform.

Signed-off-by: Abel Vesa <abel.vesa@linaro.org>
Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: null - Use spin lock instead of mutex
Herbert Xu [Wed, 12 Feb 2025 06:10:07 +0000 (14:10 +0800)] 
crypto: null - Use spin lock instead of mutex

As the null algorithm may be freed in softirq context through
af_alg, use spin locks instead of mutexes to protect the default
null algorithm.

Reported-by: syzbot+b3e02953598f447d4d2a@syzkaller.appspotmail.com
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: lib/Kconfig - Fix lib built-in failure when arch is modular
Herbert Xu [Wed, 12 Feb 2025 04:48:55 +0000 (12:48 +0800)] 
crypto: lib/Kconfig - Fix lib built-in failure when arch is modular

The HAVE_ARCH Kconfig options in lib/crypto try to solve the
modular versus built-in problem, but they still fail when the
LIB option (e.g., CRYPTO_LIB_CURVE25519) is selected externally.

Fix this by introducing a level of indirection with ARCH_MAY_HAVE
Kconfig options; these then go on to select the ARCH_HAVE options
if the arch Kconfig option's setting matches that of the LIB option.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202501230223.ikroNDr1-lkp@intel.com/
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: qat - reorder objects in qat_common Makefile
Giovanni Cabiddu [Tue, 11 Feb 2025 09:58:53 +0000 (09:58 +0000)] 
crypto: qat - reorder objects in qat_common Makefile

The objects in the qat_common Makefile are currently listed in a random
order.

Reorder the objects alphabetically to make it easier to find where to
add a new object.

Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Reviewed-by: Ahsan Atta <ahsan.atta@intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: qat - fix object goals in Makefiles
Giovanni Cabiddu [Tue, 11 Feb 2025 09:58:52 +0000 (09:58 +0000)] 
crypto: qat - fix object goals in Makefiles

Align with kbuild documentation by using <module_name>-y instead of
<module_name>-objs, following the kernel convention for building modules
from multiple object files.

Link: https://docs.kernel.org/kbuild/makefiles.html#loadable-module-goals-obj-m
Signed-off-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: aead - use str_yes_no() helper in crypto_aead_show()
Thorsten Blum [Tue, 11 Feb 2025 09:52:54 +0000 (10:52 +0100)] 
crypto: aead - use str_yes_no() helper in crypto_aead_show()

Remove hard-coded strings by using the str_yes_no() helper function.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
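
The helper comes from <linux/string_choices.h>; the before/after pattern is
roughly the following (the exact seq_printf() line in crypto_aead_show() may
differ slightly):

  /* Before: */
  seq_printf(m, "async        : %s\n",
             alg->cra_flags & CRYPTO_ALG_ASYNC ? "yes" : "no");

  /* After: */
  seq_printf(m, "async        : %s\n",
             str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));
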
3 months ago  crypto: bcm - set memory to zero only once
Thorsten Blum [Mon, 10 Feb 2025 22:36:44 +0000 (23:36 +0100)] 
crypto: bcm - set memory to zero only once

Use kmalloc_array() instead of kcalloc() because sg_init_table() already
sets the memory to zero. This avoids zeroing the memory twice.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
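
The pattern being fixed, roughly (variable names illustrative):

  struct scatterlist *sg;

  /* kcalloc() would zero the table here ... */
  sg = kmalloc_array(nents, sizeof(*sg), GFP_KERNEL);
  if (!sg)
          return -ENOMEM;

  /* ... only for sg_init_table() to memset it to zero again anyway. */
  sg_init_table(sg, nents);
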
3 months ago  crypto: x86/aes-xts - change license to Apache-2.0 OR BSD-2-Clause
Eric Biggers [Mon, 10 Feb 2025 17:17:40 +0000 (09:17 -0800)] 
crypto: x86/aes-xts - change license to Apache-2.0 OR BSD-2-Clause

As with the other AES modes I've implemented, I've received interest in
my AES-XTS assembly code being reused in other projects.  Therefore,
change the license to Apache-2.0 OR BSD-2-Clause like what I used for
AES-GCM.  Apache-2.0 is the license of OpenSSL and BoringSSL.

Note that it is difficult to *directly* share code between the kernel,
OpenSSL, and BoringSSL for various reasons such as perlasm vs. plain
asm, Windows ABI support, different divisions of responsibility between
C and asm in each project, etc.  So whether that will happen instead of
just doing ports is still TBD.  But this dual license should at least
make it possible to port changes between the projects.

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: x86/aes-ctr - rewrite AESNI+AVX optimized CTR and add VAES support
Eric Biggers [Mon, 10 Feb 2025 16:50:20 +0000 (08:50 -0800)] 
crypto: x86/aes-ctr - rewrite AESNI+AVX optimized CTR and add VAES support

Delete aes_ctrby8_avx-x86_64.S and add a new assembly file
aes-ctr-avx-x86_64.S which follows a similar approach to
aes-xts-avx-x86_64.S in that it uses a "template" to provide AESNI+AVX,
VAES+AVX2, VAES+AVX10/256, and VAES+AVX10/512 code, instead of just
AESNI+AVX.  Wire it up to the crypto API accordingly.

This greatly improves the performance of AES-CTR and AES-XCTR on
VAES-capable CPUs, with the best case being AMD Zen 5 where an over 230%
increase in throughput is seen on long messages.  Performance on
non-VAES-capable CPUs remains about the same, and the non-AVX AES-CTR
code (aesni_ctr_enc) is also kept as-is for now.  There are some slight
regressions (less than 10%) on some short message lengths on some CPUs;
these are difficult to avoid, given how the previous code was so heavily
unrolled by message length, and they are not particularly important.
Detailed performance results are given in the tables below.

Both CTR and XCTR support is retained.  The main loop remains
8-vector-wide, which differs from the 4-vector-wide main loops that are
used in the XTS and GCM code.  A wider loop is appropriate for CTR and
XCTR since they have fewer other instructions (such as vpclmulqdq) to
interleave with the AES instructions.

Similar to what was the case for AES-GCM, the new assembly code also has
a much smaller binary size, as it fixes the excessive unrolling by data
length and key length present in the old code.  Specifically, the new
assembly file compiles to about 9 KB of text vs. 28 KB for the old file.
This is despite 4x as many implementations being included.

The tables below show the detailed performance results.  The tables show
percentage improvement in single-threaded throughput for repeated
encryption of the given message length; an increase from 6000 MB/s to
12000 MB/s would be listed as 100%.  They were collected by directly
measuring the Linux crypto API performance using a custom kernel module.
The tested CPUs were all server processors from Google Compute Engine
except for Zen 5 which was a Ryzen 9 9950X desktop processor.

Table 1: AES-256-CTR throughput improvement,
         CPU microarchitecture vs. message length in bytes:

                     | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
---------------------+-------+-------+-------+-------+-------+-------+
AMD Zen 5            |  232% |  203% |  212% |  143% |   71% |   95% |
Intel Emerald Rapids |  116% |  116% |  117% |   91% |   78% |   79% |
Intel Ice Lake       |  109% |  103% |  107% |   81% |   54% |   56% |
AMD Zen 4            |  109% |   91% |  100% |   70% |   43% |   59% |
AMD Zen 3            |   92% |   78% |   87% |   57% |   32% |   43% |
AMD Zen 2            |    9% |    8% |   14% |   12% |    8% |   21% |
Intel Skylake        |    7% |    7% |    8% |    5% |    3% |    8% |

                     |   300 |   200 |    64 |    63 |    16 |
---------------------+-------+-------+-------+-------+-------+
AMD Zen 5            |   57% |   39% |   -9% |    7% |   -7% |
Intel Emerald Rapids |   37% |   42% |   -0% |   13% |   -8% |
Intel Ice Lake       |   39% |   30% |   -1% |   14% |   -9% |
AMD Zen 4            |   42% |   38% |   -0% |   18% |   -3% |
AMD Zen 3            |   38% |   35% |    6% |   31% |    5% |
AMD Zen 2            |   24% |   23% |    5% |   30% |    3% |
Intel Skylake        |    9% |    1% |   -4% |   10% |   -7% |

Table 2: AES-256-XCTR throughput improvement,
         CPU microarchitecture vs. message length in bytes:

                     | 16384 |  4096 |  4095 |  1420 |   512 |   500 |
---------------------+-------+-------+-------+-------+-------+-------+
AMD Zen 5            |  240% |  201% |  216% |  151% |   75% |  108% |
Intel Emerald Rapids |  100% |   99% |  102% |   91% |   94% |  104% |
Intel Ice Lake       |   93% |   89% |   92% |   74% |   50% |   64% |
AMD Zen 4            |   86% |   75% |   83% |   60% |   41% |   52% |
AMD Zen 3            |   73% |   63% |   69% |   45% |   21% |   33% |
AMD Zen 2            |   -2% |   -2% |    2% |    3% |   -1% |   11% |
Intel Skylake        |   -1% |   -1% |    1% |    2% |   -1% |    9% |

                     |   300 |   200 |    64 |    63 |    16 |
---------------------+-------+-------+-------+-------+-------+
AMD Zen 5            |   78% |   56% |   -4% |   38% |   -2% |
Intel Emerald Rapids |   61% |   55% |    4% |   32% |   -5% |
Intel Ice Lake       |   57% |   42% |    3% |   44% |   -4% |
AMD Zen 4            |   35% |   28% |   -1% |   17% |   -3% |
AMD Zen 3            |   26% |   23% |   -3% |   11% |   -6% |
AMD Zen 2            |   13% |   24% |   -1% |   14% |   -3% |
Intel Skylake        |   16% |    8% |   -4% |   35% |   -3% |

Signed-off-by: Eric Biggers <ebiggers@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: ahash - use str_yes_no() helper in crypto_ahash_show()
Thorsten Blum [Mon, 10 Feb 2025 10:04:48 +0000 (11:04 +0100)] 
crypto: ahash - use str_yes_no() helper in crypto_ahash_show()

Remove hard-coded strings by using the str_yes_no() helper function.

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: inside-secure - Eliminate duplication in top-level Makefile
Herbert Xu [Sun, 9 Feb 2025 10:17:30 +0000 (18:17 +0800)] 
crypto: inside-secure - Eliminate duplication in top-level Makefile

Instead of having two entries for inside-secure in the top-level
Makefile, make it just a single one.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: ccp - Add support for PCI device 0x1134
Devaraj Rangasamy [Thu, 6 Feb 2025 22:11:52 +0000 (03:41 +0530)] 
crypto: ccp - Add support for PCI device 0x1134

PCI device 0x1134 shares the same register features as PCI device 0x17E0.
Hence, reuse the same data for the new PCI device ID 0x1134.

Signed-off-by: Devaraj Rangasamy <Devaraj.Rangasamy@amd.com>
Acked-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Mario Limonciello <mario.limonciello@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: hisilicon/sec2 - fix for sec spec check
Wenkai Lin [Wed, 5 Feb 2025 03:56:28 +0000 (11:56 +0800)] 
crypto: hisilicon/sec2 - fix for sec spec check

During encryption and decryption, user requests
must be checked first; if they use specifications
that are not supported by the hardware, software
computation is used for processing instead.

Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: hisilicon/sec2 - fix for aead authsize alignment
Wenkai Lin [Wed, 5 Feb 2025 03:56:27 +0000 (11:56 +0800)] 
crypto: hisilicon/sec2 - fix for aead authsize alignment

The hardware only supports authentication sizes
that are 4-byte aligned. Therefore, the driver
switches to software computation when the
authentication size is not 4-byte aligned.

Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: hisilicon/sec2 - fix for aead auth key length
Wenkai Lin [Wed, 5 Feb 2025 03:56:26 +0000 (11:56 +0800)] 
crypto: hisilicon/sec2 - fix for aead auth key length

According to the HMAC RFC, the authentication key
can be 0 bytes, and the hardware can handle this
scenario. Therefore, remove the incorrect validation
for this case.

Fixes: 2f072d75d1ab ("crypto: hisilicon - Add aead support on SEC2")
Signed-off-by: Wenkai Lin <linwenkai6@hisilicon.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  MAINTAINERS: add Nicolas Frattaroli to rockchip-rng maintainers
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:52 +0000 (16:35 +0100)] 
MAINTAINERS: add Nicolas Frattaroli to rockchip-rng maintainers

I maintain the rockchip,rk3588-rng bindings, and I guess also the part
of the driver that implements support for it. Therefore, add me to the
MAINTAINERS for this driver and these bindings.

Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  hwrng: rockchip - add support for rk3588's standalone TRNG
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:50 +0000 (16:35 +0100)] 
hwrng: rockchip - add support for rk3588's standalone TRNG

The RK3588 SoC includes several TRNGs: one is part of the Crypto IP block,
and another (referred to as "trngv1") is a standalone new IP.

Add support for this new standalone TRNG to the driver by both
generalising it to support multiple different rockchip RNGs and then
implementing the required functionality for the new hardware.

This work was partly based on the downstream vendor driver by Rockchip's
Lin Jinhan, which is why they are listed as a Co-author.

While the hardware does support notifying the CPU with an IRQ when the
random data is ready, I've discovered while implementing the code to use
this interrupt that this results in significantly slower throughput of
the TRNG even when under heavy CPU load. I assume this is because with
only 32 bytes of data per invocation, the overhead of reinitialising a
completion, enabling the interrupt, sleeping and then triggering the
completion in the IRQ handler is way more expensive than busylooping.

Speaking of busylooping, the poll interval for reading the ISTAT is an
atomic read with a delay of 0. In my testing, I've found that this gives
us the largest throughput, and it appears the random data is ready
pretty much the moment we begin polling, as increasing the poll delay
leads to a drop in throughput significant enough to not just be due to
the poll interval missing the ideal timing by a microsecond or two.

According to downstream, the IP should take 1024 clock cycles to
generate 56 bits of random data, which at 150MHz should work out to
6.8us. I did not test whether the data really does take 256/56*6.8us
to arrive, though changing the readl to a __raw_readl makes no
difference in throughput, and this data does pass the rngtest FIPS
checks, so I'm not entirely sure what's going on but I presume it's got
something to do with the AHB bus speed and the memory barriers that
mainline's readl/writel functions insert.

The only other current SoC that uses this new IP is the Rockchip RV1106,
but that SoC does not have mainline support as of the time of writing,
so we make no effort to declare it as supported for now.

Co-developed-by: Lin Jinhan <troy.lin@rock-chips.com>
Signed-off-by: Lin Jinhan <troy.lin@rock-chips.com>
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  hwrng: rockchip - eliminate some unnecessary dereferences
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:49 +0000 (16:35 +0100)] 
hwrng: rockchip - eliminate some unnecessary dereferences

Despite assigning a temporary variable the value of &pdev->dev early on
in the probe function, the probe function then continues to use this
construct when it could just use the local dev variable instead.

Simplify this by using the local dev variable directly.

Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  hwrng: rockchip - store dev pointer in driver struct
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:48 +0000 (16:35 +0100)] 
hwrng: rockchip - store dev pointer in driver struct

The rockchip rng driver does a dance to store the dev pointer in the
hwrng's unsigned long "priv" member. However, since the struct hwrng
member of rk_rng is not a pointer, we can use container_of to get the
struct rk_rng instance from just the struct hwrng*, which means we don't
have to subvert what little there is in C of a type system and can
instead store a pointer to the device struct in the rk_rng itself.

Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
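
A minimal sketch of the container_of() pattern described above (struct layout
and field names assumed, not copied from the driver):

  struct rk_rng {
          struct hwrng rng;
          void __iomem *base;
          struct device *dev;
  };

  static int rk_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
  {
          /* No cast from the unsigned long ->priv member needed any more. */
          struct rk_rng *rk_rng = container_of(rng, struct rk_rng, rng);

          dev_dbg(rk_rng->dev, "reading up to %zu bytes\n", max);
          /* ... */
          return 0;
  }
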
3 months ago  dt-bindings: rng: add binding for Rockchip RK3588 RNG
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:47 +0000 (16:35 +0100)] 
dt-bindings: rng: add binding for Rockchip RK3588 RNG

The Rockchip RK3588 SoC has two hardware RNGs accessible to the
non-secure world: an RNG in the Crypto IP, and a standalone RNG that is
new to this SoC.

Add a binding for this new standalone RNG. It is distinct hardware from
the existing rockchip,rk3568-rng, and therefore gets its own binding as
the two hardware IPs are unrelated other than both being made by the
same vendor.

The RNG is capable of firing an interrupt when entropy is ready.

The reset is optional, as the hardware does a power-on reset, and
functions without the software manually resetting it.

Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  dt-bindings: reset: Add SCMI reset IDs for RK3588
Nicolas Frattaroli [Tue, 4 Feb 2025 15:35:46 +0000 (16:35 +0100)] 
dt-bindings: reset: Add SCMI reset IDs for RK3588

When TF-A is used to assert/deassert the resets through SCMI, the
IDs communicated to it are different than the ones mainline Linux uses.

Import the list of SCMI reset IDs from mainline TF-A so that devicetrees
can use these IDs more easily.

Co-developed-by: XiaoDong Huang <derrick.huang@rock-chips.com>
Signed-off-by: XiaoDong Huang <derrick.huang@rock-chips.com>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Signed-off-by: Nicolas Frattaroli <nicolas.frattaroli@collabora.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: virtio - Drop superfluous [as]kcipher_req pointer
Lukas Wunner [Mon, 3 Feb 2025 13:37:05 +0000 (14:37 +0100)] 
crypto: virtio - Drop superfluous [as]kcipher_req pointer

The request context virtio_crypto_{akcipher,sym}_request contains a
pointer to the [as]kcipher_request itself.

The pointer is superfluous as it can be calculated with container_of().

Drop the superfluous pointer.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: virtio - Drop superfluous [as]kcipher_ctx pointer
Lukas Wunner [Mon, 3 Feb 2025 13:37:04 +0000 (14:37 +0100)] 
crypto: virtio - Drop superfluous [as]kcipher_ctx pointer

The request context virtio_crypto_{akcipher,sym}_request contains a
pointer to the transform context virtio_crypto_[as]kcipher_ctx.

The pointer is superfluous as it can be calculated with the cheap
crypto_akcipher_reqtfm() + akcipher_tfm_ctx() and
crypto_skcipher_reqtfm() + crypto_skcipher_ctx() combos.

Drop the superfluous pointer.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: virtio - Drop superfluous ctx->tfm backpointer
Lukas Wunner [Mon, 3 Feb 2025 13:37:03 +0000 (14:37 +0100)] 
crypto: virtio - Drop superfluous ctx->tfm backpointer

struct virtio_crypto_[as]kcipher_ctx contains a backpointer to struct
crypto_[as]kcipher which is superfluous in two ways:

First, it's not used anywhere.  Second, the context is embedded into
struct crypto_tfm, so one could just use container_of() to get from the
context to crypto_tfm and from there to crypto_[as]kcipher.

Drop the superfluous backpointer.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: virtio - Simplify RSA key size caching
Lukas Wunner [Mon, 3 Feb 2025 13:37:02 +0000 (14:37 +0100)] 
crypto: virtio - Simplify RSA key size caching

When setting a public or private RSA key, the integer n is cached in the
transform context virtio_crypto_akcipher_ctx -- with the sole purpose of
calculating the key size from it in virtio_crypto_rsa_max_size().
It looks like this was copy-pasted from crypto/rsa.c.

Cache the key size directly instead of the integer n, thus simplifying
the code and reducing the memory footprint.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
3 months ago  crypto: virtio - Fix kernel-doc of virtcrypto_dev_stop()
Lukas Wunner [Mon, 3 Feb 2025 13:37:01 +0000 (14:37 +0100)] 
crypto: virtio - Fix kernel-doc of virtcrypto_dev_stop()

It seems the kernel-doc of virtcrypto_dev_start() was copied verbatim to
virtcrypto_dev_stop().  Fix it.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months ago  crypto: ecdsa - Harden against integer overflows in DIV_ROUND_UP()
Lukas Wunner [Sun, 2 Feb 2025 19:00:52 +0000 (20:00 +0100)] 
crypto: ecdsa - Harden against integer overflows in DIV_ROUND_UP()

Herbert notes that DIV_ROUND_UP() may overflow unnecessarily if an ecdsa
implementation's ->key_size() callback returns an unusually large value.
Herbert instead suggests (for a division by 8):

  X / 8 + !!(X & 7)

Based on this formula, introduce a generic DIV_ROUND_UP_POW2() macro and
use it in lieu of DIV_ROUND_UP() for ->key_size() return values.

Additionally, use the macro in ecc_digits_from_bytes(), whose "nbytes"
parameter is a ->key_size() return value in some instances, or a
user-specified ASN.1 length in the case of ecdsa_get_signature_rs().

Link: https://lore.kernel.org/r/Z3iElsILmoSu6FuC@gondor.apana.org.au/
Signed-off-by: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
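
Based on the formula quoted above, the new macro amounts to the following for
a power-of-two divisor (shown here as a straightforward reading of that
formula, not a verbatim copy of the header change):

  /* Avoids the possible overflow of n + d - 1 in DIV_ROUND_UP(). */
  #define DIV_ROUND_UP_POW2(n, d)  ((n) / (d) + !!((n) & ((d) - 1)))

  /* e.g. converting a ->key_size() result in bits to bytes: */
  nbytes = DIV_ROUND_UP_POW2(nbits, 8);
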
4 months ago  crypto: sig - Prepare for algorithms with variable signature size
Lukas Wunner [Sun, 2 Feb 2025 19:00:51 +0000 (20:00 +0100)] 
crypto: sig - Prepare for algorithms with variable signature size

The callers of crypto_sig_sign() assume that the signature size is
always equivalent to the key size.

This happens to be true for RSA, which is currently the only algorithm
implementing the ->sign() callback.  But it is false e.g. for X9.62
encoded ECDSA signatures because they have variable length.

Prepare for addition of a ->sign() callback to such algorithms by
letting the callback return the signature size (or a negative integer
on error).  When testing the ->sign() callback in test_sig_one(),
use crypto_sig_maxsize() instead of crypto_sig_keysize() to verify that
the test vector's signature does not exceed an algorithm's maximum
signature size.

There has been a relatively recent effort to upstream ECDSA signature
generation support which may benefit from this change:

https://lore.kernel.org/linux-crypto/20220908200036.2034-1-ignat@cloudflare.com/

However the main motivation for this commit is to reduce the number of
crypto_sig_keysize() callers:  This function is about to be changed to
return the size in bits instead of bytes and that will require amending
most callers to divide the return value by 8.

Signed-off-by: Lukas Wunner <lukas@wunner.de>
Reviewed-by: Stefan Berger <stefanb@linux.ibm.com>
Cc: Ignat Korchagin <ignat@cloudflare.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months ago  hwrng: imx-rngc - add runtime pm
Martin Kaiser [Sat, 1 Feb 2025 18:39:07 +0000 (19:39 +0100)] 
hwrng: imx-rngc - add runtime pm

Add runtime power management to the imx-rngc driver. Disable the
peripheral clock when the rngc is idle.

The callback functions from struct hwrng wake the rngc up when they're
called and set it to idle on exit. Helper functions which are invoked
from the callbacks assume that the rngc is active.

Device init and probe are done before runtime pm is enabled. The
peripheral clock will be handled manually during these steps. Do not use
devres any more to enable/disable the peripheral clock, this conflicts
with runtime pm.

Signed-off-by: Martin Kaiser <martin@kaiser.cx>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
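
One plausible shape of the read callback under this scheme (a sketch only:
imx_rngc_read_words() is an invented helper name, and the real driver may not
use autosuspend):

  static int imx_rngc_read(struct hwrng *rng, void *data, size_t max, bool wait)
  {
          struct imx_rngc *rngc = container_of(rng, struct imx_rngc, rng);
          int ret;

          ret = pm_runtime_resume_and_get(rngc->dev);   /* wake: clock back on */
          if (ret)
                  return ret;

          ret = imx_rngc_read_words(rngc, data, max);   /* helper assumes rngc is active */

          pm_runtime_mark_last_busy(rngc->dev);
          pm_runtime_put_autosuspend(rngc->dev);        /* idle: clock gated later */
          return ret;
  }
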
4 months ago  crypto: qat - set command ids as reserved
Suman Kumar Chakraborty [Fri, 31 Jan 2025 11:34:54 +0000 (11:34 +0000)] 
crypto: qat - set command ids as reserved

The XP10 algorithm is not supported by any QAT device.
Remove the definition of bit 7 (ICP_QAT_FW_COMP_20_CMD_XP10_COMPRESS)
and bit 8 (ICP_QAT_FW_COMP_20_CMD_XP10_DECOMPRESS) in the firmware
command id enum and rename them as reserved.
Those bits shall not be used in the future.

Signed-off-by: Suman Kumar Chakraborty <suman.kumar.chakraborty@intel.com>
Reviewed-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months ago  MAINTAINERS: Add Vinicius Gomes to MAINTAINERS for IAA Crypto
Kristen Carlson Accardi [Tue, 28 Jan 2025 23:57:43 +0000 (15:57 -0800)] 
MAINTAINERS: Add Vinicius Gomes to MAINTAINERS for IAA Crypto

Add Vinicius Gomes to the MAINTAINERS list for the IAA Crypto driver.

Signed-off-by: Kristen Carlson Accardi <kristen.c.accardi@intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months ago  crypto: x86/aes-xts - make the fast path 64-bit specific
Eric Biggers [Mon, 27 Jan 2025 21:16:09 +0000 (13:16 -0800)] 
crypto: x86/aes-xts - make the fast path 64-bit specific

Remove 32-bit support from the fast path in xts_crypt().  Then optimize
it for 64-bit, and simplify the code, by switching to sg_virt() and
removing the now-unnecessary checks for crossing a page boundary.

The result is simpler code that is slightly smaller and faster in the
case that actually matters (64-bit).

Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months ago  crypto: hisilicon/hpre - adapt ECDH for high-performance cores
lizhi [Sat, 18 Jan 2025 00:45:20 +0000 (08:45 +0800)] 
crypto: hisilicon/hpre - adapt ECDH for high-performance cores

Only the ECDH with NIST P-256 meets requirements.
The algorithm will be scheduled first for high-performance cores.
The key step is to config resv1 field of BD.

Signed-off-by: lizhi <lizhi206@huawei.com>
Signed-off-by: Chenghai Huang <huangchenghai2@huawei.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agocrypto: ccp - Fix check for the primary ASP device
Tom Lendacky [Fri, 17 Jan 2025 23:05:47 +0000 (17:05 -0600)] 
crypto: ccp - Fix check for the primary ASP device

Currently, the ASP primary device check does not have support for PCI
domains, and, as a result, when the system is configured with PCI domains
(PCI segments) the wrong device can be selected as primary. This results
in commands submitted to the device timing out and failing. The device
check also relies on specific device and function assignments that may
not hold in the future.

Fix the primary ASP device check to include support for PCI domains and
to perform proper checking of the Bus/Device/Function positions.
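
A minimal sketch of a domain-aware comparison (function and variable
names are illustrative, not the driver's):

  /* Order ASP devices by PCI domain, then bus, then devfn, and treat
   * the lowest-positioned device as the primary one. */
  static bool asp_positioned_before(struct pci_dev *a, struct pci_dev *b)
  {
          if (pci_domain_nr(a->bus) != pci_domain_nr(b->bus))
                  return pci_domain_nr(a->bus) < pci_domain_nr(b->bus);
          if (a->bus->number != b->bus->number)
                  return a->bus->number < b->bus->number;
          return a->devfn < b->devfn;
  }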

Fixes: 2a6170dfe755 ("crypto: ccp: Add Platform Security Processor (PSP) device support")
Cc: stable@vger.kernel.org
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agocrypto: skcipher - use str_yes_no() helper in crypto_skcipher_show()
Thorsten Blum [Fri, 17 Jan 2025 14:42:22 +0000 (15:42 +0100)] 
crypto: skcipher - use str_yes_no() helper in crypto_skcipher_show()

Remove hard-coded strings by using the str_yes_no() helper function.
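
The transformation has this shape (shown on a representative line; the
exact fields printed by crypto_skcipher_show() may differ):

  /* before */
  seq_printf(m, "async        : %s\n",
             alg->cra_flags & CRYPTO_ALG_ASYNC ? "yes" : "no");

  /* after */
  seq_printf(m, "async        : %s\n",
             str_yes_no(alg->cra_flags & CRYPTO_ALG_ASYNC));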

Signed-off-by: Thorsten Blum <thorsten.blum@linux.dev>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agohwrng: Kconfig - Move one "tristate" Kconfig description to the usual place
Dragan Simic [Wed, 15 Jan 2025 13:07:01 +0000 (14:07 +0100)] 
hwrng: Kconfig - Move one "tristate" Kconfig description to the usual place

It's pretty usual to have "tristate" descriptions in Kconfig files placed
immediately after the actual configuration options, so correct the position
of one misplaced "tristate" spotted in the hw_random Kconfig file.

No intended functional changes are introduced by this trivial cleanup.

Signed-off-by: Dragan Simic <dsimic@manjaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agohwrng: Kconfig - Use tabs as leading whitespace consistently in Kconfig
Dragan Simic [Wed, 15 Jan 2025 13:07:00 +0000 (14:07 +0100)] 
hwrng: Kconfig - Use tabs as leading whitespace consistently in Kconfig

Replace instances of leading size-eight groups of space characters with
the usual tab characters, as spotted in the hw_random Kconfig file.

No intended functional changes are introduced by this trivial cleanup.

Signed-off-by: Dragan Simic <dsimic@manjaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agocrypto: drivers - Use str_enable_disable-like helpers
Krzysztof Kozlowski [Tue, 14 Jan 2025 19:05:01 +0000 (20:05 +0100)] 
crypto: drivers - Use str_enable_disable-like helpers

Replace ternary (condition ? "enable" : "disable") syntax with helpers
from string_choices.h because:
1. A simple function call with one argument is easier to read.  A
   ternary operator has three arguments and, with wrapping, can lead to
   quite long code.
2. It is slightly shorter and thus also easier to read.
3. It brings uniformity to the text - the same string everywhere.
4. It allows deduplication by the linker, which results in a smaller
   binary file.
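
A representative before/after, assuming a typical driver log line (the
exact call sites vary per driver):

  /* before */
  dev_info(dev, "ring interrupts %s\n", enabled ? "enabled" : "disabled");

  /* after */
  dev_info(dev, "ring interrupts %s\n", str_enabled_disabled(enabled));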

Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
Acked-by: Giovanni Cabiddu <giovanni.cabiddu@intel.com> # QAT
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agolib: 842: Improve error handling in sw842_compress()
Tanya Agarwal [Tue, 14 Jan 2025 14:12:04 +0000 (19:42 +0530)] 
lib: 842: Improve error handling in sw842_compress()

The static code analysis tool "Coverity Scan" pointed out the following
implementation detail for further consideration:
CID 1309755: Unused value
In sw842_compress: A value assigned to a variable is never used. (CWE-563)
returned_value: Assigning value from add_repeat_template(p, repeat_count)
to ret here, but that stored value is overwritten before it can be used.

Conclusion:
Add error handling for the return value from an add_repeat_template()
call.
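
A simplified sketch of the resulting handling in sw842_compress()
(surrounding context omitted):

  ret = add_repeat_template(p, repeat_count);
  if (ret)
          return ret; /* propagate the error instead of discarding it */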

Fixes: 2da572c959dd ("lib: add software 842 compression/decompression")
Signed-off-by: Tanya Agarwal <tanyaagarwal25699@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agocrypto: eip93 - Add Inside Secure SafeXcel EIP-93 crypto engine support
Christian Marangi [Tue, 14 Jan 2025 12:36:36 +0000 (13:36 +0100)] 
crypto: eip93 - Add Inside Secure SafeXcel EIP-93 crypto engine support

Add support for the Inside Secure SafeXcel EIP-93 Crypto Engine used on
the Mediatek MT7621 SoC and on newer Airoha SoCs.

The EIP-93 IP supports AES/DES/3DES ciphers in ECB, CBC and CTR modes,
as well as authenc(HMAC(x), cipher(y)) using HMAC with MD5, SHA1, SHA224
and SHA256.

The EIP-93 provides registers that signal support for specific ciphers,
and the driver dynamically registers only the algorithms supported by
the chip.
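
A hedged sketch of the dynamic registration idea; the register name,
bit mask and array names are assumptions, not the driver's:

  u32 algo_flags = readl(eip93->base + EIP93_REG_PE_OPTION); /* hypothetical */

  /* only register what the silicon reports as present */
  if (algo_flags & EIP93_PE_OPTION_AES)
          err = crypto_register_skciphers(eip93_aes_algs,
                                          ARRAY_SIZE(eip93_aes_algs));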

Signed-off-by: Richard van Schagen <vschagen@icloud.com>
Co-developed-by: Christian Marangi <ansuelsmth@gmail.com>
Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agodt-bindings: crypto: Add Inside Secure SafeXcel EIP-93 crypto engine
Christian Marangi [Tue, 14 Jan 2025 12:36:35 +0000 (13:36 +0100)] 
dt-bindings: crypto: Add Inside Secure SafeXcel EIP-93 crypto engine

Add bindings for the Inside Secure SafeXcel EIP-93 crypto engine.

The IP is present on Airoha SoCs, on various Mediatek devices and on
other SoCs under different names such as mtk-eip93 or PKTE.

All the compatibles that currently don't have any user are defined but
rejected, pending an actual device that makes use of them.

Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Reviewed-by: Rob Herring (Arm) <robh@kernel.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agospinlock: extend guard with spinlock_bh variants
Christian Marangi [Tue, 14 Jan 2025 12:36:34 +0000 (13:36 +0100)] 
spinlock: extend guard with spinlock_bh variants

Extend guard APIs with missing raw/spinlock_bh variants.
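
Example use of one of the new variants (the surrounding structure is
illustrative):

  struct my_queue {
          spinlock_t       lock;
          struct list_head items;
  };

  static void my_queue_add(struct my_queue *q, struct list_head *node)
  {
          /* takes q->lock with spin_lock_bh(), drops it at scope exit */
          guard(spinlock_bh)(&q->lock);
          list_add_tail(node, &q->items);
  }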

Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
4 months agoLinux 6.14-rc1 v6.14-rc1
Linus Torvalds [Sun, 2 Feb 2025 23:39:26 +0000 (15:39 -0800)] 
Linux 6.14-rc1

4 months agoMerge tag 'turbostat-2025.02.02' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sun, 2 Feb 2025 18:49:13 +0000 (10:49 -0800)] 
Merge tag 'turbostat-2025.02.02' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux

Pull turbostat updates from Len Brown:

 - Fix regression that affinitized forked child in one-shot mode.

 - Harden one-shot mode against hotplug online/offline

 - Enable RAPL SysWatt column by default

 - Add initial PTL, CWF platform support

 - Harden initial PMT code in response to early use

 - Enable first built-in PMT counter: CWF c1e residency

 - Refuse to run on unsupported platforms without --force, to encourage
   updating to a version that supports the system, and to avoid
   not-so-useful measurement results

* tag 'turbostat-2025.02.02' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux: (25 commits)
  tools/power turbostat: version 2025.02.02
  tools/power turbostat: Add CPU%c1e BIC for CWF
  tools/power turbostat: Harden one-shot mode against cpu offline
  tools/power turbostat: Fix forked child affinity regression
  tools/power turbostat: Add tcore clock PMT type
  tools/power turbostat: version 2025.01.14
  tools/power turbostat: Allow adding PMT counters directly by sysfs path
  tools/power turbostat: Allow mapping multiple PMT files with the same GUID
  tools/power turbostat: Add PMT directory iterator helper
  tools/power turbostat: Extend PMT identification with a sequence number
  tools/power turbostat: Return default value for unmapped PMT domains
  tools/power turbostat: Check for non-zero value when MSR probing
  tools/power turbostat: Enhance turbostat self-performance visibility
  tools/power turbostat: Add fixed RAPL PSYS divisor for SPR
  tools/power turbostat: Fix PMT mmaped file size rounding
  tools/power turbostat: Remove SysWatt from DISABLED_BY_DEFAULT
  tools/power turbostat: Add an NMI column
  tools/power turbostat: add Busy% to "show idle"
  tools/power turbostat: Introduce --force parameter
  tools/power turbostat: Improve --help output
  ...

4 months agoMerge tag 'sh-for-v6.14-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/glaubi...
Linus Torvalds [Sun, 2 Feb 2025 18:40:27 +0000 (10:40 -0800)] 
Merge tag 'sh-for-v6.14-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/glaubitz/sh-linux

Pull sh updates from John Paul Adrian Glaubitz:
 "Fixes and improvements for sh:

   - replace seq_printf() with the more efficient
     seq_put_decimal_ull_width() to increase performance when stress
     reading /proc/interrupts (David Wang)

   - migrate sh to the generic rule for built-in DTB to help avoid race
     conditions during parallel builds, which can occur because Kbuild
     descends into arch/*/boot/dts twice (Masahiro Yamada)

   - replace select with imply in the board Kconfig for enabling
     hardware with complex dependencies. This addresses warnings which
     were reported by the kernel test robot (Geert Uytterhoeven)"

* tag 'sh-for-v6.14-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/glaubitz/sh-linux:
  sh: boards: Use imply to enable hardware with complex dependencies
  sh: Migrate to the generic rule for built-in DTB
  sh: irq: Use seq_put_decimal_ull_width() for decimal values

4 months agotools/power turbostat: version 2025.02.02
Len Brown [Sun, 2 Feb 2025 16:43:02 +0000 (10:43 -0600)] 
tools/power turbostat: version 2025.02.02

Summary of Changes since 2024.11.30:

Fix regression in 2023.11.07 that affinitized forked child
in one-shot mode.

Harden one-shot mode against hotplug online/offline

Enable RAPL SysWatt column by default.

Add initial PTL, CWF platform support.

Harden initial PMT code in response to early use.

Enable first built-in PMT counter: CWF c1e residency

Refuse to run on unsupported platforms without --force,
to encourage updating to a version that supports the system,
and to avoid not-so-useful measurement results.

Signed-off-by: Len Brown <len.brown@intel.com>
4 months agoMerge tag 'pull-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Linus Torvalds [Sat, 1 Feb 2025 23:07:56 +0000 (15:07 -0800)] 
Merge tag 'pull-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull misc vfs cleanups from Al Viro:
 "Two unrelated patches - one is a removal of long-obsolete include in
  overlayfs (it used to need fs/internal.h, but the extern it wanted has
  been moved back to include/linux/namei.h) and another introduces
  convenience helper constructing struct qstr by a NUL-terminated
  string"

* tag 'pull-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
  add a string-to-qstr constructor
  fs/overlayfs/namei.c: get rid of include ../internal.h

4 months agoMerge tag 'mips_6.14_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
Linus Torvalds [Sat, 1 Feb 2025 22:54:33 +0000 (14:54 -0800)] 
Merge tag 'mips_6.14_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux

Pull MIPS fix from Thomas Bogendoerfer:
 "Revert commit breaking sysv ipc for o32 ABI"

* tag 'mips_6.14_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
  Revert "mips: fix shmctl/semctl/msgctl syscall for o32"

4 months agoMerge tag 'v6.14-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds [Sat, 1 Feb 2025 19:30:41 +0000 (11:30 -0800)] 
Merge tag 'v6.14-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6

Pull more smb client updates from Steve French:

   - various updates for special file handling: symlink handling,
     support for creating sockets, cleanups, new mount options (e.g. to
     allow disabling using reparse points for them, and to allow
     overriding the way symlinks are saved), and fixes to error paths

   - fix for kerberos mounts (allow IAKerb)

   - SMB1 fix for stat and for setting SACL (auditing)

   - fix an incorrect error code mapping

   - cleanups"

* tag 'v6.14-rc-smb3-client-fixes-part2' of git://git.samba.org/sfrench/cifs-2.6: (21 commits)
  cifs: Fix parsing native symlinks directory/file type
  cifs: update internal version number
  cifs: Add support for creating WSL-style symlinks
  smb3: add support for IAKerb
  cifs: Fix struct FILE_ALL_INFO
  cifs: Add support for creating NFS-style symlinks
  cifs: Add support for creating native Windows sockets
  cifs: Add mount option -o reparse=none
  cifs: Add mount option -o symlink= for choosing symlink create type
  cifs: Fix creating and resolving absolute NT-style symlinks
  cifs: Simplify reparse point check in cifs_query_path_info() function
  cifs: Remove symlink member from cifs_open_info_data union
  cifs: Update description about ACL permissions
  cifs: Rename struct reparse_posix_data to reparse_nfs_data_buffer and move to common/smb2pdu.h
  cifs: Remove struct reparse_posix_data from struct cifs_open_info_data
  cifs: Remove unicode parameter from parse_reparse_point() function
  cifs: Fix getting and setting SACLs over SMB1
  cifs: Remove intermediate object of failed create SFU call
  cifs: Validate EAs for WSL reparse points
  cifs: Change translation of STATUS_PRIVILEGE_NOT_HELD to -EPERM
  ...

4 months agoMerge tag 'driver-core-6.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 1 Feb 2025 18:04:29 +0000 (10:04 -0800)] 
Merge tag 'driver-core-6.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core

Pull debugfs fix from Greg KH:
 "Here is a single debugfs fix from Al to resolve a reported regression
  in the driver-core tree. It has been reported to fix the issue"

* tag 'driver-core-6.14-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
  debugfs: Fix the missing initializations in __debugfs_file_get()

4 months agoMerge tag 'mm-hotfixes-stable-2025-02-01-03-56' of git://git.kernel.org/pub/scm/linux...
Linus Torvalds [Sat, 1 Feb 2025 17:49:20 +0000 (09:49 -0800)] 
Merge tag 'mm-hotfixes-stable-2025-02-01-03-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Pull misc fixes from Andrew Morton:
 "21 hotfixes. 8 are cc:stable and the remainder address post-6.13
  issues. 13 are for MM and 8 are for non-MM.

  All are singletons, please see the changelogs for details"

* tag 'mm-hotfixes-stable-2025-02-01-03-56' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (21 commits)
  MAINTAINERS: include linux-mm for xarray maintenance
  revert "xarray: port tests to kunit"
  MAINTAINERS: add lib/test_xarray.c
  mailmap, MAINTAINERS, docs: update Carlos's email address
  mm/hugetlb: fix hugepage allocation for interleaved memory nodes
  mm: gup: fix infinite loop within __get_longterm_locked
  mm, swap: fix reclaim offset calculation error during allocation
  .mailmap: update email address for Christopher Obbard
  kfence: skip __GFP_THISNODE allocations on NUMA systems
  nilfs2: fix possible int overflows in nilfs_fiemap()
  mm: compaction: use the proper flag to determine watermarks
  kernel: be more careful about dup_mmap() failures and uprobe registering
  mm/fake-numa: handle cases with no SRAT info
  mm: kmemleak: fix upper boundary check for physical address objects
  mailmap: add an entry for Hamza Mahfooz
  MAINTAINERS: mailmap: update Yosry Ahmed's email address
  scripts/gdb: fix aarch64 userspace detection in get_current_task
  mm/vmscan: accumulate nr_demoted for accurate demotion statistics
  ocfs2: fix incorrect CPU endianness conversion causing mount failure
  mm/zsmalloc: add __maybe_unused attribute for is_first_zpdesc()
  ...

4 months agoMerge tag 'media/v6.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab...
Linus Torvalds [Sat, 1 Feb 2025 17:15:01 +0000 (09:15 -0800)] 
Merge tag 'media/v6.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media

Pull media fix from Mauro Carvalho Chehab:
 "A revert for a regression in the uvcvideo driver"

* tag 'media/v6.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
  Revert "media: uvcvideo: Require entities to have a non-zero unique ID"

4 months agoMAINTAINERS: include linux-mm for xarray maintenance
Andrew Morton [Fri, 31 Jan 2025 00:16:20 +0000 (16:16 -0800)] 
MAINTAINERS: include linux-mm for xarray maintenance

MM developers have an interest in the xarray code.

Cc: David Gow <davidgow@google.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Tamir Duberstein <tamird@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agorevert "xarray: port tests to kunit"
Andrew Morton [Fri, 31 Jan 2025 00:09:20 +0000 (16:09 -0800)] 
revert "xarray: port tests to kunit"

Revert c7bb5cf9fc4e ("xarray: port tests to kunit").  It broke the build
when compiling the xarray userspace test harness code.

Reported-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Closes: https://lkml.kernel.org/r/07cf896e-adf8-414f-a629-a808fc26014a@oracle.com
Cc: David Gow <davidgow@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Tamir Duberstein <tamird@gmail.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agoMAINTAINERS: add lib/test_xarray.c
Tamir Duberstein [Wed, 29 Jan 2025 21:13:49 +0000 (16:13 -0500)] 
MAINTAINERS: add lib/test_xarray.c

Ensure test-only changes are sent to the relevant maintainer.

Link: https://lkml.kernel.org/r/20250129-xarray-test-maintainer-v1-1-482e31f30f47@gmail.com
Signed-off-by: Tamir Duberstein <tamird@gmail.com>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agomailmap, MAINTAINERS, docs: update Carlos's email address
Carlos Bilbao [Thu, 30 Jan 2025 01:22:44 +0000 (19:22 -0600)] 
mailmap, MAINTAINERS, docs: update Carlos's email address

Update .mailmap to reflect my new (and final) primary email address,
carlos.bilbao@kernel.org.  Also update contact information in files
Documentation/translations/sp_SP/index.rst and MAINTAINERS.

Link: https://lkml.kernel.org/r/20250130012248.1196208-1-carlos.bilbao@kernel.org
Signed-off-by: Carlos Bilbao <carlos.bilbao@kernel.org>
Cc: Carlos Bilbao <bilbao@vt.edu>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agomm/hugetlb: fix hugepage allocation for interleaved memory nodes
Ritesh Harjani (IBM) [Sat, 11 Jan 2025 11:06:55 +0000 (16:36 +0530)] 
mm/hugetlb: fix hugepage allocation for interleaved memory nodes

gather_bootmem_prealloc() assumes the start nid is 0 and the size is
num_node_state(N_MEMORY).  That means that if the memory-attached NUMA
nodes are interleaved, gather_bootmem_prealloc_parallel() will fail to
scan some of these nodes.

Since memory-attached NUMA nodes can be interleaved in any fashion,
ensure that the code checks all NUMA node ids (.size = nr_node_ids).
Keep max_threads as num_node_state(N_MEMORY), so that all nr_node_ids
are distributed among that many threads.
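
In padata_do_multithreaded() terms, the change is roughly the following
(a simplified sketch of the job setup, not the complete initializer):

  struct padata_mt_job job = {
          .thread_fn   = gather_bootmem_prealloc_parallel,
          .start       = 0,
          .size        = nr_node_ids,              /* was num_node_state(N_MEMORY) */
          .align       = 1,
          .min_chunk   = 1,
          .max_threads = num_node_state(N_MEMORY), /* unchanged */
  };

  padata_do_multithreaded(&job);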

e.g. qemu cmdline
========================
numa_cmd="-numa node,nodeid=1,memdev=mem1,cpus=2-3 -numa node,nodeid=0,cpus=0-1 -numa dist,src=0,dst=1,val=20"
mem_cmd="-object memory-backend-ram,id=mem1,size=16G"

w/o this patch for cmdline (default_hugepagesz=1GB hugepagesz=1GB hugepages=2):
==========================
~ # cat /proc/meminfo  |grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:               0 kB

with this patch for cmdline (default_hugepagesz=1GB hugepagesz=1GB hugepages=2):
===========================
~ # cat /proc/meminfo |grep -i huge
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
FileHugePages:         0 kB
HugePages_Total:       2
HugePages_Free:        2
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:    1048576 kB
Hugetlb:         2097152 kB

Link: https://lkml.kernel.org/r/f8d8dad3a5471d284f54185f65d575a6aaab692b.1736592534.git.ritesh.list@gmail.com
Fixes: b78b27d02930 ("hugetlb: parallelize 1G hugetlb initialization")
Signed-off-by: Ritesh Harjani (IBM) <ritesh.list@gmail.com>
Reported-by: Pavithra Prakash <pavrampu@linux.ibm.com>
Suggested-by: Muchun Song <muchun.song@linux.dev>
Tested-by: Sourabh Jain <sourabhjain@linux.ibm.com>
Reviewed-by: Luiz Capitulino <luizcap@redhat.com>
Acked-by: David Rientjes <rientjes@google.com>
Cc: Donet Tom <donettom@linux.ibm.com>
Cc: Gang Li <gang.li@linux.dev>
Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agomm: gup: fix infinite loop within __get_longterm_locked
Zhaoyang Huang [Tue, 21 Jan 2025 02:01:59 +0000 (10:01 +0800)] 
mm: gup: fix infinite loop within __get_longterm_locked

We can run into an infinite loop in __get_longterm_locked() when
collect_longterm_unpinnable_folios() finds only folios that are isolated
from the LRU or were never added to the LRU.  This can happen when all
folios to be pinned are never added to the LRU, for example when
vm_ops->fault allocated pages using cma_alloc() and never added them to
the LRU.

Fix it by simply taking a look at the list in the single caller, to see if
anything was added.
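
A sketch of the caller-side check (the argument list of the collect
helper is simplified and illustrative):

  LIST_HEAD(movable_folio_list);

  collect_longterm_unpinnable_folios(&movable_folio_list, pofs); /* args illustrative */
  if (list_empty(&movable_folio_list))
          return 0; /* nothing collected: stop instead of retrying forever */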

[zhaoyang.huang@unisoc.com: move definition of local]
Link: https://lkml.kernel.org/r/20250122012604.3654667-1-zhaoyang.huang@unisoc.com
Link: https://lkml.kernel.org/r/20250121020159.3636477-1-zhaoyang.huang@unisoc.com
Fixes: 67e139b02d99 ("mm/gup.c: refactor check_and_migrate_movable_pages()")
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
Reviewed-by: John Hubbard <jhubbard@nvidia.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Suggested-by: David Hildenbrand <david@redhat.com>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Aijun Sun <aijun.sun@unisoc.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agomm, swap: fix reclaim offset calculation error during allocation
Kairui Song [Thu, 30 Jan 2025 11:51:31 +0000 (19:51 +0800)] 
mm, swap: fix reclaim offset calculation error during allocation

There is a code error that will cause the swap entry allocator to reclaim
and check the whole cluster with an unexpected tail offset instead of the
part that needs to be reclaimed.  This may cause corruption of the swap
map, so fix it.

Link: https://lkml.kernel.org/r/20250130115131.37777-1-ryncsn@gmail.com
Fixes: 3b644773eefd ("mm, swap: reduce contention on device lock")
Signed-off-by: Kairui Song <kasong@tencent.com>
Cc: Chris Li <chrisl@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months ago.mailmap: update email address for Christopher Obbard
Christopher Obbard [Wed, 22 Jan 2025 12:04:27 +0000 (12:04 +0000)] 
.mailmap: update email address for Christopher Obbard

Update my email address.

Link: https://lkml.kernel.org/r/20250122-wip-obbardc-update-email-v2-1-12bde6b79ad0@linaro.org
Signed-off-by: Christopher Obbard <christopher.obbard@linaro.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agokfence: skip __GFP_THISNODE allocations on NUMA systems
Marco Elver [Fri, 24 Jan 2025 12:01:38 +0000 (13:01 +0100)] 
kfence: skip __GFP_THISNODE allocations on NUMA systems

On NUMA systems, __GFP_THISNODE indicates that an allocation _must_ be on
a particular node, and failure to allocate on the desired node will result
in a failed allocation.

Skip __GFP_THISNODE allocations if we are running on a NUMA system, since
KFENCE can't guarantee which node its pool pages are allocated on.
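
The guard amounts to an early bail-out in the KFENCE allocation path,
along the lines of the sketch below (consistent with the description,
not necessarily the exact code):

  /* KFENCE cannot honour a node-exact request on a multi-node system,
   * so let the normal allocator handle it. */
  if ((flags & __GFP_THISNODE) && num_online_nodes() > 1)
          return NULL;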

Link: https://lkml.kernel.org/r/20250124120145.410066-1-elver@google.com
Fixes: 236e9f153852 ("kfence: skip all GFP_ZONEMASK allocations")
Signed-off-by: Marco Elver <elver@google.com>
Reported-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Chistoph Lameter <cl@linux.com>
Cc: Dmitriy Vyukov <dvyukov@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
4 months agonilfs2: fix possible int overflows in nilfs_fiemap()
Nikita Zhandarovich [Fri, 24 Jan 2025 22:20:53 +0000 (07:20 +0900)] 
nilfs2: fix possible int overflows in nilfs_fiemap()

Since nilfs_bmap_lookup_contig() in nilfs_fiemap() calculates its result
by being prepared to go through potentially maxblocks == INT_MAX blocks,
the value in n may overflow when it is left-shifted by blkbits.

While this is extremely unlikely to occur, play it safe and cast the
right-hand expression to a wider type to mitigate the issue.
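
Illustratively (variable names assumed), the mitigation is a widening
cast before the shift:

  /* n can approach INT_MAX, so widen before shifting by blkbits */
  size = (u64)n << blkbits;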

Found by Linux Verification Center (linuxtesting.org) with static analysis
tool SVACE.

Link: https://lkml.kernel.org/r/20250124222133.5323-1-konishi.ryusuke@gmail.com
Fixes: 622daaff0a89 ("nilfs2: fiemap support")
Signed-off-by: Nikita Zhandarovich <n.zhandarovich@fintech.ru>
Signed-off-by: Ryusuke Konishi <konishi.ryusuke@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>