From: Kanchana P. Sridhar
Date: Tue, 31 Mar 2026 18:33:50 +0000 (-0700)
Subject: mm: zswap: remove redundant checks in zswap_cpu_comp_dead()
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=1556478e9e86585d4c48fcddb8f490713bd78156;p=thirdparty%2Fkernel%2Flinux.git

mm: zswap: remove redundant checks in zswap_cpu_comp_dead()

Patch series "zswap pool per-CPU acomp_ctx simplifications", v3.

This patchset first removes redundant checks on the acomp_ctx and its
"req" member in zswap_cpu_comp_dead().  Next, it persists the zswap
pool's per-CPU acomp_ctx resources to last until the pool is destroyed.
It then simplifies the per-CPU acomp_ctx mutex locking in
zswap_compress()/zswap_decompress().

Code comments are added after allocation, and before the checks that
decide whether to deallocate the per-CPU acomp_ctx's members, based on
the expected crypto API return values and the zswap changes this
patchset makes.

Patch 2 is an independent submission of patch 23 from [1], to
facilitate merging.

This patch (of 2):

There are presently redundant checks on the per-CPU acomp_ctx and its
"req" member in zswap_cpu_comp_dead(): redundant because they are
inconsistent with the zswap_pool_create() handling of failure in
allocating the acomp_ctx, and with the expected NULL return value from
the acomp_request_alloc() API when it fails to allocate an acomp_req.

Fix these by converting them to NULL checks.  Add comments in
zswap_cpu_comp_prepare() clarifying the expected return values of the
crypto_alloc_acomp_node() and acomp_request_alloc() APIs.

Link: https://lore.kernel.org/20260331183351.29844-2-kanchanapsridhar2026@gmail.com
Link: https://patchwork.kernel.org/project/linux-mm/list/?series=1046677
Signed-off-by: Kanchana P. Sridhar
Suggested-by: Yosry Ahmed
Acked-by: Yosry Ahmed
Signed-off-by: Andrew Morton
---

diff --git a/mm/zswap.c b/mm/zswap.c
index 4f2e652e8ad3..c59045b59ffe 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -749,6 +749,10 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		goto fail;
 	}
 
+	/*
+	 * In case of an error, crypto_alloc_acomp_node() returns an
+	 * error pointer, never NULL.
+	 */
 	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
 	if (IS_ERR(acomp)) {
 		pr_err("could not alloc crypto acomp %s : %pe\n",
@@ -757,6 +761,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		goto fail;
 	}
 
+	/* acomp_request_alloc() returns NULL in case of an error. */
 	req = acomp_request_alloc(acomp);
 	if (!req) {
 		pr_err("could not alloc crypto acomp_request %s\n",
@@ -802,7 +807,7 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
 	struct crypto_acomp *acomp;
 	u8 *buffer;
 
-	if (IS_ERR_OR_NULL(acomp_ctx))
+	if (!acomp_ctx)
 		return 0;
 
 	mutex_lock(&acomp_ctx->mutex);
@@ -817,8 +822,11 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
 	/*
 	 * Do the actual freeing after releasing the mutex to avoid subtle
 	 * locking dependencies causing deadlocks.
+	 *
+	 * If there was an error in allocating @acomp_ctx->req, it
+	 * would be set to NULL.
 	 */
-	if (!IS_ERR_OR_NULL(req))
+	if (req)
 		acomp_request_free(req);
 	if (!IS_ERR_OR_NULL(acomp))
 		crypto_free_acomp(acomp);