From: Sasha Levin Date: Mon, 7 Jul 2025 04:22:41 +0000 (-0400) Subject: Fixes for 6.12 X-Git-Tag: v5.15.187~55 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=657f01485eeebd7f66d5a0966b8396c593b3b05d;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 6.12 Signed-off-by: Sasha Levin --- diff --git a/queue-6.12/acpi-thermal-execute-_scp-before-reading-trip-points.patch b/queue-6.12/acpi-thermal-execute-_scp-before-reading-trip-points.patch new file mode 100644 index 0000000000..dabdf6c8e9 --- /dev/null +++ b/queue-6.12/acpi-thermal-execute-_scp-before-reading-trip-points.patch @@ -0,0 +1,57 @@ +From 0141b029f6ff15b246770aecb82c1edc3c2b46ee Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 10 Apr 2025 18:54:55 +0200 +Subject: ACPI: thermal: Execute _SCP before reading trip points + +From: Armin Wolf + +[ Upstream commit 3f7cd28ae3d1a1d6f151178469cfaef1b07fdbcc ] + +As specified in section 11.4.13 of the ACPI specification the +operating system is required to evaluate the _ACx and _PSV objects +after executing the _SCP control method. + +Move the execution of the _SCP control method before the invocation +of acpi_thermal_get_trip_points() to avoid missing updates to the +_ACx and _PSV objects. + +Fixes: b09872a652d3 ("ACPI: thermal: Fold acpi_thermal_get_info() into its caller") +Signed-off-by: Armin Wolf +Link: https://patch.msgid.link/20250410165456.4173-3-W_Armin@gmx.de +Signed-off-by: Rafael J. Wysocki +Signed-off-by: Sasha Levin +--- + drivers/acpi/thermal.c | 10 ++++++---- + 1 file changed, 6 insertions(+), 4 deletions(-) + +diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c +index dbc371ac2fa66..125d7df8f30ae 100644 +--- a/drivers/acpi/thermal.c ++++ b/drivers/acpi/thermal.c +@@ -803,6 +803,12 @@ static int acpi_thermal_add(struct acpi_device *device) + + acpi_thermal_aml_dependency_fix(tz); + ++ /* ++ * Set the cooling mode [_SCP] to active cooling. This needs to happen before ++ * we retrieve the trip point values. ++ */ ++ acpi_execute_simple_method(tz->device->handle, "_SCP", ACPI_THERMAL_MODE_ACTIVE); ++ + /* Get trip points [_ACi, _PSV, etc.] (required). */ + acpi_thermal_get_trip_points(tz); + +@@ -814,10 +820,6 @@ static int acpi_thermal_add(struct acpi_device *device) + if (result) + goto free_memory; + +- /* Set the cooling mode [_SCP] to active cooling. */ +- acpi_execute_simple_method(tz->device->handle, "_SCP", +- ACPI_THERMAL_MODE_ACTIVE); +- + /* Determine the default polling frequency [_TZP]. */ + if (tzp) + tz->polling_frequency = tzp; +-- +2.39.5 + diff --git a/queue-6.12/acpi-thermal-fix-stale-comment-regarding-trip-points.patch b/queue-6.12/acpi-thermal-fix-stale-comment-regarding-trip-points.patch new file mode 100644 index 0000000000..d17c233261 --- /dev/null +++ b/queue-6.12/acpi-thermal-fix-stale-comment-regarding-trip-points.patch @@ -0,0 +1,41 @@ +From af993d9dc393be870220ed0f314addaf7a9bbcf4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 8 Feb 2025 09:33:35 +0800 +Subject: ACPI: thermal: Fix stale comment regarding trip points + +From: xueqin Luo + +[ Upstream commit 01ca2846338d314cdcd3da1aca7f290ec380542c ] + +Update the comment next to acpi_thermal_get_trip_points() call site +in acpi_thermal_add() to reflect what the code does. + +It has diverged from the code after changes that removed the _CRT +evaluation from acpi_thermal_get_trip_points() among other things. 
+ +Signed-off-by: xueqin Luo +Link: https://patch.msgid.link/20250208013335.126343-1-luoxueqin@kylinos.cn +[ rjw: Subject and changelog edits ] +Signed-off-by: Rafael J. Wysocki +Stable-dep-of: 3f7cd28ae3d1 ("ACPI: thermal: Execute _SCP before reading trip points") +Signed-off-by: Sasha Levin +--- + drivers/acpi/thermal.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/acpi/thermal.c b/drivers/acpi/thermal.c +index 78db38c7076e4..dbc371ac2fa66 100644 +--- a/drivers/acpi/thermal.c ++++ b/drivers/acpi/thermal.c +@@ -803,7 +803,7 @@ static int acpi_thermal_add(struct acpi_device *device) + + acpi_thermal_aml_dependency_fix(tz); + +- /* Get trip points [_CRT, _PSV, etc.] (required). */ ++ /* Get trip points [_ACi, _PSV, etc.] (required). */ + acpi_thermal_get_trip_points(tz); + + crit_temp = acpi_thermal_get_critical_trip(tz); +-- +2.39.5 + diff --git a/queue-6.12/acpica-refuse-to-evaluate-a-method-if-arguments-are-.patch b/queue-6.12/acpica-refuse-to-evaluate-a-method-if-arguments-are-.patch new file mode 100644 index 0000000000..adf5943fd4 --- /dev/null +++ b/queue-6.12/acpica-refuse-to-evaluate-a-method-if-arguments-are-.patch @@ -0,0 +1,51 @@ +From e414e64784e9cd531a768ee0677edffbc6528865 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 14:17:45 +0200 +Subject: ACPICA: Refuse to evaluate a method if arguments are missing + +From: Rafael J. Wysocki + +[ Upstream commit 6fcab2791543924d438e7fa49276d0998b0a069f ] + +As reported in [1], a platform firmware update that increased the number +of method parameters and forgot to update at least one of its callers +caused ACPICA to crash due to use-after-free. + +Since this is a result of a clear AML issue that arguably cannot be fixed +up by the interpreter (it cannot produce missing data out of thin air), +address it by making ACPICA refuse to evaluate a method if the caller +attempts to pass fewer arguments than expected to it. + +Closes: https://github.com/acpica/acpica/issues/1027 [1] +Reported-by: Peter Williams +Signed-off-by: Rafael J. Wysocki +Reviewed-by: Hans de Goede +Tested-by: Hans de Goede # Dell XPS 9640 with BIOS 1.12.0 +Link: https://patch.msgid.link/5909446.DvuYhMxLoT@rjwysocki.net +Signed-off-by: Rafael J.
Wysocki +Signed-off-by: Sasha Levin +--- + drivers/acpi/acpica/dsmethod.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c +index e809c2aed78ae..a232746d150a7 100644 +--- a/drivers/acpi/acpica/dsmethod.c ++++ b/drivers/acpi/acpica/dsmethod.c +@@ -483,6 +483,13 @@ acpi_ds_call_control_method(struct acpi_thread_state *thread, + return_ACPI_STATUS(AE_NULL_OBJECT); + } + ++ if (this_walk_state->num_operands < obj_desc->method.param_count) { ++ ACPI_ERROR((AE_INFO, "Missing argument for method [%4.4s]", ++ acpi_ut_get_node_name(method_node))); ++ ++ return_ACPI_STATUS(AE_AML_UNINITIALIZED_ARG); ++ } ++ + /* Init for new method, possibly wait on method mutex */ + + status = +-- +2.39.5 + diff --git a/queue-6.12/add-a-string-to-qstr-constructor.patch b/queue-6.12/add-a-string-to-qstr-constructor.patch new file mode 100644 index 0000000000..ed40f41e39 --- /dev/null +++ b/queue-6.12/add-a-string-to-qstr-constructor.patch @@ -0,0 +1,241 @@ +From 9da442885e052dc1aad540d2ea290fbeee3f56be Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 23 Jan 2025 22:51:04 -0500 +Subject: add a string-to-qstr constructor + +From: Al Viro + +[ Upstream commit c1feab95e0b2e9fce7e4f4b2739baf40d84543af ] + +Quite a few places want to build a struct qstr by given string; +it would be convenient to have a primitive doing that, rather +than open-coding it via QSTR_INIT(). + +The closest approximation was in bcachefs, but that expands to +initializer list - {.len = strlen(string), .name = string}. +It would be more useful to have it as compound literal - +(struct qstr){.len = strlen(string), .name = string}. + +Unlike initializer list it's a valid expression. What's more, +it's a valid lvalue - it's an equivalent of anonymous local +variable with such initializer, so the things like + path->dentry = d_alloc_pseudo(mnt->mnt_sb, &QSTR(name)); +are valid. It can also be used as initializer, with identical +effect - + struct qstr x = (struct qstr){.name = s, .len = strlen(s)}; +is equivalent to + struct qstr anon_variable = {.name = s, .len = strlen(s)}; + struct qstr x = anon_variable; + // anon_variable is never used after that point +and any even remotely sane compiler will manage to collapse that +into + struct qstr x = {.name = s, .len = strlen(s)}; + +What compound literals can't be used for is initialization of +global variables, but those are covered by QSTR_INIT(). + +This commit lifts definition(s) of QSTR() into linux/dcache.h, +converts it to compound literal (all bcachefs users are fine +with that) and converts assorted open-coded instances to using +that. 
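+
+As a self-contained illustration of the compound-literal behaviour
+described above (a minimal userspace sketch; struct qstr is deliberately
+simplified here - the kernel's real definition wraps name/len in a union
+with the hash - and both macros are restated against that simplified
+struct):
+
+	#include <stdio.h>
+	#include <string.h>
+
+	struct qstr { const char *name; unsigned int len; };
+
+	/* initializer list: only valid where an initializer may appear */
+	#define QSTR_INIT(n, l) { .name = (n), .len = (l) }
+	/* compound literal: a real expression, and a valid lvalue */
+	#define QSTR(n) (struct qstr){ .name = (n), .len = strlen(n) }
+
+	static void print_qstr(const struct qstr *q)
+	{
+		printf("%.*s (%u)\n", (int)q->len, q->name, q->len);
+	}
+
+	int main(void)
+	{
+		struct qstr x = QSTR_INIT("hello", 5);	/* declaration only */
+
+		print_qstr(&x);
+		print_qstr(&QSTR("world"));	/* no named temporary needed */
+		return 0;
+	}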
+ +Signed-off-by: Al Viro +Stable-dep-of: cbe4134ea4bc ("fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass") +Signed-off-by: Sasha Levin +--- + fs/anon_inodes.c | 4 ++-- + fs/bcachefs/fsck.c | 2 +- + fs/bcachefs/recovery.c | 2 -- + fs/bcachefs/util.h | 2 -- + fs/erofs/xattr.c | 2 +- + fs/file_table.c | 4 +--- + fs/kernfs/file.c | 2 +- + include/linux/dcache.h | 1 + + mm/secretmem.c | 3 +-- + net/sunrpc/rpc_pipe.c | 14 +++++--------- + 10 files changed, 13 insertions(+), 23 deletions(-) + +diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c +index 42bd1cb7c9cdd..583ac81669c24 100644 +--- a/fs/anon_inodes.c ++++ b/fs/anon_inodes.c +@@ -60,14 +60,14 @@ static struct inode *anon_inode_make_secure_inode( + const struct inode *context_inode) + { + struct inode *inode; +- const struct qstr qname = QSTR_INIT(name, strlen(name)); + int error; + + inode = alloc_anon_inode(anon_inode_mnt->mnt_sb); + if (IS_ERR(inode)) + return inode; + inode->i_flags &= ~S_PRIVATE; +- error = security_inode_init_security_anon(inode, &qname, context_inode); ++ error = security_inode_init_security_anon(inode, &QSTR(name), ++ context_inode); + if (error) { + iput(inode); + return ERR_PTR(error); +diff --git a/fs/bcachefs/fsck.c b/fs/bcachefs/fsck.c +index 75c8a97a6954c..7b3b63ed747cf 100644 +--- a/fs/bcachefs/fsck.c ++++ b/fs/bcachefs/fsck.c +@@ -405,7 +405,7 @@ static int reattach_inode(struct btree_trans *trans, struct bch_inode_unpacked * + return ret; + + struct bch_hash_info dir_hash = bch2_hash_info_init(c, &lostfound); +- struct qstr name = (struct qstr) QSTR(name_buf); ++ struct qstr name = QSTR(name_buf); + + inode->bi_dir = lostfound.bi_inum; + +diff --git a/fs/bcachefs/recovery.c b/fs/bcachefs/recovery.c +index 3c7f941dde39a..ebabba2968821 100644 +--- a/fs/bcachefs/recovery.c ++++ b/fs/bcachefs/recovery.c +@@ -32,8 +32,6 @@ + #include + #include + +-#define QSTR(n) { { { .len = strlen(n) } }, .name = n } +- + void bch2_btree_lost_data(struct bch_fs *c, enum btree_id btree) + { + if (btree >= BTREE_ID_NR_MAX) +diff --git a/fs/bcachefs/util.h b/fs/bcachefs/util.h +index fb02c1c360044..a27f4b84fe775 100644 +--- a/fs/bcachefs/util.h ++++ b/fs/bcachefs/util.h +@@ -647,8 +647,6 @@ static inline int cmp_le32(__le32 l, __le32 r) + + #include + +-#define QSTR(n) { { { .len = strlen(n) } }, .name = n } +- + static inline bool qstr_eq(const struct qstr l, const struct qstr r) + { + return l.len == r.len && !memcmp(l.name, r.name, l.len); +diff --git a/fs/erofs/xattr.c b/fs/erofs/xattr.c +index a90d7d6497390..60d2cf26e837e 100644 +--- a/fs/erofs/xattr.c ++++ b/fs/erofs/xattr.c +@@ -407,7 +407,7 @@ int erofs_getxattr(struct inode *inode, int index, const char *name, + } + + it.index = index; +- it.name = (struct qstr)QSTR_INIT(name, strlen(name)); ++ it.name = QSTR(name); + if (it.name.len > EROFS_NAME_LEN) + return -ERANGE; + +diff --git a/fs/file_table.c b/fs/file_table.c +index 18735dc8269a1..cf3422edf737c 100644 +--- a/fs/file_table.c ++++ b/fs/file_table.c +@@ -332,9 +332,7 @@ static struct file *alloc_file(const struct path *path, int flags, + static inline int alloc_path_pseudo(const char *name, struct inode *inode, + struct vfsmount *mnt, struct path *path) + { +- struct qstr this = QSTR_INIT(name, strlen(name)); +- +- path->dentry = d_alloc_pseudo(mnt->mnt_sb, &this); ++ path->dentry = d_alloc_pseudo(mnt->mnt_sb, &QSTR(name)); + if (!path->dentry) + return -ENOMEM; + path->mnt = mntget(mnt); +diff --git a/fs/kernfs/file.c b/fs/kernfs/file.c +index 1943c8bd479bf..2d9d5dfa19b87 100644 +--- 
a/fs/kernfs/file.c ++++ b/fs/kernfs/file.c +@@ -928,7 +928,7 @@ static void kernfs_notify_workfn(struct work_struct *work) + if (!inode) + continue; + +- name = (struct qstr)QSTR_INIT(kn->name, strlen(kn->name)); ++ name = QSTR(kn->name); + parent = kernfs_get_parent(kn); + if (parent) { + p_inode = ilookup(info->sb, kernfs_ino(parent)); +diff --git a/include/linux/dcache.h b/include/linux/dcache.h +index bff956f7b2b98..3d53a60145911 100644 +--- a/include/linux/dcache.h ++++ b/include/linux/dcache.h +@@ -57,6 +57,7 @@ struct qstr { + }; + + #define QSTR_INIT(n,l) { { { .len = l } }, .name = n } ++#define QSTR(n) (struct qstr)QSTR_INIT(n, strlen(n)) + + extern const struct qstr empty_name; + extern const struct qstr slash_name; +diff --git a/mm/secretmem.c b/mm/secretmem.c +index 399552814fd0f..1b0a214ee5580 100644 +--- a/mm/secretmem.c ++++ b/mm/secretmem.c +@@ -195,14 +195,13 @@ static struct file *secretmem_file_create(unsigned long flags) + struct file *file; + struct inode *inode; + const char *anon_name = "[secretmem]"; +- const struct qstr qname = QSTR_INIT(anon_name, strlen(anon_name)); + int err; + + inode = alloc_anon_inode(secretmem_mnt->mnt_sb); + if (IS_ERR(inode)) + return ERR_CAST(inode); + +- err = security_inode_init_security_anon(inode, &qname, NULL); ++ err = security_inode_init_security_anon(inode, &QSTR(anon_name), NULL); + if (err) { + file = ERR_PTR(err); + goto err_free_inode; +diff --git a/net/sunrpc/rpc_pipe.c b/net/sunrpc/rpc_pipe.c +index 7ce3721c06ca5..eadc00410ebc5 100644 +--- a/net/sunrpc/rpc_pipe.c ++++ b/net/sunrpc/rpc_pipe.c +@@ -630,7 +630,7 @@ static int __rpc_rmpipe(struct inode *dir, struct dentry *dentry) + static struct dentry *__rpc_lookup_create_exclusive(struct dentry *parent, + const char *name) + { +- struct qstr q = QSTR_INIT(name, strlen(name)); ++ struct qstr q = QSTR(name); + struct dentry *dentry = d_hash_and_lookup(parent, &q); + if (!dentry) { + dentry = d_alloc(parent, &q); +@@ -1190,8 +1190,7 @@ static const struct rpc_filelist files[] = { + struct dentry *rpc_d_lookup_sb(const struct super_block *sb, + const unsigned char *dir_name) + { +- struct qstr dir = QSTR_INIT(dir_name, strlen(dir_name)); +- return d_hash_and_lookup(sb->s_root, &dir); ++ return d_hash_and_lookup(sb->s_root, &QSTR(dir_name)); + } + EXPORT_SYMBOL_GPL(rpc_d_lookup_sb); + +@@ -1300,11 +1299,9 @@ rpc_gssd_dummy_populate(struct dentry *root, struct rpc_pipe *pipe_data) + struct dentry *gssd_dentry; + struct dentry *clnt_dentry = NULL; + struct dentry *pipe_dentry = NULL; +- struct qstr q = QSTR_INIT(files[RPCAUTH_gssd].name, +- strlen(files[RPCAUTH_gssd].name)); + + /* We should never get this far if "gssd" doesn't exist */ +- gssd_dentry = d_hash_and_lookup(root, &q); ++ gssd_dentry = d_hash_and_lookup(root, &QSTR(files[RPCAUTH_gssd].name)); + if (!gssd_dentry) + return ERR_PTR(-ENOENT); + +@@ -1314,9 +1311,8 @@ rpc_gssd_dummy_populate(struct dentry *root, struct rpc_pipe *pipe_data) + goto out; + } + +- q.name = gssd_dummy_clnt_dir[0].name; +- q.len = strlen(gssd_dummy_clnt_dir[0].name); +- clnt_dentry = d_hash_and_lookup(gssd_dentry, &q); ++ clnt_dentry = d_hash_and_lookup(gssd_dentry, ++ &QSTR(gssd_dummy_clnt_dir[0].name)); + if (!clnt_dentry) { + __rpc_depopulate(gssd_dentry, gssd_dummy_clnt_dir, 0, 1); + pipe_dentry = ERR_PTR(-ENOENT); +-- +2.39.5 + diff --git a/queue-6.12/alsa-sb-don-t-allow-changing-the-dma-mode-during-ope.patch b/queue-6.12/alsa-sb-don-t-allow-changing-the-dma-mode-during-ope.patch new file mode 100644 index 0000000000..3bcfd246fe --- 
/dev/null +++ b/queue-6.12/alsa-sb-don-t-allow-changing-the-dma-mode-during-ope.patch @@ -0,0 +1,38 @@ +From f62c489e5382ebf0954184309444411a70ec74b3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 10 Jun 2025 08:43:19 +0200 +Subject: ALSA: sb: Don't allow changing the DMA mode during operations + +From: Takashi Iwai + +[ Upstream commit ed29e073ba93f2d52832804cabdd831d5d357d33 ] + +When a PCM stream is already running, one shouldn't change the DMA +mode via kcontrol, which may screw up the hardware. Return -EBUSY +instead. + +Link: https://bugzilla.kernel.org/show_bug.cgi?id=218185 +Link: https://patch.msgid.link/20250610064322.26787-1-tiwai@suse.de +Signed-off-by: Takashi Iwai +Signed-off-by: Sasha Levin +--- + sound/isa/sb/sb16_main.c | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c +index 74db115250030..c4930efd44e3a 100644 +--- a/sound/isa/sb/sb16_main.c ++++ b/sound/isa/sb/sb16_main.c +@@ -703,6 +703,9 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct + unsigned char nval, oval; + int change; + ++ if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE)) ++ return -EBUSY; ++ + nval = ucontrol->value.enumerated.item[0]; + if (nval > 2) + return -EINVAL; +-- +2.39.5 + diff --git a/queue-6.12/alsa-sb-force-to-disable-dmas-once-when-dma-mode-is-.patch b/queue-6.12/alsa-sb-force-to-disable-dmas-once-when-dma-mode-is-.patch new file mode 100644 index 0000000000..fc50e03e49 --- /dev/null +++ b/queue-6.12/alsa-sb-force-to-disable-dmas-once-when-dma-mode-is-.patch @@ -0,0 +1,41 @@ +From f8086d21dc93dcf1adbd151f5169ef379fd7b54e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 10 Jun 2025 08:43:20 +0200 +Subject: ALSA: sb: Force to disable DMAs once when DMA mode is changed + +From: Takashi Iwai + +[ Upstream commit 4c267ae2ef349639b4d9ebf00dd28586a82fdbe6 ] + +When the DMA mode is changed on the (still real!) SB AWE32 after +playing a stream and closing, the previous DMA setup was still +silently kept, and it can confuse the hardware, resulting in +unexpected noises. As a workaround, enforce the disablement of DMA +setups when the DMA setup is changed by the kcontrol.
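+
+Taken together with the previous patch, the body of the control's put
+callback ends up with the shape below (a condensed sketch, not the
+verbatim driver code: snd_sb16_get_dma_mode() is assumed here as the
+counterpart of the snd_sb16_set_dma_mode() helper visible in the diffs,
+and the validation of nval is omitted):
+
+	/* refuse while a stream is running (previous patch) */
+	if (chip->mode & (SB_MODE_PLAYBACK | SB_MODE_CAPTURE))
+		return -EBUSY;
+
+	spin_lock_irqsave(&chip->reg_lock, flags);
+	oval = snd_sb16_get_dma_mode(chip);	/* assumed helper */
+	change = nval != oval;
+	snd_sb16_set_dma_mode(chip, nval);
+	spin_unlock_irqrestore(&chip->reg_lock, flags);
+
+	/* force both DMA channels off once after a mode change (this patch) */
+	if (change) {
+		snd_dma_disable(chip->dma8);
+		snd_dma_disable(chip->dma16);
+	}
+	return change;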
+ +Link: https://bugzilla.kernel.org/show_bug.cgi?id=218185 +Link: https://patch.msgid.link/20250610064322.26787-2-tiwai@suse.de +Signed-off-by: Takashi Iwai +Signed-off-by: Sasha Levin +--- + sound/isa/sb/sb16_main.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/sound/isa/sb/sb16_main.c b/sound/isa/sb/sb16_main.c +index c4930efd44e3a..5a083eecaa6b9 100644 +--- a/sound/isa/sb/sb16_main.c ++++ b/sound/isa/sb/sb16_main.c +@@ -714,6 +714,10 @@ static int snd_sb16_dma_control_put(struct snd_kcontrol *kcontrol, struct snd_ct + change = nval != oval; + snd_sb16_set_dma_mode(chip, nval); + spin_unlock_irqrestore(&chip->reg_lock, flags); ++ if (change) { ++ snd_dma_disable(chip->dma8); ++ snd_dma_disable(chip->dma16); ++ } + return change; + } + +-- +2.39.5 + diff --git a/queue-6.12/amd-xgbe-align-cl37-an-sequence-as-per-databook.patch b/queue-6.12/amd-xgbe-align-cl37-an-sequence-as-per-databook.patch new file mode 100644 index 0000000000..4adcdda890 --- /dev/null +++ b/queue-6.12/amd-xgbe-align-cl37-an-sequence-as-per-databook.patch @@ -0,0 +1,90 @@ +From e0a45a94c6c97fd8489227399a9b3bb66aee1d5c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 00:56:36 +0530 +Subject: amd-xgbe: align CL37 AN sequence as per databook + +From: Raju Rangoju + +[ Upstream commit 42fd432fe6d320323215ebdf4de4d0d7e56e6792 ] + +Update the Clause 37 Auto-Negotiation implementation to properly align +with the PCS hardware specifications: +- Fix incorrect bit settings in Link Status and Link Duplex fields +- Implement missing sequence steps 2 and 7 + +These changes ensure the CL37 auto-negotiation protocol follows the exact +sequence patterns as specified in the hardware databook. + +Fixes: 1bf40ada6290 ("amd-xgbe: Add support for clause 37 auto-negotiation") +Signed-off-by: Raju Rangoju +Link: https://patch.msgid.link/20250630192636.3838291-1-Raju.Rangoju@amd.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/amd/xgbe/xgbe-common.h | 2 ++ + drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 9 +++++++++ + drivers/net/ethernet/amd/xgbe/xgbe.h | 4 ++-- + 3 files changed, 13 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-common.h b/drivers/net/ethernet/amd/xgbe/xgbe-common.h +index 3b70f67376331..aa25a8a0a106f 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-common.h ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-common.h +@@ -1373,6 +1373,8 @@ + #define MDIO_VEND2_CTRL1_SS13 BIT(13) + #endif + ++#define XGBE_VEND2_MAC_AUTO_SW BIT(9) ++ + /* MDIO mask values */ + #define XGBE_AN_CL73_INT_CMPLT BIT(0) + #define XGBE_AN_CL73_INC_LINK BIT(1) +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +index 07f4f3418d018..3316c719f9f8c 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +@@ -375,6 +375,10 @@ static void xgbe_an37_set(struct xgbe_prv_data *pdata, bool enable, + reg |= MDIO_VEND2_CTRL1_AN_RESTART; + + XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_CTRL1, reg); ++ ++ reg = XMDIO_READ(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL); ++ reg |= XGBE_VEND2_MAC_AUTO_SW; ++ XMDIO_WRITE(pdata, MDIO_MMD_VEND2, MDIO_PCS_DIG_CTRL, reg); + } + + static void xgbe_an37_restart(struct xgbe_prv_data *pdata) +@@ -1003,6 +1007,11 @@ static void xgbe_an37_init(struct xgbe_prv_data *pdata) + + netif_dbg(pdata, link, pdata->netdev, "CL37 AN (%s) initialized\n", + (pdata->an_mode == XGBE_AN_MODE_CL37) ?
"BaseX" : "SGMII"); ++ ++ reg = XMDIO_READ(pdata, MDIO_MMD_AN, MDIO_CTRL1); ++ reg &= ~MDIO_AN_CTRL1_ENABLE; ++ XMDIO_WRITE(pdata, MDIO_MMD_AN, MDIO_CTRL1, reg); ++ + } + + static void xgbe_an73_init(struct xgbe_prv_data *pdata) +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe.h b/drivers/net/ethernet/amd/xgbe/xgbe.h +index ed5d43c16d0e2..7526a0906b391 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe.h ++++ b/drivers/net/ethernet/amd/xgbe/xgbe.h +@@ -292,12 +292,12 @@ + #define XGBE_LINK_TIMEOUT 5 + #define XGBE_KR_TRAINING_WAIT_ITER 50 + +-#define XGBE_SGMII_AN_LINK_STATUS BIT(1) ++#define XGBE_SGMII_AN_LINK_DUPLEX BIT(1) + #define XGBE_SGMII_AN_LINK_SPEED (BIT(2) | BIT(3)) + #define XGBE_SGMII_AN_LINK_SPEED_10 0x00 + #define XGBE_SGMII_AN_LINK_SPEED_100 0x04 + #define XGBE_SGMII_AN_LINK_SPEED_1000 0x08 +-#define XGBE_SGMII_AN_LINK_DUPLEX BIT(4) ++#define XGBE_SGMII_AN_LINK_STATUS BIT(4) + + /* ECC correctable error notification window (seconds) */ + #define XGBE_ECC_LIMIT 60 +-- +2.39.5 + diff --git a/queue-6.12/amd-xgbe-do-not-double-read-link-status.patch b/queue-6.12/amd-xgbe-do-not-double-read-link-status.patch new file mode 100644 index 0000000000..80187231e3 --- /dev/null +++ b/queue-6.12/amd-xgbe-do-not-double-read-link-status.patch @@ -0,0 +1,94 @@ +From 4888bc5509ab71a1535325c988710f49d6ffba68 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 12:20:16 +0530 +Subject: amd-xgbe: do not double read link status + +From: Raju Rangoju + +[ Upstream commit 16ceda2ef683a50cd0783006c0504e1931cd8879 ] + +The link status is latched low so that momentary link drops +can be detected. Always double-reading the status defeats this +design feature. Only double read if link was already down + +This prevents unnecessary duplicate readings of the link status. 
+ +Fixes: 4f3b20bfbb75 ("amd-xgbe: add support for rx-adaptation") +Signed-off-by: Raju Rangoju +Reviewed-by: Simon Horman +Link: https://patch.msgid.link/20250701065016.4140707-1-Raju.Rangoju@amd.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/amd/xgbe/xgbe-mdio.c | 4 ++++ + drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c | 24 +++++++++++++-------- + 2 files changed, 19 insertions(+), 9 deletions(-) + +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +index 3316c719f9f8c..ed76a8df6ec6e 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-mdio.c +@@ -1413,6 +1413,10 @@ static void xgbe_phy_status(struct xgbe_prv_data *pdata) + + pdata->phy.link = pdata->phy_if.phy_impl.link_status(pdata, + &an_restart); ++ /* bail out if the link status register read fails */ ++ if (pdata->phy.link < 0) ++ return; ++ + if (an_restart) { + xgbe_phy_config_aneg(pdata); + goto adjust_link; +diff --git a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +index 268399dfcf22f..32e633d113484 100644 +--- a/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c ++++ b/drivers/net/ethernet/amd/xgbe/xgbe-phy-v2.c +@@ -2855,8 +2855,7 @@ static bool xgbe_phy_valid_speed(struct xgbe_prv_data *pdata, int speed) + static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) + { + struct xgbe_phy_data *phy_data = pdata->phy_data; +- unsigned int reg; +- int ret; ++ int reg, ret; + + *an_restart = 0; + +@@ -2890,11 +2889,20 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) + return 0; + } + +- /* Link status is latched low, so read once to clear +- * and then read again to get current state +- */ +- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); + reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); ++ if (reg < 0) ++ return reg; ++ ++ /* Link status is latched low so that momentary link drops ++ * can be detected. If link was already down read again ++ * to get the latest state. ++ */ ++ ++ if (!pdata->phy.link && !(reg & MDIO_STAT1_LSTATUS)) { ++ reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); ++ if (reg < 0) ++ return reg; ++ } + + if (pdata->en_rx_adap) { + /* if the link is available and adaptation is done, +@@ -2913,9 +2921,7 @@ static int xgbe_phy_link_status(struct xgbe_prv_data *pdata, int *an_restart) + xgbe_phy_set_mode(pdata, phy_data->cur_mode); + } + +- /* check again for the link and adaptation status */ +- reg = XMDIO_READ(pdata, MDIO_MMD_PCS, MDIO_STAT1); +- if ((reg & MDIO_STAT1_LSTATUS) && pdata->rx_adapt_done) ++ if (pdata->rx_adapt_done) + return 1; + } else if (reg & MDIO_STAT1_LSTATUS) + return 1; +-- +2.39.5 + diff --git a/queue-6.12/aoe-defer-rexmit-timer-downdev-work-to-workqueue.patch b/queue-6.12/aoe-defer-rexmit-timer-downdev-work-to-workqueue.patch new file mode 100644 index 0000000000..a30e789f6d --- /dev/null +++ b/queue-6.12/aoe-defer-rexmit-timer-downdev-work-to-workqueue.patch @@ -0,0 +1,94 @@ +From 029e640bc39538719cacb433b371198f857deab3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 10 Jun 2025 17:06:00 +0000 +Subject: aoe: defer rexmit timer downdev work to workqueue + +From: Justin Sanders + +[ Upstream commit cffc873d68ab09a0432b8212008c5613f8a70a2c ] + +When aoe's rexmit_timer() notices that an aoe target fails to respond to +commands for more than aoe_deadsecs, it calls aoedev_downdev() which +cleans the outstanding aoe and block queues. 
This can involve sleeping, +such as in blk_mq_freeze_queue(), which should not occur in irq context. + +This patch defers that aoedev_downdev() call to the aoe device's +workqueue. + +Link: https://bugzilla.kernel.org/show_bug.cgi?id=212665 +Signed-off-by: Justin Sanders +Link: https://lore.kernel.org/r/20250610170600.869-2-jsanders.devel@gmail.com +Tested-By: Valentin Kleibel +Signed-off-by: Jens Axboe +Signed-off-by: Sasha Levin +--- + drivers/block/aoe/aoe.h | 1 + + drivers/block/aoe/aoecmd.c | 8 ++++++-- + drivers/block/aoe/aoedev.c | 5 ++++- + 3 files changed, 11 insertions(+), 3 deletions(-) + +diff --git a/drivers/block/aoe/aoe.h b/drivers/block/aoe/aoe.h +index 749ae1246f4cf..d35caa3c69e15 100644 +--- a/drivers/block/aoe/aoe.h ++++ b/drivers/block/aoe/aoe.h +@@ -80,6 +80,7 @@ enum { + DEVFL_NEWSIZE = (1<<6), /* need to update dev size in block layer */ + DEVFL_FREEING = (1<<7), /* set when device is being cleaned up */ + DEVFL_FREED = (1<<8), /* device has been cleaned up */ ++ DEVFL_DEAD = (1<<9), /* device has timed out of aoe_deadsecs */ + }; + + enum { +diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c +index 92b06d1de4cc7..6c94cfd1c480e 100644 +--- a/drivers/block/aoe/aoecmd.c ++++ b/drivers/block/aoe/aoecmd.c +@@ -754,7 +754,7 @@ rexmit_timer(struct timer_list *timer) + + utgts = count_targets(d, NULL); + +- if (d->flags & DEVFL_TKILL) { ++ if (d->flags & (DEVFL_TKILL | DEVFL_DEAD)) { + spin_unlock_irqrestore(&d->lock, flags); + return; + } +@@ -786,7 +786,8 @@ rexmit_timer(struct timer_list *timer) + * to clean up. + */ + list_splice(&flist, &d->factive[0]); +- aoedev_downdev(d); ++ d->flags |= DEVFL_DEAD; ++ queue_work(aoe_wq, &d->work); + goto out; + } + +@@ -898,6 +899,9 @@ aoecmd_sleepwork(struct work_struct *work) + { + struct aoedev *d = container_of(work, struct aoedev, work); + ++ if (d->flags & DEVFL_DEAD) ++ aoedev_downdev(d); ++ + if (d->flags & DEVFL_GDALLOC) + aoeblk_gdalloc(d); + +diff --git a/drivers/block/aoe/aoedev.c b/drivers/block/aoe/aoedev.c +index 280679bde3a50..4240e11adfb76 100644 +--- a/drivers/block/aoe/aoedev.c ++++ b/drivers/block/aoe/aoedev.c +@@ -200,8 +200,11 @@ aoedev_downdev(struct aoedev *d) + struct list_head *head, *pos, *nx; + struct request *rq, *rqnext; + int i; ++ unsigned long flags; + +- d->flags &= ~DEVFL_UP; ++ spin_lock_irqsave(&d->lock, flags); ++ d->flags &= ~(DEVFL_UP | DEVFL_DEAD); ++ spin_unlock_irqrestore(&d->lock, flags); + + /* clean out active and to-be-retransmitted buffers */ + for (i = 0; i < NFACTIVE; i++) { +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-apple-t8103-fix-pcie-bcm4377-nodename.patch b/queue-6.12/arm64-dts-apple-t8103-fix-pcie-bcm4377-nodename.patch new file mode 100644 index 0000000000..97a7dd6737 --- /dev/null +++ b/queue-6.12/arm64-dts-apple-t8103-fix-pcie-bcm4377-nodename.patch @@ -0,0 +1,42 @@ +From 63bdca608d74bb5514394b3760199d6a78878920 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 22:30:31 +0200 +Subject: arm64: dts: apple: t8103: Fix PCIe BCM4377 nodename + +From: Janne Grunau + +[ Upstream commit ac1daa91e9370e3b88ef7826a73d62a4d09e2717 ] + +Fix the following `make dtbs_check` warnings for all t8103 based devices: + +arch/arm64/boot/dts/apple/t8103-j274.dtb: network@0,0: $nodename:0: 'network@0,0' does not match '^wifi(@.*)?$' + from schema $id: http://devicetree.org/schemas/net/wireless/brcm,bcm4329-fmac.yaml# +arch/arm64/boot/dts/apple/t8103-j274.dtb: network@0,0: Unevaluated properties are not allowed ('local-mac-address' was unexpected) + from 
schema $id: http://devicetree.org/schemas/net/wireless/brcm,bcm4329-fmac.yaml# + +Fixes: bf2c05b619ff ("arm64: dts: apple: t8103: Expose PCI node for the WiFi MAC address") +Signed-off-by: Janne Grunau +Reviewed-by: Sven Peter +Link: https://lore.kernel.org/r/20250611-arm64_dts_apple_wifi-v1-1-fb959d8e1eb4@jannau.net +Signed-off-by: Sven Peter +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/apple/t8103-jxxx.dtsi | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi +index 5988a4eb6efaa..cb78ce7af0b38 100644 +--- a/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi ++++ b/arch/arm64/boot/dts/apple/t8103-jxxx.dtsi +@@ -71,7 +71,7 @@ hpm1: usb-pd@3f { + */ + &port00 { + bus-range = <1 1>; +- wifi0: network@0,0 { ++ wifi0: wifi@0,0 { + compatible = "pci14e4,4425"; + reg = <0x10000 0x0 0x0 0x0 0x0>; + /* To be filled by the loader */ +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-qcom-sm8650-add-the-missing-l2-cache-node.patch b/queue-6.12/arm64-dts-qcom-sm8650-add-the-missing-l2-cache-node.patch new file mode 100644 index 0000000000..f0f1b154f6 --- /dev/null +++ b/queue-6.12/arm64-dts-qcom-sm8650-add-the-missing-l2-cache-node.patch @@ -0,0 +1,52 @@ +From 0958be738b9101e54ac82a180c9a250c60fa9c4d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Apr 2025 18:55:28 +0800 +Subject: arm64: dts: qcom: sm8650: add the missing l2 cache node + +From: Pengyu Luo + +[ Upstream commit 4becd72352b6861de0c24074a8502ca85080fd63 ] + +Only the two little a520s share the same L2; every a720 has its own L2 +cache. + +Fixes: d2350377997f ("arm64: dts: qcom: add initial SM8650 dtsi") +Signed-off-by: Pengyu Luo +Reviewed-by: Konrad Dybcio +Reviewed-by: Neil Armstrong +Link: https://lore.kernel.org/r/20250405105529.309711-1-mitltlatltl@gmail.com +Signed-off-by: Bjorn Andersson +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/qcom/sm8650.dtsi | 9 ++++++++- + 1 file changed, 8 insertions(+), 1 deletion(-) + +diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi +index 72e3dcd495c3b..bd91624bd3bfc 100644 +--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi +@@ -159,13 +159,20 @@ cpu3: cpu@300 { + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&l2_200>; ++ next-level-cache = <&l2_300>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + + qcom,freq-domain = <&cpufreq_hw 3>; + + #cooling-cells = <2>; ++ ++ l2_300: l2-cache { ++ compatible = "cache"; ++ cache-level = <2>; ++ cache-unified; ++ next-level-cache = <&l3_0>; ++ }; + }; + + cpu4: cpu@400 { +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-qcom-sm8650-change-labels-to-lower-case.patch b/queue-6.12/arm64-dts-qcom-sm8650-change-labels-to-lower-case.patch new file mode 100644 index 0000000000..7ac15e3e3e --- /dev/null +++ b/queue-6.12/arm64-dts-qcom-sm8650-change-labels-to-lower-case.patch @@ -0,0 +1,459 @@ +From 0ef48512a3c866fe55e8fb0d65f1e3371d0391a3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 22 Oct 2024 17:47:39 +0200 +Subject: arm64: dts: qcom: sm8650: change labels to lower-case + +From: Krzysztof Kozlowski + +[ Upstream commit 20eb2057b3e46feb0c2b517bcff3acfbba28320f ] + +DTS coding style expects labels to be lowercase. No functional impact. +Verified by comparing decompiled DTBs (dtx_diff and fdtdump+diff).
+ +Reviewed-by: Neil Armstrong +Signed-off-by: Krzysztof Kozlowski +Link: https://lore.kernel.org/r/20241022-dts-qcom-label-v3-14-0505bc7d2c56@linaro.org +Signed-off-by: Bjorn Andersson +Stable-dep-of: 9bb5ca464100 ("arm64: dts: qcom: sm8650: Fix domain-idle-state for CPU2") +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/qcom/sm8650.dtsi | 156 +++++++++++++-------------- + 1 file changed, 78 insertions(+), 78 deletions(-) + +diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi +index edde21972f5ac..3a7daeb2c12e3 100644 +--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi +@@ -68,18 +68,18 @@ cpus { + #address-cells = <2>; + #size-cells = <0>; + +- CPU0: cpu@0 { ++ cpu0: cpu@0 { + device_type = "cpu"; + compatible = "arm,cortex-a520"; + reg = <0 0>; + + clocks = <&cpufreq_hw 0>; + +- power-domains = <&CPU_PD0>; ++ power-domains = <&cpu_pd0>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_0>; ++ next-level-cache = <&l2_0>; + capacity-dmips-mhz = <1024>; + dynamic-power-coefficient = <100>; + +@@ -87,13 +87,13 @@ CPU0: cpu@0 { + + #cooling-cells = <2>; + +- L2_0: l2-cache { ++ l2_0: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + +- L3_0: l3-cache { ++ l3_0: l3-cache { + compatible = "cache"; + cache-level = <3>; + cache-unified; +@@ -101,18 +101,18 @@ L3_0: l3-cache { + }; + }; + +- CPU1: cpu@100 { ++ cpu1: cpu@100 { + device_type = "cpu"; + compatible = "arm,cortex-a520"; + reg = <0 0x100>; + + clocks = <&cpufreq_hw 0>; + +- power-domains = <&CPU_PD1>; ++ power-domains = <&cpu_pd1>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_0>; ++ next-level-cache = <&l2_0>; + capacity-dmips-mhz = <1024>; + dynamic-power-coefficient = <100>; + +@@ -121,18 +121,18 @@ CPU1: cpu@100 { + #cooling-cells = <2>; + }; + +- CPU2: cpu@200 { ++ cpu2: cpu@200 { + device_type = "cpu"; + compatible = "arm,cortex-a720"; + reg = <0 0x200>; + + clocks = <&cpufreq_hw 3>; + +- power-domains = <&CPU_PD2>; ++ power-domains = <&cpu_pd2>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_200>; ++ next-level-cache = <&l2_200>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + +@@ -140,26 +140,26 @@ CPU2: cpu@200 { + + #cooling-cells = <2>; + +- L2_200: l2-cache { ++ l2_200: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + }; + }; + +- CPU3: cpu@300 { ++ cpu3: cpu@300 { + device_type = "cpu"; + compatible = "arm,cortex-a720"; + reg = <0 0x300>; + + clocks = <&cpufreq_hw 3>; + +- power-domains = <&CPU_PD3>; ++ power-domains = <&cpu_pd3>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_200>; ++ next-level-cache = <&l2_200>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + +@@ -168,18 +168,18 @@ CPU3: cpu@300 { + #cooling-cells = <2>; + }; + +- CPU4: cpu@400 { ++ cpu4: cpu@400 { + device_type = "cpu"; + compatible = "arm,cortex-a720"; + reg = <0 0x400>; + + clocks = <&cpufreq_hw 3>; + +- power-domains = <&CPU_PD4>; ++ power-domains = <&cpu_pd4>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_400>; ++ next-level-cache = <&l2_400>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + +@@ -187,26 +187,26 @@ CPU4: cpu@400 { + + 
#cooling-cells = <2>; + +- L2_400: l2-cache { ++ l2_400: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + }; + }; + +- CPU5: cpu@500 { ++ cpu5: cpu@500 { + device_type = "cpu"; + compatible = "arm,cortex-a720"; + reg = <0 0x500>; + + clocks = <&cpufreq_hw 1>; + +- power-domains = <&CPU_PD5>; ++ power-domains = <&cpu_pd5>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_500>; ++ next-level-cache = <&l2_500>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + +@@ -214,26 +214,26 @@ CPU5: cpu@500 { + + #cooling-cells = <2>; + +- L2_500: l2-cache { ++ l2_500: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + }; + }; + +- CPU6: cpu@600 { ++ cpu6: cpu@600 { + device_type = "cpu"; + compatible = "arm,cortex-a720"; + reg = <0 0x600>; + + clocks = <&cpufreq_hw 1>; + +- power-domains = <&CPU_PD6>; ++ power-domains = <&cpu_pd6>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_600>; ++ next-level-cache = <&l2_600>; + capacity-dmips-mhz = <1792>; + dynamic-power-coefficient = <238>; + +@@ -241,26 +241,26 @@ CPU6: cpu@600 { + + #cooling-cells = <2>; + +- L2_600: l2-cache { ++ l2_600: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + }; + }; + +- CPU7: cpu@700 { ++ cpu7: cpu@700 { + device_type = "cpu"; + compatible = "arm,cortex-x4"; + reg = <0 0x700>; + + clocks = <&cpufreq_hw 2>; + +- power-domains = <&CPU_PD7>; ++ power-domains = <&cpu_pd7>; + power-domain-names = "psci"; + + enable-method = "psci"; +- next-level-cache = <&L2_700>; ++ next-level-cache = <&l2_700>; + capacity-dmips-mhz = <1894>; + dynamic-power-coefficient = <588>; + +@@ -268,46 +268,46 @@ CPU7: cpu@700 { + + #cooling-cells = <2>; + +- L2_700: l2-cache { ++ l2_700: l2-cache { + compatible = "cache"; + cache-level = <2>; + cache-unified; +- next-level-cache = <&L3_0>; ++ next-level-cache = <&l3_0>; + }; + }; + + cpu-map { + cluster0 { + core0 { +- cpu = <&CPU0>; ++ cpu = <&cpu0>; + }; + + core1 { +- cpu = <&CPU1>; ++ cpu = <&cpu1>; + }; + + core2 { +- cpu = <&CPU2>; ++ cpu = <&cpu2>; + }; + + core3 { +- cpu = <&CPU3>; ++ cpu = <&cpu3>; + }; + + core4 { +- cpu = <&CPU4>; ++ cpu = <&cpu4>; + }; + + core5 { +- cpu = <&CPU5>; ++ cpu = <&cpu5>; + }; + + core6 { +- cpu = <&CPU6>; ++ cpu = <&cpu6>; + }; + + core7 { +- cpu = <&CPU7>; ++ cpu = <&cpu7>; + }; + }; + }; +@@ -315,7 +315,7 @@ core7 { + idle-states { + entry-method = "psci"; + +- SILVER_CPU_SLEEP_0: cpu-sleep-0-0 { ++ silver_cpu_sleep_0: cpu-sleep-0-0 { + compatible = "arm,idle-state"; + idle-state-name = "silver-rail-power-collapse"; + arm,psci-suspend-param = <0x40000004>; +@@ -325,7 +325,7 @@ SILVER_CPU_SLEEP_0: cpu-sleep-0-0 { + local-timer-stop; + }; + +- GOLD_CPU_SLEEP_0: cpu-sleep-1-0 { ++ gold_cpu_sleep_0: cpu-sleep-1-0 { + compatible = "arm,idle-state"; + idle-state-name = "gold-rail-power-collapse"; + arm,psci-suspend-param = <0x40000004>; +@@ -335,7 +335,7 @@ GOLD_CPU_SLEEP_0: cpu-sleep-1-0 { + local-timer-stop; + }; + +- GOLD_PLUS_CPU_SLEEP_0: cpu-sleep-2-0 { ++ gold_plus_cpu_sleep_0: cpu-sleep-2-0 { + compatible = "arm,idle-state"; + idle-state-name = "gold-plus-rail-power-collapse"; + arm,psci-suspend-param = <0x40000004>; +@@ -347,7 +347,7 @@ GOLD_PLUS_CPU_SLEEP_0: cpu-sleep-2-0 { + }; + + domain-idle-states { +- 
CLUSTER_SLEEP_0: cluster-sleep-0 { ++ cluster_sleep_0: cluster-sleep-0 { + compatible = "domain-idle-state"; + arm,psci-suspend-param = <0x41000044>; + entry-latency-us = <750>; +@@ -355,7 +355,7 @@ CLUSTER_SLEEP_0: cluster-sleep-0 { + min-residency-us = <9144>; + }; + +- CLUSTER_SLEEP_1: cluster-sleep-1 { ++ cluster_sleep_1: cluster-sleep-1 { + compatible = "domain-idle-state"; + arm,psci-suspend-param = <0x4100c344>; + entry-latency-us = <2800>; +@@ -411,58 +411,58 @@ psci { + compatible = "arm,psci-1.0"; + method = "smc"; + +- CPU_PD0: power-domain-cpu0 { ++ cpu_pd0: power-domain-cpu0 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&SILVER_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&silver_cpu_sleep_0>; + }; + +- CPU_PD1: power-domain-cpu1 { ++ cpu_pd1: power-domain-cpu1 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&SILVER_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&silver_cpu_sleep_0>; + }; + +- CPU_PD2: power-domain-cpu2 { ++ cpu_pd2: power-domain-cpu2 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&SILVER_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&silver_cpu_sleep_0>; + }; + +- CPU_PD3: power-domain-cpu3 { ++ cpu_pd3: power-domain-cpu3 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&GOLD_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&gold_cpu_sleep_0>; + }; + +- CPU_PD4: power-domain-cpu4 { ++ cpu_pd4: power-domain-cpu4 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&GOLD_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&gold_cpu_sleep_0>; + }; + +- CPU_PD5: power-domain-cpu5 { ++ cpu_pd5: power-domain-cpu5 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&GOLD_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&gold_cpu_sleep_0>; + }; + +- CPU_PD6: power-domain-cpu6 { ++ cpu_pd6: power-domain-cpu6 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&GOLD_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&gold_cpu_sleep_0>; + }; + +- CPU_PD7: power-domain-cpu7 { ++ cpu_pd7: power-domain-cpu7 { + #power-domain-cells = <0>; +- power-domains = <&CLUSTER_PD>; +- domain-idle-states = <&GOLD_PLUS_CPU_SLEEP_0>; ++ power-domains = <&cluster_pd>; ++ domain-idle-states = <&gold_plus_cpu_sleep_0>; + }; + +- CLUSTER_PD: power-domain-cluster { ++ cluster_pd: power-domain-cluster { + #power-domain-cells = <0>; +- domain-idle-states = <&CLUSTER_SLEEP_0>, +- <&CLUSTER_SLEEP_1>; ++ domain-idle-states = <&cluster_sleep_0>, ++ <&cluster_sleep_1>; + }; + }; + +@@ -5233,7 +5233,7 @@ apps_rsc: rsc@17a00000 { + , + ; + +- power-domains = <&CLUSTER_PD>; ++ power-domains = <&cluster_pd>; + + qcom,tcs-offset = <0xd00>; + qcom,drv-id = <2>; +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-qcom-sm8650-fix-domain-idle-state-for-cpu2.patch b/queue-6.12/arm64-dts-qcom-sm8650-fix-domain-idle-state-for-cpu2.patch new file mode 100644 index 0000000000..81acab943d --- /dev/null +++ b/queue-6.12/arm64-dts-qcom-sm8650-fix-domain-idle-state-for-cpu2.patch @@ -0,0 +1,40 @@ +From f2c441c366f59c583dc373cabdd7c1804dfcb10a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 Mar 2025 09:21:16 +0100 +Subject: arm64: dts: qcom: sm8650: Fix 
domain-idle-state for CPU2 + +From: Luca Weiss + +[ Upstream commit 9bb5ca464100e7c8f2d740148088f60e04fed8ed ] + +On SM8650 the CPUs 0-1 are "silver" (Cortex-A520), CPU 2-6 are "gold" +(Cortex-A720) and CPU 7 is "gold-plus" (Cortex-X4). + +So reference the correct "gold" idle-state for CPU core 2. + +Fixes: d2350377997f ("arm64: dts: qcom: add initial SM8650 dtsi") +Signed-off-by: Luca Weiss +Reviewed-by: Konrad Dybcio +Link: https://lore.kernel.org/r/20250314-sm8650-cpu2-sleep-v1-1-31d5c7c87a5d@fairphone.com +Signed-off-by: Bjorn Andersson +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/qcom/sm8650.dtsi | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/arch/arm64/boot/dts/qcom/sm8650.dtsi b/arch/arm64/boot/dts/qcom/sm8650.dtsi +index 3a7daeb2c12e3..72e3dcd495c3b 100644 +--- a/arch/arm64/boot/dts/qcom/sm8650.dtsi ++++ b/arch/arm64/boot/dts/qcom/sm8650.dtsi +@@ -426,7 +426,7 @@ cpu_pd1: power-domain-cpu1 { + cpu_pd2: power-domain-cpu2 { + #power-domain-cells = <0>; + power-domains = <&cluster_pd>; +- domain-idle-states = <&silver_cpu_sleep_0>; ++ domain-idle-states = <&gold_cpu_sleep_0>; + }; + + cpu_pd3: power-domain-cpu3 { +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-qcom-x1e80100-crd-mark-l12b-and-l15b-alway.patch b/queue-6.12/arm64-dts-qcom-x1e80100-crd-mark-l12b-and-l15b-alway.patch new file mode 100644 index 0000000000..a22d24c97a --- /dev/null +++ b/queue-6.12/arm64-dts-qcom-x1e80100-crd-mark-l12b-and-l15b-alway.patch @@ -0,0 +1,52 @@ +From bc4c5f02d2f7a769d2af394f0864ad4a6f915c4d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 Mar 2025 15:54:33 +0100 +Subject: arm64: dts: qcom: x1e80100-crd: mark l12b and l15b always-on + +From: Johan Hovold + +[ Upstream commit abf89bc4bb09c16a53d693b09ea85225cf57ff39 ] + +The l12b and l15b supplies are used by components that are not (fully) +described (and some never will be) and must never be disabled. + +Mark the regulators as always-on to prevent them from being disabled, +for example, when consumers probe defer or suspend. 
+ +Fixes: bd50b1f5b6f3 ("arm64: dts: qcom: x1e80100: Add Compute Reference Device") +Cc: stable@vger.kernel.org # 6.8 +Cc: Abel Vesa +Cc: Rajendra Nayak +Cc: Sibi Sankar +Reviewed-by: Konrad Dybcio +Signed-off-by: Johan Hovold +Link: https://lore.kernel.org/r/20250314145440.11371-2-johan+linaro@kernel.org +Signed-off-by: Bjorn Andersson +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/qcom/x1e80100-crd.dts | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts +index 044a2f1432fe3..2a504a449b0bb 100644 +--- a/arch/arm64/boot/dts/qcom/x1e80100-crd.dts ++++ b/arch/arm64/boot/dts/qcom/x1e80100-crd.dts +@@ -419,6 +419,7 @@ vreg_l12b_1p2: ldo12 { + regulator-min-microvolt = <1200000>; + regulator-max-microvolt = <1200000>; + regulator-initial-mode = ; ++ regulator-always-on; + }; + + vreg_l13b_3p0: ldo13 { +@@ -440,6 +441,7 @@ vreg_l15b_1p8: ldo15 { + regulator-min-microvolt = <1800000>; + regulator-max-microvolt = <1800000>; + regulator-initial-mode = ; ++ regulator-always-on; + }; + + vreg_l16b_2p9: ldo16 { +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-renesas-factor-out-white-hawk-single-board.patch b/queue-6.12/arm64-dts-renesas-factor-out-white-hawk-single-board.patch new file mode 100644 index 0000000000..30c44a7e78 --- /dev/null +++ b/queue-6.12/arm64-dts-renesas-factor-out-white-hawk-single-board.patch @@ -0,0 +1,181 @@ +From 17e2b9806b0276b46c070304ecfc865a63d27e0d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 2 Dec 2024 17:30:09 +0100 +Subject: arm64: dts: renesas: Factor out White Hawk Single board support + +From: Geert Uytterhoeven + +[ Upstream commit d43c077cb88d800d0c2a372d70d5af75c6a16356 ] + +Move the common parts for the Renesas White Hawk Single board to +white-hawk-single.dtsi, to enable future reuse. 
+ +Signed-off-by: Geert Uytterhoeven +Reviewed-by: Kuninori Morimoto +Link: https://lore.kernel.org/1661743b18a9ff9fac716f98a663b39fc8488d7e.1733156661.git.geert+renesas@glider.be +Stable-dep-of: 8ffec7d62c69 ("arm64: dts: renesas: white-hawk-single: Improve Ethernet TSN description") +Signed-off-by: Sasha Levin +--- + .../renesas/r8a779g2-white-hawk-single.dts | 62 +--------------- + .../boot/dts/renesas/white-hawk-single.dtsi | 73 +++++++++++++++++++ + 2 files changed, 74 insertions(+), 61 deletions(-) + create mode 100644 arch/arm64/boot/dts/renesas/white-hawk-single.dtsi + +diff --git a/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts b/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts +index 0062362b0ba06..48befde389376 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts ++++ b/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts +@@ -7,70 +7,10 @@ + + /dts-v1/; + #include "r8a779g2.dtsi" +-#include "white-hawk-cpu-common.dtsi" +-#include "white-hawk-common.dtsi" ++#include "white-hawk-single.dtsi" + + / { + model = "Renesas White Hawk Single board based on r8a779g2"; + compatible = "renesas,white-hawk-single", "renesas,r8a779g2", + "renesas,r8a779g0"; + }; +- +-&hscif0 { +- uart-has-rtscts; +-}; +- +-&hscif0_pins { +- groups = "hscif0_data", "hscif0_ctrl"; +- function = "hscif0"; +-}; +- +-&pfc { +- tsn0_pins: tsn0 { +- mux { +- groups = "tsn0_link", "tsn0_mdio", "tsn0_rgmii", +- "tsn0_txcrefclk"; +- function = "tsn0"; +- }; +- +- link { +- groups = "tsn0_link"; +- bias-disable; +- }; +- +- mdio { +- groups = "tsn0_mdio"; +- drive-strength = <24>; +- bias-disable; +- }; +- +- rgmii { +- groups = "tsn0_rgmii"; +- drive-strength = <24>; +- bias-disable; +- }; +- }; +-}; +- +-&tsn0 { +- pinctrl-0 = <&tsn0_pins>; +- pinctrl-names = "default"; +- phy-mode = "rgmii"; +- phy-handle = <&phy3>; +- status = "okay"; +- +- mdio { +- #address-cells = <1>; +- #size-cells = <0>; +- +- reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>; +- reset-post-delay-us = <4000>; +- +- phy3: ethernet-phy@0 { +- compatible = "ethernet-phy-id002b.0980", +- "ethernet-phy-ieee802.3-c22"; +- reg = <0>; +- interrupts-extended = <&gpio4 3 IRQ_TYPE_LEVEL_LOW>; +- }; +- }; +-}; +diff --git a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi +new file mode 100644 +index 0000000000000..20e8232f2f323 +--- /dev/null ++++ b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi +@@ -0,0 +1,73 @@ ++// SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) ++/* ++ * Device Tree Source for the White Hawk Single board ++ * ++ * Copyright (C) 2023-2024 Glider bv ++ */ ++ ++#include "white-hawk-cpu-common.dtsi" ++#include "white-hawk-common.dtsi" ++ ++/ { ++ model = "Renesas White Hawk Single board"; ++ compatible = "renesas,white-hawk-single"; ++}; ++ ++&hscif0 { ++ uart-has-rtscts; ++}; ++ ++&hscif0_pins { ++ groups = "hscif0_data", "hscif0_ctrl"; ++ function = "hscif0"; ++}; ++ ++&pfc { ++ tsn0_pins: tsn0 { ++ mux { ++ groups = "tsn0_link", "tsn0_mdio", "tsn0_rgmii", ++ "tsn0_txcrefclk"; ++ function = "tsn0"; ++ }; ++ ++ link { ++ groups = "tsn0_link"; ++ bias-disable; ++ }; ++ ++ mdio { ++ groups = "tsn0_mdio"; ++ drive-strength = <24>; ++ bias-disable; ++ }; ++ ++ rgmii { ++ groups = "tsn0_rgmii"; ++ drive-strength = <24>; ++ bias-disable; ++ }; ++ }; ++}; ++ ++&tsn0 { ++ pinctrl-0 = <&tsn0_pins>; ++ pinctrl-names = "default"; ++ phy-mode = "rgmii"; ++ phy-handle = <&phy3>; ++ status = "okay"; ++ ++ mdio { ++ #address-cells = <1>; ++ 
#size-cells = <0>; ++ ++ reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>; ++ reset-post-delay-us = <4000>; ++ ++ phy3: ethernet-phy@0 { ++ compatible = "ethernet-phy-id002b.0980", ++ "ethernet-phy-ieee802.3-c22"; ++ reg = <0>; ++ interrupts-extended = <&gpio4 3 IRQ_TYPE_LEVEL_LOW>; ++ }; ++ }; ++}; +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-renesas-use-interrupts-extended-for-ethern.patch b/queue-6.12/arm64-dts-renesas-use-interrupts-extended-for-ethern.patch new file mode 100644 index 0000000000..d43908ac47 --- /dev/null +++ b/queue-6.12/arm64-dts-renesas-use-interrupts-extended-for-ethern.patch @@ -0,0 +1,428 @@ +From fa8e6936e6e8e2fee45ad7d4955f67e12ab7a0b9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 4 Oct 2024 14:52:54 +0200 +Subject: arm64: dts: renesas: Use interrupts-extended for Ethernet PHYs +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Geert Uytterhoeven + +[ Upstream commit ba4d843a2ac646abc034b013c0722630f6ea1c90 ] + +Use the more concise interrupts-extended property to fully describe the +interrupts. + +Signed-off-by: Geert Uytterhoeven +Reviewed-by: Niklas Söderlund +Reviewed-by: Lad Prabhakar +Tested-by: Lad Prabhakar # G2L family and G3S +Link: https://lore.kernel.org/e9db8758d275ec63b0d6ce086ac3d0ea62966865.1728045620.git.geert+renesas@glider.be +Stable-dep-of: 8ffec7d62c69 ("arm64: dts: renesas: white-hawk-single: Improve Ethernet TSN description") +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/cat875.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/condor-common.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/draak.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/ebisu.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/r8a77970-eagle.dts | 3 +-- + arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts | 3 +-- + arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts | 3 +-- + arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts | 3 +-- + .../arm64/boot/dts/renesas/r8a779f0-spider-ethernet.dtsi | 9 +++------ + arch/arm64/boot/dts/renesas/r8a779f4-s4sk.dts | 6 ++---- + .../boot/dts/renesas/r8a779g2-white-hawk-single.dts | 3 +-- + .../arm64/boot/dts/renesas/r8a779h0-gray-hawk-single.dts | 3 +-- + arch/arm64/boot/dts/renesas/rzg2l-smarc-som.dtsi | 6 ++---- + arch/arm64/boot/dts/renesas/rzg2lc-smarc-som.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi | 6 ++---- + arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi | 6 ++---- + arch/arm64/boot/dts/renesas/salvator-common.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/ulcb.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/white-hawk-cpu-common.dtsi | 3 +-- + arch/arm64/boot/dts/renesas/white-hawk-ethernet.dtsi | 6 ++---- + 22 files changed, 29 insertions(+), 58 deletions(-) + +diff --git a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi +index 68b04e56ae562..5a15a956702a6 100644 +--- a/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/beacon-renesom-som.dtsi +@@ -62,8 +62,7 @@ phy0: ethernet-phy@0 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/cat875.dtsi b/arch/arm64/boot/dts/renesas/cat875.dtsi +index 
8c9da8b4bd60b..191b051ecfd45 100644 +--- a/arch/arm64/boot/dts/renesas/cat875.dtsi ++++ b/arch/arm64/boot/dts/renesas/cat875.dtsi +@@ -25,8 +25,7 @@ phy0: ethernet-phy@0 { + compatible = "ethernet-phy-id001c.c915", + "ethernet-phy-ieee802.3-c22"; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <21 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/condor-common.dtsi b/arch/arm64/boot/dts/renesas/condor-common.dtsi +index 8b7c0c34eadce..b2d99dfaa0cdf 100644 +--- a/arch/arm64/boot/dts/renesas/condor-common.dtsi ++++ b/arch/arm64/boot/dts/renesas/condor-common.dtsi +@@ -166,8 +166,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio4>; +- interrupts = <23 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio4 23 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio4 22 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/draak.dtsi b/arch/arm64/boot/dts/renesas/draak.dtsi +index 6f133f54ded54..402112a37d75a 100644 +--- a/arch/arm64/boot/dts/renesas/draak.dtsi ++++ b/arch/arm64/boot/dts/renesas/draak.dtsi +@@ -247,8 +247,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio5>; +- interrupts = <19 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio5 19 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio5 18 GPIO_ACTIVE_LOW>; + /* + * TX clock internal delay mode is required for reliable +diff --git a/arch/arm64/boot/dts/renesas/ebisu.dtsi b/arch/arm64/boot/dts/renesas/ebisu.dtsi +index cba2fde9dd368..1aedd093fb41b 100644 +--- a/arch/arm64/boot/dts/renesas/ebisu.dtsi ++++ b/arch/arm64/boot/dts/renesas/ebisu.dtsi +@@ -314,8 +314,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <21 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 21 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio1 20 GPIO_ACTIVE_LOW>; + /* + * TX clock internal delay mode is required for reliable +diff --git a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi +index ad898c6db4e62..4113710d55226 100644 +--- a/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi ++++ b/arch/arm64/boot/dts/renesas/hihope-rzg2-ex.dtsi +@@ -27,8 +27,7 @@ phy0: ethernet-phy@0 { + compatible = "ethernet-phy-id001c.c915", + "ethernet-phy-ieee802.3-c22"; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a77970-eagle.dts b/arch/arm64/boot/dts/renesas/r8a77970-eagle.dts +index 0608dce92e405..7dd9e13cf0074 100644 +--- a/arch/arm64/boot/dts/renesas/r8a77970-eagle.dts ++++ b/arch/arm64/boot/dts/renesas/r8a77970-eagle.dts +@@ -111,8 +111,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio1>; +- interrupts = <17 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio1 17 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts +index e36999e91af53..0a103f93b14d7 100644 +--- a/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts ++++ 
b/arch/arm64/boot/dts/renesas/r8a77970-v3msk.dts +@@ -117,8 +117,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio1>; +- interrupts = <17 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio1 17 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio1 16 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts b/arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts +index 77d22df25fffa..a8a20c748ffcd 100644 +--- a/arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts ++++ b/arch/arm64/boot/dts/renesas/r8a77980-v3hsk.dts +@@ -124,8 +124,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio4>; +- interrupts = <23 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio4 23 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio4 22 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts +index 63db822e5f466..6bd580737f25d 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts ++++ b/arch/arm64/boot/dts/renesas/r8a779a0-falcon.dts +@@ -31,8 +31,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio4>; +- interrupts = <16 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio4 16 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio4 15 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a779f0-spider-ethernet.dtsi b/arch/arm64/boot/dts/renesas/r8a779f0-spider-ethernet.dtsi +index 33c1015e9ab38..5d38669ed1ec3 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779f0-spider-ethernet.dtsi ++++ b/arch/arm64/boot/dts/renesas/r8a779f0-spider-ethernet.dtsi +@@ -60,8 +60,7 @@ mdio { + u101: ethernet-phy@1 { + reg = <1>; + compatible = "ethernet-phy-ieee802.3-c45"; +- interrupt-parent = <&gpio3>; +- interrupts = <10 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio3 10 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +@@ -78,8 +77,7 @@ mdio { + u201: ethernet-phy@2 { + reg = <2>; + compatible = "ethernet-phy-ieee802.3-c45"; +- interrupt-parent = <&gpio3>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio3 11 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +@@ -96,8 +94,7 @@ mdio { + u301: ethernet-phy@3 { + reg = <3>; + compatible = "ethernet-phy-ieee802.3-c45"; +- interrupt-parent = <&gpio3>; +- interrupts = <9 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio3 9 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a779f4-s4sk.dts b/arch/arm64/boot/dts/renesas/r8a779f4-s4sk.dts +index fa910b85859e9..5d71d52f9c654 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779f4-s4sk.dts ++++ b/arch/arm64/boot/dts/renesas/r8a779f4-s4sk.dts +@@ -197,8 +197,7 @@ mdio { + ic99: ethernet-phy@1 { + reg = <1>; + compatible = "ethernet-phy-ieee802.3-c45"; +- interrupt-parent = <&gpio3>; +- interrupts = <10 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio3 10 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +@@ -216,8 +215,7 @@ mdio { + ic102: ethernet-phy@2 { + reg = <2>; + compatible = "ethernet-phy-ieee802.3-c45"; +- interrupt-parent = <&gpio3>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio3 11 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts b/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts +index 50a428572d9bd..0062362b0ba06 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts ++++ 
b/arch/arm64/boot/dts/renesas/r8a779g2-white-hawk-single.dts +@@ -70,8 +70,7 @@ phy3: ethernet-phy@0 { + compatible = "ethernet-phy-id002b.0980", + "ethernet-phy-ieee802.3-c22"; + reg = <0>; +- interrupt-parent = <&gpio4>; +- interrupts = <3 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio4 3 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/r8a779h0-gray-hawk-single.dts b/arch/arm64/boot/dts/renesas/r8a779h0-gray-hawk-single.dts +index 9a1917b87f613..f4d721a7f505c 100644 +--- a/arch/arm64/boot/dts/renesas/r8a779h0-gray-hawk-single.dts ++++ b/arch/arm64/boot/dts/renesas/r8a779h0-gray-hawk-single.dts +@@ -175,8 +175,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio7>; +- interrupts = <5 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio7 5 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio7 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/rzg2l-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg2l-smarc-som.dtsi +index 83f5642d0d35c..502d9f17bf16d 100644 +--- a/arch/arm64/boot/dts/renesas/rzg2l-smarc-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/rzg2l-smarc-som.dtsi +@@ -102,8 +102,7 @@ phy0: ethernet-phy@7 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <7>; +- interrupt-parent = <&irqc>; +- interrupts = <RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&irqc RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>; + rxc-skew-psec = <2400>; + txc-skew-psec = <2400>; + rxdv-skew-psec = <0>; +@@ -130,8 +129,7 @@ phy1: ethernet-phy@7 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <7>; +- interrupt-parent = <&irqc>; +- interrupts = <RZG2L_IRQ3 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&irqc RZG2L_IRQ3 IRQ_TYPE_LEVEL_LOW>; + rxc-skew-psec = <2400>; + txc-skew-psec = <2400>; + rxdv-skew-psec = <0>; +diff --git a/arch/arm64/boot/dts/renesas/rzg2lc-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg2lc-smarc-som.dtsi +index b4ef5ea8a9e34..de39311a77dc2 100644 +--- a/arch/arm64/boot/dts/renesas/rzg2lc-smarc-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/rzg2lc-smarc-som.dtsi +@@ -82,8 +82,7 @@ phy0: ethernet-phy@7 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <7>; +- interrupt-parent = <&irqc>; +- interrupts = <RZG2L_IRQ0 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&irqc RZG2L_IRQ0 IRQ_TYPE_LEVEL_LOW>; + rxc-skew-psec = <2400>; + txc-skew-psec = <2400>; + rxdv-skew-psec = <0>; +diff --git a/arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi +index 79443fb3f5810..1a6fd58bd3682 100644 +--- a/arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/rzg2ul-smarc-som.dtsi +@@ -78,8 +78,7 @@ phy0: ethernet-phy@7 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <7>; +- interrupt-parent = <&irqc>; +- interrupts = <RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&irqc RZG2L_IRQ2 IRQ_TYPE_LEVEL_LOW>; + rxc-skew-psec = <2400>; + txc-skew-psec = <2400>; + rxdv-skew-psec = <0>; +@@ -107,8 +106,7 @@ phy1: ethernet-phy@7 { + compatible = "ethernet-phy-id0022.1640", + "ethernet-phy-ieee802.3-c22"; + reg = <7>; +- interrupt-parent = <&irqc>; +- interrupts = <RZG2L_IRQ7 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&irqc RZG2L_IRQ7 IRQ_TYPE_LEVEL_LOW>; + rxc-skew-psec = <2400>; + txc-skew-psec = <2400>; + rxdv-skew-psec = <0>; +diff --git a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi +index 612cdc7efabbc..d2d367c09abd4 100644 +--- 
a/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi ++++ b/arch/arm64/boot/dts/renesas/rzg3s-smarc-som.dtsi +@@ -98,8 +98,7 @@ &eth0 { + + phy0: ethernet-phy@7 { + reg = <7>; +- interrupt-parent = <&pinctrl>; +- interrupts = <RZG2L_GPIO(12, 0) IRQ_TYPE_EDGE_FALLING>; ++ interrupts-extended = <&pinctrl RZG2L_GPIO(12, 0) IRQ_TYPE_EDGE_FALLING>; + rxc-skew-psec = <0>; + txc-skew-psec = <0>; + rxdv-skew-psec = <0>; +@@ -124,8 +123,7 @@ &eth1 { + + phy1: ethernet-phy@7 { + reg = <7>; +- interrupt-parent = <&pinctrl>; +- interrupts = <RZG2L_GPIO(12, 1) IRQ_TYPE_EDGE_FALLING>; ++ interrupts-extended = <&pinctrl RZG2L_GPIO(12, 1) IRQ_TYPE_EDGE_FALLING>; + rxc-skew-psec = <0>; + txc-skew-psec = <0>; + rxdv-skew-psec = <0>; +diff --git a/arch/arm64/boot/dts/renesas/salvator-common.dtsi b/arch/arm64/boot/dts/renesas/salvator-common.dtsi +index 1eb4883b32197..c5035232956a8 100644 +--- a/arch/arm64/boot/dts/renesas/salvator-common.dtsi ++++ b/arch/arm64/boot/dts/renesas/salvator-common.dtsi +@@ -353,8 +353,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/ulcb.dtsi b/arch/arm64/boot/dts/renesas/ulcb.dtsi +index a2f66f9160484..4cf141a701c06 100644 +--- a/arch/arm64/boot/dts/renesas/ulcb.dtsi ++++ b/arch/arm64/boot/dts/renesas/ulcb.dtsi +@@ -150,8 +150,7 @@ phy0: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio2>; +- interrupts = <11 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio2 11 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio2 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/white-hawk-cpu-common.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-cpu-common.dtsi +index 3845b413bd24c..69e4fddebd4e4 100644 +--- a/arch/arm64/boot/dts/renesas/white-hawk-cpu-common.dtsi ++++ b/arch/arm64/boot/dts/renesas/white-hawk-cpu-common.dtsi +@@ -167,8 +167,7 @@ avb0_phy: ethernet-phy@0 { + "ethernet-phy-ieee802.3-c22"; + rxc-skew-ps = <1500>; + reg = <0>; +- interrupt-parent = <&gpio7>; +- interrupts = <5 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio7 5 IRQ_TYPE_LEVEL_LOW>; + reset-gpios = <&gpio7 10 GPIO_ACTIVE_LOW>; + }; + }; +diff --git a/arch/arm64/boot/dts/renesas/white-hawk-ethernet.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-ethernet.dtsi +index 595ec4ff4cdd0..ad94bf3f5e6c4 100644 +--- a/arch/arm64/boot/dts/renesas/white-hawk-ethernet.dtsi ++++ b/arch/arm64/boot/dts/renesas/white-hawk-ethernet.dtsi +@@ -29,8 +29,7 @@ mdio { + avb1_phy: ethernet-phy@0 { + compatible = "ethernet-phy-ieee802.3-c45"; + reg = <0>; +- interrupt-parent = <&gpio6>; +- interrupts = <3 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio6 3 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +@@ -51,8 +50,7 @@ mdio { + avb2_phy: ethernet-phy@0 { + compatible = "ethernet-phy-ieee802.3-c45"; + reg = <0>; +- interrupt-parent = <&gpio5>; +- interrupts = <4 IRQ_TYPE_LEVEL_LOW>; ++ interrupts-extended = <&gpio5 4 IRQ_TYPE_LEVEL_LOW>; + }; + }; + }; +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-renesas-white-hawk-single-improve-ethernet.patch b/queue-6.12/arm64-dts-renesas-white-hawk-single-improve-ethernet.patch new file mode 100644 index 0000000000..9786512412 --- /dev/null +++ b/queue-6.12/arm64-dts-renesas-white-hawk-single-improve-ethernet.patch @@ -0,0 +1,65 @@ +From f3ae9a64d8d9c306363a399172ade387c2f23886 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 7 May 2025 
15:31:55 +0200 +Subject: arm64: dts: renesas: white-hawk-single: Improve Ethernet TSN + description +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Geert Uytterhoeven + +[ Upstream commit 8ffec7d62c6956199f442dac3b2d5d02231c3977 ] + + - Add the missing "ethernet3" alias for the Ethernet TSN port, so + U-Boot will fill its local-mac-address property based on the + "eth3addr" environment variable (if set), avoiding a random MAC + address being assigned by the OS, + - Rename the numerical Ethernet PHY label to "tsn0_phy", to avoid + future conflicts, and for consistency with the "avbN_phy" labels. + +Fixes: 3d8e475bd7a724a9 ("arm64: dts: renesas: white-hawk-single: Wire-up Ethernet TSN") +Signed-off-by: Geert Uytterhoeven +Reviewed-by: Niklas Söderlund +Link: https://lore.kernel.org/367f10a18aa196ff1c96734dd9bd5634b312c421.1746624368.git.geert+renesas@glider.be +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/renesas/white-hawk-single.dtsi | 8 ++++++-- + 1 file changed, 6 insertions(+), 2 deletions(-) + +diff --git a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi +index 20e8232f2f323..976a3ab44e5a5 100644 +--- a/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi ++++ b/arch/arm64/boot/dts/renesas/white-hawk-single.dtsi +@@ -11,6 +11,10 @@ + / { + model = "Renesas White Hawk Single board"; + compatible = "renesas,white-hawk-single"; ++ ++ aliases { ++ ethernet3 = &tsn0; ++ }; + }; + + &hscif0 { +@@ -53,7 +57,7 @@ &tsn0 { + pinctrl-0 = <&tsn0_pins>; + pinctrl-names = "default"; + phy-mode = "rgmii"; +- phy-handle = <&phy3>; ++ phy-handle = <&tsn0_phy>; + status = "okay"; + + mdio { +@@ -63,7 +67,7 @@ mdio { + reset-gpios = <&gpio1 23 GPIO_ACTIVE_LOW>; + reset-post-delay-us = <4000>; + +- phy3: ethernet-phy@0 { ++ tsn0_phy: ethernet-phy@0 { + compatible = "ethernet-phy-id002b.0980", + "ethernet-phy-ieee802.3-c22"; + reg = <0>; +-- +2.39.5 + diff --git a/queue-6.12/arm64-dts-rockchip-fix-internal-usb-hub-instability-.patch b/queue-6.12/arm64-dts-rockchip-fix-internal-usb-hub-instability-.patch new file mode 100644 index 0000000000..e91a0e44ed --- /dev/null +++ b/queue-6.12/arm64-dts-rockchip-fix-internal-usb-hub-instability-.patch @@ -0,0 +1,133 @@ +From af2a34ffaf8fbef07036b04efe9b95a70aaf155d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 25 Apr 2025 17:18:08 +0200 +Subject: arm64: dts: rockchip: fix internal USB hub instability on RK3399 Puma +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Lukasz Czechowski + +[ Upstream commit d7cc532df95f7f159e40595440e4e4b99481457b ] + +Currently, the onboard Cypress CYUSB3304 USB hub is not defined in +the device tree, and hub reset pin is provided as vcc5v0_host +regulator to usb phy. This causes instability issues, as a result +of improper reset duration. + +The fixed regulator device requests the GPIO during probe in its +inactive state (except if regulator-boot-on property is set, in +which case it is requested in the active state). Considering gpio +is GPIO_ACTIVE_LOW for Puma, it means it’s driving it high. Then +the regulator gets enabled (because regulator-always-on property), +which drives it to its active state, meaning driving it low. + +The Cypress CYUSB3304 USB hub actually requires the reset to be +asserted for at least 5 ms, which we cannot guarantee right now +since there's no delay in the current config, meaning the hub may +sometimes work or not. 
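+
+In DT terms, the problematic modelling looks like this (a minimal sketch,
+abbreviated from the nodes removed in the diff below):
+
+    vcc5v0_host: vcc5v0-host-regulator {
+        compatible = "regulator-fixed";
+        gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>;    /* actually the hub reset line */
+        regulator-name = "vcc5v0_host";
+        regulator-always-on;                       /* no control over reset timing */
+    };
+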
We could add delay as offered by +fixed-regulator but let's rather fix this by using the proper way +to model onboard USB hubs. + +Define hub_2_0 and hub_3_0 nodes, as the onboard Cypress hub +consist of two 'logical' hubs, for USB2.0 and USB3.0. +Use the 'reset-gpios' property of hub to assign reset pin instead +of using regulator. Rename the vcc5v0_host regulator to +cy3304_reset to be more meaningful. Pin is configured to +output-high by default, which sets the hub in reset state +during pin controller initialization. This allows to avoid double +enumeration of devices in case the bootloader has setup the USB +hub before the kernel. +The vdd-supply and vdd2-supply properties in hub nodes are +added to provide correct dt-bindings, although power supplies are +always enabled based on HW design. + +Fixes: 2c66fc34e945 ("arm64: dts: rockchip: add RK3399-Q7 (Puma) SoM") +Cc: stable@vger.kernel.org # 6.6 +Cc: stable@vger.kernel.org # Backport of the patch in this series fixing product ID in onboard_dev_id_table in drivers/usb/misc/onboard_usb_dev.c driver +Signed-off-by: Lukasz Czechowski +Link: https://lore.kernel.org/r/20250425-onboard_usb_dev-v2-3-4a76a474a010@thaumatec.com +Signed-off-by: Heiko Stuebner +Signed-off-by: Sasha Levin +--- + arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi | 42 ++++++++++++------- + 1 file changed, 27 insertions(+), 15 deletions(-) + +diff --git a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi +index 257636d0d2cbb..0a73218ea37b3 100644 +--- a/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi ++++ b/arch/arm64/boot/dts/rockchip/rk3399-puma.dtsi +@@ -59,17 +59,7 @@ vcc3v3_sys: vcc3v3-sys { + vin-supply = <&vcc5v0_sys>; + }; + +- vcc5v0_host: vcc5v0-host-regulator { +- compatible = "regulator-fixed"; +- gpio = <&gpio4 RK_PA3 GPIO_ACTIVE_LOW>; +- pinctrl-names = "default"; +- pinctrl-0 = <&vcc5v0_host_en>; +- regulator-name = "vcc5v0_host"; +- regulator-always-on; +- vin-supply = <&vcc5v0_sys>; +- }; +- +- vcc5v0_sys: vcc5v0-sys { ++ vcc5v0_sys: regulator-vcc5v0-sys { + compatible = "regulator-fixed"; + regulator-name = "vcc5v0_sys"; + regulator-always-on; +@@ -509,10 +499,10 @@ pmic_int_l: pmic-int-l { + }; + }; + +- usb2 { +- vcc5v0_host_en: vcc5v0-host-en { ++ usb { ++ cy3304_reset: cy3304-reset { + rockchip,pins = +- <4 RK_PA3 RK_FUNC_GPIO &pcfg_pull_none>; ++ <4 RK_PA3 RK_FUNC_GPIO &pcfg_output_high>; + }; + }; + +@@ -579,7 +569,6 @@ u2phy1_otg: otg-port { + }; + + u2phy1_host: host-port { +- phy-supply = <&vcc5v0_host>; + status = "okay"; + }; + }; +@@ -591,6 +580,29 @@ &usbdrd3_1 { + &usbdrd_dwc3_1 { + status = "okay"; + dr_mode = "host"; ++ pinctrl-names = "default"; ++ pinctrl-0 = <&cy3304_reset>; ++ #address-cells = <1>; ++ #size-cells = <0>; ++ ++ hub_2_0: hub@1 { ++ compatible = "usb4b4,6502", "usb4b4,6506"; ++ reg = <1>; ++ peer-hub = <&hub_3_0>; ++ reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>; ++ vdd-supply = <&vcc1v2_phy>; ++ vdd2-supply = <&vcc3v3_sys>; ++ ++ }; ++ ++ hub_3_0: hub@2 { ++ compatible = "usb4b4,6500", "usb4b4,6504"; ++ reg = <2>; ++ peer-hub = <&hub_2_0>; ++ reset-gpios = <&gpio4 RK_PA3 GPIO_ACTIVE_HIGH>; ++ vdd-supply = <&vcc1v2_phy>; ++ vdd2-supply = <&vcc3v3_sys>; ++ }; + }; + + &usb_host1_ehci { +-- +2.39.5 + diff --git a/queue-6.12/asoc-amd-yc-add-quirk-for-msi-bravo-17-d7vf-internal.patch b/queue-6.12/asoc-amd-yc-add-quirk-for-msi-bravo-17-d7vf-internal.patch new file mode 100644 index 0000000000..9c321bdad3 --- /dev/null +++ 
b/queue-6.12/asoc-amd-yc-add-quirk-for-msi-bravo-17-d7vf-internal.patch @@ -0,0 +1,42 @@ +From 83156e5e2ec202e901068dfffb17ed8242741a2b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 30 May 2025 02:52:32 +0200 +Subject: ASoC: amd: yc: Add quirk for MSI Bravo 17 D7VF internal mic + +From: Gabriel Santese + +[ Upstream commit ba06528ad5a31923efc24324706116ccd17e12d8 ] + +MSI Bravo 17 (D7VF), like other laptops from the family, +has broken ACPI tables and needs a quirk for internal mic +to work properly. + +Signed-off-by: Gabriel Santese +Link: https://patch.msgid.link/20250530005444.23398-1-santesegabriel@gmail.com +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + sound/soc/amd/yc/acp6x-mach.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c +index b27966f82c8b6..4f8481c6802b1 100644 +--- a/sound/soc/amd/yc/acp6x-mach.c ++++ b/sound/soc/amd/yc/acp6x-mach.c +@@ -451,6 +451,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VEK"), + } + }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "Micro-Star International Co., Ltd."), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Bravo 17 D7VF"), ++ } ++ }, + { + .driver_data = &acp6x_card, + .matches = { +-- +2.39.5 + diff --git a/queue-6.12/asoc-amd-yc-update-quirk-data-for-hp-victus.patch b/queue-6.12/asoc-amd-yc-update-quirk-data-for-hp-victus.patch new file mode 100644 index 0000000000..576aef8c5f --- /dev/null +++ b/queue-6.12/asoc-amd-yc-update-quirk-data-for-hp-victus.patch @@ -0,0 +1,40 @@ +From 8f443f30ab01a1444400a00faa48126903c1798c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 13 Jun 2025 07:51:25 -0400 +Subject: ASoC: amd: yc: update quirk data for HP Victus + +From: Raven Black + +[ Upstream commit 13b86ea92ebf0fa587fbadfb8a60ca2e9993203f ] + +Make the internal microphone work on HP Victus laptops. + +Signed-off-by: Raven Black +Link: https://patch.msgid.link/20250613-support-hp-victus-microphone-v1-1-bebc4c3a2041@gmail.com +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + sound/soc/amd/yc/acp6x-mach.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +diff --git a/sound/soc/amd/yc/acp6x-mach.c b/sound/soc/amd/yc/acp6x-mach.c +index 4f8481c6802b1..723cb7bc12851 100644 +--- a/sound/soc/amd/yc/acp6x-mach.c ++++ b/sound/soc/amd/yc/acp6x-mach.c +@@ -521,6 +521,13 @@ static const struct dmi_system_id yc_acp_quirk_table[] = { + DMI_MATCH(DMI_PRODUCT_NAME, "OMEN by HP Gaming Laptop 16z-n000"), + } + }, ++ { ++ .driver_data = &acp6x_card, ++ .matches = { ++ DMI_MATCH(DMI_BOARD_VENDOR, "HP"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Victus by HP Gaming Laptop 15-fb2xxx"), ++ } ++ }, + { + .driver_data = &acp6x_card, + .matches = { +-- +2.39.5 + diff --git a/queue-6.12/asoc-tas2764-extend-driver-to-sn012776.patch b/queue-6.12/asoc-tas2764-extend-driver-to-sn012776.patch new file mode 100644 index 0000000000..44062d24ce --- /dev/null +++ b/queue-6.12/asoc-tas2764-extend-driver-to-sn012776.patch @@ -0,0 +1,150 @@ +From 5f3576add7bc2a19c7ffcf46aa60614dceca31ec Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 27 Feb 2025 22:07:30 +1000 +Subject: ASoC: tas2764: Extend driver to SN012776 +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Martin Povišer + +[ Upstream commit ad18392962df46a858432839cc6bcaf2ede7cc86 ] + +SN012776 is a speaker amp chip found in Apple's 2021 laptops. 
It appears +similar and more-or-less compatible to TAS2764. Extend the TAS2764 +driver with some SN012776 specifics and configure the chip assuming +it's in one of the Apple machines. + +Reviewed-by: Neal Gompa +Signed-off-by: Martin Povišer +Signed-off-by: James Calligeros +Link: https://patch.msgid.link/20250227-apple-codec-changes-v3-3-cbb130030acf@gmail.com +Signed-off-by: Mark Brown +Stable-dep-of: 592ab3936b09 ("ASoC: tas2764: Reinit cache on part reset") +Signed-off-by: Sasha Levin +--- + sound/soc/codecs/tas2764.c | 43 +++++++++++++++++++++++++++++++++++--- + sound/soc/codecs/tas2764.h | 3 +++ + 2 files changed, 43 insertions(+), 3 deletions(-) + +diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c +index 4326555aac032..fb34d029f7a6d 100644 +--- a/sound/soc/codecs/tas2764.c ++++ b/sound/soc/codecs/tas2764.c +@@ -14,6 +14,7 @@ + #include <linux/regulator/consumer.h> + #include <linux/regmap.h> + #include <linux/of.h> ++#include <linux/of_device.h> + #include <linux/slab.h> + #include <sound/soc.h> + #include <sound/pcm.h> +@@ -23,6 +24,11 @@ + + #include "tas2764.h" + ++enum tas2764_devid { ++ DEVID_TAS2764 = 0, ++ DEVID_SN012776 = 1 ++}; ++ + struct tas2764_priv { + struct snd_soc_component *component; + struct gpio_desc *reset_gpio; +@@ -30,7 +36,8 @@ struct tas2764_priv { + struct regmap *regmap; + struct device *dev; + int irq; +- ++ enum tas2764_devid devid; ++ + int v_sense_slot; + int i_sense_slot; + +@@ -525,10 +532,16 @@ static struct snd_soc_dai_driver tas2764_dai_driver[] = { + }, + }; + ++static uint8_t sn012776_bop_presets[] = { ++ 0x01, 0x32, 0x02, 0x22, 0x83, 0x2d, 0x80, 0x02, 0x06, ++ 0x32, 0x46, 0x30, 0x02, 0x06, 0x38, 0x40, 0x30, 0x02, ++ 0x06, 0x3e, 0x37, 0x30, 0xff, 0xe6 ++}; ++ + static int tas2764_codec_probe(struct snd_soc_component *component) + { + struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); +- int ret; ++ int ret, i; + + tas2764->component = component; + +@@ -577,6 +590,27 @@ static int tas2764_codec_probe(struct snd_soc_component *component) + if (ret < 0) + return ret; + ++ switch (tas2764->devid) { ++ case DEVID_SN012776: ++ ret = snd_soc_component_update_bits(component, TAS2764_PWR_CTRL, ++ TAS2764_PWR_CTRL_BOP_SRC, ++ TAS2764_PWR_CTRL_BOP_SRC); ++ if (ret < 0) ++ return ret; ++ ++ for (i = 0; i < ARRAY_SIZE(sn012776_bop_presets); i++) { ++ ret = snd_soc_component_write(component, ++ TAS2764_BOP_CFG0 + i, ++ sn012776_bop_presets[i]); ++ ++ if (ret < 0) ++ return ret; ++ } ++ break; ++ default: ++ break; ++ } ++ + return 0; + } + +@@ -708,6 +742,8 @@ static int tas2764_i2c_probe(struct i2c_client *client) + if (!tas2764) + return -ENOMEM; + ++ tas2764->devid = (enum tas2764_devid)of_device_get_match_data(&client->dev); ++ + tas2764->dev = &client->dev; + tas2764->irq = client->irq; + i2c_set_clientdata(client, tas2764); +@@ -744,7 +780,8 @@ MODULE_DEVICE_TABLE(i2c, tas2764_i2c_id); + + #if defined(CONFIG_OF) + static const struct of_device_id tas2764_of_match[] = { +- { .compatible = "ti,tas2764" }, ++ { .compatible = "ti,tas2764", .data = (void *)DEVID_TAS2764 }, ++ { .compatible = "ti,sn012776", .data = (void *)DEVID_SN012776 }, + {}, + }; + MODULE_DEVICE_TABLE(of, tas2764_of_match); +diff --git a/sound/soc/codecs/tas2764.h b/sound/soc/codecs/tas2764.h +index 9490f2686e389..69c0f91cb4239 100644 +--- a/sound/soc/codecs/tas2764.h ++++ b/sound/soc/codecs/tas2764.h +@@ -29,6 +29,7 @@ + #define TAS2764_PWR_CTRL_ACTIVE 0x0 + #define TAS2764_PWR_CTRL_MUTE BIT(0) + #define TAS2764_PWR_CTRL_SHUTDOWN BIT(1) ++#define TAS2764_PWR_CTRL_BOP_SRC BIT(7) + + #define TAS2764_VSENSE_POWER_EN 3 + #define TAS2764_ISENSE_POWER_EN 4 +@@ -116,4 
+117,6 @@ + #define TAS2764_INT_CLK_CFG TAS2764_REG(0x0, 0x5c) + #define TAS2764_INT_CLK_CFG_IRQZ_CLR BIT(2) + ++#define TAS2764_BOP_CFG0 TAS2764_REG(0X0, 0x1d) ++ + #endif /* __TAS2764__ */ +-- +2.39.5 + diff --git a/queue-6.12/asoc-tas2764-reinit-cache-on-part-reset.patch b/queue-6.12/asoc-tas2764-reinit-cache-on-part-reset.patch new file mode 100644 index 0000000000..a453386fc3 --- /dev/null +++ b/queue-6.12/asoc-tas2764-reinit-cache-on-part-reset.patch @@ -0,0 +1,53 @@ +From 871f22648aacc2e8a1b54eda56814f90ec724a4d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 6 Apr 2025 09:15:07 +1000 +Subject: ASoC: tas2764: Reinit cache on part reset +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Martin Povišer + +[ Upstream commit 592ab3936b096da5deb64d4c906edbeb989174d6 ] + +When the part is reset in component_probe, do not forget to reinit the +regcache, otherwise the cache can get out of sync with the part's +actual state. This fix is similar to commit 0a0342ede303 +("ASoC: tas2770: Reinit regcache on reset") which concerned the +tas2770 driver. + +Fixes: 827ed8a0fa50 ("ASoC: tas2764: Add the driver for the TAS2764") +Reviewed-by: Neal Gompa +Signed-off-by: Martin Povišer +Signed-off-by: James Calligeros +Link: https://patch.msgid.link/20250406-apple-codec-changes-v5-3-50a00ec850a3@gmail.com +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + sound/soc/codecs/tas2764.c | 3 +++ + 1 file changed, 3 insertions(+) + +diff --git a/sound/soc/codecs/tas2764.c b/sound/soc/codecs/tas2764.c +index fb34d029f7a6d..e8fbe8a399f6d 100644 +--- a/sound/soc/codecs/tas2764.c ++++ b/sound/soc/codecs/tas2764.c +@@ -538,6 +538,8 @@ static uint8_t sn012776_bop_presets[] = { + 0x06, 0x3e, 0x37, 0x30, 0xff, 0xe6 + }; + ++static const struct regmap_config tas2764_i2c_regmap; ++ + static int tas2764_codec_probe(struct snd_soc_component *component) + { + struct tas2764_priv *tas2764 = snd_soc_component_get_drvdata(component); +@@ -551,6 +553,7 @@ static int tas2764_codec_probe(struct snd_soc_component *component) + } + + tas2764_reset(tas2764); ++ regmap_reinit_cache(tas2764->regmap, &tas2764_i2c_regmap); + + if (tas2764->irq) { + ret = snd_soc_component_write(tas2764->component, TAS2764_INT_MASK0, 0x00); +-- +2.39.5 + diff --git a/queue-6.12/ata-libata-acpi-do-not-assume-40-wire-cable-if-no-de.patch b/queue-6.12/ata-libata-acpi-do-not-assume-40-wire-cable-if-no-de.patch new file mode 100644 index 0000000000..eb098d5bf2 --- /dev/null +++ b/queue-6.12/ata-libata-acpi-do-not-assume-40-wire-cable-if-no-de.patch @@ -0,0 +1,138 @@ +From 7cbb70951e659a98a0da8a004f847b690610b7d8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 19 May 2025 11:56:55 +0300 +Subject: ata: libata-acpi: Do not assume 40 wire cable if no devices are + enabled + +From: Tasos Sahanidis + +[ Upstream commit 33877220b8641b4cde474a4229ea92c0e3637883 ] + +On at least an ASRock 990FX Extreme 4 with a VIA VT6330, the devices +have not yet been enabled by the first time ata_acpi_cbl_80wire() is +called. This means that the ata_for_each_dev loop is never entered, +and a 40 wire cable is assumed. + +The VIA controller on this board does not report the cable in the PCI +config space, thus having to fall back to ACPI even though no SATA +bridge is present. + +The _GTM values are correctly reported by the firmware through ACPI, +which has already set up faster transfer modes, but due to the above +the controller is forced down to a maximum of UDMA/33. 
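+
+The shape of the old helper makes the failure mode easy to see: if no
+device is enabled yet, the loop body never runs and the function falls
+through to "not 80 wire" (condensed from the code being patched below):
+
+    int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm)
+    {
+            struct ata_device *dev;
+
+            ata_for_each_dev(dev, &ap->link, ENABLED) {
+                    unsigned int xfer_mask, udma_mask;
+
+                    xfer_mask = ata_acpi_gtm_xfermask(dev, gtm);
+                    ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask);
+
+                    /* BIOS selected a mode faster than UDMA/33 => 80 wire */
+                    if (udma_mask & ~ATA_UDMA_MASK_40C)
+                            return 1;
+            }
+
+            return 0;       /* loop body never ran: assume 40 wire */
+    }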
+ +Resolve this by modifying ata_acpi_cbl_80wire() to directly return the +cable type. First, an unknown cable is assumed which preserves the mode +set by the firmware, and then on subsequent calls when the devices have +been enabled, an 80 wire cable is correctly detected. + +Since the function now directly returns the cable type, it is renamed +to ata_acpi_cbl_pata_type(). + +Signed-off-by: Tasos Sahanidis +Link: https://lore.kernel.org/r/20250519085945.1399466-1-tasos@tasossah.com +Signed-off-by: Niklas Cassel +Signed-off-by: Sasha Levin +--- + drivers/ata/libata-acpi.c | 24 ++++++++++++++++-------- + drivers/ata/pata_via.c | 6 ++---- + include/linux/libata.h | 7 +++---- + 3 files changed, 21 insertions(+), 16 deletions(-) + +diff --git a/drivers/ata/libata-acpi.c b/drivers/ata/libata-acpi.c +index d36e71f475abd..39a350755a1ba 100644 +--- a/drivers/ata/libata-acpi.c ++++ b/drivers/ata/libata-acpi.c +@@ -514,15 +514,19 @@ unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev, + EXPORT_SYMBOL_GPL(ata_acpi_gtm_xfermask); + + /** +- * ata_acpi_cbl_80wire - Check for 80 wire cable ++ * ata_acpi_cbl_pata_type - Return PATA cable type + * @ap: Port to check +- * @gtm: GTM data to use + * +- * Return 1 if the @gtm indicates the BIOS selected an 80wire mode. ++ * Return ATA_CBL_PATA* according to the transfer mode selected by BIOS + */ +-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm) ++int ata_acpi_cbl_pata_type(struct ata_port *ap) + { + struct ata_device *dev; ++ int ret = ATA_CBL_PATA_UNK; ++ const struct ata_acpi_gtm *gtm = ata_acpi_init_gtm(ap); ++ ++ if (!gtm) ++ return ATA_CBL_PATA40; + + ata_for_each_dev(dev, &ap->link, ENABLED) { + unsigned int xfer_mask, udma_mask; +@@ -530,13 +534,17 @@ int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm) + xfer_mask = ata_acpi_gtm_xfermask(dev, gtm); + ata_unpack_xfermask(xfer_mask, NULL, NULL, &udma_mask); + +- if (udma_mask & ~ATA_UDMA_MASK_40C) +- return 1; ++ ret = ATA_CBL_PATA40; ++ ++ if (udma_mask & ~ATA_UDMA_MASK_40C) { ++ ret = ATA_CBL_PATA80; ++ break; ++ } + } + +- return 0; ++ return ret; + } +-EXPORT_SYMBOL_GPL(ata_acpi_cbl_80wire); ++EXPORT_SYMBOL_GPL(ata_acpi_cbl_pata_type); + + static void ata_acpi_gtf_to_tf(struct ata_device *dev, + const struct ata_acpi_gtf *gtf, +diff --git a/drivers/ata/pata_via.c b/drivers/ata/pata_via.c +index d82728a01832b..bb80e7800dcbe 100644 +--- a/drivers/ata/pata_via.c ++++ b/drivers/ata/pata_via.c +@@ -201,11 +201,9 @@ static int via_cable_detect(struct ata_port *ap) { + two drives */ + if (ata66 & (0x10100000 >> (16 * ap->port_no))) + return ATA_CBL_PATA80; ++ + /* Check with ACPI so we can spot BIOS reported SATA bridges */ +- if (ata_acpi_init_gtm(ap) && +- ata_acpi_cbl_80wire(ap, ata_acpi_init_gtm(ap))) +- return ATA_CBL_PATA80; +- return ATA_CBL_PATA40; ++ return ata_acpi_cbl_pata_type(ap); + } + + static int via_pre_reset(struct ata_link *link, unsigned long deadline) +diff --git a/include/linux/libata.h b/include/linux/libata.h +index 79974a99265fc..2d3bfec568ebe 100644 +--- a/include/linux/libata.h ++++ b/include/linux/libata.h +@@ -1366,7 +1366,7 @@ int ata_acpi_stm(struct ata_port *ap, const struct ata_acpi_gtm *stm); + int ata_acpi_gtm(struct ata_port *ap, struct ata_acpi_gtm *stm); + unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev, + const struct ata_acpi_gtm *gtm); +-int ata_acpi_cbl_80wire(struct ata_port *ap, const struct ata_acpi_gtm *gtm); ++int ata_acpi_cbl_pata_type(struct ata_port *ap); + #else + static inline const 
struct ata_acpi_gtm *ata_acpi_init_gtm(struct ata_port *ap) + { +@@ -1391,10 +1391,9 @@ static inline unsigned int ata_acpi_gtm_xfermask(struct ata_device *dev, + return 0; + } + +-static inline int ata_acpi_cbl_80wire(struct ata_port *ap, +- const struct ata_acpi_gtm *gtm) ++static inline int ata_acpi_cbl_pata_type(struct ata_port *ap) + { +- return 0; ++ return ATA_CBL_PATA40; + } + #endif + +-- +2.39.5 + diff --git a/queue-6.12/ata-pata_cs5536-fix-build-on-32-bit-uml.patch b/queue-6.12/ata-pata_cs5536-fix-build-on-32-bit-uml.patch new file mode 100644 index 0000000000..e131e099d1 --- /dev/null +++ b/queue-6.12/ata-pata_cs5536-fix-build-on-32-bit-uml.patch @@ -0,0 +1,38 @@ +From e6fabb56cc4d59411ca091438d8cbd6a8814ca70 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Jun 2025 11:01:11 +0200 +Subject: ata: pata_cs5536: fix build on 32-bit UML + +From: Johannes Berg + +[ Upstream commit fe5b391fc56f77cf3c22a9dd4f0ce20db0e3533f ] + +On 32-bit ARCH=um, CONFIG_X86_32 is still defined, so it +doesn't indicate building on real X86 machines. There's +no MSR on UML though, so add a check for CONFIG_X86. + +Reported-by: Arnd Bergmann +Signed-off-by: Johannes Berg +Link: https://lore.kernel.org/r/20250606090110.15784-2-johannes@sipsolutions.net +Signed-off-by: Niklas Cassel +Signed-off-by: Sasha Levin +--- + drivers/ata/pata_cs5536.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/ata/pata_cs5536.c b/drivers/ata/pata_cs5536.c +index b811efd2cc346..73e81e160c91f 100644 +--- a/drivers/ata/pata_cs5536.c ++++ b/drivers/ata/pata_cs5536.c +@@ -27,7 +27,7 @@ + #include <scsi/scsi_host.h> + #include <linux/dmi.h> + +-#ifdef CONFIG_X86_32 ++#if defined(CONFIG_X86) && defined(CONFIG_X86_32) + #include <asm/msr.h> + static int use_msr; + module_param_named(msr, use_msr, int, 0644); +-- +2.39.5 + diff --git a/queue-6.12/bluetooth-prevent-unintended-pause-by-checking-if-ad.patch b/queue-6.12/bluetooth-prevent-unintended-pause-by-checking-if-ad.patch new file mode 100644 index 0000000000..e49a74aff5 --- /dev/null +++ b/queue-6.12/bluetooth-prevent-unintended-pause-by-checking-if-ad.patch @@ -0,0 +1,69 @@ +From 4984845d98d2c9d2a32783e9ada669059474867d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Jun 2025 11:01:07 +0800 +Subject: Bluetooth: Prevent unintended pause by checking if advertising is + active + +From: Yang Li + +[ Upstream commit 1f029b4e30a602db33dedee5ac676e9236ad193c ] + +When PA Create Sync is enabled, advertising resumes unexpectedly. +Therefore, it's necessary to check whether advertising is currently +active before attempting to pause it. + + < HCI Command: LE Add Device To... (0x08|0x0011) plen 7 #1345 [hci0] 48.306205 + Address type: Random (0x01) + Address: 4F:84:84:5F:88:17 (Resolvable) + Identity type: Random (0x01) + Identity: FC:5B:8C:F7:5D:FB (Static) + < HCI Command: LE Set Address Re.. (0x08|0x002d) plen 1 #1347 [hci0] 48.308023 + Address resolution: Enabled (0x01) + ... + < HCI Command: LE Set Extended A.. (0x08|0x0039) plen 6 #1349 [hci0] 48.309650 + Extended advertising: Enabled (0x01) + Number of sets: 1 (0x01) + Entry 0 + Handle: 0x01 + Duration: 0 ms (0x00) + Max ext adv events: 0 + ... + < HCI Command: LE Periodic Adve.. 
(0x08|0x0044) plen 14 #1355 [hci0] 48.314575 + Options: 0x0000 + Use advertising SID, Advertiser Address Type and address + Reporting initially enabled + SID: 0x02 + Adv address type: Random (0x01) + Adv address: 4F:84:84:5F:88:17 (Resolvable) + Identity type: Random (0x01) + Identity: FC:5B:8C:F7:5D:FB (Static) + Skip: 0x0000 + Sync timeout: 20000 msec (0x07d0) + Sync CTE type: 0x0000 + +Fixes: ad383c2c65a5 ("Bluetooth: hci_sync: Enable advertising when LL privacy is enabled") +Signed-off-by: Yang Li +Signed-off-by: Luiz Augusto von Dentz +Signed-off-by: Sasha Levin +--- + net/bluetooth/hci_sync.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/net/bluetooth/hci_sync.c b/net/bluetooth/hci_sync.c +index 50cc60ac6242e..79d1a6ed08b29 100644 +--- a/net/bluetooth/hci_sync.c ++++ b/net/bluetooth/hci_sync.c +@@ -2531,6 +2531,10 @@ static int hci_pause_advertising_sync(struct hci_dev *hdev) + int err; + int old_state; + ++ /* If controller is not advertising we are done. */ ++ if (!hci_dev_test_flag(hdev, HCI_LE_ADV)) ++ return 0; ++ + /* If already been paused there is nothing to do. */ + if (hdev->advertising_paused) + return 0; +-- +2.39.5 + diff --git a/queue-6.12/bonding-mark-active-offloaded-xfrm_states.patch b/queue-6.12/bonding-mark-active-offloaded-xfrm_states.patch new file mode 100644 index 0000000000..920886f56c --- /dev/null +++ b/queue-6.12/bonding-mark-active-offloaded-xfrm_states.patch @@ -0,0 +1,134 @@ +From 741150be06a2d22da426dff8eae1f9794530d054 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 11 Apr 2025 10:49:57 +0300 +Subject: bonding: Mark active offloaded xfrm_states + +From: Cosmin Ratiu + +[ Upstream commit fd4e41ebf66cb8b43de2f640b97314c4ee3b4499 ] + +When the active link is changed for a bond device, the existing xfrm +states need to be migrated over to the new link. This is done with: +- bond_ipsec_del_sa_all() goes through the offloaded states list and + removes all of them from hw. +- bond_ipsec_add_sa_all() re-offloads all states to the new device. + +But because the offload status of xfrm states isn't marked in any way, +there can be bugs. + +When all bond links are down, bond_ipsec_del_sa_all() unoffloads +everything from the previous active link. If the same link then comes +back up, nothing gets reoffloaded by bond_ipsec_add_sa_all(). +This results in a stack trace like this a bit later when user space +removes the offloaded rules, because mlx5e_xfrm_del_state() is asked to +remove a rule that's no longer offloaded: + + [] Call Trace: + [] + [] ? __warn+0x7d/0x110 + [] ? mlx5e_xfrm_del_state+0x90/0xa0 [mlx5_core] + [] ? report_bug+0x16d/0x180 + [] ? handle_bug+0x4f/0x90 + [] ? exc_invalid_op+0x14/0x70 + [] ? asm_exc_invalid_op+0x16/0x20 + [] ? mlx5e_xfrm_del_state+0x73/0xa0 [mlx5_core] + [] ? mlx5e_xfrm_del_state+0x90/0xa0 [mlx5_core] + [] bond_ipsec_del_sa+0x1ab/0x200 [bonding] + [] xfrm_dev_state_delete+0x1f/0x60 + [] __xfrm_state_delete+0x196/0x200 + [] xfrm_state_delete+0x21/0x40 + [] xfrm_del_sa+0x69/0x110 + [] xfrm_user_rcv_msg+0x11d/0x300 + [] ? release_pages+0xca/0x140 + [] ? 
copy_to_user_tmpl.part.0+0x110/0x110 + [] netlink_rcv_skb+0x54/0x100 + [] xfrm_netlink_rcv+0x31/0x40 + [] netlink_unicast+0x1fc/0x2d0 + [] netlink_sendmsg+0x1e4/0x410 + [] __sock_sendmsg+0x38/0x60 + [] sock_write_iter+0x94/0xf0 + [] vfs_write+0x338/0x3f0 + [] ksys_write+0xba/0xd0 + [] do_syscall_64+0x4c/0x100 + [] entry_SYSCALL_64_after_hwframe+0x4b/0x53 + +There's also another theoretical bug: +Calling bond_ipsec_del_sa_all() multiple times can result in corruption +in the driver implementation if the double-free isn't tolerated. This +isn't nice. + +Before the "Fixes" commit, xs->xso.real_dev was set to NULL when an xfrm +state was unoffloaded from a device, but a race with netdevsim's +.xdo_dev_offload_ok() accessing real_dev was considered a sufficient +reason to not set real_dev to NULL anymore. This unfortunately +introduced the new bugs. + +Since .xdo_dev_offload_ok() was significantly refactored by [1] and +there are no more users in the stack of xso.real_dev, that +race is now gone and xs->xso.real_dev can now once again be used to +represent which device (if any) currently holds the offloaded rule. + +Go one step further and set real_dev after add/before delete calls, to +catch any future driver misuses of real_dev. + +[1] https://lore.kernel.org/netdev/cover.1739972570.git.leon@kernel.org/ +Fixes: f8cde9805981 ("bonding: fix xfrm real_dev null pointer dereference") +Signed-off-by: Cosmin Ratiu +Reviewed-by: Leon Romanovsky +Reviewed-by: Nikolay Aleksandrov +Reviewed-by: Hangbin Liu +Tested-by: Hangbin Liu +Signed-off-by: Steffen Klassert +Signed-off-by: Sasha Levin +--- + drivers/net/bonding/bond_main.c | 8 +++++--- + 1 file changed, 5 insertions(+), 3 deletions(-) + +diff --git a/drivers/net/bonding/bond_main.c b/drivers/net/bonding/bond_main.c +index 2a513dbbd9756..52ff0f9e04e07 100644 +--- a/drivers/net/bonding/bond_main.c ++++ b/drivers/net/bonding/bond_main.c +@@ -497,9 +497,9 @@ static int bond_ipsec_add_sa(struct xfrm_state *xs, + goto out; + } + +- xs->xso.real_dev = real_dev; + err = real_dev->xfrmdev_ops->xdo_dev_state_add(xs, extack); + if (!err) { ++ xs->xso.real_dev = real_dev; + ipsec->xs = xs; + INIT_LIST_HEAD(&ipsec->list); + mutex_lock(&bond->ipsec_lock); +@@ -541,11 +541,11 @@ static void bond_ipsec_add_sa_all(struct bonding *bond) + if (ipsec->xs->xso.real_dev == real_dev) + continue; + +- ipsec->xs->xso.real_dev = real_dev; + if (real_dev->xfrmdev_ops->xdo_dev_state_add(ipsec->xs, NULL)) { + slave_warn(bond_dev, real_dev, "%s: failed to add SA\n", __func__); +- ipsec->xs->xso.real_dev = NULL; ++ continue; + } ++ ipsec->xs->xso.real_dev = real_dev; + } + out: + mutex_unlock(&bond->ipsec_lock); +@@ -627,6 +627,7 @@ static void bond_ipsec_del_sa_all(struct bonding *bond) + "%s: no slave xdo_dev_state_delete\n", + __func__); + } else { ++ ipsec->xs->xso.real_dev = NULL; + real_dev->xfrmdev_ops->xdo_dev_state_delete(ipsec->xs); + if (real_dev->xfrmdev_ops->xdo_dev_state_free) + real_dev->xfrmdev_ops->xdo_dev_state_free(ipsec->xs); +@@ -661,6 +662,7 @@ static void bond_ipsec_free_sa(struct xfrm_state *xs) + + WARN_ON(xs->xso.real_dev != real_dev); + ++ xs->xso.real_dev = NULL; + if (real_dev && real_dev->xfrmdev_ops && + real_dev->xfrmdev_ops->xdo_dev_state_free) + real_dev->xfrmdev_ops->xdo_dev_state_free(xs); +-- +2.39.5 + diff --git a/queue-6.12/bpf-do-not-include-stack-ptr-register-in-precision-b.patch b/queue-6.12/bpf-do-not-include-stack-ptr-register-in-precision-b.patch new file mode 100644 index 0000000000..144fd515e3 --- /dev/null +++ 
b/queue-6.12/bpf-do-not-include-stack-ptr-register-in-precision-b.patch @@ -0,0 +1,209 @@ +From 4bf19e9bb94c1e5454d7803afa6e662c6867c3f2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 23 May 2025 21:13:35 -0700 +Subject: bpf: Do not include stack ptr register in precision backtracking + bookkeeping + +From: Yonghong Song + +[ Upstream commit e2d2115e56c4a02377189bfc3a9a7933552a7b0f ] + +Yi Lai reported an issue ([1]) where the following warning appears +in kernel dmesg: + [ 60.643604] verifier backtracking bug + [ 60.643635] WARNING: CPU: 10 PID: 2315 at kernel/bpf/verifier.c:4302 __mark_chain_precision+0x3a6c/0x3e10 + [ 60.648428] Modules linked in: bpf_testmod(OE) + [ 60.650471] CPU: 10 UID: 0 PID: 2315 Comm: test_progs Tainted: G OE 6.15.0-rc4-gef11287f8289-dirty #327 PREEMPT(full) + [ 60.654385] Tainted: [O]=OOT_MODULE, [E]=UNSIGNED_MODULE + [ 60.656682] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.14.0-0-g155821a1990b-prebuilt.qemu.org 04/01/2014 + [ 60.660475] RIP: 0010:__mark_chain_precision+0x3a6c/0x3e10 + [ 60.662814] Code: 5a 30 84 89 ea e8 c4 d9 01 00 80 3d 3e 7d d8 04 00 0f 85 60 fa ff ff c6 05 31 7d d8 04 + 01 48 c7 c7 00 58 30 84 e8 c4 06 a5 ff <0f> 0b e9 46 fa ff ff 48 ... + [ 60.668720] RSP: 0018:ffff888116cc7298 EFLAGS: 00010246 + [ 60.671075] RAX: 54d70e82dfd31900 RBX: ffff888115b65e20 RCX: 0000000000000000 + [ 60.673659] RDX: 0000000000000001 RSI: 0000000000000004 RDI: 00000000ffffffff + [ 60.676241] RBP: 0000000000000400 R08: ffff8881f6f23bd3 R09: 1ffff1103ede477a + [ 60.678787] R10: dffffc0000000000 R11: ffffed103ede477b R12: ffff888115b60ae8 + [ 60.681420] R13: 1ffff11022b6cbc4 R14: 00000000fffffff2 R15: 0000000000000001 + [ 60.684030] FS: 00007fc2aedd80c0(0000) GS:ffff88826fa8a000(0000) knlGS:0000000000000000 + [ 60.686837] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 + [ 60.689027] CR2: 000056325369e000 CR3: 000000011088b002 CR4: 0000000000370ef0 + [ 60.691623] Call Trace: + [ 60.692821] + [ 60.693960] ? __pfx_verbose+0x10/0x10 + [ 60.695656] ? __pfx_disasm_kfunc_name+0x10/0x10 + [ 60.697495] check_cond_jmp_op+0x16f7/0x39b0 + [ 60.699237] do_check+0x58fa/0xab10 + ... 
+ +Further analysis shows the warning is at line 4302 as below: + + 4294 /* static subprog call instruction, which + 4295 * means that we are exiting current subprog, + 4296 * so only r1-r5 could be still requested as + 4297 * precise, r0 and r6-r10 or any stack slot in + 4298 * the current frame should be zero by now + 4299 */ + 4300 if (bt_reg_mask(bt) & ~BPF_REGMASK_ARGS) { + 4301 verbose(env, "BUG regs %x\n", bt_reg_mask(bt)); + 4302 WARN_ONCE(1, "verifier backtracking bug"); + 4303 return -EFAULT; + 4304 } + +With the below test (also in the next patch): + __used __naked static void __bpf_jmp_r10(void) + { + asm volatile ( + "r2 = 2314885393468386424 ll;" + "goto +0;" + "if r2 <= r10 goto +3;" + "if r1 >= -1835016 goto +0;" + "if r2 <= 8 goto +0;" + "if r3 <= 0 goto +0;" + "exit;" + ::: __clobber_all); + } + + SEC("?raw_tp") + __naked void bpf_jmp_r10(void) + { + asm volatile ( + "r3 = 0 ll;" + "call __bpf_jmp_r10;" + "r0 = 0;" + "exit;" + ::: __clobber_all); + } + +The following is the verifier failure log: + 0: (18) r3 = 0x0 ; R3_w=0 + 2: (85) call pc+2 + caller: + R10=fp0 + callee: + frame1: R1=ctx() R3_w=0 R10=fp0 + 5: frame1: R1=ctx() R3_w=0 R10=fp0 + ; asm volatile (" \ @ verifier_precision.c:184 + 5: (18) r2 = 0x20202000256c6c78 ; frame1: R2_w=0x20202000256c6c78 + 7: (05) goto pc+0 + 8: (bd) if r2 <= r10 goto pc+3 ; frame1: R2_w=0x20202000256c6c78 R10=fp0 + 9: (35) if r1 >= 0xffe3fff8 goto pc+0 ; frame1: R1=ctx() + 10: (b5) if r2 <= 0x8 goto pc+0 + mark_precise: frame1: last_idx 10 first_idx 0 subseq_idx -1 + mark_precise: frame1: regs=r2 stack= before 9: (35) if r1 >= 0xffe3fff8 goto pc+0 + mark_precise: frame1: regs=r2 stack= before 8: (bd) if r2 <= r10 goto pc+3 + mark_precise: frame1: regs=r2,r10 stack= before 7: (05) goto pc+0 + mark_precise: frame1: regs=r2,r10 stack= before 5: (18) r2 = 0x20202000256c6c78 + mark_precise: frame1: regs=r10 stack= before 2: (85) call pc+2 + BUG regs 400 + +The main failure reason is due to r10 in precision backtracking bookkeeping. +Actually r10 is always precise and there is no need to add it for the precision +backtracking bookkeeping. + +One way to fix the issue is to prevent bt_set_reg() if any src/dst reg is +r10. Andrii suggested to go with push_insn_history() approach to avoid +explicitly checking r10 in backtrack_insn(). + +This patch added push_insn_history() support for cond_jmp like 'rX rY' +operations. In check_cond_jmp_op(), if any of rX or rY is a stack pointer, +push_insn_history() will record such information, and later backtrack_insn() +will do bt_set_reg() properly for those register(s). 
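+
+Sketched from the hunks below, the two halves of the fix are:
+
+    /* check_cond_jmp_op(): record stack-pointer operands, if any */
+    if (src_reg->type == PTR_TO_STACK)
+            insn_flags |= INSN_F_SRC_REG_STACK;
+    if (dst_reg->type == PTR_TO_STACK)
+            insn_flags |= INSN_F_DST_REG_STACK;
+    if (insn_flags)
+            err = push_insn_history(env, this_branch, insn_flags, 0);
+
+    /* backtrack_insn(): skip precision marks for those operands */
+    if (!hist || !(hist->flags & INSN_F_SRC_REG_STACK))
+            bt_set_reg(bt, sreg);
+    if (!hist || !(hist->flags & INSN_F_DST_REG_STACK))
+            bt_set_reg(bt, dreg);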
+ + [1] https://lore.kernel.org/bpf/Z%2F8q3xzpU59CIYQE@ly-workstation/ + +Reported by: Yi Lai + +Fixes: 407958a0e980 ("bpf: encapsulate precision backtracking bookkeeping") +Signed-off-by: Yonghong Song +Signed-off-by: Andrii Nakryiko +Link: https://lore.kernel.org/bpf/20250524041335.4046126-1-yonghong.song@linux.dev +Signed-off-by: Sasha Levin +--- + include/linux/bpf_verifier.h | 12 ++++++++---- + kernel/bpf/verifier.c | 18 ++++++++++++++++-- + 2 files changed, 24 insertions(+), 6 deletions(-) + +diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h +index b82ff91916e42..fb33458f2fc77 100644 +--- a/include/linux/bpf_verifier.h ++++ b/include/linux/bpf_verifier.h +@@ -361,7 +361,11 @@ enum { + INSN_F_SPI_MASK = 0x3f, /* 6 bits */ + INSN_F_SPI_SHIFT = 3, /* shifted 3 bits to the left */ + +- INSN_F_STACK_ACCESS = BIT(9), /* we need 10 bits total */ ++ INSN_F_STACK_ACCESS = BIT(9), ++ ++ INSN_F_DST_REG_STACK = BIT(10), /* dst_reg is PTR_TO_STACK */ ++ INSN_F_SRC_REG_STACK = BIT(11), /* src_reg is PTR_TO_STACK */ ++ /* total 12 bits are used now. */ + }; + + static_assert(INSN_F_FRAMENO_MASK + 1 >= MAX_CALL_FRAMES); +@@ -370,9 +374,9 @@ static_assert(INSN_F_SPI_MASK + 1 >= MAX_BPF_STACK / 8); + struct bpf_insn_hist_entry { + u32 idx; + /* insn idx can't be bigger than 1 million */ +- u32 prev_idx : 22; +- /* special flags, e.g., whether insn is doing register stack spill/load */ +- u32 flags : 10; ++ u32 prev_idx : 20; ++ /* special INSN_F_xxx flags */ ++ u32 flags : 12; + /* additional registers that need precision tracking when this + * jump is backtracked, vector of six 10-bit records + */ +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 7f00dc993041b..f01477cecf393 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -4066,8 +4066,10 @@ static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, + * before it would be equally necessary to + * propagate it to dreg. 
+ */ +- bt_set_reg(bt, dreg); +- bt_set_reg(bt, sreg); ++ if (!hist || !(hist->flags & INSN_F_SRC_REG_STACK)) ++ bt_set_reg(bt, sreg); ++ if (!hist || !(hist->flags & INSN_F_DST_REG_STACK)) ++ bt_set_reg(bt, dreg); + } else if (BPF_SRC(insn->code) == BPF_K) { + /* dreg K + * Only dreg still needs precision before +@@ -15415,6 +15417,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + struct bpf_reg_state *eq_branch_regs; + struct linked_regs linked_regs = {}; + u8 opcode = BPF_OP(insn->code); ++ int insn_flags = 0; + bool is_jmp32; + int pred = -1; + int err; +@@ -15474,6 +15477,9 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + insn->src_reg); + return -EACCES; + } ++ ++ if (src_reg->type == PTR_TO_STACK) ++ insn_flags |= INSN_F_SRC_REG_STACK; + } else { + if (insn->src_reg != BPF_REG_0) { + verbose(env, "BPF_JMP/JMP32 uses reserved fields\n"); +@@ -15485,6 +15491,14 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + __mark_reg_known(src_reg, insn->imm); + } + ++ if (dst_reg->type == PTR_TO_STACK) ++ insn_flags |= INSN_F_DST_REG_STACK; ++ if (insn_flags) { ++ err = push_insn_history(env, this_branch, insn_flags, 0); ++ if (err) ++ return err; ++ } ++ + is_jmp32 = BPF_CLASS(insn->code) == BPF_JMP32; + pred = is_branch_taken(dst_reg, src_reg, opcode, is_jmp32); + if (pred >= 0) { +-- +2.39.5 + diff --git a/queue-6.12/bpf-use-common-instruction-history-across-all-states.patch b/queue-6.12/bpf-use-common-instruction-history-across-all-states.patch new file mode 100644 index 0000000000..9621288d23 --- /dev/null +++ b/queue-6.12/bpf-use-common-instruction-history-across-all-states.patch @@ -0,0 +1,421 @@ +From ace52eac43fe4d3cb39d2fcf5456a2ebf372ac61 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 14 Nov 2024 16:13:03 -0800 +Subject: bpf: use common instruction history across all states + +From: Andrii Nakryiko + +[ Upstream commit 96a30e469ca1d2b8cc7811b40911f8614b558241 ] + +Instead of allocating and copying instruction history each time we +enqueue child verifier state, switch to a model where we use one common +dynamically sized array of instruction history entries across all states. + +The key observation for proving this is correct is that instruction +history is only relevant while state is active, which means it either is +a current state (and thus we are actively modifying instruction history +and no other state can interfere with us) or we are checkpointed state +with some children still active (either enqueued or being current). + +In the latter case our portion of instruction history is finalized and +won't change or grow, so as long as we keep it immutable until the state +is finalized, we are good. + +Now, when state is finalized and is put into state hash for potentially +future pruning lookups, instruction history is not used anymore. This is +because instruction history is only used by precision marking logic, and +we never modify precision markings for finalized states. + +So, instead of each state having its own small instruction history, we +keep a global dynamically-sized instruction history, where each state in +current DFS path from root to active state remembers its portion of +instruction history. Current state can append to this history, but +cannot modify any of its parent histories. + +Async callback state enqueueing, while logically detached from parent +state, still is part of verification backtracking tree, so has to follow +the same schema as normal state checkpoints. 
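+
+In code terms (taken from the push_async_cb() hunk below), an async
+callback state just continues the parent's portion of the shared history:
+
+    /* chain insn_hist indices exactly like a normal child state */
+    elem->st.insn_hist_start = env->cur_state->insn_hist_end;
+    elem->st.insn_hist_end = elem->st.insn_hist_start;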
+ +Because the insn_hist array can be grown through realloc, states don't +keep pointers, they instead maintain two indices, [start, end), into +global instruction history array. End is exclusive index, so +`start == end` means there is no relevant instruction history. + +This eliminates a lot of allocations and minimizes overall memory usage. + +For instance, running a worst-case test from [0] (but without the +heuristics-based fix [1]), it took 12.5 minutes until we get -ENOMEM. +With the changes in this patch the whole test succeeds in 10 minutes +(very slow, so heuristics from [1] is important, of course). + +To further validate correctness, veristat-based comparison was performed for +Meta production BPF objects and BPF selftests objects. In both cases there +were no differences *at all* in terms of verdict or instruction and state +counts, providing a good confidence in the change. + +Having this low-memory-overhead solution of keeping dynamic +per-instruction history cheaply opens up some new possibilities, like +keeping extra information for literally every single validated +instruction. This will be used for simplifying precision backpropagation +logic in follow up patches. + + [0] https://lore.kernel.org/bpf/20241029172641.1042523-2-eddyz87@gmail.com/ + [1] https://lore.kernel.org/bpf/20241029172641.1042523-1-eddyz87@gmail.com/ + +Acked-by: Eduard Zingerman +Signed-off-by: Andrii Nakryiko +Link: https://lore.kernel.org/r/20241115001303.277272-1-andrii@kernel.org +Signed-off-by: Alexei Starovoitov +Stable-dep-of: e2d2115e56c4 ("bpf: Do not include stack ptr register in precision backtracking bookkeeping") +Signed-off-by: Sasha Levin +--- + include/linux/bpf_verifier.h | 19 ++++--- + kernel/bpf/verifier.c | 107 +++++++++++++++++------------------ + 2 files changed, 63 insertions(+), 63 deletions(-) + +diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h +index 50eeb5b86ed70..b82ff91916e42 100644 +--- a/include/linux/bpf_verifier.h ++++ b/include/linux/bpf_verifier.h +@@ -349,7 +349,7 @@ struct bpf_func_state { + + #define MAX_CALL_FRAMES 8 + +-/* instruction history flags, used in bpf_jmp_history_entry.flags field */ ++/* instruction history flags, used in bpf_insn_hist_entry.flags field */ + enum { + /* instruction references stack slot through PTR_TO_STACK register; + * we also store stack's frame number in lower 3 bits (MAX_CALL_FRAMES is 8) +@@ -367,7 +367,7 @@ enum { + static_assert(INSN_F_FRAMENO_MASK + 1 >= MAX_CALL_FRAMES); + static_assert(INSN_F_SPI_MASK + 1 >= MAX_BPF_STACK / 8); + +-struct bpf_jmp_history_entry { ++struct bpf_insn_hist_entry { + u32 idx; + /* insn idx can't be bigger than 1 million */ + u32 prev_idx : 22; +@@ -458,13 +458,14 @@ struct bpf_verifier_state { + * See get_loop_entry() for more information. + */ + struct bpf_verifier_state *loop_entry; +- /* jmp history recorded from first to last. +- * backtracking is using it to go from last to first. +- * For most states jmp_history_cnt is [0-3]. ++ /* Sub-range of env->insn_hist[] corresponding to this state's ++ * instruction history. ++ * Backtracking is using it to go from last to first. ++ * For most states instruction history is short, 0-3 instructions. + * For loops can go up to ~40. 
+ */ +- struct bpf_jmp_history_entry *jmp_history; +- u32 jmp_history_cnt; ++ u32 insn_hist_start; ++ u32 insn_hist_end; + u32 dfs_depth; + u32 callback_unroll_depth; + u32 may_goto_depth; +@@ -748,7 +749,9 @@ struct bpf_verifier_env { + int cur_stack; + } cfg; + struct backtrack_state bt; +- struct bpf_jmp_history_entry *cur_hist_ent; ++ struct bpf_insn_hist_entry *insn_hist; ++ struct bpf_insn_hist_entry *cur_hist_ent; ++ u32 insn_hist_cap; + u32 pass_cnt; /* number of times do_check() was called */ + u32 subprog_cnt; + /* number of instructions analyzed by the verifier */ +diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c +index 39a3d750f2ff9..7f00dc993041b 100644 +--- a/kernel/bpf/verifier.c ++++ b/kernel/bpf/verifier.c +@@ -1376,13 +1376,6 @@ static void free_func_state(struct bpf_func_state *state) + kfree(state); + } + +-static void clear_jmp_history(struct bpf_verifier_state *state) +-{ +- kfree(state->jmp_history); +- state->jmp_history = NULL; +- state->jmp_history_cnt = 0; +-} +- + static void free_verifier_state(struct bpf_verifier_state *state, + bool free_self) + { +@@ -1392,7 +1385,6 @@ static void free_verifier_state(struct bpf_verifier_state *state, + free_func_state(state->frame[i]); + state->frame[i] = NULL; + } +- clear_jmp_history(state); + if (free_self) + kfree(state); + } +@@ -1418,13 +1410,6 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state, + struct bpf_func_state *dst; + int i, err; + +- dst_state->jmp_history = copy_array(dst_state->jmp_history, src->jmp_history, +- src->jmp_history_cnt, sizeof(*dst_state->jmp_history), +- GFP_USER); +- if (!dst_state->jmp_history) +- return -ENOMEM; +- dst_state->jmp_history_cnt = src->jmp_history_cnt; +- + /* if dst has more stack frames then src frame, free them, this is also + * necessary in case of exceptional exits using bpf_throw. + */ +@@ -1443,6 +1428,8 @@ static int copy_verifier_state(struct bpf_verifier_state *dst_state, + dst_state->parent = src->parent; + dst_state->first_insn_idx = src->first_insn_idx; + dst_state->last_insn_idx = src->last_insn_idx; ++ dst_state->insn_hist_start = src->insn_hist_start; ++ dst_state->insn_hist_end = src->insn_hist_end; + dst_state->dfs_depth = src->dfs_depth; + dst_state->callback_unroll_depth = src->callback_unroll_depth; + dst_state->used_as_loop_entry = src->used_as_loop_entry; +@@ -2496,9 +2483,14 @@ static struct bpf_verifier_state *push_async_cb(struct bpf_verifier_env *env, + * The caller state doesn't matter. + * This is async callback. It starts in a fresh stack. + * Initialize it similar to do_check_common(). ++ * But we do need to make sure to not clobber insn_hist, so we keep ++ * chaining insn_hist_start/insn_hist_end indices as for a normal ++ * child state. 
+ */ + elem->st.branches = 1; + elem->st.in_sleepable = is_sleepable; ++ elem->st.insn_hist_start = env->cur_state->insn_hist_end; ++ elem->st.insn_hist_end = elem->st.insn_hist_start; + frame = kzalloc(sizeof(*frame), GFP_KERNEL); + if (!frame) + goto err; +@@ -3513,11 +3505,10 @@ static void linked_regs_unpack(u64 val, struct linked_regs *s) + } + + /* for any branch, call, exit record the history of jmps in the given state */ +-static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur, +- int insn_flags, u64 linked_regs) ++static int push_insn_history(struct bpf_verifier_env *env, struct bpf_verifier_state *cur, ++ int insn_flags, u64 linked_regs) + { +- u32 cnt = cur->jmp_history_cnt; +- struct bpf_jmp_history_entry *p; ++ struct bpf_insn_hist_entry *p; + size_t alloc_size; + + /* combine instruction flags if we already recorded this instruction */ +@@ -3537,29 +3528,32 @@ static int push_jmp_history(struct bpf_verifier_env *env, struct bpf_verifier_st + return 0; + } + +- cnt++; +- alloc_size = kmalloc_size_roundup(size_mul(cnt, sizeof(*p))); +- p = krealloc(cur->jmp_history, alloc_size, GFP_USER); +- if (!p) +- return -ENOMEM; +- cur->jmp_history = p; ++ if (cur->insn_hist_end + 1 > env->insn_hist_cap) { ++ alloc_size = size_mul(cur->insn_hist_end + 1, sizeof(*p)); ++ p = kvrealloc(env->insn_hist, alloc_size, GFP_USER); ++ if (!p) ++ return -ENOMEM; ++ env->insn_hist = p; ++ env->insn_hist_cap = alloc_size / sizeof(*p); ++ } + +- p = &cur->jmp_history[cnt - 1]; ++ p = &env->insn_hist[cur->insn_hist_end]; + p->idx = env->insn_idx; + p->prev_idx = env->prev_insn_idx; + p->flags = insn_flags; + p->linked_regs = linked_regs; +- cur->jmp_history_cnt = cnt; ++ ++ cur->insn_hist_end++; + env->cur_hist_ent = p; + + return 0; + } + +-static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_state *st, +- u32 hist_end, int insn_idx) ++static struct bpf_insn_hist_entry *get_insn_hist_entry(struct bpf_verifier_env *env, ++ u32 hist_start, u32 hist_end, int insn_idx) + { +- if (hist_end > 0 && st->jmp_history[hist_end - 1].idx == insn_idx) +- return &st->jmp_history[hist_end - 1]; ++ if (hist_end > hist_start && env->insn_hist[hist_end - 1].idx == insn_idx) ++ return &env->insn_hist[hist_end - 1]; + return NULL; + } + +@@ -3576,25 +3570,26 @@ static struct bpf_jmp_history_entry *get_jmp_hist_entry(struct bpf_verifier_stat + * history entry recording a jump from last instruction of parent state and + * first instruction of given state. 
+ */ +-static int get_prev_insn_idx(struct bpf_verifier_state *st, int i, +- u32 *history) ++static int get_prev_insn_idx(const struct bpf_verifier_env *env, ++ struct bpf_verifier_state *st, ++ int insn_idx, u32 hist_start, u32 *hist_endp) + { +- u32 cnt = *history; ++ u32 hist_end = *hist_endp; ++ u32 cnt = hist_end - hist_start; + +- if (i == st->first_insn_idx) { ++ if (insn_idx == st->first_insn_idx) { + if (cnt == 0) + return -ENOENT; +- if (cnt == 1 && st->jmp_history[0].idx == i) ++ if (cnt == 1 && env->insn_hist[hist_start].idx == insn_idx) + return -ENOENT; + } + +- if (cnt && st->jmp_history[cnt - 1].idx == i) { +- i = st->jmp_history[cnt - 1].prev_idx; +- (*history)--; ++ if (cnt && env->insn_hist[hist_end - 1].idx == insn_idx) { ++ (*hist_endp)--; ++ return env->insn_hist[hist_end - 1].prev_idx; + } else { +- i--; ++ return insn_idx - 1; + } +- return i; + } + + static const char *disasm_kfunc_name(void *data, const struct bpf_insn *insn) +@@ -3766,7 +3761,7 @@ static void fmt_stack_mask(char *buf, ssize_t buf_sz, u64 stack_mask) + /* If any register R in hist->linked_regs is marked as precise in bt, + * do bt_set_frame_{reg,slot}(bt, R) for all registers in hist->linked_regs. + */ +-static void bt_sync_linked_regs(struct backtrack_state *bt, struct bpf_jmp_history_entry *hist) ++static void bt_sync_linked_regs(struct backtrack_state *bt, struct bpf_insn_hist_entry *hist) + { + struct linked_regs linked_regs; + bool some_precise = false; +@@ -3811,7 +3806,7 @@ static bool calls_callback(struct bpf_verifier_env *env, int insn_idx); + * - *was* processed previously during backtracking. + */ + static int backtrack_insn(struct bpf_verifier_env *env, int idx, int subseq_idx, +- struct bpf_jmp_history_entry *hist, struct backtrack_state *bt) ++ struct bpf_insn_hist_entry *hist, struct backtrack_state *bt) + { + const struct bpf_insn_cbs cbs = { + .cb_call = disasm_kfunc_name, +@@ -4230,7 +4225,7 @@ static void mark_all_scalars_imprecise(struct bpf_verifier_env *env, struct bpf_ + * SCALARS, as well as any other registers and slots that contribute to + * a tracked state of given registers/stack slots, depending on specific BPF + * assembly instructions (see backtrack_insns() for exact instruction handling +- * logic). This backtracking relies on recorded jmp_history and is able to ++ * logic). This backtracking relies on recorded insn_hist and is able to + * traverse entire chain of parent states. This process ends only when all the + * necessary registers/slots and their transitive dependencies are marked as + * precise. 
+@@ -4347,8 +4342,9 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno) + + for (;;) { + DECLARE_BITMAP(mask, 64); +- u32 history = st->jmp_history_cnt; +- struct bpf_jmp_history_entry *hist; ++ u32 hist_start = st->insn_hist_start; ++ u32 hist_end = st->insn_hist_end; ++ struct bpf_insn_hist_entry *hist; + + if (env->log.level & BPF_LOG_LEVEL2) { + verbose(env, "mark_precise: frame%d: last_idx %d first_idx %d subseq_idx %d \n", +@@ -4387,7 +4383,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno) + err = 0; + skip_first = false; + } else { +- hist = get_jmp_hist_entry(st, history, i); ++ hist = get_insn_hist_entry(env, hist_start, hist_end, i); + err = backtrack_insn(env, i, subseq_idx, hist, bt); + } + if (err == -ENOTSUPP) { +@@ -4404,7 +4400,7 @@ static int __mark_chain_precision(struct bpf_verifier_env *env, int regno) + */ + return 0; + subseq_idx = i; +- i = get_prev_insn_idx(st, i, &history); ++ i = get_prev_insn_idx(env, st, i, hist_start, &hist_end); + if (i == -ENOENT) + break; + if (i >= env->prog->len) { +@@ -4771,7 +4767,7 @@ static int check_stack_write_fixed_off(struct bpf_verifier_env *env, + } + + if (insn_flags) +- return push_jmp_history(env, env->cur_state, insn_flags, 0); ++ return push_insn_history(env, env->cur_state, insn_flags, 0); + return 0; + } + +@@ -5078,7 +5074,7 @@ static int check_stack_read_fixed_off(struct bpf_verifier_env *env, + insn_flags = 0; /* we are not restoring spilled register */ + } + if (insn_flags) +- return push_jmp_history(env, env->cur_state, insn_flags, 0); ++ return push_insn_history(env, env->cur_state, insn_flags, 0); + return 0; + } + +@@ -15542,7 +15538,7 @@ static int check_cond_jmp_op(struct bpf_verifier_env *env, + if (dst_reg->type == SCALAR_VALUE && dst_reg->id) + collect_linked_regs(this_branch, dst_reg->id, &linked_regs); + if (linked_regs.cnt > 1) { +- err = push_jmp_history(env, this_branch, 0, linked_regs_pack(&linked_regs)); ++ err = push_insn_history(env, this_branch, 0, linked_regs_pack(&linked_regs)); + if (err) + return err; + } +@@ -17984,7 +17980,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) + + force_new_state = env->test_state_freq || is_force_checkpoint(env, insn_idx) || + /* Avoid accumulating infinitely long jmp history */ +- cur->jmp_history_cnt > 40; ++ cur->insn_hist_end - cur->insn_hist_start > 40; + + /* bpf progs typically have pruning point every 4 instructions + * http://vger.kernel.org/bpfconf2019.html#session-1 +@@ -18182,7 +18178,7 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) + * the current state. + */ + if (is_jmp_point(env, env->insn_idx)) +- err = err ? : push_jmp_history(env, cur, 0, 0); ++ err = err ? : push_insn_history(env, cur, 0, 0); + err = err ? : propagate_precision(env, &sl->state); + if (err) + return err; +@@ -18281,8 +18277,8 @@ static int is_state_visited(struct bpf_verifier_env *env, int insn_idx) + + cur->parent = new; + cur->first_insn_idx = insn_idx; ++ cur->insn_hist_start = cur->insn_hist_end; + cur->dfs_depth = new->dfs_depth + 1; +- clear_jmp_history(cur); + new_sl->next = *explored_state(env, insn_idx); + *explored_state(env, insn_idx) = new_sl; + /* connect new state to parentage chain. 
Current frame needs all +@@ -18450,7 +18446,7 @@ static int do_check(struct bpf_verifier_env *env) + } + + if (is_jmp_point(env, env->insn_idx)) { +- err = push_jmp_history(env, state, 0, 0); ++ err = push_insn_history(env, state, 0, 0); + if (err) + return err; + } +@@ -22716,6 +22712,7 @@ int bpf_check(struct bpf_prog **prog, union bpf_attr *attr, bpfptr_t uattr, __u3 + if (!is_priv) + mutex_unlock(&bpf_verifier_lock); + vfree(env->insn_aux_data); ++ kvfree(env->insn_hist); + err_free_env: + kvfree(env); + return ret; +-- +2.39.5 + diff --git a/queue-6.12/btrfs-fix-inode-lookup-error-handling-during-log-rep.patch b/queue-6.12/btrfs-fix-inode-lookup-error-handling-during-log-rep.patch new file mode 100644 index 0000000000..8f1c379249 --- /dev/null +++ b/queue-6.12/btrfs-fix-inode-lookup-error-handling-during-log-rep.patch @@ -0,0 +1,295 @@ +From 734204249e318918f8510b714140e7093abeeb03 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 15:58:31 +0100 +Subject: btrfs: fix inode lookup error handling during log replay + +From: Filipe Manana + +[ Upstream commit 5f61b961599acbd2bed028d3089105a1f7d224b8 ] + +When replaying log trees we use read_one_inode() to get an inode, which is +just a wrapper around btrfs_iget_logging(), which in turn is a wrapper for +btrfs_iget(). But read_one_inode() always returns NULL for any error +that btrfs_iget_logging() / btrfs_iget() may return and this is a problem +because: + +1) In many callers of read_one_inode() we convert the NULL into -EIO, + which is not accurate since btrfs_iget() may return -ENOMEM and -ENOENT + for example, besides -EIO and other errors. So during log replay we + may end up reporting a false -EIO, which is confusing since we may + not have had any IO error at all; + +2) When replaying directory deletes, at replay_dir_deletes(), we assume + the NULL returned from read_one_inode() means that the inode doesn't + exist and then proceed as if no error had happened. This is wrong + because unless btrfs_iget() returned ERR_PTR(-ENOENT), we had an + actual error and the target inode may exist in the target subvolume + root - this may later result in the log replay code failing at a + later stage (if we are "lucky") or succeed but leaving some + inconsistency in the filesystem. + +So fix this by not ignoring errors from btrfs_iget_logging() and as +a consequence remove the read_one_inode() wrapper and just use +btrfs_iget_logging() directly. Also since btrfs_iget_logging() is +supposed to be called only against subvolume roots, just like +read_one_inode() which had a comment about it, add an assertion to +btrfs_iget_logging() to check that the target root corresponds to a +subvolume root. + +Fixes: 5d4f98a28c7d ("Btrfs: Mixed back reference (FORWARD ROLLING FORMAT CHANGE)") +Reviewed-by: Johannes Thumshirn +Reviewed-by: Qu Wenruo +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 127 +++++++++++++++++++++----------------------- + 1 file changed, 62 insertions(+), 65 deletions(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index f4317fce569b7..97c5dc0ebd9d6 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -143,6 +143,9 @@ static struct btrfs_inode *btrfs_iget_logging(u64 objectid, struct btrfs_root *r + unsigned int nofs_flag; + struct inode *inode; + ++ /* Only meant to be called for subvolume roots and not for log roots. 
*/ ++ ASSERT(is_fstree(btrfs_root_id(root))); ++ + /* + * We're holding a transaction handle whether we are logging or + * replaying a log tree, so we must make sure NOFS semantics apply +@@ -613,21 +616,6 @@ static int read_alloc_one_name(struct extent_buffer *eb, void *start, int len, + return 0; + } + +-/* +- * simple helper to read an inode off the disk from a given root +- * This can only be called for subvolume roots and not for the log +- */ +-static noinline struct btrfs_inode *read_one_inode(struct btrfs_root *root, +- u64 objectid) +-{ +- struct btrfs_inode *inode; +- +- inode = btrfs_iget_logging(objectid, root); +- if (IS_ERR(inode)) +- return NULL; +- return inode; +-} +- + /* replays a single extent in 'eb' at 'slot' with 'key' into the + * subvolume 'root'. path is released on entry and should be released + * on exit. +@@ -680,9 +668,9 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + return 0; + } + +- inode = read_one_inode(root, key->objectid); +- if (!inode) +- return -EIO; ++ inode = btrfs_iget_logging(key->objectid, root); ++ if (IS_ERR(inode)) ++ return PTR_ERR(inode); + + /* + * first check to see if we already have this extent in the +@@ -961,9 +949,10 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans, + + btrfs_release_path(path); + +- inode = read_one_inode(root, location.objectid); +- if (!inode) { +- ret = -EIO; ++ inode = btrfs_iget_logging(location.objectid, root); ++ if (IS_ERR(inode)) { ++ ret = PTR_ERR(inode); ++ inode = NULL; + goto out; + } + +@@ -1182,10 +1171,10 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans, + kfree(victim_name.name); + return ret; + } else if (!ret) { +- ret = -ENOENT; +- victim_parent = read_one_inode(root, +- parent_objectid); +- if (victim_parent) { ++ victim_parent = btrfs_iget_logging(parent_objectid, root); ++ if (IS_ERR(victim_parent)) { ++ ret = PTR_ERR(victim_parent); ++ } else { + inc_nlink(&inode->vfs_inode); + btrfs_release_path(path); + +@@ -1330,9 +1319,9 @@ static int unlink_old_inode_refs(struct btrfs_trans_handle *trans, + struct btrfs_inode *dir; + + btrfs_release_path(path); +- dir = read_one_inode(root, parent_id); +- if (!dir) { +- ret = -ENOENT; ++ dir = btrfs_iget_logging(parent_id, root); ++ if (IS_ERR(dir)) { ++ ret = PTR_ERR(dir); + kfree(name.name); + goto out; + } +@@ -1404,15 +1393,17 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + * copy the back ref in. The link count fixup code will take + * care of the rest + */ +- dir = read_one_inode(root, parent_objectid); +- if (!dir) { +- ret = -ENOENT; ++ dir = btrfs_iget_logging(parent_objectid, root); ++ if (IS_ERR(dir)) { ++ ret = PTR_ERR(dir); ++ dir = NULL; + goto out; + } + +- inode = read_one_inode(root, inode_objectid); +- if (!inode) { +- ret = -EIO; ++ inode = btrfs_iget_logging(inode_objectid, root); ++ if (IS_ERR(inode)) { ++ ret = PTR_ERR(inode); ++ inode = NULL; + goto out; + } + +@@ -1424,11 +1415,13 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + * parent object can change from one array + * item to another. 
+ */ +- if (!dir) +- dir = read_one_inode(root, parent_objectid); + if (!dir) { +- ret = -ENOENT; +- goto out; ++ dir = btrfs_iget_logging(parent_objectid, root); ++ if (IS_ERR(dir)) { ++ ret = PTR_ERR(dir); ++ dir = NULL; ++ goto out; ++ } + } + } else { + ret = ref_get_fields(eb, ref_ptr, &name, &ref_index); +@@ -1697,9 +1690,9 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans, + break; + + btrfs_release_path(path); +- inode = read_one_inode(root, key.offset); +- if (!inode) { +- ret = -EIO; ++ inode = btrfs_iget_logging(key.offset, root); ++ if (IS_ERR(inode)) { ++ ret = PTR_ERR(inode); + break; + } + +@@ -1735,9 +1728,9 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans, + struct btrfs_inode *inode; + struct inode *vfs_inode; + +- inode = read_one_inode(root, objectid); +- if (!inode) +- return -EIO; ++ inode = btrfs_iget_logging(objectid, root); ++ if (IS_ERR(inode)) ++ return PTR_ERR(inode); + + vfs_inode = &inode->vfs_inode; + key.objectid = BTRFS_TREE_LOG_FIXUP_OBJECTID; +@@ -1776,14 +1769,14 @@ static noinline int insert_one_name(struct btrfs_trans_handle *trans, + struct btrfs_inode *dir; + int ret; + +- inode = read_one_inode(root, location->objectid); +- if (!inode) +- return -ENOENT; ++ inode = btrfs_iget_logging(location->objectid, root); ++ if (IS_ERR(inode)) ++ return PTR_ERR(inode); + +- dir = read_one_inode(root, dirid); +- if (!dir) { ++ dir = btrfs_iget_logging(dirid, root); ++ if (IS_ERR(dir)) { + iput(&inode->vfs_inode); +- return -EIO; ++ return PTR_ERR(dir); + } + + ret = btrfs_add_link(trans, dir, inode, name, 1, index); +@@ -1860,9 +1853,9 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans, + bool update_size = true; + bool name_added = false; + +- dir = read_one_inode(root, key->objectid); +- if (!dir) +- return -EIO; ++ dir = btrfs_iget_logging(key->objectid, root); ++ if (IS_ERR(dir)) ++ return PTR_ERR(dir); + + ret = read_alloc_one_name(eb, di + 1, btrfs_dir_name_len(eb, di), &name); + if (ret) +@@ -2162,9 +2155,10 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans, + btrfs_dir_item_key_to_cpu(eb, di, &location); + btrfs_release_path(path); + btrfs_release_path(log_path); +- inode = read_one_inode(root, location.objectid); +- if (!inode) { +- ret = -EIO; ++ inode = btrfs_iget_logging(location.objectid, root); ++ if (IS_ERR(inode)) { ++ ret = PTR_ERR(inode); ++ inode = NULL; + goto out; + } + +@@ -2316,14 +2310,17 @@ static noinline int replay_dir_deletes(struct btrfs_trans_handle *trans, + if (!log_path) + return -ENOMEM; + +- dir = read_one_inode(root, dirid); +- /* it isn't an error if the inode isn't there, that can happen +- * because we replay the deletes before we copy in the inode item +- * from the log ++ dir = btrfs_iget_logging(dirid, root); ++ /* ++ * It isn't an error if the inode isn't there, that can happen because ++ * we replay the deletes before we copy in the inode item from the log. 
+ */ +- if (!dir) { ++ if (IS_ERR(dir)) { + btrfs_free_path(log_path); +- return 0; ++ ret = PTR_ERR(dir); ++ if (ret == -ENOENT) ++ ret = 0; ++ return ret; + } + + range_start = 0; +@@ -2482,9 +2479,9 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb, + struct btrfs_inode *inode; + u64 from; + +- inode = read_one_inode(root, key.objectid); +- if (!inode) { +- ret = -EIO; ++ inode = btrfs_iget_logging(key.objectid, root); ++ if (IS_ERR(inode)) { ++ ret = PTR_ERR(inode); + break; + } + from = ALIGN(i_size_read(&inode->vfs_inode), +-- +2.39.5 + diff --git a/queue-6.12/btrfs-fix-invalid-inode-pointer-dereferences-during-.patch b/queue-6.12/btrfs-fix-invalid-inode-pointer-dereferences-during-.patch new file mode 100644 index 0000000000..d957ba0a5d --- /dev/null +++ b/queue-6.12/btrfs-fix-invalid-inode-pointer-dereferences-during-.patch @@ -0,0 +1,77 @@ +From 3eaef30e2282ad43eb5b2e529c2519ca9b7a5b6b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 3 Jun 2025 19:29:01 +0100 +Subject: btrfs: fix invalid inode pointer dereferences during log replay + +From: Filipe Manana + +[ Upstream commit 2dcf838cf5c2f0f4501edaa1680fcad03618d760 ] + +In a few places where we call read_one_inode(), if we get a NULL pointer +we end up jumping into an error path, or fallthrough in case of +__add_inode_ref(), where we then do something like this: + + iput(&inode->vfs_inode); + +which results in an invalid inode pointer that triggers an invalid memory +access, resulting in a crash. + +Fix this by making sure we don't do such dereferences. + +Fixes: b4c50cbb01a1 ("btrfs: return a btrfs_inode from read_one_inode()") +CC: stable@vger.kernel.org # 6.15+ +Signed-off-by: Filipe Manana +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Stable-dep-of: 5f61b961599a ("btrfs: fix inode lookup error handling during log replay") +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 14 ++++++-------- + 1 file changed, 6 insertions(+), 8 deletions(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 7a1c7070287b2..f4317fce569b7 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -677,15 +677,12 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + extent_end = ALIGN(start + size, + fs_info->sectorsize); + } else { +- ret = 0; +- goto out; ++ return 0; + } + + inode = read_one_inode(root, key->objectid); +- if (!inode) { +- ret = -EIO; +- goto out; +- } ++ if (!inode) ++ return -EIO; + + /* + * first check to see if we already have this extent in the +@@ -977,7 +974,8 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans, + ret = unlink_inode_for_log_replay(trans, dir, inode, &name); + out: + kfree(name.name); +- iput(&inode->vfs_inode); ++ if (inode) ++ iput(&inode->vfs_inode); + return ret; + } + +@@ -1194,8 +1192,8 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans, + ret = unlink_inode_for_log_replay(trans, + victim_parent, + inode, &victim_name); ++ iput(&victim_parent->vfs_inode); + } +- iput(&victim_parent->vfs_inode); + kfree(victim_name.name); + if (ret) + return ret; +-- +2.39.5 + diff --git a/queue-6.12/btrfs-fix-iteration-of-extrefs-during-log-replay.patch b/queue-6.12/btrfs-fix-iteration-of-extrefs-during-log-replay.patch new file mode 100644 index 0000000000..21e3f69ff1 --- /dev/null +++ b/queue-6.12/btrfs-fix-iteration-of-extrefs-during-log-replay.patch @@ -0,0 +1,51 @@ +From a54dc73e80f02e4d51a0b46e6e1103806488b170 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 23 Jun 2025 
12:11:58 +0100 +Subject: btrfs: fix iteration of extrefs during log replay + +From: Filipe Manana + +[ Upstream commit 54a7081ed168b72a8a2d6ef4ba3a1259705a2926 ] + +At __inode_add_ref() when processing extrefs, if we jump into the next +label we have an undefined value of victim_name.len, since we haven't +initialized it before we did the goto. This results in an invalid memory +access in the next iteration of the loop since victim_name.len was not +initialized to the length of the name of the current extref. + +Fix this by initializing victim_name.len with the current extref's name +length. + +Fixes: e43eec81c516 ("btrfs: use struct qstr instead of name and namelen pairs") +Reviewed-by: Johannes Thumshirn +Reviewed-by: Qu Wenruo +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 40acf9ccccfe7..3ecab032907e7 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -1162,13 +1162,13 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans, + struct fscrypt_str victim_name; + + extref = (struct btrfs_inode_extref *)(base + cur_offset); ++ victim_name.len = btrfs_inode_extref_name_len(leaf, extref); + + if (btrfs_inode_extref_parent(leaf, extref) != parent_objectid) + goto next; + + ret = read_alloc_one_name(leaf, &extref->name, +- btrfs_inode_extref_name_len(leaf, extref), +- &victim_name); ++ victim_name.len, &victim_name); + if (ret) + return ret; + +-- +2.39.5 + diff --git a/queue-6.12/btrfs-fix-missing-error-handling-when-searching-for-.patch b/queue-6.12/btrfs-fix-missing-error-handling-when-searching-for-.patch new file mode 100644 index 0000000000..0d791f8e28 --- /dev/null +++ b/queue-6.12/btrfs-fix-missing-error-handling-when-searching-for-.patch @@ -0,0 +1,45 @@ +From 54389c3aec54ab7c435f4f64b6f15b5dca04f10d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 16:57:07 +0100 +Subject: btrfs: fix missing error handling when searching for inode refs + during log replay + +From: Filipe Manana + +[ Upstream commit 6561a40ceced9082f50c374a22d5966cf9fc5f5c ] + +During log replay, at __add_inode_ref(), when we are searching for inode +ref keys we totally ignore if btrfs_search_slot() returns an error. This +may make a log replay succeed when there was an actual error and leave +some metadata inconsistency in a subvolume tree. Fix this by checking if +an error was returned from btrfs_search_slot() and if so, return it to +the caller. 
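+
+In sketch form, the intended pattern is (a simplified illustration
+only, with the surrounding context elided, not the exact code):
+
+    ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0);
+    if (ret < 0)
+        return ret;    /* an actual error: propagate it to the caller */
+    if (ret == 0) {
+        /* exact key match: process the conflicting inode ref */
+    }
+    /* ret > 0 simply means the key was not found, which is fine */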
+ +Fixes: e02119d5a7b4 ("Btrfs: Add a write ahead tree log to optimize synchronous operations") +Reviewed-by: Johannes Thumshirn +Reviewed-by: Qu Wenruo +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 9637c7cdc0cf9..40acf9ccccfe7 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -1087,7 +1087,9 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans, + search_key.type = BTRFS_INODE_REF_KEY; + search_key.offset = parent_objectid; + ret = btrfs_search_slot(NULL, root, &search_key, path, 0, 0); +- if (ret == 0) { ++ if (ret < 0) { ++ return ret; ++ } else if (ret == 0) { + struct btrfs_inode_ref *victim_ref; + unsigned long ptr; + unsigned long ptr_end; +-- +2.39.5 + diff --git a/queue-6.12/btrfs-fix-wrong-start-offset-for-delalloc-space-rele.patch b/queue-6.12/btrfs-fix-wrong-start-offset-for-delalloc-space-rele.patch new file mode 100644 index 0000000000..93ed8c4ddb --- /dev/null +++ b/queue-6.12/btrfs-fix-wrong-start-offset-for-delalloc-space-rele.patch @@ -0,0 +1,50 @@ +From 2b82d1be5f28595a6206337d6fb4b016da8eef53 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 14 May 2025 10:30:58 +0100 +Subject: btrfs: fix wrong start offset for delalloc space release during mmap + write + +From: Filipe Manana + +[ Upstream commit 17a85f520469a1838379de8ad24f63e778f7c277 ] + +If we're doing a mmap write against a folio that has i_size somewhere in +the middle and we have multiple sectors in the folio, we may have to +release excess space previously reserved, for the range going from the +rounded up (to sector size) i_size to the folio's end offset. We are +calculating the right amount to release and passing it to +btrfs_delalloc_release_space(), but we are passing the wrong start offset +of that range - we're passing the folio's start offset instead of the +end offset, plus 1, of the range for which we keep the reservation. This +may result in releasing more space then we should and eventually trigger +an underflow of the data space_info's bytes_may_use counter. + +So fix this by passing the start offset as 'end + 1' instead of +'page_start' to btrfs_delalloc_release_space(). 
+ +Fixes: d0b7da88f640 ("Btrfs: btrfs_page_mkwrite: Reserve space in sectorsized units") +Reviewed-by: Qu Wenruo +Signed-off-by: Filipe Manana +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/file.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index 86e7150babd5c..0e63603ac5c78 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -1992,7 +1992,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + if (reserved_space < fsize) { + end = page_start + reserved_space - 1; + btrfs_delalloc_release_space(BTRFS_I(inode), +- data_reserved, page_start, ++ data_reserved, end + 1, + fsize - reserved_space, true); + } + } +-- +2.39.5 + diff --git a/queue-6.12/btrfs-prepare-btrfs_page_mkwrite-for-large-folios.patch b/queue-6.12/btrfs-prepare-btrfs_page_mkwrite-for-large-folios.patch new file mode 100644 index 0000000000..a4a9fcdfd4 --- /dev/null +++ b/queue-6.12/btrfs-prepare-btrfs_page_mkwrite-for-large-folios.patch @@ -0,0 +1,104 @@ +From 421d84692bbc37ad4c7c0b6b5a0f96ce0bf97904 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 20 Feb 2025 19:52:26 +1030 +Subject: btrfs: prepare btrfs_page_mkwrite() for large folios + +From: Qu Wenruo + +[ Upstream commit 49990d8fa27d75f8ecf4ad013b13de3c4b1ff433 ] + +This changes the assumption that the folio is always page sized. +(Although the ASSERT() for folio order is still kept as-is). + +Just replace the PAGE_SIZE with folio_size(). + +Reviewed-by: Johannes Thumshirn +Signed-off-by: Qu Wenruo +Signed-off-by: David Sterba +Stable-dep-of: 17a85f520469 ("btrfs: fix wrong start offset for delalloc space release during mmap write") +Signed-off-by: Sasha Levin +--- + fs/btrfs/file.c | 19 ++++++++++--------- + 1 file changed, 10 insertions(+), 9 deletions(-) + +diff --git a/fs/btrfs/file.c b/fs/btrfs/file.c +index eaa991e698049..86e7150babd5c 100644 +--- a/fs/btrfs/file.c ++++ b/fs/btrfs/file.c +@@ -1912,6 +1912,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + struct extent_changeset *data_reserved = NULL; + unsigned long zero_start; + loff_t size; ++ size_t fsize = folio_size(folio); + vm_fault_t ret; + int ret2; + int reserved = 0; +@@ -1922,7 +1923,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + + ASSERT(folio_order(folio) == 0); + +- reserved_space = PAGE_SIZE; ++ reserved_space = fsize; + + sb_start_pagefault(inode->i_sb); + page_start = folio_pos(folio); +@@ -1976,7 +1977,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + * We can't set the delalloc bits if there are pending ordered + * extents. Drop our locks and wait for them to finish. 
+ */ +- ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), page_start, PAGE_SIZE); ++ ordered = btrfs_lookup_ordered_range(BTRFS_I(inode), page_start, fsize); + if (ordered) { + unlock_extent(io_tree, page_start, page_end, &cached_state); + folio_unlock(folio); +@@ -1988,11 +1989,11 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + + if (folio->index == ((size - 1) >> PAGE_SHIFT)) { + reserved_space = round_up(size - page_start, fs_info->sectorsize); +- if (reserved_space < PAGE_SIZE) { ++ if (reserved_space < fsize) { + end = page_start + reserved_space - 1; + btrfs_delalloc_release_space(BTRFS_I(inode), + data_reserved, page_start, +- PAGE_SIZE - reserved_space, true); ++ fsize - reserved_space, true); + } + } + +@@ -2019,12 +2020,12 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + if (page_start + folio_size(folio) > size) + zero_start = offset_in_folio(folio, size); + else +- zero_start = PAGE_SIZE; ++ zero_start = fsize; + +- if (zero_start != PAGE_SIZE) ++ if (zero_start != fsize) + folio_zero_range(folio, zero_start, folio_size(folio) - zero_start); + +- btrfs_folio_clear_checked(fs_info, folio, page_start, PAGE_SIZE); ++ btrfs_folio_clear_checked(fs_info, folio, page_start, fsize); + btrfs_folio_set_dirty(fs_info, folio, page_start, end + 1 - page_start); + btrfs_folio_set_uptodate(fs_info, folio, page_start, end + 1 - page_start); + +@@ -2033,7 +2034,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + unlock_extent(io_tree, page_start, page_end, &cached_state); + up_read(&BTRFS_I(inode)->i_mmap_lock); + +- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE); ++ btrfs_delalloc_release_extents(BTRFS_I(inode), fsize); + sb_end_pagefault(inode->i_sb); + extent_changeset_free(data_reserved); + return VM_FAULT_LOCKED; +@@ -2042,7 +2043,7 @@ static vm_fault_t btrfs_page_mkwrite(struct vm_fault *vmf) + folio_unlock(folio); + up_read(&BTRFS_I(inode)->i_mmap_lock); + out: +- btrfs_delalloc_release_extents(BTRFS_I(inode), PAGE_SIZE); ++ btrfs_delalloc_release_extents(BTRFS_I(inode), fsize); + btrfs_delalloc_release_space(BTRFS_I(inode), data_reserved, page_start, + reserved_space, (ret != 0)); + out_noreserve: +-- +2.39.5 + diff --git a/queue-6.12/btrfs-propagate-last_unlink_trans-earlier-when-doing.patch b/queue-6.12/btrfs-propagate-last_unlink_trans-earlier-when-doing.patch new file mode 100644 index 0000000000..e546ecfd61 --- /dev/null +++ b/queue-6.12/btrfs-propagate-last_unlink_trans-earlier-when-doing.patch @@ -0,0 +1,100 @@ +From b3cf0ef5986eb2c4eb12261c0bfdc79062f6d390 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 20 Jun 2025 15:54:05 +0100 +Subject: btrfs: propagate last_unlink_trans earlier when doing a rmdir + +From: Filipe Manana + +[ Upstream commit c466e33e729a0ee017d10d919cba18f503853c60 ] + +In case the removed directory had a snapshot that was deleted, we are +propagating its inode's last_unlink_trans to the parent directory after +we removed the entry from the parent directory. This leaves a small race +window where someone can log the parent directory after we removed the +entry and before we updated last_unlink_trans, and as a result if we ever +try to replay such a log tree, we will fail since we will attempt to +remove a snapshot during log replay, which is currently not possible and +results in the log replay (and mount) to fail. This is the type of failure +described in commit 1ec9a1ae1e30 ("Btrfs: fix unreplayable log after +snapshot delete + parent dir fsync"). 
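+
+The racy ordering, sketched (simplified from btrfs_rmdir(), not the
+exact code):
+
+    ret = btrfs_unlink_inode(trans, BTRFS_I(dir),
+                             BTRFS_I(d_inode(dentry)), &fname.disk_name);
+    /* window: another task can log the parent directory here and miss
+     * the update below
+     */
+    if (last_unlink_trans >= trans->transid)
+        BTRFS_I(dir)->last_unlink_trans = last_unlink_trans;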
+ +So fix this by propagating the last_unlink_trans to the parent directory +before we remove the entry from it. + +Fixes: 44f714dae50a ("Btrfs: improve performance on fsync against new inode after rename/unlink") +Reviewed-by: Johannes Thumshirn +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/inode.c | 36 ++++++++++++++++++------------------ + 1 file changed, 18 insertions(+), 18 deletions(-) + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 921ec3802648b..345e86fd844df 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -4734,7 +4734,6 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry) + struct btrfs_fs_info *fs_info = BTRFS_I(inode)->root->fs_info; + int ret = 0; + struct btrfs_trans_handle *trans; +- u64 last_unlink_trans; + struct fscrypt_name fname; + + if (inode->i_size > BTRFS_EMPTY_DIR_SIZE) +@@ -4760,6 +4759,23 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry) + goto out_notrans; + } + ++ /* ++ * Propagate the last_unlink_trans value of the deleted dir to its ++ * parent directory. This is to prevent an unrecoverable log tree in the ++ * case we do something like this: ++ * 1) create dir foo ++ * 2) create snapshot under dir foo ++ * 3) delete the snapshot ++ * 4) rmdir foo ++ * 5) mkdir foo ++ * 6) fsync foo or some file inside foo ++ * ++ * This is because we can't unlink other roots when replaying the dir ++ * deletes for directory foo. ++ */ ++ if (BTRFS_I(inode)->last_unlink_trans >= trans->transid) ++ BTRFS_I(dir)->last_unlink_trans = BTRFS_I(inode)->last_unlink_trans; ++ + if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) { + ret = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry); + goto out; +@@ -4769,27 +4785,11 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry) + if (ret) + goto out; + +- last_unlink_trans = BTRFS_I(inode)->last_unlink_trans; +- + /* now the directory is empty */ + ret = btrfs_unlink_inode(trans, BTRFS_I(dir), BTRFS_I(d_inode(dentry)), + &fname.disk_name); +- if (!ret) { ++ if (!ret) + btrfs_i_size_write(BTRFS_I(inode), 0); +- /* +- * Propagate the last_unlink_trans value of the deleted dir to +- * its parent directory. This is to prevent an unrecoverable +- * log tree in the case we do something like this: +- * 1) create dir foo +- * 2) create snapshot under dir foo +- * 3) delete the snapshot +- * 4) rmdir foo +- * 5) mkdir foo +- * 6) fsync foo or some file inside foo +- */ +- if (last_unlink_trans >= trans->transid) +- BTRFS_I(dir)->last_unlink_trans = last_unlink_trans; +- } + out: + btrfs_end_transaction(trans); + out_notrans: +-- +2.39.5 + diff --git a/queue-6.12/btrfs-record-new-subvolume-in-parent-dir-earlier-to-.patch b/queue-6.12/btrfs-record-new-subvolume-in-parent-dir-earlier-to-.patch new file mode 100644 index 0000000000..7a7559d24c --- /dev/null +++ b/queue-6.12/btrfs-record-new-subvolume-in-parent-dir-earlier-to-.patch @@ -0,0 +1,70 @@ +From cab9e40c4db9782abec29eb43bc270f18b418c01 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Jun 2025 13:13:38 +0100 +Subject: btrfs: record new subvolume in parent dir earlier to avoid dir + logging races + +From: Filipe Manana + +[ Upstream commit bf5bcf9a6fa070ec8a725b08db63fb1318f77366 ] + +Instead of recording that a new subvolume was created in a directory after +we add the entry do the directory, record it before adding the entry. 
This +is to avoid races where after creating the entry and before recording the +new subvolume in the directory (the call to btrfs_record_new_subvolume()), +another task logs the directory, so we end up with a log tree where we +logged a directory that has an entry pointing to a root that was not yet +committed, resulting in an invalid entry if the log is persisted and +replayed later due to a power failure or crash. + +Also state this requirement in the function comment for +btrfs_record_new_subvolume(), similar to what we do for the +btrfs_record_unlink_dir() and btrfs_record_snapshot_destroy(). + +Fixes: 45c4102f0d82 ("btrfs: avoid transaction commit on any fsync after subvolume creation") +Reviewed-by: Johannes Thumshirn +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/ioctl.c | 4 ++-- + fs/btrfs/tree-log.c | 2 ++ + 2 files changed, 4 insertions(+), 2 deletions(-) + +diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c +index 3e3722a732393..1706f6d9b12e6 100644 +--- a/fs/btrfs/ioctl.c ++++ b/fs/btrfs/ioctl.c +@@ -758,14 +758,14 @@ static noinline int create_subvol(struct mnt_idmap *idmap, + goto out; + } + ++ btrfs_record_new_subvolume(trans, BTRFS_I(dir)); ++ + ret = btrfs_create_new_inode(trans, &new_inode_args); + if (ret) { + btrfs_abort_transaction(trans, ret); + goto out; + } + +- btrfs_record_new_subvolume(trans, BTRFS_I(dir)); +- + d_instantiate_new(dentry, new_inode_args.inode); + new_inode_args.inode = NULL; + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 97c5dc0ebd9d6..16b4474ded4bc 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -7463,6 +7463,8 @@ void btrfs_record_snapshot_destroy(struct btrfs_trans_handle *trans, + * full log sync. + * Also we don't need to worry with renames, since btrfs_rename() marks the log + * for full commit when renaming a subvolume. ++ * ++ * Must be called before creating the subvolume entry in its parent directory. + */ + void btrfs_record_new_subvolume(const struct btrfs_trans_handle *trans, + struct btrfs_inode *dir) +-- +2.39.5 + diff --git a/queue-6.12/btrfs-return-a-btrfs_inode-from-btrfs_iget_logging.patch b/queue-6.12/btrfs-return-a-btrfs_inode-from-btrfs_iget_logging.patch new file mode 100644 index 0000000000..32f8d15d70 --- /dev/null +++ b/queue-6.12/btrfs-return-a-btrfs_inode-from-btrfs_iget_logging.patch @@ -0,0 +1,286 @@ +From 539402f67752be3ae1d3e56051d5792c2f01b1ab Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 6 Mar 2025 16:42:28 +0000 +Subject: btrfs: return a btrfs_inode from btrfs_iget_logging() + +From: Filipe Manana + +[ Upstream commit a488d8ac2c4d96ecc7da59bb35a573277204ac6b ] + +All callers of btrfs_iget_logging() are interested in the btrfs_inode +structure rather than the VFS inode, so make btrfs_iget_logging() return +the btrfs_inode instead, avoiding lots of BTRFS_I() calls. + +Signed-off-by: Filipe Manana +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Stable-dep-of: 5f61b961599a ("btrfs: fix inode lookup error handling during log replay") +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 94 ++++++++++++++++++++++----------------------- + 1 file changed, 45 insertions(+), 49 deletions(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 3ecab032907e7..262523cd80476 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -138,7 +138,7 @@ static void wait_log_commit(struct btrfs_root *root, int transid); + * and once to do all the other items. 
+ */ + +-static struct inode *btrfs_iget_logging(u64 objectid, struct btrfs_root *root) ++static struct btrfs_inode *btrfs_iget_logging(u64 objectid, struct btrfs_root *root) + { + unsigned int nofs_flag; + struct inode *inode; +@@ -154,7 +154,10 @@ static struct inode *btrfs_iget_logging(u64 objectid, struct btrfs_root *root) + inode = btrfs_iget(objectid, root); + memalloc_nofs_restore(nofs_flag); + +- return inode; ++ if (IS_ERR(inode)) ++ return ERR_CAST(inode); ++ ++ return BTRFS_I(inode); + } + + /* +@@ -617,12 +620,12 @@ static int read_alloc_one_name(struct extent_buffer *eb, void *start, int len, + static noinline struct inode *read_one_inode(struct btrfs_root *root, + u64 objectid) + { +- struct inode *inode; ++ struct btrfs_inode *inode; + + inode = btrfs_iget_logging(objectid, root); + if (IS_ERR(inode)) +- inode = NULL; +- return inode; ++ return NULL; ++ return &inode->vfs_inode; + } + + /* replays a single extent in 'eb' at 'slot' with 'key' into the +@@ -5487,7 +5490,6 @@ static int log_new_dir_dentries(struct btrfs_trans_handle *trans, + ihold(&curr_inode->vfs_inode); + + while (true) { +- struct inode *vfs_inode; + struct btrfs_key key; + struct btrfs_key found_key; + u64 next_index; +@@ -5503,7 +5505,7 @@ static int log_new_dir_dentries(struct btrfs_trans_handle *trans, + struct extent_buffer *leaf = path->nodes[0]; + struct btrfs_dir_item *di; + struct btrfs_key di_key; +- struct inode *di_inode; ++ struct btrfs_inode *di_inode; + int log_mode = LOG_INODE_EXISTS; + int type; + +@@ -5530,17 +5532,16 @@ static int log_new_dir_dentries(struct btrfs_trans_handle *trans, + goto out; + } + +- if (!need_log_inode(trans, BTRFS_I(di_inode))) { +- btrfs_add_delayed_iput(BTRFS_I(di_inode)); ++ if (!need_log_inode(trans, di_inode)) { ++ btrfs_add_delayed_iput(di_inode); + break; + } + + ctx->log_new_dentries = false; + if (type == BTRFS_FT_DIR) + log_mode = LOG_INODE_ALL; +- ret = btrfs_log_inode(trans, BTRFS_I(di_inode), +- log_mode, ctx); +- btrfs_add_delayed_iput(BTRFS_I(di_inode)); ++ ret = btrfs_log_inode(trans, di_inode, log_mode, ctx); ++ btrfs_add_delayed_iput(di_inode); + if (ret) + goto out; + if (ctx->log_new_dentries) { +@@ -5582,14 +5583,13 @@ static int log_new_dir_dentries(struct btrfs_trans_handle *trans, + kfree(dir_elem); + + btrfs_add_delayed_iput(curr_inode); +- curr_inode = NULL; + +- vfs_inode = btrfs_iget_logging(ino, root); +- if (IS_ERR(vfs_inode)) { +- ret = PTR_ERR(vfs_inode); ++ curr_inode = btrfs_iget_logging(ino, root); ++ if (IS_ERR(curr_inode)) { ++ ret = PTR_ERR(curr_inode); ++ curr_inode = NULL; + break; + } +- curr_inode = BTRFS_I(vfs_inode); + } + out: + btrfs_free_path(path); +@@ -5667,7 +5667,7 @@ static int add_conflicting_inode(struct btrfs_trans_handle *trans, + struct btrfs_log_ctx *ctx) + { + struct btrfs_ino_list *ino_elem; +- struct inode *inode; ++ struct btrfs_inode *inode; + + /* + * It's rare to have a lot of conflicting inodes, in practice it is not +@@ -5758,12 +5758,12 @@ static int add_conflicting_inode(struct btrfs_trans_handle *trans, + * inode in LOG_INODE_EXISTS mode and rename operations update the log, + * so that the log ends up with the new name and without the old name. 
+ */ +- if (!need_log_inode(trans, BTRFS_I(inode))) { +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ if (!need_log_inode(trans, inode)) { ++ btrfs_add_delayed_iput(inode); + return 0; + } + +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ btrfs_add_delayed_iput(inode); + + ino_elem = kmalloc(sizeof(*ino_elem), GFP_NOFS); + if (!ino_elem) +@@ -5799,7 +5799,7 @@ static int log_conflicting_inodes(struct btrfs_trans_handle *trans, + */ + while (!list_empty(&ctx->conflict_inodes)) { + struct btrfs_ino_list *curr; +- struct inode *inode; ++ struct btrfs_inode *inode; + u64 ino; + u64 parent; + +@@ -5835,9 +5835,8 @@ static int log_conflicting_inodes(struct btrfs_trans_handle *trans, + * dir index key range logged for the directory. So we + * must make sure the deletion is recorded. + */ +- ret = btrfs_log_inode(trans, BTRFS_I(inode), +- LOG_INODE_ALL, ctx); +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ ret = btrfs_log_inode(trans, inode, LOG_INODE_ALL, ctx); ++ btrfs_add_delayed_iput(inode); + if (ret) + break; + continue; +@@ -5853,8 +5852,8 @@ static int log_conflicting_inodes(struct btrfs_trans_handle *trans, + * it again because if some other task logged the inode after + * that, we can avoid doing it again. + */ +- if (!need_log_inode(trans, BTRFS_I(inode))) { +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ if (!need_log_inode(trans, inode)) { ++ btrfs_add_delayed_iput(inode); + continue; + } + +@@ -5865,8 +5864,8 @@ static int log_conflicting_inodes(struct btrfs_trans_handle *trans, + * well because during a rename we pin the log and update the + * log with the new name before we unpin it. + */ +- ret = btrfs_log_inode(trans, BTRFS_I(inode), LOG_INODE_EXISTS, ctx); +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ ret = btrfs_log_inode(trans, inode, LOG_INODE_EXISTS, ctx); ++ btrfs_add_delayed_iput(inode); + if (ret) + break; + } +@@ -6358,7 +6357,7 @@ static int log_new_delayed_dentries(struct btrfs_trans_handle *trans, + + list_for_each_entry(item, delayed_ins_list, log_list) { + struct btrfs_dir_item *dir_item; +- struct inode *di_inode; ++ struct btrfs_inode *di_inode; + struct btrfs_key key; + int log_mode = LOG_INODE_EXISTS; + +@@ -6374,8 +6373,8 @@ static int log_new_delayed_dentries(struct btrfs_trans_handle *trans, + break; + } + +- if (!need_log_inode(trans, BTRFS_I(di_inode))) { +- btrfs_add_delayed_iput(BTRFS_I(di_inode)); ++ if (!need_log_inode(trans, di_inode)) { ++ btrfs_add_delayed_iput(di_inode); + continue; + } + +@@ -6383,12 +6382,12 @@ static int log_new_delayed_dentries(struct btrfs_trans_handle *trans, + log_mode = LOG_INODE_ALL; + + ctx->log_new_dentries = false; +- ret = btrfs_log_inode(trans, BTRFS_I(di_inode), log_mode, ctx); ++ ret = btrfs_log_inode(trans, di_inode, log_mode, ctx); + + if (!ret && ctx->log_new_dentries) +- ret = log_new_dir_dentries(trans, BTRFS_I(di_inode), ctx); ++ ret = log_new_dir_dentries(trans, di_inode, ctx); + +- btrfs_add_delayed_iput(BTRFS_I(di_inode)); ++ btrfs_add_delayed_iput(di_inode); + + if (ret) + break; +@@ -6796,7 +6795,7 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans, + ptr = btrfs_item_ptr_offset(leaf, slot); + while (cur_offset < item_size) { + struct btrfs_key inode_key; +- struct inode *dir_inode; ++ struct btrfs_inode *dir_inode; + + inode_key.type = BTRFS_INODE_ITEM_KEY; + inode_key.offset = 0; +@@ -6845,18 +6844,16 @@ static int btrfs_log_all_parents(struct btrfs_trans_handle *trans, + goto out; + } + +- if (!need_log_inode(trans, BTRFS_I(dir_inode))) { +- btrfs_add_delayed_iput(BTRFS_I(dir_inode)); ++ if 
(!need_log_inode(trans, dir_inode)) { ++ btrfs_add_delayed_iput(dir_inode); + continue; + } + + ctx->log_new_dentries = false; +- ret = btrfs_log_inode(trans, BTRFS_I(dir_inode), +- LOG_INODE_ALL, ctx); ++ ret = btrfs_log_inode(trans, dir_inode, LOG_INODE_ALL, ctx); + if (!ret && ctx->log_new_dentries) +- ret = log_new_dir_dentries(trans, +- BTRFS_I(dir_inode), ctx); +- btrfs_add_delayed_iput(BTRFS_I(dir_inode)); ++ ret = log_new_dir_dentries(trans, dir_inode, ctx); ++ btrfs_add_delayed_iput(dir_inode); + if (ret) + goto out; + } +@@ -6881,7 +6878,7 @@ static int log_new_ancestors(struct btrfs_trans_handle *trans, + struct extent_buffer *leaf; + int slot; + struct btrfs_key search_key; +- struct inode *inode; ++ struct btrfs_inode *inode; + u64 ino; + int ret = 0; + +@@ -6896,11 +6893,10 @@ static int log_new_ancestors(struct btrfs_trans_handle *trans, + if (IS_ERR(inode)) + return PTR_ERR(inode); + +- if (BTRFS_I(inode)->generation >= trans->transid && +- need_log_inode(trans, BTRFS_I(inode))) +- ret = btrfs_log_inode(trans, BTRFS_I(inode), +- LOG_INODE_EXISTS, ctx); +- btrfs_add_delayed_iput(BTRFS_I(inode)); ++ if (inode->generation >= trans->transid && ++ need_log_inode(trans, inode)) ++ ret = btrfs_log_inode(trans, inode, LOG_INODE_EXISTS, ctx); ++ btrfs_add_delayed_iput(inode); + if (ret) + return ret; + +-- +2.39.5 + diff --git a/queue-6.12/btrfs-return-a-btrfs_inode-from-read_one_inode.patch b/queue-6.12/btrfs-return-a-btrfs_inode-from-read_one_inode.patch new file mode 100644 index 0000000000..4f0481df66 --- /dev/null +++ b/queue-6.12/btrfs-return-a-btrfs_inode-from-read_one_inode.patch @@ -0,0 +1,483 @@ +From 98626a6fc275485210c6d8ecda855137e55f2049 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 6 Mar 2025 17:11:00 +0000 +Subject: btrfs: return a btrfs_inode from read_one_inode() + +From: Filipe Manana + +[ Upstream commit b4c50cbb01a1b6901d2b94469636dd80fa93de81 ] + +All callers of read_one_inode() are mostly interested in the btrfs_inode +structure rather than the VFS inode, so make read_one_inode() return +the btrfs_inode instead, avoiding lots of BTRFS_I() calls. 
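+
+In sketch form, a typical caller goes from (simplified):
+
+    struct inode *inode = read_one_inode(root, objectid);
+
+    if (!inode)
+        return -EIO;
+    ret = btrfs_update_inode(trans, BTRFS_I(inode));
+    iput(inode);
+
+to:
+
+    struct btrfs_inode *inode = read_one_inode(root, objectid);
+
+    if (!inode)
+        return -EIO;
+    ret = btrfs_update_inode(trans, inode);    /* no BTRFS_I() needed */
+    iput(&inode->vfs_inode);                   /* VFS inode is embedded */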
+ +Signed-off-by: Filipe Manana +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Stable-dep-of: 5f61b961599a ("btrfs: fix inode lookup error handling during log replay") +Signed-off-by: Sasha Levin +--- + fs/btrfs/tree-log.c | 152 +++++++++++++++++++++----------------------- + 1 file changed, 73 insertions(+), 79 deletions(-) + +diff --git a/fs/btrfs/tree-log.c b/fs/btrfs/tree-log.c +index 262523cd80476..7a1c7070287b2 100644 +--- a/fs/btrfs/tree-log.c ++++ b/fs/btrfs/tree-log.c +@@ -617,15 +617,15 @@ static int read_alloc_one_name(struct extent_buffer *eb, void *start, int len, + * simple helper to read an inode off the disk from a given root + * This can only be called for subvolume roots and not for the log + */ +-static noinline struct inode *read_one_inode(struct btrfs_root *root, +- u64 objectid) ++static noinline struct btrfs_inode *read_one_inode(struct btrfs_root *root, ++ u64 objectid) + { + struct btrfs_inode *inode; + + inode = btrfs_iget_logging(objectid, root); + if (IS_ERR(inode)) + return NULL; +- return &inode->vfs_inode; ++ return inode; + } + + /* replays a single extent in 'eb' at 'slot' with 'key' into the +@@ -653,7 +653,7 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + u64 start = key->offset; + u64 nbytes = 0; + struct btrfs_file_extent_item *item; +- struct inode *inode = NULL; ++ struct btrfs_inode *inode = NULL; + unsigned long size; + int ret = 0; + +@@ -692,8 +692,7 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + * file. This must be done before the btrfs_drop_extents run + * so we don't try to drop this extent. + */ +- ret = btrfs_lookup_file_extent(trans, root, path, +- btrfs_ino(BTRFS_I(inode)), start, 0); ++ ret = btrfs_lookup_file_extent(trans, root, path, btrfs_ino(inode), start, 0); + + if (ret == 0 && + (found_type == BTRFS_FILE_EXTENT_REG || +@@ -727,7 +726,7 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + drop_args.start = start; + drop_args.end = extent_end; + drop_args.drop_cache = true; +- ret = btrfs_drop_extents(trans, root, BTRFS_I(inode), &drop_args); ++ ret = btrfs_drop_extents(trans, root, inode, &drop_args); + if (ret) + goto out; + +@@ -905,16 +904,15 @@ static noinline int replay_one_extent(struct btrfs_trans_handle *trans, + goto out; + } + +- ret = btrfs_inode_set_file_extent_range(BTRFS_I(inode), start, +- extent_end - start); ++ ret = btrfs_inode_set_file_extent_range(inode, start, extent_end - start); + if (ret) + goto out; + + update_inode: +- btrfs_update_inode_bytes(BTRFS_I(inode), nbytes, drop_args.bytes_found); +- ret = btrfs_update_inode(trans, BTRFS_I(inode)); ++ btrfs_update_inode_bytes(inode, nbytes, drop_args.bytes_found); ++ ret = btrfs_update_inode(trans, inode); + out: +- iput(inode); ++ iput(&inode->vfs_inode); + return ret; + } + +@@ -951,7 +949,7 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans, + struct btrfs_dir_item *di) + { + struct btrfs_root *root = dir->root; +- struct inode *inode; ++ struct btrfs_inode *inode; + struct fscrypt_str name; + struct extent_buffer *leaf; + struct btrfs_key location; +@@ -976,10 +974,10 @@ static noinline int drop_one_dir_item(struct btrfs_trans_handle *trans, + if (ret) + goto out; + +- ret = unlink_inode_for_log_replay(trans, dir, BTRFS_I(inode), &name); ++ ret = unlink_inode_for_log_replay(trans, dir, inode, &name); + out: + kfree(name.name); +- iput(inode); ++ iput(&inode->vfs_inode); + return ret; + } + +@@ -1154,7 +1152,7 @@ static inline int 
__add_inode_ref(struct btrfs_trans_handle *trans, + u32 item_size; + u32 cur_offset = 0; + unsigned long base; +- struct inode *victim_parent; ++ struct btrfs_inode *victim_parent; + + leaf = path->nodes[0]; + +@@ -1194,10 +1192,10 @@ static inline int __add_inode_ref(struct btrfs_trans_handle *trans, + btrfs_release_path(path); + + ret = unlink_inode_for_log_replay(trans, +- BTRFS_I(victim_parent), ++ victim_parent, + inode, &victim_name); + } +- iput(victim_parent); ++ iput(&victim_parent->vfs_inode); + kfree(victim_name.name); + if (ret) + return ret; +@@ -1331,7 +1329,7 @@ static int unlink_old_inode_refs(struct btrfs_trans_handle *trans, + ret = !!btrfs_find_name_in_backref(log_eb, log_slot, &name); + + if (!ret) { +- struct inode *dir; ++ struct btrfs_inode *dir; + + btrfs_release_path(path); + dir = read_one_inode(root, parent_id); +@@ -1340,10 +1338,9 @@ static int unlink_old_inode_refs(struct btrfs_trans_handle *trans, + kfree(name.name); + goto out; + } +- ret = unlink_inode_for_log_replay(trans, BTRFS_I(dir), +- inode, &name); ++ ret = unlink_inode_for_log_replay(trans, dir, inode, &name); + kfree(name.name); +- iput(dir); ++ iput(&dir->vfs_inode); + if (ret) + goto out; + goto again; +@@ -1375,8 +1372,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + struct extent_buffer *eb, int slot, + struct btrfs_key *key) + { +- struct inode *dir = NULL; +- struct inode *inode = NULL; ++ struct btrfs_inode *dir = NULL; ++ struct btrfs_inode *inode = NULL; + unsigned long ref_ptr; + unsigned long ref_end; + struct fscrypt_str name = { 0 }; +@@ -1441,8 +1438,8 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + if (ret) + goto out; + +- ret = inode_in_dir(root, path, btrfs_ino(BTRFS_I(dir)), +- btrfs_ino(BTRFS_I(inode)), ref_index, &name); ++ ret = inode_in_dir(root, path, btrfs_ino(dir), btrfs_ino(inode), ++ ref_index, &name); + if (ret < 0) { + goto out; + } else if (ret == 0) { +@@ -1453,8 +1450,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + * overwrite any existing back reference, and we don't + * want to create dangling pointers in the directory. + */ +- ret = __add_inode_ref(trans, root, path, log, +- BTRFS_I(dir), BTRFS_I(inode), ++ ret = __add_inode_ref(trans, root, path, log, dir, inode, + inode_objectid, parent_objectid, + ref_index, &name); + if (ret) { +@@ -1464,12 +1460,11 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + } + + /* insert our name */ +- ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode), +- &name, 0, ref_index); ++ ret = btrfs_add_link(trans, dir, inode, &name, 0, ref_index); + if (ret) + goto out; + +- ret = btrfs_update_inode(trans, BTRFS_I(inode)); ++ ret = btrfs_update_inode(trans, inode); + if (ret) + goto out; + } +@@ -1479,7 +1474,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + kfree(name.name); + name.name = NULL; + if (log_ref_ver) { +- iput(dir); ++ iput(&dir->vfs_inode); + dir = NULL; + } + } +@@ -1492,8 +1487,7 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + * dir index entries exist for a name but there is no inode reference + * item with the same name. 
+ */ +- ret = unlink_old_inode_refs(trans, root, path, BTRFS_I(inode), eb, slot, +- key); ++ ret = unlink_old_inode_refs(trans, root, path, inode, eb, slot, key); + if (ret) + goto out; + +@@ -1502,8 +1496,10 @@ static noinline int add_inode_ref(struct btrfs_trans_handle *trans, + out: + btrfs_release_path(path); + kfree(name.name); +- iput(dir); +- iput(inode); ++ if (dir) ++ iput(&dir->vfs_inode); ++ if (inode) ++ iput(&inode->vfs_inode); + return ret; + } + +@@ -1675,12 +1671,13 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans, + { + int ret; + struct btrfs_key key; +- struct inode *inode; + + key.objectid = BTRFS_TREE_LOG_FIXUP_OBJECTID; + key.type = BTRFS_ORPHAN_ITEM_KEY; + key.offset = (u64)-1; + while (1) { ++ struct btrfs_inode *inode; ++ + ret = btrfs_search_slot(trans, root, &key, path, -1, 1); + if (ret < 0) + break; +@@ -1708,8 +1705,8 @@ static noinline int fixup_inode_link_counts(struct btrfs_trans_handle *trans, + break; + } + +- ret = fixup_inode_link_count(trans, inode); +- iput(inode); ++ ret = fixup_inode_link_count(trans, &inode->vfs_inode); ++ iput(&inode->vfs_inode); + if (ret) + break; + +@@ -1737,12 +1734,14 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans, + { + struct btrfs_key key; + int ret = 0; +- struct inode *inode; ++ struct btrfs_inode *inode; ++ struct inode *vfs_inode; + + inode = read_one_inode(root, objectid); + if (!inode) + return -EIO; + ++ vfs_inode = &inode->vfs_inode; + key.objectid = BTRFS_TREE_LOG_FIXUP_OBJECTID; + key.type = BTRFS_ORPHAN_ITEM_KEY; + key.offset = objectid; +@@ -1751,15 +1750,15 @@ static noinline int link_to_fixup_dir(struct btrfs_trans_handle *trans, + + btrfs_release_path(path); + if (ret == 0) { +- if (!inode->i_nlink) +- set_nlink(inode, 1); ++ if (!vfs_inode->i_nlink) ++ set_nlink(vfs_inode, 1); + else +- inc_nlink(inode); +- ret = btrfs_update_inode(trans, BTRFS_I(inode)); ++ inc_nlink(vfs_inode); ++ ret = btrfs_update_inode(trans, inode); + } else if (ret == -EEXIST) { + ret = 0; + } +- iput(inode); ++ iput(vfs_inode); + + return ret; + } +@@ -1775,8 +1774,8 @@ static noinline int insert_one_name(struct btrfs_trans_handle *trans, + const struct fscrypt_str *name, + struct btrfs_key *location) + { +- struct inode *inode; +- struct inode *dir; ++ struct btrfs_inode *inode; ++ struct btrfs_inode *dir; + int ret; + + inode = read_one_inode(root, location->objectid); +@@ -1785,17 +1784,16 @@ static noinline int insert_one_name(struct btrfs_trans_handle *trans, + + dir = read_one_inode(root, dirid); + if (!dir) { +- iput(inode); ++ iput(&inode->vfs_inode); + return -EIO; + } + +- ret = btrfs_add_link(trans, BTRFS_I(dir), BTRFS_I(inode), name, +- 1, index); ++ ret = btrfs_add_link(trans, dir, inode, name, 1, index); + + /* FIXME, put inode into FIXUP list */ + +- iput(inode); +- iput(dir); ++ iput(&inode->vfs_inode); ++ iput(&dir->vfs_inode); + return ret; + } + +@@ -1857,7 +1855,7 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans, + bool index_dst_matches = false; + struct btrfs_key log_key; + struct btrfs_key search_key; +- struct inode *dir; ++ struct btrfs_inode *dir; + u8 log_flags; + bool exists; + int ret; +@@ -1887,9 +1885,8 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans, + ret = PTR_ERR(dir_dst_di); + goto out; + } else if (dir_dst_di) { +- ret = delete_conflicting_dir_entry(trans, BTRFS_I(dir), path, +- dir_dst_di, &log_key, +- log_flags, exists); ++ ret = delete_conflicting_dir_entry(trans, dir, path, dir_dst_di, ++ 
&log_key, log_flags, exists); + if (ret < 0) + goto out; + dir_dst_matches = (ret == 1); +@@ -1904,9 +1901,8 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans, + ret = PTR_ERR(index_dst_di); + goto out; + } else if (index_dst_di) { +- ret = delete_conflicting_dir_entry(trans, BTRFS_I(dir), path, +- index_dst_di, &log_key, +- log_flags, exists); ++ ret = delete_conflicting_dir_entry(trans, dir, path, index_dst_di, ++ &log_key, log_flags, exists); + if (ret < 0) + goto out; + index_dst_matches = (ret == 1); +@@ -1961,11 +1957,11 @@ static noinline int replay_one_name(struct btrfs_trans_handle *trans, + + out: + if (!ret && update_size) { +- btrfs_i_size_write(BTRFS_I(dir), dir->i_size + name.len * 2); +- ret = btrfs_update_inode(trans, BTRFS_I(dir)); ++ btrfs_i_size_write(dir, dir->vfs_inode.i_size + name.len * 2); ++ ret = btrfs_update_inode(trans, dir); + } + kfree(name.name); +- iput(dir); ++ iput(&dir->vfs_inode); + if (!ret && name_added) + ret = 1; + return ret; +@@ -2122,16 +2118,16 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans, + struct btrfs_root *log, + struct btrfs_path *path, + struct btrfs_path *log_path, +- struct inode *dir, ++ struct btrfs_inode *dir, + struct btrfs_key *dir_key) + { +- struct btrfs_root *root = BTRFS_I(dir)->root; ++ struct btrfs_root *root = dir->root; + int ret; + struct extent_buffer *eb; + int slot; + struct btrfs_dir_item *di; + struct fscrypt_str name = { 0 }; +- struct inode *inode = NULL; ++ struct btrfs_inode *inode = NULL; + struct btrfs_key location; + + /* +@@ -2178,9 +2174,8 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans, + if (ret) + goto out; + +- inc_nlink(inode); +- ret = unlink_inode_for_log_replay(trans, BTRFS_I(dir), BTRFS_I(inode), +- &name); ++ inc_nlink(&inode->vfs_inode); ++ ret = unlink_inode_for_log_replay(trans, dir, inode, &name); + /* + * Unlike dir item keys, dir index keys can only have one name (entry) in + * them, as there are no key collisions since each key has a unique offset +@@ -2190,7 +2185,8 @@ static noinline int check_item_in_log(struct btrfs_trans_handle *trans, + btrfs_release_path(path); + btrfs_release_path(log_path); + kfree(name.name); +- iput(inode); ++ if (inode) ++ iput(&inode->vfs_inode); + return ret; + } + +@@ -2314,7 +2310,7 @@ static noinline int replay_dir_deletes(struct btrfs_trans_handle *trans, + struct btrfs_key dir_key; + struct btrfs_key found_key; + struct btrfs_path *log_path; +- struct inode *dir; ++ struct btrfs_inode *dir; + + dir_key.objectid = dirid; + dir_key.type = BTRFS_DIR_INDEX_KEY; +@@ -2391,7 +2387,7 @@ static noinline int replay_dir_deletes(struct btrfs_trans_handle *trans, + out: + btrfs_release_path(path); + btrfs_free_path(log_path); +- iput(dir); ++ iput(&dir->vfs_inode); + return ret; + } + +@@ -2485,7 +2481,7 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb, + */ + if (S_ISREG(mode)) { + struct btrfs_drop_extents_args drop_args = { 0 }; +- struct inode *inode; ++ struct btrfs_inode *inode; + u64 from; + + inode = read_one_inode(root, key.objectid); +@@ -2493,22 +2489,20 @@ static int replay_one_buffer(struct btrfs_root *log, struct extent_buffer *eb, + ret = -EIO; + break; + } +- from = ALIGN(i_size_read(inode), ++ from = ALIGN(i_size_read(&inode->vfs_inode), + root->fs_info->sectorsize); + drop_args.start = from; + drop_args.end = (u64)-1; + drop_args.drop_cache = true; +- ret = btrfs_drop_extents(wc->trans, root, +- BTRFS_I(inode), ++ ret = 
btrfs_drop_extents(wc->trans, root, inode, + &drop_args); + if (!ret) { +- inode_sub_bytes(inode, ++ inode_sub_bytes(&inode->vfs_inode, + drop_args.bytes_found); + /* Update the inode's nbytes. */ +- ret = btrfs_update_inode(wc->trans, +- BTRFS_I(inode)); ++ ret = btrfs_update_inode(wc->trans, inode); + } +- iput(inode); ++ iput(&inode->vfs_inode); + if (ret) + break; + } +-- +2.39.5 + diff --git a/queue-6.12/btrfs-use-btrfs_record_snapshot_destroy-during-rmdir.patch b/queue-6.12/btrfs-use-btrfs_record_snapshot_destroy-during-rmdir.patch new file mode 100644 index 0000000000..31ed9ffe79 --- /dev/null +++ b/queue-6.12/btrfs-use-btrfs_record_snapshot_destroy-during-rmdir.patch @@ -0,0 +1,46 @@ +From 142edc58fc10c25f258b5d9bc96e3c79101f87a9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 20 Jun 2025 16:37:01 +0100 +Subject: btrfs: use btrfs_record_snapshot_destroy() during rmdir + +From: Filipe Manana + +[ Upstream commit 157501b0469969fc1ba53add5049575aadd79d80 ] + +We are setting the parent directory's last_unlink_trans directly which +may result in a concurrent task starting to log the directory not see the +update and therefore can log the directory after we removed a child +directory which had a snapshot within instead of falling back to a +transaction commit. Replaying such a log tree would result in a mount +failure since we can't currently delete snapshots (and subvolumes) during +log replay. This is the type of failure described in commit 1ec9a1ae1e30 +("Btrfs: fix unreplayable log after snapshot delete + parent dir fsync"). + +Fix this by using btrfs_record_snapshot_destroy() which updates the +last_unlink_trans field while holding the inode's log_mutex lock. + +Fixes: 44f714dae50a ("Btrfs: improve performance on fsync against new inode after rename/unlink") +Reviewed-by: Johannes Thumshirn +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/inode.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/fs/btrfs/inode.c b/fs/btrfs/inode.c +index 345e86fd844df..f84e3f9fad84a 100644 +--- a/fs/btrfs/inode.c ++++ b/fs/btrfs/inode.c +@@ -4774,7 +4774,7 @@ static int btrfs_rmdir(struct inode *dir, struct dentry *dentry) + * deletes for directory foo. + */ + if (BTRFS_I(inode)->last_unlink_trans >= trans->transid) +- BTRFS_I(dir)->last_unlink_trans = BTRFS_I(inode)->last_unlink_trans; ++ btrfs_record_snapshot_destroy(trans, BTRFS_I(dir)); + + if (unlikely(btrfs_ino(BTRFS_I(inode)) == BTRFS_EMPTY_SUBVOL_DIR_OBJECTID)) { + ret = btrfs_unlink_subvol(trans, BTRFS_I(dir), dentry); +-- +2.39.5 + diff --git a/queue-6.12/crypto-iaa-do-not-clobber-req-base.data.patch b/queue-6.12/crypto-iaa-do-not-clobber-req-base.data.patch new file mode 100644 index 0000000000..8799d8efe3 --- /dev/null +++ b/queue-6.12/crypto-iaa-do-not-clobber-req-base.data.patch @@ -0,0 +1,49 @@ +From a8d8d7e0bb500e73c8972c87dcaf7bcdcb95c1e0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 24 Mar 2025 12:04:18 +0800 +Subject: crypto: iaa - Do not clobber req->base.data + +From: Herbert Xu + +[ Upstream commit cc98d8ce934b99789d30421957fd6a20fffb1c22 ] + +The req->base.data field is for the user and must not be touched by +the driver, unless you save it first. + +The iaa driver doesn't seem to be using the req->base.data value +so just remove the assignment. 
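+
+For illustration only, a sketch of where such per-request driver state
+belongs instead (the type and helper names here are hypothetical, and
+acomp_request_ctx() is assumed from the internal acomp API; this is not
+the driver's actual code):
+
+    struct iaa_req_ctx {                    /* hypothetical */
+            struct idxd_desc *idxd_desc;    /* driver-owned state */
+    };
+
+    static void iaa_stash_desc(struct acomp_req *req,
+                               struct idxd_desc *desc)
+    {
+            struct iaa_req_ctx *rctx = acomp_request_ctx(req);
+
+            /* req->base.data is left untouched for the caller */
+            rctx->idxd_desc = desc;
+    }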
+ +Fixes: 09646c98d0bf ("crypto: iaa - Add irq support for the crypto async interface") +Signed-off-by: Herbert Xu +Signed-off-by: Sasha Levin +--- + drivers/crypto/intel/iaa/iaa_crypto_main.c | 6 ++---- + 1 file changed, 2 insertions(+), 4 deletions(-) + +diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c +index 711c6e8914978..df2728cccf8b3 100644 +--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c ++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c +@@ -1182,8 +1182,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + " src_addr %llx, dst_addr %llx\n", __func__, + active_compression_mode->name, + src_addr, dst_addr); +- } else if (ctx->async_mode) +- req->base.data = idxd_desc; ++ } + + dev_dbg(dev, "%s: compression mode %s," + " desc->src1_addr %llx, desc->src1_size %d," +@@ -1420,8 +1419,7 @@ static int iaa_decompress(struct crypto_tfm *tfm, struct acomp_req *req, + " src_addr %llx, dst_addr %llx\n", __func__, + active_compression_mode->name, + src_addr, dst_addr); +- } else if (ctx->async_mode && !disable_async) +- req->base.data = idxd_desc; ++ } + + dev_dbg(dev, "%s: decompression mode %s," + " desc->src1_addr %llx, desc->src1_size %d," +-- +2.39.5 + diff --git a/queue-6.12/crypto-iaa-remove-dst_null-support.patch b/queue-6.12/crypto-iaa-remove-dst_null-support.patch new file mode 100644 index 0000000000..552a865d02 --- /dev/null +++ b/queue-6.12/crypto-iaa-remove-dst_null-support.patch @@ -0,0 +1,247 @@ +From 465fb3be1899150fc92a4902b6796645a90a2e44 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 15 Mar 2025 18:30:24 +0800 +Subject: crypto: iaa - Remove dst_null support + +From: Herbert Xu + +[ Upstream commit 02c974294c740bfb747ec64933e12148eb3d99e1 ] + +Remove the unused dst_null support. 
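+
+With this support gone, users of the driver must always supply a
+destination scatterlist themselves, e.g. (a sketch; dst_sgl and the
+lengths are placeholders):
+
+    acomp_request_set_params(req, src_sgl, dst_sgl, slen, dlen);
+
+instead of passing a NULL req->dst for the driver to allocate.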
+ +Signed-off-by: Herbert Xu +Stable-dep-of: cc98d8ce934b ("crypto: iaa - Do not clobber req->base.data") +Signed-off-by: Sasha Levin +--- + drivers/crypto/intel/iaa/iaa_crypto_main.c | 136 +-------------------- + 1 file changed, 6 insertions(+), 130 deletions(-) + +diff --git a/drivers/crypto/intel/iaa/iaa_crypto_main.c b/drivers/crypto/intel/iaa/iaa_crypto_main.c +index e1f60f0f507c9..711c6e8914978 100644 +--- a/drivers/crypto/intel/iaa/iaa_crypto_main.c ++++ b/drivers/crypto/intel/iaa/iaa_crypto_main.c +@@ -1126,8 +1126,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + struct idxd_wq *wq, + dma_addr_t src_addr, unsigned int slen, + dma_addr_t dst_addr, unsigned int *dlen, +- u32 *compression_crc, +- bool disable_async) ++ u32 *compression_crc) + { + struct iaa_device_compression_mode *active_compression_mode; + struct iaa_compression_ctx *ctx = crypto_tfm_ctx(tfm); +@@ -1170,7 +1169,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + desc->src2_size = sizeof(struct aecs_comp_table_record); + desc->completion_addr = idxd_desc->compl_dma; + +- if (ctx->use_irq && !disable_async) { ++ if (ctx->use_irq) { + desc->flags |= IDXD_OP_FLAG_RCI; + + idxd_desc->crypto.req = req; +@@ -1183,7 +1182,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + " src_addr %llx, dst_addr %llx\n", __func__, + active_compression_mode->name, + src_addr, dst_addr); +- } else if (ctx->async_mode && !disable_async) ++ } else if (ctx->async_mode) + req->base.data = idxd_desc; + + dev_dbg(dev, "%s: compression mode %s," +@@ -1204,7 +1203,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + update_total_comp_calls(); + update_wq_comp_calls(wq); + +- if (ctx->async_mode && !disable_async) { ++ if (ctx->async_mode) { + ret = -EINPROGRESS; + dev_dbg(dev, "%s: returning -EINPROGRESS\n", __func__); + goto out; +@@ -1224,7 +1223,7 @@ static int iaa_compress(struct crypto_tfm *tfm, struct acomp_req *req, + + *compression_crc = idxd_desc->iax_completion->crc; + +- if (!ctx->async_mode || disable_async) ++ if (!ctx->async_mode) + idxd_free_desc(wq, idxd_desc); + out: + return ret; +@@ -1490,13 +1489,11 @@ static int iaa_comp_acompress(struct acomp_req *req) + struct iaa_compression_ctx *compression_ctx; + struct crypto_tfm *tfm = req->base.tfm; + dma_addr_t src_addr, dst_addr; +- bool disable_async = false; + int nr_sgs, cpu, ret = 0; + struct iaa_wq *iaa_wq; + u32 compression_crc; + struct idxd_wq *wq; + struct device *dev; +- int order = -1; + + compression_ctx = crypto_tfm_ctx(tfm); + +@@ -1526,21 +1523,6 @@ static int iaa_comp_acompress(struct acomp_req *req) + + iaa_wq = idxd_wq_get_private(wq); + +- if (!req->dst) { +- gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? 
GFP_KERNEL : GFP_ATOMIC; +- +- /* incompressible data will always be < 2 * slen */ +- req->dlen = 2 * req->slen; +- order = order_base_2(round_up(req->dlen, PAGE_SIZE) / PAGE_SIZE); +- req->dst = sgl_alloc_order(req->dlen, order, false, flags, NULL); +- if (!req->dst) { +- ret = -ENOMEM; +- order = -1; +- goto out; +- } +- disable_async = true; +- } +- + dev = &wq->idxd->pdev->dev; + + nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); +@@ -1570,7 +1552,7 @@ static int iaa_comp_acompress(struct acomp_req *req) + req->dst, req->dlen, sg_dma_len(req->dst)); + + ret = iaa_compress(tfm, req, wq, src_addr, req->slen, dst_addr, +- &req->dlen, &compression_crc, disable_async); ++ &req->dlen, &compression_crc); + if (ret == -EINPROGRESS) + return ret; + +@@ -1601,100 +1583,6 @@ static int iaa_comp_acompress(struct acomp_req *req) + out: + iaa_wq_put(wq); + +- if (order >= 0) +- sgl_free_order(req->dst, order); +- +- return ret; +-} +- +-static int iaa_comp_adecompress_alloc_dest(struct acomp_req *req) +-{ +- gfp_t flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP ? +- GFP_KERNEL : GFP_ATOMIC; +- struct crypto_tfm *tfm = req->base.tfm; +- dma_addr_t src_addr, dst_addr; +- int nr_sgs, cpu, ret = 0; +- struct iaa_wq *iaa_wq; +- struct device *dev; +- struct idxd_wq *wq; +- int order = -1; +- +- cpu = get_cpu(); +- wq = wq_table_next_wq(cpu); +- put_cpu(); +- if (!wq) { +- pr_debug("no wq configured for cpu=%d\n", cpu); +- return -ENODEV; +- } +- +- ret = iaa_wq_get(wq); +- if (ret) { +- pr_debug("no wq available for cpu=%d\n", cpu); +- return -ENODEV; +- } +- +- iaa_wq = idxd_wq_get_private(wq); +- +- dev = &wq->idxd->pdev->dev; +- +- nr_sgs = dma_map_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); +- if (nr_sgs <= 0 || nr_sgs > 1) { +- dev_dbg(dev, "couldn't map src sg for iaa device %d," +- " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, +- iaa_wq->wq->id, ret); +- ret = -EIO; +- goto out; +- } +- src_addr = sg_dma_address(req->src); +- dev_dbg(dev, "dma_map_sg, src_addr %llx, nr_sgs %d, req->src %p," +- " req->slen %d, sg_dma_len(sg) %d\n", src_addr, nr_sgs, +- req->src, req->slen, sg_dma_len(req->src)); +- +- req->dlen = 4 * req->slen; /* start with ~avg comp rato */ +-alloc_dest: +- order = order_base_2(round_up(req->dlen, PAGE_SIZE) / PAGE_SIZE); +- req->dst = sgl_alloc_order(req->dlen, order, false, flags, NULL); +- if (!req->dst) { +- ret = -ENOMEM; +- order = -1; +- goto out; +- } +- +- nr_sgs = dma_map_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); +- if (nr_sgs <= 0 || nr_sgs > 1) { +- dev_dbg(dev, "couldn't map dst sg for iaa device %d," +- " wq %d: ret=%d\n", iaa_wq->iaa_device->idxd->id, +- iaa_wq->wq->id, ret); +- ret = -EIO; +- goto err_map_dst; +- } +- +- dst_addr = sg_dma_address(req->dst); +- dev_dbg(dev, "dma_map_sg, dst_addr %llx, nr_sgs %d, req->dst %p," +- " req->dlen %d, sg_dma_len(sg) %d\n", dst_addr, nr_sgs, +- req->dst, req->dlen, sg_dma_len(req->dst)); +- ret = iaa_decompress(tfm, req, wq, src_addr, req->slen, +- dst_addr, &req->dlen, true); +- if (ret == -EOVERFLOW) { +- dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); +- req->dlen *= 2; +- if (req->dlen > CRYPTO_ACOMP_DST_MAX) +- goto err_map_dst; +- goto alloc_dest; +- } +- +- if (ret != 0) +- dev_dbg(dev, "asynchronous decompress failed ret=%d\n", ret); +- +- dma_unmap_sg(dev, req->dst, sg_nents(req->dst), DMA_FROM_DEVICE); +-err_map_dst: +- dma_unmap_sg(dev, req->src, sg_nents(req->src), DMA_TO_DEVICE); +-out: +- iaa_wq_put(wq); +- +- if (order >= 0) +- 
sgl_free_order(req->dst, order);
+-
+ return ret;
+ }
+
+@@ -1717,9 +1605,6 @@ static int iaa_comp_adecompress(struct acomp_req *req)
+ return -EINVAL;
+ }
+
+- if (!req->dst)
+- return iaa_comp_adecompress_alloc_dest(req);
+-
+ cpu = get_cpu();
+ wq = wq_table_next_wq(cpu);
+ put_cpu();
+@@ -1800,19 +1685,10 @@ static int iaa_comp_init_fixed(struct crypto_acomp *acomp_tfm)
+ return 0;
+ }
+
+-static void dst_free(struct scatterlist *sgl)
+-{
+- /*
+- * Called for req->dst = NULL cases but we free elsewhere
+- * using sgl_free_order().
+- */
+-}
+-
+ static struct acomp_alg iaa_acomp_fixed_deflate = {
+ .init = iaa_comp_init_fixed,
+ .compress = iaa_comp_acompress,
+ .decompress = iaa_comp_adecompress,
+- .dst_free = dst_free,
+ .base = {
+ .cra_name = "deflate",
+ .cra_driver_name = "deflate-iaa",
+--
+2.39.5
+
diff --git a/queue-6.12/crypto-powerpc-poly1305-add-depends-on-broken-for-no.patch b/queue-6.12/crypto-powerpc-poly1305-add-depends-on-broken-for-no.patch
new file mode 100644
index 0000000000..312940dd6a
--- /dev/null
+++ b/queue-6.12/crypto-powerpc-poly1305-add-depends-on-broken-for-no.patch
@@ -0,0 +1,55 @@
+From c5eef0e2402ded238d0a8469208de6080139fd9e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 20 May 2025 10:39:29 +0800
+Subject: crypto: powerpc/poly1305 - add depends on BROKEN for now
+
+From: Eric Biggers
+
+[ Upstream commit bc8169003b41e89fe7052e408cf9fdbecb4017fe ]
+
+As discussed in the thread containing
+https://lore.kernel.org/linux-crypto/20250510053308.GB505731@sol/, the
+Power10-optimized Poly1305 code is currently not safe to call in softirq
+context. Disable it for now. It can be re-enabled once it is fixed.
+
+Fixes: ba8f8624fde2 ("crypto: poly1305-p10 - Glue code for optmized Poly1305 implementation for ppc64le")
+Cc: stable@vger.kernel.org
+Signed-off-by: Eric Biggers
+Signed-off-by: Herbert Xu
+Signed-off-by: Sasha Levin
+---
+ arch/powerpc/lib/crypto/Kconfig | 22 ++++++++++++++++++++++
+ 1 file changed, 22 insertions(+)
+ create mode 100644 arch/powerpc/lib/crypto/Kconfig
+
+diff --git a/arch/powerpc/lib/crypto/Kconfig b/arch/powerpc/lib/crypto/Kconfig
+new file mode 100644
+index 0000000000000..3f9e1bbd9905b
+--- /dev/null
++++ b/arch/powerpc/lib/crypto/Kconfig
+@@ -0,0 +1,22 @@
++# SPDX-License-Identifier: GPL-2.0-only
++
++config CRYPTO_CHACHA20_P10
++ tristate
++ depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
++ default CRYPTO_LIB_CHACHA
++ select CRYPTO_LIB_CHACHA_GENERIC
++ select CRYPTO_ARCH_HAVE_LIB_CHACHA
++
++config CRYPTO_POLY1305_P10
++ tristate
++ depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
++ depends on BROKEN # Needs to be fixed to work in softirq context
++ default CRYPTO_LIB_POLY1305
++ select CRYPTO_ARCH_HAVE_LIB_POLY1305
++ select CRYPTO_LIB_POLY1305_GENERIC
++
++config CRYPTO_SHA256_PPC_SPE
++ tristate
++ depends on SPE
++ default CRYPTO_LIB_SHA256
++ select CRYPTO_ARCH_HAVE_LIB_SHA256
+--
+2.39.5
+
diff --git a/queue-6.12/crypto-zynqmp-sha-add-locking.patch b/queue-6.12/crypto-zynqmp-sha-add-locking.patch
new file mode 100644
index 0000000000..c20b2d8136
--- /dev/null
+++ b/queue-6.12/crypto-zynqmp-sha-add-locking.patch
@@ -0,0 +1,81 @@
+From 6b7cd1895066517c174f5950622a3da3e6e9919f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 12 Apr 2025 18:57:22 +0800
+Subject: crypto: zynqmp-sha - Add locking
+
+From: Herbert Xu
+
+[ Upstream commit c7e68043620e0d5f89a37e573c667beab72d2937 ]
+
+The hardware is only capable of one hash at a time, so add a lock
+to make sure that it isn't used concurrently.
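+
+A self-contained sketch of the serialization pattern applied below
+(function names are placeholders; scoped_guard() from linux/cleanup.h
+drops the lock on any exit from the guarded statement):
+
+    static DEFINE_SPINLOCK(hw_lock);
+
+    static int hw_digest(struct shash_desc *desc, const u8 *data,
+                         unsigned int len, u8 *out)
+    {
+            /* one hash at a time: serialize all hardware access */
+            scoped_guard(spinlock_bh, &hw_lock)
+                    return __hw_digest(desc, data, len, out);
+    }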
+ +Fixes: 7ecc3e34474b ("crypto: xilinx - Add Xilinx SHA3 driver") +Signed-off-by: Herbert Xu +Signed-off-by: Sasha Levin +--- + drivers/crypto/xilinx/zynqmp-sha.c | 18 ++++++++++++++---- + 1 file changed, 14 insertions(+), 4 deletions(-) + +diff --git a/drivers/crypto/xilinx/zynqmp-sha.c b/drivers/crypto/xilinx/zynqmp-sha.c +index 1bcec6f46c9c7..9b5345068604f 100644 +--- a/drivers/crypto/xilinx/zynqmp-sha.c ++++ b/drivers/crypto/xilinx/zynqmp-sha.c +@@ -3,18 +3,19 @@ + * Xilinx ZynqMP SHA Driver. + * Copyright (c) 2022 Xilinx Inc. + */ +-#include + #include + #include + #include +-#include ++#include ++#include + #include + #include ++#include + #include +-#include + #include + #include + #include ++#include + #include + + #define ZYNQMP_DMA_BIT_MASK 32U +@@ -43,6 +44,8 @@ struct zynqmp_sha_desc_ctx { + static dma_addr_t update_dma_addr, final_dma_addr; + static char *ubuf, *fbuf; + ++static DEFINE_SPINLOCK(zynqmp_sha_lock); ++ + static int zynqmp_sha_init_tfm(struct crypto_shash *hash) + { + const char *fallback_driver_name = crypto_shash_alg_name(hash); +@@ -124,7 +127,8 @@ static int zynqmp_sha_export(struct shash_desc *desc, void *out) + return crypto_shash_export(&dctx->fbk_req, out); + } + +-static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) ++static int __zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, ++ unsigned int len, u8 *out) + { + unsigned int remaining_len = len; + int update_size; +@@ -159,6 +163,12 @@ static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned i + return ret; + } + ++static int zynqmp_sha_digest(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) ++{ ++ scoped_guard(spinlock_bh, &zynqmp_sha_lock) ++ return __zynqmp_sha_digest(desc, data, len, out); ++} ++ + static struct zynqmp_sha_drv_ctx sha3_drv_ctx = { + .sha3_384 = { + .init = zynqmp_sha_init, +-- +2.39.5 + diff --git a/queue-6.12/dpaa2-eth-fix-xdp_rxq_info-leak.patch b/queue-6.12/dpaa2-eth-fix-xdp_rxq_info-leak.patch new file mode 100644 index 0000000000..250511a7eb --- /dev/null +++ b/queue-6.12/dpaa2-eth-fix-xdp_rxq_info-leak.patch @@ -0,0 +1,101 @@ +From f2a8eb4a476cbdd7715ef34ff6674256e9ab5fa0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 26 Jun 2025 21:30:03 +0800 +Subject: dpaa2-eth: fix xdp_rxq_info leak + +From: Fushuai Wang + +[ Upstream commit 2def09ead4ad5907988b655d1e1454003aaf8297 ] + +The driver registered xdp_rxq_info structures via xdp_rxq_info_reg() +but failed to properly unregister them in error paths and during +removal. 
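+
+The invariant the fix restores, sketched with the core XDP helpers
+(variable names are illustrative; the dpaa2 specifics are elided):
+
+    err = xdp_rxq_info_reg(&rxq, netdev, queue_index, 0);
+    if (err)
+            return err;
+
+    err = xdp_rxq_info_reg_mem_model(&rxq, MEM_TYPE_PAGE_ORDER0, NULL);
+    if (err) {
+            /* must not leak the registration on this error path */
+            xdp_rxq_info_unreg(&rxq);
+            return err;
+    }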
+ +Fixes: d678be1dc1ec ("dpaa2-eth: add XDP_REDIRECT support") +Signed-off-by: Fushuai Wang +Reviewed-by: Simon Horman +Reviewed-by: Ioana Ciornei +Link: https://patch.msgid.link/20250626133003.80136-1-wangfushuai@baidu.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + .../net/ethernet/freescale/dpaa2/dpaa2-eth.c | 26 +++++++++++++++++-- + 1 file changed, 24 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +index 29886a8ba73f3..efd0048acd3b2 100644 +--- a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c ++++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c +@@ -3928,6 +3928,7 @@ static int dpaa2_eth_setup_rx_flow(struct dpaa2_eth_priv *priv, + MEM_TYPE_PAGE_ORDER0, NULL); + if (err) { + dev_err(dev, "xdp_rxq_info_reg_mem_model failed\n"); ++ xdp_rxq_info_unreg(&fq->channel->xdp_rxq); + return err; + } + +@@ -4421,17 +4422,25 @@ static int dpaa2_eth_bind_dpni(struct dpaa2_eth_priv *priv) + return -EINVAL; + } + if (err) +- return err; ++ goto out; + } + + err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token, + DPNI_QUEUE_TX, &priv->tx_qdid); + if (err) { + dev_err(dev, "dpni_get_qdid() failed\n"); +- return err; ++ goto out; + } + + return 0; ++ ++out: ++ while (i--) { ++ if (priv->fq[i].type == DPAA2_RX_FQ && ++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq)) ++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq); ++ } ++ return err; + } + + /* Allocate rings for storing incoming frame descriptors */ +@@ -4814,6 +4823,17 @@ static void dpaa2_eth_del_ch_napi(struct dpaa2_eth_priv *priv) + } + } + ++static void dpaa2_eth_free_rx_xdp_rxq(struct dpaa2_eth_priv *priv) ++{ ++ int i; ++ ++ for (i = 0; i < priv->num_fqs; i++) { ++ if (priv->fq[i].type == DPAA2_RX_FQ && ++ xdp_rxq_info_is_reg(&priv->fq[i].channel->xdp_rxq)) ++ xdp_rxq_info_unreg(&priv->fq[i].channel->xdp_rxq); ++ } ++} ++ + static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev) + { + struct device *dev; +@@ -5017,6 +5037,7 @@ static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev) + free_percpu(priv->percpu_stats); + err_alloc_percpu_stats: + dpaa2_eth_del_ch_napi(priv); ++ dpaa2_eth_free_rx_xdp_rxq(priv); + err_bind: + dpaa2_eth_free_dpbps(priv); + err_dpbp_setup: +@@ -5069,6 +5090,7 @@ static void dpaa2_eth_remove(struct fsl_mc_device *ls_dev) + free_percpu(priv->percpu_extras); + + dpaa2_eth_del_ch_napi(priv); ++ dpaa2_eth_free_rx_xdp_rxq(priv); + dpaa2_eth_free_dpbps(priv); + dpaa2_eth_free_dpio(priv); + dpaa2_eth_free_dpni(priv); +-- +2.39.5 + diff --git a/queue-6.12/drm-amd-display-add-more-checks-for-dsc-hubp-ono-gua.patch b/queue-6.12/drm-amd-display-add-more-checks-for-dsc-hubp-ono-gua.patch new file mode 100644 index 0000000000..9816a5fcac --- /dev/null +++ b/queue-6.12/drm-amd-display-add-more-checks-for-dsc-hubp-ono-gua.patch @@ -0,0 +1,90 @@ +From 6055c38bc790da6a6fe4a0cd6f46ad6db5f66f2d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 21 May 2025 16:40:25 -0400 +Subject: drm/amd/display: Add more checks for DSC / HUBP ONO guarantees + +From: Nicholas Kazlauskas + +[ Upstream commit 0d57dd1765d311111d9885346108c4deeae1deb4 ] + +[WHY] +For non-zero DSC instances it's possible that the HUBP domain required +to drive it for sequential ONO ASICs isn't met, potentially causing +the logic to the tile to enter an undefined state leading to a system +hang. + +[HOW] +Add more checks to ensure that the HUBP domain matching the DSC instance +is appropriately powered. 
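+
+The added rule, distilled into pseudo-C (keep[] stands in for the
+driver's update_state->pg_pipe_res_update bookkeeping, where "keep"
+means the domain must not be gated):
+
+    if (pipe->stream_res.dsc) {
+            int dsc_inst = pipe->stream_res.dsc->inst;
+
+            keep[PG_HUBP][dsc_inst] = true;
+            keep[PG_DPP][dsc_inst] = true;
+
+            /* DSC fed by a different HUBP: keep every domain powered */
+            if (pipe->plane_res.hubp &&
+                pipe->plane_res.hubp->inst != dsc_inst) {
+                    for (j = 0; j < pipe_count; j++) {
+                            keep[PG_HUBP][j] = true;
+                            keep[PG_DPP][j] = true;
+                    }
+            }
+    }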
+ +Cc: Mario Limonciello +Cc: Alex Deucher +Reviewed-by: Duncan Ma +Signed-off-by: Nicholas Kazlauskas +Signed-off-by: Alex Hung +Tested-by: Daniel Wheeler +Signed-off-by: Alex Deucher +(cherry picked from commit da63df07112e5a9857a8d2aaa04255c4206754ec) +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + .../amd/display/dc/hwss/dcn35/dcn35_hwseq.c | 35 ++++++++++++++++++- + 1 file changed, 34 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c +index ca446e08f6a27..21aff7fa6375d 100644 +--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c +@@ -1019,8 +1019,22 @@ void dcn35_calc_blocks_to_gate(struct dc *dc, struct dc_state *context, + if (pipe_ctx->plane_res.dpp || pipe_ctx->stream_res.opp) + update_state->pg_pipe_res_update[PG_MPCC][pipe_ctx->plane_res.mpcc_inst] = false; + +- if (pipe_ctx->stream_res.dsc) ++ if (pipe_ctx->stream_res.dsc) { + update_state->pg_pipe_res_update[PG_DSC][pipe_ctx->stream_res.dsc->inst] = false; ++ if (dc->caps.sequential_ono) { ++ update_state->pg_pipe_res_update[PG_HUBP][pipe_ctx->stream_res.dsc->inst] = false; ++ update_state->pg_pipe_res_update[PG_DPP][pipe_ctx->stream_res.dsc->inst] = false; ++ ++ /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ ++ if (!pipe_ctx->top_pipe && pipe_ctx->plane_res.hubp && ++ pipe_ctx->plane_res.hubp->inst != pipe_ctx->stream_res.dsc->inst) { ++ for (j = 0; j < dc->res_pool->pipe_count; ++j) { ++ update_state->pg_pipe_res_update[PG_HUBP][j] = false; ++ update_state->pg_pipe_res_update[PG_DPP][j] = false; ++ } ++ } ++ } ++ } + + if (pipe_ctx->stream_res.opp) + update_state->pg_pipe_res_update[PG_OPP][pipe_ctx->stream_res.opp->inst] = false; +@@ -1165,6 +1179,25 @@ void dcn35_calc_blocks_to_ungate(struct dc *dc, struct dc_state *context, + update_state->pg_pipe_res_update[PG_HDMISTREAM][0] = true; + + if (dc->caps.sequential_ono) { ++ for (i = 0; i < dc->res_pool->pipe_count; i++) { ++ struct pipe_ctx *new_pipe = &context->res_ctx.pipe_ctx[i]; ++ ++ if (new_pipe->stream_res.dsc && !new_pipe->top_pipe && ++ update_state->pg_pipe_res_update[PG_DSC][new_pipe->stream_res.dsc->inst]) { ++ update_state->pg_pipe_res_update[PG_HUBP][new_pipe->stream_res.dsc->inst] = true; ++ update_state->pg_pipe_res_update[PG_DPP][new_pipe->stream_res.dsc->inst] = true; ++ ++ /* All HUBP/DPP instances must be powered if the DSC inst != HUBP inst */ ++ if (new_pipe->plane_res.hubp && ++ new_pipe->plane_res.hubp->inst != new_pipe->stream_res.dsc->inst) { ++ for (j = 0; j < dc->res_pool->pipe_count; ++j) { ++ update_state->pg_pipe_res_update[PG_HUBP][j] = true; ++ update_state->pg_pipe_res_update[PG_DPP][j] = true; ++ } ++ } ++ } ++ } ++ + for (i = dc->res_pool->pipe_count - 1; i >= 0; i--) { + if (update_state->pg_pipe_res_update[PG_HUBP][i] && + update_state->pg_pipe_res_update[PG_DPP][i]) { +-- +2.39.5 + diff --git a/queue-6.12/drm-amdgpu-add-kicker-fws-loading-for-gfx11-smu13-ps.patch b/queue-6.12/drm-amdgpu-add-kicker-fws-loading-for-gfx11-smu13-ps.patch new file mode 100644 index 0000000000..277389b74e --- /dev/null +++ b/queue-6.12/drm-amdgpu-add-kicker-fws-loading-for-gfx11-smu13-ps.patch @@ -0,0 +1,155 @@ +From 31dc1c0f45ac1329b4701d037e0356ed68d7fbd6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 6 Jul 2025 00:52:10 -0400 +Subject: drm/amdgpu: add kicker fws loading for gfx11/smu13/psp13 + +From: Frank Min + +[ Upstream 
commit 854171405e7f093532b33d8ed0875b9e34fc55b4 ] + +1. Add kicker firmwares loading for gfx11/smu13/psp13 +2. Register additional MODULE_FIRMWARE entries for kicker fws + - gc_11_0_0_rlc_kicker.bin + - gc_11_0_0_imu_kicker.bin + - psp_13_0_0_sos_kicker.bin + - psp_13_0_0_ta_kicker.bin + - smu_13_0_0_kicker.bin + +Signed-off-by: Frank Min +Reviewed-by: Hawking Zhang +Signed-off-by: Alex Deucher +(cherry picked from commit fb5ec2174d70a8989bc207d257db90ffeca3b163) +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c | 10 ++++++++-- + drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c | 4 ++++ + drivers/gpu/drm/amd/amdgpu/imu_v11_0.c | 6 +++++- + drivers/gpu/drm/amd/amdgpu/psp_v13_0.c | 2 ++ + drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c | 8 ++++++-- + 5 files changed, 25 insertions(+), 5 deletions(-) + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +index 48e30e5f83389..3d42f6c3308ed 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_psp.c +@@ -3430,7 +3430,10 @@ int psp_init_sos_microcode(struct psp_context *psp, const char *chip_name) + uint8_t *ucode_array_start_addr; + int err = 0; + +- err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos.bin", chip_name); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos_kicker.bin", chip_name); ++ else ++ err = amdgpu_ucode_request(adev, &adev->psp.sos_fw, "amdgpu/%s_sos.bin", chip_name); + if (err) + goto out; + +@@ -3672,7 +3675,10 @@ int psp_init_ta_microcode(struct psp_context *psp, const char *chip_name) + struct amdgpu_device *adev = psp->adev; + int err; + +- err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta.bin", chip_name); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta_kicker.bin", chip_name); ++ else ++ err = amdgpu_ucode_request(adev, &adev->psp.ta_fw, "amdgpu/%s_ta.bin", chip_name); + if (err) + return err; + +diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c +index 1f06b22dbe7c6..96e5c520af316 100644 +--- a/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/gfx_v11_0.c +@@ -84,6 +84,7 @@ MODULE_FIRMWARE("amdgpu/gc_11_0_0_pfp.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_0_me.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_0_mec.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc.bin"); ++MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_kicker.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_0_rlc_1.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_0_toc.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_1_pfp.bin"); +@@ -734,6 +735,9 @@ static int gfx_v11_0_init_microcode(struct amdgpu_device *adev) + adev->pdev->revision == 0xCE) + err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, + "amdgpu/gc_11_0_0_rlc_1.bin"); ++ else if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, ++ "amdgpu/%s_rlc_kicker.bin", ucode_prefix); + else + err = amdgpu_ucode_request(adev, &adev->gfx.rlc_fw, + "amdgpu/%s_rlc.bin", ucode_prefix); +diff --git a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c +index d4f72e47ae9e2..c4f5cbf1ecd7d 100644 +--- a/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/imu_v11_0.c +@@ -32,6 +32,7 @@ + #include "gc/gc_11_0_0_sh_mask.h" + + MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu.bin"); ++MODULE_FIRMWARE("amdgpu/gc_11_0_0_imu_kicker.bin"); + 
MODULE_FIRMWARE("amdgpu/gc_11_0_1_imu.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_2_imu.bin"); + MODULE_FIRMWARE("amdgpu/gc_11_0_3_imu.bin"); +@@ -50,7 +51,10 @@ static int imu_v11_0_init_microcode(struct amdgpu_device *adev) + DRM_DEBUG("\n"); + + amdgpu_ucode_ip_version_decode(adev, GC_HWIP, ucode_prefix, sizeof(ucode_prefix)); +- err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu.bin", ucode_prefix); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu_kicker.bin", ucode_prefix); ++ else ++ err = amdgpu_ucode_request(adev, &adev->gfx.imu_fw, "amdgpu/%s_imu.bin", ucode_prefix); + if (err) + goto out; + +diff --git a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c +index bf00de763acb0..124f74e862d7f 100644 +--- a/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/psp_v13_0.c +@@ -42,7 +42,9 @@ MODULE_FIRMWARE("amdgpu/psp_13_0_5_ta.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_8_toc.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_8_ta.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos.bin"); ++MODULE_FIRMWARE("amdgpu/psp_13_0_0_sos_kicker.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta.bin"); ++MODULE_FIRMWARE("amdgpu/psp_13_0_0_ta_kicker.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_7_sos.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_7_ta.bin"); + MODULE_FIRMWARE("amdgpu/psp_13_0_10_sos.bin"); +diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +index 4f78c84da780c..c5bca3019de07 100644 +--- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c ++++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c +@@ -58,6 +58,7 @@ + + MODULE_FIRMWARE("amdgpu/aldebaran_smc.bin"); + MODULE_FIRMWARE("amdgpu/smu_13_0_0.bin"); ++MODULE_FIRMWARE("amdgpu/smu_13_0_0_kicker.bin"); + MODULE_FIRMWARE("amdgpu/smu_13_0_7.bin"); + MODULE_FIRMWARE("amdgpu/smu_13_0_10.bin"); + +@@ -92,7 +93,7 @@ const int pmfw_decoded_link_width[7] = {0, 1, 2, 4, 8, 12, 16}; + int smu_v13_0_init_microcode(struct smu_context *smu) + { + struct amdgpu_device *adev = smu->adev; +- char ucode_prefix[15]; ++ char ucode_prefix[30]; + int err = 0; + const struct smc_firmware_header_v1_0 *hdr; + const struct common_firmware_header *header; +@@ -103,7 +104,10 @@ int smu_v13_0_init_microcode(struct smu_context *smu) + return 0; + + amdgpu_ucode_ip_version_decode(adev, MP1_HWIP, ucode_prefix, sizeof(ucode_prefix)); +- err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s.bin", ucode_prefix); ++ if (amdgpu_is_kicker_fw(adev)) ++ err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s_kicker.bin", ucode_prefix); ++ else ++ err = amdgpu_ucode_request(adev, &adev->pm.fw, "amdgpu/%s.bin", ucode_prefix); + if (err) + goto out; + +-- +2.39.5 + diff --git a/queue-6.12/drm-amdgpu-mes-add-missing-locking-in-helper-functio.patch b/queue-6.12/drm-amdgpu-mes-add-missing-locking-in-helper-functio.patch new file mode 100644 index 0000000000..9c96dfddc3 --- /dev/null +++ b/queue-6.12/drm-amdgpu-mes-add-missing-locking-in-helper-functio.patch @@ -0,0 +1,96 @@ +From 73bd4837e524d166b0e956b259db45a246a6a982 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 19 May 2025 15:46:25 -0400 +Subject: drm/amdgpu/mes: add missing locking in helper functions + +From: Alex Deucher + +[ Upstream commit 40f970ba7a4ab77be2ffe6d50a70416c8876496a ] + +We need to take the MES lock. 
+
+Reviewed-by: Michael Chen
+Signed-off-by: Alex Deucher
+Cc: stable@vger.kernel.org
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+index 7d4b540340e02..41b88e0ea98b8 100644
+--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mes.c
+@@ -860,7 +860,9 @@ int amdgpu_mes_map_legacy_queue(struct amdgpu_device *adev,
+ queue_input.mqd_addr = amdgpu_bo_gpu_offset(ring->mqd_obj);
+ queue_input.wptr_addr = ring->wptr_gpu_addr;
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->map_legacy_queue(&adev->mes, &queue_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to map legacy queue\n");
+
+@@ -883,7 +885,9 @@ int amdgpu_mes_unmap_legacy_queue(struct amdgpu_device *adev,
+ queue_input.trail_fence_addr = gpu_addr;
+ queue_input.trail_fence_data = seq;
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->unmap_legacy_queue(&adev->mes, &queue_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to unmap legacy queue\n");
+
+@@ -910,7 +914,9 @@ int amdgpu_mes_reset_legacy_queue(struct amdgpu_device *adev,
+ queue_input.vmid = vmid;
+ queue_input.use_mmio = use_mmio;
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->reset_legacy_queue(&adev->mes, &queue_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to reset legacy queue\n");
+
+@@ -931,7 +937,9 @@ uint32_t amdgpu_mes_rreg(struct amdgpu_device *adev, uint32_t reg)
+ goto error;
+ }
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to read reg (0x%x)\n", reg);
+ else
+@@ -957,7 +965,9 @@ int amdgpu_mes_wreg(struct amdgpu_device *adev,
+ goto error;
+ }
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to write reg (0x%x)\n", reg);
+
+@@ -984,7 +994,9 @@ int amdgpu_mes_reg_write_reg_wait(struct amdgpu_device *adev,
+ goto error;
+ }
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to reg_write_reg_wait\n");
+
+@@ -1009,7 +1021,9 @@ int amdgpu_mes_reg_wait(struct amdgpu_device *adev, uint32_t reg,
+ goto error;
+ }
+
++ amdgpu_mes_lock(&adev->mes);
+ r = adev->mes.funcs->misc_op(&adev->mes, &op_input);
++ amdgpu_mes_unlock(&adev->mes);
+ if (r)
+ DRM_ERROR("failed to reg_write_reg_wait\n");
+
+--
+2.39.5
+
diff --git a/queue-6.12/drm-amdgpu-vcn-v5_0_1-to-prevent-fw-checking-rb-duri.patch b/queue-6.12/drm-amdgpu-vcn-v5_0_1-to-prevent-fw-checking-rb-duri.patch
new file mode 100644
index 0000000000..6ae6b2ca86
--- /dev/null
+++ b/queue-6.12/drm-amdgpu-vcn-v5_0_1-to-prevent-fw-checking-rb-duri.patch
@@ -0,0 +1,1657 @@
+From b3ffbec74d2f76048f0f13bd28b82d39f32bf6a6 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 12 Jun 2025 11:01:08 -0400
+Subject: drm/amdgpu: VCN v5_0_1 to prevent FW checking RB during DPG pause
+
+From: Sonny Jiang
+
+[ Upstream commit 46e15197b513e60786a44107759d6ca293d6288c ]
+
+Add protection to ensure all programming is complete before the VCPU
+starts. This is a workaround (WA) for an unintended VCPU run.
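+
+The handshake at the core of the change, in brief (both flags come from
+the shared firmware structure programmed later in this patch):
+
+    /* resetting ring, fw should not check RB ring */
+    fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET;
+
+    /* ... program RB base/size and RPTR/WPTR ... */
+
+    /* resetting done, fw can check RB ring */
+    fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET |
+                                  FW_QUEUE_DPG_HOLD_OFF);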
+ +Signed-off-by: Sonny Jiang +Acked-by: Leo Liu +Reviewed-by: Ruijing Dong +Signed-off-by: Alex Deucher +(cherry picked from commit c29521b529fa5e225feaf709d863a636ca0cbbfa) +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c | 1624 +++++++++++++++++++++++ + 1 file changed, 1624 insertions(+) + create mode 100644 drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c + +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c +new file mode 100644 +index 0000000000000..cdefd7fcb0da6 +--- /dev/null ++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v5_0_1.c +@@ -0,0 +1,1624 @@ ++/* ++ * Copyright 2024 Advanced Micro Devices, Inc. All rights reserved. ++ * ++ * Permission is hereby granted, free of charge, to any person obtaining a ++ * copy of this software and associated documentation files (the "Software"), ++ * to deal in the Software without restriction, including without limitation ++ * the rights to use, copy, modify, merge, publish, distribute, sublicense, ++ * and/or sell copies of the Software, and to permit persons to whom the ++ * Software is furnished to do so, subject to the following conditions: ++ * ++ * The above copyright notice and this permission notice shall be included in ++ * all copies or substantial portions of the Software. ++ * ++ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR ++ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, ++ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL ++ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR ++ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ++ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR ++ * OTHER DEALINGS IN THE SOFTWARE. ++ * ++ */ ++ ++#include ++#include "amdgpu.h" ++#include "amdgpu_vcn.h" ++#include "amdgpu_pm.h" ++#include "soc15.h" ++#include "soc15d.h" ++#include "soc15_hw_ip.h" ++#include "vcn_v2_0.h" ++#include "vcn_v4_0_3.h" ++#include "mmsch_v5_0.h" ++ ++#include "vcn/vcn_5_0_0_offset.h" ++#include "vcn/vcn_5_0_0_sh_mask.h" ++#include "ivsrcid/vcn/irqsrcs_vcn_5_0.h" ++#include "vcn_v5_0_0.h" ++#include "vcn_v5_0_1.h" ++ ++#include ++ ++static int vcn_v5_0_1_start_sriov(struct amdgpu_device *adev); ++static void vcn_v5_0_1_set_unified_ring_funcs(struct amdgpu_device *adev); ++static void vcn_v5_0_1_set_irq_funcs(struct amdgpu_device *adev); ++static int vcn_v5_0_1_set_pg_state(struct amdgpu_vcn_inst *vinst, ++ enum amd_powergating_state state); ++static void vcn_v5_0_1_unified_ring_set_wptr(struct amdgpu_ring *ring); ++static void vcn_v5_0_1_set_ras_funcs(struct amdgpu_device *adev); ++/** ++ * vcn_v5_0_1_early_init - set function pointers and load microcode ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. 
++ * ++ * Set ring and irq function pointers ++ * Load microcode from filesystem ++ */ ++static int vcn_v5_0_1_early_init(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int i, r; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) ++ /* re-use enc ring as unified ring */ ++ adev->vcn.inst[i].num_enc_rings = 1; ++ ++ vcn_v5_0_1_set_unified_ring_funcs(adev); ++ vcn_v5_0_1_set_irq_funcs(adev); ++ vcn_v5_0_1_set_ras_funcs(adev); ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ adev->vcn.inst[i].set_pg_state = vcn_v5_0_1_set_pg_state; ++ ++ r = amdgpu_vcn_early_init(adev, i); ++ if (r) ++ return r; ++ } ++ ++ return 0; ++} ++ ++static void vcn_v5_0_1_fw_shared_init(struct amdgpu_device *adev, int inst_idx) ++{ ++ struct amdgpu_vcn5_fw_shared *fw_shared; ++ ++ fw_shared = adev->vcn.inst[inst_idx].fw_shared.cpu_addr; ++ ++ if (fw_shared->sq.is_enabled) ++ return; ++ fw_shared->present_flag_0 = ++ cpu_to_le32(AMDGPU_FW_SHARED_FLAG_0_UNIFIED_QUEUE); ++ fw_shared->sq.is_enabled = 1; ++ ++ if (amdgpu_vcnfw_log) ++ amdgpu_vcn_fwlog_init(&adev->vcn.inst[inst_idx]); ++} ++ ++/** ++ * vcn_v5_0_1_sw_init - sw init for VCN block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * ++ * Load firmware and sw initialization ++ */ ++static int vcn_v5_0_1_sw_init(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ struct amdgpu_ring *ring; ++ int i, r, vcn_inst; ++ ++ /* VCN UNIFIED TRAP */ ++ r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN, ++ VCN_5_0__SRCID__UVD_ENC_GENERAL_PURPOSE, &adev->vcn.inst->irq); ++ if (r) ++ return r; ++ ++ /* VCN POISON TRAP */ ++ r = amdgpu_irq_add_id(adev, SOC15_IH_CLIENTID_VCN, ++ VCN_5_0__SRCID_UVD_POISON, &adev->vcn.inst->ras_poison_irq); ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ vcn_inst = GET_INST(VCN, i); ++ ++ r = amdgpu_vcn_sw_init(adev, i); ++ if (r) ++ return r; ++ ++ amdgpu_vcn_setup_ucode(adev, i); ++ ++ r = amdgpu_vcn_resume(adev, i); ++ if (r) ++ return r; ++ ++ ring = &adev->vcn.inst[i].ring_enc[0]; ++ ring->use_doorbell = true; ++ if (!amdgpu_sriov_vf(adev)) ++ ring->doorbell_index = ++ (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + ++ 11 * vcn_inst; ++ else ++ ring->doorbell_index = ++ (adev->doorbell_index.vcn.vcn_ring0_1 << 1) + ++ 32 * vcn_inst; ++ ++ ring->vm_hub = AMDGPU_MMHUB0(adev->vcn.inst[i].aid_id); ++ sprintf(ring->name, "vcn_unified_%d", adev->vcn.inst[i].aid_id); ++ ++ r = amdgpu_ring_init(adev, ring, 512, &adev->vcn.inst[i].irq, 0, ++ AMDGPU_RING_PRIO_DEFAULT, &adev->vcn.inst[i].sched_score); ++ if (r) ++ return r; ++ ++ vcn_v5_0_1_fw_shared_init(adev, i); ++ } ++ ++ /* TODO: Add queue reset mask when FW fully supports it */ ++ adev->vcn.supported_reset = ++ amdgpu_get_soft_full_reset_mask(&adev->vcn.inst[0].ring_enc[0]); ++ ++ if (amdgpu_sriov_vf(adev)) { ++ r = amdgpu_virt_alloc_mm_table(adev); ++ if (r) ++ return r; ++ } ++ ++ vcn_v5_0_0_alloc_ip_dump(adev); ++ ++ return amdgpu_vcn_sysfs_reset_mask_init(adev); ++} ++ ++/** ++ * vcn_v5_0_1_sw_fini - sw fini for VCN block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. 
++ * ++ * VCN suspend and free up sw allocation ++ */ ++static int vcn_v5_0_1_sw_fini(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int i, r, idx; ++ ++ if (drm_dev_enter(adev_to_drm(adev), &idx)) { ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ volatile struct amdgpu_vcn5_fw_shared *fw_shared; ++ ++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr; ++ fw_shared->present_flag_0 = 0; ++ fw_shared->sq.is_enabled = 0; ++ } ++ ++ drm_dev_exit(idx); ++ } ++ ++ if (amdgpu_sriov_vf(adev)) ++ amdgpu_virt_free_mm_table(adev); ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ r = amdgpu_vcn_suspend(adev, i); ++ if (r) ++ return r; ++ } ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ r = amdgpu_vcn_sw_fini(adev, i); ++ if (r) ++ return r; ++ } ++ ++ amdgpu_vcn_sysfs_reset_mask_fini(adev); ++ ++ kfree(adev->vcn.ip_dump); ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_hw_init - start and test VCN block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * ++ * Initialize the hardware, boot up the VCPU and do some testing ++ */ ++static int vcn_v5_0_1_hw_init(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ struct amdgpu_ring *ring; ++ int i, r, vcn_inst; ++ ++ if (amdgpu_sriov_vf(adev)) { ++ r = vcn_v5_0_1_start_sriov(adev); ++ if (r) ++ return r; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ ring = &adev->vcn.inst[i].ring_enc[0]; ++ ring->wptr = 0; ++ ring->wptr_old = 0; ++ vcn_v5_0_1_unified_ring_set_wptr(ring); ++ ring->sched.ready = true; ++ } ++ } else { ++ if (RREG32_SOC15(VCN, GET_INST(VCN, 0), regVCN_RRMT_CNTL) & 0x100) ++ adev->vcn.caps |= AMDGPU_VCN_CAPS(RRMT_ENABLED); ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ vcn_inst = GET_INST(VCN, i); ++ ring = &adev->vcn.inst[i].ring_enc[0]; ++ ++ if (ring->use_doorbell) ++ adev->nbio.funcs->vcn_doorbell_range(adev, ring->use_doorbell, ++ ((adev->doorbell_index.vcn.vcn_ring0_1 << 1) + ++ 11 * vcn_inst), ++ adev->vcn.inst[i].aid_id); ++ ++ /* Re-init fw_shared, if required */ ++ vcn_v5_0_1_fw_shared_init(adev, i); ++ ++ r = amdgpu_ring_test_helper(ring); ++ if (r) ++ return r; ++ } ++ } ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_hw_fini - stop the hardware block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * ++ * Stop the VCN block, mark ring as not ready any more ++ */ ++static int vcn_v5_0_1_hw_fini(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int i; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[i]; ++ ++ cancel_delayed_work_sync(&adev->vcn.inst[i].idle_work); ++ if (vinst->cur_state != AMD_PG_STATE_GATE) ++ vinst->set_pg_state(vinst, AMD_PG_STATE_GATE); ++ } ++ ++ if (amdgpu_ras_is_supported(adev, AMDGPU_RAS_BLOCK__VCN)) ++ amdgpu_irq_put(adev, &adev->vcn.inst->ras_poison_irq, 0); ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_suspend - suspend VCN block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. 
++ * ++ * HW fini and suspend VCN block ++ */ ++static int vcn_v5_0_1_suspend(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int r, i; ++ ++ r = vcn_v5_0_1_hw_fini(ip_block); ++ if (r) ++ return r; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ r = amdgpu_vcn_suspend(ip_block->adev, i); ++ if (r) ++ return r; ++ } ++ ++ return r; ++} ++ ++/** ++ * vcn_v5_0_1_resume - resume VCN block ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * ++ * Resume firmware and hw init VCN block ++ */ ++static int vcn_v5_0_1_resume(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int r, i; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[i]; ++ ++ if (amdgpu_in_reset(adev)) ++ vinst->cur_state = AMD_PG_STATE_GATE; ++ ++ r = amdgpu_vcn_resume(ip_block->adev, i); ++ if (r) ++ return r; ++ } ++ ++ r = vcn_v5_0_1_hw_init(ip_block); ++ ++ return r; ++} ++ ++/** ++ * vcn_v5_0_1_mc_resume - memory controller programming ++ * ++ * @vinst: VCN instance ++ * ++ * Let the VCN memory controller know it's offsets ++ */ ++static void vcn_v5_0_1_mc_resume(struct amdgpu_vcn_inst *vinst) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int inst = vinst->inst; ++ uint32_t offset, size, vcn_inst; ++ const struct common_firmware_header *hdr; ++ ++ hdr = (const struct common_firmware_header *)adev->vcn.inst[inst].fw->data; ++ size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8); ++ ++ vcn_inst = GET_INST(VCN, inst); ++ /* cache window 0: fw */ ++ if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW, ++ (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + inst].tmr_mc_addr_lo)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH, ++ (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + inst].tmr_mc_addr_hi)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_OFFSET0, 0); ++ offset = 0; ++ } else { ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW, ++ lower_32_bits(adev->vcn.inst[inst].gpu_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH, ++ upper_32_bits(adev->vcn.inst[inst].gpu_addr)); ++ offset = size; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_OFFSET0, ++ AMDGPU_UVD_FIRMWARE_OFFSET >> 3); ++ } ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_SIZE0, size); ++ ++ /* cache window 1: stack */ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW, ++ lower_32_bits(adev->vcn.inst[inst].gpu_addr + offset)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH, ++ upper_32_bits(adev->vcn.inst[inst].gpu_addr + offset)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_OFFSET1, 0); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_SIZE1, AMDGPU_VCN_STACK_SIZE); ++ ++ /* cache window 2: context */ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW, ++ lower_32_bits(adev->vcn.inst[inst].gpu_addr + offset + AMDGPU_VCN_STACK_SIZE)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH, ++ upper_32_bits(adev->vcn.inst[inst].gpu_addr + offset + AMDGPU_VCN_STACK_SIZE)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_OFFSET2, 0); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_CACHE_SIZE2, AMDGPU_VCN_CONTEXT_SIZE); ++ ++ /* non-cache window */ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_NC0_64BIT_BAR_LOW, ++ lower_32_bits(adev->vcn.inst[inst].fw_shared.gpu_addr)); ++ 
WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_VCPU_NC0_64BIT_BAR_HIGH, ++ upper_32_bits(adev->vcn.inst[inst].fw_shared.gpu_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_NONCACHE_OFFSET0, 0); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_VCPU_NONCACHE_SIZE0, ++ AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_vcn5_fw_shared))); ++} ++ ++/** ++ * vcn_v5_0_1_mc_resume_dpg_mode - memory controller programming for dpg mode ++ * ++ * @vinst: VCN instance ++ * @indirect: indirectly write sram ++ * ++ * Let the VCN memory controller know it's offsets with dpg mode ++ */ ++static void vcn_v5_0_1_mc_resume_dpg_mode(struct amdgpu_vcn_inst *vinst, ++ bool indirect) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int inst_idx = vinst->inst; ++ uint32_t offset, size; ++ const struct common_firmware_header *hdr; ++ ++ hdr = (const struct common_firmware_header *)adev->vcn.inst[inst_idx].fw->data; ++ size = AMDGPU_GPU_PAGE_ALIGN(le32_to_cpu(hdr->ucode_size_bytes) + 8); ++ ++ /* cache window 0: fw */ ++ if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { ++ if (!indirect) { ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW), ++ (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + ++ inst_idx].tmr_mc_addr_lo), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH), ++ (adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + ++ inst_idx].tmr_mc_addr_hi), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET0), 0, 0, indirect); ++ } else { ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW), 0, 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH), 0, 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET0), 0, 0, indirect); ++ } ++ offset = 0; ++ } else { ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[inst_idx].gpu_addr), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[inst_idx].gpu_addr), 0, indirect); ++ offset = size; ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET0), ++ AMDGPU_UVD_FIRMWARE_OFFSET >> 3, 0, indirect); ++ } ++ ++ if (!indirect) ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_SIZE0), size, 0, indirect); ++ else ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_SIZE0), 0, 0, indirect); ++ ++ /* cache window 1: stack */ ++ if (!indirect) { ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[inst_idx].gpu_addr + offset), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[inst_idx].gpu_addr + offset), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET1), 0, 0, indirect); ++ } else { ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW), 0, 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH), 0, 0, 
indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET1), 0, 0, indirect); ++ } ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_SIZE1), AMDGPU_VCN_STACK_SIZE, 0, indirect); ++ ++ /* cache window 2: context */ ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[inst_idx].gpu_addr + offset + ++ AMDGPU_VCN_STACK_SIZE), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[inst_idx].gpu_addr + offset + ++ AMDGPU_VCN_STACK_SIZE), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_OFFSET2), 0, 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CACHE_SIZE2), AMDGPU_VCN_CONTEXT_SIZE, 0, indirect); ++ ++ /* non-cache window */ ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_NC0_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[inst_idx].fw_shared.gpu_addr), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_VCPU_NC0_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[inst_idx].fw_shared.gpu_addr), 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_NONCACHE_OFFSET0), 0, 0, indirect); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_NONCACHE_SIZE0), ++ AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_vcn5_fw_shared)), 0, indirect); ++ ++ /* VCN global tiling registers */ ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_GFX10_ADDR_CONFIG), adev->gfx.config.gb_addr_config, 0, indirect); ++} ++ ++/** ++ * vcn_v5_0_1_disable_clock_gating - disable VCN clock gating ++ * ++ * @vinst: VCN instance ++ * ++ * Disable clock gating for VCN block ++ */ ++static void vcn_v5_0_1_disable_clock_gating(struct amdgpu_vcn_inst *vinst) ++{ ++} ++ ++/** ++ * vcn_v5_0_1_enable_clock_gating - enable VCN clock gating ++ * ++ * @vinst: VCN instance ++ * ++ * Enable clock gating for VCN block ++ */ ++static void vcn_v5_0_1_enable_clock_gating(struct amdgpu_vcn_inst *vinst) ++{ ++} ++ ++/** ++ * vcn_v5_0_1_pause_dpg_mode - VCN pause with dpg mode ++ * ++ * @vinst: VCN instance ++ * @new_state: pause state ++ * ++ * Pause dpg mode for VCN block ++ */ ++static int vcn_v5_0_1_pause_dpg_mode(struct amdgpu_vcn_inst *vinst, ++ struct dpg_pause_state *new_state) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ uint32_t reg_data = 0; ++ int vcn_inst; ++ ++ vcn_inst = GET_INST(VCN, vinst->inst); ++ ++ /* pause/unpause if state is changed */ ++ if (vinst->pause_state.fw_based != new_state->fw_based) { ++ DRM_DEV_DEBUG(adev->dev, "dpg pause state changed %d -> %d %s\n", ++ vinst->pause_state.fw_based, new_state->fw_based, ++ new_state->fw_based ? 
"VCN_DPG_STATE__PAUSE" : "VCN_DPG_STATE__UNPAUSE"); ++ reg_data = RREG32_SOC15(VCN, vcn_inst, regUVD_DPG_PAUSE) & ++ (~UVD_DPG_PAUSE__NJ_PAUSE_DPG_ACK_MASK); ++ ++ if (new_state->fw_based == VCN_DPG_STATE__PAUSE) { ++ /* pause DPG */ ++ reg_data |= UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_DPG_PAUSE, reg_data); ++ ++ /* wait for ACK */ ++ SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_DPG_PAUSE, ++ UVD_DPG_PAUSE__NJ_PAUSE_DPG_ACK_MASK, ++ UVD_DPG_PAUSE__NJ_PAUSE_DPG_ACK_MASK); ++ } else { ++ /* unpause DPG, no need to wait */ ++ reg_data &= ~UVD_DPG_PAUSE__NJ_PAUSE_DPG_REQ_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_DPG_PAUSE, reg_data); ++ } ++ vinst->pause_state.fw_based = new_state->fw_based; ++ } ++ ++ return 0; ++} ++ ++ ++/** ++ * vcn_v5_0_1_start_dpg_mode - VCN start with dpg mode ++ * ++ * @vinst: VCN instance ++ * @indirect: indirectly write sram ++ * ++ * Start VCN block with dpg mode ++ */ ++static int vcn_v5_0_1_start_dpg_mode(struct amdgpu_vcn_inst *vinst, ++ bool indirect) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int inst_idx = vinst->inst; ++ volatile struct amdgpu_vcn5_fw_shared *fw_shared = ++ adev->vcn.inst[inst_idx].fw_shared.cpu_addr; ++ struct amdgpu_ring *ring; ++ struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__PAUSE}; ++ int vcn_inst; ++ uint32_t tmp; ++ ++ vcn_inst = GET_INST(VCN, inst_idx); ++ ++ /* disable register anti-hang mechanism */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_POWER_STATUS), 1, ++ ~UVD_POWER_STATUS__UVD_POWER_STATUS_MASK); ++ ++ /* enable dynamic power gating mode */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_POWER_STATUS); ++ tmp |= UVD_POWER_STATUS__UVD_PG_MODE_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_POWER_STATUS, tmp); ++ ++ if (indirect) { ++ adev->vcn.inst[inst_idx].dpg_sram_curr_addr = ++ (uint32_t *)adev->vcn.inst[inst_idx].dpg_sram_cpu_addr; ++ /* Use dummy register 0xDEADBEEF passing AID selection to PSP FW */ ++ WREG32_SOC24_DPG_MODE(inst_idx, 0xDEADBEEF, ++ adev->vcn.inst[inst_idx].aid_id, 0, true); ++ } ++ ++ /* enable VCPU clock */ ++ tmp = (0xFF << UVD_VCPU_CNTL__PRB_TIMEOUT_VAL__SHIFT); ++ tmp |= UVD_VCPU_CNTL__CLK_EN_MASK | UVD_VCPU_CNTL__BLK_RST_MASK; ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CNTL), tmp, 0, indirect); ++ ++ /* disable master interrupt */ ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_MASTINT_EN), 0, 0, indirect); ++ ++ /* setup regUVD_LMI_CTRL */ ++ tmp = (UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK | ++ UVD_LMI_CTRL__REQ_MODE_MASK | ++ UVD_LMI_CTRL__CRC_RESET_MASK | ++ UVD_LMI_CTRL__MASK_MC_URGENT_MASK | ++ UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK | ++ UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK | ++ (8 << UVD_LMI_CTRL__WRITE_CLEAN_TIMER__SHIFT) | ++ 0x00100000L); ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_CTRL), tmp, 0, indirect); ++ ++ vcn_v5_0_1_mc_resume_dpg_mode(vinst, indirect); ++ ++ tmp = (0xFF << UVD_VCPU_CNTL__PRB_TIMEOUT_VAL__SHIFT); ++ tmp |= UVD_VCPU_CNTL__CLK_EN_MASK; ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_VCPU_CNTL), tmp, 0, indirect); ++ ++ /* enable LMI MC and UMC channels */ ++ tmp = 0x1f << UVD_LMI_CTRL2__RE_OFLD_MIF_WR_REQ_NUM__SHIFT; ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_LMI_CTRL2), tmp, 0, indirect); ++ ++ /* enable master interrupt */ ++ WREG32_SOC24_DPG_MODE(inst_idx, SOC24_DPG_MODE_OFFSET( ++ VCN, 0, regUVD_MASTINT_EN), ++ UVD_MASTINT_EN__VCPU_EN_MASK, 0, 
indirect); ++ ++ if (indirect) ++ amdgpu_vcn_psp_update_sram(adev, inst_idx, AMDGPU_UCODE_ID_VCN0_RAM); ++ ++ /* resetting ring, fw should not check RB ring */ ++ fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; ++ ++ /* Pause dpg */ ++ vcn_v5_0_1_pause_dpg_mode(vinst, &state); ++ ++ ring = &adev->vcn.inst[inst_idx].ring_enc[0]; ++ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_BASE_LO, lower_32_bits(ring->gpu_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_BASE_HI, upper_32_bits(ring->gpu_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_SIZE, ring->ring_size / sizeof(uint32_t)); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); ++ tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK); ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); ++ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, tmp); ++ ring->wptr = RREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); ++ tmp |= VCN_RB_ENABLE__RB1_EN_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); ++ /* resetting done, fw can check RB ring */ ++ fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF); ++ ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL, ++ ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT | ++ VCN_RB1_DB_CTRL__EN_MASK); ++ /* Read DB_CTRL to flush the write DB_CTRL command. */ ++ RREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL); ++ ++ return 0; ++} ++ ++static int vcn_v5_0_1_start_sriov(struct amdgpu_device *adev) ++{ ++ int i, vcn_inst; ++ struct amdgpu_ring *ring_enc; ++ uint64_t cache_addr; ++ uint64_t rb_enc_addr; ++ uint64_t ctx_addr; ++ uint32_t param, resp, expected; ++ uint32_t offset, cache_size; ++ uint32_t tmp, timeout; ++ ++ struct amdgpu_mm_table *table = &adev->virt.mm_table; ++ uint32_t *table_loc; ++ uint32_t table_size; ++ uint32_t size, size_dw; ++ uint32_t init_status; ++ uint32_t enabled_vcn; ++ ++ struct mmsch_v5_0_cmd_direct_write ++ direct_wt = { {0} }; ++ struct mmsch_v5_0_cmd_direct_read_modify_write ++ direct_rd_mod_wt = { {0} }; ++ struct mmsch_v5_0_cmd_end end = { {0} }; ++ struct mmsch_v5_0_init_header header; ++ ++ volatile struct amdgpu_vcn5_fw_shared *fw_shared; ++ volatile struct amdgpu_fw_shared_rb_setup *rb_setup; ++ ++ direct_wt.cmd_header.command_type = ++ MMSCH_COMMAND__DIRECT_REG_WRITE; ++ direct_rd_mod_wt.cmd_header.command_type = ++ MMSCH_COMMAND__DIRECT_REG_READ_MODIFY_WRITE; ++ end.cmd_header.command_type = MMSCH_COMMAND__END; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; i++) { ++ vcn_inst = GET_INST(VCN, i); ++ ++ vcn_v5_0_1_fw_shared_init(adev, vcn_inst); ++ ++ memset(&header, 0, sizeof(struct mmsch_v5_0_init_header)); ++ header.version = MMSCH_VERSION; ++ header.total_size = sizeof(struct mmsch_v5_0_init_header) >> 2; ++ ++ table_loc = (uint32_t *)table->cpu_addr; ++ table_loc += header.total_size; ++ ++ table_size = 0; ++ ++ MMSCH_V5_0_INSERT_DIRECT_RD_MOD_WT(SOC15_REG_OFFSET(VCN, 0, regUVD_STATUS), ++ ~UVD_STATUS__UVD_BUSY, UVD_STATUS__UVD_BUSY); ++ ++ cache_size = AMDGPU_GPU_PAGE_ALIGN(adev->vcn.inst[i].fw->size + 4); ++ ++ if (adev->firmware.load_type == AMDGPU_FW_LOAD_PSP) { ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW), ++ adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + i].tmr_mc_addr_lo); ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH), ++ 
adev->firmware.ucode[AMDGPU_UCODE_ID_VCN + i].tmr_mc_addr_hi); ++ ++ offset = 0; ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_OFFSET0), 0); ++ } else { ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[i].gpu_addr)); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[i].gpu_addr)); ++ offset = cache_size; ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_OFFSET0), ++ AMDGPU_UVD_FIRMWARE_OFFSET >> 3); ++ } ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_SIZE0), ++ cache_size); ++ ++ cache_addr = adev->vcn.inst[vcn_inst].gpu_addr + offset; ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE1_64BIT_BAR_LOW), lower_32_bits(cache_addr)); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE1_64BIT_BAR_HIGH), upper_32_bits(cache_addr)); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_OFFSET1), 0); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_SIZE1), AMDGPU_VCN_STACK_SIZE); ++ ++ cache_addr = adev->vcn.inst[vcn_inst].gpu_addr + offset + ++ AMDGPU_VCN_STACK_SIZE; ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE2_64BIT_BAR_LOW), lower_32_bits(cache_addr)); ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_CACHE2_64BIT_BAR_HIGH), upper_32_bits(cache_addr)); ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_OFFSET2), 0); ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_CACHE_SIZE2), AMDGPU_VCN_CONTEXT_SIZE); ++ ++ fw_shared = adev->vcn.inst[vcn_inst].fw_shared.cpu_addr; ++ rb_setup = &fw_shared->rb_setup; ++ ++ ring_enc = &adev->vcn.inst[vcn_inst].ring_enc[0]; ++ ring_enc->wptr = 0; ++ rb_enc_addr = ring_enc->gpu_addr; ++ ++ rb_setup->is_rb_enabled_flags |= RB_ENABLED; ++ rb_setup->rb_addr_lo = lower_32_bits(rb_enc_addr); ++ rb_setup->rb_addr_hi = upper_32_bits(rb_enc_addr); ++ rb_setup->rb_size = ring_enc->ring_size / 4; ++ fw_shared->present_flag_0 |= cpu_to_le32(AMDGPU_VCN_VF_RB_SETUP_FLAG); ++ ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_NC0_64BIT_BAR_LOW), ++ lower_32_bits(adev->vcn.inst[vcn_inst].fw_shared.gpu_addr)); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_LMI_VCPU_NC0_64BIT_BAR_HIGH), ++ upper_32_bits(adev->vcn.inst[vcn_inst].fw_shared.gpu_addr)); ++ MMSCH_V5_0_INSERT_DIRECT_WT(SOC15_REG_OFFSET(VCN, 0, ++ regUVD_VCPU_NONCACHE_SIZE0), ++ AMDGPU_GPU_PAGE_ALIGN(sizeof(struct amdgpu_vcn4_fw_shared))); ++ MMSCH_V5_0_INSERT_END(); ++ ++ header.vcn0.init_status = 0; ++ header.vcn0.table_offset = header.total_size; ++ header.vcn0.table_size = table_size; ++ header.total_size += table_size; ++ ++ /* Send init table to mmsch */ ++ size = sizeof(struct mmsch_v5_0_init_header); ++ table_loc = (uint32_t *)table->cpu_addr; ++ memcpy((void *)table_loc, &header, size); ++ ++ ctx_addr = table->gpu_addr; ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_CTX_ADDR_LO, lower_32_bits(ctx_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_CTX_ADDR_HI, upper_32_bits(ctx_addr)); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_VMID); ++ tmp &= ~MMSCH_VF_VMID__VF_CTX_VMID_MASK; ++ tmp |= (0 << MMSCH_VF_VMID__VF_CTX_VMID__SHIFT); ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_VMID, tmp); ++ ++ 
size = header.total_size; ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_CTX_SIZE, size); ++ ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_MAILBOX_RESP, 0); ++ ++ param = 0x00000001; ++ WREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_MAILBOX_HOST, param); ++ tmp = 0; ++ timeout = 1000; ++ resp = 0; ++ expected = MMSCH_VF_MAILBOX_RESP__OK; ++ while (resp != expected) { ++ resp = RREG32_SOC15(VCN, vcn_inst, regMMSCH_VF_MAILBOX_RESP); ++ if (resp != 0) ++ break; ++ ++ udelay(10); ++ tmp = tmp + 10; ++ if (tmp >= timeout) { ++ DRM_ERROR("failed to init MMSCH. TIME-OUT after %d usec"\ ++ " waiting for regMMSCH_VF_MAILBOX_RESP "\ ++ "(expected=0x%08x, readback=0x%08x)\n", ++ tmp, expected, resp); ++ return -EBUSY; ++ } ++ } ++ ++ enabled_vcn = amdgpu_vcn_is_disabled_vcn(adev, VCN_DECODE_RING, 0) ? 1 : 0; ++ init_status = ((struct mmsch_v5_0_init_header *)(table_loc))->vcn0.init_status; ++ if (resp != expected && resp != MMSCH_VF_MAILBOX_RESP__INCOMPLETE ++ && init_status != MMSCH_VF_ENGINE_STATUS__PASS) { ++ DRM_ERROR("MMSCH init status is incorrect! readback=0x%08x, header init "\ ++ "status for VCN%x: 0x%x\n", resp, enabled_vcn, init_status); ++ } ++ } ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_start - VCN start ++ * ++ * @vinst: VCN instance ++ * ++ * Start VCN block ++ */ ++static int vcn_v5_0_1_start(struct amdgpu_vcn_inst *vinst) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int i = vinst->inst; ++ volatile struct amdgpu_vcn5_fw_shared *fw_shared; ++ struct amdgpu_ring *ring; ++ uint32_t tmp; ++ int j, k, r, vcn_inst; ++ ++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr; ++ ++ if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ++ return vcn_v5_0_1_start_dpg_mode(vinst, adev->vcn.inst[i].indirect_sram); ++ ++ vcn_inst = GET_INST(VCN, i); ++ ++ /* set VCN status busy */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS) | UVD_STATUS__UVD_BUSY; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_STATUS, tmp); ++ ++ /* enable VCPU clock */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), ++ UVD_VCPU_CNTL__CLK_EN_MASK, ~UVD_VCPU_CNTL__CLK_EN_MASK); ++ ++ /* disable master interrupt */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_MASTINT_EN), 0, ++ ~UVD_MASTINT_EN__VCPU_EN_MASK); ++ ++ /* enable LMI MC and UMC channels */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_LMI_CTRL2), 0, ++ ~UVD_LMI_CTRL2__STALL_ARB_UMC_MASK); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET); ++ tmp &= ~UVD_SOFT_RESET__LMI_SOFT_RESET_MASK; ++ tmp &= ~UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET, tmp); ++ ++ /* setup regUVD_LMI_CTRL */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_LMI_CTRL); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_CTRL, tmp | ++ UVD_LMI_CTRL__WRITE_CLEAN_TIMER_EN_MASK | ++ UVD_LMI_CTRL__MASK_MC_URGENT_MASK | ++ UVD_LMI_CTRL__DATA_COHERENCY_EN_MASK | ++ UVD_LMI_CTRL__VCPU_DATA_COHERENCY_EN_MASK); ++ ++ vcn_v5_0_1_mc_resume(vinst); ++ ++ /* VCN global tiling registers */ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_GFX10_ADDR_CONFIG, ++ adev->gfx.config.gb_addr_config); ++ ++ /* unblock VCPU register access */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_RB_ARB_CTRL), 0, ++ ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK); ++ ++ /* release VCPU reset to boot */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), 0, ++ ~UVD_VCPU_CNTL__BLK_RST_MASK); ++ ++ for (j = 0; j < 10; ++j) { ++ uint32_t status; ++ ++ for (k = 0; k < 100; ++k) { ++ status = RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS); ++ if (status & 2) ++ break; ++ mdelay(100); ++ if (amdgpu_emu_mode 
== 1) ++ msleep(20); ++ } ++ ++ if (amdgpu_emu_mode == 1) { ++ r = -1; ++ if (status & 2) { ++ r = 0; ++ break; ++ } ++ } else { ++ r = 0; ++ if (status & 2) ++ break; ++ ++ dev_err(adev->dev, ++ "VCN[%d] is not responding, trying to reset the VCPU!!!\n", i); ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), ++ UVD_VCPU_CNTL__BLK_RST_MASK, ++ ~UVD_VCPU_CNTL__BLK_RST_MASK); ++ mdelay(10); ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), 0, ++ ~UVD_VCPU_CNTL__BLK_RST_MASK); ++ ++ mdelay(10); ++ r = -1; ++ } ++ } ++ ++ if (r) { ++ dev_err(adev->dev, "VCN[%d] is not responding, giving up!!!\n", i); ++ return r; ++ } ++ ++ /* enable master interrupt */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_MASTINT_EN), ++ UVD_MASTINT_EN__VCPU_EN_MASK, ++ ~UVD_MASTINT_EN__VCPU_EN_MASK); ++ ++ /* clear the busy bit of VCN_STATUS */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_STATUS), 0, ++ ~(2 << UVD_STATUS__VCPU_REPORT__SHIFT)); ++ ++ ring = &adev->vcn.inst[i].ring_enc[0]; ++ ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL, ++ ring->doorbell_index << VCN_RB1_DB_CTRL__OFFSET__SHIFT | ++ VCN_RB1_DB_CTRL__EN_MASK); ++ ++ /* Read DB_CTRL to flush the write DB_CTRL command. */ ++ RREG32_SOC15(VCN, vcn_inst, regVCN_RB1_DB_CTRL); ++ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_BASE_LO, ring->gpu_addr); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_BASE_HI, upper_32_bits(ring->gpu_addr)); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_SIZE, ring->ring_size / 4); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); ++ tmp &= ~(VCN_RB_ENABLE__RB1_EN_MASK); ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); ++ fw_shared->sq.queue_mode |= FW_QUEUE_RING_RESET; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR, 0); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, 0); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_RB_RPTR); ++ WREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR, tmp); ++ ring->wptr = RREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR); ++ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE); ++ tmp |= VCN_RB_ENABLE__RB1_EN_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regVCN_RB_ENABLE, tmp); ++ fw_shared->sq.queue_mode &= ~(FW_QUEUE_RING_RESET | FW_QUEUE_DPG_HOLD_OFF); ++ ++ /* Keeping one read-back to ensure all register writes are done, ++ * otherwise it may introduce race conditions. ++ */ ++ RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS); ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_stop_dpg_mode - VCN stop with dpg mode ++ * ++ * @vinst: VCN instance ++ * ++ * Stop VCN block with dpg mode ++ */ ++static void vcn_v5_0_1_stop_dpg_mode(struct amdgpu_vcn_inst *vinst) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int inst_idx = vinst->inst; ++ uint32_t tmp; ++ int vcn_inst; ++ struct dpg_pause_state state = {.fw_based = VCN_DPG_STATE__UNPAUSE}; ++ ++ vcn_inst = GET_INST(VCN, inst_idx); ++ ++ /* Unpause dpg */ ++ vcn_v5_0_1_pause_dpg_mode(vinst, &state); ++ ++ /* Wait for power status to be 1 */ ++ SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_POWER_STATUS, 1, ++ UVD_POWER_STATUS__UVD_POWER_STATUS_MASK); ++ ++ /* wait for read ptr to be equal to write ptr */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_RB_WPTR); ++ SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_RB_RPTR, tmp, 0xFFFFFFFF); ++ ++ /* disable dynamic power gating mode */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_POWER_STATUS), 0, ++ ~UVD_POWER_STATUS__UVD_PG_MODE_MASK); ++ ++ /* Keeping one read-back to ensure all register writes are done, ++ * otherwise it may introduce race conditions. 
++ */ ++ RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS); ++} ++ ++/** ++ * vcn_v5_0_1_stop - VCN stop ++ * ++ * @vinst: VCN instance ++ * ++ * Stop VCN block ++ */ ++static int vcn_v5_0_1_stop(struct amdgpu_vcn_inst *vinst) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int i = vinst->inst; ++ volatile struct amdgpu_vcn5_fw_shared *fw_shared; ++ uint32_t tmp; ++ int r = 0, vcn_inst; ++ ++ vcn_inst = GET_INST(VCN, i); ++ ++ fw_shared = adev->vcn.inst[i].fw_shared.cpu_addr; ++ fw_shared->sq.queue_mode |= FW_QUEUE_DPG_HOLD_OFF; ++ ++ if (adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) { ++ vcn_v5_0_1_stop_dpg_mode(vinst); ++ return 0; ++ } ++ ++ /* wait for vcn idle */ ++ r = SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_STATUS, UVD_STATUS__IDLE, 0x7); ++ if (r) ++ return r; ++ ++ tmp = UVD_LMI_STATUS__VCPU_LMI_WRITE_CLEAN_MASK | ++ UVD_LMI_STATUS__READ_CLEAN_MASK | ++ UVD_LMI_STATUS__WRITE_CLEAN_MASK | ++ UVD_LMI_STATUS__WRITE_CLEAN_RAW_MASK; ++ r = SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_LMI_STATUS, tmp, tmp); ++ if (r) ++ return r; ++ ++ /* disable LMI UMC channel */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_LMI_CTRL2); ++ tmp |= UVD_LMI_CTRL2__STALL_ARB_UMC_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_LMI_CTRL2, tmp); ++ tmp = UVD_LMI_STATUS__UMC_READ_CLEAN_RAW_MASK | ++ UVD_LMI_STATUS__UMC_WRITE_CLEAN_RAW_MASK; ++ r = SOC15_WAIT_ON_RREG(VCN, vcn_inst, regUVD_LMI_STATUS, tmp, tmp); ++ if (r) ++ return r; ++ ++ /* block VCPU register access */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_RB_ARB_CTRL), ++ UVD_RB_ARB_CTRL__VCPU_DIS_MASK, ++ ~UVD_RB_ARB_CTRL__VCPU_DIS_MASK); ++ ++ /* reset VCPU */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), ++ UVD_VCPU_CNTL__BLK_RST_MASK, ++ ~UVD_VCPU_CNTL__BLK_RST_MASK); ++ ++ /* disable VCPU clock */ ++ WREG32_P(SOC15_REG_OFFSET(VCN, vcn_inst, regUVD_VCPU_CNTL), 0, ++ ~(UVD_VCPU_CNTL__CLK_EN_MASK)); ++ ++ /* apply soft reset */ ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET); ++ tmp |= UVD_SOFT_RESET__LMI_UMC_SOFT_RESET_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET, tmp); ++ tmp = RREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET); ++ tmp |= UVD_SOFT_RESET__LMI_SOFT_RESET_MASK; ++ WREG32_SOC15(VCN, vcn_inst, regUVD_SOFT_RESET, tmp); ++ ++ /* clear status */ ++ WREG32_SOC15(VCN, vcn_inst, regUVD_STATUS, 0); ++ ++ /* Keeping one read-back to ensure all register writes are done, ++ * otherwise it may introduce race conditions. 
++ */ ++ RREG32_SOC15(VCN, vcn_inst, regUVD_STATUS); ++ ++ return 0; ++} ++ ++/** ++ * vcn_v5_0_1_unified_ring_get_rptr - get unified read pointer ++ * ++ * @ring: amdgpu_ring pointer ++ * ++ * Returns the current hardware unified read pointer ++ */ ++static uint64_t vcn_v5_0_1_unified_ring_get_rptr(struct amdgpu_ring *ring) ++{ ++ struct amdgpu_device *adev = ring->adev; ++ ++ if (ring != &adev->vcn.inst[ring->me].ring_enc[0]) ++ DRM_ERROR("wrong ring id is identified in %s", __func__); ++ ++ return RREG32_SOC15(VCN, GET_INST(VCN, ring->me), regUVD_RB_RPTR); ++} ++ ++/** ++ * vcn_v5_0_1_unified_ring_get_wptr - get unified write pointer ++ * ++ * @ring: amdgpu_ring pointer ++ * ++ * Returns the current hardware unified write pointer ++ */ ++static uint64_t vcn_v5_0_1_unified_ring_get_wptr(struct amdgpu_ring *ring) ++{ ++ struct amdgpu_device *adev = ring->adev; ++ ++ if (ring != &adev->vcn.inst[ring->me].ring_enc[0]) ++ DRM_ERROR("wrong ring id is identified in %s", __func__); ++ ++ if (ring->use_doorbell) ++ return *ring->wptr_cpu_addr; ++ else ++ return RREG32_SOC15(VCN, GET_INST(VCN, ring->me), regUVD_RB_WPTR); ++} ++ ++/** ++ * vcn_v5_0_1_unified_ring_set_wptr - set enc write pointer ++ * ++ * @ring: amdgpu_ring pointer ++ * ++ * Commits the enc write pointer to the hardware ++ */ ++static void vcn_v5_0_1_unified_ring_set_wptr(struct amdgpu_ring *ring) ++{ ++ struct amdgpu_device *adev = ring->adev; ++ ++ if (ring != &adev->vcn.inst[ring->me].ring_enc[0]) ++ DRM_ERROR("wrong ring id is identified in %s", __func__); ++ ++ if (ring->use_doorbell) { ++ *ring->wptr_cpu_addr = lower_32_bits(ring->wptr); ++ WDOORBELL32(ring->doorbell_index, lower_32_bits(ring->wptr)); ++ } else { ++ WREG32_SOC15(VCN, GET_INST(VCN, ring->me), regUVD_RB_WPTR, ++ lower_32_bits(ring->wptr)); ++ } ++} ++ ++static const struct amdgpu_ring_funcs vcn_v5_0_1_unified_ring_vm_funcs = { ++ .type = AMDGPU_RING_TYPE_VCN_ENC, ++ .align_mask = 0x3f, ++ .nop = VCN_ENC_CMD_NO_OP, ++ .get_rptr = vcn_v5_0_1_unified_ring_get_rptr, ++ .get_wptr = vcn_v5_0_1_unified_ring_get_wptr, ++ .set_wptr = vcn_v5_0_1_unified_ring_set_wptr, ++ .emit_frame_size = SOC15_FLUSH_GPU_TLB_NUM_WREG * 3 + ++ SOC15_FLUSH_GPU_TLB_NUM_REG_WAIT * 4 + ++ 4 + /* vcn_v2_0_enc_ring_emit_vm_flush */ ++ 5 + ++ 5 + /* vcn_v2_0_enc_ring_emit_fence x2 vm fence */ ++ 1, /* vcn_v2_0_enc_ring_insert_end */ ++ .emit_ib_size = 5, /* vcn_v2_0_enc_ring_emit_ib */ ++ .emit_ib = vcn_v2_0_enc_ring_emit_ib, ++ .emit_fence = vcn_v2_0_enc_ring_emit_fence, ++ .emit_vm_flush = vcn_v4_0_3_enc_ring_emit_vm_flush, ++ .emit_hdp_flush = vcn_v4_0_3_ring_emit_hdp_flush, ++ .test_ring = amdgpu_vcn_enc_ring_test_ring, ++ .test_ib = amdgpu_vcn_unified_ring_test_ib, ++ .insert_nop = amdgpu_ring_insert_nop, ++ .insert_end = vcn_v2_0_enc_ring_insert_end, ++ .pad_ib = amdgpu_ring_generic_pad_ib, ++ .begin_use = amdgpu_vcn_ring_begin_use, ++ .end_use = amdgpu_vcn_ring_end_use, ++ .emit_wreg = vcn_v4_0_3_enc_ring_emit_wreg, ++ .emit_reg_wait = vcn_v4_0_3_enc_ring_emit_reg_wait, ++ .emit_reg_write_reg_wait = amdgpu_ring_emit_reg_write_reg_wait_helper, ++}; ++ ++/** ++ * vcn_v5_0_1_set_unified_ring_funcs - set unified ring functions ++ * ++ * @adev: amdgpu_device pointer ++ * ++ * Set unified ring functions ++ */ ++static void vcn_v5_0_1_set_unified_ring_funcs(struct amdgpu_device *adev) ++{ ++ int i, vcn_inst; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ adev->vcn.inst[i].ring_enc[0].funcs = &vcn_v5_0_1_unified_ring_vm_funcs; ++ adev->vcn.inst[i].ring_enc[0].me = i; ++ vcn_inst = 
GET_INST(VCN, i); ++ adev->vcn.inst[i].aid_id = vcn_inst / adev->vcn.num_inst_per_aid; ++ } ++} ++ ++/** ++ * vcn_v5_0_1_is_idle - check VCN block is idle ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block structure ++ * ++ * Check whether VCN block is idle ++ */ ++static bool vcn_v5_0_1_is_idle(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int i, ret = 1; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) ++ ret &= (RREG32_SOC15(VCN, GET_INST(VCN, i), regUVD_STATUS) == UVD_STATUS__IDLE); ++ ++ return ret; ++} ++ ++/** ++ * vcn_v5_0_1_wait_for_idle - wait for VCN block idle ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * ++ * Wait for VCN block idle ++ */ ++static int vcn_v5_0_1_wait_for_idle(struct amdgpu_ip_block *ip_block) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ int i, ret = 0; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ ret = SOC15_WAIT_ON_RREG(VCN, GET_INST(VCN, i), regUVD_STATUS, UVD_STATUS__IDLE, ++ UVD_STATUS__IDLE); ++ if (ret) ++ return ret; ++ } ++ ++ return ret; ++} ++ ++/** ++ * vcn_v5_0_1_set_clockgating_state - set VCN block clockgating state ++ * ++ * @ip_block: Pointer to the amdgpu_ip_block for this hw instance. ++ * @state: clock gating state ++ * ++ * Set VCN block clockgating state ++ */ ++static int vcn_v5_0_1_set_clockgating_state(struct amdgpu_ip_block *ip_block, ++ enum amd_clockgating_state state) ++{ ++ struct amdgpu_device *adev = ip_block->adev; ++ bool enable = state == AMD_CG_STATE_GATE; ++ int i; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { ++ struct amdgpu_vcn_inst *vinst = &adev->vcn.inst[i]; ++ ++ if (enable) { ++ if (RREG32_SOC15(VCN, GET_INST(VCN, i), regUVD_STATUS) != UVD_STATUS__IDLE) ++ return -EBUSY; ++ vcn_v5_0_1_enable_clock_gating(vinst); ++ } else { ++ vcn_v5_0_1_disable_clock_gating(vinst); ++ } ++ } ++ ++ return 0; ++} ++ ++static int vcn_v5_0_1_set_pg_state(struct amdgpu_vcn_inst *vinst, ++ enum amd_powergating_state state) ++{ ++ struct amdgpu_device *adev = vinst->adev; ++ int ret = 0; ++ ++ /* for SRIOV, guest should not control VCN Power-gating ++ * MMSCH FW should control Power-gating and clock-gating ++ * guest should avoid touching CGC and PG ++ */ ++ if (amdgpu_sriov_vf(adev)) { ++ vinst->cur_state = AMD_PG_STATE_UNGATE; ++ return 0; ++ } ++ ++ if (state == vinst->cur_state) ++ return 0; ++ ++ if (state == AMD_PG_STATE_GATE) ++ ret = vcn_v5_0_1_stop(vinst); ++ else ++ ret = vcn_v5_0_1_start(vinst); ++ ++ if (!ret) ++ vinst->cur_state = state; ++ ++ return ret; ++} ++ ++/** ++ * vcn_v5_0_1_process_interrupt - process VCN block interrupt ++ * ++ * @adev: amdgpu_device pointer ++ * @source: interrupt sources ++ * @entry: interrupt entry from clients and sources ++ * ++ * Process VCN block interrupt ++ */ ++static int vcn_v5_0_1_process_interrupt(struct amdgpu_device *adev, struct amdgpu_irq_src *source, ++ struct amdgpu_iv_entry *entry) ++{ ++ uint32_t i, inst; ++ ++ i = node_id_to_phys_map[entry->node_id]; ++ ++ DRM_DEV_DEBUG(adev->dev, "IH: VCN TRAP\n"); ++ ++ for (inst = 0; inst < adev->vcn.num_vcn_inst; ++inst) ++ if (adev->vcn.inst[inst].aid_id == i) ++ break; ++ if (inst >= adev->vcn.num_vcn_inst) { ++ dev_WARN_ONCE(adev->dev, 1, ++ "Interrupt received for unknown VCN instance %d", ++ entry->node_id); ++ return 0; ++ } ++ ++ switch (entry->src_id) { ++ case VCN_5_0__SRCID__UVD_ENC_GENERAL_PURPOSE: ++ amdgpu_fence_process(&adev->vcn.inst[inst].ring_enc[0]); ++ break; ++ default: ++ DRM_DEV_ERROR(adev->dev, "Unhandled 
interrupt: %d %d\n", ++ entry->src_id, entry->src_data[0]); ++ break; ++ } ++ ++ return 0; ++} ++ ++static int vcn_v5_0_1_set_ras_interrupt_state(struct amdgpu_device *adev, ++ struct amdgpu_irq_src *source, ++ unsigned int type, ++ enum amdgpu_interrupt_state state) ++{ ++ return 0; ++} ++ ++static const struct amdgpu_irq_src_funcs vcn_v5_0_1_irq_funcs = { ++ .process = vcn_v5_0_1_process_interrupt, ++}; ++ ++static const struct amdgpu_irq_src_funcs vcn_v5_0_1_ras_irq_funcs = { ++ .set = vcn_v5_0_1_set_ras_interrupt_state, ++ .process = amdgpu_vcn_process_poison_irq, ++}; ++ ++ ++/** ++ * vcn_v5_0_1_set_irq_funcs - set VCN block interrupt irq functions ++ * ++ * @adev: amdgpu_device pointer ++ * ++ * Set VCN block interrupt irq functions ++ */ ++static void vcn_v5_0_1_set_irq_funcs(struct amdgpu_device *adev) ++{ ++ int i; ++ ++ for (i = 0; i < adev->vcn.num_vcn_inst; ++i) ++ adev->vcn.inst->irq.num_types++; ++ ++ adev->vcn.inst->irq.funcs = &vcn_v5_0_1_irq_funcs; ++ ++ adev->vcn.inst->ras_poison_irq.num_types = 1; ++ adev->vcn.inst->ras_poison_irq.funcs = &vcn_v5_0_1_ras_irq_funcs; ++ ++} ++ ++static const struct amd_ip_funcs vcn_v5_0_1_ip_funcs = { ++ .name = "vcn_v5_0_1", ++ .early_init = vcn_v5_0_1_early_init, ++ .late_init = NULL, ++ .sw_init = vcn_v5_0_1_sw_init, ++ .sw_fini = vcn_v5_0_1_sw_fini, ++ .hw_init = vcn_v5_0_1_hw_init, ++ .hw_fini = vcn_v5_0_1_hw_fini, ++ .suspend = vcn_v5_0_1_suspend, ++ .resume = vcn_v5_0_1_resume, ++ .is_idle = vcn_v5_0_1_is_idle, ++ .wait_for_idle = vcn_v5_0_1_wait_for_idle, ++ .check_soft_reset = NULL, ++ .pre_soft_reset = NULL, ++ .soft_reset = NULL, ++ .post_soft_reset = NULL, ++ .set_clockgating_state = vcn_v5_0_1_set_clockgating_state, ++ .set_powergating_state = vcn_set_powergating_state, ++ .dump_ip_state = vcn_v5_0_0_dump_ip_state, ++ .print_ip_state = vcn_v5_0_0_print_ip_state, ++}; ++ ++const struct amdgpu_ip_block_version vcn_v5_0_1_ip_block = { ++ .type = AMD_IP_BLOCK_TYPE_VCN, ++ .major = 5, ++ .minor = 0, ++ .rev = 1, ++ .funcs = &vcn_v5_0_1_ip_funcs, ++}; ++ ++static uint32_t vcn_v5_0_1_query_poison_by_instance(struct amdgpu_device *adev, ++ uint32_t instance, uint32_t sub_block) ++{ ++ uint32_t poison_stat = 0, reg_value = 0; ++ ++ switch (sub_block) { ++ case AMDGPU_VCN_V5_0_1_VCPU_VCODEC: ++ reg_value = RREG32_SOC15(VCN, instance, regUVD_RAS_VCPU_VCODEC_STATUS); ++ poison_stat = REG_GET_FIELD(reg_value, UVD_RAS_VCPU_VCODEC_STATUS, POISONED_PF); ++ break; ++ default: ++ break; ++ } ++ ++ if (poison_stat) ++ dev_info(adev->dev, "Poison detected in VCN%d, sub_block%d\n", ++ instance, sub_block); ++ ++ return poison_stat; ++} ++ ++static bool vcn_v5_0_1_query_poison_status(struct amdgpu_device *adev) ++{ ++ uint32_t inst, sub; ++ uint32_t poison_stat = 0; ++ ++ for (inst = 0; inst < adev->vcn.num_vcn_inst; inst++) ++ for (sub = 0; sub < AMDGPU_VCN_V5_0_1_MAX_SUB_BLOCK; sub++) ++ poison_stat += ++ vcn_v5_0_1_query_poison_by_instance(adev, inst, sub); ++ ++ return !!poison_stat; ++} ++ ++static const struct amdgpu_ras_block_hw_ops vcn_v5_0_1_ras_hw_ops = { ++ .query_poison_status = vcn_v5_0_1_query_poison_status, ++}; ++ ++static int vcn_v5_0_1_aca_bank_parser(struct aca_handle *handle, struct aca_bank *bank, ++ enum aca_smu_type type, void *data) ++{ ++ struct aca_bank_info info; ++ u64 misc0; ++ int ret; ++ ++ ret = aca_bank_info_decode(bank, &info); ++ if (ret) ++ return ret; ++ ++ misc0 = bank->regs[ACA_REG_IDX_MISC0]; ++ switch (type) { ++ case ACA_SMU_TYPE_UE: ++ bank->aca_err_type = ACA_ERROR_TYPE_UE; ++ ret = 
aca_error_cache_log_bank_error(handle, &info, ACA_ERROR_TYPE_UE, ++ 1ULL); ++ break; ++ case ACA_SMU_TYPE_CE: ++ bank->aca_err_type = ACA_ERROR_TYPE_CE; ++ ret = aca_error_cache_log_bank_error(handle, &info, bank->aca_err_type, ++ ACA_REG__MISC0__ERRCNT(misc0)); ++ break; ++ default: ++ return -EINVAL; ++ } ++ ++ return ret; ++} ++ ++/* reference to smu driver if header file */ ++static int vcn_v5_0_1_err_codes[] = { ++ 14, 15, /* VCN */ ++}; ++ ++static bool vcn_v5_0_1_aca_bank_is_valid(struct aca_handle *handle, struct aca_bank *bank, ++ enum aca_smu_type type, void *data) ++{ ++ u32 instlo; ++ ++ instlo = ACA_REG__IPID__INSTANCEIDLO(bank->regs[ACA_REG_IDX_IPID]); ++ instlo &= GENMASK(31, 1); ++ ++ if (instlo != mmSMNAID_AID0_MCA_SMU) ++ return false; ++ ++ if (aca_bank_check_error_codes(handle->adev, bank, ++ vcn_v5_0_1_err_codes, ++ ARRAY_SIZE(vcn_v5_0_1_err_codes))) ++ return false; ++ ++ return true; ++} ++ ++static const struct aca_bank_ops vcn_v5_0_1_aca_bank_ops = { ++ .aca_bank_parser = vcn_v5_0_1_aca_bank_parser, ++ .aca_bank_is_valid = vcn_v5_0_1_aca_bank_is_valid, ++}; ++ ++static const struct aca_info vcn_v5_0_1_aca_info = { ++ .hwip = ACA_HWIP_TYPE_SMU, ++ .mask = ACA_ERROR_UE_MASK, ++ .bank_ops = &vcn_v5_0_1_aca_bank_ops, ++}; ++ ++static int vcn_v5_0_1_ras_late_init(struct amdgpu_device *adev, struct ras_common_if *ras_block) ++{ ++ int r; ++ ++ r = amdgpu_ras_block_late_init(adev, ras_block); ++ if (r) ++ return r; ++ ++ r = amdgpu_ras_bind_aca(adev, AMDGPU_RAS_BLOCK__VCN, ++ &vcn_v5_0_1_aca_info, NULL); ++ if (r) ++ goto late_fini; ++ ++ return 0; ++ ++late_fini: ++ amdgpu_ras_block_late_fini(adev, ras_block); ++ ++ return r; ++} ++ ++static struct amdgpu_vcn_ras vcn_v5_0_1_ras = { ++ .ras_block = { ++ .hw_ops = &vcn_v5_0_1_ras_hw_ops, ++ .ras_late_init = vcn_v5_0_1_ras_late_init, ++ }, ++}; ++ ++static void vcn_v5_0_1_set_ras_funcs(struct amdgpu_device *adev) ++{ ++ adev->vcn.ras = &vcn_v5_0_1_ras; ++} +-- +2.39.5 + diff --git a/queue-6.12/drm-bridge-aux-hpd-bridge-fix-assignment-of-the-of_n.patch b/queue-6.12/drm-bridge-aux-hpd-bridge-fix-assignment-of-the-of_n.patch new file mode 100644 index 0000000000..0b554909b7 --- /dev/null +++ b/queue-6.12/drm-bridge-aux-hpd-bridge-fix-assignment-of-the-of_n.patch @@ -0,0 +1,49 @@ +From 37fade1f747287909dfef11175af836a2f281e39 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 8 Jun 2025 18:52:04 +0300 +Subject: drm/bridge: aux-hpd-bridge: fix assignment of the of_node + +From: Dmitry Baryshkov + +[ Upstream commit e8537cad824065b0425fb0429e762e14a08067c2 ] + +Perform fix similar to the one in the commit 85e444a68126 ("drm/bridge: +Fix assignment of the of_node of the parent to aux bridge"). + +The assignment of the of_node to the aux HPD bridge needs to mark the +of_node as reused, otherwise driver core will attempt to bind resources +like pinctrl, which is going to fail as corresponding pins are already +marked as used by the parent device. +Fix that by using the device_set_of_node_from_dev() helper instead of +assigning it directly. 
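For reference, the device_set_of_node_from_dev() helper used here (from
drivers/base/core.c) amounts to the following sketch; the of_node_reused
flag it sets is what makes the driver core skip pinctrl setup for the
auxiliary device, which is the point of the fix:

	void device_set_of_node_from_dev(struct device *dev,
					 const struct device *dev2)
	{
		of_node_put(dev->of_node);	/* drop any node already set */
		dev->of_node = of_node_get(dev2->of_node); /* share the parent's node */
		dev->of_node_reused = true;	/* pinctrl_bind_pins() returns early */
	}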
+ +Fixes: e560518a6c2e ("drm/bridge: implement generic DP HPD bridge") +Signed-off-by: Dmitry Baryshkov +Reviewed-by: Neil Armstrong +Signed-off-by: Neil Armstrong +Link: https://lore.kernel.org/r/20250608-fix-aud-hpd-bridge-v1-1-4641a6f8e381@oss.qualcomm.com +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/bridge/aux-hpd-bridge.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/bridge/aux-hpd-bridge.c b/drivers/gpu/drm/bridge/aux-hpd-bridge.c +index 6886db2d9e00c..8e889a38fad00 100644 +--- a/drivers/gpu/drm/bridge/aux-hpd-bridge.c ++++ b/drivers/gpu/drm/bridge/aux-hpd-bridge.c +@@ -64,10 +64,11 @@ struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, str + adev->id = ret; + adev->name = "dp_hpd_bridge"; + adev->dev.parent = parent; +- adev->dev.of_node = of_node_get(parent->of_node); + adev->dev.release = drm_aux_hpd_bridge_release; + adev->dev.platform_data = of_node_get(np); + ++ device_set_of_node_from_dev(&adev->dev, parent); ++ + ret = auxiliary_device_init(adev); + if (ret) { + of_node_put(adev->dev.platform_data); +-- +2.39.5 + diff --git a/queue-6.12/drm-exynos-fimd-guard-display-clock-control-with-run.patch b/queue-6.12/drm-exynos-fimd-guard-display-clock-control-with-run.patch new file mode 100644 index 0000000000..52fd732e2e --- /dev/null +++ b/queue-6.12/drm-exynos-fimd-guard-display-clock-control-with-run.patch @@ -0,0 +1,67 @@ +From 54bc940687bd18df5c25a27e25eb5ef02debfab8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 14:06:26 +0200 +Subject: drm/exynos: fimd: Guard display clock control with runtime PM calls + +From: Marek Szyprowski + +[ Upstream commit 5d91394f236167ac624b823820faf4aa928b889e ] + +Commit c9b1150a68d9 ("drm/atomic-helper: Re-order bridge chain pre-enable +and post-disable") changed the call sequence to the CRTC enable/disable +and bridge pre_enable/post_disable methods, so those bridge methods are +now called when CRTC is not yet enabled. + +This causes a lockup observed on Samsung Peach-Pit/Pi Chromebooks. The +source of this lockup is a call to fimd_dp_clock_enable() function, when +FIMD device is not yet runtime resumed. It worked before the mentioned +commit only because the CRTC implemented by the FIMD driver was always +enabled what guaranteed the FIMD device to be runtime resumed. + +This patch adds runtime PM guards to the fimd_dp_clock_enable() function +to enable its proper operation also when the CRTC implemented by FIMD is +not yet enabled. + +Fixes: 196e059a8a6a ("drm/exynos: convert clock_enable crtc callback to pipeline clock") +Signed-off-by: Marek Szyprowski +Reviewed-by: Tomi Valkeinen +Signed-off-by: Inki Dae +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/exynos/exynos_drm_fimd.c | 12 ++++++++++++ + 1 file changed, 12 insertions(+) + +diff --git a/drivers/gpu/drm/exynos/exynos_drm_fimd.c b/drivers/gpu/drm/exynos/exynos_drm_fimd.c +index f57df8c481391..05e4a5a63f5d8 100644 +--- a/drivers/gpu/drm/exynos/exynos_drm_fimd.c ++++ b/drivers/gpu/drm/exynos/exynos_drm_fimd.c +@@ -187,6 +187,7 @@ struct fimd_context { + u32 i80ifcon; + bool i80_if; + bool suspended; ++ bool dp_clk_enabled; + wait_queue_head_t wait_vsync_queue; + atomic_t wait_vsync_event; + atomic_t win_updated; +@@ -1047,7 +1048,18 @@ static void fimd_dp_clock_enable(struct exynos_drm_clk *clk, bool enable) + struct fimd_context *ctx = container_of(clk, struct fimd_context, + dp_clk); + u32 val = enable ? 
DP_MIE_CLK_DP_ENABLE : DP_MIE_CLK_DISABLE; ++ ++ if (enable == ctx->dp_clk_enabled) ++ return; ++ ++ if (enable) ++ pm_runtime_resume_and_get(ctx->dev); ++ ++ ctx->dp_clk_enabled = enable; + writel(val, ctx->regs + DP_MIE_CLKCON); ++ ++ if (!enable) ++ pm_runtime_put(ctx->dev); + } + + static const struct exynos_drm_crtc_ops fimd_crtc_ops = { +-- +2.39.5 + diff --git a/queue-6.12/drm-i915-dp_mst-work-around-thunderbolt-sink-disconn.patch b/queue-6.12/drm-i915-dp_mst-work-around-thunderbolt-sink-disconn.patch new file mode 100644 index 0000000000..b53b78691d --- /dev/null +++ b/queue-6.12/drm-i915-dp_mst-work-around-thunderbolt-sink-disconn.patch @@ -0,0 +1,71 @@ +From b5509c98977aad77f121dfe8ddb9f742b8b7258d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 19 May 2025 16:34:17 +0300 +Subject: drm/i915/dp_mst: Work around Thunderbolt sink disconnect after + SINK_COUNT_ESI read + +From: Imre Deak + +[ Upstream commit 9cb15478916e849d62a6ec44b10c593b9663328c ] + +Due to a problem in the iTBT DP-in adapter's firmware the sink on a TBT +link may get disconnected inadvertently if the SINK_COUNT_ESI and the +DP_LINK_SERVICE_IRQ_VECTOR_ESI0 registers are read in a single AUX +transaction. Work around the issue by reading these registers in +separate transactions. + +The issue affects MTL+ platforms and will be fixed in the DP-in adapter +firmware, however releasing that firmware fix may take some time and is +not guaranteed to be available for all systems. Based on this apply the +workaround on affected platforms. + +See HSD #13013007775. + +v2: Cc'ing Mika Westerberg. + +Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/13760 +Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14147 +Cc: Mika Westerberg +Cc: stable@vger.kernel.org +Reviewed-by: Mika Westerberg +Signed-off-by: Imre Deak +Link: https://lore.kernel.org/r/20250519133417.1469181-1-imre.deak@intel.com +(cherry picked from commit c3a48363cf1f76147088b1adb518136ac5df86a0) +Signed-off-by: Joonas Lahtinen +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/i915/display/intel_dp.c | 18 ++++++++++++++++++ + 1 file changed, 18 insertions(+) + +diff --git a/drivers/gpu/drm/i915/display/intel_dp.c b/drivers/gpu/drm/i915/display/intel_dp.c +index 45cca965c11b4..ca9e0c730013d 100644 +--- a/drivers/gpu/drm/i915/display/intel_dp.c ++++ b/drivers/gpu/drm/i915/display/intel_dp.c +@@ -4300,6 +4300,24 @@ intel_dp_mst_disconnect(struct intel_dp *intel_dp) + static bool + intel_dp_get_sink_irq_esi(struct intel_dp *intel_dp, u8 *esi) + { ++ struct intel_display *display = to_intel_display(intel_dp); ++ struct drm_i915_private *i915 = dp_to_i915(intel_dp); ++ ++ /* ++ * Display WA for HSD #13013007775: mtl/arl/lnl ++ * Read the sink count and link service IRQ registers in separate ++ * transactions to prevent disconnecting the sink on a TBT link ++ * inadvertently. 
++ */ ++ if (IS_DISPLAY_VER(display, 14, 20) && !IS_BATTLEMAGE(i915)) { ++ if (drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 3) != 3) ++ return false; ++ ++ /* DP_SINK_COUNT_ESI + 3 == DP_LINK_SERVICE_IRQ_VECTOR_ESI0 */ ++ return drm_dp_dpcd_readb(&intel_dp->aux, DP_LINK_SERVICE_IRQ_VECTOR_ESI0, ++ &esi[3]) == 1; ++ } ++ + return drm_dp_dpcd_read(&intel_dp->aux, DP_SINK_COUNT_ESI, esi, 4) == 4; + } + +-- +2.39.5 + diff --git a/queue-6.12/drm-i915-gsc-mei-interrupt-top-half-should-be-in-irq.patch b/queue-6.12/drm-i915-gsc-mei-interrupt-top-half-should-be-in-irq.patch new file mode 100644 index 0000000000..bef3c67b42 --- /dev/null +++ b/queue-6.12/drm-i915-gsc-mei-interrupt-top-half-should-be-in-irq.patch @@ -0,0 +1,52 @@ +From d3ac16c4c7cae332c5a6890c3fe9e661dc093669 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 25 Apr 2025 23:11:07 +0800 +Subject: drm/i915/gsc: mei interrupt top half should be in irq disabled + context + +From: Junxiao Chang + +[ Upstream commit 8cadce97bf264ed478669c6f32d5603b34608335 ] + +MEI GSC interrupt comes from i915. It has top half and bottom half. +Top half is called from i915 interrupt handler. It should be in +irq disabled context. + +With RT kernel, by default i915 IRQ handler is in threaded IRQ. MEI GSC +top half might be in threaded IRQ context. generic_handle_irq_safe API +could be called from either IRQ or process context, it disables local +IRQ then calls MEI GSC interrupt top half. + +This change fixes A380/A770 GPU boot hang issue with RT kernel. + +Fixes: 1e3dc1d8622b ("drm/i915/gsc: add gsc as a mei auxiliary device") +Tested-by: Furong Zhou +Suggested-by: Sebastian Andrzej Siewior +Acked-by: Sebastian Andrzej Siewior +Signed-off-by: Junxiao Chang +Link: https://lore.kernel.org/r/20250425151108.643649-1-junxiao.chang@intel.com +Reviewed-by: Rodrigo Vivi +Signed-off-by: Rodrigo Vivi +(cherry picked from commit dccf655f69002d496a527ba441b4f008aa5bebbf) +Signed-off-by: Joonas Lahtinen +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/i915/gt/intel_gsc.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/i915/gt/intel_gsc.c b/drivers/gpu/drm/i915/gt/intel_gsc.c +index 1e925c75fb080..c43febc862dc3 100644 +--- a/drivers/gpu/drm/i915/gt/intel_gsc.c ++++ b/drivers/gpu/drm/i915/gt/intel_gsc.c +@@ -284,7 +284,7 @@ static void gsc_irq_handler(struct intel_gt *gt, unsigned int intf_id) + if (gt->gsc.intf[intf_id].irq < 0) + return; + +- ret = generic_handle_irq(gt->gsc.intf[intf_id].irq); ++ ret = generic_handle_irq_safe(gt->gsc.intf[intf_id].irq); + if (ret) + gt_err_ratelimited(gt, "error handling GSC irq: %d\n", ret); + } +-- +2.39.5 + diff --git a/queue-6.12/drm-i915-gt-fix-timeline-left-held-on-vma-alloc-erro.patch b/queue-6.12/drm-i915-gt-fix-timeline-left-held-on-vma-alloc-erro.patch new file mode 100644 index 0000000000..a149d6c012 --- /dev/null +++ b/queue-6.12/drm-i915-gt-fix-timeline-left-held-on-vma-alloc-erro.patch @@ -0,0 +1,128 @@ +From 2d6af38c8ce0c9f43dd9158cb6b936bad59fcfb6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 12:42:13 +0200 +Subject: drm/i915/gt: Fix timeline left held on VMA alloc error + +From: Janusz Krzysztofik + +[ Upstream commit a5aa7bc1fca78c7fa127d9e33aa94a0c9066c1d6 ] + +The following error has been reported sporadically by CI when a test +unbinds the i915 driver on a ring submission platform: + +<4> [239.330153] ------------[ cut here ]------------ +<4> [239.330166] i915 0000:00:02.0: [drm] drm_WARN_ON(dev_priv->mm.shrink_count) +<4> [239.330196] 
WARNING: CPU: 1 PID: 18570 at drivers/gpu/drm/i915/i915_gem.c:1309 i915_gem_cleanup_early+0x13e/0x150 [i915] +... +<4> [239.330640] RIP: 0010:i915_gem_cleanup_early+0x13e/0x150 [i915] +... +<4> [239.330942] Call Trace: +<4> [239.330944] +<4> [239.330949] i915_driver_late_release+0x2b/0xa0 [i915] +<4> [239.331202] i915_driver_release+0x86/0xa0 [i915] +<4> [239.331482] devm_drm_dev_init_release+0x61/0x90 +<4> [239.331494] devm_action_release+0x15/0x30 +<4> [239.331504] release_nodes+0x3d/0x120 +<4> [239.331517] devres_release_all+0x96/0xd0 +<4> [239.331533] device_unbind_cleanup+0x12/0x80 +<4> [239.331543] device_release_driver_internal+0x23a/0x280 +<4> [239.331550] ? bus_find_device+0xa5/0xe0 +<4> [239.331563] device_driver_detach+0x14/0x20 +... +<4> [357.719679] ---[ end trace 0000000000000000 ]--- + +If the test also unloads the i915 module then that's followed with: + +<3> [357.787478] ============================================================================= +<3> [357.788006] BUG i915_vma (Tainted: G U W N ): Objects remaining on __kmem_cache_shutdown() +<3> [357.788031] ----------------------------------------------------------------------------- +<3> [357.788204] Object 0xffff888109e7f480 @offset=29824 +<3> [357.788670] Allocated in i915_vma_instance+0xee/0xc10 [i915] age=292729 cpu=4 pid=2244 +<4> [357.788994] i915_vma_instance+0xee/0xc10 [i915] +<4> [357.789290] init_status_page+0x7b/0x420 [i915] +<4> [357.789532] intel_engines_init+0x1d8/0x980 [i915] +<4> [357.789772] intel_gt_init+0x175/0x450 [i915] +<4> [357.790014] i915_gem_init+0x113/0x340 [i915] +<4> [357.790281] i915_driver_probe+0x847/0xed0 [i915] +<4> [357.790504] i915_pci_probe+0xe6/0x220 [i915] +... + +Closer analysis of CI results history has revealed a dependency of the +error on a few IGT tests, namely: +- igt@api_intel_allocator@fork-simple-stress-signal, +- igt@api_intel_allocator@two-level-inception-interruptible, +- igt@gem_linear_blits@interruptible, +- igt@prime_mmap_coherency@ioctl-errors, +which invisibly trigger the issue, then exhibited with first driver unbind +attempt. + +All of the above tests perform actions which are actively interrupted with +signals. Further debugging has allowed to narrow that scope down to +DRM_IOCTL_I915_GEM_EXECBUFFER2, and ring_context_alloc(), specific to ring +submission, in particular. + +If successful then that function, or its execlists or GuC submission +equivalent, is supposed to be called only once per GEM context engine, +followed by raise of a flag that prevents the function from being called +again. The function is expected to unwind its internal errors itself, so +it may be safely called once more after it returns an error. + +In case of ring submission, the function first gets a reference to the +engine's legacy timeline and then allocates a VMA. If the VMA allocation +fails, e.g. when i915_vma_instance() called from inside is interrupted +with a signal, then ring_context_alloc() fails, leaving the timeline held +referenced. On next I915_GEM_EXECBUFFER2 IOCTL, another reference to the +timeline is got, and only that last one is put on successful completion. +As a consequence, the legacy timeline, with its underlying engine status +page's VMA object, is still held and not released on driver unbind. + +Get the legacy timeline only after successful allocation of the context +engine's VMA. + +v2: Add a note on other submission methods (Krzysztof Karas): + Both execlists and GuC submission use lrc_alloc() which seems free + from a similar issue. 
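The fix itself is purely an ordering change: take the long-lived timeline
reference only after the last step that can fail, so an error return from
ring_context_alloc() leaves nothing for the caller to unwind. Schematically
(heavily simplified from the driver code; alloc_state_vma() is a stand-in
for the actual VMA allocation, not a real function):

	static int ring_context_alloc(struct intel_context *ce)
	{
		struct i915_vma *vma;

		vma = alloc_state_vma(ce->engine);	/* may fail, e.g. on a signal */
		if (IS_ERR(vma))
			return PTR_ERR(vma);		/* no reference held yet */
		ce->state = vma;

		/* cannot fail; from here on the normal release path runs */
		ce->timeline = intel_timeline_get(ce->engine->legacy.timeline);
		return 0;
	}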
+ +Fixes: 75d0a7f31eec ("drm/i915: Lift timeline into intel_context") +Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/12061 +Cc: Chris Wilson +Cc: Matthew Auld +Cc: Krzysztof Karas +Reviewed-by: Sebastian Brzezinka +Reviewed-by: Krzysztof Niemiec +Signed-off-by: Janusz Krzysztofik +Reviewed-by: Nitin Gote +Reviewed-by: Andi Shyti +Signed-off-by: Andi Shyti +Link: https://lore.kernel.org/r/20250611104352.1014011-2-janusz.krzysztofik@linux.intel.com +(cherry picked from commit cc43422b3cc79eacff4c5a8ba0d224688ca9dd4f) +Signed-off-by: Joonas Lahtinen +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/i915/gt/intel_ring_submission.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/i915/gt/intel_ring_submission.c b/drivers/gpu/drm/i915/gt/intel_ring_submission.c +index 72277bc8322e8..f84fa09cdb339 100644 +--- a/drivers/gpu/drm/i915/gt/intel_ring_submission.c ++++ b/drivers/gpu/drm/i915/gt/intel_ring_submission.c +@@ -575,7 +575,6 @@ static int ring_context_alloc(struct intel_context *ce) + /* One ringbuffer to rule them all */ + GEM_BUG_ON(!engine->legacy.ring); + ce->ring = engine->legacy.ring; +- ce->timeline = intel_timeline_get(engine->legacy.timeline); + + GEM_BUG_ON(ce->state); + if (engine->context_size) { +@@ -588,6 +587,8 @@ static int ring_context_alloc(struct intel_context *ce) + ce->state = vma; + } + ++ ce->timeline = intel_timeline_get(engine->legacy.timeline); ++ + return 0; + } + +-- +2.39.5 + diff --git a/queue-6.12/drm-i915-selftests-change-mock_request-to-return-err.patch b/queue-6.12/drm-i915-selftests-change-mock_request-to-return-err.patch new file mode 100644 index 0000000000..5aab9a1a22 --- /dev/null +++ b/queue-6.12/drm-i915-selftests-change-mock_request-to-return-err.patch @@ -0,0 +1,107 @@ +From 35048e8fddf218ec631d53f3b7e34e5e751c12bd Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 10:21:58 -0500 +Subject: drm/i915/selftests: Change mock_request() to return error pointers + +From: Dan Carpenter + +[ Upstream commit caa7c7a76b78ce41d347003f84975125383e6b59 ] + +There was an error pointer vs NULL bug in __igt_breadcrumbs_smoketest(). +The __mock_request_alloc() function implements the +smoketest->request_alloc() function pointer. It was supposed to return +error pointers, but it propogates the NULL return from mock_request() +so in the event of a failure, it would lead to a NULL pointer +dereference. + +To fix this, change the mock_request() function to return error pointers +and update all the callers to expect that. + +Fixes: 52c0fdb25c7c ("drm/i915: Replace global breadcrumbs with per-context interrupt tracking") +Signed-off-by: Dan Carpenter +Reviewed-by: Rodrigo Vivi +Link: https://lore.kernel.org/r/685c1417.050a0220.696f5.5c05@mx.google.com +Signed-off-by: Rodrigo Vivi +(cherry picked from commit 778fa8ad5f0f23397d045c7ebca048ce8def1c43) +Signed-off-by: Joonas Lahtinen +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/i915/selftests/i915_request.c | 20 +++++++++---------- + drivers/gpu/drm/i915/selftests/mock_request.c | 2 +- + 2 files changed, 11 insertions(+), 11 deletions(-) + +diff --git a/drivers/gpu/drm/i915/selftests/i915_request.c b/drivers/gpu/drm/i915/selftests/i915_request.c +index acae30a04a947..0122719ee9218 100644 +--- a/drivers/gpu/drm/i915/selftests/i915_request.c ++++ b/drivers/gpu/drm/i915/selftests/i915_request.c +@@ -73,8 +73,8 @@ static int igt_add_request(void *arg) + /* Basic preliminary test to create a request and let it loose! 
*/ + + request = mock_request(rcs0(i915)->kernel_context, HZ / 10); +- if (!request) +- return -ENOMEM; ++ if (IS_ERR(request)) ++ return PTR_ERR(request); + + i915_request_add(request); + +@@ -91,8 +91,8 @@ static int igt_wait_request(void *arg) + /* Submit a request, then wait upon it */ + + request = mock_request(rcs0(i915)->kernel_context, T); +- if (!request) +- return -ENOMEM; ++ if (IS_ERR(request)) ++ return PTR_ERR(request); + + i915_request_get(request); + +@@ -160,8 +160,8 @@ static int igt_fence_wait(void *arg) + /* Submit a request, treat it as a fence and wait upon it */ + + request = mock_request(rcs0(i915)->kernel_context, T); +- if (!request) +- return -ENOMEM; ++ if (IS_ERR(request)) ++ return PTR_ERR(request); + + if (dma_fence_wait_timeout(&request->fence, false, T) != -ETIME) { + pr_err("fence wait success before submit (expected timeout)!\n"); +@@ -219,8 +219,8 @@ static int igt_request_rewind(void *arg) + GEM_BUG_ON(IS_ERR(ce)); + request = mock_request(ce, 2 * HZ); + intel_context_put(ce); +- if (!request) { +- err = -ENOMEM; ++ if (IS_ERR(request)) { ++ err = PTR_ERR(request); + goto err_context_0; + } + +@@ -237,8 +237,8 @@ static int igt_request_rewind(void *arg) + GEM_BUG_ON(IS_ERR(ce)); + vip = mock_request(ce, 0); + intel_context_put(ce); +- if (!vip) { +- err = -ENOMEM; ++ if (IS_ERR(vip)) { ++ err = PTR_ERR(vip); + goto err_context_1; + } + +diff --git a/drivers/gpu/drm/i915/selftests/mock_request.c b/drivers/gpu/drm/i915/selftests/mock_request.c +index 09f747228dff5..1b0cf073e9643 100644 +--- a/drivers/gpu/drm/i915/selftests/mock_request.c ++++ b/drivers/gpu/drm/i915/selftests/mock_request.c +@@ -35,7 +35,7 @@ mock_request(struct intel_context *ce, unsigned long delay) + /* NB the i915->requests slab cache is enlarged to fit mock_request */ + request = intel_context_create_request(ce); + if (IS_ERR(request)) +- return NULL; ++ return request; + + request->mock.delay = delay; + return request; +-- +2.39.5 + diff --git a/queue-6.12/drm-msm-fix-a-fence-leak-in-submit-error-path.patch b/queue-6.12/drm-msm-fix-a-fence-leak-in-submit-error-path.patch new file mode 100644 index 0000000000..b9d5c1ed00 --- /dev/null +++ b/queue-6.12/drm-msm-fix-a-fence-leak-in-submit-error-path.patch @@ -0,0 +1,45 @@ +From 4c232f53c2017759c38ac914bd20be4d5c6a1ede Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 14 May 2025 09:33:32 -0700 +Subject: drm/msm: Fix a fence leak in submit error path + +From: Rob Clark + +[ Upstream commit 5d319f75ccf7f0927425a7545aa1a22b3eedc189 ] + +In error paths, we could unref the submit without calling +drm_sched_entity_push_job(), so msm_job_free() will never get +called. Since drm_sched_job_cleanup() will NULL out the +s_fence, we can use that to detect this case. + +Signed-off-by: Rob Clark +Patchwork: https://patchwork.freedesktop.org/patch/653584/ +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/msm_gem_submit.c | 9 +++++++++ + 1 file changed, 9 insertions(+) + +diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c +index f775638d239a5..937c1f5d88cbb 100644 +--- a/drivers/gpu/drm/msm/msm_gem_submit.c ++++ b/drivers/gpu/drm/msm/msm_gem_submit.c +@@ -85,6 +85,15 @@ void __msm_gem_submit_destroy(struct kref *kref) + container_of(kref, struct msm_gem_submit, ref); + unsigned i; + ++ /* ++ * In error paths, we could unref the submit without calling ++ * drm_sched_entity_push_job(), so msm_job_free() will never ++ * get called. 
Since drm_sched_job_cleanup() will NULL out ++ * s_fence, we can use that to detect this case. ++ */ ++ if (submit->base.s_fence) ++ drm_sched_job_cleanup(&submit->base); ++ + if (submit->fence_id) { + spin_lock(&submit->queue->idr_lock); + idr_remove(&submit->queue->fence_idr, submit->fence_id); +-- +2.39.5 + diff --git a/queue-6.12/drm-msm-fix-another-leak-in-the-submit-error-path.patch b/queue-6.12/drm-msm-fix-another-leak-in-the-submit-error-path.patch new file mode 100644 index 0000000000..74d79fd123 --- /dev/null +++ b/queue-6.12/drm-msm-fix-another-leak-in-the-submit-error-path.patch @@ -0,0 +1,57 @@ +From 434cc1e0bee95b053d9222e04e3f81f015c3b7c4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 14 May 2025 09:33:33 -0700 +Subject: drm/msm: Fix another leak in the submit error path + +From: Rob Clark + +[ Upstream commit f681c2aa8676a890eacc84044717ab0fd26e058f ] + +put_unused_fd() doesn't free the installed file, if we've already done +fd_install(). So we need to also free the sync_file. + +Signed-off-by: Rob Clark +Patchwork: https://patchwork.freedesktop.org/patch/653583/ +Signed-off-by: Rob Clark +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/msm/msm_gem_submit.c | 8 ++++++-- + 1 file changed, 6 insertions(+), 2 deletions(-) + +diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c +index 937c1f5d88cbb..4b3a8ee8e278f 100644 +--- a/drivers/gpu/drm/msm/msm_gem_submit.c ++++ b/drivers/gpu/drm/msm/msm_gem_submit.c +@@ -667,6 +667,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + struct msm_ringbuffer *ring; + struct msm_submit_post_dep *post_deps = NULL; + struct drm_syncobj **syncobjs_to_reset = NULL; ++ struct sync_file *sync_file = NULL; + int out_fence_fd = -1; + unsigned i; + int ret; +@@ -877,7 +878,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + } + + if (ret == 0 && args->flags & MSM_SUBMIT_FENCE_FD_OUT) { +- struct sync_file *sync_file = sync_file_create(submit->user_fence); ++ sync_file = sync_file_create(submit->user_fence); + if (!sync_file) { + ret = -ENOMEM; + } else { +@@ -911,8 +912,11 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data, + out_unlock: + mutex_unlock(&queue->lock); + out_post_unlock: +- if (ret && (out_fence_fd >= 0)) ++ if (ret && (out_fence_fd >= 0)) { + put_unused_fd(out_fence_fd); ++ if (sync_file) ++ fput(sync_file->file); ++ } + + if (!IS_ERR_OR_NULL(submit)) { + msm_gem_submit_put(submit); +-- +2.39.5 + diff --git a/queue-6.12/drm-simpledrm-do-not-upcast-in-release-helpers.patch b/queue-6.12/drm-simpledrm-do-not-upcast-in-release-helpers.patch new file mode 100644 index 0000000000..be2257a280 --- /dev/null +++ b/queue-6.12/drm-simpledrm-do-not-upcast-in-release-helpers.patch @@ -0,0 +1,51 @@ +From 7b62fcb22df93c3646887aaa4ae3f558453adaae Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 7 Apr 2025 15:47:24 +0200 +Subject: drm/simpledrm: Do not upcast in release helpers + +From: Thomas Zimmermann + +[ Upstream commit d231cde7c84359fb18fb268cf6cff03b5bce48ff ] + +The res pointer passed to simpledrm_device_release_clocks() and +simpledrm_device_release_regulators() points to an instance of +struct simpledrm_device. No need to upcast from struct drm_device. +The upcast is harmless, as DRM device is the first field in struct +simpledrm_device. 
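The release callbacks receive whatever pointer was registered as the devm
action data, which in this driver is the struct simpledrm_device itself, so
no upcast from the DRM device is needed. A rough sketch of the registration
side (argument spelling illustrative, not quoted from the driver):

	/* 'sdev' is exactly what the callback later receives as 'res' */
	ret = devm_add_action_or_reset(dev->dev /* parent struct device */,
				       simpledrm_device_release_clocks, sdev);
	if (ret)
		return ret;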
+
+Signed-off-by: Thomas Zimmermann
+Fixes: 11e8f5fd223b ("drm: Add simpledrm driver")
+Cc: # v5.14+
+Reviewed-by: Javier Martinez Canillas
+Reviewed-by: Jocelyn Falempe
+Link: https://lore.kernel.org/r/20250407134753.985925-2-tzimmermann@suse.de
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/tiny/simpledrm.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/drivers/gpu/drm/tiny/simpledrm.c b/drivers/gpu/drm/tiny/simpledrm.c
+index d19e102894282..07abaf27315f7 100644
+--- a/drivers/gpu/drm/tiny/simpledrm.c
++++ b/drivers/gpu/drm/tiny/simpledrm.c
+@@ -284,7 +284,7 @@ static struct simpledrm_device *simpledrm_device_of_dev(struct drm_device *dev)
+
+ static void simpledrm_device_release_clocks(void *res)
+ {
+-	struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++	struct simpledrm_device *sdev = res;
+ 	unsigned int i;
+
+ 	for (i = 0; i < sdev->clk_count; ++i) {
+@@ -382,7 +382,7 @@ static int simpledrm_device_init_clocks(struct simpledrm_device *sdev)
+
+ static void simpledrm_device_release_regulators(void *res)
+ {
+-	struct simpledrm_device *sdev = simpledrm_device_of_dev(res);
++	struct simpledrm_device *sdev = res;
+ 	unsigned int i;
+
+ 	for (i = 0; i < sdev->regulator_count; ++i) {
+--
+2.39.5
+
diff --git a/queue-6.12/drm-xe-add-interface-to-request-physical-alignment-f.patch b/queue-6.12/drm-xe-add-interface-to-request-physical-alignment-f.patch
new file mode 100644
index 0000000000..79cf9cf056
--- /dev/null
+++ b/queue-6.12/drm-xe-add-interface-to-request-physical-alignment-f.patch
@@ -0,0 +1,181 @@
+From b61953b442cbce3fec31ddd294d2d9e9ea35b172 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 9 Oct 2024 18:19:46 +0300
+Subject: drm/xe: add interface to request physical alignment for buffer
+ objects
+
+From: Juha-Pekka Heikkila
+
+[ Upstream commit 3ad86ae1da97d0091f673f08846848714f6dd745 ]
+
+Add xe_bo_create_pin_map_at_aligned(), which augments
+xe_bo_create_pin_map_at() with an alignment parameter, allowing callers
+to pass the required alignment when it differs from the default.
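+
+A hypothetical caller (names and values here are illustrative, not
+taken from this patch) would request a specific physical alignment
+through the new parameter, with 0 keeping today's behaviour:
+
+	bo = xe_bo_create_pin_map_at_aligned(xe, tile, NULL, size, 0,
+					     ttm_bo_type_kernel,
+					     XE_BO_FLAG_VRAM_IF_DGFX(tile) |
+					     XE_BO_FLAG_GGTT,
+					     SZ_64K); /* 64K physical alignment */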
+ +Signed-off-by: Juha-Pekka Heikkila +Reviewed-by: Jonathan Cavitt +Signed-off-by: Mika Kahola +Link: https://patchwork.freedesktop.org/patch/msgid/20241009151947.2240099-2-juhapekka.heikkila@gmail.com +Stable-dep-of: f16873f42a06 ("drm/xe: move DPT l2 flush to a more sensible place") +Signed-off-by: Sasha Levin +--- + .../compat-i915-headers/gem/i915_gem_stolen.h | 2 +- + drivers/gpu/drm/xe/xe_bo.c | 29 +++++++++++++++---- + drivers/gpu/drm/xe/xe_bo.h | 8 ++++- + drivers/gpu/drm/xe/xe_bo_types.h | 5 ++++ + drivers/gpu/drm/xe/xe_ggtt.c | 2 +- + 5 files changed, 37 insertions(+), 9 deletions(-) + +diff --git a/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h b/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h +index cb6c7598824be..9c4cf050059ac 100644 +--- a/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h ++++ b/drivers/gpu/drm/xe/compat-i915-headers/gem/i915_gem_stolen.h +@@ -29,7 +29,7 @@ static inline int i915_gem_stolen_insert_node_in_range(struct xe_device *xe, + + bo = xe_bo_create_locked_range(xe, xe_device_get_root_tile(xe), + NULL, size, start, end, +- ttm_bo_type_kernel, flags); ++ ttm_bo_type_kernel, flags, 0); + if (IS_ERR(bo)) { + err = PTR_ERR(bo); + bo = NULL; +diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c +index 8acc4640f0a28..c92953c08d682 100644 +--- a/drivers/gpu/drm/xe/xe_bo.c ++++ b/drivers/gpu/drm/xe/xe_bo.c +@@ -1454,7 +1454,8 @@ static struct xe_bo * + __xe_bo_create_locked(struct xe_device *xe, + struct xe_tile *tile, struct xe_vm *vm, + size_t size, u64 start, u64 end, +- u16 cpu_caching, enum ttm_bo_type type, u32 flags) ++ u16 cpu_caching, enum ttm_bo_type type, u32 flags, ++ u64 alignment) + { + struct xe_bo *bo = NULL; + int err; +@@ -1483,6 +1484,8 @@ __xe_bo_create_locked(struct xe_device *xe, + if (IS_ERR(bo)) + return bo; + ++ bo->min_align = alignment; ++ + /* + * Note that instead of taking a reference no the drm_gpuvm_resv_bo(), + * to ensure the shared resv doesn't disappear under the bo, the bo +@@ -1523,16 +1526,18 @@ struct xe_bo * + xe_bo_create_locked_range(struct xe_device *xe, + struct xe_tile *tile, struct xe_vm *vm, + size_t size, u64 start, u64 end, +- enum ttm_bo_type type, u32 flags) ++ enum ttm_bo_type type, u32 flags, u64 alignment) + { +- return __xe_bo_create_locked(xe, tile, vm, size, start, end, 0, type, flags); ++ return __xe_bo_create_locked(xe, tile, vm, size, start, end, 0, type, ++ flags, alignment); + } + + struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile, + struct xe_vm *vm, size_t size, + enum ttm_bo_type type, u32 flags) + { +- return __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL, 0, type, flags); ++ return __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL, 0, type, ++ flags, 0); + } + + struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile, +@@ -1542,7 +1547,7 @@ struct xe_bo *xe_bo_create_user(struct xe_device *xe, struct xe_tile *tile, + { + struct xe_bo *bo = __xe_bo_create_locked(xe, tile, vm, size, 0, ~0ULL, + cpu_caching, ttm_bo_type_device, +- flags | XE_BO_FLAG_USER); ++ flags | XE_BO_FLAG_USER, 0); + if (!IS_ERR(bo)) + xe_bo_unlock_vm_held(bo); + +@@ -1565,6 +1570,17 @@ struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile + struct xe_vm *vm, + size_t size, u64 offset, + enum ttm_bo_type type, u32 flags) ++{ ++ return xe_bo_create_pin_map_at_aligned(xe, tile, vm, size, offset, ++ type, flags, 0); ++} ++ ++struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe, ++ 
struct xe_tile *tile, ++ struct xe_vm *vm, ++ size_t size, u64 offset, ++ enum ttm_bo_type type, u32 flags, ++ u64 alignment) + { + struct xe_bo *bo; + int err; +@@ -1576,7 +1592,8 @@ struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile + flags |= XE_BO_FLAG_GGTT; + + bo = xe_bo_create_locked_range(xe, tile, vm, size, start, end, type, +- flags | XE_BO_FLAG_NEEDS_CPU_ACCESS); ++ flags | XE_BO_FLAG_NEEDS_CPU_ACCESS, ++ alignment); + if (IS_ERR(bo)) + return bo; + +diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h +index d22269a230aa1..704f5068d0d0a 100644 +--- a/drivers/gpu/drm/xe/xe_bo.h ++++ b/drivers/gpu/drm/xe/xe_bo.h +@@ -77,7 +77,7 @@ struct xe_bo * + xe_bo_create_locked_range(struct xe_device *xe, + struct xe_tile *tile, struct xe_vm *vm, + size_t size, u64 start, u64 end, +- enum ttm_bo_type type, u32 flags); ++ enum ttm_bo_type type, u32 flags, u64 alignment); + struct xe_bo *xe_bo_create_locked(struct xe_device *xe, struct xe_tile *tile, + struct xe_vm *vm, size_t size, + enum ttm_bo_type type, u32 flags); +@@ -94,6 +94,12 @@ struct xe_bo *xe_bo_create_pin_map(struct xe_device *xe, struct xe_tile *tile, + struct xe_bo *xe_bo_create_pin_map_at(struct xe_device *xe, struct xe_tile *tile, + struct xe_vm *vm, size_t size, u64 offset, + enum ttm_bo_type type, u32 flags); ++struct xe_bo *xe_bo_create_pin_map_at_aligned(struct xe_device *xe, ++ struct xe_tile *tile, ++ struct xe_vm *vm, ++ size_t size, u64 offset, ++ enum ttm_bo_type type, u32 flags, ++ u64 alignment); + struct xe_bo *xe_bo_create_from_data(struct xe_device *xe, struct xe_tile *tile, + const void *data, size_t size, + enum ttm_bo_type type, u32 flags); +diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h +index 2ed558ac2264a..35372c46edfa5 100644 +--- a/drivers/gpu/drm/xe/xe_bo_types.h ++++ b/drivers/gpu/drm/xe/xe_bo_types.h +@@ -76,6 +76,11 @@ struct xe_bo { + + /** @vram_userfault_link: Link into @mem_access.vram_userfault.list */ + struct list_head vram_userfault_link; ++ ++ /** @min_align: minimum alignment needed for this BO if different ++ * from default ++ */ ++ u64 min_align; + }; + + #define intel_bo_to_drm_bo(bo) (&(bo)->ttm.base) +diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c +index e9820126feb96..9cb5760006a1c 100644 +--- a/drivers/gpu/drm/xe/xe_ggtt.c ++++ b/drivers/gpu/drm/xe/xe_ggtt.c +@@ -620,7 +620,7 @@ static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo, + u64 start, u64 end) + { + int err; +- u64 alignment = XE_PAGE_SIZE; ++ u64 alignment = bo->min_align > 0 ? bo->min_align : XE_PAGE_SIZE; + + if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K) + alignment = SZ_64K; +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-allow-bo-mapping-on-multiple-ggtts.patch b/queue-6.12/drm-xe-allow-bo-mapping-on-multiple-ggtts.patch new file mode 100644 index 0000000000..368199023e --- /dev/null +++ b/queue-6.12/drm-xe-allow-bo-mapping-on-multiple-ggtts.patch @@ -0,0 +1,356 @@ +From 5f8b80d09c74e3233d92a3c3ead0114e3dbe8cdd Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 19 Nov 2024 16:02:21 -0800 +Subject: drm/xe: Allow bo mapping on multiple ggtts + +From: Niranjana Vishwanathapura + +[ Upstream commit 5a3b0df25d6a78098d548213384665eeead608c9 ] + +Make bo->ggtt an array to support bo mapping on multiple ggtts. +Add XE_BO_FLAG_GGTTx flags to map the bo on ggtt of tile 'x'. 
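+
+As an illustrative sketch (not part of this patch), a BO that must be
+mapped in the GGTT of both tile 0 and tile 1 would combine the new
+flags with the existing XE_BO_FLAG_GGTT:
+
+	bo = xe_bo_create_pin_map(xe, tile0, NULL, size,
+				  ttm_bo_type_kernel,
+				  XE_BO_FLAG_GGTT |
+				  XE_BO_FLAG_GGTT0 |
+				  XE_BO_FLAG_GGTT1);
+
+An XE_BO_FLAG_GGTTx bit without XE_BO_FLAG_GGTT is rejected with
+-EINVAL by the new validity check.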
+ +Signed-off-by: Niranjana Vishwanathapura +Reviewed-by: Matthew Brost +Signed-off-by: John Harrison +Link: https://patchwork.freedesktop.org/patch/msgid/20241120000222.204095-2-John.C.Harrison@Intel.com +Stable-dep-of: f16873f42a06 ("drm/xe: move DPT l2 flush to a more sensible place") +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/display/xe_fb_pin.c | 12 ++++--- + drivers/gpu/drm/xe/xe_bo.c | 49 ++++++++++++++++++-------- + drivers/gpu/drm/xe/xe_bo.h | 32 ++++++++++++++--- + drivers/gpu/drm/xe/xe_bo_evict.c | 14 +++++--- + drivers/gpu/drm/xe/xe_bo_types.h | 5 +-- + drivers/gpu/drm/xe/xe_ggtt.c | 35 +++++++++--------- + 6 files changed, 101 insertions(+), 46 deletions(-) + +diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c +index b58fc4ba2aacb..972b7db52f785 100644 +--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c ++++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c +@@ -153,7 +153,7 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb, + } + + vma->dpt = dpt; +- vma->node = dpt->ggtt_node; ++ vma->node = dpt->ggtt_node[tile0->id]; + return 0; + } + +@@ -203,8 +203,8 @@ static int __xe_pin_fb_vma_ggtt(const struct intel_framebuffer *fb, + if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K) + align = max_t(u32, align, SZ_64K); + +- if (bo->ggtt_node && view->type == I915_GTT_VIEW_NORMAL) { +- vma->node = bo->ggtt_node; ++ if (bo->ggtt_node[ggtt->tile->id] && view->type == I915_GTT_VIEW_NORMAL) { ++ vma->node = bo->ggtt_node[ggtt->tile->id]; + } else if (view->type == I915_GTT_VIEW_NORMAL) { + u32 x, size = bo->ttm.base.size; + +@@ -333,10 +333,12 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb, + + static void __xe_unpin_fb_vma(struct i915_vma *vma) + { ++ u8 tile_id = vma->node->ggtt->tile->id; ++ + if (vma->dpt) + xe_bo_unpin_map_no_vm(vma->dpt); +- else if (!xe_ggtt_node_allocated(vma->bo->ggtt_node) || +- vma->bo->ggtt_node->base.start != vma->node->base.start) ++ else if (!xe_ggtt_node_allocated(vma->bo->ggtt_node[tile_id]) || ++ vma->bo->ggtt_node[tile_id]->base.start != vma->node->base.start) + xe_ggtt_node_remove(vma->node, false); + + ttm_bo_reserve(&vma->bo->ttm, false, false, NULL); +diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c +index c92953c08d682..5f745d9ed6cc2 100644 +--- a/drivers/gpu/drm/xe/xe_bo.c ++++ b/drivers/gpu/drm/xe/xe_bo.c +@@ -1130,6 +1130,8 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo) + { + struct xe_bo *bo = ttm_to_xe_bo(ttm_bo); + struct xe_device *xe = ttm_to_xe_device(ttm_bo->bdev); ++ struct xe_tile *tile; ++ u8 id; + + if (bo->ttm.base.import_attach) + drm_prime_gem_destroy(&bo->ttm.base, NULL); +@@ -1137,8 +1139,9 @@ static void xe_ttm_bo_destroy(struct ttm_buffer_object *ttm_bo) + + xe_assert(xe, list_empty(&ttm_bo->base.gpuva.list)); + +- if (bo->ggtt_node && bo->ggtt_node->base.size) +- xe_ggtt_remove_bo(bo->tile->mem.ggtt, bo); ++ for_each_tile(tile, xe, id) ++ if (bo->ggtt_node[id] && bo->ggtt_node[id]->base.size) ++ xe_ggtt_remove_bo(tile->mem.ggtt, bo); + + #ifdef CONFIG_PROC_FS + if (bo->client) +@@ -1308,6 +1311,10 @@ struct xe_bo *___xe_bo_create_locked(struct xe_device *xe, struct xe_bo *bo, + return ERR_PTR(-EINVAL); + } + ++ /* XE_BO_FLAG_GGTTx requires XE_BO_FLAG_GGTT also be set */ ++ if ((flags & XE_BO_FLAG_GGTT_ALL) && !(flags & XE_BO_FLAG_GGTT)) ++ return ERR_PTR(-EINVAL); ++ + if (flags & (XE_BO_FLAG_VRAM_MASK | XE_BO_FLAG_STOLEN) && + !(flags & XE_BO_FLAG_IGNORE_MIN_PAGE_SIZE) && + 
((xe->info.vram_flags & XE_VRAM_FLAGS_NEED64K) || +@@ -1498,19 +1505,29 @@ __xe_bo_create_locked(struct xe_device *xe, + bo->vm = vm; + + if (bo->flags & XE_BO_FLAG_GGTT) { +- if (!tile && flags & XE_BO_FLAG_STOLEN) +- tile = xe_device_get_root_tile(xe); ++ struct xe_tile *t; ++ u8 id; + +- xe_assert(xe, tile); ++ if (!(bo->flags & XE_BO_FLAG_GGTT_ALL)) { ++ if (!tile && flags & XE_BO_FLAG_STOLEN) ++ tile = xe_device_get_root_tile(xe); + +- if (flags & XE_BO_FLAG_FIXED_PLACEMENT) { +- err = xe_ggtt_insert_bo_at(tile->mem.ggtt, bo, +- start + bo->size, U64_MAX); +- } else { +- err = xe_ggtt_insert_bo(tile->mem.ggtt, bo); ++ xe_assert(xe, tile); ++ } ++ ++ for_each_tile(t, xe, id) { ++ if (t != tile && !(bo->flags & XE_BO_FLAG_GGTTx(t))) ++ continue; ++ ++ if (flags & XE_BO_FLAG_FIXED_PLACEMENT) { ++ err = xe_ggtt_insert_bo_at(t->mem.ggtt, bo, ++ start + bo->size, U64_MAX); ++ } else { ++ err = xe_ggtt_insert_bo(t->mem.ggtt, bo); ++ } ++ if (err) ++ goto err_unlock_put_bo; + } +- if (err) +- goto err_unlock_put_bo; + } + + return bo; +@@ -2372,14 +2389,18 @@ void xe_bo_put_commit(struct llist_head *deferred) + + void xe_bo_put(struct xe_bo *bo) + { ++ struct xe_tile *tile; ++ u8 id; ++ + might_sleep(); + if (bo) { + #ifdef CONFIG_PROC_FS + if (bo->client) + might_lock(&bo->client->bos_lock); + #endif +- if (bo->ggtt_node && bo->ggtt_node->ggtt) +- might_lock(&bo->ggtt_node->ggtt->lock); ++ for_each_tile(tile, xe_bo_device(bo), id) ++ if (bo->ggtt_node[id] && bo->ggtt_node[id]->ggtt) ++ might_lock(&bo->ggtt_node[id]->ggtt->lock); + drm_gem_object_put(&bo->ttm.base); + } + } +diff --git a/drivers/gpu/drm/xe/xe_bo.h b/drivers/gpu/drm/xe/xe_bo.h +index 704f5068d0d0a..d04159c598465 100644 +--- a/drivers/gpu/drm/xe/xe_bo.h ++++ b/drivers/gpu/drm/xe/xe_bo.h +@@ -39,10 +39,22 @@ + #define XE_BO_FLAG_NEEDS_64K BIT(15) + #define XE_BO_FLAG_NEEDS_2M BIT(16) + #define XE_BO_FLAG_GGTT_INVALIDATE BIT(17) ++#define XE_BO_FLAG_GGTT0 BIT(18) ++#define XE_BO_FLAG_GGTT1 BIT(19) ++#define XE_BO_FLAG_GGTT2 BIT(20) ++#define XE_BO_FLAG_GGTT3 BIT(21) ++#define XE_BO_FLAG_GGTT_ALL (XE_BO_FLAG_GGTT0 | \ ++ XE_BO_FLAG_GGTT1 | \ ++ XE_BO_FLAG_GGTT2 | \ ++ XE_BO_FLAG_GGTT3) ++ + /* this one is trigger internally only */ + #define XE_BO_FLAG_INTERNAL_TEST BIT(30) + #define XE_BO_FLAG_INTERNAL_64K BIT(31) + ++#define XE_BO_FLAG_GGTTx(tile) \ ++ (XE_BO_FLAG_GGTT0 << (tile)->id) ++ + #define XE_PTE_SHIFT 12 + #define XE_PAGE_SIZE (1 << XE_PTE_SHIFT) + #define XE_PTE_MASK (XE_PAGE_SIZE - 1) +@@ -194,14 +206,24 @@ xe_bo_main_addr(struct xe_bo *bo, size_t page_size) + } + + static inline u32 +-xe_bo_ggtt_addr(struct xe_bo *bo) ++__xe_bo_ggtt_addr(struct xe_bo *bo, u8 tile_id) + { +- if (XE_WARN_ON(!bo->ggtt_node)) ++ struct xe_ggtt_node *ggtt_node = bo->ggtt_node[tile_id]; ++ ++ if (XE_WARN_ON(!ggtt_node)) + return 0; + +- XE_WARN_ON(bo->ggtt_node->base.size > bo->size); +- XE_WARN_ON(bo->ggtt_node->base.start + bo->ggtt_node->base.size > (1ull << 32)); +- return bo->ggtt_node->base.start; ++ XE_WARN_ON(ggtt_node->base.size > bo->size); ++ XE_WARN_ON(ggtt_node->base.start + ggtt_node->base.size > (1ull << 32)); ++ return ggtt_node->base.start; ++} ++ ++static inline u32 ++xe_bo_ggtt_addr(struct xe_bo *bo) ++{ ++ xe_assert(xe_bo_device(bo), bo->tile); ++ ++ return __xe_bo_ggtt_addr(bo, bo->tile->id); + } + + int xe_bo_vmap(struct xe_bo *bo); +diff --git a/drivers/gpu/drm/xe/xe_bo_evict.c b/drivers/gpu/drm/xe/xe_bo_evict.c +index 8fb2be0610035..6a40eedd9db10 100644 +--- a/drivers/gpu/drm/xe/xe_bo_evict.c ++++ 
b/drivers/gpu/drm/xe/xe_bo_evict.c +@@ -152,11 +152,17 @@ int xe_bo_restore_kernel(struct xe_device *xe) + } + + if (bo->flags & XE_BO_FLAG_GGTT) { +- struct xe_tile *tile = bo->tile; ++ struct xe_tile *tile; ++ u8 id; + +- mutex_lock(&tile->mem.ggtt->lock); +- xe_ggtt_map_bo(tile->mem.ggtt, bo); +- mutex_unlock(&tile->mem.ggtt->lock); ++ for_each_tile(tile, xe, id) { ++ if (tile != bo->tile && !(bo->flags & XE_BO_FLAG_GGTTx(tile))) ++ continue; ++ ++ mutex_lock(&tile->mem.ggtt->lock); ++ xe_ggtt_map_bo(tile->mem.ggtt, bo); ++ mutex_unlock(&tile->mem.ggtt->lock); ++ } + } + + /* +diff --git a/drivers/gpu/drm/xe/xe_bo_types.h b/drivers/gpu/drm/xe/xe_bo_types.h +index 35372c46edfa5..aa298d33c2508 100644 +--- a/drivers/gpu/drm/xe/xe_bo_types.h ++++ b/drivers/gpu/drm/xe/xe_bo_types.h +@@ -13,6 +13,7 @@ + #include + #include + ++#include "xe_device_types.h" + #include "xe_ggtt_types.h" + + struct xe_device; +@@ -39,8 +40,8 @@ struct xe_bo { + struct ttm_place placements[XE_BO_MAX_PLACEMENTS]; + /** @placement: current placement for this BO */ + struct ttm_placement placement; +- /** @ggtt_node: GGTT node if this BO is mapped in the GGTT */ +- struct xe_ggtt_node *ggtt_node; ++ /** @ggtt_node: Array of GGTT nodes if this BO is mapped in the GGTTs */ ++ struct xe_ggtt_node *ggtt_node[XE_MAX_TILES_PER_DEVICE]; + /** @vmap: iosys map of this buffer */ + struct iosys_map vmap; + /** @ttm_kmap: TTM bo kmap object for internal use only. Keep off. */ +diff --git a/drivers/gpu/drm/xe/xe_ggtt.c b/drivers/gpu/drm/xe/xe_ggtt.c +index 9cb5760006a1c..76e1092f51d92 100644 +--- a/drivers/gpu/drm/xe/xe_ggtt.c ++++ b/drivers/gpu/drm/xe/xe_ggtt.c +@@ -605,10 +605,10 @@ void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) + u64 start; + u64 offset, pte; + +- if (XE_WARN_ON(!bo->ggtt_node)) ++ if (XE_WARN_ON(!bo->ggtt_node[ggtt->tile->id])) + return; + +- start = bo->ggtt_node->base.start; ++ start = bo->ggtt_node[ggtt->tile->id]->base.start; + + for (offset = 0; offset < bo->size; offset += XE_PAGE_SIZE) { + pte = ggtt->pt_ops->pte_encode_bo(bo, offset, pat_index); +@@ -619,15 +619,16 @@ void xe_ggtt_map_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) + static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo, + u64 start, u64 end) + { +- int err; + u64 alignment = bo->min_align > 0 ? 
bo->min_align : XE_PAGE_SIZE; ++ u8 tile_id = ggtt->tile->id; ++ int err; + + if (xe_bo_is_vram(bo) && ggtt->flags & XE_GGTT_FLAGS_64K) + alignment = SZ_64K; + +- if (XE_WARN_ON(bo->ggtt_node)) { ++ if (XE_WARN_ON(bo->ggtt_node[tile_id])) { + /* Someone's already inserted this BO in the GGTT */ +- xe_tile_assert(ggtt->tile, bo->ggtt_node->base.size == bo->size); ++ xe_tile_assert(ggtt->tile, bo->ggtt_node[tile_id]->base.size == bo->size); + return 0; + } + +@@ -637,19 +638,19 @@ static int __xe_ggtt_insert_bo_at(struct xe_ggtt *ggtt, struct xe_bo *bo, + + xe_pm_runtime_get_noresume(tile_to_xe(ggtt->tile)); + +- bo->ggtt_node = xe_ggtt_node_init(ggtt); +- if (IS_ERR(bo->ggtt_node)) { +- err = PTR_ERR(bo->ggtt_node); +- bo->ggtt_node = NULL; ++ bo->ggtt_node[tile_id] = xe_ggtt_node_init(ggtt); ++ if (IS_ERR(bo->ggtt_node[tile_id])) { ++ err = PTR_ERR(bo->ggtt_node[tile_id]); ++ bo->ggtt_node[tile_id] = NULL; + goto out; + } + + mutex_lock(&ggtt->lock); +- err = drm_mm_insert_node_in_range(&ggtt->mm, &bo->ggtt_node->base, bo->size, +- alignment, 0, start, end, 0); ++ err = drm_mm_insert_node_in_range(&ggtt->mm, &bo->ggtt_node[tile_id]->base, ++ bo->size, alignment, 0, start, end, 0); + if (err) { +- xe_ggtt_node_fini(bo->ggtt_node); +- bo->ggtt_node = NULL; ++ xe_ggtt_node_fini(bo->ggtt_node[tile_id]); ++ bo->ggtt_node[tile_id] = NULL; + } else { + xe_ggtt_map_bo(ggtt, bo); + } +@@ -698,13 +699,15 @@ int xe_ggtt_insert_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) + */ + void xe_ggtt_remove_bo(struct xe_ggtt *ggtt, struct xe_bo *bo) + { +- if (XE_WARN_ON(!bo->ggtt_node)) ++ u8 tile_id = ggtt->tile->id; ++ ++ if (XE_WARN_ON(!bo->ggtt_node[tile_id])) + return; + + /* This BO is not currently in the GGTT */ +- xe_tile_assert(ggtt->tile, bo->ggtt_node->base.size == bo->size); ++ xe_tile_assert(ggtt->tile, bo->ggtt_node[tile_id]->base.size == bo->size); + +- xe_ggtt_node_remove(bo->ggtt_node, ++ xe_ggtt_node_remove(bo->ggtt_node[tile_id], + bo->flags & XE_BO_FLAG_GGTT_INVALIDATE); + } + +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-fix-dsb-buffer-coherency.patch b/queue-6.12/drm-xe-fix-dsb-buffer-coherency.patch new file mode 100644 index 0000000000..76f3f553e4 --- /dev/null +++ b/queue-6.12/drm-xe-fix-dsb-buffer-coherency.patch @@ -0,0 +1,57 @@ +From a2366891825104aacf2057e07547d8ed45f59d04 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 13 Sep 2024 13:47:53 +0200 +Subject: drm/xe: Fix DSB buffer coherency +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Maarten Lankhorst + +[ Upstream commit 71a3161e9d7d2229cb4eefd4c49effb97caf3db3 ] + +Add the scanout flag to force WC caching, and add the memory barrier +where needed. 
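+
+The ordering problem, as a hedged sketch (surrounding calls
+abridged): writes through a write-combined mapping can linger in the
+CPU's write-combine buffer, so they must be drained before the MMIO
+write that makes the hardware read the DSB commands:
+
+	iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val);
+	/* ... more DSB buffer writes ... */
+	xe_device_wmb(xe);	/* drain WC writes: buffer vs MMIO ordering */
+	/* MMIO write that starts DSB execution may follow */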
+ +Reviewed-by: Matthew Auld +Reviewed-by: Ville Syrjälä +Link: https://patchwork.freedesktop.org/patch/msgid/20240913114754.7956-2-maarten.lankhorst@linux.intel.com +Signed-off-by: Maarten Lankhorst +Stable-dep-of: a4b1b51ae132 ("drm/xe: Move DSB l2 flush to a more sensible place") +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/display/xe_dsb_buffer.c | 9 +++++++-- + 1 file changed, 7 insertions(+), 2 deletions(-) + +diff --git a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c +index f99d901a3214f..f95375451e2fa 100644 +--- a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c ++++ b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c +@@ -48,11 +48,12 @@ bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *d + if (!vma) + return false; + ++ /* Set scanout flag for WC mapping */ + obj = xe_bo_create_pin_map(xe, xe_device_get_root_tile(xe), + NULL, PAGE_ALIGN(size), + ttm_bo_type_kernel, + XE_BO_FLAG_VRAM_IF_DGFX(xe_device_get_root_tile(xe)) | +- XE_BO_FLAG_GGTT); ++ XE_BO_FLAG_SCANOUT | XE_BO_FLAG_GGTT); + if (IS_ERR(obj)) { + kfree(vma); + return false; +@@ -73,5 +74,9 @@ void intel_dsb_buffer_cleanup(struct intel_dsb_buffer *dsb_buf) + + void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf) + { +- /* TODO: add xe specific flush_map() for dsb buffer object. */ ++ /* ++ * The memory barrier here is to ensure coherency of DSB vs MMIO, ++ * both for weak ordering archs and discrete cards. ++ */ ++ xe_device_wmb(dsb_buf->vma->bo->tile->xe); + } +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-guc-dead-ct-helper.patch b/queue-6.12/drm-xe-guc-dead-ct-helper.patch new file mode 100644 index 0000000000..cc18e19df4 --- /dev/null +++ b/queue-6.12/drm-xe-guc-dead-ct-helper.patch @@ -0,0 +1,673 @@ +From c295e4b62c83e55bedfd781cb503f9bafb9dbaed Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 2 Oct 2024 17:46:08 -0700 +Subject: drm/xe/guc: Dead CT helper + +From: John Harrison + +[ Upstream commit d2c5a5a926f43b2e42c5c955f917bad8ad6dd68c ] + +Add a worker function helper for asynchronously dumping state when an +internal/fatal error is detected in CT processing. Being asynchronous +is required to avoid deadlocks and scheduling-while-atomic or +process-stalled-for-too-long issues. Also check for a bunch more error +conditions and improve the handling of some existing checks. + +v2: Use compile time CONFIG check for new (but not directly CT_DEAD +related) checks and use unsigned int for a bitmask, rename +CT_DEAD_RESET to CT_DEAD_REARM and add some explaining comments, +rename 'hxg' macro parameter to 'ctb' - review feedback from Michal W. +Drop CT_DEAD_ALIVE as no need for a bitfield define to just set the +entire mask to zero. +v3: Fix kerneldoc +v4: Nullify some floating pointers after free. +v5: Add section headings and device info to make the state dump look +more like a devcoredump to allow parsing by the same tools (eventual +aim is to just call the devcoredump code itself, but that currently +requires an xe_sched_job, which is not available in the CT code). +v6: Fix potential for leaking snapshots with concurrent error +conditions (review feedback from Julia F). +v7: Don't complain about unexpected G2H messages yet because there is +a known issue causing them. Fix bit shift bug with v6 change. Add GT +id to fake coredump headers and use puts instead of printf. +v8: Disable the head mis-match check in g2h_read because it is failing +on various discrete platforms due to unknown reasons. 
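+
+An error site then reduces to a single macro invocation (sketch of
+the pattern this patch introduces, condensed from the H2G write
+path):
+
+	if (tail > h2g->info.size) {
+		xe_gt_err(gt, "CT write: tail out of range\n");
+		CT_DEAD(ct, &ct->ctbs.h2g, H2G_WRITE);
+		return -EPIPE;
+	}
+
+On CONFIG_DRM_XE_DEBUG builds this captures GuC log and CT snapshots
+and queues the asynchronous dump worker; otherwise it only marks the
+CTB as broken.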
+ +Signed-off-by: John Harrison +Reviewed-by: Julia Filipchuk +Link: https://patchwork.freedesktop.org/patch/msgid/20241003004611.2323493-9-John.C.Harrison@Intel.com +Stable-dep-of: ad40098da5c3 ("drm/xe/guc: Explicitly exit CT safe mode on unwind") +Signed-off-by: Sasha Levin +--- + .../drm/xe/abi/guc_communication_ctb_abi.h | 1 + + drivers/gpu/drm/xe/xe_guc.c | 2 +- + drivers/gpu/drm/xe/xe_guc_ct.c | 336 ++++++++++++++++-- + drivers/gpu/drm/xe/xe_guc_ct.h | 2 +- + drivers/gpu/drm/xe/xe_guc_ct_types.h | 23 ++ + 5 files changed, 335 insertions(+), 29 deletions(-) + +diff --git a/drivers/gpu/drm/xe/abi/guc_communication_ctb_abi.h b/drivers/gpu/drm/xe/abi/guc_communication_ctb_abi.h +index 8f86a16dc5777..f58198cf2cf63 100644 +--- a/drivers/gpu/drm/xe/abi/guc_communication_ctb_abi.h ++++ b/drivers/gpu/drm/xe/abi/guc_communication_ctb_abi.h +@@ -52,6 +52,7 @@ struct guc_ct_buffer_desc { + #define GUC_CTB_STATUS_OVERFLOW (1 << 0) + #define GUC_CTB_STATUS_UNDERFLOW (1 << 1) + #define GUC_CTB_STATUS_MISMATCH (1 << 2) ++#define GUC_CTB_STATUS_DISABLED (1 << 3) + u32 reserved[13]; + } __packed; + static_assert(sizeof(struct guc_ct_buffer_desc) == 64); +diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c +index c67d4807f37df..96373cdb366be 100644 +--- a/drivers/gpu/drm/xe/xe_guc.c ++++ b/drivers/gpu/drm/xe/xe_guc.c +@@ -1175,7 +1175,7 @@ void xe_guc_print_info(struct xe_guc *guc, struct drm_printer *p) + + xe_force_wake_put(gt_to_fw(gt), XE_FW_GT); + +- xe_guc_ct_print(&guc->ct, p, false); ++ xe_guc_ct_print(&guc->ct, p); + xe_guc_submit_print(guc, p); + } + +diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c +index 483c2b521a2d1..32d55be93ef30 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct.c ++++ b/drivers/gpu/drm/xe/xe_guc_ct.c +@@ -25,12 +25,48 @@ + #include "xe_gt_sriov_pf_monitor.h" + #include "xe_gt_tlb_invalidation.h" + #include "xe_guc.h" ++#include "xe_guc_log.h" + #include "xe_guc_relay.h" + #include "xe_guc_submit.h" + #include "xe_map.h" + #include "xe_pm.h" + #include "xe_trace_guc.h" + ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++enum { ++ /* Internal states, not error conditions */ ++ CT_DEAD_STATE_REARM, /* 0x0001 */ ++ CT_DEAD_STATE_CAPTURE, /* 0x0002 */ ++ ++ /* Error conditions */ ++ CT_DEAD_SETUP, /* 0x0004 */ ++ CT_DEAD_H2G_WRITE, /* 0x0008 */ ++ CT_DEAD_H2G_HAS_ROOM, /* 0x0010 */ ++ CT_DEAD_G2H_READ, /* 0x0020 */ ++ CT_DEAD_G2H_RECV, /* 0x0040 */ ++ CT_DEAD_G2H_RELEASE, /* 0x0080 */ ++ CT_DEAD_DEADLOCK, /* 0x0100 */ ++ CT_DEAD_PROCESS_FAILED, /* 0x0200 */ ++ CT_DEAD_FAST_G2H, /* 0x0400 */ ++ CT_DEAD_PARSE_G2H_RESPONSE, /* 0x0800 */ ++ CT_DEAD_PARSE_G2H_UNKNOWN, /* 0x1000 */ ++ CT_DEAD_PARSE_G2H_ORIGIN, /* 0x2000 */ ++ CT_DEAD_PARSE_G2H_TYPE, /* 0x4000 */ ++}; ++ ++static void ct_dead_worker_func(struct work_struct *w); ++static void ct_dead_capture(struct xe_guc_ct *ct, struct guc_ctb *ctb, u32 reason_code); ++ ++#define CT_DEAD(ct, ctb, reason_code) ct_dead_capture((ct), (ctb), CT_DEAD_##reason_code) ++#else ++#define CT_DEAD(ct, ctb, reason) \ ++ do { \ ++ struct guc_ctb *_ctb = (ctb); \ ++ if (_ctb) \ ++ _ctb->info.broken = true; \ ++ } while (0) ++#endif ++ + /* Used when a CT send wants to block and / or receive data */ + struct g2h_fence { + u32 *response_buffer; +@@ -183,6 +219,10 @@ int xe_guc_ct_init(struct xe_guc_ct *ct) + xa_init(&ct->fence_lookup); + INIT_WORK(&ct->g2h_worker, g2h_worker_func); + INIT_DELAYED_WORK(&ct->safe_mode_worker, safe_mode_worker_func); ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++ 
spin_lock_init(&ct->dead.lock); ++ INIT_WORK(&ct->dead.worker, ct_dead_worker_func); ++#endif + init_waitqueue_head(&ct->wq); + init_waitqueue_head(&ct->g2h_fence_wq); + +@@ -419,10 +459,22 @@ int xe_guc_ct_enable(struct xe_guc_ct *ct) + if (ct_needs_safe_mode(ct)) + ct_enter_safe_mode(ct); + ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++ /* ++ * The CT has now been reset so the dumper can be re-armed ++ * after any existing dead state has been dumped. ++ */ ++ spin_lock_irq(&ct->dead.lock); ++ if (ct->dead.reason) ++ ct->dead.reason |= (1 << CT_DEAD_STATE_REARM); ++ spin_unlock_irq(&ct->dead.lock); ++#endif ++ + return 0; + + err_out: + xe_gt_err(gt, "Failed to enable GuC CT (%pe)\n", ERR_PTR(err)); ++ CT_DEAD(ct, NULL, SETUP); + + return err; + } +@@ -469,6 +521,19 @@ static bool h2g_has_room(struct xe_guc_ct *ct, u32 cmd_len) + + if (cmd_len > h2g->info.space) { + h2g->info.head = desc_read(ct_to_xe(ct), h2g, head); ++ ++ if (h2g->info.head > h2g->info.size) { ++ struct xe_device *xe = ct_to_xe(ct); ++ u32 desc_status = desc_read(xe, h2g, status); ++ ++ desc_write(xe, h2g, status, desc_status | GUC_CTB_STATUS_OVERFLOW); ++ ++ xe_gt_err(ct_to_gt(ct), "CT: invalid head offset %u >= %u)\n", ++ h2g->info.head, h2g->info.size); ++ CT_DEAD(ct, h2g, H2G_HAS_ROOM); ++ return false; ++ } ++ + h2g->info.space = CIRC_SPACE(h2g->info.tail, h2g->info.head, + h2g->info.size) - + h2g->info.resv_space; +@@ -524,10 +589,24 @@ static void __g2h_reserve_space(struct xe_guc_ct *ct, u32 g2h_len, u32 num_g2h) + + static void __g2h_release_space(struct xe_guc_ct *ct, u32 g2h_len) + { ++ bool bad = false; ++ + lockdep_assert_held(&ct->fast_lock); +- xe_gt_assert(ct_to_gt(ct), ct->ctbs.g2h.info.space + g2h_len <= +- ct->ctbs.g2h.info.size - ct->ctbs.g2h.info.resv_space); +- xe_gt_assert(ct_to_gt(ct), ct->g2h_outstanding); ++ ++ bad = ct->ctbs.g2h.info.space + g2h_len > ++ ct->ctbs.g2h.info.size - ct->ctbs.g2h.info.resv_space; ++ bad |= !ct->g2h_outstanding; ++ ++ if (bad) { ++ xe_gt_err(ct_to_gt(ct), "Invalid G2H release: %d + %d vs %d - %d -> %d vs %d, outstanding = %d!\n", ++ ct->ctbs.g2h.info.space, g2h_len, ++ ct->ctbs.g2h.info.size, ct->ctbs.g2h.info.resv_space, ++ ct->ctbs.g2h.info.space + g2h_len, ++ ct->ctbs.g2h.info.size - ct->ctbs.g2h.info.resv_space, ++ ct->g2h_outstanding); ++ CT_DEAD(ct, &ct->ctbs.g2h, G2H_RELEASE); ++ return; ++ } + + ct->ctbs.g2h.info.space += g2h_len; + if (!--ct->g2h_outstanding) +@@ -554,12 +633,43 @@ static int h2g_write(struct xe_guc_ct *ct, const u32 *action, u32 len, + u32 full_len; + struct iosys_map map = IOSYS_MAP_INIT_OFFSET(&h2g->cmds, + tail * sizeof(u32)); ++ u32 desc_status; + + full_len = len + GUC_CTB_HDR_LEN; + + lockdep_assert_held(&ct->lock); + xe_gt_assert(gt, full_len <= GUC_CTB_MSG_MAX_LEN); +- xe_gt_assert(gt, tail <= h2g->info.size); ++ ++ desc_status = desc_read(xe, h2g, status); ++ if (desc_status) { ++ xe_gt_err(gt, "CT write: non-zero status: %u\n", desc_status); ++ goto corrupted; ++ } ++ ++ if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) { ++ u32 desc_tail = desc_read(xe, h2g, tail); ++ u32 desc_head = desc_read(xe, h2g, head); ++ ++ if (tail != desc_tail) { ++ desc_write(xe, h2g, status, desc_status | GUC_CTB_STATUS_MISMATCH); ++ xe_gt_err(gt, "CT write: tail was modified %u != %u\n", desc_tail, tail); ++ goto corrupted; ++ } ++ ++ if (tail > h2g->info.size) { ++ desc_write(xe, h2g, status, desc_status | GUC_CTB_STATUS_OVERFLOW); ++ xe_gt_err(gt, "CT write: tail out of range: %u vs %u\n", ++ tail, h2g->info.size); ++ goto corrupted; ++ } ++ ++ if (desc_head >= 
h2g->info.size) { ++ desc_write(xe, h2g, status, desc_status | GUC_CTB_STATUS_OVERFLOW); ++ xe_gt_err(gt, "CT write: invalid head offset %u >= %u)\n", ++ desc_head, h2g->info.size); ++ goto corrupted; ++ } ++ } + + /* Command will wrap, zero fill (NOPs), return and check credits again */ + if (tail + full_len > h2g->info.size) { +@@ -612,6 +722,10 @@ static int h2g_write(struct xe_guc_ct *ct, const u32 *action, u32 len, + desc_read(xe, h2g, head), h2g->info.tail); + + return 0; ++ ++corrupted: ++ CT_DEAD(ct, &ct->ctbs.h2g, H2G_WRITE); ++ return -EPIPE; + } + + /* +@@ -719,7 +833,6 @@ static int guc_ct_send_locked(struct xe_guc_ct *ct, const u32 *action, u32 len, + { + struct xe_device *xe = ct_to_xe(ct); + struct xe_gt *gt = ct_to_gt(ct); +- struct drm_printer p = xe_gt_info_printer(gt); + unsigned int sleep_period_ms = 1; + int ret; + +@@ -772,8 +885,13 @@ static int guc_ct_send_locked(struct xe_guc_ct *ct, const u32 *action, u32 len, + goto broken; + #undef g2h_avail + +- if (dequeue_one_g2h(ct) < 0) ++ ret = dequeue_one_g2h(ct); ++ if (ret < 0) { ++ if (ret != -ECANCELED) ++ xe_gt_err(ct_to_gt(ct), "CTB receive failed (%pe)", ++ ERR_PTR(ret)); + goto broken; ++ } + + goto try_again; + } +@@ -782,8 +900,7 @@ static int guc_ct_send_locked(struct xe_guc_ct *ct, const u32 *action, u32 len, + + broken: + xe_gt_err(gt, "No forward process on H2G, reset required\n"); +- xe_guc_ct_print(ct, &p, true); +- ct->ctbs.h2g.info.broken = true; ++ CT_DEAD(ct, &ct->ctbs.h2g, DEADLOCK); + + return -EDEADLK; + } +@@ -1049,6 +1166,7 @@ static int parse_g2h_response(struct xe_guc_ct *ct, u32 *msg, u32 len) + else + xe_gt_err(gt, "unexpected response %u for FAST_REQ H2G fence 0x%x!\n", + type, fence); ++ CT_DEAD(ct, NULL, PARSE_G2H_RESPONSE); + + return -EPROTO; + } +@@ -1056,6 +1174,7 @@ static int parse_g2h_response(struct xe_guc_ct *ct, u32 *msg, u32 len) + g2h_fence = xa_erase(&ct->fence_lookup, fence); + if (unlikely(!g2h_fence)) { + /* Don't tear down channel, as send could've timed out */ ++ /* CT_DEAD(ct, NULL, PARSE_G2H_UNKNOWN); */ + xe_gt_warn(gt, "G2H fence (%u) not found!\n", fence); + g2h_release_space(ct, GUC_CTB_HXG_MSG_MAX_LEN); + return 0; +@@ -1100,7 +1219,7 @@ static int parse_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len) + if (unlikely(origin != GUC_HXG_ORIGIN_GUC)) { + xe_gt_err(gt, "G2H channel broken on read, origin=%u, reset required\n", + origin); +- ct->ctbs.g2h.info.broken = true; ++ CT_DEAD(ct, &ct->ctbs.g2h, PARSE_G2H_ORIGIN); + + return -EPROTO; + } +@@ -1118,7 +1237,7 @@ static int parse_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len) + default: + xe_gt_err(gt, "G2H channel broken on read, type=%u, reset required\n", + type); +- ct->ctbs.g2h.info.broken = true; ++ CT_DEAD(ct, &ct->ctbs.g2h, PARSE_G2H_TYPE); + + ret = -EOPNOTSUPP; + } +@@ -1195,9 +1314,11 @@ static int process_g2h_msg(struct xe_guc_ct *ct, u32 *msg, u32 len) + xe_gt_err(gt, "unexpected G2H action 0x%04x\n", action); + } + +- if (ret) ++ if (ret) { + xe_gt_err(gt, "G2H action 0x%04x failed (%pe)\n", + action, ERR_PTR(ret)); ++ CT_DEAD(ct, NULL, PROCESS_FAILED); ++ } + + return 0; + } +@@ -1207,7 +1328,7 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path) + struct xe_device *xe = ct_to_xe(ct); + struct xe_gt *gt = ct_to_gt(ct); + struct guc_ctb *g2h = &ct->ctbs.g2h; +- u32 tail, head, len; ++ u32 tail, head, len, desc_status; + s32 avail; + u32 action; + u32 *hxg; +@@ -1226,6 +1347,63 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path) + + xe_gt_assert(gt, 
xe_guc_ct_enabled(ct)); + ++ desc_status = desc_read(xe, g2h, status); ++ if (desc_status) { ++ if (desc_status & GUC_CTB_STATUS_DISABLED) { ++ /* ++ * Potentially valid if a CLIENT_RESET request resulted in ++ * contexts/engines being reset. But should never happen as ++ * no contexts should be active when CLIENT_RESET is sent. ++ */ ++ xe_gt_err(gt, "CT read: unexpected G2H after GuC has stopped!\n"); ++ desc_status &= ~GUC_CTB_STATUS_DISABLED; ++ } ++ ++ if (desc_status) { ++ xe_gt_err(gt, "CT read: non-zero status: %u\n", desc_status); ++ goto corrupted; ++ } ++ } ++ ++ if (IS_ENABLED(CONFIG_DRM_XE_DEBUG)) { ++ u32 desc_tail = desc_read(xe, g2h, tail); ++ /* ++ u32 desc_head = desc_read(xe, g2h, head); ++ ++ * info.head and desc_head are updated back-to-back at the end of ++ * this function and nowhere else. Hence, they cannot be different ++ * unless two g2h_read calls are running concurrently. Which is not ++ * possible because it is guarded by ct->fast_lock. And yet, some ++ * discrete platforms are reguarly hitting this error :(. ++ * ++ * desc_head rolling backwards shouldn't cause any noticeable ++ * problems - just a delay in GuC being allowed to proceed past that ++ * point in the queue. So for now, just disable the error until it ++ * can be root caused. ++ * ++ if (g2h->info.head != desc_head) { ++ desc_write(xe, g2h, status, desc_status | GUC_CTB_STATUS_MISMATCH); ++ xe_gt_err(gt, "CT read: head was modified %u != %u\n", ++ desc_head, g2h->info.head); ++ goto corrupted; ++ } ++ */ ++ ++ if (g2h->info.head > g2h->info.size) { ++ desc_write(xe, g2h, status, desc_status | GUC_CTB_STATUS_OVERFLOW); ++ xe_gt_err(gt, "CT read: head out of range: %u vs %u\n", ++ g2h->info.head, g2h->info.size); ++ goto corrupted; ++ } ++ ++ if (desc_tail >= g2h->info.size) { ++ desc_write(xe, g2h, status, desc_status | GUC_CTB_STATUS_OVERFLOW); ++ xe_gt_err(gt, "CT read: invalid tail offset %u >= %u)\n", ++ desc_tail, g2h->info.size); ++ goto corrupted; ++ } ++ } ++ + /* Calculate DW available to read */ + tail = desc_read(xe, g2h, tail); + avail = tail - g2h->info.head; +@@ -1242,9 +1420,7 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path) + if (len > avail) { + xe_gt_err(gt, "G2H channel broken on read, avail=%d, len=%d, reset required\n", + avail, len); +- g2h->info.broken = true; +- +- return -EPROTO; ++ goto corrupted; + } + + head = (g2h->info.head + 1) % g2h->info.size; +@@ -1290,6 +1466,10 @@ static int g2h_read(struct xe_guc_ct *ct, u32 *msg, bool fast_path) + action, len, g2h->info.head, tail); + + return len; ++ ++corrupted: ++ CT_DEAD(ct, &ct->ctbs.g2h, G2H_READ); ++ return -EPROTO; + } + + static void g2h_fast_path(struct xe_guc_ct *ct, u32 *msg, u32 len) +@@ -1316,9 +1496,11 @@ static void g2h_fast_path(struct xe_guc_ct *ct, u32 *msg, u32 len) + xe_gt_warn(gt, "NOT_POSSIBLE"); + } + +- if (ret) ++ if (ret) { + xe_gt_err(gt, "G2H action 0x%04x failed (%pe)\n", + action, ERR_PTR(ret)); ++ CT_DEAD(ct, NULL, FAST_G2H); ++ } + } + + /** +@@ -1378,7 +1560,6 @@ static int dequeue_one_g2h(struct xe_guc_ct *ct) + + static void receive_g2h(struct xe_guc_ct *ct) + { +- struct xe_gt *gt = ct_to_gt(ct); + bool ongoing; + int ret; + +@@ -1415,9 +1596,8 @@ static void receive_g2h(struct xe_guc_ct *ct) + mutex_unlock(&ct->lock); + + if (unlikely(ret == -EPROTO || ret == -EOPNOTSUPP)) { +- struct drm_printer p = xe_gt_info_printer(gt); +- +- xe_guc_ct_print(ct, &p, false); ++ xe_gt_err(ct_to_gt(ct), "CT dequeue failed: %d", ret); ++ CT_DEAD(ct, NULL, G2H_RECV); + kick_reset(ct); + 
} + } while (ret == 1); +@@ -1445,9 +1625,8 @@ static void guc_ctb_snapshot_capture(struct xe_device *xe, struct guc_ctb *ctb, + + snapshot->cmds = kmalloc_array(ctb->info.size, sizeof(u32), + atomic ? GFP_ATOMIC : GFP_KERNEL); +- + if (!snapshot->cmds) { +- drm_err(&xe->drm, "Skipping CTB commands snapshot. Only CTB info will be available.\n"); ++ drm_err(&xe->drm, "Skipping CTB commands snapshot. Only CT info will be available.\n"); + return; + } + +@@ -1528,7 +1707,7 @@ struct xe_guc_ct_snapshot *xe_guc_ct_snapshot_capture(struct xe_guc_ct *ct, + atomic ? GFP_ATOMIC : GFP_KERNEL); + + if (!snapshot) { +- drm_err(&xe->drm, "Skipping CTB snapshot entirely.\n"); ++ xe_gt_err(ct_to_gt(ct), "Skipping CTB snapshot entirely.\n"); + return NULL; + } + +@@ -1592,16 +1771,119 @@ void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot) + * xe_guc_ct_print - GuC CT Print. + * @ct: GuC CT. + * @p: drm_printer where it will be printed out. +- * @atomic: Boolean to indicate if this is called from atomic context like +- * reset or CTB handler or from some regular path like debugfs. + * + * This function quickly capture a snapshot and immediately print it out. + */ +-void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool atomic) ++void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p) + { + struct xe_guc_ct_snapshot *snapshot; + +- snapshot = xe_guc_ct_snapshot_capture(ct, atomic); ++ snapshot = xe_guc_ct_snapshot_capture(ct, false); + xe_guc_ct_snapshot_print(snapshot, p); + xe_guc_ct_snapshot_free(snapshot); + } ++ ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++static void ct_dead_capture(struct xe_guc_ct *ct, struct guc_ctb *ctb, u32 reason_code) ++{ ++ struct xe_guc_log_snapshot *snapshot_log; ++ struct xe_guc_ct_snapshot *snapshot_ct; ++ struct xe_guc *guc = ct_to_guc(ct); ++ unsigned long flags; ++ bool have_capture; ++ ++ if (ctb) ++ ctb->info.broken = true; ++ ++ /* Ignore further errors after the first dump until a reset */ ++ if (ct->dead.reported) ++ return; ++ ++ spin_lock_irqsave(&ct->dead.lock, flags); ++ ++ /* And only capture one dump at a time */ ++ have_capture = ct->dead.reason & (1 << CT_DEAD_STATE_CAPTURE); ++ ct->dead.reason |= (1 << reason_code) | ++ (1 << CT_DEAD_STATE_CAPTURE); ++ ++ spin_unlock_irqrestore(&ct->dead.lock, flags); ++ ++ if (have_capture) ++ return; ++ ++ snapshot_log = xe_guc_log_snapshot_capture(&guc->log, true); ++ snapshot_ct = xe_guc_ct_snapshot_capture((ct), true); ++ ++ spin_lock_irqsave(&ct->dead.lock, flags); ++ ++ if (ct->dead.snapshot_log || ct->dead.snapshot_ct) { ++ xe_gt_err(ct_to_gt(ct), "Got unexpected dead CT capture!\n"); ++ xe_guc_log_snapshot_free(snapshot_log); ++ xe_guc_ct_snapshot_free(snapshot_ct); ++ } else { ++ ct->dead.snapshot_log = snapshot_log; ++ ct->dead.snapshot_ct = snapshot_ct; ++ } ++ ++ spin_unlock_irqrestore(&ct->dead.lock, flags); ++ ++ queue_work(system_unbound_wq, &(ct)->dead.worker); ++} ++ ++static void ct_dead_print(struct xe_dead_ct *dead) ++{ ++ struct xe_guc_ct *ct = container_of(dead, struct xe_guc_ct, dead); ++ struct xe_device *xe = ct_to_xe(ct); ++ struct xe_gt *gt = ct_to_gt(ct); ++ static int g_count; ++ struct drm_printer ip = xe_gt_info_printer(gt); ++ struct drm_printer lp = drm_line_printer(&ip, "Capture", ++g_count); ++ ++ if (!dead->reason) { ++ xe_gt_err(gt, "CTB is dead for no reason!?\n"); ++ return; ++ } ++ ++ drm_printf(&lp, "CTB is dead - reason=0x%X\n", dead->reason); ++ ++ /* Can't generate a genuine core dump at this point, so just do the good bits */ ++ drm_puts(&lp, 
"**** Xe Device Coredump ****\n"); ++ xe_device_snapshot_print(xe, &lp); ++ ++ drm_printf(&lp, "**** GT #%d ****\n", gt->info.id); ++ drm_printf(&lp, "\tTile: %d\n", gt->tile->id); ++ ++ drm_puts(&lp, "**** GuC Log ****\n"); ++ xe_guc_log_snapshot_print(dead->snapshot_log, &lp); ++ ++ drm_puts(&lp, "**** GuC CT ****\n"); ++ xe_guc_ct_snapshot_print(dead->snapshot_ct, &lp); ++ ++ drm_puts(&lp, "Done.\n"); ++} ++ ++static void ct_dead_worker_func(struct work_struct *w) ++{ ++ struct xe_guc_ct *ct = container_of(w, struct xe_guc_ct, dead.worker); ++ ++ if (!ct->dead.reported) { ++ ct->dead.reported = true; ++ ct_dead_print(&ct->dead); ++ } ++ ++ spin_lock_irq(&ct->dead.lock); ++ ++ xe_guc_log_snapshot_free(ct->dead.snapshot_log); ++ ct->dead.snapshot_log = NULL; ++ xe_guc_ct_snapshot_free(ct->dead.snapshot_ct); ++ ct->dead.snapshot_ct = NULL; ++ ++ if (ct->dead.reason & (1 << CT_DEAD_STATE_REARM)) { ++ /* A reset has occurred so re-arm the error reporting */ ++ ct->dead.reason = 0; ++ ct->dead.reported = false; ++ } ++ ++ spin_unlock_irq(&ct->dead.lock); ++} ++#endif +diff --git a/drivers/gpu/drm/xe/xe_guc_ct.h b/drivers/gpu/drm/xe/xe_guc_ct.h +index 13e316668e901..c7ac9407b861e 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct.h ++++ b/drivers/gpu/drm/xe/xe_guc_ct.h +@@ -21,7 +21,7 @@ xe_guc_ct_snapshot_capture(struct xe_guc_ct *ct, bool atomic); + void xe_guc_ct_snapshot_print(struct xe_guc_ct_snapshot *snapshot, + struct drm_printer *p); + void xe_guc_ct_snapshot_free(struct xe_guc_ct_snapshot *snapshot); +-void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p, bool atomic); ++void xe_guc_ct_print(struct xe_guc_ct *ct, struct drm_printer *p); + + static inline bool xe_guc_ct_initialized(struct xe_guc_ct *ct) + { +diff --git a/drivers/gpu/drm/xe/xe_guc_ct_types.h b/drivers/gpu/drm/xe/xe_guc_ct_types.h +index 761cb90312984..85e127ec91d7a 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct_types.h ++++ b/drivers/gpu/drm/xe/xe_guc_ct_types.h +@@ -86,6 +86,24 @@ enum xe_guc_ct_state { + XE_GUC_CT_STATE_ENABLED, + }; + ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++/** struct xe_dead_ct - Information for debugging a dead CT */ ++struct xe_dead_ct { ++ /** @lock: protects memory allocation/free operations, and @reason updates */ ++ spinlock_t lock; ++ /** @reason: bit mask of CT_DEAD_* reason codes */ ++ unsigned int reason; ++ /** @reported: for preventing multiple dumps per error sequence */ ++ bool reported; ++ /** @worker: worker thread to get out of interrupt context before dumping */ ++ struct work_struct worker; ++ /** snapshot_ct: copy of CT state and CTB content at point of error */ ++ struct xe_guc_ct_snapshot *snapshot_ct; ++ /** snapshot_log: copy of GuC log at point of error */ ++ struct xe_guc_log_snapshot *snapshot_log; ++}; ++#endif ++ + /** + * struct xe_guc_ct - GuC command transport (CT) layer + * +@@ -128,6 +146,11 @@ struct xe_guc_ct { + u32 msg[GUC_CTB_MSG_MAX_LEN]; + /** @fast_msg: Message buffer */ + u32 fast_msg[GUC_CTB_MSG_MAX_LEN]; ++ ++#if IS_ENABLED(CONFIG_DRM_XE_DEBUG) ++ /** @dead: information for debugging dead CTs */ ++ struct xe_dead_ct dead; ++#endif + }; + + #endif +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-guc-explicitly-exit-ct-safe-mode-on-unwind.patch b/queue-6.12/drm-xe-guc-explicitly-exit-ct-safe-mode-on-unwind.patch new file mode 100644 index 0000000000..0b63db6012 --- /dev/null +++ b/queue-6.12/drm-xe-guc-explicitly-exit-ct-safe-mode-on-unwind.patch @@ -0,0 +1,79 @@ +From 0a6d5eb5ea0d46eadd4466fd3a16e189c58ea82b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: 
Fri, 13 Jun 2025 00:09:37 +0200 +Subject: drm/xe/guc: Explicitly exit CT safe mode on unwind +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Michal Wajdeczko + +[ Upstream commit ad40098da5c3b43114d860a5b5740e7204158534 ] + +During driver probe we might be briefly using CT safe mode, which +is based on a delayed work, but usually we are able to stop this +once we have IRQ fully operational. However, if we abort the probe +quite early then during unwind we might try to destroy the workqueue +while there is still a pending delayed work that attempts to restart +itself which triggers a WARN. + +This was recently observed during unsuccessful VF initialization: + + [ ] xe 0000:00:02.1: probe with driver xe failed with error -62 + [ ] ------------[ cut here ]------------ + [ ] workqueue: cannot queue safe_mode_worker_func [xe] on wq xe-g2h-wq + [ ] WARNING: CPU: 9 PID: 0 at kernel/workqueue.c:2257 __queue_work+0x287/0x710 + [ ] RIP: 0010:__queue_work+0x287/0x710 + [ ] Call Trace: + [ ] delayed_work_timer_fn+0x19/0x30 + [ ] call_timer_fn+0xa1/0x2a0 + +Exit the CT safe mode on unwind to avoid that warning. + +Fixes: 09b286950f29 ("drm/xe/guc: Allow CTB G2H processing without G2H IRQ") +Signed-off-by: Michal Wajdeczko +Cc: Matthew Brost +Reviewed-by: Matthew Brost +Link: https://lore.kernel.org/r/20250612220937.857-3-michal.wajdeczko@intel.com +(cherry picked from commit 2ddbb73ec20b98e70a5200cb85deade22ccea2ec) +Signed-off-by: Thomas Hellström +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/xe_guc_ct.c | 10 ++++++---- + 1 file changed, 6 insertions(+), 4 deletions(-) + +diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c +index 32d55be93ef30..f1ce4e14dcb5f 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct.c ++++ b/drivers/gpu/drm/xe/xe_guc_ct.c +@@ -32,6 +32,11 @@ + #include "xe_pm.h" + #include "xe_trace_guc.h" + ++static void receive_g2h(struct xe_guc_ct *ct); ++static void g2h_worker_func(struct work_struct *w); ++static void safe_mode_worker_func(struct work_struct *w); ++static void ct_exit_safe_mode(struct xe_guc_ct *ct); ++ + #if IS_ENABLED(CONFIG_DRM_XE_DEBUG) + enum { + /* Internal states, not error conditions */ +@@ -183,14 +188,11 @@ static void guc_ct_fini(struct drm_device *drm, void *arg) + { + struct xe_guc_ct *ct = arg; + ++ ct_exit_safe_mode(ct); + destroy_workqueue(ct->g2h_wq); + xa_destroy(&ct->fence_lookup); + } + +-static void receive_g2h(struct xe_guc_ct *ct); +-static void g2h_worker_func(struct work_struct *w); +-static void safe_mode_worker_func(struct work_struct *w); +- + static void primelockdep(struct xe_guc_ct *ct) + { + if (!IS_ENABLED(CONFIG_LOCKDEP)) +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-move-dpt-l2-flush-to-a-more-sensible-place.patch b/queue-6.12/drm-xe-move-dpt-l2-flush-to-a-more-sensible-place.patch new file mode 100644 index 0000000000..fc18518c27 --- /dev/null +++ b/queue-6.12/drm-xe-move-dpt-l2-flush-to-a-more-sensible-place.patch @@ -0,0 +1,56 @@ +From 0286bce1acd2022f44574671b6fa43e91768a48d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Jun 2025 11:45:48 +0100 +Subject: drm/xe: move DPT l2 flush to a more sensible place +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Matthew Auld + +[ Upstream commit f16873f42a06b620669d48a4b5c3f888cb3653a1 ] + +Only need the flush for DPT host updates here. Normal GGTT updates don't +need special flush. 
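+
+Abridged view of the result (illustration only): the flush now sits
+directly after the DPT writes it is meant to order, instead of in the
+generic pin path:
+
+	/* __xe_pin_fb_vma_dpt() */
+	vma->dpt = dpt;
+	vma->node = dpt->ggtt_node[tile0->id];
+	xe_device_l2_flush(xe);	/* orders the DPT host writes only */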
+ +Fixes: 01570b446939 ("drm/xe/bmg: implement Wa_16023588340") +Signed-off-by: Matthew Auld +Cc: Maarten Lankhorst +Cc: stable@vger.kernel.org # v6.12+ +Reviewed-by: Ville Syrjälä +Reviewed-by: Lucas De Marchi +Link: https://lore.kernel.org/r/20250606104546.1996818-4-matthew.auld@intel.com +Signed-off-by: Lucas De Marchi +(cherry picked from commit 35db1da40c8cfd7511dc42f342a133601eb45449) +Signed-off-by: Thomas Hellström +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/display/xe_fb_pin.c | 5 +++-- + 1 file changed, 3 insertions(+), 2 deletions(-) + +diff --git a/drivers/gpu/drm/xe/display/xe_fb_pin.c b/drivers/gpu/drm/xe/display/xe_fb_pin.c +index 972b7db52f785..0558b106f8b60 100644 +--- a/drivers/gpu/drm/xe/display/xe_fb_pin.c ++++ b/drivers/gpu/drm/xe/display/xe_fb_pin.c +@@ -154,6 +154,9 @@ static int __xe_pin_fb_vma_dpt(const struct intel_framebuffer *fb, + + vma->dpt = dpt; + vma->node = dpt->ggtt_node[tile0->id]; ++ ++ /* Ensure DPT writes are flushed */ ++ xe_device_l2_flush(xe); + return 0; + } + +@@ -318,8 +321,6 @@ static struct i915_vma *__xe_pin_fb_vma(const struct intel_framebuffer *fb, + if (ret) + goto err_unpin; + +- /* Ensure DPT writes are flushed */ +- xe_device_l2_flush(xe); + return vma; + + err_unpin: +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-move-dsb-l2-flush-to-a-more-sensible-place.patch b/queue-6.12/drm-xe-move-dsb-l2-flush-to-a-more-sensible-place.patch new file mode 100644 index 0000000000..553c5b9858 --- /dev/null +++ b/queue-6.12/drm-xe-move-dsb-l2-flush-to-a-more-sensible-place.patch @@ -0,0 +1,76 @@ +From b0a0ca81f3394ba38e3c7e8451e7aa499e7fce71 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Jun 2025 11:45:47 +0100 +Subject: drm/xe: Move DSB l2 flush to a more sensible place +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Maarten Lankhorst + +[ Upstream commit a4b1b51ae132ac199412028a2df7b6c267888190 ] + +Flushing l2 is only needed after all data has been written. 
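+
+In other words (illustrative sketch), the per-write flushes go away
+in favour of a single flush when the buffer is handed over:
+
+	intel_dsb_buffer_write(dsb_buf, idx, val);	/* no flush */
+	intel_dsb_buffer_memset(dsb_buf, idx, 0, sz);	/* no flush */
+	intel_dsb_buffer_flush_map(dsb_buf);	/* wmb() + one l2 flush */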
+ +Fixes: 01570b446939 ("drm/xe/bmg: implement Wa_16023588340") +Signed-off-by: Maarten Lankhorst +Cc: Matthew Auld +Cc: stable@vger.kernel.org # v6.12+ +Reviewed-by: Matthew Auld +Signed-off-by: Matthew Auld +Reviewed-by: Lucas De Marchi +Reviewed-by: Ville Syrjälä +Link: https://lore.kernel.org/r/20250606104546.1996818-3-matthew.auld@intel.com +Signed-off-by: Lucas De Marchi +(cherry picked from commit 0dd2dd0182bc444a62652e89d08c7f0e4fde15ba) +Signed-off-by: Thomas Hellström +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/display/xe_dsb_buffer.c | 11 ++++------- + 1 file changed, 4 insertions(+), 7 deletions(-) + +diff --git a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c +index f95375451e2fa..9f941fc2e36bb 100644 +--- a/drivers/gpu/drm/xe/display/xe_dsb_buffer.c ++++ b/drivers/gpu/drm/xe/display/xe_dsb_buffer.c +@@ -17,10 +17,7 @@ u32 intel_dsb_buffer_ggtt_offset(struct intel_dsb_buffer *dsb_buf) + + void intel_dsb_buffer_write(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val) + { +- struct xe_device *xe = dsb_buf->vma->bo->tile->xe; +- + iosys_map_wr(&dsb_buf->vma->bo->vmap, idx * 4, u32, val); +- xe_device_l2_flush(xe); + } + + u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx) +@@ -30,12 +27,9 @@ u32 intel_dsb_buffer_read(struct intel_dsb_buffer *dsb_buf, u32 idx) + + void intel_dsb_buffer_memset(struct intel_dsb_buffer *dsb_buf, u32 idx, u32 val, size_t size) + { +- struct xe_device *xe = dsb_buf->vma->bo->tile->xe; +- + WARN_ON(idx > (dsb_buf->buf_size - size) / sizeof(*dsb_buf->cmd_buf)); + + iosys_map_memset(&dsb_buf->vma->bo->vmap, idx * 4, val, size); +- xe_device_l2_flush(xe); + } + + bool intel_dsb_buffer_create(struct intel_crtc *crtc, struct intel_dsb_buffer *dsb_buf, size_t size) +@@ -74,9 +68,12 @@ void intel_dsb_buffer_cleanup(struct intel_dsb_buffer *dsb_buf) + + void intel_dsb_buffer_flush_map(struct intel_dsb_buffer *dsb_buf) + { ++ struct xe_device *xe = dsb_buf->vma->bo->tile->xe; ++ + /* + * The memory barrier here is to ensure coherency of DSB vs MMIO, + * both for weak ordering archs and discrete cards. + */ +- xe_device_wmb(dsb_buf->vma->bo->tile->xe); ++ xe_device_wmb(xe); ++ xe_device_l2_flush(xe); + } +-- +2.39.5 + diff --git a/queue-6.12/drm-xe-replace-double-space-with-single-space-after-.patch b/queue-6.12/drm-xe-replace-double-space-with-single-space-after-.patch new file mode 100644 index 0000000000..919d83e308 --- /dev/null +++ b/queue-6.12/drm-xe-replace-double-space-with-single-space-after-.patch @@ -0,0 +1,113 @@ +From 4c5c3eb72087f86996c5514c3dfde393aef65251 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 23 Aug 2024 13:36:43 +0530 +Subject: drm/xe: Replace double space with single space after comma + +From: Nitin Gote + +[ Upstream commit cd89de14bbacce1fc060fdfab75bacf95b1c5d40 ] + +Avoid using double space, ", " in function or macro parameters +where it's not required by any alignment purpose. Replace it with +a single space, ", ". 
+ +Signed-off-by: Nitin Gote +Reviewed-by: Andi Shyti +Link: https://patchwork.freedesktop.org/patch/msgid/20240823080643.2461992-1-nitin.r.gote@intel.com +Signed-off-by: Nirmoy Das +Stable-dep-of: ad40098da5c3 ("drm/xe/guc: Explicitly exit CT safe mode on unwind") +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/xe/regs/xe_reg_defs.h | 2 +- + drivers/gpu/drm/xe/xe_guc.c | 2 +- + drivers/gpu/drm/xe/xe_guc_ct.c | 4 ++-- + drivers/gpu/drm/xe/xe_irq.c | 4 ++-- + drivers/gpu/drm/xe/xe_trace_bo.h | 2 +- + 5 files changed, 7 insertions(+), 7 deletions(-) + +diff --git a/drivers/gpu/drm/xe/regs/xe_reg_defs.h b/drivers/gpu/drm/xe/regs/xe_reg_defs.h +index 23f7dc5bbe995..51fd40ffafcb9 100644 +--- a/drivers/gpu/drm/xe/regs/xe_reg_defs.h ++++ b/drivers/gpu/drm/xe/regs/xe_reg_defs.h +@@ -128,7 +128,7 @@ struct xe_reg_mcr { + * options. + */ + #define XE_REG_MCR(r_, ...) ((const struct xe_reg_mcr){ \ +- .__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \ ++ .__reg = XE_REG_INITIALIZER(r_, ##__VA_ARGS__, .mcr = 1) \ + }) + + static inline bool xe_reg_is_valid(struct xe_reg r) +diff --git a/drivers/gpu/drm/xe/xe_guc.c b/drivers/gpu/drm/xe/xe_guc.c +index 52df28032a6ff..c67d4807f37df 100644 +--- a/drivers/gpu/drm/xe/xe_guc.c ++++ b/drivers/gpu/drm/xe/xe_guc.c +@@ -985,7 +985,7 @@ int xe_guc_mmio_send_recv(struct xe_guc *guc, const u32 *request, + BUILD_BUG_ON(FIELD_MAX(GUC_HXG_MSG_0_TYPE) != GUC_HXG_TYPE_RESPONSE_SUCCESS); + BUILD_BUG_ON((GUC_HXG_TYPE_RESPONSE_SUCCESS ^ GUC_HXG_TYPE_RESPONSE_FAILURE) != 1); + +- ret = xe_mmio_wait32(gt, reply_reg, resp_mask, resp_mask, ++ ret = xe_mmio_wait32(gt, reply_reg, resp_mask, resp_mask, + 1000000, &header, false); + + if (unlikely(FIELD_GET(GUC_HXG_MSG_0_ORIGIN, header) != +diff --git a/drivers/gpu/drm/xe/xe_guc_ct.c b/drivers/gpu/drm/xe/xe_guc_ct.c +index 1f74f6bd50f31..483c2b521a2d1 100644 +--- a/drivers/gpu/drm/xe/xe_guc_ct.c ++++ b/drivers/gpu/drm/xe/xe_guc_ct.c +@@ -182,7 +182,7 @@ int xe_guc_ct_init(struct xe_guc_ct *ct) + spin_lock_init(&ct->fast_lock); + xa_init(&ct->fence_lookup); + INIT_WORK(&ct->g2h_worker, g2h_worker_func); +- INIT_DELAYED_WORK(&ct->safe_mode_worker, safe_mode_worker_func); ++ INIT_DELAYED_WORK(&ct->safe_mode_worker, safe_mode_worker_func); + init_waitqueue_head(&ct->wq); + init_waitqueue_head(&ct->g2h_fence_wq); + +@@ -851,7 +851,7 @@ static bool retry_failure(struct xe_guc_ct *ct, int ret) + #define ct_alive(ct) \ + (xe_guc_ct_enabled(ct) && !ct->ctbs.h2g.info.broken && \ + !ct->ctbs.g2h.info.broken) +- if (!wait_event_interruptible_timeout(ct->wq, ct_alive(ct), HZ * 5)) ++ if (!wait_event_interruptible_timeout(ct->wq, ct_alive(ct), HZ * 5)) + return false; + #undef ct_alive + +diff --git a/drivers/gpu/drm/xe/xe_irq.c b/drivers/gpu/drm/xe/xe_irq.c +index 5f2c368c35adb..14c3a476597a7 100644 +--- a/drivers/gpu/drm/xe/xe_irq.c ++++ b/drivers/gpu/drm/xe/xe_irq.c +@@ -173,7 +173,7 @@ void xe_irq_enable_hwe(struct xe_gt *gt) + if (ccs_mask & (BIT(0)|BIT(1))) + xe_mmio_write32(gt, CCS0_CCS1_INTR_MASK, ~dmask); + if (ccs_mask & (BIT(2)|BIT(3))) +- xe_mmio_write32(gt, CCS2_CCS3_INTR_MASK, ~dmask); ++ xe_mmio_write32(gt, CCS2_CCS3_INTR_MASK, ~dmask); + } + + if (xe_gt_is_media_type(gt) || MEDIA_VER(xe) < 13) { +@@ -504,7 +504,7 @@ static void gt_irq_reset(struct xe_tile *tile) + if (ccs_mask & (BIT(0)|BIT(1))) + xe_mmio_write32(mmio, CCS0_CCS1_INTR_MASK, ~0); + if (ccs_mask & (BIT(2)|BIT(3))) +- xe_mmio_write32(mmio, CCS2_CCS3_INTR_MASK, ~0); ++ xe_mmio_write32(mmio, CCS2_CCS3_INTR_MASK, ~0); + + if ((tile->media_gt && + 
xe_hw_engine_mask_per_class(tile->media_gt, XE_ENGINE_CLASS_OTHER)) || +diff --git a/drivers/gpu/drm/xe/xe_trace_bo.h b/drivers/gpu/drm/xe/xe_trace_bo.h +index ba0f61e7d2d6b..4ff023b5d040d 100644 +--- a/drivers/gpu/drm/xe/xe_trace_bo.h ++++ b/drivers/gpu/drm/xe/xe_trace_bo.h +@@ -189,7 +189,7 @@ DECLARE_EVENT_CLASS(xe_vm, + ), + + TP_printk("dev=%s, vm=%p, asid=0x%05x", __get_str(dev), +- __entry->vm, __entry->asid) ++ __entry->vm, __entry->asid) + ); + + DEFINE_EVENT(xe_vm, xe_vm_kill, +-- +2.39.5 + diff --git a/queue-6.12/enic-fix-incorrect-mtu-comparison-in-enic_change_mtu.patch b/queue-6.12/enic-fix-incorrect-mtu-comparison-in-enic_change_mtu.patch new file mode 100644 index 0000000000..fc4598c35b --- /dev/null +++ b/queue-6.12/enic-fix-incorrect-mtu-comparison-in-enic_change_mtu.patch @@ -0,0 +1,47 @@ +From 2e57c74b039d59c6c2448762a0314df5bc30dabb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 28 Jun 2025 07:56:05 -0700 +Subject: enic: fix incorrect MTU comparison in enic_change_mtu() + +From: Alok Tiwari + +[ Upstream commit aaf2b2480375099c022a82023e1cd772bf1c6a5d ] + +The comparison in enic_change_mtu() incorrectly used the current +netdev->mtu instead of the new new_mtu value when warning about +an MTU exceeding the port MTU. This could suppress valid warnings +or issue incorrect ones. + +Fix the condition and log to properly reflect the new_mtu. + +Fixes: ab123fe071c9 ("enic: handle mtu change for vf properly") +Signed-off-by: Alok Tiwari +Acked-by: John Daley +Reviewed-by: Simon Horman +Link: https://patch.msgid.link/20250628145612.476096-1-alok.a.tiwari@oracle.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/cisco/enic/enic_main.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c +index ffed14b63d41d..a432783756d8c 100644 +--- a/drivers/net/ethernet/cisco/enic/enic_main.c ++++ b/drivers/net/ethernet/cisco/enic/enic_main.c +@@ -2127,10 +2127,10 @@ static int enic_change_mtu(struct net_device *netdev, int new_mtu) + if (enic_is_dynamic(enic) || enic_is_sriov_vf(enic)) + return -EOPNOTSUPP; + +- if (netdev->mtu > enic->port_mtu) ++ if (new_mtu > enic->port_mtu) + netdev_warn(netdev, + "interface MTU (%d) set higher than port MTU (%d)\n", +- netdev->mtu, enic->port_mtu); ++ new_mtu, enic->port_mtu); + + return _enic_change_mtu(netdev, new_mtu); + } +-- +2.39.5 + diff --git a/queue-6.12/ethernet-atl1-add-missing-dma-mapping-error-checks-a.patch b/queue-6.12/ethernet-atl1-add-missing-dma-mapping-error-checks-a.patch new file mode 100644 index 0000000000..2e7b2069b8 --- /dev/null +++ b/queue-6.12/ethernet-atl1-add-missing-dma-mapping-error-checks-a.patch @@ -0,0 +1,213 @@ +From f50d76c3ddcdfc45a1c3dad23727858916c80a5e Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 16:16:24 +0200 +Subject: ethernet: atl1: Add missing DMA mapping error checks and count errors + +From: Thomas Fourier + +[ Upstream commit d72411d20905180cdc452c553be17481b24463d2 ] + +The `dma_map_XXX()` functions can fail and must be checked using +`dma_mapping_error()`. This patch adds proper error handling for all +DMA mapping calls. + +In `atl1_alloc_rx_buffers()`, if DMA mapping fails, the buffer is +deallocated and marked accordingly. + +In `atl1_tx_map()`, previously mapped buffers are unmapped and the +packet is dropped on failure. + +If `atl1_xmit_frame()` drops the packet, increment the tx_error counter. 
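A minimal sketch of the map-and-check pattern the patch applies; the helper and its parameters are illustrative (not the driver's code), while dma_map_page() and dma_mapping_error() are the real DMA API:

  #include <linux/dma-mapping.h>

  /* Map one buffer and validate the handle before handing it to hardware. */
  static int example_map_one(struct device *dev, struct page *page,
                             unsigned int offset, unsigned int len,
                             dma_addr_t *dma)
  {
          *dma = dma_map_page(dev, page, offset, len, DMA_TO_DEVICE);
          if (dma_mapping_error(dev, *dma))
                  return -ENOMEM; /* caller unwinds earlier mappings, drops the skb */
          return 0;
  }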
+ +Fixes: f3cc28c79760 ("Add Attansic L1 ethernet driver.") +Signed-off-by: Thomas Fourier +Link: https://patch.msgid.link/20250625141629.114984-2-fourier.thomas@gmail.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/atheros/atlx/atl1.c | 79 +++++++++++++++++------- + 1 file changed, 57 insertions(+), 22 deletions(-) + +diff --git a/drivers/net/ethernet/atheros/atlx/atl1.c b/drivers/net/ethernet/atheros/atlx/atl1.c +index 3afd3627ce485..9c5d619909045 100644 +--- a/drivers/net/ethernet/atheros/atlx/atl1.c ++++ b/drivers/net/ethernet/atheros/atlx/atl1.c +@@ -1861,14 +1861,21 @@ static u16 atl1_alloc_rx_buffers(struct atl1_adapter *adapter) + break; + } + +- buffer_info->alloced = 1; +- buffer_info->skb = skb; +- buffer_info->length = (u16) adapter->rx_buffer_len; + page = virt_to_page(skb->data); + offset = offset_in_page(skb->data); + buffer_info->dma = dma_map_page(&pdev->dev, page, offset, + adapter->rx_buffer_len, + DMA_FROM_DEVICE); ++ if (dma_mapping_error(&pdev->dev, buffer_info->dma)) { ++ kfree_skb(skb); ++ adapter->soft_stats.rx_dropped++; ++ break; ++ } ++ ++ buffer_info->alloced = 1; ++ buffer_info->skb = skb; ++ buffer_info->length = (u16)adapter->rx_buffer_len; ++ + rfd_desc->buffer_addr = cpu_to_le64(buffer_info->dma); + rfd_desc->buf_len = cpu_to_le16(adapter->rx_buffer_len); + rfd_desc->coalese = 0; +@@ -2183,8 +2190,8 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb, + return 0; + } + +-static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, +- struct tx_packet_desc *ptpd) ++static bool atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, ++ struct tx_packet_desc *ptpd) + { + struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring; + struct atl1_buffer *buffer_info; +@@ -2194,6 +2201,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + unsigned int nr_frags; + unsigned int f; + int retval; ++ u16 first_mapped; + u16 next_to_use; + u16 data_len; + u8 hdr_len; +@@ -2201,6 +2209,7 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + buf_len -= skb->data_len; + nr_frags = skb_shinfo(skb)->nr_frags; + next_to_use = atomic_read(&tpd_ring->next_to_use); ++ first_mapped = next_to_use; + buffer_info = &tpd_ring->buffer_info[next_to_use]; + BUG_ON(buffer_info->skb); + /* put skb in last TPD */ +@@ -2216,6 +2225,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + buffer_info->dma = dma_map_page(&adapter->pdev->dev, page, + offset, hdr_len, + DMA_TO_DEVICE); ++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) ++ goto dma_err; + + if (++next_to_use == tpd_ring->count) + next_to_use = 0; +@@ -2242,6 +2253,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + page, offset, + buffer_info->length, + DMA_TO_DEVICE); ++ if (dma_mapping_error(&adapter->pdev->dev, ++ buffer_info->dma)) ++ goto dma_err; + if (++next_to_use == tpd_ring->count) + next_to_use = 0; + } +@@ -2254,6 +2268,8 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + buffer_info->dma = dma_map_page(&adapter->pdev->dev, page, + offset, buf_len, + DMA_TO_DEVICE); ++ if (dma_mapping_error(&adapter->pdev->dev, buffer_info->dma)) ++ goto dma_err; + if (++next_to_use == tpd_ring->count) + next_to_use = 0; + } +@@ -2277,6 +2293,9 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + buffer_info->dma = skb_frag_dma_map(&adapter->pdev->dev, + frag, i * 
ATL1_MAX_TX_BUF_LEN, + buffer_info->length, DMA_TO_DEVICE); ++ if (dma_mapping_error(&adapter->pdev->dev, ++ buffer_info->dma)) ++ goto dma_err; + + if (++next_to_use == tpd_ring->count) + next_to_use = 0; +@@ -2285,6 +2304,22 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, + + /* last tpd's buffer-info */ + buffer_info->skb = skb; ++ ++ return true; ++ ++ dma_err: ++ while (first_mapped != next_to_use) { ++ buffer_info = &tpd_ring->buffer_info[first_mapped]; ++ dma_unmap_page(&adapter->pdev->dev, ++ buffer_info->dma, ++ buffer_info->length, ++ DMA_TO_DEVICE); ++ buffer_info->dma = 0; ++ ++ if (++first_mapped == tpd_ring->count) ++ first_mapped = 0; ++ } ++ return false; + } + + static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count, +@@ -2355,10 +2390,8 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb, + + len = skb_headlen(skb); + +- if (unlikely(skb->len <= 0)) { +- dev_kfree_skb_any(skb); +- return NETDEV_TX_OK; +- } ++ if (unlikely(skb->len <= 0)) ++ goto drop_packet; + + nr_frags = skb_shinfo(skb)->nr_frags; + for (f = 0; f < nr_frags; f++) { +@@ -2371,10 +2404,9 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb, + if (mss) { + if (skb->protocol == htons(ETH_P_IP)) { + proto_hdr_len = skb_tcp_all_headers(skb); +- if (unlikely(proto_hdr_len > len)) { +- dev_kfree_skb_any(skb); +- return NETDEV_TX_OK; +- } ++ if (unlikely(proto_hdr_len > len)) ++ goto drop_packet; ++ + /* need additional TPD ? */ + if (proto_hdr_len != len) + count += (len - proto_hdr_len + +@@ -2406,23 +2438,26 @@ static netdev_tx_t atl1_xmit_frame(struct sk_buff *skb, + } + + tso = atl1_tso(adapter, skb, ptpd); +- if (tso < 0) { +- dev_kfree_skb_any(skb); +- return NETDEV_TX_OK; +- } ++ if (tso < 0) ++ goto drop_packet; + + if (!tso) { + ret_val = atl1_tx_csum(adapter, skb, ptpd); +- if (ret_val < 0) { +- dev_kfree_skb_any(skb); +- return NETDEV_TX_OK; +- } ++ if (ret_val < 0) ++ goto drop_packet; + } + +- atl1_tx_map(adapter, skb, ptpd); ++ if (!atl1_tx_map(adapter, skb, ptpd)) ++ goto drop_packet; ++ + atl1_tx_queue(adapter, count, ptpd); + atl1_update_mailbox(adapter); + return NETDEV_TX_OK; ++ ++drop_packet: ++ adapter->soft_stats.tx_errors++; ++ dev_kfree_skb_any(skb); ++ return NETDEV_TX_OK; + } + + static int atl1_rings_clean(struct napi_struct *napi, int budget) +-- +2.39.5 + diff --git a/queue-6.12/f2fs-decrease-spare-area-for-pinned-files-for-zoned-.patch b/queue-6.12/f2fs-decrease-spare-area-for-pinned-files-for-zoned-.patch new file mode 100644 index 0000000000..ce889dd3e7 --- /dev/null +++ b/queue-6.12/f2fs-decrease-spare-area-for-pinned-files-for-zoned-.patch @@ -0,0 +1,66 @@ +From a2457833d98ff170263b0ce6df8f8761a9053ce5 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 15 Oct 2024 09:54:27 -0700 +Subject: f2fs: decrease spare area for pinned files for zoned devices + +From: Daeho Jeong + +[ Upstream commit fa08972bcb7baaf5f1f4fdf251dc08bdd3ab1cf0 ] + +Now we reclaim too much space before allocating pinned space for zoned +devices. 
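In effect, the free-section requirement checked before a pinned allocation becomes (a simplified restatement of the file.c hunk below, not additional code):

  unsigned int needed = f2fs_sb_has_blkzoned(sbi) ?
          ZONED_PIN_SEC_REQUIRED_COUNT :  /* one section on zoned devices */
          GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi));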
+ +Signed-off-by: Daeho Jeong +Reviewed-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Stable-dep-of: dc6d9ef57fcf ("f2fs: zone: fix to calculate first_zoned_segno correctly") +Signed-off-by: Sasha Levin +--- + fs/f2fs/file.c | 3 ++- + fs/f2fs/gc.h | 1 + + fs/f2fs/segment.c | 3 ++- + 3 files changed, 5 insertions(+), 2 deletions(-) + +diff --git a/fs/f2fs/file.c b/fs/f2fs/file.c +index 02f438cd6bfaf..d9037e74631c0 100644 +--- a/fs/f2fs/file.c ++++ b/fs/f2fs/file.c +@@ -1828,7 +1828,8 @@ static int f2fs_expand_inode_data(struct inode *inode, loff_t offset, + + map.m_len = sec_blks; + next_alloc: +- if (has_not_enough_free_secs(sbi, 0, ++ if (has_not_enough_free_secs(sbi, 0, f2fs_sb_has_blkzoned(sbi) ? ++ ZONED_PIN_SEC_REQUIRED_COUNT : + GET_SEC_FROM_SEG(sbi, overprovision_segments(sbi)))) { + f2fs_down_write(&sbi->gc_lock); + stat_inc_gc_call_count(sbi, FOREGROUND); +diff --git a/fs/f2fs/gc.h b/fs/f2fs/gc.h +index 2914b678bf8fb..5c1eaf55e1277 100644 +--- a/fs/f2fs/gc.h ++++ b/fs/f2fs/gc.h +@@ -35,6 +35,7 @@ + #define LIMIT_BOOST_ZONED_GC 25 /* percentage over total user space of boosted gc for zoned devices */ + #define DEF_MIGRATION_WINDOW_GRANULARITY_ZONED 3 + #define BOOST_GC_MULTIPLE 5 ++#define ZONED_PIN_SEC_REQUIRED_COUNT 1 + + #define DEF_GC_FAILED_PINNED_FILES 2048 + #define MAX_GC_FAILED_PINNED_FILES USHRT_MAX +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index 449c0acbfabc0..3db89becdbfcd 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -3250,7 +3250,8 @@ int f2fs_allocate_pinning_section(struct f2fs_sb_info *sbi) + + if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) { + f2fs_down_write(&sbi->gc_lock); +- err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk), true, 1); ++ err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk), ++ true, ZONED_PIN_SEC_REQUIRED_COUNT); + f2fs_up_write(&sbi->gc_lock); + + gc_required = false; +-- +2.39.5 + diff --git a/queue-6.12/f2fs-zone-fix-to-calculate-first_zoned_segno-correct.patch b/queue-6.12/f2fs-zone-fix-to-calculate-first_zoned_segno-correct.patch new file mode 100644 index 0000000000..0fd426d1e3 --- /dev/null +++ b/queue-6.12/f2fs-zone-fix-to-calculate-first_zoned_segno-correct.patch @@ -0,0 +1,267 @@ +From f185376d84df3c518657c9570b17c3207559a996 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 10 Apr 2025 11:10:19 +0800 +Subject: f2fs: zone: fix to calculate first_zoned_segno correctly + +From: Chao Yu + +[ Upstream commit dc6d9ef57fcf42fac1b3be4bff5ac5b3f1e8f9f3 ] + +A zoned device can have both conventional zones and sequential zones, +so we should not treat the first segment of a zoned device as first_zoned_segno; +instead, we need to check the zone type of each zone while traversing the zoned +device to find first_zoned_segno. + +Otherwise, for the case below, first_zoned_segno will be 0, which could be +wrong.
+ +create_null_blk 512 2 1024 1024 +mkfs.f2fs -m /dev/nullb0 + +Testcase: + +export SCRIPTS_PATH=/share/git/scripts + +test multiple devices w/ zoned device +for ((i=0;i<8;i++)) do { + zonesize=$((2<<$i)) + conzone=$((4096/$zonesize)) + seqzone=$((4096/$zonesize)) + $SCRIPTS_PATH/nullblk_create.sh 512 $zonesize $conzone $seqzone + mkfs.f2fs -f -m /dev/vdb -c /dev/nullb0 + mount /dev/vdb /mnt/f2fs + touch /mnt/f2fs/file + f2fs_io pinfile set /mnt/f2fs/file $((8589934592*2)) + stat /mnt/f2fs/file + df + cat /proc/fs/f2fs/vdb/segment_info + umount /mnt/f2fs + $SCRIPTS_PATH/nullblk_remove.sh 0 +} done + +test single zoned device +for ((i=0;i<8;i++)) do { + zonesize=$((2<<$i)) + conzone=$((4096/$zonesize)) + seqzone=$((4096/$zonesize)) + $SCRIPTS_PATH/nullblk_create.sh 512 $zonesize $conzone $seqzone + mkfs.f2fs -f -m /dev/nullb0 + mount /dev/nullb0 /mnt/f2fs + touch /mnt/f2fs/file + f2fs_io pinfile set /mnt/f2fs/file $((8589934592*2)) + stat /mnt/f2fs/file + df + cat /proc/fs/f2fs/nullb0/segment_info + umount /mnt/f2fs + $SCRIPTS_PATH/nullblk_remove.sh 0 +} done + +Fixes: 9703d69d9d15 ("f2fs: support file pinning for zoned devices") +Cc: Daeho Jeong +Signed-off-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Signed-off-by: Sasha Levin +--- + fs/f2fs/data.c | 2 +- + fs/f2fs/f2fs.h | 36 ++++++++++++++++++++++++++++-------- + fs/f2fs/segment.c | 10 +++++----- + fs/f2fs/super.c | 41 +++++++++++++++++++++++++++++++++++------ + 4 files changed, 69 insertions(+), 20 deletions(-) + +diff --git a/fs/f2fs/data.c b/fs/f2fs/data.c +index 62c7fd1168a15..654f672639b3c 100644 +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -3986,7 +3986,7 @@ static int check_swap_activate(struct swap_info_struct *sis, + + if ((pblock - SM_I(sbi)->main_blkaddr) % blks_per_sec || + nr_pblocks % blks_per_sec || +- !f2fs_valid_pinned_area(sbi, pblock)) { ++ f2fs_is_sequential_zone_area(sbi, pblock)) { + bool last_extent = false; + + not_aligned++; +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 08b0f35be76bc..a435550b2839b 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -1762,7 +1762,7 @@ struct f2fs_sb_info { + unsigned int dirty_device; /* for checkpoint data flush */ + spinlock_t dev_lock; /* protect dirty_device */ + bool aligned_blksize; /* all devices has the same logical blksize */ +- unsigned int first_zoned_segno; /* first zoned segno */ ++ unsigned int first_seq_zone_segno; /* first segno in sequential zone */ + + /* For write statistics */ + u64 sectors_written_start; +@@ -4557,12 +4557,16 @@ F2FS_FEATURE_FUNCS(compression, COMPRESSION); + F2FS_FEATURE_FUNCS(readonly, RO); + + #ifdef CONFIG_BLK_DEV_ZONED +-static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, +- block_t blkaddr) ++static inline bool f2fs_zone_is_seq(struct f2fs_sb_info *sbi, int devi, ++ unsigned int zone) + { +- unsigned int zno = blkaddr / sbi->blocks_per_blkz; ++ return test_bit(zone, FDEV(devi).blkz_seq); ++} + +- return test_bit(zno, FDEV(devi).blkz_seq); ++static inline bool f2fs_blkz_is_seq(struct f2fs_sb_info *sbi, int devi, ++ block_t blkaddr) ++{ ++ return f2fs_zone_is_seq(sbi, devi, blkaddr / sbi->blocks_per_blkz); + } + #endif + +@@ -4634,15 +4638,31 @@ static inline bool f2fs_lfs_mode(struct f2fs_sb_info *sbi) + return F2FS_OPTION(sbi).fs_mode == FS_MODE_LFS; + } + +-static inline bool f2fs_valid_pinned_area(struct f2fs_sb_info *sbi, ++static inline bool f2fs_is_sequential_zone_area(struct f2fs_sb_info *sbi, + block_t blkaddr) + { + if (f2fs_sb_has_blkzoned(sbi)) { ++#ifdef CONFIG_BLK_DEV_ZONED + int devi = 
f2fs_target_device_index(sbi, blkaddr); + +- return !bdev_is_zoned(FDEV(devi).bdev); ++ if (!bdev_is_zoned(FDEV(devi).bdev)) ++ return false; ++ ++ if (f2fs_is_multi_device(sbi)) { ++ if (blkaddr < FDEV(devi).start_blk || ++ blkaddr > FDEV(devi).end_blk) { ++ f2fs_err(sbi, "Invalid block %x", blkaddr); ++ return false; ++ } ++ blkaddr -= FDEV(devi).start_blk; ++ } ++ ++ return f2fs_blkz_is_seq(sbi, devi, blkaddr); ++#else ++ return false; ++#endif + } +- return true; ++ return false; + } + + static inline bool f2fs_low_mem_mode(struct f2fs_sb_info *sbi) +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index c7919b9cebcd0..e48b5e2efea28 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -2719,7 +2719,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi, + if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning) + segno = 0; + else +- segno = max(sbi->first_zoned_segno, *newseg); ++ segno = max(sbi->first_seq_zone_segno, *newseg); + hint = GET_SEC_FROM_SEG(sbi, segno); + } + #endif +@@ -2731,7 +2731,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi, + if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) { + /* Write only to sequential zones */ + if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) { +- hint = GET_SEC_FROM_SEG(sbi, sbi->first_zoned_segno); ++ hint = GET_SEC_FROM_SEG(sbi, sbi->first_seq_zone_segno); + secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint); + } else + secno = find_first_zero_bit(free_i->free_secmap, +@@ -2784,9 +2784,9 @@ static int get_new_segment(struct f2fs_sb_info *sbi, + goto out_unlock; + } + +- /* no free section in conventional zone */ ++ /* no free section in conventional device or conventional zone */ + if (new_sec && pinning && +- !f2fs_valid_pinned_area(sbi, START_BLOCK(sbi, segno))) { ++ f2fs_is_sequential_zone_area(sbi, START_BLOCK(sbi, segno))) { + ret = -EAGAIN; + goto out_unlock; + } +@@ -3250,7 +3250,7 @@ int f2fs_allocate_pinning_section(struct f2fs_sb_info *sbi) + + if (f2fs_sb_has_blkzoned(sbi) && err == -EAGAIN && gc_required) { + f2fs_down_write(&sbi->gc_lock); +- err = f2fs_gc_range(sbi, 0, GET_SEGNO(sbi, FDEV(0).end_blk), ++ err = f2fs_gc_range(sbi, 0, sbi->first_seq_zone_segno - 1, + true, ZONED_PIN_SEC_REQUIRED_COUNT); + f2fs_up_write(&sbi->gc_lock); + +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index 0508527ebe115..3f2c6fa3623ba 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -4260,14 +4260,35 @@ static void f2fs_record_error_work(struct work_struct *work) + f2fs_record_stop_reason(sbi); + } + +-static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi) ++static inline unsigned int get_first_seq_zone_segno(struct f2fs_sb_info *sbi) + { ++#ifdef CONFIG_BLK_DEV_ZONED ++ unsigned int zoneno, total_zones; + int devi; + +- for (devi = 0; devi < sbi->s_ndevs; devi++) +- if (bdev_is_zoned(FDEV(devi).bdev)) +- return GET_SEGNO(sbi, FDEV(devi).start_blk); +- return 0; ++ if (!f2fs_sb_has_blkzoned(sbi)) ++ return NULL_SEGNO; ++ ++ for (devi = 0; devi < sbi->s_ndevs; devi++) { ++ if (!bdev_is_zoned(FDEV(devi).bdev)) ++ continue; ++ ++ total_zones = GET_ZONE_FROM_SEG(sbi, FDEV(devi).total_segments); ++ ++ for (zoneno = 0; zoneno < total_zones; zoneno++) { ++ unsigned int segs, blks; ++ ++ if (!f2fs_zone_is_seq(sbi, devi, zoneno)) ++ continue; ++ ++ segs = GET_SEG_FROM_SEC(sbi, ++ zoneno * sbi->secs_per_zone); ++ blks = SEGS_TO_BLKS(sbi, segs); ++ return GET_SEGNO(sbi, FDEV(devi).start_blk + blks); ++ } ++ } ++#endif ++ return NULL_SEGNO; + } + + static 
int f2fs_scan_devices(struct f2fs_sb_info *sbi) +@@ -4304,6 +4325,14 @@ static int f2fs_scan_devices(struct f2fs_sb_info *sbi) + #endif + + for (i = 0; i < max_devices; i++) { ++ if (max_devices == 1) { ++ FDEV(i).total_segments = ++ le32_to_cpu(raw_super->segment_count_main); ++ FDEV(i).start_blk = 0; ++ FDEV(i).end_blk = FDEV(i).total_segments * ++ BLKS_PER_SEG(sbi); ++ } ++ + if (i == 0) + FDEV(0).bdev_file = sbi->sb->s_bdev_file; + else if (!RDEV(i).path[0]) +@@ -4671,7 +4700,7 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) + sbi->sectors_written_start = f2fs_get_sectors_written(sbi); + + /* get segno of first zoned block device */ +- sbi->first_zoned_segno = get_first_zoned_segno(sbi); ++ sbi->first_seq_zone_segno = get_first_seq_zone_segno(sbi); + + /* Read accumulated write IO statistics if exists */ + seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE); +-- +2.39.5 + diff --git a/queue-6.12/f2fs-zone-introduce-first_zoned_segno-in-f2fs_sb_inf.patch b/queue-6.12/f2fs-zone-introduce-first_zoned_segno-in-f2fs_sb_inf.patch new file mode 100644 index 0000000000..3a2381e5d3 --- /dev/null +++ b/queue-6.12/f2fs-zone-introduce-first_zoned_segno-in-f2fs_sb_inf.patch @@ -0,0 +1,109 @@ +From f1ae11b81818978c136daae997a888e151c31bef Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 18 Oct 2024 14:26:36 +0800 +Subject: f2fs: zone: introduce first_zoned_segno in f2fs_sb_info + +From: Chao Yu + +[ Upstream commit 5bc5aae843128aefb1c55d769d057c92dd8a32c9 ] + +first_zoned_segno() returns a fixed value, let's cache it in +structure f2fs_sb_info to avoid redundant calculation. + +Signed-off-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Stable-dep-of: dc6d9ef57fcf ("f2fs: zone: fix to calculate first_zoned_segno correctly") +Signed-off-by: Sasha Levin +--- + fs/f2fs/f2fs.h | 1 + + fs/f2fs/segment.c | 4 ++-- + fs/f2fs/segment.h | 10 ---------- + fs/f2fs/super.c | 13 +++++++++++++ + 4 files changed, 16 insertions(+), 12 deletions(-) + +diff --git a/fs/f2fs/f2fs.h b/fs/f2fs/f2fs.h +index 61b715cc2e231..08b0f35be76bc 100644 +--- a/fs/f2fs/f2fs.h ++++ b/fs/f2fs/f2fs.h +@@ -1762,6 +1762,7 @@ struct f2fs_sb_info { + unsigned int dirty_device; /* for checkpoint data flush */ + spinlock_t dev_lock; /* protect dirty_device */ + bool aligned_blksize; /* all devices has the same logical blksize */ ++ unsigned int first_zoned_segno; /* first zoned segno */ + + /* For write statistics */ + u64 sectors_written_start; +diff --git a/fs/f2fs/segment.c b/fs/f2fs/segment.c +index 3db89becdbfcd..c7919b9cebcd0 100644 +--- a/fs/f2fs/segment.c ++++ b/fs/f2fs/segment.c +@@ -2719,7 +2719,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi, + if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_PRIOR_CONV || pinning) + segno = 0; + else +- segno = max(first_zoned_segno(sbi), *newseg); ++ segno = max(sbi->first_zoned_segno, *newseg); + hint = GET_SEC_FROM_SEG(sbi, segno); + } + #endif +@@ -2731,7 +2731,7 @@ static int get_new_segment(struct f2fs_sb_info *sbi, + if (secno >= MAIN_SECS(sbi) && f2fs_sb_has_blkzoned(sbi)) { + /* Write only to sequential zones */ + if (sbi->blkzone_alloc_policy == BLKZONE_ALLOC_ONLY_SEQ) { +- hint = GET_SEC_FROM_SEG(sbi, first_zoned_segno(sbi)); ++ hint = GET_SEC_FROM_SEG(sbi, sbi->first_zoned_segno); + secno = find_next_zero_bit(free_i->free_secmap, MAIN_SECS(sbi), hint); + } else + secno = find_first_zero_bit(free_i->free_secmap, +diff --git a/fs/f2fs/segment.h b/fs/f2fs/segment.h +index 05a342933f98f..52bb1a2819357 100644 +--- a/fs/f2fs/segment.h ++++ b/fs/f2fs/segment.h +@@ 
-992,13 +992,3 @@ static inline void wake_up_discard_thread(struct f2fs_sb_info *sbi, bool force) + dcc->discard_wake = true; + wake_up_interruptible_all(&dcc->discard_wait_queue); + } +- +-static inline unsigned int first_zoned_segno(struct f2fs_sb_info *sbi) +-{ +- int devi; +- +- for (devi = 0; devi < sbi->s_ndevs; devi++) +- if (bdev_is_zoned(FDEV(devi).bdev)) +- return GET_SEGNO(sbi, FDEV(devi).start_blk); +- return 0; +-} +diff --git a/fs/f2fs/super.c b/fs/f2fs/super.c +index f0e83ea56e38c..0508527ebe115 100644 +--- a/fs/f2fs/super.c ++++ b/fs/f2fs/super.c +@@ -4260,6 +4260,16 @@ static void f2fs_record_error_work(struct work_struct *work) + f2fs_record_stop_reason(sbi); + } + ++static inline unsigned int get_first_zoned_segno(struct f2fs_sb_info *sbi) ++{ ++ int devi; ++ ++ for (devi = 0; devi < sbi->s_ndevs; devi++) ++ if (bdev_is_zoned(FDEV(devi).bdev)) ++ return GET_SEGNO(sbi, FDEV(devi).start_blk); ++ return 0; ++} ++ + static int f2fs_scan_devices(struct f2fs_sb_info *sbi) + { + struct f2fs_super_block *raw_super = F2FS_RAW_SUPER(sbi); +@@ -4660,6 +4670,9 @@ static int f2fs_fill_super(struct super_block *sb, void *data, int silent) + /* For write statistics */ + sbi->sectors_written_start = f2fs_get_sectors_written(sbi); + ++ /* get segno of first zoned block device */ ++ sbi->first_zoned_segno = get_first_zoned_segno(sbi); ++ + /* Read accumulated write IO statistics if exists */ + seg_i = CURSEG_I(sbi, CURSEG_HOT_NODE); + if (__exist_node_summaries(sbi)) +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-add-support-for-un-registration-of-.patch b/queue-6.12/firmware-arm_ffa-add-support-for-un-registration-of-.patch new file mode 100644 index 0000000000..ac2870072d --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-add-support-for-un-registration-of-.patch @@ -0,0 +1,269 @@ +From 39d93731826e452a9266714b570c6696b185724c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 17 Feb 2025 15:38:57 +0000 +Subject: firmware: arm_ffa: Add support for {un,}registration of framework + notifications + +From: Sudeep Holla + +[ Upstream commit c10debfe7f028c11f7a501a0f8e937c9be9e5327 ] + +Framework notifications are doorbells that are rung by the partition +managers to signal common events to an endpoint. These doorbells cannot +be rung by an endpoint directly. A partition manager can signal a +Framework notification in response to an FF-A ABI invocation by an +endpoint. + +Two additional notify_ops interfaces are being added so that any FF-A +device/driver can register and unregister for such framework notifications.
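A hypothetical consumer of the new ops might look like the sketch below; the handler body and the use of notification ID 0 are illustrative only, while the ffa_fwk_notifier_cb signature and the fwk_notify_request() op match the diff that follows:

  #include <linux/arm_ffa.h>
  #include <linux/printk.h>

  static void my_fwk_handler(int notify_id, void *cb_data, void *buf)
  {
          pr_info("FF-A framework notification %d\n", notify_id);
  }

  static int my_ffa_probe(struct ffa_device *ffa_dev)
  {
          const struct ffa_notifier_ops *ops = ffa_dev->ops->notifier_ops;

          /* register for framework notification 0 on this device */
          return ops->fwk_notify_request(ffa_dev, my_fwk_handler, NULL, 0);
  }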
+ +Tested-by: Viresh Kumar +Message-Id: <20250217-ffa_updates-v3-16-bd1d9de615e7@arm.com> +Signed-off-by: Sudeep Holla +Stable-dep-of: 27e850c88df0 ("firmware: arm_ffa: Move memory allocation outside the mutex locking") +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 113 ++++++++++++++++++++++++------ + include/linux/arm_ffa.h | 5 ++ + 2 files changed, 97 insertions(+), 21 deletions(-) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index f0c7f417d7524..f19b142645cc5 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -1079,6 +1079,7 @@ static int ffa_memory_lend(struct ffa_mem_ops_args *args) + struct notifier_cb_info { + struct hlist_node hnode; + struct ffa_device *dev; ++ ffa_fwk_notifier_cb fwk_cb; + ffa_notifier_cb cb; + void *cb_data; + }; +@@ -1142,28 +1143,61 @@ static enum notify_type ffa_notify_type_get(u16 vm_id) + return NON_SECURE_VM; + } + +-/* Should be called while the notify_lock is taken */ ++/* notifier_hnode_get* should be called with notify_lock held */ + static struct notifier_cb_info * +-notifier_hash_node_get(u16 notify_id, enum notify_type type) ++notifier_hnode_get_by_vmid(u16 notify_id, int vmid) + { + struct notifier_cb_info *node; + + hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id) +- if (type == ffa_notify_type_get(node->dev->vm_id)) ++ if (node->fwk_cb && vmid == node->dev->vm_id) ++ return node; ++ ++ return NULL; ++} ++ ++static struct notifier_cb_info * ++notifier_hnode_get_by_vmid_uuid(u16 notify_id, int vmid, const uuid_t *uuid) ++{ ++ struct notifier_cb_info *node; ++ ++ if (uuid_is_null(uuid)) ++ return notifier_hnode_get_by_vmid(notify_id, vmid); ++ ++ hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id) ++ if (node->fwk_cb && vmid == node->dev->vm_id && ++ uuid_equal(&node->dev->uuid, uuid)) ++ return node; ++ ++ return NULL; ++} ++ ++static struct notifier_cb_info * ++notifier_hnode_get_by_type(u16 notify_id, enum notify_type type) ++{ ++ struct notifier_cb_info *node; ++ ++ hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id) ++ if (node->cb && type == ffa_notify_type_get(node->dev->vm_id)) + return node; + + return NULL; + } + + static int +-update_notifier_cb(struct ffa_device *dev, int notify_id, ffa_notifier_cb cb, +- void *cb_data, bool is_registration) ++update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb, ++ void *cb_data, bool is_registration, bool is_framework) + { + struct notifier_cb_info *cb_info = NULL; + enum notify_type type = ffa_notify_type_get(dev->vm_id); + bool cb_found; + +- cb_info = notifier_hash_node_get(notify_id, type); ++ if (is_framework) ++ cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, dev->vm_id, ++ &dev->uuid); ++ else ++ cb_info = notifier_hnode_get_by_type(notify_id, type); ++ + cb_found = !!cb_info; + + if (!(is_registration ^ cb_found)) +@@ -1175,8 +1209,11 @@ update_notifier_cb(struct ffa_device *dev, int notify_id, ffa_notifier_cb cb, + return -ENOMEM; + + cb_info->dev = dev; +- cb_info->cb = cb; + cb_info->cb_data = cb_data; ++ if (is_framework) ++ cb_info->fwk_cb = cb; ++ else ++ cb_info->cb = cb; + + hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id); + } else { +@@ -1187,7 +1224,8 @@ update_notifier_cb(struct ffa_device *dev, int notify_id, ffa_notifier_cb cb, + return 0; + } + +-static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) ++static int __ffa_notify_relinquish(struct ffa_device *dev, 
int notify_id, ++ bool is_framework) + { + int rc; + +@@ -1199,22 +1237,35 @@ static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) + + mutex_lock(&drv_info->notify_lock); + +- rc = update_notifier_cb(dev, notify_id, NULL, NULL, false); ++ rc = update_notifier_cb(dev, notify_id, NULL, NULL, false, ++ is_framework); + if (rc) { + pr_err("Could not unregister notification callback\n"); + mutex_unlock(&drv_info->notify_lock); + return rc; + } + +- rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id)); ++ if (!is_framework) ++ rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id)); + + mutex_unlock(&drv_info->notify_lock); + + return rc; + } + +-static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, +- ffa_notifier_cb cb, void *cb_data, int notify_id) ++static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) ++{ ++ return __ffa_notify_relinquish(dev, notify_id, false); ++} ++ ++static int ffa_fwk_notify_relinquish(struct ffa_device *dev, int notify_id) ++{ ++ return __ffa_notify_relinquish(dev, notify_id, true); ++} ++ ++static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, ++ void *cb, void *cb_data, ++ int notify_id, bool is_framework) + { + int rc; + u32 flags = 0; +@@ -1227,26 +1278,44 @@ static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + + mutex_lock(&drv_info->notify_lock); + +- if (is_per_vcpu) +- flags = PER_VCPU_NOTIFICATION_FLAG; ++ if (!is_framework) { ++ if (is_per_vcpu) ++ flags = PER_VCPU_NOTIFICATION_FLAG; + +- rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags); +- if (rc) { +- mutex_unlock(&drv_info->notify_lock); +- return rc; ++ rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags); ++ if (rc) { ++ mutex_unlock(&drv_info->notify_lock); ++ return rc; ++ } + } + +- rc = update_notifier_cb(dev, notify_id, cb, cb_data, true); ++ rc = update_notifier_cb(dev, notify_id, cb, cb_data, true, ++ is_framework); + if (rc) { + pr_err("Failed to register callback for %d - %d\n", + notify_id, rc); +- ffa_notification_unbind(dev->vm_id, BIT(notify_id)); ++ if (!is_framework) ++ ffa_notification_unbind(dev->vm_id, BIT(notify_id)); + } + mutex_unlock(&drv_info->notify_lock); + + return rc; + } + ++static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, ++ ffa_notifier_cb cb, void *cb_data, int notify_id) ++{ ++ return __ffa_notify_request(dev, is_per_vcpu, cb, cb_data, notify_id, ++ false); ++} ++ ++static int ++ffa_fwk_notify_request(struct ffa_device *dev, ffa_fwk_notifier_cb cb, ++ void *cb_data, int notify_id) ++{ ++ return __ffa_notify_request(dev, false, cb, cb_data, notify_id, true); ++} ++ + static int ffa_notify_send(struct ffa_device *dev, int notify_id, + bool is_per_vcpu, u16 vcpu) + { +@@ -1276,7 +1345,7 @@ static void handle_notif_callbacks(u64 bitmap, enum notify_type type) + continue; + + mutex_lock(&drv_info->notify_lock); +- cb_info = notifier_hash_node_get(notify_id, type); ++ cb_info = notifier_hnode_get_by_type(notify_id, type); + mutex_unlock(&drv_info->notify_lock); + + if (cb_info && cb_info->cb) +@@ -1349,6 +1418,8 @@ static const struct ffa_notifier_ops ffa_drv_notifier_ops = { + .sched_recv_cb_unregister = ffa_sched_recv_cb_unregister, + .notify_request = ffa_notify_request, + .notify_relinquish = ffa_notify_relinquish, ++ .fwk_notify_request = ffa_fwk_notify_request, ++ .fwk_notify_relinquish = ffa_fwk_notify_relinquish, + .notify_send = ffa_notify_send, + }; + +diff --git a/include/linux/arm_ffa.h b/include/linux/arm_ffa.h +index 
74169dd0f6594..5e2530f23b793 100644 +--- a/include/linux/arm_ffa.h ++++ b/include/linux/arm_ffa.h +@@ -455,6 +455,7 @@ struct ffa_cpu_ops { + + typedef void (*ffa_sched_recv_cb)(u16 vcpu, bool is_per_vcpu, void *cb_data); + typedef void (*ffa_notifier_cb)(int notify_id, void *cb_data); ++typedef void (*ffa_fwk_notifier_cb)(int notify_id, void *cb_data, void *buf); + + struct ffa_notifier_ops { + int (*sched_recv_cb_register)(struct ffa_device *dev, +@@ -463,6 +464,10 @@ struct ffa_notifier_ops { + int (*notify_request)(struct ffa_device *dev, bool per_vcpu, + ffa_notifier_cb cb, void *cb_data, int notify_id); + int (*notify_relinquish)(struct ffa_device *dev, int notify_id); ++ int (*fwk_notify_request)(struct ffa_device *dev, ++ ffa_fwk_notifier_cb cb, void *cb_data, ++ int notify_id); ++ int (*fwk_notify_relinquish)(struct ffa_device *dev, int notify_id); + int (*notify_send)(struct ffa_device *dev, int notify_id, bool per_vcpu, + u16 vcpu); + }; +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-fix-memory-leak-by-freeing-notifier.patch b/queue-6.12/firmware-arm_ffa-fix-memory-leak-by-freeing-notifier.patch new file mode 100644 index 0000000000..0f0494cfb4 --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-fix-memory-leak-by-freeing-notifier.patch @@ -0,0 +1,43 @@ +From 0bb7cdb75f792f37da530948d90aed70a253b228 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 28 May 2025 09:49:41 +0100 +Subject: firmware: arm_ffa: Fix memory leak by freeing notifier callback node + +From: Sudeep Holla + +[ Upstream commit a833d31ad867103ba72a0b73f3606f4ab8601719 ] + +Commit e0573444edbf ("firmware: arm_ffa: Add interfaces to request +notification callbacks") adds support for notifier callbacks by allocating +and inserting a callback node into a hashtable during registration of +notifiers. However, during unregistration, the code only removes the +node from the hashtable without freeing the associated memory, resulting +in a memory leak. + +Resolve the memory leak issue by ensuring the allocated notifier callback +node is properly freed after it is removed from the hashtable entry. 
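The underlying rule, as a small self-contained sketch (the node type and helpers are illustrative, not the driver's code):

  #include <linux/hashtable.h>
  #include <linux/slab.h>

  static DEFINE_HASHTABLE(example_hash, 4);

  struct example_node {
          struct hlist_node hnode;
          int id;
  };

  static int example_register(int id)
  {
          struct example_node *node = kzalloc(sizeof(*node), GFP_KERNEL);

          if (!node)
                  return -ENOMEM;
          node->id = id;
          hash_add(example_hash, &node->hnode, id);
          return 0;
  }

  /* Unregistering must pair hash_del() with kfree(); otherwise every
   * register/unregister cycle leaks one node. */
  static void example_unregister(struct example_node *node)
  {
          hash_del(&node->hnode);
          kfree(node);
  }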
+ +Fixes: e0573444edbf ("firmware: arm_ffa: Add interfaces to request notification callbacks") +Message-Id: <20250528-ffa_notif_fix-v1-1-5ed7bc7f8437@arm.com> +Reviewed-by: Jens Wiklander +Signed-off-by: Sudeep Holla +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index 47751b2c057ae..c0f3b7cdb6edb 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -1166,6 +1166,7 @@ update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, + hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id); + } else { + hash_del(&cb_info->hnode); ++ kfree(cb_info); + } + + return 0; +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-move-memory-allocation-outside-the-.patch b/queue-6.12/firmware-arm_ffa-move-memory-allocation-outside-the-.patch new file mode 100644 index 0000000000..322c3ebec9 --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-move-memory-allocation-outside-the-.patch @@ -0,0 +1,135 @@ +From 1a6f005f801af4f40bb68ecdce8e6c1c0df68255 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 28 May 2025 09:49:42 +0100 +Subject: firmware: arm_ffa: Move memory allocation outside the mutex locking + +From: Sudeep Holla + +[ Upstream commit 27e850c88df0e25474a8caeb2903e2e90b62c1dc ] + +The notifier callback node allocation is currently done while holding +the notify_lock mutex. While this is safe even if memory allocation may +sleep, we need to move the allocation outside the locked region in +preparation for moving from mutexes to rwlocks. + +Move the memory allocation to avoid potential sleeping in atomic context +once the locks are moved from mutex to rwlocks.
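The resulting shape is roughly the following sketch (at this point the lock is still the notify_lock mutex; the rwlock conversion comes in a later patch):

  cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); /* may sleep: done unlocked */
  if (!cb_info)
          return -ENOMEM;

  mutex_lock(&drv_info->notify_lock);
  rc = update_notifier_cb(dev, notify_id, cb_info, is_framework);
  mutex_unlock(&drv_info->notify_lock);
  if (rc)
          kfree(cb_info); /* registration failed: nothing references the node */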
+ +Fixes: e0573444edbf ("firmware: arm_ffa: Add interfaces to request notification callbacks") +Message-Id: <20250528-ffa_notif_fix-v1-2-5ed7bc7f8437@arm.com> +Reviewed-by: Jens Wiklander +Signed-off-by: Sudeep Holla +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 48 +++++++++++++++---------------- + 1 file changed, 24 insertions(+), 24 deletions(-) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index f19b142645cc5..33f7bdb5c86dd 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -1184,13 +1184,12 @@ notifier_hnode_get_by_type(u16 notify_id, enum notify_type type) + return NULL; + } + +-static int +-update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb, +- void *cb_data, bool is_registration, bool is_framework) ++static int update_notifier_cb(struct ffa_device *dev, int notify_id, ++ struct notifier_cb_info *cb, bool is_framework) + { + struct notifier_cb_info *cb_info = NULL; + enum notify_type type = ffa_notify_type_get(dev->vm_id); +- bool cb_found; ++ bool cb_found, is_registration = !!cb; + + if (is_framework) + cb_info = notifier_hnode_get_by_vmid_uuid(notify_id, dev->vm_id, +@@ -1204,18 +1203,7 @@ update_notifier_cb(struct ffa_device *dev, int notify_id, void *cb, + return -EINVAL; + + if (is_registration) { +- cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); +- if (!cb_info) +- return -ENOMEM; +- +- cb_info->dev = dev; +- cb_info->cb_data = cb_data; +- if (is_framework) +- cb_info->fwk_cb = cb; +- else +- cb_info->cb = cb; +- +- hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id); ++ hash_add(drv_info->notifier_hash, &cb->hnode, notify_id); + } else { + hash_del(&cb_info->hnode); + kfree(cb_info); +@@ -1237,8 +1225,7 @@ static int __ffa_notify_relinquish(struct ffa_device *dev, int notify_id, + + mutex_lock(&drv_info->notify_lock); + +- rc = update_notifier_cb(dev, notify_id, NULL, NULL, false, +- is_framework); ++ rc = update_notifier_cb(dev, notify_id, NULL, is_framework); + if (rc) { + pr_err("Could not unregister notification callback\n"); + mutex_unlock(&drv_info->notify_lock); +@@ -1269,6 +1256,7 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + { + int rc; + u32 flags = 0; ++ struct notifier_cb_info *cb_info = NULL; + + if (ffa_notifications_disabled()) + return -EOPNOTSUPP; +@@ -1276,6 +1264,17 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + if (notify_id >= FFA_MAX_NOTIFICATIONS) + return -EINVAL; + ++ cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); ++ if (!cb_info) ++ return -ENOMEM; ++ ++ cb_info->dev = dev; ++ cb_info->cb_data = cb_data; ++ if (is_framework) ++ cb_info->fwk_cb = cb; ++ else ++ cb_info->cb = cb; ++ + mutex_lock(&drv_info->notify_lock); + + if (!is_framework) { +@@ -1283,21 +1282,22 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + flags = PER_VCPU_NOTIFICATION_FLAG; + + rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags); +- if (rc) { +- mutex_unlock(&drv_info->notify_lock); +- return rc; +- } ++ if (rc) ++ goto out_unlock_free; + } + +- rc = update_notifier_cb(dev, notify_id, cb, cb_data, true, +- is_framework); ++ rc = update_notifier_cb(dev, notify_id, cb_info, is_framework); + if (rc) { + pr_err("Failed to register callback for %d - %d\n", + notify_id, rc); + if (!is_framework) + ffa_notification_unbind(dev->vm_id, BIT(notify_id)); + } ++ ++out_unlock_free: + mutex_unlock(&drv_info->notify_lock); ++ if (rc) ++ kfree(cb_info); 
+ + return rc; + } +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-refactoring-to-prepare-for-framewor.patch b/queue-6.12/firmware-arm_ffa-refactoring-to-prepare-for-framewor.patch new file mode 100644 index 0000000000..bdb00d5487 --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-refactoring-to-prepare-for-framewor.patch @@ -0,0 +1,155 @@ +From 922f199a91b30879211f73469530dfe767e784f8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 17 Feb 2025 15:38:55 +0000 +Subject: firmware: arm_ffa: Refactoring to prepare for framework notification + support + +From: Sudeep Holla + +[ Upstream commit 07b760e713255a2224cfaad62eeaae85de913bac ] + +Currently, the framework notifications are not supported at all. +handle_notif_callbacks() doesn't handle them even though it is called with +the framework bitmap. Make that explicit by adding checks for the same. + +Also, we need to further classify the framework notifications as Secure +Partition Manager (SPM) and Non-Secure Hypervisor (NS_HYP). Extend/change the +notify_type enumeration to accommodate all four types and rejig the +values so that they can be reused in the bitmap enable mask macros. + +While at it, move ffa_notify_type_get() so that it can be used in +notifier_hash_node_get() in the future. + +No functional change. + +Tested-by: Viresh Kumar +Message-Id: <20250217-ffa_updates-v3-14-bd1d9de615e7@arm.com> +Signed-off-by: Sudeep Holla +Stable-dep-of: 27e850c88df0 ("firmware: arm_ffa: Move memory allocation outside the mutex locking") +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 57 ++++++++++++++++++------------- + 1 file changed, 34 insertions(+), 23 deletions(-) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index c0f3b7cdb6edb..5ac6dbde31f53 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -769,6 +769,13 @@ static int ffa_notification_bitmap_destroy(void) + return 0; + } + ++enum notify_type { ++ SECURE_PARTITION, ++ NON_SECURE_VM, ++ SPM_FRAMEWORK, ++ NS_HYP_FRAMEWORK, ++}; ++ + #define NOTIFICATION_LOW_MASK GENMASK(31, 0) + #define NOTIFICATION_HIGH_MASK GENMASK(63, 32) + #define NOTIFICATION_BITMAP_HIGH(x) \ +@@ -792,10 +799,17 @@ static int ffa_notification_bitmap_destroy(void) + #define MAX_IDS_32 10 + + #define PER_VCPU_NOTIFICATION_FLAG BIT(0) +-#define SECURE_PARTITION_BITMAP BIT(0) +-#define NON_SECURE_VM_BITMAP BIT(1) +-#define SPM_FRAMEWORK_BITMAP BIT(2) +-#define NS_HYP_FRAMEWORK_BITMAP BIT(3) ++#define SECURE_PARTITION_BITMAP_ENABLE BIT(SECURE_PARTITION) ++#define NON_SECURE_VM_BITMAP_ENABLE BIT(NON_SECURE_VM) ++#define SPM_FRAMEWORK_BITMAP_ENABLE BIT(SPM_FRAMEWORK) ++#define NS_HYP_FRAMEWORK_BITMAP_ENABLE BIT(NS_HYP_FRAMEWORK) ++#define FFA_BITMAP_ENABLE_MASK \ ++ (SECURE_PARTITION_BITMAP_ENABLE | SPM_FRAMEWORK_BITMAP_ENABLE) ++ ++#define FFA_SECURE_PARTITION_ID_FLAG BIT(15) ++ ++#define SPM_FRAMEWORK_BITMAP(x) NOTIFICATION_BITMAP_LOW(x) ++#define NS_HYP_FRAMEWORK_BITMAP(x) NOTIFICATION_BITMAP_HIGH(x) + + static int ffa_notification_bind_common(u16 dst_id, u64 bitmap, + u32 flags, bool is_bind) +@@ -1060,16 +1074,8 @@ static int ffa_memory_lend(struct ffa_mem_ops_args *args) + return ffa_memory_ops(FFA_MEM_LEND, args); + } + +-#define FFA_SECURE_PARTITION_ID_FLAG BIT(15) +- + #define ffa_notifications_disabled() (!drv_info->notif_enabled) + +-enum notify_type { +- NON_SECURE_VM, +- SECURE_PARTITION, +- FRAMEWORK, +-}; +- + struct notifier_cb_info { + struct hlist_node hnode; + ffa_notifier_cb cb; +@@ -1128,6 +1134,14 @@
static int ffa_notification_unbind(u16 dst_id, u64 bitmap) + return ffa_notification_bind_common(dst_id, bitmap, 0, false); + } + ++static enum notify_type ffa_notify_type_get(u16 vm_id) ++{ ++ if (vm_id & FFA_SECURE_PARTITION_ID_FLAG) ++ return SECURE_PARTITION; ++ else ++ return NON_SECURE_VM; ++} ++ + /* Should be called while the notify_lock is taken */ + static struct notifier_cb_info * + notifier_hash_node_get(u16 notify_id, enum notify_type type) +@@ -1172,14 +1186,6 @@ update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, + return 0; + } + +-static enum notify_type ffa_notify_type_get(u16 vm_id) +-{ +- if (vm_id & FFA_SECURE_PARTITION_ID_FLAG) +- return SECURE_PARTITION; +- else +- return NON_SECURE_VM; +-} +- + static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) + { + int rc; +@@ -1262,6 +1268,9 @@ static void handle_notif_callbacks(u64 bitmap, enum notify_type type) + int notify_id; + struct notifier_cb_info *cb_info = NULL; + ++ if (type == SPM_FRAMEWORK || type == NS_HYP_FRAMEWORK) ++ return; ++ + for (notify_id = 0; notify_id <= FFA_MAX_NOTIFICATIONS && bitmap; + notify_id++, bitmap >>= 1) { + if (!(bitmap & 1)) +@@ -1281,16 +1290,18 @@ static void notif_get_and_handle(void *unused) + int rc; + struct ffa_notify_bitmaps bitmaps; + +- rc = ffa_notification_get(SECURE_PARTITION_BITMAP | +- SPM_FRAMEWORK_BITMAP, &bitmaps); ++ rc = ffa_notification_get(FFA_BITMAP_ENABLE_MASK, &bitmaps); + if (rc) { + pr_err("Failed to retrieve notifications with %d!\n", rc); + return; + } + ++ handle_notif_callbacks(SPM_FRAMEWORK_BITMAP(bitmaps.arch_map), ++ SPM_FRAMEWORK); ++ handle_notif_callbacks(NS_HYP_FRAMEWORK_BITMAP(bitmaps.arch_map), ++ NS_HYP_FRAMEWORK); + handle_notif_callbacks(bitmaps.vm_map, NON_SECURE_VM); + handle_notif_callbacks(bitmaps.sp_map, SECURE_PARTITION); +- handle_notif_callbacks(bitmaps.arch_map, FRAMEWORK); + } + + static void +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-replace-mutex-with-rwlock-to-avoid-.patch b/queue-6.12/firmware-arm_ffa-replace-mutex-with-rwlock-to-avoid-.patch new file mode 100644 index 0000000000..5bf74763bc --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-replace-mutex-with-rwlock-to-avoid-.patch @@ -0,0 +1,148 @@ +From 9065bf41771c0a75a56aa041311467829ef560eb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 28 May 2025 09:49:43 +0100 +Subject: firmware: arm_ffa: Replace mutex with rwlock to avoid sleep in atomic + context +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Sudeep Holla + +[ Upstream commit 9ca7a421229bbdfbe2e1e628cff5cfa782720a10 ] + +The current use of a mutex to protect the notifier hashtable accesses +can lead to issues in the atomic context. 
It results in the below +kernel warnings: + + | BUG: sleeping function called from invalid context at kernel/locking/mutex.c:258 + | in_atomic(): 1, irqs_disabled(): 1, non_block: 0, pid: 9, name: kworker/0:0 + | preempt_count: 1, expected: 0 + | RCU nest depth: 0, expected: 0 + | CPU: 0 UID: 0 PID: 9 Comm: kworker/0:0 Not tainted 6.14.0 #4 + | Workqueue: ffa_pcpu_irq_notification notif_pcpu_irq_work_fn + | Call trace: + | show_stack+0x18/0x24 (C) + | dump_stack_lvl+0x78/0x90 + | dump_stack+0x18/0x24 + | __might_resched+0x114/0x170 + | __might_sleep+0x48/0x98 + | mutex_lock+0x24/0x80 + | handle_notif_callbacks+0x54/0xe0 + | notif_get_and_handle+0x40/0x88 + | generic_exec_single+0x80/0xc0 + | smp_call_function_single+0xfc/0x1a0 + | notif_pcpu_irq_work_fn+0x2c/0x38 + | process_one_work+0x14c/0x2b4 + | worker_thread+0x2e4/0x3e0 + | kthread+0x13c/0x210 + | ret_from_fork+0x10/0x20 + +To address this, replace the mutex with an rwlock to protect the notifier +hashtable accesses. This ensures that read-side locking does not sleep and +multiple readers can acquire the lock concurrently, avoiding unnecessary +contention and potential deadlocks. Writer access remains exclusive, +preserving correctness. + +This change resolves warnings from lockdep about potential sleep in +atomic context. + +Cc: Jens Wiklander +Reported-by: Jérôme Forissier +Closes: https://github.com/OP-TEE/optee_os/issues/7394 +Fixes: e0573444edbf ("firmware: arm_ffa: Add interfaces to request notification callbacks") +Message-Id: <20250528-ffa_notif_fix-v1-3-5ed7bc7f8437@arm.com> +Reviewed-by: Jens Wiklander +Tested-by: Jens Wiklander +Signed-off-by: Sudeep Holla +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 20 ++++++++++---------- + 1 file changed, 10 insertions(+), 10 deletions(-) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index 33f7bdb5c86dd..134e77a646cc1 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -110,7 +110,7 @@ struct ffa_drv_info { + struct work_struct sched_recv_irq_work; + struct xarray partition_info; + DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS)); +- struct mutex notify_lock; /* lock to protect notifier hashtable */ ++ rwlock_t notify_lock; /* lock to protect notifier hashtable */ + }; + + static struct ffa_drv_info *drv_info; +@@ -1223,19 +1223,19 @@ static int __ffa_notify_relinquish(struct ffa_device *dev, int notify_id, + if (notify_id >= FFA_MAX_NOTIFICATIONS) + return -EINVAL; + +- mutex_lock(&drv_info->notify_lock); ++ write_lock(&drv_info->notify_lock); + + rc = update_notifier_cb(dev, notify_id, NULL, is_framework); + if (rc) { + pr_err("Could not unregister notification callback\n"); +- mutex_unlock(&drv_info->notify_lock); ++ write_unlock(&drv_info->notify_lock); + return rc; + } + + if (!is_framework) + rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id)); + +- mutex_unlock(&drv_info->notify_lock); ++ write_unlock(&drv_info->notify_lock); + + return rc; + } +@@ -1275,7 +1275,7 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + else + cb_info->cb = cb; + +- mutex_lock(&drv_info->notify_lock); ++ write_lock(&drv_info->notify_lock); + + if (!is_framework) { + if (is_per_vcpu) +@@ -1295,7 +1295,7 @@ static int __ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + } + + out_unlock_free: +- mutex_unlock(&drv_info->notify_lock); ++ write_unlock(&drv_info->notify_lock); + if (rc) + kfree(cb_info); + +@@ -1344,16 +1344,16 @@ static void 
handle_notif_callbacks(u64 bitmap, enum notify_type type) + if (!(bitmap & 1)) + continue; + +- mutex_lock(&drv_info->notify_lock); ++ read_lock(&drv_info->notify_lock); + cb_info = notifier_hnode_get_by_type(notify_id, type); +- mutex_unlock(&drv_info->notify_lock); ++ read_unlock(&drv_info->notify_lock); + + if (cb_info && cb_info->cb) + cb_info->cb(notify_id, cb_info->cb_data); + } + } + +-static void notif_get_and_handle(void *unused) ++static void notif_get_and_handle(void *cb_data) + { + int rc; + struct ffa_notify_bitmaps bitmaps; +@@ -1800,7 +1800,7 @@ static void ffa_notifications_setup(void) + goto cleanup; + + hash_init(drv_info->notifier_hash); +- mutex_init(&drv_info->notify_lock); ++ rwlock_init(&drv_info->notify_lock); + + drv_info->notif_enabled = true; + return; +-- +2.39.5 + diff --git a/queue-6.12/firmware-arm_ffa-stash-ffa_device-instead-of-notify_.patch b/queue-6.12/firmware-arm_ffa-stash-ffa_device-instead-of-notify_.patch new file mode 100644 index 0000000000..826f9f4d45 --- /dev/null +++ b/queue-6.12/firmware-arm_ffa-stash-ffa_device-instead-of-notify_.patch @@ -0,0 +1,110 @@ +From 9aaed6f314ca39a11bee06f95ae6495bf761c73b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 17 Feb 2025 15:38:56 +0000 +Subject: firmware: arm_ffa: Stash ffa_device instead of notify_type in + notifier_cb_info + +From: Sudeep Holla + +[ Upstream commit a3d73fe8ae5db389f2108a052c0a9c3c3fbc29cf ] + +Currently, we store the type of the notification in the notifier_cb_info +structure that is put into the hash list to identify if the notification +block is for the secure partition or the non-secure VM. + +In order to let framework notifications reuse the hash list and to avoid +creating a new one each time, we need to store the ffa_device pointer +itself, as the same notification ID can be registered by multiple FF-A +devices for framework notifications.
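With the device stashed, a lookup can match on the owning device's identity as well as the notification ID, which is what the follow-up framework-notification patch relies on (excerpted from that patch's diff):

  hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id)
          if (node->fwk_cb && vmid == node->dev->vm_id &&
              uuid_equal(&node->dev->uuid, uuid))
                  return node;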
+ +Tested-by: Viresh Kumar +Message-Id: <20250217-ffa_updates-v3-15-bd1d9de615e7@arm.com> +Signed-off-by: Sudeep Holla +Stable-dep-of: 27e850c88df0 ("firmware: arm_ffa: Move memory allocation outside the mutex locking") +Signed-off-by: Sasha Levin +--- + drivers/firmware/arm_ffa/driver.c | 15 +++++++-------- + 1 file changed, 7 insertions(+), 8 deletions(-) + +diff --git a/drivers/firmware/arm_ffa/driver.c b/drivers/firmware/arm_ffa/driver.c +index 5ac6dbde31f53..f0c7f417d7524 100644 +--- a/drivers/firmware/arm_ffa/driver.c ++++ b/drivers/firmware/arm_ffa/driver.c +@@ -1078,9 +1078,9 @@ static int ffa_memory_lend(struct ffa_mem_ops_args *args) + + struct notifier_cb_info { + struct hlist_node hnode; ++ struct ffa_device *dev; + ffa_notifier_cb cb; + void *cb_data; +- enum notify_type type; + }; + + static int ffa_sched_recv_cb_update(u16 part_id, ffa_sched_recv_cb callback, +@@ -1149,17 +1149,18 @@ notifier_hash_node_get(u16 notify_id, enum notify_type type) + struct notifier_cb_info *node; + + hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id) +- if (type == node->type) ++ if (type == ffa_notify_type_get(node->dev->vm_id)) + return node; + + return NULL; + } + + static int +-update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, ++update_notifier_cb(struct ffa_device *dev, int notify_id, ffa_notifier_cb cb, + void *cb_data, bool is_registration) + { + struct notifier_cb_info *cb_info = NULL; ++ enum notify_type type = ffa_notify_type_get(dev->vm_id); + bool cb_found; + + cb_info = notifier_hash_node_get(notify_id, type); +@@ -1173,7 +1174,7 @@ update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, + if (!cb_info) + return -ENOMEM; + +- cb_info->type = type; ++ cb_info->dev = dev; + cb_info->cb = cb; + cb_info->cb_data = cb_data; + +@@ -1189,7 +1190,6 @@ update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, + static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) + { + int rc; +- enum notify_type type = ffa_notify_type_get(dev->vm_id); + + if (ffa_notifications_disabled()) + return -EOPNOTSUPP; +@@ -1199,7 +1199,7 @@ static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) + + mutex_lock(&drv_info->notify_lock); + +- rc = update_notifier_cb(notify_id, type, NULL, NULL, false); ++ rc = update_notifier_cb(dev, notify_id, NULL, NULL, false); + if (rc) { + pr_err("Could not unregister notification callback\n"); + mutex_unlock(&drv_info->notify_lock); +@@ -1218,7 +1218,6 @@ static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + { + int rc; + u32 flags = 0; +- enum notify_type type = ffa_notify_type_get(dev->vm_id); + + if (ffa_notifications_disabled()) + return -EOPNOTSUPP; +@@ -1237,7 +1236,7 @@ static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, + return rc; + } + +- rc = update_notifier_cb(notify_id, type, cb, cb_data, true); ++ rc = update_notifier_cb(dev, notify_id, cb, cb_data, true); + if (rc) { + pr_err("Failed to register callback for %d - %d\n", + notify_id, rc); +-- +2.39.5 + diff --git a/queue-6.12/fs-export-anon_inode_make_secure_inode-and-fix-secre.patch b/queue-6.12/fs-export-anon_inode_make_secure_inode-and-fix-secre.patch new file mode 100644 index 0000000000..50e55286f5 --- /dev/null +++ b/queue-6.12/fs-export-anon_inode_make_secure_inode-and-fix-secre.patch @@ -0,0 +1,130 @@ +From 2149eedc59ec84c9f77672177e19077de3db611f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 20 Jun 2025 07:03:30 +0000 +Subject: fs: 
export anon_inode_make_secure_inode() and fix secretmem LSM + bypass + +From: Shivank Garg + +[ Upstream commit cbe4134ea4bc493239786220bd69cb8a13493190 ] + +Export anon_inode_make_secure_inode() to allow KVM guest_memfd to create +anonymous inodes with proper security context. This replaces the current +pattern of calling alloc_anon_inode() followed by +inode_init_security_anon() for creating security context manually. + +This change also fixes a security regression in secretmem where the +S_PRIVATE flag was not cleared after alloc_anon_inode(), causing +LSM/SELinux checks to be bypassed for secretmem file descriptors. + +As guest_memfd currently resides in the KVM module, we need to export this +symbol for use outside the core kernel. In the future, guest_memfd might be +moved to core-mm, at which point the symbols no longer would have to be +exported. When/if that happens is still unclear. + +Fixes: 2bfe15c52612 ("mm: create security context for memfd_secret inodes") +Suggested-by: David Hildenbrand +Suggested-by: Mike Rapoport +Signed-off-by: Shivank Garg +Link: https://lore.kernel.org/20250620070328.803704-3-shivankg@amd.com +Acked-by: "Mike Rapoport (Microsoft)" +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/anon_inodes.c | 23 ++++++++++++++++++----- + include/linux/fs.h | 2 ++ + mm/secretmem.c | 9 +-------- + 3 files changed, 21 insertions(+), 13 deletions(-) + +diff --git a/fs/anon_inodes.c b/fs/anon_inodes.c +index 583ac81669c24..35f765610802a 100644 +--- a/fs/anon_inodes.c ++++ b/fs/anon_inodes.c +@@ -55,14 +55,25 @@ static struct file_system_type anon_inode_fs_type = { + .kill_sb = kill_anon_super, + }; + +-static struct inode *anon_inode_make_secure_inode( +- const char *name, +- const struct inode *context_inode) ++/** ++ * anon_inode_make_secure_inode - allocate an anonymous inode with security context ++ * @sb: [in] Superblock to allocate from ++ * @name: [in] Name of the class of the newfile (e.g., "secretmem") ++ * @context_inode: ++ * [in] Optional parent inode for security inheritance ++ * ++ * The function ensures proper security initialization through the LSM hook ++ * security_inode_init_security_anon(). ++ * ++ * Return: Pointer to new inode on success, ERR_PTR on failure. 
++ */ ++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name, ++ const struct inode *context_inode) + { + struct inode *inode; + int error; + +- inode = alloc_anon_inode(anon_inode_mnt->mnt_sb); ++ inode = alloc_anon_inode(sb); + if (IS_ERR(inode)) + return inode; + inode->i_flags &= ~S_PRIVATE; +@@ -74,6 +85,7 @@ static struct inode *anon_inode_make_secure_inode( + } + return inode; + } ++EXPORT_SYMBOL_GPL_FOR_MODULES(anon_inode_make_secure_inode, "kvm"); + + static struct file *__anon_inode_getfile(const char *name, + const struct file_operations *fops, +@@ -88,7 +100,8 @@ static struct file *__anon_inode_getfile(const char *name, + return ERR_PTR(-ENOENT); + + if (make_inode) { +- inode = anon_inode_make_secure_inode(name, context_inode); ++ inode = anon_inode_make_secure_inode(anon_inode_mnt->mnt_sb, ++ name, context_inode); + if (IS_ERR(inode)) { + file = ERR_CAST(inode); + goto err; +diff --git a/include/linux/fs.h b/include/linux/fs.h +index b98f128c9afa7..a6de8d93838d1 100644 +--- a/include/linux/fs.h ++++ b/include/linux/fs.h +@@ -3407,6 +3407,8 @@ extern int simple_write_begin(struct file *file, struct address_space *mapping, + extern const struct address_space_operations ram_aops; + extern int always_delete_dentry(const struct dentry *); + extern struct inode *alloc_anon_inode(struct super_block *); ++struct inode *anon_inode_make_secure_inode(struct super_block *sb, const char *name, ++ const struct inode *context_inode); + extern int simple_nosetlease(struct file *, int, struct file_lease **, void **); + extern const struct dentry_operations simple_dentry_operations; + +diff --git a/mm/secretmem.c b/mm/secretmem.c +index 1b0a214ee5580..4662f2510ae5f 100644 +--- a/mm/secretmem.c ++++ b/mm/secretmem.c +@@ -195,18 +195,11 @@ static struct file *secretmem_file_create(unsigned long flags) + struct file *file; + struct inode *inode; + const char *anon_name = "[secretmem]"; +- int err; + +- inode = alloc_anon_inode(secretmem_mnt->mnt_sb); ++ inode = anon_inode_make_secure_inode(secretmem_mnt->mnt_sb, anon_name, NULL); + if (IS_ERR(inode)) + return ERR_CAST(inode); + +- err = security_inode_init_security_anon(inode, &QSTR(anon_name), NULL); +- if (err) { +- file = ERR_PTR(err); +- goto err_free_inode; +- } +- + file = alloc_file_pseudo(inode, secretmem_mnt, "secretmem", + O_RDWR, &secretmem_fops); + if (IS_ERR(file)) +-- +2.39.5 + diff --git a/queue-6.12/genirq-irq_sim-initialize-work-context-pointers-prop.patch b/queue-6.12/genirq-irq_sim-initialize-work-context-pointers-prop.patch new file mode 100644 index 0000000000..c679071627 --- /dev/null +++ b/queue-6.12/genirq-irq_sim-initialize-work-context-pointers-prop.patch @@ -0,0 +1,37 @@ +From c82f97388d7de26beef083b58764af80d2696625 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 12 Jun 2025 21:48:27 +0900 +Subject: genirq/irq_sim: Initialize work context pointers properly + +From: Gyeyoung Baek + +[ Upstream commit 8a2277a3c9e4cc5398f80821afe7ecbe9bdf2819 ] + +Initialize `ops` member's pointers properly by using kzalloc() instead of +kmalloc() when allocating the simulation work context. Otherwise the +pointers contain random content leading to invalid dereferencing. 
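+
+As an illustrative sketch (not the driver code itself), the difference
+between the two allocators is that kzalloc() zero-fills the allocation:
+
+	/* kmalloc(): any field the caller never assigns holds garbage */
+	ctx = kmalloc(sizeof(*ctx), GFP_KERNEL);
+
+	/* kzalloc(): unassigned fields read back as zero / NULL */
+	ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+
+With kzalloc(), a later "if (ctx->some_callback)" test on a callback that
+was never registered sees NULL and safely skips the call instead of
+jumping through a random pointer ("some_callback" is a made-up field,
+used here only for illustration).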
+ +Signed-off-by: Gyeyoung Baek +Signed-off-by: Thomas Gleixner +Link: https://lore.kernel.org/all/20250612124827.63259-1-gye976@gmail.com +Signed-off-by: Sasha Levin +--- + kernel/irq/irq_sim.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/kernel/irq/irq_sim.c b/kernel/irq/irq_sim.c +index 1a3d483548e2f..ae4c9cbd1b4b9 100644 +--- a/kernel/irq/irq_sim.c ++++ b/kernel/irq/irq_sim.c +@@ -202,7 +202,7 @@ struct irq_domain *irq_domain_create_sim_full(struct fwnode_handle *fwnode, + void *data) + { + struct irq_sim_work_ctx *work_ctx __free(kfree) = +- kmalloc(sizeof(*work_ctx), GFP_KERNEL); ++ kzalloc(sizeof(*work_ctx), GFP_KERNEL); + + if (!work_ctx) + return ERR_PTR(-ENOMEM); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-add-glf_pending_reply-flag.patch b/queue-6.12/gfs2-add-glf_pending_reply-flag.patch new file mode 100644 index 0000000000..c8fb3418ca --- /dev/null +++ b/queue-6.12/gfs2-add-glf_pending_reply-flag.patch @@ -0,0 +1,88 @@ +From 9e05c1405326fb38726dadaceb0d28cab164234d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 24 Jan 2025 17:23:50 +0100 +Subject: gfs2: Add GLF_PENDING_REPLY flag + +From: Andreas Gruenbacher + +[ Upstream commit 8bbfde0875590b71f012bd8b0c9cb988c9a873b9 ] + +Introduce a new GLF_PENDING_REPLY flag to indicate that a reply from DLM +is expected. Include that flag in glock dumps to show more clearly +what's going on. (When the GLF_PENDING_REPLY flag is set, the GLF_LOCK +flag will also be set but the GLF_LOCK flag alone isn't sufficient to +tell that we are waiting for a DLM reply.) + +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/glock.c | 5 +++++ + fs/gfs2/incore.h | 1 + + fs/gfs2/trace_gfs2.h | 1 + + 3 files changed, 7 insertions(+) + +diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c +index 3c70c383b9bdd..ec043aa71de8c 100644 +--- a/fs/gfs2/glock.c ++++ b/fs/gfs2/glock.c +@@ -807,6 +807,7 @@ __acquires(&gl->gl_lockref.lock) + } + + if (ls->ls_ops->lm_lock) { ++ set_bit(GLF_PENDING_REPLY, &gl->gl_flags); + spin_unlock(&gl->gl_lockref.lock); + ret = ls->ls_ops->lm_lock(gl, target, lck_flags); + spin_lock(&gl->gl_lockref.lock); +@@ -825,6 +826,7 @@ __acquires(&gl->gl_lockref.lock) + /* The operation will be completed asynchronously. */ + return; + } ++ clear_bit(GLF_PENDING_REPLY, &gl->gl_flags); + } + + /* Complete the operation now. 
*/
+@@ -1960,6 +1962,7 @@ void gfs2_glock_complete(struct gfs2_glock *gl, int ret)
+ struct lm_lockstruct *ls = &gl->gl_name.ln_sbd->sd_lockstruct;
+
+ spin_lock(&gl->gl_lockref.lock);
++ clear_bit(GLF_PENDING_REPLY, &gl->gl_flags);
+ gl->gl_reply = ret;
+
+ if (unlikely(test_bit(DFL_BLOCK_LOCKS, &ls->ls_recover_flags))) {
+@@ -2360,6 +2363,8 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
+ *p++ = 'f';
+ if (test_bit(GLF_INVALIDATE_IN_PROGRESS, gflags))
+ *p++ = 'i';
++ if (test_bit(GLF_PENDING_REPLY, gflags))
++ *p++ = 'R';
+ if (test_bit(GLF_HAVE_REPLY, gflags))
+ *p++ = 'r';
+ if (test_bit(GLF_INITIAL, gflags))
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index 98a41c631ce10..f6aee2c9b9118 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -330,6 +330,7 @@ enum {
+ GLF_UNLOCKED = 16, /* Wait for glock to be unlocked */
+ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
+ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
++ GLF_PENDING_REPLY = 19,
+ };
+
+ struct gfs2_glock {
+diff --git a/fs/gfs2/trace_gfs2.h b/fs/gfs2/trace_gfs2.h
+index ac8ca485c46fe..09121c2c198ba 100644
+--- a/fs/gfs2/trace_gfs2.h
++++ b/fs/gfs2/trace_gfs2.h
+@@ -53,6 +53,7 @@
+ {(1UL << GLF_DIRTY), "y" }, \
+ {(1UL << GLF_LFLUSH), "f" }, \
+ {(1UL << GLF_INVALIDATE_IN_PROGRESS), "i" }, \
++ {(1UL << GLF_PENDING_REPLY), "R" }, \
+ {(1UL << GLF_HAVE_REPLY), "r" }, \
+ {(1UL << GLF_INITIAL), "a" }, \
+ {(1UL << GLF_HAVE_FROZEN_REPLY), "F" }, \
+--
+2.39.5
+
diff --git a/queue-6.12/gfs2-deallocate-inodes-in-gfs2_create_inode.patch b/queue-6.12/gfs2-deallocate-inodes-in-gfs2_create_inode.patch
new file mode 100644
index 0000000000..c5e35d4b7f
--- /dev/null
+++ b/queue-6.12/gfs2-deallocate-inodes-in-gfs2_create_inode.patch
@@ -0,0 +1,143 @@
+From b9e176a14e2000c6c8213917529479cada840605 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 18 Apr 2025 16:41:52 +0200
+Subject: gfs2: deallocate inodes in gfs2_create_inode
+
+From: Andreas Gruenbacher
+
+[ Upstream commit 2c63986dd35fa9eb0d7d1530b5eb2244b7296e22 ]
+
+When creating and destroying inodes, we are relying on the inode hash
+table to make sure that for a given inode number, only a single inode
+will exist. We then link that inode to its inode and iopen glock and
+let those glocks point back at the inode. However, when iget_failed()
+is called, the inode is removed from the inode hash table before
+gfs2_evict_inode() is called, and uniqueness is no longer guaranteed.
+
+Commit f1046a472b70 ("gfs2: gl_object races fix") was trying to work
+around that problem by detaching the inode glock from the inode before
+calling iget_failed(), but that broke the inode deallocation code in
+gfs2_evict_inode().
+
+To fix that, deallocate partially created inodes in gfs2_create_inode()
+instead of relying on gfs2_evict_inode() for doing that.
+
+This means that gfs2_evict_inode() and its helper functions will no
+longer see partially created inodes, and so some simplifications are
+possible there.
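+
+Condensed from the inode.c hunk below, the creation error path now
+unwinds the on-disk allocation itself (a summary, not a literal quote of
+the patch):
+
+fail_dealloc_inode:
+	set_bit(GIF_ALLOC_FAILED, &ip->i_flags);
+	if (ip->i_eattr)	/* drop the xattr fork first ... */
+		dealloc_error = gfs2_ea_dealloc(ip, xattr_initialized);
+	clear_nlink(inode);
+	mark_inode_dirty(inode);
+	if (!dealloc_error)	/* ... then free the dinode itself */
+		dealloc_error = gfs2_dinode_dealloc(ip);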
+ +Fixes: 9ffa18884cce ("gfs2: gl_object races fix") +Signed-off-by: Andreas Gruenbacher +Signed-off-by: Sasha Levin +--- + fs/gfs2/inode.c | 27 +++++++++++++++++++-------- + fs/gfs2/super.c | 6 +----- + 2 files changed, 20 insertions(+), 13 deletions(-) + +diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c +index e59fde1bae7b7..0b546024f5ef7 100644 +--- a/fs/gfs2/inode.c ++++ b/fs/gfs2/inode.c +@@ -697,10 +697,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + struct gfs2_inode *dip = GFS2_I(dir), *ip; + struct gfs2_sbd *sdp = GFS2_SB(&dip->i_inode); + struct gfs2_glock *io_gl; +- int error; ++ int error, dealloc_error; + u32 aflags = 0; + unsigned blocks = 1; + struct gfs2_diradd da = { .bh = NULL, .save_loc = 1, }; ++ bool xattr_initialized = false; + + if (!name->len || name->len > GFS2_FNAMESIZE) + return -ENAMETOOLONG; +@@ -813,11 +814,11 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + + error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_inode_glops, CREATE, &ip->i_gl); + if (error) +- goto fail_free_inode; ++ goto fail_dealloc_inode; + + error = gfs2_glock_get(sdp, ip->i_no_addr, &gfs2_iopen_glops, CREATE, &io_gl); + if (error) +- goto fail_free_inode; ++ goto fail_dealloc_inode; + gfs2_cancel_delete_work(io_gl); + io_gl->gl_no_formal_ino = ip->i_no_formal_ino; + +@@ -841,8 +842,10 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + if (error) + goto fail_gunlock3; + +- if (blocks > 1) ++ if (blocks > 1) { + gfs2_init_xattr(ip); ++ xattr_initialized = true; ++ } + init_dinode(dip, ip, symname); + gfs2_trans_end(sdp); + +@@ -897,6 +900,18 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + gfs2_glock_dq_uninit(&ip->i_iopen_gh); + fail_gunlock2: + gfs2_glock_put(io_gl); ++fail_dealloc_inode: ++ set_bit(GIF_ALLOC_FAILED, &ip->i_flags); ++ dealloc_error = 0; ++ if (ip->i_eattr) ++ dealloc_error = gfs2_ea_dealloc(ip, xattr_initialized); ++ clear_nlink(inode); ++ mark_inode_dirty(inode); ++ if (!dealloc_error) ++ dealloc_error = gfs2_dinode_dealloc(ip); ++ if (dealloc_error) ++ fs_warn(sdp, "%s: %d\n", __func__, dealloc_error); ++ ip->i_no_addr = 0; + fail_free_inode: + if (ip->i_gl) { + gfs2_glock_put(ip->i_gl); +@@ -911,10 +926,6 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + gfs2_dir_no_add(&da); + gfs2_glock_dq_uninit(&d_gh); + if (!IS_ERR_OR_NULL(inode)) { +- set_bit(GIF_ALLOC_FAILED, &ip->i_flags); +- clear_nlink(inode); +- if (ip->i_no_addr) +- mark_inode_dirty(inode); + if (inode->i_state & I_NEW) + iget_failed(inode); + else +diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c +index 694d554dba546..3b1303f97a3bc 100644 +--- a/fs/gfs2/super.c ++++ b/fs/gfs2/super.c +@@ -1255,9 +1255,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode, + struct gfs2_sbd *sdp = sb->s_fs_info; + int ret; + +- if (unlikely(test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) +- goto should_delete; +- + if (gfs2_holder_initialized(&ip->i_iopen_gh) && + test_bit(GLF_DEFER_DELETE, &ip->i_iopen_gh.gh_gl->gl_flags)) + return EVICT_SHOULD_DEFER_DELETE; +@@ -1291,7 +1288,6 @@ static enum evict_behavior evict_should_delete(struct inode *inode, + if (inode->i_nlink) + return EVICT_SHOULD_SKIP_DELETE; + +-should_delete: + if (gfs2_holder_initialized(&ip->i_iopen_gh) && + test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags)) { + if (!gfs2_upgrade_iopen_glock(inode)) { +@@ -1319,7 +1315,7 @@ static int evict_unlinked_inode(struct inode *inode) + } + + if (ip->i_eattr) { +- ret = 
gfs2_ea_dealloc(ip, !test_bit(GIF_ALLOC_FAILED, &ip->i_flags)); ++ ret = gfs2_ea_dealloc(ip, true);
+ if (ret)
+ goto out;
+ }
+--
+2.39.5
+
diff --git a/queue-6.12/gfs2-decode-missing-glock-flags-in-tracepoints.patch b/queue-6.12/gfs2-decode-missing-glock-flags-in-tracepoints.patch
new file mode 100644
index 0000000000..87a39be223
--- /dev/null
+++ b/queue-6.12/gfs2-decode-missing-glock-flags-in-tracepoints.patch
@@ -0,0 +1,40 @@
+From f0f6d5c7f1d365dbcc0c645e5dde5f7d5c82e24b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 23 Jan 2025 19:50:19 +0100
+Subject: gfs2: Decode missing glock flags in tracepoints
+
+From: Andreas Gruenbacher
+
+[ Upstream commit 57882533923ce7842a21b8f5be14de861403dd26 ]
+
+A number of glock flags are currently not shown in the text form of
+glock tracepoints. Add them.
+
+Signed-off-by: Andreas Gruenbacher
+Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode")
+Signed-off-by: Sasha Levin
+---
+ fs/gfs2/trace_gfs2.h | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/fs/gfs2/trace_gfs2.h b/fs/gfs2/trace_gfs2.h
+index 8eae8d62a4132..ac8ca485c46fe 100644
+--- a/fs/gfs2/trace_gfs2.h
++++ b/fs/gfs2/trace_gfs2.h
+@@ -58,7 +58,12 @@
+ {(1UL << GLF_HAVE_FROZEN_REPLY), "F" }, \
+ {(1UL << GLF_LRU), "L" }, \
+ {(1UL << GLF_OBJECT), "o" }, \
+- {(1UL << GLF_BLOCKING), "b" })
++ {(1UL << GLF_BLOCKING), "b" }, \
++ {(1UL << GLF_UNLOCKED), "x" }, \
++ {(1UL << GLF_INSTANTIATE_NEEDED), "n" }, \
++ {(1UL << GLF_INSTANTIATE_IN_PROG), "N" }, \
++ {(1UL << GLF_TRY_TO_EVICT), "e" }, \
++ {(1UL << GLF_VERIFY_DELETE), "E" })
+
+ #ifndef NUMPTY
+ #define NUMPTY
+--
+2.39.5
+
diff --git a/queue-6.12/gfs2-don-t-start-unnecessary-transactions-during-log.patch b/queue-6.12/gfs2-don-t-start-unnecessary-transactions-during-log.patch
new file mode 100644
index 0000000000..1bcee2068b
--- /dev/null
+++ b/queue-6.12/gfs2-don-t-start-unnecessary-transactions-during-log.patch
@@ -0,0 +1,110 @@
+From f2bdd12bd158d36136b872a36e639d34e1eb6409 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 28 Mar 2025 22:47:02 +0100
+Subject: gfs2: Don't start unnecessary transactions during log flush
+
+From: Andreas Gruenbacher
+
+[ Upstream commit 5a90f8d499225512a385585ffe3e28f687263d47 ]
+
+Commit 8d391972ae2d ("gfs2: Remove __gfs2_writepage()") changed the log
+flush code in gfs2_ail1_start_one() to call aops->writepages() instead
+of aops->writepage(). For jdata inodes, this means that we will now try
+to reserve log space and start a transaction before we can determine
+that the pages in question have already been journaled. When this
+happens in the context of gfs2_logd(), it can now appear that not enough
+log space is available for freeing up log space, and we will lock up.
+
+Fix that by issuing journal writes directly instead of going through
+aops->writepages() in the log flush code.
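+
+Condensed from the log.c hunk below, gfs2_ail1_start_one() now picks the
+writeback path by inode type (a summary, not a literal quote):
+
+	if (gfs2_is_jdata(GFS2_I(mapping->host)))
+		/* journal writes are issued directly, no transaction */
+		ret = gfs2_jdata_writeback(mapping, wbc);
+	else
+		ret = mapping->a_ops->writepages(mapping, wbc);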
+ +Fixes: 8d391972ae2d ("gfs2: Remove __gfs2_writepage()") +Signed-off-by: Andreas Gruenbacher +Signed-off-by: Sasha Levin +--- + fs/gfs2/aops.c | 31 +++++++++++++++++++++++++++++++ + fs/gfs2/aops.h | 1 + + fs/gfs2/log.c | 7 ++++++- + 3 files changed, 38 insertions(+), 1 deletion(-) + +diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c +index ed2c708a215a4..eb4270e82ef8e 100644 +--- a/fs/gfs2/aops.c ++++ b/fs/gfs2/aops.c +@@ -117,6 +117,37 @@ static int __gfs2_jdata_write_folio(struct folio *folio, + return gfs2_write_jdata_folio(folio, wbc); + } + ++/** ++ * gfs2_jdata_writeback - Write jdata folios to the log ++ * @mapping: The mapping to write ++ * @wbc: The writeback control ++ * ++ * Returns: errno ++ */ ++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc) ++{ ++ struct inode *inode = mapping->host; ++ struct gfs2_inode *ip = GFS2_I(inode); ++ struct gfs2_sbd *sdp = GFS2_SB(mapping->host); ++ struct folio *folio = NULL; ++ int error; ++ ++ BUG_ON(current->journal_info); ++ if (gfs2_assert_withdraw(sdp, ip->i_gl->gl_state == LM_ST_EXCLUSIVE)) ++ return 0; ++ ++ while ((folio = writeback_iter(mapping, wbc, folio, &error))) { ++ if (folio_test_checked(folio)) { ++ folio_redirty_for_writepage(wbc, folio); ++ folio_unlock(folio); ++ continue; ++ } ++ error = __gfs2_jdata_write_folio(folio, wbc); ++ } ++ ++ return error; ++} ++ + /** + * gfs2_writepages - Write a bunch of dirty pages back to disk + * @mapping: The mapping to write +diff --git a/fs/gfs2/aops.h b/fs/gfs2/aops.h +index f9fa41aaeaf41..bf002522a7822 100644 +--- a/fs/gfs2/aops.h ++++ b/fs/gfs2/aops.h +@@ -9,5 +9,6 @@ + #include "incore.h" + + void adjust_fs_space(struct inode *inode); ++int gfs2_jdata_writeback(struct address_space *mapping, struct writeback_control *wbc); + + #endif /* __AOPS_DOT_H__ */ +diff --git a/fs/gfs2/log.c b/fs/gfs2/log.c +index f9c5089783d24..115c4ac457e90 100644 +--- a/fs/gfs2/log.c ++++ b/fs/gfs2/log.c +@@ -31,6 +31,7 @@ + #include "dir.h" + #include "trace_gfs2.h" + #include "trans.h" ++#include "aops.h" + + static void gfs2_log_shutdown(struct gfs2_sbd *sdp); + +@@ -131,7 +132,11 @@ __acquires(&sdp->sd_ail_lock) + if (!mapping) + continue; + spin_unlock(&sdp->sd_ail_lock); +- ret = mapping->a_ops->writepages(mapping, wbc); ++ BUG_ON(GFS2_SB(mapping->host) != sdp); ++ if (gfs2_is_jdata(GFS2_I(mapping->host))) ++ ret = gfs2_jdata_writeback(mapping, wbc); ++ else ++ ret = mapping->a_ops->writepages(mapping, wbc); + if (need_resched()) { + blk_finish_plug(plug); + cond_resched(); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-initialize-gl_no_formal_ino-earlier.patch b/queue-6.12/gfs2-initialize-gl_no_formal_ino-earlier.patch new file mode 100644 index 0000000000..60594f39fc --- /dev/null +++ b/queue-6.12/gfs2-initialize-gl_no_formal_ino-earlier.patch @@ -0,0 +1,74 @@ +From f53b07718e2c405f2c05c101b3963166af14836c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Sep 2024 15:42:57 +0200 +Subject: gfs2: Initialize gl_no_formal_ino earlier + +From: Andreas Gruenbacher + +[ Upstream commit 1072b3aa6863bc4d91006038b032bfb4dcc98dec ] + +Set gl_no_formal_ino of the iopen glock to the generation of the +associated inode (ip->i_no_formal_ino) as soon as that value is known. +This saves us from setting it later, possibly repeatedly, when queuing +GLF_VERIFY_DELETE work. 
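+
+Condensed from the glops.c hunk below, the copy now happens right after
+a successful gfs2_inode_refresh() (a summary, not a literal quote):
+
+	io_gl = ip->i_iopen_gh.gh_gl;
+	io_gl->gl_no_formal_ino = ip->i_no_formal_ino;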
+ +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/glock.c | 1 - + fs/gfs2/glops.c | 9 ++++++++- + fs/gfs2/inode.c | 1 + + 3 files changed, 9 insertions(+), 2 deletions(-) + +diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c +index aecce4bb5e1a9..ed699f2872f55 100644 +--- a/fs/gfs2/glock.c ++++ b/fs/gfs2/glock.c +@@ -985,7 +985,6 @@ static bool gfs2_try_evict(struct gfs2_glock *gl) + ip = NULL; + spin_unlock(&gl->gl_lockref.lock); + if (ip) { +- gl->gl_no_formal_ino = ip->i_no_formal_ino; + set_bit(GIF_DEFERRED_DELETE, &ip->i_flags); + d_prune_aliases(&ip->i_inode); + iput(&ip->i_inode); +diff --git a/fs/gfs2/glops.c b/fs/gfs2/glops.c +index 72a0601ce65e2..4b6b23c638e29 100644 +--- a/fs/gfs2/glops.c ++++ b/fs/gfs2/glops.c +@@ -494,11 +494,18 @@ int gfs2_inode_refresh(struct gfs2_inode *ip) + static int inode_go_instantiate(struct gfs2_glock *gl) + { + struct gfs2_inode *ip = gl->gl_object; ++ struct gfs2_glock *io_gl; ++ int error; + + if (!ip) /* no inode to populate - read it in later */ + return 0; + +- return gfs2_inode_refresh(ip); ++ error = gfs2_inode_refresh(ip); ++ if (error) ++ return error; ++ io_gl = ip->i_iopen_gh.gh_gl; ++ io_gl->gl_no_formal_ino = ip->i_no_formal_ino; ++ return 0; + } + + static int inode_go_held(struct gfs2_holder *gh) +diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c +index 3be24285ab01d..2d2f7646440f5 100644 +--- a/fs/gfs2/inode.c ++++ b/fs/gfs2/inode.c +@@ -751,6 +751,7 @@ static int gfs2_create_inode(struct inode *dir, struct dentry *dentry, + if (error) + goto fail_free_inode; + gfs2_cancel_delete_work(io_gl); ++ io_gl->gl_no_formal_ino = ip->i_no_formal_ino; + + retry: + error = insert_inode_locked4(inode, ip->i_no_addr, iget_test, &ip->i_no_addr); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-move-gfs2_dinode_dealloc.patch b/queue-6.12/gfs2-move-gfs2_dinode_dealloc.patch new file mode 100644 index 0000000000..d479e5a2b3 --- /dev/null +++ b/queue-6.12/gfs2-move-gfs2_dinode_dealloc.patch @@ -0,0 +1,194 @@ +From a171fae6f06d96d0bd5ba78524ececfeecc4f2e6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 17 Apr 2025 22:41:40 +0200 +Subject: gfs2: Move gfs2_dinode_dealloc + +From: Andreas Gruenbacher + +[ Upstream commit bcd18105fb34e27c097f222733dba9a3e79f191c ] + +Move gfs2_dinode_dealloc() and its helper gfs2_final_release_pages() +from super.c to inode.c. + +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/inode.c | 68 +++++++++++++++++++++++++++++++++++++++++++++++++ + fs/gfs2/inode.h | 1 + + fs/gfs2/super.c | 68 ------------------------------------------------- + 3 files changed, 69 insertions(+), 68 deletions(-) + +diff --git a/fs/gfs2/inode.c b/fs/gfs2/inode.c +index 2d2f7646440f5..e59fde1bae7b7 100644 +--- a/fs/gfs2/inode.c ++++ b/fs/gfs2/inode.c +@@ -439,6 +439,74 @@ static int alloc_dinode(struct gfs2_inode *ip, u32 flags, unsigned *dblocks) + return error; + } + ++static void gfs2_final_release_pages(struct gfs2_inode *ip) ++{ ++ struct inode *inode = &ip->i_inode; ++ struct gfs2_glock *gl = ip->i_gl; ++ ++ if (unlikely(!gl)) { ++ /* This can only happen during incomplete inode creation. 
*/ ++ BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags)); ++ return; ++ } ++ ++ truncate_inode_pages(gfs2_glock2aspace(gl), 0); ++ truncate_inode_pages(&inode->i_data, 0); ++ ++ if (atomic_read(&gl->gl_revokes) == 0) { ++ clear_bit(GLF_LFLUSH, &gl->gl_flags); ++ clear_bit(GLF_DIRTY, &gl->gl_flags); ++ } ++} ++ ++int gfs2_dinode_dealloc(struct gfs2_inode *ip) ++{ ++ struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); ++ struct gfs2_rgrpd *rgd; ++ struct gfs2_holder gh; ++ int error; ++ ++ if (gfs2_get_inode_blocks(&ip->i_inode) != 1) { ++ gfs2_consist_inode(ip); ++ return -EIO; ++ } ++ ++ gfs2_rindex_update(sdp); ++ ++ error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE); ++ if (error) ++ return error; ++ ++ rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1); ++ if (!rgd) { ++ gfs2_consist_inode(ip); ++ error = -EIO; ++ goto out_qs; ++ } ++ ++ error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, ++ LM_FLAG_NODE_SCOPE, &gh); ++ if (error) ++ goto out_qs; ++ ++ error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA, ++ sdp->sd_jdesc->jd_blocks); ++ if (error) ++ goto out_rg_gunlock; ++ ++ gfs2_free_di(rgd, ip); ++ ++ gfs2_final_release_pages(ip); ++ ++ gfs2_trans_end(sdp); ++ ++out_rg_gunlock: ++ gfs2_glock_dq_uninit(&gh); ++out_qs: ++ gfs2_quota_unhold(ip); ++ return error; ++} ++ + static void gfs2_init_dir(struct buffer_head *dibh, + const struct gfs2_inode *parent) + { +diff --git a/fs/gfs2/inode.h b/fs/gfs2/inode.h +index fd15d1c6b6fb1..225b9d0038cd0 100644 +--- a/fs/gfs2/inode.h ++++ b/fs/gfs2/inode.h +@@ -92,6 +92,7 @@ struct inode *gfs2_inode_lookup(struct super_block *sb, unsigned type, + struct inode *gfs2_lookup_by_inum(struct gfs2_sbd *sdp, u64 no_addr, + u64 no_formal_ino, + unsigned int blktype); ++int gfs2_dinode_dealloc(struct gfs2_inode *ip); + + int gfs2_inode_refresh(struct gfs2_inode *ip); + +diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c +index d982db129b2b4..aad6d5d2816e3 100644 +--- a/fs/gfs2/super.c ++++ b/fs/gfs2/super.c +@@ -1175,74 +1175,6 @@ static int gfs2_show_options(struct seq_file *s, struct dentry *root) + return 0; + } + +-static void gfs2_final_release_pages(struct gfs2_inode *ip) +-{ +- struct inode *inode = &ip->i_inode; +- struct gfs2_glock *gl = ip->i_gl; +- +- if (unlikely(!gl)) { +- /* This can only happen during incomplete inode creation. 
*/ +- BUG_ON(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags)); +- return; +- } +- +- truncate_inode_pages(gfs2_glock2aspace(gl), 0); +- truncate_inode_pages(&inode->i_data, 0); +- +- if (atomic_read(&gl->gl_revokes) == 0) { +- clear_bit(GLF_LFLUSH, &gl->gl_flags); +- clear_bit(GLF_DIRTY, &gl->gl_flags); +- } +-} +- +-static int gfs2_dinode_dealloc(struct gfs2_inode *ip) +-{ +- struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); +- struct gfs2_rgrpd *rgd; +- struct gfs2_holder gh; +- int error; +- +- if (gfs2_get_inode_blocks(&ip->i_inode) != 1) { +- gfs2_consist_inode(ip); +- return -EIO; +- } +- +- gfs2_rindex_update(sdp); +- +- error = gfs2_quota_hold(ip, NO_UID_QUOTA_CHANGE, NO_GID_QUOTA_CHANGE); +- if (error) +- return error; +- +- rgd = gfs2_blk2rgrpd(sdp, ip->i_no_addr, 1); +- if (!rgd) { +- gfs2_consist_inode(ip); +- error = -EIO; +- goto out_qs; +- } +- +- error = gfs2_glock_nq_init(rgd->rd_gl, LM_ST_EXCLUSIVE, +- LM_FLAG_NODE_SCOPE, &gh); +- if (error) +- goto out_qs; +- +- error = gfs2_trans_begin(sdp, RES_RG_BIT + RES_STATFS + RES_QUOTA, +- sdp->sd_jdesc->jd_blocks); +- if (error) +- goto out_rg_gunlock; +- +- gfs2_free_di(rgd, ip); +- +- gfs2_final_release_pages(ip); +- +- gfs2_trans_end(sdp); +- +-out_rg_gunlock: +- gfs2_glock_dq_uninit(&gh); +-out_qs: +- gfs2_quota_unhold(ip); +- return error; +-} +- + /** + * gfs2_glock_put_eventually + * @gl: The glock to put +-- +2.39.5 + diff --git a/queue-6.12/gfs2-move-gfs2_trans_add_databufs.patch b/queue-6.12/gfs2-move-gfs2_trans_add_databufs.patch new file mode 100644 index 0000000000..eb1358e2f1 --- /dev/null +++ b/queue-6.12/gfs2-move-gfs2_trans_add_databufs.patch @@ -0,0 +1,138 @@ +From bea377a365d01b34de9c4b42339edb68dc5b2af3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 26 Mar 2025 22:21:08 +0100 +Subject: gfs2: Move gfs2_trans_add_databufs + +From: Andreas Gruenbacher + +[ Upstream commit d50a64e3c55e59e45e415c65531b0d76ad4cea36 ] + +Move gfs2_trans_add_databufs() to trans.c. Pass in a glock instead of +a gfs2_inode. 
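+
+Callers now pass the inode glock directly; for example, the bmap.c call
+site in the hunks below becomes:
+
+	gfs2_trans_add_databufs(ip->i_gl, folio,
+				offset_in_folio(folio, pos), copied);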
+ +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 5a90f8d49922 ("gfs2: Don't start unnecessary transactions during log flush") +Signed-off-by: Sasha Levin +--- + fs/gfs2/aops.c | 23 +---------------------- + fs/gfs2/aops.h | 2 -- + fs/gfs2/bmap.c | 3 ++- + fs/gfs2/trans.c | 21 +++++++++++++++++++++ + fs/gfs2/trans.h | 2 ++ + 5 files changed, 26 insertions(+), 25 deletions(-) + +diff --git a/fs/gfs2/aops.c b/fs/gfs2/aops.c +index 68fc8af14700d..ed2c708a215a4 100644 +--- a/fs/gfs2/aops.c ++++ b/fs/gfs2/aops.c +@@ -37,27 +37,6 @@ + #include "aops.h" + + +-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio, +- size_t from, size_t len) +-{ +- struct buffer_head *head = folio_buffers(folio); +- unsigned int bsize = head->b_size; +- struct buffer_head *bh; +- size_t to = from + len; +- size_t start, end; +- +- for (bh = head, start = 0; bh != head || !start; +- bh = bh->b_this_page, start = end) { +- end = start + bsize; +- if (end <= from) +- continue; +- if (start >= to) +- break; +- set_buffer_uptodate(bh); +- gfs2_trans_add_data(ip->i_gl, bh); +- } +-} +- + /** + * gfs2_get_block_noalloc - Fills in a buffer head with details about a block + * @inode: The inode +@@ -133,7 +112,7 @@ static int __gfs2_jdata_write_folio(struct folio *folio, + inode->i_sb->s_blocksize, + BIT(BH_Dirty)|BIT(BH_Uptodate)); + } +- gfs2_trans_add_databufs(ip, folio, 0, folio_size(folio)); ++ gfs2_trans_add_databufs(ip->i_gl, folio, 0, folio_size(folio)); + } + return gfs2_write_jdata_folio(folio, wbc); + } +diff --git a/fs/gfs2/aops.h b/fs/gfs2/aops.h +index a10c4334d2489..f9fa41aaeaf41 100644 +--- a/fs/gfs2/aops.h ++++ b/fs/gfs2/aops.h +@@ -9,7 +9,5 @@ + #include "incore.h" + + void adjust_fs_space(struct inode *inode); +-void gfs2_trans_add_databufs(struct gfs2_inode *ip, struct folio *folio, +- size_t from, size_t len); + + #endif /* __AOPS_DOT_H__ */ +diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c +index 1795c4e8dbf66..28ad07b003484 100644 +--- a/fs/gfs2/bmap.c ++++ b/fs/gfs2/bmap.c +@@ -988,7 +988,8 @@ static void gfs2_iomap_put_folio(struct inode *inode, loff_t pos, + struct gfs2_sbd *sdp = GFS2_SB(inode); + + if (!gfs2_is_stuffed(ip)) +- gfs2_trans_add_databufs(ip, folio, offset_in_folio(folio, pos), ++ gfs2_trans_add_databufs(ip->i_gl, folio, ++ offset_in_folio(folio, pos), + copied); + + folio_unlock(folio); +diff --git a/fs/gfs2/trans.c b/fs/gfs2/trans.c +index 192213c7359af..42cf8c5204db4 100644 +--- a/fs/gfs2/trans.c ++++ b/fs/gfs2/trans.c +@@ -226,6 +226,27 @@ void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head *bh) + unlock_buffer(bh); + } + ++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio, ++ size_t from, size_t len) ++{ ++ struct buffer_head *head = folio_buffers(folio); ++ unsigned int bsize = head->b_size; ++ struct buffer_head *bh; ++ size_t to = from + len; ++ size_t start, end; ++ ++ for (bh = head, start = 0; bh != head || !start; ++ bh = bh->b_this_page, start = end) { ++ end = start + bsize; ++ if (end <= from) ++ continue; ++ if (start >= to) ++ break; ++ set_buffer_uptodate(bh); ++ gfs2_trans_add_data(gl, bh); ++ } ++} ++ + void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh) + { + +diff --git a/fs/gfs2/trans.h b/fs/gfs2/trans.h +index f8ce5302280d3..790c55f59e612 100644 +--- a/fs/gfs2/trans.h ++++ b/fs/gfs2/trans.h +@@ -42,6 +42,8 @@ int gfs2_trans_begin(struct gfs2_sbd *sdp, unsigned int blocks, + + void gfs2_trans_end(struct gfs2_sbd *sdp); + void gfs2_trans_add_data(struct gfs2_glock *gl, struct buffer_head 
*bh); ++void gfs2_trans_add_databufs(struct gfs2_glock *gl, struct folio *folio, ++ size_t from, size_t len); + void gfs2_trans_add_meta(struct gfs2_glock *gl, struct buffer_head *bh); + void gfs2_trans_add_revoke(struct gfs2_sbd *sdp, struct gfs2_bufdata *bd); + void gfs2_trans_remove_revoke(struct gfs2_sbd *sdp, u64 blkno, unsigned int len); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-move-gif_alloc_failed-check-out-of-gfs2_ea_deal.patch b/queue-6.12/gfs2-move-gif_alloc_failed-check-out-of-gfs2_ea_deal.patch new file mode 100644 index 0000000000..e74494a65e --- /dev/null +++ b/queue-6.12/gfs2-move-gif_alloc_failed-check-out-of-gfs2_ea_deal.patch @@ -0,0 +1,105 @@ +From 9b917be930298e325e88552bd6d96b1fafd347c9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 18 Apr 2025 01:09:32 +0200 +Subject: gfs2: Move GIF_ALLOC_FAILED check out of gfs2_ea_dealloc + +From: Andreas Gruenbacher + +[ Upstream commit 0cc617a54dfe6b44624c9a03e2e11a24eb9bc720 ] + +Don't check for the GIF_ALLOC_FAILED flag in gfs2_ea_dealloc() and pass +that information explicitly instead. This allows for a cleaner +follow-up patch. + +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/super.c | 2 +- + fs/gfs2/xattr.c | 11 ++++++----- + fs/gfs2/xattr.h | 2 +- + 3 files changed, 8 insertions(+), 7 deletions(-) + +diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c +index aad6d5d2816e3..694d554dba546 100644 +--- a/fs/gfs2/super.c ++++ b/fs/gfs2/super.c +@@ -1319,7 +1319,7 @@ static int evict_unlinked_inode(struct inode *inode) + } + + if (ip->i_eattr) { +- ret = gfs2_ea_dealloc(ip); ++ ret = gfs2_ea_dealloc(ip, !test_bit(GIF_ALLOC_FAILED, &ip->i_flags)); + if (ret) + goto out; + } +diff --git a/fs/gfs2/xattr.c b/fs/gfs2/xattr.c +index 17ae5070a90e6..df9c93de94c79 100644 +--- a/fs/gfs2/xattr.c ++++ b/fs/gfs2/xattr.c +@@ -1383,7 +1383,7 @@ static int ea_dealloc_indirect(struct gfs2_inode *ip) + return error; + } + +-static int ea_dealloc_block(struct gfs2_inode *ip) ++static int ea_dealloc_block(struct gfs2_inode *ip, bool initialized) + { + struct gfs2_sbd *sdp = GFS2_SB(&ip->i_inode); + struct gfs2_rgrpd *rgd; +@@ -1416,7 +1416,7 @@ static int ea_dealloc_block(struct gfs2_inode *ip) + ip->i_eattr = 0; + gfs2_add_inode_blocks(&ip->i_inode, -1); + +- if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) { ++ if (initialized) { + error = gfs2_meta_inode_buffer(ip, &dibh); + if (!error) { + gfs2_trans_add_meta(ip->i_gl, dibh); +@@ -1435,11 +1435,12 @@ static int ea_dealloc_block(struct gfs2_inode *ip) + /** + * gfs2_ea_dealloc - deallocate the extended attribute fork + * @ip: the inode ++ * @initialized: xattrs have been initialized + * + * Returns: errno + */ + +-int gfs2_ea_dealloc(struct gfs2_inode *ip) ++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized) + { + int error; + +@@ -1451,7 +1452,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip) + if (error) + return error; + +- if (likely(!test_bit(GIF_ALLOC_FAILED, &ip->i_flags))) { ++ if (initialized) { + error = ea_foreach(ip, ea_dealloc_unstuffed, NULL); + if (error) + goto out_quota; +@@ -1463,7 +1464,7 @@ int gfs2_ea_dealloc(struct gfs2_inode *ip) + } + } + +- error = ea_dealloc_block(ip); ++ error = ea_dealloc_block(ip, initialized); + + out_quota: + gfs2_quota_unhold(ip); +diff --git a/fs/gfs2/xattr.h b/fs/gfs2/xattr.h +index eb12eb7e37c19..3c9788e0e1375 100644 +--- a/fs/gfs2/xattr.h ++++ b/fs/gfs2/xattr.h +@@ -54,7 +54,7 @@ int __gfs2_xattr_set(struct inode 
*inode, const char *name, + const void *value, size_t size, + int flags, int type); + ssize_t gfs2_listxattr(struct dentry *dentry, char *buffer, size_t size); +-int gfs2_ea_dealloc(struct gfs2_inode *ip); ++int gfs2_ea_dealloc(struct gfs2_inode *ip, bool initialized); + + /* Exported to acl.c */ + +-- +2.39.5 + diff --git a/queue-6.12/gfs2-prevent-inode-creation-race.patch b/queue-6.12/gfs2-prevent-inode-creation-race.patch new file mode 100644 index 0000000000..b2d7da78bf --- /dev/null +++ b/queue-6.12/gfs2-prevent-inode-creation-race.patch @@ -0,0 +1,44 @@ +From 10de8b93e1e2410fe27545d1443fd2afce9ddea9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 19 Nov 2024 12:15:26 +0100 +Subject: gfs2: Prevent inode creation race + +From: Andreas Gruenbacher + +[ Upstream commit ffd1cf0443a208b80e40100ed02892d2ec74c7e9 ] + +When a request to evict an inode comes in over the network, we are +trying to grab an inode reference via the iopen glock's gl_object +pointer. There is a very small probability that by the time such a +request comes in, inode creation hasn't completed and the I_NEW flag is +still set. To deal with that, wait for the inode and then check if +inode creation was successful. + +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/glock.c | 7 +++++++ + 1 file changed, 7 insertions(+) + +diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c +index 9d72c5b8b7762..3c70c383b9bdd 100644 +--- a/fs/gfs2/glock.c ++++ b/fs/gfs2/glock.c +@@ -984,6 +984,13 @@ static bool gfs2_try_evict(struct gfs2_glock *gl) + if (ip && !igrab(&ip->i_inode)) + ip = NULL; + spin_unlock(&gl->gl_lockref.lock); ++ if (ip) { ++ wait_on_inode(&ip->i_inode); ++ if (is_bad_inode(&ip->i_inode)) { ++ iput(&ip->i_inode); ++ ip = NULL; ++ } ++ } + if (ip) { + set_bit(GIF_DEFER_DELETE, &ip->i_flags); + d_prune_aliases(&ip->i_inode); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-rename-dinode_demise-to-evict_behavior.patch b/queue-6.12/gfs2-rename-dinode_demise-to-evict_behavior.patch new file mode 100644 index 0000000000..d0d38d3bc5 --- /dev/null +++ b/queue-6.12/gfs2-rename-dinode_demise-to-evict_behavior.patch @@ -0,0 +1,135 @@ +From 16bce8f80e294650dd708e46997ea385180e42a6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 14 Sep 2024 00:37:03 +0200 +Subject: gfs2: Rename dinode_demise to evict_behavior + +From: Andreas Gruenbacher + +[ Upstream commit c79ba4be351a06e0ac4c51143a83023bb37888d6 ] + +Rename enum dinode_demise to evict_behavior and its items +SHOULD_DELETE_DINODE to EVICT_SHOULD_DELETE, +SHOULD_NOT_DELETE_DINODE to EVICT_SHOULD_SKIP_DELETE, and +SHOULD_DEFER_EVICTION to EVICT_SHOULD_DEFER_DELETE. + +In gfs2_evict_inode(), add a separate variable of type enum +evict_behavior instead of implicitly casting to int. 
+ +Signed-off-by: Andreas Gruenbacher +Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode") +Signed-off-by: Sasha Levin +--- + fs/gfs2/super.c | 37 +++++++++++++++++++------------------ + 1 file changed, 19 insertions(+), 18 deletions(-) + +diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c +index 6584fd5e0a5b7..6a0c0f3780b4c 100644 +--- a/fs/gfs2/super.c ++++ b/fs/gfs2/super.c +@@ -44,10 +44,10 @@ + #include "xattr.h" + #include "lops.h" + +-enum dinode_demise { +- SHOULD_DELETE_DINODE, +- SHOULD_NOT_DELETE_DINODE, +- SHOULD_DEFER_EVICTION, ++enum evict_behavior { ++ EVICT_SHOULD_DELETE, ++ EVICT_SHOULD_SKIP_DELETE, ++ EVICT_SHOULD_DEFER_DELETE, + }; + + /** +@@ -1315,8 +1315,8 @@ static bool gfs2_upgrade_iopen_glock(struct inode *inode) + * + * Returns: the fate of the dinode + */ +-static enum dinode_demise evict_should_delete(struct inode *inode, +- struct gfs2_holder *gh) ++static enum evict_behavior evict_should_delete(struct inode *inode, ++ struct gfs2_holder *gh) + { + struct gfs2_inode *ip = GFS2_I(inode); + struct super_block *sb = inode->i_sb; +@@ -1327,11 +1327,11 @@ static enum dinode_demise evict_should_delete(struct inode *inode, + goto should_delete; + + if (test_bit(GIF_DEFER_DELETE, &ip->i_flags)) +- return SHOULD_DEFER_EVICTION; ++ return EVICT_SHOULD_DEFER_DELETE; + + /* Deletes should never happen under memory pressure anymore. */ + if (WARN_ON_ONCE(current->flags & PF_MEMALLOC)) +- return SHOULD_DEFER_EVICTION; ++ return EVICT_SHOULD_DEFER_DELETE; + + /* Must not read inode block until block type has been verified */ + ret = gfs2_glock_nq_init(ip->i_gl, LM_ST_EXCLUSIVE, GL_SKIP, gh); +@@ -1339,34 +1339,34 @@ static enum dinode_demise evict_should_delete(struct inode *inode, + glock_clear_object(ip->i_iopen_gh.gh_gl, ip); + ip->i_iopen_gh.gh_flags |= GL_NOCACHE; + gfs2_glock_dq_uninit(&ip->i_iopen_gh); +- return SHOULD_DEFER_EVICTION; ++ return EVICT_SHOULD_DEFER_DELETE; + } + + if (gfs2_inode_already_deleted(ip->i_gl, ip->i_no_formal_ino)) +- return SHOULD_NOT_DELETE_DINODE; ++ return EVICT_SHOULD_SKIP_DELETE; + ret = gfs2_check_blk_type(sdp, ip->i_no_addr, GFS2_BLKST_UNLINKED); + if (ret) +- return SHOULD_NOT_DELETE_DINODE; ++ return EVICT_SHOULD_SKIP_DELETE; + + ret = gfs2_instantiate(gh); + if (ret) +- return SHOULD_NOT_DELETE_DINODE; ++ return EVICT_SHOULD_SKIP_DELETE; + + /* + * The inode may have been recreated in the meantime. 
+ */ + if (inode->i_nlink) +- return SHOULD_NOT_DELETE_DINODE; ++ return EVICT_SHOULD_SKIP_DELETE; + + should_delete: + if (gfs2_holder_initialized(&ip->i_iopen_gh) && + test_bit(HIF_HOLDER, &ip->i_iopen_gh.gh_iflags)) { + if (!gfs2_upgrade_iopen_glock(inode)) { + gfs2_holder_uninit(&ip->i_iopen_gh); +- return SHOULD_NOT_DELETE_DINODE; ++ return EVICT_SHOULD_SKIP_DELETE; + } + } +- return SHOULD_DELETE_DINODE; ++ return EVICT_SHOULD_DELETE; + } + + /** +@@ -1477,6 +1477,7 @@ static void gfs2_evict_inode(struct inode *inode) + struct gfs2_sbd *sdp = sb->s_fs_info; + struct gfs2_inode *ip = GFS2_I(inode); + struct gfs2_holder gh; ++ enum evict_behavior behavior; + int ret; + + if (inode->i_nlink || sb_rdonly(sb) || !ip->i_no_addr) +@@ -1491,10 +1492,10 @@ static void gfs2_evict_inode(struct inode *inode) + goto out; + + gfs2_holder_mark_uninitialized(&gh); +- ret = evict_should_delete(inode, &gh); +- if (ret == SHOULD_DEFER_EVICTION) ++ behavior = evict_should_delete(inode, &gh); ++ if (behavior == EVICT_SHOULD_DEFER_DELETE) + goto out; +- if (ret == SHOULD_DELETE_DINODE) ++ if (behavior == EVICT_SHOULD_DELETE) + ret = evict_unlinked_inode(inode); + else + ret = evict_linked_inode(inode); +-- +2.39.5 + diff --git a/queue-6.12/gfs2-rename-gif_-deferred-defer-_delete.patch b/queue-6.12/gfs2-rename-gif_-deferred-defer-_delete.patch new file mode 100644 index 0000000000..f764f181ee --- /dev/null +++ b/queue-6.12/gfs2-rename-gif_-deferred-defer-_delete.patch @@ -0,0 +1,72 @@ +From aa0c8ad7b3112c7b3bbc31af902357e723403208 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 12 Sep 2024 22:22:12 +0200 +Subject: gfs2: Rename GIF_{DEFERRED -> DEFER}_DELETE + +From: Andreas Gruenbacher + +[ Upstream commit 9fb794aac6ddd08a9c4982372250f06137696e90 ] + +The GIF_DEFERRED_DELETE flag indicates an action that gfs2_evict_inode() +should take, so rename the flag to GIF_DEFER_DELETE to clarify. 
+
+Signed-off-by: Andreas Gruenbacher
+Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode")
+Signed-off-by: Sasha Levin
+---
+ fs/gfs2/glock.c | 4 ++--
+ fs/gfs2/incore.h | 2 +-
+ fs/gfs2/super.c | 2 +-
+ 3 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index ed699f2872f55..9d72c5b8b7762 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -985,7 +985,7 @@ static bool gfs2_try_evict(struct gfs2_glock *gl)
+ ip = NULL;
+ spin_unlock(&gl->gl_lockref.lock);
+ if (ip) {
+- set_bit(GIF_DEFERRED_DELETE, &ip->i_flags);
++ set_bit(GIF_DEFER_DELETE, &ip->i_flags);
+ d_prune_aliases(&ip->i_inode);
+ iput(&ip->i_inode);
+
+@@ -993,7 +993,7 @@ static bool gfs2_try_evict(struct gfs2_glock *gl)
+ spin_lock(&gl->gl_lockref.lock);
+ ip = gl->gl_object;
+ if (ip) {
+- clear_bit(GIF_DEFERRED_DELETE, &ip->i_flags);
++ clear_bit(GIF_DEFER_DELETE, &ip->i_flags);
+ if (!igrab(&ip->i_inode))
+ ip = NULL;
+ }
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index e5535d7b46592..98a41c631ce10 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -376,7 +376,7 @@ enum {
+ GIF_SW_PAGED = 3,
+ GIF_FREE_VFS_INODE = 5,
+ GIF_GLOP_PENDING = 6,
+- GIF_DEFERRED_DELETE = 7,
++ GIF_DEFER_DELETE = 7,
+ };
+
+ struct gfs2_inode {
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 5ecb857cf74e3..6584fd5e0a5b7 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1326,7 +1326,7 @@ static enum dinode_demise evict_should_delete(struct inode *inode,
+ if (unlikely(test_bit(GIF_ALLOC_FAILED, &ip->i_flags)))
+ goto should_delete;
+
+- if (test_bit(GIF_DEFERRED_DELETE, &ip->i_flags))
++ if (test_bit(GIF_DEFER_DELETE, &ip->i_flags))
+ return SHOULD_DEFER_EVICTION;
+
+ /* Deletes should never happen under memory pressure anymore. */
+--
+2.39.5
+
diff --git a/queue-6.12/gfs2-replace-gif_defer_delete-with-glf_defer_delete.patch b/queue-6.12/gfs2-replace-gif_defer_delete-with-glf_defer_delete.patch
new file mode 100644
index 0000000000..425e24696a
--- /dev/null
+++ b/queue-6.12/gfs2-replace-gif_defer_delete-with-glf_defer_delete.patch
@@ -0,0 +1,104 @@
+From e2b6c472496830aba03344682311786f6cdff9db Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 4 Feb 2025 17:35:13 +0100
+Subject: gfs2: Replace GIF_DEFER_DELETE with GLF_DEFER_DELETE
+
+From: Andreas Gruenbacher
+
+[ Upstream commit 3774f53d7f0b30a996eab4a1264611489b48f14c ]
+
+Having this flag attached to the iopen glock instead of the inode is
+much simpler; it eliminates a potential weird race in gfs2_try_evict().
+
+Signed-off-by: Andreas Gruenbacher
+Stable-dep-of: 2c63986dd35f ("gfs2: deallocate inodes in gfs2_create_inode")
+Signed-off-by: Sasha Levin
+---
+ fs/gfs2/glock.c | 6 ++++--
+ fs/gfs2/incore.h | 2 +-
+ fs/gfs2/super.c | 3 ++-
+ fs/gfs2/trace_gfs2.h | 3 ++-
+ 4 files changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/fs/gfs2/glock.c b/fs/gfs2/glock.c
+index ec043aa71de8c..161fc76ed5b0e 100644
+--- a/fs/gfs2/glock.c
++++ b/fs/gfs2/glock.c
+@@ -994,15 +994,15 @@ static bool gfs2_try_evict(struct gfs2_glock *gl)
+ }
+ }
+ if (ip) {
+- set_bit(GIF_DEFER_DELETE, &ip->i_flags);
++ set_bit(GLF_DEFER_DELETE, &gl->gl_flags);
+ d_prune_aliases(&ip->i_inode);
+ iput(&ip->i_inode);
++ clear_bit(GLF_DEFER_DELETE, &gl->gl_flags);
+
+ /* If the inode was evicted, gl->gl_object will now be NULL.
*/
+ spin_lock(&gl->gl_lockref.lock);
+ ip = gl->gl_object;
+ if (ip) {
+- if (!igrab(&ip->i_inode))
+ ip = NULL;
+ }
+@@ -2389,6 +2389,8 @@ static const char *gflags2str(char *buf, const struct gfs2_glock *gl)
+ *p++ = 'e';
+ if (test_bit(GLF_VERIFY_DELETE, gflags))
+ *p++ = 'E';
++ if (test_bit(GLF_DEFER_DELETE, gflags))
++ *p++ = 's';
+ *p = 0;
+ return buf;
+ }
+diff --git a/fs/gfs2/incore.h b/fs/gfs2/incore.h
+index f6aee2c9b9118..142f61228d15e 100644
+--- a/fs/gfs2/incore.h
++++ b/fs/gfs2/incore.h
+@@ -331,6 +331,7 @@ enum {
+ GLF_TRY_TO_EVICT = 17, /* iopen glocks only */
+ GLF_VERIFY_DELETE = 18, /* iopen glocks only */
+ GLF_PENDING_REPLY = 19,
++ GLF_DEFER_DELETE = 20, /* iopen glocks only */
+ };
+
+ struct gfs2_glock {
+@@ -377,7 +378,6 @@ enum {
+ GIF_SW_PAGED = 3,
+ GIF_FREE_VFS_INODE = 5,
+ GIF_GLOP_PENDING = 6,
+- GIF_DEFER_DELETE = 7,
+ };
+
+ struct gfs2_inode {
+diff --git a/fs/gfs2/super.c b/fs/gfs2/super.c
+index 6a0c0f3780b4c..d982db129b2b4 100644
+--- a/fs/gfs2/super.c
++++ b/fs/gfs2/super.c
+@@ -1326,7 +1326,8 @@ static enum evict_behavior evict_should_delete(struct inode *inode,
+ if (unlikely(test_bit(GIF_ALLOC_FAILED, &ip->i_flags)))
+ goto should_delete;
+
+- if (test_bit(GIF_DEFER_DELETE, &ip->i_flags))
++ if (gfs2_holder_initialized(&ip->i_iopen_gh) &&
++ test_bit(GLF_DEFER_DELETE, &ip->i_iopen_gh.gh_gl->gl_flags))
+ return EVICT_SHOULD_DEFER_DELETE;
+
+ /* Deletes should never happen under memory pressure anymore. */
+diff --git a/fs/gfs2/trace_gfs2.h b/fs/gfs2/trace_gfs2.h
+index 09121c2c198ba..43de603ab347e 100644
+--- a/fs/gfs2/trace_gfs2.h
++++ b/fs/gfs2/trace_gfs2.h
+@@ -64,7 +64,8 @@
+ {(1UL << GLF_INSTANTIATE_NEEDED), "n" }, \
+ {(1UL << GLF_INSTANTIATE_IN_PROG), "N" }, \
+ {(1UL << GLF_TRY_TO_EVICT), "e" }, \
+- {(1UL << GLF_VERIFY_DELETE), "E" })
++ {(1UL << GLF_VERIFY_DELETE), "E" }, \
++ {(1UL << GLF_DEFER_DELETE), "s" })
+
+ #ifndef NUMPTY
+ #define NUMPTY
+--
+2.39.5
+
diff --git a/queue-6.12/hisi_acc_vfio_pci-bugfix-cache-write-back-issue.patch b/queue-6.12/hisi_acc_vfio_pci-bugfix-cache-write-back-issue.patch
new file mode 100644
index 0000000000..d973aa8fb6
--- /dev/null
+++ b/queue-6.12/hisi_acc_vfio_pci-bugfix-cache-write-back-issue.patch
@@ -0,0 +1,62 @@
+From 7bc02d6bc44df255c1dbcc4b2b5b106e80d7c999 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 10 May 2025 16:11:52 +0800
+Subject: hisi_acc_vfio_pci: bugfix cache write-back issue
+
+From: Longfang Liu
+
+[ Upstream commit e63c466398731bb7867f42f44b76fa984de59db2 ]
+
+At present, the cache write-back is performed in the device data
+copy stage, after the device operation has already been stopped.
+Writing the cache back at that point is too late: the data obtained
+from the cache and written back is empty.
+
+To make sure that the cache data is written back successfully, the
+write-back needs to be done in the device stop stage instead.
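+
+Condensed from the hunks below, the write-back call moves into
+hisi_acc_vf_stop_device(), while the cache contents are still valid
+(a summary, not a literal quote):
+
+	ret = vf_qm_cache_wb(vf_qm);
+	if (ret) {
+		dev_err(dev, "failed to writeback QM cache!\n");
+		return ret;
+	}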
+
+Fixes: b0eed085903e ("hisi_acc_vfio_pci: Add support for VFIO live migration")
+Signed-off-by: Longfang Liu
+Reviewed-by: Shameer Kolothum
+Link: https://lore.kernel.org/r/20250510081155.55840-4-liulongfang@huawei.com
+Signed-off-by: Alex Williamson
+Signed-off-by: Sasha Levin
+---
+ drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c | 13 +++++++------
+ 1 file changed, 7 insertions(+), 6 deletions(-)
+
+diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+index 68300fcd3c41b..e37699fc03707 100644
+--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c
+@@ -554,12 +554,6 @@ static int vf_qm_state_save(struct hisi_acc_vf_core_device *hisi_acc_vdev,
+ vf_data->vf_qm_state = QM_READY;
+ hisi_acc_vdev->vf_qm_state = vf_data->vf_qm_state;
+
+- ret = vf_qm_cache_wb(vf_qm);
+- if (ret) {
+- dev_err(dev, "failed to writeback QM Cache!\n");
+- return ret;
+- }
+-
+ ret = qm_get_regs(vf_qm, vf_data);
+ if (ret)
+ return -EINVAL;
+@@ -985,6 +979,13 @@ static int hisi_acc_vf_stop_device(struct hisi_acc_vf_core_device *hisi_acc_vdev
+ dev_err(dev, "failed to check QM INT state!\n");
+ return ret;
+ }
++
++ ret = vf_qm_cache_wb(vf_qm);
++ if (ret) {
++ dev_err(dev, "failed to writeback QM cache!\n");
++ return ret;
++ }
++
+ return 0;
+ }
+
+--
+2.39.5
+
diff --git a/queue-6.12/hisi_acc_vfio_pci-bugfix-the-problem-of-uninstalling.patch b/queue-6.12/hisi_acc_vfio_pci-bugfix-the-problem-of-uninstalling.patch
new file mode 100644
index 0000000000..6f07fccd41
--- /dev/null
+++ b/queue-6.12/hisi_acc_vfio_pci-bugfix-the-problem-of-uninstalling.patch
@@ -0,0 +1,46 @@
+From 978ed4fa18c0d7e45c077a0c079d372b0a5f9b6c Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sat, 5 Jul 2025 21:45:06 -0400
+Subject: hisi_acc_vfio_pci: bugfix the problem of uninstalling driver
+
+From: Longfang Liu
+
+[ Upstream commit db6525a8573957faea28850392f4744e5f8f7a53 ]
+
+In a live migration scenario, if the number of VFs at the
+destination is greater than at the source, the recovery operation
+fails and QEMU cannot complete the process; it exits after
+shutting down the device FD.
+
+This abnormal close leaves the reference counting of the live
+migration driver unbalanced, so the driver can no longer be
+unloaded normally.
+
+Therefore, make sure the migration file descriptor references are
+always released when the device is closed.
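+
+Condensed from the hunk below, the close path now drops the migration
+file references unconditionally before tearing down the device (a
+summary, not a literal quote):
+
+	hisi_acc_vf_disable_fds(hisi_acc_vdev);
+	iounmap(vf_qm->io_base);
+	vfio_pci_core_close_device(core_vdev);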
+ +Fixes: b0eed085903e ("hisi_acc_vfio_pci: Add support for VFIO live migration") +Signed-off-by: Longfang Liu +Reviewed-by: Shameer Kolothum +Link: https://lore.kernel.org/r/20250510081155.55840-5-liulongfang@huawei.com +Signed-off-by: Alex Williamson +Signed-off-by: Sasha Levin +--- + drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c +index e37699fc03707..dda8cb3262e0b 100644 +--- a/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c ++++ b/drivers/vfio/pci/hisilicon/hisi_acc_vfio_pci.c +@@ -1359,6 +1359,7 @@ static void hisi_acc_vfio_pci_close_device(struct vfio_device *core_vdev) + struct hisi_acc_vf_core_device, core_device.vdev); + struct hisi_qm *vf_qm = &hisi_acc_vdev->vf_qm; + ++ hisi_acc_vf_disable_fds(hisi_acc_vdev); + iounmap(vf_qm->io_base); + vfio_pci_core_close_device(core_vdev); + } +-- +2.39.5 + diff --git a/queue-6.12/idpf-convert-control-queue-mutex-to-a-spinlock.patch b/queue-6.12/idpf-convert-control-queue-mutex-to-a-spinlock.patch new file mode 100644 index 0000000000..8421698ce0 --- /dev/null +++ b/queue-6.12/idpf-convert-control-queue-mutex-to-a-spinlock.patch @@ -0,0 +1,232 @@ +From d3502a16b5e5fb4e2d424ed21de7e9048abb67d0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 23 May 2025 14:55:37 -0600 +Subject: idpf: convert control queue mutex to a spinlock + +From: Ahmed Zaki + +[ Upstream commit b2beb5bb2cd90d7939e470ed4da468683f41baa3 ] + +With VIRTCHNL2_CAP_MACFILTER enabled, the following warning is generated +on module load: + +[ 324.701677] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:578 +[ 324.701684] in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 1582, name: NetworkManager +[ 324.701689] preempt_count: 201, expected: 0 +[ 324.701693] RCU nest depth: 0, expected: 0 +[ 324.701697] 2 locks held by NetworkManager/1582: +[ 324.701702] #0: ffffffff9f7be770 (rtnl_mutex){....}-{3:3}, at: rtnl_newlink+0x791/0x21e0 +[ 324.701730] #1: ff1100216c380368 (_xmit_ETHER){....}-{2:2}, at: __dev_open+0x3f0/0x870 +[ 324.701749] Preemption disabled at: +[ 324.701752] [] __dev_open+0x3dd/0x870 +[ 324.701765] CPU: 30 UID: 0 PID: 1582 Comm: NetworkManager Not tainted 6.15.0-rc5+ #2 PREEMPT(voluntary) +[ 324.701771] Hardware name: Intel Corporation M50FCP2SBSTD/M50FCP2SBSTD, BIOS SE5C741.86B.01.01.0001.2211140926 11/14/2022 +[ 324.701774] Call Trace: +[ 324.701777] +[ 324.701779] dump_stack_lvl+0x5d/0x80 +[ 324.701788] ? __dev_open+0x3dd/0x870 +[ 324.701793] __might_resched.cold+0x1ef/0x23d +<..> +[ 324.701818] __mutex_lock+0x113/0x1b80 +<..> +[ 324.701917] idpf_ctlq_clean_sq+0xad/0x4b0 [idpf] +[ 324.701935] ? kasan_save_track+0x14/0x30 +[ 324.701941] idpf_mb_clean+0x143/0x380 [idpf] +<..> +[ 324.701991] idpf_send_mb_msg+0x111/0x720 [idpf] +[ 324.702009] idpf_vc_xn_exec+0x4cc/0x990 [idpf] +[ 324.702021] ? rcu_is_watching+0x12/0xc0 +[ 324.702035] idpf_add_del_mac_filters+0x3ed/0xb50 [idpf] +<..> +[ 324.702122] __hw_addr_sync_dev+0x1cf/0x300 +[ 324.702126] ? find_held_lock+0x32/0x90 +[ 324.702134] idpf_set_rx_mode+0x317/0x390 [idpf] +[ 324.702152] __dev_open+0x3f8/0x870 +[ 324.702159] ? 
__pfx___dev_open+0x10/0x10 +[ 324.702174] __dev_change_flags+0x443/0x650 +<..> +[ 324.702208] netif_change_flags+0x80/0x160 +[ 324.702218] do_setlink.isra.0+0x16a0/0x3960 +<..> +[ 324.702349] rtnl_newlink+0x12fd/0x21e0 + +The sequence is as follows: + rtnl_newlink()-> + __dev_change_flags()-> + __dev_open()-> + dev_set_rx_mode() - > # disables BH and grabs "dev->addr_list_lock" + idpf_set_rx_mode() -> # proceed only if VIRTCHNL2_CAP_MACFILTER is ON + __dev_uc_sync() -> + idpf_add_mac_filter -> + idpf_add_del_mac_filters -> + idpf_send_mb_msg() -> + idpf_mb_clean() -> + idpf_ctlq_clean_sq() # mutex_lock(cq_lock) + +Fix by converting cq_lock to a spinlock. All operations under the new +lock are safe except freeing the DMA memory, which may use vunmap(). Fix +by requesting a contiguous physical memory for the DMA mapping. + +Fixes: a251eee62133 ("idpf: add SRIOV support and other ndo_ops") +Reviewed-by: Aleksandr Loktionov +Signed-off-by: Ahmed Zaki +Reviewed-by: Simon Horman +Tested-by: Samuel Salin +Signed-off-by: Tony Nguyen +Signed-off-by: Sasha Levin +--- + .../net/ethernet/intel/idpf/idpf_controlq.c | 23 +++++++++---------- + .../ethernet/intel/idpf/idpf_controlq_api.h | 2 +- + drivers/net/ethernet/intel/idpf/idpf_lib.c | 12 ++++++---- + 3 files changed, 20 insertions(+), 17 deletions(-) + +diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq.c b/drivers/net/ethernet/intel/idpf/idpf_controlq.c +index b28991dd18703..48b8e184f3db6 100644 +--- a/drivers/net/ethernet/intel/idpf/idpf_controlq.c ++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq.c +@@ -96,7 +96,7 @@ static void idpf_ctlq_init_rxq_bufs(struct idpf_ctlq_info *cq) + */ + static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq) + { +- mutex_lock(&cq->cq_lock); ++ spin_lock(&cq->cq_lock); + + /* free ring buffers and the ring itself */ + idpf_ctlq_dealloc_ring_res(hw, cq); +@@ -104,8 +104,7 @@ static void idpf_ctlq_shutdown(struct idpf_hw *hw, struct idpf_ctlq_info *cq) + /* Set ring_size to 0 to indicate uninitialized queue */ + cq->ring_size = 0; + +- mutex_unlock(&cq->cq_lock); +- mutex_destroy(&cq->cq_lock); ++ spin_unlock(&cq->cq_lock); + } + + /** +@@ -173,7 +172,7 @@ int idpf_ctlq_add(struct idpf_hw *hw, + + idpf_ctlq_init_regs(hw, cq, is_rxq); + +- mutex_init(&cq->cq_lock); ++ spin_lock_init(&cq->cq_lock); + + list_add(&cq->cq_list, &hw->cq_list_head); + +@@ -272,7 +271,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + int err = 0; + int i; + +- mutex_lock(&cq->cq_lock); ++ spin_lock(&cq->cq_lock); + + /* Ensure there are enough descriptors to send all messages */ + num_desc_avail = IDPF_CTLQ_DESC_UNUSED(cq); +@@ -332,7 +331,7 @@ int idpf_ctlq_send(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + wr32(hw, cq->reg.tail, cq->next_to_use); + + err_unlock: +- mutex_unlock(&cq->cq_lock); ++ spin_unlock(&cq->cq_lock); + + return err; + } +@@ -364,7 +363,7 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count, + if (*clean_count > cq->ring_size) + return -EBADR; + +- mutex_lock(&cq->cq_lock); ++ spin_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + +@@ -397,7 +396,7 @@ int idpf_ctlq_clean_sq(struct idpf_ctlq_info *cq, u16 *clean_count, + + cq->next_to_clean = ntc; + +- mutex_unlock(&cq->cq_lock); ++ spin_unlock(&cq->cq_lock); + + /* Return number of descriptors actually cleaned */ + *clean_count = i; +@@ -435,7 +434,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + if (*buff_count > 0) + buffs_avail = true; + +- 
mutex_lock(&cq->cq_lock); ++ spin_lock(&cq->cq_lock); + + if (tbp >= cq->ring_size) + tbp = 0; +@@ -524,7 +523,7 @@ int idpf_ctlq_post_rx_buffs(struct idpf_hw *hw, struct idpf_ctlq_info *cq, + wr32(hw, cq->reg.tail, cq->next_to_post); + } + +- mutex_unlock(&cq->cq_lock); ++ spin_unlock(&cq->cq_lock); + + /* return the number of buffers that were not posted */ + *buff_count = *buff_count - i; +@@ -552,7 +551,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, + u16 i; + + /* take the lock before we start messing with the ring */ +- mutex_lock(&cq->cq_lock); ++ spin_lock(&cq->cq_lock); + + ntc = cq->next_to_clean; + +@@ -614,7 +613,7 @@ int idpf_ctlq_recv(struct idpf_ctlq_info *cq, u16 *num_q_msg, + + cq->next_to_clean = ntc; + +- mutex_unlock(&cq->cq_lock); ++ spin_unlock(&cq->cq_lock); + + *num_q_msg = i; + if (*num_q_msg == 0) +diff --git a/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h +index e8e046ef2f0d7..5890d8adca4a8 100644 +--- a/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h ++++ b/drivers/net/ethernet/intel/idpf/idpf_controlq_api.h +@@ -99,7 +99,7 @@ struct idpf_ctlq_info { + + enum idpf_ctlq_type cq_type; + int q_id; +- struct mutex cq_lock; /* control queue lock */ ++ spinlock_t cq_lock; /* control queue lock */ + /* used for interrupt processing */ + u16 next_to_use; + u16 next_to_clean; +diff --git a/drivers/net/ethernet/intel/idpf/idpf_lib.c b/drivers/net/ethernet/intel/idpf/idpf_lib.c +index ba645ab22d394..746b655337275 100644 +--- a/drivers/net/ethernet/intel/idpf/idpf_lib.c ++++ b/drivers/net/ethernet/intel/idpf/idpf_lib.c +@@ -2315,8 +2315,12 @@ void *idpf_alloc_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem, u64 size) + struct idpf_adapter *adapter = hw->back; + size_t sz = ALIGN(size, 4096); + +- mem->va = dma_alloc_coherent(&adapter->pdev->dev, sz, +- &mem->pa, GFP_KERNEL); ++ /* The control queue resources are freed under a spinlock, contiguous ++ * pages will avoid IOMMU remapping and the use vmap (and vunmap in ++ * dma_free_*() path. ++ */ ++ mem->va = dma_alloc_attrs(&adapter->pdev->dev, sz, &mem->pa, ++ GFP_KERNEL, DMA_ATTR_FORCE_CONTIGUOUS); + mem->size = sz; + + return mem->va; +@@ -2331,8 +2335,8 @@ void idpf_free_dma_mem(struct idpf_hw *hw, struct idpf_dma_mem *mem) + { + struct idpf_adapter *adapter = hw->back; + +- dma_free_coherent(&adapter->pdev->dev, mem->size, +- mem->va, mem->pa); ++ dma_free_attrs(&adapter->pdev->dev, mem->size, ++ mem->va, mem->pa, DMA_ATTR_FORCE_CONTIGUOUS); + mem->size = 0; + mem->va = NULL; + mem->pa = 0; +-- +2.39.5 + diff --git a/queue-6.12/idpf-return-0-size-for-rss-key-if-not-supported.patch b/queue-6.12/idpf-return-0-size-for-rss-key-if-not-supported.patch new file mode 100644 index 0000000000..a9fafa60c2 --- /dev/null +++ b/queue-6.12/idpf-return-0-size-for-rss-key-if-not-supported.patch @@ -0,0 +1,101 @@ +From bb8dae98a31ec01244d674af042f031cecd53ec0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 22 May 2025 10:52:06 +0200 +Subject: idpf: return 0 size for RSS key if not supported + +From: Michal Swiatkowski + +[ Upstream commit f77bf1ebf8ff6301ccdbc346f7b52db928f9cbf8 ] + +Returning -EOPNOTSUPP from function returning u32 is leading to +cast and invalid size value as a result. + +-EOPNOTSUPP as a size probably will lead to allocation fail. + +Command: ethtool -x eth0 +It is visible on all devices that don't have RSS caps set. + +[ 136.615917] Call Trace: +[ 136.615921] +[ 136.615927] ? __warn+0x89/0x130 +[ 136.615942] ? 
__alloc_frozen_pages_noprof+0x322/0x330 +[ 136.615953] ? report_bug+0x164/0x190 +[ 136.615968] ? handle_bug+0x58/0x90 +[ 136.615979] ? exc_invalid_op+0x17/0x70 +[ 136.615987] ? asm_exc_invalid_op+0x1a/0x20 +[ 136.616001] ? rss_prepare_get.constprop.0+0xb9/0x170 +[ 136.616016] ? __alloc_frozen_pages_noprof+0x322/0x330 +[ 136.616028] __alloc_pages_noprof+0xe/0x20 +[ 136.616038] ___kmalloc_large_node+0x80/0x110 +[ 136.616072] __kmalloc_large_node_noprof+0x1d/0xa0 +[ 136.616081] __kmalloc_noprof+0x32c/0x4c0 +[ 136.616098] ? rss_prepare_get.constprop.0+0xb9/0x170 +[ 136.616105] rss_prepare_get.constprop.0+0xb9/0x170 +[ 136.616114] ethnl_default_doit+0x107/0x3d0 +[ 136.616131] genl_family_rcv_msg_doit+0x100/0x160 +[ 136.616147] genl_rcv_msg+0x1b8/0x2c0 +[ 136.616156] ? __pfx_ethnl_default_doit+0x10/0x10 +[ 136.616168] ? __pfx_genl_rcv_msg+0x10/0x10 +[ 136.616176] netlink_rcv_skb+0x58/0x110 +[ 136.616186] genl_rcv+0x28/0x40 +[ 136.616195] netlink_unicast+0x19b/0x290 +[ 136.616206] netlink_sendmsg+0x222/0x490 +[ 136.616215] __sys_sendto+0x1fd/0x210 +[ 136.616233] __x64_sys_sendto+0x24/0x30 +[ 136.616242] do_syscall_64+0x82/0x160 +[ 136.616252] ? __sys_recvmsg+0x83/0xe0 +[ 136.616265] ? syscall_exit_to_user_mode+0x10/0x210 +[ 136.616275] ? do_syscall_64+0x8e/0x160 +[ 136.616282] ? __count_memcg_events+0xa1/0x130 +[ 136.616295] ? count_memcg_events.constprop.0+0x1a/0x30 +[ 136.616306] ? handle_mm_fault+0xae/0x2d0 +[ 136.616319] ? do_user_addr_fault+0x379/0x670 +[ 136.616328] ? clear_bhb_loop+0x45/0xa0 +[ 136.616340] ? clear_bhb_loop+0x45/0xa0 +[ 136.616349] ? clear_bhb_loop+0x45/0xa0 +[ 136.616359] entry_SYSCALL_64_after_hwframe+0x76/0x7e +[ 136.616369] RIP: 0033:0x7fd30ba7b047 +[ 136.616376] Code: 0c 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb b8 0f 1f 00 f3 0f 1e fa 80 3d bd d5 0c 00 00 41 89 ca 74 10 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 71 c3 55 48 83 ec 30 44 89 4c 24 2c 4c 89 44 +[ 136.616381] RSP: 002b:00007ffde1796d68 EFLAGS: 00000202 ORIG_RAX: 000000000000002c +[ 136.616388] RAX: ffffffffffffffda RBX: 000055d7bd89f2a0 RCX: 00007fd30ba7b047 +[ 136.616392] RDX: 0000000000000028 RSI: 000055d7bd89f3b0 RDI: 0000000000000003 +[ 136.616396] RBP: 00007ffde1796e10 R08: 00007fd30bb4e200 R09: 000000000000000c +[ 136.616399] R10: 0000000000000000 R11: 0000000000000202 R12: 000055d7bd89f340 +[ 136.616403] R13: 000055d7bd89f3b0 R14: 000055d78943f200 R15: 0000000000000000 + +Fixes: 02cbfba1add5 ("idpf: add ethtool callbacks") +Reviewed-by: Ahmed Zaki +Signed-off-by: Michal Swiatkowski +Reviewed-by: Simon Horman +Tested-by: Samuel Salin +Signed-off-by: Tony Nguyen +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/intel/idpf/idpf_ethtool.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +index 59b1a1a099967..f72420cf68216 100644 +--- a/drivers/net/ethernet/intel/idpf/idpf_ethtool.c ++++ b/drivers/net/ethernet/intel/idpf/idpf_ethtool.c +@@ -46,7 +46,7 @@ static u32 idpf_get_rxfh_key_size(struct net_device *netdev) + struct idpf_vport_user_config_data *user_config; + + if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) +- return -EOPNOTSUPP; ++ return 0; + + user_config = &np->adapter->vport_config[np->vport_idx]->user_config; + +@@ -65,7 +65,7 @@ static u32 idpf_get_rxfh_indir_size(struct net_device *netdev) + struct idpf_vport_user_config_data *user_config; + + if (!idpf_is_cap_ena_all(np->adapter, IDPF_RSS_CAPS, IDPF_CAP_RSS)) +- return -EOPNOTSUPP; ++ return 
0; + + user_config = &np->adapter->vport_config[np->vport_idx]->user_config; + +-- +2.39.5 + diff --git a/queue-6.12/igc-disable-l1.2-pci-e-link-substate-to-avoid-perfor.patch b/queue-6.12/igc-disable-l1.2-pci-e-link-substate-to-avoid-perfor.patch new file mode 100644 index 0000000000..f53c6d2be6 --- /dev/null +++ b/queue-6.12/igc-disable-l1.2-pci-e-link-substate-to-avoid-perfor.patch @@ -0,0 +1,66 @@ +From c5fd0b0e872e96d8460fa0a17e3d9dcfa78b69f4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 15:52:54 +0300 +Subject: igc: disable L1.2 PCI-E link substate to avoid performance issue + +From: Vitaly Lifshits + +[ Upstream commit 0325143b59c6c6d79987afc57d2456e7a20d13b7 ] + +I226 devices advertise support for the PCI-E link L1.2 substate. However, +due to a hardware limitation, the exit latency from this low-power state +is longer than the packet buffer can tolerate under high traffic +conditions. This can lead to packet loss and degraded performance. + +To mitigate this, disable the L1.2 substate. The increased power draw +between L1.1 and L1.2 is insignificant. + +Fixes: 43546211738e ("igc: Add new device ID's") +Link: https://lore.kernel.org/intel-wired-lan/15248b4f-3271-42dd-8e35-02bfc92b25e1@intel.com +Signed-off-by: Vitaly Lifshits +Reviewed-by: Aleksandr Loktionov +Tested-by: Mor Bar-Gabay +Signed-off-by: Tony Nguyen +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/intel/igc/igc_main.c | 10 ++++++++++ + 1 file changed, 10 insertions(+) + +diff --git a/drivers/net/ethernet/intel/igc/igc_main.c b/drivers/net/ethernet/intel/igc/igc_main.c +index 082b0baf5d37c..2a0c5a343e472 100644 +--- a/drivers/net/ethernet/intel/igc/igc_main.c ++++ b/drivers/net/ethernet/intel/igc/igc_main.c +@@ -6987,6 +6987,10 @@ static int igc_probe(struct pci_dev *pdev, + adapter->port_num = hw->bus.func; + adapter->msg_enable = netif_msg_init(debug, DEFAULT_MSG_ENABLE); + ++ /* Disable ASPM L1.2 on I226 devices to avoid packet loss */ ++ if (igc_is_device_id_i226(hw)) ++ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); ++ + err = pci_save_state(pdev); + if (err) + goto err_ioremap; +@@ -7368,6 +7372,9 @@ static int igc_resume(struct device *dev) + pci_enable_wake(pdev, PCI_D3hot, 0); + pci_enable_wake(pdev, PCI_D3cold, 0); + ++ if (igc_is_device_id_i226(hw)) ++ pci_disable_link_state(pdev, PCIE_LINK_STATE_L1_2); ++ + if (igc_init_interrupt_scheme(adapter, true)) { + netdev_err(netdev, "Unable to allocate memory for queues\n"); + return -ENOMEM; +@@ -7480,6 +7487,9 @@ static pci_ers_result_t igc_io_slot_reset(struct pci_dev *pdev) + pci_enable_wake(pdev, PCI_D3hot, 0); + pci_enable_wake(pdev, PCI_D3cold, 0); + ++ if (igc_is_device_id_i226(hw)) ++ pci_disable_link_state_locked(pdev, PCIE_LINK_STATE_L1_2); ++ + /* In case of PCI error, adapter loses its HW address + * so we should re-assign it here. 
+ */ +-- +2.39.5 + diff --git a/queue-6.12/iommu-ipmmu-vmsa-avoid-wformat-security-warning.patch b/queue-6.12/iommu-ipmmu-vmsa-avoid-wformat-security-warning.patch new file mode 100644 index 0000000000..7c166cb11d --- /dev/null +++ b/queue-6.12/iommu-ipmmu-vmsa-avoid-wformat-security-warning.patch @@ -0,0 +1,50 @@ +From a9c7ec5a38cf190cc99f80bbff84490ad7dcee94 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Jul 2025 14:42:43 -0400 +Subject: iommu: ipmmu-vmsa: avoid Wformat-security warning + +From: Arnd Bergmann + +[ Upstream commit 33647d0be323f9e2f27cd1e86de5cfd965cec654 ] + +iommu_device_sysfs_add() requires a constant format string, otherwise +a W=1 build produces a warning: + +drivers/iommu/ipmmu-vmsa.c:1093:62: error: format string is not a string literal (potentially insecure) [-Werror,-Wformat-security] + 1093 | ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, dev_name(&pdev->dev)); + | ^~~~~~~~~~~~~~~~~~~~ +drivers/iommu/ipmmu-vmsa.c:1093:62: note: treat the string as an argument to avoid this + 1093 | ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, dev_name(&pdev->dev)); + | ^ + | "%s", + +This was an old bug but I saw it now because the code was changed as part +of commit d9d3cede4167 ("iommu/ipmmu-vmsa: Register in a sensible order"). + +Fixes: 7af9a5fdb9e0 ("iommu/ipmmu-vmsa: Use iommu_device_sysfs_add()/remove()") +Signed-off-by: Arnd Bergmann +Reviewed-by: Geert Uytterhoeven +Reviewed-by: Jason Gunthorpe +Link: https://lore.kernel.org/r/20250423164006.2661372-1-arnd@kernel.org +Signed-off-by: Joerg Roedel +Signed-off-by: Sasha Levin +--- + drivers/iommu/ipmmu-vmsa.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c +index ff55b8c307126..ae69691471e9f 100644 +--- a/drivers/iommu/ipmmu-vmsa.c ++++ b/drivers/iommu/ipmmu-vmsa.c +@@ -1087,7 +1087,7 @@ static int ipmmu_probe(struct platform_device *pdev) + * - R-Car Gen3 IPMMU (leaf devices only - skip root IPMMU-MM device) + */ + if (!mmu->features->has_cache_leaf_nodes || !ipmmu_is_root(mmu)) { +- ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, ++ ret = iommu_device_sysfs_add(&mmu->iommu, &pdev->dev, NULL, "%s", + dev_name(&pdev->dev)); + if (ret) + return ret; +-- +2.39.5 + diff --git a/queue-6.12/kunit-qemu_configs-disable-faulting-tests-on-32-bit-.patch b/queue-6.12/kunit-qemu_configs-disable-faulting-tests-on-32-bit-.patch new file mode 100644 index 0000000000..79b41c3990 --- /dev/null +++ b/queue-6.12/kunit-qemu_configs-disable-faulting-tests-on-32-bit-.patch @@ -0,0 +1,46 @@ +From 757344e63b3fa3d9eddb1e490d4aa832406133ad Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 16 Apr 2025 17:38:25 +0800 +Subject: kunit: qemu_configs: Disable faulting tests on 32-bit SPARC +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: David Gow + +[ Upstream commit 1d31d536871fe8b16c8c0de58d201c78e21eb3a2 ] + +The 32-bit sparc configuration (--arch sparc) crashes on +the kunit_fault_test. It's known that some architectures don't handle +deliberate segfaults in kernel mode well, so there's a config switch to +disable tests which rely upon it by default. + +Use this for the sparc config, making sure the default config for it +passes. 
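+
+For reference, the failing configuration can be exercised through the
+kunit wrapper with something like:
+
+	$ ./tools/testing/kunit/kunit.py run --arch sparc
+
+where --arch selects the qemu_configs/sparc.py entry patched below.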
+ +Link: https://lore.kernel.org/r/20250416093826.1550040-1-davidgow@google.com +Fixes: 87c9c1631788 ("kunit: tool: add support for QEMU") +Signed-off-by: David Gow +Reviewed-by: Thomas Weißschuh +Tested-by: Thomas Weißschuh +Signed-off-by: Shuah Khan +Signed-off-by: Sasha Levin +--- + tools/testing/kunit/qemu_configs/sparc.py | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/tools/testing/kunit/qemu_configs/sparc.py b/tools/testing/kunit/qemu_configs/sparc.py +index 3131dd299a6e3..2019550a1b692 100644 +--- a/tools/testing/kunit/qemu_configs/sparc.py ++++ b/tools/testing/kunit/qemu_configs/sparc.py +@@ -2,6 +2,7 @@ from ..qemu_config import QemuArchParams + + QEMU_ARCH = QemuArchParams(linux_arch='sparc', + kconfig=''' ++CONFIG_KUNIT_FAULT_TEST=n + CONFIG_SPARC32=y + CONFIG_SERIAL_SUNZILOG=y + CONFIG_SERIAL_SUNZILOG_CONSOLE=y +-- +2.39.5 + diff --git a/queue-6.12/kunit-qemu_configs-sparc-explicitly-enable-config_sp.patch b/queue-6.12/kunit-qemu_configs-sparc-explicitly-enable-config_sp.patch new file mode 100644 index 0000000000..5805768cb3 --- /dev/null +++ b/queue-6.12/kunit-qemu_configs-sparc-explicitly-enable-config_sp.patch @@ -0,0 +1,42 @@ +From 5d9e707737b07b9b0eba6b7272a71fd28ef4409a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 15 Apr 2025 15:38:05 +0200 +Subject: kunit: qemu_configs: sparc: Explicitly enable CONFIG_SPARC32=y +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit d16b3d0fb43cb0f9eb21b35c2d2c870b3f38ab1d ] + +The configuration generated by kunit ends up with a 32bit configuration. +A new kunit configuration for 64bit is to be added. +To make the difference clearer spell out the variant in the kunit +reference config. + +Link: https://lore.kernel.org/r/20250415-kunit-qemu-sparc64-v1-1-253906f61102@linutronix.de +Signed-off-by: Thomas Weißschuh +Reviewed-by: David Gow +Signed-off-by: Shuah Khan +Stable-dep-of: 1d31d536871f ("kunit: qemu_configs: Disable faulting tests on 32-bit SPARC") +Signed-off-by: Sasha Levin +--- + tools/testing/kunit/qemu_configs/sparc.py | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/tools/testing/kunit/qemu_configs/sparc.py b/tools/testing/kunit/qemu_configs/sparc.py +index 256d9573b4464..3131dd299a6e3 100644 +--- a/tools/testing/kunit/qemu_configs/sparc.py ++++ b/tools/testing/kunit/qemu_configs/sparc.py +@@ -2,6 +2,7 @@ from ..qemu_config import QemuArchParams + + QEMU_ARCH = QemuArchParams(linux_arch='sparc', + kconfig=''' ++CONFIG_SPARC32=y + CONFIG_SERIAL_SUNZILOG=y + CONFIG_SERIAL_SUNZILOG_CONSOLE=y + ''', +-- +2.39.5 + diff --git a/queue-6.12/kunit-qemu_configs-sparc-use-zilog-console.patch b/queue-6.12/kunit-qemu_configs-sparc-use-zilog-console.patch new file mode 100644 index 0000000000..876a3a87a8 --- /dev/null +++ b/queue-6.12/kunit-qemu_configs-sparc-use-zilog-console.patch @@ -0,0 +1,55 @@ +From 3f525dc06f9a03f7ab66bdd1f05b55ebcdd8651a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 Feb 2025 14:27:05 +0100 +Subject: kunit: qemu_configs: sparc: use Zilog console +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit e275f44e0a187b9d76830847976072f1c17b4b7b ] + +The driver for the 8250 console is not used, as no port is found. +Instead the prom0 bootconsole is used the whole time. +The prom driver translates '\n' to '\r\n' before handing of the message +off to the firmware. The firmware performs the same translation again. 
+In the final output produced by QEMU each line ends with '\r\r\n'. +This breaks the kunit parser, which can only handle '\r\n' and '\n'. + +Use the Zilog console instead. It works correctly, is the one documented +by the QEMU manual and also saves a bit of codesize: +Before=4051011, After=4023326, chg -0.68% + +Observed on QEMU 9.2.0. + +Link: https://lore.kernel.org/r/20250214-kunit-qemu-sparc-console-v1-1-ba1dfdf8f0b1@linutronix.de +Fixes: 87c9c1631788 ("kunit: tool: add support for QEMU") +Signed-off-by: Thomas Weißschuh +Reviewed-by: David Gow +Signed-off-by: Shuah Khan +Stable-dep-of: 1d31d536871f ("kunit: qemu_configs: Disable faulting tests on 32-bit SPARC") +Signed-off-by: Sasha Levin +--- + tools/testing/kunit/qemu_configs/sparc.py | 5 +++-- + 1 file changed, 3 insertions(+), 2 deletions(-) + +diff --git a/tools/testing/kunit/qemu_configs/sparc.py b/tools/testing/kunit/qemu_configs/sparc.py +index e975c4331a7c2..256d9573b4464 100644 +--- a/tools/testing/kunit/qemu_configs/sparc.py ++++ b/tools/testing/kunit/qemu_configs/sparc.py +@@ -2,8 +2,9 @@ from ..qemu_config import QemuArchParams + + QEMU_ARCH = QemuArchParams(linux_arch='sparc', + kconfig=''' +-CONFIG_SERIAL_8250=y +-CONFIG_SERIAL_8250_CONSOLE=y''', ++CONFIG_SERIAL_SUNZILOG=y ++CONFIG_SERIAL_SUNZILOG_CONSOLE=y ++''', + qemu_arch='sparc', + kernel_path='arch/sparc/boot/zImage', + kernel_command_line='console=ttyS0 mem=256M', +-- +2.39.5 + diff --git a/queue-6.12/lib-test_objagg-set-error-message-in-check_expect_hi.patch b/queue-6.12/lib-test_objagg-set-error-message-in-check_expect_hi.patch new file mode 100644 index 0000000000..474239e18f --- /dev/null +++ b/queue-6.12/lib-test_objagg-set-error-message-in-check_expect_hi.patch @@ -0,0 +1,50 @@ +From e75957ad67b579d63ec1580adca3e3cb6bfe6599 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Jun 2025 14:36:40 -0500 +Subject: lib: test_objagg: Set error message in check_expect_hints_stats() + +From: Dan Carpenter + +[ Upstream commit e6ed134a4ef592fe1fd0cafac9683813b3c8f3e8 ] + +Smatch complains that the error message isn't set in the caller: + + lib/test_objagg.c:923 test_hints_case2() + error: uninitialized symbol 'errmsg'. + +This static checker warning only showed up after a recent refactoring +but the bug dates back to when the code was originally added. This +likely doesn't affect anything in real life. 
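+
+The implied contract is that every failing return must set the errmsg
+out parameter the caller will print. A minimal sketch of the pattern
+(hypothetical names, not the file's actual code):
+
+	static int check_something(const char **errmsg)
+	{
+		struct stats *stats = get_stats();
+
+		if (IS_ERR(stats)) {
+			*errmsg = "get_stats() failed.";	/* must be set on error */
+			return PTR_ERR(stats);
+		}
+		return 0;
+	}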
+ +Reported-by: kernel test robot +Closes: https://lore.kernel.org/r/202506281403.DsuyHFTZ-lkp@intel.com/ +Fixes: 0a020d416d0a ("lib: introduce initial implementation of object aggregation manager") +Signed-off-by: Dan Carpenter +Reviewed-by: Ido Schimmel +Reviewed-by: Simon Horman +Link: https://patch.msgid.link/8548f423-2e3b-4bb7-b816-5041de2762aa@sabinyo.mountain +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + lib/test_objagg.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/lib/test_objagg.c b/lib/test_objagg.c +index d34df4306b874..222b39fc2629e 100644 +--- a/lib/test_objagg.c ++++ b/lib/test_objagg.c +@@ -899,8 +899,10 @@ static int check_expect_hints_stats(struct objagg_hints *objagg_hints, + int err; + + stats = objagg_hints_stats_get(objagg_hints); +- if (IS_ERR(stats)) ++ if (IS_ERR(stats)) { ++ *errmsg = "objagg_hints_stats_get() failed."; + return PTR_ERR(stats); ++ } + err = __check_expect_stats(stats, expect_stats, errmsg); + objagg_stats_put(stats); + return err; +-- +2.39.5 + diff --git a/queue-6.12/mfd-exynos-lpass-fix-another-error-handling-path-in-.patch b/queue-6.12/mfd-exynos-lpass-fix-another-error-handling-path-in-.patch new file mode 100644 index 0000000000..b3776881d0 --- /dev/null +++ b/queue-6.12/mfd-exynos-lpass-fix-another-error-handling-path-in-.patch @@ -0,0 +1,85 @@ +From 8a5adced1e02f7058d95d31a05772a9c149518bb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Jul 2025 21:48:54 -0400 +Subject: mfd: exynos-lpass: Fix another error handling path in + exynos_lpass_probe() + +From: Christophe JAILLET + +[ Upstream commit f41cc37f4bc0e8cd424697bf6e26586cadcf4b9b ] + +If devm_of_platform_populate() fails, some clean-up needs to be done, as +already done in the remove function. + +Add a new devm_add_action_or_reset() to fix the leak in the probe and +remove the need of a remove function. 
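+
+devm_add_action_or_reset() registers a callback that runs on driver
+unbind and also runs immediately if the registration itself fails, so
+every exit path after that point is covered. A minimal sketch of the
+idiom (hypothetical names):
+
+	static void my_undo(void *data)
+	{
+		my_hw_disable(data);	/* what the old remove() used to do */
+	}
+
+	/* in probe(), right after the setup it undoes: */
+	ret = devm_add_action_or_reset(dev, my_undo, priv);
+	if (ret)
+		return ret;	/* my_undo() has already been run */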
+ +Fixes: c695abab2429 ("mfd: Add Samsung Exynos Low Power Audio Subsystem driver") +Signed-off-by: Christophe JAILLET +Reviewed-by: Krzysztof Kozlowski +Link: https://lore.kernel.org/r/69471e839efc0249a504492a8de3497fcdb6a009.1745247209.git.christophe.jaillet@wanadoo.fr +Signed-off-by: Lee Jones +Signed-off-by: Sasha Levin +--- + drivers/mfd/exynos-lpass.c | 25 +++++++++++++++---------- + 1 file changed, 15 insertions(+), 10 deletions(-) + +diff --git a/drivers/mfd/exynos-lpass.c b/drivers/mfd/exynos-lpass.c +index e36805f07282e..8b5fed4760394 100644 +--- a/drivers/mfd/exynos-lpass.c ++++ b/drivers/mfd/exynos-lpass.c +@@ -104,11 +104,22 @@ static const struct regmap_config exynos_lpass_reg_conf = { + .fast_io = true, + }; + ++static void exynos_lpass_disable_lpass(void *data) ++{ ++ struct platform_device *pdev = data; ++ struct exynos_lpass *lpass = platform_get_drvdata(pdev); ++ ++ pm_runtime_disable(&pdev->dev); ++ if (!pm_runtime_status_suspended(&pdev->dev)) ++ exynos_lpass_disable(lpass); ++} ++ + static int exynos_lpass_probe(struct platform_device *pdev) + { + struct device *dev = &pdev->dev; + struct exynos_lpass *lpass; + void __iomem *base_top; ++ int ret; + + lpass = devm_kzalloc(dev, sizeof(*lpass), GFP_KERNEL); + if (!lpass) +@@ -134,16 +145,11 @@ static int exynos_lpass_probe(struct platform_device *pdev) + pm_runtime_enable(dev); + exynos_lpass_enable(lpass); + +- return devm_of_platform_populate(dev); +-} +- +-static void exynos_lpass_remove(struct platform_device *pdev) +-{ +- struct exynos_lpass *lpass = platform_get_drvdata(pdev); ++ ret = devm_add_action_or_reset(dev, exynos_lpass_disable_lpass, pdev); ++ if (ret) ++ return ret; + +- pm_runtime_disable(&pdev->dev); +- if (!pm_runtime_status_suspended(&pdev->dev)) +- exynos_lpass_disable(lpass); ++ return devm_of_platform_populate(dev); + } + + static int __maybe_unused exynos_lpass_suspend(struct device *dev) +@@ -183,7 +189,6 @@ static struct platform_driver exynos_lpass_driver = { + .of_match_table = exynos_lpass_of_match, + }, + .probe = exynos_lpass_probe, +- .remove_new = exynos_lpass_remove, + }; + module_platform_driver(exynos_lpass_driver); + +-- +2.39.5 + diff --git a/queue-6.12/module-provide-export_symbol_gpl_for_modules-helper.patch b/queue-6.12/module-provide-export_symbol_gpl_for_modules-helper.patch new file mode 100644 index 0000000000..de22944068 --- /dev/null +++ b/queue-6.12/module-provide-export_symbol_gpl_for_modules-helper.patch @@ -0,0 +1,110 @@ +From 13d8ffee7b66d56f6a9b3fd2b57ebfac99e7c7d7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 2 May 2025 16:12:09 +0200 +Subject: module: Provide EXPORT_SYMBOL_GPL_FOR_MODULES() helper + +From: Peter Zijlstra + +[ Upstream commit 707f853d7fa3ce323a6875487890c213e34d81a0 ] + +Helper macro to more easily limit the export of a symbol to a given +list of modules. + +Eg: + + EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm"); + +will limit the use of said function to kvm.ko, any other module trying +to use this symbol will refure to load (and get modpost build +failures). 
+ +Requested-by: Masahiro Yamada +Requested-by: Christoph Hellwig +Signed-off-by: Peter Zijlstra +Reviewed-by: Petr Pavlu +Signed-off-by: Masahiro Yamada +Stable-dep-of: cbe4134ea4bc ("fs: export anon_inode_make_secure_inode() and fix secretmem LSM bypass") +Signed-off-by: Sasha Levin +--- + Documentation/core-api/symbol-namespaces.rst | 22 ++++++++++++++++++++ + include/linux/export.h | 12 +++++++++-- + 2 files changed, 32 insertions(+), 2 deletions(-) + +diff --git a/Documentation/core-api/symbol-namespaces.rst b/Documentation/core-api/symbol-namespaces.rst +index d1154eb438101..cca94469fa414 100644 +--- a/Documentation/core-api/symbol-namespaces.rst ++++ b/Documentation/core-api/symbol-namespaces.rst +@@ -28,6 +28,9 @@ kernel. As of today, modules that make use of symbols exported into namespaces, + are required to import the namespace. Otherwise the kernel will, depending on + its configuration, reject loading the module or warn about a missing import. + ++Additionally, it is possible to put symbols into a module namespace, strictly ++limiting which modules are allowed to use these symbols. ++ + 2. How to define Symbol Namespaces + ================================== + +@@ -84,6 +87,22 @@ unit as preprocessor statement. The above example would then read:: + within the corresponding compilation unit before any EXPORT_SYMBOL macro is + used. + ++2.3 Using the EXPORT_SYMBOL_GPL_FOR_MODULES() macro ++=================================================== ++ ++Symbols exported using this macro are put into a module namespace. This ++namespace cannot be imported. ++ ++The macro takes a comma separated list of module names, allowing only those ++modules to access this symbol. Simple tail-globs are supported. ++ ++For example: ++ ++ EXPORT_SYMBOL_GPL_FOR_MODULES(preempt_notifier_inc, "kvm,kvm-*") ++ ++will limit usage of this symbol to modules whoes name matches the given ++patterns. ++ + 3. How to use Symbols exported in Namespaces + ============================================ + +@@ -155,3 +174,6 @@ in-tree modules:: + You can also run nsdeps for external module builds. A typical usage is:: + + $ make -C M=$PWD nsdeps ++ ++Note: it will happily generate an import statement for the module namespace; ++which will not work and generates build and runtime failures. +diff --git a/include/linux/export.h b/include/linux/export.h +index 1e04dbc675c2f..b40ae79b767da 100644 +--- a/include/linux/export.h ++++ b/include/linux/export.h +@@ -24,11 +24,17 @@ + .long sym + #endif + +-#define ___EXPORT_SYMBOL(sym, license, ns) \ ++/* ++ * LLVM integrated assembler cam merge adjacent string literals (like ++ * C and GNU-as) passed to '.ascii', but not to '.asciz' and chokes on: ++ * ++ * .asciz "MODULE_" "kvm" ; ++ */ ++#define ___EXPORT_SYMBOL(sym, license, ns...) 
\ + .section ".export_symbol","a" ASM_NL \ + __export_symbol_##sym: ASM_NL \ + .asciz license ASM_NL \ +- .asciz ns ASM_NL \ ++ .ascii ns "\0" ASM_NL \ + __EXPORT_SYMBOL_REF(sym) ASM_NL \ + .previous + +@@ -70,4 +76,6 @@ + #define EXPORT_SYMBOL_NS(sym, ns) __EXPORT_SYMBOL(sym, "", __stringify(ns)) + #define EXPORT_SYMBOL_NS_GPL(sym, ns) __EXPORT_SYMBOL(sym, "GPL", __stringify(ns)) + ++#define EXPORT_SYMBOL_GPL_FOR_MODULES(sym, mods) __EXPORT_SYMBOL(sym, "GPL", "module:" mods) ++ + #endif /* _LINUX_EXPORT_H */ +-- +2.39.5 + diff --git a/queue-6.12/mtd-spinand-fix-memory-leak-of-ecc-engine-conf.patch b/queue-6.12/mtd-spinand-fix-memory-leak-of-ecc-engine-conf.patch new file mode 100644 index 0000000000..89008873a5 --- /dev/null +++ b/queue-6.12/mtd-spinand-fix-memory-leak-of-ecc-engine-conf.patch @@ -0,0 +1,59 @@ +From 0f5c1f9ec9e97ac6fd7de5325f7b321a1380cf26 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 13:35:16 +0200 +Subject: mtd: spinand: fix memory leak of ECC engine conf + +From: Pablo Martin-Gomez + +[ Upstream commit 6463cbe08b0cbf9bba8763306764f5fd643023e1 ] + +Memory allocated for the ECC engine conf is not released during spinand +cleanup. Below kmemleak trace is seen for this memory leak: + +unreferenced object 0xffffff80064f00e0 (size 8): + comm "swapper/0", pid 1, jiffies 4294937458 + hex dump (first 8 bytes): + 00 00 00 00 00 00 00 00 ........ + backtrace (crc 0): + kmemleak_alloc+0x30/0x40 + __kmalloc_cache_noprof+0x208/0x3c0 + spinand_ondie_ecc_init_ctx+0x114/0x200 + nand_ecc_init_ctx+0x70/0xa8 + nanddev_ecc_engine_init+0xec/0x27c + spinand_probe+0xa2c/0x1620 + spi_mem_probe+0x130/0x21c + spi_probe+0xf0/0x170 + really_probe+0x17c/0x6e8 + __driver_probe_device+0x17c/0x21c + driver_probe_device+0x58/0x180 + __device_attach_driver+0x15c/0x1f8 + bus_for_each_drv+0xec/0x150 + __device_attach+0x188/0x24c + device_initial_probe+0x10/0x20 + bus_probe_device+0x11c/0x160 + +Fix the leak by calling nanddev_ecc_engine_cleanup() inside +spinand_cleanup(). + +Signed-off-by: Pablo Martin-Gomez +Signed-off-by: Miquel Raynal +Signed-off-by: Sasha Levin +--- + drivers/mtd/nand/spi/core.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/mtd/nand/spi/core.c b/drivers/mtd/nand/spi/core.c +index 4d76f9f71a0e9..241f6a4df16c1 100644 +--- a/drivers/mtd/nand/spi/core.c ++++ b/drivers/mtd/nand/spi/core.c +@@ -1496,6 +1496,7 @@ static void spinand_cleanup(struct spinand_device *spinand) + { + struct nand_device *nand = spinand_to_nand(spinand); + ++ nanddev_ecc_engine_cleanup(nand); + nanddev_cleanup(nand); + spinand_manufacturer_cleanup(spinand); + kfree(spinand->databuf); +-- +2.39.5 + diff --git a/queue-6.12/net-sched-always-pass-notifications-when-child-class.patch b/queue-6.12/net-sched-always-pass-notifications-when-child-class.patch new file mode 100644 index 0000000000..3c6b7a0d9f --- /dev/null +++ b/queue-6.12/net-sched-always-pass-notifications-when-child-class.patch @@ -0,0 +1,109 @@ +From a819c1497b2a7fd54f2c0813baeee44ce61e0879 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Jun 2025 15:27:30 +0200 +Subject: net/sched: Always pass notifications when child class becomes empty + +From: Lion Ackermann + +[ Upstream commit 103406b38c600fec1fe375a77b27d87e314aea09 ] + +Certain classful qdiscs may invoke their classes' dequeue handler on an +enqueue operation. This may unexpectedly empty the child qdisc and thus +make an in-flight class passive via qlen_notify(). 
Most qdiscs do not +expect such behaviour at this point in time and may re-activate the +class eventually anyways which will lead to a use-after-free. + +The referenced fix commit attempted to fix this behavior for the HFSC +case by moving the backlog accounting around, though this turned out to +be incomplete since the parent's parent may run into the issue too. +The following reproducer demonstrates this use-after-free: + + tc qdisc add dev lo root handle 1: drr + tc filter add dev lo parent 1: basic classid 1:1 + tc class add dev lo parent 1: classid 1:1 drr + tc qdisc add dev lo parent 1:1 handle 2: hfsc def 1 + tc class add dev lo parent 2: classid 2:1 hfsc rt m1 8 d 1 m2 0 + tc qdisc add dev lo parent 2:1 handle 3: netem + tc qdisc add dev lo parent 3:1 handle 4: blackhole + + echo 1 | socat -u STDIN UDP4-DATAGRAM:127.0.0.1:8888 + tc class delete dev lo classid 1:1 + echo 1 | socat -u STDIN UDP4-DATAGRAM:127.0.0.1:8888 + +Since backlog accounting issues leading to a use-after-frees on stale +class pointers is a recurring pattern at this point, this patch takes +a different approach. Instead of trying to fix the accounting, the patch +ensures that qdisc_tree_reduce_backlog always calls qlen_notify when +the child qdisc is empty. This solves the problem because deletion of +qdiscs always involves a call to qdisc_reset() and / or +qdisc_purge_queue() which ultimately resets its qlen to 0 thus causing +the following qdisc_tree_reduce_backlog() to report to the parent. Note +that this may call qlen_notify on passive classes multiple times. This +is not a problem after the recent patch series that made all the +classful qdiscs qlen_notify() handlers idempotent. + +Fixes: 3f981138109f ("sch_hfsc: Fix qlen accounting bug when using peek in hfsc_enqueue()") +Signed-off-by: Lion Ackermann +Reviewed-by: Jamal Hadi Salim +Acked-by: Cong Wang +Acked-by: Jamal Hadi Salim +Link: https://patch.msgid.link/d912cbd7-193b-4269-9857-525bee8bbb6a@gmail.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/sched/sch_api.c | 19 +++++-------------- + 1 file changed, 5 insertions(+), 14 deletions(-) + +diff --git a/net/sched/sch_api.c b/net/sched/sch_api.c +index 518f52f65a49d..26378eac1bd08 100644 +--- a/net/sched/sch_api.c ++++ b/net/sched/sch_api.c +@@ -779,15 +779,12 @@ static u32 qdisc_alloc_handle(struct net_device *dev) + + void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len) + { +- bool qdisc_is_offloaded = sch->flags & TCQ_F_OFFLOADED; + const struct Qdisc_class_ops *cops; + unsigned long cl; + u32 parentid; + bool notify; + int drops; + +- if (n == 0 && len == 0) +- return; + drops = max_t(int, n, 0); + rcu_read_lock(); + while ((parentid = sch->parent)) { +@@ -796,17 +793,8 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len) + + if (sch->flags & TCQ_F_NOPARENT) + break; +- /* Notify parent qdisc only if child qdisc becomes empty. +- * +- * If child was empty even before update then backlog +- * counter is screwed and we skip notification because +- * parent class is already passive. +- * +- * If the original child was offloaded then it is allowed +- * to be seem as empty, so the parent is notified anyway. +- */ +- notify = !sch->q.qlen && !WARN_ON_ONCE(!n && +- !qdisc_is_offloaded); ++ /* Notify parent qdisc only if child qdisc becomes empty. 
*/ ++ notify = !sch->q.qlen; + /* TODO: perform the search on a per txq basis */ + sch = qdisc_lookup_rcu(qdisc_dev(sch), TC_H_MAJ(parentid)); + if (sch == NULL) { +@@ -815,6 +803,9 @@ void qdisc_tree_reduce_backlog(struct Qdisc *sch, int n, int len) + } + cops = sch->ops->cl_ops; + if (notify && cops->qlen_notify) { ++ /* Note that qlen_notify must be idempotent as it may get called ++ * multiple times. ++ */ + cl = cops->find(sch, parentid); + cops->qlen_notify(sch, cl); + } +-- +2.39.5 + diff --git a/queue-6.12/net-usb-lan78xx-fix-warn-in-__netif_napi_del_locked-.patch b/queue-6.12/net-usb-lan78xx-fix-warn-in-__netif_napi_del_locked-.patch new file mode 100644 index 0000000000..61b62a88a7 --- /dev/null +++ b/queue-6.12/net-usb-lan78xx-fix-warn-in-__netif_napi_del_locked-.patch @@ -0,0 +1,96 @@ +From e4858f7425b80eab33cc8375a2e89e7e8689144d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 27 Jun 2025 07:13:46 +0200 +Subject: net: usb: lan78xx: fix WARN in __netif_napi_del_locked on disconnect + +From: Oleksij Rempel + +[ Upstream commit 6c7ffc9af7186ed79403a3ffee9a1e5199fc7450 ] + +Remove redundant netif_napi_del() call from disconnect path. + +A WARN may be triggered in __netif_napi_del_locked() during USB device +disconnect: + + WARNING: CPU: 0 PID: 11 at net/core/dev.c:7417 __netif_napi_del_locked+0x2b4/0x350 + +This happens because netif_napi_del() is called in the disconnect path while +NAPI is still enabled. However, it is not necessary to call netif_napi_del() +explicitly, since unregister_netdev() will handle NAPI teardown automatically +and safely. Removing the redundant call avoids triggering the warning. + +Full trace: + lan78xx 1-1:1.0 enu1: Failed to read register index 0x000000c4. ret = -ENODEV + lan78xx 1-1:1.0 enu1: Failed to set MAC down with error -ENODEV + lan78xx 1-1:1.0 enu1: Link is Down + lan78xx 1-1:1.0 enu1: Failed to read register index 0x00000120. 
ret = -ENODEV + ------------[ cut here ]------------ + WARNING: CPU: 0 PID: 11 at net/core/dev.c:7417 __netif_napi_del_locked+0x2b4/0x350 + Modules linked in: flexcan can_dev fuse + CPU: 0 UID: 0 PID: 11 Comm: kworker/0:1 Not tainted 6.16.0-rc2-00624-ge926949dab03 #9 PREEMPT + Hardware name: SKOV IMX8MP CPU revC - bd500 (DT) + Workqueue: usb_hub_wq hub_event + pstate: 60000005 (nZCv daif -PAN -UAO -TCO -DIT -SSBS BTYPE=--) + pc : __netif_napi_del_locked+0x2b4/0x350 + lr : __netif_napi_del_locked+0x7c/0x350 + sp : ffffffc085b673c0 + x29: ffffffc085b673c0 x28: ffffff800b7f2000 x27: ffffff800b7f20d8 + x26: ffffff80110bcf58 x25: ffffff80110bd978 x24: 1ffffff0022179eb + x23: ffffff80110bc000 x22: ffffff800b7f5000 x21: ffffff80110bc000 + x20: ffffff80110bcf38 x19: ffffff80110bcf28 x18: dfffffc000000000 + x17: ffffffc081578940 x16: ffffffc08284cee0 x15: 0000000000000028 + x14: 0000000000000006 x13: 0000000000040000 x12: ffffffb0022179e8 + x11: 1ffffff0022179e7 x10: ffffffb0022179e7 x9 : dfffffc000000000 + x8 : 0000004ffdde8619 x7 : ffffff80110bcf3f x6 : 0000000000000001 + x5 : ffffff80110bcf38 x4 : ffffff80110bcf38 x3 : 0000000000000000 + x2 : 0000000000000000 x1 : 1ffffff0022179e7 x0 : 0000000000000000 + Call trace: + __netif_napi_del_locked+0x2b4/0x350 (P) + lan78xx_disconnect+0xf4/0x360 + usb_unbind_interface+0x158/0x718 + device_remove+0x100/0x150 + device_release_driver_internal+0x308/0x478 + device_release_driver+0x1c/0x30 + bus_remove_device+0x1a8/0x368 + device_del+0x2e0/0x7b0 + usb_disable_device+0x244/0x540 + usb_disconnect+0x220/0x758 + hub_event+0x105c/0x35e0 + process_one_work+0x760/0x17b0 + worker_thread+0x768/0xce8 + kthread+0x3bc/0x690 + ret_from_fork+0x10/0x20 + irq event stamp: 211604 + hardirqs last enabled at (211603): [] _raw_spin_unlock_irqrestore+0x84/0x98 + hardirqs last disabled at (211604): [] el1_dbg+0x24/0x80 + softirqs last enabled at (211296): [] handle_softirqs+0x820/0xbc8 + softirqs last disabled at (210993): [] __do_softirq+0x18/0x20 + ---[ end trace 0000000000000000 ]--- + lan78xx 1-1:1.0 enu1: failed to kill vid 0081/0 + +Fixes: ec4c7e12396b ("lan78xx: Introduce NAPI polling support") +Suggested-by: Jakub Kicinski +Signed-off-by: Oleksij Rempel +Link: https://patch.msgid.link/20250627051346.276029-1-o.rempel@pengutronix.de +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + drivers/net/usb/lan78xx.c | 2 -- + 1 file changed, 2 deletions(-) + +diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c +index 531b1b6a37d19..2f8637224b69e 100644 +--- a/drivers/net/usb/lan78xx.c ++++ b/drivers/net/usb/lan78xx.c +@@ -4229,8 +4229,6 @@ static void lan78xx_disconnect(struct usb_interface *intf) + if (!dev) + return; + +- netif_napi_del(&dev->napi); +- + udev = interface_to_usbdev(intf); + net = dev->net; + +-- +2.39.5 + diff --git a/queue-6.12/netfs-fix-i_size-updating.patch b/queue-6.12/netfs-fix-i_size-updating.patch new file mode 100644 index 0000000000..8079ad1b0e --- /dev/null +++ b/queue-6.12/netfs-fix-i_size-updating.patch @@ -0,0 +1,86 @@ +From 1104e8f81cb6c2f30d204e825210bdd66a7b7882 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 17:38:45 +0100 +Subject: netfs: Fix i_size updating + +From: David Howells + +[ Upstream commit 2e0658940d90a3dc130bb3b7f75bae9f4100e01f ] + +Fix the updating of i_size, particularly in regard to the completion of DIO +writes and especially async DIO writes by using a lock. 
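+
+The update now happens as a grow-only check under inode->i_lock,
+roughly:
+
+	spin_lock(&inode->i_lock);
+	if (end > i_size_read(inode))
+		i_size_write(inode, end);	/* only ever grows i_size */
+	spin_unlock(&inode->i_lock);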
+ +The bug is triggered occasionally by the generic/207 xfstest as it chucks a +bunch of AIO DIO writes at the filesystem and then checks that fstat() +returns a reasonable st_size as each completes. + +The problem is that netfs is trying to do "if new_size > inode->i_size, +update inode->i_size" sort of thing but without a lock around it. + +This can be seen with cifs, but shouldn't be seen with kafs because kafs +serialises modification ops on the client whereas cifs sends the requests +to the server as they're generated and lets the server order them. + +Fixes: 153a9961b551 ("netfs: Implement unbuffered/DIO write support") +Signed-off-by: David Howells +Link: https://lore.kernel.org/20250701163852.2171681-11-dhowells@redhat.com +Reviewed-by: Paulo Alcantara (Red Hat) +cc: Steve French +cc: Paulo Alcantara +cc: linux-cifs@vger.kernel.org +cc: netfs@lists.linux.dev +cc: linux-fsdevel@vger.kernel.org +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/netfs/buffered_write.c | 2 ++ + fs/netfs/direct_write.c | 8 ++++++-- + 2 files changed, 8 insertions(+), 2 deletions(-) + +diff --git a/fs/netfs/buffered_write.c b/fs/netfs/buffered_write.c +index b3910dfcb56d3..896d1d4219ed9 100644 +--- a/fs/netfs/buffered_write.c ++++ b/fs/netfs/buffered_write.c +@@ -64,6 +64,7 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode, + return; + } + ++ spin_lock(&inode->i_lock); + i_size_write(inode, pos); + #if IS_ENABLED(CONFIG_FSCACHE) + fscache_update_cookie(ctx->cache, NULL, &pos); +@@ -77,6 +78,7 @@ static void netfs_update_i_size(struct netfs_inode *ctx, struct inode *inode, + DIV_ROUND_UP(pos, SECTOR_SIZE), + inode->i_blocks + add); + } ++ spin_unlock(&inode->i_lock); + } + + /** +diff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c +index 26cf9c94deebb..8fbfaf71c154c 100644 +--- a/fs/netfs/direct_write.c ++++ b/fs/netfs/direct_write.c +@@ -14,13 +14,17 @@ static void netfs_cleanup_dio_write(struct netfs_io_request *wreq) + struct inode *inode = wreq->inode; + unsigned long long end = wreq->start + wreq->transferred; + +- if (!wreq->error && +- i_size_read(inode) < end) { ++ if (wreq->error || end <= i_size_read(inode)) ++ return; ++ ++ spin_lock(&inode->i_lock); ++ if (end > i_size_read(inode)) { + if (wreq->netfs_ops->update_i_size) + wreq->netfs_ops->update_i_size(inode, end); + else + i_size_write(inode, end); + } ++ spin_unlock(&inode->i_lock); + } + + /* +-- +2.39.5 + diff --git a/queue-6.12/netfs-fix-oops-in-write-retry-from-mis-resetting-the.patch b/queue-6.12/netfs-fix-oops-in-write-retry-from-mis-resetting-the.patch new file mode 100644 index 0000000000..e4a1c0ba6f --- /dev/null +++ b/queue-6.12/netfs-fix-oops-in-write-retry-from-mis-resetting-the.patch @@ -0,0 +1,80 @@ +From a43d71044c9cd7bf8a4824bfa8783499f8780914 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Jul 2025 18:25:56 -0400 +Subject: netfs: Fix oops in write-retry from mis-resetting the subreq iterator + +From: David Howells + +[ Upstream commit 4481f7f2b3df123ec77e828c849138f75cff2bf2 ] + +Fix the resetting of the subrequest iterator in netfs_retry_write_stream() +to use the iterator-reset function as the iterator may have been shortened +by a previous retry. In such a case, the amount of data to be written by +the subrequest is not "subreq->len" but "subreq->len - +subreq->transferred". 
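+
+That is, once a subrequest has partially completed, the bytes still to
+be written are:
+
+	remaining = subreq->len - subreq->transferred;
+
+and the retry path must rewind the iterator by that amount, not by the
+full subreq->len.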
+ +Without this, KASAN may see an error in iov_iter_revert(): + + BUG: KASAN: slab-out-of-bounds in iov_iter_revert lib/iov_iter.c:633 [inline] + BUG: KASAN: slab-out-of-bounds in iov_iter_revert+0x443/0x5a0 lib/iov_iter.c:611 + Read of size 4 at addr ffff88802912a0b8 by task kworker/u32:7/1147 + + CPU: 1 UID: 0 PID: 1147 Comm: kworker/u32:7 Not tainted 6.15.0-rc6-syzkaller-00052-g9f35e33144ae #0 PREEMPT(full) + Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2~bpo12+1 04/01/2014 + Workqueue: events_unbound netfs_write_collection_worker + Call Trace: + + __dump_stack lib/dump_stack.c:94 [inline] + dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120 + print_address_description mm/kasan/report.c:408 [inline] + print_report+0xc3/0x670 mm/kasan/report.c:521 + kasan_report+0xe0/0x110 mm/kasan/report.c:634 + iov_iter_revert lib/iov_iter.c:633 [inline] + iov_iter_revert+0x443/0x5a0 lib/iov_iter.c:611 + netfs_retry_write_stream fs/netfs/write_retry.c:44 [inline] + netfs_retry_writes+0x166d/0x1a50 fs/netfs/write_retry.c:231 + netfs_collect_write_results fs/netfs/write_collect.c:352 [inline] + netfs_write_collection_worker+0x23fd/0x3830 fs/netfs/write_collect.c:374 + process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3238 + process_scheduled_works kernel/workqueue.c:3319 [inline] + worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400 + kthread+0x3c2/0x780 kernel/kthread.c:464 + ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153 + ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245 + + +Fixes: cd0277ed0c18 ("netfs: Use new folio_queue data type and iterator instead of xarray iter") +Reported-by: syzbot+25b83a6f2c702075fcbc@syzkaller.appspotmail.com +Closes: https://syzkaller.appspot.com/bug?extid=25b83a6f2c702075fcbc +Signed-off-by: David Howells +Link: https://lore.kernel.org/20250519090707.2848510-2-dhowells@redhat.com +Tested-by: syzbot+25b83a6f2c702075fcbc@syzkaller.appspotmail.com +cc: Paulo Alcantara +cc: netfs@lists.linux.dev +cc: linux-fsdevel@vger.kernel.org +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/netfs/write_collect.c | 5 +++-- + 1 file changed, 3 insertions(+), 2 deletions(-) + +diff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c +index 412d4da742270..7cb21da40a0a4 100644 +--- a/fs/netfs/write_collect.c ++++ b/fs/netfs/write_collect.c +@@ -176,9 +176,10 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq, + if (test_bit(NETFS_SREQ_FAILED, &subreq->flags)) + break; + if (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) { +- struct iov_iter source = subreq->io_iter; ++ struct iov_iter source; + +- iov_iter_revert(&source, subreq->len - source.count); ++ netfs_reset_iter(subreq); ++ source = subreq->io_iter; + __set_bit(NETFS_SREQ_RETRYING, &subreq->flags); + netfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit); + netfs_reissue_write(stream, subreq, &source); +-- +2.39.5 + diff --git a/queue-6.12/nfs-clean-up-proc-net-rpc-nfs-when-nfs_fs_proc_net_i.patch b/queue-6.12/nfs-clean-up-proc-net-rpc-nfs-when-nfs_fs_proc_net_i.patch new file mode 100644 index 0000000000..e4a7c119c9 --- /dev/null +++ b/queue-6.12/nfs-clean-up-proc-net-rpc-nfs-when-nfs_fs_proc_net_i.patch @@ -0,0 +1,139 @@ +From d5114aa50f8d8b50bb7be4c904fdb45c15effeff Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 12 Jun 2025 14:52:50 -0700 +Subject: nfs: Clean up /proc/net/rpc/nfs when nfs_fs_proc_net_init() fails. 
+ +From: Kuniyuki Iwashima + +[ Upstream commit e8d6f3ab59468e230f3253efe5cb63efa35289f7 ] + +syzbot reported a warning below [1] following a fault injection in +nfs_fs_proc_net_init(). [0] + +When nfs_fs_proc_net_init() fails, /proc/net/rpc/nfs is not removed. + +Later, rpc_proc_exit() tries to remove /proc/net/rpc, and the warning +is logged as the directory is not empty. + +Let's handle the error of nfs_fs_proc_net_init() properly. + +[0]: +FAULT_INJECTION: forcing a failure. +name failslab, interval 1, probability 0, space 0, times 0 +CPU: 1 UID: 0 PID: 6120 Comm: syz.2.27 Not tainted 6.16.0-rc1-syzkaller-00010-g2c4a1f3fe03e #0 PREEMPT(full) +Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 +Call Trace: + + dump_stack_lvl (lib/dump_stack.c:123) + should_fail_ex (lib/fault-inject.c:73 lib/fault-inject.c:174) + should_failslab (mm/failslab.c:46) + kmem_cache_alloc_noprof (mm/slub.c:4178 mm/slub.c:4204) + __proc_create (fs/proc/generic.c:427) + proc_create_reg (fs/proc/generic.c:554) + proc_create_net_data (fs/proc/proc_net.c:120) + nfs_fs_proc_net_init (fs/nfs/client.c:1409) + nfs_net_init (fs/nfs/inode.c:2600) + ops_init (net/core/net_namespace.c:138) + setup_net (net/core/net_namespace.c:443) + copy_net_ns (net/core/net_namespace.c:576) + create_new_namespaces (kernel/nsproxy.c:110) + unshare_nsproxy_namespaces (kernel/nsproxy.c:218 (discriminator 4)) + ksys_unshare (kernel/fork.c:3123) + __x64_sys_unshare (kernel/fork.c:3190) + do_syscall_64 (arch/x86/entry/syscall_64.c:63 arch/x86/entry/syscall_64.c:94) + entry_SYSCALL_64_after_hwframe (arch/x86/entry/entry_64.S:130) + + +[1]: +remove_proc_entry: removing non-empty directory 'net/rpc', leaking at least 'nfs' + WARNING: CPU: 1 PID: 6120 at fs/proc/generic.c:727 remove_proc_entry+0x45e/0x530 fs/proc/generic.c:727 +Modules linked in: +CPU: 1 UID: 0 PID: 6120 Comm: syz.2.27 Not tainted 6.16.0-rc1-syzkaller-00010-g2c4a1f3fe03e #0 PREEMPT(full) +Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025 + RIP: 0010:remove_proc_entry+0x45e/0x530 fs/proc/generic.c:727 +Code: 3c 02 00 0f 85 85 00 00 00 48 8b 93 d8 00 00 00 4d 89 f0 4c 89 e9 48 c7 c6 40 ba a2 8b 48 c7 c7 60 b9 a2 8b e8 33 81 1d ff 90 <0f> 0b 90 90 e9 5f fe ff ff e8 04 69 5e ff 90 48 b8 00 00 00 00 00 +RSP: 0018:ffffc90003637b08 EFLAGS: 00010282 +RAX: 0000000000000000 RBX: ffff88805f534140 RCX: ffffffff817a92c8 +RDX: ffff88807da99e00 RSI: ffffffff817a92d5 RDI: 0000000000000001 +RBP: ffff888033431ac0 R08: 0000000000000001 R09: 0000000000000000 +R10: 0000000000000001 R11: 0000000000000001 R12: ffff888033431a00 +R13: ffff888033431ae4 R14: ffff888033184724 R15: dffffc0000000000 +FS: 0000555580328500(0000) GS:ffff888124a62000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 00007f71733743e0 CR3: 000000007f618000 CR4: 00000000003526f0 +DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 +DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 +Call Trace: + + sunrpc_exit_net+0x46/0x90 net/sunrpc/sunrpc_syms.c:76 + ops_exit_list net/core/net_namespace.c:200 [inline] + ops_undo_list+0x2eb/0xab0 net/core/net_namespace.c:253 + setup_net+0x2e1/0x510 net/core/net_namespace.c:457 + copy_net_ns+0x2a6/0x5f0 net/core/net_namespace.c:574 + create_new_namespaces+0x3ea/0xa90 kernel/nsproxy.c:110 + unshare_nsproxy_namespaces+0xc0/0x1f0 kernel/nsproxy.c:218 + ksys_unshare+0x45b/0xa40 kernel/fork.c:3121 + __do_sys_unshare kernel/fork.c:3192 [inline] + __se_sys_unshare 
kernel/fork.c:3190 [inline] + __x64_sys_unshare+0x31/0x40 kernel/fork.c:3190 + do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline] + do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94 + entry_SYSCALL_64_after_hwframe+0x77/0x7f +RIP: 0033:0x7fa1a6b8e929 +Code: ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 a8 ff ff ff f7 d8 64 89 01 48 +RSP: 002b:00007fff3a090368 EFLAGS: 00000246 ORIG_RAX: 0000000000000110 +RAX: ffffffffffffffda RBX: 00007fa1a6db5fa0 RCX: 00007fa1a6b8e929 +RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040000080 +RBP: 00007fa1a6c10b39 R08: 0000000000000000 R09: 0000000000000000 +R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000 +R13: 00007fa1a6db5fa0 R14: 00007fa1a6db5fa0 R15: 0000000000000001 + + +Fixes: d47151b79e32 ("nfs: expose /proc/net/sunrpc/nfs in net namespaces") +Reported-by: syzbot+a4cc4ac22daa4a71b87c@syzkaller.appspotmail.com +Closes: https://syzkaller.appspot.com/bug?extid=a4cc4ac22daa4a71b87c +Tested-by: syzbot+a4cc4ac22daa4a71b87c@syzkaller.appspotmail.com +Signed-off-by: Kuniyuki Iwashima +Signed-off-by: Anna Schumaker +Signed-off-by: Sasha Levin +--- + fs/nfs/inode.c | 17 ++++++++++++++--- + 1 file changed, 14 insertions(+), 3 deletions(-) + +diff --git a/fs/nfs/inode.c b/fs/nfs/inode.c +index 16607b24ab9c1..8827cb00f86d5 100644 +--- a/fs/nfs/inode.c ++++ b/fs/nfs/inode.c +@@ -2586,15 +2586,26 @@ EXPORT_SYMBOL_GPL(nfs_net_id); + static int nfs_net_init(struct net *net) + { + struct nfs_net *nn = net_generic(net, nfs_net_id); ++ int err; + + nfs_clients_init(net); + + if (!rpc_proc_register(net, &nn->rpcstats)) { +- nfs_clients_exit(net); +- return -ENOMEM; ++ err = -ENOMEM; ++ goto err_proc_rpc; + } + +- return nfs_fs_proc_net_init(net); ++ err = nfs_fs_proc_net_init(net); ++ if (err) ++ goto err_proc_nfs; ++ ++ return 0; ++ ++err_proc_nfs: ++ rpc_proc_unregister(net, "nfs"); ++err_proc_rpc: ++ nfs_clients_exit(net); ++ return err; + } + + static void nfs_net_exit(struct net *net) +-- +2.39.5 + diff --git a/queue-6.12/nfsv4-pnfs-fix-a-race-to-wake-on-nfs_layout_drain.patch b/queue-6.12/nfsv4-pnfs-fix-a-race-to-wake-on-nfs_layout_drain.patch new file mode 100644 index 0000000000..35a6ad9b70 --- /dev/null +++ b/queue-6.12/nfsv4-pnfs-fix-a-race-to-wake-on-nfs_layout_drain.patch @@ -0,0 +1,45 @@ +From 6cd5c7b9110e8e1b3db9f7e15173afb4716d6ac0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Jun 2025 11:02:21 -0400 +Subject: NFSv4/pNFS: Fix a race to wake on NFS_LAYOUT_DRAIN + +From: Benjamin Coddington + +[ Upstream commit c01776287414ca43412d1319d2877cbad65444ac ] + +We found a few different systems hung up in writeback waiting on the same +page lock, and one task waiting on the NFS_LAYOUT_DRAIN bit in +pnfs_update_layout(), however the pnfs_layout_hdr's plh_outstanding count +was zero. + +It seems most likely that this is another race between the waiter and waker +similar to commit ed0172af5d6f ("SUNRPC: Fix a race to wake a sync task"). +Fix it up by applying the advised barrier. 
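+
+This is the wake-on-bit ordering documented for wake_up_bit(): the flag
+update must be ordered before the waitqueue check inside the wakeup.
+In sketch form:
+
+	clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags);
+	smp_mb__after_atomic();	/* order the clear before waitqueue_active() */
+	wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN);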
+ +Fixes: 880265c77ac4 ("pNFS: Avoid a live lock condition in pnfs_update_layout()") +Signed-off-by: Benjamin Coddington +Signed-off-by: Anna Schumaker +Signed-off-by: Sasha Levin +--- + fs/nfs/pnfs.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/fs/nfs/pnfs.c b/fs/nfs/pnfs.c +index 683e09be25adf..6b888e9ff394a 100644 +--- a/fs/nfs/pnfs.c ++++ b/fs/nfs/pnfs.c +@@ -2051,8 +2051,10 @@ static void nfs_layoutget_begin(struct pnfs_layout_hdr *lo) + static void nfs_layoutget_end(struct pnfs_layout_hdr *lo) + { + if (atomic_dec_and_test(&lo->plh_outstanding) && +- test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) ++ test_and_clear_bit(NFS_LAYOUT_DRAIN, &lo->plh_flags)) { ++ smp_mb__after_atomic(); + wake_up_bit(&lo->plh_flags, NFS_LAYOUT_DRAIN); ++ } + } + + static bool pnfs_is_first_layoutget(struct pnfs_layout_hdr *lo) +-- +2.39.5 + diff --git a/queue-6.12/nui-fix-dma_mapping_error-check.patch b/queue-6.12/nui-fix-dma_mapping_error-check.patch new file mode 100644 index 0000000000..fc87817b87 --- /dev/null +++ b/queue-6.12/nui-fix-dma_mapping_error-check.patch @@ -0,0 +1,143 @@ +From ddd3b872b52da22397cbcf745c8b4190127a5cf7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Jun 2025 10:36:43 +0200 +Subject: nui: Fix dma_mapping_error() check + +From: Thomas Fourier + +[ Upstream commit 561aa0e22b70a5e7246b73d62a824b3aef3fc375 ] + +dma_map_XXX() functions return values DMA_MAPPING_ERROR as error values +which is often ~0. The error value should be tested with +dma_mapping_error(). + +This patch creates a new function in niu_ops to test if the mapping +failed. The test is fixed in niu_rbr_add_page(), added in +niu_start_xmit() and the successfully mapped pages are unmaped upon error. + +Fixes: ec2deec1f352 ("niu: Fix to check for dma mapping errors.") +Signed-off-by: Thomas Fourier +Reviewed-by: Simon Horman +Signed-off-by: David S. 
Miller +Signed-off-by: Sasha Levin +--- + drivers/net/ethernet/sun/niu.c | 31 ++++++++++++++++++++++++++++++- + drivers/net/ethernet/sun/niu.h | 4 ++++ + 2 files changed, 34 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c +index f5449b73b9a76..1e4cf89bd79ad 100644 +--- a/drivers/net/ethernet/sun/niu.c ++++ b/drivers/net/ethernet/sun/niu.c +@@ -3336,7 +3336,7 @@ static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp, + + addr = np->ops->map_page(np->device, page, 0, + PAGE_SIZE, DMA_FROM_DEVICE); +- if (!addr) { ++ if (np->ops->mapping_error(np->device, addr)) { + __free_page(page); + return -ENOMEM; + } +@@ -6672,6 +6672,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb, + len = skb_headlen(skb); + mapping = np->ops->map_single(np->device, skb->data, + len, DMA_TO_DEVICE); ++ if (np->ops->mapping_error(np->device, mapping)) ++ goto out_drop; + + prod = rp->prod; + +@@ -6713,6 +6715,8 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb, + mapping = np->ops->map_page(np->device, skb_frag_page(frag), + skb_frag_off(frag), len, + DMA_TO_DEVICE); ++ if (np->ops->mapping_error(np->device, mapping)) ++ goto out_unmap; + + rp->tx_buffs[prod].skb = NULL; + rp->tx_buffs[prod].mapping = mapping; +@@ -6737,6 +6741,19 @@ static netdev_tx_t niu_start_xmit(struct sk_buff *skb, + out: + return NETDEV_TX_OK; + ++out_unmap: ++ while (i--) { ++ const skb_frag_t *frag; ++ ++ prod = PREVIOUS_TX(rp, prod); ++ frag = &skb_shinfo(skb)->frags[i]; ++ np->ops->unmap_page(np->device, rp->tx_buffs[prod].mapping, ++ skb_frag_size(frag), DMA_TO_DEVICE); ++ } ++ ++ np->ops->unmap_single(np->device, rp->tx_buffs[rp->prod].mapping, ++ skb_headlen(skb), DMA_TO_DEVICE); ++ + out_drop: + rp->tx_errors++; + kfree_skb(skb); +@@ -9638,6 +9655,11 @@ static void niu_pci_unmap_single(struct device *dev, u64 dma_address, + dma_unmap_single(dev, dma_address, size, direction); + } + ++static int niu_pci_mapping_error(struct device *dev, u64 addr) ++{ ++ return dma_mapping_error(dev, addr); ++} ++ + static const struct niu_ops niu_pci_ops = { + .alloc_coherent = niu_pci_alloc_coherent, + .free_coherent = niu_pci_free_coherent, +@@ -9645,6 +9667,7 @@ static const struct niu_ops niu_pci_ops = { + .unmap_page = niu_pci_unmap_page, + .map_single = niu_pci_map_single, + .unmap_single = niu_pci_unmap_single, ++ .mapping_error = niu_pci_mapping_error, + }; + + static void niu_driver_version(void) +@@ -10011,6 +10034,11 @@ static void niu_phys_unmap_single(struct device *dev, u64 dma_address, + /* Nothing to do. */ + } + ++static int niu_phys_mapping_error(struct device *dev, u64 dma_address) ++{ ++ return false; ++} ++ + static const struct niu_ops niu_phys_ops = { + .alloc_coherent = niu_phys_alloc_coherent, + .free_coherent = niu_phys_free_coherent, +@@ -10018,6 +10046,7 @@ static const struct niu_ops niu_phys_ops = { + .unmap_page = niu_phys_unmap_page, + .map_single = niu_phys_map_single, + .unmap_single = niu_phys_unmap_single, ++ .mapping_error = niu_phys_mapping_error, + }; + + static int niu_of_probe(struct platform_device *op) +diff --git a/drivers/net/ethernet/sun/niu.h b/drivers/net/ethernet/sun/niu.h +index 04c215f91fc08..0b169c08b0f2d 100644 +--- a/drivers/net/ethernet/sun/niu.h ++++ b/drivers/net/ethernet/sun/niu.h +@@ -2879,6 +2879,9 @@ struct tx_ring_info { + #define NEXT_TX(tp, index) \ + (((index) + 1) < (tp)->pending ? ((index) + 1) : 0) + ++#define PREVIOUS_TX(tp, index) \ ++ (((index) - 1) >= 0 ? 
((index) - 1) : (((tp)->pending) - 1)) ++ + static inline u32 niu_tx_avail(struct tx_ring_info *tp) + { + return (tp->pending - +@@ -3140,6 +3143,7 @@ struct niu_ops { + enum dma_data_direction direction); + void (*unmap_single)(struct device *dev, u64 dma_address, + size_t size, enum dma_data_direction direction); ++ int (*mapping_error)(struct device *dev, u64 dma_address); + }; + + struct niu_link_config { +-- +2.39.5 + diff --git a/queue-6.12/nvme-fix-incorrect-cdw15-value-in-passthru-error-log.patch b/queue-6.12/nvme-fix-incorrect-cdw15-value-in-passthru-error-log.patch new file mode 100644 index 0000000000..fa915959bf --- /dev/null +++ b/queue-6.12/nvme-fix-incorrect-cdw15-value-in-passthru-error-log.patch @@ -0,0 +1,38 @@ +From eb8e669d512a2a94848594a9607a0817a854c554 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 28 Jun 2025 11:12:32 -0700 +Subject: nvme: Fix incorrect cdw15 value in passthru error logging + +From: Alok Tiwari + +[ Upstream commit 2e96d2d8c2a7a6c2cef45593c028d9c5ef180316 ] + +Fix an error in nvme_log_err_passthru() where cdw14 was incorrectly +printed twice instead of cdw15. This fix ensures accurate logging of +the full passthrough command payload. + +Fixes: 9f079dda1433 ("nvme: allow passthru cmd error logging") +Signed-off-by: Alok Tiwari +Reviewed-by: Martin K. Petersen +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/host/core.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c +index eca764fede48f..abd42598fc78b 100644 +--- a/drivers/nvme/host/core.c ++++ b/drivers/nvme/host/core.c +@@ -380,7 +380,7 @@ static void nvme_log_err_passthru(struct request *req) + nr->cmd->common.cdw12, + nr->cmd->common.cdw13, + nr->cmd->common.cdw14, +- nr->cmd->common.cdw14); ++ nr->cmd->common.cdw15); + } + + enum nvme_disposition { +-- +2.39.5 + diff --git a/queue-6.12/nvmet-fix-memory-leak-of-bio-integrity.patch b/queue-6.12/nvmet-fix-memory-leak-of-bio-integrity.patch new file mode 100644 index 0000000000..efc12f2134 --- /dev/null +++ b/queue-6.12/nvmet-fix-memory-leak-of-bio-integrity.patch @@ -0,0 +1,42 @@ +From 28219327386e1aeeb5f0d7a44df5952cdb6ad526 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 14:45:33 +0300 +Subject: nvmet: fix memory leak of bio integrity + +From: Dmitry Bogdanov + +[ Upstream commit 190f4c2c863af7cc5bb354b70e0805f06419c038 ] + +If nvmet receives commands with metadata there is a continuous memory +leak of kmalloc-128 slab or more precisely bio->bi_integrity. + +Since commit bf4c89fc8797 ("block: don't call bio_uninit from bio_endio") +each user of bio_init has to use bio_uninit as well. Otherwise the bio +integrity is not getting free. Nvmet uses bio_init for inline bios. + +Uninit the inline bio to complete deallocation of integrity in bio. 
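+
+Since the cited commit, a bio set up with bio_init() must be paired
+with bio_uninit() by its owner; bio_put() handles this only for bios
+allocated from a bio_set. A minimal sketch of the pairing for an
+on-stack/inline bio (hypothetical usage, not the target's code):
+
+	struct bio bio;
+
+	bio_init(&bio, bdev, NULL, 0, REQ_OP_READ);
+	/* ... fill, submit, wait ... */
+	bio_uninit(&bio);	/* releases bi_integrity and friends */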
+ +Fixes: bf4c89fc8797 ("block: don't call bio_uninit from bio_endio") +Signed-off-by: Dmitry Bogdanov +Signed-off-by: Christoph Hellwig +Signed-off-by: Sasha Levin +--- + drivers/nvme/target/nvmet.h | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/drivers/nvme/target/nvmet.h b/drivers/nvme/target/nvmet.h +index 190f55e6d7532..3062562c096a1 100644 +--- a/drivers/nvme/target/nvmet.h ++++ b/drivers/nvme/target/nvmet.h +@@ -714,6 +714,8 @@ static inline void nvmet_req_bio_put(struct nvmet_req *req, struct bio *bio) + { + if (bio != &req->b.inline_bio) + bio_put(bio); ++ else ++ bio_uninit(bio); + } + + #ifdef CONFIG_NVME_TARGET_AUTH +-- +2.39.5 + diff --git a/queue-6.12/platform-mellanox-mlxbf-pmc-fix-duplicate-event-id-f.patch b/queue-6.12/platform-mellanox-mlxbf-pmc-fix-duplicate-event-id-f.patch new file mode 100644 index 0000000000..2c14046c36 --- /dev/null +++ b/queue-6.12/platform-mellanox-mlxbf-pmc-fix-duplicate-event-id-f.patch @@ -0,0 +1,44 @@ +From adea7d00eabc3858c9090bc99b19c666a10a3950 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 23:05:00 -0700 +Subject: platform/mellanox: mlxbf-pmc: Fix duplicate event ID for CACHE_DATA1 +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Alok Tiwari + +[ Upstream commit 173bbec6693f3f3f00dac144f3aa0cd62fb60d33 ] + +same ID (103) was assigned to both GDC_BANK0_G_RSE_PIPE_CACHE_DATA0 +and GDC_BANK0_G_RSE_PIPE_CACHE_DATA1. This could lead to incorrect +event mapping. +Updated the ID to 104 to ensure uniqueness. + +Fixes: 423c3361855c ("platform/mellanox: mlxbf-pmc: Add support for BlueField-3") +Signed-off-by: Alok Tiwari +Reviewed-by: David Thompson +Link: https://lore.kernel.org/r/20250619060502.3594350-1-alok.a.tiwari@oracle.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/mellanox/mlxbf-pmc.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/platform/mellanox/mlxbf-pmc.c b/drivers/platform/mellanox/mlxbf-pmc.c +index 9ff7b487dc489..fbb8128d19de4 100644 +--- a/drivers/platform/mellanox/mlxbf-pmc.c ++++ b/drivers/platform/mellanox/mlxbf-pmc.c +@@ -710,7 +710,7 @@ static const struct mlxbf_pmc_events mlxbf_pmc_llt_events[] = { + {101, "GDC_BANK0_HIT_DCL_PARTIAL"}, + {102, "GDC_BANK0_EVICT_DCL"}, + {103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA0"}, +- {103, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"}, ++ {104, "GDC_BANK0_G_RSE_PIPE_CACHE_DATA1"}, + {105, "GDC_BANK0_ARB_STRB"}, + {106, "GDC_BANK0_ARB_WAIT"}, + {107, "GDC_BANK0_GGA_STRB"}, +-- +2.39.5 + diff --git a/queue-6.12/platform-mellanox-mlxbf-tmfifo-fix-vring_desc.len-as.patch b/queue-6.12/platform-mellanox-mlxbf-tmfifo-fix-vring_desc.len-as.patch new file mode 100644 index 0000000000..9c08193ec7 --- /dev/null +++ b/queue-6.12/platform-mellanox-mlxbf-tmfifo-fix-vring_desc.len-as.patch @@ -0,0 +1,46 @@ +From 7d1b555ce985b2f2324a4cddc567d4c2646af50a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 13 Jun 2025 21:46:08 +0000 +Subject: platform/mellanox: mlxbf-tmfifo: fix vring_desc.len assignment +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: David Thompson + +[ Upstream commit 109f4d29dade8ae5b4ac6325af9d1bc24b4230f8 ] + +Fix warnings reported by sparse, related to incorrect type: +drivers/platform/mellanox/mlxbf-tmfifo.c:284:38: warning: incorrect type in assignment (different base types) +drivers/platform/mellanox/mlxbf-tmfifo.c:284:38: expected restricted __virtio32 
[usertype] len +drivers/platform/mellanox/mlxbf-tmfifo.c:284:38: got unsigned long + +Reported-by: kernel test robot +Closes: https://lore.kernel.org/oe-kbuild-all/202404040339.S7CUIgf3-lkp@intel.com/ +Fixes: 78034cbece79 ("platform/mellanox: mlxbf-tmfifo: Drop the Rx packet if no more descriptors") +Signed-off-by: David Thompson +Link: https://lore.kernel.org/r/20250613214608.2250130-1-davthompson@nvidia.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/mellanox/mlxbf-tmfifo.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c +index 6c834e39352d6..d2c27cc0733bb 100644 +--- a/drivers/platform/mellanox/mlxbf-tmfifo.c ++++ b/drivers/platform/mellanox/mlxbf-tmfifo.c +@@ -281,7 +281,8 @@ static int mlxbf_tmfifo_alloc_vrings(struct mlxbf_tmfifo *fifo, + vring->align = SMP_CACHE_BYTES; + vring->index = i; + vring->vdev_id = tm_vdev->vdev.id.device; +- vring->drop_desc.len = VRING_DROP_DESC_MAX_LEN; ++ vring->drop_desc.len = cpu_to_virtio32(&tm_vdev->vdev, ++ VRING_DROP_DESC_MAX_LEN); + dev = &tm_vdev->vdev.dev; + + size = vring_size(vring->num, vring->align); +-- +2.39.5 + diff --git a/queue-6.12/platform-mellanox-mlxreg-lc-fix-logic-error-in-power.patch b/queue-6.12/platform-mellanox-mlxreg-lc-fix-logic-error-in-power.patch new file mode 100644 index 0000000000..9f7fdc872b --- /dev/null +++ b/queue-6.12/platform-mellanox-mlxreg-lc-fix-logic-error-in-power.patch @@ -0,0 +1,51 @@ +From 83d89d5589556f1d9a7285a915fd3b7fd9b28ca7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Jun 2025 03:58:08 -0700 +Subject: platform/mellanox: mlxreg-lc: Fix logic error in power state check +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Alok Tiwari + +[ Upstream commit 644bec18e705ca41d444053407419a21832fcb2f ] + +Fixes a logic issue in mlxreg_lc_completion_notify() where the +intention was to check if MLXREG_LC_POWERED flag is not set before +powering on the device. + +The original code used "state & ~MLXREG_LC_POWERED" to check for the +absence of the POWERED bit. However this condition evaluates to true +even when other bits are set, leading to potentially incorrect +behavior. + +Corrected the logic to explicitly check for the absence of +MLXREG_LC_POWERED using !(state & MLXREG_LC_POWERED). 
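+
+A compact illustration of the two tests (bit values are illustrative; the
+driver defines these flags via BIT()):
+
+  #include <linux/bits.h>
+  #include <linux/types.h>
+
+  #define MLXREG_LC_POWERED  BIT(1)
+  #define MLXREG_LC_SYNCED   BIT(2)
+
+  /*
+   * With state = MLXREG_LC_POWERED | MLXREG_LC_SYNCED:
+   *   state & ~MLXREG_LC_POWERED   -> nonzero: old test powers on again
+   *   !(state & MLXREG_LC_POWERED) -> false:   new test correctly skips
+   *
+   * "state & ~MLXREG_LC_POWERED" is nonzero whenever *any other* bit is
+   * set, so it can never test for the absence of the POWERED bit.
+   */
+  static bool needs_power_on(u32 state)
+  {
+          return !(state & MLXREG_LC_POWERED);
+  }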
+ +Fixes: 62f9529b8d5c ("platform/mellanox: mlxreg-lc: Add initial support for Nvidia line card devices") +Suggested-by: Vadim Pasternak +Signed-off-by: Alok Tiwari +Link: https://lore.kernel.org/r/20250630105812.601014-1-alok.a.tiwari@oracle.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/mellanox/mlxreg-lc.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/platform/mellanox/mlxreg-lc.c b/drivers/platform/mellanox/mlxreg-lc.c +index 43d119e3a4734..99152676dbd28 100644 +--- a/drivers/platform/mellanox/mlxreg-lc.c ++++ b/drivers/platform/mellanox/mlxreg-lc.c +@@ -688,7 +688,7 @@ static int mlxreg_lc_completion_notify(void *handle, struct i2c_adapter *parent, + if (regval & mlxreg_lc->data->mask) { + mlxreg_lc->state |= MLXREG_LC_SYNCED; + mlxreg_lc_state_update_locked(mlxreg_lc, MLXREG_LC_SYNCED, 1); +- if (mlxreg_lc->state & ~MLXREG_LC_POWERED) { ++ if (!(mlxreg_lc->state & MLXREG_LC_POWERED)) { + err = mlxreg_lc_power_on_off(mlxreg_lc, 1); + if (err) + goto mlxreg_lc_regmap_power_on_off_fail; +-- +2.39.5 + diff --git a/queue-6.12/platform-mellanox-nvsw-sn2201-fix-bus-number-in-adap.patch b/queue-6.12/platform-mellanox-nvsw-sn2201-fix-bus-number-in-adap.patch new file mode 100644 index 0000000000..7a757ac637 --- /dev/null +++ b/queue-6.12/platform-mellanox-nvsw-sn2201-fix-bus-number-in-adap.patch @@ -0,0 +1,42 @@ +From 693c96c7e51320820ca470e4113cadebaeb12436 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 22 Jun 2025 00:29:12 -0700 +Subject: platform/mellanox: nvsw-sn2201: Fix bus number in adapter error + message +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Alok Tiwari + +[ Upstream commit d07143b507c51c04c091081627c5a130e9d3c517 ] + +change error log to use correct bus number from main_mux_devs +instead of cpld_devs. 
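+
+A hedged sketch of one way to keep the check and the message in sync (the
+helper is hypothetical; it assumes main_mux_devs is a struct
+mlxreg_hotplug_device with the .adapter and .nr fields used in the diff
+below):
+
+  static int check_main_mux(struct nvsw_sn2201 *nv)
+  {
+          struct mlxreg_hotplug_device *mux = nv->main_mux_devs;
+
+          /* testing and reporting the same object avoids the
+           * copy-paste mismatch fixed below */
+          if (!mux->adapter) {
+                  dev_err(nv->dev, "Failed to get adapter for bus %d\n",
+                          mux->nr);
+                  return -ENODEV;
+          }
+          return 0;
+  }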
+ +Fixes: 662f24826f95 ("platform/mellanox: Add support for new SN2201 system") +Signed-off-by: Alok Tiwari +Reviewed-by: Vadim Pasternak +Link: https://lore.kernel.org/r/20250622072921.4111552-2-alok.a.tiwari@oracle.com +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/mellanox/nvsw-sn2201.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/platform/mellanox/nvsw-sn2201.c b/drivers/platform/mellanox/nvsw-sn2201.c +index abe7be602f846..e708521e52740 100644 +--- a/drivers/platform/mellanox/nvsw-sn2201.c ++++ b/drivers/platform/mellanox/nvsw-sn2201.c +@@ -1088,7 +1088,7 @@ static int nvsw_sn2201_i2c_completion_notify(void *handle, int id) + if (!nvsw_sn2201->main_mux_devs->adapter) { + err = -ENODEV; + dev_err(nvsw_sn2201->dev, "Failed to get adapter for bus %d\n", +- nvsw_sn2201->cpld_devs->nr); ++ nvsw_sn2201->main_mux_devs->nr); + goto i2c_get_adapter_main_fail; + } + +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-amd-pmc-add-pcspecialist-lafite-pro-v-1.patch b/queue-6.12/platform-x86-amd-pmc-add-pcspecialist-lafite-pro-v-1.patch new file mode 100644 index 0000000000..eaa070da28 --- /dev/null +++ b/queue-6.12/platform-x86-amd-pmc-add-pcspecialist-lafite-pro-v-1.patch @@ -0,0 +1,56 @@ +From 410327cd88d14e410b1ace23752761faa76a044f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 11 Jun 2025 15:33:40 -0500 +Subject: platform/x86/amd/pmc: Add PCSpecialist Lafite Pro V 14M to 8042 + quirks list +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Mario Limonciello + +[ Upstream commit 9ba75ccad85708c5a484637dccc1fc59295b0a83 ] + +Every other s2idle cycle fails to reach hardware sleep when keyboard +wakeup is enabled. This appears to be an EC bug, but the vendor +refuses to fix it. + +It was confirmed that turning off i8042 wakeup avoids ths issue +(albeit keyboard wakeup is disabled). Take the lesser of two evils +and add it to the i8042 quirk list. 
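+
+For context, a sketch of how such a DMI quirk table is consumed at probe
+time (the wrapper below is hypothetical; fwbug_list and
+quirk_spurious_8042 are the names used in pmc-quirks.c):
+
+  #include <linux/dmi.h>
+
+  static const struct quirk_entry *lookup_fw_quirks(void)
+  {
+          const struct dmi_system_id *id = dmi_first_match(fwbug_list);
+
+          if (!id)
+                  return NULL;    /* machine not on the list */
+
+          pr_info("applying firmware bug quirk for %s\n", id->ident);
+          return id->driver_data; /* e.g. &quirk_spurious_8042 */
+  }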
+ +Reported-by: Raoul +Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220116 +Tested-by: Raoul +Signed-off-by: Mario Limonciello +Link: https://lore.kernel.org/r/20250611203341.3733478-1-superm1@kernel.org +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/amd/pmc/pmc-quirks.c | 9 +++++++++ + 1 file changed, 9 insertions(+) + +diff --git a/drivers/platform/x86/amd/pmc/pmc-quirks.c b/drivers/platform/x86/amd/pmc/pmc-quirks.c +index 2e3f6fc67c568..7ed12c1d3b34c 100644 +--- a/drivers/platform/x86/amd/pmc/pmc-quirks.c ++++ b/drivers/platform/x86/amd/pmc/pmc-quirks.c +@@ -224,6 +224,15 @@ static const struct dmi_system_id fwbug_list[] = { + DMI_MATCH(DMI_BOARD_NAME, "WUJIE14-GX4HRXL"), + } + }, ++ /* https://bugzilla.kernel.org/show_bug.cgi?id=220116 */ ++ { ++ .ident = "PCSpecialist Lafite Pro V 14M", ++ .driver_data = &quirk_spurious_8042, ++ .matches = { ++ DMI_MATCH(DMI_SYS_VENDOR, "PCSpecialist"), ++ DMI_MATCH(DMI_PRODUCT_NAME, "Lafite Pro V 14M"), ++ } ++ }, + {} + }; + +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-dell-sysman-directly-use-firmware_attri.patch b/queue-6.12/platform-x86-dell-sysman-directly-use-firmware_attri.patch new file mode 100644 index 0000000000..3bff988a0a --- /dev/null +++ b/queue-6.12/platform-x86-dell-sysman-directly-use-firmware_attri.patch @@ -0,0 +1,83 @@ +From c8471d896c70a4dd8301527513c971d72d2a98db Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 4 Jan 2025 00:05:13 +0100 +Subject: platform/x86: dell-sysman: Directly use firmware_attributes_class +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit 501d2f0e78951b9a933bbff73404b25aec45f389 ] + +The usage of the lifecycle functions is not necessary anymore. 
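+
+The resulting consumer pattern, as a minimal sketch (my_consumer_init is
+hypothetical; the real change is in the diff below):
+
+  #include <linux/device.h>
+  #include <linux/err.h>
+  #include <linux/kdev_t.h>
+  #include "firmware_attributes_class.h"
+
+  static struct device *attr_dev;
+
+  static int __init my_consumer_init(void)
+  {
+          /* register directly against the exported class; no more
+           * fw_attributes_class_get()/fw_attributes_class_put() */
+          attr_dev = device_create(&firmware_attributes_class, NULL,
+                                   MKDEV(0, 0), NULL, "my-driver");
+          return PTR_ERR_OR_ZERO(attr_dev);
+  }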
+ +Signed-off-by: Thomas Weißschuh +Reviewed-by: Armin Wolf +Reviewed-by: Mario Limonciello +Reviewed-by: Mark Pearson +Tested-by: Mark Pearson +Link: https://lore.kernel.org/r/20250104-firmware-attributes-simplify-v1-5-949f9709e405@weissschuh.net +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Stable-dep-of: 314e5ad4782d ("platform/x86: dell-wmi-sysman: Fix class device unregistration") +Signed-off-by: Sasha Levin +--- + .../platform/x86/dell/dell-wmi-sysman/sysman.c | 17 ++++------------- + 1 file changed, 4 insertions(+), 13 deletions(-) + +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +index decb3b997d86a..3c74d5e8350a4 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +@@ -25,7 +25,6 @@ struct wmi_sysman_priv wmi_priv = { + /* reset bios to defaults */ + static const char * const reset_types[] = {"builtinsafe", "lastknowngood", "factory", "custom"}; + static int reset_option = -1; +-static const struct class *fw_attr_class; + + + /** +@@ -541,15 +540,11 @@ static int __init sysman_init(void) + goto err_exit_bios_attr_pass_interface; + } + +- ret = fw_attributes_class_get(&fw_attr_class); +- if (ret) +- goto err_exit_bios_attr_pass_interface; +- +- wmi_priv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0), ++ wmi_priv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0), + NULL, "%s", DRIVER_NAME); + if (IS_ERR(wmi_priv.class_dev)) { + ret = PTR_ERR(wmi_priv.class_dev); +- goto err_unregister_class; ++ goto err_exit_bios_attr_pass_interface; + } + + wmi_priv.main_dir_kset = kset_create_and_add("attributes", NULL, +@@ -602,10 +597,7 @@ static int __init sysman_init(void) + release_attributes_data(); + + err_destroy_classdev: +- device_destroy(fw_attr_class, MKDEV(0, 0)); +- +-err_unregister_class: +- fw_attributes_class_put(); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + + err_exit_bios_attr_pass_interface: + exit_bios_attr_pass_interface(); +@@ -619,8 +611,7 @@ static int __init sysman_init(void) + static void __exit sysman_exit(void) + { + release_attributes_data(); +- device_destroy(fw_attr_class, MKDEV(0, 0)); +- fw_attributes_class_put(); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + exit_bios_attr_set_interface(); + exit_bios_attr_pass_interface(); + } +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-dell-wmi-sysman-fix-class-device-unregi.patch b/queue-6.12/platform-x86-dell-wmi-sysman-fix-class-device-unregi.patch new file mode 100644 index 0000000000..daf13ee0dc --- /dev/null +++ b/queue-6.12/platform-x86-dell-wmi-sysman-fix-class-device-unregi.patch @@ -0,0 +1,52 @@ +From 7cdc237d47c96c801c71d3f3e012afc8be82118f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 22:17:37 -0300 +Subject: platform/x86: dell-wmi-sysman: Fix class device unregistration +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Kurt Borja + +[ Upstream commit 314e5ad4782d08858b3abc325c0487bd2abc23a1 ] + +Devices under the firmware_attributes_class do not have unique a dev_t. +Therefore, device_unregister() should be used instead of +device_destroy(), since the latter may match any device with a given +dev_t. 
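+
+A minimal sketch of why lookup-by-devt is unsafe here (illustrative only;
+error handling omitted):
+
+  #include <linux/device.h>
+
+  static void sketch(void)
+  {
+          struct device *a, *b;
+
+          a = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0),
+                            NULL, "driver-a");
+          b = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0),
+                            NULL, "driver-b");
+
+          /* looks up a device by devt: with the shared MKDEV(0, 0) this
+           * may find and destroy either a or b */
+          device_destroy(&firmware_attributes_class, MKDEV(0, 0));
+
+          /* operates on exactly the device we created */
+          device_unregister(b);
+  }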
+ +Fixes: e8a60aa7404b ("platform/x86: Introduce support for Systems Management Driver over WMI for Dell Systems") +Signed-off-by: Kurt Borja +Link: https://lore.kernel.org/r/20250625-dest-fix-v1-3-3a0f342312bb@gmail.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/dell/dell-wmi-sysman/sysman.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +index 3c74d5e8350a4..f5402b7146572 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +@@ -597,7 +597,7 @@ static int __init sysman_init(void) + release_attributes_data(); + + err_destroy_classdev: +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(wmi_priv.class_dev); + + err_exit_bios_attr_pass_interface: + exit_bios_attr_pass_interface(); +@@ -611,7 +611,7 @@ static int __init sysman_init(void) + static void __exit sysman_exit(void) + { + release_attributes_data(); +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(wmi_priv.class_dev); + exit_bios_attr_set_interface(); + exit_bios_attr_pass_interface(); + } +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-dell-wmi-sysman-fix-wmi-data-block-retr.patch b/queue-6.12/platform-x86-dell-wmi-sysman-fix-wmi-data-block-retr.patch new file mode 100644 index 0000000000..7af34e3bdd --- /dev/null +++ b/queue-6.12/platform-x86-dell-wmi-sysman-fix-wmi-data-block-retr.patch @@ -0,0 +1,141 @@ +From 3572679bb13b43907883732395673ca896f64e40 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 30 Jun 2025 00:43:12 -0300 +Subject: platform/x86: dell-wmi-sysman: Fix WMI data block retrieval in sysfs + callbacks +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Kurt Borja + +[ Upstream commit eb617dd25ca176f3fee24f873f0fd60010773d67 ] + +After retrieving WMI data blocks in sysfs callbacks, check for the +validity of them before dereferencing their content. 
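+
+The shape of the added check, as a standalone sketch (check_obj is a
+hypothetical helper; the real patch open-codes this per attribute type
+with the *_MIN_ELEMENTS constants it introduces):
+
+  #include <linux/acpi.h>
+
+  static int check_obj(union acpi_object *obj, u32 min_elems,
+                       int idx, acpi_object_type type)
+  {
+          if (!obj)
+                  return -EIO;
+
+          /* firmware data is untrusted: verify the package shape
+           * before dereferencing any element */
+          if (obj->type != ACPI_TYPE_PACKAGE ||
+              obj->package.count < min_elems ||
+              obj->package.elements[idx].type != type)
+                  return -EIO;
+
+          return 0;
+  }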
+ +Reported-by: Jan Graczyk +Closes: https://lore.kernel.org/r/CAHk-=wgMiSKXf7SvQrfEnxVtmT=QVQPjJdNjfm3aXS7wc=rzTw@mail.gmail.com/ +Fixes: e8a60aa7404b ("platform/x86: Introduce support for Systems Management Driver over WMI for Dell Systems") +Suggested-by: Linus Torvalds +Reviewed-by: Armin Wolf +Signed-off-by: Kurt Borja +Link: https://lore.kernel.org/r/20250630-sysman-fix-v2-1-d185674d0a30@gmail.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + .../platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h | 5 +++++ + .../platform/x86/dell/dell-wmi-sysman/enum-attributes.c | 5 +++-- + .../platform/x86/dell/dell-wmi-sysman/int-attributes.c | 5 +++-- + .../x86/dell/dell-wmi-sysman/passobj-attributes.c | 5 +++-- + .../platform/x86/dell/dell-wmi-sysman/string-attributes.c | 5 +++-- + drivers/platform/x86/dell/dell-wmi-sysman/sysman.c | 8 ++++---- + 6 files changed, 21 insertions(+), 12 deletions(-) + +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h +index 3ad33a094588c..817ee7ba07ca0 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/dell-wmi-sysman.h +@@ -89,6 +89,11 @@ extern struct wmi_sysman_priv wmi_priv; + + enum { ENUM, INT, STR, PO }; + ++#define ENUM_MIN_ELEMENTS 8 ++#define INT_MIN_ELEMENTS 9 ++#define STR_MIN_ELEMENTS 8 ++#define PO_MIN_ELEMENTS 4 ++ + enum { + ATTR_NAME, + DISPL_NAME_LANG_CODE, +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c +index 8cc212c852668..fc2f58b4cbc6e 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/enum-attributes.c +@@ -23,9 +23,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a + obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_ENUMERATION_ATTRIBUTE_GUID); + if (!obj) + return -EIO; +- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { ++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < ENUM_MIN_ELEMENTS || ++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { + kfree(obj); +- return -EINVAL; ++ return -EIO; + } + ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer); + kfree(obj); +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c +index 951e75b538fad..7352480642391 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/int-attributes.c +@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a + obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_INTEGER_ATTRIBUTE_GUID); + if (!obj) + return -EIO; +- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) { ++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < INT_MIN_ELEMENTS || ++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_INTEGER) { + kfree(obj); +- return -EINVAL; ++ return -EIO; + } + ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[CURRENT_VAL].integer.value); + kfree(obj); +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c +index d8f1bf5e58a0f..3167e06d416ed 100644 +--- 
a/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/passobj-attributes.c +@@ -26,9 +26,10 @@ static ssize_t is_enabled_show(struct kobject *kobj, struct kobj_attribute *attr + obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_PASSOBJ_ATTRIBUTE_GUID); + if (!obj) + return -EIO; +- if (obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) { ++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < PO_MIN_ELEMENTS || ++ obj->package.elements[IS_PASS_SET].type != ACPI_TYPE_INTEGER) { + kfree(obj); +- return -EINVAL; ++ return -EIO; + } + ret = snprintf(buf, PAGE_SIZE, "%lld\n", obj->package.elements[IS_PASS_SET].integer.value); + kfree(obj); +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c +index c392f0ecf8b55..0d2c74f8d1aad 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/string-attributes.c +@@ -25,9 +25,10 @@ static ssize_t current_value_show(struct kobject *kobj, struct kobj_attribute *a + obj = get_wmiobj_pointer(instance_id, DELL_WMI_BIOS_STRING_ATTRIBUTE_GUID); + if (!obj) + return -EIO; +- if (obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { ++ if (obj->type != ACPI_TYPE_PACKAGE || obj->package.count < STR_MIN_ELEMENTS || ++ obj->package.elements[CURRENT_VAL].type != ACPI_TYPE_STRING) { + kfree(obj); +- return -EINVAL; ++ return -EIO; + } + ret = snprintf(buf, PAGE_SIZE, "%s\n", obj->package.elements[CURRENT_VAL].string.pointer); + kfree(obj); +diff --git a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +index 40ddc6eb75624..decb3b997d86a 100644 +--- a/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c ++++ b/drivers/platform/x86/dell/dell-wmi-sysman/sysman.c +@@ -408,10 +408,10 @@ static int init_bios_attributes(int attr_type, const char *guid) + return retval; + + switch (attr_type) { +- case ENUM: min_elements = 8; break; +- case INT: min_elements = 9; break; +- case STR: min_elements = 8; break; +- case PO: min_elements = 4; break; ++ case ENUM: min_elements = ENUM_MIN_ELEMENTS; break; ++ case INT: min_elements = INT_MIN_ELEMENTS; break; ++ case STR: min_elements = STR_MIN_ELEMENTS; break; ++ case PO: min_elements = PO_MIN_ELEMENTS; break; + default: + pr_err("Error: Unknown attr_type: %d\n", attr_type); + return -EINVAL; +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-firmware_attributes_class-move-include-.patch b/queue-6.12/platform-x86-firmware_attributes_class-move-include-.patch new file mode 100644 index 0000000000..0881c6f416 --- /dev/null +++ b/queue-6.12/platform-x86-firmware_attributes_class-move-include-.patch @@ -0,0 +1,59 @@ +From 01067a3b8e111ccc28ad6158080da4ce8e301451 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 4 Jan 2025 00:05:09 +0100 +Subject: platform/x86: firmware_attributes_class: Move include + linux/device/class.h +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit d0eee1be379204d2ee6cdb09bd98b3fd0165b6d3 ] + +The header firmware_attributes_class.h uses 'struct class'. It should +also include the necessary dependency header. 
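+
+The rule being applied, shown on the header itself (a sketch of the
+result; the forward-declaration note is an aside, not part of the patch):
+
+  #ifndef FW_ATTR_CLASS_H
+  #define FW_ATTR_CLASS_H
+
+  #include <linux/device/class.h>  /* defines struct class */
+  /* a bare "struct class;" forward declaration would also suffice,
+   * since only pointers to it appear below */
+
+  int fw_attributes_class_get(const struct class **fw_attr_class);
+  int fw_attributes_class_put(void);
+
+  #endif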
+ +Signed-off-by: Thomas Weißschuh +Reviewed-by: Armin Wolf +Reviewed-by: Mario Limonciello +Reviewed-by: Mark Pearson +Tested-by: Mark Pearson +Link: https://lore.kernel.org/r/20250104-firmware-attributes-simplify-v1-1-949f9709e405@weissschuh.net +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Stable-dep-of: 5ff1fbb30597 ("platform/x86: think-lmi: Fix class device unregistration") +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/firmware_attributes_class.c | 1 - + drivers/platform/x86/firmware_attributes_class.h | 2 ++ + 2 files changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/platform/x86/firmware_attributes_class.c b/drivers/platform/x86/firmware_attributes_class.c +index 182a07d8ae3df..cbc56e5db5928 100644 +--- a/drivers/platform/x86/firmware_attributes_class.c ++++ b/drivers/platform/x86/firmware_attributes_class.c +@@ -3,7 +3,6 @@ + /* Firmware attributes class helper module */ + + #include +-#include + #include + #include "firmware_attributes_class.h" + +diff --git a/drivers/platform/x86/firmware_attributes_class.h b/drivers/platform/x86/firmware_attributes_class.h +index 363c75f1ac1b8..8e0f47cfdf92e 100644 +--- a/drivers/platform/x86/firmware_attributes_class.h ++++ b/drivers/platform/x86/firmware_attributes_class.h +@@ -5,6 +5,8 @@ + #ifndef FW_ATTR_CLASS_H + #define FW_ATTR_CLASS_H + ++#include ++ + int fw_attributes_class_get(const struct class **fw_attr_class); + int fw_attributes_class_put(void); + +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-firmware_attributes_class-simplify-api.patch b/queue-6.12/platform-x86-firmware_attributes_class-simplify-api.patch new file mode 100644 index 0000000000..3fd704b62d --- /dev/null +++ b/queue-6.12/platform-x86-firmware_attributes_class-simplify-api.patch @@ -0,0 +1,118 @@ +From 717ece87318ff349943b45cba73876f9a3b85cd2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 4 Jan 2025 00:05:10 +0100 +Subject: platform/x86: firmware_attributes_class: Simplify API +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit d03cfde56f5cf9ec50b4cf099a42bf056fc80ddd ] + +The module core already guarantees that a module can only be unloaded +after all other modules using its symbols have been unloaded. +As it's already the responsibility of the drivers using +firmware_attributes_class to clean up their devices before unloading, +the lifetime of the firmware_attributes_class can be bound to the +lifetime of the module. +This enables the direct usage of firmware_attributes_class from the +drivers, without having to go through the lifecycle functions, +leading to simplifications for both the subsystem and its users. 
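+
+The pattern the subsystem is converted to, in generic form (my_class is a
+stand-in; the real conversion is in the diff below):
+
+  #include <linux/device/class.h>
+  #include <linux/module.h>
+
+  const struct class my_class = {
+          .name = "my-class",
+  };
+  EXPORT_SYMBOL_GPL(my_class);
+
+  static int __init my_class_init(void)
+  {
+          return class_register(&my_class);
+  }
+  module_init(my_class_init);
+
+  static void __exit my_class_exit(void)
+  {
+          /* safe without refcounting: consumers of the exported symbol
+           * pin this module, so they unload (and drop their devices)
+           * before this can run */
+          class_unregister(&my_class);
+  }
+  module_exit(my_class_exit);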
+ +Signed-off-by: Thomas Weißschuh +Reviewed-by: Armin Wolf +Reviewed-by: Mario Limonciello +Reviewed-by: Mark Pearson +Tested-by: Mark Pearson +Link: https://lore.kernel.org/r/20250104-firmware-attributes-simplify-v1-2-949f9709e405@weissschuh.net +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Stable-dep-of: 5ff1fbb30597 ("platform/x86: think-lmi: Fix class device unregistration") +Signed-off-by: Sasha Levin +--- + .../platform/x86/firmware_attributes_class.c | 40 +++++++------------ + .../platform/x86/firmware_attributes_class.h | 1 + + 2 files changed, 15 insertions(+), 26 deletions(-) + +diff --git a/drivers/platform/x86/firmware_attributes_class.c b/drivers/platform/x86/firmware_attributes_class.c +index cbc56e5db5928..87672c49e86ae 100644 +--- a/drivers/platform/x86/firmware_attributes_class.c ++++ b/drivers/platform/x86/firmware_attributes_class.c +@@ -2,47 +2,35 @@ + + /* Firmware attributes class helper module */ + +-#include + #include + #include "firmware_attributes_class.h" + +-static DEFINE_MUTEX(fw_attr_lock); +-static int fw_attr_inuse; +- +-static const struct class firmware_attributes_class = { ++const struct class firmware_attributes_class = { + .name = "firmware-attributes", + }; ++EXPORT_SYMBOL_GPL(firmware_attributes_class); ++ ++static __init int fw_attributes_class_init(void) ++{ ++ return class_register(&firmware_attributes_class); ++} ++module_init(fw_attributes_class_init); ++ ++static __exit void fw_attributes_class_exit(void) ++{ ++ class_unregister(&firmware_attributes_class); ++} ++module_exit(fw_attributes_class_exit); + + int fw_attributes_class_get(const struct class **fw_attr_class) + { +- int err; +- +- mutex_lock(&fw_attr_lock); +- if (!fw_attr_inuse) { /*first time class is being used*/ +- err = class_register(&firmware_attributes_class); +- if (err) { +- mutex_unlock(&fw_attr_lock); +- return err; +- } +- } +- fw_attr_inuse++; + *fw_attr_class = &firmware_attributes_class; +- mutex_unlock(&fw_attr_lock); + return 0; + } + EXPORT_SYMBOL_GPL(fw_attributes_class_get); + + int fw_attributes_class_put(void) + { +- mutex_lock(&fw_attr_lock); +- if (!fw_attr_inuse) { +- mutex_unlock(&fw_attr_lock); +- return -EINVAL; +- } +- fw_attr_inuse--; +- if (!fw_attr_inuse) /* No more consumers */ +- class_unregister(&firmware_attributes_class); +- mutex_unlock(&fw_attr_lock); + return 0; + } + EXPORT_SYMBOL_GPL(fw_attributes_class_put); +diff --git a/drivers/platform/x86/firmware_attributes_class.h b/drivers/platform/x86/firmware_attributes_class.h +index 8e0f47cfdf92e..ef6c3764a8349 100644 +--- a/drivers/platform/x86/firmware_attributes_class.h ++++ b/drivers/platform/x86/firmware_attributes_class.h +@@ -7,6 +7,7 @@ + + #include + ++extern const struct class firmware_attributes_class; + int fw_attributes_class_get(const struct class **fw_attr_class); + int fw_attributes_class_put(void); + +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-hp-bioscfg-directly-use-firmware_attrib.patch b/queue-6.12/platform-x86-hp-bioscfg-directly-use-firmware_attrib.patch new file mode 100644 index 0000000000..18f816c4ed --- /dev/null +++ b/queue-6.12/platform-x86-hp-bioscfg-directly-use-firmware_attrib.patch @@ -0,0 +1,80 @@ +From 1865d3839a7e9c31020fbf623647b5a7dc303d93 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 4 Jan 2025 00:05:12 +0100 +Subject: platform/x86: hp-bioscfg: Directly use firmware_attributes_class +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit 
63f8c058036057644f095123a35895cd11639b88 ] + +The usage of the lifecycle functions is not necessary anymore. + +Signed-off-by: Thomas Weißschuh +Reviewed-by: Armin Wolf +Reviewed-by: Mario Limonciello +Reviewed-by: Mark Pearson +Tested-by: Mark Pearson +Link: https://lore.kernel.org/r/20250104-firmware-attributes-simplify-v1-4-949f9709e405@weissschuh.net +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Stable-dep-of: 11cba4793b95 ("platform/x86: hp-bioscfg: Fix class device unregistration") +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/hp/hp-bioscfg/bioscfg.c | 14 +++----------- + 1 file changed, 3 insertions(+), 11 deletions(-) + +diff --git a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c +index 2dc50152158a3..0b277b7e37dd6 100644 +--- a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c ++++ b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c +@@ -24,8 +24,6 @@ struct bioscfg_priv bioscfg_drv = { + .mutex = __MUTEX_INITIALIZER(bioscfg_drv.mutex), + }; + +-static const struct class *fw_attr_class; +- + ssize_t display_name_language_code_show(struct kobject *kobj, + struct kobj_attribute *attr, + char *buf) +@@ -972,11 +970,7 @@ static int __init hp_init(void) + if (ret) + return ret; + +- ret = fw_attributes_class_get(&fw_attr_class); +- if (ret) +- goto err_unregister_class; +- +- bioscfg_drv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0), ++ bioscfg_drv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0), + NULL, "%s", DRIVER_NAME); + if (IS_ERR(bioscfg_drv.class_dev)) { + ret = PTR_ERR(bioscfg_drv.class_dev); +@@ -1043,10 +1037,9 @@ static int __init hp_init(void) + release_attributes_data(); + + err_destroy_classdev: +- device_destroy(fw_attr_class, MKDEV(0, 0)); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + + err_unregister_class: +- fw_attributes_class_put(); + hp_exit_attr_set_interface(); + + return ret; +@@ -1055,9 +1048,8 @@ static int __init hp_init(void) + static void __exit hp_exit(void) + { + release_attributes_data(); +- device_destroy(fw_attr_class, MKDEV(0, 0)); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + +- fw_attributes_class_put(); + hp_exit_attr_set_interface(); + } + +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-hp-bioscfg-fix-class-device-unregistrat.patch b/queue-6.12/platform-x86-hp-bioscfg-fix-class-device-unregistrat.patch new file mode 100644 index 0000000000..7df6b77141 --- /dev/null +++ b/queue-6.12/platform-x86-hp-bioscfg-fix-class-device-unregistrat.patch @@ -0,0 +1,52 @@ +From b5687cc166e42a9659dbb96710eaab93fcd6ac11 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 22:17:35 -0300 +Subject: platform/x86: hp-bioscfg: Fix class device unregistration +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Kurt Borja + +[ Upstream commit 11cba4793b95df3bc192149a6eb044f69aa0b99e ] + +Devices under the firmware_attributes_class do not have unique a dev_t. +Therefore, device_unregister() should be used instead of +device_destroy(), since the latter may match any device with a given +dev_t. 
+ +Fixes: a34fc329b189 ("platform/x86: hp-bioscfg: bioscfg") +Signed-off-by: Kurt Borja +Link: https://lore.kernel.org/r/20250625-dest-fix-v1-1-3a0f342312bb@gmail.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/hp/hp-bioscfg/bioscfg.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c +index 0b277b7e37dd6..00b04adb4f191 100644 +--- a/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c ++++ b/drivers/platform/x86/hp/hp-bioscfg/bioscfg.c +@@ -1037,7 +1037,7 @@ static int __init hp_init(void) + release_attributes_data(); + + err_destroy_classdev: +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(bioscfg_drv.class_dev); + + err_unregister_class: + hp_exit_attr_set_interface(); +@@ -1048,7 +1048,7 @@ static int __init hp_init(void) + static void __exit hp_exit(void) + { + release_attributes_data(); +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(bioscfg_drv.class_dev); + + hp_exit_attr_set_interface(); + } +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-think-lmi-directly-use-firmware_attribu.patch b/queue-6.12/platform-x86-think-lmi-directly-use-firmware_attribu.patch new file mode 100644 index 0000000000..aa5e484fca --- /dev/null +++ b/queue-6.12/platform-x86-think-lmi-directly-use-firmware_attribu.patch @@ -0,0 +1,77 @@ +From 1189e69b9b3e7083c0fa5a92bae789855a05154c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 4 Jan 2025 00:05:11 +0100 +Subject: platform/x86: think-lmi: Directly use firmware_attributes_class +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Thomas Weißschuh + +[ Upstream commit 55922403807a12d4f96c67ba01a920edfb6f2633 ] + +The usage of the lifecycle functions is not necessary anymore. 
+ +Signed-off-by: Thomas Weißschuh +Reviewed-by: Armin Wolf +Reviewed-by: Mario Limonciello +Reviewed-by: Mark Pearson +Tested-by: Mark Pearson +Link: https://lore.kernel.org/r/20250104-firmware-attributes-simplify-v1-3-949f9709e405@weissschuh.net +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Stable-dep-of: 5ff1fbb30597 ("platform/x86: think-lmi: Fix class device unregistration") +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/think-lmi.c | 13 +++---------- + 1 file changed, 3 insertions(+), 10 deletions(-) + +diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c +index 1abd8378f158d..a7c3285323d6b 100644 +--- a/drivers/platform/x86/think-lmi.c ++++ b/drivers/platform/x86/think-lmi.c +@@ -192,7 +192,6 @@ static const char * const level_options[] = { + [TLMI_LEVEL_MASTER] = "master", + }; + static struct think_lmi tlmi_priv; +-static const struct class *fw_attr_class; + static DEFINE_MUTEX(tlmi_mutex); + + static inline struct tlmi_pwd_setting *to_tlmi_pwd_setting(struct kobject *kobj) +@@ -1375,11 +1374,7 @@ static int tlmi_sysfs_init(void) + { + int i, ret; + +- ret = fw_attributes_class_get(&fw_attr_class); +- if (ret) +- return ret; +- +- tlmi_priv.class_dev = device_create(fw_attr_class, NULL, MKDEV(0, 0), ++ tlmi_priv.class_dev = device_create(&firmware_attributes_class, NULL, MKDEV(0, 0), + NULL, "%s", "thinklmi"); + if (IS_ERR(tlmi_priv.class_dev)) { + ret = PTR_ERR(tlmi_priv.class_dev); +@@ -1492,9 +1487,8 @@ static int tlmi_sysfs_init(void) + fail_create_attr: + tlmi_release_attr(); + fail_device_created: +- device_destroy(fw_attr_class, MKDEV(0, 0)); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + fail_class_created: +- fw_attributes_class_put(); + return ret; + } + +@@ -1717,8 +1711,7 @@ static int tlmi_analyze(void) + static void tlmi_remove(struct wmi_device *wdev) + { + tlmi_release_attr(); +- device_destroy(fw_attr_class, MKDEV(0, 0)); +- fw_attributes_class_put(); ++ device_destroy(&firmware_attributes_class, MKDEV(0, 0)); + } + + static int tlmi_probe(struct wmi_device *wdev, const void *context) +-- +2.39.5 + diff --git a/queue-6.12/platform-x86-think-lmi-fix-class-device-unregistrati.patch b/queue-6.12/platform-x86-think-lmi-fix-class-device-unregistrati.patch new file mode 100644 index 0000000000..cc6b515344 --- /dev/null +++ b/queue-6.12/platform-x86-think-lmi-fix-class-device-unregistrati.patch @@ -0,0 +1,52 @@ +From 3d7cc3de457f04944cba37dfc8651f9c23e7bdc4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 22:17:36 -0300 +Subject: platform/x86: think-lmi: Fix class device unregistration +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Kurt Borja + +[ Upstream commit 5ff1fbb3059730700b4823f43999fc1315984632 ] + +Devices under the firmware_attributes_class do not have unique a dev_t. +Therefore, device_unregister() should be used instead of +device_destroy(), since the latter may match any device with a given +dev_t. 
+ +Fixes: a40cd7ef22fb ("platform/x86: think-lmi: Add WMI interface support on Lenovo platforms") +Signed-off-by: Kurt Borja +Link: https://lore.kernel.org/r/20250625-dest-fix-v1-2-3a0f342312bb@gmail.com +Reviewed-by: Ilpo Järvinen +Signed-off-by: Ilpo Järvinen +Signed-off-by: Sasha Levin +--- + drivers/platform/x86/think-lmi.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/platform/x86/think-lmi.c b/drivers/platform/x86/think-lmi.c +index a7c3285323d6b..3a8e84589518a 100644 +--- a/drivers/platform/x86/think-lmi.c ++++ b/drivers/platform/x86/think-lmi.c +@@ -1487,7 +1487,7 @@ static int tlmi_sysfs_init(void) + fail_create_attr: + tlmi_release_attr(); + fail_device_created: +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(tlmi_priv.class_dev); + fail_class_created: + return ret; + } +@@ -1711,7 +1711,7 @@ static int tlmi_analyze(void) + static void tlmi_remove(struct wmi_device *wdev) + { + tlmi_release_attr(); +- device_destroy(&firmware_attributes_class, MKDEV(0, 0)); ++ device_unregister(tlmi_priv.class_dev); + } + + static int tlmi_probe(struct wmi_device *wdev, const void *context) +-- +2.39.5 + diff --git a/queue-6.12/powerpc-fix-struct-termio-related-ioctl-macros.patch b/queue-6.12/powerpc-fix-struct-termio-related-ioctl-macros.patch new file mode 100644 index 0000000000..c71ed245d0 --- /dev/null +++ b/queue-6.12/powerpc-fix-struct-termio-related-ioctl-macros.patch @@ -0,0 +1,58 @@ +From 8c6b36e3b3114deda1e356d12333c2c4ff219ea9 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 17 May 2025 19:52:37 +0530 +Subject: powerpc: Fix struct termio related ioctl macros + +From: Madhavan Srinivasan + +[ Upstream commit ab107276607af90b13a5994997e19b7b9731e251 ] + +Since termio interface is now obsolete, include/uapi/asm/ioctls.h +has some constant macros referring to "struct termio", this caused +build failure at userspace. + +In file included from /usr/include/asm/ioctl.h:12, + from /usr/include/asm/ioctls.h:5, + from tst-ioctls.c:3: +tst-ioctls.c: In function 'get_TCGETA': +tst-ioctls.c:12:10: error: invalid application of 'sizeof' to incomplete type 'struct termio' + 12 | return TCGETA; + | ^~~~~~ + +Even though termios.h provides "struct termio", trying to juggle definitions around to +make it compile could introduce regressions. So better to open code it. + +Reported-by: Tulio Magno +Suggested-by: Nicholas Piggin +Tested-by: Justin M. 
Forbes +Reviewed-by: Michael Ellerman +Closes: https://lore.kernel.org/linuxppc-dev/8734dji5wl.fsf@ascii.art.br/ +Signed-off-by: Madhavan Srinivasan +Link: https://patch.msgid.link/20250517142237.156665-1-maddy@linux.ibm.com +Signed-off-by: Sasha Levin +--- + arch/powerpc/include/uapi/asm/ioctls.h | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +diff --git a/arch/powerpc/include/uapi/asm/ioctls.h b/arch/powerpc/include/uapi/asm/ioctls.h +index 2c145da3b774a..b5211e413829a 100644 +--- a/arch/powerpc/include/uapi/asm/ioctls.h ++++ b/arch/powerpc/include/uapi/asm/ioctls.h +@@ -23,10 +23,10 @@ + #define TCSETSW _IOW('t', 21, struct termios) + #define TCSETSF _IOW('t', 22, struct termios) + +-#define TCGETA _IOR('t', 23, struct termio) +-#define TCSETA _IOW('t', 24, struct termio) +-#define TCSETAW _IOW('t', 25, struct termio) +-#define TCSETAF _IOW('t', 28, struct termio) ++#define TCGETA 0x40147417 /* _IOR('t', 23, struct termio) */ ++#define TCSETA 0x80147418 /* _IOW('t', 24, struct termio) */ ++#define TCSETAW 0x80147419 /* _IOW('t', 25, struct termio) */ ++#define TCSETAF 0x8014741c /* _IOW('t', 28, struct termio) */ + + #define TCSBRK _IO('t', 29) + #define TCXONC _IO('t', 30) +-- +2.39.5 + diff --git a/queue-6.12/rcu-return-early-if-callback-is-not-specified.patch b/queue-6.12/rcu-return-early-if-callback-is-not-specified.patch new file mode 100644 index 0000000000..4f7319275f --- /dev/null +++ b/queue-6.12/rcu-return-early-if-callback-is-not-specified.patch @@ -0,0 +1,43 @@ +From 347ea671b0c39c84013f5b55a647ca54ca597291 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 10 Jun 2025 19:34:48 +0200 +Subject: rcu: Return early if callback is not specified + +From: Uladzislau Rezki (Sony) + +[ Upstream commit 33b6a1f155d627f5bd80c7485c598ce45428f74f ] + +Currently the call_rcu() API does not check whether a callback +pointer is NULL. If NULL is passed, rcu_core() will try to invoke +it, resulting in NULL pointer dereference and a kernel crash. + +To prevent this and improve debuggability, this patch adds a check +for NULL and emits a kernel stack trace to help identify a faulty +caller. + +Signed-off-by: Uladzislau Rezki (Sony) +Reviewed-by: Joel Fernandes +Signed-off-by: Joel Fernandes +Signed-off-by: Sasha Levin +--- + kernel/rcu/tree.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c +index cefa831c8cb32..552464dcffe27 100644 +--- a/kernel/rcu/tree.c ++++ b/kernel/rcu/tree.c +@@ -3076,6 +3076,10 @@ __call_rcu_common(struct rcu_head *head, rcu_callback_t func, bool lazy_in) + /* Misaligned rcu_head! */ + WARN_ON_ONCE((unsigned long)head & (sizeof(void *) - 1)); + ++ /* Avoid NULL dereference if callback is NULL. */ ++ if (WARN_ON_ONCE(!func)) ++ return; ++ + if (debug_rcu_head_queue(head)) { + /* + * Probable double call_rcu(), so leak the callback. 
+-- +2.39.5 + diff --git a/queue-6.12/rdma-mlx5-fix-cc-counters-query-for-mpv.patch b/queue-6.12/rdma-mlx5-fix-cc-counters-query-for-mpv.patch new file mode 100644 index 0000000000..332e311836 --- /dev/null +++ b/queue-6.12/rdma-mlx5-fix-cc-counters-query-for-mpv.patch @@ -0,0 +1,38 @@ +From 0b3ccf386ef83b0d38f52955470d4899810709b2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Jun 2025 12:14:53 +0300 +Subject: RDMA/mlx5: Fix CC counters query for MPV + +From: Patrisious Haddad + +[ Upstream commit acd245b1e33fc4b9d0f2e3372021d632f7ee0652 ] + +In case, CC counters are querying for the second port use the correct +core device for the query instead of always using the master core device. + +Fixes: aac4492ef23a ("IB/mlx5: Update counter implementation for dual port RoCE") +Signed-off-by: Patrisious Haddad +Reviewed-by: Michael Guralnik +Link: https://patch.msgid.link/9cace74dcf106116118bebfa9146d40d4166c6b0.1750064969.git.leon@kernel.org +Signed-off-by: Leon Romanovsky +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/mlx5/counters.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c +index fbabc5ac9ef27..ad6c195d077bb 100644 +--- a/drivers/infiniband/hw/mlx5/counters.c ++++ b/drivers/infiniband/hw/mlx5/counters.c +@@ -411,7 +411,7 @@ static int do_get_hw_stats(struct ib_device *ibdev, + */ + goto done; + } +- ret = mlx5_lag_query_cong_counters(dev->mdev, ++ ret = mlx5_lag_query_cong_counters(mdev, + stats->value + + cnts->num_q_counters, + cnts->num_cong_counters, +-- +2.39.5 + diff --git a/queue-6.12/rdma-mlx5-fix-hw-counters-query-for-non-representor-.patch b/queue-6.12/rdma-mlx5-fix-hw-counters-query-for-non-representor-.patch new file mode 100644 index 0000000000..cdc23678c3 --- /dev/null +++ b/queue-6.12/rdma-mlx5-fix-hw-counters-query-for-non-representor-.patch @@ -0,0 +1,46 @@ +From b818b8e4394db08b5abeceabfdd3566f417c2568 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Jun 2025 12:14:52 +0300 +Subject: RDMA/mlx5: Fix HW counters query for non-representor devices + +From: Patrisious Haddad + +[ Upstream commit 3cc1dbfddf88dc5ecce0a75185061403b1f7352d ] + +To get the device HW counters, a non-representor switchdev device +should use the mlx5_ib_query_q_counters() function and query all of +the available counters. While a representor device in switchdev mode +should use the mlx5_ib_query_q_counters_vport() function and query only +the Q_Counters without the PPCNT counters and congestion control counters, +since they aren't relevant for a representor device. + +Currently a non-representor switchdev device skips querying the PPCNT +counters and congestion control counters, leaving them unupdated. +Fix that by properly querying those counters for non-representor devices. 
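+
+The fixed condition, distilled into a hypothetical predicate (dev->is_rep
+and is_mdev_switchdev_mode() are the field and helper the driver already
+uses):
+
+  static bool use_vport_q_counters_only(struct mlx5_ib_dev *dev,
+                                        u32 port_num)
+  {
+          /* only a representor device in switchdev mode restricts
+           * itself to the per-vport Q counters; a plain switchdev
+           * device must also query PPCNT and congestion counters */
+          return is_mdev_switchdev_mode(dev->mdev) && dev->is_rep &&
+                 port_num != 0;
+  }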
+ +Fixes: d22467a71ebe ("RDMA/mlx5: Expand switchdev Q-counters to expose representor statistics") +Signed-off-by: Patrisious Haddad +Reviewed-by: Maher Sanalla +Link: https://patch.msgid.link/56bf8af4ca8c58e3fb9f7e47b1dca2009eeeed81.1750064969.git.leon@kernel.org +Signed-off-by: Leon Romanovsky +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/mlx5/counters.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/infiniband/hw/mlx5/counters.c b/drivers/infiniband/hw/mlx5/counters.c +index 81cfa74147a18..fbabc5ac9ef27 100644 +--- a/drivers/infiniband/hw/mlx5/counters.c ++++ b/drivers/infiniband/hw/mlx5/counters.c +@@ -391,7 +391,7 @@ static int do_get_hw_stats(struct ib_device *ibdev, + return ret; + + /* We don't expose device counters over Vports */ +- if (is_mdev_switchdev_mode(dev->mdev) && port_num != 0) ++ if (is_mdev_switchdev_mode(dev->mdev) && dev->is_rep && port_num != 0) + goto done; + + if (MLX5_CAP_PCAM_FEATURE(dev->mdev, rx_icrc_encapsulated_counter)) { +-- +2.39.5 + diff --git a/queue-6.12/rdma-mlx5-fix-unsafe-xarray-access-in-implicit-odp-h.patch b/queue-6.12/rdma-mlx5-fix-unsafe-xarray-access-in-implicit-odp-h.patch new file mode 100644 index 0000000000..b0a4ff22cc --- /dev/null +++ b/queue-6.12/rdma-mlx5-fix-unsafe-xarray-access-in-implicit-odp-h.patch @@ -0,0 +1,104 @@ +From d9b204f403b0f09db45f4c63bd3e2519a7fce362 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Jun 2025 11:17:01 +0300 +Subject: RDMA/mlx5: Fix unsafe xarray access in implicit ODP handling + +From: Or Har-Toov + +[ Upstream commit 2c6b640ea08bff1a192bf87fa45246ff1e40767c ] + +__xa_store() and __xa_erase() were used without holding the proper lock, +which led to a lockdep warning due to unsafe RCU usage. This patch +replaces them with xa_store() and xa_erase(), which perform the necessary +locking internally. + + ============================= + WARNING: suspicious RCPU usage + 6.14.0-rc7_for_upstream_debug_2025_03_18_15_01 #1 Not tainted + ----------------------------- + ./include/linux/xarray.h:1211 suspicious rcu_dereference_protected() usage! + + other info that might help us debug this: + + rcu_scheduler_active = 2, debug_locks = 1 + 3 locks held by kworker/u136:0/219: + at: process_one_work+0xbe4/0x15f0 + process_one_work+0x75c/0x15f0 + pagefault_mr+0x9a5/0x1390 [mlx5_ib] + + stack backtrace: + CPU: 14 UID: 0 PID: 219 Comm: kworker/u136:0 Not tainted + 6.14.0-rc7_for_upstream_debug_2025_03_18_15_01 #1 + Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS + rel-1.16.0-0-gd239552ce722-prebuilt.qemu.org 04/01/2014 + Workqueue: mlx5_ib_page_fault mlx5_ib_eqe_pf_action [mlx5_ib] + Call Trace: + dump_stack_lvl+0xa8/0xc0 + lockdep_rcu_suspicious+0x1e6/0x260 + xas_create+0xb8a/0xee0 + xas_store+0x73/0x14c0 + __xa_store+0x13c/0x220 + ? xa_store_range+0x390/0x390 + ? spin_bug+0x1d0/0x1d0 + pagefault_mr+0xcb5/0x1390 [mlx5_ib] + ? _raw_spin_unlock+0x1f/0x30 + mlx5_ib_eqe_pf_action+0x3be/0x2620 [mlx5_ib] + ? lockdep_hardirqs_on_prepare+0x400/0x400 + ? mlx5_ib_invalidate_range+0xcb0/0xcb0 [mlx5_ib] + process_one_work+0x7db/0x15f0 + ? pwq_dec_nr_in_flight+0xda0/0xda0 + ? assign_work+0x168/0x240 + worker_thread+0x57d/0xcd0 + ? rescuer_thread+0xc40/0xc40 + kthread+0x3b3/0x800 + ? kthread_is_per_cpu+0xb0/0xb0 + ? lock_downgrade+0x680/0x680 + ? do_raw_spin_lock+0x12d/0x270 + ? spin_bug+0x1d0/0x1d0 + ? finish_task_switch.isra.0+0x284/0x9e0 + ? lockdep_hardirqs_on_prepare+0x284/0x400 + ? kthread_is_per_cpu+0xb0/0xb0 + ret_from_fork+0x2d/0x70 + ? 
kthread_is_per_cpu+0xb0/0xb0 + ret_from_fork_asm+0x11/0x20 + +Fixes: d3d930411ce3 ("RDMA/mlx5: Fix implicit ODP use after free") +Link: https://patch.msgid.link/r/a85ddd16f45c8cb2bc0a188c2b0fcedfce975eb8.1750061791.git.leon@kernel.org +Signed-off-by: Or Har-Toov +Reviewed-by: Patrisious Haddad +Signed-off-by: Leon Romanovsky +Signed-off-by: Jason Gunthorpe +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/mlx5/odp.c | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +diff --git a/drivers/infiniband/hw/mlx5/odp.c b/drivers/infiniband/hw/mlx5/odp.c +index e158d5b1ab17b..98a76c9db7aba 100644 +--- a/drivers/infiniband/hw/mlx5/odp.c ++++ b/drivers/infiniband/hw/mlx5/odp.c +@@ -247,8 +247,8 @@ static void destroy_unused_implicit_child_mr(struct mlx5_ib_mr *mr) + } + + if (MLX5_CAP_ODP(mr_to_mdev(mr)->mdev, mem_page_fault)) +- __xa_erase(&mr_to_mdev(mr)->odp_mkeys, +- mlx5_base_mkey(mr->mmkey.key)); ++ xa_erase(&mr_to_mdev(mr)->odp_mkeys, ++ mlx5_base_mkey(mr->mmkey.key)); + xa_unlock(&imr->implicit_children); + + /* Freeing a MR is a sleeping operation, so bounce to a work queue */ +@@ -521,8 +521,8 @@ static struct mlx5_ib_mr *implicit_get_child_mr(struct mlx5_ib_mr *imr, + } + + if (MLX5_CAP_ODP(dev->mdev, mem_page_fault)) { +- ret = __xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key), +- &mr->mmkey, GFP_KERNEL); ++ ret = xa_store(&dev->odp_mkeys, mlx5_base_mkey(mr->mmkey.key), ++ &mr->mmkey, GFP_KERNEL); + if (xa_is_err(ret)) { + ret = ERR_PTR(xa_err(ret)); + __xa_erase(&imr->implicit_children, idx); +-- +2.39.5 + diff --git a/queue-6.12/rdma-mlx5-fix-vport-loopback-for-mpv-device.patch b/queue-6.12/rdma-mlx5-fix-vport-loopback-for-mpv-device.patch new file mode 100644 index 0000000000..59950cc772 --- /dev/null +++ b/queue-6.12/rdma-mlx5-fix-vport-loopback-for-mpv-device.patch @@ -0,0 +1,87 @@ +From c5f15c6f84ed613b5eb7d34103208ff60d877c89 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Jun 2025 12:14:54 +0300 +Subject: RDMA/mlx5: Fix vport loopback for MPV device + +From: Patrisious Haddad + +[ Upstream commit a9a9e68954f29b1e197663f76289db4879fd51bb ] + +Always enable vport loopback for both MPV devices on driver start. + +Previously in some cases related to MPV RoCE, packets weren't correctly +executing loopback check at vport in FW, since it was disabled. +Due to complexity of identifying such cases for MPV always enable vport +loopback for both GVMIs when binding the slave to the master port. 
+ +Fixes: 0042f9e458a5 ("RDMA/mlx5: Enable vport loopback when user context or QP mandate") +Signed-off-by: Patrisious Haddad +Reviewed-by: Mark Bloch +Link: https://patch.msgid.link/d4298f5ebb2197459e9e7221c51ecd6a34699847.1750064969.git.leon@kernel.org +Signed-off-by: Leon Romanovsky +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/mlx5/main.c | 33 +++++++++++++++++++++++++++++++ + 1 file changed, 33 insertions(+) + +diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c +index 8c47cb4edd0a0..435c456a4fd5b 100644 +--- a/drivers/infiniband/hw/mlx5/main.c ++++ b/drivers/infiniband/hw/mlx5/main.c +@@ -1766,6 +1766,33 @@ static void deallocate_uars(struct mlx5_ib_dev *dev, + context->devx_uid); + } + ++static int mlx5_ib_enable_lb_mp(struct mlx5_core_dev *master, ++ struct mlx5_core_dev *slave) ++{ ++ int err; ++ ++ err = mlx5_nic_vport_update_local_lb(master, true); ++ if (err) ++ return err; ++ ++ err = mlx5_nic_vport_update_local_lb(slave, true); ++ if (err) ++ goto out; ++ ++ return 0; ++ ++out: ++ mlx5_nic_vport_update_local_lb(master, false); ++ return err; ++} ++ ++static void mlx5_ib_disable_lb_mp(struct mlx5_core_dev *master, ++ struct mlx5_core_dev *slave) ++{ ++ mlx5_nic_vport_update_local_lb(slave, false); ++ mlx5_nic_vport_update_local_lb(master, false); ++} ++ + int mlx5_ib_enable_lb(struct mlx5_ib_dev *dev, bool td, bool qp) + { + int err = 0; +@@ -3448,6 +3475,8 @@ static void mlx5_ib_unbind_slave_port(struct mlx5_ib_dev *ibdev, + + lockdep_assert_held(&mlx5_ib_multiport_mutex); + ++ mlx5_ib_disable_lb_mp(ibdev->mdev, mpi->mdev); ++ + mlx5_core_mp_event_replay(ibdev->mdev, + MLX5_DRIVER_EVENT_AFFILIATION_REMOVED, + NULL); +@@ -3543,6 +3572,10 @@ static bool mlx5_ib_bind_slave_port(struct mlx5_ib_dev *ibdev, + MLX5_DRIVER_EVENT_AFFILIATION_DONE, + &key); + ++ err = mlx5_ib_enable_lb_mp(ibdev->mdev, mpi->mdev); ++ if (err) ++ goto unbind; ++ + return true; + + unbind: +-- +2.39.5 + diff --git a/queue-6.12/rdma-mlx5-initialize-obj_event-obj_sub_list-before-x.patch b/queue-6.12/rdma-mlx5-initialize-obj_event-obj_sub_list-before-x.patch new file mode 100644 index 0000000000..d5bd52ca18 --- /dev/null +++ b/queue-6.12/rdma-mlx5-initialize-obj_event-obj_sub_list-before-x.patch @@ -0,0 +1,100 @@ +From 2ef2c534e5f51ea777efcab0127b27099eb90b5a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 17 Jun 2025 11:13:55 +0300 +Subject: RDMA/mlx5: Initialize obj_event->obj_sub_list before xa_insert + +From: Mark Zhang + +[ Upstream commit 8edab8a72d67742f87e9dc2e2b0cdfddda5dc29a ] + +The obj_event may be loaded immediately after inserted, then if the +list_head is not initialized then we may get a poisonous pointer. 
This +fixes the crash below: + + mlx5_core 0000:03:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 enhanced) + mlx5_core.sf mlx5_core.sf.4: firmware version: 32.38.3056 + mlx5_core 0000:03:00.0 en3f0pf0sf2002: renamed from eth0 + mlx5_core.sf mlx5_core.sf.4: Rate limit: 127 rates are supported, range: 0Mbps to 195312Mbps + IPv6: ADDRCONF(NETDEV_CHANGE): en3f0pf0sf2002: link becomes ready + Unable to handle kernel NULL pointer dereference at virtual address 0000000000000060 + Mem abort info: + ESR = 0x96000006 + EC = 0x25: DABT (current EL), IL = 32 bits + SET = 0, FnV = 0 + EA = 0, S1PTW = 0 + Data abort info: + ISV = 0, ISS = 0x00000006 + CM = 0, WnR = 0 + user pgtable: 4k pages, 48-bit VAs, pgdp=00000007760fb000 + [0000000000000060] pgd=000000076f6d7003, p4d=000000076f6d7003, pud=0000000777841003, pmd=0000000000000000 + Internal error: Oops: 96000006 [#1] SMP + Modules linked in: ipmb_host(OE) act_mirred(E) cls_flower(E) sch_ingress(E) mptcp_diag(E) udp_diag(E) raw_diag(E) unix_diag(E) tcp_diag(E) inet_diag(E) binfmt_misc(E) bonding(OE) rdma_ucm(OE) rdma_cm(OE) iw_cm(OE) ib_ipoib(OE) ib_cm(OE) isofs(E) cdrom(E) mst_pciconf(OE) ib_umad(OE) mlx5_ib(OE) ipmb_dev_int(OE) mlx5_core(OE) kpatch_15237886(OEK) mlxdevm(OE) auxiliary(OE) ib_uverbs(OE) ib_core(OE) psample(E) mlxfw(OE) tls(E) sunrpc(E) vfat(E) fat(E) crct10dif_ce(E) ghash_ce(E) sha1_ce(E) sbsa_gwdt(E) virtio_console(E) ext4(E) mbcache(E) jbd2(E) xfs(E) libcrc32c(E) mmc_block(E) virtio_net(E) net_failover(E) failover(E) sha2_ce(E) sha256_arm64(E) nvme(OE) nvme_core(OE) gpio_mlxbf3(OE) mlx_compat(OE) mlxbf_pmc(OE) i2c_mlxbf(OE) sdhci_of_dwcmshc(OE) pinctrl_mlxbf3(OE) mlxbf_pka(OE) gpio_generic(E) i2c_core(E) mmc_core(E) mlxbf_gige(OE) vitesse(E) pwr_mlxbf(OE) mlxbf_tmfifo(OE) micrel(E) mlxbf_bootctl(OE) virtio_ring(E) virtio(E) ipmi_devintf(E) ipmi_msghandler(E) + [last unloaded: mst_pci] + CPU: 11 PID: 20913 Comm: rte-worker-11 Kdump: loaded Tainted: G OE K 5.10.134-13.1.an8.aarch64 #1 + Hardware name: https://www.mellanox.com BlueField-3 SmartNIC Main Card/BlueField-3 SmartNIC Main Card, BIOS 4.2.2.12968 Oct 26 2023 + pstate: a0400089 (NzCv daIf +PAN -UAO -TCO BTYPE=--) + pc : dispatch_event_fd+0x68/0x300 [mlx5_ib] + lr : devx_event_notifier+0xcc/0x228 [mlx5_ib] + sp : ffff80001005bcf0 + x29: ffff80001005bcf0 x28: 0000000000000001 + x27: ffff244e0740a1d8 x26: ffff244e0740a1d0 + x25: ffffda56beff5ae0 x24: ffffda56bf911618 + x23: ffff244e0596a480 x22: ffff244e0596a480 + x21: ffff244d8312ad90 x20: ffff244e0596a480 + x19: fffffffffffffff0 x18: 0000000000000000 + x17: 0000000000000000 x16: ffffda56be66d620 + x15: 0000000000000000 x14: 0000000000000000 + x13: 0000000000000000 x12: 0000000000000000 + x11: 0000000000000040 x10: ffffda56bfcafb50 + x9 : ffffda5655c25f2c x8 : 0000000000000010 + x7 : 0000000000000000 x6 : ffff24545a2e24b8 + x5 : 0000000000000003 x4 : ffff80001005bd28 + x3 : 0000000000000000 x2 : 0000000000000000 + x1 : ffff244e0596a480 x0 : ffff244d8312ad90 + Call trace: + dispatch_event_fd+0x68/0x300 [mlx5_ib] + devx_event_notifier+0xcc/0x228 [mlx5_ib] + atomic_notifier_call_chain+0x58/0x80 + mlx5_eq_async_int+0x148/0x2b0 [mlx5_core] + atomic_notifier_call_chain+0x58/0x80 + irq_int_handler+0x20/0x30 [mlx5_core] + __handle_irq_event_percpu+0x60/0x220 + handle_irq_event_percpu+0x3c/0x90 + handle_irq_event+0x58/0x158 + handle_fasteoi_irq+0xfc/0x188 + generic_handle_irq+0x34/0x48 + ... 
+ +Fixes: 759738537142 ("IB/mlx5: Enable subscription for device events over DEVX") +Link: https://patch.msgid.link/r/3ce7f20e0d1a03dc7de6e57494ec4b8eaf1f05c2.1750147949.git.leon@kernel.org +Signed-off-by: Mark Zhang +Signed-off-by: Leon Romanovsky +Signed-off-by: Jason Gunthorpe +Signed-off-by: Sasha Levin +--- + drivers/infiniband/hw/mlx5/devx.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/infiniband/hw/mlx5/devx.c b/drivers/infiniband/hw/mlx5/devx.c +index 69999d8d24f37..f49f78b69ab9c 100644 +--- a/drivers/infiniband/hw/mlx5/devx.c ++++ b/drivers/infiniband/hw/mlx5/devx.c +@@ -1914,6 +1914,7 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table, + /* Level1 is valid for future use, no need to free */ + return -ENOMEM; + ++ INIT_LIST_HEAD(&obj_event->obj_sub_list); + err = xa_insert(&event->object_ids, + key_level2, + obj_event, +@@ -1922,7 +1923,6 @@ subscribe_event_xa_alloc(struct mlx5_devx_event_table *devx_event_table, + kfree(obj_event); + return err; + } +- INIT_LIST_HEAD(&obj_event->obj_sub_list); + } + + return 0; +-- +2.39.5 + diff --git a/queue-6.12/rdma-rxe-fix-trying-to-register-non-static-key-in-rx.patch b/queue-6.12/rdma-rxe-fix-trying-to-register-non-static-key-in-rx.patch new file mode 100644 index 0000000000..0a6f27bf5c --- /dev/null +++ b/queue-6.12/rdma-rxe-fix-trying-to-register-non-static-key-in-rx.patch @@ -0,0 +1,90 @@ +From 4e38b9e65e3f2a9109a0c41415aac37cf8b4f9a7 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Jul 2025 14:19:39 -0400 +Subject: RDMA/rxe: Fix "trying to register non-static key in + rxe_qp_do_cleanup" bug + +From: Zhu Yanjun + +[ Upstream commit 1c7eec4d5f3b39cdea2153abaebf1b7229a47072 ] + +Call Trace: + + __dump_stack lib/dump_stack.c:94 [inline] + dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120 + assign_lock_key kernel/locking/lockdep.c:986 [inline] + register_lock_class+0x4a3/0x4c0 kernel/locking/lockdep.c:1300 + __lock_acquire+0x99/0x1ba0 kernel/locking/lockdep.c:5110 + lock_acquire kernel/locking/lockdep.c:5866 [inline] + lock_acquire+0x179/0x350 kernel/locking/lockdep.c:5823 + __timer_delete_sync+0x152/0x1b0 kernel/time/timer.c:1644 + rxe_qp_do_cleanup+0x5c3/0x7e0 drivers/infiniband/sw/rxe/rxe_qp.c:815 + execute_in_process_context+0x3a/0x160 kernel/workqueue.c:4596 + __rxe_cleanup+0x267/0x3c0 drivers/infiniband/sw/rxe/rxe_pool.c:232 + rxe_create_qp+0x3f7/0x5f0 drivers/infiniband/sw/rxe/rxe_verbs.c:604 + create_qp+0x62d/0xa80 drivers/infiniband/core/verbs.c:1250 + ib_create_qp_kernel+0x9f/0x310 drivers/infiniband/core/verbs.c:1361 + ib_create_qp include/rdma/ib_verbs.h:3803 [inline] + rdma_create_qp+0x10c/0x340 drivers/infiniband/core/cma.c:1144 + rds_ib_setup_qp+0xc86/0x19a0 net/rds/ib_cm.c:600 + rds_ib_cm_initiate_connect+0x1e8/0x3d0 net/rds/ib_cm.c:944 + rds_rdma_cm_event_handler_cmn+0x61f/0x8c0 net/rds/rdma_transport.c:109 + cma_cm_event_handler+0x94/0x300 drivers/infiniband/core/cma.c:2184 + cma_work_handler+0x15b/0x230 drivers/infiniband/core/cma.c:3042 + process_one_work+0x9cc/0x1b70 kernel/workqueue.c:3238 + process_scheduled_works kernel/workqueue.c:3319 [inline] + worker_thread+0x6c8/0xf10 kernel/workqueue.c:3400 + kthread+0x3c2/0x780 kernel/kthread.c:464 + ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:153 + ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245 + + +The root cause is as below: + +In the function rxe_create_qp, the function rxe_qp_from_init is called +to create qp, if this function rxe_qp_from_init fails, rxe_cleanup will +be called to handle all 
the allocated resources, including the timers: +retrans_timer and rnr_nak_timer. + +The function rxe_qp_from_init calls the function rxe_qp_init_req to +initialize the timers: retrans_timer and rnr_nak_timer. + +But these timers are initialized in the end of rxe_qp_init_req. +If some errors occur before the initialization of these timers, this +problem will occur. + +The solution is to check whether these timers are initialized or not. +If these timers are not initialized, ignore these timers. + +Fixes: 8700e3e7c485 ("Soft RoCE driver") +Reported-by: syzbot+4edb496c3cad6e953a31@syzkaller.appspotmail.com +Closes: https://syzkaller.appspot.com/bug?extid=4edb496c3cad6e953a31 +Signed-off-by: Zhu Yanjun +Link: https://patch.msgid.link/20250419080741.1515231-1-yanjun.zhu@linux.dev +Signed-off-by: Leon Romanovsky +Signed-off-by: Sasha Levin +--- + drivers/infiniband/sw/rxe/rxe_qp.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +diff --git a/drivers/infiniband/sw/rxe/rxe_qp.c b/drivers/infiniband/sw/rxe/rxe_qp.c +index 91d329e903083..8b805b16136e5 100644 +--- a/drivers/infiniband/sw/rxe/rxe_qp.c ++++ b/drivers/infiniband/sw/rxe/rxe_qp.c +@@ -811,7 +811,12 @@ static void rxe_qp_do_cleanup(struct work_struct *work) + spin_unlock_irqrestore(&qp->state_lock, flags); + qp->qp_timeout_jiffies = 0; + +- if (qp_type(qp) == IB_QPT_RC) { ++ /* In the function timer_setup, .function is initialized. If .function ++ * is NULL, it indicates the function timer_setup is not called, the ++ * timer is not initialized. Or else, the timer is initialized. ++ */ ++ if (qp_type(qp) == IB_QPT_RC && qp->retrans_timer.function && ++ qp->rnr_nak_timer.function) { + del_timer_sync(&qp->retrans_timer); + del_timer_sync(&qp->rnr_nak_timer); + } +-- +2.39.5 + diff --git a/queue-6.12/regulator-fan53555-add-enable_time-support-and-soft-.patch b/queue-6.12/regulator-fan53555-add-enable_time-support-and-soft-.patch new file mode 100644 index 0000000000..1bccb1cdff --- /dev/null +++ b/queue-6.12/regulator-fan53555-add-enable_time-support-and-soft-.patch @@ -0,0 +1,149 @@ +From 8d46a052b5903b84f5c690107720ffd16b56fad8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Jun 2025 21:04:18 +0200 +Subject: regulator: fan53555: add enable_time support and soft-start times + +From: Heiko Stuebner + +[ Upstream commit 8acfb165a492251a08a22a4fa6497a131e8c2609 ] + +The datasheets for all the fan53555 variants (and clones using the same +interface) define so called soft start times, from enabling the regulator +until at least some percentage of the output (i.e. 92% for the rk860x +types) are available. + +The regulator framework supports this with the enable_time property +but currently the fan53555 driver does not define enable_times for any +variant. + +I ran into a problem with this while testing the new driver for the +Rockchip NPUs (rocket), which does runtime-pm including disabling and +enabling a rk8602 as needed. When reenabling the regulator while running +a load, fatal hangs could be observed while enabling the associated +power-domain, which the regulator supplies. + +Experimentally setting the regulator to always-on, made the issue +disappear, leading to the missing delay to let power stabilize. +And as expected, setting the enable-time to a non-zero value +according to the datasheet also resolved the regulator-issue. + +The datasheets in nearly all cases only specify "typical" values, +except for the fan53555 type 08. There both a typical and maximum +value are listed - 40uS apart. 
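+
+For context, the regulator core uses enable_time (in microseconds) to
+delay completion of regulator_enable() until the output may be assumed
+stable, so consumers effectively see the following (a simplified
+sketch of the framework behaviour, not the actual core code):
+
+  ret = rdev->desc->ops->enable(rdev);
+  if (ret)
+          return ret;
+
+  /* block for the soft-start time; only afterwards may dependent
+   * consumers, e.g. a power-domain, be switched on */
+  delay = rdev->desc->enable_time;
+  usleep_range(delay, delay + 100);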
+ +For all typical values I've added 100uS to be on the safe side. +Individual details for the relevant regulators below: + +- fan53526: + The datasheet for all variants lists a typical value of 150uS, so + make that 250uS with safety margin. +- fan53555: + types 08 and 18 (unsupported) are given a typical enable time of 135uS + but also a maximum of 175uS so use that value. All the other types only + have a typical time in the datasheet of 300uS, so give a bit margin by + setting it to 400uS. +- rk8600 + rk8602: + Datasheet reports a typical value of 260us, so use 360uS to be safe. +- syr82x + syr83x: + All datasheets report typical soft-start values of 300uS for these + regulators, so use 400uS. +- tcs452x: + Datasheet sadly does not report a soft-start time, so I've not set + an enable-time + +Signed-off-by: Heiko Stuebner +Link: https://patch.msgid.link/20250606190418.478633-1-heiko@sntech.de +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + drivers/regulator/fan53555.c | 14 ++++++++++++++ + 1 file changed, 14 insertions(+) + +diff --git a/drivers/regulator/fan53555.c b/drivers/regulator/fan53555.c +index bd9447dac5967..c282236959b18 100644 +--- a/drivers/regulator/fan53555.c ++++ b/drivers/regulator/fan53555.c +@@ -147,6 +147,7 @@ struct fan53555_device_info { + unsigned int slew_mask; + const unsigned int *ramp_delay_table; + unsigned int n_ramp_values; ++ unsigned int enable_time; + unsigned int slew_rate; + }; + +@@ -282,6 +283,7 @@ static int fan53526_voltages_setup_fairchild(struct fan53555_device_info *di) + di->slew_mask = CTL_SLEW_MASK; + di->ramp_delay_table = slew_rates; + di->n_ramp_values = ARRAY_SIZE(slew_rates); ++ di->enable_time = 250; + di->vsel_count = FAN53526_NVOLTAGES; + + return 0; +@@ -296,10 +298,12 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di) + case FAN53555_CHIP_REV_00: + di->vsel_min = 600000; + di->vsel_step = 10000; ++ di->enable_time = 400; + break; + case FAN53555_CHIP_REV_13: + di->vsel_min = 800000; + di->vsel_step = 10000; ++ di->enable_time = 400; + break; + default: + dev_err(di->dev, +@@ -311,13 +315,19 @@ static int fan53555_voltages_setup_fairchild(struct fan53555_device_info *di) + case FAN53555_CHIP_ID_01: + case FAN53555_CHIP_ID_03: + case FAN53555_CHIP_ID_05: ++ di->vsel_min = 600000; ++ di->vsel_step = 10000; ++ di->enable_time = 400; ++ break; + case FAN53555_CHIP_ID_08: + di->vsel_min = 600000; + di->vsel_step = 10000; ++ di->enable_time = 175; + break; + case FAN53555_CHIP_ID_04: + di->vsel_min = 603000; + di->vsel_step = 12826; ++ di->enable_time = 400; + break; + default: + dev_err(di->dev, +@@ -350,6 +360,7 @@ static int fan53555_voltages_setup_rockchip(struct fan53555_device_info *di) + di->slew_mask = CTL_SLEW_MASK; + di->ramp_delay_table = slew_rates; + di->n_ramp_values = ARRAY_SIZE(slew_rates); ++ di->enable_time = 360; + di->vsel_count = FAN53555_NVOLTAGES; + + return 0; +@@ -372,6 +383,7 @@ static int rk8602_voltages_setup_rockchip(struct fan53555_device_info *di) + di->slew_mask = CTL_SLEW_MASK; + di->ramp_delay_table = slew_rates; + di->n_ramp_values = ARRAY_SIZE(slew_rates); ++ di->enable_time = 360; + di->vsel_count = RK8602_NVOLTAGES; + + return 0; +@@ -395,6 +407,7 @@ static int fan53555_voltages_setup_silergy(struct fan53555_device_info *di) + di->slew_mask = CTL_SLEW_MASK; + di->ramp_delay_table = slew_rates; + di->n_ramp_values = ARRAY_SIZE(slew_rates); ++ di->enable_time = 400; + di->vsel_count = FAN53555_NVOLTAGES; + + return 0; +@@ -594,6 +607,7 @@ static int 
fan53555_regulator_register(struct fan53555_device_info *di, + rdesc->ramp_mask = di->slew_mask; + rdesc->ramp_delay_table = di->ramp_delay_table; + rdesc->n_ramp_values = di->n_ramp_values; ++ rdesc->enable_time = di->enable_time; + rdesc->owner = THIS_MODULE; + + rdev = devm_regulator_register(di->dev, &di->desc, config); +-- +2.39.5 + diff --git a/queue-6.12/remoteproc-k3-call-of_node_put-rmem_np-only-once-in-.patch b/queue-6.12/remoteproc-k3-call-of_node_put-rmem_np-only-once-in-.patch new file mode 100644 index 0000000000..a0bd49dfa7 --- /dev/null +++ b/queue-6.12/remoteproc-k3-call-of_node_put-rmem_np-only-once-in-.patch @@ -0,0 +1,84 @@ +From 4e2183f599318b43fc878876bf00d7d39a8d6708 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 24 Sep 2024 14:28:35 +0200 +Subject: remoteproc: k3: Call of_node_put(rmem_np) only once in three + functions + +From: Markus Elfring + +[ Upstream commit a36d9f96d1cf7c0308bf091e810bec06ce492c3d ] + +An of_node_put(rmem_np) call was immediately used after a pointer check +for a of_reserved_mem_lookup() call in three function implementations. +Thus call such a function only once instead directly before the checks. + +This issue was transformed by using the Coccinelle software. + +Signed-off-by: Markus Elfring +Link: https://lore.kernel.org/r/c46b06f9-72b1-420b-9dce-a392b982140e@web.de +Signed-off-by: Mathieu Poirier +Stable-dep-of: 701177511abd ("remoteproc: k3-r5: Refactor sequential core power up/down operations") +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_dsp_remoteproc.c | 6 ++---- + drivers/remoteproc/ti_k3_m4_remoteproc.c | 6 ++---- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 3 +-- + 3 files changed, 5 insertions(+), 10 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_dsp_remoteproc.c b/drivers/remoteproc/ti_k3_dsp_remoteproc.c +index 2ae0655ddf1d2..73be3d2167914 100644 +--- a/drivers/remoteproc/ti_k3_dsp_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_dsp_remoteproc.c +@@ -568,11 +568,9 @@ static int k3_dsp_reserved_mem_init(struct k3_dsp_rproc *kproc) + return -EINVAL; + + rmem = of_reserved_mem_lookup(rmem_np); +- if (!rmem) { +- of_node_put(rmem_np); +- return -EINVAL; +- } + of_node_put(rmem_np); ++ if (!rmem) ++ return -EINVAL; + + kproc->rmem[i].bus_addr = rmem->base; + /* 64-bit address regions currently not supported */ +diff --git a/drivers/remoteproc/ti_k3_m4_remoteproc.c b/drivers/remoteproc/ti_k3_m4_remoteproc.c +index fba6e393635e3..6cd50b16a8e82 100644 +--- a/drivers/remoteproc/ti_k3_m4_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_m4_remoteproc.c +@@ -433,11 +433,9 @@ static int k3_m4_reserved_mem_init(struct k3_m4_rproc *kproc) + return -EINVAL; + + rmem = of_reserved_mem_lookup(rmem_np); +- if (!rmem) { +- of_node_put(rmem_np); +- return -EINVAL; +- } + of_node_put(rmem_np); ++ if (!rmem) ++ return -EINVAL; + + kproc->rmem[i].bus_addr = rmem->base; + /* 64-bit address regions currently not supported */ +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index 4894461aa65f3..6cbe74486ebd4 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -993,12 +993,11 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + } + + rmem = of_reserved_mem_lookup(rmem_np); ++ of_node_put(rmem_np); + if (!rmem) { +- of_node_put(rmem_np); + ret = -EINVAL; + goto unmap_rmem; + } +- of_node_put(rmem_np); + + kproc->rmem[i].bus_addr = rmem->base; + /* +-- +2.39.5 + diff --git 
a/queue-6.12/remoteproc-k3-r5-add-devm-action-to-release-reserved.patch b/queue-6.12/remoteproc-k3-r5-add-devm-action-to-release-reserved.patch new file mode 100644 index 0000000000..352f9d6ddf --- /dev/null +++ b/queue-6.12/remoteproc-k3-r5-add-devm-action-to-release-reserved.patch @@ -0,0 +1,81 @@ +From f4cc28a3b2c0b484704ef2e8656da7529d803056 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Dec 2024 16:35:41 +0530 +Subject: remoteproc: k3-r5: Add devm action to release reserved memory + +From: Beleswar Padhi + +[ Upstream commit 972361e397797320a624d1a5b457520c10ab4a28 ] + +Use a device lifecycle managed action to release reserved memory. This +helps prevent mistakes like releasing out of order in cleanup functions +and forgetting to release on error paths. + +Signed-off-by: Beleswar Padhi +Reviewed-by: Andrew Davis +Link: https://lore.kernel.org/r/20241219110545.1898883-2-b-padhi@ti.com +Signed-off-by: Mathieu Poirier +Stable-dep-of: 701177511abd ("remoteproc: k3-r5: Refactor sequential core power up/down operations") +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 21 +++++++++++++-------- + 1 file changed, 13 insertions(+), 8 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index 6cbe74486ebd4..a9ec65c12fb93 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -947,6 +947,13 @@ static int k3_r5_rproc_configure(struct k3_r5_rproc *kproc) + return ret; + } + ++static void k3_r5_mem_release(void *data) ++{ ++ struct device *dev = data; ++ ++ of_reserved_mem_device_release(dev); ++} ++ + static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + { + struct device *dev = kproc->dev; +@@ -977,12 +984,14 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + return ret; + } + ++ ret = devm_add_action_or_reset(dev, k3_r5_mem_release, dev); ++ if (ret) ++ return ret; ++ + num_rmems--; + kproc->rmem = kcalloc(num_rmems, sizeof(*kproc->rmem), GFP_KERNEL); +- if (!kproc->rmem) { +- ret = -ENOMEM; +- goto release_rmem; +- } ++ if (!kproc->rmem) ++ return -ENOMEM; + + /* use remaining reserved memory regions for static carveouts */ + for (i = 0; i < num_rmems; i++) { +@@ -1033,8 +1042,6 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + for (i--; i >= 0; i--) + iounmap(kproc->rmem[i].cpu_addr); + kfree(kproc->rmem); +-release_rmem: +- of_reserved_mem_device_release(dev); + return ret; + } + +@@ -1045,8 +1052,6 @@ static void k3_r5_reserved_mem_exit(struct k3_r5_rproc *kproc) + for (i = 0; i < kproc->num_rmems; i++) + iounmap(kproc->rmem[i].cpu_addr); + kfree(kproc->rmem); +- +- of_reserved_mem_device_release(kproc->dev); + } + + /* +-- +2.39.5 + diff --git a/queue-6.12/remoteproc-k3-r5-refactor-sequential-core-power-up-d.patch b/queue-6.12/remoteproc-k3-r5-refactor-sequential-core-power-up-d.patch new file mode 100644 index 0000000000..70f5b3be2c --- /dev/null +++ b/queue-6.12/remoteproc-k3-r5-refactor-sequential-core-power-up-d.patch @@ -0,0 +1,231 @@ +From d361df26b046b7bc0bbdbb2b172f6f0d373702be Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 13 May 2025 11:14:37 +0530 +Subject: remoteproc: k3-r5: Refactor sequential core power up/down operations + +From: Beleswar Padhi + +[ Upstream commit 701177511abd295e0fc2499796e466d8ff12165c ] + +The existing implementation of the waiting mechanism in +"k3_r5_cluster_rproc_init()" waits for the "released_from_reset" flag to +be set as part of the firmware boot 
process in "k3_r5_rproc_start()". +The "k3_r5_cluster_rproc_init()" function is invoked in the probe +routine which causes unexpected failures in cases where the firmware is +unavailable at boot time, resulting in probe failure and removal of the +remoteproc handles in the sysfs paths. + +To address this, the waiting mechanism is refactored out of the probe +routine into the appropriate "k3_r5_rproc_{prepare/unprepare}()" +functions. This allows the probe routine to complete without depending +on firmware booting, while still maintaining the required +power-synchronization between cores. + +Further, this wait mechanism is dropped from +"k3_r5_rproc_{start/stop}()" functions as they deal with Core Run/Halt +operations, and as such, there is no constraint in Running or Halting +the cores of a cluster in order. + +Fixes: 61f6f68447ab ("remoteproc: k3-r5: Wait for core0 power-up before powering up core1") +Signed-off-by: Beleswar Padhi +Tested-by: Judith Mendez +Reviewed-by: Andrew Davis +Link: https://lore.kernel.org/r/20250513054510.3439842-4-b-padhi@ti.com +Signed-off-by: Mathieu Poirier +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 110 +++++++++++++---------- + 1 file changed, 63 insertions(+), 47 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index 055bdd36ef865..941bb130c85c4 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -440,13 +440,36 @@ static int k3_r5_rproc_prepare(struct rproc *rproc) + { + struct k3_r5_rproc *kproc = rproc->priv; + struct k3_r5_cluster *cluster = kproc->cluster; +- struct k3_r5_core *core = kproc->core; ++ struct k3_r5_core *core = kproc->core, *core0, *core1; + struct device *dev = kproc->dev; + u32 ctrl = 0, cfg = 0, stat = 0; + u64 boot_vec = 0; + bool mem_init_dis; + int ret; + ++ /* ++ * R5 cores require to be powered on sequentially, core0 should be in ++ * higher power state than core1 in a cluster. So, wait for core0 to ++ * power up before proceeding to core1 and put timeout of 2sec. This ++ * waiting mechanism is necessary because rproc_auto_boot_callback() for ++ * core1 can be called before core0 due to thread execution order. ++ * ++ * By placing the wait mechanism here in .prepare() ops, this condition ++ * is enforced for rproc boot requests from sysfs as well. ++ */ ++ core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem); ++ core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem); ++ if (cluster->mode == CLUSTER_MODE_SPLIT && core == core1 && ++ !core0->released_from_reset) { ++ ret = wait_event_interruptible_timeout(cluster->core_transition, ++ core0->released_from_reset, ++ msecs_to_jiffies(2000)); ++ if (ret <= 0) { ++ dev_err(dev, "can not power up core1 before core0"); ++ return -EPERM; ++ } ++ } ++ + ret = ti_sci_proc_get_status(core->tsp, &boot_vec, &cfg, &ctrl, &stat); + if (ret < 0) + return ret; +@@ -462,6 +485,14 @@ static int k3_r5_rproc_prepare(struct rproc *rproc) + return ret; + } + ++ /* ++ * Notify all threads in the wait queue when core0 state has changed so ++ * that threads waiting for this condition can be executed. ++ */ ++ core->released_from_reset = true; ++ if (core == core0) ++ wake_up_interruptible(&cluster->core_transition); ++ + /* + * Newer IP revisions like on J7200 SoCs support h/w auto-initialization + * of TCMs, so there is no need to perform the s/w memzero. 
This bit is +@@ -507,10 +538,30 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc) + { + struct k3_r5_rproc *kproc = rproc->priv; + struct k3_r5_cluster *cluster = kproc->cluster; +- struct k3_r5_core *core = kproc->core; ++ struct k3_r5_core *core = kproc->core, *core0, *core1; + struct device *dev = kproc->dev; + int ret; + ++ /* ++ * Ensure power-down of cores is sequential in split mode. Core1 must ++ * power down before Core0 to maintain the expected state. By placing ++ * the wait mechanism here in .unprepare() ops, this condition is ++ * enforced for rproc stop or shutdown requests from sysfs and device ++ * removal as well. ++ */ ++ core0 = list_first_entry(&cluster->cores, struct k3_r5_core, elem); ++ core1 = list_last_entry(&cluster->cores, struct k3_r5_core, elem); ++ if (cluster->mode == CLUSTER_MODE_SPLIT && core == core0 && ++ core1->released_from_reset) { ++ ret = wait_event_interruptible_timeout(cluster->core_transition, ++ !core1->released_from_reset, ++ msecs_to_jiffies(2000)); ++ if (ret <= 0) { ++ dev_err(dev, "can not power down core0 before core1"); ++ return -EPERM; ++ } ++ } ++ + /* Re-use LockStep-mode reset logic for Single-CPU mode */ + ret = (cluster->mode == CLUSTER_MODE_LOCKSTEP || + cluster->mode == CLUSTER_MODE_SINGLECPU) ? +@@ -518,6 +569,14 @@ static int k3_r5_rproc_unprepare(struct rproc *rproc) + if (ret) + dev_err(dev, "unable to disable cores, ret = %d\n", ret); + ++ /* ++ * Notify all threads in the wait queue when core1 state has changed so ++ * that threads waiting for this condition can be executed. ++ */ ++ core->released_from_reset = false; ++ if (core == core1) ++ wake_up_interruptible(&cluster->core_transition); ++ + return ret; + } + +@@ -543,7 +602,7 @@ static int k3_r5_rproc_start(struct rproc *rproc) + struct k3_r5_rproc *kproc = rproc->priv; + struct k3_r5_cluster *cluster = kproc->cluster; + struct device *dev = kproc->dev; +- struct k3_r5_core *core0, *core; ++ struct k3_r5_core *core; + u32 boot_addr; + int ret; + +@@ -565,21 +624,9 @@ static int k3_r5_rproc_start(struct rproc *rproc) + goto unroll_core_run; + } + } else { +- /* do not allow core 1 to start before core 0 */ +- core0 = list_first_entry(&cluster->cores, struct k3_r5_core, +- elem); +- if (core != core0 && core0->rproc->state == RPROC_OFFLINE) { +- dev_err(dev, "%s: can not start core 1 before core 0\n", +- __func__); +- return -EPERM; +- } +- + ret = k3_r5_core_run(core); + if (ret) + return ret; +- +- core->released_from_reset = true; +- wake_up_interruptible(&cluster->core_transition); + } + + return 0; +@@ -620,8 +667,7 @@ static int k3_r5_rproc_stop(struct rproc *rproc) + { + struct k3_r5_rproc *kproc = rproc->priv; + struct k3_r5_cluster *cluster = kproc->cluster; +- struct device *dev = kproc->dev; +- struct k3_r5_core *core1, *core = kproc->core; ++ struct k3_r5_core *core = kproc->core; + int ret; + + /* halt all applicable cores */ +@@ -634,16 +680,6 @@ static int k3_r5_rproc_stop(struct rproc *rproc) + } + } + } else { +- /* do not allow core 0 to stop before core 1 */ +- core1 = list_last_entry(&cluster->cores, struct k3_r5_core, +- elem); +- if (core != core1 && core1->rproc->state != RPROC_OFFLINE) { +- dev_err(dev, "%s: can not stop core 0 before core 1\n", +- __func__); +- ret = -EPERM; +- goto out; +- } +- + ret = k3_r5_core_halt(core); + if (ret) + goto out; +@@ -1271,26 +1307,6 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + cluster->mode == CLUSTER_MODE_SINGLECPU || + cluster->mode == CLUSTER_MODE_SINGLECORE) + break; +- +- 
/* +- * R5 cores require to be powered on sequentially, core0 +- * should be in higher power state than core1 in a cluster +- * So, wait for current core to power up before proceeding +- * to next core and put timeout of 2sec for each core. +- * +- * This waiting mechanism is necessary because +- * rproc_auto_boot_callback() for core1 can be called before +- * core0 due to thread execution order. +- */ +- ret = wait_event_interruptible_timeout(cluster->core_transition, +- core->released_from_reset, +- msecs_to_jiffies(2000)); +- if (ret <= 0) { +- dev_err(dev, +- "Timed out waiting for %s core to power up!\n", +- rproc->name); +- goto out; +- } + } + + return 0; +-- +2.39.5 + diff --git a/queue-6.12/remoteproc-k3-r5-use-devm_ioremap_wc-helper.patch b/queue-6.12/remoteproc-k3-r5-use-devm_ioremap_wc-helper.patch new file mode 100644 index 0000000000..12e351fc7c --- /dev/null +++ b/queue-6.12/remoteproc-k3-r5-use-devm_ioremap_wc-helper.patch @@ -0,0 +1,116 @@ +From 28135bf5049c5c74d3155abb64be85c0229d728c Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Dec 2024 16:35:43 +0530 +Subject: remoteproc: k3-r5: Use devm_ioremap_wc() helper + +From: Beleswar Padhi + +[ Upstream commit a572439f7143db5ea8446b7d755a16dfb12da7c7 ] + +Use a device lifecycle managed ioremap helper function. This helps +prevent mistakes like unmapping out of order in cleanup functions and +forgetting to unmap on all error paths. + +Signed-off-by: Beleswar Padhi +Reviewed-by: Andrew Davis +Link: https://lore.kernel.org/r/20241219110545.1898883-4-b-padhi@ti.com +Signed-off-by: Mathieu Poirier +Stable-dep-of: 701177511abd ("remoteproc: k3-r5: Refactor sequential core power up/down operations") +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 38 +++++------------------- + 1 file changed, 8 insertions(+), 30 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index c730ba09b92c7..c38b76b4943d5 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -996,17 +996,13 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + /* use remaining reserved memory regions for static carveouts */ + for (i = 0; i < num_rmems; i++) { + rmem_np = of_parse_phandle(np, "memory-region", i + 1); +- if (!rmem_np) { +- ret = -EINVAL; +- goto unmap_rmem; +- } ++ if (!rmem_np) ++ return -EINVAL; + + rmem = of_reserved_mem_lookup(rmem_np); + of_node_put(rmem_np); +- if (!rmem) { +- ret = -EINVAL; +- goto unmap_rmem; +- } ++ if (!rmem) ++ return -EINVAL; + + kproc->rmem[i].bus_addr = rmem->base; + /* +@@ -1021,12 +1017,11 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + */ + kproc->rmem[i].dev_addr = (u32)rmem->base; + kproc->rmem[i].size = rmem->size; +- kproc->rmem[i].cpu_addr = ioremap_wc(rmem->base, rmem->size); ++ kproc->rmem[i].cpu_addr = devm_ioremap_wc(dev, rmem->base, rmem->size); + if (!kproc->rmem[i].cpu_addr) { + dev_err(dev, "failed to map reserved memory#%d at %pa of size %pa\n", + i + 1, &rmem->base, &rmem->size); +- ret = -ENOMEM; +- goto unmap_rmem; ++ return -ENOMEM; + } + + dev_dbg(dev, "reserved memory%d: bus addr %pa size 0x%zx va %pK da 0x%x\n", +@@ -1037,19 +1032,6 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + kproc->num_rmems = num_rmems; + + return 0; +- +-unmap_rmem: +- for (i--; i >= 0; i--) +- iounmap(kproc->rmem[i].cpu_addr); +- return ret; +-} +- +-static void k3_r5_reserved_mem_exit(struct k3_r5_rproc *kproc) +-{ +- int i; +- 
+- for (i = 0; i < kproc->num_rmems; i++) +- iounmap(kproc->rmem[i].cpu_addr); + } + + /* +@@ -1278,8 +1260,8 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + + ret = rproc_add(rproc); + if (ret) { +- dev_err(dev, "rproc_add failed, ret = %d\n", ret); +- goto err_add; ++ dev_err_probe(dev, ret, "rproc_add failed\n"); ++ goto out; + } + + /* create only one rproc in lockstep, single-cpu or +@@ -1325,8 +1307,6 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + + err_powerup: + rproc_del(rproc); +-err_add: +- k3_r5_reserved_mem_exit(kproc); + out: + /* undo core0 upon any failures on core1 in split-mode */ + if (cluster->mode == CLUSTER_MODE_SPLIT && core == core1) { +@@ -1371,8 +1351,6 @@ static void k3_r5_cluster_rproc_exit(void *data) + mbox_free_channel(kproc->mbox); + + rproc_del(rproc); +- +- k3_r5_reserved_mem_exit(kproc); + } + } + +-- +2.39.5 + diff --git a/queue-6.12/remoteproc-k3-r5-use-devm_kcalloc-helper.patch b/queue-6.12/remoteproc-k3-r5-use-devm_kcalloc-helper.patch new file mode 100644 index 0000000000..b7e12f532f --- /dev/null +++ b/queue-6.12/remoteproc-k3-r5-use-devm_kcalloc-helper.patch @@ -0,0 +1,55 @@ +From 88057654ef7f9037746428d741266f97e391bf4a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Dec 2024 16:35:42 +0530 +Subject: remoteproc: k3-r5: Use devm_kcalloc() helper + +From: Beleswar Padhi + +[ Upstream commit f2e3d0d70986b1f135963dc28462fce4e65c0fc4 ] + +Use a device lifecycle managed action to free memory. This helps prevent +mistakes like freeing out of order in cleanup functions and forgetting +to free on error paths. + +Signed-off-by: Beleswar Padhi +Reviewed-by: Andrew Davis +Link: https://lore.kernel.org/r/20241219110545.1898883-3-b-padhi@ti.com +Signed-off-by: Mathieu Poirier +Stable-dep-of: 701177511abd ("remoteproc: k3-r5: Refactor sequential core power up/down operations") +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 4 +--- + 1 file changed, 1 insertion(+), 3 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index a9ec65c12fb93..c730ba09b92c7 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -989,7 +989,7 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + return ret; + + num_rmems--; +- kproc->rmem = kcalloc(num_rmems, sizeof(*kproc->rmem), GFP_KERNEL); ++ kproc->rmem = devm_kcalloc(dev, num_rmems, sizeof(*kproc->rmem), GFP_KERNEL); + if (!kproc->rmem) + return -ENOMEM; + +@@ -1041,7 +1041,6 @@ static int k3_r5_reserved_mem_init(struct k3_r5_rproc *kproc) + unmap_rmem: + for (i--; i >= 0; i--) + iounmap(kproc->rmem[i].cpu_addr); +- kfree(kproc->rmem); + return ret; + } + +@@ -1051,7 +1050,6 @@ static void k3_r5_reserved_mem_exit(struct k3_r5_rproc *kproc) + + for (i = 0; i < kproc->num_rmems; i++) + iounmap(kproc->rmem[i].cpu_addr); +- kfree(kproc->rmem); + } + + /* +-- +2.39.5 + diff --git a/queue-6.12/remoteproc-k3-r5-use-devm_rproc_add-helper.patch b/queue-6.12/remoteproc-k3-r5-use-devm_rproc_add-helper.patch new file mode 100644 index 0000000000..a79199c22b --- /dev/null +++ b/queue-6.12/remoteproc-k3-r5-use-devm_rproc_add-helper.patch @@ -0,0 +1,66 @@ +From fb83cd249e0c4a48655df0fb3743516ccd554e77 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Dec 2024 16:35:44 +0530 +Subject: remoteproc: k3-r5: Use devm_rproc_add() helper + +From: Beleswar Padhi + +[ Upstream commit de182d2f5ca0801425919f38ec955033c09c601d ] + +Use 
device lifecycle managed devm_rproc_add() helper function. This +helps prevent mistakes like deleting out of order in cleanup functions +and forgetting to delete on all error paths. + +Signed-off-by: Beleswar Padhi +Reviewed-by: Andrew Davis +Link: https://lore.kernel.org/r/20241219110545.1898883-5-b-padhi@ti.com +Signed-off-by: Mathieu Poirier +Stable-dep-of: 701177511abd ("remoteproc: k3-r5: Refactor sequential core power up/down operations") +Signed-off-by: Sasha Levin +--- + drivers/remoteproc/ti_k3_r5_remoteproc.c | 8 ++------ + 1 file changed, 2 insertions(+), 6 deletions(-) + +diff --git a/drivers/remoteproc/ti_k3_r5_remoteproc.c b/drivers/remoteproc/ti_k3_r5_remoteproc.c +index c38b76b4943d5..055bdd36ef865 100644 +--- a/drivers/remoteproc/ti_k3_r5_remoteproc.c ++++ b/drivers/remoteproc/ti_k3_r5_remoteproc.c +@@ -1258,7 +1258,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + goto out; + } + +- ret = rproc_add(rproc); ++ ret = devm_rproc_add(dev, rproc); + if (ret) { + dev_err_probe(dev, ret, "rproc_add failed\n"); + goto out; +@@ -1289,7 +1289,7 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + dev_err(dev, + "Timed out waiting for %s core to power up!\n", + rproc->name); +- goto err_powerup; ++ goto out; + } + } + +@@ -1305,8 +1305,6 @@ static int k3_r5_cluster_rproc_init(struct platform_device *pdev) + } + } + +-err_powerup: +- rproc_del(rproc); + out: + /* undo core0 upon any failures on core1 in split-mode */ + if (cluster->mode == CLUSTER_MODE_SPLIT && core == core1) { +@@ -1349,8 +1347,6 @@ static void k3_r5_cluster_rproc_exit(void *data) + } + + mbox_free_channel(kproc->mbox); +- +- rproc_del(rproc); + } + } + +-- +2.39.5 + diff --git a/queue-6.12/rose-fix-dangling-neighbour-pointers-in-rose_rt_devi.patch b/queue-6.12/rose-fix-dangling-neighbour-pointers-in-rose_rt_devi.patch new file mode 100644 index 0000000000..722d83f0da --- /dev/null +++ b/queue-6.12/rose-fix-dangling-neighbour-pointers-in-rose_rt_devi.patch @@ -0,0 +1,84 @@ +From ca7c9e05624ce90699dfabfcb5f4f5099a4c9b53 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 29 Jun 2025 12:06:31 +0900 +Subject: rose: fix dangling neighbour pointers in rose_rt_device_down() + +From: Kohei Enju + +[ Upstream commit 34a500caf48c47d5171f4aa1f237da39b07c6157 ] + +There are two bugs in rose_rt_device_down() that can cause +use-after-free: + +1. The loop bound `t->count` is modified within the loop, which can + cause the loop to terminate early and miss some entries. + +2. When removing an entry from the neighbour array, the subsequent entries + are moved up to fill the gap, but the loop index `i` is still + incremented, causing the next entry to be skipped. + +For example, if a node has three neighbours (A, A, B) with count=3 and A +is being removed, the second A is not checked. + + i=0: (A, A, B) -> (A, B) with count=2 + ^ checked + i=1: (A, B) -> (A, B) with count=2 + ^ checked (B, not A!) + i=2: (doesn't occur because i < count is false) + +This leaves the second A in the array with count=2, but the rose_neigh +structure has been freed. Code that accesses these entries assumes that +the first `count` entries are valid pointers, causing a use-after-free +when it accesses the dangling pointer. + +Fix both issues by iterating over the array in reverse order with a fixed +loop bound. This ensures that all entries are examined and that the removal +of an entry doesn't affect subsequent iterations. 
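+
+The same compaction logic in isolation, as a self-contained sketch
+(hypothetical names, not the kernel code):
+
+  #include <string.h>
+
+  /* Remove every occurrence of val. Iterating in reverse with a
+   * fixed starting bound means the memmove() compaction can never
+   * skip an entry or revisit one that was already removed. */
+  static void remove_all(int *arr, int *count, int val)
+  {
+          int i;
+
+          for (i = *count - 1; i >= 0; i--) {
+                  if (arr[i] != val)
+                          continue;
+                  (*count)--;
+                  memmove(&arr[i], &arr[i + 1],
+                          sizeof(arr[0]) * (*count - i));
+          }
+  }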
+ +Reported-by: syzbot+e04e2c007ba2c80476cb@syzkaller.appspotmail.com +Closes: https://syzkaller.appspot.com/bug?extid=e04e2c007ba2c80476cb +Tested-by: syzbot+e04e2c007ba2c80476cb@syzkaller.appspotmail.com +Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") +Signed-off-by: Kohei Enju +Reviewed-by: Simon Horman +Link: https://patch.msgid.link/20250629030833.6680-1-enjuk@amazon.com +Signed-off-by: Jakub Kicinski +Signed-off-by: Sasha Levin +--- + net/rose/rose_route.c | 15 ++++----------- + 1 file changed, 4 insertions(+), 11 deletions(-) + +diff --git a/net/rose/rose_route.c b/net/rose/rose_route.c +index fee772b4637c8..a7054546f52df 100644 +--- a/net/rose/rose_route.c ++++ b/net/rose/rose_route.c +@@ -497,22 +497,15 @@ void rose_rt_device_down(struct net_device *dev) + t = rose_node; + rose_node = rose_node->next; + +- for (i = 0; i < t->count; i++) { ++ for (i = t->count - 1; i >= 0; i--) { + if (t->neighbour[i] != s) + continue; + + t->count--; + +- switch (i) { +- case 0: +- t->neighbour[0] = t->neighbour[1]; +- fallthrough; +- case 1: +- t->neighbour[1] = t->neighbour[2]; +- break; +- case 2: +- break; +- } ++ memmove(&t->neighbour[i], &t->neighbour[i + 1], ++ sizeof(t->neighbour[0]) * ++ (t->count - i)); + } + + if (t->count <= 0) +-- +2.39.5 + diff --git a/queue-6.12/sched-fair-add-new-cfs_rq.h_nr_runnable.patch b/queue-6.12/sched-fair-add-new-cfs_rq.h_nr_runnable.patch new file mode 100644 index 0000000000..ba02812b00 --- /dev/null +++ b/queue-6.12/sched-fair-add-new-cfs_rq.h_nr_runnable.patch @@ -0,0 +1,177 @@ +From fe8ed20fa388f5a0510d1bb58bad13d85c340fd8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 2 Dec 2024 18:45:59 +0100 +Subject: sched/fair: Add new cfs_rq.h_nr_runnable + +From: Vincent Guittot + +[ Upstream commit c2a295bffeaf9461ecba76dc9e4780c898c94f03 ] + +With delayed dequeued feature, a sleeping sched_entity remains queued in +the rq until its lag has elapsed. As a result, it stays also visible +in the statistics that are used to balance the system and in particular +the field cfs.h_nr_queued when the sched_entity is associated to a task. + +Create a new h_nr_runnable that tracks only queued and runnable tasks. 
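+
+Put differently, for a cfs_rq with N hierarchically queued tasks of
+which D are delayed-dequeue tasks waiting for their lag to elapse, the
+counters are meant to satisfy (illustrative only, not a code change):
+
+  h_nr_queued   == N       /* still enqueued, including delayed */
+  h_nr_runnable == N - D   /* actually eligible to run */
+
+so load balancing can consult h_nr_runnable while the delayed entities
+stay visible for vruntime/lag accounting.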
+ +Signed-off-by: Vincent Guittot +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Dietmar Eggemann +Link: https://lore.kernel.org/r/20241202174606.4074512-5-vincent.guittot@linaro.org +Stable-dep-of: aa3ee4f0b754 ("sched/fair: Fixup wake_up_sync() vs DELAYED_DEQUEUE") +Signed-off-by: Sasha Levin +--- + kernel/sched/debug.c | 1 + + kernel/sched/fair.c | 20 ++++++++++++++++++-- + kernel/sched/sched.h | 1 + + 3 files changed, 20 insertions(+), 2 deletions(-) + +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index 7cf0c138c78e5..9815f9a0cd592 100644 +--- a/kernel/sched/debug.c ++++ b/kernel/sched/debug.c +@@ -843,6 +843,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) + spread = right_vruntime - left_vruntime; + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread)); + SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running); ++ SEQ_printf(m, " .%-30s: %d\n", "h_nr_runnable", cfs_rq->h_nr_runnable); + SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued); + SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed); + SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running", +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 33438b6c35478..47ff4856561ea 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -5511,6 +5511,7 @@ static void set_delayed(struct sched_entity *se) + for_each_sched_entity(se) { + struct cfs_rq *cfs_rq = cfs_rq_of(se); + ++ cfs_rq->h_nr_runnable--; + cfs_rq->h_nr_delayed++; + if (cfs_rq_throttled(cfs_rq)) + break; +@@ -5533,6 +5534,7 @@ static void clear_delayed(struct sched_entity *se) + for_each_sched_entity(se) { + struct cfs_rq *cfs_rq = cfs_rq_of(se); + ++ cfs_rq->h_nr_runnable++; + cfs_rq->h_nr_delayed--; + if (cfs_rq_throttled(cfs_rq)) + break; +@@ -5985,7 +5987,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + struct rq *rq = rq_of(cfs_rq); + struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg); + struct sched_entity *se; +- long queued_delta, idle_task_delta, delayed_delta, dequeue = 1; ++ long queued_delta, runnable_delta, idle_task_delta, delayed_delta, dequeue = 1; + long rq_h_nr_queued = rq->cfs.h_nr_queued; + + raw_spin_lock(&cfs_b->lock); +@@ -6017,6 +6019,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + rcu_read_unlock(); + + queued_delta = cfs_rq->h_nr_queued; ++ runnable_delta = cfs_rq->h_nr_runnable; + idle_task_delta = cfs_rq->idle_h_nr_running; + delayed_delta = cfs_rq->h_nr_delayed; + for_each_sched_entity(se) { +@@ -6041,6 +6044,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + idle_task_delta = cfs_rq->h_nr_queued; + + qcfs_rq->h_nr_queued -= queued_delta; ++ qcfs_rq->h_nr_runnable -= runnable_delta; + qcfs_rq->idle_h_nr_running -= idle_task_delta; + qcfs_rq->h_nr_delayed -= delayed_delta; + +@@ -6064,6 +6068,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + idle_task_delta = cfs_rq->h_nr_queued; + + qcfs_rq->h_nr_queued -= queued_delta; ++ qcfs_rq->h_nr_runnable -= runnable_delta; + qcfs_rq->idle_h_nr_running -= idle_task_delta; + qcfs_rq->h_nr_delayed -= delayed_delta; + } +@@ -6091,7 +6096,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + struct rq *rq = rq_of(cfs_rq); + struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg); + struct sched_entity *se; +- long queued_delta, idle_task_delta, delayed_delta; ++ long queued_delta, runnable_delta, idle_task_delta, delayed_delta; + long rq_h_nr_queued = rq->cfs.h_nr_queued; + + se = cfs_rq->tg->se[cpu_of(rq)]; +@@ -6126,6 +6131,7 @@ void 
unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + } + + queued_delta = cfs_rq->h_nr_queued; ++ runnable_delta = cfs_rq->h_nr_runnable; + idle_task_delta = cfs_rq->idle_h_nr_running; + delayed_delta = cfs_rq->h_nr_delayed; + for_each_sched_entity(se) { +@@ -6144,6 +6150,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + idle_task_delta = cfs_rq->h_nr_queued; + + qcfs_rq->h_nr_queued += queued_delta; ++ qcfs_rq->h_nr_runnable += runnable_delta; + qcfs_rq->idle_h_nr_running += idle_task_delta; + qcfs_rq->h_nr_delayed += delayed_delta; + +@@ -6162,6 +6169,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + idle_task_delta = cfs_rq->h_nr_queued; + + qcfs_rq->h_nr_queued += queued_delta; ++ qcfs_rq->h_nr_runnable += runnable_delta; + qcfs_rq->idle_h_nr_running += idle_task_delta; + qcfs_rq->h_nr_delayed += delayed_delta; + +@@ -7081,6 +7089,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + enqueue_entity(cfs_rq, se, flags); + slice = cfs_rq_min_slice(cfs_rq); + ++ if (!h_nr_delayed) ++ cfs_rq->h_nr_runnable++; + cfs_rq->h_nr_queued++; + cfs_rq->idle_h_nr_running += idle_h_nr_running; + cfs_rq->h_nr_delayed += h_nr_delayed; +@@ -7107,6 +7117,8 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + min_vruntime_cb_propagate(&se->run_node, NULL); + slice = cfs_rq_min_slice(cfs_rq); + ++ if (!h_nr_delayed) ++ cfs_rq->h_nr_runnable++; + cfs_rq->h_nr_queued++; + cfs_rq->idle_h_nr_running += idle_h_nr_running; + cfs_rq->h_nr_delayed += h_nr_delayed; +@@ -7195,6 +7207,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags) + break; + } + ++ if (!h_nr_delayed) ++ cfs_rq->h_nr_runnable -= h_nr_queued; + cfs_rq->h_nr_queued -= h_nr_queued; + cfs_rq->idle_h_nr_running -= idle_h_nr_running; + cfs_rq->h_nr_delayed -= h_nr_delayed; +@@ -7236,6 +7250,8 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags) + min_vruntime_cb_propagate(&se->run_node, NULL); + slice = cfs_rq_min_slice(cfs_rq); + ++ if (!h_nr_delayed) ++ cfs_rq->h_nr_runnable -= h_nr_queued; + cfs_rq->h_nr_queued -= h_nr_queued; + cfs_rq->idle_h_nr_running -= idle_h_nr_running; + cfs_rq->h_nr_delayed -= h_nr_delayed; +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index dc338f4f911dd..e7f5ab21221c4 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -652,6 +652,7 @@ struct cfs_rq { + struct load_weight load; + unsigned int nr_running; + unsigned int h_nr_queued; /* SCHED_{NORMAL,BATCH,IDLE} */ ++ unsigned int h_nr_runnable; /* SCHED_{NORMAL,BATCH,IDLE} */ + unsigned int idle_nr_running; /* SCHED_IDLE */ + unsigned int idle_h_nr_running; /* SCHED_IDLE */ + unsigned int h_nr_delayed; +-- +2.39.5 + diff --git a/queue-6.12/sched-fair-fixup-wake_up_sync-vs-delayed_dequeue.patch b/queue-6.12/sched-fair-fixup-wake_up_sync-vs-delayed_dequeue.patch new file mode 100644 index 0000000000..a1841e350a --- /dev/null +++ b/queue-6.12/sched-fair-fixup-wake_up_sync-vs-delayed_dequeue.patch @@ -0,0 +1,61 @@ +From dde92a3e287939035f8f0772bd08e556b1e8ce72 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 3 Mar 2025 18:52:39 +0800 +Subject: sched/fair: Fixup wake_up_sync() vs DELAYED_DEQUEUE + +From: Xuewen Yan + +[ Upstream commit aa3ee4f0b7541382c9f6f43f7408d73a5d4f4042 ] + +Delayed dequeued feature keeps a sleeping task enqueued until its +lag has elapsed. As a result, it stays also visible in rq->nr_running. 
+So when in wake_affine_idle(), we should use the real running-tasks +in rq to check whether we should place the wake-up task to +current cpu. +On the other hand, add a helper function to return the nr-delayed. + +Fixes: 152e11f6df29 ("sched/fair: Implement delayed dequeue") +Signed-off-by: Xuewen Yan +Reviewed-and-tested-by: Tianchen Ding +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Vincent Guittot +Link: https://lore.kernel.org/r/20250303105241.17251-2-xuewen.yan@unisoc.com +Signed-off-by: Sasha Levin +--- + kernel/sched/fair.c | 13 +++++++++++-- + 1 file changed, 11 insertions(+), 2 deletions(-) + +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 47ff4856561ea..7280ed04c96ce 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -7313,6 +7313,11 @@ static bool dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags) + return true; + } + ++static inline unsigned int cfs_h_nr_delayed(struct rq *rq) ++{ ++ return (rq->cfs.h_nr_queued - rq->cfs.h_nr_runnable); ++} ++ + #ifdef CONFIG_SMP + + /* Working cpumask for: sched_balance_rq(), sched_balance_newidle(). */ +@@ -7474,8 +7479,12 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync) + if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu)) + return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu; + +- if (sync && cpu_rq(this_cpu)->nr_running == 1) +- return this_cpu; ++ if (sync) { ++ struct rq *rq = cpu_rq(this_cpu); ++ ++ if ((rq->nr_running - cfs_h_nr_delayed(rq)) == 1) ++ return this_cpu; ++ } + + if (available_idle_cpu(prev_cpu)) + return prev_cpu; +-- +2.39.5 + diff --git a/queue-6.12/sched-fair-rename-h_nr_running-into-h_nr_queued.patch b/queue-6.12/sched-fair-rename-h_nr_running-into-h_nr_queued.patch new file mode 100644 index 0000000000..1d840708c3 --- /dev/null +++ b/queue-6.12/sched-fair-rename-h_nr_running-into-h_nr_queued.patch @@ -0,0 +1,457 @@ +From 0ff1d299cd949547b6a89ef935d23fd7df5e36e3 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 2 Dec 2024 18:45:58 +0100 +Subject: sched/fair: Rename h_nr_running into h_nr_queued + +From: Vincent Guittot + +[ Upstream commit 7b8a702d943827130cc00ae36075eff5500f86f1 ] + +With delayed dequeued feature, a sleeping sched_entity remains queued +in the rq until its lag has elapsed but can't run. +Rename h_nr_running into h_nr_queued to reflect this new behavior. + +Signed-off-by: Vincent Guittot +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Dietmar Eggemann +Link: https://lore.kernel.org/r/20241202174606.4074512-4-vincent.guittot@linaro.org +Stable-dep-of: aa3ee4f0b754 ("sched/fair: Fixup wake_up_sync() vs DELAYED_DEQUEUE") +Signed-off-by: Sasha Levin +--- + kernel/sched/core.c | 4 +- + kernel/sched/debug.c | 6 +-- + kernel/sched/fair.c | 88 ++++++++++++++++++++++---------------------- + kernel/sched/pelt.c | 4 +- + kernel/sched/sched.h | 4 +- + 5 files changed, 53 insertions(+), 53 deletions(-) + +diff --git a/kernel/sched/core.c b/kernel/sched/core.c +index d4948a8629929..50531e462a4ba 100644 +--- a/kernel/sched/core.c ++++ b/kernel/sched/core.c +@@ -1303,7 +1303,7 @@ bool sched_can_stop_tick(struct rq *rq) + if (scx_enabled() && !scx_can_stop_tick(rq)) + return false; + +- if (rq->cfs.h_nr_running > 1) ++ if (rq->cfs.h_nr_queued > 1) + return false; + + /* +@@ -5976,7 +5976,7 @@ __pick_next_task(struct rq *rq, struct task_struct *prev, struct rq_flags *rf) + * opportunity to pull in more work from other CPUs. 
+ */ + if (likely(!sched_class_above(prev->sched_class, &fair_sched_class) && +- rq->nr_running == rq->cfs.h_nr_running)) { ++ rq->nr_running == rq->cfs.h_nr_queued)) { + + p = pick_next_task_fair(rq, prev, rf); + if (unlikely(p == RETRY_TASK)) +diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c +index 1e3bc0774efd5..7cf0c138c78e5 100644 +--- a/kernel/sched/debug.c ++++ b/kernel/sched/debug.c +@@ -378,7 +378,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu + return -EINVAL; + } + +- if (rq->cfs.h_nr_running) { ++ if (rq->cfs.h_nr_queued) { + update_rq_clock(rq); + dl_server_stop(&rq->fair_server); + } +@@ -391,7 +391,7 @@ static ssize_t sched_fair_server_write(struct file *filp, const char __user *ubu + printk_deferred("Fair server disabled in CPU %d, system may crash due to starvation.\n", + cpu_of(rq)); + +- if (rq->cfs.h_nr_running) ++ if (rq->cfs.h_nr_queued) + dl_server_start(&rq->fair_server); + } + +@@ -843,7 +843,7 @@ void print_cfs_rq(struct seq_file *m, int cpu, struct cfs_rq *cfs_rq) + spread = right_vruntime - left_vruntime; + SEQ_printf(m, " .%-30s: %Ld.%06ld\n", "spread", SPLIT_NS(spread)); + SEQ_printf(m, " .%-30s: %d\n", "nr_running", cfs_rq->nr_running); +- SEQ_printf(m, " .%-30s: %d\n", "h_nr_running", cfs_rq->h_nr_running); ++ SEQ_printf(m, " .%-30s: %d\n", "h_nr_queued", cfs_rq->h_nr_queued); + SEQ_printf(m, " .%-30s: %d\n", "h_nr_delayed", cfs_rq->h_nr_delayed); + SEQ_printf(m, " .%-30s: %d\n", "idle_nr_running", + cfs_rq->idle_nr_running); +diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c +index 443f6a9ef3f8f..33438b6c35478 100644 +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -2147,7 +2147,7 @@ static void update_numa_stats(struct task_numa_env *env, + ns->load += cpu_load(rq); + ns->runnable += cpu_runnable(rq); + ns->util += cpu_util_cfs(cpu); +- ns->nr_running += rq->cfs.h_nr_running; ++ ns->nr_running += rq->cfs.h_nr_queued; + ns->compute_capacity += capacity_of(cpu); + + if (find_idle && idle_core < 0 && !rq->nr_running && idle_cpu(cpu)) { +@@ -5427,7 +5427,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) + * When enqueuing a sched_entity, we must: + * - Update loads to have both entity and cfs_rq synced with now. + * - For group_entity, update its runnable_weight to reflect the new +- * h_nr_running of its group cfs_rq. ++ * h_nr_queued of its group cfs_rq. + * - For group_entity, update its weight to reflect the new share of + * its group cfs_rq + * - Add its new weight to cfs_rq->load.weight +@@ -5583,7 +5583,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags) + * When dequeuing a sched_entity, we must: + * - Update loads to have both entity and cfs_rq synced with now. + * - For group_entity, update its runnable_weight to reflect the new +- * h_nr_running of its group cfs_rq. ++ * h_nr_queued of its group cfs_rq. + * - Subtract its previous weight from cfs_rq->load.weight. + * - For group entity, update its weight to reflect the new share + * of its group cfs_rq. 
+@@ -5985,8 +5985,8 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + struct rq *rq = rq_of(cfs_rq); + struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg); + struct sched_entity *se; +- long task_delta, idle_task_delta, delayed_delta, dequeue = 1; +- long rq_h_nr_running = rq->cfs.h_nr_running; ++ long queued_delta, idle_task_delta, delayed_delta, dequeue = 1; ++ long rq_h_nr_queued = rq->cfs.h_nr_queued; + + raw_spin_lock(&cfs_b->lock); + /* This will start the period timer if necessary */ +@@ -6016,7 +6016,7 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + walk_tg_tree_from(cfs_rq->tg, tg_throttle_down, tg_nop, (void *)rq); + rcu_read_unlock(); + +- task_delta = cfs_rq->h_nr_running; ++ queued_delta = cfs_rq->h_nr_queued; + idle_task_delta = cfs_rq->idle_h_nr_running; + delayed_delta = cfs_rq->h_nr_delayed; + for_each_sched_entity(se) { +@@ -6038,9 +6038,9 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + dequeue_entity(qcfs_rq, se, flags); + + if (cfs_rq_is_idle(group_cfs_rq(se))) +- idle_task_delta = cfs_rq->h_nr_running; ++ idle_task_delta = cfs_rq->h_nr_queued; + +- qcfs_rq->h_nr_running -= task_delta; ++ qcfs_rq->h_nr_queued -= queued_delta; + qcfs_rq->idle_h_nr_running -= idle_task_delta; + qcfs_rq->h_nr_delayed -= delayed_delta; + +@@ -6061,18 +6061,18 @@ static bool throttle_cfs_rq(struct cfs_rq *cfs_rq) + se_update_runnable(se); + + if (cfs_rq_is_idle(group_cfs_rq(se))) +- idle_task_delta = cfs_rq->h_nr_running; ++ idle_task_delta = cfs_rq->h_nr_queued; + +- qcfs_rq->h_nr_running -= task_delta; ++ qcfs_rq->h_nr_queued -= queued_delta; + qcfs_rq->idle_h_nr_running -= idle_task_delta; + qcfs_rq->h_nr_delayed -= delayed_delta; + } + + /* At this point se is NULL and we are at root level*/ +- sub_nr_running(rq, task_delta); ++ sub_nr_running(rq, queued_delta); + + /* Stop the fair server if throttling resulted in no runnable tasks */ +- if (rq_h_nr_running && !rq->cfs.h_nr_running) ++ if (rq_h_nr_queued && !rq->cfs.h_nr_queued) + dl_server_stop(&rq->fair_server); + done: + /* +@@ -6091,8 +6091,8 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + struct rq *rq = rq_of(cfs_rq); + struct cfs_bandwidth *cfs_b = tg_cfs_bandwidth(cfs_rq->tg); + struct sched_entity *se; +- long task_delta, idle_task_delta, delayed_delta; +- long rq_h_nr_running = rq->cfs.h_nr_running; ++ long queued_delta, idle_task_delta, delayed_delta; ++ long rq_h_nr_queued = rq->cfs.h_nr_queued; + + se = cfs_rq->tg->se[cpu_of(rq)]; + +@@ -6125,7 +6125,7 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + goto unthrottle_throttle; + } + +- task_delta = cfs_rq->h_nr_running; ++ queued_delta = cfs_rq->h_nr_queued; + idle_task_delta = cfs_rq->idle_h_nr_running; + delayed_delta = cfs_rq->h_nr_delayed; + for_each_sched_entity(se) { +@@ -6141,9 +6141,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + enqueue_entity(qcfs_rq, se, ENQUEUE_WAKEUP); + + if (cfs_rq_is_idle(group_cfs_rq(se))) +- idle_task_delta = cfs_rq->h_nr_running; ++ idle_task_delta = cfs_rq->h_nr_queued; + +- qcfs_rq->h_nr_running += task_delta; ++ qcfs_rq->h_nr_queued += queued_delta; + qcfs_rq->idle_h_nr_running += idle_task_delta; + qcfs_rq->h_nr_delayed += delayed_delta; + +@@ -6159,9 +6159,9 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + se_update_runnable(se); + + if (cfs_rq_is_idle(group_cfs_rq(se))) +- idle_task_delta = cfs_rq->h_nr_running; ++ idle_task_delta = cfs_rq->h_nr_queued; + +- qcfs_rq->h_nr_running += task_delta; ++ qcfs_rq->h_nr_queued += queued_delta; + qcfs_rq->idle_h_nr_running += idle_task_delta; + 
qcfs_rq->h_nr_delayed += delayed_delta; + +@@ -6171,11 +6171,11 @@ void unthrottle_cfs_rq(struct cfs_rq *cfs_rq) + } + + /* Start the fair server if un-throttling resulted in new runnable tasks */ +- if (!rq_h_nr_running && rq->cfs.h_nr_running) ++ if (!rq_h_nr_queued && rq->cfs.h_nr_queued) + dl_server_start(&rq->fair_server); + + /* At this point se is NULL and we are at root level*/ +- add_nr_running(rq, task_delta); ++ add_nr_running(rq, queued_delta); + + unthrottle_throttle: + assert_list_leaf_cfs_rq(rq); +@@ -6890,7 +6890,7 @@ static void hrtick_start_fair(struct rq *rq, struct task_struct *p) + + SCHED_WARN_ON(task_rq(p) != rq); + +- if (rq->cfs.h_nr_running > 1) { ++ if (rq->cfs.h_nr_queued > 1) { + u64 ran = se->sum_exec_runtime - se->prev_sum_exec_runtime; + u64 slice = se->slice; + s64 delta = slice - ran; +@@ -7033,7 +7033,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + int idle_h_nr_running = task_has_idle_policy(p); + int h_nr_delayed = 0; + int task_new = !(flags & ENQUEUE_WAKEUP); +- int rq_h_nr_running = rq->cfs.h_nr_running; ++ int rq_h_nr_queued = rq->cfs.h_nr_queued; + u64 slice = 0; + + /* +@@ -7081,7 +7081,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + enqueue_entity(cfs_rq, se, flags); + slice = cfs_rq_min_slice(cfs_rq); + +- cfs_rq->h_nr_running++; ++ cfs_rq->h_nr_queued++; + cfs_rq->idle_h_nr_running += idle_h_nr_running; + cfs_rq->h_nr_delayed += h_nr_delayed; + +@@ -7107,7 +7107,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + min_vruntime_cb_propagate(&se->run_node, NULL); + slice = cfs_rq_min_slice(cfs_rq); + +- cfs_rq->h_nr_running++; ++ cfs_rq->h_nr_queued++; + cfs_rq->idle_h_nr_running += idle_h_nr_running; + cfs_rq->h_nr_delayed += h_nr_delayed; + +@@ -7119,7 +7119,7 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags) + goto enqueue_throttle; + } + +- if (!rq_h_nr_running && rq->cfs.h_nr_running) { ++ if (!rq_h_nr_queued && rq->cfs.h_nr_queued) { + /* Account for idle runtime */ + if (!rq->nr_running) + dl_server_update_idle_time(rq, rq->curr); +@@ -7166,19 +7166,19 @@ static void set_next_buddy(struct sched_entity *se); + static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags) + { + bool was_sched_idle = sched_idle_rq(rq); +- int rq_h_nr_running = rq->cfs.h_nr_running; ++ int rq_h_nr_queued = rq->cfs.h_nr_queued; + bool task_sleep = flags & DEQUEUE_SLEEP; + bool task_delayed = flags & DEQUEUE_DELAYED; + struct task_struct *p = NULL; + int idle_h_nr_running = 0; +- int h_nr_running = 0; ++ int h_nr_queued = 0; + int h_nr_delayed = 0; + struct cfs_rq *cfs_rq; + u64 slice = 0; + + if (entity_is_task(se)) { + p = task_of(se); +- h_nr_running = 1; ++ h_nr_queued = 1; + idle_h_nr_running = task_has_idle_policy(p); + if (!task_sleep && !task_delayed) + h_nr_delayed = !!se->sched_delayed; +@@ -7195,12 +7195,12 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags) + break; + } + +- cfs_rq->h_nr_running -= h_nr_running; ++ cfs_rq->h_nr_queued -= h_nr_queued; + cfs_rq->idle_h_nr_running -= idle_h_nr_running; + cfs_rq->h_nr_delayed -= h_nr_delayed; + + if (cfs_rq_is_idle(cfs_rq)) +- idle_h_nr_running = h_nr_running; ++ idle_h_nr_running = h_nr_queued; + + /* end evaluation on encountering a throttled cfs_rq */ + if (cfs_rq_throttled(cfs_rq)) +@@ -7236,21 +7236,21 @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags) + min_vruntime_cb_propagate(&se->run_node, NULL); + slice = 
cfs_rq_min_slice(cfs_rq); + +- cfs_rq->h_nr_running -= h_nr_running; ++ cfs_rq->h_nr_queued -= h_nr_queued; + cfs_rq->idle_h_nr_running -= idle_h_nr_running; + cfs_rq->h_nr_delayed -= h_nr_delayed; + + if (cfs_rq_is_idle(cfs_rq)) +- idle_h_nr_running = h_nr_running; ++ idle_h_nr_running = h_nr_queued; + + /* end evaluation on encountering a throttled cfs_rq */ + if (cfs_rq_throttled(cfs_rq)) + return 0; + } + +- sub_nr_running(rq, h_nr_running); ++ sub_nr_running(rq, h_nr_queued); + +- if (rq_h_nr_running && !rq->cfs.h_nr_running) ++ if (rq_h_nr_queued && !rq->cfs.h_nr_queued) + dl_server_stop(&rq->fair_server); + + /* balance early to pull high priority tasks */ +@@ -10394,7 +10394,7 @@ sched_reduced_capacity(struct rq *rq, struct sched_domain *sd) + * When there is more than 1 task, the group_overloaded case already + * takes care of cpu with reduced capacity + */ +- if (rq->cfs.h_nr_running != 1) ++ if (rq->cfs.h_nr_queued != 1) + return false; + + return check_cpu_capacity(rq, sd); +@@ -10429,7 +10429,7 @@ static inline void update_sg_lb_stats(struct lb_env *env, + sgs->group_load += load; + sgs->group_util += cpu_util_cfs(i); + sgs->group_runnable += cpu_runnable(rq); +- sgs->sum_h_nr_running += rq->cfs.h_nr_running; ++ sgs->sum_h_nr_running += rq->cfs.h_nr_queued; + + nr_running = rq->nr_running; + sgs->sum_nr_running += nr_running; +@@ -10744,7 +10744,7 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd, + sgs->group_util += cpu_util_without(i, p); + sgs->group_runnable += cpu_runnable_without(rq, p); + local = task_running_on_cpu(i, p); +- sgs->sum_h_nr_running += rq->cfs.h_nr_running - local; ++ sgs->sum_h_nr_running += rq->cfs.h_nr_queued - local; + + nr_running = rq->nr_running - local; + sgs->sum_nr_running += nr_running; +@@ -11526,7 +11526,7 @@ static struct rq *sched_balance_find_src_rq(struct lb_env *env, + if (rt > env->fbq_type) + continue; + +- nr_running = rq->cfs.h_nr_running; ++ nr_running = rq->cfs.h_nr_queued; + if (!nr_running) + continue; + +@@ -11685,7 +11685,7 @@ static int need_active_balance(struct lb_env *env) + * available on dst_cpu. + */ + if (env->idle && +- (env->src_rq->cfs.h_nr_running == 1)) { ++ (env->src_rq->cfs.h_nr_queued == 1)) { + if ((check_cpu_capacity(env->src_rq, sd)) && + (capacity_of(env->src_cpu)*sd->imbalance_pct < capacity_of(env->dst_cpu)*100)) + return 1; +@@ -12428,7 +12428,7 @@ static void nohz_balancer_kick(struct rq *rq) + * If there's a runnable CFS task and the current CPU has reduced + * capacity, kick the ILB to see if there's a better CPU to run on: + */ +- if (rq->cfs.h_nr_running >= 1 && check_cpu_capacity(rq, sd)) { ++ if (rq->cfs.h_nr_queued >= 1 && check_cpu_capacity(rq, sd)) { + flags = NOHZ_STATS_KICK | NOHZ_BALANCE_KICK; + goto unlock; + } +@@ -12926,11 +12926,11 @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) + * have been enqueued in the meantime. Since we're not going idle, + * pretend we pulled a task. + */ +- if (this_rq->cfs.h_nr_running && !pulled_task) ++ if (this_rq->cfs.h_nr_queued && !pulled_task) + pulled_task = 1; + + /* Is there a task of a high priority class? 
*/ +- if (this_rq->nr_running != this_rq->cfs.h_nr_running) ++ if (this_rq->nr_running != this_rq->cfs.h_nr_queued) + pulled_task = -1; + + out: +@@ -13617,7 +13617,7 @@ int sched_group_set_idle(struct task_group *tg, long idle) + parent_cfs_rq->idle_nr_running--; + } + +- idle_task_delta = grp_cfs_rq->h_nr_running - ++ idle_task_delta = grp_cfs_rq->h_nr_queued - + grp_cfs_rq->idle_h_nr_running; + if (!cfs_rq_is_idle(grp_cfs_rq)) + idle_task_delta *= -1; +diff --git a/kernel/sched/pelt.c b/kernel/sched/pelt.c +index 171a802420a10..8189a35e53fe1 100644 +--- a/kernel/sched/pelt.c ++++ b/kernel/sched/pelt.c +@@ -275,7 +275,7 @@ ___update_load_avg(struct sched_avg *sa, unsigned long load) + * + * group: [ see update_cfs_group() ] + * se_weight() = tg->weight * grq->load_avg / tg->load_avg +- * se_runnable() = grq->h_nr_running ++ * se_runnable() = grq->h_nr_queued + * + * runnable_sum = se_runnable() * runnable = grq->runnable_sum + * runnable_avg = runnable_sum +@@ -321,7 +321,7 @@ int __update_load_avg_cfs_rq(u64 now, struct cfs_rq *cfs_rq) + { + if (___update_load_sum(now, &cfs_rq->avg, + scale_load_down(cfs_rq->load.weight), +- cfs_rq->h_nr_running - cfs_rq->h_nr_delayed, ++ cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed, + cfs_rq->curr != NULL)) { + + ___update_load_avg(&cfs_rq->avg, 1); +diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h +index d79de755c1c26..dc338f4f911dd 100644 +--- a/kernel/sched/sched.h ++++ b/kernel/sched/sched.h +@@ -651,7 +651,7 @@ struct balance_callback { + struct cfs_rq { + struct load_weight load; + unsigned int nr_running; +- unsigned int h_nr_running; /* SCHED_{NORMAL,BATCH,IDLE} */ ++ unsigned int h_nr_queued; /* SCHED_{NORMAL,BATCH,IDLE} */ + unsigned int idle_nr_running; /* SCHED_IDLE */ + unsigned int idle_h_nr_running; /* SCHED_IDLE */ + unsigned int h_nr_delayed; +@@ -907,7 +907,7 @@ static inline void se_update_runnable(struct sched_entity *se) + if (!entity_is_task(se)) { + struct cfs_rq *cfs_rq = se->my_q; + +- se->runnable_weight = cfs_rq->h_nr_running - cfs_rq->h_nr_delayed; ++ se->runnable_weight = cfs_rq->h_nr_queued - cfs_rq->h_nr_delayed; + } + } + +-- +2.39.5 + diff --git a/queue-6.12/sched_ext-make-scx_group_set_weight-always-update-tg.patch b/queue-6.12/sched_ext-make-scx_group_set_weight-always-update-tg.patch new file mode 100644 index 0000000000..98d55f5c59 --- /dev/null +++ b/queue-6.12/sched_ext-make-scx_group_set_weight-always-update-tg.patch @@ -0,0 +1,46 @@ +From 72b1500cef51760566e47f48deb8aaede4455b4f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sun, 6 Jul 2025 02:42:09 -0400 +Subject: sched_ext: Make scx_group_set_weight() always update tg->scx.weight + +From: Tejun Heo + +[ Upstream commit c50784e99f0e7199cdb12dbddf02229b102744ef ] + +Otherwise, tg->scx.weight can go out of sync while scx_cgroup is not enabled +and ops.cgroup_init() may be called with a stale weight value. 
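+
+[ editorial illustration, not part of the upstream commit: the
+  user-space C sketch below models the post-fix ordering with
+  simplified stand-in types; scx_cgroup_enabled is reduced to a plain
+  bool and SCX_HAS_OP()/SCX_CALL_OP() are collapsed into one stub.
+  It shows the point of the fix: the cached weight is now written
+  unconditionally, so a later ops.cgroup_init() sees the current
+  value rather than a stale one. ]
+
+	#include <stdio.h>
+	#include <stdbool.h>
+
+	struct task_group { unsigned long scx_weight; };
+
+	static bool scx_cgroup_enabled;	/* stand-in for the real gate */
+
+	/* stand-in for SCX_CALL_OP(..., cgroup_set_weight, ...) */
+	static void ops_cgroup_set_weight(struct task_group *tg,
+					  unsigned long weight)
+	{
+		printf("ops.cgroup_set_weight(%p) -> %lu\n",
+		       (void *)tg, weight);
+	}
+
+	static void scx_group_set_weight(struct task_group *tg,
+					 unsigned long weight)
+	{
+		if (scx_cgroup_enabled && tg->scx_weight != weight)
+			ops_cgroup_set_weight(tg, weight);
+
+		/* always cache: a later enable reads tg->scx_weight */
+		tg->scx_weight = weight;
+	}
+
+	int main(void)
+	{
+		struct task_group tg = { .scx_weight = 100 };
+
+		scx_group_set_weight(&tg, 200);	/* disabled: cache only */
+		scx_cgroup_enabled = true;
+		printf("weight seen at enable: %lu\n", tg.scx_weight);
+		return 0;
+	}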
+
+Signed-off-by: Tejun Heo
+Fixes: 819513666966 ("sched_ext: Add cgroup support")
+Cc: stable@vger.kernel.org # v6.12+
+Signed-off-by: Sasha Levin
+---
+ kernel/sched/ext.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/kernel/sched/ext.c b/kernel/sched/ext.c
+index ddd4fa785264e..c801dd20c63d9 100644
+--- a/kernel/sched/ext.c
++++ b/kernel/sched/ext.c
+@@ -4058,12 +4058,12 @@ void scx_group_set_weight(struct task_group *tg, unsigned long weight)
+ {
+ 	percpu_down_read(&scx_cgroup_rwsem);
+
+-	if (scx_cgroup_enabled && tg->scx_weight != weight) {
+-		if (SCX_HAS_OP(cgroup_set_weight))
+-			SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight,
+-				    tg_cgrp(tg), weight);
+-		tg->scx_weight = weight;
+-	}
++	if (scx_cgroup_enabled && SCX_HAS_OP(cgroup_set_weight) &&
++	    tg->scx_weight != weight)
++		SCX_CALL_OP(SCX_KF_UNLOCKED, cgroup_set_weight,
++			    tg_cgrp(tg), weight);
++
++	tg->scx_weight = weight;
+
+ 	percpu_up_read(&scx_cgroup_rwsem);
+ }
+-- 
+2.39.5
+
diff --git a/queue-6.12/scsi-lpfc-avoid-potential-ndlp-use-after-free-in-dev.patch b/queue-6.12/scsi-lpfc-avoid-potential-ndlp-use-after-free-in-dev.patch
new file mode 100644
index 0000000000..b3898e7426
--- /dev/null
+++ b/queue-6.12/scsi-lpfc-avoid-potential-ndlp-use-after-free-in-dev.patch
@@ -0,0 +1,97 @@
+From 0362d9dd8bfd9844248b5e0793bda43f0784a0b8 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 25 Apr 2025 12:48:03 -0700
+Subject: scsi: lpfc: Avoid potential ndlp use-after-free in
+ dev_loss_tmo_callbk
+
+From: Justin Tee
+
+[ Upstream commit b5162bb6aa1ec04dff4509b025883524b6d7e7ca ]
+
+Smatch detected a potential use-after-free of an ndlp object in
+dev_loss_tmo_callbk during driver unload or fatal error handling.
+
+Fix by reordering the code to avoid a potential use-after-free if the
+initial nodelist reference has been previously removed.
+
+Fixes: 4281f44ea8bf ("scsi: lpfc: Prevent NDLP reference count underflow in dev_loss_tmo callback")
+Reported-by: Dan Carpenter
+Closes: https://lore.kernel.org/linux-scsi/41c1d855-9eb5-416f-ac12-8b61929201a3@stanley.mountain/
+Signed-off-by: Justin Tee
+Link: https://lore.kernel.org/r/20250425194806.3585-6-justintee8345@gmail.com
+Signed-off-by: Martin K. Petersen
+Signed-off-by: Sasha Levin
+---
+ drivers/scsi/lpfc/lpfc_hbadisc.c | 32 +++++++++++++++++---------------
+ 1 file changed, 17 insertions(+), 15 deletions(-)
+
+diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c
+index 5297122f89fc8..b5dd17eecf82d 100644
+--- a/drivers/scsi/lpfc/lpfc_hbadisc.c
++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c
+@@ -155,7 +155,7 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 	struct lpfc_hba *phba;
+ 	struct lpfc_work_evt *evtp;
+ 	unsigned long iflags;
+-	bool nvme_reg = false;
++	bool drop_initial_node_ref = false;
+
+ 	ndlp = ((struct lpfc_rport_data *)rport->dd_data)->pnode;
+ 	if (!ndlp)
+@@ -182,8 +182,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 	spin_lock_irqsave(&ndlp->lock, iflags);
+ 	ndlp->rport = NULL;
+
+-	if (ndlp->fc4_xpt_flags & NVME_XPT_REGD)
+-		nvme_reg = true;
++	/* Only 1 thread can drop the initial node reference.
++	 * If not registered for NVME and NLP_DROPPED flag is
++	 * clear, remove the initial reference.
++	 */
++	if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++		if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag))
++			drop_initial_node_ref = true;
+
+ 	/* The scsi_transport is done with the rport so lpfc cannot
+ 	 * call to unregister. 
+@@ -194,13 +199,16 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 	/* If NLP_XPT_REGD was cleared in lpfc_nlp_unreg_node,
+ 	 * unregister calls were made to the scsi and nvme
+ 	 * transports and refcnt was already decremented. Clear
+-	 * the NLP_XPT_REGD flag only if the NVME Rport is
++	 * the NLP_XPT_REGD flag only if the NVME nrport is
+ 	 * confirmed unregistered.
+ 	 */
+-	if (!nvme_reg && ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
+-		ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
++	if (ndlp->fc4_xpt_flags & NLP_XPT_REGD) {
++		if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD))
++			ndlp->fc4_xpt_flags &= ~NLP_XPT_REGD;
+ 		spin_unlock_irqrestore(&ndlp->lock, iflags);
+-		lpfc_nlp_put(ndlp); /* may free ndlp */
++
++		/* Release scsi transport reference */
++		lpfc_nlp_put(ndlp);
+ 	} else {
+ 		spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 	}
+@@ -208,14 +216,8 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport)
+ 		spin_unlock_irqrestore(&ndlp->lock, iflags);
+ 	}
+
+-	/* Only 1 thread can drop the initial node reference. If
+-	 * another thread has set NLP_DROPPED, this thread is done.
+-	 */
+-	if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag))
+-		return;
+-
+-	set_bit(NLP_DROPPED, &ndlp->nlp_flag);
+-	lpfc_nlp_put(ndlp);
++	if (drop_initial_node_ref)
++		lpfc_nlp_put(ndlp);
+ 	return;
+ }
+
+-- 
+2.39.5
+
diff --git a/queue-6.12/scsi-lpfc-change-lpfc_nodelist-nlp_flag-member-into-.patch b/queue-6.12/scsi-lpfc-change-lpfc_nodelist-nlp_flag-member-into-.patch
new file mode 100644
index 0000000000..9728cc1f92
--- /dev/null
+++ b/queue-6.12/scsi-lpfc-change-lpfc_nodelist-nlp_flag-member-into-.patch
@@ -0,0 +1,3447 @@
+From 4d0b43c5b56e5f1afc7bddfbeb6c7957202d852b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 31 Oct 2024 15:32:17 -0700
+Subject: scsi: lpfc: Change lpfc_nodelist nlp_flag member into a bitmask
+
+From: Justin Tee
+
+[ Upstream commit 92b99f1a73b7951f05590fd01640dcafdbc82c21 ]
+
+In an attempt to reduce the number of unnecessary ndlp->lock
+acquisitions in the lpfc driver, change nlp_flag into an unsigned long
+bitmask and use clear_bit/test_bit bitwise atomic APIs instead of
+relying on ndlp->lock for synchronization.
+
+Signed-off-by: Justin Tee
+Link: https://lore.kernel.org/r/20241031223219.152342-10-justintee8345@gmail.com
+Signed-off-by: Martin K. 
Petersen +Stable-dep-of: b5162bb6aa1e ("scsi: lpfc: Avoid potential ndlp use-after-free in dev_loss_tmo_callbk") +Signed-off-by: Sasha Levin +--- + drivers/scsi/lpfc/lpfc_bsg.c | 6 +- + drivers/scsi/lpfc/lpfc_ct.c | 18 +- + drivers/scsi/lpfc/lpfc_debugfs.c | 4 +- + drivers/scsi/lpfc/lpfc_disc.h | 59 ++--- + drivers/scsi/lpfc/lpfc_els.c | 405 ++++++++++++----------------- + drivers/scsi/lpfc/lpfc_hbadisc.c | 225 +++++++--------- + drivers/scsi/lpfc/lpfc_init.c | 24 +- + drivers/scsi/lpfc/lpfc_nportdisc.c | 329 ++++++++++------------- + drivers/scsi/lpfc/lpfc_nvme.c | 9 +- + drivers/scsi/lpfc/lpfc_nvmet.c | 2 +- + drivers/scsi/lpfc/lpfc_scsi.c | 8 +- + drivers/scsi/lpfc/lpfc_sli.c | 42 ++- + drivers/scsi/lpfc/lpfc_vport.c | 6 +- + 13 files changed, 476 insertions(+), 661 deletions(-) + +diff --git a/drivers/scsi/lpfc/lpfc_bsg.c b/drivers/scsi/lpfc/lpfc_bsg.c +index 85059b83ea6b4..1c6b024160da7 100644 +--- a/drivers/scsi/lpfc/lpfc_bsg.c ++++ b/drivers/scsi/lpfc/lpfc_bsg.c +@@ -398,7 +398,11 @@ lpfc_bsg_send_mgmt_cmd(struct bsg_job *job) + /* in case no data is transferred */ + bsg_reply->reply_payload_rcv_len = 0; + +- if (ndlp->nlp_flag & NLP_ELS_SND_MASK) ++ if (test_bit(NLP_PLOGI_SND, &ndlp->nlp_flag) || ++ test_bit(NLP_PRLI_SND, &ndlp->nlp_flag) || ++ test_bit(NLP_ADISC_SND, &ndlp->nlp_flag) || ++ test_bit(NLP_LOGO_SND, &ndlp->nlp_flag) || ++ test_bit(NLP_RNID_SND, &ndlp->nlp_flag)) + return -ENODEV; + + /* allocate our bsg tracking structure */ +diff --git a/drivers/scsi/lpfc/lpfc_ct.c b/drivers/scsi/lpfc/lpfc_ct.c +index ce3a1f42713dd..30891ad17e2a4 100644 +--- a/drivers/scsi/lpfc/lpfc_ct.c ++++ b/drivers/scsi/lpfc/lpfc_ct.c +@@ -735,7 +735,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type) + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "0238 Process x%06x NameServer Rsp " +- "Data: x%x x%x x%x x%lx x%x\n", Did, ++ "Data: x%lx x%x x%x x%lx x%x\n", Did, + ndlp->nlp_flag, ndlp->nlp_fc4_type, + ndlp->nlp_state, vport->fc_flag, + vport->fc_rscn_id_cnt); +@@ -744,7 +744,7 @@ lpfc_prep_node_fc4type(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type) + * state of ndlp hit devloss, change state to + * allow rediscovery. 
+ */ +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC && ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag) && + ndlp->nlp_state == NLP_STE_UNUSED_NODE) { + lpfc_nlp_set_state(vport, ndlp, + NLP_STE_NPR_NODE); +@@ -832,12 +832,10 @@ lpfc_ns_rsp_audit_did(struct lpfc_vport *vport, uint32_t Did, uint8_t fc4_type) + if (ndlp->nlp_type != NLP_NVME_INITIATOR || + ndlp->nlp_state != NLP_STE_UNMAPPED_NODE) + continue; +- spin_lock_irq(&ndlp->lock); + if (ndlp->nlp_DID == Did) +- ndlp->nlp_flag &= ~NLP_NVMET_RECOV; ++ clear_bit(NLP_NVMET_RECOV, &ndlp->nlp_flag); + else +- ndlp->nlp_flag |= NLP_NVMET_RECOV; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NVMET_RECOV, &ndlp->nlp_flag); + } + } + } +@@ -894,13 +892,11 @@ lpfc_ns_rsp(struct lpfc_vport *vport, struct lpfc_dmabuf *mp, uint8_t fc4_type, + */ + if (vport->phba->nvmet_support) { + list_for_each_entry(ndlp, &vport->fc_nodes, nlp_listp) { +- if (!(ndlp->nlp_flag & NLP_NVMET_RECOV)) ++ if (!test_bit(NLP_NVMET_RECOV, &ndlp->nlp_flag)) + continue; + lpfc_disc_state_machine(vport, ndlp, NULL, + NLP_EVT_DEVICE_RECOVERY); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NVMET_RECOV; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NVMET_RECOV, &ndlp->nlp_flag); + } + } + +@@ -1440,7 +1436,7 @@ lpfc_cmpl_ct_cmd_gff_id(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + if (ndlp) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "0242 Process x%x GFF " +- "NameServer Rsp Data: x%x x%lx x%x\n", ++ "NameServer Rsp Data: x%lx x%lx x%x\n", + did, ndlp->nlp_flag, vport->fc_flag, + vport->fc_rscn_id_cnt); + } else { +diff --git a/drivers/scsi/lpfc/lpfc_debugfs.c b/drivers/scsi/lpfc/lpfc_debugfs.c +index a2d2b02b34187..3fd1aa5cc78cc 100644 +--- a/drivers/scsi/lpfc/lpfc_debugfs.c ++++ b/drivers/scsi/lpfc/lpfc_debugfs.c +@@ -870,8 +870,8 @@ lpfc_debugfs_nodelist_data(struct lpfc_vport *vport, char *buf, int size) + wwn_to_u64(ndlp->nlp_nodename.u.wwn)); + len += scnprintf(buf+len, size-len, "RPI:x%04x ", + ndlp->nlp_rpi); +- len += scnprintf(buf+len, size-len, "flag:x%08x ", +- ndlp->nlp_flag); ++ len += scnprintf(buf+len, size-len, "flag:x%08lx ", ++ ndlp->nlp_flag); + if (!ndlp->nlp_type) + len += scnprintf(buf+len, size-len, "UNKNOWN_TYPE "); + if (ndlp->nlp_type & NLP_FC_NODE) +diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h +index 5d6eabaeb094e..af5d5bd75642c 100644 +--- a/drivers/scsi/lpfc/lpfc_disc.h ++++ b/drivers/scsi/lpfc/lpfc_disc.h +@@ -102,7 +102,7 @@ struct lpfc_nodelist { + + spinlock_t lock; /* Node management lock */ + +- uint32_t nlp_flag; /* entry flags */ ++ unsigned long nlp_flag; /* entry flags */ + uint32_t nlp_DID; /* FC D_ID of entry */ + uint32_t nlp_last_elscmd; /* Last ELS cmd sent */ + uint16_t nlp_type; +@@ -182,36 +182,37 @@ struct lpfc_node_rrq { + #define lpfc_ndlp_check_qdepth(phba, ndlp) \ + (ndlp->cmd_qdepth < phba->sli4_hba.max_cfg_param.max_xri) + +-/* Defines for nlp_flag (uint32) */ +-#define NLP_IGNR_REG_CMPL 0x00000001 /* Rcvd rscn before we cmpl reg login */ +-#define NLP_REG_LOGIN_SEND 0x00000002 /* sent reglogin to adapter */ +-#define NLP_SUPPRESS_RSP 0x00000010 /* Remote NPort supports suppress rsp */ +-#define NLP_PLOGI_SND 0x00000020 /* sent PLOGI request for this entry */ +-#define NLP_PRLI_SND 0x00000040 /* sent PRLI request for this entry */ +-#define NLP_ADISC_SND 0x00000080 /* sent ADISC request for this entry */ +-#define NLP_LOGO_SND 0x00000100 /* sent LOGO request for this entry */ +-#define NLP_RNID_SND 0x00000400 /* sent RNID request for this entry */ +-#define 
NLP_ELS_SND_MASK 0x000007e0 /* sent ELS request for this entry */ +-#define NLP_NVMET_RECOV 0x00001000 /* NVMET auditing node for recovery. */ +-#define NLP_UNREG_INP 0x00008000 /* UNREG_RPI cmd is in progress */ +-#define NLP_DROPPED 0x00010000 /* Init ref count has been dropped */ +-#define NLP_DELAY_TMO 0x00020000 /* delay timeout is running for node */ +-#define NLP_NPR_2B_DISC 0x00040000 /* node is included in num_disc_nodes */ +-#define NLP_RCV_PLOGI 0x00080000 /* Rcv'ed PLOGI from remote system */ +-#define NLP_LOGO_ACC 0x00100000 /* Process LOGO after ACC completes */ +-#define NLP_TGT_NO_SCSIID 0x00200000 /* good PRLI but no binding for scsid */ +-#define NLP_ISSUE_LOGO 0x00400000 /* waiting to issue a LOGO */ +-#define NLP_IN_DEV_LOSS 0x00800000 /* devloss in progress */ +-#define NLP_ACC_REGLOGIN 0x01000000 /* Issue Reg Login after successful ++/* nlp_flag mask bits */ ++enum lpfc_nlp_flag { ++ NLP_IGNR_REG_CMPL = 0, /* Rcvd rscn before we cmpl reg login */ ++ NLP_REG_LOGIN_SEND = 1, /* sent reglogin to adapter */ ++ NLP_SUPPRESS_RSP = 4, /* Remote NPort supports suppress rsp */ ++ NLP_PLOGI_SND = 5, /* sent PLOGI request for this entry */ ++ NLP_PRLI_SND = 6, /* sent PRLI request for this entry */ ++ NLP_ADISC_SND = 7, /* sent ADISC request for this entry */ ++ NLP_LOGO_SND = 8, /* sent LOGO request for this entry */ ++ NLP_RNID_SND = 10, /* sent RNID request for this entry */ ++ NLP_NVMET_RECOV = 12, /* NVMET auditing node for recovery. */ ++ NLP_UNREG_INP = 15, /* UNREG_RPI cmd is in progress */ ++ NLP_DROPPED = 16, /* Init ref count has been dropped */ ++ NLP_DELAY_TMO = 17, /* delay timeout is running for node */ ++ NLP_NPR_2B_DISC = 18, /* node is included in num_disc_nodes */ ++ NLP_RCV_PLOGI = 19, /* Rcv'ed PLOGI from remote system */ ++ NLP_LOGO_ACC = 20, /* Process LOGO after ACC completes */ ++ NLP_TGT_NO_SCSIID = 21, /* good PRLI but no binding for scsid */ ++ NLP_ISSUE_LOGO = 22, /* waiting to issue a LOGO */ ++ NLP_IN_DEV_LOSS = 23, /* devloss in progress */ ++ NLP_ACC_REGLOGIN = 24, /* Issue Reg Login after successful + ACC */ +-#define NLP_NPR_ADISC 0x02000000 /* Issue ADISC when dq'ed from ++ NLP_NPR_ADISC = 25, /* Issue ADISC when dq'ed from + NPR list */ +-#define NLP_RM_DFLT_RPI 0x04000000 /* need to remove leftover dflt RPI */ +-#define NLP_NODEV_REMOVE 0x08000000 /* Defer removal till discovery ends */ +-#define NLP_TARGET_REMOVE 0x10000000 /* Target remove in process */ +-#define NLP_SC_REQ 0x20000000 /* Target requires authentication */ +-#define NLP_FIRSTBURST 0x40000000 /* Target supports FirstBurst */ +-#define NLP_RPI_REGISTERED 0x80000000 /* nlp_rpi is valid */ ++ NLP_RM_DFLT_RPI = 26, /* need to remove leftover dflt RPI */ ++ NLP_NODEV_REMOVE = 27, /* Defer removal till discovery ends */ ++ NLP_TARGET_REMOVE = 28, /* Target remove in process */ ++ NLP_SC_REQ = 29, /* Target requires authentication */ ++ NLP_FIRSTBURST = 30, /* Target supports FirstBurst */ ++ NLP_RPI_REGISTERED = 31 /* nlp_rpi is valid */ ++}; + + /* There are 4 different double linked lists nodelist entries can reside on. 
+ * The Port Login (PLOGI) list and Address Discovery (ADISC) list are used +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c +index 4e049783fc94e..b5fa5054e952e 100644 +--- a/drivers/scsi/lpfc/lpfc_els.c ++++ b/drivers/scsi/lpfc/lpfc_els.c +@@ -725,11 +725,9 @@ lpfc_cmpl_els_flogi_fabric(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + list_for_each_entry_safe(np, next_np, + &vport->fc_nodes, nlp_listp) { + if ((np->nlp_state != NLP_STE_NPR_NODE) || +- !(np->nlp_flag & NLP_NPR_ADISC)) ++ !test_bit(NLP_NPR_ADISC, &np->nlp_flag)) + continue; +- spin_lock_irq(&np->lock); +- np->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&np->lock); ++ clear_bit(NLP_NPR_ADISC, &np->nlp_flag); + lpfc_unreg_rpi(vport, np); + } + lpfc_cleanup_pending_mbox(vport); +@@ -864,9 +862,7 @@ lpfc_cmpl_els_flogi_nport(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + sizeof(struct lpfc_name)); + /* Set state will put ndlp onto node list if not already done */ + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + + mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); + if (!mbox) +@@ -1018,7 +1014,7 @@ lpfc_cmpl_els_flogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + * registered with the SCSI transport, remove the initial + * reference to trigger node release. + */ +- if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS) && ++ if (!test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag) && + !(ndlp->fc4_xpt_flags & SCSI_XPT_REGD)) + lpfc_nlp_put(ndlp); + +@@ -1548,7 +1544,7 @@ lpfc_initial_flogi(struct lpfc_vport *vport) + * Otherwise, decrement node reference to trigger release. + */ + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) && +- !(ndlp->nlp_flag & NLP_IN_DEV_LOSS)) ++ !test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) + lpfc_nlp_put(ndlp); + return 0; + } +@@ -1597,7 +1593,7 @@ lpfc_initial_fdisc(struct lpfc_vport *vport) + * Otherwise, decrement node reference to trigger release. + */ + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) && +- !(ndlp->nlp_flag & NLP_IN_DEV_LOSS)) ++ !test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) + lpfc_nlp_put(ndlp); + return 0; + } +@@ -1675,9 +1671,9 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, + struct lpfc_nodelist *new_ndlp; + struct serv_parm *sp; + uint8_t name[sizeof(struct lpfc_name)]; +- uint32_t keepDID = 0, keep_nlp_flag = 0; ++ uint32_t keepDID = 0; + int rc; +- uint32_t keep_new_nlp_flag = 0; ++ unsigned long keep_nlp_flag = 0, keep_new_nlp_flag = 0; + uint16_t keep_nlp_state; + u32 keep_nlp_fc4_type = 0; + struct lpfc_nvme_rport *keep_nrport = NULL; +@@ -1704,8 +1700,8 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, + } + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_NODE, +- "3178 PLOGI confirm: ndlp x%x x%x x%x: " +- "new_ndlp x%x x%x x%x\n", ++ "3178 PLOGI confirm: ndlp x%x x%lx x%x: " ++ "new_ndlp x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_fc4_type, + (new_ndlp ? new_ndlp->nlp_DID : 0), + (new_ndlp ? 
new_ndlp->nlp_flag : 0), +@@ -1769,48 +1765,48 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, + new_ndlp->nlp_flag = ndlp->nlp_flag; + + /* if new_ndlp had NLP_UNREG_INP set, keep it */ +- if (keep_new_nlp_flag & NLP_UNREG_INP) +- new_ndlp->nlp_flag |= NLP_UNREG_INP; ++ if (test_bit(NLP_UNREG_INP, &keep_new_nlp_flag)) ++ set_bit(NLP_UNREG_INP, &new_ndlp->nlp_flag); + else +- new_ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ clear_bit(NLP_UNREG_INP, &new_ndlp->nlp_flag); + + /* if new_ndlp had NLP_RPI_REGISTERED set, keep it */ +- if (keep_new_nlp_flag & NLP_RPI_REGISTERED) +- new_ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ if (test_bit(NLP_RPI_REGISTERED, &keep_new_nlp_flag)) ++ set_bit(NLP_RPI_REGISTERED, &new_ndlp->nlp_flag); + else +- new_ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; ++ clear_bit(NLP_RPI_REGISTERED, &new_ndlp->nlp_flag); + + /* + * Retain the DROPPED flag. This will take care of the init + * refcount when affecting the state change + */ +- if (keep_new_nlp_flag & NLP_DROPPED) +- new_ndlp->nlp_flag |= NLP_DROPPED; ++ if (test_bit(NLP_DROPPED, &keep_new_nlp_flag)) ++ set_bit(NLP_DROPPED, &new_ndlp->nlp_flag); + else +- new_ndlp->nlp_flag &= ~NLP_DROPPED; ++ clear_bit(NLP_DROPPED, &new_ndlp->nlp_flag); + + ndlp->nlp_flag = keep_new_nlp_flag; + + /* if ndlp had NLP_UNREG_INP set, keep it */ +- if (keep_nlp_flag & NLP_UNREG_INP) +- ndlp->nlp_flag |= NLP_UNREG_INP; ++ if (test_bit(NLP_UNREG_INP, &keep_nlp_flag)) ++ set_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + else +- ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + + /* if ndlp had NLP_RPI_REGISTERED set, keep it */ +- if (keep_nlp_flag & NLP_RPI_REGISTERED) +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ if (test_bit(NLP_RPI_REGISTERED, &keep_nlp_flag)) ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + else +- ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; ++ clear_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + + /* + * Retain the DROPPED flag. This will take care of the init + * refcount when affecting the state change + */ +- if (keep_nlp_flag & NLP_DROPPED) +- ndlp->nlp_flag |= NLP_DROPPED; ++ if (test_bit(NLP_DROPPED, &keep_nlp_flag)) ++ set_bit(NLP_DROPPED, &ndlp->nlp_flag); + else +- ndlp->nlp_flag &= ~NLP_DROPPED; ++ clear_bit(NLP_DROPPED, &ndlp->nlp_flag); + + spin_unlock_irq(&new_ndlp->lock); + spin_unlock_irq(&ndlp->lock); +@@ -1888,7 +1884,7 @@ lpfc_plogi_confirm_nport(struct lpfc_hba *phba, uint32_t *prsp, + phba->active_rrq_pool); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS | LOG_NODE, +- "3173 PLOGI confirm exit: new_ndlp x%x x%x x%x\n", ++ "3173 PLOGI confirm exit: new_ndlp x%x x%lx x%x\n", + new_ndlp->nlp_DID, new_ndlp->nlp_flag, + new_ndlp->nlp_fc4_type); + +@@ -2009,7 +2005,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + IOCB_t *irsp; + struct lpfc_nodelist *ndlp, *free_ndlp; + struct lpfc_dmabuf *prsp; +- int disc; ++ bool disc; + struct serv_parm *sp = NULL; + u32 ulp_status, ulp_word4, did, iotag; + bool release_node = false; +@@ -2044,10 +2040,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* Since ndlp can be freed in the disc state machine, note if this node + * is being used during discovery. 
+ */ +- spin_lock_irq(&ndlp->lock); +- disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ disc = test_and_clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + + /* PLOGI completes to NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, +@@ -2060,9 +2053,7 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + + /* Check to see if link went down during discovery */ + if (lpfc_els_chk_latt(vport)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + goto out; + } + +@@ -2070,11 +2061,8 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* Check for retry */ + if (lpfc_els_retry(phba, cmdiocb, rspiocb)) { + /* ELS command is being retried */ +- if (disc) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); +- } ++ if (disc) ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + goto out; + } + /* Warn PLOGI status Don't print the vport to vport rjts */ +@@ -2097,7 +2085,8 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + * with the reglogin process. + */ + spin_lock_irq(&ndlp->lock); +- if ((ndlp->nlp_flag & (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI)) && ++ if ((test_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag) || ++ test_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag)) && + ndlp->nlp_state == NLP_STE_REG_LOGIN_ISSUE) { + spin_unlock_irq(&ndlp->lock); + goto out; +@@ -2108,8 +2097,8 @@ lpfc_cmpl_els_plogi(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + * start the device remove process. + */ + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) { +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS)) ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); ++ if (!test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) + release_node = true; + } + spin_unlock_irq(&ndlp->lock); +@@ -2212,12 +2201,13 @@ lpfc_issue_els_plogi(struct lpfc_vport *vport, uint32_t did, uint8_t retry) + * outstanding UNREG_RPI mbox command completes, unless we + * are going offline. This logic does not apply for Fabric DIDs + */ +- if ((ndlp->nlp_flag & (NLP_IGNR_REG_CMPL | NLP_UNREG_INP)) && ++ if ((test_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag) || ++ test_bit(NLP_UNREG_INP, &ndlp->nlp_flag)) && + ((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) && + !test_bit(FC_OFFLINE_MODE, &vport->fc_flag)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "4110 Issue PLOGI x%x deferred " +- "on NPort x%x rpi x%x flg x%x Data:" ++ "on NPort x%x rpi x%x flg x%lx Data:" + " x%px\n", + ndlp->nlp_defer_did, ndlp->nlp_DID, + ndlp->nlp_rpi, ndlp->nlp_flag, ndlp); +@@ -2335,10 +2325,10 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + ulp_status = get_job_ulpstatus(phba, rspiocb); + ulp_word4 = get_job_word4(phba, rspiocb); + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_PRLI_SND; ++ clear_bit(NLP_PRLI_SND, &ndlp->nlp_flag); + + /* Driver supports multiple FC4 types. Counters matter. 
*/ ++ spin_lock_irq(&ndlp->lock); + vport->fc_prli_sent--; + ndlp->fc4_prli_sent--; + spin_unlock_irq(&ndlp->lock); +@@ -2379,7 +2369,7 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* Warn PRLI status */ + lpfc_printf_vlog(vport, mode, LOG_ELS, + "2754 PRLI DID:%06X Status:x%x/x%x, " +- "data: x%x x%x x%x\n", ++ "data: x%x x%x x%lx\n", + ndlp->nlp_DID, ulp_status, + ulp_word4, ndlp->nlp_state, + ndlp->fc4_prli_sent, ndlp->nlp_flag); +@@ -2396,10 +2386,10 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + if ((ndlp->nlp_state >= NLP_STE_PLOGI_ISSUE && + ndlp->nlp_state <= NLP_STE_REG_LOGIN_ISSUE) || + (ndlp->nlp_state == NLP_STE_NPR_NODE && +- ndlp->nlp_flag & NLP_DELAY_TMO)) { +- lpfc_printf_vlog(vport, KERN_WARNING, LOG_NODE, ++ test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag))) { ++ lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE, + "2784 PRLI cmpl: Allow Node recovery " +- "DID x%06x nstate x%x nflag x%x\n", ++ "DID x%06x nstate x%x nflag x%lx\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag); + goto out; +@@ -2420,8 +2410,8 @@ lpfc_cmpl_els_prli(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + spin_lock_irq(&ndlp->lock); + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD)) && + !ndlp->fc4_prli_sent) { +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS)) ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); ++ if (!test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) + release_node = true; + } + spin_unlock_irq(&ndlp->lock); +@@ -2496,7 +2486,8 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_type &= ~(NLP_FCP_TARGET | NLP_FCP_INITIATOR); + ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR); + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; +- ndlp->nlp_flag &= ~(NLP_FIRSTBURST | NLP_NPR_2B_DISC); ++ clear_bit(NLP_FIRSTBURST, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + ndlp->nvme_fb_size = 0; + + send_next_prli: +@@ -2627,8 +2618,8 @@ lpfc_issue_els_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + * the ndlp is used to track outstanding PRLIs for different + * FC4 types. + */ ++ set_bit(NLP_PRLI_SND, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_PRLI_SND; + vport->fc_prli_sent++; + ndlp->fc4_prli_sent++; + spin_unlock_irq(&ndlp->lock); +@@ -2789,7 +2780,7 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + struct lpfc_vport *vport = cmdiocb->vport; + IOCB_t *irsp; + struct lpfc_nodelist *ndlp; +- int disc; ++ bool disc; + u32 ulp_status, ulp_word4, tmo, iotag; + bool release_node = false; + +@@ -2818,10 +2809,8 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* Since ndlp can be freed in the disc state machine, note if this node + * is being used during discovery. 
+ */ +- spin_lock_irq(&ndlp->lock); +- disc = (ndlp->nlp_flag & NLP_NPR_2B_DISC); +- ndlp->nlp_flag &= ~(NLP_ADISC_SND | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ disc = test_and_clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); ++ clear_bit(NLP_ADISC_SND, &ndlp->nlp_flag); + /* ADISC completes to NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0104 ADISC completes to NPort x%x " +@@ -2832,9 +2821,7 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + + /* Check to see if link went down during discovery */ + if (lpfc_els_chk_latt(vport)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + goto out; + } + +@@ -2843,9 +2830,7 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + if (lpfc_els_retry(phba, cmdiocb, rspiocb)) { + /* ELS command is being retried */ + if (disc) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_set_disctmo(vport); + } + goto out; +@@ -2864,8 +2849,8 @@ lpfc_cmpl_els_adisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + */ + spin_lock_irq(&ndlp->lock); + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) { +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- if (!(ndlp->nlp_flag & NLP_IN_DEV_LOSS)) ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); ++ if (!test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) + release_node = true; + } + spin_unlock_irq(&ndlp->lock); +@@ -2938,9 +2923,7 @@ lpfc_issue_els_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + phba->fc_stat.elsXmitADISC++; + elsiocb->cmd_cmpl = lpfc_cmpl_els_adisc; +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_ADISC_SND; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_ADISC_SND, &ndlp->nlp_flag); + elsiocb->ndlp = lpfc_nlp_get(ndlp); + if (!elsiocb->ndlp) { + lpfc_els_free_iocb(phba, elsiocb); +@@ -2961,9 +2944,7 @@ lpfc_issue_els_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + return 0; + + err: +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_ADISC_SND; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_ADISC_SND, &ndlp->nlp_flag); + return 1; + } + +@@ -2985,7 +2966,6 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + struct lpfc_nodelist *ndlp = cmdiocb->ndlp; + struct lpfc_vport *vport = ndlp->vport; + IOCB_t *irsp; +- unsigned long flags; + uint32_t skip_recovery = 0; + int wake_up_waiter = 0; + u32 ulp_status; +@@ -3007,8 +2987,8 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + iotag = irsp->ulpIoTag; + } + ++ clear_bit(NLP_LOGO_SND, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_LOGO_SND; + if (ndlp->save_flags & NLP_WAIT_FOR_LOGO) { + wake_up_waiter = 1; + ndlp->save_flags &= ~NLP_WAIT_FOR_LOGO; +@@ -3023,7 +3003,7 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* LOGO completes to NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0105 LOGO completes to NPort x%x " +- "IoTag x%x refcnt %d nflags x%x xflags x%x " ++ "IoTag x%x refcnt %d nflags x%lx xflags x%x " + "Data: x%x x%x x%x x%x\n", + ndlp->nlp_DID, iotag, + kref_read(&ndlp->kref), ndlp->nlp_flag, +@@ -3061,10 +3041,8 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* The driver sets this flag for an NPIV instance that doesn't want to + * log into the remote port. 
+ */ +- if (ndlp->nlp_flag & NLP_TARGET_REMOVE) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_TARGET_REMOVE, &ndlp->nlp_flag)) { ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_state_machine(vport, ndlp, cmdiocb, + NLP_EVT_DEVICE_RM); + goto out_rsrc_free; +@@ -3087,9 +3065,7 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET) && + skip_recovery == 0) { + lpfc_cancel_retry_delay_tmo(vport, ndlp); +- spin_lock_irqsave(&ndlp->lock, flags); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irqrestore(&ndlp->lock, flags); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "3187 LOGO completes to NPort x%x: Start " +@@ -3111,9 +3087,7 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + * register with the transport. + */ + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_state_machine(vport, ndlp, cmdiocb, + NLP_EVT_DEVICE_RM); + } +@@ -3154,12 +3128,8 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + uint16_t cmdsize; + int rc; + +- spin_lock_irq(&ndlp->lock); +- if (ndlp->nlp_flag & NLP_LOGO_SND) { +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_LOGO_SND, &ndlp->nlp_flag)) + return 0; +- } +- spin_unlock_irq(&ndlp->lock); + + cmdsize = (2 * sizeof(uint32_t)) + sizeof(struct lpfc_name); + elsiocb = lpfc_prep_els_iocb(vport, 1, cmdsize, retry, ndlp, +@@ -3178,10 +3148,8 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + phba->fc_stat.elsXmitLOGO++; + elsiocb->cmd_cmpl = lpfc_cmpl_els_logo; +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_SND; +- ndlp->nlp_flag &= ~NLP_ISSUE_LOGO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_SND, &ndlp->nlp_flag); ++ clear_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag); + elsiocb->ndlp = lpfc_nlp_get(ndlp); + if (!elsiocb->ndlp) { + lpfc_els_free_iocb(phba, elsiocb); +@@ -3206,9 +3174,7 @@ lpfc_issue_els_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + return 0; + + err: +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_LOGO_SND; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_LOGO_SND, &ndlp->nlp_flag); + return 1; + } + +@@ -3284,13 +3250,13 @@ lpfc_cmpl_els_cmd(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + static int + lpfc_reg_fab_ctrl_node(struct lpfc_vport *vport, struct lpfc_nodelist *fc_ndlp) + { +- int rc = 0; ++ int rc; + struct lpfc_hba *phba = vport->phba; + struct lpfc_nodelist *ns_ndlp; + LPFC_MBOXQ_t *mbox; + +- if (fc_ndlp->nlp_flag & NLP_RPI_REGISTERED) +- return rc; ++ if (test_bit(NLP_RPI_REGISTERED, &fc_ndlp->nlp_flag)) ++ return 0; + + ns_ndlp = lpfc_findnode_did(vport, NameServer_DID); + if (!ns_ndlp) +@@ -3307,7 +3273,7 @@ lpfc_reg_fab_ctrl_node(struct lpfc_vport *vport, struct lpfc_nodelist *fc_ndlp) + if (!mbox) { + lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE, + "0936 %s: no memory for reg_login " +- "Data: x%x x%x x%x x%x\n", __func__, ++ "Data: x%x x%x x%lx x%x\n", __func__, + fc_ndlp->nlp_DID, fc_ndlp->nlp_state, + fc_ndlp->nlp_flag, fc_ndlp->nlp_rpi); + return -ENOMEM; +@@ -3319,7 +3285,7 @@ lpfc_reg_fab_ctrl_node(struct lpfc_vport *vport, struct lpfc_nodelist *fc_ndlp) + goto out; + } + +- fc_ndlp->nlp_flag |= 
NLP_REG_LOGIN_SEND; ++ set_bit(NLP_REG_LOGIN_SEND, &fc_ndlp->nlp_flag); + mbox->mbox_cmpl = lpfc_mbx_cmpl_fc_reg_login; + mbox->ctx_ndlp = lpfc_nlp_get(fc_ndlp); + if (!mbox->ctx_ndlp) { +@@ -3343,7 +3309,7 @@ lpfc_reg_fab_ctrl_node(struct lpfc_vport *vport, struct lpfc_nodelist *fc_ndlp) + lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED); + lpfc_printf_vlog(vport, KERN_ERR, LOG_NODE, + "0938 %s: failed to format reg_login " +- "Data: x%x x%x x%x x%x\n", __func__, ++ "Data: x%x x%x x%lx x%x\n", __func__, + fc_ndlp->nlp_DID, fc_ndlp->nlp_state, + fc_ndlp->nlp_flag, fc_ndlp->nlp_rpi); + return rc; +@@ -4382,11 +4348,8 @@ lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp) + { + struct lpfc_work_evt *evtp; + +- if (!(nlp->nlp_flag & NLP_DELAY_TMO)) ++ if (!test_and_clear_bit(NLP_DELAY_TMO, &nlp->nlp_flag)) + return; +- spin_lock_irq(&nlp->lock); +- nlp->nlp_flag &= ~NLP_DELAY_TMO; +- spin_unlock_irq(&nlp->lock); + del_timer_sync(&nlp->nlp_delayfunc); + nlp->nlp_last_elscmd = 0; + if (!list_empty(&nlp->els_retry_evt.evt_listp)) { +@@ -4395,10 +4358,7 @@ lpfc_cancel_retry_delay_tmo(struct lpfc_vport *vport, struct lpfc_nodelist *nlp) + evtp = &nlp->els_retry_evt; + lpfc_nlp_put((struct lpfc_nodelist *)evtp->evt_arg1); + } +- if (nlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&nlp->lock); +- nlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&nlp->lock); ++ if (test_and_clear_bit(NLP_NPR_2B_DISC, &nlp->nlp_flag)) { + if (vport->num_disc_nodes) { + if (vport->port_state < LPFC_VPORT_READY) { + /* Check if there are more ADISCs to be sent */ +@@ -4478,14 +4438,11 @@ lpfc_els_retry_delay_handler(struct lpfc_nodelist *ndlp) + spin_lock_irq(&ndlp->lock); + cmd = ndlp->nlp_last_elscmd; + ndlp->nlp_last_elscmd = 0; ++ spin_unlock_irq(&ndlp->lock); + +- if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { +- spin_unlock_irq(&ndlp->lock); ++ if (!test_and_clear_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) + return; +- } + +- ndlp->nlp_flag &= ~NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); + /* + * If a discovery event readded nlp_delayfunc after timer + * firing and before processing the timer, cancel the +@@ -5008,9 +4965,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* delay is specified in milliseconds */ + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(delay)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + + ndlp->nlp_prev_state = ndlp->nlp_state; + if ((cmd == ELS_CMD_PRLI) || +@@ -5070,7 +5025,7 @@ lpfc_els_retry(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0108 No retry ELS command x%x to remote " + "NPORT x%x Retried:%d Error:x%x/%x " +- "IoTag x%x nflags x%x\n", ++ "IoTag x%x nflags x%lx\n", + cmd, did, cmdiocb->retry, ulp_status, + ulp_word4, cmdiocb->iotag, + (ndlp ? 
ndlp->nlp_flag : 0)); +@@ -5237,7 +5192,7 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* ACC to LOGO completes to NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0109 ACC to LOGO completes to NPort x%x refcnt %d " +- "last els x%x Data: x%x x%x x%x\n", ++ "last els x%x Data: x%lx x%x x%x\n", + ndlp->nlp_DID, kref_read(&ndlp->kref), + ndlp->nlp_last_elscmd, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); +@@ -5252,16 +5207,14 @@ lpfc_cmpl_els_logo_acc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + goto out; + + if (ndlp->nlp_state == NLP_STE_NPR_NODE) { +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED) ++ if (test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) + lpfc_unreg_rpi(vport, ndlp); + + /* If came from PRLO, then PRLO_ACC is done. + * Start rediscovery now. + */ + if (ndlp->nlp_last_elscmd == ELS_CMD_PRLO) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE); + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +@@ -5298,7 +5251,7 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + + if (ndlp) { + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE, +- "0006 rpi x%x DID:%x flg:%x %d x%px " ++ "0006 rpi x%x DID:%x flg:%lx %d x%px " + "mbx_cmd x%x mbx_flag x%x x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref), ndlp, mbx_cmd, +@@ -5309,11 +5262,9 @@ lpfc_mbx_cmpl_dflt_rpi(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + * first on an UNREG_LOGIN and then release the final + * references. + */ +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + if (mbx_cmd == MBX_UNREG_LOGIN) +- ndlp->nlp_flag &= ~NLP_UNREG_INP; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + lpfc_drop_node(ndlp->vport, ndlp); + } +@@ -5379,23 +5330,23 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* ELS response tag completes */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0110 ELS response tag x%x completes " +- "Data: x%x x%x x%x x%x x%x x%x x%x x%x %p %p\n", ++ "Data: x%x x%x x%x x%x x%lx x%x x%x x%x %p %p\n", + iotag, ulp_status, ulp_word4, tmo, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi, kref_read(&ndlp->kref), mbox, ndlp); + if (mbox) { +- if (ulp_status == 0 +- && (ndlp->nlp_flag & NLP_ACC_REGLOGIN)) { ++ if (ulp_status == 0 && ++ test_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag)) { + if (!lpfc_unreg_rpi(vport, ndlp) && + !test_bit(FC_PT2PT, &vport->fc_flag)) { +- if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE || ++ if (ndlp->nlp_state == NLP_STE_PLOGI_ISSUE || + ndlp->nlp_state == + NLP_STE_REG_LOGIN_ISSUE) { + lpfc_printf_vlog(vport, KERN_INFO, + LOG_DISCOVERY, + "0314 PLOGI recov " + "DID x%x " +- "Data: x%x x%x x%x\n", ++ "Data: x%x x%x x%lx\n", + ndlp->nlp_DID, + ndlp->nlp_state, + ndlp->nlp_rpi, +@@ -5412,18 +5363,17 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + goto out_free_mbox; + + mbox->vport = vport; +- if (ndlp->nlp_flag & NLP_RM_DFLT_RPI) { ++ if (test_bit(NLP_RM_DFLT_RPI, &ndlp->nlp_flag)) { + mbox->mbox_flag |= LPFC_MBX_IMED_UNREG; + mbox->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi; +- } +- else { ++ } else { + mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, + 
NLP_STE_REG_LOGIN_ISSUE); + } + +- ndlp->nlp_flag |= NLP_REG_LOGIN_SEND; ++ set_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + if (lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT) + != MBX_NOT_FINISHED) + goto out; +@@ -5432,12 +5382,12 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + * set for this failed mailbox command. + */ + lpfc_nlp_put(ndlp); +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + + /* ELS rsp: Cannot issue reg_login for */ + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0138 ELS rsp: Cannot issue reg_login for x%x " +- "Data: x%x x%x x%x\n", ++ "Data: x%lx x%x x%x\n", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); + } +@@ -5446,11 +5396,9 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + } + out: + if (ndlp && shost) { +- spin_lock_irq(&ndlp->lock); + if (mbox) +- ndlp->nlp_flag &= ~NLP_ACC_REGLOGIN; +- ndlp->nlp_flag &= ~NLP_RM_DFLT_RPI; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag); ++ clear_bit(NLP_RM_DFLT_RPI, &ndlp->nlp_flag); + } + + /* An SLI4 NPIV instance wants to drop the node at this point under +@@ -5528,9 +5476,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + elsiocb = lpfc_prep_els_iocb(vport, 0, cmdsize, oldiocb->retry, + ndlp, ndlp->nlp_DID, ELS_CMD_ACC); + if (!elsiocb) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + return 1; + } + +@@ -5558,7 +5504,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + pcmd += sizeof(uint32_t); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC: did:x%x flg:x%x", ++ "Issue ACC: did:x%x flg:x%lx", + ndlp->nlp_DID, ndlp->nlp_flag, 0); + break; + case ELS_CMD_FLOGI: +@@ -5637,7 +5583,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + } + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC FLOGI/PLOGI: did:x%x flg:x%x", ++ "Issue ACC FLOGI/PLOGI: did:x%x flg:x%lx", + ndlp->nlp_DID, ndlp->nlp_flag, 0); + break; + case ELS_CMD_PRLO: +@@ -5675,7 +5621,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + els_pkt_ptr->un.prlo.acceptRspCode = PRLO_REQ_EXECUTED; + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC PRLO: did:x%x flg:x%x", ++ "Issue ACC PRLO: did:x%x flg:x%lx", + ndlp->nlp_DID, ndlp->nlp_flag, 0); + break; + case ELS_CMD_RDF: +@@ -5720,12 +5666,10 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + default: + return 1; + } +- if (ndlp->nlp_flag & NLP_LOGO_ACC) { +- spin_lock_irq(&ndlp->lock); +- if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED || +- ndlp->nlp_flag & NLP_REG_LOGIN_SEND)) +- ndlp->nlp_flag &= ~NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_LOGO_ACC, &ndlp->nlp_flag)) { ++ if (!test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag) && ++ !test_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag)) ++ clear_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + elsiocb->cmd_cmpl = lpfc_cmpl_els_logo_acc; + } else { + elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp; +@@ -5748,7 +5692,7 @@ lpfc_els_rsp_acc(struct lpfc_vport *vport, uint32_t flag, + /* Xmit ELS ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0128 Xmit ELS ACC response Status: x%x, IoTag: x%x, " +- "XRI: x%x, DID: x%x, nlp_flag: x%x nlp_state: x%x " ++ "XRI: x%x, DID: x%x, nlp_flag: x%lx nlp_state: x%x " + "RPI: x%x, fc_flag x%lx refcnt %d\n", + rc, elsiocb->iotag, elsiocb->sli4_xritag, + 
ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, +@@ -5823,13 +5767,13 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError, + /* Xmit ELS RJT response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0129 Xmit ELS RJT x%x response tag x%x " +- "xri x%x, did x%x, nlp_flag x%x, nlp_state x%x, " ++ "xri x%x, did x%x, nlp_flag x%lx, nlp_state x%x, " + "rpi x%x\n", + rejectError, elsiocb->iotag, + get_job_ulpcontext(phba, elsiocb), ndlp->nlp_DID, + ndlp->nlp_flag, ndlp->nlp_state, ndlp->nlp_rpi); + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue LS_RJT: did:x%x flg:x%x err:x%x", ++ "Issue LS_RJT: did:x%x flg:x%lx err:x%x", + ndlp->nlp_DID, ndlp->nlp_flag, rejectError); + + phba->fc_stat.elsXmitLSRJT++; +@@ -5920,7 +5864,7 @@ lpfc_issue_els_edc_rsp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + lpfc_format_edc_lft_desc(phba, tlv); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue EDC ACC: did:x%x flg:x%x refcnt %d", ++ "Issue EDC ACC: did:x%x flg:x%lx refcnt %d", + ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref)); + elsiocb->cmd_cmpl = lpfc_cmpl_els_rsp; +@@ -5942,7 +5886,7 @@ lpfc_issue_els_edc_rsp(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + /* Xmit ELS ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0152 Xmit EDC ACC response Status: x%x, IoTag: x%x, " +- "XRI: x%x, DID: x%x, nlp_flag: x%x nlp_state: x%x " ++ "XRI: x%x, DID: x%x, nlp_flag: x%lx nlp_state: x%x " + "RPI: x%x, fc_flag x%lx\n", + rc, elsiocb->iotag, elsiocb->sli4_xritag, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, +@@ -6011,7 +5955,7 @@ lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, + /* Xmit ADISC ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0130 Xmit ADISC ACC response iotag x%x xri: " +- "x%x, did x%x, nlp_flag x%x, nlp_state x%x rpi x%x\n", ++ "x%x, did x%x, nlp_flag x%lx, nlp_state x%x rpi x%x\n", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); +@@ -6027,7 +5971,7 @@ lpfc_els_rsp_adisc_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, + ap->DID = be32_to_cpu(vport->fc_myDID); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC ADISC: did:x%x flg:x%x refcnt %d", ++ "Issue ACC ADISC: did:x%x flg:x%lx refcnt %d", + ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref)); + + phba->fc_stat.elsXmitACC++; +@@ -6133,7 +6077,7 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, + /* Xmit PRLI ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0131 Xmit PRLI ACC response tag x%x xri x%x, " +- "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x\n", ++ "did x%x, nlp_flag x%lx, nlp_state x%x, rpi x%x\n", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); +@@ -6204,7 +6148,7 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, + + lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, + "6015 NVME issue PRLI ACC word1 x%08x " +- "word4 x%08x word5 x%08x flag x%x, " ++ "word4 x%08x word5 x%08x flag x%lx, " + "fcp_info x%x nlp_type x%x\n", + npr_nvme->word1, npr_nvme->word4, + npr_nvme->word5, ndlp->nlp_flag, +@@ -6219,7 +6163,7 @@ lpfc_els_rsp_prli_acc(struct lpfc_vport *vport, struct lpfc_iocbq *oldiocb, + ndlp->nlp_DID); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC PRLI: did:x%x flg:x%x", ++ "Issue ACC PRLI: did:x%x flg:x%lx", + ndlp->nlp_DID, ndlp->nlp_flag, 
kref_read(&ndlp->kref)); + + phba->fc_stat.elsXmitACC++; +@@ -6333,7 +6277,7 @@ lpfc_els_rsp_rnid_acc(struct lpfc_vport *vport, uint8_t format, + } + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC RNID: did:x%x flg:x%x refcnt %d", ++ "Issue ACC RNID: did:x%x flg:x%lx refcnt %d", + ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref)); + + phba->fc_stat.elsXmitACC++; +@@ -6390,7 +6334,7 @@ lpfc_els_clear_rrq(struct lpfc_vport *vport, + get_job_ulpcontext(phba, iocb)); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Clear RRQ: did:x%x flg:x%x exchg:x%.08x", ++ "Clear RRQ: did:x%x flg:x%lx exchg:x%.08x", + ndlp->nlp_DID, ndlp->nlp_flag, rrq->rrq_exchg); + if (vport->fc_myDID == be32_to_cpu(bf_get(rrq_did, rrq))) + xri = bf_get(rrq_oxid, rrq); +@@ -6467,7 +6411,7 @@ lpfc_els_rsp_echo_acc(struct lpfc_vport *vport, uint8_t *data, + memcpy(pcmd, data, cmdsize - sizeof(uint32_t)); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_RSP, +- "Issue ACC ECHO: did:x%x flg:x%x refcnt %d", ++ "Issue ACC ECHO: did:x%x flg:x%lx refcnt %d", + ndlp->nlp_DID, ndlp->nlp_flag, kref_read(&ndlp->kref)); + + phba->fc_stat.elsXmitACC++; +@@ -6517,14 +6461,12 @@ lpfc_els_disc_adisc(struct lpfc_vport *vport) + list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) { + + if (ndlp->nlp_state != NLP_STE_NPR_NODE || +- !(ndlp->nlp_flag & NLP_NPR_ADISC)) ++ !test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) + continue; + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + +- if (!(ndlp->nlp_flag & NLP_NPR_2B_DISC)) { ++ if (!test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { + /* This node was marked for ADISC but was not picked + * for discovery. This is possible if the node was + * missing in gidft response. 
+@@ -6582,9 +6524,9 @@ lpfc_els_disc_plogi(struct lpfc_vport *vport) + /* go thru NPR nodes and issue any remaining ELS PLOGIs */ + list_for_each_entry_safe(ndlp, next_ndlp, &vport->fc_nodes, nlp_listp) { + if (ndlp->nlp_state == NLP_STE_NPR_NODE && +- (ndlp->nlp_flag & NLP_NPR_2B_DISC) != 0 && +- (ndlp->nlp_flag & NLP_DELAY_TMO) == 0 && +- (ndlp->nlp_flag & NLP_NPR_ADISC) == 0) { ++ test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag) && ++ !test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag) && ++ !test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) { + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE); + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +@@ -7080,7 +7022,7 @@ lpfc_els_rdp_cmpl(struct lpfc_hba *phba, struct lpfc_rdp_context *rdp_context, + + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "2171 Xmit RDP response tag x%x xri x%x, " +- "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x", ++ "did x%x, nlp_flag x%lx, nlp_state x%x, rpi x%x", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); +@@ -8054,7 +7996,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + */ + if (vport->port_state <= LPFC_NS_QRY) { + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RSCN ignore: did:x%x/ste:x%x flg:x%x", ++ "RCV RSCN ignore: did:x%x/ste:x%x flg:x%lx", + ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag); + + lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); +@@ -8084,7 +8026,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + vport->fc_flag, payload_len, + *lp, vport->fc_rscn_id_cnt); + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RSCN vport: did:x%x/ste:x%x flg:x%x", ++ "RCV RSCN vport: did:x%x/ste:x%x flg:x%lx", + ndlp->nlp_DID, vport->port_state, + ndlp->nlp_flag); + +@@ -8121,7 +8063,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + if (test_bit(FC_RSCN_MODE, &vport->fc_flag) || + test_bit(FC_NDISC_ACTIVE, &vport->fc_flag)) { + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RSCN defer: did:x%x/ste:x%x flg:x%x", ++ "RCV RSCN defer: did:x%x/ste:x%x flg:x%lx", + ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag); + + set_bit(FC_RSCN_DEFERRED, &vport->fc_flag); +@@ -8177,7 +8119,7 @@ lpfc_els_rcv_rscn(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + return 0; + } + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RSCN: did:x%x/ste:x%x flg:x%x", ++ "RCV RSCN: did:x%x/ste:x%x flg:x%lx", + ndlp->nlp_DID, vport->port_state, ndlp->nlp_flag); + + set_bit(FC_RSCN_MODE, &vport->fc_flag); +@@ -8683,7 +8625,7 @@ lpfc_els_rsp_rls_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + /* Xmit ELS RLS ACC response tag */ + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS, + "2874 Xmit ELS RLS ACC response tag x%x xri x%x, " +- "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x\n", ++ "did x%x, nlp_flag x%lx, nlp_state x%x, rpi x%x\n", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); +@@ -8845,7 +8787,7 @@ lpfc_els_rcv_rtv(struct lpfc_vport *vport, struct lpfc_iocbq *cmdiocb, + /* Xmit ELS RLS ACC response tag */ + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_ELS, + "2875 Xmit ELS RTV ACC response tag x%x xri x%x, " +- "did x%x, nlp_flag x%x, nlp_state x%x, rpi x%x, " ++ "did x%x, nlp_flag x%lx, nlp_state x%x, rpi x%x, " + "Data: x%x x%x x%x\n", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, +@@ -9042,7 +8984,7 @@ 
lpfc_els_rsp_rpl_acc(struct lpfc_vport *vport, uint16_t cmdsize, + /* Xmit ELS RPL ACC response tag */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "0120 Xmit ELS RPL ACC response tag x%x " +- "xri x%x, did x%x, nlp_flag x%x, nlp_state x%x, " ++ "xri x%x, did x%x, nlp_flag x%lx, nlp_state x%x, " + "rpi x%x\n", + elsiocb->iotag, ulp_context, + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, +@@ -10387,14 +10329,11 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + * Do not process any unsolicited ELS commands + * if the ndlp is in DEV_LOSS + */ +- spin_lock_irq(&ndlp->lock); +- if (ndlp->nlp_flag & NLP_IN_DEV_LOSS) { +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag)) { + if (newnode) + lpfc_nlp_put(ndlp); + goto dropit; + } +- spin_unlock_irq(&ndlp->lock); + + elsiocb->ndlp = lpfc_nlp_get(ndlp); + if (!elsiocb->ndlp) +@@ -10423,7 +10362,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + switch (cmd) { + case ELS_CMD_PLOGI: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV PLOGI: did:x%x/ste:x%x flg:x%x", ++ "RCV PLOGI: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvPLOGI++; +@@ -10462,9 +10401,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + } + } + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_TARGET_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_TARGET_REMOVE, &ndlp->nlp_flag); + + lpfc_disc_state_machine(vport, ndlp, elsiocb, + NLP_EVT_RCV_PLOGI); +@@ -10472,7 +10409,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_FLOGI: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV FLOGI: did:x%x/ste:x%x flg:x%x", ++ "RCV FLOGI: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvFLOGI++; +@@ -10499,7 +10436,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_LOGO: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV LOGO: did:x%x/ste:x%x flg:x%x", ++ "RCV LOGO: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvLOGO++; +@@ -10516,7 +10453,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_PRLO: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV PRLO: did:x%x/ste:x%x flg:x%x", ++ "RCV PRLO: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvPRLO++; +@@ -10545,7 +10482,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_ADISC: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV ADISC: did:x%x/ste:x%x flg:x%x", ++ "RCV ADISC: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + lpfc_send_els_event(vport, ndlp, payload); +@@ -10560,7 +10497,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_PDISC: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV PDISC: did:x%x/ste:x%x flg:x%x", ++ "RCV PDISC: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvPDISC++; +@@ -10574,7 +10511,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_FARPR: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV FARPR: did:x%x/ste:x%x flg:x%x", ++ "RCV FARPR: 
did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvFARPR++; +@@ -10582,7 +10519,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_FARP: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV FARP: did:x%x/ste:x%x flg:x%x", ++ "RCV FARP: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvFARP++; +@@ -10590,7 +10527,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_FAN: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV FAN: did:x%x/ste:x%x flg:x%x", ++ "RCV FAN: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvFAN++; +@@ -10599,7 +10536,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + case ELS_CMD_PRLI: + case ELS_CMD_NVMEPRLI: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV PRLI: did:x%x/ste:x%x flg:x%x", ++ "RCV PRLI: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvPRLI++; +@@ -10613,7 +10550,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_LIRR: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV LIRR: did:x%x/ste:x%x flg:x%x", ++ "RCV LIRR: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvLIRR++; +@@ -10624,7 +10561,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_RLS: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RLS: did:x%x/ste:x%x flg:x%x", ++ "RCV RLS: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvRLS++; +@@ -10635,7 +10572,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_RPL: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RPL: did:x%x/ste:x%x flg:x%x", ++ "RCV RPL: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvRPL++; +@@ -10646,7 +10583,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_RNID: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RNID: did:x%x/ste:x%x flg:x%x", ++ "RCV RNID: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvRNID++; +@@ -10657,7 +10594,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_RTV: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RTV: did:x%x/ste:x%x flg:x%x", ++ "RCV RTV: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + phba->fc_stat.elsRcvRTV++; + lpfc_els_rcv_rtv(vport, elsiocb, ndlp); +@@ -10667,7 +10604,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_RRQ: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV RRQ: did:x%x/ste:x%x flg:x%x", ++ "RCV RRQ: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvRRQ++; +@@ -10678,7 +10615,7 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_ECHO: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV ECHO: did:x%x/ste:x%x flg:x%x", ++ "RCV ECHO: did:x%x/ste:x%x flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + phba->fc_stat.elsRcvECHO++; +@@ 
-10694,7 +10631,8 @@ lpfc_els_unsol_buffer(struct lpfc_hba *phba, struct lpfc_sli_ring *pring, + break; + case ELS_CMD_FPIN: + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_UNSOL, +- "RCV FPIN: did:x%x/ste:x%x flg:x%x", ++ "RCV FPIN: did:x%x/ste:x%x " ++ "flg:x%lx", + did, vport->port_state, ndlp->nlp_flag); + + lpfc_els_rcv_fpin(vport, (struct fc_els_fpin *)payload, +@@ -11202,9 +11140,7 @@ lpfc_retry_pport_discovery(struct lpfc_hba *phba) + return; + + mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_FLOGI; + phba->pport->port_state = LPFC_FLOGI; + return; +@@ -11335,11 +11271,9 @@ lpfc_cmpl_els_fdisc(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + list_for_each_entry_safe(np, next_np, + &vport->fc_nodes, nlp_listp) { + if ((np->nlp_state != NLP_STE_NPR_NODE) || +- !(np->nlp_flag & NLP_NPR_ADISC)) ++ !test_bit(NLP_NPR_ADISC, &np->nlp_flag)) + continue; +- spin_lock_irq(&ndlp->lock); +- np->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_ADISC, &np->nlp_flag); + lpfc_unreg_rpi(vport, np); + } + lpfc_cleanup_pending_mbox(vport); +@@ -11542,7 +11476,7 @@ lpfc_cmpl_els_npiv_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* NPIV LOGO completes to NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, + "2928 NPIV LOGO completes to NPort x%x " +- "Data: x%x x%x x%x x%x x%x x%x x%x\n", ++ "Data: x%x x%x x%x x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ulp_status, ulp_word4, + tmo, vport->num_disc_nodes, + kref_read(&ndlp->kref), ndlp->nlp_flag, +@@ -11558,8 +11492,9 @@ lpfc_cmpl_els_npiv_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + /* Wake up lpfc_vport_delete if waiting...*/ + if (ndlp->logo_waitq) + wake_up(ndlp->logo_waitq); ++ clear_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag); ++ clear_bit(NLP_LOGO_SND, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_ISSUE_LOGO | NLP_LOGO_SND); + ndlp->save_flags &= ~NLP_WAIT_FOR_LOGO; + spin_unlock_irq(&ndlp->lock); + } +@@ -11609,13 +11544,11 @@ lpfc_issue_els_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + memcpy(pcmd, &vport->fc_portname, sizeof(struct lpfc_name)); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_ELS_CMD, +- "Issue LOGO npiv did:x%x flg:x%x", ++ "Issue LOGO npiv did:x%x flg:x%lx", + ndlp->nlp_DID, ndlp->nlp_flag, 0); + + elsiocb->cmd_cmpl = lpfc_cmpl_els_npiv_logo; +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_SND; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_SND, &ndlp->nlp_flag); + elsiocb->ndlp = lpfc_nlp_get(ndlp); + if (!elsiocb->ndlp) { + lpfc_els_free_iocb(phba, elsiocb); +@@ -11631,9 +11564,7 @@ lpfc_issue_els_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + return 0; + + err: +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_LOGO_SND; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_LOGO_SND, &ndlp->nlp_flag); + return 1; + } + +@@ -12114,7 +12045,7 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, + lpfc_printf_log(phba, KERN_ERR, LOG_TRACE_EVENT, + "3094 Start rport recovery on shost id 0x%x " + "fc_id 0x%06x vpi 0x%x rpi 0x%x state 0x%x " +- "flags 0x%x\n", ++ "flag 0x%lx\n", + shost->host_no, ndlp->nlp_DID, + vport->vpi, ndlp->nlp_rpi, ndlp->nlp_state, + ndlp->nlp_flag); +@@ -12124,8 +12055,8 @@ lpfc_sli_abts_recover_port(struct lpfc_vport *vport, + */ + spin_lock_irqsave(&ndlp->lock, 
flags); + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; +- ndlp->nlp_flag |= NLP_ISSUE_LOGO; + spin_unlock_irqrestore(&ndlp->lock, flags); ++ set_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag); + lpfc_unreg_rpi(vport, ndlp); + } + +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c +index ff559b28738cf..5297122f89fc8 100644 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c +@@ -137,7 +137,7 @@ lpfc_terminate_rport_io(struct fc_rport *rport) + ndlp = rdata->pnode; + vport = ndlp->vport; + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT, +- "rport terminate: sid:x%x did:x%x flg:x%x", ++ "rport terminate: sid:x%x did:x%x flg:x%lx", + ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag); + + if (ndlp->nlp_sid != NLP_NO_SID) +@@ -165,11 +165,11 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + phba = vport->phba; + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT, +- "rport devlosscb: sid:x%x did:x%x flg:x%x", ++ "rport devlosscb: sid:x%x did:x%x flg:x%lx", + ndlp->nlp_sid, ndlp->nlp_DID, ndlp->nlp_flag); + + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE, +- "3181 dev_loss_callbk x%06x, rport x%px flg x%x " ++ "3181 dev_loss_callbk x%06x, rport x%px flg x%lx " + "load_flag x%lx refcnt %u state %d xpt x%x\n", + ndlp->nlp_DID, ndlp->rport, ndlp->nlp_flag, + vport->load_flag, kref_read(&ndlp->kref), +@@ -208,18 +208,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + spin_unlock_irqrestore(&ndlp->lock, iflags); + } + +- spin_lock_irqsave(&ndlp->lock, iflags); +- + /* Only 1 thread can drop the initial node reference. If + * another thread has set NLP_DROPPED, this thread is done. + */ +- if (nvme_reg || (ndlp->nlp_flag & NLP_DROPPED)) { +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ if (nvme_reg || test_bit(NLP_DROPPED, &ndlp->nlp_flag)) + return; +- } + +- ndlp->nlp_flag |= NLP_DROPPED; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ set_bit(NLP_DROPPED, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + return; + } +@@ -253,14 +248,14 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + return; + } + +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag |= NLP_IN_DEV_LOSS; ++ set_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag); + ++ spin_lock_irqsave(&ndlp->lock, iflags); + /* If there is a PLOGI in progress, and we are in a + * NLP_NPR_2B_DISC state, don't turn off the flag. + */ + if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE) +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + + /* + * The backend does not expect any more calls associated with this +@@ -289,15 +284,13 @@ lpfc_dev_loss_tmo_callbk(struct fc_rport *rport) + } else { + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE, + "3188 worker thread is stopped %s x%06x, " +- " rport x%px flg x%x load_flag x%lx refcnt " ++ " rport x%px flg x%lx load_flag x%lx refcnt " + "%d\n", __func__, ndlp->nlp_DID, + ndlp->rport, ndlp->nlp_flag, + vport->load_flag, kref_read(&ndlp->kref)); + if (!(ndlp->fc4_xpt_flags & NVME_XPT_REGD)) { +- spin_lock_irqsave(&ndlp->lock, iflags); + /* Node is in dev loss. No further transaction. 
*/ +- ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ clear_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag); + lpfc_disc_state_machine(vport, ndlp, NULL, + NLP_EVT_DEVICE_RM); + } +@@ -430,7 +423,7 @@ lpfc_check_nlp_post_devloss(struct lpfc_vport *vport, + lpfc_nlp_get(ndlp); + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY | LOG_NODE, + "8438 Devloss timeout reversed on DID x%x " +- "refcnt %d ndlp %p flag x%x " ++ "refcnt %d ndlp %p flag x%lx " + "port_state = x%x\n", + ndlp->nlp_DID, kref_read(&ndlp->kref), ndlp, + ndlp->nlp_flag, vport->port_state); +@@ -473,7 +466,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + ndlp->nlp_DID, ndlp->nlp_type, ndlp->nlp_sid); + + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_NODE, +- "3182 %s x%06x, nflag x%x xflags x%x refcnt %d\n", ++ "3182 %s x%06x, nflag x%lx xflags x%x refcnt %d\n", + __func__, ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->fc4_xpt_flags, kref_read(&ndlp->kref)); + +@@ -487,9 +480,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + *(name+4), *(name+5), *(name+6), *(name+7), + ndlp->nlp_DID); + +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ clear_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag); + return fcf_inuse; + } + +@@ -517,7 +508,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + } + break; + case Fabric_Cntl_DID: +- if (ndlp->nlp_flag & NLP_REG_LOGIN_SEND) ++ if (test_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag)) + recovering = true; + break; + case FDMI_DID: +@@ -545,15 +536,13 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + * the following lpfc_nlp_put is necessary after fabric node is + * recovered. + */ +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ clear_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag); + if (recovering) { + lpfc_printf_vlog(vport, KERN_INFO, + LOG_DISCOVERY | LOG_NODE, + "8436 Devloss timeout marked on " + "DID x%x refcnt %d ndlp %p " +- "flag x%x port_state = x%x\n", ++ "flag x%lx port_state = x%x\n", + ndlp->nlp_DID, kref_read(&ndlp->kref), + ndlp, ndlp->nlp_flag, + vport->port_state); +@@ -570,7 +559,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + LOG_DISCOVERY | LOG_NODE, + "8437 Devloss timeout ignored on " + "DID x%x refcnt %d ndlp %p " +- "flag x%x port_state = x%x\n", ++ "flag x%lx port_state = x%x\n", + ndlp->nlp_DID, kref_read(&ndlp->kref), + ndlp, ndlp->nlp_flag, + vport->port_state); +@@ -590,7 +579,7 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0203 Devloss timeout on " + "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x " +- "NPort x%06x Data: x%x x%x x%x refcnt %d\n", ++ "NPort x%06x Data: x%lx x%x x%x refcnt %d\n", + *name, *(name+1), *(name+2), *(name+3), + *(name+4), *(name+5), *(name+6), *(name+7), + ndlp->nlp_DID, ndlp->nlp_flag, +@@ -600,15 +589,13 @@ lpfc_dev_loss_tmo_handler(struct lpfc_nodelist *ndlp) + lpfc_printf_vlog(vport, KERN_INFO, LOG_TRACE_EVENT, + "0204 Devloss timeout on " + "WWPN %02x:%02x:%02x:%02x:%02x:%02x:%02x:%02x " +- "NPort x%06x Data: x%x x%x x%x\n", ++ "NPort x%06x Data: x%lx x%x x%x\n", + *name, *(name+1), *(name+2), *(name+3), + *(name+4), *(name+5), *(name+6), *(name+7), + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, ndlp->nlp_rpi); + } +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag &= ~NLP_IN_DEV_LOSS; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ 
clear_bit(NLP_IN_DEV_LOSS, &ndlp->nlp_flag); + + /* If we are devloss, but we are in the process of rediscovering the + * ndlp, don't issue a NLP_EVT_DEVICE_RM event. +@@ -1373,7 +1360,7 @@ lpfc_linkup_cleanup_nodes(struct lpfc_vport *vport) + if (ndlp->nlp_DID != Fabric_DID) + lpfc_unreg_rpi(vport, ndlp); + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) { ++ } else if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) { + /* Fail outstanding IO now since device is + * marked for PLOGI. + */ +@@ -3882,14 +3869,13 @@ lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + pmb->ctx_ndlp = NULL; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_SLI | LOG_NODE | LOG_DISCOVERY, +- "0002 rpi:%x DID:%x flg:%x %d x%px\n", ++ "0002 rpi:%x DID:%x flg:%lx %d x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref), + ndlp); +- if (ndlp->nlp_flag & NLP_REG_LOGIN_SEND) +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + +- if (ndlp->nlp_flag & NLP_IGNR_REG_CMPL || ++ if (test_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag) || + ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE) { + /* We rcvd a rscn after issuing this + * mbox reg login, we may have cycled +@@ -3899,16 +3885,14 @@ lpfc_mbx_cmpl_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + * there is another reg login in + * process. + */ +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_IGNR_REG_CMPL; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag); + + /* + * We cannot leave the RPI registered because + * if we go thru discovery again for this ndlp + * a subsequent REG_RPI will fail. + */ +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + lpfc_unreg_rpi(vport, ndlp); + } + +@@ -4221,7 +4205,7 @@ lpfc_mbx_cmpl_fabric_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + + if (phba->sli_rev < LPFC_SLI_REV4) + ndlp->nlp_rpi = mb->un.varWords[0]; +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + ndlp->nlp_type |= NLP_FABRIC; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); + +@@ -4352,9 +4336,7 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + * reference. 
+ */ + if (!(ndlp->fc4_xpt_flags & (SCSI_XPT_REGD | NVME_XPT_REGD))) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + } + +@@ -4375,11 +4357,11 @@ lpfc_mbx_cmpl_ns_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + + if (phba->sli_rev < LPFC_SLI_REV4) + ndlp->nlp_rpi = mb->un.varWords[0]; +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + ndlp->nlp_type |= NLP_FABRIC; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, +- "0003 rpi:%x DID:%x flg:%x %d x%px\n", ++ "0003 rpi:%x DID:%x flg:%lx %d x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref), + ndlp); +@@ -4471,8 +4453,8 @@ lpfc_mbx_cmpl_fc_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + __func__, ndlp->nlp_DID, ndlp->nlp_rpi, + ndlp->nlp_state); + +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + ndlp->nlp_type |= NLP_FABRIC; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); + +@@ -4506,7 +4488,7 @@ lpfc_register_remote_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT, +- "rport add: did:x%x flg:x%x type x%x", ++ "rport add: did:x%x flg:x%lx type x%x", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type); + + /* Don't add the remote port if unloading. */ +@@ -4574,7 +4556,7 @@ lpfc_unregister_remote_port(struct lpfc_nodelist *ndlp) + return; + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT, +- "rport delete: did:x%x flg:x%x type x%x", ++ "rport delete: did:x%x flg:x%lx type x%x", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, +@@ -4690,7 +4672,7 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + lpfc_printf_vlog(vport, KERN_INFO, + LOG_ELS | LOG_NODE | LOG_DISCOVERY, + "0999 %s Not regd: ndlp x%px rport x%px DID " +- "x%x FLG x%x XPT x%x\n", ++ "x%x FLG x%lx XPT x%x\n", + __func__, ndlp, ndlp->rport, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp->fc4_xpt_flags); + return; +@@ -4706,7 +4688,7 @@ lpfc_nlp_unreg_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + } else if (!ndlp->rport) { + lpfc_printf_vlog(vport, KERN_INFO, + LOG_ELS | LOG_NODE | LOG_DISCOVERY, +- "1999 %s NDLP in devloss x%px DID x%x FLG x%x" ++ "1999 %s NDLP in devloss x%px DID x%x FLG x%lx" + " XPT x%x refcnt %u\n", + __func__, ndlp, ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->fc4_xpt_flags, +@@ -4751,7 +4733,7 @@ lpfc_handle_adisc_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_type |= NLP_FC_NODE; + fallthrough; + case NLP_STE_MAPPED_NODE: +- ndlp->nlp_flag &= ~NLP_NODEV_REMOVE; ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + lpfc_nlp_reg_node(vport, ndlp); + break; + +@@ -4762,7 +4744,7 @@ lpfc_handle_adisc_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + * backend, attempt it now + */ + case NLP_STE_NPR_NODE: +- ndlp->nlp_flag &= ~NLP_RCV_PLOGI; ++ clear_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag); + fallthrough; + default: + lpfc_nlp_unreg_node(vport, ndlp); +@@ -4783,13 +4765,13 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + } + + if (new_state == NLP_STE_UNMAPPED_NODE) { +- ndlp->nlp_flag &= ~NLP_NODEV_REMOVE; ++ clear_bit(NLP_NODEV_REMOVE, 
&ndlp->nlp_flag); + ndlp->nlp_type |= NLP_FC_NODE; + } + if (new_state == NLP_STE_MAPPED_NODE) +- ndlp->nlp_flag &= ~NLP_NODEV_REMOVE; ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + if (new_state == NLP_STE_NPR_NODE) +- ndlp->nlp_flag &= ~NLP_RCV_PLOGI; ++ clear_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag); + + /* Reg/Unreg for FCP and NVME Transport interface */ + if ((old_state == NLP_STE_MAPPED_NODE || +@@ -4797,7 +4779,7 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* For nodes marked for ADISC, Handle unreg in ADISC cmpl + * if linkup. In linkdown do unreg_node + */ +- if (!(ndlp->nlp_flag & NLP_NPR_ADISC) || ++ if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag) || + !lpfc_is_link_up(vport->phba)) + lpfc_nlp_unreg_node(vport, ndlp); + } +@@ -4817,9 +4799,7 @@ lpfc_nlp_state_cleanup(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + (!ndlp->rport || + ndlp->rport->scsi_target_id == -1 || + ndlp->rport->scsi_target_id >= LPFC_MAX_TARGET)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_TGT_NO_SCSIID; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_TGT_NO_SCSIID, &ndlp->nlp_flag); + lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); + } + } +@@ -4851,7 +4831,7 @@ lpfc_nlp_set_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + int state) + { + int old_state = ndlp->nlp_state; +- int node_dropped = ndlp->nlp_flag & NLP_DROPPED; ++ bool node_dropped = test_bit(NLP_DROPPED, &ndlp->nlp_flag); + char name1[16], name2[16]; + unsigned long iflags; + +@@ -4867,7 +4847,7 @@ lpfc_nlp_set_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + if (node_dropped && old_state == NLP_STE_UNUSED_NODE && + state != NLP_STE_UNUSED_NODE) { +- ndlp->nlp_flag &= ~NLP_DROPPED; ++ clear_bit(NLP_DROPPED, &ndlp->nlp_flag); + lpfc_nlp_get(ndlp); + } + +@@ -4875,7 +4855,7 @@ lpfc_nlp_set_state(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + state != NLP_STE_NPR_NODE) + lpfc_cancel_retry_delay_tmo(vport, ndlp); + if (old_state == NLP_STE_UNMAPPED_NODE) { +- ndlp->nlp_flag &= ~NLP_TGT_NO_SCSIID; ++ clear_bit(NLP_TGT_NO_SCSIID, &ndlp->nlp_flag); + ndlp->nlp_type &= ~NLP_FC_NODE; + } + +@@ -4972,14 +4952,8 @@ lpfc_drop_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + * reference from lpfc_nlp_init. If set, don't drop it again and + * introduce an imbalance. + */ +- spin_lock_irq(&ndlp->lock); +- if (!(ndlp->nlp_flag & NLP_DROPPED)) { +- ndlp->nlp_flag |= NLP_DROPPED; +- spin_unlock_irq(&ndlp->lock); ++ if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag)) + lpfc_nlp_put(ndlp); +- return; +- } +- spin_unlock_irq(&ndlp->lock); + } + + /* +@@ -5094,9 +5068,9 @@ lpfc_check_sli_ndlp(struct lpfc_hba *phba, + } else if (pring->ringno == LPFC_FCP_RING) { + /* Skip match check if waiting to relogin to FCP target */ + if ((ndlp->nlp_type & NLP_FCP_TARGET) && +- (ndlp->nlp_flag & NLP_DELAY_TMO)) { ++ test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) + return 0; +- } ++ + if (ulp_context == ndlp->nlp_rpi) + return 1; + } +@@ -5166,7 +5140,7 @@ lpfc_no_rpi(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp) + * Everything that matches on txcmplq will be returned + * by firmware with a no rpi error. 
+ */ +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { ++ if (test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) { + if (phba->sli_rev != LPFC_SLI_REV4) + lpfc_sli3_dequeue_nport_iocbs(phba, ndlp, &completions); + else +@@ -5200,21 +5174,19 @@ lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + lpfc_issue_els_logo(vport, ndlp, 0); + + /* Check to see if there are any deferred events to process */ +- if ((ndlp->nlp_flag & NLP_UNREG_INP) && +- (ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING)) { ++ if (test_bit(NLP_UNREG_INP, &ndlp->nlp_flag) && ++ ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1434 UNREG cmpl deferred logo x%x " + "on NPort x%x Data: x%x x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, ndlp); + +- ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); + } else { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_UNREG_INP; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + } + + /* The node has an outstanding reference for the unreg. Now +@@ -5241,9 +5213,8 @@ lpfc_set_unreg_login_mbx_cmpl(struct lpfc_hba *phba, struct lpfc_vport *vport, + if (!mbox->ctx_ndlp) + return; + +- if (ndlp->nlp_flag & NLP_ISSUE_LOGO) { ++ if (test_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag)) { + mbox->mbox_cmpl = lpfc_nlp_logo_unreg; +- + } else if (phba->sli_rev == LPFC_SLI_REV4 && + !test_bit(FC_UNLOADING, &vport->load_flag) && + (bf_get(lpfc_sli_intf_if_type, &phba->sli4_hba.sli_intf) >= +@@ -5272,13 +5243,13 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + int rc, acc_plogi = 1; + uint16_t rpi; + +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED || +- ndlp->nlp_flag & NLP_REG_LOGIN_SEND) { +- if (ndlp->nlp_flag & NLP_REG_LOGIN_SEND) ++ if (test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag) || ++ test_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag)) { ++ if (test_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag)) + lpfc_printf_vlog(vport, KERN_INFO, + LOG_NODE | LOG_DISCOVERY, + "3366 RPI x%x needs to be " +- "unregistered nlp_flag x%x " ++ "unregistered nlp_flag x%lx " + "did x%x\n", + ndlp->nlp_rpi, ndlp->nlp_flag, + ndlp->nlp_DID); +@@ -5286,11 +5257,11 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + /* If there is already an UNREG in progress for this ndlp, + * no need to queue up another one. 
+ */ +- if (ndlp->nlp_flag & NLP_UNREG_INP) { ++ if (test_bit(NLP_UNREG_INP, &ndlp->nlp_flag)) { + lpfc_printf_vlog(vport, KERN_INFO, + LOG_NODE | LOG_DISCOVERY, + "1436 unreg_rpi SKIP UNREG x%x on " +- "NPort x%x deferred x%x flg x%x " ++ "NPort x%x deferred x%x flg x%lx " + "Data: x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, +@@ -5318,19 +5289,19 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + acc_plogi = 0; + + if (!test_bit(FC_OFFLINE_MODE, &vport->fc_flag)) +- ndlp->nlp_flag |= NLP_UNREG_INP; ++ set_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + + lpfc_printf_vlog(vport, KERN_INFO, + LOG_NODE | LOG_DISCOVERY, + "1433 unreg_rpi UNREG x%x on " +- "NPort x%x deferred flg x%x " ++ "NPort x%x deferred flg x%lx " + "Data:x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp); + + rc = lpfc_sli_issue_mbox(phba, mbox, MBX_NOWAIT); + if (rc == MBX_NOT_FINISHED) { +- ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + mempool_free(mbox, phba->mbox_mem_pool); + acc_plogi = 1; + lpfc_nlp_put(ndlp); +@@ -5340,7 +5311,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + LOG_NODE | LOG_DISCOVERY, + "1444 Failed to allocate mempool " + "unreg_rpi UNREG x%x, " +- "DID x%x, flag x%x, " ++ "DID x%x, flag x%lx, " + "ndlp x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp); +@@ -5350,7 +5321,7 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + * not unloading. + */ + if (!test_bit(FC_UNLOADING, &vport->load_flag)) { +- ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + lpfc_issue_els_logo(vport, ndlp, 0); + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, +@@ -5363,13 +5334,13 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + out: + if (phba->sli_rev != LPFC_SLI_REV4) + ndlp->nlp_rpi = 0; +- ndlp->nlp_flag &= ~NLP_RPI_REGISTERED; +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; ++ clear_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + if (acc_plogi) +- ndlp->nlp_flag &= ~NLP_LOGO_ACC; ++ clear_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + return 1; + } +- ndlp->nlp_flag &= ~NLP_LOGO_ACC; ++ clear_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + return 0; + } + +@@ -5397,7 +5368,7 @@ lpfc_unreg_hba_rpis(struct lpfc_hba *phba) + for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { + spin_lock_irqsave(&vports[i]->fc_nodes_list_lock, iflags); + list_for_each_entry(ndlp, &vports[i]->fc_nodes, nlp_listp) { +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { ++ if (test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) { + /* The mempool_alloc might sleep */ + spin_unlock_irqrestore(&vports[i]->fc_nodes_list_lock, + iflags); +@@ -5485,7 +5456,7 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + /* Cleanup node for NPort */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, + "0900 Cleanup node for NPort x%x " +- "Data: x%x x%x x%x\n", ++ "Data: x%lx x%x x%x\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, ndlp->nlp_rpi); + lpfc_dequeue_node(vport, ndlp); +@@ -5530,9 +5501,7 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + + lpfc_els_abort(phba, ndlp); + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + + ndlp->nlp_last_elscmd = 0; + del_timer_sync(&ndlp->nlp_delayfunc); +@@ -5615,7 +5584,7 @@ __lpfc_findnode_did(struct lpfc_vport *vport, 
uint32_t did) + ); + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, + "0929 FIND node DID " +- "Data: x%px x%x x%x x%x x%x x%px\n", ++ "Data: x%px x%x x%lx x%x x%x x%px\n", + ndlp, ndlp->nlp_DID, + ndlp->nlp_flag, data1, ndlp->nlp_rpi, + ndlp->active_rrqs_xri_bitmap); +@@ -5668,7 +5637,7 @@ lpfc_findnode_mapped(struct lpfc_vport *vport) + iflags); + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, + "2025 FIND node DID MAPPED " +- "Data: x%px x%x x%x x%x x%px\n", ++ "Data: x%px x%x x%lx x%x x%px\n", + ndlp, ndlp->nlp_DID, + ndlp->nlp_flag, data1, + ndlp->active_rrqs_xri_bitmap); +@@ -5702,13 +5671,11 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6453 Setup New Node 2B_DISC x%x " +- "Data:x%x x%x x%lx\n", ++ "Data:x%lx x%x x%lx\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, vport->fc_flag); + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + return ndlp; + } + +@@ -5727,7 +5694,7 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6455 Setup RSCN Node 2B_DISC x%x " +- "Data:x%x x%x x%lx\n", ++ "Data:x%lx x%x x%lx\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, vport->fc_flag); + +@@ -5745,13 +5712,11 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + NLP_EVT_DEVICE_RECOVERY); + } + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + } else { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6456 Skip Setup RSCN Node x%x " +- "Data:x%x x%x x%lx\n", ++ "Data:x%lx x%x x%lx\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, vport->fc_flag); + ndlp = NULL; +@@ -5759,7 +5724,7 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + } else { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "6457 Setup Active Node 2B_DISC x%x " +- "Data:x%x x%x x%lx\n", ++ "Data:x%lx x%x x%lx\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_state, vport->fc_flag); + +@@ -5770,7 +5735,7 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + if (ndlp->nlp_state == NLP_STE_ADISC_ISSUE || + ndlp->nlp_state == NLP_STE_PLOGI_ISSUE || + (!vport->phba->nvmet_support && +- ndlp->nlp_flag & NLP_RCV_PLOGI)) ++ test_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag))) + return NULL; + + if (vport->phba->nvmet_support) +@@ -5780,10 +5745,7 @@ lpfc_setup_disc_node(struct lpfc_vport *vport, uint32_t did) + * allows for rediscovery + */ + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + } + return ndlp; + } +@@ -6154,7 +6116,7 @@ lpfc_disc_timeout_handler(struct lpfc_vport *vport) + /* Clean up the ndlp on Fabric connections */ + lpfc_drop_node(vport, ndlp); + +- } else if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) { ++ } else if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) { + /* Fail outstanding IO now since device + * is marked for PLOGI. 
+ */ +@@ -6367,11 +6329,11 @@ lpfc_mbx_cmpl_fdmi_reg_login(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + + if (phba->sli_rev < LPFC_SLI_REV4) + ndlp->nlp_rpi = mb->un.varWords[0]; +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + ndlp->nlp_type |= NLP_FABRIC; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_UNMAPPED_NODE); + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, +- "0004 rpi:%x DID:%x flg:%x %d x%px\n", ++ "0004 rpi:%x DID:%x flg:%lx %d x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref), + ndlp); +@@ -6421,7 +6383,7 @@ __lpfc_find_node(struct lpfc_vport *vport, node_filter filter, void *param) + if (filter(ndlp, param)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE_VERBOSE, + "3185 FIND node filter %ps DID " +- "ndlp x%px did x%x flg x%x st x%x " ++ "ndlp x%px did x%x flg x%lx st x%x " + "xri x%x type x%x rpi x%x\n", + filter, ndlp, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp->nlp_state, +@@ -6559,7 +6521,7 @@ lpfc_nlp_init(struct lpfc_vport *vport, uint32_t did) + lpfc_printf_vlog(vport, KERN_INFO, + LOG_ELS | LOG_NODE | LOG_DISCOVERY, + "0007 Init New ndlp x%px, rpi:x%x DID:x%x " +- "flg:x%x refcnt:%d\n", ++ "flg:x%lx refcnt:%d\n", + ndlp, ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, kref_read(&ndlp->kref)); + +@@ -6591,7 +6553,7 @@ lpfc_nlp_release(struct kref *kref) + struct lpfc_vport *vport = ndlp->vport; + + lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE, +- "node release: did:x%x flg:x%x type:x%x", ++ "node release: did:x%x flg:x%lx type:x%x", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_type); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE, +@@ -6637,7 +6599,7 @@ lpfc_nlp_get(struct lpfc_nodelist *ndlp) + + if (ndlp) { + lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE, +- "node get: did:x%x flg:x%x refcnt:x%x", ++ "node get: did:x%x flg:x%lx refcnt:x%x", + ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref)); + +@@ -6669,7 +6631,7 @@ lpfc_nlp_put(struct lpfc_nodelist *ndlp) + { + if (ndlp) { + lpfc_debugfs_disc_trc(ndlp->vport, LPFC_DISC_TRC_NODE, +- "node put: did:x%x flg:x%x refcnt:x%x", ++ "node put: did:x%x flg:x%lx refcnt:x%x", + ndlp->nlp_DID, ndlp->nlp_flag, + kref_read(&ndlp->kref)); + } else { +@@ -6722,11 +6684,12 @@ lpfc_fcf_inuse(struct lpfc_hba *phba) + spin_unlock_irqrestore(&vports[i]->fc_nodes_list_lock, + iflags); + goto out; +- } else if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { ++ } else if (test_bit(NLP_RPI_REGISTERED, ++ &ndlp->nlp_flag)) { + ret = 1; + lpfc_printf_log(phba, KERN_INFO, + LOG_NODE | LOG_DISCOVERY, +- "2624 RPI %x DID %x flag %x " ++ "2624 RPI %x DID %x flag %lx " + "still logged in\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag); +diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c +index 986e2898b10b8..3ddcaa864f075 100644 +--- a/drivers/scsi/lpfc/lpfc_init.c ++++ b/drivers/scsi/lpfc/lpfc_init.c +@@ -3092,7 +3092,8 @@ lpfc_cleanup(struct lpfc_vport *vport) + lpfc_printf_vlog(ndlp->vport, KERN_ERR, + LOG_DISCOVERY, + "0282 did:x%x ndlp:x%px " +- "refcnt:%d xflags x%x nflag x%x\n", ++ "refcnt:%d xflags x%x " ++ "nflag x%lx\n", + ndlp->nlp_DID, (void *)ndlp, + kref_read(&ndlp->kref), + ndlp->fc4_xpt_flags, +@@ -3413,7 +3414,7 @@ lpfc_sli4_node_rpi_restore(struct lpfc_hba *phba) + LOG_NODE | LOG_DISCOVERY, + "0099 RPI alloc error for " + "ndlp x%px DID:x%06x " +- "flg:x%x\n", ++ "flg:x%lx\n", + ndlp, ndlp->nlp_DID, + ndlp->nlp_flag); + continue; +@@ -3422,7 +3423,7 @@ lpfc_sli4_node_rpi_restore(struct lpfc_hba *phba) + 
lpfc_printf_vlog(ndlp->vport, KERN_INFO, + LOG_NODE | LOG_DISCOVERY, + "0009 Assign RPI x%x to ndlp x%px " +- "DID:x%06x flg:x%x\n", ++ "DID:x%06x flg:x%lx\n", + ndlp->nlp_rpi, ndlp, ndlp->nlp_DID, + ndlp->nlp_flag); + } +@@ -3826,15 +3827,12 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action) + &vports[i]->fc_nodes, + nlp_listp) { + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); +- ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + if (offline || hba_pci_err) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_UNREG_INP | +- NLP_RPI_REGISTERED); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_UNREG_INP, ++ &ndlp->nlp_flag); ++ clear_bit(NLP_RPI_REGISTERED, ++ &ndlp->nlp_flag); + } + + if (ndlp->nlp_type & NLP_FABRIC) { +@@ -6911,9 +6909,7 @@ lpfc_sli4_async_fip_evt(struct lpfc_hba *phba, + */ + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_FDISC; + vport->port_state = LPFC_FDISC; + } else { +diff --git a/drivers/scsi/lpfc/lpfc_nportdisc.c b/drivers/scsi/lpfc/lpfc_nportdisc.c +index 4574716c8764f..4d88cfe71caed 100644 +--- a/drivers/scsi/lpfc/lpfc_nportdisc.c ++++ b/drivers/scsi/lpfc/lpfc_nportdisc.c +@@ -65,7 +65,7 @@ lpfc_check_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + struct lpfc_name *nn, struct lpfc_name *pn) + { + /* First, we MUST have a RPI registered */ +- if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED)) ++ if (!test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) + return 0; + + /* Compare the ADISC rsp WWNN / WWPN matches our internal node +@@ -239,7 +239,7 @@ lpfc_els_abort(struct lpfc_hba *phba, struct lpfc_nodelist *ndlp) + /* Abort outstanding I/O on NPort */ + lpfc_printf_vlog(ndlp->vport, KERN_INFO, LOG_DISCOVERY, + "2819 Abort outstanding I/O on NPort x%x " +- "Data: x%x x%x x%x\n", ++ "Data: x%lx x%x x%x\n", + ndlp->nlp_DID, ndlp->nlp_flag, ndlp->nlp_state, + ndlp->nlp_rpi); + /* Clean up all fabric IOs first.*/ +@@ -340,7 +340,7 @@ lpfc_defer_plogi_acc(struct lpfc_hba *phba, LPFC_MBOXQ_t *login_mbox) + + /* Now process the REG_RPI cmpl */ + lpfc_mbx_cmpl_reg_login(phba, login_mbox); +- ndlp->nlp_flag &= ~NLP_ACC_REGLOGIN; ++ clear_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag); + kfree(save_iocb); + } + +@@ -404,7 +404,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + /* PLOGI chkparm OK */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, +- "0114 PLOGI chkparm OK Data: x%x x%x x%x " ++ "0114 PLOGI chkparm OK Data: x%x x%x x%lx " + "x%x x%x x%lx\n", + ndlp->nlp_DID, ndlp->nlp_state, ndlp->nlp_flag, + ndlp->nlp_rpi, vport->port_state, +@@ -429,7 +429,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* if already logged in, do implicit logout */ + switch (ndlp->nlp_state) { + case NLP_STE_NPR_NODE: +- if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) ++ if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) + break; + fallthrough; + case NLP_STE_REG_LOGIN_ISSUE: +@@ -449,7 +449,7 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR); + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; + ndlp->nlp_nvme_info &= ~NLP_NVME_NSLER; +- ndlp->nlp_flag &= ~NLP_FIRSTBURST; ++ clear_bit(NLP_FIRSTBURST, &ndlp->nlp_flag); + + lpfc_els_rsp_acc(vport, ELS_CMD_PLOGI, cmdiocb, + ndlp, NULL); +@@ -480,7 +480,7 @@ 
lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_type &= ~(NLP_NVME_TARGET | NLP_NVME_INITIATOR); + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; + ndlp->nlp_nvme_info &= ~NLP_NVME_NSLER; +- ndlp->nlp_flag &= ~NLP_FIRSTBURST; ++ clear_bit(NLP_FIRSTBURST, &ndlp->nlp_flag); + + login_mbox = NULL; + link_mbox = NULL; +@@ -552,13 +552,13 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + lpfc_can_disctmo(vport); + } + +- ndlp->nlp_flag &= ~NLP_SUPPRESS_RSP; ++ clear_bit(NLP_SUPPRESS_RSP, &ndlp->nlp_flag); + if ((phba->sli.sli_flag & LPFC_SLI_SUPPRESS_RSP) && + sp->cmn.valid_vendor_ver_level) { + vid = be32_to_cpu(sp->un.vv.vid); + flag = be32_to_cpu(sp->un.vv.flags); + if ((vid == LPFC_VV_EMLX_ID) && (flag & LPFC_VV_SUPPRESS_RSP)) +- ndlp->nlp_flag |= NLP_SUPPRESS_RSP; ++ set_bit(NLP_SUPPRESS_RSP, &ndlp->nlp_flag); + } + + login_mbox = mempool_alloc(phba->mbox_mem_pool, GFP_KERNEL); +@@ -627,10 +627,9 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + * this ELS request. The only way to do this is + * to register, then unregister the RPI. + */ +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= (NLP_RM_DFLT_RPI | NLP_ACC_REGLOGIN | +- NLP_RCV_PLOGI); +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_RM_DFLT_RPI, &ndlp->nlp_flag); ++ set_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag); ++ set_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag); + } + + stat.un.b.lsRjtRsnCode = LSRJT_INVALID_CMD; +@@ -665,9 +664,8 @@ lpfc_rcv_plogi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + login_mbox->ctx_u.save_iocb = save_iocb; /* For PLOGI ACC */ + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= (NLP_ACC_REGLOGIN | NLP_RCV_PLOGI); +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag); ++ set_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag); + + /* Start the ball rolling by issuing REG_LOGIN here */ + rc = lpfc_sli_issue_mbox(phba, login_mbox, MBX_NOWAIT); +@@ -797,7 +795,7 @@ lpfc_rcv_padisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + */ + if (ndlp->nlp_type & (NLP_FCP_TARGET | NLP_NVME_TARGET)) { + if ((ndlp->nlp_state != NLP_STE_MAPPED_NODE) && +- !(ndlp->nlp_flag & NLP_NPR_ADISC)) ++ !test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) + lpfc_nlp_set_state(vport, ndlp, + NLP_STE_MAPPED_NODE); + } +@@ -814,9 +812,7 @@ lpfc_rcv_padisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* 1 sec timeout */ + mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000)); + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +@@ -835,9 +831,7 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* Only call LOGO ACC for first LOGO, this avoids sending unnecessary + * PLOGIs during LOGO storms from a device. 
+ */ +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + if (els_cmd == ELS_CMD_PRLO) + lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL); + else +@@ -890,9 +884,7 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + */ + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_FDISC; + vport->port_state = LPFC_FDISC; + } else { +@@ -915,14 +907,12 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_state <= NLP_STE_PRLI_ISSUE)) { + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000 * 1)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + lpfc_printf_vlog(vport, KERN_INFO, + LOG_NODE | LOG_ELS | LOG_DISCOVERY, + "3204 Start nlpdelay on DID x%06x " +- "nflag x%x lastels x%x ref cnt %u", ++ "nflag x%lx lastels x%x ref cnt %u", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_last_elscmd, + kref_read(&ndlp->kref)); +@@ -935,9 +925,7 @@ lpfc_rcv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_prev_state = ndlp->nlp_state; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + /* The driver has to wait until the ACC completes before it continues + * processing the LOGO. The action will resume in + * lpfc_cmpl_els_logo_acc routine. 
Since part of processing includes an +@@ -978,7 +966,7 @@ lpfc_rcv_prli_support_check(struct lpfc_vport *vport, + out: + lpfc_printf_vlog(vport, KERN_WARNING, LOG_DISCOVERY, + "6115 Rcv PRLI (%x) check failed: ndlp rpi %d " +- "state x%x flags x%x port_type: x%x " ++ "state x%x flags x%lx port_type: x%x " + "npr->initfcn: x%x npr->tgtfcn: x%x\n", + cmd, ndlp->nlp_rpi, ndlp->nlp_state, + ndlp->nlp_flag, vport->port_type, +@@ -1020,7 +1008,7 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + if (npr->prliType == PRLI_NVME_TYPE) + ndlp->nlp_type |= NLP_NVME_TARGET; + if (npr->writeXferRdyDis) +- ndlp->nlp_flag |= NLP_FIRSTBURST; ++ set_bit(NLP_FIRSTBURST, &ndlp->nlp_flag); + } + if (npr->Retry && ndlp->nlp_type & + (NLP_FCP_INITIATOR | NLP_FCP_TARGET)) +@@ -1057,7 +1045,7 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + roles |= FC_RPORT_ROLE_FCP_TARGET; + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_RPORT, +- "rport rolechg: role:x%x did:x%x flg:x%x", ++ "rport rolechg: role:x%x did:x%x flg:x%lx", + roles, ndlp->nlp_DID, ndlp->nlp_flag); + + if (vport->cfg_enable_fc4_type != LPFC_ENABLE_NVME) +@@ -1068,10 +1056,8 @@ lpfc_rcv_prli(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + static uint32_t + lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + { +- if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ if (!test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) { ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + return 0; + } + +@@ -1081,16 +1067,12 @@ lpfc_disc_set_adisc(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + (test_bit(FC_RSCN_MODE, &vport->fc_flag) || + ((ndlp->nlp_fcp_info & NLP_FCP_2_DEVICE) && + (ndlp->nlp_type & NLP_FCP_TARGET)))) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + return 1; + } + } + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + lpfc_unreg_rpi(vport, ndlp); + return 0; + } +@@ -1115,10 +1097,10 @@ lpfc_release_rpi(struct lpfc_hba *phba, struct lpfc_vport *vport, + /* If there is already an UNREG in progress for this ndlp, + * no need to queue up another one. 
+ */ +- if (ndlp->nlp_flag & NLP_UNREG_INP) { ++ if (test_bit(NLP_UNREG_INP, &ndlp->nlp_flag)) { + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1435 release_rpi SKIP UNREG x%x on " +- "NPort x%x deferred x%x flg x%x " ++ "NPort x%x deferred x%x flg x%lx " + "Data: x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, +@@ -1143,11 +1125,11 @@ lpfc_release_rpi(struct lpfc_hba *phba, struct lpfc_vport *vport, + + if (((ndlp->nlp_DID & Fabric_DID_MASK) != Fabric_DID_MASK) && + (!test_bit(FC_OFFLINE_MODE, &vport->fc_flag))) +- ndlp->nlp_flag |= NLP_UNREG_INP; ++ set_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "1437 release_rpi UNREG x%x " +- "on NPort x%x flg x%x\n", ++ "on NPort x%x flg x%lx\n", + ndlp->nlp_rpi, ndlp->nlp_DID, ndlp->nlp_flag); + + rc = lpfc_sli_issue_mbox(phba, pmb, MBX_NOWAIT); +@@ -1175,7 +1157,7 @@ lpfc_disc_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + } + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0271 Illegal State Transition: node x%x " +- "event x%x, state x%x Data: x%x x%x\n", ++ "event x%x, state x%x Data: x%x x%lx\n", + ndlp->nlp_DID, evt, ndlp->nlp_state, ndlp->nlp_rpi, + ndlp->nlp_flag); + return ndlp->nlp_state; +@@ -1190,13 +1172,12 @@ lpfc_cmpl_plogi_illegal(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + * working on the same NPortID, do nothing for this thread + * to stop it. + */ +- if (!(ndlp->nlp_flag & NLP_RCV_PLOGI)) { ++ if (!test_bit(NLP_RCV_PLOGI, &ndlp->nlp_flag)) + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0272 Illegal State Transition: node x%x " +- "event x%x, state x%x Data: x%x x%x\n", ++ "event x%x, state x%x Data: x%x x%lx\n", + ndlp->nlp_DID, evt, ndlp->nlp_state, + ndlp->nlp_rpi, ndlp->nlp_flag); +- } + return ndlp->nlp_state; + } + +@@ -1230,9 +1211,7 @@ lpfc_rcv_logo_unused_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + { + struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); + + return ndlp->nlp_state; +@@ -1290,11 +1269,9 @@ lpfc_rcv_plogi_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + NULL); + } else { + if (lpfc_rcv_plogi(vport, ndlp, cmdiocb) && +- (ndlp->nlp_flag & NLP_NPR_2B_DISC) && +- (vport->num_disc_nodes)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag) && ++ vport->num_disc_nodes) { ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + /* Check if there are more PLOGIs to be sent */ + lpfc_more_plogi(vport); + if (vport->num_disc_nodes == 0) { +@@ -1356,9 +1333,7 @@ lpfc_rcv_els_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + /* Put ndlp in npr state set plogi timer for 1 sec */ + mod_timer(&ndlp->nlp_delayfunc, jiffies + msecs_to_jiffies(1000 * 1)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +@@ -1389,7 +1364,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + + ulp_status = get_job_ulpstatus(phba, rspiocb); + +- if (ndlp->nlp_flag & NLP_ACC_REGLOGIN) { ++ if (test_bit(NLP_ACC_REGLOGIN, &ndlp->nlp_flag)) { + 
/* Recovery from PLOGI collision logic */ + return ndlp->nlp_state; + } +@@ -1418,7 +1393,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + goto out; + /* PLOGI chkparm OK */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_ELS, +- "0121 PLOGI chkparm OK Data: x%x x%x x%x x%x\n", ++ "0121 PLOGI chkparm OK Data: x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); + if (vport->cfg_fcp_class == 2 && (sp->cls2.classValid)) +@@ -1446,14 +1421,14 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + ed_tov = (phba->fc_edtov + 999999) / 1000000; + } + +- ndlp->nlp_flag &= ~NLP_SUPPRESS_RSP; ++ clear_bit(NLP_SUPPRESS_RSP, &ndlp->nlp_flag); + if ((phba->sli.sli_flag & LPFC_SLI_SUPPRESS_RSP) && + sp->cmn.valid_vendor_ver_level) { + vid = be32_to_cpu(sp->un.vv.vid); + flag = be32_to_cpu(sp->un.vv.flags); + if ((vid == LPFC_VV_EMLX_ID) && + (flag & LPFC_VV_SUPPRESS_RSP)) +- ndlp->nlp_flag |= NLP_SUPPRESS_RSP; ++ set_bit(NLP_SUPPRESS_RSP, &ndlp->nlp_flag); + } + + /* +@@ -1476,7 +1451,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + LOG_TRACE_EVENT, + "0133 PLOGI: no memory " + "for config_link " +- "Data: x%x x%x x%x x%x\n", ++ "Data: x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); + goto out; +@@ -1500,7 +1475,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + if (!mbox) { + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0018 PLOGI: no memory for reg_login " +- "Data: x%x x%x x%x x%x\n", ++ "Data: x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); + goto out; +@@ -1520,7 +1495,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + mbox->mbox_cmpl = lpfc_mbx_cmpl_fdmi_reg_login; + break; + default: +- ndlp->nlp_flag |= NLP_REG_LOGIN_SEND; ++ set_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + mbox->mbox_cmpl = lpfc_mbx_cmpl_reg_login; + } + +@@ -1535,8 +1510,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + NLP_STE_REG_LOGIN_ISSUE); + return ndlp->nlp_state; + } +- if (ndlp->nlp_flag & NLP_REG_LOGIN_SEND) +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + /* decrement node reference count to the failed mbox + * command + */ +@@ -1544,7 +1518,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + lpfc_mbox_rsrc_cleanup(phba, mbox, MBOX_THD_UNLOCKED); + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0134 PLOGI: cannot issue reg_login " +- "Data: x%x x%x x%x x%x\n", ++ "Data: x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); + } else { +@@ -1552,7 +1526,7 @@ lpfc_cmpl_plogi_plogi_issue(struct lpfc_vport *vport, + + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0135 PLOGI: cannot format reg_login " +- "Data: x%x x%x x%x x%x\n", ++ "Data: x%x x%x x%lx x%x\n", + ndlp->nlp_DID, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); + } +@@ -1605,18 +1579,15 @@ static uint32_t + lpfc_device_rm_plogi_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + void *arg, uint32_t evt) + { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NODEV_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { ++ set_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + return ndlp->nlp_state; +- } else { +- /* software abort outstanding PLOGI */ +- lpfc_els_abort(vport->phba, ndlp); +- +- lpfc_drop_node(vport, ndlp); +- return NLP_STE_FREED_NODE; + } ++ /* software abort outstanding 
PLOGI */ ++ lpfc_els_abort(vport->phba, ndlp); ++ ++ lpfc_drop_node(vport, ndlp); ++ return NLP_STE_FREED_NODE; + } + + static uint32_t +@@ -1636,9 +1607,8 @@ lpfc_device_recov_plogi_issue(struct lpfc_vport *vport, + + ndlp->nlp_prev_state = NLP_STE_PLOGI_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + + return ndlp->nlp_state; + } +@@ -1656,10 +1626,7 @@ lpfc_rcv_plogi_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + cmdiocb = (struct lpfc_iocbq *) arg; + + if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; +- spin_unlock_irq(&ndlp->lock); ++ if (test_and_clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { + if (vport->num_disc_nodes) + lpfc_more_adisc(vport); + } +@@ -1748,9 +1715,7 @@ lpfc_cmpl_adisc_adisc_issue(struct lpfc_vport *vport, + /* 1 sec timeout */ + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + + ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE; +@@ -1789,18 +1754,15 @@ static uint32_t + lpfc_device_rm_adisc_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + void *arg, uint32_t evt) + { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NODEV_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { ++ set_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + return ndlp->nlp_state; +- } else { +- /* software abort outstanding ADISC */ +- lpfc_els_abort(vport->phba, ndlp); +- +- lpfc_drop_node(vport, ndlp); +- return NLP_STE_FREED_NODE; + } ++ /* software abort outstanding ADISC */ ++ lpfc_els_abort(vport->phba, ndlp); ++ ++ lpfc_drop_node(vport, ndlp); ++ return NLP_STE_FREED_NODE; + } + + static uint32_t +@@ -1820,9 +1782,8 @@ lpfc_device_recov_adisc_issue(struct lpfc_vport *vport, + + ndlp->nlp_prev_state = NLP_STE_ADISC_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_set_adisc(vport, ndlp); + return ndlp->nlp_state; + } +@@ -1856,7 +1817,7 @@ lpfc_rcv_prli_reglogin_issue(struct lpfc_vport *vport, + * transition to UNMAPPED provided the RPI has completed + * registration. 
+ */ +- if (ndlp->nlp_flag & NLP_RPI_REGISTERED) { ++ if (test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag)) { + lpfc_rcv_prli(vport, ndlp, cmdiocb); + lpfc_els_rsp_prli_acc(vport, cmdiocb, ndlp); + } else { +@@ -1895,7 +1856,7 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport, + if ((mb = phba->sli.mbox_active)) { + if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && + (ndlp == mb->ctx_ndlp)) { +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + mb->ctx_ndlp = NULL; + mb->mbox_cmpl = lpfc_sli_def_mbox_cmpl; +@@ -1906,7 +1867,7 @@ lpfc_rcv_logo_reglogin_issue(struct lpfc_vport *vport, + list_for_each_entry_safe(mb, nextmb, &phba->sli.mboxq, list) { + if ((mb->u.mb.mbxCommand == MBX_REG_LOGIN64) && + (ndlp == mb->ctx_ndlp)) { +- ndlp->nlp_flag &= ~NLP_REG_LOGIN_SEND; ++ clear_bit(NLP_REG_LOGIN_SEND, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + list_del(&mb->list); + phba->sli.mboxq_cnt--; +@@ -1976,9 +1937,7 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport, + /* Put ndlp in npr state set plogi timer for 1 sec */ + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000 * 1)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + + lpfc_issue_els_logo(vport, ndlp, 0); +@@ -1989,7 +1948,7 @@ lpfc_cmpl_reglogin_reglogin_issue(struct lpfc_vport *vport, + if (phba->sli_rev < LPFC_SLI_REV4) + ndlp->nlp_rpi = mb->un.varWords[0]; + +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); + + /* Only if we are not a fabric nport do we issue PRLI */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, +@@ -2061,15 +2020,12 @@ lpfc_device_rm_reglogin_issue(struct lpfc_vport *vport, + void *arg, + uint32_t evt) + { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NODEV_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { ++ set_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + return ndlp->nlp_state; +- } else { +- lpfc_drop_node(vport, ndlp); +- return NLP_STE_FREED_NODE; + } ++ lpfc_drop_node(vport, ndlp); ++ return NLP_STE_FREED_NODE; + } + + static uint32_t +@@ -2084,17 +2040,16 @@ lpfc_device_recov_reglogin_issue(struct lpfc_vport *vport, + + ndlp->nlp_prev_state = NLP_STE_REG_LOGIN_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); + + /* If we are a target we won't immediately transition into PRLI, + * so if REG_LOGIN already completed we don't need to ignore it. 
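(The mechanical pattern behind every hunk in this patch, as a minimal hedged sketch — demo_node and DEMO_DELAY_TMO are illustrative stand-ins, not the real lpfc structures: nlp_flag turns from a lock-protected u32 into an unsigned long, flag traffic moves to the atomic bitops from <linux/bitops.h>, and because the type widens, every log format printing it changes from x%x to x%lx.)

    #include <linux/bitops.h>
    #include <linux/spinlock.h>

    enum { DEMO_DELAY_TMO };                 /* bit index, NLP_*-style */

    struct demo_node {
            spinlock_t lock;
            unsigned long flags;             /* was: u32, guarded by lock */
    };

    static void demo_set(struct demo_node *n)
    {
            /* Old style needed the node lock around a plain OR:
             *     spin_lock_irq(&n->lock);
             *     n->flags |= BIT(DEMO_DELAY_TMO);
             *     spin_unlock_irq(&n->lock);
             * New style is a single atomic op, no lock taken:
             */
            set_bit(DEMO_DELAY_TMO, &n->flags);

            pr_info("flags x%lx\n", n->flags);   /* x%x -> x%lx */
    }

    static bool demo_test_clear(struct demo_node *n)
    {
            /* "if set, clear it" collapses into one atomic step. */
            return test_and_clear_bit(DEMO_DELAY_TMO, &n->flags);
    }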
+ */ +- if (!(ndlp->nlp_flag & NLP_RPI_REGISTERED) || ++ if (!test_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag) || + !vport->phba->nvmet_support) +- ndlp->nlp_flag |= NLP_IGNR_REG_CMPL; ++ set_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag); + +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_set_adisc(vport, ndlp); + return ndlp->nlp_state; + } +@@ -2228,7 +2183,8 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + if (npr->targetFunc) { + ndlp->nlp_type |= NLP_FCP_TARGET; + if (npr->writeXferRdyDis) +- ndlp->nlp_flag |= NLP_FIRSTBURST; ++ set_bit(NLP_FIRSTBURST, ++ &ndlp->nlp_flag); + } + if (npr->Retry) + ndlp->nlp_fcp_info |= NLP_FCP_2_DEVICE; +@@ -2272,7 +2228,7 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* Both sides support FB. The target's first + * burst size is a 512 byte encoded value. + */ +- ndlp->nlp_flag |= NLP_FIRSTBURST; ++ set_bit(NLP_FIRSTBURST, &ndlp->nlp_flag); + ndlp->nvme_fb_size = bf_get_be32(prli_fb_sz, + nvpr); + +@@ -2287,7 +2243,7 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + lpfc_printf_vlog(vport, KERN_INFO, LOG_NVME_DISC, + "6029 NVME PRLI Cmpl w1 x%08x " +- "w4 x%08x w5 x%08x flag x%x, " ++ "w4 x%08x w5 x%08x flag x%lx, " + "fcp_info x%x nlp_type x%x\n", + be32_to_cpu(nvpr->word1), + be32_to_cpu(nvpr->word4), +@@ -2299,9 +2255,7 @@ lpfc_cmpl_prli_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + (vport->port_type == LPFC_NPIV_PORT) && + vport->cfg_restrict_login) { + out: +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_TARGET_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_TARGET_REMOVE, &ndlp->nlp_flag); + lpfc_issue_els_logo(vport, ndlp, 0); + + ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE; +@@ -2353,18 +2307,15 @@ static uint32_t + lpfc_device_rm_prli_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + void *arg, uint32_t evt) + { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NODEV_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { ++ set_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + return ndlp->nlp_state; +- } else { +- /* software abort outstanding PLOGI */ +- lpfc_els_abort(vport->phba, ndlp); +- +- lpfc_drop_node(vport, ndlp); +- return NLP_STE_FREED_NODE; + } ++ /* software abort outstanding PLOGI */ ++ lpfc_els_abort(vport->phba, ndlp); ++ ++ lpfc_drop_node(vport, ndlp); ++ return NLP_STE_FREED_NODE; + } + + +@@ -2401,9 +2352,8 @@ lpfc_device_recov_prli_issue(struct lpfc_vport *vport, + + ndlp->nlp_prev_state = NLP_STE_PRLI_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_set_adisc(vport, ndlp); + return ndlp->nlp_state; + } +@@ -2442,9 +2392,7 @@ lpfc_rcv_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + { + struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *)arg; + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); + return ndlp->nlp_state; + } +@@ -2483,9 
+2431,8 @@ lpfc_cmpl_logo_logo_issue(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + { + ndlp->nlp_prev_state = NLP_STE_LOGO_ISSUE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + lpfc_disc_set_adisc(vport, ndlp); + return ndlp->nlp_state; + } +@@ -2591,8 +2538,9 @@ lpfc_device_recov_unmap_node(struct lpfc_vport *vport, + { + ndlp->nlp_prev_state = NLP_STE_UNMAPPED_NODE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); + ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); + spin_unlock_irq(&ndlp->lock); + lpfc_disc_set_adisc(vport, ndlp); +@@ -2653,9 +2601,7 @@ lpfc_rcv_prlo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + lpfc_sli_abort_iocb(vport, ndlp->nlp_sid, 0, LPFC_CTX_TGT); + + /* Send PRLO_ACC */ +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + lpfc_els_rsp_acc(vport, ELS_CMD_PRLO, cmdiocb, ndlp, NULL); + + /* Save ELS_CMD_PRLO as the last elscmd and then set to NPR. +@@ -2665,7 +2611,7 @@ lpfc_rcv_prlo_mapped_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ndlp->nlp_prev_state = ndlp->nlp_state; + + lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_ELS | LOG_DISCOVERY, +- "3422 DID x%06x nflag x%x lastels x%x ref cnt %u\n", ++ "3422 DID x%06x nflag x%lx lastels x%x ref cnt %u\n", + ndlp->nlp_DID, ndlp->nlp_flag, + ndlp->nlp_last_elscmd, + kref_read(&ndlp->kref)); +@@ -2685,8 +2631,9 @@ lpfc_device_recov_mapped_node(struct lpfc_vport *vport, + + ndlp->nlp_prev_state = NLP_STE_MAPPED_NODE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_NPR_NODE); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); + ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); + spin_unlock_irq(&ndlp->lock); + return ndlp->nlp_state; +@@ -2699,16 +2646,16 @@ lpfc_rcv_plogi_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; + + /* Ignore PLOGI if we have an outstanding LOGO */ +- if (ndlp->nlp_flag & (NLP_LOGO_SND | NLP_LOGO_ACC)) ++ if (test_bit(NLP_LOGO_SND, &ndlp->nlp_flag) || ++ test_bit(NLP_LOGO_ACC, &ndlp->nlp_flag)) + return ndlp->nlp_state; + if (lpfc_rcv_plogi(vport, ndlp, cmdiocb)) { + lpfc_cancel_retry_delay_tmo(vport, ndlp); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NPR_ADISC | NLP_NPR_2B_DISC); +- spin_unlock_irq(&ndlp->lock); +- } else if (!(ndlp->nlp_flag & NLP_NPR_2B_DISC)) { ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); ++ } else if (!test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { + /* send PLOGI immediately, move to PLOGI issue state */ +- if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { ++ if (!test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) { + ndlp->nlp_prev_state = NLP_STE_NPR_NODE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE); + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +@@ -2729,14 +2676,14 @@ lpfc_rcv_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + 
stat.un.b.lsRjtRsnCodeExp = LSEXP_NOTHING_MORE; + lpfc_els_rsp_reject(vport, stat.un.lsRjtError, cmdiocb, ndlp, NULL); + +- if (!(ndlp->nlp_flag & NLP_DELAY_TMO)) { ++ if (!test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) { + /* + * ADISC nodes will be handled in regular discovery path after + * receiving response from NS. + * + * For other nodes, Send PLOGI to trigger an implicit LOGO. + */ +- if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) { ++ if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) { + ndlp->nlp_prev_state = NLP_STE_NPR_NODE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE); + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +@@ -2767,15 +2714,15 @@ lpfc_rcv_padisc_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + * or discovery in progress for this node. Starting discovery + * here will affect the counting of discovery threads. + */ +- if (!(ndlp->nlp_flag & NLP_DELAY_TMO) && +- !(ndlp->nlp_flag & NLP_NPR_2B_DISC)) { ++ if (!test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag) && ++ !test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { + /* + * ADISC nodes will be handled in regular discovery path after + * receiving response from NS. + * + * For other nodes, Send PLOGI to trigger an implicit LOGO. + */ +- if (!(ndlp->nlp_flag & NLP_NPR_ADISC)) { ++ if (!test_bit(NLP_NPR_ADISC, &ndlp->nlp_flag)) { + ndlp->nlp_prev_state = NLP_STE_NPR_NODE; + lpfc_nlp_set_state(vport, ndlp, NLP_STE_PLOGI_ISSUE); + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +@@ -2790,24 +2737,18 @@ lpfc_rcv_prlo_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + { + struct lpfc_iocbq *cmdiocb = (struct lpfc_iocbq *) arg; + +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_LOGO_ACC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + + lpfc_els_rsp_acc(vport, ELS_CMD_ACC, cmdiocb, ndlp, NULL); + +- if ((ndlp->nlp_flag & NLP_DELAY_TMO) == 0) { ++ if (!test_bit(NLP_DELAY_TMO, &ndlp->nlp_flag)) { + mod_timer(&ndlp->nlp_delayfunc, + jiffies + msecs_to_jiffies(1000 * 1)); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_DELAY_TMO; +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ set_bit(NLP_DELAY_TMO, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + ndlp->nlp_last_elscmd = ELS_CMD_PLOGI; + } else { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_NPR_ADISC; +- spin_unlock_irq(&ndlp->lock); ++ clear_bit(NLP_NPR_ADISC, &ndlp->nlp_flag); + } + return ndlp->nlp_state; + } +@@ -2844,7 +2785,7 @@ lpfc_cmpl_prli_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + ulp_status = get_job_ulpstatus(phba, rspiocb); + +- if (ulp_status && (ndlp->nlp_flag & NLP_NODEV_REMOVE)) { ++ if (ulp_status && test_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag)) { + lpfc_drop_node(vport, ndlp); + return NLP_STE_FREED_NODE; + } +@@ -2877,7 +2818,7 @@ lpfc_cmpl_adisc_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + + ulp_status = get_job_ulpstatus(phba, rspiocb); + +- if (ulp_status && (ndlp->nlp_flag & NLP_NODEV_REMOVE)) { ++ if (ulp_status && test_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag)) { + lpfc_drop_node(vport, ndlp); + return NLP_STE_FREED_NODE; + } +@@ -2896,12 +2837,11 @@ lpfc_cmpl_reglogin_npr_node(struct lpfc_vport *vport, + /* SLI4 ports have preallocated logical rpis. 
*/ + if (vport->phba->sli_rev < LPFC_SLI_REV4) + ndlp->nlp_rpi = mb->un.varWords[0]; +- ndlp->nlp_flag |= NLP_RPI_REGISTERED; +- if (ndlp->nlp_flag & NLP_LOGO_ACC) { ++ set_bit(NLP_RPI_REGISTERED, &ndlp->nlp_flag); ++ if (test_bit(NLP_LOGO_ACC, &ndlp->nlp_flag)) + lpfc_unreg_rpi(vport, ndlp); +- } + } else { +- if (ndlp->nlp_flag & NLP_NODEV_REMOVE) { ++ if (test_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag)) { + lpfc_drop_node(vport, ndlp); + return NLP_STE_FREED_NODE; + } +@@ -2913,10 +2853,8 @@ static uint32_t + lpfc_device_rm_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + void *arg, uint32_t evt) + { +- if (ndlp->nlp_flag & NLP_NPR_2B_DISC) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_NODEV_REMOVE; +- spin_unlock_irq(&ndlp->lock); ++ if (test_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag)) { ++ set_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); + return ndlp->nlp_state; + } + lpfc_drop_node(vport, ndlp); +@@ -2932,8 +2870,9 @@ lpfc_device_recov_npr_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + return ndlp->nlp_state; + + lpfc_cancel_retry_delay_tmo(vport, ndlp); ++ clear_bit(NLP_NODEV_REMOVE, &ndlp->nlp_flag); ++ clear_bit(NLP_NPR_2B_DISC, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~(NLP_NODEV_REMOVE | NLP_NPR_2B_DISC); + ndlp->nlp_fc4_type &= ~(NLP_FC4_FCP | NLP_FC4_NVME); + spin_unlock_irq(&ndlp->lock); + return ndlp->nlp_state; +@@ -3146,7 +3085,7 @@ lpfc_disc_state_machine(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + /* DSM in event on NPort in state */ + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "0211 DSM in event x%x on NPort x%x in " +- "state %d rpi x%x Data: x%x x%x\n", ++ "state %d rpi x%x Data: x%lx x%x\n", + evt, ndlp->nlp_DID, cur_state, ndlp->nlp_rpi, + ndlp->nlp_flag, data1); + +@@ -3163,12 +3102,12 @@ lpfc_disc_state_machine(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp, + ((uint32_t)ndlp->nlp_type)); + lpfc_printf_vlog(vport, KERN_INFO, LOG_DISCOVERY, + "0212 DSM out state %d on NPort x%x " +- "rpi x%x Data: x%x x%x\n", ++ "rpi x%x Data: x%lx x%x\n", + rc, ndlp->nlp_DID, ndlp->nlp_rpi, ndlp->nlp_flag, + data1); + + lpfc_debugfs_disc_trc(vport, LPFC_DISC_TRC_DSM, +- "DSM out: ste:%d did:x%x flg:x%x", ++ "DSM out: ste:%d did:x%x flg:x%lx", + rc, ndlp->nlp_DID, ndlp->nlp_flag); + /* Decrement the ndlp reference count held for this function */ + lpfc_nlp_put(ndlp); +diff --git a/drivers/scsi/lpfc/lpfc_nvme.c b/drivers/scsi/lpfc/lpfc_nvme.c +index fec23c7237304..e9d9884830f30 100644 +--- a/drivers/scsi/lpfc/lpfc_nvme.c ++++ b/drivers/scsi/lpfc/lpfc_nvme.c +@@ -1232,7 +1232,7 @@ lpfc_nvme_prep_io_cmd(struct lpfc_vport *vport, + + /* Word 5 */ + if ((phba->cfg_nvme_enable_fb) && +- (pnode->nlp_flag & NLP_FIRSTBURST)) { ++ test_bit(NLP_FIRSTBURST, &pnode->nlp_flag)) { + req_len = lpfc_ncmd->nvmeCmd->payload_length; + if (req_len < pnode->nvme_fb_size) + wqe->fcp_iwrite.initial_xfer_len = +@@ -2644,14 +2644,11 @@ lpfc_nvme_unregister_port(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + * reference. Check if another thread has set + * NLP_DROPPED. 
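(At this point the patch replaces a lock/test/set/unlock sequence with test_and_set_bit(); a condensed before/after sketch of why the two are equivalent, abridged from the lpfc_nvme_unregister_port() hunk that follows:)

    /* Before: check and set had to share one critical section so
     * two racing threads could not both observe the flag clear.
     */
    spin_lock_irq(&ndlp->lock);
    if (!(ndlp->nlp_flag & NLP_DROPPED)) {
            ndlp->nlp_flag |= NLP_DROPPED;
            spin_unlock_irq(&ndlp->lock);
            lpfc_nlp_put(ndlp);        /* exactly one thread gets here */
            return;
    }
    spin_unlock_irq(&ndlp->lock);

    /* After: test_and_set_bit() does the same read-modify-write
     * atomically and returns the old value, so the lock disappears.
     */
    if (!test_and_set_bit(NLP_DROPPED, &ndlp->nlp_flag)) {
            lpfc_nlp_put(ndlp);        /* still exactly one winner */
            return;
    }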
+ */ +- spin_lock_irq(&ndlp->lock); +- if (!(ndlp->nlp_flag & NLP_DROPPED)) { +- ndlp->nlp_flag |= NLP_DROPPED; +- spin_unlock_irq(&ndlp->lock); ++ if (!test_and_set_bit(NLP_DROPPED, ++ &ndlp->nlp_flag)) { + lpfc_nlp_put(ndlp); + return; + } +- spin_unlock_irq(&ndlp->lock); + } + } + } +diff --git a/drivers/scsi/lpfc/lpfc_nvmet.c b/drivers/scsi/lpfc/lpfc_nvmet.c +index 55c3e2c2bf8f7..e6c9112a88627 100644 +--- a/drivers/scsi/lpfc/lpfc_nvmet.c ++++ b/drivers/scsi/lpfc/lpfc_nvmet.c +@@ -2854,7 +2854,7 @@ lpfc_nvmet_prep_fcp_wqe(struct lpfc_hba *phba, + /* In template ar=1 wqes=0 sup=0 irsp=0 irsplen=0 */ + + if (rsp->rsplen == LPFC_NVMET_SUCCESS_LEN) { +- if (ndlp->nlp_flag & NLP_SUPPRESS_RSP) ++ if (test_bit(NLP_SUPPRESS_RSP, &ndlp->nlp_flag)) + bf_set(wqe_sup, + &wqe->fcp_tsend.wqe_com, 1); + } else { +diff --git a/drivers/scsi/lpfc/lpfc_scsi.c b/drivers/scsi/lpfc/lpfc_scsi.c +index 11c974bffa720..905026a4782cf 100644 +--- a/drivers/scsi/lpfc/lpfc_scsi.c ++++ b/drivers/scsi/lpfc/lpfc_scsi.c +@@ -4629,7 +4629,7 @@ static int lpfc_scsi_prep_cmnd_buf_s3(struct lpfc_vport *vport, + iocb_cmd->ulpCommand = CMD_FCP_IWRITE64_CR; + iocb_cmd->ulpPU = PARM_READ_CHECK; + if (vport->cfg_first_burst_size && +- (pnode->nlp_flag & NLP_FIRSTBURST)) { ++ test_bit(NLP_FIRSTBURST, &pnode->nlp_flag)) { + u32 xrdy_len; + + fcpdl = scsi_bufflen(scsi_cmnd); +@@ -5829,7 +5829,7 @@ lpfc_send_taskmgmt(struct lpfc_vport *vport, struct fc_rport *rport, + + lpfc_printf_vlog(vport, KERN_INFO, LOG_FCP, + "0702 Issue %s to TGT %d LUN %llu " +- "rpi x%x nlp_flag x%x Data: x%x x%x\n", ++ "rpi x%x nlp_flag x%lx Data: x%x x%x\n", + lpfc_taskmgmt_name(task_mgmt_cmd), tgt_id, lun_id, + pnode->nlp_rpi, pnode->nlp_flag, iocbq->sli4_xritag, + iocbq->cmd_flag); +@@ -6094,8 +6094,8 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd) + lpfc_printf_vlog(vport, KERN_ERR, LOG_TRACE_EVENT, + "0722 Target Reset rport failure: rdata x%px\n", rdata); + if (pnode) { ++ clear_bit(NLP_NPR_ADISC, &pnode->nlp_flag); + spin_lock_irqsave(&pnode->lock, flags); +- pnode->nlp_flag &= ~NLP_NPR_ADISC; + pnode->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; + spin_unlock_irqrestore(&pnode->lock, flags); + } +@@ -6124,7 +6124,7 @@ lpfc_target_reset_handler(struct scsi_cmnd *cmnd) + !pnode->logo_waitq) { + pnode->logo_waitq = &waitq; + pnode->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; +- pnode->nlp_flag |= NLP_ISSUE_LOGO; ++ set_bit(NLP_ISSUE_LOGO, &pnode->nlp_flag); + pnode->save_flags |= NLP_WAIT_FOR_LOGO; + spin_unlock_irqrestore(&pnode->lock, flags); + lpfc_unreg_rpi(vport, pnode); +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 17ecb2625eb84..80c3c84c23914 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -2911,14 +2911,14 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + vport, + KERN_INFO, LOG_MBOX | LOG_DISCOVERY, + "1438 UNREG cmpl deferred mbox x%x " +- "on NPort x%x Data: x%x x%x x%px x%lx x%x\n", ++ "on NPort x%x Data: x%lx x%x x%px x%lx x%x\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, ndlp->nlp_defer_did, + ndlp, vport->load_flag, kref_read(&ndlp->kref)); + +- if ((ndlp->nlp_flag & NLP_UNREG_INP) && +- (ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING)) { +- ndlp->nlp_flag &= ~NLP_UNREG_INP; ++ if (test_bit(NLP_UNREG_INP, &ndlp->nlp_flag) && ++ ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING) { ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); + } +@@ -2968,7 +2968,7 @@ 
lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + { + struct lpfc_vport *vport = pmb->vport; + struct lpfc_nodelist *ndlp; +- u32 unreg_inp; ++ bool unreg_inp; + + ndlp = pmb->ctx_ndlp; + if (pmb->u.mb.mbxCommand == MBX_UNREG_LOGIN) { +@@ -2981,7 +2981,7 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + vport, KERN_INFO, + LOG_MBOX | LOG_SLI | LOG_NODE, + "0010 UNREG_LOGIN vpi:x%x " +- "rpi:%x DID:%x defer x%x flg x%x " ++ "rpi:%x DID:%x defer x%x flg x%lx " + "x%px\n", + vport->vpi, ndlp->nlp_rpi, + ndlp->nlp_DID, ndlp->nlp_defer_did, +@@ -2991,11 +2991,9 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + /* Cleanup the nlp_flag now that the UNREG RPI + * has completed. + */ +- spin_lock_irq(&ndlp->lock); +- unreg_inp = ndlp->nlp_flag & NLP_UNREG_INP; +- ndlp->nlp_flag &= +- ~(NLP_UNREG_INP | NLP_LOGO_ACC); +- spin_unlock_irq(&ndlp->lock); ++ unreg_inp = test_and_clear_bit(NLP_UNREG_INP, ++ &ndlp->nlp_flag); ++ clear_bit(NLP_LOGO_ACC, &ndlp->nlp_flag); + + /* Check to see if there are any deferred + * events to process +@@ -14340,9 +14338,7 @@ lpfc_sli4_sp_handle_mbox_event(struct lpfc_hba *phba, struct lpfc_mcqe *mcqe) + * an unsolicited PLOGI from the same NPortId from + * starting another mailbox transaction. + */ +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag |= NLP_UNREG_INP; +- spin_unlock_irqrestore(&ndlp->lock, iflags); ++ set_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + lpfc_unreg_login(phba, vport->vpi, + pmbox->un.varWords[0], pmb); + pmb->mbox_cmpl = lpfc_mbx_cmpl_dflt_rpi; +@@ -19091,9 +19087,9 @@ lpfc_sli4_seq_abort_rsp(struct lpfc_vport *vport, + * to free ndlp when transmit completes + */ + if (ndlp->nlp_state == NLP_STE_UNUSED_NODE && +- !(ndlp->nlp_flag & NLP_DROPPED) && ++ !test_bit(NLP_DROPPED, &ndlp->nlp_flag) && + !(ndlp->fc4_xpt_flags & (NVME_XPT_REGD | SCSI_XPT_REGD))) { +- ndlp->nlp_flag |= NLP_DROPPED; ++ set_bit(NLP_DROPPED, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + } + } +@@ -21111,11 +21107,7 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) + /* Unregister the RPI when mailbox complete */ + mb->mbox_flag |= LPFC_MBX_IMED_UNREG; + restart_loop = 1; +- spin_unlock_irq(&phba->hbalock); +- spin_lock(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_IGNR_REG_CMPL; +- spin_unlock(&ndlp->lock); +- spin_lock_irq(&phba->hbalock); ++ clear_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag); + break; + } + } +@@ -21130,9 +21122,7 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) + ndlp = mb->ctx_ndlp; + mb->ctx_ndlp = NULL; + if (ndlp) { +- spin_lock(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_IGNR_REG_CMPL; +- spin_unlock(&ndlp->lock); ++ clear_bit(NLP_IGNR_REG_CMPL, &ndlp->nlp_flag); + lpfc_nlp_put(ndlp); + } + } +@@ -21141,9 +21131,7 @@ lpfc_cleanup_pending_mbox(struct lpfc_vport *vport) + + /* Release the ndlp with the cleaned-up active mailbox command */ + if (act_mbx_ndlp) { +- spin_lock(&act_mbx_ndlp->lock); +- act_mbx_ndlp->nlp_flag &= ~NLP_IGNR_REG_CMPL; +- spin_unlock(&act_mbx_ndlp->lock); ++ clear_bit(NLP_IGNR_REG_CMPL, &act_mbx_ndlp->nlp_flag); + lpfc_nlp_put(act_mbx_ndlp); + } + } +diff --git a/drivers/scsi/lpfc/lpfc_vport.c b/drivers/scsi/lpfc/lpfc_vport.c +index 7a4d4d8e2ad55..9e0e357633779 100644 +--- a/drivers/scsi/lpfc/lpfc_vport.c ++++ b/drivers/scsi/lpfc/lpfc_vport.c +@@ -496,7 +496,7 @@ lpfc_send_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + !ndlp->logo_waitq) { + ndlp->logo_waitq = &waitq; + ndlp->nlp_fcp_info &= ~NLP_FCP_2_DEVICE; +- ndlp->nlp_flag |= 
NLP_ISSUE_LOGO; ++ set_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag); + ndlp->save_flags |= NLP_WAIT_FOR_LOGO; + } + spin_unlock_irq(&ndlp->lock); +@@ -515,8 +515,8 @@ lpfc_send_npiv_logo(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + } + + /* Error - clean up node flags. */ ++ clear_bit(NLP_ISSUE_LOGO, &ndlp->nlp_flag); + spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_ISSUE_LOGO; + ndlp->save_flags &= ~NLP_WAIT_FOR_LOGO; + spin_unlock_irq(&ndlp->lock); + +@@ -708,7 +708,7 @@ lpfc_vport_delete(struct fc_vport *fc_vport) + + lpfc_printf_vlog(vport, KERN_INFO, LOG_VPORT | LOG_ELS, + "1829 DA_ID issue status %d. " +- "SFlag x%x NState x%x, NFlag x%x " ++ "SFlag x%x NState x%x, NFlag x%lx " + "Rpi x%x\n", + rc, ndlp->save_flags, ndlp->nlp_state, + ndlp->nlp_flag, ndlp->nlp_rpi); +-- +2.39.5 + diff --git a/queue-6.12/scsi-lpfc-remove-nlp_release_rpi-flag-from-nodelist-.patch b/queue-6.12/scsi-lpfc-remove-nlp_release_rpi-flag-from-nodelist-.patch new file mode 100644 index 0000000000..40172dedb0 --- /dev/null +++ b/queue-6.12/scsi-lpfc-remove-nlp_release_rpi-flag-from-nodelist-.patch @@ -0,0 +1,420 @@ +From 6ff785dc316f211f5d1463a62dceb5ced70d3b45 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 31 Oct 2024 15:32:16 -0700 +Subject: scsi: lpfc: Remove NLP_RELEASE_RPI flag from nodelist structure + +From: Justin Tee + +[ Upstream commit 32566a6f1ae558d0e79fed6e17a75c253367a57f ] + +An RPI is tightly bound to an NDLP structure and is freed only upon +release of an NDLP object. As such, there should be no logic that frees +an RPI outside of the lpfc_nlp_release() routine. In order to reinforce +the original design usage of RPIs, remove the NLP_RELEASE_RPI flag and +related logic. + +Signed-off-by: Justin Tee +Link: https://lore.kernel.org/r/20241031223219.152342-9-justintee8345@gmail.com +Signed-off-by: Martin K. 
Petersen +Stable-dep-of: b5162bb6aa1e ("scsi: lpfc: Avoid potential ndlp use-after-free in dev_loss_tmo_callbk") +Signed-off-by: Sasha Levin +--- + drivers/scsi/lpfc/lpfc_crtn.h | 2 +- + drivers/scsi/lpfc/lpfc_disc.h | 1 - + drivers/scsi/lpfc/lpfc_els.c | 32 +++----------------- + drivers/scsi/lpfc/lpfc_hbadisc.c | 52 +++++++------------------------- + drivers/scsi/lpfc/lpfc_init.c | 36 +++++++--------------- + drivers/scsi/lpfc/lpfc_sli.c | 48 +++++++++++------------------ + 6 files changed, 44 insertions(+), 127 deletions(-) + +diff --git a/drivers/scsi/lpfc/lpfc_crtn.h b/drivers/scsi/lpfc/lpfc_crtn.h +index d4e46a08f94da..36470bd716173 100644 +--- a/drivers/scsi/lpfc/lpfc_crtn.h ++++ b/drivers/scsi/lpfc/lpfc_crtn.h +@@ -571,7 +571,7 @@ int lpfc_issue_reg_vfi(struct lpfc_vport *); + int lpfc_issue_unreg_vfi(struct lpfc_vport *); + int lpfc_selective_reset(struct lpfc_hba *); + int lpfc_sli4_read_config(struct lpfc_hba *); +-void lpfc_sli4_node_prep(struct lpfc_hba *); ++void lpfc_sli4_node_rpi_restore(struct lpfc_hba *phba); + int lpfc_sli4_els_sgl_update(struct lpfc_hba *phba); + int lpfc_sli4_nvmet_sgl_update(struct lpfc_hba *phba); + int lpfc_io_buf_flush(struct lpfc_hba *phba, struct list_head *sglist); +diff --git a/drivers/scsi/lpfc/lpfc_disc.h b/drivers/scsi/lpfc/lpfc_disc.h +index f5ae8cc158205..5d6eabaeb094e 100644 +--- a/drivers/scsi/lpfc/lpfc_disc.h ++++ b/drivers/scsi/lpfc/lpfc_disc.h +@@ -185,7 +185,6 @@ struct lpfc_node_rrq { + /* Defines for nlp_flag (uint32) */ + #define NLP_IGNR_REG_CMPL 0x00000001 /* Rcvd rscn before we cmpl reg login */ + #define NLP_REG_LOGIN_SEND 0x00000002 /* sent reglogin to adapter */ +-#define NLP_RELEASE_RPI 0x00000004 /* Release RPI to free pool */ + #define NLP_SUPPRESS_RSP 0x00000010 /* Remote NPort supports suppress rsp */ + #define NLP_PLOGI_SND 0x00000020 /* sent PLOGI request for this entry */ + #define NLP_PRLI_SND 0x00000040 /* sent PRLI request for this entry */ +diff --git a/drivers/scsi/lpfc/lpfc_els.c b/drivers/scsi/lpfc/lpfc_els.c +index d737b897ddd82..4e049783fc94e 100644 +--- a/drivers/scsi/lpfc/lpfc_els.c ++++ b/drivers/scsi/lpfc/lpfc_els.c +@@ -3063,8 +3063,6 @@ lpfc_cmpl_els_logo(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + */ + if (ndlp->nlp_flag & NLP_TARGET_REMOVE) { + spin_lock_irq(&ndlp->lock); +- if (phba->sli_rev == LPFC_SLI_REV4) +- ndlp->nlp_flag |= NLP_RELEASE_RPI; + ndlp->nlp_flag &= ~NLP_NPR_2B_DISC; + spin_unlock_irq(&ndlp->lock); + lpfc_disc_state_machine(vport, ndlp, cmdiocb, +@@ -5456,24 +5454,14 @@ lpfc_cmpl_els_rsp(struct lpfc_hba *phba, struct lpfc_iocbq *cmdiocb, + } + + /* An SLI4 NPIV instance wants to drop the node at this point under +- * these conditions and release the RPI. ++ * these conditions because it doesn't need the login. 
+ */ + if (phba->sli_rev == LPFC_SLI_REV4 && + vport && vport->port_type == LPFC_NPIV_PORT && + !(ndlp->fc4_xpt_flags & SCSI_XPT_REGD)) { +- if (ndlp->nlp_flag & NLP_RELEASE_RPI) { +- if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE && +- ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE) { +- lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; +- ndlp->nlp_flag &= ~NLP_RELEASE_RPI; +- spin_unlock_irq(&ndlp->lock); +- } +- lpfc_drop_node(vport, ndlp); +- } else if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE && +- ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE && +- ndlp->nlp_state != NLP_STE_PRLI_ISSUE) { ++ if (ndlp->nlp_state != NLP_STE_PLOGI_ISSUE && ++ ndlp->nlp_state != NLP_STE_REG_LOGIN_ISSUE && ++ ndlp->nlp_state != NLP_STE_PRLI_ISSUE) { + /* Drop ndlp if there is no planned or outstanding + * issued PRLI. + * +@@ -5852,18 +5840,6 @@ lpfc_els_rsp_reject(struct lpfc_vport *vport, uint32_t rejectError, + return 1; + } + +- /* The NPIV instance is rejecting this unsolicited ELS. Make sure the +- * node's assigned RPI gets released provided this node is not already +- * registered with the transport. +- */ +- if (phba->sli_rev == LPFC_SLI_REV4 && +- vport->port_type == LPFC_NPIV_PORT && +- !(ndlp->fc4_xpt_flags & SCSI_XPT_REGD)) { +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag |= NLP_RELEASE_RPI; +- spin_unlock_irq(&ndlp->lock); +- } +- + rc = lpfc_sli_issue_iocb(phba, LPFC_ELS_RING, elsiocb, 0); + if (rc == IOCB_ERROR) { + lpfc_els_free_iocb(phba, elsiocb); +diff --git a/drivers/scsi/lpfc/lpfc_hbadisc.c b/drivers/scsi/lpfc/lpfc_hbadisc.c +index 34f77b250387c..ff559b28738cf 100644 +--- a/drivers/scsi/lpfc/lpfc_hbadisc.c ++++ b/drivers/scsi/lpfc/lpfc_hbadisc.c +@@ -5212,14 +5212,6 @@ lpfc_nlp_logo_unreg(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); + } else { +- /* NLP_RELEASE_RPI is only set for SLI4 ports. */ +- if (ndlp->nlp_flag & NLP_RELEASE_RPI) { +- lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi); +- spin_lock_irq(&ndlp->lock); +- ndlp->nlp_flag &= ~NLP_RELEASE_RPI; +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; +- spin_unlock_irq(&ndlp->lock); +- } + spin_lock_irq(&ndlp->lock); + ndlp->nlp_flag &= ~NLP_UNREG_INP; + spin_unlock_irq(&ndlp->lock); +@@ -5242,8 +5234,6 @@ static void + lpfc_set_unreg_login_mbx_cmpl(struct lpfc_hba *phba, struct lpfc_vport *vport, + struct lpfc_nodelist *ndlp, LPFC_MBOXQ_t *mbox) + { +- unsigned long iflags; +- + /* Driver always gets a reference on the mailbox job + * in support of async jobs. + */ +@@ -5261,13 +5251,6 @@ lpfc_set_unreg_login_mbx_cmpl(struct lpfc_hba *phba, struct lpfc_vport *vport, + (kref_read(&ndlp->kref) > 0)) { + mbox->mbox_cmpl = lpfc_sli4_unreg_rpi_cmpl_clr; + } else { +- if (test_bit(FC_UNLOADING, &vport->load_flag)) { +- if (phba->sli_rev == LPFC_SLI_REV4) { +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag |= NLP_RELEASE_RPI; +- spin_unlock_irqrestore(&ndlp->lock, iflags); +- } +- } + mbox->mbox_cmpl = lpfc_sli_def_mbox_cmpl; + } + } +@@ -5330,14 +5313,11 @@ lpfc_unreg_rpi(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + return 1; + } + ++ /* Accept PLOGIs after unreg_rpi_cmpl. 
*/ + if (mbox->mbox_cmpl == lpfc_sli4_unreg_rpi_cmpl_clr) +- /* +- * accept PLOGIs after unreg_rpi_cmpl +- */ + acc_plogi = 0; +- if (((ndlp->nlp_DID & Fabric_DID_MASK) != +- Fabric_DID_MASK) && +- (!test_bit(FC_OFFLINE_MODE, &vport->fc_flag))) ++ ++ if (!test_bit(FC_OFFLINE_MODE, &vport->fc_flag)) + ndlp->nlp_flag |= NLP_UNREG_INP; + + lpfc_printf_vlog(vport, KERN_INFO, +@@ -5561,10 +5541,6 @@ lpfc_cleanup_node(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) + list_del_init(&ndlp->dev_loss_evt.evt_listp); + list_del_init(&ndlp->recovery_evt.evt_listp); + lpfc_cleanup_vports_rrqs(vport, ndlp); +- +- if (phba->sli_rev == LPFC_SLI_REV4) +- ndlp->nlp_flag |= NLP_RELEASE_RPI; +- + return 0; + } + +@@ -6580,8 +6556,9 @@ lpfc_nlp_init(struct lpfc_vport *vport, uint32_t did) + INIT_LIST_HEAD(&ndlp->nlp_listp); + if (vport->phba->sli_rev == LPFC_SLI_REV4) { + ndlp->nlp_rpi = rpi; +- lpfc_printf_vlog(vport, KERN_INFO, LOG_NODE | LOG_DISCOVERY, +- "0007 Init New ndlp x%px, rpi:x%x DID:%x " ++ lpfc_printf_vlog(vport, KERN_INFO, ++ LOG_ELS | LOG_NODE | LOG_DISCOVERY, ++ "0007 Init New ndlp x%px, rpi:x%x DID:x%x " + "flg:x%x refcnt:%d\n", + ndlp, ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_flag, kref_read(&ndlp->kref)); +@@ -6626,19 +6603,12 @@ lpfc_nlp_release(struct kref *kref) + lpfc_cancel_retry_delay_tmo(vport, ndlp); + lpfc_cleanup_node(vport, ndlp); + +- /* Not all ELS transactions have registered the RPI with the port. +- * In these cases the rpi usage is temporary and the node is +- * released when the WQE is completed. Catch this case to free the +- * RPI to the pool. Because this node is in the release path, a lock +- * is unnecessary. All references are gone and the node has been +- * dequeued. ++ /* All nodes are initialized with an RPI that needs to be released ++ * now. All references are gone and the node has been dequeued. + */ +- if (ndlp->nlp_flag & NLP_RELEASE_RPI) { +- if (ndlp->nlp_rpi != LPFC_RPI_ALLOC_ERROR && +- !(ndlp->nlp_flag & (NLP_RPI_REGISTERED | NLP_UNREG_INP))) { +- lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi); +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; +- } ++ if (vport->phba->sli_rev == LPFC_SLI_REV4) { ++ lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi); ++ ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; + } + + /* The node is not freed back to memory, it is released to a pool so +diff --git a/drivers/scsi/lpfc/lpfc_init.c b/drivers/scsi/lpfc/lpfc_init.c +index 50c761991191f..986e2898b10b8 100644 +--- a/drivers/scsi/lpfc/lpfc_init.c ++++ b/drivers/scsi/lpfc/lpfc_init.c +@@ -3379,7 +3379,7 @@ lpfc_block_mgmt_io(struct lpfc_hba *phba, int mbx_action) + } + + /** +- * lpfc_sli4_node_prep - Assign RPIs for active nodes. ++ * lpfc_sli4_node_rpi_restore - Recover assigned RPIs for active nodes. + * @phba: pointer to lpfc hba data structure. + * + * Allocate RPIs for all active remote nodes. This is needed whenever +@@ -3387,7 +3387,7 @@ lpfc_block_mgmt_io(struct lpfc_hba *phba, int mbx_action) + * is to fixup the temporary rpi assignments. 
+ **/ + void +-lpfc_sli4_node_prep(struct lpfc_hba *phba) ++lpfc_sli4_node_rpi_restore(struct lpfc_hba *phba) + { + struct lpfc_nodelist *ndlp, *next_ndlp; + struct lpfc_vport **vports; +@@ -3397,10 +3397,10 @@ lpfc_sli4_node_prep(struct lpfc_hba *phba) + return; + + vports = lpfc_create_vport_work_array(phba); +- if (vports == NULL) ++ if (!vports) + return; + +- for (i = 0; i <= phba->max_vports && vports[i] != NULL; i++) { ++ for (i = 0; i <= phba->max_vports && vports[i]; i++) { + if (test_bit(FC_UNLOADING, &vports[i]->load_flag)) + continue; + +@@ -3409,7 +3409,13 @@ lpfc_sli4_node_prep(struct lpfc_hba *phba) + nlp_listp) { + rpi = lpfc_sli4_alloc_rpi(phba); + if (rpi == LPFC_RPI_ALLOC_ERROR) { +- /* TODO print log? */ ++ lpfc_printf_vlog(ndlp->vport, KERN_INFO, ++ LOG_NODE | LOG_DISCOVERY, ++ "0099 RPI alloc error for " ++ "ndlp x%px DID:x%06x " ++ "flg:x%x\n", ++ ndlp, ndlp->nlp_DID, ++ ndlp->nlp_flag); + continue; + } + ndlp->nlp_rpi = rpi; +@@ -3829,26 +3835,6 @@ lpfc_offline_prep(struct lpfc_hba *phba, int mbx_action) + ndlp->nlp_flag &= ~(NLP_UNREG_INP | + NLP_RPI_REGISTERED); + spin_unlock_irq(&ndlp->lock); +- if (phba->sli_rev == LPFC_SLI_REV4) +- lpfc_sli_rpi_release(vports[i], +- ndlp); +- } else { +- lpfc_unreg_rpi(vports[i], ndlp); +- } +- /* +- * Whenever an SLI4 port goes offline, free the +- * RPI. Get a new RPI when the adapter port +- * comes back online. +- */ +- if (phba->sli_rev == LPFC_SLI_REV4) { +- lpfc_printf_vlog(vports[i], KERN_INFO, +- LOG_NODE | LOG_DISCOVERY, +- "0011 Free RPI x%x on " +- "ndlp: x%px did x%x\n", +- ndlp->nlp_rpi, ndlp, +- ndlp->nlp_DID); +- lpfc_sli4_free_rpi(phba, ndlp->nlp_rpi); +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; + } + + if (ndlp->nlp_type & NLP_FABRIC) { +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 4dccbaeb63283..17ecb2625eb84 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -2842,27 +2842,6 @@ lpfc_sli_wake_mbox_wait(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmboxq) + return; + } + +-static void +-__lpfc_sli_rpi_release(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) +-{ +- unsigned long iflags; +- +- if (ndlp->nlp_flag & NLP_RELEASE_RPI) { +- lpfc_sli4_free_rpi(vport->phba, ndlp->nlp_rpi); +- spin_lock_irqsave(&ndlp->lock, iflags); +- ndlp->nlp_flag &= ~NLP_RELEASE_RPI; +- ndlp->nlp_rpi = LPFC_RPI_ALLOC_ERROR; +- spin_unlock_irqrestore(&ndlp->lock, iflags); +- } +- ndlp->nlp_flag &= ~NLP_UNREG_INP; +-} +- +-void +-lpfc_sli_rpi_release(struct lpfc_vport *vport, struct lpfc_nodelist *ndlp) +-{ +- __lpfc_sli_rpi_release(vport, ndlp); +-} +- + /** + * lpfc_sli_def_mbox_cmpl - Default mailbox completion handler + * @phba: Pointer to HBA context object. 
+@@ -2942,8 +2921,6 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + ndlp->nlp_flag &= ~NLP_UNREG_INP; + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); +- } else { +- __lpfc_sli_rpi_release(vport, ndlp); + } + + /* The unreg_login mailbox is complete and had a +@@ -2991,6 +2968,7 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + { + struct lpfc_vport *vport = pmb->vport; + struct lpfc_nodelist *ndlp; ++ u32 unreg_inp; + + ndlp = pmb->ctx_ndlp; + if (pmb->u.mb.mbxCommand == MBX_UNREG_LOGIN) { +@@ -3009,14 +2987,22 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + ndlp->nlp_DID, ndlp->nlp_defer_did, + ndlp->nlp_flag, + ndlp); +- ndlp->nlp_flag &= ~NLP_LOGO_ACC; ++ ++ /* Cleanup the nlp_flag now that the UNREG RPI ++ * has completed. ++ */ ++ spin_lock_irq(&ndlp->lock); ++ unreg_inp = ndlp->nlp_flag & NLP_UNREG_INP; ++ ndlp->nlp_flag &= ++ ~(NLP_UNREG_INP | NLP_LOGO_ACC); ++ spin_unlock_irq(&ndlp->lock); + + /* Check to see if there are any deferred + * events to process + */ +- if ((ndlp->nlp_flag & NLP_UNREG_INP) && +- (ndlp->nlp_defer_did != +- NLP_EVT_NOTHING_PENDING)) { ++ if (unreg_inp && ++ ndlp->nlp_defer_did != ++ NLP_EVT_NOTHING_PENDING) { + lpfc_printf_vlog( + vport, KERN_INFO, + LOG_MBOX | LOG_SLI | LOG_NODE, +@@ -3025,14 +3011,12 @@ lpfc_sli4_unreg_rpi_cmpl_clr(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + "NPort x%x Data: x%x x%px\n", + ndlp->nlp_rpi, ndlp->nlp_DID, + ndlp->nlp_defer_did, ndlp); +- ndlp->nlp_flag &= ~NLP_UNREG_INP; + ndlp->nlp_defer_did = + NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi( + vport, ndlp->nlp_DID, 0); +- } else { +- __lpfc_sli_rpi_release(vport, ndlp); + } ++ + lpfc_nlp_put(ndlp); + } + } +@@ -8750,6 +8734,7 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) + lpfc_sli_config_mbox_opcode_get( + phba, mboxq), + rc, dd); ++ + /* + * Allocate all resources (xri,rpi,vpi,vfi) now. Subsequent + * calls depends on these resources to complete port setup. +@@ -8762,6 +8747,8 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) + goto out_free_mbox; + } + ++ lpfc_sli4_node_rpi_restore(phba); ++ + lpfc_set_host_data(phba, mboxq); + + rc = lpfc_sli_issue_mbox(phba, mboxq, MBX_POLL); +@@ -8949,7 +8936,6 @@ lpfc_sli4_hba_setup(struct lpfc_hba *phba) + rc = -ENODEV; + goto out_free_iocblist; + } +- lpfc_sli4_node_prep(phba); + + if (!test_bit(HBA_FCOE_MODE, &phba->hba_flag)) { + if ((phba->nvmet_support == 0) || (phba->cfg_nvmet_mrq == 1)) { +-- +2.39.5 + diff --git a/queue-6.12/scsi-lpfc-restore-clearing-of-nlp_unreg_inp-in-ndlp-.patch b/queue-6.12/scsi-lpfc-restore-clearing-of-nlp_unreg_inp-in-ndlp-.patch new file mode 100644 index 0000000000..13bdc2caa1 --- /dev/null +++ b/queue-6.12/scsi-lpfc-restore-clearing-of-nlp_unreg_inp-in-ndlp-.patch @@ -0,0 +1,59 @@ +From 5349fcd322e72934a55d2cf5bf0348d4fb248098 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 17 Mar 2025 12:37:31 -0400 +Subject: scsi: lpfc: Restore clearing of NLP_UNREG_INP in ndlp->nlp_flag + +From: Ewan D. Milne + +[ Upstream commit 040492ac2578b66d3ff4dcefb4f56811634de53d ] + +Commit 32566a6f1ae5 ("scsi: lpfc: Remove NLP_RELEASE_RPI flag from nodelist +structure") introduced a regression with SLI-3 adapters (e.g. LPe12000 8Gb) +where a Link Down / Link Up such as caused by disabling an host FC switch +port would result in the devices remaining in the transport-offline state +and multipath reporting them as failed. This problem was not seen with +newer SLI-4 adapters. 
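(Condensed from the lpfc_sli_def_mbox_cmpl() hunk below, the shape of the fix: both branches of the UNREG completion must drop NLP_UNREG_INP, otherwise a node that takes the else path stays marked as having an unreg in progress and later logins stall.)

    /* Simplified completion logic: NLP_UNREG_INP is cleared on
     * every path, not only when a deferred PLOGI is pending.
     */
    if (test_bit(NLP_UNREG_INP, &ndlp->nlp_flag) &&
        ndlp->nlp_defer_did != NLP_EVT_NOTHING_PENDING) {
            clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);
            ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING;
            lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0);
    } else {
            clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag);   /* the restored line */
    }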
+ +The problem was caused by portions of the patch which removed the functions +__lpfc_sli_rpi_release() and lpfc_sli_rpi_release() and all their callers. +This was presumably because with the removal of the NLP_RELEASE_RPI flag +there was no need to free the rpi. + +However, __lpfc_sli_rpi_release() and lpfc_sli_rpi_release() which calls it +reset the NLP_UNREG_INP flag. And, lpfc_sli_def_mbox_cmpl() has a path +where __lpfc_sli_rpi_release() was called in a particular case where +NLP_UNREG_INP was not otherwise cleared because of other conditions. + +Restoring the else clause of this conditional and simply clearing the +NLP_UNREG_INP flag appears to resolve the problem with SLI-3 adapters. It +should be noted that the code path in question is not specific to SLI-3, +but there are other SLI-4 code paths which may have masked the issue. + +Fixes: 32566a6f1ae5 ("scsi: lpfc: Remove NLP_RELEASE_RPI flag from nodelist structure") +Cc: stable@vger.kernel.org +Tested-by: Marco Patalano +Signed-off-by: Ewan D. Milne +Link: https://lore.kernel.org/r/20250317163731.356873-1-emilne@redhat.com +Reviewed-by: Justin Tee +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + drivers/scsi/lpfc/lpfc_sli.c | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c +index 80c3c84c23914..c4acf594286e5 100644 +--- a/drivers/scsi/lpfc/lpfc_sli.c ++++ b/drivers/scsi/lpfc/lpfc_sli.c +@@ -2921,6 +2921,8 @@ lpfc_sli_def_mbox_cmpl(struct lpfc_hba *phba, LPFC_MBOXQ_t *pmb) + clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + ndlp->nlp_defer_did = NLP_EVT_NOTHING_PENDING; + lpfc_issue_els_plogi(vport, ndlp->nlp_DID, 0); ++ } else { ++ clear_bit(NLP_UNREG_INP, &ndlp->nlp_flag); + } + + /* The unreg_login mailbox is complete and had a +-- +2.39.5 + diff --git a/queue-6.12/scsi-qla2xxx-fix-dma-mapping-test-in-qla24xx_get_por.patch b/queue-6.12/scsi-qla2xxx-fix-dma-mapping-test-in-qla24xx_get_por.patch new file mode 100644 index 0000000000..a9e46daf04 --- /dev/null +++ b/queue-6.12/scsi-qla2xxx-fix-dma-mapping-test-in-qla24xx_get_por.patch @@ -0,0 +1,38 @@ +From 8b98879662821b618905ed7659e27f142d6c4f24 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 17 Jun 2025 18:11:11 +0200 +Subject: scsi: qla2xxx: Fix DMA mapping test in qla24xx_get_port_database() + +From: Thomas Fourier + +[ Upstream commit c3b214719a87735d4f67333a8ef3c0e31a34837c ] + +dma_map_XXX() functions return as error values DMA_MAPPING_ERROR which is +often ~0. The error value should be tested with dma_mapping_error() like +it was done in qla26xx_dport_diagnostics(). + +Fixes: 818c7f87a177 ("scsi: qla2xxx: Add changes in preparation for vendor extended FDMI/RDP") +Signed-off-by: Thomas Fourier +Link: https://lore.kernel.org/r/20250617161115.39888-2-fourier.thomas@gmail.com +Signed-off-by: Martin K. 
Petersen +Signed-off-by: Sasha Levin +--- + drivers/scsi/qla2xxx/qla_mbx.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/scsi/qla2xxx/qla_mbx.c b/drivers/scsi/qla2xxx/qla_mbx.c +index 0cd6f3e148824..13b6cb1b93acd 100644 +--- a/drivers/scsi/qla2xxx/qla_mbx.c ++++ b/drivers/scsi/qla2xxx/qla_mbx.c +@@ -2147,7 +2147,7 @@ qla24xx_get_port_database(scsi_qla_host_t *vha, u16 nport_handle, + + pdb_dma = dma_map_single(&vha->hw->pdev->dev, pdb, + sizeof(*pdb), DMA_FROM_DEVICE); +- if (!pdb_dma) { ++ if (dma_mapping_error(&vha->hw->pdev->dev, pdb_dma)) { + ql_log(ql_log_warn, vha, 0x1116, "Failed to map dma buffer.\n"); + return QLA_MEMORY_ALLOC_FAILED; + } +-- +2.39.5 + diff --git a/queue-6.12/scsi-qla4xxx-fix-missing-dma-mapping-error-in-qla4xx.patch b/queue-6.12/scsi-qla4xxx-fix-missing-dma-mapping-error-in-qla4xx.patch new file mode 100644 index 0000000000..74fb886d6a --- /dev/null +++ b/queue-6.12/scsi-qla4xxx-fix-missing-dma-mapping-error-in-qla4xx.patch @@ -0,0 +1,37 @@ +From 476c797407b7089ef08f4c37f925c08211edfab6 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 18 Jun 2025 09:17:37 +0200 +Subject: scsi: qla4xxx: Fix missing DMA mapping error in qla4xxx_alloc_pdu() + +From: Thomas Fourier + +[ Upstream commit 00f452a1b084efbe8dcb60a29860527944a002a1 ] + +dma_map_XXX() can fail and should be tested for errors with +dma_mapping_error(). + +Fixes: b3a271a94d00 ("[SCSI] qla4xxx: support iscsiadm session mgmt") +Signed-off-by: Thomas Fourier +Link: https://lore.kernel.org/r/20250618071742.21822-2-fourier.thomas@gmail.com +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + drivers/scsi/qla4xxx/ql4_os.c | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/drivers/scsi/qla4xxx/ql4_os.c b/drivers/scsi/qla4xxx/ql4_os.c +index d91f54a6e752f..97e9ca5a2a02c 100644 +--- a/drivers/scsi/qla4xxx/ql4_os.c ++++ b/drivers/scsi/qla4xxx/ql4_os.c +@@ -3420,6 +3420,8 @@ static int qla4xxx_alloc_pdu(struct iscsi_task *task, uint8_t opcode) + task_data->data_dma = dma_map_single(&ha->pdev->dev, task->data, + task->data_count, + DMA_TO_DEVICE); ++ if (dma_mapping_error(&ha->pdev->dev, task_data->data_dma)) ++ return -ENOMEM; + } + + DEBUG2(ql4_printk(KERN_INFO, ha, "%s: MaxRecvLen %u, iscsi hrd %d\n", +-- +2.39.5 + diff --git a/queue-6.12/scsi-sd-fix-vpd-page-0xb7-length-check.patch b/queue-6.12/scsi-sd-fix-vpd-page-0xb7-length-check.patch new file mode 100644 index 0000000000..891a931365 --- /dev/null +++ b/queue-6.12/scsi-sd-fix-vpd-page-0xb7-length-check.patch @@ -0,0 +1,47 @@ +From eaafa4127eeb16c0e624a581baa2d0bb8370f45d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 19 Jun 2025 12:03:02 +0800 +Subject: scsi: sd: Fix VPD page 0xb7 length check + +From: jackysliu <1972843537@qq.com> + +[ Upstream commit 8889676cd62161896f1d861ce294adc29c4f2cb5 ] + +sd_read_block_limits_ext() currently assumes that vpd->len excludes the +size of the page header. However, vpd->len describes the size of the entire +VPD page, therefore the sanity check is incorrect. + +In practice this is not really a problem since we don't attach VPD +pages unless they actually report data trailing the header. But fix +the length check regardless. + +This issue was identified by Wukong-Agent (formerly Tencent Woodpecker), a +code security AI agent, through static code analysis. 
+ +[mkp: rewrote patch description] + +Signed-off-by: jackysliu <1972843537@qq.com> +Link: https://lore.kernel.org/r/tencent_ADA5210D1317EEB6CD7F3DE9FE9DA4591D05@qq.com +Fixes: 96b171d6dba6 ("scsi: core: Query the Block Limits Extension VPD page") +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + drivers/scsi/sd.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c +index 8947dab132d78..86dde3e7debba 100644 +--- a/drivers/scsi/sd.c ++++ b/drivers/scsi/sd.c +@@ -3388,7 +3388,7 @@ static void sd_read_block_limits_ext(struct scsi_disk *sdkp) + + rcu_read_lock(); + vpd = rcu_dereference(sdkp->device->vpd_pgb7); +- if (vpd && vpd->len >= 2) ++ if (vpd && vpd->len >= 6) + sdkp->rscs = vpd->data[5] & 1; + rcu_read_unlock(); + } +-- +2.39.5 + diff --git a/queue-6.12/scsi-target-fix-null-pointer-dereference-in-core_scs.patch b/queue-6.12/scsi-target-fix-null-pointer-dereference-in-core_scs.patch new file mode 100644 index 0000000000..936b7d739e --- /dev/null +++ b/queue-6.12/scsi-target-fix-null-pointer-dereference-in-core_scs.patch @@ -0,0 +1,56 @@ +From 8a28a0c00fd88ce2483484871c928960efee0ddc Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 12 Jun 2025 12:15:56 +0200 +Subject: scsi: target: Fix NULL pointer dereference in + core_scsi3_decode_spec_i_port() + +From: Maurizio Lombardi + +[ Upstream commit d8ab68bdb294b09a761e967dad374f2965e1913f ] + +The function core_scsi3_decode_spec_i_port(), in its error code path, +unconditionally calls core_scsi3_lunacl_undepend_item() passing the +dest_se_deve pointer, which may be NULL. + +This can lead to a NULL pointer dereference if dest_se_deve remains +unset. + +SPC-3 PR SPEC_I_PT: Unable to locate dest_tpg +Unable to handle kernel paging request at virtual address dfff800000000012 +Call trace: + core_scsi3_lunacl_undepend_item+0x2c/0xf0 [target_core_mod] (P) + core_scsi3_decode_spec_i_port+0x120c/0x1c30 [target_core_mod] + core_scsi3_emulate_pro_register+0x6b8/0xcd8 [target_core_mod] + target_scsi3_emulate_pr_out+0x56c/0x840 [target_core_mod] + +Fix this by adding a NULL check before calling +core_scsi3_lunacl_undepend_item() + +Signed-off-by: Maurizio Lombardi +Link: https://lore.kernel.org/r/20250612101556.24829-1-mlombard@redhat.com +Reviewed-by: Mike Christie +Reviewed-by: John Meneghini +Signed-off-by: Martin K. 
Petersen +Signed-off-by: Sasha Levin +--- + drivers/target/target_core_pr.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/target/target_core_pr.c b/drivers/target/target_core_pr.c +index 4f4ad6af416c8..47fe50b80c229 100644 +--- a/drivers/target/target_core_pr.c ++++ b/drivers/target/target_core_pr.c +@@ -1842,7 +1842,9 @@ core_scsi3_decode_spec_i_port( + } + + kmem_cache_free(t10_pr_reg_cache, dest_pr_reg); +- core_scsi3_lunacl_undepend_item(dest_se_deve); ++ ++ if (dest_se_deve) ++ core_scsi3_lunacl_undepend_item(dest_se_deve); + + if (is_local) + continue; +-- +2.39.5 + diff --git a/queue-6.12/scsi-ufs-core-fix-spelling-of-a-sysfs-attribute-name.patch b/queue-6.12/scsi-ufs-core-fix-spelling-of-a-sysfs-attribute-name.patch new file mode 100644 index 0000000000..5f4c024254 --- /dev/null +++ b/queue-6.12/scsi-ufs-core-fix-spelling-of-a-sysfs-attribute-name.patch @@ -0,0 +1,60 @@ +From 8f4a20472f4cc9203257b2f165598367922746b5 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 24 Jun 2025 11:16:44 -0700 +Subject: scsi: ufs: core: Fix spelling of a sysfs attribute name + +From: Bart Van Assche + +[ Upstream commit 021f243627ead17eb6500170256d3d9be787dad8 ] + +Change "resourse" into "resource" in the name of a sysfs attribute. + +Fixes: d829fc8a1058 ("scsi: ufs: sysfs: unit descriptor") +Signed-off-by: Bart Van Assche +Link: https://lore.kernel.org/r/20250624181658.336035-1-bvanassche@acm.org +Reviewed-by: Avri Altman +Signed-off-by: Martin K. Petersen +Signed-off-by: Sasha Levin +--- + Documentation/ABI/testing/sysfs-driver-ufs | 2 +- + drivers/ufs/core/ufs-sysfs.c | 4 ++-- + 2 files changed, 3 insertions(+), 3 deletions(-) + +diff --git a/Documentation/ABI/testing/sysfs-driver-ufs b/Documentation/ABI/testing/sysfs-driver-ufs +index 5fa6655aee840..16f17e91ee496 100644 +--- a/Documentation/ABI/testing/sysfs-driver-ufs ++++ b/Documentation/ABI/testing/sysfs-driver-ufs +@@ -711,7 +711,7 @@ Description: This file shows the thin provisioning type. This is one of + + The file is read only. + +-What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resourse_count ++What: /sys/class/scsi_device/*/device/unit_descriptor/physical_memory_resource_count + Date: February 2018 + Contact: Stanislav Nijnikov + Description: This file shows the total physical memory resources. 
This is +diff --git a/drivers/ufs/core/ufs-sysfs.c b/drivers/ufs/core/ufs-sysfs.c +index 796e37a1d859f..f8397ef3cf8df 100644 +--- a/drivers/ufs/core/ufs-sysfs.c ++++ b/drivers/ufs/core/ufs-sysfs.c +@@ -1608,7 +1608,7 @@ UFS_UNIT_DESC_PARAM(logical_block_size, _LOGICAL_BLK_SIZE, 1); + UFS_UNIT_DESC_PARAM(logical_block_count, _LOGICAL_BLK_COUNT, 8); + UFS_UNIT_DESC_PARAM(erase_block_size, _ERASE_BLK_SIZE, 4); + UFS_UNIT_DESC_PARAM(provisioning_type, _PROVISIONING_TYPE, 1); +-UFS_UNIT_DESC_PARAM(physical_memory_resourse_count, _PHY_MEM_RSRC_CNT, 8); ++UFS_UNIT_DESC_PARAM(physical_memory_resource_count, _PHY_MEM_RSRC_CNT, 8); + UFS_UNIT_DESC_PARAM(context_capabilities, _CTX_CAPABILITIES, 2); + UFS_UNIT_DESC_PARAM(large_unit_granularity, _LARGE_UNIT_SIZE_M1, 1); + UFS_UNIT_DESC_PARAM(wb_buf_alloc_units, _WB_BUF_ALLOC_UNITS, 4); +@@ -1625,7 +1625,7 @@ static struct attribute *ufs_sysfs_unit_descriptor[] = { + &dev_attr_logical_block_count.attr, + &dev_attr_erase_block_size.attr, + &dev_attr_provisioning_type.attr, +- &dev_attr_physical_memory_resourse_count.attr, ++ &dev_attr_physical_memory_resource_count.attr, + &dev_attr_context_capabilities.attr, + &dev_attr_large_unit_granularity.attr, + &dev_attr_wb_buf_alloc_units.attr, +-- +2.39.5 + diff --git a/queue-6.12/selinux-change-security_compute_sid-to-return-the-ss.patch b/queue-6.12/selinux-change-security_compute_sid-to-return-the-ss.patch new file mode 100644 index 0000000000..66816b6997 --- /dev/null +++ b/queue-6.12/selinux-change-security_compute_sid-to-return-the-ss.patch @@ -0,0 +1,56 @@ +From 543a662d210a653e4b1b6ec3adfbf04e768c50a0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 10 Jun 2025 15:48:27 -0400 +Subject: selinux: change security_compute_sid to return the ssid or tsid on + match + +From: Stephen Smalley + +[ Upstream commit fde46f60f6c5138ee422087addbc5bf5b4968bf1 ] + +If the end result of a security_compute_sid() computation matches the +ssid or tsid, return that SID rather than looking it up again. This +avoids the problem of multiple initial SIDs that map to the same +context. + +Cc: stable@vger.kernel.org +Reported-by: Guido Trentalancia +Fixes: ae254858ce07 ("selinux: introduce an initial SID for early boot processes") +Signed-off-by: Stephen Smalley +Tested-by: Guido Trentalancia +Signed-off-by: Paul Moore +Signed-off-by: Sasha Levin +--- + security/selinux/ss/services.c | 16 +++++++++++----- + 1 file changed, 11 insertions(+), 5 deletions(-) + +diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c +index 88850405ded92..f36332e64c4d1 100644 +--- a/security/selinux/ss/services.c ++++ b/security/selinux/ss/services.c +@@ -1884,11 +1884,17 @@ static int security_compute_sid(u32 ssid, + goto out_unlock; + } + /* Obtain the sid for the context. 
*/ +- rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); +- if (rc == -ESTALE) { +- rcu_read_unlock(); +- context_destroy(&newcontext); +- goto retry; ++ if (context_cmp(scontext, &newcontext)) ++ *out_sid = ssid; ++ else if (context_cmp(tcontext, &newcontext)) ++ *out_sid = tsid; ++ else { ++ rc = sidtab_context_to_sid(sidtab, &newcontext, out_sid); ++ if (rc == -ESTALE) { ++ rcu_read_unlock(); ++ context_destroy(&newcontext); ++ goto retry; ++ } + } + out_unlock: + rcu_read_unlock(); +-- +2.39.5 + diff --git a/queue-6.12/series b/queue-6.12/series index f5e4610e1e..17d05c90c0 100644 --- a/queue-6.12/series +++ b/queue-6.12/series @@ -23,3 +23,172 @@ mtk-sd-fix-a-pagefault-in-dma_unmap_sg-for-not-prepared-data.patch mtk-sd-prevent-memory-corruption-from-dma-map-failure.patch mtk-sd-reset-host-mrq-on-prepare_data-error.patch drm-v3d-disable-interrupts-before-resetting-the-gpu.patch +firmware-arm_ffa-fix-memory-leak-by-freeing-notifier.patch +firmware-arm_ffa-refactoring-to-prepare-for-framewor.patch +firmware-arm_ffa-stash-ffa_device-instead-of-notify_.patch +firmware-arm_ffa-add-support-for-un-registration-of-.patch +firmware-arm_ffa-move-memory-allocation-outside-the-.patch +arm64-dts-apple-t8103-fix-pcie-bcm4377-nodename.patch +platform-mellanox-mlxbf-tmfifo-fix-vring_desc.len-as.patch +rdma-mlx5-fix-unsafe-xarray-access-in-implicit-odp-h.patch +rdma-mlx5-initialize-obj_event-obj_sub_list-before-x.patch +nfs-clean-up-proc-net-rpc-nfs-when-nfs_fs_proc_net_i.patch +nfsv4-pnfs-fix-a-race-to-wake-on-nfs_layout_drain.patch +scsi-qla2xxx-fix-dma-mapping-test-in-qla24xx_get_por.patch +scsi-qla4xxx-fix-missing-dma-mapping-error-in-qla4xx.patch +scsi-sd-fix-vpd-page-0xb7-length-check.patch +scsi-ufs-core-fix-spelling-of-a-sysfs-attribute-name.patch +rdma-mlx5-fix-hw-counters-query-for-non-representor-.patch +rdma-mlx5-fix-cc-counters-query-for-mpv.patch +rdma-mlx5-fix-vport-loopback-for-mpv-device.patch +platform-mellanox-mlxbf-pmc-fix-duplicate-event-id-f.patch +platform-mellanox-nvsw-sn2201-fix-bus-number-in-adap.patch +bluetooth-prevent-unintended-pause-by-checking-if-ad.patch +btrfs-fix-missing-error-handling-when-searching-for-.patch +btrfs-fix-iteration-of-extrefs-during-log-replay.patch +btrfs-return-a-btrfs_inode-from-btrfs_iget_logging.patch +btrfs-return-a-btrfs_inode-from-read_one_inode.patch +btrfs-fix-invalid-inode-pointer-dereferences-during-.patch +btrfs-fix-inode-lookup-error-handling-during-log-rep.patch +btrfs-record-new-subvolume-in-parent-dir-earlier-to-.patch +btrfs-propagate-last_unlink_trans-earlier-when-doing.patch +btrfs-use-btrfs_record_snapshot_destroy-during-rmdir.patch +ethernet-atl1-add-missing-dma-mapping-error-checks-a.patch +dpaa2-eth-fix-xdp_rxq_info-leak.patch +drm-exynos-fimd-guard-display-clock-control-with-run.patch +spi-spi-fsl-dspi-clear-completion-counter-before-ini.patch +drm-i915-selftests-change-mock_request-to-return-err.patch +nvme-fix-incorrect-cdw15-value-in-passthru-error-log.patch +nvmet-fix-memory-leak-of-bio-integrity.patch +platform-x86-dell-wmi-sysman-fix-wmi-data-block-retr.patch +platform-x86-hp-bioscfg-directly-use-firmware_attrib.patch +platform-x86-hp-bioscfg-fix-class-device-unregistrat.patch +platform-x86-firmware_attributes_class-move-include-.patch +platform-x86-firmware_attributes_class-simplify-api.patch +platform-x86-think-lmi-directly-use-firmware_attribu.patch +platform-x86-think-lmi-fix-class-device-unregistrati.patch +platform-x86-dell-sysman-directly-use-firmware_attri.patch 
+platform-x86-dell-wmi-sysman-fix-class-device-unregi.patch +platform-mellanox-mlxreg-lc-fix-logic-error-in-power.patch +drm-bridge-aux-hpd-bridge-fix-assignment-of-the-of_n.patch +smb-client-fix-warning-when-reconnecting-channel.patch +net-usb-lan78xx-fix-warn-in-__netif_napi_del_locked-.patch +drm-i915-gt-fix-timeline-left-held-on-vma-alloc-erro.patch +drm-i915-gsc-mei-interrupt-top-half-should-be-in-irq.patch +idpf-return-0-size-for-rss-key-if-not-supported.patch +idpf-convert-control-queue-mutex-to-a-spinlock.patch +igc-disable-l1.2-pci-e-link-substate-to-avoid-perfor.patch +smb-client-set-missing-retry-flag-in-smb2_writev_cal.patch +smb-client-set-missing-retry-flag-in-cifs_readv_call.patch +smb-client-set-missing-retry-flag-in-cifs_writev_cal.patch +netfs-fix-i_size-updating.patch +lib-test_objagg-set-error-message-in-check_expect_hi.patch +amd-xgbe-align-cl37-an-sequence-as-per-databook.patch +enic-fix-incorrect-mtu-comparison-in-enic_change_mtu.patch +rose-fix-dangling-neighbour-pointers-in-rose_rt_devi.patch +nui-fix-dma_mapping_error-check.patch +net-sched-always-pass-notifications-when-child-class.patch +amd-xgbe-do-not-double-read-link-status.patch +smb-client-fix-race-condition-in-negotiate-timeout-b.patch +arm64-dts-rockchip-fix-internal-usb-hub-instability-.patch +crypto-iaa-remove-dst_null-support.patch +crypto-iaa-do-not-clobber-req-base.data.patch +spinlock-extend-guard-with-spinlock_bh-variants.patch +crypto-zynqmp-sha-add-locking.patch +kunit-qemu_configs-sparc-use-zilog-console.patch +kunit-qemu_configs-sparc-explicitly-enable-config_sp.patch +kunit-qemu_configs-disable-faulting-tests-on-32-bit-.patch +gfs2-initialize-gl_no_formal_ino-earlier.patch +gfs2-rename-gif_-deferred-defer-_delete.patch +gfs2-rename-dinode_demise-to-evict_behavior.patch +gfs2-prevent-inode-creation-race.patch +gfs2-decode-missing-glock-flags-in-tracepoints.patch +gfs2-add-glf_pending_reply-flag.patch +gfs2-replace-gif_defer_delete-with-glf_defer_delete.patch +gfs2-move-gfs2_dinode_dealloc.patch +gfs2-move-gif_alloc_failed-check-out-of-gfs2_ea_deal.patch +gfs2-deallocate-inodes-in-gfs2_create_inode.patch +btrfs-prepare-btrfs_page_mkwrite-for-large-folios.patch +btrfs-fix-wrong-start-offset-for-delalloc-space-rele.patch +sched-fair-rename-h_nr_running-into-h_nr_queued.patch +sched-fair-add-new-cfs_rq.h_nr_runnable.patch +sched-fair-fixup-wake_up_sync-vs-delayed_dequeue.patch +gfs2-move-gfs2_trans_add_databufs.patch +gfs2-don-t-start-unnecessary-transactions-during-log.patch +asoc-tas2764-extend-driver-to-sn012776.patch +asoc-tas2764-reinit-cache-on-part-reset.patch +acpi-thermal-fix-stale-comment-regarding-trip-points.patch +acpi-thermal-execute-_scp-before-reading-trip-points.patch +bonding-mark-active-offloaded-xfrm_states.patch +wifi-ath12k-fix-skb_ext_desc-leak-in-ath12k_dp_tx-er.patch +wifi-ath12k-handle-error-cases-during-extended-skb-a.patch +wifi-ath12k-fix-wrong-handling-of-ccmp256-and-gcmp-c.patch +rdma-rxe-fix-trying-to-register-non-static-key-in-rx.patch +iommu-ipmmu-vmsa-avoid-wformat-security-warning.patch +f2fs-decrease-spare-area-for-pinned-files-for-zoned-.patch +f2fs-zone-introduce-first_zoned_segno-in-f2fs_sb_inf.patch +f2fs-zone-fix-to-calculate-first_zoned_segno-correct.patch +scsi-lpfc-remove-nlp_release_rpi-flag-from-nodelist-.patch +scsi-lpfc-change-lpfc_nodelist-nlp_flag-member-into-.patch +scsi-lpfc-avoid-potential-ndlp-use-after-free-in-dev.patch +hisi_acc_vfio_pci-bugfix-cache-write-back-issue.patch +hisi_acc_vfio_pci-bugfix-the-problem-of-uninstalling.patch 
+bpf-use-common-instruction-history-across-all-states.patch +bpf-do-not-include-stack-ptr-register-in-precision-b.patch +arm64-dts-qcom-sm8650-change-labels-to-lower-case.patch +arm64-dts-qcom-sm8650-fix-domain-idle-state-for-cpu2.patch +arm64-dts-renesas-use-interrupts-extended-for-ethern.patch +arm64-dts-renesas-factor-out-white-hawk-single-board.patch +arm64-dts-renesas-white-hawk-single-improve-ethernet.patch +arm64-dts-qcom-sm8650-add-the-missing-l2-cache-node.patch +ubsan-integer-overflow-depend-on-broken-to-keep-this.patch +remoteproc-k3-call-of_node_put-rmem_np-only-once-in-.patch +remoteproc-k3-r5-add-devm-action-to-release-reserved.patch +remoteproc-k3-r5-use-devm_kcalloc-helper.patch +remoteproc-k3-r5-use-devm_ioremap_wc-helper.patch +remoteproc-k3-r5-use-devm_rproc_add-helper.patch +remoteproc-k3-r5-refactor-sequential-core-power-up-d.patch +netfs-fix-oops-in-write-retry-from-mis-resetting-the.patch +mfd-exynos-lpass-fix-another-error-handling-path-in-.patch +drm-xe-fix-dsb-buffer-coherency.patch +drm-xe-move-dsb-l2-flush-to-a-more-sensible-place.patch +drm-xe-add-interface-to-request-physical-alignment-f.patch +drm-xe-allow-bo-mapping-on-multiple-ggtts.patch +drm-xe-move-dpt-l2-flush-to-a-more-sensible-place.patch +drm-xe-replace-double-space-with-single-space-after-.patch +drm-xe-guc-dead-ct-helper.patch +drm-xe-guc-explicitly-exit-ct-safe-mode-on-unwind.patch +selinux-change-security_compute_sid-to-return-the-ss.patch +drm-simpledrm-do-not-upcast-in-release-helpers.patch +drm-amdgpu-vcn-v5_0_1-to-prevent-fw-checking-rb-duri.patch +drm-i915-dp_mst-work-around-thunderbolt-sink-disconn.patch +drm-amdgpu-add-kicker-fws-loading-for-gfx11-smu13-ps.patch +drm-amd-display-add-more-checks-for-dsc-hubp-ono-gua.patch +arm64-dts-qcom-x1e80100-crd-mark-l12b-and-l15b-alway.patch +crypto-powerpc-poly1305-add-depends-on-broken-for-no.patch +drm-amdgpu-mes-add-missing-locking-in-helper-functio.patch +sched_ext-make-scx_group_set_weight-always-update-tg.patch +scsi-lpfc-restore-clearing-of-nlp_unreg_inp-in-ndlp-.patch +drm-msm-fix-a-fence-leak-in-submit-error-path.patch +drm-msm-fix-another-leak-in-the-submit-error-path.patch +alsa-sb-don-t-allow-changing-the-dma-mode-during-ope.patch +alsa-sb-force-to-disable-dmas-once-when-dma-mode-is-.patch +ata-libata-acpi-do-not-assume-40-wire-cable-if-no-de.patch +ata-pata_cs5536-fix-build-on-32-bit-uml.patch +asoc-amd-yc-add-quirk-for-msi-bravo-17-d7vf-internal.patch +platform-x86-amd-pmc-add-pcspecialist-lafite-pro-v-1.patch +genirq-irq_sim-initialize-work-context-pointers-prop.patch +powerpc-fix-struct-termio-related-ioctl-macros.patch +asoc-amd-yc-update-quirk-data-for-hp-victus.patch +regulator-fan53555-add-enable_time-support-and-soft-.patch +scsi-target-fix-null-pointer-dereference-in-core_scs.patch +aoe-defer-rexmit-timer-downdev-work-to-workqueue.patch +wifi-mac80211-drop-invalid-source-address-ocb-frames.patch +wifi-ath6kl-remove-warn-on-bad-firmware-input.patch +acpica-refuse-to-evaluate-a-method-if-arguments-are-.patch +mtd-spinand-fix-memory-leak-of-ecc-engine-conf.patch +rcu-return-early-if-callback-is-not-specified.patch +firmware-arm_ffa-replace-mutex-with-rwlock-to-avoid-.patch +add-a-string-to-qstr-constructor.patch +module-provide-export_symbol_gpl_for_modules-helper.patch +fs-export-anon_inode_make_secure_inode-and-fix-secre.patch diff --git a/queue-6.12/smb-client-fix-race-condition-in-negotiate-timeout-b.patch b/queue-6.12/smb-client-fix-race-condition-in-negotiate-timeout-b.patch new file mode 100644 index 0000000000..e6fdcffdea 
--- /dev/null +++ b/queue-6.12/smb-client-fix-race-condition-in-negotiate-timeout-b.patch @@ -0,0 +1,128 @@ +From e54863249325eb1e766d6dedac5bff9fd8cf509b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 3 Jul 2025 21:29:52 +0800 +Subject: smb: client: fix race condition in negotiate timeout by using more + precise timing + +From: Wang Zhaolong + +[ Upstream commit 266b5d02e14f3a0e07414e11f239397de0577a1d ] + +When the SMB server reboots and the client immediately accesses the mount +point, a race condition can occur that causes operations to fail with +"Host is down" error. + +Reproduction steps: + # Mount SMB share + mount -t cifs //192.168.245.109/TEST /mnt/ -o xxxx + ls /mnt + + # Reboot server + ssh root@192.168.245.109 reboot + ssh root@192.168.245.109 /path/to/cifs_server_setup.sh + ssh root@192.168.245.109 systemctl stop firewalld + + # Immediate access fails + ls /mnt + ls: cannot access '/mnt': Host is down + + # But works if there is a delay + +The issue is caused by a race condition between negotiate and reconnect. +The 20-second negotiate timeout mechanism can interfere with the normal +recovery process when both are triggered simultaneously. + + ls cifsd +--------------------------------------------------- + cifs_getattr + cifs_revalidate_dentry + cifs_get_inode_info + cifs_get_fattr + smb2_query_path_info + smb2_compound_op + SMB2_open_init + smb2_reconnect + cifs_negotiate_protocol + smb2_negotiate + cifs_send_recv + smb_send_rqst + wait_for_response + cifs_demultiplex_thread + cifs_read_from_socket + cifs_readv_from_socket + server_unresponsive + cifs_reconnect + __cifs_reconnect + cifs_abort_connection + mid->mid_state = MID_RETRY_NEEDED + cifs_wake_up_task + cifs_sync_mid_result + // case MID_RETRY_NEEDED + rc = -EAGAIN; + // In smb2_negotiate() + rc = -EHOSTDOWN; + +The server_unresponsive() timeout triggers cifs_reconnect(), which aborts +ongoing mid requests and causes the ls command to receive -EAGAIN, leading +to -EHOSTDOWN. + +Fix this by introducing a dedicated `neg_start` field to +precisely tracks when the negotiate process begins. The timeout check +now uses this accurate timestamp instead of `lstrp`, ensuring that: + +1. Timeout is only triggered after negotiate has actually run for 20s +2. The mechanism doesn't interfere with concurrent recovery processes +3. 
Uninitialized timestamps (value 0) don't trigger false timeouts + +Fixes: 7ccc1465465d ("smb: client: fix hang in wait_for_response() for negproto") +Signed-off-by: Wang Zhaolong +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/client/cifsglob.h | 1 + + fs/smb/client/connect.c | 7 ++++--- + 2 files changed, 5 insertions(+), 3 deletions(-) + +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index e77c0b3e49624..b74637ae9085a 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -743,6 +743,7 @@ struct TCP_Server_Info { + __le32 session_key_id; /* retrieved from negotiate response and send in session setup request */ + struct session_key session_key; + unsigned long lstrp; /* when we got last response from this server */ ++ unsigned long neg_start; /* when negotiate started (jiffies) */ + struct cifs_secmech secmech; /* crypto sec mech functs, descriptors */ + #define CIFS_NEGFLAVOR_UNENCAP 1 /* wct == 17, but no ext_sec */ + #define CIFS_NEGFLAVOR_EXTENDED 2 /* wct == 17, ext_sec bit set */ +diff --git a/fs/smb/client/connect.c b/fs/smb/client/connect.c +index 9275e0d1e2f64..fc3f0b956f323 100644 +--- a/fs/smb/client/connect.c ++++ b/fs/smb/client/connect.c +@@ -677,12 +677,12 @@ server_unresponsive(struct TCP_Server_Info *server) + /* + * If we're in the process of mounting a share or reconnecting a session + * and the server abruptly shut down (e.g. socket wasn't closed, packet +- * had been ACK'ed but no SMB response), don't wait longer than 20s to +- * negotiate protocol. ++ * had been ACK'ed but no SMB response), don't wait longer than 20s from ++ * when negotiate actually started. + */ + spin_lock(&server->srv_lock); + if (server->tcpStatus == CifsInNegotiate && +- time_after(jiffies, server->lstrp + 20 * HZ)) { ++ time_after(jiffies, server->neg_start + 20 * HZ)) { + spin_unlock(&server->srv_lock); + cifs_reconnect(server, false); + return true; +@@ -4009,6 +4009,7 @@ cifs_negotiate_protocol(const unsigned int xid, struct cifs_ses *ses, + + server->lstrp = jiffies; + server->tcpStatus = CifsInNegotiate; ++ server->neg_start = jiffies; + spin_unlock(&server->srv_lock); + + rc = server->ops->negotiate(xid, ses, server); +-- +2.39.5 + diff --git a/queue-6.12/smb-client-fix-warning-when-reconnecting-channel.patch b/queue-6.12/smb-client-fix-warning-when-reconnecting-channel.patch new file mode 100644 index 0000000000..ee2e3f4454 --- /dev/null +++ b/queue-6.12/smb-client-fix-warning-when-reconnecting-channel.patch @@ -0,0 +1,129 @@ +From c262bdb4750925bcd9c88cf3e32d1f860f6258c8 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 25 Jun 2025 00:00:17 -0300 +Subject: smb: client: fix warning when reconnecting channel + +From: Paulo Alcantara + +[ Upstream commit 3bbe46716092d8ef6b0df4b956f585c5cd0fc78e ] + +When reconnecting a channel in smb2_reconnect_server(), a dummy tcon +is passed down to smb2_reconnect() with ->query_interfaces +uninitialized, so we can't call queue_delayed_work() on it. + +Fix the following warning by ensuring that we're queueing the delayed +worker from the correct tcon.
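+
+The invariant being tripped, in miniature (a sketch using a hypothetical
+work item, not code from the driver):
+
+	struct delayed_work dw = {};	/* never passed to INIT_DELAYED_WORK() */
+
+	/* WARNs in __queue_delayed_work(): for an uninitialized item,
+	 * dw.timer.function is NULL rather than delayed_work_timer_fn. */
+	queue_delayed_work(system_wq, &dw, msecs_to_jiffies(100));
+
+The warning in question: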
+ +WARNING: CPU: 4 PID: 1126 at kernel/workqueue.c:2498 __queue_delayed_work+0x1d2/0x200 +Modules linked in: cifs cifs_arc4 nls_ucs2_utils cifs_md4 [last unloaded: cifs] +CPU: 4 UID: 0 PID: 1126 Comm: kworker/4:0 Not tainted 6.16.0-rc3 #5 PREEMPT(voluntary) +Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-4.fc42 04/01/2014 +Workqueue: cifsiod smb2_reconnect_server [cifs] +RIP: 0010:__queue_delayed_work+0x1d2/0x200 +Code: 41 5e 41 5f e9 7f ee ff ff 90 0f 0b 90 e9 5d ff ff ff bf 02 00 +00 00 e8 6c f3 07 00 89 c3 eb bd 90 0f 0b 90 e9 57 f> 0b 90 e9 65 fe +ff ff 90 0f 0b 90 e9 72 fe ff ff 90 0f 0b 90 e9 +RSP: 0018:ffffc900014afad8 EFLAGS: 00010003 +RAX: 0000000000000000 RBX: ffff888124d99988 RCX: ffffffff81399cc1 +RDX: dffffc0000000000 RSI: ffff888114326e00 RDI: ffff888124d999f0 +RBP: 000000000000ea60 R08: 0000000000000001 R09: ffffed10249b3331 +R10: ffff888124d9998f R11: 0000000000000004 R12: 0000000000000040 +R13: ffff888114326e00 R14: ffff888124d999d8 R15: ffff888114939020 +FS: 0000000000000000(0000) GS:ffff88829f7fe000(0000) knlGS:0000000000000000 +CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 +CR2: 00007ffe7a2b4038 CR3: 0000000120a6f000 CR4: 0000000000750ef0 +PKRU: 55555554 +Call Trace: + + queue_delayed_work_on+0xb4/0xc0 + smb2_reconnect+0xb22/0xf50 [cifs] + smb2_reconnect_server+0x413/0xd40 [cifs] + ? __pfx_smb2_reconnect_server+0x10/0x10 [cifs] + ? local_clock_noinstr+0xd/0xd0 + ? local_clock+0x15/0x30 + ? lock_release+0x29b/0x390 + process_one_work+0x4c5/0xa10 + ? __pfx_process_one_work+0x10/0x10 + ? __list_add_valid_or_report+0x37/0x120 + worker_thread+0x2f1/0x5a0 + ? __kthread_parkme+0xde/0x100 + ? __pfx_worker_thread+0x10/0x10 + kthread+0x1fe/0x380 + ? kthread+0x10f/0x380 + ? __pfx_kthread+0x10/0x10 + ? local_clock_noinstr+0xd/0xd0 + ? ret_from_fork+0x1b/0x1f0 + ? local_clock+0x15/0x30 + ? lock_release+0x29b/0x390 + ? rcu_is_watching+0x20/0x50 + ? __pfx_kthread+0x10/0x10 + ret_from_fork+0x15b/0x1f0 + ? 
__pfx_kthread+0x10/0x10 + ret_from_fork_asm+0x1a/0x30 + +irq event stamp: 1116206 +hardirqs last enabled at (1116205): [] __up_console_sem+0x52/0x60 +hardirqs last disabled at (1116206): [] queue_delayed_work_on+0x6e/0xc0 +softirqs last enabled at (1116138): [] __smb_send_rqst+0x42d/0x950 [cifs] +softirqs last disabled at (1116136): [] release_sock+0x21/0xf0 + +Cc: linux-cifs@vger.kernel.org +Reported-by: David Howells +Fixes: 42ca547b13a2 ("cifs: do not disable interface polling on failure") +Reviewed-by: David Howells +Tested-by: David Howells +Reviewed-by: Shyam Prasad N +Signed-off-by: Paulo Alcantara (Red Hat) +Signed-off-by: David Howells +Tested-by: Steve French +Signed-off-by: Steve French +Signed-off-by: Sasha Levin +--- + fs/smb/client/cifsglob.h | 1 + + fs/smb/client/smb2pdu.c | 10 ++++------ + 2 files changed, 5 insertions(+), 6 deletions(-) + +diff --git a/fs/smb/client/cifsglob.h b/fs/smb/client/cifsglob.h +index c66655adecb2c..e77c0b3e49624 100644 +--- a/fs/smb/client/cifsglob.h ++++ b/fs/smb/client/cifsglob.h +@@ -1275,6 +1275,7 @@ struct cifs_tcon { + bool use_persistent:1; /* use persistent instead of durable handles */ + bool no_lease:1; /* Do not request leases on files or directories */ + bool use_witness:1; /* use witness protocol */ ++ bool dummy:1; /* dummy tcon used for reconnecting channels */ + __le32 capabilities; + __u32 share_flags; + __u32 maximal_access; +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index c6ae395a46925..3e501da62880c 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -440,9 +440,9 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon, + free_xid(xid); + ses->flags &= ~CIFS_SES_FLAGS_PENDING_QUERY_INTERFACES; + +- /* regardless of rc value, setup polling */ +- queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, +- (SMB_INTERFACE_POLL_INTERVAL * HZ)); ++ if (!tcon->ipc && !tcon->dummy) ++ queue_delayed_work(cifsiod_wq, &tcon->query_interfaces, ++ (SMB_INTERFACE_POLL_INTERVAL * HZ)); + + mutex_unlock(&ses->session_mutex); + +@@ -4234,10 +4234,8 @@ void smb2_reconnect_server(struct work_struct *work) + } + goto done; + } +- + tcon->status = TID_GOOD; +- tcon->retry = false; +- tcon->need_reconnect = false; ++ tcon->dummy = true; + + /* now reconnect sessions for necessary channels */ + list_for_each_entry_safe(ses, ses2, &tmp_ses_list, rlist) { +-- +2.39.5 + diff --git a/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_readv_call.patch b/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_readv_call.patch new file mode 100644 index 0000000000..194b9589c8 --- /dev/null +++ b/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_readv_call.patch @@ -0,0 +1,40 @@ +From 18cb1377a60b70043b8f1afbcbc81f78a316e122 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 17:38:42 +0100 +Subject: smb: client: set missing retry flag in cifs_readv_callback() + +From: Paulo Alcantara + +[ Upstream commit 0e60bae24ad28ab06a485698077d3c626f1e54ab ] + +Set NETFS_SREQ_NEED_RETRY flag to tell netfslib that the subreq needs +to be retried. 
+ +Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") +Signed-off-by: Paulo Alcantara (Red Hat) +Signed-off-by: David Howells +Link: https://lore.kernel.org/20250701163852.2171681-8-dhowells@redhat.com +Tested-by: Steve French +Cc: linux-cifs@vger.kernel.org +Cc: netfs@lists.linux.dev +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/smb/client/cifssmb.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c +index d6ba55d4720d2..449ac718a8beb 100644 +--- a/fs/smb/client/cifssmb.c ++++ b/fs/smb/client/cifssmb.c +@@ -1310,6 +1310,7 @@ cifs_readv_callback(struct mid_q_entry *mid) + break; + case MID_REQUEST_SUBMITTED: + case MID_RETRY_NEEDED: ++ __set_bit(NETFS_SREQ_NEED_RETRY, &rdata->subreq.flags); + rdata->result = -EAGAIN; + if (server->sign && rdata->got_bytes) + /* reset bytes number since we can not check a sign */ +-- +2.39.5 + diff --git a/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_writev_cal.patch b/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_writev_cal.patch new file mode 100644 index 0000000000..23380ae07b --- /dev/null +++ b/queue-6.12/smb-client-set-missing-retry-flag-in-cifs_writev_cal.patch @@ -0,0 +1,40 @@ +From 4d79a2921998b1d823fd033221e4cdb383d5c397 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 17:38:43 +0100 +Subject: smb: client: set missing retry flag in cifs_writev_callback() + +From: Paulo Alcantara + +[ Upstream commit 74ee76bea4b445c023d04806e0bcd78a912fd30b ] + +Set NETFS_SREQ_NEED_RETRY flag to tell netfslib that the subreq needs +to be retried. + +Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") +Signed-off-by: Paulo Alcantara (Red Hat) +Signed-off-by: David Howells +Link: https://lore.kernel.org/20250701163852.2171681-9-dhowells@redhat.com +Tested-by: Steve French +Cc: linux-cifs@vger.kernel.org +Cc: netfs@lists.linux.dev +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/smb/client/cifssmb.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/fs/smb/client/cifssmb.c b/fs/smb/client/cifssmb.c +index 449ac718a8beb..e3d9367eaec37 100644 +--- a/fs/smb/client/cifssmb.c ++++ b/fs/smb/client/cifssmb.c +@@ -1682,6 +1682,7 @@ cifs_writev_callback(struct mid_q_entry *mid) + break; + case MID_REQUEST_SUBMITTED: + case MID_RETRY_NEEDED: ++ __set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags); + result = -EAGAIN; + break; + default: +-- +2.39.5 + diff --git a/queue-6.12/smb-client-set-missing-retry-flag-in-smb2_writev_cal.patch b/queue-6.12/smb-client-set-missing-retry-flag-in-smb2_writev_cal.patch new file mode 100644 index 0000000000..ff6e396256 --- /dev/null +++ b/queue-6.12/smb-client-set-missing-retry-flag-in-smb2_writev_cal.patch @@ -0,0 +1,40 @@ +From 7acf46247551aa78753c8ea7f3c7a64e9247dac0 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 1 Jul 2025 17:38:41 +0100 +Subject: smb: client: set missing retry flag in smb2_writev_callback() + +From: Paulo Alcantara + +[ Upstream commit e67e75edeb88022c04f8e0a173e1ff6dc688f155 ] + +Set NETFS_SREQ_NEED_RETRY flag to tell netfslib that the subreq needs +to be retried. 
+ +Fixes: ee4cdf7ba857 ("netfs: Speed up buffered reading") +Signed-off-by: Paulo Alcantara (Red Hat) +Signed-off-by: David Howells +Link: https://lore.kernel.org/20250701163852.2171681-7-dhowells@redhat.com +Tested-by: Steve French +Cc: linux-cifs@vger.kernel.org +Cc: netfs@lists.linux.dev +Signed-off-by: Christian Brauner +Signed-off-by: Sasha Levin +--- + fs/smb/client/smb2pdu.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/fs/smb/client/smb2pdu.c b/fs/smb/client/smb2pdu.c +index 3e501da62880c..d514f95deb7e7 100644 +--- a/fs/smb/client/smb2pdu.c ++++ b/fs/smb/client/smb2pdu.c +@@ -4869,6 +4869,7 @@ smb2_writev_callback(struct mid_q_entry *mid) + break; + case MID_REQUEST_SUBMITTED: + case MID_RETRY_NEEDED: ++ __set_bit(NETFS_SREQ_NEED_RETRY, &wdata->subreq.flags); + result = -EAGAIN; + break; + case MID_RESPONSE_MALFORMED: +-- +2.39.5 + diff --git a/queue-6.12/spi-spi-fsl-dspi-clear-completion-counter-before-ini.patch b/queue-6.12/spi-spi-fsl-dspi-clear-completion-counter-before-ini.patch new file mode 100644 index 0000000000..2dbab7e4f4 --- /dev/null +++ b/queue-6.12/spi-spi-fsl-dspi-clear-completion-counter-before-ini.patch @@ -0,0 +1,61 @@ +From 7a656ac6829ae4a42cbc3e06d8c29bb7a3d17ded Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 27 Jun 2025 11:21:37 +0100 +Subject: spi: spi-fsl-dspi: Clear completion counter before initiating + transfer + +From: James Clark + +[ Upstream commit fa60c094c19b97e103d653f528f8d9c178b6a5f5 ] + +In target mode, extra interrupts can be received between the end of a +transfer and halting the module if the host continues sending more data. +If the interrupt from this occurs after the reinit_completion() then the +completion counter is left at a non-zero value. The next unrelated +transfer initiated by userspace will then complete immediately without +waiting for the interrupt or writing to the RX buffer. + +Fix it by resetting the counter before the transfer so that lingering +values are cleared. This is done after clearing the FIFOs and the +status register but before the transfer is initiated, so no interrupts +should be received at this point resulting in other race conditions. + +Fixes: 4f5ee75ea171 ("spi: spi-fsl-dspi: Replace interruptible wait queue with a simple completion") +Signed-off-by: James Clark +Reviewed-by: Frank Li +Link: https://patch.msgid.link/20250627-james-nxp-spi-dma-v4-1-178dba20c120@linaro.org +Signed-off-by: Mark Brown +Signed-off-by: Sasha Levin +--- + drivers/spi/spi-fsl-dspi.c | 11 ++++++++++- + 1 file changed, 10 insertions(+), 1 deletion(-) + +diff --git a/drivers/spi/spi-fsl-dspi.c b/drivers/spi/spi-fsl-dspi.c +index 7c43df252328d..e26363ae74890 100644 +--- a/drivers/spi/spi-fsl-dspi.c ++++ b/drivers/spi/spi-fsl-dspi.c +@@ -983,11 +983,20 @@ static int dspi_transfer_one_message(struct spi_controller *ctlr, + if (dspi->devtype_data->trans_mode == DSPI_DMA_MODE) { + status = dspi_dma_xfer(dspi); + } else { ++ /* ++ * Reinitialize the completion before transferring data ++ * to avoid the case where it might remain in the done ++ * state due to a spurious interrupt from a previous ++ * transfer. This could falsely signal that the current ++ * transfer has completed. 
++ */ ++ if (dspi->irq) ++ reinit_completion(&dspi->xfer_done); ++ + dspi_fifo_write(dspi); + + if (dspi->irq) { + wait_for_completion(&dspi->xfer_done); +- reinit_completion(&dspi->xfer_done); + } else { + do { + status = dspi_poll(dspi); +-- +2.39.5 + diff --git a/queue-6.12/spinlock-extend-guard-with-spinlock_bh-variants.patch b/queue-6.12/spinlock-extend-guard-with-spinlock_bh-variants.patch new file mode 100644 index 0000000000..6abf115ac1 --- /dev/null +++ b/queue-6.12/spinlock-extend-guard-with-spinlock_bh-variants.patch @@ -0,0 +1,54 @@ +From 77ab04346276170191433d9d28fa222cd1c2b04f Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 14 Jan 2025 13:36:34 +0100 +Subject: spinlock: extend guard with spinlock_bh variants + +From: Christian Marangi + +[ Upstream commit d6104733178293b40044525b06d6a26356934da3 ] + +Extend guard APIs with missing raw/spinlock_bh variants. + +Signed-off-by: Christian Marangi +Acked-by: Peter Zijlstra (Intel) +Signed-off-by: Herbert Xu +Stable-dep-of: c7e68043620e ("crypto: zynqmp-sha - Add locking") +Signed-off-by: Sasha Levin +--- + include/linux/spinlock.h | 13 +++++++++++++ + 1 file changed, 13 insertions(+) + +diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h +index 63dd8cf3c3c2b..d3561c4a080e2 100644 +--- a/include/linux/spinlock.h ++++ b/include/linux/spinlock.h +@@ -548,6 +548,12 @@ DEFINE_LOCK_GUARD_1(raw_spinlock_irq, raw_spinlock_t, + + DEFINE_LOCK_GUARD_1_COND(raw_spinlock_irq, _try, raw_spin_trylock_irq(_T->lock)) + ++DEFINE_LOCK_GUARD_1(raw_spinlock_bh, raw_spinlock_t, ++ raw_spin_lock_bh(_T->lock), ++ raw_spin_unlock_bh(_T->lock)) ++ ++DEFINE_LOCK_GUARD_1_COND(raw_spinlock_bh, _try, raw_spin_trylock_bh(_T->lock)) ++ + DEFINE_LOCK_GUARD_1(raw_spinlock_irqsave, raw_spinlock_t, + raw_spin_lock_irqsave(_T->lock, _T->flags), + raw_spin_unlock_irqrestore(_T->lock, _T->flags), +@@ -569,6 +575,13 @@ DEFINE_LOCK_GUARD_1(spinlock_irq, spinlock_t, + DEFINE_LOCK_GUARD_1_COND(spinlock_irq, _try, + spin_trylock_irq(_T->lock)) + ++DEFINE_LOCK_GUARD_1(spinlock_bh, spinlock_t, ++ spin_lock_bh(_T->lock), ++ spin_unlock_bh(_T->lock)) ++ ++DEFINE_LOCK_GUARD_1_COND(spinlock_bh, _try, ++ spin_trylock_bh(_T->lock)) ++ + DEFINE_LOCK_GUARD_1(spinlock_irqsave, spinlock_t, + spin_lock_irqsave(_T->lock, _T->flags), + spin_unlock_irqrestore(_T->lock, _T->flags), +-- +2.39.5 + diff --git a/queue-6.12/ubsan-integer-overflow-depend-on-broken-to-keep-this.patch b/queue-6.12/ubsan-integer-overflow-depend-on-broken-to-keep-this.patch new file mode 100644 index 0000000000..de692ab6d6 --- /dev/null +++ b/queue-6.12/ubsan-integer-overflow-depend-on-broken-to-keep-this.patch @@ -0,0 +1,45 @@ +From 31818f0559586b5a128a38e71fa812b0ba839965 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 28 May 2025 11:26:22 -0700 +Subject: ubsan: integer-overflow: depend on BROKEN to keep this out of CI + +From: Kees Cook + +[ Upstream commit d6a0e0bfecccdcecb08defe75a137c7262352102 ] + +Depending on !COMPILE_TEST isn't sufficient to keep this feature out of +CI because we can't stop it from being included in randconfig builds. +This feature is still highly experimental, and is developed in lock-step +with Clang's Overflow Behavior Types[1]. Depend on BROKEN to keep it +from being enabled by anyone not expecting it. 
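+
+The idiom works because CONFIG_BROKEN is a promptless bool that nothing
+selects, so any symbol gated on it cannot be turned on even by
+randconfig. A minimal illustration of the gating (hypothetical symbol):
+
+	config MY_EXPERIMENTAL_FEATURE
+		bool "Highly experimental feature"
+		depends on BROKEN	# unselectable until this line is dropped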
+ +Link: https://discourse.llvm.org/t/rfc-v2-clang-introduce-overflowbehaviortypes-for-wrapping-and-non-wrapping-arithmetic/86507 [1] +Reported-by: kernel test robot +Closes: https://lore.kernel.org/oe-lkp/202505281024.f42beaa7-lkp@intel.com +Fixes: 557f8c582a9b ("ubsan: Reintroduce signed overflow sanitizer") +Acked-by: Eric Biggers +Link: https://lore.kernel.org/r/20250528182616.work.296-kees@kernel.org +Reviewed-by: Nathan Chancellor +Acked-by: Marco Elver +Signed-off-by: Kees Cook +Signed-off-by: Sasha Levin +--- + lib/Kconfig.ubsan | 2 ++ + 1 file changed, 2 insertions(+) + +diff --git a/lib/Kconfig.ubsan b/lib/Kconfig.ubsan +index 37655f58b8554..4e4dc430614a4 100644 +--- a/lib/Kconfig.ubsan ++++ b/lib/Kconfig.ubsan +@@ -118,6 +118,8 @@ config UBSAN_UNREACHABLE + + config UBSAN_SIGNED_WRAP + bool "Perform checking for signed arithmetic wrap-around" ++ # This is very experimental so drop the next line if you really want it ++ depends on BROKEN + depends on !COMPILE_TEST + # The no_sanitize attribute was introduced in GCC with version 8. + depends on !CC_IS_GCC || GCC_VERSION >= 80000 +-- +2.39.5 + diff --git a/queue-6.12/wifi-ath12k-fix-skb_ext_desc-leak-in-ath12k_dp_tx-er.patch b/queue-6.12/wifi-ath12k-fix-skb_ext_desc-leak-in-ath12k_dp_tx-er.patch new file mode 100644 index 0000000000..3f5a95975c --- /dev/null +++ b/queue-6.12/wifi-ath12k-fix-skb_ext_desc-leak-in-ath12k_dp_tx-er.patch @@ -0,0 +1,41 @@ +From 62be430e75897caaf483e508d645ced9b9b25540 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 22 Jan 2025 17:01:12 +0100 +Subject: wifi: ath12k: fix skb_ext_desc leak in ath12k_dp_tx() error path + +From: Nicolas Escande + +[ Upstream commit 28a9972e0f0693cd4d08f431c992fa6be39c788c ] + +When vlan support was added, we missed that when +ath12k_dp_prepare_htt_metadata() returns an error we also need to free +the skb holding the metadata before going on with the cleanup process. + +Compile tested only. 
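+
+The fix follows the usual kernel unwind idiom: anything allocated before
+an error jump must be released before (or at) the shared cleanup label.
+A condensed sketch of the corrected path (helper name hypothetical):
+
+	ret = prepare_htt_metadata(skb_ext_desc);
+	if (ret < 0) {
+		kfree_skb(skb_ext_desc);	/* previously leaked here */
+		goto fail_unmap_dma;
+	}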
+ +Fixes: 26dd8ccdba4d ("wifi: ath12k: dynamic VLAN support") +Signed-off-by: Nicolas Escande +Reviewed-by: Aditya Kumar Singh +Link: https://patch.msgid.link/20250122160112.3234558-1-nico.escande@gmail.com +Signed-off-by: Jeff Johnson +Stable-dep-of: 37a068fc9dc4 ("wifi: ath12k: Handle error cases during extended skb allocation") +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/ath/ath12k/dp_tx.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c +index 734e3da4cbf19..9e63d2d97c095 100644 +--- a/drivers/net/wireless/ath/ath12k/dp_tx.c ++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c +@@ -397,6 +397,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif, + if (ret < 0) { + ath12k_dbg(ab, ATH12K_DBG_DP_TX, + "Failed to add HTT meta data, dropping packet\n"); ++ kfree_skb(skb_ext_desc); + goto fail_unmap_dma; + } + } +-- +2.39.5 + diff --git a/queue-6.12/wifi-ath12k-fix-wrong-handling-of-ccmp256-and-gcmp-c.patch b/queue-6.12/wifi-ath12k-fix-wrong-handling-of-ccmp256-and-gcmp-c.patch new file mode 100644 index 0000000000..56a24f2410 --- /dev/null +++ b/queue-6.12/wifi-ath12k-fix-wrong-handling-of-ccmp256-and-gcmp-c.patch @@ -0,0 +1,123 @@ +From 190154b62a1ad50f200f2b7f9a3cf3d23ed9ffe4 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 5 Jul 2025 13:56:09 -0400 +Subject: wifi: ath12k: fix wrong handling of CCMP256 and GCMP ciphers + +From: Rameshkumar Sundaram + +[ Upstream commit f5d6b15d9503263d9425dcde9cc2fd401a32b0f2 ] + +Currently for CCMP256, GCMP128 and GCMP256 ciphers, in +ath12k_install_key() IEEE80211_KEY_FLAG_GENERATE_IV_MGMT is not set and +in ath12k_mac_mgmt_tx_wmi() a length of IEEE80211_CCMP_MIC_LEN is reserved +for all ciphers. + +This results in unexpected drop of protected management frames in case +either of above 3 ciphers is used. The reason is, without +IEEE80211_KEY_FLAG_GENERATE_IV_MGMT set, mac80211 will not generate +CCMP/GCMP headers in TX frame for ath12k. +Also MIC length reserved is wrong and such frames are dropped by hardware. + +Fix this by setting IEEE80211_KEY_FLAG_GENERATE_IV_MGMT flag for above +ciphers and by reserving proper MIC length for those ciphers. 
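+
+For reference, the MIC sizes involved (per IEEE 802.11, as also encoded
+in the driver's ath12k_dp_rx_crypto_mic_len() helper): CCMP-128 uses an
+8-byte MIC, while CCMP-256, GCMP-128 and GCMP-256 all use 16 bytes, so
+reserving a fixed IEEE80211_CCMP_MIC_LEN (8 bytes) under-allocates for
+the latter three ciphers:
+
+	mic_len = ath12k_dp_rx_crypto_mic_len(ar, enctype);	/* 8 or 16 */
+	skb_put(skb, mic_len);	/* was: skb_put(skb, IEEE80211_CCMP_MIC_LEN) */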
+ +Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.3.1-00173-QCAHKSWPL_SILICONZ-1 +Tested-on: WCN7850 hw2.0 PCI WLAN.HMT.1.0.c5-00481-QCAHMTSWPL_V1.0_V2.0_SILICONZ-3 + +Fixes: d889913205cf ("wifi: ath12k: driver for Qualcomm Wi-Fi 7 devices") +Signed-off-by: Rameshkumar Sundaram +Reviewed-by: Vasanthakumar Thiagarajan +Link: https://patch.msgid.link/20250415195812.2633923-2-rameshkumar.sundaram@oss.qualcomm.com +Signed-off-by: Jeff Johnson +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/ath/ath12k/dp_rx.c | 3 +-- + drivers/net/wireless/ath/ath12k/dp_rx.h | 3 +++ + drivers/net/wireless/ath/ath12k/mac.c | 16 ++++++++++------ + 3 files changed, 14 insertions(+), 8 deletions(-) + +diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.c b/drivers/net/wireless/ath/ath12k/dp_rx.c +index 1623298ba2c47..eebdcc16e8fc4 100644 +--- a/drivers/net/wireless/ath/ath12k/dp_rx.c ++++ b/drivers/net/wireless/ath/ath12k/dp_rx.c +@@ -1868,8 +1868,7 @@ static void ath12k_dp_rx_h_csum_offload(struct ath12k *ar, struct sk_buff *msdu) + CHECKSUM_NONE : CHECKSUM_UNNECESSARY; + } + +-static int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, +- enum hal_encrypt_type enctype) ++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype) + { + switch (enctype) { + case HAL_ENCRYPT_TYPE_OPEN: +diff --git a/drivers/net/wireless/ath/ath12k/dp_rx.h b/drivers/net/wireless/ath/ath12k/dp_rx.h +index eb1f92559179b..4232091d9e328 100644 +--- a/drivers/net/wireless/ath/ath12k/dp_rx.h ++++ b/drivers/net/wireless/ath/ath12k/dp_rx.h +@@ -143,4 +143,7 @@ int ath12k_dp_htt_tlv_iter(struct ath12k_base *ab, const void *ptr, size_t len, + int (*iter)(struct ath12k_base *ar, u16 tag, u16 len, + const void *ptr, void *data), + void *data); ++ ++int ath12k_dp_rx_crypto_mic_len(struct ath12k *ar, enum hal_encrypt_type enctype); ++ + #endif /* ATH12K_DP_RX_H */ +diff --git a/drivers/net/wireless/ath/ath12k/mac.c b/drivers/net/wireless/ath/ath12k/mac.c +index fbf5d57283576..4ca684278c367 100644 +--- a/drivers/net/wireless/ath/ath12k/mac.c ++++ b/drivers/net/wireless/ath/ath12k/mac.c +@@ -3864,8 +3864,8 @@ static int ath12k_install_key(struct ath12k_vif *arvif, + + switch (key->cipher) { + case WLAN_CIPHER_SUITE_CCMP: ++ case WLAN_CIPHER_SUITE_CCMP_256: + arg.key_cipher = WMI_CIPHER_AES_CCM; +- /* TODO: Re-check if flag is valid */ + key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT; + break; + case WLAN_CIPHER_SUITE_TKIP: +@@ -3873,12 +3873,10 @@ static int ath12k_install_key(struct ath12k_vif *arvif, + arg.key_txmic_len = 8; + arg.key_rxmic_len = 8; + break; +- case WLAN_CIPHER_SUITE_CCMP_256: +- arg.key_cipher = WMI_CIPHER_AES_CCM; +- break; + case WLAN_CIPHER_SUITE_GCMP: + case WLAN_CIPHER_SUITE_GCMP_256: + arg.key_cipher = WMI_CIPHER_AES_GCM; ++ key->flags |= IEEE80211_KEY_FLAG_GENERATE_IV_MGMT; + break; + default: + ath12k_warn(ar->ab, "cipher %d is not supported\n", key->cipher); +@@ -5725,6 +5723,8 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_vif *arvif, + struct ath12k_base *ab = ar->ab; + struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data; + struct ieee80211_tx_info *info; ++ enum hal_encrypt_type enctype; ++ unsigned int mic_len; + dma_addr_t paddr; + int buf_id; + int ret; +@@ -5738,12 +5738,16 @@ static int ath12k_mac_mgmt_tx_wmi(struct ath12k *ar, struct ath12k_vif *arvif, + return -ENOSPC; + + info = IEEE80211_SKB_CB(skb); +- if (!(info->flags & IEEE80211_TX_CTL_HW_80211_ENCAP)) { ++ if ((ATH12K_SKB_CB(skb)->flags & ATH12K_SKB_CIPHER_SET) && ++ !(info->flags & 
IEEE80211_TX_CTL_HW_80211_ENCAP)) { + if ((ieee80211_is_action(hdr->frame_control) || + ieee80211_is_deauth(hdr->frame_control) || + ieee80211_is_disassoc(hdr->frame_control)) && + ieee80211_has_protected(hdr->frame_control)) { +- skb_put(skb, IEEE80211_CCMP_MIC_LEN); ++ enctype = ++ ath12k_dp_tx_get_encrypt_type(ATH12K_SKB_CB(skb)->cipher); ++ mic_len = ath12k_dp_rx_crypto_mic_len(ar, enctype); ++ skb_put(skb, mic_len); + } + } + +-- +2.39.5 + diff --git a/queue-6.12/wifi-ath12k-handle-error-cases-during-extended-skb-a.patch b/queue-6.12/wifi-ath12k-handle-error-cases-during-extended-skb-a.patch new file mode 100644 index 0000000000..61db0cbbe1 --- /dev/null +++ b/queue-6.12/wifi-ath12k-handle-error-cases-during-extended-skb-a.patch @@ -0,0 +1,96 @@ +From 5727977317011b92155b81c2a0868a85063ee431 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 11 Apr 2025 11:31:51 +0530 +Subject: wifi: ath12k: Handle error cases during extended skb allocation + +From: P Praneesh + +[ Upstream commit 37a068fc9dc4feb8d76e8896bb33883d06c11a6b ] + +Currently, in the case of extended skb allocation, the buffer is freed +before the DMA unmap operation. This premature deletion can result in +skb->data corruption, as the memory region could be re-allocated for other +purposes. Fix this issue by reordering the failure cases by calling +dma_unmap_single() first, then followed by the corresponding kfree_skb(). +This helps avoid data corruption in case of failures in dp_tx(). + +Tested-on: QCN9274 hw2.0 PCI WLAN.WBE.1.4.1-00199-QCAHKSWPL_SILICONZ-1 +Tested-on: WCN7850 hw2.0 PCI WLAN.HMT.1.0.c5-00481-QCAHMTSWPL_V1.0_V2.0_SILICONZ-3 + +Fixes: d889913205cf ("wifi: ath12k: driver for Qualcomm Wi-Fi 7 devices") +Signed-off-by: P Praneesh +Reviewed-by: Vasanthakumar Thiagarajan +Link: https://patch.msgid.link/20250411060154.1388159-2-praneesh.p@oss.qualcomm.com +Signed-off-by: Jeff Johnson +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/ath/ath12k/dp_tx.c | 22 +++++++++++----------- + 1 file changed, 11 insertions(+), 11 deletions(-) + +diff --git a/drivers/net/wireless/ath/ath12k/dp_tx.c b/drivers/net/wireless/ath/ath12k/dp_tx.c +index 9e63d2d97c095..21e07b5cee570 100644 +--- a/drivers/net/wireless/ath/ath12k/dp_tx.c ++++ b/drivers/net/wireless/ath/ath12k/dp_tx.c +@@ -227,7 +227,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif, + struct ath12k_skb_cb *skb_cb = ATH12K_SKB_CB(skb); + struct hal_tcl_data_cmd *hal_tcl_desc; + struct hal_tx_msdu_ext_desc *msg; +- struct sk_buff *skb_ext_desc; ++ struct sk_buff *skb_ext_desc = NULL; + struct hal_srng *tcl_ring; + struct ieee80211_hdr *hdr = (void *)skb->data; + struct dp_tx_ring *tx_ring; +@@ -397,18 +397,15 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif, + if (ret < 0) { + ath12k_dbg(ab, ATH12K_DBG_DP_TX, + "Failed to add HTT meta data, dropping packet\n"); +- kfree_skb(skb_ext_desc); +- goto fail_unmap_dma; ++ goto fail_free_ext_skb; + } + } + + ti.paddr = dma_map_single(ab->dev, skb_ext_desc->data, + skb_ext_desc->len, DMA_TO_DEVICE); + ret = dma_mapping_error(ab->dev, ti.paddr); +- if (ret) { +- kfree_skb(skb_ext_desc); +- goto fail_unmap_dma; +- } ++ if (ret) ++ goto fail_free_ext_skb; + + ti.data_len = skb_ext_desc->len; + ti.type = HAL_TCL_DESC_TYPE_EXT_DESC; +@@ -444,7 +441,7 @@ int ath12k_dp_tx(struct ath12k *ar, struct ath12k_vif *arvif, + ring_selector++; + } + +- goto fail_unmap_dma; ++ goto fail_unmap_dma_ext; + } + + ath12k_hal_tx_cmd_desc_setup(ab, hal_tcl_desc, &ti); +@@ -460,13 +457,16 @@ int ath12k_dp_tx(struct 
ath12k *ar, struct ath12k_vif *arvif, + + return 0; + +-fail_unmap_dma: +- dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE); +- ++fail_unmap_dma_ext: + if (skb_cb->paddr_ext_desc) + dma_unmap_single(ab->dev, skb_cb->paddr_ext_desc, + sizeof(struct hal_tx_msdu_ext_desc), + DMA_TO_DEVICE); ++fail_free_ext_skb: ++ kfree_skb(skb_ext_desc); ++ ++fail_unmap_dma: ++ dma_unmap_single(ab->dev, ti.paddr, ti.data_len, DMA_TO_DEVICE); + + fail_remove_tx_buf: + ath12k_dp_tx_release_txbuf(dp, tx_desc, pool_id); +-- +2.39.5 + diff --git a/queue-6.12/wifi-ath6kl-remove-warn-on-bad-firmware-input.patch b/queue-6.12/wifi-ath6kl-remove-warn-on-bad-firmware-input.patch new file mode 100644 index 0000000000..3db9ee8029 --- /dev/null +++ b/queue-6.12/wifi-ath6kl-remove-warn-on-bad-firmware-input.patch @@ -0,0 +1,43 @@ +From cec8ccdc192cb6a92871c94b9bb9559393439c01 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 17 Jun 2025 11:45:29 +0200 +Subject: wifi: ath6kl: remove WARN on bad firmware input + +From: Johannes Berg + +[ Upstream commit e7417421d89358da071fd2930f91e67c7128fbff ] + +If the firmware gives bad input, that's nothing to do with +the driver's stack at this point etc., so the WARN_ON() +doesn't add any value. Additionally, this is one of the +top syzbot reports now. Just print a message, and as an +added bonus, print the sizes too. + +Reported-by: syzbot+92c6dd14aaa230be6855@syzkaller.appspotmail.com +Tested-by: syzbot+92c6dd14aaa230be6855@syzkaller.appspotmail.com +Acked-by: Jeff Johnson +Link: https://patch.msgid.link/20250617114529.031a677a348e.I58bf1eb4ac16a82c546725ff010f3f0d2b0cca49@changeid +Signed-off-by: Johannes Berg +Signed-off-by: Sasha Levin +--- + drivers/net/wireless/ath/ath6kl/bmi.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/drivers/net/wireless/ath/ath6kl/bmi.c b/drivers/net/wireless/ath/ath6kl/bmi.c +index af98e871199d3..5a9e93fd1ef42 100644 +--- a/drivers/net/wireless/ath/ath6kl/bmi.c ++++ b/drivers/net/wireless/ath/ath6kl/bmi.c +@@ -87,7 +87,9 @@ int ath6kl_bmi_get_target_info(struct ath6kl *ar, + * We need to do some backwards compatibility to make this work. + */ + if (le32_to_cpu(targ_info->byte_count) != sizeof(*targ_info)) { +- WARN_ON(1); ++ ath6kl_err("mismatched byte count %d vs. expected %zd\n", ++ le32_to_cpu(targ_info->byte_count), ++ sizeof(*targ_info)); + return -EINVAL; + } + +-- +2.39.5 + diff --git a/queue-6.12/wifi-mac80211-drop-invalid-source-address-ocb-frames.patch b/queue-6.12/wifi-mac80211-drop-invalid-source-address-ocb-frames.patch new file mode 100644 index 0000000000..6d3f74face --- /dev/null +++ b/queue-6.12/wifi-mac80211-drop-invalid-source-address-ocb-frames.patch @@ -0,0 +1,42 @@ +From 82c50e859395fdca96afd6f60fa0caa974ad1645 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 16 Jun 2025 17:18:38 +0200 +Subject: wifi: mac80211: drop invalid source address OCB frames + +From: Johannes Berg + +[ Upstream commit d1b1a5eb27c4948e8811cf4dbb05aaf3eb10700c ] + +In OCB, don't accept frames from invalid source addresses +(and in particular don't try to create stations for them), +drop the frames instead. 
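+
+For context, is_valid_ether_addr() rejects multicast addresses (group
+bit set in the first octet) and the all-zeroes address, so the new check
+refuses frames whose transmitter address could never belong to a real
+peer, as well as frames that echo our own address:
+
+	if (!is_valid_ether_addr(hdr->addr2) ||
+	    ether_addr_equal(sdata->dev->dev_addr, hdr->addr2))
+		return false;	/* drop; don't create an OCB STA for it */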
+ +Reported-by: syzbot+8b512026a7ec10dcbdd9@syzkaller.appspotmail.com +Closes: https://lore.kernel.org/r/6788d2d9.050a0220.20d369.0028.GAE@google.com/ +Signed-off-by: Johannes Berg +Tested-by: syzbot+8b512026a7ec10dcbdd9@syzkaller.appspotmail.com +Link: https://patch.msgid.link/20250616171838.7433379cab5d.I47444d63c72a0bd58d2e2b67bb99e1fea37eec6f@changeid +Signed-off-by: Johannes Berg +Signed-off-by: Sasha Levin +--- + net/mac80211/rx.c | 4 ++++ + 1 file changed, 4 insertions(+) + +diff --git a/net/mac80211/rx.c b/net/mac80211/rx.c +index 8e1fbdd3bff10..8e1d00efa62e5 100644 +--- a/net/mac80211/rx.c ++++ b/net/mac80211/rx.c +@@ -4481,6 +4481,10 @@ static bool ieee80211_accept_frame(struct ieee80211_rx_data *rx) + if (!multicast && + !ether_addr_equal(sdata->dev->dev_addr, hdr->addr1)) + return false; ++ /* reject invalid/our STA address */ ++ if (!is_valid_ether_addr(hdr->addr2) || ++ ether_addr_equal(sdata->dev->dev_addr, hdr->addr2)) ++ return false; + if (!rx->sta) { + int rate_idx; + if (status->encoding != RX_ENC_LEGACY) +-- +2.39.5 +