--- /dev/null
+From stable+bounces-171695-greg=kroah.com@vger.kernel.org Tue Aug 19 03:02:20 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 21:01:32 -0400
+Subject: btrfs: abort transaction on unexpected eb generation at btrfs_copy_root()
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, Daniel Vacek <neelx@suse.com>, Qu Wenruo <wqu@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819010132.236821-1-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 33e8f24b52d2796b8cfb28c19a1a7dd6476323a8 ]
+
+If we find an unexpected generation for the extent buffer we are cloning
+at btrfs_copy_root(), we just WARN_ON() without erroring out or aborting
+the transaction, meaning we allow metadata with an unexpected generation
+to be persisted. Instead of only warning, abort the transaction and
+return -EUCLEAN.
+
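+Written out plainly (this restates the hunk below, it is not an extra
+change), the new error path is:
+
+  if (unlikely(btrfs_header_generation(buf) > trans->transid)) {
+          btrfs_tree_unlock(cow);
+          free_extent_buffer(cow);
+          ret = -EUCLEAN;
+          btrfs_abort_transaction(trans, ret);
+          return ret;
+  }
+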
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Daniel Vacek <neelx@suse.com>
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/ctree.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+--- a/fs/btrfs/ctree.c
++++ b/fs/btrfs/ctree.c
+@@ -350,7 +350,14 @@ int btrfs_copy_root(struct btrfs_trans_h
+
+ write_extent_buffer_fsid(cow, fs_info->fs_devices->metadata_uuid);
+
+- WARN_ON(btrfs_header_generation(buf) > trans->transid);
++ if (unlikely(btrfs_header_generation(buf) > trans->transid)) {
++ btrfs_tree_unlock(cow);
++ free_extent_buffer(cow);
++ ret = -EUCLEAN;
++ btrfs_abort_transaction(trans, ret);
++ return ret;
++ }
++
+ if (new_root_objectid == BTRFS_TREE_RELOC_OBJECTID)
+ ret = btrfs_inc_ref(trans, root, cow, 1);
+ else
--- /dev/null
+From stable+bounces-171689-greg=kroah.com@vger.kernel.org Tue Aug 19 02:39:22 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 20:39:03 -0400
+Subject: btrfs: always abort transaction on failure to add block group to free space tree
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, Boris Burkov <boris@bur.io>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819003903.227152-2-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 1f06c942aa709d397cf6bed577a0d10a61509667 ]
+
+Only one of the callers of __add_block_group_free_space() aborts the
+transaction if the call fails, while the others don't, and the abort is
+either never done up the call chain or done much higher up in it.
+
+So make sure we abort the transaction at __add_block_group_free_space()
+if it fails, which brings a couple of benefits:
+
+1) If some call chain never aborts the transaction, we avoid a metadata
+   inconsistency: BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE is cleared when we
+   enter __add_block_group_free_space(), and since the function is only
+   called when that flag is set in a block group, it would never be
+   called again to add the block group items to the free space tree;
+
+2) If the call chain already aborts the transaction, then we get a trace
+   that points to the exact step of __add_block_group_free_space() that
+   failed, which makes analysis easier.
+
+So abort the transaction at __add_block_group_free_space() if any of its
+steps fails.
+
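+Written out plainly (restating the hunk below), the function now aborts
+at each failure site and propagates the error:
+
+  ret = add_new_free_space_info(trans, block_group, path);
+  if (ret) {
+          btrfs_abort_transaction(trans, ret);
+          return ret;
+  }
+
+  ret = __add_to_free_space_tree(trans, block_group, path,
+                                 block_group->start, block_group->length);
+  if (ret)
+          btrfs_abort_transaction(trans, ret);
+
+  return ret;
+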
+CC: stable@vger.kernel.org # 6.6+
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/free-space-tree.c | 16 +++++++++-------
+ 1 file changed, 9 insertions(+), 7 deletions(-)
+
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1379,12 +1379,17 @@ static int __add_block_group_free_space(
+ clear_bit(BLOCK_GROUP_FLAG_NEEDS_FREE_SPACE, &block_group->runtime_flags);
+
+ ret = add_new_free_space_info(trans, block_group, path);
+- if (ret)
++ if (ret) {
++ btrfs_abort_transaction(trans, ret);
+ return ret;
++ }
+
+- return __add_to_free_space_tree(trans, block_group, path,
+- block_group->start,
+- block_group->length);
++ ret = __add_to_free_space_tree(trans, block_group, path,
++ block_group->start, block_group->length);
++ if (ret)
++ btrfs_abort_transaction(trans, ret);
++
++ return ret;
+ }
+
+ int add_block_group_free_space(struct btrfs_trans_handle *trans,
+@@ -1409,9 +1414,6 @@ int add_block_group_free_space(struct bt
+ }
+
+ ret = __add_block_group_free_space(trans, block_group, path);
+- if (ret)
+- btrfs_abort_transaction(trans, ret);
+-
+ out:
+ btrfs_free_path(path);
+ mutex_unlock(&block_group->free_space_lock);
--- /dev/null
+From stable+bounces-171714-greg=kroah.com@vger.kernel.org Tue Aug 19 03:59:15 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 21:58:48 -0400
+Subject: btrfs: codify pattern for adding block_group to bg_list
+To: stable@vger.kernel.org
+Cc: Boris Burkov <boris@bur.io>, Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819015850.263708-2-sashal@kernel.org>
+
+From: Boris Burkov <boris@bur.io>
+
+[ Upstream commit 0497dfba98c00edbc7af12d53c0b1138eb318bf7 ]
+
+Similar to mark_bg_unused() and mark_bg_to_reclaim(), we have a few
+places that use bg_list with refcounting, mostly for retrying failed
+attempts to reclaim/delete unused block groups.
+
+These have custom logic for handling locking and refcounting the bg_list
+properly, but they actually all want to do the same thing, so pull that
+logic out into a helper. Unfortunately, mark_bg_unused() does still need
+the NEW flag to avoid prematurely marking stuff unused (even if refcount
+is fine, we don't want to mess with bg creation), so it cannot use the
+new helper.
+
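+As a sketch of the resulting callers (taken from the hunks below), the
+lock/ref dance collapses to:
+
+  if (btrfs_link_bg_list(bg, &fs_info->reclaim_bgs))
+          trace_btrfs_add_reclaim_block_group(bg);
+
+with the locking and refcounting handled inside btrfs_link_bg_list().
+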
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: Boris Burkov <boris@bur.io>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 62be7afcc13b ("btrfs: zoned: requeue to unused block group list if zone finish failed")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/block-group.c | 55 +++++++++++++++++++++++++++----------------------
+ 1 file changed, 31 insertions(+), 24 deletions(-)
+
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1482,6 +1482,32 @@ out:
+ }
+
+ /*
++ * Link the block_group to a list via bg_list.
++ *
++ * @bg: The block_group to link to the list.
++ * @list: The list to link it to.
++ *
++ * Use this rather than list_add_tail() directly to ensure proper respect
++ * to locking and refcounting.
++ *
++ * Returns: true if the bg was linked with a refcount bump and false otherwise.
++ */
++static bool btrfs_link_bg_list(struct btrfs_block_group *bg, struct list_head *list)
++{
++ struct btrfs_fs_info *fs_info = bg->fs_info;
++ bool added = false;
++
++ spin_lock(&fs_info->unused_bgs_lock);
++ if (list_empty(&bg->bg_list)) {
++ btrfs_get_block_group(bg);
++ list_add_tail(&bg->bg_list, list);
++ added = true;
++ }
++ spin_unlock(&fs_info->unused_bgs_lock);
++ return added;
++}
++
++/*
+ * Process the unused_bgs list and remove any that don't have any allocated
+ * space inside of them.
+ */
+@@ -1597,8 +1623,7 @@ void btrfs_delete_unused_bgs(struct btrf
+ * drop under the "next" label for the
+ * fs_info->unused_bgs list.
+ */
+- btrfs_get_block_group(block_group);
+- list_add_tail(&block_group->bg_list, &retry_list);
++ btrfs_link_bg_list(block_group, &retry_list);
+
+ trace_btrfs_skip_unused_block_group(block_group);
+ spin_unlock(&block_group->lock);
+@@ -1971,20 +1996,8 @@ void btrfs_reclaim_bgs_work(struct work_
+ spin_unlock(&space_info->lock);
+
+ next:
+- if (ret && !READ_ONCE(space_info->periodic_reclaim)) {
+- /* Refcount held by the reclaim_bgs list after splice. */
+- spin_lock(&fs_info->unused_bgs_lock);
+- /*
+- * This block group might be added to the unused list
+- * during the above process. Move it back to the
+- * reclaim list otherwise.
+- */
+- if (list_empty(&bg->bg_list)) {
+- btrfs_get_block_group(bg);
+- list_add_tail(&bg->bg_list, &retry_list);
+- }
+- spin_unlock(&fs_info->unused_bgs_lock);
+- }
++ if (ret && !READ_ONCE(space_info->periodic_reclaim))
++ btrfs_link_bg_list(bg, &retry_list);
+ btrfs_put_block_group(bg);
+
+ mutex_unlock(&fs_info->reclaim_bgs_lock);
+@@ -2024,13 +2037,8 @@ void btrfs_mark_bg_to_reclaim(struct btr
+ {
+ struct btrfs_fs_info *fs_info = bg->fs_info;
+
+- spin_lock(&fs_info->unused_bgs_lock);
+- if (list_empty(&bg->bg_list)) {
+- btrfs_get_block_group(bg);
++ if (btrfs_link_bg_list(bg, &fs_info->reclaim_bgs))
+ trace_btrfs_add_reclaim_block_group(bg);
+- list_add_tail(&bg->bg_list, &fs_info->reclaim_bgs);
+- }
+- spin_unlock(&fs_info->unused_bgs_lock);
+ }
+
+ static int read_bg_from_eb(struct btrfs_fs_info *fs_info, const struct btrfs_key *key,
+@@ -2946,8 +2954,7 @@ struct btrfs_block_group *btrfs_make_blo
+ }
+ #endif
+
+- btrfs_get_block_group(cache);
+- list_add_tail(&cache->bg_list, &trans->new_bgs);
++ btrfs_link_bg_list(cache, &trans->new_bgs);
+ btrfs_inc_delayed_refs_rsv_bg_inserts(fs_info);
+
+ set_avail_alloc_bits(fs_info, type);
--- /dev/null
+From stable+bounces-171713-greg=kroah.com@vger.kernel.org Tue Aug 19 03:59:13 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 21:58:47 -0400
+Subject: btrfs: explicitly ref count block_group on new_bgs list
+To: stable@vger.kernel.org
+Cc: Boris Burkov <boris@bur.io>, Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819015850.263708-1-sashal@kernel.org>
+
+From: Boris Burkov <boris@bur.io>
+
+[ Upstream commit 7cbce3cb4c5cfffd8b08f148e2136afc1ec1ba94 ]
+
+All other users of the bg_list list_head increment the refcount when
+adding to a list and decrement it when deleting from the list. Just for
+the sake of uniformity and to try to avoid refcounting bugs, do it for
+this list as well.
+
+This does not fix any known ref-counting bug, as the reference belongs
+to a single task (trans_handle is not shared and this represents
+trans_handle->new_bgs linkage) and will not lose its original refcount
+while that thread is running. And BLOCK_GROUP_FLAG_NEW protects against
+ref-counting errors "moving" the block group to the unused list without
+taking a ref.
+
+With that said, I still believe it is simpler to just hold the extra ref
+count for this list user as well.
+
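+The resulting pairing, shown plainly from the hunks below: take a ref
+when linking the block group at creation time and drop it when unlinking:
+
+  btrfs_get_block_group(cache);
+  list_add_tail(&cache->bg_list, &trans->new_bgs);
+  ...
+  list_del_init(&block_group->bg_list);
+  btrfs_put_block_group(block_group);
+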
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: Boris Burkov <boris@bur.io>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 62be7afcc13b ("btrfs: zoned: requeue to unused block group list if zone finish failed")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/block-group.c | 2 ++
+ fs/btrfs/transaction.c | 1 +
+ 2 files changed, 3 insertions(+)
+
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -2807,6 +2807,7 @@ next:
+ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
+ clear_bit(BLOCK_GROUP_FLAG_NEW, &block_group->runtime_flags);
++ btrfs_put_block_group(block_group);
+ spin_unlock(&fs_info->unused_bgs_lock);
+
+ /*
+@@ -2945,6 +2946,7 @@ struct btrfs_block_group *btrfs_make_blo
+ }
+ #endif
+
++ btrfs_get_block_group(cache);
+ list_add_tail(&cache->bg_list, &trans->new_bgs);
+ btrfs_inc_delayed_refs_rsv_bg_inserts(fs_info);
+
+--- a/fs/btrfs/transaction.c
++++ b/fs/btrfs/transaction.c
+@@ -2113,6 +2113,7 @@ static void btrfs_cleanup_pending_block_
+ */
+ spin_lock(&fs_info->unused_bgs_lock);
+ list_del_init(&block_group->bg_list);
++ btrfs_put_block_group(block_group);
+ spin_unlock(&fs_info->unused_bgs_lock);
+ }
+ }
--- /dev/null
+From stable+bounces-171690-greg=kroah.com@vger.kernel.org Tue Aug 19 02:39:22 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 20:39:02 -0400
+Subject: btrfs: move transaction aborts to the error site in add_block_group_free_space()
+To: stable@vger.kernel.org
+Cc: David Sterba <dsterba@suse.com>, Filipe Manana <fdmanana@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819003903.227152-1-sashal@kernel.org>
+
+From: David Sterba <dsterba@suse.com>
+
+[ Upstream commit b63c8c1ede4407835cb8c8bed2014d96619389f3 ]
+
+Transaction aborts should be done next to the place the error happens,
+which was not done in add_block_group_free_space().
+
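+As a plain-code sketch of the pattern (mirroring the hunk below), the
+abort now sits right next to the failure it reports:
+
+  path = btrfs_alloc_path();
+  if (!path) {
+          ret = -ENOMEM;
+          btrfs_abort_transaction(trans, ret);
+          goto out;
+  }
+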
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 1f06c942aa70 ("btrfs: always abort transaction on failure to add block group to free space tree")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/free-space-tree.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/fs/btrfs/free-space-tree.c
++++ b/fs/btrfs/free-space-tree.c
+@@ -1404,16 +1404,17 @@ int add_block_group_free_space(struct bt
+ path = btrfs_alloc_path();
+ if (!path) {
+ ret = -ENOMEM;
++ btrfs_abort_transaction(trans, ret);
+ goto out;
+ }
+
+ ret = __add_block_group_free_space(trans, block_group, path);
++ if (ret)
++ btrfs_abort_transaction(trans, ret);
+
+ out:
+ btrfs_free_path(path);
+ mutex_unlock(&block_group->free_space_lock);
+- if (ret)
+- btrfs_abort_transaction(trans, ret);
+ return ret;
+ }
+
--- /dev/null
+From stable+bounces-171676-greg=kroah.com@vger.kernel.org Tue Aug 19 01:21:49 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 19:21:17 -0400
+Subject: btrfs: qgroup: drop unused parameter fs_info from __del_qgroup_rb()
+To: stable@vger.kernel.org
+Cc: David Sterba <dsterba@suse.com>, Anand Jain <anand.jain@oracle.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250818232119.141306-1-sashal@kernel.org>
+
+From: David Sterba <dsterba@suse.com>
+
+[ Upstream commit 2651f43274109f2d09b74a404b82722213ef9b2d ]
+
+We don't need fs_info here; everything is reachable from the qgroup.
+
+Reviewed-by: Anand Jain <anand.jain@oracle.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: e12496677503 ("btrfs: qgroup: fix race between quota disable and quota rescan ioctl")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/qgroup.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -226,8 +226,7 @@ static struct btrfs_qgroup *add_qgroup_r
+ return qgroup;
+ }
+
+-static void __del_qgroup_rb(struct btrfs_fs_info *fs_info,
+- struct btrfs_qgroup *qgroup)
++static void __del_qgroup_rb(struct btrfs_qgroup *qgroup)
+ {
+ struct btrfs_qgroup_list *list;
+
+@@ -258,7 +257,7 @@ static int del_qgroup_rb(struct btrfs_fs
+ return -ENOENT;
+
+ rb_erase(&qgroup->node, &fs_info->qgroup_tree);
+- __del_qgroup_rb(fs_info, qgroup);
++ __del_qgroup_rb(qgroup);
+ return 0;
+ }
+
+@@ -643,7 +642,7 @@ void btrfs_free_qgroup_config(struct btr
+ while ((n = rb_first(&fs_info->qgroup_tree))) {
+ qgroup = rb_entry(n, struct btrfs_qgroup, node);
+ rb_erase(n, &fs_info->qgroup_tree);
+- __del_qgroup_rb(fs_info, qgroup);
++ __del_qgroup_rb(qgroup);
+ btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+ kfree(qgroup);
+ }
--- /dev/null
+From stable+bounces-171677-greg=kroah.com@vger.kernel.org Tue Aug 19 01:21:57 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 19:21:18 -0400
+Subject: btrfs: qgroup: fix race between quota disable and quota rescan ioctl
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, cen zhang <zzzccc427@gmail.com>, Boris Burkov <boris@bur.io>, Qu Wenruo <wqu@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250818232119.141306-2-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit e1249667750399a48cafcf5945761d39fa584edf ]
+
+There's a race between a task disabling quotas and another running the
+rescan ioctl that can result in a use-after-free of qgroup records from
+the fs_info->qgroup_tree rbtree.
+
+This happens as follows:
+
+1) Task A enters btrfs_ioctl_quota_rescan() -> btrfs_qgroup_rescan();
+
+2) Task B enters btrfs_quota_disable() and calls
+ btrfs_qgroup_wait_for_completion(), which does nothing because at that
+ point fs_info->qgroup_rescan_running is false (it wasn't set yet by
+ task A);
+
+3) Task B calls btrfs_free_qgroup_config() which starts freeing qgroups
+ from fs_info->qgroup_tree without taking the lock fs_info->qgroup_lock;
+
+4) Task A enters qgroup_rescan_zero_tracking() which starts iterating
+ the fs_info->qgroup_tree tree while holding fs_info->qgroup_lock,
+ but task B is freeing qgroup records from that tree without holding
+ the lock, resulting in a use-after-free.
+
+Fix this by taking fs_info->qgroup_lock at btrfs_free_qgroup_config().
+Also at btrfs_qgroup_rescan() don't start the rescan worker if quotas
+were already disabled.
+
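+The fix follows the common pattern of dropping a spinlock around calls
+that may sleep (the sysfs deletion), as sketched from the hunk below:
+
+  spin_lock(&fs_info->qgroup_lock);
+  while ((n = rb_first(&fs_info->qgroup_tree))) {
+          qgroup = rb_entry(n, struct btrfs_qgroup, node);
+          rb_erase(n, &fs_info->qgroup_tree);
+          __del_qgroup_rb(qgroup);
+          spin_unlock(&fs_info->qgroup_lock);
+          btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+          kfree(qgroup);
+          spin_lock(&fs_info->qgroup_lock);
+  }
+  spin_unlock(&fs_info->qgroup_lock);
+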
+Reported-by: cen zhang <zzzccc427@gmail.com>
+Link: https://lore.kernel.org/linux-btrfs/CAFRLqsV+cMDETFuzqdKSHk_FDm6tneea45krsHqPD6B3FetLpQ@mail.gmail.com/
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Boris Burkov <boris@bur.io>
+Reviewed-by: Qu Wenruo <wqu@suse.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/qgroup.c | 31 ++++++++++++++++++++++++-------
+ 1 file changed, 24 insertions(+), 7 deletions(-)
+
+--- a/fs/btrfs/qgroup.c
++++ b/fs/btrfs/qgroup.c
+@@ -630,22 +630,30 @@ bool btrfs_check_quota_leak(const struct
+
+ /*
+ * This is called from close_ctree() or open_ctree() or btrfs_quota_disable(),
+- * first two are in single-threaded paths.And for the third one, we have set
+- * quota_root to be null with qgroup_lock held before, so it is safe to clean
+- * up the in-memory structures without qgroup_lock held.
++ * first two are in single-threaded paths.
+ */
+ void btrfs_free_qgroup_config(struct btrfs_fs_info *fs_info)
+ {
+ struct rb_node *n;
+ struct btrfs_qgroup *qgroup;
+
++ /*
++ * btrfs_quota_disable() can be called concurrently with
++ * btrfs_qgroup_rescan() -> qgroup_rescan_zero_tracking(), so take the
++ * lock.
++ */
++ spin_lock(&fs_info->qgroup_lock);
+ while ((n = rb_first(&fs_info->qgroup_tree))) {
+ qgroup = rb_entry(n, struct btrfs_qgroup, node);
+ rb_erase(n, &fs_info->qgroup_tree);
+ __del_qgroup_rb(qgroup);
++ spin_unlock(&fs_info->qgroup_lock);
+ btrfs_sysfs_del_one_qgroup(fs_info, qgroup);
+ kfree(qgroup);
++ spin_lock(&fs_info->qgroup_lock);
+ }
++ spin_unlock(&fs_info->qgroup_lock);
++
+ /*
+ * We call btrfs_free_qgroup_config() when unmounting
+ * filesystem and disabling quota, so we set qgroup_ulist
+@@ -4056,12 +4064,21 @@ btrfs_qgroup_rescan(struct btrfs_fs_info
+ qgroup_rescan_zero_tracking(fs_info);
+
+ mutex_lock(&fs_info->qgroup_rescan_lock);
+- fs_info->qgroup_rescan_running = true;
+- btrfs_queue_work(fs_info->qgroup_rescan_workers,
+- &fs_info->qgroup_rescan_work);
++ /*
++ * The rescan worker is only for full accounting qgroups, check if it's
++ * enabled as it is pointless to queue it otherwise. A concurrent quota
++ * disable may also have just cleared BTRFS_FS_QUOTA_ENABLED.
++ */
++ if (btrfs_qgroup_full_accounting(fs_info)) {
++ fs_info->qgroup_rescan_running = true;
++ btrfs_queue_work(fs_info->qgroup_rescan_workers,
++ &fs_info->qgroup_rescan_work);
++ } else {
++ ret = -ENOTCONN;
++ }
+ mutex_unlock(&fs_info->qgroup_rescan_lock);
+
+- return 0;
++ return ret;
+ }
+
+ int btrfs_qgroup_wait_for_completion(struct btrfs_fs_info *fs_info,
--- /dev/null
+From stable+bounces-171720-greg=kroah.com@vger.kernel.org Tue Aug 19 04:17:43 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:15:57 -0400
+Subject: btrfs: send: add and use helper to rename current inode when processing refs
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-3-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit ec666c84deba56f714505b53556a97565f72db86 ]
+
+Extract the logic to rename the current inode at process_recorded_refs()
+into a helper function and use it, removing duplicated logic and making
+it easier for an upcoming patch to avoid yet more duplication.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 005b0a0c24e1 ("btrfs: send: use fallocate for hole punching with send stream v2")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 23 +++++++++++++++--------
+ 1 file changed, 15 insertions(+), 8 deletions(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4165,6 +4165,19 @@ out:
+ return ret;
+ }
+
++static int rename_current_inode(struct send_ctx *sctx,
++ struct fs_path *current_path,
++ struct fs_path *new_path)
++{
++ int ret;
++
++ ret = send_rename(sctx, current_path, new_path);
++ if (ret < 0)
++ return ret;
++
++ return fs_path_copy(current_path, new_path);
++}
++
+ /*
+ * This does all the move/link/unlink/rmdir magic.
+ */
+@@ -4450,13 +4463,10 @@ static int process_recorded_refs(struct
+ * it depending on the inode mode.
+ */
+ if (is_orphan && can_rename) {
+- ret = send_rename(sctx, valid_path, cur->full_path);
++ ret = rename_current_inode(sctx, valid_path, cur->full_path);
+ if (ret < 0)
+ goto out;
+ is_orphan = false;
+- ret = fs_path_copy(valid_path, cur->full_path);
+- if (ret < 0)
+- goto out;
+ } else if (can_rename) {
+ if (S_ISDIR(sctx->cur_inode_mode)) {
+ /*
+@@ -4464,10 +4474,7 @@ static int process_recorded_refs(struct
+ * dirs, we always have one new and one deleted
+ * ref. The deleted ref is ignored later.
+ */
+- ret = send_rename(sctx, valid_path,
+- cur->full_path);
+- if (!ret)
+- ret = fs_path_copy(valid_path,
++ ret = rename_current_inode(sctx, valid_path,
+ cur->full_path);
+ if (ret < 0)
+ goto out;
--- /dev/null
+From stable+bounces-171722-greg=kroah.com@vger.kernel.org Tue Aug 19 04:17:46 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:15:59 -0400
+Subject: btrfs: send: avoid path allocation for the current inode when issuing commands
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-5-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 374d45af6435534a11b01b88762323abf03dd755 ]
+
+Whenever we issue a command we allocate a path and then compute it. For
+the current inode this is not necessary since we have one preallocated
+and computed in the send context structure, so we can use it instead
+and avoid allocating and freeing a path.
+
+For example, if we have 100 extents to send (100 write commands) for a
+file, we allocate and free paths 100 times.
+
+So improve on this by avoiding path allocation and freeing whenever a
+command is for the current inode by using the current inode's path
+stored in the send context structure.
+
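+A sketch of the resulting pattern in the command helpers (see the hunks
+below):
+
+  p = get_path_for_command(sctx, ino, gen); /* cached path if ino is the current inode */
+  if (IS_ERR(p))
+          return PTR_ERR(p);
+  ...
+  free_path_for_command(sctx, p); /* a no-op for the cached path */
+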
+A test was run before applying this patch and the previous one in the
+series:
+
+ "btrfs: send: keep the current inode's path cached"
+
+The test script is the following:
+
+ $ cat test.sh
+ #!/bin/bash
+
+ DEV=/dev/nullb0
+ MNT=/mnt/nullb0
+
+ mkfs.btrfs -f $DEV > /dev/null
+ mount $DEV $MNT
+
+ DIR="$MNT/one/two/three/four"
+ FILE="$DIR/foobar"
+
+ mkdir -p $DIR
+
+ # Create some empty files to get a deeper btree and therefore make
+ # path computations slower.
+ for ((i = 1; i <= 30000; i++)); do
+ echo -n > "$DIR/filler_$i"
+ done
+
+ for ((i = 0; i < 10000; i += 2)); do
+ offset=$(( i * 4096 ))
+ xfs_io -f -c "pwrite -S 0xab $offset 4K" $FILE > /dev/null
+ done
+
+ btrfs subvolume snapshot -r $MNT $MNT/snap
+
+ start=$(date +%s%N)
+ btrfs send -f /dev/null $MNT/snap
+ end=$(date +%s%N)
+
+ echo -e "\nsend took $(( (end - start) / 1000000 )) milliseconds"
+
+ umount $MNT
+
+Result before applying the 2 patches: 1121 milliseconds
+Result after applying the 2 patches: 815 milliseconds (-31.6%)
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 005b0a0c24e1 ("btrfs: send: use fallocate for hole punching with send stream v2")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 215 +++++++++++++++++++++++++-------------------------------
+ 1 file changed, 97 insertions(+), 118 deletions(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -2623,6 +2623,47 @@ out:
+ return ret;
+ }
+
++static struct fs_path *get_cur_inode_path(struct send_ctx *sctx)
++{
++ if (fs_path_len(&sctx->cur_inode_path) == 0) {
++ int ret;
++
++ ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen,
++ &sctx->cur_inode_path);
++ if (ret < 0)
++ return ERR_PTR(ret);
++ }
++
++ return &sctx->cur_inode_path;
++}
++
++static struct fs_path *get_path_for_command(struct send_ctx *sctx, u64 ino, u64 gen)
++{
++ struct fs_path *path;
++ int ret;
++
++ if (ino == sctx->cur_ino && gen == sctx->cur_inode_gen)
++ return get_cur_inode_path(sctx);
++
++ path = fs_path_alloc();
++ if (!path)
++ return ERR_PTR(-ENOMEM);
++
++ ret = get_cur_path(sctx, ino, gen, path);
++ if (ret < 0) {
++ fs_path_free(path);
++ return ERR_PTR(ret);
++ }
++
++ return path;
++}
++
++static void free_path_for_command(const struct send_ctx *sctx, struct fs_path *path)
++{
++ if (path != &sctx->cur_inode_path)
++ fs_path_free(path);
++}
++
+ static int send_truncate(struct send_ctx *sctx, u64 ino, u64 gen, u64 size)
+ {
+ struct btrfs_fs_info *fs_info = sctx->send_root->fs_info;
+@@ -2631,17 +2672,14 @@ static int send_truncate(struct send_ctx
+
+ btrfs_debug(fs_info, "send_truncate %llu size=%llu", ino, size);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_path_for_command(sctx, ino, gen);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_TRUNCATE);
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, ino, gen, p);
+- if (ret < 0)
+- goto out;
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_SIZE, size);
+
+@@ -2649,7 +2687,7 @@ static int send_truncate(struct send_ctx
+
+ tlv_put_failure:
+ out:
+- fs_path_free(p);
++ free_path_for_command(sctx, p);
+ return ret;
+ }
+
+@@ -2661,17 +2699,14 @@ static int send_chmod(struct send_ctx *s
+
+ btrfs_debug(fs_info, "send_chmod %llu mode=%llu", ino, mode);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_path_for_command(sctx, ino, gen);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_CHMOD);
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, ino, gen, p);
+- if (ret < 0)
+- goto out;
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_MODE, mode & 07777);
+
+@@ -2679,7 +2714,7 @@ static int send_chmod(struct send_ctx *s
+
+ tlv_put_failure:
+ out:
+- fs_path_free(p);
++ free_path_for_command(sctx, p);
+ return ret;
+ }
+
+@@ -2694,17 +2729,14 @@ static int send_fileattr(struct send_ctx
+
+ btrfs_debug(fs_info, "send_fileattr %llu fileattr=%llu", ino, fileattr);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_path_for_command(sctx, ino, gen);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_FILEATTR);
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, ino, gen, p);
+- if (ret < 0)
+- goto out;
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_FILEATTR, fileattr);
+
+@@ -2712,7 +2744,7 @@ static int send_fileattr(struct send_ctx
+
+ tlv_put_failure:
+ out:
+- fs_path_free(p);
++ free_path_for_command(sctx, p);
+ return ret;
+ }
+
+@@ -2725,17 +2757,14 @@ static int send_chown(struct send_ctx *s
+ btrfs_debug(fs_info, "send_chown %llu uid=%llu, gid=%llu",
+ ino, uid, gid);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_path_for_command(sctx, ino, gen);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_CHOWN);
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, ino, gen, p);
+- if (ret < 0)
+- goto out;
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_UID, uid);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_GID, gid);
+@@ -2744,7 +2773,7 @@ static int send_chown(struct send_ctx *s
+
+ tlv_put_failure:
+ out:
+- fs_path_free(p);
++ free_path_for_command(sctx, p);
+ return ret;
+ }
+
+@@ -2761,9 +2790,9 @@ static int send_utimes(struct send_ctx *
+
+ btrfs_debug(fs_info, "send_utimes %llu", ino);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_path_for_command(sctx, ino, gen);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ path = alloc_path_for_send();
+ if (!path) {
+@@ -2788,9 +2817,6 @@ static int send_utimes(struct send_ctx *
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, ino, gen, p);
+- if (ret < 0)
+- goto out;
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_BTRFS_TIMESPEC(sctx, BTRFS_SEND_A_ATIME, eb, &ii->atime);
+ TLV_PUT_BTRFS_TIMESPEC(sctx, BTRFS_SEND_A_MTIME, eb, &ii->mtime);
+@@ -2802,7 +2828,7 @@ static int send_utimes(struct send_ctx *
+
+ tlv_put_failure:
+ out:
+- fs_path_free(p);
++ free_path_for_command(sctx, p);
+ btrfs_free_path(path);
+ return ret;
+ }
+@@ -4929,13 +4955,9 @@ static int send_set_xattr(struct send_ct
+ struct fs_path *path;
+ int ret;
+
+- path = fs_path_alloc();
+- if (!path)
+- return -ENOMEM;
+-
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, path);
+- if (ret < 0)
+- goto out;
++ path = get_cur_inode_path(sctx);
++ if (IS_ERR(path))
++ return PTR_ERR(path);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_SET_XATTR);
+ if (ret < 0)
+@@ -4949,8 +4971,6 @@ static int send_set_xattr(struct send_ct
+
+ tlv_put_failure:
+ out:
+- fs_path_free(path);
+-
+ return ret;
+ }
+
+@@ -5008,23 +5028,14 @@ static int __process_deleted_xattr(int n
+ const char *name, int name_len,
+ const char *data, int data_len, void *ctx)
+ {
+- int ret;
+ struct send_ctx *sctx = ctx;
+ struct fs_path *p;
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
+-
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto out;
+-
+- ret = send_remove_xattr(sctx, p, name, name_len);
++ p = get_cur_inode_path(sctx);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+-out:
+- fs_path_free(p);
+- return ret;
++ return send_remove_xattr(sctx, p, name, name_len);
+ }
+
+ static int process_new_xattr(struct send_ctx *sctx)
+@@ -5257,21 +5268,13 @@ static int process_verity(struct send_ct
+ if (ret < 0)
+ goto iput;
+
+- p = fs_path_alloc();
+- if (!p) {
+- ret = -ENOMEM;
++ p = get_cur_inode_path(sctx);
++ if (IS_ERR(p)) {
++ ret = PTR_ERR(p);
+ goto iput;
+ }
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto free_path;
+
+ ret = send_verity(sctx, p, sctx->verity_descriptor);
+- if (ret < 0)
+- goto free_path;
+-
+-free_path:
+- fs_path_free(p);
+ iput:
+ iput(inode);
+ return ret;
+@@ -5393,31 +5396,25 @@ static int send_write(struct send_ctx *s
+ int ret = 0;
+ struct fs_path *p;
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
+-
+ btrfs_debug(fs_info, "send_write offset=%llu, len=%d", offset, len);
+
+- ret = begin_cmd(sctx, BTRFS_SEND_C_WRITE);
+- if (ret < 0)
+- goto out;
++ p = get_cur_inode_path(sctx);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
++ ret = begin_cmd(sctx, BTRFS_SEND_C_WRITE);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
+ ret = put_file_data(sctx, offset, len);
+ if (ret < 0)
+- goto out;
++ return ret;
+
+ ret = send_cmd(sctx);
+
+ tlv_put_failure:
+-out:
+- fs_path_free(p);
+ return ret;
+ }
+
+@@ -5430,6 +5427,7 @@ static int send_clone(struct send_ctx *s
+ {
+ int ret = 0;
+ struct fs_path *p;
++ struct fs_path *cur_inode_path;
+ u64 gen;
+
+ btrfs_debug(sctx->send_root->fs_info,
+@@ -5437,6 +5435,10 @@ static int send_clone(struct send_ctx *s
+ offset, len, btrfs_root_id(clone_root->root),
+ clone_root->ino, clone_root->offset);
+
++ cur_inode_path = get_cur_inode_path(sctx);
++ if (IS_ERR(cur_inode_path))
++ return PTR_ERR(cur_inode_path);
++
+ p = fs_path_alloc();
+ if (!p)
+ return -ENOMEM;
+@@ -5445,13 +5447,9 @@ static int send_clone(struct send_ctx *s
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto out;
+-
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_CLONE_LEN, len);
+- TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
++ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, cur_inode_path);
+
+ if (clone_root->root == sctx->send_root) {
+ ret = get_inode_gen(sctx->send_root, clone_root->ino, &gen);
+@@ -5502,17 +5500,13 @@ static int send_update_extent(struct sen
+ int ret = 0;
+ struct fs_path *p;
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
++ p = get_cur_inode_path(sctx);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_UPDATE_EXTENT);
+ if (ret < 0)
+- goto out;
+-
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto out;
++ return ret;
+
+ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, p);
+ TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
+@@ -5521,8 +5515,6 @@ static int send_update_extent(struct sen
+ ret = send_cmd(sctx);
+
+ tlv_put_failure:
+-out:
+- fs_path_free(p);
+ return ret;
+ }
+
+@@ -5551,12 +5543,10 @@ static int send_hole(struct send_ctx *sc
+ if (sctx->flags & BTRFS_SEND_FLAG_NO_FILE_DATA)
+ return send_update_extent(sctx, offset, end - offset);
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto tlv_put_failure;
++ p = get_cur_inode_path(sctx);
++ if (IS_ERR(p))
++ return PTR_ERR(p);
++
+ while (offset < end) {
+ u64 len = min(end - offset, read_size);
+
+@@ -5577,7 +5567,6 @@ static int send_hole(struct send_ctx *sc
+ }
+ sctx->cur_inode_next_write_offset = offset;
+ tlv_put_failure:
+- fs_path_free(p);
+ return ret;
+ }
+
+@@ -5600,9 +5589,9 @@ static int send_encoded_inline_extent(st
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+- fspath = fs_path_alloc();
+- if (!fspath) {
+- ret = -ENOMEM;
++ fspath = get_cur_inode_path(sctx);
++ if (IS_ERR(fspath)) {
++ ret = PTR_ERR(fspath);
+ goto out;
+ }
+
+@@ -5610,10 +5599,6 @@ static int send_encoded_inline_extent(st
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, fspath);
+- if (ret < 0)
+- goto out;
+-
+ btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+ ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
+ ram_bytes = btrfs_file_extent_ram_bytes(leaf, ei);
+@@ -5642,7 +5627,6 @@ static int send_encoded_inline_extent(st
+
+ tlv_put_failure:
+ out:
+- fs_path_free(fspath);
+ iput(inode);
+ return ret;
+ }
+@@ -5667,9 +5651,9 @@ static int send_encoded_extent(struct se
+ if (IS_ERR(inode))
+ return PTR_ERR(inode);
+
+- fspath = fs_path_alloc();
+- if (!fspath) {
+- ret = -ENOMEM;
++ fspath = get_cur_inode_path(sctx);
++ if (IS_ERR(fspath)) {
++ ret = PTR_ERR(fspath);
+ goto out;
+ }
+
+@@ -5677,10 +5661,6 @@ static int send_encoded_extent(struct se
+ if (ret < 0)
+ goto out;
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, fspath);
+- if (ret < 0)
+- goto out;
+-
+ btrfs_item_key_to_cpu(leaf, &key, path->slots[0]);
+ ei = btrfs_item_ptr(leaf, path->slots[0], struct btrfs_file_extent_item);
+ disk_bytenr = btrfs_file_extent_disk_bytenr(leaf, ei);
+@@ -5747,7 +5727,6 @@ static int send_encoded_extent(struct se
+
+ tlv_put_failure:
+ out:
+- fs_path_free(fspath);
+ iput(inode);
+ return ret;
+ }
--- /dev/null
+From stable+bounces-171718-greg=kroah.com@vger.kernel.org Tue Aug 19 04:17:39 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:15:55 -0400
+Subject: btrfs: send: factor out common logic when sending xattrs
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-1-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 17f6a74d0b89092e38e3328b66eda1ab29a195d4 ]
+
+We always send xattrs for the current inode only and both callers of
+send_set_xattr() pass a path for the current inode. So move the path
+allocation and computation to send_set_xattr(), reducing duplicated
+code. This also facilitates an upcoming patch.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 005b0a0c24e1 ("btrfs: send: use fallocate for hole punching with send stream v2")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 41 +++++++++++++++--------------------------
+ 1 file changed, 15 insertions(+), 26 deletions(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4878,11 +4878,19 @@ out:
+ }
+
+ static int send_set_xattr(struct send_ctx *sctx,
+- struct fs_path *path,
+ const char *name, int name_len,
+ const char *data, int data_len)
+ {
+- int ret = 0;
++ struct fs_path *path;
++ int ret;
++
++ path = fs_path_alloc();
++ if (!path)
++ return -ENOMEM;
++
++ ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, path);
++ if (ret < 0)
++ goto out;
+
+ ret = begin_cmd(sctx, BTRFS_SEND_C_SET_XATTR);
+ if (ret < 0)
+@@ -4896,6 +4904,8 @@ static int send_set_xattr(struct send_ct
+
+ tlv_put_failure:
+ out:
++ fs_path_free(path);
++
+ return ret;
+ }
+
+@@ -4923,19 +4933,13 @@ static int __process_new_xattr(int num,
+ const char *name, int name_len, const char *data,
+ int data_len, void *ctx)
+ {
+- int ret;
+ struct send_ctx *sctx = ctx;
+- struct fs_path *p;
+ struct posix_acl_xattr_header dummy_acl;
+
+ /* Capabilities are emitted by finish_inode_if_needed */
+ if (!strncmp(name, XATTR_NAME_CAPS, name_len))
+ return 0;
+
+- p = fs_path_alloc();
+- if (!p)
+- return -ENOMEM;
+-
+ /*
+ * This hack is needed because empty acls are stored as zero byte
+ * data in xattrs. Problem with that is, that receiving these zero byte
+@@ -4952,15 +4956,7 @@ static int __process_new_xattr(int num,
+ }
+ }
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, p);
+- if (ret < 0)
+- goto out;
+-
+- ret = send_set_xattr(sctx, p, name, name_len, data, data_len);
+-
+-out:
+- fs_path_free(p);
+- return ret;
++ return send_set_xattr(sctx, name, name_len, data, data_len);
+ }
+
+ static int __process_deleted_xattr(int num, struct btrfs_key *di_key,
+@@ -5836,7 +5832,6 @@ static int send_extent_data(struct send_
+ */
+ static int send_capabilities(struct send_ctx *sctx)
+ {
+- struct fs_path *fspath = NULL;
+ struct btrfs_path *path;
+ struct btrfs_dir_item *di;
+ struct extent_buffer *leaf;
+@@ -5862,25 +5857,19 @@ static int send_capabilities(struct send
+ leaf = path->nodes[0];
+ buf_len = btrfs_dir_data_len(leaf, di);
+
+- fspath = fs_path_alloc();
+ buf = kmalloc(buf_len, GFP_KERNEL);
+- if (!fspath || !buf) {
++ if (!buf) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+- ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen, fspath);
+- if (ret < 0)
+- goto out;
+-
+ data_ptr = (unsigned long)(di + 1) + btrfs_dir_name_len(leaf, di);
+ read_extent_buffer(leaf, buf, data_ptr, buf_len);
+
+- ret = send_set_xattr(sctx, fspath, XATTR_NAME_CAPS,
++ ret = send_set_xattr(sctx, XATTR_NAME_CAPS,
+ strlen(XATTR_NAME_CAPS), buf, buf_len);
+ out:
+ kfree(buf);
+- fs_path_free(fspath);
+ btrfs_free_path(path);
+ return ret;
+ }
--- /dev/null
+From stable+bounces-171721-greg=kroah.com@vger.kernel.org Tue Aug 19 04:16:58 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:15:58 -0400
+Subject: btrfs: send: keep the current inode's path cached
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-4-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit fc746acb7aa9aeaa2cb5dcba449323319ba5c8eb ]
+
+Whenever we need to send a command for the current inode, like sending
+writes, xattr updates, truncates, utimes, etc., we compute the inode's
+path each time. That implies doing memory allocations and traversing the
+inode hierarchy to extract the name of the inode and each ancestor
+directory, which in turn implies doing lookups in the subvolume tree,
+amongst other operations.
+
+Most of the time, by far, the current inode's path doesn't change while
+we are processing it (like if we need to issue 100 write commands, the
+path remains the same and it's pointless to compute it 100 times).
+
+To avoid this, keep the current inode's path cached in the send context
+and invalidate or update it whenever needed (after unlinks or renames).
+
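+As a sketch (from the hunks below), get_cur_path() can then short-circuit
+to the cached path:
+
+  if (is_cur_inode && fs_path_len(&sctx->cur_inode_path) > 0) {
+          if (dest != &sctx->cur_inode_path)
+                  return fs_path_copy(dest, &sctx->cur_inode_path);
+          return 0;
+  }
+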
+A performance test and its results are mentioned in the next patch in
+the series (subject: "btrfs: send: avoid path allocation for the current
+inode when issuing commands").
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 005b0a0c24e1 ("btrfs: send: use fallocate for hole punching with send stream v2")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++-----
+ 1 file changed, 48 insertions(+), 5 deletions(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -178,6 +178,7 @@ struct send_ctx {
+ u64 cur_inode_rdev;
+ u64 cur_inode_last_extent;
+ u64 cur_inode_next_write_offset;
++ struct fs_path cur_inode_path;
+ bool cur_inode_new;
+ bool cur_inode_new_gen;
+ bool cur_inode_deleted;
+@@ -436,6 +437,14 @@ static void fs_path_reset(struct fs_path
+ }
+ }
+
++static void init_path(struct fs_path *p)
++{
++ p->reversed = 0;
++ p->buf = p->inline_buf;
++ p->buf_len = FS_PATH_INLINE_SIZE;
++ fs_path_reset(p);
++}
++
+ static struct fs_path *fs_path_alloc(void)
+ {
+ struct fs_path *p;
+@@ -443,10 +452,7 @@ static struct fs_path *fs_path_alloc(voi
+ p = kmalloc(sizeof(*p), GFP_KERNEL);
+ if (!p)
+ return NULL;
+- p->reversed = 0;
+- p->buf = p->inline_buf;
+- p->buf_len = FS_PATH_INLINE_SIZE;
+- fs_path_reset(p);
++ init_path(p);
+ return p;
+ }
+
+@@ -624,6 +630,14 @@ static void fs_path_unreverse(struct fs_
+ p->reversed = 0;
+ }
+
++static inline bool is_current_inode_path(const struct send_ctx *sctx,
++ const struct fs_path *path)
++{
++ const struct fs_path *cur = &sctx->cur_inode_path;
++
++ return (strncmp(path->start, cur->start, fs_path_len(cur)) == 0);
++}
++
+ static struct btrfs_path *alloc_path_for_send(void)
+ {
+ struct btrfs_path *path;
+@@ -2450,6 +2464,14 @@ static int get_cur_path(struct send_ctx
+ u64 parent_inode = 0;
+ u64 parent_gen = 0;
+ int stop = 0;
++ const bool is_cur_inode = (ino == sctx->cur_ino && gen == sctx->cur_inode_gen);
++
++ if (is_cur_inode && fs_path_len(&sctx->cur_inode_path) > 0) {
++ if (dest != &sctx->cur_inode_path)
++ return fs_path_copy(dest, &sctx->cur_inode_path);
++
++ return 0;
++ }
+
+ name = fs_path_alloc();
+ if (!name) {
+@@ -2501,8 +2523,12 @@ static int get_cur_path(struct send_ctx
+
+ out:
+ fs_path_free(name);
+- if (!ret)
++ if (!ret) {
+ fs_path_unreverse(dest);
++ if (is_cur_inode && dest != &sctx->cur_inode_path)
++ ret = fs_path_copy(&sctx->cur_inode_path, dest);
++ }
++
+ return ret;
+ }
+
+@@ -3112,6 +3138,11 @@ static int orphanize_inode(struct send_c
+ goto out;
+
+ ret = send_rename(sctx, path, orphan);
++ if (ret < 0)
++ goto out;
++
++ if (ino == sctx->cur_ino && gen == sctx->cur_inode_gen)
++ ret = fs_path_copy(&sctx->cur_inode_path, orphan);
+
+ out:
+ fs_path_free(orphan);
+@@ -4175,6 +4206,10 @@ static int rename_current_inode(struct s
+ if (ret < 0)
+ return ret;
+
++ ret = fs_path_copy(&sctx->cur_inode_path, new_path);
++ if (ret < 0)
++ return ret;
++
+ return fs_path_copy(current_path, new_path);
+ }
+
+@@ -4368,6 +4403,7 @@ static int process_recorded_refs(struct
+ if (ret > 0) {
+ orphanized_ancestor = true;
+ fs_path_reset(valid_path);
++ fs_path_reset(&sctx->cur_inode_path);
+ ret = get_cur_path(sctx, sctx->cur_ino,
+ sctx->cur_inode_gen,
+ valid_path);
+@@ -4567,6 +4603,8 @@ static int process_recorded_refs(struct
+ ret = send_unlink(sctx, cur->full_path);
+ if (ret < 0)
+ goto out;
++ if (is_current_inode_path(sctx, cur->full_path))
++ fs_path_reset(&sctx->cur_inode_path);
+ }
+ ret = dup_ref(cur, &check_dirs);
+ if (ret < 0)
+@@ -6902,6 +6940,7 @@ static int changed_inode(struct send_ctx
+ sctx->cur_inode_last_extent = (u64)-1;
+ sctx->cur_inode_next_write_offset = 0;
+ sctx->ignore_cur_inode = false;
++ fs_path_reset(&sctx->cur_inode_path);
+
+ /*
+ * Set send_progress to current inode. This will tell all get_cur_xxx
+@@ -8174,6 +8213,7 @@ long btrfs_ioctl_send(struct btrfs_inode
+ goto out;
+ }
+
++ init_path(&sctx->cur_inode_path);
+ INIT_LIST_HEAD(&sctx->new_refs);
+ INIT_LIST_HEAD(&sctx->deleted_refs);
+
+@@ -8459,6 +8499,9 @@ out:
+ btrfs_lru_cache_clear(&sctx->dir_created_cache);
+ btrfs_lru_cache_clear(&sctx->dir_utimes_cache);
+
++ if (sctx->cur_inode_path.buf != sctx->cur_inode_path.inline_buf)
++ kfree(sctx->cur_inode_path.buf);
++
+ kfree(sctx);
+ }
+
--- /dev/null
+From stable+bounces-171724-greg=kroah.com@vger.kernel.org Tue Aug 19 04:17:48 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:16:01 -0400
+Subject: btrfs: send: make fs_path_len() inline and constify its argument
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-7-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 920e8ee2bfcaf886fd8c0ad9df097a7dddfeb2d8 ]
+
+The helper function fs_path_len() is trivial and doesn't need to change
+its path argument, so make it inline and constify the argument.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -478,7 +478,7 @@ static void fs_path_free(struct fs_path
+ kfree(p);
+ }
+
+-static int fs_path_len(struct fs_path *p)
++static inline int fs_path_len(const struct fs_path *p)
+ {
+ return p->end - p->start;
+ }
--- /dev/null
+From stable+bounces-171719-greg=kroah.com@vger.kernel.org Tue Aug 19 04:17:45 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:15:56 -0400
+Subject: btrfs: send: only use boolean variables at process_recorded_refs()
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-2-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 9453fe329789073d9a971de01da5902c32c1a01a ]
+
+We have several local variables at process_recorded_refs() that are used
+as booleans, some of them having a 'bool' type while two of them have an
+'int' type. Change them all to use the 'bool' type, which is clearer and
+more consistent.
+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Stable-dep-of: 005b0a0c24e1 ("btrfs: send: use fallocate for hole punching with send stream v2")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4179,9 +4179,9 @@ static int process_recorded_refs(struct
+ u64 ow_inode = 0;
+ u64 ow_gen;
+ u64 ow_mode;
+- int did_overwrite = 0;
+- int is_orphan = 0;
+ u64 last_dir_ino_rm = 0;
++ bool did_overwrite = false;
++ bool is_orphan = false;
+ bool can_rename = true;
+ bool orphanized_dir = false;
+ bool orphanized_ancestor = false;
+@@ -4223,14 +4223,14 @@ static int process_recorded_refs(struct
+ if (ret < 0)
+ goto out;
+ if (ret)
+- did_overwrite = 1;
++ did_overwrite = true;
+ }
+ if (sctx->cur_inode_new || did_overwrite) {
+ ret = gen_unique_name(sctx, sctx->cur_ino,
+ sctx->cur_inode_gen, valid_path);
+ if (ret < 0)
+ goto out;
+- is_orphan = 1;
++ is_orphan = true;
+ } else {
+ ret = get_cur_path(sctx, sctx->cur_ino, sctx->cur_inode_gen,
+ valid_path);
+@@ -4453,7 +4453,7 @@ static int process_recorded_refs(struct
+ ret = send_rename(sctx, valid_path, cur->full_path);
+ if (ret < 0)
+ goto out;
+- is_orphan = 0;
++ is_orphan = false;
+ ret = fs_path_copy(valid_path, cur->full_path);
+ if (ret < 0)
+ goto out;
+@@ -4514,7 +4514,7 @@ static int process_recorded_refs(struct
+ sctx->cur_inode_gen, valid_path);
+ if (ret < 0)
+ goto out;
+- is_orphan = 1;
++ is_orphan = true;
+ }
+
+ list_for_each_entry(cur, &sctx->deleted_refs, list) {
--- /dev/null
+From stable+bounces-171723-greg=kroah.com@vger.kernel.org Tue Aug 19 04:19:25 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:16:00 -0400
+Subject: btrfs: send: use fallocate for hole punching with send stream v2
+To: stable@vger.kernel.org
+Cc: Filipe Manana <fdmanana@suse.com>, Boris Burkov <boris@bur.io>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819021601.274993-6-sashal@kernel.org>
+
+From: Filipe Manana <fdmanana@suse.com>
+
+[ Upstream commit 005b0a0c24e1628313e951516b675109a92cacfe ]
+
+Currently holes are sent as writes full of zeroes, which results in
+unnecessarily using disk space at the receiving end and increasing the
+stream size.
+
+In some cases we avoid sending writes of zeroes, like during a full
+send operation where we just skip writes for holes.
+
+But in some cases we still fill previous holes with writes of zeroes, as
+in this scenario:
+
+1) We have a file with a hole in the range [2M, 3M), we snapshot the
+ subvolume and do a full send. The range [2M, 3M) stays as a hole at
+ the receiver since we skip sending write commands full of zeroes;
+
+2) We punch a hole for the range [3M, 4M) in our file, so that now it
+ has a 2M hole in the range [2M, 4M), and snapshot the subvolume.
+ Now if we do an incremental send, we will send write commands full
+ of zeroes for the range [2M, 4M), removing the hole for [2M, 3M) at
+ the receiver.
+
+We could improve cases such as this last one by doing additional
+comparisons of file extent items (or their absence) between the parent
+and send snapshots, but that's a lot of code to add plus additional CPU
+and IO costs.
+
+Since the send stream v2 already has a fallocate command and btrfs-progs
+implements a callback to execute fallocate since the send stream v2
+support was added to it, update the kernel to use fallocate for punching
+holes for V2+ streams.
+
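+In code terms (matching the hunk below), send_hole() can now short-circuit
+into a hole punching fallocate command for v2+ streams:
+
+  if (proto_cmd_ok(sctx, BTRFS_SEND_C_FALLOCATE))
+          return send_fallocate(sctx, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
+                                offset, end - offset);
+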
+Test coverage is provided by btrfs/284 which is a version of btrfs/007
+that exercises send stream v2 instead of v1, using fsstress with random
+operations and fssum to verify file contents.
+
+Link: https://github.com/kdave/btrfs-progs/issues/1001
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Boris Burkov <boris@bur.io>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/send.c | 33 +++++++++++++++++++++++++++++++++
+ 1 file changed, 33 insertions(+)
+
+--- a/fs/btrfs/send.c
++++ b/fs/btrfs/send.c
+@@ -4,6 +4,7 @@
+ */
+
+ #include <linux/bsearch.h>
++#include <linux/falloc.h>
+ #include <linux/fs.h>
+ #include <linux/file.h>
+ #include <linux/sort.h>
+@@ -5518,6 +5519,30 @@ tlv_put_failure:
+ return ret;
+ }
+
++static int send_fallocate(struct send_ctx *sctx, u32 mode, u64 offset, u64 len)
++{
++ struct fs_path *path;
++ int ret;
++
++ path = get_cur_inode_path(sctx);
++ if (IS_ERR(path))
++ return PTR_ERR(path);
++
++ ret = begin_cmd(sctx, BTRFS_SEND_C_FALLOCATE);
++ if (ret < 0)
++ return ret;
++
++ TLV_PUT_PATH(sctx, BTRFS_SEND_A_PATH, path);
++ TLV_PUT_U32(sctx, BTRFS_SEND_A_FALLOCATE_MODE, mode);
++ TLV_PUT_U64(sctx, BTRFS_SEND_A_FILE_OFFSET, offset);
++ TLV_PUT_U64(sctx, BTRFS_SEND_A_SIZE, len);
++
++ ret = send_cmd(sctx);
++
++tlv_put_failure:
++ return ret;
++}
++
+ static int send_hole(struct send_ctx *sctx, u64 end)
+ {
+ struct fs_path *p = NULL;
+@@ -5526,6 +5551,14 @@ static int send_hole(struct send_ctx *sc
+ int ret = 0;
+
+ /*
++ * Starting with send stream v2 we have fallocate and can use it to
++ * punch holes instead of sending writes full of zeroes.
++ */
++ if (proto_cmd_ok(sctx, BTRFS_SEND_C_FALLOCATE))
++ return send_fallocate(sctx, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
++ offset, end - offset);
++
++ /*
+ * A hole that starts at EOF or beyond it. Since we do not yet support
+ * fallocate (for extent preallocation and hole punching), sending a
+ * write of zeroes starting at EOF or beyond would later require issuing
--- /dev/null
+From stable+bounces-171715-greg=kroah.com@vger.kernel.org Tue Aug 19 04:00:07 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 21:58:49 -0400
+Subject: btrfs: zoned: requeue to unused block group list if zone finish failed
+To: stable@vger.kernel.org
+Cc: Naohiro Aota <naohiro.aota@wdc.com>, Johannes Thumshirn <johannes.thumshirn@wdc.com>, David Sterba <dsterba@suse.com>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819015850.263708-3-sashal@kernel.org>
+
+From: Naohiro Aota <naohiro.aota@wdc.com>
+
+[ Upstream commit 62be7afcc13b2727bdc6a4c91aefed6b452e6ecc ]
+
+btrfs_zone_finish() can fail for several reasons. If it is -EAGAIN, we
+need to try it again later, so put the block group on the retry list
+properly.
+
+Failing to do so will keep the removable block group intact until remount
+and can cause unnecessary ENOSPC.
+
+Fixes: 74e91b12b115 ("btrfs: zoned: zone finish unused block group")
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/block-group.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/fs/btrfs/block-group.c
++++ b/fs/btrfs/block-group.c
+@@ -1646,8 +1646,10 @@ void btrfs_delete_unused_bgs(struct btrf
+ ret = btrfs_zone_finish(block_group);
+ if (ret < 0) {
+ btrfs_dec_block_group_ro(block_group);
+- if (ret == -EAGAIN)
++ if (ret == -EAGAIN) {
++ btrfs_link_bg_list(block_group, &retry_list);
+ ret = 0;
++ }
+ goto next;
+ }
+
+++ /dev/null
-From 538c06e3964a8e94b645686cc58ccc4a06fa6330 Mon Sep 17 00:00:00 2001
-From: Bibo Mao <maobibo@loongson.cn>
-Date: Wed, 20 Aug 2025 22:51:15 +0800
-Subject: LoongArch: KVM: Add address alignment check in pch_pic register access
-
-From: Bibo Mao <maobibo@loongson.cn>
-
-commit 538c06e3964a8e94b645686cc58ccc4a06fa6330 upstream.
-
-With pch_pic device, its register is based on MMIO address space,
-different access size 1/2/4/8 is supported. And base address should
-be naturally aligned with its access size, here add alignment check
-in its register access emulation function.
-
-Cc: stable@vger.kernel.org
-Signed-off-by: Bibo Mao <maobibo@loongson.cn>
-Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- arch/loongarch/kvm/intc/pch_pic.c | 10 ++++++++++
- 1 file changed, 10 insertions(+)
-
-diff --git a/arch/loongarch/kvm/intc/pch_pic.c b/arch/loongarch/kvm/intc/pch_pic.c
-index 6f00ffe05c54..119290bcea79 100644
---- a/arch/loongarch/kvm/intc/pch_pic.c
-+++ b/arch/loongarch/kvm/intc/pch_pic.c
-@@ -195,6 +195,11 @@ static int kvm_pch_pic_read(struct kvm_vcpu *vcpu,
- return -EINVAL;
- }
-
-+ if (addr & (len - 1)) {
-+ kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
-+ return -EINVAL;
-+ }
-+
- /* statistics of pch pic reading */
- vcpu->stat.pch_pic_read_exits++;
- ret = loongarch_pch_pic_read(s, addr, len, val);
-@@ -302,6 +307,11 @@ static int kvm_pch_pic_write(struct kvm_vcpu *vcpu,
- return -EINVAL;
- }
-
-+ if (addr & (len - 1)) {
-+ kvm_err("%s: pch pic not aligned addr %llx len %d\n", __func__, addr, len);
-+ return -EINVAL;
-+ }
-+
- /* statistics of pch pic writing */
- vcpu->stat.pch_pic_write_exits++;
- ret = loongarch_pch_pic_write(s, addr, len, val);
---
-2.50.1
-
--- /dev/null
+From 7e6c3130690a01076efdf45aa02ba5d5c16849a0 Mon Sep 17 00:00:00 2001
+From: SeongJae Park <sj@kernel.org>
+Date: Sun, 20 Jul 2025 11:58:22 -0700
+Subject: mm/damon/ops-common: ignore migration request to invalid nodes
+
+From: SeongJae Park <sj@kernel.org>
+
+commit 7e6c3130690a01076efdf45aa02ba5d5c16849a0 upstream.
+
+damon_migrate_pages() tries migration even if the target node is invalid.
+If users mistakenly make such invalid requests via the
+DAMOS_MIGRATE_{HOT,COLD} actions, the kernel BUG below can happen.
+
+ [ 7831.883495] BUG: unable to handle page fault for address: 0000000000001f48
+ [ 7831.884160] #PF: supervisor read access in kernel mode
+ [ 7831.884681] #PF: error_code(0x0000) - not-present page
+ [ 7831.885203] PGD 0 P4D 0
+ [ 7831.885468] Oops: Oops: 0000 [#1] SMP PTI
+ [ 7831.885852] CPU: 31 UID: 0 PID: 94202 Comm: kdamond.0 Not tainted 6.16.0-rc5-mm-new-damon+ #93 PREEMPT(voluntary)
+ [ 7831.886913] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.3-4.el9 04/01/2014
+ [ 7831.887777] RIP: 0010:__alloc_frozen_pages_noprof (include/linux/mmzone.h:1724 include/linux/mmzone.h:1750 mm/page_alloc.c:4936 mm/page_alloc.c:5137)
+ [...]
+ [ 7831.895953] Call Trace:
+ [ 7831.896195] <TASK>
+ [ 7831.896397] __folio_alloc_noprof (mm/page_alloc.c:5183 mm/page_alloc.c:5192)
+ [ 7831.896787] migrate_pages_batch (mm/migrate.c:1189 mm/migrate.c:1851)
+ [ 7831.897228] ? __pfx_alloc_migration_target (mm/migrate.c:2137)
+ [ 7831.897735] migrate_pages (mm/migrate.c:2078)
+ [ 7831.898141] ? __pfx_alloc_migration_target (mm/migrate.c:2137)
+ [ 7831.898664] damon_migrate_folio_list (mm/damon/ops-common.c:321 mm/damon/ops-common.c:354)
+ [ 7831.899140] damon_migrate_pages (mm/damon/ops-common.c:405)
+ [...]
+
+Add a target node validity check in damon_migrate_pages(). The validity
+check is stolen from that of do_pages_move(), which is used for the
+move_pages() system call.
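+
+A minimal sketch of the check (mirroring do_pages_move(); the actual hunk
+is below):
+
+	/* Reject migration to a node that does not exist or has no memory;
+	 * NODE_DATA() for such a nid can be NULL and the page allocator
+	 * would fault on it.
+	 */
+	if (target_nid < 0 || target_nid >= MAX_NUMNODES ||
+	    !node_state(target_nid, N_MEMORY))
+		return nr_migrated;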
+
+Link: https://lkml.kernel.org/r/20250720185822.1451-1-sj@kernel.org
+Fixes: b51820ebea65 ("mm/damon/paddr: introduce DAMOS_MIGRATE_COLD action for demotion") [6.11.x]
+Signed-off-by: SeongJae Park <sj@kernel.org>
+Reviewed-by: Joshua Hahn <joshua.hahnjy@gmail.com>
+Cc: Honggyu Kim <honggyu.kim@sk.com>
+Cc: Hyeongtak Ji <hyeongtak.ji@sk.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/damon/paddr.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/mm/damon/paddr.c
++++ b/mm/damon/paddr.c
+@@ -431,6 +431,10 @@ static unsigned long damon_pa_migrate_pa
+ if (list_empty(folio_list))
+ return nr_migrated;
+
++ if (target_nid < 0 || target_nid >= MAX_NUMNODES ||
++ !node_state(target_nid, N_MEMORY))
++ return nr_migrated;
++
+ noreclaim_flag = memalloc_noreclaim_save();
+
+ nid = folio_nid(lru_to_folio(folio_list));
--- /dev/null
+From stable+bounces-172251-greg=kroah.com@vger.kernel.org Fri Aug 22 05:09:26 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Thu, 21 Aug 2025 23:08:00 -0400
+Subject: netfs: Fix unbuffered write error handling
+To: stable@vger.kernel.org
+Cc: David Howells <dhowells@redhat.com>, Xiaoli Feng <fengxiaoli0714@gmail.com>, Paulo Alcantara <pc@manguebit.org>, Steve French <sfrench@samba.org>, Shyam Prasad N <sprasad@microsoft.com>, netfs@lists.linux.dev, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org, Christian Brauner <brauner@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250822030800.1054685-1-sashal@kernel.org>
+
+From: David Howells <dhowells@redhat.com>
+
+[ Upstream commit a3de58b12ce074ec05b8741fa28d62ccb1070468 ]
+
+If all the subrequests in an unbuffered write stream fail, the subrequest
+collector doesn't update the stream->transferred value and it retains its
+initial LONG_MAX value. Unfortunately, if all active streams fail, then we
+take the smallest value of { LONG_MAX, LONG_MAX, ... } as the value to set
+in wreq->transferred - which is then returned from ->write_iter().
+
+LONG_MAX was chosen as the initial value so that all the streams can be
+quickly assessed by taking the smallest value of all stream->transferred -
+but this only works if we've set any of them.
+
+Fix this by adding a flag to indicate whether the value in
+stream->transferred is valid and checking that when we integrate the
+values. stream->transferred can then be initialised to zero.
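+
+Illustrative sketch of the aggregation after the fix (simplified from the
+collector code in the hunk below; the two streams are the server upload
+and the cache write):
+
+	size_t transferred = LONG_MAX;
+	bool transferred_valid = false;
+
+	for (s = 0; s < 2; s++) {
+		struct netfs_io_stream *stream = &wreq->io_streams[s];
+
+		if (!stream->active)
+			continue;
+		if (stream->transferred_valid &&
+		    stream->transferred < transferred) {
+			transferred = stream->transferred;
+			transferred_valid = true;
+		}
+	}
+	/* Without the valid flag, two fully failed streams would leave
+	 * LONG_MAX here and it would be returned from ->write_iter().
+	 */
+	if (transferred_valid)
+		wreq->transferred = transferred;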
+
+This was found by running the generic/750 xfstest against cifs with
+cache=none. It splices data to the target file. Once (if) it has used up
+all the available scratch space, the writes start failing with ENOSPC.
+This causes ->write_iter() to fail. However, it was returning
+wreq->transferred, i.e. LONG_MAX, rather than an error (because it thought
+the amount transferred was non-zero) and iter_file_splice_write() would
+then try to clean up that amount of pipe bufferage - leading to an oops
+when it overran. The kernel log showed:
+
+ CIFS: VFS: Send error in write = -28
+
+followed by:
+
+ BUG: kernel NULL pointer dereference, address: 0000000000000008
+
+with:
+
+ RIP: 0010:iter_file_splice_write+0x3a4/0x520
+ do_splice+0x197/0x4e0
+
+or:
+
+ RIP: 0010:pipe_buf_release (include/linux/pipe_fs_i.h:282)
+ iter_file_splice_write (fs/splice.c:755)
+
+Also put a warning check into splice to announce if ->write_iter() returned
+that it had written more than it was asked to.
+
+Fixes: 288ace2f57c9 ("netfs: New writeback implementation")
+Reported-by: Xiaoli Feng <fengxiaoli0714@gmail.com>
+Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220445
+Signed-off-by: David Howells <dhowells@redhat.com>
+Link: https://lore.kernel.org/915443.1755207950@warthog.procyon.org.uk
+cc: Paulo Alcantara <pc@manguebit.org>
+cc: Steve French <sfrench@samba.org>
+cc: Shyam Prasad N <sprasad@microsoft.com>
+cc: netfs@lists.linux.dev
+cc: linux-cifs@vger.kernel.org
+cc: linux-fsdevel@vger.kernel.org
+cc: stable@vger.kernel.org
+Signed-off-by: Christian Brauner <brauner@kernel.org>
+[ Dropped read_collect.c hunk ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/netfs/write_collect.c | 10 ++++++++--
+ fs/netfs/write_issue.c | 4 ++--
+ fs/splice.c | 3 +++
+ include/linux/netfs.h | 1 +
+ 4 files changed, 14 insertions(+), 4 deletions(-)
+
+--- a/fs/netfs/write_collect.c
++++ b/fs/netfs/write_collect.c
+@@ -433,6 +433,7 @@ reassess_streams:
+ if (front->start + front->transferred > stream->collected_to) {
+ stream->collected_to = front->start + front->transferred;
+ stream->transferred = stream->collected_to - wreq->start;
++ stream->transferred_valid = true;
+ notes |= MADE_PROGRESS;
+ }
+ if (test_bit(NETFS_SREQ_FAILED, &front->flags)) {
+@@ -538,6 +539,7 @@ void netfs_write_collection_worker(struc
+ struct netfs_io_request *wreq = container_of(work, struct netfs_io_request, work);
+ struct netfs_inode *ictx = netfs_inode(wreq->inode);
+ size_t transferred;
++ bool transferred_valid = false;
+ int s;
+
+ _enter("R=%x", wreq->debug_id);
+@@ -568,12 +570,16 @@ void netfs_write_collection_worker(struc
+ netfs_put_request(wreq, false, netfs_rreq_trace_put_work);
+ return;
+ }
+- if (stream->transferred < transferred)
++ if (stream->transferred_valid &&
++ stream->transferred < transferred) {
+ transferred = stream->transferred;
++ transferred_valid = true;
++ }
+ }
+
+ /* Okay, declare that all I/O is complete. */
+- wreq->transferred = transferred;
++ if (transferred_valid)
++ wreq->transferred = transferred;
+ trace_netfs_rreq(wreq, netfs_rreq_trace_write_done);
+
+ if (wreq->io_streams[1].active &&
+--- a/fs/netfs/write_issue.c
++++ b/fs/netfs/write_issue.c
+@@ -115,12 +115,12 @@ struct netfs_io_request *netfs_create_wr
+ wreq->io_streams[0].prepare_write = ictx->ops->prepare_write;
+ wreq->io_streams[0].issue_write = ictx->ops->issue_write;
+ wreq->io_streams[0].collected_to = start;
+- wreq->io_streams[0].transferred = LONG_MAX;
++ wreq->io_streams[0].transferred = 0;
+
+ wreq->io_streams[1].stream_nr = 1;
+ wreq->io_streams[1].source = NETFS_WRITE_TO_CACHE;
+ wreq->io_streams[1].collected_to = start;
+- wreq->io_streams[1].transferred = LONG_MAX;
++ wreq->io_streams[1].transferred = 0;
+ if (fscache_resources_valid(&wreq->cache_resources)) {
+ wreq->io_streams[1].avail = true;
+ wreq->io_streams[1].active = true;
+--- a/fs/splice.c
++++ b/fs/splice.c
+@@ -744,6 +744,9 @@ iter_file_splice_write(struct pipe_inode
+ sd.pos = kiocb.ki_pos;
+ if (ret <= 0)
+ break;
++ WARN_ONCE(ret > sd.total_len - left,
++ "Splice Exceeded! ret=%zd tot=%zu left=%zu\n",
++ ret, sd.total_len, left);
+
+ sd.num_spliced += ret;
+ sd.total_len -= ret;
+--- a/include/linux/netfs.h
++++ b/include/linux/netfs.h
+@@ -150,6 +150,7 @@ struct netfs_io_stream {
+ bool active; /* T if stream is active */
+ bool need_retry; /* T if this stream needs retrying */
+ bool failed; /* T if this stream failed */
++ bool transferred_valid; /* T if ->transferred is valid */
+ };
+
+ /*
mptcp-drop-skb-if-mptcp-skb-extension-allocation-fails.patch
mptcp-pm-kernel-flush-do-not-reset-add_addr-limit.patch
selftests-mptcp-pm-check-flush-doesn-t-reset-limits.patch
-loongarch-kvm-add-address-alignment-check-in-pch_pic-register-access.patch
+mm-damon-ops-common-ignore-migration-request-to-invalid-nodes.patch
+x86-sev-ensure-svsm-reserved-fields-in-a-page-validation-entry-are-initialized-to-zero.patch
+usb-typec-use-str_enable_disable-like-helpers.patch
+usb-typec-fusb302-cache-pd-rx-state.patch
+btrfs-qgroup-drop-unused-parameter-fs_info-from-__del_qgroup_rb.patch
+btrfs-qgroup-fix-race-between-quota-disable-and-quota-rescan-ioctl.patch
+btrfs-move-transaction-aborts-to-the-error-site-in-add_block_group_free_space.patch
+btrfs-always-abort-transaction-on-failure-to-add-block-group-to-free-space-tree.patch
+btrfs-abort-transaction-on-unexpected-eb-generation-at-btrfs_copy_root.patch
+btrfs-explicitly-ref-count-block_group-on-new_bgs-list.patch
+btrfs-codify-pattern-for-adding-block_group-to-bg_list.patch
+btrfs-zoned-requeue-to-unused-block-group-list-if-zone-finish-failed.patch
+xfs-fully-decouple-xfs_ibulk-flags-from-xfs_iwalk-flags.patch
+btrfs-send-factor-out-common-logic-when-sending-xattrs.patch
+btrfs-send-only-use-boolean-variables-at-process_recorded_refs.patch
+btrfs-send-add-and-use-helper-to-rename-current-inode-when-processing-refs.patch
+btrfs-send-keep-the-current-inode-s-path-cached.patch
+btrfs-send-avoid-path-allocation-for-the-current-inode-when-issuing-commands.patch
+btrfs-send-use-fallocate-for-hole-punching-with-send-stream-v2.patch
+btrfs-send-make-fs_path_len-inline-and-constify-its-argument.patch
+netfs-fix-unbuffered-write-error-handling.patch
--- /dev/null
+From stable+bounces-171645-greg=kroah.com@vger.kernel.org Mon Aug 18 21:32:53 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 15:32:40 -0400
+Subject: usb: typec: fusb302: cache PD RX state
+To: stable@vger.kernel.org
+Cc: Sebastian Reichel <sebastian.reichel@collabora.com>, stable <stable@kernel.org>, Heikki Krogerus <heikki.krogerus@linux.intel.com>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250818193240.50566-2-sashal@kernel.org>
+
+From: Sebastian Reichel <sebastian.reichel@collabora.com>
+
+[ Upstream commit 1e61f6ab08786d66a11cfc51e13d6f08a6b06c56 ]
+
+This patch fixes a race condition in the PD communication, which ends up
+in PD hard resets when the race is lost. Some systems, like the Radxa ROCK
+5B are powered through USB-C without any backup power source and use a
+FUSB302 chip to do the PD negotiation. This means it is quite important
+to avoid hard resets, since that effectively kills the system's
+power-supply.
+
+I've found the following race condition while debugging unplanned power
+loss during booting the board every now and then:
+
+1. lots of TCPM/FUSB302/PD initialization stuff
+2. TCPM ends up in SNK_WAIT_CAPABILITIES (tcpm_set_pd_rx is enabled here)
+3. the remote PD source does not send anything, so TCPM does a SOFT RESET
+4. TCPM ends up in SNK_WAIT_CAPABILITIES for the second time
+ (tcpm_set_pd_rx is enabled again, even though it is still on)
+
+At this point I've seen broken CRC good messages being sent by the
+FUSB302 with a logic analyzer sniffing the CC lines. Also it looks like
+messages are being lost and things generally going haywire, with one of
+the two sides doing a hard reset once a broken CRC good message was sent
+to the bus.
+
+I think the system is running into a race condition where the FIFOs are
+being cleared and/or the automatic good CRC message generation flag is
+being updated while a message is already arriving.
+
+Let's avoid this by caching the PD RX enabled state, as we have already
+processed anything in the FIFOs and are in a good state. As a side
+effect, this also optimizes I2C bus usage :)
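+
+In essence the fix is a guard at the top of tcpm_set_pd_rx() plus caching
+the state on success (hedged sketch; the exact hunk is below):
+
+	mutex_lock(&chip->lock);
+	if (chip->pd_rx_on == on) {
+		/* FIFOs were already flushed and auto good CRC generation
+		 * already configured; redoing that while a message arrives
+		 * is what corrupts the exchange.
+		 */
+		fusb302_log(chip, "pd is already %s", str_on_off(on));
+		goto done;
+	}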
+
+As far as I can tell the problem theoretically also exists when TCPM
+enters SNK_WAIT_CAPABILITIES the first time, but I believe this is less
+critical for the following reason:
+
+On devices like the ROCK 5B, which are powered through a TCPM backed
+USB-C port, the bootloader must have done some prior PD communication
+(initial communication must happen within 5 seconds after plugging the
+USB-C plug). This means the first time the kernel TCPM state machine
+reaches SNK_WAIT_CAPABILITIES, the remote side is not sending messages
+actively. On other devices a hard reset simply adds some extra delay and
+things should be good afterwards.
+
+Fixes: c034a43e72dda ("staging: typec: Fairchild FUSB302 Type-c chip driver")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Sebastian Reichel <sebastian.reichel@collabora.com>
+Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
+Link: https://lore.kernel.org/r/20250704-fusb302-race-condition-fix-v1-1-239012c0e27a@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/typec/tcpm/fusb302.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -104,6 +104,7 @@ struct fusb302_chip {
+ bool vconn_on;
+ bool vbus_on;
+ bool charge_on;
++ bool pd_rx_on;
+ bool vbus_present;
+ enum typec_cc_polarity cc_polarity;
+ enum typec_cc_status cc1;
+@@ -841,6 +842,11 @@ static int tcpm_set_pd_rx(struct tcpc_de
+ int ret = 0;
+
+ mutex_lock(&chip->lock);
++ if (chip->pd_rx_on == on) {
++ fusb302_log(chip, "pd is already %s", str_on_off(on));
++ goto done;
++ }
++
+ ret = fusb302_pd_rx_flush(chip);
+ if (ret < 0) {
+ fusb302_log(chip, "cannot flush pd rx buffer, ret=%d", ret);
+@@ -863,6 +869,8 @@ static int tcpm_set_pd_rx(struct tcpc_de
+ str_on_off(on), ret);
+ goto done;
+ }
++
++ chip->pd_rx_on = on;
+ fusb302_log(chip, "pd := %s", str_on_off(on));
+ done:
+ mutex_unlock(&chip->lock);
--- /dev/null
+From stable+bounces-171644-greg=kroah.com@vger.kernel.org Mon Aug 18 21:32:48 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 15:32:39 -0400
+Subject: USB: typec: Use str_enable_disable-like helpers
+To: stable@vger.kernel.org
+Cc: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250818193240.50566-1-sashal@kernel.org>
+
+From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+
+[ Upstream commit 13b3af26a41538e5051baedba8678eba521a27d3 ]
+
+Replace ternary (condition ? "enable" : "disable") syntax with helpers
+from string_choices.h because:
+1. A simple function call with one argument is easier to read. A ternary
+   operator has three arguments and, with wrapping, might lead to quite
+   long code.
+2. It is slightly shorter, thus also easier to read.
+3. It brings uniformity to the text - the same string everywhere.
+4. It allows deduping by the linker, which results in a smaller binary
+   file.
+
+Signed-off-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
+Link: https://lore.kernel.org/r/20250114-str-enable-disable-usb-v1-3-c8405df47c19@linaro.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Stable-dep-of: 1e61f6ab0878 ("usb: typec: fusb302: cache PD RX state")
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/typec/class.c | 7 ++--
+ drivers/usb/typec/tcpm/fusb302.c | 24 +++++++--------
+ drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c | 3 +
+ drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy_stub.c | 3 +
+ drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c | 4 +-
+ drivers/usb/typec/tcpm/tcpm.c | 7 ++--
+ 6 files changed, 27 insertions(+), 21 deletions(-)
+
+--- a/drivers/usb/typec/class.c
++++ b/drivers/usb/typec/class.c
+@@ -10,6 +10,7 @@
+ #include <linux/mutex.h>
+ #include <linux/property.h>
+ #include <linux/slab.h>
++#include <linux/string_choices.h>
+ #include <linux/usb/pd_vdo.h>
+ #include <linux/usb/typec_mux.h>
+ #include <linux/usb/typec_retimer.h>
+@@ -354,7 +355,7 @@ active_show(struct device *dev, struct d
+ {
+ struct typec_altmode *alt = to_typec_altmode(dev);
+
+- return sprintf(buf, "%s\n", alt->active ? "yes" : "no");
++ return sprintf(buf, "%s\n", str_yes_no(alt->active));
+ }
+
+ static ssize_t active_store(struct device *dev, struct device_attribute *attr,
+@@ -630,7 +631,7 @@ static ssize_t supports_usb_power_delive
+ {
+ struct typec_partner *p = to_typec_partner(dev);
+
+- return sprintf(buf, "%s\n", p->usb_pd ? "yes" : "no");
++ return sprintf(buf, "%s\n", str_yes_no(p->usb_pd));
+ }
+ static DEVICE_ATTR_RO(supports_usb_power_delivery);
+
+@@ -1688,7 +1689,7 @@ static ssize_t vconn_source_show(struct
+ struct typec_port *port = to_typec_port(dev);
+
+ return sprintf(buf, "%s\n",
+- port->vconn_role == TYPEC_SOURCE ? "yes" : "no");
++ str_yes_no(port->vconn_role == TYPEC_SOURCE));
+ }
+ static DEVICE_ATTR_RW(vconn_source);
+
+--- a/drivers/usb/typec/tcpm/fusb302.c
++++ b/drivers/usb/typec/tcpm/fusb302.c
+@@ -24,6 +24,7 @@
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
+ #include <linux/string.h>
++#include <linux/string_choices.h>
+ #include <linux/types.h>
+ #include <linux/usb.h>
+ #include <linux/usb/typec.h>
+@@ -733,7 +734,7 @@ static int tcpm_set_vconn(struct tcpc_de
+
+ mutex_lock(&chip->lock);
+ if (chip->vconn_on == on) {
+- fusb302_log(chip, "vconn is already %s", on ? "On" : "Off");
++ fusb302_log(chip, "vconn is already %s", str_on_off(on));
+ goto done;
+ }
+ if (on) {
+@@ -746,7 +747,7 @@ static int tcpm_set_vconn(struct tcpc_de
+ if (ret < 0)
+ goto done;
+ chip->vconn_on = on;
+- fusb302_log(chip, "vconn := %s", on ? "On" : "Off");
++ fusb302_log(chip, "vconn := %s", str_on_off(on));
+ done:
+ mutex_unlock(&chip->lock);
+
+@@ -761,7 +762,7 @@ static int tcpm_set_vbus(struct tcpc_dev
+
+ mutex_lock(&chip->lock);
+ if (chip->vbus_on == on) {
+- fusb302_log(chip, "vbus is already %s", on ? "On" : "Off");
++ fusb302_log(chip, "vbus is already %s", str_on_off(on));
+ } else {
+ if (on)
+ ret = regulator_enable(chip->vbus);
+@@ -769,15 +770,14 @@ static int tcpm_set_vbus(struct tcpc_dev
+ ret = regulator_disable(chip->vbus);
+ if (ret < 0) {
+ fusb302_log(chip, "cannot %s vbus regulator, ret=%d",
+- on ? "enable" : "disable", ret);
++ str_enable_disable(on), ret);
+ goto done;
+ }
+ chip->vbus_on = on;
+- fusb302_log(chip, "vbus := %s", on ? "On" : "Off");
++ fusb302_log(chip, "vbus := %s", str_on_off(on));
+ }
+ if (chip->charge_on == charge)
+- fusb302_log(chip, "charge is already %s",
+- charge ? "On" : "Off");
++ fusb302_log(chip, "charge is already %s", str_on_off(charge));
+ else
+ chip->charge_on = charge;
+
+@@ -854,16 +854,16 @@ static int tcpm_set_pd_rx(struct tcpc_de
+ ret = fusb302_pd_set_auto_goodcrc(chip, on);
+ if (ret < 0) {
+ fusb302_log(chip, "cannot turn %s auto GCRC, ret=%d",
+- on ? "on" : "off", ret);
++ str_on_off(on), ret);
+ goto done;
+ }
+ ret = fusb302_pd_set_interrupts(chip, on);
+ if (ret < 0) {
+ fusb302_log(chip, "cannot turn %s pd interrupts, ret=%d",
+- on ? "on" : "off", ret);
++ str_on_off(on), ret);
+ goto done;
+ }
+- fusb302_log(chip, "pd := %s", on ? "on" : "off");
++ fusb302_log(chip, "pd := %s", str_on_off(on));
+ done:
+ mutex_unlock(&chip->lock);
+
+@@ -1531,7 +1531,7 @@ static void fusb302_irq_work(struct work
+ if (interrupt & FUSB_REG_INTERRUPT_VBUSOK) {
+ vbus_present = !!(status0 & FUSB_REG_STATUS0_VBUSOK);
+ fusb302_log(chip, "IRQ: VBUS_OK, vbus=%s",
+- vbus_present ? "On" : "Off");
++ str_on_off(vbus_present));
+ if (vbus_present != chip->vbus_present) {
+ chip->vbus_present = vbus_present;
+ tcpm_vbus_change(chip->tcpm_port);
+@@ -1562,7 +1562,7 @@ static void fusb302_irq_work(struct work
+ if ((interrupt & FUSB_REG_INTERRUPT_COMP_CHNG) && intr_comp_chng) {
+ comp_result = !!(status0 & FUSB_REG_STATUS0_COMP);
+ fusb302_log(chip, "IRQ: COMP_CHNG, comp=%s",
+- comp_result ? "true" : "false");
++ str_true_false(comp_result));
+ if (comp_result) {
+ /* cc level > Rd_threshold, detach */
+ chip->cc1 = TYPEC_CC_OPEN;
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy.c
+@@ -12,6 +12,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/slab.h>
++#include <linux/string_choices.h>
+ #include <linux/usb/pd.h>
+ #include <linux/usb/tcpm.h>
+ #include "qcom_pmic_typec.h"
+@@ -418,7 +419,7 @@ static int qcom_pmic_typec_pdphy_set_pd_
+
+ spin_unlock_irqrestore(&pmic_typec_pdphy->lock, flags);
+
+- dev_dbg(pmic_typec_pdphy->dev, "set_pd_rx: %s\n", on ? "on" : "off");
++ dev_dbg(pmic_typec_pdphy->dev, "set_pd_rx: %s\n", str_on_off(on));
+
+ return ret;
+ }
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy_stub.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_pdphy_stub.c
+@@ -12,6 +12,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/slab.h>
++#include <linux/string_choices.h>
+ #include <linux/usb/pd.h>
+ #include <linux/usb/tcpm.h>
+ #include "qcom_pmic_typec.h"
+@@ -38,7 +39,7 @@ static int qcom_pmic_typec_pdphy_stub_se
+ struct pmic_typec *tcpm = tcpc_to_tcpm(tcpc);
+ struct device *dev = tcpm->dev;
+
+- dev_dbg(dev, "set_pd_rx: %s\n", on ? "on" : "off");
++ dev_dbg(dev, "set_pd_rx: %s\n", str_on_off(on));
+
+ return 0;
+ }
+--- a/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
++++ b/drivers/usb/typec/tcpm/qcom/qcom_pmic_typec_port.c
+@@ -13,6 +13,7 @@
+ #include <linux/regmap.h>
+ #include <linux/regulator/consumer.h>
+ #include <linux/slab.h>
++#include <linux/string_choices.h>
+ #include <linux/usb/tcpm.h>
+ #include <linux/usb/typec_mux.h>
+ #include <linux/workqueue.h>
+@@ -562,7 +563,8 @@ done:
+ spin_unlock_irqrestore(&pmic_typec_port->lock, flags);
+
+ dev_dbg(dev, "set_vconn: orientation %d control 0x%08x state %s cc %s vconn %s\n",
+- orientation, value, on ? "on" : "off", misc_to_vconn(misc), misc_to_cc(misc));
++ orientation, value, str_on_off(on), misc_to_vconn(misc),
++ misc_to_cc(misc));
+
+ return ret;
+ }
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -21,6 +21,7 @@
+ #include <linux/seq_file.h>
+ #include <linux/slab.h>
+ #include <linux/spinlock.h>
++#include <linux/string_choices.h>
+ #include <linux/usb.h>
+ #include <linux/usb/pd.h>
+ #include <linux/usb/pd_ado.h>
+@@ -874,8 +875,8 @@ static int tcpm_enable_auto_vbus_dischar
+
+ if (port->tcpc->enable_auto_vbus_discharge) {
+ ret = port->tcpc->enable_auto_vbus_discharge(port->tcpc, enable);
+- tcpm_log_force(port, "%s vbus discharge ret:%d", enable ? "enable" : "disable",
+- ret);
++ tcpm_log_force(port, "%s vbus discharge ret:%d",
++ str_enable_disable(enable), ret);
+ if (!ret)
+ port->auto_vbus_discharge_enabled = enable;
+ }
+@@ -4429,7 +4430,7 @@ static void tcpm_unregister_altmodes(str
+
+ static void tcpm_set_partner_usb_comm_capable(struct tcpm_port *port, bool capable)
+ {
+- tcpm_log(port, "Setting usb_comm capable %s", capable ? "true" : "false");
++ tcpm_log(port, "Setting usb_comm capable %s", str_true_false(capable));
+
+ if (port->tcpc->set_partner_usb_comm_capable)
+ port->tcpc->set_partner_usb_comm_capable(port->tcpc, capable);
--- /dev/null
+From 3ee9cebd0a5e7ea47eb35cec95eaa1a866af982d Mon Sep 17 00:00:00 2001
+From: Tom Lendacky <thomas.lendacky@amd.com>
+Date: Wed, 13 Aug 2025 10:26:59 -0500
+Subject: x86/sev: Ensure SVSM reserved fields in a page validation entry are initialized to zero
+
+From: Tom Lendacky <thomas.lendacky@amd.com>
+
+commit 3ee9cebd0a5e7ea47eb35cec95eaa1a866af982d upstream.
+
+In order to support future versions of the SVSM_CORE_PVALIDATE call, all
+reserved fields within a PVALIDATE entry must be set to zero, as an SVSM is
+expected to verify that all reserved fields are zero in order to support
+future usage of the reserved areas based on the protocol version.
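+
+An alternative, more future-proof pattern (a hedged sketch, not what this
+patch does) would be to clear each entry wholesale before filling it, so
+any reserved bits a later protocol version may define start out as zero:
+
+	/* The memset covers ignore_cf and rsvd as well. */
+	memset(&pc->entry[0], 0, sizeof(pc->entry[0]));
+	pc->entry[0].page_size = RMP_PG_SIZE_4K;
+	pc->entry[0].action = validate;
+	pc->entry[0].pfn = paddr >> PAGE_SHIFT;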
+
+Fixes: fcd042e86422 ("x86/sev: Perform PVALIDATE using the SVSM when not at VMPL0")
+Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Reviewed-by: Joerg Roedel <joerg.roedel@amd.com>
+Cc: <stable@kernel.org>
+Link: https://lore.kernel.org/7cde412f8b057ea13a646fb166b1ca023f6a5031.1755098819.git.thomas.lendacky@amd.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/coco/sev/shared.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/arch/x86/coco/sev/shared.c
++++ b/arch/x86/coco/sev/shared.c
+@@ -1285,6 +1285,7 @@ static void svsm_pval_4k_page(unsigned l
+ pc->entry[0].page_size = RMP_PG_SIZE_4K;
+ pc->entry[0].action = validate;
+ pc->entry[0].ignore_cf = 0;
++ pc->entry[0].rsvd = 0;
+ pc->entry[0].pfn = paddr >> PAGE_SHIFT;
+
+ /* Protocol 0, Call ID 1 */
+@@ -1373,6 +1374,7 @@ static u64 svsm_build_ca_from_pfn_range(
+ pe->page_size = RMP_PG_SIZE_4K;
+ pe->action = action;
+ pe->ignore_cf = 0;
++ pe->rsvd = 0;
+ pe->pfn = pfn;
+
+ pe++;
+@@ -1403,6 +1405,7 @@ static int svsm_build_ca_from_psc_desc(s
+ pe->page_size = e->pagesize ? RMP_PG_SIZE_2M : RMP_PG_SIZE_4K;
+ pe->action = e->operation == SNP_PAGE_STATE_PRIVATE;
+ pe->ignore_cf = 0;
++ pe->rsvd = 0;
+ pe->pfn = e->gfn;
+
+ pe++;
--- /dev/null
+From stable+bounces-171735-greg=kroah.com@vger.kernel.org Tue Aug 19 04:47:11 2025
+From: Sasha Levin <sashal@kernel.org>
+Date: Mon, 18 Aug 2025 22:46:42 -0400
+Subject: xfs: fully decouple XFS_IBULK* flags from XFS_IWALK* flags
+To: stable@vger.kernel.org
+Cc: Christoph Hellwig <hch@lst.de>, cen zhang <zzzccc427@gmail.com>, "Darrick J. Wong" <djwong@kernel.org>, Carlos Maiolino <cem@kernel.org>, Sasha Levin <sashal@kernel.org>
+Message-ID: <20250819024642.295522-1-sashal@kernel.org>
+
+From: Christoph Hellwig <hch@lst.de>
+
+[ Upstream commit d2845519b0723c5d5a0266cbf410495f9b8fd65c ]
+
+Fix up xfs_inumbers to not pass the XFS_IBULK* flags into the flags
+argument to xfs_inobt_walk, which expects the XFS_IWALK* flags.
+
+Currently passing the wrong flags works for non-debug builds because
+the only XFS_IWALK* flag has the same encoding as the corresponding
+XFS_IBULK* flag, but in debug builds it can trigger an assert that no
+incorrect flag is passed. Instead just extract the relevant flag.
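+
+The fix thus amounts to translating the one overlapping flag explicitly
+(sketch matching the hunk below):
+
+	unsigned int iwalk_flags = 0;
+
+	if (breq->flags & XFS_IBULK_SAME_AG)
+		iwalk_flags |= XFS_IWALK_SAME_AG;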
+
+Fixes: 5b35d922c52798 ("xfs: Decouple XFS_IBULK flags from XFS_IWALK flags")
+Cc: <stable@vger.kernel.org> # v5.19
+Reported-by: cen zhang <zzzccc427@gmail.com>
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+[ Adjust context ]
+Signed-off-by: Sasha Levin <sashal@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_itable.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_itable.c
++++ b/fs/xfs/xfs_itable.c
+@@ -430,11 +430,15 @@ xfs_inumbers(
+ .breq = breq,
+ };
+ struct xfs_trans *tp;
++ unsigned int iwalk_flags = 0;
+ int error = 0;
+
+ if (xfs_bulkstat_already_done(breq->mp, breq->startino))
+ return 0;
+
++ if (breq->flags & XFS_IBULK_SAME_AG)
++ iwalk_flags |= XFS_IWALK_SAME_AG;
++
+ /*
+ * Grab an empty transaction so that we can use its recursive buffer
+ * locking abilities to detect cycles in the inobt without deadlocking.
+@@ -443,7 +447,7 @@ xfs_inumbers(
+ if (error)
+ goto out;
+
+- error = xfs_inobt_walk(breq->mp, tp, breq->startino, breq->flags,
++ error = xfs_inobt_walk(breq->mp, tp, breq->startino, iwalk_flags,
+ xfs_inumbers_walk, breq->icount, &ic);
+ xfs_trans_cancel(tp);
+ out: