--- /dev/null
+From 4751dc99627e4d1465c5bfa8cb7ab31ed418eff5 Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Mon, 28 Feb 2022 16:29:28 +0000
+Subject: btrfs: add missing run of delayed items after unlink during log replay
+
+From: Filipe Manana <fdmanana@suse.com>
+
+commit 4751dc99627e4d1465c5bfa8cb7ab31ed418eff5 upstream.
+
+During log replay, whenever we need to check if a name (dentry) exists in
+a directory we do searches on the subvolume tree for inode references or
+directory entries (BTRFS_DIR_INDEX_KEY keys, and BTRFS_DIR_ITEM_KEY keys
+as well, before kernel 5.17). However, when we unlink a name during log
+replay, through btrfs_unlink_inode(), we may not delete the inode
+references and dir index keys from the subvolume tree and instead just
+add the deletions to the delayed inode's delayed items, which are only
+run when we commit the transaction used for log replay. This means that
+if we later search for the same name during log replay, we will not see
+that the name was already deleted, since the deletion is recorded only
+in the delayed items.
+
+We run delayed items after every unlink operation during log replay,
+except at unlink_old_inode_refs() and at add_inode_ref(). This was an
+oversight, as delayed items should be run after every unlink, for the
+reasons stated above.
+
+So fix those two cases.
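+
+For illustration, the pattern applied at both call sites boils down to
+the following sketch (arguments abbreviated, not the literal diff
+context):
+
+  ret = btrfs_unlink_inode(trans, ...);
+  /*
+   * The unlink may have only queued the removal of the inode ref and
+   * dir index key as delayed items, so flush them into the subvolume
+   * tree before any later name lookup during log replay.
+   */
+  if (!ret)
+          ret = btrfs_run_delayed_items(trans);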
+
+Fixes: 0d836392cadd5 ("Btrfs: fix mount failure after fsync due to hard link recreation")
+Fixes: 1f250e929a9c9 ("Btrfs: fix log replay failure after unlink and link combination")
+CC: stable@vger.kernel.org # 4.19+
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/tree-log.c | 18 ++++++++++++++++++
+ 1 file changed, 18 insertions(+)
+
+--- a/fs/btrfs/tree-log.c
++++ b/fs/btrfs/tree-log.c
+@@ -1289,6 +1289,15 @@ again:
+ inode, name, namelen);
+ kfree(name);
+ iput(dir);
++ /*
++ * Whenever we need to check if a name exists or not, we
++ * check the subvolume tree. So after an unlink we must
++ * run delayed items, so that future checks for a name
++ * during log replay see that the name does not exist
++ * anymore.
++ */
++ if (!ret)
++ ret = btrfs_run_delayed_items(trans);
+ if (ret)
+ goto out;
+ goto again;
+@@ -1480,6 +1489,15 @@ static noinline int add_inode_ref(struct
+ */
+ if (!ret && inode->i_nlink == 0)
+ inc_nlink(inode);
++ /*
++ * Whenever we need to check if a name exists or
++ * not, we check the subvolume tree. So after an
++ * unlink we must run delayed items, so that future
++ * checks for a name during log replay see that the
++ * name does not exist anymore.
++ */
++ if (!ret)
++ ret = btrfs_run_delayed_items(trans);
+ }
+ if (ret < 0)
+ goto out;
--- /dev/null
+From 1d1898f65616c4601208963c3376c1d828cbf2c7 Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
+Date: Tue, 1 Mar 2022 22:29:04 -0500
+Subject: tracing/histogram: Fix sorting on old "cpu" value
+
+From: Steven Rostedt (Google) <rostedt@goodmis.org>
+
+commit 1d1898f65616c4601208963c3376c1d828cbf2c7 upstream.
+
+When trying to add a histogram against an event with the "cpu" field, it
+was impossible due to "cpu" being a keyword to key off of the running CPU.
+So to fix this, it was changed to "common_cpu" to match the other generic
+fields (like "common_pid"). But since some scripts used "cpu" for keying
+off of the CPU (for events that did not have "cpu" as a field, which is
+most of them), a backward compatibility trick was added such that if "cpu"
+was used as a key, and the event did not have "cpu" as a field name, then
+it would fall back and switch over to "common_cpu".
+
+This fix has a couple of subtle bugs. One was that when switching over to
+"common_cpu", it did not change the field name, it just set a flag. But
+the code still found a "cpu" field. The "cpu" field is used for filtering
+and is returned when the event does not have a "cpu" field.
+
+This was found by:
+
+ # cd /sys/kernel/tracing
+ # echo hist:key=cpu,pid:sort=cpu > events/sched/sched_wakeup/trigger
+ # cat events/sched/sched_wakeup/hist
+
+Which showed the histogram unsorted:
+
+{ cpu: 19, pid: 1175 } hitcount: 1
+{ cpu: 6, pid: 239 } hitcount: 2
+{ cpu: 23, pid: 1186 } hitcount: 14
+{ cpu: 12, pid: 249 } hitcount: 2
+{ cpu: 3, pid: 994 } hitcount: 5
+
+Instead of hard coding the "cpu" checks, take advantage of the fact that
+trace_find_event_field() returns a special field for "cpu" and "CPU" if
+the event does not have "cpu" as a field. This special field has the
+"filter_type" of "FILTER_CPU". Check that to test if the returned field is
+of the CPU type instead of doing the string compare.
+
+Also, fix the sorting bug by testing for the hist_field flag of
+HIST_FIELD_FL_CPU when setting up the sort routine. Otherwise it will use
+the special CPU field to know what compare routine to use, and since that
+special field does not have a size, it returns tracing_map_cmp_none.
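+
+In short, the two changes boil down to the following sketch (not the
+literal diff context; comments added here for illustration):
+
+  /* parse_field(): the special field returned for "cpu" and "CPU" has a
+   * filter_type of FILTER_CPU, so key off of that instead of comparing
+   * the field name string.
+   */
+  if (field && field->filter_type == FILTER_CPU)
+          *flags |= HIST_FIELD_FL_CPU;
+
+  /* create_tracing_map_fields(): the special CPU field has no size, so
+   * pick the numeric compare routine from the hist_field itself.
+   */
+  if (!field || hist_field->flags & HIST_FIELD_FL_CPU)
+          cmp_fn = tracing_map_cmp_num(hist_field->size,
+                                       hist_field->is_signed);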
+
+Cc: stable@vger.kernel.org
+Fixes: 1e3bac71c505 ("tracing/histogram: Rename "cpu" to "common_cpu"")
+Reported-by: Daniel Bristot de Oliveira <bristot@kernel.org>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/trace_events_hist.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/kernel/trace/trace_events_hist.c
++++ b/kernel/trace/trace_events_hist.c
+@@ -2635,9 +2635,9 @@ parse_field(struct hist_trigger_data *hi
+ /*
+ * For backward compatibility, if field_name
+ * was "cpu", then we treat this the same as
+- * common_cpu.
++ * common_cpu. This also works for "CPU".
+ */
+- if (strcmp(field_name, "cpu") == 0) {
++ if (field && field->filter_type == FILTER_CPU) {
+ *flags |= HIST_FIELD_FL_CPU;
+ } else {
+ hist_err("Couldn't find field: ", field_name);
+@@ -4642,7 +4642,7 @@ static int create_tracing_map_fields(str
+
+ if (hist_field->flags & HIST_FIELD_FL_STACKTRACE)
+ cmp_fn = tracing_map_cmp_none;
+- else if (!field)
++ else if (!field || hist_field->flags & HIST_FIELD_FL_CPU)
+ cmp_fn = tracing_map_cmp_num(hist_field->size,
+ hist_field->is_signed);
+ else if (is_string_field(field))