--- /dev/null
+From 1db7959aacd905e6487d0478ac01d89f86eb1e51 Mon Sep 17 00:00:00 2001
+From: Qu Wenruo <wqu@suse.com>
+Date: Tue, 26 Mar 2024 09:16:46 +1030
+Subject: btrfs: do not wait for short bulk allocation
+
+From: Qu Wenruo <wqu@suse.com>
+
+commit 1db7959aacd905e6487d0478ac01d89f86eb1e51 upstream.
+
+[BUG]
+There is a recent report that when memory pressure is high (including
+cached pages), btrfs can spend most of its time on memory allocation in
+btrfs_alloc_page_array() for compressed read/write.
+
+[CAUSE]
+For btrfs_alloc_page_array() we always go through
+alloc_pages_bulk_array(), and even if the bulk allocation fails (and
+falls back to single-page allocation) we still retry, but with an
+extra memalloc_retry_wait() in between.
+
+If the bulk allocation only returns one page at a time, we end up
+spending a lot of time in that retry wait.
+
+The behavior was introduced in commit 395cb57e8560 ("btrfs: wait between
+incomplete batch memory allocations").
+
+[FIX]
+Although that commit mentioned that other filesystems do the same
+wait, that is not the case, at least not nowadays.
+
+All the mainline filesystems only call memalloc_retry_wait() if they
+failed to allocate any page at all (not only during bulk allocation).
+If there is any progress, they won't call memalloc_retry_wait() at all.
+
+For example, xfs_buf_alloc_pages() only calls memalloc_retry_wait()
+if there is no allocation progress at all and the allocation is not
+for metadata readahead.
+
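+For reference, that loop looks roughly like the following (a
+paraphrased sketch of xfs_buf_alloc_pages(), not the literal code):
+
+	for (;;) {
+		long last = filled;
+
+		filled = alloc_pages_bulk_array(gfp_mask, nr_pages, pages);
+		if (filled == nr_pages)
+			break;
+		if (filled != last)
+			continue;	/* made progress, retry immediately */
+		if (flags & XBF_READ_AHEAD)
+			return -ENOMEM;	/* readahead may simply fail */
+		memalloc_retry_wait(gfp_mask);	/* only when fully stuck */
+	}
+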
+So I don't believe we should call memalloc_retry_wait()
+unconditionally for short allocations.
+
+Remove the unconditional memalloc_retry_wait() and fail with -ENOMEM
+if no page at all could be allocated. Tree block allocation, the one
+caller that must not fail, already goes with __GFP_NOFAIL and may not
+need the special handling anyway. This reduces the latency of
+btrfs_alloc_page_array().
+
+Reported-by: Julian Taylor <julian.taylor@1und1.de>
+Tested-by: Julian Taylor <julian.taylor@1und1.de>
+Link: https://lore.kernel.org/all/8966c095-cbe7-4d22-9784-a647d1bf27c3@1und1.de/
+Fixes: 395cb57e8560 ("btrfs: wait between incomplete batch memory allocations")
+CC: stable@vger.kernel.org # 6.1+
+Reviewed-by: Sweet Tea Dorminy <sweettea-kernel@dorminy.me>
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: Qu Wenruo <wqu@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/extent_io.c | 18 ++++--------------
+ 1 file changed, 4 insertions(+), 14 deletions(-)
+
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -692,31 +692,21 @@ static void end_bbio_data_read(struct bt
+ int btrfs_alloc_page_array(unsigned int nr_pages, struct page **page_array,
+ gfp_t extra_gfp)
+ {
++ const gfp_t gfp = GFP_NOFS | extra_gfp;
+ unsigned int allocated;
+
+ for (allocated = 0; allocated < nr_pages;) {
+ unsigned int last = allocated;
+
+- allocated = alloc_pages_bulk_array(GFP_NOFS | extra_gfp,
+- nr_pages, page_array);
+-
+- if (allocated == nr_pages)
+- return 0;
+-
+- /*
+- * During this iteration, no page could be allocated, even
+- * though alloc_pages_bulk_array() falls back to alloc_page()
+- * if it could not bulk-allocate. So we must be out of memory.
+- */
+- if (allocated == last) {
++ allocated = alloc_pages_bulk_array(gfp, nr_pages, page_array);
++ if (unlikely(allocated == last)) {
++ /* No progress, fail and do cleanup. */
+ for (int i = 0; i < allocated; i++) {
+ __free_page(page_array[i]);
+ page_array[i] = NULL;
+ }
+ return -ENOMEM;
+ }
+-
+- memalloc_retry_wait(GFP_NOFS);
+ }
+ return 0;
+ }
--- /dev/null
+From 68879386180c0efd5a11e800b0525a01068c9457 Mon Sep 17 00:00:00 2001
+From: Naohiro Aota <naohiro.aota@wdc.com>
+Date: Tue, 26 Mar 2024 14:39:20 +0900
+Subject: btrfs: zoned: do not flag ZEROOUT on non-dirty extent buffer
+
+From: Naohiro Aota <naohiro.aota@wdc.com>
+
+commit 68879386180c0efd5a11e800b0525a01068c9457 upstream.
+
+Btrfs clears the content of an extent buffer marked as
+EXTENT_BUFFER_ZONED_ZEROOUT before the bio submission. This mechanism
+was introduced to prevent a write hole for an extent buffer that is
+allocated and marked dirty, but then turns out to be unnecessary and
+is cleaned up within one transaction operation.
+
+Currently, btrfs_clear_buffer_dirty() marks the extent buffer as
+EXTENT_BUFFER_ZONED_ZEROOUT and returns early, skipping the rest of
+the function. If this call happens while the buffer is under IO (with
+the WRITEBACK flag set but without the DIRTY flag), we can add the
+ZEROOUT flag, and the buffer's content is cleared just before a bio
+submission. As a result:
+
+1) it can lead to adding a faulty delayed reference item, which leads
+   to a corrupted filesystem (EUCLEAN) error, and
+
+2) it writes out a cleared tree node to disk
+
+The former issue was previously discussed in [1]. That corruption
+happens when a delayed reference update runs, so the on-disk data is
+safe.
+
+[1] https://lore.kernel.org/linux-btrfs/3f4f2a0ff1a6c818050434288925bdcf3cd719e5.1709124777.git.naohiro.aota@wdc.com/
+
+The latter issue can reach the on-disk data. But, as that node has
+already been processed by btrfs_clear_buffer_dirty(), it will be
+invalidated in the next transaction commit anyway, so the chance of
+hitting the corruption is relatively small.
+
+Anyway, we should skip flagging ZEROOUT on a non-DIRTY extent buffer, to
+keep the content under IO intact.
+
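+With the fix, the guard in btrfs_clear_buffer_dirty() becomes (a
+sketch matching the one-line change below):
+
+	if (btrfs_is_zoned(fs_info) &&
+	    test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) {
+		/*
+		 * Only a still-dirty buffer may take the zeroout path; a
+		 * buffer under writeback (WRITEBACK set, DIRTY already
+		 * cleared) must keep its content intact.
+		 */
+		set_bit(EXTENT_BUFFER_ZONED_ZEROOUT, &eb->bflags);
+		return;
+	}
+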
+Fixes: aa6313e6ff2b ("btrfs: zoned: don't clear dirty flag of extent buffer")
+CC: stable@vger.kernel.org # 6.8
+Link: https://lore.kernel.org/linux-btrfs/oadvdekkturysgfgi4qzuemd57zudeasynswurjxw3ocdfsef6@sjyufeugh63f/
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Signed-off-by: Naohiro Aota <naohiro.aota@wdc.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/extent_io.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -4130,7 +4130,7 @@ void btrfs_clear_buffer_dirty(struct btr
+ * The actual zeroout of the buffer will happen later in
+ * btree_csum_one_bio.
+ */
+- if (btrfs_is_zoned(fs_info)) {
++ if (btrfs_is_zoned(fs_info) && test_bit(EXTENT_BUFFER_DIRTY, &eb->bflags)) {
+ set_bit(EXTENT_BUFFER_ZONED_ZEROOUT, &eb->bflags);
+ return;
+ }
--- /dev/null
+From 56f78615bcb1c3ba58a5d9911bad3d9185cf141b Mon Sep 17 00:00:00 2001
+From: Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
+Date: Wed, 17 Apr 2024 10:55:13 +0200
+Subject: net: usb: ax88179_178a: avoid writing the mac address before first reading
+
+From: Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
+
+commit 56f78615bcb1c3ba58a5d9911bad3d9185cf141b upstream.
+
+After commit d2689b6a86b9 ("net: usb: ax88179_178a: avoid two
+consecutive device resets"), the reset operation, in which the default
+MAC address is read from the device, is no longer executed from the
+bind operation. Instead, the random address that is pregenerated just
+in case is directly written to the device the first time, so the
+default address from the device is never even read. This write is not
+destructive because it is volatile, and the default MAC address is not
+lost.
+
+In order to avoid this, while keeping the simplification of having
+only one reset and the reduced delays, restore the reset in the bind
+operation and remove the reset that is issued from the open operation.
+The behavior is the same, but everything is ready for usbnet_probe.
+
+Tested with ASIX AX88179 USB Gigabit Ethernet devices.
+The old behavior is kept for the rest of the possible devices because
+I don't have the hardware to test them.
+
+cc: stable@vger.kernel.org # 6.6+
+Fixes: d2689b6a86b9 ("net: usb: ax88179_178a: avoid two consecutive device resets")
+Reported-by: Jarkko Palviainen <jarkko.palviainen@gmail.com>
+Signed-off-by: Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
+Link: https://lore.kernel.org/r/20240417085524.219532-1-jtornosm@redhat.com
+Signed-off-by: Jakub Kicinski <kuba@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/usb/ax88179_178a.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/drivers/net/usb/ax88179_178a.c
++++ b/drivers/net/usb/ax88179_178a.c
+@@ -1317,6 +1317,8 @@ static int ax88179_bind(struct usbnet *d
+
+ netif_set_tso_max_size(dev->net, 16384);
+
++ ax88179_reset(dev);
++
+ return 0;
+ }
+
+@@ -1695,7 +1697,6 @@ static const struct driver_info ax88179_
+ .unbind = ax88179_unbind,
+ .status = ax88179_status,
+ .link_reset = ax88179_link_reset,
+- .reset = ax88179_reset,
+ .stop = ax88179_stop,
+ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
+@@ -1708,7 +1709,6 @@ static const struct driver_info ax88178a
+ .unbind = ax88179_unbind,
+ .status = ax88179_status,
+ .link_reset = ax88179_link_reset,
+- .reset = ax88179_reset,
+ .stop = ax88179_stop,
+ .flags = FLAG_ETHER | FLAG_FRAMING_AX,
+ .rx_fixup = ax88179_rx_fixup,
--- /dev/null
+From e871abcda3b67d0820b4182ebe93435624e9c6a4 Mon Sep 17 00:00:00 2001
+From: "Jason A. Donenfeld" <Jason@zx2c4.com>
+Date: Wed, 17 Apr 2024 13:38:29 +0200
+Subject: random: handle creditable entropy from atomic process context
+
+From: Jason A. Donenfeld <Jason@zx2c4.com>
+
+commit e871abcda3b67d0820b4182ebe93435624e9c6a4 upstream.
+
+The entropy accounting changes a static key when the RNG has
+initialized, since it only ever initializes once. Static key changes,
+however, cannot be made from atomic context, so depending on where the
+last creditable entropy comes from, the static key change might need to
+be deferred to a worker.
+
+Previously the code used the execute_in_process_context() helper
+function, which accounts for whether or not the caller is
+in_interrupt(). However, that doesn't account for the case where the
+caller is actually in process context but is holding a spinlock.
+
+This turned out to be the case with input_handle_event() in
+drivers/input/input.c contributing entropy:
+
+ [<ffffffd613025ba0>] die+0xa8/0x2fc
+ [<ffffffd613027428>] bug_handler+0x44/0xec
+ [<ffffffd613016964>] brk_handler+0x90/0x144
+ [<ffffffd613041e58>] do_debug_exception+0xa0/0x148
+ [<ffffffd61400c208>] el1_dbg+0x60/0x7c
+ [<ffffffd61400c000>] el1h_64_sync_handler+0x38/0x90
+ [<ffffffd613011294>] el1h_64_sync+0x64/0x6c
+ [<ffffffd613102d88>] __might_resched+0x1fc/0x2e8
+ [<ffffffd613102b54>] __might_sleep+0x44/0x7c
+ [<ffffffd6130b6eac>] cpus_read_lock+0x1c/0xec
+ [<ffffffd6132c2820>] static_key_enable+0x14/0x38
+ [<ffffffd61400ac08>] crng_set_ready+0x14/0x28
+ [<ffffffd6130df4dc>] execute_in_process_context+0xb8/0xf8
+ [<ffffffd61400ab30>] _credit_init_bits+0x118/0x1dc
+ [<ffffffd6138580c8>] add_timer_randomness+0x264/0x270
+ [<ffffffd613857e54>] add_input_randomness+0x38/0x48
+ [<ffffffd613a80f94>] input_handle_event+0x2b8/0x490
+ [<ffffffd613a81310>] input_event+0x6c/0x98
+
+According to Guoyong, it's not really possible to refactor the various
+drivers to never hold a spinlock there. And in_atomic() isn't reliable.
+
+So, rather than trying to be too fancy, just always punt the change in
+the static key to a workqueue. There's basically no drawback to doing
+this, as the code already needed to account for the static key not
+changing immediately, and given that it's just an optimization, there's
+not exactly a hurry to change the static key right away, so deferral is
+fine.
+
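+The deferral itself is just the standard static work item pattern (a
+sketch of the change below; crng_set_ready() is the existing worker):
+
+	static DECLARE_WORK(set_ready, crng_set_ready);
+	...
+	/* Safe from atomic context; the worker flips the static key. */
+	if (static_key_initialized && system_unbound_wq)
+		queue_work(system_unbound_wq, &set_ready);
+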
+Reported-by: Guoyong Wang <guoyong.wang@mediatek.com>
+Cc: stable@vger.kernel.org
+Fixes: f5bda35fba61 ("random: use static branch for crng_ready()")
+Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/char/random.c | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+--- a/drivers/char/random.c
++++ b/drivers/char/random.c
+@@ -702,7 +702,7 @@ static void extract_entropy(void *buf, s
+
+ static void __cold _credit_init_bits(size_t bits)
+ {
+- static struct execute_work set_ready;
++ static DECLARE_WORK(set_ready, crng_set_ready);
+ unsigned int new, orig, add;
+ unsigned long flags;
+
+@@ -718,8 +718,8 @@ static void __cold _credit_init_bits(siz
+
+ if (orig < POOL_READY_BITS && new >= POOL_READY_BITS) {
+ crng_reseed(NULL); /* Sets crng_init to CRNG_READY under base_crng.lock. */
+- if (static_key_initialized)
+- execute_in_process_context(crng_set_ready, &set_ready);
++ if (static_key_initialized && system_unbound_wq)
++ queue_work(system_unbound_wq, &set_ready);
+ atomic_notifier_call_chain(&random_ready_notifier, 0, NULL);
+ wake_up_interruptible(&crng_init_wait);
+ kill_fasync(&fasync, SIGIO, POLL_IN);
+@@ -890,8 +890,8 @@ void __init random_init(void)
+
+ /*
+ * If we were initialized by the cpu or bootloader before jump labels
+- * are initialized, then we should enable the static branch here, where
+- * it's guaranteed that jump labels have been initialized.
++ * or workqueues are initialized, then we should enable the static
++ * branch here, where it's guaranteed that these have been initialized.
+ */
+ if (!static_branch_likely(&crng_is_ready) && crng_init >= CRNG_READY)
+ crng_set_ready(NULL);
--- /dev/null
+From 3aadf100f93d80815685493d60cd8cab206403df Mon Sep 17 00:00:00 2001
+From: "Jason A. Donenfeld" <Jason@zx2c4.com>
+Date: Thu, 18 Apr 2024 13:45:17 +0200
+Subject: Revert "vmgenid: emit uevent when VMGENID updates"
+
+From: Jason A. Donenfeld <Jason@zx2c4.com>
+
+commit 3aadf100f93d80815685493d60cd8cab206403df upstream.
+
+This reverts commit ad6bcdad2b6724e113f191a12f859a9e8456b26d. I had
+nak'd it, and Greg said on the linked thread that he wasn't going to
+take it either, especially since it's not his code or his tree. But
+then, seemingly by accident, it got pushed up some months later, in
+what looks like a mistake, with no further discussion in the linked
+thread. So revert it, since it clearly was not intended to land.
+
+Fixes: ad6bcdad2b67 ("vmgenid: emit uevent when VMGENID updates")
+Cc: stable@vger.kernel.org
+Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+Link: https://lore.kernel.org/r/20230531095119.11202-2-bchalios@amazon.es
+Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/virt/vmgenid.c | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/drivers/virt/vmgenid.c b/drivers/virt/vmgenid.c
+index b67a28da4702..a1c467a0e9f7 100644
+--- a/drivers/virt/vmgenid.c
++++ b/drivers/virt/vmgenid.c
+@@ -68,7 +68,6 @@ static int vmgenid_add(struct acpi_device *device)
+ static void vmgenid_notify(struct acpi_device *device, u32 event)
+ {
+ struct vmgenid_state *state = acpi_driver_data(device);
+- char *envp[] = { "NEW_VMGENID=1", NULL };
+ u8 old_id[VMGENID_SIZE];
+
+ memcpy(old_id, state->this_id, sizeof(old_id));
+@@ -76,7 +75,6 @@ static void vmgenid_notify(struct acpi_device *device, u32 event)
+ if (!memcmp(old_id, state->this_id, sizeof(old_id)))
+ return;
+ add_vmfork_randomness(state->this_id, sizeof(state->this_id));
+- kobject_uevent_env(&device->dev.kobj, KOBJ_CHANGE, envp);
+ }
+
+ static const struct acpi_device_id vmgenid_ids[] = {
+--
+2.44.0
+
--- /dev/null
+From ca91259b775f6fd98ae5d23bb4eec101d468ba8d Mon Sep 17 00:00:00 2001
+From: Bart Van Assche <bvanassche@acm.org>
+Date: Mon, 25 Mar 2024 15:44:17 -0700
+Subject: scsi: core: Fix handling of SCMD_FAIL_IF_RECOVERING
+
+From: Bart Van Assche <bvanassche@acm.org>
+
+commit ca91259b775f6fd98ae5d23bb4eec101d468ba8d upstream.
+
+There is code in the SCSI core that sets the SCMD_FAIL_IF_RECOVERING
+flag but there is no code that clears this flag. Instead of only clearing
+SCMD_INITIALIZED in scsi_end_request(), clear all flags. It is never
+necessary to preserve any command flags inside scsi_end_request().
+
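+The resulting code in scsi_end_request() is simply (a sketch of the
+change below):
+
+	/* Keep the sanity check, then drop all command flags at once. */
+	WARN_ON_ONCE(!blk_rq_is_passthrough(req) &&
+		     !(cmd->flags & SCMD_INITIALIZED));
+	cmd->flags = 0;
+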
+Cc: stable@vger.kernel.org
+Fixes: 310bcaef6d7e ("scsi: core: Support failing requests while recovering")
+Signed-off-by: Bart Van Assche <bvanassche@acm.org>
+Link: https://lore.kernel.org/r/20240325224417.1477135-1-bvanassche@acm.org
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/scsi_lib.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+--- a/drivers/scsi/scsi_lib.c
++++ b/drivers/scsi/scsi_lib.c
+@@ -543,10 +543,9 @@ static bool scsi_end_request(struct requ
+ if (blk_queue_add_random(q))
+ add_disk_randomness(req->q->disk);
+
+- if (!blk_rq_is_passthrough(req)) {
+- WARN_ON_ONCE(!(cmd->flags & SCMD_INITIALIZED));
+- cmd->flags &= ~SCMD_INITIALIZED;
+- }
++ WARN_ON_ONCE(!blk_rq_is_passthrough(req) &&
++ !(cmd->flags & SCMD_INITIALIZED));
++ cmd->flags = 0;
+
+ /*
+ * Calling rcu_barrier() is not necessary here because the
--- /dev/null
+From 1a4ea83a6e67f1415a1f17c1af5e9c814c882bb5 Mon Sep 17 00:00:00 2001
+From: Yuanhe Shu <xiangzao@linux.alibaba.com>
+Date: Mon, 26 Feb 2024 11:18:16 +0800
+Subject: selftests/ftrace: Limit length in subsystem-enable tests
+
+From: Yuanhe Shu <xiangzao@linux.alibaba.com>
+
+commit 1a4ea83a6e67f1415a1f17c1af5e9c814c882bb5 upstream.
+
+While sched* events are being traced and sched* events continuously
+happen, the "[xx] event tracing - enable/disable with subsystem level
+files" test would not stop, as on some slower systems it seems to take
+forever. Selecting the first 100 lines of output is enough to judge
+whether there are more than 3 types of sched events.
+
+Fixes: 815b18ea66d6 ("ftracetest: Add basic event tracing test cases")
+Cc: stable@vger.kernel.org
+Signed-off-by: Yuanhe Shu <xiangzao@linux.alibaba.com>
+Acked-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Acked-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Shuah Khan <skhan@linuxfoundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
++++ b/tools/testing/selftests/ftrace/test.d/event/subsystem-enable.tc
+@@ -18,7 +18,7 @@ echo 'sched:*' > set_event
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -29,7 +29,7 @@ echo 1 > events/sched/enable
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -lt 3 ]; then
+ fail "at least fork, exec and exit events should be recorded"
+ fi
+@@ -40,7 +40,7 @@ echo 0 > events/sched/enable
+
+ yield
+
+-count=`cat trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
++count=`head -n 100 trace | grep -v ^# | awk '{ print $5 }' | sort -u | wc -l`
+ if [ $count -ne 0 ]; then
+ fail "any of scheduler events should not be recorded"
+ fi
io_uring-fix-io_cqring_wait-not-restoring-sigmask-on-get_timespec64-failure.patch
drm-i915-cdclk-fix-voltage_level-programming-edge-ca.patch
+revert-vmgenid-emit-uevent-when-vmgenid-updates.patch
+sunrpc-fix-rpcgss_context-trace-event-acceptor-field.patch
+selftests-ftrace-limit-length-in-subsystem-enable-tests.patch
+random-handle-creditable-entropy-from-atomic-process-context.patch
+scsi-core-fix-handling-of-scmd_fail_if_recovering.patch
+net-usb-ax88179_178a-avoid-writing-the-mac-address-before-first-reading.patch
+btrfs-do-not-wait-for-short-bulk-allocation.patch
+btrfs-zoned-do-not-flag-zeroout-on-non-dirty-extent-buffer.patch
--- /dev/null
+From a4833e3abae132d613ce7da0e0c9a9465d1681fa Mon Sep 17 00:00:00 2001
+From: "Steven Rostedt (Google)" <rostedt@goodmis.org>
+Date: Wed, 10 Apr 2024 12:38:13 -0400
+Subject: SUNRPC: Fix rpcgss_context trace event acceptor field
+
+From: Steven Rostedt (Google) <rostedt@goodmis.org>
+
+commit a4833e3abae132d613ce7da0e0c9a9465d1681fa upstream.
+
+The rpcgss_context trace event acceptor field is a dynamically sized
+string that records the "data" parameter. But this parameter is also
+dependent on the "len" field to determine the size of the data.
+
+It needs to use the __string_len() helper macro, to which the length
+can be passed in. It also incorrectly uses strncpy() to save the
+string instead of __assign_str(). As these macros can change, it is
+not wise to open code them in trace events.
+
+As of commit c759e609030c ("tracing: Remove __assign_str_len()"),
+__assign_str() can be used for both __string() and __string_len() fields.
+Before that commit, __assign_str_len() was required instead. This needs
+to be noted for backporting. (In actuality, commit c1fa617caeb0 ("tracing:
+Rework __assign_str() and __string() to not duplicate getting the string")
+is the commit that makes __assign_str_len() obsolete.)
+
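+The corrected pattern thus looks like (a sketch of the change below;
+on kernels before c759e609030c the assignment must be
+__assign_str_len(acceptor, data, len) instead):
+
+	TP_STRUCT__entry(
+		...
+		__string_len(acceptor, data, len)	/* sized by "len" */
+	),
+
+	TP_fast_assign(
+		...
+		__assign_str(acceptor, data);	/* copies "len" bytes */
+	),
+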
+Cc: stable@vger.kernel.org
+Fixes: 0c77668ddb4e ("SUNRPC: Introduce trace points in rpc_auth_gss.ko")
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/trace/events/rpcgss.h | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/include/trace/events/rpcgss.h
++++ b/include/trace/events/rpcgss.h
+@@ -609,7 +609,7 @@ TRACE_EVENT(rpcgss_context,
+ __field(unsigned int, timeout)
+ __field(u32, window_size)
+ __field(int, len)
+- __string(acceptor, data)
++ __string_len(acceptor, data, len)
+ ),
+
+ TP_fast_assign(
+@@ -618,7 +618,7 @@ TRACE_EVENT(rpcgss_context,
+ __entry->timeout = timeout;
+ __entry->window_size = window_size;
+ __entry->len = len;
+- strncpy(__get_str(acceptor), data, len);
++ __assign_str(acceptor, data);
+ ),
+
+ TP_printk("win_size=%u expiry=%lu now=%lu timeout=%u acceptor=%.*s",