--- /dev/null
+From 03ddd7725ed1b39cf9251e1a420559f25dac49b3 Mon Sep 17 00:00:00 2001
+From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+Date: Wed, 2 Apr 2025 15:59:55 +0100
+Subject: 9p: Add a migrate_folio method
+
+From: Matthew Wilcox (Oracle) <willy@infradead.org>
+
+commit 03ddd7725ed1b39cf9251e1a420559f25dac49b3 upstream.
+
+The migration code used to be able to migrate dirty 9p folios by writing
+them back using writepage. When the writepage method was removed,
+we neglected to add a migrate_folio method, which means that dirty 9p
+folios have been unmovable ever since. This reduced our success at
+defragmenting memory on machines which use 9p heavily.
+
+Fixes: 80105ed2fd27 ("9p: Use netfslib read/write_iter")
+Cc: stable@vger.kernel.org
+Cc: David Howells <dhowells@redhat.com>
+Cc: v9fs@lists.linux.dev
+Signed-off-by: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+Link: https://lore.kernel.org/r/20250402150005.2309458-2-willy@infradead.org
+Acked-by: Dominique Martinet <asmadeus@codewreck.org>
+Reviewed-by: David Howells <dhowells@redhat.com>
+Signed-off-by: Christian Brauner <brauner@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/9p/vfs_addr.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/9p/vfs_addr.c
++++ b/fs/9p/vfs_addr.c
+@@ -160,4 +160,5 @@ const struct address_space_operations v9
+ .invalidate_folio = netfs_invalidate_folio,
+ .direct_IO = noop_direct_IO,
+ .writepages = netfs_writepages,
++ .migrate_folio = filemap_migrate_folio,
+ };
--- /dev/null
+From fe8abdd175d7b547ae1a612757e7902bcd62e9cf Mon Sep 17 00:00:00 2001
+From: Peter Korsgaard <peter@korsgaard.com>
+Date: Fri, 9 May 2025 13:24:07 +0100
+Subject: nvmem: zynqmp_nvmem: unbreak driver after cleanup
+
+From: Peter Korsgaard <peter@korsgaard.com>
+
+commit fe8abdd175d7b547ae1a612757e7902bcd62e9cf upstream.
+
+Commit 29be47fcd6a0 ("nvmem: zynqmp_nvmem: zynqmp_nvmem_probe cleanup")
+changed the driver to expect the device pointer to be passed as the
+"context", but in nvmem the context parameter comes from
+nvmem_config.priv, which is never set - leading to NULL pointer
+dereferences when the device is accessed.
+
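+For background: the nvmem core hands nvmem_config.priv back to the
+reg_read()/reg_write() callbacks as their "context" argument, so a
+driver that casts that pointer needs priv to be set. A minimal sketch,
+assuming the usual callback signature from nvmem-provider.h (names are
+hypothetical):
+
+  static int example_nvmem_read(void *context, unsigned int offset,
+                                void *val, size_t bytes)
+  {
+          /* context is whatever was stored in econfig.priv; it stays
+           * NULL unless the probe function sets it.
+           */
+          struct device *dev = context;
+
+          dev_dbg(dev, "reading %zu bytes at offset %u\n", bytes, offset);
+          memset(val, 0, bytes);
+          return 0;
+  }
+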
+Fixes: 29be47fcd6a0 ("nvmem: zynqmp_nvmem: zynqmp_nvmem_probe cleanup")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Peter Korsgaard <peter@korsgaard.com>
+Reviewed-by: Michal Simek <michal.simek@amd.com>
+Tested-by: Michal Simek <michal.simek@amd.com>
+Signed-off-by: Srinivas Kandagatla <srini@kernel.org>
+Link: https://lore.kernel.org/r/20250509122407.11763-3-srini@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/nvmem/zynqmp_nvmem.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/nvmem/zynqmp_nvmem.c
++++ b/drivers/nvmem/zynqmp_nvmem.c
+@@ -213,6 +213,7 @@ static int zynqmp_nvmem_probe(struct pla
+ econfig.word_size = 1;
+ econfig.size = ZYNQMP_NVMEM_SIZE;
+ econfig.dev = dev;
++ econfig.priv = dev;
+ econfig.add_legacy_fixed_of_cells = true;
+ econfig.reg_read = zynqmp_nvmem_read;
+ econfig.reg_write = zynqmp_nvmem_write;
--- /dev/null
+From 4fc78a7c9ca994e1da5d3940704d4e8f0ea8c5e4 Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt@goodmis.org>
+Date: Wed, 28 May 2025 12:15:55 -0400
+Subject: ring-buffer: Do not trigger WARN_ON() due to a commit_overrun
+
+From: Steven Rostedt <rostedt@goodmis.org>
+
+commit 4fc78a7c9ca994e1da5d3940704d4e8f0ea8c5e4 upstream.
+
+When reading a memory mapped buffer, the reader page is simply swapped
+out with the last page written in the write buffer. If the reader page
+is the same as the commit buffer (the buffer that is currently being
+written to), it was assumed that it could never have missed events. If
+it does, it triggers a WARN_ON_ONCE().
+
+But there just happens to be one scenario where this can legitimately
+happen. That is on a commit_overrun. A commit overrun is when an interrupt
+preempts an event being written to the buffer and then the interrupt adds
+so many new events that it fills and wraps the buffer back to the commit.
+Any new events would then be dropped and be reported as "missed_events".
+
+In this case, the next page to read is the commit buffer. After the
+swap, the reader page will be the commit buffer, but this time there
+will be missed events, and this triggers the following warning:
+
+ ------------[ cut here ]------------
+ WARNING: CPU: 2 PID: 1127 at kernel/trace/ring_buffer.c:7357 ring_buffer_map_get_reader+0x49a/0x780
+ Modules linked in: kvm_intel kvm irqbypass
+ CPU: 2 UID: 0 PID: 1127 Comm: trace-cmd Not tainted 6.15.0-rc7-test-00004-g478bc2824b45-dirty #564 PREEMPT
+ Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
+ RIP: 0010:ring_buffer_map_get_reader+0x49a/0x780
+ Code: 00 00 00 48 89 fe 48 c1 ee 03 80 3c 2e 00 0f 85 ec 01 00 00 4d 3b a6 a8 00 00 00 0f 85 8a fd ff ff 48 85 c0 0f 84 55 fe ff ff <0f> 0b e9 4e fe ff ff be 08 00 00 00 4c 89 54 24 58 48 89 54 24 50
+ RSP: 0018:ffff888121787dc0 EFLAGS: 00010002
+ RAX: 00000000000006a2 RBX: ffff888100062800 RCX: ffffffff8190cb49
+ RDX: ffff888126934c00 RSI: 1ffff11020200a15 RDI: ffff8881010050a8
+ RBP: dffffc0000000000 R08: 0000000000000000 R09: ffffed1024d26982
+ R10: ffff888126934c17 R11: ffff8881010050a8 R12: ffff888126934c00
+ R13: ffff8881010050b8 R14: ffff888101005000 R15: ffff888126930008
+ FS: 00007f95c8cd7540(0000) GS:ffff8882b576e000(0000) knlGS:0000000000000000
+ CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
+ CR2: 00007f95c8de4dc0 CR3: 0000000128452002 CR4: 0000000000172ef0
+ Call Trace:
+ <TASK>
+ ? __pfx_ring_buffer_map_get_reader+0x10/0x10
+ tracing_buffers_ioctl+0x283/0x370
+ __x64_sys_ioctl+0x134/0x190
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ RIP: 0033:0x7f95c8de48db
+ Code: 00 48 89 44 24 18 31 c0 48 8d 44 24 60 c7 04 24 10 00 00 00 48 89 44 24 08 48 8d 44 24 20 48 89 44 24 10 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1c 48 8b 44 24 18 64 48 2b 04 25 28 00 00
+ RSP: 002b:00007ffe037ba110 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
+ RAX: ffffffffffffffda RBX: 00007ffe037bb2b0 RCX: 00007f95c8de48db
+ RDX: 0000000000000000 RSI: 0000000000005220 RDI: 0000000000000006
+ RBP: 00007ffe037ba180 R08: 0000000000000000 R09: 0000000000000000
+ R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
+ R13: 00007ffe037bb6f8 R14: 00007f95c9065000 R15: 00005575c7492c90
+ </TASK>
+ irq event stamp: 5080
+ hardirqs last enabled at (5079): [<ffffffff83e0adb0>] _raw_spin_unlock_irqrestore+0x50/0x70
+ hardirqs last disabled at (5080): [<ffffffff83e0aa83>] _raw_spin_lock_irqsave+0x63/0x70
+ softirqs last enabled at (4182): [<ffffffff81516122>] handle_softirqs+0x552/0x710
+ softirqs last disabled at (4159): [<ffffffff815163f7>] __irq_exit_rcu+0x107/0x210
+ ---[ end trace 0000000000000000 ]---
+
+The above was triggered by running on a kernel with lockdep, KASAN and
+kmemleak enabled and executing the following command:
+
+ # perf record -o perf-test.dat -a -- trace-cmd record --nosplice -e all -p function hackbench 50
+
+With perf injecting a lot of interrupts and trace-cmd enabling all
+events as well as function tracing, and with lockdep, KASAN and kmemleak
+enabled, an interrupt preempting an event being written could add enough
+events to wrap the buffer. trace-cmd was modified to have --nosplice use
+mmap instead of reading the buffer.
+
+The way to differentiate this case from the normal case, where only one
+page was written to and the swap of the reader page received that one
+page (which is the commit page), is to check whether the tail page is on
+the reader page. The difference between the commit page and the tail
+page is that the tail page is where new writes go, while the commit page
+holds the first write that hasn't been committed yet. In the case of an
+interrupt preempting the write of an event and filling the buffer, the
+interrupt would move the tail page but not the commit page.
+
+Have the warning trigger only if the tail page is also on the reader
+page, and print out the number of events dropped by a commit overrun,
+as that count cannot yet be safely added to the page for the reader to
+see that events were dropped.
+
+Cc: stable@vger.kernel.org
+Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Cc: Vincent Donnefort <vdonnefort@google.com>
+Link: https://lore.kernel.org/20250528121555.2066527e@gandalf.local.home
+Fixes: fe832be05a8ee ("ring-buffer: Have mmapped ring buffer keep track of missed events")
+Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/ring_buffer.c | 26 ++++++++++++++++++--------
+ 1 file changed, 18 insertions(+), 8 deletions(-)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7274,8 +7274,8 @@ consume:
+ /* Check if any events were dropped */
+ missed_events = cpu_buffer->lost_events;
+
+- if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+- if (missed_events) {
++ if (missed_events) {
++ if (cpu_buffer->reader_page != cpu_buffer->commit_page) {
+ struct buffer_data_page *bpage = reader->page;
+ unsigned int commit;
+ /*
+@@ -7296,13 +7296,23 @@ consume:
+ local_add(RB_MISSED_STORED, &bpage->commit);
+ }
+ local_add(RB_MISSED_EVENTS, &bpage->commit);
++ } else if (!WARN_ONCE(cpu_buffer->reader_page == cpu_buffer->tail_page,
++ "Reader on commit with %ld missed events",
++ missed_events)) {
++ /*
++ * There shouldn't be any missed events if the tail_page
++ * is on the reader page. But if the tail page is not on the
++ * reader page and the commit_page is, that would mean that
++ * there's a commit_overrun (an interrupt preempted an
++ * addition of an event and then filled the buffer
++ * with new events). In this case it's not an
++ * error, but it should still be reported.
++ *
++ * TODO: Add missed events to the page for user space to know.
++ */
++ pr_info("Ring buffer [%d] commit overrun lost %ld events at timestamp:%lld\n",
++ cpu, missed_events, cpu_buffer->reader_page->page->time_stamp);
+ }
+- } else {
+- /*
+- * There really shouldn't be any missed events if the commit
+- * is on the reader page.
+- */
+- WARN_ON_ONCE(missed_events);
+ }
+
+ cpu_buffer->lost_events = 0;
--- /dev/null
+From 40ee2afafc1d9fe3aa44a6fbe440d78a5c96a72e Mon Sep 17 00:00:00 2001
+From: Dmitry Antipov <dmantipov@yandex.ru>
+Date: Fri, 6 Jun 2025 14:22:42 +0300
+Subject: ring-buffer: Fix buffer locking in ring_buffer_subbuf_order_set()
+
+From: Dmitry Antipov <dmantipov@yandex.ru>
+
+commit 40ee2afafc1d9fe3aa44a6fbe440d78a5c96a72e upstream.
+
+Enlarge the critical section in ring_buffer_subbuf_order_set() to
+ensure that error handling takes place with per-buffer mutex held,
+thus preventing list corruption and other concurrency-related issues.
+
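+The diff below relies on the scope-based guard() helper from
+<linux/cleanup.h>, which releases the mutex automatically on every
+return path, so the error-path cleanup now runs with the per-buffer
+mutex still held. A minimal sketch of the pattern (names are
+hypothetical):
+
+  static int sketch_locked_update(struct mutex *m, int *val)
+  {
+          guard(mutex)(m);        /* mutex_unlock() runs at scope exit */
+
+          if (*val < 0)
+                  return -EINVAL; /* unlocks automatically */
+
+          (*val)++;
+          return 0;               /* unlocks automatically */
+  }
+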
+Cc: stable@vger.kernel.org
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Cc: Tzvetomir Stoyanov <tz.stoyanov@gmail.com>
+Link: https://lore.kernel.org/20250606112242.1510605-1-dmantipov@yandex.ru
+Reported-by: syzbot+05d673e83ec640f0ced9@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=05d673e83ec640f0ced9
+Fixes: f9b94daa542a8 ("ring-buffer: Set new size of the ring buffer sub page")
+Signed-off-by: Dmitry Antipov <dmantipov@yandex.ru>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/ring_buffer.c | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -6754,7 +6754,7 @@ int ring_buffer_subbuf_order_set(struct
+ old_size = buffer->subbuf_size;
+
+ /* prevent another thread from changing buffer sizes */
+- mutex_lock(&buffer->mutex);
++ guard(mutex)(&buffer->mutex);
+ atomic_inc(&buffer->record_disabled);
+
+ /* Make sure all commits have finished */
+@@ -6859,7 +6859,6 @@ int ring_buffer_subbuf_order_set(struct
+ }
+
+ atomic_dec(&buffer->record_disabled);
+- mutex_unlock(&buffer->mutex);
+
+ return 0;
+
+@@ -6868,7 +6867,6 @@ error:
+ buffer->subbuf_size = old_size;
+
+ atomic_dec(&buffer->record_disabled);
+- mutex_unlock(&buffer->mutex);
+
+ for_each_buffer_cpu(buffer, cpu) {
+ cpu_buffer = buffer->buffers[cpu];
--- /dev/null
+From c98cc9797b7009308fff73d41bc1d08642dab77a Mon Sep 17 00:00:00 2001
+From: Steven Rostedt <rostedt@goodmis.org>
+Date: Tue, 27 May 2025 10:58:20 -0400
+Subject: ring-buffer: Move cpus_read_lock() outside of buffer->mutex
+
+From: Steven Rostedt <rostedt@goodmis.org>
+
+commit c98cc9797b7009308fff73d41bc1d08642dab77a upstream.
+
+Running a modified trace-cmd record --nosplice where it does a mmap of the
+ring buffer when '--nosplice' is set, caused the following lockdep splat:
+
+ ======================================================
+ WARNING: possible circular locking dependency detected
+ 6.15.0-rc7-test-00002-gfb7d03d8a82f #551 Not tainted
+ ------------------------------------------------------
+ trace-cmd/1113 is trying to acquire lock:
+ ffff888100062888 (&buffer->mutex){+.+.}-{4:4}, at: ring_buffer_map+0x11c/0xe70
+
+ but task is already holding lock:
+ ffff888100a5f9f8 (&cpu_buffer->mapping_lock){+.+.}-{4:4}, at: ring_buffer_map+0xcf/0xe70
+
+ which lock already depends on the new lock.
+
+ the existing dependency chain (in reverse order) is:
+
+ -> #5 (&cpu_buffer->mapping_lock){+.+.}-{4:4}:
+ __mutex_lock+0x192/0x18c0
+ ring_buffer_map+0xcf/0xe70
+ tracing_buffers_mmap+0x1c4/0x3b0
+ __mmap_region+0xd8d/0x1f70
+ do_mmap+0x9d7/0x1010
+ vm_mmap_pgoff+0x20b/0x390
+ ksys_mmap_pgoff+0x2e9/0x440
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+
+ -> #4 (&mm->mmap_lock){++++}-{4:4}:
+ __might_fault+0xa5/0x110
+ _copy_to_user+0x22/0x80
+ _perf_ioctl+0x61b/0x1b70
+ perf_ioctl+0x62/0x90
+ __x64_sys_ioctl+0x134/0x190
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+
+ -> #3 (&cpuctx_mutex){+.+.}-{4:4}:
+ __mutex_lock+0x192/0x18c0
+ perf_event_init_cpu+0x325/0x7c0
+ perf_event_init+0x52a/0x5b0
+ start_kernel+0x263/0x3e0
+ x86_64_start_reservations+0x24/0x30
+ x86_64_start_kernel+0x95/0xa0
+ common_startup_64+0x13e/0x141
+
+ -> #2 (pmus_lock){+.+.}-{4:4}:
+ __mutex_lock+0x192/0x18c0
+ perf_event_init_cpu+0xb7/0x7c0
+ cpuhp_invoke_callback+0x2c0/0x1030
+ __cpuhp_invoke_callback_range+0xbf/0x1f0
+ _cpu_up+0x2e7/0x690
+ cpu_up+0x117/0x170
+ cpuhp_bringup_mask+0xd5/0x120
+ bringup_nonboot_cpus+0x13d/0x170
+ smp_init+0x2b/0xf0
+ kernel_init_freeable+0x441/0x6d0
+ kernel_init+0x1e/0x160
+ ret_from_fork+0x34/0x70
+ ret_from_fork_asm+0x1a/0x30
+
+ -> #1 (cpu_hotplug_lock){++++}-{0:0}:
+ cpus_read_lock+0x2a/0xd0
+ ring_buffer_resize+0x610/0x14e0
+ __tracing_resize_ring_buffer.part.0+0x42/0x120
+ tracing_set_tracer+0x7bd/0xa80
+ tracing_set_trace_write+0x132/0x1e0
+ vfs_write+0x21c/0xe80
+ ksys_write+0xf9/0x1c0
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+
+ -> #0 (&buffer->mutex){+.+.}-{4:4}:
+ __lock_acquire+0x1405/0x2210
+ lock_acquire+0x174/0x310
+ __mutex_lock+0x192/0x18c0
+ ring_buffer_map+0x11c/0xe70
+ tracing_buffers_mmap+0x1c4/0x3b0
+ __mmap_region+0xd8d/0x1f70
+ do_mmap+0x9d7/0x1010
+ vm_mmap_pgoff+0x20b/0x390
+ ksys_mmap_pgoff+0x2e9/0x440
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+
+ other info that might help us debug this:
+
+ Chain exists of:
+ &buffer->mutex --> &mm->mmap_lock --> &cpu_buffer->mapping_lock
+
+ Possible unsafe locking scenario:
+
+ CPU0 CPU1
+ ---- ----
+ lock(&cpu_buffer->mapping_lock);
+ lock(&mm->mmap_lock);
+ lock(&cpu_buffer->mapping_lock);
+ lock(&buffer->mutex);
+
+ *** DEADLOCK ***
+
+ 2 locks held by trace-cmd/1113:
+ #0: ffff888106b847e0 (&mm->mmap_lock){++++}-{4:4}, at: vm_mmap_pgoff+0x192/0x390
+ #1: ffff888100a5f9f8 (&cpu_buffer->mapping_lock){+.+.}-{4:4}, at: ring_buffer_map+0xcf/0xe70
+
+ stack backtrace:
+ CPU: 5 UID: 0 PID: 1113 Comm: trace-cmd Not tainted 6.15.0-rc7-test-00002-gfb7d03d8a82f #551 PREEMPT
+ Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
+ Call Trace:
+ <TASK>
+ dump_stack_lvl+0x6e/0xa0
+ print_circular_bug.cold+0x178/0x1be
+ check_noncircular+0x146/0x160
+ __lock_acquire+0x1405/0x2210
+ lock_acquire+0x174/0x310
+ ? ring_buffer_map+0x11c/0xe70
+ ? ring_buffer_map+0x11c/0xe70
+ ? __mutex_lock+0x169/0x18c0
+ __mutex_lock+0x192/0x18c0
+ ? ring_buffer_map+0x11c/0xe70
+ ? ring_buffer_map+0x11c/0xe70
+ ? function_trace_call+0x296/0x370
+ ? __pfx___mutex_lock+0x10/0x10
+ ? __pfx_function_trace_call+0x10/0x10
+ ? __pfx___mutex_lock+0x10/0x10
+ ? _raw_spin_unlock+0x2d/0x50
+ ? ring_buffer_map+0x11c/0xe70
+ ? ring_buffer_map+0x11c/0xe70
+ ? __mutex_lock+0x5/0x18c0
+ ring_buffer_map+0x11c/0xe70
+ ? do_raw_spin_lock+0x12d/0x270
+ ? find_held_lock+0x2b/0x80
+ ? _raw_spin_unlock+0x2d/0x50
+ ? rcu_is_watching+0x15/0xb0
+ ? _raw_spin_unlock+0x2d/0x50
+ ? trace_preempt_on+0xd0/0x110
+ tracing_buffers_mmap+0x1c4/0x3b0
+ __mmap_region+0xd8d/0x1f70
+ ? ring_buffer_lock_reserve+0x99/0xff0
+ ? __pfx___mmap_region+0x10/0x10
+ ? ring_buffer_lock_reserve+0x99/0xff0
+ ? __pfx_ring_buffer_lock_reserve+0x10/0x10
+ ? __pfx_ring_buffer_lock_reserve+0x10/0x10
+ ? bpf_lsm_mmap_addr+0x4/0x10
+ ? security_mmap_addr+0x46/0xd0
+ ? lock_is_held_type+0xd9/0x130
+ do_mmap+0x9d7/0x1010
+ ? 0xffffffffc0370095
+ ? __pfx_do_mmap+0x10/0x10
+ vm_mmap_pgoff+0x20b/0x390
+ ? __pfx_vm_mmap_pgoff+0x10/0x10
+ ? 0xffffffffc0370095
+ ksys_mmap_pgoff+0x2e9/0x440
+ do_syscall_64+0x79/0x1c0
+ entry_SYSCALL_64_after_hwframe+0x76/0x7e
+ RIP: 0033:0x7fb0963a7de2
+ Code: 00 00 00 0f 1f 44 00 00 41 f7 c1 ff 0f 00 00 75 27 55 89 cd 53 48 89 fb 48 85 ff 74 3b 41 89 ea 48 89 df b8 09 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 76 5b 5d c3 0f 1f 00 48 8b 05 e1 9f 0d 00 64
+ RSP: 002b:00007ffdcc8fb878 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
+ RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb0963a7de2
+ RDX: 0000000000000001 RSI: 0000000000001000 RDI: 0000000000000000
+ RBP: 0000000000000001 R08: 0000000000000006 R09: 0000000000000000
+ R10: 0000000000000001 R11: 0000000000000246 R12: 0000000000000000
+ R13: 00007ffdcc8fbe68 R14: 00007fb096628000 R15: 00005633e01a5c90
+ </TASK>
+
+The issue is that cpus_read_lock() is taken within buffer->mutex. The
+memory mapped pages are taken with the mmap_lock held. The buffer->mutex
+is taken within the cpu_buffer->mapping_lock. There's quite a chain with
+all these locks, and the deadlock can be fixed by taking the
+cpus_read_lock() outside of the buffer->mutex.
+
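+A sketch of the corrected ordering (the function name is hypothetical):
+the hotplug lock is taken first and dropped automatically at function
+exit via the guard() helper, so it always nests outside the
+buffer->mutex:
+
+  static int sketch_resize(struct trace_buffer *buffer)
+  {
+          guard(cpus_read_lock)();        /* 1st: CPU hotplug read lock */
+          mutex_lock(&buffer->mutex);     /* 2nd: per-buffer mutex */
+
+          /* ... update per-CPU buffers ... */
+
+          mutex_unlock(&buffer->mutex);
+          return 0;       /* cpus_read_unlock() runs at scope exit */
+  }
+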
+Cc: stable@vger.kernel.org
+Cc: Masami Hiramatsu <mhiramat@kernel.org>
+Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Cc: Vincent Donnefort <vdonnefort@google.com>
+Link: https://lore.kernel.org/20250527105820.0f45d045@gandalf.local.home
+Fixes: 117c39200d9d7 ("ring-buffer: Introducing ring-buffer mapping functions")
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/ring_buffer.c | 11 ++++++-----
+ 1 file changed, 6 insertions(+), 5 deletions(-)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -2796,6 +2796,12 @@ int ring_buffer_resize(struct trace_buff
+ if (nr_pages < 2)
+ nr_pages = 2;
+
++ /*
++ * Keep CPUs from coming online while resizing to synchronize
++ * with new per CPU buffers being created.
++ */
++ guard(cpus_read_lock)();
++
+ /* prevent another thread from changing buffer sizes */
+ mutex_lock(&buffer->mutex);
+ atomic_inc(&buffer->resizing);
+@@ -2840,7 +2846,6 @@ int ring_buffer_resize(struct trace_buff
+ cond_resched();
+ }
+
+- cpus_read_lock();
+ /*
+ * Fire off all the required work handlers
+ * We can't schedule on offline CPUs, but it's not necessary
+@@ -2880,7 +2885,6 @@ int ring_buffer_resize(struct trace_buff
+ cpu_buffer->nr_pages_to_update = 0;
+ }
+
+- cpus_read_unlock();
+ } else {
+ cpu_buffer = buffer->buffers[cpu_id];
+
+@@ -2908,8 +2912,6 @@ int ring_buffer_resize(struct trace_buff
+ goto out_err;
+ }
+
+- cpus_read_lock();
+-
+ /* Can't run something on an offline CPU. */
+ if (!cpu_online(cpu_id))
+ rb_update_pages(cpu_buffer);
+@@ -2928,7 +2930,6 @@ int ring_buffer_resize(struct trace_buff
+ }
+
+ cpu_buffer->nr_pages_to_update = 0;
+- cpus_read_unlock();
+ }
+
+ out:
alsa-usb-audio-add-implicit-feedback-quirk-for-rode-ai-1.patch
hid-usbhid-eliminate-recurrent-out-of-bounds-bug-in-usbhid_parse.patch
posix-cpu-timers-fix-race-between-handle_posix_cpu_timers-and-posix_cpu_timer_del.patch
+nvmem-zynqmp_nvmem-unbreak-driver-after-cleanup.patch
+usb-usbtmc-fix-read_stb-function-and-get_stb-ioctl.patch
+vmci-fix-race-between-vmci_host_setup_notify-and-vmci_ctx_unset_notify.patch
+tty-serial-8250_omap-fix-tx-with-dma-for-am33xx.patch
+usb-misc-onboard_usb_dev-fix-usb5744-initialization-sequence.patch
+usb-cdnsp-fix-issue-with-detecting-command-completion-event.patch
+usb-cdnsp-fix-issue-with-detecting-usb-3.2-speed.patch
+usb-flush-altsetting-0-endpoints-before-reinitializating-them-after-reset.patch
+usb-typec-tcpm-tcpci_maxim-fix-bounds-check-in-process_rx.patch
+usb-typec-tcpm-move-tcpm_queue_vdm_unlocked-to-asynchronous-work.patch
+9p-add-a-migrate_folio-method.patch
+ring-buffer-do-not-trigger-warn_on-due-to-a-commit_overrun.patch
+ring-buffer-fix-buffer-locking-in-ring_buffer_subbuf_order_set.patch
+ring-buffer-move-cpus_read_lock-outside-of-buffer-mutex.patch
+xfs-don-t-assume-perags-are-initialised-when-trimming-ags.patch
+xen-arm-call-uaccess_ttbr0_enable-for-dm_op-hypercall.patch
+x86-iopl-cure-tif_io_bitmap-inconsistencies.patch
+x86-fred-signal-prevent-immediate-repeat-of-single-step-trap-on-return-from-sigtrap-handler.patch
--- /dev/null
+From b495021a973e2468497689bd3e29b736747b896f Mon Sep 17 00:00:00 2001
+From: "Jiri Slaby (SUSE)" <jirislaby@kernel.org>
+Date: Thu, 22 May 2025 07:38:35 +0200
+Subject: tty: serial: 8250_omap: fix TX with DMA for am33xx
+
+From: Jiri Slaby (SUSE) <jirislaby@kernel.org>
+
+commit b495021a973e2468497689bd3e29b736747b896f upstream.
+
+Commit 1788cf6a91d9 ("tty: serial: switch from circ_buf to kfifo")
+introduced an error in the TX DMA handling for 8250_omap.
+
+When the OMAP_DMA_TX_KICK flag is set, the "skip_byte" is pulled from
+the kfifo and emitted directly in order to start the DMA. While the
+kfifo is updated, dma->tx_size is not decreased. This leads to
+uart_xmit_advance() called in omap_8250_dma_tx_complete() advancing the
+kfifo by one too much.
+
+In practice, transmitting N bytes has been seen to result in the last
+N-1 bytes being sent repeatedly.
+
+This change fixes the problem by moving all of the dma setup after the
+OMAP_DMA_TX_KICK handling and using kfifo_len() instead of the DMA size
+for the 4-byte cutoff check. This slightly changes the behaviour at
+buffer wraparound, but it still transmits the correct bytes somehow.
+
+Now, the "skip_byte" would no longer be accounted to the stats. As
+previously, dma->tx_size included also this skip byte, up->icount.tx was
+updated by aforementioned uart_xmit_advance() in
+omap_8250_dma_tx_complete(). Fix this by using the uart_fifo_out()
+helper instead of bare kfifo_get().
+
+Based on patch by Mans Rullgard <mans@mansr.com>
+
+Signed-off-by: "Jiri Slaby (SUSE)" <jirislaby@kernel.org>
+Fixes: 1788cf6a91d9 ("tty: serial: switch from circ_buf to kfifo")
+Link: https://lore.kernel.org/all/20250506150748.3162-1-mans@mansr.com/
+Reported-by: Mans Rullgard <mans@mansr.com>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/r/20250522053835.3495975-1-jirislaby@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/tty/serial/8250/8250_omap.c | 25 ++++++++++---------------
+ 1 file changed, 10 insertions(+), 15 deletions(-)
+
+--- a/drivers/tty/serial/8250/8250_omap.c
++++ b/drivers/tty/serial/8250/8250_omap.c
+@@ -1168,16 +1168,6 @@ static int omap_8250_tx_dma(struct uart_
+ return 0;
+ }
+
+- sg_init_table(&sg, 1);
+- ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+- UART_XMIT_SIZE, dma->tx_addr);
+- if (ret != 1) {
+- serial8250_clear_THRI(p);
+- return 0;
+- }
+-
+- dma->tx_size = sg_dma_len(&sg);
+-
+ if (priv->habit & OMAP_DMA_TX_KICK) {
+ unsigned char c;
+ u8 tx_lvl;
+@@ -1202,18 +1192,22 @@ static int omap_8250_tx_dma(struct uart_
+ ret = -EBUSY;
+ goto err;
+ }
+- if (dma->tx_size < 4) {
++ if (kfifo_len(&tport->xmit_fifo) < 4) {
+ ret = -EINVAL;
+ goto err;
+ }
+- if (!kfifo_get(&tport->xmit_fifo, &c)) {
++ if (!uart_fifo_out(&p->port, &c, 1)) {
+ ret = -EINVAL;
+ goto err;
+ }
+ skip_byte = c;
+- /* now we need to recompute due to kfifo_get */
+- kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1,
+- UART_XMIT_SIZE, dma->tx_addr);
++ }
++
++ sg_init_table(&sg, 1);
++ ret = kfifo_dma_out_prepare_mapped(&tport->xmit_fifo, &sg, 1, UART_XMIT_SIZE, dma->tx_addr);
++ if (ret != 1) {
++ ret = -EINVAL;
++ goto err;
+ }
+
+ desc = dmaengine_prep_slave_sg(dma->txchan, &sg, 1, DMA_MEM_TO_DEV,
+@@ -1223,6 +1217,7 @@ static int omap_8250_tx_dma(struct uart_
+ goto err;
+ }
+
++ dma->tx_size = sg_dma_len(&sg);
+ dma->tx_running = 1;
+
+ desc->callback = omap_8250_dma_tx_complete;
--- /dev/null
+From f4ecdc352646f7d23f348e5c544dbe3212c94fc8 Mon Sep 17 00:00:00 2001
+From: Pawel Laszczak <pawell@cadence.com>
+Date: Tue, 13 May 2025 05:30:09 +0000
+Subject: usb: cdnsp: Fix issue with detecting command completion event
+
+From: Pawel Laszczak <pawell@cadence.com>
+
+commit f4ecdc352646f7d23f348e5c544dbe3212c94fc8 upstream.
+
+In some cases there is a small time gap in which CMD_RING_BUSY can be
+cleared by the controller while adding the command completion event to
+the event ring is delayed. As a result, the driver returns an error
+code.
+
+This behavior has been detected with the usbtest driver (test 9) in a
+configuration including ep1in/ep1out bulk and ep2in/ep2out isoc
+endpoints.
+
+This gap probably occurs because the controller is busy adding some
+other events to the event ring.
+
+CMD_RING_BUSY is cleared to '0' when the Command Descriptor has been
+executed, not when the command completion event has been added to the
+event ring.
+
+For this test a small delay (less than 10us) is sufficient to fix the
+issue, but to make sure the problem doesn't happen again in the future
+the patch introduces 10 retries that check with a delay of about 20us
+each before returning an error code.
+
+Fixes: 3d82904559f4 ("usb: cdnsp: cdns3 Add main part of Cadence USBSSP DRD Driver")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Pawel Laszczak <pawell@cadence.com>
+Acked-by: Peter Chen <peter.chen@kernel.org>
+Link: https://lore.kernel.org/r/PH7PR07MB9538AA45362ACCF1B94EE9B7DD96A@PH7PR07MB9538.namprd07.prod.outlook.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/cdns3/cdnsp-gadget.c | 18 +++++++++++++++++-
+ 1 file changed, 17 insertions(+), 1 deletion(-)
+
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -546,6 +546,7 @@ int cdnsp_wait_for_cmd_compl(struct cdns
+ dma_addr_t cmd_deq_dma;
+ union cdnsp_trb *event;
+ u32 cycle_state;
++ u32 retry = 10;
+ int ret, val;
+ u64 cmd_dma;
+ u32 flags;
+@@ -577,8 +578,23 @@ int cdnsp_wait_for_cmd_compl(struct cdns
+ flags = le32_to_cpu(event->event_cmd.flags);
+
+ /* Check the owner of the TRB. */
+- if ((flags & TRB_CYCLE) != cycle_state)
++ if ((flags & TRB_CYCLE) != cycle_state) {
++ /*
++		 * Give the controller some extra time to finish the
++		 * command before returning an error code. Checking
++		 * CMD_RING_BUSY is not sufficient because this bit is
++		 * cleared to '0' when the Command Descriptor has been
++		 * executed by the controller and not when the command
++		 * completion event has been added to the event
++		 * ring.
++ */
++ if (retry--) {
++ udelay(20);
++ continue;
++ }
++
+ return -EINVAL;
++ }
+
+ cmd_dma = le64_to_cpu(event->event_cmd.cmd_trb);
+
--- /dev/null
+From 2852788cfbe9ca1ab68509d65804413871f741f9 Mon Sep 17 00:00:00 2001
+From: Pawel Laszczak <pawell@cadence.com>
+Date: Tue, 13 May 2025 06:54:03 +0000
+Subject: usb: cdnsp: Fix issue with detecting USB 3.2 speed
+
+From: Pawel Laszczak <pawell@cadence.com>
+
+commit 2852788cfbe9ca1ab68509d65804413871f741f9 upstream.
+
+This patch adds support for detecting the SuperSpeedPlus Gen1 x2 and
+SuperSpeedPlus Gen2 x2 speeds.
+
+Fixes: 3d82904559f4 ("usb: cdnsp: cdns3 Add main part of Cadence USBSSP DRD Driver")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Pawel Laszczak <pawell@cadence.com>
+Acked-by: Peter Chen <peter.chen@kernel.org>
+Link: https://lore.kernel.org/r/PH7PR07MB95387AD98EDCA695FECE52BADD96A@PH7PR07MB9538.namprd07.prod.outlook.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/cdns3/cdnsp-gadget.c | 3 ++-
+ drivers/usb/cdns3/cdnsp-gadget.h | 4 ++++
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+--- a/drivers/usb/cdns3/cdnsp-gadget.c
++++ b/drivers/usb/cdns3/cdnsp-gadget.c
+@@ -28,7 +28,8 @@
+ unsigned int cdnsp_port_speed(unsigned int port_status)
+ {
+ /*Detect gadget speed based on PORTSC register*/
+- if (DEV_SUPERSPEEDPLUS(port_status))
++ if (DEV_SUPERSPEEDPLUS(port_status) ||
++ DEV_SSP_GEN1x2(port_status) || DEV_SSP_GEN2x2(port_status))
+ return USB_SPEED_SUPER_PLUS;
+ else if (DEV_SUPERSPEED(port_status))
+ return USB_SPEED_SUPER;
+--- a/drivers/usb/cdns3/cdnsp-gadget.h
++++ b/drivers/usb/cdns3/cdnsp-gadget.h
+@@ -285,11 +285,15 @@ struct cdnsp_port_regs {
+ #define XDEV_HS (0x3 << 10)
+ #define XDEV_SS (0x4 << 10)
+ #define XDEV_SSP (0x5 << 10)
++#define XDEV_SSP1x2 (0x6 << 10)
++#define XDEV_SSP2x2 (0x7 << 10)
+ #define DEV_UNDEFSPEED(p) (((p) & DEV_SPEED_MASK) == (0x0 << 10))
+ #define DEV_FULLSPEED(p) (((p) & DEV_SPEED_MASK) == XDEV_FS)
+ #define DEV_HIGHSPEED(p) (((p) & DEV_SPEED_MASK) == XDEV_HS)
+ #define DEV_SUPERSPEED(p) (((p) & DEV_SPEED_MASK) == XDEV_SS)
+ #define DEV_SUPERSPEEDPLUS(p) (((p) & DEV_SPEED_MASK) == XDEV_SSP)
++#define DEV_SSP_GEN1x2(p) (((p) & DEV_SPEED_MASK) == XDEV_SSP1x2)
++#define DEV_SSP_GEN2x2(p) (((p) & DEV_SPEED_MASK) == XDEV_SSP2x2)
+ #define DEV_SUPERSPEED_ANY(p) (((p) & DEV_SPEED_MASK) >= XDEV_SS)
+ #define DEV_PORT_SPEED(p) (((p) >> 10) & 0x0f)
+ /* Port Link State Write Strobe - set this when changing link state */
--- /dev/null
+From 89bb3dc13ac29a563f4e4c555e422882f64742bd Mon Sep 17 00:00:00 2001
+From: Mathias Nyman <mathias.nyman@linux.intel.com>
+Date: Wed, 14 May 2025 16:25:20 +0300
+Subject: usb: Flush altsetting 0 endpoints before reinitializating them after reset.
+
+From: Mathias Nyman <mathias.nyman@linux.intel.com>
+
+commit 89bb3dc13ac29a563f4e4c555e422882f64742bd upstream.
+
+The USB core avoids sending a Set-Interface altsetting 0 request after
+device reset, and instead relies on calling usb_disable_interface() and
+usb_enable_interface() to flush and reset the host side of those
+endpoints.
+
+xHCI hosts allocate and set up endpoint ring buffers and host_ep->hcpriv
+during usb_hcd_alloc_bandwidth() callback, which in this case is called
+before flushing the endpoint in usb_disable_interface().
+
+Call usb_disable_interface() before usb_hcd_alloc_bandwidth() to ensure
+URBs are flushed before new ring buffers for the endpoints are allocated.
+
+Otherwise the host driver will attempt to find and remove old stale URBs
+from a freshly allocated new ring buffer.
+
+Cc: stable <stable@kernel.org>
+Fixes: 4fe0387afa89 ("USB: don't send Set-Interface after reset")
+Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
+Link: https://lore.kernel.org/r/20250514132520.225345-1-mathias.nyman@linux.intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/core/hub.c | 16 ++++++++++++++--
+ 1 file changed, 14 insertions(+), 2 deletions(-)
+
+--- a/drivers/usb/core/hub.c
++++ b/drivers/usb/core/hub.c
+@@ -6135,6 +6135,7 @@ static int usb_reset_and_verify_device(s
+ struct usb_hub *parent_hub;
+ struct usb_hcd *hcd = bus_to_hcd(udev->bus);
+ struct usb_device_descriptor descriptor;
++ struct usb_interface *intf;
+ struct usb_host_bos *bos;
+ int i, j, ret = 0;
+ int port1 = udev->portnum;
+@@ -6192,6 +6193,18 @@ static int usb_reset_and_verify_device(s
+ if (!udev->actconfig)
+ goto done;
+
++ /*
++ * Some devices can't handle setting default altsetting 0 with a
++ * Set-Interface request. Disable host-side endpoints of those
++ * interfaces here. Enable and reset them back after host has set
++	 * its internal endpoint structures during usb_hcd_alloc_bandwidth().
++ */
++ for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
++ intf = udev->actconfig->interface[i];
++ if (intf->cur_altsetting->desc.bAlternateSetting == 0)
++ usb_disable_interface(udev, intf, true);
++ }
++
+ mutex_lock(hcd->bandwidth_mutex);
+ ret = usb_hcd_alloc_bandwidth(udev, udev->actconfig, NULL, NULL);
+ if (ret < 0) {
+@@ -6223,12 +6236,11 @@ static int usb_reset_and_verify_device(s
+ */
+ for (i = 0; i < udev->actconfig->desc.bNumInterfaces; i++) {
+ struct usb_host_config *config = udev->actconfig;
+- struct usb_interface *intf = config->interface[i];
+ struct usb_interface_descriptor *desc;
+
++ intf = config->interface[i];
+ desc = &intf->cur_altsetting->desc;
+ if (desc->bAlternateSetting == 0) {
+- usb_disable_interface(udev, intf, true);
+ usb_enable_interface(udev, intf, true);
+ ret = 0;
+ } else {
--- /dev/null
+From 1143d41922c0f87504f095417ba1870167970143 Mon Sep 17 00:00:00 2001
+From: Jonathan Stroud <jonathan.stroud@amd.com>
+Date: Fri, 16 May 2025 18:02:40 +0530
+Subject: usb: misc: onboard_usb_dev: Fix usb5744 initialization sequence
+
+From: Jonathan Stroud <jonathan.stroud@amd.com>
+
+commit 1143d41922c0f87504f095417ba1870167970143 upstream.
+
+Introduce i2c read/write helpers for proper configuration register
+programming. This ensures that a read-modify-write sequence is performed
+and that reserved bits in the Runtime Flags 2 register are not touched.
+
+In addition, the legacy SMBus block write inserted an extra count value
+into the i2c data stream, which breaks the register write on the
+usb5744.
+
+Switching to the new i2c read/write helpers fixes both issues.
+
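+As an illustration, assuming standard SMBus framing: a block write puts
+its own count byte on the wire after the command code, which the
+usb5744 register-access protocol does not expect, whereas a raw
+i2c_transfer() of a prepared buffer sends exactly those bytes:
+
+  /* i2c_smbus_write_block_data(client, cmd, len, buf) transmits:
+   *         cmd, len, buf[0] ... buf[len-1]
+   * while a single i2c_transfer() write message transmits only msg.buf,
+   * so the driver controls every byte the hub sees.
+   */
+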
+Fixes: 6782311d04df ("usb: misc: onboard_usb_dev: add Microchip usb5744 SMBus programming support")
+Cc: stable <stable@kernel.org>
+Signed-off-by: Jonathan Stroud <jonathan.stroud@amd.com>
+Co-developed-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
+Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@amd.com>
+Link: https://lore.kernel.org/r/1747398760-284021-1-git-send-email-radhey.shyam.pandey@amd.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/misc/onboard_usb_dev.c | 100 +++++++++++++++++++++++++----
+ 1 file changed, 87 insertions(+), 13 deletions(-)
+
+diff --git a/drivers/usb/misc/onboard_usb_dev.c b/drivers/usb/misc/onboard_usb_dev.c
+index 40bc98019e0b..1048e3912068 100644
+--- a/drivers/usb/misc/onboard_usb_dev.c
++++ b/drivers/usb/misc/onboard_usb_dev.c
+@@ -36,9 +36,10 @@
+ #define USB5744_CMD_CREG_ACCESS 0x99
+ #define USB5744_CMD_CREG_ACCESS_LSB 0x37
+ #define USB5744_CREG_MEM_ADDR 0x00
++#define USB5744_CREG_MEM_RD_ADDR 0x04
+ #define USB5744_CREG_WRITE 0x00
+-#define USB5744_CREG_RUNTIMEFLAGS2 0x41
+-#define USB5744_CREG_RUNTIMEFLAGS2_LSB 0x1D
++#define USB5744_CREG_READ 0x01
++#define USB5744_CREG_RUNTIMEFLAGS2 0x411D
+ #define USB5744_CREG_BYPASS_UDC_SUSPEND BIT(3)
+
+ static void onboard_dev_attach_usb_driver(struct work_struct *work);
+@@ -309,11 +310,88 @@ static void onboard_dev_attach_usb_driver(struct work_struct *work)
+ pr_err("Failed to attach USB driver: %pe\n", ERR_PTR(err));
+ }
+
++static int onboard_dev_5744_i2c_read_byte(struct i2c_client *client, u16 addr, u8 *data)
++{
++ struct i2c_msg msg[2];
++ u8 rd_buf[3];
++ int ret;
++
++ u8 wr_buf[7] = {0, USB5744_CREG_MEM_ADDR, 4,
++ USB5744_CREG_READ, 1,
++ addr >> 8 & 0xff,
++ addr & 0xff};
++ msg[0].addr = client->addr;
++ msg[0].flags = 0;
++ msg[0].len = sizeof(wr_buf);
++ msg[0].buf = wr_buf;
++
++ ret = i2c_transfer(client->adapter, msg, 1);
++ if (ret < 0)
++ return ret;
++
++ wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++ wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++ wr_buf[2] = 0;
++ msg[0].len = 3;
++
++ ret = i2c_transfer(client->adapter, msg, 1);
++ if (ret < 0)
++ return ret;
++
++ wr_buf[0] = 0;
++ wr_buf[1] = USB5744_CREG_MEM_RD_ADDR;
++ msg[0].len = 2;
++
++ msg[1].addr = client->addr;
++ msg[1].flags = I2C_M_RD;
++ msg[1].len = 2;
++ msg[1].buf = rd_buf;
++
++ ret = i2c_transfer(client->adapter, msg, 2);
++ if (ret < 0)
++ return ret;
++ *data = rd_buf[1];
++
++ return 0;
++}
++
++static int onboard_dev_5744_i2c_write_byte(struct i2c_client *client, u16 addr, u8 data)
++{
++ struct i2c_msg msg[2];
++ int ret;
++
++ u8 wr_buf[8] = {0, USB5744_CREG_MEM_ADDR, 5,
++ USB5744_CREG_WRITE, 1,
++ addr >> 8 & 0xff,
++ addr & 0xff,
++ data};
++ msg[0].addr = client->addr;
++ msg[0].flags = 0;
++ msg[0].len = sizeof(wr_buf);
++ msg[0].buf = wr_buf;
++
++ ret = i2c_transfer(client->adapter, msg, 1);
++ if (ret < 0)
++ return ret;
++
++ msg[0].len = 3;
++ wr_buf[0] = USB5744_CMD_CREG_ACCESS;
++ wr_buf[1] = USB5744_CMD_CREG_ACCESS_LSB;
++ wr_buf[2] = 0;
++
++ ret = i2c_transfer(client->adapter, msg, 1);
++ if (ret < 0)
++ return ret;
++
++ return 0;
++}
++
+ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ {
+ #if IS_ENABLED(CONFIG_USB_ONBOARD_DEV_USB5744)
+ struct device *dev = &client->dev;
+ int ret;
++ u8 reg;
+
+ /*
+ * Set BYPASS_UDC_SUSPEND bit to ensure MCU is always enabled
+@@ -321,21 +399,17 @@ static int onboard_dev_5744_i2c_init(struct i2c_client *client)
+ * The command writes 5 bytes to memory and single data byte in
+ * configuration register.
+ */
+- char wr_buf[7] = {USB5744_CREG_MEM_ADDR, 5,
+- USB5744_CREG_WRITE, 1,
+- USB5744_CREG_RUNTIMEFLAGS2,
+- USB5744_CREG_RUNTIMEFLAGS2_LSB,
+- USB5744_CREG_BYPASS_UDC_SUSPEND};
++ ret = onboard_dev_5744_i2c_read_byte(client,
++ USB5744_CREG_RUNTIMEFLAGS2, ®);
++ if (ret)
++ return dev_err_probe(dev, ret, "CREG_RUNTIMEFLAGS2 read failed\n");
+
+- ret = i2c_smbus_write_block_data(client, 0, sizeof(wr_buf), wr_buf);
++ reg |= USB5744_CREG_BYPASS_UDC_SUSPEND;
++ ret = onboard_dev_5744_i2c_write_byte(client,
++ USB5744_CREG_RUNTIMEFLAGS2, reg);
+ if (ret)
+ return dev_err_probe(dev, ret, "BYPASS_UDC_SUSPEND bit configuration failed\n");
+
+- ret = i2c_smbus_write_word_data(client, USB5744_CMD_CREG_ACCESS,
+- USB5744_CMD_CREG_ACCESS_LSB);
+- if (ret)
+- return dev_err_probe(dev, ret, "Configuration Register Access Command failed\n");
+-
+ /* Send SMBus command to boot hub. */
+ ret = i2c_smbus_write_word_data(client, USB5744_CMD_ATTACH,
+ USB5744_CMD_ATTACH_LSB);
+--
+2.49.0
+
--- /dev/null
+From 324d45e53f1a36c88bc649dc39e0c8300a41be0a Mon Sep 17 00:00:00 2001
+From: RD Babiera <rdbabiera@google.com>
+Date: Tue, 6 May 2025 23:28:53 +0000
+Subject: usb: typec: tcpm: move tcpm_queue_vdm_unlocked to asynchronous work
+
+From: RD Babiera <rdbabiera@google.com>
+
+commit 324d45e53f1a36c88bc649dc39e0c8300a41be0a upstream.
+
+A state check was previously added to tcpm_queue_vdm_unlocked to
+prevent a deadlock where the DisplayPort Alt Mode driver would be
+executing work and attempting to grab the tcpm_lock while the TCPM
+was holding the lock and attempting to unregister the altmode, blocking
+on the altmode driver's cancel_work_sync call.
+
+Because the state check isn't protected, there is a small window
+where the Alt Mode driver could determine that the TCPM is
+in a ready state and attempt to grab the lock while the
+TCPM grabs the lock and changes the TCPM state to one that
+causes the deadlock. The callstack is provided below:
+
+[110121.667392][ C7] Call trace:
+[110121.667396][ C7] __switch_to+0x174/0x338
+[110121.667406][ C7] __schedule+0x608/0x9f0
+[110121.667414][ C7] schedule+0x7c/0xe8
+[110121.667423][ C7] kernfs_drain+0xb0/0x114
+[110121.667431][ C7] __kernfs_remove+0x16c/0x20c
+[110121.667436][ C7] kernfs_remove_by_name_ns+0x74/0xe8
+[110121.667442][ C7] sysfs_remove_group+0x84/0xe8
+[110121.667450][ C7] sysfs_remove_groups+0x34/0x58
+[110121.667458][ C7] device_remove_groups+0x10/0x20
+[110121.667464][ C7] device_release_driver_internal+0x164/0x2e4
+[110121.667475][ C7] device_release_driver+0x18/0x28
+[110121.667484][ C7] bus_remove_device+0xec/0x118
+[110121.667491][ C7] device_del+0x1e8/0x4ac
+[110121.667498][ C7] device_unregister+0x18/0x38
+[110121.667504][ C7] typec_unregister_altmode+0x30/0x44
+[110121.667515][ C7] tcpm_reset_port+0xac/0x370
+[110121.667523][ C7] tcpm_snk_detach+0x84/0xb8
+[110121.667529][ C7] run_state_machine+0x4c0/0x1b68
+[110121.667536][ C7] tcpm_state_machine_work+0x94/0xe4
+[110121.667544][ C7] kthread_worker_fn+0x10c/0x244
+[110121.667552][ C7] kthread+0x104/0x1d4
+[110121.667557][ C7] ret_from_fork+0x10/0x20
+
+[110121.667689][ C7] Workqueue: events dp_altmode_work
+[110121.667697][ C7] Call trace:
+[110121.667701][ C7] __switch_to+0x174/0x338
+[110121.667710][ C7] __schedule+0x608/0x9f0
+[110121.667717][ C7] schedule+0x7c/0xe8
+[110121.667725][ C7] schedule_preempt_disabled+0x24/0x40
+[110121.667733][ C7] __mutex_lock+0x408/0xdac
+[110121.667741][ C7] __mutex_lock_slowpath+0x14/0x24
+[110121.667748][ C7] mutex_lock+0x40/0xec
+[110121.667757][ C7] tcpm_altmode_enter+0x78/0xb4
+[110121.667764][ C7] typec_altmode_enter+0xdc/0x10c
+[110121.667769][ C7] dp_altmode_work+0x68/0x164
+[110121.667775][ C7] process_one_work+0x1e4/0x43c
+[110121.667783][ C7] worker_thread+0x25c/0x430
+[110121.667789][ C7] kthread+0x104/0x1d4
+[110121.667794][ C7] ret_from_fork+0x10/0x20
+
+Change tcpm_queue_vdm_unlocked to queue tcpm_queue_vdm_work, which can
+perform the state check while holding the TCPM lock, after the Alt Mode
+lock has been released. This requires a new struct, altmode_vdm_event,
+to hold the vdm data.
+
+Fixes: cdc9946ea637 ("usb: typec: tcpm: enforce ready state when queueing alt mode vdm")
+Cc: stable <stable@kernel.org>
+Signed-off-by: RD Babiera <rdbabiera@google.com>
+Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
+Reviewed-by: Badhri Jagan Sridharan <badhri@google.com>
+Link: https://lore.kernel.org/r/20250506232853.1968304-2-rdbabiera@google.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/typec/tcpm/tcpm.c | 91 ++++++++++++++++++++++++++++++++----------
+ 1 file changed, 71 insertions(+), 20 deletions(-)
+
+--- a/drivers/usb/typec/tcpm/tcpm.c
++++ b/drivers/usb/typec/tcpm/tcpm.c
+@@ -568,6 +568,15 @@ struct pd_rx_event {
+ enum tcpm_transmit_type rx_sop_type;
+ };
+
++struct altmode_vdm_event {
++ struct kthread_work work;
++ struct tcpm_port *port;
++ u32 header;
++ u32 *data;
++ int cnt;
++ enum tcpm_transmit_type tx_sop_type;
++};
++
+ static const char * const pd_rev[] = {
+ [PD_REV10] = "rev1",
+ [PD_REV20] = "rev2",
+@@ -1562,18 +1571,68 @@ static void tcpm_queue_vdm(struct tcpm_p
+ mod_vdm_delayed_work(port, 0);
+ }
+
+-static void tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
+- const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++static void tcpm_queue_vdm_work(struct kthread_work *work)
+ {
+- if (port->state != SRC_READY && port->state != SNK_READY &&
+- port->state != SRC_VDM_IDENTITY_REQUEST)
+- return;
++ struct altmode_vdm_event *event = container_of(work,
++ struct altmode_vdm_event,
++ work);
++ struct tcpm_port *port = event->port;
+
+ mutex_lock(&port->lock);
+- tcpm_queue_vdm(port, header, data, cnt, tx_sop_type);
++ if (port->state != SRC_READY && port->state != SNK_READY &&
++ port->state != SRC_VDM_IDENTITY_REQUEST) {
++ tcpm_log_force(port, "dropping altmode_vdm_event");
++ goto port_unlock;
++ }
++
++ tcpm_queue_vdm(port, event->header, event->data, event->cnt, event->tx_sop_type);
++
++port_unlock:
++ kfree(event->data);
++ kfree(event);
+ mutex_unlock(&port->lock);
+ }
+
++static int tcpm_queue_vdm_unlocked(struct tcpm_port *port, const u32 header,
++ const u32 *data, int cnt, enum tcpm_transmit_type tx_sop_type)
++{
++ struct altmode_vdm_event *event;
++ u32 *data_cpy;
++ int ret = -ENOMEM;
++
++ event = kzalloc(sizeof(*event), GFP_KERNEL);
++ if (!event)
++ goto err_event;
++
++ data_cpy = kcalloc(cnt, sizeof(u32), GFP_KERNEL);
++ if (!data_cpy)
++ goto err_data;
++
++ kthread_init_work(&event->work, tcpm_queue_vdm_work);
++ event->port = port;
++ event->header = header;
++ memcpy(data_cpy, data, sizeof(u32) * cnt);
++ event->data = data_cpy;
++ event->cnt = cnt;
++ event->tx_sop_type = tx_sop_type;
++
++ ret = kthread_queue_work(port->wq, &event->work);
++ if (!ret) {
++ ret = -EBUSY;
++ goto err_queue;
++ }
++
++ return 0;
++
++err_queue:
++ kfree(data_cpy);
++err_data:
++ kfree(event);
++err_event:
++ tcpm_log_force(port, "failed to queue altmode vdm, err:%d", ret);
++ return ret;
++}
++
+ static void svdm_consume_identity(struct tcpm_port *port, const u32 *p, int cnt)
+ {
+ u32 vdo = p[VDO_INDEX_IDH];
+@@ -2784,8 +2843,7 @@ static int tcpm_altmode_enter(struct typ
+ header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ header |= VDO_OPOS(altmode->mode);
+
+- tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP);
+ }
+
+ static int tcpm_altmode_exit(struct typec_altmode *altmode)
+@@ -2801,8 +2859,7 @@ static int tcpm_altmode_exit(struct type
+ header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ header |= VDO_OPOS(altmode->mode);
+
+- tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP);
+ }
+
+ static int tcpm_altmode_vdm(struct typec_altmode *altmode,
+@@ -2810,9 +2867,7 @@ static int tcpm_altmode_vdm(struct typec
+ {
+ struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+
+- tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+-
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP);
+ }
+
+ static const struct typec_altmode_ops tcpm_altmode_ops = {
+@@ -2836,8 +2891,7 @@ static int tcpm_cable_altmode_enter(stru
+ header = VDO(altmode->svid, vdo ? 2 : 1, svdm_version, CMD_ENTER_MODE);
+ header |= VDO_OPOS(altmode->mode);
+
+- tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, vdo, vdo ? 1 : 0, TCPC_TX_SOP_PRIME);
+ }
+
+ static int tcpm_cable_altmode_exit(struct typec_altmode *altmode, enum typec_plug_index sop)
+@@ -2853,8 +2907,7 @@ static int tcpm_cable_altmode_exit(struc
+ header = VDO(altmode->svid, 1, svdm_version, CMD_EXIT_MODE);
+ header |= VDO_OPOS(altmode->mode);
+
+- tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, NULL, 0, TCPC_TX_SOP_PRIME);
+ }
+
+ static int tcpm_cable_altmode_vdm(struct typec_altmode *altmode, enum typec_plug_index sop,
+@@ -2862,9 +2915,7 @@ static int tcpm_cable_altmode_vdm(struct
+ {
+ struct tcpm_port *port = typec_altmode_get_drvdata(altmode);
+
+- tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+-
+- return 0;
++ return tcpm_queue_vdm_unlocked(port, header, data, count - 1, TCPC_TX_SOP_PRIME);
+ }
+
+ static const struct typec_cable_ops tcpm_cable_ops = {
--- /dev/null
+From 0736299d090f5c6a1032678705c4bc0a9511a3db Mon Sep 17 00:00:00 2001
+From: Amit Sunil Dhamne <amitsd@google.com>
+Date: Fri, 2 May 2025 16:57:03 -0700
+Subject: usb: typec: tcpm/tcpci_maxim: Fix bounds check in process_rx()
+
+From: Amit Sunil Dhamne <amitsd@google.com>
+
+commit 0736299d090f5c6a1032678705c4bc0a9511a3db upstream.
+
+Register read of TCPC_RX_BYTE_CNT returns the total size consisting of:
+
+ PD message (pending read) size + 1 Byte for Frame Type (SOP*)
+
+This is validated against the max PD message (`struct pd_message`) size
+without accounting for the extra byte for the frame type. Note that
+struct pd_message does not contain a field for the frame_type. This
+results in valid maximum-size messages being flagged as invalid (false
+negatives), i.e. whenever the "PD message (pending read)" size is equal
+to the max PD message size.
+
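+As a worked example, assuming the usual PD message layout of a 2-byte
+header plus up to seven 4-byte data objects (so sizeof(struct
+pd_message) == 30), a full-size message is reported as
+
+  count = 30 (message) + 1 (frame-type byte) = 31
+
+which the old "count > sizeof(struct pd_message)" check wrongly
+rejected.
+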
+Fixes: 6f413b559f86 ("usb: typec: tcpci_maxim: Chip level TCPC driver")
+Signed-off-by: Amit Sunil Dhamne <amitsd@google.com>
+Signed-off-by: Badhri Jagan Sridharan <badhri@google.com>
+Reviewed-by: Kyle Tso <kyletso@google.com>
+Cc: stable <stable@kernel.org>
+Link: https://lore.kernel.org/stable/20250502-b4-new-fix-pd-rx-count-v1-1-e5711ed09b3d%40google.com
+Reviewed-by: Heikki Krogerus <heikki.krogerus@linux.intel.com>
+Link: https://lore.kernel.org/r/20250502-b4-new-fix-pd-rx-count-v1-1-e5711ed09b3d@google.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/typec/tcpm/tcpci_maxim_core.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/usb/typec/tcpm/tcpci_maxim_core.c
++++ b/drivers/usb/typec/tcpm/tcpci_maxim_core.c
+@@ -166,7 +166,8 @@ static void process_rx(struct max_tcpci_
+ return;
+ }
+
+- if (count > sizeof(struct pd_message) || count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
++ if (count > sizeof(struct pd_message) + 1 ||
++ count + 1 > TCPC_RECEIVE_BUFFER_LEN) {
+ dev_err(chip->dev, "Invalid TCPC_RX_BYTE_CNT %d\n", count);
+ return;
+ }
--- /dev/null
+From acb3dac2805d3342ded7dbbd164add32bbfdf21c Mon Sep 17 00:00:00 2001
+From: Dave Penkler <dpenkler@gmail.com>
+Date: Wed, 21 May 2025 14:16:55 +0200
+Subject: usb: usbtmc: Fix read_stb function and get_stb ioctl
+
+From: Dave Penkler <dpenkler@gmail.com>
+
+commit acb3dac2805d3342ded7dbbd164add32bbfdf21c upstream.
+
+The usbtmc488_ioctl_read_stb function relied on a positive return from
+usbtmc_get_stb to reset the srq condition in the driver. The
+USBTMC_IOCTL_GET_STB case tested for a positive return to return the stb
+to the user.
+
+Commit cac01bd178d6 ("usb: usbtmc: Fix erroneous get_stb ioctl error
+returns") changed the return value of usbtmc_get_stb to 0 on success
+instead of returning the value of usb_control_msg, which is positive in
+the normal case. This change caused usbtmc488_ioctl_read_stb and the
+USBTMC_IOCTL_GET_STB ioctl to no longer work correctly.
+
+Change the test in usbtmc488_ioctl_read_stb to test for failure
+first and return the failure code immediately.
+Change the test for the USBTMC_IOCTL_GET_STB ioctl to test for 0
+instead of a positive value.
+
+Fixes: cac01bd178d6 ("usb: usbtmc: Fix erroneous get_stb ioctl error returns")
+Cc: stable@vger.kernel.org
+Signed-off-by: Dave Penkler <dpenkler@gmail.com>
+Link: https://lore.kernel.org/r/20250521121656.18174-3-dpenkler@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/usb/class/usbtmc.c | 17 +++++++++--------
+ 1 file changed, 9 insertions(+), 8 deletions(-)
+
+--- a/drivers/usb/class/usbtmc.c
++++ b/drivers/usb/class/usbtmc.c
+@@ -565,14 +565,15 @@ static int usbtmc488_ioctl_read_stb(stru
+
+ rv = usbtmc_get_stb(file_data, &stb);
+
+- if (rv > 0) {
+- srq_asserted = atomic_xchg(&file_data->srq_asserted,
+- srq_asserted);
+- if (srq_asserted)
+- stb |= 0x40; /* Set RQS bit */
++ if (rv < 0)
++ return rv;
++
++ srq_asserted = atomic_xchg(&file_data->srq_asserted, srq_asserted);
++ if (srq_asserted)
++ stb |= 0x40; /* Set RQS bit */
++
++ rv = put_user(stb, (__u8 __user *)arg);
+
+- rv = put_user(stb, (__u8 __user *)arg);
+- }
+ return rv;
+
+ }
+@@ -2201,7 +2202,7 @@ static long usbtmc_ioctl(struct file *fi
+
+ case USBTMC_IOCTL_GET_STB:
+ retval = usbtmc_get_stb(file_data, &tmp_byte);
+- if (retval > 0)
++ if (!retval)
+ retval = put_user(tmp_byte, (__u8 __user *)arg);
+ break;
+
--- /dev/null
+From 1bd6406fb5f36c2bb1e96e27d4c3e9f4d09edde4 Mon Sep 17 00:00:00 2001
+From: Wupeng Ma <mawupeng1@huawei.com>
+Date: Sat, 10 May 2025 11:30:40 +0800
+Subject: VMCI: fix race between vmci_host_setup_notify and vmci_ctx_unset_notify
+
+From: Wupeng Ma <mawupeng1@huawei.com>
+
+commit 1bd6406fb5f36c2bb1e96e27d4c3e9f4d09edde4 upstream.
+
+During our testing, it was found that a warning can be triggered in
+try_grab_folio as follows:
+
+ ------------[ cut here ]------------
+ WARNING: CPU: 0 PID: 1678 at mm/gup.c:147 try_grab_folio+0x106/0x130
+ Modules linked in:
+ CPU: 0 UID: 0 PID: 1678 Comm: syz.3.31 Not tainted 6.15.0-rc5 #163 PREEMPT(undef)
+ RIP: 0010:try_grab_folio+0x106/0x130
+ Call Trace:
+ <TASK>
+ follow_huge_pmd+0x240/0x8e0
+ follow_pmd_mask.constprop.0.isra.0+0x40b/0x5c0
+ follow_pud_mask.constprop.0.isra.0+0x14a/0x170
+ follow_page_mask+0x1c2/0x1f0
+ __get_user_pages+0x176/0x950
+ __gup_longterm_locked+0x15b/0x1060
+ ? gup_fast+0x120/0x1f0
+ gup_fast_fallback+0x17e/0x230
+ get_user_pages_fast+0x5f/0x80
+ vmci_host_unlocked_ioctl+0x21c/0xf80
+ RIP: 0033:0x54d2cd
+ ---[ end trace 0000000000000000 ]---
+
+Digging into the source, context->notify_page may be initialized by
+get_user_pages_fast and can be seen by vmci_ctx_unset_notify, which will
+try to put_page it. However, get_user_pages_fast has not finished at
+this point, which leads to the following try_grab_folio warning. The
+race condition is shown below:
+
+cpu0 cpu1
+vmci_host_do_set_notify
+vmci_host_setup_notify
+get_user_pages_fast(uva, 1, FOLL_WRITE, &context->notify_page);
+lockless_pages_from_mm
+gup_pgd_range
+gup_huge_pmd // update &context->notify_page
+                          vmci_host_do_set_notify
+                          vmci_ctx_unset_notify
+                          notify_page = context->notify_page;
+                          if (notify_page)
+                                  put_page(notify_page); // page is freed
+__gup_longterm_locked
+__get_user_pages
+follow_trans_huge_pmd
+try_grab_folio // warn here
+
+To solve this, use a local variable page so that notify_page only
+becomes visible after get_user_pages_fast has finished.
+
+Fixes: a1d88436d53a ("VMCI: Fix two UVA mapping bugs")
+Cc: stable <stable@kernel.org>
+Closes: https://lore.kernel.org/all/e91da589-ad57-3969-d979-879bbd10dddd@huawei.com/
+Signed-off-by: Wupeng Ma <mawupeng1@huawei.com>
+Link: https://lore.kernel.org/r/20250510033040.901582-1-mawupeng1@huawei.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/misc/vmw_vmci/vmci_host.c | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+--- a/drivers/misc/vmw_vmci/vmci_host.c
++++ b/drivers/misc/vmw_vmci/vmci_host.c
+@@ -227,6 +227,7 @@ static int drv_cp_harray_to_user(void __
+ static int vmci_host_setup_notify(struct vmci_ctx *context,
+ unsigned long uva)
+ {
++ struct page *page;
+ int retval;
+
+ if (context->notify_page) {
+@@ -243,13 +244,11 @@ static int vmci_host_setup_notify(struct
+ /*
+ * Lock physical page backing a given user VA.
+ */
+- retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &context->notify_page);
+- if (retval != 1) {
+- context->notify_page = NULL;
++ retval = get_user_pages_fast(uva, 1, FOLL_WRITE, &page);
++ if (retval != 1)
+ return VMCI_ERROR_GENERIC;
+- }
+- if (context->notify_page == NULL)
+- return VMCI_ERROR_UNAVAILABLE;
++
++ context->notify_page = page;
+
+ /*
+ * Map the locked page and set up notify pointer.
--- /dev/null
+From e34dbbc85d64af59176fe59fad7b4122f4330fe2 Mon Sep 17 00:00:00 2001
+From: "Xin Li (Intel)" <xin@zytor.com>
+Date: Mon, 9 Jun 2025 01:40:53 -0700
+Subject: x86/fred/signal: Prevent immediate repeat of single step trap on return from SIGTRAP handler
+
+From: Xin Li (Intel) <xin@zytor.com>
+
+commit e34dbbc85d64af59176fe59fad7b4122f4330fe2 upstream.
+
+Clear the software event flag in the augmented SS to prevent immediate
+repeat of single step trap on return from SIGTRAP handler if the trap
+flag (TF) is set without an external debugger attached.
+
+Following is a typical single-stepping flow for a user process:
+
+1) The user process is prepared for single-stepping by setting
+ RFLAGS.TF = 1.
+2) When any instruction in user space completes, a #DB is triggered.
+3) The kernel handles the #DB and returns to user space, invoking the
+ SIGTRAP handler with RFLAGS.TF = 0.
+4) After the SIGTRAP handler finishes, the user process performs a
+ sigreturn syscall, restoring the original state, including
+ RFLAGS.TF = 1.
+5) Goto step 2.
+
+According to the FRED specification:
+
+A) Bit 17 in the augmented SS is designated as the software event
+ flag, which is set to 1 for FRED event delivery of SYSCALL,
+ SYSENTER, or INT n.
+B) If bit 17 of the augmented SS is 1 and ERETU would result in
+ RFLAGS.TF = 1, a single-step trap will be pending upon completion
+ of ERETU.
+
+In step 4) above, the software event flag is set upon the sigreturn
+syscall, and its corresponding ERETU would restore RFLAGS.TF = 1.
+This combination causes a pending single-step trap upon completion of
+ERETU. Therefore, another #DB is triggered before any user space
+instruction is executed, which leads to an infinite loop in which the
+SIGTRAP handler keeps being invoked on the same user space IP.
+
+Fixes: 14619d912b65 ("x86/fred: FRED entry/exit and dispatch code")
+Suggested-by: H. Peter Anvin (Intel) <hpa@zytor.com>
+Signed-off-by: Xin Li (Intel) <xin@zytor.com>
+Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
+Tested-by: Sohil Mehta <sohil.mehta@intel.com>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/all/20250609084054.2083189-2-xin%40zytor.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/include/asm/sighandling.h | 22 ++++++++++++++++++++++
+ arch/x86/kernel/signal_32.c | 4 ++++
+ arch/x86/kernel/signal_64.c | 4 ++++
+ 3 files changed, 30 insertions(+)
+
+--- a/arch/x86/include/asm/sighandling.h
++++ b/arch/x86/include/asm/sighandling.h
+@@ -24,4 +24,26 @@ int ia32_setup_rt_frame(struct ksignal *
+ int x64_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+ int x32_setup_rt_frame(struct ksignal *ksig, struct pt_regs *regs);
+
++/*
++ * To prevent immediate repeat of single step trap on return from SIGTRAP
++ * handler if the trap flag (TF) is set without an external debugger attached,
++ * clear the software event flag in the augmented SS, ensuring no single-step
++ * trap is pending upon ERETU completion.
++ *
++ * Note, this function should be called in sigreturn() before the original
++ * state is restored to make sure the TF is read from the entry frame.
++ */
++static __always_inline void prevent_single_step_upon_eretu(struct pt_regs *regs)
++{
++ /*
++ * If the trap flag (TF) is set, i.e., the sigreturn() SYSCALL instruction
++ * is being single-stepped, do not clear the software event flag in the
++ * augmented SS, thus a debugger won't skip over the following instruction.
++ */
++#ifdef CONFIG_X86_FRED
++ if (!(regs->flags & X86_EFLAGS_TF))
++ regs->fred_ss.swevent = 0;
++#endif
++}
++
+ #endif /* _ASM_X86_SIGHANDLING_H */
+--- a/arch/x86/kernel/signal_32.c
++++ b/arch/x86/kernel/signal_32.c
+@@ -152,6 +152,8 @@ SYSCALL32_DEFINE0(sigreturn)
+ struct sigframe_ia32 __user *frame = (struct sigframe_ia32 __user *)(regs->sp-8);
+ sigset_t set;
+
++ prevent_single_step_upon_eretu(regs);
++
+ if (!access_ok(frame, sizeof(*frame)))
+ goto badframe;
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+@@ -175,6 +177,8 @@ SYSCALL32_DEFINE0(rt_sigreturn)
+ struct rt_sigframe_ia32 __user *frame;
+ sigset_t set;
+
++ prevent_single_step_upon_eretu(regs);
++
+ frame = (struct rt_sigframe_ia32 __user *)(regs->sp - 4);
+
+ if (!access_ok(frame, sizeof(*frame)))
+--- a/arch/x86/kernel/signal_64.c
++++ b/arch/x86/kernel/signal_64.c
+@@ -250,6 +250,8 @@ SYSCALL_DEFINE0(rt_sigreturn)
+ sigset_t set;
+ unsigned long uc_flags;
+
++ prevent_single_step_upon_eretu(regs);
++
+ frame = (struct rt_sigframe __user *)(regs->sp - sizeof(long));
+ if (!access_ok(frame, sizeof(*frame)))
+ goto badframe;
+@@ -366,6 +368,8 @@ COMPAT_SYSCALL_DEFINE0(x32_rt_sigreturn)
+ sigset_t set;
+ unsigned long uc_flags;
+
++ prevent_single_step_upon_eretu(regs);
++
+ frame = (struct rt_sigframe_x32 __user *)(regs->sp - 8);
+
+ if (!access_ok(frame, sizeof(*frame)))
--- /dev/null
+From 8b68e978718f14fdcb080c2a7791c52a0d09bc6d Mon Sep 17 00:00:00 2001
+From: Thomas Gleixner <tglx@linutronix.de>
+Date: Wed, 26 Feb 2025 16:01:57 +0100
+Subject: x86/iopl: Cure TIF_IO_BITMAP inconsistencies
+
+From: Thomas Gleixner <tglx@linutronix.de>
+
+commit 8b68e978718f14fdcb080c2a7791c52a0d09bc6d upstream.
+
+io_bitmap_exit() is invoked from exit_thread() when a task exits or
+when a fork fails. In the latter case exit_thread() cleans up
+resources which were allocated during fork().
+
+io_bitmap_exit() invokes task_update_io_bitmap(), which in turn ends up
+in tss_update_io_bitmap(). tss_update_io_bitmap() operates on the
+current task. If current has TIF_IO_BITMAP set, but no bitmap installed,
+tss_update_io_bitmap() crashes with a NULL pointer dereference.
+
+There are two issues, which lead to that problem:
+
+ 1) io_bitmap_exit() should not invoke task_update_io_bitmap() when
+    the task being cleaned up is not the current task. That's a clear
+    indicator of a cleanup after a failed fork().
+
+ 2) A task should not have TIF_IO_BITMAP set while having neither a
+    bitmap installed nor IOPL emulation level 3 activated.
+
+    This happens when a kernel thread is created in the context of a
+    user space thread which has TIF_IO_BITMAP set: the thread flags
+    are copied, but the IO bitmap pointer is cleared.
+
+ Other than in the failed fork() case this has no impact because
+ kernel threads including IO workers never return to user space and
+ therefore never invoke tss_update_io_bitmap().
+
+Cure this by adding the missing cleanups and checks:
+
+ 1) Prevent io_bitmap_exit() from invoking task_update_io_bitmap() if
+    the task being cleaned up is not the current task.
+
+ 2) Clear TIF_IO_BITMAP in copy_thread() unconditionally. For user
+ space forks it is set later, when the IO bitmap is inherited in
+ io_bitmap_share().
+
+For paranoia's sake, add a warning to tss_update_io_bitmap() to catch
+the case when that code is invoked with inconsistent state.
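+
+For reference, a hedged user space sketch of how a task acquires an I/O
+bitmap in the first place (x86 only, needs CAP_SYS_RAWIO; this illustrates
+the precondition described above, it is not the syzbot reproducer):
+
+	#include <stdio.h>
+	#include <sys/io.h>
+	#include <unistd.h>
+
+	int main(void)
+	{
+		/* ioperm() installs an I/O bitmap for the legacy serial port
+		 * range and sets TIF_IO_BITMAP on the calling task          */
+		if (ioperm(0x3f8, 8, 1)) {
+			perror("ioperm");
+			return 1;
+		}
+
+		/* each fork() copies the thread flags of this task; a fork()
+		 * that fails half-way runs exit_thread() -> io_bitmap_exit()
+		 * on a task that is not current - the case cured above      */
+		if (fork() == 0)
+			_exit(0);
+
+		return 0;
+	}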
+
+Fixes: ea5f1cd7ab49 ("x86/ioperm: Remove bitmap if all permissions dropped")
+Reported-by: syzbot+e2b1803445d236442e54@syzkaller.appspotmail.com
+Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
+Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
+Cc: stable@vger.kernel.org
+Link: https://lore.kernel.org/87wmdceom2.ffs@tglx
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/x86/kernel/ioport.c | 13 +++++++++----
+ arch/x86/kernel/process.c | 6 ++++++
+ 2 files changed, 15 insertions(+), 4 deletions(-)
+
+--- a/arch/x86/kernel/ioport.c
++++ b/arch/x86/kernel/ioport.c
+@@ -33,8 +33,9 @@ void io_bitmap_share(struct task_struct
+ set_tsk_thread_flag(tsk, TIF_IO_BITMAP);
+ }
+
+-static void task_update_io_bitmap(struct task_struct *tsk)
++static void task_update_io_bitmap(void)
+ {
++ struct task_struct *tsk = current;
+ struct thread_struct *t = &tsk->thread;
+
+ if (t->iopl_emul == 3 || t->io_bitmap) {
+@@ -54,7 +55,12 @@ void io_bitmap_exit(struct task_struct *
+ struct io_bitmap *iobm = tsk->thread.io_bitmap;
+
+ tsk->thread.io_bitmap = NULL;
+- task_update_io_bitmap(tsk);
++ /*
++ * Don't touch the TSS when invoked on a failed fork(). TSS
++ * reflects the state of @current and not the state of @tsk.
++ */
++ if (tsk == current)
++ task_update_io_bitmap();
+ if (iobm && refcount_dec_and_test(&iobm->refcnt))
+ kfree(iobm);
+ }
+@@ -192,8 +198,7 @@ SYSCALL_DEFINE1(iopl, unsigned int, leve
+ }
+
+ t->iopl_emul = level;
+- task_update_io_bitmap(current);
+-
++ task_update_io_bitmap();
+ return 0;
+ }
+
+--- a/arch/x86/kernel/process.c
++++ b/arch/x86/kernel/process.c
+@@ -180,6 +180,7 @@ int copy_thread(struct task_struct *p, c
+ frame->ret_addr = (unsigned long) ret_from_fork_asm;
+ p->thread.sp = (unsigned long) fork_frame;
+ p->thread.io_bitmap = NULL;
++ clear_tsk_thread_flag(p, TIF_IO_BITMAP);
+ p->thread.iopl_warn = 0;
+ memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
+
+@@ -468,6 +469,11 @@ void native_tss_update_io_bitmap(void)
+ } else {
+ struct io_bitmap *iobm = t->io_bitmap;
+
++ if (WARN_ON_ONCE(!iobm)) {
++ clear_thread_flag(TIF_IO_BITMAP);
++ native_tss_invalidate_io_bitmap();
++ }
++
+ /*
+ * Only copy bitmap data when the sequence number differs. The
+ * update time is accounted to the incoming task.
--- /dev/null
+From 7f9bbc1140ff8796230bc2634055763e271fd692 Mon Sep 17 00:00:00 2001
+From: Stefano Stabellini <stefano.stabellini@amd.com>
+Date: Mon, 12 May 2025 14:54:52 -0700
+Subject: xen/arm: call uaccess_ttbr0_enable for dm_op hypercall
+
+From: Stefano Stabellini <stefano.stabellini@amd.com>
+
+commit 7f9bbc1140ff8796230bc2634055763e271fd692 upstream.
+
+dm_op hypercalls might come from userspace and pass memory addresses as
+parameters. The memory addresses typically correspond to buffers
+allocated in userspace to hold extra hypercall parameters.
+
+On ARM, when CONFIG_ARM64_SW_TTBR0_PAN is enabled, they might not be
+accessible by Xen; as a result, ioreq hypercalls might fail. See the
+existing comment in arch/arm64/xen/hypercall.S regarding privcmd_call
+for reference.
+
+For privcmd_call, Linux calls uaccess_ttbr0_enable before issuing the
+hypercall thanks to commit 9cf09d68b89a. We need to do the same for
+dm_op. This resolves the problem.
+
+Cc: stable@kernel.org
+Fixes: 9cf09d68b89a ("arm64: xen: Enable user access before a privcmd hvc call")
+Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
+Reviewed-by: Juergen Gross <jgross@suse.com>
+Message-ID: <alpine.DEB.2.22.394.2505121446370.8380@ubuntu-linux-20-04-desktop>
+Signed-off-by: Juergen Gross <jgross@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arm64/xen/hypercall.S | 21 ++++++++++++++++++++-
+ 1 file changed, 20 insertions(+), 1 deletion(-)
+
+--- a/arch/arm64/xen/hypercall.S
++++ b/arch/arm64/xen/hypercall.S
+@@ -83,7 +83,26 @@ HYPERCALL3(vcpu_op);
+ HYPERCALL1(platform_op_raw);
+ HYPERCALL2(multicall);
+ HYPERCALL2(vm_assist);
+-HYPERCALL3(dm_op);
++
++SYM_FUNC_START(HYPERVISOR_dm_op)
++ mov x16, #__HYPERVISOR_dm_op; \
++ /*
++ * dm_op hypercalls are issued by the userspace. The kernel needs to
++ * enable access to TTBR0_EL1 as the hypervisor would issue stage 1
++ * translations to user memory via AT instructions. Since AT
++ * instructions are not affected by the PAN bit (ARMv8.1), we only
++ * need the explicit uaccess_enable/disable if the TTBR0 PAN emulation
++ * is enabled (it implies that hardware UAO and PAN disabled).
++ */
++ uaccess_ttbr0_enable x6, x7, x8
++ hvc XEN_IMM
++
++ /*
++ * Disable userspace access from kernel once the hyp call completed.
++ */
++ uaccess_ttbr0_disable x6, x7
++ ret
++SYM_FUNC_END(HYPERVISOR_dm_op);
+
+ SYM_FUNC_START(privcmd_call)
+ mov x16, x0
--- /dev/null
+From 23be716b1c4f3f3a6c00ee38d51a57ef7db9ef7d Mon Sep 17 00:00:00 2001
+From: Dave Chinner <dchinner@redhat.com>
+Date: Thu, 1 May 2025 09:27:24 +1000
+Subject: xfs: don't assume perags are initialised when trimming AGs
+
+From: Dave Chinner <dchinner@redhat.com>
+
+commit 23be716b1c4f3f3a6c00ee38d51a57ef7db9ef7d upstream.
+
+When running fstrim immediately after mounting a V4 filesystem,
+the fstrim fails to trim all the free space in the filesystem. It
+only trims the first extent in the by-size free space tree in each
+AG and then returns. If a second fstrim is then run, it runs
+correctly and the entire free space in the filesystem is iterated
+and discarded correctly.
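+
+The behaviour is easy to see with a small FITRIM sketch along these lines
+(illustrative only; the mount point is made up and it must be run as root
+against a freshly mounted V4 filesystem):
+
+	#include <fcntl.h>
+	#include <limits.h>
+	#include <linux/fs.h>
+	#include <stdio.h>
+	#include <string.h>
+	#include <sys/ioctl.h>
+
+	static unsigned long long run_fitrim(int fd)
+	{
+		struct fstrim_range range;
+
+		memset(&range, 0, sizeof(range));
+		range.len = ULLONG_MAX;		/* trim the whole filesystem */
+
+		if (ioctl(fd, FITRIM, &range) < 0) {
+			perror("FITRIM");
+			return 0;
+		}
+		return range.len;		/* bytes actually discarded */
+	}
+
+	int main(void)
+	{
+		int fd = open("/mnt/scratch", O_RDONLY | O_DIRECTORY);
+
+		if (fd < 0) {
+			perror("open");
+			return 1;
+		}
+		/* before this fix the first call only discards the first
+		 * by-size extent in each AG; the second call gets the rest */
+		printf("first  fstrim: %llu bytes\n", run_fitrim(fd));
+		printf("second fstrim: %llu bytes\n", run_fitrim(fd));
+		return 0;
+	}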
+
+The problem lies in the setup of the trim cursor - it assumes that
+pag->pagf_longest is valid without either reading the AGF first or
+checking if xfs_perag_initialised_agf(pag) is true or not.
+
+As a result, when a filesystem is mounted without reading the AGF
+(e.g. a clean mount on a v4 filesystem) and the first operation is a
+fstrim call, pag->pagf_longest is zero and so the free extent search
+starts at the wrong end of the by-size btree and exits after
+discarding the first record in the tree.
+
+Fix this by deferring the initialisation of tcur->count to after
+we have locked the AGF and guaranteed that the perag is properly
+initialised. We trigger this on tcur->count == 0 after locking the
+AGF, as this will only occur on the first call to
+xfs_trim_gather_extents() for each AG. If we need to iterate,
+tcur->count will be set to the length of the record we need to
+restart at, so we can use this to ensure we only sample a valid
+pag->pagf_longest value for the iteration.
+
+Signed-off-by: Dave Chinner <dchinner@redhat.com>
+Reviewed-by: Bill O'Donnell <bodonnel@redhat.com>
+Reviewed-by: Darrick J. Wong <djwong@kernel.org>
+Fixes: 89cfa899608f ("xfs: reduce AGF hold times during fstrim operations")
+Cc: <stable@vger.kernel.org> # v6.6
+Signed-off-by: Carlos Maiolino <cem@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/xfs/xfs_discard.c | 17 ++++++++++++++++-
+ 1 file changed, 16 insertions(+), 1 deletion(-)
+
+--- a/fs/xfs/xfs_discard.c
++++ b/fs/xfs/xfs_discard.c
+@@ -146,6 +146,14 @@ xfs_discard_extents(
+ return error;
+ }
+
++/*
++ * Care must be taken setting up the trim cursor as the perags may not have been
++ * initialised when the cursor is initialised. e.g. a clean mount which hasn't
++ * read in AGFs and the first operation run on the mounted fs is a trim. This
++ * can result in perag fields that aren't initialised until
++ * xfs_trim_gather_extents() calls xfs_alloc_read_agf() to lock down the AG for
++ * the free space search.
++ */
+ struct xfs_trim_cur {
+ xfs_agblock_t start;
+ xfs_extlen_t count;
+@@ -183,6 +191,14 @@ xfs_trim_gather_extents(
+ if (error)
+ goto out_trans_cancel;
+
++ /*
++ * First time through tcur->count will not have been initialised as
++ * pag->pagf_longest is not guaranteed to be valid before we read
++ * the AGF buffer above.
++ */
++ if (!tcur->count)
++ tcur->count = pag->pagf_longest;
++
+ if (tcur->by_bno) {
+ /* sub-AG discard request always starts at tcur->start */
+ cur = xfs_bnobt_init_cursor(mp, tp, agbp, pag);
+@@ -329,7 +345,6 @@ xfs_trim_perag_extents(
+ {
+ struct xfs_trim_cur tcur = {
+ .start = start,
+- .count = pag->pagf_longest,
+ .end = end,
+ .minlen = minlen,
+ };