--- /dev/null
+From 60da7445a142bd15e67f3cda915497781c3f781f Mon Sep 17 00:00:00 2001
+From: Suren Baghdasaryan <surenb@google.com>
+Date: Fri, 29 Nov 2024 16:14:23 -0800
+Subject: alloc_tag: fix set_codetag_empty() when !CONFIG_MEM_ALLOC_PROFILING_DEBUG
+
+From: Suren Baghdasaryan <surenb@google.com>
+
+commit 60da7445a142bd15e67f3cda915497781c3f781f upstream.
+
+It was recently noticed that set_codetag_empty() might be used not only to
+mark NULL alloctag references as empty to avoid warnings but also to reset
+valid tags (in clear_page_tag_ref()). Since set_codetag_empty() is
+defined as NOOP for CONFIG_MEM_ALLOC_PROFILING_DEBUG=n, such use of
+set_codetag_empty() leads to subtle bugs. Fix set_codetag_empty() for
+CONFIG_MEM_ALLOC_PROFILING_DEBUG=n to reset the tag reference.
+
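+A minimal sketch of the failure mode (illustrative only, not taken
+verbatim from the tree this applies to):
+
+    union codetag_ref ref;      /* currently references a valid tag */
+
+    set_codetag_empty(&ref);    /* old DEBUG=n stub was a no-op: ref.ct kept */
+    /*
+     * A caller such as clear_page_tag_ref() then stores ref back, so the
+     * reference it meant to reset still points at the old tag.
+     */
+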
+Link: https://lkml.kernel.org/r/20241130001423.1114965-2-surenb@google.com
+Fixes: a8fc28dad6d5 ("alloc_tag: introduce clear_page_tag_ref() helper function")
+Signed-off-by: Suren Baghdasaryan <surenb@google.com>
+Reported-by: David Wang <00107082@163.com>
+Closes: https://lore.kernel.org/lkml/20241124074318.399027-1-00107082@163.com/
+Cc: David Wang <00107082@163.com>
+Cc: Kent Overstreet <kent.overstreet@linux.dev>
+Cc: Mike Rapoport (Microsoft) <rppt@kernel.org>
+Cc: Pasha Tatashin <pasha.tatashin@soleen.com>
+Cc: Sourav Panda <souravpanda@google.com>
+Cc: Yu Zhao <yuzhao@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ include/linux/alloc_tag.h | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/include/linux/alloc_tag.h
++++ b/include/linux/alloc_tag.h
+@@ -48,7 +48,12 @@ static inline void set_codetag_empty(uni
+ #else /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
+ static inline bool is_codetag_empty(union codetag_ref *ref) { return false; }
+-static inline void set_codetag_empty(union codetag_ref *ref) {}
++
++static inline void set_codetag_empty(union codetag_ref *ref)
++{
++ if (ref)
++ ref->ct = NULL;
++}
+
+ #endif /* CONFIG_MEM_ALLOC_PROFILING_DEBUG */
+
--- /dev/null
+From da4d8c83358163df9a4addaeba0ef8bcb03b22e8 Mon Sep 17 00:00:00 2001
+From: Davidlohr Bueso <dave@stgolabs.net>
+Date: Fri, 15 Nov 2024 09:00:32 -0800
+Subject: cxl/pci: Fix potential bogus return value upon successful probing
+
+From: Davidlohr Bueso <dave@stgolabs.net>
+
+commit da4d8c83358163df9a4addaeba0ef8bcb03b22e8 upstream.
+
+If cxl_pci_ras_unmask() returns non-zero, cxl_pci_probe() will end up
+returning that value, instead of zero.
+
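+In simplified form (not the literal function body), the problematic
+pattern is:
+
+    rc = cxl_pci_ras_unmask(pdev);  /* failure here only warrants a dev_dbg() */
+    if (rc)
+        dev_dbg(&pdev->dev, "No RAS reporting unmasked\n");
+
+    pci_save_state(pdev);
+
+    return rc;                      /* stale non-zero rc fails an otherwise
+                                       successful probe */
+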
+Fixes: 248529edc86f ("cxl: add RAS status unmasking for CXL")
+Reviewed-by: Fan Ni <fan.ni@samsung.com>
+Signed-off-by: Davidlohr Bueso <dave@stgolabs.net>
+Reviewed-by: Ira Weiny <ira.weiny@intel.com>
+Link: https://patch.msgid.link/20241115170032.108445-1-dave@stgolabs.net
+Signed-off-by: Dave Jiang <dave.jiang@intel.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/cxl/pci.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/drivers/cxl/pci.c b/drivers/cxl/pci.c
+index 0241d1d7133a..26ab06c9deff 100644
+--- a/drivers/cxl/pci.c
++++ b/drivers/cxl/pci.c
+@@ -1032,8 +1032,7 @@ static int cxl_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+ if (rc)
+ return rc;
+
+- rc = cxl_pci_ras_unmask(pdev);
+- if (rc)
++ if (cxl_pci_ras_unmask(pdev))
+ dev_dbg(&pdev->dev, "No RAS reporting unmasked\n");
+
+ pci_save_state(pdev);
+--
+2.47.1
+
--- /dev/null
+From faeec8e23c10bd30e8aa759a2eb3018dae00f924 Mon Sep 17 00:00:00 2001
+From: David Hildenbrand <david@redhat.com>
+Date: Tue, 10 Dec 2024 10:34:37 +0100
+Subject: mm/page_alloc: don't call pfn_to_page() on possibly non-existent PFN in split_large_buddy()
+
+From: David Hildenbrand <david@redhat.com>
+
+commit faeec8e23c10bd30e8aa759a2eb3018dae00f924 upstream.
+
+In split_large_buddy(), we might call pfn_to_page() on a PFN that might
+not exist. In corner cases, such as when freeing the highest pageblock in
+the last memory section, this could result, with CONFIG_SPARSEMEM &&
+!CONFIG_SPARSEMEM_EXTREME, in __pfn_to_section() returning NULL and
+__section_mem_map_addr() dereferencing that NULL pointer.
+
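+Put differently, the old loop translated the PFN of the *next* block
+before re-checking the loop condition (rough sketch of the previous
+code):
+
+    while (pfn != end) {
+        int mt = get_pfnblock_migratetype(page, pfn);
+
+        __free_one_page(page, pfn, zone, order, mt, fpi);
+        pfn += 1 << order;
+        page = pfn_to_page(pfn);    /* also runs once pfn == end, which may sit
+                                       just past the last present section */
+    }
+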
+Let's fix it, and avoid doing a pfn_to_page() call for the first
+iteration, where we already have the page.
+
+So far this was found by code inspection, but let's just CC stable as the
+fix is easy.
+
+Link: https://lkml.kernel.org/r/20241210093437.174413-1-david@redhat.com
+Fixes: fd919a85cd55 ("mm: page_isolation: prepare for hygienic freelists")
+Signed-off-by: David Hildenbrand <david@redhat.com>
+Reported-by: Vlastimil Babka <vbabka@suse.cz>
+Closes: https://lkml.kernel.org/r/e1a898ba-a717-4d20-9144-29df1a6c8813@suse.cz
+Reviewed-by: Vlastimil Babka <vbabka@suse.cz>
+Reviewed-by: Zi Yan <ziy@nvidia.com>
+Acked-by: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Yu Zhao <yuzhao@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/page_alloc.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/mm/page_alloc.c b/mm/page_alloc.c
+index 1cb4b8c8886d..cae7b93864c2 100644
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -1238,13 +1238,15 @@ static void split_large_buddy(struct zone *zone, struct page *page,
+ if (order > pageblock_order)
+ order = pageblock_order;
+
+- while (pfn != end) {
++ do {
+ int mt = get_pfnblock_migratetype(page, pfn);
+
+ __free_one_page(page, pfn, zone, order, mt, fpi);
+ pfn += 1 << order;
++ if (pfn == end)
++ break;
+ page = pfn_to_page(pfn);
+- }
++ } while (1);
+ }
+
+ static void free_one_page(struct zone *zone, struct page *page,
+--
+2.47.1
+
--- /dev/null
+From c58a812c8e49ad688f94f4b050ad5c5b388fc5d2 Mon Sep 17 00:00:00 2001
+From: Edward Adam Davis <eadavis@qq.com>
+Date: Wed, 18 Dec 2024 21:36:55 +0800
+Subject: ring-buffer: Fix overflow in __rb_map_vma
+
+From: Edward Adam Davis <eadavis@qq.com>
+
+commit c58a812c8e49ad688f94f4b050ad5c5b388fc5d2 upstream.
+
+An overflow occurred when performing the following calculation:
+
+ nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff;
+
+Add a check before the calculation to avoid this problem.
+
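+With the reproducer further below (assuming 4K pages, so subbuf_order is
+0 and the 1KB buffer leaves only a couple of sub-buffers), the numbers
+work out roughly as:
+
+    pgoff    = 5                             /* mmap offset of page_size * 5 */
+    nr_pages = ((nr_subbufs + 1) << 0) - 5   /* nr_subbufs + 1 is only ~3..4 */
+             -> wraps to a huge unsigned value
+
+so the later "nr_vma_pages > nr_pages" sanity check cannot reject the
+bogus offset and the sub-buffer lookup reads out of bounds.
+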
+syzbot reported this as a slab-out-of-bounds in __rb_map_vma:
+
+BUG: KASAN: slab-out-of-bounds in __rb_map_vma+0x9ab/0xae0 kernel/trace/ring_buffer.c:7058
+Read of size 8 at addr ffff8880767dd2b8 by task syz-executor187/5836
+
+CPU: 0 UID: 0 PID: 5836 Comm: syz-executor187 Not tainted 6.13.0-rc2-syzkaller-00159-gf932fb9b4074 #0
+Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/25/2024
+Call Trace:
+ <TASK>
+ __dump_stack lib/dump_stack.c:94 [inline]
+ dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
+ print_address_description mm/kasan/report.c:378 [inline]
+ print_report+0xc3/0x620 mm/kasan/report.c:489
+ kasan_report+0xd9/0x110 mm/kasan/report.c:602
+ __rb_map_vma+0x9ab/0xae0 kernel/trace/ring_buffer.c:7058
+ ring_buffer_map+0x56e/0x9b0 kernel/trace/ring_buffer.c:7138
+ tracing_buffers_mmap+0xa6/0x120 kernel/trace/trace.c:8482
+ call_mmap include/linux/fs.h:2183 [inline]
+ mmap_file mm/internal.h:124 [inline]
+ __mmap_new_file_vma mm/vma.c:2291 [inline]
+ __mmap_new_vma mm/vma.c:2355 [inline]
+ __mmap_region+0x1786/0x2670 mm/vma.c:2456
+ mmap_region+0x127/0x320 mm/mmap.c:1348
+ do_mmap+0xc00/0xfc0 mm/mmap.c:496
+ vm_mmap_pgoff+0x1ba/0x360 mm/util.c:580
+ ksys_mmap_pgoff+0x32c/0x5c0 mm/mmap.c:542
+ __do_sys_mmap arch/x86/kernel/sys_x86_64.c:89 [inline]
+ __se_sys_mmap arch/x86/kernel/sys_x86_64.c:82 [inline]
+ __x64_sys_mmap+0x125/0x190 arch/x86/kernel/sys_x86_64.c:82
+ do_syscall_x64 arch/x86/entry/common.c:52 [inline]
+ do_syscall_64+0xcd/0x250 arch/x86/entry/common.c:83
+ entry_SYSCALL_64_after_hwframe+0x77/0x7f
+
+The reproducer for this bug is:
+
+------------------------8<-------------------------
+ #include <fcntl.h>
+ #include <stdlib.h>
+ #include <unistd.h>
+ #include <asm/types.h>
+ #include <sys/mman.h>
+
+ int main(int argc, char **argv)
+ {
+ int page_size = getpagesize();
+ int fd;
+ void *meta;
+
+ system("echo 1 > /sys/kernel/tracing/buffer_size_kb");
+ fd = open("/sys/kernel/tracing/per_cpu/cpu0/trace_pipe_raw", O_RDONLY);
+
+ meta = mmap(NULL, page_size, PROT_READ, MAP_SHARED, fd, page_size * 5);
+ }
+------------------------>8-------------------------
+
+Cc: stable@vger.kernel.org
+Fixes: 117c39200d9d7 ("ring-buffer: Introducing ring-buffer mapping functions")
+Link: https://lore.kernel.org/tencent_06924B6674ED771167C23CC336C097223609@qq.com
+Reported-by: syzbot+345e4443a21200874b18@syzkaller.appspotmail.com
+Closes: https://syzkaller.appspot.com/bug?extid=345e4443a21200874b18
+Signed-off-by: Edward Adam Davis <eadavis@qq.com>
+Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/trace/ring_buffer.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
+index 7e257e855dd1..60210fb5b211 100644
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -7019,7 +7019,11 @@ static int __rb_map_vma(struct ring_buffer_per_cpu *cpu_buffer,
+ lockdep_assert_held(&cpu_buffer->mapping_lock);
+
+ nr_subbufs = cpu_buffer->nr_pages + 1; /* + reader-subbuf */
+- nr_pages = ((nr_subbufs + 1) << subbuf_order) - pgoff; /* + meta-page */
++ nr_pages = ((nr_subbufs + 1) << subbuf_order); /* + meta-page */
++ if (nr_pages <= pgoff)
++ return -EINVAL;
++
++ nr_pages -= pgoff;
+
+ nr_vma_pages = vma_pages(vma);
+ if (!nr_vma_pages || nr_vma_pages > nr_pages)
+--
+2.47.1
+
drm-amdgpu-nbio7.11-fix-ip-version-check.patch
drm-amdgpu-nbio7.7-fix-ip-version-check.patch
drm-amdgpu-smu14.0.2-fix-ip-version-check.patch
+zram-refuse-to-use-zero-sized-block-device-as-backing-device.patch
+zram-fix-uninitialized-zram-not-releasing-backing-device.patch
+vmalloc-fix-accounting-with-i915.patch
+mm-page_alloc-don-t-call-pfn_to_page-on-possibly-non-existent-pfn-in-split_large_buddy.patch
+ring-buffer-fix-overflow-in-__rb_map_vma.patch
+alloc_tag-fix-set_codetag_empty-when-config_mem_alloc_profiling_debug.patch
+cxl-pci-fix-potential-bogus-return-value-upon-successful-probing.patch
--- /dev/null
+From a2e740e216f5bf49ccb83b6d490c72a340558a43 Mon Sep 17 00:00:00 2001
+From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+Date: Wed, 11 Dec 2024 20:25:37 +0000
+Subject: vmalloc: fix accounting with i915
+
+From: Matthew Wilcox (Oracle) <willy@infradead.org>
+
+commit a2e740e216f5bf49ccb83b6d490c72a340558a43 upstream.
+
+If the caller of vmap() specifies VM_MAP_PUT_PAGES (currently only the
+i915 driver), we will decrement nr_vmalloc_pages and MEMCG_VMALLOC in
+vfree(). These counters are incremented by vmalloc() but not by vmap() so
+this will cause an underflow. Check the VM_MAP_PUT_PAGES flag before
+decrementing either counter.
+
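+A minimal sketch of the imbalance (illustrative; i915 is currently the
+only in-tree VM_MAP_PUT_PAGES user):
+
+    /*
+     * pages[] were allocated by the caller, not by vmalloc(), so
+     * nr_vmalloc_pages and MEMCG_VMALLOC were never incremented...
+     */
+    void *addr = vmap(pages, nr, VM_MAP | VM_MAP_PUT_PAGES, PAGE_KERNEL);
+
+    /*
+     * ...yet vfree() used to decrement both, driving the counters below
+     * their true values.
+     */
+    vfree(addr);
+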
+Link: https://lkml.kernel.org/r/20241211202538.168311-1-willy@infradead.org
+Fixes: b944afc9d64d ("mm: add a VM_MAP_PUT_PAGES flag for vmap")
+Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
+Acked-by: Johannes Weiner <hannes@cmpxchg.org>
+Reviewed-by: Shakeel Butt <shakeel.butt@linux.dev>
+Reviewed-by: Balbir Singh <balbirs@nvidia.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Muchun Song <muchun.song@linux.dev>
+Cc: Roman Gushchin <roman.gushchin@linux.dev>
+Cc: "Uladzislau Rezki (Sony)" <urezki@gmail.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/vmalloc.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/mm/vmalloc.c
++++ b/mm/vmalloc.c
+@@ -3369,7 +3369,8 @@ void vfree(const void *addr)
+ struct page *page = vm->pages[i];
+
+ BUG_ON(!page);
+- mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ mod_memcg_page_state(page, MEMCG_VMALLOC, -1);
+ /*
+ * High-order allocs for huge vmallocs are split, so
+ * can be freed as an array of order-0 allocations
+@@ -3377,7 +3378,8 @@ void vfree(const void *addr)
+ __free_page(page);
+ cond_resched();
+ }
+- atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
++ if (!(vm->flags & VM_MAP_PUT_PAGES))
++ atomic_long_sub(vm->nr_pages, &nr_vmalloc_pages);
+ kvfree(vm->pages);
+ kfree(vm);
+ }
--- /dev/null
+From 74363ec674cb172d8856de25776c8f3103f05e2f Mon Sep 17 00:00:00 2001
+From: Kairui Song <kasong@tencent.com>
+Date: Tue, 10 Dec 2024 00:57:16 +0800
+Subject: zram: fix uninitialized ZRAM not releasing backing device
+
+From: Kairui Song <kasong@tencent.com>
+
+commit 74363ec674cb172d8856de25776c8f3103f05e2f upstream.
+
+Setting the backing device is done before ZRAM initialization. If we set
+the backing device and then remove the ZRAM module without initializing
+the device, the backing device reference is leaked and the device is held
+forever.
+
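+Roughly (simplified; exact bodies differ across kernel versions), the
+reset path used to short-circuit like this:
+
+    static void zram_reset_device(struct zram *zram)
+    {
+        down_write(&zram->init_lock);
+        ...
+        if (!init_done(zram)) {
+            up_write(&zram->init_lock);
+            return;             /* bails out before reset_bdev(), so the
+                                   backing device reference is never dropped */
+        }
+        ...
+        reset_bdev(zram);
+        ...
+    }
+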
+Fix this by always resetting the ZRAM device fully on rmmod or reset store.
+
+Link: https://lkml.kernel.org/r/20241209165717.94215-3-ryncsn@gmail.com
+Fixes: 013bf95a83ec ("zram: add interface to specif backing device")
+Signed-off-by: Kairui Song <kasong@tencent.com>
+Reported-by: Desheng Wu <deshengwu@tencent.com>
+Suggested-by: Sergey Senozhatsky <senozhatsky@chromium.org>
+Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/block/zram/zram_drv.c | 9 ++++-----
+ 1 file changed, 4 insertions(+), 5 deletions(-)
+
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -1325,12 +1325,16 @@ static void zram_meta_free(struct zram *
+ size_t num_pages = disksize >> PAGE_SHIFT;
+ size_t index;
+
++ if (!zram->table)
++ return;
++
+ /* Free all pages that are still in this zram device */
+ for (index = 0; index < num_pages; index++)
+ zram_free_page(zram, index);
+
+ zs_destroy_pool(zram->mem_pool);
+ vfree(zram->table);
++ zram->table = NULL;
+ }
+
+ static bool zram_meta_alloc(struct zram *zram, u64 disksize)
+@@ -2171,11 +2175,6 @@ static void zram_reset_device(struct zra
+
+ zram->limit_pages = 0;
+
+- if (!init_done(zram)) {
+- up_write(&zram->init_lock);
+- return;
+- }
+-
+ set_capacity_and_notify(zram->disk, 0);
+ part_stat_set_all(zram->disk->part0, 0);
+
--- /dev/null
+From be48c412f6ebf38849213c19547bc6d5b692b5e5 Mon Sep 17 00:00:00 2001
+From: Kairui Song <kasong@tencent.com>
+Date: Tue, 10 Dec 2024 00:57:15 +0800
+Subject: zram: refuse to use zero sized block device as backing device
+
+From: Kairui Song <kasong@tencent.com>
+
+commit be48c412f6ebf38849213c19547bc6d5b692b5e5 upstream.
+
+Patch series "zram: fix backing device setup issue", v2.
+
+This series fixes two bugs in backing device setup:
+
+- ZRAM should reject using a zero sized device (or the uninitialized
+  ZRAM device itself) as the backing device.
+- Fix the backing device leak when removing an uninitialized ZRAM
+  device.
+
+
+This patch (of 2):
+
+Setting a zero sized block device as the backing device is pointless, and
+one can easily create a recursive loop by setting the uninitialized ZRAM
+device itself as its own backing device (while zram0 is uninitialized):
+
+ echo /dev/zram0 > /sys/block/zram0/backing_dev
+
+This is clearly a wrong configuration, and the module will pin itself;
+the kernel should refuse to do so in the first place.
+
+By refusing to use a zero sized device we avoid misuse cases, including
+the one above.
+
+Link: https://lkml.kernel.org/r/20241209165717.94215-1-ryncsn@gmail.com
+Link: https://lkml.kernel.org/r/20241209165717.94215-2-ryncsn@gmail.com
+Fixes: 013bf95a83ec ("zram: add interface to specif backing device")
+Signed-off-by: Kairui Song <kasong@tencent.com>
+Reported-by: Desheng Wu <deshengwu@tencent.com>
+Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/block/zram/zram_drv.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -524,6 +524,12 @@ static ssize_t backing_dev_store(struct
+ }
+
+ nr_pages = i_size_read(inode) >> PAGE_SHIFT;
++ /* Refuse to use zero sized device (also prevents self reference) */
++ if (!nr_pages) {
++ err = -EINVAL;
++ goto out;
++ }
++
+ bitmap_sz = BITS_TO_LONGS(nr_pages) * sizeof(long);
+ bitmap = kvzalloc(bitmap_sz, GFP_KERNEL);
+ if (!bitmap) {