--- /dev/null
+From 0f2dca516541032fe47a1236c852f58edc662795 Mon Sep 17 00:00:00 2001
+From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
+Date: Mon, 14 Aug 2023 15:41:00 +0100
+Subject: block: Remove special-casing of compound pages
+
+From: Matthew Wilcox (Oracle) <willy@infradead.org>
+
+commit 1b151e2435fc3a9b10c8946c6aebe9f3e1938c55 upstream.
+
+The special casing was originally added in pre-git history; reproducing
+the commit log here:
+
+> commit a318a92567d77
+> Author: Andrew Morton <akpm@osdl.org>
+> Date: Sun Sep 21 01:42:22 2003 -0700
+>
+> [PATCH] Speed up direct-io hugetlbpage handling
+>
+> This patch short-circuits all the direct-io page dirtying logic for
+> higher-order pages. Without this, we pointlessly bounce BIOs up to
+> keventd all the time.
+
+In the last twenty years, compound pages have come to be used for more
+than just hugetlb. Rewrite these functions to operate on folios instead
+of pages and remove the special case for hugetlbfs; I don't think
+it's needed any more (and if it is, we can put it back in as a call
+to folio_test_hugetlb()).
+
+This was found by inspection; as far as I can tell, this bug can lead
+to pages used as the destination of a direct I/O read not being marked
+as dirty. If those pages are then reclaimed by the MM without being
+dirtied for some other reason, they won't be written out. Then when
+they're faulted back in, they will not contain the data they should.
+It'll take a pretty unusual setup to produce this problem with several
+races all going the wrong way.
+
+This problem predates the folio work; it could for example have been
+triggered by mmaping a THP in tmpfs and using that as the target of an
+O_DIRECT read.
+
+Fixes: 800d8c63b2e98 ("shmem: add huge pages support")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ block/bio.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+--- a/block/bio.c
++++ b/block/bio.c
+@@ -884,7 +884,7 @@ void bio_release_pages(struct bio *bio,
+ return;
+
+ bio_for_each_segment_all(bvec, bio, iter_all) {
+- if (mark_dirty && !PageCompound(bvec->bv_page))
++ if (mark_dirty)
+ set_page_dirty_lock(bvec->bv_page);
+ put_page(bvec->bv_page);
+ }
+@@ -1691,8 +1691,7 @@ void bio_set_pages_dirty(struct bio *bio
+ struct bvec_iter_all iter_all;
+
+ bio_for_each_segment_all(bvec, bio, iter_all) {
+- if (!PageCompound(bvec->bv_page))
+- set_page_dirty_lock(bvec->bv_page);
++ set_page_dirty_lock(bvec->bv_page);
+ }
+ }
+
+@@ -1740,7 +1739,7 @@ void bio_check_pages_dirty(struct bio *b
+ struct bvec_iter_all iter_all;
+
+ bio_for_each_segment_all(bvec, bio, iter_all) {
+- if (!PageDirty(bvec->bv_page) && !PageCompound(bvec->bv_page))
++ if (!PageDirty(bvec->bv_page))
+ goto defer;
+ }
+
--- /dev/null
+From ac3f3b0a55518056bc80ed32a41931c99e1f7d81 Mon Sep 17 00:00:00 2001
+From: Charan Teja Kalla <quic_charante@quicinc.com>
+Date: Fri, 24 Nov 2023 16:27:25 +0530
+Subject: mm: page_alloc: unreserve highatomic page blocks before oom
+
+From: Charan Teja Kalla <quic_charante@quicinc.com>
+
+commit ac3f3b0a55518056bc80ed32a41931c99e1f7d81 upstream.
+
+__alloc_pages_direct_reclaim() is called from slowpath allocation where
+high atomic reserves can be unreserved after there is a progress in
+reclaim and yet no suitable page is found. Later should_reclaim_retry()
+gets called from slow path allocation to decide if the reclaim needs to be
+retried before OOM kill path is taken.
+
+should_reclaim_retry() checks the available memory (reclaimable + free
+pages) against the min wmark levels of a zone and returns:
+
+a) true, if it is above the min wmark so that slow path allocation will
+ do the reclaim retries.
+
+b) false, thus slowpath allocation takes oom kill path.
+
+should_reclaim_retry() can also unreserve the high atomic reserves **but
+only after all the reclaim retries are exhausted.**
+
+In a case where there is almost no reclaimable memory and the free pages
+consist mostly of the high atomic reserves, but the allocation context
+can't use these high atomic reserves, the available memory falls below
+the min wmark levels, hence false is returned from should_reclaim_retry(),
+leading the allocation request to take the OOM kill path. This can turn
+into an early oom kill if the high atomic reserves are holding a lot of
+free memory and unreserving them is not attempted.
+
+An (early) OOM is encountered on a VM with the below state:
+[ 295.998653] Normal free:7728kB boost:0kB min:804kB low:1004kB
+high:1204kB reserved_highatomic:8192KB active_anon:4kB inactive_anon:0kB
+active_file:24kB inactive_file:24kB unevictable:1220kB writepending:0kB
+present:70732kB managed:49224kB mlocked:0kB bounce:0kB free_pcp:688kB
+local_pcp:492kB free_cma:0kB
+[ 295.998656] lowmem_reserve[]: 0 32
+[ 295.998659] Normal: 508*4kB (UMEH) 241*8kB (UMEH) 143*16kB (UMEH)
+33*32kB (UH) 7*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB
+0*4096kB = 7752kB
+
+Per the above log, ~7MB of free memory exists in the high atomic reserves
+and is not freed up before falling back to the oom kill path.
+
+Fix it by trying to unreserve the high atomic reserves in
+should_reclaim_retry() before __alloc_pages_direct_reclaim() can fallback
+to oom kill path.
+
+Link: https://lkml.kernel.org/r/1700823445-27531-1-git-send-email-quic_charante@quicinc.com
+Fixes: 0aaa29a56e4f ("mm, page_alloc: reserve pageblocks for high-order atomic allocations on demand")
+Signed-off-by: Charan Teja Kalla <quic_charante@quicinc.com>
+Reported-by: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
+Suggested-by: Michal Hocko <mhocko@suse.com>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Acked-by: David Rientjes <rientjes@google.com>
+Cc: Chris Goldsworthy <quic_cgoldswo@quicinc.com>
+Cc: David Hildenbrand <david@redhat.com>
+Cc: Johannes Weiner <hannes@cmpxchg.org>
+Cc: Mel Gorman <mgorman@techsingularity.net>
+Cc: Pavankumar Kondeti <quic_pkondeti@quicinc.com>
+Cc: Vlastimil Babka <vbabka@suse.cz>
+Cc: Joakim Tjernlund <Joakim.Tjernlund@infinera.com>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/page_alloc.c | 16 ++++++++--------
+ 1 file changed, 8 insertions(+), 8 deletions(-)
+
+--- a/mm/page_alloc.c
++++ b/mm/page_alloc.c
+@@ -4335,14 +4335,9 @@ should_reclaim_retry(gfp_t gfp_mask, uns
+ else
+ (*no_progress_loops)++;
+
+- /*
+- * Make sure we converge to OOM if we cannot make any progress
+- * several times in the row.
+- */
+- if (*no_progress_loops > MAX_RECLAIM_RETRIES) {
+- /* Before OOM, exhaust highatomic_reserve */
+- return unreserve_highatomic_pageblock(ac, true);
+- }
++ if (*no_progress_loops > MAX_RECLAIM_RETRIES)
++ goto out;
++
+
+ /*
+ * Keep reclaiming pages while there is a chance this will lead
+@@ -4404,6 +4399,11 @@ out:
+ schedule_timeout_uninterruptible(1);
+ else
+ cond_resched();
++out:
++ /* Before OOM, exhaust highatomic_reserve */
++ if (!ret)
++ return unreserve_highatomic_pageblock(ac, true);
++
+ return ret;
+ }
+
--- /dev/null
+From jaimeliao.tw@gmail.com Fri Jan 26 16:34:34 2024
+From: Jaime Liao <jaimeliao.tw@gmail.com>
+Date: Thu, 25 Jan 2024 10:48:16 +0800
+Subject: mtd: spinand: macronix: Fix MX35LFxGE4AD page size
+To: miquel.raynal@bootlin.com
+Cc: jaimeliao@mxic.com.tw, stable@vger.kernel.org
+Message-ID: <20240125024816.222554-1-jaimeliao.tw@gmail.com>
+
+From: JaimeLiao <jaimeliao@mxic.com.tw>
+
+Support for MX35LF{2,4}GE4AD chips was added in mainline through
+upstream commit 5ece78de88739b4c68263e9f2582380c1fd8314f.
+
+The patch was later adapted to 5.4.y and backported through
+stable commit 85258ae3070848d9d0f6fbee385be2db80e8cf26.
+
+Fix the backport mentioned right above as it is wrong: the bigger chip
+features 4kiB pages and not 2kiB pages.
+
+Fixes: 85258ae30708 ("mtd: spinand: macronix: Add support for MX35LFxGE4AD")
+Cc: stable@vger.kernel.org # v5.4.y
+Cc: Miquel Raynal <miquel.raynal@bootlin.com>
+Signed-off-by: JaimeLiao <jaimeliao@mxic.com.tw>
+Acked-by: Miquel Raynal <miquel.raynal@bootlin.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/mtd/nand/spi/macronix.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/mtd/nand/spi/macronix.c
++++ b/drivers/mtd/nand/spi/macronix.c
+@@ -125,7 +125,7 @@ static const struct spinand_info macroni
+ SPINAND_HAS_QE_BIT,
+ SPINAND_ECCINFO(&mx35lfxge4ab_ooblayout, NULL)),
+ SPINAND_INFO("MX35LF4GE4AD", 0x37,
+- NAND_MEMORG(1, 2048, 128, 64, 2048, 40, 1, 1, 1),
++ NAND_MEMORG(1, 4096, 128, 64, 2048, 40, 1, 1, 1),
+ NAND_ECCREQ(8, 512),
+ SPINAND_INFO_OP_VARIANTS(&read_cache_variants,
+ &write_cache_variants,
nouveau-vmm-don-t-set-addr-on-the-fail-path-to-avoid-warning.patch
ubifs-ubifs_symlink-fix-memleak-of-inode-i_link-in-error-path.patch
rename-fix-the-locking-of-subdirectories.patch
+block-remove-special-casing-of-compound-pages.patch
+mm-page_alloc-unreserve-highatomic-page-blocks-before-oom.patch
+mtd-spinand-macronix-fix-mx35lfxge4ad-page-size.patch