From: Greg Kroah-Hartman
Date: Wed, 21 Feb 2024 07:29:45 +0000 (+0100)
Subject: drop readahead patch as requested
X-Git-Tag: v4.19.307~41
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=dd9ba08ab1df1e875e7d8424bdfded4d15682922;p=thirdparty%2Fkernel%2Fstable-queue.git

drop readahead patch as requested
---

diff --git a/queue-6.1/readahead-avoid-multiple-marked-readahead-pages.patch b/queue-6.1/readahead-avoid-multiple-marked-readahead-pages.patch
deleted file mode 100644
index 8bd8c1d9537..00000000000
--- a/queue-6.1/readahead-avoid-multiple-marked-readahead-pages.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 Mon Sep 17 00:00:00 2001
-From: Jan Kara
-Date: Thu, 4 Jan 2024 09:58:39 +0100
-Subject: readahead: avoid multiple marked readahead pages
-
-From: Jan Kara
-
-commit ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 upstream.
-
-ra_alloc_folio() marks a page that should trigger next round of async
-readahead. However it rounds up computed index to the order of page being
-allocated. This can however lead to multiple consecutive pages being
-marked with readahead flag. Consider situation with index == 1, mark ==
-1, order == 0. We insert order 0 page at index 1 and mark it. Then we
-bump order to 1, index to 2, mark (still == 1) is rounded up to 2 so page
-at index 2 is marked as well. Then we bump order to 2, index is
-incremented to 4, mark gets rounded to 4 so page at index 4 is marked as
-well. The fact that multiple pages get marked within a single readahead
-window confuses the readahead logic and results in readahead window being
-trimmed back to 1. This situation is triggered in particular when maximum
-readahead window size is not a power of two (in the observed case it was
-768 KB) and as a result sequential read throughput suffers.
-
-Fix the problem by rounding 'mark' down instead of up. Because the index
-is naturally aligned to 'order', we are guaranteed 'rounded mark' == index
-iff 'mark' is within the page we are allocating at 'index' and thus
-exactly one page is marked with readahead flag as required by the
-readahead code and sequential read performance is restored.
-
-This effectively reverts part of commit b9ff43dd2743 ("mm/readahead: Fix
-readahead with large folios"). The commit changed the rounding with the
-rationale:
-
-"... we were setting the readahead flag on the folio which contains the
-last byte read from the block. This is wrong because we will trigger
-readahead at the end of the read without waiting to see if a subsequent
-read is going to use the pages we just read."
-
-Although this is true, the fact is this was always the case with read
-sizes not aligned to folio boundaries and large folios in the page cache
-just make the situation more obvious (and frequent). Also for sequential
-read workloads it is better to trigger the readahead earlier rather than
-later. It is true that the difference in the rounding and thus earlier
-triggering of the readahead can result in reading more for semi-random
-workloads. However workloads really suffering from this seem to be rare.
-In particular I have verified that the workload described in commit
-b9ff43dd2743 ("mm/readahead: Fix readahead with large folios") of reading
-random 100k blocks from a file like:
-
-[reader]
-bs=100k
-rw=randread
-numjobs=1
-size=64g
-runtime=60s
-
-is not impacted by the rounding change and achieves ~70MB/s in both cases.
-
-[jack@suse.cz: fix one more place where mark rounding was done as well]
-  Link: https://lkml.kernel.org/r/20240123153254.5206-1-jack@suse.cz
-Link: https://lkml.kernel.org/r/20240104085839.21029-1-jack@suse.cz
-Fixes: b9ff43dd2743 ("mm/readahead: Fix readahead with large folios")
-Signed-off-by: Jan Kara
-Cc: Matthew Wilcox
-Cc: Guo Xuenan
-Cc:
-Signed-off-by: Andrew Morton
-Signed-off-by: Greg Kroah-Hartman
---
- mm/readahead.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
---- a/mm/readahead.c
-+++ b/mm/readahead.c
-@@ -483,7 +483,7 @@ static inline int ra_alloc_folio(struct
-
- if (!folio)
- return -ENOMEM;
-- mark = round_up(mark, 1UL << order);
-+ mark = round_down(mark, 1UL << order);
- if (index == mark)
- folio_set_readahead(folio);
- err = filemap_add_folio(ractl->mapping, folio, index, gfp);
-@@ -591,7 +591,7 @@ static void ondemand_readahead(struct re
- * It's the expected callback index, assume sequential access.
- * Ramp up sizes, and push forward the readahead window.
- */
-- expected = round_up(ra->start + ra->size - ra->async_size,
-+ expected = round_down(ra->start + ra->size - ra->async_size,
- 1UL << order);
- if (index == expected || index == (ra->start + ra->size)) {
- ra->start += ra->size;
diff --git a/queue-6.1/series b/queue-6.1/series
index 2a0440b33dc..a037b10a8df 100644
--- a/queue-6.1/series
+++ b/queue-6.1/series
@@ -33,7 +33,6 @@ i40e-do-not-allow-untrusted-vf-to-remove-administrat.patch
 i40e-fix-waiting-for-queues-of-all-vsis-to-be-disabl.patch
 scs-add-config_mmu-dependency-for-vfree_atomic.patch
 tracing-trigger-fix-to-return-error-if-failed-to-alloc-snapshot.patch
-readahead-avoid-multiple-marked-readahead-pages.patch
 mm-writeback-fix-possible-divide-by-zero-in-wb_dirty_limits-again.patch
 scsi-storvsc-fix-ring-buffer-size-calculation.patch
 dm-crypt-dm-verity-disable-tasklets.patch
diff --git a/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch b/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch
deleted file mode 100644
index f0f5a2f96d8..00000000000
--- a/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 Mon Sep 17 00:00:00 2001
-From: Jan Kara
-Date: Thu, 4 Jan 2024 09:58:39 +0100
-Subject: readahead: avoid multiple marked readahead pages
-
-From: Jan Kara
-
-commit ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 upstream.
-
-ra_alloc_folio() marks a page that should trigger next round of async
-readahead. However it rounds up computed index to the order of page being
-allocated. This can however lead to multiple consecutive pages being
-marked with readahead flag. Consider situation with index == 1, mark ==
-1, order == 0. We insert order 0 page at index 1 and mark it. Then we
-bump order to 1, index to 2, mark (still == 1) is rounded up to 2 so page
-at index 2 is marked as well. Then we bump order to 2, index is
-incremented to 4, mark gets rounded to 4 so page at index 4 is marked as
-well. The fact that multiple pages get marked within a single readahead
-window confuses the readahead logic and results in readahead window being
-trimmed back to 1. This situation is triggered in particular when maximum
-readahead window size is not a power of two (in the observed case it was
-768 KB) and as a result sequential read throughput suffers.
-
-Fix the problem by rounding 'mark' down instead of up. Because the index
-is naturally aligned to 'order', we are guaranteed 'rounded mark' == index
-iff 'mark' is within the page we are allocating at 'index' and thus
-exactly one page is marked with readahead flag as required by the
-readahead code and sequential read performance is restored.
-
-This effectively reverts part of commit b9ff43dd2743 ("mm/readahead: Fix
-readahead with large folios"). The commit changed the rounding with the
-rationale:
-
-"... we were setting the readahead flag on the folio which contains the
-last byte read from the block. This is wrong because we will trigger
-readahead at the end of the read without waiting to see if a subsequent
-read is going to use the pages we just read."
-
-Although this is true, the fact is this was always the case with read
-sizes not aligned to folio boundaries and large folios in the page cache
-just make the situation more obvious (and frequent). Also for sequential
-read workloads it is better to trigger the readahead earlier rather than
-later. It is true that the difference in the rounding and thus earlier
-triggering of the readahead can result in reading more for semi-random
-workloads. However workloads really suffering from this seem to be rare.
-In particular I have verified that the workload described in commit
-b9ff43dd2743 ("mm/readahead: Fix readahead with large folios") of reading
-random 100k blocks from a file like:
-
-[reader]
-bs=100k
-rw=randread
-numjobs=1
-size=64g
-runtime=60s
-
-is not impacted by the rounding change and achieves ~70MB/s in both cases.
-
-[jack@suse.cz: fix one more place where mark rounding was done as well]
-  Link: https://lkml.kernel.org/r/20240123153254.5206-1-jack@suse.cz
-Link: https://lkml.kernel.org/r/20240104085839.21029-1-jack@suse.cz
-Fixes: b9ff43dd2743 ("mm/readahead: Fix readahead with large folios")
-Signed-off-by: Jan Kara
-Cc: Matthew Wilcox
-Cc: Guo Xuenan
-Cc:
-Signed-off-by: Andrew Morton
-Signed-off-by: Greg Kroah-Hartman
---
- mm/readahead.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
---- a/mm/readahead.c
-+++ b/mm/readahead.c
-@@ -469,7 +469,7 @@ static inline int ra_alloc_folio(struct
-
- if (!folio)
- return -ENOMEM;
-- mark = round_up(mark, 1UL << order);
-+ mark = round_down(mark, 1UL << order);
- if (index == mark)
- folio_set_readahead(folio);
- err = filemap_add_folio(ractl->mapping, folio, index, gfp);
-@@ -577,7 +577,7 @@ static void ondemand_readahead(struct re
- * It's the expected callback index, assume sequential access.
- * Ramp up sizes, and push forward the readahead window.
- */
-- expected = round_up(ra->start + ra->size - ra->async_size,
-+ expected = round_down(ra->start + ra->size - ra->async_size,
- 1UL << order);
- if (index == expected || index == (ra->start + ra->size)) {
- ra->start += ra->size;
diff --git a/queue-6.6/series b/queue-6.6/series
index ac890054ae8..670429e01f2 100644
--- a/queue-6.6/series
+++ b/queue-6.6/series
@@ -57,7 +57,6 @@ selftests-mm-ksm_tests-should-only-madv_hugepage-valid-memory.patch
 scs-add-config_mmu-dependency-for-vfree_atomic.patch
 tracing-trigger-fix-to-return-error-if-failed-to-alloc-snapshot.patch
 selftests-mm-switch-to-bash-from-sh.patch
-readahead-avoid-multiple-marked-readahead-pages.patch
 mm-writeback-fix-possible-divide-by-zero-in-wb_dirty_limits-again.patch
 selftests-mm-update-va_high_addr_switch.sh-to-check-cpu-for-la57-flag.patch
 selftests-mm-fix-map_hugetlb-failure-on-64k-page-size-systems.patch
diff --git a/queue-6.7/readahead-avoid-multiple-marked-readahead-pages.patch b/queue-6.7/readahead-avoid-multiple-marked-readahead-pages.patch
deleted file mode 100644
index f0f5a2f96d8..00000000000
--- a/queue-6.7/readahead-avoid-multiple-marked-readahead-pages.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 Mon Sep 17 00:00:00 2001
-From: Jan Kara
-Date: Thu, 4 Jan 2024 09:58:39 +0100
-Subject: readahead: avoid multiple marked readahead pages
-
-From: Jan Kara
-
-commit ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 upstream.
-
-ra_alloc_folio() marks a page that should trigger next round of async
-readahead. However it rounds up computed index to the order of page being
-allocated. This can however lead to multiple consecutive pages being
-marked with readahead flag. Consider situation with index == 1, mark ==
-1, order == 0. We insert order 0 page at index 1 and mark it. Then we
-bump order to 1, index to 2, mark (still == 1) is rounded up to 2 so page
-at index 2 is marked as well. Then we bump order to 2, index is
-incremented to 4, mark gets rounded to 4 so page at index 4 is marked as
-well. The fact that multiple pages get marked within a single readahead
-window confuses the readahead logic and results in readahead window being
-trimmed back to 1. This situation is triggered in particular when maximum
-readahead window size is not a power of two (in the observed case it was
-768 KB) and as a result sequential read throughput suffers.
-
-Fix the problem by rounding 'mark' down instead of up. Because the index
-is naturally aligned to 'order', we are guaranteed 'rounded mark' == index
-iff 'mark' is within the page we are allocating at 'index' and thus
-exactly one page is marked with readahead flag as required by the
-readahead code and sequential read performance is restored.
-
-This effectively reverts part of commit b9ff43dd2743 ("mm/readahead: Fix
-readahead with large folios"). The commit changed the rounding with the
-rationale:
-
-"... we were setting the readahead flag on the folio which contains the
-last byte read from the block. This is wrong because we will trigger
-readahead at the end of the read without waiting to see if a subsequent
-read is going to use the pages we just read."
-
-Although this is true, the fact is this was always the case with read
-sizes not aligned to folio boundaries and large folios in the page cache
-just make the situation more obvious (and frequent). Also for sequential
-read workloads it is better to trigger the readahead earlier rather than
-later. It is true that the difference in the rounding and thus earlier
-triggering of the readahead can result in reading more for semi-random
-workloads. However workloads really suffering from this seem to be rare.
-In particular I have verified that the workload described in commit
-b9ff43dd2743 ("mm/readahead: Fix readahead with large folios") of reading
-random 100k blocks from a file like:
-
-[reader]
-bs=100k
-rw=randread
-numjobs=1
-size=64g
-runtime=60s
-
-is not impacted by the rounding change and achieves ~70MB/s in both cases.
-
-[jack@suse.cz: fix one more place where mark rounding was done as well]
-  Link: https://lkml.kernel.org/r/20240123153254.5206-1-jack@suse.cz
-Link: https://lkml.kernel.org/r/20240104085839.21029-1-jack@suse.cz
-Fixes: b9ff43dd2743 ("mm/readahead: Fix readahead with large folios")
-Signed-off-by: Jan Kara
-Cc: Matthew Wilcox
-Cc: Guo Xuenan
-Cc:
-Signed-off-by: Andrew Morton
-Signed-off-by: Greg Kroah-Hartman
---
- mm/readahead.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
---- a/mm/readahead.c
-+++ b/mm/readahead.c
-@@ -469,7 +469,7 @@ static inline int ra_alloc_folio(struct
-
- if (!folio)
- return -ENOMEM;
-- mark = round_up(mark, 1UL << order);
-+ mark = round_down(mark, 1UL << order);
- if (index == mark)
- folio_set_readahead(folio);
- err = filemap_add_folio(ractl->mapping, folio, index, gfp);
-@@ -577,7 +577,7 @@ static void ondemand_readahead(struct re
- * It's the expected callback index, assume sequential access.
- * Ramp up sizes, and push forward the readahead window.
- */
-- expected = round_up(ra->start + ra->size - ra->async_size,
-+ expected = round_down(ra->start + ra->size - ra->async_size,
- 1UL << order);
- if (index == expected || index == (ra->start + ra->size)) {
- ra->start += ra->size;
diff --git a/queue-6.7/series b/queue-6.7/series
index ce9bd1f29b3..c41196a60c8 100644
--- a/queue-6.7/series
+++ b/queue-6.7/series
@@ -67,7 +67,6 @@ scs-add-config_mmu-dependency-for-vfree_atomic.patch
 tracing-trigger-fix-to-return-error-if-failed-to-alloc-snapshot.patch
 fs-hugetlbfs-inode.c-mm-memory-failure.c-fix-hugetlbfs-hwpoison-handling.patch
 selftests-mm-switch-to-bash-from-sh.patch
-readahead-avoid-multiple-marked-readahead-pages.patch
 mm-writeback-fix-possible-divide-by-zero-in-wb_dirty_limits-again.patch
 selftests-mm-update-va_high_addr_switch.sh-to-check-cpu-for-la57-flag.patch
 selftests-mm-fix-map_hugetlb-failure-on-64k-page-size-systems.patch
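For reference, the dropped patch's commit message walks the failure case numerically: index == 1, mark == 1, order stepping 0, 1, 2. The short program below only replays that arithmetic; it is a stand-alone sketch rather than kernel code, the ROUND_UP()/ROUND_DOWN() macros are simplified stand-ins for the kernel's round_up()/round_down() helpers, and the loop merely mimics the index/order walk described in the message. It prints which page indices would receive the readahead mark under each rounding.

/*
 * Illustration only, not kernel code.  Replays the index/order walk
 * from the commit message (index == 1, mark == 1, order 0 -> 1 -> 2)
 * to show how rounding 'mark' up marks several pages in one readahead
 * window while rounding it down marks exactly one.
 */
#include <stdio.h>

#define ROUND_UP(x, y)   ((((x) + (y) - 1) / (y)) * (y))
#define ROUND_DOWN(x, y) (((x) / (y)) * (y))

static void walk(const char *name, int round_mark_up)
{
	unsigned long index = 1, mark = 1;	/* values from the commit message */
	unsigned int order;

	printf("%s:", name);
	for (order = 0; order <= 2; order++) {
		unsigned long nr = 1UL << order;	/* pages covered by this folio */
		unsigned long rounded = round_mark_up ? ROUND_UP(mark, nr)
						      : ROUND_DOWN(mark, nr);

		if (rounded == index)
			printf(" mark at index %lu (order %u)", index, order);
		index += nr;	/* next folio is allocated right after this one */
	}
	printf("\n");
}

int main(void)
{
	walk("round_up  ", 1);	/* marks indices 1, 2 and 4 */
	walk("round_down", 0);	/* marks only index 1 */
	return 0;
}

With round_up() the walk marks indices 1, 2 and 4 inside a single readahead window; with round_down() only index 1 is marked, which is the single trigger point the readahead ramp-up logic expects according to the commit message above.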