From: Sasha Levin
Date: Sun, 10 Mar 2024 02:31:47 +0000 (-0500)
Subject: Fixes for 6.6
X-Git-Tag: v6.8.1~33
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=7ae1f84062a34974e6383e388e264d26fa0bab00;p=thirdparty%2Fkernel%2Fstable-queue.git

Fixes for 6.6

Signed-off-by: Sasha Levin
---

diff --git a/queue-6.6/drm-bridge-properly-refcount-dt-nodes-in-aux-bridge-.patch b/queue-6.6/drm-bridge-properly-refcount-dt-nodes-in-aux-bridge-.patch
new file mode 100644
index 00000000000..f938f4c5d87
--- /dev/null
+++ b/queue-6.6/drm-bridge-properly-refcount-dt-nodes-in-aux-bridge-.patch
@@ -0,0 +1,68 @@
+From 09b2aa52ca01bb74251a0afe1bac930edb534b0a Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 17 Dec 2023 01:59:10 +0200
+Subject: drm/bridge: properly refcount DT nodes in aux bridge drivers
+
+From: Dmitry Baryshkov
+
+[ Upstream commit 6914968a0b52507bf19d85e5fb9e35272e17cd35 ]
+
+The aux-bridge and aux-hpd-bridge drivers didn't call of_node_get() on
+the device nodes further used for dev->of_node and platform data. When
+bridge devices are released, the reference counts are decreased,
+resulting in refcount underflow / use-after-free warnings. Get the
+corresponding refcounts during AUX bridge allocation.
+
+Reported-by: Luca Weiss
+Fixes: 2a04739139b2 ("drm/bridge: add transparent bridge helper")
+Fixes: 26f4bac3d884 ("drm/bridge: aux-hpd: Replace of_device.h with explicit include")
+Reviewed-by: Neil Armstrong
+Signed-off-by: Dmitry Baryshkov
+Link: https://patchwork.freedesktop.org/patch/msgid/20231216235910.911958-1-dmitry.baryshkov@linaro.org
+Signed-off-by: Dmitry Baryshkov
+Signed-off-by: Sasha Levin
+---
+ drivers/gpu/drm/bridge/aux-bridge.c     | 3 ++-
+ drivers/gpu/drm/bridge/aux-hpd-bridge.c | 4 ++--
+ 2 files changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/drivers/gpu/drm/bridge/aux-bridge.c b/drivers/gpu/drm/bridge/aux-bridge.c
+index 49d7c2ab1ecc3..b29980f95379e 100644
+--- a/drivers/gpu/drm/bridge/aux-bridge.c
++++ b/drivers/gpu/drm/bridge/aux-bridge.c
+@@ -6,6 +6,7 @@
+  */
+ #include <linux/auxiliary_bus.h>
+ #include <linux/module.h>
++#include <linux/of.h>
+ 
+ #include <drm/drm_bridge.h>
+ #include <drm/drm_aux_bridge.h>
+@@ -57,7 +58,7 @@ int drm_aux_bridge_register(struct device *parent)
+ 	adev->id = ret;
+ 	adev->name = "aux_bridge";
+ 	adev->dev.parent = parent;
+-	adev->dev.of_node = parent->of_node;
++	adev->dev.of_node = of_node_get(parent->of_node);
+ 	adev->dev.release = drm_aux_bridge_release;
+ 
+ 	ret = auxiliary_device_init(adev);
+diff --git a/drivers/gpu/drm/bridge/aux-hpd-bridge.c b/drivers/gpu/drm/bridge/aux-hpd-bridge.c
+index 44bb771211b82..a24b6613cc02d 100644
+--- a/drivers/gpu/drm/bridge/aux-hpd-bridge.c
++++ b/drivers/gpu/drm/bridge/aux-hpd-bridge.c
+@@ -63,9 +63,9 @@ struct auxiliary_device *devm_drm_dp_hpd_bridge_alloc(struct device *parent, str
+ 	adev->id = ret;
+ 	adev->name = "dp_hpd_bridge";
+ 	adev->dev.parent = parent;
+-	adev->dev.of_node = parent->of_node;
++	adev->dev.of_node = of_node_get(parent->of_node);
+ 	adev->dev.release = drm_aux_hpd_bridge_release;
+-	adev->dev.platform_data = np;
++	adev->dev.platform_data = of_node_get(np);
+ 
+ 	ret = auxiliary_device_init(adev);
+ 	if (ret) {
+--
+2.43.0
+
diff --git a/queue-6.6/drm-bridge-return-null-instead-of-plain-0-in-drm_dp_.patch b/queue-6.6/drm-bridge-return-null-instead-of-plain-0-in-drm_dp_.patch
new file mode 100644
index 00000000000..48e0779e1bd
--- /dev/null
+++ b/queue-6.6/drm-bridge-return-null-instead-of-plain-0-in-drm_dp_.patch
@@ -0,0 +1,46 @@
+From 07ea395c1063726b3e679d2830eae8eee3186278 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 5 Dec 2023 13:13:36 -0700
+Subject: drm/bridge: Return NULL instead of plain 0 in
+ drm_dp_hpd_bridge_register() stub
+
+From: Nathan Chancellor
+
+[ Upstream commit 812cc1da7ffd9e178ef66b8a22113be10fba466c ]
+
+sparse complains:
+
+  drivers/usb/typec/tcpm/qcom/qcom_pmic_typec.c: note: in included file:
+  include/drm/bridge/aux-bridge.h:29:16: sparse: sparse: Using plain integer as NULL pointer
+
+Return NULL to clear up the warning.
+
+Reported-by: kernel test robot
+Closes: https://lore.kernel.org/oe-kbuild-all/202312060025.BdeqZrWx-lkp@intel.com/
+Fixes: e560518a6c2e ("drm/bridge: implement generic DP HPD bridge")
+Signed-off-by: Nathan Chancellor
+Reviewed-by: Bryan O'Donoghue
+Reviewed-by: Guenter Roeck
+Signed-off-by: Dmitry Baryshkov
+Link: https://patchwork.freedesktop.org/patch/msgid/20231205-drm_aux_bridge-fixes-v1-3-d242a0ae9df4@kernel.org
+Signed-off-by: Sasha Levin
+---
+ include/drm/bridge/aux-bridge.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/include/drm/bridge/aux-bridge.h b/include/drm/bridge/aux-bridge.h
+index 874f177381e34..4453906105ca1 100644
+--- a/include/drm/bridge/aux-bridge.h
++++ b/include/drm/bridge/aux-bridge.h
+@@ -41,7 +41,7 @@ static inline int devm_drm_dp_hpd_bridge_add(struct auxiliary_device *adev)
+ static inline struct device *drm_dp_hpd_bridge_register(struct device *parent,
+ 							 struct device_node *np)
+ {
+-	return 0;
++	return NULL;
+ }
+ 
+ static inline void drm_aux_hpd_bridge_notify(struct device *dev, enum drm_connector_status status)
+--
+2.43.0
+
diff --git a/queue-6.6/exit-wait_task_zombie-kill-the-no-longer-necessary-s.patch b/queue-6.6/exit-wait_task_zombie-kill-the-no-longer-necessary-s.patch
new file mode 100644
index 00000000000..54f7357f3dc
--- /dev/null
+++ b/queue-6.6/exit-wait_task_zombie-kill-the-no-longer-necessary-s.patch
@@ -0,0 +1,65 @@
+From 89169857789368f0471c81a3509f2412f7a7ce52 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Tue, 23 Jan 2024 16:34:00 +0100
+Subject: exit: wait_task_zombie: kill the no longer necessary
+ spin_lock_irq(siglock)
+
+From: Oleg Nesterov
+
+[ Upstream commit c1be35a16b2f1fe21f4f26f9de030ad6eaaf6a25 ]
+
+After the recent changes nobody uses siglock to read the values protected
+by stats_lock, so we can kill spin_lock_irq(&current->sighand->siglock) and
+update the comment.
+
+With this patch only __exit_signal() and thread_group_start_cputime() take
+stats_lock under siglock.
+
+Link: https://lkml.kernel.org/r/20240123153359.GA21866@redhat.com
+Signed-off-by: Oleg Nesterov
+Signed-off-by: Dylan Hatch
+Cc: Eric W. Biederman
+Cc:
+Signed-off-by: Andrew Morton
+Signed-off-by: Sasha Levin
+---
+ kernel/exit.c | 10 +++-------
+ 1 file changed, 3 insertions(+), 7 deletions(-)
+
+diff --git a/kernel/exit.c b/kernel/exit.c
+index 21a59a6e1f2e8..1867d420c36c4 100644
+--- a/kernel/exit.c
++++ b/kernel/exit.c
+@@ -1148,17 +1148,14 @@ static int wait_task_zombie(struct wait_opts *wo, struct task_struct *p)
+ 		 * and nobody can change them.
+ 		 *
+ 		 * psig->stats_lock also protects us from our sub-threads
+-		 * which can reap other children at the same time. Until
+-		 * we change k_getrusage()-like users to rely on this lock
+-		 * we have to take ->siglock as well.
++		 * which can reap other children at the same time.
+ 		 *
+ 		 * We use thread_group_cputime_adjusted() to get times for
+ 		 * the thread group, which consolidates times for all threads
+ 		 * in the group including the group leader.
+ 		 */
+ 		thread_group_cputime_adjusted(p, &tgutime, &tgstime);
+-		spin_lock_irq(&current->sighand->siglock);
+-		write_seqlock(&psig->stats_lock);
++		write_seqlock_irq(&psig->stats_lock);
+ 		psig->cutime += tgutime + sig->cutime;
+ 		psig->cstime += tgstime + sig->cstime;
+ 		psig->cgtime += task_gtime(p) + sig->gtime + sig->cgtime;
+@@ -1181,8 +1178,7 @@ static int wait_task_zombie(struct wait_opts *wo, struct task_struct *p)
+ 		psig->cmaxrss = maxrss;
+ 		task_io_accounting_add(&psig->ioac, &p->ioac);
+ 		task_io_accounting_add(&psig->ioac, &sig->ioac);
+-		write_sequnlock(&psig->stats_lock);
+-		spin_unlock_irq(&current->sighand->siglock);
++		write_sequnlock_irq(&psig->stats_lock);
+ 	}
+ 
+ 	if (wo->wo_rusage)
+--
+2.43.0
+
diff --git a/queue-6.6/kvm-s390-add-stat-counter-for-shadow-gmap-events.patch b/queue-6.6/kvm-s390-add-stat-counter-for-shadow-gmap-events.patch
new file mode 100644
index 00000000000..762570587ba
--- /dev/null
+++ b/queue-6.6/kvm-s390-add-stat-counter-for-shadow-gmap-events.patch
@@ -0,0 +1,168 @@
+From 2c2407820116c4ec83555123df8b2ecc4dbcc26b Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 9 Oct 2023 11:32:52 +0200
+Subject: KVM: s390: add stat counter for shadow gmap events
+
+From: Nico Boehr
+
+[ Upstream commit c3235e2dd6956448a562d6b1112205eeebc8ab43 ]
+
+The shadow gmap tracks memory of nested guests (guest-3). In certain
+scenarios, the shadow gmap needs to be rebuilt, which is a costly operation
+since it involves a SIE exit into guest-1 for every entry in the respective
+shadow level.
+
+Add kvm stat counters when new shadow structures are created at various
+levels. Also add a counter gmap_shadow_create when a completely fresh
+shadow gmap is created as well as a counter gmap_shadow_reuse when an
+existing gmap is being reused.
+
+Note that when several levels are shadowed at once, counters on all
+affected levels will be increased.
+
+Also note that not all page table levels need to be present and an ASCE
+can directly point to e.g. a segment table. In this case, a new segment
+table will always be equivalent to a new shadow gmap and hence will be
+counted as gmap_shadow_create and not as gmap_shadow_segment.
+
+Signed-off-by: Nico Boehr
+Reviewed-by: David Hildenbrand
+Reviewed-by: Claudio Imbrenda
+Reviewed-by: Janosch Frank
+Signed-off-by: Janosch Frank
+Link: https://lore.kernel.org/r/20231009093304.2555344-2-nrb@linux.ibm.com
+Message-Id: <20231009093304.2555344-2-nrb@linux.ibm.com>
+Stable-dep-of: fe752331d4b3 ("KVM: s390: vsie: fix race during shadow creation")
+Signed-off-by: Sasha Levin
+---
+ arch/s390/include/asm/kvm_host.h | 7 +++++++
+ arch/s390/kvm/gaccess.c          | 7 +++++++
+ arch/s390/kvm/kvm-s390.c         | 9 ++++++++-
+ arch/s390/kvm/vsie.c             | 5 ++++-
+ 4 files changed, 26 insertions(+), 2 deletions(-)
+
+diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
+index 427f9528a7b69..67a298b6cf6e9 100644
+--- a/arch/s390/include/asm/kvm_host.h
++++ b/arch/s390/include/asm/kvm_host.h
+@@ -777,6 +777,13 @@ struct kvm_vm_stat {
+ 	u64 inject_service_signal;
+ 	u64 inject_virtio;
+ 	u64 aen_forward;
++	u64 gmap_shadow_create;
++	u64 gmap_shadow_reuse;
++	u64 gmap_shadow_r1_entry;
++	u64 gmap_shadow_r2_entry;
++	u64 gmap_shadow_r3_entry;
++	u64 gmap_shadow_sg_entry;
++	u64 gmap_shadow_pg_entry;
+ };
+ 
+ struct kvm_arch_memory_slot {
+diff --git a/arch/s390/kvm/gaccess.c b/arch/s390/kvm/gaccess.c
+index 6d6bc19b37dcb..ff8349d17b331 100644
+--- a/arch/s390/kvm/gaccess.c
++++ b/arch/s390/kvm/gaccess.c
+@@ -1382,6 +1382,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 				  unsigned long *pgt, int *dat_protection,
+ 				  int *fake)
+ {
++	struct kvm *kvm;
+ 	struct gmap *parent;
+ 	union asce asce;
+ 	union vaddress vaddr;
+@@ -1390,6 +1391,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 
+ 	*fake = 0;
+ 	*dat_protection = 0;
++	kvm = sg->private;
+ 	parent = sg->parent;
+ 	vaddr.addr = saddr;
+ 	asce.val = sg->orig_asce;
+@@ -1450,6 +1452,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 		rc = gmap_shadow_r2t(sg, saddr, rfte.val, *fake);
+ 		if (rc)
+ 			return rc;
++		kvm->stat.gmap_shadow_r1_entry++;
+ 	}
+ 		fallthrough;
+ 	case ASCE_TYPE_REGION2: {
+@@ -1478,6 +1481,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 		rc = gmap_shadow_r3t(sg, saddr, rste.val, *fake);
+ 		if (rc)
+ 			return rc;
++		kvm->stat.gmap_shadow_r2_entry++;
+ 	}
+ 		fallthrough;
+ 	case ASCE_TYPE_REGION3: {
+@@ -1515,6 +1519,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 		rc = gmap_shadow_sgt(sg, saddr, rtte.val, *fake);
+ 		if (rc)
+ 			return rc;
++		kvm->stat.gmap_shadow_r3_entry++;
+ 	}
+ 		fallthrough;
+ 	case ASCE_TYPE_SEGMENT: {
+@@ -1548,6 +1553,7 @@ static int kvm_s390_shadow_tables(struct gmap *sg, unsigned long saddr,
+ 		rc = gmap_shadow_pgt(sg, saddr, ste.val, *fake);
+ 		if (rc)
+ 			return rc;
++		kvm->stat.gmap_shadow_sg_entry++;
+ 	}
+ 	}
+ 	/* Return the parent address of the page table */
+@@ -1618,6 +1624,7 @@ int kvm_s390_shadow_fault(struct kvm_vcpu *vcpu, struct gmap *sg,
+ 	pte.p |= dat_protection;
+ 	if (!rc)
+ 		rc = gmap_shadow_page(sg, saddr, __pte(pte.val));
++	vcpu->kvm->stat.gmap_shadow_pg_entry++;
+ 	ipte_unlock(vcpu->kvm);
+ 	mmap_read_unlock(sg->mm);
+ 	return rc;
+diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
+index 49cce436444e0..1af55343a606b 100644
+--- a/arch/s390/kvm/kvm-s390.c
++++ b/arch/s390/kvm/kvm-s390.c
+@@ -66,7 +66,14 @@ const struct _kvm_stats_desc kvm_vm_stats_desc[] = {
+ 	STATS_DESC_COUNTER(VM, inject_pfault_done),
+ 	STATS_DESC_COUNTER(VM, inject_service_signal),
+ 	STATS_DESC_COUNTER(VM, inject_virtio),
+-	STATS_DESC_COUNTER(VM, aen_forward)
++	STATS_DESC_COUNTER(VM, aen_forward),
++	STATS_DESC_COUNTER(VM, gmap_shadow_reuse),
++	STATS_DESC_COUNTER(VM, gmap_shadow_create),
++	STATS_DESC_COUNTER(VM, gmap_shadow_r1_entry),
++	STATS_DESC_COUNTER(VM, gmap_shadow_r2_entry),
++	STATS_DESC_COUNTER(VM, gmap_shadow_r3_entry),
++	STATS_DESC_COUNTER(VM, gmap_shadow_sg_entry),
++	STATS_DESC_COUNTER(VM, gmap_shadow_pg_entry),
+ };
+ 
+ const struct kvm_stats_header kvm_vm_stats_header = {
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index e55f489e1fb79..8207a892bbe22 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1210,8 +1210,10 @@ static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
+ 	 * we're holding has been unshadowed. If the gmap is still valid,
+ 	 * we can safely reuse it.
+ 	 */
+-	if (vsie_page->gmap && gmap_shadow_valid(vsie_page->gmap, asce, edat))
++	if (vsie_page->gmap && gmap_shadow_valid(vsie_page->gmap, asce, edat)) {
++		vcpu->kvm->stat.gmap_shadow_reuse++;
+ 		return 0;
++	}
+ 
+ 	/* release the old shadow - if any, and mark the prefix as unmapped */
+ 	release_gmap_shadow(vsie_page);
+@@ -1219,6 +1221,7 @@ static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
+ 	if (IS_ERR(gmap))
+ 		return PTR_ERR(gmap);
+ 	gmap->private = vcpu->kvm;
++	vcpu->kvm->stat.gmap_shadow_create++;
+ 	WRITE_ONCE(vsie_page->gmap, gmap);
+ 	return 0;
+ }
+--
+2.43.0
+
diff --git a/queue-6.6/kvm-s390-vsie-fix-race-during-shadow-creation.patch b/queue-6.6/kvm-s390-vsie-fix-race-during-shadow-creation.patch
new file mode 100644
index 00000000000..a697a057060
--- /dev/null
+++ b/queue-6.6/kvm-s390-vsie-fix-race-during-shadow-creation.patch
@@ -0,0 +1,66 @@
+From afbd9d3dd682500a4c3192ebc0da1abe2ae2697f Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 20 Dec 2023 13:53:17 +0100
+Subject: KVM: s390: vsie: fix race during shadow creation
+
+From: Christian Borntraeger
+
+[ Upstream commit fe752331d4b361d43cfd0b89534b4b2176057c32 ]
+
+Right now it is possible to see gmap->private being zero in
+kvm_s390_vsie_gmap_notifier, resulting in a crash. This is due to the
+fact that we add gmap->private == kvm after creation:
+
+static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
+			       struct vsie_page *vsie_page)
+{
+[...]
+	gmap = gmap_shadow(vcpu->arch.gmap, asce, edat);
+	if (IS_ERR(gmap))
+		return PTR_ERR(gmap);
+	gmap->private = vcpu->kvm;
+
+Let children inherit the private field of the parent.
+
+Reported-by: Marc Hartmayer
+Fixes: a3508fbe9dc6 ("KVM: s390: vsie: initial support for nested virtualization")
+Cc:
+Cc: David Hildenbrand
+Reviewed-by: Janosch Frank
+Reviewed-by: David Hildenbrand
+Reviewed-by: Claudio Imbrenda
+Signed-off-by: Christian Borntraeger
+Link: https://lore.kernel.org/r/20231220125317.4258-1-borntraeger@linux.ibm.com
+Signed-off-by: Sasha Levin
+---
+ arch/s390/kvm/vsie.c | 1 -
+ arch/s390/mm/gmap.c  | 1 +
+ 2 files changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/arch/s390/kvm/vsie.c b/arch/s390/kvm/vsie.c
+index 8207a892bbe22..db9a180de65f1 100644
+--- a/arch/s390/kvm/vsie.c
++++ b/arch/s390/kvm/vsie.c
+@@ -1220,7 +1220,6 @@ static int acquire_gmap_shadow(struct kvm_vcpu *vcpu,
+ 	gmap = gmap_shadow(vcpu->arch.gmap, asce, edat);
+ 	if (IS_ERR(gmap))
+ 		return PTR_ERR(gmap);
+-	gmap->private = vcpu->kvm;
+ 	vcpu->kvm->stat.gmap_shadow_create++;
+ 	WRITE_ONCE(vsie_page->gmap, gmap);
+ 	return 0;
+diff --git a/arch/s390/mm/gmap.c b/arch/s390/mm/gmap.c
+index 20786f6883b29..157e0a8d5157d 100644
+--- a/arch/s390/mm/gmap.c
++++ b/arch/s390/mm/gmap.c
+@@ -1691,6 +1691,7 @@ struct gmap *gmap_shadow(struct gmap *parent, unsigned long asce,
+ 		return ERR_PTR(-ENOMEM);
+ 	new->mm = parent->mm;
+ 	new->parent = gmap_get(parent);
++	new->private = parent->private;
+ 	new->orig_asce = asce;
+ 	new->edat_level = edat_level;
+ 	new->initialized = false;
+--
+2.43.0
+
diff --git a/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch b/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch
new file mode 100644
index 00000000000..0daeee37fb1
--- /dev/null
+++ b/queue-6.6/readahead-avoid-multiple-marked-readahead-pages.patch
@@ -0,0 +1,97 @@
+From 7eeb2d06b6194d53afad1044946dbac4061655e3 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Thu, 4 Jan 2024 09:58:39 +0100
+Subject: readahead: avoid multiple marked readahead pages
+
+From: Jan Kara
+
+[ Upstream commit ab4443fe3ca6298663a55c4a70efc6c3ce913ca6 ]
+
+ra_alloc_folio() marks a page that should trigger the next round of async
+readahead. However, it rounds up the computed index to the order of the
+page being allocated. This can lead to multiple consecutive pages being
+marked with the readahead flag. Consider a situation with index == 1,
+mark == 1, order == 0. We insert an order 0 page at index 1 and mark it.
+Then we bump order to 1, index to 2, and mark (still == 1) is rounded up
+to 2, so the page at index 2 is marked as well. Then we bump order to 2,
+index is incremented to 4, and mark gets rounded to 4, so the page at
+index 4 is marked as well. The fact that multiple pages get marked within
+a single readahead window confuses the readahead logic and results in the
+readahead window being trimmed back to 1. This situation is triggered in
+particular when the maximum readahead window size is not a power of two
+(in the observed case it was 768 KB) and as a result sequential read
+throughput suffers.
+
+Fix the problem by rounding 'mark' down instead of up. Because the index
+is naturally aligned to 'order', we are guaranteed 'rounded mark' == index
+iff 'mark' is within the page we are allocating at 'index', and thus
+exactly one page is marked with the readahead flag, as required by the
+readahead code, and sequential read performance is restored.
+
+This effectively reverts part of commit b9ff43dd2743 ("mm/readahead: Fix
+readahead with large folios"). The commit changed the rounding with the
+rationale:
+
+"... we were setting the readahead flag on the folio which contains the
+last byte read from the block. This is wrong because we will trigger
+readahead at the end of the read without waiting to see if a subsequent
+read is going to use the pages we just read."
+
+Although this is true, the fact is this was always the case with read
+sizes not aligned to folio boundaries, and large folios in the page cache
+just make the situation more obvious (and frequent). Also, for sequential
+read workloads it is better to trigger the readahead earlier rather than
+later. It is true that the difference in the rounding, and thus the
+earlier triggering of the readahead, can result in reading more for
+semi-random workloads. However, workloads really suffering from this seem
+to be rare. In particular I have verified that the workload described in
+commit b9ff43dd2743 ("mm/readahead: Fix readahead with large folios") of
+reading random 100k blocks from a file like:
+
+[reader]
+bs=100k
+rw=randread
+numjobs=1
+size=64g
+runtime=60s
+
+is not impacted by the rounding change and achieves ~70MB/s in both cases.
+
+[jack@suse.cz: fix one more place where mark rounding was done as well]
+  Link: https://lkml.kernel.org/r/20240123153254.5206-1-jack@suse.cz
+Link: https://lkml.kernel.org/r/20240104085839.21029-1-jack@suse.cz
+Fixes: b9ff43dd2743 ("mm/readahead: Fix readahead with large folios")
+Signed-off-by: Jan Kara
+Cc: Matthew Wilcox
+Cc: Guo Xuenan
+Cc:
+Signed-off-by: Andrew Morton
+Signed-off-by: Sasha Levin
+---
+ mm/readahead.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/mm/readahead.c b/mm/readahead.c
+index 6925e6959fd3f..1d1a84deb5bc5 100644
+--- a/mm/readahead.c
++++ b/mm/readahead.c
+@@ -469,7 +469,7 @@ static inline int ra_alloc_folio(struct readahead_control *ractl, pgoff_t index,
+ 
+ 	if (!folio)
+ 		return -ENOMEM;
+-	mark = round_up(mark, 1UL << order);
++	mark = round_down(mark, 1UL << order);
+ 	if (index == mark)
+ 		folio_set_readahead(folio);
+ 	err = filemap_add_folio(ractl->mapping, folio, index, gfp);
+@@ -577,7 +577,7 @@ static void ondemand_readahead(struct readahead_control *ractl,
+ 	 * It's the expected callback index, assume sequential access.
+ 	 * Ramp up sizes, and push forward the readahead window.
+ 	 */
+-	expected = round_up(ra->start + ra->size - ra->async_size,
++	expected = round_down(ra->start + ra->size - ra->async_size,
+ 			1UL << order);
+ 	if (index == expected || index == (ra->start + ra->size)) {
+ 		ra->start += ra->size;
+--
+2.43.0
+
diff --git a/queue-6.6/selftests-mptcp-decrease-bw-in-simult-flows.patch b/queue-6.6/selftests-mptcp-decrease-bw-in-simult-flows.patch
new file mode 100644
index 00000000000..df3a2aba44e
--- /dev/null
+++ b/queue-6.6/selftests-mptcp-decrease-bw-in-simult-flows.patch
@@ -0,0 +1,53 @@
+From 416fb8656c05f3fa21a0b9deac1ef1bd20a54498 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 31 Jan 2024 22:49:51 +0100
+Subject: selftests: mptcp: decrease BW in simult flows
+
+From: Matthieu Baerts (NGI0)
+
+[ Upstream commit 5e2f3c65af47e527ccac54060cf909e3306652ff ]
+
+When running the simult_flow selftest in slow environments -- e.g. QEMU
+without KVM support -- the results can be unstable. This selftest
+checks if the aggregated bandwidth is (almost) fully used as expected.
+
+To help improve the stability while still keeping the same validation
+in place, the BW and the delay are reduced to lower the pressure on the
+CPU.
+
+Fixes: 1a418cb8e888 ("mptcp: simult flow self-tests")
+Fixes: 219d04992b68 ("mptcp: push pending frames when subflow has free space")
+Cc: stable@vger.kernel.org
+Suggested-by: Paolo Abeni
+Signed-off-by: Matthieu Baerts (NGI0)
+Link: https://lore.kernel.org/r/20240131-upstream-net-20240131-mptcp-ci-issues-v1-6-4c1c11e571ff@kernel.org
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Sasha Levin
+---
+ tools/testing/selftests/net/mptcp/simult_flows.sh | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/tools/testing/selftests/net/mptcp/simult_flows.sh b/tools/testing/selftests/net/mptcp/simult_flows.sh
+index 9096bf5794888..25693b37f820d 100755
+--- a/tools/testing/selftests/net/mptcp/simult_flows.sh
++++ b/tools/testing/selftests/net/mptcp/simult_flows.sh
+@@ -302,12 +302,12 @@ done
+ 
+ setup
+ run_test 10 10 0 0 "balanced bwidth"
+-run_test 10 10 1 50 "balanced bwidth with unbalanced delay"
++run_test 10 10 1 25 "balanced bwidth with unbalanced delay"
+ 
+ # we still need some additional infrastructure to pass the following test-cases
+-run_test 30 10 0 0 "unbalanced bwidth"
+-run_test 30 10 1 50 "unbalanced bwidth with unbalanced delay"
+-run_test 30 10 50 1 "unbalanced bwidth with opposed, unbalanced delay"
++run_test 10 3 0 0 "unbalanced bwidth"
++run_test 10 3 1 25 "unbalanced bwidth with unbalanced delay"
++run_test 10 3 25 1 "unbalanced bwidth with opposed, unbalanced delay"
+ 
+ mptcp_lib_result_print_all_tap
+ exit $ret
+--
+2.43.0
+
diff --git a/queue-6.6/series b/queue-6.6/series
index b80f64806d3..6c7522adf1b 100644
--- a/queue-6.6/series
+++ b/queue-6.6/series
@@ -53,3 +53,10 @@ netrom-fix-a-data-race-around-sysctl_netrom_routing_.patch
 netrom-fix-a-data-race-around-sysctl_netrom_link_fai.patch
 netrom-fix-data-races-around-sysctl_net_busy_read.patch
 net-pds_core-fix-possible-double-free-in-error-handl.patch
+kvm-s390-add-stat-counter-for-shadow-gmap-events.patch
+kvm-s390-vsie-fix-race-during-shadow-creation.patch
+readahead-avoid-multiple-marked-readahead-pages.patch
+selftests-mptcp-decrease-bw-in-simult-flows.patch
+exit-wait_task_zombie-kill-the-no-longer-necessary-s.patch
+drm-bridge-return-null-instead-of-plain-0-in-drm_dp_.patch
+drm-bridge-properly-refcount-dt-nodes-in-aux-bridge-.patch