From: Greg Kroah-Hartman
Date: Tue, 3 Mar 2020 15:26:01 +0000 (+0100)
Subject: drop sched-fair-optimize-select_idle_cpu.patch from 4.14 and 4.19
X-Git-Tag: v4.19.108~27
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=72f11e1bd78d51dc28104d678537f5d537d378ad;p=thirdparty%2Fkernel%2Fstable-queue.git

drop sched-fair-optimize-select_idle_cpu.patch from 4.14 and 4.19
---

diff --git a/queue-4.14/sched-fair-optimize-select_idle_cpu.patch b/queue-4.14/sched-fair-optimize-select_idle_cpu.patch
deleted file mode 100644
index 72f9aa8ed35..00000000000
--- a/queue-4.14/sched-fair-optimize-select_idle_cpu.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From 60588bfa223ff675b95f866249f90616613fbe31 Mon Sep 17 00:00:00 2001
-From: Cheng Jian
-Date: Fri, 13 Dec 2019 10:45:30 +0800
-Subject: sched/fair: Optimize select_idle_cpu
-
-From: Cheng Jian
-
-commit 60588bfa223ff675b95f866249f90616613fbe31 upstream.
-
-select_idle_cpu() will scan the LLC domain for idle CPUs,
-it's always expensive. so the next commit :
-
-   1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
-
-introduces a way to limit how many CPUs we scan.
-
-But it consume some CPUs out of 'nr' that are not allowed
-for the task and thus waste our attempts. The function
-always return nr_cpumask_bits, and we can't find a CPU
-which our task is allowed to run.
-
-Cpumask may be too big, similar to select_idle_core(), use
-per_cpu_ptr 'select_idle_mask' to prevent stack overflow.
-
-Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
-Signed-off-by: Cheng Jian
-Signed-off-by: Peter Zijlstra (Intel)
-Reviewed-by: Srikar Dronamraju
-Reviewed-by: Vincent Guittot
-Reviewed-by: Valentin Schneider
-Link: https://lkml.kernel.org/r/20191213024530.28052-1-cj.chengjian@huawei.com
-Signed-off-by: Greg Kroah-Hartman
-
----
- kernel/sched/fair.c |    7 ++++---
- 1 file changed, 4 insertions(+), 3 deletions(-)
-
---- a/kernel/sched/fair.c
-+++ b/kernel/sched/fair.c
-@@ -5779,6 +5779,7 @@ static inline int select_idle_smt(struct
-  */
- static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
- {
-+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
- 	struct sched_domain *this_sd;
- 	u64 avg_cost, avg_idle;
- 	u64 time, cost;
-@@ -5809,11 +5810,11 @@ static int select_idle_cpu(struct task_s
- 
- 	time = local_clock();
- 
--	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
-+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
-+
-+	for_each_cpu_wrap(cpu, cpus, target) {
- 		if (!--nr)
- 			return -1;
--		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
--			continue;
- 		if (idle_cpu(cpu))
- 			break;
- 	}
diff --git a/queue-4.14/series b/queue-4.14/series
index 54d18a73296..180037ade5b 100644
--- a/queue-4.14/series
+++ b/queue-4.14/series
@@ -58,4 +58,3 @@ namei-only-return-echild-from-follow_dotdot_rcu.patch
 mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch
 kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch
 kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch
-sched-fair-optimize-select_idle_cpu.patch
diff --git a/queue-4.19/sched-fair-optimize-select_idle_cpu.patch b/queue-4.19/sched-fair-optimize-select_idle_cpu.patch
deleted file mode 100644
index cd3a29091ae..00000000000
--- a/queue-4.19/sched-fair-optimize-select_idle_cpu.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From 60588bfa223ff675b95f866249f90616613fbe31 Mon Sep 17 00:00:00 2001
-From: Cheng Jian
-Date: Fri, 13 Dec 2019 10:45:30 +0800
-Subject: sched/fair: Optimize select_idle_cpu
-
-From: Cheng Jian
-
-commit 60588bfa223ff675b95f866249f90616613fbe31 upstream.
-
-select_idle_cpu() will scan the LLC domain for idle CPUs,
-it's always expensive. so the next commit :
-
-   1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
-
-introduces a way to limit how many CPUs we scan.
-
-But it consume some CPUs out of 'nr' that are not allowed
-for the task and thus waste our attempts. The function
-always return nr_cpumask_bits, and we can't find a CPU
-which our task is allowed to run.
-
-Cpumask may be too big, similar to select_idle_core(), use
-per_cpu_ptr 'select_idle_mask' to prevent stack overflow.
-
-Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()")
-Signed-off-by: Cheng Jian
-Signed-off-by: Peter Zijlstra (Intel)
-Reviewed-by: Srikar Dronamraju
-Reviewed-by: Vincent Guittot
-Reviewed-by: Valentin Schneider
-Link: https://lkml.kernel.org/r/20191213024530.28052-1-cj.chengjian@huawei.com
-Signed-off-by: Greg Kroah-Hartman
-
----
- kernel/sched/fair.c |    7 ++++---
- 1 file changed, 4 insertions(+), 3 deletions(-)
-
---- a/kernel/sched/fair.c
-+++ b/kernel/sched/fair.c
-@@ -6133,6 +6133,7 @@ static inline int select_idle_smt(struct
-  */
- static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target)
- {
-+	struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask);
- 	struct sched_domain *this_sd;
- 	u64 avg_cost, avg_idle;
- 	u64 time, cost;
-@@ -6163,11 +6164,11 @@ static int select_idle_cpu(struct task_s
- 
- 	time = local_clock();
- 
--	for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
-+	cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr);
-+
-+	for_each_cpu_wrap(cpu, cpus, target) {
- 		if (!--nr)
- 			return -1;
--		if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
--			continue;
- 		if (available_idle_cpu(cpu))
- 			break;
- 	}
diff --git a/queue-4.19/series b/queue-4.19/series
index 01463020bca..572452a55af 100644
--- a/queue-4.19/series
+++ b/queue-4.19/series
@@ -69,4 +69,3 @@ mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch
 mwifiex-delete-unused-mwifiex_get_intf_num.patch
 kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch
 kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch
-sched-fair-optimize-select_idle_cpu.patch
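
For context only (this note and the sketch below are not part of the queue files above): the dropped patch backports upstream commit 60588bfa223f ("sched/fair: Optimize select_idle_cpu"), whose point is that the bounded LLC scan should not spend its 'nr' budget on CPUs the task is not allowed to run on; it pre-masks the scan set with the task's allowed CPUs before iterating. The userspace C sketch below models only that budget behaviour. The names scan_old()/scan_new(), the plain uint64_t bitmasks, NCPUS, and the fixed 0-based scan order are illustrative assumptions; the real code uses struct cpumask, a per-CPU select_idle_mask scratch mask, cpumask_and() and for_each_cpu_wrap() starting at 'target', as the hunks above show.

/*
 * Toy model of the scan-budget problem addressed by the dropped patch.
 * Not kernel code: bitmasks stand in for struct cpumask, and the scan
 * always starts at CPU 0 instead of wrapping around 'target'.
 */
#include <stdint.h>
#include <stdio.h>

#define NCPUS 16

/* Old behaviour: one unit of 'nr' budget is spent on every scanned CPU,
 * including CPUs the task is not allowed to run on. */
static int scan_old(uint64_t idle, uint64_t allowed, int nr)
{
	for (int cpu = 0; cpu < NCPUS; cpu++) {
		if (!--nr)
			return -1;              /* budget exhausted */
		if (!(allowed & (1ULL << cpu)))
			continue;               /* attempt wasted on a disallowed CPU */
		if (idle & (1ULL << cpu))
			return cpu;
	}
	return -1;
}

/* New behaviour: intersect the scan set with the allowed set first, so
 * every decrement of 'nr' inspects a CPU the task can actually use. */
static int scan_new(uint64_t idle, uint64_t allowed, int nr)
{
	uint64_t cpus = allowed;                /* ~ cpumask_and(cpus, span, allowed) */

	for (int cpu = 0; cpu < NCPUS; cpu++) {
		if (!(cpus & (1ULL << cpu)))
			continue;               /* never scanned, no budget spent */
		if (!--nr)
			return -1;
		if (idle & (1ULL << cpu))
			return cpu;
	}
	return -1;
}

int main(void)
{
	uint64_t idle    = 1ULL << 12;          /* only CPU 12 is idle */
	uint64_t allowed = 0xF000;              /* task may run on CPUs 12-15 */
	int nr = 4;                             /* scan budget */

	printf("old scan: %d\n", scan_old(idle, allowed, nr));  /* -1: budget burnt on CPUs 0-3 */
	printf("new scan: %d\n", scan_new(idle, allowed, nr));  /* 12 */
	return 0;
}

With the pre-masked scan, the same budget of four attempts is enough to reach the idle CPU, which is the effect the upstream commit message describes.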