From: Greg Kroah-Hartman Date: Sun, 8 Jun 2014 01:36:14 +0000 (-0700) Subject: 3.14-stable patches X-Git-Tag: v3.14.7~37 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=2508e60c99efe25db38991e980130e040ecfb539;p=thirdparty%2Fkernel%2Fstable-queue.git 3.14-stable patches added patches: mm-memory-failure.c-fix-memory-leak-by-race-between-poison-and-unpoison.patch perf-fix-race-in-removing-an-event.patch perf-limit-perf_event_attr-sample_period-to-63-bits.patch perf-prevent-false-warning-in-perf_swevent_add.patch sched-deadline-fix-memory-leak.patch sched-sanitize-irq-accounting-madness.patch sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch --- diff --git a/queue-3.10/series b/queue-3.10/series new file mode 100644 index 00000000000..c365390240f --- /dev/null +++ b/queue-3.10/series @@ -0,0 +1,6 @@ +sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch +sched-sanitize-irq-accounting-madness.patch +perf-prevent-false-warning-in-perf_swevent_add.patch +perf-limit-perf_event_attr-sample_period-to-63-bits.patch +perf-fix-race-in-removing-an-event.patch +perf-evsel-fix-printing-of-perf_event_paranoid-message.patch diff --git a/queue-3.14/mm-memory-failure.c-fix-memory-leak-by-race-between-poison-and-unpoison.patch b/queue-3.14/mm-memory-failure.c-fix-memory-leak-by-race-between-poison-and-unpoison.patch new file mode 100644 index 00000000000..8700adba297 --- /dev/null +++ b/queue-3.14/mm-memory-failure.c-fix-memory-leak-by-race-between-poison-and-unpoison.patch @@ -0,0 +1,41 @@ +From 3e030ecc0fc7de10fd0da10c1c19939872a31717 Mon Sep 17 00:00:00 2001 +From: Naoya Horiguchi +Date: Thu, 22 May 2014 11:54:21 -0700 +Subject: mm/memory-failure.c: fix memory leak by race between poison and unpoison + +From: Naoya Horiguchi + +commit 3e030ecc0fc7de10fd0da10c1c19939872a31717 upstream. 
+ +When a memory error happens on an in-use page or (free and in-use) +hugepage, the victim page is isolated with its refcount set to one. + +When you try to unpoison it later, unpoison_memory() calls put_page() +for it twice in order to bring the page back to free page pool (buddy or +free hugepage list). However, if another memory error occurs on the +page which we are unpoisoning, memory_failure() returns without +releasing the refcount which was incremented in the same call at first, +which results in a memory leak and inconsistent num_poisoned_pages +statistics. This patch fixes it. + +Signed-off-by: Naoya Horiguchi +Cc: Andi Kleen +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/memory-failure.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/mm/memory-failure.c ++++ b/mm/memory-failure.c +@@ -1157,6 +1157,8 @@ int memory_failure(unsigned long pfn, in + */ + if (!PageHWPoison(p)) { + printk(KERN_ERR "MCE %#lx: just unpoisoned\n", pfn); ++ atomic_long_sub(nr_pages, &num_poisoned_pages); ++ put_page(hpage); + res = 0; + goto out; + } diff --git a/queue-3.14/perf-fix-race-in-removing-an-event.patch b/queue-3.14/perf-fix-race-in-removing-an-event.patch new file mode 100644 index 00000000000..0ead53fa9e9 --- /dev/null +++ b/queue-3.14/perf-fix-race-in-removing-an-event.patch @@ -0,0 +1,203 @@ +From 46ce0fe97a6be7532ce6126bb26ce89fed81528c Mon Sep 17 00:00:00 2001 +From: Peter Zijlstra +Date: Fri, 2 May 2014 16:56:01 +0200 +Subject: perf: Fix race in removing an event + +From: Peter Zijlstra + +commit 46ce0fe97a6be7532ce6126bb26ce89fed81528c upstream. + +When removing a (sibling) event we do: + + raw_spin_lock_irq(&ctx->lock); + perf_group_detach(event); + raw_spin_unlock_irq(&ctx->lock); + + <hole> + + perf_remove_from_context(event); + raw_spin_lock_irq(&ctx->lock); + ...
+ raw_spin_unlock_irq(&ctx->lock); + +Now, assuming the event is a sibling, it will be 'unreachable' for +things like ctx_sched_out() because that iterates the +groups->siblings, and we just unhooked the sibling. + +So, if during <hole> we get ctx_sched_out(), it will miss the event +and not call event_sched_out() on it, leaving it programmed on the +PMU. + +The subsequent perf_remove_from_context() call will find the ctx is +inactive and only call list_del_event() to remove the event from all +other lists. + +Hereafter we can proceed to free the event; while still programmed! + +Close this hole by moving perf_group_detach() inside the same +ctx->lock region(s) perf_remove_from_context() has. + +The condition on inherited events only in __perf_event_exit_task() is +likely complete crap because non-inherited events are part of groups +too and we're tearing down just the same. But leave that for another +patch. + +Most-likely-Fixes: e03a9a55b4e ("perf: Change close() semantics for group events") +Reported-by: Vince Weaver +Tested-by: Vince Weaver +Much-staring-at-traces-by: Vince Weaver +Much-staring-at-traces-by: Thomas Gleixner +Cc: Arnaldo Carvalho de Melo +Cc: Linus Torvalds +Signed-off-by: Peter Zijlstra +Link: http://lkml.kernel.org/r/20140505093124.GN17778@laptop.programming.kicks-ass.net +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/events/core.c | 47 ++++++++++++++++++++++++++--------------------- + 1 file changed, 26 insertions(+), 21 deletions(-) + +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -1439,6 +1439,11 @@ group_sched_out(struct perf_event *group + cpuctx->exclusive = 0; + } + ++struct remove_event { ++ struct perf_event *event; ++ bool detach_group; ++}; ++ + /* + * Cross CPU call to remove a performance event + * +@@ -1447,12 +1452,15 @@ group_sched_out(struct perf_event *group + */ + static int __perf_remove_from_context(void *info) + { +- struct perf_event *event = info;
++ struct perf_event *event = re->event; + struct perf_event_context *ctx = event->ctx; + struct perf_cpu_context *cpuctx = __get_cpu_context(ctx); + + raw_spin_lock(&ctx->lock); + event_sched_out(event, cpuctx, ctx); ++ if (re->detach_group) ++ perf_group_detach(event); + list_del_event(event, ctx); + if (!ctx->nr_events && cpuctx->task_ctx == ctx) { + ctx->is_active = 0; +@@ -1477,10 +1485,14 @@ static int __perf_remove_from_context(vo + * When called from perf_event_exit_task, it's OK because the + * context has been detached from its task. + */ +-static void perf_remove_from_context(struct perf_event *event) ++static void perf_remove_from_context(struct perf_event *event, bool detach_group) + { + struct perf_event_context *ctx = event->ctx; + struct task_struct *task = ctx->task; ++ struct remove_event re = { ++ .event = event, ++ .detach_group = detach_group, ++ }; + + lockdep_assert_held(&ctx->mutex); + +@@ -1489,12 +1501,12 @@ static void perf_remove_from_context(str + * Per cpu events are removed via an smp call and + * the removal is always successful. + */ +- cpu_function_call(event->cpu, __perf_remove_from_context, event); ++ cpu_function_call(event->cpu, __perf_remove_from_context, &re); + return; + } + + retry: +- if (!task_function_call(task, __perf_remove_from_context, event)) ++ if (!task_function_call(task, __perf_remove_from_context, &re)) + return; + + raw_spin_lock_irq(&ctx->lock); +@@ -1511,6 +1523,8 @@ retry: + * Since the task isn't running, its safe to remove the event, us + * holding the ctx->lock ensures the task won't get scheduled in. + */ ++ if (detach_group) ++ perf_group_detach(event); + list_del_event(event, ctx); + raw_spin_unlock_irq(&ctx->lock); + } +@@ -3279,10 +3293,7 @@ int perf_event_release_kernel(struct per + * to trigger the AB-BA case. 
+ */ + mutex_lock_nested(&ctx->mutex, SINGLE_DEPTH_NESTING); +- raw_spin_lock_irq(&ctx->lock); +- perf_group_detach(event); +- raw_spin_unlock_irq(&ctx->lock); +- perf_remove_from_context(event); ++ perf_remove_from_context(event, true); + mutex_unlock(&ctx->mutex); + + free_event(event); +@@ -7175,7 +7186,7 @@ SYSCALL_DEFINE5(perf_event_open, + struct perf_event_context *gctx = group_leader->ctx; + + mutex_lock(&gctx->mutex); +- perf_remove_from_context(group_leader); ++ perf_remove_from_context(group_leader, false); + + /* + * Removing from the context ends up with disabled +@@ -7185,7 +7196,7 @@ SYSCALL_DEFINE5(perf_event_open, + perf_event__state_init(group_leader); + list_for_each_entry(sibling, &group_leader->sibling_list, + group_entry) { +- perf_remove_from_context(sibling); ++ perf_remove_from_context(sibling, false); + perf_event__state_init(sibling); + put_ctx(gctx); + } +@@ -7315,7 +7326,7 @@ void perf_pmu_migrate_context(struct pmu + mutex_lock(&src_ctx->mutex); + list_for_each_entry_safe(event, tmp, &src_ctx->event_list, + event_entry) { +- perf_remove_from_context(event); ++ perf_remove_from_context(event, false); + unaccount_event_cpu(event, src_cpu); + put_ctx(src_ctx); + list_add(&event->migrate_entry, &events); +@@ -7377,13 +7388,7 @@ __perf_event_exit_task(struct perf_event + struct perf_event_context *child_ctx, + struct task_struct *child) + { +- if (child_event->parent) { +- raw_spin_lock_irq(&child_ctx->lock); +- perf_group_detach(child_event); +- raw_spin_unlock_irq(&child_ctx->lock); +- } +- +- perf_remove_from_context(child_event); ++ perf_remove_from_context(child_event, !!child_event->parent); + + /* + * It can happen that the parent exits first, and has events +@@ -7868,14 +7873,14 @@ static void perf_pmu_rotate_stop(struct + + static void __perf_event_exit_context(void *__info) + { ++ struct remove_event re = { .detach_group = false }; + struct perf_event_context *ctx = __info; +- struct perf_event *event; + + 
perf_pmu_rotate_stop(ctx->pmu); + + rcu_read_lock(); +- list_for_each_entry_rcu(event, &ctx->event_list, event_entry) +- __perf_remove_from_context(event); ++ list_for_each_entry_rcu(re.event, &ctx->event_list, event_entry) ++ __perf_remove_from_context(&re); + rcu_read_unlock(); + } + diff --git a/queue-3.14/perf-limit-perf_event_attr-sample_period-to-63-bits.patch b/queue-3.14/perf-limit-perf_event_attr-sample_period-to-63-bits.patch new file mode 100644 index 00000000000..28dfc3e04ef --- /dev/null +++ b/queue-3.14/perf-limit-perf_event_attr-sample_period-to-63-bits.patch @@ -0,0 +1,38 @@ +From 0819b2e30ccb93edf04876237b6205eef84ec8d2 Mon Sep 17 00:00:00 2001 +From: Peter Zijlstra +Date: Thu, 15 May 2014 20:23:48 +0200 +Subject: perf: Limit perf_event_attr::sample_period to 63 bits + +From: Peter Zijlstra + +commit 0819b2e30ccb93edf04876237b6205eef84ec8d2 upstream. + +Vince reported that using a large sample_period (one with bit 63 set) +results in wreckage since, while the sample_period is fundamentally +unsigned (negative periods don't make sense), the way we implement +things very much relies on signed logic. + +So limit sample_period to 63 bits to avoid tripping over this.
+ +Reported-by: Vince Weaver +Signed-off-by: Peter Zijlstra +Link: http://lkml.kernel.org/n/tip-p25fhunibl4y3qi0zuqmyf4b@git.kernel.org +Signed-off-by: Thomas Gleixner +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/events/core.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -7025,6 +7025,9 @@ SYSCALL_DEFINE5(perf_event_open, + if (attr.freq) { + if (attr.sample_freq > sysctl_perf_event_sample_rate) + return -EINVAL; ++ } else { ++ if (attr.sample_period & (1ULL << 63)) ++ return -EINVAL; + } + + /* diff --git a/queue-3.14/perf-prevent-false-warning-in-perf_swevent_add.patch b/queue-3.14/perf-prevent-false-warning-in-perf_swevent_add.patch new file mode 100644 index 00000000000..9aa799e0bc5 --- /dev/null +++ b/queue-3.14/perf-prevent-false-warning-in-perf_swevent_add.patch @@ -0,0 +1,121 @@ +From 39af6b1678afa5880dda7e375cf3f9d395087f6d Mon Sep 17 00:00:00 2001 +From: Jiri Olsa +Date: Mon, 7 Apr 2014 11:04:08 +0200 +Subject: perf: Prevent false warning in perf_swevent_add + +From: Jiri Olsa + +commit 39af6b1678afa5880dda7e375cf3f9d395087f6d upstream. + +The perf cpu offline callback takes down all cpu context +events and releases swhash->swevent_hlist. + +This could race with task context software event being just +scheduled on this cpu via perf_swevent_add while cpu hotplug +code already cleaned up event's data. + +The race happens in the gap between the cpu notifier code +and the cpu being actually taken down. Note that only cpu +ctx events are terminated in the perf cpu hotplug code. 
+ +It's easily reproduced with: + $ perf record -e faults perf bench sched pipe + +while putting one of the cpus offline: + # echo 0 > /sys/devices/system/cpu/cpu1/online + +Console emits following warning: + WARNING: CPU: 1 PID: 2845 at kernel/events/core.c:5672 perf_swevent_add+0x18d/0x1a0() + Modules linked in: + CPU: 1 PID: 2845 Comm: sched-pipe Tainted: G W 3.14.0+ #256 + Hardware name: Intel Corporation Montevina platform/To be filled by O.E.M., BIOS AMVACRB1.86C.0066.B00.0805070703 05/07/2008 + 0000000000000009 ffff880077233ab8 ffffffff81665a23 0000000000200005 + 0000000000000000 ffff880077233af8 ffffffff8104732c 0000000000000046 + ffff88007467c800 0000000000000002 ffff88007a9cf2a0 0000000000000001 + Call Trace: + [] dump_stack+0x4f/0x7c + [] warn_slowpath_common+0x8c/0xc0 + [] warn_slowpath_null+0x1a/0x20 + [] perf_swevent_add+0x18d/0x1a0 + [] event_sched_in.isra.75+0x9e/0x1f0 + [] group_sched_in+0x6a/0x1f0 + [] ? sched_clock_local+0x25/0xa0 + [] ctx_sched_in+0x1f6/0x450 + [] perf_event_sched_in+0x6b/0xa0 + [] perf_event_context_sched_in+0x7b/0xc0 + [] __perf_event_task_sched_in+0x43e/0x460 + [] ? put_lock_stats.isra.18+0xe/0x30 + [] finish_task_switch+0xb8/0x100 + [] __schedule+0x30e/0xad0 + [] ? pipe_read+0x3e2/0x560 + [] ? preempt_schedule_irq+0x3e/0x70 + [] ? preempt_schedule_irq+0x3e/0x70 + [] preempt_schedule_irq+0x44/0x70 + [] retint_kernel+0x20/0x30 + [] ? lockdep_sys_exit+0x1a/0x90 + [] lockdep_sys_exit_thunk+0x35/0x67 + [] ? sysret_check+0x5/0x56 + +Fixing this by tracking the cpu hotplug state and displaying +the WARN only if current cpu is initialized properly. 
+ +Cc: Corey Ashford +Cc: Frederic Weisbecker +Cc: Ingo Molnar +Cc: Paul Mackerras +Cc: Arnaldo Carvalho de Melo +Reported-by: Fengguang Wu +Signed-off-by: Jiri Olsa +Signed-off-by: Peter Zijlstra +Link: http://lkml.kernel.org/r/1396861448-10097-1-git-send-email-jolsa@redhat.com +Signed-off-by: Thomas Gleixner +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/events/core.c | 13 ++++++++++++- + 1 file changed, 12 insertions(+), 1 deletion(-) + +--- a/kernel/events/core.c ++++ b/kernel/events/core.c +@@ -5406,6 +5406,9 @@ struct swevent_htable { + + /* Recursion avoidance in each contexts */ + int recursion[PERF_NR_CONTEXTS]; ++ ++ /* Keeps track of cpu being initialized/exited */ ++ bool online; + }; + + static DEFINE_PER_CPU(struct swevent_htable, swevent_htable); +@@ -5652,8 +5655,14 @@ static int perf_swevent_add(struct perf_ + hwc->state = !(flags & PERF_EF_START); + + head = find_swevent_head(swhash, event); +- if (WARN_ON_ONCE(!head)) ++ if (!head) { ++ /* ++ * We can race with cpu hotplug code. Do not ++ * WARN if the cpu just got unplugged. 
++ */ ++ WARN_ON_ONCE(swhash->online); + return -EINVAL; ++ } + + hlist_add_head_rcu(&event->hlist_entry, head); + +@@ -7833,6 +7842,7 @@ static void perf_event_init_cpu(int cpu) + struct swevent_htable *swhash = &per_cpu(swevent_htable, cpu); + + mutex_lock(&swhash->hlist_mutex); ++ swhash->online = true; + if (swhash->hlist_refcount > 0) { + struct swevent_hlist *hlist; + +@@ -7890,6 +7900,7 @@ static void perf_event_exit_cpu(int cpu) + perf_event_exit_cpu_context(cpu); + + mutex_lock(&swhash->hlist_mutex); ++ swhash->online = false; + swevent_hlist_release(swhash); + mutex_unlock(&swhash->hlist_mutex); + } diff --git a/queue-3.14/sched-deadline-fix-memory-leak.patch b/queue-3.14/sched-deadline-fix-memory-leak.patch new file mode 100644 index 00000000000..49544a400d3 --- /dev/null +++ b/queue-3.14/sched-deadline-fix-memory-leak.patch @@ -0,0 +1,33 @@ +From 6a7cd273dc4bc3246f37ebe874754a54ccb29141 Mon Sep 17 00:00:00 2001 +From: Li Zefan +Date: Thu, 17 Apr 2014 10:05:02 +0800 +Subject: sched/deadline: Fix memory leak + +From: Li Zefan + +commit 6a7cd273dc4bc3246f37ebe874754a54ccb29141 upstream. + +Free cpudl->free_cpus allocated in cpudl_init(). 
+ +Signed-off-by: Li Zefan +Acked-by: Juri Lelli +Signed-off-by: Peter Zijlstra +Link: http://lkml.kernel.org/r/534F36CE.2000409@huawei.com +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/sched/cpudeadline.c | 4 +--- + 1 file changed, 1 insertion(+), 3 deletions(-) + +--- a/kernel/sched/cpudeadline.c ++++ b/kernel/sched/cpudeadline.c +@@ -210,7 +210,5 @@ int cpudl_init(struct cpudl *cp) + */ + void cpudl_cleanup(struct cpudl *cp) + { +- /* +- * nothing to do for the moment +- */ ++ free_cpumask_var(cp->free_cpus); + } diff --git a/queue-3.14/sched-sanitize-irq-accounting-madness.patch b/queue-3.14/sched-sanitize-irq-accounting-madness.patch new file mode 100644 index 00000000000..3a60f502831 --- /dev/null +++ b/queue-3.14/sched-sanitize-irq-accounting-madness.patch @@ -0,0 +1,120 @@ +From 2d513868e2a33e1d5315490ef4c861ee65babd65 Mon Sep 17 00:00:00 2001 +From: Thomas Gleixner +Date: Fri, 2 May 2014 23:26:24 +0200 +Subject: sched: Sanitize irq accounting madness + +From: Thomas Gleixner + +commit 2d513868e2a33e1d5315490ef4c861ee65babd65 upstream. + +Russell reported, that irqtime_account_idle_ticks() takes ages due to: + + for (i = 0; i < ticks; i++) + irqtime_account_process_tick(current, 0, rq); + +It's sad, that this code was written way _AFTER_ the NOHZ idle +functionality was available. I charge myself guitly for not paying +attention when that crap got merged with commit abb74cefa ("sched: +Export ns irqtimes through /proc/stat") + +So instead of looping nr_ticks times just apply the whole thing at +once. + +As a side note: The whole cputime_t vs. u64 business in that context +wants to be cleaned up as well. There is no point in having all these +back and forth conversions. Lets standardise on u64 nsec for all +kernel internal accounting and be done with it. Everything else does +not make sense at all for fine grained accounting. Frederic, can you +please take care of that? 
+ +Reported-by: Russell King +Signed-off-by: Thomas Gleixner +Reviewed-by: Paul E. McKenney +Signed-off-by: Peter Zijlstra +Cc: Venkatesh Pallipadi +Cc: Shaun Ruffell +Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1405022307000.6261@ionos.tec.linutronix.de +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/sched/cputime.c | 32 ++++++++++++++++---------------- + 1 file changed, 16 insertions(+), 16 deletions(-) + +--- a/kernel/sched/cputime.c ++++ b/kernel/sched/cputime.c +@@ -326,50 +326,50 @@ out: + * softirq as those do not count in task exec_runtime any more. + */ + static void irqtime_account_process_tick(struct task_struct *p, int user_tick, +- struct rq *rq) ++ struct rq *rq, int ticks) + { +- cputime_t one_jiffy_scaled = cputime_to_scaled(cputime_one_jiffy); ++ cputime_t scaled = cputime_to_scaled(cputime_one_jiffy); ++ u64 cputime = (__force u64) cputime_one_jiffy; + u64 *cpustat = kcpustat_this_cpu->cpustat; + + if (steal_account_process_tick()) + return; + ++ cputime *= ticks; ++ scaled *= ticks; ++ + if (irqtime_account_hi_update()) { +- cpustat[CPUTIME_IRQ] += (__force u64) cputime_one_jiffy; ++ cpustat[CPUTIME_IRQ] += cputime; + } else if (irqtime_account_si_update()) { +- cpustat[CPUTIME_SOFTIRQ] += (__force u64) cputime_one_jiffy; ++ cpustat[CPUTIME_SOFTIRQ] += cputime; + } else if (this_cpu_ksoftirqd() == p) { + /* + * ksoftirqd time do not get accounted in cpu_softirq_time. + * So, we have to handle it separately here. + * Also, p->stime needs to be updated for ksoftirqd. 
+ */ +- __account_system_time(p, cputime_one_jiffy, one_jiffy_scaled, +- CPUTIME_SOFTIRQ); ++ __account_system_time(p, cputime, scaled, CPUTIME_SOFTIRQ); + } else if (user_tick) { +- account_user_time(p, cputime_one_jiffy, one_jiffy_scaled); ++ account_user_time(p, cputime, scaled); + } else if (p == rq->idle) { +- account_idle_time(cputime_one_jiffy); ++ account_idle_time(cputime); + } else if (p->flags & PF_VCPU) { /* System time or guest time */ +- account_guest_time(p, cputime_one_jiffy, one_jiffy_scaled); ++ account_guest_time(p, cputime, scaled); + } else { +- __account_system_time(p, cputime_one_jiffy, one_jiffy_scaled, +- CPUTIME_SYSTEM); ++ __account_system_time(p, cputime, scaled, CPUTIME_SYSTEM); + } + } + + static void irqtime_account_idle_ticks(int ticks) + { +- int i; + struct rq *rq = this_rq(); + +- for (i = 0; i < ticks; i++) +- irqtime_account_process_tick(current, 0, rq); ++ irqtime_account_process_tick(current, 0, rq, ticks); + } + #else /* CONFIG_IRQ_TIME_ACCOUNTING */ + static inline void irqtime_account_idle_ticks(int ticks) {} + static inline void irqtime_account_process_tick(struct task_struct *p, int user_tick, +- struct rq *rq) {} ++ struct rq *rq, int nr_ticks) {} + #endif /* CONFIG_IRQ_TIME_ACCOUNTING */ + + /* +@@ -458,7 +458,7 @@ void account_process_tick(struct task_st + return; + + if (sched_clock_irqtime) { +- irqtime_account_process_tick(p, user_tick, rq); ++ irqtime_account_process_tick(p, user_tick, rq, 1); + return; + } + diff --git a/queue-3.14/sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch b/queue-3.14/sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch new file mode 100644 index 00000000000..8b97ba5ac12 --- /dev/null +++ b/queue-3.14/sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch @@ -0,0 +1,41 @@ +From 6227cb00cc120f9a43ce8313bb0475ddabcb7d01 Mon Sep 17 00:00:00 2001 +From: "Steven Rostedt (Red Hat)" +Date: Sun, 13 Apr 2014 09:34:53 -0400 
Subject: sched: Use CPUPRI_NR_PRIORITIES instead of MAX_RT_PRIO in cpupri check + +From: "Steven Rostedt (Red Hat)" + +commit 6227cb00cc120f9a43ce8313bb0475ddabcb7d01 upstream. + +The check at the beginning of cpupri_find() makes sure that the task_pri +variable does not exceed the cp->pri_to_cpu array length. But that length +is CPUPRI_NR_PRIORITIES not MAX_RT_PRIO, so the check will miss the last two +priorities in that array. + +As task_pri is computed by convert_prio(), which should never produce a +value of CPUPRI_NR_PRIORITIES or more, the check should cause a panic if it +is hit. + +Reported-by: Mike Galbraith +Signed-off-by: Steven Rostedt +Signed-off-by: Peter Zijlstra +Link: http://lkml.kernel.org/r/1397015410.5212.13.camel@marge.simpson.net +Signed-off-by: Ingo Molnar +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/sched/cpupri.c | 3 +-- + 1 file changed, 1 insertion(+), 2 deletions(-) + +--- a/kernel/sched/cpupri.c ++++ b/kernel/sched/cpupri.c +@@ -70,8 +70,7 @@ int cpupri_find(struct cpupri *cp, struc + int idx = 0; + int task_pri = convert_prio(p->prio); + +- if (task_pri >= MAX_RT_PRIO) +- return 0; ++ BUG_ON(task_pri >= CPUPRI_NR_PRIORITIES); + + for (idx = 0; idx < task_pri; idx++) { + struct cpupri_vec *vec = &cp->pri_to_cpu[idx]; diff --git a/queue-3.14/series b/queue-3.14/series new file mode 100644 index 00000000000..f9fe2366397 --- /dev/null +++ b/queue-3.14/series @@ -0,0 +1,7 @@ +sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch +sched-deadline-fix-memory-leak.patch +sched-sanitize-irq-accounting-madness.patch +perf-prevent-false-warning-in-perf_swevent_add.patch +perf-limit-perf_event_attr-sample_period-to-63-bits.patch +perf-fix-race-in-removing-an-event.patch +mm-memory-failure.c-fix-memory-leak-by-race-between-poison-and-unpoison.patch diff --git a/queue-3.4/series b/queue-3.4/series new file mode 100644 index 00000000000..eff5fea5375 --- /dev/null +++ b/queue-3.4/series @@ -0,0 +1,4 @@
+sched-use-cpupri_nr_priorities-instead-of-max_rt_prio-in-cpupri-check.patch +perf-prevent-false-warning-in-perf_swevent_add.patch +perf-limit-perf_event_attr-sample_period-to-63-bits.patch +perf-fix-race-in-removing-an-event.patch