From: Greg Kroah-Hartman
Date: Mon, 6 Jan 2014 17:20:36 +0000 (-0800)
Subject: 3.10-stable patches
X-Git-Tag: v3.4.76~53
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=ed654006c3772fb494d18cd468afd3a429d318a3;p=thirdparty%2Fkernel%2Fstable-queue.git

3.10-stable patches

added patches:
	firewire-sbp2-bring-back-write-same-support.patch
	net_dma-mark-broken.patch
	sched-numa-skip-inaccessible-vmas.patch
	sched-rt-fix-rq-s-cpupri-leak-while-enqueue-dequeue-child-rt-entities.patch
---

diff --git a/queue-3.10/firewire-sbp2-bring-back-write-same-support.patch b/queue-3.10/firewire-sbp2-bring-back-write-same-support.patch
new file mode 100644
index 00000000000..e847413c876
--- /dev/null
+++ b/queue-3.10/firewire-sbp2-bring-back-write-same-support.patch
@@ -0,0 +1,34 @@
+From ce027ed98fd176710fb14be9d6015697b62436f0 Mon Sep 17 00:00:00 2001
+From: Stefan Richter
+Date: Sun, 15 Dec 2013 16:18:01 +0100
+Subject: firewire: sbp2: bring back WRITE SAME support
+
+From: Stefan Richter
+
+commit ce027ed98fd176710fb14be9d6015697b62436f0 upstream.
+
+Commit 54b2b50c20a6 "[SCSI] Disable WRITE SAME for RAID and virtual
+host adapter drivers" disabled WRITE SAME support for all SBP-2 attached
+targets. But as described in the changelog of commit b0ea5f19d3d8
+"firewire: sbp2: allow WRITE SAME and REPORT SUPPORTED OPERATION CODES",
+it is not required to blacklist WRITE SAME.
+
+Bring the feature back by reverting the sbp2.c hunk of commit 54b2b50c20a6.
+
+Signed-off-by: Stefan Richter
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/firewire/sbp2.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/drivers/firewire/sbp2.c
++++ b/drivers/firewire/sbp2.c
+@@ -1626,7 +1626,6 @@ static struct scsi_host_template scsi_dr
+ .cmd_per_lun = 1,
+ .can_queue = 1,
+ .sdev_attrs = sbp2_scsi_sysfs_attrs,
+- .no_write_same = 1,
+ };
+
+ MODULE_AUTHOR("Kristian Hoegsberg ");
diff --git a/queue-3.10/net_dma-mark-broken.patch b/queue-3.10/net_dma-mark-broken.patch
new file mode 100644
index 00000000000..04388f70b8d
--- /dev/null
+++ b/queue-3.10/net_dma-mark-broken.patch
@@ -0,0 +1,91 @@
+From 77873803363c9e831fc1d1e6895c084279090c22 Mon Sep 17 00:00:00 2001
+From: Dan Williams
+Date: Tue, 17 Dec 2013 10:09:32 -0800
+Subject: net_dma: mark broken
+
+From: Dan Williams
+
+commit 77873803363c9e831fc1d1e6895c084279090c22 upstream.
+
+net_dma can cause data to be copied to a stale mapping if a
+copy-on-write fault occurs during dma. The application sees missing
+data.
+
+The following trace is triggered by modifying the kernel to WARN if it
+ever triggers copy-on-write on a page that is undergoing dma:
+
+ WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120()
+ ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9]
+ Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca
+ CPU: 24 PID: 2529 Comm: linbug Tainted: G W 3.13.0-rc1+ #353
+ 00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70
+ ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646
+ ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349
+ Call Trace:
+ [] dump_stack+0x46/0x58
+ [] warn_slowpath_common+0x8c/0xc0
+ [] ? ftrace_pid_func+0x26/0x30
+ [] warn_slowpath_fmt+0x46/0x50
+ [] debug_dma_assert_idle+0xd2/0x120
+ [] do_wp_page+0xd0/0x790
+ [] handle_mm_fault+0x51c/0xde0
+ [] ? copy_user_enhanced_fast_string+0x9/0x20
+ [] __do_page_fault+0x19c/0x530
+ [] ? _raw_spin_lock_bh+0x16/0x40
+ [] ? trace_clock_local+0x9/0x10
+ [] ? rb_reserve_next_event+0x64/0x310
+ [] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma]
+ [] do_page_fault+0xe/0x10
+ [] page_fault+0x22/0x30
+ [] ? __kfree_skb+0x51/0xd0
+ [] ? copy_user_enhanced_fast_string+0x9/0x20
+ [] ? memcpy_toiovec+0x52/0xa0
+ [] skb_copy_datagram_iovec+0x5f/0x2a0
+ [] tcp_rcv_established+0x674/0x7f0
+ [] tcp_v4_do_rcv+0x2e5/0x4a0
+ [..]
+ ---[ end trace e30e3b01191b7617 ]---
+ Mapped at:
+ [] debug_dma_map_page+0xb9/0x160
+ [] dma_async_memcpy_pg_to_pg+0x127/0x210
+ [] dma_memcpy_pg_to_iovec+0x119/0x1f0
+ [] dma_skb_copy_datagram_iovec+0x11c/0x2b0
+ [] tcp_rcv_established+0x74a/0x7f0:
+
+...the problem is that the receive path falls back to cpu-copy in
+several locations and this trace is just one of the areas. A few
+options were considered to fix this:
+
+1/ sync all dma whenever a cpu copy branch is taken
+
+2/ modify the page fault handler to hold off while dma is in-flight
+
+Option 1 adds yet more cpu overhead to an "offload" that struggles to compete
+with cpu-copy. Option 2 adds checks for behavior that is already documented as
+broken when using get_user_pages(). At a minimum a debug mode is warranted to
+catch and flag these violations of the dma-api vs get_user_pages().
+
+Thanks to David for his reproducer.
+
+Cc: Dave Jiang
+Cc: Vinod Koul
+Cc: Alexander Duyck
+Reported-by: David Whipple
+Acked-by: David S. Miller
+Signed-off-by: Dan Williams
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ drivers/dma/Kconfig | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/dma/Kconfig
++++ b/drivers/dma/Kconfig
+@@ -333,6 +333,7 @@ config NET_DMA
+ bool "Network: TCP receive copy offload"
+ depends on DMA_ENGINE && NET
+ default (INTEL_IOATDMA || FSL_DMA)
++ depends on BROKEN
+ help
+ This enables the use of DMA engines in the network stack to
+ offload receive copy-to-user operations, freeing CPU cycles.
diff --git a/queue-3.10/sched-numa-skip-inaccessible-vmas.patch b/queue-3.10/sched-numa-skip-inaccessible-vmas.patch
new file mode 100644
index 00000000000..ab4eadcacc4
--- /dev/null
+++ b/queue-3.10/sched-numa-skip-inaccessible-vmas.patch
@@ -0,0 +1,38 @@
+From 3c67f474558748b604e247d92b55dfe89654c81d Mon Sep 17 00:00:00 2001
+From: Mel Gorman
+Date: Wed, 18 Dec 2013 17:08:40 -0800
+Subject: sched: numa: skip inaccessible VMAs
+
+From: Mel Gorman
+
+commit 3c67f474558748b604e247d92b55dfe89654c81d upstream.
+
+Inaccessible VMAs should not be trapping NUMA hint faults. Skip them.
+
+Signed-off-by: Mel Gorman
+Reviewed-by: Rik van Riel
+Cc: Alex Thorlton
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ kernel/sched/fair.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+--- a/kernel/sched/fair.c
++++ b/kernel/sched/fair.c
+@@ -936,6 +936,13 @@ void task_numa_work(struct callback_head
+ if (vma->vm_end - vma->vm_start < HPAGE_SIZE)
+ continue;
+
++ /*
++ * Skip inaccessible VMAs to avoid any confusion between
++ * PROT_NONE and NUMA hinting ptes
++ */
++ if (!(vma->vm_flags & (VM_READ | VM_EXEC | VM_WRITE)))
++ continue;
++
+ do {
+ start = max(start, vma->vm_start);
+ end = ALIGN(start + (pages << PAGE_SHIFT), HPAGE_SIZE);
diff --git a/queue-3.10/sched-rt-fix-rq-s-cpupri-leak-while-enqueue-dequeue-child-rt-entities.patch b/queue-3.10/sched-rt-fix-rq-s-cpupri-leak-while-enqueue-dequeue-child-rt-entities.patch
new file mode 100644
index 00000000000..514e78a9649
--- /dev/null
+++ b/queue-3.10/sched-rt-fix-rq-s-cpupri-leak-while-enqueue-dequeue-child-rt-entities.patch
@@ -0,0 +1,65 @@
+From 757dfcaa41844595964f1220f1d33182dae49976 Mon Sep 17 00:00:00 2001
+From: Kirill Tkhai
+Date: Wed, 27 Nov 2013 19:59:13 +0400
+Subject: sched/rt: Fix rq's cpupri leak while enqueue/dequeue child RT entities
+
+From: Kirill Tkhai
+
+commit 757dfcaa41844595964f1220f1d33182dae49976 upstream.
+
+This patch touches the RT group scheduling case.
+
+Functions inc_rt_prio_smp() and dec_rt_prio_smp() change (global) rq's
+priority, while the rt_rq passed to them may not be the top-level rt_rq.
+This is wrong, because changing of priority on a child level does not
+guarantee that the priority is the highest all over the rq. So, this
+leak makes RT balancing unusable.
+
+A short example: the task with the highest priority among all of the rq's
+RT tasks (no other task has the same priority) is waking on a
+throttled rt_rq. The rq's cpupri is set to the task's priority
+equivalent, but the real rq->rt.highest_prio.curr is less.
+
+The patch below fixes the problem.
+
+Signed-off-by: Kirill Tkhai
+Signed-off-by: Peter Zijlstra
+CC: Steven Rostedt
+Link: http://lkml.kernel.org/r/49231385567953@web4m.yandex.ru
+Signed-off-by: Ingo Molnar
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ kernel/sched/rt.c | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+--- a/kernel/sched/rt.c
++++ b/kernel/sched/rt.c
+@@ -964,6 +964,13 @@ inc_rt_prio_smp(struct rt_rq *rt_rq, int
+ {
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+
++#ifdef CONFIG_RT_GROUP_SCHED
++ /*
++ * Change rq's cpupri only if rt_rq is the top queue.
++ */
++ if (&rq->rt != rt_rq)
++ return;
++#endif
+ if (rq->online && prio < prev_prio)
+ cpupri_set(&rq->rd->cpupri, rq->cpu, prio);
+ }
+@@ -973,6 +980,13 @@ dec_rt_prio_smp(struct rt_rq *rt_rq, int
+ {
+ struct rq *rq = rq_of_rt_rq(rt_rq);
+
++#ifdef CONFIG_RT_GROUP_SCHED
++ /*
++ * Change rq's cpupri only if rt_rq is the top queue.
++ */
++ if (&rq->rt != rt_rq)
++ return;
++#endif
+ if (rq->online && rt_rq->highest_prio.curr != prev_prio)
+ cpupri_set(&rq->rd->cpupri, rq->cpu, rt_rq->highest_prio.curr);
+ }
diff --git a/queue-3.10/series b/queue-3.10/series
index 7ac3c7dae49..d7f2311f3b1 100644
--- a/queue-3.10/series
+++ b/queue-3.10/series
@@ -33,3 +33,7 @@ ext4-do-not-reserve-clusters-when-fs-doesn-t-support-extents.patch
 ext4-fix-deadlock-when-writing-in-enospc-conditions.patch
 ext4-add-explicit-casts-when-masking-cluster-sizes.patch
 ext4-fix-fitrim-in-no-journal-mode.patch
+sched-numa-skip-inaccessible-vmas.patch
+sched-rt-fix-rq-s-cpupri-leak-while-enqueue-dequeue-child-rt-entities.patch
+firewire-sbp2-bring-back-write-same-support.patch
+net_dma-mark-broken.patch