From 8376efd31d3d7c44bd05be337adde023cc531fa1 Mon Sep 17 00:00:00 2001
From: Ben Hutchings <ben.hutchings@codethink.co.uk>
Date: Tue, 9 May 2017 18:00:43 +0100
Subject: x86, pmem: Fix cache flushing for iovec write < 8 bytes

From: Ben Hutchings <ben.hutchings@codethink.co.uk>

commit 8376efd31d3d7c44bd05be337adde023cc531fa1 upstream.

Commit 11e63f6d920d added cache flushing for unaligned writes from an
iovec, covering the first and last cache line of a >= 8 byte write and
the first cache line of a < 8 byte write. But an unaligned write of
2-7 bytes can still cover two cache lines, so make sure we flush both
in that case.

Fixes: 11e63f6d920d ("x86, pmem: fix broken __copy_user_nocache ...")
Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/include/asm/pmem.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/include/asm/pmem.h
+++ b/arch/x86/include/asm/pmem.h
@@ -103,7 +103,7 @@ static inline size_t arch_copy_from_iter
 
 	if (bytes < 8) {
 		if (!IS_ALIGNED(dest, 4) || (bytes != 4))
-			arch_wb_cache_pmem(addr, 1);
+			arch_wb_cache_pmem(addr, bytes);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);