From: Greg Kroah-Hartman
Date: Wed, 19 Apr 2017 11:39:55 +0000 (+0200)
Subject: 3.18-stable patches
X-Git-Tag: v4.4.63~16
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=9cb4bac71fb09f2c59d79a13490717628f8df454;p=thirdparty%2Fkernel%2Fstable-queue.git

3.18-stable patches

added patches:
	zram-do-not-use-copy_page-with-non-page-aligned-address.patch
---

diff --git a/queue-3.18/series b/queue-3.18/series
index 1b3a9a738a6..71f8540d676 100644
--- a/queue-3.18/series
+++ b/queue-3.18/series
@@ -47,3 +47,4 @@ scsi-libsas-fix-ata-xfer-length.patch
 xen-acpi-upload-pm-state-from-init-domain-to-xen.patch
 crypto-caam-fix-rng-deinstantiation-error-checking.patch
 tty-serial-atmel-fix-race-condition-tx-dma.patch
+zram-do-not-use-copy_page-with-non-page-aligned-address.patch
diff --git a/queue-3.18/zram-do-not-use-copy_page-with-non-page-aligned-address.patch b/queue-3.18/zram-do-not-use-copy_page-with-non-page-aligned-address.patch
new file mode 100644
index 00000000000..2f3a4e2abf5
--- /dev/null
+++ b/queue-3.18/zram-do-not-use-copy_page-with-non-page-aligned-address.patch
@@ -0,0 +1,75 @@
+From d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e Mon Sep 17 00:00:00 2001
+From: Minchan Kim
+Date: Thu, 13 Apr 2017 14:56:37 -0700
+Subject: zram: do not use copy_page with non-page aligned address
+
+From: Minchan Kim
+
+commit d72e9a7a93e4f8e9e52491921d99e0c8aa89eb4e upstream.
+
+copy_page is an optimized memcpy for page-aligned addresses.  If it is
+used with a non-page-aligned address, it can corrupt memory, which
+means system corruption.  With zram, this can happen with
+
+1. a 64K page-size architecture
+2. partial IO
+3. slub debug
+
+Partial IO needs to allocate a page, and zram allocates it via kmalloc.
+With slub debug, kmalloc(PAGE_SIZE) doesn't return a page-aligned
+address.  And finally, copy_page(mem, cmem) corrupts memory.
+
+So, this patch changes it to memcpy.
+
+Actually, we don't need to change the zram_bvec_write part because
+zsmalloc returns a page-aligned address for the PAGE_SIZE size class,
+but it's not good to rely on the internals of zsmalloc.
+
+Note:
+ When this patch is merged to stable, clear_page should be fixed, too.
+ Unfortunately, recent zram removed it with the "same page merge"
+ feature, so it's hard to backport this patch to the -stable tree.
+
+I will handle it when I receive the mail from the stable tree
+maintainer about merging this patch for backporting.
+
+Fixes: 42e99bd ("zram: optimize memory operations with clear_page()/copy_page()")
+Link: http://lkml.kernel.org/r/1492042622-12074-2-git-send-email-minchan@kernel.org
+Signed-off-by: Minchan Kim
+Reviewed-by: Sergey Senozhatsky
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+
+
+---
+ drivers/block/zram/zram_drv.c |    6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/block/zram/zram_drv.c
++++ b/drivers/block/zram/zram_drv.c
+@@ -431,13 +431,13 @@ static int zram_decompress_page(struct z
+ 
+ 	if (!handle || zram_test_flag(meta, index, ZRAM_ZERO)) {
+ 		bit_spin_unlock(ZRAM_ACCESS, &meta->table[index].value);
+-		clear_page(mem);
++		memset(mem, 0, PAGE_SIZE);
+ 		return 0;
+ 	}
+ 
+ 	cmem = zs_map_object(meta->mem_pool, handle, ZS_MM_RO);
+ 	if (size == PAGE_SIZE)
+-		copy_page(mem, cmem);
++		memcpy(mem, cmem, PAGE_SIZE);
+ 	else
+ 		ret = zcomp_decompress(zram->comp, cmem, size, mem);
+ 	zs_unmap_object(meta->mem_pool, handle);
+@@ -612,7 +612,7 @@ static int zram_bvec_write(struct zram *
+ 
+ 	if ((clen == PAGE_SIZE) && !is_partial_io(bvec)) {
+ 		src = kmap_atomic(page);
+-		copy_page(cmem, src);
++		memcpy(cmem, src, PAGE_SIZE);
+ 		kunmap_atomic(src);
+ 	} else {
+ 		memcpy(cmem, src, clen);
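
For readers who want the alignment constraint spelled out in code, the
following is a small, hypothetical kernel-style sketch; it is not part of
the patch above.  The helper name copy_one_page() is invented for
illustration, while copy_page(), memcpy(), IS_ALIGNED() and PAGE_SIZE are
the ordinary kernel interfaces.  The patch itself simply calls memcpy()
unconditionally for buffers that may have come from kmalloc(); the
explicit alignment check here only makes the rule visible.

/*
 * Hypothetical sketch only -- not taken from zram_drv.c.
 * Under slub debugging, kmalloc(PAGE_SIZE) may return a pointer that is
 * not page aligned (red-zoning shifts the payload), so a page copy must
 * not assume alignment.
 */
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>

static void copy_one_page(void *dst, void *src)
{
	if (IS_ALIGNED((unsigned long)dst, PAGE_SIZE) &&
	    IS_ALIGNED((unsigned long)src, PAGE_SIZE))
		/* Both addresses page aligned: the optimized copy is safe. */
		copy_page(dst, src);
	else
		/* Possibly unaligned kmalloc'ed buffer: plain memcpy. */
		memcpy(dst, src, PAGE_SIZE);
}

In the actual fix, zram cannot guarantee alignment for the partial-IO
buffer, so the copy_page() calls in the read and write paths are simply
replaced by memcpy(), as shown in the diff above.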