--- /dev/null
+From 901c7280ca0d5e2b4a8929fbe0bfb007ac2a6544 Mon Sep 17 00:00:00 2001
+From: Linus Torvalds <torvalds@linux-foundation.org>
+Date: Mon, 28 Mar 2022 11:37:05 -0700
+Subject: Reinstate some of "swiotlb: rework "fix info leak with DMA_FROM_DEVICE""
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Linus Torvalds <torvalds@linux-foundation.org>
+
+commit 901c7280ca0d5e2b4a8929fbe0bfb007ac2a6544 upstream.
+
+Halil Pasic points out [1] that the full revert of that commit (done in
+bddac7c1e02b) went further than necessary, and that a partial revert that
+only reverts the problematic case, but still keeps some of the cleanups,
+is probably better.
+
+That partial revert [2] had already been verified by Oleksandr Natalenko [3]
+to also fix the issue; I had just missed that in the long discussion.
+
+So let's reinstate the cleanups from commit aa6f8dcbab47 ("swiotlb:
+rework "fix info leak with DMA_FROM_DEVICE""), and effectively only
+revert the part that caused problems.
+
+Link: https://lore.kernel.org/all/20220328013731.017ae3e3.pasic@linux.ibm.com/ [1]
+Link: https://lore.kernel.org/all/20220324055732.GB12078@lst.de/ [2]
+Link: https://lore.kernel.org/all/4386660.LvFx2qVVIh@natalenko.name/ [3]
+Suggested-by: Halil Pasic <pasic@linux.ibm.com>
+Tested-by: Oleksandr Natalenko <oleksandr@natalenko.name>
+Cc: Christoph Hellwig <hch@lst.de>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+[OP: backport to 4.14: apply swiotlb_tbl_map_single() changes in lib/swiotlb.c]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/DMA-attributes.txt | 8 --------
+ include/linux/dma-mapping.h | 8 --------
+ lib/swiotlb.c | 13 ++++++++-----
+ 3 files changed, 8 insertions(+), 21 deletions(-)
+
+--- a/Documentation/DMA-attributes.txt
++++ b/Documentation/DMA-attributes.txt
+@@ -156,11 +156,3 @@ accesses to DMA buffers in both privileg
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
+-
+-DMA_ATTR_OVERWRITE
+-------------------
+-
+-This is a hint to the DMA-mapping subsystem that the device is expected to
+-overwrite the entire mapped size, thus the caller does not require any of the
+-previous buffer contents to be preserved. This allows bounce-buffering
+-implementations to optimise DMA_FROM_DEVICE transfers.
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -71,14 +71,6 @@
+ #define DMA_ATTR_PRIVILEGED (1UL << 9)
+
+ /*
+- * This is a hint to the DMA-mapping subsystem that the device is expected
+- * to overwrite the entire mapped size, thus the caller does not require any
+- * of the previous buffer contents to be preserved. This allows
+- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
+- */
+-#define DMA_ATTR_OVERWRITE (1UL << 10)
+-
+-/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target. A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+--- a/lib/swiotlb.c
++++ b/lib/swiotlb.c
+@@ -600,11 +600,14 @@ found:
+ */
+ for (i = 0; i < nslots; i++)
+ io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+- (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
+- dir == DMA_BIDIRECTIONAL))
+- swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+-
++ /*
++ * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
++ * to the tlb buffer, if we knew for sure the device will
++ * overwrite the entire current content. But we don't. Thus
++ * unconditional bounce may prevent leaking swiotlb content (i.e.
++ * kernel memory) to user-space.
++ */
++ swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+ return tlb_addr;
+ }
+ EXPORT_SYMBOL_GPL(swiotlb_tbl_map_single);
--- /dev/null
+From ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e Mon Sep 17 00:00:00 2001
+From: Halil Pasic <pasic@linux.ibm.com>
+Date: Fri, 11 Feb 2022 02:12:52 +0100
+Subject: swiotlb: fix info leak with DMA_FROM_DEVICE
+
+From: Halil Pasic <pasic@linux.ibm.com>
+
+commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e upstream.
+
+The problem I'm addressing was discovered by the LTP test covering
+CVE-2018-1000204.
+
+A short description of what happens follows:
+1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO
+ interface with: dxfer_len == 524288, dxfer_direction == SG_DXFER_FROM_DEV
+ and a corresponding dxferp (a sketch of such a call follows this list).
+ The peculiar thing about this is that TUR is not reading from the device.
+2) In sg_start_req() the invocation of blk_rq_map_user() effectively
+ bounces the user-space buffer, as if the device were going to transfer
+ into it. Since commit a45b599ad808 ("scsi: sg: allocate with __GFP_ZERO
+ in sg_build_indirect()") we make sure this first bounce buffer is
+ allocated with GFP_ZERO.
+3) For the rest of the story we keep ignoring that we have a TUR, so the
+ device won't touch the buffer we prepare as if we had a DMA_FROM_DEVICE
+ type of situation. My setup uses a virtio-scsi device and the buffer
+ allocated by SG is mapped by the function virtqueue_add_split(), which
+ uses DMA_FROM_DEVICE for the "in" sgs (here scatter-gather and not scsi
+ generics). This mapping involves bouncing via the swiotlb (we need
+ swiotlb to do virtio in protected guests like s390 Secure Execution or
+ AMD SEV).
+4) When the SCSI TUR is done, we first copy back the content of the second
+ (that is swiotlb) bounce buffer (which most likely contains some
+ previous IO data), to the first bounce buffer, which contains all
+ zeros. Then we copy back the content of the first bounce buffer to
+ the user-space buffer.
+5) The test case detects that the buffer, which it zero-initialized, is
+ not all zeros and fails.
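+
+For illustration only (this is not the LTP test itself; the device node,
+timeout and error handling are assumptions), an SG_IO call of the shape
+described in step 1 looks roughly like this:
+
+  #include <fcntl.h>
+  #include <string.h>
+  #include <sys/ioctl.h>
+  #include <scsi/sg.h>
+
+  int main(void)
+  {
+          unsigned char cdb[6] = { 0x00, 0, 0, 0, 0, 0 }; /* TEST UNIT READY */
+          static unsigned char buf[524288];               /* zeroed (BSS) */
+          unsigned char sense[32];
+          struct sg_io_hdr io;
+          int fd = open("/dev/sg0", O_RDWR);              /* assumed device */
+
+          if (fd < 0)
+                  return 1;
+          memset(&io, 0, sizeof(io));
+          io.interface_id = 'S';
+          io.cmdp = cdb;
+          io.cmd_len = sizeof(cdb);
+          io.dxfer_direction = SG_DXFER_FROM_DEV; /* "from device"... */
+          io.dxferp = buf;
+          io.dxfer_len = sizeof(buf);             /* ...but TUR moves no data */
+          io.sbp = sense;
+          io.mx_sb_len = sizeof(sense);
+          io.timeout = 5000;                      /* milliseconds */
+          ioctl(fd, SG_IO, &io);
+          /* The test then checks that buf is still all zeros. */
+          return 0;
+  }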
+
+One can argue that this is a swiotlb problem, because without swiotlb
+we leak all zeros, and the swiotlb should be transparent in the sense
+that it does not affect the outcome (if all other participants are well
+behaved).
+
+Copying the content of the original buffer into the swiotlb buffer is
+the only way I can think of to make swiotlb transparent in such
+scenarios. So let's do just that if in doubt, but allow the driver
+to tell us that the whole mapped buffer is going to be overwritten,
+in which case we can preserve the old behavior and avoid the performance
+impact of the extra bounce.
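+
+For example, a driver that knows its device always overwrites the whole
+receive buffer could opt out of the extra bounce roughly as below (a
+minimal sketch; the helper name, device, buffer and length are
+placeholders, not part of this patch):
+
+  #include <linux/dma-mapping.h>
+
+  static int example_map_rx_buffer(struct device *dev, void *buf,
+                                   size_t len, dma_addr_t *handle)
+  {
+          /*
+           * The driver promises the device will overwrite the entire
+           * buffer, so the bounce-buffer copy on map may be skipped.
+           */
+          *handle = dma_map_single_attrs(dev, buf, len, DMA_FROM_DEVICE,
+                                         DMA_ATTR_OVERWRITE);
+          if (dma_mapping_error(dev, *handle))
+                  return -ENOMEM;
+          return 0;
+  }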
+
+Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
+Signed-off-by: Christoph Hellwig <hch@lst.de>
+[OP: backport to 4.14: apply swiotlb_tbl_map_single() changes in lib/swiotlb.c]
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/DMA-attributes.txt | 8 ++++++++
+ include/linux/dma-mapping.h | 8 ++++++++
+ lib/swiotlb.c | 3 ++-
+ 3 files changed, 18 insertions(+), 1 deletion(-)
+
+--- a/Documentation/DMA-attributes.txt
++++ b/Documentation/DMA-attributes.txt
+@@ -156,3 +156,11 @@ accesses to DMA buffers in both privileg
+ subsystem that the buffer is fully accessible at the elevated privilege
+ level (and ideally inaccessible or at least read-only at the
+ lesser-privileged levels).
++
++DMA_ATTR_OVERWRITE
++------------------
++
++This is a hint to the DMA-mapping subsystem that the device is expected to
++overwrite the entire mapped size, thus the caller does not require any of the
++previous buffer contents to be preserved. This allows bounce-buffering
++implementations to optimise DMA_FROM_DEVICE transfers.
+--- a/include/linux/dma-mapping.h
++++ b/include/linux/dma-mapping.h
+@@ -71,6 +71,14 @@
+ #define DMA_ATTR_PRIVILEGED (1UL << 9)
+
+ /*
++ * This is a hint to the DMA-mapping subsystem that the device is expected
++ * to overwrite the entire mapped size, thus the caller does not require any
++ * of the previous buffer contents to be preserved. This allows
++ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
++ */
++#define DMA_ATTR_OVERWRITE (1UL << 10)
++
++/*
+ * A dma_addr_t can hold any valid DMA or bus address for the platform.
+ * It can be given to a device to use as a DMA source or target. A CPU cannot
+ * reference a dma_addr_t directly because there may be translation between
+--- a/lib/swiotlb.c
++++ b/lib/swiotlb.c
+@@ -601,7 +601,8 @@ found:
+ for (i = 0; i < nslots; i++)
+ io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
+ if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+- (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
++ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
++ dir == DMA_BIDIRECTIONAL))
+ swiotlb_bounce(orig_addr, tlb_addr, size, DMA_TO_DEVICE);
+
+ return tlb_addr;