tpm-fix-error-handling-in-async-work.patch
staging-fbtft-fb_st7789v-reset-display-before-initialization.patch
llc-fix-netdevice-reference-leaks-in-llc_ui_bind.patch
-swiotlb-fix-info-leak-with-dma_from_device.patch
-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
asoc-sti-fix-deadlock-via-snd_pcm_stop_xrun-call.patch
alsa-oss-fix-pcm-oss-buffer-allocation-overflow.patch
alsa-usb-audio-add-mapping-for-new-corsair-virtuoso-se.patch
+++ /dev/null
-From ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e Mon Sep 17 00:00:00 2001
-From: Halil Pasic <pasic@linux.ibm.com>
-Date: Fri, 11 Feb 2022 02:12:52 +0100
-Subject: swiotlb: fix info leak with DMA_FROM_DEVICE
-
-From: Halil Pasic <pasic@linux.ibm.com>
-
-commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e upstream.
-
-The problem I'm addressing was discovered by the LTP test covering
-cve-2018-1000204.
-
-A short description of what happens follows:
-1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO
- interface with: dxfer_len == 524288, dxfer_dir == SG_DXFER_FROM_DEV
- and a corresponding dxferp. The peculiar thing about this is that TUR
- is not reading from the device.
-2) In sg_start_req() the invocation of blk_rq_map_user() effectively
- bounces the user-space buffer, as if the device were going to transfer
- into it. Since commit a45b599ad808 ("scsi: sg: allocate with __GFP_ZERO
- in sg_build_indirect()") we make sure this first bounce buffer is
- allocated with __GFP_ZERO.
-3) For the rest of the story we keep ignoring that we have a TUR, so the
- device won't touch the buffer we prepare as if we had a
- DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi device
- and the buffer allocated by SG is mapped by the function
- virtqueue_add_split() which uses DMA_FROM_DEVICE for the "in" sgs (here
- scatter-gather and not scsi generics). This mapping involves bouncing
- via the swiotlb (we need swiotlb to do virtio in protected guests like
- s390 Secure Execution, or AMD SEV).
-4) When the SCSI TUR is done, we first copy back the content of the second
- (that is, the swiotlb) bounce buffer (which most likely contains some
- previous IO data) to the first bounce buffer, which contains all
- zeros. Then we copy back the content of the first bounce buffer to
- the user-space buffer.
-5) The test case detects that the buffer, which it zero-initialized,
- is not all zeros and fails.
-
-One can argue that this is a swiotlb problem, because without swiotlb
-we leak all zeros, and the swiotlb should be transparent in the sense that
-it does not affect the outcome (if all other participants are well
-behaved).
-
-Copying the content of the original buffer into the swiotlb buffer is
-the only way I can think of to make swiotlb transparent in such
-scenarios. So let's do just that if in doubt, but allow the driver
-to tell us that the whole mapped buffer is going to be overwritten,
-in which case we can preserve the old behavior and avoid the performance
-impact of the extra bounce.
-
-Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
-Signed-off-by: Christoph Hellwig <hch@lst.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
-
----
- Documentation/core-api/dma-attributes.rst | 8 ++++++++
- include/linux/dma-mapping.h | 8 ++++++++
- kernel/dma/swiotlb.c | 3 ++-
- 3 files changed, 18 insertions(+), 1 deletion(-)
-
---- a/Documentation/core-api/dma-attributes.rst
-+++ b/Documentation/core-api/dma-attributes.rst
-@@ -130,3 +130,11 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
-+
-+DMA_ATTR_OVERWRITE
-+------------------
-+
-+This is a hint to the DMA-mapping subsystem that the device is expected to
-+overwrite the entire mapped size, thus the caller does not require any of the
-+previous buffer contents to be preserved. This allows bounce-buffering
-+implementations to optimise DMA_FROM_DEVICE transfers.
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -62,6 +62,14 @@
- #define DMA_ATTR_PRIVILEGED (1UL << 9)
-
- /*
-+ * This is a hint to the DMA-mapping subsystem that the device is expected
-+ * to overwrite the entire mapped size, thus the caller does not require any
-+ * of the previous buffer contents to be preserved. This allows
-+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-+ */
-+#define DMA_ATTR_OVERWRITE (1UL << 10)
-+
-+/*
- * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
- * be given to a device to use as a DMA source or target. It is specific to a
- * given device and there may be a translation between the CPU physical address
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -598,7 +598,8 @@ phys_addr_t swiotlb_tbl_map_single(struc
-
- tlb_addr = slot_addr(io_tlb_start, index) + offset;
- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-- (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-+ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-+ dir == DMA_BIDIRECTIONAL))
- swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
- return tlb_addr;
- }
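(For reference: the failure described in the commit message above can be
reproduced from user space along the following lines. This is only a sketch
modelled on the LTP test for cve-2018-1000204, not the test itself; the
/dev/sg0 path and the exact field values are illustrative.)

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

int main(void)
{
	unsigned char cdb[6] = { 0 };		/* TEST UNIT READY */
	unsigned char sense[32];
	size_t i, len = 524288;
	unsigned char *buf = calloc(1, len);	/* zero-initialised dxferp */
	struct sg_io_hdr io = { 0 };
	int fd = open("/dev/sg0", O_RDWR);	/* device node is illustrative */

	if (fd < 0 || !buf)
		return 1;

	io.interface_id    = 'S';
	io.cmd_len         = sizeof(cdb);
	io.cmdp            = cdb;
	io.dxfer_direction = SG_DXFER_FROM_DEV;
	io.dxfer_len       = len;
	io.dxferp          = buf;
	io.mx_sb_len       = sizeof(sense);
	io.sbp             = sense;
	io.timeout         = 20000;		/* milliseconds */

	if (ioctl(fd, SG_IO, &io) < 0)
		return 1;

	/* TUR transfers no data, so the zeroed buffer must stay all zeros. */
	for (i = 0; i < len; i++)
		if (buf[i]) {
			fprintf(stderr, "leak: non-zero byte at offset %zu\n", i);
			return 1;
		}
	return 0;
}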
+++ /dev/null
-From aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 Mon Sep 17 00:00:00 2001
-From: Halil Pasic <pasic@linux.ibm.com>
-Date: Sat, 5 Mar 2022 18:07:14 +0100
-Subject: swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
-
-From: Halil Pasic <pasic@linux.ibm.com>
-
-commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.
-
-Unfortunately, we ended up merging an old version of the patch "fix info
-leak with DMA_FROM_DEVICE" instead of merging the latest one. Christoph
-(the swiotlb maintainer) asked me to create an incremental fix after I
-pointed out the mix-up and asked him for guidance. So here we go.
-
-The main differences between what we got and what was agreed are:
-* swiotlb_sync_single_for_device is also required to do an extra bounce
-* We decided not to introduce DMA_ATTR_OVERWRITE until we have exploiters
-* The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
- must take precedence over DMA_ATTR_SKIP_CPU_SYNC
-
-Thus this patch removes DMA_ATTR_OVERWRITE, and makes
-swiotlb_sync_single_for_device() bounce unconditionally (that is, also
-when dir == DMA_FROM_DEVICE) in order to avoid synchronising back stale
-data from the swiotlb buffer.
-
-Let me note that if the size used with the dma_sync_* API is less than
-the size used with dma_[un]map_*, under certain circumstances we may
-still end up with swiotlb not being transparent. In that sense, this is
-not a perfect fix either.
-
-To make this bulletproof, we would have to bounce the entire
-mapping/bounce buffer. For that we would have to figure out the starting
-address and the size of the mapping in
-swiotlb_sync_single_for_device(). While this does seem possible, there
-seems to be no firm consensus on how things are supposed to work.
-
-Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
-Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
-Cc: stable@vger.kernel.org
-Reviewed-by: Christoph Hellwig <hch@lst.de>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- Documentation/core-api/dma-attributes.rst | 8 --------
- include/linux/dma-mapping.h | 8 --------
- kernel/dma/swiotlb.c | 25 ++++++++++++++++---------
- 3 files changed, 16 insertions(+), 25 deletions(-)
-
---- a/Documentation/core-api/dma-attributes.rst
-+++ b/Documentation/core-api/dma-attributes.rst
-@@ -130,11 +130,3 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
--
--DMA_ATTR_OVERWRITE
--------------------
--
--This is a hint to the DMA-mapping subsystem that the device is expected to
--overwrite the entire mapped size, thus the caller does not require any of the
--previous buffer contents to be preserved. This allows bounce-buffering
--implementations to optimise DMA_FROM_DEVICE transfers.
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -62,14 +62,6 @@
- #define DMA_ATTR_PRIVILEGED (1UL << 9)
-
- /*
-- * This is a hint to the DMA-mapping subsystem that the device is expected
-- * to overwrite the entire mapped size, thus the caller does not require any
-- * of the previous buffer contents to be preserved. This allows
-- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-- */
--#define DMA_ATTR_OVERWRITE (1UL << 10)
--
--/*
- * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
- * be given to a device to use as a DMA source or target. It is specific to a
- * given device and there may be a translation between the CPU physical address
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -597,10 +597,14 @@ phys_addr_t swiotlb_tbl_map_single(struc
- io_tlb_orig_addr[index + i] = slot_addr(orig_addr, i);
-
- tlb_addr = slot_addr(io_tlb_start, index) + offset;
-- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-- (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-- dir == DMA_BIDIRECTIONAL))
-- swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-+ /*
-+ * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
-+ * to the tlb buffer, if we knew for sure the device will
-+ * overwrite the entire current content. But we don't. Thus
-+ * unconditional bounce may prevent leaking swiotlb content (i.e.
-+ * kernel memory) to user-space.
-+ */
-+ swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
- return tlb_addr;
- }
-
-@@ -680,11 +684,14 @@ void swiotlb_tbl_sync_single(struct devi
- BUG_ON(dir != DMA_TO_DEVICE);
- break;
- case SYNC_FOR_DEVICE:
-- if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- swiotlb_bounce(orig_addr, tlb_addr,
-- size, DMA_TO_DEVICE);
-- else
-- BUG_ON(dir != DMA_FROM_DEVICE);
-+ /*
-+ * Unconditional bounce is necessary to avoid corruption on
-+ * sync_*_for_cpu or dma_unmap_* when the device didn't
-+ * overwrite the whole length of the bounce buffer.
-+ */
-+ swiotlb_bounce(orig_addr, tlb_addr,
-+ size, DMA_TO_DEVICE);
-+ BUG_ON(!valid_dma_direction(dir));
- break;
- default:
- BUG();
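(Also for reference: a hedged sketch, not taken from either patch above, of
the driver-side pattern the unconditional bounce protects, namely a
DMA_FROM_DEVICE mapping that the device may overwrite only partially. The
example_rx_once() name and the surrounding flow are made up; only the
dma_map_single()/dma_mapping_error()/dma_unmap_single() calls are the real
DMA API.)

#include <linux/device.h>
#include <linux/dma-mapping.h>
#include <linux/errno.h>

/* Hypothetical receive path: "dev" owns some DMA-capable hardware. */
static int example_rx_once(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

	if (dma_mapping_error(dev, dma))
		return -ENOMEM;

	/*
	 * With swiotlb, dma_map_single() now bounces buf into the tlb slot
	 * even for DMA_FROM_DEVICE. If the device then writes fewer than
	 * len bytes, the untouched part of the tlb slot still mirrors buf,
	 * so no stale tlb contents (i.e. other kernel memory) are copied
	 * back into buf later.
	 */

	/* ... program the hardware to receive into "dma" and wait ... */

	/*
	 * If the buffer were handed back to the device for another transfer,
	 * dma_sync_single_for_device() would likewise re-bounce buf into the
	 * tlb slot after this rework.
	 */

	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);	/* tlb -> buf */
	return 0;
}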
staging-fbtft-fb_st7789v-reset-display-before-initialization.patch
thermal-int340x-fix-memory-leak-in-int3400_notify.patch
llc-fix-netdevice-reference-leaks-in-llc_ui_bind.patch
-swiotlb-fix-info-leak-with-dma_from_device.patch
-swiotlb-rework-fix-info-leak-with-dma_from_device.patch
alsa-pcm-add-stream-lock-during-pcm-reset-ioctl-operations.patch
alsa-usb-audio-add-mute-tlv-for-playback-volumes-on-rode-nt-usb.patch
alsa-cmipci-restore-aux-vol-on-suspend-resume.patch
+++ /dev/null
-From ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e Mon Sep 17 00:00:00 2001
-From: Halil Pasic <pasic@linux.ibm.com>
-Date: Fri, 11 Feb 2022 02:12:52 +0100
-Subject: swiotlb: fix info leak with DMA_FROM_DEVICE
-
-From: Halil Pasic <pasic@linux.ibm.com>
-
-commit ddbd89deb7d32b1fbb879f48d68fda1a8ac58e8e upstream.
-
-The problem I'm addressing was discovered by the LTP test covering
-cve-2018-1000204.
-
-A short description of what happens follows:
-1) The test case issues a command code 00 (TEST UNIT READY) via the SG_IO
- interface with: dxfer_len == 524288, dxfer_dir == SG_DXFER_FROM_DEV
- and a corresponding dxferp. The peculiar thing about this is that TUR
- is not reading from the device.
-2) In sg_start_req() the invocation of blk_rq_map_user() effectively
- bounces the user-space buffer, as if the device were going to transfer
- into it. Since commit a45b599ad808 ("scsi: sg: allocate with __GFP_ZERO
- in sg_build_indirect()") we make sure this first bounce buffer is
- allocated with __GFP_ZERO.
-3) For the rest of the story we keep ignoring that we have a TUR, so the
- device won't touch the buffer we prepare as if we had a
- DMA_FROM_DEVICE type of situation. My setup uses a virtio-scsi device
- and the buffer allocated by SG is mapped by the function
- virtqueue_add_split() which uses DMA_FROM_DEVICE for the "in" sgs (here
- scatter-gather and not scsi generics). This mapping involves bouncing
- via the swiotlb (we need swiotlb to do virtio in protected guests like
- s390 Secure Execution, or AMD SEV).
-4) When the SCSI TUR is done, we first copy back the content of the second
- (that is, the swiotlb) bounce buffer (which most likely contains some
- previous IO data) to the first bounce buffer, which contains all
- zeros. Then we copy back the content of the first bounce buffer to
- the user-space buffer.
-5) The test case detects that the buffer, which it zero-initialized,
- is not all zeros and fails.
-
-One can argue that this is a swiotlb problem, because without swiotlb
-we leak all zeros, and the swiotlb should be transparent in the sense that
-it does not affect the outcome (if all other participants are well
-behaved).
-
-Copying the content of the original buffer into the swiotlb buffer is
-the only way I can think of to make swiotlb transparent in such
-scenarios. So let's do just that if in doubt, but allow the driver
-to tell us that the whole mapped buffer is going to be overwritten,
-in which case we can preserve the old behavior and avoid the performance
-impact of the extra bounce.
-
-Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
-Signed-off-by: Christoph Hellwig <hch@lst.de>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- Documentation/DMA-attributes.txt | 10 ++++++++++
- include/linux/dma-mapping.h | 8 ++++++++
- kernel/dma/swiotlb.c | 3 ++-
- 3 files changed, 20 insertions(+), 1 deletion(-)
-
---- a/Documentation/DMA-attributes.txt
-+++ b/Documentation/DMA-attributes.txt
-@@ -156,3 +156,13 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
-+
-+DMA_ATTR_PRIVILEGED
-+-------------------
-+
-+Some advanced peripherals such as remote processors and GPUs perform
-+accesses to DMA buffers in both privileged "supervisor" and unprivileged
-+"user" modes. This attribute is used to indicate to the DMA-mapping
-+subsystem that the buffer is fully accessible at the elevated privilege
-+level (and ideally inaccessible or at least read-only at the
-+lesser-privileged levels).
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -71,6 +71,14 @@
- #define DMA_ATTR_PRIVILEGED (1UL << 9)
-
- /*
-+ * This is a hint to the DMA-mapping subsystem that the device is expected
-+ * to overwrite the entire mapped size, thus the caller does not require any
-+ * of the previous buffer contents to be preserved. This allows
-+ * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-+ */
-+#define DMA_ATTR_OVERWRITE (1UL << 10)
-+
-+/*
- * A dma_addr_t can hold any valid DMA or bus address for the platform.
- * It can be given to a device to use as a DMA source or target. A CPU cannot
- * reference a dma_addr_t directly because there may be translation between
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -572,7 +572,8 @@ found:
- for (i = 0; i < nslots; i++)
- io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-- (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-+ (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-+ dir == DMA_BIDIRECTIONAL))
- swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
- return tlb_addr;
+++ /dev/null
-From aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 Mon Sep 17 00:00:00 2001
-From: Halil Pasic <pasic@linux.ibm.com>
-Date: Sat, 5 Mar 2022 18:07:14 +0100
-Subject: swiotlb: rework "fix info leak with DMA_FROM_DEVICE"
-
-From: Halil Pasic <pasic@linux.ibm.com>
-
-commit aa6f8dcbab473f3a3c7454b74caa46d36cdc5d13 upstream.
-
-Unfortunately, we ended up merging an old version of the patch "fix info
-leak with DMA_FROM_DEVICE" instead of merging the latest one. Christoph
-(the swiotlb maintainer) asked me to create an incremental fix after I
-pointed out the mix-up and asked him for guidance. So here we go.
-
-The main differences between what we got and what was agreed are:
-* swiotlb_sync_single_for_device is also required to do an extra bounce
-* We decided not to introduce DMA_ATTR_OVERWRITE until we have exploiters
-* The implementation of DMA_ATTR_OVERWRITE is flawed: DMA_ATTR_OVERWRITE
- must take precedence over DMA_ATTR_SKIP_CPU_SYNC
-
-Thus this patch removes DMA_ATTR_OVERWRITE, and makes
-swiotlb_sync_single_for_device() bounce unconditionally (that is, also
-when dir == DMA_FROM_DEVICE) in order to avoid synchronising back stale
-data from the swiotlb buffer.
-
-Let me note that if the size used with the dma_sync_* API is less than
-the size used with dma_[un]map_*, under certain circumstances we may
-still end up with swiotlb not being transparent. In that sense, this is
-not a perfect fix either.
-
-To make this bulletproof, we would have to bounce the entire
-mapping/bounce buffer. For that we would have to figure out the starting
-address and the size of the mapping in
-swiotlb_sync_single_for_device(). While this does seem possible, there
-seems to be no firm consensus on how things are supposed to work.
-
-Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
-Fixes: ddbd89deb7d3 ("swiotlb: fix info leak with DMA_FROM_DEVICE")
-Cc: stable@vger.kernel.org
-Reviewed-by: Christoph Hellwig <hch@lst.de>
-Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
----
- Documentation/DMA-attributes.txt | 10 ----------
- include/linux/dma-mapping.h | 8 --------
- kernel/dma/swiotlb.c | 25 ++++++++++++++++---------
- 3 files changed, 16 insertions(+), 27 deletions(-)
-
---- a/Documentation/DMA-attributes.txt
-+++ b/Documentation/DMA-attributes.txt
-@@ -156,13 +156,3 @@ accesses to DMA buffers in both privileg
- subsystem that the buffer is fully accessible at the elevated privilege
- level (and ideally inaccessible or at least read-only at the
- lesser-privileged levels).
--
--DMA_ATTR_PRIVILEGED
---------------------
--
--Some advanced peripherals such as remote processors and GPUs perform
--accesses to DMA buffers in both privileged "supervisor" and unprivileged
--"user" modes. This attribute is used to indicate to the DMA-mapping
--subsystem that the buffer is fully accessible at the elevated privilege
--level (and ideally inaccessible or at least read-only at the
--lesser-privileged levels).
---- a/include/linux/dma-mapping.h
-+++ b/include/linux/dma-mapping.h
-@@ -71,14 +71,6 @@
- #define DMA_ATTR_PRIVILEGED (1UL << 9)
-
- /*
-- * This is a hint to the DMA-mapping subsystem that the device is expected
-- * to overwrite the entire mapped size, thus the caller does not require any
-- * of the previous buffer contents to be preserved. This allows
-- * bounce-buffering implementations to optimise DMA_FROM_DEVICE transfers.
-- */
--#define DMA_ATTR_OVERWRITE (1UL << 10)
--
--/*
- * A dma_addr_t can hold any valid DMA or bus address for the platform.
- * It can be given to a device to use as a DMA source or target. A CPU cannot
- * reference a dma_addr_t directly because there may be translation between
---- a/kernel/dma/swiotlb.c
-+++ b/kernel/dma/swiotlb.c
-@@ -571,10 +571,14 @@ found:
- */
- for (i = 0; i < nslots; i++)
- io_tlb_orig_addr[index+i] = orig_addr + (i << IO_TLB_SHIFT);
-- if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-- (!(attrs & DMA_ATTR_OVERWRITE) || dir == DMA_TO_DEVICE ||
-- dir == DMA_BIDIRECTIONAL))
-- swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-+ /*
-+ * When dir == DMA_FROM_DEVICE we could omit the copy from the orig
-+ * to the tlb buffer, if we knew for sure the device will
-+ * overwrite the entire current content. But we don't. Thus
-+ * unconditional bounce may prevent leaking swiotlb content (i.e.
-+ * kernel memory) to user-space.
-+ */
-+ swiotlb_bounce(orig_addr, tlb_addr, mapping_size, DMA_TO_DEVICE);
-
- return tlb_addr;
- }
-@@ -649,11 +653,14 @@ void swiotlb_tbl_sync_single(struct devi
- BUG_ON(dir != DMA_TO_DEVICE);
- break;
- case SYNC_FOR_DEVICE:
-- if (likely(dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- swiotlb_bounce(orig_addr, tlb_addr,
-- size, DMA_TO_DEVICE);
-- else
-- BUG_ON(dir != DMA_FROM_DEVICE);
-+ /*
-+ * Unconditional bounce is necessary to avoid corruption on
-+ * sync_*_for_cpu or dma_unmap_* when the device didn't
-+ * overwrite the whole length of the bounce buffer.
-+ */
-+ swiotlb_bounce(orig_addr, tlb_addr,
-+ size, DMA_TO_DEVICE);
-+ BUG_ON(!valid_dma_direction(dir));
- break;
- default:
- BUG();