From: Greg Kroah-Hartman
Date: Mon, 26 Jul 2021 08:53:53 +0000 (+0200)
Subject: 4.19-stable patches
X-Git-Tag: v4.4.277~44
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=a7e00a18098366f4239cb91ddd1db0ba195e3f19;p=thirdparty%2Fkernel%2Fstable-queue.git

4.19-stable patches

added patches:
ixgbe-fix-packet-corruption-due-to-missing-dma-sync.patch
media-ngene-fix-out-of-bounds-bug-in-ngene_command_config_free_buf.patch
selftest-use-mmap-instead-of-posix_memalign-to-allocate-memory.patch
tracing-fix-bug-in-rb_per_cpu_empty-that-might-cause-deadloop.patch
---

diff --git a/queue-4.19/ixgbe-fix-packet-corruption-due-to-missing-dma-sync.patch b/queue-4.19/ixgbe-fix-packet-corruption-due-to-missing-dma-sync.patch
new file mode 100644
index 00000000000..fb35aca6d0e
--- /dev/null
+++ b/queue-4.19/ixgbe-fix-packet-corruption-due-to-missing-dma-sync.patch
@@ -0,0 +1,55 @@
+From 09cfae9f13d51700b0fecf591dcd658fc5375428 Mon Sep 17 00:00:00 2001
+From: Markus Boehme
+Date: Tue, 20 Jul 2021 16:26:19 -0700
+Subject: ixgbe: Fix packet corruption due to missing DMA sync
+
+From: Markus Boehme
+
+commit 09cfae9f13d51700b0fecf591dcd658fc5375428 upstream.
+
+When receiving a packet with multiple fragments, hardware may still
+touch the first fragment until the entire packet has been received. The
+driver therefore keeps the first fragment mapped for DMA until end of
+packet has been asserted, and delays its dma_sync call until then.
+
+The driver tries to fit multiple receive buffers on one page. When using
+3K receive buffers (e.g. using Jumbo frames and legacy-rx is turned
+off/build_skb is being used) on an architecture with 4K pages, the
+driver allocates an order 1 compound page and uses one page per receive
+buffer. To determine the correct offset for a delayed DMA sync of the
+first fragment of a multi-fragment packet, the driver then cannot just
+use PAGE_MASK on the DMA address but has to construct a mask based on
+the actual size of the backing page.
+
+Using PAGE_MASK in the 3K RX buffer/4K page architecture configuration
+will always sync the first page of a compound page. With the SWIOTLB
+enabled this can lead to corrupted packets (zeroed out first fragment,
+re-used garbage from another packet) and various consequences, such as
+slow/stalling data transfers and connection resets. For example, testing
+on a link with MTU exceeding 3058 bytes on a host with SWIOTLB enabled
+(e.g. "iommu=soft swiotlb=262144,force") TCP transfers quickly fizzle
+out without this patch.
+
+Cc: stable@vger.kernel.org
+Fixes: 0c5661ecc5dd7 ("ixgbe: fix crash in build_skb Rx code path")
+Signed-off-by: Markus Boehme
+Tested-by: Tony Brelinski
+Signed-off-by: Tony Nguyen
+Signed-off-by: David S. Miller
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/net/ethernet/intel/ixgbe/ixgbe_main.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
++++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+@@ -1823,7 +1823,8 @@ static void ixgbe_dma_sync_frag(struct i
+ struct sk_buff *skb)
+ {
+ if (ring_uses_build_skb(rx_ring)) {
+- unsigned long offset = (unsigned long)(skb->data) & ~PAGE_MASK;
++ unsigned long mask = (unsigned long)ixgbe_rx_pg_size(rx_ring) - 1;
++ unsigned long offset = (unsigned long)(skb->data) & mask;
+
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ IXGBE_CB(skb)->dma,
diff --git a/queue-4.19/media-ngene-fix-out-of-bounds-bug-in-ngene_command_config_free_buf.patch b/queue-4.19/media-ngene-fix-out-of-bounds-bug-in-ngene_command_config_free_buf.patch
new file mode 100644
index 00000000000..5e8c95837a3
--- /dev/null
+++ b/queue-4.19/media-ngene-fix-out-of-bounds-bug-in-ngene_command_config_free_buf.patch
@@ -0,0 +1,82 @@
+From 8d4abca95ecc82fc8c41912fa0085281f19cc29f Mon Sep 17 00:00:00 2001
+From: "Gustavo A. R. Silva"
+Date: Mon, 19 Apr 2021 18:43:32 -0500
+Subject: media: ngene: Fix out-of-bounds bug in ngene_command_config_free_buf()
+
+From: Gustavo A. R. Silva
+
+commit 8d4abca95ecc82fc8c41912fa0085281f19cc29f upstream.
+
+Fix an 11-year old bug in ngene_command_config_free_buf() while
+addressing the following warnings caught with -Warray-bounds:
+
+arch/alpha/include/asm/string.h:22:16: warning: '__builtin_memcpy' offset [12, 16] from the object at 'com' is out of the bounds of referenced subobject 'config' with type 'unsigned char' at offset 10 [-Warray-bounds]
+arch/x86/include/asm/string_32.h:182:25: warning: '__builtin_memcpy' offset [12, 16] from the object at 'com' is out of the bounds of referenced subobject 'config' with type 'unsigned char' at offset 10 [-Warray-bounds]
+
+The problem is that the original code is trying to copy 6 bytes of
+data into a one-byte size member _config_ of the wrong structure
+FW_CONFIGURE_BUFFERS, in a single call to memcpy(). This causes a
+legitimate compiler warning because memcpy() overruns the length
+of &com.cmd.ConfigureBuffers.config. It seems that the right
+structure is FW_CONFIGURE_FREE_BUFFERS, instead, because it contains
+6 more members apart from the header _hdr_. Also, the name of
+the function ngene_command_config_free_buf() suggests that the actual
+intention is to ConfigureFreeBuffers, instead of ConfigureBuffers
+(which takes place in the function ngene_command_config_buf(), above).
+
+Fix this by enclosing those 6 members of struct FW_CONFIGURE_FREE_BUFFERS
+into new struct config, and use &com.cmd.ConfigureFreeBuffers.config as
+the destination address, instead of &com.cmd.ConfigureBuffers.config,
+when calling memcpy().
+
+This also helps with the ongoing efforts to globally enable
+-Warray-bounds and get us closer to being able to tighten the
+FORTIFY_SOURCE routines on memcpy().
+
+Link: https://github.com/KSPP/linux/issues/109
+Fixes: dae52d009fc9 ("V4L/DVB: ngene: Initial check-in")
+Cc: stable@vger.kernel.org
+Reported-by: kernel test robot
+Reviewed-by: Kees Cook
+Signed-off-by: Gustavo A. R. Silva
+Link: https://lore.kernel.org/linux-hardening/20210420001631.GA45456@embeddedor/
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/media/pci/ngene/ngene-core.c | 2 +-
+ drivers/media/pci/ngene/ngene.h | 14 ++++++++------
+ 2 files changed, 9 insertions(+), 7 deletions(-)
+
+--- a/drivers/media/pci/ngene/ngene-core.c
++++ b/drivers/media/pci/ngene/ngene-core.c
+@@ -398,7 +398,7 @@ static int ngene_command_config_free_buf
+
+ com.cmd.hdr.Opcode = CMD_CONFIGURE_FREE_BUFFER;
+ com.cmd.hdr.Length = 6;
+- memcpy(&com.cmd.ConfigureBuffers.config, config, 6);
++ memcpy(&com.cmd.ConfigureFreeBuffers.config, config, 6);
+ com.in_len = 6;
+ com.out_len = 0;
+
+--- a/drivers/media/pci/ngene/ngene.h
++++ b/drivers/media/pci/ngene/ngene.h
+@@ -419,12 +419,14 @@ enum _BUFFER_CONFIGS {
+
+ struct FW_CONFIGURE_FREE_BUFFERS {
+ struct FW_HEADER hdr;
+- u8 UVI1_BufferLength;
+- u8 UVI2_BufferLength;
+- u8 TVO_BufferLength;
+- u8 AUD1_BufferLength;
+- u8 AUD2_BufferLength;
+- u8 TVA_BufferLength;
++ struct {
++ u8 UVI1_BufferLength;
++ u8 UVI2_BufferLength;
++ u8 TVO_BufferLength;
++ u8 AUD1_BufferLength;
++ u8 AUD2_BufferLength;
++ u8 TVA_BufferLength;
++ } __packed config;
+ } __attribute__ ((__packed__));
+
+ struct FW_CONFIGURE_UART {
diff --git a/queue-4.19/selftest-use-mmap-instead-of-posix_memalign-to-allocate-memory.patch b/queue-4.19/selftest-use-mmap-instead-of-posix_memalign-to-allocate-memory.patch
new file mode 100644
index 00000000000..6b9779467dc
--- /dev/null
+++ b/queue-4.19/selftest-use-mmap-instead-of-posix_memalign-to-allocate-memory.patch
@@ -0,0 +1,56 @@
+From 0db282ba2c12c1515d490d14a1ff696643ab0f1b Mon Sep 17 00:00:00 2001
+From: Peter Collingbourne
+Date: Fri, 23 Jul 2021 15:50:04 -0700
+Subject: selftest: use mmap instead of posix_memalign to allocate memory
+
+From: Peter Collingbourne
+
+commit 0db282ba2c12c1515d490d14a1ff696643ab0f1b upstream.
+
+This test passes pointers obtained from anon_allocate_area to the
+userfaultfd and mremap APIs. This causes a problem if the system
+allocator returns tagged pointers because with the tagged address ABI
+the kernel rejects tagged addresses passed to these APIs, which would
+end up causing the test to fail. To make this test compatible with such
+system allocators, stop using the system allocator to allocate memory in
+anon_allocate_area, and instead just use mmap.
+
+Link: https://lkml.kernel.org/r/20210714195437.118982-3-pcc@google.com
+Link: https://linux-review.googlesource.com/id/Icac91064fcd923f77a83e8e133f8631c5b8fc241
+Fixes: c47174fc362a ("userfaultfd: selftest")
+Co-developed-by: Lokesh Gidra
+Signed-off-by: Lokesh Gidra
+Signed-off-by: Peter Collingbourne
+Reviewed-by: Catalin Marinas
+Cc: Vincenzo Frascino
+Cc: Dave Martin
+Cc: Will Deacon
+Cc: Andrea Arcangeli
+Cc: Alistair Delva
+Cc: William McVicker
+Cc: Evgenii Stepanov
+Cc: Mitch Phillips
+Cc: Andrey Konovalov
+Cc: [5.4]
+Signed-off-by: Andrew Morton
+Signed-off-by: Linus Torvalds
+Signed-off-by: Greg Kroah-Hartman
+---
+ tools/testing/selftests/vm/userfaultfd.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+--- a/tools/testing/selftests/vm/userfaultfd.c
++++ b/tools/testing/selftests/vm/userfaultfd.c
+@@ -129,8 +129,10 @@ static int anon_release_pages(char *rel_
+
+ static void anon_allocate_area(void **alloc_area)
+ {
+- if (posix_memalign(alloc_area, page_size, nr_pages * page_size)) {
+- fprintf(stderr, "out of memory\n");
++ *alloc_area = mmap(NULL, nr_pages * page_size, PROT_READ | PROT_WRITE,
++ MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
++ if (*alloc_area == MAP_FAILED) {
++ fprintf(stderr, "mmap of anonymous memory failed");
+ *alloc_area = NULL;
+ }
+ }
diff --git a/queue-4.19/series b/queue-4.19/series
index 7aa7c68f89c..7e3ae4fa92b 100644
--- a/queue-4.19/series
+++ b/queue-4.19/series
@@ -102,3 +102,7 @@ usb-serial-option-add-support-for-u-blox-lara-r6-family.patch
 usb-serial-cp210x-fix-comments-for-ge-cs1000.patch
 usb-serial-cp210x-add-id-for-cel-em3588-usb-zigbee-stick.patch
 usb-dwc2-gadget-fix-sending-zero-length-packet-in-ddma-mode.patch
+tracing-fix-bug-in-rb_per_cpu_empty-that-might-cause-deadloop.patch
+media-ngene-fix-out-of-bounds-bug-in-ngene_command_config_free_buf.patch
+ixgbe-fix-packet-corruption-due-to-missing-dma-sync.patch
+selftest-use-mmap-instead-of-posix_memalign-to-allocate-memory.patch
diff --git a/queue-4.19/tracing-fix-bug-in-rb_per_cpu_empty-that-might-cause-deadloop.patch b/queue-4.19/tracing-fix-bug-in-rb_per_cpu_empty-that-might-cause-deadloop.patch
new file mode 100644
index 00000000000..5fce11ce3b3
--- /dev/null
+++ b/queue-4.19/tracing-fix-bug-in-rb_per_cpu_empty-that-might-cause-deadloop.patch
@@ -0,0 +1,102 @@
+From 67f0d6d9883c13174669f88adac4f0ee656cc16a Mon Sep 17 00:00:00 2001
+From: Haoran Luo
+Date: Wed, 21 Jul 2021 14:12:07 +0000
+Subject: tracing: Fix bug in rb_per_cpu_empty() that might cause deadloop.
+
+From: Haoran Luo
+
+commit 67f0d6d9883c13174669f88adac4f0ee656cc16a upstream.
+
+The "rb_per_cpu_empty()" misinterprets the condition (as not-empty) when
+"head_page" and "commit_page" of "struct ring_buffer_per_cpu" point to
+the same buffer page, whose "buffer_data_page" is empty and "read" field
+is non-zero.
+
+An error scenario could be constructed as follows (kernel perspective):
+
+1. All pages in the buffer have been accessed by reader(s) so that all of
+them will have a non-zero "read" field.
+
+2. Read and clear all buffer pages so that "rb_num_of_entries()" will
+return 0, indicating there's no more data to read. It is also required
+that the "read_page", "commit_page" and "tail_page" point to the same
+page, while "head_page" is the next page of them.
+
+3. Invoke "ring_buffer_lock_reserve()" with a large enough "length"
+so that it shoots past the end of the current tail buffer page. Now the
+"head_page", "commit_page" and "tail_page" point to the same page.
+
+4. Discard the current event with "ring_buffer_discard_commit()", so that
+"head_page", "commit_page" and "tail_page" point to a page whose buffer
+data page is now empty.
+
+When the error scenario has been constructed, "tracing_read_pipe" will
+be trapped inside a deadloop: "trace_empty()" returns 0 since
+"rb_per_cpu_empty()" returns 0 when it hits the CPU containing such a
+constructed ring buffer. Then "trace_find_next_entry_inc()" always
+returns NULL since "rb_num_of_entries()" reports there's no more entry
+to read. Finally "trace_seq_to_user()" returns "-EBUSY", sending
+"tracing_read_pipe" back to the start of the "waitagain" loop.
+
+I've also written a proof-of-concept script to construct the scenario
+and trigger the bug automatically; you can use it to trace and validate
+my reasoning above:
+
+ https://github.com/aegistudio/RingBufferDetonator.git
+
+Tests have been carried out on Linux kernel 5.14-rc2
+(2734d6c1b1a089fb593ef6a23d4b70903526fe0c), my fixed version
+of the kernel (for testing whether my update fixes the bug) and
+some older kernels (for the range of affected kernels). Test results are
+also attached to the proof-of-concept repository.
+
+Link: https://lore.kernel.org/linux-trace-devel/YPaNxsIlb2yjSi5Y@aegistudio/
+Link: https://lore.kernel.org/linux-trace-devel/YPgrN85WL9VyrZ55@aegistudio
+
+Cc: stable@vger.kernel.org
+Fixes: bf41a158cacba ("ring-buffer: make reentrant")
+Suggested-by: Linus Torvalds
+Signed-off-by: Haoran Luo
+Signed-off-by: Steven Rostedt (VMware)
+Signed-off-by: Greg Kroah-Hartman
+---
+ kernel/trace/ring_buffer.c | 28 ++++++++++++++++++++++++----
+ 1 file changed, 24 insertions(+), 4 deletions(-)
+
+--- a/kernel/trace/ring_buffer.c
++++ b/kernel/trace/ring_buffer.c
+@@ -3172,10 +3172,30 @@ static bool rb_per_cpu_empty(struct ring
+ if (unlikely(!head))
+ return true;
+
+- return reader->read == rb_page_commit(reader) &&
+- (commit == reader ||
+- (commit == head &&
+- head->read == rb_page_commit(commit)));
++ /* Reader should exhaust content in reader page */
++ if (reader->read != rb_page_commit(reader))
++ return false;
++
++ /*
++ * If writers are committing on the reader page, knowing all
++ * committed content has been read, the ring buffer is empty.
++ */
++ if (commit == reader)
++ return true;
++
++ /*
++ * If writers are committing on a page other than reader page
++ * and head page, there should always be content to read.
++ */
++ if (commit != head)
++ return false;
++
++ /*
++ * Writers are committing on the head page, we just need
++ * to care about there're committed data, and the reader will
++ * swap reader page with head page when it is to read data.
++ */
++ return rb_page_commit(commit) == 0;
+ }
+
+ /**