f2fs-check-memory-boundary-by-insane-namelen.patch
f2fs-check-if-file-namelen-exceeds-max-value.patch
arm-8986-1-hw_breakpoint-don-t-invoke-overflow-handl.patch
-x86-build-lto-fix-truncated-.bss-with-fdata-sections.patch
-x86-vmlinux.lds-page-align-end-of-.page_aligned-sect.patch
fbdev-detect-integer-underflow-at-struct-fbcon_ops-c.patch
rds-prevent-kernel-infoleak-in-rds_notify_queue_get.patch
net-x25-fix-x25_neigh-refcnt-leak-when-x25-disconnect.patch
random-fix-circular-include-dependency-on-arm64-after-addition-of-percpu.h.patch
random32-remove-net_rand_state-from-the-latent-entropy-gcc-plugin.patch
random32-move-the-pseudo-random-32-bit-definitions-to-prandom.h.patch
+ext4-fix-direct-i-o-read-error.patch
+++ /dev/null
-From 08dfba9cb1d8b35c132726a0ab8adb8785832714 Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Mon, 15 Apr 2019 09:49:56 -0700
-Subject: x86/build/lto: Fix truncated .bss with -fdata-sections
-
-From: Sami Tolvanen <samitolvanen@google.com>
-
-[ Upstream commit 6a03469a1edc94da52b65478f1e00837add869a3 ]
-
-With CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=y, we compile the kernel with
--fdata-sections, which also splits the .bss section.
-
-The new sections get a .bss.* name, a pattern that is missed by the main
-x86 linker script, which only expects the '.bss' name. This results in the
-second part being discarded, leaving a too-small, truncated .bss section
-and an unhappy, non-working kernel.
-
-Use the common BSS_MAIN macro in the linker script to properly capture
-and merge all the generated BSS sections.
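
A minimal sketch (not part of this patch; file and symbol names are invented)
of why a bare '*(.bss)' input pattern falls short: with -fdata-sections, each
zero-initialized object is emitted into its own .bss.<symbol> input section,
and only a wildcard such as the one BSS_MAIN expands to (it also matches
.bss.* sections when CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is enabled)
collects them all.

	/* bss_demo.c -- compile with: gcc -c -fdata-sections bss_demo.c */
	static char big_buffer[1 << 20];	/* emitted as .bss.big_buffer, not
						 * plain .bss, so a bare *(.bss)
						 * linker pattern never sees it */

	char *demo_ptr(void)
	{
		return big_buffer;		/* keep the object referenced */
	}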
-
-Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
-Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
-Reviewed-by: Kees Cook <keescook@chromium.org>
-Cc: Borislav Petkov <bp@alien8.de>
-Cc: Kees Cook <keescook@chromium.org>
-Cc: Linus Torvalds <torvalds@linux-foundation.org>
-Cc: Nicholas Piggin <npiggin@gmail.com>
-Cc: Nick Desaulniers <ndesaulniers@google.com>
-Cc: Peter Zijlstra <peterz@infradead.org>
-Cc: Thomas Gleixner <tglx@linutronix.de>
-Link: http://lkml.kernel.org/r/20190415164956.124067-1-samitolvanen@google.com
-[ Extended the changelog. ]
-Signed-off-by: Ingo Molnar <mingo@kernel.org>
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/vmlinux.lds.S | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
-index b05da220ea0a2..9b185142219b9 100644
---- a/arch/x86/kernel/vmlinux.lds.S
-+++ b/arch/x86/kernel/vmlinux.lds.S
-@@ -330,7 +330,7 @@ SECTIONS
- .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
- __bss_start = .;
- *(.bss..page_aligned)
-- *(.bss)
-+ *(BSS_MAIN)
- . = ALIGN(PAGE_SIZE);
- __bss_stop = .;
- }
---
-2.25.1
-
+++ /dev/null
-From e1c399c0a0f5bc0f8cf762a8a9a4910d22e0304b Mon Sep 17 00:00:00 2001
-From: Sasha Levin <sashal@kernel.org>
-Date: Tue, 21 Jul 2020 11:34:48 +0200
-Subject: x86, vmlinux.lds: Page-align end of ..page_aligned sections
-
-From: Joerg Roedel <jroedel@suse.de>
-
-[ Upstream commit de2b41be8fcccb2f5b6c480d35df590476344201 ]
-
-On x86-32 the idt_table with 256 entries needs only 2048 bytes. It is
-page-aligned, but the end of the .bss..page_aligned section is not
-guaranteed to be page-aligned.
-
-As a result, objects from other .bss sections may end up on the same 4k
-page as the idt_table, and will accidentally get mapped read-only during
-boot, causing unexpected page faults when the kernel writes to them.
-
-This could be worked around by making the objects in the page-aligned
-sections page-sized, but that's wrong.
-
-Explicit sections which store only page-aligned objects have an implicit
-guarantee that the object is alone in the page in which it is placed. That
-works for all objects except the last one. That's inconsistent.
-
-Enforcing page-sized objects for these sections would wreck memory
-sanitizers, because the object becomes artificially larger than it should
-be and out-of-bounds accesses become legitimate.
-
-Align the end of the .bss..page_aligned and .data..page_aligned sections
-on a page boundary so that all objects placed in these sections are
-guaranteed to have their own page.
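
A sketch (not from this patch; the object names are invented) of how an
unrelated .bss object can share the idt_table's page when the end of
.bss..page_aligned is not padded out to a page boundary. __page_aligned_bss
is the kernel attribute that places an object in that section; it is
expanded by hand here so the example stands alone.

	/* Roughly what the kernel's __page_aligned_bss helper provides. */
	#define __page_aligned_bss \
		__attribute__((__section__(".bss..page_aligned"), __aligned__(4096)))

	static char idt_like_table[2048] __page_aligned_bss;	/* page-aligned start */
	static char other_object[64];				/* plain .bss */

	void *demo_refs[] = { idt_like_table, other_object };	/* keep both referenced */

	/*
	 * Before this patch the input sections are laid out back to back:
	 *
	 *   page N: [ idt_like_table: bytes 0..2047 ][ other_object ... ]
	 *
	 * If page N is later mapped read-only for the IDT, writes to
	 * other_object fault. The added ALIGN(PAGE_SIZE) pushes other_object
	 * onto the next page.
	 */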
-
-[ tglx: Amended changelog ]
-
-Signed-off-by: Joerg Roedel <jroedel@suse.de>
-Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
-Reviewed-by: Kees Cook <keescook@chromium.org>
-Cc: stable@vger.kernel.org
-Link: https://lkml.kernel.org/r/20200721093448.10417-1-joro@8bytes.org
-Signed-off-by: Sasha Levin <sashal@kernel.org>
----
- arch/x86/kernel/vmlinux.lds.S | 1 +
- include/asm-generic/vmlinux.lds.h | 5 ++++-
- 2 files changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
-index 9b185142219b9..5cdd7dc7b9941 100644
---- a/arch/x86/kernel/vmlinux.lds.S
-+++ b/arch/x86/kernel/vmlinux.lds.S
-@@ -330,6 +330,7 @@ SECTIONS
- .bss : AT(ADDR(.bss) - LOAD_OFFSET) {
- __bss_start = .;
- *(.bss..page_aligned)
-+ . = ALIGN(PAGE_SIZE);
- *(BSS_MAIN)
- . = ALIGN(PAGE_SIZE);
- __bss_stop = .;
-diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h
-index a461b6604fd9d..49e373792fc15 100644
---- a/include/asm-generic/vmlinux.lds.h
-+++ b/include/asm-generic/vmlinux.lds.h
-@@ -233,7 +233,8 @@
-
- #define PAGE_ALIGNED_DATA(page_align) \
- . = ALIGN(page_align); \
-- *(.data..page_aligned)
-+ *(.data..page_aligned) \
-+ . = ALIGN(page_align);
-
- #define READ_MOSTLY_DATA(align) \
- . = ALIGN(align); \
-@@ -572,7 +573,9 @@
- . = ALIGN(bss_align); \
- .bss : AT(ADDR(.bss) - LOAD_OFFSET) { \
- BSS_FIRST_SECTIONS \
-+ . = ALIGN(PAGE_SIZE); \
- *(.bss..page_aligned) \
-+ . = ALIGN(PAGE_SIZE); \
- *(.dynbss) \
- *(.bss) \
- *(COMMON) \
---
-2.25.1
-