--- /dev/null
+From e82553c10b0899994153f9bf0af333c0a1550fd7 Mon Sep 17 00:00:00 2001
+From: Johannes Weiner <hannes@cmpxchg.org>
+Date: Tue, 9 Feb 2021 13:42:28 -0800
+Subject: Revert "mm: memcontrol: avoid workload stalls when lowering memory.high"
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Johannes Weiner <hannes@cmpxchg.org>
+
+commit e82553c10b0899994153f9bf0af333c0a1550fd7 upstream.
+
+This reverts commit 536d3bf261a2fc3b05b3e91e7eef7383443015cf, as it can
+cause writers to memory.high to get stuck in the kernel forever,
+performing page reclaim and consuming excessive amounts of CPU cycles.
+
+Before the patch, a write to memory.high would first put the new limit
+in place for the workload, and then reclaim the requested delta. After
+the patch, the kernel tries to reclaim the delta before putting the new
+limit into place, in order to not overwhelm the workload with a sudden,
+large excess over the limit. However, if reclaim is actively racing
+with new allocations from the uncurbed workload, it can keep the write()
+working inside the kernel indefinitely.
+
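The difference between the two sequences can be illustrated with a small user-space model (a hypothetical simplification for illustration only, not kernel code; the function name and numbers are invented). Reclaiming before the new limit is installed lets the uncurbed workload refill everything reclaimed, so the loop never converges; installing the limit first curbs the workload and bounds the loop:

```c
#include <stdbool.h>

/*
 * Toy model of the two memory.high write sequences. Each pass frees
 * reclaim_per_pass pages, while a racing workload allocates
 * alloc_per_pass pages per pass whenever that keeps it under the
 * currently-installed limit. Returns the number of passes needed to
 * reach the target, or -1 if there is no convergence within max_passes.
 */
static int passes_to_converge(long usage, long target, long old_limit,
                              long reclaim_per_pass, long alloc_per_pass,
                              bool set_limit_first, int max_passes)
{
    long limit = set_limit_first ? target : old_limit;
    int passes;

    for (passes = 0; usage > target; passes++) {
        if (passes == max_passes)
            return -1;           /* stuck: reclaim races allocations */
        usage -= reclaim_per_pass;
        /* the workload keeps allocating up to the installed limit */
        if (usage + alloc_per_pass <= limit)
            usage += alloc_per_pass;
    }
    return passes;
}
```

With the limit installed first, the workload can no longer allocate above the target, so each pass makes forward progress; with the old limit still in place, each reclaimed batch is immediately re-allocated.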
+This is causing problems in Facebook production. A privileged
+system-level daemon that adjusts memory.high for various workloads
+running on a host can get unexpectedly stuck in the kernel and
+essentially turn into a sort of involuntary kswapd for one of the
+workloads. We've observed that daemon busy-spin in a write() for
+minutes at a time, neglecting its other duties on the system, and
+expending privileged system resources on behalf of a workload.
+
+To remedy this, we have first considered changing the reclaim logic to
+break out after a couple of loops - whether the workload has converged
+to the new limit or not - and bound the write() call this way. However,
+the root cause that inspired the sequence change in the first place has
+been fixed through other means, and so a revert back to the proven
+limit-setting sequence, also used by memory.max, is preferable.
+
+The sequence was changed to avoid extreme latencies in the workload when
+the limit was lowered: the sudden, large excess created by the limit
+lowering would erroneously trigger the penalty sleeping code that is
+meant to throttle excessive growth from below. Allocating threads could
+end up sleeping long after the write() had already reclaimed the delta
+for which they were being punished.
+
+However, erroneous throttling also caused problems in other scenarios at
+around the same time. This resulted in commit b3ff92916af3 ("mm, memcg:
+reclaim more aggressively before high allocator throttling"), included
+in the same release as the offending commit. When allocating threads
+now encounter large excess caused by a racing write() to memory.high,
+instead of entering punitive sleeps, they will simply be tasked with
+helping reclaim down the excess, and will be held no longer than it
+takes to accomplish that. This is in line with regular limit
+enforcement - i.e. if the workload allocates up against or over an
+otherwise unchanged limit from below.
+
+With the patch breaking userspace, and the root cause addressed by other
+means already, revert it again.
+
+Link: https://lkml.kernel.org/r/20210122184341.292461-1-hannes@cmpxchg.org
+Fixes: 536d3bf261a2 ("mm: memcontrol: avoid workload stalls when lowering memory.high")
+Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
+Reported-by: Tejun Heo <tj@kernel.org>
+Acked-by: Chris Down <chris@chrisdown.name>
+Acked-by: Michal Hocko <mhocko@suse.com>
+Cc: Roman Gushchin <guro@fb.com>
+Cc: Shakeel Butt <shakeelb@google.com>
+Cc: Michal Koutný <mkoutny@suse.com>
+Cc: <stable@vger.kernel.org> [5.8+]
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ mm/memcontrol.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/mm/memcontrol.c
++++ b/mm/memcontrol.c
+@@ -6320,6 +6320,8 @@ static ssize_t memory_high_write(struct
+ if (err)
+ return err;
+
++ page_counter_set_high(&memcg->memory, high);
++
+ for (;;) {
+ unsigned long nr_pages = page_counter_read(&memcg->memory);
+ unsigned long reclaimed;
+@@ -6343,10 +6345,7 @@ static ssize_t memory_high_write(struct
+ break;
+ }
+
+- page_counter_set_high(&memcg->memory, high);
+-
+ memcg_wb_domain_size_changed(memcg);
+-
+ return nbytes;
+ }
+
drm-i915-fix-icl-mg-phy-vswing-handling.patch
drm-i915-skip-vswing-programming-for-tbt.patch
nilfs2-make-splice-write-available-again.patch
+revert-mm-memcontrol-avoid-workload-stalls-when-lowering-memory.high.patch
+squashfs-avoid-out-of-bounds-writes-in-decompressors.patch
+squashfs-add-more-sanity-checks-in-id-lookup.patch
+squashfs-add-more-sanity-checks-in-inode-lookup.patch
+squashfs-add-more-sanity-checks-in-xattr-id-lookup.patch
--- /dev/null
+From f37aa4c7366e23f91b81d00bafd6a7ab54e4a381 Mon Sep 17 00:00:00 2001
+From: Phillip Lougher <phillip@squashfs.org.uk>
+Date: Tue, 9 Feb 2021 13:41:53 -0800
+Subject: squashfs: add more sanity checks in id lookup
+
+From: Phillip Lougher <phillip@squashfs.org.uk>
+
+commit f37aa4c7366e23f91b81d00bafd6a7ab54e4a381 upstream.
+
+Syzbot has reported a number of "slab-out-of-bounds read" and
+"use-after-free read" errors which have been identified as being caused
+by a corrupted index value read from the inode. This could be because
+the metadata block is uncompressed, or because the "compression" bit has
+been corrupted (turning a compressed block into an uncompressed block).
+
+This patch adds additional sanity checks to detect this, and the
+following corruption.
+
+1. It checks against corruption of the ids count. This can cause
+   either a larger or a smaller than expected table to be read.
+
+ In the case of a too large ids count, this would often have been
+ trapped by the existing sanity checks, but this patch introduces
+ a more exact check, which can identify too small values.
+
+2. It checks the contents of the index table for corruption.
+
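The two checks can be sketched in user space as follows (a hypothetical simplification of the patch's logic: host byte order is assumed, so the le64_to_cpu() conversions are omitted, and METADATA_SIZE stands in for SQUASHFS_METADATA_SIZE):

```c
#include <stdint.h>
#include <stddef.h>

#define METADATA_SIZE 8192  /* stand-in for SQUASHFS_METADATA_SIZE */

/*
 * The index table must exactly fill [table_start, next_table), and
 * each stored block location must be strictly increasing, with gaps
 * of at most one metadata block. The last entry must likewise sit
 * at most one metadata block before the index table itself.
 * Returns 0 if valid, -1 otherwise.
 */
static int check_index_table(const uint64_t *table, size_t indexes,
                             uint64_t table_start, uint64_t next_table,
                             uint64_t length)
{
    size_t n;

    /* exact-size check: traps too-large and too-small counts alike */
    if (length != next_table - table_start)
        return -1;

    for (n = 0; n + 1 < indexes; n++) {
        if (table[n] >= table[n + 1] ||
            table[n + 1] - table[n] > METADATA_SIZE)
            return -1;
    }

    if (table[indexes - 1] >= table_start ||
        table_start - table[indexes - 1] > METADATA_SIZE)
        return -1;

    return 0;
}
```

The exact-size comparison is the key difference from the old `start + length > next_table` check, which only trapped oversized tables.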
+Link: https://lkml.kernel.org/r/20210204130249.4495-3-phillip@squashfs.org.uk
+Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
+Reported-by: syzbot+b06d57ba83f604522af2@syzkaller.appspotmail.com
+Reported-by: syzbot+c021ba012da41ee9807c@syzkaller.appspotmail.com
+Reported-by: syzbot+5024636e8b5fd19f0f19@syzkaller.appspotmail.com
+Reported-by: syzbot+bcbc661df46657d0fa4f@syzkaller.appspotmail.com
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/squashfs/id.c | 40 ++++++++++++++++++++++++++++++++--------
+ fs/squashfs/squashfs_fs_sb.h | 1 +
+ fs/squashfs/super.c | 6 +++---
+ fs/squashfs/xattr.h | 10 +++++++++-
+ 4 files changed, 45 insertions(+), 12 deletions(-)
+
+--- a/fs/squashfs/id.c
++++ b/fs/squashfs/id.c
+@@ -35,10 +35,15 @@ int squashfs_get_id(struct super_block *
+ struct squashfs_sb_info *msblk = sb->s_fs_info;
+ int block = SQUASHFS_ID_BLOCK(index);
+ int offset = SQUASHFS_ID_BLOCK_OFFSET(index);
+- u64 start_block = le64_to_cpu(msblk->id_table[block]);
++ u64 start_block;
+ __le32 disk_id;
+ int err;
+
++ if (index >= msblk->ids)
++ return -EINVAL;
++
++ start_block = le64_to_cpu(msblk->id_table[block]);
++
+ err = squashfs_read_metadata(sb, &disk_id, &start_block, &offset,
+ sizeof(disk_id));
+ if (err < 0)
+@@ -56,7 +61,10 @@ __le64 *squashfs_read_id_index_table(str
+ u64 id_table_start, u64 next_table, unsigned short no_ids)
+ {
+ unsigned int length = SQUASHFS_ID_BLOCK_BYTES(no_ids);
++ unsigned int indexes = SQUASHFS_ID_BLOCKS(no_ids);
++ int n;
+ __le64 *table;
++ u64 start, end;
+
+ TRACE("In read_id_index_table, length %d\n", length);
+
+@@ -67,20 +75,36 @@ __le64 *squashfs_read_id_index_table(str
+ return ERR_PTR(-EINVAL);
+
+ /*
+- * length bytes should not extend into the next table - this check
+- * also traps instances where id_table_start is incorrectly larger
+- * than the next table start
++ * The computed size of the index table (length bytes) should exactly
++ * match the table start and end points
+ */
+- if (id_table_start + length > next_table)
++ if (length != (next_table - id_table_start))
+ return ERR_PTR(-EINVAL);
+
+ table = squashfs_read_table(sb, id_table_start, length);
++ if (IS_ERR(table))
++ return table;
+
+ /*
+- * table[0] points to the first id lookup table metadata block, this
+- * should be less than id_table_start
++ * table[0], table[1], ... table[indexes - 1] store the locations
++ * of the compressed id blocks. Each entry should be less than
++ * the next (i.e. table[0] < table[1]), and the difference between them
++ * should be SQUASHFS_METADATA_SIZE or less. table[indexes - 1]
++ * should be less than id_table_start, and again the difference
++ * should be SQUASHFS_METADATA_SIZE or less
+ */
+- if (!IS_ERR(table) && le64_to_cpu(table[0]) >= id_table_start) {
++ for (n = 0; n < (indexes - 1); n++) {
++ start = le64_to_cpu(table[n]);
++ end = le64_to_cpu(table[n + 1]);
++
++ if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++ kfree(table);
++ return ERR_PTR(-EINVAL);
++ }
++ }
++
++ start = le64_to_cpu(table[indexes - 1]);
++ if (start >= id_table_start || (id_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ kfree(table);
+ return ERR_PTR(-EINVAL);
+ }
+--- a/fs/squashfs/squashfs_fs_sb.h
++++ b/fs/squashfs/squashfs_fs_sb.h
+@@ -64,5 +64,6 @@ struct squashfs_sb_info {
+ unsigned int inodes;
+ unsigned int fragments;
+ int xattr_ids;
++ unsigned int ids;
+ };
+ #endif
+--- a/fs/squashfs/super.c
++++ b/fs/squashfs/super.c
+@@ -166,6 +166,7 @@ static int squashfs_fill_super(struct su
+ msblk->directory_table = le64_to_cpu(sblk->directory_table_start);
+ msblk->inodes = le32_to_cpu(sblk->inodes);
+ msblk->fragments = le32_to_cpu(sblk->fragments);
++ msblk->ids = le16_to_cpu(sblk->no_ids);
+ flags = le16_to_cpu(sblk->flags);
+
+ TRACE("Found valid superblock on %pg\n", sb->s_bdev);
+@@ -177,7 +178,7 @@ static int squashfs_fill_super(struct su
+ TRACE("Block size %d\n", msblk->block_size);
+ TRACE("Number of inodes %d\n", msblk->inodes);
+ TRACE("Number of fragments %d\n", msblk->fragments);
+- TRACE("Number of ids %d\n", le16_to_cpu(sblk->no_ids));
++ TRACE("Number of ids %d\n", msblk->ids);
+ TRACE("sblk->inode_table_start %llx\n", msblk->inode_table);
+ TRACE("sblk->directory_table_start %llx\n", msblk->directory_table);
+ TRACE("sblk->fragment_table_start %llx\n",
+@@ -236,8 +237,7 @@ static int squashfs_fill_super(struct su
+ allocate_id_index_table:
+ /* Allocate and read id index table */
+ msblk->id_table = squashfs_read_id_index_table(sb,
+- le64_to_cpu(sblk->id_table_start), next_table,
+- le16_to_cpu(sblk->no_ids));
++ le64_to_cpu(sblk->id_table_start), next_table, msblk->ids);
+ if (IS_ERR(msblk->id_table)) {
+ errorf(fc, "unable to read id index table");
+ err = PTR_ERR(msblk->id_table);
+--- a/fs/squashfs/xattr.h
++++ b/fs/squashfs/xattr.h
+@@ -17,8 +17,16 @@ extern int squashfs_xattr_lookup(struct
+ static inline __le64 *squashfs_read_xattr_id_table(struct super_block *sb,
+ u64 start, u64 *xattr_table_start, int *xattr_ids)
+ {
++ struct squashfs_xattr_id_table *id_table;
++
++ id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++ if (IS_ERR(id_table))
++ return (__le64 *) id_table;
++
++ *xattr_table_start = le64_to_cpu(id_table->xattr_table_start);
++ kfree(id_table);
++
+ ERROR("Xattrs in filesystem, these will be ignored\n");
+- *xattr_table_start = start;
+ return ERR_PTR(-ENOTSUPP);
+ }
+
--- /dev/null
+From eabac19e40c095543def79cb6ffeb3a8588aaff4 Mon Sep 17 00:00:00 2001
+From: Phillip Lougher <phillip@squashfs.org.uk>
+Date: Tue, 9 Feb 2021 13:41:56 -0800
+Subject: squashfs: add more sanity checks in inode lookup
+
+From: Phillip Lougher <phillip@squashfs.org.uk>
+
+commit eabac19e40c095543def79cb6ffeb3a8588aaff4 upstream.
+
+Syzbot has reported a "slab-out-of-bounds read" error which has been
+identified as being caused by a corrupted "ino_num" value read from the
+inode. This could be because the metadata block is uncompressed, or
+because the "compression" bit has been corrupted (turning a compressed
+block into an uncompressed block).
+
+This patch adds additional sanity checks to detect this, and the
+following corruption.
+
+1. It checks against corruption of the inodes count. This can cause
+   either a larger or a smaller than expected table to be read.
+
+ In the case of a too large inodes count, this would often have been
+ trapped by the existing sanity checks, but this patch introduces
+ a more exact check, which can identify too small values.
+
+2. It checks the contents of the index table for corruption.
+
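The bounds check added to squashfs_inode_lookup() can be sketched on its own (a hypothetical user-space rendering, not the kernel function): squashfs inode numbers start at 1, so zero is rejected, and (ino_num - 1) must index within the lookup table's entry count.

```c
/*
 * Returns 0 if the lookup may proceed, -1 (the kernel's -EINVAL)
 * if ino_num is zero or indexes past the end of the lookup table.
 */
static int check_ino_num(unsigned int ino_num, unsigned int inodes)
{
    if (ino_num == 0 || (ino_num - 1) >= inodes)
        return -1;
    return 0;
}
```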
+[phillip@squashfs.org.uk: fix checkpatch issue]
+ Link: https://lkml.kernel.org/r/527909353.754618.1612769948607@webmail.123-reg.co.uk
+
+Link: https://lkml.kernel.org/r/20210204130249.4495-4-phillip@squashfs.org.uk
+Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
+Reported-by: syzbot+04419e3ff19d2970ea28@syzkaller.appspotmail.com
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/squashfs/export.c | 41 +++++++++++++++++++++++++++++++++--------
+ 1 file changed, 33 insertions(+), 8 deletions(-)
+
+--- a/fs/squashfs/export.c
++++ b/fs/squashfs/export.c
+@@ -41,12 +41,17 @@ static long long squashfs_inode_lookup(s
+ struct squashfs_sb_info *msblk = sb->s_fs_info;
+ int blk = SQUASHFS_LOOKUP_BLOCK(ino_num - 1);
+ int offset = SQUASHFS_LOOKUP_BLOCK_OFFSET(ino_num - 1);
+- u64 start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++ u64 start;
+ __le64 ino;
+ int err;
+
+ TRACE("Entered squashfs_inode_lookup, inode_number = %d\n", ino_num);
+
++ if (ino_num == 0 || (ino_num - 1) >= msblk->inodes)
++ return -EINVAL;
++
++ start = le64_to_cpu(msblk->inode_lookup_table[blk]);
++
+ err = squashfs_read_metadata(sb, &ino, &start, &offset, sizeof(ino));
+ if (err < 0)
+ return err;
+@@ -111,7 +116,10 @@ __le64 *squashfs_read_inode_lookup_table
+ u64 lookup_table_start, u64 next_table, unsigned int inodes)
+ {
+ unsigned int length = SQUASHFS_LOOKUP_BLOCK_BYTES(inodes);
++ unsigned int indexes = SQUASHFS_LOOKUP_BLOCKS(inodes);
++ int n;
+ __le64 *table;
++ u64 start, end;
+
+ TRACE("In read_inode_lookup_table, length %d\n", length);
+
+@@ -121,20 +129,37 @@ __le64 *squashfs_read_inode_lookup_table
+ if (inodes == 0)
+ return ERR_PTR(-EINVAL);
+
+- /* length bytes should not extend into the next table - this check
+- * also traps instances where lookup_table_start is incorrectly larger
+- * than the next table start
++ /*
++ * The computed size of the lookup table (length bytes) should exactly
++ * match the table start and end points
+ */
+- if (lookup_table_start + length > next_table)
++ if (length != (next_table - lookup_table_start))
+ return ERR_PTR(-EINVAL);
+
+ table = squashfs_read_table(sb, lookup_table_start, length);
++ if (IS_ERR(table))
++ return table;
+
+ /*
+- * table[0] points to the first inode lookup table metadata block,
+- * this should be less than lookup_table_start
++ * table[0], table[1], ... table[indexes - 1] store the locations
++ * of the compressed inode lookup blocks. Each entry should be
++ * less than the next (i.e. table[0] < table[1]), and the difference
++ * between them should be SQUASHFS_METADATA_SIZE or less.
++ * table[indexes - 1] should be less than lookup_table_start, and
++ * again the difference should be SQUASHFS_METADATA_SIZE or less
+ */
+- if (!IS_ERR(table) && le64_to_cpu(table[0]) >= lookup_table_start) {
++ for (n = 0; n < (indexes - 1); n++) {
++ start = le64_to_cpu(table[n]);
++ end = le64_to_cpu(table[n + 1]);
++
++ if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++ kfree(table);
++ return ERR_PTR(-EINVAL);
++ }
++ }
++
++ start = le64_to_cpu(table[indexes - 1]);
++ if (start >= lookup_table_start || (lookup_table_start - start) > SQUASHFS_METADATA_SIZE) {
+ kfree(table);
+ return ERR_PTR(-EINVAL);
+ }
--- /dev/null
+From 506220d2ba21791314af569211ffd8870b8208fa Mon Sep 17 00:00:00 2001
+From: Phillip Lougher <phillip@squashfs.org.uk>
+Date: Tue, 9 Feb 2021 13:42:00 -0800
+Subject: squashfs: add more sanity checks in xattr id lookup
+
+From: Phillip Lougher <phillip@squashfs.org.uk>
+
+commit 506220d2ba21791314af569211ffd8870b8208fa upstream.
+
+Syzbot has reported a warning where a kmalloc() attempt exceeds the
+maximum limit. This has been identified as corruption of the xattr_ids
+count when reading the xattr id lookup table.
+
+This patch adds a number of additional sanity checks to detect this
+corruption and others.
+
+1. It checks for a corrupted xattr index read from the inode. This could
+ be because the metadata block is uncompressed, or because the
+ "compression" bit has been corrupted (turning a compressed block
+ into an uncompressed block). This would cause an out of bounds read.
+
+2. It checks against corruption of the xattr_ids count. This can
+   cause either the above kmalloc failure, or a smaller than
+   expected table to be read.
+
+3. It checks the contents of the index table for corruption.
+
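The xattr_ids size check (point 2) relies on the on-disk layout: the index table begins immediately after the xattr id table header, and must run exactly to the end of the filesystem. A user-space sketch of that arithmetic (hypothetical names; not the kernel code):

```c
#include <stdint.h>

/*
 * The index table occupies [table_start + header_size, bytes_used),
 * so a length computed from a corrupted xattr_ids count is caught
 * before any allocation is sized from it. Returns 0 if the computed
 * length matches exactly, -1 otherwise.
 */
static int check_xattr_index_size(uint64_t table_start,
                                  uint64_t header_size,
                                  uint64_t bytes_used,
                                  uint64_t len)
{
    uint64_t start = table_start + header_size;

    if (start > bytes_used)
        return -1;
    return len == bytes_used - start ? 0 : -1;
}
```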
+[phillip@squashfs.org.uk: fix checkpatch issue]
+ Link: https://lkml.kernel.org/r/270245655.754655.1612770082682@webmail.123-reg.co.uk
+
+Link: https://lkml.kernel.org/r/20210204130249.4495-5-phillip@squashfs.org.uk
+Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
+Reported-by: syzbot+2ccea6339d368360800d@syzkaller.appspotmail.com
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/squashfs/xattr_id.c | 66 ++++++++++++++++++++++++++++++++++++++++++-------
+ 1 file changed, 57 insertions(+), 9 deletions(-)
+
+--- a/fs/squashfs/xattr_id.c
++++ b/fs/squashfs/xattr_id.c
+@@ -31,10 +31,15 @@ int squashfs_xattr_lookup(struct super_b
+ struct squashfs_sb_info *msblk = sb->s_fs_info;
+ int block = SQUASHFS_XATTR_BLOCK(index);
+ int offset = SQUASHFS_XATTR_BLOCK_OFFSET(index);
+- u64 start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++ u64 start_block;
+ struct squashfs_xattr_id id;
+ int err;
+
++ if (index >= msblk->xattr_ids)
++ return -EINVAL;
++
++ start_block = le64_to_cpu(msblk->xattr_id_table[block]);
++
+ err = squashfs_read_metadata(sb, &id, &start_block, &offset,
+ sizeof(id));
+ if (err < 0)
+@@ -50,13 +55,17 @@ int squashfs_xattr_lookup(struct super_b
+ /*
+ * Read uncompressed xattr id lookup table indexes from disk into memory
+ */
+-__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 start,
++__le64 *squashfs_read_xattr_id_table(struct super_block *sb, u64 table_start,
+ u64 *xattr_table_start, int *xattr_ids)
+ {
+- unsigned int len;
++ struct squashfs_sb_info *msblk = sb->s_fs_info;
++ unsigned int len, indexes;
+ struct squashfs_xattr_id_table *id_table;
++ __le64 *table;
++ u64 start, end;
++ int n;
+
+- id_table = squashfs_read_table(sb, start, sizeof(*id_table));
++ id_table = squashfs_read_table(sb, table_start, sizeof(*id_table));
+ if (IS_ERR(id_table))
+ return (__le64 *) id_table;
+
+@@ -70,13 +79,52 @@ __le64 *squashfs_read_xattr_id_table(str
+ if (*xattr_ids == 0)
+ return ERR_PTR(-EINVAL);
+
+- /* xattr_table should be less than start */
+- if (*xattr_table_start >= start)
++ len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++ indexes = SQUASHFS_XATTR_BLOCKS(*xattr_ids);
++
++ /*
++ * The computed size of the index table (len bytes) should exactly
++ * match the table start and end points
++ */
++ start = table_start + sizeof(*id_table);
++ end = msblk->bytes_used;
++
++ if (len != (end - start))
+ return ERR_PTR(-EINVAL);
+
+- len = SQUASHFS_XATTR_BLOCK_BYTES(*xattr_ids);
++ table = squashfs_read_table(sb, start, len);
++ if (IS_ERR(table))
++ return table;
++
++ /* table[0], table[1], ... table[indexes - 1] store the locations
++ * of the compressed xattr id blocks. Each entry should be less than
++ * the next (i.e. table[0] < table[1]), and the difference between them
++ * should be SQUASHFS_METADATA_SIZE or less. table[indexes - 1]
++ * should be less than table_start, and again the difference
++ * should be SQUASHFS_METADATA_SIZE or less.
++ *
++ * Finally xattr_table_start should be less than table[0].
++ */
++ for (n = 0; n < (indexes - 1); n++) {
++ start = le64_to_cpu(table[n]);
++ end = le64_to_cpu(table[n + 1]);
++
++ if (start >= end || (end - start) > SQUASHFS_METADATA_SIZE) {
++ kfree(table);
++ return ERR_PTR(-EINVAL);
++ }
++ }
++
++ start = le64_to_cpu(table[indexes - 1]);
++ if (start >= table_start || (table_start - start) > SQUASHFS_METADATA_SIZE) {
++ kfree(table);
++ return ERR_PTR(-EINVAL);
++ }
+
+- TRACE("In read_xattr_index_table, length %d\n", len);
++ if (*xattr_table_start >= le64_to_cpu(table[0])) {
++ kfree(table);
++ return ERR_PTR(-EINVAL);
++ }
+
+- return squashfs_read_table(sb, start + sizeof(*id_table), len);
++ return table;
+ }
--- /dev/null
+From e812cbbbbbb15adbbbee176baa1e8bda53059bf0 Mon Sep 17 00:00:00 2001
+From: Phillip Lougher <phillip@squashfs.org.uk>
+Date: Tue, 9 Feb 2021 13:41:50 -0800
+Subject: squashfs: avoid out of bounds writes in decompressors
+
+From: Phillip Lougher <phillip@squashfs.org.uk>
+
+commit e812cbbbbbb15adbbbee176baa1e8bda53059bf0 upstream.
+
+Patch series "Squashfs: fix BIO migration regression and add sanity checks".
+
+Patch [1/4] fixes a regression introduced by the "migrate from
+ll_rw_block usage to BIO" patch, which has produced a number of
+Syzbot/Syzkaller reports.
+
+Patches [2/4], [3/4], and [4/4] fix a number of filesystem corruption
+issues which have produced Syzbot reports in the id, inode and xattr
+lookup code.
+
+Each patch has been tested against the Syzbot reproducers using the
+given kernel configuration. They have the appropriate "Reported-by:"
+lines added.
+
+Additionally, all of the reproducer filesystems are indirectly fixed by
+patch [4/4] due to the fact they all have xattr corruption which is now
+detected there.
+
+Additional testing with other configurations and architectures (32bit,
+big endian), and normal filesystems has also been done to trap any
+inadvertent regressions caused by the additional sanity checks.
+
+This patch (of 4):
+
+This is a regression introduced by the patch "migrate from ll_rw_block
+usage to BIO".
+
+Syzbot/Syzkaller has reported a number of "out of bounds writes" and
+"unable to handle kernel paging request in squashfs_decompress" errors
+which have been identified as a regression introduced by the above
+patch.
+
+Specifically, the patch removed the following sanity check
+
+ if (length < 0 || length > output->length ||
+ (index + length) > msblk->bytes_used)
+
+This check did two things:
+
+1. It ensured any reads were not beyond the end of the filesystem
+
+2. It ensured that the "length" field read from the filesystem
+ was within the expected maximum length. Without this any
+ corrupted values can over-run allocated buffers.
+
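The removed (and now restored) check can be sketched in isolation (a hypothetical user-space rendering of the condition, not the kernel function itself): a read of `length` bytes at offset `index` must fit the output buffer and must not run past the end of the filesystem.

```c
#include <stdint.h>

/*
 * Returns 0 if a block read of `length` bytes at `index` is safe,
 * -1 (the kernel's -EIO) if the length is negative, larger than the
 * output buffer, or would read beyond the end of the filesystem.
 */
static int check_block_read(int64_t index, int length,
                            int output_length, int64_t bytes_used)
{
    if (length < 0 || length > output_length ||
        (index + length) > bytes_used)
        return -1;
    return 0;
}
```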
+Link: https://lkml.kernel.org/r/20210204130249.4495-1-phillip@squashfs.org.uk
+Link: https://lkml.kernel.org/r/20210204130249.4495-2-phillip@squashfs.org.uk
+Fixes: 93e72b3c612adc ("squashfs: migrate from ll_rw_block usage to BIO")
+Reported-by: syzbot+6fba78f99b9afd4b5634@syzkaller.appspotmail.com
+Signed-off-by: Phillip Lougher <phillip@squashfs.org.uk>
+Cc: Philippe Liard <pliard@google.com>
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/squashfs/block.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/fs/squashfs/block.c
++++ b/fs/squashfs/block.c
+@@ -196,9 +196,15 @@ int squashfs_read_data(struct super_bloc
+ length = SQUASHFS_COMPRESSED_SIZE(length);
+ index += 2;
+
+- TRACE("Block @ 0x%llx, %scompressed size %d\n", index,
++ TRACE("Block @ 0x%llx, %scompressed size %d\n", index - 2,
+ compressed ? "" : "un", length);
+ }
++ if (length < 0 || length > output->length ||
++ (index + length) > msblk->bytes_used) {
++ res = -EIO;
++ goto out;
++ }
++
+ if (next_index)
+ *next_index = index + length;
+