--- /dev/null
+From e383e871ab54f073c2a798a9e0bde7f1d0528de8 Mon Sep 17 00:00:00 2001
+From: Krzysztof Kozlowski <krzk@kernel.org>
+Date: Thu, 30 Jan 2020 20:55:24 +0100
+Subject: ARM: npcm: Bring back GPIOLIB support
+
+From: Krzysztof Kozlowski <krzk@kernel.org>
+
+commit e383e871ab54f073c2a798a9e0bde7f1d0528de8 upstream.
+
+CONFIG_ARCH_REQUIRE_GPIOLIB is gone since commit 65053e1a7743
+("gpio: delete ARCH_[WANTS_OPTIONAL|REQUIRE]_GPIOLIB") and all platforms
+should explicitly select GPIOLIB to have it.
+
+Link: https://lore.kernel.org/r/20200130195525.4525-1-krzk@kernel.org
+Cc: <stable@vger.kernel.org>
+Fixes: 65053e1a7743 ("gpio: delete ARCH_[WANTS_OPTIONAL|REQUIRE]_GPIOLIB")
+Signed-off-by: Krzysztof Kozlowski <krzk@kernel.org>
+Signed-off-by: Olof Johansson <olof@lixom.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/arm/mach-npcm/Kconfig | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm/mach-npcm/Kconfig
++++ b/arch/arm/mach-npcm/Kconfig
+@@ -11,7 +11,7 @@ config ARCH_NPCM7XX
+ depends on ARCH_MULTI_V7
+ select PINCTRL_NPCM7XX
+ select NPCM7XX_TIMER
+- select ARCH_REQUIRE_GPIOLIB
++ select GPIOLIB
+ select CACHE_L2X0
+ select ARM_GIC
+ select HAVE_ARM_TWD if SMP
--- /dev/null
+From fca3d33d8ad61eb53eca3ee4cac476d1e31b9008 Mon Sep 17 00:00:00 2001
+From: Will Deacon <will@kernel.org>
+Date: Thu, 6 Feb 2020 10:42:58 +0000
+Subject: arm64: ssbs: Fix context-switch when SSBS is present on all CPUs
+
+From: Will Deacon <will@kernel.org>
+
+commit fca3d33d8ad61eb53eca3ee4cac476d1e31b9008 upstream.
+
+When all CPUs in the system implement the SSBS extension, the SSBS field
+in PSTATE is the definitive indication of the mitigation state. Further,
+when the CPUs implement the SSBS manipulation instructions (advertised
+to userspace via an HWCAP), EL0 can toggle the SSBS field directly and
+so we cannot rely on any shadow state such as TIF_SSBD at all.
+
+Avoid forcing the SSBS field in context-switch on such a system, and
+simply rely on the PSTATE register instead.
+
+Cc: <stable@vger.kernel.org>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Srinivas Ramana <sramana@codeaurora.org>
+Fixes: cbdf8a189a66 ("arm64: Force SSBS on context switch")
+Reviewed-by: Marc Zyngier <maz@kernel.org>
+Signed-off-by: Will Deacon <will@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/arm64/kernel/process.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+--- a/arch/arm64/kernel/process.c
++++ b/arch/arm64/kernel/process.c
+@@ -466,6 +466,13 @@ static void ssbs_thread_switch(struct ta
+ if (unlikely(next->flags & PF_KTHREAD))
+ return;
+
++ /*
++ * If all CPUs implement the SSBS extension, then we just need to
++ * context-switch the PSTATE field.
++ */
++ if (cpu_have_feature(cpu_feature(SSBS)))
++ return;
++
+ /* If the mitigation is enabled, then we leave SSBS clear. */
+ if ((arm64_get_ssbd_state() == ARM64_SSBD_FORCE_ENABLE) ||
+ test_tsk_thread_flag(next, TIF_SSBD))
--- /dev/null
+From 28553fa992cb28be6a65566681aac6cafabb4f2d Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Fri, 7 Feb 2020 12:23:09 +0000
+Subject: Btrfs: fix race between shrinking truncate and fiemap
+
+From: Filipe Manana <fdmanana@suse.com>
+
+commit 28553fa992cb28be6a65566681aac6cafabb4f2d upstream.
+
+When there is a fiemap executing in parallel with a shrinking truncate
+we can end up in a situation where we have extent maps for which we no
+longer have corresponding file extent items. This is generally harmless
+and at the moment the only consequences are missing file extent items
+representing holes after we expand the file size again after the
+truncate operation removed the prealloc extent items, and stale
+information for future fiemap calls (reporting extents that no longer
+exist or may have been reallocated to other files for example).
+
+Consider the following example:
+
+1) Our inode has a size of 128KiB, one 128KiB extent at file offset 0
+ and a 1MiB prealloc extent at file offset 128KiB;
+
+2) Task A starts doing a shrinking truncate of our inode to reduce it to
+ a size of 64KiB. Before it searches the subvolume tree for file
+ extent items to delete, it drops all the extent maps in the range
+ from 64KiB to (u64)-1 by calling btrfs_drop_extent_cache();
+
+3) Task B starts doing a fiemap against our inode. When looking up for
+ the inode's extent maps in the range from 128KiB to (u64)-1, it
+ doesn't find any in the inode's extent map tree, since they were
+ removed by task A. Because it didn't find any in the extent map
+ tree, it scans the inode's subvolume tree for file extent items, and
+ it finds the 1MiB prealloc extent at file offset 128KiB, then it
+ creates an extent map based on that file extent item and adds it to
+ inode's extent map tree (this ends up being done by
+ btrfs_get_extent() <- btrfs_get_extent_fiemap() <-
+ get_extent_skip_holes());
+
+4) Task A then drops the prealloc extent at file offset 128KiB and
+   shrinks the 128KiB extent at file offset 0 to a length of 64KiB. The
+   truncation operation finishes and we end up with an extent map
+   representing a 1MiB prealloc extent at file offset 128KiB, even though
+   that extent no longer exists;
+
+After this the two types of problems we have are:
+
+1) Future calls to fiemap always report that a 1MiB prealloc extent
+ exists at file offset 128KiB. This is stale information, no longer
+ correct;
+
+2) If the size of the file is increased, by a truncate operation that
+ increases the file size or by a write into a file offset > 64KiB for
+ example, we end up not inserting file extent items to represent holes
+ for any range between 128KiB and 128KiB + 1MiB, since the hole
+ expansion function, btrfs_cont_expand() will skip hole insertion for
+ any range for which an extent map exists that represents a prealloc
+ extent. This causes fsck to complain about missing file extent items
+ when not using the NO_HOLES feature.
+
+The second issue could be often triggered by test case generic/561 from
+fstests, which runs fsstress and duperemove in parallel, and duperemove
+does frequent fiemap calls.
+
+Essentially the problem happens because fiemap does not acquire the
+inode's lock while truncate does, and fiemap locks the file range in the
+inode's iotree while truncate does not. So fix the issue by making
+btrfs_truncate_inode_items() lock the file range from the new file size
+to (u64)-1, so that it serializes with fiemap.
+
+CC: stable@vger.kernel.org # 4.4+
+Reviewed-by: Josef Bacik <josef@toxicpanda.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/inode.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -4686,6 +4686,8 @@ int btrfs_truncate_inode_items(struct bt
+ u64 bytes_deleted = 0;
+ bool be_nice = false;
+ bool should_throttle = false;
++ const u64 lock_start = ALIGN_DOWN(new_size, fs_info->sectorsize);
++ struct extent_state *cached_state = NULL;
+
+ BUG_ON(new_size > 0 && min_type != BTRFS_EXTENT_DATA_KEY);
+
+@@ -4702,6 +4704,9 @@ int btrfs_truncate_inode_items(struct bt
+ return -ENOMEM;
+ path->reada = READA_BACK;
+
++ lock_extent_bits(&BTRFS_I(inode)->io_tree, lock_start, (u64)-1,
++ &cached_state);
++
+ /*
+ * We want to drop from the next block forward in case this new size is
+ * not block aligned since we will be keeping the last block of the
+@@ -4968,6 +4973,9 @@ out:
+ btrfs_ordered_update_i_size(inode, last_size, NULL);
+ }
+
++ unlock_extent_cached(&BTRFS_I(inode)->io_tree, lock_start, (u64)-1,
++ &cached_state);
++
+ btrfs_free_path(path);
+ return ret;
+ }
--- /dev/null
+From ac05ca913e9f3871126d61da275bfe8516ff01ca Mon Sep 17 00:00:00 2001
+From: Filipe Manana <fdmanana@suse.com>
+Date: Fri, 31 Jan 2020 14:06:07 +0000
+Subject: Btrfs: fix race between using extent maps and merging them
+
+From: Filipe Manana <fdmanana@suse.com>
+
+commit ac05ca913e9f3871126d61da275bfe8516ff01ca upstream.
+
+We have a few cases where we allow an extent map that is in an extent map
+tree to be merged with other extents in the tree. Such cases include the
+unpinning of an extent after the respective ordered extent completed or
+after logging an extent during a fast fsync. This can lead to subtle and
+dangerous problems because when doing the merge some other task might be
+using the same extent map and as consequence see an inconsistent state of
+the extent map - for example sees the new length but has seen the old start
+offset.
+
+With luck this triggers a BUG_ON(), and not some silent bug, such as the
+following one in __do_readpage():
+
+ $ cat -n fs/btrfs/extent_io.c
+ 3061 static int __do_readpage(struct extent_io_tree *tree,
+ 3062 struct page *page,
+ (...)
+ 3127 em = __get_extent_map(inode, page, pg_offset, cur,
+ 3128 end - cur + 1, get_extent, em_cached);
+ 3129 if (IS_ERR_OR_NULL(em)) {
+ 3130 SetPageError(page);
+ 3131 unlock_extent(tree, cur, end);
+ 3132 break;
+ 3133 }
+ 3134 extent_offset = cur - em->start;
+ 3135 BUG_ON(extent_map_end(em) <= cur);
+ (...)
+
+Consider the following example scenario, where we end up hitting the
+BUG_ON() in __do_readpage().
+
+We have an inode with a size of 8KiB and 2 extent maps:
+
+ extent A: file offset 0, length 4KiB, disk_bytenr = X, persisted on disk by
+ a previous transaction
+
+ extent B: file offset 4KiB, length 4KiB, disk_bytenr = X + 4KiB, not yet
+ persisted but writeback started for it already. The extent map
+ is pinned since there's writeback and an ordered extent in
+ progress, so it can not be merged with extent map A yet
+
+The following sequence of steps leads to the BUG_ON():
+
+1) The ordered extent for extent B completes, the respective page gets its
+ writeback bit cleared and the extent map is unpinned, at that point it
+ is not yet merged with extent map A because it's in the list of modified
+ extents;
+
+2) Due to memory pressure, or some other reason, the MM subsystem releases
+ the page corresponding to extent B - btrfs_releasepage() is called and
+ returns 1, meaning the page can be released as it's not dirty, not under
+ writeback anymore and the extent range is not locked in the inode's
+ iotree. However the extent map is not released, either because we are
+ not in a context that allows memory allocations to block or because the
+ inode's size is smaller than 16MiB - in this case our inode has a size
+ of 8KiB;
+
+3) Task B needs to read extent B and ends up __do_readpage() through the
+ btrfs_readpage() callback. At __do_readpage() it gets a reference to
+ extent map B;
+
+4) Task A, doing a fast fsync, calls clear_em_logging() against extent map B
+ while holding the write lock on the inode's extent map tree - this
+ results in try_merge_map() being called and since it's possible to merge
+ extent map B with extent map A now (the extent map B was removed from
+ the list of modified extents), the merging begins - it sets extent map
+ B's start offset to 0 (was 4KiB), but before it increments the map's
+   length to 8KiB (4KiB + 4KiB), task B is at:
+
+ BUG_ON(extent_map_end(em) <= cur);
+
+ The call to extent_map_end() sees the extent map has a start of 0
+ and a length still at 4KiB, so it returns 4KiB and 'cur' is 4KiB, so
+ the BUG_ON() is triggered.
+
+So it's dangerous to modify an extent map that is in the tree, because some
+other task might have got a reference to it before and still be using it, and
+needs to see a consistent map while using it. Generally this is very rare
+since most paths that lookup and use extent maps also have the file range
+locked in the inode's iotree. The fsync path is pretty much the only
+exception where we don't do it to avoid serialization with concurrent
+reads.
+
+Fix this by not allowing an extent map to be merged if it's being used
+by tasks other than the one attempting to merge the extent map (when the
+reference count of the extent map is greater than 2).
+
+Reported-by: ryusuke1925 <st13s20@gm.ibaraki-ct.ac.jp>
+Reported-by: Koki Mitani <koki.mitani.xg@hco.ntt.co.jp>
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=206211
+CC: stable@vger.kernel.org # 4.4+
+Reviewed-by: Josef Bacik <josef@toxicpanda.com>
+Signed-off-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/extent_map.c | 11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+--- a/fs/btrfs/extent_map.c
++++ b/fs/btrfs/extent_map.c
+@@ -237,6 +237,17 @@ static void try_merge_map(struct extent_
+ struct extent_map *merge = NULL;
+ struct rb_node *rb;
+
++ /*
++ * We can't modify an extent map that is in the tree and that is being
++ * used by another task, as it can cause that other task to see it in
++ * inconsistent state during the merging. We always have 1 reference for
++ * the tree and 1 for this task (which is unpinning the extent map or
++ * clearing the logging flag), so anything > 2 means it's being used by
++ * other tasks too.
++ */
++ if (refcount_read(&em->refs) > 2)
++ return;
++
+ if (em->start != 0) {
+ rb = rb_prev(&em->rb_node);
+ if (rb)
--- /dev/null
+From 10a3a3edc5b89a8cd095bc63495fb1e0f42047d9 Mon Sep 17 00:00:00 2001
+From: David Sterba <dsterba@suse.com>
+Date: Wed, 5 Feb 2020 17:12:28 +0100
+Subject: btrfs: log message when rw remount is attempted with unclean tree-log
+
+From: David Sterba <dsterba@suse.com>
+
+commit 10a3a3edc5b89a8cd095bc63495fb1e0f42047d9 upstream.
+
+A remount to a read-write filesystem is not safe when there's tree-log
+to be replayed. Files that could be opened until now might be affected
+by the changes in the tree-log.
+
+A regular mount is needed to replay the log so the filesystem presents
+the consistent view with the pending changes included.
+
+CC: stable@vger.kernel.org # 4.4+
+Reviewed-by: Anand Jain <anand.jain@oracle.com>
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/super.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/fs/btrfs/super.c
++++ b/fs/btrfs/super.c
+@@ -1803,6 +1803,8 @@ static int btrfs_remount(struct super_bl
+ }
+
+ if (btrfs_super_log_root(fs_info->super_copy) != 0) {
++ btrfs_warn(fs_info,
++ "mount required to replay tree-log, cannot remount read-write");
+ ret = -EINVAL;
+ goto restore;
+ }
--- /dev/null
+From e8294f2f6aa6208ed0923aa6d70cea3be178309a Mon Sep 17 00:00:00 2001
+From: David Sterba <dsterba@suse.com>
+Date: Wed, 5 Feb 2020 17:12:16 +0100
+Subject: btrfs: print message when tree-log replay starts
+
+From: David Sterba <dsterba@suse.com>
+
+commit e8294f2f6aa6208ed0923aa6d70cea3be178309a upstream.
+
+There's no logged information about tree-log replay although this is
+something that points to a previous unclean unmount. Other filesystems
+report that as well.
+
+Suggested-by: Chris Murphy <lists@colorremedies.com>
+CC: stable@vger.kernel.org # 4.4+
+Reviewed-by: Anand Jain <anand.jain@oracle.com>
+Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/disk-io.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -3164,6 +3164,7 @@ int __cold open_ctree(struct super_block
+ /* do not make disk changes in broken FS or nologreplay is given */
+ if (btrfs_super_log_root(disk_super) != 0 &&
+ !btrfs_test_opt(fs_info, NOLOGREPLAY)) {
++ btrfs_info(fs_info, "start tree-log replay");
+ ret = btrfs_replay_log(fs_info, fs_devices);
+ if (ret) {
+ err = ret;
--- /dev/null
+From f311ade3a7adf31658ed882aaab9f9879fdccef7 Mon Sep 17 00:00:00 2001
+From: Wenwen Wang <wenwen@cs.uga.edu>
+Date: Sat, 1 Feb 2020 20:38:38 +0000
+Subject: btrfs: ref-verify: fix memory leaks
+
+From: Wenwen Wang <wenwen@cs.uga.edu>
+
+commit f311ade3a7adf31658ed882aaab9f9879fdccef7 upstream.
+
+In btrfs_ref_tree_mod(), 'ref' and 'ra' are allocated through kzalloc() and
+kmalloc(), respectively. In the following code, if an error occurs, the
+execution will be redirected to 'out' or 'out_unlock' and the function will
+be exited. However, on some of the paths, 'ref' and 'ra' are not
+deallocated, leading to memory leaks. For example, if 'action' is
+BTRFS_ADD_DELAYED_EXTENT, add_block_entry() will be invoked. If the return
+value indicates an error, the execution will be redirected to 'out'. But,
+'ref' is not deallocated on this path, causing a memory leak.
+
+To fix the above issues, deallocate both 'ref' and 'ra' before exiting from
+the function when an error is encountered.
+
+CC: stable@vger.kernel.org # 4.15+
+Signed-off-by: Wenwen Wang <wenwen@cs.uga.edu>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/ref-verify.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/fs/btrfs/ref-verify.c
++++ b/fs/btrfs/ref-verify.c
+@@ -744,6 +744,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_i
+ */
+ be = add_block_entry(fs_info, bytenr, num_bytes, ref_root);
+ if (IS_ERR(be)) {
++ kfree(ref);
+ kfree(ra);
+ ret = PTR_ERR(be);
+ goto out;
+@@ -757,6 +758,8 @@ int btrfs_ref_tree_mod(struct btrfs_fs_i
+ "re-allocated a block that still has references to it!");
+ dump_block_entry(fs_info, be);
+ dump_ref_action(fs_info, ra);
++ kfree(ref);
++ kfree(ra);
+ goto out_unlock;
+ }
+
+@@ -819,6 +822,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_i
+ "dropping a ref for a existing root that doesn't have a ref on the block");
+ dump_block_entry(fs_info, be);
+ dump_ref_action(fs_info, ra);
++ kfree(ref);
+ kfree(ra);
+ goto out_unlock;
+ }
+@@ -834,6 +838,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_i
+ "attempting to add another ref for an existing ref on a tree block");
+ dump_block_entry(fs_info, be);
+ dump_ref_action(fs_info, ra);
++ kfree(ref);
+ kfree(ra);
+ goto out_unlock;
+ }
--- /dev/null
+From 0cd9d33ace336bc424fc30944aa3defd6786e4fe Mon Sep 17 00:00:00 2001
+From: Tejun Heo <tj@kernel.org>
+Date: Thu, 30 Jan 2020 11:37:33 -0500
+Subject: cgroup: init_tasks shouldn't be linked to the root cgroup
+
+From: Tejun Heo <tj@kernel.org>
+
+commit 0cd9d33ace336bc424fc30944aa3defd6786e4fe upstream.
+
+5153faac18d2 ("cgroup: remove cgroup_enable_task_cg_lists()
+optimization") removed lazy initialization of css_sets so that new
+tasks are always linked to their css_set. In the process, it incorrectly
+ended up adding init_tasks to the root css_set. They show up as PID 0's in
+root's cgroup.procs triggering warnings in systemd and generally
+confusing people.
+
+Fix it by skipping css_set linking for init_tasks.
+
+Signed-off-by: Tejun Heo <tj@kernel.org>
+Reported-by: https://github.com/joanbm
+Link: https://github.com/systemd/systemd/issues/14682
+Fixes: 5153faac18d2 ("cgroup: remove cgroup_enable_task_cg_lists() optimization")
+Cc: stable@vger.kernel.org # v5.5+
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/cgroup/cgroup.c | 13 ++++++++-----
+ 1 file changed, 8 insertions(+), 5 deletions(-)
+
+--- a/kernel/cgroup/cgroup.c
++++ b/kernel/cgroup/cgroup.c
+@@ -5932,11 +5932,14 @@ void cgroup_post_fork(struct task_struct
+
+ spin_lock_irq(&css_set_lock);
+
+- WARN_ON_ONCE(!list_empty(&child->cg_list));
+- cset = task_css_set(current); /* current is @child's parent */
+- get_css_set(cset);
+- cset->nr_tasks++;
+- css_set_move_task(child, NULL, cset, false);
++ /* init tasks are special, only link regular threads */
++ if (likely(child->pid)) {
++ WARN_ON_ONCE(!list_empty(&child->cg_list));
++ cset = task_css_set(current); /* current is @child's parent */
++ get_css_set(cset);
++ cset->nr_tasks++;
++ css_set_move_task(child, NULL, cset, false);
++ }
+
+ /*
+ * If the cgroup has to be frozen, the new task has too. Let's set
--- /dev/null
+From 85db6b7ae65f33be4bb44f1c28261a3faa126437 Mon Sep 17 00:00:00 2001
+From: Ronnie Sahlberg <lsahlber@redhat.com>
+Date: Thu, 13 Feb 2020 12:14:47 +1000
+Subject: cifs: make sure we do not overflow the max EA buffer size
+
+From: Ronnie Sahlberg <lsahlber@redhat.com>
+
+commit 85db6b7ae65f33be4bb44f1c28261a3faa126437 upstream.
+
+RHBZ: 1752437
+
+Before we add a new EA we should check that this will not overflow
+the maximum buffer we have available to read the EAs back.
+Otherwise we can get into a situation where the EAs are so big that
+we cannot read them back to the client and thus we cannot list EAs
+anymore or delete them.
+
+Signed-off-by: Ronnie Sahlberg <lsahlber@redhat.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+CC: Stable <stable@vger.kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/cifs/smb2ops.c | 35 ++++++++++++++++++++++++++++++++++-
+ 1 file changed, 34 insertions(+), 1 deletion(-)
+
+--- a/fs/cifs/smb2ops.c
++++ b/fs/cifs/smb2ops.c
+@@ -1115,7 +1115,8 @@ smb2_set_ea(const unsigned int xid, stru
+ void *data[1];
+ struct smb2_file_full_ea_info *ea = NULL;
+ struct kvec close_iov[1];
+- int rc;
++ struct smb2_query_info_rsp *rsp;
++ int rc, used_len = 0;
+
+ if (smb3_encryption_required(tcon))
+ flags |= CIFS_TRANSFORM_REQ;
+@@ -1138,6 +1139,38 @@ smb2_set_ea(const unsigned int xid, stru
+ cifs_sb);
+ if (rc == -ENODATA)
+ goto sea_exit;
++ } else {
++ /* If we are adding a attribute we should first check
++ * if there will be enough space available to store
++ * the new EA. If not we should not add it since we
++ * would not be able to even read the EAs back.
++ */
++ rc = smb2_query_info_compound(xid, tcon, utf16_path,
++ FILE_READ_EA,
++ FILE_FULL_EA_INFORMATION,
++ SMB2_O_INFO_FILE,
++ CIFSMaxBufSize -
++ MAX_SMB2_CREATE_RESPONSE_SIZE -
++ MAX_SMB2_CLOSE_RESPONSE_SIZE,
++ &rsp_iov[1], &resp_buftype[1], cifs_sb);
++ if (rc == 0) {
++ rsp = (struct smb2_query_info_rsp *)rsp_iov[1].iov_base;
++ used_len = le32_to_cpu(rsp->OutputBufferLength);
++ }
++ free_rsp_buf(resp_buftype[1], rsp_iov[1].iov_base);
++ resp_buftype[1] = CIFS_NO_BUFFER;
++ memset(&rsp_iov[1], 0, sizeof(rsp_iov[1]));
++ rc = 0;
++
++ /* Use a fudge factor of 256 bytes in case we collide
++ * with a different set_EAs command.
++ */
++ if(CIFSMaxBufSize - MAX_SMB2_CREATE_RESPONSE_SIZE -
++ MAX_SMB2_CLOSE_RESPONSE_SIZE - 256 <
++ used_len + ea_name_len + ea_value_len + 1) {
++ rc = -ENOSPC;
++ goto sea_exit;
++ }
+ }
+ }
+
--- /dev/null
+From e33a8cfda5198fc09554fdd77ba246de42c886bd Mon Sep 17 00:00:00 2001
+From: Alex Deucher <alexander.deucher@amd.com>
+Date: Thu, 6 Feb 2020 14:53:06 -0500
+Subject: drm/amdgpu:/navi10: use the ODCAP enum to index the caps array
+
+From: Alex Deucher <alexander.deucher@amd.com>
+
+commit e33a8cfda5198fc09554fdd77ba246de42c886bd upstream.
+
+Rather than the FEATURE_ID flags. Avoids possibly reading past
+the end of the array.
+
+Reviewed-by: Evan Quan <evan.quan@amd.com>
+Reported-by: Aleksandr Mezin <mezin.alexander@gmail.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Cc: stable@vger.kernel.org # 5.5.x
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpu/drm/amd/powerplay/navi10_ppt.c | 22 +++++++++++-----------
+ 1 file changed, 11 insertions(+), 11 deletions(-)
+
+--- a/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
++++ b/drivers/gpu/drm/amd/powerplay/navi10_ppt.c
+@@ -705,9 +705,9 @@ static bool navi10_is_support_fine_grain
+ return dpm_desc->SnapToDiscrete == 0 ? true : false;
+ }
+
+-static inline bool navi10_od_feature_is_supported(struct smu_11_0_overdrive_table *od_table, enum SMU_11_0_ODFEATURE_ID feature)
++static inline bool navi10_od_feature_is_supported(struct smu_11_0_overdrive_table *od_table, enum SMU_11_0_ODFEATURE_CAP cap)
+ {
+- return od_table->cap[feature];
++ return od_table->cap[cap];
+ }
+
+ static void navi10_od_setting_get_range(struct smu_11_0_overdrive_table *od_table,
+@@ -815,7 +815,7 @@ static int navi10_print_clk_levels(struc
+ case SMU_OD_SCLK:
+ if (!smu->od_enabled || !od_table || !od_settings)
+ break;
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_LIMITS))
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS))
+ break;
+ size += sprintf(buf + size, "OD_SCLK:\n");
+ size += sprintf(buf + size, "0: %uMhz\n1: %uMhz\n", od_table->GfxclkFmin, od_table->GfxclkFmax);
+@@ -823,7 +823,7 @@ static int navi10_print_clk_levels(struc
+ case SMU_OD_MCLK:
+ if (!smu->od_enabled || !od_table || !od_settings)
+ break;
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_UCLK_MAX))
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_UCLK_MAX))
+ break;
+ size += sprintf(buf + size, "OD_MCLK:\n");
+ size += sprintf(buf + size, "1: %uMHz\n", od_table->UclkFmax);
+@@ -831,7 +831,7 @@ static int navi10_print_clk_levels(struc
+ case SMU_OD_VDDC_CURVE:
+ if (!smu->od_enabled || !od_table || !od_settings)
+ break;
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_CURVE))
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_CURVE))
+ break;
+ size += sprintf(buf + size, "OD_VDDC_CURVE:\n");
+ for (i = 0; i < 3; i++) {
+@@ -856,7 +856,7 @@ static int navi10_print_clk_levels(struc
+ break;
+ size = sprintf(buf, "%s:\n", "OD_RANGE");
+
+- if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_LIMITS)) {
++ if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS)) {
+ navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_GFXCLKFMIN,
+ &min_value, NULL);
+ navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_GFXCLKFMAX,
+@@ -865,14 +865,14 @@ static int navi10_print_clk_levels(struc
+ min_value, max_value);
+ }
+
+- if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_UCLK_MAX)) {
++ if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_UCLK_MAX)) {
+ navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_UCLKFMAX,
+ &min_value, &max_value);
+ size += sprintf(buf + size, "MCLK: %7uMhz %10uMhz\n",
+ min_value, max_value);
+ }
+
+- if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_CURVE)) {
++ if (navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_CURVE)) {
+ navi10_od_setting_get_range(od_settings, SMU_11_0_ODSETTING_VDDGFXCURVEFREQ_P1,
+ &min_value, &max_value);
+ size += sprintf(buf + size, "VDDC_CURVE_SCLK[0]: %7uMhz %10uMhz\n",
+@@ -1956,7 +1956,7 @@ static int navi10_od_edit_dpm_table(stru
+
+ switch (type) {
+ case PP_OD_EDIT_SCLK_VDDC_TABLE:
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_LIMITS)) {
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_LIMITS)) {
+ pr_warn("GFXCLK_LIMITS not supported!\n");
+ return -ENOTSUPP;
+ }
+@@ -2002,7 +2002,7 @@ static int navi10_od_edit_dpm_table(stru
+ }
+ break;
+ case PP_OD_EDIT_MCLK_VDDC_TABLE:
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_UCLK_MAX)) {
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_UCLK_MAX)) {
+ pr_warn("UCLK_MAX not supported!\n");
+ return -ENOTSUPP;
+ }
+@@ -2043,7 +2043,7 @@ static int navi10_od_edit_dpm_table(stru
+ }
+ break;
+ case PP_OD_EDIT_VDDC_CURVE:
+- if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODFEATURE_GFXCLK_CURVE)) {
++ if (!navi10_od_feature_is_supported(od_settings, SMU_11_0_ODCAP_GFXCLK_CURVE)) {
+ pr_warn("GFXCLK_CURVE not supported!\n");
+ return -ENOTSUPP;
+ }
--- /dev/null
+From c1d66bc2e531b4ed3a9464b8e87144cc6b2fd63f Mon Sep 17 00:00:00 2001
+From: Alex Deucher <alexander.deucher@amd.com>
+Date: Thu, 6 Feb 2020 14:46:34 -0500
+Subject: drm/amdgpu: update smu_v11_0_pptable.h
+
+From: Alex Deucher <alexander.deucher@amd.com>
+
+commit c1d66bc2e531b4ed3a9464b8e87144cc6b2fd63f upstream.
+
+Update to the latest changes.
+
+Reviewed-by: Evan Quan <evan.quan@amd.com>
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Cc: stable@vger.kernel.org # 5.5.x
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h | 46 ++++++++++++------
+ 1 file changed, 32 insertions(+), 14 deletions(-)
+
+--- a/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h
++++ b/drivers/gpu/drm/amd/powerplay/inc/smu_v11_0_pptable.h
+@@ -39,21 +39,39 @@
+ #define SMU_11_0_PP_OVERDRIVE_VERSION 0x0800
+ #define SMU_11_0_PP_POWERSAVINGCLOCK_VERSION 0x0100
+
++enum SMU_11_0_ODFEATURE_CAP {
++ SMU_11_0_ODCAP_GFXCLK_LIMITS = 0,
++ SMU_11_0_ODCAP_GFXCLK_CURVE,
++ SMU_11_0_ODCAP_UCLK_MAX,
++ SMU_11_0_ODCAP_POWER_LIMIT,
++ SMU_11_0_ODCAP_FAN_ACOUSTIC_LIMIT,
++ SMU_11_0_ODCAP_FAN_SPEED_MIN,
++ SMU_11_0_ODCAP_TEMPERATURE_FAN,
++ SMU_11_0_ODCAP_TEMPERATURE_SYSTEM,
++ SMU_11_0_ODCAP_MEMORY_TIMING_TUNE,
++ SMU_11_0_ODCAP_FAN_ZERO_RPM_CONTROL,
++ SMU_11_0_ODCAP_AUTO_UV_ENGINE,
++ SMU_11_0_ODCAP_AUTO_OC_ENGINE,
++ SMU_11_0_ODCAP_AUTO_OC_MEMORY,
++ SMU_11_0_ODCAP_FAN_CURVE,
++ SMU_11_0_ODCAP_COUNT,
++};
++
+ enum SMU_11_0_ODFEATURE_ID {
+- SMU_11_0_ODFEATURE_GFXCLK_LIMITS = 1 << 0, //GFXCLK Limit feature
+- SMU_11_0_ODFEATURE_GFXCLK_CURVE = 1 << 1, //GFXCLK Curve feature
+- SMU_11_0_ODFEATURE_UCLK_MAX = 1 << 2, //UCLK Limit feature
+- SMU_11_0_ODFEATURE_POWER_LIMIT = 1 << 3, //Power Limit feature
+- SMU_11_0_ODFEATURE_FAN_ACOUSTIC_LIMIT = 1 << 4, //Fan Acoustic RPM feature
+- SMU_11_0_ODFEATURE_FAN_SPEED_MIN = 1 << 5, //Minimum Fan Speed feature
+- SMU_11_0_ODFEATURE_TEMPERATURE_FAN = 1 << 6, //Fan Target Temperature Limit feature
+- SMU_11_0_ODFEATURE_TEMPERATURE_SYSTEM = 1 << 7, //Operating Temperature Limit feature
+- SMU_11_0_ODFEATURE_MEMORY_TIMING_TUNE = 1 << 8, //AC Timing Tuning feature
+- SMU_11_0_ODFEATURE_FAN_ZERO_RPM_CONTROL = 1 << 9, //Zero RPM feature
+- SMU_11_0_ODFEATURE_AUTO_UV_ENGINE = 1 << 10, //Auto Under Volt GFXCLK feature
+- SMU_11_0_ODFEATURE_AUTO_OC_ENGINE = 1 << 11, //Auto Over Clock GFXCLK feature
+- SMU_11_0_ODFEATURE_AUTO_OC_MEMORY = 1 << 12, //Auto Over Clock MCLK feature
+- SMU_11_0_ODFEATURE_FAN_CURVE = 1 << 13, //VICTOR TODO
++ SMU_11_0_ODFEATURE_GFXCLK_LIMITS = 1 << SMU_11_0_ODCAP_GFXCLK_LIMITS, //GFXCLK Limit feature
++ SMU_11_0_ODFEATURE_GFXCLK_CURVE = 1 << SMU_11_0_ODCAP_GFXCLK_CURVE, //GFXCLK Curve feature
++ SMU_11_0_ODFEATURE_UCLK_MAX = 1 << SMU_11_0_ODCAP_UCLK_MAX, //UCLK Limit feature
++ SMU_11_0_ODFEATURE_POWER_LIMIT = 1 << SMU_11_0_ODCAP_POWER_LIMIT, //Power Limit feature
++ SMU_11_0_ODFEATURE_FAN_ACOUSTIC_LIMIT = 1 << SMU_11_0_ODCAP_FAN_ACOUSTIC_LIMIT, //Fan Acoustic RPM feature
++ SMU_11_0_ODFEATURE_FAN_SPEED_MIN = 1 << SMU_11_0_ODCAP_FAN_SPEED_MIN, //Minimum Fan Speed feature
++ SMU_11_0_ODFEATURE_TEMPERATURE_FAN = 1 << SMU_11_0_ODCAP_TEMPERATURE_FAN, //Fan Target Temperature Limit feature
++ SMU_11_0_ODFEATURE_TEMPERATURE_SYSTEM = 1 << SMU_11_0_ODCAP_TEMPERATURE_SYSTEM, //Operating Temperature Limit feature
++ SMU_11_0_ODFEATURE_MEMORY_TIMING_TUNE = 1 << SMU_11_0_ODCAP_MEMORY_TIMING_TUNE, //AC Timing Tuning feature
++ SMU_11_0_ODFEATURE_FAN_ZERO_RPM_CONTROL = 1 << SMU_11_0_ODCAP_FAN_ZERO_RPM_CONTROL, //Zero RPM feature
++ SMU_11_0_ODFEATURE_AUTO_UV_ENGINE = 1 << SMU_11_0_ODCAP_AUTO_UV_ENGINE, //Auto Under Volt GFXCLK feature
++ SMU_11_0_ODFEATURE_AUTO_OC_ENGINE = 1 << SMU_11_0_ODCAP_AUTO_OC_ENGINE, //Auto Over Clock GFXCLK feature
++ SMU_11_0_ODFEATURE_AUTO_OC_MEMORY = 1 << SMU_11_0_ODCAP_AUTO_OC_MEMORY, //Auto Over Clock MCLK feature
++ SMU_11_0_ODFEATURE_FAN_CURVE = 1 << SMU_11_0_ODCAP_FAN_CURVE, //Fan Curve feature
+ SMU_11_0_ODFEATURE_COUNT = 14,
+ };
+ #define SMU_11_0_MAX_ODFEATURE 32 //Maximum Number of OD Features
--- /dev/null
+From 8ccb5bf7619c6523e7a4384a84b72e7be804298c Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Jos=C3=A9=20Roberto=20de=20Souza?= <jose.souza@intel.com>
+Date: Wed, 29 Jan 2020 15:24:48 -0800
+Subject: drm/mst: Fix possible NULL pointer dereference in drm_dp_mst_process_up_req()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: José Roberto de Souza <jose.souza@intel.com>
+
+commit 8ccb5bf7619c6523e7a4384a84b72e7be804298c upstream.
+
+According to the DP specification, DP_SINK_EVENT_NOTIFY is also a
+broadcast message, but as this function only handles
+DP_CONNECTION_STATUS_NOTIFY, just make the static analyzer that
+caught this issue happy by not calling
+drm_dp_get_mst_branch_device_by_guid() with a NULL guid, causing
+drm_dp_mst_process_up_req() to return in the "if (!mstb)" check right
+below.
+
+Fixes: 9408cc94eb04 ("drm/dp_mst: Handle UP requests asynchronously")
+Cc: Lyude Paul <lyude@redhat.com>
+Cc: Sean Paul <sean@poorly.run>
+Cc: <stable@vger.kernel.org> # v5.5+
+Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
+[added cc to stable]
+Signed-off-by: Lyude Paul <lyude@redhat.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20200129232448.84704-1-jose.souza@intel.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpu/drm/drm_dp_mst_topology.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/drm_dp_mst_topology.c
+@@ -3772,7 +3772,8 @@ drm_dp_mst_process_up_req(struct drm_dp_
+ else if (msg->req_type == DP_RESOURCE_STATUS_NOTIFY)
+ guid = msg->u.resource_stat.guid;
+
+- mstb = drm_dp_get_mst_branch_device_by_guid(mgr, guid);
++ if (guid)
++ mstb = drm_dp_get_mst_branch_device_by_guid(mgr, guid);
+ } else {
+ mstb = drm_dp_get_mst_branch_device(mgr, hdr->lct, hdr->rad);
+ }
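The guard added above is a general pattern: only hand a pointer to a helper that unconditionally dereferences it when the earlier branches actually produced one. A minimal userspace sketch of the same shape (the types and names here are hypothetical stand-ins, not the DRM MST API):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins: a lookup that, like
 * drm_dp_get_mst_branch_device_by_guid(), dereferences the guid it is
 * given, and a caller that now guards against passing NULL. */
struct branch { unsigned char guid[16]; };

static struct branch known = { .guid = "0123456789abcde" };

static struct branch *lookup_by_guid(const unsigned char *guid)
{
    /* memcmp() on a NULL guid would be undefined behaviour, which is
     * exactly the class of bug the fix above avoids. */
    if (memcmp(guid, known.guid, sizeof(known.guid)) == 0)
        return &known;
    return NULL;
}

/* Mirrors the fixed call site: only perform the lookup when a guid was
 * actually extracted; an unhandled broadcast type leaves guid NULL and
 * we fall through to the !mstb bail-out. */
static struct branch *process_up_req(const unsigned char *guid)
{
    struct branch *mstb = NULL;

    if (guid)
        mstb = lookup_by_guid(guid);
    return mstb; /* caller returns early when this is NULL */
}
```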
--- /dev/null
+From 7e0cf7e9936c4358b0863357b90aa12afe6489da Mon Sep 17 00:00:00 2001
+From: Boris Brezillon <boris.brezillon@collabora.com>
+Date: Fri, 29 Nov 2019 14:59:08 +0100
+Subject: drm/panfrost: Make sure the shrinker does not reclaim referenced BOs
+
+From: Boris Brezillon <boris.brezillon@collabora.com>
+
+commit 7e0cf7e9936c4358b0863357b90aa12afe6489da upstream.
+
+Userspace might tag a BO purgeable while it's still referenced by GPU
+jobs. We need to make sure the shrinker does not purge such BOs until
+all jobs referencing them are finished.
+
+Fixes: 013b65101315 ("drm/panfrost: Add madvise and shrinker support")
+Cc: <stable@vger.kernel.org>
+Signed-off-by: Boris Brezillon <boris.brezillon@collabora.com>
+Reviewed-by: Steven Price <steven.price@arm.com>
+Signed-off-by: Rob Herring <robh@kernel.org>
+Link: https://patchwork.freedesktop.org/patch/msgid/20191129135908.2439529-9-boris.brezillon@collabora.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpu/drm/panfrost/panfrost_drv.c | 1 +
+ drivers/gpu/drm/panfrost/panfrost_gem.h | 6 ++++++
+ drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c | 3 +++
+ drivers/gpu/drm/panfrost/panfrost_job.c | 7 ++++++-
+ 4 files changed, 16 insertions(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
++++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
+@@ -166,6 +166,7 @@ panfrost_lookup_bos(struct drm_device *d
+ break;
+ }
+
++ atomic_inc(&bo->gpu_usecount);
+ job->mappings[i] = mapping;
+ }
+
+--- a/drivers/gpu/drm/panfrost/panfrost_gem.h
++++ b/drivers/gpu/drm/panfrost/panfrost_gem.h
+@@ -30,6 +30,12 @@ struct panfrost_gem_object {
+ struct mutex lock;
+ } mappings;
+
++ /*
++ * Count the number of jobs referencing this BO so we don't let the
++ * shrinker reclaim this object prematurely.
++ */
++ atomic_t gpu_usecount;
++
+ bool noexec :1;
+ bool is_heap :1;
+ };
+--- a/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
++++ b/drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c
+@@ -41,6 +41,9 @@ static bool panfrost_gem_purge(struct dr
+ struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
+ struct panfrost_gem_object *bo = to_panfrost_bo(obj);
+
++ if (atomic_read(&bo->gpu_usecount))
++ return false;
++
+ if (!mutex_trylock(&shmem->pages_lock))
+ return false;
+
+--- a/drivers/gpu/drm/panfrost/panfrost_job.c
++++ b/drivers/gpu/drm/panfrost/panfrost_job.c
+@@ -269,8 +269,13 @@ static void panfrost_job_cleanup(struct
+ dma_fence_put(job->render_done_fence);
+
+ if (job->mappings) {
+- for (i = 0; i < job->bo_count; i++)
++ for (i = 0; i < job->bo_count; i++) {
++ if (!job->mappings[i])
++ break;
++
++ atomic_dec(&job->mappings[i]->obj->gpu_usecount);
+ panfrost_gem_mapping_put(job->mappings[i]);
++ }
+ kvfree(job->mappings);
+ }
+
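The fix above follows a common pattern: an atomic use count taken while jobs reference the object, checked by the reclaimer before purging. A userspace model of the same idea (struct and function names are illustrative, not the panfrost API):

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* A BO carries an atomic count of in-flight GPU jobs; the shrinker
 * refuses to purge while that count is non-zero. */
struct bo {
    atomic_int gpu_usecount;
    bool purged;
};

/* Taken while the job holds a mapping, dropped in job cleanup. */
static void job_take(struct bo *bo) { atomic_fetch_add(&bo->gpu_usecount, 1); }
static void job_drop(struct bo *bo) { atomic_fetch_sub(&bo->gpu_usecount, 1); }

/* Mirrors the panfrost_gem_purge() hunk: bail out early if any job
 * still references the BO, otherwise reclaim it. */
static bool try_purge(struct bo *bo)
{
    if (atomic_load(&bo->gpu_usecount))
        return false;
    bo->purged = true;
    return true;
}
```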
--- /dev/null
+From 4b848f20eda5974020f043ca14bacf7a7e634fc8 Mon Sep 17 00:00:00 2001
+From: Daniel Vetter <daniel.vetter@ffwll.ch>
+Date: Sun, 2 Feb 2020 14:21:33 +0100
+Subject: drm/vgem: Close use-after-free race in vgem_gem_create
+
+From: Daniel Vetter <daniel.vetter@ffwll.ch>
+
+commit 4b848f20eda5974020f043ca14bacf7a7e634fc8 upstream.
+
+There are two references floating around here (for the object reference,
+not the handle_count reference, that's a different thing):
+
+- The temporary reference held by vgem_gem_create, acquired by
+ creating the object and released by calling
+ drm_gem_object_put_unlocked.
+
+- The reference held by the object handle, created by
+ drm_gem_handle_create. This one generally outlives the function,
+ except if a 2nd thread races with a GEM_CLOSE ioctl call.
+
+So usually everything is correct, except in that race case, where the
+access to gem_object->size could be looking at freed data already.
+Which again isn't a real problem (userspace shot its feet off already
+with the race, we could return garbage), but maybe someone can exploit
+this as an information leak.
+
+Cc: Dan Carpenter <dan.carpenter@oracle.com>
+Cc: Hillf Danton <hdanton@sina.com>
+Reported-by: syzbot+0dc4444774d419e916c8@syzkaller.appspotmail.com
+Cc: stable@vger.kernel.org
+Cc: Emil Velikov <emil.velikov@collabora.com>
+Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
+Cc: Sean Paul <seanpaul@chromium.org>
+Cc: Chris Wilson <chris@chris-wilson.co.uk>
+Cc: Eric Anholt <eric@anholt.net>
+Cc: Sam Ravnborg <sam@ravnborg.org>
+Cc: Rob Clark <robdclark@chromium.org>
+Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
+Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
+Link: https://patchwork.freedesktop.org/patch/msgid/20200202132133.1891846-1-daniel.vetter@ffwll.ch
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpu/drm/vgem/vgem_drv.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+--- a/drivers/gpu/drm/vgem/vgem_drv.c
++++ b/drivers/gpu/drm/vgem/vgem_drv.c
+@@ -196,9 +196,10 @@ static struct drm_gem_object *vgem_gem_c
+ return ERR_CAST(obj);
+
+ ret = drm_gem_handle_create(file, &obj->base, handle);
+- drm_gem_object_put_unlocked(&obj->base);
+- if (ret)
++ if (ret) {
++ drm_gem_object_put_unlocked(&obj->base);
+ return ERR_PTR(ret);
++ }
+
+ return &obj->base;
+ }
+@@ -221,7 +222,9 @@ static int vgem_gem_dumb_create(struct d
+ args->size = gem_object->size;
+ args->pitch = pitch;
+
+- DRM_DEBUG("Created object of size %lld\n", size);
++ drm_gem_object_put_unlocked(gem_object);
++
++ DRM_DEBUG("Created object of size %llu\n", args->size);
+
+ return 0;
+ }
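The ordering change above is the whole fix: every read through the object must happen before the temporary reference is dropped, because another thread may drop the handle's reference at any time. A userspace sketch of the corrected ordering (all names here are hypothetical, not the vgem code):

```c
#include <assert.h>
#include <stdlib.h>

/* An object is created with one reference; publishing a handle takes a
 * second one that another thread may drop at any time. Reading
 * obj->size is only safe while we still hold our own reference, so the
 * temporary reference must be dropped after the last access, never
 * before it. */
struct obj {
    int refcount;
    unsigned long size;
};

static struct obj *obj_create(unsigned long size)
{
    struct obj *o = malloc(sizeof(*o));
    o->refcount = 1;
    o->size = size;
    return o;
}

static void obj_put(struct obj *o)
{
    if (--o->refcount == 0)
        free(o);
}

/* Fixed ordering, as in vgem_gem_dumb_create(): copy out everything we
 * need, then drop our reference. */
static unsigned long dumb_create(struct obj *o, unsigned long *out_size)
{
    *out_size = o->size;   /* last access while the reference is held */
    obj_put(o);            /* only now may the object go away */
    return *out_size;
}
```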
--- /dev/null
+From 216aa145aaf379a50b17afc812db71d893bd6683 Mon Sep 17 00:00:00 2001
+From: Robert Richter <rrichter@marvell.com>
+Date: Wed, 12 Feb 2020 18:25:18 +0100
+Subject: EDAC/mc: Fix use-after-free and memleaks during device removal
+
+From: Robert Richter <rrichter@marvell.com>
+
+commit 216aa145aaf379a50b17afc812db71d893bd6683 upstream.
+
+A test kernel with the options DEBUG_TEST_DRIVER_REMOVE, KASAN and
+DEBUG_KMEMLEAK set, revealed several issues when removing an mci device:
+
+1) Use-after-free:
+
+On 27.11.19 17:07:33, John Garry wrote:
+> [ 22.104498] BUG: KASAN: use-after-free in
+> edac_remove_sysfs_mci_device+0x148/0x180
+
+The use-after-free is caused by the mci_for_each_dimm() macro called in
+edac_remove_sysfs_mci_device(). The iterator was introduced with
+
+ c498afaf7df8 ("EDAC: Introduce an mci_for_each_dimm() iterator").
+
+The iterator loop calls device_unregister(&dimm->dev), which removes
+the sysfs entry of the device, but also frees the dimm struct in
+dimm_attr_release(). When incrementing the loop in mci_for_each_dimm(),
+the dimm struct is accessed again, after having been freed already.
+
+The fix is to free all the mci device's subsequent dimm and csrow
+objects at a later point, in _edac_mc_free(), when the mci device itself
+is being freed.
+
+This keeps the data structures intact and the mci device can be
+fully used until its removal. The change allows the safe usage of
+mci_for_each_dimm() to release dimm devices from sysfs.
+
+2) Memory leaks:
+
+The following memory leaks have been detected:
+
+ # grep edac /sys/kernel/debug/kmemleak | sort | uniq -c
+ 1 [<000000003c0f58f9>] edac_mc_alloc+0x3bc/0x9d0 # mci->csrows
+ 16 [<00000000bb932dc0>] edac_mc_alloc+0x49c/0x9d0 # csr->channels
+ 16 [<00000000e2734dba>] edac_mc_alloc+0x518/0x9d0 # csr->channels[chn]
+ 1 [<00000000eb040168>] edac_mc_alloc+0x5c8/0x9d0 # mci->dimms
+ 34 [<00000000ef737c29>] ghes_edac_register+0x1c8/0x3f8 # see edac_mc_alloc()
+
+All leaks are from memory allocated by edac_mc_alloc().
+
+Note: The test above shows that edac_mc_alloc() was called here from
+ghes_edac_register(), thus both functions show up in the stack trace
+but the module causing the leaks is edac_mc. The comments with the data
+structures involved were made manually by analyzing the objdump.
+
+The data structures listed above and created by edac_mc_alloc() are
+not properly removed during device removal, which is done in
+edac_mc_free().
+
+There are two paths implemented to remove the device depending on
+device registration: _edac_mc_free() is called if the device is not
+registered, and edac_unregister_sysfs() otherwise.
+
+The implementations differ. For the sysfs case, the mci device removal
+lacks the removal of subsequent data structures (csrows, channels,
+dimms). This causes the memory leaks (see mci_attr_release()).
+
+ [ bp: Massage commit message. ]
+
+Fixes: c498afaf7df8 ("EDAC: Introduce an mci_for_each_dimm() iterator")
+Fixes: faa2ad09c01c ("edac_mc: edac_mc_free() cannot assume mem_ctl_info is registered in sysfs.")
+Fixes: 7a623c039075 ("edac: rewrite the sysfs code to use struct device")
+Reported-by: John Garry <john.garry@huawei.com>
+Signed-off-by: Robert Richter <rrichter@marvell.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Tested-by: John Garry <john.garry@huawei.com>
+Cc: <stable@vger.kernel.org>
+Link: https://lkml.kernel.org/r/20200212120340.4764-3-rrichter@marvell.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/edac/edac_mc.c | 12 +++---------
+ drivers/edac/edac_mc_sysfs.c | 15 +++------------
+ 2 files changed, 6 insertions(+), 21 deletions(-)
+
+--- a/drivers/edac/edac_mc.c
++++ b/drivers/edac/edac_mc.c
+@@ -505,16 +505,10 @@ void edac_mc_free(struct mem_ctl_info *m
+ {
+ edac_dbg(1, "\n");
+
+- /* If we're not yet registered with sysfs free only what was allocated
+- * in edac_mc_alloc().
+- */
+- if (!device_is_registered(&mci->dev)) {
+- _edac_mc_free(mci);
+- return;
+- }
++ if (device_is_registered(&mci->dev))
++ edac_unregister_sysfs(mci);
+
+- /* the mci instance is freed here, when the sysfs object is dropped */
+- edac_unregister_sysfs(mci);
++ _edac_mc_free(mci);
+ }
+ EXPORT_SYMBOL_GPL(edac_mc_free);
+
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -276,10 +276,7 @@ static const struct attribute_group *csr
+
+ static void csrow_attr_release(struct device *dev)
+ {
+- struct csrow_info *csrow = container_of(dev, struct csrow_info, dev);
+-
+- edac_dbg(1, "device %s released\n", dev_name(dev));
+- kfree(csrow);
++ /* release device with _edac_mc_free() */
+ }
+
+ static const struct device_type csrow_attr_type = {
+@@ -607,10 +604,7 @@ static const struct attribute_group *dim
+
+ static void dimm_attr_release(struct device *dev)
+ {
+- struct dimm_info *dimm = container_of(dev, struct dimm_info, dev);
+-
+- edac_dbg(1, "device %s released\n", dev_name(dev));
+- kfree(dimm);
++ /* release device with _edac_mc_free() */
+ }
+
+ static const struct device_type dimm_attr_type = {
+@@ -892,10 +886,7 @@ static const struct attribute_group *mci
+
+ static void mci_attr_release(struct device *dev)
+ {
+- struct mem_ctl_info *mci = container_of(dev, struct mem_ctl_info, dev);
+-
+- edac_dbg(1, "device %s released\n", dev_name(dev));
+- kfree(mci);
++ /* release device with _edac_mc_free() */
+ }
+
+ static const struct device_type mci_attr_type = {
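The use-after-free described above is the classic "release frees the element the iterator needs next" bug, and the fix is to split unregistration from freeing. A userspace model of that split (linked list and names are illustrative, not the EDAC structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Tearing down entries during iteration must not also free the node
 * the iterator needs to advance. Freeing is deferred to a separate
 * pass, run when the parent itself is released, mirroring
 * _edac_mc_free(). */
struct dimm {
    struct dimm *next;
    int unregistered;
};

/* Unregister only: the iterator may keep walking afterwards, because
 * nothing is freed here (device_unregister(), no kfree()). */
static void unregister_all(struct dimm *head)
{
    for (struct dimm *d = head; d; d = d->next)
        d->unregistered = 1;
}

/* Separate teardown pass: safe because it reads each node's link
 * before freeing the node. */
static void free_all(struct dimm *head)
{
    struct dimm *d = head;
    while (d) {
        struct dimm *next = d->next;  /* read link before freeing */
        free(d);
        d = next;
    }
}
```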
--- /dev/null
+From 4d59588c09f2a2daedad2a544d4d1b602ab3a8af Mon Sep 17 00:00:00 2001
+From: Robert Richter <rrichter@marvell.com>
+Date: Wed, 12 Feb 2020 13:03:39 +0100
+Subject: EDAC/sysfs: Remove csrow objects on errors
+
+From: Robert Richter <rrichter@marvell.com>
+
+commit 4d59588c09f2a2daedad2a544d4d1b602ab3a8af upstream.
+
+All created csrow objects must be removed in the error path of
+edac_create_csrow_objects(). The objects have been added as devices.
+
+They need to be removed by doing a device_del() *and* put_device() call
+to also free their memory. The missing put_device() leaves a memory
+leak. Use device_unregister() instead of device_del() which properly
+unregisters the device doing both.
+
+Fixes: 7adc05d2dc3a ("EDAC/sysfs: Drop device references properly")
+Signed-off-by: Robert Richter <rrichter@marvell.com>
+Signed-off-by: Borislav Petkov <bp@suse.de>
+Tested-by: John Garry <john.garry@huawei.com>
+Cc: <stable@vger.kernel.org>
+Link: https://lkml.kernel.org/r/20200212120340.4764-4-rrichter@marvell.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/edac/edac_mc_sysfs.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/drivers/edac/edac_mc_sysfs.c
++++ b/drivers/edac/edac_mc_sysfs.c
+@@ -447,8 +447,7 @@ error:
+ csrow = mci->csrows[i];
+ if (!nr_pages_per_csrow(csrow))
+ continue;
+-
+- device_del(&mci->csrows[i]->dev);
++ device_unregister(&mci->csrows[i]->dev);
+ }
+
+ return err;
--- /dev/null
+From af133ade9a40794a37104ecbcc2827c0ea373a3c Mon Sep 17 00:00:00 2001
+From: Shijie Luo <luoshijie1@huawei.com>
+Date: Mon, 10 Feb 2020 20:17:52 -0500
+Subject: ext4: add cond_resched() to ext4_protect_reserved_inode
+
+From: Shijie Luo <luoshijie1@huawei.com>
+
+commit af133ade9a40794a37104ecbcc2827c0ea373a3c upstream.
+
+When the journal size is set too big by "mkfs.ext4 -J size=", or when
+we mount a crafted image that makes the journal inode->i_size too big,
+the loop, "while (i < num)", holds the CPU for too long. This could
+cause a soft lockup.
+
+[ 529.357541] Call trace:
+[ 529.357551] dump_backtrace+0x0/0x198
+[ 529.357555] show_stack+0x24/0x30
+[ 529.357562] dump_stack+0xa4/0xcc
+[ 529.357568] watchdog_timer_fn+0x300/0x3e8
+[ 529.357574] __hrtimer_run_queues+0x114/0x358
+[ 529.357576] hrtimer_interrupt+0x104/0x2d8
+[ 529.357580] arch_timer_handler_virt+0x38/0x58
+[ 529.357584] handle_percpu_devid_irq+0x90/0x248
+[ 529.357588] generic_handle_irq+0x34/0x50
+[ 529.357590] __handle_domain_irq+0x68/0xc0
+[ 529.357593] gic_handle_irq+0x6c/0x150
+[ 529.357595] el1_irq+0xb8/0x140
+[ 529.357599] __ll_sc_atomic_add_return_acquire+0x14/0x20
+[ 529.357668] ext4_map_blocks+0x64/0x5c0 [ext4]
+[ 529.357693] ext4_setup_system_zone+0x330/0x458 [ext4]
+[ 529.357717] ext4_fill_super+0x2170/0x2ba8 [ext4]
+[ 529.357722] mount_bdev+0x1a8/0x1e8
+[ 529.357746] ext4_mount+0x44/0x58 [ext4]
+[ 529.357748] mount_fs+0x50/0x170
+[ 529.357752] vfs_kern_mount.part.9+0x54/0x188
+[ 529.357755] do_mount+0x5ac/0xd78
+[ 529.357758] ksys_mount+0x9c/0x118
+[ 529.357760] __arm64_sys_mount+0x28/0x38
+[ 529.357764] el0_svc_common+0x78/0x130
+[ 529.357766] el0_svc_handler+0x38/0x78
+[ 529.357769] el0_svc+0x8/0xc
+[ 541.356516] watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [mount:18674]
+
+Link: https://lore.kernel.org/r/20200211011752.29242-1-luoshijie1@huawei.com
+Reviewed-by: Jan Kara <jack@suse.cz>
+Signed-off-by: Shijie Luo <luoshijie1@huawei.com>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext4/block_validity.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/ext4/block_validity.c
++++ b/fs/ext4/block_validity.c
+@@ -207,6 +207,7 @@ static int ext4_protect_reserved_inode(s
+ return PTR_ERR(inode);
+ num = (inode->i_size + sb->s_blocksize - 1) >> sb->s_blocksize_bits;
+ while (i < num) {
++ cond_resched();
+ map.m_lblk = i;
+ map.m_len = num - i;
+ n = ext4_map_blocks(NULL, inode, &map, 0);
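The one-line fix above drops a cond_resched() into a potentially huge mapping loop so the scheduler gets a chance to run other tasks. A rough userspace analogue of the same technique (function and constants are made up for illustration; sched_yield() is only a loose stand-in for cond_resched()):

```c
#include <sched.h>

/* Yield the CPU periodically inside a long-running loop so other tasks
 * are not starved. Yielding every iteration would be wasteful, so it
 * is batched, here every 64K iterations. */
static unsigned long long sum_blocks(unsigned long num)
{
    unsigned long long total = 0;

    for (unsigned long i = 0; i < num; i++) {
        if ((i & 0xffff) == 0)
            sched_yield();  /* kernel code calls cond_resched() here */
        total += i;
    }
    return total;
}
```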
--- /dev/null
+From 14c9ca0583eee8df285d68a0e6ec71053efd2228 Mon Sep 17 00:00:00 2001
+From: Andreas Dilger <adilger@dilger.ca>
+Date: Sun, 26 Jan 2020 15:03:34 -0700
+Subject: ext4: don't assume that mmp_nodename/bdevname have NUL
+
+From: Andreas Dilger <adilger@dilger.ca>
+
+commit 14c9ca0583eee8df285d68a0e6ec71053efd2228 upstream.
+
+Don't assume that the mmp_nodename and mmp_bdevname strings are NUL
+terminated, since they are filled in by snprintf(), which is not
+guaranteed to do so.
+
+Link: https://lore.kernel.org/r/1580076215-1048-1-git-send-email-adilger@dilger.ca
+Signed-off-by: Andreas Dilger <adilger@dilger.ca>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext4/mmp.c | 12 +++++++-----
+ 1 file changed, 7 insertions(+), 5 deletions(-)
+
+--- a/fs/ext4/mmp.c
++++ b/fs/ext4/mmp.c
+@@ -120,10 +120,10 @@ void __dump_mmp_msg(struct super_block *
+ {
+ __ext4_warning(sb, function, line, "%s", msg);
+ __ext4_warning(sb, function, line,
+- "MMP failure info: last update time: %llu, last update "
+- "node: %s, last update device: %s",
+- (long long unsigned int) le64_to_cpu(mmp->mmp_time),
+- mmp->mmp_nodename, mmp->mmp_bdevname);
++ "MMP failure info: last update time: %llu, last update node: %.*s, last update device: %.*s",
++ (unsigned long long)le64_to_cpu(mmp->mmp_time),
++ (int)sizeof(mmp->mmp_nodename), mmp->mmp_nodename,
++ (int)sizeof(mmp->mmp_bdevname), mmp->mmp_bdevname);
+ }
+
+ /*
+@@ -154,6 +154,7 @@ static int kmmpd(void *data)
+ mmp_check_interval = max(EXT4_MMP_CHECK_MULT * mmp_update_interval,
+ EXT4_MMP_MIN_CHECK_INTERVAL);
+ mmp->mmp_check_interval = cpu_to_le16(mmp_check_interval);
++ BUILD_BUG_ON(sizeof(mmp->mmp_bdevname) < BDEVNAME_SIZE);
+ bdevname(bh->b_bdev, mmp->mmp_bdevname);
+
+ memcpy(mmp->mmp_nodename, init_utsname()->nodename,
+@@ -375,7 +376,8 @@ skip:
+ /*
+ * Start a kernel thread to update the MMP block periodically.
+ */
+- EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, mmpd_data, "kmmpd-%s",
++ EXT4_SB(sb)->s_mmp_tsk = kthread_run(kmmpd, mmpd_data, "kmmpd-%.*s",
++ (int)sizeof(mmp->mmp_bdevname),
+ bdevname(bh->b_bdev,
+ mmp->mmp_bdevname));
+ if (IS_ERR(EXT4_SB(sb)->s_mmp_tsk)) {
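The "%.*s" conversions introduced above bound the printed length by the field size, so a buffer that fills its array without a terminating NUL is still safe to format. A minimal sketch of that technique (the 8-byte field and function name are invented for the example):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a fixed-size, possibly unterminated character field.
 * "%.*s" stops at either the NUL or the given byte count, whichever
 * comes first, so an unterminated buffer cannot be overread. */
static int format_name(char *out, size_t outsz, const char name[8])
{
    return snprintf(out, outsz, "node: %.*s", 8, name);
}
```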
--- /dev/null
+From 48a34311953d921235f4d7bbd2111690d2e469cf Mon Sep 17 00:00:00 2001
+From: Jan Kara <jack@suse.cz>
+Date: Mon, 10 Feb 2020 15:43:16 +0100
+Subject: ext4: fix checksum errors with indexed dirs
+
+From: Jan Kara <jack@suse.cz>
+
+commit 48a34311953d921235f4d7bbd2111690d2e469cf upstream.
+
+DIR_INDEX has been introduced as a compat ext4 feature. That means that
+even kernels / tools that don't understand the feature may modify the
+filesystem. This works because for kernels not understanding indexed dir
+format, internal htree nodes appear just as empty directory entries.
+Index dir aware kernels then check the htree structure is still
+consistent before using the data. This all worked reasonably well until
+metadata checksums were introduced. The problem is that these
+effectively made DIR_INDEX only ro-compatible because internal htree
+nodes store checksums in a different place than normal directory blocks.
+Thus any modification ignorant to DIR_INDEX (or just clearing
+EXT4_INDEX_FL from the inode) will effectively cause checksum mismatch
+and trigger kernel errors. So we have to be more careful when dealing
+with indexed directories on filesystems with checksumming enabled.
+
+1) We just disallow loading any directory inodes with EXT4_INDEX_FL when
+DIR_INDEX is not enabled. This is harsh but it should be very rare (it
+means someone disabled DIR_INDEX on existing filesystem and didn't run
+e2fsck), e2fsck can fix the problem, and we don't want to answer the
+difficult question: "Should we rather corrupt the directory more or
+should we ignore that DIR_INDEX feature is not set?"
+
+2) When we find out the htree structure is corrupted (but the filesystem
+and the directory should support htrees), we continue just ignoring
+htree information for reading but we refuse to add new entries to the
+directory to avoid corrupting it more.
+
+Link: https://lore.kernel.org/r/20200210144316.22081-1-jack@suse.cz
+Fixes: dbe89444042a ("ext4: Calculate and verify checksums for htree nodes")
+Reviewed-by: Andreas Dilger <adilger@dilger.ca>
+Signed-off-by: Jan Kara <jack@suse.cz>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext4/dir.c | 14 ++++++++------
+ fs/ext4/ext4.h | 5 ++++-
+ fs/ext4/inode.c | 12 ++++++++++++
+ fs/ext4/namei.c | 7 +++++++
+ 4 files changed, 31 insertions(+), 7 deletions(-)
+
+--- a/fs/ext4/dir.c
++++ b/fs/ext4/dir.c
+@@ -129,12 +129,14 @@ static int ext4_readdir(struct file *fil
+ if (err != ERR_BAD_DX_DIR) {
+ return err;
+ }
+- /*
+- * We don't set the inode dirty flag since it's not
+- * critical that it get flushed back to the disk.
+- */
+- ext4_clear_inode_flag(file_inode(file),
+- EXT4_INODE_INDEX);
++ /* Can we just clear INDEX flag to ignore htree information? */
++ if (!ext4_has_metadata_csum(sb)) {
++ /*
++ * We don't set the inode dirty flag since it's not
++ * critical that it gets flushed back to the disk.
++ */
++ ext4_clear_inode_flag(inode, EXT4_INODE_INDEX);
++ }
+ }
+
+ if (ext4_has_inline_data(inode)) {
+--- a/fs/ext4/ext4.h
++++ b/fs/ext4/ext4.h
+@@ -2482,8 +2482,11 @@ void ext4_insert_dentry(struct inode *in
+ struct ext4_filename *fname);
+ static inline void ext4_update_dx_flag(struct inode *inode)
+ {
+- if (!ext4_has_feature_dir_index(inode->i_sb))
++ if (!ext4_has_feature_dir_index(inode->i_sb)) {
++ /* ext4_iget() should have caught this... */
++ WARN_ON_ONCE(ext4_has_feature_metadata_csum(inode->i_sb));
+ ext4_clear_inode_flag(inode, EXT4_INODE_INDEX);
++ }
+ }
+ static const unsigned char ext4_filetype_table[] = {
+ DT_UNKNOWN, DT_REG, DT_DIR, DT_CHR, DT_BLK, DT_FIFO, DT_SOCK, DT_LNK
+--- a/fs/ext4/inode.c
++++ b/fs/ext4/inode.c
+@@ -4615,6 +4615,18 @@ struct inode *__ext4_iget(struct super_b
+ ret = -EFSCORRUPTED;
+ goto bad_inode;
+ }
++ /*
++ * If dir_index is not enabled but there's dir with INDEX flag set,
++ * we'd normally treat htree data as empty space. But with metadata
++ * checksumming that corrupts checksums so forbid that.
++ */
++ if (!ext4_has_feature_dir_index(sb) && ext4_has_metadata_csum(sb) &&
++ ext4_test_inode_flag(inode, EXT4_INODE_INDEX)) {
++ ext4_error_inode(inode, function, line, 0,
++ "iget: Dir with htree data on filesystem without dir_index feature.");
++ ret = -EFSCORRUPTED;
++ goto bad_inode;
++ }
+ ei->i_disksize = inode->i_size;
+ #ifdef CONFIG_QUOTA
+ ei->i_reserved_quota = 0;
+--- a/fs/ext4/namei.c
++++ b/fs/ext4/namei.c
+@@ -2207,6 +2207,13 @@ static int ext4_add_entry(handle_t *hand
+ retval = ext4_dx_add_entry(handle, &fname, dir, inode);
+ if (!retval || (retval != ERR_BAD_DX_DIR))
+ goto out;
++ /* Can we just ignore htree data? */
++ if (ext4_has_metadata_csum(sb)) {
++ EXT4_ERROR_INODE(dir,
++ "Directory has corrupted htree index.");
++ retval = -EFSCORRUPTED;
++ goto out;
++ }
+ ext4_clear_inode_flag(dir, EXT4_INODE_INDEX);
+ dx_fallback++;
+ ext4_mark_inode_dirty(handle, dir);
--- /dev/null
+From 4f97a68192bd33b9963b400759cef0ca5963af00 Mon Sep 17 00:00:00 2001
+From: Theodore Ts'o <tytso@mit.edu>
+Date: Thu, 6 Feb 2020 17:35:01 -0500
+Subject: ext4: fix support for inode sizes > 1024 bytes
+
+From: Theodore Ts'o <tytso@mit.edu>
+
+commit 4f97a68192bd33b9963b400759cef0ca5963af00 upstream.
+
+A recent commit, 9803387c55f7 ("ext4: validate the
+debug_want_extra_isize mount option at parse time"), moved mount-time
+checks around. One of those changes moved the inode size check before
+the blocksize variable was set to the blocksize of the file system.
+After 9803387c55f7, the check was done against the minimum allowable
+blocksize, which in practice on most systems would be 1024 bytes. This
+caused file systems with inode sizes larger than 1024 bytes to be
+rejected with a message:
+
+EXT4-fs (sdXX): unsupported inode size: 4096
+
+Fixes: 9803387c55f7 ("ext4: validate the debug_want_extra_isize mount option at parse time")
+Link: https://lore.kernel.org/r/20200206225252.GA3673@mit.edu
+Reported-by: Herbert Poetzl <herbert@13thfloor.at>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext4/super.c | 18 ++++++++++--------
+ 1 file changed, 10 insertions(+), 8 deletions(-)
+
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -3768,6 +3768,15 @@ static int ext4_fill_super(struct super_
+ */
+ sbi->s_li_wait_mult = EXT4_DEF_LI_WAIT_MULT;
+
++ blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
++ if (blocksize < EXT4_MIN_BLOCK_SIZE ||
++ blocksize > EXT4_MAX_BLOCK_SIZE) {
++ ext4_msg(sb, KERN_ERR,
++ "Unsupported filesystem blocksize %d (%d log_block_size)",
++ blocksize, le32_to_cpu(es->s_log_block_size));
++ goto failed_mount;
++ }
++
+ if (le32_to_cpu(es->s_rev_level) == EXT4_GOOD_OLD_REV) {
+ sbi->s_inode_size = EXT4_GOOD_OLD_INODE_SIZE;
+ sbi->s_first_ino = EXT4_GOOD_OLD_FIRST_INO;
+@@ -3785,6 +3794,7 @@ static int ext4_fill_super(struct super_
+ ext4_msg(sb, KERN_ERR,
+ "unsupported inode size: %d",
+ sbi->s_inode_size);
++ ext4_msg(sb, KERN_ERR, "blocksize: %d", blocksize);
+ goto failed_mount;
+ }
+ /*
+@@ -3988,14 +3998,6 @@ static int ext4_fill_super(struct super_
+ if (!ext4_feature_set_ok(sb, (sb_rdonly(sb))))
+ goto failed_mount;
+
+- blocksize = BLOCK_SIZE << le32_to_cpu(es->s_log_block_size);
+- if (blocksize < EXT4_MIN_BLOCK_SIZE ||
+- blocksize > EXT4_MAX_BLOCK_SIZE) {
+- ext4_msg(sb, KERN_ERR,
+- "Unsupported filesystem blocksize %d (%d log_block_size)",
+- blocksize, le32_to_cpu(es->s_log_block_size));
+- goto failed_mount;
+- }
+ if (le32_to_cpu(es->s_log_block_size) >
+ (EXT4_MAX_BLOCK_LOG_SIZE - EXT4_MIN_BLOCK_LOG_SIZE)) {
+ ext4_msg(sb, KERN_ERR,
--- /dev/null
+From d65d87a07476aa17df2dcb3ad18c22c154315bec Mon Sep 17 00:00:00 2001
+From: Theodore Ts'o <tytso@mit.edu>
+Date: Fri, 14 Feb 2020 18:11:19 -0500
+Subject: ext4: improve explanation of a mount failure caused by a misconfigured kernel
+
+From: Theodore Ts'o <tytso@mit.edu>
+
+commit d65d87a07476aa17df2dcb3ad18c22c154315bec upstream.
+
+If CONFIG_QFMT_V2 is not enabled, but CONFIG_QUOTA is enabled, when a
+user tries to mount a file system with the quota or project quota
+enabled, the kernel will emit a very confusing message:
+
+ EXT4-fs warning (device vdc): ext4_enable_quotas:5914: Failed to enable quota tracking (type=0, err=-3). Please run e2fsck to fix.
+ EXT4-fs (vdc): mount failed
+
+We will now report an explanatory message indicating which kernel
+configuration options have to be enabled, to avoid customer/sysadmin
+confusion.
+
+Link: https://lore.kernel.org/r/20200215012738.565735-1-tytso@mit.edu
+Google-Bug-Id: 149093531
+Fixes: 7c319d328505b778 ("ext4: make quota as first class supported feature")
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/ext4/super.c | 14 ++++----------
+ 1 file changed, 4 insertions(+), 10 deletions(-)
+
+--- a/fs/ext4/super.c
++++ b/fs/ext4/super.c
+@@ -2964,17 +2964,11 @@ static int ext4_feature_set_ok(struct su
+ return 0;
+ }
+
+-#ifndef CONFIG_QUOTA
+- if (ext4_has_feature_quota(sb) && !readonly) {
++#if !defined(CONFIG_QUOTA) || !defined(CONFIG_QFMT_V2)
++ if (!readonly && (ext4_has_feature_quota(sb) ||
++ ext4_has_feature_project(sb))) {
+ ext4_msg(sb, KERN_ERR,
+- "Filesystem with quota feature cannot be mounted RDWR "
+- "without CONFIG_QUOTA");
+- return 0;
+- }
+- if (ext4_has_feature_project(sb) && !readonly) {
+- ext4_msg(sb, KERN_ERR,
+- "Filesystem with project quota feature cannot be mounted RDWR "
+- "without CONFIG_QUOTA");
++ "The kernel was not built with CONFIG_QUOTA and CONFIG_QFMT_V2");
+ return 0;
+ }
+ #endif /* CONFIG_QUOTA */
--- /dev/null
+From c3afa804c58e5c30ac63858b527fffadc88bce82 Mon Sep 17 00:00:00 2001
+From: Paul Thomas <pthomas8589@gmail.com>
+Date: Sat, 25 Jan 2020 17:14:10 -0500
+Subject: gpio: xilinx: Fix bug where the wrong GPIO register is written to
+
+From: Paul Thomas <pthomas8589@gmail.com>
+
+commit c3afa804c58e5c30ac63858b527fffadc88bce82 upstream.
+
+Care is taken with "index"; however, in the current version the actual
+xgpio_writereg() uses index for the data but xgpio_regoffset(chip, i)
+for the offset, and since i has already been incremented this is
+incorrect. This patch fixes it so that index is used for the offset
+too.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Paul Thomas <pthomas8589@gmail.com>
+Link: https://lore.kernel.org/r/20200125221410.8022-1-pthomas8589@gmail.com
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/gpio/gpio-xilinx.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+--- a/drivers/gpio/gpio-xilinx.c
++++ b/drivers/gpio/gpio-xilinx.c
+@@ -147,9 +147,10 @@ static void xgpio_set_multiple(struct gp
+ for (i = 0; i < gc->ngpio; i++) {
+ if (*mask == 0)
+ break;
++ /* Once finished with an index write it out to the register */
+ if (index != xgpio_index(chip, i)) {
+ xgpio_writereg(chip->regs + XGPIO_DATA_OFFSET +
+- xgpio_regoffset(chip, i),
++ index * XGPIO_CHANNEL_OFFSET,
+ chip->gpio_state[index]);
+ spin_unlock_irqrestore(&chip->gpio_lock[index], flags);
+ index = xgpio_index(chip, i);
+@@ -165,7 +166,7 @@ static void xgpio_set_multiple(struct gp
+ }
+
+ xgpio_writereg(chip->regs + XGPIO_DATA_OFFSET +
+- xgpio_regoffset(chip, i), chip->gpio_state[index]);
++ index * XGPIO_CHANNEL_OFFSET, chip->gpio_state[index]);
+
+ spin_unlock_irqrestore(&chip->gpio_lock[index], flags);
+ }
--- /dev/null
+From c96dceeabf765d0b1b1f29c3bf50a5c01315b820 Mon Sep 17 00:00:00 2001
+From: "zhangyi (F)" <yi.zhang@huawei.com>
+Date: Thu, 13 Feb 2020 14:38:21 +0800
+Subject: jbd2: do not clear the BH_Mapped flag when forgetting a metadata buffer
+
+From: zhangyi (F) <yi.zhang@huawei.com>
+
+commit c96dceeabf765d0b1b1f29c3bf50a5c01315b820 upstream.
+
+Commit 904cdbd41d74 ("jbd2: clear dirty flag when revoking a buffer from
+an older transaction") set the BH_Freed flag when forgetting a metadata
+buffer which belongs to the committing transaction; it indicates that
+the committing process should clear dirty bits when it is done with the
+buffer. But it also clears the BH_Mapped flag at the same time, which
+may trigger the NULL pointer oops below when block_size < PAGE_SIZE.
+
+rmdir 1 kjournald2 mkdir 2
+ jbd2_journal_commit_transaction
+ commit transaction N
+jbd2_journal_forget
+set_buffer_freed(bh1)
+ jbd2_journal_commit_transaction
+ commit transaction N+1
+ ...
+ clear_buffer_mapped(bh1)
+ ext4_getblk(bh2 ummapped)
+ ...
+ grow_dev_page
+ init_page_buffers
+ bh1->b_private=NULL
+ bh2->b_private=NULL
+ jbd2_journal_put_journal_head(jh1)
+ __journal_remove_journal_head(hb1)
+ jh1 is NULL and trigger oops
+
+*) Dir entry block bh1 and bh2 belongs to one page, and the bh2 has
+ already been unmapped.
+
+For the metadata buffers we are forgetting, keeping the mapped flag and
+clearing the dirty flags is enough, so this patch picks out these
+buffers and keeps their BH_Mapped flag.
+
+Link: https://lore.kernel.org/r/20200213063821.30455-3-yi.zhang@huawei.com
+Fixes: 904cdbd41d74 ("jbd2: clear dirty flag when revoking a buffer from an older transaction")
+Reviewed-by: Jan Kara <jack@suse.cz>
+Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/jbd2/commit.c | 25 +++++++++++++++++++++----
+ 1 file changed, 21 insertions(+), 4 deletions(-)
+
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -985,12 +985,29 @@ restart_loop:
+ * pagesize and it is attached to the last partial page.
+ */
+ if (buffer_freed(bh) && !jh->b_next_transaction) {
++ struct address_space *mapping;
++
+ clear_buffer_freed(bh);
+ clear_buffer_jbddirty(bh);
+- clear_buffer_mapped(bh);
+- clear_buffer_new(bh);
+- clear_buffer_req(bh);
+- bh->b_bdev = NULL;
++
++ /*
++ * Block device buffers need to stay mapped all the
++ * time, so it is enough to clear buffer_jbddirty and
++ * buffer_freed bits. For the file mapping buffers (i.e.
++ * journalled data) we need to unmap the buffer and clear
++ * more bits. We also need to be careful about the check
++ * because the data page mapping can get cleared under
++ * our hands; then we need not clear more bits
++ * because the page and buffers will be freed and can
++ * never be reused once we are done with them.
++ */
++ mapping = READ_ONCE(bh->b_page->mapping);
++ if (mapping && !sb_is_blkdev_sb(mapping->host->i_sb)) {
++ clear_buffer_mapped(bh);
++ clear_buffer_new(bh);
++ clear_buffer_req(bh);
++ bh->b_bdev = NULL;
++ }
+ }
+
+ if (buffer_jbddirty(bh)) {
--- /dev/null
+From 6a66a7ded12baa6ebbb2e3e82f8cb91382814839 Mon Sep 17 00:00:00 2001
+From: "zhangyi (F)" <yi.zhang@huawei.com>
+Date: Thu, 13 Feb 2020 14:38:20 +0800
+Subject: jbd2: move the clearing of b_modified flag to the journal_unmap_buffer()
+
+From: zhangyi (F) <yi.zhang@huawei.com>
+
+commit 6a66a7ded12baa6ebbb2e3e82f8cb91382814839 upstream.
+
+There is no need to delay the clearing of b_modified flag to the
+transaction committing time when unmapping the journalled buffer, so
+just move it to the journal_unmap_buffer().
+
+Link: https://lore.kernel.org/r/20200213063821.30455-2-yi.zhang@huawei.com
+Reviewed-by: Jan Kara <jack@suse.cz>
+Signed-off-by: zhangyi (F) <yi.zhang@huawei.com>
+Signed-off-by: Theodore Ts'o <tytso@mit.edu>
+Cc: stable@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/jbd2/commit.c | 43 +++++++++++++++----------------------------
+ fs/jbd2/transaction.c | 10 ++++++----
+ 2 files changed, 21 insertions(+), 32 deletions(-)
+
+--- a/fs/jbd2/commit.c
++++ b/fs/jbd2/commit.c
+@@ -976,34 +976,21 @@ restart_loop:
+ * it. */
+
+ /*
+- * A buffer which has been freed while still being journaled by
+- * a previous transaction.
+- */
+- if (buffer_freed(bh)) {
+- /*
+- * If the running transaction is the one containing
+- * "add to orphan" operation (b_next_transaction !=
+- * NULL), we have to wait for that transaction to
+- * commit before we can really get rid of the buffer.
+- * So just clear b_modified to not confuse transaction
+- * credit accounting and refile the buffer to
+- * BJ_Forget of the running transaction. If the just
+- * committed transaction contains "add to orphan"
+- * operation, we can completely invalidate the buffer
+- * now. We are rather through in that since the
+- * buffer may be still accessible when blocksize <
+- * pagesize and it is attached to the last partial
+- * page.
+- */
+- jh->b_modified = 0;
+- if (!jh->b_next_transaction) {
+- clear_buffer_freed(bh);
+- clear_buffer_jbddirty(bh);
+- clear_buffer_mapped(bh);
+- clear_buffer_new(bh);
+- clear_buffer_req(bh);
+- bh->b_bdev = NULL;
+- }
++ * A buffer which has been freed while still being journaled
++ * by a previous transaction, refile the buffer to BJ_Forget of
++ * the running transaction. If the just committed transaction
++ * contains "add to orphan" operation, we can completely
++ * invalidate the buffer now. We are rather thorough in that,
++ * since the buffer may still be accessible when blocksize <
++ * pagesize and it is attached to the last partial page.
++ */
++ if (buffer_freed(bh) && !jh->b_next_transaction) {
++ clear_buffer_freed(bh);
++ clear_buffer_jbddirty(bh);
++ clear_buffer_mapped(bh);
++ clear_buffer_new(bh);
++ clear_buffer_req(bh);
++ bh->b_bdev = NULL;
+ }
+
+ if (buffer_jbddirty(bh)) {
+--- a/fs/jbd2/transaction.c
++++ b/fs/jbd2/transaction.c
+@@ -2329,14 +2329,16 @@ static int journal_unmap_buffer(journal_
+ return -EBUSY;
+ }
+ /*
+- * OK, buffer won't be reachable after truncate. We just set
+- * j_next_transaction to the running transaction (if there is
+- * one) and mark buffer as freed so that commit code knows it
+- * should clear dirty bits when it is done with the buffer.
++ * OK, buffer won't be reachable after truncate. We just clear
++ * b_modified to not confuse transaction credit accounting, and
++ * set j_next_transaction to the running transaction (if there
++ * is one) and mark buffer as freed so that commit code knows
++ * it should clear dirty bits when it is done with the buffer.
+ */
+ set_buffer_freed(bh);
+ if (journal->j_running_transaction && buffer_jbddirty(bh))
+ jh->b_next_transaction = journal->j_running_transaction;
++ jh->b_modified = 0;
+ spin_unlock(&journal->j_list_lock);
+ spin_unlock(&jh->b_state_lock);
+ write_unlock(&journal->j_state_lock);
--- /dev/null
+From 148d735eb55d32848c3379e460ce365f2c1cbe4b Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+Date: Fri, 7 Feb 2020 09:37:41 -0800
+Subject: KVM: nVMX: Use correct root level for nested EPT shadow page tables
+
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+
+commit 148d735eb55d32848c3379e460ce365f2c1cbe4b upstream.
+
+Hardcode the EPT page-walk level for L2 to be 4 levels, as KVM's MMU
+currently also hardcodes the page walk level for nested EPT to be 4
+levels. The L2 guest is all but guaranteed to soft hang on its first
+instruction when L1 is using EPT, as KVM will construct 4-level page
+tables and then tell hardware to use 5-level page tables.
+
+Fixes: 855feb673640 ("KVM: MMU: Add 5 level EPT & Shadow page table support.")
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kvm/vmx/vmx.c | 3 +++
+ 1 file changed, 3 insertions(+)
+
+--- a/arch/x86/kvm/vmx/vmx.c
++++ b/arch/x86/kvm/vmx/vmx.c
+@@ -2968,6 +2968,9 @@ void vmx_set_cr0(struct kvm_vcpu *vcpu,
+
+ static int get_ept_level(struct kvm_vcpu *vcpu)
+ {
++ /* Nested EPT currently only supports 4-level walks. */
++ if (is_guest_mode(vcpu) && nested_cpu_has_ept(get_vmcs12(vcpu)))
++ return 4;
+ if (cpu_has_vmx_ept_5levels() && (cpuid_maxphyaddr(vcpu) > 48))
+ return 5;
+ return 4;
--- /dev/null
+From f6ab0107a4942dbf9a5cf0cca3f37e184870a360 Mon Sep 17 00:00:00 2001
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+Date: Fri, 7 Feb 2020 09:37:42 -0800
+Subject: KVM: x86/mmu: Fix struct guest_walker arrays for 5-level paging
+
+From: Sean Christopherson <sean.j.christopherson@intel.com>
+
+commit f6ab0107a4942dbf9a5cf0cca3f37e184870a360 upstream.
+
+Define PT_MAX_FULL_LEVELS as PT64_ROOT_MAX_LEVEL, i.e. 5, to fix shadow
+paging for 5-level guest page tables. PT_MAX_FULL_LEVELS is used to
+size the arrays that track guest pages table information, i.e. using a
+"max levels" of 4 causes KVM to access garbage beyond the end of an
+array when querying state for level 5 entries. E.g. FNAME(gpte_changed)
+will read garbage and most likely return %true for a level 5 entry,
+soft-hanging the guest because FNAME(fetch) will restart the guest
+instead of creating SPTEs because it thinks the guest PTE has changed.
+
+Note, KVM doesn't yet support 5-level nested EPT, so PT_MAX_FULL_LEVELS
+gets to stay "4" for the PTTYPE_EPT case.
+
+Fixes: 855feb673640 ("KVM: MMU: Add 5 level EPT & Shadow page table support.")
+Cc: stable@vger.kernel.org
+Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/kvm/mmu/paging_tmpl.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/x86/kvm/mmu/paging_tmpl.h
++++ b/arch/x86/kvm/mmu/paging_tmpl.h
+@@ -33,7 +33,7 @@
+ #define PT_GUEST_ACCESSED_SHIFT PT_ACCESSED_SHIFT
+ #define PT_HAVE_ACCESSED_DIRTY(mmu) true
+ #ifdef CONFIG_X86_64
+- #define PT_MAX_FULL_LEVELS 4
++ #define PT_MAX_FULL_LEVELS PT64_ROOT_MAX_LEVEL
+ #define CMPXCHG cmpxchg
+ #else
+ #define CMPXCHG cmpxchg64
--- /dev/null
+From 25d387287cf0330abf2aad761ce6eee67326a355 Mon Sep 17 00:00:00 2001
+From: Kim Phillips <kim.phillips@amd.com>
+Date: Tue, 21 Jan 2020 11:12:31 -0600
+Subject: perf/x86/amd: Add missing L2 misses event spec to AMD Family 17h's event map
+
+From: Kim Phillips <kim.phillips@amd.com>
+
+commit 25d387287cf0330abf2aad761ce6eee67326a355 upstream.
+
+Commit 3fe3331bb285 ("perf/x86/amd: Add event map for AMD Family 17h"),
+claimed L2 misses were unsupported, due to them not being found in its
+referenced documentation, whose link has now moved [1].
+
+That old documentation listed PMCx064 unit mask bit 3 as:
+
+ "LsRdBlkC: LS Read Block C S L X Change to X Miss."
+
+and bit 0 as:
+
+ "IcFillMiss: IC Fill Miss"
+
+We now have new public documentation [2] with improved descriptions, that
+clearly indicate what events those unit mask bits represent:
+
+Bit 3 now clearly states:
+
+ "LsRdBlkC: Data Cache Req Miss in L2 (all types)"
+
+and bit 0 is:
+
+ "IcFillMiss: Instruction Cache Req Miss in L2."
+
+So we can now add support for L2 misses in perf's genericised events as
+PMCx064 with both the above unit masks.
+
+[1] The commit's original documentation reference, "Processor Programming
+ Reference (PPR) for AMD Family 17h Model 01h, Revision B1 Processors",
+ originally available here:
+
+ https://www.amd.com/system/files/TechDocs/54945_PPR_Family_17h_Models_00h-0Fh.pdf
+
+ is now available here:
+
+ https://developer.amd.com/wordpress/media/2017/11/54945_PPR_Family_17h_Models_00h-0Fh.pdf
+
+[2] "Processor Programming Reference (PPR) for Family 17h Model 31h,
+ Revision B0 Processors", available here:
+
+ https://developer.amd.com/wp-content/resources/55803_0.54-PUB.pdf
+
+Fixes: 3fe3331bb285 ("perf/x86/amd: Add event map for AMD Family 17h")
+Reported-by: Babu Moger <babu.moger@amd.com>
+Signed-off-by: Kim Phillips <kim.phillips@amd.com>
+Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Ingo Molnar <mingo@kernel.org>
+Tested-by: Babu Moger <babu.moger@amd.com>
+Cc: stable@vger.kernel.org
+Link: https://lkml.kernel.org/r/20200121171232.28839-1-kim.phillips@amd.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/x86/events/amd/core.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/x86/events/amd/core.c
++++ b/arch/x86/events/amd/core.c
+@@ -246,6 +246,7 @@ static const u64 amd_f17h_perfmon_event_
+ [PERF_COUNT_HW_CPU_CYCLES] = 0x0076,
+ [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = 0xff60,
++ [PERF_COUNT_HW_CACHE_MISSES] = 0x0964,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = 0x00c2,
+ [PERF_COUNT_HW_BRANCH_MISSES] = 0x00c3,
+ [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = 0x0287,
--- /dev/null
+From aab73d278d49c718b722ff5052e16c9cddf144d4 Mon Sep 17 00:00:00 2001
+From: Harald Freudenberger <freude@linux.ibm.com>
+Date: Fri, 31 Jan 2020 12:08:31 +0100
+Subject: s390/pkey: fix missing length of protected key on return
+
+From: Harald Freudenberger <freude@linux.ibm.com>
+
+commit aab73d278d49c718b722ff5052e16c9cddf144d4 upstream.
+
+The pkey ioctl call PKEY_SEC2PROTK updates a struct pkey_protkey
+on return. The protected key value and the protected key type are
+stored in it, but the len field was not updated. This patch fixes
+this so that the len field gets updated to reflect the actual size
+of the protected key value returned.
+
+Fixes: efc598e6c8a9 ("s390/zcrypt: move cca misc functions to new code file")
+Cc: Stable <stable@vger.kernel.org>
+Signed-off-by: Harald Freudenberger <freude@linux.ibm.com>
+Reported-by: Christian Rund <RUNDC@de.ibm.com>
+Suggested-by: Ingo Franzki <ifranzki@linux.ibm.com>
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/s390/crypto/pkey_api.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/s390/crypto/pkey_api.c
++++ b/drivers/s390/crypto/pkey_api.c
+@@ -774,7 +774,7 @@ static long pkey_unlocked_ioctl(struct f
+ return -EFAULT;
+ rc = cca_sec2protkey(ksp.cardnr, ksp.domain,
+ ksp.seckey.seckey, ksp.protkey.protkey,
+- NULL, &ksp.protkey.type);
++ &ksp.protkey.len, &ksp.protkey.type);
+ DEBUG_DBG("%s cca_sec2protkey()=%d\n", __func__, rc);
+ if (rc)
+ break;
--- /dev/null
+From 27dc0700c3be7c681cea03c5230b93d02f623492 Mon Sep 17 00:00:00 2001
+From: Christian Borntraeger <borntraeger@de.ibm.com>
+Date: Mon, 10 Feb 2020 11:27:37 -0500
+Subject: s390/uv: Fix handling of length extensions
+
+From: Christian Borntraeger <borntraeger@de.ibm.com>
+
+commit 27dc0700c3be7c681cea03c5230b93d02f623492 upstream.
+
+The query parameter block might contain additional information and can
+be extended in the future. If the size of the block does not suffice we
+get an error code of rc=0x100. The buffer will contain all information
+up to the specified size and the hypervisor/guest simply do not need the
+additional information as they do not know about the new data. That
+means that we can (and must) accept rc=0x100 as success.
+
+Cc: stable@vger.kernel.org
+Reviewed-by: Cornelia Huck <cohuck@redhat.com>
+Fixes: 5abb9351dfd9 ("s390/uv: introduce guest side ultravisor code")
+Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
+Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/s390/boot/uv.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/arch/s390/boot/uv.c
++++ b/arch/s390/boot/uv.c
+@@ -15,7 +15,8 @@ void uv_query_info(void)
+ if (!test_facility(158))
+ return;
+
+- if (uv_call(0, (uint64_t)&uvcb))
++ /* rc==0x100 means that there is additional data we do not process */
++ if (uv_call(0, (uint64_t)&uvcb) && uvcb.header.rc != 0x100)
+ return;
+
+ if (test_bit_inv(BIT_UVC_CMD_SET_SHARED_ACCESS, (unsigned long *)uvcb.inst_calls_list) &&
acpi-pm-s2idle-avoid-possible-race-related-to-the-ec-gpe.patch
acpica-introduce-acpi_any_gpe_status_set.patch
acpi-pm-s2idle-prevent-spurious-scis-from-waking-up-the-system.patch
+ext4-don-t-assume-that-mmp_nodename-bdevname-have-nul.patch
+ext4-fix-support-for-inode-sizes-1024-bytes.patch
+ext4-fix-checksum-errors-with-indexed-dirs.patch
+ext4-add-cond_resched-to-ext4_protect_reserved_inode.patch
+ext4-improve-explanation-of-a-mount-failure-caused-by-a-misconfigured-kernel.patch
+btrfs-fix-race-between-using-extent-maps-and-merging-them.patch
+btrfs-ref-verify-fix-memory-leaks.patch
+btrfs-print-message-when-tree-log-replay-starts.patch
+btrfs-log-message-when-rw-remount-is-attempted-with-unclean-tree-log.patch
+btrfs-fix-race-between-shrinking-truncate-and-fiemap.patch
+arm-npcm-bring-back-gpiolib-support.patch
+gpio-xilinx-fix-bug-where-the-wrong-gpio-register-is-written-to.patch
+arm64-ssbs-fix-context-switch-when-ssbs-is-present-on-all-cpus.patch
+cgroup-init_tasks-shouldn-t-be-linked-to-the-root-cgroup.patch
+xprtrdma-fix-dma-scatter-gather-list-mapping-imbalance.patch
+cifs-make-sure-we-do-not-overflow-the-max-ea-buffer-size.patch
+jbd2-move-the-clearing-of-b_modified-flag-to-the-journal_unmap_buffer.patch
+jbd2-do-not-clear-the-bh_mapped-flag-when-forgetting-a-metadata-buffer.patch
+edac-sysfs-remove-csrow-objects-on-errors.patch
+edac-mc-fix-use-after-free-and-memleaks-during-device-removal.patch
+kvm-nvmx-use-correct-root-level-for-nested-ept-shadow-page-tables.patch
+kvm-x86-mmu-fix-struct-guest_walker-arrays-for-5-level-paging.patch
+perf-x86-amd-add-missing-l2-misses-event-spec-to-amd-family-17h-s-event-map.patch
+s390-pkey-fix-missing-length-of-protected-key-on-return.patch
+s390-uv-fix-handling-of-length-extensions.patch
+drm-vgem-close-use-after-free-race-in-vgem_gem_create.patch
+drm-mst-fix-possible-null-pointer-dereference-in-drm_dp_mst_process_up_req.patch
+drm-panfrost-make-sure-the-shrinker-does-not-reclaim-referenced-bos.patch
+drm-amdgpu-update-smu_v11_0_pptable.h.patch
+drm-amdgpu-navi10-use-the-odcap-enum-to-index-the-caps-array.patch
--- /dev/null
+From ca1c671302825182629d3c1a60363cee6f5455bb Mon Sep 17 00:00:00 2001
+From: Chuck Lever <chuck.lever@oracle.com>
+Date: Wed, 12 Feb 2020 11:12:30 -0500
+Subject: xprtrdma: Fix DMA scatter-gather list mapping imbalance
+
+From: Chuck Lever <chuck.lever@oracle.com>
+
+commit ca1c671302825182629d3c1a60363cee6f5455bb upstream.
+
+The @nents value that was passed to ib_dma_map_sg() has to be passed
+to the matching ib_dma_unmap_sg() call. If ib_dma_map_sg() chooses to
+concatenate sg entries, it will return a different nents value than
+it was passed.
+
+The bug was exposed by recent changes to the AMD IOMMU driver, which
+enabled sg entry concatenation.
+
+Looking all the way back to commit 4143f34e01e9 ("xprtrdma: Port to
+new memory registration API") and reviewing other kernel ULPs, it's
+not clear that the frwr_map() logic was ever correct for this case.
+
+Reported-by: Andre Tomt <andre@tomt.net>
+Suggested-by: Robin Murphy <robin.murphy@arm.com>
+Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
+Cc: stable@vger.kernel.org
+Reviewed-by: Jason Gunthorpe <jgg@mellanox.com>
+Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/sunrpc/xprtrdma/frwr_ops.c | 13 +++++++------
+ 1 file changed, 7 insertions(+), 6 deletions(-)
+
+--- a/net/sunrpc/xprtrdma/frwr_ops.c
++++ b/net/sunrpc/xprtrdma/frwr_ops.c
+@@ -298,8 +298,8 @@ struct rpcrdma_mr_seg *frwr_map(struct r
+ {
+ struct rpcrdma_ia *ia = &r_xprt->rx_ia;
+ struct ib_reg_wr *reg_wr;
++ int i, n, dma_nents;
+ struct ib_mr *ibmr;
+- int i, n;
+ u8 key;
+
+ if (nsegs > ia->ri_max_frwr_depth)
+@@ -323,15 +323,16 @@ struct rpcrdma_mr_seg *frwr_map(struct r
+ break;
+ }
+ mr->mr_dir = rpcrdma_data_dir(writing);
++ mr->mr_nents = i;
+
+- mr->mr_nents =
+- ib_dma_map_sg(ia->ri_id->device, mr->mr_sg, i, mr->mr_dir);
+- if (!mr->mr_nents)
++ dma_nents = ib_dma_map_sg(ia->ri_id->device, mr->mr_sg, mr->mr_nents,
++ mr->mr_dir);
++ if (!dma_nents)
+ goto out_dmamap_err;
+
+ ibmr = mr->frwr.fr_mr;
+- n = ib_map_mr_sg(ibmr, mr->mr_sg, mr->mr_nents, NULL, PAGE_SIZE);
+- if (unlikely(n != mr->mr_nents))
++ n = ib_map_mr_sg(ibmr, mr->mr_sg, dma_nents, NULL, PAGE_SIZE);
++ if (n != dma_nents)
+ goto out_mapmr_err;
+
+ ibmr->iova &= 0x00000000ffffffff;