--- /dev/null
+From 92e222df7b8f05c565009c7383321b593eca488b Mon Sep 17 00:00:00 2001
+From: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
+Date: Mon, 5 Feb 2018 17:45:11 +0100
+Subject: btrfs: alloc_chunk: fix DUP stripe size handling
+
+From: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
+
+commit 92e222df7b8f05c565009c7383321b593eca488b upstream.
+
+In case of using DUP, we search for enough unallocated disk space on a
+device to hold two stripes.
+
+devices_info[ndevs-1].max_avail, which holds the amount of unallocated
+space found, is directly assigned to stripe_size, although it actually
+holds twice the intended stripe size.
+
+Later on in the code, an unconditional division of stripe_size by
+dev_stripes corrects the value, but before that happens there is a check
+that stripe_size does not exceed max_chunk_size. Since stripe_size is
+twice the intended amount at that point, the check clamps it to
+max_chunk_size whenever the correct stripe_size would be more than half
+of max_chunk_size.
+
+The unconditional division that follows then halves the clamped value,
+so in practice we can never allocate more than half of max_chunk_size.
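+
+As an illustration (with invented numbers, not taken from the report):
+for DUP, dev_stripes is 2. If max_chunk_size is 10GiB and the chosen
+device has 30GiB of suitable unallocated space, stripe_size starts out
+as 30GiB, the check clamps it to the 10GiB max_chunk_size, and the later
+division by dev_stripes shrinks it to 5GiB, so the resulting chunk is
+only half of what max_chunk_size allows.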
+
+Fix this by moving the division by dev_stripes before the max chunk size
+check, so that stripe_size always holds the correct value, instead of
+relying on a duct-tape division further down to fix it up again.
+
+Since dev_stripes is 1 in all cases other than DUP, this change only
+affects DUP.
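+
+Continuing the invented example from above: with the division moved up,
+stripe_size starts out as 30GiB / 2 = 15GiB, the max_chunk_size check
+then clamps it to 10GiB, and the allocated chunk is as large as the
+limit actually permits.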
+
+Other attempts in the past were made to fix this:
+* 37db63a400 "Btrfs: fix max chunk size check in chunk allocator" tried
+to fix the same problem, but still resulted in part of the code acting
+on a wrongly doubled stripe_size value.
+* 86db25785a "Btrfs: fix max chunk size on raid5/6" unintentionally
+broke this fix again.
+
+The real problem was already introduced with the rest of the code in
+73c5de0051.
+
+The user-visible result, however, is that the maximum chunk size for DUP
+suddenly doubles; in reality the allocator is simply following the limits
+in the code again, as it did 5 years ago.
+
+Reported-by: Naohiro Aota <naohiro.aota@wdc.com>
+Link: https://www.spinics.net/lists/linux-btrfs/msg69752.html
+Fixes: 73c5de0051 ("btrfs: quasi-round-robin for chunk allocation")
+Fixes: 86db25785a ("Btrfs: fix max chunk size on raid5/6")
+Signed-off-by: Hans van Kranenburg <hans.van.kranenburg@mendix.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+[ update comment ]
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/volumes.c | 11 ++++++-----
+ 1 file changed, 6 insertions(+), 5 deletions(-)
+
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4638,10 +4638,13 @@ static int __btrfs_alloc_chunk(struct bt
+ if (devs_max && ndevs > devs_max)
+ ndevs = devs_max;
+ /*
+- * the primary goal is to maximize the number of stripes, so use as many
+- * devices as possible, even if the stripes are not maximum sized.
++ * The primary goal is to maximize the number of stripes, so use as
++ * many devices as possible, even if the stripes are not maximum sized.
++ *
++ * The DUP profile stores more than one stripe per device, the
++ * max_avail is the total size so we have to adjust.
+ */
+- stripe_size = devices_info[ndevs-1].max_avail;
++ stripe_size = div_u64(devices_info[ndevs - 1].max_avail, dev_stripes);
+ num_stripes = ndevs * dev_stripes;
+
+ /*
+@@ -4681,8 +4684,6 @@ static int __btrfs_alloc_chunk(struct bt
+ stripe_size = devices_info[ndevs-1].max_avail;
+ }
+
+- stripe_size = div_u64(stripe_size, dev_stripes);
+-
+ /* align to BTRFS_STRIPE_LEN */
+ stripe_size = div_u64(stripe_size, raid_stripe_len);
+ stripe_size *= raid_stripe_len;
--- /dev/null
+From fd649f10c3d21ee9d7542c609f29978bdf73ab94 Mon Sep 17 00:00:00 2001
+From: Nikolay Borisov <nborisov@suse.com>
+Date: Tue, 30 Jan 2018 16:07:37 +0200
+Subject: btrfs: Fix use-after-free when cleaning up fs_devs with a single stale device
+
+From: Nikolay Borisov <nborisov@suse.com>
+
+commit fd649f10c3d21ee9d7542c609f29978bdf73ab94 upstream.
+
+Commit 4fde46f0cc71 ("Btrfs: free the stale device") introduced
+btrfs_free_stale_device which iterates the device lists for all
+registered btrfs filesystems and deletes those devices which aren't
+mounted. If a btrfs_fs_devices structure has only 1 device attached to it
+and that device is unused, then btrfs_free_stale_device will proceed to
+also free the btrfs_fs_devices struct itself. Currently this leads to a
+use-after-free, since list_for_each_entry will perform a check on the
+already freed memory to see if it has to terminate the loop.
+
+The fix is to use 'break' when we know we are freeing the current
+fs_devs.
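+
+As a schematic sketch of the pattern (simplified, not the literal kernel
+code; the real loop walks the global fs_uuids list in
+btrfs_free_stale_device, and the search for the stale device is elided):
+
+    list_for_each_entry(fs_devs, &fs_uuids, list) {
+        /* ... a stale, unmounted device was found in fs_devs ... */
+        if (fs_devs->num_devices == 1) {
+            btrfs_sysfs_remove_fsid(fs_devs);
+            list_del(&fs_devs->list);
+            free_fs_devices(fs_devs);
+            /* fs_devs now points to freed memory; without a break the
+             * loop macro still dereferences it to advance and to test
+             * for loop termination */
+            break;
+        }
+        /* ... otherwise only the single stale device is dropped ... */
+    }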
+
+Fixes: 4fde46f0cc71 ("Btrfs: free the stale device")
+Signed-off-by: Nikolay Borisov <nborisov@suse.com>
+Reviewed-by: Anand Jain <anand.jain@oracle.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/volumes.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -568,6 +568,7 @@ void btrfs_free_stale_device(struct btrf
+ btrfs_sysfs_remove_fsid(fs_devs);
+ list_del(&fs_devs->list);
+ free_fs_devices(fs_devs);
++ break;
+ } else {
+ fs_devs->num_devices--;
+ list_del(&dev->dev_list);