--- /dev/null
+From gpiccoli@canonical.com Wed Jul 3 14:16:04 2019
+From: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
+Date: Fri, 28 Jun 2019 19:17:58 -0300
+Subject: block: Fix a NULL pointer dereference in generic_make_request()
+To: stable@vger.kernel.org
+Cc: gregkh@linuxfoundation.org, sashal@kernel.org, linux-block@vger.kernel.org, linux-raid@vger.kernel.org, gpiccoli@canonical.com, jay.vosburgh@canonical.com, Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>, Song Liu <songliubraving@fb.com>, Bart Van Assche <bvanassche@acm.org>, Ming Lei <ming.lei@redhat.com>, Eric Ren <renzhengeek@gmail.com>
+Message-ID: <20190628221759.18274-1-gpiccoli@canonical.com>
+
+From: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
+
+-----------------------------------------------------------------
+This patch is not on mainline and is meant for 4.19 stable *only*.
+After the patch description there's the reasoning for that.
+-----------------------------------------------------------------
+
+Commit 37f9579f4c31 ("blk-mq: Avoid that submitting a bio concurrently
+with device removal triggers a crash") introduced a NULL pointer
+dereference in generic_make_request(). That commit sets q to NULL and
+enter_succeeded to false; right after, the 'if (enter_succeeded)'
+branch is not taken, and the 'else' branch then dereferences the
+now-NULL q in blk_queue_dying(q).
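+
+A simplified sketch of the pre-patch flow (a hand-written illustration
+of v4.19's block/blk-core.c, not the exact code):
+
+	if (blk_queue_enter(q, flags) < 0) {
+		enter_succeeded = false;
+		q = NULL;			/* queue pointer cleared */
+	}
+
+	if (enter_succeeded) {
+		/* normal submission path */
+	} else {
+		if (unlikely(!blk_queue_dying(q) &&	/* NULL deref */
+			     (bio->bi_opf & REQ_NOWAIT)))
+			bio_wouldblock_error(bio);
+		else
+			bio_io_error(bio);
+	}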
+
+This patch just moves the 'q = NULL' to a point where it won't trigger
+the oops; the semantics of this NULLification remain untouched.
+
+A simple test case/reproducer is as follows:
+a) Build kernel v4.19.56-stable with CONFIG_BLK_CGROUP=n.
+
+b) Create a raid0 md array with 2 NVMe devices as members, and mount
+it with an ext4 filesystem.
+
+c) Run the following one-liner (assuming the raid0 is mounted on /mnt):
+(dd of=/mnt/tmp if=/dev/zero bs=1M count=999 &); sleep 0.3;
+echo 1 > /sys/block/nvme1n1/device/device/remove
+(where nvme1n1 is the 2nd array member)
+
+This will trigger the following oops:
+
+BUG: unable to handle kernel NULL pointer dereference at 0000000000000078
+PGD 0 P4D 0
+Oops: 0000 [#1] SMP PTI
+RIP: 0010:generic_make_request+0x32b/0x400
+Call Trace:
+ submit_bio+0x73/0x140
+ ext4_io_submit+0x4d/0x60
+ ext4_writepages+0x626/0xe90
+ do_writepages+0x4b/0xe0
+[...]
+
+This patch has no functional changes and preserves the md/raid0
+behavior, as it was before kernel v4.17, when a member is removed.
+
+----------------------------
+Why is this not on mainline?
+----------------------------
+
+The patch was originally submitted upstream to the linux-raid and
+linux-block mailing lists - it was initially accepted by Song Liu,
+but Christoph Hellwig[0] observed that a clean-up series from Ming
+Lei[1], which fixed the same issue, was already ready to be accepted.
+
+The accepted patches from Ming's series in upstream are: commit
+47cdee29ef9d ("block: move blk_exit_queue into __blk_release_queue") and
+commit fe2008640ae3 ("block: don't protect generic_make_request_checks
+with blk_queue_enter"). Those patches basically do a clean-up in the
+block layer involving:
+
+1) Putting back blk_exit_queue() logic into __blk_release_queue(); that
+path was changed in the past and the logic from blk_exit_queue() was
+added to blk_cleanup_queue().
+
+2) Removing the guard/protection in generic_make_request_checks() with
+blk_queue_enter().
+
+The problem with Ming's series for -stable is that it relies on the
+removal of the legacy request IO path. So it's "backport-able" to
+v5.0+, but doing that for earlier versions (like 4.19) would incur
+complex code changes. Hence, Christoph and Song Liu suggested that
+this patch be submitted to stable only; merging it upstream would add
+code to fix a path removed in a subsequent commit.
+
+[0] lore.kernel.org/linux-block/20190521172258.GA32702@infradead.org
+[1] lore.kernel.org/linux-block/20190515030310.20393-1-ming.lei@redhat.com
+
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Jens Axboe <axboe@kernel.dk>
+Reviewed-by: Bart Van Assche <bvanassche@acm.org>
+Reviewed-by: Ming Lei <ming.lei@redhat.com>
+Tested-by: Eric Ren <renzhengeek@gmail.com>
+Fixes: 37f9579f4c31 ("blk-mq: Avoid that submitting a bio concurrently with device removal triggers a crash")
+Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
+Acked-by: Song Liu <songliubraving@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ block/blk-core.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/block/blk-core.c
++++ b/block/blk-core.c
+@@ -2445,10 +2445,8 @@ blk_qc_t generic_make_request(struct bio
+ flags = 0;
+ if (bio->bi_opf & REQ_NOWAIT)
+ flags = BLK_MQ_REQ_NOWAIT;
+- if (blk_queue_enter(q, flags) < 0) {
++ if (blk_queue_enter(q, flags) < 0)
+ enter_succeeded = false;
+- q = NULL;
+- }
+ }
+
+ if (enter_succeeded) {
+@@ -2479,6 +2477,7 @@ blk_qc_t generic_make_request(struct bio
+ bio_wouldblock_error(bio);
+ else
+ bio_io_error(bio);
++ q = NULL;
+ }
+ bio = bio_list_pop(&bio_list_on_stack[0]);
+ } while (bio);
--- /dev/null
+From eca94432934fe5f141d084f2e36ee2c0e614cc04 Mon Sep 17 00:00:00 2001
+From: Matias Karhumaa <matias.karhumaa@gmail.com>
+Date: Tue, 2 Jul 2019 16:35:09 +0200
+Subject: Bluetooth: Fix faulty expression for minimum encryption key size check
+
+From: Matias Karhumaa <matias.karhumaa@gmail.com>
+
+commit eca94432934fe5f141d084f2e36ee2c0e614cc04 upstream.
+
+Fix the minimum encryption key size check so that HCI_MIN_ENC_KEY_SIZE
+is also allowed, as stated in the comment.
+
+This bug caused connection problems with devices having a maximum
+encryption key size of 7 octets (56-bit).
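+
+With HCI_MIN_ENC_KEY_SIZE defined as 7, the bug is a classic
+off-by-one; a minimal illustration (not the kernel code itself):
+
+	#define HCI_MIN_ENC_KEY_SIZE	7	/* as in hci_core.h */
+
+	/* Old check: a key of exactly 7 octets (56 bits) fails,
+	 * contradicting the "minimum" in the macro's name:
+	 */
+	enc_key_size >  HCI_MIN_ENC_KEY_SIZE	/* 7 > 7  -> false */
+
+	/* Fixed check: the minimum itself passes: */
+	enc_key_size >= HCI_MIN_ENC_KEY_SIZE	/* 7 >= 7 -> true  */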
+
+Fixes: 693cd8ce3f88 ("Bluetooth: Fix regression with minimum encryption key size alignment")
+Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=203997
+Signed-off-by: Matias Karhumaa <matias.karhumaa@gmail.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/bluetooth/l2cap_core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/bluetooth/l2cap_core.c
++++ b/net/bluetooth/l2cap_core.c
+@@ -1352,7 +1352,7 @@ static bool l2cap_check_enc_key_size(str
+ * actually encrypted before enforcing a key size.
+ */
+ return (!test_bit(HCI_CONN_ENCRYPT, &hcon->flags) ||
+- hcon->enc_key_size > HCI_MIN_ENC_KEY_SIZE);
++ hcon->enc_key_size >= HCI_MIN_ENC_KEY_SIZE);
+ }
+
+ static void l2cap_do_start(struct l2cap_chan *chan)
--- /dev/null
+From gpiccoli@canonical.com Wed Jul 3 14:16:21 2019
+From: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
+Date: Fri, 28 Jun 2019 19:17:59 -0300
+Subject: md/raid0: Do not bypass blocking queue entered for raid0 bios
+To: stable@vger.kernel.org
+Cc: gregkh@linuxfoundation.org, sashal@kernel.org, linux-block@vger.kernel.org, linux-raid@vger.kernel.org, gpiccoli@canonical.com, jay.vosburgh@canonical.com, Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>, Song Liu <songliubraving@fb.com>, Ming Lei <ming.lei@redhat.com>, Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
+Message-ID: <20190628221759.18274-2-gpiccoli@canonical.com>
+
+From: "Guilherme G. Piccoli" <gpiccoli@canonical.com>
+
+-----------------------------------------------------------------
+This patch is not on mainline and is meant for 4.19 stable *only*.
+After the patch description there's the reasoning for that.
+-----------------------------------------------------------------
+
+Commit cd4a4ae4683d ("block: don't use blocking queue entered for
+recursive bio submits") introduced the flag BIO_QUEUE_ENTERED so that
+split bios bypass the blocking queue-entering routine and use the live
+non-blocking version instead. It was the result of an extensive
+discussion in a linux-block thread[0], and the purpose of this change
+was to prevent a hung task waiting on a reference to drop.
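+
+In v4.19, generic_make_request() consumes the flag roughly like this
+(a simplified sketch, not the exact code; blk_queue_split() sets the
+flag on the remainder bio before resubmitting it):
+
+	if (bio_flagged(bio, BIO_QUEUE_ENTERED))
+		/* Recursive submit: a queue reference is assumed to
+		 * be held already, so take the non-blocking live path.
+		 */
+		blk_queue_enter_live(q);
+	else if (blk_queue_enter(q, flags) < 0)
+		/* Blocking variant: fails once the queue is dying. */
+		enter_succeeded = false;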
+
+It happens that md raid0 splits bios all the time and, more
+importantly, changes their underlying device to the raid member. After
+the change introduced by this flag's usage, we experience various
+crashes if a raid0 member is removed during a large write. This
+happens because the bio reaches the live queue-entering function while
+the queue of the raid0 member is dying.
+
+A simple reproducer of this behavior is presented below:
+a) Build kernel v4.19.56-stable with CONFIG_BLK_DEV_THROTTLING=y.
+
+b) Create a raid0 md array with 2 NVMe devices as members, and mount
+it with an ext4 filesystem.
+
+c) Run the following one-liner (assuming the raid0 is mounted on /mnt):
+(dd of=/mnt/tmp if=/dev/zero bs=1M count=999 &); sleep 0.3;
+echo 1 > /sys/block/nvme1n1/device/device/remove
+(where nvme1n1 is the 2nd array member)
+
+This will trigger the following warning/oops:
+
+------------[ cut here ]------------
+BUG: unable to handle kernel NULL pointer dereference at 0000000000000155
+PGD 0 P4D 0
+Oops: 0000 [#1] SMP PTI
+RIP: 0010:blk_throtl_bio+0x45/0x970
+[...]
+Call Trace:
+ generic_make_request_checks+0x1bf/0x690
+ generic_make_request+0x64/0x3f0
+ raid0_make_request+0x184/0x620 [raid0]
+ ? raid0_make_request+0x184/0x620 [raid0]
+ md_handle_request+0x126/0x1a0
+ md_make_request+0x7b/0x180
+ generic_make_request+0x19e/0x3f0
+ submit_bio+0x73/0x140
+[...]
+
+This patch changes the raid0 driver to fall back to the "old" blocking
+queue-entering procedure by clearing BIO_QUEUE_ENTERED from raid0
+bios. This prevents the crashes and restores the regular behavior of
+raid0 arrays when a member is removed during a large write.
+
+[0] lore.kernel.org/linux-block/343bbbf6-64eb-879e-d19e-96aebb037d47@I-love.SAKURA.ne.jp
+
+----------------------------
+Why is this not on mainline?
+----------------------------
+
+The patch was originally submitted upstream to the linux-raid and
+linux-block mailing lists - it was initially accepted by Song Liu,
+but Christoph Hellwig[1] observed that a clean-up series from Ming
+Lei[2], which fixed the same issue, was already ready to be accepted.
+
+The accepted patches from Ming's series in upstream are: commit
+47cdee29ef9d ("block: move blk_exit_queue into __blk_release_queue") and
+commit fe2008640ae3 ("block: don't protect generic_make_request_checks
+with blk_queue_enter"). Those patches basically do a clean-up in the
+block layer involving:
+
+1) Putting back blk_exit_queue() logic into __blk_release_queue(); that
+path was changed in the past and the logic from blk_exit_queue() was
+added to blk_cleanup_queue().
+
+2) Removing the guard/protection in generic_make_request_checks() with
+blk_queue_enter().
+
+The problem with Ming's series for -stable is that it relies on the
+removal of the legacy request IO path. So it's "backport-able" to
+v5.0+, but doing that for earlier versions (like 4.19) would incur
+complex code changes. Hence, Christoph and Song Liu suggested that
+this patch be submitted to stable only; merging it upstream would add
+code to fix a path removed in a subsequent commit.
+
+[1] lore.kernel.org/linux-block/20190521172258.GA32702@infradead.org
+[2] lore.kernel.org/linux-block/20190515030310.20393-1-ming.lei@redhat.com
+
+Cc: Christoph Hellwig <hch@lst.de>
+Cc: Jens Axboe <axboe@kernel.dk>
+Cc: Ming Lei <ming.lei@redhat.com>
+Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
+Fixes: cd4a4ae4683d ("block: don't use blocking queue entered for recursive bio submits")
+Signed-off-by: Guilherme G. Piccoli <gpiccoli@canonical.com>
+Acked-by: Song Liu <songliubraving@fb.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/md/raid0.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/md/raid0.c
++++ b/drivers/md/raid0.c
+@@ -547,6 +547,7 @@ static void raid0_handle_discard(struct
+ trace_block_bio_remap(bdev_get_queue(rdev->bdev),
+ discard_bio, disk_devt(mddev->gendisk),
+ bio->bi_iter.bi_sector);
++ bio_clear_flag(bio, BIO_QUEUE_ENTERED);
+ generic_make_request(discard_bio);
+ }
+ bio_endio(bio);
+@@ -602,6 +603,7 @@ static bool raid0_make_request(struct md
+ disk_devt(mddev->gendisk), bio_sector);
+ mddev_check_writesame(mddev, bio);
+ mddev_check_write_zeroes(mddev, bio);
++ bio_clear_flag(bio, BIO_QUEUE_ENTERED);
+ generic_make_request(bio);
+ return true;
+ }
--- /dev/null
+From e75b3e1c9bc5b997d09bdf8eb72ab3dd3c1a7072 Mon Sep 17 00:00:00 2001
+From: Florian Westphal <fw@strlen.de>
+Date: Tue, 21 May 2019 13:24:30 +0200
+Subject: netfilter: nf_flow_table: ignore DF bit setting
+
+From: Florian Westphal <fw@strlen.de>
+
+commit e75b3e1c9bc5b997d09bdf8eb72ab3dd3c1a7072 upstream.
+
+It's irrelevant whether the DF bit is set or not; we must pass the
+packet to the stack in either case.
+
+If the DF bit is set, we must pass it to the stack so the appropriate
+ICMP error can be generated.
+
+If the DF bit is not set, we must pass it to the stack for
+fragmentation.
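+
+For context, the guard being relaxed sits in front of the offload fast
+path; nf_flow_exceeds_mtu() looked roughly like this at the time (a
+sketch of net/netfilter/nf_flow_table_ip.c, details approximate):
+
+	static bool nf_flow_exceeds_mtu(const struct sk_buff *skb,
+					unsigned int mtu)
+	{
+		if (skb->len <= mtu)
+			return false;
+		if (skb_is_gso(skb) &&
+		    skb_gso_validate_network_len(skb, mtu))
+			return false;
+		return true;
+	}
+
+After this patch, any over-MTU packet returns NF_ACCEPT and goes up
+the stack, regardless of DF.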
+
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/netfilter/nf_flow_table_ip.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/net/netfilter/nf_flow_table_ip.c
++++ b/net/netfilter/nf_flow_table_ip.c
+@@ -246,8 +246,7 @@ nf_flow_offload_ip_hook(void *priv, stru
+ flow = container_of(tuplehash, struct flow_offload, tuplehash[dir]);
+ rt = (struct rtable *)flow->tuplehash[dir].tuple.dst_cache;
+
+- if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)) &&
+- (ip_hdr(skb)->frag_off & htons(IP_DF)) != 0)
++ if (unlikely(nf_flow_exceeds_mtu(skb, flow->tuplehash[dir].tuple.mtu)))
+ return NF_ACCEPT;
+
+ if (skb_try_make_writable(skb, sizeof(*iph)))
--- /dev/null
+From 91a9048f238063dde7feea752b9dd386f7e3808b Mon Sep 17 00:00:00 2001
+From: Florian Westphal <fw@strlen.de>
+Date: Tue, 21 May 2019 13:24:32 +0200
+Subject: netfilter: nft_flow_offload: don't offload when sequence numbers need adjustment
+
+From: Florian Westphal <fw@strlen.de>
+
+commit 91a9048f238063dde7feea752b9dd386f7e3808b upstream.
+
+We can't deal with TCP sequence number rewriting in flow_offload.
+While at it, simplify the helper check: we only need to know whether
+the extension is present, we don't need the helper data.
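+
+For reference, nf_ct_ext_exist() only tests the extension's offset, so
+no helper data lookup is needed (approximately as defined in
+include/net/netfilter/nf_conntrack_extend.h):
+
+	static inline bool nf_ct_ext_exist(const struct nf_conn *ct,
+					   u8 id)
+	{
+		return (ct->ext && __nf_ct_ext_exist(ct->ext, id));
+	}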
+
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/netfilter/nft_flow_offload.c | 6 ++----
+ 1 file changed, 2 insertions(+), 4 deletions(-)
+
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -12,7 +12,6 @@
+ #include <net/netfilter/nf_conntrack_core.h>
+ #include <linux/netfilter/nf_conntrack_common.h>
+ #include <net/netfilter/nf_flow_table.h>
+-#include <net/netfilter/nf_conntrack_helper.h>
+
+ struct nft_flow_offload {
+ struct nft_flowtable *flowtable;
+@@ -67,7 +66,6 @@ static void nft_flow_offload_eval(const
+ {
+ struct nft_flow_offload *priv = nft_expr_priv(expr);
+ struct nf_flowtable *flowtable = &priv->flowtable->data;
+- const struct nf_conn_help *help;
+ enum ip_conntrack_info ctinfo;
+ struct nf_flow_route route;
+ struct flow_offload *flow;
+@@ -93,8 +91,8 @@ static void nft_flow_offload_eval(const
+ goto out;
+ }
+
+- help = nfct_help(ct);
+- if (help)
++ if (nf_ct_ext_exist(ct, NF_CT_EXT_HELPER) ||
++ ct->status & IPS_SEQ_ADJUST)
+ goto out;
+
+ if (ctinfo == IP_CT_NEW ||
--- /dev/null
+From 69aeb538587e087bfc81dd1f465eab3558ff3158 Mon Sep 17 00:00:00 2001
+From: Florian Westphal <fw@strlen.de>
+Date: Tue, 21 May 2019 13:24:33 +0200
+Subject: netfilter: nft_flow_offload: IPCB is only valid for ipv4 family
+
+From: Florian Westphal <fw@strlen.de>
+
+commit 69aeb538587e087bfc81dd1f465eab3558ff3158 upstream.
+
+Guard this with a check for the IPv4 family; IPCB isn't valid in the
+IPv6 case.
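+
+The underlying issue: IPCB() and IP6CB() are overlapping views of the
+same skb->cb[] scratch area, so reading IPCB(skb)->opt.optlen on an
+IPv6 packet reads unrelated data (definitions as in include/net/ip.h
+and include/linux/ipv6.h, quoted approximately):
+
+	#define IPCB(skb)  ((struct inet_skb_parm *)((skb)->cb))
+	#define IP6CB(skb) ((struct inet6_skb_parm *)((skb)->cb))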
+
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/netfilter/nft_flow_offload.c | 17 +++++++++++------
+ 1 file changed, 11 insertions(+), 6 deletions(-)
+
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -48,15 +48,20 @@ static int nft_flow_route(const struct n
+ return 0;
+ }
+
+-static bool nft_flow_offload_skip(struct sk_buff *skb)
++static bool nft_flow_offload_skip(struct sk_buff *skb, int family)
+ {
+- struct ip_options *opt = &(IPCB(skb)->opt);
+-
+- if (unlikely(opt->optlen))
+- return true;
+ if (skb_sec_path(skb))
+ return true;
+
++ if (family == NFPROTO_IPV4) {
++ const struct ip_options *opt;
++
++ opt = &(IPCB(skb)->opt);
++
++ if (unlikely(opt->optlen))
++ return true;
++ }
++
+ return false;
+ }
+
+@@ -74,7 +79,7 @@ static void nft_flow_offload_eval(const
+ struct nf_conn *ct;
+ int ret;
+
+- if (nft_flow_offload_skip(pkt->skb))
++ if (nft_flow_offload_skip(pkt->skb, nft_pf(pkt)))
+ goto out;
+
+ ct = nf_ct_get(pkt->skb, &ctinfo);
--- /dev/null
+From 8437a6209f76f85a2db1abb12a9bde2170801617 Mon Sep 17 00:00:00 2001
+From: Florian Westphal <fw@strlen.de>
+Date: Tue, 21 May 2019 13:24:31 +0200
+Subject: netfilter: nft_flow_offload: set liberal tracking mode for tcp
+
+From: Florian Westphal <fw@strlen.de>
+
+commit 8437a6209f76f85a2db1abb12a9bde2170801617 upstream.
+
+Without it, whenever a packet has to be pushed up the stack (e.g.
+because of an MTU mismatch), conntrack will flag it as invalid, which
+in turn breaks NAT.
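+
+Background: offloaded packets bypass conntrack's TCP window tracking,
+so a packet that later re-enters the stack can look out-of-window.
+The effective logic around tcp_in_window() in
+nf_conntrack_proto_tcp.c is paraphrased below (not the exact code):
+
+	/* An out-of-window packet is tolerated only when the liberal
+	 * flag (or the nf_conntrack_tcp_be_liberal sysctl) is set.
+	 */
+	if (out_of_window &&
+	    !(sender->flags & IP_CT_TCP_FLAG_BE_LIBERAL))
+		return -NF_ACCEPT;	/* conntrack marks it INVALID */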
+
+Signed-off-by: Florian Westphal <fw@strlen.de>
+Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ net/netfilter/nft_flow_offload.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/net/netfilter/nft_flow_offload.c
++++ b/net/netfilter/nft_flow_offload.c
+@@ -72,6 +72,7 @@ static void nft_flow_offload_eval(const
+ struct nf_flow_route route;
+ struct flow_offload *flow;
+ enum ip_conntrack_dir dir;
++ bool is_tcp = false;
+ struct nf_conn *ct;
+ int ret;
+
+@@ -84,6 +85,8 @@ static void nft_flow_offload_eval(const
+
+ switch (ct->tuplehash[IP_CT_DIR_ORIGINAL].tuple.dst.protonum) {
+ case IPPROTO_TCP:
++ is_tcp = true;
++ break;
+ case IPPROTO_UDP:
+ break;
+ default:
+@@ -109,6 +112,11 @@ static void nft_flow_offload_eval(const
+ if (!flow)
+ goto err_flow_alloc;
+
++ if (is_tcp) {
++ ct->proto.tcp.seen[0].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ ct->proto.tcp.seen[1].flags |= IP_CT_TCP_FLAG_BE_LIBERAL;
++ }
++
+ ret = flow_offload_add(flowtable, flow);
+ if (ret < 0)
+ goto err_flow_add;