--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: David Howells <dhowells@redhat.com>
+Date: Fri, 7 Sep 2018 23:55:17 +0100
+Subject: afs: Fix cell specification to permit an empty address list
+
+From: David Howells <dhowells@redhat.com>
+
+[ Upstream commit ecfe951f0c1b169ea4b7dd6f3a404dfedd795bc2 ]
+
+Fix the cell specification mechanism to allow cells to be pre-created
+without having to specify at least one address (the addresses will be
+upcalled for).
+
+This allows the cell information preload service to avoid the need to issue
+loads of DNS lookups during boot to get the addresses for each cell (500+
+lookups for the 'standard' cell list[*]). The lookups can be done later as
+each cell is accessed through the filesystem.
+
+Also remove the print statement that prints a line every time a new cell is
+added.
+
+[*] There are 144 cells in the list. Each cell is first looked up for an
+ SRV record, and if that fails, for an AFSDB record. These get a list
+ of server names, each of which then has to be looked up to get the
+ addresses for that server. E.g.:
+
+ dig srv _afs3-vlserver._udp.grand.central.org
+
+Signed-off-by: David Howells <dhowells@redhat.com>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/afs/proc.c | 15 +++++++--------
+ 1 file changed, 7 insertions(+), 8 deletions(-)
+
+--- a/fs/afs/proc.c
++++ b/fs/afs/proc.c
+@@ -98,13 +98,13 @@ static int afs_proc_cells_write(struct f
+ goto inval;
+
+ args = strchr(name, ' ');
+- if (!args)
+- goto inval;
+- do {
+- *args++ = 0;
+- } while(*args == ' ');
+- if (!*args)
+- goto inval;
++ if (args) {
++ do {
++ *args++ = 0;
++ } while(*args == ' ');
++ if (!*args)
++ goto inval;
++ }
+
+ /* determine command to perform */
+ _debug("cmd=%s name=%s args=%s", buf, name, args);
+@@ -120,7 +120,6 @@ static int afs_proc_cells_write(struct f
+
+ if (test_and_set_bit(AFS_CELL_FL_NO_GC, &cell->flags))
+ afs_put_cell(net, cell);
+- printk("kAFS: Added new cell '%s'\n", name);
+ } else {
+ goto inval;
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Will Deacon <will.deacon@arm.com>
+Date: Thu, 30 Aug 2018 13:52:38 -0700
+Subject: ARC: atomics: unbork atomic_fetch_##op()
+
+From: Will Deacon <will.deacon@arm.com>
+
+[ Upstream commit 3fcbb8260a87efb691d837e8cd24e81f65b3eb70 ]
+
+In 4.19-rc1, Eugeniy reported weird boot and IO errors on ARC HSDK
+
+| INFO: task syslogd:77 blocked for more than 10 seconds.
+| Not tainted 4.19.0-rc1-00007-gf213acea4e88 #40
+| "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this
+| message.
+| syslogd D 0 77 76 0x00000000
+|
+| Stack Trace:
+| __switch_to+0x0/0xac
+| __schedule+0x1b2/0x730
+| io_schedule+0x5c/0xc0
+| __lock_page+0x98/0xdc
+| find_lock_entry+0x38/0x100
+| shmem_getpage_gfp.isra.3+0x82/0xbfc
+| shmem_fault+0x46/0x138
+| handle_mm_fault+0x5bc/0x924
+| do_page_fault+0x100/0x2b8
+| ret_from_exception+0x0/0x8
+
+He bisected to 84c6591103db ("locking/atomics,
+asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*()")
+
+This commit however only unmasked the real issue introduced by commit
+4aef66c8ae9 ("locking/atomic, arch/arc: Fix build") which missed the
+retry-if-scond-failed branch in atomic_fetch_##op() macros.
+
+The bisected commit started using atomic_fetch_##op() macros for building
+the rest of atomics.
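+
+For illustration only (a C-level sketch, not the actual ARC assembly in the
+patch; store_conditional() is a stand-in for checking scond's status flag),
+atomic_fetch_add() must loop until the store-conditional succeeds:
+
+    do {
+        orig = v->counter;      /* llock: load-locked            */
+        val  = orig + i;        /* the #asm_op step (add here)   */
+    } while (!store_conditional(&v->counter, val));  /* scond + "bnz 1b" */
+
+Without the branch back to the llock, a failed scond silently drops the
+update, which is exactly what the missing "bnz 1b" amounted to.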
+
+Fixes: 4aef66c8ae9 ("locking/atomic, arch/arc: Fix build")
+Reported-by: Eugeniy Paltsev <paltsev@synopsys.com>
+Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
+Signed-off-by: Will Deacon <will.deacon@arm.com>
+Signed-off-by: Vineet Gupta <vgupta@synopsys.com>
+[vgupta: wrote changelog]
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/arc/include/asm/atomic.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arc/include/asm/atomic.h
++++ b/arch/arc/include/asm/atomic.h
+@@ -84,7 +84,7 @@ static inline int atomic_fetch_##op(int
+ "1: llock %[orig], [%[ctr]] \n" \
+ " " #asm_op " %[val], %[orig], %[i] \n" \
+ " scond %[val], [%[ctr]] \n" \
+- " \n" \
++ " bnz 1b \n" \
+ : [val] "=&r" (val), \
+ [orig] "=&r" (orig) \
+ : [ctr] "r" (&v->counter), \
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: John Fastabend <john.fastabend@gmail.com>
+Date: Thu, 30 Aug 2018 21:25:02 -0700
+Subject: bpf: avoid misuse of psock when TCP_ULP_BPF collides with another ULP
+
+From: John Fastabend <john.fastabend@gmail.com>
+
+[ Upstream commit 597222f72a94118f593e4f32bf58ae7e049a0df1 ]
+
+Currently we check sk_user_data is non-NULL to determine if the sk
+exists in a map. However, this is not sufficient to ensure the psock
+or the ULP ops are not in use by another user, such as kcm or TLS. To
+avoid this, when adding a sock to a map, also verify it is of the
+correct ULP type. Additionally, when releasing a psock verify that
+it is the TCP_ULP_BPF type before releasing the ULP. The error case
+where we abort an update due to ULP collision can cause this error
+path.
+
+For example,
+
+ __sock_map_ctx_update_elem()
+ [...]
+ err = tcp_set_ulp_id(sock, TCP_ULP_BPF) <- collides with TLS
+ if (err) <- so err out here
+ goto out_free
+ [...]
+ out_free:
+ smap_release_sock() <- calling tcp_cleanup_ulp releases the
+ TLS ULP incorrectly.
+
+Fixes: 2f857d04601a ("bpf: sockmap, remove STRPARSER map_flags and add multi-map support")
+Signed-off-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/sockmap.c | 12 +++++++++++-
+ 1 file changed, 11 insertions(+), 1 deletion(-)
+
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -1465,10 +1465,16 @@ static void smap_destroy_psock(struct rc
+ schedule_work(&psock->gc_work);
+ }
+
++static bool psock_is_smap_sk(struct sock *sk)
++{
++ return inet_csk(sk)->icsk_ulp_ops == &bpf_tcp_ulp_ops;
++}
++
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock)
+ {
+ if (refcount_dec_and_test(&psock->refcnt)) {
+- tcp_cleanup_ulp(sock);
++ if (psock_is_smap_sk(sock))
++ tcp_cleanup_ulp(sock);
+ write_lock_bh(&sock->sk_callback_lock);
+ smap_stop_sock(psock, sock);
+ write_unlock_bh(&sock->sk_callback_lock);
+@@ -1895,6 +1901,10 @@ static int __sock_map_ctx_update_elem(st
+ * doesn't update user data.
+ */
+ if (psock) {
++ if (!psock_is_smap_sk(sock)) {
++ err = -EBUSY;
++ goto out_progs;
++ }
+ if (READ_ONCE(psock->bpf_parse) && parse) {
+ err = -EBUSY;
+ goto out_progs;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Tushar Dave <tushar.n.dave@oracle.com>
+Date: Fri, 31 Aug 2018 23:45:16 +0200
+Subject: bpf: Fix bpf_msg_pull_data()
+
+From: Tushar Dave <tushar.n.dave@oracle.com>
+
+[ Upstream commit 9db39f4d4f94b61e4b64b077f6ddb2bdfb533a88 ]
+
+Helper bpf_msg_pull_data() mistakenly reuses variable 'offset' while
+linearizing multiple scatterlist elements. Variable 'offset' is used
+to find the first starting scatterlist element,
+ i.e. msg->data = sg_virt(&sg[first_sg]) + start - offset
+
+Use different variable name while linearizing multiple scatterlist
+elements so that value contained in variable 'offset' won't get
+overwritten.
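+
+Illustrative effect of the bug (numbers are made up): if the headroom into
+first_sg is offset = 100 and the copy loop then reuses 'offset' as its
+running copy cursor, ending at, say, 8192 after linearizing, the final
+"msg->data = sg_virt(&sg[first_sg]) + start - offset" is computed with
+8192 instead of 100 and points far outside the intended data.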
+
+Fixes: 015632bb30da ("bpf: sk_msg program helper bpf_sk_msg_pull_data")
+Signed-off-by: Tushar Dave <tushar.n.dave@oracle.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/filter.c | 7 +++----
+ 1 file changed, 3 insertions(+), 4 deletions(-)
+
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2282,7 +2282,7 @@ static const struct bpf_func_proto bpf_m
+ BPF_CALL_4(bpf_msg_pull_data,
+ struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+- unsigned int len = 0, offset = 0, copy = 0;
++ unsigned int len = 0, offset = 0, copy = 0, poffset = 0;
+ int bytes = end - start, bytes_sg_total;
+ struct scatterlist *sg = msg->sg_data;
+ int first_sg, last_sg, i, shift;
+@@ -2338,16 +2338,15 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (unlikely(!page))
+ return -ENOMEM;
+ p = page_address(page);
+- offset = 0;
+
+ i = first_sg;
+ do {
+ from = sg_virt(&sg[i]);
+ len = sg[i].length;
+- to = p + offset;
++ to = p + poffset;
+
+ memcpy(to, from, len);
+- offset += len;
++ poffset += len;
+ sg[i].length = 0;
+ put_page(sg_page(&sg[i]));
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Wed, 29 Aug 2018 16:50:34 +0200
+Subject: bpf: fix msg->data/data_end after sg shift repair in bpf_msg_pull_data
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit 0e06b227c5221dd51b5569de93f3b9f532be4a32 ]
+
+In the current code, msg->data is set as sg_virt(&sg[i]) + start - offset
+and msg->data_end relative to it as msg->data + bytes. Using iterator i
+to point to the updated starting scatterlist element holds true for some
+cases, however not for all, where we'd end up pointing out of bounds. It
+is /correct/ for these ones:
+
+1) When first finding the starting scatterlist element (sge) where we
+ find that the page is already privately owned by the msg and where
+ the requested bytes and headroom fit into the sge's length.
+
+However, it's /incorrect/ for the following ones:
+
+2) After we made the requested area private and updated the newly allocated
+ page into first_sg slot of the scatterlist ring; when we find that no
+ shift repair of the ring is needed where we bail out updating msg->data
+ and msg->data_end. At that point i will point to last_sg, which in this
+ case is the next elem of first_sg in the ring. The sge at that point
+ might as well be invalid (e.g. i == msg->sg_end), which we use for
+ setting the range of sg_virt(&sg[i]). The correct one would have been
+ first_sg.
+
+3) Similar as in 2) but when we find that a shift repair of the ring is
+ needed. In this case we fix up all sges and stop once we've reached the
+ end. In this case i will point to the new msg->sg_end,
+ and the sge at that point will be invalid. Again here the requested
+ range sits in first_sg.
+
+Fixes: 015632bb30da ("bpf: sk_msg program helper bpf_sk_msg_pull_data")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/filter.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2300,6 +2300,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (unlikely(start >= offset + len))
+ return -EINVAL;
+
++ first_sg = i;
+ /* The start may point into the sg element so we need to also
+ * account for the headroom.
+ */
+@@ -2307,8 +2308,6 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (!msg->sg_copy[i] && bytes_sg_total <= len)
+ goto out;
+
+- first_sg = i;
+-
+ /* At this point we need to linearize multiple scatterlist
+ * elements or a single shared page. Either way we need to
+ * copy into a linear buffer exclusively owned by BPF. Then
+@@ -2390,7 +2389,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (msg->sg_end < 0)
+ msg->sg_end += MAX_SKB_FRAGS;
+ out:
+- msg->data = sg_virt(&sg[i]) + start - offset;
++ msg->data = sg_virt(&sg[first_sg]) + start - offset;
+ msg->data_end = msg->data + bytes;
+
+ return 0;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Tue, 28 Aug 2018 16:15:35 +0200
+Subject: bpf: fix several offset tests in bpf_msg_pull_data
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit 5b24109b0563d45094c470684c1f8cea1af269f8 ]
+
+While recently going over bpf_msg_pull_data(), I noticed three
+issues which are fixed in here:
+
+1) When we attempt to find the first scatterlist element (sge)
+ for the start offset, we add len to the offset before we check
+ for start < offset + len, whereas it should come after when
+ we iterate to the next sge to accumulate the offsets. For
+ example, given a start offset of 12 and an sge length of 8 for
+ the first sge in the list, we would wrongly pick this sge as the
+ starting one, thinking it covers the first 16 bytes where start
+ is located, whereas start actually sits in subsequent sges, so
+ we would end up pulling in the wrong data.
+
+2) After figuring out the starting sge, we have a short-cut test
+ in !msg->sg_copy[i] && bytes <= len. This checks whether it's
+ not needed to make the page at the sge private where we can
+ just exit by updating msg->data and msg->data_end. However,
+ the length test is not fully correct. bytes <= len checks
+ whether the requested bytes (end - start offsets) fit into the
+ sge's length. The part that is missing is that start might not
+ be aligned to the start of the sge. Meaning, the start offset into the sge
+ needs to be accounted as well on top of the requested bytes
+ as otherwise we can access the sge out of bounds. For example
+ the sge could have length of 8, our requested bytes could have
+ length of 8, but at a start offset of 4, so we also would need
+ to pull in 4 bytes of the next sge; when we jump to the out
+ label we set msg->data to sg_virt(&sg[i]) + start - offset
+ and msg->data_end to msg->data + bytes, which would be oob.
+
+3) The subsequent bytes < copy test for finding the last sge has
+ the same issue as in point 2) but also it tests for less than
+ rather than less or equal to. Meaning if the sge length is of
+ 8 and requested bytes of 8 while having the start aligned with
+ the sge, we would unnecessarily go and pull in the next sge as
+ well to make it private.
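+
+As a worked check of 2) (illustrative numbers from the example above): with
+sge length len = 8, requested bytes = 8 and a start offset of 4 into the
+sge (start - offset = 4), the new test computes bytes_sg_total = 4 + 8 = 12,
+which is > len = 8, so the shortcut is correctly not taken, whereas the old
+"bytes <= len" test (8 <= 8) would have taken it and read out of bounds.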
+
+Fixes: 015632bb30da ("bpf: sk_msg program helper bpf_sk_msg_pull_data")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/filter.c | 14 +++++++++-----
+ 1 file changed, 9 insertions(+), 5 deletions(-)
+
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2276,10 +2276,10 @@ BPF_CALL_4(bpf_msg_pull_data,
+ struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+ unsigned int len = 0, offset = 0, copy = 0;
++ int bytes = end - start, bytes_sg_total;
+ struct scatterlist *sg = msg->sg_data;
+ int first_sg, last_sg, i, shift;
+ unsigned char *p, *to, *from;
+- int bytes = end - start;
+ struct page *page;
+
+ if (unlikely(flags || end <= start))
+@@ -2289,9 +2289,9 @@ BPF_CALL_4(bpf_msg_pull_data,
+ i = msg->sg_start;
+ do {
+ len = sg[i].length;
+- offset += len;
+ if (start < offset + len)
+ break;
++ offset += len;
+ i++;
+ if (i == MAX_SKB_FRAGS)
+ i = 0;
+@@ -2300,7 +2300,11 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (unlikely(start >= offset + len))
+ return -EINVAL;
+
+- if (!msg->sg_copy[i] && bytes <= len)
++ /* The start may point into the sg element so we need to also
++ * account for the headroom.
++ */
++ bytes_sg_total = start - offset + bytes;
++ if (!msg->sg_copy[i] && bytes_sg_total <= len)
+ goto out;
+
+ first_sg = i;
+@@ -2320,12 +2324,12 @@ BPF_CALL_4(bpf_msg_pull_data,
+ i++;
+ if (i == MAX_SKB_FRAGS)
+ i = 0;
+- if (bytes < copy)
++ if (bytes_sg_total <= copy)
+ break;
+ } while (i != msg->sg_end);
+ last_sg = i;
+
+- if (unlikely(copy < end - start))
++ if (unlikely(bytes_sg_total > copy))
+ return -EINVAL;
+
+ page = alloc_pages(__GFP_NOWARN | GFP_ATOMIC, get_order(copy));
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Wed, 29 Aug 2018 16:50:36 +0200
+Subject: bpf: fix sg shift repair start offset in bpf_msg_pull_data
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit a8cf76a9023bc6709b1361d06bb2fae5227b9d68 ]
+
+When we perform the sg shift repair for the scatterlist ring, we
+currently start out at i = first_sg + 1. However, this is not
+correct since the first_sg could point to the sge sitting at slot
+MAX_SKB_FRAGS - 1, and a subsequent i = MAX_SKB_FRAGS will access
+the scatterlist ring (sg) out of bounds. Add the sk_msg_iter_var()
+helper for iterating through the ring, and apply the same rule
+for advancing to the next ring element as we do elsewhere. Later
+work will use this helper also in other places.
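+
+For illustration (not part of the patch), with first_sg sitting in the last
+ring slot the helper wraps instead of running past the array:
+
+    i = first_sg;           /* e.g. MAX_SKB_FRAGS - 1            */
+    sk_msg_iter_var(i);     /* i becomes 0, not MAX_SKB_FRAGS    */
+
+which is exactly the out-of-bounds case that the old "i = first_sg + 1"
+start value could hit.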
+
+Fixes: 015632bb30da ("bpf: sk_msg program helper bpf_sk_msg_pull_data")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/filter.c | 26 +++++++++++++-------------
+ 1 file changed, 13 insertions(+), 13 deletions(-)
+
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2272,6 +2272,13 @@ static const struct bpf_func_proto bpf_m
+ .arg2_type = ARG_ANYTHING,
+ };
+
++#define sk_msg_iter_var(var) \
++ do { \
++ var++; \
++ if (var == MAX_SKB_FRAGS) \
++ var = 0; \
++ } while (0)
++
+ BPF_CALL_4(bpf_msg_pull_data,
+ struct sk_msg_buff *, msg, u32, start, u32, end, u64, flags)
+ {
+@@ -2292,9 +2299,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (start < offset + len)
+ break;
+ offset += len;
+- i++;
+- if (i == MAX_SKB_FRAGS)
+- i = 0;
++ sk_msg_iter_var(i);
+ } while (i != msg->sg_end);
+
+ if (unlikely(start >= offset + len))
+@@ -2320,9 +2325,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ */
+ do {
+ copy += sg[i].length;
+- i++;
+- if (i == MAX_SKB_FRAGS)
+- i = 0;
++ sk_msg_iter_var(i);
+ if (bytes_sg_total <= copy)
+ break;
+ } while (i != msg->sg_end);
+@@ -2348,9 +2351,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ sg[i].length = 0;
+ put_page(sg_page(&sg[i]));
+
+- i++;
+- if (i == MAX_SKB_FRAGS)
+- i = 0;
++ sk_msg_iter_var(i);
+ } while (i != last_sg);
+
+ sg[first_sg].length = copy;
+@@ -2367,7 +2368,8 @@ BPF_CALL_4(bpf_msg_pull_data,
+ if (!shift)
+ goto out;
+
+- i = first_sg + 1;
++ i = first_sg;
++ sk_msg_iter_var(i);
+ do {
+ int move_from;
+
+@@ -2384,9 +2386,7 @@ BPF_CALL_4(bpf_msg_pull_data,
+ sg[move_from].page_link = 0;
+ sg[move_from].offset = 0;
+
+- i++;
+- if (i == MAX_SKB_FRAGS)
+- i = 0;
++ sk_msg_iter_var(i);
+ } while (1);
+ msg->sg_end -= shift;
+ if (msg->sg_end < 0)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Wed, 29 Aug 2018 16:50:35 +0200
+Subject: bpf: fix shift upon scatterlist ring wrap-around in bpf_msg_pull_data
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit 2e43f95dd8ee62bc8bf57f2afac37fbd70c8d565 ]
+
+If first_sg and last_sg wrap around in the scatterlist ring, then we
+need to account for that in the shift as well. E.g. crafting such msgs
+where this is the case leads to a hang as shift becomes negative. E.g.
+consider the following scenario:
+
+ first_sg := 14 |=> shift := -12 msg->sg_start := 10
+ last_sg := 3 | msg->sg_end := 5
+
+round 1: i := 15, move_from := 3, sg[15] := sg[ 3]
+round 2: i := 0, move_from := -12, sg[ 0] := sg[-12]
+round 3: i := 1, move_from := -11, sg[ 1] := sg[-11]
+round 4: i := 2, move_from := -10, sg[ 2] := sg[-10]
+[...]
+round 13: i := 11, move_from := -1, sg[11] := sg[ -1]
+round 14: i := 12, move_from := 0, sg[12] := sg[ 0]
+round 15: i := 13, move_from := 1, sg[13] := sg[ 1]
+round 16: i := 14, move_from := 2, sg[14] := sg[ 2]
+round 17: i := 15, move_from := 3, sg[15] := sg[ 3]
+[...]
+
+This means we will loop forever and never hit the msg->sg_end condition
+to break out of the loop. When we see that the ring wraps around, then
+the shift should be MAX_SKB_FRAGS - first_sg + last_sg - 1. Meaning,
+the remainder slots from the tail of the ring and the head until last_sg
+combined.
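+
+Worked example using the trace above (the ring there has 16 slots, so
+MAX_SKB_FRAGS = 16 for this illustration): with first_sg = 14 and
+last_sg = 3 the corrected formula gives shift = 16 - 14 + 3 - 1 = 4,
+i.e. the five slots 14, 15, 0, 1, 2 collapsed into the single first_sg
+entry, instead of the bogus shift of -12 from last_sg - first_sg - 1.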
+
+Fixes: 015632bb30da ("bpf: sk_msg program helper bpf_sk_msg_pull_data")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/core/filter.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/net/core/filter.c
++++ b/net/core/filter.c
+@@ -2360,7 +2360,10 @@ BPF_CALL_4(bpf_msg_pull_data,
+ * had a single entry though we can just replace it and
+ * be done. Otherwise walk the ring and shift the entries.
+ */
+- shift = last_sg - first_sg - 1;
++ WARN_ON_ONCE(last_sg == first_sg);
++ shift = last_sg > first_sg ?
++ last_sg - first_sg - 1 :
++ MAX_SKB_FRAGS - first_sg + last_sg - 1;
+ if (!shift)
+ goto out;
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: John Fastabend <john.fastabend@gmail.com>
+Date: Fri, 24 Aug 2018 17:37:00 -0700
+Subject: bpf: sockmap, decrement copied count correctly in redirect error case
+
+From: John Fastabend <john.fastabend@gmail.com>
+
+[ Upstream commit 501ca81760c204ec59b73e4a00bee5971fc0f1b1 ]
+
+Currently, when a redirect occurs in sockmap and an error occurs in
+the redirect call we unwind the scatterlist once in the error path
+of bpf_tcp_sendmsg_do_redirect() and then again in sendmsg(). Then
+in the error path of sendmsg we decrement the copied count by the
+send size.
+
+However, it's possible we partially sent data before the error was
+generated. This can happen if do_tcp_sendpages() partially sends the
+scatterlist before encountering a memory pressure error. If this
+happens we need to decrement the copied value (the value tracking
+how many bytes were actually sent to TCP stack) by the number of
+remaining bytes _not_ the entire send size. Otherwise we risk
+confusing userspace.
+
+Also, we don't need two calls to free the scatterlist; one is
+good enough. So remove the one in bpf_tcp_sendmsg_do_redirect() and
+then properly reduce copied by the number of remaining bytes which
+may in fact be the entire send size if no bytes were sent.
+
+To do this, use a bool to indicate whether free_start_sg() should do mem
+accounting or not.
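+
+For example (illustrative numbers): with a send of 10 bytes where
+do_tcp_sendpages() manages to push 6 before hitting memory pressure,
+free_start_sg() now frees and returns the 4 remaining bytes, so copied
+is reduced by 4 rather than by the full 10, and userspace still sees
+the 6 bytes that actually reached the TCP stack.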
+
+Signed-off-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/sockmap.c | 45 ++++++++++++++++++++++-----------------------
+ 1 file changed, 22 insertions(+), 23 deletions(-)
+
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -236,7 +236,7 @@ static int bpf_tcp_init(struct sock *sk)
+ }
+
+ static void smap_release_sock(struct smap_psock *psock, struct sock *sock);
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md);
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge);
+
+ static void bpf_tcp_release(struct sock *sk)
+ {
+@@ -248,7 +248,7 @@ static void bpf_tcp_release(struct sock
+ goto out;
+
+ if (psock->cork) {
+- free_start_sg(psock->sock, psock->cork);
++ free_start_sg(psock->sock, psock->cork, true);
+ kfree(psock->cork);
+ psock->cork = NULL;
+ }
+@@ -330,14 +330,14 @@ static void bpf_tcp_close(struct sock *s
+ close_fun = psock->save_close;
+
+ if (psock->cork) {
+- free_start_sg(psock->sock, psock->cork);
++ free_start_sg(psock->sock, psock->cork, true);
+ kfree(psock->cork);
+ psock->cork = NULL;
+ }
+
+ list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ list_del(&md->list);
+- free_start_sg(psock->sock, md);
++ free_start_sg(psock->sock, md, true);
+ kfree(md);
+ }
+
+@@ -570,14 +570,16 @@ static void free_bytes_sg(struct sock *s
+ md->sg_start = i;
+ }
+
+-static int free_sg(struct sock *sk, int start, struct sk_msg_buff *md)
++static int free_sg(struct sock *sk, int start,
++ struct sk_msg_buff *md, bool charge)
+ {
+ struct scatterlist *sg = md->sg_data;
+ int i = start, free = 0;
+
+ while (sg[i].length) {
+ free += sg[i].length;
+- sk_mem_uncharge(sk, sg[i].length);
++ if (charge)
++ sk_mem_uncharge(sk, sg[i].length);
+ if (!md->skb)
+ put_page(sg_page(&sg[i]));
+ sg[i].length = 0;
+@@ -594,9 +596,9 @@ static int free_sg(struct sock *sk, int
+ return free;
+ }
+
+-static int free_start_sg(struct sock *sk, struct sk_msg_buff *md)
++static int free_start_sg(struct sock *sk, struct sk_msg_buff *md, bool charge)
+ {
+- int free = free_sg(sk, md->sg_start, md);
++ int free = free_sg(sk, md->sg_start, md, charge);
+
+ md->sg_start = md->sg_end;
+ return free;
+@@ -604,7 +606,7 @@ static int free_start_sg(struct sock *sk
+
+ static int free_curr_sg(struct sock *sk, struct sk_msg_buff *md)
+ {
+- return free_sg(sk, md->sg_curr, md);
++ return free_sg(sk, md->sg_curr, md, true);
+ }
+
+ static int bpf_map_msg_verdict(int _rc, struct sk_msg_buff *md)
+@@ -718,7 +720,7 @@ static int bpf_tcp_ingress(struct sock *
+ list_add_tail(&r->list, &psock->ingress);
+ sk->sk_data_ready(sk);
+ } else {
+- free_start_sg(sk, r);
++ free_start_sg(sk, r, true);
+ kfree(r);
+ }
+
+@@ -755,14 +757,10 @@ static int bpf_tcp_sendmsg_do_redirect(s
+ release_sock(sk);
+ }
+ smap_release_sock(psock, sk);
+- if (unlikely(err))
+- goto out;
+- return 0;
++ return err;
+ out_rcu:
+ rcu_read_unlock();
+-out:
+- free_bytes_sg(NULL, send, md, false);
+- return err;
++ return 0;
+ }
+
+ static inline void bpf_md_init(struct smap_psock *psock)
+@@ -825,7 +823,7 @@ more_data:
+ case __SK_PASS:
+ err = bpf_tcp_push(sk, send, m, flags, true);
+ if (unlikely(err)) {
+- *copied -= free_start_sg(sk, m);
++ *copied -= free_start_sg(sk, m, true);
+ break;
+ }
+
+@@ -848,16 +846,17 @@ more_data:
+ lock_sock(sk);
+
+ if (unlikely(err < 0)) {
+- free_start_sg(sk, m);
++ int free = free_start_sg(sk, m, false);
++
+ psock->sg_size = 0;
+ if (!cork)
+- *copied -= send;
++ *copied -= free;
+ } else {
+ psock->sg_size -= send;
+ }
+
+ if (cork) {
+- free_start_sg(sk, m);
++ free_start_sg(sk, m, true);
+ psock->sg_size = 0;
+ kfree(m);
+ m = NULL;
+@@ -1124,7 +1123,7 @@ wait_for_memory:
+ err = sk_stream_wait_memory(sk, &timeo);
+ if (err) {
+ if (m && m != psock->cork)
+- free_start_sg(sk, m);
++ free_start_sg(sk, m, true);
+ goto out_err;
+ }
+ }
+@@ -1583,13 +1582,13 @@ static void smap_gc_work(struct work_str
+ bpf_prog_put(psock->bpf_tx_msg);
+
+ if (psock->cork) {
+- free_start_sg(psock->sock, psock->cork);
++ free_start_sg(psock->sock, psock->cork, true);
+ kfree(psock->cork);
+ }
+
+ list_for_each_entry_safe(md, mtmp, &psock->ingress, list) {
+ list_del(&md->list);
+- free_start_sg(psock->sock, md);
++ free_start_sg(psock->sock, md, true);
+ kfree(md);
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Fri, 24 Aug 2018 22:08:50 +0200
+Subject: bpf, sockmap: fix potential use after free in bpf_tcp_close
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit e06fa9c16ce4b740996189fa5610eabcee734e6c ]
+
+In bpf_tcp_close() we pop the psock linkage to a map via psock_map_pop().
+A parallel update on the sock hash map can happen between psock_map_pop()
+and lookup_elem_raw() where we override the element under link->hash /
+link->key. In bpf_tcp_close()'s lookup_elem_raw() we subsequently only
+test whether an element is present, but we do not test whether the
+element is in fact the element we were looking for.
+
+We lock the sock in bpf_tcp_close() during that time, so do we hold
+the lock in sock_hash_update_elem(). However, the latter locks the
+sock which is newly updated, not the one we're purging from the hash
+table. This means that while one CPU is doing the lookup from bpf_tcp_close(),
+another CPU doing the map update in parallel may have dropped our sock from
+the hlist and released the psock.
+
+Subsequently the first CPU will find the new sock and attempt to drop
+and release the old sock yet another time. The fix is that we need to check
+the elements for a match after lookup, similar to what we do in the sock map.
+Note that the hash tab elems are freed via RCU, so access to their
+link->hash / link->key is fine since we're under RCU read side there.
+
+Fixes: e9db4ef6bf4c ("bpf: sockhash fix omitted bucket lock in sock_close")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/sockmap.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -369,7 +369,7 @@ static void bpf_tcp_close(struct sock *s
+ /* If another thread deleted this object skip deletion.
+ * The refcnt on psock may or may not be zero.
+ */
+- if (l) {
++ if (l && l == link) {
+ hlist_del_rcu(&link->hash_node);
+ smap_release_sock(psock, link->sk);
+ free_htab_elem(htab, link);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Daniel Borkmann <daniel@iogearbox.net>
+Date: Fri, 24 Aug 2018 22:08:51 +0200
+Subject: bpf, sockmap: fix psock refcount leak in bpf_tcp_recvmsg
+
+From: Daniel Borkmann <daniel@iogearbox.net>
+
+[ Upstream commit 15c480efab01197c965ce0562a43ffedd852b8f9 ]
+
+In bpf_tcp_recvmsg() we first took a reference on the psock, however
+once we find that there are skbs in the normal socket's receive queue
+we return, processing them through tcp_recvmsg(). The problem is that
+we leak the taken reference on the psock in that path. Given we don't
+really do anything with the psock at this point, move the skb_queue_empty()
+test before we fetch the psock to fix this case.
+
+Fixes: 8934ce2fd081 ("bpf: sockmap redirect ingress support")
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ kernel/bpf/sockmap.c | 5 ++---
+ 1 file changed, 2 insertions(+), 3 deletions(-)
+
+--- a/kernel/bpf/sockmap.c
++++ b/kernel/bpf/sockmap.c
+@@ -915,6 +915,8 @@ static int bpf_tcp_recvmsg(struct sock *
+
+ if (unlikely(flags & MSG_ERRQUEUE))
+ return inet_recv_error(sk, msg, len, addr_len);
++ if (!skb_queue_empty(&sk->sk_receive_queue))
++ return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+
+ rcu_read_lock();
+ psock = smap_psock_sk(sk);
+@@ -925,9 +927,6 @@ static int bpf_tcp_recvmsg(struct sock *
+ goto out;
+ rcu_read_unlock();
+
+- if (!skb_queue_empty(&sk->sk_receive_queue))
+- return tcp_recvmsg(sk, msg, len, nonblock, flags, addr_len);
+-
+ lock_sock(sk);
+ bytes_ready:
+ while (copied != len) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Anand Jain <anand.jain@oracle.com>
+Date: Mon, 6 Aug 2018 18:12:37 +0800
+Subject: btrfs: btrfs_shrink_device should call commit transaction at the end
+
+From: Anand Jain <anand.jain@oracle.com>
+
+[ Upstream commit 801660b040d132f67fac6a95910ad307c5929b49 ]
+
+Test case btrfs/164 reports use-after-free:
+
+[ 6712.084324] general protection fault: 0000 [#1] PREEMPT SMP
+..
+[ 6712.195423] btrfs_update_commit_device_size+0x75/0xf0 [btrfs]
+[ 6712.201424] btrfs_commit_transaction+0x57d/0xa90 [btrfs]
+[ 6712.206999] btrfs_rm_device+0x627/0x850 [btrfs]
+[ 6712.211800] btrfs_ioctl+0x2b03/0x3120 [btrfs]
+
+The reason for this is that btrfs_shrink_device adds the resized device to
+the fs_devices::resized_devices after it has called the last commit
+transaction.
+
+So the list fs_devices::resized_devices is not empty when
+btrfs_shrink_device returns. Now the parent function
+btrfs_rm_device calls:
+
+ btrfs_close_bdev(device);
+ call_rcu(&device->rcu, free_device_rcu);
+
+and then does the transaction commit. It goes through the
+fs_devices::resized_devices in btrfs_update_commit_device_size and
+leads to use-after-free.
+
+Fix this by making sure btrfs_shrink_device calls the last needed
+btrfs_commit_transaction before the return. This is consistent with what
+the grow counterpart does and this makes sure the on-disk state is
+persistent when the function returns.
+
+Reported-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
+Tested-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com>
+Signed-off-by: Anand Jain <anand.jain@oracle.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+[ update changelog ]
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/volumes.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -4584,7 +4584,12 @@ again:
+
+ /* Now btrfs_update_device() will change the on-disk size. */
+ ret = btrfs_update_device(trans, device);
+- btrfs_end_transaction(trans);
++ if (ret < 0) {
++ btrfs_abort_transaction(trans, ret);
++ btrfs_end_transaction(trans);
++ } else {
++ ret = btrfs_commit_transaction(trans);
++ }
+ done:
+ btrfs_free_path(path);
+ if (ret) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Robbie Ko <robbieko@synology.com>
+Date: Mon, 6 Aug 2018 10:30:30 +0800
+Subject: Btrfs: fix unexpected failure of nocow buffered writes after snapshotting when low on space
+
+From: Robbie Ko <robbieko@synology.com>
+
+[ Upstream commit 8ecebf4d767e2307a946c8905278d6358eda35c3 ]
+
+Commit e9894fd3e3b3 ("Btrfs: fix snapshot vs nocow writting") forced
+nocow writes to fallback to COW, during writeback, when a snapshot is
+created. This resulted in writes made before creating the snapshot to
+unexpectedly fail with ENOSPC during writeback when success (0) was
+returned to user space through the write system call.
+
+The steps leading to this problem are:
+
+1. When it's not possible to allocate data space for a write, the
+ buffered write path checks if a NOCOW write is possible. If it is,
+ it will not reserve space and success (0) is returned to user space.
+
+2. Then when a snapshot is created, the root's will_be_snapshotted
+ atomic is incremented and writeback is triggered for all inode's that
+ belong to the root being snapshotted. Incrementing that atomic forces
+ all previous writes to fallback to COW during writeback (running
+ delalloc).
+
+3. This results in the writeback for the inodes failing and therefore
+ setting the ENOSPC error in their mappings, so that a subsequent
+ fsync on them will report the error to user space. So it's not a
+ completely silent data loss (since fsync will report ENOSPC) but it's
+ a very unexpected and undesirable behaviour, because if a clean
+ shutdown/unmount of the filesystem happens without previous calls to
+ fsync, it is expected to have the data present in the files after
+ mounting the filesystem again.
+
+So fix this by adding a new atomic named snapshot_force_cow to the
+root structure which prevents this behaviour and works the following way:
+
+1. It is incremented when we start to create a snapshot after triggering
+ writeback and before waiting for writeback to finish.
+
+2. This new atomic is now what is used by writeback (running delalloc)
+ to decide whether we need to fallback to COW or not. Because we
+ incremented this new atomic after triggering writeback in the
+ snapshot creation ioctl, we ensure that all buffered writes that
+ happened before snapshot creation will succeed and not fallback to
+ COW (which would make them fail with ENOSPC).
+
+3. The existing atomic, will_be_snapshotted, is kept because it is used
+ to force new buffered writes, that start after we started
+ snapshotting, to reserve data space even when NOCOW is possible.
+ This makes these writes fail early with ENOSPC when there's no
+ available space to allocate, preventing the unexpected behaviour of
+ writeback later failing with ENOSPC due to a fallback to COW mode.
+
+Fixes: e9894fd3e3b3 ("Btrfs: fix snapshot vs nocow writting")
+Signed-off-by: Robbie Ko <robbieko@synology.com>
+Reviewed-by: Filipe Manana <fdmanana@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/btrfs/ctree.h | 1 +
+ fs/btrfs/disk-io.c | 1 +
+ fs/btrfs/inode.c | 25 ++++---------------------
+ fs/btrfs/ioctl.c | 16 ++++++++++++++++
+ 4 files changed, 22 insertions(+), 21 deletions(-)
+
+--- a/fs/btrfs/ctree.h
++++ b/fs/btrfs/ctree.h
+@@ -1277,6 +1277,7 @@ struct btrfs_root {
+ int send_in_progress;
+ struct btrfs_subvolume_writers *subv_writers;
+ atomic_t will_be_snapshotted;
++ atomic_t snapshot_force_cow;
+
+ /* For qgroup metadata reserved space */
+ spinlock_t qgroup_meta_rsv_lock;
+--- a/fs/btrfs/disk-io.c
++++ b/fs/btrfs/disk-io.c
+@@ -1217,6 +1217,7 @@ static void __setup_root(struct btrfs_ro
+ atomic_set(&root->log_batch, 0);
+ refcount_set(&root->refs, 1);
+ atomic_set(&root->will_be_snapshotted, 0);
++ atomic_set(&root->snapshot_force_cow, 0);
+ root->log_transid = 0;
+ root->log_transid_committed = -1;
+ root->last_log_commit = 0;
+--- a/fs/btrfs/inode.c
++++ b/fs/btrfs/inode.c
+@@ -1275,7 +1275,7 @@ static noinline int run_delalloc_nocow(s
+ u64 disk_num_bytes;
+ u64 ram_bytes;
+ int extent_type;
+- int ret, err;
++ int ret;
+ int type;
+ int nocow;
+ int check_prev = 1;
+@@ -1407,11 +1407,8 @@ next_slot:
+ * if there are pending snapshots for this root,
+ * we fall into common COW way.
+ */
+- if (!nolock) {
+- err = btrfs_start_write_no_snapshotting(root);
+- if (!err)
+- goto out_check;
+- }
++ if (!nolock && atomic_read(&root->snapshot_force_cow))
++ goto out_check;
+ /*
+ * force cow if csum exists in the range.
+ * this ensure that csum for a given extent are
+@@ -1420,9 +1417,6 @@ next_slot:
+ ret = csum_exist_in_range(fs_info, disk_bytenr,
+ num_bytes);
+ if (ret) {
+- if (!nolock)
+- btrfs_end_write_no_snapshotting(root);
+-
+ /*
+ * ret could be -EIO if the above fails to read
+ * metadata.
+@@ -1435,11 +1429,8 @@ next_slot:
+ WARN_ON_ONCE(nolock);
+ goto out_check;
+ }
+- if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr)) {
+- if (!nolock)
+- btrfs_end_write_no_snapshotting(root);
++ if (!btrfs_inc_nocow_writers(fs_info, disk_bytenr))
+ goto out_check;
+- }
+ nocow = 1;
+ } else if (extent_type == BTRFS_FILE_EXTENT_INLINE) {
+ extent_end = found_key.offset +
+@@ -1453,8 +1444,6 @@ next_slot:
+ out_check:
+ if (extent_end <= start) {
+ path->slots[0]++;
+- if (!nolock && nocow)
+- btrfs_end_write_no_snapshotting(root);
+ if (nocow)
+ btrfs_dec_nocow_writers(fs_info, disk_bytenr);
+ goto next_slot;
+@@ -1476,8 +1465,6 @@ out_check:
+ end, page_started, nr_written, 1,
+ NULL);
+ if (ret) {
+- if (!nolock && nocow)
+- btrfs_end_write_no_snapshotting(root);
+ if (nocow)
+ btrfs_dec_nocow_writers(fs_info,
+ disk_bytenr);
+@@ -1497,8 +1484,6 @@ out_check:
+ ram_bytes, BTRFS_COMPRESS_NONE,
+ BTRFS_ORDERED_PREALLOC);
+ if (IS_ERR(em)) {
+- if (!nolock && nocow)
+- btrfs_end_write_no_snapshotting(root);
+ if (nocow)
+ btrfs_dec_nocow_writers(fs_info,
+ disk_bytenr);
+@@ -1537,8 +1522,6 @@ out_check:
+ EXTENT_CLEAR_DATA_RESV,
+ PAGE_UNLOCK | PAGE_SET_PRIVATE2);
+
+- if (!nolock && nocow)
+- btrfs_end_write_no_snapshotting(root);
+ cur_offset = extent_end;
+
+ /*
+--- a/fs/btrfs/ioctl.c
++++ b/fs/btrfs/ioctl.c
+@@ -761,6 +761,7 @@ static int create_snapshot(struct btrfs_
+ struct btrfs_pending_snapshot *pending_snapshot;
+ struct btrfs_trans_handle *trans;
+ int ret;
++ bool snapshot_force_cow = false;
+
+ if (!test_bit(BTRFS_ROOT_REF_COWS, &root->state))
+ return -EINVAL;
+@@ -777,6 +778,11 @@ static int create_snapshot(struct btrfs_
+ goto free_pending;
+ }
+
++ /*
++ * Force new buffered writes to reserve space even when NOCOW is
++ * possible. This is to avoid later writeback (running dealloc) to
++ * fallback to COW mode and unexpectedly fail with ENOSPC.
++ */
+ atomic_inc(&root->will_be_snapshotted);
+ smp_mb__after_atomic();
+ /* wait for no snapshot writes */
+@@ -787,6 +793,14 @@ static int create_snapshot(struct btrfs_
+ if (ret)
+ goto dec_and_free;
+
++ /*
++ * All previous writes have started writeback in NOCOW mode, so now
++ * we force future writes to fallback to COW mode during snapshot
++ * creation.
++ */
++ atomic_inc(&root->snapshot_force_cow);
++ snapshot_force_cow = true;
++
+ btrfs_wait_ordered_extents(root, U64_MAX, 0, (u64)-1);
+
+ btrfs_init_block_rsv(&pending_snapshot->block_rsv,
+@@ -851,6 +865,8 @@ static int create_snapshot(struct btrfs_
+ fail:
+ btrfs_subvolume_release_metadata(fs_info, &pending_snapshot->block_rsv);
+ dec_and_free:
++ if (snapshot_force_cow)
++ atomic_dec(&root->snapshot_force_cow);
+ if (atomic_dec_and_test(&root->will_be_snapshotted))
+ wake_up_var(&root->will_be_snapshotted);
+ free_pending:
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Ilya Dryomov <idryomov@gmail.com>
+Date: Fri, 24 Aug 2018 15:32:43 +0200
+Subject: ceph: avoid a use-after-free in ceph_destroy_options()
+
+From: Ilya Dryomov <idryomov@gmail.com>
+
+[ Upstream commit 8aaff15168cfbc7c8980fdb0e8a585f1afe56ec0 ]
+
+syzbot reported a use-after-free in ceph_destroy_options(), called from
+ceph_mount(). The problem was that create_fs_client() consumed the opt
+pointer on some errors, but not on all of them. Make sure it always
+consumes both libceph and ceph options.
+
+Reported-by: syzbot+8ab6f1042021b4eed062@syzkaller.appspotmail.com
+Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
+Reviewed-by: "Yan, Zheng" <zyan@redhat.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/ceph/super.c | 16 +++++++++++-----
+ 1 file changed, 11 insertions(+), 5 deletions(-)
+
+--- a/fs/ceph/super.c
++++ b/fs/ceph/super.c
+@@ -603,6 +603,8 @@ static int extra_mon_dispatch(struct cep
+
+ /*
+ * create a new fs client
++ *
++ * Success or not, this function consumes @fsopt and @opt.
+ */
+ static struct ceph_fs_client *create_fs_client(struct ceph_mount_options *fsopt,
+ struct ceph_options *opt)
+@@ -610,17 +612,20 @@ static struct ceph_fs_client *create_fs_
+ struct ceph_fs_client *fsc;
+ int page_count;
+ size_t size;
+- int err = -ENOMEM;
++ int err;
+
+ fsc = kzalloc(sizeof(*fsc), GFP_KERNEL);
+- if (!fsc)
+- return ERR_PTR(-ENOMEM);
++ if (!fsc) {
++ err = -ENOMEM;
++ goto fail;
++ }
+
+ fsc->client = ceph_create_client(opt, fsc);
+ if (IS_ERR(fsc->client)) {
+ err = PTR_ERR(fsc->client);
+ goto fail;
+ }
++ opt = NULL; /* fsc->client now owns this */
+
+ fsc->client->extra_mon_dispatch = extra_mon_dispatch;
+ fsc->client->osdc.abort_on_full = true;
+@@ -678,6 +683,9 @@ fail_client:
+ ceph_destroy_client(fsc->client);
+ fail:
+ kfree(fsc);
++ if (opt)
++ ceph_destroy_options(opt);
++ destroy_mount_options(fsopt);
+ return ERR_PTR(err);
+ }
+
+@@ -1042,8 +1050,6 @@ static struct dentry *ceph_mount(struct
+ fsc = create_fs_client(fsopt, opt);
+ if (IS_ERR(fsc)) {
+ res = ERR_CAST(fsc);
+- destroy_mount_options(fsopt);
+- ceph_destroy_options(opt);
+ goto out_final;
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Fri, 31 Aug 2018 11:10:55 +0300
+Subject: cfg80211: fix a type issue in ieee80211_chandef_to_operating_class()
+
+From: Dan Carpenter <dan.carpenter@oracle.com>
+
+[ Upstream commit 8442938c3a2177ba16043b3a935f2c78266ad399 ]
+
+The "chandef->center_freq1" variable is a u32 but "freq" is a u16 so we
+are truncating away the high bits. I noticed this bug because in commit
+9cf0a0b4b64a ("cfg80211: Add support for 60GHz band channels 5 and 6")
+we made "freq <= 56160 + 2160 * 6" a valid requency when before it was
+only "freq <= 56160 + 2160 * 4" that was valid. It introduces a static
+checker warning:
+
+ net/wireless/util.c:1571 ieee80211_chandef_to_operating_class()
+ warn: always true condition '(freq <= 56160 + 2160 * 6) => (0-u16max <= 69120)'
+
+But really we probably shouldn't have been truncating the high bits
+away to begin with.
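+
+The arithmetic behind the warning: 56160 + 2160 * 6 = 69120, which is
+larger than U16_MAX (65535), so the "freq <= 69120" test is always true
+for a u16, and a real channel-6 center frequency of 69120 would have been
+truncated to 69120 - 65536 = 3584 when assigned to the u16.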
+
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/util.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/wireless/util.c
++++ b/net/wireless/util.c
+@@ -1374,7 +1374,7 @@ bool ieee80211_chandef_to_operating_clas
+ u8 *op_class)
+ {
+ u8 vht_opclass;
+- u16 freq = chandef->center_freq1;
++ u32 freq = chandef->center_freq1;
+
+ if (freq >= 2412 && freq <= 2472) {
+ if (chandef->width > NL80211_CHAN_WIDTH_40)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Stanislaw Gruszka <sgruszka@redhat.com>
+Date: Wed, 22 Aug 2018 13:52:21 +0200
+Subject: cfg80211: make wmm_rule part of the reg_rule structure
+
+From: Stanislaw Gruszka <sgruszka@redhat.com>
+
+[ Upstream commit 38cb87ee47fb825f6c9d645c019f75b3905c0ab2 ]
+
+Make wmm_rule be part of the reg_rule structure. This simplifies the
+code a lot at the cost of having bigger memory usage. However in most
+cases we have only few reg_rule's and when we do have many like in
+iwlwifi we do not save memory as it allocates a separate wmm_rule for
+each channel anyway.
+
+This also fixes a bug reported in various places where somewhere the
+pointers were corrupted and we ended up doing a null-dereference.
+
+Fixes: 230ebaa189af ("cfg80211: read wmm rules from regulatory database")
+Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
+[rephrase commit message slightly]
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c | 50 +----------
+ include/net/cfg80211.h | 4
+ include/net/regulatory.h | 4
+ net/mac80211/util.c | 8 -
+ net/wireless/nl80211.c | 10 +-
+ net/wireless/reg.c | 92 +++------------------
+ 6 files changed, 32 insertions(+), 136 deletions(-)
+
+--- a/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
++++ b/drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c
+@@ -877,15 +877,12 @@ iwl_parse_nvm_mcc_info(struct device *de
+ const u8 *nvm_chan = cfg->nvm_type == IWL_NVM_EXT ?
+ iwl_ext_nvm_channels : iwl_nvm_channels;
+ struct ieee80211_regdomain *regd, *copy_rd;
+- int size_of_regd, regd_to_copy, wmms_to_copy;
+- int size_of_wmms = 0;
++ int size_of_regd, regd_to_copy;
+ struct ieee80211_reg_rule *rule;
+- struct ieee80211_wmm_rule *wmm_rule, *d_wmm, *s_wmm;
+ struct regdb_ptrs *regdb_ptrs;
+ enum nl80211_band band;
+ int center_freq, prev_center_freq = 0;
+- int valid_rules = 0, n_wmms = 0;
+- int i;
++ int valid_rules = 0;
+ bool new_rule;
+ int max_num_ch = cfg->nvm_type == IWL_NVM_EXT ?
+ IWL_NVM_NUM_CHANNELS_EXT : IWL_NVM_NUM_CHANNELS;
+@@ -904,11 +901,7 @@ iwl_parse_nvm_mcc_info(struct device *de
+ sizeof(struct ieee80211_regdomain) +
+ num_of_ch * sizeof(struct ieee80211_reg_rule);
+
+- if (geo_info & GEO_WMM_ETSI_5GHZ_INFO)
+- size_of_wmms =
+- num_of_ch * sizeof(struct ieee80211_wmm_rule);
+-
+- regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++ regd = kzalloc(size_of_regd, GFP_KERNEL);
+ if (!regd)
+ return ERR_PTR(-ENOMEM);
+
+@@ -922,8 +915,6 @@ iwl_parse_nvm_mcc_info(struct device *de
+ regd->alpha2[0] = fw_mcc >> 8;
+ regd->alpha2[1] = fw_mcc & 0xff;
+
+- wmm_rule = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+ for (ch_idx = 0; ch_idx < num_of_ch; ch_idx++) {
+ ch_flags = (u16)__le32_to_cpup(channels + ch_idx);
+ band = (ch_idx < NUM_2GHZ_CHANNELS) ?
+@@ -977,26 +968,10 @@ iwl_parse_nvm_mcc_info(struct device *de
+ band == NL80211_BAND_2GHZ)
+ continue;
+
+- if (!reg_query_regdb_wmm(regd->alpha2, center_freq,
+- &regdb_ptrs[n_wmms].token, wmm_rule)) {
+- /* Add only new rules */
+- for (i = 0; i < n_wmms; i++) {
+- if (regdb_ptrs[i].token ==
+- regdb_ptrs[n_wmms].token) {
+- rule->wmm_rule = regdb_ptrs[i].rule;
+- break;
+- }
+- }
+- if (i == n_wmms) {
+- rule->wmm_rule = wmm_rule;
+- regdb_ptrs[n_wmms++].rule = wmm_rule;
+- wmm_rule++;
+- }
+- }
++ reg_query_regdb_wmm(regd->alpha2, center_freq, rule);
+ }
+
+ regd->n_reg_rules = valid_rules;
+- regd->n_wmm_rules = n_wmms;
+
+ /*
+ * Narrow down regdom for unused regulatory rules to prevent hole
+@@ -1005,28 +980,13 @@ iwl_parse_nvm_mcc_info(struct device *de
+ regd_to_copy = sizeof(struct ieee80211_regdomain) +
+ valid_rules * sizeof(struct ieee80211_reg_rule);
+
+- wmms_to_copy = sizeof(struct ieee80211_wmm_rule) * n_wmms;
+-
+- copy_rd = kzalloc(regd_to_copy + wmms_to_copy, GFP_KERNEL);
++ copy_rd = kzalloc(regd_to_copy, GFP_KERNEL);
+ if (!copy_rd) {
+ copy_rd = ERR_PTR(-ENOMEM);
+ goto out;
+ }
+
+ memcpy(copy_rd, regd, regd_to_copy);
+- memcpy((u8 *)copy_rd + regd_to_copy, (u8 *)regd + size_of_regd,
+- wmms_to_copy);
+-
+- d_wmm = (struct ieee80211_wmm_rule *)((u8 *)copy_rd + regd_to_copy);
+- s_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+-
+- for (i = 0; i < regd->n_reg_rules; i++) {
+- if (!regd->reg_rules[i].wmm_rule)
+- continue;
+-
+- copy_rd->reg_rules[i].wmm_rule = d_wmm +
+- (regd->reg_rules[i].wmm_rule - s_wmm);
+- }
+
+ out:
+ kfree(regdb_ptrs);
+--- a/include/net/cfg80211.h
++++ b/include/net/cfg80211.h
+@@ -4763,8 +4763,8 @@ const char *reg_initiator_name(enum nl80
+ *
+ * Return: 0 on success. -ENODATA.
+ */
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *ptr,
+- struct ieee80211_wmm_rule *rule);
++int reg_query_regdb_wmm(char *alpha2, int freq,
++ struct ieee80211_reg_rule *rule);
+
+ /*
+ * callbacks for asynchronous cfg80211 methods, notification
+--- a/include/net/regulatory.h
++++ b/include/net/regulatory.h
+@@ -217,15 +217,15 @@ struct ieee80211_wmm_rule {
+ struct ieee80211_reg_rule {
+ struct ieee80211_freq_range freq_range;
+ struct ieee80211_power_rule power_rule;
+- struct ieee80211_wmm_rule *wmm_rule;
++ struct ieee80211_wmm_rule wmm_rule;
+ u32 flags;
+ u32 dfs_cac_ms;
++ bool has_wmm;
+ };
+
+ struct ieee80211_regdomain {
+ struct rcu_head rcu_head;
+ u32 n_reg_rules;
+- u32 n_wmm_rules;
+ char alpha2[3];
+ enum nl80211_dfs_regions dfs_region;
+ struct ieee80211_reg_rule reg_rules[];
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1120,7 +1120,7 @@ void ieee80211_regulatory_limit_wmm_para
+ {
+ struct ieee80211_chanctx_conf *chanctx_conf;
+ const struct ieee80211_reg_rule *rrule;
+- struct ieee80211_wmm_ac *wmm_ac;
++ const struct ieee80211_wmm_ac *wmm_ac;
+ u16 center_freq = 0;
+
+ if (sdata->vif.type != NL80211_IFTYPE_AP &&
+@@ -1139,15 +1139,15 @@ void ieee80211_regulatory_limit_wmm_para
+
+ rrule = freq_reg_info(sdata->wdev.wiphy, MHZ_TO_KHZ(center_freq));
+
+- if (IS_ERR_OR_NULL(rrule) || !rrule->wmm_rule) {
++ if (IS_ERR_OR_NULL(rrule) || !rrule->has_wmm) {
+ rcu_read_unlock();
+ return;
+ }
+
+ if (sdata->vif.type == NL80211_IFTYPE_AP)
+- wmm_ac = &rrule->wmm_rule->ap[ac];
++ wmm_ac = &rrule->wmm_rule.ap[ac];
+ else
+- wmm_ac = &rrule->wmm_rule->client[ac];
++ wmm_ac = &rrule->wmm_rule.client[ac];
+ qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min);
+ qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max);
+ qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn);
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -667,13 +667,13 @@ static int nl80211_msg_put_wmm_rules(str
+ goto nla_put_failure;
+
+ if (nla_put_u16(msg, NL80211_WMMR_CW_MIN,
+- rule->wmm_rule->client[j].cw_min) ||
++ rule->wmm_rule.client[j].cw_min) ||
+ nla_put_u16(msg, NL80211_WMMR_CW_MAX,
+- rule->wmm_rule->client[j].cw_max) ||
++ rule->wmm_rule.client[j].cw_max) ||
+ nla_put_u8(msg, NL80211_WMMR_AIFSN,
+- rule->wmm_rule->client[j].aifsn) ||
++ rule->wmm_rule.client[j].aifsn) ||
+ nla_put_u8(msg, NL80211_WMMR_TXOP,
+- rule->wmm_rule->client[j].cot))
++ rule->wmm_rule.client[j].cot))
+ goto nla_put_failure;
+
+ nla_nest_end(msg, nl_wmm_rule);
+@@ -766,7 +766,7 @@ static int nl80211_msg_put_channel(struc
+ const struct ieee80211_reg_rule *rule =
+ freq_reg_info(wiphy, chan->center_freq);
+
+- if (!IS_ERR(rule) && rule->wmm_rule) {
++ if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) {
+ if (nl80211_msg_put_wmm_rules(msg, rule))
+ goto nla_put_failure;
+ }
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -425,35 +425,23 @@ static const struct ieee80211_regdomain
+ reg_copy_regd(const struct ieee80211_regdomain *src_regd)
+ {
+ struct ieee80211_regdomain *regd;
+- int size_of_regd, size_of_wmms;
++ int size_of_regd;
+ unsigned int i;
+- struct ieee80211_wmm_rule *d_wmm, *s_wmm;
+
+ size_of_regd =
+ sizeof(struct ieee80211_regdomain) +
+ src_regd->n_reg_rules * sizeof(struct ieee80211_reg_rule);
+- size_of_wmms = src_regd->n_wmm_rules *
+- sizeof(struct ieee80211_wmm_rule);
+
+- regd = kzalloc(size_of_regd + size_of_wmms, GFP_KERNEL);
++ regd = kzalloc(size_of_regd, GFP_KERNEL);
+ if (!regd)
+ return ERR_PTR(-ENOMEM);
+
+ memcpy(regd, src_regd, sizeof(struct ieee80211_regdomain));
+
+- d_wmm = (struct ieee80211_wmm_rule *)((u8 *)regd + size_of_regd);
+- s_wmm = (struct ieee80211_wmm_rule *)((u8 *)src_regd + size_of_regd);
+- memcpy(d_wmm, s_wmm, size_of_wmms);
+-
+- for (i = 0; i < src_regd->n_reg_rules; i++) {
++ for (i = 0; i < src_regd->n_reg_rules; i++)
+ memcpy(&regd->reg_rules[i], &src_regd->reg_rules[i],
+ sizeof(struct ieee80211_reg_rule));
+- if (!src_regd->reg_rules[i].wmm_rule)
+- continue;
+
+- regd->reg_rules[i].wmm_rule = d_wmm +
+- (src_regd->reg_rules[i].wmm_rule - s_wmm);
+- }
+ return regd;
+ }
+
+@@ -859,9 +847,10 @@ static bool valid_regdb(const u8 *data,
+ return true;
+ }
+
+-static void set_wmm_rule(struct ieee80211_wmm_rule *rule,
++static void set_wmm_rule(struct ieee80211_reg_rule *rrule,
+ struct fwdb_wmm_rule *wmm)
+ {
++ struct ieee80211_wmm_rule *rule = &rrule->wmm_rule;
+ unsigned int i;
+
+ for (i = 0; i < IEEE80211_NUM_ACS; i++) {
+@@ -875,11 +864,13 @@ static void set_wmm_rule(struct ieee8021
+ rule->ap[i].aifsn = wmm->ap[i].aifsn;
+ rule->ap[i].cot = 1000 * be16_to_cpu(wmm->ap[i].cot);
+ }
++
++ rrule->has_wmm = true;
+ }
+
+ static int __regdb_query_wmm(const struct fwdb_header *db,
+ const struct fwdb_country *country, int freq,
+- u32 *dbptr, struct ieee80211_wmm_rule *rule)
++ struct ieee80211_reg_rule *rule)
+ {
+ unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+@@ -900,8 +891,6 @@ static int __regdb_query_wmm(const struc
+ wmm_ptr = be16_to_cpu(rrule->wmm_ptr) << 2;
+ wmm = (void *)((u8 *)db + wmm_ptr);
+ set_wmm_rule(rule, wmm);
+- if (dbptr)
+- *dbptr = wmm_ptr;
+ return 0;
+ }
+ }
+@@ -909,8 +898,7 @@ static int __regdb_query_wmm(const struc
+ return -ENODATA;
+ }
+
+-int reg_query_regdb_wmm(char *alpha2, int freq, u32 *dbptr,
+- struct ieee80211_wmm_rule *rule)
++int reg_query_regdb_wmm(char *alpha2, int freq, struct ieee80211_reg_rule *rule)
+ {
+ const struct fwdb_header *hdr = regdb;
+ const struct fwdb_country *country;
+@@ -924,8 +912,7 @@ int reg_query_regdb_wmm(char *alpha2, in
+ country = &hdr->country[0];
+ while (country->coll_ptr) {
+ if (alpha2_equal(alpha2, country->alpha2))
+- return __regdb_query_wmm(regdb, country, freq, dbptr,
+- rule);
++ return __regdb_query_wmm(regdb, country, freq, rule);
+
+ country++;
+ }
+@@ -934,32 +921,13 @@ int reg_query_regdb_wmm(char *alpha2, in
+ }
+ EXPORT_SYMBOL(reg_query_regdb_wmm);
+
+-struct wmm_ptrs {
+- struct ieee80211_wmm_rule *rule;
+- u32 ptr;
+-};
+-
+-static struct ieee80211_wmm_rule *find_wmm_ptr(struct wmm_ptrs *wmm_ptrs,
+- u32 wmm_ptr, int n_wmms)
+-{
+- int i;
+-
+- for (i = 0; i < n_wmms; i++) {
+- if (wmm_ptrs[i].ptr == wmm_ptr)
+- return wmm_ptrs[i].rule;
+- }
+- return NULL;
+-}
+-
+ static int regdb_query_country(const struct fwdb_header *db,
+ const struct fwdb_country *country)
+ {
+ unsigned int ptr = be16_to_cpu(country->coll_ptr) << 2;
+ struct fwdb_collection *coll = (void *)((u8 *)db + ptr);
+ struct ieee80211_regdomain *regdom;
+- struct ieee80211_regdomain *tmp_rd;
+- unsigned int size_of_regd, i, n_wmms = 0;
+- struct wmm_ptrs *wmm_ptrs;
++ unsigned int size_of_regd, i;
+
+ size_of_regd = sizeof(struct ieee80211_regdomain) +
+ coll->n_rules * sizeof(struct ieee80211_reg_rule);
+@@ -968,12 +936,6 @@ static int regdb_query_country(const str
+ if (!regdom)
+ return -ENOMEM;
+
+- wmm_ptrs = kcalloc(coll->n_rules, sizeof(*wmm_ptrs), GFP_KERNEL);
+- if (!wmm_ptrs) {
+- kfree(regdom);
+- return -ENOMEM;
+- }
+-
+ regdom->n_reg_rules = coll->n_rules;
+ regdom->alpha2[0] = country->alpha2[0];
+ regdom->alpha2[1] = country->alpha2[1];
+@@ -1012,37 +974,11 @@ static int regdb_query_country(const str
+ 1000 * be16_to_cpu(rule->cac_timeout);
+ if (rule->len >= offsetofend(struct fwdb_rule, wmm_ptr)) {
+ u32 wmm_ptr = be16_to_cpu(rule->wmm_ptr) << 2;
+- struct ieee80211_wmm_rule *wmm_pos =
+- find_wmm_ptr(wmm_ptrs, wmm_ptr, n_wmms);
+- struct fwdb_wmm_rule *wmm;
+- struct ieee80211_wmm_rule *wmm_rule;
+-
+- if (wmm_pos) {
+- rrule->wmm_rule = wmm_pos;
+- continue;
+- }
+- wmm = (void *)((u8 *)db + wmm_ptr);
+- tmp_rd = krealloc(regdom, size_of_regd + (n_wmms + 1) *
+- sizeof(struct ieee80211_wmm_rule),
+- GFP_KERNEL);
+-
+- if (!tmp_rd) {
+- kfree(regdom);
+- kfree(wmm_ptrs);
+- return -ENOMEM;
+- }
+- regdom = tmp_rd;
+-
+- wmm_rule = (struct ieee80211_wmm_rule *)
+- ((u8 *)regdom + size_of_regd + n_wmms *
+- sizeof(struct ieee80211_wmm_rule));
+-
+- set_wmm_rule(wmm_rule, wmm);
+- wmm_ptrs[n_wmms].ptr = wmm_ptr;
+- wmm_ptrs[n_wmms++].rule = wmm_rule;
++ struct fwdb_wmm_rule *wmm = (void *)((u8 *)db + wmm_ptr);
++
++ set_wmm_rule(rrule, wmm);
+ }
+ }
+- kfree(wmm_ptrs);
+
+ return reg_schedule_apply(regdom);
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Arunk Khandavalli <akhandav@codeaurora.org>
+Date: Thu, 30 Aug 2018 00:40:16 +0300
+Subject: cfg80211: nl80211_update_ft_ies() to validate NL80211_ATTR_IE
+
+From: Arunk Khandavalli <akhandav@codeaurora.org>
+
+[ Upstream commit 4f0223bfe9c3e62d8f45a85f1ef1b18a8a263ef9 ]
+
+nl80211_update_ft_ies() tried to validate NL80211_ATTR_IE with
+is_valid_ie_attr() before dereferencing it, but that helper function
+returns true in the case of a NULL pointer (i.e., attribute not included).
+This can result in dereferencing a NULL pointer. Fix that by explicitly
+checking that NL80211_ATTR_IE is included.
+
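+A minimal stand-alone sketch of the pattern (hypothetical helper and
+attribute names, not the kernel code) shows why the explicit presence
+check is needed before the validity check:
+
+  #include <stdbool.h>
+  #include <stddef.h>
+  #include <stdio.h>
+
+  struct attr { const unsigned char *data; size_t len; };
+
+  /* Like is_valid_ie_attr(): a missing (NULL) attribute counts as valid. */
+  static bool is_valid_attr(const struct attr *a)
+  {
+          return !a || a->len <= 255;
+  }
+
+  static int update_ft_ies(const struct attr *ie)
+  {
+          /* The fix: reject a missing attribute before dereferencing it. */
+          if (!ie || !is_valid_attr(ie))
+                  return -1;
+          printf("IE length: %zu\n", ie->len);
+          return 0;
+  }
+
+  int main(void)
+  {
+          struct attr ie = { (const unsigned char *)"\x01\x02", 2 };
+
+          update_ft_ies(&ie);     /* prints the length */
+          update_ft_ies(NULL);    /* rejected instead of crashing */
+          return 0;
+  }
+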
+Fixes: 355199e02b83 ("cfg80211: Extend support for IEEE 802.11r Fast BSS Transition")
+Signed-off-by: Arunk Khandavalli <akhandav@codeaurora.org>
+Signed-off-by: Jouni Malinen <jouni@codeaurora.org>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/nl80211.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -12099,6 +12099,7 @@ static int nl80211_update_ft_ies(struct
+ return -EOPNOTSUPP;
+
+ if (!info->attrs[NL80211_ATTR_MDID] ||
++ !info->attrs[NL80211_ATTR_IE] ||
+ !is_valid_ie_attr(info->attrs[NL80211_ATTR_IE]))
+ return -EINVAL;
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Mon, 18 Jun 2018 09:29:57 +0200
+Subject: cfg80211: remove division by size of sizeof(struct ieee80211_wmm_rule)
+
+From: Johannes Berg <johannes.berg@intel.com>
+
+[ Upstream commit 8a54d8fc160e67ad485d95a0322ce1221f80770a ]
+
+Pointer arithmetic already adjusts by the size of the struct,
+so the sizeof() calculation is wrong. This is basically the
+same as Colin King's patch for similar code in the iwlwifi
+driver.
+
+Fixes: 230ebaa189af ("cfg80211: read wmm rules from regulatory database")
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/reg.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/net/wireless/reg.c
++++ b/net/wireless/reg.c
+@@ -452,8 +452,7 @@ reg_copy_regd(const struct ieee80211_reg
+ continue;
+
+ regd->reg_rules[i].wmm_rule = d_wmm +
+- (src_regd->reg_rules[i].wmm_rule - s_wmm) /
+- sizeof(struct ieee80211_wmm_rule);
++ (src_regd->reg_rules[i].wmm_rule - s_wmm);
+ }
+ return regd;
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Sudeep Holla <sudeep.holla@arm.com>
+Date: Thu, 6 Sep 2018 16:10:39 +0100
+Subject: firmware: arm_scmi: fix divide by zero when sustained_perf_level is zero
+
+From: Sudeep Holla <sudeep.holla@arm.com>
+
+[ Upstream commit 96d529bac562574600eda85726fcfa3eef6dde8e ]
+
+Firmware can provide zero as values for sustained performance level and
+corresponding sustained frequency in kHz in order to hide the actual
+frequencies and provide only abstract values. This may end up in a
+divide-by-zero scenario, resulting in a kernel panic.
+
+Let's set the multiplication factor to one if either one or both of them
+(sustained_perf_level and sustained_freq) are set to zero.
+
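+A stand-alone sketch of the guarded computation (illustrative only, with
+made-up values):
+
+  #include <stdio.h>
+
+  static unsigned int mult_factor(unsigned int sustained_freq_khz,
+                                  unsigned int sustained_perf_level)
+  {
+          /* CPUFreq works in kHz, hence the default factor of 1000 */
+          if (!sustained_freq_khz || !sustained_perf_level)
+                  return 1000;
+
+          return (sustained_freq_khz * 1000) / sustained_perf_level;
+  }
+
+  int main(void)
+  {
+          printf("%u\n", mult_factor(0, 0));            /* 1000, no division by zero */
+          printf("%u\n", mult_factor(2000000, 1000));   /* 2000000 */
+          return 0;
+  }
+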
+Fixes: a9e3fbfaa0ff ("firmware: arm_scmi: add initial support for performance protocol")
+Reported-by: Ionela Voinescu <ionela.voinescu@arm.com>
+Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
+Signed-off-by: Olof Johansson <olof@lixom.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/firmware/arm_scmi/perf.c | 8 +++++++-
+ 1 file changed, 7 insertions(+), 1 deletion(-)
+
+--- a/drivers/firmware/arm_scmi/perf.c
++++ b/drivers/firmware/arm_scmi/perf.c
+@@ -166,7 +166,13 @@ scmi_perf_domain_attributes_get(const st
+ le32_to_cpu(attr->sustained_freq_khz);
+ dom_info->sustained_perf_level =
+ le32_to_cpu(attr->sustained_perf_level);
+- dom_info->mult_factor = (dom_info->sustained_freq_khz * 1000) /
++ if (!dom_info->sustained_freq_khz ||
++ !dom_info->sustained_perf_level)
++ /* CPUFreq converts to kHz, hence default 1000 */
++ dom_info->mult_factor = 1000;
++ else
++ dom_info->mult_factor =
++ (dom_info->sustained_freq_khz * 1000) /
+ dom_info->sustained_perf_level;
+ memcpy(dom_info->name, attr->name, SCMI_MAX_STR_SIZE);
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Jon Kuhn <jkuhn@barracuda.com>
+Date: Mon, 9 Jul 2018 14:33:14 +0000
+Subject: fs/cifs: don't translate SFM_SLASH (U+F026) to backslash
+
+From: Jon Kuhn <jkuhn@barracuda.com>
+
+[ Upstream commit c15e3f19a6d5c89b1209dc94b40e568177cb0921 ]
+
+When a Mac client saves an item containing a backslash to a file server
+the backslash is represented in the CIFS/SMB protocol as U+F026.
+Before this change, listing a directory containing an item with a
+backslash in its name will return that item with the backslash
+represented with a true backslash character (U+005C) because
+convert_sfm_character mapped U+F026 to U+005C when interpreting the
+CIFS/SMB protocol response. However, attempting to open or stat the
+path using a true backslash will result in an error because
+convert_to_sfm_char does not map U+005C back to U+F026 causing the
+CIFS/SMB request to be made with the backslash represented as U+005C.
+
+This change simply prevents the U+F026 to U+005C conversion from
+happening. This is analogous to how the code does not do any
+translation of UNI_SLASH (U+F000).
+
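+A tiny sketch of the round-trip asymmetry being removed (hypothetical
+helpers working on single code points; the real code walks UTF-16
+strings):
+
+  #include <stdio.h>
+
+  #define SFM_SLASH 0xF026
+
+  /* Old wire -> local mapping: turned SFM_SLASH into a real backslash. */
+  static unsigned short from_wire_old(unsigned short c)
+  {
+          return c == SFM_SLASH ? '\\' : c;
+  }
+
+  /* local -> wire mapping: never produces SFM_SLASH again, so the
+   * follow-up request goes out with a plain backslash and fails. */
+  static unsigned short to_wire(unsigned short c)
+  {
+          return c;
+  }
+
+  int main(void)
+  {
+          unsigned short wire = SFM_SLASH;
+          unsigned short local = from_wire_old(wire);
+
+          printf("wire 0x%04x -> local 0x%04x -> wire 0x%04x\n",
+                 wire, local, to_wire(local));  /* does not return to 0xf026 */
+          return 0;
+  }
+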
+Signed-off-by: Jon Kuhn <jkuhn@barracuda.com>
+Signed-off-by: Steve French <stfrench@microsoft.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/cifs/cifs_unicode.c | 3 ---
+ 1 file changed, 3 deletions(-)
+
+--- a/fs/cifs/cifs_unicode.c
++++ b/fs/cifs/cifs_unicode.c
+@@ -105,9 +105,6 @@ convert_sfm_char(const __u16 src_char, c
+ case SFM_LESSTHAN:
+ *target = '<';
+ break;
+- case SFM_SLASH:
+- *target = '\\';
+- break;
+ case SFM_SPACE:
+ *target = ' ';
+ break;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Amir Goldstein <amir73il@gmail.com>
+Date: Sat, 1 Sep 2018 09:40:01 +0300
+Subject: fsnotify: fix ignore mask logic in fsnotify()
+
+From: Amir Goldstein <amir73il@gmail.com>
+
+[ Upstream commit 9bdda4e9cf2dcecb60a0683b10ffb8cd7e5f2f45 ]
+
+Commit 92183a42898d ("fsnotify: fix ignore mask logic in
+send_to_group()") acknowledges the use case of ignoring an event on
+an inode mark, because of an ignore mask on a mount mark of the same
+group (i.e. I want to get all events on this file, except for the events
+that came from that mount).
+
+This change depends on correctly merging the inode marks and mount marks
+group lists, so that the mount mark ignore mask would be tested in
+send_to_group(). Alas, the merging of the lists did not take into
+account the case where the event in question is not in the mask of any of
+the mount marks.
+
+To fix this, completely remove the tests for inode and mount event masks
+from the lists merging code.
+
+Fixes: 92183a42898d ("fsnotify: fix ignore mask logic in send_to_group")
+Signed-off-by: Amir Goldstein <amir73il@gmail.com>
+Signed-off-by: Jan Kara <jack@suse.cz>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ fs/notify/fsnotify.c | 13 +++----------
+ 1 file changed, 3 insertions(+), 10 deletions(-)
+
+--- a/fs/notify/fsnotify.c
++++ b/fs/notify/fsnotify.c
+@@ -351,16 +351,9 @@ int fsnotify(struct inode *to_tell, __u3
+
+ iter_info.srcu_idx = srcu_read_lock(&fsnotify_mark_srcu);
+
+- if ((mask & FS_MODIFY) ||
+- (test_mask & to_tell->i_fsnotify_mask)) {
+- iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+- fsnotify_first_mark(&to_tell->i_fsnotify_marks);
+- }
+-
+- if (mnt && ((mask & FS_MODIFY) ||
+- (test_mask & mnt->mnt_fsnotify_mask))) {
+- iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
+- fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++ iter_info.marks[FSNOTIFY_OBJ_TYPE_INODE] =
++ fsnotify_first_mark(&to_tell->i_fsnotify_marks);
++ if (mnt) {
+ iter_info.marks[FSNOTIFY_OBJ_TYPE_VFSMOUNT] =
+ fsnotify_first_mark(&mnt->mnt_fsnotify_marks);
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Michael Hennerich <michael.hennerich@analog.com>
+Date: Mon, 13 Aug 2018 15:57:44 +0200
+Subject: gpio: adp5588: Fix sleep-in-atomic-context bug
+
+From: Michael Hennerich <michael.hennerich@analog.com>
+
+[ Upstream commit 6537886cdc9a637711fd6da980dbb87c2c87c9aa ]
+
+This fixes:
+[BUG] gpio: gpio-adp5588: A possible sleep-in-atomic-context bug
+ in adp5588_gpio_write()
+[BUG] gpio: gpio-adp5588: A possible sleep-in-atomic-context bug
+ in adp5588_gpio_direction_input()
+
+Reported-by: Jia-Ju Bai <baijiaju1990@gmail.com>
+Signed-off-by: Michael Hennerich <michael.hennerich@analog.com>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpio/gpio-adp5588.c | 24 ++++++++++++++++++++----
+ 1 file changed, 20 insertions(+), 4 deletions(-)
+
+--- a/drivers/gpio/gpio-adp5588.c
++++ b/drivers/gpio/gpio-adp5588.c
+@@ -41,6 +41,8 @@ struct adp5588_gpio {
+ uint8_t int_en[3];
+ uint8_t irq_mask[3];
+ uint8_t irq_stat[3];
++ uint8_t int_input_en[3];
++ uint8_t int_lvl_cached[3];
+ };
+
+ static int adp5588_gpio_read(struct i2c_client *client, u8 reg)
+@@ -173,12 +175,28 @@ static void adp5588_irq_bus_sync_unlock(
+ struct adp5588_gpio *dev = irq_data_get_irq_chip_data(d);
+ int i;
+
+- for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++)
++ for (i = 0; i <= ADP5588_BANK(ADP5588_MAXGPIO); i++) {
++ if (dev->int_input_en[i]) {
++ mutex_lock(&dev->lock);
++ dev->dir[i] &= ~dev->int_input_en[i];
++ dev->int_input_en[i] = 0;
++ adp5588_gpio_write(dev->client, GPIO_DIR1 + i,
++ dev->dir[i]);
++ mutex_unlock(&dev->lock);
++ }
++
++ if (dev->int_lvl_cached[i] != dev->int_lvl[i]) {
++ dev->int_lvl_cached[i] = dev->int_lvl[i];
++ adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + i,
++ dev->int_lvl[i]);
++ }
++
+ if (dev->int_en[i] ^ dev->irq_mask[i]) {
+ dev->int_en[i] = dev->irq_mask[i];
+ adp5588_gpio_write(dev->client, GPIO_INT_EN1 + i,
+ dev->int_en[i]);
+ }
++ }
+
+ mutex_unlock(&dev->irq_lock);
+ }
+@@ -221,9 +239,7 @@ static int adp5588_irq_set_type(struct i
+ else
+ return -EINVAL;
+
+- adp5588_gpio_direction_input(&dev->gpio_chip, gpio);
+- adp5588_gpio_write(dev->client, GPIO_INT_LVL1 + bank,
+- dev->int_lvl[bank]);
++ dev->int_input_en[bank] |= bit;
+
+ return 0;
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Alexey Khoroshilov <khoroshilov@ispras.ru>
+Date: Tue, 28 Aug 2018 23:40:26 +0300
+Subject: gpio: dwapb: Fix error handling in dwapb_gpio_probe()
+
+From: Alexey Khoroshilov <khoroshilov@ispras.ru>
+
+[ Upstream commit a618cf4800970d260871c159b7eec014a1da2e81 ]
+
+If dwapb_gpio_add_port() fails in dwapb_gpio_probe(),
+gpio->clk is left undisabled.
+
+Found by Linux Driver Verification project (linuxtesting.org).
+
+Signed-off-by: Alexey Khoroshilov <khoroshilov@ispras.ru>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpio/gpio-dwapb.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/gpio/gpio-dwapb.c
++++ b/drivers/gpio/gpio-dwapb.c
+@@ -726,6 +726,7 @@ static int dwapb_gpio_probe(struct platf
+ out_unregister:
+ dwapb_gpio_unregister(gpio);
+ dwapb_irq_teardown(gpio);
++ clk_disable_unprepare(gpio->clk);
+
+ return err;
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Vincent Whitchurch <vincent.whitchurch@axis.com>
+Date: Fri, 31 Aug 2018 09:04:18 +0200
+Subject: gpio: Fix crash due to registration race
+
+From: Vincent Whitchurch <vincent.whitchurch@axis.com>
+
+[ Upstream commit d49b48f088c323dbacae44dfbe56d9c985c8a2a1 ]
+
+gpiochip_add_data_with_key() adds the gpiochip to the gpio_devices list
+before of_gpiochip_add() is called, but it's only the latter which sets
+the ->of_xlate function pointer. gpiochip_find() can be called by
+someone else between these two actions, and it can find the chip and
+call of_gpiochip_match_node_and_xlate() which leads to the following
+crash due to a NULL ->of_xlate().
+
+ Unhandled prefetch abort: page domain fault (0x01b) at 0x00000000
+ Modules linked in: leds_gpio(+) gpio_generic(+)
+ CPU: 0 PID: 830 Comm: insmod Not tainted 4.18.0+ #43
+ Hardware name: ARM-Versatile Express
+ PC is at (null)
+ LR is at of_gpiochip_match_node_and_xlate+0x2c/0x38
+ Process insmod (pid: 830, stack limit = 0x(ptrval))
+ (of_gpiochip_match_node_and_xlate) from (gpiochip_find+0x48/0x84)
+ (gpiochip_find) from (of_get_named_gpiod_flags+0xa8/0x238)
+ (of_get_named_gpiod_flags) from (gpiod_get_from_of_node+0x2c/0xc8)
+ (gpiod_get_from_of_node) from (devm_fwnode_get_index_gpiod_from_child+0xb8/0x144)
+ (devm_fwnode_get_index_gpiod_from_child) from (gpio_led_probe+0x208/0x3c4 [leds_gpio])
+ (gpio_led_probe [leds_gpio]) from (platform_drv_probe+0x48/0x9c)
+ (platform_drv_probe) from (really_probe+0x1d0/0x3d4)
+ (really_probe) from (driver_probe_device+0x78/0x1c0)
+ (driver_probe_device) from (__driver_attach+0x120/0x13c)
+ (__driver_attach) from (bus_for_each_dev+0x68/0xb4)
+ (bus_for_each_dev) from (bus_add_driver+0x1a8/0x268)
+ (bus_add_driver) from (driver_register+0x78/0x10c)
+ (driver_register) from (do_one_initcall+0x54/0x1fc)
+ (do_one_initcall) from (do_init_module+0x64/0x1f4)
+ (do_init_module) from (load_module+0x2198/0x26ac)
+ (load_module) from (sys_finit_module+0xe0/0x110)
+ (sys_finit_module) from (ret_fast_syscall+0x0/0x54)
+
+One way to fix this would be to rework the hairy registration sequence
+in gpiochip_add_data_with_key(), but since I'd probably introduce a
+couple of new bugs if I attempted that, simply add a check for a
+non-NULL of_xlate function pointer in
+of_gpiochip_match_node_and_xlate(). This works since the driver looking
+for the gpio will simply fail to find it, defer its probe, and be
+reprobed once the driver which is registering the gpiochip has fully
+completed its probe.
+
+Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpio/gpiolib-of.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/gpio/gpiolib-of.c
++++ b/drivers/gpio/gpiolib-of.c
+@@ -31,6 +31,7 @@ static int of_gpiochip_match_node_and_xl
+ struct of_phandle_args *gpiospec = data;
+
+ return chip->gpiodev->dev.of_node == gpiospec->np &&
++ chip->of_xlate &&
+ chip->of_xlate(chip, gpiospec, NULL) >= 0;
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Hans de Goede <hdegoede@redhat.com>
+Date: Tue, 14 Aug 2018 16:07:03 +0200
+Subject: gpiolib-acpi: Register GpioInt ACPI event handlers from a late_initcall
+
+From: Hans de Goede <hdegoede@redhat.com>
+
+[ Upstream commit 78d3a92edbfb02e8cb83173cad84c3f2d5e1f070 ]
+
+GpioInt ACPI event handlers may see their IRQ triggered immediately
+after requesting the IRQ (esp. level triggered ones). This means that they
+may run before any other (builtin) drivers have had a chance to register
+their OpRegion handlers, leading to errors like this:
+
+[ 1.133274] ACPI Error: No handler for Region [PMOP] ((____ptrval____)) [UserDefinedRegion] (20180531/evregion-132)
+[ 1.133286] ACPI Error: Region UserDefinedRegion (ID=141) has no handler (20180531/exfldio-265)
+[ 1.133297] ACPI Error: Method parse/execution failed \_SB.GPO2._L01, AE_NOT_EXIST (20180531/psparse-516)
+
+We already defer the manual initial trigger of edge-triggered interrupts
+by running it from a late_initcall handler. This commit replaces that with
+deferring the entire acpi_gpiochip_request_interrupts() call until then,
+fixing the problem of some OpRegions not being registered yet.
+
+Note that this removes the need to have a list of edge-triggered handlers
+which need to run: since the entire acpi_gpiochip_request_interrupts() call
+is now delayed, acpi_gpiochip_request_interrupt() can call these directly
+now.
+
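+The deferral pattern itself is simple; a stand-alone sketch of the idea
+(hypothetical names, single-threaded, so no locking):
+
+  #include <stdbool.h>
+  #include <stdio.h>
+
+  #define MAX_DEFERRED 8
+
+  static int deferred[MAX_DEFERRED];
+  static int n_deferred;
+  static bool late_init_done;
+
+  static void do_request(int id) { printf("request %d handled\n", id); }
+
+  static void request(int id)
+  {
+          if (!late_init_done && n_deferred < MAX_DEFERRED) {
+                  deferred[n_deferred++] = id;    /* too early: defer */
+                  return;
+          }
+          do_request(id);
+  }
+
+  static void late_init(void)
+  {
+          for (int i = 0; i < n_deferred; i++)
+                  do_request(deferred[i]);        /* replay deferred requests */
+          n_deferred = 0;
+          late_init_done = true;
+  }
+
+  int main(void)
+  {
+          request(1);     /* deferred */
+          request(2);     /* deferred */
+          late_init();    /* handles 1 and 2 */
+          request(3);     /* handled immediately */
+          return 0;
+  }
+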
+Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
+Signed-off-by: Hans de Goede <hdegoede@redhat.com>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpio/gpiolib-acpi.c | 84 +++++++++++++++++++++++++-------------------
+ 1 file changed, 49 insertions(+), 35 deletions(-)
+
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -25,7 +25,6 @@
+
+ struct acpi_gpio_event {
+ struct list_head node;
+- struct list_head initial_sync_list;
+ acpi_handle handle;
+ unsigned int pin;
+ unsigned int irq;
+@@ -49,10 +48,19 @@ struct acpi_gpio_chip {
+ struct mutex conn_lock;
+ struct gpio_chip *chip;
+ struct list_head events;
++ struct list_head deferred_req_irqs_list_entry;
+ };
+
+-static LIST_HEAD(acpi_gpio_initial_sync_list);
+-static DEFINE_MUTEX(acpi_gpio_initial_sync_list_lock);
++/*
++ * For gpiochips which call acpi_gpiochip_request_interrupts() before late_init
++ * (so builtin drivers) we register the ACPI GpioInt event handlers from a
++ * late_initcall_sync handler, so that other builtin drivers can register their
++ * OpRegions before the event handlers can run. This list contains gpiochips
++ * for which the acpi_gpiochip_request_interrupts() has been deferred.
++ */
++static DEFINE_MUTEX(acpi_gpio_deferred_req_irqs_lock);
++static LIST_HEAD(acpi_gpio_deferred_req_irqs_list);
++static bool acpi_gpio_deferred_req_irqs_done;
+
+ static int acpi_gpiochip_find(struct gpio_chip *gc, void *data)
+ {
+@@ -89,21 +97,6 @@ static struct gpio_desc *acpi_get_gpiod(
+ return gpiochip_get_desc(chip, pin);
+ }
+
+-static void acpi_gpio_add_to_initial_sync_list(struct acpi_gpio_event *event)
+-{
+- mutex_lock(&acpi_gpio_initial_sync_list_lock);
+- list_add(&event->initial_sync_list, &acpi_gpio_initial_sync_list);
+- mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+-static void acpi_gpio_del_from_initial_sync_list(struct acpi_gpio_event *event)
+-{
+- mutex_lock(&acpi_gpio_initial_sync_list_lock);
+- if (!list_empty(&event->initial_sync_list))
+- list_del_init(&event->initial_sync_list);
+- mutex_unlock(&acpi_gpio_initial_sync_list_lock);
+-}
+-
+ static irqreturn_t acpi_gpio_irq_handler(int irq, void *data)
+ {
+ struct acpi_gpio_event *event = data;
+@@ -229,7 +222,6 @@ static acpi_status acpi_gpiochip_request
+ event->irq = irq;
+ event->pin = pin;
+ event->desc = desc;
+- INIT_LIST_HEAD(&event->initial_sync_list);
+
+ ret = request_threaded_irq(event->irq, NULL, handler, irqflags,
+ "ACPI:Event", event);
+@@ -251,10 +243,9 @@ static acpi_status acpi_gpiochip_request
+ * may refer to OperationRegions from other (builtin) drivers which
+ * may be probed after us.
+ */
+- if (handler == acpi_gpio_irq_handler &&
+- (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
+- ((irqflags & IRQF_TRIGGER_FALLING) && value == 0)))
+- acpi_gpio_add_to_initial_sync_list(event);
++ if (((irqflags & IRQF_TRIGGER_RISING) && value == 1) ||
++ ((irqflags & IRQF_TRIGGER_FALLING) && value == 0))
++ handler(event->irq, event);
+
+ return AE_OK;
+
+@@ -283,6 +274,7 @@ void acpi_gpiochip_request_interrupts(st
+ struct acpi_gpio_chip *acpi_gpio;
+ acpi_handle handle;
+ acpi_status status;
++ bool defer;
+
+ if (!chip->parent || !chip->to_irq)
+ return;
+@@ -295,6 +287,16 @@ void acpi_gpiochip_request_interrupts(st
+ if (ACPI_FAILURE(status))
+ return;
+
++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++ defer = !acpi_gpio_deferred_req_irqs_done;
++ if (defer)
++ list_add(&acpi_gpio->deferred_req_irqs_list_entry,
++ &acpi_gpio_deferred_req_irqs_list);
++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
++ if (defer)
++ return;
++
+ acpi_walk_resources(handle, "_AEI",
+ acpi_gpiochip_request_interrupt, acpi_gpio);
+ }
+@@ -325,11 +327,14 @@ void acpi_gpiochip_free_interrupts(struc
+ if (ACPI_FAILURE(status))
+ return;
+
++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++ if (!list_empty(&acpi_gpio->deferred_req_irqs_list_entry))
++ list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
++
+ list_for_each_entry_safe_reverse(event, ep, &acpi_gpio->events, node) {
+ struct gpio_desc *desc;
+
+- acpi_gpio_del_from_initial_sync_list(event);
+-
+ if (irqd_is_wakeup_set(irq_get_irq_data(event->irq)))
+ disable_irq_wake(event->irq);
+
+@@ -1049,6 +1054,7 @@ void acpi_gpiochip_add(struct gpio_chip
+
+ acpi_gpio->chip = chip;
+ INIT_LIST_HEAD(&acpi_gpio->events);
++ INIT_LIST_HEAD(&acpi_gpio->deferred_req_irqs_list_entry);
+
+ status = acpi_attach_data(handle, acpi_gpio_chip_dh, acpi_gpio);
+ if (ACPI_FAILURE(status)) {
+@@ -1195,20 +1201,28 @@ bool acpi_can_fallback_to_crs(struct acp
+ return con_id == NULL;
+ }
+
+-/* Sync the initial state of handlers after all builtin drivers have probed */
+-static int acpi_gpio_initial_sync(void)
++/* Run deferred acpi_gpiochip_request_interrupts() */
++static int acpi_gpio_handle_deferred_request_interrupts(void)
+ {
+- struct acpi_gpio_event *event, *ep;
++ struct acpi_gpio_chip *acpi_gpio, *tmp;
++
++ mutex_lock(&acpi_gpio_deferred_req_irqs_lock);
++ list_for_each_entry_safe(acpi_gpio, tmp,
++ &acpi_gpio_deferred_req_irqs_list,
++ deferred_req_irqs_list_entry) {
++ acpi_handle handle;
+
+- mutex_lock(&acpi_gpio_initial_sync_list_lock);
+- list_for_each_entry_safe(event, ep, &acpi_gpio_initial_sync_list,
+- initial_sync_list) {
+- acpi_evaluate_object(event->handle, NULL, NULL, NULL);
+- list_del_init(&event->initial_sync_list);
++ handle = ACPI_HANDLE(acpi_gpio->chip->parent);
++ acpi_walk_resources(handle, "_AEI",
++ acpi_gpiochip_request_interrupt, acpi_gpio);
++
++ list_del_init(&acpi_gpio->deferred_req_irqs_list_entry);
+ }
+- mutex_unlock(&acpi_gpio_initial_sync_list_lock);
++
++ acpi_gpio_deferred_req_irqs_done = true;
++ mutex_unlock(&acpi_gpio_deferred_req_irqs_lock);
+
+ return 0;
+ }
+ /* We must use _sync so that this runs after the first deferred_probe run */
+-late_initcall_sync(acpi_gpio_initial_sync);
++late_initcall_sync(acpi_gpio_handle_deferred_request_interrupts);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Date: Mon, 13 Aug 2018 19:00:27 +0300
+Subject: gpiolib: acpi: Switch to cansleep version of GPIO library call
+
+From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+
+[ Upstream commit 993b9bc5c47fda86f8ab4e53d68c6fea5ff2764a ]
+
+The commit ca876c7483b6
+
+ ("gpiolib-acpi: make sure we trigger edge events at least once on boot")
+
+added an initial value check for the pin which is about to be locked as
+an IRQ. Unfortunately, not all GPIO drivers can do that atomically. Thus,
+switch to the cansleep version of the call. Otherwise we have a warning:
+
+...
+ WARNING: CPU: 2 PID: 1408 at drivers/gpio/gpiolib.c:2883 gpiod_get_value+0x46/0x50
+...
+ RIP: 0010:gpiod_get_value+0x46/0x50
+...
+
+The change tested on Intel Broxton with Whiskey Cove PMIC GPIO controller.
+
+Fixes: ca876c7483b6 ("gpiolib-acpi: make sure we trigger edge events at least once on boot")
+Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
+Cc: Hans de Goede <hdegoede@redhat.com>
+Cc: Benjamin Tissoires <benjamin.tissoires@redhat.com>
+Acked-by: Mika Westerberg <mika.westerberg@linux.intel.com>
+Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpio/gpiolib-acpi.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpio/gpiolib-acpi.c
++++ b/drivers/gpio/gpiolib-acpi.c
+@@ -186,7 +186,7 @@ static acpi_status acpi_gpiochip_request
+
+ gpiod_direction_input(desc);
+
+- value = gpiod_get_value(desc);
++ value = gpiod_get_value_cansleep(desc);
+
+ ret = gpiochip_lock_as_irq(chip, pin);
+ if (ret) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Masahiro Yamada <yamada.masahiro@socionext.com>
+Date: Fri, 31 Aug 2018 23:30:48 +0900
+Subject: i2c: uniphier-f: issue STOP only for last message or I2C_M_STOP
+
+From: Masahiro Yamada <yamada.masahiro@socionext.com>
+
+[ Upstream commit 4c85609b08c4761eca0a40fd7beb06bc650f252d ]
+
+This driver currently emits a STOP if the next message is not
+I2C_M_RD. It should not do so because it disturbs the I2C_RDWR
+ioctl, where read/write transactions are combined without a STOP
+in between.
+
+Issue STOP only when the message is the last one _or_ flagged with
+I2C_M_STOP.
+
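+The resulting per-message decision can be illustrated with a small
+stand-alone sketch (I2C_M_STOP value as in the uapi header, the rest is
+made up for illustration):
+
+  #include <stdbool.h>
+  #include <stdio.h>
+
+  #define I2C_M_STOP 0x8000
+
+  struct msg { unsigned short flags; };
+
+  int main(void)
+  {
+          struct msg msgs[3] = { { 0 }, { I2C_M_STOP }, { 0 } };
+          struct msg *msg, *emsg = msgs + 3;
+
+          for (msg = msgs; msg < emsg; msg++) {
+                  /* STOP only for the last message or when I2C_M_STOP is set */
+                  bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+
+                  printf("msg %d: stop=%d\n", (int)(msg - msgs), stop);
+          }
+          return 0;
+  }
+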
+Fixes: 6a62974b667f ("i2c: uniphier_f: add UniPhier FIFO-builtin I2C driver")
+Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/i2c/busses/i2c-uniphier-f.c | 7 ++-----
+ 1 file changed, 2 insertions(+), 5 deletions(-)
+
+--- a/drivers/i2c/busses/i2c-uniphier-f.c
++++ b/drivers/i2c/busses/i2c-uniphier-f.c
+@@ -401,11 +401,8 @@ static int uniphier_fi2c_master_xfer(str
+ return ret;
+
+ for (msg = msgs; msg < emsg; msg++) {
+- /* If next message is read, skip the stop condition */
+- bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+- /* but, force it if I2C_M_STOP is set */
+- if (msg->flags & I2C_M_STOP)
+- stop = true;
++ /* Emit STOP if it is the last message or I2C_M_STOP is set. */
++ bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+
+ ret = uniphier_fi2c_master_xfer_one(adap, msg, stop);
+ if (ret)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Masahiro Yamada <yamada.masahiro@socionext.com>
+Date: Fri, 31 Aug 2018 23:30:47 +0900
+Subject: i2c: uniphier: issue STOP only for last message or I2C_M_STOP
+
+From: Masahiro Yamada <yamada.masahiro@socionext.com>
+
+[ Upstream commit 38f5d8d8cbb2ffa2b54315118185332329ec891c ]
+
+This driver currently emits a STOP if the next message is not
+I2C_M_RD. It should not do so because it disturbs the I2C_RDWR
+ioctl, where read/write transactions are combined without a STOP
+in between.
+
+Issue STOP only when the message is the last one _or_ flagged with
+I2C_M_STOP.
+
+Fixes: dd6fd4a32793 ("i2c: uniphier: add UniPhier FIFO-less I2C driver")
+Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
+Signed-off-by: Wolfram Sang <wsa@the-dreams.de>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/i2c/busses/i2c-uniphier.c | 7 ++-----
+ 1 file changed, 2 insertions(+), 5 deletions(-)
+
+--- a/drivers/i2c/busses/i2c-uniphier.c
++++ b/drivers/i2c/busses/i2c-uniphier.c
+@@ -248,11 +248,8 @@ static int uniphier_i2c_master_xfer(stru
+ return ret;
+
+ for (msg = msgs; msg < emsg; msg++) {
+- /* If next message is read, skip the stop condition */
+- bool stop = !(msg + 1 < emsg && msg[1].flags & I2C_M_RD);
+- /* but, force it if I2C_M_STOP is set */
+- if (msg->flags & I2C_M_STOP)
+- stop = true;
++ /* Emit STOP if it is the last message or I2C_M_STOP is set. */
++ bool stop = (msg + 1 == emsg) || (msg->flags & I2C_M_STOP);
+
+ ret = uniphier_i2c_master_xfer_one(adap, msg, stop);
+ if (ret)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
+Date: Thu, 30 Aug 2018 13:19:53 -0500
+Subject: ibmvnic: Include missing return code checks in reset function
+
+From: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
+
+[ Upstream commit f611a5b4a51fa36a0aa792be474f5d6aacaef7e3 ]
+
+Check the return codes of these functions and halt reset
+in case of failure. The driver will remain in a dormant state
+until the next reset event, when device initialization will be
+re-attempted.
+
+Signed-off-by: Thomas Falcon <tlfalcon@linux.vnet.ibm.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/ibm/ibmvnic.c | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/ibm/ibmvnic.c
++++ b/drivers/net/ethernet/ibm/ibmvnic.c
+@@ -1841,11 +1841,17 @@ static int do_reset(struct ibmvnic_adapt
+ adapter->map_id = 1;
+ release_rx_pools(adapter);
+ release_tx_pools(adapter);
+- init_rx_pools(netdev);
+- init_tx_pools(netdev);
++ rc = init_rx_pools(netdev);
++ if (rc)
++ return rc;
++ rc = init_tx_pools(netdev);
++ if (rc)
++ return rc;
+
+ release_napi(adapter);
+- init_napi(adapter);
++ rc = init_napi(adapter);
++ if (rc)
++ return rc;
+ } else {
+ rc = reset_tx_pools(adapter);
+ if (rc)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Paul Mackerras <paulus@ozlabs.org>
+Date: Mon, 20 Aug 2018 16:05:45 +1000
+Subject: KVM: PPC: Book3S HV: Don't truncate HPTE index in xlate function
+
+From: Paul Mackerras <paulus@ozlabs.org>
+
+[ Upstream commit 46dec40fb741f00f1864580130779aeeaf24fb3d ]
+
+This fixes a bug which causes guest virtual addresses to get translated
+to guest real addresses incorrectly when the guest is using the HPT MMU
+and has more than 256GB of RAM, or more specifically has a HPT larger
+than 2GB. This has shown up in testing as a failure of the host to
+emulate doorbell instructions correctly on POWER9 for HPT guests with
+more than 256GB of RAM.
+
+The bug is that the HPTE index in kvmppc_mmu_book3s_64_hv_xlate()
+is stored as an int, and in forming the HPTE address, the index gets
+shifted left 4 bits as an int before being signed-extended to 64 bits.
+The simple fix is to make the variable a long int, matching the
+return type of kvmppc_hv_find_lock_hpte(), which is what calculates
+the index.
+
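+The effect can be reproduced in a stand-alone sketch (the index value is
+made up, but any index whose shifted form no longer fits in 32 bits will
+do; this assumes a 64-bit long, as on ppc64):
+
+  #include <stdio.h>
+  #include <stdint.h>
+
+  int main(void)
+  {
+          long index = 0x08000000;  /* plausible for an HPT larger than 2GB */
+
+          /* Model the old code: the shift happens in 32 bits and the
+           * now-negative result is sign-extended to 64 bits. */
+          unsigned long bad  = (unsigned long)(int)((uint32_t)index << 4);
+          unsigned long good = (unsigned long)(index << 4);
+
+          printf("bad  offset: 0x%lx\n", bad);   /* 0xffffffff80000000 */
+          printf("good offset: 0x%lx\n", good);  /* 0x80000000 */
+          return 0;
+  }
+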
+Fixes: 697d3899dcb4 ("KVM: PPC: Implement MMIO emulation support for Book3S HV guests")
+Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/powerpc/kvm/book3s_64_mmu_hv.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
++++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
+@@ -359,7 +359,7 @@ static int kvmppc_mmu_book3s_64_hv_xlate
+ unsigned long pp, key;
+ unsigned long v, orig_v, gr;
+ __be64 *hptep;
+- int index;
++ long int index;
+ int virtmode = vcpu->arch.shregs.msr & (data ? MSR_DR : MSR_IR);
+
+ if (kvm_is_radix(vcpu->kvm))
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Thu, 30 Aug 2018 10:55:49 +0200
+Subject: mac80211: always account for A-MSDU header changes
+
+From: Johannes Berg <johannes.berg@intel.com>
+
+[ Upstream commit aa58acf325b4aadeecae2bfc90658273b47dbace ]
+
+In the error path of changing the SKB headroom of the second
+A-MSDU subframe, we would not account for the already-changed
+length of the first frame that just got converted to be in
+A-MSDU format and thus is a bit longer now.
+
+Fix this by doing the necessary accounting.
+
+It would be possible to reorder the operations, but that would
+make the code more complex (to calculate the necessary pad),
+and the headroom expansion should not fail frequently enough
+to make that worthwhile.
+
+Fixes: 6e0456b54545 ("mac80211: add A-MSDU tx support")
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Acked-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/tx.c | 12 +++++++-----
+ 1 file changed, 7 insertions(+), 5 deletions(-)
+
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3239,7 +3239,7 @@ static bool ieee80211_amsdu_aggregate(st
+
+ if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) +
+ 2 + pad))
+- goto out;
++ goto out_recalc;
+
+ ret = true;
+ data = skb_push(skb, ETH_ALEN + 2);
+@@ -3256,11 +3256,13 @@ static bool ieee80211_amsdu_aggregate(st
+ head->data_len += skb->len;
+ *frag_tail = skb;
+
+- flow->backlog += head->len - orig_len;
+- tin->backlog_bytes += head->len - orig_len;
+-
+- fq_recalc_backlog(fq, tin, flow);
++out_recalc:
++ if (head->len != orig_len) {
++ flow->backlog += head->len - orig_len;
++ tin->backlog_bytes += head->len - orig_len;
+
++ fq_recalc_backlog(fq, tin, flow);
++ }
+ out:
+ spin_unlock_bh(&fq->lock);
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Sara Sharon <sara.sharon@intel.com>
+Date: Wed, 29 Aug 2018 08:57:02 +0200
+Subject: mac80211: avoid kernel panic when building AMSDU from non-linear SKB
+
+From: Sara Sharon <sara.sharon@intel.com>
+
+[ Upstream commit 166ac9d55b0ab70b644e429be1f217fe8393cbd7 ]
+
+When building an A-MSDU from a non-linear SKB, we hit a
+kernel panic when trying to push the padding to the tail.
+Instead, put the padding at the head of the next subframe.
+This also fixes the A-MSDU subframes so that the padding is not
+accounted for in the length field and the last subframe carries no
+padding at all, both as required by the spec.
+
+Fixes: 6e0456b54545 ("mac80211: add A-MSDU tx support")
+Signed-off-by: Sara Sharon <sara.sharon@intel.com>
+Reviewed-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/tx.c | 38 +++++++++++++++++++++-----------------
+ 1 file changed, 21 insertions(+), 17 deletions(-)
+
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3073,27 +3073,18 @@ void ieee80211_clear_fast_xmit(struct st
+ }
+
+ static bool ieee80211_amsdu_realloc_pad(struct ieee80211_local *local,
+- struct sk_buff *skb, int headroom,
+- int *subframe_len)
++ struct sk_buff *skb, int headroom)
+ {
+- int amsdu_len = *subframe_len + sizeof(struct ethhdr);
+- int padding = (4 - amsdu_len) & 3;
+-
+- if (skb_headroom(skb) < headroom || skb_tailroom(skb) < padding) {
++ if (skb_headroom(skb) < headroom) {
+ I802_DEBUG_INC(local->tx_expand_skb_head);
+
+- if (pskb_expand_head(skb, headroom, padding, GFP_ATOMIC)) {
++ if (pskb_expand_head(skb, headroom, 0, GFP_ATOMIC)) {
+ wiphy_debug(local->hw.wiphy,
+ "failed to reallocate TX buffer\n");
+ return false;
+ }
+ }
+
+- if (padding) {
+- *subframe_len += padding;
+- skb_put_zero(skb, padding);
+- }
+-
+ return true;
+ }
+
+@@ -3117,8 +3108,7 @@ static bool ieee80211_amsdu_prepare_head
+ if (info->control.flags & IEEE80211_TX_CTRL_AMSDU)
+ return true;
+
+- if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr),
+- &subframe_len))
++ if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(*amsdu_hdr)))
+ return false;
+
+ data = skb_push(skb, sizeof(*amsdu_hdr));
+@@ -3184,7 +3174,8 @@ static bool ieee80211_amsdu_aggregate(st
+ void *data;
+ bool ret = false;
+ unsigned int orig_len;
+- int n = 1, nfrags;
++ int n = 1, nfrags, pad = 0;
++ u16 hdrlen;
+
+ if (!ieee80211_hw_check(&local->hw, TX_AMSDU))
+ return false;
+@@ -3235,8 +3226,19 @@ static bool ieee80211_amsdu_aggregate(st
+ if (max_frags && nfrags > max_frags)
+ goto out;
+
+- if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) + 2,
+- &subframe_len))
++ /*
++ * Pad out the previous subframe to a multiple of 4 by adding the
++ * padding to the next one, that's being added. Note that head->len
++ * is the length of the full A-MSDU, but that works since each time
++ * we add a new subframe we pad out the previous one to a multiple
++ * of 4 and thus it no longer matters in the next round.
++ */
++ hdrlen = fast_tx->hdr_len - sizeof(rfc1042_header);
++ if ((head->len - hdrlen) & 3)
++ pad = 4 - ((head->len - hdrlen) & 3);
++
++ if (!ieee80211_amsdu_realloc_pad(local, skb, sizeof(rfc1042_header) +
++ 2 + pad))
+ goto out;
+
+ ret = true;
+@@ -3248,6 +3250,8 @@ static bool ieee80211_amsdu_aggregate(st
+ memcpy(data, &len, 2);
+ memcpy(data + 2, rfc1042_header, sizeof(rfc1042_header));
+
++ memset(skb_push(skb, pad), 0, pad);
++
+ head->len += skb->len;
+ head->data_len += skb->len;
+ *frag_tail = skb;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Danek Duvall <duvall@comfychair.org>
+Date: Wed, 22 Aug 2018 16:01:04 -0700
+Subject: mac80211: correct use of IEEE80211_VHT_CAP_RXSTBC_X
+
+From: Danek Duvall <duvall@comfychair.org>
+
+[ Upstream commit 67d1ba8a6dc83d90cd58b89fa6cbf9ae35a0cf7f ]
+
+The mod mask for VHT capabilities intends to say that you can override
+the number of STBC receive streams, and it does, but only by accident.
+The IEEE80211_VHT_CAP_RXSTBC_X aren't bits to be set, but values (albeit
+left-shifted). ORing the bits together gets the right answer, but we
+should use the _MASK macro here instead.
+
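+A quick check of the numbers (values as defined in ieee80211.h, where the
+RXSTBC field occupies bits 8-10) shows why ORing the values happened to
+work:
+
+  #include <stdio.h>
+
+  #define RXSTBC_1    0x00000100
+  #define RXSTBC_2    0x00000200
+  #define RXSTBC_3    0x00000300
+  #define RXSTBC_4    0x00000400
+  #define RXSTBC_MASK 0x00000700
+
+  int main(void)
+  {
+          /* 0x100 | 0x200 | 0x300 | 0x400 == 0x700, the same as the mask,
+           * but only because these are field values, not independent bits. */
+          printf("0x%x\n", RXSTBC_1 | RXSTBC_2 | RXSTBC_3 | RXSTBC_4);
+          printf("0x%x\n", RXSTBC_MASK);
+          return 0;
+  }
+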
+Signed-off-by: Danek Duvall <duvall@comfychair.org>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/main.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
+
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -470,10 +470,7 @@ static const struct ieee80211_vht_cap ma
+ cpu_to_le32(IEEE80211_VHT_CAP_RXLDPC |
+ IEEE80211_VHT_CAP_SHORT_GI_80 |
+ IEEE80211_VHT_CAP_SHORT_GI_160 |
+- IEEE80211_VHT_CAP_RXSTBC_1 |
+- IEEE80211_VHT_CAP_RXSTBC_2 |
+- IEEE80211_VHT_CAP_RXSTBC_3 |
+- IEEE80211_VHT_CAP_RXSTBC_4 |
++ IEEE80211_VHT_CAP_RXSTBC_MASK |
+ IEEE80211_VHT_CAP_TXSTBC |
+ IEEE80211_VHT_CAP_SU_BEAMFORMER_CAPABLE |
+ IEEE80211_VHT_CAP_SU_BEAMFORMEE_CAPABLE |
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+Date: Wed, 29 Aug 2018 21:03:25 +0200
+Subject: mac80211: do not convert to A-MSDU if frag/subframe limited
+
+From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+
+[ Upstream commit 1eb507903665442360a959136dfa3234c43db085 ]
+
+Do not start to aggregate packets in a A-MSDU frame (converting the
+first subframe to A-MSDU, adding the header) if max_tx_fragments or
+max_amsdu_subframes limits are already exceeded by it. In particular,
+this happens when drivers set the limit to 1 to avoid A-MSDUs entirely.
+
+Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+[reword commit message to be more precise]
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/tx.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3208,9 +3208,6 @@ static bool ieee80211_amsdu_aggregate(st
+ if (skb->len + head->len > max_amsdu_len)
+ goto out;
+
+- if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
+- goto out;
+-
+ nfrags = 1 + skb_shinfo(skb)->nr_frags;
+ nfrags += 1 + skb_shinfo(head)->nr_frags;
+ frag_tail = &skb_shinfo(head)->frag_list;
+@@ -3226,6 +3223,9 @@ static bool ieee80211_amsdu_aggregate(st
+ if (max_frags && nfrags > max_frags)
+ goto out;
+
++ if (!ieee80211_amsdu_prepare_head(sdata, fast_tx, head))
++ goto out;
++
+ /*
+ * Pad out the previous subframe to a multiple of 4 by adding the
+ * padding to the next one, that's being added. Note that head->len
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Date: Fri, 31 Aug 2018 11:31:12 +0300
+Subject: mac80211: don't Tx a deauth frame if the AP forbade Tx
+
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+
+[ Upstream commit 6c18b27d6e5c6a7206364eae2b47bc8d8b2fa68f ]
+
+If the driver fails to properly prepare for the channel
+switch, mac80211 will disconnect. If the CSA IE had mode
+set to 1, it means that the clients are not allowed to send
+any Tx on the current channel, and that includes the
+deauthentication frame.
+
+Make sure that we don't send the deauthentication frame in
+this case.
+
+In iwlwifi, this caused a failure to flush queues since the
+firmware already closed the queues after having parsed the
+CSA IE. Then mac80211 would wait until the deauthentication
+frame would go out (drv_flush(drop=false)) and that would
+never happen.
+
+Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/mlme.c | 17 +++++++++++++++--
+ 1 file changed, 15 insertions(+), 2 deletions(-)
+
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -1270,6 +1270,16 @@ ieee80211_sta_process_chanswitch(struct
+ cbss->beacon_interval));
+ return;
+ drop_connection:
++ /*
++ * This is just so that the disconnect flow will know that
++ * we were trying to switch channel and failed. In case the
++ * mode is 1 (we are not allowed to Tx), we will know not to
++ * send a deauthentication frame. Those two fields will be
++ * reset when the disconnection worker runs.
++ */
++ sdata->vif.csa_active = true;
++ sdata->csa_block_tx = csa_ie.mode;
++
+ ieee80211_queue_work(&local->hw, &ifmgd->csa_connection_drop_work);
+ mutex_unlock(&local->chanctx_mtx);
+ mutex_unlock(&local->mtx);
+@@ -2453,6 +2463,7 @@ static void __ieee80211_disconnect(struc
+ struct ieee80211_local *local = sdata->local;
+ struct ieee80211_if_managed *ifmgd = &sdata->u.mgd;
+ u8 frame_buf[IEEE80211_DEAUTH_FRAME_LEN];
++ bool tx;
+
+ sdata_lock(sdata);
+ if (!ifmgd->associated) {
+@@ -2460,6 +2471,8 @@ static void __ieee80211_disconnect(struc
+ return;
+ }
+
++ tx = !sdata->csa_block_tx;
++
+ /* AP is probably out of range (or not reachable for another reason) so
+ * remove the bss struct for that AP.
+ */
+@@ -2467,7 +2480,7 @@ static void __ieee80211_disconnect(struc
+
+ ieee80211_set_disassoc(sdata, IEEE80211_STYPE_DEAUTH,
+ WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY,
+- true, frame_buf);
++ tx, frame_buf);
+ mutex_lock(&local->mtx);
+ sdata->vif.csa_active = false;
+ ifmgd->csa_waiting_bcn = false;
+@@ -2478,7 +2491,7 @@ static void __ieee80211_disconnect(struc
+ }
+ mutex_unlock(&local->mtx);
+
+- ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), true,
++ ieee80211_report_disconnect(sdata, frame_buf, sizeof(frame_buf), tx,
+ WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY);
+
+ sdata_unlock(sdata);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Date: Fri, 31 Aug 2018 11:31:06 +0300
+Subject: mac80211: fix a race between restart and CSA flows
+
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+
+[ Upstream commit f3ffb6c3a28963657eb8b02a795d75f2ebbd5ef4 ]
+
+We hit a problem with iwlwifi that was caused by a bug in
+mac80211. A bug in iwlwifi caused the firmware to crash in
+certain cases in channel switch. Because of that bug,
+drv_pre_channel_switch would fail and trigger the restart
+flow.
+Now we had the hw restart worker, which runs on the system's
+workqueue, and the csa_connection_drop_work worker, which runs
+on mac80211's workqueue, and the two could run together. This is
+obviously problematic since the restart work wants to
+reconfigure the connection, while the csa_connection_drop_work
+worker does the exact opposite: it tries to disconnect.
+
+Fix this by cancelling the csa_connection_drop_work worker
+in the restart worker.
+
+Note that this can sound racy: we could have:
+
+driver iface_work CSA_work restart_work
++++++++++++++++++++++++++++++++++++++++++++++
+ |
+ <--drv_cs ---|
+<FW CRASH!>
+-CS FAILED-->
+ | |
+ | cancel_work(CSA)
+ schedule |
+ CSA work |
+ | |
+ Race between those 2
+
+But this is not possible because we flush the workqueue
+in the restart worker before we cancel the CSA worker.
+That would be bulletproof if we could guarantee that
+we schedule the CSA worker only from the iface_work
+which runs on the workqueue (and not on the system's
+workqueue), but unfortunately we do have an instance
+in which we schedule the CSA work outside the context
+of the workqueue (ieee80211_chswitch_done).
+
+Note also that we should probably cancel other workers
+like beacon_connection_loss_work and possibly others
+for different types of interfaces, at the very least,
+IBSS should suffer from the exact same problem, but for
+now, do the minimum to fix the actual bug that was actually
+experienced and reproduced.
+
+Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/main.c | 21 ++++++++++++++++++++-
+ 1 file changed, 20 insertions(+), 1 deletion(-)
+
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -255,8 +255,27 @@ static void ieee80211_restart_work(struc
+
+ flush_work(&local->radar_detected_work);
+ rtnl_lock();
+- list_for_each_entry(sdata, &local->interfaces, list)
++ list_for_each_entry(sdata, &local->interfaces, list) {
++ /*
++ * XXX: there may be more work for other vif types and even
++ * for station mode: a good thing would be to run most of
++ * the iface type's dependent _stop (ieee80211_mg_stop,
++ * ieee80211_ibss_stop) etc...
++ * For now, fix only the specific bug that was seen: race
++ * between csa_connection_drop_work and us.
++ */
++ if (sdata->vif.type == NL80211_IFTYPE_STATION) {
++ /*
++ * This worker is scheduled from the iface worker that
++ * runs on mac80211's workqueue, so we can't be
++ * scheduling this worker after the cancel right here.
++ * The exception is ieee80211_chswitch_done.
++ * Then we can have a race...
++ */
++ cancel_work_sync(&sdata->u.mgd.csa_connection_drop_work);
++ }
+ flush_delayed_work(&sdata->dec_tailroom_needed_wk);
++ }
+ ieee80211_scan_cancel(local);
+
+ /* make sure any new ROC will consider local->in_reconfig */
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+Date: Fri, 31 Aug 2018 01:04:13 +0200
+Subject: mac80211: fix an off-by-one issue in A-MSDU max_subframe computation
+
+From: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+
+[ Upstream commit 66eb02d839e8495ae6b612e2d09ff599374b80e2 ]
+
+Initialize 'n' to 2 in order to also take the first packet into account
+when estimating the max_subframe limit for a given A-MSDU, since the
+frag_tail pointer is NULL when the ieee80211_amsdu_aggregate routine
+analyzes the second frame.
+
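+Concretely (as an illustration): with a driver limit of one subframe,
+the frag list is still empty when the second frame is examined, so the
+old count stayed at 1, below the limit, and a two-subframe A-MSDU could
+still be built. Counting the head frame as well (n = 2) makes the limit
+check reject it.
+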
+Fixes: 6e0456b54545 ("mac80211: add A-MSDU tx support")
+Signed-off-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/tx.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/mac80211/tx.c
++++ b/net/mac80211/tx.c
+@@ -3174,7 +3174,7 @@ static bool ieee80211_amsdu_aggregate(st
+ void *data;
+ bool ret = false;
+ unsigned int orig_len;
+- int n = 1, nfrags, pad = 0;
++ int n = 2, nfrags, pad = 0;
+ u16 hdrlen;
+
+ if (!ieee80211_hw_check(&local->hw, TX_AMSDU))
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Ilan Peer <ilan.peer@intel.com>
+Date: Fri, 31 Aug 2018 11:31:10 +0300
+Subject: mac80211: Fix station bandwidth setting after channel switch
+
+From: Ilan Peer <ilan.peer@intel.com>
+
+[ Upstream commit 0007e94355fdb71a1cf5dba0754155cba08f0666 ]
+
+When performing a channel switch flow for a managed interface, the
+flow did not update the bandwidth of the AP station and the rate
+scale algorithm. In case of a channel width downgrade, this would
+result in the rate scale algorithm using a bandwidth that does not
+match the interface channel configuration.
+
+Fix this by updating the AP station bandwidth and rate scaling algorithm
+before the actual channel change in case of a bandwidth downgrade, or
+after the actual channel change in case of a bandwidth upgrade.
+
+Signed-off-by: Ilan Peer <ilan.peer@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/mlme.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++++++
+ 1 file changed, 53 insertions(+)
+
+--- a/net/mac80211/mlme.c
++++ b/net/mac80211/mlme.c
+@@ -978,6 +978,10 @@ static void ieee80211_chswitch_work(stru
+ */
+
+ if (sdata->reserved_chanctx) {
++ struct ieee80211_supported_band *sband = NULL;
++ struct sta_info *mgd_sta = NULL;
++ enum ieee80211_sta_rx_bandwidth bw = IEEE80211_STA_RX_BW_20;
++
+ /*
+ * with multi-vif csa driver may call ieee80211_csa_finish()
+ * many times while waiting for other interfaces to use their
+@@ -986,6 +990,48 @@ static void ieee80211_chswitch_work(stru
+ if (sdata->reserved_ready)
+ goto out;
+
++ if (sdata->vif.bss_conf.chandef.width !=
++ sdata->csa_chandef.width) {
++ /*
++ * For managed interface, we need to also update the AP
++ * station bandwidth and align the rate scale algorithm
++ * on the bandwidth change. Here we only consider the
++ * bandwidth of the new channel definition (as channel
++ * switch flow does not have the full HT/VHT/HE
++ * information), assuming that if additional changes are
++ * required they would be done as part of the processing
++ * of the next beacon from the AP.
++ */
++ switch (sdata->csa_chandef.width) {
++ case NL80211_CHAN_WIDTH_20_NOHT:
++ case NL80211_CHAN_WIDTH_20:
++ default:
++ bw = IEEE80211_STA_RX_BW_20;
++ break;
++ case NL80211_CHAN_WIDTH_40:
++ bw = IEEE80211_STA_RX_BW_40;
++ break;
++ case NL80211_CHAN_WIDTH_80:
++ bw = IEEE80211_STA_RX_BW_80;
++ break;
++ case NL80211_CHAN_WIDTH_80P80:
++ case NL80211_CHAN_WIDTH_160:
++ bw = IEEE80211_STA_RX_BW_160;
++ break;
++ }
++
++ mgd_sta = sta_info_get(sdata, ifmgd->bssid);
++ sband =
++ local->hw.wiphy->bands[sdata->csa_chandef.chan->band];
++ }
++
++ if (sdata->vif.bss_conf.chandef.width >
++ sdata->csa_chandef.width) {
++ mgd_sta->sta.bandwidth = bw;
++ rate_control_rate_update(local, sband, mgd_sta,
++ IEEE80211_RC_BW_CHANGED);
++ }
++
+ ret = ieee80211_vif_use_reserved_context(sdata);
+ if (ret) {
+ sdata_info(sdata,
+@@ -996,6 +1042,13 @@ static void ieee80211_chswitch_work(stru
+ goto out;
+ }
+
++ if (sdata->vif.bss_conf.chandef.width <
++ sdata->csa_chandef.width) {
++ mgd_sta->sta.bandwidth = bw;
++ rate_control_rate_update(local, sband, mgd_sta,
++ IEEE80211_RC_BW_CHANGED);
++ }
++
+ goto out;
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: "Dreyfuss, Haim" <haim.dreyfuss@intel.com>
+Date: Fri, 31 Aug 2018 11:31:04 +0300
+Subject: mac80211: fix WMM TXOP calculation
+
+From: "Dreyfuss, Haim" <haim.dreyfuss@intel.com>
+
+[ Upstream commit abd76d255d69d70206c01b9cb19ba36a9c1df6a1 ]
+
+In commit 9236c4523e5b ("mac80211: limit wmm params to comply
+with ETSI requirements"), we have limited the WMM parameters to
+comply with the 802.11 and ETSI standards. Mistakenly, the TXOP value
+was calculated incorrectly. Fix it by taking the minimum of the
+802.11 and ETSI values to make sure we violate neither.
+
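+For example (illustrative numbers): with an ETSI cap of cot = 6000 us,
+i.e. 6000 / 32 = 187 TXOP units, a default txop of 47 units stays at
+min(47, 187) = 47 and a configured 255 is clamped to 187. The old
+expression instead replaced a txop of 0 (the 802.11 default, meaning a
+single frame per TXOP) with the ETSI cap, which is the looser of the
+two limits.
+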
+Fixes: e552af058148 ("mac80211: limit wmm params to comply with ETSI requirements")
+Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/util.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+--- a/net/mac80211/util.c
++++ b/net/mac80211/util.c
+@@ -1151,8 +1151,7 @@ void ieee80211_regulatory_limit_wmm_para
+ qparam->cw_min = max_t(u16, qparam->cw_min, wmm_ac->cw_min);
+ qparam->cw_max = max_t(u16, qparam->cw_max, wmm_ac->cw_max);
+ qparam->aifs = max_t(u8, qparam->aifs, wmm_ac->aifsn);
+- qparam->txop = !qparam->txop ? wmm_ac->cot / 32 :
+- min_t(u16, qparam->txop, wmm_ac->cot / 32);
++ qparam->txop = min_t(u16, qparam->txop, wmm_ac->cot / 32);
+ rcu_read_unlock();
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Yuan-Chi Pang <fu3mo6goo@gmail.com>
+Date: Wed, 29 Aug 2018 09:30:08 +0800
+Subject: mac80211: mesh: fix HWMP sequence numbering to follow standard
+
+From: Yuan-Chi Pang <fu3mo6goo@gmail.com>
+
+[ Upstream commit 1f631c3201fe5491808df143d8fcba81b3197ffd ]
+
+IEEE 802.11-2016 14.10.8.3 HWMP sequence numbering says:
+If it is a target mesh STA, it shall update its own HWMP SN to
+maximum (current HWMP SN, target HWMP SN in the PREQ element) + 1
+immediately before it generates a PREP element in response to a
+PREQ element.
+
+Signed-off-by: Yuan-Chi Pang <fu3mo6goo@gmail.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/mesh_hwmp.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/net/mac80211/mesh_hwmp.c
++++ b/net/mac80211/mesh_hwmp.c
+@@ -572,6 +572,10 @@ static void hwmp_preq_frame_process(stru
+ forward = false;
+ reply = true;
+ target_metric = 0;
++
++ if (SN_GT(target_sn, ifmsh->sn))
++ ifmsh->sn = target_sn;
++
+ if (time_after(jiffies, ifmsh->last_sn_update +
+ net_traversal_jiffies(sdata)) ||
+ time_before(jiffies, ifmsh->last_sn_update)) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: "Toke Høiland-Jørgensen" <toke@toke.dk>
+Date: Mon, 13 Aug 2018 14:16:25 +0200
+Subject: mac80211: Run TXQ teardown code before de-registering interfaces
+
+From: "Toke Høiland-Jørgensen" <toke@toke.dk>
+
+[ Upstream commit 77cfaf52eca5cac30ed029507e0cab065f888995 ]
+
+The TXQ teardown code can reference the vif data structures that are
+stored in the netdev private memory area if there are still packets on
+the queue when it is being freed. Since the TXQ teardown code is run
+after the netdevs are freed, this can lead to a use-after-free. Fix this
+by moving the TXQ teardown code to earlier in ieee80211_unregister_hw().
+
+Reported-by: Ben Greear <greearb@candelatech.com>
+Tested-by: Ben Greear <greearb@candelatech.com>
+Signed-off-by: Toke Høiland-Jørgensen <toke@toke.dk>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/main.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/mac80211/main.c
++++ b/net/mac80211/main.c
+@@ -1182,6 +1182,7 @@ void ieee80211_unregister_hw(struct ieee
+ #if IS_ENABLED(CONFIG_IPV6)
+ unregister_inet6addr_notifier(&local->ifa6_notifier);
+ #endif
++ ieee80211_txq_teardown_flows(local);
+
+ rtnl_lock();
+
+@@ -1210,7 +1211,6 @@ void ieee80211_unregister_hw(struct ieee
+ skb_queue_purge(&local->skb_queue);
+ skb_queue_purge(&local->skb_queue_unreliable);
+ skb_queue_purge(&local->skb_queue_tdls_chsw);
+- ieee80211_txq_teardown_flows(local);
+
+ destroy_workqueue(local->workqueue);
+ wiphy_unregister(local->hw.wiphy);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Date: Fri, 31 Aug 2018 11:31:13 +0300
+Subject: mac80211: shorten the IBSS debug messages
+
+From: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+
+[ Upstream commit c6e57b3896fc76299913b8cfd82d853bee8a2c84 ]
+
+When tracing is enabled, all the debug messages are recorded and must
+not exceed MAX_MSG_LEN (100) columns. Longer debug messages present the
+user with:
+
+WARNING: CPU: 3 PID: 32642 at /tmp/wifi-core-20180806094828/src/iwlwifi-stack-dev/net/mac80211/./trace_msg.h:32 trace_event_raw_event_mac80211_msg_event+0xab/0xc0 [mac80211]
+Workqueue: phy1 ieee80211_iface_work [mac80211]
+ RIP: 0010:trace_event_raw_event_mac80211_msg_event+0xab/0xc0 [mac80211]
+ Call Trace:
+ __sdata_dbg+0xbd/0x120 [mac80211]
+ ieee80211_ibss_rx_queued_mgmt+0x15f/0x510 [mac80211]
+ ieee80211_iface_work+0x21d/0x320 [mac80211]
+
+Signed-off-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/mac80211/ibss.c | 22 +++++++++++-----------
+ 1 file changed, 11 insertions(+), 11 deletions(-)
+
+--- a/net/mac80211/ibss.c
++++ b/net/mac80211/ibss.c
+@@ -947,8 +947,8 @@ static void ieee80211_rx_mgmt_deauth_ibs
+ if (len < IEEE80211_DEAUTH_FRAME_LEN)
+ return;
+
+- ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM BSSID=%pM (reason: %d)\n",
+- mgmt->sa, mgmt->da, mgmt->bssid, reason);
++ ibss_dbg(sdata, "RX DeAuth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++ ibss_dbg(sdata, "\tBSSID=%pM (reason: %d)\n", mgmt->bssid, reason);
+ sta_info_destroy_addr(sdata, mgmt->sa);
+ }
+
+@@ -966,9 +966,9 @@ static void ieee80211_rx_mgmt_auth_ibss(
+ auth_alg = le16_to_cpu(mgmt->u.auth.auth_alg);
+ auth_transaction = le16_to_cpu(mgmt->u.auth.auth_transaction);
+
+- ibss_dbg(sdata,
+- "RX Auth SA=%pM DA=%pM BSSID=%pM (auth_transaction=%d)\n",
+- mgmt->sa, mgmt->da, mgmt->bssid, auth_transaction);
++ ibss_dbg(sdata, "RX Auth SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++ ibss_dbg(sdata, "\tBSSID=%pM (auth_transaction=%d)\n",
++ mgmt->bssid, auth_transaction);
+
+ if (auth_alg != WLAN_AUTH_OPEN || auth_transaction != 1)
+ return;
+@@ -1175,10 +1175,10 @@ static void ieee80211_rx_bss_info(struct
+ rx_timestamp = drv_get_tsf(local, sdata);
+ }
+
+- ibss_dbg(sdata,
+- "RX beacon SA=%pM BSSID=%pM TSF=0x%llx BCN=0x%llx diff=%lld @%lu\n",
++ ibss_dbg(sdata, "RX beacon SA=%pM BSSID=%pM TSF=0x%llx\n",
+ mgmt->sa, mgmt->bssid,
+- (unsigned long long)rx_timestamp,
++ (unsigned long long)rx_timestamp);
++ ibss_dbg(sdata, "\tBCN=0x%llx diff=%lld @%lu\n",
+ (unsigned long long)beacon_timestamp,
+ (unsigned long long)(rx_timestamp - beacon_timestamp),
+ jiffies);
+@@ -1537,9 +1537,9 @@ static void ieee80211_rx_mgmt_probe_req(
+
+ tx_last_beacon = drv_tx_last_beacon(local);
+
+- ibss_dbg(sdata,
+- "RX ProbeReq SA=%pM DA=%pM BSSID=%pM (tx_last_beacon=%d)\n",
+- mgmt->sa, mgmt->da, mgmt->bssid, tx_last_beacon);
++ ibss_dbg(sdata, "RX ProbeReq SA=%pM DA=%pM\n", mgmt->sa, mgmt->da);
++ ibss_dbg(sdata, "\tBSSID=%pM (tx_last_beacon=%d)\n",
++ mgmt->bssid, tx_last_beacon);
+
+ if (!tx_last_beacon && is_multicast_ether_addr(mgmt->da))
+ return;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Danek Duvall <duvall@comfychair.org>
+Date: Wed, 22 Aug 2018 16:01:05 -0700
+Subject: mac80211_hwsim: correct use of IEEE80211_VHT_CAP_RXSTBC_X
+
+From: Danek Duvall <duvall@comfychair.org>
+
+[ Upstream commit d7c863a2f65e48f442379f4ee1846d52e0c5d24d ]
+
+The mac80211_hwsim driver intends to say that it supports up to four
+STBC receive streams, but instead it ends up saying something undefined.
+The IEEE80211_VHT_CAP_RXSTBC_X macros aren't independent bits that can
+be ORed together, but values. In this case, _4 is the appropriate one
+to use.
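+
+(Worked example, with the field values as I recall them from ieee80211.h:
+_1 is 0x0100, _2 is 0x0200, _3 is 0x0300 and _4 is 0x0400 - encodings of
+a 3-bit field, not independent flags.  ORing _1 | _2 | _3 | _4 therefore
+yields 0x0700, i.e. the reserved field value 7, whereas _4 alone encodes
+the intended four receive streams.)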
+
+Signed-off-by: Danek Duvall <duvall@comfychair.org>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/mac80211_hwsim.c | 3 ---
+ 1 file changed, 3 deletions(-)
+
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -2699,9 +2699,6 @@ static int mac80211_hwsim_new_radio(stru
+ IEEE80211_VHT_CAP_SHORT_GI_80 |
+ IEEE80211_VHT_CAP_SHORT_GI_160 |
+ IEEE80211_VHT_CAP_TXSTBC |
+- IEEE80211_VHT_CAP_RXSTBC_1 |
+- IEEE80211_VHT_CAP_RXSTBC_2 |
+- IEEE80211_VHT_CAP_RXSTBC_3 |
+ IEEE80211_VHT_CAP_RXSTBC_4 |
+ IEEE80211_VHT_CAP_MAX_A_MPDU_LENGTH_EXPONENT_MASK;
+ sband->vht_cap.vht_mcs.rx_mcs_map =
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Jinbum Park <jinb.park7@gmail.com>
+Date: Tue, 31 Jul 2018 23:10:40 +0900
+Subject: mac80211_hwsim: Fix possible Spectre-v1 for hwsim_world_regdom_custom
+
+From: Jinbum Park <jinb.park7@gmail.com>
+
+[ Upstream commit 3a2af7cccbbaf2362db9053a946a6084e12bfa73 ]
+
+The user controls @idx, which is used as an index into
+hwsim_world_regdom_custom, so it can be exploited via a Spectre-like
+attack (speculative execution).
+
+This kind of attack leaks the address of hwsim_world_regdom_custom,
+which lets an attacker bypass security mechanisms such as KASLR.
+
+So sanitize @idx before using it to prevent the attack.
+
+I leveraged strategy [1] to find and exploit this gadget.
+
+[1] https://github.com/jinb-park/linux-exploit/tree/master/exploit-remaining-spectre-gadget/
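+
+(A condensed sketch of the pattern the diff below applies, with the error
+handling simplified for illustration:
+
+	if (idx >= ARRAY_SIZE(hwsim_world_regdom_custom))
+		return -EINVAL;
+	/* clamp idx even if the bounds check above is bypassed speculatively */
+	idx = array_index_nospec(idx, ARRAY_SIZE(hwsim_world_regdom_custom));
+	param.regd = hwsim_world_regdom_custom[idx];
+
+array_index_nospec() keeps the index inside the array bounds even under
+speculative execution, so the load cannot be used as a leak gadget.)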
+
+Signed-off-by: Jinbum Park <jinb.park7@gmail.com>
+[johannes: unwrap URL]
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/mac80211_hwsim.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -33,6 +33,7 @@
+ #include <net/net_namespace.h>
+ #include <net/netns/generic.h>
+ #include <linux/rhashtable.h>
++#include <linux/nospec.h>
+ #include "mac80211_hwsim.h"
+
+ #define WARN_QUEUE 100
+@@ -3229,6 +3230,9 @@ static int hwsim_new_radio_nl(struct sk_
+ kfree(hwname);
+ return -EINVAL;
+ }
++
++ idx = array_index_nospec(idx,
++ ARRAY_SIZE(hwsim_world_regdom_custom));
+ param.regd = hwsim_world_regdom_custom[idx];
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Johannes Berg <johannes.berg@intel.com>
+Date: Wed, 15 Aug 2018 18:17:03 +0200
+Subject: mac80211_hwsim: require at least one channel
+
+From: Johannes Berg <johannes.berg@intel.com>
+
+[ Upstream commit 484004339d4514fde425f6e8a9f6a6cc979bb0c3 ]
+
+Syzbot continues to try to create mac80211_hwsim radios, and
+manages to pass parameters that are later checked with WARN_ON
+in cfg80211 - catch another one in hwsim directly.
+
+Reported-by: syzbot+2a12f11c306afe871c1f@syzkaller.appspotmail.com
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/wireless/mac80211_hwsim.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+--- a/drivers/net/wireless/mac80211_hwsim.c
++++ b/drivers/net/wireless/mac80211_hwsim.c
+@@ -3194,6 +3194,11 @@ static int hwsim_new_radio_nl(struct sk_
+ if (info->attrs[HWSIM_ATTR_CHANNELS])
+ param.channels = nla_get_u32(info->attrs[HWSIM_ATTR_CHANNELS]);
+
++ if (param.channels < 1) {
++ GENL_SET_ERR_MSG(info, "must have at least one channel");
++ return -EINVAL;
++ }
++
+ if (param.channels > CFG80211_MAX_NUM_DIFFERENT_CHANNELS) {
+ GENL_SET_ERR_MSG(info, "too many channels specified");
+ return -EINVAL;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Shaohua Li <shli@fb.com>
+Date: Wed, 29 Aug 2018 11:05:42 -0700
+Subject: md/raid5-cache: disable reshape completely
+
+From: Shaohua Li <shli@fb.com>
+
+[ Upstream commit e254de6bcf3f5b6e78a92ac95fb91acef8adfe1a ]
+
+We don't support reshape yet if an array has a log device. Previously we
+determined this by checking ->log. However, ->log could be NULL after a log
+device is removed, while the array is still marked as supporting a log device.
+Don't allow reshape in this case either. The user can disable log device
+support by setting 'consistency_policy' to 'resync' and then do the reshape.
+
+Reported-by: Xiao Ni <xni@redhat.com>
+Tested-by: Xiao Ni <xni@redhat.com>
+Signed-off-by: Shaohua Li <shli@fb.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/md/raid5-log.h | 5 +++++
+ drivers/md/raid5.c | 6 +++---
+ 2 files changed, 8 insertions(+), 3 deletions(-)
+
+--- a/drivers/md/raid5-log.h
++++ b/drivers/md/raid5-log.h
+@@ -46,6 +46,11 @@ extern int ppl_modify_log(struct r5conf
+ extern void ppl_quiesce(struct r5conf *conf, int quiesce);
+ extern int ppl_handle_flush_request(struct r5l_log *log, struct bio *bio);
+
++static inline bool raid5_has_log(struct r5conf *conf)
++{
++ return test_bit(MD_HAS_JOURNAL, &conf->mddev->flags);
++}
++
+ static inline bool raid5_has_ppl(struct r5conf *conf)
+ {
+ return test_bit(MD_HAS_PPL, &conf->mddev->flags);
+--- a/drivers/md/raid5.c
++++ b/drivers/md/raid5.c
+@@ -735,7 +735,7 @@ static bool stripe_can_batch(struct stri
+ {
+ struct r5conf *conf = sh->raid_conf;
+
+- if (conf->log || raid5_has_ppl(conf))
++ if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ return false;
+ return test_bit(STRIPE_BATCH_READY, &sh->state) &&
+ !test_bit(STRIPE_BITMAP_PENDING, &sh->state) &&
+@@ -7739,7 +7739,7 @@ static int raid5_resize(struct mddev *md
+ sector_t newsize;
+ struct r5conf *conf = mddev->private;
+
+- if (conf->log || raid5_has_ppl(conf))
++ if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ return -EINVAL;
+ sectors &= ~((sector_t)conf->chunk_sectors - 1);
+ newsize = raid5_size(mddev, sectors, mddev->raid_disks);
+@@ -7790,7 +7790,7 @@ static int check_reshape(struct mddev *m
+ {
+ struct r5conf *conf = mddev->private;
+
+- if (conf->log || raid5_has_ppl(conf))
++ if (raid5_has_log(conf) || raid5_has_ppl(conf))
+ return -EINVAL;
+ if (mddev->delta_disks == 0 &&
+ mddev->new_layout == mddev->layout &&
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: YueHaibing <yuehaibing@huawei.com>
+Date: Tue, 7 Aug 2018 12:03:13 +0800
+Subject: nds32: add NULL entry to the end of the of_device_id array
+
+From: YueHaibing <yuehaibing@huawei.com>
+
+[ Upstream commit 1944a50859ec2b570b42b459ac25d607fc7c31f0 ]
+
+Make sure of_device_id tables are NULL terminated.
+Found by coccinelle spatch "misc/of_table.cocci"
+
+Signed-off-by: YueHaibing <yuehaibing@huawei.com>
+Acked-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/kernel/atl2c.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/arch/nds32/kernel/atl2c.c
++++ b/arch/nds32/kernel/atl2c.c
+@@ -9,7 +9,8 @@
+
+ void __iomem *atl2c_base;
+ static const struct of_device_id atl2c_ids[] __initconst = {
+- {.compatible = "andestech,atl2c",}
++ {.compatible = "andestech,atl2c",},
++ {}
+ };
+
+ static int __init atl2c_of_init(void)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Greentime Hu <greentime@andestech.com>
+Date: Tue, 28 Aug 2018 16:07:39 +0800
+Subject: nds32: fix build error because of wrong semicolon
+
+From: Greentime Hu <greentime@andestech.com>
+
+[ Upstream commit ec865393292f5ad8d52da20788b3685ebce44c48 ]
+
+The trailing semicolon shall be removed from the define; we must not put a semicolon there.
+
+/kisskb/src/arch/nds32/include/asm/elf.h:126:29: error: expected '}' before ';' token
+ #define ELF_DATA ELFDATA2LSB;
+ ^
+/kisskb/src/fs/proc/kcore.c:318:17: note: in expansion of macro 'ELF_DATA'
+ [EI_DATA] = ELF_DATA,
+ ^~~~~~~~
+/kisskb/src/fs/proc/kcore.c:312:15: note: to match this '{'
+ .e_ident = {
+ ^
+/kisskb/src/scripts/Makefile.build:307: recipe for target 'fs/proc/kcore.o' failed
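+
+(For illustration: the designated initializer shown in the error above,
+
+	[EI_DATA] = ELF_DATA,
+
+expands to "[EI_DATA] = ELFDATA2LSB;," with the old define, and the stray
+semicolon inside the initializer braces is what the compiler rejects.)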
+
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/include/asm/elf.h | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/nds32/include/asm/elf.h
++++ b/arch/nds32/include/asm/elf.h
+@@ -121,9 +121,9 @@ struct elf32_hdr;
+ */
+ #define ELF_CLASS ELFCLASS32
+ #ifdef __NDS32_EB__
+-#define ELF_DATA ELFDATA2MSB;
++#define ELF_DATA ELFDATA2MSB
+ #else
+-#define ELF_DATA ELFDATA2LSB;
++#define ELF_DATA ELFDATA2LSB
+ #endif
+ #define ELF_ARCH EM_NDS32
+ #define USE_ELF_CORE_DUMP
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Zong Li <zong@andestech.com>
+Date: Mon, 13 Aug 2018 13:28:23 +0800
+Subject: nds32: Fix empty call trace
+
+From: Zong Li <zong@andestech.com>
+
+[ Upstream commit c17df7960534357fb74074c2f514c831d4a9cf5a ]
+
+The compiler-predefined macro 'NDS32_ABI_2' has been removed; it should
+be '__NDS32_ABI_2' here.
+
+Signed-off-by: Zong Li <zong@andestech.com>
+Acked-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/kernel/traps.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/nds32/kernel/traps.c
++++ b/arch/nds32/kernel/traps.c
+@@ -137,7 +137,7 @@ static void __dump(struct task_struct *t
+ !((unsigned long)base_reg & 0x3) &&
+ ((unsigned long)base_reg >= TASK_SIZE)) {
+ unsigned long next_fp;
+-#if !defined(NDS32_ABI_2)
++#if !defined(__NDS32_ABI_2)
+ ret_addr = base_reg[0];
+ next_fp = base_reg[1];
+ #else
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Zong Li <zong@andestech.com>
+Date: Mon, 13 Aug 2018 14:48:49 +0800
+Subject: nds32: Fix get_user/put_user macro expand pointer problem
+
+From: Zong Li <zong@andestech.com>
+
+[ Upstream commit 6cce95a6c7d288ac2126eee4b95df448b9015b84 ]
+
+The pointer argument of the macro needs to be evaluated once first, and
+the resulting pointer then used in the macro body.
+
+In kernel/trace/trace.c, get_user(ch, ubuf++) causes an unexpected extra
+increment after the macro is expanded.
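+
+(A minimal sketch of the hazard, not taken from the kernel sources;
+check_ok() below is a made-up stand-in for access_ok(): a macro that
+mentions its pointer argument more than once also evaluates its side
+effects more than once.
+
+	#define GET_TWICE(x, p)	(check_ok(p) ? ((x) = *(p), 0) : -1)
+	#define GET_ONCE(x, p)						\
+	({								\
+		__typeof__(p) __p = (p);  /* evaluate (p) exactly once */ \
+		check_ok(__p) ? ((x) = *__p, 0) : -1;			\
+	})
+
+With GET_TWICE(ch, ubuf++) the pointer is incremented twice; with
+GET_ONCE() it is incremented only once, which is the pattern this patch
+applies to get_user()/put_user().)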
+
+Signed-off-by: Zong Li <zong@andestech.com>
+Acked-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/include/asm/uaccess.h | 26 ++++++++++++++------------
+ 1 file changed, 14 insertions(+), 12 deletions(-)
+
+--- a/arch/nds32/include/asm/uaccess.h
++++ b/arch/nds32/include/asm/uaccess.h
+@@ -78,8 +78,9 @@ static inline void set_fs(mm_segment_t f
+ #define get_user(x,p) \
+ ({ \
+ long __e = -EFAULT; \
+- if(likely(access_ok(VERIFY_READ, p, sizeof(*p)))) { \
+- __e = __get_user(x,p); \
++ const __typeof__(*(p)) __user *__p = (p); \
++ if(likely(access_ok(VERIFY_READ, __p, sizeof(*__p)))) { \
++ __e = __get_user(x, __p); \
+ } else \
+ x = 0; \
+ __e; \
+@@ -99,10 +100,10 @@ static inline void set_fs(mm_segment_t f
+
+ #define __get_user_err(x,ptr,err) \
+ do { \
+- unsigned long __gu_addr = (unsigned long)(ptr); \
++ const __typeof__(*(ptr)) __user *__gu_addr = (ptr); \
+ unsigned long __gu_val; \
+- __chk_user_ptr(ptr); \
+- switch (sizeof(*(ptr))) { \
++ __chk_user_ptr(__gu_addr); \
++ switch (sizeof(*(__gu_addr))) { \
+ case 1: \
+ __get_user_asm("lbi",__gu_val,__gu_addr,err); \
+ break; \
+@@ -119,7 +120,7 @@ do { \
+ BUILD_BUG(); \
+ break; \
+ } \
+- (x) = (__typeof__(*(ptr)))__gu_val; \
++ (x) = (__typeof__(*(__gu_addr)))__gu_val; \
+ } while (0)
+
+ #define __get_user_asm(inst,x,addr,err) \
+@@ -169,8 +170,9 @@ do { \
+ #define put_user(x,p) \
+ ({ \
+ long __e = -EFAULT; \
+- if(likely(access_ok(VERIFY_WRITE, p, sizeof(*p)))) { \
+- __e = __put_user(x,p); \
++ __typeof__(*(p)) __user *__p = (p); \
++ if(likely(access_ok(VERIFY_WRITE, __p, sizeof(*__p)))) { \
++ __e = __put_user(x, __p); \
+ } \
+ __e; \
+ })
+@@ -189,10 +191,10 @@ do { \
+
+ #define __put_user_err(x,ptr,err) \
+ do { \
+- unsigned long __pu_addr = (unsigned long)(ptr); \
+- __typeof__(*(ptr)) __pu_val = (x); \
+- __chk_user_ptr(ptr); \
+- switch (sizeof(*(ptr))) { \
++ __typeof__(*(ptr)) __user *__pu_addr = (ptr); \
++ __typeof__(*(__pu_addr)) __pu_val = (x); \
++ __chk_user_ptr(__pu_addr); \
++ switch (sizeof(*(__pu_addr))) { \
+ case 1: \
+ __put_user_asm("sbi",__pu_val,__pu_addr,err); \
+ break; \
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Greentime Hu <greentime@andestech.com>
+Date: Wed, 18 Jul 2018 09:54:55 +0800
+Subject: nds32: fix logic for module
+
+From: Greentime Hu <greentime@andestech.com>
+
+[ Upstream commit 1dfdf99106668679b0de5a62fd4f42c1a11c9445 ]
+
+This bug was reported by Dan Carpenter. We shall use ~loc_mask instead of
+!loc_mask because we need to AND (&) with the bits of ~loc_mask.
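+
+(Illustrative only, assuming a 32-bit mask:
+
+	unsigned int loc_mask = 0xffff0000;
+	unsigned int a = ~loc_mask;	/* 0x0000ffff - bitwise NOT keeps the other bits  */
+	unsigned int b = !loc_mask;	/* 0          - logical NOT of any non-zero value */
+
+so "tmp &= ~loc_mask" preserves the bits outside loc_mask, while the old
+"tmp &= !loc_mask" always zeroed tmp.)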
+
+Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
+Fixes: c9a4a8da6baa ("nds32: Loadable modules")
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/kernel/module.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/arch/nds32/kernel/module.c
++++ b/arch/nds32/kernel/module.c
+@@ -40,7 +40,7 @@ void do_reloc16(unsigned int val, unsign
+
+ tmp2 = tmp & loc_mask;
+ if (partial_in_place) {
+- tmp &= (!loc_mask);
++ tmp &= (~loc_mask);
+ tmp =
+ tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ } else {
+@@ -70,7 +70,7 @@ void do_reloc32(unsigned int val, unsign
+
+ tmp2 = tmp & loc_mask;
+ if (partial_in_place) {
+- tmp &= (!loc_mask);
++ tmp &= (~loc_mask);
+ tmp =
+ tmp2 | ((tmp + ((val & val_mask) >> val_shift)) & val_mask);
+ } else {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Greentime Hu <greentime@andestech.com>
+Date: Tue, 4 Sep 2018 14:25:57 +0800
+Subject: nds32: linker script: GCOV kernel may refer to data in __exit
+
+From: Greentime Hu <greentime@andestech.com>
+
+[ Upstream commit 3350139c0ff3c95724b784f7109987d533cb3ecd ]
+
+This patch fixes the nds32 allmodconfig/allyesconfig build error:
+a GCOV-enabled kernel embeds counters in the kernel for each line,
+and part of that lands in __exit text. So we need to keep
+EXIT_TEXT and EXIT_DATA if CONFIG_GCOV_KERNEL=y.
+
+Link: https://lkml.org/lkml/2018/9/1/125
+Signed-off-by: Greentime Hu <greentime@andestech.com>
+Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ arch/nds32/kernel/vmlinux.lds.S | 12 ++++++++++++
+ 1 file changed, 12 insertions(+)
+
+--- a/arch/nds32/kernel/vmlinux.lds.S
++++ b/arch/nds32/kernel/vmlinux.lds.S
+@@ -13,14 +13,26 @@ OUTPUT_ARCH(nds32)
+ ENTRY(_stext_lma)
+ jiffies = jiffies_64;
+
++#if defined(CONFIG_GCOV_KERNEL)
++#define NDS32_EXIT_KEEP(x) x
++#else
++#define NDS32_EXIT_KEEP(x)
++#endif
++
+ SECTIONS
+ {
+ _stext_lma = TEXTADDR - LOAD_OFFSET;
+ . = TEXTADDR;
+ __init_begin = .;
+ HEAD_TEXT_SECTION
++ .exit.text : {
++ NDS32_EXIT_KEEP(EXIT_TEXT)
++ }
+ INIT_TEXT_SECTION(PAGE_SIZE)
+ INIT_DATA_SECTION(16)
++ .exit.data : {
++ NDS32_EXIT_KEEP(EXIT_DATA)
++ }
+ PERCPU_SECTION(L1_CACHE_BYTES)
+ __init_end = .;
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Jia-Ju Bai <baijiaju1990@gmail.com>
+Date: Sat, 1 Sep 2018 20:11:05 +0800
+Subject: net: cadence: Fix a sleep-in-atomic-context bug in macb_halt_tx()
+
+From: Jia-Ju Bai <baijiaju1990@gmail.com>
+
+[ Upstream commit 16fe10cf92783ed9ceb182d6ea2b8adf5e8ec1b8 ]
+
+The kernel module may sleep while holding a spinlock.
+
+The function call paths (from bottom to top) in Linux-4.16 are:
+
+[FUNC] usleep_range
+drivers/net/ethernet/cadence/macb_main.c, 648:
+ usleep_range in macb_halt_tx
+drivers/net/ethernet/cadence/macb_main.c, 730:
+ macb_halt_tx in macb_tx_error_task
+drivers/net/ethernet/cadence/macb_main.c, 721:
+ _raw_spin_lock_irqsave in macb_tx_error_task
+
+To fix this bug, usleep_range() is replaced with udelay().
+
+This bug is found by my static analysis tool DSAC.
+
+Signed-off-by: Jia-Ju Bai <baijiaju1990@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/cadence/macb_main.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/net/ethernet/cadence/macb_main.c
++++ b/drivers/net/ethernet/cadence/macb_main.c
+@@ -648,7 +648,7 @@ static int macb_halt_tx(struct macb *bp)
+ if (!(status & MACB_BIT(TGO)))
+ return 0;
+
+- usleep_range(10, 250);
++ udelay(250);
+ } while (time_before(halt_time, timeout));
+
+ return -ETIMEDOUT;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Tony Lindgren <tony@atomide.com>
+Date: Wed, 29 Aug 2018 08:00:24 -0700
+Subject: net: ethernet: cpsw-phy-sel: prefer phandle for phy sel
+
+From: Tony Lindgren <tony@atomide.com>
+
+[ Upstream commit 18eb8aea7fb2fb4490e578b1b8a1096c34b2fc48 ]
+
+The cpsw-phy-sel device is not a child of the cpsw interconnect target
+module. It lives in the system control module.
+
+Let's fix this issue by trying the cpsw-phy-sel phandle first if it
+exists, and if not, falling back to the current behaviour of looking
+for a cpsw-phy-sel child. That way the phy sel driver can be a child of
+the system control module, where it belongs in the device tree.
+
+Without this fix, we cannot have a proper interconnect target module
+hierarchy in device tree for things like genpd.
+
+Note that deferred probe is mostly not supported by cpsw and this patch
+does not attempt to fix that. In case deferred probe support is needed,
+this could be added to cpsw_slave_open() and phy_connect() so they start
+handling and returning errors.
+
+As for documentation, it looks like cpsw-phy-sel is used by all cpsw device
+tree nodes but is missing from the related binding documentation, so let's
+also update the binding documentation accordingly.
+
+Cc: devicetree@vger.kernel.org
+Cc: Andrew Lunn <andrew@lunn.ch>
+Cc: Grygorii Strashko <grygorii.strashko@ti.com>
+Cc: Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org>
+Cc: Mark Rutland <mark.rutland@arm.com>
+Cc: Murali Karicheri <m-karicheri2@ti.com>
+Cc: Rob Herring <robh+dt@kernel.org>
+Signed-off-by: Tony Lindgren <tony@atomide.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/ti/cpsw-phy-sel.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/ti/cpsw-phy-sel.c
++++ b/drivers/net/ethernet/ti/cpsw-phy-sel.c
+@@ -170,10 +170,13 @@ void cpsw_phy_sel(struct device *dev, ph
+ struct device_node *node;
+ struct cpsw_phy_sel_priv *priv;
+
+- node = of_get_child_by_name(dev->of_node, "cpsw-phy-sel");
++ node = of_parse_phandle(dev->of_node, "cpsw-phy-sel", 0);
+ if (!node) {
+- dev_err(dev, "Phy mode driver DT not found\n");
+- return;
++ node = of_get_child_by_name(dev->of_node, "cpsw-phy-sel");
++ if (!node) {
++ dev_err(dev, "Phy mode driver DT not found\n");
++ return;
++ }
+ }
+
+ dev = bus_find_device(&platform_bus_type, NULL, node, match);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Peng Li <lipeng321@huawei.com>
+Date: Mon, 27 Aug 2018 09:59:30 +0800
+Subject: net: hns: add netif_carrier_off before change speed and duplex
+
+From: Peng Li <lipeng321@huawei.com>
+
+[ Upstream commit 455c4401fe7a538facaffb35b906ce19f1ece474 ]
+
+If there are packets in the hardware when changing the speed
+or duplex, it may cause the hardware to hang.
+
+This patch adds netif_carrier_off before changing the speed and
+duplex in ethtool_ops.set_link_ksettings, and adds
+netif_carrier_on after the change completes.
+
+Signed-off-by: Peng Li <lipeng321@huawei.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/hisilicon/hns/hns_ethtool.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ethtool.c
+@@ -243,7 +243,9 @@ static int hns_nic_set_link_ksettings(st
+ }
+
+ if (h->dev->ops->adjust_link) {
++ netif_carrier_off(net_dev);
+ h->dev->ops->adjust_link(h, (int)speed, cmd->base.duplex);
++ netif_carrier_on(net_dev);
+ return 0;
+ }
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Peng Li <lipeng321@huawei.com>
+Date: Mon, 27 Aug 2018 09:59:29 +0800
+Subject: net: hns: add the code for cleaning pkt in chip
+
+From: Peng Li <lipeng321@huawei.com>
+
+[ Upstream commit 31fabbee8f5c658c3fa1603c66e9e4f51ea8c2c6 ]
+
+If there are packets in the hardware when changing the speed
+or duplex, it may cause the hardware to hang.
+
+This patch adds code that waits for the chip to drain all
+packets (TX & RX) in the chip when the driver uses the
+"adjust link" function.
+
+This patch cleans the packets as follows (see the sketch after this list):
+1) close rx of the chip, close tx of the protocol stack.
+2) wait for rcb, ppe and mac to drain.
+3) adjust the link.
+4) open rx of the chip, open tx of the protocol stack.
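+
+(A rough sketch of the chip-side sequence for AE_VERSION_2, using the
+helpers added by this patch:
+
+	hns_mac_disable(mac_cb, MAC_COMM_MODE_RX);	/* 1) stop chip RX           */
+	if (hns_ae_wait_flow_down(handle)) {		/* 2) wait rcb/ppe/mac drain */
+		hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
+		return;
+	}
+	hns_mac_adjust_link(mac_cb, speed, duplex);	/* 3) adjust link            */
+	hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);	/* 4) restart chip RX        */
+
+The protocol-stack TX side of steps 1) and 4) is handled separately via
+netif_carrier_off()/netif_carrier_on() in hns_enet.c, as the diff below
+shows.)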
+
+Signed-off-by: Peng Li <lipeng321@huawei.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/hisilicon/hns/hnae.h | 2
+ drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c | 67 ++++++++++++++++++++-
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c | 36 +++++++++++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c | 44 +++++++++++++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h | 8 ++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c | 29 +++++++++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h | 3
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c | 23 +++++++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h | 1
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c | 23 +++++++
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h | 1
+ drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h | 1
+ drivers/net/ethernet/hisilicon/hns/hns_enet.c | 21 +++++-
+ 13 files changed, 255 insertions(+), 4 deletions(-)
+
+--- a/drivers/net/ethernet/hisilicon/hns/hnae.h
++++ b/drivers/net/ethernet/hisilicon/hns/hnae.h
+@@ -486,6 +486,8 @@ struct hnae_ae_ops {
+ u8 *auto_neg, u16 *speed, u8 *duplex);
+ void (*toggle_ring_irq)(struct hnae_ring *ring, u32 val);
+ void (*adjust_link)(struct hnae_handle *handle, int speed, int duplex);
++ bool (*need_adjust_link)(struct hnae_handle *handle,
++ int speed, int duplex);
+ int (*set_loopback)(struct hnae_handle *handle,
+ enum hnae_loop loop_mode, int en);
+ void (*get_ring_bdnum_limit)(struct hnae_queue *queue,
+--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+@@ -155,6 +155,41 @@ static void hns_ae_put_handle(struct hna
+ hns_ae_get_ring_pair(handle->qs[i])->used_by_vf = 0;
+ }
+
++static int hns_ae_wait_flow_down(struct hnae_handle *handle)
++{
++ struct dsaf_device *dsaf_dev;
++ struct hns_ppe_cb *ppe_cb;
++ struct hnae_vf_cb *vf_cb;
++ int ret;
++ int i;
++
++ for (i = 0; i < handle->q_num; i++) {
++ ret = hns_rcb_wait_tx_ring_clean(handle->qs[i]);
++ if (ret)
++ return ret;
++ }
++
++ ppe_cb = hns_get_ppe_cb(handle);
++ ret = hns_ppe_wait_tx_fifo_clean(ppe_cb);
++ if (ret)
++ return ret;
++
++ dsaf_dev = hns_ae_get_dsaf_dev(handle->dev);
++ if (!dsaf_dev)
++ return -EINVAL;
++ ret = hns_dsaf_wait_pkt_clean(dsaf_dev, handle->dport_id);
++ if (ret)
++ return ret;
++
++ vf_cb = hns_ae_get_vf_cb(handle);
++ ret = hns_mac_wait_fifo_clean(vf_cb->mac_cb);
++ if (ret)
++ return ret;
++
++ mdelay(10);
++ return 0;
++}
++
+ static void hns_ae_ring_enable_all(struct hnae_handle *handle, int val)
+ {
+ int q_num = handle->q_num;
+@@ -399,12 +434,41 @@ static int hns_ae_get_mac_info(struct hn
+ return hns_mac_get_port_info(mac_cb, auto_neg, speed, duplex);
+ }
+
++static bool hns_ae_need_adjust_link(struct hnae_handle *handle, int speed,
++ int duplex)
++{
++ struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
++
++ return hns_mac_need_adjust_link(mac_cb, speed, duplex);
++}
++
+ static void hns_ae_adjust_link(struct hnae_handle *handle, int speed,
+ int duplex)
+ {
+ struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+
+- hns_mac_adjust_link(mac_cb, speed, duplex);
++ switch (mac_cb->dsaf_dev->dsaf_ver) {
++ case AE_VERSION_1:
++ hns_mac_adjust_link(mac_cb, speed, duplex);
++ break;
++
++ case AE_VERSION_2:
++ /* chip need to clear all pkt inside */
++ hns_mac_disable(mac_cb, MAC_COMM_MODE_RX);
++ if (hns_ae_wait_flow_down(handle)) {
++ hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++ break;
++ }
++
++ hns_mac_adjust_link(mac_cb, speed, duplex);
++ hns_mac_enable(mac_cb, MAC_COMM_MODE_RX);
++ break;
++
++ default:
++ break;
++ }
++
++ return;
+ }
+
+ static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue,
+@@ -902,6 +966,7 @@ static struct hnae_ae_ops hns_dsaf_ops =
+ .get_status = hns_ae_get_link_status,
+ .get_info = hns_ae_get_mac_info,
+ .adjust_link = hns_ae_adjust_link,
++ .need_adjust_link = hns_ae_need_adjust_link,
+ .set_loopback = hns_ae_config_loopback,
+ .get_ring_bdnum_limit = hns_ae_get_ring_bdnum_limit,
+ .get_pauseparam = hns_ae_get_pauseparam,
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_gmac.c
+@@ -257,6 +257,16 @@ static void hns_gmac_get_pausefrm_cfg(vo
+ *tx_pause_en = dsaf_get_bit(pause_en, GMAC_PAUSE_EN_TX_FDFC_B);
+ }
+
++static bool hns_gmac_need_adjust_link(void *mac_drv, enum mac_speed speed,
++ int duplex)
++{
++ struct mac_driver *drv = (struct mac_driver *)mac_drv;
++ struct hns_mac_cb *mac_cb = drv->mac_cb;
++
++ return (mac_cb->speed != speed) ||
++ (mac_cb->half_duplex == duplex);
++}
++
+ static int hns_gmac_adjust_link(void *mac_drv, enum mac_speed speed,
+ u32 full_duplex)
+ {
+@@ -309,6 +319,30 @@ static void hns_gmac_set_promisc(void *m
+ hns_gmac_set_uc_match(mac_drv, en);
+ }
+
++int hns_gmac_wait_fifo_clean(void *mac_drv)
++{
++ struct mac_driver *drv = (struct mac_driver *)mac_drv;
++ int wait_cnt;
++ u32 val;
++
++ wait_cnt = 0;
++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++ val = dsaf_read_dev(drv, GMAC_FIFO_STATE_REG);
++ /* bit5~bit0 is not send complete pkts */
++ if ((val & 0x3f) == 0)
++ break;
++ usleep_range(100, 200);
++ }
++
++ if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++ dev_err(drv->dev,
++ "hns ge %d fifo was not idle.\n", drv->mac_id);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ static void hns_gmac_init(void *mac_drv)
+ {
+ u32 port;
+@@ -690,6 +724,7 @@ void *hns_gmac_config(struct hns_mac_cb
+ mac_drv->mac_disable = hns_gmac_disable;
+ mac_drv->mac_free = hns_gmac_free;
+ mac_drv->adjust_link = hns_gmac_adjust_link;
++ mac_drv->need_adjust_link = hns_gmac_need_adjust_link;
+ mac_drv->set_tx_auto_pause_frames = hns_gmac_set_tx_auto_pause_frames;
+ mac_drv->config_max_frame_length = hns_gmac_config_max_frame_length;
+ mac_drv->mac_pausefrm_cfg = hns_gmac_pause_frm_cfg;
+@@ -717,6 +752,7 @@ void *hns_gmac_config(struct hns_mac_cb
+ mac_drv->get_strings = hns_gmac_get_strings;
+ mac_drv->update_stats = hns_gmac_update_stats;
+ mac_drv->set_promiscuous = hns_gmac_set_promisc;
++ mac_drv->wait_fifo_clean = hns_gmac_wait_fifo_clean;
+
+ return (void *)mac_drv;
+ }
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+@@ -114,6 +114,26 @@ int hns_mac_get_port_info(struct hns_mac
+ return 0;
+ }
+
++/**
++ *hns_mac_is_adjust_link - check is need change mac speed and duplex register
++ *@mac_cb: mac device
++ *@speed: phy device speed
++ *@duplex:phy device duplex
++ *
++ */
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
++{
++ struct mac_driver *mac_ctrl_drv;
++
++ mac_ctrl_drv = (struct mac_driver *)(mac_cb->priv.mac);
++
++ if (mac_ctrl_drv->need_adjust_link)
++ return mac_ctrl_drv->need_adjust_link(mac_ctrl_drv,
++ (enum mac_speed)speed, duplex);
++ else
++ return true;
++}
++
+ void hns_mac_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex)
+ {
+ int ret;
+@@ -430,6 +450,16 @@ int hns_mac_vm_config_bc_en(struct hns_m
+ return 0;
+ }
+
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb)
++{
++ struct mac_driver *drv = hns_mac_get_drv(mac_cb);
++
++ if (drv->wait_fifo_clean)
++ return drv->wait_fifo_clean(drv);
++
++ return 0;
++}
++
+ void hns_mac_reset(struct hns_mac_cb *mac_cb)
+ {
+ struct mac_driver *drv = hns_mac_get_drv(mac_cb);
+@@ -999,6 +1029,20 @@ static int hns_mac_get_max_port_num(stru
+ return DSAF_MAX_PORT_NUM;
+ }
+
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++ struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++ mac_ctrl_drv->mac_enable(mac_cb->priv.mac, mode);
++}
++
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode)
++{
++ struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
++
++ mac_ctrl_drv->mac_disable(mac_cb->priv.mac, mode);
++}
++
+ /**
+ * hns_mac_init - init mac
+ * @dsaf_dev: dsa fabric device struct pointer
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+@@ -356,6 +356,9 @@ struct mac_driver {
+ /*adjust mac mode of port,include speed and duplex*/
+ int (*adjust_link)(void *mac_drv, enum mac_speed speed,
+ u32 full_duplex);
++ /* need adjust link */
++ bool (*need_adjust_link)(void *mac_drv, enum mac_speed speed,
++ int duplex);
+ /* config autoegotaite mode of port*/
+ void (*set_an_mode)(void *mac_drv, u8 enable);
+ /* config loopbank mode */
+@@ -394,6 +397,7 @@ struct mac_driver {
+ void (*get_info)(void *mac_drv, struct mac_info *mac_info);
+
+ void (*update_stats)(void *mac_drv);
++ int (*wait_fifo_clean)(void *mac_drv);
+
+ enum mac_mode mac_mode;
+ u8 mac_id;
+@@ -427,6 +431,7 @@ void *hns_xgmac_config(struct hns_mac_cb
+
+ int hns_mac_init(struct dsaf_device *dsaf_dev);
+ void mac_adjust_link(struct net_device *net_dev);
++bool hns_mac_need_adjust_link(struct hns_mac_cb *mac_cb, int speed, int duplex);
+ void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status);
+ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb, u32 vmid, char *addr);
+ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
+@@ -463,5 +468,8 @@ int hns_mac_add_uc_addr(struct hns_mac_c
+ int hns_mac_rm_uc_addr(struct hns_mac_cb *mac_cb, u8 vf_id,
+ const unsigned char *addr);
+ int hns_mac_clr_multicast(struct hns_mac_cb *mac_cb, int vfn);
++void hns_mac_enable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++void hns_mac_disable(struct hns_mac_cb *mac_cb, enum mac_commom_mode mode);
++int hns_mac_wait_fifo_clean(struct hns_mac_cb *mac_cb);
+
+ #endif /* _HNS_DSAF_MAC_H */
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+@@ -2733,6 +2733,35 @@ void hns_dsaf_set_promisc_tcam(struct ds
+ soft_mac_entry->index = enable ? entry_index : DSAF_INVALID_ENTRY_IDX;
+ }
+
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port)
++{
++ u32 val, val_tmp;
++ int wait_cnt;
++
++ if (port >= DSAF_SERVICE_NW_NUM)
++ return 0;
++
++ wait_cnt = 0;
++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++ val = dsaf_read_dev(dsaf_dev, DSAF_VOQ_IN_PKT_NUM_0_REG +
++ (port + DSAF_XGE_NUM) * 0x40);
++ val_tmp = dsaf_read_dev(dsaf_dev, DSAF_VOQ_OUT_PKT_NUM_0_REG +
++ (port + DSAF_XGE_NUM) * 0x40);
++ if (val == val_tmp)
++ break;
++
++ usleep_range(100, 200);
++ }
++
++ if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++ dev_err(dsaf_dev->dev, "hns dsaf clean wait timeout(%u - %u).\n",
++ val, val_tmp);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ /**
+ * dsaf_probe - probo dsaf dev
+ * @pdev: dasf platform device
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+@@ -44,6 +44,8 @@ struct hns_mac_cb;
+ #define DSAF_ROCE_CREDIT_CHN 8
+ #define DSAF_ROCE_CHAN_MODE 3
+
++#define HNS_MAX_WAIT_CNT 10000
++
+ enum dsaf_roce_port_mode {
+ DSAF_ROCE_6PORT_MODE,
+ DSAF_ROCE_4PORT_MODE,
+@@ -463,5 +465,6 @@ int hns_dsaf_rm_mac_addr(
+
+ int hns_dsaf_clr_mac_mc_port(struct dsaf_device *dsaf_dev,
+ u8 mac_id, u8 port_num);
++int hns_dsaf_wait_pkt_clean(struct dsaf_device *dsaf_dev, int port);
+
+ #endif /* __HNS_DSAF_MAIN_H__ */
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+@@ -274,6 +274,29 @@ static void hns_ppe_exc_irq_en(struct hn
+ dsaf_write_dev(ppe_cb, PPE_INTEN_REG, msk_vlue & vld_msk);
+ }
+
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb)
++{
++ int wait_cnt;
++ u32 val;
++
++ wait_cnt = 0;
++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++ val = dsaf_read_dev(ppe_cb, PPE_CURR_TX_FIFO0_REG) & 0x3ffU;
++ if (!val)
++ break;
++
++ usleep_range(100, 200);
++ }
++
++ if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++ dev_err(ppe_cb->dev, "hns ppe tx fifo clean wait timeout, still has %u pkt.\n",
++ val);
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ /**
+ * ppe_init_hw - init ppe
+ * @ppe_cb: ppe device
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+@@ -100,6 +100,7 @@ struct ppe_common_cb {
+
+ };
+
++int hns_ppe_wait_tx_fifo_clean(struct hns_ppe_cb *ppe_cb);
+ int hns_ppe_init(struct dsaf_device *dsaf_dev);
+
+ void hns_ppe_uninit(struct dsaf_device *dsaf_dev);
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+@@ -66,6 +66,29 @@ void hns_rcb_wait_fbd_clean(struct hnae_
+ "queue(%d) wait fbd(%d) clean fail!!\n", i, fbd_num);
+ }
+
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs)
++{
++ u32 head, tail;
++ int wait_cnt;
++
++ tail = dsaf_read_dev(&qs->tx_ring, RCB_REG_TAIL);
++ wait_cnt = 0;
++ while (wait_cnt++ < HNS_MAX_WAIT_CNT) {
++ head = dsaf_read_dev(&qs->tx_ring, RCB_REG_HEAD);
++ if (tail == head)
++ break;
++
++ usleep_range(100, 200);
++ }
++
++ if (wait_cnt >= HNS_MAX_WAIT_CNT) {
++ dev_err(qs->dev->dev, "rcb wait timeout, head not equal to tail.\n");
++ return -EBUSY;
++ }
++
++ return 0;
++}
++
+ /**
+ *hns_rcb_reset_ring_hw - ring reset
+ *@q: ring struct pointer
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+@@ -136,6 +136,7 @@ void hns_rcbv2_int_clr_hw(struct hnae_qu
+ void hns_rcb_init_hw(struct ring_pair_cb *ring);
+ void hns_rcb_reset_ring_hw(struct hnae_queue *q);
+ void hns_rcb_wait_fbd_clean(struct hnae_queue **qs, int q_num, u32 flag);
++int hns_rcb_wait_tx_ring_clean(struct hnae_queue *qs);
+ u32 hns_rcb_get_rx_coalesced_frames(
+ struct rcb_common_cb *rcb_common, u32 port_idx);
+ u32 hns_rcb_get_tx_coalesced_frames(
+--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
++++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+@@ -464,6 +464,7 @@
+ #define RCB_RING_INTMSK_TX_OVERTIME_REG 0x000C4
+ #define RCB_RING_INTSTS_TX_OVERTIME_REG 0x000C8
+
++#define GMAC_FIFO_STATE_REG 0x0000UL
+ #define GMAC_DUPLEX_TYPE_REG 0x0008UL
+ #define GMAC_FD_FC_TYPE_REG 0x000CUL
+ #define GMAC_TX_WATER_LINE_REG 0x0010UL
+--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
++++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+@@ -1212,11 +1212,26 @@ static void hns_nic_adjust_link(struct n
+ struct hnae_handle *h = priv->ae_handle;
+ int state = 1;
+
++ /* If there is no phy, do not need adjust link */
+ if (ndev->phydev) {
+- h->dev->ops->adjust_link(h, ndev->phydev->speed,
+- ndev->phydev->duplex);
+- state = ndev->phydev->link;
++ /* When phy link down, do nothing */
++ if (ndev->phydev->link == 0)
++ return;
++
++ if (h->dev->ops->need_adjust_link(h, ndev->phydev->speed,
++ ndev->phydev->duplex)) {
++ /* because Hi161X chip don't support to change gmac
++ * speed and duplex with traffic. Delay 200ms to
++ * make sure there is no more data in chip FIFO.
++ */
++ netif_carrier_off(ndev);
++ msleep(200);
++ h->dev->ops->adjust_link(h, ndev->phydev->speed,
++ ndev->phydev->duplex);
++ netif_carrier_on(ndev);
++ }
+ }
++
+ state = state && h->dev->ops->get_status(h);
+
+ if (state != priv->link) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Ivan Mikhaylov <ivan@de.ibm.com>
+Date: Mon, 3 Sep 2018 10:26:28 +0300
+Subject: net/ibm/emac: wrong emac_calc_base call was used by typo
+
+From: Ivan Mikhaylov <ivan@de.ibm.com>
+
+[ Upstream commit bf68066fccb10fce6bbffdda24ee2ae314c9c5b2 ]
+
+__emac_calc_base_mr1 was used instead of __emac4_calc_base_mr1
+for emac4syn due to a copy-paste mistake.
+
+Fixes: 45d6e545505fd32edb812f085be7de45b6a5c0af ("net/ibm/emac: add 8192 rx/tx fifo size")
+Signed-off-by: Ivan Mikhaylov <ivan@de.ibm.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/ibm/emac/core.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/drivers/net/ethernet/ibm/emac/core.c
++++ b/drivers/net/ethernet/ibm/emac/core.c
+@@ -494,9 +494,6 @@ static u32 __emac_calc_base_mr1(struct e
+ case 16384:
+ ret |= EMAC_MR1_RFS_16K;
+ break;
+- case 8192:
+- ret |= EMAC4_MR1_RFS_8K;
+- break;
+ case 4096:
+ ret |= EMAC_MR1_RFS_4K;
+ break;
+@@ -537,6 +534,9 @@ static u32 __emac4_calc_base_mr1(struct
+ case 16384:
+ ret |= EMAC4_MR1_RFS_16K;
+ break;
++ case 8192:
++ ret |= EMAC4_MR1_RFS_8K;
++ break;
+ case 4096:
+ ret |= EMAC4_MR1_RFS_4K;
+ break;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Baruch Siach <baruch@tkos.co.il>
+Date: Wed, 29 Aug 2018 09:44:39 +0300
+Subject: net: mvpp2: initialize port of_node pointer
+
+From: Baruch Siach <baruch@tkos.co.il>
+
+[ Upstream commit c4053ef322081554765e1b708d6cdd8855e1d72d ]
+
+Without a valid of_node in struct device we can't find the mvpp2 port
+device by its DT node. Specifically, this breaks
+of_find_net_device_by_node().
+
+For example, the Armada 8040 based Clearfog GT-8K uses a Marvell 88E6141
+switch connected to the &cp1_eth2 port:
+
+&cp1_mdio {
+ ...
+
+ switch0: switch0@4 {
+ compatible = "marvell,mv88e6085";
+ ...
+
+ ports {
+ ...
+
+ port@5 {
+ reg = <5>;
+ label = "cpu";
+ ethernet = <&cp1_eth2>;
+ };
+ };
+ };
+};
+
+Without this patch, dsa_register_switch() returns -EPROBE_DEFER because
+of_find_net_device_by_node() can't find the device_node of the &cp1_eth2
+device.
+
+Reviewed-by: Andrew Lunn <andrew@lunn.ch>
+Signed-off-by: Baruch Siach <baruch@tkos.co.il>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
++++ b/drivers/net/ethernet/marvell/mvpp2/mvpp2_main.c
+@@ -4685,6 +4685,7 @@ static int mvpp2_port_probe(struct platf
+ dev->min_mtu = ETH_MIN_MTU;
+ /* 9704 == 9728 - 20 and rounding to 8 */
+ dev->max_mtu = MVPP2_BM_JUMBO_PKT_SIZE;
++ dev->dev.of_node = port_node;
+
+ /* Phylink isn't used w/ ACPI as of now */
+ if (port_node) {
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Haim Dreyfuss <haim.dreyfuss@intel.com>
+Date: Tue, 21 Aug 2018 09:22:19 +0300
+Subject: nl80211: Fix nla_put_u8 to u16 for NL80211_WMMR_TXOP
+
+From: Haim Dreyfuss <haim.dreyfuss@intel.com>
+
+[ Upstream commit d3c89bbc7491d5e288ca2993e999d24ba9ff52ad ]
+
+TXOP (also known as Channel Occupancy Time) is a u16 and should be
+added using nla_put_u16() instead of nla_put_u8(); fix that.
+
+Fixes: 50f32718e125 ("nl80211: Add wmm rule attribute to NL80211_CMD_GET_WIPHY dump command")
+Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/nl80211.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -672,8 +672,8 @@ static int nl80211_msg_put_wmm_rules(str
+ rule->wmm_rule.client[j].cw_max) ||
+ nla_put_u8(msg, NL80211_WMMR_AIFSN,
+ rule->wmm_rule.client[j].aifsn) ||
+- nla_put_u8(msg, NL80211_WMMR_TXOP,
+- rule->wmm_rule.client[j].cot))
++ nla_put_u16(msg, NL80211_WMMR_TXOP,
++ rule->wmm_rule.client[j].cot))
+ goto nla_put_failure;
+
+ nla_nest_end(msg, nl_wmm_rule);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Haim Dreyfuss <haim.dreyfuss@intel.com>
+Date: Tue, 21 Aug 2018 09:22:20 +0300
+Subject: nl80211: Pass center frequency in kHz instead of MHz
+
+From: Haim Dreyfuss <haim.dreyfuss@intel.com>
+
+[ Upstream commit b88d26d97c41680f7327e5fb8061ad0037877f40 ]
+
+freq_reg_info() expects to get the frequency in kHz. Instead we
+accidentally pass it in MHz. Thus, the function currently always
+returns an error rule. Fix that.
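+
+(Worked example, for illustration: channel 1 has center_freq = 2412, in
+MHz.  freq_reg_info() wants 2412000 kHz, so passing the raw 2412 made
+every lookup miss the regulatory rules and return an error pointer.)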
+
+Fixes: 50f32718e125 ("nl80211: Add wmm rule attribute to NL80211_CMD_GET_WIPHY dump command")
+Signed-off-by: Haim Dreyfuss <haim.dreyfuss@intel.com>
+Signed-off-by: Luca Coelho <luciano.coelho@intel.com>
+[fix kHz/MHz in commit message]
+Signed-off-by: Johannes Berg <johannes.berg@intel.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/wireless/nl80211.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/wireless/nl80211.c
++++ b/net/wireless/nl80211.c
+@@ -764,7 +764,7 @@ static int nl80211_msg_put_channel(struc
+
+ if (large) {
+ const struct ieee80211_reg_rule *rule =
+- freq_reg_info(wiphy, chan->center_freq);
++ freq_reg_info(wiphy, MHZ_TO_KHZ(chan->center_freq));
+
+ if (!IS_ERR_OR_NULL(rule) && rule->has_wmm) {
+ if (nl80211_msg_put_wmm_rules(msg, rule))
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Xiao Ni <xni@redhat.com>
+Date: Thu, 30 Aug 2018 15:57:09 +0800
+Subject: RAID10 BUG_ON in raise_barrier when force is true and conf->barrier is 0
+
+From: Xiao Ni <xni@redhat.com>
+
+[ Upstream commit 1d0ffd264204eba1861865560f1f7f7a92919384 ]
+
+In the raid10 reshape_request path, max_sectors is obtained in read_balance. If the
+underlying disks have bad blocks, max_sectors is less than last, and the code jumps
+back to read_more many times, calling raise_barrier(conf, sectors_done != 0) each
+time. In this condition sectors_done is not 0, so the value passed to the force
+argument of raise_barrier is true.
+
+raise_barrier checks conf->barrier when force is true; if force is true and
+conf->barrier is 0, it hits a BUG_ON. In this case reshape_request submits bios to
+the underlying disks, and the bio's callback calls lower_barrier. If a bio finishes
+before raise_barrier is called again, the BUG_ON can trigger.
+
+Add one pair of raise_barrier/lower_barrier to fix this bug.
+
+Signed-off-by: Xiao Ni <xni@redhat.com>
+Suggested-by: Neil Brown <neilb@suse.com>
+Signed-off-by: Shaohua Li <shli@fb.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/md/raid10.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/drivers/md/raid10.c
++++ b/drivers/md/raid10.c
+@@ -4531,11 +4531,12 @@ static sector_t reshape_request(struct m
+ allow_barrier(conf);
+ }
+
++ raise_barrier(conf, 0);
+ read_more:
+ /* Now schedule reads for blocks from sector_nr to last */
+ r10_bio = raid10_alloc_init_r10buf(conf);
+ r10_bio->state = 0;
+- raise_barrier(conf, sectors_done != 0);
++ raise_barrier(conf, 1);
+ atomic_set(&r10_bio->remaining, 0);
+ r10_bio->mddev = mddev;
+ r10_bio->sector = sector_nr;
+@@ -4631,6 +4632,8 @@ read_more:
+ if (sector_nr <= last)
+ goto read_more;
+
++ lower_barrier(conf);
++
+ /* Now that we have done the whole section we can
+ * update reshape_progress
+ */
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: "Dennis Zhou (Facebook)" <dennisszhou@gmail.com>
+Date: Fri, 31 Aug 2018 16:22:42 -0400
+Subject: Revert "blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()"
+
+From: "Dennis Zhou (Facebook)" <dennisszhou@gmail.com>
+
+[ Upstream commit 6b06546206868f723f2061d703a3c3c378dcbf4c ]
+
+This reverts commit 4c6994806f708559c2812b73501406e21ae5dcd0.
+
+Destroying blkgs is tricky because of the nature of the relationship. A
+blkg should go away when either a blkcg or a request_queue goes away.
+However, blkg's pin the blkcg to ensure they remain valid. To break this
+cycle, when a blkcg is offlined, blkgs put back their css ref. This
+eventually lets css_free() get called which frees the blkcg.
+
+The above commit (4c6994806f70) breaks this order of events by trying to
+destroy blkgs in css_free(). As the blkgs still hold references to the
+blkcg, css_free() is never called.
+
+The race between blkcg_bio_issue_check() and cgroup_rmdir() will be
+addressed in the following patch by delaying destruction of a blkg until
+all writeback associated with the blkcg has been finished.
+
+Fixes: 4c6994806f70 ("blk-throttle: fix race between blkcg_bio_issue_check() and cgroup_rmdir()")
+Reviewed-by: Josef Bacik <josef@toxicpanda.com>
+Signed-off-by: Dennis Zhou <dennisszhou@gmail.com>
+Cc: Jiufei Xue <jiufei.xue@linux.alibaba.com>
+Cc: Joseph Qi <joseph.qi@linux.alibaba.com>
+Cc: Tejun Heo <tj@kernel.org>
+Cc: Jens Axboe <axboe@kernel.dk>
+Signed-off-by: Jens Axboe <axboe@kernel.dk>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ block/blk-cgroup.c | 78 +++++++++------------------------------------
+ include/linux/blk-cgroup.h | 1
+ 2 files changed, 16 insertions(+), 63 deletions(-)
+
+--- a/block/blk-cgroup.c
++++ b/block/blk-cgroup.c
+@@ -307,28 +307,11 @@ struct blkcg_gq *blkg_lookup_create(stru
+ }
+ }
+
+-static void blkg_pd_offline(struct blkcg_gq *blkg)
+-{
+- int i;
+-
+- lockdep_assert_held(blkg->q->queue_lock);
+- lockdep_assert_held(&blkg->blkcg->lock);
+-
+- for (i = 0; i < BLKCG_MAX_POLS; i++) {
+- struct blkcg_policy *pol = blkcg_policy[i];
+-
+- if (blkg->pd[i] && !blkg->pd[i]->offline &&
+- pol->pd_offline_fn) {
+- pol->pd_offline_fn(blkg->pd[i]);
+- blkg->pd[i]->offline = true;
+- }
+- }
+-}
+-
+ static void blkg_destroy(struct blkcg_gq *blkg)
+ {
+ struct blkcg *blkcg = blkg->blkcg;
+ struct blkcg_gq *parent = blkg->parent;
++ int i;
+
+ lockdep_assert_held(blkg->q->queue_lock);
+ lockdep_assert_held(&blkcg->lock);
+@@ -337,6 +320,13 @@ static void blkg_destroy(struct blkcg_gq
+ WARN_ON_ONCE(list_empty(&blkg->q_node));
+ WARN_ON_ONCE(hlist_unhashed(&blkg->blkcg_node));
+
++ for (i = 0; i < BLKCG_MAX_POLS; i++) {
++ struct blkcg_policy *pol = blkcg_policy[i];
++
++ if (blkg->pd[i] && pol->pd_offline_fn)
++ pol->pd_offline_fn(blkg->pd[i]);
++ }
++
+ if (parent) {
+ blkg_rwstat_add_aux(&parent->stat_bytes, &blkg->stat_bytes);
+ blkg_rwstat_add_aux(&parent->stat_ios, &blkg->stat_ios);
+@@ -379,7 +369,6 @@ static void blkg_destroy_all(struct requ
+ struct blkcg *blkcg = blkg->blkcg;
+
+ spin_lock(&blkcg->lock);
+- blkg_pd_offline(blkg);
+ blkg_destroy(blkg);
+ spin_unlock(&blkcg->lock);
+ }
+@@ -1006,54 +995,21 @@ static struct cftype blkcg_legacy_files[
+ * @css: css of interest
+ *
+ * This function is called when @css is about to go away and responsible
+- * for offlining all blkgs pd and killing all wbs associated with @css.
+- * blkgs pd offline should be done while holding both q and blkcg locks.
+- * As blkcg lock is nested inside q lock, this function performs reverse
+- * double lock dancing.
++ * for shooting down all blkgs associated with @css. blkgs should be
++ * removed while holding both q and blkcg locks. As blkcg lock is nested
++ * inside q lock, this function performs reverse double lock dancing.
+ *
+ * This is the blkcg counterpart of ioc_release_fn().
+ */
+ static void blkcg_css_offline(struct cgroup_subsys_state *css)
+ {
+ struct blkcg *blkcg = css_to_blkcg(css);
+- struct blkcg_gq *blkg;
+
+ spin_lock_irq(&blkcg->lock);
+
+- hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) {
+- struct request_queue *q = blkg->q;
+-
+- if (spin_trylock(q->queue_lock)) {
+- blkg_pd_offline(blkg);
+- spin_unlock(q->queue_lock);
+- } else {
+- spin_unlock_irq(&blkcg->lock);
+- cpu_relax();
+- spin_lock_irq(&blkcg->lock);
+- }
+- }
+-
+- spin_unlock_irq(&blkcg->lock);
+-
+- wb_blkcg_offline(blkcg);
+-}
+-
+-/**
+- * blkcg_destroy_all_blkgs - destroy all blkgs associated with a blkcg
+- * @blkcg: blkcg of interest
+- *
+- * This function is called when blkcg css is about to free and responsible for
+- * destroying all blkgs associated with @blkcg.
+- * blkgs should be removed while holding both q and blkcg locks. As blkcg lock
+- * is nested inside q lock, this function performs reverse double lock dancing.
+- */
+-static void blkcg_destroy_all_blkgs(struct blkcg *blkcg)
+-{
+- spin_lock_irq(&blkcg->lock);
+ while (!hlist_empty(&blkcg->blkg_list)) {
+ struct blkcg_gq *blkg = hlist_entry(blkcg->blkg_list.first,
+- struct blkcg_gq,
+- blkcg_node);
++ struct blkcg_gq, blkcg_node);
+ struct request_queue *q = blkg->q;
+
+ if (spin_trylock(q->queue_lock)) {
+@@ -1065,7 +1021,10 @@ static void blkcg_destroy_all_blkgs(stru
+ spin_lock_irq(&blkcg->lock);
+ }
+ }
++
+ spin_unlock_irq(&blkcg->lock);
++
++ wb_blkcg_offline(blkcg);
+ }
+
+ static void blkcg_css_free(struct cgroup_subsys_state *css)
+@@ -1073,8 +1032,6 @@ static void blkcg_css_free(struct cgroup
+ struct blkcg *blkcg = css_to_blkcg(css);
+ int i;
+
+- blkcg_destroy_all_blkgs(blkcg);
+-
+ mutex_lock(&blkcg_pol_mutex);
+
+ list_del(&blkcg->all_blkcgs_node);
+@@ -1412,11 +1369,8 @@ void blkcg_deactivate_policy(struct requ
+
+ list_for_each_entry(blkg, &q->blkg_list, q_node) {
+ if (blkg->pd[pol->plid]) {
+- if (!blkg->pd[pol->plid]->offline &&
+- pol->pd_offline_fn) {
++ if (pol->pd_offline_fn)
+ pol->pd_offline_fn(blkg->pd[pol->plid]);
+- blkg->pd[pol->plid]->offline = true;
+- }
+ pol->pd_free_fn(blkg->pd[pol->plid]);
+ blkg->pd[pol->plid] = NULL;
+ }
+--- a/include/linux/blk-cgroup.h
++++ b/include/linux/blk-cgroup.h
+@@ -88,7 +88,6 @@ struct blkg_policy_data {
+ /* the blkg and policy id this per-policy data belongs to */
+ struct blkcg_gq *blkg;
+ int plid;
+- bool offline;
+ };
+
+ /*
--- /dev/null
+From ce01a1575f45bf319e374592656441021a7f5823 Mon Sep 17 00:00:00 2001
+From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Date: Thu, 27 Sep 2018 14:39:19 -0400
+Subject: rseq/selftests: fix parametrized test with -fpie
+
+From: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+
+commit ce01a1575f45bf319e374592656441021a7f5823 upstream.
+
+On x86-64, the parametrized selftest code for rseq crashes with a
+segmentation fault when compiled with -fpie. This happens when the
+param_test binary is loaded at an address beyond 32-bit on x86-64.
+
+The issue is caused by use of a 32-bit register to hold the address
+of the loop counter variable.
+
+Fix this by using a 64-bit register to calculate the address of the
+loop counter variables as an offset from rip.
+
+Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
+Acked-by: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
+Cc: <stable@vger.kernel.org> # v4.18
+Cc: Shuah Khan <shuah@kernel.org>
+Cc: Thomas Gleixner <tglx@linutronix.de>
+Cc: Joel Fernandes <joelaf@google.com>
+Cc: Peter Zijlstra <peterz@infradead.org>
+Cc: Catalin Marinas <catalin.marinas@arm.com>
+Cc: Dave Watson <davejwatson@fb.com>
+Cc: Will Deacon <will.deacon@arm.com>
+Cc: Andi Kleen <andi@firstfloor.org>
+Cc: linux-kselftest@vger.kernel.org
+Cc: "H . Peter Anvin" <hpa@zytor.com>
+Cc: Chris Lameter <cl@linux.com>
+Cc: Russell King <linux@arm.linux.org.uk>
+Cc: Michael Kerrisk <mtk.manpages@gmail.com>
+Cc: "Paul E . McKenney" <paulmck@linux.vnet.ibm.com>
+Cc: Paul Turner <pjt@google.com>
+Cc: Boqun Feng <boqun.feng@gmail.com>
+Cc: Josh Triplett <josh@joshtriplett.org>
+Cc: Steven Rostedt <rostedt@goodmis.org>
+Cc: Ben Maurer <bmaurer@fb.com>
+Cc: Andy Lutomirski <luto@amacapital.net>
+Cc: Andrew Morton <akpm@linux-foundation.org>
+Cc: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Shuah Khan (Samsung OSG) <shuah@kernel.org>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+
+---
+ tools/testing/selftests/rseq/param_test.c | 19 ++++++++++---------
+ 1 file changed, 10 insertions(+), 9 deletions(-)
+
+--- a/tools/testing/selftests/rseq/param_test.c
++++ b/tools/testing/selftests/rseq/param_test.c
+@@ -56,15 +56,13 @@ unsigned int yield_mod_cnt, nr_abort;
+ printf(fmt, ## __VA_ARGS__); \
+ } while (0)
+
+-#if defined(__x86_64__) || defined(__i386__)
++#ifdef __i386__
+
+ #define INJECT_ASM_REG "eax"
+
+ #define RSEQ_INJECT_CLOBBER \
+ , INJECT_ASM_REG
+
+-#ifdef __i386__
+-
+ #define RSEQ_INJECT_ASM(n) \
+ "mov asm_loop_cnt_" #n ", %%" INJECT_ASM_REG "\n\t" \
+ "test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+@@ -76,9 +74,16 @@ unsigned int yield_mod_cnt, nr_abort;
+
+ #elif defined(__x86_64__)
+
++#define INJECT_ASM_REG_P "rax"
++#define INJECT_ASM_REG "eax"
++
++#define RSEQ_INJECT_CLOBBER \
++ , INJECT_ASM_REG_P \
++ , INJECT_ASM_REG
++
+ #define RSEQ_INJECT_ASM(n) \
+- "lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG "\n\t" \
+- "mov (%%" INJECT_ASM_REG "), %%" INJECT_ASM_REG "\n\t" \
++ "lea asm_loop_cnt_" #n "(%%rip), %%" INJECT_ASM_REG_P "\n\t" \
++ "mov (%%" INJECT_ASM_REG_P "), %%" INJECT_ASM_REG "\n\t" \
+ "test %%" INJECT_ASM_REG ",%%" INJECT_ASM_REG "\n\t" \
+ "jz 333f\n\t" \
+ "222:\n\t" \
+@@ -86,10 +91,6 @@ unsigned int yield_mod_cnt, nr_abort;
+ "jnz 222b\n\t" \
+ "333:\n\t"
+
+-#else
+-#error "Unsupported architecture"
+-#endif
+-
+ #elif defined(__ARMEL__)
+
+ #define RSEQ_INJECT_INPUT \
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Dan Carpenter <dan.carpenter@oracle.com>
+Date: Mon, 27 Aug 2018 12:23:01 +0300
+Subject: scsi: aacraid: fix a signedness bug
+
+From: Dan Carpenter <dan.carpenter@oracle.com>
+
+[ Upstream commit b9eb3b14f1dbf16bf27b6c1ffe6b8c00ec945c9b ]
+
+The problem is that ->reset_state is a u8 but it can be set to -1 or -2 in
+aac_tmf_callback() and the error handling in aac_eh_target_reset() relies
+on it to be signed.
+
+[mkp: fixed typo]
+
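+As a minimal, hypothetical sketch (not the aacraid code) of why an
+unsigned byte breaks this kind of error handling - assigning -1 to a
+u8 stores 255, so a "< 0" check can never fire:
+
+    struct map_info { unsigned char reset_state; };   /* u8, before the fix */
+
+    static int reset_failed(struct map_info *info)
+    {
+        info->reset_state = -1;          /* wraps to 255 in a u8 */
+        return info->reset_state < 0;    /* always 0 while unsigned;
+                                            with s8 it reports the error */
+    }
+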
+Fixes: 0d643ff3c353 ("scsi: aacraid: use aac_tmf_callback for reset fib")
+Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/aacraid/aacraid.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/scsi/aacraid/aacraid.h
++++ b/drivers/scsi/aacraid/aacraid.h
+@@ -1346,7 +1346,7 @@ struct fib {
+ struct aac_hba_map_info {
+ __le32 rmw_nexus; /* nexus for native HBA devices */
+ u8 devtype; /* device type */
+- u8 reset_state; /* 0 - no reset, 1..x - */
++ s8 reset_state; /* 0 - no reset, 1..x - */
+ /* after xth TM LUN reset */
+ u16 qd_limit;
+ u32 scan_counter;
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Varun Prakash <varun@chelsio.com>
+Date: Sat, 11 Aug 2018 21:03:58 +0530
+Subject: scsi: csiostor: add a check for NULL pointer after kmalloc()
+
+From: Varun Prakash <varun@chelsio.com>
+
+[ Upstream commit 89809b028b6f54187b7d81a0c69b35d394c52e62 ]
+
+Reported-by: Colin Ian King <colin.king@canonical.com>
+Signed-off-by: Varun Prakash <varun@chelsio.com>
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/csiostor/csio_hw.c | 16 +++++++++-------
+ 1 file changed, 9 insertions(+), 7 deletions(-)
+
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -2275,8 +2275,8 @@ bye:
+ }
+
+ /*
+- * Returns -EINVAL if attempts to flash the firmware failed
+- * else returns 0,
++ * Returns -EINVAL if attempts to flash the firmware failed,
++ * -ENOMEM if memory allocation failed else returns 0,
+ * if flashing was not attempted because the card had the
+ * latest firmware ECANCELED is returned
+ */
+@@ -2304,6 +2304,13 @@ csio_hw_flash_fw(struct csio_hw *hw, int
+ return -EINVAL;
+ }
+
++ /* allocate memory to read the header of the firmware on the
++ * card
++ */
++ card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
++ if (!card_fw)
++ return -ENOMEM;
++
+ if (csio_is_t5(pci_dev->device & CSIO_HW_CHIP_MASK))
+ fw_bin_file = FW_FNAME_T5;
+ else
+@@ -2317,11 +2324,6 @@ csio_hw_flash_fw(struct csio_hw *hw, int
+ fw_size = fw->size;
+ }
+
+- /* allocate memory to read the header of the firmware on the
+- * card
+- */
+- card_fw = kmalloc(sizeof(*card_fw), GFP_KERNEL);
+-
+ /* upgrade FW logic */
+ ret = csio_hw_prep_fw(hw, fw_info, fw_data, fw_size, card_fw,
+ hw->fw_state, reset);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Varun Prakash <varun@chelsio.com>
+Date: Sat, 11 Aug 2018 21:14:08 +0530
+Subject: scsi: csiostor: fix incorrect port capabilities
+
+From: Varun Prakash <varun@chelsio.com>
+
+[ Upstream commit 68bdc630721c40e908d22cffe07b5ca225a69f6e ]
+
+ - use be32_to_cpu() instead of ntohs() for 32 bit port capabilities.
+
+ - add a new function fwcaps32_to_caps16() to convert 32 bit port
+ capabilities to 16 bit port capabilities.
+
+Signed-off-by: Varun Prakash <varun@chelsio.com>
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/scsi/csiostor/csio_hw.c | 55 ++++++++++++++++++++++++++++++++--------
+ drivers/scsi/csiostor/csio_hw.h | 1
+ drivers/scsi/csiostor/csio_mb.c | 6 ++--
+ 3 files changed, 48 insertions(+), 14 deletions(-)
+
+--- a/drivers/scsi/csiostor/csio_hw.c
++++ b/drivers/scsi/csiostor/csio_hw.c
+@@ -1513,6 +1513,46 @@ fw_port_cap32_t fwcaps16_to_caps32(fw_po
+ }
+
+ /**
++ * fwcaps32_to_caps16 - convert 32-bit Port Capabilities to 16-bits
++ * @caps32: a 32-bit Port Capabilities value
++ *
++ * Returns the equivalent 16-bit Port Capabilities value. Note that
++ * not all 32-bit Port Capabilities can be represented in the 16-bit
++ * Port Capabilities and some fields/values may not make it.
++ */
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32)
++{
++ fw_port_cap16_t caps16 = 0;
++
++ #define CAP32_TO_CAP16(__cap) \
++ do { \
++ if (caps32 & FW_PORT_CAP32_##__cap) \
++ caps16 |= FW_PORT_CAP_##__cap; \
++ } while (0)
++
++ CAP32_TO_CAP16(SPEED_100M);
++ CAP32_TO_CAP16(SPEED_1G);
++ CAP32_TO_CAP16(SPEED_10G);
++ CAP32_TO_CAP16(SPEED_25G);
++ CAP32_TO_CAP16(SPEED_40G);
++ CAP32_TO_CAP16(SPEED_100G);
++ CAP32_TO_CAP16(FC_RX);
++ CAP32_TO_CAP16(FC_TX);
++ CAP32_TO_CAP16(802_3_PAUSE);
++ CAP32_TO_CAP16(802_3_ASM_DIR);
++ CAP32_TO_CAP16(ANEG);
++ CAP32_TO_CAP16(FORCE_PAUSE);
++ CAP32_TO_CAP16(MDIAUTO);
++ CAP32_TO_CAP16(MDISTRAIGHT);
++ CAP32_TO_CAP16(FEC_RS);
++ CAP32_TO_CAP16(FEC_BASER_RS);
++
++ #undef CAP32_TO_CAP16
++
++ return caps16;
++}
++
++/**
+ * lstatus_to_fwcap - translate old lstatus to 32-bit Port Capabilities
+ * @lstatus: old FW_PORT_ACTION_GET_PORT_INFO lstatus value
+ *
+@@ -1670,7 +1710,7 @@ csio_enable_ports(struct csio_hw *hw)
+ val = 1;
+
+ csio_mb_params(hw, mbp, CSIO_MB_DEFAULT_TMO,
+- hw->pfn, 0, 1, &param, &val, false,
++ hw->pfn, 0, 1, &param, &val, true,
+ NULL);
+
+ if (csio_mb_issue(hw, mbp)) {
+@@ -1680,16 +1720,9 @@ csio_enable_ports(struct csio_hw *hw)
+ return -EINVAL;
+ }
+
+- csio_mb_process_read_params_rsp(hw, mbp, &retval, 1,
+- &val);
+- if (retval != FW_SUCCESS) {
+- csio_err(hw, "FW_PARAMS_CMD(r) port:%d failed: 0x%x\n",
+- portid, retval);
+- mempool_free(mbp, hw->mb_mempool);
+- return -EINVAL;
+- }
+-
+- fw_caps = val;
++ csio_mb_process_read_params_rsp(hw, mbp, &retval,
++ 0, NULL);
++ fw_caps = retval ? FW_CAPS16 : FW_CAPS32;
+ }
+
+ /* Read PORT information */
+--- a/drivers/scsi/csiostor/csio_hw.h
++++ b/drivers/scsi/csiostor/csio_hw.h
+@@ -639,6 +639,7 @@ int csio_handle_intr_status(struct csio_
+
+ fw_port_cap32_t fwcap_to_fwspeed(fw_port_cap32_t acaps);
+ fw_port_cap32_t fwcaps16_to_caps32(fw_port_cap16_t caps16);
++fw_port_cap16_t fwcaps32_to_caps16(fw_port_cap32_t caps32);
+ fw_port_cap32_t lstatus_to_fwcap(u32 lstatus);
+
+ int csio_hw_start(struct csio_hw *);
+--- a/drivers/scsi/csiostor/csio_mb.c
++++ b/drivers/scsi/csiostor/csio_mb.c
+@@ -368,7 +368,7 @@ csio_mb_port(struct csio_hw *hw, struct
+ FW_CMD_LEN16_V(sizeof(*cmdp) / 16));
+
+ if (fw_caps == FW_CAPS16)
+- cmdp->u.l1cfg.rcap = cpu_to_be32(fc);
++ cmdp->u.l1cfg.rcap = cpu_to_be32(fwcaps32_to_caps16(fc));
+ else
+ cmdp->u.l1cfg32.rcap32 = cpu_to_be32(fc);
+ }
+@@ -395,8 +395,8 @@ csio_mb_process_read_port_rsp(struct csi
+ *pcaps = fwcaps16_to_caps32(ntohs(rsp->u.info.pcap));
+ *acaps = fwcaps16_to_caps32(ntohs(rsp->u.info.acap));
+ } else {
+- *pcaps = ntohs(rsp->u.info32.pcaps32);
+- *acaps = ntohs(rsp->u.info32.acaps32);
++ *pcaps = be32_to_cpu(rsp->u.info32.pcaps32);
++ *acaps = be32_to_cpu(rsp->u.info32.acaps32);
+ }
+ }
+ }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Geert Uytterhoeven <geert@linux-m68k.org>
+Date: Thu, 23 Aug 2018 23:23:06 +0200
+Subject: scsi: libata: Add missing newline at end of file
+
+From: Geert Uytterhoeven <geert@linux-m68k.org>
+
+[ Upstream commit 4e8065aa6c6f50765290be27ab8a64a4e44cb009 ]
+
+With gcc 4.1.2:
+
+ drivers/ata/libata-core.c:7396:33: warning: no newline at end of file
+
+Fixes: 2fa4a32613c9182b ("scsi: libsas: dynamically allocate and free ata host")
+Signed-off-by: Geert Uytterhoeven <geert@linux-m68k.org>
+Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/ata/libata-core.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/ata/libata-core.c
++++ b/drivers/ata/libata-core.c
+@@ -7403,4 +7403,4 @@ EXPORT_SYMBOL_GPL(ata_cable_unknown);
+ EXPORT_SYMBOL_GPL(ata_cable_ignore);
+ EXPORT_SYMBOL_GPL(ata_cable_sata);
+ EXPORT_SYMBOL_GPL(ata_host_get);
+-EXPORT_SYMBOL_GPL(ata_host_put);
+\ No newline at end of file
++EXPORT_SYMBOL_GPL(ata_host_put);
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Sabrina Dubroca <sd@queasysnail.net>
+Date: Thu, 30 Aug 2018 16:01:18 +0200
+Subject: selftests: pmtu: detect correct binary to ping ipv6 addresses
+
+From: Sabrina Dubroca <sd@queasysnail.net>
+
+[ Upstream commit c81c7012e0c769b5704c2b07bd5224965e76fb70 ]
+
+Some systems don't have the ping6 binary anymore, and use ping for
+everything. Detect the absence of ping6 and try to use ping instead.
+
+Fixes: d1f1b9cbf34c ("selftests: net: Introduce first PMTU test")
+Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
+Acked-by: Stefano Brivio <sbrivio@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/net/pmtu.sh | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -46,6 +46,9 @@
+ # Kselftest framework requirement - SKIP code is 4.
+ ksft_skip=4
+
++# Some systems don't have a ping6 binary anymore
++which ping6 > /dev/null 2>&1 && ping6=$(which ping6) || ping6=$(which ping)
++
+ tests="
+ pmtu_vti6_exception vti6: PMTU exceptions
+ pmtu_vti4_exception vti4: PMTU exceptions
+@@ -274,7 +277,7 @@ test_pmtu_vti6_exception() {
+ mtu "${ns_b}" veth_b 4000
+ mtu "${ns_a}" vti6_a 5000
+ mtu "${ns_b}" vti6_b 5000
+- ${ns_a} ping6 -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
++ ${ns_a} ${ping6} -q -i 0.1 -w 2 -s 60000 ${vti6_b_addr} > /dev/null
+
+ # Check that exception was created
+ if [ "$(route_get_dst_pmtu_from_exception "${ns_a}" ${vti6_b_addr})" = "" ]; then
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Sabrina Dubroca <sd@queasysnail.net>
+Date: Thu, 30 Aug 2018 16:01:17 +0200
+Subject: selftests: pmtu: maximum MTU for vti4 is 2^16-1-20
+
+From: Sabrina Dubroca <sd@queasysnail.net>
+
+[ Upstream commit 902b5417f28d955cdb4898df6ffaab15f56c5cff ]
+
+Since commit 82612de1c98e ("ip_tunnel: restore binding to ifaces with a
+large mtu"), the maximum MTU for vti4 is based on IP_MAX_MTU instead of
+the mysterious constant 0xFFF8. This makes this selftest fail.
+
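+A quick check of the numbers (illustrative, assuming the 20 bytes the
+test subtracts are the outer IPv4 header): IP_MAX_MTU is
+0xFFFF = 65535, so the new upper bound is 65535 - 20 = 65515, whereas
+the old bound was 0xFFF8 - 20 = 65508.
+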
+Fixes: 82612de1c98e ("ip_tunnel: restore binding to ifaces with a large mtu")
+Signed-off-by: Sabrina Dubroca <sd@queasysnail.net>
+Acked-by: Stefano Brivio <sbrivio@redhat.com>
+Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/net/pmtu.sh | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/tools/testing/selftests/net/pmtu.sh
++++ b/tools/testing/selftests/net/pmtu.sh
+@@ -334,7 +334,7 @@ test_pmtu_vti4_link_add_mtu() {
+ fail=0
+
+ min=68
+- max=$((65528 - 20))
++ max=$((65535 - 20))
+ # Check invalid values first
+ for v in $((min - 1)) $((max + 1)); do
+ ${ns_a} ip link add vti4_a mtu ${v} type vti local ${veth4_a_addr} remote ${veth4_b_addr} key 10 2>/dev/null
+rseq-selftests-fix-parametrized-test-with-fpie.patch
+mac80211-run-txq-teardown-code-before-de-registering-interfaces.patch
+mac80211_hwsim-require-at-least-one-channel.patch
+btrfs-fix-unexpected-failure-of-nocow-buffered-writes-after-snapshotting-when-low-on-space.patch
+kvm-ppc-book3s-hv-don-t-truncate-hpte-index-in-xlate-function.patch
+cfg80211-remove-division-by-size-of-sizeof-struct-ieee80211_wmm_rule.patch
+btrfs-btrfs_shrink_device-should-call-commit-transaction-at-the-end.patch
+scsi-csiostor-add-a-check-for-null-pointer-after-kmalloc.patch
+scsi-csiostor-fix-incorrect-port-capabilities.patch
+scsi-libata-add-missing-newline-at-end-of-file.patch
+scsi-aacraid-fix-a-signedness-bug.patch
+bpf-sockmap-fix-potential-use-after-free-in-bpf_tcp_close.patch
+bpf-sockmap-fix-psock-refcount-leak-in-bpf_tcp_recvmsg.patch
+bpf-sockmap-decrement-copied-count-correctly-in-redirect-error-case.patch
+mac80211-correct-use-of-ieee80211_vht_cap_rxstbc_x.patch
+mac80211_hwsim-correct-use-of-ieee80211_vht_cap_rxstbc_x.patch
+cfg80211-make-wmm_rule-part-of-the-reg_rule-structure.patch
+mac80211_hwsim-fix-possible-spectre-v1-for-hwsim_world_regdom_custom.patch
+nl80211-fix-nla_put_u8-to-u16-for-nl80211_wmmr_txop.patch
+nl80211-pass-center-frequency-in-khz-instead-of-mhz.patch
+bpf-fix-several-offset-tests-in-bpf_msg_pull_data.patch
+gpio-adp5588-fix-sleep-in-atomic-context-bug.patch
+mac80211-mesh-fix-hwmp-sequence-numbering-to-follow-standard.patch
+mac80211-avoid-kernel-panic-when-building-amsdu-from-non-linear-skb.patch
+gpiolib-acpi-switch-to-cansleep-version-of-gpio-library-call.patch
+gpiolib-acpi-register-gpioint-acpi-event-handlers-from-a-late_initcall.patch
+gpio-dwapb-fix-error-handling-in-dwapb_gpio_probe.patch
+bpf-fix-msg-data-data_end-after-sg-shift-repair-in-bpf_msg_pull_data.patch
+bpf-fix-shift-upon-scatterlist-ring-wrap-around-in-bpf_msg_pull_data.patch
+bpf-fix-sg-shift-repair-start-offset-in-bpf_msg_pull_data.patch
+tipc-switch-to-rhashtable-iterator.patch
+net-hns-add-the-code-for-cleaning-pkt-in-chip.patch
+net-hns-add-netif_carrier_off-before-change-speed-and-duplex.patch
+sh_eth-add-r7s9210-support.patch
+net-mvpp2-initialize-port-of_node-pointer.patch
+tc-testing-add-test-cases-for-numeric-and-invalid-control-action.patch
+cfg80211-nl80211_update_ft_ies-to-validate-nl80211_attr_ie.patch
+mac80211-do-not-convert-to-a-msdu-if-frag-subframe-limited.patch
+mac80211-always-account-for-a-msdu-header-changes.patch
+tools-kvm_stat-fix-python3-issues.patch
+tools-kvm_stat-fix-handling-of-invalid-paths-in-debugfs-provider.patch
+tools-kvm_stat-fix-updates-for-dead-guests.patch
+gpio-fix-crash-due-to-registration-race.patch
+arc-atomics-unbork-atomic_fetch_-op.patch
+revert-blk-throttle-fix-race-between-blkcg_bio_issue_check-and-cgroup_rmdir.patch
+md-raid5-cache-disable-reshape-completely.patch
+raid10-bug_on-in-raise_barrier-when-force-is-true-and-conf-barrier-is-0.patch
+selftests-pmtu-maximum-mtu-for-vti4-is-2-16-1-20.patch
+selftests-pmtu-detect-correct-binary-to-ping-ipv6-addresses.patch
+ibmvnic-include-missing-return-code-checks-in-reset-function.patch
+bpf-fix-bpf_msg_pull_data.patch
+bpf-avoid-misuse-of-psock-when-tcp_ulp_bpf-collides-with-another-ulp.patch
+net-ethernet-cpsw-phy-sel-prefer-phandle-for-phy-sel.patch
+i2c-uniphier-issue-stop-only-for-last-message-or-i2c_m_stop.patch
+i2c-uniphier-f-issue-stop-only-for-last-message-or-i2c_m_stop.patch
+net-cadence-fix-a-sleep-in-atomic-context-bug-in-macb_halt_tx.patch
+fs-cifs-don-t-translate-sfm_slash-u-f026-to-backslash.patch
+mac80211-fix-an-off-by-one-issue-in-a-msdu-max_subframe-computation.patch
+cfg80211-fix-a-type-issue-in-ieee80211_chandef_to_operating_class.patch
+mac80211-fix-wmm-txop-calculation.patch
+mac80211-fix-a-race-between-restart-and-csa-flows.patch
+mac80211-fix-station-bandwidth-setting-after-channel-switch.patch
+mac80211-don-t-tx-a-deauth-frame-if-the-ap-forbade-tx.patch
+mac80211-shorten-the-ibss-debug-messages.patch
+fsnotify-fix-ignore-mask-logic-in-fsnotify.patch
+net-ibm-emac-wrong-emac_calc_base-call-was-used-by-typo.patch
+nds32-fix-logic-for-module.patch
+nds32-add-null-entry-to-the-end-of_device_id-array.patch
+nds32-fix-empty-call-trace.patch
+nds32-fix-get_user-put_user-macro-expand-pointer-problem.patch
+nds32-fix-build-error-because-of-wrong-semicolon.patch
+tools-vm-slabinfo.c-fix-sign-compare-warning.patch
+tools-vm-page-types.c-fix-defined-but-not-used-warning.patch
+nds32-linker-script-gcov-kernel-may-refers-data-in-__exit.patch
+ceph-avoid-a-use-after-free-in-ceph_destroy_options.patch
+firmware-arm_scmi-fix-divide-by-zero-when-sustained_perf_level-is-zero.patch
+afs-fix-cell-specification-to-permit-an-empty-address-list.patch
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Chris Brandt <chris.brandt@renesas.com>
+Date: Mon, 27 Aug 2018 12:42:02 -0500
+Subject: sh_eth: Add R7S9210 support
+
+From: Chris Brandt <chris.brandt@renesas.com>
+
+[ Upstream commit 6e0bb04d0e4f597d8d8f4f21401a9636f2809fd1 ]
+
+Add support for the R7S9210 which is part of the RZ/A2 series.
+
+Signed-off-by: Chris Brandt <chris.brandt@renesas.com>
+Acked-by: Rob Herring <robh@kernel.org>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ Documentation/devicetree/bindings/net/sh_eth.txt | 1
+ drivers/net/ethernet/renesas/sh_eth.c | 36 +++++++++++++++++++++++
+ 2 files changed, 37 insertions(+)
+
+--- a/Documentation/devicetree/bindings/net/sh_eth.txt
++++ b/Documentation/devicetree/bindings/net/sh_eth.txt
+@@ -16,6 +16,7 @@ Required properties:
+ "renesas,ether-r8a7794" if the device is a part of R8A7794 SoC.
+ "renesas,gether-r8a77980" if the device is a part of R8A77980 SoC.
+ "renesas,ether-r7s72100" if the device is a part of R7S72100 SoC.
++ "renesas,ether-r7s9210" if the device is a part of R7S9210 SoC.
+ "renesas,rcar-gen1-ether" for a generic R-Car Gen1 device.
+ "renesas,rcar-gen2-ether" for a generic R-Car Gen2 or RZ/G1
+ device.
+--- a/drivers/net/ethernet/renesas/sh_eth.c
++++ b/drivers/net/ethernet/renesas/sh_eth.c
+@@ -807,6 +807,41 @@ static struct sh_eth_cpu_data r8a77980_d
+ .magic = 1,
+ .cexcr = 1,
+ };
++
++/* R7S9210 */
++static struct sh_eth_cpu_data r7s9210_data = {
++ .soft_reset = sh_eth_soft_reset,
++
++ .set_duplex = sh_eth_set_duplex,
++ .set_rate = sh_eth_set_rate_rcar,
++
++ .register_type = SH_ETH_REG_FAST_SH4,
++
++ .edtrr_trns = EDTRR_TRNS_ETHER,
++ .ecsr_value = ECSR_ICD,
++ .ecsipr_value = ECSIPR_ICDIP,
++ .eesipr_value = EESIPR_TWBIP | EESIPR_TABTIP | EESIPR_RABTIP |
++ EESIPR_RFCOFIP | EESIPR_ECIIP | EESIPR_FTCIP |
++ EESIPR_TDEIP | EESIPR_TFUFIP | EESIPR_FRIP |
++ EESIPR_RDEIP | EESIPR_RFOFIP | EESIPR_CNDIP |
++ EESIPR_DLCIP | EESIPR_CDIP | EESIPR_TROIP |
++ EESIPR_RMAFIP | EESIPR_RRFIP | EESIPR_RTLFIP |
++ EESIPR_RTSFIP | EESIPR_PREIP | EESIPR_CERFIP,
++
++ .tx_check = EESR_FTC | EESR_CND | EESR_DLC | EESR_CD | EESR_TRO,
++ .eesr_err_check = EESR_TWB | EESR_TABT | EESR_RABT | EESR_RFE |
++ EESR_RDE | EESR_RFRMER | EESR_TFE | EESR_TDE,
++
++ .fdr_value = 0x0000070f,
++
++ .apr = 1,
++ .mpr = 1,
++ .tpauser = 1,
++ .hw_swap = 1,
++ .rpadir = 1,
++ .no_ade = 1,
++ .xdfar_rw = 1,
++};
+ #endif /* CONFIG_OF */
+
+ static void sh_eth_set_rate_sh7724(struct net_device *ndev)
+@@ -3131,6 +3166,7 @@ static const struct of_device_id sh_eth_
+ { .compatible = "renesas,ether-r8a7794", .data = &rcar_gen2_data },
+ { .compatible = "renesas,gether-r8a77980", .data = &r8a77980_data },
+ { .compatible = "renesas,ether-r7s72100", .data = &r7s72100_data },
++ { .compatible = "renesas,ether-r7s9210", .data = &r7s9210_data },
+ { .compatible = "renesas,rcar-gen1-ether", .data = &rcar_gen1_data },
+ { .compatible = "renesas,rcar-gen2-ether", .data = &rcar_gen2_data },
+ { }
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Paolo Abeni <pabeni@redhat.com>
+Date: Wed, 29 Aug 2018 10:22:34 +0200
+Subject: tc-testing: add test-cases for numeric and invalid control action
+
+From: Paolo Abeni <pabeni@redhat.com>
+
+[ Upstream commit 25a8238f4cc8425d4aade4f9041be468d0e8aa2e ]
+
+Only the police action allows us to specify an arbitrary numeric value
+for the control action. This change introduces an explicit test case
+for the above feature and then leverages it to test the kernel behavior
+for invalid control actions (reject).
+
+Signed-off-by: Paolo Abeni <pabeni@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/testing/selftests/tc-testing/tc-tests/actions/police.json | 48 ++++++++++
+ 1 file changed, 48 insertions(+)
+
+--- a/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
++++ b/tools/testing/selftests/tc-testing/tc-tests/actions/police.json
+@@ -313,6 +313,54 @@
+ ]
+ },
+ {
++ "id": "6aaf",
++ "name": "Add police actions with conform-exceed control pass/pipe [with numeric values]",
++ "category": [
++ "actions",
++ "police"
++ ],
++ "setup": [
++ [
++ "$TC actions flush action police",
++ 0,
++ 1,
++ 255
++ ]
++ ],
++ "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 0/3 index 1",
++ "expExitCode": "0",
++ "verifyCmd": "$TC actions get action police index 1",
++ "matchPattern": "action order [0-9]*: police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action pass/pipe",
++ "matchCount": "1",
++ "teardown": [
++ "$TC actions flush action police"
++ ]
++ },
++ {
++ "id": "29b1",
++ "name": "Add police actions with conform-exceed control <invalid>/drop",
++ "category": [
++ "actions",
++ "police"
++ ],
++ "setup": [
++ [
++ "$TC actions flush action police",
++ 0,
++ 1,
++ 255
++ ]
++ ],
++ "cmdUnderTest": "$TC actions add action police rate 3mbit burst 250k conform-exceed 10/drop index 1",
++ "expExitCode": "255",
++ "verifyCmd": "$TC actions ls action police",
++ "matchPattern": "action order [0-9]*: police 0x1 rate 3Mbit burst 250Kb mtu 2Kb action ",
++ "matchCount": "0",
++ "teardown": [
++ "$TC actions flush action police"
++ ]
++ },
++ {
+ "id": "c26f",
+ "name": "Add police action with invalid peakrate value",
+ "category": [
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Cong Wang <xiyou.wangcong@gmail.com>
+Date: Fri, 24 Aug 2018 12:28:06 -0700
+Subject: tipc: switch to rhashtable iterator
+
+From: Cong Wang <xiyou.wangcong@gmail.com>
+
+[ Upstream commit 9a07efa9aea2f4a59f35da0785a4e6a6b5a96192 ]
+
+syzbot reported a use-after-free in tipc_group_fill_sock_diag(),
+where tipc_group_fill_sock_diag() still reads tsk->group while
+tipc_group_delete() just deletes it in tipc_release().
+
+tipc_nl_sk_walk() aims to lock this sock when walking each sock
+in the hash table to close race conditions with sock changes like
+this one, by acquiring the tsk->sk.sk_lock.slock spinlock; unfortunately
+this doesn't work at all. All non-BH call paths should take
+lock_sock() instead to make it work.
+
+tipc_nl_sk_walk() brutally iterates with raw rht_for_each_entry_rcu(),
+which requires the RCU read lock; this is the reason why lock_sock()
+can't be taken on this path. This can be resolved by switching to
+the rhashtable iterator APIs, where taking a sleepable lock is possible.
+Also, the iterator APIs are friendly to restartable calls like
+diag dump: the last position is remembered behind the scenes, and
+all we need to do here is save the iterator into cb->args[].
+
+I tested this with a parallel tipc diag dump and thousands of tipc
+socket creations and releases; no crash or memory leak.
+
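+A simplified sketch of the iterator lifecycle (hedged; the names below
+are made up and the real conversion is in the hunk that follows):
+
+    #include <linux/err.h>
+    #include <linux/rhashtable.h>
+
+    static void walk_all(struct rhashtable *ht, void (*handle)(void *obj))
+    {
+        struct rhashtable_iter iter;
+        void *obj;
+
+        rhashtable_walk_enter(ht, &iter);     /* e.g. in the ->start() callback */
+        rhashtable_walk_start(&iter);
+        while ((obj = rhashtable_walk_next(&iter)) != NULL) {
+            if (IS_ERR(obj)) {
+                if (PTR_ERR(obj) == -EAGAIN)
+                    continue;                 /* table resized, keep walking */
+                break;
+            }
+            /* pin obj (e.g. sock_hold()) before leaving the walk */
+            rhashtable_walk_stop(&iter);      /* sleeping locks allowed now */
+            handle(obj);
+            rhashtable_walk_start(&iter);
+        }
+        rhashtable_walk_stop(&iter);
+        rhashtable_walk_exit(&iter);          /* e.g. in the ->done() callback */
+    }
+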
+Reported-by: syzbot+b9c8f3ab2994b7cd1625@syzkaller.appspotmail.com
+Cc: Jon Maloy <jon.maloy@ericsson.com>
+Cc: Ying Xue <ying.xue@windriver.com>
+Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ net/tipc/diag.c | 2 +
+ net/tipc/netlink.c | 2 +
+ net/tipc/socket.c | 76 ++++++++++++++++++++++++++++++++++-------------------
+ net/tipc/socket.h | 2 +
+ 4 files changed, 56 insertions(+), 26 deletions(-)
+
+--- a/net/tipc/diag.c
++++ b/net/tipc/diag.c
+@@ -84,7 +84,9 @@ static int tipc_sock_diag_handler_dump(s
+
+ if (h->nlmsg_flags & NLM_F_DUMP) {
+ struct netlink_dump_control c = {
++ .start = tipc_dump_start,
+ .dump = tipc_diag_dump,
++ .done = tipc_dump_done,
+ };
+ netlink_dump_start(net->diag_nlsk, skb, h, &c);
+ return 0;
+--- a/net/tipc/netlink.c
++++ b/net/tipc/netlink.c
+@@ -167,7 +167,9 @@ static const struct genl_ops tipc_genl_v
+ },
+ {
+ .cmd = TIPC_NL_SOCK_GET,
++ .start = tipc_dump_start,
+ .dumpit = tipc_nl_sk_dump,
++ .done = tipc_dump_done,
+ .policy = tipc_nl_policy,
+ },
+ {
+--- a/net/tipc/socket.c
++++ b/net/tipc/socket.c
+@@ -3233,45 +3233,69 @@ int tipc_nl_sk_walk(struct sk_buff *skb,
+ struct netlink_callback *cb,
+ struct tipc_sock *tsk))
+ {
+- struct net *net = sock_net(skb->sk);
+- struct tipc_net *tn = tipc_net(net);
+- const struct bucket_table *tbl;
+- u32 prev_portid = cb->args[1];
+- u32 tbl_id = cb->args[0];
+- struct rhash_head *pos;
++ struct rhashtable_iter *iter = (void *)cb->args[0];
+ struct tipc_sock *tsk;
+ int err;
+
+- rcu_read_lock();
+- tbl = rht_dereference_rcu((&tn->sk_rht)->tbl, &tn->sk_rht);
+- for (; tbl_id < tbl->size; tbl_id++) {
+- rht_for_each_entry_rcu(tsk, pos, tbl, tbl_id, node) {
+- spin_lock_bh(&tsk->sk.sk_lock.slock);
+- if (prev_portid && prev_portid != tsk->portid) {
+- spin_unlock_bh(&tsk->sk.sk_lock.slock);
++ rhashtable_walk_start(iter);
++ while ((tsk = rhashtable_walk_next(iter)) != NULL) {
++ if (IS_ERR(tsk)) {
++ err = PTR_ERR(tsk);
++ if (err == -EAGAIN) {
++ err = 0;
+ continue;
+ }
++ break;
++ }
+
+- err = skb_handler(skb, cb, tsk);
+- if (err) {
+- prev_portid = tsk->portid;
+- spin_unlock_bh(&tsk->sk.sk_lock.slock);
+- goto out;
+- }
+-
+- prev_portid = 0;
+- spin_unlock_bh(&tsk->sk.sk_lock.slock);
++ sock_hold(&tsk->sk);
++ rhashtable_walk_stop(iter);
++ lock_sock(&tsk->sk);
++ err = skb_handler(skb, cb, tsk);
++ if (err) {
++ release_sock(&tsk->sk);
++ sock_put(&tsk->sk);
++ goto out;
+ }
++ release_sock(&tsk->sk);
++ rhashtable_walk_start(iter);
++ sock_put(&tsk->sk);
+ }
++ rhashtable_walk_stop(iter);
+ out:
+- rcu_read_unlock();
+- cb->args[0] = tbl_id;
+- cb->args[1] = prev_portid;
+-
+ return skb->len;
+ }
+ EXPORT_SYMBOL(tipc_nl_sk_walk);
+
++int tipc_dump_start(struct netlink_callback *cb)
++{
++ struct rhashtable_iter *iter = (void *)cb->args[0];
++ struct net *net = sock_net(cb->skb->sk);
++ struct tipc_net *tn = tipc_net(net);
++
++ if (!iter) {
++ iter = kmalloc(sizeof(*iter), GFP_KERNEL);
++ if (!iter)
++ return -ENOMEM;
++
++ cb->args[0] = (long)iter;
++ }
++
++ rhashtable_walk_enter(&tn->sk_rht, iter);
++ return 0;
++}
++EXPORT_SYMBOL(tipc_dump_start);
++
++int tipc_dump_done(struct netlink_callback *cb)
++{
++ struct rhashtable_iter *hti = (void *)cb->args[0];
++
++ rhashtable_walk_exit(hti);
++ kfree(hti);
++ return 0;
++}
++EXPORT_SYMBOL(tipc_dump_done);
++
+ int tipc_sk_fill_sock_diag(struct sk_buff *skb, struct netlink_callback *cb,
+ struct tipc_sock *tsk, u32 sk_filter_state,
+ u64 (*tipc_diag_gen_cookie)(struct sock *sk))
+--- a/net/tipc/socket.h
++++ b/net/tipc/socket.h
+@@ -68,4 +68,6 @@ int tipc_nl_sk_walk(struct sk_buff *skb,
+ int (*skb_handler)(struct sk_buff *skb,
+ struct netlink_callback *cb,
+ struct tipc_sock *tsk));
++int tipc_dump_start(struct netlink_callback *cb);
++int tipc_dump_done(struct netlink_callback *cb);
+ #endif
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+Date: Fri, 24 Aug 2018 14:03:56 +0200
+Subject: tools/kvm_stat: fix handling of invalid paths in debugfs provider
+
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+
+[ Upstream commit 617c66b9f236d20f11cecbb3f45e6d5675b2fae1 ]
+
+When filtering by guest, kvm_stat displays garbage when the guest is
+destroyed - see sample output below.
+We add code to remove the invalid paths from the providers, so at least
+no more garbage is displayed.
+Here's a sample output to illustrate:
+
+ kvm statistics - pid 13986 (foo)
+
+ Event Total %Total CurAvg/s
+ diagnose_258 -2 0.0 0
+ deliver_program_interruption -3 0.0 0
+ diagnose_308 -4 0.0 0
+ halt_poll_invalid -91 0.0 -6
+ deliver_service_signal -244 0.0 -16
+ halt_successful_poll -250 0.1 -17
+ exit_pei -285 0.1 -19
+ exit_external_request -312 0.1 -21
+ diagnose_9c -328 0.1 -22
+ userspace_handled -713 0.1 -47
+ halt_attempted_poll -939 0.2 -62
+ deliver_emergency_signal -3126 0.6 -208
+ halt_wakeup -7199 1.5 -481
+ exit_wait_state -7379 1.5 -493
+ diagnose_500 -56499 11.5 -3757
+ exit_null -85491 17.4 -5685
+ diagnose_44 -133300 27.1 -8874
+ exit_instruction -195898 39.8 -13037
+ Total -492063
+
+Signed-off-by: Stefan Raspl <raspl@linux.vnet.ibm.com>
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/kvm/kvm_stat/kvm_stat | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -766,6 +766,13 @@ class DebugfsProvider(Provider):
+ self.do_read = True
+ self.reset()
+
++ def _verify_paths(self):
++ """Remove invalid paths"""
++ for path in self.paths:
++ if not os.path.exists(os.path.join(PATH_DEBUGFS_KVM, path)):
++ self.paths.remove(path)
++ continue
++
+ def read(self, reset=0, by_guest=0):
+ """Returns a dict with format:'file name / field -> current value'.
+
+@@ -780,6 +787,7 @@ class DebugfsProvider(Provider):
+ # If no debugfs filtering support is available, then don't read.
+ if not self.do_read:
+ return results
++ self._verify_paths()
+
+ paths = self.paths
+ if self._pid == 0:
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+Date: Fri, 24 Aug 2018 14:03:55 +0200
+Subject: tools/kvm_stat: fix python3 issues
+
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+
+[ Upstream commit 58f33cfe73076b6497bada4f7b5bda961ed68083 ]
+
+Python 3 returns a float for a regular division - switch to a division
+operator that returns an integer.
+Furthermore, filter() returns an iterator instead of an actual
+list - wrap the result in a list, which keeps it working in
+both Python 2 and 3.
+
+Signed-off-by: Stefan Raspl <raspl@linux.ibm.com>
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/kvm/kvm_stat/kvm_stat | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -759,7 +759,7 @@ class DebugfsProvider(Provider):
+ if len(vms) == 0:
+ self.do_read = False
+
+- self.paths = filter(lambda x: "{}-".format(pid) in x, vms)
++ self.paths = list(filter(lambda x: "{}-".format(pid) in x, vms))
+
+ else:
+ self.paths = []
+@@ -1219,10 +1219,10 @@ class Tui(object):
+ (x, term_width) = self.screen.getmaxyx()
+ row = 2
+ for line in text:
+- start = (term_width - len(line)) / 2
++ start = (term_width - len(line)) // 2
+ self.screen.addstr(row, start, line)
+ row += 1
+- self.screen.addstr(row + 1, (term_width - len(hint)) / 2, hint,
++ self.screen.addstr(row + 1, (term_width - len(hint)) // 2, hint,
+ curses.A_STANDOUT)
+ self.screen.getkey()
+
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+Date: Fri, 24 Aug 2018 14:03:57 +0200
+Subject: tools/kvm_stat: fix updates for dead guests
+
+From: Stefan Raspl <stefan.raspl@de.ibm.com>
+
+[ Upstream commit 710ab11ad9329d2d4b044405e328c994b19a2aa9 ]
+
+With pid filtering active, when a guest is removed e.g. via virsh shutdown,
+successive updates produce garbage.
+Therefore, we add code to detect this case and prevent further body updates.
+Note that when the help dialog is displayed via 'h' in this case, once we exit
+it we're stuck with the 'Collecting data...' message until we remove the filter.
+
+Signed-off-by: Stefan Raspl <raspl@linux.ibm.com>
+Signed-off-by: Radim Krčmář <rkrcmar@redhat.com>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/kvm/kvm_stat/kvm_stat | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+--- a/tools/kvm/kvm_stat/kvm_stat
++++ b/tools/kvm/kvm_stat/kvm_stat
+@@ -1170,6 +1170,9 @@ class Tui(object):
+
+ return sorted_items
+
++ if not self._is_running_guest(self.stats.pid_filter):
++ # leave final data on screen
++ return
+ row = 3
+ self.screen.move(row, 0)
+ self.screen.clrtobot()
+@@ -1327,6 +1330,12 @@ class Tui(object):
+ msg = '"' + str(val) + '": Invalid value'
+ self._refresh_header()
+
++ def _is_running_guest(self, pid):
++ """Check if pid is still a running process."""
++ if not pid:
++ return True
++ return os.path.isdir(os.path.join('/proc/', str(pid)))
++
+ def _show_vm_selection_by_guest(self):
+ """Draws guest selection mask.
+
+@@ -1354,7 +1363,7 @@ class Tui(object):
+ if not guest or guest == '0':
+ break
+ if guest.isdigit():
+- if not os.path.isdir(os.path.join('/proc/', guest)):
++ if not self._is_running_guest(guest):
+ msg = '"' + guest + '": Not a running process'
+ continue
+ pid = int(guest)
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Date: Tue, 4 Sep 2018 15:45:51 -0700
+Subject: tools/vm/page-types.c: fix "defined but not used" warning
+
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+
+[ Upstream commit 7ab660f8baecfe26c1c267fa8e64d2073feae2bb ]
+
+debugfs_known_mountpoints[] is not used any more, so let's remove it.
+
+Link: http://lkml.kernel.org/r/1535102651-19418-1-git-send-email-n-horiguchi@ah.jp.nec.com
+Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
+Cc: Matthew Wilcox <willy@infradead.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/vm/page-types.c | 6 ------
+ 1 file changed, 6 deletions(-)
+
+--- a/tools/vm/page-types.c
++++ b/tools/vm/page-types.c
+@@ -156,12 +156,6 @@ static const char * const page_flag_name
+ };
+
+
+-static const char * const debugfs_known_mountpoints[] = {
+- "/sys/kernel/debug",
+- "/debug",
+- 0,
+-};
+-
+ /*
+ * data structures
+ */
--- /dev/null
+From foo@baz Thu Oct 4 12:32:08 PDT 2018
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Date: Tue, 4 Sep 2018 15:45:48 -0700
+Subject: tools/vm/slabinfo.c: fix sign-compare warning
+
+From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+
+[ Upstream commit 904506562e0856f2535d876407d087c9459d345b ]
+
+Currently we get the following compiler warning:
+
+ slabinfo.c:854:22: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
+ if (s->object_size < min_objsize)
+ ^
+
+due to the mismatch of signed/unsigned comparison. ->object_size and
+->slab_size are never expected to be negative, so let's define them as
+unsigned int.
+
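+A minimal illustration (hypothetical, not slabinfo.c itself): when a
+signed and an unsigned operand meet, the signed side is implicitly
+converted to unsigned before the comparison, which is what the compiler
+flags:
+
+    struct demo {
+        int object_size;                      /* signed, as before the patch */
+    };
+
+    static int fits(const struct demo *s, unsigned long min_objsize)
+    {
+        return s->object_size < min_objsize;  /* -Wsign-compare fires here */
+    }
+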
+[n-horiguchi@ah.jp.nec.com: convert everything - none of these can be negative]
+ Link: http://lkml.kernel.org/r/20180826234947.GA9787@hori1.linux.bs1.fc.nec.co.jp
+Link: http://lkml.kernel.org/r/1535103134-20239-1-git-send-email-n-horiguchi@ah.jp.nec.com
+Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
+Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
+Cc: Matthew Wilcox <willy@infradead.org>
+Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
+Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
+Signed-off-by: Sasha Levin <alexander.levin@microsoft.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ tools/vm/slabinfo.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+--- a/tools/vm/slabinfo.c
++++ b/tools/vm/slabinfo.c
+@@ -30,8 +30,8 @@ struct slabinfo {
+ int alias;
+ int refs;
+ int aliases, align, cache_dma, cpu_slabs, destroy_by_rcu;
+- int hwcache_align, object_size, objs_per_slab;
+- int sanity_checks, slab_size, store_user, trace;
++ unsigned int hwcache_align, object_size, objs_per_slab;
++ unsigned int sanity_checks, slab_size, store_user, trace;
+ int order, poison, reclaim_account, red_zone;
+ unsigned long partial, objects, slabs, objects_partial, objects_total;
+ unsigned long alloc_fastpath, alloc_slowpath;