--- /dev/null
+From 485ec2ea9cf556e9c120e07961b7b459d776a115 Mon Sep 17 00:00:00 2001
+From: Amol Grover <frextrite@gmail.com>
+Date: Thu, 23 Jan 2020 17:34:38 +0530
+Subject: bpf, devmap: Pass lockdep expression to RCU lists
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Amol Grover <frextrite@gmail.com>
+
+commit 485ec2ea9cf556e9c120e07961b7b459d776a115 upstream.
+
+head is traversed using hlist_for_each_entry_rcu outside an RCU
+read-side critical section but under the protection of dtab->index_lock.
+
+Hence, add the corresponding lockdep expression to silence
+false-positive lockdep warnings, and harden RCU list usage.
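+
+For illustration, the general pattern (a minimal sketch around a
+hypothetical 'mytab' structure, not code from this patch):
+
+  /* Traversal happens outside rcu_read_lock() but under mytab->lock;
+   * the extra argument tells lockdep that this context is also safe.
+   */
+  hlist_for_each_entry_rcu(obj, head, node,
+                           lockdep_is_held(&mytab->lock))
+          if (obj->idx == key)
+                  return obj;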
+
+Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index")
+Signed-off-by: Amol Grover <frextrite@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
+Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Link: https://lore.kernel.org/bpf/20200123120437.26506-1-frextrite@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ kernel/bpf/devmap.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+--- a/kernel/bpf/devmap.c
++++ b/kernel/bpf/devmap.c
+@@ -293,7 +293,8 @@ struct bpf_dtab_netdev *__dev_map_hash_l
+ struct hlist_head *head = dev_map_index_hash(dtab, key);
+ struct bpf_dtab_netdev *dev;
+
+- hlist_for_each_entry_rcu(dev, head, index_hlist)
++ hlist_for_each_entry_rcu(dev, head, index_hlist,
++ lockdep_is_held(&dtab->index_lock))
+ if (dev->idx == key)
+ return dev;
+
--- /dev/null
+From 556755a8a99be8ca3cd9fbe36aaf9b3b0339a00d Mon Sep 17 00:00:00 2001
+From: Josef Bacik <josef@toxicpanda.com>
+Date: Fri, 3 Jan 2020 10:38:44 -0500
+Subject: btrfs: fix improper setting of scanned for range cyclic write cache pages
+
+From: Josef Bacik <josef@toxicpanda.com>
+
+commit 556755a8a99be8ca3cd9fbe36aaf9b3b0339a00d upstream.
+
+We noticed that we were having regular CG OOM kills in cases where there
+were still enough dirty pages to avoid OOM'ing. It turned out there's a
+corner case in btrfs's handling of range_cyclic where files that were
+being redirtied were not getting fully written out because of how we do
+range_cyclic writeback.
+
+We were unconditionally setting scanned = 1 the first time we found any
+pages in the inode. This isn't actually what we want; we want it to be
+set once we've scanned the entire file. For range_cyclic we could be
+starting in the middle or towards the end of the file, so we could write
+one page and then not write any of the other dirty pages in the file
+because we set scanned = 1.
+
+Fix this by not setting scanned = 1 when we find pages. The rules for
+setting scanned should be:
+
+1) !range_cyclic. In this case we have a specified range to write out.
+2) range_cyclic && index == 0. In this case we've started at the
+ beginning and there is no need to loop around a second time.
+3) range_cyclic && we started at index > 0 and we've reached the end of
+ the file without satisfying our nr_to_write.
+
+This patch fixes both of our writepages implementations to make sure
+these rules hold true. This fixed our overzealous CG OOMs in
+production.
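+
+Schematically, the resulting logic is (a condensed sketch, not the
+verbatim diff below):
+
+  scanned = 0;
+  if (wbc->range_cyclic) {
+          index = mapping->writeback_index;  /* may be > 0 */
+          end = -1;
+          scanned = (index == 0);            /* rule 2 */
+  } else {
+          index = wbc->range_start >> PAGE_SHIFT;
+          end = wbc->range_end >> PAGE_SHIFT;
+          scanned = 1;                       /* rule 1 */
+  }
+  /* rule 3 is covered by the existing retry path:
+   * if (!scanned && !done) { scanned = 1; index = 0; goto retry; }
+   */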
+
+Fixes: d1310b2e0cd9 ("Btrfs: Split the extent_map code into two parts")
+Signed-off-by: Josef Bacik <josef@toxicpanda.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+[ add comment ]
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/extent_io.c | 12 ++++++++++--
+ 1 file changed, 10 insertions(+), 2 deletions(-)
+
+--- a/fs/btrfs/extent_io.c
++++ b/fs/btrfs/extent_io.c
+@@ -3938,6 +3938,11 @@ int btree_write_cache_pages(struct addre
+ if (wbc->range_cyclic) {
+ index = mapping->writeback_index; /* Start from prev offset */
+ end = -1;
++ /*
++ * Start from the beginning does not need to cycle over the
++ * range, mark it as scanned.
++ */
++ scanned = (index == 0);
+ } else {
+ index = wbc->range_start >> PAGE_SHIFT;
+ end = wbc->range_end >> PAGE_SHIFT;
+@@ -3955,7 +3960,6 @@ retry:
+ tag))) {
+ unsigned i;
+
+- scanned = 1;
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pvec.pages[i];
+
+@@ -4084,6 +4088,11 @@ static int extent_write_cache_pages(stru
+ if (wbc->range_cyclic) {
+ index = mapping->writeback_index; /* Start from prev offset */
+ end = -1;
++ /*
++ * Start from the beginning does not need to cycle over the
++ * range, mark it as scanned.
++ */
++ scanned = (index == 0);
+ } else {
+ index = wbc->range_start >> PAGE_SHIFT;
+ end = wbc->range_end >> PAGE_SHIFT;
+@@ -4117,7 +4126,6 @@ retry:
+ &index, end, tag))) {
+ unsigned i;
+
+- scanned = 1;
+ for (i = 0; i < nr_pages; i++) {
+ struct page *page = pvec.pages[i];
+
--- /dev/null
+From 05840710149c7d1a78ea85a2db5723f706e97d8f Mon Sep 17 00:00:00 2001
+From: Nikolay Borisov <nborisov@suse.com>
+Date: Fri, 10 Jan 2020 14:11:34 +0200
+Subject: btrfs: Handle another split brain scenario with metadata uuid feature
+
+From: Nikolay Borisov <nborisov@suse.com>
+
+commit 05840710149c7d1a78ea85a2db5723f706e97d8f upstream.
+
+There is one more case which isn't handled by the original metadata
+uuid work. Namely, when a filesystem has the METADATA_UUID incompat bit
+set and the user decides to change the FSID back to the original one,
+i.e. to make metadata_uuid and fsid match. In case of a power failure
+while this operation is in progress, we could end up in a situation
+where some of the disks have the incompat bit removed and the other
+half have both the METADATA_UUID_INCOMPAT and FSID_CHANGING_IN_PROGRESS
+flags set.
+
+This patch handles the case where a disk that has successfully changed
+its FSID such that it equals METADATA_UUID is scanned first. When a
+disk that still has both the METADATA_UUID_INCOMPAT and
+FSID_CHANGING_IN_PROGRESS flags is subsequently scanned,
+find_fsid_changed won't be able to find an appropriate btrfs_fs_devices.
+This is fixed by extending find_fsid_changed to also find
+btrfs_fs_devices whose metadata_uuid and fsid are the same and match
+the metadata_uuid of the currently scanned device.
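+
+Schematically, the state being handled (with M the metadata uuid and F
+the in-between fsid):
+
+  disk A (scanned first): fsid == metadata_uuid == M  (change complete)
+  disk B: METADATA_UUID incompat + FSID_CHANGING_IN_PROGRESS set,
+          metadata_uuid == M, fsid == F
+
+find_fsid_changed must match disk B against the fs_devices created from
+disk A, which is the new "unchanged UUIDs" subcase below.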
+
+Fixes: cc5de4e70256 ("btrfs: Handle final split-brain possibility during fsid change")
+Reviewed-by: Josef Bacik <josef@toxicpanda.com>
+Reported-by: Su Yue <Damenly_Su@gmx.com>
+Signed-off-by: Nikolay Borisov <nborisov@suse.com>
+Reviewed-by: David Sterba <dsterba@suse.com>
+Signed-off-by: David Sterba <dsterba@suse.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ fs/btrfs/volumes.c | 17 ++++++++++++++---
+ 1 file changed, 14 insertions(+), 3 deletions(-)
+
+--- a/fs/btrfs/volumes.c
++++ b/fs/btrfs/volumes.c
+@@ -881,17 +881,28 @@ static struct btrfs_fs_devices *find_fsi
+ /*
+ * Handles the case where scanned device is part of an fs that had
+ * multiple successful changes of FSID but curently device didn't
+- * observe it. Meaning our fsid will be different than theirs.
++ * observe it. Meaning our fsid will be different than theirs. We need
++ * to handle two subcases :
++ * 1 - The fs still continues to have different METADATA/FSID uuids.
++ * 2 - The fs is switched back to its original FSID (METADATA/FSID
++ * are equal).
+ */
+ list_for_each_entry(fs_devices, &fs_uuids, fs_list) {
++ /* Changed UUIDs */
+ if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid,
+ BTRFS_FSID_SIZE) != 0 &&
+ memcmp(fs_devices->metadata_uuid, disk_super->metadata_uuid,
+ BTRFS_FSID_SIZE) == 0 &&
+ memcmp(fs_devices->fsid, disk_super->fsid,
+- BTRFS_FSID_SIZE) != 0) {
++ BTRFS_FSID_SIZE) != 0)
++ return fs_devices;
++
++ /* Unchanged UUIDs */
++ if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid,
++ BTRFS_FSID_SIZE) == 0 &&
++ memcmp(fs_devices->fsid, disk_super->metadata_uuid,
++ BTRFS_FSID_SIZE) == 0)
+ return fs_devices;
+- }
+ }
+
+ return NULL;
--- /dev/null
+From 73669cc556462f4e50376538d77ee312142e8a8a Mon Sep 17 00:00:00 2001
+From: Herbert Xu <herbert@gondor.apana.org.au>
+Date: Sat, 7 Dec 2019 22:15:15 +0800
+Subject: crypto: api - Fix race condition in crypto_spawn_alg
+
+From: Herbert Xu <herbert@gondor.apana.org.au>
+
+commit 73669cc556462f4e50376538d77ee312142e8a8a upstream.
+
+The function crypto_spawn_alg is racy because it drops the lock
+before shooting the dying algorithm. The algorithm could disappear
+altogether before we shoot it.
+
+This patch fixes it by moving the shooting into the locked section.
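+
+Schematically, the window in the old code:
+
+  CPU0 (crypto_spawn_alg)             CPU1
+  up_read(&crypto_alg_sem);
+                                      unregisters alg, frees it
+  crypto_shoot_alg(alg);              /* use after free */
+
+With the CRYPTO_ALG_DYING update done while crypto_alg_sem is still
+held, the separate crypto_shoot_alg() export is no longer needed
+outside of api.c.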
+
+Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns")
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ crypto/algapi.c | 16 +++++-----------
+ crypto/api.c | 3 +--
+ crypto/internal.h | 1 -
+ 3 files changed, 6 insertions(+), 14 deletions(-)
+
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -697,22 +697,16 @@ EXPORT_SYMBOL_GPL(crypto_drop_spawn);
+ static struct crypto_alg *crypto_spawn_alg(struct crypto_spawn *spawn)
+ {
+ struct crypto_alg *alg;
+- struct crypto_alg *alg2;
+
+ down_read(&crypto_alg_sem);
+ alg = spawn->alg;
+- alg2 = alg;
+- if (alg2)
+- alg2 = crypto_mod_get(alg2);
+- up_read(&crypto_alg_sem);
+-
+- if (!alg2) {
+- if (alg)
+- crypto_shoot_alg(alg);
+- return ERR_PTR(-EAGAIN);
++ if (alg && !crypto_mod_get(alg)) {
++ alg->cra_flags |= CRYPTO_ALG_DYING;
++ alg = NULL;
+ }
++ up_read(&crypto_alg_sem);
+
+- return alg;
++ return alg ?: ERR_PTR(-EAGAIN);
+ }
+
+ struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type,
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -346,13 +346,12 @@ static unsigned int crypto_ctxsize(struc
+ return len;
+ }
+
+-void crypto_shoot_alg(struct crypto_alg *alg)
++static void crypto_shoot_alg(struct crypto_alg *alg)
+ {
+ down_write(&crypto_alg_sem);
+ alg->cra_flags |= CRYPTO_ALG_DYING;
+ up_write(&crypto_alg_sem);
+ }
+-EXPORT_SYMBOL_GPL(crypto_shoot_alg);
+
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ u32 mask)
+--- a/crypto/internal.h
++++ b/crypto/internal.h
+@@ -68,7 +68,6 @@ void crypto_alg_tested(const char *name,
+ void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list,
+ struct crypto_alg *nalg);
+ void crypto_remove_final(struct list_head *list);
+-void crypto_shoot_alg(struct crypto_alg *alg);
+ struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type,
+ u32 mask);
+ void *crypto_create_tfm(struct crypto_alg *alg,
--- /dev/null
+From 2bbb3375d967155bccc86a5887d4a6e29c56b683 Mon Sep 17 00:00:00 2001
+From: Herbert Xu <herbert@gondor.apana.org.au>
+Date: Wed, 11 Dec 2019 10:50:11 +0800
+Subject: crypto: api - fix unexpectedly getting generic implementation
+
+From: Herbert Xu <herbert@gondor.apana.org.au>
+
+commit 2bbb3375d967155bccc86a5887d4a6e29c56b683 upstream.
+
+When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an
+algorithm that needs to be instantiated using a template will always get
+the generic implementation, even when an accelerated one is available.
+
+This happens because the extra self-tests for the accelerated
+implementation allocate the generic implementation for comparison
+purposes, and then crypto_alg_tested() for the generic implementation
+"fulfills" the original request (i.e. sets crypto_larval::adult).
+
+This patch fixes this by only fulfilling the original request if
+we are currently the best outstanding larval as judged by the
+priority. If we're not the best then we will ask all waiters on
+that larval request to retry the lookup.
+
+Note that this patch introduces a behaviour change when the module
+providing the new algorithm is unregistered during the process.
+Previously we would have failed with ENOENT; after the patch we
+will instead redo the lookup.
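+
+A hedged usage sketch of the visible effect (the driver name is
+illustrative and depends on the platform):
+
+  struct crypto_aead *tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
+
+  if (!IS_ERR(tfm)) {
+          /* With extra self-tests enabled, the first lookup used to
+           * resolve to the generic template instantiation; it now
+           * resolves to the highest-priority implementation.
+           */
+          pr_info("driver: %s\n",
+                  crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm)));
+          crypto_free_aead(tfm);
+  }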
+
+Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...")
+Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...")
+Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...")
+Reported-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Reviewed-by: Eric Biggers <ebiggers@google.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ crypto/algapi.c | 24 +++++++++++++++++++++---
+ crypto/api.c | 4 +++-
+ 2 files changed, 24 insertions(+), 4 deletions(-)
+
+--- a/crypto/algapi.c
++++ b/crypto/algapi.c
+@@ -257,6 +257,7 @@ void crypto_alg_tested(const char *name,
+ struct crypto_alg *alg;
+ struct crypto_alg *q;
+ LIST_HEAD(list);
++ bool best;
+
+ down_write(&crypto_alg_sem);
+ list_for_each_entry(q, &crypto_alg_list, cra_list) {
+@@ -280,6 +281,21 @@ found:
+
+ alg->cra_flags |= CRYPTO_ALG_TESTED;
+
++ /* Only satisfy larval waiters if we are the best. */
++ best = true;
++ list_for_each_entry(q, &crypto_alg_list, cra_list) {
++ if (crypto_is_moribund(q) || !crypto_is_larval(q))
++ continue;
++
++ if (strcmp(alg->cra_name, q->cra_name))
++ continue;
++
++ if (q->cra_priority > alg->cra_priority) {
++ best = false;
++ break;
++ }
++ }
++
+ list_for_each_entry(q, &crypto_alg_list, cra_list) {
+ if (q == alg)
+ continue;
+@@ -303,10 +319,12 @@ found:
+ continue;
+ if ((q->cra_flags ^ alg->cra_flags) & larval->mask)
+ continue;
+- if (!crypto_mod_get(alg))
+- continue;
+
+- larval->adult = alg;
++ if (best && crypto_mod_get(alg))
++ larval->adult = alg;
++ else
++ larval->adult = ERR_PTR(-EAGAIN);
++
+ continue;
+ }
+
+--- a/crypto/api.c
++++ b/crypto/api.c
+@@ -97,7 +97,7 @@ static void crypto_larval_destroy(struct
+ struct crypto_larval *larval = (void *)alg;
+
+ BUG_ON(!crypto_is_larval(alg));
+- if (larval->adult)
++ if (!IS_ERR_OR_NULL(larval->adult))
+ crypto_mod_put(larval->adult);
+ kfree(larval);
+ }
+@@ -178,6 +178,8 @@ static struct crypto_alg *crypto_larval_
+ alg = ERR_PTR(-ETIMEDOUT);
+ else if (!alg)
+ alg = ERR_PTR(-ENOENT);
++ else if (IS_ERR(alg))
++ ;
+ else if (crypto_is_test_larval(larval) &&
+ !(alg->cra_flags & CRYPTO_ALG_TESTED))
+ alg = ERR_PTR(-EAGAIN);
--- /dev/null
+From 5441c6507bc84166e9227e9370a56c57ba13794a Mon Sep 17 00:00:00 2001
+From: Ard Biesheuvel <ardb@kernel.org>
+Date: Thu, 28 Nov 2019 13:55:31 +0100
+Subject: crypto: arm64/ghash-neon - bump priority to 150
+
+From: Ard Biesheuvel <ardb@kernel.org>
+
+commit 5441c6507bc84166e9227e9370a56c57ba13794a upstream.
+
+The SIMD based GHASH implementation for arm64 is typically much faster
+than the generic one, and doesn't use any lookup tables, so it is
+clearly preferred when available. Since the crypto core resolves a name
+like "ghash" to the implementation with the highest cra_priority, bump
+the priority above the generic one's to reflect that.
+
+Fixes: 5a22b198cd527447 ("crypto: arm64/ghash - register PMULL variants ...")
+Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/arm64/crypto/ghash-ce-glue.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm64/crypto/ghash-ce-glue.c
++++ b/arch/arm64/crypto/ghash-ce-glue.c
+@@ -261,7 +261,7 @@ static int ghash_setkey(struct crypto_sh
+ static struct shash_alg ghash_alg[] = {{
+ .base.cra_name = "ghash",
+ .base.cra_driver_name = "ghash-neon",
+- .base.cra_priority = 100,
++ .base.cra_priority = 150,
+ .base.cra_blocksize = GHASH_BLOCK_SIZE,
+ .base.cra_ctxsize = sizeof(struct ghash_key),
+ .base.cra_module = THIS_MODULE,
--- /dev/null
+From 781a08d9740afa73357f1a60d45d7c93d7cca2dd Mon Sep 17 00:00:00 2001
+From: Tudor Ambarus <tudor.ambarus@microchip.com>
+Date: Thu, 5 Dec 2019 09:54:01 +0000
+Subject: crypto: atmel-aes - Fix counter overflow in CTR mode
+
+From: Tudor Ambarus <tudor.ambarus@microchip.com>
+
+commit 781a08d9740afa73357f1a60d45d7c93d7cca2dd upstream.
+
+A 32 bit block counter is not supported by any of our AES IPs; they all
+implement a 16 bit block counter. Drop the 32 bit block counter logic.
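+
+A worked example of the 16 bit wrap handling (the numbers are
+illustrative; the logic mirrors the hunk below): with the low 16 bits
+of the IV counter at 0xfffd and a 6 block request, only 3 blocks
+(counter values 0xfffd..0xffff) fit before the wrap, so the transfer
+is fragmented:
+
+  start = ctr & 0xffff;                /* 0xfffd */
+  end = start + blocks - 1;            /* wraps around to 0x0002 */
+  if (blocks >> 16 || end < start) {
+          ctr |= 0xffff;
+          datalen = AES_BLOCK_SIZE * (0x10000 - start); /* 3 blocks */
+          fragmented = true;
+  }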
+
+Fixes: fcac83656a3e ("crypto: atmel-aes - fix the counter overflow in CTR mode")
+Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/crypto/atmel-aes.c | 37 ++++++++++++-------------------------
+ 1 file changed, 12 insertions(+), 25 deletions(-)
+
+--- a/drivers/crypto/atmel-aes.c
++++ b/drivers/crypto/atmel-aes.c
+@@ -88,7 +88,6 @@
+ struct atmel_aes_caps {
+ bool has_dualbuff;
+ bool has_cfb64;
+- bool has_ctr32;
+ bool has_gcm;
+ bool has_xts;
+ bool has_authenc;
+@@ -1013,8 +1012,9 @@ static int atmel_aes_ctr_transfer(struct
+ struct atmel_aes_ctr_ctx *ctx = atmel_aes_ctr_ctx_cast(dd->ctx);
+ struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq);
+ struct scatterlist *src, *dst;
+- u32 ctr, blocks;
+ size_t datalen;
++ u32 ctr;
++ u16 blocks, start, end;
+ bool use_dma, fragmented = false;
+
+ /* Check for transfer completion. */
+@@ -1026,27 +1026,17 @@ static int atmel_aes_ctr_transfer(struct
+ datalen = req->nbytes - ctx->offset;
+ blocks = DIV_ROUND_UP(datalen, AES_BLOCK_SIZE);
+ ctr = be32_to_cpu(ctx->iv[3]);
+- if (dd->caps.has_ctr32) {
+- /* Check 32bit counter overflow. */
+- u32 start = ctr;
+- u32 end = start + blocks - 1;
+-
+- if (end < start) {
+- ctr |= 0xffffffff;
+- datalen = AES_BLOCK_SIZE * -start;
+- fragmented = true;
+- }
+- } else {
+- /* Check 16bit counter overflow. */
+- u16 start = ctr & 0xffff;
+- u16 end = start + (u16)blocks - 1;
+-
+- if (blocks >> 16 || end < start) {
+- ctr |= 0xffff;
+- datalen = AES_BLOCK_SIZE * (0x10000-start);
+- fragmented = true;
+- }
++
++ /* Check 16bit counter overflow. */
++ start = ctr & 0xffff;
++ end = start + blocks - 1;
++
++ if (blocks >> 16 || end < start) {
++ ctr |= 0xffff;
++ datalen = AES_BLOCK_SIZE * (0x10000 - start);
++ fragmented = true;
+ }
++
+ use_dma = (datalen >= ATMEL_AES_DMA_THRESHOLD);
+
+ /* Jump to offset. */
+@@ -2550,7 +2540,6 @@ static void atmel_aes_get_cap(struct atm
+ {
+ dd->caps.has_dualbuff = 0;
+ dd->caps.has_cfb64 = 0;
+- dd->caps.has_ctr32 = 0;
+ dd->caps.has_gcm = 0;
+ dd->caps.has_xts = 0;
+ dd->caps.has_authenc = 0;
+@@ -2561,7 +2550,6 @@ static void atmel_aes_get_cap(struct atm
+ case 0x500:
+ dd->caps.has_dualbuff = 1;
+ dd->caps.has_cfb64 = 1;
+- dd->caps.has_ctr32 = 1;
+ dd->caps.has_gcm = 1;
+ dd->caps.has_xts = 1;
+ dd->caps.has_authenc = 1;
+@@ -2570,7 +2558,6 @@ static void atmel_aes_get_cap(struct atm
+ case 0x200:
+ dd->caps.has_dualbuff = 1;
+ dd->caps.has_cfb64 = 1;
+- dd->caps.has_ctr32 = 1;
+ dd->caps.has_gcm = 1;
+ dd->caps.max_burst_size = 4;
+ break;
--- /dev/null
+From 11548f5a5747813ff84bed6f2ea01100053b0d8d Mon Sep 17 00:00:00 2001
+From: Ard Biesheuvel <ardb@kernel.org>
+Date: Wed, 27 Nov 2019 13:01:36 +0100
+Subject: crypto: ccp - set max RSA modulus size for v3 platform devices as well
+
+From: Ard Biesheuvel <ardb@kernel.org>
+
+commit 11548f5a5747813ff84bed6f2ea01100053b0d8d upstream.
+
+AMD Seattle incorporates a non-PCI version of the v3 CCP crypto
+accelerator, and this version was left behind when the maximum
+RSA modulus size was parameterized in order to support v5 hardware
+which supports larger moduli than v3 hardware does. Due to this
+oversight, RSA acceleration no longer works at all on these systems.
+
+Fix this by setting the .rsamax property to the appropriate value
+for v3 platform hardware.
+
+Fixes: e28c190db66830c0 ("csrypto: ccp - Expand RSA support for a v5 ccp")
+Cc: Gary R Hook <gary.hook@amd.com>
+Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
+Acked-by: Gary R Hook <gary.hook@amd.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/crypto/ccp/ccp-dev-v3.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/drivers/crypto/ccp/ccp-dev-v3.c
++++ b/drivers/crypto/ccp/ccp-dev-v3.c
+@@ -586,6 +586,7 @@ const struct ccp_vdata ccpv3_platform =
+ .setup = NULL,
+ .perform = &ccp3_actions,
+ .offset = 0,
++ .rsamax = CCP_RSA_MAX_WIDTH,
+ };
+
+ const struct ccp_vdata ccpv3 = {
--- /dev/null
+From 484a897ffa3005f16cd9a31efd747bcf8155826f Mon Sep 17 00:00:00 2001
+From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Date: Tue, 19 Nov 2019 13:42:57 +0800
+Subject: crypto: hisilicon - Use the offset fields in sqe to avoid need to split scatterlists
+
+From: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+
+commit 484a897ffa3005f16cd9a31efd747bcf8155826f upstream.
+
+We can configure the sgl offset fields in the ZIP sqe to let the ZIP
+engine read/write sgl data while skipping the leading (compression
+header) bytes. Hence there is no need to split the sgl.
+
+Fixes: 62c455ca853e ("crypto: hisilicon - add HiSilicon ZIP accelerator support")
+Signed-off-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
+Signed-off-by: Zhou Wang <wangzhou1@hisilicon.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/crypto/hisilicon/Kconfig | 1
+ drivers/crypto/hisilicon/zip/zip.h | 4 +
+ drivers/crypto/hisilicon/zip/zip_crypto.c | 92 +++++++-----------------------
+ 3 files changed, 27 insertions(+), 70 deletions(-)
+
+--- a/drivers/crypto/hisilicon/Kconfig
++++ b/drivers/crypto/hisilicon/Kconfig
+@@ -35,6 +35,5 @@ config CRYPTO_DEV_HISI_ZIP
+ depends on ARM64 && PCI && PCI_MSI
+ select CRYPTO_DEV_HISI_QM
+ select CRYPTO_HISI_SGL
+- select SG_SPLIT
+ help
+ Support for HiSilicon ZIP Driver
+--- a/drivers/crypto/hisilicon/zip/zip.h
++++ b/drivers/crypto/hisilicon/zip/zip.h
+@@ -12,6 +12,10 @@
+
+ /* hisi_zip_sqe dw3 */
+ #define HZIP_BD_STATUS_M GENMASK(7, 0)
++/* hisi_zip_sqe dw7 */
++#define HZIP_IN_SGE_DATA_OFFSET_M GENMASK(23, 0)
++/* hisi_zip_sqe dw8 */
++#define HZIP_OUT_SGE_DATA_OFFSET_M GENMASK(23, 0)
+ /* hisi_zip_sqe dw9 */
+ #define HZIP_REQ_TYPE_M GENMASK(7, 0)
+ #define HZIP_ALG_TYPE_ZLIB 0x02
+--- a/drivers/crypto/hisilicon/zip/zip_crypto.c
++++ b/drivers/crypto/hisilicon/zip/zip_crypto.c
+@@ -45,10 +45,8 @@ enum hisi_zip_alg_type {
+
+ struct hisi_zip_req {
+ struct acomp_req *req;
+- struct scatterlist *src;
+- struct scatterlist *dst;
+- size_t slen;
+- size_t dlen;
++ int sskip;
++ int dskip;
+ struct hisi_acc_hw_sgl *hw_src;
+ struct hisi_acc_hw_sgl *hw_dst;
+ dma_addr_t dma_src;
+@@ -94,13 +92,15 @@ static void hisi_zip_config_tag(struct h
+
+ static void hisi_zip_fill_sqe(struct hisi_zip_sqe *sqe, u8 req_type,
+ dma_addr_t s_addr, dma_addr_t d_addr, u32 slen,
+- u32 dlen)
++ u32 dlen, int sskip, int dskip)
+ {
+ memset(sqe, 0, sizeof(struct hisi_zip_sqe));
+
+- sqe->input_data_length = slen;
++ sqe->input_data_length = slen - sskip;
++ sqe->dw7 = FIELD_PREP(HZIP_IN_SGE_DATA_OFFSET_M, sskip);
++ sqe->dw8 = FIELD_PREP(HZIP_OUT_SGE_DATA_OFFSET_M, dskip);
+ sqe->dw9 = FIELD_PREP(HZIP_REQ_TYPE_M, req_type);
+- sqe->dest_avail_out = dlen;
++ sqe->dest_avail_out = dlen - dskip;
+ sqe->source_addr_l = lower_32_bits(s_addr);
+ sqe->source_addr_h = upper_32_bits(s_addr);
+ sqe->dest_addr_l = lower_32_bits(d_addr);
+@@ -301,11 +301,6 @@ static void hisi_zip_remove_req(struct h
+ {
+ struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
+
+- if (qp_ctx->qp->alg_type == HZIP_ALG_TYPE_COMP)
+- kfree(req->dst);
+- else
+- kfree(req->src);
+-
+ write_lock(&req_q->req_lock);
+ clear_bit(req->req_id, req_q->req_bitmap);
+ memset(req, 0, sizeof(struct hisi_zip_req));
+@@ -333,8 +328,8 @@ static void hisi_zip_acomp_cb(struct his
+ }
+ dlen = sqe->produced;
+
+- hisi_acc_sg_buf_unmap(dev, req->src, req->hw_src);
+- hisi_acc_sg_buf_unmap(dev, req->dst, req->hw_dst);
++ hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src);
++ hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst);
+
+ head_size = (qp->alg_type == 0) ? TO_HEAD_SIZE(qp->req_type) : 0;
+ acomp_req->dlen = dlen + head_size;
+@@ -428,20 +423,6 @@ static size_t get_comp_head_size(struct
+ }
+ }
+
+-static int get_sg_skip_bytes(struct scatterlist *sgl, size_t bytes,
+- size_t remains, struct scatterlist **out)
+-{
+-#define SPLIT_NUM 2
+- size_t split_sizes[SPLIT_NUM];
+- int out_mapped_nents[SPLIT_NUM];
+-
+- split_sizes[0] = bytes;
+- split_sizes[1] = remains;
+-
+- return sg_split(sgl, 0, 0, SPLIT_NUM, split_sizes, out,
+- out_mapped_nents, GFP_KERNEL);
+-}
+-
+ static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req,
+ struct hisi_zip_qp_ctx *qp_ctx,
+ size_t head_size, bool is_comp)
+@@ -449,31 +430,7 @@ static struct hisi_zip_req *hisi_zip_cre
+ struct hisi_zip_req_q *req_q = &qp_ctx->req_q;
+ struct hisi_zip_req *q = req_q->q;
+ struct hisi_zip_req *req_cache;
+- struct scatterlist *out[2];
+- struct scatterlist *sgl;
+- size_t len;
+- int ret, req_id;
+-
+- /*
+- * remove/add zlib/gzip head, as hardware operations do not include
+- * comp head. so split req->src to get sgl without heads in acomp, or
+- * add comp head to req->dst ahead of that hardware output compressed
+- * data in sgl splited from req->dst without comp head.
+- */
+- if (is_comp) {
+- sgl = req->dst;
+- len = req->dlen - head_size;
+- } else {
+- sgl = req->src;
+- len = req->slen - head_size;
+- }
+-
+- ret = get_sg_skip_bytes(sgl, head_size, len, out);
+- if (ret)
+- return ERR_PTR(ret);
+-
+- /* sgl for comp head is useless, so free it now */
+- kfree(out[0]);
++ int req_id;
+
+ write_lock(&req_q->req_lock);
+
+@@ -481,7 +438,6 @@ static struct hisi_zip_req *hisi_zip_cre
+ if (req_id >= req_q->size) {
+ write_unlock(&req_q->req_lock);
+ dev_dbg(&qp_ctx->qp->qm->pdev->dev, "req cache is full!\n");
+- kfree(out[1]);
+ return ERR_PTR(-EBUSY);
+ }
+ set_bit(req_id, req_q->req_bitmap);
+@@ -489,16 +445,13 @@ static struct hisi_zip_req *hisi_zip_cre
+ req_cache = q + req_id;
+ req_cache->req_id = req_id;
+ req_cache->req = req;
++
+ if (is_comp) {
+- req_cache->src = req->src;
+- req_cache->dst = out[1];
+- req_cache->slen = req->slen;
+- req_cache->dlen = req->dlen - head_size;
++ req_cache->sskip = 0;
++ req_cache->dskip = head_size;
+ } else {
+- req_cache->src = out[1];
+- req_cache->dst = req->dst;
+- req_cache->slen = req->slen - head_size;
+- req_cache->dlen = req->dlen;
++ req_cache->sskip = head_size;
++ req_cache->dskip = 0;
+ }
+
+ write_unlock(&req_q->req_lock);
+@@ -510,6 +463,7 @@ static int hisi_zip_do_work(struct hisi_
+ struct hisi_zip_qp_ctx *qp_ctx)
+ {
+ struct hisi_zip_sqe *zip_sqe = &qp_ctx->zip_sqe;
++ struct acomp_req *a_req = req->req;
+ struct hisi_qp *qp = qp_ctx->qp;
+ struct device *dev = &qp->qm->pdev->dev;
+ struct hisi_acc_sgl_pool *pool = &qp_ctx->sgl_pool;
+@@ -517,16 +471,16 @@ static int hisi_zip_do_work(struct hisi_
+ dma_addr_t output;
+ int ret;
+
+- if (!req->src || !req->slen || !req->dst || !req->dlen)
++ if (!a_req->src || !a_req->slen || !a_req->dst || !a_req->dlen)
+ return -EINVAL;
+
+- req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, req->src, pool,
++ req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->src, pool,
+ req->req_id << 1, &input);
+ if (IS_ERR(req->hw_src))
+ return PTR_ERR(req->hw_src);
+ req->dma_src = input;
+
+- req->hw_dst = hisi_acc_sg_buf_map_to_hw_sgl(dev, req->dst, pool,
++ req->hw_dst = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->dst, pool,
+ (req->req_id << 1) + 1,
+ &output);
+ if (IS_ERR(req->hw_dst)) {
+@@ -535,8 +489,8 @@ static int hisi_zip_do_work(struct hisi_
+ }
+ req->dma_dst = output;
+
+- hisi_zip_fill_sqe(zip_sqe, qp->req_type, input, output, req->slen,
+- req->dlen);
++ hisi_zip_fill_sqe(zip_sqe, qp->req_type, input, output, a_req->slen,
++ a_req->dlen, req->sskip, req->dskip);
+ hisi_zip_config_buf_type(zip_sqe, HZIP_SGL);
+ hisi_zip_config_tag(zip_sqe, req->req_id);
+
+@@ -548,9 +502,9 @@ static int hisi_zip_do_work(struct hisi_
+ return -EINPROGRESS;
+
+ err_unmap_output:
+- hisi_acc_sg_buf_unmap(dev, req->dst, req->hw_dst);
++ hisi_acc_sg_buf_unmap(dev, a_req->dst, req->hw_dst);
+ err_unmap_input:
+- hisi_acc_sg_buf_unmap(dev, req->src, req->hw_src);
++ hisi_acc_sg_buf_unmap(dev, a_req->src, req->hw_src);
+ return ret;
+ }
+
--- /dev/null
+From e8d998264bffade3cfe0536559f712ab9058d654 Mon Sep 17 00:00:00 2001
+From: Herbert Xu <herbert@gondor.apana.org.au>
+Date: Fri, 29 Nov 2019 16:40:24 +0800
+Subject: crypto: pcrypt - Do not clear MAY_SLEEP flag in original request
+
+From: Herbert Xu <herbert@gondor.apana.org.au>
+
+commit e8d998264bffade3cfe0536559f712ab9058d654 upstream.
+
+We should not be modifying the original request's MAY_SLEEP flag
+upon completion. It makes no sense to do so anyway.
+
+Reported-by: Eric Biggers <ebiggers@kernel.org>
+Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...")
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Tested-by: Eric Biggers <ebiggers@kernel.org>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ crypto/pcrypt.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+--- a/crypto/pcrypt.c
++++ b/crypto/pcrypt.c
+@@ -71,7 +71,6 @@ static void pcrypt_aead_done(struct cryp
+ struct padata_priv *padata = pcrypt_request_padata(preq);
+
+ padata->info = err;
+- req->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
+
+ padata_do_serial(padata);
+ }
--- /dev/null
+From 7f8c36fe9be46862c4f3c5302f769378028a34fa Mon Sep 17 00:00:00 2001
+From: Chuhong Yuan <hslester96@gmail.com>
+Date: Tue, 10 Dec 2019 00:21:44 +0800
+Subject: crypto: picoxcell - adjust the position of tasklet_init and fix missed tasklet_kill
+
+From: Chuhong Yuan <hslester96@gmail.com>
+
+commit 7f8c36fe9be46862c4f3c5302f769378028a34fa upstream.
+
+Since the tasklet needs to be initialized before the IRQ handler is
+registered, move the tasklet_init call earlier to fix the wrong order.
+
+Besides, to fix the missing tasklet_kill, this patch adds a helper
+function and uses devm_add_action to kill the tasklet automatically on
+device removal.
+
+Fixes: ce92136843cb ("crypto: picoxcell - add support for the picoxcell crypto engines")
+Signed-off-by: Chuhong Yuan <hslester96@gmail.com>
+Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ drivers/crypto/picoxcell_crypto.c | 15 +++++++++++++--
+ 1 file changed, 13 insertions(+), 2 deletions(-)
+
+--- a/drivers/crypto/picoxcell_crypto.c
++++ b/drivers/crypto/picoxcell_crypto.c
+@@ -1613,6 +1613,11 @@ static const struct of_device_id spacc_o
+ MODULE_DEVICE_TABLE(of, spacc_of_id_table);
+ #endif /* CONFIG_OF */
+
++static void spacc_tasklet_kill(void *data)
++{
++ tasklet_kill(data);
++}
++
+ static int spacc_probe(struct platform_device *pdev)
+ {
+ int i, err, ret;
+@@ -1655,6 +1660,14 @@ static int spacc_probe(struct platform_d
+ return -ENXIO;
+ }
+
++ tasklet_init(&engine->complete, spacc_spacc_complete,
++ (unsigned long)engine);
++
++ ret = devm_add_action(&pdev->dev, spacc_tasklet_kill,
++ &engine->complete);
++ if (ret)
++ return ret;
++
+ if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0,
+ engine->name, engine)) {
+ dev_err(engine->dev, "failed to request IRQ\n");
+@@ -1712,8 +1725,6 @@ static int spacc_probe(struct platform_d
+ INIT_LIST_HEAD(&engine->completed);
+ INIT_LIST_HEAD(&engine->in_progress);
+ engine->in_flight = 0;
+- tasklet_init(&engine->complete, spacc_spacc_complete,
+- (unsigned long)engine);
+
+ platform_set_drvdata(pdev, engine);
+
--- /dev/null
+From 35b9211c0a2427e8f39e534f442f43804fc8d5ca Mon Sep 17 00:00:00 2001
+From: Andrii Nakryiko <andriin@fb.com>
+Date: Fri, 24 Jan 2020 12:18:46 -0800
+Subject: libbpf: Fix realloc usage in bpf_core_find_cands
+
+From: Andrii Nakryiko <andriin@fb.com>
+
+commit 35b9211c0a2427e8f39e534f442f43804fc8d5ca upstream.
+
+Fix a bug that requested an invalid size for the reallocated array when
+constructing the CO-RE relocation candidate list. This can cause
+problems if there are many potential candidates and very fine-grained
+memory allocator bucket sizes are used.
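+
+Schematically (growing an array of n elements needs n * sizeof(elem)
+bytes, while realloc() takes a plain byte count):
+
+  new_ids = realloc(cand_ids->data, cand_ids->len + 1);      /* bytes    */
+  new_ids = reallocarray(cand_ids->data, cand_ids->len + 1,
+                         sizeof(*cand_ids->data));           /* elements */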
+
+Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm")
+Reported-by: William Smith <williampsmith@fb.com>
+Signed-off-by: Andrii Nakryiko <andriin@fb.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Yonghong Song <yhs@fb.com>
+Link: https://lore.kernel.org/bpf/20200124201847.212528-1-andriin@fb.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/lib/bpf/libbpf.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+--- a/tools/lib/bpf/libbpf.c
++++ b/tools/lib/bpf/libbpf.c
+@@ -2541,7 +2541,9 @@ static struct ids_vec *bpf_core_find_can
+ if (strncmp(local_name, targ_name, local_essent_len) == 0) {
+ pr_debug("[%d] %s: found candidate [%d] %s\n",
+ local_type_id, local_name, i, targ_name);
+- new_ids = realloc(cand_ids->data, cand_ids->len + 1);
++ new_ids = reallocarray(cand_ids->data,
++ cand_ids->len + 1,
++ sizeof(*cand_ids->data));
+ if (!new_ids) {
+ err = -ENOMEM;
+ goto err_out;
--- /dev/null
+From f1003b787c00fbaa4b11619c6b23a885bfce8f07 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Bj=C3=B6rn=20T=C3=B6pel?= <bjorn.topel@gmail.com>
+Date: Mon, 16 Dec 2019 10:13:35 +0100
+Subject: riscv, bpf: Fix broken BPF tail calls
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Björn Töpel <bjorn.topel@gmail.com>
+
+commit f1003b787c00fbaa4b11619c6b23a885bfce8f07 upstream.
+
+The BPF JIT incorrectly clobbered the a0 register in the tail call
+epilogue, and did not flag usage of the s5 register when the BPF stack
+was being used.
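+
+A sketch of the a0 part: in this JIT a0 carries BPF R1 (the context
+argument), so a tail call must not overwrite it with the R0 mapping in
+a5; only a real return through ra should set the return value:
+
+  if (reg == RV_REG_RA)
+          emit(rv_addi(RV_REG_A0, RV_REG_A5, 0), ctx);
+  emit(rv_jalr(RV_REG_ZERO, reg, 0), ctx);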
+
+Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G")
+Signed-off-by: Björn Töpel <bjorn.topel@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Link: https://lore.kernel.org/bpf/20191216091343.23260-2-bjorn.topel@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ arch/riscv/net/bpf_jit_comp.c | 13 +++++++++++--
+ 1 file changed, 11 insertions(+), 2 deletions(-)
+
+--- a/arch/riscv/net/bpf_jit_comp.c
++++ b/arch/riscv/net/bpf_jit_comp.c
+@@ -120,6 +120,11 @@ static bool seen_reg(int reg, struct rv_
+ return false;
+ }
+
++static void mark_fp(struct rv_jit_context *ctx)
++{
++ __set_bit(RV_CTX_F_SEEN_S5, &ctx->flags);
++}
++
+ static void mark_call(struct rv_jit_context *ctx)
+ {
+ __set_bit(RV_CTX_F_SEEN_CALL, &ctx->flags);
+@@ -596,7 +601,8 @@ static void __build_epilogue(u8 reg, str
+
+ emit(rv_addi(RV_REG_SP, RV_REG_SP, stack_adjust), ctx);
+ /* Set return value. */
+- emit(rv_addi(RV_REG_A0, RV_REG_A5, 0), ctx);
++ if (reg == RV_REG_RA)
++ emit(rv_addi(RV_REG_A0, RV_REG_A5, 0), ctx);
+ emit(rv_jalr(RV_REG_ZERO, reg, 0), ctx);
+ }
+
+@@ -1426,6 +1432,10 @@ static void build_prologue(struct rv_jit
+ {
+ int stack_adjust = 0, store_offset, bpf_stack_adjust;
+
++ bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
++ if (bpf_stack_adjust)
++ mark_fp(ctx);
++
+ if (seen_reg(RV_REG_RA, ctx))
+ stack_adjust += 8;
+ stack_adjust += 8; /* RV_REG_FP */
+@@ -1443,7 +1453,6 @@ static void build_prologue(struct rv_jit
+ stack_adjust += 8;
+
+ stack_adjust = round_up(stack_adjust, 16);
+- bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16);
+ stack_adjust += bpf_stack_adjust;
+
+ store_offset = stack_adjust - 8;
--- /dev/null
+From b2e5e93ae8af6a34bca536cdc4b453ab1e707b8b Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Toke=20H=C3=B8iland-J=C3=B8rgensen?= <toke@redhat.com>
+Date: Mon, 20 Jan 2020 14:06:41 +0100
+Subject: samples/bpf: Don't try to remove user's homedir on clean
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+From: Toke Høiland-Jørgensen <toke@redhat.com>
+
+commit b2e5e93ae8af6a34bca536cdc4b453ab1e707b8b upstream.
+
+The 'clean' rule in the samples/bpf Makefile tries to remove backup
+files (ending in ~). However, if no such files exist, it will instead try
+to remove the user's home directory. While the attempt is mostly harmless,
+it does lead to a somewhat scary warning like this:
+
+rm: cannot remove '~': Is a directory
+
+Fix this by using find instead of shell expansion to locate any actual
+backup files that need to be removed.
+
+Fixes: b62a796c109c ("samples/bpf: allow make to be run from samples/bpf/ directory")
+Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>
+Link: https://lore.kernel.org/bpf/157952560126.1683545.7273054725976032511.stgit@toke.dk
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ samples/bpf/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/samples/bpf/Makefile
++++ b/samples/bpf/Makefile
+@@ -236,7 +236,7 @@ all:
+
+ clean:
+ $(MAKE) -C ../../ M=$(CURDIR) clean
+- @rm -f *~
++ @find $(CURDIR) -type f -name '*~' -delete
+
+ $(LIBBPF): FORCE
+ # Fix up variables inherited from Kbuild that tools/ build system won't like
--- /dev/null
+From f9e6bfdbaf0cf304d72c70a05d81acac01a04f48 Mon Sep 17 00:00:00 2001
+From: Jesper Dangaard Brouer <brouer@redhat.com>
+Date: Fri, 20 Dec 2019 17:19:36 +0100
+Subject: samples/bpf: Xdp_redirect_cpu fix missing tracepoint attach
+
+From: Jesper Dangaard Brouer <brouer@redhat.com>
+
+commit f9e6bfdbaf0cf304d72c70a05d81acac01a04f48 upstream.
+
+When the sample xdp_redirect_cpu was converted to use libbpf, the
+tracepoints used by this sample were no longer getting attached
+automatically as they were with bpf_load.c. The BPF maps were still
+getting loaded, thus nobody noticed that the tracepoints were not
+updating these maps.
+
+This fix doesn't use the new skeleton code, as this bug was introduced
+in v5.1 and stable might want to backport this. E.g. Red Hat QA uses
+this sample as part of their testing.
+
+Fixes: bbaf6029c49c ("samples/bpf: Convert XDP samples to libbpf usage")
+Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Acked-by: Andrii Nakryiko <andriin@fb.com>
+Link: https://lore.kernel.org/bpf/157685877642.26195.2798780195186786841.stgit@firesoul
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ samples/bpf/xdp_redirect_cpu_user.c | 59 +++++++++++++++++++++++++++++++++---
+ 1 file changed, 55 insertions(+), 4 deletions(-)
+
+--- a/samples/bpf/xdp_redirect_cpu_user.c
++++ b/samples/bpf/xdp_redirect_cpu_user.c
+@@ -16,6 +16,10 @@ static const char *__doc__ =
+ #include <getopt.h>
+ #include <net/if.h>
+ #include <time.h>
++#include <linux/limits.h>
++
++#define __must_check
++#include <linux/err.h>
+
+ #include <arpa/inet.h>
+ #include <linux/if_link.h>
+@@ -46,6 +50,10 @@ static int cpus_count_map_fd;
+ static int cpus_iterator_map_fd;
+ static int exception_cnt_map_fd;
+
++#define NUM_TP 5
++struct bpf_link *tp_links[NUM_TP] = { 0 };
++static int tp_cnt = 0;
++
+ /* Exit return codes */
+ #define EXIT_OK 0
+ #define EXIT_FAIL 1
+@@ -88,6 +96,10 @@ static void int_exit(int sig)
+ printf("program on interface changed, not removing\n");
+ }
+ }
++ /* Detach tracepoints */
++ while (tp_cnt)
++ bpf_link__destroy(tp_links[--tp_cnt]);
++
+ exit(EXIT_OK);
+ }
+
+@@ -588,23 +600,61 @@ static void stats_poll(int interval, boo
+ free_stats_record(prev);
+ }
+
++static struct bpf_link * attach_tp(struct bpf_object *obj,
++ const char *tp_category,
++ const char* tp_name)
++{
++ struct bpf_program *prog;
++ struct bpf_link *link;
++ char sec_name[PATH_MAX];
++ int len;
++
++ len = snprintf(sec_name, PATH_MAX, "tracepoint/%s/%s",
++ tp_category, tp_name);
++ if (len < 0)
++ exit(EXIT_FAIL);
++
++ prog = bpf_object__find_program_by_title(obj, sec_name);
++ if (!prog) {
++ fprintf(stderr, "ERR: finding progsec: %s\n", sec_name);
++ exit(EXIT_FAIL_BPF);
++ }
++
++ link = bpf_program__attach_tracepoint(prog, tp_category, tp_name);
++ if (IS_ERR(link))
++ exit(EXIT_FAIL_BPF);
++
++ return link;
++}
++
++static void init_tracepoints(struct bpf_object *obj) {
++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_err");
++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_map_err");
++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_exception");
++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_enqueue");
++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_kthread");
++}
++
+ static int init_map_fds(struct bpf_object *obj)
+ {
+- cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map");
+- rx_cnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt");
++ /* Maps updated by tracepoints */
+ redirect_err_cnt_map_fd =
+ bpf_object__find_map_fd_by_name(obj, "redirect_err_cnt");
++ exception_cnt_map_fd =
++ bpf_object__find_map_fd_by_name(obj, "exception_cnt");
+ cpumap_enqueue_cnt_map_fd =
+ bpf_object__find_map_fd_by_name(obj, "cpumap_enqueue_cnt");
+ cpumap_kthread_cnt_map_fd =
+ bpf_object__find_map_fd_by_name(obj, "cpumap_kthread_cnt");
++
++ /* Maps used by XDP */
++ rx_cnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt");
++ cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map");
+ cpus_available_map_fd =
+ bpf_object__find_map_fd_by_name(obj, "cpus_available");
+ cpus_count_map_fd = bpf_object__find_map_fd_by_name(obj, "cpus_count");
+ cpus_iterator_map_fd =
+ bpf_object__find_map_fd_by_name(obj, "cpus_iterator");
+- exception_cnt_map_fd =
+- bpf_object__find_map_fd_by_name(obj, "exception_cnt");
+
+ if (cpu_map_fd < 0 || rx_cnt_map_fd < 0 ||
+ redirect_err_cnt_map_fd < 0 || cpumap_enqueue_cnt_map_fd < 0 ||
+@@ -662,6 +712,7 @@ int main(int argc, char **argv)
+ strerror(errno));
+ return EXIT_FAIL;
+ }
++ init_tracepoints(obj);
+ if (init_map_fds(obj) < 0) {
+ fprintf(stderr, "bpf_object__find_map_fd_by_name failed\n");
+ return EXIT_FAIL;
--- /dev/null
+From 91cbdf740a476cf2c744169bf407de2e3ac1f3cf Mon Sep 17 00:00:00 2001
+From: Andrii Nakryiko <andriin@fb.com>
+Date: Wed, 11 Dec 2019 17:36:20 -0800
+Subject: selftests/bpf: Fix perf_buffer test on systems w/ offline CPUs
+
+From: Andrii Nakryiko <andriin@fb.com>
+
+commit 91cbdf740a476cf2c744169bf407de2e3ac1f3cf upstream.
+
+Fix up perf_buffer.c selftest to take into account offline/missing CPUs.
+
+Fixes: ee5cf82ce04a ("selftests/bpf: test perf buffer API")
+Signed-off-by: Andrii Nakryiko <andriin@fb.com>
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Link: https://lore.kernel.org/bpf/20191212013621.1691858-1-andriin@fb.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/bpf/prog_tests/perf_buffer.c | 29 +++++++++++++++----
+ 1 file changed, 24 insertions(+), 5 deletions(-)
+
+--- a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
++++ b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c
+@@ -4,6 +4,7 @@
+ #include <sched.h>
+ #include <sys/socket.h>
+ #include <test_progs.h>
++#include "libbpf_internal.h"
+
+ static void on_sample(void *ctx, int cpu, void *data, __u32 size)
+ {
+@@ -19,7 +20,7 @@ static void on_sample(void *ctx, int cpu
+
+ void test_perf_buffer(void)
+ {
+- int err, prog_fd, nr_cpus, i, duration = 0;
++ int err, prog_fd, on_len, nr_on_cpus = 0, nr_cpus, i, duration = 0;
+ const char *prog_name = "kprobe/sys_nanosleep";
+ const char *file = "./test_perf_buffer.o";
+ struct perf_buffer_opts pb_opts = {};
+@@ -29,15 +30,27 @@ void test_perf_buffer(void)
+ struct bpf_object *obj;
+ struct perf_buffer *pb;
+ struct bpf_link *link;
++ bool *online;
+
+ nr_cpus = libbpf_num_possible_cpus();
+ if (CHECK(nr_cpus < 0, "nr_cpus", "err %d\n", nr_cpus))
+ return;
+
++ err = parse_cpu_mask_file("/sys/devices/system/cpu/online",
++ &online, &on_len);
++ if (CHECK(err, "nr_on_cpus", "err %d\n", err))
++ return;
++
++ for (i = 0; i < on_len; i++)
++ if (online[i])
++ nr_on_cpus++;
++
+ /* load program */
+ err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd);
+- if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno))
+- return;
++ if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno)) {
++ obj = NULL;
++ goto out_close;
++ }
+
+ prog = bpf_object__find_program_by_title(obj, prog_name);
+ if (CHECK(!prog, "find_probe", "prog '%s' not found\n", prog_name))
+@@ -64,6 +77,11 @@ void test_perf_buffer(void)
+ /* trigger kprobe on every CPU */
+ CPU_ZERO(&cpu_seen);
+ for (i = 0; i < nr_cpus; i++) {
++ if (i >= on_len || !online[i]) {
++ printf("skipping offline CPU #%d\n", i);
++ continue;
++ }
++
+ CPU_ZERO(&cpu_set);
+ CPU_SET(i, &cpu_set);
+
+@@ -81,8 +99,8 @@ void test_perf_buffer(void)
+ if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err))
+ goto out_free_pb;
+
+- if (CHECK(CPU_COUNT(&cpu_seen) != nr_cpus, "seen_cpu_cnt",
+- "expect %d, seen %d\n", nr_cpus, CPU_COUNT(&cpu_seen)))
++ if (CHECK(CPU_COUNT(&cpu_seen) != nr_on_cpus, "seen_cpu_cnt",
++ "expect %d, seen %d\n", nr_on_cpus, CPU_COUNT(&cpu_seen)))
+ goto out_free_pb;
+
+ out_free_pb:
+@@ -91,4 +109,5 @@ out_detach:
+ bpf_link__destroy(link);
+ out_close:
+ bpf_object__close(obj);
++ free(online);
+ }
--- /dev/null
+From 580205dd4fe800b1e95be8b6df9e2991f975a8ad Mon Sep 17 00:00:00 2001
+From: Alexei Starovoitov <ast@kernel.org>
+Date: Wed, 18 Dec 2019 18:04:42 -0800
+Subject: selftests/bpf: Fix test_attach_probe
+
+From: Alexei Starovoitov <ast@kernel.org>
+
+commit 580205dd4fe800b1e95be8b6df9e2991f975a8ad upstream.
+
+Fix two issues in test_attach_probe:
+
+1. it was not able to parse /proc/self/maps beyond the first line,
+   since %s means parse the string only until the next whitespace
+   (see the illustrative line below).
+2. the offset has to be accounted for, otherwise the uprobed address
+   is incorrect.
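+
+A hypothetical /proc/self/maps line, for illustration:
+
+  5616b7a3d000-5616b7a3e000 r-xp 00001000 fd:01 920638 /usr/bin/foo
+
+The old format string consumed only the address range and the
+permissions, so fscanf() could never advance past the first line. The
+third field (00001000 here) is the file offset of the mapping; the base
+address the uprobe needs is start - offset.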
+
+Fixes: 1e8611bbdfc9 ("selftests/bpf: add kprobe/uprobe selftests")
+Signed-off-by: Alexei Starovoitov <ast@kernel.org>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: Yonghong Song <yhs@fb.com>
+Acked-by: Andrii Nakryiko <andriin@fb.com>
+Link: https://lore.kernel.org/bpf/20191219020442.1922617-1-ast@kernel.org
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/bpf/prog_tests/attach_probe.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c
++++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c
+@@ -2,7 +2,7 @@
+ #include <test_progs.h>
+
+ ssize_t get_base_addr() {
+- size_t start;
++ size_t start, offset;
+ char buf[256];
+ FILE *f;
+
+@@ -10,10 +10,11 @@ ssize_t get_base_addr() {
+ if (!f)
+ return -errno;
+
+- while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) {
++ while (fscanf(f, "%zx-%*x %s %zx %*[^\n]\n",
++ &start, buf, &offset) == 3) {
+ if (strcmp(buf, "r-xp") == 0) {
+ fclose(f);
+- return start;
++ return start - offset;
+ }
+ }
+
--- /dev/null
+From 8bec4f665e0baecb5f1b683379fc10b3745eb612 Mon Sep 17 00:00:00 2001
+From: Lorenz Bauer <lmb@cloudflare.com>
+Date: Fri, 24 Jan 2020 11:27:52 +0000
+Subject: selftests: bpf: Ignore FIN packets for reuseport tests
+
+From: Lorenz Bauer <lmb@cloudflare.com>
+
+commit 8bec4f665e0baecb5f1b683379fc10b3745eb612 upstream.
+
+The reuseport tests currently suffer from a race condition: FIN
+packets count towards DROP_ERR_SKB_DATA, since they don't contain
+a valid struct cmd. Tests will spuriously fail depending on whether
+check_results is called before or after the FIN is processed.
+
+Exit the BPF program early if FIN is set.
+
+Fixes: 91134d849a0e ("bpf: Test BPF_PROG_TYPE_SK_REUSEPORT")
+Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
+Acked-by: Martin KaFai Lau <kafai@fb.com>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Link: https://lore.kernel.org/bpf/20200124112754.19664-3-lmb@cloudflare.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c | 6 ++++++
+ 1 file changed, 6 insertions(+)
+
+--- a/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c
++++ b/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c
+@@ -113,6 +113,12 @@ int _select_by_skb_data(struct sk_reusep
+ data_check.skb_ports[0] = th->source;
+ data_check.skb_ports[1] = th->dest;
+
++ if (th->fin)
++ /* The connection is being torn down at the end of a
++ * test. It can't contain a cmd, so return early.
++ */
++ return SK_PASS;
++
+ if ((th->doff << 2) + sizeof(*cmd) > data_check.len)
+ GOTO_DONE(DROP_ERR_SKB_DATA);
+ if (bpf_skb_load_bytes(reuse_md, th->doff << 2, &cmd_copy,
--- /dev/null
+From f1c3656c6d9c147d07d16614455aceb34932bdeb Mon Sep 17 00:00:00 2001
+From: Hangbin Liu <liuhangbin@gmail.com>
+Date: Fri, 17 Jan 2020 18:06:56 +0800
+Subject: selftests/bpf: Skip perf hw events test if the setup disabled it
+
+From: Hangbin Liu <liuhangbin@gmail.com>
+
+commit f1c3656c6d9c147d07d16614455aceb34932bdeb upstream.
+
+As with commit 4e59afbbed96 ("selftests/bpf: skip nmi test when perf
+hw events are disabled"), it makes more sense to skip the
+test_stacktrace_build_id_nmi test if the setup (e.g. virtual machines)
+has disabled hardware perf events.
+
+Fixes: 13790d1cc72c ("bpf: add selftest for stackmap with build_id in NMI context")
+Signed-off-by: Hangbin Liu <liuhangbin@gmail.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Link: https://lore.kernel.org/bpf/20200117100656.10359-1-liuhangbin@gmail.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c | 8 ++++++--
+ 1 file changed, 6 insertions(+), 2 deletions(-)
+
+--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
++++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c
+@@ -49,8 +49,12 @@ retry:
+ pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */,
+ 0 /* cpu 0 */, -1 /* group id */,
+ 0 /* flags */);
+- if (CHECK(pmu_fd < 0, "perf_event_open",
+- "err %d errno %d. Does the test host support PERF_COUNT_HW_CPU_CYCLES?\n",
++ if (pmu_fd < 0 && errno == ENOENT) {
++ printf("%s:SKIP:no PERF_COUNT_HW_CPU_CYCLES\n", __func__);
++ test__skip();
++ goto cleanup;
++ }
++ if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n",
+ pmu_fd, errno))
+ goto close_prog;
+
--- /dev/null
+From c31dbb1e41d1857b403f9bf58c87f5898519a0bc Mon Sep 17 00:00:00 2001
+From: Lorenz Bauer <lmb@cloudflare.com>
+Date: Fri, 24 Jan 2020 11:27:51 +0000
+Subject: selftests: bpf: Use a temporary file in test_sockmap
+
+From: Lorenz Bauer <lmb@cloudflare.com>
+
+commit c31dbb1e41d1857b403f9bf58c87f5898519a0bc upstream.
+
+Use a proper temporary file for sendpage tests. This means that running
+the tests doesn't clutter the working directory, and allows running the
+test on read-only filesystems.
+
+Fixes: 16962b2404ac ("bpf: sockmap, add selftests")
+Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
+Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
+Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
+Acked-by: Martin KaFai Lau <kafai@fb.com>
+Acked-by: John Fastabend <john.fastabend@gmail.com>
+Link: https://lore.kernel.org/bpf/20200124112754.19664-2-lmb@cloudflare.com
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/bpf/test_sockmap.c | 15 +++++----------
+ 1 file changed, 5 insertions(+), 10 deletions(-)
+
+--- a/tools/testing/selftests/bpf/test_sockmap.c
++++ b/tools/testing/selftests/bpf/test_sockmap.c
+@@ -331,7 +331,7 @@ static int msg_loop_sendpage(int fd, int
+ FILE *file;
+ int i, fp;
+
+- file = fopen(".sendpage_tst.tmp", "w+");
++ file = tmpfile();
+ if (!file) {
+ perror("create file for sendpage");
+ return 1;
+@@ -340,13 +340,8 @@ static int msg_loop_sendpage(int fd, int
+ fwrite(&k, sizeof(char), 1, file);
+ fflush(file);
+ fseek(file, 0, SEEK_SET);
+- fclose(file);
+
+- fp = open(".sendpage_tst.tmp", O_RDONLY);
+- if (fp < 0) {
+- perror("reopen file for sendpage");
+- return 1;
+- }
++ fp = fileno(file);
+
+ clock_gettime(CLOCK_MONOTONIC, &s->start);
+ for (i = 0; i < cnt; i++) {
+@@ -354,11 +349,11 @@ static int msg_loop_sendpage(int fd, int
+
+ if (!drop && sent < 0) {
+ perror("send loop error");
+- close(fp);
++ fclose(file);
+ return sent;
+ } else if (drop && sent >= 0) {
+ printf("sendpage loop error expected: %i\n", sent);
+- close(fp);
++ fclose(file);
+ return -EIO;
+ }
+
+@@ -366,7 +361,7 @@ static int msg_loop_sendpage(int fd, int
+ s->bytes_sent += sent;
+ }
+ clock_gettime(CLOCK_MONOTONIC, &s->end);
+- close(fp);
++ fclose(file);
+ return 0;
+ }
+
ftrace-add-comment-to-why-rcu_dereference_sched-is-o.patch
ftrace-protect-ftrace_graph_hash-with-ftrace_sync.patch
crypto-pcrypt-avoid-deadlock-by-using-per-instance-padata-queues.patch
+btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch
+btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch
+riscv-bpf-fix-broken-bpf-tail-calls.patch
+selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch
+bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch
+libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch
+tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch
+samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch
+samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch
+selftests-bpf-fix-test_attach_probe.patch
+selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch
+selftests-bpf-use-a-temporary-file-in-test_sockmap.patch
+selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch
+crypto-api-fix-unexpectedly-getting-generic-implementation.patch
+crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch
+crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch
+crypto-arm64-ghash-neon-bump-priority-to-150.patch
+crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch
+crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch
+crypto-api-fix-race-condition-in-crypto_spawn_alg.patch
+crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch
--- /dev/null
+From 7145fcfffef1fad4266aaf5ca96727696916edb7 Mon Sep 17 00:00:00 2001
+From: Davide Caratti <dcaratti@redhat.com>
+Date: Mon, 3 Feb 2020 16:29:29 +0100
+Subject: tc-testing: fix eBPF tests failure on linux fresh clones
+
+From: Davide Caratti <dcaratti@redhat.com>
+
+commit 7145fcfffef1fad4266aaf5ca96727696916edb7 upstream.
+
+When the following command is run on a fresh clone of the kernel tree,
+
+ [root@f31 tc-testing]# ./tdc.py -c bpf
+
+test cases that need to build the eBPF sample program fail systematically,
+because 'buildebpfPlugin' is unable to install the kernel headers (i.e., the
+'khdr' target fails). Pass the correct environment to 'make', in place of
+ENVIR, to allow running these tests.
+
+Fixes: 4c2d39bd40c1 ("tc-testing: use a plugin to build eBPF program")
+Signed-off-by: Davide Caratti <dcaratti@redhat.com>
+Signed-off-by: David S. Miller <davem@davemloft.net>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+
+---
+ tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py
++++ b/tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py
+@@ -54,7 +54,7 @@ class SubPlugin(TdcPlugin):
+ shell=True,
+ stdout=subprocess.PIPE,
+ stderr=subprocess.PIPE,
+- env=ENVIR)
++ env=os.environ.copy())
+ (rawout, serr) = proc.communicate()
+
+ if proc.returncode != 0 and len(serr) > 0: