From: Greg Kroah-Hartman Date: Sat, 8 Feb 2020 16:40:50 +0000 (+0100) Subject: 5.4-stable patches X-Git-Tag: v4.19.103~73 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=4f18a33be10046c988ddb5a09c57e6a737837a70;p=thirdparty%2Fkernel%2Fstable-queue.git 5.4-stable patches added patches: bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch crypto-api-fix-race-condition-in-crypto_spawn_alg.patch crypto-api-fix-unexpectedly-getting-generic-implementation.patch crypto-arm64-ghash-neon-bump-priority-to-150.patch crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch riscv-bpf-fix-broken-bpf-tail-calls.patch samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch selftests-bpf-fix-test_attach_probe.patch selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch selftests-bpf-use-a-temporary-file-in-test_sockmap.patch tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch --- diff --git a/queue-5.4/bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch b/queue-5.4/bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch new file mode 100644 index 00000000000..f389eaeaf5b --- /dev/null +++ b/queue-5.4/bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch @@ -0,0 +1,42 @@ +From 485ec2ea9cf556e9c120e07961b7b459d776a115 Mon Sep 17 
00:00:00 2001 +From: Amol Grover +Date: Thu, 23 Jan 2020 17:34:38 +0530 +Subject: bpf, devmap: Pass lockdep expression to RCU lists +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Amol Grover + +commit 485ec2ea9cf556e9c120e07961b7b459d776a115 upstream. + +head is traversed using hlist_for_each_entry_rcu outside an RCU +read-side critical section but under the protection of dtab->index_lock. + +Hence, add corresponding lockdep expression to silence false-positive +lockdep warnings, and harden RCU lists. + +Fixes: 6f9d451ab1a3 ("xdp: Add devmap_hash map type for looking up devices by hashed index") +Signed-off-by: Amol Grover +Signed-off-by: Daniel Borkmann +Acked-by: Jesper Dangaard Brouer +Acked-by: Toke Høiland-Jørgensen +Link: https://lore.kernel.org/bpf/20200123120437.26506-1-frextrite@gmail.com +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/bpf/devmap.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/kernel/bpf/devmap.c ++++ b/kernel/bpf/devmap.c +@@ -293,7 +293,8 @@ struct bpf_dtab_netdev *__dev_map_hash_l + struct hlist_head *head = dev_map_index_hash(dtab, key); + struct bpf_dtab_netdev *dev; + +- hlist_for_each_entry_rcu(dev, head, index_hlist) ++ hlist_for_each_entry_rcu(dev, head, index_hlist, ++ lockdep_is_held(&dtab->index_lock)) + if (dev->idx == key) + return dev; + diff --git a/queue-5.4/btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch b/queue-5.4/btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch new file mode 100644 index 00000000000..b425acbc930 --- /dev/null +++ b/queue-5.4/btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch @@ -0,0 +1,88 @@ +From 556755a8a99be8ca3cd9fbe36aaf9b3b0339a00d Mon Sep 17 00:00:00 2001 +From: Josef Bacik +Date: Fri, 3 Jan 2020 10:38:44 -0500 +Subject: btrfs: fix improper setting of scanned for range cyclic write cache pages + +From: Josef Bacik + +commit 
556755a8a99be8ca3cd9fbe36aaf9b3b0339a00d upstream. + +We noticed that we were having regular CG OOM kills in cases where there +was still enough dirty pages to avoid OOM'ing. It turned out there's +this corner case in btrfs's handling of range_cyclic where files that +were being redirtied were not getting fully written out because of how +we do range_cyclic writeback. + +We unconditionally were setting scanned = 1; the first time we found any +pages in the inode. This isn't actually what we want, we want it to be +set if we've scanned the entire file. For range_cyclic we could be +starting in the middle or towards the end of the file, so we could write +one page and then not write any of the other dirty pages in the file +because we set scanned = 1. + +Fix this by not setting scanned = 1 if we find pages. The rules for +setting scanned should be + +1) !range_cyclic. In this case we have a specified range to write out. +2) range_cyclic && index == 0. In this case we've started at the + beginning and there is no need to loop around a second time. +3) range_cyclic && we started at index > 0 and we've reached the end of + the file without satisfying our nr_to_write. + +This patch fixes both of our writepages implementations to make sure +these rules hold true. This fixed our over zealous CG OOMs in +production. + +Fixes: d1310b2e0cd9 ("Btrfs: Split the extent_map code into two parts") +Signed-off-by: Josef Bacik +Reviewed-by: David Sterba +[ add comment ] +Signed-off-by: David Sterba +Signed-off-by: Greg Kroah-Hartman + +--- + fs/btrfs/extent_io.c | 12 ++++++++++-- + 1 file changed, 10 insertions(+), 2 deletions(-) + +--- a/fs/btrfs/extent_io.c ++++ b/fs/btrfs/extent_io.c +@@ -3938,6 +3938,11 @@ int btree_write_cache_pages(struct addre + if (wbc->range_cyclic) { + index = mapping->writeback_index; /* Start from prev offset */ + end = -1; ++ /* ++ * Start from the beginning does not need to cycle over the ++ * range, mark it as scanned. 
++ */ ++ scanned = (index == 0); + } else { + index = wbc->range_start >> PAGE_SHIFT; + end = wbc->range_end >> PAGE_SHIFT; +@@ -3955,7 +3960,6 @@ retry: + tag))) { + unsigned i; + +- scanned = 1; + for (i = 0; i < nr_pages; i++) { + struct page *page = pvec.pages[i]; + +@@ -4084,6 +4088,11 @@ static int extent_write_cache_pages(stru + if (wbc->range_cyclic) { + index = mapping->writeback_index; /* Start from prev offset */ + end = -1; ++ /* ++ * Start from the beginning does not need to cycle over the ++ * range, mark it as scanned. ++ */ ++ scanned = (index == 0); + } else { + index = wbc->range_start >> PAGE_SHIFT; + end = wbc->range_end >> PAGE_SHIFT; +@@ -4117,7 +4126,6 @@ retry: + &index, end, tag))) { + unsigned i; + +- scanned = 1; + for (i = 0; i < nr_pages; i++) { + struct page *page = pvec.pages[i]; + diff --git a/queue-5.4/btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch b/queue-5.4/btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch new file mode 100644 index 00000000000..3ec20fbfafc --- /dev/null +++ b/queue-5.4/btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch @@ -0,0 +1,72 @@ +From 05840710149c7d1a78ea85a2db5723f706e97d8f Mon Sep 17 00:00:00 2001 +From: Nikolay Borisov +Date: Fri, 10 Jan 2020 14:11:34 +0200 +Subject: btrfs: Handle another split brain scenario with metadata uuid feature +
From: Nikolay Borisov + +commit 05840710149c7d1a78ea85a2db5723f706e97d8f upstream. + +There is one more case which isn't handled by the original metadata +uuid work. Namely, when a filesystem has METADATA_UUID incompat bit and +the user decides to change the FSID to the original one e.g. have +metadata_uuid and fsid match. In case of power failure while this +operation is in progress we could end up in a situation where some of +the disks have the incompat bit removed and the other half have both +METADATA_UUID_INCOMPAT and FSID_CHANGING_IN_PROGRESS flags.
+ +This patch handles the case where a disk that has successfully changed +its FSID such that it equals METADATA_UUID is scanned first. +Subsequently when a disk with both +METADATA_UUID_INCOMPAT/FSID_CHANGING_IN_PROGRESS flags is scanned +find_fsid_changed won't be able to find an appropriate btrfs_fs_devices. +This is done by extending find_fsid_changed to correctly find +btrfs_fs_devices whose metadata_uuid/fsid are the same and they match +the metadata_uuid of the currently scanned device. + +Fixes: cc5de4e70256 ("btrfs: Handle final split-brain possibility during fsid change") +Reviewed-by: Josef Bacik +Reported-by: Su Yue +Signed-off-by: Nikolay Borisov +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Greg Kroah-Hartman + +--- + fs/btrfs/volumes.c | 17 ++++++++++++++--- + 1 file changed, 14 insertions(+), 3 deletions(-) + +--- a/fs/btrfs/volumes.c ++++ b/fs/btrfs/volumes.c +@@ -881,17 +881,28 @@ static struct btrfs_fs_devices *find_fsi + /* + * Handles the case where scanned device is part of an fs that had + * multiple successful changes of FSID but curently device didn't +- * observe it. Meaning our fsid will be different than theirs. ++ * observe it. Meaning our fsid will be different than theirs. We need ++ * to handle two subcases : ++ * 1 - The fs still continues to have different METADATA/FSID uuids. ++ * 2 - The fs is switched back to its original FSID (METADATA/FSID ++ * are equal). 
+ */ + list_for_each_entry(fs_devices, &fs_uuids, fs_list) { ++ /* Changed UUIDs */ + if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid, + BTRFS_FSID_SIZE) != 0 && + memcmp(fs_devices->metadata_uuid, disk_super->metadata_uuid, + BTRFS_FSID_SIZE) == 0 && + memcmp(fs_devices->fsid, disk_super->fsid, +- BTRFS_FSID_SIZE) != 0) { ++ BTRFS_FSID_SIZE) != 0) ++ return fs_devices; ++ ++ /* Unchanged UUIDs */ ++ if (memcmp(fs_devices->metadata_uuid, fs_devices->fsid, ++ BTRFS_FSID_SIZE) == 0 && ++ memcmp(fs_devices->fsid, disk_super->metadata_uuid, ++ BTRFS_FSID_SIZE) == 0) + return fs_devices; +- } + } + + return NULL; diff --git a/queue-5.4/crypto-api-fix-race-condition-in-crypto_spawn_alg.patch b/queue-5.4/crypto-api-fix-race-condition-in-crypto_spawn_alg.patch new file mode 100644 index 00000000000..ed40fa9f0a4 --- /dev/null +++ b/queue-5.4/crypto-api-fix-race-condition-in-crypto_spawn_alg.patch @@ -0,0 +1,82 @@ +From 73669cc556462f4e50376538d77ee312142e8a8a Mon Sep 17 00:00:00 2001 +From: Herbert Xu +Date: Sat, 7 Dec 2019 22:15:15 +0800 +Subject: crypto: api - Fix race condition in crypto_spawn_alg + +From: Herbert Xu + +commit 73669cc556462f4e50376538d77ee312142e8a8a upstream. + +The function crypto_spawn_alg is racy because it drops the lock +before shooting the dying algorithm. The algorithm could disappear +altogether before we shoot it. + +This patch fixes it by moving the shooting into the locked section. 
+ +Fixes: 6bfd48096ff8 ("[CRYPTO] api: Added spawns") +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/algapi.c | 16 +++++----------- + crypto/api.c | 3 +-- + crypto/internal.h | 1 - + 3 files changed, 6 insertions(+), 14 deletions(-) + +--- a/crypto/algapi.c ++++ b/crypto/algapi.c +@@ -697,22 +697,16 @@ EXPORT_SYMBOL_GPL(crypto_drop_spawn); + static struct crypto_alg *crypto_spawn_alg(struct crypto_spawn *spawn) + { + struct crypto_alg *alg; +- struct crypto_alg *alg2; + + down_read(&crypto_alg_sem); + alg = spawn->alg; +- alg2 = alg; +- if (alg2) +- alg2 = crypto_mod_get(alg2); +- up_read(&crypto_alg_sem); +- +- if (!alg2) { +- if (alg) +- crypto_shoot_alg(alg); +- return ERR_PTR(-EAGAIN); ++ if (alg && !crypto_mod_get(alg)) { ++ alg->cra_flags |= CRYPTO_ALG_DYING; ++ alg = NULL; + } ++ up_read(&crypto_alg_sem); + +- return alg; ++ return alg ?: ERR_PTR(-EAGAIN); + } + + struct crypto_tfm *crypto_spawn_tfm(struct crypto_spawn *spawn, u32 type, +--- a/crypto/api.c ++++ b/crypto/api.c +@@ -346,13 +346,12 @@ static unsigned int crypto_ctxsize(struc + return len; + } + +-void crypto_shoot_alg(struct crypto_alg *alg) ++static void crypto_shoot_alg(struct crypto_alg *alg) + { + down_write(&crypto_alg_sem); + alg->cra_flags |= CRYPTO_ALG_DYING; + up_write(&crypto_alg_sem); + } +-EXPORT_SYMBOL_GPL(crypto_shoot_alg); + + struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type, + u32 mask) +--- a/crypto/internal.h ++++ b/crypto/internal.h +@@ -68,7 +68,6 @@ void crypto_alg_tested(const char *name, + void crypto_remove_spawns(struct crypto_alg *alg, struct list_head *list, + struct crypto_alg *nalg); + void crypto_remove_final(struct list_head *list); +-void crypto_shoot_alg(struct crypto_alg *alg); + struct crypto_tfm *__crypto_alloc_tfm(struct crypto_alg *alg, u32 type, + u32 mask); + void *crypto_create_tfm(struct crypto_alg *alg, diff --git a/queue-5.4/crypto-api-fix-unexpectedly-getting-generic-implementation.patch 
b/queue-5.4/crypto-api-fix-unexpectedly-getting-generic-implementation.patch new file mode 100644 index 00000000000..18b51d00b41 --- /dev/null +++ b/queue-5.4/crypto-api-fix-unexpectedly-getting-generic-implementation.patch @@ -0,0 +1,110 @@ +From 2bbb3375d967155bccc86a5887d4a6e29c56b683 Mon Sep 17 00:00:00 2001 +From: Herbert Xu +Date: Wed, 11 Dec 2019 10:50:11 +0800 +Subject: crypto: api - fix unexpectedly getting generic implementation + +From: Herbert Xu + +commit 2bbb3375d967155bccc86a5887d4a6e29c56b683 upstream. + +When CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y, the first lookup of an +algorithm that needs to be instantiated using a template will always get +the generic implementation, even when an accelerated one is available. + +This happens because the extra self-tests for the accelerated +implementation allocate the generic implementation for comparison +purposes, and then crypto_alg_tested() for the generic implementation +"fulfills" the original request (i.e. sets crypto_larval::adult). + +This patch fixes this by only fulfilling the original request if +we are currently the best outstanding larval as judged by the +priority. If we're not the best then we will ask all waiters on +that larval request to retry the lookup. + +Note that this patch introduces a behaviour change when the module +providing the new algorithm is unregistered during the process. +Previously we would have failed with ENOENT, after the patch we +will instead redo the lookup. 
+ +Fixes: 9a8a6b3f0950 ("crypto: testmgr - fuzz hashes against...") +Fixes: d435e10e67be ("crypto: testmgr - fuzz skciphers against...") +Fixes: 40153b10d91c ("crypto: testmgr - fuzz AEADs against...") +Reported-by: Eric Biggers +Signed-off-by: Herbert Xu +Reviewed-by: Eric Biggers +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/algapi.c | 24 +++++++++++++++++++++--- + crypto/api.c | 4 +++- + 2 files changed, 24 insertions(+), 4 deletions(-) + +--- a/crypto/algapi.c ++++ b/crypto/algapi.c +@@ -257,6 +257,7 @@ void crypto_alg_tested(const char *name, + struct crypto_alg *alg; + struct crypto_alg *q; + LIST_HEAD(list); ++ bool best; + + down_write(&crypto_alg_sem); + list_for_each_entry(q, &crypto_alg_list, cra_list) { +@@ -280,6 +281,21 @@ found: + + alg->cra_flags |= CRYPTO_ALG_TESTED; + ++ /* Only satisfy larval waiters if we are the best. */ ++ best = true; ++ list_for_each_entry(q, &crypto_alg_list, cra_list) { ++ if (crypto_is_moribund(q) || !crypto_is_larval(q)) ++ continue; ++ ++ if (strcmp(alg->cra_name, q->cra_name)) ++ continue; ++ ++ if (q->cra_priority > alg->cra_priority) { ++ best = false; ++ break; ++ } ++ } ++ + list_for_each_entry(q, &crypto_alg_list, cra_list) { + if (q == alg) + continue; +@@ -303,10 +319,12 @@ found: + continue; + if ((q->cra_flags ^ alg->cra_flags) & larval->mask) + continue; +- if (!crypto_mod_get(alg)) +- continue; + +- larval->adult = alg; ++ if (best && crypto_mod_get(alg)) ++ larval->adult = alg; ++ else ++ larval->adult = ERR_PTR(-EAGAIN); ++ + continue; + } + +--- a/crypto/api.c ++++ b/crypto/api.c +@@ -97,7 +97,7 @@ static void crypto_larval_destroy(struct + struct crypto_larval *larval = (void *)alg; + + BUG_ON(!crypto_is_larval(alg)); +- if (larval->adult) ++ if (!IS_ERR_OR_NULL(larval->adult)) + crypto_mod_put(larval->adult); + kfree(larval); + } +@@ -178,6 +178,8 @@ static struct crypto_alg *crypto_larval_ + alg = ERR_PTR(-ETIMEDOUT); + else if (!alg) + alg = ERR_PTR(-ENOENT); ++ else if 
(IS_ERR(alg)) ++ ; + else if (crypto_is_test_larval(larval) && + !(alg->cra_flags & CRYPTO_ALG_TESTED)) + alg = ERR_PTR(-EAGAIN); diff --git a/queue-5.4/crypto-arm64-ghash-neon-bump-priority-to-150.patch b/queue-5.4/crypto-arm64-ghash-neon-bump-priority-to-150.patch new file mode 100644 index 00000000000..4f195fff14e --- /dev/null +++ b/queue-5.4/crypto-arm64-ghash-neon-bump-priority-to-150.patch @@ -0,0 +1,33 @@ +From 5441c6507bc84166e9227e9370a56c57ba13794a Mon Sep 17 00:00:00 2001 +From: Ard Biesheuvel +Date: Thu, 28 Nov 2019 13:55:31 +0100 +Subject: crypto: arm64/ghash-neon - bump priority to 150 + +From: Ard Biesheuvel + +commit 5441c6507bc84166e9227e9370a56c57ba13794a upstream. + +The SIMD based GHASH implementation for arm64 is typically much faster +than the generic one, and doesn't use any lookup tables, so it is +clearly preferred when available. So bump the priority to reflect that. + +Fixes: 5a22b198cd527447 ("crypto: arm64/ghash - register PMULL variants ...") +Signed-off-by: Ard Biesheuvel +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + arch/arm64/crypto/ghash-ce-glue.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/arch/arm64/crypto/ghash-ce-glue.c ++++ b/arch/arm64/crypto/ghash-ce-glue.c +@@ -261,7 +261,7 @@ static int ghash_setkey(struct crypto_sh + static struct shash_alg ghash_alg[] = {{ + .base.cra_name = "ghash", + .base.cra_driver_name = "ghash-neon", +- .base.cra_priority = 100, ++ .base.cra_priority = 150, + .base.cra_blocksize = GHASH_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct ghash_key), + .base.cra_module = THIS_MODULE, diff --git a/queue-5.4/crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch b/queue-5.4/crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch new file mode 100644 index 00000000000..08453720bfc --- /dev/null +++ b/queue-5.4/crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch @@ -0,0 +1,104 @@ +From 781a08d9740afa73357f1a60d45d7c93d7cca2dd Mon Sep 17 00:00:00 2001 +From: 
Tudor Ambarus +Date: Thu, 5 Dec 2019 09:54:01 +0000 +Subject: crypto: atmel-aes - Fix counter overflow in CTR mode + +From: Tudor Ambarus + +commit 781a08d9740afa73357f1a60d45d7c93d7cca2dd upstream. + +A 32 bit counter is not supported by either of our AES IPs; all implement +a 16 bit block counter. Drop the 32 bit block counter logic. + +Fixes: fcac83656a3e ("crypto: atmel-aes - fix the counter overflow in CTR mode") +Signed-off-by: Tudor Ambarus +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/crypto/atmel-aes.c | 37 ++++++++++++------------------------- + 1 file changed, 12 insertions(+), 25 deletions(-) + +--- a/drivers/crypto/atmel-aes.c ++++ b/drivers/crypto/atmel-aes.c +@@ -88,7 +88,6 @@ + struct atmel_aes_caps { + bool has_dualbuff; + bool has_cfb64; +- bool has_ctr32; + bool has_gcm; + bool has_xts; + bool has_authenc; +@@ -1013,8 +1012,9 @@ static int atmel_aes_ctr_transfer(struct + struct atmel_aes_ctr_ctx *ctx = atmel_aes_ctr_ctx_cast(dd->ctx); + struct ablkcipher_request *req = ablkcipher_request_cast(dd->areq); + struct scatterlist *src, *dst; +- u32 ctr, blocks; + size_t datalen; ++ u32 ctr; ++ u16 blocks, start, end; + bool use_dma, fragmented = false; + + /* Check for transfer completion. */ +@@ -1026,27 +1026,17 @@ static int atmel_aes_ctr_transfer(struct + datalen = req->nbytes - ctx->offset; + blocks = DIV_ROUND_UP(datalen, AES_BLOCK_SIZE); + ctr = be32_to_cpu(ctx->iv[3]); +- if (dd->caps.has_ctr32) { +- /* Check 32bit counter overflow. */ +- u32 start = ctr; +- u32 end = start + blocks - 1; +- +- if (end < start) { +- ctr |= 0xffffffff; +- datalen = AES_BLOCK_SIZE * -start; +- fragmented = true; +- } +- } else { +- /* Check 16bit counter overflow. */ +- u16 start = ctr & 0xffff; +- u16 end = start + (u16)blocks - 1; +- +- if (blocks >> 16 || end < start) { +- ctr |= 0xffff; +- datalen = AES_BLOCK_SIZE * (0x10000-start); +- fragmented = true; +- } ++ ++ /* Check 16bit counter overflow.
*/ ++ start = ctr & 0xffff; ++ end = start + blocks - 1; ++ ++ if (blocks >> 16 || end < start) { ++ ctr |= 0xffff; ++ datalen = AES_BLOCK_SIZE * (0x10000 - start); ++ fragmented = true; + } ++ + use_dma = (datalen >= ATMEL_AES_DMA_THRESHOLD); + + /* Jump to offset. */ +@@ -2550,7 +2540,6 @@ static void atmel_aes_get_cap(struct atm + { + dd->caps.has_dualbuff = 0; + dd->caps.has_cfb64 = 0; +- dd->caps.has_ctr32 = 0; + dd->caps.has_gcm = 0; + dd->caps.has_xts = 0; + dd->caps.has_authenc = 0; +@@ -2561,7 +2550,6 @@ static void atmel_aes_get_cap(struct atm + case 0x500: + dd->caps.has_dualbuff = 1; + dd->caps.has_cfb64 = 1; +- dd->caps.has_ctr32 = 1; + dd->caps.has_gcm = 1; + dd->caps.has_xts = 1; + dd->caps.has_authenc = 1; +@@ -2570,7 +2558,6 @@ static void atmel_aes_get_cap(struct atm + case 0x200: + dd->caps.has_dualbuff = 1; + dd->caps.has_cfb64 = 1; +- dd->caps.has_ctr32 = 1; + dd->caps.has_gcm = 1; + dd->caps.max_burst_size = 4; + break; diff --git a/queue-5.4/crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch b/queue-5.4/crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch new file mode 100644 index 00000000000..a7345d5b963 --- /dev/null +++ b/queue-5.4/crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch @@ -0,0 +1,39 @@ +From 11548f5a5747813ff84bed6f2ea01100053b0d8d Mon Sep 17 00:00:00 2001 +From: Ard Biesheuvel +Date: Wed, 27 Nov 2019 13:01:36 +0100 +Subject: crypto: ccp - set max RSA modulus size for v3 platform devices as well + +From: Ard Biesheuvel + +commit 11548f5a5747813ff84bed6f2ea01100053b0d8d upstream. + +AMD Seattle incorporates a non-PCI version of the v3 CCP crypto +accelerator, and this version was left behind when the maximum +RSA modulus size was parameterized in order to support v5 hardware +which supports larger moduli than v3 hardware does. Due to this +oversight, RSA acceleration no longer works at all on these systems. 
+ +Fix this by setting the .rsamax property +for v3 platform hardware. + +Fixes: e28c190db66830c0 ("csrypto: ccp - Expand RSA support for a v5 ccp") +Cc: Gary R Hook +Signed-off-by: Ard Biesheuvel +Acked-by: Gary R Hook +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/crypto/ccp/ccp-dev-v3.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/drivers/crypto/ccp/ccp-dev-v3.c ++++ b/drivers/crypto/ccp/ccp-dev-v3.c +@@ -586,6 +586,7 @@ const struct ccp_vdata ccpv3_platform = + .setup = NULL, + .perform = &ccp3_actions, + .offset = 0, ++ .rsamax = CCP_RSA_MAX_WIDTH, + }; + + const struct ccp_vdata ccpv3 = { diff --git a/queue-5.4/crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch b/queue-5.4/crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch new file mode 100644 index 00000000000..fcb6521b69d --- /dev/null +++ b/queue-5.4/crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch @@ -0,0 +1,238 @@ +From 484a897ffa3005f16cd9a31efd747bcf8155826f Mon Sep 17 00:00:00 2001 +From: Jonathan Cameron +Date: Tue, 19 Nov 2019 13:42:57 +0800 +Subject: crypto: hisilicon - Use the offset fields in sqe to avoid need to split scatterlists + +From: Jonathan Cameron + +commit 484a897ffa3005f16cd9a31efd747bcf8155826f upstream. + +We can configure sgl offset fields in ZIP sqe to let ZIP engine read/write +sgl data with skipped data. Hence no need to split the sgl.
+ +Fixes: 62c455ca853e (crypto: hisilicon - add HiSilicon ZIP accelerator support) +Signed-off-by: Jonathan Cameron +Signed-off-by: Zhou Wang +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/crypto/hisilicon/Kconfig | 1 + drivers/crypto/hisilicon/zip/zip.h | 4 + + drivers/crypto/hisilicon/zip/zip_crypto.c | 92 +++++++----------------------- + 3 files changed, 27 insertions(+), 70 deletions(-) + +--- a/drivers/crypto/hisilicon/Kconfig ++++ b/drivers/crypto/hisilicon/Kconfig +@@ -35,6 +35,5 @@ config CRYPTO_DEV_HISI_ZIP + depends on ARM64 && PCI && PCI_MSI + select CRYPTO_DEV_HISI_QM + select CRYPTO_HISI_SGL +- select SG_SPLIT + help + Support for HiSilicon ZIP Driver +--- a/drivers/crypto/hisilicon/zip/zip.h ++++ b/drivers/crypto/hisilicon/zip/zip.h +@@ -12,6 +12,10 @@ + + /* hisi_zip_sqe dw3 */ + #define HZIP_BD_STATUS_M GENMASK(7, 0) ++/* hisi_zip_sqe dw7 */ ++#define HZIP_IN_SGE_DATA_OFFSET_M GENMASK(23, 0) ++/* hisi_zip_sqe dw8 */ ++#define HZIP_OUT_SGE_DATA_OFFSET_M GENMASK(23, 0) + /* hisi_zip_sqe dw9 */ + #define HZIP_REQ_TYPE_M GENMASK(7, 0) + #define HZIP_ALG_TYPE_ZLIB 0x02 +--- a/drivers/crypto/hisilicon/zip/zip_crypto.c ++++ b/drivers/crypto/hisilicon/zip/zip_crypto.c +@@ -45,10 +45,8 @@ enum hisi_zip_alg_type { + + struct hisi_zip_req { + struct acomp_req *req; +- struct scatterlist *src; +- struct scatterlist *dst; +- size_t slen; +- size_t dlen; ++ int sskip; ++ int dskip; + struct hisi_acc_hw_sgl *hw_src; + struct hisi_acc_hw_sgl *hw_dst; + dma_addr_t dma_src; +@@ -94,13 +92,15 @@ static void hisi_zip_config_tag(struct h + + static void hisi_zip_fill_sqe(struct hisi_zip_sqe *sqe, u8 req_type, + dma_addr_t s_addr, dma_addr_t d_addr, u32 slen, +- u32 dlen) ++ u32 dlen, int sskip, int dskip) + { + memset(sqe, 0, sizeof(struct hisi_zip_sqe)); + +- sqe->input_data_length = slen; ++ sqe->input_data_length = slen - sskip; ++ sqe->dw7 = FIELD_PREP(HZIP_IN_SGE_DATA_OFFSET_M, sskip); ++ sqe->dw8 = 
FIELD_PREP(HZIP_OUT_SGE_DATA_OFFSET_M, dskip); + sqe->dw9 = FIELD_PREP(HZIP_REQ_TYPE_M, req_type); +- sqe->dest_avail_out = dlen; ++ sqe->dest_avail_out = dlen - dskip; + sqe->source_addr_l = lower_32_bits(s_addr); + sqe->source_addr_h = upper_32_bits(s_addr); + sqe->dest_addr_l = lower_32_bits(d_addr); +@@ -301,11 +301,6 @@ static void hisi_zip_remove_req(struct h + { + struct hisi_zip_req_q *req_q = &qp_ctx->req_q; + +- if (qp_ctx->qp->alg_type == HZIP_ALG_TYPE_COMP) +- kfree(req->dst); +- else +- kfree(req->src); +- + write_lock(&req_q->req_lock); + clear_bit(req->req_id, req_q->req_bitmap); + memset(req, 0, sizeof(struct hisi_zip_req)); +@@ -333,8 +328,8 @@ static void hisi_zip_acomp_cb(struct his + } + dlen = sqe->produced; + +- hisi_acc_sg_buf_unmap(dev, req->src, req->hw_src); +- hisi_acc_sg_buf_unmap(dev, req->dst, req->hw_dst); ++ hisi_acc_sg_buf_unmap(dev, acomp_req->src, req->hw_src); ++ hisi_acc_sg_buf_unmap(dev, acomp_req->dst, req->hw_dst); + + head_size = (qp->alg_type == 0) ? 
TO_HEAD_SIZE(qp->req_type) : 0; + acomp_req->dlen = dlen + head_size; +@@ -428,20 +423,6 @@ static size_t get_comp_head_size(struct + } + } + +-static int get_sg_skip_bytes(struct scatterlist *sgl, size_t bytes, +- size_t remains, struct scatterlist **out) +-{ +-#define SPLIT_NUM 2 +- size_t split_sizes[SPLIT_NUM]; +- int out_mapped_nents[SPLIT_NUM]; +- +- split_sizes[0] = bytes; +- split_sizes[1] = remains; +- +- return sg_split(sgl, 0, 0, SPLIT_NUM, split_sizes, out, +- out_mapped_nents, GFP_KERNEL); +-} +- + static struct hisi_zip_req *hisi_zip_create_req(struct acomp_req *req, + struct hisi_zip_qp_ctx *qp_ctx, + size_t head_size, bool is_comp) +@@ -449,31 +430,7 @@ static struct hisi_zip_req *hisi_zip_cre + struct hisi_zip_req_q *req_q = &qp_ctx->req_q; + struct hisi_zip_req *q = req_q->q; + struct hisi_zip_req *req_cache; +- struct scatterlist *out[2]; +- struct scatterlist *sgl; +- size_t len; +- int ret, req_id; +- +- /* +- * remove/add zlib/gzip head, as hardware operations do not include +- * comp head. so split req->src to get sgl without heads in acomp, or +- * add comp head to req->dst ahead of that hardware output compressed +- * data in sgl splited from req->dst without comp head. 
+- */ +- if (is_comp) { +- sgl = req->dst; +- len = req->dlen - head_size; +- } else { +- sgl = req->src; +- len = req->slen - head_size; +- } +- +- ret = get_sg_skip_bytes(sgl, head_size, len, out); +- if (ret) +- return ERR_PTR(ret); +- +- /* sgl for comp head is useless, so free it now */ +- kfree(out[0]); ++ int req_id; + + write_lock(&req_q->req_lock); + +@@ -481,7 +438,6 @@ static struct hisi_zip_req *hisi_zip_cre + if (req_id >= req_q->size) { + write_unlock(&req_q->req_lock); + dev_dbg(&qp_ctx->qp->qm->pdev->dev, "req cache is full!\n"); +- kfree(out[1]); + return ERR_PTR(-EBUSY); + } + set_bit(req_id, req_q->req_bitmap); +@@ -489,16 +445,13 @@ static struct hisi_zip_req *hisi_zip_cre + req_cache = q + req_id; + req_cache->req_id = req_id; + req_cache->req = req; ++ + if (is_comp) { +- req_cache->src = req->src; +- req_cache->dst = out[1]; +- req_cache->slen = req->slen; +- req_cache->dlen = req->dlen - head_size; ++ req_cache->sskip = 0; ++ req_cache->dskip = head_size; + } else { +- req_cache->src = out[1]; +- req_cache->dst = req->dst; +- req_cache->slen = req->slen - head_size; +- req_cache->dlen = req->dlen; ++ req_cache->sskip = head_size; ++ req_cache->dskip = 0; + } + + write_unlock(&req_q->req_lock); +@@ -510,6 +463,7 @@ static int hisi_zip_do_work(struct hisi_ + struct hisi_zip_qp_ctx *qp_ctx) + { + struct hisi_zip_sqe *zip_sqe = &qp_ctx->zip_sqe; ++ struct acomp_req *a_req = req->req; + struct hisi_qp *qp = qp_ctx->qp; + struct device *dev = &qp->qm->pdev->dev; + struct hisi_acc_sgl_pool *pool = &qp_ctx->sgl_pool; +@@ -517,16 +471,16 @@ static int hisi_zip_do_work(struct hisi_ + dma_addr_t output; + int ret; + +- if (!req->src || !req->slen || !req->dst || !req->dlen) ++ if (!a_req->src || !a_req->slen || !a_req->dst || !a_req->dlen) + return -EINVAL; + +- req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, req->src, pool, ++ req->hw_src = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->src, pool, + req->req_id << 1, &input); + if (IS_ERR(req->hw_src)) + 
return PTR_ERR(req->hw_src); + req->dma_src = input; + +- req->hw_dst = hisi_acc_sg_buf_map_to_hw_sgl(dev, req->dst, pool, ++ req->hw_dst = hisi_acc_sg_buf_map_to_hw_sgl(dev, a_req->dst, pool, + (req->req_id << 1) + 1, + &output); + if (IS_ERR(req->hw_dst)) { +@@ -535,8 +489,8 @@ static int hisi_zip_do_work(struct hisi_ + } + req->dma_dst = output; + +- hisi_zip_fill_sqe(zip_sqe, qp->req_type, input, output, req->slen, +- req->dlen); ++ hisi_zip_fill_sqe(zip_sqe, qp->req_type, input, output, a_req->slen, ++ a_req->dlen, req->sskip, req->dskip); + hisi_zip_config_buf_type(zip_sqe, HZIP_SGL); + hisi_zip_config_tag(zip_sqe, req->req_id); + +@@ -548,9 +502,9 @@ static int hisi_zip_do_work(struct hisi_ + return -EINPROGRESS; + + err_unmap_output: +- hisi_acc_sg_buf_unmap(dev, req->dst, req->hw_dst); ++ hisi_acc_sg_buf_unmap(dev, a_req->dst, req->hw_dst); + err_unmap_input: +- hisi_acc_sg_buf_unmap(dev, req->src, req->hw_src); ++ hisi_acc_sg_buf_unmap(dev, a_req->src, req->hw_src); + return ret; + } + diff --git a/queue-5.4/crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch b/queue-5.4/crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch new file mode 100644 index 00000000000..19ba1adee57 --- /dev/null +++ b/queue-5.4/crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch @@ -0,0 +1,33 @@ +From e8d998264bffade3cfe0536559f712ab9058d654 Mon Sep 17 00:00:00 2001 +From: Herbert Xu +Date: Fri, 29 Nov 2019 16:40:24 +0800 +Subject: crypto: pcrypt - Do not clear MAY_SLEEP flag in original request + +From: Herbert Xu + +commit e8d998264bffade3cfe0536559f712ab9058d654 upstream. + +We should not be modifying the original request's MAY_SLEEP flag +upon completion. It makes no sense to do so anyway. 
+ +Reported-by: Eric Biggers +Fixes: 5068c7a883d1 ("crypto: pcrypt - Add pcrypt crypto...") +Signed-off-by: Herbert Xu +Tested-by: Eric Biggers +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/pcrypt.c | 1 - + 1 file changed, 1 deletion(-) + +--- a/crypto/pcrypt.c ++++ b/crypto/pcrypt.c +@@ -71,7 +71,6 @@ static void pcrypt_aead_done(struct cryp + struct padata_priv *padata = pcrypt_request_padata(preq); + + padata->info = err; +- req->base.flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP; + + padata_do_serial(padata); + } diff --git a/queue-5.4/crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch b/queue-5.4/crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch new file mode 100644 index 00000000000..ebf60fcd44d --- /dev/null +++ b/queue-5.4/crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch @@ -0,0 +1,62 @@ +From 7f8c36fe9be46862c4f3c5302f769378028a34fa Mon Sep 17 00:00:00 2001 +From: Chuhong Yuan +Date: Tue, 10 Dec 2019 00:21:44 +0800 +Subject: crypto: picoxcell - adjust the position of tasklet_init and fix missed tasklet_kill + +From: Chuhong Yuan + +commit 7f8c36fe9be46862c4f3c5302f769378028a34fa upstream. + +Since tasklet is needed to be initialized before registering IRQ +handler, adjust the position of tasklet_init to fix the wrong order. + +Besides, to fix the missed tasklet_kill, this patch adds a helper +function and uses devm_add_action to kill the tasklet automatically. 
+ +Fixes: ce92136843cb ("crypto: picoxcell - add support for the picoxcell crypto engines") +Signed-off-by: Chuhong Yuan +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/crypto/picoxcell_crypto.c | 15 +++++++++++++-- + 1 file changed, 13 insertions(+), 2 deletions(-) + +--- a/drivers/crypto/picoxcell_crypto.c ++++ b/drivers/crypto/picoxcell_crypto.c +@@ -1613,6 +1613,11 @@ static const struct of_device_id spacc_o + MODULE_DEVICE_TABLE(of, spacc_of_id_table); + #endif /* CONFIG_OF */ + ++static void spacc_tasklet_kill(void *data) ++{ ++ tasklet_kill(data); ++} ++ + static int spacc_probe(struct platform_device *pdev) + { + int i, err, ret; +@@ -1655,6 +1660,14 @@ static int spacc_probe(struct platform_d + return -ENXIO; + } + ++ tasklet_init(&engine->complete, spacc_spacc_complete, ++ (unsigned long)engine); ++ ++ ret = devm_add_action(&pdev->dev, spacc_tasklet_kill, ++ &engine->complete); ++ if (ret) ++ return ret; ++ + if (devm_request_irq(&pdev->dev, irq->start, spacc_spacc_irq, 0, + engine->name, engine)) { + dev_err(engine->dev, "failed to request IRQ\n"); +@@ -1712,8 +1725,6 @@ static int spacc_probe(struct platform_d + INIT_LIST_HEAD(&engine->completed); + INIT_LIST_HEAD(&engine->in_progress); + engine->in_flight = 0; +- tasklet_init(&engine->complete, spacc_spacc_complete, +- (unsigned long)engine); + + platform_set_drvdata(pdev, engine); + diff --git a/queue-5.4/libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch b/queue-5.4/libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch new file mode 100644 index 00000000000..07408c633c5 --- /dev/null +++ b/queue-5.4/libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch @@ -0,0 +1,38 @@ +From 35b9211c0a2427e8f39e534f442f43804fc8d5ca Mon Sep 17 00:00:00 2001 +From: Andrii Nakryiko +Date: Fri, 24 Jan 2020 12:18:46 -0800 +Subject: libbpf: Fix realloc usage in bpf_core_find_cands + +From: Andrii Nakryiko + +commit 35b9211c0a2427e8f39e534f442f43804fc8d5ca upstream. 
+ +Fix bug requesting invalid size of reallocated array when constructing CO-RE +relocation candidate list. This can cause problems if there are many potential +candidates and a very fine-grained memory allocator bucket sizes are used. + +Fixes: ddc7c3042614 ("libbpf: implement BPF CO-RE offset relocation algorithm") +Reported-by: William Smith +Signed-off-by: Andrii Nakryiko +Signed-off-by: Daniel Borkmann +Acked-by: Yonghong Song +Link: https://lore.kernel.org/bpf/20200124201847.212528-1-andriin@fb.com +Signed-off-by: Greg Kroah-Hartman + +--- + tools/lib/bpf/libbpf.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/tools/lib/bpf/libbpf.c ++++ b/tools/lib/bpf/libbpf.c +@@ -2541,7 +2541,9 @@ static struct ids_vec *bpf_core_find_can + if (strncmp(local_name, targ_name, local_essent_len) == 0) { + pr_debug("[%d] %s: found candidate [%d] %s\n", + local_type_id, local_name, i, targ_name); +- new_ids = realloc(cand_ids->data, cand_ids->len + 1); ++ new_ids = reallocarray(cand_ids->data, ++ cand_ids->len + 1, ++ sizeof(*cand_ids->data)); + if (!new_ids) { + err = -ENOMEM; + goto err_out; diff --git a/queue-5.4/riscv-bpf-fix-broken-bpf-tail-calls.patch b/queue-5.4/riscv-bpf-fix-broken-bpf-tail-calls.patch new file mode 100644 index 00000000000..b45be5b7f4c --- /dev/null +++ b/queue-5.4/riscv-bpf-fix-broken-bpf-tail-calls.patch @@ -0,0 +1,68 @@ +From f1003b787c00fbaa4b11619c6b23a885bfce8f07 Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Bj=C3=B6rn=20T=C3=B6pel?= +Date: Mon, 16 Dec 2019 10:13:35 +0100 +Subject: riscv, bpf: Fix broken BPF tail calls +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Björn Töpel + +commit f1003b787c00fbaa4b11619c6b23a885bfce8f07 upstream. + +The BPF JIT incorrectly clobbered the a0 register, and did not flag +usage of s5 register when BPF stack was being used. 
+ +Fixes: 2353ecc6f91f ("bpf, riscv: add BPF JIT for RV64G") +Signed-off-by: Björn Töpel +Signed-off-by: Daniel Borkmann +Link: https://lore.kernel.org/bpf/20191216091343.23260-2-bjorn.topel@gmail.com +Signed-off-by: Greg Kroah-Hartman + +--- + arch/riscv/net/bpf_jit_comp.c | 13 +++++++++++-- + 1 file changed, 11 insertions(+), 2 deletions(-) + +--- a/arch/riscv/net/bpf_jit_comp.c ++++ b/arch/riscv/net/bpf_jit_comp.c +@@ -120,6 +120,11 @@ static bool seen_reg(int reg, struct rv_ + return false; + } + ++static void mark_fp(struct rv_jit_context *ctx) ++{ ++ __set_bit(RV_CTX_F_SEEN_S5, &ctx->flags); ++} ++ + static void mark_call(struct rv_jit_context *ctx) + { + __set_bit(RV_CTX_F_SEEN_CALL, &ctx->flags); +@@ -596,7 +601,8 @@ static void __build_epilogue(u8 reg, str + + emit(rv_addi(RV_REG_SP, RV_REG_SP, stack_adjust), ctx); + /* Set return value. */ +- emit(rv_addi(RV_REG_A0, RV_REG_A5, 0), ctx); ++ if (reg == RV_REG_RA) ++ emit(rv_addi(RV_REG_A0, RV_REG_A5, 0), ctx); + emit(rv_jalr(RV_REG_ZERO, reg, 0), ctx); + } + +@@ -1426,6 +1432,10 @@ static void build_prologue(struct rv_jit + { + int stack_adjust = 0, store_offset, bpf_stack_adjust; + ++ bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16); ++ if (bpf_stack_adjust) ++ mark_fp(ctx); ++ + if (seen_reg(RV_REG_RA, ctx)) + stack_adjust += 8; + stack_adjust += 8; /* RV_REG_FP */ +@@ -1443,7 +1453,6 @@ static void build_prologue(struct rv_jit + stack_adjust += 8; + + stack_adjust = round_up(stack_adjust, 16); +- bpf_stack_adjust = round_up(ctx->prog->aux->stack_depth, 16); + stack_adjust += bpf_stack_adjust; + + store_offset = stack_adjust - 8; diff --git a/queue-5.4/samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch b/queue-5.4/samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch new file mode 100644 index 00000000000..45047734aca --- /dev/null +++ b/queue-5.4/samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch @@ -0,0 +1,44 @@ +From b2e5e93ae8af6a34bca536cdc4b453ab1e707b8b Mon 
Sep 17 00:00:00 2001 +From: =?UTF-8?q?Toke=20H=C3=B8iland-J=C3=B8rgensen?= +Date: Mon, 20 Jan 2020 14:06:41 +0100 +Subject: samples/bpf: Don't try to remove user's homedir on clean +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Toke Høiland-Jørgensen + +commit b2e5e93ae8af6a34bca536cdc4b453ab1e707b8b upstream. + +The 'clean' rule in the samples/bpf Makefile tries to remove backup +files (ending in ~). However, if no such files exist, it will instead try +to remove the user's home directory. While the attempt is mostly harmless, +it does lead to a somewhat scary warning like this: + +rm: cannot remove '~': Is a directory + +Fix this by using find instead of shell expansion to locate any actual +backup files that need to be removed. + +Fixes: b62a796c109c ("samples/bpf: allow make to be run from samples/bpf/ directory") +Signed-off-by: Toke Høiland-Jørgensen +Signed-off-by: Alexei Starovoitov +Acked-by: Jesper Dangaard Brouer +Link: https://lore.kernel.org/bpf/157952560126.1683545.7273054725976032511.stgit@toke.dk +Signed-off-by: Greg Kroah-Hartman + +--- + samples/bpf/Makefile | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/samples/bpf/Makefile ++++ b/samples/bpf/Makefile +@@ -236,7 +236,7 @@ all: + + clean: + $(MAKE) -C ../../ M=$(CURDIR) clean +- @rm -f *~ ++ @find $(CURDIR) -type f -name '*~' -delete + + $(LIBBPF): FORCE + # Fix up variables inherited from Kbuild that tools/ build system won't like diff --git a/queue-5.4/samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch b/queue-5.4/samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch new file mode 100644 index 00000000000..5ccf966ce74 --- /dev/null +++ b/queue-5.4/samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch @@ -0,0 +1,138 @@ +From f9e6bfdbaf0cf304d72c70a05d81acac01a04f48 Mon Sep 17 00:00:00 2001 +From: Jesper Dangaard Brouer +Date: Fri, 20 Dec 2019 17:19:36 +0100 +Subject: samples/bpf: 
Xdp_redirect_cpu fix missing tracepoint attach + +From: Jesper Dangaard Brouer + +commit f9e6bfdbaf0cf304d72c70a05d81acac01a04f48 upstream. + +When sample xdp_redirect_cpu was converted to use libbpf, the +tracepoints used by this sample were not getting attached automatically +like with bpf_load.c. The BPF-maps was still getting loaded, thus +nobody notice that the tracepoints were not updating these maps. + +This fix doesn't use the new skeleton code, as this bug was introduced +in v5.1 and stable might want to backport this. E.g. Red Hat QA uses +this sample as part of their testing. + +Fixes: bbaf6029c49c ("samples/bpf: Convert XDP samples to libbpf usage") +Signed-off-by: Jesper Dangaard Brouer +Signed-off-by: Alexei Starovoitov +Acked-by: Andrii Nakryiko +Link: https://lore.kernel.org/bpf/157685877642.26195.2798780195186786841.stgit@firesoul +Signed-off-by: Greg Kroah-Hartman + +--- + samples/bpf/xdp_redirect_cpu_user.c | 59 +++++++++++++++++++++++++++++++++--- + 1 file changed, 55 insertions(+), 4 deletions(-) + +--- a/samples/bpf/xdp_redirect_cpu_user.c ++++ b/samples/bpf/xdp_redirect_cpu_user.c +@@ -16,6 +16,10 @@ static const char *__doc__ = + #include + #include + #include ++#include ++ ++#define __must_check ++#include + + #include + #include +@@ -46,6 +50,10 @@ static int cpus_count_map_fd; + static int cpus_iterator_map_fd; + static int exception_cnt_map_fd; + ++#define NUM_TP 5 ++struct bpf_link *tp_links[NUM_TP] = { 0 }; ++static int tp_cnt = 0; ++ + /* Exit return codes */ + #define EXIT_OK 0 + #define EXIT_FAIL 1 +@@ -88,6 +96,10 @@ static void int_exit(int sig) + printf("program on interface changed, not removing\n"); + } + } ++ /* Detach tracepoints */ ++ while (tp_cnt) ++ bpf_link__destroy(tp_links[--tp_cnt]); ++ + exit(EXIT_OK); + } + +@@ -588,23 +600,61 @@ static void stats_poll(int interval, boo + free_stats_record(prev); + } + ++static struct bpf_link * attach_tp(struct bpf_object *obj, ++ const char *tp_category, ++ const char* tp_name) 
++{ ++ struct bpf_program *prog; ++ struct bpf_link *link; ++ char sec_name[PATH_MAX]; ++ int len; ++ ++ len = snprintf(sec_name, PATH_MAX, "tracepoint/%s/%s", ++ tp_category, tp_name); ++ if (len < 0) ++ exit(EXIT_FAIL); ++ ++ prog = bpf_object__find_program_by_title(obj, sec_name); ++ if (!prog) { ++ fprintf(stderr, "ERR: finding progsec: %s\n", sec_name); ++ exit(EXIT_FAIL_BPF); ++ } ++ ++ link = bpf_program__attach_tracepoint(prog, tp_category, tp_name); ++ if (IS_ERR(link)) ++ exit(EXIT_FAIL_BPF); ++ ++ return link; ++} ++ ++static void init_tracepoints(struct bpf_object *obj) { ++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_err"); ++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_redirect_map_err"); ++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_exception"); ++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_enqueue"); ++ tp_links[tp_cnt++] = attach_tp(obj, "xdp", "xdp_cpumap_kthread"); ++} ++ + static int init_map_fds(struct bpf_object *obj) + { +- cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map"); +- rx_cnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt"); ++ /* Maps updated by tracepoints */ + redirect_err_cnt_map_fd = + bpf_object__find_map_fd_by_name(obj, "redirect_err_cnt"); ++ exception_cnt_map_fd = ++ bpf_object__find_map_fd_by_name(obj, "exception_cnt"); + cpumap_enqueue_cnt_map_fd = + bpf_object__find_map_fd_by_name(obj, "cpumap_enqueue_cnt"); + cpumap_kthread_cnt_map_fd = + bpf_object__find_map_fd_by_name(obj, "cpumap_kthread_cnt"); ++ ++ /* Maps used by XDP */ ++ rx_cnt_map_fd = bpf_object__find_map_fd_by_name(obj, "rx_cnt"); ++ cpu_map_fd = bpf_object__find_map_fd_by_name(obj, "cpu_map"); + cpus_available_map_fd = + bpf_object__find_map_fd_by_name(obj, "cpus_available"); + cpus_count_map_fd = bpf_object__find_map_fd_by_name(obj, "cpus_count"); + cpus_iterator_map_fd = + bpf_object__find_map_fd_by_name(obj, "cpus_iterator"); +- exception_cnt_map_fd = +- bpf_object__find_map_fd_by_name(obj, 
"exception_cnt"); + + if (cpu_map_fd < 0 || rx_cnt_map_fd < 0 || + redirect_err_cnt_map_fd < 0 || cpumap_enqueue_cnt_map_fd < 0 || +@@ -662,6 +712,7 @@ int main(int argc, char **argv) + strerror(errno)); + return EXIT_FAIL; + } ++ init_tracepoints(obj); + if (init_map_fds(obj) < 0) { + fprintf(stderr, "bpf_object__find_map_fd_by_name failed\n"); + return EXIT_FAIL; diff --git a/queue-5.4/selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch b/queue-5.4/selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch new file mode 100644 index 00000000000..6886fd28e0b --- /dev/null +++ b/queue-5.4/selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch @@ -0,0 +1,99 @@ +From 91cbdf740a476cf2c744169bf407de2e3ac1f3cf Mon Sep 17 00:00:00 2001 +From: Andrii Nakryiko +Date: Wed, 11 Dec 2019 17:36:20 -0800 +Subject: selftests/bpf: Fix perf_buffer test on systems w/ offline CPUs + +From: Andrii Nakryiko + +commit 91cbdf740a476cf2c744169bf407de2e3ac1f3cf upstream. + +Fix up perf_buffer.c selftest to take into account offline/missing CPUs. 
+ +Fixes: ee5cf82ce04a ("selftests/bpf: test perf buffer API") +Signed-off-by: Andrii Nakryiko +Signed-off-by: Alexei Starovoitov +Link: https://lore.kernel.org/bpf/20191212013621.1691858-1-andriin@fb.com +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/bpf/prog_tests/perf_buffer.c | 29 +++++++++++++++---- + 1 file changed, 24 insertions(+), 5 deletions(-) + +--- a/tools/testing/selftests/bpf/prog_tests/perf_buffer.c ++++ b/tools/testing/selftests/bpf/prog_tests/perf_buffer.c +@@ -4,6 +4,7 @@ + #include + #include + #include ++#include "libbpf_internal.h" + + static void on_sample(void *ctx, int cpu, void *data, __u32 size) + { +@@ -19,7 +20,7 @@ static void on_sample(void *ctx, int cpu + + void test_perf_buffer(void) + { +- int err, prog_fd, nr_cpus, i, duration = 0; ++ int err, prog_fd, on_len, nr_on_cpus = 0, nr_cpus, i, duration = 0; + const char *prog_name = "kprobe/sys_nanosleep"; + const char *file = "./test_perf_buffer.o"; + struct perf_buffer_opts pb_opts = {}; +@@ -29,15 +30,27 @@ void test_perf_buffer(void) + struct bpf_object *obj; + struct perf_buffer *pb; + struct bpf_link *link; ++ bool *online; + + nr_cpus = libbpf_num_possible_cpus(); + if (CHECK(nr_cpus < 0, "nr_cpus", "err %d\n", nr_cpus)) + return; + ++ err = parse_cpu_mask_file("/sys/devices/system/cpu/online", ++ &online, &on_len); ++ if (CHECK(err, "nr_on_cpus", "err %d\n", err)) ++ return; ++ ++ for (i = 0; i < on_len; i++) ++ if (online[i]) ++ nr_on_cpus++; ++ + /* load program */ + err = bpf_prog_load(file, BPF_PROG_TYPE_KPROBE, &obj, &prog_fd); +- if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno)) +- return; ++ if (CHECK(err, "obj_load", "err %d errno %d\n", err, errno)) { ++ obj = NULL; ++ goto out_close; ++ } + + prog = bpf_object__find_program_by_title(obj, prog_name); + if (CHECK(!prog, "find_probe", "prog '%s' not found\n", prog_name)) +@@ -64,6 +77,11 @@ void test_perf_buffer(void) + /* trigger kprobe on every CPU */ + CPU_ZERO(&cpu_seen); + for (i = 0; 
i < nr_cpus; i++) { ++ if (i >= on_len || !online[i]) { ++ printf("skipping offline CPU #%d\n", i); ++ continue; ++ } ++ + CPU_ZERO(&cpu_set); + CPU_SET(i, &cpu_set); + +@@ -81,8 +99,8 @@ void test_perf_buffer(void) + if (CHECK(err < 0, "perf_buffer__poll", "err %d\n", err)) + goto out_free_pb; + +- if (CHECK(CPU_COUNT(&cpu_seen) != nr_cpus, "seen_cpu_cnt", +- "expect %d, seen %d\n", nr_cpus, CPU_COUNT(&cpu_seen))) ++ if (CHECK(CPU_COUNT(&cpu_seen) != nr_on_cpus, "seen_cpu_cnt", ++ "expect %d, seen %d\n", nr_on_cpus, CPU_COUNT(&cpu_seen))) + goto out_free_pb; + + out_free_pb: +@@ -91,4 +109,5 @@ out_detach: + bpf_link__destroy(link); + out_close: + bpf_object__close(obj); ++ free(online); + } diff --git a/queue-5.4/selftests-bpf-fix-test_attach_probe.patch b/queue-5.4/selftests-bpf-fix-test_attach_probe.patch new file mode 100644 index 00000000000..12615d42a64 --- /dev/null +++ b/queue-5.4/selftests-bpf-fix-test_attach_probe.patch @@ -0,0 +1,52 @@ +From 580205dd4fe800b1e95be8b6df9e2991f975a8ad Mon Sep 17 00:00:00 2001 +From: Alexei Starovoitov +Date: Wed, 18 Dec 2019 18:04:42 -0800 +Subject: selftests/bpf: Fix test_attach_probe + +From: Alexei Starovoitov + +commit 580205dd4fe800b1e95be8b6df9e2991f975a8ad upstream. + +Fix two issues in test_attach_probe: + +1. it was not able to parse /proc/self/maps beyond the first line, + since %s means parse string until white space. +2. offset has to be accounted for otherwise uprobed address is incorrect. 
+ +Fixes: 1e8611bbdfc9 ("selftests/bpf: add kprobe/uprobe selftests") +Signed-off-by: Alexei Starovoitov +Signed-off-by: Daniel Borkmann +Acked-by: Yonghong Song +Acked-by: Andrii Nakryiko +Link: https://lore.kernel.org/bpf/20191219020442.1922617-1-ast@kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/bpf/prog_tests/attach_probe.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +--- a/tools/testing/selftests/bpf/prog_tests/attach_probe.c ++++ b/tools/testing/selftests/bpf/prog_tests/attach_probe.c +@@ -2,7 +2,7 @@ + #include + + ssize_t get_base_addr() { +- size_t start; ++ size_t start, offset; + char buf[256]; + FILE *f; + +@@ -10,10 +10,11 @@ ssize_t get_base_addr() { + if (!f) + return -errno; + +- while (fscanf(f, "%zx-%*x %s %*s\n", &start, buf) == 2) { ++ while (fscanf(f, "%zx-%*x %s %zx %*[^\n]\n", ++ &start, buf, &offset) == 3) { + if (strcmp(buf, "r-xp") == 0) { + fclose(f); +- return start; ++ return start - offset; + } + } + diff --git a/queue-5.4/selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch b/queue-5.4/selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch new file mode 100644 index 00000000000..63506a384f5 --- /dev/null +++ b/queue-5.4/selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch @@ -0,0 +1,44 @@ +From 8bec4f665e0baecb5f1b683379fc10b3745eb612 Mon Sep 17 00:00:00 2001 +From: Lorenz Bauer +Date: Fri, 24 Jan 2020 11:27:52 +0000 +Subject: selftests: bpf: Ignore FIN packets for reuseport tests + +From: Lorenz Bauer + +commit 8bec4f665e0baecb5f1b683379fc10b3745eb612 upstream. + +The reuseport tests currently suffer from a race condition: FIN +packets count towards DROP_ERR_SKB_DATA, since they don't contain +a valid struct cmd. Tests will spuriously fail depending on whether +check_results is called before or after the FIN is processed. + +Exit the BPF program early if FIN is set. 
+ +Fixes: 91134d849a0e ("bpf: Test BPF_PROG_TYPE_SK_REUSEPORT") +Signed-off-by: Lorenz Bauer +Signed-off-by: Daniel Borkmann +Reviewed-by: Jakub Sitnicki +Acked-by: Martin KaFai Lau +Acked-by: John Fastabend +Link: https://lore.kernel.org/bpf/20200124112754.19664-3-lmb@cloudflare.com +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +--- a/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c ++++ b/tools/testing/selftests/bpf/progs/test_select_reuseport_kern.c +@@ -113,6 +113,12 @@ int _select_by_skb_data(struct sk_reusep + data_check.skb_ports[0] = th->source; + data_check.skb_ports[1] = th->dest; + ++ if (th->fin) ++ /* The connection is being torn down at the end of a ++ * test. It can't contain a cmd, so return early. ++ */ ++ return SK_PASS; ++ + if ((th->doff << 2) + sizeof(*cmd) > data_check.len) + GOTO_DONE(DROP_ERR_SKB_DATA); + if (bpf_skb_load_bytes(reuse_md, th->doff << 2, &cmd_copy, diff --git a/queue-5.4/selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch b/queue-5.4/selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch new file mode 100644 index 00000000000..ae470245bb1 --- /dev/null +++ b/queue-5.4/selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch @@ -0,0 +1,42 @@ +From f1c3656c6d9c147d07d16614455aceb34932bdeb Mon Sep 17 00:00:00 2001 +From: Hangbin Liu +Date: Fri, 17 Jan 2020 18:06:56 +0800 +Subject: selftests/bpf: Skip perf hw events test if the setup disabled it + +From: Hangbin Liu + +commit f1c3656c6d9c147d07d16614455aceb34932bdeb upstream. + +The same with commit 4e59afbbed96 ("selftests/bpf: skip nmi test when perf +hw events are disabled"), it would make more sense to skip the +test_stacktrace_build_id_nmi test if the setup (e.g. virtual machines) has +disabled hardware perf events. 
+ +Fixes: 13790d1cc72c ("bpf: add selftest for stackmap with build_id in NMI context") +Signed-off-by: Hangbin Liu +Signed-off-by: Daniel Borkmann +Acked-by: John Fastabend +Link: https://lore.kernel.org/bpf/20200117100656.10359-1-liuhangbin@gmail.com +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c | 8 ++++++-- + 1 file changed, 6 insertions(+), 2 deletions(-) + +--- a/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c ++++ b/tools/testing/selftests/bpf/prog_tests/stacktrace_build_id_nmi.c +@@ -49,8 +49,12 @@ retry: + pmu_fd = syscall(__NR_perf_event_open, &attr, -1 /* pid */, + 0 /* cpu 0 */, -1 /* group id */, + 0 /* flags */); +- if (CHECK(pmu_fd < 0, "perf_event_open", +- "err %d errno %d. Does the test host support PERF_COUNT_HW_CPU_CYCLES?\n", ++ if (pmu_fd < 0 && errno == ENOENT) { ++ printf("%s:SKIP:no PERF_COUNT_HW_CPU_CYCLES\n", __func__); ++ test__skip(); ++ goto cleanup; ++ } ++ if (CHECK(pmu_fd < 0, "perf_event_open", "err %d errno %d\n", + pmu_fd, errno)) + goto close_prog; + diff --git a/queue-5.4/selftests-bpf-use-a-temporary-file-in-test_sockmap.patch b/queue-5.4/selftests-bpf-use-a-temporary-file-in-test_sockmap.patch new file mode 100644 index 00000000000..410f3a6fd4a --- /dev/null +++ b/queue-5.4/selftests-bpf-use-a-temporary-file-in-test_sockmap.patch @@ -0,0 +1,75 @@ +From c31dbb1e41d1857b403f9bf58c87f5898519a0bc Mon Sep 17 00:00:00 2001 +From: Lorenz Bauer +Date: Fri, 24 Jan 2020 11:27:51 +0000 +Subject: selftests: bpf: Use a temporary file in test_sockmap + +From: Lorenz Bauer + +commit c31dbb1e41d1857b403f9bf58c87f5898519a0bc upstream. + +Use a proper temporary file for sendpage tests. This means that running +the tests doesn't clutter the working directory, and allows running the +test on read-only filesystems. 
+ +Fixes: 16962b2404ac ("bpf: sockmap, add selftests") +Signed-off-by: Lorenz Bauer +Signed-off-by: Daniel Borkmann +Reviewed-by: Jakub Sitnicki +Acked-by: Martin KaFai Lau +Acked-by: John Fastabend +Link: https://lore.kernel.org/bpf/20200124112754.19664-2-lmb@cloudflare.com +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/bpf/test_sockmap.c | 15 +++++---------- + 1 file changed, 5 insertions(+), 10 deletions(-) + +--- a/tools/testing/selftests/bpf/test_sockmap.c ++++ b/tools/testing/selftests/bpf/test_sockmap.c +@@ -331,7 +331,7 @@ static int msg_loop_sendpage(int fd, int + FILE *file; + int i, fp; + +- file = fopen(".sendpage_tst.tmp", "w+"); ++ file = tmpfile(); + if (!file) { + perror("create file for sendpage"); + return 1; +@@ -340,13 +340,8 @@ static int msg_loop_sendpage(int fd, int + fwrite(&k, sizeof(char), 1, file); + fflush(file); + fseek(file, 0, SEEK_SET); +- fclose(file); + +- fp = open(".sendpage_tst.tmp", O_RDONLY); +- if (fp < 0) { +- perror("reopen file for sendpage"); +- return 1; +- } ++ fp = fileno(file); + + clock_gettime(CLOCK_MONOTONIC, &s->start); + for (i = 0; i < cnt; i++) { +@@ -354,11 +349,11 @@ static int msg_loop_sendpage(int fd, int + + if (!drop && sent < 0) { + perror("send loop error"); +- close(fp); ++ fclose(file); + return sent; + } else if (drop && sent >= 0) { + printf("sendpage loop error expected: %i\n", sent); +- close(fp); ++ fclose(file); + return -EIO; + } + +@@ -366,7 +361,7 @@ static int msg_loop_sendpage(int fd, int + s->bytes_sent += sent; + } + clock_gettime(CLOCK_MONOTONIC, &s->end); +- close(fp); ++ fclose(file); + return 0; + } + diff --git a/queue-5.4/series b/queue-5.4/series index ee174ca0c65..aaee65c0752 100644 --- a/queue-5.4/series +++ b/queue-5.4/series @@ -139,3 +139,24 @@ tracing-annotate-ftrace_graph_notrace_hash-pointer-w.patch ftrace-add-comment-to-why-rcu_dereference_sched-is-o.patch ftrace-protect-ftrace_graph_hash-with-ftrace_sync.patch 
crypto-pcrypt-avoid-deadlock-by-using-per-instance-padata-queues.patch +btrfs-fix-improper-setting-of-scanned-for-range-cyclic-write-cache-pages.patch +btrfs-handle-another-split-brain-scenario-with-metadata-uuid-feature.patch +riscv-bpf-fix-broken-bpf-tail-calls.patch +selftests-bpf-fix-perf_buffer-test-on-systems-w-offline-cpus.patch +bpf-devmap-pass-lockdep-expression-to-rcu-lists.patch +libbpf-fix-realloc-usage-in-bpf_core_find_cands.patch +tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch +samples-bpf-don-t-try-to-remove-user-s-homedir-on-clean.patch +samples-bpf-xdp_redirect_cpu-fix-missing-tracepoint-attach.patch +selftests-bpf-fix-test_attach_probe.patch +selftests-bpf-skip-perf-hw-events-test-if-the-setup-disabled-it.patch +selftests-bpf-use-a-temporary-file-in-test_sockmap.patch +selftests-bpf-ignore-fin-packets-for-reuseport-tests.patch +crypto-api-fix-unexpectedly-getting-generic-implementation.patch +crypto-hisilicon-use-the-offset-fields-in-sqe-to-avoid-need-to-split-scatterlists.patch +crypto-ccp-set-max-rsa-modulus-size-for-v3-platform-devices-as-well.patch +crypto-arm64-ghash-neon-bump-priority-to-150.patch +crypto-pcrypt-do-not-clear-may_sleep-flag-in-original-request.patch +crypto-atmel-aes-fix-counter-overflow-in-ctr-mode.patch +crypto-api-fix-race-condition-in-crypto_spawn_alg.patch +crypto-picoxcell-adjust-the-position-of-tasklet_init-and-fix-missed-tasklet_kill.patch diff --git a/queue-5.4/tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch b/queue-5.4/tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch new file mode 100644 index 00000000000..6f6b4b54f09 --- /dev/null +++ b/queue-5.4/tc-testing-fix-ebpf-tests-failure-on-linux-fresh-clones.patch @@ -0,0 +1,38 @@ +From 7145fcfffef1fad4266aaf5ca96727696916edb7 Mon Sep 17 00:00:00 2001 +From: Davide Caratti +Date: Mon, 3 Feb 2020 16:29:29 +0100 +Subject: tc-testing: fix eBPF tests failure on linux fresh clones + +From: Davide Caratti + +commit 
7145fcfffef1fad4266aaf5ca96727696916edb7 upstream. + +when the following command is done on a fresh clone of the kernel tree, + + [root@f31 tc-testing]# ./tdc.py -c bpf + +test cases that need to build the eBPF sample program fail systematically, +because 'buildebpfPlugin' is unable to install the kernel headers (i.e, the +'khdr' target fails). Pass the correct environment to 'make', in place of +ENVIR, to allow running these tests. + +Fixes: 4c2d39bd40c1 ("tc-testing: use a plugin to build eBPF program") +Signed-off-by: Davide Caratti +Signed-off-by: David S. Miller +Signed-off-by: Greg Kroah-Hartman + +--- + tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py ++++ b/tools/testing/selftests/tc-testing/plugin-lib/buildebpfPlugin.py +@@ -54,7 +54,7 @@ class SubPlugin(TdcPlugin): + shell=True, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, +- env=ENVIR) ++ env=os.environ.copy()) + (rawout, serr) = proc.communicate() + + if proc.returncode != 0 and len(serr) > 0: