From: Sasha Levin Date: Fri, 27 Aug 2021 17:35:49 +0000 (-0400) Subject: Fixes for 5.4 X-Git-Tag: v4.4.283~71 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=3721424a7afa853fdd7d06be74aed16ee75cddfd;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 5.4 Signed-off-by: Sasha Levin --- diff --git a/queue-5.4/arc-fix-config_stackdepot.patch b/queue-5.4/arc-fix-config_stackdepot.patch new file mode 100644 index 00000000000..72c9458e105 --- /dev/null +++ b/queue-5.4/arc-fix-config_stackdepot.patch @@ -0,0 +1,47 @@ +From aa33ad597bd63bad5e9af5ea0ce2cc5542bb518b Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Sat, 10 Jul 2021 07:50:33 -0700 +Subject: ARC: Fix CONFIG_STACKDEPOT + +From: Guenter Roeck + +[ Upstream commit bf79167fd86f3b97390fe2e70231d383526bd9cc ] + +Enabling CONFIG_STACKDEPOT results in the following build error. + +arc-elf-ld: lib/stackdepot.o: in function `filter_irq_stacks': +stackdepot.c:(.text+0x456): undefined reference to `__irqentry_text_start' +arc-elf-ld: stackdepot.c:(.text+0x456): undefined reference to `__irqentry_text_start' +arc-elf-ld: stackdepot.c:(.text+0x476): undefined reference to `__irqentry_text_end' +arc-elf-ld: stackdepot.c:(.text+0x476): undefined reference to `__irqentry_text_end' +arc-elf-ld: stackdepot.c:(.text+0x484): undefined reference to `__softirqentry_text_start' +arc-elf-ld: stackdepot.c:(.text+0x484): undefined reference to `__softirqentry_text_start' +arc-elf-ld: stackdepot.c:(.text+0x48c): undefined reference to `__softirqentry_text_end' +arc-elf-ld: stackdepot.c:(.text+0x48c): undefined reference to `__softirqentry_text_end' + +Other architectures address this problem by adding IRQENTRY_TEXT and +SOFTIRQENTRY_TEXT to the text segment, so do the same here. 
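For readers unfamiliar with these macros: in kernels of this era, include/asm-generic/vmlinux.lds.h defines IRQENTRY_TEXT and SOFTIRQENTRY_TEXT to emit exactly the start/end symbols the linker complained about. Sketched from memory, not quoted verbatim, their expansion inside an output section is roughly:

```
/* Rough expansion of the two section macros: */
ALIGN_FUNCTION();
__irqentry_text_start = .;
*(.irqentry.text)
__irqentry_text_end = .;

ALIGN_FUNCTION();
__softirqentry_text_start = .;
*(.softirqentry.text)
__softirqentry_text_end = .;
```

Placing them inside the .text output section both keeps irq-entry code in the mapped kernel image and defines the symbol pairs that lib/stackdepot.c's filter_irq_stacks() references.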
+
+Signed-off-by: Guenter Roeck
+Signed-off-by: Vineet Gupta
+Signed-off-by: Sasha Levin
+---
+ arch/arc/kernel/vmlinux.lds.S | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/arch/arc/kernel/vmlinux.lds.S b/arch/arc/kernel/vmlinux.lds.S
+index 6c693a9d29b6..0391b8293ad8 100644
+--- a/arch/arc/kernel/vmlinux.lds.S
++++ b/arch/arc/kernel/vmlinux.lds.S
+@@ -88,6 +88,8 @@ SECTIONS
+ CPUIDLE_TEXT
+ LOCK_TEXT
+ KPROBES_TEXT
++ IRQENTRY_TEXT
++ SOFTIRQENTRY_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+ }
+--
+2.30.2
+
diff --git a/queue-5.4/mmc-sdhci-msm-update-the-software-timeout-value-for-.patch b/queue-5.4/mmc-sdhci-msm-update-the-software-timeout-value-for-.patch
new file mode 100644
index 00000000000..99cbdfe3d40
--- /dev/null
+++ b/queue-5.4/mmc-sdhci-msm-update-the-software-timeout-value-for-.patch
@@ -0,0 +1,94 @@
+From 3eba8d5e919a6e60fbb9758c5f9c5bf84b4015a4 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Fri, 16 Jul 2021 17:16:14 +0530
+Subject: mmc: sdhci-msm: Update the software timeout value for sdhc
+
+From: Shaik Sajida Bhanu
+
+[ Upstream commit 67b13f3e221ed81b46a657e2b499bf8b20162476 ]
+
+Whenever the SDHC runs at a clock rate of 50 MHz or below, the hardware
+data timeout value will be 21.47 secs, which is approximately 22 secs,
+while the current software timeout value is 10 secs. The software
+timeout value has to be set higher than the hardware data timeout value
+to avoid the register dumps shown below.
+
+[ 332.953670] mmc2: Timeout waiting for hardware interrupt. 
+[ 332.959608] mmc2: sdhci: ============ SDHCI REGISTER DUMP =========== +[ 332.966450] mmc2: sdhci: Sys addr: 0x00000000 | Version: 0x00007202 +[ 332.973256] mmc2: sdhci: Blk size: 0x00000200 | Blk cnt: 0x00000001 +[ 332.980054] mmc2: sdhci: Argument: 0x00000000 | Trn mode: 0x00000027 +[ 332.986864] mmc2: sdhci: Present: 0x01f801f6 | Host ctl: 0x0000001f +[ 332.993671] mmc2: sdhci: Power: 0x00000001 | Blk gap: 0x00000000 +[ 333.000583] mmc2: sdhci: Wake-up: 0x00000000 | Clock: 0x00000007 +[ 333.007386] mmc2: sdhci: Timeout: 0x0000000e | Int stat: 0x00000000 +[ 333.014182] mmc2: sdhci: Int enab: 0x03ff100b | Sig enab: 0x03ff100b +[ 333.020976] mmc2: sdhci: ACmd stat: 0x00000000 | Slot int: 0x00000000 +[ 333.027771] mmc2: sdhci: Caps: 0x322dc8b2 | Caps_1: 0x0000808f +[ 333.034561] mmc2: sdhci: Cmd: 0x0000183a | Max curr: 0x00000000 +[ 333.041359] mmc2: sdhci: Resp[0]: 0x00000900 | Resp[1]: 0x00000000 +[ 333.048157] mmc2: sdhci: Resp[2]: 0x00000000 | Resp[3]: 0x00000000 +[ 333.054945] mmc2: sdhci: Host ctl2: 0x00000000 +[ 333.059657] mmc2: sdhci: ADMA Err: 0x00000000 | ADMA Ptr: +0x0000000ffffff218 +[ 333.067178] mmc2: sdhci_msm: ----------- VENDOR REGISTER DUMP +----------- +[ 333.074343] mmc2: sdhci_msm: DLL sts: 0x00000000 | DLL cfg: +0x6000642c | DLL cfg2: 0x0020a000 +[ 333.083417] mmc2: sdhci_msm: DLL cfg3: 0x00000000 | DLL usr ctl: +0x00000000 | DDR cfg: 0x80040873 +[ 333.092850] mmc2: sdhci_msm: Vndr func: 0x00008a9c | Vndr func2 : +0xf88218a8 Vndr func3: 0x02626040 +[ 333.102371] mmc2: sdhci: ============================================ + +So, set software timeout value more than hardware timeout value. 
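As a sanity check on those numbers, the data-timeout formula quoted in the patch comment can be modelled outside the driver. This is a hedged sketch: `hw_timeout_ms()` and the maximum count value 0xE are illustrative assumptions derived from the formula in the patch, not sdhci-msm code.

```c
#include <stdint.h>

/*
 * Hardware data timeout in milliseconds, per the formula cited in the
 * patch comment: 4 * 2^(count + 13) MCLK cycles, with MCLK running at
 * clock_hz.
 */
static uint64_t hw_timeout_ms(uint64_t clock_hz, unsigned int count)
{
    uint64_t cycles = 4ULL << (count + 13); /* 4 * 2^(count + 13) */

    return cycles * 1000 / clock_hz;
}
```

Under this model, a count of 0xE gives roughly 10.7 secs at 50 MHz and about 21.47 secs at 25 MHz, so a card clocked at or below 50 MHz can legitimately take longer than the old 10 sec software timeout; 22 secs covers the worst case.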
+ +Signed-off-by: Shaik Sajida Bhanu +Acked-by: Adrian Hunter +Cc: stable@vger.kernel.org +Link: https://lore.kernel.org/r/1626435974-14462-1-git-send-email-sbhanu@codeaurora.org +Signed-off-by: Ulf Hansson +Signed-off-by: Sasha Levin +--- + drivers/mmc/host/sdhci-msm.c | 18 ++++++++++++++++++ + 1 file changed, 18 insertions(+) + +diff --git a/drivers/mmc/host/sdhci-msm.c b/drivers/mmc/host/sdhci-msm.c +index 8bed81cf03ad..8ab963055238 100644 +--- a/drivers/mmc/host/sdhci-msm.c ++++ b/drivers/mmc/host/sdhci-msm.c +@@ -1589,6 +1589,23 @@ out: + __sdhci_msm_set_clock(host, clock); + } + ++static void sdhci_msm_set_timeout(struct sdhci_host *host, struct mmc_command *cmd) ++{ ++ u32 count, start = 15; ++ ++ __sdhci_set_timeout(host, cmd); ++ count = sdhci_readb(host, SDHCI_TIMEOUT_CONTROL); ++ /* ++ * Update software timeout value if its value is less than hardware data ++ * timeout value. Qcom SoC hardware data timeout value was calculated ++ * using 4 * MCLK * 2^(count + 13). where MCLK = 1 / host->clock. ++ */ ++ if (cmd && cmd->data && host->clock > 400000 && ++ host->clock <= 50000000 && ++ ((1 << (count + start)) > (10 * host->clock))) ++ host->data_timeout = 22LL * NSEC_PER_SEC; ++} ++ + /* + * Platform specific register write functions. 
This is so that, if any + * register write needs to be followed up by platform specific actions, +@@ -1753,6 +1770,7 @@ static const struct sdhci_ops sdhci_msm_ops = { + .set_uhs_signaling = sdhci_msm_set_uhs_signaling, + .write_w = sdhci_msm_writew, + .write_b = sdhci_msm_writeb, ++ .set_timeout = sdhci_msm_set_timeout, + }; + + static const struct sdhci_pltfm_data sdhci_msm_pdata = { +-- +2.30.2 + diff --git a/queue-5.4/netfilter-conntrack-collect-all-entries-in-one-cycle.patch b/queue-5.4/netfilter-conntrack-collect-all-entries-in-one-cycle.patch new file mode 100644 index 00000000000..ff873c06e43 --- /dev/null +++ b/queue-5.4/netfilter-conntrack-collect-all-entries-in-one-cycle.patch @@ -0,0 +1,184 @@ +From 4ee9294e034ea09b4ac20cf3349a09eb4df7623a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 27 Jul 2021 00:29:19 +0200 +Subject: netfilter: conntrack: collect all entries in one cycle + +From: Florian Westphal + +[ Upstream commit 4608fdfc07e116f9fc0895beb40abad7cdb5ee3d ] + +Michal Kubecek reports that conntrack gc is responsible for frequent +wakeups (every 125ms) on idle systems. + +On busy systems, timed out entries are evicted during lookup. +The gc worker is only needed to remove entries after system becomes idle +after a busy period. + +To resolve this, always scan the entire table. +If the scan is taking too long, reschedule so other work_structs can run +and resume from next bucket. + +After a completed scan, wait for 2 minutes before the next cycle. +Heuristics for faster re-schedule are removed. + +GC_SCAN_INTERVAL could be exposed as a sysctl in the future to allow +tuning this as-needed or even turn the gc worker off. 
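The rework above boils down to a resumable, time-bounded scan. A stand-alone model with hypothetical names: `budget` counts buckets per slice, standing in for the 10 ms `GC_SCAN_MAX_DURATION` wall-clock check, and `rescan_immediately` stands in for `next_run = 0`.

```c
struct gc_state {
    unsigned int next_bucket;   /* where the next slice resumes */
    int rescan_immediately;     /* requeue the work with no delay */
};

/* Scan from the saved bucket until the table ends or the budget runs out. */
static void gc_scan(struct gc_state *gc, unsigned int hashsz,
                    unsigned int budget)
{
    unsigned int i = gc->next_bucket;
    unsigned int scanned = 0;

    gc->rescan_immediately = 0;
    while (i < hashsz) {
        /* ... evict expired entries from bucket i here ... */
        i++;
        if (++scanned >= budget && i < hashsz) {
            gc->next_bucket = i;        /* resume point for next slice */
            gc->rescan_immediately = 1;
            return;
        }
    }
    gc->next_bucket = 0;    /* full pass done; sleep GC_SCAN_INTERVAL */
}
```

Successive slices therefore walk every bucket exactly once per cycle, which is what lets the old heuristics (scan goal, evict ratio) be deleted.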
+ +Reported-by: Michal Kubecek +Signed-off-by: Florian Westphal +Signed-off-by: Pablo Neira Ayuso +Signed-off-by: Sasha Levin +--- + net/netfilter/nf_conntrack_core.c | 71 ++++++++++--------------------- + 1 file changed, 22 insertions(+), 49 deletions(-) + +diff --git a/net/netfilter/nf_conntrack_core.c b/net/netfilter/nf_conntrack_core.c +index 4a988ce4264c..4bcc36e4b2ef 100644 +--- a/net/netfilter/nf_conntrack_core.c ++++ b/net/netfilter/nf_conntrack_core.c +@@ -66,22 +66,17 @@ EXPORT_SYMBOL_GPL(nf_conntrack_hash); + + struct conntrack_gc_work { + struct delayed_work dwork; +- u32 last_bucket; ++ u32 next_bucket; + bool exiting; + bool early_drop; +- long next_gc_run; + }; + + static __read_mostly struct kmem_cache *nf_conntrack_cachep; + static DEFINE_SPINLOCK(nf_conntrack_locks_all_lock); + static __read_mostly bool nf_conntrack_locks_all; + +-/* every gc cycle scans at most 1/GC_MAX_BUCKETS_DIV part of table */ +-#define GC_MAX_BUCKETS_DIV 128u +-/* upper bound of full table scan */ +-#define GC_MAX_SCAN_JIFFIES (16u * HZ) +-/* desired ratio of entries found to be expired */ +-#define GC_EVICT_RATIO 50u ++#define GC_SCAN_INTERVAL (120u * HZ) ++#define GC_SCAN_MAX_DURATION msecs_to_jiffies(10) + + static struct conntrack_gc_work conntrack_gc_work; + +@@ -1226,17 +1221,13 @@ static void nf_ct_offload_timeout(struct nf_conn *ct) + + static void gc_worker(struct work_struct *work) + { +- unsigned int min_interval = max(HZ / GC_MAX_BUCKETS_DIV, 1u); +- unsigned int i, goal, buckets = 0, expired_count = 0; +- unsigned int nf_conntrack_max95 = 0; ++ unsigned long end_time = jiffies + GC_SCAN_MAX_DURATION; ++ unsigned int i, hashsz, nf_conntrack_max95 = 0; ++ unsigned long next_run = GC_SCAN_INTERVAL; + struct conntrack_gc_work *gc_work; +- unsigned int ratio, scanned = 0; +- unsigned long next_run; +- + gc_work = container_of(work, struct conntrack_gc_work, dwork.work); + +- goal = nf_conntrack_htable_size / GC_MAX_BUCKETS_DIV; +- i = gc_work->last_bucket; ++ i = 
gc_work->next_bucket; + if (gc_work->early_drop) + nf_conntrack_max95 = nf_conntrack_max / 100u * 95u; + +@@ -1244,22 +1235,21 @@ static void gc_worker(struct work_struct *work) + struct nf_conntrack_tuple_hash *h; + struct hlist_nulls_head *ct_hash; + struct hlist_nulls_node *n; +- unsigned int hashsz; + struct nf_conn *tmp; + +- i++; + rcu_read_lock(); + + nf_conntrack_get_ht(&ct_hash, &hashsz); +- if (i >= hashsz) +- i = 0; ++ if (i >= hashsz) { ++ rcu_read_unlock(); ++ break; ++ } + + hlist_nulls_for_each_entry_rcu(h, n, &ct_hash[i], hnnode) { + struct net *net; + + tmp = nf_ct_tuplehash_to_ctrack(h); + +- scanned++; + if (test_bit(IPS_OFFLOAD_BIT, &tmp->status)) { + nf_ct_offload_timeout(tmp); + continue; +@@ -1267,7 +1257,6 @@ static void gc_worker(struct work_struct *work) + + if (nf_ct_is_expired(tmp)) { + nf_ct_gc_expired(tmp); +- expired_count++; + continue; + } + +@@ -1299,7 +1288,14 @@ static void gc_worker(struct work_struct *work) + */ + rcu_read_unlock(); + cond_resched(); +- } while (++buckets < goal); ++ i++; ++ ++ if (time_after(jiffies, end_time) && i < hashsz) { ++ gc_work->next_bucket = i; ++ next_run = 0; ++ break; ++ } ++ } while (i < hashsz); + + if (gc_work->exiting) + return; +@@ -1310,40 +1306,17 @@ static void gc_worker(struct work_struct *work) + * + * This worker is only here to reap expired entries when system went + * idle after a busy period. +- * +- * The heuristics below are supposed to balance conflicting goals: +- * +- * 1. Minimize time until we notice a stale entry +- * 2. Maximize scan intervals to not waste cycles +- * +- * Normally, expire ratio will be close to 0. +- * +- * As soon as a sizeable fraction of the entries have expired +- * increase scan frequency. + */ +- ratio = scanned ? 
expired_count * 100 / scanned : 0; +- if (ratio > GC_EVICT_RATIO) { +- gc_work->next_gc_run = min_interval; +- } else { +- unsigned int max = GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV; +- +- BUILD_BUG_ON((GC_MAX_SCAN_JIFFIES / GC_MAX_BUCKETS_DIV) == 0); +- +- gc_work->next_gc_run += min_interval; +- if (gc_work->next_gc_run > max) +- gc_work->next_gc_run = max; ++ if (next_run) { ++ gc_work->early_drop = false; ++ gc_work->next_bucket = 0; + } +- +- next_run = gc_work->next_gc_run; +- gc_work->last_bucket = i; +- gc_work->early_drop = false; + queue_delayed_work(system_power_efficient_wq, &gc_work->dwork, next_run); + } + + static void conntrack_gc_work_init(struct conntrack_gc_work *gc_work) + { + INIT_DEFERRABLE_WORK(&gc_work->dwork, gc_worker); +- gc_work->next_gc_run = HZ; + gc_work->exiting = false; + } + +-- +2.30.2 + diff --git a/queue-5.4/once-fix-panic-when-module-unload.patch b/queue-5.4/once-fix-panic-when-module-unload.patch new file mode 100644 index 00000000000..85ce97bbec4 --- /dev/null +++ b/queue-5.4/once-fix-panic-when-module-unload.patch @@ -0,0 +1,123 @@ +From aafd50f1f10780c5953d1c634bc377d5b908fa3d Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Aug 2021 16:21:24 +0800 +Subject: once: Fix panic when module unload + +From: Kefeng Wang + +[ Upstream commit 1027b96ec9d34f9abab69bc1a4dc5b1ad8ab1349 ] + +DO_ONCE +DEFINE_STATIC_KEY_TRUE(___once_key); +__do_once_done + once_disable_jump(once_key); + INIT_WORK(&w->work, once_deferred); + struct once_work *w; + w->key = key; + schedule_work(&w->work); module unload + //*the key is +destroy* +process_one_work + once_deferred + BUG_ON(!static_key_enabled(work->key)); + static_key_count((struct static_key *)x) //*access key, crash* + +When module uses DO_ONCE mechanism, it could crash due to the above +concurrency problem, we could reproduce it with link[1]. + +Fix it by add/put module refcount in the once work process. 
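The fix follows the usual pattern for deferred work that touches module-owned data: pin the module when the work is queued, unpin only after the work has run. A toy model in plain C, using an integer refcount that merely mimics `__module_get()`/`module_put()`:

```c
struct toy_module { int refcnt; };

struct toy_once_work {
    struct toy_module *module;
    int key_disabled;           /* stands in for the static key */
};

/* Mirrors once_disable_jump(): grab a reference before queueing. */
static void toy_queue(struct toy_once_work *w, struct toy_module *mod)
{
    w->module = mod;
    mod->refcnt++;              /* __module_get(mod) */
    /* schedule_work(&w->work) would go here */
}

/* Mirrors once_deferred(): the held reference guarantees the module,
 * and hence the static key, is still alive; drop it only afterwards. */
static void toy_work_fn(struct toy_once_work *w)
{
    w->key_disabled = 1;        /* static_branch_disable(w->key) */
    w->module->refcnt--;        /* module_put(w->module) */
}
```

As long as the refcount is non-zero, module unload blocks, so the work function can never observe a destroyed key.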
+ +[1] https://lore.kernel.org/netdev/eaa6c371-465e-57eb-6be9-f4b16b9d7cbf@huawei.com/ + +Cc: Hannes Frederic Sowa +Cc: Daniel Borkmann +Cc: David S. Miller +Cc: Eric Dumazet +Reported-by: Minmin chen +Signed-off-by: Kefeng Wang +Acked-by: Hannes Frederic Sowa +Signed-off-by: David S. Miller +Signed-off-by: Sasha Levin +--- + include/linux/once.h | 4 ++-- + lib/once.c | 11 ++++++++--- + 2 files changed, 10 insertions(+), 5 deletions(-) + +diff --git a/include/linux/once.h b/include/linux/once.h +index 9225ee6d96c7..ae6f4eb41cbe 100644 +--- a/include/linux/once.h ++++ b/include/linux/once.h +@@ -7,7 +7,7 @@ + + bool __do_once_start(bool *done, unsigned long *flags); + void __do_once_done(bool *done, struct static_key_true *once_key, +- unsigned long *flags); ++ unsigned long *flags, struct module *mod); + + /* Call a function exactly once. The idea of DO_ONCE() is to perform + * a function call such as initialization of random seeds, etc, only +@@ -46,7 +46,7 @@ void __do_once_done(bool *done, struct static_key_true *once_key, + if (unlikely(___ret)) { \ + func(__VA_ARGS__); \ + __do_once_done(&___done, &___once_key, \ +- &___flags); \ ++ &___flags, THIS_MODULE); \ + } \ + } \ + ___ret; \ +diff --git a/lib/once.c b/lib/once.c +index 8b7d6235217e..59149bf3bfb4 100644 +--- a/lib/once.c ++++ b/lib/once.c +@@ -3,10 +3,12 @@ + #include + #include + #include ++#include + + struct once_work { + struct work_struct work; + struct static_key_true *key; ++ struct module *module; + }; + + static void once_deferred(struct work_struct *w) +@@ -16,10 +18,11 @@ static void once_deferred(struct work_struct *w) + work = container_of(w, struct once_work, work); + BUG_ON(!static_key_enabled(work->key)); + static_branch_disable(work->key); ++ module_put(work->module); + kfree(work); + } + +-static void once_disable_jump(struct static_key_true *key) ++static void once_disable_jump(struct static_key_true *key, struct module *mod) + { + struct once_work *w; + +@@ -29,6 +32,8 @@ static void 
once_disable_jump(struct static_key_true *key) + + INIT_WORK(&w->work, once_deferred); + w->key = key; ++ w->module = mod; ++ __module_get(mod); + schedule_work(&w->work); + } + +@@ -53,11 +58,11 @@ bool __do_once_start(bool *done, unsigned long *flags) + EXPORT_SYMBOL(__do_once_start); + + void __do_once_done(bool *done, struct static_key_true *once_key, +- unsigned long *flags) ++ unsigned long *flags, struct module *mod) + __releases(once_lock) + { + *done = true; + spin_unlock_irqrestore(&once_lock, *flags); +- once_disable_jump(once_key); ++ once_disable_jump(once_key, mod); + } + EXPORT_SYMBOL(__do_once_done); +-- +2.30.2 + diff --git a/queue-5.4/ovl-fix-uninitialized-pointer-read-in-ovl_lookup_rea.patch b/queue-5.4/ovl-fix-uninitialized-pointer-read-in-ovl_lookup_rea.patch new file mode 100644 index 00000000000..6c980981d56 --- /dev/null +++ b/queue-5.4/ovl-fix-uninitialized-pointer-read-in-ovl_lookup_rea.patch @@ -0,0 +1,45 @@ +From d0b987027d253f73a74bd29a2db0466629b37d97 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 6 Aug 2021 10:03:12 +0200 +Subject: ovl: fix uninitialized pointer read in ovl_lookup_real_one() + +From: Miklos Szeredi + +[ Upstream commit 580c610429b3994e8db24418927747cf28443cde ] + +One error path can result in release_dentry_name_snapshot() being called +before "name" was initialized by take_dentry_name_snapshot(). + +Fix by moving the release_dentry_name_snapshot() to immediately after the +only use. 
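The bug class here is a cleanup label that outlives its initialization. A toy model of the fixed control flow, with hypothetical names rather than overlayfs code:

```c
struct toy_snapshot { int taken; };

static void toy_take(struct toy_snapshot *s) { s->taken = 1; }

/* Returns -1 on the bug the patch removes: releasing a snapshot that
 * was never taken (an uninitialized pointer read in the real code). */
static int toy_release(struct toy_snapshot *s)
{
    if (!s->taken)
        return -1;
    s->taken = 0;
    return 0;
}

/* After the fix: release immediately after the only use, so an error
 * return taken before toy_take() never reaches a release. */
static int toy_lookup_fixed(struct toy_snapshot *s, int fail_early)
{
    if (fail_early)
        return -1;          /* nothing taken, nothing to release */
    toy_take(s);
    /* ... lookup_one_len() consumes the name here ... */
    return toy_release(s);
}
```

Moving the release next to the use shrinks the window in which the snapshot exists and makes every path through the function either take-and-release or neither.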
+ +Reported-by: Colin Ian King +Signed-off-by: Miklos Szeredi +Signed-off-by: Sasha Levin +--- + fs/overlayfs/export.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +diff --git a/fs/overlayfs/export.c b/fs/overlayfs/export.c +index 11dd8177770d..19574ef17470 100644 +--- a/fs/overlayfs/export.c ++++ b/fs/overlayfs/export.c +@@ -395,6 +395,7 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected, + */ + take_dentry_name_snapshot(&name, real); + this = lookup_one_len(name.name.name, connected, name.name.len); ++ release_dentry_name_snapshot(&name); + err = PTR_ERR(this); + if (IS_ERR(this)) { + goto fail; +@@ -409,7 +410,6 @@ static struct dentry *ovl_lookup_real_one(struct dentry *connected, + } + + out: +- release_dentry_name_snapshot(&name); + dput(parent); + inode_unlock(dir); + return this; +-- +2.30.2 + diff --git a/queue-5.4/series b/queue-5.4/series index df41c568995..3255779d934 100644 --- a/queue-5.4/series +++ b/queue-5.4/series @@ -1 +1,6 @@ net-qrtr-fix-another-oob-read-in-qrtr_endpoint_post.patch +arc-fix-config_stackdepot.patch +netfilter-conntrack-collect-all-entries-in-one-cycle.patch +once-fix-panic-when-module-unload.patch +ovl-fix-uninitialized-pointer-read-in-ovl_lookup_rea.patch +mmc-sdhci-msm-update-the-software-timeout-value-for-.patch