From: Greg Kroah-Hartman Date: Tue, 3 Mar 2020 13:07:23 +0000 (+0100) Subject: 5.5-stable patches X-Git-Tag: v4.19.108~29 X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=085f86528f96835acb45c85c84446cb4c1c7c303;p=thirdparty%2Fkernel%2Fstable-queue.git 5.5-stable patches added patches: kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch mwifiex-delete-unused-mwifiex_get_intf_num.patch mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch namei-only-return-echild-from-follow_dotdot_rcu.patch perf-report-fix-no-libunwind-compiled-warning-break-s390-issue.patch sched-fair-optimize-select_idle_cpu.patch tipc-fix-successful-connect-but-timed-out.patch --- diff --git a/queue-5.5/kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch b/queue-5.5/kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch new file mode 100644 index 00000000000..fa3ef126324 --- /dev/null +++ b/queue-5.5/kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch @@ -0,0 +1,77 @@ +From fcfbc617547fc6d9552cb6c1c563b6a90ee98085 Mon Sep 17 00:00:00 2001 +From: Sean Christopherson +Date: Thu, 9 Jan 2020 15:56:18 -0800 +Subject: KVM: Check for a bad hva before dropping into the ghc slow path + +From: Sean Christopherson + +commit fcfbc617547fc6d9552cb6c1c563b6a90ee98085 upstream. + +When reading/writing using the guest/host cache, check for a bad hva +before checking for a NULL memslot, which triggers the slow path for +handling cross-page accesses. Because the memslot is nullified on error +by __kvm_gfn_to_hva_cache_init(), if the bad hva is encountered after +crossing into a new page, then the kvm_{read,write}_guest() slow path +could potentially write/access the first chunk prior to detecting the +bad hva. 
+ +Arguably, performing a partial access is semantically correct from an +architectural perspective, but that behavior is certainly not intended. +In the original implementation, memslot was not explicitly nullified +and therefore the partial access behavior varied based on whether the +memslot itself was null, or if the hva was simply bad. The current +behavior was introduced as a seemingly unintentional side effect in +commit f1b9dd5eb86c ("kvm: Disallow wraparound in +kvm_gfn_to_hva_cache_init"), which justified the change with "since some +callers don't check the return code from this function, it sit seems +prudent to clear ghc->memslot in the event of an error". + +Regardless of intent, the partial access is dependent on _not_ checking +the result of the cache initialization, which is arguably a bug in its +own right, at best simply weird. + +Fixes: 8f964525a121 ("KVM: Allow cross page reads and writes from cached translations.") +Cc: Jim Mattson +Cc: Andrew Honig +Signed-off-by: Sean Christopherson +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + virt/kvm/kvm_main.c | 12 ++++++------ + 1 file changed, 6 insertions(+), 6 deletions(-) + +--- a/virt/kvm/kvm_main.c ++++ b/virt/kvm/kvm_main.c +@@ -2287,12 +2287,12 @@ int kvm_write_guest_offset_cached(struct + if (slots->generation != ghc->generation) + __kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len); + +- if (unlikely(!ghc->memslot)) +- return kvm_write_guest(kvm, gpa, data, len); +- + if (kvm_is_error_hva(ghc->hva)) + return -EFAULT; + ++ if (unlikely(!ghc->memslot)) ++ return kvm_write_guest(kvm, gpa, data, len); ++ + r = __copy_to_user((void __user *)ghc->hva + offset, data, len); + if (r) + return -EFAULT; +@@ -2320,12 +2320,12 @@ int kvm_read_guest_cached(struct kvm *kv + if (slots->generation != ghc->generation) + __kvm_gfn_to_hva_cache_init(slots, ghc, ghc->gpa, ghc->len); + +- if (unlikely(!ghc->memslot)) +- return kvm_read_guest(kvm, ghc->gpa, data, len); +- + if 
(kvm_is_error_hva(ghc->hva)) + return -EFAULT; + ++ if (unlikely(!ghc->memslot)) ++ return kvm_read_guest(kvm, ghc->gpa, data, len); ++ + r = __copy_from_user(data, (void __user *)ghc->hva, len); + if (r) + return -EFAULT; diff --git a/queue-5.5/kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch b/queue-5.5/kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch new file mode 100644 index 00000000000..614af05bda5 --- /dev/null +++ b/queue-5.5/kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch @@ -0,0 +1,89 @@ +From 52918ed5fcf05d97d257f4131e19479da18f5d16 Mon Sep 17 00:00:00 2001 +From: Tom Lendacky +Date: Thu, 9 Jan 2020 17:42:16 -0600 +Subject: KVM: SVM: Override default MMIO mask if memory encryption is enabled + +From: Tom Lendacky + +commit 52918ed5fcf05d97d257f4131e19479da18f5d16 upstream. + +The KVM MMIO support uses bit 51 as the reserved bit to cause nested page +faults when a guest performs MMIO. The AMD memory encryption support uses +a CPUID function to define the encryption bit position. Given this, it is +possible that these bits can conflict. + +Use svm_hardware_setup() to override the MMIO mask if memory encryption +support is enabled. Various checks are performed to ensure that the mask +is properly defined and rsvd_bits() is used to generate the new mask (as +was done prior to the change that necessitated this patch). 
+ +Fixes: 28a1f3ac1d0c ("kvm: x86: Set highest physical address bits in non-present/reserved SPTEs") +Suggested-by: Sean Christopherson +Reviewed-by: Sean Christopherson +Signed-off-by: Tom Lendacky +Signed-off-by: Paolo Bonzini +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/kvm/svm.c | 43 +++++++++++++++++++++++++++++++++++++++++++ + 1 file changed, 43 insertions(+) + +--- a/arch/x86/kvm/svm.c ++++ b/arch/x86/kvm/svm.c +@@ -1307,6 +1307,47 @@ static void shrink_ple_window(struct kvm + } + } + ++/* ++ * The default MMIO mask is a single bit (excluding the present bit), ++ * which could conflict with the memory encryption bit. Check for ++ * memory encryption support and override the default MMIO mask if ++ * memory encryption is enabled. ++ */ ++static __init void svm_adjust_mmio_mask(void) ++{ ++ unsigned int enc_bit, mask_bit; ++ u64 msr, mask; ++ ++ /* If there is no memory encryption support, use existing mask */ ++ if (cpuid_eax(0x80000000) < 0x8000001f) ++ return; ++ ++ /* If memory encryption is not enabled, use existing mask */ ++ rdmsrl(MSR_K8_SYSCFG, msr); ++ if (!(msr & MSR_K8_SYSCFG_MEM_ENCRYPT)) ++ return; ++ ++ enc_bit = cpuid_ebx(0x8000001f) & 0x3f; ++ mask_bit = boot_cpu_data.x86_phys_bits; ++ ++ /* Increment the mask bit if it is the same as the encryption bit */ ++ if (enc_bit == mask_bit) ++ mask_bit++; ++ ++ /* ++ * If the mask bit location is below 52, then some bits above the ++ * physical addressing limit will always be reserved, so use the ++ * rsvd_bits() function to generate the mask. This mask, along with ++ * the present bit, will be used to generate a page fault with ++ * PFER.RSV = 1. ++ * ++ * If the mask bit location is 52 (or above), then clear the mask. ++ */ ++ mask = (mask_bit < 52) ? 
rsvd_bits(mask_bit, 51) | PT_PRESENT_MASK : 0; ++ ++ kvm_mmu_set_mmio_spte_mask(mask, mask, PT_WRITABLE_MASK | PT_USER_MASK); ++} ++ + static __init int svm_hardware_setup(void) + { + int cpu; +@@ -1361,6 +1402,8 @@ static __init int svm_hardware_setup(voi + } + } + ++ svm_adjust_mmio_mask(); ++ + for_each_possible_cpu(cpu) { + r = svm_cpu_init(cpu); + if (r) diff --git a/queue-5.5/mwifiex-delete-unused-mwifiex_get_intf_num.patch b/queue-5.5/mwifiex-delete-unused-mwifiex_get_intf_num.patch new file mode 100644 index 00000000000..47ec61c2428 --- /dev/null +++ b/queue-5.5/mwifiex-delete-unused-mwifiex_get_intf_num.patch @@ -0,0 +1,47 @@ +From 1c9f329b084b7b8ea6d60d91a202e884cdcf6aae Mon Sep 17 00:00:00 2001 +From: Brian Norris +Date: Mon, 9 Dec 2019 16:39:11 -0800 +Subject: mwifiex: delete unused mwifiex_get_intf_num() + +From: Brian Norris + +commit 1c9f329b084b7b8ea6d60d91a202e884cdcf6aae upstream. + +Commit 7afb94da3cd8 ("mwifiex: update set_mac_address logic") fixed the +only user of this function, partly because the author seems to have +noticed that, as written, it's on the borderline between highly +misleading and buggy. + +Anyway, no sense in keeping dead code around: let's drop it. + +Fixes: 7afb94da3cd8 ("mwifiex: update set_mac_address logic") +Signed-off-by: Brian Norris +Signed-off-by: Kalle Valo +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/net/wireless/marvell/mwifiex/main.h | 13 ------------- + 1 file changed, 13 deletions(-) + +--- a/drivers/net/wireless/marvell/mwifiex/main.h ++++ b/drivers/net/wireless/marvell/mwifiex/main.h +@@ -1295,19 +1295,6 @@ mwifiex_copy_rates(u8 *dest, u32 pos, u8 + return pos; + } + +-/* This function return interface number with the same bss_type. 
+- */ +-static inline u8 +-mwifiex_get_intf_num(struct mwifiex_adapter *adapter, u8 bss_type) +-{ +- u8 i, num = 0; +- +- for (i = 0; i < adapter->priv_num; i++) +- if (adapter->priv[i] && adapter->priv[i]->bss_type == bss_type) +- num++; +- return num; +-} +- + /* + * This function returns the correct private structure pointer based + * upon the BSS type and BSS number. diff --git a/queue-5.5/mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch b/queue-5.5/mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch new file mode 100644 index 00000000000..fdc09301a0c --- /dev/null +++ b/queue-5.5/mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch @@ -0,0 +1,225 @@ +From 70e5b8f445fd27fde0c5583460e82539a7242424 Mon Sep 17 00:00:00 2001 +From: Brian Norris +Date: Fri, 6 Dec 2019 11:45:35 -0800 +Subject: mwifiex: drop most magic numbers from mwifiex_process_tdls_action_frame() + +From: Brian Norris + +commit 70e5b8f445fd27fde0c5583460e82539a7242424 upstream. + +Before commit 1e58252e334d ("mwifiex: Fix heap overflow in +mmwifiex_process_tdls_action_frame()"), +mwifiex_process_tdls_action_frame() already had too many magic numbers. +But this commit just added a ton more, in the name of checking for +buffer overflows. That seems like a really bad idea. + +Let's make these magic numbers a little less magic, by +(a) factoring out 'pos[1]' as 'ie_len' +(b) using 'sizeof' on the appropriate source or destination fields where + possible, instead of bare numbers +(c) dropping redundant checks, per below. 
+ +Regarding redundant checks: the beginning of the loop has this: + + if (pos + 2 + pos[1] > end) + break; + +but then individual 'case's include stuff like this: + + if (pos > end - 3) + return; + if (pos[1] != 1) + return; + +Note that the second 'return' (validating the length, pos[1]) combined +with the above condition (ensuring 'pos + 2 + length' doesn't exceed +'end'), makes the first 'return' (whose 'if' can be reworded as 'pos > +end - pos[1] - 2') redundant. Rather than unwind the magic numbers +there, just drop those conditions. + +Fixes: 1e58252e334d ("mwifiex: Fix heap overflow in mmwifiex_process_tdls_action_frame()") +Signed-off-by: Brian Norris +Signed-off-by: Kalle Valo +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/net/wireless/marvell/mwifiex/tdls.c | 75 ++++++++++------------------ + 1 file changed, 28 insertions(+), 47 deletions(-) + +--- a/drivers/net/wireless/marvell/mwifiex/tdls.c ++++ b/drivers/net/wireless/marvell/mwifiex/tdls.c +@@ -894,7 +894,7 @@ void mwifiex_process_tdls_action_frame(s + u8 *peer, *pos, *end; + u8 i, action, basic; + u16 cap = 0; +- int ie_len = 0; ++ int ies_len = 0; + + if (len < (sizeof(struct ethhdr) + 3)) + return; +@@ -916,7 +916,7 @@ void mwifiex_process_tdls_action_frame(s + pos = buf + sizeof(struct ethhdr) + 4; + /* payload 1+ category 1 + action 1 + dialog 1 */ + cap = get_unaligned_le16(pos); +- ie_len = len - sizeof(struct ethhdr) - TDLS_REQ_FIX_LEN; ++ ies_len = len - sizeof(struct ethhdr) - TDLS_REQ_FIX_LEN; + pos += 2; + break; + +@@ -926,7 +926,7 @@ void mwifiex_process_tdls_action_frame(s + /* payload 1+ category 1 + action 1 + dialog 1 + status code 2*/ + pos = buf + sizeof(struct ethhdr) + 6; + cap = get_unaligned_le16(pos); +- ie_len = len - sizeof(struct ethhdr) - TDLS_RESP_FIX_LEN; ++ ies_len = len - sizeof(struct ethhdr) - TDLS_RESP_FIX_LEN; + pos += 2; + break; + +@@ -934,7 +934,7 @@ void mwifiex_process_tdls_action_frame(s + if (len < (sizeof(struct ethhdr) + TDLS_CONFIRM_FIX_LEN)) + 
return; + pos = buf + sizeof(struct ethhdr) + TDLS_CONFIRM_FIX_LEN; +- ie_len = len - sizeof(struct ethhdr) - TDLS_CONFIRM_FIX_LEN; ++ ies_len = len - sizeof(struct ethhdr) - TDLS_CONFIRM_FIX_LEN; + break; + default: + mwifiex_dbg(priv->adapter, ERROR, "Unknown TDLS frame type.\n"); +@@ -947,33 +947,33 @@ void mwifiex_process_tdls_action_frame(s + + sta_ptr->tdls_cap.capab = cpu_to_le16(cap); + +- for (end = pos + ie_len; pos + 1 < end; pos += 2 + pos[1]) { +- if (pos + 2 + pos[1] > end) ++ for (end = pos + ies_len; pos + 1 < end; pos += 2 + pos[1]) { ++ u8 ie_len = pos[1]; ++ ++ if (pos + 2 + ie_len > end) + break; + + switch (*pos) { + case WLAN_EID_SUPP_RATES: +- if (pos[1] > 32) ++ if (ie_len > sizeof(sta_ptr->tdls_cap.rates)) + return; +- sta_ptr->tdls_cap.rates_len = pos[1]; +- for (i = 0; i < pos[1]; i++) ++ sta_ptr->tdls_cap.rates_len = ie_len; ++ for (i = 0; i < ie_len; i++) + sta_ptr->tdls_cap.rates[i] = pos[i + 2]; + break; + + case WLAN_EID_EXT_SUPP_RATES: +- if (pos[1] > 32) ++ if (ie_len > sizeof(sta_ptr->tdls_cap.rates)) + return; + basic = sta_ptr->tdls_cap.rates_len; +- if (pos[1] > 32 - basic) ++ if (ie_len > sizeof(sta_ptr->tdls_cap.rates) - basic) + return; +- for (i = 0; i < pos[1]; i++) ++ for (i = 0; i < ie_len; i++) + sta_ptr->tdls_cap.rates[basic + i] = pos[i + 2]; +- sta_ptr->tdls_cap.rates_len += pos[1]; ++ sta_ptr->tdls_cap.rates_len += ie_len; + break; + case WLAN_EID_HT_CAPABILITY: +- if (pos > end - sizeof(struct ieee80211_ht_cap) - 2) +- return; +- if (pos[1] != sizeof(struct ieee80211_ht_cap)) ++ if (ie_len != sizeof(struct ieee80211_ht_cap)) + return; + /* copy the ie's value into ht_capb*/ + memcpy((u8 *)&sta_ptr->tdls_cap.ht_capb, pos + 2, +@@ -981,59 +981,45 @@ void mwifiex_process_tdls_action_frame(s + sta_ptr->is_11n_enabled = 1; + break; + case WLAN_EID_HT_OPERATION: +- if (pos > end - +- sizeof(struct ieee80211_ht_operation) - 2) +- return; +- if (pos[1] != sizeof(struct ieee80211_ht_operation)) ++ if (ie_len != 
sizeof(struct ieee80211_ht_operation)) + return; + /* copy the ie's value into ht_oper*/ + memcpy(&sta_ptr->tdls_cap.ht_oper, pos + 2, + sizeof(struct ieee80211_ht_operation)); + break; + case WLAN_EID_BSS_COEX_2040: +- if (pos > end - 3) +- return; +- if (pos[1] != 1) ++ if (ie_len != sizeof(pos[2])) + return; + sta_ptr->tdls_cap.coex_2040 = pos[2]; + break; + case WLAN_EID_EXT_CAPABILITY: +- if (pos > end - sizeof(struct ieee_types_header)) +- return; +- if (pos[1] < sizeof(struct ieee_types_header)) ++ if (ie_len < sizeof(struct ieee_types_header)) + return; +- if (pos[1] > 8) ++ if (ie_len > 8) + return; + memcpy((u8 *)&sta_ptr->tdls_cap.extcap, pos, + sizeof(struct ieee_types_header) + +- min_t(u8, pos[1], 8)); ++ min_t(u8, ie_len, 8)); + break; + case WLAN_EID_RSN: +- if (pos > end - sizeof(struct ieee_types_header)) ++ if (ie_len < sizeof(struct ieee_types_header)) + return; +- if (pos[1] < sizeof(struct ieee_types_header)) +- return; +- if (pos[1] > IEEE_MAX_IE_SIZE - ++ if (ie_len > IEEE_MAX_IE_SIZE - + sizeof(struct ieee_types_header)) + return; + memcpy((u8 *)&sta_ptr->tdls_cap.rsn_ie, pos, + sizeof(struct ieee_types_header) + +- min_t(u8, pos[1], IEEE_MAX_IE_SIZE - ++ min_t(u8, ie_len, IEEE_MAX_IE_SIZE - + sizeof(struct ieee_types_header))); + break; + case WLAN_EID_QOS_CAPA: +- if (pos > end - 3) +- return; +- if (pos[1] != 1) ++ if (ie_len != sizeof(pos[2])) + return; + sta_ptr->tdls_cap.qos_info = pos[2]; + break; + case WLAN_EID_VHT_OPERATION: + if (priv->adapter->is_hw_11ac_capable) { +- if (pos > end - +- sizeof(struct ieee80211_vht_operation) - 2) +- return; +- if (pos[1] != ++ if (ie_len != + sizeof(struct ieee80211_vht_operation)) + return; + /* copy the ie's value into vhtoper*/ +@@ -1043,10 +1029,7 @@ void mwifiex_process_tdls_action_frame(s + break; + case WLAN_EID_VHT_CAPABILITY: + if (priv->adapter->is_hw_11ac_capable) { +- if (pos > end - +- sizeof(struct ieee80211_vht_cap) - 2) +- return; +- if (pos[1] != sizeof(struct 
ieee80211_vht_cap)) ++ if (ie_len != sizeof(struct ieee80211_vht_cap)) + return; + /* copy the ie's value into vhtcap*/ + memcpy((u8 *)&sta_ptr->tdls_cap.vhtcap, pos + 2, +@@ -1056,9 +1039,7 @@ void mwifiex_process_tdls_action_frame(s + break; + case WLAN_EID_AID: + if (priv->adapter->is_hw_11ac_capable) { +- if (pos > end - 4) +- return; +- if (pos[1] != 2) ++ if (ie_len != sizeof(u16)) + return; + sta_ptr->tdls_cap.aid = + get_unaligned_le16((pos + 2)); diff --git a/queue-5.5/namei-only-return-echild-from-follow_dotdot_rcu.patch b/queue-5.5/namei-only-return-echild-from-follow_dotdot_rcu.patch new file mode 100644 index 00000000000..808000de6e1 --- /dev/null +++ b/queue-5.5/namei-only-return-echild-from-follow_dotdot_rcu.patch @@ -0,0 +1,41 @@ +From 2b98149c2377bff12be5dd3ce02ae0506e2dd613 Mon Sep 17 00:00:00 2001 +From: Aleksa Sarai +Date: Sat, 7 Dec 2019 01:13:26 +1100 +Subject: namei: only return -ECHILD from follow_dotdot_rcu() + +From: Aleksa Sarai + +commit 2b98149c2377bff12be5dd3ce02ae0506e2dd613 upstream. + +It's over-zealous to return hard errors under RCU-walk here, given that +a REF-walk will be triggered for all other cases handling ".." under +RCU. + +The original purpose of this check was to ensure that if a rename occurs +such that a directory is moved outside of the bind-mount which the +resolution started in, it would be detected and blocked to avoid being +able to mess with paths outside of the bind-mount. However, triggering a +new REF-walk is just as effective a solution. + +Cc: "Eric W. 
Biederman" +Fixes: 397d425dc26d ("vfs: Test for and handle paths that are unreachable from their mnt_root") +Suggested-by: Al Viro +Signed-off-by: Aleksa Sarai +Signed-off-by: Al Viro +Signed-off-by: Greg Kroah-Hartman + +--- + fs/namei.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/fs/namei.c ++++ b/fs/namei.c +@@ -1367,7 +1367,7 @@ static int follow_dotdot_rcu(struct name + nd->path.dentry = parent; + nd->seq = seq; + if (unlikely(!path_connected(&nd->path))) +- return -ENOENT; ++ return -ECHILD; + break; + } else { + struct mount *mnt = real_mount(nd->path.mnt); diff --git a/queue-5.5/perf-report-fix-no-libunwind-compiled-warning-break-s390-issue.patch b/queue-5.5/perf-report-fix-no-libunwind-compiled-warning-break-s390-issue.patch new file mode 100644 index 00000000000..af302b7fb86 --- /dev/null +++ b/queue-5.5/perf-report-fix-no-libunwind-compiled-warning-break-s390-issue.patch @@ -0,0 +1,54 @@ +From c3314a74f86dc00827e0945c8e5039fc3aebaa3c Mon Sep 17 00:00:00 2001 +From: Jin Yao +Date: Wed, 8 Jan 2020 03:17:45 +0800 +Subject: perf report: Fix no libunwind compiled warning break s390 issue + +From: Jin Yao + +commit c3314a74f86dc00827e0945c8e5039fc3aebaa3c upstream. + +Commit 800d3f561659 ("perf report: Add warning when libunwind not +compiled in") breaks the s390 platform. S390 uses libdw-dwarf-unwind for +call chain unwinding and had no support for libunwind. + +So the warning "Please install libunwind development packages during the +perf build." caused the confusion even if the call-graph is displayed +correctly. + +This patch adds checking for HAVE_DWARF_SUPPORT, which is set when +libdw-dwarf-unwind is compiled in. 
+ +Fixes: 800d3f561659 ("perf report: Add warning when libunwind not compiled in") +Signed-off-by: Jin Yao +Reviewed-by: Thomas Richter +Tested-by: Thomas Richter +Acked-by: Jiri Olsa +Cc: Alexander Shishkin +Cc: Andi Kleen +Cc: Jin Yao +Cc: Kan Liang +Cc: Peter Zijlstra +Link: http://lore.kernel.org/lkml/20200107191745.18415-1-yao.jin@linux.intel.com +Signed-off-by: Arnaldo Carvalho de Melo +Signed-off-by: Greg Kroah-Hartman + +--- + tools/perf/builtin-report.c | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +--- a/tools/perf/builtin-report.c ++++ b/tools/perf/builtin-report.c +@@ -412,10 +412,10 @@ static int report__setup_sample_type(str + PERF_SAMPLE_BRANCH_ANY)) + rep->nonany_branch_mode = true; + +-#ifndef HAVE_LIBUNWIND_SUPPORT ++#if !defined(HAVE_LIBUNWIND_SUPPORT) && !defined(HAVE_DWARF_SUPPORT) + if (dwarf_callchain_users) { +- ui__warning("Please install libunwind development packages " +- "during the perf build.\n"); ++ ui__warning("Please install libunwind or libdw " ++ "development packages during the perf build.\n"); + } + #endif + diff --git a/queue-5.5/sched-fair-optimize-select_idle_cpu.patch b/queue-5.5/sched-fair-optimize-select_idle_cpu.patch new file mode 100644 index 00000000000..77208b8415b --- --- /dev/null +++ b/queue-5.5/sched-fair-optimize-select_idle_cpu.patch @@ -0,0 +1,62 @@ +From 60588bfa223ff675b95f866249f90616613fbe31 Mon Sep 17 00:00:00 2001 +From: Cheng Jian +Date: Fri, 13 Dec 2019 10:45:30 +0800 +Subject: sched/fair: Optimize select_idle_cpu + +From: Cheng Jian + +commit 60588bfa223ff675b95f866249f90616613fbe31 upstream. + +select_idle_cpu() will scan the LLC domain for idle CPUs, +which is always expensive. So the next commit: + + 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()") + +introduces a way to limit how many CPUs we scan. + +But it consumes some CPUs out of 'nr' that are not allowed +for the task and thus wastes our attempts. 
The function +always returns nr_cpumask_bits, and we can't find a CPU +on which our task is allowed to run. + +Cpumask may be too big; similar to select_idle_core(), use +per_cpu_ptr 'select_idle_mask' to prevent stack overflow. + +Fixes: 1ad3aaf3fcd2 ("sched/core: Implement new approach to scale select_idle_cpu()") +Signed-off-by: Cheng Jian +Signed-off-by: Peter Zijlstra (Intel) +Reviewed-by: Srikar Dronamraju +Reviewed-by: Vincent Guittot +Reviewed-by: Valentin Schneider +Link: https://lkml.kernel.org/r/20191213024530.28052-1-cj.chengjian@huawei.com +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/sched/fair.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +--- a/kernel/sched/fair.c ++++ b/kernel/sched/fair.c +@@ -5828,6 +5828,7 @@ static inline int select_idle_smt(struct + */ + static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int target) + { ++ struct cpumask *cpus = this_cpu_cpumask_var_ptr(select_idle_mask); + struct sched_domain *this_sd; + u64 avg_cost, avg_idle; + u64 time, cost; +@@ -5859,11 +5860,11 @@ static int select_idle_cpu(struct task_s + + time = cpu_clock(this); + +- for_each_cpu_wrap(cpu, sched_domain_span(sd), target) { ++ cpumask_and(cpus, sched_domain_span(sd), p->cpus_ptr); ++ ++ for_each_cpu_wrap(cpu, cpus, target) { + if (!--nr) + return si_cpu; +- if (!cpumask_test_cpu(cpu, p->cpus_ptr)) +- continue; + if (available_idle_cpu(cpu)) + break; + if (si_cpu == -1 && sched_idle_cpu(cpu)) diff --git a/queue-5.5/series b/queue-5.5/series index 2296a361302..4d52a583ff0 100644 --- a/queue-5.5/series +++ b/queue-5.5/series @@ -133,3 +133,11 @@ net-atlantic-fix-out-of-range-usage-of-active_vlans-array.patch selftests-install-settings-files-to-fix-timeout-failures.patch net-smc-no-peer-id-in-clc-decline-for-smcd.patch net-ena-make-ena-rxfh-support-eth_rss_hash_no_change.patch +tipc-fix-successful-connect-but-timed-out.patch +namei-only-return-echild-from-follow_dotdot_rcu.patch 
+mwifiex-drop-most-magic-numbers-from-mwifiex_process_tdls_action_frame.patch +mwifiex-delete-unused-mwifiex_get_intf_num.patch +perf-report-fix-no-libunwind-compiled-warning-break-s390-issue.patch +kvm-svm-override-default-mmio-mask-if-memory-encryption-is-enabled.patch +kvm-check-for-a-bad-hva-before-dropping-into-the-ghc-slow-path.patch +sched-fair-optimize-select_idle_cpu.patch diff --git a/queue-5.5/tipc-fix-successful-connect-but-timed-out.patch b/queue-5.5/tipc-fix-successful-connect-but-timed-out.patch new file mode 100644 index 00000000000..f80a0863f7e --- /dev/null +++ b/queue-5.5/tipc-fix-successful-connect-but-timed-out.patch @@ -0,0 +1,84 @@ +From 5391a87751a164b3194864126f3b016038abc9fe Mon Sep 17 00:00:00 2001 +From: Tuong Lien +Date: Mon, 10 Feb 2020 15:35:44 +0700 +Subject: tipc: fix successful connect() but timed out +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Tuong Lien + +commit 5391a87751a164b3194864126f3b016038abc9fe upstream. + +In commit 9546a0b7ce00 ("tipc: fix wrong connect() return code"), we +fixed the issue with the 'connect()' that returns zero even though the +connecting has failed by waiting for the connection to be 'ESTABLISHED' +really. However, the approach has one drawback in conjunction with our +'lightweight' connection setup mechanism that the following scenario +can happen: + + (server) (client) + + +- accept()| | wait_for_conn() + | | |connect() -------+ + | |<-------[SYN]---------| > sleeping + | | *CONNECTING | + |--------->*ESTABLISHED | | + |--------[ACK]-------->*ESTABLISHED > wakeup() + send()|--------[DATA]------->|\ > wakeup() + send()|--------[DATA]------->| | > wakeup() + . . . . |-> recvq . + . . . . | . + send()|--------[DATA]------->|/ > wakeup() + close()|--------[FIN]-------->*DISCONNECTING | + *DISCONNECTING | | + | ~~~~~~~~~~~~~~~~~~> schedule() + | wait again + . + . 
+ | ETIMEDOUT + +Upon the receipt of the server 'ACK', the client becomes 'ESTABLISHED' +and the 'wait_for_conn()' process is woken up but not run. Meanwhile, +the server starts to send a number of data following by a 'close()' +shortly without waiting any response from the client, which then forces +the client socket to be 'DISCONNECTING' immediately. When the wait +process is switched to be running, it continues to wait until the timer +expires because of the unexpected socket state. The client 'connect()' +will finally get ‘-ETIMEDOUT’ and force to release the socket whereas +there remains the messages in its receive queue. + +Obviously the issue would not happen if the server had some delay prior +to its 'close()' (or the number of 'DATA' messages is large enough), +but any kind of delay would make the connection setup/shutdown "heavy". +We solve this by simply allowing the 'connect()' returns zero in this +particular case. The socket is already 'DISCONNECTING', so any further +write will get '-EPIPE' but the socket is still able to read the +messages existing in its receive queue. + +Note: This solution doesn't break the previous one as it deals with a +different situation that the socket state is 'DISCONNECTING' but has no +error (i.e. sk->sk_err = 0). + +Fixes: 9546a0b7ce00 ("tipc: fix wrong connect() return code") +Acked-by: Ying Xue +Acked-by: Jon Maloy +Signed-off-by: Tuong Lien +Signed-off-by: David S. Miller +Signed-off-by: Greg Kroah-Hartman + +--- + net/tipc/socket.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/net/tipc/socket.c ++++ b/net/tipc/socket.c +@@ -2441,6 +2441,8 @@ static int tipc_wait_for_connect(struct + return -ETIMEDOUT; + if (signal_pending(current)) + return sock_intr_errno(*timeo_p); ++ if (sk->sk_state == TIPC_DISCONNECTING) ++ break; + + add_wait_queue(sk_sleep(sk), &wait); + done = sk_wait_event(sk, timeo_p, tipc_sk_connected(sk),