From: Greg Kroah-Hartman
Date: Sun, 11 Nov 2018 01:49:38 +0000 (-0800)
Subject: 4.19-stable patches
X-Git-Tag: v4.19.2~63
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=e3195dddca2f4abc8810342506f0a52d5bfce863;p=thirdparty%2Fkernel%2Fstable-queue.git

4.19-stable patches

added patches:
      arm-dts-dra7-fix-up-unaligned-access-setting-for-pcie-ep.patch
      arm-dts-exynos-convert-exynos5250.dtsi-to-opp-v2-bindings.patch
      arm-dts-exynos-mark-1-ghz-cpu-opp-as-suspend-opp-on-exynos5250.patch
      asoc-intel-skylake-add-missing-break-in-skl_tplg_get_token.patch
      asoc-sta32x-set-component-pointer-in-private-struct.patch
      crypto-aegis-generic-fix-for-big-endian-systems.patch
      crypto-aesni-don-t-use-gfp_atomic-allocation-if-the-request-doesn-t-cross-a-page-in-gcm.patch
      crypto-lrw-fix-out-of-bounds-access-on-counter-overflow.patch
      crypto-morus-generic-fix-for-big-endian-systems.patch
      crypto-speck-remove-speck.patch
      crypto-tcrypt-fix-ghash-generic-speed-test.patch
      dmaengine-ppc4xx-fix-off-by-one-build-failure.patch
      drivers-hv-kvp-fix-two-this-statement-may-fall-through-warnings.patch
      edac-amd64-add-family-17h-models-10h-2fh-support.patch
      edac-i7core-sb-skx-_edac-fix-uncorrected-error-counting.patch
      edac-skx_edac-fix-logical-channel-intermediate-decoding.patch
      ext4-fix-ext4_ioc_swap_boot.patch
      ext4-fix-setattr-project-check-in-fssetxattr-ioctl.patch
      ext4-fix-use-after-free-race-in-ext4_remount-s-error-path.patch
      ext4-initialize-retries-variable-in-ext4_da_write_inline_data_begin.patch
      ext4-propagate-error-from-dquot_initialize-in-ext4_ioc_fssetxattr.patch
      f2fs-fix-missing-up_read.patch
      f2fs-fix-to-account-io-correctly.patch
      f2fs-fix-to-recover-cold-bit-of-inode-block-during-por.patch
      genirq-fix-race-on-spurious-interrupt-detection.patch
      gfs2_meta-mount-can-get-null-dev_name.patch
      hid-hiddev-fix-potential-spectre-v1.patch
      hid-wacom-work-around-hid-descriptor-bug-in-dtk-2451-and-dth-2452.patch
      hugetlbfs-dirty-pages-as-they-are-added-to-pagecache.patch
      ib-mlx5-fix-mr-cache-initialization.patch
      ib-rxe-revise-the-ib_wr_opcode-enum.patch
      iio-ad5064-fix-regulator-handling.patch
      iio-adc-at91-fix-acking-drdy-irq-on-simple-conversions.patch
      iio-adc-at91-fix-wrong-channel-number-in-triggered-buffer-mode.patch
      iio-adc-imx25-gcq-fix-leak-of-device_node-in-mx25_gcq_setup_cfgs.patch
      ima-fix-showing-large-violations-or-runtime_measurements_count.patch
      ima-open-a-new-file-instance-if-no-read-permissions.patch
      iwlwifi-mvm-check-return-value-of-rs_rate_from_ucode_rate.patch
      jbd2-fix-use-after-free-in-jbd2_log_do_checkpoint.patch
      kbuild-fix-kernel-bounds.c-w-1-warning.patch
      kvm-arm-arm64-ensure-only-thp-is-candidate-for-adjustment.patch
      kvm-arm64-fix-caching-of-host-mdcr_el2-value.patch
      libertas-don-t-set-urb_zero_packet-on-in-usb-transfer.patch
      libnvdimm-hold-reference-on-parent-while-scheduling-async-init.patch
      libnvdimm-pmem-fix-badblocks-population-for-raw-namespaces.patch
      libnvdimm-region-fail-badblocks-listing-for-inactive-regions.patch
      mm-hmm-fix-race-between-hmm_mirror_unregister-and-mmu_notifier-callback.patch
      mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch
      mm-rmap-map_pte-was-not-handling-private-zone_device-page-properly.patch
      mt76-mt76x2-fix-multi-interface-beacon-configuration.patch
      net-ipv4-defensive-cipso-option-parsing.patch
      opp-free-opp-table-properly-on-performance-state-irregularities.patch
      pci-add-device-ids-for-intel-gpu-spurious-interrupt-quirk.patch
      pci-aspm-fix-link_state-teardown-on-device-removal.patch
      printk-fix-panic-caused-by-passing-log_buf_len-to-command-line.patch
      revert-f2fs-fix-to-clear-pg_checked-flag-in-set_page_dirty.patch
      scsi-sched-wait-add-wait_event_lock_irq_timeout-for-task_uninterruptible-usage.patch
      scsi-target-fix-target_wait_for_sess_cmds-breakage-with-active-signals.patch
      selinux-fix-mounting-of-cgroup2-under-older-policies.patch
      signal-genwqe-fix-sending-of-sigkill.patch
      signal-guard-against-negative-signal-numbers-in-copy_siginfo_from_user32.patch
      smb3-allow-stats-which-track-session-and-share-reconnects-to-be-reset.patch
      smb3-do-not-attempt-cifs-operation-in-smb3-query-info-error-path.patch
      smb3-on-kerberos-mount-if-server-doesn-t-specify-auth-type-use-krb5.patch
      tpm-fix-response-size-validation-in-tpm_get_random.patch
      tpm-restore-functionality-to-xen-vtpm-driver.patch
      usb-gadget-udc-renesas_usb3-fix-b-device-mode-for-workaround.patch
      usb-typec-tcpm-fix-apdo-pps-order-checking-to-be-based-on-voltage.patch
      usbip-vudc-bug-kmalloc-2048-not-tainted-poison-overwritten.patch
      userfaultfd-disable-irqs-when-taking-the-waitqueue-lock.patch
      w1-omap-hdq-fix-missing-bus-unregister-at-removal.patch
      xen-balloon-support-xend-based-toolstack.patch
      xen-blkfront-avoid-null-blkfront_info-dereference-on-device-removal.patch
      xen-fix-race-in-xen_qlock_wait.patch
      xen-make-xen_qlock_wait-nestable.patch
      xen-pvh-don-t-try-to-unplug-emulated-devices.patch
      xen-pvh-increase-early-stack-size.patch
      xen-swiotlb-use-actually-allocated-size-on-check-physical-continuous.patch

---

diff --git a/queue-4.19/arm-dts-dra7-fix-up-unaligned-access-setting-for-pcie-ep.patch b/queue-4.19/arm-dts-dra7-fix-up-unaligned-access-setting-for-pcie-ep.patch
new file mode 100644
index 00000000000..79677f4c469
--- /dev/null
+++ b/queue-4.19/arm-dts-dra7-fix-up-unaligned-access-setting-for-pcie-ep.patch
@@ -0,0 +1,35 @@
+From 6d0af44a82be87c13f2320821e9fbb8b8cf5a56f Mon Sep 17 00:00:00 2001
+From: Vignesh R
+Date: Tue, 25 Sep 2018 10:51:51 +0530
+Subject: ARM: dts: dra7: Fix up unaligned access setting for PCIe EP
+
+From: Vignesh R
+
+commit 6d0af44a82be87c13f2320821e9fbb8b8cf5a56f upstream.
+
+Bit positions of PCIE_SS1_AXI2OCP_LEGACY_MODE_ENABLE and
+PCIE_SS2_AXI2OCP_LEGACY_MODE_ENABLE in CTRL_CORE_SMA_SW_7 are
+incorrectly documented in the TRM. In fact, the bit positions are
+swapped. Update the DT bindings for PCIe EP to reflect the same.
+
+Fixes: d23f3839fe97 ("ARM: dts: DRA7: Add pcie1 dt node for EP mode")
+Cc: stable@vger.kernel.org
+Signed-off-by: Vignesh R
+Signed-off-by: Tony Lindgren
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/arm/boot/dts/dra7.dtsi | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/arch/arm/boot/dts/dra7.dtsi
++++ b/arch/arm/boot/dts/dra7.dtsi
+@@ -354,7 +354,7 @@
+ 			ti,hwmods = "pcie1";
+ 			phys = <&pcie1_phy>;
+ 			phy-names = "pcie-phy0";
+-			ti,syscon-unaligned-access = <&scm_conf1 0x14 2>;
++			ti,syscon-unaligned-access = <&scm_conf1 0x14 1>;
+ 			status = "disabled";
+ 		};
+ 	};
diff --git a/queue-4.19/arm-dts-exynos-convert-exynos5250.dtsi-to-opp-v2-bindings.patch b/queue-4.19/arm-dts-exynos-convert-exynos5250.dtsi-to-opp-v2-bindings.patch
new file mode 100644
index 00000000000..818f8e06058
--- /dev/null
+++ b/queue-4.19/arm-dts-exynos-convert-exynos5250.dtsi-to-opp-v2-bindings.patch
@@ -0,0 +1,178 @@
+From eb9e16d8573e243f8175647f851eb5085dbe97a4 Mon Sep 17 00:00:00 2001
+From: Marek Szyprowski
+Date: Tue, 7 Aug 2018 12:48:48 +0200
+Subject: ARM: dts: exynos: Convert exynos5250.dtsi to opp-v2 bindings
+
+From: Marek Szyprowski
+
+commit eb9e16d8573e243f8175647f851eb5085dbe97a4 upstream.
+
+Convert Exynos5250 to OPP-v2 bindings. This is a preparation to add proper
+support for suspend operation point, which cannot be marked in opp-v1.
+ +Cc: # 4.3.x: cd6f55457eb4: ARM: dts: exynos: Remove "cooling-{min|max}-level" for CPU nodes +Cc: # 4.3.x: 672f33198bee: arm: dts: exynos: Add missing cooling device properties for CPUs +Cc: # 4.3.x +Signed-off-by: Marek Szyprowski +Reviewed-by: Chanwoo Choi +Acked-by: Bartlomiej Zolnierkiewicz +Signed-off-by: Krzysztof Kozlowski +Signed-off-by: Greg Kroah-Hartman + +--- + arch/arm/boot/dts/exynos5250.dtsi | 130 +++++++++++++++++++++++++------------- + 1 file changed, 88 insertions(+), 42 deletions(-) + +--- a/arch/arm/boot/dts/exynos5250.dtsi ++++ b/arch/arm/boot/dts/exynos5250.dtsi +@@ -54,62 +54,108 @@ + device_type = "cpu"; + compatible = "arm,cortex-a15"; + reg = <0>; +- clock-frequency = <1700000000>; + clocks = <&clock CLK_ARM_CLK>; + clock-names = "cpu"; +- clock-latency = <140000>; +- +- operating-points = < +- 1700000 1300000 +- 1600000 1250000 +- 1500000 1225000 +- 1400000 1200000 +- 1300000 1150000 +- 1200000 1125000 +- 1100000 1100000 +- 1000000 1075000 +- 900000 1050000 +- 800000 1025000 +- 700000 1012500 +- 600000 1000000 +- 500000 975000 +- 400000 950000 +- 300000 937500 +- 200000 925000 +- >; ++ operating-points-v2 = <&cpu0_opp_table>; + #cooling-cells = <2>; /* min followed by max */ + }; + cpu@1 { + device_type = "cpu"; + compatible = "arm,cortex-a15"; + reg = <1>; +- clock-frequency = <1700000000>; + clocks = <&clock CLK_ARM_CLK>; + clock-names = "cpu"; +- clock-latency = <140000>; +- +- operating-points = < +- 1700000 1300000 +- 1600000 1250000 +- 1500000 1225000 +- 1400000 1200000 +- 1300000 1150000 +- 1200000 1125000 +- 1100000 1100000 +- 1000000 1075000 +- 900000 1050000 +- 800000 1025000 +- 700000 1012500 +- 600000 1000000 +- 500000 975000 +- 400000 950000 +- 300000 937500 +- 200000 925000 +- >; ++ operating-points-v2 = <&cpu0_opp_table>; + #cooling-cells = <2>; /* min followed by max */ + }; + }; + ++ cpu0_opp_table: opp_table0 { ++ compatible = "operating-points-v2"; ++ opp-shared; ++ ++ opp-200000000 { ++ opp-hz = /bits/ 64 
<200000000>; ++ opp-microvolt = <925000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-300000000 { ++ opp-hz = /bits/ 64 <300000000>; ++ opp-microvolt = <937500>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-400000000 { ++ opp-hz = /bits/ 64 <400000000>; ++ opp-microvolt = <950000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-500000000 { ++ opp-hz = /bits/ 64 <500000000>; ++ opp-microvolt = <975000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-600000000 { ++ opp-hz = /bits/ 64 <600000000>; ++ opp-microvolt = <1000000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-700000000 { ++ opp-hz = /bits/ 64 <700000000>; ++ opp-microvolt = <1012500>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-800000000 { ++ opp-hz = /bits/ 64 <800000000>; ++ opp-microvolt = <1025000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-900000000 { ++ opp-hz = /bits/ 64 <900000000>; ++ opp-microvolt = <1050000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1000000000 { ++ opp-hz = /bits/ 64 <1000000000>; ++ opp-microvolt = <1075000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1100000000 { ++ opp-hz = /bits/ 64 <1100000000>; ++ opp-microvolt = <1100000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1200000000 { ++ opp-hz = /bits/ 64 <1200000000>; ++ opp-microvolt = <1125000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1300000000 { ++ opp-hz = /bits/ 64 <1300000000>; ++ opp-microvolt = <1150000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1400000000 { ++ opp-hz = /bits/ 64 <1400000000>; ++ opp-microvolt = <1200000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1500000000 { ++ opp-hz = /bits/ 64 <1500000000>; ++ opp-microvolt = <1225000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1600000000 { ++ opp-hz = /bits/ 64 <1600000000>; ++ opp-microvolt = <1250000>; ++ clock-latency-ns = <140000>; ++ }; ++ opp-1700000000 { ++ opp-hz = /bits/ 64 <1700000000>; ++ opp-microvolt = <1300000>; ++ clock-latency-ns = <140000>; ++ }; ++ }; ++ + soc: soc { + sysram@2020000 { + compatible = "mmio-sram"; diff 
--git a/queue-4.19/arm-dts-exynos-mark-1-ghz-cpu-opp-as-suspend-opp-on-exynos5250.patch b/queue-4.19/arm-dts-exynos-mark-1-ghz-cpu-opp-as-suspend-opp-on-exynos5250.patch
new file mode 100644
index 00000000000..37fb59ce0ba
--- /dev/null
+++ b/queue-4.19/arm-dts-exynos-mark-1-ghz-cpu-opp-as-suspend-opp-on-exynos5250.patch
@@ -0,0 +1,37 @@
+From 645b23da6f8b47f295fa87051335d41d139717a5 Mon Sep 17 00:00:00 2001
+From: Marek Szyprowski
+Date: Tue, 7 Aug 2018 12:48:49 +0200
+Subject: ARM: dts: exynos: Mark 1 GHz CPU OPP as suspend OPP on Exynos5250
+
+From: Marek Szyprowski
+
+commit 645b23da6f8b47f295fa87051335d41d139717a5 upstream.
+
+1 GHz CPU OPP is the default boot value for the Exynos5250 SOC, so mark it
+as suspend OPP. This fixes suspend/resume on Samsung Exynos5250 Snow
+Chromebook, which was broken since switching to generic cpufreq-dt driver
+in v4.3.
+
+Cc: # 4.3.x: cd6f55457eb4: ARM: dts: exynos: Remove "cooling-{min|max}-level" for CPU nodes
+Cc: # 4.3.x: 672f33198bee: arm: dts: exynos: Add missing cooling device properties for CPUs
+Cc: # 4.3.x
+Signed-off-by: Marek Szyprowski
+Reviewed-by: Chanwoo Choi
+Acked-by: Bartlomiej Zolnierkiewicz
+Signed-off-by: Krzysztof Kozlowski
+Signed-off-by: Greg Kroah-Hartman
+
+---
+ arch/arm/boot/dts/exynos5250.dtsi | 1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/arch/arm/boot/dts/exynos5250.dtsi
++++ b/arch/arm/boot/dts/exynos5250.dtsi
+@@ -118,6 +118,7 @@
+ 			opp-hz = /bits/ 64 <1000000000>;
+ 			opp-microvolt = <1075000>;
+ 			clock-latency-ns = <140000>;
++			opp-suspend;
+ 		};
+ 		opp-1100000000 {
+ 			opp-hz = /bits/ 64 <1100000000>;
diff --git a/queue-4.19/asoc-intel-skylake-add-missing-break-in-skl_tplg_get_token.patch b/queue-4.19/asoc-intel-skylake-add-missing-break-in-skl_tplg_get_token.patch
new file mode 100644
index 00000000000..75bc0be925e
--- /dev/null
+++ b/queue-4.19/asoc-intel-skylake-add-missing-break-in-skl_tplg_get_token.patch
@@ -0,0 +1,33 @@
+From 9c80c5a8831471e0a3e139aad1b0d4c0fdc50b2f Mon Sep 17 00:00:00 2001
+From: Takashi Iwai +Date: Wed, 3 Oct 2018 19:31:44 +0200 +Subject: ASoC: intel: skylake: Add missing break in skl_tplg_get_token() + +From: Takashi Iwai + +commit 9c80c5a8831471e0a3e139aad1b0d4c0fdc50b2f upstream. + +skl_tplg_get_token() misses a break in the big switch() block for +SKL_TKN_U8_CORE_ID entry. +Spotted nicely by -Wimplicit-fallthrough compiler option. + +Fixes: 6277e83292a2 ("ASoC: Intel: Skylake: Parse vendor tokens to build module data") +Cc: +Signed-off-by: Takashi Iwai +Signed-off-by: Mark Brown +Signed-off-by: Greg Kroah-Hartman + +--- + sound/soc/intel/skylake/skl-topology.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/sound/soc/intel/skylake/skl-topology.c ++++ b/sound/soc/intel/skylake/skl-topology.c +@@ -2461,6 +2461,7 @@ static int skl_tplg_get_token(struct dev + + case SKL_TKN_U8_CORE_ID: + mconfig->core_id = tkn_elem->value; ++ break; + + case SKL_TKN_U8_MOD_TYPE: + mconfig->m_type = tkn_elem->value; diff --git a/queue-4.19/asoc-sta32x-set-component-pointer-in-private-struct.patch b/queue-4.19/asoc-sta32x-set-component-pointer-in-private-struct.patch new file mode 100644 index 00000000000..c2541a14f38 --- /dev/null +++ b/queue-4.19/asoc-sta32x-set-component-pointer-in-private-struct.patch @@ -0,0 +1,38 @@ +From 747df19747bc9752cd40b9cce761e17a033aa5c2 Mon Sep 17 00:00:00 2001 +From: Daniel Mack +Date: Thu, 11 Oct 2018 20:32:05 +0200 +Subject: ASoC: sta32x: set ->component pointer in private struct + +From: Daniel Mack + +commit 747df19747bc9752cd40b9cce761e17a033aa5c2 upstream. + +The ESD watchdog code in sta32x_watchdog() dereferences the pointer +which is never assigned. + +This is a regression from a1be4cead9b950 ("ASoC: sta32x: Convert to direct +regmap API usage.") which went unnoticed since nobody seems to use that ESD +workaround. 
+ +Fixes: a1be4cead9b950 ("ASoC: sta32x: Convert to direct regmap API usage.") +Signed-off-by: Daniel Mack +Signed-off-by: Mark Brown +Cc: stable@vger.kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + sound/soc/codecs/sta32x.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/sound/soc/codecs/sta32x.c ++++ b/sound/soc/codecs/sta32x.c +@@ -879,6 +879,9 @@ static int sta32x_probe(struct snd_soc_c + struct sta32x_priv *sta32x = snd_soc_component_get_drvdata(component); + struct sta32x_platform_data *pdata = sta32x->pdata; + int i, ret = 0, thermal = 0; ++ ++ sta32x->component = component; ++ + ret = regulator_bulk_enable(ARRAY_SIZE(sta32x->supplies), + sta32x->supplies); + if (ret != 0) { diff --git a/queue-4.19/crypto-aegis-generic-fix-for-big-endian-systems.patch b/queue-4.19/crypto-aegis-generic-fix-for-big-endian-systems.patch new file mode 100644 index 00000000000..23ecac2c6ca --- /dev/null +++ b/queue-4.19/crypto-aegis-generic-fix-for-big-endian-systems.patch @@ -0,0 +1,76 @@ +From 4a34e3c2f2f48f47213702a84a123af0fe21ad60 Mon Sep 17 00:00:00 2001 +From: Ard Biesheuvel +Date: Mon, 1 Oct 2018 10:36:38 +0200 +Subject: crypto: aegis/generic - fix for big endian systems + +From: Ard Biesheuvel + +commit 4a34e3c2f2f48f47213702a84a123af0fe21ad60 upstream. + +Use the correct __le32 annotation and accessors to perform the +single round of AES encryption performed inside the AEGIS transform. 
+Otherwise, tcrypt reports: + + alg: aead: Test 1 failed on encryption for aegis128-generic + 00000000: 6c 25 25 4a 3c 10 1d 27 2b c1 d4 84 9a ef 7f 6e + alg: aead: Test 1 failed on encryption for aegis128l-generic + 00000000: cd c6 e3 b8 a0 70 9d 8e c2 4f 6f fe 71 42 df 28 + alg: aead: Test 1 failed on encryption for aegis256-generic + 00000000: aa ed 07 b1 96 1d e9 e6 f2 ed b5 8e 1c 5f dc 1c + +Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations") +Cc: # v4.18+ +Signed-off-by: Ard Biesheuvel +Reviewed-by: Ondrej Mosnacek +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/aegis.h | 22 ++++++++++------------ + 1 file changed, 10 insertions(+), 12 deletions(-) + +--- a/crypto/aegis.h ++++ b/crypto/aegis.h +@@ -21,7 +21,7 @@ + + union aegis_block { + __le64 words64[AEGIS_BLOCK_SIZE / sizeof(__le64)]; +- u32 words32[AEGIS_BLOCK_SIZE / sizeof(u32)]; ++ __le32 words32[AEGIS_BLOCK_SIZE / sizeof(__le32)]; + u8 bytes[AEGIS_BLOCK_SIZE]; + }; + +@@ -57,24 +57,22 @@ static void crypto_aegis_aesenc(union ae + const union aegis_block *src, + const union aegis_block *key) + { +- u32 *d = dst->words32; + const u8 *s = src->bytes; +- const u32 *k = key->words32; + const u32 *t0 = crypto_ft_tab[0]; + const u32 *t1 = crypto_ft_tab[1]; + const u32 *t2 = crypto_ft_tab[2]; + const u32 *t3 = crypto_ft_tab[3]; + u32 d0, d1, d2, d3; + +- d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]] ^ k[0]; +- d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]] ^ k[1]; +- d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]] ^ k[2]; +- d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]] ^ k[3]; +- +- d[0] = d0; +- d[1] = d1; +- d[2] = d2; +- d[3] = d3; ++ d0 = t0[s[ 0]] ^ t1[s[ 5]] ^ t2[s[10]] ^ t3[s[15]]; ++ d1 = t0[s[ 4]] ^ t1[s[ 9]] ^ t2[s[14]] ^ t3[s[ 3]]; ++ d2 = t0[s[ 8]] ^ t1[s[13]] ^ t2[s[ 2]] ^ t3[s[ 7]]; ++ d3 = t0[s[12]] ^ t1[s[ 1]] ^ t2[s[ 6]] ^ t3[s[11]]; ++ ++ dst->words32[0] = cpu_to_le32(d0) ^ key->words32[0]; ++ dst->words32[1] = 
cpu_to_le32(d1) ^ key->words32[1]; ++ dst->words32[2] = cpu_to_le32(d2) ^ key->words32[2]; ++ dst->words32[3] = cpu_to_le32(d3) ^ key->words32[3]; + } + + #endif /* _CRYPTO_AEGIS_H */ diff --git a/queue-4.19/crypto-aesni-don-t-use-gfp_atomic-allocation-if-the-request-doesn-t-cross-a-page-in-gcm.patch b/queue-4.19/crypto-aesni-don-t-use-gfp_atomic-allocation-if-the-request-doesn-t-cross-a-page-in-gcm.patch new file mode 100644 index 00000000000..4421939755e --- /dev/null +++ b/queue-4.19/crypto-aesni-don-t-use-gfp_atomic-allocation-if-the-request-doesn-t-cross-a-page-in-gcm.patch @@ -0,0 +1,38 @@ +From a788848116454d753b13a4888e0e31ada3c4d393 Mon Sep 17 00:00:00 2001 +From: Mikulas Patocka +Date: Wed, 5 Sep 2018 09:18:43 -0400 +Subject: crypto: aesni - don't use GFP_ATOMIC allocation if the request doesn't cross a page in gcm + +From: Mikulas Patocka + +commit a788848116454d753b13a4888e0e31ada3c4d393 upstream. + +This patch fixes gcmaes_crypt_by_sg so that it won't use memory +allocation if the data doesn't cross a page boundary. + +Authenticated encryption may be used by dm-crypt. If the encryption or +decryption fails, it would result in I/O error and filesystem corruption. +The function gcmaes_crypt_by_sg is using GFP_ATOMIC allocation that can +fail anytime. This patch fixes the logic so that it won't attempt the +failing allocation if the data doesn't cross a page boundary. 
+ +Signed-off-by: Mikulas Patocka +Cc: stable@vger.kernel.org +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/crypto/aesni-intel_glue.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/arch/x86/crypto/aesni-intel_glue.c ++++ b/arch/x86/crypto/aesni-intel_glue.c +@@ -817,7 +817,7 @@ static int gcmaes_crypt_by_sg(bool enc, + /* Linearize assoc, if not already linear */ + if (req->src->length >= assoclen && req->src->length && + (!PageHighMem(sg_page(req->src)) || +- req->src->offset + req->src->length < PAGE_SIZE)) { ++ req->src->offset + req->src->length <= PAGE_SIZE)) { + scatterwalk_start(&assoc_sg_walk, req->src); + assoc = scatterwalk_map(&assoc_sg_walk); + } else { diff --git a/queue-4.19/crypto-lrw-fix-out-of-bounds-access-on-counter-overflow.patch b/queue-4.19/crypto-lrw-fix-out-of-bounds-access-on-counter-overflow.patch new file mode 100644 index 00000000000..c3d52f73b47 --- /dev/null +++ b/queue-4.19/crypto-lrw-fix-out-of-bounds-access-on-counter-overflow.patch @@ -0,0 +1,40 @@ +From fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 Mon Sep 17 00:00:00 2001 +From: Ondrej Mosnacek +Date: Thu, 13 Sep 2018 10:51:31 +0200 +Subject: crypto: lrw - Fix out-of bounds access on counter overflow + +From: Ondrej Mosnacek + +commit fbe1a850b3b1522e9fc22319ccbbcd2ab05328d2 upstream. + +When the LRW block counter overflows, the current implementation returns +128 as the index to the precomputed multiplication table, which has 128 +entries. This patch fixes it to return the correct value (127). 
+ +Fixes: 64470f1b8510 ("[CRYPTO] lrw: Liskov Rivest Wagner, a tweakable narrow block cipher mode") +Cc: # 2.6.20+ +Reported-by: Eric Biggers +Signed-off-by: Ondrej Mosnacek +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/lrw.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +--- a/crypto/lrw.c ++++ b/crypto/lrw.c +@@ -143,7 +143,12 @@ static inline int get_index128(be128 *bl + return x + ffz(val); + } + +- return x; ++ /* ++ * If we get here, then x == 128 and we are incrementing the counter ++ * from all ones to all zeros. This means we must return index 127, i.e. ++ * the one corresponding to key2*{ 1,...,1 }. ++ */ ++ return 127; + } + + static int post_crypt(struct skcipher_request *req) diff --git a/queue-4.19/crypto-morus-generic-fix-for-big-endian-systems.patch b/queue-4.19/crypto-morus-generic-fix-for-big-endian-systems.patch new file mode 100644 index 00000000000..cc18727b0f7 --- /dev/null +++ b/queue-4.19/crypto-morus-generic-fix-for-big-endian-systems.patch @@ -0,0 +1,82 @@ +From 5a8dedfa3276e88c5865f265195d63d72aec3e72 Mon Sep 17 00:00:00 2001 +From: Ard Biesheuvel +Date: Mon, 1 Oct 2018 10:36:37 +0200 +Subject: crypto: morus/generic - fix for big endian systems + +From: Ard Biesheuvel + +commit 5a8dedfa3276e88c5865f265195d63d72aec3e72 upstream. + +Omit the endian swabbing when folding the lengths of the assoc and +crypt input buffers into the state to finalize the tag. This is not +necessary given that the memory representation of the state is in +machine native endianness already. 
+ +This fixes an error reported by tcrypt running on a big endian system: + + alg: aead: Test 2 failed on encryption for morus640-generic + 00000000: a8 30 ef fb e6 26 eb 23 b0 87 dd 98 57 f3 e1 4b + 00000010: 21 + alg: aead: Test 2 failed on encryption for morus1280-generic + 00000000: 88 19 1b fb 1c 29 49 0e ee 82 2f cb 97 a6 a5 ee + 00000010: 5f + +Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations") +Cc: # v4.18+ +Reviewed-by: Ondrej Mosnacek +Signed-off-by: Ard Biesheuvel +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/morus1280.c | 7 ++----- + crypto/morus640.c | 16 ++++------------ + 2 files changed, 6 insertions(+), 17 deletions(-) + +--- a/crypto/morus1280.c ++++ b/crypto/morus1280.c +@@ -385,14 +385,11 @@ static void crypto_morus1280_final(struc + struct morus1280_block *tag_xor, + u64 assoclen, u64 cryptlen) + { +- u64 assocbits = assoclen * 8; +- u64 cryptbits = cryptlen * 8; +- + struct morus1280_block tmp; + unsigned int i; + +- tmp.words[0] = cpu_to_le64(assocbits); +- tmp.words[1] = cpu_to_le64(cryptbits); ++ tmp.words[0] = assoclen * 8; ++ tmp.words[1] = cryptlen * 8; + tmp.words[2] = 0; + tmp.words[3] = 0; + +--- a/crypto/morus640.c ++++ b/crypto/morus640.c +@@ -384,21 +384,13 @@ static void crypto_morus640_final(struct + struct morus640_block *tag_xor, + u64 assoclen, u64 cryptlen) + { +- u64 assocbits = assoclen * 8; +- u64 cryptbits = cryptlen * 8; +- +- u32 assocbits_lo = (u32)assocbits; +- u32 assocbits_hi = (u32)(assocbits >> 32); +- u32 cryptbits_lo = (u32)cryptbits; +- u32 cryptbits_hi = (u32)(cryptbits >> 32); +- + struct morus640_block tmp; + unsigned int i; + +- tmp.words[0] = cpu_to_le32(assocbits_lo); +- tmp.words[1] = cpu_to_le32(assocbits_hi); +- tmp.words[2] = cpu_to_le32(cryptbits_lo); +- tmp.words[3] = cpu_to_le32(cryptbits_hi); ++ tmp.words[0] = lower_32_bits(assoclen * 8); ++ tmp.words[1] = upper_32_bits(assoclen * 8); ++ tmp.words[2] = lower_32_bits(cryptlen * 8); ++ 
tmp.words[3] = upper_32_bits(cryptlen * 8); + + for (i = 0; i < MORUS_BLOCK_WORDS; i++) + state->s[4].words[i] ^= state->s[0].words[i]; diff --git a/queue-4.19/crypto-speck-remove-speck.patch b/queue-4.19/crypto-speck-remove-speck.patch new file mode 100644 index 00000000000..d886de2cb0a --- /dev/null +++ b/queue-4.19/crypto-speck-remove-speck.patch @@ -0,0 +1,2876 @@ +From 578bdaabd015b9b164842c3e8ace9802f38e7ecc Mon Sep 17 00:00:00 2001 +From: "Jason A. Donenfeld" +Date: Tue, 7 Aug 2018 08:22:25 +0200 +Subject: crypto: speck - remove Speck + +From: Jason A. Donenfeld + +commit 578bdaabd015b9b164842c3e8ace9802f38e7ecc upstream. + +These are unused, undesired, and have never actually been used by +anybody. The original authors of this code have changed their mind about +its inclusion. While originally proposed for disk encryption on low-end +devices, the idea was discarded [1] in favor of something else before +that could really get going. Therefore, this patch removes Speck. + +[1] https://marc.info/?l=linux-crypto-vger&m=153359499015659 + +Signed-off-by: Jason A. 
Donenfeld +Acked-by: Eric Biggers +Cc: stable@vger.kernel.org +Acked-by: Ard Biesheuvel +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + Documentation/filesystems/fscrypt.rst | 10 + arch/arm/crypto/Kconfig | 6 + arch/arm/crypto/Makefile | 2 + arch/arm/crypto/speck-neon-core.S | 434 ------------------- + arch/arm/crypto/speck-neon-glue.c | 288 ------------- + arch/arm64/crypto/Kconfig | 6 + arch/arm64/crypto/Makefile | 3 + arch/arm64/crypto/speck-neon-core.S | 352 ---------------- + arch/arm64/crypto/speck-neon-glue.c | 282 ------------ + arch/m68k/configs/amiga_defconfig | 1 + arch/m68k/configs/apollo_defconfig | 1 + arch/m68k/configs/atari_defconfig | 1 + arch/m68k/configs/bvme6000_defconfig | 1 + arch/m68k/configs/hp300_defconfig | 1 + arch/m68k/configs/mac_defconfig | 1 + arch/m68k/configs/multi_defconfig | 1 + arch/m68k/configs/mvme147_defconfig | 1 + arch/m68k/configs/mvme16x_defconfig | 1 + arch/m68k/configs/q40_defconfig | 1 + arch/m68k/configs/sun3_defconfig | 1 + arch/m68k/configs/sun3x_defconfig | 1 + arch/s390/defconfig | 1 + crypto/Kconfig | 14 + crypto/Makefile | 1 + crypto/speck.c | 307 -------------- + crypto/testmgr.c | 24 - + crypto/testmgr.h | 738 ---------------------------------- + fs/crypto/fscrypt_private.h | 4 + fs/crypto/keyinfo.c | 10 + include/crypto/speck.h | 62 -- + include/uapi/linux/fs.h | 4 + 31 files changed, 2 insertions(+), 2558 deletions(-) + +--- a/Documentation/filesystems/fscrypt.rst ++++ b/Documentation/filesystems/fscrypt.rst +@@ -191,21 +191,11 @@ Currently, the following pairs of encryp + + - AES-256-XTS for contents and AES-256-CTS-CBC for filenames + - AES-128-CBC for contents and AES-128-CTS-CBC for filenames +-- Speck128/256-XTS for contents and Speck128/256-CTS-CBC for filenames + + It is strongly recommended to use AES-256-XTS for contents encryption. + AES-128-CBC was added only for low-powered embedded devices with + crypto accelerators such as CAAM or CESA that do not support XTS. 
+ +-Similarly, Speck128/256 support was only added for older or low-end +-CPUs which cannot do AES fast enough -- especially ARM CPUs which have +-NEON instructions but not the Cryptography Extensions -- and for which +-it would not otherwise be feasible to use encryption at all. It is +-not recommended to use Speck on CPUs that have AES instructions. +-Speck support is only available if it has been enabled in the crypto +-API via CONFIG_CRYPTO_SPECK. Also, on ARM platforms, to get +-acceptable performance CONFIG_CRYPTO_SPECK_NEON must be enabled. +- + New encryption modes can be added relatively easily, without changes + to individual filesystems. However, authenticated encryption (AE) + modes are not currently supported because of the difficulty of dealing +--- a/arch/arm/crypto/Kconfig ++++ b/arch/arm/crypto/Kconfig +@@ -121,10 +121,4 @@ config CRYPTO_CHACHA20_NEON + select CRYPTO_BLKCIPHER + select CRYPTO_CHACHA20 + +-config CRYPTO_SPECK_NEON +- tristate "NEON accelerated Speck cipher algorithms" +- depends on KERNEL_MODE_NEON +- select CRYPTO_BLKCIPHER +- select CRYPTO_SPECK +- + endif +--- a/arch/arm/crypto/Makefile ++++ b/arch/arm/crypto/Makefile +@@ -10,7 +10,6 @@ obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sh + obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o + obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o + obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o +-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o + + ce-obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o + ce-obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o +@@ -54,7 +53,6 @@ ghash-arm-ce-y := ghash-ce-core.o ghash- + crct10dif-arm-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o + crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o + chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o +-speck-neon-y := speck-neon-core.o speck-neon-glue.o + + ifdef REGENERATE_ARM_CRYPTO + quiet_cmd_perl = PERL $@ +--- a/arch/arm/crypto/speck-neon-core.S ++++ /dev/null +@@ -1,434 +0,0 @@ +-// 
SPDX-License-Identifier: GPL-2.0 +-/* +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS +- * +- * Copyright (c) 2018 Google, Inc +- * +- * Author: Eric Biggers +- */ +- +-#include +- +- .text +- .fpu neon +- +- // arguments +- ROUND_KEYS .req r0 // const {u64,u32} *round_keys +- NROUNDS .req r1 // int nrounds +- DST .req r2 // void *dst +- SRC .req r3 // const void *src +- NBYTES .req r4 // unsigned int nbytes +- TWEAK .req r5 // void *tweak +- +- // registers which hold the data being encrypted/decrypted +- X0 .req q0 +- X0_L .req d0 +- X0_H .req d1 +- Y0 .req q1 +- Y0_H .req d3 +- X1 .req q2 +- X1_L .req d4 +- X1_H .req d5 +- Y1 .req q3 +- Y1_H .req d7 +- X2 .req q4 +- X2_L .req d8 +- X2_H .req d9 +- Y2 .req q5 +- Y2_H .req d11 +- X3 .req q6 +- X3_L .req d12 +- X3_H .req d13 +- Y3 .req q7 +- Y3_H .req d15 +- +- // the round key, duplicated in all lanes +- ROUND_KEY .req q8 +- ROUND_KEY_L .req d16 +- ROUND_KEY_H .req d17 +- +- // index vector for vtbl-based 8-bit rotates +- ROTATE_TABLE .req d18 +- +- // multiplication table for updating XTS tweaks +- GF128MUL_TABLE .req d19 +- GF64MUL_TABLE .req d19 +- +- // current XTS tweak value(s) +- TWEAKV .req q10 +- TWEAKV_L .req d20 +- TWEAKV_H .req d21 +- +- TMP0 .req q12 +- TMP0_L .req d24 +- TMP0_H .req d25 +- TMP1 .req q13 +- TMP2 .req q14 +- TMP3 .req q15 +- +- .align 4 +-.Lror64_8_table: +- .byte 1, 2, 3, 4, 5, 6, 7, 0 +-.Lror32_8_table: +- .byte 1, 2, 3, 0, 5, 6, 7, 4 +-.Lrol64_8_table: +- .byte 7, 0, 1, 2, 3, 4, 5, 6 +-.Lrol32_8_table: +- .byte 3, 0, 1, 2, 7, 4, 5, 6 +-.Lgf128mul_table: +- .byte 0, 0x87 +- .fill 14 +-.Lgf64mul_table: +- .byte 0, 0x1b, (0x1b << 1), (0x1b << 1) ^ 0x1b +- .fill 12 +- +-/* +- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time +- * +- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for +- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes +- * of ROUND_KEY. 
'n' is the lane size: 64 for Speck128, or 32 for Speck64. +- * +- * The 8-bit rotates are implemented using vtbl instead of vshr + vsli because +- * the vtbl approach is faster on some processors and the same speed on others. +- */ +-.macro _speck_round_128bytes n +- +- // x = ror(x, 8) +- vtbl.8 X0_L, {X0_L}, ROTATE_TABLE +- vtbl.8 X0_H, {X0_H}, ROTATE_TABLE +- vtbl.8 X1_L, {X1_L}, ROTATE_TABLE +- vtbl.8 X1_H, {X1_H}, ROTATE_TABLE +- vtbl.8 X2_L, {X2_L}, ROTATE_TABLE +- vtbl.8 X2_H, {X2_H}, ROTATE_TABLE +- vtbl.8 X3_L, {X3_L}, ROTATE_TABLE +- vtbl.8 X3_H, {X3_H}, ROTATE_TABLE +- +- // x += y +- vadd.u\n X0, Y0 +- vadd.u\n X1, Y1 +- vadd.u\n X2, Y2 +- vadd.u\n X3, Y3 +- +- // x ^= k +- veor X0, ROUND_KEY +- veor X1, ROUND_KEY +- veor X2, ROUND_KEY +- veor X3, ROUND_KEY +- +- // y = rol(y, 3) +- vshl.u\n TMP0, Y0, #3 +- vshl.u\n TMP1, Y1, #3 +- vshl.u\n TMP2, Y2, #3 +- vshl.u\n TMP3, Y3, #3 +- vsri.u\n TMP0, Y0, #(\n - 3) +- vsri.u\n TMP1, Y1, #(\n - 3) +- vsri.u\n TMP2, Y2, #(\n - 3) +- vsri.u\n TMP3, Y3, #(\n - 3) +- +- // y ^= x +- veor Y0, TMP0, X0 +- veor Y1, TMP1, X1 +- veor Y2, TMP2, X2 +- veor Y3, TMP3, X3 +-.endm +- +-/* +- * _speck_unround_128bytes() - Speck decryption round on 128 bytes at a time +- * +- * This is the inverse of _speck_round_128bytes(). 
+- */ +-.macro _speck_unround_128bytes n +- +- // y ^= x +- veor TMP0, Y0, X0 +- veor TMP1, Y1, X1 +- veor TMP2, Y2, X2 +- veor TMP3, Y3, X3 +- +- // y = ror(y, 3) +- vshr.u\n Y0, TMP0, #3 +- vshr.u\n Y1, TMP1, #3 +- vshr.u\n Y2, TMP2, #3 +- vshr.u\n Y3, TMP3, #3 +- vsli.u\n Y0, TMP0, #(\n - 3) +- vsli.u\n Y1, TMP1, #(\n - 3) +- vsli.u\n Y2, TMP2, #(\n - 3) +- vsli.u\n Y3, TMP3, #(\n - 3) +- +- // x ^= k +- veor X0, ROUND_KEY +- veor X1, ROUND_KEY +- veor X2, ROUND_KEY +- veor X3, ROUND_KEY +- +- // x -= y +- vsub.u\n X0, Y0 +- vsub.u\n X1, Y1 +- vsub.u\n X2, Y2 +- vsub.u\n X3, Y3 +- +- // x = rol(x, 8); +- vtbl.8 X0_L, {X0_L}, ROTATE_TABLE +- vtbl.8 X0_H, {X0_H}, ROTATE_TABLE +- vtbl.8 X1_L, {X1_L}, ROTATE_TABLE +- vtbl.8 X1_H, {X1_H}, ROTATE_TABLE +- vtbl.8 X2_L, {X2_L}, ROTATE_TABLE +- vtbl.8 X2_H, {X2_H}, ROTATE_TABLE +- vtbl.8 X3_L, {X3_L}, ROTATE_TABLE +- vtbl.8 X3_H, {X3_H}, ROTATE_TABLE +-.endm +- +-.macro _xts128_precrypt_one dst_reg, tweak_buf, tmp +- +- // Load the next source block +- vld1.8 {\dst_reg}, [SRC]! +- +- // Save the current tweak in the tweak buffer +- vst1.8 {TWEAKV}, [\tweak_buf:128]! +- +- // XOR the next source block with the current tweak +- veor \dst_reg, TWEAKV +- +- /* +- * Calculate the next tweak by multiplying the current one by x, +- * modulo p(x) = x^128 + x^7 + x^2 + x + 1. +- */ +- vshr.u64 \tmp, TWEAKV, #63 +- vshl.u64 TWEAKV, #1 +- veor TWEAKV_H, \tmp\()_L +- vtbl.8 \tmp\()_H, {GF128MUL_TABLE}, \tmp\()_H +- veor TWEAKV_L, \tmp\()_H +-.endm +- +-.macro _xts64_precrypt_two dst_reg, tweak_buf, tmp +- +- // Load the next two source blocks +- vld1.8 {\dst_reg}, [SRC]! +- +- // Save the current two tweaks in the tweak buffer +- vst1.8 {TWEAKV}, [\tweak_buf:128]! +- +- // XOR the next two source blocks with the current two tweaks +- veor \dst_reg, TWEAKV +- +- /* +- * Calculate the next two tweaks by multiplying the current ones by x^2, +- * modulo p(x) = x^64 + x^4 + x^3 + x + 1. 
+- */ +- vshr.u64 \tmp, TWEAKV, #62 +- vshl.u64 TWEAKV, #2 +- vtbl.8 \tmp\()_L, {GF64MUL_TABLE}, \tmp\()_L +- vtbl.8 \tmp\()_H, {GF64MUL_TABLE}, \tmp\()_H +- veor TWEAKV, \tmp +-.endm +- +-/* +- * _speck_xts_crypt() - Speck-XTS encryption/decryption +- * +- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer +- * using Speck-XTS, specifically the variant with a block size of '2n' and round +- * count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and +- * the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a +- * nonzero multiple of 128. +- */ +-.macro _speck_xts_crypt n, decrypting +- push {r4-r7} +- mov r7, sp +- +- /* +- * The first four parameters were passed in registers r0-r3. Load the +- * additional parameters, which were passed on the stack. +- */ +- ldr NBYTES, [sp, #16] +- ldr TWEAK, [sp, #20] +- +- /* +- * If decrypting, modify the ROUND_KEYS parameter to point to the last +- * round key rather than the first, since for decryption the round keys +- * are used in reverse order. +- */ +-.if \decrypting +-.if \n == 64 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #3 +- sub ROUND_KEYS, #8 +-.else +- add ROUND_KEYS, ROUND_KEYS, NROUNDS, lsl #2 +- sub ROUND_KEYS, #4 +-.endif +-.endif +- +- // Load the index vector for vtbl-based 8-bit rotates +-.if \decrypting +- ldr r12, =.Lrol\n\()_8_table +-.else +- ldr r12, =.Lror\n\()_8_table +-.endif +- vld1.8 {ROTATE_TABLE}, [r12:64] +- +- // One-time XTS preparation +- +- /* +- * Allocate stack space to store 128 bytes worth of tweaks. For +- * performance, this space is aligned to a 16-byte boundary so that we +- * can use the load/store instructions that declare 16-byte alignment. +- * For Thumb2 compatibility, don't do the 'bic' directly on 'sp'. 
+- */ +- sub r12, sp, #128 +- bic r12, #0xf +- mov sp, r12 +- +-.if \n == 64 +- // Load first tweak +- vld1.8 {TWEAKV}, [TWEAK] +- +- // Load GF(2^128) multiplication table +- ldr r12, =.Lgf128mul_table +- vld1.8 {GF128MUL_TABLE}, [r12:64] +-.else +- // Load first tweak +- vld1.8 {TWEAKV_L}, [TWEAK] +- +- // Load GF(2^64) multiplication table +- ldr r12, =.Lgf64mul_table +- vld1.8 {GF64MUL_TABLE}, [r12:64] +- +- // Calculate second tweak, packing it together with the first +- vshr.u64 TMP0_L, TWEAKV_L, #63 +- vtbl.u8 TMP0_L, {GF64MUL_TABLE}, TMP0_L +- vshl.u64 TWEAKV_H, TWEAKV_L, #1 +- veor TWEAKV_H, TMP0_L +-.endif +- +-.Lnext_128bytes_\@: +- +- /* +- * Load the source blocks into {X,Y}[0-3], XOR them with their XTS tweak +- * values, and save the tweaks on the stack for later. Then +- * de-interleave the 'x' and 'y' elements of each block, i.e. make it so +- * that the X[0-3] registers contain only the second halves of blocks, +- * and the Y[0-3] registers contain only the first halves of blocks. +- * (Speck uses the order (y, x) rather than the more intuitive (x, y).) 
+- */ +- mov r12, sp +-.if \n == 64 +- _xts128_precrypt_one X0, r12, TMP0 +- _xts128_precrypt_one Y0, r12, TMP0 +- _xts128_precrypt_one X1, r12, TMP0 +- _xts128_precrypt_one Y1, r12, TMP0 +- _xts128_precrypt_one X2, r12, TMP0 +- _xts128_precrypt_one Y2, r12, TMP0 +- _xts128_precrypt_one X3, r12, TMP0 +- _xts128_precrypt_one Y3, r12, TMP0 +- vswp X0_L, Y0_H +- vswp X1_L, Y1_H +- vswp X2_L, Y2_H +- vswp X3_L, Y3_H +-.else +- _xts64_precrypt_two X0, r12, TMP0 +- _xts64_precrypt_two Y0, r12, TMP0 +- _xts64_precrypt_two X1, r12, TMP0 +- _xts64_precrypt_two Y1, r12, TMP0 +- _xts64_precrypt_two X2, r12, TMP0 +- _xts64_precrypt_two Y2, r12, TMP0 +- _xts64_precrypt_two X3, r12, TMP0 +- _xts64_precrypt_two Y3, r12, TMP0 +- vuzp.32 Y0, X0 +- vuzp.32 Y1, X1 +- vuzp.32 Y2, X2 +- vuzp.32 Y3, X3 +-.endif +- +- // Do the cipher rounds +- +- mov r12, ROUND_KEYS +- mov r6, NROUNDS +- +-.Lnext_round_\@: +-.if \decrypting +-.if \n == 64 +- vld1.64 ROUND_KEY_L, [r12] +- sub r12, #8 +- vmov ROUND_KEY_H, ROUND_KEY_L +-.else +- vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12] +- sub r12, #4 +-.endif +- _speck_unround_128bytes \n +-.else +-.if \n == 64 +- vld1.64 ROUND_KEY_L, [r12]! +- vmov ROUND_KEY_H, ROUND_KEY_L +-.else +- vld1.32 {ROUND_KEY_L[],ROUND_KEY_H[]}, [r12]! +-.endif +- _speck_round_128bytes \n +-.endif +- subs r6, r6, #1 +- bne .Lnext_round_\@ +- +- // Re-interleave the 'x' and 'y' elements of each block +-.if \n == 64 +- vswp X0_L, Y0_H +- vswp X1_L, Y1_H +- vswp X2_L, Y2_H +- vswp X3_L, Y3_H +-.else +- vzip.32 Y0, X0 +- vzip.32 Y1, X1 +- vzip.32 Y2, X2 +- vzip.32 Y3, X3 +-.endif +- +- // XOR the encrypted/decrypted blocks with the tweaks we saved earlier +- mov r12, sp +- vld1.8 {TMP0, TMP1}, [r12:128]! +- vld1.8 {TMP2, TMP3}, [r12:128]! +- veor X0, TMP0 +- veor Y0, TMP1 +- veor X1, TMP2 +- veor Y1, TMP3 +- vld1.8 {TMP0, TMP1}, [r12:128]! +- vld1.8 {TMP2, TMP3}, [r12:128]! 
+- veor X2, TMP0 +- veor Y2, TMP1 +- veor X3, TMP2 +- veor Y3, TMP3 +- +- // Store the ciphertext in the destination buffer +- vst1.8 {X0, Y0}, [DST]! +- vst1.8 {X1, Y1}, [DST]! +- vst1.8 {X2, Y2}, [DST]! +- vst1.8 {X3, Y3}, [DST]! +- +- // Continue if there are more 128-byte chunks remaining, else return +- subs NBYTES, #128 +- bne .Lnext_128bytes_\@ +- +- // Store the next tweak +-.if \n == 64 +- vst1.8 {TWEAKV}, [TWEAK] +-.else +- vst1.8 {TWEAKV_L}, [TWEAK] +-.endif +- +- mov sp, r7 +- pop {r4-r7} +- bx lr +-.endm +- +-ENTRY(speck128_xts_encrypt_neon) +- _speck_xts_crypt n=64, decrypting=0 +-ENDPROC(speck128_xts_encrypt_neon) +- +-ENTRY(speck128_xts_decrypt_neon) +- _speck_xts_crypt n=64, decrypting=1 +-ENDPROC(speck128_xts_decrypt_neon) +- +-ENTRY(speck64_xts_encrypt_neon) +- _speck_xts_crypt n=32, decrypting=0 +-ENDPROC(speck64_xts_encrypt_neon) +- +-ENTRY(speck64_xts_decrypt_neon) +- _speck_xts_crypt n=32, decrypting=1 +-ENDPROC(speck64_xts_decrypt_neon) +--- a/arch/arm/crypto/speck-neon-glue.c ++++ /dev/null +@@ -1,288 +0,0 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS +- * +- * Copyright (c) 2018 Google, Inc +- * +- * Note: the NIST recommendation for XTS only specifies a 128-bit block size, +- * but a 64-bit version (needed for Speck64) is fairly straightforward; the math +- * is just done in GF(2^64) instead of GF(2^128), with the reducing polynomial +- * x^64 + x^4 + x^3 + x + 1 from the original XEX paper (Rogaway, 2004: +- * "Efficient Instantiations of Tweakable Blockciphers and Refinements to Modes +- * OCB and PMAC"), represented as 0x1B. 
+- */ +- +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +- +-/* The assembly functions only handle multiples of 128 bytes */ +-#define SPECK_NEON_CHUNK_SIZE 128 +- +-/* Speck128 */ +- +-struct speck128_xts_tfm_ctx { +- struct speck128_tfm_ctx main_key; +- struct speck128_tfm_ctx tweak_key; +-}; +- +-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *, +- u8 *, const u8 *); +-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *, +- const void *, unsigned int, void *); +- +-static __always_inline int +-__speck128_xts_crypt(struct skcipher_request *req, +- speck128_crypt_one_t crypt_one, +- speck128_xts_crypt_many_t crypt_many) +-{ +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); +- const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- struct skcipher_walk walk; +- le128 tweak; +- int err; +- +- err = skcipher_walk_virt(&walk, req, true); +- +- crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv); +- +- while (walk.nbytes > 0) { +- unsigned int nbytes = walk.nbytes; +- u8 *dst = walk.dst.virt.addr; +- const u8 *src = walk.src.virt.addr; +- +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) { +- unsigned int count; +- +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE); +- kernel_neon_begin(); +- (*crypt_many)(ctx->main_key.round_keys, +- ctx->main_key.nrounds, +- dst, src, count, &tweak); +- kernel_neon_end(); +- dst += count; +- src += count; +- nbytes -= count; +- } +- +- /* Handle any remainder with generic code */ +- while (nbytes >= sizeof(tweak)) { +- le128_xor((le128 *)dst, (const le128 *)src, &tweak); +- 
(*crypt_one)(&ctx->main_key, dst, dst); +- le128_xor((le128 *)dst, (const le128 *)dst, &tweak); +- gf128mul_x_ble(&tweak, &tweak); +- +- dst += sizeof(tweak); +- src += sizeof(tweak); +- nbytes -= sizeof(tweak); +- } +- err = skcipher_walk_done(&walk, nbytes); +- } +- +- return err; +-} +- +-static int speck128_xts_encrypt(struct skcipher_request *req) +-{ +- return __speck128_xts_crypt(req, crypto_speck128_encrypt, +- speck128_xts_encrypt_neon); +-} +- +-static int speck128_xts_decrypt(struct skcipher_request *req) +-{ +- return __speck128_xts_crypt(req, crypto_speck128_decrypt, +- speck128_xts_decrypt_neon); +-} +- +-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, +- unsigned int keylen) +-{ +- struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- int err; +- +- err = xts_verify_key(tfm, key, keylen); +- if (err) +- return err; +- +- keylen /= 2; +- +- err = crypto_speck128_setkey(&ctx->main_key, key, keylen); +- if (err) +- return err; +- +- return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen); +-} +- +-/* Speck64 */ +- +-struct speck64_xts_tfm_ctx { +- struct speck64_tfm_ctx main_key; +- struct speck64_tfm_ctx tweak_key; +-}; +- +-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *, +- u8 *, const u8 *); +-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *, +- const void *, unsigned int, void *); +- +-static __always_inline int +-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one, +- speck64_xts_crypt_many_t crypt_many) +-{ +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); +- const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- 
struct skcipher_walk walk; +- __le64 tweak; +- int err; +- +- err = skcipher_walk_virt(&walk, req, true); +- +- crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv); +- +- while (walk.nbytes > 0) { +- unsigned int nbytes = walk.nbytes; +- u8 *dst = walk.dst.virt.addr; +- const u8 *src = walk.src.virt.addr; +- +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) { +- unsigned int count; +- +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE); +- kernel_neon_begin(); +- (*crypt_many)(ctx->main_key.round_keys, +- ctx->main_key.nrounds, +- dst, src, count, &tweak); +- kernel_neon_end(); +- dst += count; +- src += count; +- nbytes -= count; +- } +- +- /* Handle any remainder with generic code */ +- while (nbytes >= sizeof(tweak)) { +- *(__le64 *)dst = *(__le64 *)src ^ tweak; +- (*crypt_one)(&ctx->main_key, dst, dst); +- *(__le64 *)dst ^= tweak; +- tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^ +- ((tweak & cpu_to_le64(1ULL << 63)) ? +- 0x1B : 0)); +- dst += sizeof(tweak); +- src += sizeof(tweak); +- nbytes -= sizeof(tweak); +- } +- err = skcipher_walk_done(&walk, nbytes); +- } +- +- return err; +-} +- +-static int speck64_xts_encrypt(struct skcipher_request *req) +-{ +- return __speck64_xts_crypt(req, crypto_speck64_encrypt, +- speck64_xts_encrypt_neon); +-} +- +-static int speck64_xts_decrypt(struct skcipher_request *req) +-{ +- return __speck64_xts_crypt(req, crypto_speck64_decrypt, +- speck64_xts_decrypt_neon); +-} +- +-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, +- unsigned int keylen) +-{ +- struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- int err; +- +- err = xts_verify_key(tfm, key, keylen); +- if (err) +- return err; +- +- keylen /= 2; +- +- err = crypto_speck64_setkey(&ctx->main_key, key, keylen); +- if (err) +- return err; +- +- return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen); +-} +- +-static struct skcipher_alg speck_algs[] = { +- { +- .base.cra_name = "xts(speck128)", +- 
.base.cra_driver_name = "xts-speck128-neon", +- .base.cra_priority = 300, +- .base.cra_blocksize = SPECK128_BLOCK_SIZE, +- .base.cra_ctxsize = sizeof(struct speck128_xts_tfm_ctx), +- .base.cra_alignmask = 7, +- .base.cra_module = THIS_MODULE, +- .min_keysize = 2 * SPECK128_128_KEY_SIZE, +- .max_keysize = 2 * SPECK128_256_KEY_SIZE, +- .ivsize = SPECK128_BLOCK_SIZE, +- .walksize = SPECK_NEON_CHUNK_SIZE, +- .setkey = speck128_xts_setkey, +- .encrypt = speck128_xts_encrypt, +- .decrypt = speck128_xts_decrypt, +- }, { +- .base.cra_name = "xts(speck64)", +- .base.cra_driver_name = "xts-speck64-neon", +- .base.cra_priority = 300, +- .base.cra_blocksize = SPECK64_BLOCK_SIZE, +- .base.cra_ctxsize = sizeof(struct speck64_xts_tfm_ctx), +- .base.cra_alignmask = 7, +- .base.cra_module = THIS_MODULE, +- .min_keysize = 2 * SPECK64_96_KEY_SIZE, +- .max_keysize = 2 * SPECK64_128_KEY_SIZE, +- .ivsize = SPECK64_BLOCK_SIZE, +- .walksize = SPECK_NEON_CHUNK_SIZE, +- .setkey = speck64_xts_setkey, +- .encrypt = speck64_xts_encrypt, +- .decrypt = speck64_xts_decrypt, +- } +-}; +- +-static int __init speck_neon_module_init(void) +-{ +- if (!(elf_hwcap & HWCAP_NEON)) +- return -ENODEV; +- return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs)); +-} +- +-static void __exit speck_neon_module_exit(void) +-{ +- crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs)); +-} +- +-module_init(speck_neon_module_init); +-module_exit(speck_neon_module_exit); +- +-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)"); +-MODULE_LICENSE("GPL"); +-MODULE_AUTHOR("Eric Biggers "); +-MODULE_ALIAS_CRYPTO("xts(speck128)"); +-MODULE_ALIAS_CRYPTO("xts-speck128-neon"); +-MODULE_ALIAS_CRYPTO("xts(speck64)"); +-MODULE_ALIAS_CRYPTO("xts-speck64-neon"); +--- a/arch/arm64/crypto/Kconfig ++++ b/arch/arm64/crypto/Kconfig +@@ -119,10 +119,4 @@ config CRYPTO_AES_ARM64_BS + select CRYPTO_AES_ARM64 + select CRYPTO_SIMD + +-config CRYPTO_SPECK_NEON +- tristate "NEON accelerated Speck cipher 
algorithms" +- depends on KERNEL_MODE_NEON +- select CRYPTO_BLKCIPHER +- select CRYPTO_SPECK +- + endif +--- a/arch/arm64/crypto/Makefile ++++ b/arch/arm64/crypto/Makefile +@@ -56,9 +56,6 @@ sha512-arm64-y := sha512-glue.o sha512-c + obj-$(CONFIG_CRYPTO_CHACHA20_NEON) += chacha20-neon.o + chacha20-neon-y := chacha20-neon-core.o chacha20-neon-glue.o + +-obj-$(CONFIG_CRYPTO_SPECK_NEON) += speck-neon.o +-speck-neon-y := speck-neon-core.o speck-neon-glue.o +- + obj-$(CONFIG_CRYPTO_AES_ARM64) += aes-arm64.o + aes-arm64-y := aes-cipher-core.o aes-cipher-glue.o + +--- a/arch/arm64/crypto/speck-neon-core.S ++++ /dev/null +@@ -1,352 +0,0 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * ARM64 NEON-accelerated implementation of Speck128-XTS and Speck64-XTS +- * +- * Copyright (c) 2018 Google, Inc +- * +- * Author: Eric Biggers +- */ +- +-#include +- +- .text +- +- // arguments +- ROUND_KEYS .req x0 // const {u64,u32} *round_keys +- NROUNDS .req w1 // int nrounds +- NROUNDS_X .req x1 +- DST .req x2 // void *dst +- SRC .req x3 // const void *src +- NBYTES .req w4 // unsigned int nbytes +- TWEAK .req x5 // void *tweak +- +- // registers which hold the data being encrypted/decrypted +- // (underscores avoid a naming collision with ARM64 registers x0-x3) +- X_0 .req v0 +- Y_0 .req v1 +- X_1 .req v2 +- Y_1 .req v3 +- X_2 .req v4 +- Y_2 .req v5 +- X_3 .req v6 +- Y_3 .req v7 +- +- // the round key, duplicated in all lanes +- ROUND_KEY .req v8 +- +- // index vector for tbl-based 8-bit rotates +- ROTATE_TABLE .req v9 +- ROTATE_TABLE_Q .req q9 +- +- // temporary registers +- TMP0 .req v10 +- TMP1 .req v11 +- TMP2 .req v12 +- TMP3 .req v13 +- +- // multiplication table for updating XTS tweaks +- GFMUL_TABLE .req v14 +- GFMUL_TABLE_Q .req q14 +- +- // next XTS tweak value(s) +- TWEAKV_NEXT .req v15 +- +- // XTS tweaks for the blocks currently being encrypted/decrypted +- TWEAKV0 .req v16 +- TWEAKV1 .req v17 +- TWEAKV2 .req v18 +- TWEAKV3 .req v19 +- TWEAKV4 .req v20 +- TWEAKV5 .req v21 
+- TWEAKV6 .req v22 +- TWEAKV7 .req v23 +- +- .align 4 +-.Lror64_8_table: +- .octa 0x080f0e0d0c0b0a090007060504030201 +-.Lror32_8_table: +- .octa 0x0c0f0e0d080b0a090407060500030201 +-.Lrol64_8_table: +- .octa 0x0e0d0c0b0a09080f0605040302010007 +-.Lrol32_8_table: +- .octa 0x0e0d0c0f0a09080b0605040702010003 +-.Lgf128mul_table: +- .octa 0x00000000000000870000000000000001 +-.Lgf64mul_table: +- .octa 0x0000000000000000000000002d361b00 +- +-/* +- * _speck_round_128bytes() - Speck encryption round on 128 bytes at a time +- * +- * Do one Speck encryption round on the 128 bytes (8 blocks for Speck128, 16 for +- * Speck64) stored in X0-X3 and Y0-Y3, using the round key stored in all lanes +- * of ROUND_KEY. 'n' is the lane size: 64 for Speck128, or 32 for Speck64. +- * 'lanes' is the lane specifier: "2d" for Speck128 or "4s" for Speck64. +- */ +-.macro _speck_round_128bytes n, lanes +- +- // x = ror(x, 8) +- tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b +- tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b +- tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b +- tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b +- +- // x += y +- add X_0.\lanes, X_0.\lanes, Y_0.\lanes +- add X_1.\lanes, X_1.\lanes, Y_1.\lanes +- add X_2.\lanes, X_2.\lanes, Y_2.\lanes +- add X_3.\lanes, X_3.\lanes, Y_3.\lanes +- +- // x ^= k +- eor X_0.16b, X_0.16b, ROUND_KEY.16b +- eor X_1.16b, X_1.16b, ROUND_KEY.16b +- eor X_2.16b, X_2.16b, ROUND_KEY.16b +- eor X_3.16b, X_3.16b, ROUND_KEY.16b +- +- // y = rol(y, 3) +- shl TMP0.\lanes, Y_0.\lanes, #3 +- shl TMP1.\lanes, Y_1.\lanes, #3 +- shl TMP2.\lanes, Y_2.\lanes, #3 +- shl TMP3.\lanes, Y_3.\lanes, #3 +- sri TMP0.\lanes, Y_0.\lanes, #(\n - 3) +- sri TMP1.\lanes, Y_1.\lanes, #(\n - 3) +- sri TMP2.\lanes, Y_2.\lanes, #(\n - 3) +- sri TMP3.\lanes, Y_3.\lanes, #(\n - 3) +- +- // y ^= x +- eor Y_0.16b, TMP0.16b, X_0.16b +- eor Y_1.16b, TMP1.16b, X_1.16b +- eor Y_2.16b, TMP2.16b, X_2.16b +- eor Y_3.16b, TMP3.16b, X_3.16b +-.endm +- +-/* +- * _speck_unround_128bytes() - Speck decryption round on 
128 bytes at a time +- * +- * This is the inverse of _speck_round_128bytes(). +- */ +-.macro _speck_unround_128bytes n, lanes +- +- // y ^= x +- eor TMP0.16b, Y_0.16b, X_0.16b +- eor TMP1.16b, Y_1.16b, X_1.16b +- eor TMP2.16b, Y_2.16b, X_2.16b +- eor TMP3.16b, Y_3.16b, X_3.16b +- +- // y = ror(y, 3) +- ushr Y_0.\lanes, TMP0.\lanes, #3 +- ushr Y_1.\lanes, TMP1.\lanes, #3 +- ushr Y_2.\lanes, TMP2.\lanes, #3 +- ushr Y_3.\lanes, TMP3.\lanes, #3 +- sli Y_0.\lanes, TMP0.\lanes, #(\n - 3) +- sli Y_1.\lanes, TMP1.\lanes, #(\n - 3) +- sli Y_2.\lanes, TMP2.\lanes, #(\n - 3) +- sli Y_3.\lanes, TMP3.\lanes, #(\n - 3) +- +- // x ^= k +- eor X_0.16b, X_0.16b, ROUND_KEY.16b +- eor X_1.16b, X_1.16b, ROUND_KEY.16b +- eor X_2.16b, X_2.16b, ROUND_KEY.16b +- eor X_3.16b, X_3.16b, ROUND_KEY.16b +- +- // x -= y +- sub X_0.\lanes, X_0.\lanes, Y_0.\lanes +- sub X_1.\lanes, X_1.\lanes, Y_1.\lanes +- sub X_2.\lanes, X_2.\lanes, Y_2.\lanes +- sub X_3.\lanes, X_3.\lanes, Y_3.\lanes +- +- // x = rol(x, 8) +- tbl X_0.16b, {X_0.16b}, ROTATE_TABLE.16b +- tbl X_1.16b, {X_1.16b}, ROTATE_TABLE.16b +- tbl X_2.16b, {X_2.16b}, ROTATE_TABLE.16b +- tbl X_3.16b, {X_3.16b}, ROTATE_TABLE.16b +-.endm +- +-.macro _next_xts_tweak next, cur, tmp, n +-.if \n == 64 +- /* +- * Calculate the next tweak by multiplying the current one by x, +- * modulo p(x) = x^128 + x^7 + x^2 + x + 1. +- */ +- sshr \tmp\().2d, \cur\().2d, #63 +- and \tmp\().16b, \tmp\().16b, GFMUL_TABLE.16b +- shl \next\().2d, \cur\().2d, #1 +- ext \tmp\().16b, \tmp\().16b, \tmp\().16b, #8 +- eor \next\().16b, \next\().16b, \tmp\().16b +-.else +- /* +- * Calculate the next two tweaks by multiplying the current ones by x^2, +- * modulo p(x) = x^64 + x^4 + x^3 + x + 1. 
+- */ +- ushr \tmp\().2d, \cur\().2d, #62 +- shl \next\().2d, \cur\().2d, #2 +- tbl \tmp\().16b, {GFMUL_TABLE.16b}, \tmp\().16b +- eor \next\().16b, \next\().16b, \tmp\().16b +-.endif +-.endm +- +-/* +- * _speck_xts_crypt() - Speck-XTS encryption/decryption +- * +- * Encrypt or decrypt NBYTES bytes of data from the SRC buffer to the DST buffer +- * using Speck-XTS, specifically the variant with a block size of '2n' and round +- * count given by NROUNDS. The expanded round keys are given in ROUND_KEYS, and +- * the current XTS tweak value is given in TWEAK. It's assumed that NBYTES is a +- * nonzero multiple of 128. +- */ +-.macro _speck_xts_crypt n, lanes, decrypting +- +- /* +- * If decrypting, modify the ROUND_KEYS parameter to point to the last +- * round key rather than the first, since for decryption the round keys +- * are used in reverse order. +- */ +-.if \decrypting +- mov NROUNDS, NROUNDS /* zero the high 32 bits */ +-.if \n == 64 +- add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #3 +- sub ROUND_KEYS, ROUND_KEYS, #8 +-.else +- add ROUND_KEYS, ROUND_KEYS, NROUNDS_X, lsl #2 +- sub ROUND_KEYS, ROUND_KEYS, #4 +-.endif +-.endif +- +- // Load the index vector for tbl-based 8-bit rotates +-.if \decrypting +- ldr ROTATE_TABLE_Q, .Lrol\n\()_8_table +-.else +- ldr ROTATE_TABLE_Q, .Lror\n\()_8_table +-.endif +- +- // One-time XTS preparation +-.if \n == 64 +- // Load first tweak +- ld1 {TWEAKV0.16b}, [TWEAK] +- +- // Load GF(2^128) multiplication table +- ldr GFMUL_TABLE_Q, .Lgf128mul_table +-.else +- // Load first tweak +- ld1 {TWEAKV0.8b}, [TWEAK] +- +- // Load GF(2^64) multiplication table +- ldr GFMUL_TABLE_Q, .Lgf64mul_table +- +- // Calculate second tweak, packing it together with the first +- ushr TMP0.2d, TWEAKV0.2d, #63 +- shl TMP1.2d, TWEAKV0.2d, #1 +- tbl TMP0.8b, {GFMUL_TABLE.16b}, TMP0.8b +- eor TMP0.8b, TMP0.8b, TMP1.8b +- mov TWEAKV0.d[1], TMP0.d[0] +-.endif +- +-.Lnext_128bytes_\@: +- +- // Calculate XTS tweaks for next 128 bytes +- _next_xts_tweak 
TWEAKV1, TWEAKV0, TMP0, \n +- _next_xts_tweak TWEAKV2, TWEAKV1, TMP0, \n +- _next_xts_tweak TWEAKV3, TWEAKV2, TMP0, \n +- _next_xts_tweak TWEAKV4, TWEAKV3, TMP0, \n +- _next_xts_tweak TWEAKV5, TWEAKV4, TMP0, \n +- _next_xts_tweak TWEAKV6, TWEAKV5, TMP0, \n +- _next_xts_tweak TWEAKV7, TWEAKV6, TMP0, \n +- _next_xts_tweak TWEAKV_NEXT, TWEAKV7, TMP0, \n +- +- // Load the next source blocks into {X,Y}[0-3] +- ld1 {X_0.16b-Y_1.16b}, [SRC], #64 +- ld1 {X_2.16b-Y_3.16b}, [SRC], #64 +- +- // XOR the source blocks with their XTS tweaks +- eor TMP0.16b, X_0.16b, TWEAKV0.16b +- eor Y_0.16b, Y_0.16b, TWEAKV1.16b +- eor TMP1.16b, X_1.16b, TWEAKV2.16b +- eor Y_1.16b, Y_1.16b, TWEAKV3.16b +- eor TMP2.16b, X_2.16b, TWEAKV4.16b +- eor Y_2.16b, Y_2.16b, TWEAKV5.16b +- eor TMP3.16b, X_3.16b, TWEAKV6.16b +- eor Y_3.16b, Y_3.16b, TWEAKV7.16b +- +- /* +- * De-interleave the 'x' and 'y' elements of each block, i.e. make it so +- * that the X[0-3] registers contain only the second halves of blocks, +- * and the Y[0-3] registers contain only the first halves of blocks. +- * (Speck uses the order (y, x) rather than the more intuitive (x, y).) 
+- */ +- uzp2 X_0.\lanes, TMP0.\lanes, Y_0.\lanes +- uzp1 Y_0.\lanes, TMP0.\lanes, Y_0.\lanes +- uzp2 X_1.\lanes, TMP1.\lanes, Y_1.\lanes +- uzp1 Y_1.\lanes, TMP1.\lanes, Y_1.\lanes +- uzp2 X_2.\lanes, TMP2.\lanes, Y_2.\lanes +- uzp1 Y_2.\lanes, TMP2.\lanes, Y_2.\lanes +- uzp2 X_3.\lanes, TMP3.\lanes, Y_3.\lanes +- uzp1 Y_3.\lanes, TMP3.\lanes, Y_3.\lanes +- +- // Do the cipher rounds +- mov x6, ROUND_KEYS +- mov w7, NROUNDS +-.Lnext_round_\@: +-.if \decrypting +- ld1r {ROUND_KEY.\lanes}, [x6] +- sub x6, x6, #( \n / 8 ) +- _speck_unround_128bytes \n, \lanes +-.else +- ld1r {ROUND_KEY.\lanes}, [x6], #( \n / 8 ) +- _speck_round_128bytes \n, \lanes +-.endif +- subs w7, w7, #1 +- bne .Lnext_round_\@ +- +- // Re-interleave the 'x' and 'y' elements of each block +- zip1 TMP0.\lanes, Y_0.\lanes, X_0.\lanes +- zip2 Y_0.\lanes, Y_0.\lanes, X_0.\lanes +- zip1 TMP1.\lanes, Y_1.\lanes, X_1.\lanes +- zip2 Y_1.\lanes, Y_1.\lanes, X_1.\lanes +- zip1 TMP2.\lanes, Y_2.\lanes, X_2.\lanes +- zip2 Y_2.\lanes, Y_2.\lanes, X_2.\lanes +- zip1 TMP3.\lanes, Y_3.\lanes, X_3.\lanes +- zip2 Y_3.\lanes, Y_3.\lanes, X_3.\lanes +- +- // XOR the encrypted/decrypted blocks with the tweaks calculated earlier +- eor X_0.16b, TMP0.16b, TWEAKV0.16b +- eor Y_0.16b, Y_0.16b, TWEAKV1.16b +- eor X_1.16b, TMP1.16b, TWEAKV2.16b +- eor Y_1.16b, Y_1.16b, TWEAKV3.16b +- eor X_2.16b, TMP2.16b, TWEAKV4.16b +- eor Y_2.16b, Y_2.16b, TWEAKV5.16b +- eor X_3.16b, TMP3.16b, TWEAKV6.16b +- eor Y_3.16b, Y_3.16b, TWEAKV7.16b +- mov TWEAKV0.16b, TWEAKV_NEXT.16b +- +- // Store the ciphertext in the destination buffer +- st1 {X_0.16b-Y_1.16b}, [DST], #64 +- st1 {X_2.16b-Y_3.16b}, [DST], #64 +- +- // Continue if there are more 128-byte chunks remaining +- subs NBYTES, NBYTES, #128 +- bne .Lnext_128bytes_\@ +- +- // Store the next tweak and return +-.if \n == 64 +- st1 {TWEAKV_NEXT.16b}, [TWEAK] +-.else +- st1 {TWEAKV_NEXT.8b}, [TWEAK] +-.endif +- ret +-.endm +- +-ENTRY(speck128_xts_encrypt_neon) +- _speck_xts_crypt n=64, 
lanes=2d, decrypting=0 +-ENDPROC(speck128_xts_encrypt_neon) +- +-ENTRY(speck128_xts_decrypt_neon) +- _speck_xts_crypt n=64, lanes=2d, decrypting=1 +-ENDPROC(speck128_xts_decrypt_neon) +- +-ENTRY(speck64_xts_encrypt_neon) +- _speck_xts_crypt n=32, lanes=4s, decrypting=0 +-ENDPROC(speck64_xts_encrypt_neon) +- +-ENTRY(speck64_xts_decrypt_neon) +- _speck_xts_crypt n=32, lanes=4s, decrypting=1 +-ENDPROC(speck64_xts_decrypt_neon) +--- a/arch/arm64/crypto/speck-neon-glue.c ++++ /dev/null +@@ -1,282 +0,0 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * NEON-accelerated implementation of Speck128-XTS and Speck64-XTS +- * (64-bit version; based on the 32-bit version) +- * +- * Copyright (c) 2018 Google, Inc +- */ +- +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +-#include +- +-/* The assembly functions only handle multiples of 128 bytes */ +-#define SPECK_NEON_CHUNK_SIZE 128 +- +-/* Speck128 */ +- +-struct speck128_xts_tfm_ctx { +- struct speck128_tfm_ctx main_key; +- struct speck128_tfm_ctx tweak_key; +-}; +- +-asmlinkage void speck128_xts_encrypt_neon(const u64 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-asmlinkage void speck128_xts_decrypt_neon(const u64 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-typedef void (*speck128_crypt_one_t)(const struct speck128_tfm_ctx *, +- u8 *, const u8 *); +-typedef void (*speck128_xts_crypt_many_t)(const u64 *, int, void *, +- const void *, unsigned int, void *); +- +-static __always_inline int +-__speck128_xts_crypt(struct skcipher_request *req, +- speck128_crypt_one_t crypt_one, +- speck128_xts_crypt_many_t crypt_many) +-{ +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); +- const struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- struct skcipher_walk walk; +- le128 tweak; +- int err; +- +- err = skcipher_walk_virt(&walk, req, true); +- +- 
crypto_speck128_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv); +- +- while (walk.nbytes > 0) { +- unsigned int nbytes = walk.nbytes; +- u8 *dst = walk.dst.virt.addr; +- const u8 *src = walk.src.virt.addr; +- +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) { +- unsigned int count; +- +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE); +- kernel_neon_begin(); +- (*crypt_many)(ctx->main_key.round_keys, +- ctx->main_key.nrounds, +- dst, src, count, &tweak); +- kernel_neon_end(); +- dst += count; +- src += count; +- nbytes -= count; +- } +- +- /* Handle any remainder with generic code */ +- while (nbytes >= sizeof(tweak)) { +- le128_xor((le128 *)dst, (const le128 *)src, &tweak); +- (*crypt_one)(&ctx->main_key, dst, dst); +- le128_xor((le128 *)dst, (const le128 *)dst, &tweak); +- gf128mul_x_ble(&tweak, &tweak); +- +- dst += sizeof(tweak); +- src += sizeof(tweak); +- nbytes -= sizeof(tweak); +- } +- err = skcipher_walk_done(&walk, nbytes); +- } +- +- return err; +-} +- +-static int speck128_xts_encrypt(struct skcipher_request *req) +-{ +- return __speck128_xts_crypt(req, crypto_speck128_encrypt, +- speck128_xts_encrypt_neon); +-} +- +-static int speck128_xts_decrypt(struct skcipher_request *req) +-{ +- return __speck128_xts_crypt(req, crypto_speck128_decrypt, +- speck128_xts_decrypt_neon); +-} +- +-static int speck128_xts_setkey(struct crypto_skcipher *tfm, const u8 *key, +- unsigned int keylen) +-{ +- struct speck128_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- int err; +- +- err = xts_verify_key(tfm, key, keylen); +- if (err) +- return err; +- +- keylen /= 2; +- +- err = crypto_speck128_setkey(&ctx->main_key, key, keylen); +- if (err) +- return err; +- +- return crypto_speck128_setkey(&ctx->tweak_key, key + keylen, keylen); +-} +- +-/* Speck64 */ +- +-struct speck64_xts_tfm_ctx { +- struct speck64_tfm_ctx main_key; +- struct speck64_tfm_ctx tweak_key; +-}; +- +-asmlinkage void speck64_xts_encrypt_neon(const u32 *round_keys, int nrounds, +- void *dst, 
const void *src, +- unsigned int nbytes, void *tweak); +- +-asmlinkage void speck64_xts_decrypt_neon(const u32 *round_keys, int nrounds, +- void *dst, const void *src, +- unsigned int nbytes, void *tweak); +- +-typedef void (*speck64_crypt_one_t)(const struct speck64_tfm_ctx *, +- u8 *, const u8 *); +-typedef void (*speck64_xts_crypt_many_t)(const u32 *, int, void *, +- const void *, unsigned int, void *); +- +-static __always_inline int +-__speck64_xts_crypt(struct skcipher_request *req, speck64_crypt_one_t crypt_one, +- speck64_xts_crypt_many_t crypt_many) +-{ +- struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req); +- const struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm); +- struct skcipher_walk walk; +- __le64 tweak; +- int err; +- +- err = skcipher_walk_virt(&walk, req, true); +- +- crypto_speck64_encrypt(&ctx->tweak_key, (u8 *)&tweak, walk.iv); +- +- while (walk.nbytes > 0) { +- unsigned int nbytes = walk.nbytes; +- u8 *dst = walk.dst.virt.addr; +- const u8 *src = walk.src.virt.addr; +- +- if (nbytes >= SPECK_NEON_CHUNK_SIZE && may_use_simd()) { +- unsigned int count; +- +- count = round_down(nbytes, SPECK_NEON_CHUNK_SIZE); +- kernel_neon_begin(); +- (*crypt_many)(ctx->main_key.round_keys, +- ctx->main_key.nrounds, +- dst, src, count, &tweak); +- kernel_neon_end(); +- dst += count; +- src += count; +- nbytes -= count; +- } +- +- /* Handle any remainder with generic code */ +- while (nbytes >= sizeof(tweak)) { +- *(__le64 *)dst = *(__le64 *)src ^ tweak; +- (*crypt_one)(&ctx->main_key, dst, dst); +- *(__le64 *)dst ^= tweak; +- tweak = cpu_to_le64((le64_to_cpu(tweak) << 1) ^ +- ((tweak & cpu_to_le64(1ULL << 63)) ? 
+-					     0x1B : 0));
+-			dst += sizeof(tweak);
+-			src += sizeof(tweak);
+-			nbytes -= sizeof(tweak);
+-		}
+-		err = skcipher_walk_done(&walk, nbytes);
+-	}
+-
+-	return err;
+-}
+-
+-static int speck64_xts_encrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_encrypt,
+-				   speck64_xts_encrypt_neon);
+-}
+-
+-static int speck64_xts_decrypt(struct skcipher_request *req)
+-{
+-	return __speck64_xts_crypt(req, crypto_speck64_decrypt,
+-				   speck64_xts_decrypt_neon);
+-}
+-
+-static int speck64_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
+-			      unsigned int keylen)
+-{
+-	struct speck64_xts_tfm_ctx *ctx = crypto_skcipher_ctx(tfm);
+-	int err;
+-
+-	err = xts_verify_key(tfm, key, keylen);
+-	if (err)
+-		return err;
+-
+-	keylen /= 2;
+-
+-	err = crypto_speck64_setkey(&ctx->main_key, key, keylen);
+-	if (err)
+-		return err;
+-
+-	return crypto_speck64_setkey(&ctx->tweak_key, key + keylen, keylen);
+-}
+-
+-static struct skcipher_alg speck_algs[] = {
+-	{
+-		.base.cra_name		= "xts(speck128)",
+-		.base.cra_driver_name	= "xts-speck128-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK128_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck128_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK128_128_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK128_256_KEY_SIZE,
+-		.ivsize			= SPECK128_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck128_xts_setkey,
+-		.encrypt		= speck128_xts_encrypt,
+-		.decrypt		= speck128_xts_decrypt,
+-	}, {
+-		.base.cra_name		= "xts(speck64)",
+-		.base.cra_driver_name	= "xts-speck64-neon",
+-		.base.cra_priority	= 300,
+-		.base.cra_blocksize	= SPECK64_BLOCK_SIZE,
+-		.base.cra_ctxsize	= sizeof(struct speck64_xts_tfm_ctx),
+-		.base.cra_alignmask	= 7,
+-		.base.cra_module	= THIS_MODULE,
+-		.min_keysize		= 2 * SPECK64_96_KEY_SIZE,
+-		.max_keysize		= 2 * SPECK64_128_KEY_SIZE,
+-		.ivsize			= SPECK64_BLOCK_SIZE,
+-		.walksize		= SPECK_NEON_CHUNK_SIZE,
+-		.setkey			= speck64_xts_setkey,
+-		.encrypt		= speck64_xts_encrypt,
+-		.decrypt		= speck64_xts_decrypt,
+-	}
+-};
+-
+-static int __init speck_neon_module_init(void)
+-{
+-	if (!(elf_hwcap & HWCAP_ASIMD))
+-		return -ENODEV;
+-	return crypto_register_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_neon_module_exit(void)
+-{
+-	crypto_unregister_skciphers(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_neon_module_init);
+-module_exit(speck_neon_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (NEON-accelerated)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("xts(speck128)");
+-MODULE_ALIAS_CRYPTO("xts-speck128-neon");
+-MODULE_ALIAS_CRYPTO("xts(speck64)");
+-MODULE_ALIAS_CRYPTO("xts-speck64-neon");
+--- a/arch/m68k/configs/amiga_defconfig
++++ b/arch/m68k/configs/amiga_defconfig
+@@ -657,7 +657,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/apollo_defconfig
++++ b/arch/m68k/configs/apollo_defconfig
+@@ -614,7 +614,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/atari_defconfig
++++ b/arch/m68k/configs/atari_defconfig
+@@ -635,7 +635,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/bvme6000_defconfig
++++ b/arch/m68k/configs/bvme6000_defconfig
+@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/hp300_defconfig
++++ b/arch/m68k/configs/hp300_defconfig
+@@ -616,7 +616,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/mac_defconfig
++++ b/arch/m68k/configs/mac_defconfig
+@@ -638,7 +638,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/multi_defconfig
++++ b/arch/m68k/configs/multi_defconfig
+@@ -720,7 +720,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/mvme147_defconfig
++++ b/arch/m68k/configs/mvme147_defconfig
+@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/mvme16x_defconfig
++++ b/arch/m68k/configs/mvme16x_defconfig
+@@ -606,7 +606,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/q40_defconfig
++++ b/arch/m68k/configs/q40_defconfig
+@@ -629,7 +629,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/sun3_defconfig
++++ b/arch/m68k/configs/sun3_defconfig
+@@ -607,7 +607,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/m68k/configs/sun3x_defconfig
++++ b/arch/m68k/configs/sun3x_defconfig
+@@ -608,7 +608,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_LZO=m
+--- a/arch/s390/defconfig
++++ b/arch/s390/defconfig
+@@ -221,7 +221,6 @@ CONFIG_CRYPTO_SALSA20=m
+ CONFIG_CRYPTO_SEED=m
+ CONFIG_CRYPTO_SERPENT=m
+ CONFIG_CRYPTO_SM4=m
+-CONFIG_CRYPTO_SPECK=m
+ CONFIG_CRYPTO_TEA=m
+ CONFIG_CRYPTO_TWOFISH=m
+ CONFIG_CRYPTO_DEFLATE=m
+--- a/crypto/Kconfig
++++ b/crypto/Kconfig
+@@ -1590,20 +1590,6 @@ config CRYPTO_SM4
+ 
+ 	  If unsure, say N.
+ 
+-config CRYPTO_SPECK
+-	tristate "Speck cipher algorithm"
+-	select CRYPTO_ALGAPI
+-	help
+-	  Speck is a lightweight block cipher that is tuned for optimal
+-	  performance in software (rather than hardware).
+-
+-	  Speck may not be as secure as AES, and should only be used on systems
+-	  where AES is not fast enough.
+-
+-	  See also: <https://eprint.iacr.org/2013/404.pdf>
+-
+-	  If unsure, say N.
+-
+ config CRYPTO_TEA
+ 	tristate "TEA, XTEA and XETA cipher algorithms"
+ 	select CRYPTO_ALGAPI
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -115,7 +115,6 @@ obj-$(CONFIG_CRYPTO_TEA) += tea.o
+ obj-$(CONFIG_CRYPTO_KHAZAD) += khazad.o
+ obj-$(CONFIG_CRYPTO_ANUBIS) += anubis.o
+ obj-$(CONFIG_CRYPTO_SEED) += seed.o
+-obj-$(CONFIG_CRYPTO_SPECK) += speck.o
+ obj-$(CONFIG_CRYPTO_SALSA20) += salsa20_generic.o
+ obj-$(CONFIG_CRYPTO_CHACHA20) += chacha20_generic.o
+ obj-$(CONFIG_CRYPTO_POLY1305) += poly1305_generic.o
+--- a/crypto/speck.c
++++ /dev/null
+@@ -1,307 +0,0 @@
+-// SPDX-License-Identifier: GPL-2.0
+-/*
+- * Speck: a lightweight block cipher
+- *
+- * Copyright (c) 2018 Google, Inc
+- *
+- * Speck has 10 variants, including 5 block sizes.  For now we only implement
+- * the variants Speck128/128, Speck128/192, Speck128/256, Speck64/96, and
+- * Speck64/128.  Speck${B}/${K} denotes the variant with a block size of B bits
+- * and a key size of K bits.  The Speck128 variants are believed to be the most
+- * secure variants, and they use the same block size and key sizes as AES.  The
+- * Speck64 variants are less secure, but on 32-bit processors are usually
+- * faster.  The remaining variants (Speck32, Speck48, and Speck96) are even less
+- * secure and/or not as well suited for implementation on either 32-bit or
+- * 64-bit processors, so are omitted.
+- *
+- * Reference: "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * In a correspondence, the Speck designers have also clarified that the words
+- * should be interpreted in little-endian format, and the words should be
+- * ordered such that the first word of each block is 'y' rather than 'x', and
+- * the first key word (rather than the last) becomes the first round key.
+- */
+-
+-#include <asm/unaligned.h>
+-#include <crypto/speck.h>
+-#include <linux/bitops.h>
+-#include <linux/crypto.h>
+-#include <linux/init.h>
+-#include <linux/module.h>
+-
+-/* Speck128 */
+-
+-static __always_inline void speck128_round(u64 *x, u64 *y, u64 k)
+-{
+-	*x = ror64(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol64(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck128_unround(u64 *x, u64 *y, u64 k)
+-{
+-	*y ^= *x;
+-	*y = ror64(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol64(*x, 8);
+-}
+-
+-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck128_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_encrypt);
+-
+-static void speck128_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx,
+-			     u8 *out, const u8 *in)
+-{
+-	u64 y = get_unaligned_le64(in);
+-	u64 x = get_unaligned_le64(in + 8);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck128_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le64(y, out);
+-	put_unaligned_le64(x, out + 8);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_decrypt);
+-
+-static void speck128_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck128_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	u64 l[3];
+-	u64 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK128_128_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		ctx->nrounds = SPECK128_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[0], &k, i);
+-		}
+-		break;
+-	case SPECK128_192_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		ctx->nrounds = SPECK128_192_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK128_256_KEY_SIZE:
+-		k = get_unaligned_le64(key);
+-		l[0] = get_unaligned_le64(key + 8);
+-		l[1] = get_unaligned_le64(key + 16);
+-		l[2] = get_unaligned_le64(key + 24);
+-		ctx->nrounds = SPECK128_256_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck128_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck128_setkey);
+-
+-static int speck128_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			   unsigned int keylen)
+-{
+-	return crypto_speck128_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Speck64 */
+-
+-static __always_inline void speck64_round(u32 *x, u32 *y, u32 k)
+-{
+-	*x = ror32(*x, 8);
+-	*x += *y;
+-	*x ^= k;
+-	*y = rol32(*y, 3);
+-	*y ^= *x;
+-}
+-
+-static __always_inline void speck64_unround(u32 *x, u32 *y, u32 k)
+-{
+-	*y ^= *x;
+-	*y = ror32(*y, 3);
+-	*x ^= k;
+-	*x -= *y;
+-	*x = rol32(*x, 8);
+-}
+-
+-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = 0; i < ctx->nrounds; i++)
+-		speck64_round(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_encrypt);
+-
+-static void speck64_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_encrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx,
+-			    u8 *out, const u8 *in)
+-{
+-	u32 y = get_unaligned_le32(in);
+-	u32 x = get_unaligned_le32(in + 4);
+-	int i;
+-
+-	for (i = ctx->nrounds - 1; i >= 0; i--)
+-		speck64_unround(&x, &y, ctx->round_keys[i]);
+-
+-	put_unaligned_le32(y, out);
+-	put_unaligned_le32(x, out + 4);
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_decrypt);
+-
+-static void speck64_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+-{
+-	crypto_speck64_decrypt(crypto_tfm_ctx(tfm), out, in);
+-}
+-
+-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	u32 l[3];
+-	u32 k;
+-	int i;
+-
+-	switch (keylen) {
+-	case SPECK64_96_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		ctx->nrounds = SPECK64_96_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 2], &k, i);
+-		}
+-		break;
+-	case SPECK64_128_KEY_SIZE:
+-		k = get_unaligned_le32(key);
+-		l[0] = get_unaligned_le32(key + 4);
+-		l[1] = get_unaligned_le32(key + 8);
+-		l[2] = get_unaligned_le32(key + 12);
+-		ctx->nrounds = SPECK64_128_NROUNDS;
+-		for (i = 0; i < ctx->nrounds; i++) {
+-			ctx->round_keys[i] = k;
+-			speck64_round(&l[i % 3], &k, i);
+-		}
+-		break;
+-	default:
+-		return -EINVAL;
+-	}
+-
+-	return 0;
+-}
+-EXPORT_SYMBOL_GPL(crypto_speck64_setkey);
+-
+-static int speck64_setkey(struct crypto_tfm *tfm, const u8 *key,
+-			  unsigned int keylen)
+-{
+-	return crypto_speck64_setkey(crypto_tfm_ctx(tfm), key, keylen);
+-}
+-
+-/* Algorithm definitions */
+-
+-static struct crypto_alg speck_algs[] = {
+-	{
+-		.cra_name		= "speck128",
+-		.cra_driver_name	= "speck128-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK128_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck128_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK128_128_KEY_SIZE,
+-				.cia_max_keysize	= SPECK128_256_KEY_SIZE,
+-				.cia_setkey		= speck128_setkey,
+-				.cia_encrypt		= speck128_encrypt,
+-				.cia_decrypt		= speck128_decrypt
+-			}
+-		}
+-	}, {
+-		.cra_name		= "speck64",
+-		.cra_driver_name	= "speck64-generic",
+-		.cra_priority		= 100,
+-		.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+-		.cra_blocksize		= SPECK64_BLOCK_SIZE,
+-		.cra_ctxsize		= sizeof(struct speck64_tfm_ctx),
+-		.cra_module		= THIS_MODULE,
+-		.cra_u			= {
+-			.cipher = {
+-				.cia_min_keysize	= SPECK64_96_KEY_SIZE,
+-				.cia_max_keysize	= SPECK64_128_KEY_SIZE,
+-				.cia_setkey		= speck64_setkey,
+-				.cia_encrypt		= speck64_encrypt,
+-				.cia_decrypt		= speck64_decrypt
+-			}
+-		}
+-	}
+-};
+-
+-static int __init speck_module_init(void)
+-{
+-	return crypto_register_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-static void __exit speck_module_exit(void)
+-{
+-	crypto_unregister_algs(speck_algs, ARRAY_SIZE(speck_algs));
+-}
+-
+-module_init(speck_module_init);
+-module_exit(speck_module_exit);
+-
+-MODULE_DESCRIPTION("Speck block cipher (generic)");
+-MODULE_LICENSE("GPL");
+-MODULE_AUTHOR("Eric Biggers <ebiggers@google.com>");
+-MODULE_ALIAS_CRYPTO("speck128");
+-MODULE_ALIAS_CRYPTO("speck128-generic");
+-MODULE_ALIAS_CRYPTO("speck64");
+-MODULE_ALIAS_CRYPTO("speck64-generic");
+--- a/crypto/testmgr.c
++++ b/crypto/testmgr.c
+@@ -3038,18 +3038,6 @@ static const struct alg_test_desc alg_te
+ 			.cipher = __VECS(sm4_tv_template)
+ 		}
+ 	}, {
+-		.alg = "ecb(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_tv_template)
+-		}
+-	}, {
+-		.alg = "ecb(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_tv_template)
+-		}
+-	}, {
+ 		.alg = "ecb(tea)",
+ 		.test = alg_test_skcipher,
+ 		.suite = {
+@@ -3577,18 +3565,6 @@ static const struct alg_test_desc alg_te
+ 			.cipher = __VECS(serpent_xts_tv_template)
+ 		}
+ 	}, {
+-		.alg = "xts(speck128)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck128_xts_tv_template)
+-		}
+-	}, {
+-		.alg = "xts(speck64)",
+-		.test = alg_test_skcipher,
+-		.suite = {
+-			.cipher = __VECS(speck64_xts_tv_template)
+-		}
+-	}, {
+ 		.alg = "xts(twofish)",
+ 		.test = alg_test_skcipher,
+ 		.suite = {
+--- a/crypto/testmgr.h
++++ b/crypto/testmgr.h
+@@ -10198,744 +10198,6 @@ static const struct cipher_testvec sm4_t
+ 	}
+ };
+ 
+-/*
+- * Speck test vectors taken from the original paper:
+- * "The Simon and Speck Families of Lightweight Block Ciphers"
+- * https://eprint.iacr.org/2013/404.pdf
+- *
+- * Note that the paper does not make byte and word order clear.  But it was
+- * confirmed with the authors that the intended orders are little endian byte
+- * order and (y, x) word order.  Equivalently, the printed test vectors, when
+- * looking at only the bytes (ignoring the whitespace that divides them into
+- * words), are backwards: the left-most byte is actually the one with the
+- * highest memory address, while the right-most byte is actually the one with
+- * the lowest memory address.
+- */
+-
+-static const struct cipher_testvec speck128_tv_template[] = {
+-	{ /* Speck128/128 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+-		.klen	= 16,
+-		.ptext	= "\x20\x6d\x61\x64\x65\x20\x69\x74"
+-			  "\x20\x65\x71\x75\x69\x76\x61\x6c",
+-		.ctext	= "\x18\x0d\x57\x5c\xdf\xfe\x60\x78"
+-			  "\x65\x32\x78\x79\x51\x98\x5d\xa6",
+-		.len	= 16,
+-	}, { /* Speck128/192 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17",
+-		.klen	= 24,
+-		.ptext	= "\x65\x6e\x74\x20\x74\x6f\x20\x43"
+-			  "\x68\x69\x65\x66\x20\x48\x61\x72",
+-		.ctext	= "\x86\x18\x3c\xe0\x5d\x18\xbc\xf9"
+-			  "\x66\x55\x13\x13\x3a\xcf\xe4\x1b",
+-		.len	= 16,
+-	}, { /* Speck128/256 */
+-		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f",
+-		.klen	= 32,
+-		.ptext	= "\x70\x6f\x6f\x6e\x65\x72\x2e\x20"
+-			  "\x49\x6e\x20\x74\x68\x6f\x73\x65",
+-		.ctext	= "\x43\x8f\x18\x9c\x8d\xb4\xee\x4e"
+-			  "\x3e\xf5\xc0\x05\x04\x01\x09\x41",
+-		.len	= 16,
+-	},
+-};
+-
+-/*
+- * Speck128-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck128 as the cipher
+- */
+-static const struct cipher_testvec speck128_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\xbe\xa0\xe7\x03\xd7\xfe\xab\x62"
+-			  "\x3b\x99\x4a\x64\x74\x77\xac\xed"
+-			  "\xd8\xf4\xa6\xcf\xae\xb9\x07\x42"
+-			  "\x51\xd9\xb6\x1d\xe0\x5e\xbc\x54",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\xfb\x53\x81\x75\x6f\x9f\x34\xad"
+-			  "\x7e\x01\xed\x7b\xcc\xda\x4e\x4a"
+-			  "\xd4\x84\xa4\x53\xd5\x88\x73\x1b"
+-			  "\xfd\xcb\xae\x0d\xf3\x04\xee\xe6",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 32,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x21\x52\x84\x15\xd1\xf7\x21\x55"
+-			  "\xd9\x75\x4a\xd3\xc5\xdb\x9f\x7d"
+-			  "\xda\x63\xb2\xf1\x82\xb0\x89\x59"
+-			  "\x86\xd4\xaa\xaa\xdd\xff\x4f\x92",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95",
+-		.klen	= 32,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\x57\xb5\xf8\x71\x6e\x6d\xdd\x82"
+-			  "\x53\xd0\xed\x2d\x30\xc1\x20\xef"
+-			  "\x70\x67\x5e\xff\x09\x70\xbb\xc1"
+-			  "\x3a\x7b\x48\x26\xd9\x0b\xf4\x48"
+-			  "\xbe\xce\xb1\xc7\xb2\x67\xc4\xa7"
+-			  "\x76\xf8\x36\x30\xb7\xb4\x9a\xd9"
+-			  "\xf5\x9d\xd0\x7b\xc1\x06\x96\x44"
+-			  "\x19\xc5\x58\x84\x63\xb9\x12\x68"
+-			  "\x68\xc7\xaa\x18\x98\xf2\x1f\x5c"
+-			  "\x39\xa6\xd8\x32\x2b\xc3\x51\xfd"
+-			  "\x74\x79\x2e\xb4\x44\xd7\x69\xc4"
+-			  "\xfc\x29\xe6\xed\x26\x1e\xa6\x9d"
+-			  "\x1c\xbe\x00\x0e\x7f\x3a\xca\xfb"
+-			  "\x6d\x13\x65\xa0\xf9\x31\x12\xe2"
+-			  "\x26\xd1\xec\x2b\x0a\x8b\x59\x99"
+-			  "\xa7\x49\xa0\x0e\x09\x33\x85\x50"
+-			  "\xc3\x23\xca\x7a\xdd\x13\x45\x5f"
+-			  "\xde\x4c\xa7\xcb\x00\x8a\x66\x6f"
+-			  "\xa2\xb6\xb1\x2e\xe1\xa0\x18\xf6"
+-			  "\xad\xf3\xbd\xeb\xc7\xef\x55\x4f"
+-			  "\x79\x91\x8d\x36\x13\x7b\xd0\x4a"
+-			  "\x6c\x39\xfb\x53\xb8\x6f\x02\x51"
+-			  "\xa5\x20\xac\x24\x1c\x73\x59\x73"
+-			  "\x58\x61\x3a\x87\x58\xb3\x20\x56"
+-			  "\x39\x06\x2b\x4d\xd3\x20\x2b\x89"
+-			  "\x3f\xa2\xf0\x96\xeb\x7f\xa4\xcd"
+-			  "\x11\xae\xbd\xcb\x3a\xb4\xd9\x91"
+-			  "\x09\x35\x71\x50\x65\xac\x92\xe3"
+-			  "\x7b\x32\xc0\x7a\xdd\xd4\xc3\x92"
+-			  "\x6f\xeb\x79\xde\x6f\xd3\x25\xc9"
+-			  "\xcd\x63\xf5\x1e\x7a\x3b\x26\x9d"
+-			  "\x77\x04\x80\xa9\xbf\x38\xb5\xbd"
+-			  "\xb8\x05\x07\xbd\xfd\xab\x7b\xf8"
+-			  "\x2a\x26\xcc\x49\x14\x6d\x55\x01"
+-			  "\x06\x94\xd8\xb2\x2d\x53\x83\x1b"
+-			  "\x8f\xd4\xdd\x57\x12\x7e\x18\xba"
+-			  "\x8e\xe2\x4d\x80\xef\x7e\x6b\x9d"
+-			  "\x24\xa9\x60\xa4\x97\x85\x86\x2a"
+-			  "\x01\x00\x09\xf1\xcb\x4a\x24\x1c"
+-			  "\xd8\xf6\xe6\x5b\xe7\x5d\xf2\xc4"
+-			  "\x97\x1c\x10\xc6\x4d\x66\x4f\x98"
+-			  "\x87\x30\xac\xd5\xea\x73\x49\x10"
+-			  "\x80\xea\xe5\x5f\x4d\x5f\x03\x33"
+-			  "\x66\x02\x35\x3d\x60\x06\x36\x4f"
+-			  "\x14\x1c\xd8\x07\x1f\x78\xd0\xf8"
+-			  "\x4f\x6c\x62\x7c\x15\xa5\x7c\x28"
+-			  "\x7c\xcc\xeb\x1f\xd1\x07\x90\x93"
+-			  "\x7e\xc2\xa8\x3a\x80\xc0\xf5\x30"
+-			  "\xcc\x75\xcf\x16\x26\xa9\x26\x3b"
+-			  "\xe7\x68\x2f\x15\x21\x5b\xe4\x00"
+-			  "\xbd\x48\x50\xcd\x75\x70\xc4\x62"
+-			  "\xbb\x41\xfb\x89\x4a\x88\x3b\x3b"
+-			  "\x51\x66\x02\x69\x04\x97\x36\xd4"
+-			  "\x75\xae\x0b\xa3\x42\xf8\xca\x79"
+-			  "\x8f\x93\xe9\xcc\x38\xbd\xd6\xd2"
+-			  "\xf9\x70\x4e\xc3\x6a\x8e\x25\xbd"
+-			  "\xea\x15\x5a\xa0\x85\x7e\x81\x0d"
+-			  "\x03\xe7\x05\x39\xf5\x05\x26\xee"
+-			  "\xec\xaa\x1f\x3d\xc9\x98\x76\x01"
+-			  "\x2c\xf4\xfc\xa3\x88\x77\x38\xc4"
+-			  "\x50\x65\x50\x6d\x04\x1f\xdf\x5a"
+-			  "\xaa\xf2\x01\xa9\xc1\x8d\xee\xca"
+-			  "\x47\x26\xef\x39\xb8\xb4\xf2\xd1"
+-			  "\xd6\xbb\x1b\x2a\xc1\x34\x14\xcf",
+-		.len	= 512,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x62\x49\x77\x57\x24\x70\x93\x69"
+-			  "\x99\x59\x57\x49\x66\x96\x76\x27"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93"
+-			  "\x23\x84\x62\x64\x33\x83\x27\x95"
+-			  "\x02\x88\x41\x97\x16\x93\x99\x37"
+-			  "\x51\x05\x82\x09\x74\x94\x45\x92",
+-		.klen	= 64,
+-		.iv	= "\xff\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xc5\x85\x2a\x4b\x73\xe4\xf6\xf1"
+-			  "\x7e\xf9\xf6\xe9\xa3\x73\x36\xcb"
+-			  "\xaa\xb6\x22\xb0\x24\x6e\x3d\x73"
+-			  "\x92\x99\xde\xd3\x76\xed\xcd\x63"
+-			  "\x64\x3a\x22\x57\xc1\x43\x49\xd4"
+-			  "\x79\x36\x31\x19\x62\xae\x10\x7e"
+-			  "\x7d\xcf\x7a\xe2\x6b\xce\x27\xfa"
+-			  "\xdc\x3d\xd9\x83\xd3\x42\x4c\xe0"
+-			  "\x1b\xd6\x1d\x1a\x6f\xd2\x03\x00"
+-			  "\xfc\x81\x99\x8a\x14\x62\xf5\x7e"
+-			  "\x0d\xe7\x12\xe8\x17\x9d\x0b\xec"
+-			  "\xe2\xf7\xc9\xa7\x63\xd1\x79\xb6"
+-			  "\x62\x62\x37\xfe\x0a\x4c\x4a\x37"
+-			  "\x70\xc7\x5e\x96\x5f\xbc\x8e\x9e"
+-			  "\x85\x3c\x4f\x26\x64\x85\xbc\x68"
+-			  "\xb0\xe0\x86\x5e\x26\x41\xce\x11"
+-			  "\x50\xda\x97\x14\xe9\x9e\xc7\x6d"
+-			  "\x3b\xdc\x43\xde\x2b\x27\x69\x7d"
+-			  "\xfc\xb0\x28\xbd\x8f\xb1\xc6\x31"
+-			  "\x14\x4d\xf0\x74\x37\xfd\x07\x25"
+-			  "\x96\x55\xe5\xfc\x9e\x27\x2a\x74"
+-			  "\x1b\x83\x4d\x15\x83\xac\x57\xa0"
+-			  "\xac\xa5\xd0\x38\xef\x19\x56\x53"
+-			  "\x25\x4b\xfc\xce\x04\x23\xe5\x6b"
+-			  "\xf6\xc6\x6c\x32\x0b\xb3\x12\xc5"
+-			  "\xed\x22\x34\x1c\x5d\xed\x17\x06"
+-			  "\x36\xa3\xe6\x77\xb9\x97\x46\xb8"
+-			  "\xe9\x3f\x7e\xc7\xbc\x13\x5c\xdc"
+-			  "\x6e\x3f\x04\x5e\xd1\x59\xa5\x82"
+-			  "\x35\x91\x3d\x1b\xe4\x97\x9f\x92"
+-			  "\x1c\x5e\x5f\x6f\x41\xd4\x62\xa1"
+-			  "\x8d\x39\xfc\x42\xfb\x38\x80\xb9"
+-			  "\x0a\xe3\xcc\x6a\x93\xd9\x7a\xb1"
+-			  "\xe9\x69\xaf\x0a\x6b\x75\x38\xa7"
+-			  "\xa1\xbf\xf7\xda\x95\x93\x4b\x78"
+-			  "\x19\xf5\x94\xf9\xd2\x00\x33\x37"
+-			  "\xcf\xf5\x9e\x9c\xf3\xcc\xa6\xee"
+-			  "\x42\xb2\x9e\x2c\x5f\x48\x23\x26"
+-			  "\x15\x25\x17\x03\x3d\xfe\x2c\xfc"
+-			  "\xeb\xba\xda\xe0\x00\x05\xb6\xa6"
+-			  "\x07\xb3\xe8\x36\x5b\xec\x5b\xbf"
+-			  "\xd6\x5b\x00\x74\xc6\x97\xf1\x6a"
+-			  "\x49\xa1\xc3\xfa\x10\x52\xb9\x14"
+-			  "\xad\xb7\x73\xf8\x78\x12\xc8\x59"
+-			  "\x17\x80\x4c\x57\x39\xf1\x6d\x80"
+-			  "\x25\x77\x0f\x5e\x7d\xf0\xaf\x21"
+-			  "\xec\xce\xb7\xc8\x02\x8a\xed\x53"
+-			  "\x2c\x25\x68\x2e\x1f\x85\x5e\x67"
+-			  "\xd1\x07\x7a\x3a\x89\x08\xe0\x34"
+-			  "\xdc\xdb\x26\xb4\x6b\x77\xfc\x40"
+-			  "\x31\x15\x72\xa0\xf0\x73\xd9\x3b"
+-			  "\xd5\xdb\xfe\xfc\x8f\xa9\x44\xa2"
+-			  "\x09\x9f\xc6\x33\xe5\xe2\x88\xe8"
+-			  "\xf3\xf0\x1a\xf4\xce\x12\x0f\xd6"
+-			  "\xf7\x36\xe6\xa4\xf4\x7a\x10\x58"
+-			  "\xcc\x1f\x48\x49\x65\x47\x75\xe9"
+-			  "\x28\xe1\x65\x7b\xf2\xc4\xb5\x07"
+-			  "\xf2\xec\x76\xd8\x8f\x09\xf3\x16"
+-			  "\xa1\x51\x89\x3b\xeb\x96\x42\xac"
+-			  "\x65\xe0\x67\x63\x29\xdc\xb4\x7d"
+-			  "\xf2\x41\x51\x6a\xcb\xde\x3c\xfb"
+-			  "\x66\x8d\x13\xca\xe0\x59\x2a\x00"
+-			  "\xc9\x53\x4c\xe6\x9e\xe2\x73\xd5"
+-			  "\x67\x19\xb2\xbd\x9a\x63\xd7\x5c",
+-		.len	= 512,
+-		.also_non_np = 1,
+-		.np	= 3,
+-		.tap	= { 512 - 20, 4, 16 },
+-	}
+-};
+-
+-static const struct cipher_testvec speck64_tv_template[] = {
+-	{ /* Speck64/96 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13",
+-		.klen	= 12,
+-		.ptext	= "\x65\x61\x6e\x73\x20\x46\x61\x74",
+-		.ctext	= "\x6c\x94\x75\x41\xec\x52\x79\x9f",
+-		.len	= 8,
+-	}, { /* Speck64/128 */
+-		.key	= "\x00\x01\x02\x03\x08\x09\x0a\x0b"
+-			  "\x10\x11\x12\x13\x18\x19\x1a\x1b",
+-		.klen	= 16,
+-		.ptext	= "\x2d\x43\x75\x74\x74\x65\x72\x3b",
+-		.ctext	= "\x8b\x02\x4e\x45\x48\xa5\x6f\x8c",
+-		.len	= 8,
+-	},
+-};
+-
+-/*
+- * Speck64-XTS test vectors, taken from the AES-XTS test vectors with the
+- * ciphertext recomputed with Speck64 as the cipher, and key lengths adjusted
+- */
+-static const struct cipher_testvec speck64_xts_tv_template[] = {
+-	{
+-		.key	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ctext	= "\x84\xaf\x54\x07\x19\xd4\x7c\xa6"
+-			  "\xe4\xfe\xdf\xc4\x1f\x34\xc3\xc2"
+-			  "\x80\xf5\x72\xe7\xcd\xf0\x99\x22"
+-			  "\x35\xa7\x2f\x06\xef\xdc\x51\xaa",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x11\x11\x11\x11\x11\x11\x11\x11"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x12\x56\x73\xcd\x15\x87\xa8\x59"
+-			  "\xcf\x84\xae\xd9\x1c\x66\xd6\x9f"
+-			  "\xb3\x12\x69\x7e\x36\xeb\x52\xff"
+-			  "\x62\xdd\xba\x90\xb3\xe1\xee\x99",
+-		.len	= 32,
+-	}, {
+-		.key	= "\xff\xfe\xfd\xfc\xfb\xfa\xf9\xf8"
+-			  "\xf7\xf6\xf5\xf4\xf3\xf2\xf1\xf0"
+-			  "\x22\x22\x22\x22\x22\x22\x22\x22",
+-		.klen	= 24,
+-		.iv	= "\x33\x33\x33\x33\x33\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44"
+-			  "\x44\x44\x44\x44\x44\x44\x44\x44",
+-		.ctext	= "\x15\x1b\xe4\x2c\xa2\x5a\x2d\x2c"
+-			  "\x27\x36\xc0\xbf\x5d\xea\x36\x37"
+-			  "\x2d\x1a\x88\xbc\x66\xb5\xd0\x0b"
+-			  "\xa1\xbc\x19\xb2\x0f\x3b\x75\x34",
+-		.len	= 32,
+-	}, {
+-		.key	= "\x27\x18\x28\x18\x28\x45\x90\x45"
+-			  "\x23\x53\x60\x28\x74\x71\x35\x26"
+-			  "\x31\x41\x59\x26\x53\x58\x97\x93",
+-		.klen	= 24,
+-		.iv	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+-			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+-		.ptext	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
+-			  "\x00\x01\x02\x03\x04\x05\x06\x07"
+-			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f"
+-			  "\x10\x11\x12\x13\x14\x15\x16\x17"
+-			  "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f"
+-			  "\x20\x21\x22\x23\x24\x25\x26\x27"
+-			  "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f"
+-			  "\x30\x31\x32\x33\x34\x35\x36\x37"
+-			  "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f"
+-			  "\x40\x41\x42\x43\x44\x45\x46\x47"
+-			  "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f"
+-			  "\x50\x51\x52\x53\x54\x55\x56\x57"
+-			  "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f"
+-			  "\x60\x61\x62\x63\x64\x65\x66\x67"
+-			  "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f"
+-			  "\x70\x71\x72\x73\x74\x75\x76\x77"
+-			  "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f"
+-			  "\x80\x81\x82\x83\x84\x85\x86\x87"
+-			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
+-			  "\x90\x91\x92\x93\x94\x95\x96\x97"
+-			  "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f"
+-			  "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7"
+-			  "\xa8\xa9\xaa\xab\xac\xad\xae\xaf"
+-			  "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7"
+-			  "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf"
+-			  "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7"
+-			  "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf"
+-			  "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7"
+-			  "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf"
+-			  "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7"
+-			  "\xe8\xe9\xea\xeb\xec\xed\xee\xef"
+-			  "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7"
+-			  "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff",
+-		.ctext	= "\xaf\xa1\x81\xa6\x32\xbb\x15\x8e"
+-			  "\xf8\x95\x2e\xd3\xe6\xee\x7e\x09"
+-			  "\x0c\x1a\xf5\x02\x97\x8b\xe3\xb3"
+-			  "\x11\xc7\x39\x96\xd0\x95\xf4\x56"
+-			  "\xf4\xdd\x03\x38\x01\x44\x2c\xcf"
+-			  "\x88\xae\x8e\x3c\xcd\xe7\xaa\x66"
+-			  "\xfe\x3d\xc6\xfb\x01\x23\x51\x43"
+-			  "\xd5\xd2\x13\x86\x94\x34\xe9\x62"
+-			  "\xf9\x89\xe3\xd1\x7b\xbe\xf8\xef"
+-			  "\x76\x35\x04\x3f\xdb\x23\x9d\x0b"
+-			  "\x85\x42\xb9\x02\xd6\xcc\xdb\x96"
+-			  "\xa7\x6b\x27\xb6\xd4\x45\x8f\x7d"
+-			  "\xae\xd2\x04\xd5\xda\xc1\x7e\x24"
+-			  "\x8c\x73\xbe\x48\x7e\xcf\x65\x28"
+-			  "\x29\xe5\xbe\x54\x30\xcb\x46\x95"
+-			  "\x4f\x2e\x8a\x36\xc8\x27\xc5\xbe"
+-			  "\xd0\x1a\xaf\xab\x26\xcd\x9e\x69"
+-			  "\xa1\x09\x95\x71\x26\xe9\xc4\xdf"
+-			  "\xe6\x31\xc3\x46\xda\xaf\x0b\x41"
+-			  "\x1f\xab\xb1\x8e\xd6\xfc\x0b\xb3"
+-			  "\x82\xc0\x37\x27\xfc\x91\xa7\x05"
+-			  "\xfb\xc5\xdc\x2b\x74\x96\x48\x43"
+-			  "\x5d\x9c\x19\x0f\x60\x63\x3a\x1f"
+-			  "\x6f\xf0\x03\xbe\x4d\xfd\xc8\x4a"
+-			  "\xc6\xa4\x81\x6d\xc3\x12\x2a\x5c"
+-			  "\x07\xff\xf3\x72\x74\x48\xb5\x40"
+-			  "\x50\xb5\xdd\x90\x43\x31\x18\x15"
+-
"\x7b\xf2\xa6\xdb\x83\xc8\x4b\x4a" +- "\x29\x93\x90\x8b\xda\x07\xf0\x35" +- "\x6d\x90\x88\x09\x4e\x83\xf5\x5b" +- "\x94\x12\xbb\x33\x27\x1d\x3f\x23" +- "\x51\xa8\x7c\x07\xa2\xae\x77\xa6" +- "\x50\xfd\xcc\xc0\x4f\x80\x7a\x9f" +- "\x66\xdd\xcd\x75\x24\x8b\x33\xf7" +- "\x20\xdb\x83\x9b\x4f\x11\x63\x6e" +- "\xcf\x37\xef\xc9\x11\x01\x5c\x45" +- "\x32\x99\x7c\x3c\x9e\x42\x89\xe3" +- "\x70\x6d\x15\x9f\xb1\xe6\xb6\x05" +- "\xfe\x0c\xb9\x49\x2d\x90\x6d\xcc" +- "\x5d\x3f\xc1\xfe\x89\x0a\x2e\x2d" +- "\xa0\xa8\x89\x3b\x73\x39\xa5\x94" +- "\x4c\xa4\xa6\xbb\xa7\x14\x46\x89" +- "\x10\xff\xaf\xef\xca\xdd\x4f\x80" +- "\xb3\xdf\x3b\xab\xd4\xe5\x5a\xc7" +- "\x33\xca\x00\x8b\x8b\x3f\xea\xec" +- "\x68\x8a\xc2\x6d\xfd\xd4\x67\x0f" +- "\x22\x31\xe1\x0e\xfe\x5a\x04\xd5" +- "\x64\xa3\xf1\x1a\x76\x28\xcc\x35" +- "\x36\xa7\x0a\x74\xf7\x1c\x44\x9b" +- "\xc7\x1b\x53\x17\x02\xea\xd1\xad" +- "\x13\x51\x73\xc0\xa0\xb2\x05\x32" +- "\xa8\xa2\x37\x2e\xe1\x7a\x3a\x19" +- "\x26\xb4\x6c\x62\x5d\xb3\x1a\x1d" +- "\x59\xda\xee\x1a\x22\x18\xda\x0d" +- "\x88\x0f\x55\x8b\x72\x62\xfd\xc1" +- "\x69\x13\xcd\x0d\x5f\xc1\x09\x52" +- "\xee\xd6\xe3\x84\x4d\xee\xf6\x88" +- "\xaf\x83\xdc\x76\xf4\xc0\x93\x3f" +- "\x4a\x75\x2f\xb0\x0b\x3e\xc4\x54" +- "\x7d\x69\x8d\x00\x62\x77\x0d\x14" +- "\xbe\x7c\xa6\x7d\xc5\x24\x4f\xf3" +- "\x50\xf7\x5f\xf4\xc2\xca\x41\x97" +- "\x37\xbe\x75\x74\xcd\xf0\x75\x6e" +- "\x25\x23\x94\xbd\xda\x8d\xb0\xd4", +- .len = 512, +- }, { +- .key = "\x27\x18\x28\x18\x28\x45\x90\x45" +- "\x23\x53\x60\x28\x74\x71\x35\x26" +- "\x62\x49\x77\x57\x24\x70\x93\x69" +- "\x99\x59\x57\x49\x66\x96\x76\x27", +- .klen = 32, +- .iv = "\xff\x00\x00\x00\x00\x00\x00\x00" +- "\x00\x00\x00\x00\x00\x00\x00\x00", +- .ptext = "\x00\x01\x02\x03\x04\x05\x06\x07" +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" +- "\x10\x11\x12\x13\x14\x15\x16\x17" +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" +- "\x20\x21\x22\x23\x24\x25\x26\x27" +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" +- "\x30\x31\x32\x33\x34\x35\x36\x37" +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f" 
+- "\x40\x41\x42\x43\x44\x45\x46\x47" +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f" +- "\x50\x51\x52\x53\x54\x55\x56\x57" +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f" +- "\x60\x61\x62\x63\x64\x65\x66\x67" +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f" +- "\x70\x71\x72\x73\x74\x75\x76\x77" +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f" +- "\x80\x81\x82\x83\x84\x85\x86\x87" +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" +- "\x90\x91\x92\x93\x94\x95\x96\x97" +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f" +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7" +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf" +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7" +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf" +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7" +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf" +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7" +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf" +- "\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7" +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef" +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7" +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff" +- "\x00\x01\x02\x03\x04\x05\x06\x07" +- "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f" +- "\x10\x11\x12\x13\x14\x15\x16\x17" +- "\x18\x19\x1a\x1b\x1c\x1d\x1e\x1f" +- "\x20\x21\x22\x23\x24\x25\x26\x27" +- "\x28\x29\x2a\x2b\x2c\x2d\x2e\x2f" +- "\x30\x31\x32\x33\x34\x35\x36\x37" +- "\x38\x39\x3a\x3b\x3c\x3d\x3e\x3f" +- "\x40\x41\x42\x43\x44\x45\x46\x47" +- "\x48\x49\x4a\x4b\x4c\x4d\x4e\x4f" +- "\x50\x51\x52\x53\x54\x55\x56\x57" +- "\x58\x59\x5a\x5b\x5c\x5d\x5e\x5f" +- "\x60\x61\x62\x63\x64\x65\x66\x67" +- "\x68\x69\x6a\x6b\x6c\x6d\x6e\x6f" +- "\x70\x71\x72\x73\x74\x75\x76\x77" +- "\x78\x79\x7a\x7b\x7c\x7d\x7e\x7f" +- "\x80\x81\x82\x83\x84\x85\x86\x87" +- "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f" +- "\x90\x91\x92\x93\x94\x95\x96\x97" +- "\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f" +- "\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7" +- "\xa8\xa9\xaa\xab\xac\xad\xae\xaf" +- "\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7" +- "\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf" +- "\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7" +- "\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf" +- "\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7" +- "\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf" +- 
"\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7" +- "\xe8\xe9\xea\xeb\xec\xed\xee\xef" +- "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7" +- "\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff", +- .ctext = "\x55\xed\x71\xd3\x02\x8e\x15\x3b" +- "\xc6\x71\x29\x2d\x3e\x89\x9f\x59" +- "\x68\x6a\xcc\x8a\x56\x97\xf3\x95" +- "\x4e\x51\x08\xda\x2a\xf8\x6f\x3c" +- "\x78\x16\xea\x80\xdb\x33\x75\x94" +- "\xf9\x29\xc4\x2b\x76\x75\x97\xc7" +- "\xf2\x98\x2c\xf9\xff\xc8\xd5\x2b" +- "\x18\xf1\xaf\xcf\x7c\xc5\x0b\xee" +- "\xad\x3c\x76\x7c\xe6\x27\xa2\x2a" +- "\xe4\x66\xe1\xab\xa2\x39\xfc\x7c" +- "\xf5\xec\x32\x74\xa3\xb8\x03\x88" +- "\x52\xfc\x2e\x56\x3f\xa1\xf0\x9f" +- "\x84\x5e\x46\xed\x20\x89\xb6\x44" +- "\x8d\xd0\xed\x54\x47\x16\xbe\x95" +- "\x8a\xb3\x6b\x72\xc4\x32\x52\x13" +- "\x1b\xb0\x82\xbe\xac\xf9\x70\xa6" +- "\x44\x18\xdd\x8c\x6e\xca\x6e\x45" +- "\x8f\x1e\x10\x07\x57\x25\x98\x7b" +- "\x17\x8c\x78\xdd\x80\xa7\xd9\xd8" +- "\x63\xaf\xb9\x67\x57\xfd\xbc\xdb" +- "\x44\xe9\xc5\x65\xd1\xc7\x3b\xff" +- "\x20\xa0\x80\x1a\xc3\x9a\xad\x5e" +- "\x5d\x3b\xd3\x07\xd9\xf5\xfd\x3d" +- "\x4a\x8b\xa8\xd2\x6e\x7a\x51\x65" +- "\x6c\x8e\x95\xe0\x45\xc9\x5f\x4a" +- "\x09\x3c\x3d\x71\x7f\x0c\x84\x2a" +- "\xc8\x48\x52\x1a\xc2\xd5\xd6\x78" +- "\x92\x1e\xa0\x90\x2e\xea\xf0\xf3" +- "\xdc\x0f\xb1\xaf\x0d\x9b\x06\x2e" +- "\x35\x10\x30\x82\x0d\xe7\xc5\x9b" +- "\xde\x44\x18\xbd\x9f\xd1\x45\xa9" +- "\x7b\x7a\x4a\xad\x35\x65\x27\xca" +- "\xb2\xc3\xd4\x9b\x71\x86\x70\xee" +- "\xf1\x89\x3b\x85\x4b\x5b\xaa\xaf" +- "\xfc\x42\xc8\x31\x59\xbe\x16\x60" +- "\x4f\xf9\xfa\x12\xea\xd0\xa7\x14" +- "\xf0\x7a\xf3\xd5\x8d\xbd\x81\xef" +- "\x52\x7f\x29\x51\x94\x20\x67\x3c" +- "\xd1\xaf\x77\x9f\x22\x5a\x4e\x63" +- "\xe7\xff\x73\x25\xd1\xdd\x96\x8a" +- "\x98\x52\x6d\xf3\xac\x3e\xf2\x18" +- "\x6d\xf6\x0a\x29\xa6\x34\x3d\xed" +- "\xe3\x27\x0d\x9d\x0a\x02\x44\x7e" +- "\x5a\x7e\x67\x0f\x0a\x9e\xd6\xad" +- "\x91\xe6\x4d\x81\x8c\x5c\x59\xaa" +- "\xfb\xeb\x56\x53\xd2\x7d\x4c\x81" +- "\x65\x53\x0f\x41\x11\xbd\x98\x99" +- "\xf9\xc6\xfa\x51\x2e\xa3\xdd\x8d" +- 
"\x84\x98\xf9\x34\xed\x33\x2a\x1f" +- "\x82\xed\xc1\x73\x98\xd3\x02\xdc" +- "\xe6\xc2\x33\x1d\xa2\xb4\xca\x76" +- "\x63\x51\x34\x9d\x96\x12\xae\xce" +- "\x83\xc9\x76\x5e\xa4\x1b\x53\x37" +- "\x17\xd5\xc0\x80\x1d\x62\xf8\x3d" +- "\x54\x27\x74\xbb\x10\x86\x57\x46" +- "\x68\xe1\xed\x14\xe7\x9d\xfc\x84" +- "\x47\xbc\xc2\xf8\x19\x4b\x99\xcf" +- "\x7a\xe9\xc4\xb8\x8c\x82\x72\x4d" +- "\x7b\x4f\x38\x55\x36\x71\x64\xc1" +- "\xfc\x5c\x75\x52\x33\x02\x18\xf8" +- "\x17\xe1\x2b\xc2\x43\x39\xbd\x76" +- "\x9b\x63\x76\x32\x2f\x19\x72\x10" +- "\x9f\x21\x0c\xf1\x66\x50\x7f\xa5" +- "\x0d\x1f\x46\xe0\xba\xd3\x2f\x3c", +- .len = 512, +- .also_non_np = 1, +- .np = 3, +- .tap = { 512 - 20, 4, 16 }, +- } +-}; +- + /* Cast6 test vectors from RFC 2612 */ + static const struct cipher_testvec cast6_tv_template[] = { + { +--- a/fs/crypto/fscrypt_private.h ++++ b/fs/crypto/fscrypt_private.h +@@ -83,10 +83,6 @@ static inline bool fscrypt_valid_enc_mod + filenames_mode == FS_ENCRYPTION_MODE_AES_256_CTS) + return true; + +- if (contents_mode == FS_ENCRYPTION_MODE_SPECK128_256_XTS && +- filenames_mode == FS_ENCRYPTION_MODE_SPECK128_256_CTS) +- return true; +- + return false; + } + +--- a/fs/crypto/keyinfo.c ++++ b/fs/crypto/keyinfo.c +@@ -174,16 +174,6 @@ static struct fscrypt_mode { + .cipher_str = "cts(cbc(aes))", + .keysize = 16, + }, +- [FS_ENCRYPTION_MODE_SPECK128_256_XTS] = { +- .friendly_name = "Speck128/256-XTS", +- .cipher_str = "xts(speck128)", +- .keysize = 64, +- }, +- [FS_ENCRYPTION_MODE_SPECK128_256_CTS] = { +- .friendly_name = "Speck128/256-CTS-CBC", +- .cipher_str = "cts(cbc(speck128))", +- .keysize = 32, +- }, + }; + + static struct fscrypt_mode * +--- a/include/crypto/speck.h ++++ /dev/null +@@ -1,62 +0,0 @@ +-// SPDX-License-Identifier: GPL-2.0 +-/* +- * Common values for the Speck algorithm +- */ +- +-#ifndef _CRYPTO_SPECK_H +-#define _CRYPTO_SPECK_H +- +-#include +- +-/* Speck128 */ +- +-#define SPECK128_BLOCK_SIZE 16 +- +-#define SPECK128_128_KEY_SIZE 16 +-#define 
SPECK128_128_NROUNDS 32 +- +-#define SPECK128_192_KEY_SIZE 24 +-#define SPECK128_192_NROUNDS 33 +- +-#define SPECK128_256_KEY_SIZE 32 +-#define SPECK128_256_NROUNDS 34 +- +-struct speck128_tfm_ctx { +- u64 round_keys[SPECK128_256_NROUNDS]; +- int nrounds; +-}; +- +-void crypto_speck128_encrypt(const struct speck128_tfm_ctx *ctx, +- u8 *out, const u8 *in); +- +-void crypto_speck128_decrypt(const struct speck128_tfm_ctx *ctx, +- u8 *out, const u8 *in); +- +-int crypto_speck128_setkey(struct speck128_tfm_ctx *ctx, const u8 *key, +- unsigned int keysize); +- +-/* Speck64 */ +- +-#define SPECK64_BLOCK_SIZE 8 +- +-#define SPECK64_96_KEY_SIZE 12 +-#define SPECK64_96_NROUNDS 26 +- +-#define SPECK64_128_KEY_SIZE 16 +-#define SPECK64_128_NROUNDS 27 +- +-struct speck64_tfm_ctx { +- u32 round_keys[SPECK64_128_NROUNDS]; +- int nrounds; +-}; +- +-void crypto_speck64_encrypt(const struct speck64_tfm_ctx *ctx, +- u8 *out, const u8 *in); +- +-void crypto_speck64_decrypt(const struct speck64_tfm_ctx *ctx, +- u8 *out, const u8 *in); +- +-int crypto_speck64_setkey(struct speck64_tfm_ctx *ctx, const u8 *key, +- unsigned int keysize); +- +-#endif /* _CRYPTO_SPECK_H */ +--- a/include/uapi/linux/fs.h ++++ b/include/uapi/linux/fs.h +@@ -279,8 +279,8 @@ struct fsxattr { + #define FS_ENCRYPTION_MODE_AES_256_CTS 4 + #define FS_ENCRYPTION_MODE_AES_128_CBC 5 + #define FS_ENCRYPTION_MODE_AES_128_CTS 6 +-#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 +-#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 ++#define FS_ENCRYPTION_MODE_SPECK128_256_XTS 7 /* Removed, do not use. */ ++#define FS_ENCRYPTION_MODE_SPECK128_256_CTS 8 /* Removed, do not use. 
*/ + + struct fscrypt_policy { + __u8 version; diff --git a/queue-4.19/crypto-tcrypt-fix-ghash-generic-speed-test.patch b/queue-4.19/crypto-tcrypt-fix-ghash-generic-speed-test.patch new file mode 100644 index 00000000000..96aaeab8216 --- /dev/null +++ b/queue-4.19/crypto-tcrypt-fix-ghash-generic-speed-test.patch @@ -0,0 +1,43 @@ +From 331351f89c36bf7d03561a28b6f64fa10a9f6f3a Mon Sep 17 00:00:00 2001 +From: =?UTF-8?q?Horia=20Geant=C4=83?= +Date: Wed, 12 Sep 2018 16:20:48 +0300 +Subject: crypto: tcrypt - fix ghash-generic speed test +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Horia Geantă + +commit 331351f89c36bf7d03561a28b6f64fa10a9f6f3a upstream. + +ghash is a keyed hash algorithm, thus setkey needs to be called. +Otherwise the following error occurs: +$ modprobe tcrypt mode=318 sec=1 +testing speed of async ghash-generic (ghash-generic) +tcrypt: test 0 ( 16 byte blocks, 16 bytes per update, 1 updates): +tcrypt: hashing failed ret=-126 + +Cc: # 4.6+ +Fixes: 0660511c0bee ("crypto: tcrypt - Use ahash") +Tested-by: Franck Lenormand +Signed-off-by: Horia Geantă +Acked-by: Ard Biesheuvel +Signed-off-by: Herbert Xu +Signed-off-by: Greg Kroah-Hartman + +--- + crypto/tcrypt.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/crypto/tcrypt.c ++++ b/crypto/tcrypt.c +@@ -1103,6 +1103,9 @@ static void test_ahash_speed_common(cons + break; + } + ++ if (speed[i].klen) ++ crypto_ahash_setkey(tfm, tvmem[0], speed[i].klen); ++ + pr_info("test%3u " + "(%5u byte blocks,%5u bytes per update,%4u updates): ", + i, speed[i].blen, speed[i].plen, speed[i].blen / speed[i].plen); diff --git a/queue-4.19/dmaengine-ppc4xx-fix-off-by-one-build-failure.patch b/queue-4.19/dmaengine-ppc4xx-fix-off-by-one-build-failure.patch new file mode 100644 index 00000000000..a58f6e6101c --- /dev/null +++ b/queue-4.19/dmaengine-ppc4xx-fix-off-by-one-build-failure.patch @@ -0,0 +1,39 @@ +From 27d8d2d7a9b7eb05c4484b74b749eaee7b50b845 Mon Sep 17 00:00:00 
2001 +From: Christian Lamparter +Date: Sun, 14 Oct 2018 23:28:50 +0200 +Subject: dmaengine: ppc4xx: fix off-by-one build failure + +From: Christian Lamparter + +commit 27d8d2d7a9b7eb05c4484b74b749eaee7b50b845 upstream. + +There are two poly_store, but one should have been poly_show. + +|adma.c:4382:16: error: conflicting types for 'poly_store' +| static ssize_t poly_store(struct device_driver *dev, const char *buf, +| ^~~~~~~~~~ +|adma.c:4363:16: note: previous definition of 'poly_store' was here +| static ssize_t poly_store(struct device_driver *dev, char *buf) +| ^~~~~~~~~~ + +CC: stable@vger.kernel.org +Fixes: 13efe1a05384 ("dmaengine: ppc4xx: remove DRIVER_ATTR() usage") +Signed-off-by: Christian Lamparter +Signed-off-by: Vinod Koul +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/dma/ppc4xx/adma.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/dma/ppc4xx/adma.c ++++ b/drivers/dma/ppc4xx/adma.c +@@ -4360,7 +4360,7 @@ static ssize_t enable_store(struct devic + } + static DRIVER_ATTR_RW(enable); + +-static ssize_t poly_store(struct device_driver *dev, char *buf) ++static ssize_t poly_show(struct device_driver *dev, char *buf) + { + ssize_t size = 0; + u32 reg; diff --git a/queue-4.19/drivers-hv-kvp-fix-two-this-statement-may-fall-through-warnings.patch b/queue-4.19/drivers-hv-kvp-fix-two-this-statement-may-fall-through-warnings.patch new file mode 100644 index 00000000000..2325a673ab5 --- /dev/null +++ b/queue-4.19/drivers-hv-kvp-fix-two-this-statement-may-fall-through-warnings.patch @@ -0,0 +1,64 @@ +From fc62c3b1977d62e6374fd6e28d371bb42dfa5c9d Mon Sep 17 00:00:00 2001 +From: Dexuan Cui +Date: Sun, 23 Sep 2018 21:10:43 +0000 +Subject: Drivers: hv: kvp: Fix two "this statement may fall through" warnings + +From: Dexuan Cui + +commit fc62c3b1977d62e6374fd6e28d371bb42dfa5c9d upstream. 
+ +We don't need to call process_ib_ipinfo() if message->kvp_hdr.operation is +KVP_OP_GET_IP_INFO in kvp_send_key(), because here we just need to pass on +the op code from the host to the userspace; when the userspace returns +the info requested by the host, we pass the info on to the host in +kvp_respond_to_host() -> process_ob_ipinfo(). BTW, the current buggy code +actually doesn't cause any harm, because only message->kvp_hdr.operation +is used by the userspace, in the case of KVP_OP_GET_IP_INFO. + +The patch also adds a missing "break;" in kvp_send_key(). BTW, the current +buggy code actually doesn't cause any harm, because in the case of +KVP_OP_SET, the unexpected fall-through corrupts +message->body.kvp_set.data.key_size, but that is not really used: see +the definition of struct hv_kvp_exchg_msg_value. + +Signed-off-by: Dexuan Cui +Cc: K. Y. Srinivasan +Cc: Haiyang Zhang +Cc: Stephen Hemminger +Cc: +Signed-off-by: K. Y. Srinivasan +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/hv/hv_kvp.c | 6 ++++-- + 1 file changed, 4 insertions(+), 2 deletions(-) + +--- a/drivers/hv/hv_kvp.c ++++ b/drivers/hv/hv_kvp.c +@@ -353,7 +353,6 @@ static void process_ib_ipinfo(void *in_m + + out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled; + +- default: + utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id, + MAX_ADAPTER_ID_SIZE, + UTF16_LITTLE_ENDIAN, +@@ -406,7 +405,7 @@ kvp_send_key(struct work_struct *dummy) + process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO); + break; + case KVP_OP_GET_IP_INFO: +- process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO); ++ /* We only need to pass on message->kvp_hdr.operation. 
*/ + break; + case KVP_OP_SET: + switch (in_msg->body.kvp_set.data.value_type) { +@@ -446,6 +445,9 @@ kvp_send_key(struct work_struct *dummy) + break; + + } ++ ++ break; ++ + case KVP_OP_GET: + message->body.kvp_set.data.key_size = + utf16s_to_utf8s( diff --git a/queue-4.19/edac-amd64-add-family-17h-models-10h-2fh-support.patch b/queue-4.19/edac-amd64-add-family-17h-models-10h-2fh-support.patch new file mode 100644 index 00000000000..c4454e1c2c1 --- /dev/null +++ b/queue-4.19/edac-amd64-add-family-17h-models-10h-2fh-support.patch @@ -0,0 +1,75 @@ +From 8960de4a5ca7980ed1e19e7ca5a774d3b7a55c38 Mon Sep 17 00:00:00 2001 +From: Michael Jin +Date: Thu, 16 Aug 2018 15:28:40 -0400 +Subject: EDAC, amd64: Add Family 17h, models 10h-2fh support + +From: Michael Jin + +commit 8960de4a5ca7980ed1e19e7ca5a774d3b7a55c38 upstream. + +Add new device IDs for family 17h, models 10h-2fh. + +This is required by amd64_edac_mod in order to properly detect PCI +device functions 0 and 6. + +Signed-off-by: Michael Jin +Reviewed-by: Yazen Ghannam +Cc: +Link: http://lkml.kernel.org/r/20180816192840.31166-1-mikhail.jin@gmail.com +Signed-off-by: Borislav Petkov +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/edac/amd64_edac.c | 14 ++++++++++++++ + drivers/edac/amd64_edac.h | 3 +++ + 2 files changed, 17 insertions(+) + +--- a/drivers/edac/amd64_edac.c ++++ b/drivers/edac/amd64_edac.c +@@ -2200,6 +2200,15 @@ static struct amd64_family_type family_t + .dbam_to_cs = f17_base_addr_to_cs_size, + } + }, ++ [F17_M10H_CPUS] = { ++ .ctl_name = "F17h_M10h", ++ .f0_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F0, ++ .f6_id = PCI_DEVICE_ID_AMD_17H_M10H_DF_F6, ++ .ops = { ++ .early_channel_count = f17_early_channel_count, ++ .dbam_to_cs = f17_base_addr_to_cs_size, ++ } ++ }, + }; + + /* +@@ -3188,6 +3197,11 @@ static struct amd64_family_type *per_fam + break; + + case 0x17: ++ if (pvt->model >= 0x10 && pvt->model <= 0x2f) { ++ fam_type = &family_types[F17_M10H_CPUS]; ++ pvt->ops = &family_types[F17_M10H_CPUS].ops; 
++ break; ++ } + fam_type = &family_types[F17_CPUS]; + pvt->ops = &family_types[F17_CPUS].ops; + break; +--- a/drivers/edac/amd64_edac.h ++++ b/drivers/edac/amd64_edac.h +@@ -115,6 +115,8 @@ + #define PCI_DEVICE_ID_AMD_16H_M30H_NB_F2 0x1582 + #define PCI_DEVICE_ID_AMD_17H_DF_F0 0x1460 + #define PCI_DEVICE_ID_AMD_17H_DF_F6 0x1466 ++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F0 0x15e8 ++#define PCI_DEVICE_ID_AMD_17H_M10H_DF_F6 0x15ee + + /* + * Function 1 - Address Map +@@ -281,6 +283,7 @@ enum amd_families { + F16_CPUS, + F16_M30H_CPUS, + F17_CPUS, ++ F17_M10H_CPUS, + NUM_FAMILIES, + }; + diff --git a/queue-4.19/edac-i7core-sb-skx-_edac-fix-uncorrected-error-counting.patch b/queue-4.19/edac-i7core-sb-skx-_edac-fix-uncorrected-error-counting.patch new file mode 100644 index 00000000000..b95116b6ad9 --- /dev/null +++ b/queue-4.19/edac-i7core-sb-skx-_edac-fix-uncorrected-error-counting.patch @@ -0,0 +1,62 @@ +From 432de7fd7630c84ad24f1c2acd1e3bb4ce3741ca Mon Sep 17 00:00:00 2001 +From: Tony Luck +Date: Fri, 28 Sep 2018 14:39:34 -0700 +Subject: EDAC, {i7core,sb,skx}_edac: Fix uncorrected error counting + +From: Tony Luck + +commit 432de7fd7630c84ad24f1c2acd1e3bb4ce3741ca upstream. + +The count of errors is picked up from bits 52:38 of the machine check +bank status register. But this is the count of *corrected* errors. If an +uncorrected error is being logged, the h/w sets this field to 0. Which +means that when edac_mc_handle_error() is called, the EDAC core will +carefully add zero to the appropriate uncorrected error counts. + +Signed-off-by: Tony Luck +[ Massage commit message. 
] +Signed-off-by: Borislav Petkov +Cc: stable@vger.kernel.org +Cc: Aristeu Rozanski +Cc: Mauro Carvalho Chehab +Cc: Qiuxu Zhuo +Cc: linux-edac +Link: http://lkml.kernel.org/r/20180928213934.19890-1-tony.luck@intel.com +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/edac/i7core_edac.c | 1 + + drivers/edac/sb_edac.c | 1 + + drivers/edac/skx_edac.c | 1 + + 3 files changed, 3 insertions(+) + +--- a/drivers/edac/i7core_edac.c ++++ b/drivers/edac/i7core_edac.c +@@ -1711,6 +1711,7 @@ static void i7core_mce_output_error(stru + u32 errnum = find_first_bit(&error, 32); + + if (uncorrected_error) { ++ core_err_cnt = 1; + if (ripv) + tp_event = HW_EVENT_ERR_FATAL; + else +--- a/drivers/edac/sb_edac.c ++++ b/drivers/edac/sb_edac.c +@@ -2888,6 +2888,7 @@ static void sbridge_mce_output_error(str + recoverable = GET_BITFIELD(m->status, 56, 56); + + if (uncorrected_error) { ++ core_err_cnt = 1; + if (ripv) { + type = "FATAL"; + tp_event = HW_EVENT_ERR_FATAL; +--- a/drivers/edac/skx_edac.c ++++ b/drivers/edac/skx_edac.c +@@ -959,6 +959,7 @@ static void skx_mce_output_error(struct + recoverable = GET_BITFIELD(m->status, 56, 56); + + if (uncorrected_error) { ++ core_err_cnt = 1; + if (ripv) { + type = "FATAL"; + tp_event = HW_EVENT_ERR_FATAL; diff --git a/queue-4.19/edac-skx_edac-fix-logical-channel-intermediate-decoding.patch b/queue-4.19/edac-skx_edac-fix-logical-channel-intermediate-decoding.patch new file mode 100644 index 00000000000..dd7ce1b8fb9 --- /dev/null +++ b/queue-4.19/edac-skx_edac-fix-logical-channel-intermediate-decoding.patch @@ -0,0 +1,41 @@ +From 8f18973877204dc8ca4ce1004a5d28683b9a7086 Mon Sep 17 00:00:00 2001 +From: Qiuxu Zhuo +Date: Tue, 9 Oct 2018 10:20:25 -0700 +Subject: EDAC, skx_edac: Fix logical channel intermediate decoding + +From: Qiuxu Zhuo + +commit 8f18973877204dc8ca4ce1004a5d28683b9a7086 upstream. + +The code "lchan = (lchan << 1) | ~lchan" for logical channel +intermediate decoding is wrong. 
The wrong intermediate decoding +result is {0xffffffff, 0xfffffffe}. + +Fix it by replacing '~' with '!'. The correct intermediate +decoding result is {0x1, 0x2}. + +Signed-off-by: Qiuxu Zhuo +Signed-off-by: Tony Luck +Signed-off-by: Borislav Petkov +CC: Aristeu Rozanski +CC: Mauro Carvalho Chehab +CC: linux-edac +Cc: stable@vger.kernel.org +Link: http://lkml.kernel.org/r/20181009172025.18594-1-tony.luck@intel.com +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/edac/skx_edac.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/edac/skx_edac.c ++++ b/drivers/edac/skx_edac.c +@@ -668,7 +668,7 @@ sad_found: + break; + case 2: + lchan = (addr >> shift) % 2; +- lchan = (lchan << 1) | ~lchan; ++ lchan = (lchan << 1) | !lchan; + break; + case 3: + lchan = ((addr >> shift) % 2) << 1; diff --git a/queue-4.19/ext4-fix-ext4_ioc_swap_boot.patch b/queue-4.19/ext4-fix-ext4_ioc_swap_boot.patch new file mode 100644 index 00000000000..3e0c1536859 --- /dev/null +++ b/queue-4.19/ext4-fix-ext4_ioc_swap_boot.patch @@ -0,0 +1,126 @@ +From 18aded17492088962ef43f00825179598b3e8c58 Mon Sep 17 00:00:00 2001 +From: Theodore Ts'o +Date: Tue, 2 Oct 2018 18:21:19 -0400 +Subject: ext4: fix EXT4_IOC_SWAP_BOOT + +From: Theodore Ts'o + +commit 18aded17492088962ef43f00825179598b3e8c58 upstream. + +The code EXT4_IOC_SWAP_BOOT ioctl hasn't been updated in a while, and +it's a bit broken with respect to more modern ext4 kernels, especially +metadata checksums. + +Other problems fixed with this commit: + +* Don't allow installing a DAX, swap file, or an encrypted file as a + boot loader. + +* Respect the immutable and append-only flags. + +* Wait until any DIO operations are finished *before* calling + truncate_inode_pages(). + +* Don't swap inode->i_flags, since these flags have nothing to do with + the inode blocks --- and it will give the IMA/audit code heartburn + when the inode is evicted. 
+ +Signed-off-by: Theodore Ts'o +Cc: stable@kernel.org +Reported-by: syzbot+e81ccd4744c6c4f71354@syzkaller.appspotmail.com +Signed-off-by: Greg Kroah-Hartman + +--- + fs/ext4/ioctl.c | 33 +++++++++++++++++++++++++++------ + 1 file changed, 27 insertions(+), 6 deletions(-) + +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -67,7 +67,6 @@ static void swap_inode_data(struct inode + ei1 = EXT4_I(inode1); + ei2 = EXT4_I(inode2); + +- swap(inode1->i_flags, inode2->i_flags); + swap(inode1->i_version, inode2->i_version); + swap(inode1->i_blocks, inode2->i_blocks); + swap(inode1->i_bytes, inode2->i_bytes); +@@ -85,6 +84,21 @@ static void swap_inode_data(struct inode + i_size_write(inode2, isize); + } + ++static void reset_inode_seed(struct inode *inode) ++{ ++ struct ext4_inode_info *ei = EXT4_I(inode); ++ struct ext4_sb_info *sbi = EXT4_SB(inode->i_sb); ++ __le32 inum = cpu_to_le32(inode->i_ino); ++ __le32 gen = cpu_to_le32(inode->i_generation); ++ __u32 csum; ++ ++ if (!ext4_has_metadata_csum(inode->i_sb)) ++ return; ++ ++ csum = ext4_chksum(sbi, sbi->s_csum_seed, (__u8 *)&inum, sizeof(inum)); ++ ei->i_csum_seed = ext4_chksum(sbi, csum, (__u8 *)&gen, sizeof(gen)); ++} ++ + /** + * Swap the information from the given @inode and the inode + * EXT4_BOOT_LOADER_INO. 
It will basically swap i_data and all other +@@ -102,10 +116,13 @@ static long swap_inode_boot_loader(struc + struct inode *inode_bl; + struct ext4_inode_info *ei_bl; + +- if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode)) ++ if (inode->i_nlink != 1 || !S_ISREG(inode->i_mode) || ++ IS_SWAPFILE(inode) || IS_ENCRYPTED(inode) || ++ ext4_has_inline_data(inode)) + return -EINVAL; + +- if (!inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) ++ if (IS_RDONLY(inode) || IS_APPEND(inode) || IS_IMMUTABLE(inode) || ++ !inode_owner_or_capable(inode) || !capable(CAP_SYS_ADMIN)) + return -EPERM; + + inode_bl = ext4_iget(sb, EXT4_BOOT_LOADER_INO); +@@ -120,13 +137,13 @@ static long swap_inode_boot_loader(struc + * that only 1 swap_inode_boot_loader is running. */ + lock_two_nondirectories(inode, inode_bl); + +- truncate_inode_pages(&inode->i_data, 0); +- truncate_inode_pages(&inode_bl->i_data, 0); +- + /* Wait for all existing dio workers */ + inode_dio_wait(inode); + inode_dio_wait(inode_bl); + ++ truncate_inode_pages(&inode->i_data, 0); ++ truncate_inode_pages(&inode_bl->i_data, 0); ++ + handle = ext4_journal_start(inode_bl, EXT4_HT_MOVE_EXTENTS, 2); + if (IS_ERR(handle)) { + err = -EINVAL; +@@ -159,6 +176,8 @@ static long swap_inode_boot_loader(struc + + inode->i_generation = prandom_u32(); + inode_bl->i_generation = prandom_u32(); ++ reset_inode_seed(inode); ++ reset_inode_seed(inode_bl); + + ext4_discard_preallocations(inode); + +@@ -169,6 +188,7 @@ static long swap_inode_boot_loader(struc + inode->i_ino, err); + /* Revert all changes: */ + swap_inode_data(inode, inode_bl); ++ ext4_mark_inode_dirty(handle, inode); + } else { + err = ext4_mark_inode_dirty(handle, inode_bl); + if (err < 0) { +@@ -178,6 +198,7 @@ static long swap_inode_boot_loader(struc + /* Revert all changes: */ + swap_inode_data(inode, inode_bl); + ext4_mark_inode_dirty(handle, inode); ++ ext4_mark_inode_dirty(handle, inode_bl); + } + } + ext4_journal_stop(handle); diff --git 
a/queue-4.19/ext4-fix-setattr-project-check-in-fssetxattr-ioctl.patch b/queue-4.19/ext4-fix-setattr-project-check-in-fssetxattr-ioctl.patch new file mode 100644 index 00000000000..47ba9ef52b2 --- /dev/null +++ b/queue-4.19/ext4-fix-setattr-project-check-in-fssetxattr-ioctl.patch @@ -0,0 +1,156 @@ +From dc7ac6c4cae3b58724c2f1e21a7c05ce19ecd5a8 Mon Sep 17 00:00:00 2001 +From: Wang Shilong +Date: Wed, 3 Oct 2018 10:33:32 -0400 +Subject: ext4: fix setattr project check in fssetxattr ioctl + +From: Wang Shilong + +commit dc7ac6c4cae3b58724c2f1e21a7c05ce19ecd5a8 upstream. + +Currently, project quota could be changed by fssetxattr +ioctl, and existed permission check inode_owner_or_capable() +is obviously not enough, just think that common users could +change project id of file, that could make users to +break project quota easily. + +This patch try to follow same regular of xfs project +quota: + +"Project Quota ID state is only allowed to change from +within the init namespace. Enforce that restriction only +if we are trying to change the quota ID state. +Everything else is allowed in user namespaces." + +Besides that, check and set project id'state should +be an atomic operation, protect whole operation with +inode lock, ext4_ioctl_setproject() is only used for +ioctl EXT4_IOC_FSSETXATTR, we have held mnt_want_write_file() +before ext4_ioctl_setflags(), and ext4_ioctl_setproject() +is called after ext4_ioctl_setflags(), we could share +codes, so remove it inside ext4_ioctl_setproject(). 
+ +Signed-off-by: Wang Shilong +Signed-off-by: Theodore Ts'o +Reviewed-by: Andreas Dilger +Cc: stable@kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + fs/ext4/ioctl.c | 60 ++++++++++++++++++++++++++++++++++---------------------- + 1 file changed, 37 insertions(+), 23 deletions(-) + +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -360,19 +360,14 @@ static int ext4_ioctl_setproject(struct + if (projid_eq(kprojid, EXT4_I(inode)->i_projid)) + return 0; + +- err = mnt_want_write_file(filp); +- if (err) +- return err; +- + err = -EPERM; +- inode_lock(inode); + /* Is it quota file? Do not allow user to mess with it */ + if (ext4_is_quota_file(inode)) +- goto out_unlock; ++ return err; + + err = ext4_get_inode_loc(inode, &iloc); + if (err) +- goto out_unlock; ++ return err; + + raw_inode = ext4_raw_inode(&iloc); + if (!EXT4_FITS_IN_INODE(raw_inode, ei, i_projid)) { +@@ -380,7 +375,7 @@ static int ext4_ioctl_setproject(struct + EXT4_SB(sb)->s_want_extra_isize, + &iloc); + if (err) +- goto out_unlock; ++ return err; + } else { + brelse(iloc.bh); + } +@@ -390,10 +385,8 @@ static int ext4_ioctl_setproject(struct + handle = ext4_journal_start(inode, EXT4_HT_QUOTA, + EXT4_QUOTA_INIT_BLOCKS(sb) + + EXT4_QUOTA_DEL_BLOCKS(sb) + 3); +- if (IS_ERR(handle)) { +- err = PTR_ERR(handle); +- goto out_unlock; +- } ++ if (IS_ERR(handle)) ++ return PTR_ERR(handle); + + err = ext4_reserve_inode_write(handle, inode, &iloc); + if (err) +@@ -421,9 +414,6 @@ out_dirty: + err = rc; + out_stop: + ext4_journal_stop(handle); +-out_unlock: +- inode_unlock(inode); +- mnt_drop_write_file(filp); + return err; + } + #else +@@ -647,6 +637,30 @@ group_add_out: + return err; + } + ++static int ext4_ioctl_check_project(struct inode *inode, struct fsxattr *fa) ++{ ++ /* ++ * Project Quota ID state is only allowed to change from within the init ++ * namespace. Enforce that restriction only if we are trying to change ++ * the quota ID state. Everything else is allowed in user namespaces. 
++ */ ++ if (current_user_ns() == &init_user_ns) ++ return 0; ++ ++ if (__kprojid_val(EXT4_I(inode)->i_projid) != fa->fsx_projid) ++ return -EINVAL; ++ ++ if (ext4_test_inode_flag(inode, EXT4_INODE_PROJINHERIT)) { ++ if (!(fa->fsx_xflags & FS_XFLAG_PROJINHERIT)) ++ return -EINVAL; ++ } else { ++ if (fa->fsx_xflags & FS_XFLAG_PROJINHERIT) ++ return -EINVAL; ++ } ++ ++ return 0; ++} ++ + long ext4_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) + { + struct inode *inode = file_inode(filp); +@@ -1046,19 +1060,19 @@ resizefs_out: + return err; + + inode_lock(inode); ++ err = ext4_ioctl_check_project(inode, &fa); ++ if (err) ++ goto out; + flags = (ei->i_flags & ~EXT4_FL_XFLAG_VISIBLE) | + (flags & EXT4_FL_XFLAG_VISIBLE); + err = ext4_ioctl_setflags(inode, flags); +- inode_unlock(inode); +- mnt_drop_write_file(filp); + if (err) +- return err; +- ++ goto out; + err = ext4_ioctl_setproject(filp, fa.fsx_projid); +- if (err) +- return err; +- +- return 0; ++out: ++ inode_unlock(inode); ++ mnt_drop_write_file(filp); ++ return err; + } + case EXT4_IOC_SHUTDOWN: + return ext4_shutdown(sb, arg); diff --git a/queue-4.19/ext4-fix-use-after-free-race-in-ext4_remount-s-error-path.patch b/queue-4.19/ext4-fix-use-after-free-race-in-ext4_remount-s-error-path.patch new file mode 100644 index 00000000000..c9ab1c83853 --- /dev/null +++ b/queue-4.19/ext4-fix-use-after-free-race-in-ext4_remount-s-error-path.patch @@ -0,0 +1,219 @@ +From 33458eaba4dfe778a426df6a19b7aad2ff9f7eec Mon Sep 17 00:00:00 2001 +From: Theodore Ts'o +Date: Fri, 12 Oct 2018 09:28:09 -0400 +Subject: ext4: fix use-after-free race in ext4_remount()'s error path + +From: Theodore Ts'o + +commit 33458eaba4dfe778a426df6a19b7aad2ff9f7eec upstream. + +It's possible for ext4_show_quota_options() to try reading +s_qf_names[i] while it is being modified by ext4_remount() --- most +notably, in ext4_remount's error path when the original values of the +quota file name gets restored. 
+ +Reported-by: syzbot+a2872d6feea6918008a9@syzkaller.appspotmail.com +Signed-off-by: Theodore Ts'o +Cc: stable@kernel.org # 3.2+ +Signed-off-by: Greg Kroah-Hartman + +--- + fs/ext4/ext4.h | 3 +- + fs/ext4/super.c | 73 ++++++++++++++++++++++++++++++++++++-------------------- + 2 files changed, 50 insertions(+), 26 deletions(-) + +--- a/fs/ext4/ext4.h ++++ b/fs/ext4/ext4.h +@@ -1401,7 +1401,8 @@ struct ext4_sb_info { + u32 s_min_batch_time; + struct block_device *journal_bdev; + #ifdef CONFIG_QUOTA +- char *s_qf_names[EXT4_MAXQUOTAS]; /* Names of quota files with journalled quota */ ++ /* Names of quota files with journalled quota */ ++ char __rcu *s_qf_names[EXT4_MAXQUOTAS]; + int s_jquota_fmt; /* Format of quota to use */ + #endif + unsigned int s_want_extra_isize; /* New inodes should reserve # bytes */ +--- a/fs/ext4/super.c ++++ b/fs/ext4/super.c +@@ -914,6 +914,18 @@ static inline void ext4_quota_off_umount + for (type = 0; type < EXT4_MAXQUOTAS; type++) + ext4_quota_off(sb, type); + } ++ ++/* ++ * This is a helper function which is used in the mount/remount ++ * codepaths (which holds s_umount) to fetch the quota file name. 
++ */ ++static inline char *get_qf_name(struct super_block *sb, ++ struct ext4_sb_info *sbi, ++ int type) ++{ ++ return rcu_dereference_protected(sbi->s_qf_names[type], ++ lockdep_is_held(&sb->s_umount)); ++} + #else + static inline void ext4_quota_off_umount(struct super_block *sb) + { +@@ -965,7 +977,7 @@ static void ext4_put_super(struct super_ + percpu_free_rwsem(&sbi->s_journal_flag_rwsem); + #ifdef CONFIG_QUOTA + for (i = 0; i < EXT4_MAXQUOTAS; i++) +- kfree(sbi->s_qf_names[i]); ++ kfree(get_qf_name(sb, sbi, i)); + #endif + + /* Debugging code just in case the in-memory inode orphan list +@@ -1530,11 +1542,10 @@ static const char deprecated_msg[] = + static int set_qf_name(struct super_block *sb, int qtype, substring_t *args) + { + struct ext4_sb_info *sbi = EXT4_SB(sb); +- char *qname; ++ char *qname, *old_qname = get_qf_name(sb, sbi, qtype); + int ret = -1; + +- if (sb_any_quota_loaded(sb) && +- !sbi->s_qf_names[qtype]) { ++ if (sb_any_quota_loaded(sb) && !old_qname) { + ext4_msg(sb, KERN_ERR, + "Cannot change journaled " + "quota options when quota turned on"); +@@ -1551,8 +1562,8 @@ static int set_qf_name(struct super_bloc + "Not enough memory for storing quotafile name"); + return -1; + } +- if (sbi->s_qf_names[qtype]) { +- if (strcmp(sbi->s_qf_names[qtype], qname) == 0) ++ if (old_qname) { ++ if (strcmp(old_qname, qname) == 0) + ret = 1; + else + ext4_msg(sb, KERN_ERR, +@@ -1565,7 +1576,7 @@ static int set_qf_name(struct super_bloc + "quotafile must be on filesystem root"); + goto errout; + } +- sbi->s_qf_names[qtype] = qname; ++ rcu_assign_pointer(sbi->s_qf_names[qtype], qname); + set_opt(sb, QUOTA); + return 1; + errout: +@@ -1577,15 +1588,16 @@ static int clear_qf_name(struct super_bl + { + + struct ext4_sb_info *sbi = EXT4_SB(sb); ++ char *old_qname = get_qf_name(sb, sbi, qtype); + +- if (sb_any_quota_loaded(sb) && +- sbi->s_qf_names[qtype]) { ++ if (sb_any_quota_loaded(sb) && old_qname) { + ext4_msg(sb, KERN_ERR, "Cannot change journaled quota 
options" + " when quota turned on"); + return -1; + } +- kfree(sbi->s_qf_names[qtype]); +- sbi->s_qf_names[qtype] = NULL; ++ rcu_assign_pointer(sbi->s_qf_names[qtype], NULL); ++ synchronize_rcu(); ++ kfree(old_qname); + return 1; + } + #endif +@@ -1960,7 +1972,7 @@ static int parse_options(char *options, + int is_remount) + { + struct ext4_sb_info *sbi = EXT4_SB(sb); +- char *p; ++ char *p, __maybe_unused *usr_qf_name, __maybe_unused *grp_qf_name; + substring_t args[MAX_OPT_ARGS]; + int token; + +@@ -1991,11 +2003,13 @@ static int parse_options(char *options, + "Cannot enable project quota enforcement."); + return 0; + } +- if (sbi->s_qf_names[USRQUOTA] || sbi->s_qf_names[GRPQUOTA]) { +- if (test_opt(sb, USRQUOTA) && sbi->s_qf_names[USRQUOTA]) ++ usr_qf_name = get_qf_name(sb, sbi, USRQUOTA); ++ grp_qf_name = get_qf_name(sb, sbi, GRPQUOTA); ++ if (usr_qf_name || grp_qf_name) { ++ if (test_opt(sb, USRQUOTA) && usr_qf_name) + clear_opt(sb, USRQUOTA); + +- if (test_opt(sb, GRPQUOTA) && sbi->s_qf_names[GRPQUOTA]) ++ if (test_opt(sb, GRPQUOTA) && grp_qf_name) + clear_opt(sb, GRPQUOTA); + + if (test_opt(sb, GRPQUOTA) || test_opt(sb, USRQUOTA)) { +@@ -2029,6 +2043,7 @@ static inline void ext4_show_quota_optio + { + #if defined(CONFIG_QUOTA) + struct ext4_sb_info *sbi = EXT4_SB(sb); ++ char *usr_qf_name, *grp_qf_name; + + if (sbi->s_jquota_fmt) { + char *fmtname = ""; +@@ -2047,11 +2062,14 @@ static inline void ext4_show_quota_optio + seq_printf(seq, ",jqfmt=%s", fmtname); + } + +- if (sbi->s_qf_names[USRQUOTA]) +- seq_show_option(seq, "usrjquota", sbi->s_qf_names[USRQUOTA]); +- +- if (sbi->s_qf_names[GRPQUOTA]) +- seq_show_option(seq, "grpjquota", sbi->s_qf_names[GRPQUOTA]); ++ rcu_read_lock(); ++ usr_qf_name = rcu_dereference(sbi->s_qf_names[USRQUOTA]); ++ grp_qf_name = rcu_dereference(sbi->s_qf_names[GRPQUOTA]); ++ if (usr_qf_name) ++ seq_show_option(seq, "usrjquota", usr_qf_name); ++ if (grp_qf_name) ++ seq_show_option(seq, "grpjquota", grp_qf_name); ++ 
rcu_read_unlock(); + #endif + } + +@@ -5103,6 +5121,7 @@ static int ext4_remount(struct super_blo + int err = 0; + #ifdef CONFIG_QUOTA + int i, j; ++ char *to_free[EXT4_MAXQUOTAS]; + #endif + char *orig_data = kstrdup(data, GFP_KERNEL); + +@@ -5122,8 +5141,9 @@ static int ext4_remount(struct super_blo + old_opts.s_jquota_fmt = sbi->s_jquota_fmt; + for (i = 0; i < EXT4_MAXQUOTAS; i++) + if (sbi->s_qf_names[i]) { +- old_opts.s_qf_names[i] = kstrdup(sbi->s_qf_names[i], +- GFP_KERNEL); ++ char *qf_name = get_qf_name(sb, sbi, i); ++ ++ old_opts.s_qf_names[i] = kstrdup(qf_name, GFP_KERNEL); + if (!old_opts.s_qf_names[i]) { + for (j = 0; j < i; j++) + kfree(old_opts.s_qf_names[j]); +@@ -5352,9 +5372,12 @@ restore_opts: + #ifdef CONFIG_QUOTA + sbi->s_jquota_fmt = old_opts.s_jquota_fmt; + for (i = 0; i < EXT4_MAXQUOTAS; i++) { +- kfree(sbi->s_qf_names[i]); +- sbi->s_qf_names[i] = old_opts.s_qf_names[i]; ++ to_free[i] = get_qf_name(sb, sbi, i); ++ rcu_assign_pointer(sbi->s_qf_names[i], old_opts.s_qf_names[i]); + } ++ synchronize_rcu(); ++ for (i = 0; i < EXT4_MAXQUOTAS; i++) ++ kfree(to_free[i]); + #endif + kfree(orig_data); + return err; +@@ -5545,7 +5568,7 @@ static int ext4_write_info(struct super_ + */ + static int ext4_quota_on_mount(struct super_block *sb, int type) + { +- return dquot_quota_on_mount(sb, EXT4_SB(sb)->s_qf_names[type], ++ return dquot_quota_on_mount(sb, get_qf_name(sb, EXT4_SB(sb), type), + EXT4_SB(sb)->s_jquota_fmt, type); + } + diff --git a/queue-4.19/ext4-initialize-retries-variable-in-ext4_da_write_inline_data_begin.patch b/queue-4.19/ext4-initialize-retries-variable-in-ext4_da_write_inline_data_begin.patch new file mode 100644 index 00000000000..162e8140ee7 --- /dev/null +++ b/queue-4.19/ext4-initialize-retries-variable-in-ext4_da_write_inline_data_begin.patch @@ -0,0 +1,34 @@ +From 625ef8a3acd111d5f496d190baf99d1a815bd03e Mon Sep 17 00:00:00 2001 +From: Lukas Czerner +Date: Tue, 2 Oct 2018 21:18:45 -0400 +Subject: ext4: initialize retries variable 
in ext4_da_write_inline_data_begin() + +From: Lukas Czerner + +commit 625ef8a3acd111d5f496d190baf99d1a815bd03e upstream. + +Variable retries is not initialized in ext4_da_write_inline_data_begin() +which can lead to a nondeterministic number of retries in case we hit +ENOSPC. Initialize retries to zero as we do everywhere else. + +Signed-off-by: Lukas Czerner +Signed-off-by: Theodore Ts'o +Fixes: bc0ca9df3b2a ("ext4: retry allocation when inline->extent conversion failed") +Cc: stable@kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + fs/ext4/inline.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/fs/ext4/inline.c ++++ b/fs/ext4/inline.c +@@ -863,7 +863,7 @@ int ext4_da_write_inline_data_begin(stru + handle_t *handle; + struct page *page; + struct ext4_iloc iloc; +- int retries; ++ int retries = 0; + + ret = ext4_get_inode_loc(inode, &iloc); + if (ret) diff --git a/queue-4.19/ext4-propagate-error-from-dquot_initialize-in-ext4_ioc_fssetxattr.patch b/queue-4.19/ext4-propagate-error-from-dquot_initialize-in-ext4_ioc_fssetxattr.patch new file mode 100644 index 00000000000..2e40a4124d1 --- /dev/null +++ b/queue-4.19/ext4-propagate-error-from-dquot_initialize-in-ext4_ioc_fssetxattr.patch @@ -0,0 +1,38 @@ +From 182a79e0c17147d2c2d3990a9a7b6b58a1561c7a Mon Sep 17 00:00:00 2001 +From: Wang Shilong +Date: Wed, 3 Oct 2018 12:19:21 -0400 +Subject: ext4: propagate error from dquot_initialize() in EXT4_IOC_FSSETXATTR +
+From: Wang Shilong + +commit 182a79e0c17147d2c2d3990a9a7b6b58a1561c7a upstream. + +We return most failures of dquot_initialize() except +during inode eviction, which can make some sense: for example, +we allow file removal even when quota files are broken. + +But it doesn't make sense to allow setting the project id +if quota files etc. are broken. 
+ +Signed-off-by: Wang Shilong +Signed-off-by: Theodore Ts'o +Cc: stable@kernel.org +Signed-off-by: Greg Kroah-Hartman + +--- + fs/ext4/ioctl.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/fs/ext4/ioctl.c ++++ b/fs/ext4/ioctl.c +@@ -380,7 +380,9 @@ static int ext4_ioctl_setproject(struct + brelse(iloc.bh); + } + +- dquot_initialize(inode); ++ err = dquot_initialize(inode); ++ if (err) ++ return err; + + handle = ext4_journal_start(inode, EXT4_HT_QUOTA, + EXT4_QUOTA_INIT_BLOCKS(sb) + diff --git a/queue-4.19/f2fs-fix-missing-up_read.patch b/queue-4.19/f2fs-fix-missing-up_read.patch new file mode 100644 index 00000000000..b70620afc9c --- /dev/null +++ b/queue-4.19/f2fs-fix-missing-up_read.patch @@ -0,0 +1,35 @@ +From 89d13c38501df730cbb2e02c4499da1b5187119d Mon Sep 17 00:00:00 2001 +From: Jaegeuk Kim +Date: Thu, 27 Sep 2018 22:15:31 -0700 +Subject: f2fs: fix missing up_read + +From: Jaegeuk Kim + +commit 89d13c38501df730cbb2e02c4499da1b5187119d upstream. + +This patch fixes a missing up_read call. 
+ +Fixes: c9b60788fc76 ("f2fs: fix to do sanity check with block address in main area") +Cc: # 4.19+ +Reviewed-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Signed-off-by: Greg Kroah-Hartman + +--- + fs/f2fs/node.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/fs/f2fs/node.c ++++ b/fs/f2fs/node.c +@@ -1542,8 +1542,10 @@ static int __write_node_page(struct page + } + + if (__is_valid_data_blkaddr(ni.blk_addr) && +- !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC)) ++ !f2fs_is_valid_blkaddr(sbi, ni.blk_addr, DATA_GENERIC)) { ++ up_read(&sbi->node_write); + goto redirty_out; ++ } + + if (atomic && !test_opt(sbi, NOBARRIER)) + fio.op_flags |= REQ_PREFLUSH | REQ_FUA; diff --git a/queue-4.19/f2fs-fix-to-account-io-correctly.patch b/queue-4.19/f2fs-fix-to-account-io-correctly.patch new file mode 100644 index 00000000000..c5d242b966c --- /dev/null +++ b/queue-4.19/f2fs-fix-to-account-io-correctly.patch @@ -0,0 +1,45 @@ +From 4c58ed076875f36dae0f240da1e25e99e5d4afb8 Mon Sep 17 00:00:00 2001 +From: Chao Yu +Date: Mon, 22 Oct 2018 09:12:51 +0800 +Subject: f2fs: fix to account IO correctly + +From: Chao Yu + +commit 4c58ed076875f36dae0f240da1e25e99e5d4afb8 upstream. + +Below race can cause reversed reference on dirty count, fix it by +relocating __submit_bio() and inc_page_count(). 
+ +Thread A Thread B +- f2fs_inplace_write_data + - f2fs_submit_page_bio + - __submit_bio + - f2fs_write_end_io + - dec_page_count + - inc_page_count + +Cc: +Fixes: d1b3e72d5490 ("f2fs: submit bio of in-place-update pages") +Signed-off-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Signed-off-by: Greg Kroah-Hartman + +--- + fs/f2fs/data.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -463,10 +463,10 @@ int f2fs_submit_page_bio(struct f2fs_io_ + + bio_set_op_attrs(bio, fio->op, fio->op_flags); + +- __submit_bio(fio->sbi, bio, fio->type); +- + if (!is_read_io(fio->op)) + inc_page_count(fio->sbi, WB_DATA_TYPE(fio->page)); ++ ++ __submit_bio(fio->sbi, bio, fio->type); + return 0; + } + diff --git a/queue-4.19/f2fs-fix-to-recover-cold-bit-of-inode-block-during-por.patch b/queue-4.19/f2fs-fix-to-recover-cold-bit-of-inode-block-during-por.patch new file mode 100644 index 00000000000..7f8757bfa62 --- /dev/null +++ b/queue-4.19/f2fs-fix-to-recover-cold-bit-of-inode-block-during-por.patch @@ -0,0 +1,80 @@ +From ef2a007134b4eaa39264c885999f296577bc87d2 Mon Sep 17 00:00:00 2001 +From: Chao Yu +Date: Wed, 3 Oct 2018 22:32:44 +0800 +Subject: f2fs: fix to recover cold bit of inode block during POR + +From: Chao Yu + +commit ef2a007134b4eaa39264c885999f296577bc87d2 upstream. + +Testcase to reproduce this bug: +1. mkfs.f2fs /dev/sdd +2. mount -t f2fs /dev/sdd /mnt/f2fs +3. touch /mnt/f2fs/file +4. sync +5. chattr +A /mnt/f2fs/file +6. xfs_io -f /mnt/f2fs/file -c "fsync" +7. godown /mnt/f2fs +8. umount /mnt/f2fs +9. mount -t f2fs /dev/sdd /mnt/f2fs +10. chattr -A /mnt/f2fs/file +11. xfs_io -f /mnt/f2fs/file -c "fsync" +12. umount /mnt/f2fs +13. mount -t f2fs /dev/sdd /mnt/f2fs +14. 
lsattr /mnt/f2fs/file + +-----------------N- /mnt/f2fs/file + +But actually, the correct result we expect is: + +-------A---------N- /mnt/f2fs/file + +The reason is that in step 9) we missed recovering the cold bit flag in the inode +block, so later, in fsync, we will skip writing the inode block due to the below +condition check, resulting in losing data across another SPOR. + +f2fs_fsync_node_pages() + if (!IS_DNODE(page) || !is_cold_node(page)) + continue; + +Note that some non-dir inodes may have already lost the cold bit +during POR, so in order to re-enable recovery for those inodes, let's +try to recover the cold bit in f2fs_iget() to save more fsynced data. + +Fixes: c56675750d7c ("f2fs: remove unneeded set_cold_node()") +Cc: 4.17+ +Signed-off-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Signed-off-by: Greg Kroah-Hartman + +--- + fs/f2fs/inode.c | 6 ++++++ + fs/f2fs/node.c | 2 +- + 2 files changed, 7 insertions(+), 1 deletion(-) + +--- a/fs/f2fs/inode.c ++++ b/fs/f2fs/inode.c +@@ -368,6 +368,12 @@ static int do_read_inode(struct inode *i + if (f2fs_has_inline_data(inode) && !f2fs_exist_data(inode)) + __recover_inline_status(inode, node_page); + ++ /* try to recover cold bit for non-dir inode */ ++ if (!S_ISDIR(inode->i_mode) && !is_cold_node(node_page)) { ++ set_cold_node(node_page, false); ++ set_page_dirty(node_page); ++ } ++ + /* get rdev by using inline_info */ + __get_inode_rdev(inode, ri); + +--- a/fs/f2fs/node.c ++++ b/fs/f2fs/node.c +@@ -2539,7 +2539,7 @@ retry: + if (!PageUptodate(ipage)) + SetPageUptodate(ipage); + fill_node_footer(ipage, ino, ino, 0, true); +- set_cold_node(page, false); ++ set_cold_node(ipage, false); + + src = F2FS_INODE(page); + dst = F2FS_INODE(ipage); diff --git a/queue-4.19/genirq-fix-race-on-spurious-interrupt-detection.patch b/queue-4.19/genirq-fix-race-on-spurious-interrupt-detection.patch new file mode 100644 index 00000000000..dddbea11d22 --- /dev/null +++ b/queue-4.19/genirq-fix-race-on-spurious-interrupt-detection.patch @@ -0,0 +1,96 @@ +From 
746a923b863a1065ef77324e1e43f19b1a3eab5c Mon Sep 17 00:00:00 2001 +From: Lukas Wunner +Date: Thu, 18 Oct 2018 15:15:05 +0200 +Subject: genirq: Fix race on spurious interrupt detection + +From: Lukas Wunner + +commit 746a923b863a1065ef77324e1e43f19b1a3eab5c upstream. + +Commit 1e77d0a1ed74 ("genirq: Sanitize spurious interrupt detection of +threaded irqs") made detection of spurious interrupts work for threaded +handlers by: + +a) incrementing a counter every time the thread returns IRQ_HANDLED, and +b) checking whether that counter has increased every time the thread is + woken. + +However for oneshot interrupts, the commit unmasks the interrupt before +incrementing the counter. If another interrupt occurs right after +unmasking but before the counter is incremented, that interrupt is +incorrectly considered spurious: + +time + | irq_thread() + | irq_thread_fn() + | action->thread_fn() + | irq_finalize_oneshot() + | unmask_threaded_irq() /* interrupt is unmasked */ + | + | /* interrupt fires, incorrectly deemed spurious */ + | + | atomic_inc(&desc->threads_handled); /* counter is incremented */ + v + +This is observed with a hi3110 CAN controller receiving data at high volume +(from a separate machine sending with "cangen -g 0 -i -x"): The controller +signals a huge number of interrupts (hundreds of millions per day) and +every second there are about a dozen which are deemed spurious. + +In theory with high CPU load and the presence of higher priority tasks, the +number of incorrectly detected spurious interrupts might increase beyond +the 99,900 threshold and cause disablement of the interrupt. + +In practice it just increments the spurious interrupt count. But that can +cause people to waste time investigating it over and over. + +Fix it by moving the accounting before the invocation of +irq_finalize_oneshot(). 
+ +[ tglx: Folded change log update ] + +Fixes: 1e77d0a1ed74 ("genirq: Sanitize spurious interrupt detection of threaded irqs") +Signed-off-by: Lukas Wunner +Signed-off-by: Thomas Gleixner +Cc: Mathias Duckeck +Cc: Akshay Bhat +Cc: Casey Fitzpatrick +Cc: stable@vger.kernel.org # v3.16+ +Link: https://lkml.kernel.org/r/1dfd8bbd16163940648045495e3e9698e63b50ad.1539867047.git.lukas@wunner.de +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/irq/manage.c | 8 ++++++-- + 1 file changed, 6 insertions(+), 2 deletions(-) + +--- a/kernel/irq/manage.c ++++ b/kernel/irq/manage.c +@@ -927,6 +927,9 @@ irq_forced_thread_fn(struct irq_desc *de + + local_bh_disable(); + ret = action->thread_fn(action->irq, action->dev_id); ++ if (ret == IRQ_HANDLED) ++ atomic_inc(&desc->threads_handled); ++ + irq_finalize_oneshot(desc, action); + local_bh_enable(); + return ret; +@@ -943,6 +946,9 @@ static irqreturn_t irq_thread_fn(struct + irqreturn_t ret; + + ret = action->thread_fn(action->irq, action->dev_id); ++ if (ret == IRQ_HANDLED) ++ atomic_inc(&desc->threads_handled); ++ + irq_finalize_oneshot(desc, action); + return ret; + } +@@ -1020,8 +1026,6 @@ static int irq_thread(void *data) + irq_thread_check_affinity(desc, action); + + action_ret = handler_fn(desc, action); +- if (action_ret == IRQ_HANDLED) +- atomic_inc(&desc->threads_handled); + if (action_ret == IRQ_WAKE_THREAD) + irq_wake_secondary(desc, action); + diff --git a/queue-4.19/gfs2_meta-mount-can-get-null-dev_name.patch b/queue-4.19/gfs2_meta-mount-can-get-null-dev_name.patch new file mode 100644 index 00000000000..13f85d75c62 --- /dev/null +++ b/queue-4.19/gfs2_meta-mount-can-get-null-dev_name.patch @@ -0,0 +1,32 @@ +From 3df629d873f8683af6f0d34dfc743f637966d483 Mon Sep 17 00:00:00 2001 +From: Al Viro +Date: Sat, 13 Oct 2018 00:19:13 -0400 +Subject: gfs2_meta: ->mount() can get NULL dev_name + +From: Al Viro + +commit 3df629d873f8683af6f0d34dfc743f637966d483 upstream. 
+ +get in sync with mount_bdev() handling of the same + +Reported-by: syzbot+c54f8e94e6bba03b04e9@syzkaller.appspotmail.com +Cc: stable@vger.kernel.org +Signed-off-by: Al Viro +Signed-off-by: Greg Kroah-Hartman + +--- + fs/gfs2/ops_fstype.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/fs/gfs2/ops_fstype.c ++++ b/fs/gfs2/ops_fstype.c +@@ -1333,6 +1333,9 @@ static struct dentry *gfs2_mount_meta(st + struct path path; + int error; + ++ if (!dev_name || !*dev_name) ++ return ERR_PTR(-EINVAL); ++ + error = kern_path(dev_name, LOOKUP_FOLLOW, &path); + if (error) { + pr_warn("path_lookup on %s returned error %d\n", diff --git a/queue-4.19/hid-hiddev-fix-potential-spectre-v1.patch b/queue-4.19/hid-hiddev-fix-potential-spectre-v1.patch new file mode 100644 index 00000000000..4567dbeae05 --- /dev/null +++ b/queue-4.19/hid-hiddev-fix-potential-spectre-v1.patch @@ -0,0 +1,76 @@ +From f11274396a538b31bc010f782e05c2ce3f804c13 Mon Sep 17 00:00:00 2001 +From: Breno Leitao +Date: Fri, 19 Oct 2018 17:01:33 -0300 +Subject: HID: hiddev: fix potential Spectre v1 + +From: Breno Leitao + +commit f11274396a538b31bc010f782e05c2ce3f804c13 upstream. + +uref->usage_index can be indirectly controlled by userspace, hence leading +to a potential exploitation of the Spectre variant 1 vulnerability. + +This field is used as an array index by the hiddev_ioctl_usage() function, +when 'cmd' is either HIDIOCGCOLLECTIONINDEX, HIDIOCGUSAGES or +HIDIOCSUSAGES. + +For cmd == HIDIOCGCOLLECTIONINDEX case, uref->usage_index is compared to +field->maxusage and then used as an index to dereference field->usage +array. The same thing happens to the cmd == HIDIOC{G,S}USAGES cases, where +uref->usage_index is checked against an array maximum value and then it is +used as an index in an array. 
+ +This is a summary of the HIDIOCGCOLLECTIONINDEX case, which matches the +traditional Spectre V1 first load: + + copy_from_user(uref, user_arg, sizeof(*uref)) + if (uref->usage_index >= field->maxusage) + goto inval; + i = field->usage[uref->usage_index].collection_index; + return i; + +This patch fixes this by sanitizing field uref->usage_index before using it +to index field->usage (HIDIOCGCOLLECTIONINDEX) or field->value in +HIDIOC{G,S}USAGES arrays, thus, avoiding speculation in the first load. + +Cc: +Signed-off-by: Breno Leitao +v2: Contemplate cmd == HIDIOC{G,S}USAGES case +Signed-off-by: Jiri Kosina +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/hid/usbhid/hiddev.c | 18 ++++++++++++++---- + 1 file changed, 14 insertions(+), 4 deletions(-) + +--- a/drivers/hid/usbhid/hiddev.c ++++ b/drivers/hid/usbhid/hiddev.c +@@ -512,14 +512,24 @@ static noinline int hiddev_ioctl_usage(s + if (cmd == HIDIOCGCOLLECTIONINDEX) { + if (uref->usage_index >= field->maxusage) + goto inval; ++ uref->usage_index = ++ array_index_nospec(uref->usage_index, ++ field->maxusage); + } else if (uref->usage_index >= field->report_count) + goto inval; + } + +- if ((cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) && +- (uref_multi->num_values > HID_MAX_MULTI_USAGES || +- uref->usage_index + uref_multi->num_values > field->report_count)) +- goto inval; ++ if (cmd == HIDIOCGUSAGES || cmd == HIDIOCSUSAGES) { ++ if (uref_multi->num_values > HID_MAX_MULTI_USAGES || ++ uref->usage_index + uref_multi->num_values > ++ field->report_count) ++ goto inval; ++ ++ uref->usage_index = ++ array_index_nospec(uref->usage_index, ++ field->report_count - ++ uref_multi->num_values); ++ } + + switch (cmd) { + case HIDIOCGUSAGE: diff --git a/queue-4.19/hid-wacom-work-around-hid-descriptor-bug-in-dtk-2451-and-dth-2452.patch b/queue-4.19/hid-wacom-work-around-hid-descriptor-bug-in-dtk-2451-and-dth-2452.patch new file mode 100644 index 00000000000..6a6660109fa --- /dev/null +++ 
b/queue-4.19/hid-wacom-work-around-hid-descriptor-bug-in-dtk-2451-and-dth-2452.patch @@ -0,0 +1,66 @@ +From 11db8173dbab7a94cf5ba5225fcedbfc0f3b7e54 Mon Sep 17 00:00:00 2001 +From: Jason Gerecke +Date: Wed, 10 Oct 2018 13:40:26 -0700 +Subject: HID: wacom: Work around HID descriptor bug in DTK-2451 and DTH-2452 + +From: Jason Gerecke + +commit 11db8173dbab7a94cf5ba5225fcedbfc0f3b7e54 upstream. + +The DTK-2451 and DTH-2452 have a buggy HID descriptor which incorrectly +contains a Cintiq-like report, complete with pen tilt, rotation, twist, serial +number, etc. The hardware doesn't actually support this data but our driver +dutifully sets up the device as though it does. To ensure userspace has a +correct view of devices without updated firmware, we clean up this incorrect +data in wacom_setup_device_quirks. + +We're also careful to clear the WACOM_QUIRK_TOOLSERIAL flag since its presence +causes the driver to wait for serial number information (via +wacom_wac_pen_serial_enforce) that never comes, resulting in +the pen being non-responsive. 
+ +Signed-off-by: Jason Gerecke +Fixes: 8341720642 ("HID: wacom: Queue events with missing type/serial data for later processing") +Cc: stable@vger.kernel.org # v4.16+ +Signed-off-by: Jiri Kosina +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/hid/wacom_wac.c | 19 +++++++++++++++++++ + 1 file changed, 19 insertions(+) + +--- a/drivers/hid/wacom_wac.c ++++ b/drivers/hid/wacom_wac.c +@@ -3335,6 +3335,7 @@ static void wacom_setup_intuos(struct wa + + void wacom_setup_device_quirks(struct wacom *wacom) + { ++ struct wacom_wac *wacom_wac = &wacom->wacom_wac; + struct wacom_features *features = &wacom->wacom_wac.features; + + /* The pen and pad share the same interface on most devices */ +@@ -3464,6 +3465,24 @@ void wacom_setup_device_quirks(struct wa + + if (features->type == REMOTE) + features->device_type |= WACOM_DEVICETYPE_WL_MONITOR; ++ ++ /* HID descriptor for DTK-2451 / DTH-2452 claims to report lots ++ * of things it shouldn't. Lets fix up the damage... ++ */ ++ if (wacom->hdev->product == 0x382 || wacom->hdev->product == 0x37d) { ++ features->quirks &= ~WACOM_QUIRK_TOOLSERIAL; ++ __clear_bit(BTN_TOOL_BRUSH, wacom_wac->pen_input->keybit); ++ __clear_bit(BTN_TOOL_PENCIL, wacom_wac->pen_input->keybit); ++ __clear_bit(BTN_TOOL_AIRBRUSH, wacom_wac->pen_input->keybit); ++ __clear_bit(ABS_Z, wacom_wac->pen_input->absbit); ++ __clear_bit(ABS_DISTANCE, wacom_wac->pen_input->absbit); ++ __clear_bit(ABS_TILT_X, wacom_wac->pen_input->absbit); ++ __clear_bit(ABS_TILT_Y, wacom_wac->pen_input->absbit); ++ __clear_bit(ABS_WHEEL, wacom_wac->pen_input->absbit); ++ __clear_bit(ABS_MISC, wacom_wac->pen_input->absbit); ++ __clear_bit(MSC_SERIAL, wacom_wac->pen_input->mscbit); ++ __clear_bit(EV_MSC, wacom_wac->pen_input->evbit); ++ } + } + + int wacom_setup_pen_input_capabilities(struct input_dev *input_dev, diff --git a/queue-4.19/hugetlbfs-dirty-pages-as-they-are-added-to-pagecache.patch b/queue-4.19/hugetlbfs-dirty-pages-as-they-are-added-to-pagecache.patch new file mode 
100644 index 00000000000..9afabc837d2 --- /dev/null +++ b/queue-4.19/hugetlbfs-dirty-pages-as-they-are-added-to-pagecache.patch @@ -0,0 +1,73 @@ +From 22146c3ce98962436e401f7b7016a6f664c9ffb5 Mon Sep 17 00:00:00 2001 +From: Mike Kravetz +Date: Fri, 26 Oct 2018 15:10:58 -0700 +Subject: hugetlbfs: dirty pages as they are added to pagecache + +From: Mike Kravetz + +commit 22146c3ce98962436e401f7b7016a6f664c9ffb5 upstream. + +Some test systems were experiencing negative huge page reserve counts and +incorrect file block counts. This was traced to /proc/sys/vm/drop_caches +removing clean pages from hugetlbfs file pagecaches. When non-hugetlbfs +explicit code removes the pages, the appropriate accounting is not +performed. + +This can be recreated as follows: + fallocate -l 2M /dev/hugepages/foo + echo 1 > /proc/sys/vm/drop_caches + fallocate -l 2M /dev/hugepages/foo + grep -i huge /proc/meminfo + AnonHugePages: 0 kB + ShmemHugePages: 0 kB + HugePages_Total: 2048 + HugePages_Free: 2047 + HugePages_Rsvd: 18446744073709551615 + HugePages_Surp: 0 + Hugepagesize: 2048 kB + Hugetlb: 4194304 kB + ls -lsh /dev/hugepages/foo + 4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo + +To address this issue, dirty pages as they are added to pagecache. This +can easily be reproduced with fallocate as shown above. Read faulted +pages will eventually end up being marked dirty. But there is a window +where they are clean and could be impacted by code such as drop_caches. +So, just dirty them all as they are added to the pagecache. + +Link: http://lkml.kernel.org/r/b5be45b8-5afe-56cd-9482-28384699a049@oracle.com +Fixes: 6bda666a03f0 ("hugepages: fold find_or_alloc_pages into huge_no_page()") +Signed-off-by: Mike Kravetz +Acked-by: Michal Hocko +Reviewed-by: Khalid Aziz +Cc: Hugh Dickins +Cc: Naoya Horiguchi +Cc: "Aneesh Kumar K . V" +Cc: Andrea Arcangeli +Cc: "Kirill A . 
Shutemov" +Cc: Davidlohr Bueso +Cc: Alexander Viro +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/hugetlb.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +--- a/mm/hugetlb.c ++++ b/mm/hugetlb.c +@@ -3690,6 +3690,12 @@ int huge_add_to_page_cache(struct page * + return err; + ClearPagePrivate(page); + ++ /* ++ * set page dirty so that it will not be removed from cache/file ++ * by non-hugetlbfs specific code paths. ++ */ ++ set_page_dirty(page); ++ + spin_lock(&inode->i_lock); + inode->i_blocks += blocks_per_huge_page(h); + spin_unlock(&inode->i_lock); diff --git a/queue-4.19/ib-mlx5-fix-mr-cache-initialization.patch b/queue-4.19/ib-mlx5-fix-mr-cache-initialization.patch new file mode 100644 index 00000000000..1b2ab622105 --- /dev/null +++ b/queue-4.19/ib-mlx5-fix-mr-cache-initialization.patch @@ -0,0 +1,41 @@ +From 013c2403bf32e48119aeb13126929f81352cc7ac Mon Sep 17 00:00:00 2001 +From: Artemy Kovalyov +Date: Mon, 15 Oct 2018 14:13:35 +0300 +Subject: IB/mlx5: Fix MR cache initialization + +From: Artemy Kovalyov + +commit 013c2403bf32e48119aeb13126929f81352cc7ac upstream. + +Schedule MR cache work only after bucket was initialized. 
+ +Cc: # 4.10 +Fixes: 49780d42dfc9 ("IB/mlx5: Expose MR cache for mlx5_ib") +Signed-off-by: Artemy Kovalyov +Reviewed-by: Majd Dibbiny +Signed-off-by: Leon Romanovsky +Signed-off-by: Jason Gunthorpe +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/infiniband/hw/mlx5/mr.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/infiniband/hw/mlx5/mr.c ++++ b/drivers/infiniband/hw/mlx5/mr.c +@@ -691,7 +691,6 @@ int mlx5_mr_cache_init(struct mlx5_ib_de + init_completion(&ent->compl); + INIT_WORK(&ent->work, cache_work_func); + INIT_DELAYED_WORK(&ent->dwork, delayed_cache_work_func); +- queue_work(cache->wq, &ent->work); + + if (i > MR_CACHE_LAST_STD_ENTRY) { + mlx5_odp_init_mr_cache_entry(ent); +@@ -711,6 +710,7 @@ int mlx5_mr_cache_init(struct mlx5_ib_de + ent->limit = dev->mdev->profile->mr_cache[i].limit; + else + ent->limit = 0; ++ queue_work(cache->wq, &ent->work); + } + + err = mlx5_mr_cache_debugfs_init(dev); diff --git a/queue-4.19/ib-rxe-revise-the-ib_wr_opcode-enum.patch b/queue-4.19/ib-rxe-revise-the-ib_wr_opcode-enum.patch new file mode 100644 index 00000000000..fd605e8b07b --- /dev/null +++ b/queue-4.19/ib-rxe-revise-the-ib_wr_opcode-enum.patch @@ -0,0 +1,123 @@ +From 9a59739bd01f77db6fbe2955a4fce165f0f43568 Mon Sep 17 00:00:00 2001 +From: Jason Gunthorpe +Date: Tue, 14 Aug 2018 15:33:02 -0700 +Subject: IB/rxe: Revise the ib_wr_opcode enum + +From: Jason Gunthorpe + +commit 9a59739bd01f77db6fbe2955a4fce165f0f43568 upstream. + +This enum has become part of the uABI, as both RXE and the +ib_uverbs_post_send() command expect userspace to supply values from this +enum. So it should be properly placed in include/uapi/rdma. + +In userspace this enum is called 'enum ibv_wr_opcode' as part of +libibverbs.h. That enum defines different values for IB_WR_LOCAL_INV, +IB_WR_SEND_WITH_INV, and IB_WR_LSO. These were introduced (incorrectly, it +turns out) into libibverbs in 2015. 
+ +The kernel has changed its mind on the numbering for several of the IB_WC +values over the years, but has remained stable on IB_WR_LOCAL_INV and +below. + +Based on this we can conclude that there is no real user space user of the +values beyond IB_WR_ATOMIC_FETCH_AND_ADD, as they have never worked via +rdma-core. This is confirmed by inspection, only rxe uses the kernel enum +and implements the latter operations. rxe has clearly never worked with +these attributes from userspace. Other drivers that support these opcodes +implement the functionality without calling out to the kernel. + +To make IB_WR_SEND_WITH_INV and related work for RXE in userspace we +choose to renumber the IB_WR enum in the kernel to match the uABI that +userspace has been using since before Soft RoCE was merged. This is an +overall simpler configuration for the whole software stack, and obviously +can't break anything existing. + +Reported-by: Seth Howell +Tested-by: Seth Howell +Fixes: 8700e3e7c485 ("Soft RoCE driver") +Cc: +Signed-off-by: Jason Gunthorpe +Signed-off-by: Greg Kroah-Hartman + +--- + include/rdma/ib_verbs.h | 34 ++++++++++++++++++++-------------- + include/uapi/rdma/ib_user_verbs.h | 20 +++++++++++++++++++- + 2 files changed, 39 insertions(+), 15 deletions(-) + +--- a/include/rdma/ib_verbs.h ++++ b/include/rdma/ib_verbs.h +@@ -1278,21 +1278,27 @@ struct ib_qp_attr { + }; + + enum ib_wr_opcode { +- IB_WR_RDMA_WRITE, +- IB_WR_RDMA_WRITE_WITH_IMM, +- IB_WR_SEND, +- IB_WR_SEND_WITH_IMM, +- IB_WR_RDMA_READ, +- IB_WR_ATOMIC_CMP_AND_SWP, +- IB_WR_ATOMIC_FETCH_AND_ADD, +- IB_WR_LSO, +- IB_WR_SEND_WITH_INV, +- IB_WR_RDMA_READ_WITH_INV, +- IB_WR_LOCAL_INV, +- IB_WR_REG_MR, +- IB_WR_MASKED_ATOMIC_CMP_AND_SWP, +- IB_WR_MASKED_ATOMIC_FETCH_AND_ADD, ++ /* These are shared with userspace */ ++ IB_WR_RDMA_WRITE = IB_UVERBS_WR_RDMA_WRITE, ++ IB_WR_RDMA_WRITE_WITH_IMM = IB_UVERBS_WR_RDMA_WRITE_WITH_IMM, ++ IB_WR_SEND = IB_UVERBS_WR_SEND, ++ IB_WR_SEND_WITH_IMM = IB_UVERBS_WR_SEND_WITH_IMM, ++
IB_WR_RDMA_READ = IB_UVERBS_WR_RDMA_READ, ++ IB_WR_ATOMIC_CMP_AND_SWP = IB_UVERBS_WR_ATOMIC_CMP_AND_SWP, ++ IB_WR_ATOMIC_FETCH_AND_ADD = IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD, ++ IB_WR_LSO = IB_UVERBS_WR_TSO, ++ IB_WR_SEND_WITH_INV = IB_UVERBS_WR_SEND_WITH_INV, ++ IB_WR_RDMA_READ_WITH_INV = IB_UVERBS_WR_RDMA_READ_WITH_INV, ++ IB_WR_LOCAL_INV = IB_UVERBS_WR_LOCAL_INV, ++ IB_WR_MASKED_ATOMIC_CMP_AND_SWP = ++ IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP, ++ IB_WR_MASKED_ATOMIC_FETCH_AND_ADD = ++ IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD, ++ ++ /* These are kernel only and can not be issued by userspace */ ++ IB_WR_REG_MR = 0x20, + IB_WR_REG_SIG_MR, ++ + /* reserve values for low level drivers' internal use. + * These values will not be used at all in the ib core layer. + */ +--- a/include/uapi/rdma/ib_user_verbs.h ++++ b/include/uapi/rdma/ib_user_verbs.h +@@ -763,10 +763,28 @@ struct ib_uverbs_sge { + __u32 lkey; + }; + ++enum ib_uverbs_wr_opcode { ++ IB_UVERBS_WR_RDMA_WRITE = 0, ++ IB_UVERBS_WR_RDMA_WRITE_WITH_IMM = 1, ++ IB_UVERBS_WR_SEND = 2, ++ IB_UVERBS_WR_SEND_WITH_IMM = 3, ++ IB_UVERBS_WR_RDMA_READ = 4, ++ IB_UVERBS_WR_ATOMIC_CMP_AND_SWP = 5, ++ IB_UVERBS_WR_ATOMIC_FETCH_AND_ADD = 6, ++ IB_UVERBS_WR_LOCAL_INV = 7, ++ IB_UVERBS_WR_BIND_MW = 8, ++ IB_UVERBS_WR_SEND_WITH_INV = 9, ++ IB_UVERBS_WR_TSO = 10, ++ IB_UVERBS_WR_RDMA_READ_WITH_INV = 11, ++ IB_UVERBS_WR_MASKED_ATOMIC_CMP_AND_SWP = 12, ++ IB_UVERBS_WR_MASKED_ATOMIC_FETCH_AND_ADD = 13, ++ /* Review enum ib_wr_opcode before modifying this */ ++}; ++ + struct ib_uverbs_send_wr { + __aligned_u64 wr_id; + __u32 num_sge; +- __u32 opcode; ++ __u32 opcode; /* see enum ib_uverbs_wr_opcode */ + __u32 send_flags; + union { + __be32 imm_data; diff --git a/queue-4.19/iio-ad5064-fix-regulator-handling.patch b/queue-4.19/iio-ad5064-fix-regulator-handling.patch new file mode 100644 index 00000000000..eeeef7daed8 --- /dev/null +++ b/queue-4.19/iio-ad5064-fix-regulator-handling.patch @@ -0,0 +1,96 @@ +From 
8911a43bc198877fad9f4b0246a866b26bb547ab Mon Sep 17 00:00:00 2001 +From: Lars-Peter Clausen +Date: Fri, 28 Sep 2018 11:23:40 +0200 +Subject: iio: ad5064: Fix regulator handling + +From: Lars-Peter Clausen + +commit 8911a43bc198877fad9f4b0246a866b26bb547ab upstream. + +The correct way to handle errors returned by regulator_get() and friends is +to propagate the error since that means that a regulator was specified, +but something went wrong when requesting it. + +For handling optional regulators, e.g. when the device has an internal +vref, regulator_get_optional() should be used to avoid getting the dummy +regulator that the regulator core otherwise provides. + +Signed-off-by: Lars-Peter Clausen +Cc: +Signed-off-by: Jonathan Cameron +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iio/dac/ad5064.c | 53 +++++++++++++++++++++++++++++++++-------------- + 1 file changed, 38 insertions(+), 15 deletions(-) + +--- a/drivers/iio/dac/ad5064.c ++++ b/drivers/iio/dac/ad5064.c +@@ -808,6 +808,40 @@ static int ad5064_set_config(struct ad50 + return ad5064_write(st, cmd, 0, val, 0); + } + ++static int ad5064_request_vref(struct ad5064_state *st, struct device *dev) ++{ ++ unsigned int i; ++ int ret; ++ ++ for (i = 0; i < ad5064_num_vref(st); ++i) ++ st->vref_reg[i].supply = ad5064_vref_name(st, i); ++ ++ if (!st->chip_info->internal_vref) ++ return devm_regulator_bulk_get(dev, ad5064_num_vref(st), ++ st->vref_reg); ++ ++ /* ++ * This assumes that when the regulator has an internal VREF ++ * there is only one external VREF connection, which is ++ * currently the case for all supported devices.
++ */ ++ st->vref_reg[0].consumer = devm_regulator_get_optional(dev, "vref"); ++ if (!IS_ERR(st->vref_reg[0].consumer)) ++ return 0; ++ ++ ret = PTR_ERR(st->vref_reg[0].consumer); ++ if (ret != -ENODEV) ++ return ret; ++ ++ /* If no external regulator was supplied use the internal VREF */ ++ st->use_internal_vref = true; ++ ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE); ++ if (ret) ++ dev_err(dev, "Failed to enable internal vref: %d\n", ret); ++ ++ return ret; ++} ++ + static int ad5064_probe(struct device *dev, enum ad5064_type type, + const char *name, ad5064_write_func write) + { +@@ -828,22 +862,11 @@ static int ad5064_probe(struct device *d + st->dev = dev; + st->write = write; + +- for (i = 0; i < ad5064_num_vref(st); ++i) +- st->vref_reg[i].supply = ad5064_vref_name(st, i); ++ ret = ad5064_request_vref(st, dev); ++ if (ret) ++ return ret; + +- ret = devm_regulator_bulk_get(dev, ad5064_num_vref(st), +- st->vref_reg); +- if (ret) { +- if (!st->chip_info->internal_vref) +- return ret; +- st->use_internal_vref = true; +- ret = ad5064_set_config(st, AD5064_CONFIG_INT_VREF_ENABLE); +- if (ret) { +- dev_err(dev, "Failed to enable internal vref: %d\n", +- ret); +- return ret; +- } +- } else { ++ if (!st->use_internal_vref) { + ret = regulator_bulk_enable(ad5064_num_vref(st), st->vref_reg); + if (ret) + return ret; diff --git a/queue-4.19/iio-adc-at91-fix-acking-drdy-irq-on-simple-conversions.patch b/queue-4.19/iio-adc-at91-fix-acking-drdy-irq-on-simple-conversions.patch new file mode 100644 index 00000000000..8e6e0373b3f --- /dev/null +++ b/queue-4.19/iio-adc-at91-fix-acking-drdy-irq-on-simple-conversions.patch @@ -0,0 +1,39 @@ +From bc1b45326223e7e890053cf6266357adfa61942d Mon Sep 17 00:00:00 2001 +From: Eugen Hristev +Date: Mon, 24 Sep 2018 10:51:43 +0300 +Subject: iio: adc: at91: fix acking DRDY irq on simple conversions + +From: Eugen Hristev + +commit bc1b45326223e7e890053cf6266357adfa61942d upstream. 
+ +When doing simple conversions, the driver did not acknowledge the DRDY irq. +If this irq status is not acked, it will be left pending, and as soon as a +trigger is enabled, the irq handler will be called, it doesn't know why +this status has occurred because no channel is pending, and then it will go +into an irq loop and the board will hang. +To avoid this situation, read the LCDR after a raw conversion is done. + +Fixes: 0e589d5fb ("ARM: AT91: IIO: Add AT91 ADC driver.") +Cc: Maxime Ripard +Signed-off-by: Eugen Hristev +Acked-by: Ludovic Desroches +Cc: +Signed-off-by: Jonathan Cameron +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iio/adc/at91_adc.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/drivers/iio/adc/at91_adc.c ++++ b/drivers/iio/adc/at91_adc.c +@@ -279,6 +279,8 @@ static void handle_adc_eoc_trigger(int i + iio_trigger_poll(idev->trig); + } else { + st->last_value = at91_adc_readl(st, AT91_ADC_CHAN(st, st->chnb)); ++ /* Needed to ACK the DRDY interruption */ ++ at91_adc_readl(st, AT91_ADC_LCDR); + st->done = true; + wake_up_interruptible(&st->wq_data_avail); + } diff --git a/queue-4.19/iio-adc-at91-fix-wrong-channel-number-in-triggered-buffer-mode.patch b/queue-4.19/iio-adc-at91-fix-wrong-channel-number-in-triggered-buffer-mode.patch new file mode 100644 index 00000000000..cbf025a1a92 --- /dev/null +++ b/queue-4.19/iio-adc-at91-fix-wrong-channel-number-in-triggered-buffer-mode.patch @@ -0,0 +1,49 @@ +From aea835f2dc8a682942b859179c49ad1841a6c8b9 Mon Sep 17 00:00:00 2001 +From: Eugen Hristev +Date: Mon, 24 Sep 2018 10:51:44 +0300 +Subject: iio: adc: at91: fix wrong channel number in triggered buffer mode + +From: Eugen Hristev + +commit aea835f2dc8a682942b859179c49ad1841a6c8b9 upstream. + +When channels are registered, the hardware channel number is not the +actual iio channel number. +This is because the driver is probed with a certain number of accessible +channels.
Some pins are routed and some not, depending on the description of +the board in the DT. +Because of that, channels 0,1,2,3 can correspond to hardware channels +2,3,4,5 for example. +In the buffered triggered case, we need to do the translation accordingly. +Fixed the channel number to stop reading the wrong channel. + +Fixes: 0e589d5fb ("ARM: AT91: IIO: Add AT91 ADC driver.") +Cc: Maxime Ripard +Signed-off-by: Eugen Hristev +Acked-by: Ludovic Desroches +Cc: +Signed-off-by: Jonathan Cameron +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iio/adc/at91_adc.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/drivers/iio/adc/at91_adc.c ++++ b/drivers/iio/adc/at91_adc.c +@@ -248,12 +248,14 @@ static irqreturn_t at91_adc_trigger_hand + struct iio_poll_func *pf = p; + struct iio_dev *idev = pf->indio_dev; + struct at91_adc_state *st = iio_priv(idev); ++ struct iio_chan_spec const *chan; + int i, j = 0; + + for (i = 0; i < idev->masklength; i++) { + if (!test_bit(i, idev->active_scan_mask)) + continue; +- st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, i)); ++ chan = idev->channels + i; ++ st->buffer[j] = at91_adc_readl(st, AT91_ADC_CHAN(st, chan->channel)); + j++; + } + diff --git a/queue-4.19/iio-adc-imx25-gcq-fix-leak-of-device_node-in-mx25_gcq_setup_cfgs.patch b/queue-4.19/iio-adc-imx25-gcq-fix-leak-of-device_node-in-mx25_gcq_setup_cfgs.patch new file mode 100644 index 00000000000..32bb58d5c53 --- /dev/null +++ b/queue-4.19/iio-adc-imx25-gcq-fix-leak-of-device_node-in-mx25_gcq_setup_cfgs.patch @@ -0,0 +1,73 @@ +From d3fa21c73c391975488818b085b894c2980ea052 Mon Sep 17 00:00:00 2001 +From: Alexey Khoroshilov +Date: Sat, 22 Sep 2018 00:58:02 +0300 +Subject: iio: adc: imx25-gcq: Fix leak of device_node in mx25_gcq_setup_cfgs() + +From: Alexey Khoroshilov + +commit d3fa21c73c391975488818b085b894c2980ea052 upstream. + +Leaving for_each_child_of_node loop we should release child device node, +if it is not stored for future use. 
+ +Found by Linux Driver Verification project (linuxtesting.org). + +JC: I'm not sending this as a quick fix as it's been wrong for years, +but good to pick up for stable after the merge window. + +Signed-off-by: Alexey Khoroshilov +Fixes: 6df2e98c3ea56 ("iio: adc: Add imx25-gcq ADC driver") +Cc: +Signed-off-by: Jonathan Cameron +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/iio/adc/fsl-imx25-gcq.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +--- a/drivers/iio/adc/fsl-imx25-gcq.c ++++ b/drivers/iio/adc/fsl-imx25-gcq.c +@@ -209,12 +209,14 @@ static int mx25_gcq_setup_cfgs(struct pl + ret = of_property_read_u32(child, "reg", ®); + if (ret) { + dev_err(dev, "Failed to get reg property\n"); ++ of_node_put(child); + return ret; + } + + if (reg >= MX25_NUM_CFGS) { + dev_err(dev, + "reg value is greater than the number of available configuration registers\n"); ++ of_node_put(child); + return -EINVAL; + } + +@@ -228,6 +230,7 @@ static int mx25_gcq_setup_cfgs(struct pl + if (IS_ERR(priv->vref[refp])) { + dev_err(dev, "Error, trying to use external voltage reference without a vref-%s regulator.", + mx25_gcq_refp_names[refp]); ++ of_node_put(child); + return PTR_ERR(priv->vref[refp]); + } + priv->channel_vref_mv[reg] = +@@ -240,6 +243,7 @@ static int mx25_gcq_setup_cfgs(struct pl + break; + default: + dev_err(dev, "Invalid positive reference %d\n", refp); ++ of_node_put(child); + return -EINVAL; + } + +@@ -254,10 +258,12 @@ static int mx25_gcq_setup_cfgs(struct pl + + if ((refp & MX25_ADCQ_CFG_REFP_MASK) != refp) { + dev_err(dev, "Invalid fsl,adc-refp property value\n"); ++ of_node_put(child); + return -EINVAL; + } + if ((refn & MX25_ADCQ_CFG_REFN_MASK) != refn) { + dev_err(dev, "Invalid fsl,adc-refn property value\n"); ++ of_node_put(child); + return -EINVAL; + } + diff --git a/queue-4.19/ima-fix-showing-large-violations-or-runtime_measurements_count.patch b/queue-4.19/ima-fix-showing-large-violations-or-runtime_measurements_count.patch new file mode 100644 index 
00000000000..3df848e0ce2 --- /dev/null +++ b/queue-4.19/ima-fix-showing-large-violations-or-runtime_measurements_count.patch @@ -0,0 +1,41 @@ +From 1e4c8dafbb6bf72fb5eca035b861e39c5896c2b7 Mon Sep 17 00:00:00 2001 +From: Eric Biggers +Date: Fri, 7 Sep 2018 14:33:24 -0700 +Subject: ima: fix showing large 'violations' or 'runtime_measurements_count' + +From: Eric Biggers + +commit 1e4c8dafbb6bf72fb5eca035b861e39c5896c2b7 upstream. + +The 12 character temporary buffer is not necessarily long enough to hold +a 'long' value. Increase it. + +Signed-off-by: Eric Biggers +Cc: stable@vger.kernel.org +Signed-off-by: Mimi Zohar +Signed-off-by: Greg Kroah-Hartman + +--- + security/integrity/ima/ima_fs.c | 6 +++--- + 1 file changed, 3 insertions(+), 3 deletions(-) + +--- a/security/integrity/ima/ima_fs.c ++++ b/security/integrity/ima/ima_fs.c +@@ -42,14 +42,14 @@ static int __init default_canonical_fmt_ + __setup("ima_canonical_fmt", default_canonical_fmt_setup); + + static int valid_policy = 1; +-#define TMPBUFLEN 12 ++ + static ssize_t ima_show_htable_value(char __user *buf, size_t count, + loff_t *ppos, atomic_long_t *val) + { +- char tmpbuf[TMPBUFLEN]; ++ char tmpbuf[32]; /* greater than largest 'long' string value */ + ssize_t len; + +- len = scnprintf(tmpbuf, TMPBUFLEN, "%li\n", atomic_long_read(val)); ++ len = scnprintf(tmpbuf, sizeof(tmpbuf), "%li\n", atomic_long_read(val)); + return simple_read_from_buffer(buf, count, ppos, tmpbuf, len); + } + diff --git a/queue-4.19/ima-open-a-new-file-instance-if-no-read-permissions.patch b/queue-4.19/ima-open-a-new-file-instance-if-no-read-permissions.patch new file mode 100644 index 00000000000..03aa5f352bf --- /dev/null +++ b/queue-4.19/ima-open-a-new-file-instance-if-no-read-permissions.patch @@ -0,0 +1,142 @@ +From a408e4a86b36bf98ad15b9ada531cf0e5118ac67 Mon Sep 17 00:00:00 2001 +From: Goldwyn Rodrigues +Date: Tue, 9 Oct 2018 10:12:33 -0500 +Subject: ima: open a new file instance if no read permissions + +From: Goldwyn 
Rodrigues + +commit a408e4a86b36bf98ad15b9ada531cf0e5118ac67 upstream. + +Open a new file instance as opposed to changing file->f_mode when +the file is not readable. This is done to accommodate overlayfs +stacked file operations change. The real struct file is hidden +behind the overlays struct file. So, any file->f_mode manipulations are +not reflected on the real struct file. Open the file again in read mode +if the original file cannot be read, read and calculate the hash. + +Signed-off-by: Goldwyn Rodrigues +Cc: stable@vger.kernel.org (linux-4.19) +Signed-off-by: Mimi Zohar +Signed-off-by: Greg Kroah-Hartman + +--- + security/integrity/ima/ima_crypto.c | 54 ++++++++++++++++++++++-------------- + 1 file changed, 34 insertions(+), 20 deletions(-) + +--- a/security/integrity/ima/ima_crypto.c ++++ b/security/integrity/ima/ima_crypto.c +@@ -210,7 +210,7 @@ static int ima_calc_file_hash_atfm(struc + { + loff_t i_size, offset; + char *rbuf[2] = { NULL, }; +- int rc, read = 0, rbuf_len, active = 0, ahash_rc = 0; ++ int rc, rbuf_len, active = 0, ahash_rc = 0; + struct ahash_request *req; + struct scatterlist sg[1]; + struct crypto_wait wait; +@@ -257,11 +257,6 @@ static int ima_calc_file_hash_atfm(struc + &rbuf_size[1], 0); + } + +- if (!(file->f_mode & FMODE_READ)) { +- file->f_mode |= FMODE_READ; +- read = 1; +- } +- + for (offset = 0; offset < i_size; offset += rbuf_len) { + if (!rbuf[1] && offset) { + /* Not using two buffers, and it is not the first +@@ -300,8 +295,6 @@ static int ima_calc_file_hash_atfm(struc + /* wait for the last update request to complete */ + rc = ahash_wait(ahash_rc, &wait); + out3: +- if (read) +- file->f_mode &= ~FMODE_READ; + ima_free_pages(rbuf[0], rbuf_size[0]); + ima_free_pages(rbuf[1], rbuf_size[1]); + out2: +@@ -336,7 +329,7 @@ static int ima_calc_file_hash_tfm(struct + { + loff_t i_size, offset = 0; + char *rbuf; +- int rc, read = 0; ++ int rc; + SHASH_DESC_ON_STACK(shash, tfm); + + shash->tfm = tfm; +@@ -357,11 +350,6 @@ static int
ima_calc_file_hash_tfm(struct + if (!rbuf) + return -ENOMEM; + +- if (!(file->f_mode & FMODE_READ)) { +- file->f_mode |= FMODE_READ; +- read = 1; +- } +- + while (offset < i_size) { + int rbuf_len; + +@@ -378,8 +366,6 @@ static int ima_calc_file_hash_tfm(struct + if (rc) + break; + } +- if (read) +- file->f_mode &= ~FMODE_READ; + kfree(rbuf); + out: + if (!rc) +@@ -420,6 +406,8 @@ int ima_calc_file_hash(struct file *file + { + loff_t i_size; + int rc; ++ struct file *f = file; ++ bool new_file_instance = false, modified_flags = false; + + /* + * For consistency, fail file's opened with the O_DIRECT flag on +@@ -431,15 +419,41 @@ int ima_calc_file_hash(struct file *file + return -EINVAL; + } + +- i_size = i_size_read(file_inode(file)); ++ /* Open a new file instance in O_RDONLY if we cannot read */ ++ if (!(file->f_mode & FMODE_READ)) { ++ int flags = file->f_flags & ~(O_WRONLY | O_APPEND | ++ O_TRUNC | O_CREAT | O_NOCTTY | O_EXCL); ++ flags |= O_RDONLY; ++ f = dentry_open(&file->f_path, flags, file->f_cred); ++ if (IS_ERR(f)) { ++ /* ++ * Cannot open the file again, lets modify f_flags ++ * of original and continue ++ */ ++ pr_info_ratelimited("Unable to reopen file for reading.\n"); ++ f = file; ++ f->f_flags |= FMODE_READ; ++ modified_flags = true; ++ } else { ++ new_file_instance = true; ++ } ++ } ++ ++ i_size = i_size_read(file_inode(f)); + + if (ima_ahash_minsize && i_size >= ima_ahash_minsize) { +- rc = ima_calc_file_ahash(file, hash); ++ rc = ima_calc_file_ahash(f, hash); + if (!rc) +- return 0; ++ goto out; + } + +- return ima_calc_file_shash(file, hash); ++ rc = ima_calc_file_shash(f, hash); ++out: ++ if (new_file_instance) ++ fput(f); ++ else if (modified_flags) ++ f->f_flags &= ~FMODE_READ; ++ return rc; + } + + /* diff --git a/queue-4.19/iwlwifi-mvm-check-return-value-of-rs_rate_from_ucode_rate.patch b/queue-4.19/iwlwifi-mvm-check-return-value-of-rs_rate_from_ucode_rate.patch new file mode 100644 index 00000000000..75e0e36267d --- /dev/null +++ 
b/queue-4.19/iwlwifi-mvm-check-return-value-of-rs_rate_from_ucode_rate.patch @@ -0,0 +1,81 @@ +From 3d71c3f1f50cf309bd20659422af549bc784bfff Mon Sep 17 00:00:00 2001 +From: Luca Coelho +Date: Sat, 13 Oct 2018 09:46:08 +0300 +Subject: iwlwifi: mvm: check return value of rs_rate_from_ucode_rate() + +From: Luca Coelho + +commit 3d71c3f1f50cf309bd20659422af549bc784bfff upstream. + +The rs_rate_from_ucode_rate() function may return -EINVAL if the rate +is invalid, but none of the callsites check for the error, potentially +making us access arrays with index IWL_RATE_INVALID, which is larger +than the arrays, causing an out-of-bounds access. This will trigger +KASAN warnings, such as the one reported in the bugzilla issue +mentioned below. + +This fixes https://bugzilla.kernel.org/show_bug.cgi?id=200659 + +Cc: stable@vger.kernel.org +Signed-off-by: Luca Coelho +Signed-off-by: Kalle Valo +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/net/wireless/intel/iwlwifi/mvm/rs.c | 24 +++++++++++++++++++----- + 1 file changed, 19 insertions(+), 5 deletions(-) + +--- a/drivers/net/wireless/intel/iwlwifi/mvm/rs.c ++++ b/drivers/net/wireless/intel/iwlwifi/mvm/rs.c +@@ -1239,7 +1239,11 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm + !(info->flags & IEEE80211_TX_STAT_AMPDU)) + return; + +- rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, &tx_resp_rate); ++ if (rs_rate_from_ucode_rate(tx_resp_hwrate, info->band, ++ &tx_resp_rate)) { ++ WARN_ON_ONCE(1); ++ return; ++ } + + #ifdef CONFIG_MAC80211_DEBUGFS + /* Disable last tx check if we are debugging with fixed rate but +@@ -1290,7 +1294,10 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm + */ + table = &lq_sta->lq; + lq_hwrate = le32_to_cpu(table->rs_table[0]); +- rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate); ++ if (rs_rate_from_ucode_rate(lq_hwrate, info->band, &lq_rate)) { ++ WARN_ON_ONCE(1); ++ return; ++ } + + /* Here we actually compare this rate to the latest LQ command */ + if (lq_color != 
LQ_FLAG_COLOR_GET(table->flags)) { +@@ -1392,8 +1399,12 @@ void iwl_mvm_rs_tx_status(struct iwl_mvm + /* Collect data for each rate used during failed TX attempts */ + for (i = 0; i <= retries; ++i) { + lq_hwrate = le32_to_cpu(table->rs_table[i]); +- rs_rate_from_ucode_rate(lq_hwrate, info->band, +- &lq_rate); ++ if (rs_rate_from_ucode_rate(lq_hwrate, info->band, ++ &lq_rate)) { ++ WARN_ON_ONCE(1); ++ return; ++ } ++ + /* + * Only collect stats if retried rate is in the same RS + * table as active/search. +@@ -3262,7 +3273,10 @@ static void rs_build_rates_table_from_fi + for (i = 0; i < num_rates; i++) + lq_cmd->rs_table[i] = ucode_rate_le32; + +- rs_rate_from_ucode_rate(ucode_rate, band, &rate); ++ if (rs_rate_from_ucode_rate(ucode_rate, band, &rate)) { ++ WARN_ON_ONCE(1); ++ return; ++ } + + if (is_mimo(&rate)) + lq_cmd->mimo_delim = num_rates - 1; diff --git a/queue-4.19/jbd2-fix-use-after-free-in-jbd2_log_do_checkpoint.patch b/queue-4.19/jbd2-fix-use-after-free-in-jbd2_log_do_checkpoint.patch new file mode 100644 index 00000000000..35d2d99aa71 --- /dev/null +++ b/queue-4.19/jbd2-fix-use-after-free-in-jbd2_log_do_checkpoint.patch @@ -0,0 +1,69 @@ +From ccd3c4373eacb044eb3832966299d13d2631f66f Mon Sep 17 00:00:00 2001 +From: Jan Kara +Date: Fri, 5 Oct 2018 18:44:40 -0400 +Subject: jbd2: fix use after free in jbd2_log_do_checkpoint() + +From: Jan Kara + +commit ccd3c4373eacb044eb3832966299d13d2631f66f upstream. + +The code cleaning transaction's lists of checkpoint buffers has a bug +where it increases bh refcount only after releasing +journal->j_list_lock. Thus the following race is possible: + +CPU0 CPU1 +jbd2_log_do_checkpoint() + jbd2_journal_try_to_free_buffers() + __journal_try_to_free_buffer(bh) + ... + while (transaction->t_checkpoint_io_list) + ... 
+ if (buffer_locked(bh)) { + +<-- IO completes now, buffer gets unlocked --> + + spin_unlock(&journal->j_list_lock); + spin_lock(&journal->j_list_lock); + __jbd2_journal_remove_checkpoint(jh); + spin_unlock(&journal->j_list_lock); + try_to_free_buffers(page); + get_bh(bh) <-- accesses freed bh + +Fix the problem by grabbing bh reference before unlocking +journal->j_list_lock. + +Fixes: dc6e8d669cf5 ("jbd2: don't call get_bh() before calling __jbd2_journal_remove_checkpoint()") +Fixes: be1158cc615f ("jbd2: fold __process_buffer() into jbd2_log_do_checkpoint()") +Reported-by: syzbot+7f4a27091759e2fe7453@syzkaller.appspotmail.com +CC: stable@vger.kernel.org +Reviewed-by: Lukas Czerner +Signed-off-by: Jan Kara +Signed-off-by: Theodore Ts'o +Signed-off-by: Greg Kroah-Hartman + +--- + fs/jbd2/checkpoint.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/fs/jbd2/checkpoint.c ++++ b/fs/jbd2/checkpoint.c +@@ -251,8 +251,8 @@ restart: + bh = jh2bh(jh); + + if (buffer_locked(bh)) { +- spin_unlock(&journal->j_list_lock); + get_bh(bh); ++ spin_unlock(&journal->j_list_lock); + wait_on_buffer(bh); + /* the journal_head may have gone by now */ + BUFFER_TRACE(bh, "brelse"); +@@ -333,8 +333,8 @@ restart2: + jh = transaction->t_checkpoint_io_list; + bh = jh2bh(jh); + if (buffer_locked(bh)) { +- spin_unlock(&journal->j_list_lock); + get_bh(bh); ++ spin_unlock(&journal->j_list_lock); + wait_on_buffer(bh); + /* the journal_head may have gone by now */ + BUFFER_TRACE(bh, "brelse"); diff --git a/queue-4.19/kbuild-fix-kernel-bounds.c-w-1-warning.patch b/queue-4.19/kbuild-fix-kernel-bounds.c-w-1-warning.patch new file mode 100644 index 00000000000..9b3b69dd7ff --- /dev/null +++ b/queue-4.19/kbuild-fix-kernel-bounds.c-w-1-warning.patch @@ -0,0 +1,54 @@ +From 6a32c2469c3fbfee8f25bcd20af647326650a6cf Mon Sep 17 00:00:00 2001 +From: Arnd Bergmann +Date: Tue, 30 Oct 2018 15:07:32 -0700 +Subject: kbuild: fix kernel/bounds.c 'W=1' warning + +From: Arnd Bergmann + +commit 
6a32c2469c3fbfee8f25bcd20af647326650a6cf upstream. + +Building any configuration with 'make W=1' produces a warning: + +kernel/bounds.c:16:6: warning: no previous prototype for 'foo' [-Wmissing-prototypes] + +When also passing -Werror, this prevents us from building any other files. +Nobody ever calls the function, but we can't make it 'static' either +since we want the compiler output. + +Calling it 'main' instead however avoids the warning, because gcc +does not insist on having a declaration for main. + +Link: http://lkml.kernel.org/r/20181005083313.2088252-1-arnd@arndb.de +Signed-off-by: Arnd Bergmann +Reported-by: Kieran Bingham +Reviewed-by: Kieran Bingham +Cc: David Laight +Cc: Masahiro Yamada +Cc: Greg Kroah-Hartman +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/bounds.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/kernel/bounds.c ++++ b/kernel/bounds.c +@@ -13,7 +13,7 @@ + #include + #include + +-void foo(void) ++int main(void) + { + /* The enum constants to put into include/generated/bounds.h */ + DEFINE(NR_PAGEFLAGS, __NR_PAGEFLAGS); +@@ -23,4 +23,6 @@ void foo(void) + #endif + DEFINE(SPINLOCK_SIZE, sizeof(spinlock_t)); + /* End of constants */ ++ ++ return 0; + } diff --git a/queue-4.19/kvm-arm-arm64-ensure-only-thp-is-candidate-for-adjustment.patch b/queue-4.19/kvm-arm-arm64-ensure-only-thp-is-candidate-for-adjustment.patch new file mode 100644 index 00000000000..ccc461b9313 --- /dev/null +++ b/queue-4.19/kvm-arm-arm64-ensure-only-thp-is-candidate-for-adjustment.patch @@ -0,0 +1,48 @@ +From fd2ef358282c849c193aa36dadbf4f07f7dcd29b Mon Sep 17 00:00:00 2001 +From: Punit Agrawal +Date: Mon, 1 Oct 2018 16:54:35 +0100 +Subject: KVM: arm/arm64: Ensure only THP is candidate for adjustment + +From: Punit Agrawal + +commit fd2ef358282c849c193aa36dadbf4f07f7dcd29b upstream. + +PageTransCompoundMap() returns true for hugetlbfs and THP +hugepages. 
This behaviour incorrectly leads to stage 2 faults for +unsupported hugepage sizes (e.g., 64K hugepage with 4K pages) to be +treated as THP faults. + +Tighten the check to filter out hugetlbfs pages. This also leads to +consistently mapping all unsupported hugepage sizes as PTE level +entries at stage 2. + +Signed-off-by: Punit Agrawal +Reviewed-by: Suzuki Poulose +Cc: Christoffer Dall +Cc: Marc Zyngier +Cc: stable@vger.kernel.org # v4.13+ +Signed-off-by: Marc Zyngier +Signed-off-by: Greg Kroah-Hartman + +--- + virt/kvm/arm/mmu.c | 8 +++++++- + 1 file changed, 7 insertions(+), 1 deletion(-) + +--- a/virt/kvm/arm/mmu.c ++++ b/virt/kvm/arm/mmu.c +@@ -1230,8 +1230,14 @@ static bool transparent_hugepage_adjust( + { + kvm_pfn_t pfn = *pfnp; + gfn_t gfn = *ipap >> PAGE_SHIFT; ++ struct page *page = pfn_to_page(pfn); + +- if (PageTransCompoundMap(pfn_to_page(pfn))) { ++ /* ++ * PageTransCompoungMap() returns true for THP and ++ * hugetlbfs. Make sure the adjustment is done only for THP ++ * pages. ++ */ ++ if (!PageHuge(page) && PageTransCompoundMap(page)) { + unsigned long mask; + /* + * The address we faulted on is backed by a transparent huge diff --git a/queue-4.19/kvm-arm64-fix-caching-of-host-mdcr_el2-value.patch b/queue-4.19/kvm-arm64-fix-caching-of-host-mdcr_el2-value.patch new file mode 100644 index 00000000000..b03008c0b11 --- /dev/null +++ b/queue-4.19/kvm-arm64-fix-caching-of-host-mdcr_el2-value.patch @@ -0,0 +1,57 @@ +From da5a3ce66b8bb51b0ea8a89f42aac153903f90fb Mon Sep 17 00:00:00 2001 +From: Mark Rutland +Date: Wed, 17 Oct 2018 17:42:10 +0100 +Subject: KVM: arm64: Fix caching of host MDCR_EL2 value + +From: Mark Rutland + +commit da5a3ce66b8bb51b0ea8a89f42aac153903f90fb upstream. + +At boot time, KVM stashes the host MDCR_EL2 value, but only does this +when the kernel is not running in hyp mode (i.e. is non-VHE). In these +cases, the stashed value of MDCR_EL2.HPMN happens to be zero, which can +lead to CONSTRAINED UNPREDICTABLE behaviour. 
+ +Since we use this value to derive the MDCR_EL2 value when switching +to/from a guest, after a guest has been run, the performance counters +do not behave as expected. This has been observed to result in accesses +via PMXEVTYPER_EL0 and PMXEVCNTR_EL0 not affecting the relevant +counters, resulting in events not being counted. In these cases, only +the fixed-purpose cycle counter appears to work as expected. + +Fix this by always stashing the host MDCR_EL2 value, regardless of VHE. + +Cc: Christopher Dall +Cc: James Morse +Cc: Will Deacon +Cc: stable@vger.kernel.org +Fixes: 1e947bad0b63b351 ("arm64: KVM: Skip HYP setup when already running in HYP") +Tested-by: Robin Murphy +Signed-off-by: Mark Rutland +Signed-off-by: Marc Zyngier +Signed-off-by: Greg Kroah-Hartman + +--- + virt/kvm/arm/arm.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/virt/kvm/arm/arm.c ++++ b/virt/kvm/arm/arm.c +@@ -1295,8 +1295,6 @@ static void cpu_init_hyp_mode(void *dumm + + __cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr); + __cpu_init_stage2(); +- +- kvm_arm_init_debug(); + } + + static void cpu_hyp_reset(void) +@@ -1320,6 +1318,8 @@ static void cpu_hyp_reinit(void) + cpu_init_hyp_mode(NULL); + } + ++ kvm_arm_init_debug(); ++ + if (vgic_present) + kvm_vgic_init_cpu_hardware(); + } diff --git a/queue-4.19/libertas-don-t-set-urb_zero_packet-on-in-usb-transfer.patch b/queue-4.19/libertas-don-t-set-urb_zero_packet-on-in-usb-transfer.patch new file mode 100644 index 00000000000..f64b8286dce --- /dev/null +++ b/queue-4.19/libertas-don-t-set-urb_zero_packet-on-in-usb-transfer.patch @@ -0,0 +1,64 @@ +From 6528d88047801b80d2a5370ad46fb6eff2f509e0 Mon Sep 17 00:00:00 2001 +From: Lubomir Rintel +Date: Sat, 6 Oct 2018 22:12:32 +0200 +Subject: libertas: don't set URB_ZERO_PACKET on IN USB transfer + +From: Lubomir Rintel + +commit 6528d88047801b80d2a5370ad46fb6eff2f509e0 upstream.
+ +The USB core gets rightfully upset: + + usb 1-1: BOGUS urb flags, 240 --> 200 + WARNING: CPU: 0 PID: 60 at drivers/usb/core/urb.c:503 usb_submit_urb+0x2f8/0x3ed + Modules linked in: + CPU: 0 PID: 60 Comm: kworker/0:3 Not tainted 4.19.0-rc6-00319-g5206d00a45c7 #39 + Hardware name: OLPC XO/XO, BIOS OLPC Ver 1.00.01 06/11/2014 + Workqueue: events request_firmware_work_func + EIP: usb_submit_urb+0x2f8/0x3ed + Code: 75 06 8b 8f 80 00 00 00 8d 47 78 89 4d e4 89 55 e8 e8 35 1c f6 ff 8b 55 e8 56 52 8b 4d e4 51 50 68 e3 ce c7 c0 e8 ed 18 c6 ff <0f> 0b 83 c4 14 80 7d ef 01 74 0a 80 7d ef 03 0f 85 b8 00 00 00 8b + EAX: 00000025 EBX: ce7d4980 ECX: 00000000 EDX: 00000001 + ESI: 00000200 EDI: ce7d8800 EBP: ce7f5ea8 ESP: ce7f5e70 + DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00210292 + CR0: 80050033 CR2: 00000000 CR3: 00e80000 CR4: 00000090 + Call Trace: + ? if_usb_fw_timeo+0x64/0x64 + __if_usb_submit_rx_urb+0x85/0xe6 + ? if_usb_fw_timeo+0x64/0x64 + if_usb_submit_rx_urb_fwload+0xd/0xf + if_usb_prog_firmware+0xc0/0x3db + ? _request_firmware+0x54/0x47b + ? _request_firmware+0x89/0x47b + ? if_usb_probe+0x412/0x412 + lbs_fw_loaded+0x55/0xa6 + ? debug_smp_processor_id+0x12/0x14 + helper_firmware_cb+0x3c/0x3f + request_firmware_work_func+0x37/0x6f + process_one_work+0x164/0x25a + worker_thread+0x1c4/0x284 + kthread+0xec/0xf1 + ? cancel_delayed_work_sync+0xf/0xf + ? 
kthread_create_on_node+0x1a/0x1a + ret_from_fork+0x2e/0x38 + ---[ end trace 3ef1e3b2dd53852f ]--- + +Cc: stable@vger.kernel.org +Signed-off-by: Lubomir Rintel +Signed-off-by: Kalle Valo +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/net/wireless/marvell/libertas/if_usb.c | 2 -- + 1 file changed, 2 deletions(-) + +--- a/drivers/net/wireless/marvell/libertas/if_usb.c ++++ b/drivers/net/wireless/marvell/libertas/if_usb.c +@@ -456,8 +456,6 @@ static int __if_usb_submit_rx_urb(struct + MRVDRV_ETH_RX_PACKET_BUFFER_SIZE, callbackfn, + cardp); + +- cardp->rx_urb->transfer_flags |= URB_ZERO_PACKET; +- + lbs_deb_usb2(&cardp->udev->dev, "Pointer for rx_urb %p\n", cardp->rx_urb); + if ((ret = usb_submit_urb(cardp->rx_urb, GFP_ATOMIC))) { + lbs_deb_usbd(&cardp->udev->dev, "Submit Rx URB failed: %d\n", ret); diff --git a/queue-4.19/libnvdimm-hold-reference-on-parent-while-scheduling-async-init.patch b/queue-4.19/libnvdimm-hold-reference-on-parent-while-scheduling-async-init.patch new file mode 100644 index 00000000000..0fcd98091e5 --- /dev/null +++ b/queue-4.19/libnvdimm-hold-reference-on-parent-while-scheduling-async-init.patch @@ -0,0 +1,46 @@ +From b6eae0f61db27748606cc00dafcfd1e2c032f0a5 Mon Sep 17 00:00:00 2001 +From: Alexander Duyck +Date: Tue, 25 Sep 2018 13:53:02 -0700 +Subject: libnvdimm: Hold reference on parent while scheduling async init + +From: Alexander Duyck + +commit b6eae0f61db27748606cc00dafcfd1e2c032f0a5 upstream. + +Unlike asynchronous initialization in the core we have not yet associated +the device with the parent, and as such the device doesn't hold a reference +to the parent. + +In order to resolve that we should be holding a reference on the parent +until the asynchronous initialization has completed. + +Cc: +Fixes: 4d88a97aa9e8 ("libnvdimm: ...base ... 
infrastructure") +Signed-off-by: Alexander Duyck +Signed-off-by: Dan Williams +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/nvdimm/bus.c | 4 ++++ + 1 file changed, 4 insertions(+) + +--- a/drivers/nvdimm/bus.c ++++ b/drivers/nvdimm/bus.c +@@ -488,6 +488,8 @@ static void nd_async_device_register(voi + put_device(dev); + } + put_device(dev); ++ if (dev->parent) ++ put_device(dev->parent); + } + + static void nd_async_device_unregister(void *d, async_cookie_t cookie) +@@ -507,6 +509,8 @@ void __nd_device_register(struct device + if (!dev) + return; + dev->bus = &nvdimm_bus_type; ++ if (dev->parent) ++ get_device(dev->parent); + get_device(dev); + async_schedule_domain(nd_async_device_register, dev, + &nd_async_domain); diff --git a/queue-4.19/libnvdimm-pmem-fix-badblocks-population-for-raw-namespaces.patch b/queue-4.19/libnvdimm-pmem-fix-badblocks-population-for-raw-namespaces.patch new file mode 100644 index 00000000000..61b06869c0a --- /dev/null +++ b/queue-4.19/libnvdimm-pmem-fix-badblocks-population-for-raw-namespaces.patch @@ -0,0 +1,41 @@ +From 91ed7ac444ef749603a95629a5ec483988c4f14b Mon Sep 17 00:00:00 2001 +From: Dan Williams +Date: Thu, 4 Oct 2018 16:32:08 -0700 +Subject: libnvdimm, pmem: Fix badblocks population for 'raw' namespaces + +From: Dan Williams + +commit 91ed7ac444ef749603a95629a5ec483988c4f14b upstream. + +The driver is only initializing bb_res in the devm_memremap_pages() +paths, but the raw namespace case is passing an uninitialized bb_res to +nvdimm_badblocks_populate(). 
+ +Fixes: e8d513483300 ("memremap: change devm_memremap_pages interface...") +Cc: +Cc: Christoph Hellwig +Reported-by: Jacek Zloch +Reported-by: Krzysztof Rusocki +Reviewed-by: Vishal Verma +Signed-off-by: Dan Williams +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/nvdimm/pmem.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/drivers/nvdimm/pmem.c ++++ b/drivers/nvdimm/pmem.c +@@ -421,9 +421,11 @@ static int pmem_attach_disk(struct devic + addr = devm_memremap_pages(dev, &pmem->pgmap); + pmem->pfn_flags |= PFN_MAP; + memcpy(&bb_res, &pmem->pgmap.res, sizeof(bb_res)); +- } else ++ } else { + addr = devm_memremap(dev, pmem->phys_addr, + pmem->size, ARCH_MEMREMAP_PMEM); ++ memcpy(&bb_res, &nsio->res, sizeof(bb_res)); ++ } + + /* + * At release time the queue must be frozen before diff --git a/queue-4.19/libnvdimm-region-fail-badblocks-listing-for-inactive-regions.patch b/queue-4.19/libnvdimm-region-fail-badblocks-listing-for-inactive-regions.patch new file mode 100644 index 00000000000..2636652a05f --- /dev/null +++ b/queue-4.19/libnvdimm-region-fail-badblocks-listing-for-inactive-regions.patch @@ -0,0 +1,69 @@ +From 5d394eee2c102453278d81d9a7cf94c80253486a Mon Sep 17 00:00:00 2001 +From: Dan Williams +Date: Thu, 27 Sep 2018 15:01:55 -0700 +Subject: libnvdimm, region: Fail badblocks listing for inactive regions + +From: Dan Williams + +commit 5d394eee2c102453278d81d9a7cf94c80253486a upstream. + +While experimenting with region driver loading the following backtrace +was triggered: + + INFO: trying to register non-static key. + the code is fine but needs lockdep annotation. + turning off the locking correctness validator. + [..] + Call Trace: + dump_stack+0x85/0xcb + register_lock_class+0x571/0x580 + ? __lock_acquire+0x2ba/0x1310 + ? kernfs_seq_start+0x2a/0x80 + __lock_acquire+0xd4/0x1310 + ? dev_attr_show+0x1c/0x50 + ? __lock_acquire+0x2ba/0x1310 + ? kernfs_seq_start+0x2a/0x80 + ? lock_acquire+0x9e/0x1a0 + lock_acquire+0x9e/0x1a0 + ? 
dev_attr_show+0x1c/0x50 + badblocks_show+0x70/0x190 + ? dev_attr_show+0x1c/0x50 + dev_attr_show+0x1c/0x50 + +This results from a missing successful call to devm_init_badblocks() +from nd_region_probe(). Block attempts to show badblocks while the +region is not enabled. + +Fixes: 6a6bef90425e ("libnvdimm: add mechanism to publish badblocks...") +Cc: +Reviewed-by: Johannes Thumshirn +Reviewed-by: Dave Jiang +Signed-off-by: Dan Williams +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/nvdimm/region_devs.c | 11 +++++++++-- + 1 file changed, 9 insertions(+), 2 deletions(-) + +--- a/drivers/nvdimm/region_devs.c ++++ b/drivers/nvdimm/region_devs.c +@@ -560,10 +560,17 @@ static ssize_t region_badblocks_show(str + struct device_attribute *attr, char *buf) + { + struct nd_region *nd_region = to_nd_region(dev); ++ ssize_t rc; + +- return badblocks_show(&nd_region->bb, buf, 0); +-} ++ device_lock(dev); ++ if (dev->driver) ++ rc = badblocks_show(&nd_region->bb, buf, 0); ++ else ++ rc = -ENXIO; ++ device_unlock(dev); + ++ return rc; ++} + static DEVICE_ATTR(badblocks, 0444, region_badblocks_show, NULL); + + static ssize_t resource_show(struct device *dev, diff --git a/queue-4.19/mm-hmm-fix-race-between-hmm_mirror_unregister-and-mmu_notifier-callback.patch b/queue-4.19/mm-hmm-fix-race-between-hmm_mirror_unregister-and-mmu_notifier-callback.patch new file mode 100644 index 00000000000..9fbf64a37ee --- /dev/null +++ b/queue-4.19/mm-hmm-fix-race-between-hmm_mirror_unregister-and-mmu_notifier-callback.patch @@ -0,0 +1,102 @@ +From 86a2d59841ab0b147ffc1b7b3041af87927cf312 Mon Sep 17 00:00:00 2001 +From: Ralph Campbell +Date: Tue, 30 Oct 2018 15:04:14 -0700 +Subject: mm/hmm: fix race between hmm_mirror_unregister() and mmu_notifier callback +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Ralph Campbell + +commit 86a2d59841ab0b147ffc1b7b3041af87927cf312 upstream. 
+ +In hmm_mirror_unregister(), mm->hmm is set to NULL and then +mmu_notifier_unregister_no_release() is called. That creates a small +window where mmu_notifier can call mmu_notifier_ops with mm->hmm equal to +NULL. Fix this by first unregistering mmu notifier callbacks and then +setting mm->hmm to NULL. + +Similarly in hmm_register(), set mm->hmm before registering mmu_notifier +callbacks so callback functions always see mm->hmm set. + +Link: http://lkml.kernel.org/r/20181019160442.18723-4-jglisse@redhat.com +Signed-off-by: Ralph Campbell +Signed-off-by: Jérôme Glisse +Reviewed-by: John Hubbard +Reviewed-by: Jérôme Glisse +Reviewed-by: Balbir Singh +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/hmm.c | 36 +++++++++++++++++++++--------------- + 1 file changed, 21 insertions(+), 15 deletions(-) + +--- a/mm/hmm.c ++++ b/mm/hmm.c +@@ -91,16 +91,6 @@ static struct hmm *hmm_register(struct m + spin_lock_init(&hmm->lock); + hmm->mm = mm; + +- /* +- * We should only get here if hold the mmap_sem in write mode ie on +- * registration of first mirror through hmm_mirror_register() +- */ +- hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops; +- if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) { +- kfree(hmm); +- return NULL; +- } +- + spin_lock(&mm->page_table_lock); + if (!mm->hmm) + mm->hmm = hmm; +@@ -108,12 +98,27 @@ static struct hmm *hmm_register(struct m + cleanup = true; + spin_unlock(&mm->page_table_lock); + +- if (cleanup) { +- mmu_notifier_unregister(&hmm->mmu_notifier, mm); +- kfree(hmm); +- } ++ if (cleanup) ++ goto error; ++ ++ /* ++ * We should only get here if hold the mmap_sem in write mode ie on ++ * registration of first mirror through hmm_mirror_register() ++ */ ++ hmm->mmu_notifier.ops = &hmm_mmu_notifier_ops; ++ if (__mmu_notifier_register(&hmm->mmu_notifier, mm)) ++ goto error_mm; + + return mm->hmm; ++ ++error_mm: ++ spin_lock(&mm->page_table_lock); ++ if (mm->hmm == hmm) ++ mm->hmm = 
NULL; ++ spin_unlock(&mm->page_table_lock); ++error: ++ kfree(hmm); ++ return NULL; + } + + void hmm_mm_destroy(struct mm_struct *mm) +@@ -278,12 +283,13 @@ void hmm_mirror_unregister(struct hmm_mi + if (!should_unregister || mm == NULL) + return; + ++ mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm); ++ + spin_lock(&mm->page_table_lock); + if (mm->hmm == hmm) + mm->hmm = NULL; + spin_unlock(&mm->page_table_lock); + +- mmu_notifier_unregister_no_release(&hmm->mmu_notifier, mm); + kfree(hmm); + } + EXPORT_SYMBOL(hmm_mirror_unregister); diff --git a/queue-4.19/mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch b/queue-4.19/mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch new file mode 100644 index 00000000000..b9de27ab603 --- /dev/null +++ b/queue-4.19/mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch @@ -0,0 +1,111 @@ +From fa76da461bb0be13c8339d984dcf179151167c8f Mon Sep 17 00:00:00 2001 +From: Vlastimil Babka +Date: Fri, 26 Oct 2018 15:02:16 -0700 +Subject: mm: /proc/pid/smaps_rollup: fix NULL pointer deref in smaps_pte_range() +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Vlastimil Babka + +commit fa76da461bb0be13c8339d984dcf179151167c8f upstream. 
+ +Leonardo reports an apparent regression in 4.19-rc7: + + BUG: unable to handle kernel NULL pointer dereference at 00000000000000f0 + PGD 0 P4D 0 + Oops: 0000 [#1] PREEMPT SMP PTI + CPU: 3 PID: 6032 Comm: python Not tainted 4.19.0-041900rc7-lowlatency #201810071631 + Hardware name: LENOVO 80UG/Toronto 4A2, BIOS 0XCN45WW 08/09/2018 + RIP: 0010:smaps_pte_range+0x32d/0x540 + Code: 80 00 00 00 00 74 a9 48 89 de 41 f6 40 52 40 0f 85 04 02 00 00 49 2b 30 48 c1 ee 0c 49 03 b0 98 00 00 00 49 8b 80 a0 00 00 00 <48> 8b b8 f0 00 00 00 e8 b7 ef ec ff 48 85 c0 0f 84 71 ff ff ff a8 + RSP: 0018:ffffb0cbc484fb88 EFLAGS: 00010202 + RAX: 0000000000000000 RBX: 0000560ddb9e9000 RCX: 0000000000000000 + RDX: 0000000000000000 RSI: 0000000560ddb9e9 RDI: 0000000000000001 + RBP: ffffb0cbc484fbc0 R08: ffff94a5a227a578 R09: ffff94a5a227a578 + R10: 0000000000000000 R11: 0000560ddbbe7000 R12: ffffe903098ba728 + R13: ffffb0cbc484fc78 R14: ffffb0cbc484fcf8 R15: ffff94a5a2e9cf48 + FS: 00007f6dfb683740(0000) GS:ffff94a5aaf80000(0000) knlGS:0000000000000000 + CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 + CR2: 00000000000000f0 CR3: 000000011c118001 CR4: 00000000003606e0 + DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 + DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 + Call Trace: + __walk_page_range+0x3c2/0x6f0 + walk_page_vma+0x42/0x60 + smap_gather_stats+0x79/0xe0 + ? gather_pte_stats+0x320/0x320 + ? gather_hugetlb_stats+0x70/0x70 + show_smaps_rollup+0xcd/0x1c0 + seq_read+0x157/0x400 + __vfs_read+0x3a/0x180 + ? security_file_permission+0x93/0xc0 + ? 
security_file_permission+0x93/0xc0 + vfs_read+0x8f/0x140 + ksys_read+0x55/0xc0 + __x64_sys_read+0x1a/0x20 + do_syscall_64+0x5a/0x110 + entry_SYSCALL_64_after_hwframe+0x44/0xa9 + +Decoded code matched to local compilation+disassembly points to +smaps_pte_entry(): + + } else if (unlikely(IS_ENABLED(CONFIG_SHMEM) && mss->check_shmem_swap + && pte_none(*pte))) { + page = find_get_entry(vma->vm_file->f_mapping, + linear_page_index(vma, addr)); + +Here, vma->vm_file is NULL. mss->check_shmem_swap should be false in that +case; however, for smaps_rollup, smap_gather_stats() can set the flag true +for one vma and leave it true for subsequent vma's where it should be +false. + +To fix, reset the check_shmem_swap flag to false. There's also a related +bug which sets mss->swap to shmem_swapped, which in the context of +smaps_rollup overwrites any value accumulated from previous vma's. Fix +that as well. + +Note that the report suggests a regression between 4.17.19 and 4.19-rc7, +which makes the 4.19 series ending with commit 258f669e7e88 ("mm: +/proc/pid/smaps_rollup: convert to single value seq_file") suspicious. +But the mss was reused for rollup since 493b0e9d945f ("mm: add +/proc/pid/smaps_rollup") so let's play it safe with the stable backport.
+ +Link: http://lkml.kernel.org/r/555fbd1f-4ac9-0b58-dcd4-5dc4380ff7ca@suse.cz +Link: https://bugzilla.kernel.org/show_bug.cgi?id=201377 +Fixes: 493b0e9d945f ("mm: add /proc/pid/smaps_rollup") +Signed-off-by: Vlastimil Babka +Reported-by: Leonardo Soares Müller +Tested-by: Leonardo Soares Müller +Cc: Greg Kroah-Hartman +Cc: Daniel Colascione +Cc: Alexey Dobriyan +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + fs/proc/task_mmu.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +--- a/fs/proc/task_mmu.c ++++ b/fs/proc/task_mmu.c +@@ -713,6 +713,8 @@ static void smap_gather_stats(struct vm_ + smaps_walk.private = mss; + + #ifdef CONFIG_SHMEM ++ /* In case of smaps_rollup, reset the value from previous vma */ ++ mss->check_shmem_swap = false; + if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) { + /* + * For shared or readonly shmem mappings we know that all +@@ -728,7 +730,7 @@ static void smap_gather_stats(struct vm_ + + if (!shmem_swapped || (vma->vm_flags & VM_SHARED) || + !(vma->vm_flags & VM_WRITE)) { +- mss->swap = shmem_swapped; ++ mss->swap += shmem_swapped; + } else { + mss->check_shmem_swap = true; + smaps_walk.pte_hole = smaps_pte_hole; diff --git a/queue-4.19/mm-rmap-map_pte-was-not-handling-private-zone_device-page-properly.patch b/queue-4.19/mm-rmap-map_pte-was-not-handling-private-zone_device-page-properly.patch new file mode 100644 index 00000000000..69ffe99081e --- /dev/null +++ b/queue-4.19/mm-rmap-map_pte-was-not-handling-private-zone_device-page-properly.patch @@ -0,0 +1,69 @@ +From aab8d0520e6e7c2a61f71195e6ce7007a4843afb Mon Sep 17 00:00:00 2001 +From: Ralph Campbell +Date: Tue, 30 Oct 2018 15:04:11 -0700 +Subject: mm/rmap: map_pte() was not handling private ZONE_DEVICE page properly +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Ralph Campbell + +commit aab8d0520e6e7c2a61f71195e6ce7007a4843afb upstream. 
+ +Private ZONE_DEVICE pages use a special pte entry and thus are not +present. Properly handle this case in map_pte(); it is already handled in +check_pte(). The map_pte() part was most probably lost in some rebase. + +Without this patch the slow migration path cannot migrate any private +ZONE_DEVICE memory back to regular memory. This was found after stress +testing migration back to system memory. This can ultimately lead to the +CPU constantly page fault looping on the special swap entry. + +Link: http://lkml.kernel.org/r/20181019160442.18723-3-jglisse@redhat.com +Signed-off-by: Ralph Campbell +Signed-off-by: Jérôme Glisse +Reviewed-by: Balbir Singh +Cc: Andrew Morton +Cc: Kirill A. Shutemov +Cc: +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + mm/page_vma_mapped.c | 24 +++++++++++++++++++++++- + 1 file changed, 23 insertions(+), 1 deletion(-) + +--- a/mm/page_vma_mapped.c ++++ b/mm/page_vma_mapped.c +@@ -21,7 +21,29 @@ static bool map_pte(struct page_vma_mapp + if (!is_swap_pte(*pvmw->pte)) + return false; + } else { +- if (!pte_present(*pvmw->pte)) ++ /* ++ * We get here when we are trying to unmap a private ++ * device page from the process address space. Such ++ * page is not CPU accessible and thus is mapped as ++ * a special swap entry, nonetheless it still does ++ * count as a valid regular mapping for the page (and ++ * is accounted as such in page maps count). ++ * ++ * So handle this special case as if it was a normal ++ * page mapping ie lock CPU page table and returns ++ * true. ++ * ++ * For more details on device private memory see HMM ++ * (include/linux/hmm.h or mm/hmm.c).
++ */ ++ if (is_swap_pte(*pvmw->pte)) { ++ swp_entry_t entry; ++ ++ /* Handle un-addressable ZONE_DEVICE memory */ ++ entry = pte_to_swp_entry(*pvmw->pte); ++ if (!is_device_private_entry(entry)) ++ return false; ++ } else if (!pte_present(*pvmw->pte)) + return false; + } + } diff --git a/queue-4.19/mt76-mt76x2-fix-multi-interface-beacon-configuration.patch b/queue-4.19/mt76-mt76x2-fix-multi-interface-beacon-configuration.patch new file mode 100644 index 00000000000..4db4ea22afc --- /dev/null +++ b/queue-4.19/mt76-mt76x2-fix-multi-interface-beacon-configuration.patch @@ -0,0 +1,36 @@ +From 5289976ad887deb07c76df7eecf553c264aeebed Mon Sep 17 00:00:00 2001 +From: Felix Fietkau +Date: Mon, 1 Oct 2018 13:24:00 +0200 +Subject: mt76: mt76x2: fix multi-interface beacon configuration + +From: Felix Fietkau + +commit 5289976ad887deb07c76df7eecf553c264aeebed upstream. + +If the first virtual interface is a station (or an AP with beacons +temporarily disabled), the beacon of the second interface needs to +occupy hardware beacon slot 0. +For some reason the beacon index was incorrectly masked with the +virtual interface beacon mask, which prevents the secondary +interface from sending beacons unless the first one also does. 
+ +Cc: stable@vger.kernel.org +Signed-off-by: Felix Fietkau +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/net/wireless/mediatek/mt76/mt76x2_mac.c | 3 +-- + 1 file changed, 1 insertion(+), 2 deletions(-) + +--- a/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c ++++ b/drivers/net/wireless/mediatek/mt76/mt76x2_mac.c +@@ -128,8 +128,7 @@ __mt76x2_mac_set_beacon(struct mt76x2_de + if (skb) { + ret = mt76_write_beacon(dev, beacon_addr, skb); + if (!ret) +- dev->beacon_data_mask |= BIT(bcn_idx) & +- dev->beacon_mask; ++ dev->beacon_data_mask |= BIT(bcn_idx); + } else { + dev->beacon_data_mask &= ~BIT(bcn_idx); + for (i = 0; i < beacon_len; i += 4) diff --git a/queue-4.19/net-ipv4-defensive-cipso-option-parsing.patch b/queue-4.19/net-ipv4-defensive-cipso-option-parsing.patch new file mode 100644 index 00000000000..5eb3223e3f7 --- /dev/null +++ b/queue-4.19/net-ipv4-defensive-cipso-option-parsing.patch @@ -0,0 +1,66 @@ +From 076ed3da0c9b2f88d9157dbe7044a45641ae369e Mon Sep 17 00:00:00 2001 +From: Stefan Nuernberger +Date: Mon, 17 Sep 2018 19:46:53 +0200 +Subject: net/ipv4: defensive cipso option parsing + +From: Stefan Nuernberger + +commit 076ed3da0c9b2f88d9157dbe7044a45641ae369e upstream. + +commit 40413955ee26 ("Cipso: cipso_v4_optptr enter infinite loop") fixed +a possible infinite loop in the IP option parsing of CIPSO. The fix +assumes that ip_options_compile filtered out all zero length options and +that no other one-byte options beside IPOPT_END and IPOPT_NOOP exist. +While this assumption currently holds true, add explicit checks for zero +length and invalid length options to be safe for the future. Even though +ip_options_compile should have validated the options, the introduction of +new one-byte options can still confuse this code without the additional +checks. + +Signed-off-by: Stefan Nuernberger +Cc: David Woodhouse +Cc: Simon Veith +Cc: stable@vger.kernel.org +Acked-by: Paul Moore +Signed-off-by: David S. 
Miller +Signed-off-by: Greg Kroah-Hartman + +--- + net/ipv4/cipso_ipv4.c | 11 +++++++---- + 1 file changed, 7 insertions(+), 4 deletions(-) + +--- a/net/ipv4/cipso_ipv4.c ++++ b/net/ipv4/cipso_ipv4.c +@@ -1512,7 +1512,7 @@ static int cipso_v4_parsetag_loc(const s + * + * Description: + * Parse the packet's IP header looking for a CIPSO option. Returns a pointer +- * to the start of the CIPSO option on success, NULL if one if not found. ++ * to the start of the CIPSO option on success, NULL if one is not found. + * + */ + unsigned char *cipso_v4_optptr(const struct sk_buff *skb) +@@ -1522,10 +1522,8 @@ unsigned char *cipso_v4_optptr(const str + int optlen; + int taglen; + +- for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 0; ) { ++ for (optlen = iph->ihl*4 - sizeof(struct iphdr); optlen > 1; ) { + switch (optptr[0]) { +- case IPOPT_CIPSO: +- return optptr; + case IPOPT_END: + return NULL; + case IPOPT_NOOP: +@@ -1534,6 +1532,11 @@ unsigned char *cipso_v4_optptr(const str + default: + taglen = optptr[1]; + } ++ if (!taglen || taglen > optlen) ++ return NULL; ++ if (optptr[0] == IPOPT_CIPSO) ++ return optptr; ++ + optlen -= taglen; + optptr += taglen; + } diff --git a/queue-4.19/opp-free-opp-table-properly-on-performance-state-irregularities.patch b/queue-4.19/opp-free-opp-table-properly-on-performance-state-irregularities.patch new file mode 100644 index 00000000000..c3ca198f156 --- /dev/null +++ b/queue-4.19/opp-free-opp-table-properly-on-performance-state-irregularities.patch @@ -0,0 +1,33 @@ +From 2fbb8670b4ff4454f1c0de510f788d737edc4b90 Mon Sep 17 00:00:00 2001 +From: Viresh Kumar +Date: Tue, 11 Sep 2018 11:14:34 +0530 +Subject: OPP: Free OPP table properly on performance state irregularities + +From: Viresh Kumar + +commit 2fbb8670b4ff4454f1c0de510f788d737edc4b90 upstream. + +The OPP table was freed, but not the individual OPPs which is done from +_dev_pm_opp_remove_table(). Fix it by calling _dev_pm_opp_remove_table() +as well. 
+ +Cc: 4.18 # v4.18 +Fixes: 3ba98324e81a ("PM / OPP: Get performance state using genpd helper") +Tested-by: Niklas Cassel +Signed-off-by: Viresh Kumar +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/opp/of.c | 1 + + 1 file changed, 1 insertion(+) + +--- a/drivers/opp/of.c ++++ b/drivers/opp/of.c +@@ -425,6 +425,7 @@ static int _of_add_opp_table_v2(struct d + dev_err(dev, "Not all nodes have performance state set (%d: %d)\n", + count, pstate_count); + ret = -ENOENT; ++ _dev_pm_opp_remove_table(opp_table, dev, false); + goto put_opp_table; + } + diff --git a/queue-4.19/pci-add-device-ids-for-intel-gpu-spurious-interrupt-quirk.patch b/queue-4.19/pci-add-device-ids-for-intel-gpu-spurious-interrupt-quirk.patch new file mode 100644 index 00000000000..a46aa091ded --- /dev/null +++ b/queue-4.19/pci-add-device-ids-for-intel-gpu-spurious-interrupt-quirk.patch @@ -0,0 +1,51 @@ +From d0c9606b31a21028fb5b753c8ad79626292accfd Mon Sep 17 00:00:00 2001 +From: Bin Meng +Date: Wed, 26 Sep 2018 08:14:01 -0700 +Subject: PCI: Add Device IDs for Intel GPU "spurious interrupt" quirk + +From: Bin Meng + +commit d0c9606b31a21028fb5b753c8ad79626292accfd upstream. + +Add Device IDs to the Intel GPU "spurious interrupt" quirk table. + +For these devices, unplugging the VGA cable and plugging it in again causes +spurious interrupts from the IGD. Linux eventually disables the interrupt, +but of course that disables any other devices sharing the interrupt. + +The theory is that this is a VGA BIOS defect: it should have disabled the +IGD interrupt but failed to do so. + +See f67fd55fa96f ("PCI: Add quirk for still enabled interrupts on Intel +Sandy Bridge GPUs") and 7c82126a94e6 ("PCI: Add new ID for Intel GPU +"spurious interrupt" quirk") for some history. + +[bhelgaas: See link below for discussion about how to fix this more +generically instead of adding device IDs for every new Intel GPU. I hope +this is the last patch to add device IDs.] 
+ +Link: https://lore.kernel.org/linux-pci/1537974841-29928-1-git-send-email-bmeng.cn@gmail.com +Signed-off-by: Bin Meng +[bhelgaas: changelog] +Signed-off-by: Bjorn Helgaas +Cc: stable@vger.kernel.org # v3.4+ +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/pci/quirks.c | 4 ++++ + 1 file changed, 4 insertions(+) + +--- a/drivers/pci/quirks.c ++++ b/drivers/pci/quirks.c +@@ -3190,7 +3190,11 @@ static void disable_igfx_irq(struct pci_ + + pci_iounmap(dev, regs); + } ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0042, disable_igfx_irq); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0046, disable_igfx_irq); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x004a, disable_igfx_irq); + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0102, disable_igfx_irq); ++DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0106, disable_igfx_irq); + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x010a, disable_igfx_irq); + DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_INTEL, 0x0152, disable_igfx_irq); + diff --git a/queue-4.19/pci-aspm-fix-link_state-teardown-on-device-removal.patch b/queue-4.19/pci-aspm-fix-link_state-teardown-on-device-removal.patch new file mode 100644 index 00000000000..1023a825ebb --- /dev/null +++ b/queue-4.19/pci-aspm-fix-link_state-teardown-on-device-removal.patch @@ -0,0 +1,75 @@ +From aeae4f3e5c38d47bdaef50446dc0ec857307df68 Mon Sep 17 00:00:00 2001 +From: Lukas Wunner +Date: Tue, 4 Sep 2018 12:34:18 -0500 +Subject: PCI/ASPM: Fix link_state teardown on device removal + +From: Lukas Wunner + +commit aeae4f3e5c38d47bdaef50446dc0ec857307df68 upstream. + +Upon removal of the last device on a bus, the link_state of the bridge +leading to that bus is sought to be torn down by having pci_stop_dev() +call pcie_aspm_exit_link_state(). + +When ASPM was originally introduced by commit 7d715a6c1ae5 ("PCI: add +PCI Express ASPM support"), it determined whether the device being +removed is the last one by calling list_empty() on the bridge's +subordinate devices list. 
That didn't work because the device is only +removed from the list slightly later in pci_destroy_dev(). + +Commit 3419c75e15f8 ("PCI: properly clean up ASPM link state on device +remove") attempted to fix it by calling list_is_last(), but that's not +correct either because it checks whether the device is at the *end* of +the list, not whether it's the last one *left* in the list. If the user +removes the device which happens to be at the end of the list via sysfs +but other devices are preceding the device in the list, the link_state +is torn down prematurely. + +The real fix is to move the invocation of pcie_aspm_exit_link_state() to +pci_destroy_dev() and reinstate the call to list_empty(). Remove a +duplicate check for dev->bus->self because pcie_aspm_exit_link_state() +already contains an identical check. + +Fixes: 7d715a6c1ae5 ("PCI: add PCI Express ASPM support") +Signed-off-by: Lukas Wunner +Signed-off-by: Bjorn Helgaas +Cc: Shaohua Li +Cc: stable@vger.kernel.org # v2.6.26 +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/pci/pcie/aspm.c | 2 +- + drivers/pci/remove.c | 4 +--- + 2 files changed, 2 insertions(+), 4 deletions(-) + +--- a/drivers/pci/pcie/aspm.c ++++ b/drivers/pci/pcie/aspm.c +@@ -991,7 +991,7 @@ void pcie_aspm_exit_link_state(struct pc + * All PCIe functions are in one slot, remove one function will remove + * the whole slot, so just wait until we are the last function left. 
+ */ +- if (!list_is_last(&pdev->bus_list, &parent->subordinate->devices)) ++ if (!list_empty(&parent->subordinate->devices)) + goto out; + + link = parent->link_state; +--- a/drivers/pci/remove.c ++++ b/drivers/pci/remove.c +@@ -25,9 +25,6 @@ static void pci_stop_dev(struct pci_dev + + pci_dev_assign_added(dev, false); + } +- +- if (dev->bus->self) +- pcie_aspm_exit_link_state(dev); + } + + static void pci_destroy_dev(struct pci_dev *dev) +@@ -41,6 +38,7 @@ static void pci_destroy_dev(struct pci_d + list_del(&dev->bus_list); + up_write(&pci_bus_sem); + ++ pcie_aspm_exit_link_state(dev); + pci_bridge_d3_update(dev); + pci_free_resources(dev); + put_device(&dev->dev); diff --git a/queue-4.19/printk-fix-panic-caused-by-passing-log_buf_len-to-command-line.patch b/queue-4.19/printk-fix-panic-caused-by-passing-log_buf_len-to-command-line.patch new file mode 100644 index 00000000000..ebf11c70d3f --- /dev/null +++ b/queue-4.19/printk-fix-panic-caused-by-passing-log_buf_len-to-command-line.patch @@ -0,0 +1,65 @@ +From 277fcdb2cfee38ccdbe07e705dbd4896ba0c9930 Mon Sep 17 00:00:00 2001 +From: He Zhe +Date: Sun, 30 Sep 2018 00:45:50 +0800 +Subject: printk: Fix panic caused by passing log_buf_len to command line + +From: He Zhe + +commit 277fcdb2cfee38ccdbe07e705dbd4896ba0c9930 upstream. + +log_buf_len_setup does not check input argument before passing it to +simple_strtoull. The argument would be a NULL pointer if "log_buf_len", +without its value, is set in command line and thus causes the following +panic. + +PANIC: early exception 0xe3 IP 10:ffffffffaaeacd0d error 0 cr2 0x0 +[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.19.0-rc4-yocto-standard+ #1 +[ 0.000000] RIP: 0010:_parse_integer_fixup_radix+0xd/0x70 +... +[ 0.000000] Call Trace: +[ 0.000000] simple_strtoull+0x29/0x70 +[ 0.000000] memparse+0x26/0x90 +[ 0.000000] log_buf_len_setup+0x17/0x22 +[ 0.000000] do_early_param+0x57/0x8e +[ 0.000000] parse_args+0x208/0x320 +[ 0.000000] ? 
rdinit_setup+0x30/0x30 +[ 0.000000] parse_early_options+0x29/0x2d +[ 0.000000] ? rdinit_setup+0x30/0x30 +[ 0.000000] parse_early_param+0x36/0x4d +[ 0.000000] setup_arch+0x336/0x99e +[ 0.000000] start_kernel+0x6f/0x4ee +[ 0.000000] x86_64_start_reservations+0x24/0x26 +[ 0.000000] x86_64_start_kernel+0x6f/0x72 +[ 0.000000] secondary_startup_64+0xa4/0xb0 + +This patch adds a check to prevent the panic. + +Link: http://lkml.kernel.org/r/1538239553-81805-1-git-send-email-zhe.he@windriver.com +Cc: stable@vger.kernel.org +Cc: rostedt@goodmis.org +Cc: linux-kernel@vger.kernel.org +Signed-off-by: He Zhe +Reviewed-by: Sergey Senozhatsky +Signed-off-by: Petr Mladek +Signed-off-by: Greg Kroah-Hartman + +--- + kernel/printk/printk.c | 7 ++++++- + 1 file changed, 6 insertions(+), 1 deletion(-) + +--- a/kernel/printk/printk.c ++++ b/kernel/printk/printk.c +@@ -1048,7 +1048,12 @@ static void __init log_buf_len_update(un + /* save requested log_buf_len since it's too early to process it */ + static int __init log_buf_len_setup(char *str) + { +- unsigned size = memparse(str, &str); ++ unsigned int size; ++ ++ if (!str) ++ return -EINVAL; ++ ++ size = memparse(str, &str); + + log_buf_len_update(size); + diff --git a/queue-4.19/revert-f2fs-fix-to-clear-pg_checked-flag-in-set_page_dirty.patch b/queue-4.19/revert-f2fs-fix-to-clear-pg_checked-flag-in-set_page_dirty.patch new file mode 100644 index 00000000000..9e73738c442 --- /dev/null +++ b/queue-4.19/revert-f2fs-fix-to-clear-pg_checked-flag-in-set_page_dirty.patch @@ -0,0 +1,51 @@ +From 164a63fa6b384e30ceb96ed80bc7dc3379bc0960 Mon Sep 17 00:00:00 2001 +From: Jaegeuk Kim +Date: Tue, 16 Oct 2018 19:30:13 -0700 +Subject: Revert "f2fs: fix to clear PG_checked flag in set_page_dirty()" + +From: Jaegeuk Kim + +commit 164a63fa6b384e30ceb96ed80bc7dc3379bc0960 upstream. + +This reverts commit 66110abc4c931f879d70e83e1281f891699364bf. 
+ +If we clear the cold data flag out of the writeback flow, we can miscount +-1 by end_io, which incurs a deadlock caused by all I/Os being blocked during +heavy GC. + +Balancing F2FS Async: + - IO (CP: 1, Data: -1, Flush: ( 0 0 1), Discard: ( ... + +GC thread: IRQ +- move_data_page() + - set_page_dirty() + - clear_cold_data() + - f2fs_write_end_io() + - type = WB_DATA_TYPE(page); + here, we get wrong type + - dec_page_count(sbi, type); + - f2fs_wait_on_page_writeback() + +Cc: +Reported-and-Tested-by: Park Ju Hyung +Reviewed-by: Chao Yu +Signed-off-by: Jaegeuk Kim +Signed-off-by: Greg Kroah-Hartman + +--- + fs/f2fs/data.c | 4 ---- + 1 file changed, 4 deletions(-) + +--- a/fs/f2fs/data.c ++++ b/fs/f2fs/data.c +@@ -2590,10 +2590,6 @@ static int f2fs_set_data_page_dirty(stru + if (!PageUptodate(page)) + SetPageUptodate(page); + +- /* don't remain PG_checked flag which was set during GC */ +- if (is_cold_data(page)) +- clear_cold_data(page); +- + if (f2fs_is_atomic_file(inode) && !f2fs_is_commit_atomic_write(inode)) { + if (!IS_ATOMIC_WRITTEN_PAGE(page)) { + f2fs_register_inmem_page(inode, page); diff --git a/queue-4.19/scsi-sched-wait-add-wait_event_lock_irq_timeout-for-task_uninterruptible-usage.patch b/queue-4.19/scsi-sched-wait-add-wait_event_lock_irq_timeout-for-task_uninterruptible-usage.patch new file mode 100644 index 00000000000..0b7d978458c --- /dev/null +++ b/queue-4.19/scsi-sched-wait-add-wait_event_lock_irq_timeout-for-task_uninterruptible-usage.patch @@ -0,0 +1,82 @@ +From 25ab0bc334b43bbbe4eabc255006ce42a9424da2 Mon Sep 17 00:00:00 2001 +From: Nicholas Bellinger +Date: Wed, 10 Oct 2018 03:23:09 +0000 +Subject: scsi: sched/wait: Add wait_event_lock_irq_timeout for TASK_UNINTERRUPTIBLE usage + +From: Nicholas Bellinger + +commit 25ab0bc334b43bbbe4eabc255006ce42a9424da2 upstream. 
+ +Short of reverting commit 00d909a10710 ("scsi: target: Make the session +shutdown code also wait for commands that are being aborted") for v4.19, +target-core needs a wait_event_t macro can be executed using +TASK_UNINTERRUPTIBLE to function correctly with existing fabric drivers that +expect to run with signals pending during session shutdown and active se_cmd +I/O quiesce. + +The most notable is iscsi-target/iser-target, while ibmvscsi_tgt invokes +session shutdown logic from userspace via configfs attribute that could also +potentially have signals pending. + +So go ahead and introduce wait_event_lock_irq_timeout() to achieve this, and +update + rename __wait_event_lock_irq_timeout() to make it accept 'state' as a +parameter. + +Fixes: 00d909a10710 ("scsi: target: Make the session shutdown code also wait for commands that are being aborted") +Cc: # v4.19+ +Cc: Bart Van Assche +Cc: Mike Christie +Cc: Hannes Reinecke +Cc: Christoph Hellwig +Cc: Sagi Grimberg +Cc: Bryant G. Ly +Cc: Peter Zijlstra (Intel) +Tested-by: Nicholas Bellinger +Signed-off-by: Nicholas Bellinger +Reviewed-by: Bryant G. Ly +Acked-by: Peter Zijlstra (Intel) +Reviewed-by: Bart Van Assche +Signed-off-by: Martin K. 
Petersen +Signed-off-by: Greg Kroah-Hartman + +--- + include/linux/wait.h | 20 +++++++++++++++----- + 1 file changed, 15 insertions(+), 5 deletions(-) + +--- a/include/linux/wait.h ++++ b/include/linux/wait.h +@@ -1052,10 +1052,9 @@ do { \ + __ret; \ + }) + +-#define __wait_event_interruptible_lock_irq_timeout(wq_head, condition, \ +- lock, timeout) \ ++#define __wait_event_lock_irq_timeout(wq_head, condition, lock, timeout, state) \ + ___wait_event(wq_head, ___wait_cond_timeout(condition), \ +- TASK_INTERRUPTIBLE, 0, timeout, \ ++ state, 0, timeout, \ + spin_unlock_irq(&lock); \ + __ret = schedule_timeout(__ret); \ + spin_lock_irq(&lock)); +@@ -1089,8 +1088,19 @@ do { \ + ({ \ + long __ret = timeout; \ + if (!___wait_cond_timeout(condition)) \ +- __ret = __wait_event_interruptible_lock_irq_timeout( \ +- wq_head, condition, lock, timeout); \ ++ __ret = __wait_event_lock_irq_timeout( \ ++ wq_head, condition, lock, timeout, \ ++ TASK_INTERRUPTIBLE); \ ++ __ret; \ ++}) ++ ++#define wait_event_lock_irq_timeout(wq_head, condition, lock, timeout) \ ++({ \ ++ long __ret = timeout; \ ++ if (!___wait_cond_timeout(condition)) \ ++ __ret = __wait_event_lock_irq_timeout( \ ++ wq_head, condition, lock, timeout, \ ++ TASK_UNINTERRUPTIBLE); \ + __ret; \ + }) + diff --git a/queue-4.19/scsi-target-fix-target_wait_for_sess_cmds-breakage-with-active-signals.patch b/queue-4.19/scsi-target-fix-target_wait_for_sess_cmds-breakage-with-active-signals.patch new file mode 100644 index 00000000000..cfedd0ec889 --- /dev/null +++ b/queue-4.19/scsi-target-fix-target_wait_for_sess_cmds-breakage-with-active-signals.patch @@ -0,0 +1,89 @@ +From 38fe73cc2c96fbc9942b07220f2a4f1bab37392d Mon Sep 17 00:00:00 2001 +From: Nicholas Bellinger +Date: Wed, 10 Oct 2018 03:23:10 +0000 +Subject: scsi: target: Fix target_wait_for_sess_cmds breakage with active signals + +From: Nicholas Bellinger + +commit 38fe73cc2c96fbc9942b07220f2a4f1bab37392d upstream. 
+ +With the addition of commit 00d909a10710 ("scsi: target: Make the session +shutdown code also wait for commands that are being aborted") in v4.19-rc, it +incorrectly assumes no signals will be pending for task_struct executing the +normal session shutdown and I/O quiesce code-path. + +For example, iscsi-target and iser-target issue SIGINT to all kthreads as part +of session shutdown. This has been the behaviour since day one. + +As-is when signals are pending with se_cmds active in se_sess->sess_cmd_list, +wait_event_interruptible_lock_irq_timeout() returns a negative number and +immediately kills the machine because of the do while (ret <= 0) loop that was +added in commit 00d909a107 to spin while backend I/O is taking any amount of +extended time (say 30 seconds) to complete. + +Here's what it looks like in action with debug plus delayed backend I/O +completion: + +[ 4951.909951] se_sess: 000000003e7e08fa before target_wait_for_sess_cmds +[ 4951.914600] target_wait_for_sess_cmds: signal_pending: 1 +[ 4951.918015] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 0 +[ 4951.921639] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 1 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 2 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 3 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 4 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 5 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 6 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 7 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 signal_pending: 1 loop count: 8 +[ 4951.921944] wait_event_interruptible_lock_irq_timeout ret: -512 
signal_pending: 1 loop count: 9 + +... followed by the usual RCU CPU stalls and deadlock. + +There was never a case pre commit 00d909a107 where +wait_for_complete(&se_cmd->cmd_wait_comp) was able to be interrupted, so to +address this for v4.19+ moving forward go ahead and use +wait_event_lock_irq_timeout() instead so new code works with all fabric +drivers. + +Also for commit 00d909a107, fix a minor regression in +target_release_cmd_kref() to only wake_up the new se_sess->cmd_list_wq only +when shutdown has actually been triggered via se_sess->sess_tearing_down. + +Fixes: 00d909a10710 ("scsi: target: Make the session shutdown code also wait for commands that are being aborted") +Cc: # v4.19+ +Cc: Bart Van Assche +Cc: Mike Christie +Cc: Hannes Reinecke +Cc: Christoph Hellwig +Cc: Sagi Grimberg +Cc: Bryant G. Ly +Tested-by: Nicholas Bellinger +Signed-off-by: Nicholas Bellinger +Reviewed-by: Bryant G. Ly +Signed-off-by: Martin K. Petersen +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/target/target_core_transport.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/drivers/target/target_core_transport.c ++++ b/drivers/target/target_core_transport.c +@@ -2754,7 +2754,7 @@ static void target_release_cmd_kref(stru + if (se_sess) { + spin_lock_irqsave(&se_sess->sess_cmd_lock, flags); + list_del_init(&se_cmd->se_cmd_list); +- if (list_empty(&se_sess->sess_cmd_list)) ++ if (se_sess->sess_tearing_down && list_empty(&se_sess->sess_cmd_list)) + wake_up(&se_sess->cmd_list_wq); + spin_unlock_irqrestore(&se_sess->sess_cmd_lock, flags); + } +@@ -2907,7 +2907,7 @@ void target_wait_for_sess_cmds(struct se + + spin_lock_irq(&se_sess->sess_cmd_lock); + do { +- ret = wait_event_interruptible_lock_irq_timeout( ++ ret = wait_event_lock_irq_timeout( + se_sess->cmd_list_wq, + list_empty(&se_sess->sess_cmd_list), + se_sess->sess_cmd_lock, 180 * HZ); diff --git a/queue-4.19/selinux-fix-mounting-of-cgroup2-under-older-policies.patch 
b/queue-4.19/selinux-fix-mounting-of-cgroup2-under-older-policies.patch new file mode 100644 index 00000000000..371f42f977c --- /dev/null +++ b/queue-4.19/selinux-fix-mounting-of-cgroup2-under-older-policies.patch @@ -0,0 +1,50 @@ +From 7bb185edb0306bb90029a5fa6b9cff900ffdbf4b Mon Sep 17 00:00:00 2001 +From: Stephen Smalley +Date: Tue, 4 Sep 2018 16:51:36 -0400 +Subject: selinux: fix mounting of cgroup2 under older policies + +From: Stephen Smalley + +commit 7bb185edb0306bb90029a5fa6b9cff900ffdbf4b upstream. + +commit 901ef845fa2469c ("selinux: allow per-file labeling for cgroupfs") +broke mounting of cgroup2 under older SELinux policies which lacked +a genfscon rule for cgroup2. This prevents mounting of cgroup2 even +when SELinux is permissive. + +Change the handling when there is no genfscon rule in policy to +just mark the inode unlabeled and not return an error to the caller. +This permits mounting and access if allowed by policy, e.g. to +unconfined domains. + +I also considered changing the behavior of security_genfs_sid() to +never return -ENOENT, but the current behavior is relied upon by +other callers to perform caller-specific handling. + +Fixes: 901ef845fa2469c ("selinux: allow per-file labeling for cgroupfs") +CC: +Reported-by: Dmitry Vyukov +Reported-by: Waiman Long +Signed-off-by: Stephen Smalley +Tested-by: Waiman Long +Signed-off-by: Paul Moore +Signed-off-by: Greg Kroah-Hartman + +--- + security/selinux/hooks.c | 5 +++++ + 1 file changed, 5 insertions(+) + +--- a/security/selinux/hooks.c ++++ b/security/selinux/hooks.c +@@ -1508,6 +1508,11 @@ static int selinux_genfs_get_sid(struct + } + rc = security_genfs_sid(&selinux_state, sb->s_type->name, + path, tclass, sid); ++ if (rc == -ENOENT) { ++ /* No match in policy, mark as unlabeled. 
*/ ++ *sid = SECINITSID_UNLABELED; ++ rc = 0; ++ } + } + free_page((unsigned long)buffer); + return rc; diff --git a/queue-4.19/series b/queue-4.19/series index 3c5f1bad99b..11487993b66 100644 --- a/queue-4.19/series +++ b/queue-4.19/series @@ -190,3 +190,81 @@ dmaengine-dma-jz4780-return-error-if-not-probed-from-dt.patch ib-rxe-fix-for-duplicate-request-processing-and-ack-psns.patch alsa-hda-check-the-non-cached-stream-buffers-more-explicitly.patch cpupower-fix-amd-family-0x17-msr_pstate-size.patch +revert-f2fs-fix-to-clear-pg_checked-flag-in-set_page_dirty.patch +f2fs-fix-missing-up_read.patch +f2fs-fix-to-recover-cold-bit-of-inode-block-during-por.patch +f2fs-fix-to-account-io-correctly.patch +opp-free-opp-table-properly-on-performance-state-irregularities.patch +arm-dts-exynos-convert-exynos5250.dtsi-to-opp-v2-bindings.patch +arm-dts-exynos-mark-1-ghz-cpu-opp-as-suspend-opp-on-exynos5250.patch +xen-swiotlb-use-actually-allocated-size-on-check-physical-continuous.patch +tpm-restore-functionality-to-xen-vtpm-driver.patch +xen-blkfront-avoid-null-blkfront_info-dereference-on-device-removal.patch +xen-balloon-support-xend-based-toolstack.patch +xen-fix-race-in-xen_qlock_wait.patch +xen-make-xen_qlock_wait-nestable.patch +xen-pvh-increase-early-stack-size.patch +xen-pvh-don-t-try-to-unplug-emulated-devices.patch +libertas-don-t-set-urb_zero_packet-on-in-usb-transfer.patch +usbip-vudc-bug-kmalloc-2048-not-tainted-poison-overwritten.patch +usb-typec-tcpm-fix-apdo-pps-order-checking-to-be-based-on-voltage.patch +usb-gadget-udc-renesas_usb3-fix-b-device-mode-for-workaround.patch +mt76-mt76x2-fix-multi-interface-beacon-configuration.patch +iwlwifi-mvm-check-return-value-of-rs_rate_from_ucode_rate.patch +net-ipv4-defensive-cipso-option-parsing.patch +dmaengine-ppc4xx-fix-off-by-one-build-failure.patch +scsi-sched-wait-add-wait_event_lock_irq_timeout-for-task_uninterruptible-usage.patch +scsi-target-fix-target_wait_for_sess_cmds-breakage-with-active-signals.patch 
+libnvdimm-hold-reference-on-parent-while-scheduling-async-init.patch +libnvdimm-region-fail-badblocks-listing-for-inactive-regions.patch +libnvdimm-pmem-fix-badblocks-population-for-raw-namespaces.patch +asoc-intel-skylake-add-missing-break-in-skl_tplg_get_token.patch +asoc-sta32x-set-component-pointer-in-private-struct.patch +ib-mlx5-fix-mr-cache-initialization.patch +ib-rxe-revise-the-ib_wr_opcode-enum.patch +jbd2-fix-use-after-free-in-jbd2_log_do_checkpoint.patch +gfs2_meta-mount-can-get-null-dev_name.patch +ext4-fix-ext4_ioc_swap_boot.patch +ext4-initialize-retries-variable-in-ext4_da_write_inline_data_begin.patch +ext4-fix-setattr-project-check-in-fssetxattr-ioctl.patch +ext4-propagate-error-from-dquot_initialize-in-ext4_ioc_fssetxattr.patch +ext4-fix-use-after-free-race-in-ext4_remount-s-error-path.patch +selinux-fix-mounting-of-cgroup2-under-older-policies.patch +hid-wacom-work-around-hid-descriptor-bug-in-dtk-2451-and-dth-2452.patch +hid-hiddev-fix-potential-spectre-v1.patch +edac-amd64-add-family-17h-models-10h-2fh-support.patch +edac-i7core-sb-skx-_edac-fix-uncorrected-error-counting.patch +edac-skx_edac-fix-logical-channel-intermediate-decoding.patch +arm-dts-dra7-fix-up-unaligned-access-setting-for-pcie-ep.patch +pci-aspm-fix-link_state-teardown-on-device-removal.patch +pci-add-device-ids-for-intel-gpu-spurious-interrupt-quirk.patch +signal-genwqe-fix-sending-of-sigkill.patch +signal-guard-against-negative-signal-numbers-in-copy_siginfo_from_user32.patch +crypto-lrw-fix-out-of-bounds-access-on-counter-overflow.patch +crypto-tcrypt-fix-ghash-generic-speed-test.patch +crypto-aesni-don-t-use-gfp_atomic-allocation-if-the-request-doesn-t-cross-a-page-in-gcm.patch +crypto-morus-generic-fix-for-big-endian-systems.patch +crypto-aegis-generic-fix-for-big-endian-systems.patch +crypto-speck-remove-speck.patch +mm-proc-pid-smaps_rollup-fix-null-pointer-deref-in-smaps_pte_range.patch +userfaultfd-disable-irqs-when-taking-the-waitqueue-lock.patch 
+ima-fix-showing-large-violations-or-runtime_measurements_count.patch +ima-open-a-new-file-instance-if-no-read-permissions.patch +hugetlbfs-dirty-pages-as-they-are-added-to-pagecache.patch +mm-rmap-map_pte-was-not-handling-private-zone_device-page-properly.patch +mm-hmm-fix-race-between-hmm_mirror_unregister-and-mmu_notifier-callback.patch +kvm-arm-arm64-ensure-only-thp-is-candidate-for-adjustment.patch +kvm-arm64-fix-caching-of-host-mdcr_el2-value.patch +kbuild-fix-kernel-bounds.c-w-1-warning.patch +iio-ad5064-fix-regulator-handling.patch +iio-adc-imx25-gcq-fix-leak-of-device_node-in-mx25_gcq_setup_cfgs.patch +iio-adc-at91-fix-acking-drdy-irq-on-simple-conversions.patch +iio-adc-at91-fix-wrong-channel-number-in-triggered-buffer-mode.patch +drivers-hv-kvp-fix-two-this-statement-may-fall-through-warnings.patch +w1-omap-hdq-fix-missing-bus-unregister-at-removal.patch +smb3-allow-stats-which-track-session-and-share-reconnects-to-be-reset.patch +smb3-do-not-attempt-cifs-operation-in-smb3-query-info-error-path.patch +smb3-on-kerberos-mount-if-server-doesn-t-specify-auth-type-use-krb5.patch +printk-fix-panic-caused-by-passing-log_buf_len-to-command-line.patch +genirq-fix-race-on-spurious-interrupt-detection.patch +tpm-fix-response-size-validation-in-tpm_get_random.patch diff --git a/queue-4.19/signal-genwqe-fix-sending-of-sigkill.patch b/queue-4.19/signal-genwqe-fix-sending-of-sigkill.patch new file mode 100644 index 00000000000..e81ab13ccb7 --- /dev/null +++ b/queue-4.19/signal-genwqe-fix-sending-of-sigkill.patch @@ -0,0 +1,112 @@ +From 0ab93e9c99f8208c0a1a7b7170c827936268c996 Mon Sep 17 00:00:00 2001 +From: "Eric W. Biederman" +Date: Thu, 13 Sep 2018 11:28:01 +0200 +Subject: signal/GenWQE: Fix sending of SIGKILL + +From: Eric W. Biederman + +commit 0ab93e9c99f8208c0a1a7b7170c827936268c996 upstream. 
+ +The genweq_add_file and genwqe_del_file by caching current without +using reference counting embed the assumption that a file descriptor +will never be passed from one process to another. It even embeds the +assumption that the the thread that opened the file will be in +existence when the process terminates. Neither of which are +guaranteed to be true. + +Therefore replace caching the task_struct of the opener with +pid of the openers thread group id. All the knowledge of the +opener is used for is as the target of SIGKILL and a SIGKILL +will kill the entire process group. + +Rename genwqe_force_sig to genwqe_terminate, remove it's unncessary +signal argument, update it's ownly caller, and use kill_pid +instead of force_sig. + +The work force_sig does in changing signal handling state is not +relevant to SIGKILL sent as SEND_SIG_PRIV. The exact same processess +will be killed just with less work, and less confusion. The work done +by force_sig is really only needed for handling syncrhonous +exceptions. + +It will still be possible to cause genwqe_device_remove to wait +8 seconds by passing a file descriptor to another process but +the possible user after free is fixed. + +Fixes: eaf4722d4645 ("GenWQE Character device and DDCB queue") +Cc: stable@vger.kernel.org +Cc: Greg Kroah-Hartman +Cc: Frank Haverkamp +Cc: Joerg-Stephan Vogt +Cc: Michael Jung +Cc: Michael Ruettger +Cc: Kleber Sacilotto de Souza +Cc: Sebastian Ott +Cc: Eberhard S. Amann +Cc: Gabriel Krisman Bertazi +Cc: Guilherme G. Piccoli +Signed-off-by: "Eric W. 
Biederman" +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/misc/genwqe/card_base.h | 2 +- + drivers/misc/genwqe/card_dev.c | 9 +++++---- + 2 files changed, 6 insertions(+), 5 deletions(-) + +--- a/drivers/misc/genwqe/card_base.h ++++ b/drivers/misc/genwqe/card_base.h +@@ -408,7 +408,7 @@ struct genwqe_file { + struct file *filp; + + struct fasync_struct *async_queue; +- struct task_struct *owner; ++ struct pid *opener; + struct list_head list; /* entry in list of open files */ + + spinlock_t map_lock; /* lock for dma_mappings */ +--- a/drivers/misc/genwqe/card_dev.c ++++ b/drivers/misc/genwqe/card_dev.c +@@ -52,7 +52,7 @@ static void genwqe_add_file(struct genwq + { + unsigned long flags; + +- cfile->owner = current; ++ cfile->opener = get_pid(task_tgid(current)); + spin_lock_irqsave(&cd->file_lock, flags); + list_add(&cfile->list, &cd->file_list); + spin_unlock_irqrestore(&cd->file_lock, flags); +@@ -65,6 +65,7 @@ static int genwqe_del_file(struct genwqe + spin_lock_irqsave(&cd->file_lock, flags); + list_del(&cfile->list); + spin_unlock_irqrestore(&cd->file_lock, flags); ++ put_pid(cfile->opener); + + return 0; + } +@@ -275,7 +276,7 @@ static int genwqe_kill_fasync(struct gen + return files; + } + +-static int genwqe_force_sig(struct genwqe_dev *cd, int sig) ++static int genwqe_terminate(struct genwqe_dev *cd) + { + unsigned int files = 0; + unsigned long flags; +@@ -283,7 +284,7 @@ static int genwqe_force_sig(struct genwq + + spin_lock_irqsave(&cd->file_lock, flags); + list_for_each_entry(cfile, &cd->file_list, list) { +- force_sig(sig, cfile->owner); ++ kill_pid(cfile->opener, SIGKILL, 1); + files++; + } + spin_unlock_irqrestore(&cd->file_lock, flags); +@@ -1352,7 +1353,7 @@ static int genwqe_inform_and_stop_proces + dev_warn(&pci_dev->dev, + "[%s] send SIGKILL and wait ...\n", __func__); + +- rc = genwqe_force_sig(cd, SIGKILL); /* force terminate */ ++ rc = genwqe_terminate(cd); + if (rc) { + /* Give kill_timout more seconds to end processes */ + for (i = 
0; (i < GENWQE_KILL_TIMEOUT) && diff --git a/queue-4.19/signal-guard-against-negative-signal-numbers-in-copy_siginfo_from_user32.patch b/queue-4.19/signal-guard-against-negative-signal-numbers-in-copy_siginfo_from_user32.patch new file mode 100644 index 00000000000..22af56affe8 --- /dev/null +++ b/queue-4.19/signal-guard-against-negative-signal-numbers-in-copy_siginfo_from_user32.patch @@ -0,0 +1,50 @@ +From a36700589b85443e28170be59fa11c8a104130a5 Mon Sep 17 00:00:00 2001 +From: "Eric W. Biederman" +Date: Wed, 10 Oct 2018 20:29:44 -0500 +Subject: signal: Guard against negative signal numbers in copy_siginfo_from_user32 + +From: Eric W. Biederman + +commit a36700589b85443e28170be59fa11c8a104130a5 upstream. + +While fixing an out of bounds array access in known_siginfo_layout +reported by the kernel test robot it became apparent that the same bug +exists in siginfo_layout and affects copy_siginfo_from_user32. + +The straight forward fix that makes guards against making this mistake +in the future and should keep the code size small is to just take an +unsigned signal number instead of a signed signal number, as I did to +fix known_siginfo_layout. + +Cc: stable@vger.kernel.org +Fixes: cc731525f26a ("signal: Remove kernel interal si_code magic") +Signed-off-by: "Eric W. Biederman" +Signed-off-by: Greg Kroah-Hartman + +--- + include/linux/signal.h | 2 +- + kernel/signal.c | 2 +- + 2 files changed, 2 insertions(+), 2 deletions(-) + +--- a/include/linux/signal.h ++++ b/include/linux/signal.h +@@ -36,7 +36,7 @@ enum siginfo_layout { + SIL_SYS, + }; + +-enum siginfo_layout siginfo_layout(int sig, int si_code); ++enum siginfo_layout siginfo_layout(unsigned sig, int si_code); + + /* + * Define some primitives to manipulate sigset_t. 
+--- a/kernel/signal.c ++++ b/kernel/signal.c +@@ -2847,7 +2847,7 @@ COMPAT_SYSCALL_DEFINE2(rt_sigpending, co + } + #endif + +-enum siginfo_layout siginfo_layout(int sig, int si_code) ++enum siginfo_layout siginfo_layout(unsigned sig, int si_code) + { + enum siginfo_layout layout = SIL_KILL; + if ((si_code > SI_USER) && (si_code < SI_KERNEL)) { diff --git a/queue-4.19/smb3-allow-stats-which-track-session-and-share-reconnects-to-be-reset.patch b/queue-4.19/smb3-allow-stats-which-track-session-and-share-reconnects-to-be-reset.patch new file mode 100644 index 00000000000..c74dc5639f0 --- /dev/null +++ b/queue-4.19/smb3-allow-stats-which-track-session-and-share-reconnects-to-be-reset.patch @@ -0,0 +1,34 @@ +From 2c887635cd6ab3af619dc2be94e5bf8f2e172b78 Mon Sep 17 00:00:00 2001 +From: Steve French +Date: Sat, 15 Sep 2018 23:04:41 -0500 +Subject: smb3: allow stats which track session and share reconnects to be reset + +From: Steve French + +commit 2c887635cd6ab3af619dc2be94e5bf8f2e172b78 upstream. + +Currently, "echo 0 > /proc/fs/cifs/Stats" resets all of the stats +except the session and share reconnect counts. Fix it to +reset those as well. 
+ +CC: Stable +Signed-off-by: Steve French +Reviewed-by: Aurelien Aptel +Signed-off-by: Greg Kroah-Hartman + +--- + fs/cifs/cifs_debug.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/fs/cifs/cifs_debug.c ++++ b/fs/cifs/cifs_debug.c +@@ -383,6 +383,9 @@ static ssize_t cifs_stats_proc_write(str + atomic_set(&totBufAllocCount, 0); + atomic_set(&totSmBufAllocCount, 0); + #endif /* CONFIG_CIFS_STATS2 */ ++ atomic_set(&tcpSesReconnectCount, 0); ++ atomic_set(&tconInfoReconnectCount, 0); ++ + spin_lock(&GlobalMid_Lock); + GlobalMaxActiveXid = 0; + GlobalCurrentXid = 0; diff --git a/queue-4.19/smb3-do-not-attempt-cifs-operation-in-smb3-query-info-error-path.patch b/queue-4.19/smb3-do-not-attempt-cifs-operation-in-smb3-query-info-error-path.patch new file mode 100644 index 00000000000..32982a173fa --- /dev/null +++ b/queue-4.19/smb3-do-not-attempt-cifs-operation-in-smb3-query-info-error-path.patch @@ -0,0 +1,45 @@ +From 1e77a8c204c9d1b655c61751b8ad0fde22421dbb Mon Sep 17 00:00:00 2001 +From: Steve French +Date: Fri, 19 Oct 2018 00:45:21 -0500 +Subject: smb3: do not attempt cifs operation in smb3 query info error path + +From: Steve French + +commit 1e77a8c204c9d1b655c61751b8ad0fde22421dbb upstream. + +If backupuid mount option is sent, we can incorrectly retry +(on access denied on query info) with a cifs (FindFirst) operation +on an smb3 mount which causes the server to force the session close. + +We set backup intent on open so no need for this fallback. 
+ +See kernel bugzilla 201435 + +Signed-off-by: Steve French +CC: Stable +Reviewed-by: Ronnie Sahlberg +Signed-off-by: Greg Kroah-Hartman + +--- + fs/cifs/inode.c | 10 +++++++++- + 1 file changed, 9 insertions(+), 1 deletion(-) + +--- a/fs/cifs/inode.c ++++ b/fs/cifs/inode.c +@@ -777,7 +777,15 @@ cifs_get_inode_info(struct inode **inode + } else if (rc == -EREMOTE) { + cifs_create_dfs_fattr(&fattr, sb); + rc = 0; +- } else if (rc == -EACCES && backup_cred(cifs_sb)) { ++ } else if ((rc == -EACCES) && backup_cred(cifs_sb) && ++ (strcmp(server->vals->version_string, SMB1_VERSION_STRING) ++ == 0)) { ++ /* ++ * For SMB2 and later the backup intent flag is already ++ * sent if needed on open and there is no path based ++ * FindFirst operation to use to retry with ++ */ ++ + srchinf = kzalloc(sizeof(struct cifs_search_info), + GFP_KERNEL); + if (srchinf == NULL) { diff --git a/queue-4.19/smb3-on-kerberos-mount-if-server-doesn-t-specify-auth-type-use-krb5.patch b/queue-4.19/smb3-on-kerberos-mount-if-server-doesn-t-specify-auth-type-use-krb5.patch new file mode 100644 index 00000000000..d90a1430f2d --- /dev/null +++ b/queue-4.19/smb3-on-kerberos-mount-if-server-doesn-t-specify-auth-type-use-krb5.patch @@ -0,0 +1,40 @@ +From 926674de6705f0f1dbf29a62fd758d0977f535d6 Mon Sep 17 00:00:00 2001 +From: Steve French +Date: Sun, 28 Oct 2018 13:13:23 -0500 +Subject: smb3: on kerberos mount if server doesn't specify auth type use krb5 + +From: Steve French + +commit 926674de6705f0f1dbf29a62fd758d0977f535d6 upstream. + +Some servers (e.g. Azure) do not include a spnego blob in the SMB3 +negotiate protocol response, so on kerberos mounts ("sec=krb5") +we can fail, as we expected the server to list its supported +auth types (OIDs in the spnego blob in the negprot response). +Change this so that on krb5 mounts we default to trying krb5 if the +server doesn't list its supported protocol mechanisms. 
+ +Signed-off-by: Steve French +Reviewed-by: Ronnie Sahlberg +CC: Stable +Signed-off-by: Greg Kroah-Hartman + +--- + fs/cifs/cifs_spnego.c | 6 ++++-- + 1 file changed, 4 insertions(+), 2 deletions(-) + +--- a/fs/cifs/cifs_spnego.c ++++ b/fs/cifs/cifs_spnego.c +@@ -147,8 +147,10 @@ cifs_get_spnego_key(struct cifs_ses *ses + sprintf(dp, ";sec=krb5"); + else if (server->sec_mskerberos) + sprintf(dp, ";sec=mskrb5"); +- else +- goto out; ++ else { ++ cifs_dbg(VFS, "unknown or missing server auth type, use krb5\n"); ++ sprintf(dp, ";sec=krb5"); ++ } + + dp = description + strlen(description); + sprintf(dp, ";uid=0x%x", diff --git a/queue-4.19/tpm-fix-response-size-validation-in-tpm_get_random.patch b/queue-4.19/tpm-fix-response-size-validation-in-tpm_get_random.patch new file mode 100644 index 00000000000..ec3264242f9 --- /dev/null +++ b/queue-4.19/tpm-fix-response-size-validation-in-tpm_get_random.patch @@ -0,0 +1,49 @@ +From 84b59f6487d82d3ab4247a099aba66d4d17e8b08 Mon Sep 17 00:00:00 2001 +From: Jarkko Sakkinen +Date: Mon, 3 Sep 2018 04:01:26 +0300 +Subject: tpm: fix response size validation in tpm_get_random() + +From: Jarkko Sakkinen + +commit 84b59f6487d82d3ab4247a099aba66d4d17e8b08 upstream. + +When checking whether the response is large enough to be able to contain +the received random bytes in tpm_get_random() and tpm2_get_random(), +they fail to take account the header size, which should be added to the +minimum size. This commit fixes this issue. 
+ +Cc: stable@vger.kernel.org +Fixes: c659af78eb7b ("tpm: Check size of response before accessing data") +Signed-off-by: Jarkko Sakkinen +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/char/tpm/tpm-interface.c | 3 ++- + drivers/char/tpm/tpm2-cmd.c | 4 +++- + 2 files changed, 5 insertions(+), 2 deletions(-) + +--- a/drivers/char/tpm/tpm-interface.c ++++ b/drivers/char/tpm/tpm-interface.c +@@ -1322,7 +1322,8 @@ int tpm_get_random(struct tpm_chip *chip + } + + rlength = be32_to_cpu(tpm_cmd.header.out.length); +- if (rlength < offsetof(struct tpm_getrandom_out, rng_data) + ++ if (rlength < TPM_HEADER_SIZE + ++ offsetof(struct tpm_getrandom_out, rng_data) + + recd) { + total = -EFAULT; + break; +--- a/drivers/char/tpm/tpm2-cmd.c ++++ b/drivers/char/tpm/tpm2-cmd.c +@@ -329,7 +329,9 @@ int tpm2_get_random(struct tpm_chip *chi + &buf.data[TPM_HEADER_SIZE]; + recd = min_t(u32, be16_to_cpu(out->size), num_bytes); + if (tpm_buf_length(&buf) < +- offsetof(struct tpm2_get_random_out, buffer) + recd) { ++ TPM_HEADER_SIZE + ++ offsetof(struct tpm2_get_random_out, buffer) + ++ recd) { + err = -EFAULT; + goto out; + } diff --git a/queue-4.19/tpm-restore-functionality-to-xen-vtpm-driver.patch b/queue-4.19/tpm-restore-functionality-to-xen-vtpm-driver.patch new file mode 100644 index 00000000000..ed6950c9658 --- /dev/null +++ b/queue-4.19/tpm-restore-functionality-to-xen-vtpm-driver.patch @@ -0,0 +1,59 @@ +From e487a0f52301293152a6f8c4e217f2a11dd808e3 Mon Sep 17 00:00:00 2001 +From: "Dr. Greg Wettstein" +Date: Mon, 17 Sep 2018 18:53:33 -0400 +Subject: tpm: Restore functionality to xen vtpm driver. + +From: Dr. Greg Wettstein + +commit e487a0f52301293152a6f8c4e217f2a11dd808e3 upstream. + +Functionality of the xen-tpmfront driver was lost secondary to +the introduction of xenbus multi-page support in commit ccc9d90a9a8b +("xenbus_client: Extend interface to support multi-page ring"). 
+ +In this commit pointer to location of where the shared page address +is stored was being passed to the xenbus_grant_ring() function rather +then the address of the shared page itself. This resulted in a situation +where the driver would attach to the vtpm-stubdom but any attempt +to send a command to the stub domain would timeout. + +A diagnostic finding for this regression is the following error +message being generated when the xen-tpmfront driver probes for a +device: + +<3>vtpm vtpm-0: tpm_transmit: tpm_send: error -62 + +<3>vtpm vtpm-0: A TPM error (-62) occurred attempting to determine +the timeouts + +This fix is relevant to all kernels from 4.1 forward which is the +release in which multi-page xenbus support was introduced. + +Daniel De Graaf formulated the fix by code inspection after the +regression point was located. + +Fixes: ccc9d90a9a8b ("xenbus_client: Extend interface to support multi-page ring") +Signed-off-by: Dr. Greg Wettstein +Signed-off-by: Greg Kroah-Hartman + +[boris: Updated commit message, added Fixes tag] +Signed-off-by: Boris Ostrovsky +Cc: stable@vger.kernel.org # v4.1+ +Reviewed-by: Jarkko Sakkinen +Signed-off-by: Jarkko Sakkinen + +--- + drivers/char/tpm/xen-tpmfront.c | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/drivers/char/tpm/xen-tpmfront.c ++++ b/drivers/char/tpm/xen-tpmfront.c +@@ -264,7 +264,7 @@ static int setup_ring(struct xenbus_devi + return -ENOMEM; + } + +- rv = xenbus_grant_ring(dev, &priv->shr, 1, &gref); ++ rv = xenbus_grant_ring(dev, priv->shr, 1, &gref); + if (rv < 0) + return rv; + diff --git a/queue-4.19/usb-gadget-udc-renesas_usb3-fix-b-device-mode-for-workaround.patch b/queue-4.19/usb-gadget-udc-renesas_usb3-fix-b-device-mode-for-workaround.patch new file mode 100644 index 00000000000..7eb6dccaf91 --- /dev/null +++ b/queue-4.19/usb-gadget-udc-renesas_usb3-fix-b-device-mode-for-workaround.patch @@ -0,0 +1,38 @@ +From afc92514a34c7414b28047b1205a6b709103c699 Mon Sep 17 00:00:00 2001 +From: 
Yoshihiro Shimoda +Date: Tue, 2 Oct 2018 20:57:44 +0900 +Subject: usb: gadget: udc: renesas_usb3: Fix b-device mode for "workaround" + +From: Yoshihiro Shimoda + +commit afc92514a34c7414b28047b1205a6b709103c699 upstream. + +If the "workaround_for_vbus" is true, the driver will not call +usb_disconnect(). So, since the controller keeps some registers' +value, the driver doesn't re-enumarate suitable speed after +the b-device mode is disabled. To fix the issue, this patch +adds usb_disconnect() calling in renesas_usb3_b_device_write() +if workaround_for_vbus is true. + +Fixes: 43ba968b00ea ("usb: gadget: udc: renesas_usb3: add debugfs to set the b-device mode") +Cc: # v4.14+ +Signed-off-by: Yoshihiro Shimoda +Signed-off-by: Felipe Balbi +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/usb/gadget/udc/renesas_usb3.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/drivers/usb/gadget/udc/renesas_usb3.c ++++ b/drivers/usb/gadget/udc/renesas_usb3.c +@@ -2437,6 +2437,9 @@ static ssize_t renesas_usb3_b_device_wri + else + usb3->forced_b_device = false; + ++ if (usb3->workaround_for_vbus) ++ usb3_disconnect(usb3); ++ + /* Let this driver call usb3_connect() anyway */ + usb3_check_id(usb3); + diff --git a/queue-4.19/usb-typec-tcpm-fix-apdo-pps-order-checking-to-be-based-on-voltage.patch b/queue-4.19/usb-typec-tcpm-fix-apdo-pps-order-checking-to-be-based-on-voltage.patch new file mode 100644 index 00000000000..693436a5d23 --- /dev/null +++ b/queue-4.19/usb-typec-tcpm-fix-apdo-pps-order-checking-to-be-based-on-voltage.patch @@ -0,0 +1,39 @@ +From 1b6af2f58c2b1522e0804b150ca95e50a9e80ea7 Mon Sep 17 00:00:00 2001 +From: Adam Thomson +Date: Fri, 21 Sep 2018 16:04:11 +0100 +Subject: usb: typec: tcpm: Fix APDO PPS order checking to be based on voltage + +From: Adam Thomson + +commit 1b6af2f58c2b1522e0804b150ca95e50a9e80ea7 upstream. + +Current code mistakenly checks against max current to determine +order but this should be max voltage. 
This commit fixes the issue +so order is correctly determined, thus avoiding failure based on +a higher voltage PPS APDO having a lower maximum current output, +which is actually valid. + +Fixes: 2eadc33f40d4 ("typec: tcpm: Add core support for sink side PPS") +Cc: +Signed-off-by: Adam Thomson +Reviewed-by: Heikki Krogerus +Reviewed-by: Guenter Roeck +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/usb/typec/tcpm.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +--- a/drivers/usb/typec/tcpm.c ++++ b/drivers/usb/typec/tcpm.c +@@ -1430,8 +1430,8 @@ static enum pdo_err tcpm_caps_err(struct + if (pdo_apdo_type(pdo[i]) != APDO_TYPE_PPS) + break; + +- if (pdo_pps_apdo_max_current(pdo[i]) < +- pdo_pps_apdo_max_current(pdo[i - 1])) ++ if (pdo_pps_apdo_max_voltage(pdo[i]) < ++ pdo_pps_apdo_max_voltage(pdo[i - 1])) + return PDO_ERR_PPS_APDO_NOT_SORTED; + else if (pdo_pps_apdo_min_voltage(pdo[i]) == + pdo_pps_apdo_min_voltage(pdo[i - 1]) && diff --git a/queue-4.19/usbip-vudc-bug-kmalloc-2048-not-tainted-poison-overwritten.patch b/queue-4.19/usbip-vudc-bug-kmalloc-2048-not-tainted-poison-overwritten.patch new file mode 100644 index 00000000000..5faf8748409 --- /dev/null +++ b/queue-4.19/usbip-vudc-bug-kmalloc-2048-not-tainted-poison-overwritten.patch @@ -0,0 +1,67 @@ +From e28fd56ad5273be67d0fae5bedc7e1680e729952 Mon Sep 17 00:00:00 2001 +From: "Shuah Khan (Samsung OSG)" +Date: Thu, 18 Oct 2018 10:19:29 -0600 +Subject: usbip:vudc: BUG kmalloc-2048 (Not tainted): Poison overwritten + +From: Shuah Khan (Samsung OSG) + +commit e28fd56ad5273be67d0fae5bedc7e1680e729952 upstream. + +In rmmod path, usbip_vudc does platform_device_put() twice once from +platform_device_unregister() and then from put_vudc_device(). + +The second put results in: + +BUG kmalloc-2048 (Not tainted): Poison overwritten error or +BUG: KASAN: use-after-free in kobject_put+0x1e/0x230 if KASAN is +enabled. 
+ +[ 169.042156] calling init+0x0/0x1000 [usbip_vudc] @ 1697 +[ 169.042396] ============================================================================= +[ 169.043678] probe of usbip-vudc.0 returned 1 after 350 usecs +[ 169.044508] BUG kmalloc-2048 (Not tainted): Poison overwritten +[ 169.044509] ----------------------------------------------------------------------------- +... +[ 169.057849] INFO: Freed in device_release+0x2b/0x80 age=4223 cpu=3 pid=1693 +[ 169.057852] kobject_put+0x86/0x1b0 +[ 169.057853] 0xffffffffc0c30a96 +[ 169.057855] __x64_sys_delete_module+0x157/0x240 + +Fix it to call platform_device_del() instead and let put_vudc_device() do +the platform_device_put(). + +Reported-by: Randy Dunlap +Signed-off-by: Shuah Khan (Samsung OSG) +Cc: +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/usb/usbip/vudc_main.c | 10 +++++++++- + 1 file changed, 9 insertions(+), 1 deletion(-) + +--- a/drivers/usb/usbip/vudc_main.c ++++ b/drivers/usb/usbip/vudc_main.c +@@ -73,6 +73,10 @@ static int __init init(void) + cleanup: + list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) { + list_del(&udc_dev->dev_entry); ++ /* ++ * Just do platform_device_del() here, put_vudc_device() ++ * calls the platform_device_put() ++ */ + platform_device_del(udc_dev->pdev); + put_vudc_device(udc_dev); + } +@@ -89,7 +93,11 @@ static void __exit cleanup(void) + + list_for_each_entry_safe(udc_dev, udc_dev2, &vudc_devices, dev_entry) { + list_del(&udc_dev->dev_entry); +- platform_device_unregister(udc_dev->pdev); ++ /* ++ * Just do platform_device_del() here, put_vudc_device() ++ * calls the platform_device_put() ++ */ ++ platform_device_del(udc_dev->pdev); + put_vudc_device(udc_dev); + } + platform_driver_unregister(&vudc_driver); diff --git a/queue-4.19/userfaultfd-disable-irqs-when-taking-the-waitqueue-lock.patch b/queue-4.19/userfaultfd-disable-irqs-when-taking-the-waitqueue-lock.patch new file mode 100644 index 00000000000..869652c7273 --- /dev/null +++ 
b/queue-4.19/userfaultfd-disable-irqs-when-taking-the-waitqueue-lock.patch @@ -0,0 +1,56 @@ +From ae62c16e105a869524afcf8a07ee85c5ae5d0479 Mon Sep 17 00:00:00 2001 +From: Christoph Hellwig +Date: Fri, 26 Oct 2018 15:02:19 -0700 +Subject: userfaultfd: disable irqs when taking the waitqueue lock + +From: Christoph Hellwig + +commit ae62c16e105a869524afcf8a07ee85c5ae5d0479 upstream. + +userfaultfd contains home-grown locking of the waitqueue lock, and does +not disable interrupts. This relies on the fact that no one else takes it +from interrupt context and violates an invariant of the normal waitqueue +locking scheme. With aio poll it is easy to trigger other locks that +disable interrupts (or are called from interrupt context). + +Link: http://lkml.kernel.org/r/20181018154101.18750-1-hch@lst.de +Signed-off-by: Christoph Hellwig +Reviewed-by: Andrea Arcangeli +Reviewed-by: Andrew Morton +Cc: [4.19.x] +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Greg Kroah-Hartman + +--- + fs/userfaultfd.c | 8 ++++---- + 1 file changed, 4 insertions(+), 4 deletions(-) + +--- a/fs/userfaultfd.c ++++ b/fs/userfaultfd.c +@@ -1026,7 +1026,7 @@ static ssize_t userfaultfd_ctx_read(stru + struct userfaultfd_ctx *fork_nctx = NULL; + + /* always take the fd_wqh lock before the fault_pending_wqh lock */ +- spin_lock(&ctx->fd_wqh.lock); ++ spin_lock_irq(&ctx->fd_wqh.lock); + __add_wait_queue(&ctx->fd_wqh, &wait); + for (;;) { + set_current_state(TASK_INTERRUPTIBLE); +@@ -1112,13 +1112,13 @@ static ssize_t userfaultfd_ctx_read(stru + ret = -EAGAIN; + break; + } +- spin_unlock(&ctx->fd_wqh.lock); ++ spin_unlock_irq(&ctx->fd_wqh.lock); + schedule(); +- spin_lock(&ctx->fd_wqh.lock); ++ spin_lock_irq(&ctx->fd_wqh.lock); + } + __remove_wait_queue(&ctx->fd_wqh, &wait); + __set_current_state(TASK_RUNNING); +- spin_unlock(&ctx->fd_wqh.lock); ++ spin_unlock_irq(&ctx->fd_wqh.lock); + + if (!ret && msg->event == UFFD_EVENT_FORK) { + ret = resolve_userfault_fork(ctx, fork_nctx,
msg); diff --git a/queue-4.19/w1-omap-hdq-fix-missing-bus-unregister-at-removal.patch b/queue-4.19/w1-omap-hdq-fix-missing-bus-unregister-at-removal.patch new file mode 100644 index 00000000000..71bc5dbac1a --- /dev/null +++ b/queue-4.19/w1-omap-hdq-fix-missing-bus-unregister-at-removal.patch @@ -0,0 +1,65 @@ +From a007734618fee1bf35556c04fa498d41d42c7301 Mon Sep 17 00:00:00 2001 +From: Andreas Kemnade +Date: Sat, 22 Sep 2018 21:20:54 +0200 +Subject: w1: omap-hdq: fix missing bus unregister at removal + +From: Andreas Kemnade + +commit a007734618fee1bf35556c04fa498d41d42c7301 upstream. + +The bus master was not removed after unloading the module +or unbinding the driver. That lead to oopses like this + +[ 127.842987] Unable to handle kernel paging request at virtual address bf01d04c +[ 127.850646] pgd = 70e3cd9a +[ 127.853698] [bf01d04c] *pgd=8f908811, *pte=00000000, *ppte=00000000 +[ 127.860412] Internal error: Oops: 80000007 [#1] PREEMPT SMP ARM +[ 127.866668] Modules linked in: bq27xxx_battery overlay [last unloaded: omap_hdq] +[ 127.874542] CPU: 0 PID: 1022 Comm: w1_bus_master1 Not tainted 4.19.0-rc4-00001-g2d51da718324 #12 +[ 127.883819] Hardware name: Generic OMAP36xx (Flattened Device Tree) +[ 127.890441] PC is at 0xbf01d04c +[ 127.893798] LR is at w1_search_process_cb+0x4c/0xfc +[ 127.898956] pc : [] lr : [] psr: a0070013 +[ 127.905609] sp : cf885f48 ip : bf01d04c fp : ddf1e11c +[ 127.911132] r10: cf8fe040 r9 : c05f8d00 r8 : cf8fe040 +[ 127.916656] r7 : 000000f0 r6 : cf8fe02c r5 : cf8fe000 r4 : cf8fe01c +[ 127.923553] r3 : c05f8d00 r2 : 000000f0 r1 : cf8fe000 r0 : dde1ef10 +[ 127.930450] Flags: NzCv IRQs on FIQs on Mode SVC_32 ISA ARM Segment none +[ 127.938018] Control: 10c5387d Table: 8f8f0019 DAC: 00000051 +[ 127.944091] Process w1_bus_master1 (pid: 1022, stack limit = 0x9135699f) +[ 127.951171] Stack: (0xcf885f48 to 0xcf886000) +[ 127.955810] 5f40: cf8fe000 00000000 cf884000 cf8fe090 000003e8 c05f8d00 +[ 127.964477] 5f60: dde5fc34 c05f9700 ddf1e100 
ddf1e540 cf884000 cf8fe000 c05f9694 00000000 +[ 127.973114] 5f80: dde5fc34 c01499a4 00000000 ddf1e540 c0149874 00000000 00000000 00000000 +[ 127.981781] 5fa0: 00000000 00000000 00000000 c01010e8 00000000 00000000 00000000 00000000 +[ 127.990447] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 +[ 127.999114] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000 00000000 00000000 +[ 128.007781] [] (w1_search_process_cb) from [] (w1_process+0x6c/0x118) +[ 128.016479] [] (w1_process) from [] (kthread+0x130/0x148) +[ 128.024047] [] (kthread) from [] (ret_from_fork+0x14/0x2c) +[ 128.031677] Exception stack(0xcf885fb0 to 0xcf885ff8) +[ 128.037017] 5fa0: 00000000 00000000 00000000 00000000 +[ 128.045684] 5fc0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 +[ 128.054351] 5fe0: 00000000 00000000 00000000 00000000 00000013 00000000 +[ 128.061340] Code: bad PC value +[ 128.064697] ---[ end trace af066e33c0e14119 ]--- + +Cc: +Signed-off-by: Andreas Kemnade +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/w1/masters/omap_hdq.c | 2 ++ + 1 file changed, 2 insertions(+) + +--- a/drivers/w1/masters/omap_hdq.c ++++ b/drivers/w1/masters/omap_hdq.c +@@ -763,6 +763,8 @@ static int omap_hdq_remove(struct platfo + /* remove module dependency */ + pm_runtime_disable(&pdev->dev); + ++ w1_remove_master_device(&omap_w1_master); ++ + return 0; + } + diff --git a/queue-4.19/xen-balloon-support-xend-based-toolstack.patch b/queue-4.19/xen-balloon-support-xend-based-toolstack.patch new file mode 100644 index 00000000000..0b591f8b8fb --- /dev/null +++ b/queue-4.19/xen-balloon-support-xend-based-toolstack.patch @@ -0,0 +1,46 @@ +From 3aa6c19d2f38be9c6e9a8ad5fa8e3c9d29ee3c35 Mon Sep 17 00:00:00 2001 +From: Boris Ostrovsky +Date: Sun, 7 Oct 2018 16:05:38 -0400 +Subject: xen/balloon: Support xend-based toolstack + +From: Boris Ostrovsky + +commit 3aa6c19d2f38be9c6e9a8ad5fa8e3c9d29ee3c35 upstream. 
+ +Xend-based toolstacks don't have static-max entry in xenstore. The +equivalent node for those toolstacks is memory_static_max. + +Fixes: 5266b8e4445c (xen: fix booting ballooned down hvm guest) +Signed-off-by: Boris Ostrovsky +Cc: # 4.13 +Reviewed-by: Juergen Gross +Signed-off-by: Juergen Gross +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/xen/xen-balloon.c | 13 ++++++++----- + 1 file changed, 8 insertions(+), 5 deletions(-) + +--- a/drivers/xen/xen-balloon.c ++++ b/drivers/xen/xen-balloon.c +@@ -76,12 +76,15 @@ static void watch_target(struct xenbus_w + + if (!watch_fired) { + watch_fired = true; +- err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu", +- &static_max); +- if (err != 1) +- static_max = new_target; +- else ++ ++ if ((xenbus_scanf(XBT_NIL, "memory", "static-max", ++ "%llu", &static_max) == 1) || ++ (xenbus_scanf(XBT_NIL, "memory", "memory_static_max", ++ "%llu", &static_max) == 1)) + static_max >>= PAGE_SHIFT - 10; ++ else ++ static_max = new_target; ++ + target_diff = (xen_pv_domain() || xen_initial_domain()) ? 0 + : static_max - balloon_stats.target_pages; + } diff --git a/queue-4.19/xen-blkfront-avoid-null-blkfront_info-dereference-on-device-removal.patch b/queue-4.19/xen-blkfront-avoid-null-blkfront_info-dereference-on-device-removal.patch new file mode 100644 index 00000000000..b92972b99c9 --- /dev/null +++ b/queue-4.19/xen-blkfront-avoid-null-blkfront_info-dereference-on-device-removal.patch @@ -0,0 +1,58 @@ +From f92898e7f32e3533bfd95be174044bc349d416ca Mon Sep 17 00:00:00 2001 +From: Vasilis Liaskovitis +Date: Mon, 15 Oct 2018 15:25:08 +0200 +Subject: xen/blkfront: avoid NULL blkfront_info dereference on device removal +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Vasilis Liaskovitis + +commit f92898e7f32e3533bfd95be174044bc349d416ca upstream. 
+ +If a block device is hot-added when we are out of grants, +gnttab_grant_foreign_access fails with -ENOSPC (log message "28 +granting access to ring page") in this code path: + + talk_to_blkback -> + setup_blkring -> + xenbus_grant_ring -> + gnttab_grant_foreign_access + +and the failing path in talk_to_blkback sets the driver_data to NULL: + + destroy_blkring: + blkif_free(info, 0); + + mutex_lock(&blkfront_mutex); + free_info(info); + mutex_unlock(&blkfront_mutex); + + dev_set_drvdata(&dev->dev, NULL); + +This results in a NULL pointer BUG when blkfront_remove and blkif_free +try to access the failing device's NULL struct blkfront_info. + +Cc: stable@vger.kernel.org # 4.5 and later +Signed-off-by: Vasilis Liaskovitis +Reviewed-by: Roger Pau Monné +Signed-off-by: Konrad Rzeszutek Wilk +Signed-off-by: Jens Axboe +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/block/xen-blkfront.c | 3 +++ + 1 file changed, 3 insertions(+) + +--- a/drivers/block/xen-blkfront.c ++++ b/drivers/block/xen-blkfront.c +@@ -2493,6 +2493,9 @@ static int blkfront_remove(struct xenbus + + dev_dbg(&xbdev->dev, "%s removed", xbdev->nodename); + ++ if (!info) ++ return 0; ++ + blkif_free(info, 0); + + mutex_lock(&info->mutex); diff --git a/queue-4.19/xen-fix-race-in-xen_qlock_wait.patch b/queue-4.19/xen-fix-race-in-xen_qlock_wait.patch new file mode 100644 index 00000000000..c5e4e7e8b6a --- /dev/null +++ b/queue-4.19/xen-fix-race-in-xen_qlock_wait.patch @@ -0,0 +1,71 @@ +From 2ac2a7d4d9ff4e01e36f9c3d116582f6f655ab47 Mon Sep 17 00:00:00 2001 +From: Juergen Gross +Date: Mon, 1 Oct 2018 07:57:42 +0200 +Subject: xen: fix race in xen_qlock_wait() + +From: Juergen Gross + +commit 2ac2a7d4d9ff4e01e36f9c3d116582f6f655ab47 upstream. 
+ +In the following situation a vcpu waiting for a lock might not be +woken up from xen_poll_irq(): + +CPU 1: CPU 2: CPU 3: +takes a spinlock + tries to get lock + -> xen_qlock_wait() +frees the lock +-> xen_qlock_kick(cpu2) + -> xen_clear_irq_pending() + +takes lock again + tries to get lock + -> *lock = _Q_SLOW_VAL + -> *lock == _Q_SLOW_VAL ? + -> xen_poll_irq() +frees the lock +-> xen_qlock_kick(cpu3) + +And cpu 2 will sleep forever. + +This can be avoided easily by modifying xen_qlock_wait() to call +xen_poll_irq() only if the related irq was not pending and to call +xen_clear_irq_pending() only if it was pending. + +Cc: stable@vger.kernel.org +Cc: Waiman.Long@hp.com +Cc: peterz@infradead.org +Signed-off-by: Juergen Gross +Reviewed-by: Jan Beulich +Signed-off-by: Juergen Gross +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/xen/spinlock.c | 15 +++++---------- + 1 file changed, 5 insertions(+), 10 deletions(-) + +--- a/arch/x86/xen/spinlock.c ++++ b/arch/x86/xen/spinlock.c +@@ -45,17 +45,12 @@ static void xen_qlock_wait(u8 *byte, u8 + if (irq == -1) + return; + +- /* clear pending */ +- xen_clear_irq_pending(irq); +- barrier(); ++ /* If irq pending already clear it and return. */ ++ if (xen_test_irq_pending(irq)) { ++ xen_clear_irq_pending(irq); ++ return; ++ } + +- /* +- * We check the byte value after clearing pending IRQ to make sure +- * that we won't miss a wakeup event because of the clearing. +- * +- * The sync_clear_bit() call in xen_clear_irq_pending() is atomic. +- * So it is effectively a memory barrier for x86. 
+- */ + if (READ_ONCE(*byte) != val) + return; + diff --git a/queue-4.19/xen-make-xen_qlock_wait-nestable.patch b/queue-4.19/xen-make-xen_qlock_wait-nestable.patch new file mode 100644 index 00000000000..7c8605f12f6 --- /dev/null +++ b/queue-4.19/xen-make-xen_qlock_wait-nestable.patch @@ -0,0 +1,93 @@ +From a856531951dc8094359dfdac21d59cee5969c18e Mon Sep 17 00:00:00 2001 +From: Juergen Gross +Date: Mon, 1 Oct 2018 07:57:42 +0200 +Subject: xen: make xen_qlock_wait() nestable + +From: Juergen Gross + +commit a856531951dc8094359dfdac21d59cee5969c18e upstream. + +xen_qlock_wait() isn't safe for nested calls due to interrupts. A call +of xen_qlock_kick() might be ignored in case a deeper nesting level +was active right before the call of xen_poll_irq(): + +CPU 1: CPU 2: +spin_lock(lock1) + spin_lock(lock1) + -> xen_qlock_wait() + -> xen_clear_irq_pending() + Interrupt happens +spin_unlock(lock1) +-> xen_qlock_kick(CPU 2) +spin_lock_irqsave(lock2) + spin_lock_irqsave(lock2) + -> xen_qlock_wait() + -> xen_clear_irq_pending() + clears kick for lock1 + -> xen_poll_irq() +spin_unlock_irq_restore(lock2) +-> xen_qlock_kick(CPU 2) + wakes up + spin_unlock_irq_restore(lock2) + IRET + resumes in xen_qlock_wait() + -> xen_poll_irq() + never wakes up + +The solution is to disable interrupts in xen_qlock_wait() and not to +poll for the irq in case xen_qlock_wait() is called in nmi context. 
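(Editorial aside, not part of the upstream commit: the changelog's sequence can be replayed as a toy state model. Illustration only, not kernel code; the point is that the per-vcpu event-channel pending bit is shared by all nesting levels of xen_qlock_wait(), so a nested clear consumes the outer waiter's kick.)

```c
#include <assert.h>
#include <stdbool.h>

/* Replays the changelog's interleaving and reports whether the outer
 * waiter's kick is still pending when its xen_poll_irq() runs. */
static bool outer_kick_survives(bool nested_wait_runs)
{
	bool pending = false;	/* per-vcpu event-channel pending bit */

	pending = true;		/* xen_qlock_kick() for lock1 arrives */
	if (nested_wait_runs)
		pending = false;	/* interrupt handler's wait on lock2
					 * calls xen_clear_irq_pending() */
	return pending;		/* what the outer poll will observe */
}
```

The patch makes the second case impossible by running the wait with interrupts disabled, and simply spins instead of polling when called in NMI context.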
+ +Cc: stable@vger.kernel.org +Cc: Waiman.Long@hp.com +Cc: peterz@infradead.org +Signed-off-by: Juergen Gross +Reviewed-by: Jan Beulich +Signed-off-by: Juergen Gross +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/xen/spinlock.c | 24 ++++++++++-------------- + 1 file changed, 10 insertions(+), 14 deletions(-) + +--- a/arch/x86/xen/spinlock.c ++++ b/arch/x86/xen/spinlock.c +@@ -39,29 +39,25 @@ static void xen_qlock_kick(int cpu) + */ + static void xen_qlock_wait(u8 *byte, u8 val) + { ++ unsigned long flags; + int irq = __this_cpu_read(lock_kicker_irq); + + /* If kicker interrupts not initialized yet, just spin */ +- if (irq == -1) ++ if (irq == -1 || in_nmi()) + return; + +- /* If irq pending already clear it and return. */ ++ /* Guard against reentry. */ ++ local_irq_save(flags); ++ ++ /* If irq pending already clear it. */ + if (xen_test_irq_pending(irq)) { + xen_clear_irq_pending(irq); +- return; ++ } else if (READ_ONCE(*byte) == val) { ++ /* Block until irq becomes pending (or a spurious wakeup) */ ++ xen_poll_irq(irq); + } + +- if (READ_ONCE(*byte) != val) +- return; +- +- /* +- * If an interrupt happens here, it will leave the wakeup irq +- * pending, which will cause xen_poll_irq() to return +- * immediately. +- */ +- +- /* Block until irq becomes pending (or perhaps a spurious wakeup) */ +- xen_poll_irq(irq); ++ local_irq_restore(flags); + } + + static irqreturn_t dummy_handler(int irq, void *dev_id) diff --git a/queue-4.19/xen-pvh-don-t-try-to-unplug-emulated-devices.patch b/queue-4.19/xen-pvh-don-t-try-to-unplug-emulated-devices.patch new file mode 100644 index 00000000000..88ba50c768a --- /dev/null +++ b/queue-4.19/xen-pvh-don-t-try-to-unplug-emulated-devices.patch @@ -0,0 +1,40 @@ +From e6111161c0a02d58919d776eec94b313bb57911f Mon Sep 17 00:00:00 2001 +From: Juergen Gross +Date: Thu, 25 Oct 2018 09:54:15 +0200 +Subject: xen/pvh: don't try to unplug emulated devices + +From: Juergen Gross + +commit e6111161c0a02d58919d776eec94b313bb57911f upstream. 
+ +A Xen PVH guest has no associated qemu device model, so trying to +unplug any emulated devices is making no sense at all. + +Bail out early from xen_unplug_emulated_devices() when running as PVH +guest. This will avoid issuing the boot message: + +[ 0.000000] Xen Platform PCI: unrecognised magic value + +Cc: # 4.11 +Signed-off-by: Juergen Gross +Reviewed-by: Boris Ostrovsky +Signed-off-by: Juergen Gross +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/xen/platform-pci-unplug.c | 4 ++++ + 1 file changed, 4 insertions(+) + +--- a/arch/x86/xen/platform-pci-unplug.c ++++ b/arch/x86/xen/platform-pci-unplug.c +@@ -146,6 +146,10 @@ void xen_unplug_emulated_devices(void) + { + int r; + ++ /* PVH guests don't have emulated devices. */ ++ if (xen_pvh_domain()) ++ return; ++ + /* user explicitly requested no unplug */ + if (xen_emul_unplug & XEN_UNPLUG_NEVER) + return; diff --git a/queue-4.19/xen-pvh-increase-early-stack-size.patch b/queue-4.19/xen-pvh-increase-early-stack-size.patch new file mode 100644 index 00000000000..22bbf7ac0c7 --- /dev/null +++ b/queue-4.19/xen-pvh-increase-early-stack-size.patch @@ -0,0 +1,38 @@ +From 7deecbda3026f5e2a8cc095d7ef7261a920efcf2 Mon Sep 17 00:00:00 2001 +From: Roger Pau Monne +Date: Tue, 9 Oct 2018 12:32:37 +0200 +Subject: xen/pvh: increase early stack size +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Roger Pau Monne + +commit 7deecbda3026f5e2a8cc095d7ef7261a920efcf2 upstream. + +While booting on an AMD EPYC box the stack canary would detect stack +overflows when using the current PVH early stack size (256). Switch to +using the value defined by BOOT_STACK_SIZE, which prevents the stack +overflow. 
+ +Cc: # 4.11 +Signed-off-by: Roger Pau Monné +Reviewed-by: Juergen Gross +Signed-off-by: Juergen Gross +Signed-off-by: Greg Kroah-Hartman + +--- + arch/x86/xen/xen-pvh.S | 2 +- + 1 file changed, 1 insertion(+), 1 deletion(-) + +--- a/arch/x86/xen/xen-pvh.S ++++ b/arch/x86/xen/xen-pvh.S +@@ -181,7 +181,7 @@ canary: + .fill 48, 1, 0 + + early_stack: +- .fill 256, 1, 0 ++ .fill BOOT_STACK_SIZE, 1, 0 + early_stack_end: + + ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY, diff --git a/queue-4.19/xen-swiotlb-use-actually-allocated-size-on-check-physical-continuous.patch b/queue-4.19/xen-swiotlb-use-actually-allocated-size-on-check-physical-continuous.patch new file mode 100644 index 00000000000..ae4f9f84c79 --- /dev/null +++ b/queue-4.19/xen-swiotlb-use-actually-allocated-size-on-check-physical-continuous.patch @@ -0,0 +1,56 @@ +From 7250f422da0480d8512b756640f131b9b893ccda Mon Sep 17 00:00:00 2001 +From: Joe Jin +Date: Tue, 16 Oct 2018 15:21:16 -0700 +Subject: xen-swiotlb: use actually allocated size on check physical continuous + +From: Joe Jin + +commit 7250f422da0480d8512b756640f131b9b893ccda upstream. + +xen_swiotlb_{alloc,free}_coherent() allocate/free memory based on the +order of the pages and not size argument (bytes). This is inconsistent with +range_straddles_page_boundary and memset which use the 'size' value, +which may lead to not exchanging memory with Xen (range_straddles_page_boundary() +returned true). And then the call to xen_swiotlb_free_coherent() would +actually try to exchange the memory with Xen, leading to the kernel +hitting a BUG (as the hypercall returned an error). + +This patch fixes it by making the 'size' variable be of the same size +as the amount of memory allocated.
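(Editorial aside, not part of the upstream commit: the size/order mismatch is easy to see in isolation. This is an invented userspace sketch, not the kernel's code; the toy_-prefixed helpers are stand-ins and a 4 KiB XEN_PAGE_SHIFT of 12 is assumed. The allocator works in power-of-two page blocks, so the bytes actually allocated can exceed the caller's 'size'.)

```c
#include <assert.h>
#include <stddef.h>

#define XEN_PAGE_SHIFT 12	/* assumption: 4 KiB Xen pages */

/* Smallest order such that 2^order pages cover `size` bytes
 * (same contract as the kernel's get_order()). */
static unsigned int toy_get_order(size_t size)
{
	unsigned int order = 0;

	while (((size_t)1 << (order + XEN_PAGE_SHIFT)) < size)
		order++;
	return order;
}

/* What the page allocator really hands back for a `size`-byte request. */
static size_t toy_allocated_size(size_t size)
{
	return (size_t)1 << (toy_get_order(size) + XEN_PAGE_SHIFT);
}
```

Checks such as range_straddles_page_boundary() must use this rounded-up size; otherwise the alloc and free paths can disagree about whether the region was exchanged with Xen, which is the BUG described above.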
+ +CC: stable@vger.kernel.org +Signed-off-by: Joe Jin +Cc: Konrad Rzeszutek Wilk +Cc: Boris Ostrovsky +Cc: Christoph Hellwig +Cc: Dongli Zhang +Cc: John Sobecki +Signed-off-by: Konrad Rzeszutek Wilk +Signed-off-by: Greg Kroah-Hartman + +--- + drivers/xen/swiotlb-xen.c | 6 ++++++ + 1 file changed, 6 insertions(+) + +--- a/drivers/xen/swiotlb-xen.c ++++ b/drivers/xen/swiotlb-xen.c +@@ -303,6 +303,9 @@ xen_swiotlb_alloc_coherent(struct device + */ + flags &= ~(__GFP_DMA | __GFP_HIGHMEM); + ++ /* Convert the size to actually allocated. */ ++ size = 1UL << (order + XEN_PAGE_SHIFT); ++ + /* On ARM this function returns an ioremap'ped virtual address for + * which virt_to_phys doesn't return the corresponding physical + * address. In fact on ARM virt_to_phys only works for kernel direct +@@ -351,6 +354,9 @@ xen_swiotlb_free_coherent(struct device + * physical address */ + phys = xen_bus_to_phys(dev_addr); + ++ /* Convert the size to actually allocated. */ ++ size = 1UL << (order + XEN_PAGE_SHIFT); ++ + if (((dev_addr + size - 1 <= dma_mask)) || + range_straddles_page_boundary(phys, size)) + xen_destroy_contiguous_region(phys, order);