From: Greg Kroah-Hartman
Date: Mon, 15 Aug 2022 11:52:28 +0000 (+0200)
Subject: 4.9-stable patches
X-Git-Tag: v5.15.61~70
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=7d25da2627f18f9a6049a6eb19f9a474097a785e;p=thirdparty%2Fkernel%2Fstable-queue.git

4.9-stable patches

added patches:
	dm-raid-fix-address-sanitizer-warning-in-raid_status.patch
	net_sched-cls_route-remove-from-list-when-handle-is-0.patch
---

diff --git a/queue-4.9/dm-raid-fix-address-sanitizer-warning-in-raid_status.patch b/queue-4.9/dm-raid-fix-address-sanitizer-warning-in-raid_status.patch
new file mode 100644
index 00000000000..62afbf33264
--- /dev/null
+++ b/queue-4.9/dm-raid-fix-address-sanitizer-warning-in-raid_status.patch
@@ -0,0 +1,63 @@
+From 1fbeea217d8f297fe0e0956a1516d14ba97d0396 Mon Sep 17 00:00:00 2001
+From: Mikulas Patocka
+Date: Sun, 24 Jul 2022 14:31:35 -0400
+Subject: dm raid: fix address sanitizer warning in raid_status
+
+From: Mikulas Patocka
+
+commit 1fbeea217d8f297fe0e0956a1516d14ba97d0396 upstream.
+
+There is this warning when using a kernel with the address sanitizer
+and running this testsuite:
+https://gitlab.com/cki-project/kernel-tests/-/tree/main/storage/swraid/scsi_raid
+
+==================================================================
+BUG: KASAN: slab-out-of-bounds in raid_status+0x1747/0x2820 [dm_raid]
+Read of size 4 at addr ffff888079d2c7e8 by task lvcreate/13319
+CPU: 0 PID: 13319 Comm: lvcreate Not tainted 5.18.0-0.rc3. #1
+Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
+Call Trace:
+
+ dump_stack_lvl+0x6a/0x9c
+ print_address_description.constprop.0+0x1f/0x1e0
+ print_report.cold+0x55/0x244
+ kasan_report+0xc9/0x100
+ raid_status+0x1747/0x2820 [dm_raid]
+ dm_ima_measure_on_table_load+0x4b8/0xca0 [dm_mod]
+ table_load+0x35c/0x630 [dm_mod]
+ ctl_ioctl+0x411/0x630 [dm_mod]
+ dm_ctl_ioctl+0xa/0x10 [dm_mod]
+ __x64_sys_ioctl+0x12a/0x1a0
+ do_syscall_64+0x5b/0x80
+
+The warning is caused by reading conf->max_nr_stripes in raid_status. The
+code in raid_status reads mddev->private, casts it to struct r5conf and
+reads the entry max_nr_stripes.
+
+However, if we have different raid type than 4/5/6, mddev->private
+doesn't point to struct r5conf; it may point to struct r0conf, struct
+r1conf, struct r10conf or struct mpconf. If we cast a pointer to one
+of these structs to struct r5conf, we will be reading invalid memory
+and KASAN warns about it.
+
+Fix this bug by reading struct r5conf only if raid type is 4, 5 or 6.
+
+Cc: stable@vger.kernel.org
+Signed-off-by: Mikulas Patocka
+Signed-off-by: Mike Snitzer
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/md/dm-raid.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/md/dm-raid.c
++++ b/drivers/md/dm-raid.c
+@@ -3173,7 +3173,7 @@ static void raid_status(struct dm_target
+ {
+ 	struct raid_set *rs = ti->private;
+ 	struct mddev *mddev = &rs->md;
+-	struct r5conf *conf = mddev->private;
++	struct r5conf *conf = rs_is_raid456(rs) ? mddev->private : NULL;
+ 	int i, max_nr_stripes = conf ? conf->max_nr_stripes : 0;
+ 	bool array_in_sync;
+ 	unsigned int raid_param_cnt = 1;	/* at least 1 for chunksize */
diff --git a/queue-4.9/net_sched-cls_route-remove-from-list-when-handle-is-0.patch b/queue-4.9/net_sched-cls_route-remove-from-list-when-handle-is-0.patch
new file mode 100644
index 00000000000..8744c2363db
--- /dev/null
+++ b/queue-4.9/net_sched-cls_route-remove-from-list-when-handle-is-0.patch
@@ -0,0 +1,45 @@
+From 9ad36309e2719a884f946678e0296be10f0bb4c1 Mon Sep 17 00:00:00 2001
+From: Thadeu Lima de Souza Cascardo
+Date: Tue, 9 Aug 2022 14:05:18 -0300
+Subject: net_sched: cls_route: remove from list when handle is 0
+
+From: Thadeu Lima de Souza Cascardo
+
+commit 9ad36309e2719a884f946678e0296be10f0bb4c1 upstream.
+
+When a route filter is replaced and the old filter has a 0 handle, the old
+one won't be removed from the hashtable, while it will still be freed.
+
+The test was there since before commit 1109c00547fc ("net: sched: RCU
+cls_route"), when a new filter was not allocated when there was an old one.
+The old filter was reused and the reinserting would only be necessary if an
+old filter was replaced. That was still wrong for the same case where the
+old handle was 0.
+
+Remove the old filter from the list independently from its handle value.
+
+This fixes CVE-2022-2588, also reported as ZDI-CAN-17440.
+
+Reported-by: Zhenpeng Lin
+Signed-off-by: Thadeu Lima de Souza Cascardo
+Reviewed-by: Kamal Mostafa
+Cc:
+Acked-by: Jamal Hadi Salim
+Link: https://lore.kernel.org/r/20220809170518.164662-1-cascardo@canonical.com
+Signed-off-by: Jakub Kicinski
+Signed-off-by: Greg Kroah-Hartman
+---
+ net/sched/cls_route.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/net/sched/cls_route.c
++++ b/net/sched/cls_route.c
+@@ -534,7 +534,7 @@ static int route4_change(struct net *net
+ 	rcu_assign_pointer(f->next, f1);
+ 	rcu_assign_pointer(*fp, f);
+
+-	if (fold && fold->handle && f->handle != fold->handle) {
++	if (fold) {
+ 		th = to_hash(fold->handle);
+ 		h = from_hash(fold->handle >> 16);
+ 		b = rtnl_dereference(head->table[th]);
diff --git a/queue-4.9/series b/queue-4.9/series
index 7eb89ab3bf5..9b427de15da 100644
--- a/queue-4.9/series
+++ b/queue-4.9/series
@@ -54,3 +54,5 @@ ext4-fix-use-after-free-in-ext4_xattr_set_entry.patch
 ext4-update-s_overhead_clusters-in-the-superblock-during-an-on-line-resize.patch
 ext4-fix-extent-status-tree-race-in-writeback-error-recovery-path.patch
 ext4-correct-max_inline_xattr_value_size-computing.patch
+dm-raid-fix-address-sanitizer-warning-in-raid_status.patch
+net_sched-cls_route-remove-from-list-when-handle-is-0.patch
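
For context on the dm-raid change above: the fix only treats mddev->private as a
struct r5conf when the set is raid4/5/6, so conf stays NULL for other levels and
max_nr_stripes falls back to 0. The following is a minimal user-space sketch of
that guard pattern; the struct definitions, the is_raid456() helper and the
report_stripes() function are simplified stand-ins invented for illustration,
not the kernel's real types or API.

/* Standalone sketch of the raid_status() guard; every type here is an
 * illustrative stand-in, not the kernel's struct mddev / struct r5conf. */
#include <stdio.h>

struct r5conf { int max_nr_stripes; };
struct r0conf { int nr_strip_zones; };         /* laid out differently than r5conf */
struct mddev  { int level; void *private; };   /* private's real type depends on level */

static int is_raid456(const struct mddev *mddev)
{
	return mddev->level >= 4 && mddev->level <= 6;
}

static void report_stripes(const struct mddev *mddev)
{
	/* Only interpret ->private as r5conf for raid4/5/6; for any other level
	 * the cast would read memory laid out as a different conf structure. */
	struct r5conf *conf = is_raid456(mddev) ? mddev->private : NULL;
	int max_nr_stripes = conf ? conf->max_nr_stripes : 0;

	printf("max_nr_stripes=%d\n", max_nr_stripes);
}

int main(void)
{
	struct r5conf r5 = { .max_nr_stripes = 256 };
	struct r0conf r0 = { .nr_strip_zones = 2 };
	struct mddev raid5 = { .level = 5, .private = &r5 };
	struct mddev raid0 = { .level = 0, .private = &r0 };

	report_stripes(&raid5);   /* prints 256 */
	report_stripes(&raid0);   /* prints 0: never reads through a bogus r5conf */
	return 0;
}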
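
On the net_sched/cls_route change: the point of the one-line fix is that a
replaced filter must be unlinked from its hash bucket unconditionally before it
is freed; gating the unlink on a non-zero handle left a freed filter reachable
from the table (CVE-2022-2588). The sketch below models that replace-then-free
pattern with a plain singly linked list; struct fltr and replace_filter() are
hypothetical simplifications, not the kernel's cls_route structures or its
RCU-protected hash tables.

/* Illustrative replace-then-free pattern; "fltr" and "replace_filter" are
 * made-up stand-ins, not the real tc/cls_route types. */
#include <stdio.h>
#include <stdlib.h>

struct fltr {
	unsigned int handle;
	struct fltr *next;
};

/* Insert fnew at the head of the bucket, then unlink fold unconditionally
 * before freeing it.  The buggy pattern skipped the unlink when fold->handle
 * was 0, leaving a dangling pointer in the list after the free. */
static void replace_filter(struct fltr **bucket, struct fltr *fold, struct fltr *fnew)
{
	fnew->next = *bucket;
	*bucket = fnew;

	for (struct fltr **fp = &fnew->next; *fp; fp = &(*fp)->next) {
		if (*fp == fold) {
			*fp = fold->next;   /* remove regardless of fold->handle */
			break;
		}
	}
	free(fold);
}

int main(void)
{
	struct fltr *fold = calloc(1, sizeof(*fold));  /* handle stays 0, as in the bug's trigger */
	struct fltr *fnew = calloc(1, sizeof(*fnew));
	struct fltr *bucket = fold;

	fnew->handle = 1;
	replace_filter(&bucket, fold, fnew);

	for (struct fltr *f = bucket; f; f = f->next)
		printf("handle=%u\n", f->handle);      /* only the new filter remains */

	free(fnew);
	return 0;
}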