From: Sasha Levin Date: Tue, 8 Jun 2021 01:13:07 +0000 (-0400) Subject: Fixes for 5.12 X-Git-Tag: v4.4.272~78 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=6b07cd30924357e738753fedf86dc056650dec29;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 5.12 Signed-off-by: Sasha Levin --- diff --git a/queue-5.12/amdgpu-fix-gem-obj-leak-in-amdgpu_display_user_frame.patch b/queue-5.12/amdgpu-fix-gem-obj-leak-in-amdgpu_display_user_frame.patch new file mode 100644 index 00000000000..adf4f6dd126 --- /dev/null +++ b/queue-5.12/amdgpu-fix-gem-obj-leak-in-amdgpu_display_user_frame.patch @@ -0,0 +1,44 @@ +From 0a8577cfd08c297071f689b888f4aaa99eb74a14 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 21 Apr 2021 11:16:35 +0200 +Subject: amdgpu: fix GEM obj leak in amdgpu_display_user_framebuffer_create +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: Simon Ser + +[ Upstream commit e0c16eb4b3610298a74ae5504c7f6939b12be991 ] + +This error code-path is missing a drm_gem_object_put call. Other +error code-paths are fine. + +Signed-off-by: Simon Ser +Fixes: 1769152ac64b ("drm/amdgpu: Fail fb creation from imported dma-bufs. (v2)") +Cc: Alex Deucher +Cc: Harry Wentland +Cc: Nicholas Kazlauskas +Cc: Bas Nieuwenhuizen +Reviewed-by: Christian König +Signed-off-by: Alex Deucher +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/amdgpu_display.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +index a2ac44cc2a6d..e80cc2928b58 100644 +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_display.c +@@ -944,6 +944,7 @@ amdgpu_display_user_framebuffer_create(struct drm_device *dev, + domains = amdgpu_display_supported_domains(drm_to_adev(dev), bo->flags); + if (obj->import_attach && !(domains & AMDGPU_GEM_DOMAIN_GTT)) { + drm_dbg_kms(dev, "Cannot create framebuffer from imported dma_buf\n"); ++ drm_gem_object_put(obj); + return ERR_PTR(-EINVAL); + } + +-- +2.30.2 + diff --git a/queue-5.12/btrfs-fix-compressed-writes-that-cross-stripe-bounda.patch b/queue-5.12/btrfs-fix-compressed-writes-that-cross-stripe-bounda.patch new file mode 100644 index 00000000000..9e8ee700845 --- /dev/null +++ b/queue-5.12/btrfs-fix-compressed-writes-that-cross-stripe-bounda.patch @@ -0,0 +1,127 @@ +From 6ac90347d410797dff70ea412aacb446f898b366 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 25 May 2021 13:52:43 +0800 +Subject: btrfs: fix compressed writes that cross stripe boundary + +From: Qu Wenruo + +[ Upstream commit 4c80a97d7b02cf68e169118ef2bda0725fc87f6f ] + +[BUG] +When running btrfs/027 with "-o compress" mount option, it always +crashes with the following call trace: + + BTRFS critical (device dm-4): mapping failed logical 298901504 bio len 12288 len 8192 + ------------[ cut here ]------------ + kernel BUG at fs/btrfs/volumes.c:6651! + invalid opcode: 0000 [#1] PREEMPT SMP NOPTI + CPU: 5 PID: 31089 Comm: kworker/u24:10 Tainted: G OE 5.13.0-rc2-custom+ #26 + Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 + Workqueue: btrfs-delalloc btrfs_work_helper [btrfs] + RIP: 0010:btrfs_map_bio.cold+0x58/0x5a [btrfs] + Call Trace: + btrfs_submit_compressed_write+0x2d7/0x470 [btrfs] + submit_compressed_extents+0x3b0/0x470 [btrfs] + ? mark_held_locks+0x49/0x70 + btrfs_work_helper+0x131/0x3e0 [btrfs] + process_one_work+0x28f/0x5d0 + worker_thread+0x55/0x3c0 + ? 
process_one_work+0x5d0/0x5d0 + kthread+0x141/0x160 + ? __kthread_bind_mask+0x60/0x60 + ret_from_fork+0x22/0x30 + ---[ end trace 63113a3a91f34e68 ]--- + +[CAUSE] +The critical message before the crash means we have a bio at logical +bytenr 298901504 length 12288, but only 8192 bytes can fit into one +stripe, the remaining 4096 bytes go to another stripe. + +In btrfs, all bios are properly split to avoid cross stripe boundary, +but commit 764c7c9a464b ("btrfs: zoned: fix parallel compressed writes") +changed the behavior for compressed writes. + +Previously if we find our new page can't be fitted into current stripe, +ie. "submit == 1" case, we submit current bio without adding current +page. + + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio, 0); + + page->mapping = NULL; + if (submit || bio_add_page(bio, page, PAGE_SIZE, 0) < + PAGE_SIZE) { + +But after the modification, we will add the page no matter if it crosses +stripe boundary, leading to the above crash. + + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio, 0); + + if (pg_index == 0 && use_append) + len = bio_add_zone_append_page(bio, page, PAGE_SIZE, 0); + else + len = bio_add_page(bio, page, PAGE_SIZE, 0); + + page->mapping = NULL; + if (submit || len < PAGE_SIZE) { + +[FIX] +It's no longer possible to revert to the original code style as we have +two different bio_add_*_page() calls now. + +The new fix is to skip the bio_add_*_page() call if @submit is true. + +Also to avoid @len to be uninitialized, always initialize it to zero. + +If @submit is true, @len will not be checked. +If @submit is not true, @len will be the return value of +bio_add_*_page() call. +Either way, the behavior is still the same as the old code. + +Reported-by: Josef Bacik +Fixes: 764c7c9a464b ("btrfs: zoned: fix parallel compressed writes") +Reviewed-by: Johannes Thumshirn +Signed-off-by: Qu Wenruo +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/compression.c | 17 ++++++++++++----- + 1 file changed, 12 insertions(+), 5 deletions(-) + +diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c +index fe0ec3d321b8..752b3cffe226 100644 +--- a/fs/btrfs/compression.c ++++ b/fs/btrfs/compression.c +@@ -457,7 +457,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + bytes_left = compressed_len; + for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) { + int submit = 0; +- int len; ++ int len = 0; + + page = compressed_pages[pg_index]; + page->mapping = inode->vfs_inode.i_mapping; +@@ -465,10 +465,17 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio, + 0); + +- if (pg_index == 0 && use_append) +- len = bio_add_zone_append_page(bio, page, PAGE_SIZE, 0); +- else +- len = bio_add_page(bio, page, PAGE_SIZE, 0); ++ /* ++ * Page can only be added to bio if the current bio fits in ++ * stripe. 
++ */
++ if (!submit) {
++ if (pg_index == 0 && use_append)
++ len = bio_add_zone_append_page(bio, page,
++ PAGE_SIZE, 0);
++ else
++ len = bio_add_page(bio, page, PAGE_SIZE, 0);
++ }
+
+ page->mapping = NULL;
+ if (submit || len < PAGE_SIZE) {
+--
+2.30.2
+
diff --git a/queue-5.12/btrfs-zoned-fix-parallel-compressed-writes.patch b/queue-5.12/btrfs-zoned-fix-parallel-compressed-writes.patch
new file mode 100644
index 00000000000..f4b9d6faf5b
--- /dev/null
+++ b/queue-5.12/btrfs-zoned-fix-parallel-compressed-writes.patch
@@ -0,0 +1,141 @@
+From 9061dbf92469820de013b2dfe17ee85e06c00f6e Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 19 May 2021 00:40:28 +0900
+Subject: btrfs: zoned: fix parallel compressed writes
+
+From: Johannes Thumshirn
+
+[ Upstream commit 764c7c9a464b68f7c6a5a9ec0b923176a05e8e8f ]
+
+When multiple processes write data to the same block group on a
+compressed zoned filesystem, the underlying device could report I/O
+errors and data corruption is possible.
+
+This happens because on a zoned file system, compressed data writes
+were sent to the device via a REQ_OP_WRITE instead of a
+REQ_OP_ZONE_APPEND operation. But with REQ_OP_WRITE and parallel
+submission it cannot be guaranteed that the data is always submitted
+aligned to the underlying zone's write pointer.
+
+The change to using REQ_OP_ZONE_APPEND instead of REQ_OP_WRITE on a
+zoned filesystem is non-intrusive on a regular file system or when
+submitting to a conventional zone on a zoned filesystem, as it is
+guarded by btrfs_use_zone_append.
+
+Reported-by: David Sterba
+Fixes: 9d294a685fbc ("btrfs: zoned: enable to mount ZONED incompat flag")
+CC: stable@vger.kernel.org # 5.12.x: e380adfc213a13: btrfs: zoned: pass start block to btrfs_use_zone_append
+CC: stable@vger.kernel.org # 5.12.x
+Signed-off-by: Johannes Thumshirn
+Signed-off-by: David Sterba
+Signed-off-by: Sasha Levin
+---
+ fs/btrfs/compression.c | 42 ++++++++++++++++++++++++++++++++++++++----
+ 1 file changed, 38 insertions(+), 4 deletions(-)
+
+diff --git a/fs/btrfs/compression.c b/fs/btrfs/compression.c
+index 81387cdf334d..fe0ec3d321b8 100644
+--- a/fs/btrfs/compression.c
++++ b/fs/btrfs/compression.c
+@@ -28,6 +28,7 @@
+ #include "compression.h"
+ #include "extent_io.h"
+ #include "extent_map.h"
++#include "zoned.h"
+
+ static const char* const btrfs_compress_types[] = { "", "zlib", "lzo", "zstd" };
+
+@@ -349,6 +350,7 @@ static void end_compressed_bio_write(struct bio *bio)
+ */
+ inode = cb->inode;
+ cb->compressed_pages[0]->mapping = cb->inode->i_mapping;
++ btrfs_record_physical_zoned(inode, cb->start, bio);
+ btrfs_writepage_endio_finish_ordered(cb->compressed_pages[0],
+ cb->start, cb->start + cb->len - 1,
+ bio->bi_status == BLK_STS_OK);
+@@ -401,6 +403,8 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start,
+ u64 first_byte = disk_start;
+ blk_status_t ret;
+ int skip_sum = inode->flags & BTRFS_INODE_NODATASUM;
++ const bool use_append = btrfs_use_zone_append(inode, disk_start);
++ const unsigned int bio_op = use_append ?
REQ_OP_ZONE_APPEND : REQ_OP_WRITE; + + WARN_ON(!PAGE_ALIGNED(start)); + cb = kmalloc(compressed_bio_size(fs_info, compressed_len), GFP_NOFS); +@@ -418,10 +422,31 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + cb->nr_pages = nr_pages; + + bio = btrfs_bio_alloc(first_byte); +- bio->bi_opf = REQ_OP_WRITE | write_flags; ++ bio->bi_opf = bio_op | write_flags; + bio->bi_private = cb; + bio->bi_end_io = end_compressed_bio_write; + ++ if (use_append) { ++ struct extent_map *em; ++ struct map_lookup *map; ++ struct block_device *bdev; ++ ++ em = btrfs_get_chunk_map(fs_info, disk_start, PAGE_SIZE); ++ if (IS_ERR(em)) { ++ kfree(cb); ++ bio_put(bio); ++ return BLK_STS_NOTSUPP; ++ } ++ ++ map = em->map_lookup; ++ /* We only support single profile for now */ ++ ASSERT(map->num_stripes == 1); ++ bdev = map->stripes[0].dev->bdev; ++ ++ bio_set_dev(bio, bdev); ++ free_extent_map(em); ++ } ++ + if (blkcg_css) { + bio->bi_opf |= REQ_CGROUP_PUNT; + kthread_associate_blkcg(blkcg_css); +@@ -432,6 +457,7 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + bytes_left = compressed_len; + for (pg_index = 0; pg_index < cb->nr_pages; pg_index++) { + int submit = 0; ++ int len; + + page = compressed_pages[pg_index]; + page->mapping = inode->vfs_inode.i_mapping; +@@ -439,9 +465,13 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + submit = btrfs_bio_fits_in_stripe(page, PAGE_SIZE, bio, + 0); + ++ if (pg_index == 0 && use_append) ++ len = bio_add_zone_append_page(bio, page, PAGE_SIZE, 0); ++ else ++ len = bio_add_page(bio, page, PAGE_SIZE, 0); ++ + page->mapping = NULL; +- if (submit || bio_add_page(bio, page, PAGE_SIZE, 0) < +- PAGE_SIZE) { ++ if (submit || len < PAGE_SIZE) { + /* + * inc the count before we submit the bio so + * we know the end IO handler won't happen before +@@ -465,11 +495,15 @@ blk_status_t btrfs_submit_compressed_write(struct btrfs_inode *inode, u64 start, + } + + bio = btrfs_bio_alloc(first_byte); +- bio->bi_opf = REQ_OP_WRITE | write_flags; ++ bio->bi_opf = bio_op | write_flags; + bio->bi_private = cb; + bio->bi_end_io = end_compressed_bio_write; + if (blkcg_css) + bio->bi_opf |= REQ_CGROUP_PUNT; ++ /* ++ * Use bio_add_page() to ensure the bio has at least one ++ * page. ++ */ + bio_add_page(bio, page, PAGE_SIZE, 0); + } + if (bytes_left < PAGE_SIZE) { +-- +2.30.2 + diff --git a/queue-5.12/drm-amdgpu-jpeg2.5-add-cancel_delayed_work_sync-befo.patch b/queue-5.12/drm-amdgpu-jpeg2.5-add-cancel_delayed_work_sync-befo.patch new file mode 100644 index 00000000000..9aa106b702e --- /dev/null +++ b/queue-5.12/drm-amdgpu-jpeg2.5-add-cancel_delayed_work_sync-befo.patch @@ -0,0 +1,49 @@ +From 290e1365c920ff38c800c3d6f3d21683e54a50a1 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 19 May 2021 12:04:38 -0400 +Subject: drm/amdgpu/jpeg2.5: add cancel_delayed_work_sync before power gate +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: James Zhu + +[ Upstream commit 23f10a571da5eaa63b7845d16e2f49837e841ab9 ] + +Add cancel_delayed_work_sync before set power gating state +to avoid race condition issue when power gating. 
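+
+The race being closed, as a minimal sketch (names simplified; the real
+idle worker lives in the shared VCN code, so treat this as an
+illustration rather than the driver's exact implementation):
+
+    /* Illustrative idle worker: re-armed after each job, it may still
+     * be pending, or running, when hw_fini is entered. */
+    static void idle_work_handler(struct work_struct *work)
+    {
+        struct amdgpu_device *adev = container_of(work,
+                struct amdgpu_device, vcn.idle_work.work);
+
+        jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
+    }
+
+    static int hw_fini(struct amdgpu_device *adev)
+    {
+        /* Flush the worker first so it cannot run concurrently with,
+         * or after, the gating sequence below. */
+        cancel_delayed_work_sync(&adev->vcn.idle_work);
+
+        if (adev->jpeg.cur_state != AMD_PG_STATE_GATE)
+            jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
+        return 0;
+    }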
+ +Signed-off-by: James Zhu +Reviewed-by: Leo Liu +Acked-by: Christian König +Signed-off-by: Alex Deucher +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c +index dc947c8ffe21..e6c4a36eaf9a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c ++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_5.c +@@ -187,14 +187,14 @@ static int jpeg_v2_5_hw_init(void *handle) + static int jpeg_v2_5_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; +- struct amdgpu_ring *ring; + int i; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + for (i = 0; i < adev->jpeg.num_jpeg_inst; ++i) { + if (adev->jpeg.harvest_config & (1 << i)) + continue; + +- ring = &adev->jpeg.inst[i].ring_dec; + if (adev->jpeg.cur_state != AMD_PG_STATE_GATE && + RREG32_SOC15(JPEG, i, mmUVD_JRBC_STATUS)) + jpeg_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE); +-- +2.30.2 + diff --git a/queue-5.12/drm-amdgpu-jpeg3-add-cancel_delayed_work_sync-before.patch b/queue-5.12/drm-amdgpu-jpeg3-add-cancel_delayed_work_sync-before.patch new file mode 100644 index 00000000000..be41b4e3be4 --- /dev/null +++ b/queue-5.12/drm-amdgpu-jpeg3-add-cancel_delayed_work_sync-before.patch @@ -0,0 +1,44 @@ +From 17686c03ff2011329c710badf227b2a0543f1d18 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Wed, 19 May 2021 12:08:20 -0400 +Subject: drm/amdgpu/jpeg3: add cancel_delayed_work_sync before power gate +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: James Zhu + +[ Upstream commit 20ebbfd22f8115a1e4f60d3d289f66be4d47f1ec ] + +Add cancel_delayed_work_sync before set power gating state +to avoid race condition issue when power gating. 
+ +Signed-off-by: James Zhu +Reviewed-by: Leo Liu +Acked-by: Christian König +Signed-off-by: Alex Deucher +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +index 1d354245678d..2ea68c84e6b4 100644 +--- a/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v3_0.c +@@ -159,9 +159,9 @@ static int jpeg_v3_0_hw_init(void *handle) + static int jpeg_v3_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; +- struct amdgpu_ring *ring; + +- ring = &adev->jpeg.inst->ring_dec; ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + if (adev->jpeg.cur_state != AMD_PG_STATE_GATE && + RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS)) + jpeg_v3_0_set_powergating_state(adev, AMD_PG_STATE_GATE); +-- +2.30.2 + diff --git a/queue-5.12/drm-amdgpu-vcn3-add-cancel_delayed_work_sync-before-.patch b/queue-5.12/drm-amdgpu-vcn3-add-cancel_delayed_work_sync-before-.patch new file mode 100644 index 00000000000..f4c2bf09a3a --- /dev/null +++ b/queue-5.12/drm-amdgpu-vcn3-add-cancel_delayed_work_sync-before-.patch @@ -0,0 +1,50 @@ +From 4f9c79ab37054372fac1b29afd5592c1a6187ec5 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 17 May 2021 16:39:17 -0400 +Subject: drm/amdgpu/vcn3: add cancel_delayed_work_sync before power gate +MIME-Version: 1.0 +Content-Type: text/plain; charset=UTF-8 +Content-Transfer-Encoding: 8bit + +From: James Zhu + +[ Upstream commit 4a62542ae064e3b645d6bbf2295a6c05136956c6 ] + +Add cancel_delayed_work_sync before set power gating state +to avoid race condition issue when power gating. + +Signed-off-by: James Zhu +Reviewed-by: Leo Liu +Acked-by: Christian König +Signed-off-by: Alex Deucher +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c | 5 ++--- + 1 file changed, 2 insertions(+), 3 deletions(-) + +diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c +index ebbc04ff5da0..90138469648a 100644 +--- a/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c ++++ b/drivers/gpu/drm/amd/amdgpu/vcn_v3_0.c +@@ -367,15 +367,14 @@ done: + static int vcn_v3_0_hw_fini(void *handle) + { + struct amdgpu_device *adev = (struct amdgpu_device *)handle; +- struct amdgpu_ring *ring; + int i; + ++ cancel_delayed_work_sync(&adev->vcn.idle_work); ++ + for (i = 0; i < adev->vcn.num_vcn_inst; ++i) { + if (adev->vcn.harvest_config & (1 << i)) + continue; + +- ring = &adev->vcn.inst[i].ring_dec; +- + if (!amdgpu_sriov_vf(adev)) { + if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) || + (adev->vcn.cur_state != AMD_PG_STATE_GATE && +-- +2.30.2 + diff --git a/queue-5.12/io_uring-fix-link-timeout-refs.patch b/queue-5.12/io_uring-fix-link-timeout-refs.patch new file mode 100644 index 00000000000..0874110ea26 --- /dev/null +++ b/queue-5.12/io_uring-fix-link-timeout-refs.patch @@ -0,0 +1,54 @@ +From 47b6b591aac7d5fe85ec7bf61f6263b1e163dde2 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 7 May 2021 21:06:38 +0100 +Subject: io_uring: fix link timeout refs + +From: Pavel Begunkov + +[ Upstream commit a298232ee6b9a1d5d732aa497ff8be0d45b5bd82 ] + +WARNING: CPU: 0 PID: 10242 at lib/refcount.c:28 refcount_warn_saturate+0x15b/0x1a0 lib/refcount.c:28 +RIP: 0010:refcount_warn_saturate+0x15b/0x1a0 lib/refcount.c:28 +Call Trace: + __refcount_sub_and_test include/linux/refcount.h:283 [inline] + 
__refcount_dec_and_test include/linux/refcount.h:315 [inline] + refcount_dec_and_test include/linux/refcount.h:333 [inline] + io_put_req fs/io_uring.c:2140 [inline] + io_queue_linked_timeout fs/io_uring.c:6300 [inline] + __io_queue_sqe+0xbef/0xec0 fs/io_uring.c:6354 + io_submit_sqe fs/io_uring.c:6534 [inline] + io_submit_sqes+0x2bbd/0x7c50 fs/io_uring.c:6660 + __do_sys_io_uring_enter fs/io_uring.c:9240 [inline] + __se_sys_io_uring_enter+0x256/0x1d60 fs/io_uring.c:9182 + +io_link_timeout_fn() should put only one reference of the linked timeout +request, however in case of racing with the master request's completion +first io_req_complete() puts one and then io_put_req_deferred() is +called. + +Cc: stable@vger.kernel.org # 5.12+ +Fixes: 9ae1f8dd372e0 ("io_uring: fix inconsistent lock state") +Reported-by: syzbot+a2910119328ce8e7996f@syzkaller.appspotmail.com +Signed-off-by: Pavel Begunkov +Link: https://lore.kernel.org/r/ff51018ff29de5ffa76f09273ef48cb24c720368.1620417627.git.asml.silence@gmail.com +Signed-off-by: Jens Axboe +Signed-off-by: Sasha Levin +--- + fs/io_uring.c | 1 + + 1 file changed, 1 insertion(+) + +diff --git a/fs/io_uring.c b/fs/io_uring.c +index 144056b0cac9..89f4e5e80b9e 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -6272,6 +6272,7 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer) + if (prev) { + io_async_find_and_cancel(ctx, req, prev->user_data, -ETIME); + io_put_req_deferred(prev, 1); ++ io_put_req_deferred(req, 1); + } else { + io_req_complete_post(req, -ETIME, 0); + io_put_req_deferred(req, 1); +-- +2.30.2 + diff --git a/queue-5.12/io_uring-fix-ltout-double-free-on-completion-race.patch b/queue-5.12/io_uring-fix-ltout-double-free-on-completion-race.patch new file mode 100644 index 00000000000..b995927a825 --- /dev/null +++ b/queue-5.12/io_uring-fix-ltout-double-free-on-completion-race.patch @@ -0,0 +1,47 @@ +From d14225f0fd769f0ee7690c83ecfa35f667784566 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 14 May 2021 12:02:50 +0100 +Subject: io_uring: fix ltout double free on completion race + +From: Pavel Begunkov + +[ Upstream commit 447c19f3b5074409c794b350b10306e1da1ef4ba ] + +Always remove linked timeout on io_link_timeout_fn() from the master +request link list, otherwise we may get use-after-free when first +io_link_timeout_fn() puts linked timeout in the fail path, and then +will be found and put on master's free. + +Cc: stable@vger.kernel.org # 5.10+ +Fixes: 90cd7e424969d ("io_uring: track link timeout's master explicitly") +Reported-and-tested-by: syzbot+5a864149dd970b546223@syzkaller.appspotmail.com +Signed-off-by: Pavel Begunkov +Link: https://lore.kernel.org/r/69c46bf6ce37fec4fdcd98f0882e18eb07ce693a.1620990121.git.asml.silence@gmail.com +Signed-off-by: Jens Axboe +Signed-off-by: Sasha Levin +--- + fs/io_uring.c | 7 ++++--- + 1 file changed, 4 insertions(+), 3 deletions(-) + +diff --git a/fs/io_uring.c b/fs/io_uring.c +index dd8b3fac877c..359d1abb089c 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -6289,10 +6289,11 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer) + * We don't expect the list to be empty, that will only happen if we + * race with the completion of the linked work. 
*/
+- if (prev && req_ref_inc_not_zero(prev))
++ if (prev) {
+ io_remove_next_linked(prev);
+- else
+- prev = NULL;
++ if (!req_ref_inc_not_zero(prev))
++ prev = NULL;
++ }
+ spin_unlock_irqrestore(&ctx->completion_lock, flags);
+
+ if (prev) {
+--
+2.30.2
+
diff --git a/queue-5.12/io_uring-use-better-types-for-cflags.patch b/queue-5.12/io_uring-use-better-types-for-cflags.patch
new file mode 100644
index 00000000000..3cfd3060263
--- /dev/null
+++ b/queue-5.12/io_uring-use-better-types-for-cflags.patch
@@ -0,0 +1,46 @@
+From 97cf0064c528735e335c4edf4e69c4a542255607 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Sun, 28 Feb 2021 22:35:15 +0000
+Subject: io_uring: use better types for cflags
+
+From: Pavel Begunkov
+
+[ Upstream commit 8c3f9cd1603d0e4af6c50ebc6d974ab7bdd03cf4 ]
+
+__io_cqring_fill_event() takes cflags as long to squeeze it into u32 in
+a CQE, while all users pass int or unsigned. Replace it with unsigned
+int and store it as u32 in struct io_completion to match CQE.
+
+Signed-off-by: Pavel Begunkov
+Signed-off-by: Jens Axboe
+Signed-off-by: Sasha Levin
+---
+ fs/io_uring.c | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/fs/io_uring.c b/fs/io_uring.c
+index 89f4e5e80b9e..5cc76fa9d4a1 100644
+--- a/fs/io_uring.c
++++ b/fs/io_uring.c
+@@ -653,7 +653,7 @@ struct io_unlink {
+ struct io_completion {
+ struct file *file;
+ struct list_head list;
+- int cflags;
++ u32 cflags;
+ };
+
+ struct io_async_connect {
+@@ -1476,7 +1476,8 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force,
+ return ret;
+ }
+
+-static void __io_cqring_fill_event(struct io_kiocb *req, long res, long cflags)
++static void __io_cqring_fill_event(struct io_kiocb *req, long res,
++ unsigned int cflags)
+ {
+ struct io_ring_ctx *ctx = req->ctx;
+ struct io_uring_cqe *cqe;
+--
+2.30.2
+
diff --git a/queue-5.12/io_uring-wrap-io_kiocb-reference-count-manipulation-.patch b/queue-5.12/io_uring-wrap-io_kiocb-reference-count-manipulation-.patch
new file mode 100644
index 00000000000..59dffba83c7
--- /dev/null
+++ b/queue-5.12/io_uring-wrap-io_kiocb-reference-count-manipulation-.patch
@@ -0,0 +1,191 @@
+From d1f49ca3b4dbda15e114a8946bf809b8020ee0bc Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Wed, 24 Feb 2021 13:28:27 -0700
+Subject: io_uring: wrap io_kiocb reference count manipulation in helpers
+
+From: Jens Axboe
+
+[ Upstream commit de9b4ccad750f216616730b74ed2be16c80892a4 ]
+
+No functional changes in this patch, just in preparation for handling the
+references a bit more efficiently.
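+
+As an illustration of the payoff, and purely hypothetical: with every
+manipulation funneled through these helpers, a later conversion of
+req->refs away from refcount_t only has to touch the helpers rather
+than every call site, e.g.:
+
+    /* Hypothetical follow-up, assuming req->refs were switched to
+     * atomic_t; none of the req_ref_*() callers would change. */
+    static inline bool req_ref_inc_not_zero(struct io_kiocb *req)
+    {
+        return atomic_inc_not_zero(&req->refs);
+    }
+
+    static inline bool req_ref_put_and_test(struct io_kiocb *req)
+    {
+        return atomic_dec_and_test(&req->refs);
+    }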
+ +Signed-off-by: Jens Axboe +Signed-off-by: Sasha Levin +--- + fs/io_uring.c | 55 +++++++++++++++++++++++++++++++++++++-------------- + 1 file changed, 40 insertions(+), 15 deletions(-) + +diff --git a/fs/io_uring.c b/fs/io_uring.c +index 5cc76fa9d4a1..dd8b3fac877c 100644 +--- a/fs/io_uring.c ++++ b/fs/io_uring.c +@@ -1476,6 +1476,31 @@ static bool io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool force, + return ret; + } + ++static inline bool req_ref_inc_not_zero(struct io_kiocb *req) ++{ ++ return refcount_inc_not_zero(&req->refs); ++} ++ ++static inline bool req_ref_sub_and_test(struct io_kiocb *req, int refs) ++{ ++ return refcount_sub_and_test(refs, &req->refs); ++} ++ ++static inline bool req_ref_put_and_test(struct io_kiocb *req) ++{ ++ return refcount_dec_and_test(&req->refs); ++} ++ ++static inline void req_ref_put(struct io_kiocb *req) ++{ ++ refcount_dec(&req->refs); ++} ++ ++static inline void req_ref_get(struct io_kiocb *req) ++{ ++ refcount_inc(&req->refs); ++} ++ + static void __io_cqring_fill_event(struct io_kiocb *req, long res, + unsigned int cflags) + { +@@ -1512,7 +1537,7 @@ static void __io_cqring_fill_event(struct io_kiocb *req, long res, + io_clean_op(req); + req->result = res; + req->compl.cflags = cflags; +- refcount_inc(&req->refs); ++ req_ref_get(req); + list_add_tail(&req->compl.list, &ctx->cq_overflow_list); + } + } +@@ -1534,7 +1559,7 @@ static void io_req_complete_post(struct io_kiocb *req, long res, + * If we're the last reference to this request, add to our locked + * free_list cache. + */ +- if (refcount_dec_and_test(&req->refs)) { ++ if (req_ref_put_and_test(req)) { + struct io_comp_state *cs = &ctx->submit_state.comp; + + if (req->flags & (REQ_F_LINK | REQ_F_HARDLINK)) { +@@ -2113,7 +2138,7 @@ static void io_submit_flush_completions(struct io_comp_state *cs, + req = cs->reqs[i]; + + /* submission and completion refs */ +- if (refcount_sub_and_test(2, &req->refs)) ++ if (req_ref_sub_and_test(req, 2)) + io_req_free_batch(&rb, req, &ctx->submit_state); + } + +@@ -2129,7 +2154,7 @@ static struct io_kiocb *io_put_req_find_next(struct io_kiocb *req) + { + struct io_kiocb *nxt = NULL; + +- if (refcount_dec_and_test(&req->refs)) { ++ if (req_ref_put_and_test(req)) { + nxt = io_req_find_next(req); + __io_free_req(req); + } +@@ -2138,7 +2163,7 @@ static struct io_kiocb *io_put_req_find_next(struct io_kiocb *req) + + static void io_put_req(struct io_kiocb *req) + { +- if (refcount_dec_and_test(&req->refs)) ++ if (req_ref_put_and_test(req)) + io_free_req(req); + } + +@@ -2161,14 +2186,14 @@ static void io_free_req_deferred(struct io_kiocb *req) + + static inline void io_put_req_deferred(struct io_kiocb *req, int refs) + { +- if (refcount_sub_and_test(refs, &req->refs)) ++ if (req_ref_sub_and_test(req, refs)) + io_free_req_deferred(req); + } + + static void io_double_put_req(struct io_kiocb *req) + { + /* drop both submit and complete references */ +- if (refcount_sub_and_test(2, &req->refs)) ++ if (req_ref_sub_and_test(req, 2)) + io_free_req(req); + } + +@@ -2254,7 +2279,7 @@ static void io_iopoll_complete(struct io_ring_ctx *ctx, unsigned int *nr_events, + __io_cqring_fill_event(req, req->result, cflags); + (*nr_events)++; + +- if (refcount_dec_and_test(&req->refs)) ++ if (req_ref_put_and_test(req)) + io_req_free_batch(&rb, req, &ctx->submit_state); + } + +@@ -2496,7 +2521,7 @@ static bool io_rw_reissue(struct io_kiocb *req) + lockdep_assert_held(&req->ctx->uring_lock); + + if (io_resubmit_prep(req)) { +- refcount_inc(&req->refs); ++ req_ref_get(req); + 
io_queue_async_work(req);
+ return true;
+ }
+@@ -3209,7 +3234,7 @@ static int io_async_buf_func(struct wait_queue_entry *wait, unsigned mode,
+ list_del_init(&wait->entry);
+
+ /* submit ref gets dropped, acquire a new one */
+- refcount_inc(&req->refs);
++ req_ref_get(req);
+ io_req_task_queue(req);
+ return 1;
+ }
+@@ -4954,7 +4979,7 @@ static void io_poll_remove_double(struct io_kiocb *req)
+ spin_lock(&head->lock);
+ list_del_init(&poll->wait.entry);
+ if (poll->wait.private)
+- refcount_dec(&req->refs);
++ req_ref_put(req);
+ poll->head = NULL;
+ spin_unlock(&head->lock);
+ }
+@@ -5020,7 +5045,7 @@ static int io_poll_double_wake(struct wait_queue_entry *wait, unsigned mode,
+ poll->wait.func(&poll->wait, mode, sync, key);
+ }
+ }
+- refcount_dec(&req->refs);
++ req_ref_put(req);
+ return 1;
+ }
+@@ -5063,7 +5088,7 @@ static void __io_queue_proc(struct io_poll_iocb *poll, struct io_poll_table *pt,
+ return;
+ }
+ io_init_poll_iocb(poll, poll_one->events, io_poll_double_wake);
+- refcount_inc(&req->refs);
++ req_ref_get(req);
+ poll->wait.private = req;
+ *poll_ptr = poll;
+ }
+@@ -6212,7 +6237,7 @@ static void io_wq_submit_work(struct io_wq_work *work)
+ /* avoid locking problems by failing it from a clean context */
+ if (ret) {
+ /* io-wq is going to take one down */
+- refcount_inc(&req->refs);
++ req_ref_get(req);
+ io_req_task_queue_fail(req, ret);
+ }
+ }
+@@ -6264,7 +6289,7 @@ static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
+ * We don't expect the list to be empty, that will only happen if we
+ * race with the completion of the linked work.
+ */
+- if (prev && refcount_inc_not_zero(&prev->refs))
++ if (prev && req_ref_inc_not_zero(prev))
+ io_remove_next_linked(prev);
+ else
+ prev = NULL;
+--
+2.30.2
+
diff --git a/queue-5.12/libceph-don-t-set-global_id-until-we-get-an-auth-tic.patch b/queue-5.12/libceph-don-t-set-global_id-until-we-get-an-auth-tic.patch
new file mode 100644
index 00000000000..7a32fb677bd
--- /dev/null
+++ b/queue-5.12/libceph-don-t-set-global_id-until-we-get-an-auth-tic.patch
@@ -0,0 +1,106 @@
+From c2e1ce527ba85e212a22a854d261db4804f94598 Mon Sep 17 00:00:00 2001
+From: Sasha Levin
+Date: Mon, 26 Apr 2021 19:11:37 +0200
+Subject: libceph: don't set global_id until we get an auth ticket
+
+From: Ilya Dryomov
+
+[ Upstream commit 61ca49a9105faefa003b37542cebad8722f8ae22 ]
+
+With the introduction of enforcing mode, setting global_id as soon
+as we get it in the first MAuth reply will result in EACCES if the
+connection is reset before we get the second MAuth reply containing
+an auth ticket -- because on retry we would attempt to reclaim that
+global_id with no auth ticket at hand.
+
+Neither ceph_auth_client nor ceph_mon_client depend on global_id
+being set early, so just delay the setting until we get and process
+the second MAuth reply. While at it, complain if the monitor sends
+a zero global_id or changes our global_id as the session is likely
+to fail after that.
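+
+Spelled out as a sequence (the global_id value is illustrative):
+
+    client                            monitor
+    ------                            -------
+    auth request                -->
+                                <--   MAuth reply 1: global_id 123, no ticket
+    (old code records global_id 123 here)
+              *** connection reset ***
+    auth request, global_id 123 -->   reclaim attempted without a ticket
+                                <--   EACCES in enforcing mode
+
+With the setting delayed until the second reply, a retry after the
+reset starts over with a fresh global_id request instead of a doomed
+reclaim.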
+ +Cc: stable@vger.kernel.org # needs backporting for < 5.11 +Signed-off-by: Ilya Dryomov +Reviewed-by: Sage Weil +Signed-off-by: Sasha Levin +--- + net/ceph/auth.c | 36 +++++++++++++++++++++++------------- + 1 file changed, 23 insertions(+), 13 deletions(-) + +diff --git a/net/ceph/auth.c b/net/ceph/auth.c +index eb261aa5fe18..de407e8feb97 100644 +--- a/net/ceph/auth.c ++++ b/net/ceph/auth.c +@@ -36,6 +36,20 @@ static int init_protocol(struct ceph_auth_client *ac, int proto) + } + } + ++static void set_global_id(struct ceph_auth_client *ac, u64 global_id) ++{ ++ dout("%s global_id %llu\n", __func__, global_id); ++ ++ if (!global_id) ++ pr_err("got zero global_id\n"); ++ ++ if (ac->global_id && global_id != ac->global_id) ++ pr_err("global_id changed from %llu to %llu\n", ac->global_id, ++ global_id); ++ ++ ac->global_id = global_id; ++} ++ + /* + * setup, teardown. + */ +@@ -222,11 +236,6 @@ int ceph_handle_auth_reply(struct ceph_auth_client *ac, + + payload_end = payload + payload_len; + +- if (global_id && ac->global_id != global_id) { +- dout(" set global_id %lld -> %lld\n", ac->global_id, global_id); +- ac->global_id = global_id; +- } +- + if (ac->negotiating) { + /* server does not support our protocols? */ + if (!protocol && result < 0) { +@@ -253,11 +262,16 @@ int ceph_handle_auth_reply(struct ceph_auth_client *ac, + + ret = ac->ops->handle_reply(ac, result, payload, payload_end, + NULL, NULL, NULL, NULL); +- if (ret == -EAGAIN) ++ if (ret == -EAGAIN) { + ret = build_request(ac, true, reply_buf, reply_len); +- else if (ret) ++ goto out; ++ } else if (ret) { + pr_err("auth protocol '%s' mauth authentication failed: %d\n", + ceph_auth_proto_name(ac->protocol), result); ++ goto out; ++ } ++ ++ set_global_id(ac, global_id); + + out: + mutex_unlock(&ac->mutex); +@@ -484,15 +498,11 @@ int ceph_auth_handle_reply_done(struct ceph_auth_client *ac, + int ret; + + mutex_lock(&ac->mutex); +- if (global_id && ac->global_id != global_id) { +- dout("%s global_id %llu -> %llu\n", __func__, ac->global_id, +- global_id); +- ac->global_id = global_id; +- } +- + ret = ac->ops->handle_reply(ac, 0, reply, reply + reply_len, + session_key, session_key_len, + con_secret, con_secret_len); ++ if (!ret) ++ set_global_id(ac, global_id); + mutex_unlock(&ac->mutex); + return ret; + } +-- +2.30.2 + diff --git a/queue-5.12/riscv-vdso-fix-and-clean-up-makefile.patch b/queue-5.12/riscv-vdso-fix-and-clean-up-makefile.patch new file mode 100644 index 00000000000..9277d43256e --- /dev/null +++ b/queue-5.12/riscv-vdso-fix-and-clean-up-makefile.patch @@ -0,0 +1,74 @@ +From 0baf6645c3930de0c89ecc2a843a8666a35bd58a Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 2 Apr 2021 21:29:08 +0800 +Subject: riscv: vdso: fix and clean-up Makefile + +From: Jisheng Zhang + +[ Upstream commit 772d7891e8b3b0baae7bb88a294d61fd07ba6d15 ] + +Running "make" on an already compiled kernel tree will rebuild the +kernel even without any modifications: + + CALL linux/scripts/checksyscalls.sh + CALL linux/scripts/atomic/check-atomics.sh + CHK include/generated/compile.h + SO2S arch/riscv/kernel/vdso/vdso-syms.S + AS arch/riscv/kernel/vdso/vdso-syms.o + AR arch/riscv/kernel/vdso/built-in.a + AR arch/riscv/kernel/built-in.a + AR arch/riscv/built-in.a + GEN .version + CHK include/generated/compile.h + UPD include/generated/compile.h + CC init/version.o + AR init/built-in.a + LD vmlinux.o + +The reason is "Any target that utilizes if_changed must be listed in +$(targets), otherwise the command line check will fail, and the target +will always 
be built" as explained by Documentation/kbuild/makefiles.rst + +Fix this build bug by adding vdso-syms.S to $(targets) + +At the same time, there are two trivial clean up modifications: + +- the vdso-dummy.o is not needed any more after so remove it. + +- vdso.lds is a generated file, so it should be prefixed with + $(obj)/ instead of $(src)/ + +Fixes: c2c81bb2f691 ("RISC-V: Fix the VDSO symbol generaton for binutils-2.35+") +Cc: stable@vger.kernel.org +Signed-off-by: Jisheng Zhang +Signed-off-by: Palmer Dabbelt +Signed-off-by: Sasha Levin +--- + arch/riscv/kernel/vdso/Makefile | 4 ++-- + 1 file changed, 2 insertions(+), 2 deletions(-) + +diff --git a/arch/riscv/kernel/vdso/Makefile b/arch/riscv/kernel/vdso/Makefile +index ca2b40dfd24b..24d936c147cd 100644 +--- a/arch/riscv/kernel/vdso/Makefile ++++ b/arch/riscv/kernel/vdso/Makefile +@@ -23,7 +23,7 @@ ifneq ($(c-gettimeofday-y),) + endif + + # Build rules +-targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-dummy.o ++targets := $(obj-vdso) vdso.so vdso.so.dbg vdso.lds vdso-syms.S + obj-vdso := $(addprefix $(obj)/, $(obj-vdso)) + + obj-y += vdso.o vdso-syms.o +@@ -41,7 +41,7 @@ KASAN_SANITIZE := n + $(obj)/vdso.o: $(obj)/vdso.so + + # link rule for the .so file, .lds has to be first +-$(obj)/vdso.so.dbg: $(src)/vdso.lds $(obj-vdso) FORCE ++$(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE + $(call if_changed,vdsold) + LDFLAGS_vdso.so.dbg = -shared -s -soname=linux-vdso.so.1 \ + --build-id=sha1 --hash-style=both --eh-frame-hdr +-- +2.30.2 + diff --git a/queue-5.12/serial-stm32-fix-threaded-interrupt-handling.patch b/queue-5.12/serial-stm32-fix-threaded-interrupt-handling.patch new file mode 100644 index 00000000000..b048bcd8ac7 --- /dev/null +++ b/queue-5.12/serial-stm32-fix-threaded-interrupt-handling.patch @@ -0,0 +1,103 @@ +From d6a651a988c6635a73f3e7dbc4d7ec0d3c3f7a99 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Fri, 16 Apr 2021 16:05:56 +0200 +Subject: serial: stm32: fix threaded interrupt handling + +From: Johan Hovold + +[ Upstream commit e359b4411c2836cf87c8776682d1b594635570de ] + +When DMA is enabled the receive handler runs in a threaded handler, but +the primary handler up until very recently neither disabled interrupts +in the device or used IRQF_ONESHOT. This would lead to a deadlock if an +interrupt comes in while the threaded receive handler is running under +the port lock. + +Commit ad7676812437 ("serial: stm32: fix a deadlock condition with +wakeup event") claimed to fix an unrelated deadlock, but unfortunately +also disabled interrupts in the threaded handler. While this prevents +the deadlock mentioned in the previous paragraph it also defeats the +purpose of using a threaded handler in the first place. + +Fix this by making the interrupt one-shot and not disabling interrupts +in the threaded handler. + +Note that (receive) DMA must not be used for a console port as the +threaded handler could be interrupted while holding the port lock, +something which could lead to a deadlock in case an interrupt handler +ends up calling printk. 
+ +Fixes: ad7676812437 ("serial: stm32: fix a deadlock condition with wakeup event") +Fixes: 3489187204eb ("serial: stm32: adding dma support") +Cc: stable@vger.kernel.org # 4.9 +Cc: Alexandre TORGUE +Cc: Gerald Baeza +Reviewed-by: Valentin Caron +Signed-off-by: Johan Hovold +Link: https://lore.kernel.org/r/20210416140557.25177-3-johan@kernel.org +Signed-off-by: Greg Kroah-Hartman +Signed-off-by: Sasha Levin +--- + drivers/tty/serial/stm32-usart.c | 22 ++++++++++++---------- + 1 file changed, 12 insertions(+), 10 deletions(-) + +diff --git a/drivers/tty/serial/stm32-usart.c b/drivers/tty/serial/stm32-usart.c +index 99dfa884cbef..68c6535bbf7f 100644 +--- a/drivers/tty/serial/stm32-usart.c ++++ b/drivers/tty/serial/stm32-usart.c +@@ -214,14 +214,11 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded) + struct tty_port *tport = &port->state->port; + struct stm32_port *stm32_port = to_stm32_port(port); + const struct stm32_usart_offsets *ofs = &stm32_port->info->ofs; +- unsigned long c, flags; ++ unsigned long c; + u32 sr; + char flag; + +- if (threaded) +- spin_lock_irqsave(&port->lock, flags); +- else +- spin_lock(&port->lock); ++ spin_lock(&port->lock); + + while (stm32_usart_pending_rx(port, &sr, &stm32_port->last_res, + threaded)) { +@@ -278,10 +275,7 @@ static void stm32_usart_receive_chars(struct uart_port *port, bool threaded) + uart_insert_char(port, sr, USART_SR_ORE, c, flag); + } + +- if (threaded) +- spin_unlock_irqrestore(&port->lock, flags); +- else +- spin_unlock(&port->lock); ++ spin_unlock(&port->lock); + + tty_flip_buffer_push(tport); + } +@@ -654,7 +648,8 @@ static int stm32_usart_startup(struct uart_port *port) + + ret = request_threaded_irq(port->irq, stm32_usart_interrupt, + stm32_usart_threaded_interrupt, +- IRQF_NO_SUSPEND, name, port); ++ IRQF_ONESHOT | IRQF_NO_SUSPEND, ++ name, port); + if (ret) + return ret; + +@@ -1136,6 +1131,13 @@ static int stm32_usart_of_dma_rx_probe(struct stm32_port *stm32port, + struct dma_async_tx_descriptor *desc = NULL; + int ret; + ++ /* ++ * Using DMA and threaded handler for the console could lead to ++ * deadlocks. 
++ */ ++ if (uart_console(port)) ++ return -ENODEV; ++ + /* Request DMA RX channel */ + stm32port->rx_ch = dma_request_slave_channel(dev, "rx"); + if (!stm32port->rx_ch) { +-- +2.30.2 + diff --git a/queue-5.12/series b/queue-5.12/series index 8c10deaa620..75894295a0b 100644 --- a/queue-5.12/series +++ b/queue-5.12/series @@ -76,3 +76,18 @@ arm-dts-imx7d-pico-fix-the-tuning-step-property.patch arm-dts-imx-emcon-avari-fix-nxp-pca8574-gpio-cells.patch bus-ti-sysc-fix-flakey-idling-of-uarts-and-stop-usin.patch arm64-meson-select-common_clk.patch +tipc-add-extack-messages-for-bearer-media-failure.patch +tipc-fix-unique-bearer-names-sanity-check.patch +serial-stm32-fix-threaded-interrupt-handling.patch +riscv-vdso-fix-and-clean-up-makefile.patch +libceph-don-t-set-global_id-until-we-get-an-auth-tic.patch +amdgpu-fix-gem-obj-leak-in-amdgpu_display_user_frame.patch +io_uring-fix-link-timeout-refs.patch +io_uring-use-better-types-for-cflags.patch +io_uring-wrap-io_kiocb-reference-count-manipulation-.patch +io_uring-fix-ltout-double-free-on-completion-race.patch +btrfs-zoned-fix-parallel-compressed-writes.patch +drm-amdgpu-vcn3-add-cancel_delayed_work_sync-before-.patch +drm-amdgpu-jpeg2.5-add-cancel_delayed_work_sync-befo.patch +drm-amdgpu-jpeg3-add-cancel_delayed_work_sync-before.patch +btrfs-fix-compressed-writes-that-cross-stripe-bounda.patch diff --git a/queue-5.12/tipc-add-extack-messages-for-bearer-media-failure.patch b/queue-5.12/tipc-add-extack-messages-for-bearer-media-failure.patch new file mode 100644 index 00000000000..083a7a0cada --- /dev/null +++ b/queue-5.12/tipc-add-extack-messages-for-bearer-media-failure.patch @@ -0,0 +1,211 @@ +From 6c536e6d26e8477e4e4211620cdc5bcf624ab369 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 25 Mar 2021 08:56:41 +0700 +Subject: tipc: add extack messages for bearer/media failure + +From: Hoang Le + +[ Upstream commit b83e214b2e04204f1fc674574362061492c37245 ] + +Add extack error messages for -EINVAL errors when enabling bearer, +getting/setting properties for a media/bearer + +Acked-by: Jon Maloy +Signed-off-by: Hoang Le +Signed-off-by: David S. 
Miller +Signed-off-by: Sasha Levin +--- + net/tipc/bearer.c | 50 +++++++++++++++++++++++++++++++++++++---------- + 1 file changed, 40 insertions(+), 10 deletions(-) + +diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c +index a4389ef08a98..1090f21fcfac 100644 +--- a/net/tipc/bearer.c ++++ b/net/tipc/bearer.c +@@ -243,7 +243,8 @@ void tipc_bearer_remove_dest(struct net *net, u32 bearer_id, u32 dest) + */ + static int tipc_enable_bearer(struct net *net, const char *name, + u32 disc_domain, u32 prio, +- struct nlattr *attr[]) ++ struct nlattr *attr[], ++ struct netlink_ext_ack *extack) + { + struct tipc_net *tn = tipc_net(net); + struct tipc_bearer_names b_names; +@@ -257,17 +258,20 @@ static int tipc_enable_bearer(struct net *net, const char *name, + + if (!bearer_name_validate(name, &b_names)) { + errstr = "illegal name"; ++ NL_SET_ERR_MSG(extack, "Illegal name"); + goto rejected; + } + + if (prio > TIPC_MAX_LINK_PRI && prio != TIPC_MEDIA_LINK_PRI) { + errstr = "illegal priority"; ++ NL_SET_ERR_MSG(extack, "Illegal priority"); + goto rejected; + } + + m = tipc_media_find(b_names.media_name); + if (!m) { + errstr = "media not registered"; ++ NL_SET_ERR_MSG(extack, "Media not registered"); + goto rejected; + } + +@@ -281,6 +285,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + break; + if (!strcmp(name, b->name)) { + errstr = "already enabled"; ++ NL_SET_ERR_MSG(extack, "Already enabled"); + goto rejected; + } + bearer_id++; +@@ -292,6 +297,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + name, prio); + if (prio == TIPC_MIN_LINK_PRI) { + errstr = "cannot adjust to lower"; ++ NL_SET_ERR_MSG(extack, "Cannot adjust to lower"); + goto rejected; + } + pr_warn("Bearer <%s>: trying with adjusted priority\n", name); +@@ -302,6 +308,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + + if (bearer_id >= MAX_BEARERS) { + errstr = "max 3 bearers permitted"; ++ NL_SET_ERR_MSG(extack, "Max 3 bearers permitted"); + goto rejected; + } + +@@ -315,6 +322,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + if (res) { + kfree(b); + errstr = "failed to enable media"; ++ NL_SET_ERR_MSG(extack, "Failed to enable media"); + goto rejected; + } + +@@ -331,6 +339,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + if (res) { + bearer_disable(net, b); + errstr = "failed to create discoverer"; ++ NL_SET_ERR_MSG(extack, "Failed to create discoverer"); + goto rejected; + } + +@@ -909,6 +918,7 @@ int tipc_nl_bearer_get(struct sk_buff *skb, struct genl_info *info) + bearer = tipc_bearer_find(net, name); + if (!bearer) { + err = -EINVAL; ++ NL_SET_ERR_MSG(info->extack, "Bearer not found"); + goto err_out; + } + +@@ -948,8 +958,10 @@ int __tipc_nl_bearer_disable(struct sk_buff *skb, struct genl_info *info) + name = nla_data(attrs[TIPC_NLA_BEARER_NAME]); + + bearer = tipc_bearer_find(net, name); +- if (!bearer) ++ if (!bearer) { ++ NL_SET_ERR_MSG(info->extack, "Bearer not found"); + return -EINVAL; ++ } + + bearer_disable(net, bearer); + +@@ -1007,7 +1019,8 @@ int __tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info) + prio = nla_get_u32(props[TIPC_NLA_PROP_PRIO]); + } + +- return tipc_enable_bearer(net, bearer, domain, prio, attrs); ++ return tipc_enable_bearer(net, bearer, domain, prio, attrs, ++ info->extack); + } + + int tipc_nl_bearer_enable(struct sk_buff *skb, struct genl_info *info) +@@ -1046,6 +1059,7 @@ int tipc_nl_bearer_add(struct sk_buff *skb, struct genl_info *info) + b = tipc_bearer_find(net, 
name); + if (!b) { + rtnl_unlock(); ++ NL_SET_ERR_MSG(info->extack, "Bearer not found"); + return -EINVAL; + } + +@@ -1086,8 +1100,10 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info) + name = nla_data(attrs[TIPC_NLA_BEARER_NAME]); + + b = tipc_bearer_find(net, name); +- if (!b) ++ if (!b) { ++ NL_SET_ERR_MSG(info->extack, "Bearer not found"); + return -EINVAL; ++ } + + if (attrs[TIPC_NLA_BEARER_PROP]) { + struct nlattr *props[TIPC_NLA_PROP_MAX + 1]; +@@ -1106,12 +1122,18 @@ int __tipc_nl_bearer_set(struct sk_buff *skb, struct genl_info *info) + if (props[TIPC_NLA_PROP_WIN]) + b->max_win = nla_get_u32(props[TIPC_NLA_PROP_WIN]); + if (props[TIPC_NLA_PROP_MTU]) { +- if (b->media->type_id != TIPC_MEDIA_TYPE_UDP) ++ if (b->media->type_id != TIPC_MEDIA_TYPE_UDP) { ++ NL_SET_ERR_MSG(info->extack, ++ "MTU property is unsupported"); + return -EINVAL; ++ } + #ifdef CONFIG_TIPC_MEDIA_UDP + if (tipc_udp_mtu_bad(nla_get_u32 +- (props[TIPC_NLA_PROP_MTU]))) ++ (props[TIPC_NLA_PROP_MTU]))) { ++ NL_SET_ERR_MSG(info->extack, ++ "MTU value is out-of-range"); + return -EINVAL; ++ } + b->mtu = nla_get_u32(props[TIPC_NLA_PROP_MTU]); + tipc_node_apply_property(net, b, TIPC_NLA_PROP_MTU); + #endif +@@ -1239,6 +1261,7 @@ int tipc_nl_media_get(struct sk_buff *skb, struct genl_info *info) + rtnl_lock(); + media = tipc_media_find(name); + if (!media) { ++ NL_SET_ERR_MSG(info->extack, "Media not found"); + err = -EINVAL; + goto err_out; + } +@@ -1275,9 +1298,10 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info) + name = nla_data(attrs[TIPC_NLA_MEDIA_NAME]); + + m = tipc_media_find(name); +- if (!m) ++ if (!m) { ++ NL_SET_ERR_MSG(info->extack, "Media not found"); + return -EINVAL; +- ++ } + if (attrs[TIPC_NLA_MEDIA_PROP]) { + struct nlattr *props[TIPC_NLA_PROP_MAX + 1]; + +@@ -1293,12 +1317,18 @@ int __tipc_nl_media_set(struct sk_buff *skb, struct genl_info *info) + if (props[TIPC_NLA_PROP_WIN]) + m->max_win = nla_get_u32(props[TIPC_NLA_PROP_WIN]); + if (props[TIPC_NLA_PROP_MTU]) { +- if (m->type_id != TIPC_MEDIA_TYPE_UDP) ++ if (m->type_id != TIPC_MEDIA_TYPE_UDP) { ++ NL_SET_ERR_MSG(info->extack, ++ "MTU property is unsupported"); + return -EINVAL; ++ } + #ifdef CONFIG_TIPC_MEDIA_UDP + if (tipc_udp_mtu_bad(nla_get_u32 +- (props[TIPC_NLA_PROP_MTU]))) ++ (props[TIPC_NLA_PROP_MTU]))) { ++ NL_SET_ERR_MSG(info->extack, ++ "MTU value is out-of-range"); + return -EINVAL; ++ } + m->mtu = nla_get_u32(props[TIPC_NLA_PROP_MTU]); + #endif + } +-- +2.30.2 + diff --git a/queue-5.12/tipc-fix-unique-bearer-names-sanity-check.patch b/queue-5.12/tipc-fix-unique-bearer-names-sanity-check.patch new file mode 100644 index 00000000000..11a1815ffe5 --- /dev/null +++ b/queue-5.12/tipc-fix-unique-bearer-names-sanity-check.patch @@ -0,0 +1,99 @@ +From a2976701b49292d9d2bfa7763787c59be45d1447 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 1 Apr 2021 09:30:48 +0700 +Subject: tipc: fix unique bearer names sanity check + +From: Hoang Le + +[ Upstream commit f20a46c3044c3f75232b3d0e2d09af9b25efaf45 ] + +When enabling a bearer by name, we don't sanity check its name with +higher slot in bearer list. This may have the effect that the name +of an already enabled bearer bypasses the check. + +To fix the above issue, we just perform an extra checking with all +existing bearers. + +Fixes: cb30a63384bc9 ("tipc: refactor function tipc_enable_bearer()") +Cc: stable@vger.kernel.org +Acked-by: Jon Maloy +Signed-off-by: Hoang Le +Signed-off-by: David S. 
Miller +Signed-off-by: Sasha Levin +--- + net/tipc/bearer.c | 46 +++++++++++++++++++++++++++------------------- + 1 file changed, 27 insertions(+), 19 deletions(-) + +diff --git a/net/tipc/bearer.c b/net/tipc/bearer.c +index 1090f21fcfac..0c8882052ba0 100644 +--- a/net/tipc/bearer.c ++++ b/net/tipc/bearer.c +@@ -255,6 +255,7 @@ static int tipc_enable_bearer(struct net *net, const char *name, + int bearer_id = 0; + int res = -EINVAL; + char *errstr = ""; ++ u32 i; + + if (!bearer_name_validate(name, &b_names)) { + errstr = "illegal name"; +@@ -279,31 +280,38 @@ static int tipc_enable_bearer(struct net *net, const char *name, + prio = m->priority; + + /* Check new bearer vs existing ones and find free bearer id if any */ +- while (bearer_id < MAX_BEARERS) { +- b = rtnl_dereference(tn->bearer_list[bearer_id]); +- if (!b) +- break; ++ bearer_id = MAX_BEARERS; ++ i = MAX_BEARERS; ++ while (i-- != 0) { ++ b = rtnl_dereference(tn->bearer_list[i]); ++ if (!b) { ++ bearer_id = i; ++ continue; ++ } + if (!strcmp(name, b->name)) { + errstr = "already enabled"; + NL_SET_ERR_MSG(extack, "Already enabled"); + goto rejected; + } +- bearer_id++; +- if (b->priority != prio) +- continue; +- if (++with_this_prio <= 2) +- continue; +- pr_warn("Bearer <%s>: already 2 bearers with priority %u\n", +- name, prio); +- if (prio == TIPC_MIN_LINK_PRI) { +- errstr = "cannot adjust to lower"; +- NL_SET_ERR_MSG(extack, "Cannot adjust to lower"); +- goto rejected; ++ ++ if (b->priority == prio && ++ (++with_this_prio > 2)) { ++ pr_warn("Bearer <%s>: already 2 bearers with priority %u\n", ++ name, prio); ++ ++ if (prio == TIPC_MIN_LINK_PRI) { ++ errstr = "cannot adjust to lower"; ++ NL_SET_ERR_MSG(extack, "Cannot adjust to lower"); ++ goto rejected; ++ } ++ ++ pr_warn("Bearer <%s>: trying with adjusted priority\n", ++ name); ++ prio--; ++ bearer_id = MAX_BEARERS; ++ i = MAX_BEARERS; ++ with_this_prio = 1; + } +- pr_warn("Bearer <%s>: trying with adjusted priority\n", name); +- prio--; +- bearer_id = 0; +- with_this_prio = 1; + } + + if (bearer_id >= MAX_BEARERS) { +-- +2.30.2 +
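
The tipc fix above generalizes: when a single pass over a table must
both find a free slot and detect name collisions, the collision check
has to visit every occupied slot rather than stop at the first hole.
Condensed from the hunk above (error handling and the priority
adjustment trimmed):

    bearer_id = MAX_BEARERS;
    i = MAX_BEARERS;
    while (i-- != 0) {
        b = rtnl_dereference(tn->bearer_list[i]);
        if (!b) {
            bearer_id = i;  /* remember the lowest free slot */
            continue;       /* but keep scanning for duplicates */
        }
        if (!strcmp(name, b->name))
            goto rejected;  /* duplicate hiding past a hole */
    }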