From: Greg Kroah-Hartman
Date: Sat, 30 Mar 2024 09:52:39 +0000 (+0100)
Subject: 6.7-stable patches
X-Git-Tag: v6.7.12~110
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=a8d2c379eea72c0965d18ea50ae6f218c7ea3656;p=thirdparty%2Fkernel%2Fstable-queue.git

6.7-stable patches

added patches:
      block-do-not-force-full-zone-append-completion-in-req_bio_endio.patch
      btrfs-zoned-don-t-skip-block-groups-with-100-zone-unusable.patch
      btrfs-zoned-use-zone-aware-sb-location-for-scrub.patch
      drm-amd-display-remove-mpc-rate-control-logic-from-dcn30-and-above.patch
      drm-amd-display-set-dcn351-bb-and-ip-the-same-as-dcn35.patch
      drm-amdgpu-fix-deadlock-while-reading-mqd-from-debugfs.patch
      drm-amdkfd-fix-tlb-flush-after-unmap-for-gfx9.4.2.patch
      drm-vmwgfx-create-debugfs-ttm_resource_manager-entry-only-if-needed.patch
      exec-fix-nommu-linux_binprm-exec-in-transfer_args_to_stack.patch
      gpio-cdev-sanitize-the-label-before-requesting-the-interrupt.patch
      hexagon-vmlinux.lds.s-handle-attributes-section.patch
      mm-cachestat-fix-two-shmem-bugs.patch
      mmc-core-avoid-negative-index-with-array-access.patch
      mmc-core-initialize-mmc_blk_ioc_data.patch
      mmc-sdhci-omap-re-tuning-is-needed-after-a-pm-transition-to-support-emmc-hs200-mode.patch
      net-ll_temac-platform_get_resource-replaced-by-wrong-function.patch
      nouveau-dmem-handle-kcalloc-allocation-failure.patch
      revert-drm-amd-display-fix-sending-vsc-colorimetry-packets-for-dp-edp-displays-without-psr.patch
      sdhci-of-dwcmshc-disable-pm-runtime-in-dwcmshc_remove.patch
      selftests-mm-fix-arm-related-issue-with-fork-after-pthread_create.patch
      selftests-mm-sigbus-wp-test-requires-uffd_feature_wp_hugetlbfs_shmem.patch
      thermal-devfreq_cooling-fix-perf-state-when-calculate-dfc-res_util.patch
      wifi-cfg80211-add-a-flag-to-disable-wireless-extensions.patch
      wifi-iwlwifi-fw-don-t-always-use-fw-dump-trig.patch
      wifi-iwlwifi-mvm-disable-mlo-for-the-time-being.patch
      wifi-iwlwifi-mvm-handle-debugfs-names-more-carefully.patch
      wifi-mac80211-check-clear-fast-rx-for-non-4addr-sta-vlan-changes.patch
---
diff --git a/queue-6.7/block-do-not-force-full-zone-append-completion-in-req_bio_endio.patch b/queue-6.7/block-do-not-force-full-zone-append-completion-in-req_bio_endio.patch
new file mode 100644
index 00000000000..ca217f76e1b
--- /dev/null
+++ b/queue-6.7/block-do-not-force-full-zone-append-completion-in-req_bio_endio.patch
@@ -0,0 +1,53 @@
+From 55251fbdf0146c252ceff146a1bb145546f3e034 Mon Sep 17 00:00:00 2001
+From: Damien Le Moal
+Date: Thu, 28 Mar 2024 09:43:40 +0900
+Subject: block: Do not force full zone append completion in req_bio_endio()
+
+From: Damien Le Moal
+
+commit 55251fbdf0146c252ceff146a1bb145546f3e034 upstream.
+
+This reverts commit 748dc0b65ec2b4b7b3dbd7befcc4a54fdcac7988.
+
+Partial zone append completions cannot be supported as there is no
+guarantees that the fragmented data will be written sequentially in the
+same manner as with a full command. Commit 748dc0b65ec2 ("block: fix
+partial zone append completion handling in req_bio_endio()") changed
+req_bio_endio() to always advance a partially failed BIO by its full
+length, but this can lead to incorrect accounting. So revert this
+change and let low level device drivers handle this case by always
+failing completely zone append operations. With this revert, users will
+still see an IO error for a partially completed zone append BIO.
+
+Fixes: 748dc0b65ec2 ("block: fix partial zone append completion handling in req_bio_endio()")
+Cc: stable@vger.kernel.org
+Signed-off-by: Damien Le Moal
+Reviewed-by: Christoph Hellwig
+Link: https://lore.kernel.org/r/20240328004409.594888-2-dlemoal@kernel.org
+Signed-off-by: Jens Axboe
+Signed-off-by: Greg Kroah-Hartman
+---
+ block/blk-mq.c | 9 ++-------
+ 1 file changed, 2 insertions(+), 7 deletions(-)
+
+--- a/block/blk-mq.c
++++ b/block/blk-mq.c
+@@ -772,16 +772,11 @@ static void req_bio_endio(struct request
+ 		/*
+ 		 * Partial zone append completions cannot be supported as the
+ 		 * BIO fragments may end up not being written sequentially.
+-		 * For such case, force the completed nbytes to be equal to
+-		 * the BIO size so that bio_advance() sets the BIO remaining
+-		 * size to 0 and we end up calling bio_endio() before returning.
+ 		 */
+-		if (bio->bi_iter.bi_size != nbytes) {
++		if (bio->bi_iter.bi_size != nbytes)
+ 			bio->bi_status = BLK_STS_IOERR;
+-			nbytes = bio->bi_iter.bi_size;
+-		} else {
++		else
+ 			bio->bi_iter.bi_sector = rq->__sector;
+-		}
+ 	}
+
+ 	bio_advance(bio, nbytes);
diff --git a/queue-6.7/btrfs-zoned-don-t-skip-block-groups-with-100-zone-unusable.patch b/queue-6.7/btrfs-zoned-don-t-skip-block-groups-with-100-zone-unusable.patch
new file mode 100644
index 00000000000..fdb771037c3
--- /dev/null
+++ b/queue-6.7/btrfs-zoned-don-t-skip-block-groups-with-100-zone-unusable.patch
@@ -0,0 +1,45 @@
+From a8b70c7f8600bc77d03c0b032c0662259b9e615e Mon Sep 17 00:00:00 2001
+From: Johannes Thumshirn
+Date: Wed, 21 Feb 2024 07:35:52 -0800
+Subject: btrfs: zoned: don't skip block groups with 100% zone unusable
+
+From: Johannes Thumshirn
+
+commit a8b70c7f8600bc77d03c0b032c0662259b9e615e upstream.
+
+Commit f4a9f219411f ("btrfs: do not delete unused block group if it may be
+used soon") changed the behaviour of deleting unused block-groups on zoned
+filesystems.
Starting with this commit, we're using +btrfs_space_info_used() to calculate the number of used bytes in a +space_info. But btrfs_space_info_used() also accounts +btrfs_space_info::bytes_zone_unusable as used bytes. + +So if a block group is 100% zone_unusable it is skipped from the deletion +step. + +In order not to skip fully zone_unusable block-groups, also check if the +block-group has bytes left that can be used on a zoned filesystem. + +Fixes: f4a9f219411f ("btrfs: do not delete unused block group if it may be used soon") +CC: stable@vger.kernel.org # 6.1+ +Reviewed-by: Filipe Manana +Signed-off-by: Johannes Thumshirn +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Greg Kroah-Hartman +--- + fs/btrfs/block-group.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -1562,7 +1562,8 @@ void btrfs_delete_unused_bgs(struct btrf + * needing to allocate extents from the block group. + */ + used = btrfs_space_info_used(space_info, true); +- if (space_info->total_bytes - block_group->length < used) { ++ if (space_info->total_bytes - block_group->length < used && ++ block_group->zone_unusable < block_group->length) { + /* + * Add a reference for the list, compensate for the ref + * drop under the "next" label for the diff --git a/queue-6.7/btrfs-zoned-use-zone-aware-sb-location-for-scrub.patch b/queue-6.7/btrfs-zoned-use-zone-aware-sb-location-for-scrub.patch new file mode 100644 index 00000000000..9f84fdd8fb1 --- /dev/null +++ b/queue-6.7/btrfs-zoned-use-zone-aware-sb-location-for-scrub.patch @@ -0,0 +1,52 @@ +From 74098a989b9c3370f768140b7783a7aaec2759b3 Mon Sep 17 00:00:00 2001 +From: Johannes Thumshirn +Date: Mon, 26 Feb 2024 16:39:13 +0100 +Subject: btrfs: zoned: use zone aware sb location for scrub + +From: Johannes Thumshirn + +commit 74098a989b9c3370f768140b7783a7aaec2759b3 upstream. 
+ +At the moment scrub_supers() doesn't grab the super block's location via +the zoned device aware btrfs_sb_log_location() but via btrfs_sb_offset(). + +This leads to checksum errors on 'scrub' as we're not accessing the +correct location of the super block. + +So use btrfs_sb_log_location() for getting the super blocks location on +scrub. + +Reported-by: WA AM +Link: http://lore.kernel.org/linux-btrfs/CANU2Z0EvUzfYxczLgGUiREoMndE9WdQnbaawV5Fv5gNXptPUKw@mail.gmail.com +CC: stable@vger.kernel.org # 5.15+ +Reviewed-by: Qu Wenruo +Reviewed-by: Naohiro Aota +Signed-off-by: Johannes Thumshirn +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Greg Kroah-Hartman +--- + fs/btrfs/scrub.c | 12 +++++++++++- + 1 file changed, 11 insertions(+), 1 deletion(-) + +--- a/fs/btrfs/scrub.c ++++ b/fs/btrfs/scrub.c +@@ -2809,7 +2809,17 @@ static noinline_for_stack int scrub_supe + gen = btrfs_get_last_trans_committed(fs_info); + + for (i = 0; i < BTRFS_SUPER_MIRROR_MAX; i++) { +- bytenr = btrfs_sb_offset(i); ++ ret = btrfs_sb_log_location(scrub_dev, i, 0, &bytenr); ++ if (ret == -ENOENT) ++ break; ++ ++ if (ret) { ++ spin_lock(&sctx->stat_lock); ++ sctx->stat.super_errors++; ++ spin_unlock(&sctx->stat_lock); ++ continue; ++ } ++ + if (bytenr + BTRFS_SUPER_INFO_SIZE > + scrub_dev->commit_total_bytes) + break; diff --git a/queue-6.7/drm-amd-display-remove-mpc-rate-control-logic-from-dcn30-and-above.patch b/queue-6.7/drm-amd-display-remove-mpc-rate-control-logic-from-dcn30-and-above.patch new file mode 100644 index 00000000000..4909da79df1 --- /dev/null +++ b/queue-6.7/drm-amd-display-remove-mpc-rate-control-logic-from-dcn30-and-above.patch @@ -0,0 +1,369 @@ +From edfa93d87fc46913868481fe8ed3fb62c891ffb5 Mon Sep 17 00:00:00 2001 +From: George Shen +Date: Fri, 16 Feb 2024 19:37:03 -0500 +Subject: drm/amd/display: Remove MPC rate control logic from DCN30 and above + +From: George Shen + +commit edfa93d87fc46913868481fe8ed3fb62c891ffb5 upstream. 
+ +[Why] +MPC flow rate control is not needed for DCN30 and above. Current logic +that uses it can result in underflow for certain edge cases (such as +DSC N422 + ODM combine + 422 left edge pixel). + +[How] +Remove MPC flow rate control logic and programming for DCN30 and above. + +Cc: Mario Limonciello +Cc: Alex Deucher +Cc: stable@vger.kernel.org +Reviewed-by: Wenjing Liu +Acked-by: Tom Chung +Signed-off-by: George Shen +Tested-by: Daniel Wheeler +Signed-off-by: Alex Deucher +Signed-off-by: Greg Kroah-Hartman +--- + drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c | 54 ++++++++------ + drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h | 14 +-- + drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c | 5 - + drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c | 41 ---------- + drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c | 41 ---------- + drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c | 41 ---------- + 6 files changed, 41 insertions(+), 155 deletions(-) + +--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.c +@@ -44,6 +44,36 @@ + #define NUM_ELEMENTS(a) (sizeof(a) / sizeof((a)[0])) + + ++void mpc3_mpc_init(struct mpc *mpc) ++{ ++ struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); ++ int opp_id; ++ ++ mpc1_mpc_init(mpc); ++ ++ for (opp_id = 0; opp_id < MAX_OPP; opp_id++) { ++ if (REG(MUX[opp_id])) ++ /* disable mpc out rate and flow control */ ++ REG_UPDATE_2(MUX[opp_id], MPC_OUT_RATE_CONTROL_DISABLE, ++ 1, MPC_OUT_FLOW_CONTROL_COUNT, 0); ++ } ++} ++ ++void mpc3_mpc_init_single_inst(struct mpc *mpc, unsigned int mpcc_id) ++{ ++ struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); ++ ++ mpc1_mpc_init_single_inst(mpc, mpcc_id); ++ ++ /* assuming mpc out mux is connected to opp with the same index at this ++ * point in time (e.g. 
transitioning from vbios to driver) ++ */ ++ if (mpcc_id < MAX_OPP && REG(MUX[mpcc_id])) ++ /* disable mpc out rate and flow control */ ++ REG_UPDATE_2(MUX[mpcc_id], MPC_OUT_RATE_CONTROL_DISABLE, ++ 1, MPC_OUT_FLOW_CONTROL_COUNT, 0); ++} ++ + bool mpc3_is_dwb_idle( + struct mpc *mpc, + int dwb_id) +@@ -80,25 +110,6 @@ void mpc3_disable_dwb_mux( + MPC_DWB0_MUX, 0xf); + } + +-void mpc3_set_out_rate_control( +- struct mpc *mpc, +- int opp_id, +- bool enable, +- bool rate_2x_mode, +- struct mpc_dwb_flow_control *flow_control) +-{ +- struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); +- +- REG_UPDATE_2(MUX[opp_id], +- MPC_OUT_RATE_CONTROL_DISABLE, !enable, +- MPC_OUT_RATE_CONTROL, rate_2x_mode); +- +- if (flow_control) +- REG_UPDATE_2(MUX[opp_id], +- MPC_OUT_FLOW_CONTROL_MODE, flow_control->flow_ctrl_mode, +- MPC_OUT_FLOW_CONTROL_COUNT, flow_control->flow_ctrl_cnt1); +-} +- + enum dc_lut_mode mpc3_get_ogam_current(struct mpc *mpc, int mpcc_id) + { + /*Contrary to DCN2 and DCN1 wherein a single status register field holds this info; +@@ -1386,8 +1397,8 @@ static const struct mpc_funcs dcn30_mpc_ + .read_mpcc_state = mpc1_read_mpcc_state, + .insert_plane = mpc1_insert_plane, + .remove_mpcc = mpc1_remove_mpcc, +- .mpc_init = mpc1_mpc_init, +- .mpc_init_single_inst = mpc1_mpc_init_single_inst, ++ .mpc_init = mpc3_mpc_init, ++ .mpc_init_single_inst = mpc3_mpc_init_single_inst, + .update_blending = mpc2_update_blending, + .cursor_lock = mpc1_cursor_lock, + .get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp, +@@ -1404,7 +1415,6 @@ static const struct mpc_funcs dcn30_mpc_ + .set_dwb_mux = mpc3_set_dwb_mux, + .disable_dwb_mux = mpc3_disable_dwb_mux, + .is_dwb_idle = mpc3_is_dwb_idle, +- .set_out_rate_control = mpc3_set_out_rate_control, + .set_gamut_remap = mpc3_set_gamut_remap, + .program_shaper = mpc3_program_shaper, + .acquire_rmu = mpcc3_acquire_rmu, +--- a/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h ++++ b/drivers/gpu/drm/amd/display/dc/dcn30/dcn30_mpc.h +@@ -1007,6 +1007,13 @@ void 
dcn30_mpc_construct(struct dcn30_mp + int num_mpcc, + int num_rmu); + ++void mpc3_mpc_init( ++ struct mpc *mpc); ++ ++void mpc3_mpc_init_single_inst( ++ struct mpc *mpc, ++ unsigned int mpcc_id); ++ + bool mpc3_program_shaper( + struct mpc *mpc, + const struct pwl_params *params, +@@ -1074,13 +1081,6 @@ bool mpc3_is_dwb_idle( + struct mpc *mpc, + int dwb_id); + +-void mpc3_set_out_rate_control( +- struct mpc *mpc, +- int opp_id, +- bool enable, +- bool rate_2x_mode, +- struct mpc_dwb_flow_control *flow_control); +- + void mpc3_power_on_ogam_lut( + struct mpc *mpc, int mpcc_id, + bool power_on); +--- a/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c ++++ b/drivers/gpu/drm/amd/display/dc/dcn32/dcn32_mpc.c +@@ -47,7 +47,7 @@ void mpc32_mpc_init(struct mpc *mpc) + struct dcn30_mpc *mpc30 = TO_DCN30_MPC(mpc); + int mpcc_id; + +- mpc1_mpc_init(mpc); ++ mpc3_mpc_init(mpc); + + if (mpc->ctx->dc->debug.enable_mem_low_power.bits.mpc) { + if (mpc30->mpc_mask->MPCC_MCM_SHAPER_MEM_LOW_PWR_MODE && mpc30->mpc_mask->MPCC_MCM_3DLUT_MEM_LOW_PWR_MODE) { +@@ -990,7 +990,7 @@ static const struct mpc_funcs dcn32_mpc_ + .insert_plane = mpc1_insert_plane, + .remove_mpcc = mpc1_remove_mpcc, + .mpc_init = mpc32_mpc_init, +- .mpc_init_single_inst = mpc1_mpc_init_single_inst, ++ .mpc_init_single_inst = mpc3_mpc_init_single_inst, + .update_blending = mpc2_update_blending, + .cursor_lock = mpc1_cursor_lock, + .get_mpcc_for_dpp = mpc1_get_mpcc_for_dpp, +@@ -1007,7 +1007,6 @@ static const struct mpc_funcs dcn32_mpc_ + .set_dwb_mux = mpc3_set_dwb_mux, + .disable_dwb_mux = mpc3_disable_dwb_mux, + .is_dwb_idle = mpc3_is_dwb_idle, +- .set_out_rate_control = mpc3_set_out_rate_control, + .set_gamut_remap = mpc3_set_gamut_remap, + .program_shaper = mpc32_program_shaper, + .program_3dlut = mpc32_program_3dlut, +--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn314/dcn314_hwseq.c +@@ -69,29 +69,6 @@ + #define FN(reg_name, field_name) \ + 
hws->shifts->field_name, hws->masks->field_name + +-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, +- int opp_cnt) +-{ +- bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); +- int flow_ctrl_cnt; +- +- if (opp_cnt >= 2) +- hblank_halved = true; +- +- flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - +- stream->timing.h_border_left - +- stream->timing.h_border_right; +- +- if (hblank_halved) +- flow_ctrl_cnt /= 2; +- +- /* ODM combine 4:1 case */ +- if (opp_cnt == 4) +- flow_ctrl_cnt /= 2; +- +- return flow_ctrl_cnt; +-} +- + static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) + { + struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; +@@ -183,10 +160,6 @@ void dcn314_update_odm(struct dc *dc, st + struct pipe_ctx *odm_pipe; + int opp_cnt = 0; + int opp_inst[MAX_PIPES] = {0}; +- bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); +- struct mpc_dwb_flow_control flow_control; +- struct mpc *mpc = dc->res_pool->mpc; +- int i; + + opp_cnt = get_odm_config(pipe_ctx, opp_inst); + +@@ -199,20 +172,6 @@ void dcn314_update_odm(struct dc *dc, st + pipe_ctx->stream_res.tg->funcs->set_odm_bypass( + pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); + +- rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; +- flow_control.flow_ctrl_mode = 0; +- flow_control.flow_ctrl_cnt0 = 0x80; +- flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); +- if (mpc->funcs->set_out_rate_control) { +- for (i = 0; i < opp_cnt; ++i) { +- mpc->funcs->set_out_rate_control( +- mpc, opp_inst[i], +- true, +- rate_control_2x_pclk, +- &flow_control); +- } +- } +- + for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) { + odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control( + odm_pipe->stream_res.opp, +--- 
a/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn32/dcn32_hwseq.c +@@ -969,29 +969,6 @@ void dcn32_init_hw(struct dc *dc) + } + } + +-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, +- int opp_cnt) +-{ +- bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); +- int flow_ctrl_cnt; +- +- if (opp_cnt >= 2) +- hblank_halved = true; +- +- flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - +- stream->timing.h_border_left - +- stream->timing.h_border_right; +- +- if (hblank_halved) +- flow_ctrl_cnt /= 2; +- +- /* ODM combine 4:1 case */ +- if (opp_cnt == 4) +- flow_ctrl_cnt /= 2; +- +- return flow_ctrl_cnt; +-} +- + static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) + { + struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; +@@ -1106,10 +1083,6 @@ void dcn32_update_odm(struct dc *dc, str + struct pipe_ctx *odm_pipe; + int opp_cnt = 0; + int opp_inst[MAX_PIPES] = {0}; +- bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); +- struct mpc_dwb_flow_control flow_control; +- struct mpc *mpc = dc->res_pool->mpc; +- int i; + + opp_cnt = get_odm_config(pipe_ctx, opp_inst); + +@@ -1122,20 +1095,6 @@ void dcn32_update_odm(struct dc *dc, str + pipe_ctx->stream_res.tg->funcs->set_odm_bypass( + pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); + +- rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; +- flow_control.flow_ctrl_mode = 0; +- flow_control.flow_ctrl_cnt0 = 0x80; +- flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); +- if (mpc->funcs->set_out_rate_control) { +- for (i = 0; i < opp_cnt; ++i) { +- mpc->funcs->set_out_rate_control( +- mpc, opp_inst[i], +- true, +- rate_control_2x_pclk, +- &flow_control); +- } +- } +- + for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) 
{ + odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control( + odm_pipe->stream_res.opp, +--- a/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c ++++ b/drivers/gpu/drm/amd/display/dc/hwss/dcn35/dcn35_hwseq.c +@@ -338,29 +338,6 @@ void dcn35_init_hw(struct dc *dc) + } + } + +-static int calc_mpc_flow_ctrl_cnt(const struct dc_stream_state *stream, +- int opp_cnt) +-{ +- bool hblank_halved = optc2_is_two_pixels_per_containter(&stream->timing); +- int flow_ctrl_cnt; +- +- if (opp_cnt >= 2) +- hblank_halved = true; +- +- flow_ctrl_cnt = stream->timing.h_total - stream->timing.h_addressable - +- stream->timing.h_border_left - +- stream->timing.h_border_right; +- +- if (hblank_halved) +- flow_ctrl_cnt /= 2; +- +- /* ODM combine 4:1 case */ +- if (opp_cnt == 4) +- flow_ctrl_cnt /= 2; +- +- return flow_ctrl_cnt; +-} +- + static void update_dsc_on_stream(struct pipe_ctx *pipe_ctx, bool enable) + { + struct display_stream_compressor *dsc = pipe_ctx->stream_res.dsc; +@@ -454,10 +431,6 @@ void dcn35_update_odm(struct dc *dc, str + struct pipe_ctx *odm_pipe; + int opp_cnt = 0; + int opp_inst[MAX_PIPES] = {0}; +- bool rate_control_2x_pclk = (pipe_ctx->stream->timing.flags.INTERLACE || optc2_is_two_pixels_per_containter(&pipe_ctx->stream->timing)); +- struct mpc_dwb_flow_control flow_control; +- struct mpc *mpc = dc->res_pool->mpc; +- int i; + + opp_cnt = get_odm_config(pipe_ctx, opp_inst); + +@@ -470,20 +443,6 @@ void dcn35_update_odm(struct dc *dc, str + pipe_ctx->stream_res.tg->funcs->set_odm_bypass( + pipe_ctx->stream_res.tg, &pipe_ctx->stream->timing); + +- rate_control_2x_pclk = rate_control_2x_pclk || opp_cnt > 1; +- flow_control.flow_ctrl_mode = 0; +- flow_control.flow_ctrl_cnt0 = 0x80; +- flow_control.flow_ctrl_cnt1 = calc_mpc_flow_ctrl_cnt(pipe_ctx->stream, opp_cnt); +- if (mpc->funcs->set_out_rate_control) { +- for (i = 0; i < opp_cnt; ++i) { +- mpc->funcs->set_out_rate_control( +- mpc, opp_inst[i], +- true, +- rate_control_2x_pclk, +- &flow_control); +- } +- } 
+- + for (odm_pipe = pipe_ctx->next_odm_pipe; odm_pipe; odm_pipe = odm_pipe->next_odm_pipe) { + odm_pipe->stream_res.opp->funcs->opp_pipe_clock_control( + odm_pipe->stream_res.opp, diff --git a/queue-6.7/drm-amd-display-set-dcn351-bb-and-ip-the-same-as-dcn35.patch b/queue-6.7/drm-amd-display-set-dcn351-bb-and-ip-the-same-as-dcn35.patch new file mode 100644 index 00000000000..eae5e5e25f3 --- /dev/null +++ b/queue-6.7/drm-amd-display-set-dcn351-bb-and-ip-the-same-as-dcn35.patch @@ -0,0 +1,46 @@ +From 0ccc2b30f4feadc0b1a282dbcc06e396382e5d74 Mon Sep 17 00:00:00 2001 +From: Xi Liu +Date: Tue, 27 Feb 2024 13:39:00 -0500 +Subject: drm/amd/display: Set DCN351 BB and IP the same as DCN35 + +From: Xi Liu + +commit 0ccc2b30f4feadc0b1a282dbcc06e396382e5d74 upstream. + +[WHY & HOW] +DCN351 and DCN35 should use the same bounding box and IP settings. + +Cc: Mario Limonciello +Cc: Alex Deucher +Cc: stable@vger.kernel.org +Reviewed-by: Jun Lei +Acked-by: Alex Hung +Signed-off-by: Xi Liu +Tested-by: Daniel Wheeler +Signed-off-by: Alex Deucher +Signed-off-by: Greg Kroah-Hartman +--- + drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c | 6 +----- + 1 file changed, 1 insertion(+), 5 deletions(-) + +--- a/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c ++++ b/drivers/gpu/drm/amd/display/dc/dml2/dml2_translation_helper.c +@@ -228,17 +228,13 @@ void dml2_init_socbb_params(struct dml2_ + break; + + case dml_project_dcn35: ++ case dml_project_dcn351: + out->num_chans = 4; + out->round_trip_ping_latency_dcfclk_cycles = 106; + out->smn_latency_us = 2; + out->dispclk_dppclk_vco_speed_mhz = 3600; + break; + +- case dml_project_dcn351: +- out->num_chans = 16; +- out->round_trip_ping_latency_dcfclk_cycles = 1100; +- out->smn_latency_us = 2; +- break; + } + /* ---Overrides if available--- */ + if (dml2->config.bbox_overrides.dram_num_chan) diff --git a/queue-6.7/drm-amdgpu-fix-deadlock-while-reading-mqd-from-debugfs.patch 
b/queue-6.7/drm-amdgpu-fix-deadlock-while-reading-mqd-from-debugfs.patch new file mode 100644 index 00000000000..644e5aa562e --- /dev/null +++ b/queue-6.7/drm-amdgpu-fix-deadlock-while-reading-mqd-from-debugfs.patch @@ -0,0 +1,207 @@ +From 8678b1060ae2b75feb60b87e5b75e17374e3c1c5 Mon Sep 17 00:00:00 2001 +From: Johannes Weiner +Date: Thu, 7 Mar 2024 17:07:37 -0500 +Subject: drm/amdgpu: fix deadlock while reading mqd from debugfs + +From: Johannes Weiner + +commit 8678b1060ae2b75feb60b87e5b75e17374e3c1c5 upstream. + +An errant disk backup on my desktop got into debugfs and triggered the +following deadlock scenario in the amdgpu debugfs files. The machine +also hard-resets immediately after those lines are printed (although I +wasn't able to reproduce that part when reading by hand): + +[ 1318.016074][ T1082] ====================================================== +[ 1318.016607][ T1082] WARNING: possible circular locking dependency detected +[ 1318.017107][ T1082] 6.8.0-rc7-00015-ge0c8221b72c0 #17 Not tainted +[ 1318.017598][ T1082] ------------------------------------------------------ +[ 1318.018096][ T1082] tar/1082 is trying to acquire lock: +[ 1318.018585][ T1082] ffff98c44175d6a0 (&mm->mmap_lock){++++}-{3:3}, at: __might_fault+0x40/0x80 +[ 1318.019084][ T1082] +[ 1318.019084][ T1082] but task is already holding lock: +[ 1318.020052][ T1082] ffff98c4c13f55f8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: amdgpu_debugfs_mqd_read+0x6a/0x250 [amdgpu] +[ 1318.020607][ T1082] +[ 1318.020607][ T1082] which lock already depends on the new lock. 
+[ 1318.020607][ T1082] +[ 1318.022081][ T1082] +[ 1318.022081][ T1082] the existing dependency chain (in reverse order) is: +[ 1318.023083][ T1082] +[ 1318.023083][ T1082] -> #2 (reservation_ww_class_mutex){+.+.}-{3:3}: +[ 1318.024114][ T1082] __ww_mutex_lock.constprop.0+0xe0/0x12f0 +[ 1318.024639][ T1082] ww_mutex_lock+0x32/0x90 +[ 1318.025161][ T1082] dma_resv_lockdep+0x18a/0x330 +[ 1318.025683][ T1082] do_one_initcall+0x6a/0x350 +[ 1318.026210][ T1082] kernel_init_freeable+0x1a3/0x310 +[ 1318.026728][ T1082] kernel_init+0x15/0x1a0 +[ 1318.027242][ T1082] ret_from_fork+0x2c/0x40 +[ 1318.027759][ T1082] ret_from_fork_asm+0x11/0x20 +[ 1318.028281][ T1082] +[ 1318.028281][ T1082] -> #1 (reservation_ww_class_acquire){+.+.}-{0:0}: +[ 1318.029297][ T1082] dma_resv_lockdep+0x16c/0x330 +[ 1318.029790][ T1082] do_one_initcall+0x6a/0x350 +[ 1318.030263][ T1082] kernel_init_freeable+0x1a3/0x310 +[ 1318.030722][ T1082] kernel_init+0x15/0x1a0 +[ 1318.031168][ T1082] ret_from_fork+0x2c/0x40 +[ 1318.031598][ T1082] ret_from_fork_asm+0x11/0x20 +[ 1318.032011][ T1082] +[ 1318.032011][ T1082] -> #0 (&mm->mmap_lock){++++}-{3:3}: +[ 1318.032778][ T1082] __lock_acquire+0x14bf/0x2680 +[ 1318.033141][ T1082] lock_acquire+0xcd/0x2c0 +[ 1318.033487][ T1082] __might_fault+0x58/0x80 +[ 1318.033814][ T1082] amdgpu_debugfs_mqd_read+0x103/0x250 [amdgpu] +[ 1318.034181][ T1082] full_proxy_read+0x55/0x80 +[ 1318.034487][ T1082] vfs_read+0xa7/0x360 +[ 1318.034788][ T1082] ksys_read+0x70/0xf0 +[ 1318.035085][ T1082] do_syscall_64+0x94/0x180 +[ 1318.035375][ T1082] entry_SYSCALL_64_after_hwframe+0x46/0x4e +[ 1318.035664][ T1082] +[ 1318.035664][ T1082] other info that might help us debug this: +[ 1318.035664][ T1082] +[ 1318.036487][ T1082] Chain exists of: +[ 1318.036487][ T1082] &mm->mmap_lock --> reservation_ww_class_acquire --> reservation_ww_class_mutex +[ 1318.036487][ T1082] +[ 1318.037310][ T1082] Possible unsafe locking scenario: +[ 1318.037310][ T1082] +[ 1318.037838][ T1082] CPU0 CPU1 
+[ 1318.038101][ T1082] ---- ---- +[ 1318.038350][ T1082] lock(reservation_ww_class_mutex); +[ 1318.038590][ T1082] lock(reservation_ww_class_acquire); +[ 1318.038839][ T1082] lock(reservation_ww_class_mutex); +[ 1318.039083][ T1082] rlock(&mm->mmap_lock); +[ 1318.039328][ T1082] +[ 1318.039328][ T1082] *** DEADLOCK *** +[ 1318.039328][ T1082] +[ 1318.040029][ T1082] 1 lock held by tar/1082: +[ 1318.040259][ T1082] #0: ffff98c4c13f55f8 (reservation_ww_class_mutex){+.+.}-{3:3}, at: amdgpu_debugfs_mqd_read+0x6a/0x250 [amdgpu] +[ 1318.040560][ T1082] +[ 1318.040560][ T1082] stack backtrace: +[ 1318.041053][ T1082] CPU: 22 PID: 1082 Comm: tar Not tainted 6.8.0-rc7-00015-ge0c8221b72c0 #17 3316c85d50e282c5643b075d1f01a4f6365e39c2 +[ 1318.041329][ T1082] Hardware name: Gigabyte Technology Co., Ltd. B650 AORUS PRO AX/B650 AORUS PRO AX, BIOS F20 12/14/2023 +[ 1318.041614][ T1082] Call Trace: +[ 1318.041895][ T1082] +[ 1318.042175][ T1082] dump_stack_lvl+0x4a/0x80 +[ 1318.042460][ T1082] check_noncircular+0x145/0x160 +[ 1318.042743][ T1082] __lock_acquire+0x14bf/0x2680 +[ 1318.043022][ T1082] lock_acquire+0xcd/0x2c0 +[ 1318.043301][ T1082] ? __might_fault+0x40/0x80 +[ 1318.043580][ T1082] ? __might_fault+0x40/0x80 +[ 1318.043856][ T1082] __might_fault+0x58/0x80 +[ 1318.044131][ T1082] ? __might_fault+0x40/0x80 +[ 1318.044408][ T1082] amdgpu_debugfs_mqd_read+0x103/0x250 [amdgpu 8fe2afaa910cbd7654c8cab23563a94d6caebaab] +[ 1318.044749][ T1082] full_proxy_read+0x55/0x80 +[ 1318.045042][ T1082] vfs_read+0xa7/0x360 +[ 1318.045333][ T1082] ksys_read+0x70/0xf0 +[ 1318.045623][ T1082] do_syscall_64+0x94/0x180 +[ 1318.045913][ T1082] ? do_syscall_64+0xa0/0x180 +[ 1318.046201][ T1082] ? lockdep_hardirqs_on+0x7d/0x100 +[ 1318.046487][ T1082] ? do_syscall_64+0xa0/0x180 +[ 1318.046773][ T1082] ? do_syscall_64+0xa0/0x180 +[ 1318.047057][ T1082] ? do_syscall_64+0xa0/0x180 +[ 1318.047337][ T1082] ? 
do_syscall_64+0xa0/0x180 +[ 1318.047611][ T1082] entry_SYSCALL_64_after_hwframe+0x46/0x4e +[ 1318.047887][ T1082] RIP: 0033:0x7f480b70a39d +[ 1318.048162][ T1082] Code: 91 ba 0d 00 f7 d8 64 89 02 b8 ff ff ff ff eb b2 e8 18 a3 01 00 0f 1f 84 00 00 00 00 00 80 3d a9 3c 0e 00 00 74 17 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 5b c3 66 2e 0f 1f 84 00 00 00 00 00 53 48 83 +[ 1318.048769][ T1082] RSP: 002b:00007ffde77f5c68 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 +[ 1318.049083][ T1082] RAX: ffffffffffffffda RBX: 0000000000000800 RCX: 00007f480b70a39d +[ 1318.049392][ T1082] RDX: 0000000000000800 RSI: 000055c9f2120c00 RDI: 0000000000000008 +[ 1318.049703][ T1082] RBP: 0000000000000800 R08: 000055c9f2120a94 R09: 0000000000000007 +[ 1318.050011][ T1082] R10: 0000000000000000 R11: 0000000000000246 R12: 000055c9f2120c00 +[ 1318.050324][ T1082] R13: 0000000000000008 R14: 0000000000000008 R15: 0000000000000800 +[ 1318.050638][ T1082] + +amdgpu_debugfs_mqd_read() holds a reservation when it calls +put_user(), which may fault and acquire the mmap_sem. This violates +the established locking order. + +Bounce the mqd data through a kernel buffer to get put_user() out of +the illegal section. 
+ +Fixes: 445d85e3c1df ("drm/amdgpu: add debugfs interface for reading MQDs") +Cc: stable@vger.kernel.org # v6.5+ +Reviewed-by: Shashank Sharma +Signed-off-by: Johannes Weiner +Signed-off-by: Alex Deucher +Signed-off-by: Greg Kroah-Hartman +--- + drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c | 46 +++++++++++++++++++------------ + 1 file changed, 29 insertions(+), 17 deletions(-) + +--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c ++++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.c +@@ -524,46 +524,58 @@ static ssize_t amdgpu_debugfs_mqd_read(s + { + struct amdgpu_ring *ring = file_inode(f)->i_private; + volatile u32 *mqd; +- int r; ++ u32 *kbuf; ++ int r, i; + uint32_t value, result; + + if (*pos & 3 || size & 3) + return -EINVAL; + +- result = 0; ++ kbuf = kmalloc(ring->mqd_size, GFP_KERNEL); ++ if (!kbuf) ++ return -ENOMEM; + + r = amdgpu_bo_reserve(ring->mqd_obj, false); + if (unlikely(r != 0)) +- return r; ++ goto err_free; + + r = amdgpu_bo_kmap(ring->mqd_obj, (void **)&mqd); +- if (r) { +- amdgpu_bo_unreserve(ring->mqd_obj); +- return r; +- } ++ if (r) ++ goto err_unreserve; ++ ++ /* ++ * Copy to local buffer to avoid put_user(), which might fault ++ * and acquire mmap_sem, under reservation_ww_class_mutex. 
++ */ ++ for (i = 0; i < ring->mqd_size/sizeof(u32); i++) ++ kbuf[i] = mqd[i]; ++ ++ amdgpu_bo_kunmap(ring->mqd_obj); ++ amdgpu_bo_unreserve(ring->mqd_obj); + ++ result = 0; + while (size) { + if (*pos >= ring->mqd_size) +- goto done; ++ break; + +- value = mqd[*pos/4]; ++ value = kbuf[*pos/4]; + r = put_user(value, (uint32_t *)buf); + if (r) +- goto done; ++ goto err_free; + buf += 4; + result += 4; + size -= 4; + *pos += 4; + } + +-done: +- amdgpu_bo_kunmap(ring->mqd_obj); +- mqd = NULL; +- amdgpu_bo_unreserve(ring->mqd_obj); +- if (r) +- return r; +- ++ kfree(kbuf); + return result; ++ ++err_unreserve: ++ amdgpu_bo_unreserve(ring->mqd_obj); ++err_free: ++ kfree(kbuf); ++ return r; + } + + static const struct file_operations amdgpu_debugfs_mqd_fops = { diff --git a/queue-6.7/drm-amdkfd-fix-tlb-flush-after-unmap-for-gfx9.4.2.patch b/queue-6.7/drm-amdkfd-fix-tlb-flush-after-unmap-for-gfx9.4.2.patch new file mode 100644 index 00000000000..ee7ed371231 --- /dev/null +++ b/queue-6.7/drm-amdkfd-fix-tlb-flush-after-unmap-for-gfx9.4.2.patch @@ -0,0 +1,32 @@ +From 1210e2f1033dc56b666c9f6dfb761a2d3f9f5d6c Mon Sep 17 00:00:00 2001 +From: Eric Huang +Date: Wed, 20 Mar 2024 15:53:47 -0400 +Subject: drm/amdkfd: fix TLB flush after unmap for GFX9.4.2 + +From: Eric Huang + +commit 1210e2f1033dc56b666c9f6dfb761a2d3f9f5d6c upstream. + +TLB flush after unmap accidentially was removed on +gfx9.4.2. It is to add it back. 
+
+Signed-off-by: Eric Huang
+Reviewed-by: Harish Kasiviswanathan
+Signed-off-by: Alex Deucher
+Cc: stable@vger.kernel.org
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/gpu/drm/amd/amdkfd/kfd_priv.h |    2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
++++ b/drivers/gpu/drm/amd/amdkfd/kfd_priv.h
+@@ -1466,7 +1466,7 @@ void kfd_flush_tlb(struct kfd_process_de
+
+ static inline bool kfd_flush_tlb_after_unmap(struct kfd_dev *dev)
+ {
+-	return KFD_GC_VERSION(dev) > IP_VERSION(9, 4, 2) ||
++	return KFD_GC_VERSION(dev) >= IP_VERSION(9, 4, 2) ||
+ 	       (KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 1) && dev->sdma_fw_version >= 18) ||
+ 	       KFD_GC_VERSION(dev) == IP_VERSION(9, 4, 0);
+ }
diff --git a/queue-6.7/drm-vmwgfx-create-debugfs-ttm_resource_manager-entry-only-if-needed.patch b/queue-6.7/drm-vmwgfx-create-debugfs-ttm_resource_manager-entry-only-if-needed.patch
new file mode 100644
index 00000000000..c774ebeba11
--- /dev/null
+++ b/queue-6.7/drm-vmwgfx-create-debugfs-ttm_resource_manager-entry-only-if-needed.patch
@@ -0,0 +1,78 @@
+From 4be9075fec0a639384ed19975634b662bfab938f Mon Sep 17 00:00:00 2001
+From: Jocelyn Falempe
+Date: Tue, 12 Mar 2024 10:35:12 +0100
+Subject: drm/vmwgfx: Create debugfs ttm_resource_manager entry only if needed
+
+From: Jocelyn Falempe
+
+commit 4be9075fec0a639384ed19975634b662bfab938f upstream.
+
+The driver creates /sys/kernel/debug/dri/0/mob_ttm even when the
+corresponding ttm_resource_manager is not allocated.
+This leads to a crash when trying to read from this file.
+
+Add a check to create mob_ttm, system_mob_ttm, and gmr_ttm debug file
+only when the corresponding ttm_resource_manager is allocated.
+
+crash> bt
+PID: 3133409  TASK: ffff8fe4834a5000  CPU: 3  COMMAND: "grep"
+ #0 [ffffb954506b3b20] machine_kexec at ffffffffb2a6bec3
+ #1 [ffffb954506b3b78] __crash_kexec at ffffffffb2bb598a
+ #2 [ffffb954506b3c38] crash_kexec at ffffffffb2bb68c1
+ #3 [ffffb954506b3c50] oops_end at ffffffffb2a2a9b1
+ #4 [ffffb954506b3c70] no_context at ffffffffb2a7e913
+ #5 [ffffb954506b3cc8] __bad_area_nosemaphore at ffffffffb2a7ec8c
+ #6 [ffffb954506b3d10] do_page_fault at ffffffffb2a7f887
+ #7 [ffffb954506b3d40] page_fault at ffffffffb360116e
+    [exception RIP: ttm_resource_manager_debug+0x11]
+    RIP: ffffffffc04afd11  RSP: ffffb954506b3df0  RFLAGS: 00010246
+    RAX: ffff8fe41a6d1200  RBX: 0000000000000000  RCX: 0000000000000940
+    RDX: 0000000000000000  RSI: ffffffffc04b4338  RDI: 0000000000000000
+    RBP: ffffb954506b3e08  R8:  ffff8fee3ffad000  R9:  0000000000000000
+    R10: ffff8fe41a76a000  R11: 0000000000000001  R12: 00000000ffffffff
+    R13: 0000000000000001  R14: ffff8fe5bb6f3900  R15: ffff8fe41a6d1200
+    ORIG_RAX: ffffffffffffffff  CS: 0010  SS: 0018
+ #8 [ffffb954506b3e00] ttm_resource_manager_show at ffffffffc04afde7 [ttm]
+ #9 [ffffb954506b3e30] seq_read at ffffffffb2d8f9f3
+    RIP: 00007f4c4eda8985  RSP: 00007ffdbba9e9f8  RFLAGS: 00000246
+    RAX: ffffffffffffffda  RBX: 000000000037e000  RCX: 00007f4c4eda8985
+    RDX: 000000000037e000  RSI: 00007f4c41573000  RDI: 0000000000000003
+    RBP: 000000000037e000  R8:  0000000000000000  R9:  000000000037fe30
+    R10: 0000000000000000  R11: 0000000000000246  R12: 00007f4c41573000
+    R13: 0000000000000003  R14: 00007f4c41572010  R15: 0000000000000003
+    ORIG_RAX: 0000000000000000  CS: 0033  SS: 002b
+
+Signed-off-by: Jocelyn Falempe
+Fixes: af4a25bbe5e7 ("drm/vmwgfx: Add debugfs entries for various ttm resource managers")
+Cc:
+Reviewed-by: Zack Rusin
+Link: https://patchwork.freedesktop.org/patch/msgid/20240312093551.196609-1-jfalempe@redhat.com
+Signed-off-by: Greg Kroah-Hartman
+---
+ drivers/gpu/drm/vmwgfx/vmwgfx_drv.c |   15 +++++++++------
+ 1 file changed, 9 insertions(+), 6 deletions(-)
+
+--- a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
++++ b/drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
+@@ -1444,12 +1444,15 @@ static void vmw_debugfs_resource_manager
+ 					    root, "system_ttm");
+ 	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, TTM_PL_VRAM),
+ 					    root, "vram_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR),
+-					    root, "gmr_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB),
+-					    root, "mob_ttm");
+-	ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM),
+-					    root, "system_mob_ttm");
++	if (vmw->has_gmr)
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_GMR),
++						    root, "gmr_ttm");
++	if (vmw->has_mob) {
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_MOB),
++						    root, "mob_ttm");
++		ttm_resource_manager_create_debugfs(ttm_manager_type(&vmw->bdev, VMW_PL_SYSTEM),
++						    root, "system_mob_ttm");
++	}
+ }
+
+ static int vmwgfx_pm_notifier(struct notifier_block *nb, unsigned long val,
diff --git a/queue-6.7/exec-fix-nommu-linux_binprm-exec-in-transfer_args_to_stack.patch b/queue-6.7/exec-fix-nommu-linux_binprm-exec-in-transfer_args_to_stack.patch
new file mode 100644
index 00000000000..baa13269bea
--- /dev/null
+++ b/queue-6.7/exec-fix-nommu-linux_binprm-exec-in-transfer_args_to_stack.patch
@@ -0,0 +1,42 @@
+From 2aea94ac14d1e0a8ae9e34febebe208213ba72f7 Mon Sep 17 00:00:00 2001
+From: Max Filippov
+Date: Wed, 20 Mar 2024 11:26:07 -0700
+Subject: exec: Fix NOMMU linux_binprm::exec in transfer_args_to_stack()
+
+From: Max Filippov
+
+commit 2aea94ac14d1e0a8ae9e34febebe208213ba72f7 upstream.
+
+In NOMMU kernel the value of linux_binprm::p is the offset inside the
+temporary program arguments array maintained in separate pages in the
+linux_binprm::page. linux_binprm::exec being a copy of linux_binprm::p
+thus must be adjusted when that array is copied to the user stack.
+
+Without that adjustment the value passed by the NOMMU kernel to the ELF
+program in the AT_EXECFN entry of the aux array doesn't make any sense
+and it may break programs that try to access memory pointed to by that
+entry.
+
+Adjust linux_binprm::exec before the successful return from the
+transfer_args_to_stack().
+
+Cc:
+Fixes: b6a2fea39318 ("mm: variable length argument support")
+Fixes: 5edc2a5123a7 ("binfmt_elf_fdpic: wire up AT_EXECFD, AT_EXECFN, AT_SECURE")
+Signed-off-by: Max Filippov
+Link: https://lore.kernel.org/r/20240320182607.1472887-1-jcmvbkbc@gmail.com
+Signed-off-by: Kees Cook
+Signed-off-by: Greg Kroah-Hartman
+---
+ fs/exec.c |    1 +
+ 1 file changed, 1 insertion(+)
+
+--- a/fs/exec.c
++++ b/fs/exec.c
+@@ -894,6 +894,7 @@ int transfer_args_to_stack(struct linux_
+ 		goto out;
+ 	}
+
++	bprm->exec += *sp_location - MAX_ARG_PAGES * PAGE_SIZE;
+ 	*sp_location = sp;
+
+ out:
diff --git a/queue-6.7/gpio-cdev-sanitize-the-label-before-requesting-the-interrupt.patch b/queue-6.7/gpio-cdev-sanitize-the-label-before-requesting-the-interrupt.patch
new file mode 100644
index 00000000000..87a18a5ac57
--- /dev/null
+++ b/queue-6.7/gpio-cdev-sanitize-the-label-before-requesting-the-interrupt.patch
@@ -0,0 +1,126 @@
+From b34490879baa847d16fc529c8ea6e6d34f004b38 Mon Sep 17 00:00:00 2001
+From: Bartosz Golaszewski
+Date: Mon, 25 Mar 2024 10:02:42 +0100
+Subject: gpio: cdev: sanitize the label before requesting the interrupt
+
+From: Bartosz Golaszewski
+
+commit b34490879baa847d16fc529c8ea6e6d34f004b38 upstream.
+
+When an interrupt is requested, a procfs directory is created under
+"/proc/irq//