From: Sasha Levin Date: Fri, 3 Jul 2020 00:21:57 +0000 (-0400) Subject: Fixes for 5.7 X-Git-Tag: v4.4.230~46 X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=fbb5811aa253e15eb5e6333f7394d3a9c75ed547;p=thirdparty%2Fkernel%2Fstable-queue.git Fixes for 5.7 Signed-off-by: Sasha Levin --- diff --git a/queue-5.7/btrfs-block-group-refactor-how-we-delete-one-block-g.patch b/queue-5.7/btrfs-block-group-refactor-how-we-delete-one-block-g.patch new file mode 100644 index 00000000000..684343281f1 --- /dev/null +++ b/queue-5.7/btrfs-block-group-refactor-how-we-delete-one-block-g.patch @@ -0,0 +1,98 @@ +From 7614fc195e34059bc7b2ac29cf2acbab1ac58019 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Tue, 5 May 2020 07:58:21 +0800 +Subject: btrfs: block-group: refactor how we delete one block group item + +From: Qu Wenruo + +[ Upstream commit 7357623a7f4beb4ac76005f8fac9fc0230f9a67e ] + +Deleting a block group item is pretty straightforward: just +delete the item pointed to by the key. However it will not be that +straightforward for the incoming skinny block group item. + +So refactor the block group item deletion into a new function, +remove_block_group_item(), also to make the already lengthy +btrfs_remove_block_group() a little shorter. 
+ +Reviewed-by: Johannes Thumshirn +Signed-off-by: Qu Wenruo +Reviewed-by: David Sterba +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/block-group.c | 37 +++++++++++++++++++++++++------------ + 1 file changed, 25 insertions(+), 12 deletions(-) + +diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c +index 0c17f18b47940..d80857d00b0fb 100644 +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -863,11 +863,34 @@ static void clear_incompat_bg_bits(struct btrfs_fs_info *fs_info, u64 flags) + } + } + ++static int remove_block_group_item(struct btrfs_trans_handle *trans, ++ struct btrfs_path *path, ++ struct btrfs_block_group *block_group) ++{ ++ struct btrfs_fs_info *fs_info = trans->fs_info; ++ struct btrfs_root *root; ++ struct btrfs_key key; ++ int ret; ++ ++ root = fs_info->extent_root; ++ key.objectid = block_group->start; ++ key.type = BTRFS_BLOCK_GROUP_ITEM_KEY; ++ key.offset = block_group->length; ++ ++ ret = btrfs_search_slot(trans, root, &key, path, -1, 1); ++ if (ret > 0) ++ ret = -ENOENT; ++ if (ret < 0) ++ return ret; ++ ++ ret = btrfs_del_item(trans, root, path); ++ return ret; ++} ++ + int btrfs_remove_block_group(struct btrfs_trans_handle *trans, + u64 group_start, struct extent_map *em) + { + struct btrfs_fs_info *fs_info = trans->fs_info; +- struct btrfs_root *root = fs_info->extent_root; + struct btrfs_path *path; + struct btrfs_block_group *block_group; + struct btrfs_free_cluster *cluster; +@@ -1068,10 +1091,6 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, + + spin_unlock(&block_group->space_info->lock); + +- key.objectid = block_group->start; +- key.type = BTRFS_BLOCK_GROUP_ITEM_KEY; +- key.offset = block_group->length; +- + mutex_lock(&fs_info->chunk_mutex); + spin_lock(&block_group->lock); + block_group->removed = 1; +@@ -1107,16 +1126,10 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, + if (ret) + goto out; + +- ret = btrfs_search_slot(trans, root, &key, path, -1, 
1); +- if (ret > 0) +- ret = -EIO; ++ ret = remove_block_group_item(trans, path, block_group); + if (ret < 0) + goto out; + +- ret = btrfs_del_item(trans, root, path); +- if (ret) +- goto out; +- + if (remove_em) { + struct extent_map_tree *em_tree; + +-- +2.25.1 + diff --git a/queue-5.7/btrfs-fix-race-between-block-group-removal-and-block.patch b/queue-5.7/btrfs-fix-race-between-block-group-removal-and-block.patch new file mode 100644 index 00000000000..2b8cafd94e1 --- /dev/null +++ b/queue-5.7/btrfs-fix-race-between-block-group-removal-and-block.patch @@ -0,0 +1,173 @@ +From 3194f2f33de99452efb73cca36445df4df751e09 Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Mon, 1 Jun 2020 19:12:19 +0100 +Subject: btrfs: fix race between block group removal and block group creation + +From: Filipe Manana + +[ Upstream commit ffcb9d44572afbaf8fa6dbf5115bff6dab7b299e ] + +There is a race between block group removal and block group creation +when the removal is completed by a task running fitrim or scrub. When +this happens we end up failing the block group creation with an error +-EEXIST since we attempt to insert a duplicate block group item key +in the extent tree. That results in a transaction abort. + +The race happens like this: + +1) Task A is doing a fitrim, and at btrfs_trim_block_group() it freezes + block group X with btrfs_freeze_block_group() (until very recently + that was named btrfs_get_block_group_trimming()); + +2) Task B starts removing block group X, either because it's now unused + or due to relocation for example. 
So at btrfs_remove_block_group(), + while holding the chunk mutex and the block group's lock, it sets + the 'removed' flag of the block group and it sets the local variable + 'remove_em' to false, because the block group is currently frozen + (its 'frozen' counter is > 0, until very recently this counter was + named 'trimming'); + +3) Task B unlocks the block group and the chunk mutex; + +4) Task A is done trimming the block group and unfreezes the block group + by calling btrfs_unfreeze_block_group() (until very recently this was + named btrfs_put_block_group_trimming()). In this function we lock the + block group and set the local variable 'cleanup' to true because we + were able to decrement the block group's 'frozen' counter down to 0 and + the flag 'removed' is set in the block group. + + Since 'cleanup' is set to true, it locks the chunk mutex and removes + the extent mapping representing the block group from the mapping tree; + +5) Task C allocates a new block group Y and it picks up the logical address + that block group X had as the logical address for Y, because X was the + block group with the highest logical address and now the second block + group with the highest logical address, the last in the fs mapping tree, + ends at an offset corresponding to block group X's logical address (this + logical address selection is done at volumes.c:find_next_chunk()). + + At this point the new block group Y does not have yet its item added + to the extent tree (nor the corresponding device extent items and + chunk item in the device and chunk trees). 
The new group Y is added to + the list of pending block groups in the transaction handle; + +6) Before task B proceeds to removing the block group item for block + group X from the extent tree, which has a key matching: + + (X logical offset, BTRFS_BLOCK_GROUP_ITEM_KEY, length) + + task C while ending its transaction handle calls + btrfs_create_pending_block_groups(), which finds block group Y and + tries to insert the block group item for Y into the extent tree, which + fails with -EEXIST since the logical offset is the same that X had and + task B hasn't yet deleted the key from the extent tree. + This failure results in a transaction abort, producing a stack like + the following: + +------------[ cut here ]------------ + BTRFS: Transaction aborted (error -17) + WARNING: CPU: 2 PID: 19736 at fs/btrfs/block-group.c:2074 btrfs_create_pending_block_groups+0x1eb/0x260 [btrfs] + Modules linked in: btrfs blake2b_generic xor raid6_pq (...) + CPU: 2 PID: 19736 Comm: fsstress Tainted: G W 5.6.0-rc7-btrfs-next-58 #5 + Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-59-gc9ba5276e321-prebuilt.qemu.org 04/01/2014 + RIP: 0010:btrfs_create_pending_block_groups+0x1eb/0x260 [btrfs] + Code: ff ff ff 48 8b 55 50 f0 48 (...) 
+ RSP: 0018:ffffa4160a1c7d58 EFLAGS: 00010286 + RAX: 0000000000000000 RBX: ffff961581909d98 RCX: 0000000000000000 + RDX: 0000000000000001 RSI: ffffffffb3d63990 RDI: 0000000000000001 + RBP: ffff9614f3356a58 R08: 0000000000000000 R09: 0000000000000001 + R10: ffff9615b65b0040 R11: 0000000000000000 R12: ffff961581909c10 + R13: ffff9615b0c32000 R14: ffff9614f3356ab0 R15: ffff9614be779000 + FS: 00007f2ce2841e80(0000) GS:ffff9615bae00000(0000) knlGS:0000000000000000 + CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 + CR2: 0000555f18780000 CR3: 0000000131d34005 CR4: 00000000003606e0 + DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000 + DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400 + Call Trace: + btrfs_start_dirty_block_groups+0x398/0x4e0 [btrfs] + btrfs_commit_transaction+0xd0/0xc50 [btrfs] + ? btrfs_attach_transaction_barrier+0x1e/0x50 [btrfs] + ? __ia32_sys_fdatasync+0x20/0x20 + iterate_supers+0xdb/0x180 + ksys_sync+0x60/0xb0 + __ia32_sys_sync+0xa/0x10 + do_syscall_64+0x5c/0x280 + entry_SYSCALL_64_after_hwframe+0x49/0xbe + RIP: 0033:0x7f2ce1d4d5b7 + Code: 83 c4 08 48 3d 01 (...) + RSP: 002b:00007ffd8b558c58 EFLAGS: 00000202 ORIG_RAX: 00000000000000a2 + RAX: ffffffffffffffda RBX: 000000000000002c RCX: 00007f2ce1d4d5b7 + RDX: 00000000ffffffff RSI: 00000000186ba07b RDI: 000000000000002c + RBP: 0000555f17b9e520 R08: 0000000000000012 R09: 000000000000ce00 + R10: 0000000000000078 R11: 0000000000000202 R12: 0000000000000032 + R13: 0000000051eb851f R14: 00007ffd8b558cd0 R15: 0000555f1798ec20 + irq event stamp: 0 + hardirqs last enabled at (0): [<0000000000000000>] 0x0 + hardirqs last disabled at (0): [] copy_process+0x74f/0x2020 + softirqs last enabled at (0): [] copy_process+0x74f/0x2020 + softirqs last disabled at (0): [<0000000000000000>] 0x0 + ---[ end trace bd7c03622e0b0a9c ]--- + +Fix this simply by making btrfs_remove_block_group() remove the block +group's item from the extent tree before it flags the block group as +removed. 
Also perform the free space deletion from the free space tree +before flagging the block group as removed, to avoid a similar race +with adding and removing free space entries for the free space tree. + +Fixes: 04216820fe83d5 ("Btrfs: fix race between fs trimming and block group remove/allocation") +CC: stable@vger.kernel.org # 4.4+ +Signed-off-by: Filipe Manana +Signed-off-by: David Sterba +Signed-off-by: Sasha Levin +--- + fs/btrfs/block-group.c | 27 +++++++++++++++++++-------- + 1 file changed, 19 insertions(+), 8 deletions(-) + +diff --git a/fs/btrfs/block-group.c b/fs/btrfs/block-group.c +index d80857d00b0fb..1b1c869530088 100644 +--- a/fs/btrfs/block-group.c ++++ b/fs/btrfs/block-group.c +@@ -1091,6 +1091,25 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, + + spin_unlock(&block_group->space_info->lock); + ++ /* ++ * Remove the free space for the block group from the free space tree ++ * and the block group's item from the extent tree before marking the ++ * block group as removed. This is to prevent races with tasks that ++ * freeze and unfreeze a block group, this task and another task ++ * allocating a new block group - the unfreeze task ends up removing ++ * the block group's extent map before the task calling this function ++ * deletes the block group item from the extent tree, allowing for ++ * another task to attempt to create another block group with the same ++ * item key (and failing with -EEXIST and a transaction abort). 
++ */ ++ ret = remove_block_group_free_space(trans, block_group); ++ if (ret) ++ goto out; ++ ++ ret = remove_block_group_item(trans, path, block_group); ++ if (ret < 0) ++ goto out; ++ + mutex_lock(&fs_info->chunk_mutex); + spin_lock(&block_group->lock); + block_group->removed = 1; +@@ -1122,14 +1141,6 @@ int btrfs_remove_block_group(struct btrfs_trans_handle *trans, + + mutex_unlock(&fs_info->chunk_mutex); + +- ret = remove_block_group_free_space(trans, block_group); +- if (ret) +- goto out; +- +- ret = remove_block_group_item(trans, path, block_group); +- if (ret < 0) +- goto out; +- + if (remove_em) { + struct extent_map_tree *em_tree; + +-- +2.25.1 + diff --git a/queue-5.7/drm-amd-display-fix-incorrectly-pruned-modes-with-de.patch b/queue-5.7/drm-amd-display-fix-incorrectly-pruned-modes-with-de.patch new file mode 100644 index 00000000000..e3de53fac3e --- /dev/null +++ b/queue-5.7/drm-amd-display-fix-incorrectly-pruned-modes-with-de.patch @@ -0,0 +1,226 @@ +From cf32c53954ba8965ec588125a4eac219d4ebc6de Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 30 Apr 2020 16:40:09 +0800 +Subject: drm/amd/display: Fix incorrectly pruned modes with deep color + +From: Stylon Wang + +[ Upstream commit cbd14ae7ea934fd9d9f95103a0601a7fea243573 ] + +[Why] +When "max bpc" is set to enable deep color, some modes are removed from +the list if they fail validation at max bpc. These modes should be kept +if they validate fine with a lower bpc. + +[How] +- Retry with lower bpc in mode validation. +- Same in atomic commit to apply working bpc, not necessarily max bpc. 
+ +Signed-off-by: Stylon Wang +Reviewed-by: Nicholas Kazlauskas +Acked-by: Rodrigo Siqueira +Signed-off-by: Alex Deucher +Signed-off-by: Sasha Levin +--- + .../gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 102 +++++++++++------- + 1 file changed, 64 insertions(+), 38 deletions(-) + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index f9f02e08054bc..b7e161f2a47d4 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -3797,8 +3797,7 @@ static void update_stream_scaling_settings(const struct drm_display_mode *mode, + + static enum dc_color_depth + convert_color_depth_from_display_info(const struct drm_connector *connector, +- const struct drm_connector_state *state, +- bool is_y420) ++ bool is_y420, int requested_bpc) + { + uint8_t bpc; + +@@ -3818,10 +3817,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector, + bpc = bpc ? bpc : 8; + } + +- if (!state) +- state = connector->state; +- +- if (state) { ++ if (requested_bpc > 0) { + /* + * Cap display bpc based on the user requested value. + * +@@ -3830,7 +3826,7 @@ convert_color_depth_from_display_info(const struct drm_connector *connector, + * or if this was called outside of atomic check, so it + * can't be used directly. + */ +- bpc = min(bpc, state->max_requested_bpc); ++ bpc = min_t(u8, bpc, requested_bpc); + + /* Round down to the nearest even number. 
*/ + bpc = bpc - (bpc & 1); +@@ -3952,7 +3948,8 @@ static void fill_stream_properties_from_drm_display_mode( + const struct drm_display_mode *mode_in, + const struct drm_connector *connector, + const struct drm_connector_state *connector_state, +- const struct dc_stream_state *old_stream) ++ const struct dc_stream_state *old_stream, ++ int requested_bpc) + { + struct dc_crtc_timing *timing_out = &stream->timing; + const struct drm_display_info *info = &connector->display_info; +@@ -3982,8 +3979,9 @@ static void fill_stream_properties_from_drm_display_mode( + + timing_out->timing_3d_format = TIMING_3D_FORMAT_NONE; + timing_out->display_color_depth = convert_color_depth_from_display_info( +- connector, connector_state, +- (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420)); ++ connector, ++ (timing_out->pixel_encoding == PIXEL_ENCODING_YCBCR420), ++ requested_bpc); + timing_out->scan_type = SCANNING_TYPE_NODATA; + timing_out->hdmi_vic = 0; + +@@ -4189,7 +4187,8 @@ static struct dc_stream_state * + create_stream_for_sink(struct amdgpu_dm_connector *aconnector, + const struct drm_display_mode *drm_mode, + const struct dm_connector_state *dm_state, +- const struct dc_stream_state *old_stream) ++ const struct dc_stream_state *old_stream, ++ int requested_bpc) + { + struct drm_display_mode *preferred_mode = NULL; + struct drm_connector *drm_connector; +@@ -4274,10 +4273,10 @@ create_stream_for_sink(struct amdgpu_dm_connector *aconnector, + */ + if (!scale || mode_refresh != preferred_refresh) + fill_stream_properties_from_drm_display_mode(stream, +- &mode, &aconnector->base, con_state, NULL); ++ &mode, &aconnector->base, con_state, NULL, requested_bpc); + else + fill_stream_properties_from_drm_display_mode(stream, +- &mode, &aconnector->base, con_state, old_stream); ++ &mode, &aconnector->base, con_state, old_stream, requested_bpc); + + stream->timing.flags.DSC = 0; + +@@ -4800,16 +4799,54 @@ static void handle_edid_mgmt(struct amdgpu_dm_connector *aconnector) + 
create_eml_sink(aconnector); + } + ++static struct dc_stream_state * ++create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector, ++ const struct drm_display_mode *drm_mode, ++ const struct dm_connector_state *dm_state, ++ const struct dc_stream_state *old_stream) ++{ ++ struct drm_connector *connector = &aconnector->base; ++ struct amdgpu_device *adev = connector->dev->dev_private; ++ struct dc_stream_state *stream; ++ int requested_bpc = connector->state ? connector->state->max_requested_bpc : 8; ++ enum dc_status dc_result = DC_OK; ++ ++ do { ++ stream = create_stream_for_sink(aconnector, drm_mode, ++ dm_state, old_stream, ++ requested_bpc); ++ if (stream == NULL) { ++ DRM_ERROR("Failed to create stream for sink!\n"); ++ break; ++ } ++ ++ dc_result = dc_validate_stream(adev->dm.dc, stream); ++ ++ if (dc_result != DC_OK) { ++ DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n", ++ drm_mode->hdisplay, ++ drm_mode->vdisplay, ++ drm_mode->clock, ++ dc_result); ++ ++ dc_stream_release(stream); ++ stream = NULL; ++ requested_bpc -= 2; /* lower bpc to retry validation */ ++ } ++ ++ } while (stream == NULL && requested_bpc >= 6); ++ ++ return stream; ++} ++ + enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connector, + struct drm_display_mode *mode) + { + int result = MODE_ERROR; + struct dc_sink *dc_sink; +- struct amdgpu_device *adev = connector->dev->dev_private; + /* TODO: Unhardcode stream count */ + struct dc_stream_state *stream; + struct amdgpu_dm_connector *aconnector = to_amdgpu_dm_connector(connector); +- enum dc_status dc_result = DC_OK; + + if ((mode->flags & DRM_MODE_FLAG_INTERLACE) || + (mode->flags & DRM_MODE_FLAG_DBLSCAN)) +@@ -4830,24 +4867,11 @@ enum drm_mode_status amdgpu_dm_connector_mode_valid(struct drm_connector *connec + goto fail; + } + +- stream = create_stream_for_sink(aconnector, mode, NULL, NULL); +- if (stream == NULL) { +- DRM_ERROR("Failed to create stream for sink!\n"); +- goto 
fail; +- } +- +- dc_result = dc_validate_stream(adev->dm.dc, stream); +- +- if (dc_result == DC_OK) ++ stream = create_validate_stream_for_sink(aconnector, mode, NULL, NULL); ++ if (stream) { ++ dc_stream_release(stream); + result = MODE_OK; +- else +- DRM_DEBUG_KMS("Mode %dx%d (clk %d) failed DC validation with error %d\n", +- mode->hdisplay, +- mode->vdisplay, +- mode->clock, +- dc_result); +- +- dc_stream_release(stream); ++ } + + fail: + /* TODO: error handling*/ +@@ -5170,10 +5194,12 @@ static int dm_encoder_helper_atomic_check(struct drm_encoder *encoder, + return 0; + + if (!state->duplicated) { ++ int max_bpc = conn_state->max_requested_bpc; + is_y420 = drm_mode_is_420_also(&connector->display_info, adjusted_mode) && + aconnector->force_yuv420_output; +- color_depth = convert_color_depth_from_display_info(connector, conn_state, +- is_y420); ++ color_depth = convert_color_depth_from_display_info(connector, ++ is_y420, ++ max_bpc); + bpp = convert_dc_color_depth_into_bpc(color_depth) * 3; + clock = adjusted_mode->clock; + dm_new_connector_state->pbn = drm_dp_calc_pbn_mode(clock, bpp, false); +@@ -7589,10 +7615,10 @@ static int dm_update_crtc_state(struct amdgpu_display_manager *dm, + if (!drm_atomic_crtc_needs_modeset(new_crtc_state)) + goto skip_modeset; + +- new_stream = create_stream_for_sink(aconnector, +- &new_crtc_state->mode, +- dm_new_conn_state, +- dm_old_crtc_state->stream); ++ new_stream = create_validate_stream_for_sink(aconnector, ++ &new_crtc_state->mode, ++ dm_new_conn_state, ++ dm_old_crtc_state->stream); + + /* + * we can have no stream on ACTION_SET if a display +-- +2.25.1 + diff --git a/queue-5.7/drm-amd-display-fix-ineffective-setting-of-max-bpc-p.patch b/queue-5.7/drm-amd-display-fix-ineffective-setting-of-max-bpc-p.patch new file mode 100644 index 00000000000..d971632b7f0 --- /dev/null +++ b/queue-5.7/drm-amd-display-fix-ineffective-setting-of-max-bpc-p.patch @@ -0,0 +1,44 @@ +From 7ef093ff596f584af41323b96029ad85510ae4d4 Mon Sep 17 
00:00:00 2001 +From: Sasha Levin +Date: Fri, 12 Jun 2020 19:04:18 +0800 +Subject: drm/amd/display: Fix ineffective setting of max bpc property + +From: Stylon Wang + +[ Upstream commit fa7041d9d2fc7401cece43f305eb5b87b7017fc4 ] + +[Why] +Regression was introduced where setting max bpc property has no effect +on the atomic check and final commit. It has the same effect as max bpc +being stuck at 8. + +[How] +Correctly propagate max bpc with the new connector state. + +Signed-off-by: Stylon Wang +Reviewed-by: Nicholas Kazlauskas +Acked-by: Rodrigo Siqueira +Signed-off-by: Alex Deucher +Cc: stable@vger.kernel.org +Signed-off-by: Sasha Levin +--- + drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 ++- + 1 file changed, 2 insertions(+), 1 deletion(-) + +diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +index b7e161f2a47d4..69b1f61928eff 100644 +--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c ++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c +@@ -4808,7 +4808,8 @@ create_validate_stream_for_sink(struct amdgpu_dm_connector *aconnector, + struct drm_connector *connector = &aconnector->base; + struct amdgpu_device *adev = connector->dev->dev_private; + struct dc_stream_state *stream; +- int requested_bpc = connector->state ? connector->state->max_requested_bpc : 8; ++ const struct drm_connector_state *drm_state = dm_state ? &dm_state->base : NULL; ++ int requested_bpc = drm_state ? 
drm_state->max_requested_bpc : 8; + enum dc_status dc_result = DC_OK; + + do { +-- +2.25.1 + diff --git a/queue-5.7/mm-fix-swap-cache-node-allocation-mask.patch b/queue-5.7/mm-fix-swap-cache-node-allocation-mask.patch new file mode 100644 index 00000000000..cfc51b52b8e --- /dev/null +++ b/queue-5.7/mm-fix-swap-cache-node-allocation-mask.patch @@ -0,0 +1,98 @@ +From 4809bed73fcfac8cae2f599839ca1272c90d95fb Mon Sep 17 00:00:00 2001 +From: Sasha Levin +Date: Thu, 25 Jun 2020 20:29:59 -0700 +Subject: mm: fix swap cache node allocation mask + +From: Hugh Dickins + +[ Upstream commit 243bce09c91b0145aeaedd5afba799d81841c030 ] + +Chris Murphy reports that a slightly overcommitted load, testing swap +and zram along with i915, splats and keeps on splatting, when it had +better fail less noisily: + + gnome-shell: page allocation failure: order:0, + mode:0x400d0(__GFP_IO|__GFP_FS|__GFP_COMP|__GFP_RECLAIMABLE), + nodemask=(null),cpuset=/,mems_allowed=0 + CPU: 2 PID: 1155 Comm: gnome-shell Not tainted 5.7.0-1.fc33.x86_64 #1 + Call Trace: + dump_stack+0x64/0x88 + warn_alloc.cold+0x75/0xd9 + __alloc_pages_slowpath.constprop.0+0xcfa/0xd30 + __alloc_pages_nodemask+0x2df/0x320 + alloc_slab_page+0x195/0x310 + allocate_slab+0x3c5/0x440 + ___slab_alloc+0x40c/0x5f0 + __slab_alloc+0x1c/0x30 + kmem_cache_alloc+0x20e/0x220 + xas_nomem+0x28/0x70 + add_to_swap_cache+0x321/0x400 + __read_swap_cache_async+0x105/0x240 + swap_cluster_readahead+0x22c/0x2e0 + shmem_swapin+0x8e/0xc0 + shmem_swapin_page+0x196/0x740 + shmem_getpage_gfp+0x3a2/0xa60 + shmem_read_mapping_page_gfp+0x32/0x60 + shmem_get_pages+0x155/0x5e0 [i915] + __i915_gem_object_get_pages+0x68/0xa0 [i915] + i915_vma_pin+0x3fe/0x6c0 [i915] + eb_add_vma+0x10b/0x2c0 [i915] + i915_gem_do_execbuffer+0x704/0x3430 [i915] + i915_gem_execbuffer2_ioctl+0x1ea/0x3e0 [i915] + drm_ioctl_kernel+0x86/0xd0 [drm] + drm_ioctl+0x206/0x390 [drm] + ksys_ioctl+0x82/0xc0 + __x64_sys_ioctl+0x16/0x20 + do_syscall_64+0x5b/0xf0 + 
entry_SYSCALL_64_after_hwframe+0x44/0xa9 + +Reported on 5.7, but it goes back really to 3.1: when +shmem_read_mapping_page_gfp() was implemented for use by i915, and +allowed for __GFP_NORETRY and __GFP_NOWARN flags in most places, but +missed swapin's "& GFP_KERNEL" mask for page tree node allocation in +__read_swap_cache_async() - that was to mask off HIGHUSER_MOVABLE bits +from what page cache uses, but GFP_RECLAIM_MASK is now what's needed. + +Link: https://bugzilla.kernel.org/show_bug.cgi?id=208085 +Link: http://lkml.kernel.org/r/alpine.LSU.2.11.2006151330070.11064@eggly.anvils +Fixes: 68da9f055755 ("tmpfs: pass gfp to shmem_getpage_gfp") +Signed-off-by: Hugh Dickins +Reviewed-by: Vlastimil Babka +Reviewed-by: Matthew Wilcox (Oracle) +Reported-by: Chris Murphy +Analyzed-by: Vlastimil Babka +Analyzed-by: Matthew Wilcox +Tested-by: Chris Murphy +Cc: [3.1+] +Signed-off-by: Andrew Morton +Signed-off-by: Linus Torvalds +Signed-off-by: Sasha Levin +--- + mm/swap_state.c | 4 +++- + 1 file changed, 3 insertions(+), 1 deletion(-) + +diff --git a/mm/swap_state.c b/mm/swap_state.c +index ebed37bbf7a39..e3d36776c08bf 100644 +--- a/mm/swap_state.c ++++ b/mm/swap_state.c +@@ -23,6 +23,7 @@ + #include + + #include ++#include "internal.h" + + /* + * swapper_space is a fiction, retained to simplify the path through +@@ -418,7 +419,8 @@ struct page *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask, + /* May fail (-ENOMEM) if XArray node allocation failed. 
*/ + __SetPageLocked(new_page); + __SetPageSwapBacked(new_page); +- err = add_to_swap_cache(new_page, entry, gfp_mask & GFP_KERNEL); ++ err = add_to_swap_cache(new_page, entry, ++ gfp_mask & GFP_RECLAIM_MASK); + if (likely(!err)) { + /* Initiate read into locked page */ + SetPageWorkingset(new_page); +-- +2.25.1 + diff --git a/queue-5.7/series b/queue-5.7/series index 3ca96c45ba2..a3da2fbfb46 100644 --- a/queue-5.7/series +++ b/queue-5.7/series @@ -3,3 +3,8 @@ exfat-add-missing-brelse-calls-on-error-paths.patch exfat-call-sync_filesystem-for-read-only-remount.patch exfat-move-setting-vol_dirty-over-exfat_remove_entri.patch exfat-flush-dirty-metadata-in-fsync.patch +btrfs-block-group-refactor-how-we-delete-one-block-g.patch +btrfs-fix-race-between-block-group-removal-and-block.patch +mm-fix-swap-cache-node-allocation-mask.patch +drm-amd-display-fix-incorrectly-pruned-modes-with-de.patch +drm-amd-display-fix-ineffective-setting-of-max-bpc-p.patch