--- /dev/null
+From 9d0e3cac3517942a6e00eeecfe583a98715edb16 Mon Sep 17 00:00:00 2001
+From: Brian Norris <briannorris@chromium.org>
+Date: Mon, 9 Jan 2023 17:18:16 -0800
+Subject: drm/atomic: Allow vblank-enabled + self-refresh "disable"
+
+From: Brian Norris <briannorris@chromium.org>
+
+commit 9d0e3cac3517942a6e00eeecfe583a98715edb16 upstream.
+
+The self-refresh helper framework overloads "disable" to sometimes mean
+"go into self-refresh mode," and this mode activates automatically
+(e.g., after some period of unchanging display output). In such cases,
+the display pipe is still considered "on", and user-space is not aware
+that we went into self-refresh mode. Thus, users may expect that
+vblank-related features (such as DRM_IOCTL_WAIT_VBLANK) still work
+properly.
+
+However, we currently trigger a WARN_ONCE() here if a CRTC driver
+tries to leave vblank enabled.
+
+Add a different expectation: that CRTCs *should* leave vblank enabled
+when going into self-refresh.
+
+This patch is preparation for another patch -- "drm/rockchip: vop: Leave
+vblank enabled in self-refresh" -- which resolves conflicts between the
+above self-refresh behavior and the API tests in IGT's kms_vblank test
+module.
+
+== Some alternatives discussed: ==
+
+It's likely that on many display controllers, vblank interrupts will
+turn off when the CRTC is disabled, and so in some cases, self-refresh
+may not support vblank. To support such cases, we might consider
+additions to the generic helpers such that we fire vblank events based
+on a timer.
+
+However, there is currently only one driver using the common
+self-refresh helpers (i.e., rockchip), and at least as of commit
+bed030a49f3e ("drm/rockchip: Don't fully disable vop on self refresh"),
+the CRTC hardware is powered enough to continue to generate vblank
+interrupts.
+
+So we chose the simpler option of leaving vblank interrupts enabled. We
+can reevaluate this decision and perhaps augment the helpers if/when we
+gain a second driver that has different requirements.
+
+v3:
+ * include discussion summary
+
+v2:
+ * add 'ret != 0' warning case for self-refresh
+ * describe failing test case and relation to drm/rockchip patch better
+
+Cc: <stable@vger.kernel.org> # dependency for "drm/rockchip: vop: Leave
+ # vblank enabled in self-refresh"
+Signed-off-by: Brian Norris <briannorris@chromium.org>
+Signed-off-by: Sean Paul <seanpaul@chromium.org>
+Link: https://patchwork.freedesktop.org/patch/msgid/20230109171809.v3.1.I3904f697863649eb1be540ecca147a66e42bfad7@changeid
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/drm_atomic_helper.c | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+--- a/drivers/gpu/drm/drm_atomic_helper.c
++++ b/drivers/gpu/drm/drm_atomic_helper.c
+@@ -1209,7 +1209,16 @@ disable_outputs(struct drm_device *dev,
+ continue;
+
+ ret = drm_crtc_vblank_get(crtc);
+- WARN_ONCE(ret != -EINVAL, "driver forgot to call drm_crtc_vblank_off()\n");
++ /*
++ * Self-refresh is not a true "disable"; ensure vblank remains
++ * enabled.
++ */
++ if (new_crtc_state->self_refresh_active)
++ WARN_ONCE(ret != 0,
++ "driver disabled vblank in self-refresh\n");
++ else
++ WARN_ONCE(ret != -EINVAL,
++ "driver forgot to call drm_crtc_vblank_off()\n");
+ if (ret == 0)
+ drm_crtc_vblank_put(crtc);
+ }
--- /dev/null
+From 72f1de49ffb90b29748284f27f1d6b829ab1de95 Mon Sep 17 00:00:00 2001
+From: Wayne Lin <Wayne.Lin@amd.com>
+Date: Mon, 17 Apr 2023 17:08:12 +0800
+Subject: drm/dp_mst: Clear MSG_RDY flag before sending new message
+
+From: Wayne Lin <Wayne.Lin@amd.com>
+
+commit 72f1de49ffb90b29748284f27f1d6b829ab1de95 upstream.
+
+[Why]
+The sequence for collecting down_reply from source perspective should
+be:
+
+Request_n -> repeat (receive a partial reply of Request_n -> clear the
+message ready flag to ack DPRX that the message is received) until all
+partial replies for Request_n are received -> new Request_n+1.
+
+Currently there is a chance that drm_dp_mst_hpd_irq() will fire a new
+down request from the tx queue while the down reply is still
+incomplete. The source is not allowed to generate interleaved message
+transactions, so we should avoid that.
+
+Also, while assembling partial reply packets, reading out the DPCD
+DOWN_REP sideband MSG buffer and clearing the DOWN_REP_MSG_RDY flag
+should be treated as one complete operation for reading out a reply
+packet. Kicking off a new request before clearing the DOWN_REP_MSG_RDY
+flag is risky: if the reply to the new request overwrites the DPRX
+DOWN_REP sideband MSG buffer before the source writes to clear
+DOWN_REP_MSG_RDY, the source unintentionally flushes the reply to the
+new request. Up requests should be handled in the same way.
+
+[How]
+Separate drm_dp_mst_hpd_irq() into 2 steps. After acking the MST IRQ
+event, the driver calls drm_dp_mst_hpd_irq_send_new_request(), which
+triggers drm_dp_mst_kick_tx() only when there is no ongoing message
+transaction.
+
+Changes since v1:
+* Reworked based on review comments received
+-> Adjust the fix to let driver explicitly kick off new down request
+when mst irq event is handled and acked
+-> Adjust the commit message
+
+Changes since v2:
+* Adjust the commit message
+* Adjust the naming of the divided 2 functions and add a new input
+ parameter "ack".
+* Adjust code flow as per review comments.
+
+Changes since v3:
+* Update the function description of drm_dp_mst_hpd_irq_handle_event
+
+Changes since v4:
+* Change ack of drm_dp_mst_hpd_irq_handle_event() to be an array
+  matching the size of esi[]
+
+Signed-off-by: Wayne Lin <Wayne.Lin@amd.com>
+Reviewed-by: Lyude Paul <lyude@redhat.com>
+Acked-by: Jani Nikula <jani.nikula@intel.com>
+Cc: stable@vger.kernel.org
+Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 30 ++++++------
+ drivers/gpu/drm/display/drm_dp_mst_topology.c | 54 +++++++++++++++++++---
+ drivers/gpu/drm/i915/display/intel_dp.c | 7 +-
+ drivers/gpu/drm/nouveau/dispnv50/disp.c | 12 +++-
+ include/drm/display/drm_dp_mst_helper.h | 7 ++
+ 5 files changed, 80 insertions(+), 30 deletions(-)
+
+--- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
++++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c
+@@ -3263,6 +3263,7 @@ static void dm_handle_mst_sideband_msg(s
+
+ while (dret == dpcd_bytes_to_read &&
+ process_count < max_process_count) {
++ u8 ack[DP_PSR_ERROR_STATUS - DP_SINK_COUNT_ESI] = {};
+ u8 retry;
+ dret = 0;
+
+@@ -3271,28 +3272,29 @@ static void dm_handle_mst_sideband_msg(s
+ DRM_DEBUG_DRIVER("ESI %02x %02x %02x\n", esi[0], esi[1], esi[2]);
+ /* handle HPD short pulse irq */
+ if (aconnector->mst_mgr.mst_state)
+- drm_dp_mst_hpd_irq(
+- &aconnector->mst_mgr,
+- esi,
+- &new_irq_handled);
++ drm_dp_mst_hpd_irq_handle_event(&aconnector->mst_mgr,
++ esi,
++ ack,
++ &new_irq_handled);
+
+ if (new_irq_handled) {
+ /* ACK at DPCD to notify down stream */
+- const int ack_dpcd_bytes_to_write =
+- dpcd_bytes_to_read - 1;
+-
+ for (retry = 0; retry < 3; retry++) {
+- u8 wret;
++ ssize_t wret;
+
+- wret = drm_dp_dpcd_write(
+- &aconnector->dm_dp_aux.aux,
+- dpcd_addr + 1,
+- &esi[1],
+- ack_dpcd_bytes_to_write);
+- if (wret == ack_dpcd_bytes_to_write)
++ wret = drm_dp_dpcd_writeb(&aconnector->dm_dp_aux.aux,
++ dpcd_addr + 1,
++ ack[1]);
++ if (wret == 1)
+ break;
+ }
+
++ if (retry == 3) {
++ DRM_ERROR("Failed to ack MST event.\n");
++ return;
++ }
++
++ drm_dp_mst_hpd_irq_send_new_request(&aconnector->mst_mgr);
+ /* check if there is new irq to be handled */
+ dret = drm_dp_dpcd_read(
+ &aconnector->dm_dp_aux.aux,
+--- a/drivers/gpu/drm/display/drm_dp_mst_topology.c
++++ b/drivers/gpu/drm/display/drm_dp_mst_topology.c
+@@ -4053,17 +4053,28 @@ out:
+ }
+
+ /**
+- * drm_dp_mst_hpd_irq() - MST hotplug IRQ notify
++ * drm_dp_mst_hpd_irq_handle_event() - MST hotplug IRQ handle MST event
+ * @mgr: manager to notify irq for.
+ * @esi: 4 bytes from SINK_COUNT_ESI
++ * @ack: 4 bytes used to ack events starting from SINK_COUNT_ESI
+ * @handled: whether the hpd interrupt was consumed or not
+ *
+- * This should be called from the driver when it detects a short IRQ,
++ * This should be called from the driver when it detects a HPD IRQ,
+ * along with the value of the DEVICE_SERVICE_IRQ_VECTOR_ESI0. The
+- * topology manager will process the sideband messages received as a result
+- * of this.
++ * topology manager will process the sideband messages received
++ * as indicated in the DEVICE_SERVICE_IRQ_VECTOR_ESI0 and set the
++ * corresponding flags that Driver has to ack the DP receiver later.
++ *
++ * Note that driver shall also call
++ * drm_dp_mst_hpd_irq_send_new_request() if the 'handled' is set
++ * after calling this function, to try to kick off a new request in
++ * the queue if the previous message transaction is completed.
++ *
++ * See also:
++ * drm_dp_mst_hpd_irq_send_new_request()
+ */
+-int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled)
++int drm_dp_mst_hpd_irq_handle_event(struct drm_dp_mst_topology_mgr *mgr, const u8 *esi,
++ u8 *ack, bool *handled)
+ {
+ int ret = 0;
+ int sc;
+@@ -4078,18 +4089,47 @@ int drm_dp_mst_hpd_irq(struct drm_dp_mst
+ if (esi[1] & DP_DOWN_REP_MSG_RDY) {
+ ret = drm_dp_mst_handle_down_rep(mgr);
+ *handled = true;
++ ack[1] |= DP_DOWN_REP_MSG_RDY;
+ }
+
+ if (esi[1] & DP_UP_REQ_MSG_RDY) {
+ ret |= drm_dp_mst_handle_up_req(mgr);
+ *handled = true;
++ ack[1] |= DP_UP_REQ_MSG_RDY;
+ }
+
+- drm_dp_mst_kick_tx(mgr);
+ return ret;
+ }
+-EXPORT_SYMBOL(drm_dp_mst_hpd_irq);
++EXPORT_SYMBOL(drm_dp_mst_hpd_irq_handle_event);
++
++/**
++ * drm_dp_mst_hpd_irq_send_new_request() - MST hotplug IRQ kick off new request
++ * @mgr: manager to notify irq for.
++ *
++ * This should be called from the driver when mst irq event is handled
++ * and acked. Note that new down request should only be sent when
++ * previous message transaction is completed. Source is not supposed to generate
++ * interleaved message transactions.
++ */
++void drm_dp_mst_hpd_irq_send_new_request(struct drm_dp_mst_topology_mgr *mgr)
++{
++ struct drm_dp_sideband_msg_tx *txmsg;
++ bool kick = true;
+
++ mutex_lock(&mgr->qlock);
++ txmsg = list_first_entry_or_null(&mgr->tx_msg_downq,
++ struct drm_dp_sideband_msg_tx, next);
++ /* If last transaction is not completed yet*/
++ if (!txmsg ||
++ txmsg->state == DRM_DP_SIDEBAND_TX_START_SEND ||
++ txmsg->state == DRM_DP_SIDEBAND_TX_SENT)
++ kick = false;
++ mutex_unlock(&mgr->qlock);
++
++ if (kick)
++ drm_dp_mst_kick_tx(mgr);
++}
++EXPORT_SYMBOL(drm_dp_mst_hpd_irq_send_new_request);
+ /**
+ * drm_dp_mst_detect_port() - get connection status for an MST port
+ * @connector: DRM connector for this port
+--- a/drivers/gpu/drm/i915/display/intel_dp.c
++++ b/drivers/gpu/drm/i915/display/intel_dp.c
+@@ -3940,9 +3940,7 @@ intel_dp_mst_hpd_irq(struct intel_dp *in
+ {
+ bool handled = false;
+
+- drm_dp_mst_hpd_irq(&intel_dp->mst_mgr, esi, &handled);
+- if (handled)
+- ack[1] |= esi[1] & (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY);
++ drm_dp_mst_hpd_irq_handle_event(&intel_dp->mst_mgr, esi, ack, &handled);
+
+ if (esi[1] & DP_CP_IRQ) {
+ intel_hdcp_handle_cp_irq(intel_dp->attached_connector);
+@@ -4017,6 +4015,9 @@ intel_dp_check_mst_status(struct intel_d
+
+ if (!intel_dp_ack_sink_irq_esi(intel_dp, ack))
+ drm_dbg_kms(&i915->drm, "Failed to ack ESI\n");
++
++ if (ack[1] & (DP_DOWN_REP_MSG_RDY | DP_UP_REQ_MSG_RDY))
++ drm_dp_mst_hpd_irq_send_new_request(&intel_dp->mst_mgr);
+ }
+
+ return link_ok;
+--- a/drivers/gpu/drm/nouveau/dispnv50/disp.c
++++ b/drivers/gpu/drm/nouveau/dispnv50/disp.c
+@@ -1359,22 +1359,26 @@ nv50_mstm_service(struct nouveau_drm *dr
+ u8 esi[8] = {};
+
+ while (handled) {
++ u8 ack[8] = {};
++
+ rc = drm_dp_dpcd_read(aux, DP_SINK_COUNT_ESI, esi, 8);
+ if (rc != 8) {
+ ret = false;
+ break;
+ }
+
+- drm_dp_mst_hpd_irq(&mstm->mgr, esi, &handled);
++ drm_dp_mst_hpd_irq_handle_event(&mstm->mgr, esi, ack, &handled);
+ if (!handled)
+ break;
+
+- rc = drm_dp_dpcd_write(aux, DP_SINK_COUNT_ESI + 1, &esi[1],
+- 3);
+- if (rc != 3) {
++ rc = drm_dp_dpcd_writeb(aux, DP_SINK_COUNT_ESI + 1, ack[1]);
++
++ if (rc != 1) {
+ ret = false;
+ break;
+ }
++
++ drm_dp_mst_hpd_irq_send_new_request(&mstm->mgr);
+ }
+
+ if (!ret)
+--- a/include/drm/display/drm_dp_mst_helper.h
++++ b/include/drm/display/drm_dp_mst_helper.h
+@@ -815,8 +815,11 @@ void drm_dp_mst_topology_mgr_destroy(str
+ bool drm_dp_read_mst_cap(struct drm_dp_aux *aux, const u8 dpcd[DP_RECEIVER_CAP_SIZE]);
+ int drm_dp_mst_topology_mgr_set_mst(struct drm_dp_mst_topology_mgr *mgr, bool mst_state);
+
+-int drm_dp_mst_hpd_irq(struct drm_dp_mst_topology_mgr *mgr, u8 *esi, bool *handled);
+-
++int drm_dp_mst_hpd_irq_handle_event(struct drm_dp_mst_topology_mgr *mgr,
++ const u8 *esi,
++ u8 *ack,
++ bool *handled);
++void drm_dp_mst_hpd_irq_send_new_request(struct drm_dp_mst_topology_mgr *mgr);
+
+ int
+ drm_dp_mst_detect_port(struct drm_connector *connector,
--- /dev/null
+From 2bdba9d4a3baa758c2ca7f5b37b35c7b3391dc42 Mon Sep 17 00:00:00 2001
+From: Brian Norris <briannorris@chromium.org>
+Date: Mon, 9 Jan 2023 17:18:17 -0800
+Subject: drm/rockchip: vop: Leave vblank enabled in self-refresh
+
+From: Brian Norris <briannorris@chromium.org>
+
+commit 2bdba9d4a3baa758c2ca7f5b37b35c7b3391dc42 upstream.
+
+If we disable vblank when entering self-refresh, vblank APIs (like
+DRM_IOCTL_WAIT_VBLANK) no longer work. But user space is not aware when
+we enter self-refresh, so this appears to be an API violation -- that
+DRM_IOCTL_WAIT_VBLANK fails with EINVAL whenever the display is idle and
+enters self-refresh.
+
+The downstream driver used by many of these systems never used to
+disable vblank for PSR, and in fact, even upstream, we didn't do that
+until radically redesigning the state machine in commit 6c836d965bad
+("drm/rockchip: Use the helpers for PSR").
+
+Thus, it seems like a reasonable API fix to simply restore that
+behavior, and leave vblank enabled.
+
+Note that this appears to potentially unbalance the
+drm_crtc_vblank_{off,on}() calls in some cases, but:
+(a) drm_crtc_vblank_on() documents this as OK and
+(b) if I do the naive balancing, I find state machine issues such that
+ we're not in sync properly; so it's easier to take advantage of (a).
+
+This issue was exposed by IGT's kms_vblank tests and reported by
+KernelCI. The bug has been around a while (longer than KernelCI
+noticed), but it was only exposed once self-refresh was fixed more
+recently, so KernelCI could properly test it. Some other notes in:
+
+ https://lore.kernel.org/dri-devel/Y6OCg9BPnJvimQLT@google.com/
+ Re: renesas/master bisection: igt-kms-rockchip.kms_vblank.pipe-A-wait-forked on rk3399-gru-kevin
+
+== Backporting notes: ==
+
+Marking as 'Fixes' commit 6c836d965bad ("drm/rockchip: Use the helpers
+for PSR"), but it probably depends on commit bed030a49f3e
+("drm/rockchip: Don't fully disable vop on self refresh") as well.
+
+We also need the previous patch ("drm/atomic: Allow vblank-enabled +
+self-refresh "disable""), of course.
+
+v3:
+ * no update
+
+v2:
+ * skip unnecessary lock/unlock
+
+Fixes: 6c836d965bad ("drm/rockchip: Use the helpers for PSR")
+Cc: <stable@vger.kernel.org>
+Reported-by: "kernelci.org bot" <bot@kernelci.org>
+Link: https://lore.kernel.org/dri-devel/Y5itf0+yNIQa6fU4@sirena.org.uk/
+Signed-off-by: Brian Norris <briannorris@chromium.org>
+Signed-off-by: Sean Paul <seanpaul@chromium.org>
+Link: https://patchwork.freedesktop.org/patch/msgid/20230109171809.v3.2.Ic07cba4ab9a7bd3618a9e4258b8f92ea7d10ae5a@changeid
+Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
+---
+ drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+--- a/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
++++ b/drivers/gpu/drm/rockchip/rockchip_drm_vop.c
+@@ -714,13 +714,13 @@ static void vop_crtc_atomic_disable(stru
+ if (crtc->state->self_refresh_active)
+ rockchip_drm_set_win_enabled(crtc, false);
+
++ if (crtc->state->self_refresh_active)
++ goto out;
++
+ mutex_lock(&vop->vop_lock);
+
+ drm_crtc_vblank_off(crtc);
+
+- if (crtc->state->self_refresh_active)
+- goto out;
+-
+ /*
+ * Vop standby will take effect at end of current frame,
+ * if dsp hold valid irq happen, it means standby complete.
+@@ -754,9 +754,9 @@ static void vop_crtc_atomic_disable(stru
+ vop_core_clks_disable(vop);
+ pm_runtime_put(vop->dev);
+
+-out:
+ mutex_unlock(&vop->vop_lock);
+
++out:
+ if (crtc->state->event && !crtc->state->active) {
+ spin_lock_irq(&crtc->dev->event_lock);
+ drm_crtc_send_vblank_event(crtc, crtc->state->event);