Ville Syrjälä [Tue, 23 Sep 2025 17:19:25 +0000 (20:19 +0300)]
drm/i915/bw: Skip the bw_state->active_pipes update if no pipe is changing its active state
Currently we may end up doing a bunch of redundant bw_state
recomputation whenever any modeset happens. Skip a bunch of
that by only considering whether any pipe actually changes
its active state.
Ville Syrjälä [Fri, 3 Oct 2025 14:57:32 +0000 (17:57 +0300)]
drm/i915/fbdev: Select linear modifier explicitly
Currently we use the implicit modifier fb creation path for fbdev,
but as we never call set_tiling on the bo it will always end up as
linear anyway. The rest of the code (eg. stride alignment) also
assumes that we'll use linear. Just select the linear modifier
explicitly.
Ville Syrjälä [Fri, 3 Oct 2025 14:57:31 +0000 (17:57 +0300)]
drm/i915/fb: Fix the set_tiling vs. addfb race, again
intel_frontbuffer_get() is what locks out subsequent set_tiling
changes to the bo. Thus the fence vs. modifier check must be done
after intel_frontbuffer_get(), or else a concurrent set_tiling ioctl
might sneak in and change the fence after the check has been done.
Close the race again. See commit dd689287b977 ("drm/i915: Prevent
concurrent tiling/framebuffer modifications") for the previous
instance.
v2: Reorder intel_user_framebuffer_destroy() to match the unwind (Jani)
Ville Syrjälä [Fri, 3 Oct 2025 14:57:30 +0000 (17:57 +0300)]
drm/i915/frontbuffer: Move bo refcounting to intel_frontbuffer_{get,release}()
Currently xe's intel_frontbuffer implementation forgets to
hold a reference on the bo. This makes the entire thing
extremely fragile as the cleanup order now depends on bo
references held by other things
(namely intel_fb_bo_framebuffer_fini()).
Move the bo refcounting to intel_frontbuffer_{get,release}()
so that both i915 and xe do this the same way.
I first tried to fix this by having xe do the refcounting
from its intel_bo_set_frontbuffer() implementation
(which is what i915 does currently), but turns out xe's
drm_gem_object_free() can sleep and thus drm_gem_object_put()
isn't safe to call while we hold fb_tracking.lock.
The trace events, trace_intel_pipe_update_start() among others, use
functions that acquire spinlock_t locks, which are transformed into
sleeping locks on PREEMPT_RT. A few trace points use
intel_get_crtc_scanline(), others use ->get_vblank_counter(), which also
might acquire sleeping locks on PREEMPT_RT.
At the time the arguments are evaluated within a trace point, preemption
is disabled and so the locks must not be acquired on PREEMPT_RT.
Based on this I don't see any other way than to disable the trace points
on PREEMPT_RT.
[mlankhorst]
The original patch was insufficient, and the tracing infrastructure
does not allow for partial disabling of tracepoints. Completely disable
tracing for the entire i915 driver on PREEMPT_RT; a separate fix for
display tracepoints on xe is added to make those work.
Cc: Tvrtko Ursulin <tvrtko.ursulin@igalia.com> Reported-by: Luca Abeni <lucabe72@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Co-developed-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de> Acked-by: Jani Nikula <jani.nikula@intel.com> Link: https://lore.kernel.org/r/20250828090944.101069-1-dev@lankhorst.se Signed-off-by: Maarten Lankhorst <dev@lankhorst.se>
drm/i915/alpm: Remove parameters suffix from intel_dp->alpm_parameters
Now that intel_dp->alpm_parameters doesn't really contain any parameters, it
doesn't make sense to call it alpm_parameters -> remove the parameters
suffix.
drm/i915/alpm: Compute ALPM parameters into crtc_state->alpm_state
Currently ALPM parameters are computed directly into
intel_dp->alpm_parameters. This is a problem when compute config ends up
not using the computed state.
Fix this by adding ALPM parameters into intel_crtc_state and compute into
there. Copy needed parameters (io_wake_lines and fast_wake_lines used by
PSR activate/exit) from crtc_state->alpm_state into
intel_dp->alpm.alpm_parameters when they are configured into HW.
v3:
- enhance commit message
v2:
- store io/fast wake lines into intel_dp->dp instead of
intel_dp->alpm_parameters and do it in intel_psr_enable_locked
- rename crtc_state->alpm_parameters -> crtc_state->alpm_state
- clarify commit message
Rename intel_get_linetime_us() to skl_wm_linetime_us() to better
reflect that it's not meant to be used for anything apart from
the watermark calculations.
Ville Syrjälä [Fri, 19 Sep 2025 18:08:37 +0000 (21:08 +0300)]
drm/i915: Deobfuscate wm linetime calculation
intel_get_linetime_us() is a mess. Rewrite it in a straightforward
manner. Also the checks for the !active and pixel_rate==0 are
completely pointless here since we know that the plane is visible.
Ville Syrjälä [Fri, 19 Sep 2025 18:08:36 +0000 (21:08 +0300)]
drm/i915: Use the correct pixel rate to compute wm line time
The line time used for the watermark calculations is supposed to be
based on the plane's adjusted pixel rate, not the pipe's adjusted
pixel rate. The current code will give incorrect answers if plane
downscaling is used.
Gustavo Sousa [Wed, 1 Oct 2025 16:04:49 +0000 (13:04 -0300)]
drm/i915/display: Enable PICA power before AUX
According to Bspec, before enabling AUX power, we need to have the
"power well containing Aux logic powered up". Starting with Xe2_LPD,
such power well is the "PICA" power well, which is managed by the driver
on demand.
While we did add the mapping of AUX power domains to the PICA power
well, we ended up placing its power well descriptor after the
descriptor for AUX power. As a result, when enabling power wells for one
of the aux power domains, the driver will enable AUX power before PICA
power, going against the order specified in Bspec.
It appears that issue did not become apparent to us mainly because,
luckily, AUX power is brought up after we assert PICA power, even if
done in the wrong order; and in enough time for the first AUX
transaction to succeed.
Furthermore, I have also realized that, in some cases, like driver
initialization, PICA power is already up when we need to acquire AUX
power.
One case where we can observe the incorrect ordering is when the driver
is resuming from runtime PM suspend. Here is an excerpt of a dmesg with
some extra debug logs extracted from a LNL machine to illustrate the
issue:
The first "DBG: ..." line shows that AUX power for TC1 is off after we
assert and wait. The remaining lines show that AUX power for TC1 was on
after we enabled PICA power and waited for AUX power.
It is important that we stay compliant with the spec, so let's fix this
by listing the power wells in an order that matches the requirements
from Bspec. (As a side note, it would be nice if we could define those
dependencies explicitly.)
Gustavo Sousa [Wed, 1 Oct 2025 16:04:48 +0000 (13:04 -0300)]
drm/i915/display: Extract separate AUX PW descriptors
In an upcoming change, we will fix an ordering issue between PICA and
AUX power wells for Xe2_LPD and later, making sure that the driver
acquires PICA power before AUX. As a preparation for that, let's
extract separate descriptors for AUX power wells.
Handle the DSC pixel throughput quirk, limiting the compressed link-bpp
value for Synaptics Panamera branch devices, working around a
blank/unstable output issue observed on docking stations containing
these branch devices, when using a mode with a high pixel clock and a
high compressed link-bpp value.
For now use the same mode clock limit for RGB/YUV444 and YUV422/420
output modes. This may result in limiting the link-bpp value for a
YUV422/420 output mode already at a lower than required mode clock.
v2: Apply the quirk only when DSC is enabled.
v3 (Ville):
- Move adjustment of link-bpp within the already existing is_dsc
if branch.
- Add TODO comment to move the HW revision check as well to the
DRM core quirk table.
v4:
- Fix incorrect fxp_q4_from_int(INT_MAX) vs. INT_MAX return value
from dsc_throughput_quirk_max_bpp_x16().
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reported-by: Vidya Srinivas <vidya.srinivas@intel.com> Reported-and-tested-by: Swati Sharma <swati2.sharma@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://lore.kernel.org/r/20250930182450.563016-7-imre.deak@intel.com
Read out the branch devices' maximum overall DSC pixel throughput and
line width and verify the mode's corresponding pixel clock and hactive
period against these.
v2: Use drm helpers to query the throughput/line-width caps. (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reported-and-tested-by: Swati Sharma <swati2.sharma@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://lore.kernel.org/r/20250930182450.563016-6-imre.deak@intel.com
Imre Deak [Tue, 30 Sep 2025 18:24:48 +0000 (21:24 +0300)]
drm/i915/dp: Pass DPCD device descriptor to intel_dp_get_dsc_sink_cap()
Pass the DPCD sink/branch device descriptor along with the
is_branch/sink flag to intel_dp_get_dsc_sink_cap(). These will be used
by a follow up change to read out the branch device's DSC overall
throughput/line width capabilities and to detect a throughput/link-bpp
quirk.
Imre Deak [Tue, 30 Sep 2025 18:24:47 +0000 (21:24 +0300)]
drm/i915/dp: Calculate DSC slice count based on per-slice peak throughput
Use the DSC sink device's actual per-slice peak throughput to calculate
the minimum number of required DSC slices, falling back to the
hard-coded throughput values (as suggested by the DP Standard) if the
device's reported throughput value is 0.
For now use the minimum of the two throughput values, which is ok,
potentially resulting in a higher than required minimum slice count.
This doesn't change the current way of using the same minimum throughput
value regardless of the RGB/YUV output format used.
While at it add a TODO comment for MST tiled displays to calculate the
slice count for these based on the total pixel rate of all the tiles.
v2: Use drm helpers to query the throughput caps. (Ville)
v3: Add TODO comment to account for MST tiled displays. (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com> Reported-and-tested-by: Swati Sharma <swati2.sharma@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://lore.kernel.org/r/20250930182450.563016-4-imre.deak@intel.com
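To illustrate the calculation described above, here is a rough sketch; the
helper name, units and fallback handling are assumptions rather than the
actual i915 code:

  #include <linux/kernel.h>
  #include <linux/minmax.h>

  /*
   * Hypothetical sketch: minimum DSC slice count from the sink's reported
   * per-slice peak throughput (in kpixels/s), using the minimum of the
   * reported and the hard-coded value, and falling back to the hard-coded
   * value when the sink reports 0.
   */
  static int example_min_dsc_slice_count(int mode_clock_khz,
                                         int sink_throughput_kpps,
                                         int fallback_throughput_kpps)
  {
          int throughput = sink_throughput_kpps ?
                  min(sink_throughput_kpps, fallback_throughput_kpps) :
                  fallback_throughput_kpps;

          return DIV_ROUND_UP(mode_clock_khz, throughput);
  }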
Imre Deak [Tue, 30 Sep 2025 18:24:46 +0000 (21:24 +0300)]
drm/dp: Add helpers to query the branch DSC max throughput/line-width
Add helpers to query the DP DSC sink device's per-slice throughput as
well as a DSC branch device's overall throughput and line-width
capabilities.
v2 (Ville):
- Rename pixel_clock to peak_pixel_rate, document what the value means
in case of MST tiled displays.
- Fix name of drm_dp_dsc_branch_max_slice_throughput() to
drm_dp_dsc_sink_max_slice_throughput().
v3:
- Fix the DSC branch device minimum valid line width value from 2560
to 5120 pixels.
- Fix drm_dp_dsc_sink_max_slice_throughput()'s pixel_clock parameter
name to peak_pixel_rate in header file.
- Add handling for throughput mode 0 granular delta, defined by DP
Standard v2.1a.
v4:
- Remove the default switch case in
drm_dp_dsc_sink_max_slice_throughput(), which is unreachable in the
current code. (Ville)
Cc: dri-devel@lists.freedesktop.org Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reported-and-tested-by: Swati Sharma <swati2.sharma@intel.com> Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@oss.qualcomm.com> Acked-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Signed-off-by: Imre Deak <imre.deak@intel.com> Link: https://lore.kernel.org/r/20250930182450.563016-3-imre.deak@intel.com
Imre Deak [Tue, 30 Sep 2025 18:24:45 +0000 (21:24 +0300)]
drm/dp: Add quirk for Synaptics DSC throughput link-bpp limit
Some Synaptics MST branch devices have a problem decompressing a stream
with a compressed link-bpp higher than 12, if the pixel clock is higher
than ~50 % of the maximum throughput capability reported by the branch
device. The screen remains blank, or for some - mostly black content -
gets enabled, but may still have jitter artifacts.
At least the following docking stations are affected, based on testing
with both Intel devices and the UCD-500 reference device as a source:
At least the following docking stations are free from this problem,
based on tests with a source/sink/mode etc. configuration matching the
test cases used above:
All the affected devices have an older version of the Synaptics MST
branch device (Panamera), whereas all the non-affected docking stations
have a newer branch device (at least Synaptics Panamera with a higher HW
revision number and Synaptics Cayenne models). Add the required quirk
entries accordingly. The quirk will be handled by the i915/xe drivers in
a follow-up change.
The latest firmware version of the Synaptics branch device for all the
affected devices tested above is 5.7 (as reported at DPCD address
0x50a/0x50b). For the DELL devices this corresponds to the latest
01.00.14.01.A03 firmware package version of the docking station.
v2:
- Document the DP_DPCD_QUIRK_DSC_THROUGHPUT_BPP_LIMIT enum.
- Describe the quirk in more detail in the dpcd_quirk_list.
v3:
- s/Panarema/Panamera in the commit log.
Return the actual error code from vfio_set_irqs_validate_and_prepare()
instead of always collapsing to -EINVAL. While the helper
currently returns -EINVAL in most cases, passing through the real
error code is more future-proof.
While at it, drop the stray 'intel:' prefix from the error
message.
Jani Nikula [Mon, 29 Sep 2025 13:34:18 +0000 (16:34 +0300)]
drm/i915/irq: duplicate HAS_FBC() for irq error mask usage
The error irq handling needs to mask page table errors on gen 2/3 with
FBC. See commit e7e12f6ec8bf ("drm/i915: Mask page table errors on
gen2/3 with FBC") for details.
We want to avoid using display feature checks in i915 core code. Since
FBC can't be fused off on gen 2/3, just list the platforms that support
FBC. Add a macro purely for making the code self-documenting.
With this, we can drop the intel_display_core.h include, and make struct
intel_display opaque inside i915_irq.c.
Jani Nikula [Fri, 26 Sep 2025 11:10:32 +0000 (14:10 +0300)]
drm/{i915,xe}: driver agnostic drm to display pointer chase
The display driver needs to get from the struct drm_device pointer to
the struct intel_display pointer. Currently, this depends on knowledge
of the struct drm_i915_private and struct xe_device definitions, but
we'd like to hide those definitions from display.
Require the struct drm_device and struct intel_display * members within
struct drm_i915_private and struct xe_device to be placed next to each
other, to be able to figure out the display pointer without knowledge of
the structures.
Use a generic dummy device structure to define the relative offsets of
the drm and display members, and add static assertions to ensure this
holds for both i915 and xe. Use the dummy structure to do the pointer
chase from struct drm_device * to struct intel_display *.
This requires moving the display member in struct xe_device after the
drm member.
Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Rodrigo Vivi <rodrigo.vivi@intel.com> Cc: Ville Syrjala <ville.syrjala@linux.intel.com> Suggested-by: Simona Vetter <simona.vetter@ffwll.ch> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Link: https://lore.kernel.org/r/20250926111032.1188876-1-jani.nikula@intel.com Signed-off-by: Jani Nikula <jani.nikula@intel.com>
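A simplified sketch of the layout trick (the dummy struct and helper names
are invented for illustration; the real i915/xe code differs in detail):

  #include <linux/stddef.h>
  #include <drm/drm_device.h>

  struct intel_display;

  /*
   * Dummy layout: both drivers promise that the struct intel_display
   * pointer immediately follows the embedded struct drm_device.
   */
  struct __example_device {
          struct drm_device drm;
          struct intel_display *display;
  };

  /* Pointer chase without knowing the full i915/xe device definitions. */
  static inline struct intel_display *example_drm_to_display(struct drm_device *drm)
  {
          const size_t delta = offsetof(struct __example_device, display) -
                               offsetof(struct __example_device, drm);

          return *(struct intel_display **)((void *)drm + delta);
  }

Each driver would then static_assert() that the relative offsets of its own
drm and display members match those of the dummy struct.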
Jani Nikula [Wed, 24 Sep 2025 16:43:36 +0000 (19:43 +0300)]
drm/{i915, xe}/stolen: make struct intel_stolen_node opaque
Add i915_gem_stolen_node_alloc() and i915_gem_stolen_node_free(),
returning struct intel_stolen_node pointer. Make struct
intel_stolen_node an opaque pointer, with different implementations in
i915 and xe.
Jani Nikula [Wed, 24 Sep 2025 16:43:35 +0000 (19:43 +0300)]
drm/xe/stolen: convert compat static inlines to proper functions
Add display/xe_stolen.c as the implementation for the stolen interface
exposed to display. This allows hiding the implementation details that
shouldn't be exposed to display.
Jani Nikula [Wed, 24 Sep 2025 16:43:34 +0000 (19:43 +0300)]
drm/i915/stolen: convert intel_stolen_node into a real struct of its own
i915_gem_stolen.h simply defines intel_stolen_node as drm_mm_node. Make
struct intel_stolen_node an actual struct of its own right, and embed
struct drm_mm_node inside. This allows better unification between i915
and xe.
drm/i915/psr: Deactivate PSR only on LNL and when selective fetch enabled
Using intel_psr_exit in frontbuffer flush on older platforms seems to be
causing problems.
Sending a single full frame update using intel_psr_force_update is anyway
more optimal compared to a PSR deactivate/activate cycle -> move back to this
approach on PSR1, PSR HW tracking and Panel Replay full frame update, and
use deactivate/activate only on LunarLake and only when selective fetch is
enabled.
Tested-by: Lemen <lemen@lemen.xyz> Tested-by: Koos Vriezen <koos.vriezen@gmail.com> Closes: https://gitlab.freedesktop.org/drm/i915/kernel/-/issues/14946 Signed-off-by: Jouni Högander <jouni.hogander@intel.com> Reviewed-by: Mika Kahola <mika.kahola@intel.com> Link: https://lore.kernel.org/r/20250922102725.2752742-1-jouni.hogander@intel.com
Dave Airlie [Fri, 26 Sep 2025 03:25:42 +0000 (13:25 +1000)]
Merge tag 'drm-habanalabs-next-2025-09-25' of https://github.com/HabanaAI/drivers.accel.habanalabs.kernel into drm-next
This tag contains habanalabs driver changes for v6.18.
It continues the previous upstream work from tags/drm-habanalabs-next-2024-06-23,
including improvements in debug and visibility, alongside general code cleanups,
and new features such as vmalloc-backed coherent mmap, HLDIO infrastructure, etc.
drm/i915: i915_pmu: Use sysfs_emit() instead of sprintf()
Follow the advice in Documentation/filesystems/sysfs.rst:
show() should only use sysfs_emit() or sysfs_emit_at() when formatting
the value to be returned to user space.
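As a minimal illustration of the pattern (the attribute and value here are
placeholders, not the actual i915 PMU attributes):

  #include <linux/device.h>
  #include <linux/sysfs.h>

  /* show() hook formatting its value with sysfs_emit() instead of sprintf(). */
  static ssize_t example_show(struct device *dev,
                              struct device_attribute *attr, char *buf)
  {
          unsigned int value = 42;        /* placeholder value */

          return sysfs_emit(buf, "%u\n", value);
  }
  static DEVICE_ATTR_RO(example);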
Add error handling for the following VFIO_DEVICE_SET_IRQS cases with
respect to the hdr struct:
- More than one VFIO_IRQ_DATA_TYPE_MASK flag is set in hdr.flags
- More than one VFIO_IRQ_ACTION_TYPE_MASK flag is set in hdr.flags
- hdr.count is not specified
Note that since hdr.count != 0, data_size != 0 is guaranteed unless
vfio_set_irqs_validate_and_prepare fails and returns an error. So, we
no longer need to check data_size before running memdup_user because
checking the return value of the function is sufficient.
v2: Use correct name for mask
v3: Use is_power_of_2 over hweight32 as it's more efficient (Andi)
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com> Cc: Andi Shyti <andi.shyti@linux.intel.com> Reviewed-by: Zhenyu Wang <zhenyuw.linux@gmail.com> Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com> Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com> Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com> Link: https://lore.kernel.org/r/20250923212332.112137-2-jonathan.cavitt@intel.com
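A hedged sketch of the added checks (the helper is invented; the flag group
masks are the uapi VFIO_IRQ_SET_* names, which the commit refers to in
shorthand):

  #include <linux/errno.h>
  #include <linux/log2.h>
  #include <linux/vfio.h>

  /*
   * Illustrative only: require exactly one data type flag, exactly one
   * action flag (is_power_of_2() rejects both zero and multiple bits),
   * and a non-zero count.
   */
  static int example_validate_set_irqs_hdr(const struct vfio_irq_set *hdr)
  {
          if (!is_power_of_2(hdr->flags & VFIO_IRQ_SET_DATA_TYPE_MASK))
                  return -EINVAL;

          if (!is_power_of_2(hdr->flags & VFIO_IRQ_SET_ACTION_TYPE_MASK))
                  return -EINVAL;

          if (!hdr->count)
                  return -EINVAL;

          return 0;
  }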
Jonathan Cavitt [Thu, 18 Sep 2025 21:45:16 +0000 (21:45 +0000)]
drm/i915/gvt: Simplify case switch in intel_vgpu_ioctl
We do not need a case switch to check cap_type_id in intel_vgpu_ioctl
for various reasons (it's impossible to hit the default case in the
current code, there's only one valid case to check, the error handling
code overlaps in both cases, etc.). Simplify the case switch into a
single if statement. This has the additional effect of simplifying the
error handling code.
Note that it is still currently impossible for
'if (cap_type_id == VFIO_REGION_INFO_CAP_SPARSE_MMAP)'
to fail, but we should still guard against the possibility of this
changing in the future.
drm/i915/display: Drop intel_vrr_vblank_delay and use set_context_latency
The helper intel_vrr_vblank_delay() was used to keep track of the SCL
lines + the extra vblank delay required for ICL/TGL.
This was used to wait for sufficient lines for:
- push send bit to clear for the VRR case
- evasion to delay the commit.
For the first case we are using safe window scanline wait and with that we
just need to wait for SCL lines, we do not need to wait for the extra
vblank delay required for ICL/TGL. For the second case, we actually
do not need to wait for extra lines before the undelayed vblank, if we
are already in the safe window.
To sum up, SCL lines is sufficient for both cases.
So drop the helper intel_vrr_vblank_delay and just use
crtc_state->set_context_latency instead.
drm/i915/vrr: Clamp guardband as per hardware and timing constraints
The maximum guardband value is constrained by two factors:
- The actual vblank length minus set context latency (SCL)
- The hardware register field width:
- 8 bits for ICL/TGL (VRR_CTL_PIPELINE_FULL_MASK -> max 255)
- 16 bits for ADL+ (XELPD_VRR_CTL_VRR_GUARDBAND_MASK -> max 65535)
Remove the #FIXME and clamp the guardband to the maximum allowed value.
v2:
- Use REG_FIELD_MAX(). (Ville)
- Separate out functions for intel_vrr_max_guardband(),
intel_vrr_max_vblank_guardband(). (Ville)
v3:
- Fix Typo: Add the missing adjusted_mode->crtc_vdisplay in guardband
computation. (Ville)
- Refactor intel_vrr_max_hw_guardband() and use else for consistency.
(Ville)
v4:
- Drop max_guardband from intel_vrr_max_hw_guardband(). (Ville)
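In sketch form (plain integer parameters standing in for the crtc state
fields), the clamp amounts to:

  #include <linux/minmax.h>

  /*
   * Illustrative only: the guardband may exceed neither the register
   * field width nor the vblank length minus the set context latency.
   */
  static int example_max_guardband(int vtotal, int vdisplay,
                                   int set_context_latency,
                                   int max_hw_guardband)
  {
          int max_vblank_guardband = vtotal - vdisplay - set_context_latency;

          return min(max_hw_guardband, max_vblank_guardband);
  }

where max_hw_guardband would be 255 on ICL/TGL and 65535 on ADL+ per the
register field widths listed above.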
drm/i915/reg_defs: Add REG_FIELD_MAX wrapper for FIELD_MAX()
Introduce REG_FIELD_MAX macro as local wrapper around FIELD_MAX() to return
the maximum value representable by a bit mask. The value is cast to u32
for consistency with other REG_* macros and assumes the bitfield fits
within 32 bits.
v2: Use __mask as macro argument aligning with other macros. (Ville)
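Presumably the wrapper looks roughly like this (a sketch based on the
description, not a copy of the actual reg_defs header):

  #include <linux/bitfield.h>
  #include <linux/types.h>

  /* Maximum value representable by a register bit mask, cast to u32. */
  #define REG_FIELD_MAX(__mask)   ((u32)FIELD_MAX(__mask))

  /* e.g. for an 8-bit field: REG_FIELD_MAX(GENMASK(7, 0)) == 255 */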
drm/i915/display: Wait for scl start instead of dsb_wait_vblanks
Until LNL, intel_dsb_wait_vblanks() used to wait for the undelayed vblank
start. However, from PTL onwards, it waits for the start of the
safe-window defined by the number of lines programmed in the register
TRANS_SET_CONTEXT_LATENCY. This change was introduced to move the SCL
window out of the vblank region, supporting modes with higher refresh
rates and smaller vblanks. This change introduces a "safe window": a
scanline range from (undelayed vblank - SCL) to (delayed vblank - SCL).
As a result, on PTL+ platforms, the DSB wait for vblank completes exactly
SCL lines earlier than the undelayed vblank start (safe window start).
If the flip occurs in the active region and the push happens before the
vmin decision boundary, the DSB wait fires early, and the push is sent
inside this safe window. In such cases, the push bit is cleared at the
delayed vblank, but our wait logic does not account for the early trigger,
leading to DSB poll errors.
To fix this, we add an explicit wait for the end of the safe window i.e.,
the scanline range from (undelayed vblank - SCL) to (delayed vblank - SCL).
Once past this window, we are exactly SCL lines away from the delayed
vblank, and our existing wait logic works as intended.
This additional wait is only effective if the push occurs before the vmin
decision boundary. If the push happens after the boundary, the hardware
already guarantees we're SCL lines away from the delayed vblank, and the
extra wait becomes a no-op.
v2:
- Use helpers for safe window start/end. (Ville)
- Move the extra wait inside the helper to wait for delayed vblank. (Ville)
- Update the commit message.
v3:
- Add more documentation for explanation for the wait. (Ville)
- Rename intel_vrr_vmin_safe_window_start/end as this is vmin safe
window. (Ville)
- Minor refactoring to align with the code. (Ville)
- Update the commit message for more clarity.
v4:
- Retain name for intel_vrr_safe_window_start as it doesn't change with
vmin/vmax etc. (Ville)
The helper intel_dsb_wait_vblank_delay() is used in DSB to wait for the
delayed vblank after the send push operation. Rename it to
intel_dsb_wait_for_delayed_vblank() to align with the semantics.
v2: Rename to intel_dsb_wait_for_delayed_vblank instead of the proposed SCL
semantics, as this will not only be about SCL lines with different timing
generators and different refresh rate modes. (Ville)
For now guardband is equal to the vblank length so ideally it should be
computed as difference between the vmin vtotal and vactive. However
since we are having few lines as SCL, we need to account for this while
computing the guardband.
Since the vblank start is moved by SCL lines from the vactive, the delta
between the vmin vtotal and new vblank start was used to account for this.
Now that SCL is explicitly tracked using the `set_context_latency` member,
use it directly in the guardband calculation.
In the future, when the guardband is shortened or optimized, we may need
to factor in both the change in the vblank start and SCL lines. For now,
explicitly accounting for SCL is sufficient.
v2: Fix typo: replace adjusted_mode->vdisplay with
adjusted_mode->crtc_vdisplay. (Ville)
drm/i915/display: Add set_context_latency to crtc_state
'Set context latency' (SCL, Window W2) is defined as the number of lines
before the double buffering point, which are required to complete
programming of the registers, typically when DSB is used to program the
display registers.
Since we are not using this window for programming the registers, this
is mostly set to 0, unless there is a requirement for a few cases related
to PSR/PR where the 'set context latency' should be at least 1.
Currently we are using the 'set context latency' (if required) implicitly
by moving the vblank start by the required amount and then measuring the
delay i.e. the difference between undelayed vblank start and delayed vblank
start.
Since our guardband matches the vblank length, this was not a problem as
the difference between the undelayed vblank and delayed vblank was at
the most equal to the 'set context latency' lines.
However, if we want to optimize the guardband, the difference between the
undelayed and the delayed vblank will be large and we cannot use this
difference as the 'set context latency' lines.
To make way for this optimization of guardband, formally introduce the
'set context latency' or SCL and track it as a new member
`set_context_latency` of the structure intel_crtc_state.
Eventually, all references of vblank delay where we mean to use set
context latency will be replaced by this new `set_context_latency`
member.
Note: for TGL the TRANS_SET_CONTEXT_LATENCY doesn't exist to account for
the SCL. However, the VBLANK_START-VACTIVE difference plays an identical
role here, i.e. it can be used to create the SCL window ahead of the
undelayed vblank.
During readback, since there is no specific register to read out the SCL, use
the difference between vblank start and vactive to populate the new member
for TGL.
v2:
- Use u16 for set_context_latency. (Ville)
- s/vblank_delay/set_context_latency. (Ville)
- Meld the changes for TGL with this change. (Ville)
v3:
- Update comment to clarify the TGL case. (Ville)
- Fix typo in commit message.
Rename intel_psr_min_vblank_delay to intel_psr_min_set_context_latency
to reflect that it provides the minimum value for 'Set context
latency' (SCL or Window W2) for PSR/Panel Replay to work correctly across
different platforms.
Add i915_gem_fence_wait_priority_display() helper to wait with
I915_PRIORITY_DISPLAY. This drops the intel_plane.c dependency on
i915_scheduler_types.h, and allows us to remove the compat header from
xe.
Pavan S [Wed, 2 Oct 2024 07:46:40 +0000 (10:46 +0300)]
accel/habanalabs: add Infineon version check
On HL338 ASICs, the Infineon first-stage firmware is not present and
the reported version is 0. In this case printing a version number is
misleading, as it suggests valid firmware when it does not exist.
Fix this by printing the first-stage Infineon firmware version only
if the reported value is non-zero. This avoids confusing or incorrect
log messages on devices where the first stage is not applicable.
accel/habanalabs/gaudi2: read preboot status after recovering from dirty state
Dirty state can occur when the host VM undergoes a reset while the
device does not. In such a case, the driver must reset the device before
it can be used again. As part of this reset, the device capabilities
are zeroed. Therefore, the driver must read the Preboot status again to
learn the Preboot state, capabilities, and security configuration.
accel/habanalabs: add debugfs interface for HLDIO testing
Add debugfs files for NVMe Direct I/O (HLDIO) functionality.
This interface allows userspace access to direct SSD ↔ device transfers
through debugfs nodes.
Four debugfs files are created under /sys/kernel/debug/habanalabs/hlN/:
- dio_ssd2hl : trigger SSD-to-device transfers
- dio_hl2ssd : trigger device-to-SSD transfers
(placeholder, not yet implemented)
- dio_stats : show transfer statistics
- dio_reset : reset statistics counters
accel/habanalabs: add NVMe Direct I/O (HLDIO) infrastructure
Introduce NVMe Direct I/O (HLDIO) infrastructure to support
peer-to-peer DMA in the habanalabs driver. This adds internal helpers
and data structures to enable direct transfers between NVMe storage
and device memory.
The feature is built only when CONFIG_HL_HLDIO is enabled. A debugfs
interface is also provided for functional validation.
accel/habanalabs: support mapping cb with vmalloc-backed coherent memory
When IOMMU is enabled, dma_alloc_coherent() with GFP_USER may return
addresses from the vmalloc range. If such an address is mapped without
VM_MIXEDMAP, vm_insert_page() will trigger a BUG_ON due to the
VM_PFNMAP restriction.
Fix this by checking for vmalloc addresses and setting VM_MIXEDMAP
in the VMA before mapping. This ensures safe mapping and avoids kernel
crashes. The memory is still driver-allocated and cannot be accessed
directly by userspace.
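A condensed sketch of the described fix, using an invented mapping helper;
the real driver code differs:

  #include <linux/mm.h>
  #include <linux/vmalloc.h>

  /*
   * With an IOMMU, dma_alloc_coherent() may hand back a vmalloc-range
   * address; mark the VMA VM_MIXEDMAP before vm_insert_page() so the
   * VM_PFNMAP restriction does not trigger a BUG_ON.
   */
  static int example_mmap_coherent(struct vm_area_struct *vma, void *cpu_addr,
                                   unsigned long npages)
  {
          unsigned long i;

          if (is_vmalloc_addr(cpu_addr))
                  vm_flags_set(vma, VM_MIXEDMAP);

          for (i = 0; i < npages; i++) {
                  void *vaddr = cpu_addr + i * PAGE_SIZE;
                  struct page *page = is_vmalloc_addr(vaddr) ?
                          vmalloc_to_page(vaddr) : virt_to_page(vaddr);
                  int ret = vm_insert_page(vma, vma->vm_start + i * PAGE_SIZE,
                                           page);

                  if (ret)
                          return ret;
          }

          return 0;
  }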
Ilia Levi [Mon, 19 Aug 2024 09:13:07 +0000 (12:13 +0300)]
accel/habanalabs: remove old interface variation of 'access_ok()'
The access_ok() API no longer requires the VERIFY_WRITE argument,
and the use of the old interface with VERIFY_WRITE is deprecated.
Clean up the habanalabs memory manager to use the modern access_ok()
interface consistently. This removes old #ifdef guards and aligns the
driver with current upstream kernel APIs.
Signed-off-by: Ilia Levi <ilia.levi@intel.com> Reviewed-by: Koby Elbaz <koby.elbaz@intel.com> Signed-off-by: Koby Elbaz <koby.elbaz@intel.com>
accel/habanalabs: clarify ctx use after hl_ctx_put() in dmabuf release
In hl_release_dmabuf(), ctx is dereferenced after calling hl_ctx_put()
to obtain the compute device file.
This is safe because the dma-buf object holds a file reference taken in
export_dmabuf(), and the file release (which drops another ctx reference)
can only happen after we drop that file reference via fput(). Thus, this
hl_ctx_put() call cannot be the last one at this point.
accel/habanalabs/gaudi2: add support for logging register accesses from debugfs
Add infrastructure for logging the last configuration register accesses
that occur via debugfs read/write operations. At interrupt time, these
log entries can be dumped to dmesg, which helps in diagnosing the cause
of RAZWI and ADDR_DEC interrupts.
The logging is implemented as a ring buffer of access entries, with each
entry recording timestamp and access details. To ensure correctness
under concurrent access, operations are now protected using spinlocks.
Entries are copied under lock and then printed after releasing it, which
minimizes time spent in the critical section.
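The logging scheme described above could look roughly like this sketch (the
names and ring size are made up):

  #include <linux/ktime.h>
  #include <linux/spinlock.h>
  #include <linux/types.h>

  #define EXAMPLE_REG_LOG_SIZE    32      /* arbitrary ring size */

  struct example_reg_access {
          ktime_t ts;
          u32 addr;
          u32 val;
          bool is_write;
  };

  struct example_reg_log {
          struct example_reg_access ring[EXAMPLE_REG_LOG_SIZE];
          unsigned int head;
          spinlock_t lock;
  };

  /* Record one debugfs register access in the ring buffer, under the lock. */
  static void example_reg_log_add(struct example_reg_log *log,
                                  u32 addr, u32 val, bool is_write)
  {
          unsigned long flags;

          spin_lock_irqsave(&log->lock, flags);
          log->ring[log->head] = (struct example_reg_access) {
                  .ts = ktime_get(),
                  .addr = addr,
                  .val = val,
                  .is_write = is_write,
          };
          log->head = (log->head + 1) % EXAMPLE_REG_LOG_SIZE;
          spin_unlock_irqrestore(&log->lock, flags);
  }

At dump time the entries would similarly be copied out under the lock and
printed only after releasing it.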
Change the BMON_CR register value back to its original state before
enabling, so that BMON does not continue to collect information
after being disabled.
Tomer Tayar [Sun, 26 May 2024 13:32:32 +0000 (16:32 +0300)]
accel/habanalabs: return ENOMEM if less than requested pages were pinned
EFAULT is currently returned if less than requested user pages are
pinned. This value means a "bad address" which might be confusing to
the user, as the address of the given user memory is not necessarily
"bad".
Modify the return value to ENOMEM, as "out of memory" is more suitable
in this case.
drm/i915/vrr: Refactor VRR live status wait into common helper
Add a helper to consolidate timeout handling and error logging when waiting
for VRR live status to clear. Log an error message if the VRR live status
bit fails to clear within the timeout.
Jani Nikula [Tue, 23 Sep 2025 14:31:08 +0000 (17:31 +0300)]
drm/i915/irq: split ILK display irq handling
Split out display irq handling on ilk. Since the master IRQ enable is in
DEIIR, we'll need to do this in two parts. First, add
ilk_display_irq_master_disable() to disable master and south interrupts,
and second, add (repurposed) ilk_display_irq_handler() to finish display
irq handling.
It's not the prettiest thing you ever saw, but improves separation of
display irq handling. And removes HAS_PCH_NOP() and DISPLAY_VER() checks
from core irq code.
v2:
- Separate ilk_display_irq_master_enable() (Ville)
- Use _fw mmio accessors (Ville)
Jani Nikula [Tue, 23 Sep 2025 14:31:07 +0000 (17:31 +0300)]
drm/i915/irq: move check for HAS_HOTPLUG() inside i9xx_hpd_irq_ack()
We want to avoid using the display dependent HAS_HOTPLUG() in generic
irq code. Since the enabling of I915_DISPLAY_PORT_INTERRUPT depends on
HAS_HOTPLUG() to begin with, we don't really expect to get the irqs for
!HAS_HOTPLUG(). At least in theory, checking for HAS_HOTPLUG() inside
i9xx_hpd_irq_ack() should not have any impact.
Jani Nikula [Tue, 23 Sep 2025 14:31:05 +0000 (17:31 +0300)]
drm/i915/irq: initialize gen2_imr_mask in terms of enable_mask
Instead of initializing gen2_imr_mask and enable_mask independently, use
the latter for initializing the former. This also highlights the
differences in the masks, i.e. what's set to enable_mask after it's been
used to initialize gen2_imr_mask.
drm/i915/xe3: Restrict PTL intel_encoder_is_c10phy() to only PHY A
On PTL, no combo PHY is connected to PORT B. However, PORT B can
still be used for Type-C and will utilize the C20 PHY for eDP
over Type-C. In such configurations, VBTs also enumerate PORT B.
This leads to issues where PORT B is incorrectly identified as using the
C10 PHY, due to the assumption that returning true for PORT B in
intel_encoder_is_c10phy() would not cause problems.
From PTL's perspective, only PORT A/PHY A uses the C10 PHY.
Update the helper intel_encoder_is_c10phy() to return true only for
PORT A/PHY A on PTL.
v2: Change the condition code style for ptl/wcl
Bspec: 72571,73944 Fixes: 9d10de78a37f ("drm/i915/wcl: C10 phy connected to port A and B") Signed-off-by: Dnyaneshwar Bhadane <dnyaneshwar.bhadane@intel.com> Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com> Signed-off-by: Suraj Kandpal <suraj.kandpal@intel.com> Link: https://lore.kernel.org/r/20250922150317.2334680-4-dnyaneshwar.bhadane@intel.com
drm/i915/display: Add definition for wcl as subplatform
We will need to differentiate between WCL and PTL in
intel_encoder_is_c10phy(). Since WCL and PTL use the same display
architecture, let's define WCL as a subplatform of PTL to allow the
differentiation.
v2: Update commit message and reorder wcl define (Gustavo)
drm/pcids: Split PTL pciids group to make wcl subplatform
To form the WCL platform as a subplatform of PTL in definition, the
WCL PCI IDs are split into a separate group from PTL.
So update the pciidlist struct to cover all the PCI IDs.
v2:
- Squash wcl description in a single patch for display and xe. (Jani, Gustavo)
Ville Syrjälä [Fri, 19 Sep 2025 19:30:00 +0000 (22:30 +0300)]
drm/i915: Make sure wm block/lines are non-decreasing
The watermark algorithm sometimes produces results where higher
watermark levels have smaller blocks/lines watermarks than the lower
levels. That doesn't really make sense as the corresponding latencies
are supposed to be non-decreasing. It's unclear how the hardware
responds to such watermark values, so it seems better to avoid that
case and just make sure the values are always non-decreasing.
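Conceptually the adjustment boils down to the following sketch (the level
struct is a stand-in for i915's skl_wm_level):

  #include <linux/minmax.h>
  #include <linux/types.h>

  struct example_wm_level {
          u16 blocks;
          u8 lines;
  };

  /* Make each level's blocks/lines at least as large as the previous level's. */
  static void example_make_wm_non_decreasing(struct example_wm_level *levels,
                                             int num_levels)
  {
          int level;

          for (level = 1; level < num_levels; level++) {
                  levels[level].blocks = max(levels[level].blocks,
                                             levels[level - 1].blocks);
                  levels[level].lines = max(levels[level].lines,
                                            levels[level - 1].lines);
          }
  }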
Ville Syrjälä [Fri, 19 Sep 2025 19:29:56 +0000 (22:29 +0300)]
drm/i915: Extract sanitize_wm_latency()
Pull the "zero out invalid WM latencies" stuff into a helper.
Mainly to avoid mixing higher level and lower level stuff in
the same adjust_wm_latency() function.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:55 +0000 (22:29 +0300)]
drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
Bump the latency for all watermark levels in the
16Gb+ DIMM w/a. The spec does ask us to do it only for level
0, but it seems more sane to bump all the levels. If the actual
memory access is slower, then the wakeup (WM1+) should also
potentially happen earlier.
This also avoids the theoretical case that WM0 would get bumped
higher than WM1+. Not that it is likely to happen because the WM0
latency is always significantly lower than the WM1 latency.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:53 +0000 (22:29 +0300)]
drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
I want skl_read_wm_latency() to just do what it says on
the tin, ie. read the latency values from the pcode mailbox.
Move the DG2 "multiply by two" trick elsewhere.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:52 +0000 (22:29 +0300)]
drm/i915: Move adjust_wm_latency() out from {mtl,skl}_read_wm_latency()
{mtl,skl}_read_wm_latency() are doing way too many things for
my liking. Move the adjustment stuff out into the caller.
This also gives us one place where we specify the 'read_latency'
for all the platforms, instead of two places.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:50 +0000 (22:29 +0300)]
drm/i915: Tweak the read latency fixup code
If WM0 latency is zero we need to bump it (and the WM1+ latencies)
but a fixed amount. But any WM1+ level with zero latency must
not be touched since that indicates that corresponding WM level
isn't supported.
Currently the loop doing that adjustment does work, but only because
the previous loop modified the num_levels used as the loop boundary.
This all seems a bit too fragile. Remove the num_levels adjustment
and instead adjust the read latency loop to abort when it encounters
a zero latency value.
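In sketch form (array and variable names are illustrative), the adjusted
loop might look like:

  #include <linux/types.h>

  /*
   * Bump all enabled levels by the read latency when WM0 reads back as 0,
   * but stop at the first disabled (zero latency) WM1+ level instead of
   * relying on a modified num_levels.
   */
  static void example_fixup_read_latency(u16 *wm, int num_levels,
                                         u16 read_latency)
  {
          int level;

          if (wm[0] != 0)
                  return;

          for (level = 0; level < num_levels; level++) {
                  if (level > 0 && wm[level] == 0)
                          break;

                  wm[level] += read_latency;
          }
  }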
Ville Syrjälä [Fri, 19 Sep 2025 19:29:49 +0000 (22:29 +0300)]
drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
Currently the code assumes that every platform except dg2 need the
16Gb DIMM w/a, while in reality it's only needed by skl and icl (and
derivatives). Switch to a more specific platform check.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:48 +0000 (22:29 +0300)]
drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
While the spec only asks us to do the WM0 latency bump for 16Gb
DRAM devices I believe we should apply it for larger DRAM chips.
At the time the w/a was added there were no larger chips on
the market, but I think I've seen at least 32Gb DDR4 chips
being available these days.
Whether it's possible to actually find suitable DIMMs for the
affected systems with larger chips I don't know. Also it's
not known whether the 1 usec latency bump would be sufficient
for larger chips. Someone would need to find such DIMMs and
test this. Fortunately we do have a bit of extra latency already
with the 1 usec bump, as the actual requirement was .4 usec for 16Gb chips.
LiangCheng Wang [Mon, 22 Sep 2025 02:57:34 +0000 (10:57 +0800)]
drm/tiny: pixpaper: Fix missing dependency on DRM_GEM_SHMEM_HELPER
The driver uses drm_gem_shmem_prime_import_no_map() and
drm_gem_shmem_dumb_create(), but the Kconfig currently selects
DRM_GEM_DMA_HELPER instead of DRM_GEM_SHMEM_HELPER. This causes
link failures when DRM_GEM_SHMEM_HELPER is not enabled.
Select DRM_GEM_SHMEM_HELPER to fix the build.
Reported-by: kernel test robot <lkp@intel.com> Closes: https://lore.kernel.org/oe-kbuild-all/202509220320.gfFZjmyg-lkp@intel.com/ Fixes: c9e70639f591 ("drm: tiny: Add support for Mayqueen Pixpaper e-ink panel") Reviewed-by: Thomas Zimmermann <tzimmermann@suse.de> Signed-off-by: LiangCheng Wang <zaq14760@gmail.com> Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de> Link: https://lore.kernel.org/r/20250922-bar-v1-1-b2a1f54ace82@gmail.com
Ville Syrjälä [Fri, 19 Sep 2025 18:50:15 +0000 (21:50 +0300)]
drm/i915/pm: Drop redundant pci stuff from suspend/resume paths
I don't think there should be any need for us to call any of
pci_enable_device(), pci_disable_device() or pci_set_master()
from the suspend/resume paths. The config space save/restore should
take care of all of this.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:14 +0000 (21:50 +0300)]
drm/i915/pm: Allow drivers/pci to manage our pci state normally
Stop doing the pci_save_state(), except when we need to prevent
D3 due to BIOS bugs, so that the code in drivers/pci is allowed
to manage the state of the PCI device. Less chance of something
getting left by the wayside by i915 if/when things change in
drivers/pci.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:13 +0000 (21:50 +0300)]
drm/i915/pm: Do pci_restore_state() in switcheroo resume hook
Since this switcheroo garbage bypasses all the core pm we
have to manually manage the pci state. To that end add the
missing pci_restore_state() to the switcheroo resume hook.
We already have the pci_save_state() counterpart on the
suspend side.
Arguably none of this code should exist in the driver
in the first place, and instead the entire switcheroo
mechanism should be rewritten and properly integrated into
core pm code...
Ville Syrjälä [Fri, 19 Sep 2025 18:50:12 +0000 (21:50 +0300)]
drm/i915/pm: Move the hibernate+D3 quirk stuff into noirq() pm hooks
If the driver doesn't call pci_save_state() drivers/pci will
normally save+power manage the device from the _noirq() pm hooks.
We can't let that happen as some old BIOSes fail to hibernate
when the device is in D3. However, we can get very close to
the standard behaviour by doing our explicit pci_save_state()
and pci_set_power_state() stuff from driver provided _noirq()
hooks.
This results in a change of behaviour where we no longer go
into D3 at the end of freeze_late, so when it comes time
to thaw() we'll already be in D0, and thus we can drop the
explicit pci_set_power_state(D0) call.
Presumably switcheroo suspend will want to go into D3 so
call the _noirq() stuff from the switcheroo suspend hook,
and since we dropped the pci_set_power_state(D0) from
resume_early() we'll need to add one back into the
switcheroo resume hook.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:11 +0000 (21:50 +0300)]
drm/i915/pm: Hoist pci_save_state()+pci_set_power_state() to the end of pm _late() hook
drivers/pci does the pci_save_state()+pci_set_power_state() from
the _noirq() pm hooks. Move our manual calls (needed for the
hibernate vs. D3 workaround with buggy BIOSes) towards that same
point. We currently have no _noirq() hooks, so end of _late()
hooks is the best we can do right now.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:10 +0000 (21:50 +0300)]
drm/i915/pm: Simplify pm hook documentation
Stop spelling out each variant of the hook ("" vs. "_late" vs.
"_early") and just say eg. "@thaw*" to indicate all of them.
Avoids having to update the docs whenever we start/stop using
one of the variants.