For now the guardband is equal to the vblank length, so ideally it should
be computed as the difference between the vmin vtotal and vactive. However,
since a few lines are reserved as SCL, we need to account for them while
computing the guardband.
Since the vblank start is moved SCL lines past vactive, the delta between
the vmin vtotal and the new vblank start was used to account for this.
Now that SCL is explicitly tracked using the `set_context_latency` member,
use it directly in the guardband calculation.
In the future, when the guardband is shortened or optimized, we may need
to factor in both the change in the vblank start and SCL lines. For now,
explicitly accounting for SCL is sufficient.
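Roughly, the change amounts to the following (a sketch only; vmin_vtotal
stands in for whatever expression yields the vmin-based vtotal, and the
field names are assumptions based on the patch introducing
set_context_latency):

  /* before: SCL hidden inside the delayed vblank start */
  guardband = vmin_vtotal - adjusted_mode->crtc_vblank_start;

  /* after: SCL accounted for explicitly */
  guardband = vmin_vtotal - adjusted_mode->crtc_vdisplay -
              crtc_state->set_context_latency;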
v2: Fix typo: replace adjusted_mode->vdisplay with
adjusted_mode->crtc_vdisplay. (Ville)
drm/i915/display: Add set_context_latency to crtc_state
'Set context latency' (SCL, Window W2) is defined as the number of lines
before the double buffering point, which are required to complete
programming of the registers, typically when DSB is used to program the
display registers.
Since we are not using this window for programming the registers, this
is mostly set to 0, unless there is a requirement for a few cases related
to PSR/PR where the 'set context latency' must be at least 1.
Currently we are using the 'set context latency' (if required) implicitly,
by moving the vblank start by the required amount and then measuring the
delay, i.e. the difference between the undelayed vblank start and the
delayed vblank start.
Since our guardband matches the vblank length, this was not a problem, as
the difference between the undelayed vblank and the delayed vblank was at
most equal to the 'set context latency' lines.
However, if we want to optimize the guardband, the difference between the
undelayed and the delayed vblank will be large and we cannot use this
difference as the 'set context latency' lines.
To make way for this optimization of guardband, formally introduce the
'set context latency' or SCL and track it as a new member
`set_context_latency` of the structure intel_crtc_state.
Eventually, all references of vblank delay where we mean to use set
context latency will be replaced by this new `set_context_latency`
member.
Note: on TGL the TRANS_SET_CONTEXT_LATENCY register doesn't exist to
account for the SCL. However, the VBLANK_START-VACTIVE difference plays an
identical role here, i.e. it can be used to create the SCL window ahead of
the undelayed vblank.
During readout, since there is no specific register to read out the SCL on
TGL, use the difference between the vblank start and vactive to populate
the new member.
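The readout then boils down to something like this (sketch only; the
surrounding readout code is assumed):

  /*
   * No TRANS_SET_CONTEXT_LATENCY register on TGL: recover the SCL
   * window from the VBLANK_START - VACTIVE difference instead.
   */
  crtc_state->set_context_latency =
      adjusted_mode->crtc_vblank_start - adjusted_mode->crtc_vdisplay;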
v2:
- Use u16 for set_context_latency. (Ville)
- s/vblank_delay/set_context_latency. (Ville)
- Meld the changes for TGL with this change. (Ville)
v3:
- Update comment to clarify the TGL case. (Ville)
- Fix typo in commit message.
Rename intel_psr_min_vblank_delay to intel_psr_min_set_context_latency
to reflect that it provides the minimum value for 'Set context
latency' (SCL or Window W2) for PSR/Panel Replay to work correctly across
different platforms.
Add i915_gem_fence_wait_priority_display() helper to wait with
I915_PRIORITY_DISPLAY. This drops the intel_plane.c dependency on
i915_scheduler_types.h, and allows us to remove the compat header from
xe.
drm/i915/vrr: Refactor VRR live status wait into common helper
Add a helper to consolidate timeout handling and error logging when waiting
for VRR live status to clear. Log an error message if the VRR live status
bit fails to clear within the timeout.
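Roughly along these lines (a sketch; the register macro, bit name, timeout
and the intel_de_wait_for_clear() signature are all assumptions here):

  static void intel_vrr_wait_for_live_status_clear(struct intel_display *display,
                                                   enum transcoder cpu_transcoder)
  {
      if (intel_de_wait_for_clear(display, TRANS_VRR_STATUS(cpu_transcoder),
                                  VRR_STATUS_VRR_EN_LIVE, 1000))
          drm_err(display->drm,
                  "Timed out waiting for VRR live status to clear\n");
  }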
Jani Nikula [Tue, 23 Sep 2025 14:31:08 +0000 (17:31 +0300)]
drm/i915/irq: split ILK display irq handling
Split out display irq handling on ilk. Since the master IRQ enable is in
DEIIR, we'll need to do this in two parts. First, add
ilk_display_irq_master_disable() to disable master and south interrupts,
and second, add (repurposed) ilk_display_irq_handler() to finish display
irq handling.
It's not the prettiest thing you ever saw, but improves separation of
display irq handling. And removes HAS_PCH_NOP() and DISPLAY_VER() checks
from core irq code.
v2:
- Separate ilk_display_irq_master_enable() (Ville)
- Use _fw mmio accessors (Ville)
Jani Nikula [Tue, 23 Sep 2025 14:31:07 +0000 (17:31 +0300)]
drm/i915/irq: move check for HAS_HOTPLUG() inside i9xx_hpd_irq_ack()
We want to avoid using the display dependent HAS_HOTPLUG() in generic
irq code. Since the enabling of I915_DISPLAY_PORT_INTERRUPT depends on
HAS_HOTPLUG() to begin with, we don't really expect to get the irqs for
!HAS_HOTPLUG(). At least in theory, checking for HAS_HOTPLUG() inside
i9xx_hpd_irq_ack() should not have any impact.
Jani Nikula [Tue, 23 Sep 2025 14:31:05 +0000 (17:31 +0300)]
drm/i915/irq: initialize gen2_imr_mask in terms of enable_mask
Instead of initializing gen2_imr_mask and enable_mask independently, use
the latter for initializing the former. This also highlights the
differences in the masks, i.e. what's set to enable_mask after it's been
used to initialize gen2_imr_mask.
drm/i915/xe3: Restrict PTL intel_encoder_is_c10phy() to only PHY A
On PTL, no combo PHY is connected to PORT B. However, PORT B can
still be used for Type-C and will utilize the C20 PHY for eDP
over Type-C. In such configurations, VBTs also enumerate PORT B.
This leads to issues where PORT B is incorrectly identified as using the
C10 PHY, due to the assumption that returning true for PORT B in
intel_encoder_is_c10phy() would not cause problems.
From PTL's perspective, only PORT A/PHY A uses the C10 PHY.
Update the helper intel_encoder_is_c10phy() to return true only for
PORT A/PHY A on PTL.
v2: Change the condition code style for ptl/wcl
Bspec: 72571,73944
Fixes: 9d10de78a37f ("drm/i915/wcl: C10 phy connected to port A and B")
Signed-off-by: Dnyaneshwar Bhadane <dnyaneshwar.bhadane@intel.com>
Reviewed-by: Gustavo Sousa <gustavo.sousa@intel.com>
Signed-off-by: Suraj Kandpal <suraj.kandpal@intel.com>
Link: https://lore.kernel.org/r/20250922150317.2334680-4-dnyaneshwar.bhadane@intel.com
drm/i915/display: Add definition for wcl as subplatform
We will need to differentiate between WCL and PTL in
intel_encoder_is_c10phy(). Since WCL and PTL use the same display
architecture, let's define WCL as a subplatform of PTL to allow the
differentiation.
v2: Update commit message and reorder wcl define (Gustavo)
drm/pcids: Split PTL pciids group to make wcl subplatform
To form the WCL platform as a subplatform of PTL in the definitions,
the WCL PCI IDs are split into a separate group from the PTL ones.
So update the pciidlist struct to cover all the PCI IDs.
v2:
- Squash the wcl description into a single patch for display and xe. (Jani, Gustavo)
Ville Syrjälä [Fri, 19 Sep 2025 19:30:00 +0000 (22:30 +0300)]
drm/i915: Make sure wm block/lines are non-decreasing
The watermark algorithm sometimes produces results where higher
watermark levels have smaller blocks/lines watermarks than the lower
levels. That doesn't really make sense as the corresponding latencies
are supposed to be non-decreasing. It's unclear how the hardware
responds to such watermark values, so it seems better to avoid that
case and just make sure the values are always non-decreasing.
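Conceptually the fixup is just a monotonic clamp across the levels, e.g.
(self-contained sketch, struct layout invented for illustration):

  struct wm_level {
      unsigned int blocks;
      unsigned int lines;
  };

  static void make_wm_non_decreasing(struct wm_level *wm, int num_levels)
  {
      int level;

      /* A higher level covers a larger latency, so it must never need
       * fewer blocks/lines than the level below it. */
      for (level = 1; level < num_levels; level++) {
          wm[level].blocks = max(wm[level].blocks, wm[level - 1].blocks);
          wm[level].lines = max(wm[level].lines, wm[level - 1].lines);
      }
  }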
Ville Syrjälä [Fri, 19 Sep 2025 19:29:56 +0000 (22:29 +0300)]
drm/i915: Extract sanitize_wm_latency()
Pull the "zero out invalid WM latencies" stuff into a helper.
Mainly to avoid mixing higher level and lower level stuff in
the same adjust_wm_latency() function.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:55 +0000 (22:29 +0300)]
drm/i915: Use increase_wm_latency() for the 16Gb DIMM w/a
Bump the latency for all watermark levels in the
16Gb+ DIMM w/a. The spec does ask us to do it only for level
0, but it seems more sane to bump all the levels. If the actual
memory access is slower, then the wakeup (WM1+) should also
potentially happen earlier.
This also avoids the theoretical case that WM0 would get bumped
higher than WM1+. Not that it is likely to happen because the WM0
latency is always significantly lower than the WM1 latency.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:53 +0000 (22:29 +0300)]
drm/i915: Extract multiply_wm_latency() from skl_read_wm_latency()
I want skl_read_wm_latency() to just do what it says on
the tin, i.e. read the latency values from the pcode mailbox.
Move the DG2 "multiply by two" trick elsewhere.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:52 +0000 (22:29 +0300)]
drm/i915: Move adjust_wm_latency() out from {mtl,skl}_read_wm_latency()
{mtl,skl}_read_wm_latency() are doing way too many things for
my liking. Move the adjustment stuff out into the caller.
This also gives us one place where we specify the 'read_latency'
for all the platforms, instead of two places.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:50 +0000 (22:29 +0300)]
drm/i915: Tweak the read latency fixup code
If WM0 latency is zero we need to bump it (and the WM1+ latencies)
by a fixed amount. But any WM1+ level with zero latency must
not be touched, since that indicates that the corresponding WM level
isn't supported.
Currently the loop doing that adjustment does work, but only because
the previous loop modified the num_levels used as the loop boundary.
This all seems a bit too fragile. Remove the num_levels adjustment
and instead adjust the read latency loop to abort when it encounters
a zero latency value.
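The resulting loop is roughly (sketch; variable names assumed):

  /*
   * If the WM0 latency reads back as zero, bump it and the following
   * levels by the read latency, but stop at the first WM1+ level whose
   * latency is zero: that level isn't supported and must not be given
   * a latency.
   */
  if (latency[0] == 0) {
      for (level = 0; level < num_levels; level++) {
          if (level > 0 && latency[level] == 0)
              break;
          latency[level] += read_latency;
      }
  }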
Ville Syrjälä [Fri, 19 Sep 2025 19:29:49 +0000 (22:29 +0300)]
drm/i915: Apply the 16Gb DIMM w/a only for the platforms that need it
Currently the code assumes that every platform except dg2 need the
16Gb DIMM w/a, while in reality it's only needed by skl and icl (and
derivatives). Switch to a more specific platform check.
Ville Syrjälä [Fri, 19 Sep 2025 19:29:48 +0000 (22:29 +0300)]
drm/i915/dram: Also apply the 16Gb DIMM w/a for larger DRAM chips
While the spec only asks us to do the WM0 latency bump for 16Gb
DRAM devices I believe we should apply it for larger DRAM chips.
At the time the w/a was added there were no larger chips on
the market, but I think I've seen at least 32Gb DDR4 chips
being available these days.
Whether it's possible to actually find suitable DIMMs for the
affected systems with larger chips I don't know. Also it's
not known whether the 1 usec latency bump would be sufficient
for larger chips. Someone would need to find such DIMMs and
test this. Fortunately we do have a bit of extra latency already
with the 1 usec bump, as the actual requirement was 0.4 usec
for 16Gb chips.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:15 +0000 (21:50 +0300)]
drm/i915/pm: Drop redundant pci stuff from suspend/resume paths
I don't think there should be any need for us to call any of
pci_enable_device(), pci_disable_device() or pci_set_master()
from the suspend/resume paths. The config space save/restore should
take care of all of this.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:14 +0000 (21:50 +0300)]
drm/i915/pm: Allow drivers/pci to manage our pci state normally
Stop doing the pci_save_state(), except when we need to prevent
D3 due to BIOS bugs, so that the code in drivers/pci is allowed
to manage the state of the PCI device. Less chance of something
getting left by the wayside by i915 if/when things change in
drivers/pci.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:13 +0000 (21:50 +0300)]
drm/i915/pm: Do pci_restore_state() in switcheroo resume hook
Since this switcheroo garbage bypasses all the core pm we
have to manually manage the pci state. To that end add the
missing pci_restore_state() to the switcheroo resume hook.
We already have the pci_save_state() counterpart on the
suspend side.
Arguably none of this code should exist in the driver
in the first place, and instead the entire switcheroo
mechanism should be rewritten and properly integrated into
core pm code...
Ville Syrjälä [Fri, 19 Sep 2025 18:50:12 +0000 (21:50 +0300)]
drm/i915/pm: Move the hibernate+D3 quirk stuff into noirq() pm hooks
If the driver doesn't call pci_save_state() drivers/pci will
normally save+power manage the device from the _noirq() pm hooks.
We can't let that happen as some old BIOSes fail to hibernate
when the device is in D3. However, we can get very close to
the standard behaviour by doing our explicit pci_save_state()
and pci_set_power_state() stuff from driver provided _noirq()
hooks.
This results in a change of behaviour where we no longer go
into D3 at the end of freeze_late, so when it comes time
to thaw() we'll already be in D0, and thus we can drop the
explicit pci_set_power_state(D0) call.
Presumably switcheroo suspend will want to go into D3 so
call the _noirq() stuff from the switcheroo suspend hook,
and since we dropped the pci_set_power_state(D0) from
resume_early() we'll need to add one back into the
switcheroo resume hook.
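Roughly (the hook and the quirk predicate names are made up for
illustration; pci_save_state()/pci_set_power_state() are the regular PCI
core calls):

  static int i915_pm_freeze_noirq(struct device *kdev)
  {
      struct pci_dev *pdev = to_pci_dev(kdev);

      pci_save_state(pdev);

      /* Old BIOSes may fail to hibernate with the device in D3,
       * so leave it in D0 when the quirk applies. */
      if (!needs_hibernate_d3_quirk(pdev))
          pci_set_power_state(pdev, PCI_D3hot);

      return 0;
  }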
Ville Syrjälä [Fri, 19 Sep 2025 18:50:11 +0000 (21:50 +0300)]
drm/i915/pm: Hoist pci_save_state()+pci_set_power_state() to the end of pm _late() hook
drivers/pci does the pci_save_state()+pci_set_power_state() from
the _noirq() pm hooks. Move our manual calls (needed for the
hibernate vs. D3 workaround with buggy BIOSes) towards that same
point. We currently have no _noirq() hooks, so end of _late()
hooks is the best we can do right now.
Ville Syrjälä [Fri, 19 Sep 2025 18:50:10 +0000 (21:50 +0300)]
drm/i915/pm: Simplify pm hook documentation
Stop spelling out each variant of the hook ("" vs. "_late" vs.
"_early") and just say eg. "@thaw*" to indicate all of them.
Avoids having to update the docs whenever we start/stop using
one of the variants.
drm/i915/ddi: Guard reg_val against an INVALID_TRANSCODER
Currently we check if the encoder is INVALID or -1 and throw a
WARN_ON, but we still end up writing the temp value, which will
overflow and corrupt the whole programmed value.
v2:
- Assign a bogus transcoder to master in case we get an
INVALID_TRANSCODER (Jani)
Rename intel_vrr_flipline_offset() to intel_vrr_vmin_flipline_offset()
to better reflect the fact that it gives us the minimum offset allowed
between vmin and flipline.
Ville Syrjälä [Thu, 18 Sep 2025 23:22:25 +0000 (02:22 +0300)]
drm/i915/vrr: Hide the ICL/TGL intel_vrr_flipline_offset() mangling better
ICL/TGL VRR hardware won't allow us to program flipline==vmin. If we do
that the actual effect will be the same as if we had programmed
flipline=vmin+1, which would make the minimum vtotal one scanline taller
than expected.
To compensate for this we reduce vmin by one, and then program
flipline=vmin+1. So we end up with a flipline value that matches
the expected minimum vtotal. Currently this adjustment happens
in intel_vrr_compute_config() which means that crtc_state->vrr.vmin
will no longer be directly usable for the remainder of the high
level VRR code. That is annoying at best, fragile at worst.
Hide the adjustment in low level code instead. This will allow most
of the higher level VRR code to remain blissfully ignorant about this
fact. Afterwards crtc_state->vrr.{vmin,flipline} will be equal
and match the minimum vtotal, exactly how things already work
on ADL+.
The only slight downside is that the actual register value will no
longer match crtc_state->vrr.vmin on ICL/TGL, but that may already
be the case on TGL because the register value will also have been
adjusted by the SCL.
Note that we must change the guardband calculation to account
for intel_vrr_extra_vblank_delay() explicitly. Previously that
was accidentally handled by the earlier vmin reduction by
intel_vrr_flipline_offset().
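In the low level code the adjustment then looks roughly like this (sketch,
variable names assumed):

  /*
   * ICL/TGL: flipline == vmin is not allowed, so program vmin one line
   * lower and keep flipline at the state's value. The effective minimum
   * vtotal is unchanged and crtc_state->vrr.{vmin,flipline} stay equal.
   */
  hw_vmin = crtc_state->vrr.vmin - 1;
  hw_flipline = crtc_state->vrr.flipline;  /* == vrr.vmin */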
drm/i915/dmc: explicitly sanitize num_entries from package_header
num_entries comes from package_header, which is read from an external
firmware blob and thus untrusted. In parse_dmc_fw_package() we assign
package_header->num_entries to a local variable, but the range check
still uses the struct field directly.
Switch the check to use the local copy instead. This makes the
sanitization explicit and avoids a redundant dereference.
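I.e. roughly (sketch; the bound's name is assumed):

  num_entries = package_header->num_entries;
  if (num_entries > max_entries)
      num_entries = max_entries;

  /* only the sanitized local copy is used from here on */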
Jani Nikula [Thu, 18 Sep 2025 13:38:35 +0000 (16:38 +0300)]
drm/i915/irq: add ilk_display_irq_reset()
Abstract ilk_display_irq_reset(), moving display related reset
there. This results in a slightly different order between GT and PCH
reset, hopefully with no impact.
Jani Nikula [Thu, 18 Sep 2025 12:25:45 +0000 (15:25 +0300)]
drm/i915/irq: use a dedicated IMR cache for gen 5-7
There are three groups of platforms using i915->irq_mask independently:
gen 2-4, VLV/CHV, and gen 5-7.
The gen 5-7 usage is primarily limited to display. Move its irq_mask
usage to struct intel_display as ilk_de_imr_mask for gen 5-7.
ilk_de_imr_mask could be put inside a union with vlv_imr_mask and
de_irq_mask[], but keep them separate to avoid accidental aliasing of
the values.
With this, we can also drop the irq_mask member from struct xe_device.
i915 and xe do different things on the failure path; i915 calls
drm_gem_object_put() while xe calls xe_bo_unpin_map_no_vm(). Add a
helper to enable further refactoring.
v2: Call drm_gem_object_put() on intel_fbdev_fb_bo_destroy()
Jani Nikula [Thu, 18 Sep 2025 08:40:52 +0000 (11:40 +0300)]
drm/i915/fbdev: make intel_framebuffer_create() error return handling explicit
It's sketchy to pass error pointers via to_intel_framebuffer(). It
probably works as long as struct intel_framebuffer embeds struct
drm_framebuffer at offset 0, but be explicit about it.
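Something along these lines (sketch; the exact call site is assumed):

  struct drm_framebuffer *fb;

  fb = intel_framebuffer_create(obj, &mode_cmd);
  if (IS_ERR(fb))
      return ERR_CAST(fb);

  return to_intel_framebuffer(fb);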
Jani Nikula [Thu, 18 Sep 2025 08:40:51 +0000 (11:40 +0300)]
drm/xe/fbdev: use the same 64-byte stride alignment as i915
For reasons unknown, xe uses XE_PAGE_SIZE alignment for
stride. Presumably it's just a confusion between stride alignment and bo
allocation size alignment. Switch to 64 byte alignment to, uh, align
with i915.
This will also be helpful in deduplicating and unifying the xe and i915
framebuffer allocation.
Ville Syrjälä [Wed, 17 Sep 2025 20:34:46 +0000 (23:34 +0300)]
drm/i915/vrr: Move the TGL SCL mangling of vmin/vmax/flipline deeper
Currently our crtc_state->vrr.{vmin,vmax,flipline} are mangled on
TGL to account for the SCL delay (the hardware requires this mangling
or the actual vtotals will become incorrect). Unfortunately this
means that one can't simply use these values directly in many places,
and instead we always have to go through functions that undo the
damage first. This is all rather fragile.
Simplify our lives a bit by hiding this mangling deeper inside
the low level VRR code, leaving the number stored in the crtc
state actually something that humans can use.
This does introduce a dependency as intel_vrr_get_config()
will now need to know the SCL value, which is read out in
intel_get_transcoder_timings(). I suppose we could simply
duplicate the SCL readout in both places should this become
a real hindrance. For now just leave a note around the
intel_get_transcoder_timings() call to remind us.
Ville Syrjälä [Wed, 17 Sep 2025 20:34:45 +0000 (23:34 +0300)]
drm/i915/vrr: Annotate some functions with "hw"
intel_vrr_fixed_rr_*() return values that have had the TGL
SCL adjustment applied to them. So we should indicate that these
values are only really useful when fed to the hardware. Add
a "_hw_" indicator to the function names to reflect that fact.
Ville Syrjälä [Wed, 17 Sep 2025 20:34:44 +0000 (23:34 +0300)]
drm/i915/vrr: Store guardband in crtc state even for icl/tgl
While ICL/TGL VRR hardware doesn't have a register for the guardband
value, our lives will be simpler if we pretend that it does. Start
by computing the guardband the same as on ADL+ and storing it in
the state, and only then we convert it into the corresponding
pipeline_full value that the hardware can consume. During readout we
do the opposite.
I was debating whether to completely remove pipeline_full from the
crtc state, but decided to keep it for now. Mainly because we check
it in vrr_params_changed() and simply checking the guardband instead
isn't 100% equivalent; Theoretically, framestart_delay may have
changed in the opposite direction to pipeline_full, keeping the
derived guardband value unchanged. One solution would be to also check
framestart_delay, but that feels a bit leaky abstraction wise.
Also note that we don't currently handle the maximum limit of 255
scanlines for the pipeline_full in a very nice way. The actual
position of the delayed vblank will move because of that clamping,
and so some of our code may get confused. But fixing this shall
wait for now.
Ville Syrjälä [Wed, 17 Sep 2025 20:34:43 +0000 (23:34 +0300)]
drm/i915/vrr: Readout framestart_delay earlier
In order to pretend that ICL/TGL VRR hardware has a similar guardband
as on ADL+ we'll need access to framestart_delay already during
intel_vrr_get_config(). Hoist the framestart_delay to an earlier point
to make that possible.
Ville Syrjälä [Wed, 17 Sep 2025 20:34:42 +0000 (23:34 +0300)]
drm/i915/vrr: Extract helpers to convert between guardband and pipeline_full values
I'd like to move towards a world where we can more or less
pretend that the ICL/TGL VRR hardware works the same way as
ADL+. To that end extract some helpers to convert between
the guardband and pipeline_full representations.
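For illustration, the relationship between the two representations looks
roughly like this (the framestart_delay term and the exact off-by-one are
assumptions here, not taken from the actual patch):

  static int vrr_guardband_to_pipeline_full(const struct intel_crtc_state *crtc_state,
                                            int guardband)
  {
      return guardband - crtc_state->framestart_delay - 1;
  }

  static int vrr_pipeline_full_to_guardband(const struct intel_crtc_state *crtc_state,
                                            int pipeline_full)
  {
      return pipeline_full + crtc_state->framestart_delay + 1;
  }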
Ville Syrjälä [Fri, 12 Sep 2025 13:59:26 +0000 (16:59 +0300)]
drm/i915: Defeature DRRS on LNL+
DRRS has been defeatured on LNL+. Adjust HAS_DOUBLE_BUFFERED_M_N()
to match.
Note that the M/N registers still appear to be double buffered under
the hood but the double buffer update point is now documented to be
just the last register write to the M/N registers, so it no longer
happens synchronously with the vblank/MSA transmission. We should
perhaps rename HAS_DOUBLE_BUFFERED_M_N() to more accurately reflect
reality, but couldn't come up with a decent name right now...
Jonathan Cavitt [Tue, 16 Sep 2025 17:43:19 +0000 (17:43 +0000)]
drm/i915/gvt: Remove unnecessary check in reg_is_mmio
The reg >= 0 check in reg_is_mmio is unnecessary because reg is always
greater than zero in all current use cases. This is obvious when
checking 'offset' by itself (as offset is defined as an unsigned
integer), but it's also true for the offset + bytes - 1 use case in
intel_vgpu_emulate_mmio_read because bytes > 0.
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
Reviewed-by: Andi Shyti <andi.shyti@linux.intel.com>
Reviewed-by: Zhenyu Wang <zhenyuw.linux@gmail.com>
Signed-off-by: Andi Shyti <andi.shyti@linux.intel.com>
Link: https://lore.kernel.org/r/20250916174317.76521-5-jonathan.cavitt@intel.com
Jani Nikula [Wed, 17 Sep 2025 13:52:00 +0000 (16:52 +0300)]
drm/i915: add note on VLV/CHV hpll_freq and czclk_freq caching
The caching at the initial read is a bit fragile in case, say, a further
refactoring starts reading the frequencies at a time where it's not
possible. Add a note about it.
drm/i915/alpm: Remove error handling from get_lfps_cycle_min_max_time
The getter for LFPS cycle min/max times unnecessarily checks for a faulty
port clock value. This doesn't make sense, as an erroneous port clock value
would have been noticed already at this point. Remove the check and always
use 140/800 ns when port clock > 540000.
Jani Nikula [Fri, 12 Sep 2025 14:48:51 +0000 (17:48 +0300)]
drm/i915: remove intel_update_czclk() as unnecessary
With vlv_clock_get_czclk() caching the result on first use, we no longer
need a separate initializer. Remove intel_update_czclk() as
unnecessary. Log the CZCLK in vlv_clock_get_czclk() instead.
Aaron Ma [Sat, 23 Aug 2025 12:16:47 +0000 (20:16 +0800)]
drm/i915/backlight: Honor VESA eDP backlight luminance control capability
The VESA AUX backlight fails to enable luminance based backlight
manipulation because luminance_set is false by default.
Fix it by using the luminance support control capability.
Fixes: e13af5166a359 ("drm/i915/backlight: Use drm helper to initialize edp backlight")
Signed-off-by: Aaron Ma <aaron.ma@canonical.com>
Reviewed-by: Suraj Kandpal <suraj.kandpal@intel.com>
Signed-off-by: Suraj Kandpal <suraj.kandpal@intel.com>
Link: https://lore.kernel.org/r/20250823121647.275834-1-aaron.ma@canonical.com
drm/i915: Drop unused struct_mutex from drm_i915_private
The struct_mutex field in drm_i915_private is no longer used anywhere in
the driver. This patch removes it completely to clean up unused code and
avoid confusion.
The struct_mutex will be removed from the DRM subsystem, as it was a
legacy BKL that was only used by the i915 driver. After review, it was
concluded that its usage was no longer necessary.
This patch updates various comments in the i915 codebase to
either remove or clarify references to struct_mutex, in order to
prevent future misunderstandings.
* i915_drv.h: Removed the statement that stolen_lock is the inner lock
when overlapping with struct_mutex, since struct_mutex is no longer used
in the driver.
* i915_gem.c: Removed parentheses suggesting usage of struct_mutex, which
is no longer used.
The struct_mutex will be removed from the DRM subsystem, as it was a
legacy BKL that was only used by the i915 driver. After review, it was
concluded that its usage was no longer necessary.
This patch updates a comment about struct_mutex in i915/display, in order
to prevent future misunderstandings.
* intel_fbc.c: Removed the statement that intel_fbc->lock is the inner
lock when overlapping with struct_mutex, since struct_mutex is no
longer used anywhere in the driver.
The struct_mutex will be removed from the DRM subsystem, as it was a
legacy BKL that was only used by the i915 driver. After review, it was
concluded that its usage was no longer necessary.
This patch updates various comments in the i915/gem and i915/gt
codebase to either remove or clarify references to struct_mutex, in
order to prevent future misunderstandings.
* i915_gem_execbuffer.c: Replace reference to struct_mutex with
vm->mutex, as noted in the eb_reserve() function, which states that
vm->mutex handles deadlocks.
* i915_gem_object.c: Replace struct_mutex with
drm_i915_gem_object->vma.lock; i915_gem_object_unbind() in i915_gem.c
states that this lock is what actually protects the unbind.
* i915_gem_shrinker.c: The correct lock is actually i915->mm.obj, as
already documented in its declaration.
* i915_gem_wait.c: The existing comment already mentioned that
struct_mutex was no longer necessary. Updated to refer to a generic
global lock instead.
* intel_reset_types.h: Cleaned up the comment text. Updated to refer to
a generic global lock instead.
Remove the use of struct_mutex from intel_guc_log.c and replace it with
a dedicated lock, guc_lock, defined within the intel_guc_log struct.
The struct_mutex was previously used to protect concurrent access and
modification of intel_guc_log->level in intel_guc_log_set_level().
However, it was suggested that the lock should reside within the
intel_guc_log struct itself.
Initialize the new guc_lock in intel_guc_log_init_early(), alongside the
existing relay.lock. The lock is initialized using drmm_mutex_init(),
which also ensures it is properly destroyed when the driver is unloaded.
drm/i915: Change mutex initialization in intel_guc_log
The intel_guc_log->relay.lock is currently initialized in
intel_guc_log_init_early(), but it lacks a corresponding destructor,
which can lead to a memory leak.
This patch replaces the use of mutex_init() with drmm_mutex_init(),
which ensures the lock is properly destroyed when the driver is
unloaded.
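In practice the init becomes something like this (sketch; how the
drm_device is reached from the log struct is assumed):

  /* before */
  mutex_init(&log->relay.lock);

  /* after: tied to the drm_device lifetime, destroyed automatically */
  ret = drmm_mutex_init(&i915->drm, &log->relay.lock);
  if (ret)
      return ret;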
Add a proper function for the display && HAS_DISPLAY(display) check to
hide indirect struct intel_display access via the macro from a number of
places outside of display code. This makes struct intel_display * an opaque
pointer in these places. All HAS_DISPLAY() usage is now constrained
within display.
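I.e. something along the lines of (the helper name here is a placeholder):

  bool intel_display_device_present(struct intel_display *display)
  {
      return display && HAS_DISPLAY(display);
  }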