xe_sync_in_fence_get() uses the same kind of mismatch between fence
array allocation and looping logic that was previously noted and fixed
by commit 0a4c2ddc711a ("drm/xe/vm: Use for_each_tlb_inval() to
calculate invalidation fences"). As with that commit, the mismatch
doesn't cause any problem at the moment since for_each_tlb_inval()
iterates the same number of times as XE_MAX_GT_PER_TILE (2). However,
we don't want to assume that these will always match in the future, so
switch to using for_each_tlb_inval() in both places to future-proof
the code.
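
For illustration only, a self-contained sketch of the bug class being
avoided here (MAX_UNITS and for_each_unit() are made-up stand-ins for
XE_MAX_GT_PER_TILE and for_each_tlb_inval(), not the driver's actual
code): sizing an allocation from a constant while filling it via a loop
macro only works as long as the macro's trip count happens to equal
that constant. Deriving the count from the same macro removes the
hidden coupling.

	/* Hypothetical stand-ins; not the xe driver's definitions. */
	#include <stdio.h>
	#include <stdlib.h>

	#define MAX_UNITS 2
	#define for_each_unit(i) \
		for ((i) = 0; (i) < MAX_UNITS; (i)++)

	int main(void)
	{
		int *slots, count, filled = 0, i;

		/*
		 * Fragile sizing: assumes for_each_unit() visits exactly
		 * MAX_UNITS entries.  True today, but nothing enforces it:
		 *
		 *	count = 1 + MAX_UNITS;
		 *
		 * Robust sizing: derive the count from the same loop macro
		 * that the fill loop below uses.
		 */
		count = 1;
		for_each_unit(i)
			count++;

		slots = malloc(count * sizeof(*slots));
		if (!slots)
			return 1;

		/* Fill loop: same shape as the counting loop above. */
		slots[filled++] = 0;
		for_each_unit(i)
			slots[filled++] = i + 1;

		printf("allocated %d, filled %d\n", count, filled);
		free(slots);
		return 0;
	}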
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patch.msgid.link/20251202222551.1858930-2-matthew.d.roper@intel.com
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
 	struct xe_tile *tile;
 	u8 id;
-	for_each_tile(tile, vm->xe, id)
-		num_fence += (1 + XE_MAX_GT_PER_TILE);
+	for_each_tile(tile, vm->xe, id) {
+		num_fence++;
+		for_each_tlb_inval(i)
+			num_fence++;
+	}
 	fences = kmalloc_array(num_fence, sizeof(*fences),
 			       GFP_KERNEL);