A 10ms timeslice for long-running workloads is far too long and causes
significant jitter in benchmarks when the system is shared. Lower the
value to 5ms for preempt-fence VMs, where the resume step is costly
because memory must be moved around, and set it to zero for pagefault
VMs, since switching back to pagefault mode after dma-fence mode is
relatively fast.
Also change min_run_period_ms from 's64' to 'unsigned int', as negative
values make no sense.
Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: stable@vger.kernel.org
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Link: https://patch.msgid.link/20251212182847.1683222-2-matthew.brost@intel.com
(cherry picked from commit 33a5abd9a68394aa67f9618b20eee65ee8702ff4)
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
 	INIT_WORK(&vm->destroy_work, vm_destroy_work_func);
 	INIT_LIST_HEAD(&vm->preempt.exec_queues);
-	vm->preempt.min_run_period_ms = 10; /* FIXME: Wire up to uAPI */
+	if (flags & XE_VM_FLAG_FAULT_MODE)
+		vm->preempt.min_run_period_ms = 0;
+	else
+		vm->preempt.min_run_period_ms = 5;
 	for_each_tile(tile, xe, id)
 		xe_range_fence_tree_init(&vm->rftree[id]);

diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
 		 * @min_run_period_ms: The minimum run period before preempting
 		 * an engine again
 		 */
-		s64 min_run_period_ms;
+		unsigned int min_run_period_ms;
 		/** @exec_queues: list of exec queues attached to this VM */
 		struct list_head exec_queues;
 		/** @num_exec_queues: number exec queues attached to this VM */
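
For illustration, here is a minimal userspace sketch of the policy
min_run_period_ms expresses, assuming it simply guarantees an exec
queue a window of run time after each resume before it may be
preempted again. The names here (struct model_vm, model_may_preempt(),
last_resume_ns) are hypothetical and are not the xe driver's actual
internals; this models the policy, not the driver code.

/* Model of a minimum-run-period gate; hypothetical names throughout. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

struct model_vm {
	unsigned int min_run_period_ms;	/* 0 means preempt immediately */
	uint64_t last_resume_ns;	/* when the queue last resumed */
};

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

/* A queue may be preempted again only once min_run_period_ms has elapsed. */
static bool model_may_preempt(const struct model_vm *vm)
{
	uint64_t min_ns = (uint64_t)vm->min_run_period_ms * 1000000ull;

	return now_ns() - vm->last_resume_ns >= min_ns;
}

int main(void)
{
	/* Pagefault VM: period of 0, preemption is never delayed. */
	struct model_vm fault_vm = { 0, now_ns() };
	/* Preempt-fence VM: 5ms of guaranteed run time after each resume. */
	struct model_vm pf_vm = { 5, now_ns() };

	printf("fault-mode preemptible now: %d\n", model_may_preempt(&fault_vm));
	printf("preempt-fence preemptible now: %d\n", model_may_preempt(&pf_vm));
	return 0;
}

With a zero period the gate is always open, and in either case the
value is a small non-negative millisecond count, which is why plain
'unsigned int' is sufficient for the field.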