From: Arvind Yadav
Date: Thu, 26 Mar 2026 13:08:30 +0000 (+0530)
Subject: drm/xe/bo: Block CPU faults to purgeable buffer objects
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=9a16fdf5dca53326a4234826ce97727d53511aa2;p=thirdparty%2Fkernel%2Flinux.git

drm/xe/bo: Block CPU faults to purgeable buffer objects

Block CPU page faults to buffer objects marked as purgeable (DONTNEED)
or already purged. Once a BO is marked DONTNEED, its contents can be
discarded by the kernel at any time, making access undefined behavior.
Return VM_FAULT_SIGBUS immediately to fail consistently instead of
allowing erratic behavior where access sometimes works (if not yet
purged) and sometimes fails (if purged).

For DONTNEED BOs:
- Block new CPU faults with SIGBUS to prevent undefined behavior.
- Existing CPU PTEs may still work until TLB flush, but new faults
  fail immediately.

For PURGED BOs:
- Backing store has been reclaimed, making CPU access invalid.
- Without this check, accessing existing mmap mappings would trigger
  xe_bo_fault_migrate() on freed backing store, causing kernel hangs
  or crashes.

The purgeable check is added to both CPU fault paths:
- Fastpath (xe_bo_cpu_fault_fastpath): Returns VM_FAULT_SIGBUS
  immediately under the dma-resv lock, preventing attempts to
  migrate/validate DONTNEED/purged pages.
- Slowpath (xe_bo_cpu_fault): Returns -EFAULT under the drm_exec lock,
  converted to VM_FAULT_SIGBUS.
Cc: Matthew Brost
Cc: Himal Prasad Ghimiray
Reviewed-by: Thomas Hellström
Signed-off-by: Arvind Yadav
Signed-off-by: Matthew Brost
Link: https://patch.msgid.link/20260326130843.3545241-5-arvind.yadav@intel.com
---

diff --git a/drivers/gpu/drm/xe/xe_bo.c b/drivers/gpu/drm/xe/xe_bo.c
index 09f7f6f12c4c6..152bdea13d7fb 100644
--- a/drivers/gpu/drm/xe/xe_bo.c
+++ b/drivers/gpu/drm/xe/xe_bo.c
@@ -1982,6 +1982,16 @@ static vm_fault_t xe_bo_cpu_fault_fastpath(struct vm_fault *vmf, struct xe_devic
 	if (!dma_resv_trylock(tbo->base.resv))
 		goto out_validation;
 
+	/*
+	 * Reject CPU faults to purgeable BOs. DONTNEED BOs can be purged
+	 * at any time, and purged BOs have no backing store. Either case
+	 * is undefined behavior for CPU access.
+	 */
+	if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+		ret = VM_FAULT_SIGBUS;
+		goto out_unlock;
+	}
+
 	if (xe_ttm_bo_is_imported(tbo)) {
 		ret = VM_FAULT_SIGBUS;
 		drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");
@@ -2072,6 +2082,15 @@ static vm_fault_t xe_bo_cpu_fault(struct vm_fault *vmf)
 		if (err)
 			break;
 
+		/*
+		 * Reject CPU faults to purgeable BOs. DONTNEED BOs can be
+		 * purged at any time, and purged BOs have no backing store.
+		 */
+		if (xe_bo_madv_is_dontneed(bo) || xe_bo_is_purged(bo)) {
+			err = -EFAULT;
+			break;
+		}
+
 		if (xe_ttm_bo_is_imported(tbo)) {
 			err = -EFAULT;
 			drm_dbg(&xe->drm, "CPU trying to access an imported buffer object.\n");