Use the large-page metadata to avoid pointless attempts to search for a
shadow page.
If the target GFN falls within a range where a large page is allowed,
then there cannot be a shadow page for that GFN; a shadow page in the
range would itself disallow using a large page. In that case, there
is nothing to unsync and mmu_try_to_unsync_pages() can return
immediately.
This is always true for the TDP MMU without nested TDP, and holds for a
significant fraction of cases with shadow paging even when all SPs are 4K.
For shadow paging, this optimization theoretically avoids work for about
1/e ~= 37% of GFNs, assuming one guest page table per 2M of memory and
that each GPT falls randomly into the 2M memory buckets. In a simple
test setup, it skipped unsync in a much higher percentage of cases,
mainly because the guest buddy allocator clusters GPTs into fewer
buckets.
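The 1/e estimate above can be checked with a quick balls-into-bins
simulation (illustrative only, not part of the patch): with one GPT per
2M of memory and each GPT landing in a uniformly random 2M bucket, the
expected fraction of empty buckets is (1 - 1/n)^n, which approaches
1/e ~= 0.368 for large n.

```python
import math
import random

def empty_bucket_fraction(n_buckets, seed=0):
    """Drop n_buckets GPTs into n_buckets 2M-aligned buckets uniformly
    at random; return the fraction of buckets left empty, i.e. GFN
    ranges where a hugepage remains allowed and unsync can be skipped."""
    rng = random.Random(seed)
    occupied = {rng.randrange(n_buckets) for _ in range(n_buckets)}
    return 1 - len(occupied) / n_buckets

frac = empty_bucket_fraction(1_000_000)
print(f"empty fraction: {frac:.4f}, 1/e: {math.exp(-1):.4f}")
```

The observed test-setup numbers are higher than this bound because the
guest buddy allocator clusters GPTs rather than scattering them
uniformly, leaving more buckets empty.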
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
Link: https://patch.msgid.link/20260123090304.32286-2-jiangshanlai@gmail.com
[sean: check for hugepage after write-tracking, update comment]
Signed-off-by: Sean Christopherson <seanjc@google.com>
if (kvm_gfn_is_write_tracked(kvm, slot, gfn))
return -EPERM;
+ /*
+ * Only 4KiB mappings can become unsync, and KVM disallows hugepages
+ * when accounting 4KiB shadow pages. Upper-level gPTEs are always
+ * write-protected (see above), thus if the gfn can be mapped with a
+ * hugepage and isn't write-tracked, it can't have a shadow page.
+ */
+ if (!lpage_info_slot(gfn, slot, PG_LEVEL_2M)->disallow_lpage)
+ return 0;
+
/*
* The page is not write-tracked, mark existing shadow pages unsync
* unless KVM is synchronizing an unsync SP. In that case, KVM must