From 992b54bd083c5bee24ff7cc35991388ab08598c4 Mon Sep 17 00:00:00 2001
From: Rick Edgecombe <rick.p.edgecombe@intel.com>
Date: Thu, 14 Mar 2024 14:29:02 -0700
Subject: KVM: x86/mmu: x86: Don't overflow lpage_info when checking attributes
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Rick Edgecombe <rick.p.edgecombe@intel.com>

commit 992b54bd083c5bee24ff7cc35991388ab08598c4 upstream.

Fix KVM_SET_MEMORY_ATTRIBUTES to not overflow the lpage_info array and
trigger a KASAN splat, as seen in the private_mem_conversions_test selftest.

When memory attributes are set on a GFN range, that range will have
specific properties applied to the TDP. A huge page cannot be used when
the attributes are inconsistent, so they are disabled for those specific
huge pages. For internal KVM reasons, huge pages are also not allowed to
span adjacent memslots regardless of whether the backing memory could be
mapped as huge.

Which GFNs support which huge page sizes is tracked by 'lpage_info', an
array of arrays of 'kvm_lpage_info' structs on the memslot. Each index of
lpage_info contains a vmalloc allocated array of these for a specific
supported page size. The kvm_lpage_info denotes whether a specific huge
page (GFN and page size) on the memslot is supported. These arrays include
indices for unaligned head and tail huge pages.

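As a rough reference, the pieces involved look like this (condensed from
the x86 KVM definitions and lightly commented here; treat it as a sketch
rather than the exact upstream code):

  /* One entry per potential huge page of a given size in a memslot. */
  struct kvm_lpage_info {
          int disallow_lpage;     /* non-zero: this huge page can't be used */
  };

  struct kvm_arch_memory_slot {
          /* One vmalloc'd array per supported huge page size (2M, 1G). */
          struct kvm_lpage_info *lpage_info[KVM_NR_PAGE_SIZES - 1];
          /* other fields omitted */
  };

  /* Index of gfn's entry in the per-level array, relative to the memslot. */
  static inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level)
  {
          return (gfn >> KVM_HPAGE_GFN_SHIFT(level)) -
                 (base_gfn >> KVM_HPAGE_GFN_SHIFT(level));
  }
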
Preventing huge pages from spanning adjacent memslots is covered by
incrementing the count in head and tail kvm_lpage_info when the memslot is
allocated, but disallowing huge pages for memory that has mixed attributes
has to be done in a more complicated way. During the
KVM_SET_MEMORY_ATTRIBUTES ioctl KVM updates lpage_info for each memslot in
the range that has mismatched attributes. KVM does this a memslot at a
time, and marks a special bit, KVM_LPAGE_MIXED_FLAG, in the kvm_lpage_info
for any huge page with mixed attributes. This bit is essentially a
permanently elevated count. So a huge page will not be mapped for a GFN at
that page size if the count is elevated for either reason: the head or
tail huge page is unaligned to the memslot, or KVM_LPAGE_MIXED_FLAG is set
because the page has mixed attributes.

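The mixed-attribute tracking itself is roughly the following (simplified
from arch/x86/kvm/mmu/mmu.c, so a sketch rather than a verbatim quote):

  #define KVM_LPAGE_MIXED_FLAG    BIT(31)

  static struct kvm_lpage_info *lpage_info_slot(gfn_t gfn,
                  const struct kvm_memory_slot *slot, int level)
  {
          unsigned long idx = gfn_to_index(gfn, slot->base_gfn, level);

          /* lpage_info[0] is the 2M array, lpage_info[1] the 1G array. */
          return &slot->arch.lpage_info[level - 2][idx];
  }

  static void hugepage_set_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
                                 int level)
  {
          /* Behaves like a permanently elevated count for this huge page. */
          lpage_info_slot(gfn, slot, level)->disallow_lpage |= KVM_LPAGE_MIXED_FLAG;
  }

  static bool hugepage_test_mixed(struct kvm_memory_slot *slot, gfn_t gfn,
                                  int level)
  {
          return lpage_info_slot(gfn, slot, level)->disallow_lpage &
                 KVM_LPAGE_MIXED_FLAG;
  }
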
To determine whether a huge page has consistent attributes, the
KVM_SET_MEMORY_ATTRIBUTES operation checks an xarray to make sure it
consistently has the incoming attribute. Since level - 1 huge pages are
aligned to level huge pages, it employs an optimization. As long as the
level - 1 huge pages are checked first, it can just check these and assume
that if each level - 1 huge page contained within the level sized huge
page is not mixed, then the level sized huge page is not mixed. This
optimization happens in the helper hugepage_has_attrs().

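The optimization looks roughly like this (again simplified; the
lpage_info_slot() lookup behind hugepage_test_mixed() on the level - 1
array is the indexing that goes wrong below):

  static bool hugepage_has_attrs(struct kvm *kvm, struct kvm_memory_slot *slot,
                                 gfn_t gfn, int level, unsigned long attrs)
  {
          const unsigned long start = gfn;
          const unsigned long end = start + KVM_PAGES_PER_HPAGE(level);

          /* The smallest huge page size is checked against the xarray itself. */
          if (level == PG_LEVEL_2M)
                  return kvm_range_has_memory_attributes(kvm, start, end, attrs);

          /* Larger sizes only consult the already-updated level - 1 entries. */
          for (gfn = start; gfn < end; gfn += KVM_PAGES_PER_HPAGE(level - 1)) {
                  if (hugepage_test_mixed(slot, gfn, level - 1) ||
                      attrs != kvm_get_memory_attributes(kvm, gfn))
                          return false;
          }
          return true;
  }
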
Unfortunately, although the kvm_lpage_info array representing page size
'level' will contain an entry for an unaligned tail page of size level,
the array for level - 1 will not contain an entry for each GFN at page
size level. The level - 1 array will only contain an index for any
unaligned region covered by the level - 1 huge page size, which can be a
smaller region. So this causes the optimization to overflow the level - 1
kvm_lpage_info array and perform a vmalloc out-of-bounds read.

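As a hypothetical worked example (the numbers here are illustrative, not
taken from the report below): take a 128MB memslot whose base GFN is 1GB
aligned.

  2MB-level lpage_info entries:  128MB / 2MB = 64
  1GB-level lpage_info entries:  1 (the single unaligned head/tail page)
  2MB-sized steps taken when checking that 1GB page: 1GB / 2MB = 512

So the level - 1 walk indexes entries 0..511 of a 64-entry vmalloc'd
array, reading far past the end of the allocation.
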
In some cases of head and tail pages where an overflow could happen,
callers skip the operation completely because, as discussed earlier,
KVM_LPAGE_MIXED_FLAG is not required to prevent huge pages. But for
memslots that are smaller than the 1GB page size, KVM does call
hugepage_has_attrs(). In this case the huge page is both the head and tail
page. The issue can be observed simply by compiling the kernel with
CONFIG_KASAN_VMALLOC and running the selftest
"private_mem_conversions_test", which produces output like the following:

BUG: KASAN: vmalloc-out-of-bounds in hugepage_has_attrs+0x7e/0x110
Read of size 4 at addr ffffc900000a3008 by task private_mem_con/169
Call Trace:
  dump_stack_lvl
  print_report
  ? __virt_addr_valid
  ? hugepage_has_attrs
  ? hugepage_has_attrs
  kasan_report
  ? hugepage_has_attrs
  hugepage_has_attrs
  kvm_arch_post_set_memory_attributes
  kvm_vm_ioctl

It is a little ambiguous whether the unaligned head page (in the bug case
also the tail page) should be expected to have KVM_LPAGE_MIXED_FLAG set.
It is not functionally required, as the unaligned head/tail pages will
already have their kvm_lpage_info count incremented. The comments imply
not setting it on unaligned head pages is intentional, so fix the callers
to skip trying to set KVM_LPAGE_MIXED_FLAG in this case, and in doing so
not call hugepage_has_attrs().

Cc: stable@vger.kernel.org
Fixes: 90b4fe17981e ("KVM: x86: Disallow hugepages when memory attributes are mixed")
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Reviewed-by: Chao Peng <chao.p.peng@linux.intel.com>
Link: https://lore.kernel.org/r/20240314212902.2762507-1-rick.p.edgecombe@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 arch/x86/kvm/mmu/mmu.c |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -7388,7 +7388,8 @@ bool kvm_arch_post_set_memory_attributes
 			 * by the memslot, KVM can't use a hugepage due to the
 			 * misaligned address regardless of memory attributes.
 			 */
-			if (gfn >= slot->base_gfn) {
+			if (gfn >= slot->base_gfn &&
+			    gfn + nr_pages <= slot->base_gfn + slot->npages) {
 				if (hugepage_has_attrs(kvm, slot, gfn, level, attrs))
 					hugepage_clear_mixed(slot, gfn, level);
 				else