From: Pierrick Bouvier
Date: Thu, 24 Jul 2025 16:11:42 +0000 (-0700)
Subject: system/physmem: fix use-after-free with dispatch
X-Git-Tag: v10.1.0-rc1~2^2~2
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=2865bf1c5795371ef441e9029bc22566ccff8299;p=thirdparty%2Fqemu.git

system/physmem: fix use-after-free with dispatch

A use-after-free bug was reported when booting a Linux kernel during the
PCI setup phase. It's quite hard to reproduce (it needs SMP, and is
favored by having several PCI devices with BARs and a specific Linux
config, the Debian default one in this case).

After investigation (see the associated bug ticket), it appears that,
under specific conditions, we might access a cached AddressSpaceDispatch
that has meanwhile been reclaimed by the RCU thread. In the Linux boot
scenario, memory regions are destroyed and recreated during the PCI
phase, exposing the bug.

The core of the issue is that we cache the dispatch associated with the
current cpu in cpu->cpu_ases[asidx].memory_dispatch. It is updated by
tcg_commit, which runs asynchronously on a given cpu. At some point, we
leave the RCU critical section and the RCU thread starts reclaiming the
dispatch, but tcg_commit has not been invoked yet, resulting in the
use-after-free.

It's not the first problem in this area: commit 0d58c660689 [1]
("softmmu: Use async_run_on_cpu in tcg_commit") already tried to address
it. It did a good job, but it seems we have found a specific situation
where it's not enough.

This patch takes a simple approach: remove the cached value creating the
issue, and make sure we always fetch the current mapping for the address
space, using address_space_to_dispatch(cpu->cpu_ases[asidx].as). It's
equivalent to:

    qatomic_rcu_read(&as->current_map)->dispatch;

This is not really costly: we just need two dereferences, including one
atomic (RCU) read, which is negligible considering we are already on the
MMU slow path anyway.

Note that tcg_commit is still needed, as it takes care of flushing the
TLB, removing previously mapped entries.

Another solution would be to cache values found directly under the
dispatch (dispatches themselves are not refcounted), keep an active
reference on the associated memory section, and release it when
appropriate (tricky). Given the time already spent debugging this area,
now and previously, I strongly prefer eliminating the root of the issue
instead of adding more complexity for a hypothetical performance gain.
RCU is precisely used to ensure good performance when reading data, so
caching is not as beneficial as it might seem, IMHO.

[1] https://gitlab.com/qemu-project/qemu/-/commit/0d58c660689f6da1e3feff8a997014003d928b3b
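To illustrate why re-reading the RCU-protected pointer is safe where a
cached copy is not, here is a minimal standalone sketch (not QEMU code:
the structs are toy stand-ins, qatomic_rcu_read() is approximated with a
C11 load-acquire, and the RCU reclaim of the old FlatView is only
simulated by a comment):

    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct AddressSpaceDispatch { int generation; } AddressSpaceDispatch;
    typedef struct FlatView { AddressSpaceDispatch dispatch; } FlatView;
    typedef struct AddressSpace { _Atomic(FlatView *) current_map; } AddressSpace;

    /* Toy model of address_space_to_dispatch(): re-read the published
     * pointer on every lookup instead of keeping a cached copy that can
     * outlive the RCU critical section it was read in. */
    static AddressSpaceDispatch *to_dispatch(AddressSpace *as)
    {
        FlatView *view = atomic_load_explicit(&as->current_map,
                                              memory_order_acquire);
        return &view->dispatch;
    }

    int main(void)
    {
        FlatView old_view = { .dispatch = { .generation = 1 } };
        FlatView new_view = { .dispatch = { .generation = 2 } };
        AddressSpace as;
        atomic_init(&as.current_map, &old_view);

        /* Buggy pattern: read the dispatch once and keep it. */
        AddressSpaceDispatch *cached = to_dispatch(&as);

        /* Memory regions are rebuilt (e.g. PCI BAR reprogramming): a new
         * FlatView is published; in QEMU the old one is then reclaimed by
         * the RCU thread, making 'cached' dangle. */
        atomic_store_explicit(&as.current_map, &new_view,
                              memory_order_release);

        /* Re-reading always yields the live dispatch. */
        printf("cached: gen %d, fresh: gen %d\n",
               cached->generation, to_dispatch(&as)->generation);
        return 0;
    }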
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3040
Signed-off-by: Pierrick Bouvier
Reviewed-by: Richard Henderson
Reviewed-by: Michael Tokarev
Tested-by: Michael Tokarev
Message-ID: <20250724161142.2803091-1-pierrick.bouvier@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé
---

diff --git a/system/physmem.c b/system/physmem.c
index 130c148ffb..e5dd760e0b 100644
--- a/system/physmem.c
+++ b/system/physmem.c
@@ -165,13 +165,11 @@ static bool ram_is_cpr_compatible(RAMBlock *rb);
  * CPUAddressSpace: all the information a CPU needs about an AddressSpace
  * @cpu: the CPU whose AddressSpace this is
  * @as: the AddressSpace itself
- * @memory_dispatch: its dispatch pointer (cached, RCU protected)
  * @tcg_as_listener: listener for tracking changes to the AddressSpace
  */
 typedef struct CPUAddressSpace {
     CPUState *cpu;
     AddressSpace *as;
-    struct AddressSpaceDispatch *memory_dispatch;
     MemoryListener tcg_as_listener;
 } CPUAddressSpace;

@@ -692,7 +690,7 @@ address_space_translate_for_iotlb(CPUState *cpu, int asidx, hwaddr orig_addr,
     IOMMUTLBEntry iotlb;
     int iommu_idx;
     hwaddr addr = orig_addr;
-    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
+    AddressSpaceDispatch *d = address_space_to_dispatch(cpu->cpu_ases[asidx].as);

     for (;;) {
         section = address_space_translate_internal(d, addr, &addr, plen, false);
@@ -753,7 +751,7 @@ MemoryRegionSection *iotlb_to_section(CPUState *cpu,
 {
     int asidx = cpu_asidx_from_attrs(cpu, attrs);
     CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
-    AddressSpaceDispatch *d = cpuas->memory_dispatch;
+    AddressSpaceDispatch *d = address_space_to_dispatch(cpuas->as);
     int section_index = index & ~TARGET_PAGE_MASK;
     MemoryRegionSection *ret;

@@ -2780,9 +2778,6 @@ static void tcg_log_global_after_sync(MemoryListener *listener)

 static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
 {
-    CPUAddressSpace *cpuas = data.host_ptr;
-
-    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
     tlb_flush(cpu);
 }

@@ -2798,11 +2793,7 @@ static void tcg_commit(MemoryListener *listener)
     cpu = cpuas->cpu;

     /*
-     * Defer changes to as->memory_dispatch until the cpu is quiescent.
-     * Otherwise we race between (1) other cpu threads and (2) ongoing
-     * i/o for the current cpu thread, with data cached by mmu_lookup().
-     *
-     * In addition, queueing the work function will kick the cpu back to
+     * Queueing the work function will kick the cpu back to
      * the main loop, which will end the RCU critical section and reclaim
      * the memory data structures.
      *