A use-after-free bug was reported when booting a Linux kernel during the
PCI setup phase. It is quite hard to reproduce: it needs SMP, and is
favored by having several PCI devices with BARs and a specific Linux
config (the Debian default one in this case).
After investigation (see the associated bug ticket), it appears that,
under specific conditions, we might access a cached AddressSpaceDispatch
that has meanwhile been reclaimed by the RCU thread.
In the Linux boot scenario, memory regions are destroyed and recreated
during the PCI phase, exposing the bug.
The core of the issue is that we cache the dispatch associated with the
current CPU in cpu->cpu_ases[asidx].memory_dispatch. It is updated by
tcg_commit, which runs asynchronously on a given CPU.
At some point we leave the RCU critical section and the RCU thread
starts reclaiming the old dispatch, but tcg_commit has not yet been
invoked, resulting in the use-after-free.
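To make the window concrete, here is a rough sketch of the pattern that
goes wrong, reusing identifiers from the patch below; racy_lookup is an
illustrative name, not a real QEMU function:

    /* Sketch only: simplified from the real MMU slow path. */
    static MemoryRegionSection *racy_lookup(CPUState *cpu, int asidx,
                                            hwaddr addr, hwaddr *plen)
    {
        /* Reads the pointer cached at the last tcg_commit. */
        AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
        /*
         * If the vCPU left its RCU critical section after the mapping
         * changed, the RCU thread may reclaim the old dispatch before
         * the queued tcg_commit refreshes the cache; the call below
         * then dereferences freed memory.
         */
        return address_space_translate_internal(d, addr, &addr, plen, false);
    }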
This is not the first problem in this area: commit 0d58c660689 [1]
("softmmu: Use async_run_on_cpu in tcg_commit") already tried to address
it. It did a good job, but we found a specific situation where it is not
enough.
This patch takes a simple approach: remove the cached value creating
the issue, and make sure we always get the current mapping for the
address space, using address_space_to_dispatch(cpu->cpu_ases[asidx].as).
It is equivalent to qatomic_rcu_read(&as->current_map)->dispatch.
This is not really costly: it is just two dereferences, including one
atomic (RCU) read, which is negligible considering we are already on the
MMU slow path anyway.
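In other words, the lookup essentially reduces to the following sketch
(current_dispatch is an illustrative name for the equivalence stated
above, not the literal QEMU helper):

    static inline AddressSpaceDispatch *current_dispatch(AddressSpace *as)
    {
        /* One atomic (RCU) read of the current FlatView... */
        FlatView *view = qatomic_rcu_read(&as->current_map);

        /* ...plus one plain dereference to reach its dispatch. */
        return view->dispatch;
    }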
Note that tcg_commit is still needed, as it takes care of flushing the
TLB, removing previously mapped entries.
Another solution would be to cache values directly under the dispatch
(dispatches themselves are not reference counted), keep an active
reference on the associated memory section, and release it when
appropriate (tricky).
Given the time already spent debugging this area, now and previously, I
strongly prefer eliminating the root of the issue instead of adding more
complexity for a hypothetical performance gain. RCU is precisely used to
ensure good read performance, so caching is not as beneficial as it
might seem IMHO.
[1] https://gitlab.com/qemu-project/qemu/-/commit/0d58c660689f6da1e3feff8a997014003d928b3b
Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3040
Signed-off-by: Pierrick Bouvier <pierrick.bouvier@linaro.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Michael Tokarev <mjt@tls.msk.ru>
Tested-by: Michael Tokarev <mjt@tls.msk.ru>
Message-ID: <20250724161142.2803091-1-pierrick.bouvier@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
* CPUAddressSpace: all the information a CPU needs about an AddressSpace
* @cpu: the CPU whose AddressSpace this is
* @as: the AddressSpace itself
- * @memory_dispatch: its dispatch pointer (cached, RCU protected)
* @tcg_as_listener: listener for tracking changes to the AddressSpace
*/
typedef struct CPUAddressSpace {
    CPUState *cpu;
    AddressSpace *as;
-    struct AddressSpaceDispatch *memory_dispatch;
    MemoryListener tcg_as_listener;
} CPUAddressSpace;
    IOMMUTLBEntry iotlb;
    int iommu_idx;
    hwaddr addr = orig_addr;
-    AddressSpaceDispatch *d = cpu->cpu_ases[asidx].memory_dispatch;
+    AddressSpaceDispatch *d = address_space_to_dispatch(cpu->cpu_ases[asidx].as);
    for (;;) {
        section = address_space_translate_internal(d, addr, &addr, plen, false);
{
    int asidx = cpu_asidx_from_attrs(cpu, attrs);
    CPUAddressSpace *cpuas = &cpu->cpu_ases[asidx];
-    AddressSpaceDispatch *d = cpuas->memory_dispatch;
+    AddressSpaceDispatch *d = address_space_to_dispatch(cpuas->as);
    int section_index = index & ~TARGET_PAGE_MASK;
    MemoryRegionSection *ret;
static void tcg_commit_cpu(CPUState *cpu, run_on_cpu_data data)
{
-    CPUAddressSpace *cpuas = data.host_ptr;
-
-    cpuas->memory_dispatch = address_space_to_dispatch(cpuas->as);
    tlb_flush(cpu);
}
    cpu = cpuas->cpu;
    /*
-     * Defer changes to as->memory_dispatch until the cpu is quiescent.
-     * Otherwise we race between (1) other cpu threads and (2) ongoing
-     * i/o for the current cpu thread, with data cached by mmu_lookup().
-     *
-     * In addition, queueing the work function will kick the cpu back to
+     * Queueing the work function will kick the cpu back to
     * the main loop, which will end the RCU critical section and reclaim
     * the memory data structures.
     *