From: Alex Bennée
Date: Tue, 9 Dec 2025 09:24:59 +0000 (+0000)
Subject: target/arm: handle unaligned PC during tlb probe
X-Git-Tag: v10.2.0-rc3~2^2
X-Git-Url: http://git.ipfire.org/?a=commitdiff_plain;h=dd77ef99aa0280c467fe8442b4238122899ae6cf;p=thirdparty%2Fqemu.git

target/arm: handle unaligned PC during tlb probe

PC alignment faults have priority over instruction aborts, and we have
code to deal with this in the translation front-ends. However, during
tb_lookup we can see a potentially faulting probe which doesn't get a
MemOp set. If the page isn't available, this results in EC_INSNABORT
(0x20) instead of EC_PCALIGNMENT (0x22).

As there is no easy way to set the appropriate MemOp in the instruction
fetch probe path, let's just detect it in arm_cpu_tlb_fill_align()
ahead of the main alignment check. We also teach arm_deliver_fault()
to deliver the right syndrome for MMU_INST_FETCH alignment issues.

Resolves: https://gitlab.com/qemu-project/qemu/-/issues/3233
Tested-by: Jessica Clarke
Reviewed-by: Richard Henderson
Message-ID: <20251209092459.1058313-5-alex.bennee@linaro.org>
Signed-off-by: Alex Bennée
---

diff --git a/target/arm/tcg/tlb_helper.c b/target/arm/tcg/tlb_helper.c
index f1983a5732..5c689d3b69 100644
--- a/target/arm/tcg/tlb_helper.c
+++ b/target/arm/tcg/tlb_helper.c
@@ -250,7 +250,11 @@ void arm_deliver_fault(ARMCPU *cpu, vaddr addr,
     fsr = compute_fsr_fsc(env, fi, target_el, mmu_idx, &fsc);

     if (access_type == MMU_INST_FETCH) {
-        syn = syn_insn_abort(same_el, fi->ea, fi->s1ptw, fsc);
+        if (fi->type == ARMFault_Alignment) {
+            syn = syn_pcalignment();
+        } else {
+            syn = syn_insn_abort(same_el, fi->ea, fi->s1ptw, fsc);
+        }
         exc = EXCP_PREFETCH_ABORT;
     } else {
         bool gcs = regime_is_gcs(core_to_arm_mmu_idx(env, mmu_idx));
@@ -346,11 +350,18 @@ bool arm_cpu_tlb_fill_align(CPUState *cs, CPUTLBEntryFull *out, vaddr address,
     }

     /*
-     * Per R_XCHFJ, alignment fault not due to memory type has
-     * highest precedence.  Otherwise, walk the page table and
-     * and collect the page description.
+     * PC alignment faults should be dealt with at translation time,
+     * but we also need to catch them while being probed.
+     *
+     * Then, per R_XCHFJ, alignment faults not due to memory type take
+     * precedence. Otherwise, walk the page table and collect the
+     * page description.
+     *
      */
-    if (address & ((1 << memop_alignment_bits(memop)) - 1)) {
+    if (access_type == MMU_INST_FETCH && !cpu->env.thumb &&
+        (address & 3)) {
+        fi->type = ARMFault_Alignment;
+    } else if (address & ((1 << memop_alignment_bits(memop)) - 1)) {
         fi->type = ARMFault_Alignment;
     } else if (!get_phys_addr(&cpu->env, address, access_type, memop,
                               core_to_arm_mmu_idx(&cpu->env, mmu_idx),