From: Yao Zihong
Date: Thu, 30 Oct 2025 22:47:24 +0000 (-0500)
Subject: riscv: memcpy_noalignment: Fold SZREG/BLOCK_SIZE alignment to single andi
X-Git-Url: http://git.ipfire.org/gitweb.cgi?a=commitdiff_plain;h=0698fd462a22d5e0fda71ef1dce04656d17a7c5f;p=thirdparty%2Fglibc.git

riscv: memcpy_noalignment: Fold SZREG/BLOCK_SIZE alignment to single andi

Simplify the alignment steps for SZREG and BLOCK_SIZE multiples.  The
previous three-instruction sequences

	addi a7, a2, -SZREG
	andi a7, a7, -SZREG
	addi a7, a7, SZREG

and

	addi a7, a2, -BLOCK_SIZE
	andi a7, a7, -BLOCK_SIZE
	addi a7, a7, BLOCK_SIZE

are equivalent to the single instructions

	andi a7, a2, -SZREG
	andi a7, a2, -BLOCK_SIZE

because SZREG and BLOCK_SIZE are powers of two in this context:
subtracting S before rounding down to a multiple of S and adding S back
afterwards shifts the result by exactly one multiple of S, so the
surrounding addi steps cancel out and both forms compute LEN rounded
down to a multiple of S.  Folding each sequence into a single andi
reduces code size with identical semantics.

No functional change.

	* sysdeps/riscv/multiarch/memcpy_noalignment.S: Remove redundant
	addi around alignment; keep a single andi for SZREG/BLOCK_SIZE
	rounding.

Signed-off-by: Yao Zihong
Reviewed-by: Peter Bergner
---

diff --git a/sysdeps/riscv/multiarch/memcpy_noalignment.S b/sysdeps/riscv/multiarch/memcpy_noalignment.S
index db8654fb59..6917fc435b 100644
--- a/sysdeps/riscv/multiarch/memcpy_noalignment.S
+++ b/sysdeps/riscv/multiarch/memcpy_noalignment.S
@@ -57,9 +57,7 @@ ENTRY (__memcpy_noalignment)
 	add	a5, a0, a4
 	add	a1, a1, a4
 	bleu	a2, a3, L(word_copy_adjust)
-	addi	a7, a2, -BLOCK_SIZE
-	andi	a7, a7, -BLOCK_SIZE
-	addi	a7, a7, BLOCK_SIZE
+	andi	a7, a2, -BLOCK_SIZE
 	add	a3, a5, a7
 	mv	a4, a1
 L(block_copy):
@@ -106,9 +104,7 @@ L(word_copy):
 	li	a5, SZREG-1
 	/* if LEN < SZREG jump to tail handling.  */
 	bleu	a2, a5, L(tail_adjust)
-	addi	a7, a2, -SZREG
-	andi	a7, a7, -SZREG
-	addi	a7, a7, SZREG
+	andi	a7, a2, -SZREG
 	add	a6, a3, a7
 	mv	a5, a1
 L(word_copy_loop):
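The cancellation claimed in the commit message can be checked numerically. The sketch below models the two instruction sequences with plain integer arithmetic (Python's arbitrary-precision ints agree with the RISC-V result for the non-negative lengths the copy loops can see); the values 8, 16, 64, and 128 are illustrative stand-ins for SZREG/BLOCK_SIZE, not taken from the source:

```python
def three_insn(n, s):
    """Model of: addi t, n, -S ; andi t, t, -S ; addi t, t, S."""
    return ((n - s) & -s) + s

def single_andi(n, s):
    """Model of: andi t, n, -S  (round n down to a multiple of s)."""
    return n & -s

# s must be a power of two, as the commit assumes for SZREG/BLOCK_SIZE.
for s in (8, 16, 64, 128):
    for n in range(1, 4 * s + 1):
        assert three_insn(n, s) == single_andi(n, s)
print("three-instruction sequence == single andi for all tested n, S")
```

Intuitively, `n - s` lands exactly one multiple of `s` lower, `& -s` rounds that down, and `+ s` undoes the initial shift, so the result is just `n` rounded down to a multiple of `s`.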