From e6b011bcfd52c245978ccd540e3f929571c59471 Mon Sep 17 00:00:00 2001
From: Roger Sayle
Date: Wed, 3 Aug 2022 09:03:17 +0100
Subject: [PATCH] Improved pre-reload split of double word comparison against
 -1 on x86.

This patch adds an extra optimization to *cmp<dwi>_doubleword to improve
the code generated for comparisons against -1.  Hypothetically, if a
comparison against -1 reached this splitter we'd currently generate code
that looks like:

        notq    %rdx            ; 3 bytes
        notq    %rax            ; 3 bytes
        orq     %rdx, %rax      ; 3 bytes
        setne   %al

With this patch we would instead generate the superior:

        andq    %rdx, %rax      ; 3 bytes
        cmpq    $-1, %rax       ; 4 bytes
        setne   %al

which is both faster and smaller, and also what's currently generated
thanks to the middle-end splitting double word comparisons against zero
and minus one during RTL expansion.  Should that change, this would
become a missed-optimization regression, but this patch also
(potentially) helps suitable comparisons created by CSE and combine.

2022-08-03  Roger Sayle

gcc/ChangeLog
	* config/i386/i386.md (*cmp<dwi>_doubleword): Add a special case
	to split comparisons against -1 using AND and CMP -1 instructions.

---
 gcc/config/i386/i386.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index f1158e1356b..e8f3851be01 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -1526,6 +1526,15 @@
     operands[i] = force_reg (<MODE>mode, operands[i]);
 
   operands[4] = gen_reg_rtx (<MODE>mode);
+
+  /* Special case comparisons against -1.  */
+  if (operands[1] == constm1_rtx && operands[3] == constm1_rtx)
+    {
+      emit_insn (gen_and<mode>3 (operands[4], operands[0], operands[2]));
+      emit_insn (gen_cmp_1 (<MODE>mode, operands[4], constm1_rtx));
+      DONE;
+    }
+
   if (operands[1] == const0_rtx)
     emit_move_insn (operands[4], operands[0]);
   else if (operands[0] == const0_rtx)
-- 
2.47.3