From: Richard Sandiford
Date: Sat, 13 Jan 2018 17:50:01 +0000 (+0000)
Subject: Improve canonicalisation of TARGET_MEM_REFs
X-Git-Tag: basepoints/gcc-9~1950
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=729f495ad78e2596d166707444941b382dbfc29a;p=thirdparty%2Fgcc.git

Improve canonicalisation of TARGET_MEM_REFs

A general TARGET_MEM_REF is:

    BASE + STEP * INDEX + INDEX2 + OFFSET

After classifying the address in this way, the code that builds
TARGET_MEM_REFs tries to simplify the address until it's valid
for the current target and for the mode of memory being addressed.
It does this in a fixed order:

(1) add SYMBOL to BASE
(2) add INDEX * STEP to the base, if STEP != 1
(3) add OFFSET to INDEX or BASE (reverted if unsuccessful)
(4) add INDEX to BASE
(5) add OFFSET to BASE

So suppose we had an address:

    &symbol + offset + index * 8   (e.g. a[i + 1] for a global "a")

on a target that only allows an index or an offset, not both.
Following the steps above, we'd first create:

    tmp = symbol
    tmp2 = tmp + index * 8

Then if the given offset value was valid for the mode being addressed,
we'd create:

    MEM[base:tmp2, offset:offset]

while if it was invalid we'd create:

    tmp3 = tmp2 + offset
    MEM[base:tmp3, offset:0]

The problem is that this could happen if ivopts had decided to use
a scaled index for an address that happens to have a constant base.
The old procedure failed to give an indexed TARGET_MEM_REF in that
case, and adding the offset last prevented later passes from being
able to fold the index back in.  The patch avoids this by checking
at (2) whether the offset is the only component that causes the
address to be invalid, folding it into the base if so.

2018-01-13  Richard Sandiford
	    Alan Hayward
	    David Sherwood

gcc/
	* tree-ssa-address.c (mem_ref_valid_without_offset_p): New function.
	(add_offset_to_base): New function, split out from...
	(create_mem_ref): ...here.  When handling a scale other than 1,
	check first whether the address is valid without the offset.
	Add it into the base if so, leaving the index and scale as-is.

Co-Authored-By: Alan Hayward
Co-Authored-By: David Sherwood

From-SVN: r256609
---

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 5333a67168a4..561fd708dae5 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,13 @@
+2018-01-13  Richard Sandiford
+	    Alan Hayward
+	    David Sherwood
+
+	* tree-ssa-address.c (mem_ref_valid_without_offset_p): New function.
+	(add_offset_to_base): New function, split out from...
+	(create_mem_ref): ...here.  When handling a scale other than 1,
+	check first whether the address is valid without the offset.
+	Add it into the base if so, leaving the index and scale as-is.
+
 2018-01-12  Jakub Jelinek
 
 	PR c++/83778
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index 2b52fe501291..c8ff8514529c 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -746,6 +746,35 @@ gimplify_mem_ref_parts (gimple_stmt_iterator *gsi, struct mem_address *parts)
 				true, GSI_SAME_STMT);
 }
 
+/* Return true if the OFFSET in PARTS is the only thing that is making
+   it an invalid address for type TYPE.  */
+
+static bool
+mem_ref_valid_without_offset_p (tree type, mem_address parts)
+{
+  if (!parts.base)
+    parts.base = parts.offset;
+  parts.offset = NULL_TREE;
+  return valid_mem_ref_p (TYPE_MODE (type), TYPE_ADDR_SPACE (type), &parts);
+}
+
+/* Fold PARTS->offset into PARTS->base, so that there is no longer
+   a separate offset.  Emit any new instructions before GSI.  */
+
+static void
+add_offset_to_base (gimple_stmt_iterator *gsi, mem_address *parts)
+{
+  tree tmp = parts->offset;
+  if (parts->base)
+    {
+      tmp = fold_build_pointer_plus (parts->base, tmp);
+      tmp = force_gimple_operand_gsi_1 (gsi, tmp, is_gimple_mem_ref_addr,
+					NULL_TREE, true, GSI_SAME_STMT);
+    }
+  parts->base = tmp;
+  parts->offset = NULL_TREE;
+}
+
 /* Creates and returns a TARGET_MEM_REF for address ADDR.  If necessary
    computations are emitted in front of GSI.  TYPE is the mode
    of created memory reference.
    IV_CAND is the selected iv candidate in ADDR,
@@ -812,6 +841,14 @@ create_mem_ref (gimple_stmt_iterator *gsi, tree type, aff_tree *addr,
   if (parts.step && !integer_onep (parts.step))
     {
       gcc_assert (parts.index);
+      if (parts.offset && mem_ref_valid_without_offset_p (type, parts))
+	{
+	  add_offset_to_base (gsi, &parts);
+	  mem_ref = create_mem_ref_raw (type, alias_ptr_type, &parts, true);
+	  gcc_assert (mem_ref);
+	  return mem_ref;
+	}
+
       parts.index = force_gimple_operand_gsi (gsi,
 				fold_build2 (MULT_EXPR, sizetype,
 					     parts.index, parts.step),
@@ -906,18 +943,7 @@ create_mem_ref (gimple_stmt_iterator *gsi, tree type, aff_tree *addr,
      [base'].  */
   if (parts.offset && !integer_zerop (parts.offset))
     {
-      tmp = parts.offset;
-      parts.offset = NULL_TREE;
-      /* Add offset to base.  */
-      if (parts.base)
-	{
-	  tmp = fold_build_pointer_plus (parts.base, tmp);
-	  tmp = force_gimple_operand_gsi_1 (gsi, tmp,
-					    is_gimple_mem_ref_addr,
-					    NULL_TREE, true, GSI_SAME_STMT);
-	}
-      parts.base = tmp;
-
+      add_offset_to_base (gsi, &parts);
       mem_ref = create_mem_ref_raw (type, alias_ptr_type, &parts, true);
       if (mem_ref)
 	return mem_ref;