aarch64: Match unpredicated shift patterns for ADR, SRA and ADDHNB instructions
This patch modifies the shift expander to lower constant shifts immediately,
without wrapping them in an unspec. It also modifies the ADR, SRA and ADDHNB
patterns to match the lowered forms of the shifts, as the governing predicate
register is not required for these instructions.
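For illustration, a predicated SVE shift wraps the operation in UNSPEC_PRED_X,
roughly like this (a sketch of the RTL shape, not the exact pattern or modes
used in aarch64-sve.md):

    (set (reg:VNx4SI dst)
         (unspec:VNx4SI
           [(reg:VNx4BI pg)
            (ashift:VNx4SI (reg:VNx4SI src)
                           (reg:VNx4SI amount))]
           UNSPEC_PRED_X))

whereas the lowered form for a constant shift amount is just the bare shift
rtx, which the ADR, SRA and ADDHNB patterns can now match directly:

    (set (reg:VNx4SI dst)
         (ashift:VNx4SI (reg:VNx4SI src)
                        (const_vector:VNx4SI [(const_int 2) ...])))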
Bootstrapped and regtested on aarch64-linux-gnu.
Signed-off-by: Dhruv Chawla <dhruvc@nvidia.com>
Co-authored-by: Richard Sandiford <richard.sandiford@arm.com>
gcc/ChangeLog:
* config/aarch64/aarch64-sve.md (@aarch64_adr<mode>_shift):
Match lowered form of ashift.
(*aarch64_adr<mode>_shift): Likewise.
(*aarch64_adr_shift_sxtw): Likewise.
(*aarch64_adr_shift_uxtw): Likewise.
(<ASHIFT:optab><mode>3): Check amount instead of operands[2] in
aarch64_sve_<lr>shift_operand.
(v<optab><mode>3): Generate unpredicated shifts for constant
operands.
(@aarch64_pred_<optab><mode>): Convert to a define_expand.
(*aarch64_pred_<optab><mode>): Create define_insn_and_split pattern
from @aarch64_pred_<optab><mode>.
(*post_ra_v_ashl<mode>3): Rename to ...
(aarch64_vashl<mode>3_const): ... this and remove reload requirement.
(*post_ra_v_<optab><mode>3): Rename to ...
(aarch64_v<optab><mode>3_const): ... this and remove reload
requirement.
* config/aarch64/aarch64-sve2.md
(@aarch64_sve_add_<sve_int_op><mode>): Match lowered form of
SHIFTRT.
(*aarch64_sve2_sra<mode>): Likewise.
(*bitmask_shift_plus<mode>): Match lowered form of lshiftrt.