The following treats VMAT_ELEMENTWISE and VMAT_STRIDED_SLP the same
when considering whether to use a gather or scatter for single-element
interleaving accesses (see the example loop below).
This will cause
FAIL: gcc.target/aarch64/sve/sve_iters_low_2.c scan-tree-dump-not vect "LOOP VECTORIZED"
where we now vectorize the loop with VNx4QI.  I'll leave it to ARM folks
to investigate whether that's OK and to either adjust the testcase or
see where to adjust things to make the testcase not vectorized again.
The original fix for which the testcase was introduced is still
effective.
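
To make the affected access pattern concrete, consider a loop like the
one below (a hypothetical illustration, not the testcase from the PR).
The load x[4 * i] is a single-element interleaving access: a group of
size one with a constant stride.  Such an access can be vectorized
element-wise (VMAT_ELEMENTWISE), as strided SLP (VMAT_STRIDED_SLP), or
with a gather (VMAT_GATHER_SCATTER); the patch merely lets the gather
fallback be considered for the second classification as it already was
for the first.

  /* Hypothetical example: the load has a constant stride of four
     elements and a group size of one, the store is contiguous.  */
  void
  foo (int *out, int *x, int n)
  {
    for (int i = 0; i < n; ++i)
      out[i] = x[4 * i];
  }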
PR tree-optimization/117502
* tree-vect-stmts.cc (get_group_load_store_type): Also consider
VMAT_STRIDED_SLP when checking whether to use gather/scatter for
single-element interleaving accesses.
* tree-vect-loop.cc (update_epilogue_loop_vinfo): Accesses with
STMT_VINFO_STRIDED_P can be classified as VMAT_GATHER_SCATTER, so
update DR_REF for those as well.
--- a/gcc/tree-vect-loop.cc
+++ b/gcc/tree-vect-loop.cc
 	 refs that get_load_store_type classified as VMAT_GATHER_SCATTER.  */
       auto vstmt_vinfo = vect_stmt_to_vectorize (stmt_vinfo);
       if (STMT_VINFO_MEMORY_ACCESS_TYPE (vstmt_vinfo) == VMAT_GATHER_SCATTER
+	  || STMT_VINFO_STRIDED_P (vstmt_vinfo)
 	  || STMT_VINFO_GATHER_SCATTER_P (vstmt_vinfo))
 	{
 	  /* ??? As we copy epilogues from the main loop incremental
--- a/gcc/tree-vect-stmts.cc
+++ b/gcc/tree-vect-stmts.cc
 	 on nearby locations.  Or, even if it's a win over scalar code,
 	 it might not be a win over vectorizing at a lower VF, if that
 	 allows us to use contiguous accesses.  */
-      if (*memory_access_type == VMAT_ELEMENTWISE
+      if ((*memory_access_type == VMAT_ELEMENTWISE
+	   || *memory_access_type == VMAT_STRIDED_SLP)
 	  && single_element_p
 	  && loop_vinfo
 	  && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
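
The quoted hunk ends mid-call because of the trailing diff context.
For reference, a sketch of how the condition reads after the patch
follows; the trailing arguments masked_p and gs_info and the final
assignment are assumptions based on the surrounding pre-existing code,
not part of the hunk above.

  if ((*memory_access_type == VMAT_ELEMENTWISE
       || *memory_access_type == VMAT_STRIDED_SLP)
      && single_element_p
      && loop_vinfo
      /* masked_p and gs_info assumed from the surrounding code.  */
      && vect_use_strided_gather_scatters_p (stmt_info, loop_vinfo,
					     masked_p, gs_info))
    *memory_access_type = VMAT_GATHER_SCATTER;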