author     Richard Sandiford <richard.sandiford@arm.com>
           Mon, 20 Jan 2025 19:52:31 +0000 (19:52 +0000)
committer  Richard Sandiford <richard.sandiford@arm.com>
           Mon, 20 Jan 2025 19:52:31 +0000 (19:52 +0000)
commit     8edf8b552313951cb4f2f97821ee4b3820c9506b
tree       bc6a0cd0ab3938cee3be8c866f677e56f610c74a
parent     6612b8e55471fabd2071a9637a06d3ffce2b05a6
vect: Preserve OMP info for conditional stores [PR118384]

OMP reductions are lowered into the form:

    idx = .OMP_SIMD_LANE (simduid, 0);
    ...
    oldval = D.anon[idx];
    newval = oldval op ...;
    D.anon[idx] = newval;

So if the scalar loop has a {0, +, 1} iv i, idx = i % vf.
Despite this wraparound, the vectoriser pretends that the D.anon
accesses are linear.  It records the .OMP_SIMD_LANE's second argument
(val) in the data_reference aux field (-1 - val) and then copies this
to the stmt_vec_info simd_lane_access_p field (val + 1).
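
As a minimal, standalone model of what this means (VF, anon and x are
illustrative assumptions, not GCC internals), each of the vf slots of
D.anon accumulates every vf-th element, so the accesses keep hitting
the same vf locations rather than marching through memory:

    /* Standalone C model of the lowered loop above; VF, anon and x
       are assumed names for illustration only.  */
    #include <stdio.h>

    #define VF 4                 /* stand-in for the vectorisation factor  */

    double
    reduce (const double *x, int n)
    {
      double anon[VF] = { 0 };   /* models the per-lane D.anon array       */
      for (int i = 0; i < n; ++i)
        {
          int idx = i % VF;      /* what .OMP_SIMD_LANE effectively yields */
          double oldval = anon[idx];
          double newval = oldval + x[i];
          anon[idx] = newval;    /* store back to the same VF slots        */
        }
      double res = 0;            /* final cross-lane reduction             */
      for (int j = 0; j < VF; ++j)
        res += anon[j];
      return res;
    }

    int
    main (void)
    {
      double x[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
      printf ("%g\n", reduce (x, 8));   /* prints 36 */
      return 0;
    }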

vectorizable_load and vectorizable_store use simd_lane_access_p
to detect accesses of this form and suppress the vector pointer
increments that would be used for genuine linear accesses.

The difference in this PR is that the reduction is conditional,
and so the store back to D.anon is recognised as a conditional
store pattern.  simd_lane_access_p was not being copied across
from the original stmt_vec_info to the pattern stmt_vec_info,
meaning that it was vectorised as a normal linear store.
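
An illustrative source pattern for such a conditional reduction is
sketched below (compile with -fopenmp-simd); it is an assumed
reproducer in the spirit of the PR, not the actual
gcc.target/aarch64/pr118384_1.c test:

    /* Assumed reproducer sketch, not the new test itself.  */
    double
    conditional_sum (const double *x, const int *cond, int n)
    {
      double res = 0.0;
      #pragma omp simd reduction(+:res)
      for (int i = 0; i < n; ++i)
        if (cond[i])
          res += x[i];   /* the store back to the per-lane D.anon slot
                            becomes a conditional store pattern  */
      return res;
    }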

gcc/
	PR tree-optimization/118384
	* tree-vectorizer.cc (vec_info::move_dr): Copy
	STMT_VINFO_SIMD_LANE_ACCESS_P.

gcc/testsuite/
	PR tree-optimization/118384
	* gcc.target/aarch64/pr118384_1.c: New test.
	* gcc.target/aarch64/pr118384_2.c: Likewise.

gcc/testsuite/gcc.target/aarch64/pr118384_1.c [new file with mode: 0644]
gcc/testsuite/gcc.target/aarch64/pr118384_2.c [new file with mode: 0644]
gcc/tree-vectorizer.cc
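
For reference, the fix named in the ChangeLog above amounts to carrying
the flag over when a data reference is moved to a pattern statement.
A hedged sketch of the shape of that change follows; the parameter
names and the elided surrounding copies are assumptions, not a verbatim
excerpt of gcc/tree-vectorizer.cc:

    /* Sketch only: assumed parameter names, other field copies elided.  */
    void
    vec_info::move_dr (stmt_vec_info new_stmt_info, stmt_vec_info old_stmt_info)
    {
      /* ... existing copying of data-reference-related fields ...  */
      STMT_VINFO_SIMD_LANE_ACCESS_P (new_stmt_info)
        = STMT_VINFO_SIMD_LANE_ACCESS_P (old_stmt_info);
    }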