tree-data-ref.cc uses alignment information to try to optimise
the code generated for alias checks. The assumption for "normal"
non-grouped, full-width scalar accesses was that the access size
would be a multiple of the alignment. As Richi notes in the PR,
this is a documented precondition of dr_with_seg_len:
  /* The minimum common alignment of DR's start address, SEG_LEN and
     ACCESS_SIZE.  */
  unsigned int align;
PR115192 was a case in which this assumption didn't hold. The access
was part of an aligned 4-element group, but only the first 2 elements
of the group were accessed. The alignment was therefore double the
access size.
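For illustration only (this is a hypothetical sketch rather than
the PR115192 testcase), the situation can arise for loops of
roughly this shape:

  struct quad { int x, y, z, w; };

  void
  f (struct quad *a, int *out, int n)
  {
    for (int i = 0; i < n; i++)
      out[i] = a[i].x + a[i].y;
  }

Here a[i].x and a[i].y can be analysed as the first two elements of
a four-element interleaving group, so the alignment recorded for
the group can be larger than the 8 bytes that each iteration
actually reads.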
In r15-820-ga0fe4fb1c8d78045 I'd "fixed" that by capping the
alignment in one of the output routines. But I think that was
misconceived. The precondition means that we should cap the
alignment at source instead.
Failure to do that caused a similar wrong code bug in this PR,
where the alignment comes from a short bitfield access rather
than from a group access.
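To make that concrete (the numbers below are illustrative rather
than taken from a particular dump): least_bit_hwi (x) returns the
lowest set bit of x, so for the bitfield access in the new test,
where the start address might be 4-byte aligned while the access
size might be just 1 byte, the new cap

  align_a = std::min (align_a, least_bit_hwi (access_size_a));

reduces align_a to least_bit_hwi (1) == 1.  Likewise a
16-byte-aligned group with an 8-byte access size would be capped
at least_bit_hwi (8) == 8.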
gcc/
PR tree-optimization/116125
* tree-vect-data-refs.cc (vect_prune_runtime_alias_test_list): Make
the dr_with_seg_len alignment fields describe the access sizes as
well as the pointer alignment.
* tree-data-ref.cc (create_intersect_range_checks): Don't compensate
for invalid alignment fields here.
gcc/testsuite/
PR tree-optimization/116125
* gcc.dg/vect/pr116125.c: New test.
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/vect/pr116125.c
+#include "tree-vect.h"
+
+struct st
+{
+  unsigned int num : 8;
+};
+
+void __attribute__((noipa))
+mem_overlap (struct st *a, struct st *b)
+{
+  for (int i = 0; i < 9; i++)
+    a[i].num = b[i].num + 1;
+}
+
+int
+main (void)
+{
+  check_vect ();
+
+  struct st a[10] = {};
+
+  // input a = 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
+  mem_overlap (&a[1], a);
+
+  // output a = 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
+  if (a[2].num == 2)
+    return 0;
+  else
+    __builtin_abort ();
+}
if the maximum value of one segment is equal to the minimum
value of the other. */
min_align = std::min (dr_a.align, dr_b.align);
- min_align = std::min (min_align, known_alignment (dr_a.access_size));
- min_align = std::min (min_align, known_alignment (dr_b.access_size));
cmp_code = LT_EXPR;
}
poly_uint64 lower_bound;
tree segment_length_a, segment_length_b;
unsigned HOST_WIDE_INT access_size_a, access_size_b;
- unsigned int align_a, align_b;
+ unsigned HOST_WIDE_INT align_a, align_b;
/* Ignore the alias if the VF we chose ended up being no greater
than the dependence distance. */
stmt_info_b->stmt);
}
+ /* dr_with_seg_len requires the alignment to apply to the segment length
+ and access size, not just the start address. The access size can be
+ smaller than the pointer alignment for grouped accesses and bitfield
+ references; see PR115192 and PR116125 respectively. */
+ align_a = std::min (align_a, least_bit_hwi (access_size_a));
+ align_b = std::min (align_b, least_bit_hwi (access_size_b));
+
dr_with_seg_len dr_a (dr_info_a->dr, segment_length_a,
access_size_a, align_a);
dr_with_seg_len dr_b (dr_info_b->dr, segment_length_b,