Since vpermq is really slow, avoid using it for the permutation in
ix86_expand_vecop_qihi2 when vpmovwb (which needs AVX512BW) is not
available, and fall back to ix86_expand_vecop_qihi instead.
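For reference, a rough intrinsics sketch of why the permutation is needed
(illustrative only; the function name and intrinsic sequence below are not
taken from the expander, they merely approximate the kind of code involved
for a V16QI multiply with -mavx2 and without AVX512BW):

#include <immintrin.h>

/* Multiply two vectors of 16 bytes by widening them to 16-bit lanes.
   Without AVX512BW there is no vpmovwb to truncate the 16-bit products
   back to bytes, so the narrowing needs an in-lane pack plus a
   cross-lane vpermq.  */
__m128i
mul_v16qi_sketch (__m128i a, __m128i b)
{
  __m256i aw = _mm256_cvtepu8_epi16 (a);
  __m256i bw = _mm256_cvtepu8_epi16 (b);
  __m256i prod = _mm256_mullo_epi16 (aw, bw);
  /* Keep only the low byte of each 16-bit product.  */
  prod = _mm256_and_si256 (prod, _mm256_set1_epi16 (0xff));
  /* vpackuswb packs within each 128-bit lane ...  */
  __m256i packed = _mm256_packus_epi16 (prod, _mm256_setzero_si256 ());
  /* ... so a cross-lane vpermq is needed to put the two halves next to
     each other.  This is the slow instruction the fallback avoids; with
     AVX512BW, a single vpmovwb (_mm256_cvtepi16_epi8) would truncate the
     words directly.  */
  __m256i fixed = _mm256_permute4x64_epi64 (packed, 0xd8);
  return _mm256_castsi256_si128 (fixed);
}
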
gcc/ChangeLog:

	PR target/115069
	* config/i386/i386-expand.cc (ix86_expand_vecop_qihi2): Return
	false when TARGET_AVX512BW is not enabled.

gcc/testsuite/ChangeLog:

	PR target/115069
	* gcc.target/i386/pr115069.c: New test.

bool op2vec = GET_MODE_CLASS (GET_MODE (op2)) == MODE_VECTOR_INT;
bool uns_p = code != ASHIFTRT;
+  /* Without VPMOVWB (provided by the AVX512BW ISA), the expansion uses
+     a generic permutation to merge the data back into the right place.
+     That permutation results in VPERMQ, which is slow, so it is better
+     to fall back to ix86_expand_vecop_qihi.  */
+ if (!TARGET_AVX512BW)
+ return false;
+
if ((qimode == V16QImode && !TARGET_AVX2)
|| (qimode == V32QImode && (!TARGET_AVX512BW || !TARGET_EVEX512))
/* There are no V64HImode instructions. */
--- /dev/null
+/* { dg-do compile } */
+/* { dg-options "-O2 -mavx2" } */
+/* { dg-final { scan-assembler-not "vpermq" } } */
+
+typedef char v16qi __attribute__((vector_size(16)));
+
+v16qi foo (v16qi a, v16qi b) {
+ return a * b;
+}