Since avx_loadups256 and sse_loadups have been replaced by movv8sf_internal
and movv4sf_internal, respectively, the load scans must match
movv8sf_internal and movv4sf_internal instead.
* gcc.target/i386/avx256-unaligned-load-1.c: Update load scan.
From-SVN: r235295
+2016-04-20  H.J. Lu  <hongjiu.lu@intel.com>
+
+ * gcc.target/i386/avx256-unaligned-load-1.c: Update load scan.
+
2016-04-20  Bin Cheng  <bin.cheng@arm.com>

	PR tree-optimization/69489
c[i] = a[i] * b[i+3];
}
-/* { dg-final { scan-assembler-not "(avx_loadups256|vmovups\[^\n\r]*movv8sf_internal)" } } */
-/* { dg-final { scan-assembler "(sse_loadups|movv4sf_internal)" } } */
+/* { dg-final { scan-assembler-not "vmovups\[^\n\r]*movv8sf_internal/2" } } */
+/* { dg-final { scan-assembler "movv4sf_internal/2" } } */
/* { dg-final { scan-assembler "vinsertf128" } } */