Use AVX unaligned memcpy only if AVX2 is available
author    H.J. Lu <hjl.tools@gmail.com>
          Fri, 30 Jan 2015 14:50:20 +0000 (06:50 -0800)
committer H.J. Lu <hjl.tools@gmail.com>
          Fri, 30 Jan 2015 23:37:58 +0000 (15:37 -0800)
commit    5f3d0b78e011d2a72f9e88b0e9ef5bc081d18f97
tree      8eabf127206283d2421bc40b6bc44e123e346598
parent    b658fdd82b4524cf6a39881d092caa23f63d93ac

memcpy with unaligned 256-bit AVX register loads/stores is slow on older
processors like Sandy Bridge.  This patch adds bit_AVX_Fast_Unaligned_Load
and sets it only when AVX2 is available.
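As a rough illustration of the pattern, here is a minimal, self-contained C
sketch of a feature bit gated on AVX2.  Only the identifier names are taken
from the ChangeLog below; the bit value, the feature word, and
init_cpu_features_sketch are hypothetical stand-ins for the real machinery
in init-arch.h and init-arch.c, and GCC's __builtin_cpu_supports replaces
the raw CPUID probing glibc does internally.

    #include <stdio.h>

    /* Hypothetical stand-in for a bit in __cpu_features.feature[];
       the real value in init-arch.h may differ.  */
    #define bit_AVX_Fast_Unaligned_Load (1 << 11)

    static unsigned int feature_word;

    #define HAS_AVX_FAST_UNALIGNED_LOAD \
      ((feature_word & bit_AVX_Fast_Unaligned_Load) != 0)

    static void
    init_cpu_features_sketch (void)
    {
      /* Set the bit only when AVX2 is available: AVX-only parts such
         as Sandy Bridge execute unaligned 256-bit loads/stores slowly,
         so plain AVX support is not enough.  */
      if (__builtin_cpu_supports ("avx2"))
        feature_word |= bit_AVX_Fast_Unaligned_Load;
    }

    int
    main (void)
    {
      __builtin_cpu_init ();        /* initialize GCC's CPU model data */
      init_cpu_features_sketch ();
      puts (HAS_AVX_FAST_UNALIGNED_LOAD
            ? "would use AVX unaligned memcpy"
            : "would fall back to SSE2/SSSE3 memcpy");
      return 0;
    }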

[BZ #17801]
* sysdeps/x86_64/multiarch/init-arch.c (__init_cpu_features):
Set the bit_AVX_Fast_Unaligned_Load bit for AVX2.
* sysdeps/x86_64/multiarch/init-arch.h (bit_AVX_Fast_Unaligned_Load):
New.
(index_AVX_Fast_Unaligned_Load): Likewise.
(HAS_AVX_FAST_UNALIGNED_LOAD): Likewise.
* sysdeps/x86_64/multiarch/memcpy.S (__new_memcpy): Check the
bit_AVX_Fast_Unaligned_Load bit instead of the bit_AVX_Usable bit.
* sysdeps/x86_64/multiarch/memcpy_chk.S (__memcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/mempcpy.S (__mempcpy): Likewise.
* sysdeps/x86_64/multiarch/mempcpy_chk.S (__mempcpy_chk): Likewise.
* sysdeps/x86_64/multiarch/memmove.c (__libc_memmove): Replace
HAS_AVX with HAS_AVX_FAST_UNALIGNED_LOAD.
* sysdeps/x86_64/multiarch/memmove_chk.c (__memmove_chk): Likewise.
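The memcpy/memmove selectors listed above now test the new bit instead of
bit_AVX_Usable.  Below is a hedged, standalone sketch of that selection
using GCC's ifunc attribute in place of glibc's internal libc_ifunc
machinery (requires an ELF/GNU target).  The my_* names and the trivial
stub variants are hypothetical, __builtin_cpu_supports ("avx2")
approximates HAS_AVX_FAST_UNALIGNED_LOAD, and the real fallback chain
(SSSE3 forward/backward variants) is simplified away.

    #include <stddef.h>
    #include <string.h>

    typedef void *(*memmove_fn) (void *, const void *, size_t);

    /* Hypothetical stubs standing in for the real multiarch variants
       in sysdeps/x86_64/multiarch; both simply defer to libc here.  */
    static void *
    my_memmove_sse2 (void *dst, const void *src, size_t n)
    {
      return memmove (dst, src, n);
    }

    static void *
    my_memmove_avx_unaligned (void *dst, const void *src, size_t n)
    {
      return memmove (dst, src, n);
    }

    /* IFUNC resolver: before the patch the AVX path was taken whenever
       AVX was usable; after it, the AVX path additionally requires the
       AVX2-gated fast-unaligned-load bit.  */
    static memmove_fn
    resolve_memmove (void)
    {
      __builtin_cpu_init ();
      return __builtin_cpu_supports ("avx2")
             ? my_memmove_avx_unaligned
             : my_memmove_sse2;
    }

    void *my_memmove (void *dst, const void *src, size_t n)
         __attribute__ ((ifunc ("resolve_memmove")));

    int
    main (void)
    {
      char buf[16] = "hello";
      /* The dynamic loader resolves my_memmove once, via an IRELATIVE
         relocation, before it is first used.  */
      my_memmove (buf + 1, buf, 5);
      return 0;
    }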
Files changed:
ChangeLog
NEWS
sysdeps/x86_64/multiarch/init-arch.c
sysdeps/x86_64/multiarch/init-arch.h
sysdeps/x86_64/multiarch/memcpy.S
sysdeps/x86_64/multiarch/memcpy_chk.S
sysdeps/x86_64/multiarch/memmove.c
sysdeps/x86_64/multiarch/memmove_chk.c
sysdeps/x86_64/multiarch/mempcpy.S
sysdeps/x86_64/multiarch/mempcpy_chk.S