From: Noah Goldstein
Date: Fri, 24 Jun 2022 16:42:12 +0000 (-0700)
Subject: x86: Align entry for memrchr to 64-bytes.
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=17d929ab2eea5fe34e8fc459f1baf28ae8a15ad5;p=thirdparty%2Fglibc.git

x86: Align entry for memrchr to 64-bytes.

The function was tuned around 64-byte entry alignment and performs
better for all sizes with it.

As well, the different code paths were explicitly written to touch the
minimum number of cache lines, i.e. sizes <= 32 touch only the entry
cache line.

(cherry picked from commit 227afaa67213efcdce6a870ef5086200f1076438)
---

diff --git a/sysdeps/x86_64/multiarch/memrchr-avx2.S b/sysdeps/x86_64/multiarch/memrchr-avx2.S
index 5f8e0be18cf..edd8180ba1e 100644
--- a/sysdeps/x86_64/multiarch/memrchr-avx2.S
+++ b/sysdeps/x86_64/multiarch/memrchr-avx2.S
@@ -35,7 +35,7 @@
 # define VEC_SIZE 32
 # define PAGE_SIZE 4096
 	.section SECTION(.text), "ax", @progbits
-ENTRY(MEMRCHR)
+ENTRY_P2ALIGN(MEMRCHR, 6)
 # ifdef __ILP32__
 	/* Clear upper bits.  */
 	and	%RDX_LP, %RDX_LP
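
For context, ENTRY_P2ALIGN takes its second argument as a power of two,
so the 6 here requests 2^6 = 64-byte alignment, one cache line on
current x86-64 parts. Below is a minimal, hypothetical sketch of what
the changed entry roughly reduces to, assuming the macro boils down to
a .p2align directive plus the usual symbol boilerplate; the real macro
in glibc's sysdep headers also emits CFI and CET markers, so this is
illustrative only:

	/* Sketch (not the real glibc macro) of ENTRY_P2ALIGN(MEMRCHR, 6).
	   The body here is a placeholder ret so the snippet assembles.  */
	.text
	.p2align 6		/* 2^6 = 64 bytes: start the function on a
				   cache-line boundary.  */
	.globl	MEMRCHR
	.type	MEMRCHR, @function
MEMRCHR:
	ret			/* Real function body would follow here.  */
	.size	MEMRCHR, .-MEMRCHR

With the entry cache-line aligned, the fall-through path for small
inputs shares the 64-byte line already fetched for the entry itself,
which is why the commit message notes that sizes <= 32 touch only the
entry cache line.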