posix: Fix double-free after allocation failure in regcomp (bug 33185)
If a memory allocation failure occurs during bracket expression
parsing in regcomp, a double-free error may result.
Reported-by: Anastasia Belova <abelova@astralinux.ru>
Co-authored-by: Paul Eggert <eggert@cs.ucla.edu>
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
(cherry picked from commit 7ea06e994093fa0bcca0d0ee2c1db271d8d7885d)
Florian Weimer [Thu, 22 May 2025 12:36:37 +0000 (14:36 +0200)]
Fix error reporting (false negatives) in SGID tests
And simplify the interface of support_capture_subprogram_self_sgid.
Use the existing framework for temporary directories (now with
mode 0700) and directory/file deletion. Handle all execution
errors within support_capture_subprogram_self_sgid. In particular,
this includes test failures because the invoked program did not
exit with exit status zero. Existing tests that expect exit
status 42 are adjusted to use zero instead.
In addition, fix callers not to call exit (0) with test failures
pending (which may mask them, especially when running with --direct).
Florian Weimer [Thu, 13 Feb 2025 20:56:52 +0000 (21:56 +0100)]
elf: Keep using minimal malloc after early DTV resize (bug 32412)
If an auditor loads many TLS-using modules during startup, it is
possible to trigger DTV resizing. Previously, the DTV was marked
as allocated by the main malloc afterwards, even if the minimal
malloc was still in use. With this change, _dl_resize_dtv marks
the resized DTV as allocated with the minimal malloc.
The new test reuses TLS-using modules from other auditing tests.
H.J. Lu [Thu, 15 Feb 2024 11:22:55 +0000 (03:22 -0800)]
sort-makefile-lines.py: Allow '_' in name and "^# name"
'_' is used in Makefile variable names, and many sorted variable lists
end with a "^# name" comment. Relax sort-makefile-lines.py to allow '_'
in names and to accept "^# name" as the end of a variable list. This
fixes BZ #31385.
Arjun Shankar [Fri, 18 Oct 2024 14:03:25 +0000 (16:03 +0200)]
libio: Fix a deadlock after fork in popen
popen modifies its file handler book-keeping under a lock that wasn't
being taken during fork. This meant that a concurrent popen and fork
could end up copying the lock in a "locked" state into the fork child,
where subsequently calling popen would lead to a deadlock due to the
already (spuriously) held lock.
This commit fixes the deadlock by appropriately taking the lock before
fork, and releasing/resetting it in the parent/child after the fork.
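A minimal C sketch of that pattern, with a hypothetical lock standing in
for libio's internal bookkeeping lock:

    #include <pthread.h>
    #include <unistd.h>

    /* Hypothetical stand-in for the lock protecting popen's file chain.  */
    static pthread_mutex_t chain_lock = PTHREAD_MUTEX_INITIALIZER;

    pid_t
    fork_with_chain_lock (void)
    {
      pthread_mutex_lock (&chain_lock);   /* quiesce popen bookkeeping */
      pid_t pid = fork ();
      if (pid == 0)
        /* Child: the lock was copied in a locked state; reset it.  */
        pthread_mutex_init (&chain_lock, NULL);
      else
        /* Parent (including fork failure): just release it.  */
        pthread_mutex_unlock (&chain_lock);
      return pid;
    }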
A new test for concurrent popen and fork is also added. It consistently
hangs (and therefore fails via timeout) without the fix applied.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 9f0d2c0ee6c728643fcf9a4879e9f20f5e45ce5f)
H.J. Lu [Tue, 30 Apr 2024 16:21:16 +0000 (09:21 -0700)]
libio: Sort test variables in Makefile
Sort test variables in libio/Makefile using scripts/sort-makefile-lines.py.
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
(cherry picked from commit ddf71c550a5940deca74cc676f1cae134a891717)
Florian Weimer [Sat, 17 Feb 2024 08:17:04 +0000 (09:17 +0100)]
Linux: Switch back to assembly syscall wrapper for prctl (bug 29770)
Commit ff026950e280bc3e9487b41b460fb31bc5b57721 ("Add a C wrapper for
prctl [BZ #25896]") replaced the assembler wrapper with a C function.
However, on powerpc64le-linux-gnu, the C variadic function
implementation requires extra work in the caller to set up the
parameter save area. Calling a function that needs a parameter save
area without one (because the prototype used indicates the function is
not variadic) corrupts the caller's stack. The Linux manual pages
project documents prctl as a non-variadic function. This has resulted
in various projects over the years using non-variadic prototypes,
including the sanitizer libraries in LLVM and GCC (GCC PR 113728).
This commit switches back to the assembler implementation on most
targets and only keeps the C implementation for x86-64 x32.
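An illustrative reduction of the ABI mismatch (the function here is
hypothetical; glibc's real prototypes differ):

    /* callee.c: defined as variadic.  On powerpc64le the va_arg
       machinery may read the caller's parameter save area.  */
    #include <stdarg.h>
    int
    f (int op, ...)
    {
      va_list ap;
      va_start (ap, op);
      long arg = va_arg (ap, long);
      va_end (ap);
      return (int) arg;
    }

    /* caller.c: declares f as non-variadic, so the compiler omits the
       parameter save area; the variadic callee then accesses stack the
       caller never set up, corrupting the caller's stack.  */
    extern int f (int op, long arg);

    int
    call_it (void)
    {
      return f (1, 2L);
    }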
Florian Weimer [Thu, 13 Mar 2025 05:07:07 +0000 (06:07 +0100)]
nptl: PTHREAD_COND_INITIALIZER compatibility with pre-2.41 versions (bug 32786)
The new initializer and struct layout do not initialize the
__g_signals field in the old struct layout before the change in
commit c36fc50781995e6758cae2b6927839d0157f213c ("nptl: Remove
g_refs from condition variables"). Bring back fields at the end
of struct __pthread_cond_s, so that they are again zero-initialized.
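A simplified C illustration of the failure mode (not the real struct or
initializer):

    /* The condvar is a union over a fixed-size byte buffer.  An
       initializer zero-fills only the union member it names, so when
       fields are dropped from the struct, the bytes where they used to
       live are left indeterminate for automatic-storage objects.  */
    union cond
    {
      struct { unsigned wseq; } new_layout;                  /* field removed */
      struct { unsigned wseq; unsigned g_signals; } old_layout;
      char size[8];
    };
    #define COND_INIT { .new_layout = { 0 } }

    unsigned
    observed_by_old_glibc (void)
    {
      union cond c = COND_INIT;       /* automatic storage */
      return c.old_layout.g_signals;  /* may be garbage before the fix */
    }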
Malte Skarupke [Wed, 4 Dec 2024 13:05:40 +0000 (08:05 -0500)]
nptl: Use all of g1_start and g_signals
The LSB of g_signals was unused. The LSB of g1_start was used to indicate
which group is G2. This was used to always go to sleep in pthread_cond_wait
if a waiter is in G2. A comment earlier in the file says that this is not
correct to do:
"Waiters cannot determine whether they are currently in G2 or G1 -- but they
do not have to because all they are interested in is whether there are
available signals"
I either would have had to update the comment, or get rid of the check. I
chose to get rid of the check. In fact I don't quite know why it was there.
There will never be available signals for group G2, so we didn't need the
special case. Even if there were, this would just be a spurious wake. This
might have caught some cases where the count has wrapped around, but it
wouldn't reliably do that (and even if it did, why would you want to force a
sleep in that case?), and we don't support that many concurrent waiters
anyway. Getting rid of it allows us to use one more bit, making us more
robust to wraparound.
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 91bb902f58264a2fd50fbce8f39a9a290dd23706)
Malte Skarupke [Wed, 4 Dec 2024 13:04:10 +0000 (08:04 -0500)]
nptl: Fix indentation
In my previous change I turned a nested loop into a simple loop. I'm doing
the resulting indentation changes in a separate commit to make the diff on
the previous commit easier to review.
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit ee6c14ed59d480720721aaacc5fb03213dc153da)
Malte Skarupke [Wed, 4 Dec 2024 13:03:44 +0000 (08:03 -0500)]
nptl: Use a single loop in pthread_cond_wait instead of a nested loop
The loop was a little more complicated than necessary. There was only one
break statement out of the inner loop, and the outer loop was nearly empty.
So just remove the outer loop, moving its code to the one break statement in
the inner loop. This allows us to replace all gotos with break statements.
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 929a4764ac90382616b6a21f099192b2475da674)
Malte Skarupke [Wed, 4 Dec 2024 12:56:38 +0000 (07:56 -0500)]
nptl: Remove g_refs from condition variables
This variable used to be needed to wait in group switching until all sleepers
have confirmed that they have woken. This is no longer needed. Nothing waits
on this variable so there is no need to track how many threads are currently
asleep in each group.
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit c36fc50781995e6758cae2b6927839d0157f213c)
Malte Skarupke [Wed, 4 Dec 2024 12:56:13 +0000 (07:56 -0500)]
nptl: Remove unnecessary quadruple check in pthread_cond_wait
pthread_cond_wait was checking whether it was in a closed group no less than
four times. Checking once is enough. Here are the four checks:
1. While spin-waiting. This was dead code: maxspin is set to 0 and has been
for years.
2. Before deciding to go to sleep, and before incrementing grefs: I kept this one.
3. After incrementing grefs. There is no reason to think that the group would
close while we do an atomic increment. Obviously it could close at any
point, but that doesn't mean we have to recheck after every step. This
check was no better than check 2, except that it has to do more work.
4. When we find ourselves in a group that has a signal. We only get here after
we check that we're not in a closed group. There is no need to check again.
The check would only have helped in cases where the compare_exchange in the
next line would also have failed. Relying on the compare_exchange is fine.
Removing the duplicate checks clarifies the code.
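A generic sketch of the pattern that survives the cleanup: check once,
then let the compare-exchange catch any late close (names and bit usage
illustrative, not glibc's):

    #include <stdatomic.h>
    #include <stdbool.h>

    static bool
    try_consume_signal (_Atomic unsigned *g_signals)
    {
      unsigned s = atomic_load (g_signals);
      if (s & 1)                /* check 2: group already closed */
        return false;
      /* No recheck needed: if the group closes here, the CAS below
         fails and the caller simply re-evaluates.  */
      return atomic_compare_exchange_weak (g_signals, &s, s - 2);
    }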
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 4f7b051f8ee3feff1b53b27a906f245afaa9cee1)
Malte Skarupke [Wed, 4 Dec 2024 12:55:50 +0000 (07:55 -0500)]
nptl: Remove unnecessary catch-all-wake in condvar group switch
This wake is unnecessary. We only switch groups after every sleeper in a group
has been woken. Sure, they may take a while to actually wake up and may still
hold a reference, but waking them a second time doesn't speed that up. Instead
this just makes the code more complicated and may hide problems.
In particular this safety wake wouldn't even have helped with the bug that was
fixed by Barrus' patch: The bug there was that pthread_cond_signal would not
switch g1 when it should, so we wouldn't even have entered this code path.
Signed-off-by: Malte Skarupke <malteskarupke@fastmail.fm>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit b42cc6af11062c260c7dfa91f1c89891366fed3e)
Frank Barrus [Wed, 4 Dec 2024 12:55:02 +0000 (07:55 -0500)]
pthreads NPTL: lost wakeup fix 2
This fixes the lost wakeup (from a bug in signal stealing) with a change
in the usage of g_signals[] in the condition variable internal state.
It also completely eliminates the concept and handling of signal stealing,
as well as the need for signalers to block to wait for waiters to wake
up every time there is a G1/G2 switch. This greatly reduces the average
and maximum latency for pthread_cond_signal.
The g_signals[] field now contains a signal count that is relative to
the current g1_start value. Since it is a 32-bit field, and the LSB is
still reserved (though not currently used anymore), it has a 31-bit value
that corresponds to the low 31 bits of the sequence number in g1_start.
(since g1_start also has an LSB flag, this means bits 31:1 in g_signals
correspond to bits 31:1 in g1_start, plus the current signal count)
By making the signal count relative to g1_start, there is no longer
any ambiguity or A/B/A issue, and thus any checks before blocking,
including the futex call itself, are guaranteed not to block if the G1/G2
switch occurs, even if the signal count remains the same. This allows
initially safely blocking in G2 until the switch to G1 occurs, and
then transitioning from G1 to a new G1 or G2, and always being able to
distinguish the state change. This removes the race condition and A/B/A
problems that otherwise occurred if a late (pre-empted) waiter were to
resume just as the futex call attempted to block on g_signal since
otherwise there was no last opportunity to re-check things like whether
the current G1 group was already closed.
By fixing these issues, the signal stealing code can be eliminated,
since there is no concept of signal stealing anymore. The code to block
for all waiters to exit g_refs can also be removed, since any waiters
that are still in the g_refs region can be guaranteed to safely wake
up and exit. If there are still any left at this time, they are all
sent one final futex wakeup to ensure that they are not blocked any
longer, but there is no need for the signaller to block and wait for
them to wake up and exit the g_refs region.
The signal count is then effectively "zeroed" but since it is now
relative to g1_start, this is done by advancing it to a new value that
can be observed by any pending blocking waiters. Any late waiters can
always tell the difference, and can thus just cleanly exit if they are
in a stale G1 or G2. They can never steal a signal from the current
G1 if they are not in the current G1, since the signal value that has
to match in the cmpxchg has the low 31 bits of the g1_start value
contained in it, and that's first checked, and then it won't match if
there's a G1/G2 change.
Note: the 31-bit sequence number used in g_signals is designed to
handle wrap-around when checking the signal count, but if the entire
31-bit wraparound (2 billion signals) occurs while there is still a
late waiter that has not yet resumed, and it happens to then match
the current g1_start low bits, and the pre-emption occurs after the
normal "closed group" checks (which are 64-bit) but then hits the
futex syscall and signal consuming code, then an A/B/A issue could
still result and cause an incorrect assumption about whether it
should block. This particular scenario seems unlikely in practice.
Note that once awake from the futex, the waiter would notice the
closed group before consuming the signal (since that's still a 64-bit
check that would not be aliased in the wrap-around in g_signals),
so the biggest impact would be blocking on the futex until the next
full wakeup from a G1/G2 switch.
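A sketch of the relative encoding (bit layout simplified, LSB reserved;
this is illustrative, not the actual code):

    #include <stdint.h>

    /* Bits 31:1 of g_signals hold the low bits of g1_start plus the
       signal count, so a G1/G2 switch changes the value a waiter's
       futex call must match even if the count itself is unchanged.  */
    static uint32_t
    expected_g_signals (uint32_t g1_start_low, uint32_t signal_count)
    {
      return (g1_start_low & ~1u) + 2 * signal_count;
    }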
Signed-off-by: Frank Barrus <frankbarrus_sw@shaggy.cc>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 1db84775f831a1494993ce9c118deaf9537cc50a)
H.J. Lu [Sat, 12 Apr 2025 15:37:29 +0000 (08:37 -0700)]
x86: Detect Intel Diamond Rapids
Detect Intel Diamond Rapids and tune it similar to Intel Granite Rapids.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
(cherry picked from commit de14f1959ee5f9b845a7cae43bee03068b8136f0)
Scan xstate IDs up to the maximum supported xstate ID. Remove the
separate AMX xstate calculation. Instead, exclude the AMX space from
the start of TILECFG to the end of TILEDATA in xsave_state_size.
Completed validation on SKL/SKX/SPR/SDE and compared the xsave state
size with the "ld.so --list-diagnostics" option; no regressions.
Co-Authored-By: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
(cherry picked from commit 70b648855185e967e54668b101d24704c3fb869d)
Noah Goldstein [Wed, 14 Aug 2024 06:37:30 +0000 (14:37 +0800)]
x86: Use `Avoid_Non_Temporal_Memset` to control non-temporal path
This is just a refactor and there should be no behavioral change from
this commit.
The goal is to make `Avoid_Non_Temporal_Memset` a more universal knob
for controlling whether we use non-temporal memset rather than having
extra logic based on vendor.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit b93dddfaf440aa12f45d7c356f6ffe9f27d35577)
Noah Goldstein [Fri, 14 Jun 2024 18:01:58 +0000 (13:01 -0500)]
x86: Fix value for `x86_memset_non_temporal_threshold` when it is undesirable
When we don't want to use non-temporal stores for memset, we set
`x86_memset_non_temporal_threshold` to SIZE_MAX.
The current code, however, was using `maximum_non_temporal_threshold`
as the upper bound, which is `SIZE_MAX >> 4`, so we ended up with a
value of `0`.
The fix is to just use `SIZE_MAX` as the upper bound when setting the
tunable.
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 5b54a33435e5533653a9956728f2de9d16a3b4ee)
Joe Damato [Fri, 7 Jun 2024 23:04:47 +0000 (23:04 +0000)]
x86: Enable non-temporal memset tunable for AMD
In commit 46b5e98ef6f1 ("x86: Add separate non-temporal tunable for
memset") a tunable threshold for enabling non-temporal memset was added,
but only for Intel hardware.
Since that commit, new benchmark results suggest that non-temporal
memset is beneficial on AMD, as well, so allow this tunable to be set
for AMD.
See:
https://docs.google.com/spreadsheets/d/1opzukzvum4n6-RUVHTGddV6RjAEil4P2uMjjQGLbLcU/edit?usp=sharing
which has been updated to include data using different strategies for
large memset on AMD Zen2, Zen3, and Zen4.
Signed-off-by: Joe Damato <jdamato@fastly.com>
Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
(cherry picked from commit bef2a827a55fc759693ccc5b0f614353b8ad712d)
Noah Goldstein [Fri, 24 May 2024 17:38:51 +0000 (12:38 -0500)]
x86: Add separate non-temporal tunable for memset
The tuning for non-temporal stores for memset vs memcpy is not always
the same. This includes both the exact value and whether non-temporal
stores are profitable at all for a given arch.
This patch adds `x86_memset_non_temporal_threshold`. Currently we
disable non-temporal stores for non-Intel vendors, as the only
benchmarks showing their benefit have been on Intel hardware.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 46b5e98ef6f1b9f4b53851f152ecb8209064b26c)
Florian Weimer [Fri, 28 Mar 2025 08:26:59 +0000 (09:26 +0100)]
x86: Use separate variable for TLSDESC XSAVE/XSAVEC state size (bug 32810)
Previously, the initialization code reused the xsave_state_full_size
member of struct cpu_features for the TLSDESC state size. However,
the tunable processing code assumes that this member has the
original XSAVE (non-compact) state size, so that it can use its
value if XSAVEC is disabled via tunable.
This change uses a separate variable and not a struct member because
the value is only needed in ld.so and the static libc, but not in
libc.so. As a result, struct cpu_features layout does not change,
helping a future backport of this change.
Florian Weimer [Fri, 28 Mar 2025 08:26:06 +0000 (09:26 +0100)]
x86: Skip XSAVE state size reset if ISA level requires XSAVE
If we have to use XSAVE or XSAVEC trampolines, do not adjust the size
information they need. Technically, it is an operator error to try to
run with -XSAVE,-XSAVEC on such builds, but this change here disables
some unnecessary code with higher ISA levels and simplifies testing.
H.J. Lu [Thu, 4 Apr 2024 22:43:50 +0000 (15:43 -0700)]
x86-64: Exclude FMA4 IFUNC functions for -mapxf
When -mapxf is used to build glibc, the resulting glibc will never run
on FMA4 machines. Exclude FMA4 IFUNC functions when -mapxf is used.
This requires GCC which defines __APX_F__ for -mapxf with commit:
Michael Jeanson [Fri, 14 Feb 2025 18:54:22 +0000 (13:54 -0500)]
nptl: clear the whole rseq area before registration
Due to the extensible nature of the rseq area we can't explicitly
initialize fields that are not part of the ABI yet. It was agreed with
upstream that all new fields will be documented as zero initialized by
userspace. Future kernels configured with CONFIG_DEBUG_RSEQ will
validate the content of all fields during registration.
Replace the explicit field initialization with a memset of the whole
rseq area which will cover fields as they are added to future kernels.
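A sketch of the change, with names simplified (the real code clears the
rseq area embedded in struct pthread):

    #include <string.h>

    static void
    init_rseq_area (void *rseq_area, size_t rseq_size)
    {
      /* Zero the whole area, covering future fields, instead of
         assigning cpu_id, rseq_cs and flags individually.  */
      memset (rseq_area, 0, rseq_size);
    }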
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 689a62a4217fae78b9ce0db781dc2a421f2b1ab4)
Wilco Dijkstra [Fri, 13 Dec 2024 15:43:07 +0000 (15:43 +0000)]
math: Improve layout of exp/exp10 data
GCC aligns global data to 16 bytes if their size is >= 16 bytes. This patch
changes the exp_data struct slightly so that the fields are better aligned
and without gaps. As a result on targets that support them, more load-pair
instructions are used in exp. Exp10 is improved by moving invlog10_2N later
so that neglog10_2hiN and neglog10_2loN can be loaded using load-pair.
The exp benchmark improves 2.5%, "144bits" by 7.2%, "768bits" by 12.7% on
Neoverse V2. Exp10 improves by 1.5%.
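A generic illustration of the layout idea (these are not the actual
exp_data fields): keeping related doubles adjacent and gap-free lets
AArch64 fetch each pair with a single ldp.

    #include <stdint.h>

    struct exp_data_like
    {
      double invln2N;
      double shift;
      double negln2hiN;    /* adjacent pair: one load-pair (ldp) */
      double negln2loN;
      double poly[4];      /* ordered to leave no padding gaps */
      uint64_t tab[256];
    };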
Wilco Dijkstra [Tue, 24 Dec 2024 18:01:59 +0000 (18:01 +0000)]
AArch64: Add SVE memset
Add SVE memset based on the generic memset with predicated load for sizes < 16.
Unaligned memsets of 128-1024 are improved by ~20% on average by using aligned
stores for the last 64 bytes. Performance of random memset benchmark improves
by ~2% on Neoverse V1.
GCC aligns global data to 16 bytes if their size is >= 16 bytes. This patch
changes the exp2f_data struct slightly so that the fields are better aligned.
As a result on targets that support them, load-pair instructions accessing
poly_scaled and invln2_scaled are now 16-byte aligned.
Wilco Dijkstra [Mon, 25 Nov 2024 18:43:08 +0000 (18:43 +0000)]
AArch64: Remove zva_128 from memset
Remove ZVA 128 support from memset - the new memset no longer
guarantees count >= 256, which can result in underflow and a
crash if ZVA size is 128 ([1]). Since only one CPU uses a ZVA
size of 128 and its memcpy implementation was removed in commit e162ab2bf1b82c40f29e1925986582fa07568ce8, remove this special
case too.
Improve small memsets by avoiding branches and use overlapping stores.
Use DC ZVA for copies over 128 bytes. Remove unnecessary code for ZVA sizes
other than 64 and 128. Performance of random memset benchmark improves by 24%
on Neoverse N1.
Wilco Dijkstra [Wed, 7 Aug 2024 13:43:47 +0000 (14:43 +0100)]
AArch64: Improve generic strlen
Improve performance by handling another 16 bytes before entering the loop.
Use ADDHN in the loop to avoid SHRN+FMOV when it terminates. Change final
size computation to avoid increasing latency. On Neoverse V1 performance
of the random strlen benchmark improves by 4.6%.
Yat Long Poon [Fri, 3 Jan 2025 19:07:30 +0000 (19:07 +0000)]
AArch64: Improve codegen for SVE logs
Reduce memory accesses by using lanewise MLA, moving constants into a
struct, and reducing the number of MOVPRFXs.
Update maximum ULP error for double log_sve from 1 to 2.
Speedup on Neoverse V1 for log (3%), log2 (5%), and log10 (4%).
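A sketch of the lanewise-MLA idea using ACLE intrinsics (the
coefficient packing is assumed, not the routine's actual data layout):

    #include <arm_sve.h>

    /* One vector carries several polynomial coefficients; an indexed
       MLA selects a lane, so fewer loads and MOVPRFXs are needed.  */
    svfloat64_t
    poly_step (svfloat64_t acc, svfloat64_t r, svfloat64_t coeffs)
    {
      return svmla_lane_f64 (acc, r, coeffs, 0);  /* acc + r * coeffs[0] */
    }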
Joana Cruz [Tue, 17 Dec 2024 14:47:31 +0000 (14:47 +0000)]
AArch64: Improve codegen of AdvSIMD logf function family
Load the polynomial evaluation coefficients into 2 vectors and use lanewise MLAs.
8% improvement in throughput microbenchmark on Neoverse V1 for log2 and log,
and 2% for log10.
Remove spurious ADRP and a few MOVs.
Reduce memory access by using more indexed MLAs in polynomial.
Align notation so that algorithms are easier to compare.
Speedup on Neoverse V1 for log10 (8%), log (8.5%), and log2 (10%).
Update error threshold in AdvSIMD log (now matches SVE log).
Joe Ramsay [Mon, 9 Sep 2024 12:00:01 +0000 (13:00 +0100)]
aarch64: Avoid redundant MOVs in AdvSIMD F32 logs
Since the last operation is destructive, the first argument to the FMA
also has to be the first argument to the special-case in order to
avoid unnecessary MOVs. Reorder arguments and adjust special-case
bounds to facilitate this.
Joe Ramsay [Thu, 2 May 2024 15:43:13 +0000 (16:43 +0100)]
aarch64: Fix AdvSIMD libmvec routines for big-endian
Previously many routines used * to load from vector types stored
in the data table. This is emitted as ldr, which byte-swaps the
entire vector register, and causes bugs for big-endian when not
all lanes contain the same value. When a vector is to be used
this way, it has been replaced with an array and the load with an
explicit ld1 intrinsic, which byte-swaps only within lanes.
As well, many routines previously used non-standard GCC syntax
for vector operations such as indexing into vectors types with []
and assembling vectors using {}. This syntax should not be mixed
with ACLE, as the former does not respect endianness whereas the
latter does. Such examples have been replaced with, for instance,
vcombine_* and vgetq_lane* intrinsics. Helpers which only use the
GCC syntax, such as the v_call helpers, do not need changing as
they do not use intrinsics.
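An illustrative before/after of the load change (the data layout is
assumed):

    #include <arm_neon.h>

    struct data { float poly[4]; };

    /* Before: loading through a vector-typed pointer is emitted as ldr,
       which byte-swaps the entire register on big-endian:
         float32x4_t v = *(const float32x4_t *) d->poly;
       After: an explicit ld1 intrinsic swaps only within lanes, so lane
       order is preserved on both endiannesses.  */
    static inline float32x4_t
    load_poly (const struct data *d)
    {
      return vld1q_f32 (d->poly);
    }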
H.J. Lu [Tue, 17 Dec 2024 10:41:45 +0000 (18:41 +0800)]
Hide all malloc functions from compiler [BZ #32366]
Since -1 isn't a power of two, the compiler may reject it; hide memalign
from Clang 19, which issues an error:
tst-memalign.c:86:31: error: requested alignment is not a power of 2 [-Werror,-Wnon-power-of-two-alignment]
86 | p = memalign (-1, pagesize);
| ^~
tst-memalign.c:86:31: error: requested alignment must be 4294967296 bytes or smaller; maximum alignment assumed [-Werror,-Wbuiltin-assume-aligned-alignment]
86 | p = memalign (-1, pagesize);
| ^~
Update tst-malloc-aux.h to hide all malloc functions and include it in
all malloc tests to prevent the compiler from optimizing out any malloc
functions.
Tested with Clang 19.1.5 and GCC 15 20241206 for BZ #32366.
x86/string: Fixup alignment of main loop in str{n}cmp-evex [BZ #32212]
The loop should be aligned to 32 bytes so that it can ideally run out
of the DSB. This is particularly important on Skylake-Server, where
deficiencies in its DSB implementation make it prone to not being
able to run loops out of the DSB.
This results in a 25% performance degradation for the non-aligned
version.
The fix is to just ensure the code layout is such that the loop is
aligned. (Which was previously the case but was accidentally dropped
in 84e7c46df).
NB: The fix was actually 64-byte alignment. This is because 64-byte
alignment generally produces more stable performance than 32-byte
aligned code (cache line crosses can affect perf), so if we are going
past 16-byte alignment, might as well go to 64. 64-byte alignment
also matches most other functions we over-align, so it creates a
common point of optimization.
Times are reported as ratio of Time_With_Patch /
Time_Without_Patch. Lower is better.
The values being reported is the geometric mean of the ratio across
all tests in bench-strcmp and bench-strncmp.
Note this patch is only attempting to improve the Skylake-Server
strcmp for long strings. The rest of the numbers are only to test for
regressions.
The 2.6% regression on TGL-strcmp is due to slowdowns caused by
changes in alignment of the code handling small sizes (mostly in the
page-cross logic). These should be safe to ignore because 1) we
previously only 16-byte aligned the function, so this behavior is not
new and was essentially up to chance before this patch, and 2) this
type of alignment-related regression on small sizes really only comes
up in tight micro-benchmark loops and is unlikely to have any effect
on real-world performance.
Noah Goldstein [Fri, 24 May 2024 17:38:50 +0000 (12:38 -0500)]
x86: Improve large memset perf with non-temporal stores [RHEL-29312]
Previously we used `rep stosb` for all medium/large memsets. This is
notably worse than non-temporal stores for large (above a
few MBs) memsets.
See:
https://docs.google.com/spreadsheets/d/1opzukzvum4n6-RUVHTGddV6RjAEil4P2uMjjQGLbLcU/edit?usp=sharing
For data using different strategies for large memset on ICX and SKX.
Using non-temporal stores can be up to 3x faster on ICX and 2x faster
on SKX. Historically, these numbers would not have been so good
because of the zero-over-zero writeback optimization that `rep stosb`
is able to do. But, the zero-over-zero writeback optimization has been
removed as a potential side-channel attack, so there is no longer any
good reason to rely only on `rep stosb` for large memsets. On the flip
side, non-temporal writes avoid fetching the destination into cache
via RFO requests, saving memory bandwidth.
All of the other changes to the file are to re-organize the
code-blocks to maintain "good" alignment given the new code added in
the `L(stosb_local)` case.
The results from running the GLIBC memset benchmarks on TGL-client for
N=20 runs:
Geometric Mean across the suite New / Old EVEX256: 0.979
Geometric Mean across the suite New / Old EVEX512: 0.979
Geometric Mean across the suite New / Old AVX2 : 0.986
Geometric Mean across the suite New / Old SSE2 : 0.979
Most of the cases are essentially unchanged, this is mostly to show
that adding the non-temporal case didn't add any regressions to the
other cases.
The results on the memset-large benchmark suite on TGL-client for N=20
runs:
Geometric Mean across the suite New / Old EVEX256: 0.926
Geometric Mean across the suite New / Old EVEX512: 0.925
Geometric Mean across the suite New / Old AVX2 : 0.928
Geometric Mean across the suite New / Old SSE2 : 0.924
So roughly a 7.5% speedup. This is lower than what we see on servers
(likely because clients typically have faster single-core bandwidth so
saving bandwidth on RFOs is less impactful), but still advantageous.
Full test-suite passes on x86_64 w/ and w/o multiarch.
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
(cherry picked from commit 5bf0ab80573d66e4ae5d94b094659094336da90f)
Florian Weimer [Tue, 17 Dec 2024 17:12:03 +0000 (18:12 +0100)]
x86: Avoid integer truncation with large cache sizes (bug 32470)
Some hypervisors report 1 TiB L3 cache size. This results
in some variables incorrectly getting zeroed, causing crashes
in memcpy/memmove because invariants are violated.
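A generic illustration of the truncation hazard (variable names
hypothetical):

    #include <stdio.h>

    int
    main (void)
    {
      unsigned long shared = 1UL << 40;         /* 1 TiB L3 reported */
      unsigned int threshold = shared * 3 / 4;  /* truncated to 32 bits */
      printf ("%u\n", threshold);               /* prints 0 */
      return 0;
    }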
H.J. Lu [Thu, 5 Dec 2024 00:39:44 +0000 (08:39 +0800)]
math: Exclude internal math symbols for tests [BZ #32414]
Since internal tests don't have access to internal symbols in libm,
exclude them for internal tests. Also make tst-strtod5 and tst-strtod5i
depend on $(libm) to support older versions of GCC which can't inline
copysign family functions. This fixes BZ #32414.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Sunil K Pandey <skpgkp2@gmail.com>
(cherry picked from commit 5df09b444835fca6e64b3d4b4a5beb19b3b2ba21)
Sam James [Mon, 9 Dec 2024 23:11:25 +0000 (23:11 +0000)]
malloc: add indirection for malloc(-like) functions in tests [BZ #32366]
GCC 15 introduces allocation dead code removal (DCE) for PR117370 in r15-5255-g7828dc070510f8. This breaks various glibc tests which want
to assert various properties of the allocator without doing anything
obviously useful with the allocated memory.
Alexander Monakov rightly pointed out that we can and should do better
than passing -fno-malloc-dce to paper over the problem. Not least because
GCC 14 already does such DCE where there's no testing of malloc's return
value against NULL, and LLVM has such optimisations too.
Handle this by providing malloc (and friends) wrappers with a volatile
function pointer to obscure from the compiler that we're calling
malloc (et al.).
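A sketch of that indirection, assuming a wrapper along these lines:

    #include <stdlib.h>

    /* The volatile function pointer keeps the compiler from recognizing
       the callee as malloc, so the allocation cannot be removed as dead
       even if the result is never meaningfully used.  */
    static void *(*volatile malloc_ptr) (size_t) = malloc;

    static void *
    test_malloc (size_t n)
    {
      return malloc_ptr (n);
    }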
H.J. Lu [Sat, 18 May 2024 03:00:38 +0000 (20:00 -0700)]
Pass -nostdlib -nostartfiles together with -r [BZ #31753]
Since -r in GCC 6/7/8 doesn't imply -nostdlib -nostartfiles, update the
link-static-libc.out rule to also pass -nostdlib -nostartfiles. This
fixes BZ #31753.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 2be3352f0b1ebaa39596393fffe1062275186669)
Michael Jeanson [Wed, 20 Nov 2024 19:15:42 +0000 (14:15 -0500)]
nptl: initialize cpu_id_start prior to rseq registration
When adding explicit initialization of rseq fields prior to
registration, I glossed over the fact that 'cpu_id_start' is also
documented as initialized by user-space.
While current kernels don't validate the content of this field on
registration, future ones could.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
(cherry picked from commit d9f40387d3305d97e30a8cf8724218c42a63680a)
Michael Jeanson [Thu, 7 Nov 2024 21:23:49 +0000 (22:23 +0100)]
nptl: initialize rseq area prior to registration
Per the rseq syscall documentation, 3 fields are required to be
initialized by userspace prior to registration: 'cpu_id', 'rseq_cs'
and 'flags'. Since we have no guarantee that 'struct pthread'
is cleared on all architectures, explicitly set those 3 fields prior to
registration.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 97f60abd25628425971f07e9b0e7f8eec0741235)
Florian Weimer [Mon, 28 Oct 2024 13:45:30 +0000 (14:45 +0100)]
elf: Change ldconfig auxcache magic number (bug 32231)
In commit c628c2296392ed3bf2cb8d8470668e64fe53389f (elf: Remove
ldconfig kernel version check), the layout of auxcache entries
changed because the osversion field was removed from
struct aux_cache_file_entry. However, AUX_CACHEMAGIC was not
changed, so existing files are still used, potentially leading
to unintended ldconfig behavior. This commit changes AUX_CACHEMAGIC,
so that the file is regenerated.
Reported-by: DJ Delorie <dj@redhat.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 0a536f6e2f76e3ef581b3fd9af1e5cf4ddc7a5a2)
Joseph Myers [Fri, 20 Sep 2024 23:25:32 +0000 (23:25 +0000)]
Make tst-strtod-underflow type-generic
The test tst-strtod-underflow covers various edge cases close to the
underflow threshold for strtod (especially cases where underflow on
architectures with after-rounding tininess detection depends on the
rounding mode). Make it use the type-generic machinery, with
corresponding test inputs for each supported floating-point format, so
that other functions in the strtod family are tested for underflow
edge cases as well.
H.J. Lu [Tue, 30 Apr 2024 16:57:12 +0000 (09:57 -0700)]
Add crt1-2.0.o for glibc 2.0 compatibility tests
Starting from glibc 2.1, crt1.o contains _IO_stdin_used which is checked
by _IO_check_libio to provide binary compatibility for glibc 2.0. Add
crt1-2.0.o for tests against glibc 2.0. Define tests-2.0 for glibc 2.0
compatibility tests. Add and update glibc 2.0 compatibility tests for
stderr, matherr and pthread_kill.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 5f245f3bfbe61b2182964dafb94907e38284b806)
Joseph Myers [Fri, 20 Sep 2024 23:24:45 +0000 (23:24 +0000)]
Add tests of more strtod special cases
There is very little test coverage of inputs to strtod-family
functions that don't contain anything that can be parsed as a number
(one test of ".y" in tst-strtod2), and none that I can see of skipping
initial whitespace. Add some tests of these things to tst-strtod2.
Joseph Myers [Fri, 20 Sep 2024 23:24:02 +0000 (23:24 +0000)]
Add more tests of strtod end pointer
Although there are some tests in tst-strtod2 and tst-strtod3 for the
end pointer provided by strtod when it doesn't parse the whole string,
they aren't very thorough. Add tests of more such cases to
tst-strtod2.
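An example of the end-pointer behavior such tests cover:

    #include <assert.h>
    #include <stdlib.h>
    #include <string.h>

    int
    main (void)
    {
      char *end;
      double d = strtod ("  1.5x", &end);  /* leading whitespace skipped */
      assert (d == 1.5);
      assert (strcmp (end, "x") == 0);     /* end -> first unparsed char */

      char s[] = ".y";
      d = strtod (s, &end);                /* nothing parseable */
      assert (d == 0.0 && end == s);       /* no conversion: end == input */
      return 0;
    }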
Joseph Myers [Fri, 20 Sep 2024 23:23:13 +0000 (23:23 +0000)]
Make tst-strtod2 and tst-strtod5 type-generic
Some of the strtod tests use type-generic machinery in tst-strtod.h to
test the strto* functions for all floating types, while others only
test double even when the tests are in fact meaningful for all
floating types.
Convert tst-strtod2 and tst-strtod5 to use the type-generic machinery
so they test all floating types. I haven't tried to convert them to
use newer test interfaces in other ways, just made the changes
necessary to use the type-generic machinery.
powerpc64le: Build new strtod tests with long double ABI flags (bug 32145)
This fixes several test failures:
=====FAIL: stdlib/tst-strtod1i.out=====
Locale tests
all OK
Locale tests
all OK
Locale tests
strtold("1,5") returns -6,38643e+367 and not 1,5
strtold("1.5") returns 1,5 and not 1
strtold("1.500") returns 1 and not 1500
strtold("36.893.488.147.419.103.232") returns 1500 and not 3,68935e+19
Locale tests
all OK
Suggested-by: Joseph Myers <josmyers@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit cc3e743fc09ee6fca45767629df9cbcbe1feba82)
Joseph Myers [Wed, 4 Sep 2024 13:21:23 +0000 (13:21 +0000)]
Do not set errno for overflowing NaN payload in strtod/nan (bug 32045)
As reported in bug 32045, it's incorrect for strtod/nan functions to
set errno based on overflowing payload (strtod should only set errno
for overflow / underflow of its actual result, and potentially if
nothing in the string can be parsed as a number at all; nan should be
a pure function that never sets it). Save and restore errno around
the internal strtoull call and add associated test coverage.
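A sketch of the save/restore pattern (simplified from the actual
payload parsing):

    #include <errno.h>
    #include <stdlib.h>

    static unsigned long long
    parse_payload (const char *s)
    {
      int saved_errno = errno;
      /* strtoull may set ERANGE on an overflowing payload, but strtod
         and nan must not let that escape to the caller.  */
      unsigned long long payload = strtoull (s, NULL, 0);
      errno = saved_errno;
      return payload;
    }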
Joseph Myers [Wed, 4 Sep 2024 13:20:18 +0000 (13:20 +0000)]
Improve NaN payload testing
There are two separate sets of tests of NaN payloads in glibc:
* libm-test-{get,set}payload* verify that getpayload, setpayload,
setpayloadsig and __builtin_nan functions are consistent in their
payload handling.
* test-nan-payload verifies that strtod-family functions and the
not-built-in nan functions are consistent in their payload handling.
Nothing, however, connects the two sets of functions (i.e., verifies
that strtod / nan are consistent with getpayload / setpayload /
__builtin_nan).
Improve test-nan-payload to check actual payload value with getpayload
rather than just verifying that the strtod and nan functions produce
the same NaN. Also check that the NaNs produced aren't signaling and
extend the tests to cover _FloatN / _FloatNx.
Joseph Myers [Tue, 27 Aug 2024 20:41:54 +0000 (20:41 +0000)]
Make __strtod_internal tests type-generic
Some of the strtod tests use type-generic machinery in tst-strtod.h to
test the strto* functions for all floating types, while others only
test double even when the tests are in fact meaningful for all
floating types.
Convert the tests of the internal __strtod_internal interface to cover
all floating types. I haven't tried to convert them to use newer test
interfaces in other ways, just made the changes necessary to use the
type-generic machinery. As an internal interface, there are no
aliases for different types with the same ABI (however,
__strtold_internal is defined even if long double has the same ABI as
double), so macros used by the type-generic testing code are redefined
as needed to avoid expecting such aliases to be present.
Joseph Myers [Tue, 27 Aug 2024 12:41:02 +0000 (12:41 +0000)]
Fix strtod subnormal rounding (bug 30220)
As reported in bug 30220, the implementation of strtod-family
functions has a bug in the following case: the input string would,
with infinite exponent range, take one more bit to represent than is
available in the normal precision of the return type; the value
represented is in the subnormal range; and there are no nonzero bits
in the value, below those that can be represented in subnormal
precision, other than the least significant bit and possibly the
0.5ulp bit. In this case, round_and_return ends up discarding the
least significant bit.
Fix by saving that bit to merge into more_bits (it can't be merged in
at the time it's computed, because more_bits mustn't include this bit
in the case of after-rounding tininess detection checking if the
result is still subnormal when rounded to normal precision, so merging
this bit into more_bits needs to take place after that check).
Joseph Myers [Tue, 27 Aug 2024 12:38:01 +0000 (12:38 +0000)]
More thoroughly test underflow / errno in tst-strtod-round
Add tests of underflow in tst-strtod-round, and thus also test for
errno being unchanged when there is neither overflow nor underflow.
The errno setting before the function call to test for being unchanged
is adjusted to set errno to 12345 instead of 0, so that any bugs where
strtod sets errno to 0 would be detected.
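A sketch of that check, assuming the <support/check.h> test macros:

    #include <errno.h>
    #include <stdlib.h>
    #include <support/check.h>

    static void
    check_errno_unchanged (const char *input)
    {
      errno = 12345;   /* not 0, so a spurious errno = 0 is detected */
      (void) strtod (input, NULL);
      TEST_COMPARE (errno, 12345);
    }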
This doesn't add any new test inputs for tst-strtod-round, and in
particular doesn't cover the edge cases of underflow the way
tst-strtod-underflow does (none of the existing test inputs for
tst-strtod-round actually exercise cases that have underflow with
before-rounding tininess detection but not with after-rounding
tininess detection), but at least it provides some coverage (as per
the recent discussions) that ordinary non-overflowing non-underflowing
inputs to these functions do not set errno.
Joseph Myers [Wed, 14 Aug 2024 17:15:46 +0000 (17:15 +0000)]
Test errno setting on strtod overflow in tst-strtod-round
We have no tests that errno is set to ERANGE on overflow of
strtod-family functions (we do have some tests for underflow, in
tst-strtod-underflow). Add such tests to tst-strtod-round.
The reading loops did not check for read failures. Addresses
a static analysis report.
Manually tested by compiling a program with the GCC's
-finstrument-functions option, running it with
“LD_PRELOAD=debug/libpcprofile.so PCPROFILE_OUTPUT=output-file”,
and reviewing the output of “debug/pcprofiledump output-file”.
Florian Weimer [Thu, 1 Aug 2024 21:31:30 +0000 (23:31 +0200)]
elf: Avoid re-initializing already allocated TLS in dlopen (bug 31717)
The old code used l_init_called as an indicator for whether TLS
initialization was complete. However, it is possible that
TLS for an object is initialized, written to, and then dlopen
for this object is called again, and l_init_called is not true at
this point. Previously, this resulted in TLS being initialized
twice, discarding any interim writes (technically introducing a
use-after-free bug even).
This commit introduces an explicit per-object flag, l_tls_in_slotinfo.
It indicates whether _dl_add_to_slotinfo has been called for this
object. This flag is used to avoid double-initialization of TLS.
In update_tls_slotinfo, the first_static_tls micro-optimization
is removed because preserving the initialization flag for subsequent
use by the second loop for static TLS is a bit complicated, and
another per-object flag does not seem to be worth it. Furthermore,
the l_init_called flag is dropped from the second loop (for static
TLS initialization) because l_need_tls_init on its own prevents
double-initialization.
The remaining l_init_called usage in resize_scopes and update_scopes
is just an optimization due to the use of scope_has_map, so it is
not changed in this commit.
The isupper check ensures that libc.so.6 TLS is not reverted.
Such a revert happens if l_need_tls_init is not cleared in
_dl_allocate_tls_init for the main_thread case, now that
l_init_called is not checked anymore in update_tls_slotinfo
in elf/dl-open.c.
Reported-by: Jonathon Anderson <janderson@rice.edu>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 5097cd344fd243fb8deb6dec96e8073753f962f9)
elf: Support recursive use of dynamic TLS in interposed malloc
It turns out that quite a few applications use bundled mallocs that
have been built to use global-dynamic TLS (instead of the recommended
initial-exec TLS). The previous workaround from
commit afe42e935b3ee97bac9a7064157587777259c60e ("elf: Avoid some
free (NULL) calls in _dl_update_slotinfo") does not fix all
encountered cases, unfortunately.
This change avoids the TLS generation update for recursive use
of TLS from a malloc that was called during a TLS update. This
is possible because an interposed malloc has a fixed module ID and
TLS slot. (It cannot be unloaded.) If an initially-loaded module ID
is encountered in __tls_get_addr and the dynamic linker is already
in the middle of a TLS update, use the outdated DTV, thus avoiding
another call into malloc. It's still necessary to update the
DTV to the most recent generation, to get out of the slow path,
which is why the check for recursion is needed.
The bookkeeping is done using a global counter instead of per-thread
flag because TLS access in the dynamic linker is tricky.
All this will go away once the dynamic linker stops using malloc
for TLS, likely as part of a change that pre-allocates all TLS
during pthread_create/dlopen.
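A heavily simplified sketch of the recursion guard (all names
hypothetical):

    /* Bookkeeping uses a global counter because TLS access inside the
       dynamic linker is exactly what is being guarded.  */
    extern void *update_dtv_and_get_addr (void);
    extern void *get_addr_from_outdated_dtv (void);

    static unsigned int tls_update_depth;

    void *
    tls_get_addr_slow (void)
    {
      if (tls_update_depth > 0)
        /* Recursive call from an interposed malloc during a TLS update:
           serve it from the outdated DTV instead of calling malloc again.  */
        return get_addr_from_outdated_dtv ();

      ++tls_update_depth;
      void *result = update_dtv_and_get_addr ();  /* may call malloc */
      --tls_update_depth;
      return result;
    }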
nptl: Use <support/check.h> facilities in tst-setuid3
Remove the local FAIL macro in favor of FAIL_EXIT1 from <support/check.h>,
which provides equivalent reporting, with the name of the file and the
line number of the failure site additionally included. Remove FAIL_ERR
altogether and include ": %m" explicitly in the format string supplied
to FAIL_EXIT1, as there seems little value in having a separate macro
just for this.
posix: Use <support/check.h> facilities in tst-truncate and tst-truncate64
Remove the local FAIL macro in favor of FAIL_RET from <support/check.h>,
which provides equivalent reporting, with the name of the file of the
failure site additionally included, for the tst-truncate-common core
shared between the tst-truncate and tst-truncate64 tests.
ungetc: Fix backup buffer leak on program exit [BZ #27821]
If a file descriptor is left unclosed and is cleaned up by _IO_cleanup
on exit, its backup buffer remains unfreed, registering as a leak in
valgrind. This is not strictly an issue since (1) the program should
ideally be closing the stream once it's not in use and (2) the program
is about to exit anyway, so keeping the backup buffer around a wee bit
longer isn't a real problem. Free it anyway to keep valgrind happy
when the streams in question are the standard ones, i.e. stdout, stdin
or stderr.
Also, the _IO_have_backup macro checks for _IO_save_base,
which is a roundabout way to check for a backup buffer instead of
directly looking for _IO_backup_base. The roundabout check breaks when
the main get area has not been used and user pushes a char into the
backup buffer with ungetc. Fix this to use the _IO_backup_base
directly.
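An illustration of the check change (macro names prefixed here to keep
the snippet self-contained; the real macro is _IO_have_backup):

    #include <stdio.h>

    /* Before: a roundabout check via the save area.  */
    #define OLD_have_backup(fp) ((fp)->_IO_save_base != NULL)
    /* After: look for the backup buffer directly.  */
    #define NEW_have_backup(fp) ((fp)->_IO_backup_base != NULL)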
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 3e1d8d1d1dca24ae90df2ea826a8916896fc7e77)
ungetc: Fix uninitialized read when putting into unused streams [BZ #27821]
When ungetc is called on an unused stream, the backup buffer is
allocated without the main get area being present. This results in
every subsequent ungetc (as the stream remains in the backup area)
checking uninitialized memory in the backup buffer when trying to put a
character back into the stream.
Avoid comparing the input character with buffer contents when in backup
to avoid this uninitialized read. The uninitialized read is harmless in
this context since the location is promptly overwritten with the input
character, thus fulfilling ungetc functionality.
Also adjust wording in the manual to drop the paragraph that says glibc
cannot do multiple ungetc back to back since with this change, ungetc
can actually do this.
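A small usage example of the now-supported back-to-back pushback:

    #include <stdio.h>

    int
    main (void)
    {
      /* Two pushbacks in a row, onto a previously unused stream.  */
      ungetc ('b', stdin);
      ungetc ('a', stdin);
      int c1 = getchar ();
      int c2 = getchar ();
      printf ("%c%c\n", c1, c2);  /* prints "ab" */
      return 0;
    }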
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit cdf0f88f97b0aaceb894cc02b21159d148d7065c)