while (getline (&line, &len, fp) != -1)
  ;
/* Process LINE. */
After that commit, line[0] would be equal to '\0' instead of containing
the last line of the file as it did before that commit. A recent POSIX
issue clarified that both the behavior before and the behavior after
that commit are allowed, since the contents of LINE are unspecified
after -1 is returned [1]. However, some programs rely on the previous
behavior.
This patch null terminates the buffer upon getdelim/getline's initial
allocation. This is compatible with previous glibc versions, while also
protecting the caller from reading uninitialized memory if the file is
empty, as long as getline/getdelim does the initial allocation.
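As an illustration (not part of the patch), a caller that relies on this
guarantee might look like the following sketch; the input file here is
arbitrary:

  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    FILE *fp = fopen ("/dev/null", "r");   /* possibly empty input */
    char *line = NULL;                     /* let getline do the initial allocation */
    size_t len = 0;

    if (fp == NULL)
      return 1;

    while (getline (&line, &len, fp) != -1)
      ;
    /* With the buffer null-terminated on the initial allocation, this
       does not read uninitialized memory even for empty input.  */
    if (line != NULL && line[0] != '\0')
      fputs (line, stdout);

    free (line);
    fclose (fp);
    return 0;
  }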
i386: Fix fmod/fmodf/remainder/remainderf for gcc-12
The __builtin_fmod{f} and __builtin_remainder{f} builtins were added in
GCC 13, and the minimum supported GCC is 12. This patch adds a configure test
to check whether the compiler enables inlining for fmod/remainder, and
uses inline assembly if not.
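A minimal sketch of the kind of probe such a configure test could build
at -O2 (the exact configure mechanics here are an assumption): if the
generated code still calls fmod/remainder, the builtin is not inlined
and the inline-assembly fallback is selected.

  /* Probe translation unit; no call to fmod/remainder should remain
     in the generated assembly if the builtins are inlined.  */
  double
  probe_fmod (double x, double y)
  {
    return __builtin_fmod (x, y);
  }

  double
  probe_remainder (double x, double y)
  {
    return __builtin_remainder (x, y);
  }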
Wilco Dijkstra [Thu, 4 Dec 2025 15:17:25 +0000 (15:17 +0000)]
nptl: Check alignment of pthread structs
Report assertion failure if the alignment of external pthread structs is
lower than the internal version. This triggers on type mismatches like
in BZ #33632.
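A minimal sketch of such a check, assuming the public/internal pair
pthread_mutex_t and struct __pthread_mutex_s (the actual commit may
cover more types and use glibc's internal assertion helpers):

  #include <pthread.h>

  /* Fail the build if the public type is less strictly aligned than
     the internal structure it wraps.  */
  _Static_assert (_Alignof (pthread_mutex_t)
                  >= _Alignof (struct __pthread_mutex_s),
                  "pthread_mutex_t alignment lower than internal struct");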
James Chesterman [Fri, 28 Nov 2025 11:18:53 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD atanhf
Optimise AdvSIMD atanhf by vectorising the special case.
There are asymptotes at x = -1 and x = 1, so return inf for these.
For values where |x| > 1, return NaN.
R.Throughput difference on V2 with GCC@15:
58-60% improvement in special cases.
No regression in fast pass.
James Chesterman [Fri, 28 Nov 2025 11:18:52 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD asinhf
Optimise AdvSIMD asinhf by vectorising the special case.
For values greater than 0x1p64, scale the input down first.
This is because the output will overflow with inputs greater than
or equal to this value as there is a squaring operation in the
algorithm.
To scale, do:
2 asinh(sqrt[(x-1)/2])
Because:
2 asinh(x) = +-acosh(2x^2 + 1),
and inverting this relation for x >= 1 gives:
acosh(x) = 2 asinh(sqrt[(x-1)/2]).
Using asinh in place of acosh on the left-hand side was found to
approximate asinh(x) very closely for large inputs x.
R.Throughput difference on V2 with GCC@15:
25-58% improvement in special cases.
4% regression in fast pass.
James Chesterman [Fri, 28 Nov 2025 11:18:51 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD acoshf
Optimise AdvSIMD acoshf by vectorising the special case.
For values greater than 0x1p64, scale the input down first.
This is because the output will overflow with inputs greater than
or equal to this value as there is a squaring operation in the
algorithm.
To scale, do:
2acosh(sqrt[(x+1)/2])
Because:
acosh(x) = 1/2 acosh(2x^2 - 1) for x >= 1.
Apply opposite operations in opposite order for x, and you get:
acosh(x) = 2acosh(sqrt[(x+1)/2]).
R.Throughput difference on V2 with GCC@15:
30-49% improvement in special cases.
2% regression in fast pass.
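A scalar sketch of this scaling (acoshf_large is an illustrative name;
the real code applies the same idea with AdvSIMD vectors and its own
polynomial core, and the asinhf special case uses (x-1)/2 likewise):

  #include <math.h>

  /* For x >= 0x1p64f the core algorithm would overflow when it squares
     its input; acosh(x) = 2 acosh(sqrt((x + 1) / 2)) roughly halves the
     exponent first.  */
  static float
  acoshf_large (float x)
  {
    return 2.0f * acoshf (sqrtf ((x + 1.0f) / 2.0f));
  }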
Yury Khrustalev [Wed, 29 Oct 2025 16:14:06 +0000 (16:14 +0000)]
aarch64: Add tests for glibc.cpu.aarch64_bti behaviour
Check that the new tunable changes behaviour correctly:
* When BTI is enforced, any unmarked binary that is loaded
results in an error: either an abort or dlopen error when
this binary is loaded via dlopen.
* When BTI is not enforced, it is OK to load an unmarked
binary.
Yury Khrustalev [Mon, 24 Nov 2025 13:23:35 +0000 (13:23 +0000)]
aarch64: Add configure checks for BTI support
We add configure checks for 3 things:
- Compiler (both CC and TEST_CC) supports -mbranch-protection=bti.
- Linker supports -z force-bti.
- The toolchain supplies object files and target libraries with
the BTI marking.
All three must be true in order for the tests to be valid, so
we check all flags and set the makefile variable accordingly.
James Chesterman [Wed, 19 Nov 2025 21:40:43 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log10
Optimise AdvSIMD log10 by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 21:40:42 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log2
Optimise AdvSIMD log2 by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 21:40:41 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log
Optimise AdvSIMD log by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 14:11:40 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log10f
Optimise AdvSIMD log10f by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique will sqrt the input then multiply the output
by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2log(sqrt(x))
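A scalar sketch of this subnormal scaling (logf_subnormal is an
illustrative name; the vector routines do the equivalent with AdvSIMD
operations and the appropriate base for log2f/log10f):

  #include <math.h>

  /* For 0 < x < 0x1p-126f, sqrt maps the subnormal into the normal
     range; log(x) = 2 log(sqrt(x)) undoes the scaling.  */
  static float
  logf_subnormal (float x)
  {
    return 2.0f * logf (sqrtf (x));
  }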
James Chesterman [Wed, 19 Nov 2025 14:11:39 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log2f
Optimise AdvSIMD log2f by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique used will sqrt the input then multiply the
output by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2log(sqrt(x))
James Chesterman [Wed, 19 Nov 2025 14:11:38 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD logf
Optimise AdvSIMD logf by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique used will sqrt the input then multiply the
output by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2log(sqrt(x))
James Chesterman [Wed, 19 Nov 2025 14:11:37 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log1pf
Optimise AdvSIMD log1pf by vectorising the special case and by
reducing the range of values passed to the special case.
Previously, high values such as 0x1.1p127 were treated as special
cases, but now the special cases are only for inputs that are:
- less than or equal to -1
- +/- INFINITY
- +/- NaN
checks __WORDSIZE == 32 to decide if int128 should be used, which breaks
x32, which has int128 and __WORDSIZE == 32. Check BITS_PER_MP_LIMB == 32
instead of __WORDSIZE == 32. This fixes BZ #33677.
Tested on x32, x86-64 and i686.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
uses 64-bit atomic operations on sem_t if 64-bit atomics are supported.
But sem_t may be aligned to 32-bit on 32-bit architectures.
1. Add a macro, SEM_T_ALIGN, for sem_t alignment.
2. Add a macro, HAVE_UNALIGNED_64B_ATOMICS. Define it if unaligned 64-bit
atomic operations are supported.
3. Add a macro, USE_64B_ATOMICS_ON_SEM_T. Define to 1 if 64-bit atomic
operations are supported and SEM_T_ALIGN is at least 8-byte aligned or
HAVE_UNALIGNED_64B_ATOMICS is defined.
4. Assert that size and alignment of sem_t are not lower than those of
the internal struct new_sem.
5. Check USE_64B_ATOMICS_ON_SEM_T, instead of USE_64B_ATOMICS, when using
64-bit atomic operations on sem_t.
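A hypothetical sketch of the macro layering described above; the macro
names follow the text, while the concrete per-architecture values are
placeholders and USE_64B_ATOMICS is assumed to come from the existing
atomics machinery:

  /* Per-architecture alignment of the public sem_t, e.g. 4 on many
     32-bit targets and 8 on 64-bit targets.  */
  #define SEM_T_ALIGN 4

  /* Defined by an architecture if misaligned 64-bit atomics work.  */
  /* #define HAVE_UNALIGNED_64B_ATOMICS 1 */

  #if USE_64B_ATOMICS \
      && (SEM_T_ALIGN >= 8 || defined HAVE_UNALIGNED_64B_ATOMICS)
  # define USE_64B_ATOMICS_ON_SEM_T 1
  #else
  # define USE_64B_ATOMICS_ON_SEM_T 0
  #endif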
Yury Khrustalev [Wed, 12 Nov 2025 10:57:24 +0000 (10:57 +0000)]
scripts: Support custom Git URLs in build-many-glibcs.py
Use environment variables to provide mirror URLs to checkout
sources from Git. Each component has a corresponding env var
that will be used if it's present: <component>_GIT_MIRROR.
Note that '<component>' should be upper case, e.g. GLIBC.
Co-authored-by: Carlos O'Donell <carlos@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Yury Khrustalev [Wed, 12 Nov 2025 10:55:58 +0000 (10:55 +0000)]
scripts: Support custom FTP mirror URL in build-many-glibcs.py
Allow using custom mirror URLs to download tarballs from a mirror
of ftp.gnu.org via the FTP_GNU_ORG_MIRROR env variable (default
value is 'https://ftp.gnu.org').
Yury Khrustalev [Mon, 1 Dec 2025 10:09:14 +0000 (10:09 +0000)]
nptl: tests: Fix test-wrapper use in tst-dl-debug-tid.sh
The test wrapper script was used twice: once to run the test
command and a second time within the test command itself, which
is unnecessary and results in false errors when running this test.
The allocation_index was being incremented before checking if mmap()
succeeds. If mmap() fails, allocation_index would still be incremented,
creating a gap in the allocations tracking array and making
allocation_index inconsistent with the actual number of successful
allocations.
This fix moves the allocation_index increment to after the mmap()
success check, ensuring it only increments when an allocation actually
succeeds. This maintains proper tracking for leak detection and
prevents gaps in the allocations array.
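A minimal sketch of the corrected pattern (the names and the array
bound are illustrative, not the actual code):

  #include <stddef.h>
  #include <sys/mman.h>

  #define MAX_ALLOCATIONS 64

  static void *allocations[MAX_ALLOCATIONS];
  static size_t allocation_index;

  static void *
  track_mmap (size_t size)
  {
    void *ptr = mmap (NULL, size, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (ptr == MAP_FAILED)
      return NULL;                            /* Index left untouched on failure.  */
    allocations[allocation_index++] = ptr;    /* Increment only after success.  */
    return ptr;
  }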
Joseph Myers [Thu, 27 Nov 2025 19:32:49 +0000 (19:32 +0000)]
Define C23 header version macros
C23 defines library macros __STDC_VERSION_<header>_H__ to indicate
that a header has support for new / changed features from C23. Now
that all the required library features are implemented in glibc,
define these macros. I'm not sure this is sufficiently much of a
user-visible feature to be worth a mention in NEWS.
Tested for x86_64.
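A hypothetical usage sketch (the macro name and its 202311L value follow
C23; whether a program wants to test the feature this way is up to the
program):

  #include <string.h>

  int
  main (void)
  {
    char buf[16] = "secret";
  #if defined __STDC_VERSION_STRING_H__ && __STDC_VERSION_STRING_H__ >= 202311L
    /* C23 additions such as memset_explicit are available.  */
    memset_explicit (buf, 0, sizeof buf);
  #else
    memset (buf, 0, sizeof buf);
  #endif
    return buf[0];
  }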
There are various optional C23 features we don't yet have, of which I
might look at the Annex H ones (floating-point encoding conversion
functions and _Float16 functions) next.
* Optional time bases TIME_MONOTONIC, TIME_ACTIVE, TIME_THREAD_ACTIVE.
See
<https://sourceware.org/pipermail/libc-alpha/2023-June/149264.html>
- we need to review / update that patch. (I think patch 2/2,
inventing new names for all the nonstandard CLOCK_* supported by the
Linux kernel, is rather more dubious.)
* Updating conform/ tests for C23.
* Defining the rounding mode macro FE_TONEARESTFROMZERO for RISC-V (as
far as I know, the only architecture supported by glibc that has
hardware support for this rounding mode for binary floating point)
and supporting it throughout glibc and its tests (especially the
string/numeric conversions in both directions that explicitly handle
each possible rounding mode, and various tests that do likewise).
* Annex H floating-point encoding conversion functions. (It's not
entirely clear which are optional even given support for Annex H;
there's some wording applied inconsistently about only being
required when non-arithmetic interchange formats are supported; see
the comments I raised on the WG14 reflector on 23 Oct 2025.)
* _Float16 functions (and other header and testcase support for this
type).
* Decimal floating-point support.
* Fully supporting __int128 and unsigned __int128 as integer types
wider than intmax_t, as permitted by C23. Would need doing in
coordination with GCC, see GCC bug 113887 for more discussion of
what's involved.
The current implementation relies on setting the rounding mode for
different calculations (FE_TOWARDZERO) to obtain correctly rounded
results. For most CPUs, this adds significant performance overhead
because it requires executing a typically slow instruction (to
get/set the floating-point status), necessitates flushing the
pipeline, and breaks some compiler assumptions/optimizations.
The original implementation adds tests to handle underflow in corner
cases, whereas this implementation uses a different strategy that
checks both the mantissa and the result to determine whether the
result is not subject to double rounding.
I tested this implementation on various targets (x86_64, i686, arm,
aarch64, powerpc), including some by manually disabling the compiler
instructions.
Yury Khrustalev [Mon, 24 Nov 2025 11:20:57 +0000 (11:20 +0000)]
aarch64: make GCS configure checks aarch64-only
We only need to enable GCS tests on AArch64 targets; previously, however,
the configure checks for GCS support in the compiler and linker were
added for all targets, which was inefficient.
To enable tests for GCS we need 4 things to be true:
- Compiler supports GCS branch protection.
- Test compiler supports GCS branch protection.
- Linker supports GCS marking of binaries.
- The CRT objects provided by the toolchain have GCS marking.
To check for the latter, we add a new macro to aclocal.m4 that greps
the output of readelf.
We check all four and then put the result in one make variable to
simplify checks in makefiles.
The current implementation relies on setting the rounding mode for
different calculations (first to FE_TONEAREST and then to FE_TOWARDZERO)
to obtain correctly rounded results. For most CPUs, this adds a significant
performance overhead since it requires executing a typically slow
instruction (to get/set the floating-point status), it necessitates
flushing the pipeline, and breaks some compiler assumptions/optimizations.
This patch introduces a new implementation originally written by Szabolcs
for musl, which utilizes mostly integer arithmetic. Floating-point
arithmetic is used to raise the expected exceptions, without the need for
fenv.h operations.
I added some changes compared to the original code:
* Fixed some signaling NaN issues when the third argument is NaN.
* Use math_uint128.h for the 64-bit multiplication operation. It allows
the compiler to use 128-bit types where available, which enables some
optimizations on certain targets (for instance, MIPS64).
* Fixed an arm32 issue where the libgcc routine might not respect the
rounding mode [1]. This can also be used on other targets to optimize
the conversion from int64_t to double.
* Use -fexcess-precision=standard on i686.
I tested this implementation on various targets (x86_64, i686, arm, aarch64,
powerpc), including some by manually disabling the compiler instructions.
To enable “longlong.h” removal, the umul_ppmm is moved to a gmp-arch.h.
The generic implementation now uses a static inline, which provides
better type checking than the GNU extension to cast the asm constraint
(and it works better with clang).
Most architectures use the generic implementation, which is
expanded from a macro, except for alpha, arm, hppa, x86, m68k, mips,
powerpc, and sparc. For 32-bit architectures the compiler generates
good enough code using uint64_t types, while for 64-bit architectures
the patch leverages the math_u128.h definitions that use 128-bit
integers when available (all 64-bit architectures on GCC 15).
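A hypothetical sketch of what the generic static-inline form might look
like on a 64-bit limb target with unsigned __int128 available (the real
function names and limb types are glibc-internal):

  #include <stdint.h>

  static inline void
  umul_ppmm_generic (uint64_t *hi, uint64_t *lo, uint64_t a, uint64_t b)
  {
    /* Full 64x64 -> 128-bit product, split into high and low limbs.  */
    unsigned __int128 p = (unsigned __int128) a * b;
    *hi = (uint64_t) (p >> 64);
    *lo = (uint64_t) p;
  }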
To enable “longlong.h” removal, add_ssaaaa and sub_ssaaaa are moved to
gmp-arch.h. The generic implementation now uses a static inline. This
provides better type checking than the GNU extension, which casts the
asm constraint; and it also works better with clang.
Most architectures use the generic implementation, with the exception of
arc, arm, hppa, x86, m68k, powerpc, and sparc. For 32-bit architectures
the compiler generates good enough code using uint64_t types, while
for 64-bit architectures the patch leverages the math_u128.h definitions
that use 128-bit integers when available (all 64-bit architectures
on GCC 15).
The strongly typed implementation required some changes. I adjusted
_FP_W_TYPE, _FP_WS_TYPE, and _FP_I_TYPE to use the same type as
mp_limb_t on aarch64, powerpc64le, x86_64, and riscv64. This basically
means using “long” instead of “long long.”
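A hypothetical sketch of a generic two-limb addition in static-inline
form, assuming 64-bit limbs (names here are illustrative, not the actual
gmp-arch.h interface):

  #include <stdint.h>

  /* sh:sl = ah:al + bh:bl with carry propagation between limbs.  */
  static inline void
  add_ssaaaa_generic (uint64_t *sh, uint64_t *sl,
                      uint64_t ah, uint64_t al,
                      uint64_t bh, uint64_t bl)
  {
    uint64_t lo = al + bl;
    *sh = ah + bh + (lo < al);   /* (lo < al) is the carry out of the low limb.  */
    *sl = lo;
  }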
To enable “longlong.h” removal, the udiv_qrnnd is moved to a gmp-arch.h
file. It allows each architecture to implement its own arch-specific
optimizations. The generic implementation now uses a static inline,
which provides better type checking than the GNU extension to cast the
asm constraint (and it works better with clang).
Most architectures use the generic implementation, which is
expanded from a macro, except for alpha, x86, m68k, sh, and sparc.
I kept alpha, which uses out-of-line implementations, and x86,
where there is no easy way to use the div{q} instruction from C code.
For the rest, the compiler generates good enough code.
The hppa also provides arch-specific implementations, but they are not
routed in “longlong.h” and thus never used.
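A hypothetical sketch of the generic static-inline form on a 64-bit limb
target with unsigned __int128, assuming (as udiv_qrnnd traditionally
does) that the high limb is already smaller than the divisor:

  #include <stdint.h>

  /* Divide the two-limb value n1:n0 by d, producing quotient and
     remainder; requires n1 < d so the quotient fits in one limb.  */
  static inline void
  udiv_qrnnd_generic (uint64_t *q, uint64_t *r,
                      uint64_t n1, uint64_t n0, uint64_t d)
  {
    unsigned __int128 n = ((unsigned __int128) n1 << 64) | n0;
    *q = (uint64_t) (n / d);
    *r = (uint64_t) (n % d);
  }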
Arjun Shankar [Fri, 14 Nov 2025 17:31:46 +0000 (18:31 +0100)]
malloc: Add threaded variants of single-threaded malloc tests
Single-threaded malloc tests exercise only the SINGLE_THREAD_P paths in
the malloc implementation. This commit runs variants of these tests in
a multi-threaded environment in order to exercise the alternate code
paths in the same test scenarios, thus potentially improving coverage.
$(test)-threaded-main and $(test)-threaded-worker variants are
introduced for most single-threaded malloc tests (with a small number of
exceptions). The -main variants run the base test in a main thread
while the test environment has an alternate thread running, whereas the
-worker variants run the test in an alternate thread while the main
thread waits on it.
The tests themselves are unmodified, and the change is accomplished by
using -DTEST_IN_THREAD at compile time, which instructs support/
infrastructure to run the test while an alternate thread waits on it.
Arjun Shankar [Fri, 14 Nov 2025 17:31:45 +0000 (18:31 +0100)]
support: Add support for running tests in a multi-threaded environment
It can be useful to be able to write a single-threaded test but run it
as part of a multi-threaded program simply to exercise glibc
synchronization code paths, e.g. the malloc implementation.
This commit adds support to enable this kind of testing. Tests that
define TEST_IN_THREAD, either as TEST_THREAD_MAIN or TEST_THREAD_WORKER,
and then use support infrastructure (by including test-driver.c) will be
accordingly run in either the main thread, or in a second "worker"
thread while the other thread waits.
This can be used in new tests, or to easily make and run copies of
existing tests without modifying the tests themselves.
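A hypothetical sketch of how a test might opt in (macro spellings taken
from the text above; the exact usage is an assumption, and the file only
builds inside glibc's test harness):

  /* Run the unmodified single-threaded test body in a worker thread
     while the main thread waits on it.  */
  #define TEST_IN_THREAD TEST_THREAD_WORKER

  static int
  do_test (void)
  {
    /* ... existing single-threaded test body ... */
    return 0;
  }

  #include <support/test-driver.c>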
clang generates internal calls for some _chk symbols, so add internal
aliases for them, and stub some with rtld-stubbed-symbols to avoid
ld.so linker issues.
Joseph Myers [Thu, 20 Nov 2025 19:30:27 +0000 (19:30 +0000)]
Implement C23 const-preserving standard library macros
C23 makes various standard library functions, that return a pointer
into an input array, into macros that return a pointer to const when
the relevant argument passed to the macro is a pointer to const. (The
requirement is for macros, with the existing function types applying
when macro expansion is suppressed. When a null pointer constant is
passed, such as integer 0, that's the same as a pointer to non-const.)
Implement this feature. This only applies to C, not C++, since such
macros are not an appropriate way of doing this for C++ and all the
affected functions other than bsearch have overloads to implement an
equivalent feature for C++ anyway. Nothing is done to apply such a
change to any non-C23 functions with the same property of returning a
pointer into an input array.
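As a rough illustration of the semantics (not glibc's actual
implementation, which may rely on compiler extensions), a _Generic-based
macro for strchr could look like the sketch below; my_strchr is a
stand-in name so it does not clash with the library's own declaration,
and note that the argument is expanded more than once, the limitation
mentioned at the end of this entry:

  #include <string.h>

  /* const argument -> pointer to const; anything else (including a null
     pointer constant such as 0) -> pointer to non-const, as in C23.  */
  #define my_strchr(s, c)                                         \
    _Generic ((s),                                                \
              const char *: (const char *) strchr ((s), (c)),     \
              default: strchr ((s), (c)))

  int
  main (void)
  {
    const char *cs = "hello";
    char buf[] = "hello";
    const char *p = my_strchr (cs, 'l');   /* pointer to const */
    char *q = my_strchr (buf, 'l');        /* pointer to non-const */
    return p == NULL || q == NULL;
  }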
The feature is also disabled when _LIBC is defined, since there are
various places in glibc that either redefine these identifiers as
macros, or define the functions themselves, and would need changing to
work in the presence of these macro definitions. A natural question
is whether we should in fact change those places and not disable the
macro definitions for _LIBC. If so, we'd need a solution for the
places in glibc that define the macro *before* including the relevant
header (in order in effect to disable the header declaration of the
function by renaming that declaration).
One testcase has #undef added to avoid conflicting with this feature
and another has const added; -Wno-discarded-qualifiers is added for
building zic (but could be removed once there's a new upstream tzcode
release that's const-safe with this C23 change and glibc has updated
to code from that new release). Probably other places in glibc proper
would need const added if we remove the _LIBC conditionals.
Another question would be whether some GCC extension should be added
to support this feature better with macros that only expand each
argument once (as well as reducing duplication of diagnostics for bad
usages such as non-pointer and pointer-to-volatile-qualified
arguments).
Although binutils has supported --no-undefined-version for a long time
(319416359200, back in 2002), --undefined-version was only added more
recently (27fb6a1a7fcd, in 2.40).
Wilco Dijkstra [Fri, 29 Aug 2025 12:47:54 +0000 (12:47 +0000)]
malloc: Use _int_free_chunk in tcache_thread_shutdown
Directly call _int_free_chunk during tcache shutdown to avoid recursion.
Calling __libc_free on a block from tcache gets flagged as a double free,
and tcache_double_free_verify checks every tcache chunk (quadratic
overhead).
H. Peter Anvin [Tue, 18 Nov 2025 18:21:18 +0000 (10:21 -0800)]
linux/termios: test the kernel-side termios canonicalization
Verify that the kernel side of the termios interface gets the various
speed fields set according to our current canonicalization policy.
[ v2.1: fix formatting - Adhemerval Netto ]
[ v4: fix typo in patch description - Dan Horák ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> (v2.1) Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Pierre Blanchard [Tue, 18 Nov 2025 15:09:05 +0000 (15:09 +0000)]
AArch64: Fix and improve SVE pow(f) special cases
powf:
Update scalar special case function to best use new interface.
pow:
Make specialcase NOINLINE to prevent str/ldr leaking in fast path.
Remove dependency on sv_call2, as the new callback impl is not a
performance gain.
Replace with vectorised specialcase since structure of scalar
routine is fairly simple.
Throughput gain of about 5-10% on V1 for large values and 25% for subnormal `x`.
Arjun Shankar [Thu, 13 Nov 2025 20:26:08 +0000 (21:26 +0100)]
malloc: Simplify tst-free-errno munmap failure test
The Linux specific test-case in tst-free-errno was backing up malloc
metadata for a large mmap'd block, overwriting the block with its own
mmap, then restoring malloc metadata and calling free to force an munmap
failure. However, the backed up pages containing metadata can
occasionally be overlapped by the overwriting mmap, leading to a
metadata corruption.
This commit replaces the Linux-specific test case with a simpler,
generic three-block allocation, expecting the kernel to coalesce the
VMAs, and then causes fragmentation to trigger the same failure.
Stefan Liebler [Tue, 28 Oct 2025 14:21:18 +0000 (15:21 +0100)]
Remove support for lock elision.
The support for lock elision was already deprecated with glibc 2.42:
commit 77438db8cfa6ee66b3906230156bdae11c49a195
"Mark support for lock elision as deprecated."
See also discussions:
https://sourceware.org/pipermail/libc-alpha/2025-July/168492.html
This patch removes the architecture specific support for lock elision
for x86, powerpc and s390 by removing the elision-conf.h, elision-conf.c,
elision-lock.c, elision-timed.c, elision-unlock.c, elide.h, htm.h/hle.h files.
Those generic files are also removed.
The architecture specific structures are adjusted and the elision fields are
marked as unused. See struct_mutex.h files.
Furthermore, in struct_rwlock.h, the leftover __rwelision field was also removed.
It was originally removed with commit 0377a7fde6dfcc078dda29a1225d7720a0931357
"nptl: Remove rwlock elision definitions"
and by chance reintroduced with commit 7df8af43ad1cd8ce527444de50bee6f35eebe071
"nptl: Add struct_rwlock.h"
H. Peter Anvin [Mon, 20 Oct 2025 20:42:10 +0000 (13:42 -0700)]
linux/termios: factor out the kernel interface from termios_internal.h
Factor out the internal kernel interface from termios_internal.h, so
that it can be used in test code without causing breakage due to glibc
internals used in headers.
[ v3: fix Alpha build breakage ]
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
H. Peter Anvin [Mon, 20 Oct 2025 20:42:09 +0000 (13:42 -0700)]
linux/termios: clear k_termios.c_cflag & CIBAUD for non-split speed [BZ 33340]
After getting more experience with the various broken direct-to-ioctl
termios2 hacks using Fedora 43 beta, I have found a fair number of
cases where the software would fail to set, or clear CIBAUD for
non-split-speed operation.
Thus it seems it will help improve compatibility to clear the kernel-side
version of c_cflag & CIBAUD (having the same meaning to the Linux
kernel as the speed 0 has for cfsetibaud(), i.e. force the input speed
to equal the output speed) for non-split-speed operation, rather than
having it explicitly equal the output speed in CBAUD.
When writing the code that went into glibc 2.42 I had considered this
issue, and had to make an educated guess which way would be more
likely to break fewer things. Unfortunately, it appears I guessed
wrong.
A third option would be to *always* set CIBAUD to __BOTHER, even for
the standard baud rates. However, that is an even bigger departure
from legacy behavior, whereas this variant mostly preserves current
behavior in terms of under what conditions buggy utilities will
continue to work.
This change is in tcsetattr() rather than
___termios2_canonicalize_speeds(), as it should not be run for
tcgetattr(); that would break split speed support for the legacy
interface versions of cfgetispeed() and cfsetispeed().
[ v2: fixed comment style ]
Resolves: BZ #33340 Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
Pádraig Brady [Fri, 14 Nov 2025 13:58:58 +0000 (13:58 +0000)]
posix: execvpe: fix UMR with file > NAME_MAX [BZ #33627]
* posix/execvpe.c (__execvpe_common): Since strnlen doesn't inspect
beyond NAME_MAX, and NAME_MAX does not cover the NUL, we need
to explicitly check for the NUL. I.e. the existing check,
file_len - 1 > NAME_MAX, was never true. This check is required
so that we're guaranteed that file_len includes the NUL, as we
depend on that in the following memcpy to properly terminate
the file buffer passed to execve(). Otherwise that call will trigger
a UMR when inspecting the passed file, which can be seen with valgrind.
Note returning ENAMETOOLONG early here for FILE names > NAME_MAX
will also avoid redundant processing of ENAMETOOLONG on each entry
in $PATH, after the change in [BZ #33626] is applied.
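A hypothetical illustration of the kind of check being described (not
the actual patch; the helper name and error handling are made up):

  #include <errno.h>
  #include <limits.h>
  #include <string.h>

  static int
  check_file_name (const char *file)
  {
    /* strnlen never reports more than NAME_MAX, so the terminating NUL
       has to be checked explicitly before trusting that file_len covers it.  */
    size_t file_len = strnlen (file, NAME_MAX) + 1;
    if (file[file_len - 1] != '\0')
      {
        errno = ENAMETOOLONG;
        return -1;
      }
    return 0;
  }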
configure: Remove check for redirection of built-in functions
The check was initially used to define HAVE_BUILTIN_REDIRECTION, which
enables or not libc_hidden_builtin_proto support. It was later removed
with 3ce1f2959437e952b9db4eaeed2407424f11a4d1, making the feature
mandatory. The configure check was kept as a transition knob.
The current minimum gcc and linker always support this, as does clang with
some extra care. Also, missing hidden_proto/hidden_def support is
already flagged in the check-localplt test.
Work around the clang limitation wrt inline functions and attribute
definitions, where it does not allow adding a new attribute if a
function is already defined:
clang on x86_64 fails to build s_fabsf128.c with:
../sysdeps/ieee754/float128/../ldbl-128/s_fabsl.c:32:1: error: attribute declaration must precede definition [-Werror,-Wignored-attributes]
32 | libm_alias_ldouble (__fabs, fabs)
| ^
../sysdeps/generic/libm-alias-ldouble.h:63:38: note: expanded from macro 'libm_alias_ldouble'
63 | #define libm_alias_ldouble(from, to) libm_alias_ldouble_r (from, to, )
| ^
../sysdeps/ieee754/float128/float128_private.h:133:43: note: expanded from macro 'libm_alias_ldouble_r'
133 | #define libm_alias_ldouble_r(from, to, r) libm_alias_float128_r (from, to, r)
| ^
../sysdeps/ieee754/float128/s_fabsf128.c:5:3: note: expanded from macro 'libm_alias_float128_r'
5 | static_weak_alias (from ## f128 ## r, to ## f128 ## r); \
| ^
./../include/libc-symbols.h:166:46: note: expanded from macro 'static_weak_alias'
166 | # define static_weak_alias(name, aliasname) weak_alias (name, aliasname)
| ^
./../include/libc-symbols.h:154:38: note: expanded from macro 'weak_alias'
154 | # define weak_alias(name, aliasname) _weak_alias (name, aliasname)
| ^
./../include/libc-symbols.h:156:52: note: expanded from macro '_weak_alias'
156 | extern __typeof (name) aliasname __attribute__ ((weak, alias (#name))) \
| ^
../include/math.h:134:1: note: previous definition is here
134 | fabsf128 (_Float128 x)
If the compiler does not support __USE_EXTERN_INLINES, we need to route
the fabsf128 call to an internal symbol.
Work around the clang limitation wrt inline functions and attribute
definitions, where it does not allow adding a new attribute if a
function is already defined:
Building with clang triggers multiple issues with how the ifunc macros
are used:
The compiler may redirect truncf calls to __truncf instead of inlining it
(clang does this, for instance). USE_TRUNCF_BUILTIN is 1 to indicate that
truncf should be inlined; in this case, we don't want the truncf
redirection:
1. For each math function which may be inlined, we define
With this change, if USE_TRUNCF_BUILTIN is 0, we get:
float (truncf) (float) asm ("__truncf");
truncf will be redirected to __truncf.
And for USE_TRUNCF_BUILTIN 1, we get:
float (inline_truncf) (float) asm ("__truncf");
In both cases either truncf will be inlined or the internal alias
(__truncf) will be called.
It is not required for all math-use-builtin symbols, only the ones
defined in math.h. It also allows removing all the math-use-builtin
inclusions, since the header is now implicitly included by math.h.
For MIPS, some math-use-builtin headers include sysdep.h, and this
in turn includes a lot of extra headers that do not allow ldbl-128
code to override alias definitions (math.h will include
some stdlib.h definitions). The math-use-builtin headers only require
__mips_isa_rev, so move the definition to sgidefs.h.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com> Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
Florian Weimer [Mon, 17 Nov 2025 10:15:13 +0000 (11:15 +0100)]
Update COPYING, COPYING.LIB from gnulib, using gnulib file names
The new file names are COPYINGv2 and COPYING.LESSERv2. Lots of
copyright headers mention COPYING.LIB, so add a symbolic link.
(This is not the first symbolic link in the repository, so this
should be fine.)
Florian Weimer [Mon, 17 Nov 2025 10:15:13 +0000 (11:15 +0100)]
Add COPYINGv3 with the GPL version 3 text
The license is referenced in various headers, so we should ship it.
The text was copied from gnulib commit d64d66cc4897d605f543257dcd0,
file doc/COPYINGv3.
Samuel Thibault [Sun, 16 Nov 2025 14:09:11 +0000 (14:09 +0000)]
htl: move pthread_create into libc
This is notably needed for the main thread structure to be always
initialized so that some pthread functions can work from the main thread
without other threads, e.g. pthread_cancel.