Luke Shumaker [Wed, 15 Nov 2017 19:31:32 +0000 (20:31 +0100)]
linux ttyname: Update a reference to kernel docs for kernel 4.10
Linux 4.10 moved many of the documentation files around.
Linux 4.10 came out between the time the patch adding the comment (commit 15e9a4f378c8607c2ae1aa465436af4321db0e23) was submitted and the time
it was applied (in February, January, and March 2017, respectively).
Luke Shumaker [Wed, 15 Nov 2017 19:28:40 +0000 (20:28 +0100)]
manual: Update to mention ENODEV for ttyname and ttyname_r
Commit 15e9a4f378c8607c2ae1aa465436af4321db0e23 introduced ENODEV as a possible
error condition for ttyname and ttyname_r. Update the manual to mention this GNU
extension.
ia64: Fix thread stack allocation permission set (BZ #21672)
This patch fixes ia64 failures on thread exit by calling madvise on the
required area, taking into consideration its disjoint stacks
(NEED_SEPARATE_REGISTER_STACK).  Also, the snippet that sets up the
madvise call to advise the kernel that the area won't be used in the
near future is moved to allocatestack.c (for consistency, to keep all
stack management functions in one place).
Checked on x86_64-linux-gnu and i686-linux-gnu for sanity (no code
changes are expected for architectures that do not define
NEED_SEPARATE_REGISTER_STACK), and Sergei Trofimovich
<slyfox@gentoo.org> reports that it fixes the ia64-linux-gnu failures.
[BZ #21672]
* nptl/allocatestack.c [_STACK_GROWS_DOWN] (setup_stack_prot):
Set to use !NEED_SEPARATE_REGISTER_STACK as well.
(advise_stack_range): New function.
* nptl/pthread_create.c (START_THREAD_DEFN): Move logic to mark the
stack as no longer required to advise_stack_range in allocatestack.c.
The default semantics of the mmap2 syscall is to take the offset in
4096-byte units.  However, the m68k and ia64 mmap2 implementations take
it in units of the configured page size, which can differ between those
architectures.
This patch fixes the m68k runtime discovery of the mmap2 offset unit
and adds the ia64 definition to find it at runtime.
Checked the basic tst-mmap and tst-mmap-offset on m68k (the system is
configured with 4k pages, so the current code already passes there) and
did a sanity check on x86_64-linux-gnu (which should not be affected by
this change).  Sergei also states that the ia64 loader now works
correctly with this change.
Adhemerval Zanella <adhemerval.zanella@linaro.org>
Sergei Trofimovich <slyfox@inbox.ru>
* sysdeps/unix/sysv/linux/m68k/mmap_internal.h (MMAP2_PAGE_SHIFT):
Rename to MMAP2_PAGE_UNIT.
* sysdeps/unix/sysv/linux/mmap.c: Include mmap_internal iff
__OFF_T_MATCHES_OFF64_T is not defined.
* sysdeps/unix/sysv/linux/mmap_internal.h (page_unit): Declare as
uint64_t.
(MMAP2_PAGE_UNIT) [MMAP2_PAGE_UNIT == -1]: Redefine to page_unit.
(page_unit) [MMAP2_PAGE_UNIT != -1]: Remove definition.
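For illustration, a hedged sketch of the offset handling this implies
(simplified; the helper name and the use of getauxval (AT_PAGESZ) for
the runtime discovery are assumptions for the example, not glibc's
exact mmap.c/mmap_internal.h code):

  #include <errno.h>
  #include <stdint.h>
  #include <sys/auxv.h>

  /* On most ports the mmap2 offset unit is a fixed 4096 bytes; on m68k
     and ia64 the kernel uses its configured page size, so the unit has
     to be discovered at run time (here, illustratively, via AT_PAGESZ).  */
  static uint64_t mmap2_page_unit;   /* 0 until discovered.  */

  static long
  off_to_mmap2_units (uint64_t offset)
  {
    if (mmap2_page_unit == 0)
      mmap2_page_unit = getauxval (AT_PAGESZ);

    /* A byte offset that is not a multiple of the unit cannot be
       expressed as an mmap2 offset and must be rejected with EINVAL.  */
    if (offset & (mmap2_page_unit - 1))
      {
        errno = EINVAL;
        return -1;
      }
    return (long) (offset / mmap2_page_unit);
  }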
James Clarke [Tue, 12 Dec 2017 14:17:10 +0000 (12:17 -0200)]
ia64: Add ipc_priv.h header to set __IPC_64 to zero
When running strace, IPC_64 was set in the command, but ia64 is
an architecture where CONFIG_ARCH_WANT_IPC_PARSE_VERSION *isn't* set
in the kernel, so ipc_parse_version just returns IPC_64 without
clearing the IPC_64 bit in the command.
* sysdeps/unix/sysv/linux/ia64/ipc_priv.h: New file defining
__IPC_64 to 0 to avoid IPC_64 being set.
hooks.c: In function ‘realloc_check’:
hooks.c:352:14: error: ‘magic_p’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
*magic_p ^= 0xFF;
Arjun Shankar [Thu, 30 Nov 2017 12:31:45 +0000 (13:31 +0100)]
Fix integer overflow in malloc when tcache is enabled [BZ #22375]
When the per-thread cache is enabled, __libc_malloc uses request2size (which
does not perform an overflow check) to calculate the chunk size from the
requested allocation size. This leads to an integer overflow causing malloc
to incorrectly return the last successfully allocated block when called with
a very large size argument (close to SIZE_MAX).
This commit uses checked_request2size instead, removing the overflow.
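A minimal standalone sketch of the failure mode (the macros below are
simplified stand-ins for glibc's internal request2size and
checked_request2size, with illustrative alignment constants):

  #include <stdint.h>
  #include <stdio.h>

  #define ALIGN_MASK (2 * sizeof (size_t) - 1)
  #define MIN_CHUNK  (4 * sizeof (size_t))

  /* Unchecked conversion: wraps around for requests near SIZE_MAX.  */
  static size_t request2size (size_t req)
  {
    size_t sz = (req + sizeof (size_t) + ALIGN_MASK) & ~ALIGN_MASK;
    return sz < MIN_CHUNK ? MIN_CHUNK : sz;
  }

  /* Checked conversion: reject the request instead of wrapping.  */
  static int checked_request2size (size_t req, size_t *sz)
  {
    if (req > SIZE_MAX - 2 * MIN_CHUNK)
      return 0;                 /* would overflow -> malloc returns NULL */
    *sz = request2size (req);
    return 1;
  }

  int main (void)
  {
    size_t sz;
    /* Wraps to a tiny chunk size instead of failing.  */
    printf ("request2size (SIZE_MAX) = %zu\n", request2size (SIZE_MAX));
    /* Prints 0: the oversized request is rejected.  */
    printf ("checked_request2size (SIZE_MAX) ok = %d\n",
            checked_request2size (SIZE_MAX, &sz));
    return 0;
  }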
Wilco Dijkstra [Tue, 24 Oct 2017 11:39:24 +0000 (12:39 +0100)]
Add single-threaded path to malloc/realloc/calloc/memalign
This patch adds a single-threaded fast path to malloc, realloc,
calloc and memalign.  When we're single-threaded, we can bypass
arena_get (which always locks the arena it returns) and just use
the main arena.  Also avoid retrying with a different arena since
there is just the main arena.
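A rough sketch of the shape of such a fast path (the names below are
hypothetical stand-ins; glibc's real code uses its internal
single-thread check and arena locking macros):

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdlib.h>

  /* Hypothetical stand-ins for the main arena lock and thread state.  */
  static pthread_mutex_t main_arena_lock = PTHREAD_MUTEX_INITIALIZER;
  static bool single_threaded = true;  /* cleared once a second thread starts */

  /* Placeholder for allocating from the main arena.  */
  static void *int_malloc_main_arena (size_t n) { return malloc (n); }

  void *
  my_malloc (size_t n)
  {
    if (single_threaded)
      /* Fast path: no other thread can touch the arena, so skip the
         lock and never retry with a different arena.  */
      return int_malloc_main_arena (n);

    /* Slow path: lock the arena as usual.  */
    pthread_mutex_lock (&main_arena_lock);
    void *p = int_malloc_main_arena (n);
    pthread_mutex_unlock (&main_arena_lock);
    return p;
  }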
Wilco Dijkstra [Thu, 19 Oct 2017 17:19:55 +0000 (18:19 +0100)]
Fix deadlock in _int_free consistency check
This patch fixes a deadlock in the fastbin consistency check.
If we fail the fast check due to concurrent modifications to
the next chunk or system_mem, we should not lock if we already
have the arena lock. Simplify the check to make it obviously
correct.
* malloc/malloc.c (_int_free): Fix deadlock bug in consistency check.
Linux commit ID cba6ac4869e45cc93ac5497024d1d49576e82666 reserved a new
bit for a scenario where transactional memory is available, but the
suspended state is disabled.
* sysdeps/powerpc/bits/hwcap.h (PPC_FEATURE2_HTM_NO_SUSPEND): New
macro.
Andreas Schwab [Tue, 8 Aug 2017 14:21:58 +0000 (16:21 +0200)]
Don't use IFUNC resolver for longjmp or system in libpthread (bug 21041)
Unlike the vfork forwarder and like the fork forwarder as in bug 19861,
there won't be a problem when the compiler does not turn this into a tail
call.
powerpc: Replace lxvd2x/stxvd2x with lvx/stvx in P7's memcpy/memmove
POWER9 DD2.1 and earlier have an issue where some cache-inhibited
vector loads trap to the kernel, causing a performance degradation.  To
handle this in memcpy and memmove, lvx/stvx is used for aligned
addresses instead of lxvd2x/stxvd2x.
crypt: Use NSPR header files in addition to NSS header files [BZ #17956]
When configuring and building GNU libc using the Mozilla NSS library
for cryptography (--enable-nss-crypt option), also include the
NSPR header files along with the Mozilla NSS library header files.
Finally, when running the check-local-headers test, ignore the
Mozilla NSPR library header files (used by the Mozilla NSS library).
Currently free typically uses two atomic operations per call.  The
have_fastchunks flag indicates whether there are recently freed blocks
in the fastbins.  This is purely an optimization to avoid calling
malloc_consolidate too often and to avoid the overhead of walking all
fast bins even when they are all empty during a sequence of
allocations.  However, using catomic_or to update the flag is
completely unnecessary, since it can be changed into a simple boolean
and accessed using relaxed atomics.  There is no change in
multi-threaded behaviour, given that the flag is already approximate
(it may be set when there are no blocks in any fast bins, or it may be
clear when there are free blocks that could be consolidated).
Performance of malloc/free improves by 27% on a simple benchmark on AArch64
(both single and multithreaded). The number of load/store exclusive instructions
is reduced by 33%. Bench-malloc-thread speeds up by ~3% in all cases.
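A hedged sketch of the idea using C11 atomics (glibc itself uses its
internal atomic_store_relaxed/atomic_load_relaxed macros rather than
<stdatomic.h>; the function names here are illustrative):

  #include <stdatomic.h>
  #include <stdbool.h>

  /* Before: a flag bit in a shared word updated with an atomic RMW
     (a load/store-exclusive loop on AArch64).  After: a plain boolean
     accessed with relaxed atomics.  */
  static _Atomic bool have_fastchunks;

  static inline void
  mark_fastchunks (void)
  {
    atomic_store_explicit (&have_fastchunks, true, memory_order_relaxed);
  }

  static inline void
  clear_fastchunks (void)
  {
    atomic_store_explicit (&have_fastchunks, false, memory_order_relaxed);
  }

  static inline bool
  fastchunks_present (void)
  {
    /* The flag is only a hint: a stale value merely adds or delays one
       malloc_consolidate call, so a relaxed load is sufficient.  */
    return atomic_load_explicit (&have_fastchunks, memory_order_relaxed);
  }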
Wilco Dijkstra [Tue, 17 Oct 2017 17:25:43 +0000 (18:25 +0100)]
Inline tcache functions
The functions tcache_get and tcache_put show up in profiles as they
are a critical part of the tcache code. Inline them to give tcache
a 16% performance gain. Since this improves multi-threaded cases
as well, it helps offset any potential performance loss due to adding
single-threaded fast paths.
James Clarke [Fri, 13 Oct 2017 18:44:39 +0000 (15:44 -0300)]
Fix TLS relocations against local symbols on powerpc32, sparc32 and sparc64
Normally, TLS relocations against local symbols are optimised by the linker
to be absolute. However, gold does not do this, and so it is possible to
end up with, for example, R_SPARC_TLS_DTPMOD64 referring to a local symbol.
Since sym_map is left as null in elf_machine_rela for the special local
symbol case, the relocation handling thinks it has nothing to do, and so
the module gets left as 0. Havoc then ensues when the variable in question
is accessed.
Before this fix, the main_local_gold program would receive a SIGBUS on
sparc64, and SIGSEGV on powerpc32. With this fix applied, that test now
passes like the rest of them.
* sysdeps/powerpc/powerpc32/dl-machine.h (elf_machine_rela):
Assign sym_map to be map for local symbols, as TLS relocations
use sym_map to determine whether the symbol is defined and to
extract the TLS information.
* sysdeps/sparc/sparc32/dl-machine.h (elf_machine_rela): Likewise.
* sysdeps/sparc/sparc64/dl-machine.h (elf_machine_rela): Likewise.
This patch adds two new internal defines to set the internal
pthread_mutex_t layout required by the supported ABIs:
1. __PTHREAD_MUTEX_NUSERS_AFTER_KIND, which controls whether the
__nusers field is defined before or after __kind.  The preferred value
for new ports is 0, which places __nusers before __kind.
2. __PTHREAD_MUTEX_USE_UNION, which controls whether the internal
__spins and __list members are placed inside a union for linuxthreads
compatibility.  The preferred value for new ports is 0, which does not
use a union for the two fields.
It fixes the wrong offset of the __kind field on x86_64-linux-gnu-x32.
Checked with a make check run-built-tests=no on all affected ABIs.
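An illustrative, heavily simplified layout sketch of how the two macros
might steer the struct definition (field types reduced for brevity;
this is not the actual glibc struct __pthread_mutex_s):

  /* Hypothetical values a new port would pick.  */
  #define __PTHREAD_MUTEX_NUSERS_AFTER_KIND 0
  #define __PTHREAD_MUTEX_USE_UNION         0

  struct example_pthread_mutex_s
  {
    int __lock;
    unsigned int __count;
    int __owner;
  #if !__PTHREAD_MUTEX_NUSERS_AFTER_KIND
    unsigned int __nusers;   /* preferred: __nusers before __kind */
  #endif
    int __kind;              /* statically initialized, so its offset is ABI */
  #if __PTHREAD_MUTEX_NUSERS_AFTER_KIND
    unsigned int __nusers;   /* legacy ABIs keep __nusers after __kind */
  #endif
  #if __PTHREAD_MUTEX_USE_UNION
    union                    /* linuxthreads-compatible layout */
    {
      int __spins;
      void *__list;
    };
  #else
    short __spins;
    void *__list;
  #endif
  };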
[BZ #22298]
* nptl/allocatestack.c (allocate_stack): Check whether
__PTHREAD_MUTEX_HAVE_PREV is non-zero, instead of whether
__PTHREAD_MUTEX_HAVE_PREV is defined.
* nptl/descr.h (pthread): Likewise.
* nptl/nptl-init.c (__pthread_initialize_minimal_internal):
Likewise.
* nptl/pthread_create.c (START_THREAD_DEFN): Likewise.
* sysdeps/nptl/fork.c (__libc_fork): Likewise.
* sysdeps/nptl/pthread.h (PTHREAD_MUTEX_INITIALIZER): Likewise.
* sysdeps/nptl/bits/thread-shared-types.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
(__pthread_internal_list): Check __PTHREAD_MUTEX_USE_UNION instead
of __WORDSIZE for internal layout.
(__pthread_mutex_s): Check __PTHREAD_MUTEX_NUSERS_AFTER_KIND instead
of __WORDSIZE for the internal __nusers layout, and
__PTHREAD_MUTEX_USE_UNION instead of __WORDSIZE to decide whether to
use a union for the __spins and __list fields.
(__PTHREAD_MUTEX_HAVE_PREV): Define also for __PTHREAD_MUTEX_USE_UNION
case.
* sysdeps/aarch64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION): New
defines.
* sysdeps/alpha/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/arm/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/hppa/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/ia64/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/m68k/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/microblaze/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/mips/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/nios2/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/powerpc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/s390/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sh/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/sparc/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/tile/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
* sysdeps/x86/nptl/bits/pthreadtypes-arch.h
(__PTHREAD_MUTEX_NUSERS_AFTER_KIND, __PTHREAD_MUTEX_USE_UNION):
Likewise.
nptl: Add tests for internal pthread_mutex_t offsets
This patch adds a new build test to check the offsets of user-visible
internal fields.  Although currently the only field which is statically
initialized to a non-zero value is pthread_mutex_t.__data.__kind, the
test also checks the offsets of the __kind, __spins, __elision (if
supported), and __list internal members.  An internal header
(pthread-offsets.h) is added to each major ABI with the reference
values.
Checked on x86_64-linux-gnu and with a build check for all affected
ABIs (aarch64-linux-gnu, alpha-linux-gnu, arm-linux-gnueabihf,
hppa-linux-gnu, i686-linux-gnu, ia64-linux-gnu, m68k-linux-gnu,
microblaze-linux-gnu, mips64-linux-gnu, mips64-n32-linux-gnu,
mips-linux-gnu, powerpc64le-linux-gnu, powerpc-linux-gnu,
s390-linux-gnu, s390x-linux-gnu, sh4-linux-gnu, sparc64-linux-gnu,
sparcv9-linux-gnu, tilegx-linux-gnu, tilegx-linux-gnu-x32,
tilepro-linux-gnu, x86_64-linux-gnu, and x86_64-linux-x32).
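A minimal sketch of how such an offset check can be expressed as a
compile-time test (the expected value here is hypothetical; the real
test compares against per-ABI constants provided by the new header):

  #include <pthread.h>
  #include <stddef.h>

  /* Hypothetical reference value -- in the real test this comes from
     the per-ABI pthread-offsets.h header.  */
  #define EXPECTED_PTHREAD_MUTEX_KIND_OFFSET 16

  _Static_assert (offsetof (pthread_mutex_t, __data.__kind)
                  == EXPECTED_PTHREAD_MUTEX_KIND_OFFSET,
                  "pthread_mutex_t.__data.__kind moved: ABI break");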
posix: Do not use WNOHANG in waitpid call for Linux posix_spawn
As shown by some buildbot issues on aarch64 and powerpc, calling
clone (VFORK) and waitpid (WNOHANG) does not guarantee the child
is ready to be collected.  This patch changes the waitpid argument back
to 0, as it was before the fe05e1cb6d64 fix.
This change can lead to the scenario 4.3 described in that commit,
where the waitpid call can hang indefinitely.  However, this is a very
unlikely and also undefined situation, where the caller is trying to
terminate a pid before posix_spawn returns and the pid-reuse race is
triggered.  I don't see how to correctly handle this specific situation
within posix_spawn.
Checked on x86_64-linux-gnu, aarch64-linux-gnu and
powerpc64-linux-gnu.
* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Use 0 instead of
WNOHANG in waitpid call.
posix: Fix improper assert in Linux posix_spawn (BZ#22273)
As noted by Florian Weimer, the current Linux posix_spawn implementation
can trigger an assert if the auxiliary process is terminated before
actually setting the err member:
  /* Child must set args.err to something non-negative - we rely on
     the parent and child sharing VM.  */
  args.err = -1;
[...]
  new_pid = CLONE (__spawni_child, STACK (stack, stack_size), stack_size,
                   CLONE_VM | CLONE_VFORK | SIGCHLD, &args);

  if (new_pid > 0)
    {
      ec = args.err;
      assert (ec >= 0);
Another possible issue is killing the child between setting the err
member and actually calling execve.  In this case the process will not
run, but posix_spawn will not report any error either.
As suggested by Andreas Schwab, this patch removes the faulty assert
and also handles any signal that happens before fork and execve as if
the spawn were successful (thus leaving it to the caller to figure this
out).  Unlike Florian, I cannot see why using atomics to set err would
help here: essentially the code runs sequentially (due to CLONE_VFORK),
and I think it would not be legal for the compiler to evaluate ec
without checking the new_pid result (thus there is no need for a
compiler barrier).
Summarizing the possible scenarios on posix_spawn execution, we
have:
1. For the default case with a successful execution, args.err will be 0,
the pid will not be collected, and it will be reported to the caller.
2. For the default failure case, args.err will be positive and the pid
will be collected by the waitpid.  An error will be reported to the
caller.
3. For the unlikely case where the process was terminated and not
collected by a caller signal handler, it will be reported as a
successful execution and not be collected by posix_spawn (since
args.err will be 0).  The caller will need to actually handle this case.
4. For the unlikely case where the process was terminated and collected
by the caller, we have three other possible scenarios:
4.1. The auxiliary process was terminated with args.err equal to 0:
it will be handled as in 1. (so it does not matter if we hit the pid
reuse race, since we won't possibly collect an unexpected process).
4.2. The auxiliary process was terminated after execve (due to a failure
in calling it) and before setting args.err to -1: it will also be
handled as in 1., but with the issue of not being able to report a
possible execve failure to the caller.
4.3. The auxiliary process was terminated after args.err is set to -1:
this is the case where it is possible to hit the pid reuse race, where
we will need to collect the auxiliary pid but cannot be sure it is the
expected one.  I think for this case we would need to change waitpid to
use WNOHANG to avoid hanging indefinitely on the call, and report an
error to the caller, since we can't differentiate between a default
failure as in 2. and a possible pid reuse race issue.
Checked on x86_64-linux-gnu.
* sysdeps/unix/sysv/linux/spawni.c (__spawnix): Handle the case where
the auxiliary process is terminated by a signal before calling _exit
or execve.
This patch consolidates the glob implementation.  The main changes are:
* On Linux all implementations now use the default one at
sysdeps/unix/sysv/linux/glob{free}{64}.c, with the exception of alpha
(which requires specific versioning) and s390-32 (which, unlike the
other 32-bit ports, does not add a compat symbol for the 2.1 version).
* The default implementation uses XSTAT_IS_XSTAT64 to define whether
glob{free} and glob{free}64 should be different implementations.
For architectures that define XSTAT_IS_XSTAT64, glob{free} is an alias
to glob{free}64.
* Move the i386 olddirent.h header to the Linux default directory,
since it is the only header with this name, it is shared among
different architectures, and it is used by the compat glob symbol as
well.
Checked on x86_64-linux-gnu and on a build using build-many-glibcs.py
for all major architectures.
The current implementation of tunables does not set the arena_max and
arena_test values.  Any value provided by the glibc.malloc.arena_max
and glibc.malloc.arena_test parameters is ignored.
These tunables have minval set to 1 (see the elf/dl-tunables.list file)
and maxval left undefined.  In that case the default value (which is 0;
see scripts/gen-tunables.awk) is used to set maxval.
For instance, generated tunable_list[] entry for arena_max is:
(gdb) p *cur
$1 = {name = 0x7ffff7df6217 "glibc.malloc.arena_max",
type = {type_code = TUNABLE_TYPE_SIZE_T, min = 1, max = 0},
val = {numval = 0, strval = 0x0}, initialized = false,
security_level = TUNABLE_SECLEVEL_SXID_IGNORE,
env_alias = 0x7ffff7df622e "MALLOC_ARENA_MAX"}
As a result, any value of glibc.malloc.arena_max is ignored by the
TUNABLE_SET_VAL_IF_VALID_RANGE macro:
  __type min = (__cur)->type.min;    <- initialized to 1
  __type max = (__cur)->type.max;    <- initialized to 0!
  if (min == max)                    <- false
    {
      min = __default_min;
      max = __default_max;
    }
  if ((__type) (__val) >= min && (__type) (val) <= max)   <- false
    {
      (__cur)->val.numval = val;
      (__cur)->initialized = true;
    }
Assigning the correct min/max values at build time fixes the problem.
As a bonus optimization, setting the default min/max values for the
given type at run time can then be eliminated.
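A standalone illustration of the rejected range check, using the values
from the tunable_list entry shown above (the variable names are local
to the example):

  #include <stdio.h>
  #include <stddef.h>

  int main (void)
  {
    size_t min = 1, max = 0;   /* as generated before the fix */
    size_t val = 8;            /* e.g. glibc.malloc.arena_max=8 */

    /* min != max, so the type-wide defaults are NOT substituted...  */
    if (min == max)
      {
        min = 0;
        max = (size_t) -1;
      }

    /* ...and the range test (8 >= 1 && 8 <= 0) rejects every value.  */
    if (val >= min && val <= max)
      puts ("accepted");
    else
      puts ("ignored");        /* this is what happened */
    return 0;
  }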
* elf/dl-tunables.c (do_tunable_update_val): Range checking fix.
* scripts/gen-tunables.awk: Set unspecified minval and/or maxval
values to correct default value for given type.
Joseph Myers [Thu, 19 Oct 2017 17:32:20 +0000 (17:32 +0000)]
Install correct bits/long-double.h for MIPS64 (bug 22322).
Similar to bug 21987 for SPARC, MIPS64 wrongly installs the ldbl-128
version of bits/long-double.h, meaning incorrect results when using
headers installed from a 64-bit installation for a 32-bit build. (I
haven't actually seen this cause build failures before its interaction
with bits/floatn.h did so - installed headers wrongly expecting
_Float128 to be available in a 32-bit configuration.)
This patch fixes the bug by moving the MIPS header to
sysdeps/mips/ieee754, which comes before sysdeps/ieee754/ldbl-128 in
the sysdeps directory ordering. (bits/floatn.h will need a similar
fix - duplicating the ldbl-128 version for MIPS will suffice - for
headers from a 32-bit installation to be correct for 64-bit builds.)
Tested with build-many-glibcs.py (compilers build for
mips64-linux-gnu, where there was previously a libstdc++ build failure
as at
<https://sourceware.org/ml/libc-testresults/2017-q4/msg00130.html>).
Romain Naour [Mon, 16 Oct 2017 21:21:56 +0000 (23:21 +0200)]
Let signbit use the builtin in C++ mode with gcc < 6.x (bug 22296)
When using gcc < 6.x, signbit does not use the type-generic
__builtin_signbit builtin; instead it uses __MATH_TG.
However, when library support for float128 is available, __MATH_TG uses
__builtin_types_compatible_p, which is not available in C++ mode.
On the other hand, libstdc++ undefines (in cmath) many macros from
math.h, including signbit, so that it can provide its own functions.
However, during its configure tests, libstdc++ just tests for the
availability of the macros (it does not undefine them, nor does it
provide its own functions).
Finally, libstdc++ configure tests include math.h and get the definition
of signbit that uses __MATH_TG (and __builtin_types_compatible_p).
Since libstdc++ does not undefine the macros during its configure
tests, they fail.
This patch lets signbit use the builtin in C++ mode when gcc < 6.x is
used. This allows the configure test in libstdc++ to work.
Tested for x86_64.
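A condensed, hedged sketch of the shape of the resulting preprocessor
selection (not the verbatim math.h text):

  #if __GNUC_PREREQ (6, 0)
  # define signbit(x) __builtin_signbit (x)
  #elif defined __cplusplus
  /* __MATH_TG relies on __builtin_types_compatible_p, a C-only builtin;
     use the type-generic builtin in C++ mode instead, which is enough
     for the libstdc++ configure probe to succeed.  */
  # define signbit(x) __builtin_signbit (x)
  #else
  # define signbit(x) __MATH_TG ((x), __signbit, (x))
  #endif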
[BZ #22296]
* math/math.h: Let signbit use the builtin in C++ mode with gcc
< 6.x
Cc: Gabriel F. T. Gomes <gftg@linux.vnet.ibm.com>
Cc: Joseph Myers <joseph@codesourcery.com>
(cherry picked from commit 386e1c26ac473d6863133ab9cbe3bbda16c15816)
H.J. Lu [Sun, 22 Oct 2017 14:40:39 +0000 (07:40 -0700)]
x86-64: Use fxsave/xsave/xsavec in _dl_runtime_resolve [BZ #21265]
In _dl_runtime_resolve, use fxsave/xsave/xsavec to preserve all vector,
mask and bound registers.  This simplifies _dl_runtime_resolve and
supports different calling conventions.  ld.so code size is reduced by
more than 1 KB.  However, using fxsave/xsave/xsavec takes a few more
cycles than saving and restoring the vector and bound registers
individually.
Latency for _dl_runtime_resolve to look up the function, foo, from one
shared library plus libc.so:
This is the worst case, where the portion of time spent saving and
restoring registers is bigger than in the majority of cases.  With the
smaller _dl_runtime_resolve code size, the overall performance impact
is negligible.
On IvyBridge, differences in build and test time of binutils with lazy
binding GCC and binutils are noise.  On Westmere, differences in
bootstrap and "make check" time of GCC 7 with lazy binding GCC and
binutils are also noise.
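For reference, a hedged sketch of how XSAVEC support and the required
save-area size can be queried with CPUID (this uses GCC's <cpuid.h> and
the Intel SDM bit layout; it is not the actual cpu-features.c code):

  #include <cpuid.h>
  #include <stdio.h>

  int main (void)
  {
    unsigned int eax, ebx, ecx, edx;

    /* Make sure leaf 0xD exists.  */
    if (!__get_cpuid (0, &eax, &ebx, &ecx, &edx) || eax < 0xd)
      return 1;

    /* Leaf 0xD, sub-leaf 1: EAX bit 1 set means XSAVEC is available.  */
    __cpuid_count (0xd, 1, eax, ebx, ecx, edx);
    int have_xsavec = (eax >> 1) & 1;

    /* Leaf 0xD, sub-leaf 0: EBX is the size of the XSAVE area for the
       state components enabled in XCR0 -- the amount of stack a
       resolver trampoline must reserve before executing xsave/xsavec.  */
    __cpuid_count (0xd, 0, eax, ebx, ecx, edx);

    printf ("XSAVEC: %d, XSAVE area size: %u bytes\n", have_xsavec, ebx);
    return 0;
  }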
[BZ #21265]
* sysdeps/x86/cpu-features-offsets.sym (XSAVE_STATE_SIZE_OFFSET):
New.
* sysdeps/x86/cpu-features.c: Include <libc-pointer-arith.h>.
(get_common_indeces): Set xsave_state_size, xsave_state_full_size
and bit_arch_XSAVEC_Usable if needed.
(init_cpu_features): Remove bit_arch_Use_dl_runtime_resolve_slow
and bit_arch_Use_dl_runtime_resolve_opt.
* sysdeps/x86/cpu-features.h (bit_arch_Use_dl_runtime_resolve_opt):
Removed.
(bit_arch_Use_dl_runtime_resolve_slow): Likewise.
(bit_arch_Prefer_No_AVX512): Updated.
(bit_arch_MathVec_Prefer_No_AVX512): Likewise.
(bit_arch_XSAVEC_Usable): New.
(STATE_SAVE_OFFSET): Likewise.
(STATE_SAVE_MASK): Likewise.
[__ASSEMBLER__]: Include <cpu-features-offsets.h>.
(cpu_features): Add xsave_state_size and xsave_state_full_size.
(index_arch_Use_dl_runtime_resolve_opt): Removed.
(index_arch_Use_dl_runtime_resolve_slow): Likewise.
(index_arch_XSAVEC_Usable): New.
* sysdeps/x86/cpu-tunables.c (TUNABLE_CALLBACK (set_hwcaps)):
Support XSAVEC_Usable. Remove Use_dl_runtime_resolve_slow.
* sysdeps/x86_64/Makefile (tst-x86_64-1-ENV): New if tunables
is enabled.
* sysdeps/x86_64/dl-machine.h (elf_machine_runtime_setup):
Replace _dl_runtime_resolve_sse, _dl_runtime_resolve_avx,
_dl_runtime_resolve_avx_slow, _dl_runtime_resolve_avx_opt,
_dl_runtime_resolve_avx512 and _dl_runtime_resolve_avx512_opt
with _dl_runtime_resolve_fxsave, _dl_runtime_resolve_xsave and
_dl_runtime_resolve_xsavec.
* sysdeps/x86_64/dl-trampoline.S (DL_RUNTIME_UNALIGNED_VEC_SIZE):
Removed.
(DL_RUNTIME_RESOLVE_REALIGN_STACK): Check STATE_SAVE_ALIGNMENT
instead of VEC_SIZE.
(REGISTER_SAVE_BND0): Removed.
(REGISTER_SAVE_BND1): Likewise.
(REGISTER_SAVE_BND3): Likewise.
(REGISTER_SAVE_RAX): Always defined to 0.
(VMOV): Removed.
(_dl_runtime_resolve_avx): Likewise.
(_dl_runtime_resolve_avx_slow): Likewise.
(_dl_runtime_resolve_avx_opt): Likewise.
(_dl_runtime_resolve_avx512): Likewise.
(_dl_runtime_resolve_avx512_opt): Likewise.
(_dl_runtime_resolve_sse): Likewise.
(_dl_runtime_resolve_sse_vex): Likewise.
(USE_FXSAVE): New.
(_dl_runtime_resolve_fxsave): Likewise.
(USE_XSAVE): Likewise.
(_dl_runtime_resolve_xsave): Likewise.
(USE_XSAVEC): Likewise.
(_dl_runtime_resolve_xsavec): Likewise.
* sysdeps/x86_64/dl-trampoline.h (_dl_runtime_resolve_avx512):
Removed.
(_dl_runtime_resolve_avx512_opt): Likewise.
(_dl_runtime_resolve_avx): Likewise.
(_dl_runtime_resolve_avx_opt): Likewise.
(_dl_runtime_resolve_sse): Likewise.
(_dl_runtime_resolve_sse_vex): Likewise.
(_dl_runtime_resolve_fxsave): New.
(_dl_runtime_resolve_xsave): Likewise.
(_dl_runtime_resolve_xsavec): Likewise.
H.J. Lu [Sun, 22 Oct 2017 11:14:54 +0000 (04:14 -0700)]
x86: Add x86_64 to x86-64 HWCAP [BZ #22093]
Before glibc 2.26, ld.so set dl_platform to "x86_64" and searched the
"x86_64" subdirectory when loading a shared library. ld.so in glibc
2.26 was changed to set dl_platform to "haswell" or "xeon_phi", based
on supported ISAs. This led to shared library loading failure for
shared libraries placed under the "x86_64" subdirectory.
This patch adds "x86_64" to x86-64 dl_hwcap so that ld.so will always
search the "x86_64" subdirectory when loading a shared library.
NB: We can't set x86-64 dl_platform to "x86-64" since ld.so will skip
the "haswell" and "xeon_phi" subdirectories on "haswell" and "xeon_phi"
machines.
This patch syncs the posix/glob.c implementation with gnulib version
b5ec983 (glob: simplify symlink detection).  The only difference from
the gnulib code is:
* DT_UNKNOWN, DT_DIR, and DT_LNK are defined in case they are not
already defined.  The gnulib code, which uses
HAVE_STRUCT_DIRENT_D_TYPE, would define them wrongly because glibc does
not define HAVE_STRUCT_DIRENT_D_TYPE; instead the patch checks for each
definition individually.
Also, the patch requires additional globfree and globfree64 files for
the compatibility versions on some architectures, and the code
simplification makes the NO_GLOB_PATTERN_P macro unnecessary.
Checked on x86_64-linux-gnu and on a build using build-many-glibcs.py
for all major architectures.
[BZ #1062]
* posix/Makefile (routines): Add globfree, globfree64, and
glob_pattern_p.
* posix/flexmember.h: New file.
* posix/glob_internal.h: Likewise.
* posix/glob_pattern_p.c: Likewise.
* posix/globfree.c: Likewise.
* posix/globfree64.c: Likewise.
* sysdeps/gnu/globfree64.c: Likewise.
* sysdeps/unix/sysv/linux/alpha/globfree.c: Likewise.
* sysdeps/unix/sysv/linux/mips/mips64/n64/globfree64.c: Likewise.
* sysdeps/unix/sysv/linux/oldglob.c: Likewise.
* sysdeps/unix/sysv/linux/wordsize-64/globfree64.c: Likewise.
* sysdeps/unix/sysv/linux/x86_64/x32/globfree.c: Likewise.
* sysdeps/wordsize-64/globfree.c: Likewise.
* sysdeps/wordsize-64/globfree64.c: Likewise.
* posix/glob.c (HAVE_CONFIG_H): Use !_LIBC instead.
[NDEBUG]: Remove comments.
(GLOB_ONLY_P, _AMIGA, VMS): Remove define.
(dirent_type): New type. Use uint_fast8_t not
uint8_t, as C99 does not require uint8_t.
(DT_UNKNOWN, DT_DIR, DT_LNK): New macros.
(struct readdir_result): Use dirent_type. Do not define skip_entry
unless it is needed; this saves a byte on platforms lacking d_ino.
(readdir_result_type, readdir_result_skip_entry):
New functions, replacing ...
(readdir_result_might_be_symlink, readdir_result_might_be_dir):
these functions, which were removed. This makes the callers
easier to read. All callers changed.
(D_INO_TO_RESULT): Now empty if there is no d_ino.
(size_add_wrapv, glob_use_alloca): New static functions.
(glob, glob_in_dir): Check for size_t overflow in several places,
and fix some size_t checks that were not quite right.
Remove old code using SHELL since Bash no longer
uses this.
(glob, prefix_array): Separate MS code better.
(glob_in_dir): Remove old Amiga and VMS code.
(globfree, __glob_pattern_type, __glob_pattern_p): Move to
separate files.
(glob_in_dir): Do not rely on undefined behavior in accessing
struct members beyond their bounds. Use a flexible array member
instead.
(link_stat): Rename from link_exists2_p and return -1/0 instead of
0/1. Caller changed.
(glob): Fix memory leaks.
* posix/glob64.c (globfree64): Move to separate file.
* sysdeps/gnu/glob64.c (NO_GLOB_PATTERN_P): Remove define.
(globfree64): Remove hidden alias.
* sysdeps/unix/sysv/linux/Makefile (sysdeps_routines): Add
oldglob.
* sysdeps/unix/sysv/linux/alpha/glob.c (__new_globfree): Move to
separate file.
* sysdeps/unix/sysv/linux/i386/glob64.c (NO_GLOB_PATTERN_P): Remove
define.
Move compat code to separate file.
* sysdeps/wordsize-64/glob.c (globfree): Move definitions to
separate file.
Florian Weimer [Fri, 20 Oct 2017 02:10:15 +0000 (04:10 +0200)]
sysconf: Fix missing definition of UIO_MAXIOV on Linux [BZ #22321]
After commit 37f802f86400684c8d13403958b2c598721d6360 (Remove
__need_IOV_MAX and __need_FOPEN_MAX), UIO_MAXIOV is no longer supplied
(indirectly) through <bits/stdio_lim.h>, so sysdeps/posix/sysconf.c no
longer sees the definition.
Remove conditional on LDBL_MANT_DIG from e_lgammal_r.c
The IEEE 754 implementation of lgammal in sysdeps/ieee754/ldbl-128/ used
to be shared by IBM's implementation in sysdeps/ieee754/ldbl-128ibm/ (by
an inclusion of the source file). In order for the algorithm to work
for IBM's implementation, a check for LDBL_MANT_DIG was required. Since
the source file is no longer shared, the requirement for the check is
gone. This patch removes the conditionals.
ldbl-128ibm: Automatic replacing of _Float128 and L()
The ldbl-128ibm implementation of j0l, j1l, lgammal_r, and cbrtl, as
well as the tables used by expl were copied from ldbl-128. However, the
original files used _Float128 for the type and L() for the literal
suffix. This patch uses the following sed command to rewrite _Float128
as long double and L(x) as xL (for e_expl.c, e_j0l.c, e_j1l.c,
e_lgammal_r.c, and t_expl.h):
sed -i <filename> \
-e "/^#define _Float128 long double/d" \
-e "/^#define L(x) x ## L/d" \
-e "/L(/s/)/L/" \
-e "/L(/s/L(//" \
-e "s/_Float128/long double/g"
For sysdeps/ieee754/ldbl-128ibm/s_cbrtl.c, this sed command incorrectly
replaces a few occurrences of L(), so the following command is used
instead:
* sysdeps/ieee754/ldbl-128ibm/e_expl.c: Remove definitions of
_Float128 and L().
* sysdeps/ieee754/ldbl-128ibm/e_j0l.c: Remove definitions of
_Float128 and L(). Replace _Float128 with long double and L(x)
with xL, throughout the file.
* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_lgammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_cbrtl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/t_expl.h: Likewise.
ldbl-128ibm: Copy implementations from ldbl-128 instead of including them
Some files under sysdeps/ieee754/ldbl-128ibm/ are able to reuse the
implementation in sysdeps/ieee754/ldbl-128/ by defining _Float128 to
long double. This relied on compiler support for _Float128 being
disabled. On powerpc, such support was disabled by default, however, it
got enabled by default [1] in GCC 8.
This patch copies the implementations from ldbl-128 to ldbl-128ibm. The
uses of _Float128 and L() are kept intact in this patch and are replaced
with a script in a subsequent patch.
* sysdeps/ieee754/ldbl-128ibm/e_expl.c: Include tables from
sysdeps/ieee754/ldbl-128ibm.
* sysdeps/ieee754/ldbl-128ibm/e_j0l.c: Copy contents from the
equivalent implementation in sysdeps/ieee754/ldbl-128/ instead
of including it. Keep _Float128 and L() intact. These will be
reviewed by a separate patch.
* sysdeps/ieee754/ldbl-128ibm/e_j1l.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/e_lgammal_r.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/s_cbrtl.c: Likewise.
* sysdeps/ieee754/ldbl-128ibm/t_expl.h: Likewise.
powerpc: Add redirection for finitef128, isinf128, and isnanf128
On powerpc64le, compiler support for float128 is not enabled by default
on gcc. To enable it, the flag -mfloat128 must be passed as a command
line option to the compiler. This means that only the few files that
actively have -mfloat128 passed as an argument get compiler support for
float128, whereas all other files don't.
When -mfloat128 becomes enabled by default on powerpc [1], all the files
that do not currently have compiler support for float128 enabled during
their compilation, will start to have it. This will lead to build
errors in s_finite.c, s_isinf.c, and s_isnan.c.
The errors are due to the unintended macro expansion of __finitef128 to
__redirect_finitef128 in math/bits/mathcalls-helper-functions.h. In
that header, __MATHDECL_1 takes '__finite' and 'f128' as arguments and
concatenates them. However, since '__finite' has been redefined in
s_finite.c, the function declaration becomes __redirect_finitef128:
extern int __redirect___finitef128 (_Float128 __value) __attribute__ ((__nothrow__ )) __attribute__ ((__const__));
This declaration itself is OK. The problem arises when include/math.h
creates the hidden prototype ('hidden_proto (__finitef128)'), which
expands to:
Since __finitef128 is not declared, __typeof fails.  This effect was
already true for the 'float' and 'long double' versions and is now true
for float128.  Likewise for isinff128 and isnanf128.
This patch defines __finitef128 as __redirect___finitef128 in
sysdeps/powerpc/powerpc64/fpu/multiarch/s_finite.c, similarly to what's
done for the float and long double versions of these functions, to get
rid of the build error. Likewise for isinff128 and isnanf128.
powerpc64le: Add -mfloat128 to tst-strtod-nan-locale testcase
On powerpc64le, not all files can have the flag -mfloat128 passed as an
option on the compile command, since that could conflict with other
flags, such as -mno-vsx.  Each file that needs the flag gets it through
a CFLAGS-filename variable in sysdeps/powerpc/powerpc64le/Makefile.
The test cases tst-strtod-nan-locale and tst-wcstod-nan-locale are
missing this flag.
Tested for powerpc64le.
* sysdeps/powerpc/powerpc64le/Makefile
(CFLAGS-tst-strtod-nan-locale.c): New variable.
(CFLAGS-tst-wcstod-nan-locale.c): New variable.
aarch64: Optimized implementation of memmove for Qualcomm Falkor
This is an optimized memmove implementation for the Qualcomm Falkor
processor core.  Due to the way the Falkor memcpy needs to be written,
code cannot easily be shared between memmove and memcpy as it is in
other aarch64 memcpy implementations, which is why this routine is
separate.  The underlying principle is the same as that of memcpy,
where it tries to use registers with the same lower 4 bits for
fetching the same stream, thus optimizing hardware prefetcher
performance.
The memcpy copy loop copies 64 bytes at a time using the same register
pair, since that's the way to train the hardware prefetcher on the
Falkor core.  memmove cannot quite do that since it needs to avoid
overlaps, so it does the next best thing, i.e. it has a 32-byte loop
with a 32-byte end (prefetching a loop ahead to account for overlapping
locations) with register pairs that alias so that they hit the same
prefetcher.  Due to this difference in loop size, they currently have
to be separate implementations, but efforts are ongoing to get memmove
to fall back to memcpy whenever it can without simply duplicating all
of the code.
Performance:
The routine fares around 20-25% better than the generic memmove for
most medium to large sizes (i.e. > 128 bytes) in the new walking
memmove benchmark (memmove-walk), with an unexplained regression
between 1K and 2K.  The minor regression is something worth looking
into for us, but the remaining gains are significant enough that we
would like this included upstream while we look into the cause of the
regression.  Here is a snippet of the numbers as generated from the
microbenchmark by the compare_strings script.  Comparisons are against
__memmove_generic:
aarch64: Optimized memcpy for Qualcomm Falkor processor
This is an optimized implementation of the memcpy routine that gives a
significant gain in performance for all sizes of copies on the
Qualcomm Falkor processor. A detailed rationale of the implementation
is written in a comment in the patch.
This implementation improves time for copies up to 128 bytes by up to
15% and for larger copies by up to 35% in the glibc
microbenchmark. The memcpy-random benchmark sees improvements in all
sizes in the range of 13%-18%.
Here are the full numbers extracted from the glibc microbenchmark
using the commands:
Carlos O'Donell [Thu, 28 Sep 2017 17:05:18 +0000 (11:05 -0600)]
malloc: Fix tcache leak after thread destruction [BZ #22111]
The malloc tcache added in 2.26 will leak all of the elements remaining
in the cache and the cache structure itself when a thread exits. The
defect is that we do not set tcache_shutting_down early enough, and the
thread simply recreates the tcache and places the elements back onto a
new tcache which is subsequently lost as the thread exits (unfreed
memory). The fix is relatively simple, move the setting of
tcache_shutting_down earlier in tcache_thread_freeres. We add a test
case which uses mallinfo and some heuristics to look for unaccounted for
memory usage between the start and end of a thread start/join loop. It
is very reliable at detecting that there is a leak given the number of
iterations. Without the fix the test will consume 122MiB of leaked
memory.
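A reduced model of the fix's shape (the names and the stand-in
allocation are illustrative, not the actual malloc.c code):

  #include <stdbool.h>
  #include <stdlib.h>

  static __thread bool tcache_shutting_down = false;
  static __thread void *tcache = NULL;

  static void
  tcache_init (void)
  {
    if (tcache_shutting_down)
      return;                  /* do not recreate the cache during thread exit */
    tcache = calloc (1, 64);   /* stand-in for the real tcache structure */
  }

  static void
  tcache_thread_freeres (void)
  {
    if (tcache == NULL)
      return;
    /* The fix: mark shutdown *before* freeing anything, so that free
       paths running below cannot re-create a cache that would then leak.  */
    tcache_shutting_down = true;

    void *old = tcache;
    tcache = NULL;
    free (old);                /* in glibc: also flush every cached chunk */
  }

  int main (void)
  {
    tcache_init ();
    tcache_thread_freeres ();
    return 0;
  }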
H.J. Lu [Wed, 4 Oct 2017 00:41:32 +0000 (17:41 -0700)]
test-math-iscanonical.cc: Replace bool with int
Fix GCC 7 compilation error:
test-math-iscanonical.cc: In function ‘void check_type()’:
test-math-iscanonical.cc:33:11: error: use of an operand of type ‘bool’ in ‘operator++’ is deprecated [-Werror=deprecated]
errors++;
^~
Since not all non-zero error counts are errors, return errors != 0
instead.
Add C++ versions of iscanonical for ldbl-96 and ldbl-128ibm (bug 22235)
All representations of floating-point numbers in types with IEC 60559
binary exchange format are canonical. On the other hand, types with IEC
60559 extended formats, such as those implemented under ldbl-96 and
ldbl-128ibm, contain representations that are not canonical.
TS 18661-1 introduced the type-generic macro iscanonical, which returns
whether a floating-point value is canonical or not. In Glibc, this
type-generic macro is implemented using the macro __MATH_TG, which, when
support for float128 is enabled, relies on __builtin_types_compatible_p
to select between floating-point types. However, this use of
iscanonical breaks C++ applications, because the builtin is only
available in C mode.
This patch provides a C++ implementation of iscanonical that relies on
function overloading, rather than builtins, to select between
floating-point types.
Unlike the C++ implementations for iszero and issignaling, this
implementation ignores __NO_LONG_DOUBLE_MATH. The double type always
matches IEC 60559 double format, which is always canonical. Thus, when
double and long double are the same (__NO_LONG_DOUBLE_MATH), iscanonical
always returns 1 and is not implemented with __MATH_TG.
Tested for powerpc64, powerpc64le and x86_64.
[BZ #22235]
* math/math.h: Trivial fix for unbalanced parentheses in comment.
* math/Makefile [CXX] (tests): Add test-math-iscanonical.cc.
(CFLAGS-test-math-iscanonical.cc): New variable.
* math/test-math-iscanonical.cc: New file.
* sysdeps/ieee754/ldbl-96/bits/iscanonical.h (iscanonical):
Provide a C++ implementation based on function overloading,
rather than using __MATH_TG, which uses C-only builtins.
* sysdeps/ieee754/ldbl-128ibm/bits/iscanonical.h (iscanonical):
Likewise.
* sysdeps/powerpc/powerpc64le/Makefile
(CFLAGS-test-math-iscanonical.cc): New variable.
The commit that refactored long double information into
bits/long-double.h resulted in sparc32 configurations installing the
ldbl-opt version of bits/long-double.h instead of the intended
sysdeps/unix/sysv/linux/sparc version.
For sparc32 by itself, this is not a problem, since the ldbl-opt
version is correct for sparc32. However, both sparc32 and sparc64 are
supposed to install sets of headers that work for both of them, so
that a single sysroot, whichever order the libraries are built and
installed in, works for both. The effect of having the wrong version
installed is that you end up with a miscompiled sparc64 libstdc++
which fails glibc's configure tests for the C++ compiler.
This patch moves the header from sysdeps/unix/sysv/linux/sparc to
separate copies of the same file for sparc32 and sparc64, to ensure it
comes before ldbl-opt in the sysdeps directory ordering.
Tested with build-many-glibcs.py for sparc64-linux-gnu and
sparcv9-linux-gnu.
[BZ #21987]
* sysdeps/unix/sysv/linux/sparc/bits/long-double.h: Remove file
and copy to ...
* sysdeps/unix/sysv/linux/sparc/sparc32/bits/long-double.h:
... here.
* sysdeps/unix/sysv/linux/sparc/sparc64/bits/long-double.h:
... and here.
Joseph Myers [Thu, 28 Sep 2017 01:59:02 +0000 (01:59 +0000)]
Fix nearbyint arithmetic moved before feholdexcept (bug 22225).
In <https://sourceware.org/ml/libc-alpha/2013-05/msg00722.html> I
remarked on the possibility of arithmetic in various nearbyint
implementations being scheduled before feholdexcept calls, resulting
in spurious "inexact" exceptions.
I'm now actually observing this occurring in glibc built for ARM with
GCC 7 (in fact, both copies of the same addition/subtraction sequence
being combined and moved out before the conditionals and
feholdexcept/fesetenv pairs), resulting in test failures.
This patch makes the nearbyint implementations with this particular
feholdexcept / arithmetic / fesetenv pattern consistently use
math_opt_barrier on the function argument when first used in
arithmetic, and also consistently use math_force_eval before fesetenv
(the latter was generally already done, but the dbl-64/wordsize-64
implementation used math_opt_barrier instead, and as
math_opt_barrier's intended effect is through its output value being
used, such a use that doesn't use the return value is suspect).
Tested for x86_64 (--disable-multi-arch so more of these
implementations get used), and for ARM in a configuration where I saw
the problem scheduling.
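A self-contained sketch of the enforced pattern (the two barrier macros
below are simplified stand-ins for glibc's internal math_opt_barrier
and math_force_eval, and the function is a reduced dbl-64 nearbyint,
not the actual implementation):

  #include <fenv.h>

  /* Compiler barriers: keep the arithmetic between feholdexcept and
     fesetenv.  */
  #define math_opt_barrier(x) \
    ({ __typeof (x) __x = (x); __asm__ ("" : "+m" (__x)); __x; })
  #define math_force_eval(x) \
    do { __typeof (x) __y = (x); __asm__ __volatile__ ("" : : "m" (__y)); } while (0)

  double
  my_nearbyint (double x)
  {
    static const double shift[2] = { 0x1p52, -0x1p52 };
    fenv_t env;

    if (__builtin_fabs (x) >= 0x1p52 || x != x)
      return x;                        /* already integral, or NaN */

    int neg = __builtin_signbit (x) != 0;
    feholdexcept (&env);               /* save env, clear exception flags */
    /* The barrier keeps the addition from being hoisted above
       feholdexcept.  */
    double t = math_opt_barrier (shift[neg] + x) - shift[neg];
    if (t == 0.0)
      t = __builtin_copysign (0.0, x);
    math_force_eval (t);               /* force evaluation before fesetenv */
    fesetenv (&env);                   /* drop the spurious "inexact" */
    return t;
  }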
[BZ #22225]
* sysdeps/ieee754/dbl-64/s_nearbyint.c (__nearbyint): Use
math_opt_barrier on argument when doing arithmetic on it.
* sysdeps/ieee754/dbl-64/wordsize-64/s_nearbyint.c (__nearbyint):
Likewise. Use math_force_eval not math_opt_barrier after
arithmetic.
* sysdeps/ieee754/flt-32/s_nearbyintf.c (__nearbyintf): Use
math_opt_barrier on argument when doing arithmetic on it.
* sysdeps/ieee754/ldbl-128/s_nearbyintl.c (__nearbyintl):
Likewise.
Let fpclassify use the builtin when optimizing for size in C++ mode (bug 22146)
When optimization for size is on (-Os), fpclassify does not use the
type-generic __builtin_fpclassify builtin; instead it uses __MATH_TG.
However, when library support for float128 is available, __MATH_TG uses
__builtin_types_compatible_p, which is not available in C++ mode.
On the other hand, libstdc++ undefines (in cmath) many macros from
math.h, including fpclassify, so that it can provide its own functions.
However, during its configure tests, libstdc++ just tests for the
availability of the macros (it does not undefine them, nor does it
provide its own functions).
Finally, when libstdc++ is configured with optimization for size
enabled, its configure tests include math.h and get the definition of
fpclassify that uses __MATH_TG (and __builtin_types_compatible_p).
Since libstdc++ does not undefine the macros during its configure tests,
they fail.
This patch lets fpclassify use the builtin in C++ mode, even when
optimization for size is on. This allows the configure test in
libstdc++ to work.
Tested for powerpc64le and x86_64.
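A condensed, hedged sketch of the shape of the resulting condition (not
the verbatim math.h text):

  #if __GNUC_PREREQ (4, 4) && !defined __SUPPORT_SNAN__ \
      && (!defined __OPTIMIZE_SIZE__ || defined __cplusplus)
  /* Use the builtin even under -Os in C++ mode, since the __MATH_TG
     fallback needs __builtin_types_compatible_p, a C-only builtin.  */
  # define fpclassify(x) __builtin_fpclassify (FP_NAN, FP_INFINITE, \
       FP_NORMAL, FP_SUBNORMAL, FP_ZERO, x)
  #else
  # define fpclassify(x) __MATH_TG ((x), __fpclassify, (x))
  #endif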
[BZ #22146]
* math/math.h: Let fpclassify use the builtin in C++ mode, even
when optimizing for size.
Declare ifunc resolver to return a pointer to the same type as the target
function to help GCC detect incompatibilities between the two when it's
enhanced to do so.
builds for powerpc64le fail in the declaration of some ifunc resolvers,
because the ifunc resolver is declared with mismatched return types.
One of the declarations comes from the __ifunc_resolver macro, which
was patched by the aforementioned commit:
/* Helper / base macros for indirect function symbols. */
#define __ifunc_resolver(type_name, name, expr, arg, init, classifier) \
classifier inhibit_stack_protector \
__typeof (type_name) *name##_ifunc (arg) \
whereas the other comes from the unpatched __ifunc macro when
HAVE_GCC_IFUNC is not defined:
This patch changes the return type of the ifunc resolver in the __ifunc
macro, so that it matches the return type of the target function,
similarly to what the aforementioned commit does.
Tested for powerpc64le and s390x with unpatched GCC.
* include/libc-symbols.h: [!defined HAVE_GCC_IFUNC] (__ifunc):
Change the return type of the ifunc resolver to match the return
type of the target function.
Martin Sebor [Tue, 22 Aug 2017 15:35:23 +0000 (09:35 -0600)]
Declare ifunc resolver to return a pointer to the same type as the target
function to help GCC detect incompatibilities between the two when it's
enhanced to do so.
Alan Modra [Thu, 3 Aug 2017 06:09:21 +0000 (15:39 +0930)]
tst-tlsopt-powerpc as a shared lib
This makes the __tls_get_addr_opt test run as a shared library, and so
actually test that DTPMOD64/DTPREL64 pairs are processed by ld.so to
support the __tls_get_addr_opt call stub fast return.  After a
2017-01-24 patch (binutils f0158f4416) ld.bfd no longer emitted
unnecessary dynamic relocations against local thread variables,
instead setting up the __tls_index GOT entries for the call stub fast
return. This meant tst-tlsopt-powerpc passed but did not check ld.so
relocation support. After a 2017-07-16 patch (binutils 676ee2b5fa)
ld.bfd no longer set up the __tls_index GOT entries for the call stub
fast return, and tst-tlsopt-powerpc failed.
Compiling mod-tlsopt-powerpc.c with -DSHARED exposed a bug in
powerpc64/tls-macros.h, which defines a __TLS_GET_ADDR macro that
clashes with one defined in dl-tls.h. The tls-macros.h version is
only used in that file, so delete it and expand.
* sysdeps/powerpc/mod-tlsopt-powerpc.c: Extract from
tst-tlsopt-powerpc.c with function name change and no test harness.
* sysdeps/powerpc/tst-tlsopt-powerpc.c: Remove body of test.
Call tls_get_addr_opt_test.
* sysdeps/powerpc/Makefile (LDFLAGS-tst-tlsopt-powerpc): Don't define.
(modules-names): Add mod-tlsopt-powerpc.
(mod-tlsopt-powerpc.so-no-z-defs): Define.
(tst-tlsopt-powerpc): Depend on .so.
* sysdeps/powerpc/powerpc64/tls-macros.h (__TLS_GET_ADDR): Don't
define. Expand use in TLS_GD and TLS_LD.
H.J. Lu [Wed, 23 Aug 2017 15:22:52 +0000 (08:22 -0700)]
string/stratcliff.c: Replace int with size_t [BZ #21982]
Fix GCC 7 errors when string/stratcliff.c is compiled with -O3:
stratcliff.c: In function ‘do_test’:
cc1: error: assuming signed overflow does not occur when assuming that (X - c) <= X is always true [-Werror=strict-overflow]
[BZ #21982]
* string/stratcliff.c (do_test): Declare size, nchars, inner,
middle and outer with size_t instead of int.  Replace %d and
%Zd with %zu in printf. Update "MAX (0, nchars - 128)" and
"MAX (outer, nchars - 64)" to support unsigned outer and
nchars. Also exit loop when outer == 0.
H.J. Lu [Thu, 31 Aug 2017 13:28:31 +0000 (06:28 -0700)]
Place $(elf-objpfx)sofini.os last [BZ #22051]
Since sofini.os terminates .eh_frame section, it should be placed last.
[BZ #22051]
* Makerules (build-module-helper-objlist): Filter out
$(elf-objpfx)sofini.os.
(build-shlib-objlist): Append $(elf-objpfx)sofini.os if it is
needed.
getaddrinfo: Fix error handling in gethosts [BZ #21915] [BZ #21922]
The old code uses errno as the primary indicator for success or
failure. This is wrong because errno is only set for specific
combinations of the status return value and the h_errno variable.
This simplifies the code because it is not necessary to propagate the
temporary h_errno value to the thread-local variable. It also increases
compatibility with NSS modules which update only one of the two places.
Joseph Myers [Tue, 22 Aug 2017 00:30:51 +0000 (00:30 +0000)]
Fix position of tests-unsupported definition in assert/Makefile.
tests-unsupported has to be defined before the inclusion of Rules in a
subdirectory Makefile; otherwise it is ineffective. This patch fixes
the ordering in assert/Makefile, where a recent test addition put
tests-unsupported too late (resulting in build failures when the C++
compiler was missing or broken, and thereby showing up the unrelated
bug 21987).
Incidentally, I don't see why these tests depend on
$(have-cxx-thread_local) rather than just a working C++ compiler.
Tested in such a configuration (broken compiler/libstdc++) with
build-many-glibcs.py.