Wilco Dijkstra [Wed, 19 Dec 2018 18:28:24 +0000 (18:28 +0000)]
[AArch64] Add ifunc support for Ares
Add Ares to the midr_el0 list and support ifunc dispatch. Since Ares
supports 2 128-bit loads/stores, use Neon registers for memcpy by
selecting __memcpy_falkor by default (we should rename this to
__memcpy_simd or similar).
* manual/tunables.texi (glibc.cpu.name): Add ares tunable.
* sysdeps/aarch64/multiarch/memcpy.c (__libc_memcpy): Use
__memcpy_falkor for ares.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.h (IS_ARES):
Add new define.
* sysdeps/unix/sysv/linux/aarch64/cpu-features.c (cpu_list):
Add ares cpu.
Vector registers perform better than scalar register pairs for copying
data so prefer them instead. This results in a time reduction of over
50% (i.e. a 2x speed improvement) for some smaller sizes for memcpy-walk.
Larger sizes show improvements of around 1% to 2%. memcpy-random shows
a very small improvement, in the range of 1-2%.
* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
Use vector registers.
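As a rough illustration, here is a minimal C sketch of the idea (using
arm_neon.h intrinsics rather than the actual glibc assembly; tail handling
omitted and the function name is ours):

  #include <arm_neon.h>
  #include <stddef.h>

  /* Copy 32 bytes per iteration through 128-bit vector registers, the
     way __memcpy_falkor now does in its main loop.  */
  static void
  copy_via_vectors (unsigned char *dst, const unsigned char *src, size_t n)
  {
    for (size_t i = 0; i + 32 <= n; i += 32)
      {
        uint8x16_t a = vld1q_u8 (src + i);       /* 128-bit load */
        uint8x16_t b = vld1q_u8 (src + i + 16);  /* second 128-bit load */
        vst1q_u8 (dst + i, a);                   /* 128-bit store */
        vst1q_u8 (dst + i + 16, b);
      }
  }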
aarch64,falkor: Ignore prefetcher tagging for smaller copies
For smaller and medium sized copies, the effects of hardware
prefetching are not as dominant as instruction level parallelism.
Hence it makes more sense to load data into multiple registers than to
try and route them to the same prefetch unit. This is also the case
for the loop exit where we are unable to latch on to the same prefetch
unit anyway so it makes more sense to have data loaded in parallel.
The performance results are a bit mixed with memcpy-random, with
numbers jumping between -1% and +3%, i.e. the numbers don't seem
repeatable. memcpy-walk sees a 70% improvement (i.e. > 2x) for 128
bytes and that improvement reduces down as the impact of the tail copy
decreases in comparison to the loop.
* sysdeps/aarch64/multiarch/memcpy_falkor.S (__memcpy_falkor):
Use multiple registers to copy data in loop tail.
aarch64: Improve strncmp for mutually misaligned inputs
The mutually misaligned inputs on aarch64 are compared with a simple
byte-by-byte comparison, which is not very efficient. Enhance the comparison
similar to strcmp by loading a double-word at a time. The peak
performance improvement (i.e. 4k maxlen comparisons) due to this on
the strncmp microbenchmark is as follows:
falkor: 3.5x (up to 72% time reduction)
cortex-a73: 3.5x (up to 71% time reduction)
cortex-a53: 3.5x (up to 71% time reduction)
All mutually misaligned inputs from 16 bytes maxlen onwards show
upwards of 15% improvement and there is no measurable effect on the
performance of aligned/mutually aligned inputs.
* sysdeps/aarch64/strncmp.S (count): New macro.
(strncmp): Store misaligned length in SRC1 in COUNT.
(mutual_align): Adjust.
(misaligned8): Load dword at a time when it is safe.
I accidentally set the loop jump back label as misaligned8 instead of
do_misaligned. The typo is harmless but it's always nice to not have
to unnecessarily execute those two instructions.
* sysdeps/aarch64/strcmp.S (do_misaligned): Jump back to
do_misaligned, not misaligned8.
Replace the simple byte-wise compare in the misaligned case with a
dword compare with page boundary checks in place. For simplicity I've
chosen a 4K page boundary so that we don't have to query the actual
page size on the system.
This results in up to 3x improvement in performance in the unaligned
case on falkor and about 2.5x improvement on mustang as measured using
bench-strcmp.
* sysdeps/aarch64/strcmp.S (misaligned8): Compare dword at a
time whenever possible.
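The safety condition behind the dword loads can be sketched in C (assuming
the conservative 4K page the commit describes; the helper name is ours):

  #include <stdint.h>

  /* An unaligned 8-byte load starting at P is safe if it cannot cross a
     4K page boundary, i.e. the in-page offset leaves room for 8 bytes.  */
  static int
  dword_load_is_safe (const void *p)
  {
    return ((uintptr_t) p & 0xfff) <= 0xfff - 7;
  }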
aarch64: Optimized memcmp for medium to large sizes
This improved memcmp provides a fast path for compares up to 16 bytes
and then compares 16 bytes at a time, thus optimizing loads from both
sources. The glibc memcmp microbenchmark retains performance (with an
error of ~1ns) for smaller compare sizes and reduces up to 31% of
execution time for compares up to 4K on the APM Mustang. On Qualcomm
Falkor this improves to almost 48%, i.e. it is almost 2x improvement
for sizes of 2K and above.
* sysdeps/aarch64/memcmp.S: Widen comparison to 16 bytes at a
time.
Wilco Dijkstra [Thu, 10 Aug 2017 16:00:38 +0000 (17:00 +0100)]
[AArch64] Optimized memcmp.
This is an optimized memcmp for AArch64. This is a complete rewrite
using a different algorithm. The previous version split into cases
where both inputs were aligned, the inputs were mutually aligned and
unaligned using a byte loop. The new version combines all these cases,
while small inputs of less than 8 bytes are handled separately.
This allows the main code to be sped up using unaligned loads since
there are now at least 8 bytes to be compared. After the first 8 bytes,
align the first input. This ensures each iteration does at most one
unaligned access and mutually aligned inputs behave as aligned.
After the main loop, process the last 8 bytes using unaligned accesses.
This improves performance of (mutually) aligned cases by 25% and
unaligned by >500% (yes >6 times faster) on large inputs.
* sysdeps/aarch64/memcmp.S (memcmp):
Rewrite of optimized memcmp.
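A C model of the algorithm might look like this (a sketch only: the real
code also aligns the first input after the first 8 bytes and handles
inputs shorter than 8 bytes separately):

  #include <stddef.h>
  #include <stdint.h>
  #include <string.h>

  /* Compare 8 bytes at a time using unaligned loads, then resolve any
     difference bytewise.  */
  static int
  memcmp_sketch (const unsigned char *s1, const unsigned char *s2, size_t n)
  {
    while (n >= 8)
      {
        uint64_t a, b;
        memcpy (&a, s1, 8);   /* memcpy expresses an unaligned load in C */
        memcpy (&b, s2, 8);
        if (a != b)
          break;              /* the differing byte is in these 8 bytes */
        s1 += 8; s2 += 8; n -= 8;
      }
    for (; n > 0; n--, s1++, s2++)
      if (*s1 != *s2)
        return *s1 - *s2;
    return 0;
  }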
posix: Fix large mmap64 offset for mips64n32 (BZ#24699)
The fix for BZ#21270 (commit 158d5fa0e19) added a mask to prevent offsets
larger than 2^44 from being used with __NR_mmap2. However, mips64n32 uses
__NR_mmap, as mips64n64 does, but still defines off_t as the old non-LFS
type (other ILP32 ABIs, such as x32, define off_t to be equal to off64_t).
This leads to the mask meant only for the __NR_mmap2 call also being applied
to __NR_mmap, thus limiting the maximum offset that can be used with mmap64.
This patch fixes it by setting the high mask only for __NR_mmap2 usage. The
posix/tst-mmap-offset.c test already exercises this and also fails for
mips64n32. The patch also changes the test to check against an arch-specific
header that defines the maximum supported offset.
Checked on x86_64-linux-gnu and i686-linux-gnu; I also ran tst-mmap-offset
on QEMU-simulated mips64 with a 3.2.0 kernel for both mips-linux-gnu and
mips64-n32-linux-gnu.
[BZ #24699]
* posix/tst-mmap-offset.c: Mention BZ #24699.
(do_test_bz21270): Rename to do_test_large_offset and use
mmap64_maximum_offset to check for maximum expected offset value.
* sysdeps/generic/mmap_info.h: New file.
* sysdeps/unix/sysv/linux/mips/mmap_info.h: Likewise.
* sysdeps/unix/sysv/linux/mmap64.c (MMAP_OFF_HIGH_MASK): Define iff
__NR_mmap2 is used.
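The limit the new arch-specific header expresses can be sketched as follows
(illustrative only, not the actual mmap_info.h contents):

  #include <stdint.h>

  /* __NR_mmap2 passes the offset in 4K-page units in a 32-bit argument,
     so it can address at most 2^(32+12) - 1 bytes; a real __NR_mmap
     takes the full off64_t and needs no extra mask.  */
  static uint64_t
  mmap64_maximum_offset_sketch (int uses_mmap2)
  {
    return uses_mmap2 ? (UINT64_C (1) << 44) - 1 : UINT64_MAX;
  }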
Avoid lazy binding of symbols that may follow a variant PCS with different
register usage convention from the base PCS.
Currently the lazy binding entry code does not preserve all the registers
required for AdvSIMD and SVE vector calls. Saving and restoring all
registers unconditionally may break existing binaries, even if they never
use vector calls, because of the larger stack requirement for lazy
resolution, which can be significant on an SVE system.
The solution is to mark all symbols in the symbol table that may follow
a variant PCS so the dynamic linker can handle them specially. In this
patch such symbols are always resolved at load time, not lazily.
So currently LD_AUDIT for variant PCS symbols is not supported; for that
the _dl_runtime_profile entry needs to be changed e.g. to unconditionally
save/restore all registers (but pass down arg and retval registers to
pltentry/exit callbacks according to the base PCS).
This patch also removes a __builtin_expect from the modified code because
the branch prediction hint did not seem useful.
* sysdeps/aarch64/dl-machine.h (elf_machine_lazy_rel): Check
STO_AARCH64_VARIANT_PCS and bind such symbols at load time.
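The essence of the check can be sketched like this (STO_AARCH64_VARIANT_PCS
is the st_other bit defined by the AArch64 ELF ABI; the surrounding resolver
code is elided and the helper name is ours):

  /* Bit in an ELF symbol's st_other field, per the AArch64 ABI.  */
  #define STO_AARCH64_VARIANT_PCS 0x80

  /* A JUMP_SLOT relocation for a variant PCS symbol must not be bound
     lazily: the lazy-resolution stub does not preserve all registers
     such a call may use.  */
  static int
  must_bind_now (unsigned char st_other)
  {
    return (st_other & STO_AARCH64_VARIANT_PCS) != 0;
  }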
Szabolcs Nagy [Thu, 25 Apr 2019 14:35:35 +0000 (15:35 +0100)]
aarch64: add STO_AARCH64_VARIANT_PCS and DT_AARCH64_VARIANT_PCS
STO_AARCH64_VARIANT_PCS is a non-visibility st_other flag for marking
symbols that reference functions that may follow a variant PCS with
different register usage convention from the base PCS.
DT_AARCH64_VARIANT_PCS is a dynamic tag that marks ELF modules that
have R_*_JUMP_SLOT relocations for symbols marked with
STO_AARCH64_VARIANT_PCS (i.e. have variant PCS calls via a PLT).
Wilco Dijkstra [Fri, 10 May 2019 15:38:21 +0000 (16:38 +0100)]
Fix tcache count maximum (BZ #24531)
Each entry in the tcache counts[] array is a char, which has a very small
range and thus may overflow. When setting the tcache_count tunable, there
is no overflow check.
However the tunable must not be larger than the maximum value of the tcache
counts[] array, otherwise it can overflow when filling the tcache.
[BZ #24531]
* malloc/malloc.c (MAX_TCACHE_COUNT): New define.
(do_set_tcache_count): Only update if count is small enough.
* manual/tunables.texi (glibc.malloc.tcache_count): Document max value.
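A sketch of the fix (names simplified from malloc.c; 127 is what a signed
char entry can hold):

  #include <stddef.h>

  #define MAX_TCACHE_COUNT 127  /* must fit one counts[] entry */

  static void
  do_set_tcache_count_sketch (size_t *tcache_count, size_t value)
  {
    if (value <= MAX_TCACHE_COUNT)  /* otherwise keep the old setting */
      *tcache_count = value;
  }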
Andreas Schwab [Tue, 14 May 2019 15:14:59 +0000 (17:14 +0200)]
Fix crash in _IO_wfile_sync (bug 20568)
When computing the length of the converted part of the stdio buffer, use
the number of consumed wide characters, not the (negative) distance to the
end of the wide buffer.
Stefan Liebler [Thu, 7 Feb 2019 14:18:36 +0000 (15:18 +0100)]
Add compiler barriers around modifications of the robust mutex list for pthread_mutex_trylock. [BZ #24180]
While debugging a kernel warning, Thomas Gleixner, Sebastian Sewior and
Heiko Carstens found a bug in pthread_mutex_trylock due to misordered
instructions:
140: a5 1b 00 01 oill %r1,1
144: e5 48 a0 f0 00 00 mvghi 240(%r10),0 <--- THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);
14a: e3 10 a0 e0 00 24 stg %r1,224(%r10) <--- last THREAD_SETMEM of ENQUEUE_MUTEX_PI
Please have a look at the discussion:
"Re: WARN_ON_ONCE(!new_owner) within wake_futex_pi() triggerede"
(https://lore.kernel.org/lkml/20190202112006.GB3381@osiris/)
This patch is introducing the same compiler barriers and comments
for pthread_mutex_trylock as introduced for pthread_mutex_lock and
pthread_mutex_timedlock by commit 8f9450a0b7a9e78267e8ae1ab1000ebca08e473e
"Add compiler barriers around modifications of the robust mutex list."
ChangeLog:
[BZ #24180]
* nptl/pthread_mutex_trylock.c (__pthread_mutex_trylock):
Add compiler barriers and comments.
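The barrier pattern, as in the referenced commit, looks roughly like this
(a sketch; ENQUEUE_MUTEX_PI and THREAD_SETMEM are the existing nptl macros):

  /* We must not enqueue the mutex before we have acquired it.  */
  __asm ("" ::: "memory");   /* compiler barrier: no reordering */
  ENQUEUE_MUTEX_PI (mutex);
  /* We need to clear op_pending only after the mutex is enqueued.  */
  __asm ("" ::: "memory");
  THREAD_SETMEM (THREAD_SELF, robust_head.list_op_pending, NULL);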
H.J. Lu [Mon, 4 Feb 2019 16:55:52 +0000 (08:55 -0800)]
x86-64 memcmp: Use unsigned Jcc instructions on size [BZ #24155]
Since the size argument is unsigned, we should use unsigned Jcc
instructions, instead of signed, to check size.
Tested on x86-64 and x32, with and without --disable-multi-arch.
[BZ #24155]
CVE-2019-7309
* NEWS: Updated for CVE-2019-7309.
* sysdeps/x86_64/memcmp.S: Use RDX_LP for size. Clear the
upper 32 bits of RDX register for x32. Use unsigned Jcc
instructions, instead of signed.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp-2.
* sysdeps/x86_64/x32/tst-size_t-memcmp-2.c: New test.
H.J. Lu [Fri, 1 Feb 2019 20:24:08 +0000 (12:24 -0800)]
x86-64 strnlen/wcsnlen: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strnlen/wcsnlen for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strlen-avx2.S: Use RSI_LP for length.
Clear the upper 32 bits of RSI register.
* sysdeps/x86_64/strlen.S: Use RSI_LP for length.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strnlen
and tst-size_t-wcsnlen.
* sysdeps/x86_64/x32/tst-size_t-strnlen.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wcsnlen.c: Likewise.
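The hazard itself can be demonstrated in plain C (an illustration, not
glibc code; on real x32 the garbage is whatever the caller left in the
register):

  #include <stdint.h>
  #include <stddef.h>

  int
  main (void)
  {
    /* A 64-bit register carrying a 32-bit size_t in its lower half;
       the upper half is unspecified garbage on x32.  */
    uint64_t reg = UINT64_C (0xdeadbeef00000010);

    size_t   correct = (uint32_t) reg; /* what the ABI defines: 0x10 */
    uint64_t buggy   = reg;            /* what 64-bit-only asm reads */

    return correct == 0x10 && buggy != correct ? 0 : 1;
  }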
H.J. Lu [Fri, 1 Feb 2019 20:23:23 +0000 (12:23 -0800)]
x86-64 strncpy: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes strncpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strcpy-sse2-unaligned.S: Use RDX_LP
for length.
* sysdeps/x86_64/multiarch/strcpy-ssse3.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncpy.
* sysdeps/x86_64/x32/tst-size_t-strncpy.c: New file.
H.J. Lu [Fri, 1 Feb 2019 20:22:33 +0000 (12:22 -0800)]
x86-64 strncmp family: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes the strncmp family for x32. Tested on x86-64 and x32.
On x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/strcmp-sse42.S: Use RDX_LP for length.
* sysdeps/x86_64/strcmp.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-strncasecmp,
tst-size_t-strncmp and tst-size_t-wcsncmp.
* sysdeps/x86_64/x32/tst-size_t-strncasecmp.c: New file.
* sysdeps/x86_64/x32/tst-size_t-strncmp.c: Likewise.
* sysdeps/x86_64/x32/tst-size_t-wcsncmp.c: Likewise.
H.J. Lu [Fri, 1 Feb 2019 20:21:41 +0000 (12:21 -0800)]
x86-64 memset/wmemset: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memset/wmemset for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memset-avx512-no-vzeroupper.S: Use
RDX_LP for length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-wmemset.
* sysdeps/x86_64/x32/tst-size_t-memset.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wmemset.c: Likewise.
H.J. Lu [Fri, 1 Feb 2019 20:20:54 +0000 (12:20 -0800)]
x86-64 memrchr: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memrchr for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/memrchr.S: Use RDX_LP for length.
* sysdeps/x86_64/multiarch/memrchr-avx2.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memrchr.
* sysdeps/x86_64/x32/tst-size_t-memrchr.c: New file.
H.J. Lu [Fri, 1 Feb 2019 20:20:06 +0000 (12:20 -0800)]
x86-64 memcpy: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memcpy for x32. Tested on x86-64 and x32. On x86-64,
libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memcpy-ssse3-back.S: Use RDX_LP for
length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memcpy-ssse3.S: Likewise.
* sysdeps/x86_64/multiarch/memmove-avx512-no-vzeroupper.S:
Likewise.
* sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S:
Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcpy.
* sysdeps/x86_64/x32/tst-size_t-memcpy.c: New file.
H.J. Lu [Fri, 1 Feb 2019 20:19:07 +0000 (12:19 -0800)]
x86-64 memcmp/wmemcmp: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memcmp/wmemcmp for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/multiarch/memcmp-avx2-movbe.S: Use RDX_LP for
length. Clear the upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memcmp-sse4.S: Likewise.
* sysdeps/x86_64/multiarch/memcmp-ssse3.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memcmp and
tst-size_t-wmemcmp.
* sysdeps/x86_64/x32/tst-size_t-memcmp.c: New file.
* sysdeps/x86_64/x32/tst-size_t-wmemcmp.c: Likewise.
H.J. Lu [Fri, 1 Feb 2019 20:17:09 +0000 (12:17 -0800)]
x86-64 memchr/wmemchr: Properly handle the length parameter [BZ #24097]
On x32, the size_t parameter may be passed in the lower 32 bits of a
64-bit register with the non-zero upper 32 bits. The string/memory
functions written in assembly can only use the lower 32 bits of a
64-bit register as length or must clear the upper 32 bits before using
the full 64-bit register for length.
This patch fixes memchr/wmemchr for x32. Tested on x86-64 and x32. On
x86-64, libc.so is the same with and without the fix.
[BZ #24097]
CVE-2019-6488
* sysdeps/x86_64/memchr.S: Use RDX_LP for length. Clear the
upper 32 bits of RDX register.
* sysdeps/x86_64/multiarch/memchr-avx2.S: Likewise.
* sysdeps/x86_64/x32/Makefile (tests): Add tst-size_t-memchr and
tst-size_t-wmemchr.
* sysdeps/x86_64/x32/test-size_t.h: New file.
* sysdeps/x86_64/x32/tst-size_t-memchr.c: Likewise.
* sysdeps/x86_64/x32/tst-size_t-wmemchr.c: Likewise.
[BZ #21745] powerpc: build some IFUNC math functions for libc and libm
Some math functions have to be distributed in libc because they're
required by printf.
libc and libm require their own builds of these functions, e.g. libc
functions have to call __stack_chk_fail_local in order to bypass the
PLT, while libm functions have to call __stack_chk_fail.
While math/Makefile treats the generic cases, e.g. s_isinff, the
multiarch Makefile has to treat its own files, e.g. s_isinff-ppc64.
[BZ #21745]
* sysdeps/powerpc/powerpc64/fpu/multiarch/Makefile:
[$(subdir) = math] (sysdep_calls): New variable. Has the
previous contents of sysdep_routines, but re-sorted.
[$(subdir) = math] (sysdep_routines): Re-use the contents from
sysdep_calls.
[$(subdir) = math] (libm-sysdep_routines): Remove the functions
defined in sysdep_calls and replace by the respective m_* names.
* sysdeps/powerpc/powerpc64/fpu/multiarch/s_isnan-ppc64.S
(compat_symbol): Undefine to avoid duplicated compat symbols in
libc.
Florian Weimer [Mon, 31 Dec 2018 21:04:36 +0000 (22:04 +0100)]
malloc: Always call memcpy in _int_realloc [BZ #24027]
This commit removes the custom memcpy implementation from _int_realloc
for small chunk sizes. The ncopies variable has the wrong type, and
an integer wraparound could cause the existing code to copy too few
elements (leaving the new memory region mostly uninitialized).
Therefore, removing this code fixes bug 24027.
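The removed pattern, reconstructed as a sketch (types illustrative), shows
the wraparound; the fix is to pass the byte count straight to memcpy:

  #include <stddef.h>
  #include <string.h>

  /* A 32-bit element count truncates for copy sizes above ~32 GiB on
     64-bit targets, so too few bytes are copied.  */
  static void
  old_copy_sketch (void *d, const void *s, size_t copysize)
  {
    unsigned int ncopies = copysize / sizeof (size_t);  /* may wrap */
    memcpy (d, s, (size_t) ncopies * sizeof (size_t));  /* too short */
  }

  /* The fix amounts to: memcpy (d, s, copysize);  */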
The commit 'Disable TSX on some Haswell processors.' (2702856bf4) changed the
default flags for Haswell models. Previously, new models were handled by the
default switch path, which assumed a Core i3/i5/i7 if AVX is available. After
the patch, Haswell models (0x3f, 0x3c, 0x45, 0x46) do not set the flags
Fast_Rep_String, Fast_Unaligned_Load, Fast_Unaligned_Copy, and
Prefer_PMINUB_for_stringop (only the TSX one).
This patch fixes it by disentangling the TSX flag handling from the memory
optimization ones. The strstr case cited in the patch now selects
__strstr_sse2_unaligned as expected for the Haswell CPU.
Checked on x86_64-linux-gnu.
[BZ #23709]
* sysdeps/x86/cpu-features.c (init_cpu_features): Set TSX bits
independently of other flags.
Joseph Myers [Mon, 18 Dec 2017 22:55:28 +0000 (22:55 +0000)]
Disable -Wrestrict for two nptl/tst-attr3.c tests.
nptl/tst-attr3 fails to build with GCC mainline because of
(deliberate) aliasing between the second (attributes) and fourth
(argument to thread start routine) arguments to pthread_create.
Although both those arguments are restrict-qualified in POSIX,
pthread_create does not actually dereference its fourth argument; it's
an opaque pointer passed to the thread start routine. Thus, the
aliasing is actually valid in this case, and it's deliberate in the
test. So this patch makes the test disable -Wrestrict for the two
pthread_create calls in question. (-Wrestrict was added in GCC 7,
hence the __GNUC_PREREQ conditions, but the particular warning in
question is new in GCC 8.)
Tested compilation with build-many-glibcs.py for aarch64-linux-gnu.
* nptl/tst-attr3.c: Include <libc-diag.h>.
(do_test) [__GNUC_PREREQ (7, 0)]: Ignore -Wrestrict for two tests.
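The pattern in the test uses glibc's <libc-diag.h> macros, roughly as
follows (a sketch; variable names are ours, and the second argument
deliberately aliases the fourth):

  #include <libc-diag.h>

  DIAG_PUSH_NEEDS_COMMENT;
  #if __GNUC_PREREQ (7, 0)
  /* The aliasing of the attributes and start-routine arguments is
     deliberate; pthread_create never dereferences the latter.  */
  DIAG_IGNORE_NEEDS_COMMENT (8, "-Wrestrict");
  #endif
  err = pthread_create (&th, &attr, start_routine, &attr);
  DIAG_POP_NEEDS_COMMENT;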
Joseph Myers [Thu, 14 Jun 2018 14:20:00 +0000 (14:20 +0000)]
Ignore -Wrestrict for one strncat test.
With current GCC mainline, one strncat test involving a size close to
SIZE_MAX results in a -Wrestrict warning that a buffer of that size would
imply that the two buffers must overlap. This patch fixes the build
by adding disabling of -Wrestrict (for GCC versions supporting that
option) to the already-present disabling of -Wstringop-overflow= and
-Warray-bounds for this test.
Tested with build-many-glibcs.py that this restores the testsuite
build with GCC mainline for aarch64-linux-gnu.
* string/tester.c (test_strncat) [__GNUC_PREREQ (7, 0)]: Also
ignore -Wrestrict for one test.
Joseph Myers [Mon, 18 Dec 2017 22:52:41 +0000 (22:52 +0000)]
Disable strncat test array-bounds warnings for GCC 8.
Some strncat tests fail to build with GCC 8 because of -Warray-bounds
warnings. These tests are deliberately test over-large size arguments
passed to strncat, and already disable -Wstringop-overflow warnings,
but now the warnings for these tests come under -Warray-bounds so that
option needs disabling for them as well, which this patch does (with
an update on the comments; the DIAG_IGNORE_NEEDS_COMMENT call for
-Warray-bounds doesn't need to be conditional itself, because that
option is supported by all versions of GCC that can build glibc).
Tested compilation with build-many-glibcs.py for aarch64-linux-gnu.
* string/tester.c (test_strncat): Also disable -Warray-bounds
warnings for two tests.
Joseph Myers [Tue, 14 Nov 2017 17:52:26 +0000 (17:52 +0000)]
Fix string/tester.c build with GCC 8.
GCC 8 warns about more cases of string functions truncating their
output or not copying a trailing NUL byte.
This patch fixes testsuite build failures caused by such warnings in
string/tester.c. In general, the warnings are disabled around the
relevant calls using DIAG_* macros, since the relevant cases are being
deliberately tested. In one case, the warning is with
-Wstringop-overflow= instead of -Wstringop-truncation; in that case,
the conditional is __GNUC_PREREQ (7, 0) (being the version where
-Wstringop-overflow= was introduced), to allow the conditional to be
removed sooner, since it's harmless to disable the warning for a
GCC version where it doesn't actually occur. In the case of warnings
for strncpy calls in test_memcmp, the calls in question are changed to
use memcpy, as they don't copy a trailing NUL and the point of that
code is to test memcmp rather than strncpy.
Tested (compilation) with GCC 8 for x86_64-linux-gnu with
build-many-glibcs.py (in conjunction with Martin's patch to allow
glibc to build).
* string/tester.c (test_stpncpy): Disable -Wstringop-truncation
for stpncpy calls for GCC 8.
(test_strncat): Disable -Wstringop-truncation warning for strncat
calls for GCC 8. Disable -Wstringop-overflow= warning for one
strncat call for GCC 7.
(test_strncpy): Disable -Wstringop-truncation warning for strncpy
calls for GCC 8.
(test_memcmp): Use memcpy instead of strncpy for calls not copying
trailing NUL.
Joseph Myers [Mon, 22 Oct 2018 12:08:12 +0000 (14:08 +0200)]
Fix nscd readlink argument aliasing (bug 22446).
Current GCC mainline detects that nscd calls readlink with the same
buffer for both input and output, which is not valid (those arguments
are both restrict-qualified in POSIX). This patch makes it use a
separate buffer for readlink's input, avoiding the aliasing. The input
buffer's size is sufficient to avoid truncation (so there should be no
warnings about possible truncation); it is not strictly minimal, but it
is much smaller than the output buffer.
Tested compilation for aarch64-linux-gnu with build-many-glibcs.py.
[BZ #22446]
* nscd/connections.c (handle_request) [SO_PEERCRED]: Use separate
buffers for readlink input and output.
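The shape of the fix (a sketch; the real code sizes the input buffer so
the formatted path can never be truncated):

  #include <stdio.h>
  #include <unistd.h>

  static ssize_t
  readlink_fd (int fd, char *buf, size_t buflen)
  {
    /* Build readlink's input in its own small buffer instead of reusing
       the output buffer, so the two arguments never alias.  */
    char procfname[32];  /* ample for "/proc/self/fd/%d" */
    snprintf (procfname, sizeof procfname, "/proc/self/fd/%d", fd);
    return readlink (procfname, buf, buflen);
  }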
If __gmtime_r returns NULL (because the year overflows the range of
int), this will dereference a null pointer. Otherwise, if the
computed year does not fit in four characters, this will cause a
buffer overrun of the fixed-size 15-byte buffer. With current GCC
mainline, there is a compilation failure because of the possible
buffer overrun.
I couldn't find a specification for how this function is meant to
behave, but Paul pointed to RFC 4034 as relevant to the cases where
this function is called from within glibc. The function's interface
is inherently problematic when dates beyond Y2038 might be involved,
because of the ambiguity in how to interpret 32-bit timestamps as such
dates (the RFC suggests interpreting times as being within 68 years of
the present date, which would mean some kind of interface whose
behavior depends on the present date).
This patch works on the basis of making a minimal fix in preparation
for obsoleting the function. The function is made to handle times in
the interval [0, 0x7fffffff] only, on all platforms, with <overflow>
used as the output string in other cases (and errno set to EOVERFLOW
in such cases). This seems to be a reasonable state for the function
to be in when made a compat symbol by a future patch, being compatible
with any existing uses for existing timestamps without trying to work
for later timestamps. Results independent of the range of time_t also
simplify the testcase.
I couldn't persuade GCC to recognize the ranges of the struct tm
fields by adding explicit range checks with a call to
__builtin_unreachable if outside the range (this looks similar to
<https://gcc.gnu.org/bugzilla/show_bug.cgi?id=80776>), so having added
a range check on the input, this patch then disables the
-Wformat-overflow= warning for the sprintf call (I prefer that to the
use of strftime, as being more transparently correct without knowing
what each of %m and %M etc. is).
I do not know why this build failure should be new with mainline GCC
(that is, I don't know what GCC change might have introduced it, when
the basic functionality for such warnings was already in GCC 7).
I do not know if this is a security issue (that is, if there are
plausible ways in which a date before -999 or after 9999 from an
untrusted source might end up in this function). The system clock is
arguably an untrusted source (in that e.g. NTP is insecure), but
probably not to that extent (NTP can't communicate such wild
timestamps), and uses from within glibc are limited to 32-bit inputs.
Tested with build-many-glibcs.py that this restores the build for arm
with yesterday's mainline GCC. Also tested for x86_64 and x86.
[BZ #22463]
* resolv/res_debug.c: Include <libc-diag.h>.
(p_secstodate): Assert time_t at least as wide as u_long. On
overflow, use integer seconds since the epoch as output, or use
"<overflow>" as output and set errno to EOVERFLOW if integer
seconds since the epoch would be 14 or more characters.
(p_secstodate) [__GNUC_PREREQ (7, 0)]: Disable -Wformat-overflow=
for sprintf call.
* resolv/tst-p_secstodate.c: New file.
* resolv/Makefile (tests): Add tst-p_secstodate.
($(objpfx)tst-p_secstodate): Depend on $(objpfx)libresolv.so.
Martin Sebor [Thu, 16 Nov 2017 00:39:59 +0000 (17:39 -0700)]
utmp: Avoid -Wstringop-truncation warning
The -Wstringop-truncation option new in GCC 8 detects common misuses
of the strncat and strncpy function that may result in truncating
the copied string before the terminating NUL. To avoid false positive
warnings for correct code that intentionally creates sequences of
characters that aren't guaranteed to be NUL-terminated, arrays that
are intended to store such sequences should be decorated with a new
nonstring attribute. This change adds this attribute to Glibc and
uses it to suppress such false positives.
ChangeLog:
* misc/sys/cdefs.h (__attribute_nonstring__): New macro.
* sysdeps/gnu/bits/utmp.h (struct utmp): Use it.
* sysdeps/unix/sysv/linux/s390/bits/utmp.h (struct utmp): Same.
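The macro and its use look roughly like this (a sketch of the cdefs.h and
utmp.h changes; the version guard and struct are illustrative):

  /* misc/sys/cdefs.h: only GCC 8 and later know "nonstring".  */
  #if defined __GNUC__ && __GNUC__ >= 8
  # define __attribute_nonstring__ __attribute__ ((__nonstring__))
  #else
  # define __attribute_nonstring__
  #endif

  /* bits/utmp.h: ut_user need not be NUL-terminated.  */
  struct utmp_sketch
  {
    char ut_user[32] __attribute_nonstring__;
  };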
Joseph Myers [Wed, 22 Nov 2017 18:44:23 +0000 (18:44 +0000)]
Avoid use of strlen in getlogin_r (bug 22447).
Building glibc with current mainline GCC fails, among other reasons,
because of an error for use of strlen on the nonstring ut_user field.
This patch changes the problem code in getlogin_r to use __strnlen
instead. It also needs to set the trailing NUL byte of the result
explicitly, because of the case where ut_user does not have such a
trailing NUL byte (but the result should always have one).
Tested for x86_64. Also tested that, in conjunction with
<https://sourceware.org/ml/libc-alpha/2017-11/msg00797.html>, it fixes
the build for arm with mainline GCC.
[BZ #22447]
* sysdeps/unix/getlogin_r.c (__getlogin_r): Use __strnlen not
strlen to compute length of ut_user and set trailing NUL byte of
result explicitly.
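In outline, the fixed copy works like this (a sketch; the real code uses
the internal __strnlen and the utmp field sizes):

  #include <string.h>

  /* ut_user may legally lack a trailing NUL, so bound the scan and
     terminate the result explicitly.  */
  static void
  copy_ut_user (char *name, const char *ut_user, size_t field_size)
  {
    size_t len = strnlen (ut_user, field_size);
    memcpy (name, ut_user, len);
    name[len] = '\0';  /* the result must always be NUL-terminated */
  }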
Fix misreported errno on preadv2/pwritev2 (BZ#23579)
The fallback code of the Linux wrapper for preadv2/pwritev2 executes
regardless of the errno value from preadv2, instead of only in the case
where the syscall is not supported.
This fixes it by calling the fallback code iff errno is ENOSYS. The
patch also adds tests for both invalid file descriptor and invalid
iov_len and vector count.
The only discrepancy between preadv2 and the fallback code regarding
error reporting is when invalid flags are used. The fallback code
bails out earlier with ENOTSUP instead of EINVAL/EBADF when the syscall
is used.
Checked on x86_64-linux-gnu on a 4.4.0 and 4.15.0 kernel.
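A sketch of the corrected dispatch (raw_preadv2 is a hypothetical stand-in
for the syscall invocation):

  #include <errno.h>
  #include <sys/uio.h>

  extern ssize_t raw_preadv2 (int, const struct iovec *, int, off_t, int);

  static ssize_t
  preadv2_sketch (int fd, const struct iovec *iov, int iovcnt,
                  off_t offset, int flags)
  {
    ssize_t ret = raw_preadv2 (fd, iov, iovcnt, offset, flags);
    if (ret >= 0 || errno != ENOSYS)
      return ret;            /* success or a genuine error: report it */
    if (flags != 0)
      {
        errno = ENOTSUP;     /* the fallback cannot emulate flags */
        return -1;
      }
    return preadv (fd, iov, iovcnt, offset);  /* emulate without flags */
  }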
Stefan Liebler [Thu, 6 Sep 2018 12:27:03 +0000 (14:27 +0200)]
Fix segfault in maybe_script_execute.
If glibc is built with gcc 8 and -march=z900,
the testcase posix/tst-spawn4-compat crashes with a segfault.
In function maybe_script_execute, the new_argv array is dynamically
initialized on stack with (argc + 1) elements.
The function wants to add _PATH_BSHELL as the first argument
and writes out of bounds of new_argv.
There is an off-by-one because maybe_script_execute fails to count
the terminating NULL when sizing new_argv.
ChangeLog:
* sysdeps/unix/sysv/linux/spawni.c (maybe_script_execute):
Increment size of new_argv by one.
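The corrected sizing, in outline (a sketch; the caller declares
char *new_argv[argc + 2]):

  #include <paths.h>

  /* _PATH_BSHELL, the script path, argc - 1 remaining arguments, and
     the terminating NULL need argc + 2 slots in total.  */
  static void
  build_shell_argv (char *new_argv[], char *const argv[], int argc,
                    const char *file)
  {
    new_argv[0] = (char *) _PATH_BSHELL;
    new_argv[1] = (char *) file;
    for (int i = 1; i <= argc; i++)   /* also copies argv[argc] == NULL */
      new_argv[i + 1] = argv[i];
  }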
conform/conformtest.pl: Escape literal braces in regular expressions
This suppresses Perl warnings like these:
Unescaped left brace in regex is deprecated here (and will be fatal in
Perl 5.32), passed through in regex; marked by <-- HERE in m/^element
*({ <-- HERE ([^}]*)}|([^{ ]*)) *({([^}]*)}|([^{ ]*)) *([A-Za-z0-9_]*)
*(.*)/ at conformtest.pl line 370.
Daniel Alvarez [Fri, 29 Jun 2018 15:23:13 +0000 (17:23 +0200)]
getifaddrs: Don't return ifa entries with NULL names [BZ #21812]
A lookup operation in map_newlink could turn into an insert because of
holes in the interface part of the map. This leads to the interface
name being incorrectly set to NULL when the interface is not present
for the address being processed (most likely because the interface was
added between the RTM_GETLINK and RTM_GETADDR calls to the kernel).
When such changes are detected by the kernel, it'll mark the dump as
"inconsistent" by setting NLM_F_DUMP_INTR flag on the next netlink
message.
This patch checks this condition and retries the whole operation.
The hope is that next time the interface corresponding to the address
entry is present in the list and the correct name is returned.
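The detection itself is a one-flag check on each netlink message header
(a sketch; the retry loop around it is elided):

  #include <linux/netlink.h>

  /* The kernel sets NLM_F_DUMP_INTR when the tables changed while a
     dump was in progress; the whole GETLINK/GETADDR sequence must then
     be restarted.  */
  static int
  dump_was_interrupted (const struct nlmsghdr *nlh)
  {
    return (nlh->nlmsg_flags & NLM_F_DUMP_INTR) != 0;
  }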
Florian Weimer [Thu, 28 Jun 2018 11:22:34 +0000 (13:22 +0200)]
Use _STRUCT_TIMESPEC as guard in <bits/types/struct_timespec.h> [BZ #23349]
After commit d76d3703551a362b472c866b5b6089f66f8daa8e ("Fix missing
timespec definition for sys/stat.h (BZ #21371)") in combination with
kernel UAPI changes, GCC sanitizer builds start to fail due to a
conflicting definition of struct timespec in <linux/time.h>. Use
_STRUCT_TIMESPEC as the header file inclusion guard, which is already
checked in the kernel header, to support including <linux/time.h> and
<sys/stat.h> in the same translation unit.
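After the change the header is guarded like this (a sketch of
bits/types/struct_timespec.h; field types as in the classic definition):

  #ifndef _STRUCT_TIMESPEC
  #define _STRUCT_TIMESPEC 1

  #include <bits/types.h>

  /* POSIX.1b structure for a time value.  */
  struct timespec
  {
    __time_t tv_sec;            /* Seconds.  */
    __syscall_slong_t tv_nsec;  /* Nanoseconds.  */
  };

  #endif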
Florian Weimer [Fri, 1 Jun 2018 09:24:58 +0000 (11:24 +0200)]
libio: Avoid _allocate_buffer, _free_buffer function pointers [BZ #23236]
These unmangled function pointers reside on the heap and could
be targeted by exploit writers, effectively bypassing libio vtable
validation. Instead, we ignore these pointers and always call
malloc or free.
In theory, this is a backwards-incompatible change, but using the
global heap instead of the user-supplied callback functions should
have little application impact. (The old libstdc++ implementation
exposed this functionality via a public, undocumented constructor
in its strstreambuf class.)
Paul Pluzhnikov [Sun, 6 May 2018 01:08:27 +0000 (18:08 -0700)]
Fix stack overflow with huge PT_NOTE segment [BZ #20419]
A PT_NOTE in a binary could be arbitrarily large, so using alloca
for it may cause stack overflow. If the note is larger than
__MAX_ALLOCA_CUTOFF, use dynamically allocated memory to read it in.
2018-05-05 Paul Pluzhnikov <ppluzhnikov@google.com>
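The usual glibc pattern for this (a sketch; __MAX_ALLOCA_CUTOFF is the
internal constant, a stand-in value is used here):

  #include <alloca.h>
  #include <stdlib.h>

  #define MAX_ALLOCA_CUTOFF 65536  /* stand-in for __MAX_ALLOCA_CUTOFF */

  static void
  process_note (size_t size)
  {
    void *heap = NULL;
    void *note;
    if (size < MAX_ALLOCA_CUTOFF)
      note = alloca (size);        /* small: the stack is fine */
    else
      note = heap = malloc (size); /* large: heap, freed below */
    if (note == NULL)
      return;
    /* ... read and parse the note ... */
    free (heap);
  }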
Stefan Liebler [Thu, 17 May 2018 12:05:51 +0000 (14:05 +0200)]
Fix blocking pthread_join. [BZ #23137]
On s390 (31bit) if glibc is built with -Os, pthread_join sometimes
blocks indefinitely. This is e.g. observable with
testcase intl/tst-gettext6.
pthread_join is calling lll_wait_tid(tid), which performs the futex-wait
syscall in a loop as long as tid != 0 (thread is alive).
On s390 (and built with -Os), tid is loaded from memory before
comparing against zero and then the tid is loaded a second time
in order to pass it to the futex-wait-syscall.
If the thread exits in between, then the futex-wait-syscall is
called with the value zero and it waits until a futex-wake occurs.
As the thread is already exited, there won't be a futex-wake.
In lll_wait_tid, the tid is stored to the local variable __tid,
which is then used as argument for the futex-wait-syscall.
But unfortunately the compiler is allowed to reload the value
from memory.
With this patch, the tid is loaded with atomic_load_acquire.
Then the compiler is not allowed to reload the value for __tid from memory.
ChangeLog:
[BZ #23137]
* sysdeps/nptl/lowlevellock.h (lll_wait_tid):
Use atomic_load_acquire to load __tid.
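In C11-atomics terms the fixed loop is (a sketch; futex_wait stands in
for the futex-wait syscall wrapper):

  #include <stdatomic.h>

  extern void futex_wait (_Atomic int *addr, int val);  /* hypothetical */

  /* Load tid once per iteration with acquire semantics; the same value
     feeds both the exit test and the futex-wait argument, so the
     compiler can no longer reload it from memory in between.  */
  static void
  wait_tid_sketch (_Atomic int *tid)
  {
    int val;
    while ((val = atomic_load_explicit (tid, memory_order_acquire)) != 0)
      futex_wait (tid, val);  /* blocks only while *tid == val */
  }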
Joseph Myers [Thu, 17 May 2018 12:04:58 +0000 (14:04 +0200)]
Fix signed integer overflow in random_r (bug 17343).
Bug 17343 reports that stdlib/random_r.c has code with undefined
behavior because of signed integer overflow on int32_t. This patch
changes the code so that the possibly overflowing computations use
unsigned arithmetic instead.
Note that the bug report refers to "Most code" in that file. The
places changed in this patch are the only ones I found where I think
such overflow can occur.
Tested for x86_64 and x86.
[BZ #17343]
* stdlib/random_r.c (__random_r): Use unsigned arithmetic for
possibly overflowing computations.
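For the TYPE_0 path, the well-defined form looks like this (a sketch;
1103515245 and 12345 are the LCG constants random_r already uses):

  #include <stdint.h>

  /* Do the wrapping multiply-add in unsigned arithmetic, where overflow
     is well defined, then mask back into the non-negative int31 range.  */
  static int32_t
  next_type0 (int32_t state)
  {
    return (int32_t) (((uint32_t) state * 1103515245u + 12345u)
                      & 0x7fffffff);
  }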
This patch fixes the i386 sa_restorer field initialization for the
sigaction syscall on kernels with a vDSO. As described in the bug report,
i386 Linux
(and compat on x86_64) interprets SA_RESTORER clear with nonzero
sa_restorer as a request for stack switching if the SS segment is 'funny'.
This means that anything that tries to mix glibc's signal handling with
segmentation (for instance through the modify_ldt syscall) is randomly
broken depending on what value lands in sa_restorer.
The testcase added is based on Linux test tools/testing/selftests/x86/ldt_gdt.c,
more specifically in do_multicpu_tests function. The main changes are:
- C11 atomics instead of plain access.
- Remove x86_64 support which simplifies the syscall handling and fallbacks.
- Replicate only the test required to trigger the issue.
Checked on i686-linux-gnu.
[BZ #21269]
* sysdeps/unix/sysv/linux/i386/Makefile (tests): Add tst-bz21269.
* sysdeps/unix/sysv/linux/i386/sigaction.c (SET_SA_RESTORER): Clear
sa_restorer for vDSO case.
* sysdeps/unix/sysv/linux/i386/tst-bz21269.c: New file.
DJ Delorie [Fri, 2 Mar 2018 04:20:45 +0000 (23:20 -0500)]
[BZ #22342] Fix netgroup cache keys.
Unlike other nscd caches, the netgroup cache contains two types of
records - those for "iterate through a netgroup" (i.e. setnetgrent())
and those for "is this user in this netgroup" (i.e. innetgr()),
i.e. full and partial records. The timeout code assumes these records
have the same key for the group name, so that the collection of records
that is "this netgroup" can be expired as a unit.
However, the keys are not the same, as the in-netgroup key is generated
by nscd rather than being passed to it from elsewhere, and is generated
without the trailing NUL. All other keys have the trailing NUL, and as
noted in the linked BZ, debug statements confirm that two keys for the
same netgroup are added to the cache with two different lengths.
The result of this is that as records in the cache expire, the purge
code only cleans out one of the two types of entries, resulting in
stale, possibly incorrect, and possibly inconsistent cache data.
The patch simply includes the existing NUL in the computation for the
key length ('key' points to the char after the NUL, and 'group' to the
first char of the group, so 'key-group' includes the first char to the
NUL, inclusive).
[BZ #22342]
* nscd/netgroupcache.c (addinnetgrX): Include trailing NUL in
key value.
Jesse Hathaway [Thu, 17 May 2018 11:56:33 +0000 (13:56 +0200)]
getlogin_r: return early when linux sentinel value is set
When there is no login uid, Linux sets /proc/self/loginuid to the sentinel
value of (uid_t) -1. If this is set we can return early and avoid
needlessly looking up the sentinel value in any configured nss
databases.
Checked on aarch64-linux-gnu.
* sysdeps/unix/sysv/linux/getlogin_r.c (__getlogin_r_loginuid): Return
early when linux sentinel value is set.
Arjun Shankar [Thu, 18 Jan 2018 16:47:06 +0000 (16:47 +0000)]
Fix integer overflows in internal memalign and malloc [BZ #22343] [BZ #22774]
When posix_memalign is called with an alignment less than MALLOC_ALIGNMENT
and a requested size close to SIZE_MAX, it falls back to malloc code
(because the alignment of a block returned by malloc is sufficient to
satisfy the call). In this case, an integer overflow in _int_malloc leads
to posix_memalign incorrectly returning successfully.
Upon fixing this and writing a somewhat thorough regression test, it was
discovered that when posix_memalign is called with an alignment larger than
MALLOC_ALIGNMENT (so it uses _int_memalign instead) and a requested size
close to SIZE_MAX, a different integer overflow in _int_memalign leads to
posix_memalign incorrectly returning successfully.
Both integer overflows affect other memory allocation functions that use
_int_malloc (one affected malloc in x86) or _int_memalign as well.
This commit fixes both integer overflows. In addition to this, it adds a
regression test to guard against false successful allocations by the
following memory allocation functions when called with too-large allocation
sizes and, where relevant, various valid alignments:
malloc, realloc, calloc, reallocarray, memalign, posix_memalign,
aligned_alloc, valloc, and pvalloc.
powerpc: Fix syscalls during early process initialization [BZ #22685]
The tunables framework needs to execute syscalls early in process
initialization, before the TCB is available for consumption. This
behavior conflicts with powerpc{|64|64le}'s lock elision code, that
checks the TCB before trying to abort transactions immediately before
executing a syscall.
This patch adds a powerpc-specific implementation of __access_noerrno
that does not abort transactions before the executing syscall.
Tested on powerpc{|64|64le}.
[BZ #22685]
* sysdeps/powerpc/powerpc32/sysdep.h (ABORT_TRANSACTION_IMPL): Renamed
from ABORT_TRANSACTION.
(ABORT_TRANSACTION): Redirect to ABORT_TRANSACTION_IMPL.
* sysdeps/powerpc/powerpc64/sysdep.h (ABORT_TRANSACTION,
ABORT_TRANSACTION_IMPL): Likewise.
* sysdeps/unix/sysv/linux/powerpc/not-errno.h: New file. Reuse
Linux code, but remove the code that aborts transactions.
Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.vnet.ibm.com>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
(cherry picked from commit 4612268a0ad8e3409d8ce2314dd2dd8ee0af5269)
In C++ mode, __MATH_TG cannot be used for defining iseqsig, because
__MATH_TG relies on __builtin_types_compatible_p, which is a C-only
builtin. This is true when float128 is provided as an ABI-distinct type
from long double.
Moreover, the comparison macros from ISO C take two floating-point
arguments, which need not have the same type. Choosing what underlying
function to call requires evaluating the formats of the arguments, then
selecting which is wider. The macro __MATH_EVAL_FMT2 provides this
information, however, only the type of the macro expansion is relevant
(actually evaluating the expression would be incorrect).
This patch provides a C++ version of iseqsig, in which only the type of
__MATH_EVAL_FMT2 (__typeof or decltype) is used as a template parameter
for __iseqsig_type. This function calls the appropriate underlying
function.
Tested for powerpc64le and x86_64.
[BZ #22377]
* math/Makefile [C++] (tests): Add test for iseqsig.
* math/math.h [C++] (iseqsig): New implementation, which does
not rely on __MATH_TG/__builtin_types_compatible_p.
* math/test-math-iseqsig.cc: New file.
* sysdeps/powerpc/powerpc64le/Makefile
(CFLAGS-test-math-iseqsig.cc): New variable.
Szabolcs Nagy [Wed, 18 Oct 2017 16:26:23 +0000 (17:26 +0100)]
[AARCH64] Rewrite elf_machine_load_address using _DYNAMIC symbol
This patch rewrites aarch64 elf_machine_load_address to use special _DYNAMIC
symbol instead of _dl_start.
The static address of _DYNAMIC symbol is stored in the first GOT entry.
Here is the change which makes this solution work (part of binutils 2.24):
https://sourceware.org/ml/binutils/2013-06/msg00248.html
i386, x86_64 targets use the same method to do this as well.
The original implementation relies on a trick: the R_AARCH64_ABS32
relocation is resolved at link time and the static address fits in 32 bits.
However, in LP64 the address is normally defined to be 64-bit.
Here is the C version, which should be portable in all cases.
* sysdeps/aarch64/dl-machine.h (elf_machine_load_address): Use
_DYNAMIC symbol to calculate load address.
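The C version amounts to subtracting the link-time address of _DYNAMIC
(stored in GOT[0]) from its run-time address (a sketch; ElfW and
attribute_hidden are glibc's usual macros):

  static ElfW(Addr)
  elf_machine_load_address (void)
  {
    /* GOT[0] holds the link-time address of _DYNAMIC; its run-time
       address minus that value is the load offset.  */
    extern ElfW(Dyn) _DYNAMIC[] attribute_hidden;
    extern const ElfW(Addr) _GLOBAL_OFFSET_TABLE_[] attribute_hidden;
    return (ElfW(Addr)) &_DYNAMIC - _GLOBAL_OFFSET_TABLE_[0];
  }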
H.J. Lu [Fri, 19 Jan 2018 17:41:12 +0000 (09:41 -0800)]
x86-64: Properly align La_x86_64_retval to VEC_SIZE [BZ #22715]
_dl_runtime_profile calls _dl_call_pltexit, passing a pointer to
La_x86_64_retval which is allocated on stack. The lrv_vector0
field in La_x86_64_retval must be aligned to size of vector register.
When allocating stack space for La_x86_64_retval, we need to make sure
that the address of La_x86_64_retval + RV_VECTOR0_OFFSET is aligned to
VEC_SIZE. This patch checks the alignment of the lrv_vector0 field
and pads the stack space if needed.
Tested with x32 and x86-64 on SSE4, AVX and AVX512 machines.
I verified that without the guard accounting change in commit 630f4cc3aa019ede55976ea561f1a7af2f068639 (Fix stack guard size
accounting) and RTLD_NOW for libgcc_s introduced by commit f993b8754080ac7572b692870e926d8b493db16c (nptl: Open libgcc.so with
RTLD_NOW during pthread_cancel), the tst-minstack-cancel test fails on
an AVX-512F machine. tst-minstack-exit still passes, and either of
the mentioned commit by itself frees sufficient stack space to make
tst-minstack-cancel pass, too.
Szabolcs Nagy [Mon, 15 Jan 2018 15:06:31 +0000 (16:06 +0100)]
[BZ #22637] Fix stack guard size accounting
Previously if user requested S stack and G guard when creating a
thread, the total mapping was S and the actual available stack was
S - G - static_tls, which is not what the user requested.
This patch fixes the guard size accounting by pretending the user
requested S+G stack. This way all later logic works out except
when reporting the user requested stack size (pthread_getattr_np)
or when computing the minimal stack size (__pthread_get_minstack).
Normally this will increase thread stack allocations by one page.
TLS accounting is not affected, that will require a separate fix.
Florian Weimer [Mon, 8 Jan 2018 13:57:25 +0000 (14:57 +0100)]
nptl: Add test for callee-saved register restore in pthread_exit
GCC PR 83641 results in a miscompilation of libpthread, which
causes pthread_exit not to restore callee-saved registers before
running destructors for objects on the stack. This test detects
this situation:
info: unsigned int, direct pthread_exit call
tst-thread-exit-clobber.cc:80: numeric comparison failure
left: 4148288912 (0xf741dd90); from: value
right: 1600833940 (0x5f6ac994); from: magic_values.v2
info: double, direct pthread_exit call
info: unsigned int, indirect pthread_exit call
info: double, indirect pthread_exit call
error: 1 test failures
Dmitry V. Levin [Sun, 7 Jan 2018 02:03:41 +0000 (02:03 +0000)]
linux: make getcwd(3) fail if it cannot obtain an absolute path [BZ #22679]
Currently getcwd(3) can succeed without returning an absolute path
because the underlying getcwd syscall, starting with linux commit
v2.6.36-rc1~96^2~2, may succeed without returning an absolute path.
This is a conformance issue because "The getcwd() function shall
place an absolute pathname of the current working directory
in the array pointed to by buf, and return buf".
This is also a security issue because a non-absolute path returned
by getcwd(3) causes a buffer underflow in realpath(3).
Fix this by checking the path returned by getcwd syscall and falling
back to generic_getcwd if the path is not absolute, effectively making
getcwd(3) fail with ENOENT. The error code is chosen for consistency
with the case when the current directory is unlinked.
[BZ #22679]
CVE-2018-1000001
* sysdeps/unix/sysv/linux/getcwd.c (__getcwd): Fall back to
generic_getcwd if the path returned by getcwd syscall is not absolute.
* io/tst-getcwd-abspath.c: New test.
* io/Makefile (tests): Add tst-getcwd-abspath.
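The shape of the fix (a sketch; kernel_getcwd and generic_getcwd are
hypothetical stand-ins for the raw syscall and the generic fallback):

  #include <stddef.h>

  extern long kernel_getcwd (char *buf, size_t size);
  extern char *generic_getcwd (char *buf, size_t size);

  static char *
  getcwd_checked (char *buf, size_t size)
  {
    long n = kernel_getcwd (buf, size);
    if (n > 0 && buf[0] == '/')
      return buf;   /* the kernel produced an absolute path: use it */
    /* Non-absolute (or failed): fall back; this ends up failing with
       ENOENT, matching the unlinked-directory case.  */
    return generic_getcwd (buf, size);
  }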
ia64: Fix memchr for large input sizes (BZ #22603)
The current optimized ia64 memchr checks for the last address by adding
the expected size to the input pointer. However, it does not take care
of possible overflow.
It was triggered by 3038145ca23 where default rawmemchr now uses memchr
(p, c, (size_t)-1).
This patch fixes it by implementing a saturated addition where overflow
saturates the end pointer to UINTPTR_MAX.
Checked on ia64-linux-gnu where it fixes both stratcliff and
test-rawmemchr failures.
Adhemerval Zanella <adhemerval.zanella@linaro.org>
James Clarke <jrtc27@jrtc27.com>
[BZ #22603]
* sysdeps/ia64/memchr.S (__memchr): Avoid overflow in pointer
addition.
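In C, the saturating end-address computation is (a sketch of the idea the
assembly implements):

  #include <stddef.h>
  #include <stdint.h>

  /* Compute s + n, saturating to UINTPTR_MAX instead of wrapping, so
     the "last address" test in the search loop stays monotonic.  */
  static uintptr_t
  saturated_end (const void *s, size_t n)
  {
    uintptr_t last = (uintptr_t) s + n;
    if (last < (uintptr_t) s)  /* the addition wrapped around */
      last = UINTPTR_MAX;
    return last;
  }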
Aurelien Jarno [Fri, 29 Dec 2017 13:44:57 +0000 (14:44 +0100)]
tst-realloc: do not check for errno on success [BZ #22611]
POSIX explicitly says that applications should check errno only after
failure, so the errno value can be clobbered on success as long as it
is not set to zero.
Changelog:
[BZ #22611]
* malloc/tst-realloc.c (do_test): Remove the test checking that errno
is unchanged on success.
(cherry picked from commit f8aa69be445f65bb36cb3ae9291423600da7d6d2)
Aurelien Jarno [Sat, 30 Dec 2017 09:54:23 +0000 (10:54 +0100)]
elf: Check for empty tokens before dynamic string token expansion [BZ #22625]
The fillin_rpath function in elf/dl-load.c loops over each RPATH or
RUNPATH tokens and interprets empty tokens as the current directory
("./"). In practice the check for empty token is done *after* the
dynamic string token expansion. The expansion process can return an
empty string for the $ORIGIN token if __libc_enable_secure is set
or if the path of the binary can not be determined (/proc not mounted).
Fix that by moving the check for empty tokens before the dynamic string
token expansion. In addition, check for a NULL pointer or empty strings
returned by expand_dynamic_string_token.
The above changes highlighted a bug in decompose_rpath, an empty array
is represented by the first element being NULL at the fillin_rpath
level, but by using a -1 pointer in decompose_rpath and other functions.
Changelog:
[BZ #22625]
* elf/dl-load.c (fillin_rpath): Check for empty tokens before dynamic
string token expansion. Check for NULL pointer or empty string possibly
returned by expand_dynamic_string_token.
(decompose_rpath): Check for empty path after dynamic string
token expansion.
(cherry picked from commit 3e3c904daef69b8bf7d5cc07f793c9f07c3553ef)