Wilco Dijkstra [Tue, 17 Oct 2023 15:54:21 +0000 (16:54 +0100)]
AArch64: Add support for MOPS memcpy/memmove/memset
Add support for MOPS in cpu_features and INIT_ARCH. Add ifuncs using MOPS for
memcpy, memmove and memset (use .inst for now so it works with all binutils
versions without needing complex configure and conditional compilation).
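As a concrete illustration of the dispatch mechanism (not glibc's actual internal code), a minimal ifunc selector could look like the sketch below. The symbol names, the C fallback bodies, and the HWCAP2_MOPS value are assumptions; glibc's real MOPS variants are assembly that emits the CPYFP/CPYFM/CPYFE sequence via .inst.
```
/* Minimal ifunc dispatch sketch; names and bodies are illustrative.  */
#include <stddef.h>
#include <sys/auxv.h>

#ifndef HWCAP2_MOPS
# define HWCAP2_MOPS (1UL << 43)   /* assumed Linux FEAT_MOPS hwcap bit */
#endif

/* Stand-in bodies: glibc's MOPS variant really runs CPYFP/CPYFM/CPYFE,
   emitted with .inst so any binutils version can assemble it.  */
static void *
memcpy_mops (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;
  while (n--)
    *d++ = *s++;
  return dst;
}

static void *
memcpy_generic (void *dst, const void *src, size_t n)
{
  unsigned char *d = dst;
  const unsigned char *s = src;
  while (n--)
    *d++ = *s++;
  return dst;
}

/* Resolver: runs once at relocation time and picks an implementation.  */
static void *(*resolve_memcpy (void)) (void *, const void *, size_t)
{
  return (getauxval (AT_HWCAP2) & HWCAP2_MOPS)
         ? memcpy_mops : memcpy_generic;
}

void *my_memcpy (void *, const void *, size_t)
     __attribute__ ((ifunc ("resolve_memcpy")));
```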
Wilco Dijkstra [Wed, 1 Feb 2023 18:45:19 +0000 (18:45 +0000)]
AArch64: Improve SVE memcpy and memmove
Improve SVE memcpy by copying 2 vectors if the size is small enough.
This improves performance of random memcpy by ~9% on Neoverse V1, and
33-64 byte copies are ~16% faster.
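The two-vector idea translates roughly into the following ACLE intrinsics sketch; the real implementation is hand-written assembly, so the function name and structure here are only illustrative.
```
#include <arm_sve.h>
#include <stddef.h>

/* Copy n <= 2 * VL bytes with at most two predicated load/store pairs,
   avoiding any loop for small sizes.  */
static void
sve_copy_small (unsigned char *dst, const unsigned char *src, size_t n)
{
  uint64_t vl = svcntb ();                    /* bytes per SVE vector */
  svbool_t p0 = svwhilelt_b8_u64 (0, n);      /* lanes 0 .. n-1 active */
  svst1_u8 (p0, dst, svld1_u8 (p0, src));
  if (n > vl)                                 /* second vector needed? */
    {
      svbool_t p1 = svwhilelt_b8_u64 (vl, n); /* lanes vl .. n-1 active */
      svst1_u8 (p1, dst + vl, svld1_u8 (p1, src + vl));
    }
}
```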
Wilco Dijkstra [Wed, 11 Jan 2023 13:53:19 +0000 (13:53 +0000)]
AArch64: Improve strrchr
Use shrn for narrowing the mask which simplifies code and speeds up small
strings. Unroll the first search loop to improve performance on large
strings.
Wilco Dijkstra [Wed, 11 Jan 2023 13:53:05 +0000 (13:53 +0000)]
AArch64: Optimize strnlen
Optimize strnlen using the shrn instruction and improve the main loop.
Small strings are around 10% faster, large strings are 40% faster on
modern CPUs.
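Both commits above rely on the same shrn trick: compressing a 16-byte comparison mask into a 64-bit scalar at 4 bits per byte. A hedged intrinsics sketch of the idea (the real code is assembly):
```
#include <arm_neon.h>
#include <stdint.h>

/* Narrow a 16-byte match mask to 64 bits: shrn by 4 turns each
   halfword 0xFFFF/0x00FF/0xFF00 into the byte 0xFF/0x0F/0xF0, so
   every input byte contributes one nibble.  */
static inline uint64_t
nibble_mask (uint8x16_t eq)
{
  uint8x8_t narrowed = vshrn_n_u16 (vreinterpretq_u16_u8 (eq), 4);
  return vget_lane_u64 (vreinterpret_u64_u8 (narrowed), 0);
}

/* Usage sketch: index of the first NUL byte in a 16-byte chunk.  */
static inline int
first_zero_index (const uint8_t *p)
{
  uint8x16_t eq = vceqq_u8 (vld1q_u8 (p), vdupq_n_u8 (0));
  uint64_t m = nibble_mask (eq);
  if (m == 0)
    return -1;                      /* no NUL in this chunk */
  return __builtin_ctzll (m) >> 2;  /* 4 mask bits per byte */
}
```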
Wilco Dijkstra [Wed, 26 Oct 2022 13:16:50 +0000 (14:16 +0100)]
aarch64: Use memcpy_simd as the default memcpy
Since __memcpy_simd is the fastest memcpy on almost all cores, replace
the generic memcpy with it. If SVE is available, an SVE memcpy will be
used by default (including for Neoverse N2).
Florian Weimer [Fri, 15 Mar 2024 18:08:24 +0000 (19:08 +0100)]
linux: Use rseq area unconditionally in sched_getcpu (bug 31479)
Originally, nptl/descr.h included <sys/rseq.h>, but we removed that
in commit 2c6b4b272e6b4d07303af25709051c3e96288f2d ("nptl:
Unconditionally use a 32-byte rseq area"). After that, nothing
ensured that the RSEQ_SIG macro (whose definition <sys/rseq.h>
provides) was defined when sched_getcpu.c was compiled. This commit
always checks the rseq area for CPU number information before using
the other approaches.
This adds an unnecessary (but well-predictable) branch on
architectures which do not define RSEQ_SIG, but its cost is small
compared to the system call. Most architectures that have vDSO
acceleration for getcpu also have rseq support.
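In terms of the public ABI, the rseq-first dispatch looks roughly like the sketch below. __rseq_offset and __rseq_size are the real symbols exported via <sys/rseq.h> since glibc 2.35; the function itself and the fallback path are illustrative, and the sketch assumes a compiler providing __builtin_thread_pointer.
```
#include <sys/rseq.h>      /* __rseq_offset, __rseq_size (glibc >= 2.35) */
#include <sys/syscall.h>
#include <unistd.h>

/* Sketch: prefer the rseq area when it is registered, fall back to the
   getcpu system call otherwise.  */
static int
my_getcpu (void)
{
  if (__rseq_size > 0)     /* rseq area registered for this thread */
    {
      struct rseq *rs = (struct rseq *) ((char *) __builtin_thread_pointer ()
                                         + __rseq_offset);
      int cpu = (int) rs->cpu_id;
      if (cpu >= 0)        /* negative means not (yet) initialized */
        return cpu;
    }
  unsigned int cpu;
  if (syscall (SYS_getcpu, &cpu, NULL, NULL) == 0)
    return (int) cpu;
  return -1;
}
```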
Stefan Liebler [Tue, 25 Jul 2023 09:34:30 +0000 (11:34 +0200)]
Include sys/rseq.h in tst-rseq-disable.c
Starting with commit 2c6b4b272e6b4d07303af25709051c3e96288f2d
"nptl: Unconditionally use a 32-byte rseq area", the testcase
misc/tst-rseq-disable is UNSUPPORTED as RSEQ_SIG is not defined.
The mentioned commit removes the inclusion of sys/rseq.h in nptl/descr.h.
Thus just include sys/rseq.h in tst-rseq-disable.c, as is also done
in tst-rseq.c and tst-rseq-nptl.c. Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 637aac2ae3980de31a6baab236a9255fe853cc76)
If the kernel headers provide a larger struct rseq, we used that
size as the argument to the rseq system call. As a result,
rseq registration would fail on older kernels which only accept
size 32.
This restores the 2.33 semantics for arena_get2. They were changed by
11a02b035b46 to avoid arena_get2 calling malloc (back when __get_nprocs
was refactored to use a scratch_buffer - 903bc7dcc2acafc). __get_nprocs
has since been refactored again and now also avoids calling malloc.
Commit 11a02b035b46 did not take any performance implications into
consideration, which should have been discussed properly.
__get_nprocs_sched is still used as a fallback mechanism if procfs
and sysfs are not accessible.
arm: Remove wrong ldr from _dl_start_user (BZ 31339)
Commit 49d877a80b29d3002887b084eec6676d9f5fec18 (arm: Remove
_dl_skip_args usage) removed the _SKIP_ARGS literal, which was
previously loaded into r4 in the loader's _start. However, the cleanup
did not remove the subsequent 'ldr r4, [sl, r4]' in _dl_start_user, used
to check whether to skip the arguments after the loader's self-relocation.
In my testing, the kernel initially sets r4 to 0, which makes the
ldr instruction just read the _GLOBAL_OFFSET_TABLE_. However, since r4
is a callee-saved register, a different runtime might not zero
initialize it and thus trigger an invalid memory access.
Checked on arm-linux-gnu.
Reported-by: Adrian Ratiu <adrian.ratiu@collabora.com> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 1e25112dc0cb2515d27d8d178b1ecce778a9d37a)
Daniel Cederman [Tue, 16 Jan 2024 08:31:41 +0000 (09:31 +0100)]
sparc: Remove unwind information from signal return stubs [BZ #31244]
The functions were previously written in C, but were not compiled
with unwind information. The ENTRY/END macros include .cfi_startproc
and .cfi_endproc, which add unwind information. This caused the
tests cleanup-8 and cleanup-10 in the GCC testsuite to fail.
This patch adds a version of the ENTRY/END macros without the
CFI instructions that can be used instead.
sigaction registers a restorer address that is located two instructions
before the stub function. This patch adds two instructions of padding so
that the unwinder does not pick up the unwind information of the function
that the linker has placed right before the stub in memory. This fixes an
issue with pthread_cancel that caused tst-mutex8-static (and other tests)
to fail.
Signed-off-by: Daniel Cederman <cederman@gaisler.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 7bd06985c0a143cdcba2762bfe020e53514a53de)
The byte comparison in the small-count copy path should be unsigned (as
the memmove size argument is). It fixes string/tst-memmove-overflow on
sparcv9, where the input size triggers an invalid code path.
Checked on sparc64-linux-gnu and sparcv9-linux-gnu.
The ffsll function randomly regresses by ~20%, depending on how the code
is aligned in memory. The ffsll function code size is 17 bytes. Since the
default function alignment is 16 bytes, the function can be loaded at a
16-, 32-, 48- or 64-byte aligned address. When ffsll is loaded at a 16-,
32- or 64-byte aligned address, the entire code fits in a single 64-byte
cache line. When it is loaded at a 48-byte aligned address, the code
splits across two cache lines, hence the random regression.
Reducing the ffsll function size from 17 bytes to 12 bytes ensures that
it will always fit in a single 64-byte cache line.
This patch fixes the random performance regression of ffsll.
Arjun Shankar [Mon, 15 Jan 2024 16:44:45 +0000 (17:44 +0100)]
syslog: Fix integer overflow in __vsyslog_internal (CVE-2023-6780)
__vsyslog_internal calculated a buffer size by adding two integers, but
did not first check if the addition would overflow. This commit fixes
that.
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit ddf542da94caf97ff43cc2875c88749880b7259b)
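The underlying pattern, an overflow-checked size computation, can be sketched as follows (illustrative, not the literal glibc patch):
```
#include <stdlib.h>

/* Reject negative lengths (e.g. a failed *printf return) and any sum
   that would overflow int before sizing an allocation.  */
static char *
alloc_message (int header_len, int body_len)
{
  int total;
  if (header_len < 0 || body_len < 0
      || __builtin_add_overflow (header_len, body_len, &total))
    return NULL;
  return malloc ((size_t) total + 1);   /* +1 for the terminating NUL */
}
```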
Arjun Shankar [Mon, 15 Jan 2024 16:44:44 +0000 (17:44 +0100)]
syslog: Fix heap buffer overflow in __vsyslog_internal (CVE-2023-6779)
__vsyslog_internal used the return value of snprintf/vsnprintf to
calculate buffer sizes for memory allocation. If these functions (for
any reason) failed and returned -1, the resulting buffer would be too
small to hold output. This commit fixes that.
All snprintf/vsnprintf calls are checked for negative return values and
the function silently returns upon encountering them.
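The pattern being enforced looks roughly like this sketch (illustrative names; not the actual glibc code):
```
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Never feed a negative *snprintf return into a size computation;
   treat it as failure and bail out.  */
static char *
format_alloc (const char *fmt, ...)
{
  va_list ap;

  va_start (ap, fmt);
  int n = vsnprintf (NULL, 0, fmt, ap);   /* measure required size */
  va_end (ap);
  if (n < 0)
    return NULL;                          /* silently give up */

  char *buf = malloc ((size_t) n + 1);
  if (buf == NULL)
    return NULL;

  va_start (ap, fmt);
  vsnprintf (buf, (size_t) n + 1, fmt, ap);
  va_end (ap);
  return buf;
}
```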
Arjun Shankar [Mon, 15 Jan 2024 16:44:43 +0000 (17:44 +0100)]
syslog: Fix heap buffer overflow in __vsyslog_internal (CVE-2023-6246)
__vsyslog_internal did not handle a case where printing a SYSLOG_HEADER
containing a long program name failed to update the required buffer
size, leading to the allocation and overflow of a too-small buffer on
the heap. This commit fixes that. It also adds a new regression test
that uses glibc.malloc.check.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 6bd0e4efcc78f3c0115e5ea9739a1642807450da)
H.J. Lu [Thu, 21 Dec 2023 03:42:12 +0000 (19:42 -0800)]
x86-64: Fix the tcb field load for x32 [BZ #31185]
_dl_tlsdesc_undefweak and _dl_tlsdesc_dynamic access the thread pointer
via the tcb field in TCB:
_dl_tlsdesc_undefweak:
_CET_ENDBR
movq 8(%rax), %rax
subq %fs:0, %rax
ret
_dl_tlsdesc_dynamic:
...
subq %fs:0, %rax
movq -8(%rsp), %rdi
ret
Since the tcb field in TCB is a pointer, %fs:0 is a 32-bit location,
not 64-bit. It should use "sub %fs:0, %RAX_LP" instead. Since
_dl_tlsdesc_undefweak returns ptrdiff_t and _dl_make_tlsdesc_dynamic
returns void *, RAX_LP is appropriate here for x32 and x86-64. This
fixes BZ #31185.
H.J. Lu [Thu, 21 Dec 2023 00:31:43 +0000 (16:31 -0800)]
x86-64: Fix the dtv field load for x32 [BZ #31184]
On x32, I got
FAIL: elf/tst-tlsgap
$ gdb elf/tst-tlsgap
...
open tst-tlsgap-mod1.so
Thread 2 "tst-tlsgap" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 2268754]
_dl_tlsdesc_dynamic () at ../sysdeps/x86_64/dl-tlsdesc.S:108
108 movq (%rsi), %rax
(gdb) p/x $rsi
$4 = 0xf7dbf9005655fb18
(gdb)
This is caused by
_dl_tlsdesc_dynamic:
_CET_ENDBR
/* Preserve call-clobbered registers that we modify.
We need two scratch regs anyway. */
movq %rsi, -16(%rsp)
movq %fs:DTV_OFFSET, %rsi
Since the dtv field in TCB is a pointer, %fs:DTV_OFFSET is a 32-bit
location, not 64-bit. Load the dtv field to RSI_LP instead of rsi.
This fixes BZ #31184.
_dl_assign_tls_modid() assigns a slotinfo entry for a new module, but
does *not* do anything to the generation counter. The first time this
happens, the generation is zero and map_generation() returns the current
generation to be used during relocation processing. However, if
a slotinfo entry is later reused, it will already have a generation
assigned. If this generation has fallen behind the current global max
generation, then this causes an obsolete generation to be assigned
during relocation processing, as map_generation() returns this
generation if nonzero. _dl_add_to_slotinfo() eventually resets the
generation, but by then it is too late. This causes DTV updates to be
skipped, leading to NULL or broken TLS slot pointers and segfaults.
Fix this by resetting the generation to zero in _dl_assign_tls_modid(),
so it behaves the same as the first time a slot is assigned.
_dl_add_to_slotinfo() will still assign the correct static generation
later during module load, but relocation processing will no longer use
an obsolete generation.
Note that slotinfo entry (aka modid) reuse typically happens after a
dlclose and only TLS access via dynamic tlsdesc is affected. Because
tlsdesc is optimized to use the optional part of static TLS, dynamic
tlsdesc can be avoided by increasing the glibc.rtld.optional_static_tls
tunable to a large enough value, or by LD_PRELOAD-ing the affected
modules.
tunables: Terminate if end of input is reached (CVE-2023-4911)
The string parsing routine may end up writing beyond bounds of tunestr
if the input tunable string is malformed, of the form name=name=val.
This gets processed twice, first as name=name=val and next as name=val,
resulting in tunestr being name=name=val:name=val, thus overflowing
tunestr.
Terminate the parsing loop at the first instance itself so that tunestr
does not overflow.
This also fixes up tst-env-setuid-tunables to actually handle failures
correctly and adds new tests to validate the fix for this CVE.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 1056e5b4c3f2d90ed2b4a55f96add28da2f4c8fa)
getaddrinfo: Fix use after free in getcanonname (CVE-2023-4806)
When an NSS plugin only implements the _gethostbyname2_r and
_getcanonname_r callbacks, getaddrinfo could use memory that was freed
during tmpbuf resizing, through h_name in a previous query response.
The backing store for res->at->name when doing a query with
gethostbyname3_r or gethostbyname2_r is tmpbuf, which is reallocated in
gethosts during the query. For AF_INET6 lookup with AI_ALL |
AI_V4MAPPED, gethosts gets called twice, once for a v6 lookup and second
for a v4 lookup. In this case, if the first call reallocates tmpbuf
enough times to result in a malloc, th->h_name (which res->at->name
refers to) ends up in heap-allocated storage in tmpbuf.
Now if the second call to gethosts also causes the plugin callback to
return NSS_STATUS_TRYAGAIN, tmpbuf will get freed, resulting in a UAF
reference in res->at->name. This then gets dereferenced in the
getcanonname_r plugin call, resulting in the use after free.
Fix this by copying h_name over and freeing it at the end. This
resolves BZ #30843, which is assigned CVE-2023-4806.
Aurelien Jarno [Mon, 28 Aug 2023 21:30:37 +0000 (23:30 +0200)]
io: Fix record locking constants for powerpc64 with __USE_FILE_OFFSET64
Commit 5f828ff824e3b7cd1 ("io: Fix F_GETLK, F_SETLK, and F_SETLKW for
powerpc64") fixed an issue with the value of the lock constants on
powerpc64 when not using __USE_FILE_OFFSET64, but it ended up also
changing the values when using __USE_FILE_OFFSET64, causing an API change.
Fix that by also checking that define, restoring the values from before
commit 4d0fe291aed3a476a.
CVE-2023-4527: Stack read overflow with large TCP responses in no-aaaa mode
Without passing alt_dns_packet_buffer, __res_context_search can only
store 2048 bytes (what fits into dns_packet_buffer). However,
the function returns the total packet size, and the subsequent
DNS parsing code in _nss_dns_gethostbyname4_r reads beyond the end
of the stack-allocated buffer.
When backporting commit 6985865bc3ad5b23147ee73466583dd7fdf65892
("elf: Always call destructors in reverse constructor order
(bug 30785)"), we can move the l_init_called_next field to this
place, so that the internal GLIBC_PRIVATE ABI does not change.
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 53df2ce6885da3d0e89e87dca7b095622296014f)
elf: Always call destructors in reverse constructor order (bug 30785)
The current implementation of dlclose (and process exit) re-sorts the
link maps before calling ELF destructors. Destructor order is not the
reverse of the constructor order as a result: The second sort takes
relocation dependencies into account, and other differences can result
from ambiguous inputs, such as cycles. (The force_first handling in
_dl_sort_maps is not effective for dlclose.) After the changes in
this commit, there is still a required difference due to
dlopen/dlclose ordering by the application, but the previous
discrepancies went beyond that.
A new global (namespace-spanning) list of link maps,
_dl_init_called_list, is updated right before ELF constructors are
called from _dl_init.
In dl_close_worker, the maps variable, an on-stack variable length
array, is eliminated. (VLAs are problematic, and dlclose should not
call malloc because it cannot readily deal with malloc failure.)
Marking still-used objects uses the namespace list directly, with
next and next_idx replacing the done_index variable.
After marking, _dl_init_called_list is used to call the destructors
of now-unused maps in reverse destructor order. These destructors
can call dlopen. Previously, new objects did not have l_map_used set.
This had to change: There is no copy of the link map list anymore,
so processing would cover newly opened (and unmarked) mappings,
unloading them. Now, _dl_init (indirectly) sets l_map_used, too.
(dlclose is handled by the existing reentrancy guard.)
After _dl_init_called_list traversal, two more loops follow. The
processing order changes to the original link map order in the
namespace. Previously, dependency order was used. The difference
should not matter because relocation dependencies could already
reorder link maps in the old code.
The changes to _dl_fini remove the sorting step and replace it with
a traversal of _dl_init_called_list. The l_direct_opencount
decrement outside the loader lock is removed because it appears
incorrect: the counter manipulation could race with other dynamic
loader operations.
tst-audit23 needs adjustments for the changes in LA_ACT_DELETE
notifications. The new approach for checking la_activity should
make it clearer that la_activity calls come in pairs around namespace
updates.
The dependency sorting test cases need updates because the destructor
order is always the opposite order of constructor order, even with
relocation dependencies or cycles present.
There is a future cleanup opportunity to remove the now-constant
force_first and for_fini arguments from the _dl_sort_maps function.
Florian Weimer [Thu, 27 Oct 2022 09:36:44 +0000 (11:36 +0200)]
elf: Introduce _dl_call_fini
This consolidates the destructor invocations from _dl_fini and
dlclose. Remove the micro-optimization that avoids
calling _dl_call_fini if there are no destructors (as dlclose is quite
expensive anyway). The debug log message is now printed
unconditionally.
x86: Use `3/4*sizeof(per-thread-L3)` as low bound for NT threshold.
On some machines we end up with incomplete cache information. This can
make the new calculation of `sizeof(total-L3)/custom-divisor` end up
lower than intended (and lower than the prior value). So reintroduce
the old bound as a lower bound to avoid potentially regressing code
where we don't have complete information to make the decision. Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 8b9a0af8ca012217bf90d1dc0694f85b49ae09da)
x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
After splitting `shared` (cumulative cache size) from `shared_per_thread`
(cache size per socket), the `shared_per_thread` value *can* be slightly
off from the previous calculation.
Previously we added `core` even if `threads_l2` was invalid, and only
used `threads_l2` to divide `core` if it was present. The changed
version only included `core` if `threads_l2` was valid.
This change restores the old behavior if `threads_l2` is invalid by
adding the entire value of `core`. Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 47f747217811db35854ea06741be3685e8bbd44d)
Noah Goldstein [Wed, 7 Jun 2023 18:18:01 +0000 (13:18 -0500)]
x86: Increase `non_temporal_threshold` to roughly `sizeof_L3 / 4`
Currently `non_temporal_threshold` is set to roughly '3/4 * sizeof_L3 /
ncores_per_socket'. This patch updates that value to roughly
'sizeof_L3 / 4'.
The original value (specifically the division by `ncores_per_socket`)
was chosen to limit the amount of other threads' data a
`memcpy`/`memset` could evict.
Dividing by 'ncores_per_socket', however, leads to exceedingly low
non-temporal thresholds and to using non-temporal stores in
cases where REP MOVSB is multiple times faster.
Furthermore, non-temporal stores are written directly to main memory,
so using them at a size much smaller than L3 can place soon-to-be-accessed
data much further away than it otherwise could be. As well,
modern machines are able to detect streaming patterns (especially if
REP MOVSB is used) and provide LRU hints to the memory subsystem. This
in effect caps the total amount of eviction at 1/cache_associativity,
far below meaningfully thrashing the entire cache.
As best I can tell, the benchmarks that led to this small threshold
were done comparing non-temporal stores versus standard cacheable
stores. A better comparison (linked below) is to REP MOVSB, which,
on the measured systems, is nearly 2x faster than non-temporal stores
at the low end of the previous threshold, and within 10% for over
100MB copies (well past even the current threshold). In cases with a
low number of threads competing for bandwidth, REP MOVSB is ~2x faster
up to `sizeof_L3`.
The divisor of `4` is a somewhat arbitrary value. From benchmarks it
seems Skylake and Icelake both prefer a divisor of `2`, but older CPUs
such as Broadwell prefer something closer to `8`. This patch is meant
to be followed up by another one to make the divisor cpu-specific, but
in the meantime (and for easier backporting), this patch settles on
`4` as a middle-ground.
Benchmarks comparing non-temporal stores, REP MOVSB, and cacheable
stores were done using:
https://github.com/goldsteinn/memcpy-nt-benchmarks
Sheets results (also available in pdf on the github):
https://docs.google.com/spreadsheets/d/e/2PACX-1vS183r0rW_jRX6tG_E90m9qVuFiMbRIJvi5VAE8yYOvEOIEEc3aSNuEsrFbuXw5c3nGboxMmrupZD7K/pubhtml Reviewed-by: DJ Delorie <dj@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit af992e7abdc9049714da76cae1e5e18bc4838fb8)
elf: _dl_find_object may return 1 during early startup (bug 30515)
Success is reported with a 0 return value, and failure is -1.
Enhance the kitchen sink test elf/tst-audit28 to cover
_dl_find_object as well.
Fixes commit 5d28a8962dcb ("elf: Add _dl_find_object function")
and bug 30515.
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 1bcfe0f732066ae5336b252295591ebe7e51c301)
io: Fix F_GETLK, F_SETLK, and F_SETLKW for powerpc64
Unlike other 64 bit architectures, powerpc64 defines the
LFS POSIX lock constants with values similar to the 32 bit ABI, which
are meant to be used with the fcntl64 syscall. Since the powerpc64 kABI
does not have fcntl, the constants are adjusted with the
FCNTL_ADJUST_CMD macro.
Commit 4d0fe291aed3a476a changed the logic so that the generic LFS
constant values are equal to the default values, which is now wrong
for powerpc64.
Fix the values by explicitly defining the previous glibc constants
(powerpc64 does not need to use the 32-bit kABI values, but doing so
simplifies FCNTL_ADJUST_CMD, which should be kept for compatibility).
Checked on powerpc64-linux-gnu and powerpc-linux-gnu.
io: Fix record locking constants on 32 bit arch with 64 bit default time_t (BZ#30477)
For architectures with default 64 bit time_t support, the kernel
does not provide LFS and non-LFS values for F_GETLK, F_SETLK, and
F_SETLKW (the default values used for 64 bit architectures are used).
This might be considered an ABI break, but the currently exported
values are bogus anyway.
The POSIX lockf is not affected since it is aliased to lockf64,
which already uses the LFS values.
Checked on i686-linux-gnu; the new tests were also run on riscv32.
H.J. Lu [Thu, 27 Apr 2023 20:06:15 +0000 (13:06 -0700)]
__check_pf: Add a cancellation cleanup handler [BZ #20975]
There are reports for hang in __check_pf:
https://github.com/JoeDog/siege/issues/4
It is reproducible only under specific configurations:
1. Large number of cores (>= 64) and large number of threads (> 3X of
the number of cores) with long lived socket connection.
2. Low power (frequency) mode.
3. Power management is enabled.
While holding the lock, __check_pf calls make_request, which calls __sendto
and __recvmsg. Since __sendto and __recvmsg are cancellation points,
lock held by __check_pf won't be released and can cause deadlock when
thread cancellation happens in __sendto or __recvmsg. Add a cancellation
cleanup handler for __check_pf to unlock the lock when cancelled by
another thread. This fixes BZ #20975 and the siege hang issue.
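The pattern is the standard POSIX cleanup-handler idiom, sketched below (illustrative; not the actual __check_pf code):
```
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Runs if the thread is cancelled while the lock is held.  */
static void
cancel_unlock (void *arg)
{
  pthread_mutex_unlock ((pthread_mutex_t *) arg);
}

static void
guarded_request (void)
{
  pthread_mutex_lock (&lock);
  pthread_cleanup_push (cancel_unlock, &lock);
  /* ... sendto/recvmsg would run here; both are cancellation points,
     so cancellation can fire while the lock is held ...  */
  pthread_cleanup_pop (1);   /* 1: run the handler, releasing the lock */
}
```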
Simon Kissane [Fri, 10 Feb 2023 21:58:02 +0000 (08:58 +1100)]
gmon: fix memory corruption issues [BZ# 30101]
V2 of this patch fixes an issue in V1, where the state was changed to ON
rather than OFF at the end of _mcleanup. I hadn't noticed that
(counterintuitively) ON=0 and OFF=3, hence zeroing the buffer turned it
back on. So set the state to OFF after the memset.
1. Prevent double free, and reads from unallocated memory, when
_mcleanup is (incorrectly) called two or more times in a row,
without an intervening call to __monstartup; with this patch, the
second and subsequent calls effectively become no-ops instead.
While setting tos=NULL is the minimal fix, the safest action is to zero
the whole gmonparam buffer.
2. Prevent memory leak when __monstartup is (incorrectly) called two
or more times in a row, without an intervening call to _mcleanup;
with this patch, the second and subsequent calls effectively become
no-ops instead.
3. After _mcleanup, treat __moncontrol(1) as __moncontrol(0) instead.
With zeroing of gmonparam buffer in _mcleanup, this stops the
state incorrectly being changed to GMON_PROF_ON despite profiling
actually being off. If we'd just done the minimal fix to _mcleanup
of setting tos=NULL, there is risk of far worse memory corruption:
kcount would point to deallocated memory, and the __profil syscall
would make the kernel write profiling data into that memory,
which could have since been reallocated to something unrelated.
4. Ensure __moncontrol(0) still turns off profiling even in error
state. Otherwise, if mcount overflows and sets state to
GMON_PROF_ERROR, when _mcleanup calls __moncontrol(0), the __profil
syscall to disable profiling will not be invoked. _mcleanup will
free the buffer, but the kernel will still be writing profiling
data into it, potentially corrupting arbitrary memory.
Also adds a test case for (1). Issues (2)-(4) are not feasible to test.
When mcount overflows, no gmon.out file is generated, but no message is printed
to the user, leaving the user with no idea why, and thinking maybe there is
some bug - which is how BZ 27576 ended up being logged. Print a message to
stderr in this case so the user knows what is going on.
As a comment in sys/gmon.h acknowledges, the hardcoded MAXARCS value is too
small for some large applications, including the test case in that BZ. Rather
than increase it, add tunables to enable MINARCS and MAXARCS to be overridden
at runtime (glibc.gmon.minarcs and glibc.gmon.maxarcs). So if a user gets the
mcount overflow error, they can try increasing maxarcs (they might need to
increase minarcs too if the heuristic is wrong in their case.)
Note setting minarcs/maxarcs too large can cause monstartup to fail with an
out of memory error. If you set them large enough, it can cause an integer
overflow in calculating the buffer size. I haven't done anything to defend
against that - it would not generally be a security vulnerability, since these
tunables will be ignored in suid/sgid programs (due to the SXID_ERASE default),
and if you can set GLIBC_TUNABLES in the environment of a process, you can take
it over anyway (LD_PRELOAD, LD_LIBRARY_PATH, etc). I thought about modifying
the code of monstartup to defend against integer overflows, but doing so is
complicated, and I realise the existing code is susceptible to them even prior
to this change (e.g. try passing a pathologically large highpc argument to
monstartup), so I decided just to leave that possibility in-place.
Add a test case which demonstrates mcount overflow and the tunables.
`__monstartup()` allocates a buffer used to store all the data
accumulated by the monitor.
The size of this buffer depends on the size of the internal structures
used and the address range for which the monitor is activated, as well
as on the maximum density of call instructions and/or callable functions
that could be potentially on a segment of executable code.
In particular a hash table of arcs is placed at the end of this buffer.
The size of this hash table is calculated in bytes as
p->fromssize = p->textsize / HASHFRACTION;
but actually should be
p->fromssize = ROUNDUP(p->textsize / HASHFRACTION, sizeof(*p->froms));
This results in writing beyond the end of the allocated buffer when an
added arc corresponds to a call near the end of the monitored
address range, since `_mcount()` checks the incoming caller address
against the monitored range, but not the intermediate hash-like index
that it uses to write into the table.
It should be noted that when the results are output to `gmon.out`, the
table is read up to the last element calculated from the allocated size
in bytes, so the arcs stored outside the buffer boundary did not make it
into `gprof` for analysis. Thus this "feature" helped me find this bug
while working with https://sourceware.org/bugzilla/show_bug.cgi?id=29438
Just in case, I will explicitly note that the problem breaks the
`make test t=gmon/tst-gmon-dso` added for Bug 29438.
There, the arc of the `f3()` call disappears from the output, since in
the DSO case, the call to `f3` is located close to the end of the
monitored range.
Signed-off-by: Леонид Юрьев (Leonid Yuriev) <leo@yuriev.ru>
What seems like another related minor error is a typo in the calculation
of `kcountsize`, but since kcounts are smaller than froms, this actually
serves to align the p->froms data.
Co-authored-by: DJ Delorie <dj@redhat.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 801af9fafd4689337ebf27260aa115335a0cb2bc)
Adam Yi [Tue, 7 Mar 2023 12:30:02 +0000 (07:30 -0500)]
posix: Fix system blocks SIGCHLD erroneously [BZ #30163]
Fix a bug where SIGCHLD is erroneously blocked forever in the following
scenario:
1. Thread A calls system but hasn't returned yet
2. Thread B calls another system but returns
SIGCHLD would be blocked forever in thread B after its system() returns,
even after the system() in thread A returns.
Although POSIX does not require it, the glibc system implementation aims
to be thread and cancellation safe. This bug was introduced in commit
5fb7fc96350575c9adb1316833e48ca11553be49, when we moved reverting the
signal mask to happen when the last concurrently running system call
returns, even though the signal mask is per thread. This commit reverts
this logic and adds a test.
Signed-off-by: Adam Yi <ayi@janestreet.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 436a604b7dc741fc76b5a6704c6cd8bb178518e7)
x86_64: Fix asm constraints in feraiseexcept (bug 30305)
The divss instruction clobbers its first argument, and the constraints
need to reflect that. Fortunately, with GCC 12, generated code does
not actually change, so there is no externally visible bug.
Suggested-by: Jakub Jelinek <jakub@redhat.com> Reviewed-by: Noah Goldstein <goldstein.w.n@gmail.com>
(cherry picked from commit 5d1ccdda7b0c625751661d50977f3dfbc73f8eae)
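The issue can be illustrated with a sketch (x86-64 only; not the literal glibc code): divss overwrites its destination register, so that operand must be declared input/output with "+x" rather than input-only.
```
/* Raise FE_DIVBYZERO by computing 1.0f / 0.0f in an SSE register.
   "+x" tells the compiler %0 is both read and clobbered by divss.  */
static void
raise_divbyzero (void)
{
  float one = 1.0f, zero = 0.0f;
  __asm__ __volatile__ ("divss %1, %0" : "+x" (one) : "x" (zero));
}
```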
Before this change, sgetsgent_r did not set errno to ERANGE, but
sgetsgent only checks errno, not the return value from sgetsgent_r.
Consequently, sgetsgent did not detect any error, and reported
success to the caller, without initializing the struct sgrp object
whose address was returned.
This commit changes sgetsgent_r to set errno as well. This avoids
similar issues in applications which only check errno.
H.J. Lu [Tue, 3 Jan 2023 21:06:48 +0000 (13:06 -0800)]
x86: Check minimum/maximum of non_temporal_threshold [BZ #29953]
The minimum non_temporal_threshold is 0x4040. non_temporal_threshold may
be set to less than the minimum value when the shared cache size isn't
available (e.g., in an emulator) or by the tunable. Add checks for
minimum and maximum of non_temporal_threshold.
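The shape of the check is a simple clamp, sketched here (the 0x4040 minimum is from the commit; everything else is illustrative):
```
/* Clamp a derived or tunable-provided threshold into [minimum, maximum].  */
static unsigned long
clamp_threshold (unsigned long value, unsigned long minimum,
                 unsigned long maximum)
{
  if (value < minimum)
    return minimum;      /* e.g. no cache info in an emulator */
  if (value > maximum)
    return maximum;
  return value;
}
/* Usage: non_temporal_threshold = clamp_threshold (v, 0x4040, max);  */
```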
Vitaly Buka [Sat, 18 Feb 2023 20:53:41 +0000 (12:53 -0800)]
stdlib: Undo post review change to 16adc58e73f3 [BZ #27749]
Post-review removal of "goto restart" from
https://sourceware.org/pipermail/libc-alpha/2021-April/125470.html
introduced a bug where some atexit handlers were skipped.
Florian Weimer [Wed, 8 Feb 2023 17:11:04 +0000 (18:11 +0100)]
elf: Smoke-test ldconfig -p against system /etc/ld.so.cache
The test is sufficient to detect the ldconfig bug fixed in
commit 9fe6f6363886aae6b2b210cae3ed1f5921299083 ("elf: Fix 64 time_t
support for installed statically binaries").
elf: Fix GL(dl_phdr) and GL(dl_phnum) for static builds [BZ #29864]
The 73fc4e28b9464f0e refactor did not set up GL(dl_phdr) and
GL(dl_phnum) for static builds, relying on the __ehdr_start symbol,
which is always added by the static linker, to get the correct values.
This is problematic in some ways:
- The segment may see its in-memory size differ from its in-file
size (or the binary may have holes). Linux has fixed this by
providing precise values for both AT_PHDR and AT_PHNUM (commit 0da1d5002745c - "fs/binfmt_elf: Fix AT_PHDR for unusual ELF files")
- On some archs (alpha for instance) the hidden weak reference is not
correctly pulled in by the static linker and the __ehdr_start address
ends up being 0, which leaves both GL(dl_phdr) and GL(dl_phnum) with
invalid values (triggering a segfault later in libc.so while
accessing TLS variables).
The safer fix is to just restore the previous behavior and set up
GL(dl_phdr) and GL(dl_phnum) for static builds based on the kernel
auxv. The __ehdr_start fallback can also be simplified by not assuming
weak linkage (as for PIE).
The libc-static.c auxv init logic is moved to dl-support.c, since the
latter is built without SHARED, in which case the GLRO macro is defined
to access the variables directly.
The _dl_phdr is also assumed to be always non-NULL, since a NULL value
would not trigger TLS initialization (which is used in various libc
subsystems).
Checked on aarch64-linux-gnu, x86_64-linux-gnu, and i686-linux-gnu.
Define the __glibc_fortify and other macros only when __FORTIFY_LEVEL >
0. This has the effect of not defining these macros on older C90
compilers that do not have support for variable length argument lists.
Also trim off the trailing backslashes from the definition of
__glibc_fortify and __glibc_fortify_n macros.
Noah Goldstein [Wed, 14 Dec 2022 18:52:10 +0000 (10:52 -0800)]
x86: Prevent SIGSEGV in memcmp-sse2 when data is concurrently modified [BZ #29863]
In the case of INCORRECT usage of `memcmp(a, b, N)` where `a` and `b`
are concurrently modified as `memcmp` runs, there can be a SIGSEGV
in `L(ret_nonzero_vec_end_0)` because the sequential logic
assumes that `(rdx - 32 + rax)` is a positive 32-bit integer.
To be clear, this change does not mean the usage of `memcmp` is
supported. The program behaviour is undefined (UB) in the
presence of data races, and `memcmp` is incorrect when the values
of `a` and/or `b` are modified concurrently (data race). This UB
may manifest itself as a SIGSEGV. That being said, if we can
allow the idiomatic use cases, like those in yottadb with
opportunistic concurrency control (OCC), to execute without a
SIGSEGV, at no cost to regular use cases, then we can aim to
minimize harm to those existing users.
The fix replaces a 32-bit `addl %edx, %eax` with the 64-bit variant
`addq %rdx, %rax`. The 1-extra byte of code size from using the
64-bit instruction doesn't contribute to overall code size as the
next target is aligned and has multiple bytes of `nop` padding
before it. As well all the logic between the add and `ret` still
fits in the same fetch block, so the cost of this change is
basically zero.
The relevant sequential logic can be seen in the following
pseudo-code:
```
/*
* rsi = a
* rdi = b
* rdx = len - 32
*/
/* cmp a[0:15] and b[0:15]. Since length is known to be [17, 32]
in this case, this check is also assumed to cover a[0:(31 - len)]
and b[0:(31 - len)]. */
movups (%rsi), %xmm0
movups (%rdi), %xmm1
PCMPEQ %xmm0, %xmm1
pmovmskb %xmm1, %eax
subl %ecx, %eax
jnz L(END_NEQ)
L(END2):
/* Position first mismatch. */
bsfl %eax, %eax
/* The sequential version is able to assume this value is a
positive 32-bit value because the first check included bytes in
range a[0:(31 - len)] and b[0:(31 - len)] so `eax` must be
greater than `31 - len` so the minimum value of `edx` + `eax` is
`(len - 32) + (32 - len) >= 0`. In the concurrent case, however,
`a` or `b` could have been changed so a mismatch in `eax` less than
or equal to `(31 - len)` is possible (the new low bound is
`(16 - len)`). This can result in a negative 32-bit signed integer,
which when zero-extended to 64 bits is a random large value that is
out of bounds. */
addl %edx, %eax
/* Crash here because 32-bit negative number in `eax` zero
extends to out of bounds 64-bit offset. */
movzbl 16(%rdi, %rax), %ecx
movzbl 16(%rsi, %rax), %eax
```
This fix is quite simple: just make the `addl %edx, %eax` 64 bit (i.e.
`addq %rdx, %rax`). This prevents the 32-bit zero extension
and since `eax` is still a low bound of `16 - len` the `rdx + rax`
is bound by `(len - 32) - (16 - len) >= -16`. Since we have a
fixed offset of `16` in the memory access this must be in bounds.
The compiler might transform __stpcpy calls (which are routed to
__builtin_stpcpy as an optimization) into strcpy calls, and the x86_64
strcpy multiarch implementation does not build any working symbol
because ISA_SHOULD_BUILD is not evaluated for IS_IN(rtld).
Checked on x86_64-linux-gnu. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9dc4e29f630c6ef8299120b275e503321dc0c8c7)
netname.c: In function ‘user2netname’:
netname.c:51:28: error: ‘%s’ directive writing up to 255 bytes into a
region of size between 239 and 249 [-Werror=format-overflow=]
51 | sprintf (netname, "%s.%d@%s", OPSYS, uid, dfltdom);
| ^~ ~~~~~~~
netname.c:51:3: note: ‘sprintf’ output between 8 and 273 bytes into a
destination of size 256
51 | sprintf (netname, "%s.%d@%s", OPSYS, uid, dfltdom);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
cc1: all warnings being treated as errors
However, the code does test, prior to the sprintf call, that dfltdom
plus the required extra space for OPSYS, the uid, and the extra
characters will not overflow, and returns 0 instead.
Checked on x86_64-linux-gnu and i686-linux-gnu. Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 6128e82ebe973163d2dd614d31753c88c0c4d645)
Martin Jansa [Wed, 21 Sep 2022 13:51:03 +0000 (10:51 -0300)]
locale: prevent maybe-uninitialized errors with -Os [BZ #19444]
Fixes following error when building with -Os:
| In file included from strcoll_l.c:43:
| strcoll_l.c: In function '__strcoll_l':
| ../locale/weight.h:31:26: error: 'seq2.back_us' may be used
uninitialized in this function [-Werror=maybe-uninitialized]
| int_fast32_t i = table[*(*cpp)++];
| ^~~~~~~~~
| strcoll_l.c:304:18: note: 'seq2.back_us' was declared here
| coll_seq seq1, seq2;
| ^~~~
| In file included from strcoll_l.c:43:
| ../locale/weight.h:31:26: error: 'seq1.back_us' may be used
uninitialized in this function [-Werror=maybe-uninitialized]
| int_fast32_t i = table[*(*cpp)++];
| ^~~~~~~~~
| strcoll_l.c:304:12: note: 'seq1.back_us' was declared here
| coll_seq seq1, seq2;
| ^~~~ Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit c651f9da530320e9939e6cbad57b87695eeba41c)
The tzfile_mtime is already compared against the result of a 64-bit time_t stat call. Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit 4e21c2075193e406a92c0d1cb091a7c804fda4d9)
nscd: Use 64 bit time_t on libc nscd routines (BZ# 29402)
Although the nscd module is built with 64 bit time_t, the routines
linked directly into libc.so need to use the internal symbols. Reviewed-by: DJ Delorie <dj@redhat.com>
(cherry picked from commit fa4a19277842fd09a4815a986f70e0fe0903836f)
Apply asm redirections in syslog.h before first use [BZ #27087]
Similar to d0fa09a770, but for syslog.h when _FORTIFY_SOURCE > 0.
Fixes [BZ #27087] by applying long double-related asm redirections
before using functions in bits/syslog.h.
Florian Weimer [Tue, 8 Nov 2022 13:15:02 +0000 (14:15 +0100)]
Linux: Support __IPC_64 in sysvipc *ctl command arguments (bug 29771)
Old applications pass __IPC_64 as part of the command argument because
old glibc did not check for unknown commands, and passed through the
arguments directly to the kernel, without adding __IPC_64.
Applications need to continue doing that for old glibc compatibility,
so this commit enables this approach in current glibc.
For msgctl and shmctl, if no translation is required, make
direct system calls, as we did before the time64 changes. If
translation is required, mask __IPC_64 from the command argument.
For semctl, the union-in-vararg argument handling means that
translation is needed on all architectures.
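The masking itself is simple; a sketch (0x100 is the historical __IPC_64 value on most Linux architectures, assumed here for illustration):
```
#define OLD_IPC_64 0x100   /* assumed historical __IPC_64 value */

/* Accept commands from old binaries that OR in __IPC_64 themselves,
   stripping the flag before any command translation.  */
static int
normalize_ipc_cmd (int cmd)
{
  return cmd & ~OLD_IPC_64;
}
```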
Paul Eggert [Fri, 9 Sep 2022 01:08:32 +0000 (20:08 -0500)]
mktime: improve heuristic for ca-1986 Indiana DST
This patch syncs mktime.c from Gnulib, fixing a
problem reported by Mark Krenz <https://bugs.gnu.org/48085>,
and it should fix BZ#29035 too.
* time/mktime.c (__mktime_internal): Be more generous about
accepting arguments with the wrong value of tm_isdst, by falling
back to a one-hour DST difference if we find no nearby DST that is
unusual. This fixes a problem where "1986-04-28 00:00 EDT" was
rejected when TZ="America/Indianapolis" because the nearest DST
timestamp occurred in 1970, a temporal distance too great for the
old heuristic. This also narrows the search a bit, which
is a minor performance win.
Makerules: fix MAKEFLAGS assignment for upcoming make-4.4 [BZ# 29564]
make-4.4 will add long flags to MAKEFLAGS variable:
* WARNING: Backward-incompatibility!
Previously only simple (one-letter) options were added to the MAKEFLAGS
variable that was visible while parsing makefiles. Now, all options
are available in MAKEFLAGS.
This causes locale builds to fail when long options are used:
$ make --shuffle
...
make -C localedata install-locales
make: invalid shuffle mode: '1662724426r'
The change fixes it by passing each option via whitespace and dashes.
That way an option is appended to both the single-word form and the
whitespace-separated form.
While at it, the --silent mode detection in $(MAKEFLAGS) was fixed by
filtering out long options. Otherwise, options like the --shuffle flag
enable silent mode unintentionally. The $(silent-make) variable
consolidates the checks.
Resolves: BZ# 29564
CC: Paul Smith <psmith@gnu.org> CC: Siddhesh Poyarekar <siddhesh@gotplt.org> Signed-off-by: Sergei Trofimovich <slyich@gmail.com> Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit 2d7ed98add14f75041499ac189696c9bd3d757fe)
Aurelien Jarno [Tue, 1 Nov 2022 19:43:55 +0000 (20:43 +0100)]
linux: Fix fstatat on MIPSn64 (BZ #29730)
Commit 6e8a0aac2f883 ("time: Fix overflow itimer tests on 32-bit
systems") changed in_time_t_range to assume a 32-bit time_t. This broke
fstatat on MIPSn64 that was using it with a 64-bit time_t due to
difference between stat and stat64. This commit fix that by adding a
MIPSn64 specific version, which bypasses the EOVERFLOW tests.
linux: Fix generic struct_stat for 64 bit time (BZ# 29657)
The generic Linux struct_stat is missing the conditionals to use
bits/struct_stat_time64_helper.h in the __USE_TIME_BITS64 case for
architectures that use __TIMESIZE == 32 (currently csky and nios2).
Since newer ports should not support 32 bit time_t, the generic
implementation should be used as the default.
For arm, hppa, and sh a copy of the default struct_stat is added,
while for csky and nios2 a new one based on the generic version is
used, along with conditionals to use bits/struct_stat_time64_helper.h.
The default struct_stat is also replaced with the generic one.
Checked on aarch64-linux-gnu and arm-linux-gnueabihf.
Florian Weimer [Fri, 14 Oct 2022 09:02:25 +0000 (11:02 +0200)]
elf: Do not completely clear reused namespace in dlmopen (bug 29600)
The data in the _ns_debug member must be preserved, otherwise
_dl_debug_initialize enters an infinite loop. To be conservative,
only clear the libc_map member for now, to fix bug 29528.
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 2c42257314536b94cc8d52edede86e94e98c1436)
nss: Use shared prefix in IPv4 address in tst-reload1
Otherwise, sorting based on the longest-matching prefix in
getaddrinfo can reorder the addresses in ways the test does not
expect, depending on the IPv4 address of the host.
nss: Fix tst-nss-files-hosts-long on single-stack hosts (bug 24816)
getent implicitly passes AI_ADDRCONFIG to getaddrinfo by default.
Use --no-addrconfig to suppress that, so that both IPv4 and IPv6
lookups succeed even if the address family is not supported by the
host.
Ensure calculations happen with desired rounding mode in y1lf128
math/test-float128-y1 fails on x86_64 and ppc64el with gcc 12 and -O3,
because code inside a block guarded by SET_RESTORE_ROUNDL is being moved
after the rounding mode has been restored. Use math_force_eval to
prevent this (and insert some math_opt_barrier calls to prevent code
from being moved before the rounding mode is set).
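The guarded pattern looks roughly like the sketch below (illustrative; glibc uses its internal SET_RESTORE_ROUNDL and math_force_eval macros, and real code must also consider FENV_ACCESS and -frounding-math):
```
#include <fenv.h>

/* Force the addition to be evaluated while FE_UPWARD is in effect, so
   the compiler cannot sink it past the fesetround restore.  */
static double
add_upward (double x, double y)
{
  int save = fegetround ();
  fesetround (FE_UPWARD);
  double r = x + y;
  __asm__ __volatile__ ("" : "+m" (r));   /* evaluation barrier */
  fesetround (save);
  return r;
}
```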
nscd: Drop local address tuple variable [BZ #29607]
When a request needs to be resent (e.g. due to insufficient buffer
space), the references to subsequent tuples in the local variable are
stale and should not be used. This used to work by accident before, but
since 1d495912a it no longer does. Instead of trying to reset it, just
let gethostbyname4_r write into TUMPBUF6 for us, thus maintaining a
consistent state at all times. This is now consistent with what is done
in gaih_inet for getaddrinfo.
Resolves: BZ #29607 Reported-by: Holger Hoffstätte <holger@applied-asynchrony.com> Tested-by: Holger Hoffstätte <holger@applied-asynchrony.com> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 6e33e5c4b73cea7b8aa3de0947123db16200fb65)
Aurelien Jarno [Mon, 3 Oct 2022 21:46:11 +0000 (23:46 +0200)]
x86-64: Require BMI1/BMI2 for AVX2 strrchr and wcsrchr implementations
The AVX2 strrchr and wcsrchr implementation uses the 'blsmsk'
instruction which belongs to the BMI1 CPU feature and the 'shrx'
instruction, which belongs to the BMI2 CPU feature.