Joseph Myers [Tue, 12 Sep 2023 14:08:53 +0000 (14:08 +0000)]
Update syscall lists for Linux 6.5
Linux 6.5 has one new syscall, cachestat, and also enables the
cacheflush syscall for hppa. Update syscall-names.list and regenerate
the arch-syscall.h headers with build-many-glibcs.py update-syscalls.
H.J. Lu [Wed, 24 Jul 2024 21:05:13 +0000 (14:05 -0700)]
linux: Update the mremap C implementation [BZ #31968]
Update the mremap C implementation to support the optional argument for
MREMAP_DONTUNMAP added in Linux 5.7 since it may not always be correct
to implement a variadic function as a non-variadic function on all Linux
targets. Return MAP_FAILED and set errno to EINVAL for unknown flag bits.
This fixes BZ #31968.
Note: A test must be added when a new flag bit is introduced.
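As a rough sketch of the variadic shape involved (not the glibc source; it assumes headers that define MREMAP_DONTUNMAP and issues the syscall through the generic syscall() wrapper), the optional new_address argument is only fetched when a flag that requires it is present, and unknown flag bits are rejected up front:
#define _GNU_SOURCE
#include <errno.h>
#include <stdarg.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>
#define KNOWN_MREMAP_BITS \
  (MREMAP_MAYMOVE | MREMAP_FIXED | MREMAP_DONTUNMAP)
void *
mremap_sketch (void *old_addr, size_t old_len, size_t new_len,
               int flags, ...)
{
  if (flags & ~KNOWN_MREMAP_BITS)
    {
      errno = EINVAL;
      return MAP_FAILED;
    }
  void *new_addr = NULL;
  if (flags & (MREMAP_FIXED | MREMAP_DONTUNMAP))
    {
      /* Only consume the optional argument when a flag says it is there.  */
      va_list ap;
      va_start (ap, flags);
      new_addr = va_arg (ap, void *);
      va_end (ap);
    }
  /* syscall returns -1 and sets errno on failure; (void *) -1 == MAP_FAILED.  */
  return (void *) syscall (SYS_mremap, old_addr, old_len, new_len,
                           flags, new_addr);
}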
Signed-off-by: H.J. Lu <hjl.tools@gmail.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 6c40cb0e9f893d49dc7caee580a055de53562206)
resolv: Do not wait for non-existing second DNS response after error (bug 30081)
In single-request mode, there is no second response after an error
because the second query has not been sent yet. Waiting for it
introduces an unnecessary timeout.
Michael Jeanson [Wed, 3 Jul 2024 16:35:34 +0000 (12:35 -0400)]
nptl: fix potential merge of __rseq_* relro symbols
While working on a patch to add support for the extensible rseq ABI, we
came across an issue where a new 'const' variable would be merged with
the existing '__rseq_size' variable. We tracked this to the use of
'-fmerge-all-constants' which allows the compiler to merge identical
constant variables. This means that all 'const' variables in a compilation
unit that are of the same size and are initialized to the same value can
be merged.
In this specific case, on 32-bit systems 'unsigned int' and 'ptrdiff_t'
are both 4 bytes and initialized to 0, which should trigger the merge.
However, for reasons we haven't delved into, when the attribute 'section
(".data.rel.ro")' is added to the mix, only variables of the exact same
type are merged. As far as we know this behavior is not specified
anywhere and could change with a new compiler version, hence this patch.
Move the definitions of these variables into an assembler file and add
hidden writable aliases for internal use. This has the added bonus of
removing the asm workaround to set the values on rseq registration.
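A minimal sketch of the merging hazard, with stand-in variables rather than the real __rseq_size/__rseq_offset, assuming a 32-bit target built with -fmerge-all-constants:
#include <stddef.h>
#include <stdio.h>
/* Hypothetical stand-ins: same size (4 bytes on 32-bit) and same value.  */
const unsigned int fake_rseq_size = 0;
const ptrdiff_t fake_rseq_offset = 0;
int
main (void)
{
  /* With -fmerge-all-constants the two addresses may compare equal,
     which is exactly the kind of merge this commit works around.  */
  printf ("%p %p\n", (const void *) &fake_rseq_size,
          (const void *) &fake_rseq_offset);
  return 0;
}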
Tested on Debian 12 with GCC 12.2.
Signed-off-by: Michael Jeanson <mjeanson@efficios.com> Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 2b92982e2369d292560793bee8e730f695f48ff3)
Stefan Liebler [Thu, 11 Jul 2024 09:28:53 +0000 (11:28 +0200)]
s390x: Fix segfault in wcsncmp [BZ #31934]
The z13/vector-optimized wcsncmp implementation segfaults if n=1
and there is only one character (equal in both strings) before
the page end. It then loads and compares one character but fails
to check n again, and the following load faults.
This patch removes the extra load and compare of the first character
and just starts with the loop which uses vector-load-to-block-boundary.
This code path also checks n.
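A rough sketch of the boundary condition (hypothetical, not the glibc test): a single equal wide character placed right before an unmapped page, compared with n == 1:
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wchar.h>
int
main (void)
{
  long page = sysconf (_SC_PAGESIZE);
  /* Map two pages and make the second inaccessible, so any read past
     the first page faults.  */
  char *p = mmap (NULL, 2 * page, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p == MAP_FAILED)
    return 1;
  mprotect (p + page, page, PROT_NONE);
  /* One wide character at the very end of the accessible page; no
     terminator is needed when n == 1.  */
  wchar_t *s = (wchar_t *) (p + page - sizeof (wchar_t));
  *s = L'A';
  /* Equal strings, n == 1: must return 0 without touching the
     protected page.  */
  printf ("%d\n", wcsncmp (s, s, 1));
  return 0;
}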
With this patch both tests are passing:
- the simplified one mentioned in the bugzilla 31934
- the full one in Florian Weimer's patch:
"manual: Document a GNU extension for strncmp/wcsncmp"
(https://patchwork.sourceware.org/project/glibc/patch/874j9eml6y.fsf@oldenburg.str.redhat.com/):
On s390x-linux-gnu (z16), the new wcsncmp test fails due to bug 31934.
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 9b7651410375ec8848a1944992d663d514db4ba7)
misc: Add support for Linux uio.h RWF_NOAPPEND flag
Linux 6.9 adds a new flag that allows per-I/O operations to
disable append mode even if a file was opened with the flag O_APPEND.
This is done with the new RWF_NOAPPEND flag.
This caused two test failures as these tests expected the flag 0x00000020
to be unused. Adding the flag definition now fixes these tests on Linux
6.9 (v6.9-rc1).
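A hedged usage sketch, assuming a kernel of at least 6.9 and headers that define RWF_NOAPPEND: write at offset 0 even though the descriptor was opened with O_APPEND.
#define _GNU_SOURCE
#include <string.h>
#include <sys/uio.h>
ssize_t
write_header_at_start (int fd)
{
  char buf[] = "header";
  struct iovec iov = { .iov_base = buf, .iov_len = strlen (buf) };
#ifdef RWF_NOAPPEND
  return pwritev2 (fd, &iov, 1, 0, RWF_NOAPPEND);
#else
  return pwritev2 (fd, &iov, 1, 0, 0);   /* flag not available in headers */
#endif
}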
H.J. Lu [Fri, 10 May 2024 03:07:01 +0000 (20:07 -0700)]
Force DT_RPATH for --enable-hardcoded-path-in-tests
On Fedora 40/x86-64, the linker enables --enable-new-dtags by default, which
generates DT_RUNPATH instead of DT_RPATH. Unlike DT_RPATH, DT_RUNPATH
only applies to DT_NEEDED entries in the executable and doesn't apply
to DT_NEEDED entries in shared libraries which are loaded via DT_NEEDED
entries in the executable. Some glibc tests have libstdc++.so.6 in
DT_NEEDED, which has libm.so.6 in DT_NEEDED. When DT_RUNPATH is generated,
/lib64/libm.so.6 is loaded for such tests. If the newly built glibc is
older than glibc 2.36, these tests fail with
assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_2.36' not found (required by /lib64/libm.so.6)
assert/tst-assert-c++: /export/build/gnu/tools-build/glibc-gitlab-release/build-x86_64-linux/libc.so.6: version `GLIBC_ABI_DT_RELR' not found (required by /lib64/libm.so.6)
Pass -Wl,--disable-new-dtags to linker when building glibc tests with
--enable-hardcoded-path-in-tests. This fixes BZ #31719.
H.J. Lu [Thu, 25 Apr 2024 15:06:52 +0000 (08:06 -0700)]
elf: Also compile dl-misc.os with $(rtld-early-cflags)
Also compile dl-misc.os with $(rtld-early-cflags) to avoid
Program received signal SIGILL, Illegal instruction.
0x00007ffff7fd36ea in _dl_strtoul (nptr=nptr@entry=0x7fffffffe2c9 "2",
endptr=endptr@entry=0x7fffffffd728) at dl-misc.c:156
156 bool positive = true;
(gdb) bt
#0 0x00007ffff7fd36ea in _dl_strtoul (nptr=nptr@entry=0x7fffffffe2c9 "2",
endptr=endptr@entry=0x7fffffffd728) at dl-misc.c:156
#1 0x00007ffff7fdb1a9 in tunable_initialize (
cur=cur@entry=0x7ffff7ffbc00 <tunable_list+2176>,
strval=strval@entry=0x7fffffffe2c9 "2", len=len@entry=1)
at dl-tunables.c:131
#2 0x00007ffff7fdb3a2 in parse_tunables (valstring=<optimized out>)
at dl-tunables.c:258
#3 0x00007ffff7fdb5d9 in __GI___tunables_init (envp=0x7fffffffdd58)
at dl-tunables.c:288
#4 0x00007ffff7fe44c3 in _dl_sysdep_start (
start_argptr=start_argptr@entry=0x7fffffffdcb0,
dl_main=dl_main@entry=0x7ffff7fe5f80 <dl_main>)
at ../sysdeps/unix/sysv/linux/dl-sysdep.c:110
#5 0x00007ffff7fe5cae in _dl_start_final (arg=0x7fffffffdcb0) at rtld.c:494
#6 _dl_start (arg=0x7fffffffdcb0) at rtld.c:581
#7 0x00007ffff7fe4b38 in _start ()
(gdb)
when setting GLIBC_TUNABLES in glibc compiled with APX.
Reviewed-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 049b7684c912dd32b67b1b15b0f43bf07d5f512e)
CVE-2024-33601, CVE-2024-33602: nscd: netgroup: Use two buffers in addgetnetgrentX (bug 31680)
This avoids potential memory corruption when the underlying NSS
callback function does not use the buffer space to store all strings
(e.g., for constant strings).
Instead of custom buffer management, two scratch buffers are used.
This increases stack usage somewhat.
Scratch buffer allocation failure is handled by returning -1
(an invalid timeout value) instead of terminating the process.
This fixes bug 31679.
The addgetnetgrentX call in addinnetgrX may have failed to produce
a result, so the result variable in addinnetgrX can be NULL.
Use db->negtimeout as the fallback value if there is no result data;
the timeout is also overwritten below.
Also avoid sending a second not-found response. (The client
disconnects after receiving the first response, so the data stream did
not go out of sync even without this fix.) It is still beneficial to
add the negative response to the mapping, so that the client can get
it from there in the future, instead of going through the socket.
These structs describe file formats under /var/log, and should not
depend on the definition of _TIME_BITS. This is achieved by
defining __WORDSIZE_TIME64_COMPAT32 to 1 on 32-bit ports that
support 32-bit time_t values (where __time_t is 32 bits).
Charles Fol [Thu, 28 Mar 2024 15:25:38 +0000 (12:25 -0300)]
iconv: ISO-2022-CN-EXT: fix out-of-bound writes when writing escape sequence (CVE-2024-2961)
ISO-2022-CN-EXT uses escape sequences to indicate character set changes
(as specified by RFC 1922). While the SOdesignation has the expected
bounds checks, neither SS2designation nor SS3designation has them,
allowing a write overflow of 1, 2, or 3 bytes with fixed values:
'$+I', '$+J', '$+K', '$+L', '$+M', or '$*H'.
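As a hedged usage sketch of the code path involved (placeholder input text, not a reproducer), converting into ISO-2022-CN-EXT is what makes iconv emit the designation escape sequences:
#include <iconv.h>
#include <stdio.h>
#include <string.h>
int
main (void)
{
  /* Convert UTF-8 input into ISO-2022-CN-EXT; the output side is where
     the designation escape sequences are generated.  */
  iconv_t cd = iconv_open ("ISO-2022-CN-EXT", "UTF-8");
  if (cd == (iconv_t) -1)
    return 1;
  char in[] = "example input";   /* placeholder UTF-8 text */
  char out[64];
  char *inp = in, *outp = out;
  size_t inleft = strlen (in), outleft = sizeof out;
  if (iconv (cd, &inp, &inleft, &outp, &outleft) == (size_t) -1)
    perror ("iconv");
  else
    printf ("%zu output bytes\n", sizeof out - outleft);
  iconv_close (cd);
  return 0;
}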
Checked on aarch64-linux-gnu.
Co-authored-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit f9dc609e06b1136bb0408be9605ce7973a767ada)
powerpc: Fix ld.so address determination for PCREL mode (bug 31640)
This seems to have stopped working with some GCC 14 versions,
which clobber r2. With other compilers, the kernel-provided
r2 value is still available at this point.
Wilco Dijkstra [Thu, 21 Mar 2024 16:48:33 +0000 (16:48 +0000)]
AArch64: Check kernel version for SVE ifuncs
Old Linux kernels disable SVE after every system call. Calling the
SVE-optimized memcpy afterwards will then cause a trap to reenable SVE.
As a result, applications with a high use of syscalls may run slower with
the SVE memcpy. This is true for kernels from 4.15.0 up to, but not including,
6.2.0, except for 5.14.0, which was patched. Avoid this by checking the kernel
version and selecting the SVE ifunc on modern kernels.
Parse the kernel version reported by uname() into a 24-bit kernel.major.minor
value without calling any library functions. If uname() is not supported or
if the version format is not recognized, assume the kernel is modern.
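A rough sketch of that parsing idea (not the glibc code, which avoids library calls entirely): pack the uname() release string into a 24-bit value and treat anything unrecognized as modern.
#include <sys/utsname.h>
static unsigned int
kernel_version_24bit (void)
{
  struct utsname u;
  if (uname (&u) != 0)
    return 0xffffff;                  /* assume modern if uname fails */
  unsigned int v[3] = { 0, 0, 0 };
  const char *p = u.release;
  for (int i = 0; i < 3; i++)
    {
      if (*p < '0' || *p > '9')
        {
          if (i < 2)
            return 0xffffff;          /* unrecognized format: assume modern */
          break;                      /* patch level is optional */
        }
      while (*p >= '0' && *p <= '9')
        v[i] = v[i] * 10 + (unsigned int) (*p++ - '0');
      if (*p == '.')
        p++;
    }
  /* Callers compare this packed value against the affected range above.  */
  return ((v[0] & 0xff) << 16) | ((v[1] & 0xff) << 8) | (v[2] & 0xff);
}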
Tested-by: Florian Weimer <fweimer@redhat.com> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 2e94e2f5d2bf2de124c8ad7da85463355e54ccb2)
Szabolcs Nagy [Wed, 13 Mar 2024 14:34:14 +0000 (14:34 +0000)]
aarch64: fix check for SVE support in assembler
Due to GCC bug 110901, -mcpu can override the -march setting when compiling
asm code, and thus a compiler targeting a specific CPU can fail the
configure check even when binutils gas supports SVE.
The workaround is that an explicit .arch directive overrides both -mcpu
and -march, and since that's what the actual SVE memcpy uses, the
configure check should use it too, even if the GCC issue is fixed
independently.
Andreas Schwab [Thu, 23 Nov 2023 17:23:46 +0000 (18:23 +0100)]
aarch64: correct CFI in rawmemchr (bug 31113)
The .cfi_return_column directive changes the return column for the whole
FDE range. But the actual intent is to tell the unwinder that the value
in x30 (lr) now resides in x15 after the move, and that is expressed by
the .cfi_register directive.
Wilco Dijkstra [Thu, 26 Oct 2023 16:30:36 +0000 (17:30 +0100)]
AArch64: Remove Falkor memcpy
The latest implementations of memcpy are actually faster than the Falkor
implementations [1], so remove the falkor/phecda ifuncs for memcpy and
the now unused IS_FALKOR/IS_PHECDA defines.
Wilco Dijkstra [Thu, 26 Oct 2023 16:07:21 +0000 (17:07 +0100)]
AArch64: Add memset_zva64
Add a specialized memset for the common ZVA size of 64 to avoid the
overhead of reading the ZVA size. Since the code is identical to
__memset_falkor, remove the latter.
Wilco Dijkstra [Tue, 24 Oct 2023 12:51:07 +0000 (13:51 +0100)]
AArch64: Cleanup ifuncs
Cleanup ifuncs. Remove uses of libc_hidden_builtin_def, use ENTRY rather than
ENTRY_ALIGN, remove unnecessary defines and conditional compilation. Rename
strlen_mte to strlen_generic. Remove rtld-memset.
Wilco Dijkstra [Tue, 17 Oct 2023 15:54:21 +0000 (16:54 +0100)]
AArch64: Add support for MOPS memcpy/memmove/memset
Add support for MOPS in cpu_features and INIT_ARCH. Add ifuncs using MOPS for
memcpy, memmove and memset (use .inst for now so it works with all binutils
versions without needing complex configure and conditional compilation).
Florian Weimer [Fri, 15 Mar 2024 18:08:24 +0000 (19:08 +0100)]
linux: Use rseq area unconditionally in sched_getcpu (bug 31479)
Originally, nptl/descr.h included <sys/rseq.h>, but we removed that
in commit 2c6b4b272e6b4d07303af25709051c3e96288f2d ("nptl:
Unconditionally use a 32-byte rseq area"). After that, it was no
longer ensured that a definition of the RSEQ_SIG macro was available
when sched_getcpu.c was compiled. This commit always checks the rseq
area for CPU number information before falling back to the other
approaches.
This adds an unnecessary (but well-predictable) branch on
architectures which do not define RSEQ_SIG, but its cost is small
compared to the system call. Most architectures that have vDSO
acceleration for getcpu also have rseq support.
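A minimal sketch of the idea, assuming the __rseq_offset/__rseq_size symbols exported since glibc 2.35 and a compiler that provides __builtin_thread_pointer; this is not the actual sched_getcpu implementation:
#define _GNU_SOURCE
#include <sched.h>
#include <stdint.h>
#include <sys/rseq.h>   /* declares __rseq_offset, __rseq_size, struct rseq */
static int
getcpu_sketch (void)
{
  if (__rseq_size != 0)
    {
      volatile struct rseq *rs
        = (volatile struct rseq *) ((char *) __builtin_thread_pointer ()
                                    + __rseq_offset);
      uint32_t cpu = rs->cpu_id;
      if ((int32_t) cpu >= 0)          /* negative means not available */
        return (int) cpu;
    }
  return sched_getcpu ();              /* vDSO/getcpu syscall fallback */
}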
Stefan Liebler [Thu, 22 Feb 2024 14:03:27 +0000 (15:03 +0100)]
S390: Do not clobber r7 in clone [BZ #31402]
Starting with commit e57d8fc97b90127de4ed3e3a9cdf663667580935
"S390: Always use svc 0"
clone clobbers the call-saved register r7 in the error case
(function or stack is NULL).
This patch restores the saved registers also in the error case.
Furthermore, the existing test misc/tst-clone is extended to check
all error cases and to verify that clone does not clobber registers
in the error case.
This restores the 2.33 semantics for arena_get2. It was changed by commit
11a02b035b46 to avoid having arena_get2 call malloc (back when __get_nprocs
was refactored to use a scratch_buffer - 903bc7dcc2acafc). __get_nprocs
was refactored again since then and now also avoids calling malloc.
Commit 11a02b035b46 did not take into consideration any performance
implications, which should have been discussed properly.
__get_nprocs_sched is still used as a fallback mechanism if procfs
and sysfs are not accessible.
arm: Remove wrong ldr from _dl_start_user (BZ 31339)
The commit 49d877a80b29d3002887b084eec6676d9f5fec18 (arm: Remove
_dl_skip_args usage) removed the _SKIP_ARGS literal, which was
previously loaded into r4 in the loader's _start. However, the cleanup
did not remove the following 'ldr r4, [sl, r4]' in _dl_start_user, used
when checking whether to skip the arguments after the loader's
self-relocation.
In my testing, the kernel initially sets r4 to 0, which makes the
ldr instruction just read the _GLOBAL_OFFSET_TABLE_. However, since r4
is a callee-saved register, a different runtime might not zero-initialize
it and thus trigger an invalid memory access.
Checked on arm-linux-gnu.
Reported-by: Adrian Ratiu <adrian.ratiu@collabora.com> Reviewed-by: Szabolcs Nagy <szabolcs.nagy@arm.com>
(cherry picked from commit 1e25112dc0cb2515d27d8d178b1ecce778a9d37a)
Daniel Cederman [Tue, 16 Jan 2024 08:31:41 +0000 (09:31 +0100)]
sparc: Remove unwind information from signal return stubs [BZ #31244]
The functions were previously written in C, but were not compiled
with unwind information. The ENTRY/END macros include .cfi_startproc
and .cfi_endproc, which add unwind information. This caused the
tests cleanup-8 and cleanup-10 in the GCC testsuite to fail.
This patch adds a version of the ENTRY/END macros without the
CFI instructions that can be used instead.
sigaction registers a restorer address that is located two instructions
before the stub function. This patch adds a two-instruction padding to
prevent the unwinder from accessing the unwind information of the function
that the linker has placed right before the stub in memory. This fixes an
issue with pthread_cancel that caused tst-mutex8-static (and other tests)
to fail.
Signed-off-by: Daniel Cederman <cederman@gaisler.com> Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit 7bd06985c0a143cdcba2762bfe020e53514a53de)
The small-counts copy byte comparison should be unsigned (as is the
memmove size argument). This fixes string/tst-memmove-overflow on
sparcv9, where the input size triggers an invalid code path.
Checked on sparc64-linux-gnu and sparcv9-linux-gnu.
Building glibc on s390x with --disable-multi-arch fails if only
the C variant of strchrnul / memrchr is used. This is the case
if GCC uses -march < z13.
The build fails with:
../sysdeps/s390/strchrnul-c.c:28:49: error: ‘__strchrnul_c’ undeclared here (not in a function); did you mean ‘__strchrnul’?
28 | __hidden_ver1 (__strchrnul_c, __GI___strchrnul, __strchrnul_c);
With --disable-multi-arch, __strchrnul_c is not available, because
string/strchrnul.c is just included without defining STRCHRNUL, and thus
we also don't have to create the internal hidden symbol.
The ffsll function randomly regresses by ~20%, depending on how the code
gets aligned in memory. The ffsll function code size is 17 bytes. Since the
default function alignment is 16 bytes, the code can start at a 16-, 32-,
48-, or 64-byte-aligned address. When the ffsll function starts at a 16-,
32-, or 64-byte-aligned address, the entire code fits in a single 64-byte
cache line. When it starts at a 48-byte-aligned address, the code is split
across two cache lines, hence the random regression.
Reducing the ffsll function size from 17 bytes to 12 bytes ensures that it
will always fit in a single 64-byte cache line.
This patch fixes the random performance regression of the ffsll function.
Arjun Shankar [Mon, 15 Jan 2024 16:44:45 +0000 (17:44 +0100)]
syslog: Fix integer overflow in __vsyslog_internal (CVE-2023-6780)
__vsyslog_internal calculated a buffer size by adding two integers, but
did not first check if the addition would overflow. This commit fixes
that.
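The defensive pattern, as a generic sketch rather than the glibc code: reject the size computation instead of letting it wrap.
#include <stdlib.h>
static void *
alloc_for_two_parts (int prefix_len, int body_len)
{
  int total;
  /* __builtin_add_overflow (GCC/Clang) reports signed overflow.  */
  if (prefix_len < 0 || body_len < 0
      || __builtin_add_overflow (prefix_len, body_len, &total)
      || __builtin_add_overflow (total, 1, &total))   /* NUL terminator */
    return NULL;
  return malloc ((size_t) total);
}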
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit ddf542da94caf97ff43cc2875c88749880b7259b)
Arjun Shankar [Mon, 15 Jan 2024 16:44:44 +0000 (17:44 +0100)]
syslog: Fix heap buffer overflow in __vsyslog_internal (CVE-2023-6779)
__vsyslog_internal used the return value of snprintf/vsnprintf to
calculate buffer sizes for memory allocation. If these functions (for
any reason) failed and returned -1, the resulting buffer would be too
small to hold output. This commit fixes that.
All snprintf/vsnprintf calls are checked for negative return values and
the function silently returns upon encountering them.
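The same idea as a generic sketch (not the glibc code): a negative snprintf result is treated as failure rather than being used as a size.
#include <stdio.h>
#include <stdlib.h>
static char *
format_checked (const char *fmt, int value)
{
  int n = snprintf (NULL, 0, fmt, value);
  if (n < 0)
    return NULL;                 /* encoding/format error: bail out */
  char *buf = malloc ((size_t) n + 1);
  if (buf != NULL)
    snprintf (buf, (size_t) n + 1, fmt, value);
  return buf;
}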
Arjun Shankar [Mon, 15 Jan 2024 16:44:43 +0000 (17:44 +0100)]
syslog: Fix heap buffer overflow in __vsyslog_internal (CVE-2023-6246)
__vsyslog_internal did not handle a case where printing a SYSLOG_HEADER
containing a long program name failed to update the required buffer
size, leading to the allocation and overflow of a too-small buffer on
the heap. This commit fixes that. It also adds a new regression test
that uses glibc.malloc.check.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 6bd0e4efcc78f3c0115e5ea9739a1642807450da)
Florian Weimer [Tue, 2 Jan 2024 13:36:17 +0000 (14:36 +0100)]
libio: Check remaining buffer size in _IO_wdo_write (bug 31183)
The multibyte character needs to fit into the remaining buffer space,
not the already-written buffer space. Without the fix, we were never
moving the write pointer from the start of the buffer, always using
the single-character fallback buffer.
Fixes commit 04b76b5aa8b2d1d19066e42dd1 ("Don't error out writing
a multibyte character to an unbuffered stream (bug 17522)").
H.J. Lu [Thu, 21 Dec 2023 03:42:12 +0000 (19:42 -0800)]
x86-64: Fix the tcb field load for x32 [BZ #31185]
_dl_tlsdesc_undefweak and _dl_tlsdesc_dynamic access the thread pointer
via the tcb field in TCB:
_dl_tlsdesc_undefweak:
_CET_ENDBR
movq 8(%rax), %rax
subq %fs:0, %rax
ret
_dl_tlsdesc_dynamic:
...
subq %fs:0, %rax
movq -8(%rsp), %rdi
ret
Since the tcb field in TCB is a pointer, %fs:0 is a 32-bit location,
not 64-bit. It should use "sub %fs:0, %RAX_LP" instead. Since
_dl_tlsdesc_undefweak returns ptrdiff_t and _dl_make_tlsdesc_dynamic
returns void *, RAX_LP is appropriate here for x32 and x86-64. This
fixes BZ #31185.
H.J. Lu [Thu, 21 Dec 2023 00:31:43 +0000 (16:31 -0800)]
x86-64: Fix the dtv field load for x32 [BZ #31184]
On x32, I got
FAIL: elf/tst-tlsgap
$ gdb elf/tst-tlsgap
...
open tst-tlsgap-mod1.so
Thread 2 "tst-tlsgap" received signal SIGSEGV, Segmentation fault.
[Switching to LWP 2268754]
_dl_tlsdesc_dynamic () at ../sysdeps/x86_64/dl-tlsdesc.S:108
108 movq (%rsi), %rax
(gdb) p/x $rsi
$4 = 0xf7dbf9005655fb18
(gdb)
This is caused by
_dl_tlsdesc_dynamic:
_CET_ENDBR
/* Preserve call-clobbered registers that we modify.
We need two scratch regs anyway. */
movq %rsi, -16(%rsp)
movq %fs:DTV_OFFSET, %rsi
Since the dtv field in TCB is a pointer, %fs:DTV_OFFSET is a 32-bit
location, not 64-bit. Load the dtv field to RSI_LP instead of rsi.
This fixes BZ #31184.
Szabolcs Nagy [Wed, 29 Nov 2023 11:31:37 +0000 (11:31 +0000)]
elf: Add TLS modid reuse test for bug 29039
This is a minimal regression test for bug 29039, which only affects
targets with TLSDESC. A reproducer requires that:
1) Have modid gaps (closed modules) with old generation.
2) Update a DTV to a newer generation (needs a newer dlopen).
3) But do not update the closed gap entry in that DTV.
4) Reuse the modid gap for a new module (another dlopen).
5) Use dynamic TLSDESC in that new module with old generation (bug).
6) Access TLS via this TLSDESC and the now outdated DTV.
However step (3) in practice rarely happens: during DTV update the
entries for closed modids are initialized to "unallocated" and then
dynamic TLSDESC calls __tls_get_addr independently of its generation.
The only exception to this is DTV setup at thread creation (gaps are
initialized to NULL instead of unallocated) or DTV resize where the
gap entries are outside the previous DTV array (again NULL instead
of unallocated, and this requires loading > DTV_SURPLUS modules).
So the bug can only cause NULL (+ offset) dereference, not use after
free. And the easiest way to get (3) is via thread creation.
Note that step (5) requires that the newly loaded module has larger
TLS than the remaining optional static TLS. And for (6) there cannot
be other TLS access or dlopen in the thread that updates the DTV.
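A heavily simplified sketch of that reproducer shape, with hypothetical module names (the real test lives in the elf/ subdirectory):
#include <dlfcn.h>
#include <pthread.h>
static void *
thread_func (void *arg)
{
  /* The new thread's DTV gap entries were initialized to NULL.  */
  void *h = dlopen ("mod-large-tls.so", RTLD_NOW);   /* reuses the modid gap */
  int (*get_var) (void) = (int (*) (void)) dlsym (h, "get_tls_var");
  /* Dynamic TLSDESC access with an outdated DTV (steps 5 and 6).  */
  return (void *) (long) get_var ();
}
int
main (void)
{
  void *gap = dlopen ("mod1.so", RTLD_NOW);   /* step 1: create ... */
  dlclose (gap);                              /* ... a modid gap */
  dlopen ("mod2.so", RTLD_NOW);               /* step 2: newer generation */
  pthread_t t;                                /* step 3: fresh DTV via thread */
  pthread_create (&t, NULL, thread_func, NULL);
  pthread_join (t, NULL);
  return 0;
}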
_dl_assign_tls_modid() assigns a slotinfo entry for a new module, but
does *not* do anything to the generation counter. The first time this
happens, the generation is zero and map_generation() returns the current
generation to be used during relocation processing. However, if
a slotinfo entry is later reused, it will already have a generation
assigned. If this generation has fallen behind the current global max
generation, then this causes an obsolete generation to be assigned
during relocation processing, as map_generation() returns this
generation if nonzero. _dl_add_to_slotinfo() eventually resets the
generation, but by then it is too late. This causes DTV updates to be
skipped, leading to NULL or broken TLS slot pointers and segfaults.
Fix this by resetting the generation to zero in _dl_assign_tls_modid(),
so it behaves the same as the first time a slot is assigned.
_dl_add_to_slotinfo() will still assign the correct static generation
later during module load, but relocation processing will no longer use
an obsolete generation.
Note that slotinfo entry (aka modid) reuse typically happens after a
dlclose and only TLS access via dynamic tlsdesc is affected. Because
tlsdesc is optimized to use the optional part of static TLS, dynamic
tlsdesc can be avoided by increasing the glibc.rtld.optional_static_tls
tunable to a large enough value, or by LD_PRELOAD-ing the affected
modules.
sysdeps: sem_open: Clear O_CREAT when semaphore file is expected to exist [BZ #30789]
When invoking sem_open with O_CREAT as one of its flags, we'll end up
in the second part of sem_open's "if ((oflag & O_CREAT) == 0 || (oflag
& O_EXCL) == 0)", which means that we don't expect the semaphore file
to exist.
In that part, open_flags is initialized as "O_RDWR | O_CREAT | O_EXCL
| O_CLOEXEC" and there's an attempt to open(2) the file, which will
likely fail because it won't exist. After that first (expected)
failure, some cleanup is done and we go back to the label "try_again",
which lives in the first part of the aforementioned "if".
The problem is that, in that part of the code, we expect the semaphore
file to exist, and as such O_CREAT (this time the flag we pass to
open(2)) needs to be cleaned from open_flags, otherwise we'll see
another failure (this time unexpected) when trying to open the file,
which will lead the call to sem_open to fail as well.
This can cause very strange bugs, especially with OpenMPI, which makes
extensive use of semaphores.
Fix the bug by simplifying the logic when choosing open(2) flags and
making sure O_CREAT is not set when the semaphore file is expected to
exist.
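A condensed sketch of the corrected flag selection (not the glibc source): O_CREAT and O_EXCL are only added when creation is explicitly requested, and stay cleared on the path where the file must already exist.
#include <fcntl.h>
#include <sys/stat.h>
static int
open_semfile (const char *path, int oflag, mode_t mode)
{
  int open_flags = O_RDWR | O_CLOEXEC;
  if ((oflag & O_CREAT) != 0 && (oflag & O_EXCL) != 0)
    open_flags |= O_CREAT | O_EXCL;   /* creation explicitly requested */
  /* On the "file is expected to exist" path (try_again above), O_CREAT
     and O_EXCL stay cleared, so the existing file is simply opened.  */
  return open (path, open_flags, mode);
}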
A regression test for this issue would require complex and CPU-time-consuming
logic, since triggering the wrong code path is not straightforward due to the
racy condition. There is a somewhat reliable reproducer in the bug, but it
requires using OpenMPI.
This resolves BZ #30789.
See also: https://bugs.launchpad.net/ubuntu/+source/h5py/+bug/2031912
Signed-off-by: Sergio Durigan Junior <sergiodj@sergiodj.net>
Co-Authored-By: Simon Chopin <simon.chopin@canonical.com>
Co-Authored-By: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
Fixes: 533deafbdf189f5fbb280c28562dd43ace2f4b0f ("Use O_CLOEXEC in more places (BZ #15722)")
(cherry picked from commit f957f47df75b9fab995754011491edebc6feb147)
tunables: Terminate if end of input is reached (CVE-2023-4911)
The string parsing routine may end up writing beyond the bounds of tunestr
if the input tunable string is malformed, of the form name=name=val.
This gets processed twice, first as name=name=val and next as name=val,
resulting in tunestr being name=name=val:name=val, thus overflowing
tunestr.
Terminate the parsing loop at the first instance itself so that tunestr
does not overflow.
This also fixes up tst-env-setuid-tunables to actually handle failures
correctly and adds new tests to validate the fix for this CVE.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 1056e5b4c3f2d90ed2b4a55f96add28da2f4c8fa)
GLIBC_TUNABLES scrubbing happens earlier than envvar scrubbing, and some
tunables are required to propagate past the setxid boundary, like their
env_alias. Rely on tunable scrubbing to clean out GLIBC_TUNABLES as
before, restoring the behaviour of glibc 2.37 and earlier.
Signed-off-by: Siddhesh Poyarekar <siddhesh@sourceware.org> Reviewed-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 0d5f9ea97f1b39f2a855756078771673a68497e1)
Mahesh Bodapati [Fri, 11 Aug 2023 15:38:25 +0000 (10:38 -0500)]
string: Fix tester build with fortify enable with gcc < 12
When building with fortify enabled, GCC < 12 issues a warning that the
fortified strncat wrapper might overflow the destination buffer (the
failure is tied to -Werror).
Checked on ppc64 and x86_64.
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
(cherry picked from commit f1c7ed0859a45929136836341741c7cd70f428cb)
Andreas Schwab [Tue, 1 Aug 2023 15:01:37 +0000 (17:01 +0200)]
iconv: restore verbosity with unrecognized encoding names (bug 30694)
Commit 91927b7c76 ("Rewrite iconv option parsing [BZ #19519]") changed the
iconv program to call __gconv_open directly instead of the iconv_open
wrapper, but the former does not set errno. Update the caller to
interpret the return codes like iconv_open does.
getaddrinfo: Fix use after free in getcanonname (CVE-2023-4806)
When an NSS plugin only implements the _gethostbyname2_r and
_getcanonname_r callbacks, getaddrinfo could use memory that was freed
during tmpbuf resizing, through h_name in a previous query response.
The backing store for res->at->name when doing a query with
gethostbyname3_r or gethostbyname2_r is tmpbuf, which is reallocated in
gethosts during the query. For AF_INET6 lookup with AI_ALL |
AI_V4MAPPED, gethosts gets called twice, once for a v6 lookup and second
for a v4 lookup. In this case, if the first call reallocates tmpbuf
a sufficient number of times, resulting in a malloc, th->h_name (which
res->at->name refers to) ends up in heap-allocated storage in tmpbuf.
Now if the second call to gethosts also causes the plugin callback to
return NSS_STATUS_TRYAGAIN, tmpbuf will get freed, resulting in a UAF
reference in res->at->name. This then gets dereferenced in the
getcanonname_r plugin call, resulting in the use after free.
Fix this by copying h_name over and freeing it at the end. This
resolves BZ #30843, which is assigned CVE-2023-4806.
CVE-2023-4527: Stack read overflow with large TCP responses in no-aaaa mode
Without passing alt_dns_packet_buffer, __res_context_search can only
store 2048 bytes (what fits into dns_packet_buffer). However,
the function returns the total packet size, and the subsequent
DNS parsing code in _nss_dns_gethostbyname4_r reads beyond the end
of the stack-allocated buffer.
When backporting commit 6985865bc3ad5b23147ee73466583dd7fdf65892
("elf: Always call destructors in reverse constructor order
(bug 30785)"), we can move the l_init_called_next field to this
place, so that the internal GLIBC_PRIVATE ABI does not change.
Reviewed-by: Carlos O'Donell <carlos@redhat.com> Tested-by: Carlos O'Donell <carlos@redhat.com>
(cherry picked from commit 53df2ce6885da3d0e89e87dca7b095622296014f)
elf: Always call destructors in reverse constructor order (bug 30785)
The current implementation of dlclose (and process exit) re-sorts the
link maps before calling ELF destructors. Destructor order is not the
reverse of the constructor order as a result: The second sort takes
relocation dependencies into account, and other differences can result
from ambiguous inputs, such as cycles. (The force_first handling in
_dl_sort_maps is not effective for dlclose.) After the changes in
this commit, there is still a required difference due to
dlopen/dlclose ordering by the application, but the previous
discrepancies went beyond that.
A new global (namespace-spanning) list of link maps,
_dl_init_called_list, is updated right before ELF constructors are
called from _dl_init.
In dl_close_worker, the maps variable, an on-stack variable length
array, is eliminated. (VLAs are problematic, and dlclose should not
call malloc because it cannot readily deal with malloc failure.)
Marking still-used objects uses the namespace list directly, with
next and next_idx replacing the done_index variable.
After marking, _dl_init_called_list is used to call the destructors
of now-unused maps in reverse destructor order. These destructors
can call dlopen. Previously, new objects do not have l_map_used set.
This had to change: There is no copy of the link map list anymore,
so processing would cover newly opened (and unmarked) mappings,
unloading them. Now, _dl_init (indirectly) sets l_map_used, too.
(dlclose is handled by the existing reentrancy guard.)
After _dl_init_called_list traversal, two more loops follow. The
processing order changes to the original link map order in the
namespace. Previously, dependency order was used. The difference
should not matter because relocation dependencies could already
reorder link maps in the old code.
The changes to _dl_fini remove the sorting step and replace it with
a traversal of _dl_init_called_list. The l_direct_opencount
decrement outside the loader lock is removed because it appears
incorrect: the counter manipulation could race with other dynamic
loader operations.
tst-audit23 needs adjustments to the changes in LA_ACT_DELETE
notifications. The new approach for checking la_activity should
make it clearer that la_activity calls come in pairs around namespace
updates.
The dependency sorting test cases need updates because the destructor
order is always the opposite order of constructor order, even with
relocation dependencies or cycles present.
There is a future cleanup opportunity to remove the now-constant
force_first and for_fini arguments from the _dl_sort_maps function.
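A toy illustration with hypothetical module names: after this change, the destructor order observed at dlclose and at process exit is the exact reverse of the constructor order, even when relocation dependencies exist between the modules.
#include <dlfcn.h>
int
main (void)
{
  void *a = dlopen ("liba.so", RTLD_NOW | RTLD_GLOBAL);  /* ctor of liba runs */
  void *b = dlopen ("libb.so", RTLD_NOW);                /* ctor of libb runs */
  dlclose (b);   /* dtor of libb */
  dlclose (a);   /* dtor of liba */
  /* Objects still loaded at exit are likewise destroyed in reverse
     constructor order by _dl_fini.  */
  return 0;
}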
Adam Jackson [Fri, 8 Sep 2023 19:55:19 +0000 (15:55 -0400)]
libio: Fix oversized __io_vtables
IO_VTABLES_LEN is the size of the struct array in bytes, not the number
of __IO_jump_t's in the array. Drops just under 384kb from .rodata on
LP64 machines.
Fixes: 3020f72618e ("libio: Remove the usage of __libc_IO_vtables")
Signed-off-by: Adam Jackson <ajax@redhat.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
Tested-by: Florian Weimer <fweimer@redhat.com>
(cherry picked from commit 8cb69e054386f980f9ff4d93b157861d72b2019e)
Aurelien Jarno [Mon, 28 Aug 2023 21:30:37 +0000 (23:30 +0200)]
io: Fix record locking constants for powerpc64 with __USE_FILE_OFFSET64
Commit 5f828ff824e3b7cd1 ("io: Fix F_GETLK, F_SETLK, and F_SETLKW for
powerpc64") fixed an issue with the value of the lock constants on
powerpc64 when not using __USE_FILE_OFFSET64, but it ended up also
changing the value when using __USE_FILE_OFFSET64, causing an API change.
Fix that by also checking that define, restoring the values from before
commit 4d0fe291aed3a476a.
SYS_modify_ldt requires CONFIG_MODIFY_LDT_SYSCALL to be set in the kernel, which
some distributions may disable for hardening. Check if that's the case (unset)
and mark the test as UNSUPPORTED if so.
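A rough sketch of such a probe, assuming the test framework's FAIL_UNSUPPORTED macro from <support/check.h>; this is not the actual test source.
#include <errno.h>
#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <support/check.h>
static void
require_modify_ldt (void)
{
  /* A zero-sized read request is enough to probe whether the syscall
     exists at all.  */
  long r = syscall (SYS_modify_ldt, 0, NULL, 0);
  if (r == -1 && errno == ENOSYS)
    FAIL_UNSUPPORTED ("SYS_modify_ldt unavailable "
                      "(CONFIG_MODIFY_LDT_SYSCALL disabled)");
}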
Florian Weimer [Thu, 10 Aug 2023 17:36:56 +0000 (19:36 +0200)]
malloc: Remove bin scanning from memalign (bug 30723)
On the test workload (mpv --cache=yes with VP9 video decoding), the
bin scanning has a very poor success rate (less than 2%). The tcache
scanning has about 50% success rate, so keep that.
Update comments in malloc/tst-memalign-2 to indicate the purpose
of the tests. Even with the scanning removed, the additional
merging opportunities since commit 542b1105852568c3ebc712225ae78b
("malloc: Enable merging of remainders in memalign (bug 30723)")
are sufficient to pass the existing large bins test.
Remove leftover variables from _int_free from refactoring in the
same commit.
Florian Weimer [Fri, 11 Aug 2023 09:18:17 +0000 (11:18 +0200)]
malloc: Enable merging of remainders in memalign (bug 30723)
Previously, calling _int_free from _int_memalign could put remainders
into the tcache or into fastbins, where they are invisible to the
low-level allocator. This results in missed merge opportunities
because once these freed chunks become available to the low-level
allocator, further memalign allocations (even of the same size) are
likely obstructing merges.
Furthermore, during forwards merging in _int_memalign, do not
completely give up when the remainder is too small to serve as a
chunk on its own. We can still give it back if it can be merged
with the following unused chunk. This makes it more likely that
memalign calls in a loop achieve a compact memory layout,
independently of initial heap layout.
Drop some useless (unsigned long) casts along the way, and tweak
the style to more closely match GNU on changed lines.
Since i686 provides the fortified wrappers for memcpy, mempcpy,
memmove, and memset on the same string implementation, the static
build tries to optimize this by not tying the fortified wrappers
to the string routines (to avoid pulling in the fortified functions if
they are not required).
Checked on i686-linux-gnu building with different options:
default and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3}
with --disable-default-pie.
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit c73c96a4a1af1326df7f96eec58209e1e04066d8)
x86_64: Fix build with --disable-multiarch (BZ 30721)
With multiarch disabled, the default memmove implementation provides
the fortify routines for memcpy, mempcpy, and memmove. However, it
does not provide the internal hidden definitions used when building
with fortify enabled. The memset has a similar issue.
Checked on x86_64-linux-gnu building with different options:
default and --disable-multi-arch plus default, --disable-default-pie,
--enable-fortify-source={2,3}, and --enable-fortify-source={2,3}
with --disable-default-pie.
Tested-by: Andreas K. Huettel <dilfridge@gentoo.org>
Reviewed-by: Siddhesh Poyarekar <siddhesh@sourceware.org>
(cherry picked from commit 51cb52214fcd72849c640b12f5099ed3ac776181)
Florian Weimer [Fri, 11 Aug 2023 08:10:16 +0000 (10:10 +0200)]
nscd: Do not rebuild getaddrinfo (bug 30709)
The nscd daemon caches hosts data from NSS modules verbatim, without
filtering protocol families or sorting them (otherwise separate caches
would be needed for certain ai_flags combinations). The cache
implementation is completely separate from the getaddrinfo code. This
means that rebuilding getaddrinfo is not needed. The only function
actually used is __bump_nl_timestamp from check_pf.c, and this change
moves it into nscd/connections.c.
Tested on x86_64-linux-gnu with -fexceptions, built with
build-many-glibcs.py. I also backported this patch into a distribution
that still supports nscd and verified manually that caching still works.
x86: Fix for cache computation on AMD legacy CPUs.
Some legacy AMD CPUs and hypervisors have the CPUID leaf 0x8000_001D
set to zero, thus resulting in zeroed-out computed cache values.
This patch reintroduces the old way of cache computation as a
fail-safe option to handle these exceptions.
The 'level4_cache_size' value is fixed through handle_amd().
Florian Weimer [Tue, 1 Aug 2023 08:27:15 +0000 (10:27 +0200)]
stdlib: Improve tst-realpath compatibility with source fortification
On GCC before 11, IPA can make the fortified realpath aware that the
buffer size is not large enough (8 bytes instead of PATH_MAX bytes).
Fix this by using a buffer that is large enough.