This triggers a code generation bug in gcc 6.3.0
(at least with gcc version 6.3.0 20170516 (Debian 6.3.0-18)).
(this bug can be reproduced e.g. on gcc67, which is a Debian 9.2 system)
The bad code causes the debug trace to be indented by more than 500 characters,
giving e.g. for the first debug line produced by stage 2:
--12305:1:launcher launching /home/philippe/valgrind/git/smallthing/./.in_place/memcheck-amd64-linux
--12305:1:debuglog DebugLog system started by Stage 2 (main), level 1 logging requested
This commit bypasses the code generation bug, by moving the indent calculation
just before its usage.
Note: on amd64/x86, the code size of memcheck tool increases by about 12%
with -finline-functions.
In terms of perf impact (using perf/vg_perf), this gives mixed results:
memcheck is usually slightly faster, but some tests are slower (e.g. heap_pdb4)
callgrind is usually slower, but some tests are faster
helgrind: some tests are slowed down, some are faster (some significantly so, such as sarp and ffbench).
See below 2 runs of comparing trunk (with -finline-functions) with fixes
(which does not have -finline-functions).
Petar Jovanovic [Mon, 13 Nov 2017 12:12:25 +0000 (13:12 +0100)]
synchronize access to vgdb_interrupted_tid
Delay writing to the global vgdb_interrupted_tid until all the threads are
in interruptible state. This ensures that valgrind_wait() will see the
correct value.
This solves occasional failures of gdbserver_tests/hgtls test.
Improve efficiency of SP tracking in helgrind (and incidentally in exp-sgcheck)
Helgrind (and incidentally exp-sgcheck) does not need both
new mem stack and die mem stack tracking:
Helgrind only tracks new mem stack. exp-sgcheck only tracks die mem stack.
Currently, m_translate.c vg_SP_update_pass inserts helper calls
for new and die mem stack, even if the tool only needs new mem stack (helgrind)
or die mem stack (exp-sgcheck).
The optimisation consists in not inserting helper calls when the tool
does not need to see new (or die) mem stack.
Also, for helgrind, implement specialised new_mem_stack for known SP updates
with small values (like memcheck).
This reduces the size of the generated code for helgrind and exp-sgcheck.
(see below the diffs on perf/memrw). This does not impact the code generation
for tools that track both new and die mem stack (such as memcheck).
trunk:
exp-sgcheck: --28481-- transtab: new 2,256 (44,529 -> 581,402; ratio 13.1) [0 scs] avg tce size 257
helgrind: --28496-- transtab: new 2,299 (46,667 -> 416,575; ratio 8.9) [0 scs] avg tce size 181
memcheck: --28501-- transtab: new 2,220 (50,038 -> 777,139; ratio 15.5) [0 scs] avg tce size 350
with this patch:
exp-sgcheck: --28516-- transtab: new 2,254 (44,479 -> 567,196; ratio 12.8) [0 scs] avg tce size 251
helgrind: --28512-- transtab: new 2,297 (46,620 -> 399,799; ratio 8.6) [0 scs] avg tce size 174
memcheck: --28507-- transtab: new 2,219 (49,991 -> 776,028; ratio 15.5) [0 scs] avg tce size 349
In more detail, the changes consist of:
pub_core_tooliface.h:
* add 2 booleans any_new_mem_stack and any_die_mem_stack to the tdict struct
* renamed VG_(sanity_check_needs) to VG_(finish_needs_init), as it
now does more than sanity checks: it derives the 2 booleans above.
m_tooliface.c:
* change VG_(sanity_check_needs) to VG_(finish_needs_init)
m_main.c:
* update call to VG_(sanity_check_needs)
hg_main.c:
* add a few inlines for functions just calling another function
* define the functions evh__new_mem_stack_[4|8|12|16|32|112|128|144|160]
(using the macro DCL_evh__new_mem_stack).
* call the VG_(track_new_mem_stack_[4|8|12|16|32|112|128|144|160])
m_translate.c
* n_SP_updates_* stats are now maintained separately for the new and die
fast and known cases.
* need_to_handle_SP_assignment can now check only the 2 booleans
any_new_mem_stack and any_die_mem_stack
* DO_NEW macro: no longer inserts a helper call if the tool does
not track 'new' mem_stack.
When there is no new tracking, it does however still update the
SP aliases (and the n_SP_updates_new_fast stats).
* similar changes for DO_DIE macro.
* a bunch of white spaces changes
Note: it is easier to look at the changes in this file using
git diff -w
to ignore the white spaces changes (e.g. due to DO_NEW/DO_DIE indentation
changes).
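As a rough illustration (this is a simplified model, not the actual m_translate.c code, and the names are illustrative), the DO_NEW idea boils down to: emit the helper call only when the tool registered a new_mem_stack handler, while the stats and SP bookkeeping still happen in all cases:

```c
#include <stddef.h>

typedef void (*NewMemStackFn)(unsigned long sp, unsigned long len);

static unsigned long n_SP_updates_new_fast = 0;

/* Example handler a tool such as helgrind might register. */
static unsigned long last_new_sp = 0;
static void example_handler(unsigned long sp, unsigned long len)
{
   last_new_sp = sp;
   (void)len;
}

/* Model of the DO_NEW fast path: the helper call is made only when the
   tool registered a handler (cf. any_new_mem_stack, derived once in
   VG_(finish_needs_init)); the stats counter is updated either way. */
static void do_new(NewMemStackFn new_mem_stack,
                   unsigned long sp, unsigned long len)
{
   n_SP_updates_new_fast++;          /* stats kept in all cases   */
   if (new_mem_stack != NULL)        /* helper call skipped if the */
      new_mem_stack(sp, len);        /* tool does not track 'new'  */
}
```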
Move or conditionalise on CHECK_CEM some expensive asserts
* Some RCEC-related asserts checking there was no corruption are on hot paths
=> make these checks only when CHECK_CEM is set.
* Move an expensive assert to where the event is inserted, as it is useless
to check this when searching for an already existing event:
it is enough to ensure that an invalid szB cannot be inserted,
and so will not be found; the assert will then trigger in the insertion logic.
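The pattern can be sketched as follows (a minimal illustration; the macro name is hypothetical, not the one used in the tree): the expensive checks compile to nothing unless CHECK_CEM is defined, so hot paths pay nothing in normal builds.

```c
#include <assert.h>

/* Expensive consistency checks are compiled in only when CHECK_CEM is
   defined; otherwise they vanish entirely from the hot paths. */
#if defined(CHECK_CEM)
#  define CEM_ASSERT(expr) assert(expr)
#else
#  define CEM_ASSERT(expr) ((void)0)   /* compiled out in normal builds */
#endif
```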
Julian Seward [Tue, 7 Nov 2017 14:01:51 +0000 (15:01 +0100)]
s390_irgen_EX_SS: add initialisations so as to remove (false positive) warnings from gcc-7.x.
When compiling guest_s390_toIR.c for a 32-bit target (a configuration in which
it will never be used, but never mind), gcc-7.x notices that sizeof(ss.dec) is
larger than sizeof(ss.bytes), so the initialisation of ss.bytes leaves ss.dec.b2
and ss.dec.d2 uninitialised. This patch causes both variants to be initialised.
When built for a 64 bit target, the existing initialisation of ss.bytes covers
ss.dec completely, so there is no error.
Small optimisation in helgrind address description
Searching whether an addr is in a malloc-ed client block is expensive (linear
search). So, before scanning the list of malloc blocks, check that the address
is in a client heap segment: this is a fast operation (it has a small
cache, and on a cache miss does a dichotomic search) and avoids
scanning an often big list (for big applications).
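The fast pre-check can be sketched like this (a simplified stand-in for the real segment machinery, ignoring the cache layer): a dichotomic (binary) search over sorted, disjoint heap segments, done before any linear scan of malloc blocks is attempted.

```c
#include <stdbool.h>
#include <stddef.h>

typedef unsigned long Addr;
typedef struct { Addr start, end; } Segment;   /* half-open: [start, end) */

/* Binary search over sorted, disjoint client heap segments.  Only when
   this cheap test says yes is the expensive linear scan of malloc-ed
   blocks worth doing at all. */
static bool addr_in_heap_segment(Addr a, const Segment *segs, size_t n)
{
   size_t lo = 0, hi = n;
   while (lo < hi) {
      size_t mid = lo + (hi - lo) / 2;
      if (a < segs[mid].start)
         hi = mid;                   /* address is below this segment */
      else if (a >= segs[mid].end)
         lo = mid + 1;               /* address is above this segment */
      else
         return true;                /* start <= a < end */
   }
   return false;
}
```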
Petar Jovanovic [Fri, 3 Nov 2017 18:10:04 +0000 (19:10 +0100)]
mips: finetune tests that print FCSR
Bits 18 (NAN2008) and 19 (ABS2008) in FCSR are preset by hardware and can
differ between platforms. Hence, we should clear these bits before printing
FCSR value in order to have the same output on different platforms.
This fixes several failures (tests modified by this change) that occur on
MIPS P5600 board. The P5600 is a core that implements MIPS32 Release 5 arch.
Fix 376257 - helgrind history full speed up using a cached stack
This patch implements the flag --delta-stacktrace=yes/no.
Yes tells helgrind to compute the full history stack traces by
changing just the last frame when no call/return instruction was
executed.
This can speed up helgrind by up to 25%.
This flag is currently set to yes only on linux x86 and amd64, as some
platform dependent validation of the used heuristics is needed before
setting the default to yes on a platform. See function check_cached_rcec_ok
in libhb_core.c for more details about how to validate/check the behaviour
on a new platform.
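The core idea can be modelled in a few lines (an illustrative sketch only, not the libhb_core.c implementation): when no call or return executed since the cached capture, only the innermost frame can have changed, so the cached outer frames are reused instead of unwinding the whole stack again.

```c
#include <string.h>

#define MAX_FRAMES 8
typedef struct { unsigned long ips[MAX_FRAMES]; int n; } StackTrace;

/* Delta capture: copy the cached trace and patch only the leaf frame
   (the current IP); the outer frames are known to be unchanged because
   no call/return instruction was executed since the cached capture. */
static void capture_delta(StackTrace *out, const StackTrace *cached,
                          unsigned long cur_ip)
{
   memcpy(out, cached, sizeof *out);
   out->ips[0] = cur_ip;   /* only the innermost frame moved */
}
```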
Petar Jovanovic [Tue, 31 Oct 2017 16:30:14 +0000 (17:30 +0100)]
android: compute possible size of a symbol of unknown size
Under specific circumstances, setting 2048 as the size of a symbol of unknown
size causes that symbol to cross an unmapped region. This further triggers an
assertion in Valgrind.
Compute a possible size by computing the maximal size the symbol can have
within its section.
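A sketch of the clamping logic (illustrative names; the actual reader code differs): keep the old 2048-byte guess, but never let it extend past the end of the symbol's section.

```c
/* Clamp the guessed size of a symbol of unknown size so that it never
   extends past the end of its containing section. */
static unsigned long guessed_sym_size(unsigned long sym_addr,
                                      unsigned long sec_addr,
                                      unsigned long sec_size)
{
   unsigned long max_in_section = sec_addr + sec_size - sym_addr;
   unsigned long guess = 2048;   /* the previous fixed guess */
   return guess < max_in_section ? guess : max_in_section;
}
```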
none/tests/mips64/msa_arithmetic (symlink to mips32)
none/tests/mips64/msa_comparison (symlink to mips32)
none/tests/mips64/msa_data_transfer
none/tests/mips64/msa_fpu (symlink to mips32)
none/tests/mips64/msa_logical_and_shift (symlink to mips32)
none/tests/mips64/msa_shuffle (symlink to mips32)
Contributed by:
Tamara Vlahovic, Aleksandar Rikalo and Aleksandra Karadzic.
Mark Wielaard [Fri, 20 Oct 2017 12:55:06 +0000 (14:55 +0200)]
Bug #385912. Remove explicit NULL check from none/tests/rlimit_nofile.
glibc doesn't guarantee anything about setrlimit with a NULL limit argument.
It could just crash (if it needs to adjust the limit) or might silently
succeed (as newer glibc do). Just remove the extra check.
See also the "setrlimit change to prlimit change in behavior" thread:
https://sourceware.org/ml/libc-alpha/2017-10/threads.html#00830
Mark Wielaard [Tue, 17 Oct 2017 15:49:26 +0000 (17:49 +0200)]
Suppress _dl_runtime_resolve_avx_slow for memcheck conditional.
glibc ld.so has an optimization when resolving a symbol that checks
whether or not the upper 128 bits of the ymm registers are zero. If
so it uses "cheaper" instructions to save/restore them using the xmm
registers. If those upper 128 bits contain undefined values, memcheck
will issue a "Conditional jump or move depends on uninitialised value(s)"
warning whenever trying to resolve a symbol.
This triggers in our sh-mem-vecxxx test cases. Suppress the warning
by default.
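A default suppression for this has roughly the following shape (the exact entry name and frame list in the shipped suppression file may differ; this is only an illustration of the format):

```
{
   dl_runtime_resolve_avx_slow
   Memcheck:Cond
   fun:_dl_runtime_resolve_avx_slow
}
```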
Petar Jovanovic [Tue, 17 Oct 2017 13:40:47 +0000 (15:40 +0200)]
mips: simplify handling of Iop_Max32U
Use MIPSRH_Reg to get MIPSRH for Iop_Max32U. Without it, under specific
circumstances, the code may explode and exceed the Valgrind instruction
buffer due to multiple calls to iselWordExpr_R through iselWordExpr_RH.
Issue discovered while testing Valgrind on Android.
Ivo Raisr [Fri, 22 Sep 2017 20:50:11 +0000 (22:50 +0200)]
Refactor tracking of MOV coalescing.
Reg<->Reg MOV coalescing status is now a part of the HRegUsage.
This allows register allocation to query it twice without incurring
a performance penalty. This in turn allows better tracking of
vreg<->vreg MOV coalescing so that all vregs in the coalesce chain
get the effective |dead_before| of the last vreg.
A small performance improvement has been observed because this allows
coalescing even spilled vregs (previously only assigned ones).
Petar Jovanovic [Tue, 10 Oct 2017 16:06:14 +0000 (18:06 +0200)]
mips: add support for bi-arch build on mips64
If the native compiler can build Valgrind for mips32 o32 on a native mips64
system, it should do so.
This change adds a second architecture for MIPS, in a way similar to how it
has previously been done for amd64 and ppc64.
Carl Love [Thu, 5 Oct 2017 17:19:59 +0000 (12:19 -0500)]
PPC64, vpermr, xxperm, xxpermr fix Iop_Perm8x16 selector field
The implementations of vpermr, xxperm and xxpermr violate the Iop_Perm8x16
selector field definition by using a mask of 0x1F. Fix the code and the
corresponding comments to meet the definition of Iop_Perm8x16. Use
Iop_Dup8x16 to generate the vector value for subtraction.
Carl Love [Wed, 4 Oct 2017 15:54:07 +0000 (10:54 -0500)]
PPC64, revert the change to vperm instruction.
The patch was in my git tree with the patch I intended to apply.
I didn't realize the patch was in the tree. Git applied both
patches. Still investigating the vperm change to see if it is
really needed.
Carl Love [Tue, 3 Oct 2017 20:18:09 +0000 (15:18 -0500)]
PPC64, Re-implement the vpermr instruction using the Iop_Perm8x16.
The current implementation will generate a lot of Iops. The number
of generated Iops can lead to Valgrind running out of temporary space.
See bugzilla https://bugs.kde.org/show_bug.cgi?id=385208 as an example
of the issue. Using Iop_Perm8x16 reduces the number of Iops significantly.
Carl Love [Tue, 3 Oct 2017 20:09:22 +0000 (15:09 -0500)]
PPC64, Use the vperm code to implement the xxperm inst.
The current xxperm instruction implementation generates a huge
number of Iops to explicitly do the permutation. The code
was changed to use the Iop_Perm8x16 which is much more efficient
so temporary memory doesn't get exhausted.
Carl Love [Tue, 3 Oct 2017 17:08:09 +0000 (12:08 -0500)]
PPC64, Replace body of generate_store_FPRF with C helper function.
The function calculates the floating point condition code values
and stores them into the floating point condition code register.
The function is used by a number of instructions. The calculation
generates a lot of Iops as it must check the operands for NaN, SNaN,
zero, denorm, norm and infinity. The large number of Iops exhausts
temporary memory.
Rhys Kidd [Mon, 2 Oct 2017 00:57:04 +0000 (20:57 -0400)]
gitignore: Fix up false directory-level .gitignore settings
We never intended to ignore all changes from the top level down in /include
or /cachegrind. Instead, allow the filetype-specific .gitignore patterns to
match the contents of these two folders.
Also, don't ignore changes to include/valgrind.h as it exists in the repository
and should be tracked for any changes developers might make.
Changes tested by running a forced git clean and then a full rebuild. No stray
build artifacts were being tracked erroneously by git after these changes.
Ivo Raisr [Tue, 26 Sep 2017 07:33:27 +0000 (09:33 +0200)]
Reorder allocatable registers for s390x so that the callee saved are listed first.
Helper calls always trash all caller saved registers. By listing the callee saved
first, the VEX register allocator (both v2 and v3) is more likely to pick them
and does not need to spill as much before helper calls.
Follow up to 'On ppc, add generic_start_main.isra.0 as a below main function'
massif/tests/mmapunmap on ppc now indicates a below main function.
Note: this ppc53 specific file is needed because the valgrind stack unwinder
does not properly unwind in main.
At the mmap syscall, gdb backtrace gives:
Breakpoint 3, 0x00000000041dbae0 in .__GI_mmap () from /lib64/libc.so.6
(gdb) bt
while the valgrind stack trace gives:
Thread 1: status = VgTs_Runnable (lwpid 64207)
==64207== at 0x41DBAE0: mmap (in /usr/lib64/libc-2.17.so)
==64207== by 0x10000833: f (mmapunmap.c:9)
==64207== by 0x40E6BEB: (below main) (in /usr/lib64/libc-2.17.so)
client stack range: [0x1FFEFF0000 0x1FFF00FFFF] client SP: 0x1FFF00ECE0
valgrind stack top usage: 15632 of 1048576
On ppc, add generic_start_main.isra.0 as a below main function
We can have stacktraces such as:
==41840== by 0x10000927: a1 (deep.c:27)
==41840== by 0x1000096F: main (deep.c:35)
==41840== by 0x4126BEB: generic_start_main.isra.0 (in /usr/lib64/libc-2.17.so)
==41840== by 0x4126E13: __libc_start_main (in /usr/lib64/libc-2.17.so)
So, add generic_start_main.isra.0 as a below main function.
This fixes the test massif/tests/deep-D
massif: match --ignore-fn with the first IP that has a fnname
Currently, --ignore-fn is only matched against the top IP entry when it
has a fnname. With this change, we first search for the first IP that
has a fnname.
This allows, for example, ignoring the allocation for a stacktrace such as:
0x1 0x2 0x3 fn_to_ignore otherfn
This is then used in the massif C++ tests new-cpp and overloaded-new to ignore
the libstdc++ allocation, similar to:
==10754== 72,704 bytes in 1 blocks are still reachable in loss record 10 of 10
==10754== at 0x4C2BBCD: malloc (vg_replace_malloc.c:299)
==10754== by 0x4EC39BF: ??? (in /usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.22)
==10754== by 0x400F8A9: call_init.part.0 (dl-init.c:72)
==10754== by 0x400F9BA: call_init (dl-init.c:30)
==10754== by 0x400F9BA: _dl_init (dl-init.c:120)
==10754== by 0x4000C59: ??? (in /lib/x86_64-linux-gnu/ld-2.24.so)
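The new matching rule can be modelled like this (an illustrative sketch, not the massif source): scan from the innermost frame outward, skip frames without a function name, and compare --ignore-fn against the first named frame found.

```c
#include <stddef.h>
#include <string.h>

/* Return nonzero when the stacktrace should be ignored: --ignore-fn is
   compared against the first frame (innermost outward) that has a
   function name at all; unnamed frames (NULL) are skipped. */
static int matches_ignore_fn(const char *const *fnnames, int n_ips,
                             const char *ignore_fn)
{
   for (int i = 0; i < n_ips; i++) {
      if (fnnames[i] != NULL)
         return strcmp(fnnames[i], ignore_fn) == 0;  /* first named frame */
   }
   return 0;   /* no named frame at all */
}
```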
Follow up to 345307 - Warning about "still reachable" memory when using libstdc++ from gcc 5
The bug itself was solved in 3.12 by the addition of __gnu_cxx::__freeres
in libstdc++ and having valgrind call it before exit.
However, depending on the version of libstdc++, the test leak_cpp_interior
was giving different results.
This commit adds some filtering specific to the test, so as to no longer
depend on the absolute number of bytes leaked, and adds a suppression entry to
ignore the memory allocated by libstdc++.
This allows having only 2 .exp files, instead of 4 (or worse, if
we had to handle yet other .exp files depending on the libstdc++
version).
gdbserver_tests/hgtls is failing on a number of platforms
as it looks like static tls handling is now needed.
So, implement static tls for a few more platforms.
The platform-dependent formulas are somewhat wild guesses
obtained by trial and error.
Note that arm/arm64/ppc32 are not (yet) done
The below commit introduced a regression on ppc32
commit 00d4667295a821fef9eb198abcb0c942dffb6045
Author: Ivo Raisr <ivosh@ivosh.net>
Date: Wed Sep 6 08:10:36 2017 +0200
Reorder allocatable registers for AMD64, X86, and PPC so that the callee saved are listed first.
Helper calls always trash all caller saved registers. By listing the callee saved
first, the VEX register allocator (both v2 and v3) is more likely to pick them
and does not need to spill as much before helper calls.
Petar Jovanovic [Fri, 15 Sep 2017 16:29:29 +0000 (18:29 +0200)]
mips: finetune none/tests/(mips32|64)/test_math test
The compiler may optimize out the call to cbrt. Change the test to prevent that.
Otherwise, the test does not exercise the desired codepath for cbrt, and it
prints a precalculated value.
Ivo Raisr [Wed, 6 Sep 2017 06:10:36 +0000 (08:10 +0200)]
Reorder allocatable registers for AMD64, X86, and PPC so that the callee saved are listed first.
Helper calls always trash all caller saved registers. By listing the callee saved
first, the VEX register allocator (both v2 and v3) is more likely to pick them
and does not need to spill as much before helper calls.
The code handling array bounds is not ready to accept a reference
to something else (it is not very clear what this reference could be):
the code only expects the value of a bound directly.
So, it was using the reference (i.e. an offset somewhere in the debug
info) as the value of the bound.
This then gave huge bounds for some arrays, causing an overlap
in the stack variable handling code in exp-sgcheck.
Such references seem to be used sometimes for stack-allocated arrays
with variable size.
Fix (or rather bypass) the problem by not considering that we have
a usable array bound when a reference is given.
Ivo Raisr [Sat, 9 Sep 2017 20:08:21 +0000 (22:08 +0200)]
Reduce number of spill instructions generated by VEX register allocator v3.
Keeps track of whether the bound real register has recently been reloaded
from a virtual register and whether this real reg is still equal to that
spill slot. This avoids unnecessarily spilling that vreg later, when the
rreg needs to be reserved, usually as a caller-saved register for a helper call.
Julian Seward [Thu, 31 Aug 2017 09:11:25 +0000 (11:11 +0200)]
Improve the implementation of expensiveCmpEQorNE.
.. so that the code it creates runs in approximately half the time it did
before. This is in support of making the cost of expensive (exactly)
integer EQ/NE as low as possible, since the day will soon come when we'll
need to enable this by default.
Julian Seward [Wed, 30 Aug 2017 17:43:59 +0000 (19:43 +0200)]
amd64 back end: handle CmpNEZ64(And64(x,y)) better; ditto the 32 bit case.
Handle CmpNEZ64(And64(x,y)) by branching on flags, similarly to
CmpNEZ64(Or64(x,y)). Ditto the 32 bit equivalents. Also, remove expensive
DEFINE_PATTERN/DECLARE_PATTERN uses there and hardwire the matching logic.
n-i-bz. This is in support of reducing the cost of expensiveCmpEQorNE
in memcheck.
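At the IR level the selected pattern is just a boolean test (a model of the semantics, not the backend code): CmpNEZ64(And64(x,y)) is ((x & y) != 0), which on amd64 the backend can now lower to a single TEST instruction and a branch on flags, instead of materialising the AND result and comparing it against zero.

```c
#include <stdint.h>

/* C model of CmpNEZ64(And64(x,y)): nonzero iff x and y share a set bit.
   On amd64 this lowers naturally to test/jnz, branching directly on the
   flags set by the AND, with no separate compare instruction. */
static int cmpnez64_and64(uint64_t x, uint64_t y)
{
   return (x & y) != 0;
}
```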
Ivo Raisr [Fri, 25 Aug 2017 22:19:05 +0000 (00:19 +0200)]
VEX register allocator version 3.
Implements a new version of VEX register allocator which
keeps the main state per virtual register, as opposed
to per real register in v2. This results in a simpler and
cleaner design and much simpler implementation.
It has been observed that the new allocator executes 20-30%
faster than the previous one but can produce slightly worse
spilling decisions. An overall performance improvement of a
few percent has been observed when running the Valgrind
performance regression test suite.
The new register allocator (v3) is now the default one.
The old register allocator (v2) is still kept around and can be
activated with command line option '--vex-regalloc-version=2'.
Petar Jovanovic [Tue, 22 Aug 2017 13:53:15 +0000 (15:53 +0200)]
mips: reimplement handling of div, divu and ddivu
The previous implementation misused some opcodes, and a side effect was
dead code emission.
To reimplement handling of these instructions, three new IoPs have been
introduced:
Iop_DivModU64to64, // :: I64,I64 -> I128
// of which lo half is div and hi half is mod
Iop_DivModS32to32, // :: I32,I32 -> I64
// of which lo half is div and hi half is mod
Iop_DivModU32to32, // :: I32,I32 -> I64
// of which lo half is div and hi half is mod
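The packing convention of these IOps can be modelled in C as follows (a semantic sketch for the I32,I32 -> I64 case; the real lowering of course happens in the instruction selector):

```c
#include <stdint.h>

/* C model of Iop_DivModS32to32 semantics: the low half of the I64
   result holds the quotient, the high half holds the remainder. */
static uint64_t divmod_s32to32(int32_t a, int32_t b)
{
   int32_t q = a / b;   /* lo half: div */
   int32_t r = a % b;   /* hi half: mod */
   return ((uint64_t)(uint32_t)r << 32) | (uint32_t)q;
}
```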
Ivo Raisr [Fri, 18 Aug 2017 14:53:57 +0000 (16:53 +0200)]
Recognize signal 151 (SIGLIBRT) sent by gdb.
It has been observed that gdb on Solaris sends this signal to
child processes. Unfortunately the array "pass_signals" was too small
to accommodate this signal, and subsequently VG_(clo_vex_control).iropt_verbosity
was overwritten.
This has been fixed now.
Petar Jovanovic [Thu, 17 Aug 2017 18:08:17 +0000 (20:08 +0200)]
mips32: finetune vfp test to avoid compiler warnings
This patch removes two compiler warnings from the test:
vfp.c: In function 'handler':
vfp.c:260:4: warning: implicit declaration of function 'exit'
[-Wimplicit-function-declaration]
exit(0);
^
vfp.c:260:4: warning: incompatible implicit declaration of built-in
function 'exit'
vfp.c: At top level:
vfp.c:258:13: warning: 'handler' defined but not used [-Wunused-function]
static void handler(int sig)
^