324181 mmap does not handle MAP_32BIT (handle it now, rather than fail it)
Bug 324181 was previously closed with a solution that always makes
MAP_32BIT fail. This is technically correct according to the documentation,
but is not very usable.
This patch ensures that a MAP_32BIT mmap is successful, as long as
aspacemgr gives a range in the first 2GB
(so, compared to a native run, MAP_32BIT will fail much sooner,
as aspacemgr does not reserve the address space below 2GB on 64-bit platforms).
Far from perfect, but better than nothing.
Added a regression test that performs successful 32-bit mmaps until
the 2GB limit is reached.
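A minimal sketch of what such a test can look like (illustrative, not the
actual regression test source): mmap with MAP_32BIT in a loop, checking that
every returned range lies below 2GB, until mmap fails.

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    const size_t sz = 1024 * 1024;            /* 1MB per mapping */
    unsigned count = 0;
    for (;;) {
        void *p = mmap(NULL, sz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_32BIT, -1, 0);
        if (p == MAP_FAILED)
            break;                            /* first-2GB space exhausted */
        if ((unsigned long)p + sz > (1UL << 31))
            printf("mapping crosses the 2GB limit!\n");  /* must not happen */
        count++;
    }
    printf("%u MAP_32BIT mappings before failure\n", count);
    return 0;
}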
Florian Krohm [Tue, 9 Jun 2015 21:44:58 +0000 (21:44 +0000)]
Followup to r15323. Cannot use AC_GCC_WARNING_SUBST to detect
whether -Wformat-security is supported. Special handling is needed.
gcc (Ubuntu 4.8.2-19ubuntu1) 4.8.2 accepts -Wformat-security
without -Wformat being present on the command line. Other GCC
versions will issue a warning if -Wformat is missing. r15323
adds -Werror to AC_GCC_WARNING_SUBST and therefore turns that
warning into an error, with the consequence that
-Wformat-security appears to be unsupported -- a false conclusion.
Rhys Kidd [Sat, 6 Jun 2015 04:18:49 +0000 (04:18 +0000)]
Resolve remaining clang warning on OS X. Should be possible to build Valgrind on modern OS X without any warnings (note: does not hold for regression test suite).
Rhys Kidd [Sat, 6 Jun 2015 03:57:34 +0000 (03:57 +0000)]
Resolve clang warning on OS X: m_stacktrace.c:542:7: warning: implicit declaration of function 'vgPlain_is_in_syscall' is invalid in C99 [-Wimplicit-function-declaration]
Florian Krohm [Fri, 5 Jun 2015 21:19:06 +0000 (21:19 +0000)]
clang, as opposed to gcc, does not terminate with a non-zero return code
when an unrecognised command line option is encountered. configure.ac,
however, was assuming just that, which led to compile-time warnings later on.
Add -Werror to the configure bits to make clang behave like gcc in this
regard. Fixes BZ #348565.
Florian Krohm [Fri, 5 Jun 2015 17:09:57 +0000 (17:09 +0000)]
Simplify configury and eliminate AC_GCC_WARNING_COND which was only used
in one place and can be replaced with AC_GCC_WARNING_SUBST_NEW. Adjust
perf/Makefile.am.
Julian Seward [Fri, 5 Jun 2015 13:33:46 +0000 (13:33 +0000)]
arm32-linux only: add handwritten assembly helpers for
MC_(helperc_LOADV32le), MC_(helperc_LOADV16le) and
MC_(helperc_LOADV8). This improves performance by around 5% to 7% in
the best case, for run-of-the-mill integer code.
On platforms that have an accessible redzone below the SP, the unwind logic
should be able to access the redzone.
So, when computing fp_min, subtract the redzone.
Currently, only amd64 and ppc64 have a non-zero redzone.
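A minimal sketch of the idea; the illustrative constant below stands in for
Valgrind's per-platform VG_STACK_REDZONE_SZB, using amd64's 128-byte SysV
redzone as the example value (ppc64 reserves a larger area).

#include <stdint.h>

/* Illustrative stand-in for VG_STACK_REDZONE_SZB (0 on platforms
   without an accessible redzone). */
#define STACK_REDZONE_SZB 128

/* Lowest stack address the unwinder may legitimately read: the
   redzone below SP is accessible, so subtract it from SP. */
static uintptr_t compute_fp_min(uintptr_t sp) {
    return sp - STACK_REDZONE_SZB;
}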
Mark Wielaard [Wed, 3 Jun 2015 09:52:00 +0000 (09:52 +0000)]
Run memcheck/tests/demangle with -q.
The interesting part is the demangled backtrace in the error message.
Suppress the memory allocation/blocks summary which can differ slightly
depending on the underlying arch/libs.
Mark Wielaard [Tue, 2 Jun 2015 20:23:06 +0000 (20:23 +0000)]
GCC 5.1 is too smart. Disable Identical Code Folding for preload libs.
We want to disable Identical Code Folding for the tools' preload shared
objects to get better backtraces. For GCC 5.1, -fipa-icf is enabled by
default at -O2.
The optimization reduces code size but may disturb
stack unwinding by replacing a function with an equivalent
one that has a different name.
Add a configure check to see if GCC supports -fno-ipa-icf.
If it does, add the flag to AM_CFLAGS_PSO_BASE.
Without this, GCC will notice that some of the preload replacement functions
in vg_replace_strmem are identical and fold them all into one, picking
a random (existing) function name. This causes backtraces showing
completely unexpected function names.
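An illustrative example of the folding effect (not Valgrind source): compiled
with gcc -O2 under GCC 5.1, where -fipa-icf is on by default, these two
byte-identical functions may be folded into a single body, so a backtrace
through one can display the other's name; -fno-ipa-icf prevents that.

#include <stddef.h>

/* Two byte-identical functions, as vg_replace_strmem's macro-generated
   replacements often are. */
void *my_memcpy_v1(void *dst, const void *src, size_t n) {
    char *d = dst; const char *s = src;
    while (n--) *d++ = *s++;
    return dst;
}

void *my_memcpy_v2(void *dst, const void *src, size_t n) {
    char *d = dst; const char *s = src;
    while (n--) *d++ = *s++;
    return dst;
}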
Rhys Kidd [Tue, 2 Jun 2015 10:30:15 +0000 (10:30 +0000)]
Darwin11.supp should include suppression for known uninitialised read in pthread_rwlock_init() as required to pass the memcheck/tests/darwin/pth-supp test. bz#196528.
Slightly improve x86 unwind-intensive workloads.
e.g. perf/memrw is improved by 2% to 3% with this patch.
The unwinding code on x86 is trying to unwind using
either the %ebp-chain or CFI unwinding.
If these 2 techniques fail, then it tries to unwind
using FPO (PDB) debug info.
However, unless running wine or similar, there will never be
such FPO/PDB info.
The function VG_(use_FPO_info) is thus called for nothing
at each 'end of stack': it scans all the loaded debug info
looking for one with FPO data, and finds nothing.
With this patch, the unwind code on x86 will only call VG_(use_FPO_info) if
some FPO/PDB info was loaded.
The fact that FPO/PDB info was loaded is cached and updated similarly to
the CFI cache: each time new debug info is loaded, the cached value is
refreshed using the debuginfo generation.
The patch also renames VG_(CF_info_generation) to
VG_(debuginfo_generation), as this generation is bumped for
any kind of load or unload of debug info, not only for CFI-based
debug info.
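A sketch of that generation-based caching, under the assumption that
VG_(debuginfo_generation) is bumped on every debug info load/unload (plain
names stand in for the VG_() ones; the scan helper is hypothetical):

#include <stdbool.h>

/* Stand-ins for VG_(debuginfo_generation) and a scan over the loaded
   DebugInfos looking for FPO/PDB data. */
extern unsigned debuginfo_generation(void);
extern bool     any_fpo_info_loaded(void);

static bool     fpo_cached = false;
static unsigned fpo_gen    = (unsigned)-1;   /* force refresh on first use */

/* Return whether any FPO/PDB debug info is loaded, rescanning only when
   the debuginfo generation changed, i.e. after a load or unload. */
static bool have_fpo_info(void) {
    unsigned cur = debuginfo_generation();
    if (cur != fpo_gen) {
        fpo_gen    = cur;
        fpo_cached = any_fpo_info_loaded();
    }
    return fpo_cached;
}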
Some platforms such as x86 and amd64 have efficient unaligned access.
On these platforms, implement read_/write_<type> by doing a direct
access, rather than calling a function that reads or writes
byte by byte.
For platforms that do not have efficient unaligned access,
or that do not support unaligned access at all, call the functions
readUAS_/writeUAS_<type>, which work as before.
Currently, direct access is activated only for x86 and amd64.
It is unclear which other platforms support (efficient) unaligned access.
On unwind-intensive code (such as perf/memrw on amd64), this patch
gives up to a 5% improvement.
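A sketch of the scheme for a 4-byte read (the function names mirror the
read_/readUAS_ convention above; the bodies are illustrative):

#include <stdint.h>
#include <string.h>

/* Portable byte-by-byte read (little-endian byte order, matching
   x86/amd64), standing in for readUAS_UInt. */
static uint32_t readUAS_UInt(const uint8_t *p) {
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

/* On x86/amd64 unaligned loads are efficient, so read directly: a
   fixed-size memcpy compiles to a single (possibly unaligned) load. */
static uint32_t read_UInt(const void *p) {
#if defined(__i386__) || defined(__x86_64__)
    uint32_t v;
    memcpy(&v, p, sizeof v);
    return v;
#else
    return readUAS_UInt((const uint8_t *)p);
#endif
}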
This patch significantly decreases the memory needed for OldRef and
slightly increases performance. It also moderately improves
the number of cases where helgrind can provide the stack trace of the old
access (when using the same amount of memory for the OldRef entries).
The patch also provides a new helgrind monitor command to show
the recorded accesses for an address+len, and adds an optional argument
lock_address to the monitor command 'info locks', to show the info
about just this lock.
Currently, OldRefs are maintained in a sparse WA that points to N
entries, as specified by --conflict-cache-size=N.
For each entry (associated with an address), we have the last 5 accesses.
Old entries are recycled in exact LRU order.
But inside an entry, we could have one recent access and 4 very
old accesses that are kept 'alive' by a single thread repeatedly
accessing the address they share.
The patch replaces the sparse WA that maintains the OldRefs
with a hash table.
Each OldRef now also maintains only one single access for an address.
As an OldRef now maintains only one access, all the entries are now
strictly in LRU order.
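An illustrative sketch of the layout change (field names are assumptions,
not the exact helgrind ones):

#include <stdint.h>

typedef uintptr_t Addr;

/* Hypothetical summary of one recorded access. */
typedef struct {
    uint32_t thr_and_flags;   /* thread id, read/write bit, access size */
    uint32_t stack_id;        /* id of the saved stack trace */
} OldAcc;

/* Trunk layout: one OldRef per address, holding up to 5 accesses.
   LRU is exact between OldRefs, but not between the accesses stored
   inside a single OldRef. */
typedef struct {
    Addr   ga;                /* guest address */
    OldAcc accs[5];
} OldRefTrunk;

/* Patched layout: one OldRef per access, kept in a hash table keyed by
   address; every entry participates in one strict LRU order. */
typedef struct {
    Addr   ga;
    OldAcc acc;               /* exactly one access */
} OldRefPatched;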
Memory used for OldRef
-----------------------
For the trunk, an OldRef has a size of 72 bytes (on 32-bit archs),
maintaining up to 5 accesses to the same address.
On 64-bit archs, an OldRef is 104 bytes.
With the patch, an OldRef has a size of 32 bytes (on 32-bit archs)
or 56 bytes (on 64-bit archs).
So, for one single access, the new code needs (on 32 bits)
32 bytes, while the trunk needs only 14.4 bytes (72/5).
However, that comparison is the worst case for the patch: it assumes
all 5 entries in the trunk's accs array are used.
Looking at 2 big apps (one of them being firefox), we see that
very few OldRef entries have all 5 access slots occupied.
On a firefox startup, of the 5x1,000,000 access slots, only
1,406,939 are actually used.
So, on average, the trunk really uses around 52 bytes per access
(72 bytes x 1,000,000 OldRefs / 1,406,939 used accesses).
The default value for --conflict-cache-size has been doubled to 2000000.
This ensures that the memory used for the OldRef is more or less the
same as the trunk (104Mb for OldRef entries).
Memory used for sparseWA versus hashtable
-----------------------------------------
Looking at 2 big apps (one of them being firefox), we see that
there are big variations in the size of the WA: it can go in a few
seconds from 10MB to 250MB, or decrease back to 10MB.
This all depends on where the last N accesses were done: if well localised,
the WA will be small.
If the last N accesses were distributed over a big address space,
then the WA will be big: the last level of the WA (the biggest memory consumer)
uses slightly more than 1KB (2KB on 64 bits) for each 256-byte memory
zone where there is an OldRef. So, in the worst case, on 32 bits, we
need more than 1GB of sparseWA memory to keep 1,000,000 OldRefs.
The hash table has between 1 and 2 words of overhead per OldRef
(as the chain array is roughly doubled each time the hash table is full).
So, unless the OldRefs are extremely localised, the overhead of the
hash table will be significantly less.
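A generic sketch of such a chained hash table (illustrative, not the helgrind
code): each element carries one chain pointer, and the chain array is doubled
when the table fills up, which is what bounds the overhead to 1-2 words per
element.

#include <stdlib.h>

typedef struct Node { struct Node *next; unsigned long key; } Node;
typedef struct { Node **chains; unsigned n_chains; unsigned n_elems; } HT;

/* Insert n, doubling the chain array when there are as many elements
   as chains. */
static void ht_add(HT *ht, Node *n) {
    if (ht->n_elems >= ht->n_chains) {
        unsigned new_n = ht->n_chains ? 2 * ht->n_chains : 8;
        Node **new_chains = calloc(new_n, sizeof(Node *));
        for (unsigned i = 0; i < ht->n_chains; i++) {
            Node *p = ht->chains[i];
            while (p) {                       /* rehash the old chain */
                Node *nx = p->next;
                unsigned h = p->key % new_n;
                p->next = new_chains[h];
                new_chains[h] = p;
                p = nx;
            }
        }
        free(ht->chains);
        ht->chains = new_chains;
        ht->n_chains = new_n;
    }
    unsigned h = n->key % ht->n_chains;
    n->next = ht->chains[h];
    ht->chains[h] = n;
    ht->n_elems++;
}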
With the patch, the core arena total alloc is 5299535/1201448632 totalloc-blocks/bytes;
the trunk is 6693111/3959050280 totalloc-blocks/bytes
(so, around 1.20GB versus 3.95GB).
This big difference is due to the fact that the sparseWA repeatedly
allocates and then frees Level0 or LevelN nodes when the OldRefs in the
region covered by the Level0/N have all been recycled.
In terms of CPU
---------------
With the patch, on amd64, a firefox startup seems slightly faster (around 1%).
The peak memory mmap'd/used decreases by 200MB.
For a libreoffice test, the memory decreases by 230MB. CPU usage also
decreases slightly (1%).
In terms of correctness:
------------------------
The trunk could potentially show an access that is not the most recent
access to the raced-upon memory: the first OldRef entry matching the
raced-upon address was used, while a more recent access could be present
in a following OldRef entry. In other words, the trunk only guaranteed
to find the most recent access within one OldRef, but not among the
several OldRefs that could cover the raced-upon address.
So, assuming it is important to show the most recent access, this patch
ensures we really show the most recent access, even in the presence of
overlapping accesses.
Mark Wielaard [Fri, 22 May 2015 09:20:03 +0000 (09:20 +0000)]
Add procfs-non-linux.stderr.exp variants to EXTRA_DIST.
For bz#344936, procfs-non-linux.stderr.exp was renamed and split into
procfs-non-linux.stderr.exp-with-readlinkat and
procfs-non-linux.stderr.exp-without-readlinkat; add both to EXTRA_DIST.
Fixes make post-regtest-checks.
Improve presentation of the first line of --profile-heap=yes
(i.e. use
-------- Arena "client": 4,194,304/4,194,304 max/cu...
instead of
-------- Arena "client": 4194304/4194304 max/cu...).
Rhys Kidd [Wed, 20 May 2015 11:31:35 +0000 (11:31 +0000)]
Improve documentation of syscall: unix: 44 profil() which was deprecated around OS X 10.6 and removed from the xnu kernel shipped with OS X 10.7. See unresolved bz#264253.
Carl Love [Tue, 19 May 2015 16:08:05 +0000 (16:08 +0000)]
Fix for the HWCAP2 aux vector.
The support assumed that if HWCAP2 is present, the system also supports
ISA 2.07. That assumption is not correct, as we have found a few systems (OSes)
where the HWCAP2 entry is present but the ISA 2.07 bit is not set. This patch
fixes the assertion test to specifically check the ISA 2.07 support bit
in HWCAP2 and in the vex_archinfo->hwcaps variable. The setting for
ISA 2.07 support must be the same in both variables if the HWCAP2 entry exists.
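A sketch of the corrected check (PPC_FEATURE2_ARCH_2_07 is the real HWCAP2
bit; the VEX-side mask value below is illustrative, standing in for
VEX_HWCAPS_PPC64_ISA2_07 from libvex.h):

#include <assert.h>
#include <stdbool.h>

#define PPC_FEATURE2_ARCH_2_07  0x80000000u  /* ISA 2.07 bit in HWCAP2 */
#define HWCAPS_PPC64_ISA2_07    0x1u         /* illustrative VEX mask */

/* If the HWCAP2 aux vector entry exists, the ISA 2.07 setting there and
   in vex_archinfo->hwcaps must agree; assert exactly that, instead of
   assuming that the presence of HWCAP2 implies ISA 2.07. */
static void check_hwcap2_consistency(unsigned long auxv_hwcap2,
                                     unsigned int  vex_hwcaps,
                                     bool          have_hwcap2_entry)
{
    if (have_hwcap2_entry) {
        bool os_has_207  = (auxv_hwcap2 & PPC_FEATURE2_ARCH_2_07) != 0;
        bool vex_has_207 = (vex_hwcaps  & HWCAPS_PPC64_ISA2_07)   != 0;
        assert(os_has_207 == vex_has_207);
    }
}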
This patch reduces the memory needed for the linesF.
Currently, each SecMap has an array of linesF, referenced by those linesZ
of the SecMap that need a lineF, via an index stored in dict[1].
When the array is full, its size is doubled.
The linesF array of a SecMap is freed when the SecMap is GC-ed.
The above strategy has the following consequences:
A. on average, 25% of the linesF are unused.
B. if a SecMap 'temporarily' needs linesF, but these linesF are
afterwards converted back to the normal lineZ representation, the linesF
will not be recuperated unless the SecMap is GC-ed (i.e. fully marked
no-access).
The patch replaces the private per-SecMap linesF array
with a pool allocator of linesF shared between all SecMaps.
A lineZ that needs a lineF now points directly to its lineF (using a pointer
stored in dict[1]), instead of having in dict[1] the index into the SecMap's
linesF array.
When a lineZ needs a lineF, one is allocated from the pool allocator.
When a lineZ no longer needs its lineF, it is returned to the
pool allocator.
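A sketch of the new scheme (pool_alloc_elt/pool_free_elt are stand-ins for
Valgrind's VG_(allocEltPA)/VG_(freeEltPA); the LineZ/LineF layout is
simplified):

#include <stdint.h>

typedef struct LineF LineF;                 /* full representation */
typedef struct {
    uintptr_t dict[4];   /* dict[1]: pointer to this line's LineF, if any */
} LineZ;

extern void *lineF_pool;                    /* one pool shared by all SecMaps */
extern void *pool_alloc_elt(void *pa);
extern void  pool_free_elt(void *pa, void *elt);

/* A lineZ that needs full (F) representation takes a LineF from the
   shared pool and points to it directly from dict[1] (previously,
   dict[1] held an index into a per-SecMap linesF array). */
static LineF *alloc_LineF_for_Z(LineZ *lineZ) {
    LineF *lineF = pool_alloc_elt(lineF_pool);
    lineZ->dict[1] = (uintptr_t)lineF;
    return lineF;
}

/* When the full representation is no longer needed, the LineF goes
   straight back to the pool, so its memory is recuperated without
   waiting for the whole SecMap to be GC-ed. */
static void clear_LineF_of_Z(LineZ *lineZ) {
    pool_free_elt(lineF_pool, (LineF *)lineZ->dict[1]);
    lineZ->dict[1] = 0;
}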
On a firefox startup, the above strategy reduces the memory for linesF
by about 42MB. It seems that the more firefox is used (e.g. to visit
a few websites), the bigger the memory gain.
After opening the home pages of valgrind, wikipedia and google, the memory
gain is about 94MB:
trunk:
linesF: 392,181 allocd ( 203,934,120 bytes occupied) ( 173,279 used)
patch:
linesF: 212,966 allocd ( 109,038,592 bytes occupied) ( 170,252 used)
There is also less alloc/free operations in core arena with the patch:
trunk:
core : 810,680,320/ 802,291,712 max/curr mmap'd, 17/19 unsplit/split sb unmmap'd, 759,441,224/ 703,191,896 max/curr, 40631760/16376828248 totalloc-blocks/bytes, 188015696 searches 8 rzB
patch:
core : 701,628,416/ 690,753,536 max/curr mmap'd, 12/29 unsplit/split sb unmmap'd, 643,041,944/ 577,793,712 max/curr, 32050040/14056017712 totalloc-blocks/bytes, 174097728 searches 8 rzB
In terms of performance, no CPU impact detected on Firefox startup.
Note we have no representative, reproducible (and preferably small)
perf test that uses linesF extensively. Firefox is a good heavy lineF
user but is far from reproducible, and very far from small.
Theoretically, in terms of CPU performance, the patch might bring some
small benefits here and there for read operations, as the lineF pointer
is retrieved directly from the lineZ, rather than via an indirection
into the linesF array.
For write operations, the patch might need a little bit more CPU:
an assignment setting the lineF inUse boolean to False (and then
probably back to True when the cacheline is written back) is replaced
by a call to the pool allocator's VG_(freeEltPA) (and then probably a call
to VG_(allocEltPA) when the cacheline is written back).
These PA functions are small, so the cost should be ok.
We might however still maintain in clear_LineF_of_Z the last cleared lineF
and re-use it in alloc_LineF_for_Z. It is not clear how many calls to the
PA functions would be avoided by this '1 elt cache' (and the needed
'if elt == NULL' check in both clear_LineF_of_Z and alloc_LineF_for_Z).
This possible optimisation will be looked at later.
Carl Love [Fri, 15 May 2015 20:09:05 +0000 (20:09 +0000)]
Patch 5 in a revised series of cleanup patches from Will Schmidt
Add a .exp for the pth_cond_destroy_busy for PPC64 big endian.
This is specifically to cover the last line of output as
seen on ppc64BE, which is "ERROR SUMMARY: X errors from 3 contexts",
where X is 6, versus 3 as seen on other architectures.
The additional errors show up on BE during the "Thread #1:
pthread_cond_destroy: destruction of condition variable being waited upon."
Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
This patch fixes Valgrind bugzilla 347686.
Carl Love [Fri, 15 May 2015 16:50:06 +0000 (16:50 +0000)]
Patch 6 in a revised series of cleanup patches from Will Schmidt
Fix the multiple inheritance heuristic for ppc64LE (leak_cpp_interior test).
Adjust the PPC64 #ifdeffery to indicate that ppc64BE uses a thunk table,
but ppc64LE (in particular, the ELF ABIv2) does not. In this case, thunk
table == function descriptors.
Signed-off-by: Will Schmidt <will_schmidt@vnet.ibm.com>
--
This patch replaces the previously posted "[6/7] add leak_cpp_interior
test .exp results ....."
This patch (re-)gains performance in helgrind, following revision 15207,
which reduced memory use by doing SecMap GC, but slowed down some workloads
(typically, workloads doing a lot of malloc/free).
A significant part of the slowdown came from the clearing of the filter,
which was not optimised for big ranges: the filter was working byte
per byte up to 8-byte alignment, then 8 bytes at a time.
With the patch, the filter clear is done the following way
(a sketch follows below):
* all the bytes up to 8-byte alignment are done together
* then 8 bytes at a time up to filter-line alignment (32 bytes)
* then 32 bytes at a time.
Moreover, as the filter cache is small (1024 lines of 32 bytes),
clearing the filter for ranges bigger than 32KB was uselessly checking
the same entries several times. This is now avoided by using a range
check rather than a tag equality check.
As the new filter clear is significantly more complex than the previous
simple algorithm, the old algorithm is kept and used to check the new one
when CHECK_ZSM is defined as 1.
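A minimal sketch of the phased clear described above (helper names are
illustrative; the real code additionally uses the range check mentioned
above to avoid rescanning filter entries):

#include <stdint.h>

enum { FILTER_LINE_SZB = 32 };           /* one filter line covers 32 bytes */

/* Illustrative stand-ins for the per-granularity clears. */
extern void clear_1byte (uintptr_t a);
extern void clear_8bytes(uintptr_t a);   /* requires 8-byte alignment */
extern void clear_line  (uintptr_t a);   /* requires 32-byte alignment */

static void filter_clear_range(uintptr_t a, uintptr_t len) {
    uintptr_t end = a + len;
    /* 1) bytes together up to 8-byte alignment */
    while (a < end && (a & 7)) {
        clear_1byte(a); a += 1;
    }
    /* 2) 8 bytes at a time up to filter-line (32-byte) alignment */
    while (a + 8 <= end && (a & (FILTER_LINE_SZB - 1))) {
        clear_8bytes(a); a += 8;
    }
    /* 3) whole 32-byte filter lines */
    while (a + FILTER_LINE_SZB <= end) {
        clear_line(a); a += FILTER_LINE_SZB;
    }
    /* trailing partial line */
    while (a + 8 <= end) { clear_8bytes(a); a += 8; }
    while (a < end)      { clear_1byte(a);  a += 1; }
}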
The patch also contains a few micro optimisations and
disables
// VG_(track_die_mem_stack) ( evh__die_mem );
as this had no effect and was somewhat costly.
With this patch, we have almost reached for all perf tests the same
performance as we had before revision 15207. Some tests are still
slightly slower than before the SecMap GC (max 2% difference).
Some tests are now significantly faster (e.g. sarp).
For almost all tests, we are now faster than valgrind 3.10.1.
Details below.
Regtested on x86/amd64/ppc64 (and regtested with all compile-time
checks set).
I have also regtested with libreoffice and firefox.
(with firefox, also with CHECK_ZSM set to 1).
Details about performance:
hgtrace = this patch
trunk_untouched = trunk
base_secmap = trunk before secmap GC
valgrind 3.10.1 included for comparison
Measured on core i5 2.53GHz
Carl Love [Thu, 14 May 2015 21:52:59 +0000 (21:52 +0000)]
Patch 4 in a revised series of cleanup patches from Will Schmidt
Add a suppression to handle a "Jump to the invalid address..." message
that gets generated on power. This is a variation of the existing
suppressions.
While here, I also updated the "prog:" line in the vgtest file to reference
the supp_unknown executable, versus the badjump executable. They share the
same source code, so I think this is effectively cosmetic.