Carl Love [Fri, 5 Apr 2019 20:04:23 +0000 (15:04 -0500)]
PPC64, fix test_isa_3_0_other.c test
The Valgrind ppc64 test_isa_3_0_other test attempts to display
all of the bits of the XER as part of the test case results.
The tests have no existing logic to clear those bits, so the output can
pick up straggling values that cascade into a test case failure.
This adds some code to correct this in two ways:
- Print only the bits that are expected by the tests. This
is currently just the OV and OV32 bits (see the sketch below this list).
- Print all of the bits when run under higher verbosity levels.
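As a rough, hypothetical sketch of the test-side idea (print_xer, XER_OV and
XER_OV32 below are illustrative names, and the mask values are assumed PPC
XER bit positions, not lifted from the test source):

    #include <stdio.h>
    #include <stdint.h>

    #define XER_OV    0x40000000u   /* assumed mask for the OV bit   */
    #define XER_OV32  0x00080000u   /* assumed mask for the OV32 bit */

    static void print_xer(uint32_t xer, int verbose)
    {
        if (verbose)
            printf("XER: %08x\n", xer);            /* full XER at high verbosity */
        else
            printf("XER: OV=%u OV32=%u\n",         /* only the expected bits     */
                   (xer & XER_OV) != 0, (xer & XER_OV32) != 0);
    }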
Bugzilla 406198 - none/tests/ppc64/test_isa_3_0_other test sporadically
including CA bit in output
Patch submitted by Will Schmidt <will_schmidt@vnet.ibm.com>
Patch reviewed, committed by: Carl Love <cel@us.ibm.com>
Bug 404843 - s390x: backtrace sometimes ends prematurely.
On s390x-linux, adds CFI-based unwinding for %f0..%f7, since gcc >= 8.0
sometimes uses these to spill integer register values in leaf functions.
Hence the inability to unwind them was causing unwind failures on this
platform.
Carl Love [Thu, 4 Apr 2019 17:31:05 +0000 (12:31 -0500)]
PPC64, patch for test case issues reported in bugzilla 401827 and 401828.
This corrects a valgrind instruction emulation issue revealed by
a GCC change.
The xscvdpsp, xscvdpspn, xscvdpuxws instructions each convert
double precision values to single precision values, and write
the results into bits 0-31 of the 128-bit target register.
To get the value into the normal position for a scalar register
the result needed to be right-shifted 32 bits, so gcc always
did that.
It was determined that hardware also always did that, so the (redundant)
gcc shift was removed.
This exposed an issue because valgrind was only writing the result to
bits 0-31 of the target register.
This patch updates the emulation to write the result to both of the involved
32-bit fields.
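As a rough model of the fix (illustration only, not the actual VEX IR; the
vsx_reg union and emulate_conversion name are hypothetical, and the (float)
cast stands in for the instruction's real conversion and rounding rules):

    #include <stdint.h>
    #include <string.h>

    typedef union { uint32_t w[4]; uint8_t b[16]; } vsx_reg;

    static void emulate_conversion(vsx_reg *dst, double src)
    {
        float f = (float)src;          /* double -> single precision        */
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        dst->w[0] = bits;              /* previously only this word was set */
        dst->w[1] = bits;              /* fix: mirror the result here too   */
    }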
VEX/priv/guest_ppc_toIR.c:
- rearrange ops in dis_vx_conv to update more portions of the target
register with copies of the result (xscvdpsp, xscvdpspn, xscvdpuxws).
none/tests/ppc64/test_isa_2_06_part1.c
- update res32 checking to explicitly include fcfids and fcfidus in the
32-bit result grouping.
none/tests/ppc64/test_isa_2_07_part2.c
- correct NULL initializer for logic_tests definition
[*1] - GCC change referenced:
2017-09-26 Michael Meissner <meissner@linux.vnet.ibm.com>
* config/rs6000/rs6000.md (movsi_from_sf): Adjust code to
eliminate doing a 32-bit shift right or vector extract after
doing XSCVDPSPN.
patch submitted by: Will Schmidt <will_schmidt@vnet.ibm.com>
reviewed, committed by: Carl Love <cel@us.ibm.com>
DHAT: when the run ends, print a how-to-view-the-profile hint message. n-i-bz.
The aim is to make it zero-effort for users to view the profile after
a run. The printed message is as follows:
To view the resulting profile, open
file:///path/to/valgrind/installation/lib/valgrind/dh_view.html
in a web browser, click on "Load..." and then select the file
/path/to/dhat.out.12345
Scroll to the end of the displayed page to see a short
explanation of some of the abbreviations used in the page.
This patch adds printing of the message, then filters it out in
dhat/tests/filter_stderr, and updates the .stderr.exp files to
remove blank lines.
Petar Jovanovic [Wed, 3 Apr 2019 17:38:08 +0000 (17:38 +0000)]
mips32: pass correct syscall value to kernel in case of __NR_syscall
The syscall number has to be put in register v0 before calling into the
kernel. This was omitted when the system call is __NR_syscall (in which
case the first syscall argument is the system call number of interest).
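To illustrate the scenario (a hedged user-level sketch, not the syswrap code;
it relies on __NR_syscall being the indirect-syscall number on mips o32):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
    #ifdef __NR_syscall
        long direct   = syscall(__NR_getpid);               /* ordinary call      */
        long indirect = syscall(__NR_syscall, __NR_getpid); /* indirect call: the
                                                               fix forwards the
                                                               real number in v0  */
        printf("direct=%ld indirect=%ld\n", direct, indirect);
    #endif
        return 0;
    }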
It caught a couple of bugs, but it does need a few extra comments to
explain when a switch case statement fall-through is deliberate. Luckily,
with -Wimplicit-fallthrough=2 various existing comments already do that.
I have fixed the bugs by adding explicit break statements where
necessary and added comments where the fall-through was correct.
Carl Love [Fri, 22 Mar 2019 17:06:31 +0000 (12:06 -0500)]
PPC64, fix for vmsummbm instruction.
The instruction needs to have the 32-bit "lane" values chopped to 32 bits.
The current lane implementation is not doing the chopping; the chop and
the add need to be done explicitly.
* amd64 RDRAND instruction, on hosts that have it.
* amd64 VCVTPH2PS and VCVTPS2PH, on hosts that have them.
The presence/absence of these on the host is now reflected in the CPUID
results returned to the guest. So code that tests for these features in
CPUID and acts accordingly should "just work".
* New test cases, none/tests/amd64/rdrand and none/tests/amd64/f16c. These
are built if the host's assembler can handle them, in the usual way.
Ilya Leoshkevich [Tue, 12 Mar 2019 18:23:55 +0000 (19:23 +0100)]
Bug 405403 - s390x: Allow using disInstr_S390 on little-endian hosts
Certain projects, e.g. https://angr.io, use VEX as an intermediate
representation for binary code analysis. In order to make it
possible to use them to analyze S/390 code on Intel, this patch
resolves the following issues in the disassembler:
- Bit fields, which are used to describe instruction formats, map to
different bits on different hosts. This patch replaces them with
macros, e.g. the SS.l bit field becomes the SS_l macro. Most bit field
usages are replaced using a perl script.
Since after that there are no more structs, #pragma pack is also
removed.
- Instructions are loaded from memory as words, which behaves
differently depending on host endianness. Such loads are replaced by
assembling words from separately loaded bytes (see the sketch after
this list). This affects the regular disassembly functions, and also
s390_irgen_EXRL(), which loads last_execute_target this way.
- disInstr_S390() explicitly prohibits little-endian hosts with an
assert, which is removed in this patch.
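A minimal sketch of the byte-assembly idea (load_be32 is an illustrative
helper name, not code taken from the patch):

    #include <stdint.h>

    /* Assemble a big-endian 32-bit word from individual bytes, so the
       result does not depend on the endianness of the analyzing host. */
    static uint32_t load_be32(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
             | ((uint32_t)p[2] <<  8) |  (uint32_t)p[3];
    }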
Julian Seward [Tue, 12 Mar 2019 17:37:15 +0000 (18:37 +0100)]
VEX/auxprogs/genoffsets.c: Add cast to my_offsetof. n-i-bz.
Clang/LLVM trips over my_offsetof in VEX/auxprogs/genoffsets.c. See LLVM
PR 40890 for details (https://bugs.llvm.org/show_bug.cgi?id=40890).
Now, it's a Clang bug that Clang exits on an assertion failure rather than
emitting a diagnostic, but the previous my_offsetof expression is a pointer,
not an integer. Add a cast as done in other definitions of offsetof in
the tree. Patch from Ed Maste <emaste@freebsd.org>.
Rhys Kidd [Sat, 2 Feb 2019 23:22:16 +0000 (18:22 -0500)]
macOS: Don't duplicate -fno-stack-protector
Since f38d96d, -fno-stack-protector has been added to $(AM_CFLAGS_BASE) on all
platforms, if the compiler supports it. Accordingly, there is no need to add
it a second time specifically for macOS.
Fixes: f38d96d ("Add -Wformat -Wformat-security to the list of compile flags.")
Signed-off-by: Rhys Kidd <rhyskidd@gmail.com>
Rhys Kidd [Thu, 31 Jan 2019 03:52:07 +0000 (22:52 -0500)]
config: remove unrequired AC_HEADER_STDC
Autoconf says:
"This macro is obsolescent, as current systems have conforming
header files. New programs need not use this macro".
Was previously required to ensure the system has C header files conforming
to ANSI C89 (ISO C90). Specifically, this macro checks for stdlib.h,
stdarg.h, string.h, and float.h.
This autoconf option was used to provide conditional fallback support
via defined STDC_HEADERS.
valgrind does not utilize that conditional fallback support, so this macro
is both obsolete and unused; let's drop it.
Petar Jovanovic [Mon, 4 Mar 2019 18:24:55 +0000 (19:24 +0100)]
modify massif/tests/mmapunmap.vgtest to comply with glibc change
The change in the glibc version (2.27 -> 2.28) results in one additional
function call being present in the backtrace for mips64, which puts the
line to be checked out of bounds.
The post line in mmapunmap.vgtest was changed to work around this.
This fixes massif/tests/mmapunmap failure on mips64.
Mark Wielaard [Thu, 21 Feb 2019 16:21:53 +0000 (17:21 +0100)]
memcheck powerpc subfe x, x, x initializes x to 0 or -1 based on CA
GCC might use subfe x, x, x to initialize x to 0 or -1, based on
whether the carry flag is set. This happens in some cases when g++
compiles resetting a unique_ptr. The "trick" used by the compiler is
that it can AND a pointer with the register x (now 0x0 or 0xffffffff)
to set something to NULL or to the given pointer.
subfe is implemented as rD = (not rA) + rB + XER[CA].
If we instead implement it as rD = rB - rA - (XER[CA] ^ 1),
then memcheck can see that rB and rA cancel each other out when they
are the same.
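A quick self-contained check of the identity behind the change (assuming
64-bit registers; modulo 2^64, ~rA equals -rA - 1, so both formulations give
the same value, but the second makes the rA == rB cancellation visible):

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t s[] = { 0, 1, 0x7fffffffffffffffULL, 0xdeadbeefULL };
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (uint64_t ca = 0; ca <= 1; ca++) {
                    uint64_t rA = s[i], rB = s[j];
                    /* old form == new form, for every CA value */
                    assert((uint64_t)(~rA + rB + ca)
                           == (uint64_t)(rB - rA - (ca ^ 1)));
                }
        return 0;
    }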
Carl Love [Tue, 5 Feb 2019 16:15:09 +0000 (10:15 -0600)]
Fix missed changes from Rename some int<->fp conversion IROps patch
The previous commit 6b16f0e2a0427f57fb5dc76cbe9177ee35f997ab dated
Sat Jan 26 17:38:01 2019 by Julian Seward <jseward@acm.org> renamed some of
the int<->fp conversion IROps to add a trailing _DEP. That patch missed
renaming two of the IROps; this patch renames them.
This commit thoroughly overhauls DHAT, moving it out of the
"experimental" ghetto. It makes moderate changes to DHAT itself,
including dumping profiling data to a JSON format output file. It also
implements a new data viewer (as a web app, in dhat/dh_view.html).
The main benefits over the old DHAT are as follows.
- The separation of data collection and presentation means you can run a
program once under DHAT and then sort the data in various ways. Also,
full data is in the output file, and the viewer chooses what to omit.
- The data can be sorted in more ways than previously. Some of these
sorts involve useful filters such as "short-lived" and "zero reads or
zero writes".
- The tree structure view avoids the need to choose stack trace depth.
This avoids both the problem of not enough depth (when records that
should be distinct are combined, and may not contain enough
information to be actionable) and the problem of too much depth (when
records that should be combined are separated, making them seem less
important than they really are).
- Byte and block measures are shown with a percentage relative to the
global count, which helps gauge relative significance of different
parts of the profile.
- Byte and block measures are also shown with an allocation rate
(bytes and blocks per million instructions), which enables comparisons
across multiple profiles, even if those profiles represent different
workloads.
- Both global and per-node measurements are taken at the global heap
peak ("At t-gmax"), which gives Massif-like insight into the point of
peak memory use.
- The final/lifetimes stats are a bit more useful than the old deaths
stats. (E.g. the old deaths stats didn't take into account lifetimes
of unfreed blocks.)
- The handling of realloc() has changed. The sequence `p = malloc(100);
realloc(p, 200);` now increases the total block count by 2 and the
total byte count by 300. Previously it increased them by 1 and 200.
The new handling is a more operational view that better reflects the
effect of allocations on performance. It makes a significant
difference in the results, giving paths involving reallocation (e.g.
repeated pushing to a growing vector) more prominence.
Other things of note:
- There is now testing, both regression tests that run within the
standard test suite, and viewer-specific tests that cannot run within
the standard test suite. The latter are run by loading
dh_view.html?test=1 in a web browser.
- The commit puts all tool lists in Makefiles (and similar files) in the
following consistent order: memcheck, cachegrind, callgrind, helgrind,
drd, massif, dhat, lackey, none; exp-sgcheck, exp-bbv.
- A lot of fields in dh_main.c have been given more descriptive names.
Those names now match those used in dh_view.js.
Julian Seward [Thu, 31 Jan 2019 06:56:26 +0000 (07:56 +0100)]
s390 back end: s390_isel_vec_expr_wrk: fix some enum type confusion. n-i-bz.
In s390_isel_vec_expr_wrk() there were some assignments of enum-typed
values to variables of different enum types. This fixes them. It also adds a
few initialisations to variables of type HReg for safety against the
possibility of them being used uninitialised. No functional change. Tested
by Andreas Arnez.
Rhys Kidd [Tue, 29 Jan 2019 06:07:09 +0000 (01:07 -0500)]
memcheck,macos: Fix vbit-test building on macOS x86 architectures. n-i-bz.
Secondary architectures on macOS are generally x86, which requires additional
LDFLAGS to be set to avoid linker errors.
apple clang (clang-800.0.42.1) error:
ld: illegal text-relocation to '___stderrp' in /usr/lib/libSystem.dylib from '_main'
in vbit_test_sec-main.o for architecture i386
Fixes: 49ca185 ("Also test memcheck/tests/vbit-test on any secondary arch.")
Signed-off-by: Rhys Kidd <rhyskidd@gmail.com>
Fix callgrind_annotate Use of uninitialized value in numeric gt (>)
When a callgrind dump file contains no events (at all, I think),
callgrind_annotate can produce the error messages below:
Ir sysCount sysTime file:function
--------------------------------------------------------------------------------
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
. . . /build/glibc-yWQXbR/glibc-2.24/csu/../csu/libc-start.c:(below main) [/lib/x86_64-linux-gnu/libc-2.24.so]
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
. . . /build/glibc-yWQXbR/glibc-2.24/elf/../sysdeps/x86_64/dl-trampoline.h:_dl_runtime_resolve_xsave [/lib/x86_64-linux-gnu/ld-2.24.so]
Use of uninitialized value in numeric gt (>) at ../trunk_untouched/Inst/bin/callgrind_annotate line 957.
.....
The above can be produced by:
- running sleep 100 under callgrind,
- taking some callgrind dumps after the startup,
- running ./Inst/bin/callgrind_annotate --threshold=1 callgrind.out.31377.2
The fix checks that the value is defined before doing the comparison.
Note: callgrind_annotate shows functions which have undefined costs
for all events (and I guess it would also show functions that have zero
costs for all events).
Maybe it would be better not to show such functions at all, rather than
showing them with all '.'.
Julian Seward [Sat, 26 Jan 2019 16:38:01 +0000 (17:38 +0100)]
Rename some int<->fp conversion IROps for consistency. No functional change. n-i-bz.
2018-Dec-27: some of the int<->fp conversion operations have been renamed so as to
have a trailing _DEP, meaning "deprecated". This is because they don't
specify a rounding mode to be used for the conversion and so are
underspecified. Their use should be replaced with equivalents that do specify
a rounding mode, either as a first argument or using a suffix on the name,
that indicates the rounding mode to use.
Julian Seward [Fri, 25 Jan 2019 11:06:37 +0000 (12:06 +0100)]
VG_(discard_translations): try to avoid invalidating the entire VG_(tt_fast) cache. n-i-bz.
It is very commonly the case that a call to VG_(discard_translations) results
in the discarding of exactly one superblock. In such cases, it's much cheaper
to find and invalidate the VG_(tt_fast) cache entry associated with the block,
than it is to invalidate the entire cache, because
(1) invalidating the fast cache is expensive, and
(2) repopulating the fast cache after invalidation is even more expensive.
For QEMU, which intensively invalidates individual translations (presumably
due to patching them), this reduces the fast-cache miss rate from circa one in
33 lookups to around one in 130 lookups.
Julian Seward [Fri, 25 Jan 2019 08:27:23 +0000 (09:27 +0100)]
Bug 402781 - Redo the cache used to process indirect branch targets.
Implementation for x86-solaris and amd64-solaris. This completes the
implementations for all targets. Note these two are untested because I don't
have any way to test them.
Julian Seward [Fri, 25 Jan 2019 08:14:56 +0000 (09:14 +0100)]
Bug 402781 - Redo the cache used to process indirect branch targets.
[This commit contains an implementation for all targets except amd64-solaris
and x86-solaris, which will be completed shortly.]
In the baseline simulator, jumps to guest code addresses that are not known at
JIT time have to be looked up in a guest->host mapping table. That means:
indirect branches, indirect calls and most commonly, returns. Since there are
huge numbers of these (often 10+ million/second) the mapping mechanism needs
to be extremely cheap.
Currently, this is implemented using a direct-mapped cache, VG_(tt_fast), with
2^15 (guest_addr, host_addr) pairs. This is queried in handwritten assembly
in VG_(disp_cp_xindir) in dispatch-<arch>-<os>.S. If there is a miss in the
cache then we fall back out to C land, and do a slow lookup using
VG_(search_transtab).
Given that the size of the translation table(s) in recent years has expanded
significantly in order to keep pace with increasing application sizes, two bad
things have happened: (1) the cost of a miss in the fast cache has risen
significantly, and (2) the miss rate on the fast cache has also increased
significantly. This means that large (~ one-million-basic-blocks-JITted)
applications that run for a long time end up spending a lot of time in
VG_(search_transtab).
The proposed fix is to increase the associativity of the fast cache from 1
(direct mapped) to 4. Simulations of various cache configurations using
indirect-branch traces from a large application show that this is the best
of the configurations tried. In an extreme case with 5.7 billion indirect
branches:
* The increase of associativity from 1 way to 4 way, whilst keeping the
overall cache size the same (32k guest/host pairs), reduces the miss rate by
around a factor of 3, from 4.02% to 1.30%.
* The use of a slightly better hash function than merely slicing off the
bottom 15 bits of the address, reduces the miss rate further, from 1.30% to
0.53%.
Overall the VG_(tt_fast) miss rate is almost unchanged on small workloads, but
reduced by a factor of up to almost 8 on large workloads.
By implementing each (4-entry) cache set using a move-to-front scheme in the
case of hits in ways 1, 2 or 3, the vast majority of hits can be made to
happen in way 0. Hence the cost of having this extra associativity is almost
zero in the case of a hit. The improved hash function costs an extra two ALU
operations (a shift and an xor), but overall this seems to be performance
neutral to a win.
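The following is an illustrative sketch of the lookup scheme in C (the real
dispatcher is handwritten assembly; FastCacheSet, hash_guest, lookup and the
exact hash mixing below are illustrative names, not the committed code):

    #include <stdint.h>
    #include <stddef.h>

    #define N_SETS 8192   /* 8192 sets x 4 ways = 32k guest/host pairs */

    typedef struct { uintptr_t guest; void *host; } FastCacheEntry;
    typedef struct { FastCacheEntry ways[4]; } FastCacheSet;

    static FastCacheSet fast_cache[N_SETS];

    static inline uint32_t hash_guest(uintptr_t ga)
    {
        /* assumed mixing step: a shift and an xor, rather than just
           slicing off the bottom address bits */
        return (uint32_t)((ga ^ (ga >> 13)) & (N_SETS - 1));
    }

    static void *lookup(uintptr_t guest)
    {
        FastCacheSet *set = &fast_cache[hash_guest(guest)];
        for (int w = 0; w < 4; w++) {
            if (set->ways[w].guest == guest) {
                void *host = set->ways[w].host;
                /* move-to-front: a hit in way 1, 2 or 3 is shuffled into
                   way 0, so nearly all hits end up landing in way 0 */
                for (int k = w; k > 0; k--)
                    set->ways[k] = set->ways[k - 1];
                set->ways[0].guest = guest;
                set->ways[0].host  = host;
                return host;
            }
        }
        return NULL;  /* miss: fall back to the slow VG_(search_transtab) */
    }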
Andreas Arnez [Mon, 21 Jan 2019 13:10:00 +0000 (14:10 +0100)]
Bug 403552 s390x: Fix vector facility bit number
The wrong bit number was used when checking for the vector facility. This
can result in a fatal emulation error: "Encountered an instruction that
requires the vector facility. That facility is not available on this
host."
In many cases the wrong facility bit happened to be set as well, hence
nothing bad happened. But when running Valgrind within a Qemu/KVM guest,
the wrong bit was not (always?) set and the emulation error occurred.
This fix simply corrects the vector facility bit number, changing it from
128 to 129.
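For context, a hedged sketch of how a facility bit number maps onto the
STFLE facility list (facility_present is an illustrative helper, not code
from the patch): the list is a big-endian bit string with bit 0 as the most
significant bit of byte 0, so bits 128 and 129 select different bits of
byte 16.

    #include <stdint.h>

    static int facility_present(const uint8_t *fac_list, unsigned bit)
    {
        return (fac_list[bit / 8] >> (7 - (bit % 8))) & 1;
    }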
Fix false positive 'Conditional jump or move' on amd64 when a 64-bit process ptraces a 32-bit process.
PTRACE_GET_THREAD_AREA is not handled by the amd64 linux syswrap, which leads
to false positive errors in 64-bit programs ptrace-ing 32-bit processes.
For example, the error below was wrongly reported for GDB:
==25377== Conditional jump or move depends on uninitialised value(s)
==25377== at 0x8A1D7EC: td_thr_get_info (td_thr_get_info.c:35)
==25377== by 0x526819: thread_from_lwp(thread_info*, ptid_t) (linux-thread-db.c:417)
==25377== by 0x5281D4: thread_db_notice_clone(ptid_t, ptid_t) (linux-thread-db.c:442)
==25377== by 0x51773B: linux_handle_extended_wait(lwp_info*, int) (linux-nat.c:2027)
....
==25377== Uninitialised value was created by a stack allocation
==25377== at 0x69A360: x86_linux_get_thread_area(int, void*, unsigned int*) (x86-linux-nat.c:278)
Fix this by implementing PTRACE_GET|SET_THREAD_AREA on amd64.
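For reference, a hedged sketch of the tracer-side pattern that was being
flagged (get_thread_area_of is an illustrative wrapper, not GDB or Valgrind
code): the kernel fills 'desc', but without a syscall wrapper memcheck never
learned that the buffer had become defined.

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <asm/ldt.h>        /* struct user_desc */

    static long get_thread_area_of(pid_t pid, int idx, struct user_desc *desc)
    {
        /* On success the kernel writes the tracee's TLS descriptor for
           GDT entry 'idx' into *desc. */
        return ptrace(PTRACE_GET_THREAD_AREA, pid, (void *)(long)idx, desc);
    }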
Mark Wielaard [Mon, 31 Dec 2018 21:26:31 +0000 (22:26 +0100)]
Bug 402519 - POWER 3.0 addex instruction incorrectly implemented
addex uses OV as carry in and carry out. For all other instructions
OV is the signed overflow flag. And instructions like adde use CA
as carry.
Replace set_XER_OV_OV32 with set_XER_OV_OV32_ADDEX, which will
call calculate_XER_CA_64 and calculate_XER_CA_32, but with OV
as input, and sets OV and OV32.
Enable test_addex in none/tests/ppc64/test_isa_3_0.c and update
the expected output. test_addex would fail to match the expected
output before this patch.
Some more .exp changes following the new --show-error-list option
A few .exp files (not tested on amd64) have to be changed to
have the messages in the new order:
Use --track-origins=yes to see where uninitialised values come from
For lists of detected and suppressed errors, rerun with: -s
This option makes it possible to list the detected errors and show the used
suppressions without increasing the verbosity.
Increasing the verbosity also activates a lot of messages that
are often not very useful for the user.
So, this option allows seeing the list of errors and used suppressions
independently of the verbosity.
Note if a high verbosity is selected, the behaviour is unchanged.
In other words, when specifying -v, the list of detected errors
and the used suppressions are still shown, even if
--show-error-list=yes and -s are not used.
Factorize producing the 'For counts of detected and suppressed errors' msg
Each tool producing errors had identical code to produce this message.
The production of the message is now factorized in m_main.c.
This prepares the work for a specific option to show the list
of detected errors and the count of suppressed errors.
This has a (small) visible effect on the output of memcheck:
Instead of producing
For counts of detected and suppressed errors, rerun with: -v
Use --track-origins=yes to see where uninitialised values come from
memcheck now produces:
Use --track-origins=yes to see where uninitialised values come from
For counts of detected and suppressed errors, rerun with: -v
i.e. the track origin and counts of errors msg are inverted.
Julian Seward [Sat, 22 Dec 2018 18:01:50 +0000 (19:01 +0100)]
amd64 back end: generate improved SIMD64 code.
For most SIMD operations that happen on 64-bit values (as would arise from MMX
instructions, for example, such as Add16x4, CmpEQ32x2, etc), generate code
that performs the operation using SSE/SSE2 instructions on values in the low
halves of XMM registers. This is much more efficient than the previous scheme
of calling out to helper functions written in C. There are still a few SIMD64
operations done via helpers, though.
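As a conceptual illustration of the new strategy (using SSE2 intrinsics in C
rather than the back end's actual generated code; add16x4 is a hypothetical
helper): an Add16x4 on 64-bit values is done by placing both operands in the
low halves of XMM registers, doing a 128-bit paddw, and keeping the low 64
bits of the result.

    #include <stdint.h>
    #include <emmintrin.h>

    static uint64_t add16x4(uint64_t a, uint64_t b)
    {
        __m128i va = _mm_cvtsi64_si128((long long)a);  /* movq a into xmm   */
        __m128i vb = _mm_cvtsi64_si128((long long)b);
        __m128i vr = _mm_add_epi16(va, vb);            /* paddw, lanes 0..3 */
        return (uint64_t)_mm_cvtsi128_si64(vr);        /* low 64 bits back  */
    }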
Julian Seward [Sat, 22 Dec 2018 17:04:42 +0000 (18:04 +0100)]
amd64 back end: generate better code for 2x64<-->V128 and 4x64<-->V256 transfers ..
.. by adding support for MOVQ xmm/ireg and using that to implement 64HLtoV128,
4x64toV256 and their inverses. This reduces the number of instructions,
removes the use of memory as an intermediary, and avoids store-forwarding
stalls.
Julian Seward [Sat, 22 Dec 2018 06:23:00 +0000 (07:23 +0100)]
amd64 pipeline: generate much better code for pshufb mm/xmm/ymm. n-i-bz.
pshufb mm/xmm/ymm rearranges byte lanes in vector registers. It's fairly
widely used, but we generated terrible code for it. With this patch, we just
generate, at the back end, pshufb plus a bit of masking, which is a great
improvement.