Mark Wielaard [Mon, 1 Feb 2021 21:46:43 +0000 (22:46 +0100)]
Handle Iop_NegF16, Iop_AbsF16 and Iop_SqrtF16 as non-trapping.
Add Iop_NegF16, Iop_AbsF16 and Iop_SqrtF16 to VEX/priv/ir_defs.c
primopMightTrap. Also rewrite the case statement slightly so GCC will warn
if an enumeration value is missed.
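As an aside, the warning trick is plain C: a switch over the enum with no default case, so -Wswitch (part of -Wall) flags any enumerator that is not handled. A minimal standalone sketch, with a cut-down stand-in enum rather than the real VEX IROp:

   typedef enum { Iop_NegF16, Iop_AbsF16, Iop_SqrtF16 /* , ... */ } IROp_demo;

   /* No "default:" label, so GCC's -Wswitch warns if an enumerator is
      ever added to the enum but not listed in the switch. */
   int primopMightTrap_demo ( IROp_demo op )
   {
      switch (op) {
         case Iop_NegF16: case Iop_AbsF16: case Iop_SqrtF16:
            return 0;   /* non-trapping */
      }
      return 1;         /* not reached while all enumerators are listed */
   }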
Mark Wielaard [Mon, 25 Jan 2021 14:33:34 +0000 (15:33 +0100)]
Add support for DWARF5 as produced by GCC11
Implement DWARF5 in readdwarf.c and readdwarf3.c
Since gcc11 will default to DWARF5 it is time for
valgrind to support it. The patch handles everything gcc11 produces
(except for the new DWARF expressions).
There is some duplication in the patch since we actually have two DWARF
readers which use slightly different abstractions (Slices vs Cursors).
It would be nice if we could merge these somehow. The reader in
readdwarf3.c is only used when --read-var-info=yes is used (which
drd uses to provide the allocation context).
The handling of DW_FORM_implicit_const is tricky with the current design.
An abbrev which contains an attribute encoded with DW_FORM_implicit_const
has its value also in the abbrev. The code in readdwarf3.c assumed it
could always simply get the data from the .debug_info/current Cursor.
For now I added a value field to the name_form field that holds the
associated value. This is slightly wasteful since the extra field is
not necessary for other forms.
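A rough sketch of the approach (type and field names here are illustrative, not the exact ones used in readdwarf3.c): every attribute/form pair parsed from the abbrev carries an extra slot that is only meaningful for DW_FORM_implicit_const and holds the value read from .debug_abbrev rather than from the .debug_info Cursor.

   /* Illustrative only - the real struct lives in readdwarf3.c. */
   typedef struct {
      unsigned long long at_name;   /* DW_AT_* code                      */
      unsigned long long at_form;   /* DW_FORM_* code                    */
      long long          at_value;  /* value from .debug_abbrev; used    */
                                    /* only for DW_FORM_implicit_const,  */
                                    /* wasted space for other forms      */
   } name_form_demo;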
Tested against GCC10 (defaulting to DWARF4) and GCC11 (defaulting to
DWARF5) on x86_64. No regressions in the regtests.
Mark Wielaard [Sat, 23 Jan 2021 20:54:07 +0000 (21:54 +0100)]
Define AT as UChar in VEX/priv/guest_ppc_toIR.c (dis_vsx_accumulator_prefix)
GCC notices that AT is passed around as char, specifically as a %u argument
to DIP. But ifieldAT returns a UChar and vsx_matrix_ger takes AT as a UChar.
This causes lots of format string warnings when building with GCC11.
Mark Wielaard [Sat, 23 Jan 2021 19:22:28 +0000 (20:22 +0100)]
Fix indentation in coregrind/m_debuginfo/readpdb.c (DEBUG_SnarfLinetab)
GCC warns:
readpdb.c:1631:16: warning: this 'if' clause does not guard...
[-Wmisleading-indentation]
1631 | if (debug)
| ^~
In file included from ./pub_core_basics.h:38,
from m_debuginfo/readpdb.c:38:
../include/pub_tool_basics.h:69:30: note: ...this statement, but the latter
is misleadingly indented as if it were guarded by the 'if'
69 | #define ML_(str) VGAPPEND(vgModuleLocal_, str)
| ^~~~~~~~~~~~~~
../include/pub_tool_basics.h:66:29: note: in definition of macro 'VGAPPEND'
66 | #define VGAPPEND(str1,str2) str1##str2
| ^~~~
m_debuginfo/readpdb.c:1636:19: note: in expansion of macro 'ML_'
1636 | ML_(addLineInfo)(
| ^~~
The warning message is slightly hard to read because of the macro expansion.
But GCC is right that the indentation is misleading. Fixed by reindenting.
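A stripped-down illustration of the pattern behind the warning (hypothetical code, not the actual readpdb.c lines): the second statement is indented as if guarded by the if, but is not.

   #include <stdio.h>

   void snarf_linetab_demo ( int debug, int linecount )
   {
      /* Misleadingly indented: only the first printf is guarded. */
      if (debug)
         printf("line count %d\n", linecount);
         printf("adding line info\n");   /* triggers -Wmisleading-indentation */

      /* Reindented: the control flow is now obvious. */
      if (debug)
         printf("line count %d\n", linecount);
      printf("adding line info\n");
   }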
Carl Love [Mon, 11 Jan 2021 17:39:23 +0000 (11:39 -0600)]
PPC64: Fix load store instructions
This patch fixes numerous errors in the ISA support.
The word and prefix versions of the instructions do not use the same mask
to extract the immediate values. The prefix instructions should all use
the DFORM_IMMASK.
The parsing of prefix instructions has been fixed to ensure the ISA 3.1
instructions all have the ISA_3_1_PREFIX_CHECK check.
Also improved the comments for the instruction parsing.
Carl Love [Mon, 11 Jan 2021 16:41:47 +0000 (10:41 -0600)]
PPC64: Fix EA calculation for prefixed instructions
The effective address (EA) calculation for the prefixed instructions
concatenates an 18-bit immediate value from the prefix word and a 16-bit
immediate value from the instruction word. This results in a 34-bit value.
The concatenated value must be stored into a long long int, not a 32-bit
integer.
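A minimal sketch of the arithmetic, with illustrative names: shifting the 18-bit prefix immediate left by 16 and OR-ing in the 16-bit word immediate gives a 34-bit signed displacement, which must live in a 64-bit variable.

   #include <stdint.h>

   /* Concatenate and sign-extend the 34-bit prefixed-instruction
      displacement.  Truncating this to a 32-bit int loses the upper bits. */
   int64_t concat_imm34 ( uint32_t imm18, uint32_t imm16 )
   {
      uint64_t raw = ((uint64_t)(imm18 & 0x3FFFFu) << 16) | (imm16 & 0xFFFFu);
      if (raw & (1ULL << 33))              /* sign bit of the 34-bit value */
         raw |= ~((1ULL << 34) - 1);       /* extend into the upper bits   */
      return (int64_t)raw;
   }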
Carl Love [Sun, 10 Jan 2021 02:15:46 +0000 (20:15 -0600)]
PPC64: Fix for VG_MAX_INSTR_SZB, max instruction size is now 8 bytes for prefixed insts
The ISA 3.1 support has both word instructions of length 4 bytes and prefixed
instructions of length 8 bytes. The following fix is needed when Valgrind
is compiled using an ISA 3.1 compiler.
Julian Seward [Mon, 4 Jan 2021 12:33:24 +0000 (13:33 +0100)]
arm64 insn selector: improved handling of Or1/And1 trees.
This is the exact analog of cadd90993504678607a4f95dfe5d1df5207c1eb0, to the
point of almost being a copy-n-paste. That commit split (amd64) iselCondCode
into two functions, iselCondCode_C (existing) and iselCondCode_R (new). The
latter computes an I1-typed expression into a register rather than a condition
code. The two functions cooperate so as to minimise conversions between
a condition-code value and a value in a register.
Julian Seward [Sat, 2 Jan 2021 16:18:53 +0000 (17:18 +0100)]
More arm64 isel tuning: create {and,orr,eor,add,sub} reg,reg,reg-shifted-by-imm
Thus far the arm64 isel can't generate instructions of the form
{and,or,xor,add,sub} reg,reg,reg-shifted-by-imm
and hence sometimes winds up generating pairs like
lsl x2, x1, #13 ; orr x4, x3, x2
when instead it could just have generated
orr x4, x3, x1, lsl #13
This commit fixes that, although only for the 64-bit case, not the 32-bit
case. Specifically, it can transform the IR forms
{Add,Sub,And,Or,Xor}(E1, {Shl,Shr,Sar}(E2, immediate)) and
{Add,And,Or,Xor}({Shl,Shr,Sar}(E1, immediate), E2)
into a single arm64 instruction. Note that `Sub` is not included in the
second line, because shifting the first operand requires inverting the arg
order in the arm64 instruction, which isn't allowable with `Sub`, since it's
not commutative and arm64 doesn't offer us a reverse-subtract instruction to
use instead.
This gives a 1.1% reduction in generated code size when running
/usr/bin/date on Memcheck.
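A hedged sketch of the kind of tree match this needs in the instruction selector (simplified; the real code covers more operators and both operand orders, and the surrounding isel plumbing is omitted):

   /* Sketch only - assumes VEX's IR accessors from libvex_ir.h. */
   #include "libvex_ir.h"

   static Bool is_or_of_shl64 ( IRExpr* e, IRExpr** base,
                                IRExpr** shiftee, UInt* amt )
   {
      if (e->tag == Iex_Binop && e->Iex.Binop.op == Iop_Or64) {
         IRExpr* argR = e->Iex.Binop.arg2;
         if (argR->tag == Iex_Binop
             && argR->Iex.Binop.op == Iop_Shl64
             && argR->Iex.Binop.arg2->tag == Iex_Const) {
            *base    = e->Iex.Binop.arg1;
            *shiftee = argR->Iex.Binop.arg1;
            *amt     = argR->Iex.Binop.arg2->Iex.Const.con->Ico.U8;
            return *amt < 64;   /* must fit the reg-shifted-imm field */
         }
      }
      return False;             /* fall back to the two-instruction form */
   }

On a match the backend can emit a single orr Xd, Xn, Xm, lsl #amt instead of the shift-then-orr pair.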
Julian Seward [Sat, 2 Jan 2021 15:15:03 +0000 (16:15 +0100)]
A bit of tuning of the arm64 isel: do PUT(..) = 0x0:I64 in a single insn.
When running Memcheck, most blocks will do one and often two of `PUT(..) =
0x0:I64`, as a result of the way the front end models arm64 condition codes.
The arm64 isel would generate `mov xN, #0 ; str xN, [xBaseblock, #imm]`,
which is pretty stupid. This patch changes it to a single insn:
`str xzr, [xBaseblock, #imm]`.
This is a special case for `PUT(..) = 0x0:I64`. General-case integer stores
of 0x0:I64 are unchanged.
This gives a 1.9% reduction in generated code size when running
/usr/bin/date on Memcheck.
Paul Floyd [Wed, 30 Dec 2020 12:57:39 +0000 (13:57 +0100)]
Add an extra suppression.
On Fedora 33 with gcc (GCC) 10.2.1 20201125 (Red Hat 10.2.1-9)
it looks like fun:__static_initialization_and_destruction_0 is
now inlined, which causes the existing suppression for the
same reachable to no longer match.
For the cases of sbfm that are actually just sign-extensions to a wider width,
emit that directly and do disassembly-printing accordingly. No functional
change.
Mark Wielaard [Fri, 18 Dec 2020 17:23:42 +0000 (18:23 +0100)]
Fix magic cookie reference in mc-manual.
The URL to the original C++ front-end for GCC internals document
disappeared. Replace it with a URL that still has a description of
the original magic cookie added by operator new [] by that frontend.
Julian Seward [Thu, 17 Dec 2020 16:40:46 +0000 (17:40 +0100)]
arm64 front end: ubfm/sbfm: handle plain shifts explicitly
The ubfm and sbfm instructions implement some kind of semi-magical rotate,
mask and sign/zero-extend functionality. Boring old left and right shifts are
special cases of it. The existing translation into IR is correct, but has the
disadvantage that the IR optimiser isn't clever enough to simplify the
resulting IR back into a single shift in the case where the instruction is
used simply to encode a shift. This induces inefficiency and it also makes
the resulting disassembly pretty difficult to read, if you're into that kind
of thing.
This commit does the obvious thing: detects cases where the required behaviour
is just a single shift, and emits IR and disassembly-printing accordingly.
All other cases fall through to the existing general-case handling and so are
unchanged.
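For reference, a hedged sketch of the 64-bit alias rules being detected (these follow the standard ARMv8 encodings; the helper itself is made up): imms = 63 means a plain LSR/ASR by immr, and a ubfm with immr = imms + 1 is an LSL by 63 - imms.

   #include <stdbool.h>
   #include <stdint.h>

   typedef enum { SH_NONE, SH_LSL, SH_LSR, SH_ASR } ShiftKind;

   /* Classify a 64-bit ubfm/sbfm immr/imms pair that is really just a
      plain shift; everything else stays on the general bitfield path. */
   ShiftKind bfm_as_plain_shift ( bool is_sbfm, uint32_t immr,
                                  uint32_t imms, uint32_t* shift )
   {
      if (imms == 63) {                      /* xBFM Rd, Rn, #r, #63 */
         *shift = immr;
         return is_sbfm ? SH_ASR : SH_LSR;
      }
      if (!is_sbfm && imms + 1 == immr) {    /* ubfm encoding of LSL */
         *shift = 63 - imms;
         return SH_LSL;
      }
      return SH_NONE;
   }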
Mark Wielaard [Wed, 14 Oct 2020 10:11:34 +0000 (06:11 -0400)]
arm64 VEX frontend and backend support for Iop_M{Add,Sub}F{32,64}
The arm64 frontend used to translate the scalar fmadd, fmsub, fnmadd
and fnmsub instructions into separate addition/subtraction and
multiplication instructions, which caused rounding issues.
This patch turns them into Iop_M{Add,Sub}F{32,64} instructions
(with some arguments negated). And the backend now emits fmadd or fmsub
instructions.
Alexandra Hajkova <ahajkova@redhat.com> added tests and fixed up the
implementation to make sure rounding (and sign) are correct now.
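A hedged sketch of the F64 half of the mapping (using the constructors from libvex_ir.h; per its comments, Iop_MAddF64 computes arg2*arg3 + arg4). The negation placement shown is one plausible arrangement and may not match the patch exactly.

   #include "libvex_ir.h"

   /* fmadd d, n, m, a  ->  n*m + a */
   static IRExpr* mk_fmadd_f64 ( IRExpr* rm, IRExpr* n, IRExpr* m, IRExpr* a )
   {
      return IRExpr_Qop(Iop_MAddF64, rm, n, m, a);
   }

   /* fmsub d, n, m, a  ->  a - n*m, i.e. MAddF64 with n negated */
   static IRExpr* mk_fmsub_f64 ( IRExpr* rm, IRExpr* n, IRExpr* m, IRExpr* a )
   {
      return IRExpr_Qop(Iop_MAddF64, rm, IRExpr_Unop(Iop_NegF64, n), m, a);
   }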
Mark Wielaard [Tue, 15 Dec 2020 10:49:58 +0000 (11:49 +0100)]
ppc stxsibx and stxsihx instructions write too much data
stxsibx (Store VSX Scalar as Integer Byte Indexed X-form) is implemented
by first reading a whole word, merging in the new byte, and then writing
out the whole word, causing memcheck to warn when the destination might
have room for fewer than 8 bytes.
The stxsihx (Store VSX Scalar as Integer Halfword Indexed X-form)
instruction does something similar, reading and then writing a full
word instead of a halfword.
The code can be simplified (and made more correct) by storing the byte
(or halfword) directly; IRStmt_Store seems fine for storing byte- or
halfword-sized data, and so does the ppc backend.
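A hedged sketch of the simpler form for the byte case. This is a fragment meant to live inside guest_ppc_toIR.c, so it leans on the helpers declared there (stmt, assign, unop, mkexpr, newTemp); EA and src_lo64 are assumed names.

   /* Narrow to 8 bits and store exactly one byte, so memcheck sees a
      1-byte access instead of a read-modify-write of the whole word. */
   static void store_vsx_byte ( IRTemp EA, IRTemp src_lo64 )
   {
      IRTemp byte = newTemp(Ity_I8);
      assign(byte, unop(Iop_64to8, mkexpr(src_lo64)));
      stmt( IRStmt_Store(Iend_BE, mkexpr(EA), mkexpr(byte)) );
   }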
Implement the new instructions/features that were added to z/Architecture
with the vector-enhancements facility 1. Also cover the instructions from
the vector-packed-decimal facility that are defined outside the chapter
"Vector Decimal Instructions", but not the ones from that chapter itself.
For a detailed list of newly supported instructions see the updates to
`docs/internals/s390-opcodes.csv'.
Since the miscellaneous instruction extensions facility 2 was already
addressed by Bug 404406, this completes the support necessary to run
general programs built with `--march=z14' under Valgrind. The
vector-packed-decimal facility is currently not exploited by the standard
toolchain and libraries.
Andreas Arnez [Thu, 3 Dec 2020 17:32:45 +0000 (18:32 +0100)]
Bug 429864 - s390: Use Iop_CasCmp* to fix memcheck false positives
Compare-and-swap instructions can cause memcheck false positives when
operating on partially uninitialized data. An example is where a 1-byte
lock is allocated on the stack and then manipulated using CS on the
surrounding word. This is correct, and the uninitialized data has no
influence on the result, but memcheck still complains.
This is caused by logic in the s390 backend, where the expected and actual
memory values are compared using Iop_Sub32. Fix this by using
Iop_CasCmpNE32 instead.
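A hedged sketch of the comparison at the IR level (constructors from libvex_ir.h; the surrounding compare-and-swap translation is omitted). Iop_CasCmpNE32 has the same semantics as Iop_CmpNE32, but memcheck instruments it so that equal-but-partially-undefined operands do not raise a report.

   #include "libvex_ir.h"

   /* Condition "the compare-and-swap did not match". */
   static IRExpr* cas_mismatch ( IRExpr* expected, IRExpr* observed )
   {
      return IRExpr_Binop(Iop_CasCmpNE32, expected, observed);
   }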
Currently, if there are multiple equal global peaks, `intro_Block` and
`resize_Block` record the first one while `check_for_peak` records the
last one. This could lead to inconsistent output, though it's unlikely
in practice.
This commit fixes things so that all functions record the last peak.
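An illustrative fragment of the difference, not the actual massif code: keeping the last of several equal peaks amounts to comparing with >= rather than >.

   /* Illustrative only.  With ">" the first of several equal peaks wins;
      with ">=" the last one does, which is the behaviour all three
      functions now share. */
   void maybe_record_peak ( unsigned long current, unsigned long when,
                            unsigned long* peak, unsigned long* peak_when )
   {
      if (current >= *peak) {
         *peak      = current;
         *peak_when = when;
      }
   }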
Add a missing ifdef, whose absence caused build breakage on non-POWER targets.
Fixed the compile issue in conv_f16_to_double() where non-Power platforms
do not support the Power xscvhpdp assembly instruction. The instruction
is supported by ISA 3.0 platforms. Older Power platforms still fail to
compile with the assembly instruction. This patch fixes the ifdef for
Power systems that do not support ISA 3.0.
Andreas Arnez [Tue, 3 Nov 2020 17:17:30 +0000 (18:17 +0100)]
Bug 428648 - s390x: Force 12-bit amode for vector loads in isel
Similar to Bug 417452, where the instruction selector sometimes attempted
to generate vector stores with a 20-bit displacement, the same problem has
now been reported with vector loads.
The problem is caused in s390_isel_vec_expr_wrk(), where the addressing
mode is generated with s390_isel_amode() instead of
s390_isel_amode_short(). This is fixed.
Julian Seward [Fri, 30 Oct 2020 16:34:14 +0000 (17:34 +0100)]
arm64 front end: mark a couple of vector load/store insns as "verbose".
Mark
LD3/ST3 (multiple 3-elem structs to/from 3 regs)
LD4/ST4 (multiple 4-elem structs to/from 4 regs)
as "verbose", since they can generate so much IR that a long sequence
of them causes later stages of the JIT to run out of space.
Mark Wielaard [Fri, 16 Oct 2020 00:55:06 +0000 (02:55 +0200)]
Support new faccessat2 linux syscall (439)
faccessat2 is a new syscall in linux 5.8 and will be used by glibc 2.33.
faccessat2 is simply faccessat with a new flag argument. It has
a common number across all linux arches.
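A minimal userspace sketch of the call shape (number 439 as in the title; AT_EACCESS is chosen here just as an example flag):

   #define _GNU_SOURCE
   #include <fcntl.h>          /* AT_FDCWD, AT_EACCESS */
   #include <unistd.h>         /* syscall, R_OK        */
   #include <sys/syscall.h>

   #ifndef __NR_faccessat2
   #define __NR_faccessat2 439 /* common number on all Linux arches */
   #endif

   int main ( void )
   {
      /* Same arguments as faccessat, plus the flags word the old
         syscall lacked. */
      long r = syscall(__NR_faccessat2, AT_FDCWD, "/etc/passwd",
                       R_OK, AT_EACCESS);
      return r == 0 ? 0 : 1;
   }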
Carl Love [Wed, 13 May 2020 20:19:07 +0000 (15:19 -0500)]
Add ISA 3.1 Vector Integer Multiply/Divide/Modulo Instructions
Add support for:
vdivesd Vector Divide Extended Signed Doubleword
vdivesw Vector Divide Extended Signed Word
vdiveud Vector Divide Extended Unsigned Doubleword
vdiveuw Vector Divide Extended Unsigned Word
vdivsd Vector Divide Signed Doubleword
vdivsw Vector Divide Signed Word
vdivud Vector Divide Unsigned Doubleword
vdivuw Vector Divide Unsigned Word
vmodsd Vector Modulo Signed Doubleword
vmodsw Vector Modulo Signed Word
vmodud Vector Modulo Unsigned Doubleword
vmoduw Vector Modulo Unsigned Word
vmulhsd Vector Multiply High Signed Doubleword
vmulhsw Vector Multiply High Signed Word
vmulhud Vector Multiply High Unsigned Doubleword
vmulhuw Vector Multiply High Unsigned Word
vmulld Vector Multiply Low Doubleword
Mark Wielaard [Wed, 23 Sep 2020 10:49:34 +0000 (12:49 +0200)]
Fix isa 3.1 test code on ppc64.
On ppc64 [old big endian] altivec.h cannot be included directly.
Move the HAS_ISA_3_1 guard around so the include is only done when
the full test (and test_list_t) are built.
Apparently on Fedora 33 the POSIX thread functions exist in both libc and
libpthread. Hence this patch, which intercepts the pthread functions in
libc. See also https://bugs.kde.org/show_bug.cgi?id=426144 .
On amd64, use by default the expensive instrumentation scheme for Iop_Add32.
This is necessary to avoid some false positives in code compiled by clang 10
at -O2. Some very crude measurements suggest the increase in generated code
size is around 0.2%, viz., insignificant.
Bug 425820 - Failure to recognize vpcmpeqq as a dependency breaking idiom.
In the IR optimiser (ir_opt.c): Recognise the following IROps as
dependency-breaking ops that generate an all-ones output: Iop_CmpEQ16x4,
Iop_CmpEQ32x2, Iop_CmpEQ64x2, Iop_CmpEQ8x32, Iop_CmpEQ16x16 and Iop_CmpEQ64x4. I
think this fixes all the known cases for sizes 32 bits to 256 bits. It also
fixes bug 425820.
This commit implements the amd64 RDSEED instruction, on hosts that have it,
and adds a new test case, none/tests/amd64/rdseed, based on the existing
rdrand support.
Mark Wielaard [Tue, 18 Aug 2020 21:58:55 +0000 (23:58 +0200)]
Fix epoll_ctl setting of array event and data fields.
Fix for https://bugs.kde.org/show_bug.cgi?id=422623. Commit ecf5ba119
"epoll_ctl warns for uninitialized padding on non-amd64 64bit arches"
contained a bug: a pointer to an array is not a pointer to a pointer to
an array. Found by a Fedora user:
https://bugzilla.redhat.com/show_bug.cgi?id=1844778#c10
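A generic illustration of the bug class, not the actual syswrap code: the syscall argument already points at the event array, so adding another level of indirection inspects the wrong memory.

   #include <sys/epoll.h>

   void check_events_demo ( void* arg )
   {
      struct epoll_event* events = (struct epoll_event*)arg;      /* right */
      /* struct epoll_event* bad = *(struct epoll_event**)arg;       wrong */
      (void)events;
   }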
Mark Wielaard [Sun, 26 Jul 2020 19:17:23 +0000 (21:17 +0200)]
Handle REX prefixed JMP instruction.
The .NET Core runtime might generate a JMP with a REX prefix.
For Jv (32bit offset) and Jb (8bit offset) this is valid.
Prefixes that change operand size are ignored for such JMPs.
So remove the check for sz == 4 and force sz = 4 for Jv.
Fix warning in syswrap sched_getattr print format.
m_syswrap/syswrap-linux.c:3716:10: warning: format '%ld' expects argument of type 'long int', but argument 4 has type 'RegWord' {aka 'long unsigned int'} [-Wformat=]
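A minimal illustration of the usual fix for this class of warning (assumed here, not taken from the actual patch): make the conversion specifier match the unsigned type, or cast at the call site.

   #include <stdio.h>

   int main ( void )
   {
      unsigned long attr_size = 0;                    /* stands in for RegWord */
      printf("size %lu\n", attr_size);                /* specifier matches     */
      printf("size %lu\n", (unsigned long)attr_size); /* or cast explicitly    */
      return 0;
   }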