Bob Duff [Thu, 22 Aug 2024 16:32:00 +0000 (12:32 -0400)]
ada: Fix Finalize_Storage_Only bug in b-i-p calls
Do not pass null for the Collection parameter when
Finalize_Storage_Only is in effect. If the collection
is null in that case, we will blow up later when we
deallocate the object.
gcc/ada/
* exp_ch6.adb (Add_Collection_Actual_To_Build_In_Place_Call):
Remove Finalize_Storage_Only from the code that checks whether to
pass null to the Collection parameter. Having done that, we don't
need to check for Is_Library_Level_Entity, because
No_Heap_Finalization requires that. And if we ever change
No_Heap_Finalization to allow nested access types, we will still
want to pass null. Note that the comment "Such a type lacks a
collection." is incorrect in the case of Finalize_Storage_Only;
such types have a collection.
Jennifer Schmitz [Fri, 30 Aug 2024 14:16:43 +0000 (07:16 -0700)]
SVE intrinsics: Fold constant operands for svmul.
This patch implements constant folding for svmul by calling
gimple_folder::fold_const_binary with tree_code MULT_EXPR.
Tests were added to check the produced assembly for different
predicates, signed and unsigned integers, and the svmul_n_* case.
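For illustration (a hypothetical example, not from the patch; the intrinsic
names are the ACLE ones from arm_sve.h and an SVE-enabled compile is assumed),
a call with constant operands and an all-true predicate like the one below can
now be folded to a constant vector at gimple time:
#include <arm_sve.h>
svint32_t
fold_example (void)
{
  /* With the new folding this becomes a constant vector of 15s instead of a
     runtime MUL.  */
  return svmul_n_s32_x (svptrue_b32 (), svdup_n_s32 (3), 5);
}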
The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
OK for mainline?
Jennifer Schmitz [Fri, 30 Aug 2024 14:03:49 +0000 (07:03 -0700)]
SVE intrinsics: Fold constant operands for svdiv.
This patch implements constant folding for svdiv:
The new function aarch64_const_binop was created, which - in contrast to
int_const_binop - does not treat operations as overflowing. This function is
passed as a callback to vector_const_binop from the new gimple_folder
method fold_const_binary, if the predicate is ptrue or predication is _x.
From svdiv_impl::fold, fold_const_binary is called with TRUNC_DIV_EXPR as
tree_code.
In aarch64_const_binop, a case was added for TRUNC_DIV_EXPR to return 0
for division by 0, as defined in the semantics for svdiv.
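For illustration (a hypothetical example, not from the patch), a fully
constant call such as the one below can therefore be folded away, with the
division by zero yielding 0 in every active lane:
#include <arm_sve.h>
svint32_t
div_by_zero_example (void)
{
  /* Under svdiv's semantics 7 / 0 is 0, so the whole call folds to a zero
     vector.  */
  return svdiv_n_s32_x (svptrue_b32 (), svdup_n_s32 (7), 0);
}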
Tests were added to check the produced assembly for different
predicates, signed and unsigned integers, and the svdiv_n_* case.
The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
OK for mainline?
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
* config/aarch64/aarch64-sve-builtins-base.cc (svdiv_impl::fold):
Try constant folding.
* config/aarch64/aarch64-sve-builtins.h: Declare
gimple_folder::fold_const_binary.
* config/aarch64/aarch64-sve-builtins.cc (aarch64_const_binop):
New function to fold binary SVE intrinsics without overflow.
(gimple_folder::fold_const_binary): New helper function for
constant folding of SVE intrinsics.
gcc/testsuite/
* gcc.target/aarch64/sve/const_fold_div_1.c: New test.
Jennifer Schmitz [Fri, 30 Aug 2024 13:56:52 +0000 (06:56 -0700)]
SVE intrinsics: Refactor const_binop to allow constant folding of intrinsics.
This patch sets the stage for constant folding of binary operations for SVE
intrinsics:
In fold-const.cc, the code for folding vector constants was moved from
const_binop to a new function vector_const_binop. This function takes a
function pointer as argument specifying how to fold the vector elements.
The intention is to call vector_const_binop from the backend with an
aarch64-specific callback function.
The code in const_binop for folding operations where the first operand is a
vector constant and the second argument is an integer constant was also moved
into vector_const_binop to allow folding of binary SVE intrinsics where
the second operand is an integer (_n).
To allow calling poly_int_binop from the backend, the latter was made public.
The patch was bootstrapped and regtested on aarch64-linux-gnu, no regression.
OK for mainline?
Signed-off-by: Jennifer Schmitz <jschmitz@nvidia.com>
gcc/
* fold-const.h: Declare vector_const_binop.
* fold-const.cc (const_binop): Remove cases for vector constants.
(vector_const_binop): New function that folds vector constants
element-wise.
(int_const_binop): Remove call to wide_int_binop.
(poly_int_binop): Add call to wide_int_binop.
Richard Biener [Mon, 2 Sep 2024 13:12:58 +0000 (15:12 +0200)]
Handle mixing REALPART/IMAGPART with other components in SLP groups
The following makes sure we handle a SLP load/store group from
a structure with complex and scalar members. This for example
happens in gcc.target/i386/pr106010-9a.c.
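A hypothetical sketch (not the testcase itself) of the kind of store group
involved, mixing REALPART/IMAGPART accesses with a plain component access:
struct cs { _Complex double c; double d; };
void
store_group (struct cs *p, double x, double y, double z)
{
  __real__ p->c = x;
  __imag__ p->c = y;
  p->d = z;
}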
* tree-vect-slp.cc (vect_build_slp_tree_1): Handle mixing
all of the handled components besides ARRAY_RANGE_REF; drop
handling of INDIRECT_REF.
Richard Biener [Mon, 2 Sep 2024 09:16:12 +0000 (11:16 +0200)]
Correctly handle store IFNs in vect_get_vector_types_for_stmt
Currently vect_get_vector_types_for_stmt only special-cases
IFN_MASK_STORE but there are now very many variants and simply
passing analysis without setting *VECTYPE will ICE during SLP
discovery (noticed with IFN_SCATTER_STORE). The following
properly uses internal_store_fn_p. I also noticed we're
unnecessarily handling those again to determine the scalar type
but there should always be a data reference for them.
* tree-vect-stmts.cc (vect_get_vector_types_for_stmt):
Handle all internal_store_fn_p the same. Remove special-casing
for the scalar_type of IFN_MASK_STORE.
Levy Hsu [Mon, 26 Aug 2024 01:16:30 +0000 (10:46 +0930)]
i386: Support partial vectorized V2BF/V4BF plus/minus/mult/div/sqrt
This patch introduces new mode iterators and expands for the i386 architecture to support partial vectorization of bf16 operations using AVX10.2 instructions.
gcc/ChangeLog:
* config/i386/mmx.md (VBF_32_64): New mode iterator for partial vectorized V2BF/V4BF.
(<insn><mode>3): New define_expand for plusminusmultdiv.
(sqrt<mode>2): New define_expand for sqrt.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx10_2-partial-bf-vector-fast-math-1.c: New test.
* gcc.target/i386/avx10_2-partial-bf-vector-operations-1.c: New test.
Before this patch:
sat_s_add_int64_t_fmt_1:
        mv      a5,a0
        add     a0,a0,a1
        xor     a1,a5,a1
        not     a1,a1
        xor     a4,a5,a0
        and     a1,a1,a4
        blt     a1,zero,.L5
        ret
.L5:
        srai    a5,a5,63
        li      a0,-1
        srli    a0,a0,1
        xor     a0,a5,a0
        ret
After this patch:
sat_s_add_int64_t_fmt_1:
        add     a2,a0,a1
        xor     a1,a0,a1
        xor     a5,a0,a2
        srli    a5,a5,63
        srli    a1,a1,63
        xori    a1,a1,1
        and     a5,a5,a1
        srai    a4,a0,63
        li      a3,-1
        srli    a3,a3,1
        xor     a3,a3,a4
        neg     a4,a5
        and     a3,a3,a4
        addi    a5,a5,-1
        and     a0,a2,a5
        or      a0,a0,a3
        ret
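For reference, a plausible sketch (assumed, not the testsuite source) of the
format-1 signed saturating add that the new ssadd<mode>3 pattern targets:
#include <stdint.h>
int64_t
sat_s_add_int64_t_fmt_1 (int64_t x, int64_t y)
{
  /* Wrapping add, then clamp to INT64_MIN/INT64_MAX when both operands have
     the same sign but the sum's sign differs.  */
  int64_t sum = (int64_t) ((uint64_t) x + (uint64_t) y);
  if ((x ^ y) >= 0 && (x ^ sum) < 0)
    return x < 0 ? INT64_MIN : INT64_MAX;
  return sum;
}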
The below test suites passed for this patch:
1. The rv64gcv full regression test.
gcc/ChangeLog:
* config/riscv/riscv-protos.h (riscv_expand_ssadd): Add new func
decl for expanding ssadd.
* config/riscv/riscv.cc (riscv_gen_sign_max_cst): Add new func
impl to gen the max int rtx.
(riscv_expand_ssadd): Add new func impl to expand the ssadd.
* config/riscv/riscv.md (ssadd<mode>3): Add new pattern for
signed integer .SAT_ADD.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_arith.h: Add test helper macros.
* gcc.target/riscv/sat_arith_data.h: Add test data.
* gcc.target/riscv/sat_s_add-1.c: New test.
* gcc.target/riscv/sat_s_add-2.c: New test.
* gcc.target/riscv/sat_s_add-3.c: New test.
* gcc.target/riscv/sat_s_add-4.c: New test.
* gcc.target/riscv/sat_s_add-run-1.c: New test.
* gcc.target/riscv/sat_s_add-run-2.c: New test.
* gcc.target/riscv/sat_s_add-run-3.c: New test.
* gcc.target/riscv/sat_s_add-run-4.c: New test.
* gcc.target/riscv/scalar_sat_binary_run_xxx.h: New test.
YunQiang Su [Mon, 26 Aug 2024 00:45:36 +0000 (08:45 +0800)]
MIPS: Support vector reduc for MSA
We have SHF.fmt and HADD_S/U.fmt with MSA, which can be used for
vector reduc.
For min/max of U8/S8, we can use:
SHF.B W1, W0, 0xb1 # swap bytes within each halfword
MIN.B W1, W1, W0
SHF.H W2, W1, 0xb1 # swap halfwords within each word
MIN.B W2, W2, W1
SHF.W W3, W2, 0xb1 # swap words within each doubleword
MIN.B W4, W3, W2
SHF.W W4, W4, 0x4e # swap the two doublewords
MIN.B W4, W4, W3
For plus of S8/U8, we can use HADD
HADD.H W0, W0, W0
HADD.W W0, W0, W0
HADD.D W0, W0, W0
SHF.W W1, W0, 0x4e # swap the two doublewords
ADDV.D W1, W1, W0
COPY_S.B T0, W1 # COPY_U.B for U8
We can do similar for S16/U16/S32/U32/S64/U64/FLOAT/DOUBLE.
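As a hypothetical example (not from the patch), a scalar loop like the
following is what the new reduc_smin_scal_<mode> pattern lets the vectorizer
implement with the SHF/MIN sequence above, assuming a 16-element S8 MSA vector:
#include <stdint.h>
int8_t
reduce_min_s8 (const int8_t *a)
{
  int8_t m = a[0];
  for (int i = 1; i < 16; i++)
    m = a[i] < m ? a[i] : m;
  return m;
}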
gcc
* config/mips/mips-msa.md (MSA_NO_HADD): We have HADD for
S8/U8/S16/U16/S32/U32 only.
(reduc_smin_scal_<mode>): New define pattern.
(reduc_smax_scal_<mode>): Ditto.
(reduc_umin_scal_<mode>): Ditto.
(reduc_umax_scal_<mode>): Ditto.
(reduc_plus_scal_<mode>): Ditto.
(reduc_plus_scal_v4si): Ditto.
(reduc_plus_scal_v8hi): Ditto.
(reduc_plus_scal_v16qi): Ditto.
(reduc_<optab>_scal_<mode>): Ditto.
* config/mips/mips-protos.h (mips_expand_msa_reduc): New function
declaration.
* config/mips/mips.cc (mips_expand_msa_reduc): New function.
* config/mips/mips.md: Define any_bitwise iterator.
30_threads/future/members/poll.cc has calibration code that, on
systems with very low clock resolution, may spuriously fail to run.
Even when it does run, low resolution and reasonable
timeouts severely limit the viability of increasing the loop counts so
as to reduce measurement noise, so we end up with very noisy results.
On various vxworks targets, high iteration count (low-noise)
measurements confirmed that some of the operations that we expected to
be up to 100x slower than the fastest ones can run a little slower
than that and, with significant noise, may seem to be even slower,
comparatively.
Bump the factors up to 200x, so that we have plenty of margin over
measured results.
for libstdc++-v3/ChangeLog
* testsuite/30_threads/future/members/poll.cc: Factor out
calibration, and run it unconditionally. Lower its
strictness. Bump wait_until_*'s slowness factor.
[libstdc++] [testsuite] avoid async.cc loss of precision [PR91486]
When we get to test_pr91486_wait_until(), we're about 10s past the
float_steady_clock epoch. This is enough for the 1s delta for the
timeout to come out slightly lower when the futex-less wait_until
converts the deadline from float_steady_clock to __clock_t. So we may
wake up a little too early, and end up looping one extra time to sleep
for e.g. another 954ns until we hit the deadline.
Each iteration calls float_steady_clock::now(), bumping the call_count
that we VERIFY() at the end of the subtest. Since we expect at most 3
calls, and we're going to have at the very least 3 on futex-less
targets (one in the test proper, one before wait_until_impl to compute
the deadline, and one after wait_until_impl to check whether the
deadline was hit), any such imprecision that causes an extra iteration
will reach 5 and cause the test to fail.
Initializing the epoch in the beginning of the test makes such
spurious fails due to loss of precision far less likely. I don't
suppose allowing for an extra couple of calls would be desirable.
While at that, I'm annotating unused status variables as such.
for libstdc++-v3/ChangeLog
PR libstdc++/91486
* testsuite/30_threads/async/async.cc
(test_pr91486_wait_for): Mark status as unused.
(test_pr91486_wait_until): Likewise. Initialize epoch later.
[testsuite] add linkonly to dg-additional-sources [PR115295]
The D testsuite shows it was a mistake to assume that
dg-additional-sources are never to be used for compilation tests.
Even if an output file is specified for compilation, extra module
files can be named and used in the compilation without being flagged
as errors.
Introduce a 'linkonly' flag for dg-additional-sources, and use it in
pr95401.cc and other vector tests that default to run, so that their
additional sources get discarded when vector tests downgrade to
compile-only. This reverts previous workarounds for this very
circumstance, that relied on being able to run vector tests anyway,
even after failing to detect runtime or hardware vector support.
Andrew Stubbs [Tue, 6 Aug 2024 16:00:21 +0000 (16:00 +0000)]
amdgcn: Remove TARGET_GCN5_PLUS
Now that GCN3 support is gone, TARGET_GCN5_PLUS always evaluates to true, so
we can make that code unconditional, and remove all the "else" cases.
The ISA features TARGET_GLOBAL_ADDRSPACE, TARGET_FLAT_OFFSETS,
TARGET_EXPLICIT_CARRY, and TARGET_MULTIPLY_IMMEDIATE are likewise
redundant and can be made unconditional.
The naming of the "gcn_version" attribute has been confusing since the "rdna"
attribute was added, and this makes it worse, so it has been renamed to "cdna".
The add-with-carry assembler mnemonics no longer have two forms, so '%^' can be
removed.
Gaius Mulley [Mon, 2 Sep 2024 12:29:25 +0000 (13:29 +0100)]
PR modula2/116557 Remove physical address from the GPL header comment
This patch removes the physical address from all the header comments
in the m2 subdirectory. The physical address is replaced with the
text "You should have received a copy of the GNU General Public License
along with GCC; see the file COPYING3. If not see
<http://www.gnu.org/licenses/>." instead.
Since r15-3254-g3f51f0dc88ec21c1ec79df694200f10ef85915f4
added scan-ltrans-rtl* variants to scanltranstree.exp, it no longer
makes sense to have "tree" in the name. This renames the file
accordingly and updates users.
libatomic/ChangeLog:
* testsuite/lib/libatomic.exp: Load scanltrans.exp instead of
scanltranstree.exp.
libgomp/ChangeLog:
* testsuite/lib/libgomp.exp: Load scanltrans.exp instead of
scanltranstree.exp.
libitm/ChangeLog:
* testsuite/lib/libitm.exp: Load scanltrans.exp instead of
scanltranstree.exp.
libphobos/ChangeLog:
* testsuite/lib/libphobos-dg.exp: Load scanltrans.exp instead of
scanltranstree.exp.
libvtv/ChangeLog:
* testsuite/lib/libvtv.exp: Load scanltrans.exp instead of
scanltranstree.exp.
gcc/testsuite/ChangeLog:
* gcc.dg-selftests/dg-final.exp: Load scanltrans.exp instead of
scanltranstree.exp.
* lib/gcc-dg.exp: Likewise.
* lib/scanltranstree.exp: Rename to ...
* lib/scanltrans.exp: ... this.
ASM_INPUT_P is so named because it causes the eventual rtl insn
pattern to be a top-level ASM_INPUT rather than an ASM_OPERANDS.
However, this name has caused confusion, partly due to earlier
documentation. The name also sounds related to ASM_INPUTS but
is for a different piece of state.
This patch renames it to ASM_BASIC_P, with the inverse (the flag being
clear) meaning an extended asm. ("Basic asm" is the term used in extend.texi.)
Eric Botcazou [Tue, 20 Aug 2024 15:40:41 +0000 (17:40 +0200)]
ada: Diagnose too large size clause on floating-point type
The problem is that the size clause changes the floating-point format used
for the type, but it must not when this format is the widest format that is
supported in hardware on the target. Instead a padding type must be built
and the associated warning given.
gcc/ada/
* gcc-interface/decl.cc (gnat_to_gnu_entity): Cap the Esize of a
floating-point type to the size of the widest format supported in
hardware if it is explicitly defined.
ada: Fix standard output stream for gnatcmd output
Before this patch, the gnat command sent to standard error pieces of
information that are a better match for standard output. This patch
makes this information go to standard output.
gcc/ada/
* gnatcmd.adb (GNATCmd): Fix standard output stream.
Before this patch, the documentation of -gnaty0 used 0-based indexing
for column numbers while 1-based indexing is used everywhere else. This
patch makes this documentation use 1-based indexing, and also adds a
missing parenthesis.
gcc/ada/
* doc/gnat_ugn/building_executable_programs_with_gnat.rst: Fix
minor issues.
* gnat_ugn.texi: Regenerate.
Bob Duff [Sun, 18 Aug 2024 23:13:46 +0000 (19:13 -0400)]
ada: Documentation for generic type inference
...plus minor improvements to existing documentation.
gcc/ada/
* doc/gnat_rm/gnat_language_extensions.rst: I assume "extended set
of extensions" was a typo for "experimental set of extensions",
because "extended extensions" is repetitive and redundant. "in
addition" clarifies that the one subsumes the other. Add a
reminder at the start of each subsection about what switch/pragma
enables what extensions. Add new section about "Inference of
Dependent Types in Generic Instantiations".
* gnat_rm.texi: Regenerate.
Marc Poulhiès [Thu, 8 Aug 2024 11:36:37 +0000 (13:36 +0200)]
ada: Also reset scope for some nested declaration
When changing the scope for entities found in the entry body that is
mutated into a procedure, the compiler needs to look deeper than only
the top level entities as expansion may produce object declarations
which scopes are also the entry. For example, the tree after expansion
may look like:
procedure This_Is_An_Entry_Proc is
...
O1 : Typ := do
TMP1 : OTyp := ...;
...
in TMP1;
O1's scope needs to be reset to This_Is_An_Entry_Proc, but so does
TMP1's scope.
This change also fixes a small oversight: the scope of an
N_Implicit_Label_Declaration must be reset and its content
skipped.
gcc/ada/
* exp_ch9.adb (Reset_Scopes_To): Adjust comment.
(Reset_Scopes_To.Reset_Scope): Adjust the scope reset for object
declaration. In particular, visit the children nodes if any. Also
extend the handling of other declarations to
N_Implicit_Label_Declaration.
Jakub Jelinek [Mon, 2 Sep 2024 07:44:09 +0000 (09:44 +0200)]
ranger: Fix up range computation for CLZ [PR116486]
The initial CLZ gimple-range-op.cc implementation handled just the
case where the second argument to .CLZ is equal to prec, but in
r15-1014 I've also added handling of the -1 case. As the following
testcase shows, incorrectly though for the case where the first argument
has [0,0] range. If the second argument is prec, then the result should
be [prec,prec] and that was handled correctly, but when the second argument
is -1, the result should be [-1,-1] but instead it was incorrectly computed
as [prec-1,prec-1] (when second argument is prec, mini is 0 and maxi is
prec, while when second argument is -1, mini is -1 and maxi is prec-1).
Fixed thusly (the actual handling is then similar to the CTZ [0,0] case).
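A minimal sketch of the affected case (hypothetical; __builtin_clzg is just
one way to produce a .CLZ call with a -1 fallback value):
int
clz_of_zero (unsigned x)
{
  if (x == 0)
    /* Ranger must compute [-1,-1] here, not [31,31] for a 32-bit type.  */
    return __builtin_clzg (x, -1);
  return 0;
}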
2024-09-02 Jakub Jelinek <jakub@redhat.com>
PR middle-end/116486
* gimple-range-op.cc (cfn_clz::fold_range): If lh is [0,0]
and mini is -1, return [-1,-1] range rather than [prec-1,prec-1].
Richard Biener [Fri, 5 Jul 2024 08:35:08 +0000 (10:35 +0200)]
load and store-lanes with SLP
The following is a prototype for how to represent load/store-lanes
within SLP. For now I've settled on having a single load node
with multiple permute nodes acting as selection, one for each loaded lane
and a single store node fed from all stored lanes. For
for (int i = 0; i < 1024; ++i)
{
a[2*i] = b[2*i] + 7;
a[2*i+1] = b[2*i+1] * 3;
}
you have the following SLP graph where I explain how things are set
up and code-generated:
This is the load node, marked with ldst_lanes = true (the load
permutation is only accurate when taking into account the lane permute
in the selection nodes). It code generates
This scheme allows code generation in vectorizable_load/store to be left
mostly as-is.
While this should support both load-lanes and (masked) store-lanes
the decision to do either is done during SLP discovery time and
cannot be reversed without altering the SLP tree - as-is the SLP
tree is not usable for non-store-lanes on the store side, the
load side is OK representation-wise but will very likely fail
permute handling as the lowering to deal with the two input vector
restriction isn't done - but of course since the permute node is
marked as to be ignored that doesn't work out. So I've put
restrictions in place that fail vectorization if a load/store-lane
SLP tree is later classified differently by get_load_store_type.
I'll note that, for example, gcc.target/aarch64/sve/mask_struct_store_3.c
will not get SLP store-lanes used because the full store SLPs just
fine, though we then fail to handle the "splat" load-permutation;
the load permute lowering code currently doesn't consider it worth
lowering single loads from a group (or, in this case, non-grouped loads).
The expectation is that the target can handle this by two interleaves with
itself.
So what we see here is that while the explicit SLP representation is
helpful in some cases, in cases like this it would require changing
it when we make decisions how to vectorize. My idea is that this
all will change a lot when we re-do SLP discovery (for loops) and
when we get rid of non-SLP as I think vectorizable_* should be
allowed to alter the SLP graph during analysis.
The patch also removes the code cancelling SLP if we can use
load/store-lanes from the main loop vector analysis code and
re-implements it as re-discovering the SLP instance with
forced single-lane splits so SLP load/store-lanes scheme can be
used.
This is now done after SLP discovery and SLP pattern recog are
complete to not disturb the latter but per SLP instance instead
of being a global decision on the whole loop.
This is a behavioral change that for example shows in
gcc.dg/vect/slp-perm-6.c on ARM where we formerly used SLP permutes
but now a mix of SLP without permutes and load/store lanes. The
previous flaky heuristic is now flaky in a different way.
Testing on RISC-V and aarch64 reveals several testcases that require
adjustment so as to now expect SLP even when load/store lanes are being
used. If in doubt I've adjusted them to the final expectation which
will lead to one or two new FAILs where we still do the SLP cancelling.
I have a followup that implements that while remaining in SLP that's
in final testing.
Note that gcc.dg/vect/slp-42.c and gcc.dg/vect/pr68445.c will FAIL
on aarch64 with SVE because for some odd reason vect_stridedN
is true for any N for check_effective_target_vect_fully_masked
targets but SVE cannot do ld8 while risc-v can.
I have not bothered to adjust target tests that now fail assembly-scan.
* tree-vectorizer.h (_slp_tree::ldst_lanes): New flag to mark
load, store and permute nodes.
* tree-vect-slp.cc (_slp_tree::_slp_tree): Initialize ldst_lanes.
(vect_build_slp_instance): For stores iff the target prefers
store-lanes discover single-lane sub-groups, do not perform
interleaving lowering but mark the node with ldst_lanes.
Also allow i == 0 - fatal failure - for splitting up a store group
when we're not doing single-lane discovery already.
(vect_lower_load_permutations): When the target supports
load lanes and the loads all fit the pattern split out
a single level of permutes only and mark the load and
permute nodes with ldst_lanes.
(vectorizable_slp_permutation_1): Handle the load-lane permute
forwarding of vector defs.
(vect_analyze_slp): After SLP pattern recog is finished see if
there are any SLP instances that would benefit from using
load/store-lanes and re-discover those with forced single lanes.
* tree-vect-stmts.cc (get_group_load_store_type): Support
load/store-lanes for SLP.
(vectorizable_store): Support SLP code generation for store-lanes.
(vectorizable_load): Support SLP code generation for load-lanes.
* tree-vect-loop.cc (vect_analyze_loop_2): Do not cancel SLP
when store-lanes can be used.
Richard Biener [Mon, 13 May 2024 12:57:01 +0000 (14:57 +0200)]
lower SLP load permutation to interleaving
The following emulates classical interleaving for SLP load permutes
that we are unlikely to handle natively. This is to handle cases
where interleaving (or load/store-lanes) is the optimal choice for
vectorizing even when we are doing that within SLP. An example
would be
void foo (int * __restrict a, int * b)
{
for (int i = 0; i < 16; ++i)
{
a[4*i + 0] = b[4*i + 0] * 3;
a[4*i + 1] = b[4*i + 1] + 3;
a[4*i + 2] = (b[4*i + 2] * 3 + 3);
a[4*i + 3] = b[4*i + 3] * 3;
}
}
where currently the SLP store is merging four single-lane SLP
sub-graphs but none of the loads in it can be code-generated
with V4SImode vectors and a VF of four as the permutes would need
three vectors.
The patch introduces a lowering phase after SLP discovery but
before SLP pattern recognition or permute optimization that
analyzes all loads from the same dataref group and creates an
interleaving scheme starting from an unpermuted load.
What can be handled is power-of-two group size and a group size of
three. The possibility for doing the interleaving with a load-lanes
like instruction is done as followup.
For a group-size of three this is done by using
the non-interleaving fallback code which then creates at VF == 4 from
{ { a0, b0, c0 }, { a1, b1, c1 }, { a2, b2, c2 }, { a3, b3, c3 } }
the intermediate vectors { c0, c0, c1, c1 } and { c2, c2, c3, c3 }
to produce { c0, c1, c2, c3 }. This turns out to be more effective
than the scheme implemented for non-SLP for SSE and only slightly
worse for AVX512 and a bit more worse for AVX2. It seems to me that
this would extend to other non-power-of-two group-sizes though (but
the patch does not). Optimal schemes are likely difficult to lay out
in VF agnostic form.
I'll note that while the lowering assumes even/odd extract is
generally available for all vector element sizes (which is probably
a good assumption), it doesn't in any way constrain the other
permutes it generates based on target availability. Again difficult
to do in a VF agnostic way (but at least currently the vector type
is fixed).
I'll also note that the SLP store side merges lanes in a way
producing three-vector permutes for store group-size of three, so
the testcase uses a store group-size of four.
The patch has a fallback for when there are multi-lane groups
and the resulting permutes do not fit interleaving. Code
generation is not optimal when this triggers and might be
worse than doing single-lane group interleaving.
The patch handles gaps by representing them with NULL
entries in SLP_TREE_SCALAR_STMTS for the unpermuted load node.
The SLP discovery changes could be elided if we manually build the
load node instead.
SLP load nodes covering enough lanes to not need intermediate
permutes are retained as having a load-permutation and do not
use the single SLP load node for each dataref group. That's
something we might want to change, making load-permutation
something purely local to SLP discovery (but then SLP discovery
could do part of the lowering).
The patch misses CSEing intermediate generated permutes and
registering them with the bst_map which is possibly required
for SLP pattern detection in some cases - this re-spin of the
patch moves the lowering after SLP pattern detection.
* tree-vect-slp.cc (vect_build_slp_tree_1): Handle NULL stmt.
(vect_build_slp_tree_2): Likewise. Release load permutation
when there's a NULL in SLP_TREE_SCALAR_STMTS and assert there's
no actual permutation in that case.
(vllp_cmp): New function.
(vect_lower_load_permutations): Likewise.
(vect_analyze_slp): Call it.
* gcc.dg/vect/slp-11a.c: Expect SLP.
* gcc.dg/vect/slp-12a.c: Likewise.
* gcc.dg/vect/slp-51.c: New testcase.
* gcc.dg/vect/slp-52.c: New testcase.
[PATCH] RISC-V: Optimize the cost of the DFmode register move for RV32.
Currently, in RV32, even with the D extension enabled, the cost of DFmode
register moves is still set to 'COSTS_N_INSNS (2)'. This results in the
'lower-subreg' pass splitting DFmode register moves into two SImode SUBREG
register moves, leading to the generation of many redundant instructions.
As an example, consider the following test case:
double foo (int t, double a, double b)
{
if (t > 0)
return a;
else
return b;
}
When compiling with -march=rv32imafdc -mabi=ilp32d, the following code is generated:
.cfi_startproc
addi sp,sp,-32
.cfi_def_cfa_offset 32
fsd fa0,8(sp)
fsd fa1,16(sp)
lw a4,8(sp)
lw a5,12(sp)
lw a2,16(sp)
lw a3,20(sp)
bgt a0,zero,.L1
mv a4,a2
mv a5,a3
.L1:
sw a4,24(sp)
sw a5,28(sp)
fld fa0,24(sp)
addi sp,sp,32
.cfi_def_cfa_offset 0
jr ra
.cfi_endproc
After adjusting the DFmode register move's cost to 'COSTS_N_INSNS (1)', the
generated code is as follows, with a significant reduction in the number
of instructions.
.cfi_startproc
ble a0,zero,.L5
ret
.L5:
fmv.d fa0,fa1
ret
.cfi_endproc
gcc/
* config/riscv/riscv.cc (riscv_rtx_costs): Optimize the cost of the
DFmode register move for RV32.
gcc/testsuite/
* gcc.target/riscv/rv32-movdf-cost.c: New test.
Jeff Law [Mon, 2 Sep 2024 04:16:04 +0000 (22:16 -0600)]
[committed][PR rtl-optimization/116544] Fix test for promoted subregs
This is a small bug in the ext-dce code's handling of promoted subregs.
Essentially when we see a promoted subreg we need to make additional bit groups
live as various parts of the RTL path know that an extension of a suitably
promoted subreg can be trivially eliminated.
When I added support for dealing with this quirk I failed to account for the
larger modes properly and it ignored the case when the size of the inner object
was > 32 bits. Oops.
This does _not_ fix the outstanding x86 issue. That's caused by something
completely different and more concerning ;(
Bootstrapped and regression tested on x86. Obviously fixes the testcase on
riscv as well.
Pushing to the trunk.
PR rtl-optimization/116544
gcc/
* ext-dce.cc (ext_dce_process_uses): Fix thinko in promoted subreg
handling.
gcc/testsuite/
* gcc.dg/torture/pr116544.c: New test.
Levy Hsu [Mon, 2 Sep 2024 02:24:49 +0000 (10:24 +0800)]
i386: Support vec_cmp for V8BF/V16BF/V32BF in AVX10.2
gcc/ChangeLog:
* config/i386/i386-expand.cc (ix86_use_mask_cmp_p): Add BFmode
for int mask cmp.
* config/i386/sse.md (vec_cmp<mode><avx512fmaskmodelower>): New
vec_cmp expand for VBF modes.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx10_2-512-bf-vector-cmpp-1.c: New test.
* gcc.target/i386/avx10_2-bf-vector-cmpp-1.c: Ditto.
Levy Hsu [Mon, 2 Sep 2024 02:24:45 +0000 (10:24 +0800)]
i386: Support vectorized BF16 add/sub/mul/div with AVX10.2 instructions
AVX10.2 introduces several non-exception instructions for BF16 vectors.
Enable vectorized BF16 add/sub/mul/div operations by supporting the
standard optabs for them.
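As a hypothetical illustration (assuming the __bf16 type and an AVX10.2-enabled
compile), a plain BF16 loop such as this one can now be vectorized through the
standard optabs instead of being scalarized:
__bf16 a[32], b[32], c[32];
void
bf16_add (void)
{
  for (int i = 0; i < 32; i++)
    c[i] = a[i] + b[i];
}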
gcc/ChangeLog:
* config/i386/sse.md (div<mode>3): New expander for BFmode div.
(VF_BHSD): New mode iterator with vector BFmodes.
(<insn><mode>3<mask_name><round_name>): Change mode to VF_BHSD.
(mul<mode>3<mask_name><round_name>): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/i386/avx10_2-512-bf-vector-operations-1.c: New test.
* gcc.target/i386/avx10_2-bf-vector-operations-1.c: Ditto.
i386: Auto vectorize sdot_prod, usdot_prod, udot_prod with AVX10.2 instructions
gcc/ChangeLog:
* config/i386/sse.md (VI1_AVX512VNNIBW): New.
(VI2_AVX10_2): Ditto.
(sdot_prod<mode>): Add AVX10.2
to auto vectorize and combine 512 bit part.
(udot_prod<mode>): Ditto.
(sdot_prodv64qi): Removed.
(udot_prodv64qi): Ditto.
(usdot_prod<mode>): Add AVX10.2 to auto vectorize.
(udot_prod<mode>): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/i386/vnniint16-auto-vectorize-2.c: Only define
TEST when not defined.
* gcc.target/i386/vnniint8-auto-vectorize-2.c: Ditto.
* gcc.target/i386/vnniint16-auto-vectorize-3.c: New test.
* gcc.target/i386/vnniint16-auto-vectorize-4.c: Ditto.
* gcc.target/i386/vnniint8-auto-vectorize-3.c: Ditto.
* gcc.target/i386/vnniint8-auto-vectorize-4.c: Ditto.
The below test passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_u_trunc-16.c: New test.
* gcc.target/riscv/sat_u_trunc-17.c: New test.
* gcc.target/riscv/sat_u_trunc-18.c: New test.
* gcc.target/riscv/sat_u_trunc-run-16.c: New test.
* gcc.target/riscv/sat_u_trunc-run-17.c: New test.
* gcc.target/riscv/sat_u_trunc-run-18.c: New test.
The below test passed for this patch.
* The rv64gcv regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/sat_u_trunc-10.c: New test.
* gcc.target/riscv/sat_u_trunc-11.c: New test.
* gcc.target/riscv/sat_u_trunc-12.c: New test.
* gcc.target/riscv/sat_u_trunc-run-10.c: New test.
* gcc.target/riscv/sat_u_trunc-run-11.c: New test.
* gcc.target/riscv/sat_u_trunc-run-12.c: New test.
Pan Li [Fri, 30 Aug 2024 03:01:37 +0000 (11:01 +0800)]
RISC-V: Add testcases for form 4 of unsigned vector .SAT_ADD IMM
This patch adds test cases for the unsigned vector .SAT_ADD
when one of the operands is an IMM.
Form 4:
#define DEF_VEC_SAT_U_ADD_IMM_FMT_4(T, IMM) \
T __attribute__((noinline)) \
vec_sat_u_add_imm##IMM##_##T##_fmt_4 (T *out, T *in, unsigned limit) \
{ \
unsigned i; \
T ret; \
for (i = 0; i < limit; i++) \
{ \
out[i] = __builtin_add_overflow (in[i], IMM, &ret) == 0 ? ret : -1; \
} \
}
DEF_VEC_SAT_U_ADD_IMM_FMT_4(uint64_t, 123)
The below tests passed for this patch.
* The rv64gcv full regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/vec_sat_arith.h: Add test helper macros.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-13.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-14.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-15.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-16.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-13.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-14.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-15.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-16.c: New test.
Pan Li [Fri, 30 Aug 2024 00:36:45 +0000 (08:36 +0800)]
RISC-V: Add testcases for form 3 of unsigned vector .SAT_ADD IMM
This patch adds test cases for the unsigned vector .SAT_ADD
when one of the operands is an IMM.
Form 3:
#define DEF_VEC_SAT_U_ADD_IMM_FMT_3(T, IMM) \
T __attribute__((noinline)) \
vec_sat_u_add_imm##IMM##_##T##_fmt_3 (T *out, T *in, unsigned limit) \
{ \
unsigned i; \
T ret; \
for (i = 0; i < limit; i++) \
{ \
out[i] = __builtin_add_overflow (in[i], IMM, &ret) ? -1 : ret; \
} \
}
DEF_VEC_SAT_U_ADD_IMM_FMT_3(uint64_t, 123)
The below tests passed for this patch.
* The rv64gcv full regression test.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-10.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-11.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-12.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-9.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-10.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-11.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-12.c: New test.
* gcc.target/riscv/rvv/autovec/binop/vec_sat_u_add_imm-run-9.c: New test.
Pan Li [Fri, 30 Aug 2024 06:07:12 +0000 (14:07 +0800)]
RISC-V: Refactor gen zero_extend rtx for SAT_* when expand SImode in RV64
Previously, we had some special handling for both .SAT_ADD and
.SAT_SUB for unsigned int. They are similar in how they take care of
SImode in RV64 for zero extension. Thus refactor these two helper
functions into one to avoid code duplication.
The below test suite passed for this patch.
* The rv64gcv full regression test.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_gen_zero_extend_rtx): Merge
the zero_extend handing from func riscv_gen_unsigned_xmode_reg.
(riscv_gen_unsigned_xmode_reg): Remove.
(riscv_expand_ussub): Leverage riscv_gen_zero_extend_rtx
instead of riscv_gen_unsigned_xmode_reg.
Andrew Pinski [Sun, 1 Sep 2024 00:23:19 +0000 (17:23 -0700)]
slsr: Use simple_dce_from_worklist in SLSR [PR116554]
While working on a phiopt patch, it was noticed that
SLSR would leave around some unused ssa names. Let's
add simple_dce_from_worklist usage to SLSR to remove
the dead statements. This should give a small improvement
for passes afterwards.
Bootstrapped and tested on x86_64.
gcc/ChangeLog:
PR tree-optimization/116554
* gimple-ssa-strength-reduction.cc: Include tree-ssa-dce.h.
(replace_mult_candidate): Add sdce_worklist argument, mark
the rhs1/rhs2 for maybe dceing.
(replace_unconditional_candidate): Add sdce_worklist argument,
Update call to replace_mult_candidate.
(replace_conditional_candidate): Add sdce_worklist argument,
update call to replace_mult_candidate.
(replace_uncond_cands_and_profitable_phis): Add sdce_worklist argument,
update call to replace_conditional_candidate,
replace_unconditional_candidate, and replace_uncond_cands_and_profitable_phis.
(replace_one_candidate): Add sdce_worklist argument, mark
the orig_rhs1/orig_rhs2 for maybe dceing.
(replace_profitable_candidates): Add sdce_worklist argument,
update call to replace_one_candidate and replace_profitable_candidates.
(analyze_candidates_and_replace): Call simple_dce_from_worklist and
update calls to replace_profitable_candidates, and
replace_uncond_cands_and_profitable_phis.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
testsuite: Prune compilation messages for modules tests
All testsuite compiler-calls pass default_target_compile in the
dejagnu installation (typically /usr/share/dejagnu/target.exp) which
also calls the dejagnu-installed prune_warnings.
Normally, tests using the dg framework (most or all tests these days)
compile and link by calling various wrappers that end up calling
dg-test in the dejagnu installation, typically installed as
/usr/share/dejagnu/dg.exp. That, besides the compiler call, also
calls ${tool}-dg-prune (g++-dg-prune) on the messages, which in turn
ends up calling prune_gcc_output in gcc/testsuite/lib/prune.exp. That
gcc-specific "pruning" function handles more cases than the dejagnu
prune_warnings, and also has updated patterns.
But, module_do_it in modules.exp calls the lower-level
${tool}_target_compile "directly", i.e. g++_target_compile defined in
gcc/testsuite/lib/g++.exp. That does not call ${tool}-dg-prune,
meaning those test-cases miss the gcc-specific pruning.
Noticed while testing a dejagnu update that handled the minuscule "in"
in the warning (line-breaks added below besides the original one after
"(void*)':")
"/path/to/cris-elf/bin/ld:
/gccobj/cris-elf/./libstdc++-v3/src/.libs/libstdc++.a(random.o): in
function `std::(anonymous namespace)::__libc_getentropy(void*)':
/gccsrc/libstdc++-v3/src/c++11/random.cc:183: warning: _getentropy is
not implemented and will always fail"
The line saying "in function" rather than "In function" (from the
binutils linker since 2018) is pruned by prune_gcc_output. The
prune_warnings in dejagnu-1.6.3 and earlier handles the second line
separately. It's an unfortunate wart that neither consumes the
delimiting line-break, leaving it to the callers to prune residual empty
lines. See prune_warnings in dejagnu (default_target_compile and
dg-test) for those other line-break fixups, as alluded to in the comment.
Roger Sayle [Sat, 31 Aug 2024 20:17:18 +0000 (14:17 -0600)]
i386: Support read-modify-write memory operands in STV.
This patch enables STV when the first operand of a TImode binary
logic operation (AND, IOR or XOR) is a memory operand, which is commonly
the case with read-modify-write instructions.
A different motivating example from the one given previously is:
__int128 m, p, q;
void foo() {
m ^= (p & q);
}
Currently with -O2 -mavx the RMW instructions are rejected by STV,
resulting in scalar code:
2024-08-31 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386-features.cc (timode_scalar_to_vector_candidate_p):
Support the first operand of AND, IOR and XOR being MEM_P, i.e. a
read-modify-write insn.
gcc/testsuite/ChangeLog
* gcc.target/i386/movti-2.c: Change dg-options to -Os.
* gcc.target/i386/movti-4.c: Expected output of original movti-2.c.
Andrew Pinski [Sat, 31 Aug 2024 18:57:32 +0000 (11:57 -0700)]
libobjc: Add cast to void* to disable warning for casting between incompatible function types [PR89586]
Even though __objc_get_forward_imp returns an IMP type, it will be cast to a compatible function
type before calling it. So adding a cast to `void*` will disable the warning about the incompatible type.
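A generic sketch of the idiom (not the sendmsg.c code itself): casting a
function pointer through `void*` first keeps -Wcast-function-type quiet when
converting to a different function pointer type:
typedef void (*generic_fn) (void);
typedef int (*int_fn) (int);
int
call_it (generic_fn fp, int arg)
{
  /* The intermediate `void*` cast avoids the incompatible-function-type
     warning; fp is assumed to really point at an int (*) (int).  */
  int_fn f = (int_fn) (void *) fp;
  return f (arg);
}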
Pushed after bootstrap/test on x86_64.
libobjc/ChangeLog:
PR libobjc/89586
* sendmsg.c (__objc_get_forward_imp): Add cast to `void*` before casting to IMP.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Georg-Johann Lay [Fri, 30 Aug 2024 17:38:30 +0000 (19:38 +0200)]
AVR: Run pass avr-fuse-add a second time after pass_cprop_hardreg.
gcc/
* config/avr/avr-passes.cc (avr_pass_fuse_add) <clone>: Override.
* config/avr/avr-passes.def (avr_pass_fuse_add): Run again
after pass_cprop_hardreg.
Georg-Johann Lay [Fri, 30 Aug 2024 17:38:30 +0000 (19:38 +0200)]
AVR: Tidy pass avr-fuse-add.
gcc/
* config/avr/avr-protos.h (avr_split_tiny_move): Rename to
avr_split_fake_addressing_move.
* config/avr/avr-passes.def: Same.
* config/avr/avr-passes.cc: Same.
(avr_pass_data_fuse_add) <tv_id>: Set to TV_MACH_DEP.
* config/avr/avr.md (split-lpmx): Remove a define_split. Such
splits are performed by avr_split_fake_addressing_move.
The 'torture' section of the coroutine tests is primarily about checking
correct operation of the generated code. It should, ideally, be possible
to run this part of the testsuite with '-Wall' and expect no fails. In
the case that we wish to test for a specific diagnostic (and that it does
not appear over a range of optimisation/debug conditions) then we should
make that explicit (as done, for example, in pr109867.C).
The tests amended here have warnings because of unused entities; in many
cases those are relevant to the test, and so we just mark them with
__attribute__((__unused__)).
We amend the debug output in coro.h to avoid similar warnings when print
output is disabled (the default).
gcc/testsuite/ChangeLog:
* g++.dg/coroutines/coro.h: Use a variadic macro for PRINTF to
avoid unused warnings when output is disabled.
* g++.dg/coroutines/torture/co-await-04-control-flow.C: Avoid
unused warnings.
* g++.dg/coroutines/torture/co-ret-13-template-2.C: Likewise.
* g++.dg/coroutines/torture/exceptions-test-01-n4849-a.C: Likewise.
* g++.dg/coroutines/torture/local-var-04-hiding-nested-scopes.C:
Likewise.
* g++.dg/coroutines/torture/pr109867.C: Likewise.
Iain Sandoe [Sat, 31 Aug 2024 11:42:36 +0000 (12:42 +0100)]
testsuite, c++, coroutines: Correct a test intent.
The intention of the series of tests numbered pr95615-* is to
verify that entities created by the ramp and potentially needing
destruction are correctly handled when exceptions are thrown.
Because of a typo, one case was not being checked correctly (the
return object). This patch amends the check to test that the
returned object is properly deleted.
gcc/testsuite/ChangeLog:
* g++.dg/coroutines/torture/pr95615.inc: Check that the
task object produced by get_return_object is correctly
deleted on exception.
Iain Sandoe [Tue, 27 Aug 2024 13:52:26 +0000 (14:52 +0100)]
c++, coroutines: Make and use a frame access helper.
In the review of earlier patches it was suggested that we might make
use of finish_class_access_expr instead of doing a lookup for the
member and then a build_class_access_expr call.
finish_class_access_expr does a lot more work than we need and ends
up calling build_class_access_expr anyway. So, instead, this patch
makes a new helper to do the lookup and build and uses that helper
everywhere except instances in the ramp function that we are going
to handle separately.
Andrew Pinski [Fri, 30 Aug 2024 17:36:24 +0000 (10:36 -0700)]
phiopt: Ignore some nop statements in heursics [PR116098]
The heuristic that was added for PR71016 tries to search to see
if the conversion is being moved away from its definition. The problem
is the heuristic would stop if there was a non-GIMPLE_ASSIGN (it already ignores
debug statements), and in this case we would have a GIMPLE_LABEL that was not
being ignored. So we need to ignore GIMPLE_NOP, GIMPLE_LABEL and GIMPLE_PREDICT.
Note this is now similar to how gimple_empty_block_p behaves.
Note this fixes the wrong code that was reported, by moving the VCE (conversion) out before
the phiopt/match could convert it into a bit_ior and move the VCE out with the VCE being
only conditionally valid.
Bootstrapped and tested on x86_64-linux-gnu.
Also built and tested for aarch64-linux-gnu.
PR tree-optimization/116098
gcc/ChangeLog:
* tree-ssa-phiopt.cc (factor_out_conditional_operation): Ignore
nops, labels and predicts for heuristic for conversion with a constant.
gcc/testsuite/ChangeLog:
* c-c++-common/torture/pr116098-1.c: New test.
* gcc.target/aarch64/csel-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Fri, 30 Aug 2024 16:53:01 +0000 (09:53 -0700)]
testsuite: Change what is being tested for pr66726-2.c
r14-575-g6d6c17e45f62cf changed the debug dump message but the testcase
pr66726-2.c was not updated for the change. The testcase was searching to
make sure we didn't factor out a conversion but the testcase was no longer
testing that so we needed to update what was being searched for.
Harald Anlauf [Fri, 30 Aug 2024 19:15:43 +0000 (21:15 +0200)]
Fortran: downgrade use associated namelist group name to legacy extension
The Fortran standard disallows use associated names as namelist group name
(e.g. F2003:C581, but also later standards). This feature is a gfortran
legacy extension, and we should give a warning even for -std=gnu.
gcc/fortran/ChangeLog:
* match.cc (gfc_match_namelist): Downgrade feature from GNU to
legacy extension.
Jakub Jelinek [Sat, 31 Aug 2024 14:03:20 +0000 (16:03 +0200)]
c++: Add unsequenced C++ testcase
This is the testcase I wrote originally and which on top of the
https://gcc.gnu.org/pipermail/gcc-patches/2024-August/659154.html
patch didn't behave the way I wanted (no warning and no optimizations of
[[unsequenced]] function templates which don't have pointer/reference
arguments).
Posting this separately, because it depends on the above mentioned
patch as well as the PR116175
https://gcc.gnu.org/pipermail/gcc-patches/2024-August/659157.html
patch.
Jakub Jelinek [Sat, 31 Aug 2024 13:58:23 +0000 (15:58 +0200)]
c: Add support for unsequenced and reproducible attributes
C23 added in N2956 ( https://open-std.org/JTC1/SC22/WG14/www/docs/n2956.htm )
two new attributes, which are described as similar to GCC const and pure
attributes, but they aren't really same and it seems that even the paper
is missing some of the differences.
The paper says unsequenced is the same as const on functions without pointer
arguments and reproducible is the same as pure on such functions (except
that they are function type attributes rather than function
declaration ones), but it seems the paper doesn't consider the finiteness GCC
relies on (aka non-DECL_LOOPING_CONST_OR_PURE_P) - the paper only talks
about using the attributes for CSE etc., not for DCE.
The following patch introduces (for now limited) support for those
attributes, both as standard C23 attributes and as GNU extensions (the
difference is that the patch is then less strict on where it allows them,
like other function type attributes they can be specified on function
declarations as well and apply to the type, while C23 standard ones must
go on the function declarators (i.e. after closing paren after function
parameters) or in type specifiers of function type.
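A hedged sketch of the two spellings (illustrative declarations, not taken
from the testsuite; the standard forms need -std=c23):
/* C23 standard attributes: appertain to the function type, written after the
   closing paren of the parameter list.  */
int hash (int) [[unsequenced]];
int sample (int) [[reproducible]];
/* GNU spellings: also accepted on the declaration itself.  */
__attribute__ ((unsequenced)) int hash2 (int);
__attribute__ ((reproducible)) int sample2 (int);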
If a function doesn't have any pointer/reference arguments, the patch
adds an additional internal attribute with a " noptr" suffix which then is used
by flags_from_decl_or_type to handle those easy cases as
ECF_CONST|ECF_LOOPING_CONST_OR_PURE or
ECF_PURE|ECF_LOOPING_CONST_OR_PURE
The harder cases aren't handled right now, I'd hope they can be handled
incrementally.
I wonder whether we shouldn't emit a warning for the
gcc.dg/c23-attr-{reproducible,unsequenced}-5.c cases: while the standard
clearly specifies that composite types should union the attributes, and that
is what GCC has implemented for decades, for ?: that feels dangerous for the
new attributes; it would be much better to be conservative on, say,
(cond ? unsequenced_function : normal_function) (args)
There are no diagnostics for incorrect [[unsequenced]] or [[reproducible]]
function definitions, while I think diagnosing non-const static/TLS
declarations in the former could be easy, the rest feels hard. E.g. the
const/pure discovery can just punt on everything it doesn't understand,
but complete diagnostics would need to understand it.
2024-08-31 Jakub Jelinek <jakub@redhat.com>
PR c/116130
gcc/
* doc/extend.texi (unsequenced, reproducible): Document new function
type attributes.
* calls.cc (flags_from_decl_or_type): Handle "unsequenced noptr" and
"reproducible noptr" attributes.
gcc/c-family/
* c-attribs.cc (c_common_gnu_attributes): Add entries for
"unsequenced", "reproducible", "unsequenced noptr" and
"reproducible noptr" attributes.
(handle_unsequenced_attribute): New function.
(handle_reproducible_attribute): Likewise.
* c-common.h (handle_unsequenced_attribute): Declare.
(handle_reproducible_attribute): Likewise.
* c-lex.cc (c_common_has_attribute): Return 202311 for standard
unsequenced and reproducible attributes.
gcc/c/
* c-decl.cc (handle_std_unsequenced_attribute): New function.
(handle_std_reproducible_attribute): Likewise.
(std_attributes): Add entries for "unsequenced" and "reproducible"
attributes.
(c_warn_type_attributes): Add TYPE argument. Allow unsequenced
or reproducible attributes if it is FUNCTION_TYPE.
(groktypename): Adjust c_warn_type_attributes caller.
(grokdeclarator): Likewise.
(finish_declspecs): Likewise.
* c-parser.cc (c_parser_declaration_or_fndef): Likewise.
* c-tree.h (c_warn_type_attributes): Add TYPE argument.
gcc/testsuite/
* c-c++-common/attr-reproducible-1.c: New test.
* c-c++-common/attr-reproducible-2.c: New test.
* c-c++-common/attr-unsequenced-1.c: New test.
* c-c++-common/attr-unsequenced-2.c: New test.
* gcc.dg/c23-attr-reproducible-1.c: New test.
* gcc.dg/c23-attr-reproducible-2.c: New test.
* gcc.dg/c23-attr-reproducible-3.c: New test.
* gcc.dg/c23-attr-reproducible-4.c: New test.
* gcc.dg/c23-attr-reproducible-5.c: New test.
* gcc.dg/c23-attr-reproducible-5-aux.c: New file.
* gcc.dg/c23-attr-unsequenced-1.c: New test.
* gcc.dg/c23-attr-unsequenced-2.c: New test.
* gcc.dg/c23-attr-unsequenced-3.c: New test.
* gcc.dg/c23-attr-unsequenced-4.c: New test.
* gcc.dg/c23-attr-unsequenced-5.c: New test.
* gcc.dg/c23-attr-unsequenced-5-aux.c: New file.
* gcc.dg/c23-has-c-attribute-2.c: Add tests for unsequenced
and reproducible attributes.
Alexandre Oliva [Sat, 31 Aug 2024 09:03:12 +0000 (06:03 -0300)]
Optimize initialization of small padded objects
When small objects containing padding bits (or bytes) are fully
initialized, we will often store them in registers, and setting
bitfields and other small fields will attempt to preserve the
uninitialized padding bits, which tends to be expensive.
Zero-initializing registers, OTOH, tends to be cheap.
So, if we're optimizing, zero-initialize such small padded objects
even if that's not needed for correctness. We can't zero-initialize
all such padding objects, though: if there's no padding whatsoever,
and all fields are initialized with nonzero, the zero initialization
would be flagged as dead. That's why we introduce machinery to detect
whether objects have padding bits. I considered distinguishing
between bitfields, units and larger padding elements, but I didn't
pursue that distinction.
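A hypothetical example of the kind of object this targets: on typical targets
the struct below has three bytes of padding after the first field, so when
optimizing, the whole temporary can be zeroed up front rather than preserving
unspecified padding bits around each field store:
struct small_padded
{
  char tag;   /* usually followed by 3 bytes of padding */
  int value;
};
struct small_padded
make (char t, int v)
{
  struct small_padded s = { t, v };
  return s;
}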
Since the object's zero-initialization subsumes fields'
zero-initialization, the empty string test in builtin-snprintf-6.c's
test_assign_aggregate would regress without the addition of
native_encode_constructor.
for gcc/ChangeLog
* expr.cc (categorize_ctor_elements_1): Change p_complete to
int, to distinguish complete initialization in presence or
absence of uninitialized padding bits.
(categorize_ctor_elements): Likewise. Adjust all callers...
* expr.h (categorize_ctor_elements): ... and declaration.
(type_has_padding_at_level_p): New.
* gimple-fold.cc (type_has_padding_at_level_p): New.
* fold-const.cc (native_encode_constructor): New.
(native_encode_expr): Call it.
* gimplify.cc (gimplify_init_constructor): Clear small
non-addressable non-volatile objects with padding or
other uninitialized fields as an optimization.
Jason Merrill [Thu, 29 Aug 2024 17:27:13 +0000 (13:27 -0400)]
c++: fix used but not defined warning for friend
Here limit_bad_template_recursion avoids instantiating foo, and then we
wrongly warn that it isn't defined, because as a non-template (but
templated) friend DECL_TEMPLATE_INSTANTIATION is false.
gcc/cp/ChangeLog:
* decl2.cc (c_parse_final_cleanups): Also check
DECL_FRIEND_PSEUDO_TEMPLATE_INSTANTIATION.
Alex Coplan [Fri, 30 Aug 2024 14:29:34 +0000 (15:29 +0100)]
gdbhooks: Fix printing of vec with vl_ptr layout
As it stands, the pretty printing of GCC's vecs by gdbhooks.py only
handles vectors with vl_embed layout. As such, when encountering a vec
with vl_ptr layout, GDB would print a diagnostic like:
gdb.error: There is no member or method named m_vecpfx.
when (e.g.) any such vec occurred in a backtrace. This patch extends
VecPrinter.children to also handle vl_ptr vectors.
gcc/ChangeLog:
* gdbhooks.py (VEC_KIND_EMBED): New.
(VEC_KIND_PTR): New.
(get_vec_kind): New.
(VecPrinter.children): Also handle vectors with vl_ptr layout.
Andrew Pinski [Tue, 16 Apr 2024 19:06:51 +0000 (12:06 -0700)]
Don't remove /usr/lib and /lib from when passing to the linker [PR97304/104707]
With newer ld, the default library search path does not include /usr/lib nor /lib,
but the driver decides not to pass -L down to the linker for these, and then in some/most
cases libc is not found.
This code dates from at least 1992 and it is done in a way which is not safe and
does not make sense. So let's remove it.
Bootstrapped and tested on x86_64-linux-gnu (which defaults to being a multilib).
gcc/ChangeLog:
PR driver/104707
PR driver/97304
* gcc.cc (is_directory): Do not exclude /usr/lib and /lib
from library directory paths. Remove library argument.
(add_to_obstack): Update call to is_directory.
(driver_handle_option): Likewise.
(spec_path): Likewise.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Thu, 29 Aug 2024 18:01:56 +0000 (11:01 -0700)]
middle-end: Remove integer_three_node [PR116537]
After the small expansion patch for __builtin_prefetch, the
only use of integer_three_node is inside tree-ssa-loop-prefetch.cc so let's
remove it as the loop prefetch pass is not enabled these days by default and
having a tree node around just for that pass is a little wasteful. Integer
constants are also shared these days so calling build_int_cst will use the cached
node anyways.
Bootstrapped and tested on x86_64-linux.
PR middle-end/116537
gcc/ChangeLog:
* tree-core.h (enum tree_index): Remove TI_INTEGER_THREE.
* tree-ssa-loop-prefetch.cc (issue_prefetch_ref): Call build_int_cst
instead of using integer_three_node.
* tree.cc (build_common_tree_nodes): Remove initialization
of integer_three_node.
* tree.h (integer_three_node): Delete.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Andrew Pinski [Thu, 29 Aug 2024 17:58:41 +0000 (10:58 -0700)]
expand: Small speed up expansion of __builtin_prefetch
This is a small speed up of the expansion of __builtin_prefetch.
Basically, for the optional arguments there is no reason to call expand_normal
on a constant integer whose value we already know; just use
GEN_INT/const0_rtx instead.
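For illustration, the optional arguments in question are the rw and locality
constants in calls such as these:
void
prefetch_example (const void *p)
{
  __builtin_prefetch (p, 0, 3);  /* read, maximum temporal locality */
  __builtin_prefetch (p);        /* defaults: rw = 0, locality = 3 */
}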
Bootstrapped and tested on x86_64-linux.
gcc/ChangeLog:
* builtins.cc (expand_builtin_prefetch): Rewrite expansion of the optional
arguments to not expand known constants.
where for [2, 2] / [0, 2] the condition doesn't reflect what we
are trying to test - that, when remain is zero or, when non-zero,
nunits is a multiple of remain, we can avoid touching a gap via
loading smaller pieces and vector composition.
It isn't safe to change the known_eq to maybe_eq so instead
require known_ne (remain, 0u) before doing constant_multiple_p.
There's the corresponding code in vectorizable_load that's known
to have a latent similar issue, so sync that up as well.
Jakub Jelinek [Fri, 30 Aug 2024 07:40:34 +0000 (09:40 +0200)]
c++: Allow standard attributes after closing square bracket in new-type-id [PR110345]
For C++ 26 P2552R3 I went through all the spots (except modules) where
attribute-specifier-seq appears in the grammar and tried to construct
a testcase in all those spots, for now for [[deprecated]] attribute.
The first thing I found is that we aren't parsing standard attributes in
noptr-new-declarator - https://eel.is/c++draft/expr.new#1
The following patch parses them there; for the non-outermost arrays it
applies the attributes to the array type as usual, and for the outermost one,
where we just set *nelts and don't really build an array type, it just
warns that we ignore those attributes (or, do you think we should
just build an array type in that case and take its element type?).
2024-08-30 Jakub Jelinek <jakub@redhat.com>
PR c++/110345
* parser.cc (make_array_declarator): Add STD_ATTRS argument, set
declarator->std_attributes to it.
(cp_parser_new_type_id): Warn on non-ignored std_attributes on the
array declarator which is being omitted.
(cp_parser_direct_new_declarator): Parse standard attributes after
closing square bracket, pass it to make_array_declarator.
(cp_parser_direct_declarator): Pass std_attrs to make_array_declarator
instead of setting declarator->std_attributes manually.
* g++.dg/cpp0x/gen-attrs-80.C: New test.
* g++.dg/cpp0x/gen-attrs-81.C: New test.
liuhongt [Thu, 29 Aug 2024 03:39:20 +0000 (11:39 +0800)]
Check avx upper register for parallel.
For function arguments/return, when it's BLK mode, it's put in a
parallel with an expr_list, and the expr_list contains the real mode
and registers.
The current ix86_check_avx_upper_register only checks for SSE_REG_P, and
fails to handle that. The patch extends the check to each subrtx.
gcc/ChangeLog:
PR target/116512
* config/i386/i386.cc (ix86_check_avx_upper_register): Iterate
over subrtxes to scan for AVX upper registers.
(ix86_check_avx_upper_stores): Inline old
ix86_check_avx_upper_register.
(ix86_avx_u128_mode_needed): Ditto, and replace
FOR_EACH_SUBRTX with call to new
ix86_check_avx_upper_register.
David Malcolm [Thu, 29 Aug 2024 22:48:32 +0000 (18:48 -0400)]
SARIF output: implement embedded URLs in messages (§3.11.6; PR other/116419)
GCC diagnostic messages can contain URLs, such as to our documentation
when we suggest an option name to correct a misspelling.
SARIF message strings can contain embedded URLs in the plain text
messages (see SARIF v2.1.0 §3.11.6), but previously we were
simply dropping any URLs from the diagnostic messages.
This patch adds support for encoding URLs into messages in our SARIF
output, using the pp_token machinery added in the previous patch.
As well as supporting URLs, the patch also adjusts how we report
event IDs in SARIF messages, so that rather than e.g.
"text": "second 'free' here; first 'free' was at (1)"
we now report:
"text": "second 'free' here; first 'free' was at [(1)](sarif:/runs/0/results/0/codeFlows/0/threadFlows/0/locations/0)"
i.e. the text "(1)" now has a embedded link referring within the sarif
log to the threadFlowLocation object for the other event, via JSON
pointer (see §3.10.3 "URIs that use the sarif scheme"). Doing so
requires the arious objects to know their index within their containing
array, requiring some reworking of how they are constructed.
gcc/ChangeLog:
PR other/116419
* diagnostic-event-id.h (diagnostic_event_id_t::zero_based): New.
* diagnostic-format-sarif.cc: Include "pretty-print-format-impl.h"
and "pretty-print-urlifier.h".
(sarif_result::sarif_result): Add param "idx_within_parent".
(sarif_result::get_index_within_parent): New accessor.
(sarif_result::m_idx_within_parent): New field.
(sarif_code_flow::sarif_code_flow): New ctor.
(sarif_code_flow::get_parent): New accessor.
(sarif_code_flow::get_index_within_parent): New accessor.
(sarif_code_flow::m_parent): New field.
(sarif_code_flow::m_thread_id_map): New field.
(sarif_code_flow::m_thread_flows_arr): New field.
(sarif_code_flow::m_all_tfl_objs): New field.
(sarif_thread_flow::sarif_thread_flow): Add "parent" and
"idx_within_parent" params.
(sarif_thread_flow::get_parent): New accessor.
(sarif_thread_flow::get_index_within_parent): New accessor.
(sarif_thread_flow::m_parent): New field.
(sarif_thread_flow::m_idx_within_parent): New field.
(sarif_thread_flow_location::sarif_thread_flow_location): New
ctor.
(sarif_thread_flow_location::get_parent): New accessor.
(sarif_thread_flow_location::get_index_within_parent): New
accessor.
(sarif_thread_flow_location::m_parent): New field.
(sarif_thread_flow_location::m_idx_within_parent): New field.
(sarif_builder::get_code_flow_for_event_ids): New accessor.
(class sarif_builder::sarif_token_printer): New.
(sarif_builder::m_token_printer): New member.
(sarif_builder::m_next_result_idx): New field.
(sarif_builder::m_current_code_flow): New field.
(sarif_code_flow::get_or_append_thread_flow): New.
(sarif_code_flow::get_thread_flow): New.
(sarif_code_flow::add_location): New.
(sarif_code_flow::get_thread_flow_loc_obj): New.
(sarif_thread_flow::add_location): Create the new
sarif_thread_flow_location internally, rather than passing
it in as a parm so that we can keep track of its index in
the array. Return a reference to it.
(sarif_builder::sarif_builder): Initialize m_token_printer,
m_next_result_idx, and m_current_code_flow.
(sarif_builder::on_report_diagnostic): Pass index to
make_result_object.
(sarif_builder::make_result_object): Add "idx_within_parent" param
and pass to sarif_result ctor. Pass code flow index to call to
make_code_flow_object.
(make_sarif_url_for_event): New.
(sarif_builder::make_code_flow_object): Add "idx_within_parent"
param and pass it to sarif_code_flow ctor. Reimplement walking
of events so that we first create threadFlow objects for each
thread, then populate them with threadFlowLocation objects, so
that the IDs work. Set m_current_code_flow whilst creating the
latter, so that we can create correct URIs for "%@".
(sarif_builder::make_thread_flow_location_object): Replace with...
(sarif_builder::populate_thread_flow_location_object): ...this.
(sarif_output_format::get_builder): New accessor.
(sarif_begin_embedded_link): New.
(sarif_end_embedded_link): New.
(sarif_builder::sarif_token_printer::print_tokens): New.
(diagnostic_output_format_init_sarif): Add "fmt" param; use it to
set the token printer and output format for the context.
(diagnostic_output_format_init_sarif_stderr): Move responsibility
for setting the context's output format to within
diagnostic_output_format_init_sarif.
(diagnostic_output_format_init_sarif_file): Likewise.
(diagnostic_output_format_init_sarif_stream): Likewise.
(test_sarif_diagnostic_context::test_sarif_diagnostic_context):
Likewise.
(selftest::test_make_location_object): Provide an idx for the
result.
(selftest::get_result_from_log): New.
(selftest::get_message_from_log): New.
(selftest::test_message_with_embedded_link): New test.
(selftest::diagnostic_format_sarif_cc_tests): Call it.
* pretty-print-format-impl.h: Include "diagnostic-event-id.h".
(pp_token::kind): Add "event_id".
(struct pp_token_event_id): New.
(is_a_helper <pp_token_event_id *>::test): New.
(is_a_helper <const pp_token_event_id *>::test): New.
* pretty-print.cc (pp_token::dump): Handle kind::event_id.
(pretty_printer::format): Update handling of "%@" in phase 2
so that we add a pp_token_event_id, rather than the text "(N)".
(default_token_printer): Handle pp_token::kind::event_id by
printing the text "(N)".
gcc/testsuite/ChangeLog:
PR other/116419
* gcc.dg/sarif-output/bad-pragma.c: New test.
* gcc.dg/sarif-output/test-bad-pragma.py: New test.
* gcc.dg/sarif-output/test-include-chain-2.py
(test_location_relationships): Update expected text of event to
include an intra-sarif URI to the other event.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 29 Aug 2024 22:48:27 +0000 (18:48 -0400)]
pretty-print: reimplement pp_format with a new struct pp_token
The following patch rewrites the internals of pp_format.
A pretty_printer's output_buffer maintains a stack of chunk_info
instances, each one responsible for handling a call to pp_format, where
having a stack allows us to support re-entrant calls to pp_format on the
same pretty_printer.
Previously a chunk_info merely stored buffers of accumulated text
per unformatted run and per formatted argument.
This led to various special-casing for handling:
- urlifiers, needing class quoting_info to handle awkward cases where
the run of quoted text could be split between stages 1 and 2
of formatting
- dumpfiles, where the optinfo machinery could lead to objects being
stashed during formatting for later replay to JSON optimization
records
- in the C++ frontend, the format codes %H and %I can't be processed
until we've seen both, leading to awkward code to manipulate the
text buffers
Further, supporting URLs in messages in SARIF output (PR other/116419)
would add additional manipulations of text buffers, since our internal
pp_begin_url API gives the URL at the beginning of the wrapped text,
whereas SARIF's format for embedded URLs has the URL *after* the wrapped
text. Also when handling "%@" we wouldn't necessarily know the URL of
an event ID until later, requiring further nasty special-case
manipulation of text buffers.
This patch rewrites pretty-print formatting by introducing a new
intermediate representation during formatting: pp_token and
pp_token_list. Rather than simply accumulating a buffer of "char" in
the chunk_obstack during formatting, we now also accumulate a
pp_token_list, a doubly-linked list of pp_token, which can be:
- text buffers
- begin/end colorization
- begin/end quote
- begin/end URL
- "custom data" tokens
Working at the level of tokens rather than just text buffers allows the
various awkward special cases above to be replaced with uniform logic.
For example, all "urlification" is now done in phase 3 of formatting,
in one place, by looking for [..., BEGIN_QUOTE, TEXT, END_QUOTE, ...]
and injecting BEGIN_URL and END_URL wrapper tokens when the urlifier
has a URL for TEXT. Doing so greatly simplifies the urlifier code,
allowing the removal of class quoting_info.
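As a rough standalone sketch of that phase-3 idea (a toy model only; the
names and types below are made up and are not GCC's pp_token /
pp_token_list classes), the pass walks the token list looking for a
quoted run and wraps it in URL tokens when the urlifier knows a URL for
the quoted text:

#include <iostream>
#include <iterator>
#include <list>
#include <optional>
#include <string>

// One formatting token: literal text or a markup marker.
struct tok
{
  enum kind { TEXT, BEGIN_QUOTE, END_QUOTE, BEGIN_URL, END_URL } k;
  std::string s;  // payload: text for TEXT, URL for BEGIN_URL
};

// Stand-in for the urlifier: map quoted strings to documentation URLs.
static std::optional<std::string>
url_for (const std::string &quoted)
{
  if (quoted == "-Wformat")
    return "https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html";
  return std::nullopt;
}

// Wrap each [BEGIN_QUOTE, TEXT, END_QUOTE] run in BEGIN_URL/END_URL
// tokens when the urlifier has a URL for the quoted text.
static void
apply_urlifier (std::list<tok> &toks)
{
  for (auto it = toks.begin (); it != toks.end (); ++it)
    {
      if (it->k != tok::BEGIN_QUOTE)
        continue;
      auto text = std::next (it);
      if (text == toks.end () || text->k != tok::TEXT)
        continue;
      auto end_quote = std::next (text);
      if (end_quote == toks.end () || end_quote->k != tok::END_QUOTE)
        continue;
      if (auto url = url_for (text->s))
        {
          toks.insert (it, {tok::BEGIN_URL, *url});
          it = toks.insert (std::next (end_quote), {tok::END_URL, ""});
        }
    }
}

int
main ()
{
  std::list<tok> toks = {
    {tok::TEXT, "did you mean "},
    {tok::BEGIN_QUOTE, ""}, {tok::TEXT, "-Wformat"}, {tok::END_QUOTE, ""},
    {tok::TEXT, "?"},
  };
  apply_urlifier (toks);
  for (const auto &t : toks)
    switch (t.k)
      {
      case tok::TEXT: std::cout << t.s; break;
      case tok::BEGIN_QUOTE:
      case tok::END_QUOTE: std::cout << '\''; break;
      case tok::BEGIN_URL: std::cout << "[url=" << t.s << ']'; break;
      case tok::END_URL: std::cout << "[/url]"; break;
      }
  std::cout << '\n';
}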
The tokens and token lists are allocated on the chunk_obstack, and so
there's no additional heap activity required, with the memory reclaimed
when the chunk_obstack is freed after phase 3 of formatting.
New kinds of pp_token can be added as needed to support output formats.
For example, the followup patch adds a token for "%@" for event IDs, to
better support SARIF output.
No functional change intended.
gcc/c/ChangeLog:
* c-objc-common.cc (c_tree_printer): Convert final param from
const char ** to pp_token_list &.
gcc/cp/ChangeLog:
* error.cc: Include "make-unique.h".
(deferred_printed_type::m_buffer_ptr): Replace with...
(deferred_printed_type::m_printed_text): ...this and...
(deferred_printed_type::m_token_list): ...this.
(deferred_printed_type::deferred_printed_type): Update ctors for
above changes.
(deferred_printed_type::set_text_for_token_list): New.
(append_formatted_chunk): Pass chunk_obstack to
append_formatted_chunk.
(add_quotes): Delete.
(cxx_format_postprocessor::handle): Reimplement to call
deferred_printed_type::set_text_for_token_list, rather than store
buffer pointers.
(defer_phase_2_of_type_diff): Replace param "buffer_ptr"
with "formatted_token_list". Reimplement by storing
a pointer to formatted_token_list so that the postprocessor can
put its text there.
(cp_printer): Convert param "buffer_ptr" to
"formatted_token_list". Update calls to
defer_phase_2_of_type_diff accordingly.
gcc/ChangeLog:
* diagnostic.cc (diagnostic_context::report_diagnostic): Don't
pass m_urlifier to pp_format, as urlification now happens in
phase 3.
* dump-context.h (class dump_pretty_printer): Update leading
comment.
(dump_pretty_printer::emit_items): Drop decl.
(dump_pretty_printer::set_optinfo): New.
(class dump_pretty_printer::stashed_item): Delete class.
(class dump_pretty_printer::custom_token_printer): New class.
(dump_pretty_printer::format_decoder_cb): Convert param from
const char ** to pp_token_list &.
(dump_pretty_printer::decode_format): Likewise.
(dump_pretty_printer::stash_item): Likewise.
(dump_pretty_printer::emit_any_pending_textual_chunks): Drop decl.
(dump_pretty_printer::m_stashed_items): Delete field.
(dump_pretty_printer::m_token_printer): New member data.
* dumpfile.cc (struct wrapped_optinfo_item): New.
(dump_pretty_printer::dump_pretty_printer): Update for dropping
of field m_stashed_items and new field m_token_printer.
(dump_pretty_printer::emit_items): Delete; we now use
pp_output_formatted_text.
(dump_pretty_printer::emit_any_pending_textual_chunks): Delete.
(dump_pretty_printer::stash_item): Convert param from
const char ** to pp_token_list &.
(dump_pretty_printer::format_decoder_cb): Likewise.
(dump_pretty_printer::decode_format): Likewise.
(dump_pretty_printer::custom_token_printer::print_tokens): New.
(dump_pretty_printer::custom_token_printer::emit_any_pending_textual_chunks):
New.
(dump_context::dump_printf_va): Call set_optinfo on the
dump_pretty_printer. Replace call to emit_items with a call to
pp_output_formatted_text.
* opt-problem.cc (opt_problem::opt_problem): Replace call to
emit_items with call to set_optinfo and call to
pp_output_formatted_text.
* pretty-print-format-impl.h (struct pp_token): New.
(struct pp_token_text): New.
(is_a_helper <pp_token_text *>::test): New.
(is_a_helper <const pp_token_text *>::test): New.
(struct pp_token_begin_color): New.
(is_a_helper <pp_token_begin_color *>::test): New.
(is_a_helper <const pp_token_begin_color *>::test): New.
(struct pp_token_end_color): New.
(struct pp_token_begin_quote): New.
(struct pp_token_end_quote): New.
(struct pp_token_begin_url): New.
(is_a_helper <pp_token_begin_url*>::test): New.
(is_a_helper <const pp_token_begin_url*>::test): New.
(struct pp_token_end_url): New.
(struct pp_token_custom_data): New.
(is_a_helper <pp_token_custom_data *>::test): New.
(is_a_helper <const pp_token_custom_data *>::test): New.
(class pp_token_list): New.
(chunk_info::get_args): Drop.
(chunk_info::get_quoting_info): Drop.
(chunk_info::get_token_lists): New accessor.
(chunk_info::append_formatted_chunk): Add obstack & param.
(chunk_info::dump): New decls.
(chunk_info::m_args): Convert element type from const char * to
pp_token_list *. Rewrite/update comment.
(chunk_info::m_quotes): Drop field.
* pretty-print-markup.h (class pp_token_list): New forward decl.
(pp_markup::context::context): Drop urlifier param; add
formatted_token_list param.
(pp_markup::context::push_back_any_text): New decl.
(pp_markup::context::m_urlifier): Drop field.
(pp_markup::context::m_formatted_token_list): New field.
* pretty-print-urlifier.h: Update comment.
* pretty-print.cc: Define INCLUDE_MEMORY. Include
"make-unique.h".
(default_token_printer): New forward decl.
(obstack_append_string): Delete.
(urlify_quoted_string): Delete.
(pp_token::pp_token): New.
(pp_token::dump): New.
(allocate_object): New.
(class quoting_info): Delete.
(pp_token::operator new): New.
(pp_token::operator delete): New.
(pp_token_list::operator new): New.
(pp_token_list::operator delete): New.
(pp_token_list::pp_token_list): New.
(pp_token_list::~pp_token_list): New.
(pp_token_list::push_back_text): New.
(pp_token_list::push_back): New.
(pp_token_list::push_back_list): New.
(pp_token_list::pop_front): New.
(pp_token_list::remove_token): New.
(pp_token_list::insert_after): New.
(pp_token_list::replace_custom_tokens): New.
(pp_token_list::merge_consecutive_text_tokens): New.
(pp_token_list::apply_urlifier): New.
(pp_token_list::dump): New.
(chunk_info::append_formatted_chunk): Add obstack & param and use
it to reimplement in terms of token lists.
(chunk_info::pop_from_output_buffer): Drop m_quotes.
(chunk_info::on_begin_quote): Delete.
(chunk_info::dump): New.
(chunk_info::on_end_quote): Delete.
(push_back_any_text): New.
(pretty_printer::format): Drop "urlifier" param and quoting_info
logic. Convert "formatters" and "args" from const char ** to
pp_token_list **. Reimplement so that rather than just
accumulating a text buffer in the chunk_obstack for each arg,
instead also accumulate a pp_token_list and pp_tokens for each
arg.
(auto_obstack::operator obstack &): New.
(quoting_info::handle_phase_3): Delete.
(pp_output_formatted_text): Reimplement in terms of manipulations
of pp_token_lists, rather than char buffers. Call
default_token_printer, or m_token_printer's print_tokens vfunc.
(default_token_printer): New.
(pretty_printer::pretty_printer): Initialize m_token_printer in
both ctors.
(pp_markup::context::begin_quote): Reimplement to use token list.
(pp_markup::context::end_quote): Likewise.
(pp_markup::context::begin_highlight_color): Likewise.
(pp_markup::context::end_highlight_color): Likewise.
(pp_markup::context::push_back_any_text): New.
(selftest::test_merge_consecutive_text_tokens): New.
(selftest::test_custom_tokens_1): New.
(selftest::test_custom_tokens_2): New.
(selftest::pp_printf_with_urlifier): Drop "urlifier" param from
call to pp_format.
(selftest::test_urlification): Add test of the example from
pretty-print-format-impl.h.
(selftest::pretty_print_cc_tests): Call the new selftest
functions.
* pretty-print.h (class quoting_info): Drop forward decl.
(class pp_token_list): New forward decl.
(printer_fn): Convert final param from const char ** to
pp_token_list &.
(class token_printer): New.
(class pretty_printer): Add pp_output_formatted_text as friend.
(pretty_printer::set_token_printer): New.
(pretty_printer::format): Drop urlifier param as this now happens
in phase 3.
(pretty_printer::m_format_decoder): Update comment.
(pretty_printer::m_token_printer): New field.
(pp_format): Drop urlifier param.
* tree-diagnostic.cc (default_tree_printer): Convert final param
from const char ** to pp_token_list &.
* tree-diagnostic.h: Likewise for decl.
gcc/fortran/ChangeLog:
* error.cc (gfc_format_decoder): Convert final param from
const char **buffer_ptr to pp_token_list &formatted_token_list,
and update call to default_tree_printer accordingly.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 29 Aug 2024 22:48:20 +0000 (18:48 -0400)]
pretty-print: move class chunk_info into its own header
No functional change intended.
gcc/cp/ChangeLog:
* error.cc: Include "pretty-print-format-impl.h".
gcc/ChangeLog:
* dumpfile.cc: Include "pretty-print-format-impl.h".
* pretty-print-format-impl.h: New file, based on material from
pretty-print.h.
* pretty-print.cc: Include "pretty-print-format-impl.h".
* pretty-print.h (chunk_info): Replace full declaration with
a forward decl, moving full decl to pretty-print-format-impl.h.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 29 Aug 2024 22:48:16 +0000 (18:48 -0400)]
Use std::unique_ptr for optinfo_item
As preliminary work towards an overhaul of how optinfo_items
interact with dump_pretty_printer, replace uses of optinfo_item * with
std::unique_ptr<optinfo_item> to make ownership clearer.
hppa: Fix handling of unscaled index addresses on HP-UX
The PA-RISC architecture uses the top two bits of memory pointers
to select space registers. The space register ID is ored with the
pointer offset to compute the global virtual address for an access.
The new late combine passes broke gcc on HP-UX. One of these passes
runs after reload. The existing code assumed no unscaled index
instructions would be created after reload as the REG_POINTER flag
is not reliable after reload. The new pass sometimes interchanged
the base and index registers, causing these instructions to fault
when the wrong space register was selected.
I investigated various alternatives to try to retain generation
of unscaled index instructions on HP-UX. It's not possible to
simply treat unscaled index addresses as not legitimate after
reload as sometimes instructions need to be rerecognized after
reload. So, we needed to allow unscaled index addresses after
reload and to disable the late combine passes.
I had noticed that reversing the current order of base and index
register canonicalization resulted in more accesses using unscaled
index addresses. However, this exposed issues with the REG_POINTER
flag.
The flag is not propagated when a constant is added to a pointer.
Tree optimization sometimes adds two pointers. I found that I had
to treat the result as a pointer but the addition generally corrupts
the space register bits. These get fixed when a negative pointer
is added. Finally, the REG_POINTER flag isn't set when a pointer
is passed in a function call. I couldn't get this approach to work.
Thus, I came to the conclusion that the best approach was to
disable use of unscaled index addresses on HP-UX. I don't think
this impacts performance significantly. Code size might get slightly
larger, but we gain some of that back, or more, from having the late
combine passes.
2024-08-29 John David Anglin <danglin@gcc.gnu.org>
gcc/ChangeLog:
* config/pa/pa.cc (load_reg): Don't generate load with
unscaled index address when !TARGET_NO_SPACE_REGS.
(pa_legitimate_address_p): Only allow unscaled index
addresses when TARGET_NO_SPACE_REGS.
Andrew Pinski [Wed, 28 Aug 2024 22:03:53 +0000 (15:03 -0700)]
expand: Allow widening optab when expanding popcount==1 [PR116508]
After adding popcount{qi,hi}2 to the aarch64 backend, I noticed that
the expansion for popcount==1 was no longer trying to do the trick
of handling popcount==1 as `(arg ^ (arg - 1)) > arg - 1`. The problem
is that the expansion was using OPTAB_DIRECT, whereas OPTAB_WIDEN
allows modes smaller than SImode (in the aarch64 case).
Note that QImode's cost still needs some improvement, so part of
popcnt-eq-1.c is xfailed, though there is now a check to make sure the
costs are compared.
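As a standalone sanity check of the arithmetic identity behind that
trick (illustrative only; this is not the expander code, which works on
RTL):

/* For unsigned x:  popcount (x) == 1  <=>  (x ^ (x - 1)) > (x - 1).
   For x = 0, x - 1 wraps to UINT_MAX and both sides equal UINT_MAX, so
   the strict comparison is false, as required.  */
#include <cassert>

int
main ()
{
  for (unsigned x = 0; x <= 0xffff; ++x)
    {
      bool single_bit = __builtin_popcount (x) == 1;
      bool trick = (x ^ (x - 1)) > (x - 1);
      assert (single_bit == trick);
    }
  return 0;
}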
Built and tested on aarch64-linux-gnu.
PR middle-end/116508
gcc/ChangeLog:
* internal-fn.cc (expand_POPCOUNT): Use OPTAB_WIDEN for PLUS and
XOR/AND expansion.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/popcnt-eq-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Eric Botcazou [Fri, 16 Aug 2024 14:03:30 +0000 (16:03 +0200)]
ada: Fix internal error on concatenation of discriminant-dependent component
This only occurs with optimization enabled, but the expanded code is always
wrong because it reuses the formal parameter of an initialization procedure
associated with a discriminant (a discriminal in GNAT parlance) outside of
the initialization procedure.
gcc/ada/
* checks.adb (Selected_Length_Checks.Get_E_Length): For a
component of a record with discriminants and if the expression is
a selected component, try to build an actual subtype from its
prefix instead of from the discriminal.
Steve Baird [Mon, 5 Aug 2024 22:53:12 +0000 (15:53 -0700)]
ada: Missing legality check when type completed
Refine previous fix to better handle tagged cases.
gcc/ada/
* sem_ch6.adb (Check_Discriminant_Conformance): Immediately after
calling Is_Immutably_Limited_Type, perform an additional test that
one might reasonably imagine would instead have been part of
Is_Immutably_Limited_Type. The new test is a call to a new
function Has_Tagged_Limited_Partial_View whose implementation
includes a call to Incomplete_Or_Partial_View, which cannot be
easily be called from Is_Immutably_Limited_Type (because sem_aux,
which is in the closure of the binder, cannot easily "with"
sem_util).
* sem_aux.adb (Is_Immutably_Limited): Include
N_Derived_Type_Definition case when testing Limited_Present flag.
Eric Botcazou [Fri, 16 Aug 2024 09:28:37 +0000 (11:28 +0200)]
ada: Fix missing finalization for call to function returning limited view
The call is legal because it is made from the body, which has visibility on
the nonlimited view, so this changes the code in Expand_Call_Helper to look
at the Etype of the call node instead of the Etype of the function.
gcc/ada/
* exp_ch6.adb (Expand_Call_Helper): In the case of a function
call, look at the Etype of the call node to determine whether
finalization actions need to be performed.
ada: Use the same warning character in continuation messages
For consistency's sake the main and continuation messages should
use the same warning characters.
gcc/ada/
* exp_aggr.adb (Expand_Range_Component): Remove extra warning
character. Use same conditional warning char.
* freeze.adb (Warn_Overlay): Use named warning character.
* restrict.adb (Id_Case): Use named warning character.
* sem_prag.adb (Rewrite_Assertion_Kind): Use default warning
character.
ada: Restructure continuation message for pretty printing
Continuation messages should have the same location
as the main message. If the goal is to point to a different
location then Error_Msg_Sloc should be used to change
the location of the continuation message.
gcc/ada/
* par-ch4.adb (P_Name): Use Error_Msg_Sloc for the location of the
continuation message.