Roger Sayle [Sun, 7 Aug 2022 07:49:48 +0000 (08:49 +0100)]
Allow any immediate constant in *cmp<dwi>_doubleword splitter on x86_64.
This patch tweaks i386.md's *cmp<dwi>_doubleword splitter's predicate to
allow general_operand, not just x86_64_hilo_general_operand, to improve
code generation. As a general rule, i386.md's _doubleword splitters should
be post-reload splitters that require integer immediate operands to be
x86_64_hilo_int_operand, so that each part is a valid word mode immediate
constant. As an exception to this rule, doubleword patterns that must be
split before reload, because they require additional scratch registers,
can take advantage of this ability to create new pseudos, to accept
any immediate constant, and call force_reg on the high and/or low parts
if they are not suitable immediate operands in word mode.
The benefit is shown in the new cmpti3.c test case below.
__int128 x;
int foo()
{
__int128 t = 0x1234567890abcdefLL;
return x == t;
}
2022-08-07 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386.md (*cmp<dwi>_doubleword): Change predicate
from x86_64_hilo_general_operand to general_operand. Call
force_reg on parts that are not x86_64_immediate_operand.
gcc/testsuite/ChangeLog
* gcc.target/i386/cmpti1.c: New test case.
* gcc.target/i386/cmpti2.c: Likewise.
* gcc.target/i386/cmpti3.c: Likewise.
Roger Sayle [Fri, 5 Aug 2022 20:05:35 +0000 (21:05 +0100)]
middle-end: Allow backend to expand/split double word compare to 0/-1.
This patch to the middle-end's RTL expansion reorders the code in
emit_store_flag_1 so that the backend has more control over how best
to expand/split double word equality/inequality comparisons against
zero or minus one. With the current implementation, the middle-end
always decides to lower this idiom during RTL expansion using SUBREGs
and word mode instructions, without ever consulting the backend's
machine description. Hence on x86_64, a TImode comparison against zero
is always expanded as:
This patch, which makes no changes to the code itself, simply reorders
the clauses in emit_store_flag_1 so that the middle-end first attempts
expansion using the target's doubleword mode cstore optab/expander,
and only if this fails, falls back to lowering to word mode operations.
On x86_64, this allows the expander to produce:
which is a candidate for scalar-to-vector transformations (and
combine simplifications etc.). On targets that don't define a cstore
pattern for doubleword integer modes, there should be no change in
behaviour. For those that do, the current behaviour can be restored
(if desired) by restricting the expander/insn to not apply when the
comparison is EQ or NE, and operand[2] is either const0_rtx or
constm1_rtx.
This change just keeps RTL expansion more consistent (in philosophy).
For other doubleword comparisons, such as with operators LT and GT,
or with constants other than zero or -1, the wishes of the backend
are respected, and only if the optab expansion fails are the default
fall-back implementations using narrower integer mode operations
(and conditional jumps) used.
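For illustration, a hypothetical example (not taken from the patch) of the kind of doubleword comparison affected, a TImode equality test against zero on x86_64:
/* Hypothetical example: a TImode comparison against zero that can now be
   expanded through the target's cstore<mode>4 expander first, instead of
   always being lowered to word mode operations during RTL expansion.  */
__int128 x;
int is_zero (void)
{
  return x == 0;
}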
2022-08-05 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* expmed.cc (emit_store_flag_1): Move code to expand double word
equality and inequality against zero or -1, using word operations,
to after trying to use the backend's cstore<mode>4 optab/expander.
Jonathan Wakely [Wed, 13 Jul 2022 10:54:36 +0000 (11:54 +0100)]
libstdc++: Implement <experimental/scope> from LFTSv3
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new header.
* include/Makefile.in: Regenerate.
* include/experimental/scope: New file.
* testsuite/experimental/scopeguard/uniqueres.cc: New test.
* testsuite/experimental/scopeguard/exit.cc: New test.
Aldy Hernandez [Fri, 5 Aug 2022 06:04:10 +0000 (08:04 +0200)]
Inline unsupported_range constructor.
An unsupported_range temporary is instantiated in every Value_Range
for completeness sake and should be mostly a NOP. However, it's
showing up in the callgrind stats, because it's not inline. This
fixes the oversight.
Richard Biener [Fri, 5 Aug 2022 08:40:18 +0000 (10:40 +0200)]
tree-optimization/106533 - loop distribution of inner loop of nest
Loop distribution currently gives up if the outer loop of a loop
nest it analyzes contains a stmt with side-effects instead of
continuing to analyze the innermost loop. The following fixes that
by continuing anyway.
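A minimal hypothetical loop nest of the kind now handled; the function name and statements are illustrative only:
/* Hypothetical sketch: the call in the outer loop has side-effects, so the
   outer loop cannot be distributed, but analysis now continues with the
   innermost loop instead of giving up on the whole nest.  */
void
f (double *a, double *b, int n)
{
  for (int i = 0; i < n; i++)
    {
      __builtin_printf ("row %d\n", i);   /* side-effects in the outer loop */
      for (int j = 0; j < n; j++)
        {
          a[i * n + j] = 0.0;
          b[i * n + j] = 1.0;
        }
    }
}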
PR tree-optimization/106533
* tree-loop-distribution.cc (loop_distribution::execute): Continue
analyzing the inner loops when find_seed_stmts_for_distribution
fails.
Andrew Pinski [Fri, 5 Aug 2022 02:34:55 +0000 (19:34 -0700)]
[RISC-V] Fix 32bit riscv with zbs extension enabled
The problem here was a disconnect between the splittable_const_int_operand
predicate and the function riscv_build_integer_1 for 32-bit targets with zbs
enabled. The splittable_const_int_operand predicate had a check for
TARGET_64BIT which was not needed, so this patch removes it.
Committed as obvious after a build for riscv32-elf configured with --with-arch=rv32imac_zba_zbb_zbc_zbs.
Thanks,
Andrew Pinski
gcc/ChangeLog:
* config/riscv/predicates.md (splittable_const_int_operand):
Remove the check for TARGET_64BIT for single bit const values.
Andrew MacLeod [Thu, 4 Aug 2022 16:22:59 +0000 (12:22 -0400)]
Loop over intersected bitmaps.
compute_ranges_in_block loops over the import list and then checks the
same bit in exports. It is more efficient to loop over the intersection
of the two bitmaps.
PR tree-optimization/106514
* gimple-range-path.cc (path_range_query::compute_ranges_in_block):
Use EXECUTE_IF_AND_IN_BITMAP to loop over 2 bitmaps.
For the diamond PHI form in tree_ssa_phiopt_worker we need to
extract edge e2 sooner. This changes it so we extract it at the
same time we determine we have a diamond shape.
gcc/ChangeLog:
PR middle-end/106519
* tree-ssa-phiopt.cc (tree_ssa_phiopt_worker): Check final phi edge for
diamond shapes.
gcc/testsuite/ChangeLog:
PR middle-end/106519
* gcc.dg/pr106519.c: New test.
The LC SSA rewrite performs SSA verification at start but the VN
run performed on the unrolled-and-jammed body can leave us with
invalid SSA form until CFG cleanup is run. So make sure we do that
before rewriting into LC SSA.
PR tree-optimization/106521
* gimple-loop-jam.cc (tree_loop_unroll_and_jam): Perform
CFG cleanup manually before rewriting into LC SSA.
Richard Biener [Thu, 4 Aug 2022 07:21:24 +0000 (09:21 +0200)]
Backwards threader greedy search TLC
I've tried to understand how the greedy search works seeing the
bitmap dances and the split into resolve_phi. I've summarized
the intent of the algorithm as
// For further greedy searching we want to remove interesting
// names defined in BB but add ones on the PHI edges for the
// respective edges.
but the implementation differs in detail. In particular when
there is more than one interesting PHI in BB it seems to only consider
the first for translating defs across edges. It also only applies
the loop crossing restriction when there is an interesting PHI.
The following preserves the loop crossing restriction to the case
of interesting PHIs but merges resolve_phi back, changing interesting
as outlined with the intent above. It should get more threading
cases when there are multiple interesting PHI defs in a block.
It might be a bit faster due to less bitmap operations but in the
end the main intent was to make what happens more obvious.
Jonathan Wakely [Thu, 4 Aug 2022 09:20:18 +0000 (10:20 +0100)]
libstdc++: Rename data members of std::unexpected and std::bad_expected_access
The P2549R1 paper was accepted for C++23. I already implemented it for
our <expected>, but I didn't rename the private data members, only the
public member functions. This renames the data members for consistency
with the working draft.
libstdc++-v3/ChangeLog:
* include/std/expected (unexpected::_M_val): Rename to _M_unex.
(bad_expected_access::_M_val): Likewise.
Jonathan Wakely [Thu, 4 Aug 2022 09:18:23 +0000 (10:18 +0100)]
libstdc++: Update value of __cpp_lib_ios_noreplace macro
My P2467R1 proposal was accepted for C++23 so there's an official value
for this macro now.
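For illustration, a hypothetical feature-test check against the new value (assuming the C++23 exclusive-mode open flag this macro advertises):
#include <version>

#if __cpp_lib_ios_noreplace >= 202207L
/* std::ios_base::noreplace (exclusive-mode stream opening) is available.  */
#endif

int main () { return 0; }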
libstdc++-v3/ChangeLog:
* include/bits/ios_base.h (__cpp_lib_ios_noreplace): Update
value to 202207L.
* include/std/version (__cpp_lib_ios_noreplace): Likewise.
* testsuite/27_io/basic_ofstream/open/char/noreplace.cc: Check
for new value.
* testsuite/27_io/basic_ofstream/open/wchar_t/noreplace.cc:
Likewise.
Jonathan Wakely [Thu, 28 Jul 2022 15:15:58 +0000 (16:15 +0100)]
libstdc++: Unblock atomic wait on non-futex platforms [PR106183]
When using a mutex and condition variable, the notifying thread needs to
increment _M_ver while holding the mutex lock, and the waiting thread
needs to re-check after locking the mutex. This avoids a missed
notification as described in the PR.
By moving the increment of _M_ver to the base _M_notify we can make the
use of the mutex local to the use of the condition variable, and
simplify the code a little. We can use a relaxed store because the mutex
already provides sequential consistency. Also we don't need to check
whether __addr == &_M_ver because we know that's always true for
platforms that use a condition variable, and so we also know that we
always need to use notify_all() not notify_one().
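A minimal stand-alone sketch of the protocol described above (this is not the libstdc++ code and the names are hypothetical): the notifier bumps a version counter while holding the mutex, and the waiter re-checks the counter after locking, so a notification between the waiter's initial check and its wait cannot be lost.
#include <condition_variable>
#include <mutex>

struct waiter_pool_sketch
{
  std::mutex mtx;
  std::condition_variable cv;
  unsigned ver = 0;

  void notify ()
  {
    {
      std::lock_guard<std::mutex> lock (mtx);
      ++ver;                       // increment while holding the mutex
    }
    cv.notify_all ();              // every waiter must be woken
  }

  void wait (unsigned old_ver)
  {
    std::unique_lock<std::mutex> lock (mtx);
    cv.wait (lock, [&] { return ver != old_ver; });   // re-check after locking
  }
};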
Reviewed-by: Thomas Rodgers <trodgers@redhat.com>
libstdc++-v3/ChangeLog:
PR libstdc++/106183
* include/bits/atomic_wait.h (__waiter_pool_base::_M_notify):
Move increment of _M_ver here.
[!_GLIBCXX_HAVE_PLATFORM_WAIT]: Lock mutex around increment.
Use relaxed memory order and always notify all waiters.
(__waiter_base::_M_do_wait) [!_GLIBCXX_HAVE_PLATFORM_WAIT]:
Check value again after locking mutex.
(__waiter_base::_M_notify): Remove increment of _M_ver.
Ulrich Drepper [Thu, 4 Aug 2022 11:18:05 +0000 (13:18 +0200)]
Adjust index number of tuple pretty printer
The tuple pretty printer uses 1-based indices, which is quite confusing
considering that access to the same values with the std::get functions
uses 0-based indices. This patch changes the pretty printer since
this is not a guaranteed API.
libstdc++-v3/ChangeLog:
* python/libstdcxx/v6/printers.py (class StdTuplePrinter): Use
zero-based indices just like std::get takes.
PR106342 - IBM zSystems: Provide vsel for all vector modes
dg.exp=pr104612.c fails with an ICE on s390x, because copysignv2sf3
produces an insn that vsel<mode> is supposed to recognize, but can't,
because it's not defined for V2SF. Fix by defining it for all vector
modes supported by copysign<mode>3.
gcc/ChangeLog:
* config/s390/vector.md (V_HW_FT): New iterator.
* config/s390/vx-builtins.md (vsel<mode>): Use V_HW_FT instead
of V_HW.
Testing has shown that using the load vector pair and store vector pair
instructions for block moves has some performance issues on power10.
A patch on June 11th modified the code so that GCC would not set
-mblock-ops-vector-pair by default if we are tuning for power10, but it would
set the option if we were tuning for a different machine and have load and store
vector pair instructions enabled.
This patch eliminates the code setting -mblock-ops-vector-pair. If you want to
generate load vector pair and store vector pair instructions for block moves,
you must use -mblock-ops-vector-pair.
2022-08-03 Michael Meissner <meissner@linux.ibm.com>
Andrew MacLeod [Wed, 3 Aug 2022 17:55:42 +0000 (13:55 -0400)]
Do not walk equivalence set in path_oracle::killing_def.
When killing a def in the path ranger, there is no need to walk the set
of existing equivalences clearing bits. An equivalence match requires
that both ssa-names be in each other's set. As killing_def
creates a new empty set containing only the current def, it already
ensures false equivalences won't happen.
PR tree-optimization/106514
* value-relation.cc (path_oracle::killing_def): Do not walk the
equivalence set clearing bits.
The regexps in the test btf-int-1.c were not working properly with the
commenting style of at least one target: powerpc64le-linux-gnu. This
patch changes the test to use better regexps.
Tested in bpf-unknown-none, x86_64-linux-gnu and powerpc64le-linux-gnu.
Pushed to master as obvious.
gcc/testsuite/ChangeLog:
PR testsuite/106515
* gcc.dg/debug/btf/btf-int-1.c: Fix regexps in
scan-assembler-times.
The same function also immediately deals with turning a minimization problem
into a maximization one if the results are inverted. We do this here since
doing it in match.pd would end up changing the shape of the BBs and adding
additional instructions which would prevent various optimizations from working.
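As a rough illustration (a hypothetical sketch, not one of the new minmax-*.c testcases), a three-way minimum whose selection goes through a diamond-shaped CFG with a PHI at the join; whether this exact shape is caught depends on how the arms are gimplified:
/* Each arm of the diamond computes a two-way minimum and the PHI at the
   join selects between them; the whole thing is equivalent to
   min (min (a, b), c).  */
int
three_way_min (int a, int b, int c)
{
  int r;
  if (a < b)
    r = a < c ? a : c;
  else
    r = b < c ? b : c;
  return r;
}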
gcc/ChangeLog:
* tree-ssa-phiopt.cc (minmax_replacement): Optionally search for the phi
sequence of a three-way conditional.
(replace_phi_edge_with_variable): Support diamonds.
(tree_ssa_phiopt_worker): Detect diamond phi structure for three-way
min/max.
(strip_bit_not, invert_minmax_code): New.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/split-path-1.c: Disable phi-opts so we don't optimize
code away.
* gcc.dg/tree-ssa/minmax-10.c: New test.
* gcc.dg/tree-ssa/minmax-11.c: New test.
* gcc.dg/tree-ssa/minmax-12.c: New test.
* gcc.dg/tree-ssa/minmax-13.c: New test.
* gcc.dg/tree-ssa/minmax-14.c: New test.
* gcc.dg/tree-ssa/minmax-15.c: New test.
* gcc.dg/tree-ssa/minmax-16.c: New test.
* gcc.dg/tree-ssa/minmax-3.c: New test.
* gcc.dg/tree-ssa/minmax-4.c: New test.
* gcc.dg/tree-ssa/minmax-5.c: New test.
* gcc.dg/tree-ssa/minmax-6.c: New test.
* gcc.dg/tree-ssa/minmax-7.c: New test.
* gcc.dg/tree-ssa/minmax-8.c: New test.
* gcc.dg/tree-ssa/minmax-9.c: New test.
In upstream dmd, the compiler front-end and run-time have been merged
together into one repository. Both dmd and libdruntime now track that.
D front-end changes:
- Deprecated `scope(failure)' blocks that contain `return' statements.
- Deprecated using integers for `version' or `debug' conditions.
- Deprecated returning a discarded void value from a function.
- `new' can now allocate an associative array.
D runtime changes:
- Added avx512f detection to core.cpuid module.
Phobos changes:
- Changed std.experimental.logger.core.sharedLog to return
shared(Logger).
gcc/d/ChangeLog:
* dmd/MERGE: Merge upstream dmd d7772a2369.
* dmd/VERSION: Bump version to v2.100.1.
* d-codegen.cc (get_frameinfo): Check whether decision to generate
closure changed since semantic finished.
* d-lang.cc (d_handle_option): Remove handling of -fdebug=level and
-fversion=level.
* decl.cc (DeclVisitor::visit (VarDeclaration *)): Generate evaluation
of noreturn variable initializers before throw.
* expr.cc (ExprVisitor::visit (AssignExp *)): Don't generate
assignment for noreturn types, only evaluate for side effects.
* lang.opt (fdebug=): Undocument -fdebug=level.
(fversion=): Undocument -fversion=level.
cselib: add function to check if SET is redundant [PR106187]
A SET operation that writes memory may have the same value as an
earlier store but if the alias sets of the new and earlier store do
not conflict then the set is not truly redundant. This can happen,
for example, if objects of different types share a stack slot.
To fix this we define a new function in cselib that first checks for
equality and if that is successful then finds the earlier store in the
value history and checks the alias sets.
The routine is used in two places elsewhere in the compiler:
cfgcleanup and postreload.
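A hypothetical illustration of the situation described above (whether the two locals actually share a slot depends on the compiler's stack-slot sharing decisions; the names are illustrative):
/* Two locals with disjoint lifetimes may be assigned the same stack slot.
   A later store of the "same" bit pattern through a different type must
   not be treated as redundant, because its alias set differs from that of
   the earlier store.  */
void use_int (int *);
void use_float (float *);

void
f (void)
{
  {
    int i = 0;                /* store of all-zero bits with int's alias set */
    use_int (&i);
  }
  {
    float x = 0.0f;           /* may reuse i's slot; same bits, different alias set */
    use_float (&x);
  }
}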
gcc/ChangeLog:
PR rtl-optimization/106187
* alias.h (mems_same_for_tbaa_p): Declare.
* alias.cc (mems_same_for_tbaa_p): New function.
* dse.cc (record_store): Use it instead of open-coding
alias check.
* cselib.h (cselib_redundant_set_p): Declare.
* cselib.cc: Include alias.h
(cselib_redundant_set_p): New function.
* cfgcleanup.cc (mark_effect): Use cselib_redundant_set_p instead
of rtx_equal_for_cselib_p.
* postreload.cc (reload_cse_simplify): Use cselib_redundant_set_p.
(reload_cse_noop_set_p): Delete.
Martin Liska [Mon, 1 Aug 2022 13:50:43 +0000 (15:50 +0200)]
gcov-dump: add --stable option
The option prints TOP N counters in a stable format
usable for comparison (diff).
gcc/ChangeLog:
* doc/gcov-dump.texi: Document the new option.
* gcov-dump.cc (main): Parse the new option.
(print_usage): Show the option.
(tag_counters): Sort key:value pairs of TOP N counter.
Roger Sayle [Wed, 3 Aug 2022 08:07:36 +0000 (09:07 +0100)]
PR target/47949: Use xchg to move from/to AX_REG with -Oz on x86.
This patch adds a peephole2 to i386.md to implement the suggestion in
PR target/47949, of using xchg instead of mov for moving values to/from
the %rax/%eax register, controlled by -Oz, as the xchg instruction is
one byte shorter than the move it is replacing.
The new test case is taken from the PR:
int foo(int x) { return x; }
where previously we'd generate:
foo: mov %edi,%eax // 2 bytes
ret
but with this patch, using -Oz, we generate:
foo: xchg %eax,%edi // 1 byte
ret
On the CSiBE benchmark, this saves a total of 10238 bytes (reducing
the -Oz total from 3661796 bytes to 3651558 bytes, a 0.28% saving).
Interestingly, some modern architectures (such as Zen 3) implement
xchg using zero latency register renaming (just like mov), so in theory
this transformation could be enabled when optimizing for speed, if
benchmarking shows the improved code density produces consistently
better performance. However, this is architecture dependent, and
there may be interactions using xchg (instead of a single_set) in the
late RTL passes (such as cprop_hardreg), so for now I've restricted
this to -Oz.
2022-08-03 Roger Sayle <roger@nextmovesoftware.com>
Uroš Bizjak <ubizjak@gmail.com>
gcc/ChangeLog
PR target/47949
* config/i386/i386.md (peephole2): New peephole2 to convert
SWI48 moves to/from %rax/%eax where the src is dead to xchg,
when optimizing for minimal size with -Oz.
gcc/testsuite/ChangeLog
PR target/47949
* gcc.target/i386/pr47949.c: New test case.
Roger Sayle [Wed, 3 Aug 2022 08:03:17 +0000 (09:03 +0100)]
Improved pre-reload split of double word comparison against -1 on x86.
This patch adds an extra optimization to *cmp<dwi>_doubleword to improve
the code generated for comparisons against -1. Hypothetically, if a
comparison against -1 reached this splitter we'd currently generate code
that looks like:
which is both faster and smaller, and also what's currently generated
thanks to the middle-end splitting double word comparisons against
zero and minus one during RTL expansion. Should that change, this would
become a missed-optimization regression, but this patch also (potentially)
helps suitable comparisons created by CSE and combine.
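For illustration, a hypothetical source-level form of such a comparison (in the same spirit as the cmpti tests above):
/* Hypothetical example: a 128-bit equality comparison against -1, which the
   splitter can now handle with an AND of the two halves followed by a
   compare against -1, as described in the ChangeLog below.  */
__int128 x;
int
all_ones (void)
{
  return x == -1;
}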
2022-08-03 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386.md (*cmp<dwi>_doubleword): Add a special case
to split comparisons against -1 using AND and CMP -1 instructions.
movdqa b(%rip), %xmm0
pslldq $2, %xmm0
movaps %xmm0, a(%rip)
ret
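A hypothetical source fragment that could produce a vectorized shift like the assembly above: a 128-bit left shift by 16 bits, i.e. a multiple of 8, which the ChangeLog below makes an STV candidate.
unsigned __int128 a, b;

void
f (void)
{
  a = b << 16;   /* shift by a multiple of 8 bits; convertible to a V1TImode byte shift */
}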
2022-08-03 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386-features.cc (compute_convert_gain): Add gain
for converting suitable TImode shift to a V1TImode shift.
(timode_scalar_chain::convert_insn): Add support for converting
suitable ASHIFT and LSHIFTRT.
(timode_scalar_to_vector_candidate_p): Consider logical shifts
by integer constants that are multiples of 8 to be candidates.
gcc/testsuite/ChangeLog
* gcc.target/i386/sse4_1-stv-7.c: New test case.
Roger Sayle [Wed, 3 Aug 2022 07:55:35 +0000 (08:55 +0100)]
Some additional zero-extension related optimizations in simplify-rtx.
This patch implements some additional zero-extension and sign-extension
related optimizations in simplify-rtx.cc. The original motivation comes
from PR rtl-optimization/71775, where in comment #2 Andrew Pinski sees:
Failed to match this instruction:
(set (reg:DI 88 [ _1 ])
(sign_extend:DI (subreg:SI (ctz:DI (reg/v:DI 86 [ x ])) 0)))
On many platforms the result of DImode CTZ is constrained to be a
small unsigned integer (between 0 and 64), hence the truncation to
32-bits (using a SUBREG) and the following sign extension back to
64-bits are effectively a no-op, so the above should ideally (often)
be simplified to "(set (reg:DI 88) (ctz:DI (reg/v:DI 86 [ x ]))".
To implement this, and some closely related transformations, we build
upon the existing val_signbit_known_clear_p predicate. In the first
chunk, nonzero_bits knows that FFS and ABS can't leave the sign-bit
bit set, so the simplification of ABS (ABS (x)) and ABS (FFS (x))
can itself be simplified. The second transformation is that we can
canonicalize SIGN_EXTEND to ZERO_EXTEND (as in the PR 71775 case above)
when the operand's sign-bit is known to be clear. The final two chunks
are for SIGN_EXTEND of a truncating SUBREG, and ZERO_EXTEND of a
truncating SUBREG respectively. The nonzero_bits of a truncating
SUBREG pessimistically thinks that the upper bits may have an
arbitrary value (by taking the SUBREG), so we need to look deeper at the
SUBREG's operand to confirm that the high bits are known to be zero.
Unfortunately, for PR rtl-optimization/71775, ctz:DI on x86_64 with
default architecture options is undefined at zero, so we can't be sure
the upper bits of reg:DI 88 will be sign extended (all zeros or all ones).
nonzero_bits knows this, so the above transformations don't trigger,
but the transformations themselves are perfectly valid for other
operations such as FFS, POPCOUNT and PARITY, and on other targets/-march
settings where CTZ is defined at zero.
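As a hypothetical example of where the new simplifications can apply, using POPCOUNT (which, unlike CTZ on x86_64, is defined for all inputs):
/* The DImode popcount result is always between 0 and 64, so truncating it
   to 32 bits and extending back to 64 bits is a no-op that simplify-rtx
   can now remove.  */
long
f (unsigned long x)
{
  return __builtin_popcountl (x);
}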
2022-08-03 Roger Sayle <roger@nextmovesoftware.com>
Segher Boessenkool <segher@kernel.crashing.org>
Richard Sandiford <richard.sandiford@arm.com>
gcc/ChangeLog
* simplify-rtx.cc (simplify_unary_operation_1) <ABS>: Add
optimizations for CLRSB, PARITY, POPCOUNT, SS_ABS and LSHIFTRT
that are all positive to complement the existing FFS and
idempotent ABS simplifications.
<SIGN_EXTEND>: Canonicalize SIGN_EXTEND to ZERO_EXTEND when
val_signbit_known_clear_p is true of the operand.
Simplify sign extensions of SUBREG truncations of operands
that are already suitably (zero) extended.
<ZERO_EXTEND>: Simplify zero extensions of SUBREG truncations
of operands that are already suitably zero extended.
Andrew MacLeod [Tue, 2 Aug 2022 21:31:37 +0000 (17:31 -0400)]
Do not register edges for statements not understood.
Previously, all gimple_cond types were understood; with float values,
this is no longer true. We should gracefully do nothing if the
gcond type is not supported.
Aldy Hernandez [Tue, 2 Aug 2022 18:56:49 +0000 (20:56 +0200)]
Adjust testsuite/gcc.dg/tree-ssa/vrp-float-1.c
I missed the -details dump flag, plus I wasn't checking the actual folding.
As a bonus I had flipped the dump file name and the count, so the test
was coming out as unresolved, which I missed because I was only checking
for failures and passes.
Whooops.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/vrp-float-1.c: Adjust test so it passes.
Andrew MacLeod [Fri, 29 Jul 2022 16:05:38 +0000 (12:05 -0400)]
Check equivalencies when calculating range on entry.
When propagating on-entry values in the cache, checking if any equivalence
has a known value can improve results. No new calculations are made.
Only queries via dominators which do not populate the cache are checked.
PR tree-optimization/106474
gcc/
* gimple-range-cache.cc (ranger_cache::fill_block_cache): Query
range of equivalences that may contribute to the range.
Jose E. Marchesi [Fri, 22 Jul 2022 10:40:50 +0000 (12:40 +0200)]
btf: do not use the CHAR `encoding' bit for BTF
Contrary to CTF and our previous expectations, as per [1], it turns out
that in BTF:
1) The `encoding' field in integer types shall not be treated as a
bitmap, but as an enumerated value, i.e. these bits are exclusive to each
other.
2) The CHAR bit in `encoding' shall _not_ be set when emitting types
for char nor `unsigned char'.
Consequently this patch clears the CHAR bit before emitting the
variable part of BTF integral types. It also updates the testsuite
accordingly, expanding it to check for BOOL bits.
Richard Biener [Tue, 2 Aug 2022 07:58:44 +0000 (09:58 +0200)]
Properly honor param_max_fsm_thread_path_insns in backwards threader
I am trying to make sense of back_threader_profitability::profitable_path_p
and the first thing I notice is that we do
/* Threading is profitable if the path duplicated is hot but also
in a case we separate cold path from hot path and permit optimization
of the hot path later. Be on the agressive side here. In some testcases,
as in PR 78407 this leads to noticeable improvements. */
if (m_speed_p
&& ((taken_edge && optimize_edge_for_speed_p (taken_edge))
|| contains_hot_bb))
{
if (n_insns >= param_max_fsm_thread_path_insns)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, " FAIL: Jump-thread path not considered: "
"the number of instructions on the path "
"exceeds PARAM_MAX_FSM_THREAD_PATH_INSNS.\n");
return false;
}
...
}
else if (!m_speed_p && n_insns > 1)
{
if (dump_file && (dump_flags & TDF_DETAILS))
fprintf (dump_file, " FAIL: Jump-thread path not considered: "
"duplication of %i insns is needed and optimizing for size.\n",
n_insns);
return false;
}
...
return true;
thus we apply the n_insns >= param_max_fsm_thread_path_insns only
to "hot paths". The comment above this isn't entirely clear whether
this is by design ("Be on the aggressive side here ...") but I think
this is a mistake. In fact the "hot path" check seems entirely
useless since if the path is not hot we simply continue threading it.
This was caused by r12-324-g69e5544210e3c0 and the following simply
reverts the offending change.
* tree-ssa-threadbackward.cc
(back_threader_profitability::profitable_path_p): Apply
size constraints to all paths again.
Implement basic range operators to enable floating point VRP.
Without further ado, here is the implementation for floating point
range operators, plus the switch to enable all ranger clients to
handle floats.
These are bare bone implementations good enough for relation operators
to work, while keeping the NAN bits up to date in the frange. There
is also minimal support for keeping track of +-INF when it is obvious.
* g++.dg/opt/pr94589-2.C: XFAIL.
* gcc.dg/tree-ssa/vrp-float-1.c: New test.
* gcc.dg/tree-ssa/vrp-float-11.c: New test.
* gcc.dg/tree-ssa/vrp-float-3.c: New test.
* gcc.dg/tree-ssa/vrp-float-4.c: New test.
* gcc.dg/tree-ssa/vrp-float-6.c: New test.
* gcc.dg/tree-ssa/vrp-float-7.c: New test.
* gcc.dg/tree-ssa/vrp-float-8.c: New test.
This patch allows us to export floating point ranges into the SSA name
(SSA_NAME_RANGE_INFO).
[Richi, in PR24021 you suggested that match.pd could use global float
ranges, because it would generally not invoke ranger. This patch
implements the boiler plate to save the frange globally.]
[Jeff, we've also been talking in parallel of using NAN knowledge
during expansion to RTL. This patch will provide the NAN bits in the
SSA name.]
Since frange's current implementation is just a shell, with no
actual endpoints, frange_storage_slot only contains frange_props which
fits inside a byte. When we have endpoints, y'all can decide if it's
worth saving them, or if the NAN/etc bits are good enough.
gcc/ChangeLog:
* tree-core.h (struct tree_ssa_name): Add frange_info and
reshuffle the rest.
* value-range-storage.cc (vrange_storage::alloc_slot): Add case
for frange.
(vrange_storage::set_vrange): Same.
(vrange_storage::get_vrange): Same.
(vrange_storage::fits_p): Same.
(frange_storage_slot::alloc_slot): New.
(frange_storage_slot::set_frange): New.
(frange_storage_slot::get_frange): New.
(frange_storage_slot::fits_p): New.
* value-range-storage.h (class frange_storage_slot): New.
Aldy Hernandez [Tue, 2 Aug 2022 10:14:22 +0000 (12:14 +0200)]
Limit ranger query in ipa-prop.cc to integrals.
ipa-* still works on legacy value_range's which only support
integrals. This patch limits the query to integrals, so as not to get a
floating point range that can't exist in an irange.
gcc/ChangeLog:
* ipa-prop.cc (ipa_compute_jump_functions_for_edge): Limit ranger
query to integrals.
Martin Liska [Tue, 2 Aug 2022 07:58:43 +0000 (09:58 +0200)]
IPA: reduce what we dump in normal mode
gcc/ChangeLog:
* profile.cc (compute_branch_probabilities): Dump details only
if TDF_DETAILS.
* symtab.cc (symtab_node::dump_base): Do not dump pointer unless
TDF_ADDRESS is used, it makes comparison harder.
Richard Biener [Tue, 2 Aug 2022 06:37:16 +0000 (08:37 +0200)]
tree-optimization/106498 - reduce SSA updates in autopar
The following reduces the number of SSA updates done during autopar
OMP expansion, specifically avoiding the cases that just add virtual
operands (where maybe none have been before) in dead regions of the CFG.
Instead virtual SSA update is delayed until after the pass. There's
much more TLC needed here, but test coverage makes it really difficult.
PR tree-optimization/106498
* omp-expand.cc (expand_omp_taskreg): Do not perform virtual
SSA update here.
(expand_omp_for): Or here.
(execute_expand_omp): Instead schedule it here together
with CFG cleanup via TODO.
Richard Biener [Mon, 1 Aug 2022 12:59:08 +0000 (14:59 +0200)]
tree-optimization/106495 - avoid threading to possibly never executed edge
The following builds upon the logic of the PR105679 fix by avoiding
to thread to a known edge that is predicted as probably never executed.
PR tree-optimization/106495
* tree-ssa-threadbackward.cc
(back_threader_profitability::profitable_path_p): If known_edge
is probably never executed avoid threading.
David Malcolm [Mon, 1 Aug 2022 23:30:15 +0000 (19:30 -0400)]
c: improvements to address space diagnostics
This adds a clarifying "note" to address space mismatch diagnostics.
For example, it improves the diagnostic for
gcc.target/i386/addr-space-typeck-2.c from:
addr-space-typeck-2.c: In function 'test_bad_call':
addr-space-typeck-2.c:12:22: error: passing argument 2 of 'expects_seg_gs'
from pointer to non-enclosed address space
12 | expects_seg_gs (0, ptr, 1);
| ^~~
to:
addr-space-typeck-2.c: In function 'test_bad_call':
addr-space-typeck-2.c:12:22: error: passing argument 2 of 'expects_seg_gs'
from pointer to non-enclosed address space
12 | expects_seg_gs (0, ptr, 1);
| ^~~
addr-space-typeck-2.c:7:51: note: expected '__seg_gs void *' but argument
is of type 'void *'
7 | extern void expects_seg_gs (int i, void __seg_gs *param, int j);
| ~~~~~~~~~~~~~~~^~~~~
I took the liberty of adding the test coverage to i386 since we need
a specific target to test this on.
gcc/c/ChangeLog:
* c-typeck.cc (build_c_cast): Quote names of address spaces in
diagnostics.
(convert_for_assignment): Add a note to address space mismatch
diagnostics, specifying the expected and actual types.
gcc/testsuite/ChangeLog:
* gcc.target/i386/addr-space-typeck-1.c: New test.
* gcc.target/i386/addr-space-typeck-2.c: New test.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Roger Sayle [Mon, 1 Aug 2022 22:08:23 +0000 (23:08 +0100)]
PR target/106481: Handle CONST_WIDE_INT in REG_EQUAL during STV on x86_64.
This patch resolves PR target/106481, and is an oversight in my recent
battles with REG_EQUAL notes during TImode STV (see PR target/106278
https://gcc.gnu.org/pipermail/gcc-patches/2022-July/598416.html).
The current behaviour (from the patch above) is that we check that the mode of
the REG_EQUAL note is TImode before using PUT_MODE to set it to V1TImode.
However, the new test case reveals that this doesn't consider REG_EQUAL
notes that are CONST_INT or CONST_WIDE_INT, i.e. that are VOIDmode,
and so STV produces:
2022-08-01 Roger Sayle <roger@nextmovesoftware.com>
Uroš Bizjak <ubizjak@gmail.com>
gcc/ChangeLog
PR target/106481
* config/i386/i386-features.cc (timode_scalar_chain::convert_insn):
Convert a CONST_SCALAR_INT_P in a REG_EQUAL note into a V1TImode
CONST_VECTOR.
gcc/testsuite/ChangeLog
PR target/106481
* gcc.target/i386/pr106481.c: New test case.
H.J. Lu [Wed, 20 Jul 2022 23:57:32 +0000 (16:57 -0700)]
x86: Add ix86_ifunc_ref_local_ok
We can't always use the PLT entry as the function address for local IFUNC
functions. When the PIC register is needed for PLT call, indirect call
via the PLT entry will fail since the PIC register may not be set up
properly for indirect call. Add ix86_ifunc_ref_local_ok to return false
when the PLT entry can't be used as local IFUNC function pointers.
gcc/
PR target/83782
* config/i386/i386.cc (ix86_ifunc_ref_local_ok): New.
(TARGET_IFUNC_REF_LOCAL_OK): Use it.
btf: emit linkage information in BTF_KIND_FUNC entries
The kernel bpftool expects BTF_KIND_FUNC entries in BTF to include an
annotation reflecting the linkage of functions (static, global). For
whatever reason they abuse the `vlen' field of the BTF_KIND_FUNC entry
instead of adding a variable-part to the record like it is done with
other entry kinds.
This patch makes GCC include this linkage info in BTF_KIND_FUNC
entries.
Tested in bpf-unknown-none target.
gcc/ChangeLog:
PR debug/106263
* ctfc.h (struct ctf_dtdef): Add field linkage.
* ctfc.cc (ctf_add_function): Set ctti_linkage.
* dwarf2ctf.cc (gen_ctf_function_type): Pass a linkage for
function types and subprograms.
* btfout.cc (btf_asm_func_type): Emit linkage information for the
function.
(btf_dtd_emit_preprocess_cb): Propagate the linkage information
for functions.
gcc/testsuite/ChangeLog:
PR debug/106263
* gcc.dg/debug/btf/btf-function-4.c: New test.
* gcc.dg/debug/btf/btf-function-5.c: Likewise.
Sam Feifer [Fri, 29 Jul 2022 13:44:48 +0000 (09:44 -0400)]
match.pd: Add new division pattern [PR104992]
This patch fixes a missed optimization in match.pd. It takes the pattern,
x / y * y == x, and optimizes it to x % y == 0. This produces fewer
instructions. This simplification does not happen for complex types.
This patch also adds tests for the optimization rule.
Bootstrapped/regtested on x86_64-pc-linux-gnu.
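A hypothetical example of the simplified form (not one of the new tests):
/* x / y * y == x is folded to x % y == 0, avoiding the division and
   multiplication.  */
int
is_multiple (int x, int y)
{
  return x / y * y == x;
}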
PR tree-optimization/104992
gcc/ChangeLog:
* match.pd (x / y * y == x): New simplification.
gcc/testsuite/ChangeLog:
* g++.dg/pr104992-1.C: New test.
* gcc.dg/pr104992.c: New test.
Roger Sayle [Mon, 1 Aug 2022 10:36:23 +0000 (11:36 +0100)]
Update configure to check for a recent gnat Ada compiler.
GCC fails to bootstrap when configured with --enable-languages=all on
machines that have older versions of GNAT installed as the system Ada
compiler. In configure, it's not sufficient to check whether gnat is
available, but whether a sufficiently recent version of GNAT is
installed. This patch tweaks config/acx.m4 so that conftest.adb also
contains a reference to System.CRTL.int64 as required by the current
version of gcc/ada/osint.adb. This fixes the build when the system
Ada is GNAT v4.8.5 (on Redhat 7) by disabling ada, but continues to
work fine when the system Ada is GNAT v11.3.1.
2022-08-01 Roger Sayle <roger@nextmovesoftware.com>
Arnaud Charlet <charlet@adacore.com>
config/ChangeLog
* acx.m4 (AC_PROG_GNAT): Update conftest.adb to include
features required of the host gnat compiler.
Jakub Jelinek [Mon, 1 Aug 2022 06:26:03 +0000 (08:26 +0200)]
libfortran: Fix up boz_15.f90 on powerpc64le with -mabi=ieeelongdouble [PR106079]
The boz_15.f90 test FAILs on powerpc64le-linux when -mabi=ieeelongdouble
is used (either default through --with-long-double-format=ieee or
when used explicitly).
The problem is that the read/write transfer routines are called with
BT_REAL (or BT_COMPLEX) type and kind 17 which is magic we use to say
it is the IEEE quad real(kind=16) rather than the IBM double double
real(kind=16). For the floating point input/output we then handle kind
17 specially, but for B/O/Z we just treat the bytes of the floating point
value as a binary blob, and using 17 in that case results in unexpected
behavior: for write it means we don't correctly estimate how many chars we'll
need and print ******************** etc. rather than what we should, and
even with explicit size we'd print one further byte than intended.
For read it would even mean overwriting some unrelated byte after the
floating point object.
Fixed by using 16 instead of 17 in the read_radix and write_{b,o,z} calls.
2022-08-01 Jakub Jelinek <jakub@redhat.com>
PR libfortran/106079
* io/transfer.c (formatted_transfer_scalar_read,
formatted_transfer_scalar_write): For type BT_REAL with kind 17
change kind to 16 before calling read_radix or write_{b,o,z}.
These are some assorted cleanups to the frange class to make it easier
to drop in an implementation with FP endpoints:
* frange::set() had some asserts limiting the type of arguments
passed. There's no reason why we can't handle all the variants.
Worst comes to worst, we can always return a VARYING which is
conservative and correct.
* frange::normalize_kind() now returns a boolean that can be used in
union and intersection to indicate that the range changed.
* Implement vrp_val_max and vrp_val_min for floats. Also, move them
earlier in the header file so frange can use them.
Tested on x86-64 Linux.
gcc/ChangeLog:
* value-range.cc (tree_compare): New.
(frange::set): Make more general.
(frange::normalize_kind): Cleanup and return bool.
(frange::union_): Use normalize_kind return value.
(frange::intersect): Same.
(frange::verify_range): Remove unnecessary else.
* value-range.h (vrp_val_max): Move before frange class.
(vrp_val_min): Same.
(frange::frange): Remove set to m_type.
Make irange dependency explicit for range_of_ssa_name_with_loop_info.
Even though ranger is type agnostic, SCEV seems to only work with
integers. This patch removes some FIXME notes making it explicit that
bounds_of_var_in_loop only works with iranges.
Tested on x86-64 Linux.
gcc/ChangeLog:
* gimple-range-fold.cc (fold_using_range::range_of_phi): Only
query SCEV for integers.
(fold_using_range::range_of_ssa_name_with_loop_info): Remove
irange check.
Roger Sayle [Sun, 31 Jul 2022 20:51:44 +0000 (21:51 +0100)]
Add rotl64ti2_doubleword pattern to i386.md
This patch adds rot[lr]64ti2_doubleword patterns to the x86_64 backend,
to move splitting of 128-bit TImode rotates by 64 bits after reload,
matching what we now do for 64-bit DImode rotations by 32 bits with -m32.
In theory moving when this rotation is split should have little
influence on code generation, but in practice "reload" sometimes
decides to make use of the increased flexibility to reduce the number
of registers used, and the code size, by using xchg.
For example:
__int128 x;
__int128 y;
__int128 a;
__int128 b;
void foo()
{
unsigned __int128 t = x;
t ^= a;
t = (t<<64) | (t>>64);
t ^= b;
y = t;
}
On some modern architectures this is a small win, on some older
architectures this is a small loss. The decision which code to
generate is made in "reload", and could probably be tweaked by
register preferencing. The much bigger win is that (eventually) all
TImode mode shifts and rotates by constants will become potential
candidates for TImode STV.
2022-07-31 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
* config/i386/i386.md (define_expand <any_rotate>ti3): For
rotations by 64 bits use new rot[lr]64ti2_doubleword pattern.
(rot[lr]64ti2_doubleword): New post-reload splitter.
Roger Sayle [Sun, 31 Jul 2022 20:44:51 +0000 (21:44 +0100)]
PR target/106450: Tweak timode_remove_non_convertible_regs on x86_64.
This patch resolves PR target/106450, some more fall-out from more
aggressive TImode scalar-to-vector (STV) optimizations. I continue
to be caught out by how far TImode STV has diverged from DImode/SImode
STV, and therefore requires additional (unexpected) tweaking. Many
thanks to H.J. Lu for pointing out timode_remove_non_convertible_regs
needs to be extended to handle XOR (and other new operations).
Unhelpfully the comment above this function states that it's the TImode
version of "remove_non_convertible_regs", which doesn't exist anymore,
so I've resurrected an explanatory comment from the git history.
By refactoring the checks for hard regs and already "marked" regs
into timode_check_non_convertible_regs itself, all of its callers are
simplified. This patch then uses FOR_EACH_INSN_USE and FOR_EACH_INSN_DEF
to generically handle arbitrary (non-move) instructions (including
unary and binary operations), calling timode_check_non_convertible_regs
on each TImode register USE and DEF.
2022-07-31 Roger Sayle <roger@nextmovesoftware.com>
H.J. Lu <hjl.tools@gmail.com>
gcc/ChangeLog
PR target/106450
* config/i386/i386-features.cc (timode_check_non_convertible_regs):
Do nothing if REGNO is set in the REGS bitmap, or is a hard reg.
(timode_remove_non_convertible_regs): Update comment.
Call timode_check_non_convertible_reg on all TImode register
DEFs and USEs in each instruction.
gcc/testsuite/ChangeLog
PR target/106450
* gcc.target/i386/pr106450.c: New test case.
Harald Anlauf [Thu, 28 Jul 2022 20:07:02 +0000 (22:07 +0200)]
Fortran: detect blanks within literal constants in free-form mode [PR92805]
gcc/fortran/ChangeLog:
PR fortran/92805
* match.cc (gfc_match_small_literal_int): Make gobbling of leading
whitespace optional.
(gfc_match_name): Likewise.
(gfc_match_char): Likewise.
* match.h (gfc_match_small_literal_int): Adjust prototype.
(gfc_match_name): Likewise.
(gfc_match_char): Likewise.
* primary.cc (match_kind_param): Match small literal int or name
without gobbling whitespace.
(get_kind): Do not skip over blanks.
(match_string_constant): Likewise.
gcc/testsuite/ChangeLog:
PR fortran/92805
* gfortran.dg/literal_constants.f: New test.
* gfortran.dg/literal_constants.f90: New test.
Co-authored-by: Steven G. Kargl <kargl@gcc.gnu.org>
Harald Anlauf [Wed, 27 Jul 2022 19:34:22 +0000 (21:34 +0200)]
Fortran: fix invalid rank error in ASSOCIATED when rank is remapped [PR77652]
gcc/fortran/ChangeLog:
PR fortran/77652
* check.cc (gfc_check_associated): Make the rank check of POINTER
vs. TARGET match the allowed forms of pointer assignment for the
selected Fortran standard.
gcc/testsuite/ChangeLog:
PR fortran/77652
* gfortran.dg/associated_target_9a.f90: New test.
* gfortran.dg/associated_target_9b.f90: New test.
Lewis Hyatt [Tue, 12 Jul 2022 13:47:47 +0000 (09:47 -0400)]
c++: Fix location for -Wunused-macros [PR66290]
In C++, since all tokens are lexed from libcpp up front, diagnostics generated
by libcpp after lexing has completed do not get a valid location from libcpp
(rather, libcpp thinks they all pertain to the end of the file.) This has long
been addressed using the global variable "done_lexing", which the C++ frontend
sets at the appropriate time; when done_lexing is true, then c_cpp_diagnostic(),
which outputs libcpp's diagnostics, uses input_location instead of the wrong
libcpp location. The C++ frontend arranges that input_location will point to the
token it is currently processing, so this generally works fine. However, there
is one exception currently, which is -Wunused-macros. This gets generated at the
end of processing in cpp_finish (), since we need to wait until then to
determine whether a macro was eventually used or not. But the locations it
passes to c_cpp_diagnostic () were remembered from the original lexing and hence
they should not be overridden with input_location, which is now the one
incorrectly pointing to the end of the file.
Fixed by setting done_lexing=false again just prior to calling cpp_finish (). I
also renamed the variable from done_lexing to "override_libcpp_locations", since
it's now not strictly about lexing anymore.
There is no new testcase with this patch, since we already had an xfailed
testcase which is now fixed.
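For illustration only, a hypothetical translation unit showing the behaviour being fixed; compiled as C++ with -Wunused-macros, the warning should now point at the #define line rather than the end of the file:
#define UNUSED_MACRO 1   /* defined but never expanded: -Wunused-macros fires here */

int
main ()
{
  return 0;
}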
gcc/c-family/ChangeLog:
PR c++/66290
* c-common.h: Rename global done_lexing to
override_libcpp_locations.
* c-common.cc (c_cpp_diagnostic): Likewise.
* c-opts.cc (c_common_finish): Set override_libcpp_locations
(formerly done_lexing) immediately prior to calling cpp_finish ().
gcc/cp/ChangeLog:
PR c++/66290
* parser.cc (cp_lexer_new_main): Rename global done_lexing to
override_libcpp_locations.
gcc/testsuite/ChangeLog:
PR c++/66290
* c-c++-common/pragma-diag-15.c: Remove xfail for C++.
Roger Sayle [Sun, 31 Jul 2022 07:13:30 +0000 (08:13 +0100)]
PR bootstrap/106472: Add libgo depends on libbacktrace to Makefile.def
This patch fixes PR bootstrap/106472 by adding a missing dependency
to Makefile.def to allow make bootstrap when configured using
"--enable-languages=go" (and not using make with multiple threads).
2022-07-31 Roger Sayle <roger@nextmovesoftware.com>
ChangeLog
PR bootstrap/106472
* Makefile.def (dependencies): Make configure-target-libgo depend
upon all-target-libbacktrace.
Jason Merrill [Tue, 26 Jul 2022 15:02:21 +0000 (11:02 -0400)]
c++: constexpr, empty base after non-empty [PR106369]
Here the CONSTRUCTOR we were providing for D{} had an entry for the B base
subobject at offset 0 following the entry for the C base, causing
output_constructor_regular_field to ICE due to going backwards. It might be
nice for that function to be more tolerant of empty fields, but it also
seems reasonable for the front end to prune the useless entry.
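A minimal hypothetical shape of the problem (the actual testcase in the PR is more involved): an empty base that follows a non-empty base and ends up laid out at offset 0.
struct C { int i; };
struct B { };                 /* empty base, placed at offset 0 */
struct D : C, B { };

constexpr D d { };            /* CONSTRUCTOR for D{} used to carry a useless
                                 entry for the B subobject after the C entry */
const D *p = &d;              /* force the constant object to be emitted */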
PR c++/106369
gcc/cp/ChangeLog:
* constexpr.cc (reduced_constant_expression_p): Return false
if a CONSTRUCTOR initializes an empty field.
xtensa: Fix conflicting hard regno between indirect sibcall fixups and EH_RETURN_STACKADJ_RTX
The hard register A10 was already allocated for EH_RETURN_STACKADJ_RTX
(although exception handling and sibling calls may not apply at the same time,
this change is made for safety).
gcc/ChangeLog:
* config/xtensa/xtensa.md: Change hard register number used in
the split patterns for indirect sibling call fixups from 10 to 11,
the last free one for the CALL0 ABI.
Richard Biener [Fri, 29 Jul 2022 08:40:34 +0000 (10:40 +0200)]
tree-optimization/105679 - disable backward threading of unlikely entry
The following makes the backward threader reject threads whose entry
edge is probably never executed according to the profile. That in
particular, for the testcase, avoids threading the irq == 1 check
on the path where irq > 31, thereby avoiding spurious -Warray-bounds
diagnostics
PR tree-optimization/105679
* tree-ssa-threadbackward.cc
(back_threader_profitability::profitable_path_p): Avoid threading
when the entry edge is probably never executed.
Jonathan Wakely [Thu, 28 Jul 2022 19:55:51 +0000 (20:55 +0100)]
libstdc++: Tweak common_iterator::operator-> return type [PR104443]
This adjusts the return type to match the resolution of LWG 3672. There
is no functional difference, because decltype(auto) always deduced a
value anyway, but this makes it simpler and consistent with the working
draft.
libstdc++-v3/ChangeLog:
PR libstdc++/104443
* include/bits/stl_iterator.h (common_iterator::operator->):
Change return type to just auto.
Richard Biener [Fri, 29 Jul 2022 06:24:52 +0000 (08:24 +0200)]
tree-optimization/106422 - verify block copying in forward threading
The forward threader failed to check whether it can actually duplicate
blocks. The following adds this in a similar place the backwards threader
performs this check.
PR tree-optimization/106422
* tree-ssa-threadupdate.cc (fwd_jt_path_registry::update_cfg):
Check whether we can copy thread blocks and cancel the thread if not.
Jakub Jelinek [Fri, 29 Jul 2022 07:59:19 +0000 (09:59 +0200)]
openmp: Reject invalid forms of C++ #pragma omp atomic compare [PR106448]
The allowed syntaxes of atomic compare don't allow ()s around the condition
of ?:, but we were accepting it in one case for C++.
Fixed thusly.
2022-07-29 Jakub Jelinek <jakub@redhat.com>
PR c++/106448
* parser.cc (cp_parser_omp_atomic): For simple cast followed by
CPP_QUERY token, don't try cp_parser_binary_operation if compare
is true.
Jakub Jelinek [Fri, 29 Jul 2022 07:49:11 +0000 (09:49 +0200)]
openmp: Fix up handling of non-rectangular simd loops with pointer type iterators [PR106449]
There were two issues visible on this new testcase. One was that we didn't have
special POINTER_TYPE_P handling in a few spots of expand_omp_simd: for
pointers we need to use POINTER_PLUS_EXPR and keep the non-pointer
part in sizetype, while for non-rectangular loops we can rely on a
multiplication factor of 1, as pointers can't be multiplied; without those changes
we'd ICE. The other issue was that we put the n2 expression directly into a
comparison in a condition and regimplified that; for the &a[512] case, with
gimplification being destructive, that unfortunately meant modification
of the original fd->loops[?].n2. Fixed by unsharing the expression. This was
causing a runtime failure on the testcase.
2022-07-29 Jakub Jelinek <jakub@redhat.com>
PR middle-end/106449
* omp-expand.cc (expand_omp_simd): Fix up handling of pointer
iterators in non-rectangular simd loops. Unshare fd->loops[i].n2
or n2 before regimplifying it inside of a condition.
* testsuite/libgomp.c-c++-common/pr106449.c: New test.
Jakub Jelinek [Fri, 29 Jul 2022 07:43:34 +0000 (09:43 +0200)]
openmp: Simplify fold_build_pointer_plus callers in omp-expand
Tobias mentioned in PR106449 that fold_build_pointer_plus already
fold_converts the second argument to sizetype if it doesn't already
have an integral type gimple compatible with sizetype.
So, this patch simplifies the callers of fold_build_pointer_plus in
omp-expand so that they don't do those conversions manually.
2022-07-29 Jakub Jelinek <jakub@redhat.com>
* omp-expand.cc (expand_omp_for_init_counts, expand_omp_for_init_vars,
extract_omp_for_update_vars, expand_omp_for_ordered_loops,
expand_omp_simd): Don't fold_convert second argument to
fold_build_pointer_plus to sizetype.
LoongArch: Define the macro ASM_PREFERRED_EH_DATA_FORMAT by checking the assembler's support for eh_frame encoding.
.eh_frame DW_EH_PE_pcrel encoding format is not supported by gas <= 2.39.
Check if the assembler supports DW_EH_PE_PCREL encoding and define the .eh_frame
encoding type.
gcc/ChangeLog:
* config.in: Regenerate.
* config/loongarch/loongarch.h (ASM_PREFERRED_EH_DATA_FORMAT):
Select the value of the macro definition according to whether
HAVE_AS_EH_FRAME_PCREL_ENCODING_SUPPORT is defined.
* configure: Regenerate.
* configure.ac: Reinstate HAVE_AS_EH_FRAME_PCREL_ENCODING_SUPPORT test.