Jakub Jelinek [Wed, 24 Nov 2021 09:30:32 +0000 (10:30 +0100)]
openmp: Fix up handling of kind(host) and kind(nohost) in ACCEL_COMPILERs [PR103384]
As the testcase shows, we weren't handling kind(host) and kind(nohost) properly
in ACCEL_COMPILERs; the code there is valid only for the host compiler, where,
if we might be offloaded, we defer resolution until after IPA, and otherwise
return 0 for kind(nohost) and accept kind(host). Note that omp_maybe_offloaded
is false after IPA. If ACCEL_COMPILER is defined, it is the other way around,
and we also know we are after IPA.
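For illustration, a minimal sketch (not the new libgomp testcase; the function names are made up) of the kind of selectors involved, which must now resolve correctly in the offloading compiler as well:
int on_host (void);
int on_device (void);

#pragma omp declare variant (on_host) match (device={kind(host)})
#pragma omp declare variant (on_device) match (device={kind(nohost)})
int base_fn (void);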
2021-11-24 Jakub Jelinek <jakub@redhat.com>
PR middle-end/103384
gcc/
* omp-general.c (omp_context_selector_matches): For ACCEL_COMPILER,
return 0 for kind(host) and continue for kind(nohost).
libgomp/
* testsuite/libgomp.c/declare-variant-2.c: New test.
Jakub Jelinek [Wed, 24 Nov 2021 09:08:35 +0000 (10:08 +0100)]
attribs: Fix ICEs on attributes starting with _ [PR103365]
As the patch shows, we have quite a few asserts that lookup_attribute etc. are
not called with an attr_name that starts with an underscore, to make sure
nobody calls them with a non-canonicalized attribute name like "__cold__"
instead of "cold". However, we only canonicalize attributes that both start
and end with two underscores.
Before Marek's patch that wasn't an issue: there were no attributes like
"_foo" or "__bar_", so lookup_scoped_attribute_spec would always return NULL
for them and we would never try to register or look them up; -Wattributes
would just warn about them.
But now, as the new testcases show, users can actually request that such
attributes be ignored, and we ICE for them during register_scoped_attribute
and, once that is fixed, ICE later on when somebody uses those attributes,
because they will be looked up to find out that they should be ignored.
So the following patch, instead of or in addition to checking that the
attribute doesn't start with an underscore (depending on how
performance-sensitive a particular spot is), allows attribute names that start
with an underscore as long as they don't canonicalize (i.e. don't both start
and end with two underscores).
In addition, I've noticed that lookup_attribute_by_prefix was calling
get_attribute_name twice unnecessarily, and that two tests were running in
c++98 mode with -std=c++98 -std=c++11, which IMHO isn't useful because
-std=c++11 is covered anyway when testing all language versions.
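For illustration only (this is not one of the new tests), the kind of input involved: a scoped attribute whose name starts with a single underscore which the user asks to be ignored, e.g. via -Wno-attributes=vendor::_foo, used to ICE.
[[vendor::_foo]] void f (void);   /* hypothetical vendor attribute */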
2021-11-24 Jakub Jelinek <jakub@redhat.com>
PR middle-end/103365
* attribs.h (lookup_attribute): Allow attr_name to start with
underscore, as long as canonicalize_attr_name returns false.
(lookup_attribute_by_prefix): Don't call get_attribute_name twice.
* attribs.c (extract_attribute_substring): Reimplement using
canonicalize_attr_name.
(register_scoped_attribute): Change gcc_assert into
gcc_checking_assert, verify !canonicalize_attr_name rather than
that str.str doesn't start with '_'.
* c-c++-common/Wno-attributes-1.c: Require effective target
c || c++11 and drop dg-additional-options.
* c-c++-common/Wno-attributes-2.c: Likewise.
* c-c++-common/Wno-attributes-4.c: New test.
* c-c++-common/Wno-attributes-5.c: New test.
Jakub Jelinek [Wed, 24 Nov 2021 08:54:44 +0000 (09:54 +0100)]
bswap: Fix up symbolic merging for xor and plus [PR103376]
On Mon, Nov 22, 2021 at 08:39:42AM -0000, Roger Sayle wrote:
> This patch implements PR tree-optimization/103345 to merge adjacent
> loads when combined with addition or bitwise xor. The current code
> in gimple-ssa-store-merging.c's find_bswap_or_nop already handles ior,
> so that all that's required is to treat PLUS_EXPR and BIT_XOR_EXPR in
> the same way as BIT_IOR_EXPR.
Unfortunately they aren't exactly the same. They behave the same only when
at least one operand (or the corresponding byte in it) is known to be 0,
since 0 | 0 = 0 ^ 0 = 0 + 0 = 0. But for | we also have x | x = x for any
other x, so perform_symbolic_merge has been accepting either that at least one
of the bytes is 0 or that both are the same; the latter is wrong for ^ and +.
The following patch fixes that by passing the code of the binary operation
through and allowing a non-zero masked1 == masked2 only for BIT_IOR_EXPR.
Thinking more about it, perhaps we could do more for BIT_XOR_EXPR.
We could allow the masked1 == masked2 case for it, but we would need to
do something different than the
n->n = n1->n | n2->n;
we do on all the bytes together.
In particular, for masked1 == masked2 with masked1 != 0 (for 0 both variants
are the same) and masked1 != 0xff, we would need to clear the corresponding
n->n byte instead of setting it to the input, since x ^ x = 0 (and if we don't
know what x and y are, the result is also unknown). For plus it is much
harder, because not only do we not know the result for non-zero operands, but
it can modify upper bytes as well. So perhaps, when the current byte has both
masked1 and masked2 non-zero, set the resulting byte to 0xff (unknown) only
if the byte above it is 0 in both operands, and set that resulting byte above
to 0xff too.
Also, even for | we could, instead of returning NULL, just set the resulting
byte to 0xff when the two differ; perhaps it will be masked off later on.
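As a hedged illustration (simplified, not the PR testcase), the kind of load merging being extended: each byte of the result comes from exactly one of the loads, so |, ^ and + all describe the same 16-bit little-endian load; the fix is about rejecting the merge when the same non-zero byte would appear on both sides of ^ or +.
unsigned short
load_le16 (const unsigned char *p)
{
  return (unsigned short) (p[0] + (p[1] << 8));   /* also mergeable with | or ^ */
}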
2021-11-24 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/103376
* gimple-ssa-store-merging.c (perform_symbolic_merge): Add CODE
argument. If CODE is not BIT_IOR_EXPR, ensure that one of masked1
or masked2 is 0.
(find_bswap_or_nop_1, find_bswap_or_nop,
imm_store_chain_info::try_coalesce_bswap): Adjust
perform_symbolic_merge callers.
Richard Biener [Tue, 23 Nov 2021 12:51:10 +0000 (13:51 +0100)]
Avoid redundant get_loop_body calls in IVOPTs
This removes redundant get_loop_body calls in IVOPTs by passing
around the body we're gathering early.
2021-11-23 Richard Biener <rguenther@suse.de>
* tree-ssa-loop-ivopts.c (find_givs): Take loop body as
argument instead of re-computing it.
(find_interesting_uses): Likewise.
(find_induction_variables): Pass through loop body.
(tree_ssa_iv_optimize_loop): Pass down loop body.
Tamar Christina [Wed, 24 Nov 2021 06:39:05 +0000 (06:39 +0000)]
middle-end: Fix failures with bitclear patterns on signed values
During testing after rebasing to commit I noticed a failing testcase with the
bitmask compare patch.
Consider the following C++ testcase:
#include <compare>
#define A __attribute__((noipa))
A bool f5 (double i, double j) { auto c = i <=> j; return c >= 0; }
This turns into a comparison against chars; on systems where chars are signed,
the pattern inserts an unsigned convert so that it is able to do the
transformation.
This causes much worse codegen under -ffast-math because phiopt no longer
recognizes the pattern. It turns out that phiopt's spaceship_replacement is
looking for the exact form that was just changed.
The comments seem to suggest this code only checks for (res & ~1) == 0, but
the implementation appears to be broader.
As such I added a case that checks whether the value comparison we found is a
type cast, strips the cast away and continues.
In match.pd the typecasts are only added for signed comparisons to == 0 and != 0,
which are then rewritten into comparisons against 1.
As such I only check for 1 and LE and GT, which is what match.pd would have
rewritten it to.
This fixes the regression, but this is not code I 100% understand, since I
don't really know the semantics of the spaceship operator, so I would
appreciate an extra look.
gcc/ChangeLog:
* tree-ssa-phiopt.c (spaceship_replacement): Handle new canonical
codegen.
Tamar Christina [Wed, 24 Nov 2021 06:38:18 +0000 (06:38 +0000)]
middle-end: Convert bitclear <imm> + cmp<cc> #0 into cm<cc2> <imm2>
This optimizes the case where a mask Y which fulfills ~Y + 1 == pow2 is used to
clear some bits and the result is then compared against 0, into a form without
the masking and with a compare against a different immediate.
We can do this for all unsigned compares, and for signed ones we can do it for
EQ and NE comparisons:
(x & (~255)) == 0 becomes x <= 255, which leaves it to the target to deal with
the comparison optimally.
This transformation has to be done in the mid-end because in RTL you don't have
the signs of the comparison operands, and if the target needs an immediate it
should be floated outside of the loop; RTL loop-invariant hoisting is done
before split1. For example:
void fun1(int32_t *x, int n)
{
  for (int i = 0; i < (n & -16); i++)
    x[i] = (x[i]&(~255)) == 0;
}
In order to not break IVopts and CSE I have added a
requirement for the scalar version to be single use.
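For reference, the scalar shape of the transformation described above, as a hedged sketch rather than one of the new tests:
int
is_small (unsigned int x)
{
  return (x & ~255u) == 0;   /* equivalent to x <= 255 for unsigned x */
}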
gcc/ChangeLog:
* tree.c (bitmask_inv_cst_vector_p): New.
* tree.h (bitmask_inv_cst_vector_p): New.
* match.pd: Use it in new bitmask compare pattern.
gcc/testsuite/ChangeLog:
* gcc.dg/bic-bitmask-10.c: New test.
* gcc.dg/bic-bitmask-11.c: New test.
* gcc.dg/bic-bitmask-12.c: New test.
* gcc.dg/bic-bitmask-13.c: New test.
* gcc.dg/bic-bitmask-14.c: New test.
* gcc.dg/bic-bitmask-15.c: New test.
* gcc.dg/bic-bitmask-16.c: New test.
* gcc.dg/bic-bitmask-17.c: New test.
* gcc.dg/bic-bitmask-18.c: New test.
* gcc.dg/bic-bitmask-19.c: New test.
* gcc.dg/bic-bitmask-2.c: New test.
* gcc.dg/bic-bitmask-20.c: New test.
* gcc.dg/bic-bitmask-21.c: New test.
* gcc.dg/bic-bitmask-22.c: New test.
* gcc.dg/bic-bitmask-23.c: New test.
* gcc.dg/bic-bitmask-3.c: New test.
* gcc.dg/bic-bitmask-4.c: New test.
* gcc.dg/bic-bitmask-5.c: New test.
* gcc.dg/bic-bitmask-6.c: New test.
* gcc.dg/bic-bitmask-7.c: New test.
* gcc.dg/bic-bitmask-8.c: New test.
* gcc.dg/bic-bitmask-9.c: New test.
* gcc.dg/bic-bitmask.h: New test.
* gcc.target/aarch64/bic-bitmask-1.c: New test.
Marek Polacek [Mon, 22 Nov 2021 19:09:25 +0000 (14:09 -0500)]
c++: Fix missing NSDMI diagnostic in C++98 [PR103347]
Here the problem is that we aren't detecting an NSDMI in C++98:
struct A {
  void *x = NULL;
};
because maybe_warn_cpp0x uses input_location, and that happens to point
to NULL, which comes from a system header. Jakub suggested changing the
location to the '=', thereby avoiding the system header problem. To
that end, I've added a new location_t member to cp_declarator. This
member is used when the declarator is part of an init-declarator. The
rest of the changes are obvious. I've also taken the liberty of adding
loc_or_input_loc, since I want to avoid checking for UNKNOWN_LOCATION.
PR c++/103347
gcc/cp/ChangeLog:
* cp-tree.h (struct cp_declarator): Add a location_t member.
(maybe_warn_cpp0x): Add a location_t parameter with a default argument.
(loc_or_input_loc): New.
* decl.c (grokdeclarator): Use loc_or_input_loc. Pass init_loc down
to maybe_warn_cpp0x.
* error.c (maybe_warn_cpp0x): Add a location_t parameter. Use it.
* parser.c (make_declarator): Initialize init_loc.
(cp_parser_member_declaration): Set init_loc.
(cp_parser_condition): Likewise.
(cp_parser_init_declarator): Likewise.
(cp_parser_parameter_declaration): Likewise.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/nsdmi-warn1.C: New test.
* g++.dg/cpp0x/nsdmi-warn1.h: New file.
Jason Merrill [Sat, 16 Oct 2021 04:04:25 +0000 (00:04 -0400)]
timevar: Add auto_cond_timevar class
The auto_timevar sentinel class for starting and stopping timevars was added
in 2014, but doesn't work for the many uses of timevar_cond_start/stop in
the C++ front end. So let's add one that does.
This allows us to remove a lot of wrapper functions that were just used to
call timevar_cond_stop on all exits from the function.
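A minimal self-contained sketch of the RAII pattern such a sentinel provides; the names below are hypothetical and this is not GCC's actual timevar API:
#include <cstdio>

static bool timing_enabled = true;

class auto_cond_timer
{
public:
  explicit auto_cond_timer (const char *name)
    : m_name (name), m_started (timing_enabled)
  {
    if (m_started)
      std::printf ("start %s\n", m_name);
  }
  ~auto_cond_timer ()
  {
    if (m_started)
      std::printf ("stop %s\n", m_name);   // runs on every return path
  }

private:
  const char *m_name;
  bool m_started;
};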
gcc/ChangeLog:
* timevar.h (class auto_cond_timevar): New.
gcc/cp/ChangeLog:
* call.c
* decl.c
* name-lookup.c:
Use auto_cond_timevar instead of timevar_cond_start/stop.
Remove wrapper functions.
2021-11-17 Hongtao Liu <hongtao.liu@intel.com>
H.J. Lu <hongjiu.lu@intel.com>
gcc/ChangeLog:
PR tree-optimization/103194
* match.pd (gimple_nop_atomic_bit_test_and_p): Extended to
match truncation.
* tree-ssa-ccp.c (gimple_nop_convert): Declare.
(optimize_atomic_bit_test_and): Enhance
optimize_atomic_bit_test_and to handle truncation.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr103194-2.c: New test.
* gcc.target/i386/pr103194-3.c: New test.
* gcc.target/i386/pr103194-4.c: New test.
* gcc.target/i386/pr103194-5.c: New test.
* gcc.target/i386/pr103194.c: New test.
Xi Ruoyao [Thu, 18 Nov 2021 10:46:12 +0000 (18:46 +0800)]
fixincludes: don't abort() on access failure [PR103306]
Some distros may ship dangling symlinks in include directories, which trigger
the access failure. Skip such a header and continue to the next one instead
of panicking.
Restore the old behavior from before r12-5234, but without resurrecting the
problematic getcwd() call, by using the environment variable "INPUT"
exported by fixinc.sh.
Tested on x86_64-linux-gnu, with a dangling symlink intentionally
injected into /usr/include.
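A hedged sketch (a hypothetical helper, not the fixincludes sources) of the intended behaviour, i.e. skipping headers that cannot be accessed instead of aborting:
#include <stdio.h>
#include <unistd.h>

static void
process_headers (const char *const *names, int count)
{
  for (int i = 0; i < count; i++)
    {
      if (access (names[i], R_OK) != 0)
        continue;                       /* dangling symlink: go to the next */
      printf ("fixing %s\n", names[i]);
    }
}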
Bill Schmidt [Tue, 23 Nov 2021 18:54:50 +0000 (12:54 -0600)]
rs6000: Fix test_mffsl.c effective target check
Paul Clarke pointed out to me that I had wrongly used a compile-time check
instead of a run-time check in this executable test. This patch fixes
that. I also fixed a typo in a string that caught my eye.
2021-11-23 Bill Schmidt <wschmidt@linux.ibm.com>
gcc/testsuite/
* gcc.target/powerpc/test_mffsl.c: Change effective target to
a run-time check. Fix a typo in a debug print statement.
Harald Anlauf [Tue, 23 Nov 2021 16:51:38 +0000 (17:51 +0100)]
Fortran: fix scalarization for intrinsic LEN_TRIM with present KIND argument
gcc/fortran/ChangeLog:
PR fortran/87711
PR fortran/87851
* trans-array.c (arg_evaluated_for_scalarization): Add LEN_TRIM to
list of intrinsics for which an optional KIND argument needs to be
removed before scalarization.
gcc/testsuite/ChangeLog:
PR fortran/87711
PR fortran/87851
* gfortran.dg/len_trim.f90: New test.
Jan Hubicka [Tue, 23 Nov 2021 15:36:01 +0000 (16:36 +0100)]
Remove duplicated param values in modref tree
The modref tree template stores its own copy of the param_modref_max_bases,
*_max_refs and *_max_accesses values. This was done before we had per-function
limits, and even back then it was a bit dubious, so this patch removes it.
Jonathan Wakely [Tue, 23 Nov 2021 12:28:22 +0000 (12:28 +0000)]
libstdc++: Fix circular dependency for bitmap_allocator [PR103381]
<ext/bitmap_allocator.h> includes <functional>, and since C++17 that
includes <unordered_map>. If std::allocator is defined in terms of
__gnu_cxx::bitmap_allocator then you get a circular reference and
bootstrap fails when compiling src/c++17/*.cc.
libstdc++-v3/ChangeLog:
PR libstdc++/103381
* include/ext/bitmap_allocator.h: Include <bits/stl_function.h>
instead of <functional>.
Richard Biener [Tue, 23 Nov 2021 09:11:41 +0000 (10:11 +0100)]
tree-optimization/103361 - fix unroll-and-jam direction vector handling
This properly uses lambda_int instead of truncating the direction
vector to int, which leads to spurious, unexpected negative values.
2021-11-23 Richard Biener <rguenther@suse.de>
PR tree-optimization/103361
* gimple-loop-jam.c (adjust_unroll_factor): Use lambda_int
for the dependence distance.
* tree-data-ref.c (print_lambda_vector): Properly print a lambda_int.
This struct copy_body_data hook has always been NULL since the merge
of the tuples branch; before that it was used, only very briefly, by the C++
FE during ctor/dtor cloning to chain the remapped blocks, back when
transform_lang_insert_block was a bool and the call to insert_block was done
through a langhook.
I'd say that for something that hasn't been used since 4.4 there is
zero chance we'll want to use it again in the near future.
Jan Hubicka [Tue, 23 Nov 2021 09:55:56 +0000 (10:55 +0100)]
Improve bytewise DSE
The testcases modref-dse-4.c and modref-dse-5.c fail on some targets because
they depend on store merging. What really happens is that without store
merging we produce for kill_me a combined write that is an ao_ref with
offset=0, size=32 and max_size=96. We have size != max_size because we do not
track the info that all 3 writes must happen as a group, and consider the case
where only some of them are done.
This disables byte-wise DSE, which checks that size == max_size. That check is
completely unnecessary for the store being proved dead or for a load being
checked to not read live bytes. It is only necessary for a kill store that is
used to prove that a given store is dead.
While looking into this I also noticed that we check that everything is byte
aligned. This is also unnecessary, and with access merging in modref it may
more commonly fire on accesses that we could otherwise handle.
This patch fixes both issues and also changes the interface of normalize_ref,
which I found confusing since it modifies the ref. Instead we now have
get_byte_range, which computes the range in bytes (since that is what we need
to maintain the bitmap) and has an additional parameter specifying whether the
access in question should be turned into a sub-range or a super-range,
depending on whether we compute the range for a kill or for a load.
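A conceptual sketch of the sub-range/super-range distinction (a hypothetical helper, not GCC's get_byte_range): for a kill we may only keep the bytes fully covered by the access, while for a load we must conservatively keep every byte partially touched.
static bool
byte_range (long bit_offset, long bit_size, bool for_kill,
            long *byte_start, long *byte_count)
{
  long start = bit_offset, end = bit_offset + bit_size;
  if (for_kill)
    {
      start = (start + 7) / 8 * 8;   /* shrink: round start up ...  */
      end = end / 8 * 8;             /* ... and round end down      */
    }
  else
    {
      start = start / 8 * 8;         /* grow: round start down ...  */
      end = (end + 7) / 8 * 8;       /* ... and round end up        */
    }
  if (end <= start)
    return false;                    /* no whole byte is covered    */
  *byte_start = start / 8;
  *byte_count = (end - start) / 8;
  return true;
}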
gcc/ChangeLog:
2021-11-23 Jan Hubicka <hubicka@ucw.cz>
PR tree-optimization/103335
* tree-ssa-dse.c (valid_ao_ref_for_dse): Rename to ...
(valid_ao_ref_kill_for_dse): ... this; do not check that boundaries
are divisible by BITS_PER_UNIT.
(get_byte_aligned_range_containing_ref): New function.
(get_byte_aligned_range_contained_in_ref): New function.
(normalize_ref): Rename to ...
(get_byte_range): ... this one; handle accesses not aligned to byte
boundary; return range in bytes rather than updating ao_ref.
(clear_live_bytes_for_ref): Take write ref by reference; simplify using
get_byte_access.
(setup_live_bytes_from_ref): Likewise.
(clear_bytes_written_by): Update.
(live_bytes_read): Update.
(dse_classify_store): Simplify tests before live_bytes_read checks.
Andrew Pinski [Tue, 23 Nov 2021 01:08:55 +0000 (01:08 +0000)]
Canonicalize &MEM[ssa_n, CST] to ssa_n p+ CST in fold_stmt_1
This is a new version of the patch to fix PR 102216.
Instead of doing the canonicalization inside forwprop, Richi
mentioned we should do it inside fold_stmt_1 and that is what
this patch does.
PR tree-optimization/102216
gcc/ChangeLog:
* gimple-fold.c (fold_stmt_1): Add canonicalization
of "&MEM[ssa_n, CST]" to "ssa_n p+ CST", note this
can only be done if !in_place.
gcc/testsuite/ChangeLog:
* g++.dg/tree-ssa/pr102216-1.C: New test.
* g++.dg/tree-ssa/pr102216-2.C: New test.
Jakub Jelinek [Tue, 23 Nov 2021 09:30:02 +0000 (10:30 +0100)]
openmp: Fix up handling of reduction clauses on the loop construct [PR102431]
We were using unshare_expr and a walk_tree_without_duplicates-based replacement
of the placeholder vars. The OMP_CLAUSE_REDUCTION_{INIT,MERGE} can contain
other trees that need to be duplicated though, e.g. BLOCKs referenced in
BIND_EXPR(s), or local VAR_DECLs. This patch uses the inliner code to copy
all of that. There is a slight complication that those local VAR_DECLs or
placeholders don't have DECL_CONTEXT set, they will get that only when
they are gimplified later on, so this patch sets DECL_CONTEXT for those
temporarily and resets it afterwards.
2021-11-23 Jakub Jelinek <jakub@redhat.com>
PR middle-end/102431
* gimplify.c (replace_reduction_placeholders): Remove.
(note_no_context_vars): New function.
(gimplify_omp_loop): For OMP_PARALLEL's BIND_EXPR create a new
BLOCK. Use copy_tree_body_r with walk_tree instead of unshare_expr
and replace_reduction_placeholders for duplication of
OMP_CLAUSE_REDUCTION_{INIT,MERGE} expressions. Ensure all mentioned
automatic vars have DECL_CONTEXT set to non-NULL before doing so
and reset it afterwards for those vars and their corresponding
vars.
* c-c++-common/gomp/pr102431.c: New test.
* g++.dg/gomp/pr102431.C: New test.
* gfortran.dg/gomp/pr102431.f90: New test.
Haochen Gui [Wed, 17 Nov 2021 08:16:02 +0000 (16:16 +0800)]
rs6000: Optimize code generation of vec_reve [PR100868]
gcc/
PR target/100868
* config/rs6000/altivec.md (altivec_vreve<mode>2 for VEC_K): Use
xxbrq for v16qi, xxbrq + xxbrh for v8hi and xxbrq + xxbrw for v4si
or v4sf when p9_vector is set.
(altivec_vreve<mode>2 for VEC_64): Defined. Implemented by xxswapd.
gcc/testsuite/
PR target/100868
* gcc.target/powerpc/vec_reve_1.c: New test.
* gcc.target/powerpc/vec_reve_2.c: Likewise.
gcc/testsuite
* gcc.dg/tree-ssa/pr96779.c: Testcase for this optimization.
* gcc.dg/tree-ssa/pr96779-disabled.c: Testcase for this optimization
when -fwrapv passed.
Jason Merrill [Fri, 19 Nov 2021 22:01:10 +0000 (17:01 -0500)]
c++: improved return expression location
Stripping the location wrapper from retval meant we didn't have the
necessary location information for any conversion diagnostics. We only need
the stripping for the named return value optimization, so let's use the
unstripped expression for everything else.
gcc/cp/ChangeLog:
* typeck.c (check_return_expr): Only strip location wrapper during
NRV handling.
Jakub Jelinek [Mon, 22 Nov 2021 21:29:20 +0000 (22:29 +0100)]
libcpp: Fix _Pragma stringification [PR103165]
As the testcases show, sometimes _Pragma is turned into CPP_PRAGMA
.. CPP_PRAGMA_EOL tokens, even when it might still need to be
stringized later on. We are then ICEing because we don't handle
stringification of CPP_PRAGMA or CPP_PRAGMA_EOL, but trying to
reconstruct the exact tokens with exact spacing after it has been
lowered is very hard. So, instead this patch ensures we don't
lower _Pragma during expand_arg calls, but only later when
cpp_get_token_1 is called outside of expand_arg.
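A minimal sketch of the kind of situation described (not one of the new testcases): _Pragma in a macro argument that is macro-expanded and later stringized; lowering the _Pragma during argument expansion would make the stringization impossible.
#define STR_(x) #x
#define STR(x) STR_ (x)
const char *s = STR (_Pragma ("omp parallel"));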
2021-11-22 Jakub Jelinek <jakub@redhat.com>
Tobias Burnus <tobias@codesourcery.com>
PR preprocessor/103165
libcpp/
* internal.h (struct lexer_state): Add ignore__Pragma field.
* macro.c (builtin_macro): Don't interpret _Pragma if
pfile->state.ignore__Pragma.
(expand_arg): Temporarily set pfile->state.ignore__Pragma to 1.
gcc/testsuite/
* c-c++-common/gomp/pragma-3.c: New test.
* c-c++-common/gomp/pragma-4.c: New test.
* c-c++-common/gomp/pragma-5.c: New test.
Roger Sayle [Mon, 22 Nov 2021 18:15:36 +0000 (18:15 +0000)]
tree-optimization/103345: Improved load merging.
This patch implements PR tree-optimization/103345 to merge adjacent
loads when combined with addition or bitwise xor. The current code
in gimple-ssa-store-merging.c's find_bswap_or_nop already handles ior,
so that all that's required is to treat PLUS_EXPR and BIT_XOR_EXPR in
the same way as BIT_IOR_EXPR. Many thanks to Andrew Pinski for
pointing out that this also resolves PR target/98953.
2021-11-22 Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR tree-optimization/98953
PR tree-optimization/103345
* gimple-ssa-store-merging.c (find_bswap_or_nop_1): Handle
BIT_XOR_EXPR and PLUS_EXPR the same as BIT_IOR_EXPR.
(pass_optimize_bswap::execute): Likewise.
gcc/testsuite/ChangeLog
PR tree-optimization/98953
PR tree-optimization/103345
* gcc.dg/tree-ssa/pr98953.c: New test case.
* gcc.dg/tree-ssa/pr103345.c: New test case.
Jakub Jelinek [Mon, 22 Nov 2021 16:06:12 +0000 (17:06 +0100)]
openacc: Fix up C++ #pragma acc routine handling [PR101731]
The following testcase ICEs because two function declarations are nested in
each other and the acc routine handling code isn't prepared to put the
pragma on both.
The fix is similar to what #pragma omp declare {simd,variant} does,
in particular setting the fndecl_seen flag already in cp_parser_late_parsing*
when we encounter it, rather than only after we finalize it.
In cp_finalize_oacc_routine I had to move the fndecl_seen diagnostics to the
non-FUNCTION_DECL block, because for FUNCTION_DECLs the flag is already
known to be set from cp_parser_late_parsing_oacc_routine. The diagnostics
can't be removed altogether, because that would regress the quality of 2
goacc/routine-5.c diagnostics - we would drop "a single " from the
'#pragma acc routine' not immediately followed by a single function declaration or definition
diagnostic on, say,
#pragma acc routine
int foo (), b;
2021-11-22 Jakub Jelinek <jakub@redhat.com>
PR c++/101731
* parser.c (cp_parser_late_parsing_oacc_routine): Set
parser->oacc_routine->fndecl_seen here, rather than ...
(cp_finalize_oacc_routine): ... here. Don't error if
parser->oacc_routine->fndecl_seen is set for FUNCTION_DECLs.
Florian Weimer [Mon, 22 Nov 2021 12:30:23 +0000 (13:30 +0100)]
libgcc: Remove dbase member from struct unw_eh_callback_data if NULL
Only bfin, frv, i386 and nios2 need this member at present.
libgcc/ChangeLog
* unwind-dw2-fde-dip.c (NEED_DBASE_MEMBER): Define.
(struct unw_eh_callback_data): Make dbase member conditional.
(unw_eh_callback_data_dbase): New function.
(base_from_cb_data): Simplify for the non-dbase case.
(_Unwind_IteratePhdrCallback): Adjust.
(_Unwind_Find_FDE): Likewise.
This avoids differences in the split edge of a cluster due to a different
order of same-key PHI args when sorting, by sorting on the edge destination
index as a second key.
2021-11-22 Richard Biener <rguenther@suse.de>
PR tree-optimization/103351
* tree-ssa-dce.c (sort_phi_args): Sort after e->dest_idx as
second key.
Kewen Lin [Mon, 22 Nov 2021 02:18:31 +0000 (20:18 -0600)]
xtensa: Fix non-robust split condition in define_insn_and_split
This patch fixes some non-robust split conditions in some
define_insn_and_splits, making each of them apply on top of
the corresponding condition of the define_insn part; otherwise the
splitting could happen unexpectedly.
Jakub Jelinek [Sun, 21 Nov 2021 20:08:04 +0000 (21:08 +0100)]
fortran, debug: Fix up DW_AT_rank [PR103315]
For DW_AT_rank we were emitting
.uleb128 0x4 # DW_AT_rank
.byte 0x97 # DW_OP_push_object_address
.byte 0x23 # DW_OP_plus_uconst
.uleb128 0x1c
.byte 0x6 # DW_OP_deref
on 64-bit and
.uleb128 0x4 # DW_AT_rank
.byte 0x97 # DW_OP_push_object_address
.byte 0x23 # DW_OP_plus_uconst
.uleb128 0x10
.byte 0x6 # DW_OP_deref
on 32-bit. I think this is wrong, as the dtype.rank field in the descriptor
has unsigned char type, not pointer type nor a pointer-sized integral type.
E.g. if we have a
REAL :: a(..)
dummy argument, which is passed as a reference to the function descriptor,
we want to evaluate a->dtype.rank. The above DWARF expressions perform
*(uintptr_t *)(a + 0x1c)
and
*(uintptr_t *)(a + 0x10)
respectively. The following patch changes those to:
.uleb128 0x5 # DW_AT_rank
.byte 0x97 # DW_OP_push_object_address
.byte 0x23 # DW_OP_plus_uconst
.uleb128 0x1c
.byte 0x94 # DW_OP_deref_size
.byte 0x1
and
.uleb128 0x5 # DW_AT_rank
.byte 0x97 # DW_OP_push_object_address
.byte 0x23 # DW_OP_plus_uconst
.uleb128 0x10
.byte 0x94 # DW_OP_deref_size
.byte 0x1
which perform
*(unsigned char *)(a + 0x1c)
and
*(unsigned char *)(a + 0x10)
respectively.
2021-11-21 Jakub Jelinek <jakub@redhat.com>
PR debug/103315
* trans-types.c (gfc_get_array_descr_info): Use DW_OP_deref_size 1
instead of DW_OP_deref for DW_AT_rank.
Jakub Jelinek [Sun, 21 Nov 2021 20:06:23 +0000 (21:06 +0100)]
i386: Fix up handling of target attribute [PR101180]
As shown in the testcase below, if a function has multiple target attributes
(rather than a single one with one or more arguments) or if a function
gets one target attribute on one declaration and another one on another
declaration, on x86 their effect is not combined into
DECL_FUNCTION_SPECIFIC_TARGET, but instead only the last processed target
attribute wins. aarch64 handles this right, the following patch follows
what it does, i.e. only start with target_option_default_node if
DECL_FUNCTION_SPECIFIC_TARGET is previously NULL (i.e. the first target
attribute being processed on a function) and otherwise start from the
previous DECL_FUNCTION_SPECIFIC_TARGET.
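A hedged illustration (not the actual PR testcase) of the situation: the effects of both target attributes should end up combined in DECL_FUNCTION_SPECIFIC_TARGET rather than only the last one surviving.
__attribute__((target ("avx"))) void f (void);
__attribute__((target ("crc32"))) void f (void);   /* f should get avx and crc32 */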
2021-11-21 Jakub Jelinek <jakub@redhat.com>
PR c++/101180
* config/i386/i386-options.c (ix86_valid_target_attribute_p): If
fndecl already has DECL_FUNCTION_SPECIFIC_TARGET, use that as base
instead of target_option_default_node.
Harald Anlauf [Sun, 21 Nov 2021 18:29:27 +0000 (19:29 +0100)]
Fortran: fix lookup for gfortran builtin math intrinsics used by DEC extensions
gcc/fortran/ChangeLog:
PR fortran/99061
* trans-intrinsic.c (gfc_lookup_intrinsic): Helper function for
looking up gfortran builtin intrinsics.
(gfc_conv_intrinsic_atrigd): Use it.
(gfc_conv_intrinsic_cotan): Likewise.
(gfc_conv_intrinsic_cotand): Likewise.
(gfc_conv_intrinsic_atan2d): Likewise.
gcc/testsuite/ChangeLog:
PR fortran/99061
* gfortran.dg/dec_math_5.f90: New test.
Co-authored-by: Steven G. Kargl <kargl@gcc.gnu.org>
Jan Hubicka [Sun, 21 Nov 2021 15:15:41 +0000 (16:15 +0100)]
Improve base tracking in ipa-modref
On the exchange2 benchmark we miss some useful propagation because modref gives
up very early on analyzing accesses through pointers. For example in
int test (int *a)
{
  int i;
  for (i=0; a[i];i++);
  return i+a[i];
}
We are not able to determine that the a[i] accesses are relative to a.
This is because get_access requires the SSA name in the MEM_REF to be a
PARM_DECL, while in other places we use the ipa-prop helper to work out the
proper base pointers.
This patch commonizes the code in get_access and parm_map_for_arg so both
do the check properly, and extends it to also figure out that newly allocated
memory is not a side effect for the caller.
gcc/ChangeLog:
2021-11-21 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103227
* ipa-modref.c (parm_map_for_arg): Rename to ...
(parm_map_for_ptr): .. this one; handle static chain and calls to
malloc functions.
(modref_access_analysis::get_access): Use parm_map_for_ptr.
(modref_access_analysis::process_fnspec): Update.
(modref_access_analysis::analyze_load): Update.
(modref_access_analysis::analyze_store): Update.
gcc/testsuite/ChangeLog:
2021-11-21 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103227
* gcc.dg/tree-ssa/modref-15.c: New test.
Jan Hubicka [Sun, 21 Nov 2021 12:21:32 +0000 (13:21 +0100)]
Refactor load/store/kill analysis in ipa-modref
Refactor the load/store/kill analysis in ipa-modref into a class,
modref_access_analysis. This is done in order to avoid some code duplication
and early exits that have turned out to be hard to maintain; multiple bugs
were noticed in them recently.
gcc/ChangeLog:
2021-11-21 Jan Hubicka <hubicka@ucw.cz>
* ipa-modref.c (ignore_nondeterminism_p): Move earlier in source
code.
(ignore_retval_p): Likewise.
(ignore_stores_p): Likewise.
(parm_map_for_arg): Likewise.
(class modref_access_analysis): New class.
(modref_access_analysis::set_side_effects): New member function.
(modref_access_analysis::set_nondeterministic): New member function.
(get_access): Turn to ...
(modref_access_analysis::get_access): ... this one.
(record_access): Turn to ...
(modref_access_analysis::record_access): ... this one.
(record_access_lto): Turn to ...
(modref_access_analysis::record_access_lto): ... This one.
(record_access_p): Turn to ...
(modref_access_analysis::record_access_p): ... This one
(modref_access_analysis::record_unknown_load): New member function.
(modref_access_analysis::record_unknown_store): New member function.
(get_access_for_fnspec): Turn to ...
(modref_access_analysis::get_access_for_fnspec): ... this one.
(merge_call_side_effects): Turn to ...
(modref_access_analysis::merge_call_side_effects): ... this one.
(collapse_loads): Move later in source code.
(collapse_stores): Move later in source code.
(process_fnspec): Turn to ...
(modref_access_analysis::process_fnspec): ... this one.
(analyze_call): Turn to ...
(modref_access_analysis::analyze_call): ... this one.
(struct summary_ptrs): Remove.
(analyze_load): Turn to ...
(modref_access_analysis::analyze_load): ... this one.
(analyze_store): Turn to ...
(modref_access_analysis::analyze_store): ... this one.
(analyze_stmt): Turn to ...
(modref_access_analysis::analyze_stmt): ... This one.
(remove_summary): Remove.
(modref_access_analysis::propagate): Break out from ...
(modref_access_analysis::analyze): Break out from ...
(analyze_function): ... here.
Roger Sayle [Sun, 21 Nov 2021 11:40:08 +0000 (11:40 +0000)]
Tweak tree-ssa-math-opts.c to solve PR target/102117.
This patch resolves PR target/102117 on s390. The problem is that
some of the functionality of GCC's RTL expanders is no longer triggered
following the transition to tree SSA form. On s390, unsigned widening
multiplications are converted into WIDEN_MULT_EXPR (aka w* in tree dumps),
but signed widening multiplies are left in their original form, which
alas doesn't benefit from the clever logic in expand_widening_mult.
The fix is to teach convert_mult_to_widen that RTL expansion can
synthesize a signed widening multiplication if the target provides
a suitable umul_widen_optab.
On s390-linux-gnu with -O2 -m64, the code in the bugzilla PR currently
generates:
2021-11-21 Roger Sayle <roger@nextmovesoftware.com>
Robin Dapp <rdapp@linux.ibm.com>
gcc/ChangeLog
PR target/102117
* tree-ssa-math-opts.c (convert_mult_to_widen): Recognize
signed WIDEN_MULT_EXPR if the target supports umul_widen_optab.
gcc/testsuite/ChangeLog
PR target/102117
* gcc.target/s390/mul-wide.c: New test case.
* gcc.target/s390/umul-wide.c: New test case.
Jan Hubicka [Sat, 20 Nov 2021 23:35:22 +0000 (00:35 +0100)]
Fix looping flag discovery in ipa-pure-const
The testcase shows a situation where there is a non-trivial cycle in the
callgraph involving a noreturn call. This cycle is important for const
function discovery but not important for pure. IPA pure const uses the same
strongly connected components for both propagations, which makes it arrive at
a suboptimal result (it does not detect the pure flag). However, local pure
const gets the situation right because it processes functions in the right
order. This hits rarely executed code in propagate_pure_const that merges
results with the previously known state and has a long-standing bug in it that
makes it throw away the looping flag.
Bootstrapped/regtested x86_64-linux.
gcc/ChangeLog:
2021-11-21 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103052
* ipa-pure-const.c (propagate_pure_const): Fix merging of looping flag.
gcc/testsuite/ChangeLog:
2021-11-21 Jan Hubicka <hubicka@ucw.cz>
PR ipa/103052
* gcc.c-torture/execute/pr103052.c: New test.
Jeff Law [Sat, 20 Nov 2021 16:20:07 +0000 (11:20 -0500)]
Clobber the condition code in the bfin doloop patterns
Per Aldy's excellent, but tough to follow analysis in PR 103226, this patch
fixes the bfin-elf regression.
In simplest terms the doloop patterns on this port may clobber the condition
code register, but they do not expose that until after register allocation.
That would be fine, except that other patterns have exposed CC earlier. As
a result the dataflow, particularly for CC, is incorrect.
This leads the register allocators to assume that a value in CC outside the
loop is still valid inside the loop when, in fact, the value has been
clobbered. This is what caused pr80974 to start failing.
With this fix, not only do we fix the pr80974 regression, but we fix ~20
other execution failures in the port. It also reduces test time for the
port from ~90 minutes to ~60 minutes.
PR tree-optimization/103226
gcc/
* config/bfin/bfin.md (doloop pattern, splitter and expander): Clobber
CC.
Andrew Pinski [Sat, 20 Nov 2021 01:37:54 +0000 (01:37 +0000)]
Fix tree-optimization/103220: Another missing folding of (type) X op CST where type is a nop convert
The problem here is that int_fits_type_p will return false if we just
change the sign of things like -2 (or 254), so we should accept the case
where we just change the sign (and not the precision) of the type.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/103220
gcc/ChangeLog:
* match.pd ((type) X bitop CST): Don't check if CST
fits into the type if only the sign changes.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/pr103220-1.c: New test.
* gcc.dg/tree-ssa/pr103220-2.c: New test.
* gcc.dg/pr25530.c: Update test to check for 4294967294 in the case -2 is not matched.
Alexandre Oliva [Sat, 20 Nov 2021 05:51:27 +0000 (02:51 -0300)]
harden conds: detach without decls
When we create copies of SSA_NAMEs to hold "detached" copies of the
values for the hardening tests, we end up with assignments to
SSA_NAMEs that refer to the same decls. That would be generally
desirable, since it enables the variable to be recognized in dumps,
and makes coalescing more likely if the original variable dies at that
point. When the decl is a DECL_BY_REFERENCE, the SSA_NAME holds the
address of a parm or result, and it's read-only, so we shouldn't
create assignments to it. Gimple checkers flag at least the case of
results.
This patch arranges for us to avoid referencing the same decls, which
cures the problem, while retaining the visible association between the
SSA_NAMEs by using the same identifier for the copy.
for gcc/ChangeLog
PR tree-optimization/102988
* gimple-harden-conditionals.cc (detach_value): Copy SSA_NAME
without decl sharing.
Jakub Jelinek [Fri, 19 Nov 2021 21:09:01 +0000 (22:09 +0100)]
c++: Avoid adding implicit attributes during apply_late_template_attributes [PR101180]
decl_attributes and its caller cplus_decl_attributes sometimes add
implicit attributes, e.g. optimize attribute if #pragma GCC optimize
is active, target attribute if #pragma GCC target is active, or
e.g. omp declare target attribute if in between #pragma omp declare target
and #pragma omp end declare target.
For templates that seems highly undesirable to me though: they should
get those implicit attributes from the spot where the templates were parsed
(and they do get them), then tsubst through copy_node copies those
attributes, but then apply_late_template_attributes can or does add
a new set from the spot where they are instantiated, which can be a pretty
random point of first use of the template.
Consider e.g.
#pragma GCC push_options
#pragma GCC target "avx"
template <int N>
inline void foo ()
{
}
#pragma GCC pop_options
#pragma GCC push_options
#pragma GCC target "crc32"
void
bar ()
{
  foo<0> ();
}
#pragma GCC pop_options
testcase where the intention is that foo has avx target attribute
and bar has crc32 target attribute, but we end up with
__attribute__((target ("crc32"), target ("avx")))
on foo<0> (and due to yet another bug actually don't enable avx
in foo<0>). In this particular case it is a regression caused
by r12-299-ga0fdff3cf33f7284, which apparently calls
cplus_decl_attributes even if attributes != NULL but late_attrs
is NULL; before those changes we didn't call it in those cases.
But if there is at least one unrelated dependent attribute, this
would happen already in older releases.
The following patch fixes that by temporarily overriding the variables
that control the addition of the implicit attributes.
Shall we also change the function so that it doesn't call
cplus_decl_attributes if late_attrs is NULL, or was that change
intentional?
2021-11-19 Jakub Jelinek <jakub@redhat.com>
PR c++/101180
* pt.c (apply_late_template_attributes): Temporarily override
current_optimize_pragma, optimization_current_node,
current_target_pragma and scope_chain->omp_declare_target_attribute,
so that cplus_decl_attributes doesn't add implicit attributes.
despite "CONJURED(val_4 = strdup (src_2(D));, val_4)" having sm-state,
in this case malloc:nonnull ({free}), thus leading to both references
to the conjured svalue being lost at merger.
This patch tweaks the state merger code so that it will not consider
merging two different svalues for the value of a region if either svalue
has non-purgable sm-state (in the above example, malloc:nonnull). This
fixes the false leak report above.
Doing so uncovered an issue with explode-2a.c in which the warnings
moved from the correct location to the "while" stmt. This turned out
to be a missing call to detect_leaks in phi-handling, which the patch
also fixes (in the PK_BEFORE_SUPERNODE case in
exploded_graph::process_node). Doing this fixed the regression in
explode-2a.c and also fixed the location of the leak warning in
explode-1.c.
The other side effect of the change is that pr94858-1.c now emits
a -Wanalyzer-too-complex warning, since pertinent state is no longer
being thrown away. There doesn't seem to be a good way of avoiding
this, so the patch also adds -Wno-analyzer-too-complex to that test
case (restoring the default).
gcc/analyzer/ChangeLog:
PR analyzer/103217
* engine.cc (exploded_graph::get_or_create_node): Pass in
m_ext_state to program_state::can_merge_with_p.
(exploded_graph::process_worklist): Likewise.
(exploded_graph::maybe_process_run_of_before_supernode_enodes):
Likewise.
(exploded_graph::process_node): Add missing call to detect_leaks
when handling phi nodes.
* program-state.cc (program_state::can_merge_with_p): Add
"ext_state" param. Pass it and state ptrs to
region_model::can_merge_with_p.
(selftest::test_program_state_merging): Update for new ext_state
param of program_state::can_merge_with_p.
(selftest::test_program_state_merging_2): Likewise.
* program-state.h (program_state::can_purge_p): Make const.
(program_state::can_merge_with_p): Add "ext_state" param.
* region-model.cc: Include "analyzer/program-state.h".
(region_model::can_merge_with_p): Add params "ext_state",
"state_a", and "state_b", use them when creating model_merger
object.
(model_merger::mergeable_svalue_p): New.
* region-model.h (region_model::can_merge_with_p): Add params
"ext_state", "state_a", and "state_b".
(model_merger::model_merger) Likewise, initializing new fields.
(model_merger::mergeable_svalue_p): New decl.
(model_merger::m_ext_state): New field.
(model_merger::m_state_a): New field.
(model_merger::m_state_b): New field.
* svalue.cc (svalue::can_merge_p): Call
model_merger::mergeable_svalue_p on both states and reject the
merger accordingly.
gcc/testsuite/ChangeLog:
PR analyzer/103217
* gcc.dg/analyzer/explode-1.c: Update for improvement to location
of leak warning.
* gcc.dg/analyzer/pr103217.c: New test.
* gcc.dg/analyzer/pr94858-1.c: Add -Wno-analyzer-too-complex.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Jonathan Wakely [Fri, 19 Nov 2021 12:26:49 +0000 (12:26 +0000)]
libstdc++: Use __is_single_threaded in locale initialization
This replaces a __gthread_active_p() check with __is_single_threaded()
so that std::locale initialization doesn't use __gthread_once if it
happens before the first thread is created.
This means that _S_initialize_once() might now be called twice instead
of only once, because if __is_single_threaded() changes to false then we
will do the __gthread_once call even if _S_initialize_once() was already
called. Add a check to _S_initialize_once() and return immediately if
it is the second call.
Also add __builtin_expect to _S_initialize, as the branch will be taken
at most once in the lifetime of the program.
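A heavily simplified sketch of the idempotent one-time initialization described above; the names are hypothetical and the real code uses __is_single_threaded, __gthread_once and __builtin_expect:
namespace
{
  bool locale_init_done = false;

  void initialize_once ()
  {
    if (locale_init_done)   // may now be reached a second time: nothing to do
      return;
    // ... build the classic "C" locale facets ...
    locale_init_done = true;
  }
}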
libstdc++-v3/ChangeLog:
* src/c++98/locale_init.cc (_S_initialize_once): Check if
initialization has already been done.
(_S_initialize): Replace __gthread_active_p with
__is_single_threaded. Use __builtin_expect.
Paul A. Clarke [Fri, 22 Oct 2021 17:09:43 +0000 (12:09 -0500)]
rs6000: Add optimizations for _mm_sad_epu8
The Power9 ISA added the `vabsdub` instruction, which is realized in the
`vec_absd` intrinsic.
Use `vec_absd` for `_mm_sad_epu8` compatibility intrinsic, when
`_ARCH_PWR9`.
Also, the realization of `vec_sum2s` on little-endian includes
two rotates in order to position the input and output to match
the semantics of `vec_sum2s`:
- Rotate the second input vector left 12 bytes. In the current usage,
that vector is `{0}`, so this shift is unnecessary, but is currently
not eliminated under optimization.
- Rotate the vector produced by the `vsum2sws` instruction left 4 bytes.
The two words within each doubleword of this (rotated) result must then
be explicitly swapped to match the semantics of `_mm_sad_epu8`,
effectively reversing this rotate. So, this rotate (and a subsequent
swap) are unnecessary, but not currently removed under optimization.
Using `__builtin_altivec_vsum2sws` retains both rotates, so is not an
option for removing the rotates.
For little-endian, use the `vsum2sws` instruction directly, and
eliminate the explicit rotate (swap).
2021-11-19 Paul A. Clarke <pc@us.ibm.com>
gcc
* config/rs6000/emmintrin.h (_mm_sad_epu8): Use vec_absd when
_ARCH_PWR9, optimize vec_sum2s when LE.
Darwin: Rework handling for unwinder code in libgcc_s and specs [PR80556].
This addresses a long-standing problem where a work-around for an unwinder
issue (also a regression) regresses other functionality. The patch replaces
several work-arounds with a fix for PR80556 and a work-around for PR88590.
* The fix for PR80556 requires a bump to the SO name for libgcc_s, since we
need to remove the unwinder symbols from it. This would trigger PR88590,
hence the work-around for that.
* We weaken the symbols for emulated TLS support so that it is possible
for a DSO linked with static-libgcc to interoperate with a DSO linked with
libgcc_s. Likewise main exes.
* We remove all the gcc-4.2.1 era stubs machinery and workarounds.
* libgcc is now always linked ahead of libc, which avoids failures where the
libc (libSystem) builtins implementations are not up to date.
* The unwinder now always comes from the system
- for Darwin9 from /usr/lib/libgcc_s.1.dylib
- for Darwin10 from /usr/lib/libSystem.dylib
- for Darwin11+ from /usr/lib/system/libunwind.dylib.
We still insert a shim on Darwin10 to fix an omitted unwind function, but
the underlying unwinder remains the system one.
* The work-around for PR88590 has two parts (1) we always link libgcc from
its convenience lib on affected system versions (avoiding the need to find
the DSO path); (2) we add and export the emutls functions from DSOs - this
makes a relatively small (20k) addition to a DSO. These can be backed out
when a proper fix for PR88590 is committed.
Distributions that wish to install a libgcc_s.1.dylib to satisfy linkage
from exes that linked against the stubs can use a reexported libgcc_s.1.1
(since that contains all the symbols that were previously exported via the
stubs).
libgcc, emutls: Allow building weak definitions of the emutls functions.
In order to better support use of the emulated TLS between objects with
DSO dependencies and static-linked libgcc, allow a target to make weak
definitions.
Iain Sandoe [Fri, 19 Nov 2021 15:52:29 +0000 (15:52 +0000)]
libstdc++, testsuite: Add a prune expression for external tool bug.
Depending on the permutation of CPU, OS version and shared/non-
shared library inclusion, we can get warnings from the external
tools (ld64, dsymutil) which are not actually libstdc++ issues but
relate to the external tools themselves. This is already pruned
in the main testsuite; this adds it to the library testsuite.
Iain Sandoe [Fri, 19 Nov 2021 15:48:53 +0000 (15:48 +0000)]
libphobos, testsuite: Add prune clauses for two Darwin cases.
Depending on the permutation of CPU, OS version and shared/non-
shared library inclusion, we can get two warnings from the
external tools (ld64, dsymutil) which are not actually GCC issues
but relate to the external tools. These are already pruned in
the main testsuite; this adds them to the library testsuite.
Jonathan Wakely [Thu, 18 Nov 2021 10:33:14 +0000 (10:33 +0000)]
libstdc++: Begin lifetime of chars in constexpr std::string [PR103295]
Clang gives errors for constexpr std::string because the memory returned
by std::allocator<T>::allocate does not contain any objects yet, and
attempting to set them using char_traits::assign or char_traits::copy
fails with:
assignment to object outside its lifetime is not allowed in a constant expression
*__result = *__first;
^
This adds code to std::char_traits to use std::construct_at to begin
lifetimes when called during constant evaluation. To support
specializations of std::basic_string that don't use std::char_traits
there is now another layer of wrapper around the allocator_traits, so
that the lifetime of characters is begun as soon as the memory is
allocated. By doing it in the char traits and allocator traits, the rest
of basic_string can ignore the problem.
While modifying char_traits::copy and char_traits::assign to begin
lifetimes for the constexpr cases, I also replaced their uses of
std::copy and std::fill_n respectively. That means we don't need
<bits/stl_algobase.h> for char_traits.
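A rough sketch of the technique (not the libstdc++ char_traits code): during constant evaluation, begin the character's lifetime with std::construct_at instead of assigning through the pointer.
#include <memory>
#include <type_traits>

constexpr void
assign_char (char *p, char c)
{
  if (std::is_constant_evaluated ())
    std::construct_at (p, c);   // memory from std::allocator holds no object yet
  else
    *p = c;
}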
libstdc++-v3/ChangeLog:
PR libstdc++/103295
* include/bits/basic_string.h (_Alloc_traits): Replace typedef
with struct for C++20 mode.
* include/bits/basic_string.tcc (_M_replace): Use _Alloc_traits
for allocation.
* include/bits/char_traits.h (__gnu_cxx::char_traits::assign):
Use std::_Construct during constant evaluation.
(__gnu_cxx::char_traits::assign(CharT*, const CharT*, size_t)):
Likewise. Replace std::fill_n with memset or manual loop.
(__gnu_cxx::char_traits::copy): Likewise, replacing std::copy
with memcpy.
* include/ext/vstring.h: Include <bits/stl_algobase.h> for
std::min.
* include/std/string_view: Likewise.
* testsuite/21_strings/basic_string/capacity/char/resize_and_overwrite.cc:
Add constexpr test.
Martin Jambor [Fri, 19 Nov 2021 17:46:00 +0000 (18:46 +0100)]
options: Make -Ofast switch off -fsemantic-interposition
Using -fno-semantic-interposition has been reported by various people
to bring about considerable speed-ups at the cost of strict compliance
with the ELF symbol interposition rules. See, for example,
https://fedoraproject.org/wiki/Changes/PythonNoSemanticInterpositionSpeedup
As such I believe it should be implied by our -Ofast optimization
level, not only so that benchmarks that can benefit run faster, but
also so that people looking at the -Ofast documentation for options that
could speed up their programs find it.
gcc/ChangeLog:
2021-11-12 Martin Jambor <mjambor@suse.cz>
* opts.c (default_options_table): Switch off
flag_semantic_interposition at Ofast.
* doc/invoke.texi (Optimize Options): Document that Ofast switches off
-fsemantic-interposition.
Jan Hubicka [Fri, 19 Nov 2021 17:09:13 +0000 (18:09 +0100)]
Use modref even for nested functions in ref_maybe_used_by_call_p_1
Remove the test for the function not having a call chain that guards modref
use in ref_maybe_used_by_call_p_1. It never made sense, since modref treats
call chain accesses explicitly. It was however copied from an earlier check
for ECF_CONST (which seems dubious too, but I would like to discuss it
independently).
This enables us to detect that memory pointed to by the static chain (or parts
of it) is unused by the function.
LTO-bootstrapped and regtested with all languages on x86_64-linux.
gcc/ChangeLog:
2021-11-19 Jan Hubicka <hubicka@ucw.cz>
* tree-ssa-alias.c (ref_maybe_used_by_call_p_1): Do not guard modref
by !gimple_call_chain.
PR c++/33925
PR c/102867
* g++.dg/warn/Walways-true-2.C: Adjust to avoid a valid warning.
* c-c++-common/Waddress-5.c: New test.
* c-c++-common/Waddress-6.c: New test.
* g++.dg/warn/Waddress-7.C: New test.
* gcc.dg/Walways-true-2.c: Adjust to avoid a valid warning.
* gcc.dg/weak/weak-3.c: Expect a warning.
Tamar Christina [Fri, 19 Nov 2021 15:12:38 +0000 (15:12 +0000)]
middle-end: Handle FMA_CONJ correctly after SLP layout update.
Apologies, I got dinged by the i386 regressions bot for a test I didn't have in
my tree at the time I made the previous patch. The bot was telling me that FMA
stopped working after I strengthened the FMA check in the previous patch.
The reason is that the check is slightly early. The first check can indeed
only exit early when either node isn't a mult. However, we need to delay until
we know whether the node is a MUL or an FMA before enforcing that both nodes
must be a MULT, since the node to inspect is different depending on whether
the operation is a MUL or an FMA.
Also, with the patch updating the GCC 11 tree layout to the new GCC 12 one,
I had missed that the difference in which node is conjugated is not
symmetrical, so the test for it can just be testing the inverse order. It was
currently not detecting when the first node was conjugated instead of the
second one.
This also made me wonder why my own test didn't detect this. It turns out that
the tests, being copied from the _Float16 ones, were incorrectly marked as
xfail. The _Float16 ones are marked as xfail since C doesn't have a conj
operation for _Float16, which means you get extra type-casts in between.
While you could use the GCC _Complex extension here, I opted to mark them
xfail since I wanted to include detection over the widenings next year.
Secondly, the double tests were being skipped because Adv. SIMD was missing
from the targets supporting complex double vectorization.
With these changes all other tests run and pass, and the only XFAILed ones
are, correctly, the _Float16 ones. Sorry for missing this before; testing
should now cover all cases.
gcc/ChangeLog:
PR tree-optimization/103311
PR target/103330
* tree-vect-slp-patterns.c (vect_validate_multiplication): Fix CONJ
test to new codegen.
(complex_mul_pattern::matches): Move check downwards.
The `configure` scripts generated with autoconf often test compiler
features by setting the output to `/dev/null`, which then makes the dump
folder /dev/*, and the compilation halts with an error because
GCC cannot create files in /dev/. This is a problem when configure is
testing for compiler features, because it cannot tell whether the failure was
due to an unsupported feature or some other problem, and so it disables the
feature even if it is working.
As an example, running configure with CFLAGS="-fdump-ipa-clones"
will result in several compiler features being reported as disabled because
gcc halts with an error creating files in /dev/*.
This commit fixes the issue by checking whether the output file is
/dev/null or /dev/zero; in that case we use the current working
directory for dump output instead of the directory of the output
file, because we cannot write to /dev/*.
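A conceptual sketch of the fallback (a hypothetical helper, not the actual option-handling code):
#include <string.h>

static const char *
dump_dir_for_output (const char *outfile)
{
  if (strcmp (outfile, "/dev/null") == 0
      || strcmp (outfile, "/dev/zero") == 0)
    return ".";      /* cannot create dump files under /dev: use the CWD */
  return outfile;    /* the real code would take the directory component */
}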
Patrick Palka [Fri, 19 Nov 2021 13:54:25 +0000 (08:54 -0500)]
c++: nested lambda capturing a capture proxy [PR94376]
Here when determining the type of the FIELD_DECL for the by-value capture
of 'i' in the inner lambda, we incorrectly give it the type const int
instead of int since the effective initializer is the proxy for the outer
capture, and this proxy is const since the outer lambda is non-mutable.
This patch fixes this by making lambda_capture_field_type handle
by-value capturing of capture proxies specially: namely, we instead
consider the type of their FIELD_DECL, which unlike the proxy has the
true cv-quals of the captured entity.
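A hedged illustration (not the exact PR testcase): before the fix, the inner by-value capture of the outer capture proxy got type const int, so a mutable inner lambda could not modify its own copy.
void f ()
{
  int i = 0;
  [i] () {             // outer lambda is non-mutable, so its proxy for i is const
    [i] () mutable {   // the inner copy should still be plain int
      ++i;
    } ();
  } ();
}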
PR c++/94376
gcc/cp/ChangeLog:
* lambda.c (lambda_capture_field_type): Simplify by handling the
is_this case first. When capturing by-value a capture proxy,
consider the type of the corresponding field instead.
Iain Buclaw [Fri, 19 Nov 2021 13:43:07 +0000 (14:43 +0100)]
libphobos: Increase size of defaultStackPages on OSX X86_64 targets.
As of macOS 11, libunwind now requires more stack space than 16k, so
default to a larger stack size. This is only applied to X86, as the
PAGESIZE there is still 4k, whereas on AArch64 it is 16k.
libphobos/ChangeLog:
* libdruntime/core/thread/fiber.d (defaultStackPages): Increase size
on OSX X86_64 targets.
Iain Buclaw [Fri, 19 Nov 2021 13:26:07 +0000 (14:26 +0100)]
libphobos: Don't call __gthread_key_delete in the emutls destroy function.
Fixes an EXC_BAD_ACCESS issue seen on Darwin when the libphobos DSO gets
unloaded. Based on reading libgcc's emutls implementation: as it
doesn't call __gthread_key_delete directly, neither should libphobos.
libphobos/ChangeLog:
* libdruntime/gcc/emutls.d (emutlsDestroyThread): Don't remove entry
from global array.
(_d_emutls_destroy): Don't call __gthread_key_delete.
Andrew Pinski [Fri, 19 Nov 2021 01:42:41 +0000 (01:42 +0000)]
Fix tree-optimization/103314 : Limit folding of (type) X op CST where type is a nop convert to gimple
There is some re-association code in fold_binary which conflicts with
this optimization because it keeps around some "constants" which are not
INTEGER_CST (1 << -1), so we end up in an infinite loop.
So we need to limit this case to the GIMPLE level only.
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/103314
gcc/ChangeLog:
* match.pd ((type) X op CST): Restrict the equal
TYPE_PRECISION case to GIMPLE only.
Jakub Jelinek [Fri, 19 Nov 2021 09:05:01 +0000 (10:05 +0100)]
c++: Fix up -fstrong-eval-order handling of call arguments [PR70796]
For -fstrong-eval-order (default for C++17 and later) we make sure to
gimplify arguments in the right order, but as the following testcase
shows that is not enough.
The problem is that some lvalues can satisfy the is_gimple_val / fb_rvalue
predicate used by gimplify_arg for is_gimple_reg_type typed expressions,
or is_gimple_lvalue / fb_either used for other types.
E.g. in foo we have:
C::C (&p, ++i, ++i)
before gimplification, where i is an automatic int variable, and without this
patch we gimplify that as:
i = i + 1;
i = i + 1;
C::C (&p, i, i);
which means that the ctor is called with the original i value incremented
by 2 in both arguments, while because the call is CALL_EXPR_ORDERED_ARGS
the first argument should be different. Similarly in qux we have:
B::B (&p, TARGET_EXPR <D.2274, *(const struct A &) A::operator++ (&i)>,
TARGET_EXPR <D.2275, *(const struct A &) A::operator++ (&i)>)
and gimplify it as:
_1 = A::operator++ (&i);
_2 = A::operator++ (&i);
B::B (&p, MEM[(const struct A &)_1], MEM[(const struct A &)_2]);
but because A::operator++ returns the passed in argument, again we have
the same value in both cases due to gimplify_arg doing:
/* Also strip a TARGET_EXPR that would force an extra copy.  */
if (TREE_CODE (*arg_p) == TARGET_EXPR)
  {
    tree init = TARGET_EXPR_INITIAL (*arg_p);
    if (init
        && !VOID_TYPE_P (TREE_TYPE (init)))
      *arg_p = init;
  }
which is a perfectly fine optimization for calls with unordered arguments,
but breaks the ordered ones.
Lastly, in corge, we have before gimplification:
D::foo (NON_LVALUE_EXPR <p>, 3, ++p)
and gimplify it as
p = p + 4;
D::foo (p, 3, p);
which is again wrong, because the this argument isn't before the
side-effects but after it.
The following patch adds a cp_gimplify_arg wrapper which, if ordered
and is_gimple_reg_type, forces a non-SSA_NAME is_gimple_variable
result into a temporary, and, if ordered, not is_gimple_reg_type
and the argument is a TARGET_EXPR, bypasses the gimplify_arg optimization.
So, in foo with this patch we gimplify it as:
i = i + 1;
i.0_1 = i;
i = i + 1;
C::C (&p, i.0_1, i);
in qux as:
_1 = A::operator++ (&i);
D.2312 = MEM[(const struct A &)_1];
_2 = A::operator++ (&i);
B::B (&p, D.2312, MEM[(const struct A &)_2]);
where D.2312 is a temporary and in corge as:
p.9_1 = p;
p = p + 4;
D::foo (p.9_1, 3, p);
The is_gimple_reg_type forcing into a temporary should be really cheap
(I think even at -O0 it should be optimized away if there is no modification
in between); the aggregate copies might be more expensive, but I think e.g.
SRA or FRE should be able to deal with those if there are no intervening
changes. Still, the patch tries to avoid them when it is cheaply
provable that nothing bad happens (if no argument following it in the
strong evaluation order has TREE_SIDE_EFFECTS, then even VAR_DECLs
etc. shouldn't be modified after it). There is also an optimization to
avoid doing that for this or for arguments with reference types, as nothing
can modify the parameter values during evaluation of other arguments'
side-effects.
I've checked whether e.g.
int i = 1;
return i << ++i;
suffers from this problem as well, but it doesn't; the FE uses
SAVE_EXPR <i>, SAVE_EXPR <i> << ++i;
in that case, which gimplifies the way we want (temporary in the first
operand).
2021-11-19 Jakub Jelinek <jakub@redhat.com>
PR c++/70796
* cp-gimplify.c (cp_gimplify_arg): New function.
(cp_gimplify_expr): Use cp_gimplify_arg instead of gimplify_arg,
pass true as last argument to it if there are any following
arguments in strong evaluation order with side-effects.