Jan Hubicka [Mon, 8 Nov 2021 17:40:17 +0000 (18:40 +0100)]
Improve handling of some builtins.
For nested functions we output a call to builtin_dwarf_cfa which
initializes a frame entry used only for debugging. This however
prevents us from detecting functions containing nested functions
as const/pure or analyzing side effects in modref.
builtin_dwarf_cfa is not documented and I wonder if it should be turned into an
internal function. But I think we could consider functions using it const even
if in theory one can do things like test the return address and see the
difference between different frame addresses.
While doing so I also noticed that special_builtin_state handles quite a few
builtins that are not special-cased by ipa-modref. They do not make
user-visible loads/stores and thus I think they should be annotated with
".c" to make this explicit for both modref and PTA.
Finally I added dwarf_cfa and the similar return_address to the list of simple
builtins since they compile to a simple stack frame load (and we consider
other builtins doing so simple).
Jan Hubicka [Mon, 8 Nov 2021 17:38:09 +0000 (18:38 +0100)]
Move uncprop after modref
Moves uncprop after the modref and pure/const passes and adds a comment that
this pass should always be last since it is only supposed to help PHI lowering.
The pass replaces constants by SSA names that are known to be constant at the
given place, which hardly helps other passes.
gcc/ChangeLog:
PR tree-optimization/103177
* passes.def: Move uncprop after pure/const and modref.
Martin Jambor [Mon, 8 Nov 2021 16:49:54 +0000 (17:49 +0100)]
ipa: Unshare expressions before putting them into debug statements (PR 103099, PR 103107)
My recent patch to improve debug experience when there are removed
parameters (by ipa-sra or ipa-split) was not careful to unshare the
expressions that were then put into debug statements, which manifests
itself as PR 103099. This patch adds unsharing them using
unshare_expr_without_location which is a bit more careful with stripping
locations than what we were doing manually and so also fixes PR 103107.
gcc/ChangeLog:
2021-11-08 Martin Jambor <mjambor@suse.cz>
PR ipa/103099
PR ipa/103107
* tree-inline.c (remap_gimple_stmt): Unshare the expression without
location before invoking remap_with_debug_expressions on it.
* ipa-param-manipulation.c
(ipa_param_body_adjustments::prepare_debug_expressions): Likewise.
David Edelsohn [Mon, 8 Nov 2021 16:46:47 +0000 (11:46 -0500)]
powerpc: Fix vsx_splat_v4si_di breakage on Power8.
The vsx_splat_v4si_di pattern uses a Power8 and a Power9 instruction.
The final condition of TARGET_DIRECT_MOVE_64BIT implicitly requires Power8.
The "we" constraint requires Power9, but also requires 64 bit. Because
the DImode pattern already requires 64 bit mode, this isn't horrible,
but it would be best to remove all uses of the "we" constraint. The
mtvsrws instruction itself does not require 64 bit mode.
This patch reverts the previous change to fix the breakage.
gcc/ChangeLog:
* config/rs6000/vsx.md (vsx_splat_v4si_di): Revert "wa"
constraint to "we".
Richard Biener [Mon, 8 Nov 2021 14:21:08 +0000 (15:21 +0100)]
Fix spurious valgrind errors in irred loop verification
The sbitmap bitmap_{set,clear}_bit changes trigger spurious
uninitialized value use reports from valgrind, since we now
read the old value before setting/clearing a bit, so
verify_loop_structure's optimization of not clearing the sbitmap is reported.
Fixed by using a temporary BB flag, which should also be more
efficient in terms of cache re-use.
2021-11-08 Richard Biener <rguenther@suse.de>
* cfgloop.c (verify_loop_structure): Use a temporary BB flag
instead of an sbitmap to cache irreducible state.
The problem here is that both value_17 and value_20 are in the set of
imports we must pre-calculate. The value_17 name occurs first in the
bitmap, so we try to resolve it first, which causes us to recursively
solve the value_20 range. We do so correctly and put them both in the
cache. However, when we try to solve value_20 from the bitmap, we
ignore that it already has a cached entry and try to resolve the PHI
with the wrong value of value_17:
# value_20 = PHI <value_17(19), value_7(D)(17)>
The right thing to do is to avoid recalculating definitions already
solved.
Regstrapped and checked for # threads before and after on x86-64 Linux.
gcc/ChangeLog:
PR tree-optimization/103120
* gimple-range-path.cc (path_range_query::range_defined_in_block):
Bail if there's a cache entry.
Bill Schmidt [Mon, 8 Nov 2021 14:34:03 +0000 (08:34 -0600)]
rs6000: Miscellaneous uses of rs6000_builtin_decls_x
There are a few leftover places where we use the old rs6000_builtin_decls
array, but we need to use rs6000_builtin_decls_x instead when the new
builtins infrastructure is in play.
2021-11-08 Bill Schmidt <wschmidt@linux.ibm.com>
gcc/
* config/rs6000/rs6000.c (rs6000_builtin_reciprocal): Use
rs6000_builtin_decls_x when appropriate.
(add_condition_to_bb): Likewise.
(rs6000_atomic_assign_expand_fenv): Likewise.
Thomas Schwinge [Fri, 22 Oct 2021 13:54:42 +0000 (15:54 +0200)]
Fix 'contrib/update-copyright.py': 'TypeError: exceptions must derive from BaseException'
Running 'contrib/update-copyright.py' currently fails:
[...]
Traceback (most recent call last):
  File "contrib/update-copyright.py", line 365, in update_copyright
    canon_form = self.canonicalise_years (dir, filename, filter, years)
  File "contrib/update-copyright.py", line 270, in canonicalise_years
    (min_year, max_year) = self.year_range (years)
  File "contrib/update-copyright.py", line 253, in year_range
    year_list = [self.parse_year (year)
  File "contrib/update-copyright.py", line 253, in <listcomp>
    year_list = [self.parse_year (year)
  File "contrib/update-copyright.py", line 250, in parse_year
    raise self.BadYear (string)
TypeError: exceptions must derive from BaseException

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "contrib/update-copyright.py", line 796, in <module>
    GCCCmdLine().main()
  File "contrib/update-copyright.py", line 527, in main
    self.copyright.process_tree (dir, filter)
  File "contrib/update-copyright.py", line 458, in process_tree
    self.process_file (dir, filename, filter)
  File "contrib/update-copyright.py", line 421, in process_file
    res = self.update_copyright (dir, filename, filter,
  File "contrib/update-copyright.py", line 366, in update_copyright
    except self.BadYear as e:
TypeError: catching classes that do not inherit from BaseException is not allowed
aarch64: LD3/LD4 post-modify costs for struct modes
The LD3/ST3 and LD4/ST4 address cost code had no test coverage (oops).
This patch fixes that and updates it for the new structure modes.
The test only covers Advanced SIMD because SVE doesn't have
post-increment forms.
gcc/
* config/aarch64/aarch64.c (aarch64_ldn_stn_vectors): New function.
(aarch64_address_cost): Use it instead of testing for CImode and
XImode directly.
gcc/testsuite/
* gcc.target/aarch64/neoverse_v1_1.c: New test.
I was working on a patch that needed to calculate the number of
modes in a particular class. It seemed better to have genmodes
generate this directly rather than do the kind of dance that
expmed.h had.
gcc/
* genmodes.c (emit_insn_modes_h): Define NUM_MODE_* macros.
* expmed.h (NUM_MODE_INT): Delete in favor of genmodes definitions.
(NUM_MODE_PARTIAL_INT, NUM_MODE_VECTOR_INT): Likewise.
* real.h (real_format_for_mode): Use NUM_MODE_FLOAT and
NUM_MODE_DECIMAL_FLOAT.
(REAL_MODE_FORMAT): Likewise.
Richard Biener [Mon, 8 Nov 2021 08:08:12 +0000 (09:08 +0100)]
tree-optimization/103102 - fix error in vectorizer refactoring
This fixes an oversight that caused vectorized epilogues to have
versioning for niters applied.
2021-11-08 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vect_create_loop_vinfo): Add main_loop_info
parameter.
* tree-vect-loop.c (vect_create_loop_vinfo): Likewise. Set
LOOP_VINFO_ORIG_LOOP_INFO and conditionalize set of
LOOP_VINFO_NITERS_ASSUMPTIONS.
(vect_analyze_loop_1): Adjust.
(vect_analyze_loop): Move loop constraint setting and
SCEV/niter reset here from vect_create_loop_vinfo to perform
it only once.
(vect_analyze_loop_form): Move dumping of symbolic niters
here from vect_create_loop_vinfo.
Jan Hubicka [Mon, 8 Nov 2021 06:52:45 +0000 (07:52 +0100)]
Add loads/stores relative to static chain in ipa-modref
Adds tracking of accesses relative to the static chain into the modref
load/store analysis. This helps some Fortran benchmarks; however, it is still
quite limited. One problem is that we never discover functions containing nested
functions as const, pure or not accessing global memory, because they contain a
__builtin_dwarf_cfa call which we believe to be non-pure.
Bootstrapped/regtested x86_64-linux. Plan to commit it tomorrow if there are
no complaints and once the periodic testers pick up today's modref changes.
Honza
gcc/ChangeLog:
* ipa-modref-tree.h (enum modref_special_parms): New enum.
(struct modref_access_node): Update for special parms.
(struct modref_ref_node): Likewise.
(struct modref_parm_map): Likewise.
(struct modref_tree): Likewise.
* ipa-modref.c (dump_access): Likewise.
(get_access): Detect static chain.
(parm_map_for_arg): Take tree as arg instead of
stmt and index.
(merge_call_side_effects): Compute map for static chain.
(process_fnspec): Update.
(struct escape_point): Remove retslot_arg and static_chain_arg.
(analyze_parms): Update.
(compute_parm_map): Update.
(propagate_unknown_call): Update.
(modref_propagate_in_scc): Update.
(modref_merge_call_site_flags): Update.
(ipa_merge_modref_summary_after_inlining): Update.
* tree-ssa-alias.c (modref_may_conflict): Handle static chain.
* ipa-modref-tree.c (test_merge): Update.
Haochen Gui [Tue, 2 Nov 2021 06:09:32 +0000 (14:09 +0800)]
Disables gimple folding for VSX_BUILTIN_XVMINDP, VSX_BUILTIN_XVMAXDP, ALTIVEC_BUILTIN_VMINFP and ALTIVEC_BUILTIN_VMAXFP when fast-math is not set.
gcc/
* config/rs6000/rs6000-call.c (rs6000_gimple_fold_builtin): Disable
gimple fold for VSX_BUILTIN_XVMINDP, ALTIVEC_BUILTIN_VMINFP,
VSX_BUILTIN_XVMAXDP, ALTIVEC_BUILTIN_VMAXFP when fast-math is not
set.
gcc/testsuite/
* gcc.target/powerpc/vec-minmax-1.c: New test.
* gcc.target/powerpc/vec-minmax-2.c: Likewise.
liuhongt [Fri, 5 Nov 2021 02:41:22 +0000 (10:41 +0800)]
Update documentation for -ftree-loop-vectorize and -ftree-slp-vectorize which are enabled by default at -O2.
gcc/ChangeLog:
PR tree-optimization/103077
* doc/invoke.texi (Options That Control Optimization):
Update documentation for -ftree-loop-vectorize and
-ftree-slp-vectorize which are enabled by default at -O2.
liuhongt [Mon, 8 Nov 2021 01:32:17 +0000 (09:32 +0800)]
Add !HONOR_SNANS to simplification: (trunc)copysign((extend)a, (extend)b) to copysign (a, b).
> Note that this is not safe with -fsignaling-nans, so needs to be disabled
> for that option (if there isn't already logic somewhere with that effect),
> because the extend will convert a signaling NaN to quiet (raising
> "invalid"), but copysign won't, so this transformation could result in a
> signaling NaN being wrongly returned when the original code would never
> have returned a signaling NaN.
>
> --
> Joseph S. Myers
> joseph@codesourcery.com
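For illustration (my own example, not from the patch or the quoted mail), the
simplified pattern corresponds to source like the following, where the
extend/copysign/truncate chain can fold to a single float copysign once
signaling NaNs need not be honored:

/* Hedged sketch: without -fsignaling-nans the cast chain below may be
   simplified to the equivalent of __builtin_copysignf (a, b).  */
float
narrowed_copysign (float a, float b)
{
  return (float) __builtin_copysign ((double) a, (double) b);
}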
Thomas Koenig [Sun, 7 Nov 2021 14:38:35 +0000 (15:38 +0100)]
Fix keyword name for co_reduce.
gcc/fortran/ChangeLog:
* intrinsic.c (add_subroutines): Change keyword "operator"
to the correct one, "operation".
* check.c (gfc_check_co_reduce): Change OPERATOR to
OPERATION in error messages.
* intrinsic.texi: Change OPERATOR to OPERATION in
documentation.
gcc/testsuite/ChangeLog:
* gfortran.dg/co_reduce_2.f90: New test.
* gfortran.dg/coarray_collectives_14.f90: Change OPERATOR
to OPERATION.
* gfortran.dg/coarray_collectives_16.f90: Likewise.
* gfortran.dg/coarray_collectives_9.f90: Likewise.
Aldy Hernandez [Mon, 1 Nov 2021 14:50:38 +0000 (15:50 +0100)]
Remove VRP threader.
Now that things have stabilized, we can remove the old code.
I have left the hybrid threader in tree-ssa-threadedge, even though the
VRP threader was the only user, because we may need it as an interim
step for DOM threading removal.
Jan Hubicka [Sun, 7 Nov 2021 17:20:45 +0000 (18:20 +0100)]
Fix inter-procedural EAF flags propagation with respect to !binds_to_current_def_p
While proofreading the code for handling EAF flags of !binds_to_current_def_p I
noticed that the interprocedural dataflow actually ignores the flag, possibly
introducing wrong code on quite complex interposable functions in non-trivial
recursion cycles (or at the ltrans partition boundary).
This patch unifies the flag changes in a single place (remove_useless_eaf_flags)
and extends modref_merge_call_site_flags to do the right thing.
lto-bootstrapped/regtested x86_64-linux. Plan to commit it today after a bit
more testing (firefox/clang build).
gcc/ChangeLog:
* gimple.c (gimple_call_arg_flags): Use interposable_eaf_flags.
(gimple_call_retslot_flags): Likewise.
(gimple_call_static_chain_flags): Likewise.
* ipa-modref.c (remove_useless_eaf_flags): Do not remove everything for
NOVOPS.
(modref_summary::useful_p): Likewise.
(modref_summary_lto::useful_p): Likewise.
(analyze_parms): Do not give up on NOVOPS.
(analyze_function): When dumping, report changes in EAF flags
between IPA and local pass.
(modref_merge_call_site_flags): Compute implicit eaf flags
based on callee ecf_flags and fnspec; if the function does not
bind to current defs use interposable_eaf_flags.
(modref_propagate_flags_in_scc): Update.
* ipa-modref.h (interposable_eaf_flags): New function.
Bill Schmidt [Sun, 7 Nov 2021 13:56:07 +0000 (07:56 -0600)]
rs6000: Replace the builtin expansion machinery
This patch forms the meat of the improvements for this patch series.
We develop a replacement for rs6000_expand_builtin and its supporting
functions, which are inefficient and difficult to maintain.
Differences between the old and new support in this patch include:
- Make use of the new builtin data structures, directly looking up
a function's information rather than searching for the function
multiple times;
- Test for enablement of builtins at expand time, to support #pragma
target changes within a compilation unit;
- Use the builtin function attributes (e.g., bif_is_cpu) to control
special handling;
- Refactor common code into one place; and
- Provide common error handling in one place for operands that are
restricted to specific values or ranges.
Jan Hubicka [Sun, 7 Nov 2021 08:35:16 +0000 (09:35 +0100)]
Implement intra-procedural dataflow in ipa-modref flags propagation.
Implement the (long promised) intraprocedural dataflow for
propagating eaf flags, so we can handle parameters that participate
in loops in SSA graphs. Typical examples are accessors that walk linked
lists.
I implemented dataflow using the standard iteration over BBs in RPO some time
ago, but did not like it because it had a measurable compile time impact with
very small code quality effect. This is why I kept mainline doing the DFS walk
instead. The reason is that we care about flags of SSA names that correspond
to parameters, and those can often be determined from a small fraction of the
SSA graph, so solving dataflow for all SSA names in a function is a waste.
This patch implements dataflow more carefully. The DFS walk is kept in place to
solve acyclic cases and to discover the relevant part of the SSA graph into a new
graph (which is similar to the one used for inter-procedural dataflow - we only
need to know the edges and whether the access is direct or dereferenced). The RPO
iterative dataflow then works on this simplified graph.
This seems to be fast in practice. For GCC linktime we do dataflow for 4881
functions. Out of those, 4726 finish in one iteration, 144 in two and 10 in 3.
Overall 31979 functions are analysed, so we do dataflow only for a bit over
10% of cases. 131123 edges are visited by the solver. I measured no compile
time impact of this.
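A generic illustration of the shape of such a solver (my own sketch, not the
ipa-modref code): flags start optimistically at "all set" and each node is
intersected with its predecessors until nothing changes, which in the numbers
above usually takes one to three passes:

#include <cstddef>
#include <vector>

struct node
{
  unsigned flags = ~0u;        /* optimistic start: all flags assumed to hold */
  std::vector<int> preds;      /* edges of the simplified SSA graph */
};

void
propagate (std::vector<node> &g)
{
  bool changed = true;
  while (changed)
    {
      changed = false;
      for (std::size_t i = 0; i < g.size (); ++i)  /* ideally visited in RPO */
        for (int p : g[i].preds)
          {
            unsigned merged = g[i].flags & g[p].flags;  /* meet = intersection */
            if (merged != g[i].flags)
              {
                g[i].flags = merged;
                changed = true;
              }
          }
    }
}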
gcc/ChangeLog:
* ipa-modref.c (modref_lattice): Add do_dataflow,
changed and propagate_to fields.
(modref_lattice::release): Free propagate_to.
(modref_lattice::merge): Do not give up early on unknown
lattice values.
(modref_lattice::merge_deref): Likewise.
(modref_eaf_analysis): Update toplevel comment.
(modref_eaf_analysis::analyze_ssa_name): Record postponed ssa names;
do optimistic dataflow initialization.
(modref_eaf_analysis::merge_with_ssa_name): Build dataflow graph.
(modref_eaf_analysis::propagate): New member function.
(analyze_parms): Update to new API of modref_eaf_analysis.
David Edelsohn [Sat, 6 Nov 2021 00:33:45 +0000 (20:33 -0400)]
powerpc: Fix vsx_splat_v4si in 32 bit mode
Tamar's recent patch to teach CSE to perform vector extract exercises
VSX splat more frequently, which exposed a constraint error for the
vsx_splat patterns. The pattern could be created for Power9, but
the "we constraint only provided alternatives in 64 bit mode. The
instructions are valid in 32 bit mode and SImode is allowed in VSX
registers. This patch updates the constraints from "we" to "wa" to
allow the pattern and fix the failing testcases.
gcc/ChangeLog:
* config/rs6000/vsx.md (vsx_splat_v4si): Change constraints to "wa".
(vsx_splat_v4si_di): Change constraint to "wa".
First, lb_17 == _134 because of the PHI.
Second, _134 > M.10_120 because of _134 = M.10_120 + 1.
We then assume that lb_75 > M.10_120, but this is incorrect because
M.10_120 was killed along the path.
This incorrect thread causes the miscompilation in 527.cam4_r.
Tested on x86-64 and ppc64le Linux.
gcc/ChangeLog:
PR tree-optimization/103061
* value-relation.cc (path_oracle::path_oracle): Initialize
m_killed_defs.
(path_oracle::killing_def): Set m_killed_defs.
(path_oracle::query_relation): Do not look at the root oracle for
killed defs.
* value-relation.h (class path_oracle): Add m_killed_defs.
Aldy Hernandez [Thu, 4 Nov 2021 18:44:15 +0000 (19:44 +0100)]
Cleanup back_threader::find_path_to_names.
The main path discovery function was due for a cleanup. First,
there's a nagging goto and second, my bitmap use was sloppy. Hopefully
this makes the code easier for others to read.
Regstrapped on x86-64 Linux. I also made sure there was no difference
in the number of threads with this patch.
No functional changes.
gcc/ChangeLog:
* tree-ssa-threadbackward.c (back_threader::find_paths_to_names):
Remove gotos and other cleanups.
Darwin : Make trampoline templates linker-visible.
For aarch64, the alignment of the LTRAMPn symbols matters.
Actually, the LTRAMPn symbols _are_ 8 byte aligned, but because
they are Local, the linker doesn't know that this guarantee can be met.
It assumes that they are not necessarily more aligned than the
containing section (ld64 atoms strike again).
The fix is to publish the trampoline symbol for the linker to access
directly - it can then see that the atom is suitably aligned.
Iain Sandoe [Fri, 28 Aug 2020 18:09:45 +0000 (19:09 +0100)]
Darwin, aarch64 : Ada fixes for hosted tools.
This will allow someone (with an existing Ada compiler on the
platform, which can be provided by the experimental aarch64-darwin
branch) to build the host tools (gnatmake and friends) for a
non-native cross.
The existing provisions for iOS are OK for cross-compilation from
an x86-64-darwin platform, but we need some adjustments so that these
host tools can be built to run on aarch64-darwin.
* gcc-interface/Make-lang.in: Use iOS signal trampoline code
for hosted Ada tools.
* sigtramp-ios.c: Wrap the declarations in extern "C" when
the code is built by a C++ compiler.
Jonathan Wakely [Wed, 20 Oct 2021 08:25:24 +0000 (09:25 +0100)]
libstdc++: Add support for POWER9 DARN instruction to std::random_device
The ISA-3.0 instruction set includes DARN ("deliver a random number")
which can be used similarly to the existing support for RDRAND and RDSEED.
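A hedged sketch of how such a source can be polled (my illustration, assuming
GCC's documented __builtin_darn built-in and the _ARCH_PWR9 macro; this is not
the libstdc++ __ppc_darn code):

#include <cstdint>

std::uint64_t
darn_random ()
{
#ifdef _ARCH_PWR9
  for (;;)
    {
      unsigned long long r = __builtin_darn (); /* conditioned random number */
      if (r != ~0ULL)                           /* all-ones signals "retry" */
        return r;
    }
#else
  return 0; /* no DARN on this target; a real source would fall back elsewhere */
#endif
}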
libstdc++-v3/ChangeLog:
* src/c++11/random.cc [__powerpc__] (USE_DARN): Define.
(__ppc_darn): New function to use POWER9 DARN instruction.
(Which): Add 'darn' enumerator.
(which_source): Check for __ppc_darn.
(random_device::_M_init): Support "darn" and "hw" tokens.
(random_device::_M_getentropy): Add darn to switch.
* testsuite/26_numerics/random/random_device/cons/token.cc:
Check "darn" token.
* testsuite/26_numerics/random/random_device/entropy.cc:
Likewise.
This change implements TI mode on PA64. Various new patterns are
added to pa.md. The libgcc build needed modification to build both
DI and TI routines. We also need various softfp routines to
convert to and from TImode.
I added full softfp for the -msoft-float option. At the moment,
this doesn't completely eliminate all use of the floating-point
co-processor. For this, libgcc needs to be built with -msoft-mult.
The floating-point exception support also needs a soft option.
2021-11-05 John David Anglin <danglin@gcc.gnu.org>
PR libgomp/96661
gcc/ChangeLog:
* config/pa/pa-modes.def: Add OImode integer type.
* config/pa/pa.c (pa_scalar_mode_supported_p): Allow TImode
for TARGET_64BIT.
* config/pa/pa.h (MIN_UNITS_PER_WORD): Define to UNITS_PER_WORD
if IN_LIBGCC2.
* config/pa/pa.md (addti3, addvti3, subti3, subvti3, negti2,
negvti2, ashlti3, shrpd_internal): New patterns.
Change some multi instruction types to multi.
Jakub Jelinek [Fri, 5 Nov 2021 15:39:14 +0000 (16:39 +0100)]
x86: Make stringop_algs::stringop_strategy ctor constexpr [PR100246]
> Several older compilers fail to build modern GCC because of missing
> or incomplete C++11 support.
>
> * config/i386/i386.h (struct stringop_algs): Define a CTOR for
> this type.
Unfortunately, as mentioned in my
https://gcc.gnu.org/pipermail/gcc-patches/2021-November/583289.html
mail, without the new dyninit pass this causes dynamic initialization of
many variables, 6.5KB _GLOBAL__sub_I_* on x86_64 and 12.5KB on i686.
The following patch makes the ctor constexpr so that already the FE
is able to statically initialize all those.
I have tested on godbolt a reduced testcase without a constructor,
with constructor and with constexpr constructor.
clang before 3.3 is unhappy about all 3 cases, clang 3.3 and 3.4
are ok with ctor and ctor with constexpr and optimize it into static
initialization, clang 3.5+ is ok with all 3 versions and optimizes,
gcc 4.8 and 5+ are ok with all 3 versions and both no ctor and ctor with
constexpr are optimized, gcc 4.9 is unhappy about the no ctor case and happy
with the other two.
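To illustrate the general effect (my own sketch, not the i386.h type itself):
with a constexpr constructor a namespace-scope constant can be
constant-initialized by the front end, so no _GLOBAL__sub_I_* dynamic
initializer is emitted for it.

struct algs
{
  int max;
  int alg;
  constexpr algs (int m, int a) : max (m), alg (a) {}
};

/* Statically initialized: no dynamic-initialization code is generated.  */
const algs default_algs (128, 1);

int lookup () { return default_algs.alg; }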
2021-11-05 Jakub Jelinek <jakub@redhat.com>
PR bootstrap/100246
* config/i386/i386.h
(stringop_algs::stringop_strategy::stringop_strategy): Make the ctor
constexpr.
Wilco Dijkstra [Fri, 5 Nov 2021 15:05:15 +0000 (15:05 +0000)]
AArch64: Fix PR103085
The stack protector implementation hides symbols in a const unspec, which means
movdi/movsi patterns must always support const on symbol operands and
explicitly strip away the unspec. Do this for the recently added GOT
alternatives. Add a test to ensure stack-protector tests GOT accesses as well.
2021-11-05 Wilco Dijkstra <wdijkstr@arm.com>
PR target/103085
* config/aarch64/aarch64.c (aarch64_mov_operand_p): Strip the salt
first.
* config/aarch64/constraints.md: Support const in Usw.
gcc/testsuite/
PR target/103085
* gcc.target/aarch64/pr103085.c: New test.
Richard Biener [Fri, 5 Nov 2021 09:10:46 +0000 (10:10 +0100)]
Split vector loop analysis into main and epilogue analysis
As discussed this splits the analysis loop into two, first settling
on a vector mode used for the main loop and only then analyzing
the epilogue of that for possible vectorization. That makes it
easier to put in support for unrolled main loops.
On the way I've realized some cleanup opportunities, namely caching
n_stmts in vec_info_shared (it's computed by dataref analysis),
avoiding passing that around, and setting/clearing loop->aux
during analysis - try_vectorize_loop_1 will ultimately set it
on those we vectorize.
This also gets rid of the previously introduced callback in
vect_analyze_loop_1 in favor of making that advance the mode iterator.
I'm now pushing VOIDmode explicitly into the vector_modes array
which makes the re-start on the epilogue side a bit more
straightforward. Note that we will now use auto-detection of the
vector mode in case the main loop used it and we want to try
LOOP_VINFO_EPIL_USING_PARTIAL_VECTORS_P and the first mode from
the target array if not. I've added a comment that says we may
want to make sure we don't try vectorizing the epilogue with a
bigger vector size than the main loop but the situation isn't
very likely to appear in practice I guess (and it was also present
before this change).
In principle this change should not change vectorization decisions
but the way we handled re-analyzing epilogues as main loops makes
me only 99% sure that it does.
2021-11-05 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (vec_info_shared::n_stmts): Add.
(LOOP_VINFO_N_STMTS): Likewise.
(vec_info_for_bb): Remove unused function.
* tree-vectorizer.c (vec_info_shared::vec_info_shared):
Initialize n_stmts member.
* tree-vect-loop.c: Remove INCLUDE_FUNCTIONAL.
(vect_create_loop_vinfo): Do not set loop->aux.
(vect_analyze_loop_2): Do not get n_stmts as argument,
instead use LOOP_VINFO_N_STMTS. Set LOOP_VINFO_VECTORIZABLE_P
here.
(vect_analyze_loop_1): Remove callback, get the mode iterator
and autodetected_vector_mode as argument, advancing the
iterator and initializing autodetected_vector_mode here.
(vect_analyze_loop): Split analysis loop into two, first
processing main loops only and then epilogues.
Martin Jambor [Fri, 5 Nov 2021 13:04:42 +0000 (14:04 +0100)]
ipa: Do not require RECORD_TYPE for ancestor jump functions
The check this patch removes has remained from the times when ancestor
jump functions were only used for devirtualization and also
contained BINFOs. It is not necessary now and should have been
removed a long time ago.
gcc/ChangeLog:
2021-11-04 Martin Jambor <mjambor@suse.cz>
* ipa-prop.c (compute_complex_assign_jump_func): Remove
unnecessary check for RECORD_TYPE.
Jonathan Wakely [Fri, 5 Nov 2021 11:25:27 +0000 (11:25 +0000)]
libstdc++: Add xfail to pretty printer tests that fail in C++20
For some reason the type printer for std::string doesn't work in C++20
mode, so std::basic_string<char, char_traits<char>, allocator<char> is
printed out in full rather than being shown as std::string. It's
probably related to the fact that the extern template declarations are
disabled for C++20, but I don't know why that affects GDB.
For now I'm just marking the relevant tests as XFAIL. That requires
adding support for target selectors to individual GDB directives such as
note-test and whatis-regexp-test.
libstdc++-v3/ChangeLog:
* testsuite/lib/gdb-test.exp: Add target selector support to the
dg-final directives.
* testsuite/libstdc++-prettyprinters/80276.cc: Add xfail for
C++20.
* testsuite/libstdc++-prettyprinters/libfundts.cc: Likewise.
* testsuite/libstdc++-prettyprinters/prettyprinters.exp: Tweak
comment.
Gerald Pfeifer [Sun, 24 Oct 2021 09:48:29 +0000 (11:48 +0200)]
doc: No longer generate old.html
Commit 431d26e1dd18c1146d3d4dcd3b45a3b04f7f7d59 removed
doc/install-old.texi, alas we still tried to generate the
associated web page old.html - which then turned out empty.
Simply remove this from the list of pages to be generated.
gcc:
* doc/install.texi2html: Do not generate old.html any longer.
Jakub Jelinek [Fri, 5 Nov 2021 09:20:10 +0000 (10:20 +0100)]
dwarf2out: Fix up CONST_WIDE_INT handling once more [PR103046]
My last change to CONST_WIDE_INT handling in add_const_value_attribute broke
handling of CONST_WIDE_INT constants like ((__uint128_t) 1 << 120).
wi::min_precision (w1, UNSIGNED) is in that case 121, but wide_int::from
creates a wide_int that has 0 and 0xff00000000000000ULL in its elts and
precision 121. When we output that, we output both elements and thus emit
0, 0xff00000000000000 instead of the desired 0, 0x0100000000000000.
IMHO we should actually pass machine_mode to add_const_value_attribute from
callers, so that we know exactly what precision we want. Because
hypothetically, if say mode is OImode and the CONST_WIDE_INT value fits into
128 bits or 192 bits, we'd emit just those 128 or 192 bits but debug info
users would expect 256 bits.
On
typedef unsigned __int128 U;
int
main ()
{
  U a = (U) 1 << 120;
  U b = 0xffffffffffffffffULL;
  U c = ((U) 0xffffffff00000000ULL) << 64;
  return 0;
}
vanilla gcc incorrectly emits 0, 0xff00000000000000 for a,
0xffffffffffffffff alone (DW_FORM_data8) for b and 0, 0xffffffff00000000
for c. gcc with the previously posted PR103046 patch emits
0, 0x0100000000000000 for a, 0xffffffffffffffff alone for b and
0, 0xffffffff00000000 for c. And with this patch we emit
0, 0x0100000000000000 for a, 0xffffffffffffffff, 0 for b and
0, 0xffffffff00000000 for c.
So, the patch below certainly causes larger debug info (well, 128-bit
integers are pretty rare), but in this case the question is if it isn't
more correct, as debug info consumers generally will not know if they
should sign or zero extend the value in DW_AT_const_value.
The previous code assumes they will always zero extend it...
2021-11-05 Jakub Jelinek <jakub@redhat.com>
PR debug/103046
* dwarf2out.c (add_const_value_attribute): Add MODE argument, use it
in CONST_WIDE_INT handling. Adjust recursive calls.
(add_location_or_const_value_attribute): Pass DECL_MODE (decl) to
new add_const_value_attribute argument.
(tree_add_const_value_attribute): Pass TYPE_MODE (type) to new
add_const_value_attribute argument.
Richard Biener [Wed, 27 Oct 2021 11:14:41 +0000 (13:14 +0200)]
First refactor of vect_analyze_loop
This refactors the main loop analysis part in vect_analyze_loop,
re-purposing the existing vect_reanalyze_as_main_loop for this
to reduce code duplication. Failure flow is a bit tricky since
we want to extract info from the analyzed loop but I wanted to
share the destruction part. Thus I add some std::function and
lambda to funnel post-analysis for the case we want that
(when analyzing from the main iteration but not when re-analyzing
an epilogue as main).
In addition I split vect_analyze_loop_form into analysis and
vinfo creation so we can do the analysis only once, simplifying
the new vect_analyze_loop_1.
As discussed we probably want to change the loop over vector
modes to first only analyze things as the main loop, picking
the best (or simd VF) mode for the main loop and then analyze
for a vectorized epilogue. The unroll would then integrate
with the main loop vectorization. I think that currently
we may fail to analyze the epilogue with the same mode as
the main loop when using partial vectors since we increment
mode_i before doing that.
2021-11-04 Richard Biener <rguenther@suse.de>
* tree-vectorizer.h (struct vect_loop_form_info): New.
(vect_analyze_loop_form): Adjust.
(vect_create_loop_vinfo): New.
* tree-parloops.c (gather_scalar_reductions): Adjust for
vect_analyze_loop_form API change.
* tree-vect-loop.c: Include <functional>.
(vect_analyze_loop_form_1): Rename to vect_analyze_loop_form,
take struct vect_loop_form_info as output parameter and adjust.
(vect_analyze_loop_form): Rename to vect_create_loop_vinfo and
split out call to the original vect_analyze_loop_form_1.
(vect_reanalyze_as_main_loop): Rename to...
(vect_analyze_loop_1): ... this, factor out the call to
vect_analyze_loop_form and generalize to be able to use it twice ...
(vect_analyze_loop): ... here. Perform vect_analyze_loop_form
once only and here.
Jonathan Wakely [Thu, 4 Nov 2021 22:50:02 +0000 (22:50 +0000)]
libstdc++: Fix pretty printing of std::unique_ptr [PR103086]
Since std::tuple started using [[no_unique_address]] the tuple<T*, D>
member of std::unique_ptr<T, D> has two _M_head_impl subobjects, in
different base classes. That means this printer code is ambiguous:
In older versions of GDB it happened to work by chance, because GDB
returned the last _M_head_impl member and std::tuple's base classes are
stored in reverse order, so the last one was the T* element of the
tuple. Since GDB 11 it returns the first _M_head_impl, which is the
deleter element.
The fix is for the printer to stop using an ambiguous field name and
cast the tuple to the correct base class before accessing the
_M_head_impl member.
Instead of fixing this in both UniquePointerPrinter and StdPathPrinter a
new unique_ptr_get function is defined to do it correctly. That is
defined in terms of new tuple_get and _tuple_impl_get functions.
It would be possible to reuse _tuple_impl_get to access each element in
StdTuplePrinter._iterator.__next__, but that already does the correct
casting, and wouldn't be much simpler anyway.
libstdc++-v3/ChangeLog:
PR libstdc++/103086
* python/libstdcxx/v6/printers.py (_tuple_impl_get): New helper
for accessing the tuple element stored in a _Tuple_impl node.
(tuple_get): New function for accessing a tuple element.
(unique_ptr_get): New function for accessing a unique_ptr.
(UniquePointerPrinter, StdPathPrinter): Use unique_ptr_get.
* python/libstdcxx/v6/xmethods.py (UniquePtrGetWorker): Cast
tuple to its base class before accessing _M_head_impl.
Jonathan Wakely [Wed, 29 Sep 2021 20:18:42 +0000 (21:18 +0100)]
libstdc++: Deprecate std::unexpected and handler functions
These functions have been deprecated since C++11, and were removed in
C++17. The proposal P0323 wants to reuse the name std::unexpected for a
class template, so we will need to stop defining the current function
for C++23 anyway.
This marks them as deprecated for C++11 and up, to warn users they won't
continue to be available. It disables them for C++17 and up, unless the
_GLIBCXX_USE_DEPRECATED macro is defined.
The <unwind-cxx.h> header uses std::unexpected_handler in the public
API, but since that type is the same as std::terminate_handler we can
just use that instead, to avoid warnings about it being deprecated.
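For reference, a minimal user-side illustration (my example, not from the
patch): code like this keeps compiling for C++11/14 but now gets a deprecation
warning, and for C++17 and up it needs _GLIBCXX_USE_DEPRECATED as described
above.

#include <exception>

void on_unexpected () { std::terminate (); }

int
main ()
{
  std::set_unexpected (on_unexpected); // now warns: deprecated since C++11
}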
libstdc++-v3/ChangeLog:
* doc/xml/manual/evolution.xml: Document deprecations.
* doc/html/*: Regenerate.
* libsupc++/exception (unexpected_handler, unexpected)
(get_unexpected, set_unexpected): Add deprecated attribute.
Do not define without _GLIBCXX_USE_DEPRECATED for C++17 and up.
* libsupc++/eh_personality.cc (PERSONALITY_FUNCTION): Disable
deprecated warnings.
* libsupc++/eh_ptr.cc (std::rethrow_exception): Likewise.
* libsupc++/eh_terminate.cc: Likewise.
* libsupc++/eh_throw.cc (__cxa_init_primary_exception):
Likewise.
* libsupc++/unwind-cxx.h (struct __cxa_exception): Use
terminate_handler instead of unexpected_handler.
(struct __cxa_dependent_exception): Likewise.
(__unexpected): Likewise.
* testsuite/18_support/headers/exception/synopsis.cc: Add
dg-warning for deprecated warning.
* testsuite/18_support/exception_ptr/60612-unexpected.cc:
Disable deprecated warnings.
* testsuite/18_support/set_unexpected.cc: Likewise.
* testsuite/18_support/unexpected_handler.cc: Likewise.
Currently std::variant uses __index_of<T, Types...> to find the first
occurrence of a type in a pack, and __exactly_once<T, Types...> to check
that there is no other occurrence.
We can reuse the __find_uniq_type_in_pack<T, Types...>() function for
both tasks, and remove the recursive templates used to implement
__index_of and __exactly_once.
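An illustrative sketch of the idea (my own names and code, not the libstdc++
implementation): a single constexpr scan over the pack can report both the
index of T and whether it occurs exactly once, by returning the pack size as a
"not found or not unique" sentinel.

#include <cstddef>
#include <type_traits>

template<typename T, typename... Ts>
constexpr std::size_t
find_uniq_type_in_pack ()
{
  constexpr bool matches[] = { std::is_same_v<T, Ts>..., false };
  std::size_t index = sizeof...(Ts);   // sentinel: not found / not unique
  for (std::size_t i = 0; i < sizeof...(Ts); ++i)
    if (matches[i])
      {
        if (index != sizeof...(Ts))    // second occurrence: not unique
          return sizeof...(Ts);
        index = i;
      }
  return index;
}

static_assert (find_uniq_type_in_pack<int, char, int, long> () == 1);
static_assert (find_uniq_type_in_pack<int, int, int> () == 2); // not unique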
libstdc++-v3/ChangeLog:
* include/bits/utility.h (__find_uniq_type_in_pack): Move
definition to here, ...
* include/std/tuple (__find_uniq_type_in_pack): ... from here.
* include/std/variant (__detail::__variant::__index_of): Remove.
(__detail::__variant::__exactly_once): Define using
__find_uniq_type_in_pack instead of __index_of.
(get<T>, get_if<T>, variant::__index_of): Likewise.
Jonathan Wakely [Thu, 4 Nov 2021 09:31:50 +0000 (09:31 +0000)]
libstdc++: Optimize std::tuple_element and std::tuple_size_v
This reduces the number of class template instantiations needed for code
using tuples, by reusing _Nth_type in tuple_element and specializing
tuple_size_v for tuple, pair and array (and const-qualified versions of
them).
Also define the _Nth_type primary template as a complete type (but with
no nested 'type' member). This avoids "invalid use of incomplete type"
errors for out-of-range specializations of tuple_element. Those errors
would probably be confusing and unhelpful for users. We already have
a user-friendly static assert in tuple_element itself.
Also ensure that tuple_size_v is available whenever tuple_size is (as
proposed by LWG 3387). We already do that for tuple_element_t.
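A rough sketch of the _Nth_type idea for illustration (hypothetical names, not
the actual <bits/utility.h> code), including the complete-but-empty primary
template mentioned above:

#include <cstddef>
#include <type_traits>

template<std::size_t N, typename... Ts>
struct nth_type { };   // complete, but no nested 'type': an out-of-range N
                       // fails cleanly at the point of use

template<typename T, typename... Rest>
struct nth_type<0, T, Rest...> { using type = T; };

template<std::size_t N, typename T, typename... Rest>
struct nth_type<N, T, Rest...> : nth_type<N - 1, Rest...> { };

static_assert (std::is_same_v<typename nth_type<2, char, int, long>::type, long>);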
libstdc++-v3/ChangeLog:
* include/bits/stl_pair.h (tuple_size_v): Define partial
specializations for std::pair.
* include/bits/utility.h (_Nth_type): Move definition here
and define primary template.
(tuple_size_v): Move definition here.
* include/std/array (tuple_size_v): Define partial
specializations for std::array.
* include/std/tuple (tuple_size_v): Move primary template to
<bits/utility.h>. Define partial specializations for
std::tuple.
(tuple_element): Change definition to use _Nth_type.
* include/std/variant (_Nth_type): Move to <bits/utility.h>.
(variant_alternative, variant): Adjust qualification of
_Nth_type.
* testsuite/20_util/tuple/element_access/get_neg.cc: Prune
additional errors from _Nth_type.
Tamar Christina [Thu, 4 Nov 2021 17:36:08 +0000 (17:36 +0000)]
AArch64: Lower intrinsics shift to GIMPLE when possible.
This lowers shifts to GIMPLE when the C interpretation of the shift operations
matches that of AArch64.
In C shifting right by BITSIZE is undefined, but the behavior is defined in
AArch64. Additionally, left shifts by a negative amount are undefined in C, but
for the register variant of the instruction (SSHL, USHL) AArch64 defines them
as right shifts.
Since we have a right shift by immediate, I rewrite those cases into right shifts.
So:
int64x1_t foo3 (int64x1_t a)
{
  return vshl_s64 (a, vdup_n_s64(-6));
}

produces:

foo3:
        sshr    d0, d0, 6
        ret

instead of:

foo3:
        mov     x0, -6
        fmov    d1, x0
        sshl    d0, d0, d1
        ret
This behavior isn't specifically mentioned for a left shift by immediate, but I
believe that is only the case because we do have a right shift by immediate but
not a right shift by register. As such I do the same for left shift by immediate.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-1.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-2.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-3.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-4.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-5.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-6.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-7.c: New test.
* gcc.target/aarch64/advsimd-intrinsics/vshl-opt-8.c: New test.
* gcc.target/aarch64/signbit-2.c: New test.
* gcc.dg/signbit-2.c: New test.
* gcc.dg/signbit-3.c: New test.
* gcc.dg/signbit-4.c: New test.
* gcc.dg/signbit-5.c: New test.
* gcc.dg/signbit-6.c: New test.
* gcc.target/aarch64/signbit-1.c: New test.
Martin Jambor [Thu, 4 Nov 2021 17:01:20 +0000 (18:01 +0100)]
ipa-sra: Improve debug info for removed parameters (PR 93385)
In spring I added code eliminating any statements using parameters
removed by IPA passes (to fix PR 93385). That patch fixed issues such
as divisions by zero that such code could perform, but it only reset
all affected debug bind statements; this one updates them with
expressions which can allow the debugger to print the removed value -
see the added test-case for an example.
Even though I originally did not want to create DEBUG_EXPR_DECLs for
intermediate values, I ended up doing so, because otherwise the code
started creating statements like
which not only is a bit scary but also gimple-fold ICEs on
it. Therefore I decided they are probably quite necessary.
The patch simply notes each removed SSA name present in a debug
statement and then works from it backwards, looking if it can
reconstruct the expression it represents (which can fail if a
non-degenerate PHI node is in the way). If it can, it populates two
hash maps with those expressions so that 1) removed assignments are
replaced with a debug bind defining a new intermediate debug_decl_expr
and 2) existing debug binds that refer to SSA names that are being
removed now refer to corresponding debug_decl_exprs.
If a removed parameter is passed to another function, the debugging
information still cannot describe its value there - see the xfailed
test in the testcase. I sort of know what needs to be done but that
needs a little bit more of IPA infrastructure on top of this patch and
so I would like to get this patch reviewed first.
Bootstrapped and tested on x86_64-linux, i686-linux and (long time
ago) on aarch64-linux. Also LTO-bootstrapped and on x86_64-linux.
Perhaps it is good to go to trunk?
Thanks,
Martin
gcc/ChangeLog:
2021-03-29 Martin Jambor <mjambor@suse.cz>
PR ipa/93385
* ipa-param-manipulation.h (class ipa_param_body_adjustments): New
members remap_with_debug_expressions, m_dead_ssa_debug_equiv,
m_dead_stmt_debug_equiv and prepare_debug_expressions. Added
parameter to mark_dead_statements.
* ipa-param-manipulation.c: Include tree-phinodes.h and cfgexpand.h.
(ipa_param_body_adjustments::mark_dead_statements): New parameter
debugstack, push into it all SSA names used in debug statements,
produce m_dead_ssa_debug_equiv mapping for the removed param.
(replace_with_mapped_expr): New function.
(ipa_param_body_adjustments::remap_with_debug_expressions): Likewise.
(ipa_param_body_adjustments::prepare_debug_expressions): Likewise.
(ipa_param_body_adjustments::common_initialization): Gather and
process SSA names which will be removed but are in debug statements. Simplify.
(ipa_param_body_adjustments::ipa_param_body_adjustments): Initialize
new members.
* tree-inline.c (remap_gimple_stmt): Create a debug bind when possible
when avoiding a copy of an unnecessary statement. Remap removed SSA
names in existing debug statements.
(tree_function_versioning): Do not create DEBUG_EXPR_DECL for removed
parameters if we have already done so.
gcc/testsuite/ChangeLog:
2021-03-29 Martin Jambor <mjambor@suse.cz>
PR ipa/93385
* gcc.dg/guality/ipa-sra-1.c: New test.
gcc/fortran/
* gfortran.texi (Interoperability with C): Copy-editing. Add
more index entries.
(Intrinsic Types): Likewise.
(Derived Types and struct): Likewise.
(Interoperable Global Variables): Likewise.
(Interoperable Subroutines and Functions): Likewise.
(Working with C Pointers): Likewise.
(Further Interoperability of Fortran with C): Likewise. Rewrite
to reflect that this is now fully supported by gfortran.
Sandra Loosemore [Fri, 29 Oct 2021 22:08:47 +0000 (15:08 -0700)]
Fortran manual: Revise introductory chapter.
Fix various bit-rot in the discussion of standards conformance, remove
material that is only of historical interest, copy-editing. Also move
discussion of preprocessing out of the introductory chapter.
gcc/fortran/
* gfortran.texi (About GNU Fortran): Consolidate material
formerly in other sections. Copy-editing.
(Preprocessing and conditional compilation): Delete, moving
most material to invoke.texi.
(GNU Fortran and G77): Delete.
(Project Status): Delete.
(Standards): Update.
(Fortran 95 status): Mention conditional compilation here.
(Fortran 2003 status): Rewrite to mention the 1 missing feature
instead of all the ones implemented.
(Fortran 2008 status): Similarly for the 2 missing features.
(Fortran 2018 status): Rewrite to reflect completion of TS29113
feature support.
* invoke.texi (Preprocessing Options): Move material formerly
in introductory chapter here.
Sandra Loosemore [Sat, 30 Oct 2021 00:16:40 +0000 (17:16 -0700)]
Fortran manual: Combine standard conformance docs in one place.
Discussion of conformance with various revisions of the
Fortran standard was split between two separate parts of the
manual. This patch moves it all to the introductory chapter.
gcc/fortran/
* gfortran.texi (Standards): Move discussion of specific
standard versions here....
(Fortran standards status): ...from here, and delete this node.
Jonathan Wright [Thu, 7 Oct 2021 15:08:33 +0000 (16:08 +0100)]
aarch64: Pass and return Neon vector-tuple types without a parallel
Neon vector-tuple types can be passed in registers on function call
and return - there is no need to generate a parallel rtx. This patch
adds cases to detect vector-tuple modes and generates an appropriate
register rtx.
This change greatly improves code generated when passing Neon vector-
tuple types between functions; many new test cases are added to
defend these improvements.
gcc/ChangeLog:
2021-10-07 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64.c (aarch64_function_value): Generate
a register rtx for Neon vector-tuple modes.
(aarch64_layout_arg): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/aarch64/vector_structure_intrinsics.c: New code
generation tests.
Jonathan Wright [Mon, 9 Aug 2021 14:26:48 +0000 (15:26 +0100)]
aarch64: Add machine modes for Neon vector-tuple types
Until now, GCC has used large integer machine modes (OI, CI and XI)
to model Neon vector-tuple types. This is suboptimal for many
reasons, the most notable are:
1) Large integer modes are opaque and modifying one vector in the
tuple requires a lot of inefficient set/get gymnastics. The
result is a lot of superfluous move instructions.
2) Large integer modes do not map well to types that are tuples of
64-bit vectors - we need additional zero-padding which again
results in superfluous move instructions.
This patch adds new machine modes that better model the C-level Neon
vector-tuple types. The approach is somewhat similar to that already
used for SVE vector-tuple types.
All of the AArch64 backend patterns and builtins that manipulate Neon
vector tuples are updated to use the new machine modes. This has the
effect of significantly reducing the amount of boiler-plate code in
the arm_neon.h header.
While this patch increases the quality of code generated in many
instances, there is still room for significant improvement - which
will be attempted in subsequent patches.
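For context, a small user-level example of the kind of code these modes model
(an ordinary Advanced SIMD intrinsics snippet compiled for AArch64, not taken
from the patch); with per-tuple machine modes the element accesses below no
longer go through opaque OImode set/get operations:

#include <arm_neon.h>

int32x4_t
sum_pairs (const int32_t *p)
{
  int32x4x2_t v = vld2q_s32 (p);            /* a tuple of two 128-bit vectors */
  return vaddq_s32 (v.val[0], v.val[1]);    /* access one vector of the tuple */
}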
gcc/ChangeLog:
2021-08-09 Jonathan Wright <jonathan.wright@arm.com>
Richard Sandiford <richard.sandiford@arm.com>
Jonathan Wright [Mon, 11 Oct 2021 22:02:16 +0000 (23:02 +0100)]
gcc/expmed.c: Ensure vector modes are tieable before extraction
Extracting a bitfield from a vector can be achieved by casting the
vector to a new type whose elements are the same size as the desired
bitfield, before generating a subreg. However, this is only an
optimization if the original vector can be accessed in the new
machine mode without first being copied - a condition denoted by the
TARGET_MODES_TIEABLE_P hook.
This patch adds a check to make sure that the vector modes are
tieable before attempting to generate a subreg. This is a necessary
prerequisite for a subsequent patch that will introduce new machine
modes for Arm Neon vector-tuple types.
gcc/ChangeLog:
2021-10-11 Jonathan Wright <jonathan.wright@arm.com>
* expmed.c (extract_bit_field_1): Ensure modes are tieable.
Jonathan Wright [Mon, 11 Oct 2021 17:37:32 +0000 (18:37 +0100)]
gcc/expr.c: Remove historic workaround for broken SIMD subreg
A long time ago, using a parallel to take a subreg of a SIMD register
was broken. This temporary fix[1] (from 2003) spilled these registers
to memory and reloaded the appropriate part to obtain the subreg.
The fix initially existed for the benefit of the PowerPC E500 - a
platform for which GCC removed support a number of years ago.
Regardless, a proper mechanism for taking a subreg of a SIMD register
exists now anyway.
This patch removes the workaround thus preventing SIMD registers
being dumped to memory unnecessarily - which sometimes can't be fixed
by later passes.
Jonathan Wright [Fri, 10 Sep 2021 15:48:02 +0000 (16:48 +0100)]
aarch64: Move Neon vector-tuple type declaration into the compiler
Declare the Neon vector-tuple types inside the compiler instead of in
the arm_neon.h header. This is a necessary first step before adding
corresponding machine modes to the AArch64 backend.
The vector-tuple types are implemented using a #pragma. This means
initialization of builtin functions that have vector-tuple types as
arguments or return values has to be delayed until the #pragma is
handled.
gcc/ChangeLog:
2021-09-10 Jonathan Wright <jonathan.wright@arm.com>
* config/aarch64/aarch64-builtins.c (aarch64_init_simd_builtins):
Factor out main loop to...
(aarch64_init_simd_builtin_functions): This new function.
(register_tuple_type): Define.
(aarch64_scalar_builtin_type_p): Define.
(handle_arm_neon_h): Define.
* config/aarch64/aarch64-c.c (aarch64_pragma_aarch64): Handle
pragma for arm_neon.h.
* config/aarch64/aarch64-protos.h (aarch64_advsimd_struct_mode_p):
Declare.
(handle_arm_neon_h): Likewise.
* config/aarch64/aarch64.c (aarch64_advsimd_struct_mode_p):
Remove static modifier.
* config/aarch64/arm_neon.h (target): Remove Neon vector
structure type definitions.