Tobias Burnus [Mon, 8 Jan 2024 14:18:10 +0000 (15:18 +0100)]
amdgcn: Add gfx1100 to new XNACK defaults in mkoffload
Commit r14-6997-g78dff4c25c1b95 added an arch-dependent
SET_XNACK_OFF vs. SET_XNACK_ANY check; that was added
between writing and committing the add-gfx1100
commit r14-7005-g52a2c659ae6c21, and I missed adding it there.
gcc/ChangeLog:
* config/gcn/mkoffload.cc (main): Handle gfx1100
when setting the default XNACK.
Tobias Burnus [Mon, 8 Jan 2024 14:12:44 +0000 (15:12 +0100)]
GCN: Add pre-initial support for gfx1100
ROCm since 5.7.1 supports gfx1100 (RDNA3) cards. This commit adds support
for it, mostly by assuming that gfx1100 behaves identically to gfx1030. Like gfx1030,
gfx1100 support is neither documented nor is its multilib built by default.
But contrary to gfx1030, gfx1100 has a known issue that causes some libraries,
including newlib, to fail to build: the sdwa variant of v_mov_b32 is not supported
by the hardware, but GCC currently generates this instruction.
This will be addressed in a later commit.
Richard Biener [Mon, 8 Jan 2024 09:48:19 +0000 (10:48 +0100)]
Clarify -mmovbe documentation
It was noticed that -mmovbe doesn't use movbe for __builtin_bswap{32,64}
when not optimizing. The following adjusts the documentation to
say that it will only be used when optimizing and that it applies to all byte
swaps, not just those carried out via builtin function calls.
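As an illustration (mine, not part of the original patch), both of the following
functions perform a byte swap that -mmovbe can implement with movbe when optimizing:

  unsigned int
  swap_builtin (unsigned int x)
  {
    return __builtin_bswap32 (x);
  }

  unsigned int
  swap_open_coded (unsigned int x)
  {
    /* Recognized as a byte swap by the bswap pass.  */
    return (x >> 24) | ((x >> 8) & 0xff00)
	   | ((x << 8) & 0xff0000) | (x << 24);
  }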
Richard Biener [Fri, 15 Dec 2023 09:32:29 +0000 (10:32 +0100)]
tree-optimization/113026 - avoid vector epilog in more cases
The following avoids creating a niter peeling epilog more consistently,
matching what peeling later uses for the skip_vector condition, in
particular when versioning is required which then also ensures the
vector loop is entered unless the epilog is vectorized. This should
ideally match LOOP_VINFO_VERSIONING_THRESHOLD, which is only computed
later; some refactoring could improve that matching.
The patch also makes sure to adjust the upper bound of the epilogues
when we do not have a skip edge around the vector loop.
PR tree-optimization/113026
* tree-vect-loop.cc (vect_need_peeling_or_partial_vectors_p):
Avoid an epilog in more cases.
* tree-vect-loop-manip.cc (vect_do_peeling): Adjust the
epilogues niter upper bounds and estimates.
* gcc.dg/torture/pr113026-1.c: New testcase.
* gcc.dg/torture/pr113026-2.c: Likewise.
Jakub Jelinek [Mon, 8 Jan 2024 12:59:15 +0000 (13:59 +0100)]
gimplify: Fix ICE in recalculate_side_effects [PR113228]
The following testcase ICEs during regimplification since the addition of
(convert (eqne zero_one_valued_p@0 INTEGER_CST@1))
simplification. That simplification is novel in the sense that in
gimplify_expr it can turn an expression (comparison in particular) into
an SSA_NAME. Normally, when gimplify_expr sees an SSA_NAME to begin with, it does
    case SSA_NAME:
      /* Allow callbacks into the gimplifier during optimization.  */
      ret = GS_ALL_DONE;
      break;
and doesn't try to recalculate side effects because of that, but in this
case gimplify_expr normally enters the:
    default:
      switch (TREE_CODE_CLASS (TREE_CODE (*expr_p)))
	{
	case tcc_comparison:
then does
      *expr_p = gimple_boolify (*expr_p);
and then
      *expr_p = fold_convert_loc (input_location,
				  org_type, *expr_p);
which with this new match.pd simplification turns that tcc_comparison
class into an SSA_NAME.  Unlike the outer SSA_NAME handling though, this
falls through into
      recalculate_side_effects (*expr_p);
    dont_recalculate:
      break;
but unfortunately recalculate_side_effects doesn't handle SSA_NAME and ICEs
on it.
SSA_NAMEs never have TREE_SIDE_EFFECTS set, so the following
patch fixes this by handling them similarly to the tcc_constant case.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113228
* gimplify.cc (recalculate_side_effects): Do nothing for SSA_NAMEs.
Jakub Jelinek [Mon, 8 Jan 2024 12:58:28 +0000 (13:58 +0100)]
lower-bitint: Fix up lowering of huge _BitInt 0 PHI args [PR113120]
The PHI argument expansion of INTEGER_CSTs where bitint_min_cst_precision
returns significantly smaller precision than the PHI result precision is
optimized by loading the much smaller constant (if any) from memory and
then either setting the remaining limbs to {} or calling memset with -1.
The case where no constant is loaded (i.e. c == NULL) is when the
INTEGER_CST is 0 or all_ones - in that case we can just set all the limbs
to {} or call memset with -1 on everything.
While that is what the code was already doing for the all-ones extension
case, I missed one spot in the zero extension case: when constructing the
offset of the MEM_REF lhs of the = {} store, it was unconditionally using
the byte size of c, which obviously doesn't work if c is NULL.  In that case
we want to use a zero offset.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113120
* gimple-lower-bitint.cc (gimple_lower_bitint): Fix handling of very
large _BitInt zero INTEGER_CST PHI argument.
Jakub Jelinek [Mon, 8 Jan 2024 12:57:26 +0000 (13:57 +0100)]
lower-bitint: Punt .*_OVERFLOW optimization if cast from IMAGPART_EXPR appears before REALPART_EXPR [PR113119]
_BitInt lowering for .{ADD,SUB,MUL}_OVERFLOW calls which have both
REALPART_EXPR and IMAGPART_EXPR used and have a cast from the IMAGPART_EXPR
to a boolean or normal integral type lowers them at the point of
the REALPART_EXPR statement (which is especially needed if the lhs of
the call is complex with large/huge _BitInt element type); we emit the
stmt to set the lhs of the cast at the same spot as well.
Normally, the lowering of __builtin_{add,sub,mul}_overflow arranges
the REALPART_EXPR to come before IMAGPART_EXPR, followed by cast from that,
but as the testcase shows, a redundant __builtin_*_overflow call and VN
can reorder those and we then ICE because the def-stmt of the former cast
from IMAGPART_EXPR may appear after its uses.
We already check that all of REALPART_EXPR, IMAGPART_EXPR and the cast
from the latter appear in the same bb as the .{ADD,SUB,MUL}_OVERFLOW call
in the optimization; the following patch just extends that check to make sure the
cast appears after the REALPART_EXPR.  If not, we punt on the optimization and
expand it as a store of a complex _BitInt at the location of the ifn call.
Only the testcase in the testsuite is changed by the patch, all other
__builtin_*_overflow* calls in the bitint* tests (and there are quite a few)
have REALPART_EXPR first.
2024-01-08 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/113119
* gimple-lower-bitint.cc (optimizable_arith_overflow): Punt if
both REALPART_EXPR and cast from IMAGPART_EXPR appear, but cast
is before REALPART_EXPR.
AVR: PR target/112952: Fix attribute "address", "io" and "io_low"
so they work with all combinations of -f[no-]data-sections -f[no-]common.
The patch also improves some diagnostics and adds additional checks; for
example, these attributes may only be applied to variables in static storage.
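For reference, a sketch of how these attributes are typically used on variables in
static storage (the names and addresses below are made up for illustration):

  /* Made-up addresses, for illustration only.  */
  volatile char my_io_reg  __attribute__((io (0x3f)));          /* I/O address space */
  volatile char my_low_reg __attribute__((io_low (0x1f)));      /* low I/O, usable with cbi/sbi */
  volatile char my_ram_var __attribute__((address (0x800100))); /* absolute RAM address */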
gcc/
PR target/112952
* config/avr/avr.cc (avr_handle_addr_attribute): Also print valid
range when diagnosing attribute "io" and "io_low" are out of range.
(avr_eval_addr_attrib): Don't ICE on empty address at that place.
(avr_insert_attributes): Reject if attribute "address", "io" or "io_low"
in contexts other than static storage.
(avr_asm_output_aligned_decl_common): Move output of decls with
attribute "address", "io", and "io_low" to...
(avr_output_addr_attrib): ...this new function.
(avr_asm_asm_output_aligned_bss): Remove output for decls with
attribute "address", "io", and "io_low".
(avr_encode_section_info): Rectify handling of decls with attribute
"address", "io", and "io_low".
gcc/testsuite/
PR target/112952
* gcc.target/avr/attribute-io.h: New file.
* gcc.target/avr/pr112952-0.c: New test.
* gcc.target/avr/pr112952-1.c: New test.
* gcc.target/avr/pr112952-2.c: New test.
* gcc.target/avr/pr112952-3.c: New test.
Andrew Stubbs [Wed, 3 Jan 2024 16:53:52 +0000 (16:53 +0000)]
amdgcn: Match new XNACK defaults in mkoffload
The patch that disabled XNACK by default for ISAs other than gfx90a was missing
the matching mkoffload changes.  This patch should fix offloading.
gcc/ChangeLog:
* config/gcn/mkoffload.cc (TEST_XNACK_UNSET): New.
(elf_flags): Remove XNACK from the default value.
(main): Set a default XNACK according to the arch.
Andrew Stubbs [Wed, 3 Jan 2024 16:18:43 +0000 (16:18 +0000)]
amdgcn: Don't double-count AVGPRs
CDNA2 devices have VGPRs and AVGPRs combined into a single hardware register
file (they're separate in CDNA1). I originally thought they were counted
separately in the vgpr_count and agpr_count metadata fields, and therefore
mkoffload had to account for this when passing the values to libgomp. However,
that wasn't the case, and this code should have been removed when I corrected
the calculations in gcn.cc. Fixing the error now.
Jonathan Wakely [Sun, 7 Jan 2024 23:14:31 +0000 (23:14 +0000)]
libstdc++: Implement P2918R0 "Runtime format strings II" for C++26
This adds std::runtime_format for C++26. These new overloaded functions
enhance the std::format API so that it isn't necessary to use the less
ergonomic std::vformat and std::make_format_args (which are meant to be
implementation details). This was approved in Kona 2023 for C++26.
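A usage sketch (my example, not from the patch; assumes -std=c++26 with this change):

  #include <format>
  #include <string>
  #include <string_view>

  std::string
  describe (std::string_view fmt)
  {
    // The format string is only checked at run time, without needing
    // std::vformat/std::make_format_args.
    return std::format (std::runtime_format (fmt), 42);
  }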
libstdc++-v3/ChangeLog:
* include/std/format (__format::_Runtime_format_string): Define
new class template.
(basic_format_string): Add non-consteval constructor for runtime
format strings.
(runtime_format): Define new function for C++26.
* testsuite/std/format/runtime_format.cc: New test.
Jonathan Wakely [Sun, 7 Jan 2024 22:21:08 +0000 (22:21 +0000)]
libstdc++: Implement P2905R2 "Runtime format strings" for C++20
This change makes std::make_format_args refuse to create dangling
references to temporaries. This makes the std::vformat API safer. This
was approved in Kona 2023 as a DR for C++20 so the change is implemented
unconditionally.
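An illustrative consequence of the change (my example, not from the patch):

  #include <format>
  #include <string>

  std::string
  demo ()
  {
    int i = 42;
    // std::make_format_args (i + 1) is now ill-formed: arguments must be
    // lvalues, so dangling references to temporaries cannot be created.
    return std::vformat ("{}", std::make_format_args (i));
  }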
libstdc++-v3/ChangeLog:
* include/bits/chrono_io.h (__formatter_chrono): Always use
lvalue arguments to make_format_args.
* include/std/format (make_format_args): Change parameter pack
from forwarding references to lvalue references. Remove use of
remove_reference_t which is now unnecessary.
(format_to, formatted_size): Remove incorrect forwarding of
arguments.
* include/std/ostream (print): Remove forwarding of arguments.
* include/std/print (print): Likewise.
* testsuite/20_util/duration/io.cc: Use lvalues as arguments to
make_format_args.
* testsuite/std/format/arguments/args.cc: Likewise.
* testsuite/std/format/arguments/lwg3810.cc: Likewise.
* testsuite/std/format/functions/format.cc: Likewise.
* testsuite/std/format/functions/vformat_to.cc: Likewise.
* testsuite/std/format/string.cc: Likewise.
* testsuite/std/time/day/io.cc: Likewise.
* testsuite/std/time/month/io.cc: Likewise.
* testsuite/std/time/weekday/io.cc: Likewise.
* testsuite/std/time/year/io.cc: Likewise.
* testsuite/std/time/year_month_day/io.cc: Likewise.
* testsuite/std/format/arguments/args_neg.cc: New test.
Jonathan Wakely [Sat, 16 Dec 2023 23:30:20 +0000 (23:30 +0000)]
libstdc++: Add Unicode-aware width estimation for std::format
This implements the requirements in the following proposals, which
dictate how std::format deals with non-ASCII strings:
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p1868r1.html
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2572r1.html
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p2675r1.pdf
There are two parts to this. The width estimation for strings must only
count the width of the first character in an extended grapheme cluster.
That requires implementing the algorithm for detecting cluster breaks,
which requires a number of lookup tables of the grapheme cluster break
properties (and Indic_Conjunct_Break and Extended_Pictographic
properties) of every code point. Additionally, some characters have a
field width of 2, which requires another lookup table of field widths
for every code point. The tables added in this commit do not contain
entries for every code point from 0 to 0x10FFFF as that would be very
inefficient and use too much memory. Instead the tables only contain the
code points that form an "edge" for a property, omitting all the code
points that have the same property as the preceding one. We can use a
binary search to find the closest code point in the table that is not
greater than the one we're looking for.
The tables are generated by a new Python script added to the
contrib/unicode directory, and a new data file downloaded from the
Unicode Consortium website.
The rules for extended grapheme cluster breaking are implemented for the
latest Unicode standard, version 15.1.0.
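A minimal sketch of the "edge table" lookup described above, with a made-up
three-entry table; the real generated tables live in <bits/unicode-data.h> and
cover all the properties:

  #include <algorithm>
  #include <iterator>

  struct width_edge { char32_t code; int width; };

  // Only code points where the width changes are listed (illustrative values).
  constexpr width_edge width_edges[] = {
    { U'\0',  1 },   // width 1 from U+0000
    { 0x1100, 2 },   // width 2 starts here
    { 0x1160, 1 },   // back to width 1
  };

  constexpr int
  field_width (char32_t c)
  {
    // Binary search for the closest entry not greater than c.
    auto pos = std::upper_bound (std::begin (width_edges), std::end (width_edges), c,
				 [] (char32_t v, const width_edge& e)
				 { return v < e.code; });
    return pos[-1].width;
  }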
libstdc++-v3/ChangeLog:
* include/Makefile.am: Add new headers.
* include/Makefile.in: Regenerate.
* include/bits/unicode.h: New file.
* include/bits/unicode-data.h: New file.
* include/std/format: Include <bits/unicode.h>.
(__literal_encoding_is_utf8): Move to <bits/unicode.h>.
(_Spec::_M_fill): Change type to char32_t.
(_Spec::_M_parse_fill_and_align): Read a Unicode scalar value
instead of a single character.
(__write_padded): Change __fill_char parameter to char32_t and
encode it into the output.
(__formatter_str::format): Use new __unicode::__field_width and
__unicode::__truncate functions.
* include/std/ostream: Adjust namespace qualification for
__literal_encoding_is_utf8.
* include/std/print: Likewise.
* src/c++23/print.cc: Add [[unlikely]] attribute to error path.
* testsuite/ext/unicode/view.cc: New test.
* testsuite/std/format/functions/format.cc: Add missing examples
from the standard demonstrating alignment with non-ASCII
characters. Add examples checking correct handling of extended
grapheme clusters.
contrib/ChangeLog:
* unicode/README: Add notes about generating libstdc++ tables.
* unicode/GraphemeBreakProperty.txt: New file.
* unicode/emoji-data.txt: New file.
* unicode/gen_libstdcxx_unicode_data.py: New file.
Jonathan Wakely [Wed, 3 Jan 2024 15:35:50 +0000 (15:35 +0000)]
libstdc++: Implement P2909R4 ("Dude, where's my char?") for C++20
This change ensures that char and wchar_t arguments are formatted
consistently when using integer presentation types. This avoids
non-portable std::format output that depends on whether char and wchar_t
happen to be signed or unsigned on the target. Formatting '\xff' as an
integer will now always format 255 and not sometimes -1. This was
approved in Kona 2023 as a DR for C++20 so the change is implemented
unconditionally.
Also make character formatters check for _Pres_c explicitly and call
_M_format_character directly.  This avoids the overhead of calling format
and _S_to_character and then calling _M_format_character anyway.
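For example (my illustration of the new behaviour):

  #include <cassert>
  #include <format>

  int
  main ()
  {
    // Independent of whether char is signed on the target:
    assert (std::format ("{:d}", '\xff') == "255");  // previously could be "-1"
  }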
libstdc++-v3/ChangeLog:
* include/bits/version.def (format_uchar): Define.
* include/bits/version.h: Regenerate.
* include/std/format (formatter<C, C>::format): Check for
_Pres_c and call _M_format_character directly. Cast C to its
unsigned equivalent for formatting as an integer.
(formatter<char, wchar_t>::format): Likewise.
(basic_format_arg(T&)): Store char arguments as unsigned char
for formatting to a wide string.
* testsuite/std/format/functions/format.cc: Adjust test. Check
formatting of
Feng Wang [Fri, 5 Jan 2024 09:23:44 +0000 (09:23 +0000)]
RISC-V: Fix avl-type operand index error for ZVBC
This patch fixes the RTL checking error for the crypto vector instructions.  The root
cause is that the avl-type operand index of the zvbc instructions was wrong; it should
be operand[8], not operand[5].
gcc/ChangeLog:
* config/riscv/vector.md: Modify avl_type operand index of zvbc ins.
AVR: Fix some test options. Skip tests with address-space on Reduced Tiny.
gcc/testsuite/
* gcc.target/avr/lra-cpymem_qi.c: Remove duplicate -mmcu=.
* gcc.target/avr/lra-elim.c: Same.
* gcc.target/avr/pr112830.c: Skip for Reduced Tiny.
* gcc.target/avr/pr46779-1.c: Same.
* gcc.target/avr/pr46779-2.c: Same.
* gcc.target/avr/pr86869.c: Skip for Reduced Tiny and add -std=gnu99
for GNU-C due to address spaces.
* gcc.target/avr/pr89270.c: Same.
* gcc.target/avr/torture/builtins-2-flash.c: Only test address
space __flash1 if we have it.
* gcc.target/avr/torture/addr-space-1-1.c: Same.
* gcc.target/avr/torture/addr-space-2-1.c: Same.
Roger Sayle [Sun, 7 Jan 2024 17:42:00 +0000 (17:42 +0000)]
i386: PR target/113231: Improved costs in Scalar-To-Vector (STV) pass.
This patch improves the cost/gain calculation used during the i386 backend's
SImode/DImode scalar-to-vector (STV) conversion pass. The current code
handles loads and stores, but doesn't consider that converting other
scalar operations with a memory destination requires an explicit load
before and an explicit store after the vector equivalent.
To ease the review, the significant change looks like:
   /* For operations on memory operands, include the overhead
      of explicit load and store instructions.  */
   if (MEM_P (dst))
     igain += optimize_insn_for_size_p ()
	      ? -COSTS_N_BYTES (8)
	      : (m * (ix86_cost->int_load[2]
		      + ix86_cost->int_store[2])
		 - (ix86_cost->sse_load[sse_cost_idx] +
		    ix86_cost->sse_store[sse_cost_idx]));
however the patch itself is complicated by a change in indentation
which leads to a number of lines with only whitespace changes.
For architectures where integer load/store costs are the same as
vector load/store costs, there should be no change without -Os/-Oz.
2024-01-07 Roger Sayle <roger@nextmovesoftware.com>
Uros Bizjak <ubizjak@gmail.com>
gcc/ChangeLog
PR target/113231
* config/i386/i386-features.cc (compute_convert_gain): Include
the overhead of explicit load and store (movd) instructions when
converting non-store scalar operations with memory destinations.
Various indentation whitespace fixes.
gcc/testsuite/ChangeLog
PR target/113231
* gcc.target/i386/pr113231.c: New test case.
gcc/testsuite/
PR testsuite/52641
* gcc.dg/torture/pr110838.c: Use proper shift offset to get MSB of int.
* gcc.dg/torture/pr112282.c: Use at least 32 bits for :20 bit-fields.
* gcc.dg/tree-ssa/bitcmp-5.c: Use integral type with 32 bits or more.
* gcc.dg/tree-ssa/bitcmp-6.c: Same.
* gcc.dg/tree-ssa/cltz-complement-max.c: Same.
* gcc.dg/tree-ssa/cltz-max.c: Same.
* gcc.dg/tree-ssa/if-to-switch-8.c: Use literals that fit int.
* gcc.dg/tree-ssa/if-to-switch-9.c [avr]: Set case-values-threshold=3.
* gcc.dg/tree-ssa/negneg-3.c: Discriminate [not] large_double.
* gcc.dg/tree-ssa/phi-opt-25b.c: Use types of correct widths for
__builtin_bswapN.
* gcc.dg/tree-ssa/pr55177-1.c: Same.
* gcc.dg/tree-ssa/popcount-max.c: Use int32_t where required.
* gcc.dg/tree-ssa/pr111583-1.c: Use intptr_t as needed.
* gcc.dg/tree-ssa/pr111583-2.c: Same.
Nathaniel Shead [Tue, 2 Jan 2024 22:28:43 +0000 (09:28 +1100)]
c++: Fix ICE when writing nontrivial variable initializers
The attached testcase Patrick found in PR c++/112899 ICEs because it is
attempting to write a variable initializer that is no longer in the
static_aggregates map.
The issue is that, for non-header modules, the loop in
c_parse_final_cleanups prunes the static_aggregates list, which means
that by the time we get to emitting module information those
initialisers have been lost.
However, we don't actually need to write non-trivial initialisers for
non-header modules, because they've already been emitted as part of the
module TU itself. Instead let's just only write the initializers from
header modules (which skipped writing them in c_parse_final_cleanups).
gcc/cp/ChangeLog:
* module.cc (trees_out::write_var_def): Only write initializers
in header modules.
gcc/testsuite/ChangeLog:
* g++.dg/modules/init-5_a.C: New test.
* g++.dg/modules/init-5_b.C: New test.
Nathaniel Shead [Wed, 3 Jan 2024 04:29:51 +0000 (15:29 +1100)]
c++: Export usings referring to global module fragment [PR109679]
This patch stops 'add_binding_entity' from ignoring all names in the
global module fragment, since they should still be exported if named
in an exported using-declaration.
PR c++/109679
gcc/cp/ChangeLog:
* module.cc (depset::hash::add_binding_entity): Don't skip names
in the GMF if they've been exported with a using declaration.
gcc/testsuite/ChangeLog:
* g++.dg/modules/using-11.h: New test.
* g++.dg/modules/using-11_a.C: New test.
* g++.dg/modules/using-11_b.C: New test.
Nathaniel Shead [Fri, 24 Nov 2023 05:26:43 +0000 (16:26 +1100)]
c++: Follow module grammar more closely [PR110808]
This patch cleans up the parsing of module-declarations and
import-declarations to more closely follow the grammar defined by the
standard.
For instance, currently we allow declarations like 'import A:B', even
from an unrelated source file (not part of module A), which causes
errors in merging declarations. However, the syntax in [module.import]
doesn't even allow this form of import, so this patch prevents this from
parsing at all and avoids the error that way.
Additionally, we sometimes allow statements like 'import :X' or
'module :X' even when not in a named module, and this causes segfaults,
so we disallow this too.
PR c++/110808
gcc/cp/ChangeLog:
* parser.cc (cp_parser_module_name): Rewrite to handle
module-names and module-partitions independently.
(cp_parser_module_partition): New function.
(cp_parser_module_declaration): Parse module partitions
explicitly. Don't change state if parsing module decl failed.
(cp_parser_import_declaration): Handle different kinds of
import-declarations locally.
gcc/testsuite/ChangeLog:
* g++.dg/modules/part-hdr-1_c.C: Fix syntax.
* g++.dg/modules/part-mac-1_c.C: Likewise.
* g++.dg/modules/mod-invalid-1.C: New test.
* g++.dg/modules/part-8_a.C: New test.
* g++.dg/modules/part-8_b.C: New test.
* g++.dg/modules/part-8_c.C: New test.
Jonathan Wakely [Wed, 13 Dec 2023 09:45:44 +0000 (09:45 +0000)]
libstdc++: Avoid conflicting declaration in eh_call.cc [PR112997]
r14-1527-g2415024e0f81f8 changed the parameter of the
__cxa_call_terminate definition, but there's also a declaration in
unwind-cxx.h which should have been changed too.
libstdc++-v3/ChangeLog:
PR libstdc++/112997
* libsupc++/unwind-cxx.h (__cxa_call_terminate): Change first
parameter to void*.
This reduces the overhead of using std::is_trivially_destructible_v and
as a result fixes some recent regressions seen with a non-default
GLIBCXX_TESTSUITE_STDS env var:
FAIL: 20_util/variant/87619.cc -std=gnu++20 (test for excess errors)
FAIL: 20_util/variant/87619.cc -std=gnu++23 (test for excess errors)
FAIL: 20_util/variant/87619.cc -std=gnu++26 (test for excess errors)
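The shape of the change, as a standalone sketch (assuming the
__is_trivially_destructible built-in is available; the real header guards this
on concepts support):

  namespace sketch
  {
    template<typename T>
      inline constexpr bool is_trivially_destructible_v
	= __is_trivially_destructible(T);
  }

  static_assert (sketch::is_trivially_destructible_v<int>);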
libstdc++-v3/ChangeLog:
* include/std/type_traits (is_trivially_destructible_v): Use
built-in directly when concepts are supported.
* testsuite/20_util/is_trivially_destructible/value_v.cc: New
test.
1). We have vashl_optab, vashr_optab and vlshr_optab, which vectorize shifts with a vector
shift amount, that is, vectorization of 'a[i] >> x[i]' where the shift amount is loop-variant.
2). We also have ashl_optab, ashr_optab and lshr_optab, which vectorize shifts with a scalar
shift amount, that is, vectorization of 'a[i] >> x' where the shift amount is loop-invariant.
For the 2) case, we don't need to allocate a vector register group for the shift amount.
So consider the following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int x,
int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] >> x;
int tmp2 = tmp * b[i];
c[i] = tmp2 * b[i];
d[i] = tmp * tmp2 * b[i] >> x;
}
}
Before this patch we choose LMUL = 4; after this patch we can choose LMUL = 8:
Tested on both RV32 and RV64, no regressions.  Ok for trunk?
Note that we will apply the same heuristic for vadd.vx, etc. when the late-combine pass from
Richard Sandiford is committed (since we need the late-combine pass to do the vv->vx
transformation for vadd).
Mark Wielaard [Sat, 6 Jan 2024 00:25:01 +0000 (01:25 +0100)]
Regenerate libgomp/configure for copyright year update
commit a945c346f57ba40fc80c14ac59be0d43624e559d updated
libgomp/plugin/configfrag.ac but didn't regenerate/update
libgomp/configure which includes that configfrag.
aarch64: Extend VECT_COMPARE_COSTS to !SVE [PR113104]
When SVE is enabled, we try vectorising with multiple different SVE and
Advanced SIMD approaches and use the cost model to pick the best one.
Until now, we've not done that for Advanced SIMD, since "the first mode
that works should always be the best".
The testcase is a counterexample. Each iteration of the scalar loop
vectorises naturally with 64-bit input vectors and 128-bit output
vectors. We do try that for SVE, and choose it as the best approach.
But the first approach we try is instead to use:
- a vectorisation factor of 2
- 1 128-bit vector for the inputs
- 2 128-bit vectors for the outputs
But since the stride is variable, the cost of marshalling the input
vector from two iterations outweighs the benefit of doing two iterations
at once.
This patch therefore generalises aarch64-sve-compare-costs to
aarch64-vect-compare-costs and applies it to non-SVE compilations.
gcc/
PR target/113104
* doc/invoke.texi (aarch64-sve-compare-costs): Replace with...
(aarch64-vect-compare-costs): ...this.
* config/aarch64/aarch64.opt (-param=aarch64-sve-compare-costs=):
Replace with...
(-param=aarch64-vect-compare-costs=): ...this new param.
* config/aarch64/aarch64.cc (aarch64_override_options_internal):
Don't disable it when vectorizing for Advanced SIMD only.
(aarch64_autovectorize_vector_modes): Apply VECT_COMPARE_COSTS
whenever aarch64_vect_compare_costs is true.
Jonathan Wakely [Fri, 5 Jan 2024 13:40:06 +0000 (13:40 +0000)]
libstdc++: Avoid overflow when appending to std::filesystem::path
This prevents a std::filesystem::path from exceeding INT_MAX/4
components (which is unlikely to ever be a problem except on 16-bit
targets). That limit ensures that the capacity*1.5 calculation doesn't
overflow. We should also check that we don't exceed SIZE_MAX when
calculating how many bytes to allocate. That only needs to be checked
when int is at least as large as size_t, because otherwise we know that
the product INT_MAX/4 * sizeof(value_type) will fit in SIZE_MAX. For
targets where size_t is twice as wide as int this obviously holds. For
msp430-elf we have 16-bit int and 20-bit size_t, so the condition holds
as long as sizeof(value_type) fits in 7 bits, which it does.
We can also remove some floating-point arithmetic in operator/= which
ensures exponential growth of the buffer. That's redundant because
path::_List::reserve does that anyway (and does so more efficiently
since the commit immediately before this one).
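A sketch of the kind of checks described (illustrative only; not the actual
path::_List::reserve code, and elem_size is assumed non-zero):

  #include <climits>
  #include <cstddef>
  #include <cstdint>
  #include <stdexcept>

  std::size_t
  checked_bytes (std::size_t items, std::size_t elem_size)
  {
    constexpr std::size_t max_items = INT_MAX / 4;
    if (items > max_items)
      throw std::length_error ("filesystem path too long");
    // Overflowing size_t is only possible when int is at least as wide as size_t.
    if constexpr (sizeof (int) >= sizeof (std::size_t))
      if (items > SIZE_MAX / elem_size)
	throw std::length_error ("filesystem path too long");
    return items * elem_size;
  }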
libstdc++-v3/ChangeLog:
* src/c++17/fs_path.cc (path::_List::reserve): Limit maximum
size and check for overflows in arithmetic.
(path::operator/=(const path&)): Remove redundant exponential
growth calculation.
Lulu Cheng [Thu, 4 Jan 2024 02:37:53 +0000 (10:37 +0800)]
LoongArch: Fixed the problem of incorrect judgment of the immediate field of the [x]vld/[x]vst instruction.
The [x]vld/[x]vst instructions are defined as follows:
  [x]vld/[x]vst {x/v}d, rj, si12
Without this modification, the immediate field of [x]vld/[x]vst is treated as being
between 10 and 14 bits depending on the type.  Because loongarch_valid_offset_p
restricts the immediate field first, this causes no errors, but in some cases
redundant instructions are generated; see the test cases.
Now modify it according to the description in the instruction manual.
gcc/ChangeLog:
* config/loongarch/lasx.md (lasx_mxld_<lasxfmt_f>):
Modify the method of determining the memory offset of [x]vld/[x]vst.
(lasx_mxst_<lasxfmt_f>): Likewise.
* config/loongarch/loongarch.cc (loongarch_valid_offset_p): Delete.
(loongarch_address_insns): Likewise.
* config/loongarch/lsx.md (lsx_ld_<lsxfmt_f>): Likewise.
(lsx_st_<lsxfmt_f>): Likewise.
* config/loongarch/predicates.md (aq10b_operand): Likewise.
(aq10h_operand): Likewise.
(aq10w_operand): Likewise.
(aq10d_operand): Likewise.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vect-ld-st-imm12.c: New test.
chenxiaolong [Fri, 5 Jan 2024 03:43:29 +0000 (11:43 +0800)]
LoongArch: testsuite:Give up the detection of the gcc.dg/fma-{3, 4, 6, 7}.c file.
On the LoongArch architecture, these four test cases need to be skipped.
There are two situations:
1. The fma-{3,6}.c tests look for the computation of c-a*b, but on the
LoongArch architecture the existing fnmsub instruction computes the
value of -(a*b - c);
2. The fma-{4,7}.c tests look for the computation of -(a*b)-c, but on the
LoongArch architecture the existing fnmadd instruction computes the
value of -(a*b + c);
In both of the above cases the results can differ in the sign of zero.
gcc/testsuite/ChangeLog
* gcc.dg/fma-3.c: The intermediate file corresponding to the
function does not produce the corresponding FNMA symbol, so the test
rules should be skipped when testing.
* gcc.dg/fma-4.c: The intermediate file corresponding to the
function does not produce the corresponding FNMS symbol, so skip the
test rules when testing.
* gcc.dg/fma-6.c: The cause is the same as fma-3.c.
* gcc.dg/fma-7.c: The cause is the same as fma-4.c
On the LoongArch architecture, the 128-bit vector-width-*hi* instruction templates are
not added to the GCC back end because they cause a performance loss, so the "-mlasx"
compile option has to be added in order to use the 256-bit vectorization functions in
the test files.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/bb-slp-pattern-1.c: If you are testing on the
LoongArch architecture, you need to add the "-mlasx" compilation
option to generate vectorized code.
* gcc.dg/vect/slp-widen-mult-half.c: Ditto.
* gcc.dg/vect/vect-widen-mult-const-s16.c: Ditto.
* gcc.dg/vect/vect-widen-mult-const-u16.c: Ditto.
* gcc.dg/vect/vect-widen-mult-half-u8.c: Ditto.
* gcc.dg/vect/vect-widen-mult-half.c: Ditto.
* gcc.dg/vect/vect-widen-mult-u16.c: Ditto.
* gcc.dg/vect/vect-widen-mult-u8-s16-s32.c: Ditto.
* gcc.dg/vect/vect-widen-mult-u8-u32.c: Ditto.
* gcc.dg/vect/vect-widen-mult-u8.c: Ditto.
chenxiaolong [Fri, 5 Jan 2024 03:43:27 +0000 (11:43 +0800)]
LoongArch: testsuite:Delete the default run behavior in pr60510.f.
When binutils does not support the vector instruction set, the test program fails
because the vector instructions are not recognized at the assembly stage.  Therefore,
the default run behavior of the test is removed, so that its behavior depends on
whether the toolchain supports vectorization.
gcc/testsuite/ChangeLog:
* gfortran.dg/vect/pr60510.f: Delete the default behavior of the
program.
chenxiaolong [Fri, 5 Jan 2024 03:43:26 +0000 (11:43 +0800)]
LoongArch: testsuite:Fix FAIL in file bind_c_array_params_2.f90.
On the LoongArch architecture, the bind_c_array_params_2.f90 test fails because
the regular expression that detects the function call does not match the assembly
generated there, such as bl %plt(myBindC).
gcc/testsuite/ChangeLog:
* gfortran.dg/bind_c_array_params_2.f90: Add code test rules to
support testing of the loongArch architecture.
chenxiaolong [Fri, 5 Jan 2024 03:43:24 +0000 (11:43 +0800)]
LoongArch: testsuite:Modify the test behavior of the vect-bic-bitmask-{12, 23}.c file.
Before this change, dg-do was set to assemble in vect-bic-bitmask-{12,23}.c.
However, when binutils does not support the vector instruction set, it fails to
recognize the vector instructions and FAILs appear at the assembly stage.  So set
the tests' dg-do to compile.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/vect-bic-bitmask-12.c: Change the default
setting of assembly to compile.
* gcc.dg/vect/vect-bic-bitmask-23.c: Ditto.
Alex Coplan [Fri, 5 Jan 2024 12:25:00 +0000 (12:25 +0000)]
aarch64: Further fix for throwing insns in ldp/stp pass [PR113217]
As the PR shows, the fix in r14-6916-g057dc349021660c40699fb5c98fd9cac8e168653 was not complete.
That fix was enough to stop us trying to move throwing accesses above
nondebug insns, but due to this code in try_fuse_pair:
// Placement strategy: push loads down and pull stores up, this should
// help register pressure by reducing live ranges.
if (load_p)
range.first = range.last;
else
range.last = range.first;
we would still try to move stores up above any debug insns that occurred
immediately after the previous nondebug insn.  This patch fixes that by
narrowing the move range, in the case that the second access is throwing,
to exactly the range of that insn.
Note that we still need the fix to latest_hazard_before mentioned above
so as to ensure we select a suitable base and reject pairs if it isn't
viable to form the pair at the end of the BB.
gcc/ChangeLog:
PR target/113217
* config/aarch64/aarch64-ldp-fusion.cc
(ldp_bb_info::try_fuse_pair): If the second access can throw,
narrow the move range to exactly that insn.
GCC can emit code between the function label and the .LASANPC label,
making the latter unaligned. Some architectures cannot load unaligned
labels directly and require literal pool entries, which is inefficient.
Move the invocation of asan_function_start to
ASM_OUTPUT_FUNCTION_LABEL, which guarantees that no additional code is
emitted. This allows setting the .LASANPC label alignment to the
respective function alignment.
Implement ASM_DECLARE_FUNCTION_NAME using ASM_OUTPUT_FUNCTION_LABEL
gccint recommends using ASM_OUTPUT_FUNCTION_LABEL in
ASM_DECLARE_FUNCTION_NAME, but many implementations use
ASM_OUTPUT_LABEL instead. It's inconsistent and prevents changes to
ASM_OUTPUT_FUNCTION_LABEL from affecting the respective targets.
The current constexpr implementation of std::char_traits<C>::move relies
on being able to compare the pointer parameters, which is not allowed
for unrelated pointers. We can use __builtin_constant_p to determine
whether it's safe to compare the pointers directly. If not, then we know
the ranges must be disjoint and so we can use char_traits<C>::copy to
copy forwards from the first character to the last. If the pointers can
be compared directly, then we can simplify the condition for copying
backwards to just two pointer comparisons.
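A rough sketch of the idea (my approximation, not the actual libstdc++ code):

  #include <cstddef>
  #include <type_traits>

  constexpr char*
  traits_move (char* d, const char* s, std::size_t n)
  {
    if (n == 0)
      return d;
    if (!std::is_constant_evaluated ())
      return static_cast<char*> (__builtin_memmove (d, s, n));
    // The comparison is only evaluated when __builtin_constant_p says it
    // folds, i.e. when the pointers are related and comparable.
    if (__builtin_constant_p (s < d) && s < d && d < s + n)
      for (std::size_t i = n; i > 0; --i)     // overlapping tail: copy backwards
	d[i - 1] = s[i - 1];
    else
      for (std::size_t i = 0; i < n; ++i)     // disjoint ranges: copy forwards
	d[i] = s[i];
    return d;
  }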
libstdc++-v3/ChangeLog:
PR libstdc++/113200
* include/bits/char_traits.h (__gnu_cxx::char_traits::move): Use
__builtin_constant_p to check for unrelated pointers that cannot
be compared during constant evaluation.
* testsuite/21_strings/char_traits/requirements/113200.cc: New
test.
Cassio Neri [Sun, 10 Dec 2023 11:31:31 +0000 (11:31 +0000)]
libstdc++: Remove UB from month and weekday additions and subtractions.
The following invoke signed integer overflow (UB) [1]:
month + months{MAX} // where MAX is the maximum value of months::rep
month + months{MIN} // where MIN is the minimum value of months::rep
month - months{MIN} // where MIN is the minimum value of months::rep
weekday + days {MAX} // where MAX is the maximum value of days::rep
weekday - days {MIN} // where MIN is the minimum value of days::rep
For the additions to MAX, the crux of the problem is that, in libstdc++,
months::rep and days::rep are int64_t. Other implementations use int32_t, cast
operands to int64_t and perform arithmetic operations without risk of
overflowing.
For month + months{MIN}, the implementation follows the Standard's "returns
clause" and evaluates:
Overflow occurs when MIN - 1 is evaluated. Casting to a larger type could help
but, unfortunately again, this is not possible for libstdc++.
For the subtraction of MIN, the problem is that -MIN is not representable.
It's fair to say that the intention is for these additions/subtractions to
be performed in modulus (12 or 7) arithmetic so that no overflow is expected.
To that end, the patch adds helpers __add_modulo and __sub_modulo, which respectively
return the remainder of the Euclidean division of __x + __y and __x - __y by __d
without overflowing.  These functions replace
constexpr unsigned __modulo(long long __n, unsigned __d);
which also calculates the remainder of __n, where __n is the result of the
addition or subtraction.  Hence, these operations might invoke UB before __modulo
is called and thus, __modulo can't do anything to remedy the issue.
In addition to solving the UB issues, __add_modulo and __sub_modulo allow better
codegen (shorter and branchless) on x86-64 and ARM [2].
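For illustration, a simple (branchy) version of the first helper, assuming
0 <= x < d; the libstdc++ versions are branchless and differ in detail:

  // Overflow-free (x + y) mod d, for any long long y; illustrative only.
  constexpr unsigned
  add_modulo (unsigned x, long long y, unsigned d)
  {
    long long r = y % static_cast<long long> (d);  // in (-d, d), cannot overflow
    if (r < 0)
      r += d;					   // Euclidean remainder of y
    return (x + static_cast<unsigned> (r)) % d;
  }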
* include/std/chrono: Fix + and - for months and weekdays.
* testsuite/std/time/month/1.cc: Add constexpr tests against overflow.
* testsuite/std/time/month/2.cc: New test for extreme values.
* testsuite/std/time/weekday/1.cc: Add constexpr tests against overflow.
* testsuite/std/time/weekday/2.cc: New test for extreme values.
Jonathan Wakely [Wed, 3 Jan 2024 12:23:32 +0000 (12:23 +0000)]
libstdc++: Use if-constexpr in std::__try_use_facet [PR113099]
As noted in the PR, we can use if-constexpr for the explicit
instantiation definitions that are compiled with -std=gnu++11.  We
just need to disable the -Wc++17-extensions diagnostics.
libstdc++-v3/ChangeLog:
PR libstdc++/113099
* include/bits/locale_classes.tcc (__try_use_facet): Use
if-constexpr for C++11 and up.
Jakub Jelinek [Fri, 5 Jan 2024 10:18:17 +0000 (11:18 +0100)]
scev: Avoid ICE on results used in abnormal PHI args [PR113201]
The following testcase ICEs when rslt is SSA_NAME_OCCURS_IN_ABNORMAL_PHI
and we call replace_uses_by with an INTEGER_CST def, where it ICEs on:
if (e->flags & EDGE_ABNORMAL
&& !SSA_NAME_OCCURS_IN_ABNORMAL_PHI (val))
because val is not an SSA_NAME. One way would be to add
&& TREE_CODE (val) == SSA_NAME
check in between the above 2 lines in replace_uses_by.
And/or the following patch just punts propagating constants to
SSA_NAME_OCCURS_IN_ABNORMAL_PHI rslt uses.
Or we could punt somewhere earlier in final value replacement (but dunno
where).
Jakub Jelinek [Fri, 5 Jan 2024 10:16:58 +0000 (11:16 +0100)]
Improve __builtin_popcount* (x) == 1 generation if x is known != 0 [PR90693]
We expand __builtin_popcount* (x) == 1 as
x ^ (x - 1) > x - 1, either unconditionally in tree-ssa-math-opts.cc
if we don't have direct optab support for popcount, or during
expansion where we compare the costs of comparison of the popcount
against one vs. the above expression.
As mentioned in the PR, if we know from ranger that the argument is
not zero, we can emit an x & (x - 1) == 0 test instead, which is the same number of
GIMPLE statements, but cheaper on many targets (e.g. whenever an AND
instruction can also set flags according to whether the result was zero).
The following patch does that.
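For illustration (hypothetical helper names), the two forms side by side:

  /* Usable only when x is known to be non-zero.  */
  int
  single_bit_nonzero_p (unsigned int x)
  {
    return (x & (x - 1)) == 0;
  }

  /* General form, also correct for x == 0.  */
  int
  single_bit_p (unsigned int x)
  {
    return (x ^ (x - 1)) > x - 1;
  }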
2024-01-05 Jakub Jelinek <jakub@redhat.com>
PR tree-optimization/90693
* tree-ssa-math-opts.cc (match_single_bit_test): If
tree_expr_nonzero_p (arg), remember it in the second argument to
IFN_POPCOUNT or lower it as arg & (arg - 1) == 0 rather than
arg ^ (arg - 1) > arg - 1.
* internal-fn.cc (expand_POPCOUNT): If second argument to
IFN_POPCOUNT suggests arg is non-zero, try to expand it as
arg & (arg - 1) == 0 rather than arg ^ (arg - 1) > arg - 1.
Feng Wang [Wed, 3 Jan 2024 05:21:45 +0000 (05:21 +0000)]
RISC-V: Add crypto vector api-testing cases.
This patch adds crypto vector API testing cases based on
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/zvbb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbb_vandn_vx_constraint.c: New test.
* gcc.target/riscv/rvv/base/zvbc-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-1.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-2.c: New test.
* gcc.target/riscv/rvv/base/zvkg-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvkned-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknha-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknhb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksed-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksh-intrinsic.c: New test.
* gcc.target/riscv/zvkb.c: New test.
Feng Wang [Tue, 2 Jan 2024 09:18:14 +0000 (09:18 +0000)]
RISC-V: Add crypto vector builtin function.
This patch adds the intrinsic functions of the crypto vector extensions based on the
intrinsic doc (https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md).
Ken Matsui [Mon, 11 Sep 2023 15:21:50 +0000 (08:21 -0700)]
libstdc++: Use _GLIBCXX_USE_BUILTIN_TRAIT
This patch uses the _GLIBCXX_USE_BUILTIN_TRAIT macro instead of __has_builtin
in the type_traits header for traits that have a corresponding fallback
non-built-in implementation.  This macro makes it possible to toggle the use of
built-in traits in the type_traits header through the
_GLIBCXX_DO_NOT_USE_BUILTIN_TRAITS macro, without needing to modify the
source code.
libstdc++-v3/ChangeLog:
* include/std/type_traits: Use _GLIBCXX_USE_BUILTIN_TRAIT.
Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org>
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Juzhe-Zhong [Thu, 4 Jan 2024 12:29:15 +0000 (20:29 +0800)]
RISC-V: Make liveness estimation be aware of .vi variant
Consider the following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] + 15;
int tmp2 = tmp + b[i];
c[i] = tmp2 + b[i];
d[i] = tmp + tmp2 + b[i];
}
}
The current dynamic LMUL cost model chooses LMUL = 4 because we count the "15" as
consuming 1 vector register group, which is not accurate.
We teach the dynamic LMUL cost model to be aware of the potential .vi variant instruction
transformation, so that we can choose LMUL = 8 according to a more accurate cost model.
Andrew Pinski [Mon, 1 Jan 2024 00:38:30 +0000 (16:38 -0800)]
Match: Improve inverted_equal_p for bool and `^` and `==` [PR113186]
For boolean types, `a ^ b` is a valid form for `a != b`. This means for
gimple_bitwise_inverted_equal_p, we catch some inverted value forms. This
patch extends inverted_equal_p to allow matching of `^` with the
corresponding `==`.  Note that in the testcase provided we used to optimize
to just `return 0` in GCC 12 where `a == b` was used;
this allows us to do that again.
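A hypothetical illustration of the kind of code this enables folding (not the
PR113186 testcase):

  bool
  f (bool a, bool b)
  {
    bool x = a ^ b;	/* the same as a != b for booleans */
    bool y = a == b;
    return x & y;	/* x and y are inverses, so this is always false */
  }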
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/113186
gcc/ChangeLog:
* gimple-match-head.cc (gimple_bitwise_inverted_equal_p):
Match `^` with the `==` for 1bit integral types.
* match.pd (maybe_cmp): Allow for bit_xor for 1bit
integral types.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/bitops-bool-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
This commit adds a new function intended for checking the XID properties
of a possibly unicode character, as well as the accompanying enum
describing the possible properties.
David Malcolm [Thu, 4 Jan 2024 14:36:28 +0000 (09:36 -0500)]
options: wire up options-urls.cc into gcc_urlifier
Changed in v2:
- split out from the code that generates options-urls.cc
- call the generated function, rather than use a generated array
- pass around lang_mask
gcc/ChangeLog:
* diagnostic.h (diagnostic_make_option_url_cb): Add lang_mask
param.
(diagnostic_context::make_option_url): Update for lang_mask param.
* gcc-urlifier.cc: Include "opts.h" and "options.h".
(gcc_urlifier::gcc_urlifier): Add lang_mask param.
(gcc_urlifier::m_lang_mask): New field.
(doc_urls): Make static.
(gcc_urlifier::get_url_for_quoted_text): Use label_text.
(gcc_urlifier::get_url_suffix_for_quoted_text): Use label_text.
Look for an option by name before trying a binary search in
doc_urls.
(gcc_urlifier::get_url_suffix_for_quoted_text): Use label_text.
(gcc_urlifier::get_url_suffix_for_option): New.
(make_gcc_urlifier): Add lang_mask param.
(selftest::gcc_urlifier_cc_tests): Update for above changes.
Verify that a URL is found for "-fpack-struct".
* gcc-urlifier.def: Drop options "--version" and "-fpack-struct".
* gcc-urlifier.h (make_gcc_urlifier): Add lang_mask param.
* gcc.cc (driver::global_initializations): Pass 0 for lang_mask
to make_gcc_urlifier.
* opts-diagnostic.h (get_option_url): Add lang_mask param.
* opts.cc (get_option_html_page): Remove special-casing for
analyzer and LTO.
(get_option_url_suffix): New.
(get_option_url): Reimplement.
(selftest::test_get_option_html_page): Rename to...
(selftest::test_get_option_url_suffix): ...this and update for
above changes.
(selftest::opts_cc_tests): Update for renaming.
* opts.h: Include "rich-location.h".
(get_option_url_suffix): New decl.
gcc/testsuite/ChangeLog:
* lib/gcc-dg.exp: Set TERM to xterm.
gcc/ChangeLog:
* toplev.cc (general_init): Pass lang_mask to urlifier.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:36:28 +0000 (09:36 -0500)]
opts: add logic to generate options-urls.cc
Changed in v2:
- split out from the code that uses this
- now handles lang-specific URLs, as well as generic URLs
- the generated options-urls.cc now contains a function with a
switch statement, rather than an array, to support
lang-specific URLs:
gcc/ChangeLog:
* Makefile.in (ALL_OPT_URL_FILES): New.
(GCC_OBJS): Add options-urls.o.
(OBJS): Likewise.
(OBJS-libcommon): Likewise.
(s-options): Depend on $(ALL_OPT_URL_FILES), and add this to
inputs to opt-gather.awk.
(options-urls.cc): New Makefile target.
* opt-functions.awk (url_suffix): New function.
(lang_url_suffix): New function.
* options-urls-cc-gen.awk: New file.
* opts.h (get_opt_url_suffix): New decl.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:36:27 +0000 (09:36 -0500)]
options: add gcc/regenerate-opt-urls.py
In r14-5118-gc5db4d8ba5f3de I added a mechanism to automatically add
URLs to quoted strings in diagnostics. This was based on a data table
mapping strings to URLs, with placeholder data covering various pragmas
and a couple of options.
The following patches add automatic URLification in our diagnostic
messages to mentions of *all* of our options in quoted strings, linking
to our HTML documentation.
For example, with these patches, given:
./xgcc -B. -S t.c -Wctad-maybe-unsupported
cc1: warning: command-line option ‘-Wctad-maybe-unsupported’ is valid for C++/ObjC++ but not for C
the quoted string '-Wctad-maybe-unsupported' gets automatically URLified
in a sufficiently modern terminal to:
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html#index-Wctad-maybe-unsupported
Objectives:
- integrate with DOCUMENTATION_ROOT_URL
- integrate with the existing .opt mechanisms
- automate keeping the URLs up-to-date
- work with target-specific options based on current configuration
- work with lang-specific options based on current configuration
- keep autogenerated material separate from the human-maintained .opt
files
- no new build-time requirements (by using awk at build time)
- be maintainable
The approach is a new regenerate-opt-urls.py which:
- scrapes the generated HTML documentation finding anchors
for options,
- reads all the .opt files in the source tree
- for each .opt file, generates a .opt.urls file; for each
option in the .opt file it has either a UrlSuffix directive giving
the final part of the URL of that option's documentation (relative
to DOCUMENTATION_ROOT_URL), or a comment describing the problem.
regenerate-opt-urls.py is written in Python 3, and has unit tests.
I tested it with Python 3.8, and it probably works with earlier
releases of Python 3.
The .opt.urls files it generates become part of the source tree, and
would be regenerated by maintainers whenever new options are added.
Forgetting to update the files (or not having Python 3 handy) merely
means that URLs might be missing or out of date until someone else
regenerates them.
At build time, the .opt.urls are added to .opt files when regenerating
the optionslist file. A new "options-urls-cc-gen.awk" is run at build
time on the optionslist to generate a "options-urls.cc" file, and this
is then used by the gcc_urlifier class when emitting diagnostics.
Changed in v5:
- removed commented-out code
Changed in v4:
- added PER_LANGUAGE_OPTION_INDEXES
- added info to sourcebuild.texi on adding a new front end
- removed TODOs and out-of-date comment
Changed in v3:
- Makefile.in: added OPT_URLS_HTML_DEPS and a comment
Changed in v2:
- added convenience targets to Makefile for regenerating the .opt.urls
files, and for running unit tests for the generation code
- parse gdc and gfortran documentation, and create LangUrlSuffix_{lang}
directives for language-specific URLs.
- add documentation to sourcebuild.texi
gcc/ChangeLog:
* Makefile.in (OPT_URLS_HTML_DEPS): New.
(regenerate-opt-urls): New target.
(regenerate-opt-urls-unit-test): New target.
* doc/options.texi (Option properties): Add UrlSuffix and
description of regenerate-opt-urls.py. Add LangUrlSuffix_*.
* doc/sourcebuild.texi (Anatomy of a Language Front End): Add
reference to regenerate-opt-urls.py's PER_LANGUAGE_OPTION_INDEXES
and Makefile.in's OPT_URLS_HTML_DEPS.
(Anatomy of a Target Back End): Add
reference to regenerate-opt-urls.py's TARGET_SPECIFIC_PAGES.
* regenerate-opt-urls.py: New file.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:19:06 +0000 (09:19 -0500)]
analyzer: add sarif properties for checker events
As another followup to r14-6057-g12b67d1e13b3cf, optionally add SARIF
property bags to threadFlowLocation objects when writing out diagnostic
paths, and add analyzer-specific properties to them.
This was useful for debugging PR analyzer/112790.
gcc/analyzer/ChangeLog:
* checker-event.cc: Include "diagnostic-format-sarif.h" and
"tree-logical-location.h".
(checker_event::maybe_add_sarif_properties): New.
(superedge_event::maybe_add_sarif_properties): New.
(superedge_event::superedge_event): Add comment.
* checker-event.h (checker_event::maybe_add_sarif_properties): New
decl.
(superedge_event::maybe_add_sarif_properties): New decl.
gcc/ChangeLog:
* diagnostic-format-sarif.cc
(sarif_builder::make_logical_location_object): Convert to...
(make_sarif_logical_location_object): ...this.
(sarif_builder::set_any_logical_locs_arr): Update for above
change.
(sarif_builder::make_thread_flow_location_object): Call
maybe_add_sarif_properties on each diagnostic_event.
* diagnostic-format-sarif.h (class logical_location): New forward
decl.
(make_sarif_logical_location_object): New decl.
* diagnostic-path.h (class sarif_object): New forward decl.
(diagnostic_event::maybe_add_sarif_properties): New vfunc.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Kuan-Lin Chen [Wed, 20 Dec 2023 07:18:59 +0000 (15:18 +0800)]
RISC-V: Nan-box the result of movhf on soft-fp16
According to spec, fmv.h checks if the input operands are correctly
NaN-boxed. If not, the input value is treated as an n-bit canonical NaN.
This patch fixes the issue that operands returned by the soft-fp16 libgcc
routines (e.g., __truncdfhf2) were not correctly NaN-boxed.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_legitimize_move): Expand movhf
with NaN-boxing value.
* config/riscv/riscv.md (*movhf_softfloat_unspec): New pattern.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/_Float16-nanboxing.c: New test.
Co-authored-by: Patrick Lin <patrick@andestech.com>
Co-authored-by: Rufus Chen <rufus@andestech.com>
Co-authored-by: Monk Chiang <monk.chiang@sifive.com>
Roger Sayle [Thu, 4 Jan 2024 10:49:33 +0000 (10:49 +0000)]
Improved RTL expansion of field assignments into promoted registers.
This patch fixes PR rtl-optimization/104914 by tweaking/improving the way
the fields are written into a pseudo register that needs to be kept sign
extended.
<bb 5> [local count: 1073741824]:
val ={v} {CLOBBER(eol)};
return;
}
Here four bytes are being sequentially written into the SImode value
val. On some platforms, such as MIPS64, this SImode value is kept in
a 64-bit register, suitably sign-extended. The function expand_assignment
contains logic to handle this via SUBREG_PROMOTED_VAR_P (around line 6264
in expr.cc) which outputs an explicit extension operation after each
store_field (typically insv) to such promoted/extended pseudos.
The first observation is that there's no need to perform sign extension
after each byte in the example above; the extension is only required
after changes to the most significant byte (i.e. to a field that overlaps
the most significant bit).
The bug fix is actually a bit more subtle, but at this point during
code expansion it's not safe to use a SUBREG when sign-extending this
field. Currently, GCC generates (sign_extend:DI (subreg:SI (reg:DI) 0))
but combine (and other RTL optimizers) later realize that because SImode
values are always sign-extended in their 64-bit hard registers that
this is a no-op and eliminates it. The trouble is that it's unsafe to
refer to the SImode lowpart of a 64-bit register using SUBREG at those
critical points when temporarily the value isn't correctly sign-extended,
and the usual backend invariants don't hold. At these critical points,
the middle-end needs to use an explicit TRUNCATE rtx (as this isn't a
TRULY_NOOP_TRUNCATION), so that the explicit sign-extension looks like
(sign_extend:DI (truncate:SI (reg:DI))), which avoids the problem.
2024-01-04 Roger Sayle <roger@nextmovesoftware.com>
Jeff Law <jlaw@ventanamicro.com>
gcc/ChangeLog
PR rtl-optimization/104914
* expr.cc (expand_assignment): When target is SUBREG_PROMOTED_VAR_P
a sign or zero extension is only required if the modified field
overlaps the SUBREG's most significant bit. On MODE_REP_EXTENDED
targets, don't refer to the temporarily incorrectly extended value
using a SUBREG, but instead generate an explicit TRUNCATE rtx.
Juzhe-Zhong [Thu, 4 Jan 2024 08:22:48 +0000 (16:22 +0800)]
RISC-V: Make liveness estimation be aware of .vi variant
Consider the following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] + 15;
int tmp2 = tmp + b[i];
c[i] = tmp2 + b[i];
d[i] = tmp + tmp2 + b[i];
}
}
The current dynamic LMUL cost model chooses LMUL = 4 because we count the "15" as
consuming 1 vector register group, which is not accurate.
We teach the dynamic LMUL cost model to be aware of the potential .vi variant instruction
transformation, so that we can choose LMUL = 8 according to a more accurate cost model.
Kito Cheng [Mon, 25 Dec 2023 08:45:21 +0000 (16:45 +0800)]
RISC-V: Fix misaligned stack offset for interrupt function
An `interrupt` function will back up the fcsr register, but the save slot was fixed
to SImode.  That is not a big issue since fcsr only uses 8 bits so far; however, the
offset should still use UNITS_PER_WORD to prevent the stack offset from becoming
non-8-byte-aligned, which would cause problems for RV64.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_for_each_saved_reg): Adjust the
offset of fcsr.
chenxiaolong [Fri, 29 Dec 2023 07:48:06 +0000 (15:48 +0800)]
LoongArch: testsuite:Add loongarch to gcc.dg/vect/slp-26.c.
On the LoongArch architecture, GCC supports the vectorization tested by
vect/slp-26.c, but loongarch is not matched in the dg-final directives.  Add
loongarch to the appropriate dg-final directives.
chenxiaolong [Fri, 29 Dec 2023 01:45:15 +0000 (09:45 +0800)]
LoongArch: testsuite:Fix FAIL in lasx-xvstelm.c file.
After the cost model was implemented for the LoongArch architecture, GCC has this
feature turned on by default, which causes the lasx-xvstelm.c test to fail.  Analysis
shows that this test case only generates the vector instructions required for
detection after the cost model is disabled with the "-fno-vect-cost-model"
compile option.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: Add compile
option "-fno-vect-cost-model" to dg-options.
There are currently two versions of the implementations of constant
vector permutation: loongarch_expand_vec_perm_const_1 and
loongarch_expand_vec_perm_const_2. The implementations of the two
versions are different. Currently, only the implementation of
loongarch_expand_vec_perm_const_1 is used for 256-bit vectors. We
hope to streamline the code as much as possible while retaining the
better-performing implementation of the two. By repeatedly testing
spec2006 and spec2017, we got the following Merged version.
Compared with the pre-merger version, the number of lines of code
in loongarch.cc has been reduced by 888 lines. At the same time,
the performance of SPECint2006 under Ofast has been improved by 0.97%,
and the performance of SPEC2017 fprate has been improved by 0.27%.