Feng Wang [Wed, 3 Jan 2024 05:21:45 +0000 (05:21 +0000)]
RISC-V: Add crypto vector api-testing cases.
This patch adds crypto vector API-testing cases based on
https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto
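For reference, a minimal sketch of the kind of intrinsic call these tests exercise, assuming the __riscv_vror/__riscv_vandn naming from the linked document and that <riscv_vector.h> provides the declarations (the exact prototypes may differ from the committed tests):
#include <riscv_vector.h>
/* Zvbb sketch: rotate each element right by 8, then AND-NOT with a scalar.
   Intrinsic names and argument order are assumptions based on the doc.  */
vuint32m1_t
zvbb_sketch (vuint32m1_t v, size_t vl)
{
  vuint32m1_t r = __riscv_vror_vx_u32m1 (v, 8, vl);
  return __riscv_vandn_vx_u32m1 (r, 0xffu, vl);
}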
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/base/zvbb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbb_vandn_vx_constraint.c: New test.
* gcc.target/riscv/rvv/base/zvbc-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-1.c: New test.
* gcc.target/riscv/rvv/base/zvbc_vx_constraint-2.c: New test.
* gcc.target/riscv/rvv/base/zvkg-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvkned-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknha-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvknhb-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksed-intrinsic.c: New test.
* gcc.target/riscv/rvv/base/zvksh-intrinsic.c: New test.
* gcc.target/riscv/zvkb.c: New test.
Feng Wang [Tue, 2 Jan 2024 09:18:14 +0000 (09:18 +0000)]
RISC-V: Add crypto vector builtin function.
This patch adds the intrinsic functions of the crypto vector extensions based on the
intrinsic doc (https://github.com/riscv-non-isa/rvv-intrinsic-doc/blob/eopc/vector-crypto/auto-generated/vector-crypto/intrinsic_funcs.md).
Ken Matsui [Mon, 11 Sep 2023 15:21:50 +0000 (08:21 -0700)]
libstdc++: Use _GLIBCXX_USE_BUILTIN_TRAIT
This patch uses the _GLIBCXX_USE_BUILTIN_TRAIT macro instead of __has_builtin
in the type_traits header for traits that have a corresponding fallback
non-built-in implementation. This macro makes it possible to toggle the use of
built-in traits in the type_traits header through the
_GLIBCXX_DO_NOT_USE_BUILTIN_TRAITS macro, without needing to modify the
source code.
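For context, the macro is assumed to be defined along these lines in <bits/c++config> (a sketch, not the verbatim definition):
#if defined(_GLIBCXX_DO_NOT_USE_BUILTIN_TRAITS)
# define _GLIBCXX_USE_BUILTIN_TRAIT(BT) 0
#else
# define _GLIBCXX_USE_BUILTIN_TRAIT(BT) __has_builtin(BT)
#endif
so a trait in <type_traits> can test, e.g., _GLIBCXX_USE_BUILTIN_TRAIT(__is_enum) and fall back to the non-built-in implementation when the mechanism is disabled.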
libstdc++-v3/ChangeLog:
* include/std/type_traits: Use _GLIBCXX_USE_BUILTIN_TRAIT.
Signed-off-by: Ken Matsui <kmatsui@gcc.gnu.org>
Reviewed-by: Patrick Palka <ppalka@redhat.com>
Reviewed-by: Jonathan Wakely <jwakely@redhat.com>
Juzhe-Zhong [Thu, 4 Jan 2024 12:29:15 +0000 (20:29 +0800)]
RISC-V: Make liveness estimation be aware of .vi variant
Consider this following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] + 15;
int tmp2 = tmp + b[i];
c[i] = tmp2 + b[i];
d[i] = tmp + tmp2 + b[i];
}
}
The current dynamic LMUL cost model chooses LMUL = 4 because it counts the "15" as
consuming 1 vector register group, which is not accurate.
Teach the dynamic LMUL cost model to be aware of the potential .vi variant instruction
transformation, so that we can choose LMUL = 8 according to a more accurate cost model.
Andrew Pinski [Mon, 1 Jan 2024 00:38:30 +0000 (16:38 -0800)]
Match: Improve inverted_equal_p for bool and `^` and `==` [PR113186]
For boolean types, `a ^ b` is a valid form of `a != b`. This means that in
gimple_bitwise_inverted_equal_p we can catch some additional inverted value forms. This
patch extends inverted_equal_p to allow matching of `^` with the
corresponding `==`. Note that in the testcase provided, GCC 12 used to optimize
the function to just `return 0` where `a == b` was used;
this allows us to do that again.
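As a hedged illustration (not the new testcase): with `^` recognized as the inverse of `==` for booleans, both of the functions below fold to `return 0;`.
int f1 (_Bool a, _Bool b) { return (a ^ b) & (a == b); }
int f2 (_Bool a, _Bool b) { return (a != b) & (a == b); }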
Bootstrapped and tested on x86_64-linux-gnu with no regressions.
PR tree-optimization/113186
gcc/ChangeLog:
* gimple-match-head.cc (gimple_bitwise_inverted_equal_p):
Match `^` with the `==` for 1bit integral types.
* match.pd (maybe_cmp): Allow for bit_xor for 1bit
integral types.
gcc/testsuite/ChangeLog:
* gcc.dg/tree-ssa/bitops-bool-1.c: New test.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
This commit adds a new function intended for checking the XID properties
of a possibly-Unicode character, as well as the accompanying enum
describing the possible properties.
David Malcolm [Thu, 4 Jan 2024 14:36:28 +0000 (09:36 -0500)]
options: wire up options-urls.cc into gcc_urlifier
Changed in v2:
- split out from the code that generates options-urls.cc
- call the generated function, rather than use a generated array
- pass around lang_mask
gcc/ChangeLog:
* diagnostic.h (diagnostic_make_option_url_cb): Add lang_mask
param.
(diagnostic_context::make_option_url): Update for lang_mask param.
* gcc-urlifier.cc: Include "opts.h" and "options.h".
(gcc_urlifier::gcc_urlifier): Add lang_mask param.
(gcc_urlifier::m_lang_mask): New field.
(doc_urls): Make static.
(gcc_urlifier::get_url_for_quoted_text): Use label_text.
(gcc_urlifier::get_url_suffix_for_quoted_text): Use label_text.
Look for an option by name before trying a binary search in
doc_urls.
(gcc_urlifier::get_url_suffix_for_quoted_text): Use label_text.
(gcc_urlifier::get_url_suffix_for_option): New.
(make_gcc_urlifier): Add lang_mask param.
(selftest::gcc_urlifier_cc_tests): Update for above changes.
Verify that a URL is found for "-fpack-struct".
* gcc-urlifier.def: Drop options "--version" and "-fpack-struct".
* gcc-urlifier.h (make_gcc_urlifier): Add lang_mask param.
* gcc.cc (driver::global_initializations): Pass 0 for lang_mask
to make_gcc_urlifier.
* opts-diagnostic.h (get_option_url): Add lang_mask param.
* opts.cc (get_option_html_page): Remove special-casing for
analyzer and LTO.
(get_option_url_suffix): New.
(get_option_url): Reimplement.
(selftest::test_get_option_html_page): Rename to...
(selftest::test_get_option_url_suffix): ...this and update for
above changes.
(selftest::opts_cc_tests): Update for renaming.
* opts.h: Include "rich-location.h".
(get_option_url_suffix): New decl.
gcc/testsuite/ChangeLog:
* lib/gcc-dg.exp: Set TERM to xterm.
gcc/ChangeLog:
* toplev.cc (general_init): Pass lang_mask to urlifier.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:36:28 +0000 (09:36 -0500)]
opts: add logic to generate options-urls.cc
Changed in v2:
- split out from the code that uses this
- now handles lang-specific URLs, as well as generic URLs
- the generated options-urls.cc now contains a function with a
switch statement, rather than an array, to support
lang-specific URLs:
gcc/ChangeLog:
* Makefile.in (ALL_OPT_URL_FILES): New.
(GCC_OBJS): Add options-urls.o.
(OBJS): Likewise.
(OBJS-libcommon): Likewise.
(s-options): Depend on $(ALL_OPT_URL_FILES), and add this to
inputs to opt-gather.awk.
(options-urls.cc): New Makefile target.
* opt-functions.awk (url_suffix): New function.
(lang_url_suffix): New function.
* options-urls-cc-gen.awk: New file.
* opts.h (get_opt_url_suffix): New decl.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:36:27 +0000 (09:36 -0500)]
options: add gcc/regenerate-opt-urls.py
In r14-5118-gc5db4d8ba5f3de I added a mechanism to automatically add
URLs to quoted strings in diagnostics. This was based on a data table
mapping strings to URLs, with placeholder data covering various pragmas
and a couple of options.
The following patches add automatic URLification in our diagnostic
messages to mentions of *all* of our options in quoted strings, linking
to our HTML documentation.
For example, with these patches, given:
./xgcc -B. -S t.c -Wctad-maybe-unsupported
cc1: warning: command-line option ‘-Wctad-maybe-unsupported’ is valid for C++/ObjC++ but not for C
the quoted string '-Wctad-maybe-unsupported' gets automatically URLified
in a sufficiently modern terminal to:
https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html#index-Wctad-maybe-unsupported
Objectives:
- integrate with DOCUMENTATION_ROOT_URL
- integrate with the existing .opt mechanisms
- automate keeping the URLs up-to-date
- work with target-specific options based on current configuration
- work with lang-specific options based on current configuration
- keep autogenerated material separate from the human-maintained .opt
files
- no new build-time requirements (by using awk at build time)
- be maintainable
The approach is a new regenerate-opt-urls.py which:
- scrapes the generated HTML documentation finding anchors
for options,
- reads all the .opt files in the source tree
- for each .opt file, generates a .opt.urls file; for each
option in the .opt file it has either a UrlSuffix directive giving
the final part of the URL of that option's documentation (relative
to DOCUMENTATION_ROOT_URL), or a comment describing the problem
(see the sketch after this list).
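A hedged sketch of what a generated entry might look like, reusing the -Wctad-maybe-unsupported example from above (illustrative only; the exact layout is whatever regenerate-opt-urls.py emits):
; hypothetical entry in a generated .opt.urls file
Wctad-maybe-unsupported
UrlSuffix(gcc/C_002b_002b-Dialect-Options.html#index-Wctad-maybe-unsupported)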
regenerate-opt-urls.py is written in Python 3, and has unit tests.
I tested it with Python 3.8, and it probably works with earlier
releases of Python 3.
The .opt.urls files it generates become part of the source tree, and
would be regenerated by maintainers whenever new options are added.
Forgetting to update the files (or not having Python 3 handy) merely
means that URLs might be missing or out of date until someone else
regenerates them.
At build time, the .opt.urls files are added to the .opt files when regenerating
the optionslist file. A new "options-urls-cc-gen.awk" is run at build
time on the optionslist to generate an "options-urls.cc" file, and this
is then used by the gcc_urlifier class when emitting diagnostics.
Changed in v5:
- removed commented-out code
Changed in v4:
- added PER_LANGUAGE_OPTION_INDEXES
- added info to sourcebuild.texi on adding a new front end
- removed TODOs and out-of-date comment
Changed in v3:
- Makefile.in: added OPT_URLS_HTML_DEPS and a comment
Changed in v2:
- added convenience targets to Makefile for regenerating the .opt.urls
files, and for running unit tests for the generation code
- parse gdc and gfortran documentation, and create LangUrlSuffix_{lang}
directives for language-specific URLs.
- add documentation to sourcebuild.texi
gcc/ChangeLog:
* Makefile.in (OPT_URLS_HTML_DEPS): New.
(regenerate-opt-urls): New target.
(regenerate-opt-urls-unit-test): New target.
* doc/options.texi (Option properties): Add UrlSuffix and
description of regenerate-opt-urls.py. Add LangUrlSuffix_*.
* doc/sourcebuild.texi (Anatomy of a Language Front End): Add
reference to regenerate-opt-urls.py's PER_LANGUAGE_OPTION_INDEXES
and Makefile.in's OPT_URLS_HTML_DEPS.
(Anatomy of a Target Back End): Add
reference to regenerate-opt-urls.py's TARGET_SPECIFIC_PAGES.
* regenerate-opt-urls.py: New file.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
David Malcolm [Thu, 4 Jan 2024 14:19:06 +0000 (09:19 -0500)]
analyzer: add sarif properties for checker events
As another followup to r14-6057-g12b67d1e13b3cf, optionally add SARIF
property bags to threadFlowLocation objects when writing out diagnostic
paths, and add analyzer-specific properties to them.
This was useful for debugging PR analyzer/112790.
gcc/analyzer/ChangeLog:
* checker-event.cc: Include "diagnostic-format-sarif.h" and
"tree-logical-location.h".
(checker_event::maybe_add_sarif_properties): New.
(superedge_event::maybe_add_sarif_properties): New.
(superedge_event::superedge_event): Add comment.
* checker-event.h (checker_event::maybe_add_sarif_properties): New
decl.
(superedge_event::maybe_add_sarif_properties): New decl.
gcc/ChangeLog:
* diagnostic-format-sarif.cc
(sarif_builder::make_logical_location_object): Convert to...
(make_sarif_logical_location_object): ...this.
(sarif_builder::set_any_logical_locs_arr): Update for above
change.
(sarif_builder::make_thread_flow_location_object): Call
maybe_add_sarif_properties on each diagnostic_event.
* diagnostic-format-sarif.h (class logical_location): New forward
decl.
(make_sarif_logical_location_object): New decl.
* diagnostic-path.h (class sarif_object): New forward decl.
(diagnostic_event::maybe_add_sarif_properties): New vfunc.
Signed-off-by: David Malcolm <dmalcolm@redhat.com>
Kuan-Lin Chen [Wed, 20 Dec 2023 07:18:59 +0000 (15:18 +0800)]
RISC-V: Nan-box the result of movhf on soft-fp16
According to spec, fmv.h checks if the input operands are correctly
NaN-boxed. If not, the input value is treated as an n-bit canonical NaN.
This patch fixes the issue that operands returned by the soft-fp16 libgcc
routines (i.e., __truncdfhf2) were not correctly NaN-boxed.
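A hedged sketch of the scenario (not the committed _Float16-nanboxing.c test): on a soft-fp16 configuration the truncation below is a __truncdfhf2 libcall, and its result must be NaN-boxed before an FP16 register move consumes it.
_Float16 g;
void
store_half (double d)
{
  g = (_Float16) d;   /* expands to a __truncdfhf2 libcall without hardware FP16 */
}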
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_legitimize_move): Expand movfh
with Nan-boxing value.
* config/riscv/riscv.md (*movhf_softfloat_unspec): New pattern.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/_Float16-nanboxing.c: New test.
Co-authored-by: Patrick Lin <patrick@andestech.com>
Co-authored-by: Rufus Chen <rufus@andestech.com>
Co-authored-by: Monk Chiang <monk.chiang@sifive.com>
Roger Sayle [Thu, 4 Jan 2024 10:49:33 +0000 (10:49 +0000)]
Improved RTL expansion of field assignments into promoted registers.
This patch fixes PR rtl-optimization/104914 by tweaking/improving the way
the fields are written into a pseudo register that needs to be kept sign
extended.
<bb 5> [local count: 1073741824]:
val ={v} {CLOBBER(eol)};
return;
}
Here four bytes are being sequentially written into the SImode value
val. On some platforms, such as MIPS64, this SImode value is kept in
a 64-bit register, suitably sign-extended. The function expand_assignment
contains logic to handle this via SUBREG_PROMOTED_VAR_P (around line 6264
in expr.cc) which outputs an explicit extension operation after each
store_field (typically insv) to such promoted/extended pseudos.
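A hedged C-level reconstruction of the situation (the dump above is truncated, and this need not be the PR testcase): four byte stores into an int that, on a target like MIPS64, lives in a sign-extended 64-bit register.
union u { int si; unsigned char b[4]; };
int
foo (unsigned char b0, unsigned char b1, unsigned char b2, unsigned char b3)
{
  union u val;
  val.b[0] = b0;
  val.b[1] = b1;
  val.b[2] = b2;
  val.b[3] = b3;   /* only the store that overlaps the sign bit needs the
                      re-extension discussed below */
  return val.si;
}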
The first observation is that there's no need to perform sign extension
after each byte in the example above; the extension is only required
after changes to the most significant byte (i.e. to a field that overlaps
the most significant bit).
The bug fix is actually a bit more subtle, but at this point during
code expansion it's not safe to use a SUBREG when sign-extending this
field. Currently, GCC generates (sign_extend:DI (subreg:SI (reg:DI) 0))
but combine (and other RTL optimizers) later realize that because SImode
values are always sign-extended in their 64-bit hard registers that
this is a no-op and eliminates it. The trouble is that it's unsafe to
refer to the SImode lowpart of a 64-bit register using SUBREG at those
critical points when temporarily the value isn't correctly sign-extended,
and the usual backend invariants don't hold. At these critical points,
the middle-end needs to use an explicit TRUNCATE rtx (as this isn't a
TRULY_NOOP_TRUNCATION), so that the explicit sign-extension looks like
(sign_extend:DI (truncate:SI (reg:DI))), which avoids the problem.
2024-01-04 Roger Sayle <roger@nextmovesoftware.com>
Jeff Law <jlaw@ventanamicro.com>
gcc/ChangeLog
PR rtl-optimization/104914
* expr.cc (expand_assignment): When target is SUBREG_PROMOTED_VAR_P
a sign or zero extension is only required if the modified field
overlaps the SUBREG's most significant bit. On MODE_REP_EXTENDED
targets, don't refer to the temporarily incorrectly extended value
using a SUBREG, but instead generate an explicit TRUNCATE rtx.
Juzhe-Zhong [Thu, 4 Jan 2024 08:22:48 +0000 (16:22 +0800)]
RISC-V: Make liveness estimation be aware of .vi variant
Consider this following case:
void
f (int *restrict a, int *restrict b, int *restrict c, int *restrict d, int n)
{
for (int i = 0; i < n; i++)
{
int tmp = b[i] + 15;
int tmp2 = tmp + b[i];
c[i] = tmp2 + b[i];
d[i] = tmp + tmp2 + b[i];
}
}
The current dynamic LMUL cost model chooses LMUL = 4 because it counts the "15" as
consuming 1 vector register group, which is not accurate.
Teach the dynamic LMUL cost model to be aware of the potential .vi variant instruction
transformation, so that we can choose LMUL = 8 according to a more accurate cost model.
Kito Cheng [Mon, 25 Dec 2023 08:45:21 +0000 (16:45 +0800)]
RISC-V: Fix misaligned stack offset for interrupt function
An `interrupt` function will back up the fcsr register, but the save slot was
fixed to SImode. That is not a big issue since fcsr only uses 8 bits so far;
however, the offset should still use UNITS_PER_WORD to prevent the stack offset
from becoming non-8-byte aligned, which would cause problems for RV64.
gcc/ChangeLog:
* config/riscv/riscv.cc (riscv_for_each_saved_reg): Adjust the
offset of fcsr.
chenxiaolong [Fri, 29 Dec 2023 07:48:06 +0000 (15:48 +0800)]
LoongArch: testsuite: Add loongarch to gcc.dg/vect/slp-26.c.
On the LoongArch architecture, GCC supports the vectorization tested
by vect/slp-26.c, but loongarch is not checked for in the dg-finals. Add
loongarch to the appropriate dg-finals.
chenxiaolong [Fri, 29 Dec 2023 01:45:15 +0000 (09:45 +0800)]
LoongArch: testsuite: Fix FAIL in lasx-xvstelm.c file.
After the cost model was implemented for the LoongArch architecture, GCC
turns this feature on by default, which causes the
lasx-xvstelm.c test to fail. Analysis shows that this test case only
generates the vector instructions required by the checks after
the cost model is disabled with the "-fno-vect-cost-model"
compile option.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lasx/lasx-xvstelm.c: Add compile
option "-fno-vect-cost-model" to dg-options.
There are currently two versions of the implementations of constant
vector permutation: loongarch_expand_vec_perm_const_1 and
loongarch_expand_vec_perm_const_2. The implementations of the two
versions are different. Currently, only the implementation of
loongarch_expand_vec_perm_const_1 is used for 256-bit vectors. We
hope to streamline the code as much as possible while retaining the
better-performing implementation of the two. By repeatedly testing
spec2006 and spec2017, we arrived at the merged version in this patch.
Compared with the pre-merge version, the number of lines of code
in loongarch.cc has been reduced by 888 lines. At the same time,
the performance of SPECint2006 under -Ofast has improved by 0.97%,
and the performance of SPEC2017 fprate has improved by 0.27%.
YunQiang Su [Fri, 29 Dec 2023 16:17:52 +0000 (00:17 +0800)]
MIPS: Add pattern insqisi_extended and inshisi_extended
This match pattern allows combining (zero_extract:DI 8, 24, QI)
with a sign-extend into a 32-bit INS instruction on TARGET_64BIT.
For SImode, if the sign bit is modified by bit operations, we need a
sign-extend operation. The 32-bit INS instruction guarantees that its
result is sign-extended, and the QImode src register is safe for INS, too.
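A hedged illustration of the kind of source that should now combine into a single 32-bit INS on a 64-bit MIPS target (not a committed testcase):
int
set_top_byte (int x, unsigned char b)
{
  /* Insert the 8-bit value b at bit position 24 of the sign-extended
     32-bit value x, i.e. a (zero_extract:DI 8, 24) style insertion.  */
  return (x & 0x00ffffff) | ((int) b << 24);
}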
YunQiang Su [Fri, 29 Dec 2023 17:34:28 +0000 (01:34 +0800)]
MIPS: Implement TARGET_INSN_COSTS
When combining some instructions, the generic `rtx_cost`
may overestimate the cost of the resulting RTL, because
the RTL may be quite complex and `rtx_cost` has no
information that it can be converted to simple
hardware instruction(s).
In this case, let's use `insn_count * perf_ratio` to
estimate the cost if both of them are available;
otherwise fall back to pattern_cost.
When not optimizing for speed, let's use the length as the cost.
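A hedged sketch of the decision rule with the GCC internals abstracted away (the parameters stand in for the insn_count/perf_ratio attributes, the insn length, and the pattern_cost fallback; this is not the committed mips_insn_cost):
static int
insn_cost_rule (int speed, int insn_count, int perf_ratio,
                int length, int pattern_cost_fallback)
{
  if (!speed)
    return length;                      /* size: use the length as cost */
  if (insn_count > 0 && perf_ratio > 0) /* perf_ratio defaults to 0 = unset */
    return insn_count * perf_ratio;
  return pattern_cost_fallback;         /* otherwise fall back to pattern_cost */
}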
gcc
* config/mips/mips.cc (mips_insn_cost): New function.
gcc/testsuite
* gcc.target/mips/data-sym-multi-pool.c: Skip Os or -O0.
YunQiang Su [Fri, 29 Dec 2023 16:17:52 +0000 (00:17 +0800)]
MIPS: define_attr perf_ratio in mips.md
The accurate cost of a pattern can be obtained with
insn_count * perf_ratio.
The default value is set to 0 instead of 1, since we
need to distinguish the default value from one that has
really been set for a pattern. Since it is not set for most
patterns yet, to use it we need to be sure that its
value is greater than 0.
This attr will be used in `mips_insn_cost`.
gcc
* config/mips/mips.md (perf_ratio): New attribute.
Juzhe-Zhong [Wed, 3 Jan 2024 22:38:43 +0000 (06:38 +0800)]
RISC-V: Fix bug of earliest fusion for infinite loop [VSETVL PASS]
As shown in PR113206 and PR113209, the bug happens in the following situation:
li a4,32
...
vsetvli zero,a4,e8,m8,ta,ma
...
slliw a4,a3,24
sraiw a4,a4,24
bge a3,a1,.L8
sb a4,%lo(e)(a0)
vsetvli zero,a4,e8,m8,ta,ma --> a4 is polluted value not the expected "32".
...
.L7:
j .L7 ---> infinite loop.
The root cause is that the infinite loop confuses the earliest computation and lets earliest fusion
happen in an unexpected place.
Disable blocks that belong to an infinite loop to fix this bug, since applying earliest LCM fusion
to an infinite loop seems quite complicated and we don't see any benefit.
Note that disabling earliest fusion on infinite loops doesn't hurt vsetvli performance;
instead, it improves codegen in some cases.
Tested on both RV32 and RV64 no regression.
PR target/113206
PR target/113209
gcc/ChangeLog:
* config/riscv/riscv-vsetvl.cc (invalid_opt_bb_p): New function.
(pre_vsetvl::compute_lcm_local_properties): Disable earliest fusion on
blocks belong to infinite loop.
(pre_vsetvl::emit_vsetvl): Remove fake edges.
* config/riscv/t-riscv: Add a new include file.
Patrick Palka [Wed, 3 Jan 2024 20:43:28 +0000 (15:43 -0500)]
c++: bad direct reference binding via conv fn [PR113064]
When computing a direct reference binding via a conversion function
yields a bad conversion, reference_binding incorrectly commits to that
conversion instead of trying a conversion via a temporary. This causes
us to reject the first testcase because the bad direct conversion to B&&
via the && conversion operator prevents us from considering the good
conversion via the & conversion operator and a temporary. (Similar
story for the second testcase.)
This patch fixes this by making reference_binding not prematurely commit
to such a bad direct conversion. We still fall back to it if using a
temporary also fails (otherwise the diagnostic for cpp0x/explicit7.C
regresses).
PR c++/113064
gcc/cp/ChangeLog:
* call.cc (reference_binding): Still try a conversion via a
temporary if a direct conversion was bad.
gcc/testsuite/ChangeLog:
* g++.dg/cpp0x/rv-conv4.C: New test.
* g++.dg/cpp0x/rv-conv5.C: New test.
gcc/c/
* c-parser.cc (c_parser_omp_clause_name): Move handling of indirect
clause to correspond to alphabetical order.
gcc/cp/
* parser.cc (cp_parser_omp_clause_name): Move handling of indirect
clause to correspond to alphabetical order.
gcc/
* tree-core.h (enum omp_clause_code): Move OMP_CLAUSE_INDIRECT to before
OMP_CLAUSE__SIMDUID_.
* tree.cc (omp_clause_num_ops): Update position of entry for
OMP_CLAUSE_INDIRECT to correspond with omp_clause_code.
(omp_clause_code_name): Likewise.
nvptx: Restructure code generating function map labels
This restructures the code generating FUNC_MAP and IND_FUNC_MAP labels
in the assembly code for mkoffload to consume, hopefully making it a
bit clearer and easier to search for.
Jakub Jelinek [Wed, 3 Jan 2024 11:11:32 +0000 (12:11 +0100)]
Small tweaks for update-copyright.py
update-copyright.py --this-year FAILs on two spots in the modula2
directories.
One is gpl_v3_without_node.texi, I think that is similar to other
license files which we already exclude from updates.
And the other is GmcOptions.cc, which has lines like
mcPrintf_printf0 ((const char *) "Copyright ", 10);
mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
mcPrintf_printf1 ((const char *) "Copyright (C) %d Free Software Foundation, Inc.\\n", 49, (const unsigned char *) &year, (sizeof (year)-1));
which update-copyright.py obviously can't grok. The file is generated
and doesn't contain a normal Copyright year that should be updated, so I think
it is also OK to skip it.
Xi Ruoyao [Sat, 30 Dec 2023 13:40:11 +0000 (21:40 +0800)]
LoongArch: Provide fmin/fmax RTL pattern for vectors
We already had smin/smax RTL patterns using the vfmin/vfmax instructions.
But for smin/smax, it's unspecified what will happen if either operand
is a NaN. So we would not vectorize the loop with
-fno-finite-math-only (the default for all optimization levels except
-Ofast).
However, the LoongArch vfmin/vfmax instructions are IEEE-754-2008 conformant, so we
can also use them for fmin/fmax and vectorize the loop.
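For example, a loop of this shape (a hedged sketch, not one of the committed tests) can now be vectorized with vfmax even without -ffast-math:
#include <math.h>
void
vmax (double *restrict r, const double *restrict a,
      const double *restrict b, int n)
{
  for (int i = 0; i < n; i++)
    r[i] = fmax (a[i], b[i]);
}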
gcc/ChangeLog:
* config/loongarch/simd.md (fmax<mode>3): New define_insn.
(fmin<mode>3): Likewise.
(reduc_fmax_scal_<mode>3): New define_expand.
(reduc_fmin_scal_<mode>3): Likewise.
Patrick Palka [Wed, 3 Jan 2024 02:31:20 +0000 (21:31 -0500)]
libstdc++: testsuite: Reduce max_size_type.cc exec time [PR113175]
The adjustment to max_size_type.cc in r14-205-g83470a5cd4c3d2
inadvertently increased the execution time of this test by over 5x due
to making the two main loops actually run in the signed_p case instead
of being dead code.
To compensate, this patch cuts the relevant loops' range [-1000,1000] by
10x as proposed in the PR. This shouldn't significantly weaken the test
since the same important edge cases are still checked in the smaller range
and/or elsewhere. On my machine this reduces the test's execution time by
roughly 10x (and 1.6x relative to before r14-205).
PR testsuite/113175
libstdc++-v3/ChangeLog:
* testsuite/std/ranges/iota/max_size_type.cc (test02): Reduce
'limit' to 100 from 1000 and adjust 'log2_limit' accordingly.
(test03): Likewise.
Jun Sha (Joshua) [Fri, 29 Dec 2023 04:10:44 +0000 (12:10 +0800)]
RISC-V: Use vector_length_operand instead of csr_operand in vsetvl patterns
This patch replaces csr_operand by vector_length_operand in the vsetvl
patterns. This allows future changes in the vector code (i.e. in the
vector_length_operand predicate) without affecting scalar patterns that
use the csr_operand predicate.
gcc/ChangeLog:
* config/riscv/vector.md:
Use vector_length_operand for vsetvl patterns.
Co-authored-by: Jin Ma <jinma@linux.alibaba.com> Co-authored-by: Xianmiao Qu <cooper.qu@linux.alibaba.com> Co-authored-by: Christoph Müllner <christoph.muellner@vrull.eu>
Juzhe-Zhong [Tue, 2 Jan 2024 07:26:55 +0000 (15:26 +0800)]
RISC-V: Add simplification of dummy len and dummy mask COND_LEN_xxx pattern
In https://gcc.gnu.org/git/?p=gcc.git;a=commit;h=d1eacedc6d9ba9f5522f2c8d49ccfdf7939ad72d
I optimized the COND_LEN_xxx patterns with dummy len and dummy mask using too simple a solution, which
causes a redundant vsetvli in the following case:
vsetvli a5,a2,e8,m1,ta,ma
vle32.v v8,0(a0)
vsetivli zero,16,e32,m4,tu,mu ----> We should apply VLMAX instead of a CONST_INT AVL
slli a4,a5,2
vand.vv v0,v8,v16
vand.vv v4,v8,v12
vmseq.vi v0,v0,0
sub a2,a2,a5
vneg.v v4,v8,v0.t
vsetvli zero,a5,e32,m4,ta,ma
Actually, we should not elide the VLMAX situation that has an AVL in the range [0,31].
After removing the check above, we will have the following issue:
vsetivli zero,4,e32,m1,ta,ma
vlseg4e32.v v4,(a5)
vlseg4e32.v v12,(a3)
vsetvli a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
Since all the following operations (vfadd.vf ... etc.) are COND_LEN_xxx with dummy len and dummy mask,
we add the simplification of operations with dummy len and dummy mask into the VLMAX TA and MA policy.
So, after this patch, both cases now have optimal codegen:
case 1:
vsetvli a5,a2,e32,m1,ta,mu
vle32.v v2,0(a0)
slli a4,a5,2
vand.vv v1,v2,v3
vand.vv v0,v2,v4
sub a2,a2,a5
vmseq.vi v0,v0,0
vneg.v v1,v2,v0.t
vse32.v v1,0(a1)
This patch adds a new tuning option
'AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA', to consider fully
pipelined FMAs in reassociation. Also, set this option by default
for Ampere CPUs.
gcc/ChangeLog:
* config/aarch64/aarch64-tuning-flags.def
(AARCH64_EXTRA_TUNING_OPTION): New tuning option
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA.
* config/aarch64/aarch64.cc
(aarch64_override_options_internal): Set
param_fully_pipelined_fma according to tuning option.
* config/aarch64/tuning_models/ampere1.h: Add
AARCH64_EXTRA_TUNE_FULLY_PIPELINED_FMA to tune_flags.
* config/aarch64/tuning_models/ampere1a.h: Likewise.
* config/aarch64/tuning_models/ampere1b.h: Likewise.
* config/riscv/iterators.md: Add rotate insn name.
* config/riscv/riscv.md: Add new insns name for crypto vector.
* config/riscv/vector-iterators.md: Add new iterators for crypto vector.
* config/riscv/vector.md: Add the corresponding attr for crypto vector.
* config/riscv/vector-crypto.md: New file. The machine descriptions for crypto vector.
Roger Sayle [Sun, 31 Dec 2023 21:37:24 +0000 (21:37 +0000)]
i386: Tweak define_insn_and_split to fix FAIL of gcc.target/i386/pr43644-2.c
This patch resolves the failure of pr43644-2.c in the testsuite, a code
quality test I added back in July, that started failing as the code GCC
generates for 128-bit values (and their parameter passing) has been in
flux.
The function:
unsigned __int128 foo(unsigned __int128 x, unsigned long long y) {
return x+y;
}
2023-12-31 Uros Bizjak <ubizjak@gmail.com>
Roger Sayle <roger@nextmovesoftware.com>
gcc/ChangeLog
PR target/43644
* config/i386/i386.md (*add<dwi>3_doubleword_concat_zext): Tweak
order of instructions after split, to minimize number of moves.
Testing for mmix (a 64-bit target using Knuth's simulator). The test
is largely pruned for simulators, but still needs 5m57s on my laptop
from 3.5 years ago to run to successful completion. Perhaps slow
hosted targets could also have problems, so increase the timeout
limit, not just for simulators but for everyone, and by more than a
factor of 2.
* testsuite/20_util/hash/quality.cc: Increase timeout by a factor 3.
François Dumont [Wed, 27 Sep 2023 04:53:51 +0000 (06:53 +0200)]
libstdc++: [_Hashtable] Extend the small size optimization
A number of methods were still not using the small size optimization, which
is to prefer an O(N) linear search to a hash computation as long as N is small.
libstdc++-v3/ChangeLog:
* include/bits/hashtable.h: Move comment about all equivalent values
being next to each other in the class documentation header.
(_M_reinsert_node, _M_merge_unique): Implement small size optimization.
(_M_find_tr, _M_count_tr, _M_equal_range_tr): Likewise.
Add benches on insert with hint and before begin cache.
libstdc++-v3/ChangeLog:
* testsuite/performance/23_containers/insert/54075.cc: Add lookup on unknown entries
w/o copy to see potential impact of memory fragmentation enhancements.
* testsuite/performance/23_containers/insert/unordered_multiset_hint.cc: Enhance hash
functor to make it perfect, exactly 1 entry per bucket. Also use hash functor tagged as
slow or not to bench w/o hash code cache.
* testsuite/performance/23_containers/insert/unordered_set_hint.cc: New test case. Like
previous one but using std::unordered_set.
* testsuite/performance/23_containers/insert/unordered_set_range_insert.cc: New test case.
Check performance of range-insertion compared to individual insertions.
* testsuite/performance/23_containers/insert_erase/unordered_small_size.cc: Add same bench
but after a copy to demonstrate impact of enhancements regarding memory fragmentation.
Martin Uecker [Fri, 22 Dec 2023 16:32:34 +0000 (17:32 +0100)]
C: Fix type compatibility for structs with variable sized fields.
This fixes the test gcc.dg/gnu23-tag-4.c introduced by commit 23fee88f,
which fails for -march=... because DECL_FIELD_BIT_OFFSET is set
inconsistently for types with and without a variable-sized field. This
is fixed by testing DECL_ALIGN instead. The code is further
simplified by removing some unnecessary conditions, i.e. anon_field is
set unconditionally and all fields are assumed to be FIELD_DECLs.
Tamar Christina [Fri, 29 Dec 2023 15:58:29 +0000 (15:58 +0000)]
AArch64: Update costing for vector conversions [PR110625]
In gimple the operation
short _8;
double _9;
_9 = (double) _8;
denotes two operations on AArch64. First we have to widen from short to
long and then convert this integer to a double.
Currently however we only count the widen/truncate operations:
(double) _5 6 times vec_promote_demote costs 12 in body
(double) _5 12 times vec_promote_demote costs 24 in body
but not the actual conversion operation, which needs an additional 12
instructions in the attached testcase. Without this the attached testcase ends
up incorrectly thinking that it's beneficial to vectorize the loop at a very
high VF = 8 (4x unrolled).
Because we can't change the mid-end to account for this, the costing code in the
backend now keeps track of whether the previous operation was a
promotion/demotion and adjusts the expected number of instructions as follows:
1. If it's the first FLOAT_EXPR and the precision of the lhs and rhs are
different, double it, since we need to convert and promote.
2. If the previous operation was a demotion/promotion, reduce the
cost of the current operation by the amount we added extra in the last one.
with the patch we get:
(double) _5 6 times vec_promote_demote costs 24 in body
(double) _5 12 times vec_promote_demote costs 36 in body
which correctly accounts for 30 operations.
This fixes the 16% regression in imagick in SPECCPU 2017 reported on Neoverse N2
when using the new generic Armv9-a cost model.
gcc/ChangeLog:
PR target/110625
* config/aarch64/aarch64.cc (aarch64_vector_costs::add_stmt_cost):
Adjust throughput and latency calculations for vector conversions.
(class aarch64_vector_costs): Add m_num_last_promote_demote.
Xi Ruoyao [Fri, 29 Dec 2023 12:04:34 +0000 (20:04 +0800)]
LoongArch: Fix the format of bstrins_<mode>_for_ior_mask condition (NFC)
gcc/ChangeLog:
* config/loongarch/loongarch.md (bstrins_<mode>_for_ior_mask):
For the condition, remove unneeded trailing "\" and move "&&" to
follow GNU coding style. NFC.
However, the sliding-window algorithm just won't detect the pcalau12i/fld
pair to be optimized. Using a define_insn_and_rewrite in the combine pass
works around the issue.
* gcc.target/loongarch/explicit-relocs-auto-single-load-store-2.c:
New test.
* gcc.target/loongarch/explicit-relocs-auto-single-load-store-3.c:
New test.
The post-reload splitter currently allows xmm16+ registers with TARGET_EVEX512.
The splitter changes SFmode of the output operand to V4SFmode, but the vector
mode is currently unsupported in xmm16+ without TARGET_AVX512VL. lowpart_subreg
returns NULL_RTX in this case and the compilation fails with invalid RTX.
The patch removes support for x/ymm16+ registers with TARGET_EVEX512. The
support should be restored once ix86_hard_regno_mode_ok is fixed to allow
16-byte modes in x/ymm16+ with TARGET_EVEX512.
PR target/113133
gcc/ChangeLog:
* config/i386/i386.md
(TARGET_USE_VECTOR_FP_CONVERTS SF->DF float_extend splitter):
Do not handle xmm16+ with TARGET_EVEX512.
gcc/testsuite/ChangeLog:
* gcc.target/i386/pr113133-1.c: New test.
* gcc.target/i386/pr113133-2.c: New test.
Andrew Pinski [Fri, 29 Dec 2023 04:26:01 +0000 (20:26 -0800)]
Fix gen-vect-26.c testcase after loops with multiple exits [PR113167]
This fixes the gcc.dg/tree-ssa/gen-vect-26.c testcase by adding
`#pragma GCC novector` in front of the loop that is doing the checking
of the result. We only want to test whether the first loop can be
vectorized.
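A hedged sketch of the change (array names are made up, not taken from gen-vect-26.c):
#include <stdlib.h>
#define N 128
void
check_results (const unsigned char *out, const unsigned char *expected)
{
  /* Keep the checking loop scalar so only the computation loop in the
     testcase is a vectorization candidate.  */
  #pragma GCC novector
  for (int i = 0; i < N; i++)
    if (out[i] != expected[i])
      abort ();
}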
Committed as obvious after testing on x86_64-linux-gnu with -m32.
gcc/testsuite/ChangeLog:
PR testsuite/113167
* gcc.dg/tree-ssa/gen-vect-26.c: Mark the test/check loop
as novector.
Signed-off-by: Andrew Pinski <quic_apinski@quicinc.com>
Juzhe-Zhong [Wed, 27 Dec 2023 02:38:26 +0000 (10:38 +0800)]
RISC-V: Disallow transformation into VLMAX AVL for cond_len_xxx when length is in range [0, 31]
Notice we have this following situation:
vsetivli zero,4,e32,m1,ta,ma
vlseg4e32.v v4,(a5)
vlseg4e32.v v12,(a3)
vsetvli a5,zero,e32,m1,tu,ma ---> This is redundant since VLMAX AVL = 4 when it is fixed-vlmax
vfadd.vf v3,v13,fa0
vfadd.vf v1,v12,fa1
vfmul.vv v17,v3,v5
vfmul.vv v16,v1,v5
The root cause is that we blindly transform COND_LEN_xxx into VLMAX AVL when len == NUNITS.
However, we don't need to transform all of them, since when len is in the range [0,31] we don't need to
consume a scalar register.
gcc/ChangeLog:
* config/riscv/riscv-v.cc (is_vlmax_len_p): New function.
(expand_load_store): Disallow transformation into VLMAX when len is in range of [0,31].
(expand_cond_len_op): Ditto.
(expand_gather_scatter): Ditto.
(expand_lanes_load_store): Ditto.
(expand_fold_extract_last): Ditto.
gcc/testsuite/ChangeLog:
* gcc.target/riscv/rvv/autovec/post-ra-avl.c: Adapt test.
* gcc.target/riscv/rvv/base/vf_avl-2.c: New test.
Move ix86_expand_unary_operator from i386.cc to i386-expand.cc, re-arrange
prototypes and do some cosmetic changes with the usage of TARGET_APX_NDD.
No functional changes.
gcc/ChangeLog:
* config/i386/i386.cc (ix86_unary_operator_ok): Move from here...
* config/i386/i386-expand.cc (ix86_unary_operator_ok): ... to here.
* config/i386/i386-protos.h: Re-arrange ix86_{unary|binary}_operator_ok
and ix86_expand_{unary|binary}_operator prototypes.
* config/i386/i386.md: Cosmetic changes with the usage of
TARGET_APX_NDD in ix86_expand_{unary|binary}_operator
and ix86_{unary|binary}_operator_ok function calls.
Juzhe-Zhong [Thu, 28 Dec 2023 01:33:32 +0000 (09:33 +0800)]
RISC-V: Make dynamic LMUL cost model more accurate for conversion codes
Notice the current dynamic LMUL cost model is not accurate for conversion codes.
Refine it for them; a current case now changes from choosing LMUL = 4 to LMUL = 8.
Xi Ruoyao [Tue, 26 Dec 2023 20:28:56 +0000 (04:28 +0800)]
LoongArch: Fix infinite secondary reloading of FCCmode [PR113148]
The GCC internal doc says:
X might be a pseudo-register or a 'subreg' of a pseudo-register,
which could either be in a hard register or in memory. Use
'true_regnum' to find out; it will return -1 if the pseudo is in
memory and the hard register number if it is in a register.
So "MEM_P (x)" is not enough for checking if we are reloading from/to
the memory. This bug has caused reload pass to stall and finally ICE
complaining with "maximum number of generated reload insns per insn
achieved", since r14-6814.
Check if "true_regnum (x)" is -1 besides "MEM_P (x)" to fix the issue.
gcc/ChangeLog:
PR target/113148
* config/loongarch/loongarch.cc (loongarch_secondary_reload):
Check if regno == -1 besides MEM_P (x) for reloading FCCmode
from/to FPR to/from memory.
gcc/testsuite/ChangeLog:
PR target/113148
* gcc.target/loongarch/pr113148.c: New test.
* gcc.target/loongarch/rotl-with-rotr.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-vrotr-d.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-b.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-h.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-w.c: New test.
* gcc.target/loongarch/rotl-with-xvrotr-d.c: New test.
* config/riscv/riscv-vector-costs.cc (is_gimple_assign_or_call): New function.
(get_first_lane_point): Ditto.
(get_last_lane_point): Ditto.
(max_number_of_live_regs): Refine live point dump.
(compute_estimated_lmul): Make unknown NITERS loop be aware of liveness.
(costs::better_main_loop_than_p): Ditto.
* config/riscv/riscv-vector-costs.h (struct stmt_point): Add new member.
gcc/testsuite/ChangeLog:
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-1.c:
* gcc.dg/vect/costmodel/riscv/rvv/pr113112-3.c: New test.
Chenghui Pan [Fri, 22 Dec 2023 08:18:44 +0000 (16:18 +0800)]
LoongArch: Fix ICE when passing two same vector argument consecutively
The following code causes an ICE on the LoongArch target:
#include <lsxintrin.h>
extern void bar (__m128i, __m128i);
__m128i a;
void
foo ()
{
bar (a, a);
}
It is caused by a missing constraint definition in mov<mode>_lsx. This
patch fixes the template and removes the unnecessary processing from
the loongarch_split_move () function.
This patch also cleans up redundant definitions related to
loongarch_split_move () and loongarch_split_move_p ().
gcc/ChangeLog:
* config/loongarch/lasx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
* config/loongarch/loongarch-protos.h
(loongarch_split_move): Remove unnecessary argument.
(loongarch_split_move_insn_p): Delete.
(loongarch_split_move_insn): Delete.
* config/loongarch/loongarch.cc
(loongarch_split_move_insn_p): Delete.
(loongarch_load_store_insns): Use loongarch_split_move_p
directly.
(loongarch_split_move): remove the unnecessary processing.
(loongarch_split_move_insn): Delete.
* config/loongarch/lsx.md: Use loongarch_split_move and
loongarch_split_move_p directly.
gcc/testsuite/ChangeLog:
* gcc.target/loongarch/vector/lsx/lsx-mov-1.c: New test.
Chenghui Pan [Fri, 22 Dec 2023 08:22:03 +0000 (16:22 +0800)]
LoongArch: Fix insn output of vec_concat templates for LASX.
When investigating the failure of gcc.dg/vect/slp-reduc-sad.c, the
instruction block generated by vec_concatv32qi (which is
generated by vec_initv32qiv16qi) at the entrance of the foo () function
causes the reversal of the vec_initv32qiv16qi operation's high and
low 128-bit parts.
According to other targets' similar implementations and the LSX implementation for the following
RTL representation, the current definition of "vec_concat<mode>" in lasx.md
is wrong:
(set (op0) (vec_concat (op1) (op2)))
For correct behavior, the last argument of xvpermi.q should be 0x02
instead of 0x20. This patch fixes the issue and cleans up the vec_concat
template implementation.
Li Wei [Mon, 25 Dec 2023 03:20:23 +0000 (11:20 +0800)]
LoongArch: Fixed bug in *bstrins_<mode>_for_ior_mask template.
We found that using the latest gcc causes a miscompare error
when running the spec2006 400.perlbench test with -flto turned on. After testing,
it was found that only the LoongArch architecture reports the error.
The first bad commit was located through git bisect as
r14-3773-g5b857e87201335. Through debugging, it was found that the problem
was that the split condition of the *bstrins_<mode>_for_ior_mask template was
empty, while it should actually be consistent with the insn condition.
Haochen Gui [Wed, 27 Dec 2023 02:32:21 +0000 (10:32 +0800)]
rs6000: Clean up the pre-checkings of expand_block_compare
Remove the P7 CPU test, as only P7 and above can enter this function and P7 LE is
excluded by the check of targetm.slow_unaligned_access on word_mode.
Also, performance testing shows the expansion of block compare is better than
the library call on P7 BE when the length is from 16 bytes to 64 bytes.
gcc/
* config/rs6000/rs6000-string.cc (expand_block_compare): Assert
only P7 above can enter this function. Remove P7 CPU test and let
P7 BE do the expand.
Haochen Gui [Wed, 27 Dec 2023 02:30:06 +0000 (10:30 +0800)]
rs6000: Correct definition of macro of fixed point efficient unaligned
The macro TARGET_EFFICIENT_OVERLAPPING_UNALIGNED is used in rs6000-string.cc
to guard platforms which are efficient on fixed-point unaligned
load/store. It was originally defined by TARGET_EFFICIENT_UNALIGNED_VSX,
which is enabled from P8 on and can be disabled by the -mno-vsx option. So the
definition is improper. This patch corrects it and calls
slow_unaligned_access to judge whether fixed-point unaligned load/store is
efficient or not.