Wanalyzer-undefined-behavior-strtok
Common Var(warn_analyzer_undefined_behavior_strtok) Init(1) Warning
-Warn about code paths in in which a call is made to strtok with undefined behavior.
+Warn about code paths in which a call is made to strtok with undefined behavior.
Wanalyzer-use-after-free
Common Var(warn_analyzer_use_after_free) Init(1) Warning
}
}
-/* Attempt to to use R to replay SUMMARY into this object.
+/* Attempt to use R to replay SUMMARY into this object.
Return true if it is possible. */
bool
dest_state.m_region_model->unset_dynamic_extents (reg);
}
-/* Attempt to to use R to replay SUMMARY into this object.
+/* Attempt to use R to replay SUMMARY into this object.
Return true if it is possible. */
bool
update_for_return_gcall (call_stmt, ctxt);
}
-/* Attempt to to use R to replay SUMMARY into this object.
+/* Attempt to use R to replay SUMMARY into this object.
Return true if it is possible. */
bool
/* Helper function. For a tagged type, it finds the declaration
- for a visible tag declared in the the same scope if such a
+ for a visible tag declared in the same scope if such a
declaration exists. */
static tree
previous_tag (tree type)
(C23 6.7.2.2/5), but may pose portability problems. */
else if (enum_and_int_p
&& TREE_CODE (newdecl) != TYPE_DECL
- /* Don't warn about about acc_on_device built-in redeclaration,
+ /* Don't warn about acc_on_device built-in redeclaration,
the built-in is declared with int rather than enum because
the enum isn't intrinsic. */
&& !(TREE_CODE (olddecl) == FUNCTION_DECL
if (tune)
return res;
- /* Add any features that should be be present, but can't be verified using
+ /* Add any features that should be present, but can't be verified using
the /proc/cpuinfo "Features" list. */
extension_flags |= unchecked_extension_flags & default_flags;
/* TODO: We only do AVL propagation for VLMAX AVL with tail
agnostic policy since we have missed-LEN information partial
- autovectorization. We could add more more AVL propagation
+ autovectorization. We could add more AVL propagation
for intrinsic codes in the future. */
if (vlmax_ta_p (insn->rtl ()))
m_candidates.safe_push (std::make_pair (AVLPROP_VLMAX_TA, insn));
#define BASE_NAME_MAX_LEN 16
-/* Base class for for build. */
+/* Base class for build. */
struct build_base : public function_shape
{
void build (function_builder &b,
/* For some target specific vectorization cost which can't be handled per stmt,
we check the requisite conditions and adjust the vectorization cost
- accordingly if satisfied. One typical example is to model model and adjust
+ accordingly if satisfied. One typical example is to model and adjust
loop_len cost for known_lt (NITERS, VF). */
void
in bytes. If COOKIE_SIZE is NULL, return array type
ELT_TYPE[FULL_SIZE / sizeof(ELT_TYPE)], otherwise return
struct { size_t[COOKIE_SIZE/sizeof(size_t)]; ELT_TYPE[N]; }
- where N is is computed such that the size of the struct fits into FULL_SIZE.
+ where N is computed such that the size of the struct fits into FULL_SIZE.
If ARG_SIZE is non-NULL, it is the first argument to the new operator.
It should be passed if ELT_TYPE is zero sized type in which case FULL_SIZE
will be also 0 and so it is not possible to determine the actual array
/* The co_return expression is used to support coroutines.
Op0 is the original expr, can be void (for use in diagnostics)
- Op1 is the promise return_xxxx call for for the expression given. */
+ Op1 is the promise return_xxxx call for the expression given. */
DEFTREECODE (CO_RETURN_EXPR, "co_return", tcc_statement, 2)
}
/* Walker to patch up the BLOCK_NODE hierarchy after the above surgery.
- *DP is is the parent block. */
+ *DP is the parent block. */
static tree
fixup_blocks_walker (tree *tp, int *walk_subtrees, void *dp)
if (DECL_STATIC_FUNCTION_P (decl1) || DECL_STATIC_FUNCTION_P (decl2))
{
/* Note C++20 DR2445 extended the above to static member functions, but
- I think think the old G++ behavior of just skipping the object
+ I think the old G++ behavior of just skipping the object
parameter when comparing to a static member function was better, so
let's stick with that for now. This is CWG2834. --jason 2023-12 */
if (DECL_OBJECT_MEMBER_FUNCTION_P (decl1))
/* Maybe add in default template args. This seems like a flaw in the
specification in terms of partial specialization, since it says the
- partial specialization has the the template parameter list of A, but a
+ partial specialization has the template parameter list of A, but a
partial specialization can't have default targs. */
targs = coerce_template_parms (tparms, targs, tmpl, tf_none);
if (targs == error_mark_node)
// typename pair<T, U>::first_type void f(T, U);
//
// Here, it is unlikely that there is a partial specialization of
-// pair constrained for for Integral and Floating_point arguments.
+// pair constrained for Integral and Floating_point arguments.
//
// The general rule is: if a constrained specialization with matching
// constraints is found return that type. Also note that if TYPE is not a
initializer is a binding of the iteration variable, save
that location. Any of these locations in the initialization clause
for the current nested loop are better than using the argument locus,
- that points to the "for" of the the outermost loop in the nest. */
+ that points to the "for" of the outermost loop in the nest. */
if (init && EXPR_HAS_LOCATION (init))
elocus = EXPR_LOCATION (init);
else if (decl && INDIRECT_REF_P (decl) && EXPR_HAS_LOCATION (decl))
frames, and to emit events showing the inlined calls.
With @option{-fno-analyzer-undo-inlining} this attempt to reconstruct
-the original frame information can be be disabled, which may be of help
+the original frame information can be disabled, which may be of help
when debugging issues in the analyzer.
@item -fanalyzer-verbose-edges
@item -minline-memops-threshold=@var{bytes}
Specifies a size threshold in bytes at or below which memmove, memcpy
and memset shall always be expanded inline. Operations dealing with
-sizes larger than this threshold would have to be be implemented using
+sizes larger than this threshold would have to be implemented using
a library call instead of being expanded inline, but since BPF doesn't
allow libcalls, exceeding this threshold results in a compile-time
error. The default is @samp{1024} bytes.
the options-processing script will declare @code{TARGET_@var{thisname}},
@code{TARGET_@var{name}_P} and @code{TARGET_@var{name}_OPTS_P} macros:
@code{TARGET_@var{thisname}} is 1 when the option is active and 0 otherwise,
-@code{TARGET_@var{name}_P} is similar to @code{TARGET_@var{name}} but take an
-argument as @samp{target_flags}, and and @code{TARGET_@var{name}_OPTS_P} also
-similar to @code{TARGET_@var{name}} but take an argument as @code{gcc_options}.
+@code{TARGET_@var{name}_P} is similar to @code{TARGET_@var{name}} but takes an
+argument as @samp{target_flags}, and @code{TARGET_@var{name}_OPTS_P} is also
+similar to @code{TARGET_@var{name}} but takes an argument as @code{gcc_options}.
@item Enum(@var{name})
The option's argument is a string from the set of strings associated
BFmode -> SFmode -> HFmode conversion where SFmode
has superset of BFmode values. We don't need
to handle sNaNs by raising exception and turning
- into into qNaN though, as that can be done in the
+ them into qNaN though, as that can be done in the
SFmode -> HFmode conversion too. */
rtx temp = gen_reg_rtx (SFmode);
int save_flag_finite_math_only = flag_finite_math_only;
htab_t GTY((skip)) value_histograms;
/* Annotated gconds so that basic conditions in the same expression map to
- the same same uid. This is used for condition coverage. */
+ the same uid. This is used for condition coverage. */
hash_map <gcond*, unsigned> *GTY((skip)) cond_uids;
/* For function.cc. */
}
}
-/* Increment totals in COVERAGE according to to block BLOCK. */
+/* Increment totals in COVERAGE according to block BLOCK. */
static void
add_condition_counts (coverage_info *coverage, const block_info *block)
return m_tab[v];
}
-// Process phi node PHI to see if it it part of a group.
+// Process phi node PHI to see if it is part of a group.
void
phi_analyzer::process_phi (gphi *phi)
The fields in ``fields`` need to be the same objects that were used
to create the struct.
- Each value has to have have the same unqualified type as the field
+ Each value has to have the same unqualified type as the field
it is applied to.
A NULL value element in ``values`` is a shorthand for zero initialization
return true;
}
-/* A backwards confluence function. Update the the bb_info single_succ
+/* A backwards confluence function. Update the bb_info single_succ
field for E's source block, based on changes to E's destination block.
At the end of the dataflow problem, single_succ is the single mode
that all successors require (directly or indirectly), or no_mode
use_info *next_any_insn_use () const;
// Return the next use by a debug instruction, or null if none.
- // This is only valid if if is_in_debug_insn ().
+ // This is only valid if is_in_debug_insn ().
use_info *next_debug_insn_use () const;
// Return the previous use by a phi node in the list, or null if none.
|/ \
T F
- T has has multiple incoming edges and is the outcome of a short circuit,
+ T has multiple incoming edges and is the outcome of a short circuit,
with top = a, bot = b. The top node (a) is masked when the edge (b, T) is
taken.
The masking table is represented as two bitfields per term in the expression
with the index corresponding to the term in the Boolean expression.
a || b && c becomes the term vector [a b c] and the masking table [a[0]
- a[1] b[0] ...]. The kth bit of a masking vector is set if the the kth term
+ a[1] b[0] ...]. The kth bit of a masking vector is set if the kth term
is masked by taking the edge.
The out masks are in uint64_t (the practical maximum for gcov_type_node for
_3 = i_6 != 0;
Here, carg is 4, oarg is 6, crhs is 0, and because
(4 != 0) == (6 != 0), we don't care if i_6 is 4 or 6, both
- have the same outcome. So, can can optimize this to:
+ have the same outcome. So, we can optimize this to:
_3 = i_2(D) != 0;
If the single imm use of phi result >, >=, < or <=, similarly
we can check if both carg and oarg compare the same against
if (SSA_NAME_OCCURS_IN_ABNORMAL_PHI (PHI_RESULT (phi)))
return set_ssa_val_to (PHI_RESULT (phi), PHI_RESULT (phi));
- /* We track whether a PHI was CSEd to to avoid excessive iterations
+ /* We track whether a PHI was CSEd to avoid excessive iterations
that would be necessary only because the PHI changed arguments
but not value. */
if (!inserted)
m_list.safe_push (std::make_pair (e->src->index, e->dest->index));
}
-// Return true if all uses of NAME are dominated by by block BB. 1 use
+// Return true if all uses of NAME are dominated by block BB. 1 use
// is allowed in block BB, This is one we hope to remove.
// ie
// _2 = _1 & 7;
}
}
-// If the the mask can be trivially converted to a range, do so and
+// If the mask can be trivially converted to a range, do so and
// return TRUE.
bool
return true;
}
-/* Set INIT, STEP, and DIRECTION the the corresponding values of NAME
+/* Set INIT, STEP, and DIRECTION to the corresponding values of NAME
within LOOP, and return TRUE. Otherwise return FALSE, and set R to
the conservative range of NAME within the loop. */