The check for the type being unsigned_char_type_node seems unnecessary and sometimes gets in the way. It also causes a failure with a patch I am working on for match.pd.
Before my patch the IR was:
_1 = BIT_FIELD_REF <s, 8, 16>;
_2 = _1 & 1;
_3 = _2 != 0;
_4 = (int) _3;
__analyzer_eval (_4);
Here _2 is of unsigned char type, so the BIT_AND_EXPR is done in unsigned char.
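For reference, a reduced testcase of roughly this shape can produce IR like the above. This is only a hypothetical sketch (the struct layout and field names are assumptions, not the actual testcase), using __analyzer_eval as declared in the analyzer testsuite's analyzer-decls.h:

/* Hypothetical reduced example; struct layout and names are assumed.  */
#include "analyzer-decls.h"

struct st
{
  unsigned int pad : 16;
  unsigned int flag : 1;	/* Lives at bit offset 16 of the struct.  */
};

void
test (struct st s)
{
  /* The one-bit field read can be lowered to a BIT_FIELD_REF of the
     containing byte (8 bits at offset 16) followed by a mask with 1.  */
  __analyzer_eval (s.flag);
}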
After my patch we have:
_1 = BIT_FIELD_REF <s, 8, 16>;
_2 = (int) _1;
_3 = _2 & 1;
__analyzer_eval (_3);
But in this case the BIT_AND_EXPR is in int type rather than unsigned char, so the unsigned_char_type_node check makes maybe_undo_optimize_bit_field_compare bail out.
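The function's decision is otherwise driven by the constant mask (via TREE_INT_CST_LOW) and the binding map, not by the type of the BIT_AND_EXPR, which is why dropping the type check is safe. As a rough illustration of the kind of type-independent test involved, here is a minimal sketch of checking that a mask is a single contiguous run of set bits (a hypothetical standalone helper, not the analyzer's exact code):

#include <stdbool.h>

/* Return true iff MASK is a single contiguous run of set bits
   (e.g. 0x1, 0x6, 0xff0).  Hypothetical helper, not the analyzer's
   exact implementation.  */
static bool
mask_is_contiguous_range (unsigned long long mask)
{
  if (mask == 0)
    return false;
  /* Drop the trailing zeros; the remaining value must be 2^n - 1,
     i.e. adding 1 to it leaves no bits in common with it.  */
  unsigned long long shifted = mask >> __builtin_ctzll (mask);
  return (shifted & (shifted + 1)) == 0;
}

Note that nothing here depends on whether the masked expression was computed in unsigned char or in int.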
OK? Bootstrapped and tested on x86_64-linux-gnu with no regressions.
gcc/analyzer/ChangeLog:
	* region-model-manager.cc (maybe_undo_optimize_bit_field_compare):
	Remove the check for the type being unsigned_char_type_node.
 				       tree cst,
 				       const svalue *arg1)
 {
-  if (type != unsigned_char_type_node)
-    return NULL;
-
   const binding_map &map = compound_sval->get_map ();
   unsigned HOST_WIDE_INT mask = TREE_INT_CST_LOW (cst);
   /* If "mask" is a contiguous range of set bits, see if the