This mis-initialization caused the executor optimization to kick in sooner than intended. It also set the lower 4 bits of the counter to `1`, even though those bits are reserved (the actual counter value lives in the upper 12 bits).
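The bit layout described above can be sketched as follows. This is a hypothetical illustration, not the actual CPython macros: the helper names `make_counter` and `counter_value` are invented here, but the split (value in the upper 12 bits, 4 reserved low bits) follows the description above.

```python
BACKOFF_BITS = 4  # lower 4 bits are reserved; counter value is in the upper 12

def make_counter(value, backoff=0):
    # Pack a counter value into the upper 12 bits, backoff into the lower 4.
    return (value << BACKOFF_BITS) | backoff

def counter_value(counter):
    # Recover the counter value from the upper 12 bits.
    return counter >> BACKOFF_BITS

# Initializing the cache entry to the raw value 1 (the bug) sets a reserved
# low bit rather than encoding a counter value of 1:
buggy = 1
assert counter_value(buggy) == 0   # reads back as a counter value of 0
assert make_counter(1) == 16       # the correct encoding of value 1
```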
opt = _testinternalcapi.get_uop_optimizer()
with temporary_optimizer(opt):
- testfunc([1, 2, 3])
+ testfunc(range(10))
ex = get_first_executor(testfunc)
self.assertIsNotNone(ex)
--- /dev/null
+Change the initialization of inline cache entries so that the cache entry for ``JUMP_BACKWARD`` is initialized to zero, instead of the ``adaptive_counter_warmup()`` value used for all other instructions. Uniquely among instructions, this counter counts up from zero.
assert(opcode < MIN_INSTRUMENTED_OPCODE);
int caches = _PyOpcode_Caches[opcode];
if (caches) {
- instructions[i + 1].cache = adaptive_counter_warmup();
+ // JUMP_BACKWARD counter counts up from 0 until it is > backedge_threshold
+ instructions[i + 1].cache =
+ opcode == JUMP_BACKWARD ? 0 : adaptive_counter_warmup();
i += caches;
}
}
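The behavior the C comment describes can be sketched like this. This is a hedged illustration, not the interpreter's actual code: the threshold value `16`, the helper `should_optimize`, and the fixed increment are assumptions made here for demonstration; only the shape (count up from zero, trigger once the value exceeds `backedge_threshold`) comes from the diff above.

```python
BACKOFF_BITS = 4
INCREMENT = 1 << BACKOFF_BITS  # bump the value in the upper 12 bits by 1

def should_optimize(counter, backedge_threshold=16):
    # The optimizer kicks in once the counter value exceeds the threshold.
    return (counter >> BACKOFF_BITS) > backedge_threshold

counter = 0  # JUMP_BACKWARD's cache entry now starts at zero
for _ in range(17):
    counter += INCREMENT  # one backedge taken per loop iteration
assert should_optimize(counter)  # fires only after enough backedges
```

Starting from `adaptive_counter_warmup()` (or with a stray low bit set) instead of zero skews when this condition first fires, which is the premature-optimization symptom described at the top of this change.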