Whenever we advance a subiter we first call `iterator_is_null()`. This
is not needed though because we only ever advance subiters which have
entries in the priority queue, and we do not add entries to the priority
queue when the subiter has been exhausted.
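
To illustrate why the check is redundant, here is a minimal sketch of the
control flow the merged iterator relies on (simplified, with hypothetical
helper names, not the actual reftable sources): a subiter is only advanced
right after its entry was popped from the priority queue, and an exhausted
subiter is never pushed back, so the iterator being advanced can never be a
zeroed-out one.

    /*
     * Minimal sketch, not the actual reftable code: the `pq_*` and
     * `subiter_next` helpers are hypothetical. The point is that a
     * subiter index only reaches the "advance" step via a pqueue
     * entry, and exhausted subiters are never pushed back.
     */
    static int merged_next_sketch(struct merged_iter *mi,
                                  struct reftable_record *rec)
    {
            struct pq_entry e;
            int err;

            if (pq_is_empty(&mi->pq))
                    return 1; /* no live subiters left, iteration done */

            /* Pop the smallest record; its subiter is guaranteed live. */
            e = pq_pop(&mi->pq);
            *rec = e.rec;

            /* Advance only the subiter we just popped from ... */
            err = subiter_next(&mi->subiters[e.index]);
            if (err < 0)
                    return err;

            /* ... and push it back only while it still yields records. */
            if (err == 0)
                    pq_push(&mi->pq, e.index, &mi->subiters[e.index].rec);

            return 0;
    }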
Drop the check as well as the now-unused function. This results in a
surprisingly big speedup:
Benchmark 1: show-ref: single matching ref (revision = HEAD~)
  Time (mean ± σ):     138.1 ms ±   4.4 ms    [User: 135.1 ms, System: 2.8 ms]
  Range (min … max):   133.4 ms … 167.3 ms    1000 runs

Benchmark 2: show-ref: single matching ref (revision = HEAD)
  Time (mean ± σ):     134.4 ms ±   4.2 ms    [User: 131.5 ms, System: 2.8 ms]
  Range (min … max):   130.0 ms … 164.0 ms    1000 runs

Summary
  show-ref: single matching ref (revision = HEAD) ran
    1.03 ± 0.05 times faster than show-ref: single matching ref (revision = HEAD~)
Signed-off-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
#include "reader.h"
#include "reftable-error.h"
-int iterator_is_null(struct reftable_iterator *it)
-{
- return !it->ops;
-}
-
static void filtering_ref_iterator_close(void *iter_arg)
{
struct filtering_ref_iterator *fri = iter_arg;
#include "reftable-iterator.h"
#include "reftable-generic.h"
-/* Returns true for a zeroed out iterator, such as the one returned from
- * iterator_destroy. */
-int iterator_is_null(struct reftable_iterator *it);
-
/* iterator that produces only ref records that point to `oid` */
struct filtering_ref_iterator {
int double_check;
reftable_free(mi->subiters);
}
-static int merged_iter_advance_nonnull_subiter(struct merged_iter *mi,
- size_t idx)
+static int merged_iter_advance_subiter(struct merged_iter *mi, size_t idx)
{
struct pq_entry e = {
.index = idx,
return 0;
}
-static int merged_iter_advance_subiter(struct merged_iter *mi, size_t idx)
-{
- if (iterator_is_null(&mi->subiters[idx].iter))
- return 0;
- return merged_iter_advance_nonnull_subiter(mi, idx);
-}
-
static int merged_iter_next_entry(struct merged_iter *mi,
struct reftable_record *rec)
{