In 106140a99f (builtin/fast-import: fix segfault with unsafe SHA1
backend, 2024-12-30) and 9218c0bfe1 (bulk-checkin: fix segfault with
unsafe SHA1 backend, 2024-12-30), we observed the effects of failing to
initialize a hashfile_checkpoint with the same hash function
implementation as is used by the hashfile it is used to checkpoint.

While both 106140a99f and 9218c0bfe1 work around the immediate crash,
changing the hash function implementation within the hashfile API to,
for example, the non-unsafe variant would re-introduce the crash. This
is a result of the tight coupling between initializing hashfiles and
hashfile_checkpoints.
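
To see why the coupling bites, recall how checkpointing works (a rough
sketch, not the verbatim csum-file.c code): the checkpoint's context is
cloned from the hashfile's context using the hashfile's own
implementation, so the two contexts must belong to the same one:

    void hashfile_checkpoint(struct hashfile *f,
                             struct hashfile_checkpoint *checkpoint)
    {
        hashflush(f);
        checkpoint->offset = f->total;
        /* assumes checkpoint->ctx was set up by f->algop's init_fn */
        f->algop->clone_fn(&checkpoint->ctx, &f->ctx);
    }

If a caller instead initializes checkpoint.ctx with, say,
'the_hash_algo->unsafe_init_fn()' while 'f->algop' points at a
different implementation, 'clone_fn' operates on a context with the
wrong layout, and we crash.
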
Introduce and use a new function which ensures that both parts of a
hashfile and hashfile_checkpoint pair use the same hash function
implementation to avoid such crashes.
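
For illustration, a caller now pairs the two like so (a minimal sketch;
'f' stands for any hashfile, and 'need_rollback' is a hypothetical
condition):

    struct hashfile_checkpoint checkpoint;

    hashfile_checkpoint_init(f, &checkpoint); /* ctx set up via f->algop */
    hashfile_checkpoint(f, &checkpoint);      /* snapshot offset and ctx */
    /* ... stream more data into 'f' ... */
    if (need_rollback)
        hashfile_truncate(f, &checkpoint);    /* rewind to the snapshot */

Whichever implementation 'f' was created with, the checkpoint now
follows it automatically.
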
A few things worth noting:

  - In the change to builtin/fast-import.c::stream_blob(), we can see
    that by removing the explicit reference to
    'the_hash_algo->unsafe_init_fn()', we are hardened against the
    hashfile API changing away from the_hash_algo (or its unsafe
    variant) in the future.

  - The bulk-checkin code no longer needs to explicitly zero-initialize
    the hashfile_checkpoint, since that is now done as a result of
    calling 'hashfile_checkpoint_init()'.

  - Also in the bulk-checkin code, we add an additional call to
    prepare_to_stream() outside of the main loop in order to initialize
    'state->f' so we know which hash function implementation to use
    when calling 'hashfile_checkpoint_init()'.

    This is OK, since subsequent 'prepare_to_stream()' calls are noops
    (see the sketch after this list). However, we only need to call
    'prepare_to_stream()' when we have the HASH_WRITE_OBJECT bit set in
    our flags. Without that bit, calling 'prepare_to_stream()' does not
    assign 'state->f', so we have nothing to initialize.

  - Other uses of the 'checkpoint' in 'deflate_blob_to_pack()' are
    appropriately guarded.
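
The noop behavior comes from prepare_to_stream()'s early return,
roughly like this (paraphrased from bulk-checkin.c, with the body
elided):

    static void prepare_to_stream(struct bulk_checkin_packfile *state,
                                  unsigned flags)
    {
        if (!(flags & HASH_WRITE_OBJECT) || state->f)
            return; /* noop on repeat calls, and when not writing */

        state->f = create_tmp_packfile(&state->pack_tmp_name);
        /* ... set up pack_idx_opts and write the pack header ... */
    }

This shows both halves of the claim above: a second call returns early
because 'state->f' is already set, and without HASH_WRITE_OBJECT the
function never assigns 'state->f' at all.
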
Helped-by: Patrick Steinhardt <ps@pks.im>
Signed-off-by: Taylor Blau <me@ttaylorr.com>
Signed-off-by: Junio C Hamano <gitster@pobox.com>
---

diff --git a/builtin/fast-import.c b/builtin/fast-import.c
--- a/builtin/fast-import.c
+++ b/builtin/fast-import.c
@@ ... @@ stream_blob()
             || (pack_size + PACK_SIZE_THRESHOLD + len) < pack_size)
                 cycle_packfile();
 
-        the_hash_algo->unsafe_init_fn(&checkpoint.ctx);
+        hashfile_checkpoint_init(pack_file, &checkpoint);
         hashfile_checkpoint(pack_file, &checkpoint);
         offset = checkpoint.offset;
diff --git a/bulk-checkin.c b/bulk-checkin.c
--- a/bulk-checkin.c
+++ b/bulk-checkin.c
@@ ... @@ deflate_blob_to_pack()
         git_hash_ctx ctx;
         unsigned char obuf[16384];
         unsigned header_len;
-        struct hashfile_checkpoint checkpoint = {0};
+        struct hashfile_checkpoint checkpoint;
         struct pack_idx_entry *idx = NULL;
 
         seekback = lseek(fd, 0, SEEK_CUR);
@@ ... @@ deflate_blob_to_pack()
                                           OBJ_BLOB, size);
         the_hash_algo->init_fn(&ctx);
         the_hash_algo->update_fn(&ctx, obuf, header_len);
-        the_hash_algo->unsafe_init_fn(&checkpoint.ctx);
 
         /* Note: idx is non-NULL when we are writing */
-        if ((flags & HASH_WRITE_OBJECT) != 0)
+        if ((flags & HASH_WRITE_OBJECT) != 0) {
                 CALLOC_ARRAY(idx, 1);
+                prepare_to_stream(state, flags);
+                hashfile_checkpoint_init(state->f, &checkpoint);
+        }
+
         already_hashed_to = 0;
 
         while (1) {
diff --git a/csum-file.c b/csum-file.c
--- a/csum-file.c
+++ b/csum-file.c
@@ ... @@ hashfd()
         return hashfd_internal(fd, name, tp, 8 * 1024);
 }
 
+void hashfile_checkpoint_init(struct hashfile *f,
+                              struct hashfile_checkpoint *checkpoint)
+{
+        memset(checkpoint, 0, sizeof(*checkpoint));
+        f->algop->init_fn(&checkpoint->ctx);
+}
+
 void hashfile_checkpoint(struct hashfile *f, struct hashfile_checkpoint *checkpoint)
 {
         hashflush(f);
diff --git a/csum-file.h b/csum-file.h
--- a/csum-file.h
+++ b/csum-file.h
@@ ... @@ struct hashfile_checkpoint
         git_hash_ctx ctx;
 };
 
+void hashfile_checkpoint_init(struct hashfile *, struct hashfile_checkpoint *);
 void hashfile_checkpoint(struct hashfile *, struct hashfile_checkpoint *);
 int hashfile_truncate(struct hashfile *, struct hashfile_checkpoint *);