Karthik Nayak [Mon, 19 May 2025 09:58:09 +0000 (11:58 +0200)]
receive-pack: use batched reference updates
The reference updates performed as part of 'git-receive-pack(1)' take
place one at a time. For each reference update, a new transaction is
created and committed. This is necessary to ensure we can allow
individual updates to fail without failing the entire command. The
command also supports an 'atomic' mode, which uses a single transaction
to update all of the references. But this mode has an all-or-nothing
approach, where if a single update fails, all updates would fail.
In 23fc8e4f61 (refs: implement batch reference update support,
2025-04-08), we introduced a new mechanism to batch reference updates.
Under the hood, this uses a single transaction to perform a batch of
reference updates, while allowing only individual updates to fail.
Utilize this newly introduced batch update mechanism in
'git-receive-pack(1)'. This provides a significant bump in performance,
especially when dealing with repositories with a large number of
references.
With the reftable backend there is an 18x performance improvement when
performing receive-pack with 10000 refs:
Benchmark 1: receive: many refs (refformat = reftable, refcount = 10000, revision = master)
Time (mean ± σ): 4.276 s ± 0.078 s [User: 0.796 s, System: 3.318 s]
Range (min … max): 4.185 s … 4.430 s 10 runs
Benchmark 2: receive: many refs (refformat = reftable, refcount = 10000, revision = HEAD)
Time (mean ± σ): 235.4 ms ± 6.9 ms [User: 75.4 ms, System: 157.3 ms]
Range (min … max): 228.5 ms … 254.2 ms 11 runs
Summary
receive: many refs (refformat = reftable, refcount = 10000, revision = HEAD) ran
18.16 ± 0.63 times faster than receive: many refs (refformat = reftable, refcount = 10000, revision = master)
In similar conditions, the files backend sees a 1.21x performance
improvement:
Benchmark 1: receive: many refs (refformat = files, refcount = 10000, revision = master)
Time (mean ± σ): 1.121 s ± 0.021 s [User: 0.128 s, System: 0.975 s]
Range (min … max): 1.097 s … 1.156 s 10 runs
Benchmark 2: receive: many refs (refformat = files, refcount = 10000, revision = HEAD)
Time (mean ± σ): 927.9 ms ± 22.6 ms [User: 99.0 ms, System: 815.2 ms]
Range (min … max): 903.1 ms … 978.0 ms 10 runs
Summary
receive: many refs (refformat = files, refcount = 10000, revision = HEAD) ran
1.21 ± 0.04 times faster than receive: many refs (refformat = files, refcount = 10000, revision = master)
As using batched updates requires the error handling to be moved to the
end of the flow, create and use a 'struct strset' to track the failed
refs and attribute the correct errors to them.
This change also uncovers an issue when a client provides multiple
updates to the same reference. For example:
$ git send-pack remote.git A:foo B:foo
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Delta compression using up to 20 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 226 bytes | 226.00 KiB/s, done.
Total 3 (delta 1), reused 0 (delta 0), pack-reused 0 (from 0)
remote: error: cannot lock ref 'refs/heads/foo': reference already exists
To remote.git
! [remote rejected] A -> foo (failed to update ref)
! [remote failure] B -> foo (remote failed to report status)
As you can see, the remote runs into an error because it cannot lock the
target reference for the second update. Furthermore, the remote complains
that the first update has been rejected whereas the second update didn't
receive any status update because we failed to lock it. Reading this status
message alone a user would probably expect that `foo` has not been updated
at all. But that's not the case: while we claim that the ref wasn't updated,
it surprisingly points to `A` now.
One could argue that this is merely an error in how we report the result of
this push. But ultimately, the user's request itself is broken: it doesn't
make sense in the first place and can never lead to a sensible outcome that
honors the full request.
The conversion to batched transactions fixes the issue because we now try to
queue both updates in the same transaction. As such, the transaction itself
will notice this conflict and refuse the update altogether before we commit
any of the values.
Note that this requires changes to a couple of tests in t5408 that happened
to exercise this behaviour. Given that the generated output is misleading
and given that the user request cannot ever be fully honored, this really
feels more like a bug than properly designed behaviour. As such, changing
the behaviour feels like the right thing to do.
Since reference updates are now batched, the 'reference-transaction'
hook will be invoked with all updates together. Currently Git will 'die'
when the hook returns a non-zero exit status in the 'prepared'
stage. For 'git-receive-pack(1)', this allowed users to reject an
individual reference update: Git would have applied the previous updates
but immediately aborted further execution. This is definitely an incorrect
usage of this hook, since the right place to do this would be the
'update' hook. This patch retains the latter behavior, but the
'reference-transaction' hook now switches to an all-or-nothing behavior
when a non-zero exit status is returned in the 'prepared' stage, since
batched updates use a transaction under the hood. This explains the change
in 't1416'.
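To illustrate the behavioural change, here is a minimal sketch of a 'reference-transaction' hook that rejects updates in the 'prepared' stage. The refs/heads/protected/ policy is made up for the example; the hook interface itself (the state as the first argument, one '<old> <new> <refname>' line per queued update on stdin) is the one documented in githooks(5). With batched updates, exiting non-zero here now fails the whole batch instead of leaving the already-applied updates in place:
```
#!/bin/sh
# Illustrative reference-transaction hook; $1 is the transaction state.
state="$1"

if test "$state" = "prepared"
then
	# One "<old-oid> <new-oid> <refname>" line per queued update.
	while read old new refname
	do
		case "$refname" in
		refs/heads/protected/*)
			echo >&2 "refusing to update $refname"
			exit 1	# now aborts the entire batched transaction
			;;
		esac
	done
fi
exit 0
```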
Helped-by: Jeff King <peff@peff.net> Helped-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Karthik Nayak [Mon, 19 May 2025 09:58:08 +0000 (11:58 +0200)]
send-pack: fix memory leak around duplicate refs
The 'git-send-pack(1)' command allows users to push objects to a remote
repository and explicitly list the references to be pushed. The status
of each reference pushed is captured into a list mapped by refname.
If a reference fails to be updated, its error message is captured in the
`ref->remote_status` field. While the command allows duplicate ref
inputs, the list doesn't accommodate this behavior as a particular
refname is linked to a single `struct ref*` element. So if the user
inputs a reference twice like:
git send-pack remote.git A:foo B:foo
where the user is trying to update the same reference 'foo' twice and
the reference fails to be updated, we first fill `ref->remote_status`
with the error message for the input 'A:foo', and then we override the
same field with the error message for 'B:foo'. This override happens
without first freeing the previous value. Fix this leak.
The current tests already incorporate the above example, but in the test
'A:foo' succeeds while 'B:foo' fails, meaning that the memory leak isn't
triggered. Add a new test with multiple duplicates.
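For illustration, a push shape that exercises the overwrite path could look as follows (refnames chosen for the example; the point is that more than one rejected update targets the same destination ref, so the stored remote_status for 'foo' gets overwritten):
```
# When two or more of the updates to refs/heads/foo are rejected by the
# remote, each later rejection overwrites the remote_status captured for
# the earlier one -- the leak described above.
$ git send-pack remote.git A:foo B:foo C:foo
```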
Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Karthik Nayak [Mon, 19 May 2025 09:58:07 +0000 (11:58 +0200)]
fetch: use batched reference updates
The reference updates performed as part of 'git-fetch(1)' take place
one at a time. For each reference update, a new transaction is created
and committed. This is necessary to ensure we can allow individual
updates to fail without failing the entire command. The command also
supports an '--atomic' mode, which uses a single transaction to update
all of the references. But this mode has an all-or-nothing approach,
where if a single update fails, all updates would fail.
In 23fc8e4f61 (refs: implement batch reference update support,
2025-04-08), we introduced a new mechanism to batch reference updates.
Under the hood, this uses a single transaction to perform a batch of
reference updates, while allowing only individual updates to fail.
Utilize this newly introduced batch update mechanism in 'git-fetch(1)'.
This provides a significant bump in performance, especially when dealing
with repositories with a large number of references.
Adding support for batched updates simply means modifying the
non-atomic flow to also create a batch update transaction.
With the reftable backend there is a 22x performance improvement, when
performing 'git-fetch(1)' with 10000 refs:
Benchmark 1: fetch: many refs (refformat = reftable, refcount = 10000, revision = master)
Time (mean ± σ): 3.403 s ± 0.775 s [User: 1.875 s, System: 1.417 s]
Range (min … max): 2.454 s … 4.529 s 10 runs
Benchmark 2: fetch: many refs (refformat = reftable, refcount = 10000, revision = HEAD)
Time (mean ± σ): 154.3 ms ± 17.6 ms [User: 102.5 ms, System: 56.1 ms]
Range (min … max): 145.2 ms … 220.5 ms 18 runs
Summary
fetch: many refs (refformat = reftable, refcount = 10000, revision = HEAD) ran
22.06 ± 5.62 times faster than fetch: many refs (refformat = reftable, refcount = 10000, revision = master)
In similar conditions, the files backend sees a 1.25x performance
improvement:
Benchmark 1: fetch: many refs (refformat = files, refcount = 10000, revision = master)
Time (mean ± σ): 605.5 ms ± 9.4 ms [User: 117.8 ms, System: 483.3 ms]
Range (min … max): 595.6 ms … 621.5 ms 10 runs
Benchmark 2: fetch: many refs (refformat = files, refcount = 10000, revision = HEAD)
Time (mean ± σ): 485.8 ms ± 4.3 ms [User: 91.1 ms, System: 396.7 ms]
Range (min … max): 477.6 ms … 494.3 ms 10 runs
Summary
fetch: many refs (refformat = files, refcount = 10000, revision = HEAD) ran
1.25 ± 0.02 times faster than fetch: many refs (refformat = files, refcount = 10000, revision = master)
With this we'll either be using a regular transaction or a batch update
transaction. This helps clean up some code which is no longer needed as
we'll now always have some type of 'ref_transaction' object being
propagated.
One big change is that earlier, each individual update would propagate a
failure, whereas now the `ref_transaction_for_each_rejected_update()`
function is called at the end of the flow to capture the exit status for
'git-fetch(1)' and also to print F/D conflict errors. This does change
the order in which the errors are printed, but the behavior stays the same.
Since transaction errors are now explicitly defined as part of 76e760b999 (refs: introduce enum-based transaction error types,
2025-04-08), utilize them and get rid of custom errors defined within
'builtin/fetch.c'.
Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Karthik Nayak [Mon, 19 May 2025 09:58:06 +0000 (11:58 +0200)]
refs: add function to translate errors to strings
The commit 76e760b999 (refs: introduce enum-based transaction error
types, 2025-04-08) introduced enum-based transaction error types. The
refs transaction logic was also modified to propagate these errors. For
clients of the ref transaction system, it would be beneficial to provide
human readable messages for these errors.
There is already an existing mapping in 'builtin/update-ref.c'; move it
to 'refs.c' as `ref_transaction_error_msg()` and use it from within
'builtin/update-ref.c'.
Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Karthik Nayak <karthik.188@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Junio C Hamano [Tue, 13 May 2025 21:05:06 +0000 (14:05 -0700)]
Merge branch 'js/ci-buildsystems-cleanup'
Code clean-up around stale CI elements and building with Visual Studio.
* js/ci-buildsystems-cleanup:
config.mak.uname: drop the `vcxproj` target
contrib/buildsystems: drop support for building .vcproj/.vcxproj files
ci: stop linking the `prove` cache
Junio C Hamano [Mon, 12 May 2025 21:22:48 +0000 (14:22 -0700)]
Merge branch 'ps/object-store-cleanup'
Further code clean-up in the object-store layer.
* ps/object-store-cleanup:
object-store: drop `repo_has_object_file()`
treewide: convert users of `repo_has_object_file()` to `has_object()`
object-store: allow fetching objects via `has_object()`
object-store: move function declarations to their respective subsystems
object-store: move and rename `odb_pack_keep()`
object-store: drop `loose_object_path()`
object-store: move `struct packed_git` into "packfile.h"
Junio C Hamano [Fri, 9 May 2025 20:16:51 +0000 (13:16 -0700)]
Merge branch 'master' of https://github.com/j6t/gitk
* 'master' of https://github.com/j6t/gitk:
gitk: add Tamil translation
gitk: limit PATH search to bare executable names
gitk: _search_exe is no longer needed
gitk: override $PATH search only on Windows
gitk: adjust indentation to match the style used in this script
Junio C Hamano [Fri, 9 May 2025 20:14:36 +0000 (13:14 -0700)]
Merge branch 'master' of https://github.com/j6t/git-gui
* 'master' of https://github.com/j6t/git-gui:
git-gui: treat the message template file as a built file
git-gui: heed core.commentChar/commentString
git-gui: po/README: update repository location and maintainer
Junio C Hamano [Thu, 8 May 2025 19:36:31 +0000 (12:36 -0700)]
Merge branch 'ps/mv-contradiction-fix'
"git mv a a/b dst" would ask to move the directory 'a' itself, as
well as its contents, in a single destination directory, which is
a contradicting request that is impossible to satisfy. This case is
now detected and the command errors out.
* ps/mv-contradiction-fix:
builtin/mv: convert assert(3p) into `BUG()`
builtin/mv: bail out when trying to move child and its parent
Johannes Sixt [Tue, 24 Dec 2024 13:47:08 +0000 (14:47 +0100)]
git-gui: treat the message template file as a built file
Follow the lead of 5377abc0c9d5 ("po/git.pot: don't check in result
of "make pot"", 2022-05-26) in the Git repository and do not track
git-gui.pot anymore.
Instead, translators are expected to integrate an up-to-date version
from the master branch into their translation file using
make ALL_POFILES=po/xx.po update-po
Update README to describe the new process. It is now understood that
different translations need not be based on the same message template
file, but rather individual translators should base their translation
on the most up-to-date code. Remove the section that addresses the
i18n coordinator as it does not apply when no common base is required
among translators.
Junio C Hamano [Mon, 5 May 2025 21:56:25 +0000 (14:56 -0700)]
Merge branch 'kn/meson-hdr-check'
Add an equivalent to "make hdr-check" target to meson based builds.
* kn/meson-hdr-check:
makefile/meson: add 'check-headers' as alias for 'hdr-check'
meson: add support for 'hdr-check'
meson: rename 'third_party_sources' to 'third_party_excludes'
meson: move headers definition from 'contrib/coccinelle'
coccinelle: meson: rename variables to be more specific
ci/github: install git before checking out the repository
Junio C Hamano [Mon, 5 May 2025 21:56:25 +0000 (14:56 -0700)]
Merge branch 'es/meson-cleanup'
Code clean-up for meson-based build infrastructure.
* es/meson-cleanup:
meson: only check for missing networking syms on non-Windows; add compat impls
meson: fix typo in function check that prevented checking for hstrerror
meson: add a couple missing networking dependencies
meson: do a full usage-based compile check for sysinfo
meson: check for getpagesize before using it
meson: simplify and parameterize various standard function checks
Junio C Hamano [Mon, 5 May 2025 21:56:24 +0000 (14:56 -0700)]
Merge branch 'js/windows-arm64'
Update to arm64 Windows port.
* js/windows-arm64:
max_tree_depth: lower it for clangarm64 on Windows
mingw(arm64): do move the `/etc/git*` location
msvc: do handle builds on Windows/ARM64
mingw: do not use nedmalloc on Windows/ARM64
config.mak.uname: add support for clangarm64
bswap.h: add support for built-in bswap functions
Our CI needs to be aware of the location of the test output directory so
that it knows where to find test results. Some of our CI jobs achieve
this by setting the `TEST_OUTPUT_DIRECTORY` environment variable, which
ensures that the output will be written to that directory. Other jobs,
especially on GitHub Workflows, don't set that environment variable and
instead expect test results to be located in the source directory in
"t/".
The latter logic does not work with Meson though, as the test results
are not written into the source directory by default, but instead into
the build directory. As such, any job that uses Meson without setting
the environment variable will be unable to locate and aggregate results.
Fix this by explicitly setting the test output directory when we set up
the Meson build directory. Like this, we can easily default to "t/" in
the source directory when the value hasn't been set explicitly.
Reported-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Now that we dropped `contrib/buildsystems/generate` to generate Visual
Studio Solution files, it is time to also drop the `vcxproj` Makefile
target that depended on that script.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
contrib/buildsystems: drop support for building .vcproj/.vcxproj files
Before we had CMake support, the only way to build Git in Visual Studio
was via this hacky `generate` script.
For a while I tried to fix it whenever things got broken, in particular to
allow building confidence in embargoed releases by running the CI builds
in Azure Pipelines in a private Azure DevOps project. I even carried the
patches in Git for Windows with the intention of upstreaming them,
eventually.
However, it is a lot of work with too little benefit. CMake is much
better supported by Visual Studio. So let's drop this hacky script (plus
support code).
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
7b399322a2e (doc: apply new format to git-branch man page, 2025-03-19)
updated the formatting for this doc to, among other things, use
backticks for some elements. In the process `è` was used by accident
instead of a backtick.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The tilde (~) count doesn’t match the length of the heading. In turn
you get a bunch of `<sub>~</sub>` instead of the intended `<h3>` in the
HTML output.
Signed-off-by: Kristoffer Haugsbakk <code@khaugsbakk.name> Signed-off-by: Junio C Hamano <gitster@pobox.com>
K Jayatheerth [Sat, 3 May 2025 06:07:36 +0000 (11:37 +0530)]
dir.c: literal match with wildcard in pathspec should still glob
When a path with wildcard characters, e.g. 'f*o', exists in the
working tree, "git add -- 'f*o'" stops after happily finding
that there is 'f*o' and adding it to the index, without
realizing there may be other paths, e.g. 'foooo', that may match
the given pathspec.
This is because dir.c:do_match_pathspec() disables further
matches with pathspec when it finds an exact match.
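A small reproduction of the scenario described above (file names taken from the example; 'git status --short' is only used here to show what got staged):
```
$ git init repro && cd repro
$ touch 'f*o' foooo
$ git add -- 'f*o'
$ git status --short    # with the fix, both 'f*o' and 'foooo' are staged
```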
Reported-by: piotrsiupa <piotrsiupa@gmail.com> Helped-by: Jeff King <peff@peff.net> Signed-off-by: K Jayatheerth <jayatheerthkulkarni2005@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Elijah Newren [Fri, 2 May 2025 19:19:11 +0000 (19:19 +0000)]
tree-walk.h: fix incorrect API comment
When commit 50ddb089ff68 (tree-walk.c: remove the_repo from
get_tree_entry(), 2019-06-27) added an extra parameter to
get_tree_entry(), it did not fix the ordering comment about the meaning
of the parameters. Rather than just changing "third"->"fourth" and
"fourth"->"fifth", give the paramemters meaningful names (or actually,
just take the existing names from the get_tree_entry() definition in the
tree-walk.c file) and while at it, tweak the rest of the description to
incorporate the other parameter names as well.
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The use of asserts is discouraged in our codebase because they lead to
different behaviour depending on how Git is built. If one is unsure
enough about whether a condition always holds that one adds an assert,
then that assert should probably trigger regardless of how Git is
being built.
Drop the call to assert(3p) in git-mv(1) and instead use `BUG()`.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
builtin/mv: bail out when trying to move child and its parent
We have a known issue in git-mv(1) where moving both a child and any of
its parents causes an assert to trigger because the child cannot be
found anymore in the index. We have added a test for this in commit 0fcd473fdd3 (t7001: add failure test which triggers assertion,
2024-10-22) without addressing the issue, which is why the test itself
is marked as `test_expect_failure`.
The behaviour of that test relies on a call to assert(3p) though, which
may or may not be compiled into the resulting binary depending on
whether or not we pass `-DNDEBUG`. When these asserts are compiled into
Git, this may cause our CI to hang on Windows, because a failing assert
may cause a modal window to be shown.
While we could work around the issue by converting this into a call to
`BUG()`, let's rather address the root cause of the issue by bailing out
in case we see that both a child and any of its parents are being moved
in the same command.
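For reference, the contradicting request can be reproduced like this (directory layout made up to match the description); with this change the command bails out with an error instead of tripping the assertion:
```
$ git init repro && cd repro
$ mkdir a dst && touch a/b
$ git add a/b && git commit -m init
$ git mv a a/b dst    # now refused: 'a' and its child 'a/b' are both sources
```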
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Junio C Hamano [Tue, 29 Apr 2025 21:21:30 +0000 (14:21 -0700)]
Merge branch 'ps/fewer-perl'
Reduce requirement for Perl in our documentation build and a few
scripts.
* ps/fewer-perl:
Documentation: stop depending on Perl to generate command list
Documentation: stop depending on Perl to massage user manual
request-pull: stop depending on Perl
filter-branch: stop depending on Perl
Junio C Hamano [Tue, 29 Apr 2025 21:21:29 +0000 (14:21 -0700)]
Merge branch 'ps/reftable-api-revamp'
Overhaul of the reftable API.
* ps/reftable-api-revamp:
reftable/table: move printing logic into test helper
reftable/constants: make block types part of the public interface
reftable/table: introduce iterator for table blocks
reftable/table: add `reftable_table` to the public interface
reftable/block: expose a generic iterator over reftable records
reftable/block: make block iterators reseekable
reftable/block: store block pointer in the block iterator
reftable/block: create public interface for reading blocks
git-zlib: use `struct z_stream_s` instead of typedef
reftable/block: rename `block_reader` to `reftable_block`
reftable/block: rename `block` to `block_data`
reftable/table: move reading block into block reader
reftable/block: simplify how we track restart points
reftable/blocksource: consolidate code into a single file
reftable/reader: rename data structure to "table"
reftable: fix formatting of the license header
Since a call to repo_config() can be made with repo set to NULL
these days, a command that is marked as RUN_SETUP in the builtin
command table does not have to check repo for NULL before making
the call.
* ua/call-repo-config-with-possibly-null-repository:
builtin/difftool: remove unnecessary if statement
builtin/add: remove unnecessary if statement
xdiff: disable cleanup_records heuristic with --minimal
The cleanup_records function marks some lines as changed before running
the actual diff algorithm. For most lines, this is a good performance
optimization, but it also marks lines that are surrounded by many
changed lines as changed as well. This can cause redundant changes and
longer-than-necessary diffs.
Whether this results in better-looking diffs is subjective. However, the
--minimal flag explicitly requests the shortest possible diff.
The change results in shorter diffs in about 1.3% of all diffs in Git's
history. Performance-wise, I have measured the impact on
"git log -p -3000 --minimal > /dev/null". With this change, I get
Time (mean ± σ): 2.363 s ± 0.023 s (25 runs)
and without this patch I measured
Time (mean ± σ): 2.362 s ± 0.035 s (25 runs).
As the difference is well within the margin of error, this does not seem
to have an impact on performance.
Signed-off-by: Niels Glodny <n.glodny@campus.lmu.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
diff: check range before dereferencing an array element
Before accessing an array element at a given index, it should be
verified that the index is within the desired bounds, not afterwards,
otherwise it may not make sense to even access the array element in the
first place. This is the point of CodeQL's
`cpp/offset-use-before-range-check` rule.
This CodeQL rule unfortunately is also triggered by the
`fill_es_indent_data()` code, even though the condition `off < len - 1`
does not even need to guarantee that the offset is in bounds (`s` points
to a NUL-terminated string, for which `s[off] == '\r'` would fail before
running out of bounds).
Working around this rare false positive to help us use an otherwise
mostly useful tool is a worthy thing to do.
Signed-off-by: Johannes Schindelin <johannes.schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
In the preceding commits we have converted all users of
`repo_has_object_file()` and its `_with_flags()` variant to instead use
`has_object()`. Drop these functions.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
treewide: convert users of `repo_has_object_file()` to `has_object()`
As the comment of `repo_has_object_file()` and its `_with_flags()`
variant tells us, these functions are considered to be deprecated in
favor of `has_object()`. There are a couple of slight benefits in favor
of the replacement:
- The new function has a short-and-sweet name.
- More explicit defaults: `has_object()` doesn't fetch missing objects
via promisor remotes, and neither does it reload packfiles if an
object wasn't found by default. This ensures that it becomes
immediately obvious when a simple object existence check may result
in expensive actions.
Most importantly though, it is confusing that we have two sets of
functions that ultimately do the same thing, but with different
defaults.
Start sunsetting `repo_has_object_file()` and its `_with_flags()`
sibling by replacing all callsites with `has_object()`:
- `repo_has_object_file(...)` is equivalent to
`has_object(..., HAS_OBJECT_RECHECK_PACKED | HAS_OBJECT_FETCH_PROMISOR)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK | OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., 0)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_SKIP_FETCH_OBJECT)`
is equivalent to `has_object(..., HAS_OBJECT_RECHECK_PACKED)`.
- `repo_has_object_file_with_flags(..., OBJECT_INFO_QUICK)`
is equivalent to `has_object(..., HAS_OBJECT_FETCH_PROMISOR)`.
The replacements should be functionally equivalent.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
object-store: allow fetching objects via `has_object()`
We're about to fully remove `repo_has_object_file()` in favor of
`has_object()`. The latter function does not yet have a way to fetch
missing objects via a promisor remote though, which means that it cannot
fully replace all use cases of `repo_has_object_file()`.
Introduce a new flag `HAS_OBJECT_FETCH_PROMISOR` that causes the
function to optionally fetch missing objects which are part of a
promisor pack. This flag will be used in the subsequent commit.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
object-store: move function declarations to their respective subsystems
We carry declarations for a couple of functions in "object-store.h" that
are not defined in "object-store.c", but in a different subsystem. Move
these declarations to the respective headers whose matching code files
carry the corresponding definition.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `odb_pack_keep()` creates a file at the passed-in path. If
this fails, then the function re-tries by first creating any potentially
missing leading directories and then trying to create the file once
again. As such, this function doesn't host any kind of logic that is
specific to the object store, but is rather a generic helper function.
Rename the function to `safe_create_file_with_leading_directories()` and
move it into "path.c". While at it, refactor it so that it loses its
dependency on `the_repository`.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The function `loose_object_path()` is a trivial wrapper around
`odb_loose_path()`, with the only exception that it always uses the
primary object database of the given repository. This doesn't really add
a ton of value though, so let's drop the function and inline it at every
callsite.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
object-store: move `struct packed_git` into "packfile.h"
The "object-store.h" header contains the definition of `struct
packed_git`. As this structure hosts all kinds of information about a
specific packfile it is arguably a bit out of place in a generic place
like "object-store.h".
Move the structure as well as `pack_map_entry_cmp()` into "packfile.h".
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Add an option to allow users to specifically enable or disable
retrieving the Message-ID from the Outlook SMTP server. This can be used
for other hosts mimicking the behaviour of Outlook, or for users who set
a custom domain to be a CNAME for the Outlook SMTP server.
While at it, let's also add the missing * in the description of --no-smtp-auth.
Helped-by: Junio C Hamano <gitster@pobox.com> Signed-off-by: Aditya Garg <gargaditya08@live.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
hashmap: ensure hashmaps are reusable after hashmap_clear()
In the series merged at bf0a430f70b5 (Merge branch 'en/strmap',
2020-11-21), strmap was built on top of hashmap and hashmap was extended
in a few ways to support strmap and be more generally useful. One of
the extensions was that hashmap_partial_clear() was introduced to allow
reuse of the hashmap without freeing the table. Peff believed that it
also made sense to introduce a hashmap_clear() which freed everything
while allowing reuse.
I added hashmap_clear(), but in doing so, overlooked the fact that for
a hashmap to be reusable, it needs a defined cmpfn and data (the
HASHMAP_INIT macro requires these fields as parameters, for example).
So, if we want the hashmap to be reusable, we shouldn't zero out those
fields. We probably also shouldn't zero out do_count_items. (We could
zero out grow_at and shrink_at, but whether we zero those or not is
irrelevant as they'll be automatically updated whenever a new entry is
inserted.)
Since clearing is associated with freeing map->table, and the only thing
required for consistency after freeing map->table is zeroing tablesize
and private_size, let's only zero those fields out.
Signed-off-by: Elijah Newren <newren@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
As detailed in the previous changes to t5309-pack-delta-cycles.sh, the
logic within 'git index-pack' to analyze an incoming thin packfile with
REF_DELTAs is suspect. The algorithm is overly cautious around delta
cycles, and that in fact leads to failing even when there is no cycle.
This change adjusts the algorithm to no longer fail in these cases. In
fact, these cycle cases will no longer fail but more importantly the
valid cases will no longer fail, either. The resulting packfile from the
--fix-thin operation will not have cycles either since REF_DELTAs are
forbidden from the on-disk format and OFS_DELTAs are impossible to write
as a cycle.
The crux of the matter is how the algorithm works when the REF_DELTAs
point to base objects that exist in the local repository. When reading
the thin packfile, the object IDs for the delta objects are unknown so
we do not have the delta chain structure automatically. Instead, we need
to start somewhere by selecting a delta whose base is inside our current
object database.
Consider the case where the packfile has two REF_DELTA objects, A and B,
and the delta chain looks like "A depends on B" and "B depends on C" for
some third object C, where C is already in the current repository. The
algorithm _should_ start with all objects that depend on C, finding B,
and then moving on to all objects depending on B, finding A.
However, if the repository also already has object B, then the delta
chain can be analyzed in a different order. The deltas with base B can
be analyzed first, finding A, and then the deltas with base C are
analyzed, finding B. The algorithm currently continues to look for
objects that depend on B, finding A again. This fails due to A's
'real_type' member already being overwritten from OBJ_REF_DELTA to the
correct object type.
This scenario is possible in a typical 'git fetch' where the client does
not advertise B as a 'have' but requests A as a 'want' (and C is noticed
as a common object based on other 'have's). The reason this isn't
typically seen is that most Git servers use OFS_DELTAs to represent
deltas within a packfile. However, if a server uses only REF_DELTAs,
then this kind of issue can occur. Nothing in the packfile format
explicitly states that this use of inter-pack REF_DELTAs is
incorrect, only that REF_DELTAs should not be used in the on-disk
representation, to avoid cycles.
This die() was introduced in ab791dd138 (index-pack: fix race condition
with duplicate bases, 2014-08-29). Several refactors have adjusted the
error message and the surrounding logic, but this issue has existed for
a longer time as that was only a conversion from an assert().
The tests in t5309 originated in 3b910d0c5e (add tests for indexing
packs with delta cycles, 2013-08-23) and b2ef3d9ebb (test index-pack on
packs with recoverable delta cycles, 2013-08-23). These changes make
note that the current behavior of handling "resolvable" cycles is mostly
a documentation-only test, not that this behavior is the best way for
Git to handle the situation.
The fix here is somewhat complicated due to the amount of state being
adjusted by the loop within threaded_second_pass(). Instead of trying to
resume the start of the loop while adjusting the necessary context, I
chose to scan the REF_DELTAs depending on the current 'parent' and skip
any that have already been processed. This necessarily leaves us in a
state where 'child' and 'child_obj' could be left as NULL and that must
be handled later. There is also some careful handling around skipping
REF_DELTAs when there are also OFS_DELTAs depending on that parent.
There may be value in extending 'test-tool pack-deltas' to allow writing
OFS_DELTAs in order to exercise this logic across the delta types.
Signed-off-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
This new test demonstrates some behavior where a valid packfile is being
rejected by the Git client due to the order in which it is resolving
REF_DELTAs.
The thin packfile has a REF_DELTA chain A->B->C where C is not included
in the packfile. However, the client repository contains both C and B
already. Thus, 'git index-pack' is able to resolve A before resolving B.
When resolving B, it then attempts to resolve any other REF_DELTAs that
are pointing to B as a base. This "revisits" A and complains as if there
is a cycle, but it did not actually detect a cycle.
A fix will arrive in the next change.
Signed-off-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
When trying to demonstrate certain behavior in tests, it can be helpful
to create packfiles that have specific delta structures. 'git
pack-objects' uses various algorithms to select deltas based on their
compression rates, but that does not always demonstrate all possible
packfile shapes. This becomes especially important when wanting to test
'git index-pack' and its ability to parse certain pack shapes.
We have prior art in t/lib-pack.sh, where certain delta structures are
produced by manually writing certain opaque pack contents. However,
producing these script updates is cumbersome and difficult to do as a
contributor.
Instead, create a new test-tool, 'test-tool pack-deltas', that reads a
list of instructions for which objects to include in a packfile and how
those objects should be written in delta form.
At the moment, this only supports REF_DELTAs as those are the kinds of
deltas needed to exercise a bug in 'git index-pack'.
Signed-off-by: Derrick Stolee <stolee@gmail.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wire up a couple of benchmarking options that we end up writing into our
"GIT-BUILD-OPTIONS" file. These options allow users to control how
exactly benchmarks are executed.
Note that neither `GIT_PERF_MAKE_COMMAND` nor `GIT_PERF_MAKE_OPTS` are
exposed as a build option. Those options are used by "t/perf/run", which
is not used by Meson.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Wire up benchmarks in Meson. The setup is mostly the same as how we wire
up our tests. The only difference is that benchmarks get wired up via
the `benchmark()` function instead of via `test()`, which gives them a bit
of special treatment:
- Benchmarks never run in parallel.
- Benchmarks aren't run by default when tests are executed.
- Meson does not inject the `MALLOC_PERTURB_` environment variable.
Using benchmarks is quite simple:
```
$ meson setup build
# Run all benchmarks.
$ meson test -C build --benchmark
# Run a specific benchmark.
$ meson test -C build --benchmark p0000-*
```
Other than that, the usual command-line arguments accepted when running
tests are also accepted when running benchmarks.
Note that the benchmarking target is somewhat limited because it will
only run benchmarks for the current build. Other use cases, like running
benchmarks against multiple different versions of Git, are not currently
supported. Users should continue to use "t/perf/run" for those use
cases. The script should get extended at one point in time to support
Meson, but this is outside of the scope of this series.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
The "perf-lib.sh" script is sourced by all of our benchmarking suites to
make available common infrastructure. The script assumes that build and
source directory are the same, which works for our Makefile. But the
assumption breaks with both CMake and Meson, where the build directory
can be located in an arbitrary place.
Adapt the script so that it works with out-of-tree builds. Most
importantly, this requires us to figure out the location of the build
directory:
- When running benchmarks via our Makefile the build directory is the
same as the source directory. We already know to derive the test
directory ("t/") via `$(pwd)/..`, which works because we chdir into
"t/perf" before executing benchmarks. We can thus derive the build
directory by appending another "/.." to that path.
- When running benchmarks via Meson the build directory is located at
an arbitrary location. The build system thus has to make the path
known by exporting the `GIT_BUILD_DIR` environment variable.
This change prepares us for wiring up benchmarks in Meson.
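A sketch of how that derivation could look in perf-lib.sh (simplified; the actual patch may structure this differently):
```
# Benchmarks chdir into t/perf before running, so the test directory is
# one level up ...
TEST_DIRECTORY="$(pwd)/.."
# ... and for in-tree Makefile builds the build directory is one more
# level up.  Meson instead exports GIT_BUILD_DIR for its out-of-tree
# build directory, which takes precedence here.
GIT_BUILD_DIR="${GIT_BUILD_DIR:-$TEST_DIRECTORY/..}"
```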
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Our benchmarks use a couple of Perl scripts to compute results. These
Perl scripts get executed directly, and as the shebang is hardcoded to
"/usr/bin/perl" this will fail on any system where the Perl interpreter
is located in a different path.
Our build infrastructure already lets users configure the location of
Perl, which ultimately gets written into the GIT-BUILD-OPTIONS file.
This file is being sourced by "test-lib.sh", and consequently we already
have the "PERL_PATH" variable available that contains its configured
location.
Use "PERL_PATH" to execute Perl scripts, which makes them work on more
esoteric systems like NixOS. Furthermore, adapt the shebang to use
env(1) to execute Perl so that users who have Perl in PATH, but in a
non-standard location can execute the script directly.
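A sketch of the resulting pattern (aggregate.perl is one of the existing t/perf helpers; the exact call sites and arguments may differ):
```
# Run the helper through the interpreter recorded in GIT-BUILD-OPTIONS
# instead of relying on the script's hardcoded shebang.
"$PERL_PATH" "$TEST_DIRECTORY"/perf/aggregate.perl "$@"

# The scripts themselves switch their shebang to
#   #!/usr/bin/env perl
# so direct execution keeps working when perl is in PATH but not in /usr/bin.
```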
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
t/perf: fix benchmarks with alternate repo formats
Many of our benchmarks operate on a user-defined repository that we copy
over before running the benchmarked logic. To keep unintentional side
effects caused by on-disk state at bay we skip copying some files. This
includes for example hooks, but also the repo's configuration.
It is quite sensible to not copy over the configuration, as it is quite
easy to inadvertently carry over configuration that may significantly
impact the performance measurements. But we cannot fully ignore the
configuration either, as it may contain information about the repository
format. This will cause failures when for example using a repository
with SHA256 object format or the reftable ref format.
Fix the issue by parsing the reference and object formats from the
source repository and passing them to git-init(1).
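One way this detection can be expressed is shown below; $GIT_PERF_REPO is the existing perf variable naming the source repository, $trash_repo stands in for wherever the copy is created, and whether the patch uses exactly these rev-parse options is an assumption:
```
# Ask the source repository which formats it uses ...
object_format=$(git -C "$GIT_PERF_REPO" rev-parse --show-object-format)
ref_format=$(git -C "$GIT_PERF_REPO" rev-parse --show-ref-format)

# ... and create the throwaway test repository with matching formats
# before copying the (filtered) contents over.
git init \
	--object-format="$object_format" \
	--ref-format="$ref_format" \
	"$trash_repo"
```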
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
send-email: retrieve Message-ID from outlook SMTP server
The script generates a Message-ID along with the other headers when
gen_header is called, and it is sent along with the email. For most email
providers, including gmail, the Message-ID goes unchanged to the
recipient.
But this does not seem to be the case with Outlook. In Outlook, when we
send our own Message-ID as part of the headers, it discards it. Then
it generates a new random Message-ID and that is what the recipient
gets.
This is a problem because the Message-ID is crucial when we are sending
multiple emails in a thread. The current implementation for threads in
the script replies to the Message-ID it generated, but due to Outlook's
behavior, it is not the same as the one that the recipient got, thus
breaking threads. So a need arises to retrieve the Message-ID from the
server response and set it in the In-Reply-To and References email
headers instead of using the self generated one for the purpose of
replies.
The $smtp->message variable in this script for Outlook is something like
this:
2.0.0 OK <Message-ID> [Hostname=Some-hostname]
The Message-ID here is the one the recipient gets, rather than the one
the script generated.
This patch uses the fact above and retrieves the Message-ID from the
server response. It then changes the value of the $message_id variable
to the one received from the server. This value will be used when next
and subsequent messages are sent as replies to the message, thus
preserving the threading of the messages.
Signed-off-by: Aditya Garg <gargaditya08@live.com> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Meson detects the path of the target shell via `find_program("sh")`,
which essentially does a lookup via `PATH`. This may easily lead to a
subtly-broken Git distribution when the build host has its shell in a
location that the target host doesn't know about.
Fix the issue by appending "/bin" to the custom program path, which
causes us to prefer "/bin/sh" over a `PATH`-based lookup. While
"/bin/sh" isn't standardized, this path tends to work alright on Linux
and BSD distributions. Furthermore, "/bin/sh" is also the path we pick
in our Makefile by default, which further demonstrates that this shell
fulfills our needs.
Note that we intentionally append, not prepend, to the custom program
path. This is because the program path can be configured by the user via
the `-Dsane_tool_path=` build option, which should take precedence over
any defaults we pick for the user.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Git needs to know about a couple of executable paths to pick at runtime.
This includes the system shell, but may also optionally include the Perl
and Python interpreters. Meson detects the location of these paths
automatically via `find_program()`, which does a lookup via the `PATH`
environment variable. As such, it may not be immediately obvious to the
developer which paths have been autodetected.
Improve this by exposing runtime executable paths at setup time.
Signed-off-by: Patrick Steinhardt <ps@pks.im> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:45 +0000 (01:25 -0400)]
meson: only check for missing networking syms on non-Windows; add compat impls
These compat implementations are added in the Makefile build, but not in
Meson; builds probably won't work well on systems that lack the
corresponding functions.
CMake adds them, but only on non-Windows. Actually, it only performs
compiler checks for hstrerror, but excludes that check on Windows with
the note that it is "incompatible with the Windows build". This seems to
be misleading -- it is not incompatible, it simply doesn't exist. Still,
the compat version should not be used.
I interpret this cmake logic to mean we shouldn't even be checking for
symbol availability on Windows. In addition to making it simple to add
compat definitions, this also probably shaves off a second or two of
configure time on Windows as no compiler check needs to be performed.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:44 +0000 (01:25 -0400)]
meson: fix typo in function check that prevented checking for hstrerror
Nowhere in the codebase do we otherwise check for strerror. Nowhere in
the codebase do we make use of -DNO_STRERROR. `strerror` is not a
networking function at all.
We do utilize `hstrerror` though, which is a networking function we
should have been checking here.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:43 +0000 (01:25 -0400)]
meson: add a couple missing networking dependencies
As evidenced in config.mak.uname and configure.ac, there are various
possible scenarios where these libraries are default-enabled in the
build, which mainly boils down to: SunOS. -lresolv is simply not the
only library that, when it exists, probably needs to be linked to for
networking.
Check for and add -lnsl -lsocket as well.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:42 +0000 (01:25 -0400)]
meson: do a full usage-based compile check for sysinfo
On Solaris, sys/sysinfo.h is a completely different file and doesn't
resemble the linux file at all. There is also a sysinfo() function, but
it takes a totally different call signature, which asks for:
- the field you wish to receive
- a `char *buf` to copy the data to
and is very useful IFF you want to know, say, the hardware provider. Or,
get *specific* fields from uname(2).
It is surely possible to do this manually via `sysconf(3)` without the
nice API. I can't find anything more direct. Either way, I'm not very
attached to Solaris, so someone who cares can add it. Either way, it's
wrong to assume that sysinfo.h contains what we are looking for.
Check that sysinfo.h defines the struct we actually utilize in
builtin/gc.c, which will correctly fail on systems that don't have it.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:41 +0000 (01:25 -0400)]
meson: check for getpagesize before using it
It is deprecated and removed in SUS v3 / POSIX 2001, so various systems
may not include it. Solaris, in particular, carefully refrains from
defining it except inside of a maze of `#ifdef` to make sure you have
kept your nose clean and only used it in code that *targets* SUS v2 or
earlier.
config.mak.uname defines this automatically, though only for QNX.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Eli Schwartz [Fri, 25 Apr 2025 05:25:40 +0000 (01:25 -0400)]
meson: simplify and parameterize various standard function checks
This is repetitive logic. We either want to use some -lc function, or if
it is not available we define it as -DNO_XXX and usually (but not
always) provide some custom compatibility impl instead.
Checking the intent of each block when reading through the file is slow
and not very DRY. Switch to taking an array of checkable functions
instead.
Not all functions are straightforward to move, since different macro
prefixes are used.
Signed-off-by: Eli Schwartz <eschwartz@gentoo.org> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Junio C Hamano [Fri, 25 Apr 2025 15:38:26 +0000 (08:38 -0700)]
ci: update the message for unavailable third-party software
An earlier fix added an extra message immediately after failing to
download a third-party package. But near the end of the script,
their availability is checked again and given a message.
Remove the new ones added with a recent fix, as they are redundant.
If we were to add more places to download this software (e.g. for
other platforms we currently do not download it on), the existing
warning near the end of the script would also trigger.
While at it, as Dscho suggests, rewrite the WARNING: label on the
warning message to ::warning::, which presumably should be shown a
bit more prominently in the CI summary.
Suggested-by: Johannes Schindelin <Johannes.Schindelin@gmx.de> Signed-off-by: Junio C Hamano <gitster@pobox.com>
Junio C Hamano [Fri, 25 Apr 2025 00:25:34 +0000 (17:25 -0700)]
Merge branch 'rj/build-tweaks'
Various build tweaks, including CSPRNG selection on some platforms.
* rj/build-tweaks:
config.mak.uname: set CSPRNG_METHOD to getrandom on Linux
config.mak.uname: add arc4random to the cygwin build
config.mak.uname: add sysinfo() configuration for cygwin
builtin/gc.c: correct RAM calculation when using sysinfo
config.mak.uname: add clock_gettime() to the cygwin build
config.mak.uname: add HAVE_GETDELIM to the cygwin section
config.mak.uname: only set NO_REGEX on cygwin for v1.7
config.mak.uname: add a note about NO_STRLCPY for Linux
Makefile: remove NEEDS_LIBRT build variable
meson.build: set default help format to html on windows
meson.build: only set build variables for non-default values
Makefile: only set some BASIC_CFLAGS when RUNTIME_PREFIX is set
meson.build: remove -DCURL_DISABLE_TYPECHECK
Junio C Hamano [Fri, 25 Apr 2025 00:25:33 +0000 (17:25 -0700)]
Merge branch 'ps/parse-options-integers'
Update parse-options API to catch mistakes to pass address of an
integral variable of a wrong type/size.
* ps/parse-options-integers:
parse-options: detect mismatches in integer signedness
parse-options: introduce precision handling for `OPTION_UNSIGNED`
parse-options: introduce precision handling for `OPTION_INTEGER`
parse-options: rename `OPT_MAGNITUDE()` to `OPT_UNSIGNED()`
parse-options: support unit factors in `OPT_INTEGER()`
global: use designated initializers for options
parse: fix off-by-one for minimum signed values
Junio C Hamano [Fri, 25 Apr 2025 00:25:33 +0000 (17:25 -0700)]
Merge branch 'ps/object-file-cleanup'
Code clean-up.
* ps/object-file-cleanup:
object-store: merge "object-store-ll.h" and "object-store.h"
object-store: remove global array of cached objects
object: split out functions relating to object store subsystem
object-file: drop `index_blob_stream()`
object-file: split up concerns of `HASH_*` flags
object-file: split out functions relating to object store subsystem
object-file: move `xmmap()` into "wrapper.c"
object-file: move `git_open_cloexec()` to "compat/open.c"
object-file: move `safe_create_leading_directories()` into "path.c"
object-file: move `mkdir_in_gitdir()` into "path.c"
Junio C Hamano [Fri, 25 Apr 2025 00:14:14 +0000 (17:14 -0700)]
CI updates
Ever since we issued 2.49, external forces broke our CI jobs in
various ways, and we had to adjust our code to work around them.
Backmerge them from the 'master' front to make it easier to test
real changes to the maintenance track.
Junio C Hamano [Thu, 24 Apr 2025 23:10:47 +0000 (16:10 -0700)]
ci: skip unavailable external software
The ci/install-dependencies.sh script used in a very early phase of
our CI jobs downloads Perforce, Git-LFS, and JGit, used for running
the test scripts. The test framework is prepared to properly skip
the tests that depend on these external tools, but the CI script
is unnecessarily strict (due to its use of "set -e" in ci/lib.sh)
and fails the entire CI run before even starting to test the rest of
the system.
Notice a failure to download any of these external tools, but
keep going. We need to be careful about cleaning up after a failed
wget, as a later part of the script that does:
if type jgit >/dev/null 2>&1
then
	echo "$(tput setaf 6)JGit Version$(tput sgr0)"
	jgit version
else
	echo >&2 "WARNING: JGit wasn't installed, see above for clues why"
fi
will (surprise!) succeed running "type jgit", and then fail with
"jgit version", taking the whole thing down due to "set -e".
Junio C Hamano [Thu, 24 Apr 2025 18:37:21 +0000 (11:37 -0700)]
Merge branch 'ps/object-file-cleanup' into ps/object-store-cleanup
* ps/object-file-cleanup:
object-store: merge "object-store-ll.h" and "object-store.h"
object-store: remove global array of cached objects
object: split out functions relating to object store subsystem
object-file: drop `index_blob_stream()`
object-file: split up concerns of `HASH_*` flags
object-file: split out functions relating to object store subsystem
object-file: move `xmmap()` into "wrapper.c"
object-file: move `git_open_cloexec()` to "compat/open.c"
object-file: move `safe_create_leading_directories()` into "path.c"
object-file: move `mkdir_in_gitdir()` into "path.c"