Git performance tests
=====================

This directory holds performance testing scripts for git tools.  The
first part of this document describes the various ways in which you
can run them.

When fixing the tools or adding enhancements, you are strongly
encouraged to add tests in this directory to cover what you are
trying to fix or enhance.  The later part of this short document
describes how your test scripts should be organized.


Running Tests
-------------

The easiest way to run tests is to say "make".  This runs all
the tests on the current git repository.

    === Running 2 tests in this tree ===
    [...]
    Test                                      this tree
    ---------------------------------------------------------
    0001.1: rev-list --all                    0.54(0.51+0.02)
    0001.2: rev-list --all --objects          6.14(5.99+0.11)
    7810.1: grep worktree, cheap regex        0.16(0.16+0.35)
    7810.2: grep worktree, expensive regex    7.90(29.75+0.37)
    7810.3: grep --cached, cheap regex        3.07(3.02+0.25)
    7810.4: grep --cached, expensive regex    9.39(30.57+0.24)

You can compare multiple repositories and even git revisions with the
'run' script:

    $ ./run . origin/next /path/to/git-tree p0001-rev-list.sh

where . stands for the current git tree.  The full invocation is

    ./run [<revision|directory>...] [--] [<test-script>...]

A '.' argument is implied if you do not pass any other
revisions/directories.

You can also manually test this or another git build tree, and then
call the aggregation script to summarize the results:

    $ ./p0001-rev-list.sh
    [...]
    $ ./run /path/to/other/git -- ./p0001-rev-list.sh
    [...]
    $ ./aggregate.perl . /path/to/other/git ./p0001-rev-list.sh

aggregate.perl takes the same invocation as 'run'; it just does not
run anything beforehand.

You can set the following variables (also in your config.mak):

GIT_PERF_REPEAT_COUNT
    Number of times a test should be repeated for best-of-N
    measurements.  Defaults to 3.

GIT_PERF_MAKE_OPTS
    Options to use when automatically building a git tree for
    performance testing.  E.g., -j6 would be useful.  Passed
    directly to make as "make $GIT_PERF_MAKE_OPTS".

GIT_PERF_MAKE_COMMAND
    An arbitrary command that will be run in place of the make
    command.  If set, the GIT_PERF_MAKE_OPTS variable is
    ignored.  Useful in cases where source tree changes might
    require issuing a different make command to different
    revisions.

    This can be (ab)used to monkeypatch or otherwise change the
    tree about to be built.  Note that the build directory can be
    re-used for subsequent runs, so the make command might get
    executed multiple times on the same tree; but do not count on
    any of that, as it is an implementation detail that might change
    in the future.

GIT_PERF_REPO
GIT_PERF_LARGE_REPO
    Repositories to copy for the performance tests.  The normal
    repo should be at least git.git size.  The large repo should
    probably be about linux.git size for optimal results.
    Both default to the git.git you are running from.

GIT_PERF_EXTRA
    Boolean to enable additional tests.  Most test scripts are
    written to detect regressions between two versions of Git, and
    the output will compare timings for individual tests between
    those versions.  Some scripts have additional tests, not run by
    default, that show patterns within a single version of Git
    (e.g., performance of index-pack as the number of threads
    changes).  These can be enabled with GIT_PERF_EXTRA.
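
As a sketch, these variables could be collected in a config.mak
fragment; every value and path below is only illustrative, not a
recommendation:

```make
# config.mak fragment -- example values only (paths are hypothetical)

# best-of-5 timings instead of the default best-of-3
GIT_PERF_REPEAT_COUNT = 5

# parallel build of each revision under test
GIT_PERF_MAKE_OPTS = -j8

# repositories to copy for the normal and large tests
GIT_PERF_REPO = /path/to/git.git
GIT_PERF_LARGE_REPO = /path/to/linux.git

# enable the extra single-version tests described above
GIT_PERF_EXTRA = true
```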

You can also pass the options taken by ordinary git tests; the most
useful one is:

--root=<directory>::
    Create "trash" directories used to store all temporary data during
    testing under <directory>, instead of the t/ directory.
    Using this option with a RAM-based filesystem (such as tmpfs)
    can massively speed up the test suite.
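
Like the GIT_PERF_* variables, this option can live in config.mak,
assuming the usual GIT_TEST_OPTS mechanism described in t/README; the
tmpfs path below is only an example:

```make
# config.mak fragment -- put trash directories on tmpfs (example path)
GIT_TEST_OPTS = --root=/dev/shm/git-tests
```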


Naming Tests
------------

The performance test files are named as:

    pNNNN-commandname-details.sh

where N is a decimal digit.  The same conventions for choosing NNNN as
for normal tests apply.


Writing Tests
-------------

The perf script starts much like a normal test script, except it
sources perf-lib.sh:

    #!/bin/sh
    #
    # Copyright (c) 2005 Junio C Hamano
    #

    test_description='xxx performance test'
    . ./perf-lib.sh

After that you will want to use some of the following:

    test_perf_fresh_repo    # sets up an empty repository
    test_perf_default_repo  # sets up a "normal" repository
    test_perf_large_repo    # sets up a "large" repository

    test_perf_default_repo sub  # ditto, in a subdir "sub"

    test_checkout_worktree  # if you need the worktree too

At least one of the first two is required!

You can use test_expect_success as usual.  In both test_expect_success
and in test_perf, running "git" points to the version that is being
perf-tested.  The $MODERN_GIT variable points to the git wrapper for the
currently checked-out version (i.e., the one that matches the t/perf
scripts you are running).  This is useful if your setup uses commands
that only work with newer versions of git than what you might want to
test (but obviously your new commands must still create a state that can
be used by the older version of git you are testing).

For actual performance tests, use

    test_perf 'descriptive string' '
        command1 &&
        command2
    '

test_perf spawns a subshell, for lack of better options.  This means
that

* you _must_ export all variables that you need in the subshell

* you _must_ flag all variables that you want to persist from the
  subshell with 'test_export':

    test_perf 'descriptive string' '
        foo=$(git rev-parse HEAD) &&
        test_export foo
    '

The so-exported variables are automatically marked for export in the
shell executing the perf test.  For your convenience, test_export is
the same as export in the main shell.
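
The underlying shell behavior that makes test_export necessary can be
shown outside perf-lib.sh entirely; this is a minimal plain-sh sketch,
not part of the perf framework:

```shell
#!/bin/sh
# An assignment made inside a subshell happens in a child process and
# never propagates back to the parent -- hence the need for test_export.
( foo=42 )
echo "after subshell: foo=${foo:-unset}"

# Exporting in the parent works in the other direction: the subshell
# does see the variable.
bar=42
export bar
( echo "inside subshell: bar=$bar" )
```

Running this prints "after subshell: foo=unset" followed by
"inside subshell: bar=42".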

This feature relies on a bit of magic using 'set' and 'source'.
While we have tried to make sure that it can cope with embedded
whitespace and other special characters, it will not work with
multi-line data.

Rather than tracking the performance by run-time as `test_perf` does, you
may also track output size by using `test_size`.  The stdout of the
function should be a single numeric value, which will be captured and
shown in the aggregated output.  For example:

    test_perf 'time foo' '
        ./foo >foo.out
    '

    test_size 'output size' '
        wc -c <foo.out
    '

might produce output like:

    Test                origin            HEAD
    -------------------------------------------------------------
    1234.1 time foo     0.37(0.79+0.02)   0.26(0.51+0.02) -29.7%
    1234.2 output size             4.3M              3.6M -14.7%

The item being measured (and its units) is up to the test; the context
and the test title should make it clear to the user whether bigger or
smaller numbers are better.  Unlike test_perf, the test code will only be
run once, since output sizes tend to be more deterministic than timings.