On XFS, when creating the ~2G test file 'big' in a for-loop by
appending ~20M in each iteration, the file ends up occupying ~4G, as
visible in 'st_blocks'.  The unused space is reclaimed only later.
This is caused by "speculative preallocation", an XFS feature that
aims to avoid fragmentation.
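A minimal sketch to observe this over-allocation (illustrative only;
the file name, sizes, and the use of head(1)/stat(1) are not part of
the test):

  # Append ~20M a hundred times; each '>>' opens and closes the file.
  for i in $(seq 100); do head -c 20971520 /dev/zero >> big; done
  # Compare the apparent size with the allocated blocks
  # (%b is counted in %B-byte units, typically 512).
  stat -c 'size=%s blocks=%b blocksize=%B' big

On XFS, the allocated blocks can amount to roughly twice the apparent
size until the speculative preallocation is reclaimed.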
According to the XFS FAQ [1], two particular aspects of XFS
speculative preallocation trigger this:
1. "Applications that repeatedly trigger preallocation and reclaim
cycles [after file close] can cause fragmentation.
Therefore, this pattern is detected and causes the preallocation
to persist beyond the lifecycle of the file descriptor."
2. "Preallocation sizes grow as files grow larger."
[1] http://xfs.org/index.php/XFS_FAQ
Avoid the first of the above by doing only a single close, i.e., a
single reclaim cycle.
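In shell terms, the difference is opening and closing the output file
once instead of once per iteration (sketch; 'big' as in the test):

  # Before: each '>>' redirection is a separate open/close cycle.
  for i in $(seq 100); do printf %21474836s x >> big; done

  # After: the whole group writes through a single file descriptor.
  { for i in $(seq 100); do printf %21474836s x; done; } > big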
* tests/du/2g.sh: Similar to the fix for a dd test (see commit
v8.22-65-g7c03fe2), avoid speculative preallocation by creating
the 'big' file in one go instead of appending to it in a loop.
Remove the debugging statements, as the output with 'set -x' is
sufficient nowadays.
big=big
rm -f $big
-test -t 1 || printf 'creating a 2GB file...\n'
-for i in $(seq 100); do
- # Note: 2147483648 == 2^31. Print floor(2^31/100) per iteration.
- printf %21474836s x >> $big || fail=1
- # On the final iteration, append the remaining 48 bytes.
- test $i = 100 && { printf %48s x >> $big || fail=1; }
- test -t 1 && printf 'creating a 2GB file: %d%% complete\r' $i
-done
-echo
+{
+ for i in $(seq 100); do
+ # Note: 2147483648 == 2^31. Print floor(2^31/100) per iteration.
+ printf %21474836s x || fail=1
+ done
+ # After the final iteration, append the remaining 48 bytes.
+ printf %48s x || fail=1
+} > $big || fail=1
du -k $big > out1 || fail=1
rm -f $big