For certain sizes, mke2fs's hugefile creation would fail with the error:
mke2fs: Could not allocate block in ext2 filesystem while creating huge file 0
This would happen because we had failed to reserve enough space for
the metadata blocks for the hugefile. There were two problems:
1) The overhead calculation function failed to take into account the
cluster size for bigalloc file systems.
2) In the case where num_blocks is 0 and num_files is 1, the overhead
calculation function was passed a size of 0, which caused the
calculated overhead to be zero.
Google-Bug-Id: 123239032
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
e_blocks2 = (e_blocks + extents_per_block - 1) / extents_per_block;
e_blocks3 = (e_blocks2 + extents_per_block - 1) / extents_per_block;
e_blocks4 = (e_blocks3 + extents_per_block - 1) / extents_per_block;
- return e_blocks + e_blocks2 + e_blocks3 + e_blocks4;
+ return (e_blocks + e_blocks2 + e_blocks3 + e_blocks4) *
+ EXT2FS_CLUSTER_RATIO(fs);
}
/*
num_blocks = fs_blocks / num_files;
}
- num_slack += calc_overhead(fs, num_blocks) * num_files;
+ num_slack += (calc_overhead(fs, num_blocks ? num_blocks : fs_blocks) *
+ num_files);
num_slack += (num_files / 16) + 1; /* space for dir entries */
goal = get_start_block(fs, num_slack);
goal = round_up_align(goal, align, part_offset);