There are two places that use an open-coded 512K as the async chunk
size.
For compression we have not only a limit on the size a compressed
extent can represent (128K), but also a limit on how large an async
chunk can be (512K).
Although there is a macro for the maximum compressed extent size, there
is no macro for the async chunk size.

Add such a macro and replace the two open-coded SZ_512K usages.
Reviewed-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
#define BTRFS_MAX_COMPRESSED_PAGES (BTRFS_MAX_COMPRESSED / PAGE_SIZE)
static_assert((BTRFS_MAX_COMPRESSED % PAGE_SIZE) == 0);
+/* The max size for a single worker to compress. */
+#define BTRFS_COMPRESSION_CHUNK_SIZE (SZ_512K)
+
/* Maximum size of data before compression */
#define BTRFS_MAX_UNCOMPRESSED (SZ_128K)
struct async_cow *ctx;
struct async_chunk *async_chunk;
unsigned long nr_pages;
- u64 num_chunks = DIV_ROUND_UP(end - start, SZ_512K);
+ u64 num_chunks = DIV_ROUND_UP(end - start, BTRFS_COMPRESSION_CHUNK_SIZE);
int i;
unsigned nofs_flag;
const blk_opf_t write_flags = wbc_to_write_flags(wbc);
atomic_set(&ctx->num_chunks, num_chunks);
for (i = 0; i < num_chunks; i++) {
- u64 cur_end = min(end, start + SZ_512K - 1);
+ u64 cur_end = min(end, start + BTRFS_COMPRESSION_CHUNK_SIZE - 1);
/*
* igrab is called higher up in the call chain, take only the