btrfs: switch to btrfs_compress_bio() interface for compressed writes
This switch has the following benefits:
- A single structure to handle all compression
  No more extra members like compressed_folios[] nor compress_type; all
  those members are folded into that single structure (see the sketch
  right after this list).
  This means the structure of async_extent is much smaller.
- Simpler error handling
  A single cleanup_compressed_bio() will handle everything; there is no
  extra compressed_folios[] array to bother with.
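For illustration only, the slimmed async_extent could look roughly like
the following sketch; the bbio member is an assumption of this sketch,
not necessarily the patch's exact layout:

  struct async_extent {
          u64 start;
          u64 ram_size;
          /*
           * Compressed data, length and compress type all travel in
           * the bio now (member name here is illustrative).
           */
          struct btrfs_bio *bbio;
          struct list_head list;
  };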
Some extra notes:
- Compressed folios releasing
  Now we use a bio_for_each_folio_all() loop to release the folios of
  the bio (see the cleanup sketch after these notes). This works for
  both the old compressed_folios[] array and the new pure bio method.
  For the old compressed_folios[] case, all folios of that array are
  queued into the bio, thus releasing the folios from the bio is the
  same as releasing each folio of that array. We just need to make sure
  there is no double releasing from both the array and the bio.
  For the new pure bio method, that array is NULL, so it is just the
  usual folio releasing of the bio.
  The only extra note is for end_bbio_compressed_read(): since its
  folios are allocated using btrfs_alloc_folio_array(), they should
  only be released by a regular folio_put(), not
  btrfs_free_compr_folio().
- Rounding up the bio to block size
We cannot simply increase bi_size, as that will not increase the
length of the last bvec.
  Thus we have to properly add the last part into the bio.
  This will be done by the new helper round_up_last_block() (sketched
  after these notes).
  The reason we do not round those bios up at compression time is to
  keep the unaligned compressed size, so that it can be utilized for
  inline extents.
  If we rounded the bios up inside *_compress_bio(), every compressed
  bio would be at least one fs block in size, resulting in no inline
  compressed extents.
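For reference, a minimal sketch of the cleanup described above; the
function body and the is_read parameter are illustrative assumptions of
this sketch, not the patch's exact code:

  static void cleanup_compressed_bio(struct bio *bio, bool is_read)
  {
          struct folio_iter fi;

          /*
           * Covers both cases: with the old compressed_folios[] all
           * array folios are queued into the bio, with the pure bio
           * method the array is NULL.
           */
          bio_for_each_folio_all(fi, bio) {
                  if (is_read)
                          /* Allocated by btrfs_alloc_folio_array(). */
                          folio_put(fi.folio);
                  else
                          btrfs_free_compr_folio(fi.folio);
          }
  }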
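And a sketch of the round-up idea; the helper body below is an
assumption illustrating why the tail has to be added as a real bvec
range instead of only bumping bi_size:

  static void round_up_last_block(struct bio *bio, u32 blocksize)
  {
          struct bio_vec *bv = &bio->bi_io_vec[bio->bi_vcnt - 1];
          u32 size = bio->bi_iter.bi_size;
          u32 pad = round_up(size, blocksize) - size;

          /*
           * Bumping bi_size alone would not grow the last bvec, so
           * add the tail range of the last folio into the bio for
           * real.  Assumes the last folio still has room for the
           * padding.
           */
          if (pad)
                  bio_add_folio_nofail(bio, page_folio(bv->bv_page),
                                       pad, bv->bv_offset + bv->bv_len);
  }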
Reviewed-by: Boris Burkov <boris@bur.io>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>