This is done by:
- Shrink the size of btrfs_bio::mirror_num
  From a 32-bit unsigned int to u16.
  Normally the btrfs mirror number is either 0 (all profiles), 1 (all
  profiles), 2 (DUP/RAID1/RAID10/RAID5), 3 (RAID1C3) or 4 (RAID1C4).
  But for RAID6 the mirror number can go as large as the number of
  devices in that chunk.
  Currently the limit for the number of devices in a data chunk is
  BTRFS_MAX_DEVS(), which is around 500 for the default 16K nodesize.
  And with the max 64K nodesize, we can have a little over 2000
  devices in a chunk.
  Although I'd argue such setups are way overkill, we don't reject them
  yet, thus u8 is not going to cut it, and we have to use u16 (maxing
  out at 65535).
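  The range argument above can be sketched with a small standalone check
  (the 2000-device figure is the approximate value quoted above, not a
  kernel constant):

  ```c
  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  /*
   * Illustration only, not kernel code: with the max 64K nodesize a
   * chunk can span a little over 2000 devices, so the mirror number
   * must hold values beyond the u8 range, while u16 has ample headroom.
   */
  #define APPROX_MAX_DEVS_64K 2000	/* approximate, for illustration */

  int main(void)
  {
  	assert(APPROX_MAX_DEVS_64K > UINT8_MAX);	/* u8 (255) cannot hold it */
  	assert(APPROX_MAX_DEVS_64K <= UINT16_MAX);	/* u16 (65535) can */
  	printf("u16 is wide enough for mirror_num\n");
  	return 0;
  }
  ```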
- Use bit fields for boolean members
  Although bit fields are not always safe for racy call sites, these
  members are safe:
* csum_search_commit_root
* is_scrub
    These two are set immediately after bbio allocation and are never
    written again, thus they are very safe.
* async_csum
* can_use_append
    These two are set for each split range, and after that there are no
    writes to them from different threads, thus they are also safe.
  And there is space for 4 more bits before the size of btrfs_bio has
  to grow again, which should be future-proof enough.
- Reorder the structure members
  Now we always put the largest member first (after the huge 120-byte
  union), making it easier to fill any holes.
This reduces the size of btrfs_bio by 8 bytes, from 312 bytes to 304 bytes.
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
Signed-off-by: Qu Wenruo <wqu@suse.com>
Reviewed-by: David Sterba <dsterba@suse.com>
Signed-off-by: David Sterba <dsterba@suse.com>
struct btrfs_tree_parent_check parent_check;
};
+ /* For internal use in read end I/O handling */
+ struct work_struct end_io_work;
+
/* End I/O information supplied to btrfs_bio_alloc */
btrfs_bio_end_io_t end_io;
void *private;
- /* For internal use in read end I/O handling */
- unsigned int mirror_num;
atomic_t pending_ios;
- struct work_struct end_io_work;
+ u16 mirror_num;
/* Save the first error status of split bio. */
blk_status_t status;
/* Use the commit root to look up csums (data read bio only). */
- bool csum_search_commit_root;
+ bool csum_search_commit_root:1;
/*
* Since scrub will reuse btree inode, we need this flag to distinguish
* scrub bios.
*/
- bool is_scrub;
+ bool is_scrub:1;
/* Whether the csum generation for data write is async. */
- bool async_csum;
+ bool async_csum:1;
/* Whether the bio is written using zone append. */
- bool can_use_append;
+ bool can_use_append:1;
/*
* This member must come last, bio_alloc_bioset will allocate enough