This reordering should avoid *some* cache misses during path lookup.
Successful path lookup is guaranteed to load at least ->i_mode,
->i_opflags and ->i_acl, while the common case avoids looking at any
other fields.
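
As a rough userspace illustration (mock types and flag names, not the
kernel's actual code), the per-component check during lookup only needs
to touch these fields:

  /* Hedged sketch: mock stand-ins for kernel types and flags. */
  #include <stdbool.h>

  typedef unsigned short umode_t;
  struct posix_acl;

  struct inode_mock {
          umode_t i_mode;
          unsigned short i_opflags;
          struct posix_acl *i_acl;
  };

  #define IOP_FASTPERM_MOCK 0x0001 /* stand-in for IOP_FASTPERM */

  /* All three loads land close together when the fields are packed. */
  static bool may_lookup_mock(const struct inode_mock *inode)
  {
          if (!(inode->i_opflags & IOP_FASTPERM_MOCK))
                  return false; /* slow path: ->permission() hook */
          if (inode->i_acl)
                  return false; /* slow path: evaluate the POSIX ACL */
          return (inode->i_mode & 0111) != 0; /* any execute bit set */
  }

  int main(void)
  {
          struct inode_mock dir = {
                  .i_mode = 0755,
                  .i_opflags = IOP_FASTPERM_MOCK,
          };
          return may_lookup_mock(&dir) ? 0 : 1;
  }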
struct inode is not guaranteed to have any particular alignment;
notably, ext4 aligns it to only 8 bytes, so whether nearby fields land
on the same cache line or on adjacent ones comes down to luck (or the
lack of it).
According to pahole, the current layout is:
umode_t i_mode; /* 0 2 */
short unsigned int i_opflags; /* 2 2 */
kuid_t i_uid; /* 4 4 */
kgid_t i_gid; /* 8 4 */
unsigned int i_flags; /* 12 4 */
struct posix_acl * i_acl; /* 16 8 */
struct posix_acl * i_default_acl; /* 24 8 */
->i_acl is unnecessarily separated by 8 bytes from the other hot
fields. With struct inode starting 48 bytes into a cache line this
means an avoidable miss: at its old offset of 16, ->i_acl lands at byte
64, on the next line, while at the new offset of 8 it stays within the
first line. Note the miss remains for the 56-byte case, since 56 + 8
already crosses the line boundary.
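
A self-contained sketch of that arithmetic (assuming 64-byte cache
lines; the two layouts are mocked from the pahole output, not taken
from kernel headers):

  /* Hedged sketch: which 64-byte cache line does ->i_acl land on
   * for the old and new layouts, at struct base offsets 48 and 56? */
  #include <stdio.h>
  #include <stddef.h>

  #define CACHE_LINE 64

  struct old_layout {
          unsigned short i_mode;
          unsigned short i_opflags;
          unsigned int i_uid;
          unsigned int i_gid;
          unsigned int i_flags;
          void *i_acl;
          void *i_default_acl;
  };

  struct new_layout {
          unsigned short i_mode;
          unsigned short i_opflags;
          unsigned int i_flags;
          void *i_acl;
          void *i_default_acl;
          unsigned int i_uid;
          unsigned int i_gid;
  };

  /* Only used for the pointer fields, hence sizeof(void *). */
  static void report(const char *name, size_t field_off, size_t base)
  {
          size_t first = (base + field_off) / CACHE_LINE;
          size_t last = (base + field_off + sizeof(void *) - 1)
                        / CACHE_LINE;
          printf("base %zu: %s spans line(s) %zu-%zu\n",
                 base, name, first, last);
  }

  int main(void)
  {
          size_t bases[] = { 48, 56 };
          for (size_t i = 0; i < 2; i++) {
                  /* i_mode is at offset 0 either way: line base/64 */
                  report("old i_acl",
                         offsetof(struct old_layout, i_acl), bases[i]);
                  report("new i_acl",
                         offsetof(struct new_layout, i_acl), bases[i]);
          }
          return 0;
  }

With base 48 the old layout puts ->i_acl on line 1 while ->i_mode is on
line 0; the new layout keeps both on line 0. With base 56 both layouts
put ->i_acl on line 1, matching the note above.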
New layout:
umode_t i_mode; /* 0 2 */
short unsigned int i_opflags; /* 2 2 */
unsigned int i_flags; /* 4 4 */
struct posix_acl * i_acl; /* 8 8 */
struct posix_acl * i_default_acl; /* 16 8 */
kuid_t i_uid; /* 24 4 */
kgid_t i_gid; /* 28 4 */
I verified with pahole that there are no size or hole changes.
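
For reference, one way to check (the vmlinux path is illustrative; any
object with debug info for struct inode works):

  pahole -C inode vmlinux

pahole prints each member's offset and size plus a trailing summary
with the total size and holes; comparing that summary before and after
the reorder confirms nothing changed.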
This is a stopgap until someone(tm) sanitizes the layout in the first
place, allocation methods aside.
Signed-off-by: Mateusz Guzik <mjguzik@gmail.com>
Link: https://patch.msgid.link/20251109121931.1285366-1-mjguzik@gmail.com
Reviewed-by: Jan Kara <jack@suse.cz>
Signed-off-by: Christian Brauner <brauner@kernel.org>