The flogr() inline assembly has no side effects and produces the same
output for the same input. Therefore remove the volatile qualifier,
which allows the compiler to optimize the inline assembly away if its
result is unused.
Also remove the superfluous '\n', which makes the inline assembly appear
larger than it is to compiler inlining heuristics, since they estimate
the size of an asm statement by counting its lines.
Furthermore, change the return type of flogr() to unsigned long and add
the const attribute to the function.
This reduces the kernel image size by 994 bytes (defconfig, gcc 15.2.0).
Suggested-by: Juergen Christ <jchrist@linux.ibm.com>
Reviewed-by: Juergen Christ <jchrist@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
* where the most significant bit has bit number 0.
* If no bit is set this function returns 64.
*/
-static __always_inline unsigned char __flogr(unsigned long word)
+static __always_inline __attribute_const__ unsigned long __flogr(unsigned long word)
{
unsigned long bit;
union register_pair rp __uninitialized;
rp.even = word;
- asm volatile(
- " flogr %[rp],%[rp]\n"
- : [rp] "+d" (rp.pair) : : "cc");
+ asm("flogr %[rp],%[rp]"
+ : [rp] "+d" (rp.pair) : : "cc");
bit = rp.even;
/*
* The result of the flogr instruction is a value in the range