git.ipfire.org Git - thirdparty/kernel/stable.git/commit
arm64: entry: always restore x0 from the stack on syscall return
author    Will Deacon <will.deacon@arm.com>
          Wed, 19 Aug 2015 14:57:09 +0000 (15:57 +0100)
committer Greg Kroah-Hartman <gregkh@linuxfoundation.org>
          Tue, 29 Sep 2015 17:33:19 +0000 (19:33 +0200)
commit d2047f152a4338d485c18273573589ad688cb038
tree   bc530bbe93a15702479cfbf8b6436d8547461a49
parent 212e80ab1da1611237b3d2f33106e51852cef15a
arm64: entry: always restore x0 from the stack on syscall return

commit 412fcb6cebd758d080cacd5a41a0cbc656ea5fce upstream.

We have a micro-optimisation on the fast syscall return path where we
take care to keep x0 live with the return value from the syscall so that
we can avoid restoring it from the stack. The benefit of doing this is
fairly suspect, since we will be restoring x1 from the stack anyway (and
its slot lives adjacent to x0's in the pt_regs structure); the only
additional cost is saving x0 back to pt_regs after the syscall handler,
which could be seen as a poor man's prefetch.

More importantly, this causes issues with the context tracking code.

The ct_user_enter macro ends up branching into C code, which is free to
use x0 as a scratch register and consequently leads to us returning junk
back to userspace as the syscall return value. Rather than special case
the context-tracking code, this patch removes the questionable
optimisation entirely.
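The hazard and the fix can be sketched roughly as follows. This is an
illustrative simplification, not the actual entry.S hunk: the ct_user_enter
macro expansion is condensed into a single call, and the S_X0 offset name
follows the kernel's pt_regs offset conventions:

```asm
// Simplified sketch of the fast syscall return path (illustration only).
ret_fast_syscall:
	str	x0, [sp, #S_X0]		// save syscall return value into pt_regs
	// ct_user_enter may branch into C context-tracking code; under the
	// AArch64 procedure call standard, x0 is a caller-saved argument/
	// result register, so the C callee is free to clobber it.
	bl	context_tracking_user_enter
	// Before the fix: x0 was assumed to still hold the return value here
	// and was not reloaded. After the fix: x0 is unconditionally restored
	// from the stack, just like x1 (the two slots are adjacent in
	// pt_regs, so a single ldp covers both).
	ldp	x0, x1, [sp, #S_X0]	// restore x0 and x1 from pt_regs
	eret				// return to userspace
```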

Cc: Larry Bassel <larry.bassel@linaro.org>
Cc: Kevin Hilman <khilman@linaro.org>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Hanjun Guo <hanjun.guo@linaro.org>
Tested-by: Hanjun Guo <hanjun.guo@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
arch/arm64/kernel/entry.S