android_kernel_motorola_sm6225/arch/arm64/lib
Robin Murphy aaf4e1b05c arm64: Avoid premature usercopy failure
commit 295cf156231ca3f9e3a66bde7fab5e09c41835e0 upstream.

Al reminds us that the usercopy API must only return complete failure
if absolutely nothing could be copied. Currently, if userspace does
something silly like giving us an unaligned pointer to Device memory,
or a size which overruns MTE tag bounds, we may fail to honour that
requirement when faulting on a multi-byte access even though a smaller
access could have succeeded.

Add a mitigation to the fixup routines to fall back to a single-byte
copy if we faulted on a larger access before anything has been written
to the destination, to guarantee making *some* forward progress. We
needn't be too concerned about the overall performance since this should
only occur when callers are doing something a bit dodgy in the first
place. Particularly broken userspace might still be able to trick
generic_perform_write() into an infinite loop by targeting write() at
an mmap() of some read-only device register where the fault-in load
succeeds but any store synchronously aborts such that copy_to_user() is
genuinely unable to make progress, but, well, don't do that...
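
The fix itself lives in the assembly fixup paths of copy_to_user.S, copy_from_user.S and copy_in_user.S (roughly: the fixup checks whether the destination pointer has advanced at all and, if not, attempts a single-byte access before computing the "bytes not copied" return value). As a rough illustration only, here is a minimal C toy model of the idea; the function name toy_usercopy and its 'writable' fault-simulation parameter are invented for this sketch and are not kernel API:

    #include <stddef.h>
    #include <string.h>

    /*
     * Toy model only -- not the kernel code. "Faulting" is simulated by
     * the caller-supplied 'writable' limit on how many destination bytes
     * may be touched. Returns the number of bytes NOT copied, matching
     * the copy_to_user()/copy_from_user() convention.
     */
    static size_t toy_usercopy(char *dst, const char *src, size_t n,
                               size_t writable)
    {
        size_t done = 0;

        if (n <= writable) {
            /* The wide copy succeeds in full. */
            memcpy(dst, src, n);
            done = n;
        } else if (n > 0 && writable > 0) {
            /*
             * The wide access would fault before writing anything.
             * Fall back to a single byte so *some* forward progress
             * is made and the caller never sees "nothing copied"
             * when a smaller access could still have succeeded.
             */
            dst[0] = src[0];
            done = 1;
        }

        return n - done;
    }
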

CC: stable@vger.kernel.org
Reported-by: Chen Huang <chenhuang5@huawei.com>
Suggested-by: Al Viro <viro@zeniv.linux.org.uk>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/dc03d5c675731a1f24a62417dba5429ad744234e.1626098433.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Chen Huang <chenhuang5@huawei.com>
2021-11-02 18:26:43 +01:00
atomic_ll_sc.c arm64: introduce CONFIG_ARM64_LSE_ATOMICS as fallback to ll/sc atomics 2015-07-27 15:28:50 +01:00
clear_page.S
clear_user.S arm64: uaccess: Ensure PAN is re-enabled after unhandled uaccess fault 2019-11-24 08:19:14 +01:00
copy_from_user.S arm64: Avoid premature usercopy failure 2021-11-02 18:26:43 +01:00
copy_in_user.S arm64: Avoid premature usercopy failure 2021-11-02 18:26:43 +01:00
copy_page.S arm64/lib: copy_page: use consistent prefetch stride 2017-07-25 10:04:42 +01:00
copy_template.S scripts/spelling.txt: add "overwritting" pattern and fix typo instances 2017-02-27 18:43:47 -08:00
copy_to_user.S arm64: Avoid premature usercopy failure 2021-11-02 18:26:43 +01:00
delay.c arm64: use WFE for long delays 2017-10-13 18:56:15 +01:00
Makefile arm64: lse: remove -fcall-used-x0 flag 2018-11-13 11:08:54 -08:00
memchr.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
memcmp.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
memcpy.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-12-30 11:25:43 +01:00
memmove.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-12-30 11:25:43 +01:00
memset.S arm64: Change .weak to SYM_FUNC_START_WEAK_PI for arch/arm64/lib/mem*.S 2020-12-30 11:25:43 +01:00
strchr.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
strcmp.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
strlen.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
strncmp.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
strnlen.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
strrchr.S arm64: lib: use C string functions with KASAN enabled 2019-12-01 09:17:01 +01:00
tishift.S arm64: export tishift functions to modules 2018-05-21 19:00:48 +01:00
uaccess_flushcache.c arm64: uaccess: Add the uaccess_flushcache.c file 2017-08-10 10:49:21 +01:00