x86/fpu: Add might_fault() to user_insn()
Author:     Sebastian Andrzej Siewior <[email protected]>
AuthorDate: Wed, 28 Nov 2018 22:20:11 +0000 (23:20 +0100)
Commit:     Borislav Petkov <[email protected]>
CommitDate: Mon, 3 Dec 2018 18:15:32 +0000 (19:15 +0100)
Every user of user_insn() passes a user memory pointer to this macro.

Add might_fault() to user_insn() so that callers which use this macro
in sections where page faulting is not allowed can be spotted.
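
As a hypothetical illustration (not part of this patch, and the helper
name example_save_fx() is made up): with might_fault() in place and the
relevant debug options enabled (e.g. CONFIG_DEBUG_ATOMIC_SLEEP), a
sequence that reaches user_insn() with page faults disabled would now
trigger a warning. copy_fxregs_to_user() is one of the existing
user_insn() users:

	/*
	 * Hypothetical example of a caller that would now be flagged:
	 * user_insn() is reached between pagefault_disable() and
	 * pagefault_enable(), i.e. in a section where faulting on the
	 * user pointer is not allowed.
	 */
	static int example_save_fx(struct fxregs_state __user *buf)
	{
		int err;

		pagefault_disable();
		/* copy_fxregs_to_user() expands to user_insn(fxsave ...) */
		err = copy_fxregs_to_user(buf);
		pagefault_enable();

		return err;
	}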

 [ bp: Space it out to make it more visible. ]

Signed-off-by: Sebastian Andrzej Siewior <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Jason A. Donenfeld" <[email protected]>
Cc: Andy Lutomirski <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Jann Horn <[email protected]>
Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: kvm ML <[email protected]>
Cc: x86-ml <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
diff --git a/arch/x86/include/asm/fpu/internal.h b/arch/x86/include/asm/fpu/internal.h
index 69dcdf195b6112b691616e2512f8a4ecca4796a1..fa2c93cb42a27e9eecd3774a6fba2cef9dffa334 100644
--- a/arch/x86/include/asm/fpu/internal.h
+++ b/arch/x86/include/asm/fpu/internal.h
@@ -106,6 +106,9 @@ extern void fpstate_sanitize_xstate(struct fpu *fpu);
 #define user_insn(insn, output, input...)                              \
 ({                                                                     \
        int err;                                                        \
+                                                                       \
+       might_fault();                                                  \
+                                                                       \
        asm volatile(ASM_STAC "\n"                                      \
                     "1:" #insn "\n\t"                                  \
                     "2: " ASM_CLAC "\n"                                \