locking/core: Fix deadlock during boot on systems with GENERIC_LOCKBREAK
author Will Deacon <[email protected]>
Tue, 28 Nov 2017 18:42:18 +0000 (18:42 +0000)
committer Ingo Molnar <[email protected]>
Tue, 12 Dec 2017 10:24:01 +0000 (11:24 +0100)
Commit:

  a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")

removed the definition of raw_spin_can_lock(), causing the GENERIC_LOCKBREAK
spin_lock() routines to poll the ->break_lock field when waiting on a lock.

This has been reported to cause a deadlock during boot on s390, because
the ->break_lock field is also set by the waiters, and can potentially
remain set indefinitely if no other CPUs come in to take the lock after
it has been released.
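
For context, here is a de-macro'd sketch of the affected slow path,
paraphrased from the BUILD_LOCK_OPS() macro in kernel/locking/spinlock.c
(spin case shown; the helper names are spelled out here for readability and
are not quoted verbatim from the source):

  void __raw_spin_lock(raw_spinlock_t *lock)
  {
          for (;;) {
                  preempt_disable();
                  if (do_raw_spin_trylock(lock))
                          break;                  /* lock acquired */
                  preempt_enable();

                  if (!lock->break_lock)
                          lock->break_lock = 1;   /* hint to the holder: please release */

                  /*
                   * ->break_lock is only cleared below, after some CPU has
                   * acquired the lock.  If the holder drops the lock and no
                   * other CPU takes it, this inner loop never terminates and
                   * the trylock above is never retried: the reported
                   * boot-time deadlock.
                   */
                  while (lock->break_lock)
                          arch_spin_relax(&lock->raw_lock);
          }
          lock->break_lock = 0;
  }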

This patch removes the explicit spinning on ->break_lock from the waiters,
instead relying on the outer trylock() operation to determine when the
lock is available.
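
With the change below applied, the same sketch becomes (again paraphrased,
not verbatim):

          for (;;) {
                  preempt_disable();
                  if (do_raw_spin_trylock(lock))
                          break;                  /* the outer trylock now detects a free lock */
                  preempt_enable();

                  if (!lock->break_lock)
                          lock->break_lock = 1;   /* still hint the holder to release */

                  arch_spin_relax(&lock->raw_lock);       /* relax once, then retry the trylock */
          }
          lock->break_lock = 0;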

Reported-by: Sebastian Ott <[email protected]>
Tested-by: Sebastian Ott <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Fixes: a8a217c22116 ("locking/core: Remove {read,spin,write}_can_lock()")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
kernel/locking/spinlock.c

index 1fd1a7543cdddf39197acaa3882a0f9de7ddb3ab..0ebb253e21999232b639b1248763d963bc790e8c 100644
@@ -68,8 +68,8 @@ void __lockfunc __raw_##op##_lock(locktype##_t *lock)                 \
                                                                        \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               while ((lock)->break_lock)                              \
-                       arch_##op##_relax(&lock->raw_lock);             \
+                                                                       \
+               arch_##op##_relax(&lock->raw_lock);                     \
        }                                                               \
        (lock)->break_lock = 0;                                         \
 }                                                                      \
@@ -88,8 +88,8 @@ unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)        \
                                                                        \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               while ((lock)->break_lock)                              \
-                       arch_##op##_relax(&lock->raw_lock);             \
+                                                                       \
+               arch_##op##_relax(&lock->raw_lock);                     \
        }                                                               \
        (lock)->break_lock = 0;                                         \
        return flags;                                                   \