arm64: rwlocks: don't fail trylock purely due to contention
author Will Deacon <[email protected]>
Wed, 22 Jul 2015 17:25:52 +0000 (18:25 +0100)
committer Will Deacon <[email protected]>
Mon, 27 Jul 2015 13:26:34 +0000 (14:26 +0100)
STXR can fail for a number of reasons other than lock contention (for
example, losing the exclusive monitor to an intervening access), so don't
fail an rwlock trylock operation simply because the STXR reported failure.

I'm not aware of any issues with the current code, but this makes it
consistent with spin_trylock and also other architectures (e.g. arch/arm).

Reported-by: Catalin Marinas <[email protected]>
Reviewed-by: Catalin Marinas <[email protected]>
Signed-off-by: Will Deacon <[email protected]>
arch/arm64/include/asm/spinlock.h

index cee128732435c7b99fdedd3d690659f96ab81a9b..0f08ba5cfb3309d01f59cd891acae724f596baa6 100644 (file)
@@ -140,10 +140,11 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
        unsigned int tmp;
 
        asm volatile(
-       "       ldaxr   %w0, %1\n"
-       "       cbnz    %w0, 1f\n"
+       "1:     ldaxr   %w0, %1\n"
+       "       cbnz    %w0, 2f\n"
        "       stxr    %w0, %w2, %1\n"
-       "1:\n"
+       "       cbnz    %w0, 1b\n"
+       "2:\n"
        : "=&r" (tmp), "+Q" (rw->lock)
        : "r" (0x80000000)
        : "memory");
@@ -209,11 +210,12 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
        unsigned int tmp, tmp2 = 1;
 
        asm volatile(
-       "       ldaxr   %w0, %2\n"
+       "1:     ldaxr   %w0, %2\n"
        "       add     %w0, %w0, #1\n"
-       "       tbnz    %w0, #31, 1f\n"
+       "       tbnz    %w0, #31, 2f\n"
        "       stxr    %w1, %w0, %2\n"
-       "1:\n"
+       "       cbnz    %w1, 1b\n"
+       "2:\n"
        : "=&r" (tmp), "+r" (tmp2), "+Q" (rw->lock)
        :
        : "memory");