arch,alpha: Convert smp_mb__*() to the asm-generic primitives
author Peter Zijlstra <[email protected]>
Wed, 12 Mar 2014 16:11:00 +0000 (17:11 +0100)
committer Ingo Molnar <[email protected]>
Fri, 18 Apr 2014 09:40:31 +0000 (11:40 +0200)
The Alpha ll/sc primitives do not imply any sort of barrier; therefore
the smp_mb__{before,after}_*() barriers must be full barriers. A full
barrier is already the default provided by asm-generic/barrier.h, so
simply remove the current arch-specific definitions.

Signed-off-by: Peter Zijlstra <[email protected]>
Acked-by: Paul E. McKenney <[email protected]>
Link: http://lkml.kernel.org/n/[email protected]
Cc: Ivan Kokshaysky <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Matt Turner <[email protected]>
Cc: Richard Henderson <[email protected]>
Cc: [email protected]
Cc: [email protected]
Signed-off-by: Ingo Molnar <[email protected]>
arch/alpha/include/asm/atomic.h
arch/alpha/include/asm/bitops.h

index 78b03ef39f6f0c2960e0b9c4b45eeae47eff07fa..ed60a1ee1ed3813e4ad873d4dab3ea5a0eb19702 100644 (file)
@@ -292,9 +292,4 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 #define atomic_dec(v) atomic_sub(1,(v))
 #define atomic64_dec(v) atomic64_sub(1,(v))
 
-#define smp_mb__before_atomic_dec()    smp_mb()
-#define smp_mb__after_atomic_dec()     smp_mb()
-#define smp_mb__before_atomic_inc()    smp_mb()
-#define smp_mb__after_atomic_inc()     smp_mb()
-
 #endif /* _ALPHA_ATOMIC_H */
index a19ba5efea4ca77f6685108e49f0430ed21663d0..4bdfbd444e632a3050b0e7ebb5c3b93d1da7fc0f 100644 (file)
@@ -53,9 +53,6 @@ __set_bit(unsigned long nr, volatile void * addr)
        *m |= 1 << (nr & 31);
 }
 
-#define smp_mb__before_clear_bit()     smp_mb()
-#define smp_mb__after_clear_bit()      smp_mb()
-
 static inline void
 clear_bit(unsigned long nr, volatile void * addr)
 {