slab: alien caches must not be initialized if the allocation of the alien cache failed
author Christoph Lameter <[email protected]>
Tue, 8 Jan 2019 23:23:00 +0000 (15:23 -0800)
committer Linus Torvalds <[email protected]>
Wed, 9 Jan 2019 01:15:11 +0000 (17:15 -0800)
Callers of __alloc_alien_cache() check its return value for NULL.  We must
do the same check on the kmalloc_node() result inside __alloc_alien_cache()
itself, to avoid NULL pointer dereferences when that allocation fails.

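Concretely, kmalloc_node() may return NULL on allocation failure; the old
code nevertheless passed &alc->ac and &alc->lock, both computed from that
NULL pointer, to init_arraycache() and spin_lock_init(), which write
through them.  A minimal sketch of the pre-patch hazard (illustrative
only; the actual change is in the hunk below):

	alc = kmalloc_node(memsize, gfp, node);
	/* alc may legitimately be NULL here */
	init_arraycache(&alc->ac, entries, batch);	/* writes through a NULL-based pointer */
	spin_lock_init(&alc->lock);			/* likewise */
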
Link: http://lkml.kernel.org/r/010001680f42f192-82b4e12e-1565-4ee0-ae1f-1e98974906aa-000000@email.amazonses.com
Fixes: 49dfc304ba241 ("slab: use the lock on alien_cache, instead of the lock on array_cache")
Fixes: c8522a3a5832b ("Slab: introduce alloc_alien")
Signed-off-by: Christoph Lameter <[email protected]>
Reported-by: [email protected]
Reviewed-by: Andrew Morton <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/slab.c

index 73fe23e649c91abb135cd930a48475021b9f54d2..78eb8c5bf4e4ca126dbba917904216ab27fdf72f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -666,8 +666,10 @@ static struct alien_cache *__alloc_alien_cache(int node, int entries,
        struct alien_cache *alc = NULL;
 
        alc = kmalloc_node(memsize, gfp, node);
-       init_arraycache(&alc->ac, entries, batch);
-       spin_lock_init(&alc->lock);
+       if (alc) {
+               init_arraycache(&alc->ac, entries, batch);
+               spin_lock_init(&alc->lock);
+       }
        return alc;
 }
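
For context, the caller-side check that this return-NULL contract relies on
follows the usual kernel pattern; a hedged sketch (illustrative, not a
verbatim copy of alloc_alien_cache() in mm/slab.c; parameter names are
taken from the hunk above):

	struct alien_cache *alc;

	alc = __alloc_alien_cache(node, entries, batch, gfp);
	if (!alc) {
		/* unwind whatever was set up so far and report the failure */
		return NULL;
	}
	/* alc->ac and alc->lock are initialized only when alc is non-NULL */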