mm/hugetlb.c: avoid bogus surplus huge page counter
authorHillf Danton <[email protected]>
Tue, 10 Jan 2012 23:08:30 +0000 (15:08 -0800)
committerLinus Torvalds <[email protected]>
Wed, 11 Jan 2012 00:30:45 +0000 (16:30 -0800)
If we have to hand the newly allocated huge page back to the page allocator,
for any reason, the counters that were changed up front must be recovered.

This affects only s390 at present.

Signed-off-by: Hillf Danton <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Acked-by: KAMEZAWA Hiroyuki <[email protected]>
Cc: Martin Schwidefsky <[email protected]>
Cc: Heiko Carstens <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/hugetlb.c

index bb7dc405634ff38bb25a4765618a9e295d3069d7..ea8c3a4cd2ae8acdf52a7a4e862e277f2390c265 100644
@@ -800,7 +800,7 @@ static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
 
        if (page && arch_prepare_hugepage(page)) {
                __free_pages(page, huge_page_order(h));
-               return NULL;
+               page = NULL;
        }
 
        spin_lock(&hugetlb_lock);
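
The hunk only makes sense together with the surrounding logic of
alloc_buddy_huge_page(): nr_huge_pages and surplus_huge_pages are
incremented (under hugetlb_lock) before the buddy allocation, and the
locked block that follows this hunk is what decrements them again when
page is NULL.  Returning NULL immediately after __free_pages() skipped
that recovery, so the counters stayed inflated whenever
arch_prepare_hugepage() failed, which can currently only happen on s390.
The following is an abbreviated sketch of the function after the patch,
following the mm/hugetlb.c of this kernel era; elided parts are marked
with "..." and it is not the verbatim source:

static struct page *alloc_buddy_huge_page(struct hstate *h, int nid)
{
	struct page *page;

	...
	/* Counters are bumped up front, before the buddy allocation. */
	spin_lock(&hugetlb_lock);
	if (h->surplus_huge_pages >= h->nr_overcommit_huge_pages) {
		spin_unlock(&hugetlb_lock);
		return NULL;
	}
	h->nr_huge_pages++;
	h->surplus_huge_pages++;
	spin_unlock(&hugetlb_lock);

	page = alloc_pages(...);	/* buddy allocation, details elided */

	if (page && arch_prepare_hugepage(page)) {
		__free_pages(page, huge_page_order(h));
		page = NULL;	/* fall through so the counters are undone */
	}

	spin_lock(&hugetlb_lock);
	if (page) {
		/* per-node accounting, compound destructor, vm event, ... */
	} else {
		/*
		 * The old "return NULL" above skipped this recovery,
		 * leaving nr_huge_pages/surplus_huge_pages too high.
		 */
		h->nr_huge_pages--;
		h->surplus_huge_pages--;
		__count_vm_event(HTLB_BUDDY_PGALLOC_FAIL);
	}
	spin_unlock(&hugetlb_lock);

	return page;
}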