mm: numa: defer TLB flush for THP migration as long as possible
Author:     Mel Gorman <[email protected]>
AuthorDate: Thu, 19 Dec 2013 01:08:46 +0000 (17:08 -0800)
Commit:     Linus Torvalds <[email protected]>
CommitDate: Thu, 19 Dec 2013 03:04:51 +0000 (19:04 -0800)
THP migration can fail for a variety of reasons, so an early TLB flush may end
up being wasted work.  Defer the flush that guards against THP migration races
until migration is committed and the copy is about to start.
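
As a rough illustration of the resulting ordering, the sketch below uses a
hypothetical helper (the function name and argument list are invented for this
example, and all error handling is omitted): a TLB flush left pending by
change_protection_range() is honoured only once migration has been committed
to and the huge page copy is about to begin.

#include <linux/mm.h>
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

/*
 * Hypothetical sketch only, not a function in the kernel.  Migration may
 * still bail out before this point for a variety of reasons, in which
 * case no flush is issued at all.
 */
static void thp_migrate_flush_sketch(struct mm_struct *mm,
				     struct vm_area_struct *vma,
				     unsigned long mmun_start,
				     unsigned long mmun_end)
{
	/*
	 * Honour a TLB flush left pending by change_protection_range()
	 * only now, immediately before the migration copy starts.
	 */
	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, mmun_start, mmun_end);

	/* ... lock the new page and copy the THP ... */
}

In the actual patch, the check is simply moved from do_huge_pmd_numa_page()
into migrate_misplaced_transhuge_page(), as the hunks below show.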

Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Rik van Riel <[email protected]>
Cc: Alex Thorlton <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/huge_memory.c
mm/migrate.c

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3d2783e10596ac1fc7124e39444f48add28c9b9c..7de1bf85f6833422e16161445b71e328fad2e1f6 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1376,13 +1376,6 @@ int do_huge_pmd_numa_page(struct mm_struct *mm, struct vm_area_struct *vma,
                goto clear_pmdnuma;
        }
 
-       /*
-        * The page_table_lock above provides a memory barrier
-        * with change_protection_range.
-        */
-       if (mm_tlb_flush_pending(mm))
-               flush_tlb_range(vma, haddr, haddr + HPAGE_PMD_SIZE);
-
        /*
         * Migrate the THP to the requested node, returns with page unlocked
         * and pmd_numa cleared.
diff --git a/mm/migrate.c b/mm/migrate.c
index cfb419085261ce4eba3ab7bb6f55ad9b1ff230f8..e9b7102013354197fb2c0d48cf6fb72731c27827 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1759,6 +1759,9 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
                goto out_fail;
        }
 
+       if (mm_tlb_flush_pending(mm))
+               flush_tlb_range(vma, mmun_start, mmun_end);
+
        /* Prepare a page as a migration target */
        __set_page_locked(new_page);
        SetPageSwapBacked(new_page);