mm: thp: relocate flush_cache_range() in migrate_misplaced_transhuge_page()
Author:     Andrea Arcangeli <[email protected]>
AuthorDate: Fri, 26 Oct 2018 22:10:43 +0000 (15:10 -0700)
Commit:     Linus Torvalds <[email protected]>
CommitDate: Fri, 26 Oct 2018 23:38:15 +0000 (16:38 -0700)
There should be no stale cache lines left by the time we overwrite the old
transhuge pmd with the new one.  By then it is already too late to flush
through the virtual address, because the page data has already been copied
to the new physical address.

So flush the cache before the data copy.

Also delete the "end" variable to shut off an "unused variable" warning on
x86, where flush_cache_range() is a noop.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrea Arcangeli <[email protected]>
Acked-by: Kirill A. Shutemov <[email protected]>
Cc: Aaron Tomlin <[email protected]>
Cc: Jerome Glisse <[email protected]>
Cc: Mel Gorman <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/migrate.c

index 905c2264c7885cdf23f4e6f6382153daaff2bd60..b6700f2962f32d77663655901edf403e688395d9 100644 (file)
@@ -1976,7 +1976,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
        struct page *new_page = NULL;
        int page_lru = page_is_file_cache(page);
        unsigned long start = address & HPAGE_PMD_MASK;
-       unsigned long end = start + HPAGE_PMD_SIZE;
 
        new_page = alloc_pages_node(node,
                (GFP_TRANSHUGE_LIGHT | __GFP_THISNODE),
@@ -1999,6 +1998,8 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
        /* anon mapping, we can simply copy page->mapping to the new page: */
        new_page->mapping = page->mapping;
        new_page->index = page->index;
+       /* flush the cache before copying using the kernel virtual address */
+       flush_cache_range(vma, start, start + HPAGE_PMD_SIZE);
        migrate_page_copy(new_page, page);
        WARN_ON(PageLRU(new_page));
 
@@ -2036,7 +2037,6 @@ int migrate_misplaced_transhuge_page(struct mm_struct *mm,
         * new page and page_add_new_anon_rmap guarantee the copy is
         * visible before the pagetable update.
         */
-       flush_cache_range(vma, start, end);
        page_add_anon_rmap(new_page, vma, start, true);
        /*
         * At this point the pmd is numa/protnone (i.e. non present) and the TLB