migration: only migrate_prep() once per move_pages()
author Brice Goglin <[email protected]>
Tue, 16 Jun 2009 22:32:43 +0000 (15:32 -0700)
committer Linus Torvalds <[email protected]>
Wed, 17 Jun 2009 02:47:41 +0000 (19:47 -0700)
migrate_prep() is fairly expensive (72us on a 16-core 1.9GHz Barcelona).
Commit 3140a2273009c01c27d316f35ab76a37e105fdd8 improved move_pages()
throughput by breaking the work into chunks, but it also made
migrate_prep() be called once per chunk (every 128 pages or so) instead
of once per move_pages() call.

This patch reverts to calling migrate_prep() only once per move_pages()
call, as we did before 2.6.29.  It is also a followup to commit
0aedadf91a70a11c4a3e7c7d99b21e5528af8d5d ("mm: move migrate_prep out from
under mmap_sem").

This improves migration throughput on the above machine from 600MB/s to
750MB/s.
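
For illustration only (not part of the patch): a minimal userspace sketch
of the resulting structure, using hypothetical stand-ins for migrate_prep(),
do_pages_move() and do_move_page_to_node_array().  The expensive prep step
is hoisted out of the per-chunk loop so it runs once per request rather
than once per chunk.

#include <stdio.h>

#define CHUNK_NR 128                    /* pages handled per chunk */

static void expensive_prep(void)
{
	/* stand-in for migrate_prep(): ~72us of per-cpu drain work */
	puts("prep");
}

static void process_chunk(size_t start, size_t nr)
{
	/* stand-in for do_move_page_to_node_array() on one chunk */
	printf("moving pages %zu..%zu\n", start, start + nr - 1);
}

static void move_pages_request(size_t nr_pages)
{
	size_t i;

	expensive_prep();               /* once per request (this patch) */

	for (i = 0; i < nr_pages; i += CHUNK_NR) {
		size_t nr = nr_pages - i < CHUNK_NR ? nr_pages - i : CHUNK_NR;

		/* before this patch, the prep call ran here, per chunk */
		process_chunk(i, nr);
	}
}

int main(void)
{
	move_pages_request(300);        /* 3 chunks, only one prep call */
	return 0;
}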

Signed-off-by: Brice Goglin <[email protected]>
Acked-by: Christoph Lameter <[email protected]>
Cc: KOSAKI Motohiro <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Lee Schermerhorn <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/migrate.c

index 5a24923e7fd7a398fc3f0b323270eafe0b4e29b3..939888f9ddab21ecabdc2f732a8ee604cb56da13 100644
@@ -820,7 +820,6 @@ static int do_move_page_to_node_array(struct mm_struct *mm,
        struct page_to_node *pp;
        LIST_HEAD(pagelist);
 
-       migrate_prep();
        down_read(&mm->mmap_sem);
 
        /*
@@ -907,6 +906,9 @@ static int do_pages_move(struct mm_struct *mm, struct task_struct *task,
        pm = (struct page_to_node *)__get_free_page(GFP_KERNEL);
        if (!pm)
                goto out;
+
+       migrate_prep();
+
        /*
         * Store a chunk of page_to_node array in a page,
         * but keep the last one as a marker