readahead: reduce unnecessary mmap_miss increases
author	Andi Kleen <[email protected]>
Wed, 25 May 2011 00:12:29 +0000 (17:12 -0700)
committer	Linus Torvalds <[email protected]>
Wed, 25 May 2011 15:39:26 +0000 (08:39 -0700)
The original INT_MAX limit is too large.  Reduce it to:

- avoid unnecessarily dirtying/bouncing the cache line

- restore mmap read-around faster when the access pattern changes

Background: in the mosbench exim benchmark, which performs multi-threaded page
faults on a shared struct file, the ra->mmap_miss updates were found to cause
excessive cache line bouncing on tmpfs.  The ra state updates are needless for
tmpfs because readahead is disabled there entirely
(shmem_backing_dev_info.ra_pages == 0).
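
For illustration only, a minimal userspace sketch (not kernel code and not
part of this patch) of what the cap changes.  It assumes the MMAP_LOTSAMISS
value (100) and the decrement-on-cache-hit rule from do_async_mmap_readahead()
in mm/filemap.c of this era; the size of the miss burst is made up.

/*
 * Userspace sketch: model the mmap_miss counter.  Cache-missing faults
 * bump it (up to a cap), cache-hitting faults decrement it, and mmap
 * read-around stays disabled while the counter exceeds MMAP_LOTSAMISS.
 */
#include <stdio.h>
#include <limits.h>

#define MMAP_LOTSAMISS	100	/* same value as in mm/filemap.c */

static void simulate(long misses, int cap)
{
	int mmap_miss = 0;
	long hits_needed = 0;

	/* A burst of cache-missing faults, each bumping the counter up to the cap. */
	while (misses--) {
		if (mmap_miss < cap)
			mmap_miss++;
	}

	/* Count the cache-hitting faults needed before read-around is re-enabled. */
	while (mmap_miss > MMAP_LOTSAMISS) {
		mmap_miss--;
		hits_needed++;
	}

	printf("cap %10d: %ld hits until read-around is re-enabled\n",
	       cap, hits_needed);
}

int main(void)
{
	long misses = 10 * 1000 * 1000;		/* hypothetical miss burst */

	simulate(misses, INT_MAX);		/* before this patch */
	simulate(misses, MMAP_LOTSAMISS * 10);	/* after this patch  */
	return 0;
}

The cap of MMAP_LOTSAMISS * 10 stays well above the MMAP_LOTSAMISS threshold
that disables read-around, but it bounds the number of subsequent cache hits
needed to turn read-around back on, and once the counter saturates the shared
cache line is no longer written on every fault.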

Tested-by: Tim Chen <[email protected]>
Signed-off-by: Andi Kleen <[email protected]>
Signed-off-by: Wu Fengguang <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/filemap.c

index c974a2863897564d097b8e7c2ecfce4928b64ef1..e5131392d32e77175ebfce87ed6084855d392141 100644
@@ -1566,7 +1566,8 @@ static void do_sync_mmap_readahead(struct vm_area_struct *vma,
                return;
        }
 
-       if (ra->mmap_miss < INT_MAX)
+       /* Avoid banging the cache line if not needed */
+       if (ra->mmap_miss < MMAP_LOTSAMISS * 10)
                ra->mmap_miss++;
 
        /*