mm: mlock: remove lru_add_drain_all()
author Shakeel Butt <[email protected]>
Thu, 16 Nov 2017 01:38:26 +0000 (17:38 -0800)
committer Linus Torvalds <[email protected]>
Thu, 16 Nov 2017 02:21:07 +0000 (18:21 -0800)
commit 72b03fcd5d515441d4aefcad01c1c4392c8099c9
tree d96a20e332ed3f147ef4431c5e410205fc376e9a
parent 4518085e127dff97e74f74a8780d7564e273bec8
mm: mlock: remove lru_add_drain_all()

lru_add_drain_all() is not required by mlock(): it drains everything that
has been cached at the time mlock is called, and that is not really
related to the memory which will be faulted in (and cached) and mlocked
by the syscall itself.

If anything, lru_add_drain_all() should be called _after_ the pages have
been faulted in and mlocked, but even that is not strictly needed because
those pages would reach the appropriate LRUs lazily during the reclaim
path.  Moreover, follow_page_pte() (gup) already drains the local per-cpu
(pcp) LRU cache.

On larger machines the overhead of lru_add_drain_all() in mlock() can be
significant when mlocking data that is already in memory.  We have
observed high mlock() latency caused by lru_add_drain_all() when users
were mlocking in-memory tmpfs files.

[[email protected]: changelog fix]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Shakeel Butt <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Balbir Singh <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Cc: "Kirill A. Shutemov" <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Yisheng Xie <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Greg Thelen <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Anshuman Khandual <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/mlock.c
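
The change is confined to mm/mlock.c.  As a minimal sketch of what the
removal amounts to (illustrative only, not the verbatim upstream diff,
and assuming the lru_add_drain_all() calls sit near the top of do_mlock()
and the mlockall() syscall as in kernels of that era):

--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ static __must_check int do_mlock(unsigned long start, size_t len, vm_flags_t flags)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	lru_add_drain_all();	/* flush pagevec */
-
 	len = PAGE_ALIGN(len + (offset_in_page(start)));
@@ SYSCALL_DEFINE1(mlockall, int, flags)
 	if (!can_do_mlock())
 		return -EPERM;
 
-	if (flags & MCL_CURRENT)
-		lru_add_drain_all();	/* flush pagevec */
-
 	lock_limit = rlimit(RLIMIT_MEMLOCK);

With the drain gone, pages sitting in per-cpu LRU caches are simply left
to be moved to their LRUs lazily, as described above.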