mm: fix Committed_AS underflow on large NR_CPUS environment
The Committed_AS field can underflow in certain situations:

> # while true; do cat /proc/meminfo | grep _AS; sleep 1; done | uniq -c
>       1 Committed_AS: 18446744073709323392 kB
>      11 Committed_AS: 18446744073709455488 kB
>       6 Committed_AS:    35136 kB
>       5 Committed_AS: 18446744073709454400 kB
>       7 Committed_AS:    35904 kB
>       3 Committed_AS: 18446744073709453248 kB
>       2 Committed_AS:    34752 kB
>       9 Committed_AS: 18446744073709453248 kB
>       8 Committed_AS:    34752 kB
>       3 Committed_AS: 18446744073709320960 kB
>       7 Committed_AS: 18446744073709454080 kB
>       3 Committed_AS: 18446744073709320960 kB
>       5 Committed_AS: 18446744073709454080 kB
>       6 Committed_AS: 18446744073709320960 kB

This happens because the per-CPU batching threshold used by the
accounting is proportional to NR_CPUS, which can be greater than 1000,
and meminfo_proc_show() does not check for underflow when printing the
value.
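
The huge numbers in the trace above are just small negative values
printed as unsigned 64-bit integers.  A minimal userspace sketch of
that effect (the -228224 kB value is hypothetical, chosen so the output
matches the first sample above on a 64-bit machine):

    #include <stdio.h>

    int main(void)
    {
        /* A transiently negative committed value, in kB. */
        long committed_kb = -228224;

        /* The field is formatted as an unsigned quantity, so a small
         * negative number wraps around to a value just below 2^64. */
        printf("Committed_AS: %lu kB\n", (unsigned long)committed_kb);
        return 0;
    }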
A calculation proportional to NR_CPUS is the wrong choice in any case:
the likelihood of lock contention is proportional to the number of
online CPUs, not to the theoretical maximum number of CPUs (NR_CPUS).
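
For illustration, a simplified sketch of the NR_CPUS-proportional
batching scheme being replaced (illustrative only, not a verbatim copy
of the removed code): each CPU folds its local delta into the global
counter only once it exceeds a threshold that scales with NR_CPUS, so
on a kernel built with thousands of possible CPUs the global counter
can lag far behind the true value and transiently read negative.

    /* Illustrative sketch of NR_CPUS-proportional batched accounting. */
    #define ACCT_THRESHOLD  (NR_CPUS * 2)   /* illustrative threshold */

    static DEFINE_PER_CPU(long, committed_space);
    static atomic_long_t vm_committed_space;

    void vm_acct_memory(long pages)
    {
        long *local;

        preempt_disable();
        local = &__get_cpu_var(committed_space);
        *local += pages;
        /* Fold into the global counter only past the threshold; until
         * then, up to roughly NR_CPUS * ACCT_THRESHOLD pages can be
         * missing from (or spuriously present in) the global count. */
        if (*local > ACCT_THRESHOLD || *local < -ACCT_THRESHOLD) {
            atomic_long_add(*local, &vm_committed_space);
            *local = 0;
        }
        preempt_enable();
    }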
The kernel already provides a generic percpu_counter implementation, so
use it: it simplifies the code, and percpu_counter_read_positive() can
never return a negative (underflowed) value.
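
A minimal sketch of the percpu_counter based accounting this switches
to (percpu_counter_add() and percpu_counter_read_positive() are the
real API; the vm_committed_as_read() helper is only illustrative, and
initialisation via percpu_counter_init() is omitted):

    #include <linux/percpu_counter.h>

    static struct percpu_counter vm_committed_as;

    /* Writer side: per-CPU batching is handled inside percpu_counter. */
    static inline void vm_acct_memory(long pages)
    {
        percpu_counter_add(&vm_committed_as, pages);
    }

    /* Reader side, e.g. meminfo_proc_show(): the _positive variant
     * clamps a transiently negative sum to 0, so Committed_AS can no
     * longer be reported as a wrapped-around huge number. */
    static inline unsigned long vm_committed_as_read(void)
    {
        return percpu_counter_read_positive(&vm_committed_as);
    }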
Reported-by: Dave Hansen <[email protected]>
Signed-off-by: KOSAKI Motohiro <[email protected]>
Cc: Eric B Munson <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: <[email protected]> [All kernel versions]
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>