mm/page_alloc.c: eliminate unsigned confusion in __rmqueue_fallback
author	Rasmus Villemoes <[email protected]>
	Mon, 10 Jul 2017 22:49:26 +0000 (15:49 -0700)
committer	Linus Torvalds <[email protected]>
	Mon, 10 Jul 2017 23:32:32 +0000 (16:32 -0700)
Since current_order starts as MAX_ORDER-1 and is then only decremented,
the second half of the loop condition seems superfluous.  However, if
order is 0, we may decrement current_order past 0, making it UINT_MAX.
This is obviously too subtle, as the confused reports in [1] and [2]
show.
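
As a standalone illustration (not kernel code; MAX_ORDER is hardcoded
here purely for the sake of the example), this sketch shows why the
seemingly superfluous condition was load-bearing: with unsigned types
and order == 0, dropping "current_order <= MAX_ORDER-1" would make the
loop run forever, because --current_order wraps from 0 to UINT_MAX:

  #include <stdio.h>

  #define MAX_ORDER 11

  int main(void)
  {
          unsigned int order = 0;
          unsigned int current_order;

          /*
           * "current_order >= order" is always true for unsigned types
           * when order == 0; only the second condition stops the loop,
           * after --current_order wraps from 0 to UINT_MAX.
           */
          for (current_order = MAX_ORDER - 1;
               current_order >= order && current_order <= MAX_ORDER - 1;
               --current_order)
                  printf("current_order = %u\n", current_order);

          return 0;
  }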

Since we need to add some comment anyway, change the two variables to
signed, making the counting-down for loop look more familiar, and
apparently also making gcc generate slightly smaller code.

[1] https://lkml.org/lkml/2016/6/20/493
[2] https://lkml.org/lkml/2017/6/19/345

[[email protected]: fix up reject fixupping]
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Rasmus Villemoes <[email protected]>
Reported-by: Hao Lee <[email protected]>
Acked-by: Wei Yang <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
mm/page_alloc.c

index 869035717048020b6bb9e88477e4e82a95760c87..d90c31951b9010e4440947b1917f00098712c59d 100644
@@ -2206,12 +2206,16 @@ static bool unreserve_highatomic_pageblock(const struct alloc_context *ac,
  * list of requested migratetype, possibly along with other pages from the same
  * block, depending on fragmentation avoidance heuristics. Returns true if
  * fallback was found so that __rmqueue_smallest() can grab it.
+ *
+ * The use of signed ints for order and current_order is a deliberate
+ * deviation from the rest of this file, to make the for loop
+ * condition simpler.
  */
 static inline bool
-__rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
+__rmqueue_fallback(struct zone *zone, int order, int start_migratetype)
 {
        struct free_area *area;
-       unsigned int current_order;
+       int current_order;
        struct page *page;
        int fallback_mt;
        bool can_steal;
@@ -2221,8 +2225,7 @@ __rmqueue_fallback(struct zone *zone, unsigned int order, int start_migratetype)
         * approximates finding the pageblock with the most free pages, which
         * would be too costly to do exactly.
         */
-       for (current_order = MAX_ORDER-1;
-                               current_order >= order && current_order <= MAX_ORDER-1;
+       for (current_order = MAX_ORDER - 1; current_order >= order;
                                --current_order) {
                area = &(zone->free_area[current_order]);
                fallback_mt = find_suitable_fallback(area, current_order,