mm: introduce free_highmem_page() helper to free highmem pages into buddy system
author Jiang Liu <[email protected]>
Mon, 29 Apr 2013 22:07:00 +0000 (15:07 -0700)
committer Linus Torvalds <[email protected]>
Mon, 29 Apr 2013 22:54:31 +0000 (15:54 -0700)
The original goal of this patchset is to fix the bug reported by

  https://bugzilla.kernel.org/show_bug.cgi?id=53501

Now it has also been expanded to reduce common code used by memory
initialization.

This is the second part, which applies on top of the previous part at:
  http://marc.info/?l=linux-mm&m=136289696323825&w=2

It introduces a helper function free_highmem_page() to free highmem
pages into the buddy system when initializing mm subsystem.
Introducing free_highmem_page() is one step toward cleaning up accesses
and modifications of totalhigh_pages, totalram_pages and
zone->managed_pages, etc.  The hope is that we can eventually remove all
references to totalhigh_pages from the arch/ subdirectory.

We have only tested this patchset on x86 platforms, and have done basic
compilation tests using cross-compilers from ftp.kernel.org.  That means
some code may not compile on some architectures, so any help testing
this patchset is welcome!

There are several other parts still under development:
Part 3: refine code to manage totalram_pages, totalhigh_pages and
zone->managed_pages
Part 4: introduce helper functions to simplify mem_init() and remove the
global variable num_physpages.

This patch:

Introduce helper function free_highmem_page(), which will be used by
architectures with HIGHMEM enabled to free highmem pages into the buddy
system.
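
As an illustration only (not part of this patch), here is a minimal
sketch of how an architecture's mem_init() might use the new helper.
The function name free_highpages() and the pfn bounds (max_low_pfn,
max_pfn) are placeholders for whatever the architecture actually tracks:

  #ifdef CONFIG_HIGHMEM
  /* Hypothetical sketch: hand a highmem pfn range to the buddy system. */
  static void __init free_highpages(void)
  {
          unsigned long pfn;

          /* Assume [max_low_pfn, max_pfn) covers this arch's highmem. */
          for (pfn = max_low_pfn; pfn < max_pfn; pfn++) {
                  if (!pfn_valid(pfn))
                          continue;
                  /* Adjusts totalram_pages and totalhigh_pages for us. */
                  free_highmem_page(pfn_to_page(pfn));
          }
  }
  #endif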

Signed-off-by: Jiang Liu <[email protected]>
Cc: "David S. Miller" <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: "Suzuki K. Poulose" <[email protected]>
Cc: Alexander Graf <[email protected]>
Cc: Arnd Bergmann <[email protected]>
Cc: Attilio Rao <[email protected]>
Cc: Benjamin Herrenschmidt <[email protected]>
Cc: Cong Wang <[email protected]>
Cc: David Daney <[email protected]>
Cc: David Howells <[email protected]>
Cc: Geert Uytterhoeven <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: James Hogan <[email protected]>
Cc: Jeff Dike <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Jiang Liu <[email protected]>
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: Konstantin Khlebnikov <[email protected]>
Cc: Linus Walleij <[email protected]>
Cc: Marek Szyprowski <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Nazarewicz <[email protected]>
Cc: Michal Simek <[email protected]>
Cc: Michel Lespinasse <[email protected]>
Cc: Minchan Kim <[email protected]>
Cc: Paul Mackerras <[email protected]>
Cc: Ralf Baechle <[email protected]>
Cc: Richard Weinberger <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Russell King <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Stephen Boyd <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Yinghai Lu <[email protected]>
Reviewed-by: Pekka Enberg <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
include/linux/mm.h
mm/page_alloc.c

index d064c73c925e17130dcb08614a071e75d5571e6a..43b70d5f82019b08d3f42c4b68c9b43fc4eb2b2d 100644
@@ -1303,6 +1303,13 @@ extern void free_initmem(void);
  */
 extern unsigned long free_reserved_area(unsigned long start, unsigned long end,
                                        int poison, char *s);
+#ifdef CONFIG_HIGHMEM
+/*
+ * Free a highmem page into the buddy system, adjusting totalhigh_pages
+ * and totalram_pages.
+ */
+extern void free_highmem_page(struct page *page);
+#endif
 
 static inline void adjust_managed_page_count(struct page *page, long count)
 {
index 5c660f5ba3d340a5cb55901c0147ccba6b7a1ab7..72da11c6804d203e38d77d2cd7688b9dd73fcdd2 100644
@@ -5141,6 +5141,15 @@ unsigned long free_reserved_area(unsigned long start, unsigned long end,
        return pages;
 }
 
+#ifdef CONFIG_HIGHMEM
+void free_highmem_page(struct page *page)
+{
+       __free_reserved_page(page);
+       totalram_pages++;
+       totalhigh_pages++;
+}
+#endif
+
 /**
  * set_dma_reserve - set the specified number of pages reserved in the first zone
  * @new_dma_reserve: The number of pages to mark reserved