sparc64: optimize struct page zeroing
Author:     Pavel Tatashin <[email protected]>
AuthorDate: Thu, 16 Nov 2017 01:36:48 +0000 (17:36 -0800)
Commit:     Linus Torvalds <[email protected]>
CommitDate: Thu, 16 Nov 2017 02:21:05 +0000 (18:21 -0800)
Add an optimized mm_zero_struct_page() so that struct pages are zeroed
without calling memset().  We do eight to ten regular stores, depending
on the size of struct page.  The compiler optimizes out the conditions
of the switch() statement.

SPARC-M6 with 15T of memory, single thread performance:

                               BASE            FIX  OPTIMIZED_FIX
        bootmem_init   28.440467985s   2.305674818s   2.305161615s
free_area_init_nodes  202.845901673s 225.343084508s 172.556506560s
                      --------------------------------------------
Total                 231.286369658s 227.648759326s 174.861668175s

BASE:          current Linux
FIX:           this patch series without "optimized struct page zeroing"
OPTIMIZED_FIX: this patch series including the current patch

bootmem_init() is where memory for struct pages is zeroed during
allocation.  Note that about two seconds in this function is fixed
overhead: it does not increase as memory is increased.

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Pavel Tatashin <[email protected]>
Reviewed-by: Steven Sistare <[email protected]>
Reviewed-by: Daniel Jordan <[email protected]>
Reviewed-by: Bob Picco <[email protected]>
Acked-by: David S. Miller <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Andrey Ryabinin <[email protected]>
Cc: Ard Biesheuvel <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Sam Ravnborg <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index fd9d9bac7cfa7b3ac96b1fa54dfba71f96e60916..5a9e96be16652bc13bb4e6cd0f298b0e613d5883 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -231,6 +231,36 @@ extern unsigned long _PAGE_ALL_SZ_BITS;
 extern struct page *mem_map_zero;
 #define ZERO_PAGE(vaddr)       (mem_map_zero)
 
+/* This macro must be updated when the size of struct page grows above 80
+ * or shrinks below 64.
+ * The idea is that the compiler optimizes out the switch() statement
+ * and leaves only clrx instructions.
+ */
+#define        mm_zero_struct_page(pp) do {                                    \
+       unsigned long *_pp = (void *)(pp);                              \
+                                                                       \
+        /* Check that struct page is either 64, 72, or 80 bytes */     \
+       BUILD_BUG_ON(sizeof(struct page) & 7);                          \
+       BUILD_BUG_ON(sizeof(struct page) < 64);                         \
+       BUILD_BUG_ON(sizeof(struct page) > 80);                         \
+                                                                       \
+       switch (sizeof(struct page)) {                                  \
+       case 80:                                                        \
+               _pp[9] = 0;     /* fallthrough */                       \
+       case 72:                                                        \
+               _pp[8] = 0;     /* fallthrough */                       \
+       default:                                                        \
+               _pp[7] = 0;                                             \
+               _pp[6] = 0;                                             \
+               _pp[5] = 0;                                             \
+               _pp[4] = 0;                                             \
+               _pp[3] = 0;                                             \
+               _pp[2] = 0;                                             \
+               _pp[1] = 0;                                             \
+               _pp[0] = 0;                                             \
+       }                                                               \
+} while (0)
+
 /* PFNs are real physical page numbers.  However, mem_map only begins to record
  * per-page information starting at pfn_base.  This is to handle systems where
  * the first physical page in the machine is at some huge physical address,