x86/kasan: don't allocate extra shadow memory
author Andrey Ryabinin <[email protected]>
Mon, 10 Jul 2017 22:50:27 +0000 (15:50 -0700)
committer Linus Torvalds <[email protected]>
Mon, 10 Jul 2017 23:32:33 +0000 (16:32 -0700)
We used to read several bytes of the shadow memory in advance.
Therefore additional shadow memory was mapped to prevent a crash if
such a speculative load happened near the end of the mapped shadow
memory.

Now we don't have such speculative loads anymore, so we no longer need
to map additional shadow memory.
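
For illustration, the kind of fast-path check that motivated the old
over-mapping looked roughly like the sketch below. This is a
simplified, hypothetical variant of the old mm/kasan fast path, not
the exact removed code; it assumes the usual shadow layout of one
shadow byte per 8 bytes of memory:

	static __always_inline bool memory_is_poisoned_16(unsigned long addr)
	{
		/*
		 * A single u16 load reads two shadow bytes at once.  When
		 * addr falls in the last mapped 8-byte granule, the second
		 * shadow byte lies one past the strictly required shadow,
		 * which is why map_range() used to populate end + 1.
		 */
		u16 *shadow = (u16 *)kasan_mem_to_shadow((void *)addr);

		return unlikely(*shadow != 0);
	}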

Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Andrey Ryabinin <[email protected]>
Cc: Mark Rutland <[email protected]>
Cc: "H. Peter Anvin" <[email protected]>
Cc: Alexander Potapenko <[email protected]>
Cc: Catalin Marinas <[email protected]>
Cc: Dmitry Vyukov <[email protected]>
Cc: Ingo Molnar <[email protected]>
Cc: Thomas Gleixner <[email protected]>
Cc: Will Deacon <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
arch/x86/mm/kasan_init_64.c

index 88215ac16b24bd3d721a2069eaa9a09e933ec883..02c9d75534091a0cf06b78716a990c41847cb6e4 100644 (file)
@@ -23,12 +23,7 @@ static int __init map_range(struct range *range)
        start = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->start));
        end = (unsigned long)kasan_mem_to_shadow(pfn_to_kaddr(range->end));
 
-       /*
-        * end + 1 here is intentional. We check several shadow bytes in advance
-        * to slightly speed up fastpath. In some rare cases we could cross
-        * boundary of mapped shadow, so we just map some more here.
-        */
-       return vmemmap_populate(start, end + 1, NUMA_NO_NODE);
+       return vmemmap_populate(start, end, NUMA_NO_NODE);
 }
 
 static void __init clear_pgds(unsigned long start,
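
For context, the shadow addresses passed to vmemmap_populate() above
come from kasan_mem_to_shadow(). A sketch of its usual definition,
assuming the standard shadow layout (KASAN_SHADOW_SCALE_SHIFT == 3,
i.e. one shadow byte per 8-byte granule):

	static inline void *kasan_mem_to_shadow(const void *addr)
	{
		/* One shadow byte tracks an 8-byte granule of memory. */
		return (void *)((unsigned long)addr >> KASAN_SHADOW_SCALE_SHIFT)
			+ KASAN_SHADOW_OFFSET;
	}

Since range->end is exclusive, the computed end already lies one past
the shadow for the range's last granule, so populating [start, end) is
sufficient once no load touches shadow beyond it.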