The memblock allocator aligns @size to @align to reduce fragmentation
of the reserved array. Commit:

  7bd0b0f0da ("memblock: Reimplement memblock allocation using reverse free area iterator")

broke this by incorrectly relocating the @size aligning into
memblock_find_in_range_node(). As the aligned size is not propagated
back to memblock_alloc_base_nid(), the size that actually gets
reserved isn't aligned.
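For illustration, a minimal userspace sketch of the mismatch; the
sizes are made up, and round_up() here only mirrors the shape of the
kernel macro:

#include <stdio.h>

/* same shape as the kernel's round_up(); @y must be a power of two */
#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

int main(void)
{
        unsigned long size = 5000, align = 4096;        /* assumed values */

        /* after 7bd0b0f0da, memblock_find_in_range_node() searched
         * for the locally aligned size ... */
        unsigned long searched = round_up(size, align); /* 8192 */

        /* ... but memblock_alloc_base_nid() reserved the caller's
         * original, unaligned size */
        unsigned long reserved = size;                  /* 5000 */

        printf("searched %lu, reserved %lu\n", searched, reserved);
        return 0;
}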
While this mismatch increases memory use for the memblock reserved
array, it shouldn't cause any critical failure; however, it seems
that the size aligning was hiding a use-beyond-allocation bug in
sparc64, and losing the aligning causes a boot failure there.
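A hedged sketch of how the padding could mask such a bug; the real
sparc64 culprit hasn't been identified, and write_is_covered() plus
the offsets below are made up for illustration:

#include <stdio.h>

#define round_up(x, y) ((((x) - 1) | ((y) - 1)) + 1)

/* does a write at byte offset @off stay inside a reservation of
 * @reserved bytes? (hypothetical helper) */
static int write_is_covered(unsigned long off, unsigned long reserved)
{
        return off < reserved;
}

int main(void)
{
        unsigned long size = 5000, align = 4096;
        unsigned long off = 5050;       /* a use-beyond-allocation write */

        /* with the size aligning, the padding absorbs the stray write */
        printf("aligned reservation:   %s\n",
               write_is_covered(off, round_up(size, align)) ?
               "covered" : "OVERRUN");

        /* without it (after 7bd0b0f0da), the same write lands in
         * memory memblock may hand to the next caller */
        printf("unaligned reservation: %s\n",
               write_is_covered(off, size) ? "covered" : "OVERRUN");
        return 0;
}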
The underlying problem is currently being debugged, but this is a
proper fix in itself: it's already pretty late in the -rc cycle for
boot failures, and reverting the change for debugging isn't
difficult. Restore the size aligning by moving it to
memblock_alloc_base_nid().
Reported-by: Meelis Roos <[email protected]>
Signed-off-by: Tejun Heo <[email protected]>
Cc: David S. Miller <[email protected]>
Cc: Grant Likely <[email protected]>
Cc: Rob Herring <[email protected]>
Cc: Linus Torvalds <[email protected]>
Cc: Andrew Morton <[email protected]>
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Ingo Molnar <[email protected]>
LKML-Reference: <alpine.SOC.1.00.1202130942030[email protected]>
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ ... @@ memblock_find_in_range_node
 	phys_addr_t this_start, this_end, cand;
 	u64 i;
 
-	/* align @size to avoid excessive fragmentation on reserved array */
-	size = round_up(size, align);
-
 	/* pump up @end */
 	if (end == MEMBLOCK_ALLOC_ACCESSIBLE)
 		end = memblock.current_limit;
@@ ... @@ memblock_alloc_base_nid
 {
 	phys_addr_t found;
 
+	/* align @size to avoid excessive fragmentation on reserved array */
+	size = round_up(size, align);
+
 	found = memblock_find_in_range_node(0, max_addr, size, align, nid);
 	if (found && !memblock_reserve(found, size))
 		return found;
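As a note on the design choice: memblock allocates top-down and
rounds each candidate base down to @align, so a reservation with an
unaligned size leaves a gap below the previous region and the two can
never be merged in the reserved array. A small sketch with assumed
addresses:

#include <stdio.h>

#define round_up(x, y)   ((((x) - 1) | ((y) - 1)) + 1)
#define round_down(x, y) ((x) & ~((y) - 1))

int main(void)
{
        unsigned long align = 0x1000, size = 5000;
        unsigned long prev = 0xf000;    /* assumed start of an existing region */

        /* unaligned @size: the new region ends short of @prev; the gap
         * keeps the two as separate reserved-array entries */
        unsigned long base = round_down(prev - size, align);
        printf("unaligned: [%#lx, %#lx), gap up to %#lx\n",
               base, base + size, prev);

        /* aligned @size: the new region ends exactly at @prev, so
         * memblock can merge the two into a single entry */
        unsigned long asize = round_up(size, align);
        base = round_down(prev - asize, align);
        printf("aligned:   [%#lx, %#lx), abuts %#lx -> merged\n",
               base, base + asize, prev);
        return 0;
}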