[PATCH] zone handle unaligned zone boundaries
The buddy allocator has a requirement that boundaries between contiguous
zones occur aligned with the MAX_ORDER ranges.  Where they do not, we
will incorrectly merge pages across zone boundaries.  This can lead to pages
from the wrong zone being handed out.

Originally the buddy allocator would check that buddies were in the same
zone by referencing the zone start and end page frame numbers.  This was
removed as it became very expensive and the buddy allocator already made
the assumption that zone boundaries were aligned.

It is clear that not all configurations and architectures are honouring
this alignment requirement.  Therefore it seems safest to reintroduce
support for non-aligned zone boundaries.  This patch introduces a new check:
when considering a page as a buddy, it compares the zone_table index of the
two pages and refuses to merge the pages where they do not match.  The
zone_table index is unique for each node/zone combination when
FLATMEM/DISCONTIGMEM is enabled and for each section/zone combination when
SPARSEMEM is enabled (a SPARSEMEM section is at least a MAX_ORDER size).

Signed-off-by: Andy Whitcroft <[email protected]>
Cc: Dave Hansen <[email protected]>
Cc: Mel Gorman <[email protected]>
Cc: Yasunori Goto <[email protected]>
Cc: KAMEZAWA Hiroyuki <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
awhitcroft authored and Linus Torvalds committed Jun 23, 2006
1 parent 6f0419e commit cb2b95e
Showing 2 changed files with 16 additions and 8 deletions.
7 changes: 5 additions & 2 deletions include/linux/mm.h
@@ -465,10 +465,13 @@ static inline unsigned long page_zonenum(struct page *page)
 struct zone;
 extern struct zone *zone_table[];
 
+static inline int page_zone_id(struct page *page)
+{
+	return (page->flags >> ZONETABLE_PGSHIFT) & ZONETABLE_MASK;
+}
 static inline struct zone *page_zone(struct page *page)
 {
-	return zone_table[(page->flags >> ZONETABLE_PGSHIFT) &
-			ZONETABLE_MASK];
+	return zone_table[page_zone_id(page)];
 }
 
 static inline unsigned long page_to_nid(struct page *page)
17 changes: 11 additions & 6 deletions mm/page_alloc.c
@@ -286,22 +286,27 @@ __find_combined_index(unsigned long page_idx, unsigned int order)
  * we can do coalesce a page and its buddy if
  * (a) the buddy is not in a hole &&
  * (b) the buddy is in the buddy system &&
- * (c) a page and its buddy have the same order.
+ * (c) a page and its buddy have the same order &&
+ * (d) a page and its buddy are in the same zone.
  *
  * For recording whether a page is in the buddy system, we use PG_buddy.
  * Setting, clearing, and testing PG_buddy is serialized by zone->lock.
  *
  * For recording page's order, we use page_private(page).
  */
-static inline int page_is_buddy(struct page *page, int order)
+static inline int page_is_buddy(struct page *page, struct page *buddy,
+								int order)
 {
 #ifdef CONFIG_HOLES_IN_ZONE
-	if (!pfn_valid(page_to_pfn(page)))
+	if (!pfn_valid(page_to_pfn(buddy)))
 		return 0;
 #endif
 
-	if (PageBuddy(page) && page_order(page) == order) {
-		BUG_ON(page_count(page) != 0);
+	if (page_zone_id(page) != page_zone_id(buddy))
+		return 0;
+
+	if (PageBuddy(buddy) && page_order(buddy) == order) {
+		BUG_ON(page_count(buddy) != 0);
 		return 1;
 	}
 	return 0;
@@ -352,7 +357,7 @@ static inline void __free_one_page(struct page *page,
 		struct page *buddy;
 
 		buddy = __page_find_buddy(page, page_idx, order);
-		if (!page_is_buddy(buddy, order))
+		if (!page_is_buddy(page, buddy, order))
 			break;		/* Move the buddy up one level. */
 
 		list_del(&buddy->lru);
