Merge branch 'akpm' (Andrew's patchbomb)
Merge misc updates from Andrew Morton:
 "About half of most of MM.  Going very early this time due to
  uncertainty over the core-/auto-/unified-numa-sched things.  I'll send the
  other half of most of MM tomorrow.  The rest of MM awaits a slab merge
  from Pekka."

* emailed patches from Andrew Morton: (71 commits)
  memory_hotplug: ensure every online node has NORMAL memory
  memory_hotplug: handle empty zone when online_movable/online_kernel
  mm, memory-hotplug: dynamic configure movable memory and portion memory
  drivers/base/node.c: cleanup node_state_attr[]
  bootmem: fix wrong call parameter for free_bootmem()
  avr32, kconfig: remove HAVE_ARCH_BOOTMEM
  mm: cma: remove watermark hacks
  mm: cma: skip watermarks check for already isolated blocks in split_free_page()
  mm, oom: fix race when specifying a thread as the oom origin
  mm, oom: change type of oom_score_adj to short
  mm: cleanup register_node()
  mm, mempolicy: remove duplicate code
  mm/vmscan.c: try_to_freeze() returns boolean
  mm: introduce putback_movable_pages()
  virtio_balloon: introduce migration primitives to balloon pages
  mm: introduce compaction and migration for ballooned pages
  mm: introduce a common interface for balloon pages mobility
  mm: redefine address_space.assoc_mapping
  mm: adjust address_space_operations.migratepage() return code
  arch/sparc/kernel/sys_sparc_64.c: s/COLOUR/COLOR/
  ...
torvalds committed Dec 12, 2012
2 parents 414a675 + 74d42d8 commit 608ff1a
Showing 96 changed files with 2,792 additions and 1,697 deletions.
6 changes: 3 additions & 3 deletions Documentation/cgroups/memory.txt
@@ -144,9 +144,9 @@ Figure 1 shows the important aspects of the controller
3. Each page has a pointer to the page_cgroup, which in turn knows the
cgroup it belongs to

The accounting is done as follows: mem_cgroup_charge() is invoked to set up
the necessary data structures and check if the cgroup that is being charged
is over its limit. If it is, then reclaim is invoked on the cgroup.
The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a page meta-data-structure called page_cgroup is
updated. page_cgroup has its own LRU on cgroup.
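
The charge path described above, as a hedged pseudo-C sketch; apart from
mem_cgroup_charge_common() and page_cgroup, every name below is illustrative,
not the in-tree implementation:

	/* illustrative stubs, not real kernel APIs */
	static bool over_limit(struct mem_cgroup *memcg);       /* limit check */
	static bool reclaim_some(struct mem_cgroup *memcg, gfp_t gfp);
	static void update_page_cgroup(struct page *page, struct mem_cgroup *memcg);

	static int charge_sketch(struct page *page, struct mem_cgroup *memcg,
				 gfp_t gfp)
	{
		while (over_limit(memcg)) {            /* cgroup over its limit? */
			if (!reclaim_some(memcg, gfp)) /* per-cgroup reclaim */
				return -ENOMEM;        /* give up */
		}
		update_page_cgroup(page, memcg);       /* record owner, cgroup LRU */
		return 0;
	}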
19 changes: 17 additions & 2 deletions Documentation/memory-hotplug.txt
@@ -161,7 +161,8 @@ a recent addition and not present on older kernels.
in the memory block.
'state' : read-write
at read: contains online/offline state of memory.
at write: user can specify "online", "offline" command
at write: user can specify "online_kernel",
"online_movable", "online", "offline" command
which will be performed on all sections in the block.
'phys_device' : read-only: designed to show the name of physical memory
device. This is not well implemented now.
@@ -255,6 +256,17 @@ For onlining, you have to write "online" to the section's state file as:

% echo online > /sys/devices/system/memory/memoryXXX/state

This onlining will not change the ZONE type of the target memory section.
If the memory section is in ZONE_NORMAL, you can change it to ZONE_MOVABLE:

% echo online_movable > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_MOVABLE)

And if the memory section is in ZONE_MOVABLE, you can change it to ZONE_NORMAL:

% echo online_kernel > /sys/devices/system/memory/memoryXXX/state
(NOTE: current limit: this memory section must be adjacent to ZONE_NORMAL)

After this, section memoryXXX's state will be 'online' and the amount of
available memory will be increased.
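
The result can be verified, for example (illustrative check, XXX as above):

% cat /sys/devices/system/memory/memoryXXX/state
online
% grep MemTotal /proc/meminfo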

@@ -377,15 +389,18 @@ The third argument is passed by pointer of struct memory_notify.
struct memory_notify {
unsigned long start_pfn;
unsigned long nr_pages;
int status_change_nid_normal;
int status_change_nid;
}

start_pfn is start_pfn of online/offline memory.
nr_pages is # of pages of online/offline memory.
status_change_nid_normal is set to the node id when N_NORMAL_MEMORY of the
nodemask is (or will be) set/cleared; if this is -1, then the nodemask status
is not changed.
status_change_nid is set to the node id when N_HIGH_MEMORY of the nodemask is
(or will be) set/cleared, i.e. a new (previously memoryless) node gains memory
by onlining, or a node loses all of its memory. If this is -1, then the
nodemask status is not changed.
If status_changed_nid >= 0, callback should create/discard structures for the
If status_changed_nid* >= 0, callback should create/discard structures for the
node if necessary.
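
A hedged sketch of a callback consuming these fields; the notifier API,
MEM_* actions and struct memory_notify are the interfaces described here,
while the per-node helpers are made up for illustration:

	#include <linux/memory.h>
	#include <linux/notifier.h>

	/* illustrative per-node setup/teardown helpers, not real kernel APIs */
	static void alloc_per_node_data(int nid) { }
	static void free_per_node_data(int nid) { }

	static int example_mem_callback(struct notifier_block *nb,
					unsigned long action, void *arg)
	{
		struct memory_notify *mn = arg;

		switch (action) {
		case MEM_GOING_ONLINE:
			/* node may be gaining its first memory */
			if (mn->status_change_nid >= 0)
				alloc_per_node_data(mn->status_change_nid);
			break;
		case MEM_OFFLINE:
			/* node may have lost all of its memory */
			if (mn->status_change_nid >= 0)
				free_per_node_data(mn->status_change_nid);
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block example_mem_nb = {
		.notifier_call = example_mem_callback,
	};
	/* registered at init time with register_memory_notifier(&example_mem_nb) */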

--------------
11 changes: 11 additions & 0 deletions arch/alpha/include/asm/mman.h
@@ -63,4 +63,15 @@
/* compatibility flags */
#define MAP_FILE 0

/*
* When MAP_HUGETLB is set, bits [26:31] encode the log2 of the huge page size.
* This gives us 6 bits, which is enough until someone invents 128 bit address
* spaces.
*
* Assume these are all powers of two.
* When 0 use the default page size.
*/
#define MAP_HUGE_SHIFT 26
#define MAP_HUGE_MASK 0x3f

#endif /* __ALPHA_MMAN_H__ */
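
For illustration, a hedged userspace sketch of the encoding these macros
describe, assuming headers that define MAP_HUGETLB and MAP_HUGE_SHIFT; the
value 21 is an assumed example (log2 of a 2 MB huge page), and 0 would select
the default huge page size:

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/* 21 == log2(2 MB), chosen only for this example */
		int size_flag = 21 << MAP_HUGE_SHIFT;

		void *p = mmap(NULL, 4UL << 20, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB | size_flag,
			       -1, 0);
		if (p == MAP_FAILED)
			perror("mmap");
		return 0;
	}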
132 changes: 23 additions & 109 deletions arch/arm/mm/mmap.c
@@ -11,18 +11,6 @@
#include <linux/random.h>
#include <asm/cachetype.h>

static inline unsigned long COLOUR_ALIGN_DOWN(unsigned long addr,
unsigned long pgoff)
{
unsigned long base = addr & ~(SHMLBA-1);
unsigned long off = (pgoff << PAGE_SHIFT) & (SHMLBA-1);

if (base + off <= addr)
return base + off;

return base - off;
}

#define COLOUR_ALIGN(addr,pgoff) \
((((addr)+SHMLBA-1)&~(SHMLBA-1)) + \
(((pgoff)<<PAGE_SHIFT) & (SHMLBA-1)))
@@ -69,9 +57,9 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
{
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
unsigned long start_addr;
int do_align = 0;
int aliasing = cache_is_vipt_aliasing();
struct vm_unmapped_area_info info;

/*
* We only need to do colour alignment if either the I or D
@@ -104,46 +92,14 @@ arch_get_unmapped_area(struct file *filp, unsigned long addr,
(!vma || addr + len <= vma->vm_start))
return addr;
}
if (len > mm->cached_hole_size) {
start_addr = addr = mm->free_area_cache;
} else {
start_addr = addr = mm->mmap_base;
mm->cached_hole_size = 0;
}

full_search:
if (do_align)
addr = COLOUR_ALIGN(addr, pgoff);
else
addr = PAGE_ALIGN(addr);

for (vma = find_vma(mm, addr); ; vma = vma->vm_next) {
/* At this point: (!vma || addr < vma->vm_end). */
if (TASK_SIZE - len < addr) {
/*
* Start a new search - just in case we missed
* some holes.
*/
if (start_addr != TASK_UNMAPPED_BASE) {
start_addr = addr = TASK_UNMAPPED_BASE;
mm->cached_hole_size = 0;
goto full_search;
}
return -ENOMEM;
}
if (!vma || addr + len <= vma->vm_start) {
/*
* Remember the place where we stopped the search:
*/
mm->free_area_cache = addr + len;
return addr;
}
if (addr + mm->cached_hole_size < vma->vm_start)
mm->cached_hole_size = vma->vm_start - addr;
addr = vma->vm_end;
if (do_align)
addr = COLOUR_ALIGN(addr, pgoff);
}
info.flags = 0;
info.length = len;
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
return vm_unmapped_area(&info);
}

unsigned long
@@ -156,6 +112,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
unsigned long addr = addr0;
int do_align = 0;
int aliasing = cache_is_vipt_aliasing();
struct vm_unmapped_area_info info;

/*
* We only need to do colour alignment if either the I or D
@@ -187,70 +144,27 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
return addr;
}

/* check if free_area_cache is useful for us */
if (len <= mm->cached_hole_size) {
mm->cached_hole_size = 0;
mm->free_area_cache = mm->mmap_base;
}

/* either no address requested or can't fit in requested address hole */
addr = mm->free_area_cache;
if (do_align) {
unsigned long base = COLOUR_ALIGN_DOWN(addr - len, pgoff);
addr = base + len;
}

/* make sure it can fit in the remaining address space */
if (addr > len) {
vma = find_vma(mm, addr-len);
if (!vma || addr <= vma->vm_start)
/* remember the address as a hint for next time */
return (mm->free_area_cache = addr-len);
}

if (mm->mmap_base < len)
goto bottomup;

addr = mm->mmap_base - len;
if (do_align)
addr = COLOUR_ALIGN_DOWN(addr, pgoff);

do {
/*
* Lookup failure means no vma is above this address,
* else if new region fits below vma->vm_start,
* return with success:
*/
vma = find_vma(mm, addr);
if (!vma || addr+len <= vma->vm_start)
/* remember the address as a hint for next time */
return (mm->free_area_cache = addr);
info.flags = VM_UNMAPPED_AREA_TOPDOWN;
info.length = len;
info.low_limit = PAGE_SIZE;
info.high_limit = mm->mmap_base;
info.align_mask = do_align ? (PAGE_MASK & (SHMLBA - 1)) : 0;
info.align_offset = pgoff << PAGE_SHIFT;
addr = vm_unmapped_area(&info);

/* remember the largest hole we saw so far */
if (addr + mm->cached_hole_size < vma->vm_start)
mm->cached_hole_size = vma->vm_start - addr;

/* try just below the current vma->vm_start */
addr = vma->vm_start - len;
if (do_align)
addr = COLOUR_ALIGN_DOWN(addr, pgoff);
} while (len < vma->vm_start);

bottomup:
/*
* A failed mmap() very likely causes application failure,
* so fall back to the bottom-up function here. This scenario
* can happen with large stack limits and large mmap()
* allocations.
*/
mm->cached_hole_size = ~0UL;
mm->free_area_cache = TASK_UNMAPPED_BASE;
addr = arch_get_unmapped_area(filp, addr0, len, pgoff, flags);
/*
* Restore the topdown base:
*/
mm->free_area_cache = mm->mmap_base;
mm->cached_hole_size = ~0UL;
if (addr & ~PAGE_MASK) {
VM_BUG_ON(addr != -ENOMEM);
info.flags = 0;
info.low_limit = mm->mmap_base;
info.high_limit = TASK_SIZE;
addr = vm_unmapped_area(&info);
}

return addr;
}
3 changes: 0 additions & 3 deletions arch/avr32/Kconfig
@@ -193,9 +193,6 @@ source "kernel/Kconfig.preempt"
config QUICKLIST
def_bool y

config HAVE_ARCH_BOOTMEM
def_bool n

config ARCH_HAVE_MEMORY_PRESENT
def_bool n

11 changes: 11 additions & 0 deletions arch/mips/include/uapi/asm/mman.h
@@ -87,4 +87,15 @@
/* compatibility flags */
#define MAP_FILE 0

/*
* When MAP_HUGETLB is set, bits [26:31] encode the log2 of the huge page size.
* This gives us 6 bits, which is enough until someone invents 128 bit address
* spaces.
*
* Assume these are all powers of two.
* When 0 use the default page size.
*/
#define MAP_HUGE_SHIFT 26
#define MAP_HUGE_MASK 0x3f

#endif /* _ASM_MMAN_H */