mm: fix global NR_SLAB_.*CLAIMABLE counter reads
As Tetsuo points out:
 "Commit 385386c ("mm: vmstat: move slab statistics from zone to
  node counters") broke "Slab:" field of /proc/meminfo . It shows nearly
  0kB"

In addition to /proc/meminfo, this problem also affects the slab
counters in OOM/allocation-failure info dumps, can cause an early
-ENOMEM from overcommit protection, and miscalculates image size
requirements during suspend-to-disk.

This is because the patch in question switched the slab counters from
the zone level to the node level, but forgot to update the global
accessor functions to read the aggregate node data instead of the
aggregate zone data.

Use global_node_page_state() to access the global slab counters.
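
For context, a minimal sketch of the two accessors, loosely based on
include/linux/vmstat.h around this release (the CONFIG_SMP underflow
handling is simplified here). The slab items now live in enum
node_stat_item, but C does not type-check enum arguments, so passing
them to global_page_state() silently indexes the zone array with the
wrong enum and returns whatever unrelated counter sits at that slot,
hence the near-zero readings:

/* Two separate global counter arrays, indexed by two separate enums. */
extern atomic_long_t vm_zone_stat[NR_VM_ZONE_STAT_ITEMS];
extern atomic_long_t vm_node_stat[NR_VM_NODE_STAT_ITEMS];

static inline unsigned long global_page_state(enum zone_stat_item item)
{
	long x = atomic_long_read(&vm_zone_stat[item]);	/* zone-level data */

	return x < 0 ? 0 : x;	/* clamp transient negative sums to zero */
}

static inline unsigned long global_node_page_state(enum node_stat_item item)
{
	long x = atomic_long_read(&vm_node_stat[item]);	/* node-level data */

	return x < 0 ? 0 : x;
}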

Fixes: 385386c ("mm: vmstat: move slab statistics from zone to node counters")
Link: http://lkml.kernel.org/r/[email protected]
Signed-off-by: Johannes Weiner <[email protected]>
Reported-by: Tetsuo Handa <[email protected]>
Acked-by: Michal Hocko <[email protected]>
Cc: Josef Bacik <[email protected]>
Cc: Vladimir Davydov <[email protected]>
Cc: Stefan Agner <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
hnaz authored and torvalds committed Aug 10, 2017
1 parent 2627393 commit d507e2e
Showing 4 changed files with 11 additions and 10 deletions.
8 changes: 4 additions & 4 deletions fs/proc/meminfo.c
@@ -106,13 +106,13 @@ static int meminfo_proc_show(struct seq_file *m, void *v)
 		   global_node_page_state(NR_FILE_MAPPED));
 	show_val_kb(m, "Shmem: ", i.sharedram);
 	show_val_kb(m, "Slab: ",
-		   global_page_state(NR_SLAB_RECLAIMABLE) +
-		   global_page_state(NR_SLAB_UNRECLAIMABLE));
+		   global_node_page_state(NR_SLAB_RECLAIMABLE) +
+		   global_node_page_state(NR_SLAB_UNRECLAIMABLE));
 
 	show_val_kb(m, "SReclaimable: ",
-		   global_page_state(NR_SLAB_RECLAIMABLE));
+		   global_node_page_state(NR_SLAB_RECLAIMABLE));
 	show_val_kb(m, "SUnreclaim: ",
-		   global_page_state(NR_SLAB_UNRECLAIMABLE));
+		   global_node_page_state(NR_SLAB_UNRECLAIMABLE));
 	seq_printf(m, "KernelStack: %8lu kB\n",
 		   global_page_state(NR_KERNEL_STACK_KB));
 	show_val_kb(m, "PageTables: ",
2 changes: 1 addition & 1 deletion kernel/power/snapshot.c
@@ -1650,7 +1650,7 @@ static unsigned long minimum_image_size(unsigned long saveable)
 {
 	unsigned long size;
 
-	size = global_page_state(NR_SLAB_RECLAIMABLE)
+	size = global_node_page_state(NR_SLAB_RECLAIMABLE)
 		+ global_node_page_state(NR_ACTIVE_ANON)
 		+ global_node_page_state(NR_INACTIVE_ANON)
 		+ global_node_page_state(NR_ACTIVE_FILE)
9 changes: 5 additions & 4 deletions mm/page_alloc.c
@@ -4458,8 +4458,9 @@ long si_mem_available(void)
 	 * Part of the reclaimable slab consists of items that are in use,
 	 * and cannot be freed. Cap this estimate at the low watermark.
 	 */
-	available += global_page_state(NR_SLAB_RECLAIMABLE) -
-		min(global_page_state(NR_SLAB_RECLAIMABLE) / 2, wmark_low);
+	available += global_node_page_state(NR_SLAB_RECLAIMABLE) -
+		min(global_node_page_state(NR_SLAB_RECLAIMABLE) / 2,
+		    wmark_low);
 
 	if (available < 0)
 		available = 0;
@@ -4602,8 +4603,8 @@ void show_free_areas(unsigned int filter, nodemask_t *nodemask)
 		global_node_page_state(NR_FILE_DIRTY),
 		global_node_page_state(NR_WRITEBACK),
 		global_node_page_state(NR_UNSTABLE_NFS),
-		global_page_state(NR_SLAB_RECLAIMABLE),
-		global_page_state(NR_SLAB_UNRECLAIMABLE),
+		global_node_page_state(NR_SLAB_RECLAIMABLE),
+		global_node_page_state(NR_SLAB_UNRECLAIMABLE),
 		global_node_page_state(NR_FILE_MAPPED),
 		global_node_page_state(NR_SHMEM),
 		global_page_state(NR_PAGETABLE),
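
To put rough numbers on the si_mem_available() hunk above: reclaimable
slab counts toward MemAvailable, but since part of it is pinned by
in-use objects, the contribution is capped at the low watermark.
Assuming (hypothetically) NR_SLAB_RECLAIMABLE = 100000 pages and
wmark_low = 20000 pages:

	available += 100000 - min(100000 / 2, 20000);	/* += 80000 pages */

With the broken accessor reading the slab counter as nearly zero, this
term collapsed and MemAvailable was understated.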
2 changes: 1 addition & 1 deletion mm/util.c
@@ -633,7 +633,7 @@ int __vm_enough_memory(struct mm_struct *mm, long pages, int cap_sys_admin)
 		 * which are reclaimable, under pressure. The dentry
 		 * cache and most inode caches should fall into this
 		 */
-		free += global_page_state(NR_SLAB_RECLAIMABLE);
+		free += global_node_page_state(NR_SLAB_RECLAIMABLE);
 
 		/*
 		 * Leave reserved pages. The pages are not for anonymous pages.
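
The mm/util.c hunk feeds the OVERCOMMIT_GUESS heuristic, which treats
reclaimable slab as memory that could be freed to satisfy a new
allocation. A rough, hypothetical simplification of that path (the real
function also accounts for page cache, shmem, swap, and reserved
pages):

	long free = global_page_state(NR_FREE_PAGES);
	free += global_node_page_state(NR_SLAB_RECLAIMABLE);	/* read as ~0 while broken */

	if (free > pages)
		return 0;	/* enough memory, allow the mapping */
	return -ENOMEM;		/* fired too early with the misread counter */

This is the "early -ENOMEM from overcommit protection" mentioned in the
commit message.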
