memcg: do not recalculate section unnecessarily in init_section_page_cgroup

In init_section_page_cgroup() the section that a given pfn belongs to is
calculated at the top of the function and, even though the pfn/section
correspondence does not change, it is recalculated further down in the
same function.  Computing the section just once and reusing that value
saves a few bytes in the object file and avoids wasting CPU cycles.

Signed-off-by: Fernando Luis Vazquez Cao <[email protected]>
Reviewed-by: KAMEZAWA Hiroyuki <[email protected]>
Cc: Daisuke Nishimura <[email protected]>
Cc: Balbir Singh <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Fernando Luis Vazquez Cao authored and torvalds committed Jan 8, 2009
1 parent 01b1ae6 commit 0753b0e
Showing 1 changed file with 1 addition and 4 deletions.
--- a/mm/page_cgroup.c
+++ b/mm/page_cgroup.c
@@ -103,13 +103,11 @@ struct page_cgroup *lookup_page_cgroup(struct page *page)
 /* __alloc_bootmem...() is protected by !slab_available() */
 static int __init_refok init_section_page_cgroup(unsigned long pfn)
 {
-	struct mem_section *section;
+	struct mem_section *section = __pfn_to_section(pfn);
 	struct page_cgroup *base, *pc;
 	unsigned long table_size;
 	int nid, index;
 
-	section = __pfn_to_section(pfn);
-
 	if (!section->page_cgroup) {
 		nid = page_to_nid(pfn_to_page(pfn));
 		table_size = sizeof(struct page_cgroup) * PAGES_PER_SECTION;
@@ -145,7 +143,6 @@ static int __init_refok init_section_page_cgroup(unsigned long pfn)
 		__init_page_cgroup(pc, pfn + index);
 	}
 
-	section = __pfn_to_section(pfn);
 	section->page_cgroup = base - pfn;
 	total_usage += table_size;
 	return 0;
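
For reference, a sketch of how init_section_page_cgroup() reads once the patch
is applied, reconstructed only from the hunks shown above; the table allocation,
error handling, and per-page initialization loop that the diff does not show are
elided behind comments, so this is not the full upstream source.

/*
 * Sketch (reconstructed from the hunks above): the section is looked up
 * once at the top and reused at the bottom, instead of calling
 * __pfn_to_section() a second time.
 */
static int __init_refok init_section_page_cgroup(unsigned long pfn)
{
	struct mem_section *section = __pfn_to_section(pfn);	/* computed once */
	struct page_cgroup *base, *pc;
	unsigned long table_size;
	int nid, index;

	if (!section->page_cgroup) {
		nid = page_to_nid(pfn_to_page(pfn));
		table_size = sizeof(struct page_cgroup) * PAGES_PER_SECTION;
		/* ... table allocation, elided by the diff ... */
	}
	/* ... error handling and per-page init loop, elided by the diff ... */

	section->page_cgroup = base - pfn;	/* reuses the section computed above */
	total_usage += table_size;
	return 0;
}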
