slub: fix kmalloc_pagealloc_invalid_free unit test
The unit test kmalloc_pagealloc_invalid_free makes sure that for the
higher order slub allocation which goes to page allocator, the free is
called with the correct address i.e.  the virtual address of the head
page.

Commit f227f0f ("slub: fix unreclaimable slab stat for bulk free")
unified the free code paths for page allocator based slub allocations
but instead of using the address passed by the caller, it extracted the
address from the page.  Thus making the unit test
kmalloc_pagealloc_invalid_free moot.  So, fix this by using the address
passed by the caller.

Should we fix this? I think yes, because developers expect KASAN to catch
these types of programming bugs.

Link: https://lkml.kernel.org/r/[email protected]
Fixes: f227f0f ("slub: fix unreclaimable slab stat for bulk free")
Signed-off-by: Shakeel Butt <[email protected]>
Reported-by: Nathan Chancellor <[email protected]>
Tested-by: Nathan Chancellor <[email protected]>
Acked-by: Roman Gushchin <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Muchun Song <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Pekka Enberg <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: Joonsoo Kim <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
shakeelb authored and torvalds committed Aug 14, 2021
1 parent 340caf1 commit 1ed7ce5
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions mm/slub.c
@@ -3236,12 +3236,12 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
-static inline void free_nonslab_page(struct page *page)
+static inline void free_nonslab_page(struct page *page, void *object)
 {
 	unsigned int order = compound_order(page);
 
 	VM_BUG_ON_PAGE(!PageCompound(page), page);
-	kfree_hook(page_address(page));
+	kfree_hook(object);
 	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
 	__free_pages(page, order);
 }
@@ -3282,7 +3282,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			free_nonslab_page(page);
+			free_nonslab_page(page, object);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4258,7 +4258,7 @@ void kfree(const void *x)
 
 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		free_nonslab_page(page);
+		free_nonslab_page(page, object);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
