mm: SLUB hardened usercopy support
Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone handling fix discovered by Michael Ellerman.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <[email protected]>
Tested-by: Michael Ellerman <[email protected]>
Reviewed-by: Laura Abbott <[email protected]>
kees committed Jul 26, 2016
1 parent 04385fc commit ed18adc
Showing 2 changed files with 41 additions and 0 deletions.
1 change: 1 addition & 0 deletions init/Kconfig
@@ -1766,6 +1766,7 @@ config SLAB

 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).
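With SLUB selecting HAVE_HARDENED_USERCOPY_ALLOCATOR, the top-level CONFIG_HARDENED_USERCOPY option (introduced elsewhere in this series) becomes selectable on SLUB kernels. A hypothetical .config fragment enabling the check might look like:

```
# Sketch of a .config fragment, assuming the rest of the
# HARDENED_USERCOPY series is applied.
CONFIG_SLUB=y
CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR=y
CONFIG_HARDENED_USERCOPY=y
```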
40 changes: 40 additions & 0 deletions mm/slub.c
@@ -3614,6 +3614,46 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif

+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+	size_t object_size;
+
+	/* Find object and usable object size. */
+	s = page->slab_cache;
+	object_size = slab_ksize(s);
+
+	/* Reject impossible pointers. */
+	if (ptr < page_address(page))
+		return s->name;
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Adjust for redzone and reject if within the redzone. */
+	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+		if (offset < s->red_left_pad)
+			return s->name;
+		offset -= s->red_left_pad;
+	}
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= object_size && n <= object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
