mm/vmalloc: get rid of dirty bitmap inside vmap_block structure
In the original implementation of vm_map_ram by Nick Piggin there were
two bitmaps: alloc_map and dirty_map.  Neither was used as one would
expect, i.e. for finding a suitable free hole for the next allocation in
a block.  vm_map_ram allocates space sequentially within a block and, on
free, marks pages as dirty, so freed space can never be reused.

It would be interesting to know the original intent behind those
bitmaps; perhaps the implementation was simply never completed.

Some time ago Zhang Yanfei removed alloc_map with these two commits:

  mm/vmalloc.c: remove dead code in vb_alloc
     3fcd76e
  mm/vmalloc.c: remove alloc_map from vmap_block
     b8e748b

This patch replaces dirty_map with two range variables, dirty_min and
dirty_max.  They store the minimum and maximum positions of dirty space
in a block: we only need to know the dirty range, not the exact
positions of the dirty pages.

Why do this?  Several reasons.  First, at a glance it seems that the
vm_map_ram allocator cares about fragmentation and uses the bitmap to
find free holes, but that is not true; to avoid needless complexity it
is better to use something simple, like min/max range values.  Second,
the code becomes simpler: instead of iterating over the bitmap we just
compare values with the min() and max() macros.  Third, the bitmap
occupies up to 1024 bits (4MB is the maximum block size); here the
whole bitmap is replaced with two longs.

Finally, vm_unmap_aliases should be slightly faster and the whole
vmap_block structure occupies less memory.

Signed-off-by: Roman Pen <[email protected]>
Cc: Zhang Yanfei <[email protected]>
Cc: Eric Dumazet <[email protected]>
Acked-by: Joonsoo Kim <[email protected]>
Cc: David Rientjes <[email protected]>
Cc: WANG Chao <[email protected]>
Cc: Fabian Frederick <[email protected]>
Cc: Christoph Lameter <[email protected]>
Cc: Gioh Kim <[email protected]>
Cc: Rob Jones <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
rouming authored and torvalds committed Apr 15, 2015
1 parent cf725ce commit 7d61bfe
1 changed file: mm/vmalloc.c (17 additions, 18 deletions)
@@ -765,7 +765,7 @@ struct vmap_block {
 	spinlock_t lock;
 	struct vmap_area *va;
 	unsigned long free, dirty;
-	DECLARE_BITMAP(dirty_map, VMAP_BBMAP_BITS);
+	unsigned long dirty_min, dirty_max; /*< dirty range */
 	struct list_head free_list;
 	struct rcu_head rcu_head;
 	struct list_head purge;
@@ -851,7 +851,8 @@ static void *new_vmap_block(unsigned int order, gfp_t gfp_mask)
 	BUG_ON(VMAP_BBMAP_BITS <= (1UL << order));
 	vb->free = VMAP_BBMAP_BITS - (1UL << order);
 	vb->dirty = 0;
-	bitmap_zero(vb->dirty_map, VMAP_BBMAP_BITS);
+	vb->dirty_min = VMAP_BBMAP_BITS;
+	vb->dirty_max = 0;
 	INIT_LIST_HEAD(&vb->free_list);

 	vb_idx = addr_to_vb_idx(va->va_start);
@@ -902,7 +903,8 @@ static void purge_fragmented_blocks(int cpu)
 		if (vb->free + vb->dirty == VMAP_BBMAP_BITS && vb->dirty != VMAP_BBMAP_BITS) {
 			vb->free = 0; /* prevent further allocs after releasing lock */
 			vb->dirty = VMAP_BBMAP_BITS; /* prevent purging it again */
-			bitmap_fill(vb->dirty_map, VMAP_BBMAP_BITS);
+			vb->dirty_min = 0;
+			vb->dirty_max = VMAP_BBMAP_BITS;
 			spin_lock(&vbq->lock);
 			list_del_rcu(&vb->free_list);
 			spin_unlock(&vbq->lock);
@@ -995,6 +997,7 @@ static void vb_free(const void *addr, unsigned long size)
 	order = get_order(size);

 	offset = (unsigned long)addr & (VMAP_BLOCK_SIZE - 1);
+	offset >>= PAGE_SHIFT;

 	vb_idx = addr_to_vb_idx((unsigned long)addr);
 	rcu_read_lock();
@@ -1005,7 +1008,10 @@ static void vb_free(const void *addr, unsigned long size)
 	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);

 	spin_lock(&vb->lock);
-	BUG_ON(bitmap_allocate_region(vb->dirty_map, offset >> PAGE_SHIFT, order));
+
+	/* Expand dirty range */
+	vb->dirty_min = min(vb->dirty_min, offset);
+	vb->dirty_max = max(vb->dirty_max, offset + (1UL << order));

 	vb->dirty += 1UL << order;
 	if (vb->dirty == VMAP_BBMAP_BITS) {
@@ -1044,25 +1050,18 @@ void vm_unmap_aliases(void)

 	rcu_read_lock();
 	list_for_each_entry_rcu(vb, &vbq->free, free_list) {
-		int i, j;
-
 		spin_lock(&vb->lock);
-		i = find_first_bit(vb->dirty_map, VMAP_BBMAP_BITS);
-		if (i < VMAP_BBMAP_BITS) {
+		if (vb->dirty) {
+			unsigned long va_start = vb->va->va_start;
 			unsigned long s, e;
-
-			j = find_last_bit(vb->dirty_map,
-					VMAP_BBMAP_BITS);
-			j = j + 1; /* need exclusive index */
-
-			s = vb->va->va_start + (i << PAGE_SHIFT);
-			e = vb->va->va_start + (j << PAGE_SHIFT);
-			flush = 1;
-			if (s < start)
-				start = s;
-			if (e > end)
-				end = e;
+
+			s = va_start + (vb->dirty_min << PAGE_SHIFT);
+			e = va_start + (vb->dirty_max << PAGE_SHIFT);
+
+			start = min(s, start);
+			end   = max(e, end);
+
+			flush = 1;
 		}
 		spin_unlock(&vb->lock);
 	}
