mm: split ->readpages calls to avoid non-contiguous pages lists
That way file systems don't have to worry about non-contiguous pages
and work around them.  It also kicks off I/O earlier, allowing it to
finish earlier and reduce latency.

Signed-off-by: Christoph Hellwig <[email protected]>
Reviewed-by: Dave Chinner <[email protected]>
Reviewed-by: Darrick J. Wong <[email protected]>
Signed-off-by: Darrick J. Wong <[email protected]>
Christoph Hellwig authored and djwong committed Jun 2, 2018
1 parent c534aa3 commit b3751e6
Showing 1 changed file with 13 additions and 3 deletions.

mm/readahead.c
@@ -140,8 +140,8 @@ static int read_pages(struct address_space *mapping, struct file *filp,
 }
 
 /*
- * __do_page_cache_readahead() actually reads a chunk of disk. It allocates all
- * the pages first, then submits them all for I/O. This avoids the very bad
+ * __do_page_cache_readahead() actually reads a chunk of disk. It allocates
+ * the pages first, then submits them for I/O. This avoids the very bad
  * behaviour which would occur if page allocations are causing VM writeback.
  * We really don't want to intermingle reads and writes like that.
  *
@@ -177,8 +177,18 @@ unsigned int __do_page_cache_readahead(struct address_space *mapping,
 		rcu_read_lock();
 		page = radix_tree_lookup(&mapping->i_pages, page_offset);
 		rcu_read_unlock();
-		if (page && !radix_tree_exceptional_entry(page))
+		if (page && !radix_tree_exceptional_entry(page)) {
+			/*
+			 * Page already present? Kick off the current batch of
+			 * contiguous pages before continuing with the next
+			 * batch.
+			 */
+			if (nr_pages)
+				read_pages(mapping, filp, &page_pool, nr_pages,
+						gfp_mask);
+			nr_pages = 0;
 			continue;
+		}
 
 		page = __page_cache_alloc(gfp_mask);
 		if (!page)
