readahead: move the random read case to bottom
Split all readahead cases, and move the random one to bottom.

No behavior changes.

This is to prepare for the introduction of context readahead, and to make
it easy to insert accounting/tracing points for each case.

Signed-off-by: Wu Fengguang <[email protected]>
Cc: Vladislav Bolkhovitin <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Jeff Moyer <[email protected]>
Cc: Nick Piggin <[email protected]>
Cc: Ying Han <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Wu Fengguang authored and torvalds committed Jun 17, 2009
1 parent dc56612 commit 045a252
Showing 1 changed file with 25 additions and 21 deletions.
mm/readahead.c
@@ -339,33 +339,25 @@ ondemand_readahead(struct address_space *mapping,
 		   unsigned long req_size)
 {
 	unsigned long max = max_sane_readahead(ra->ra_pages);
-	pgoff_t prev_offset;
-	int sequential;
+
+	/*
+	 * start of file
+	 */
+	if (!offset)
+		goto initial_readahead;
 
 	/*
 	 * It's the expected callback offset, assume sequential access.
 	 * Ramp up sizes, and push forward the readahead window.
 	 */
-	if (offset && (offset == (ra->start + ra->size - ra->async_size) ||
-			offset == (ra->start + ra->size))) {
+	if ((offset == (ra->start + ra->size - ra->async_size) ||
+	     offset == (ra->start + ra->size))) {
 		ra->start += ra->size;
 		ra->size = get_next_ra_size(ra, max);
 		ra->async_size = ra->size;
 		goto readit;
 	}
 
-	prev_offset = ra->prev_pos >> PAGE_CACHE_SHIFT;
-	sequential = offset - prev_offset <= 1UL || req_size > max;
-
-	/*
-	 * Standalone, small read.
-	 * Read as is, and do not pollute the readahead state.
-	 */
-	if (!hit_readahead_marker && !sequential) {
-		return __do_page_cache_readahead(mapping, filp,
-						offset, req_size, 0);
-	}
-
 	/*
 	 * Hit a marked page without valid readahead state.
 	 * E.g. interleaved reads.
@@ -391,12 +383,24 @@ ondemand_readahead(struct address_space *mapping,
 	}
 
 	/*
-	 * It may be one of
-	 *   - first read on start of file
-	 *   - sequential cache miss
-	 *   - oversize random read
-	 * Start readahead for it.
+	 * oversize read
 	 */
+	if (req_size > max)
+		goto initial_readahead;
+
+	/*
+	 * sequential cache miss
+	 */
+	if (offset - (ra->prev_pos >> PAGE_CACHE_SHIFT) <= 1UL)
+		goto initial_readahead;
+
+	/*
+	 * standalone, small random read
+	 * Read as is, and do not pollute the readahead state.
+	 */
+	return __do_page_cache_readahead(mapping, filp, offset, req_size, 0);
+
+initial_readahead:
 	ra->start = offset;
 	ra->size = get_init_ra_size(req_size, max);
 	ra->async_size = ra->size > req_size ? ra->size - req_size : ra->size;
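
For orientation, here is a minimal userspace C sketch of the decision ladder as it reads after this patch. The struct ra_state, enum ra_case and classify() names are illustrative stand-ins, not kernel API, and prev_pos is kept in pages here for simplicity, whereas the kernel stores a byte position and shifts it by PAGE_CACHE_SHIFT.

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for the kernel's struct file_ra_state. */
struct ra_state {
	unsigned long start;      /* first page of the current readahead window */
	unsigned long size;       /* window size, in pages */
	unsigned long async_size; /* pages left when async readahead should fire */
	unsigned long prev_pos;   /* previous read position (pages; kernel uses bytes) */
};

enum ra_case {
	RA_INITIAL,    /* start of file, oversize read, or sequential cache miss */
	RA_SUBSEQUENT, /* expected offset: ramp up and push the window forward */
	RA_MARKER,     /* hit a readahead marker without matching state */
	RA_RANDOM,     /* standalone small random read: left for last */
};

/* Mirrors the post-patch ordering: every non-random case is peeled off
 * first, so the random read is the one case that falls to the bottom. */
static enum ra_case classify(const struct ra_state *ra, bool hit_readahead_marker,
			     unsigned long offset, unsigned long req_size,
			     unsigned long max)
{
	if (offset == 0)                            /* start of file */
		return RA_INITIAL;
	if (offset == ra->start + ra->size - ra->async_size ||
	    offset == ra->start + ra->size)         /* expected callback offset */
		return RA_SUBSEQUENT;
	if (hit_readahead_marker)                   /* e.g. interleaved reads */
		return RA_MARKER;
	if (req_size > max)                         /* oversize read */
		return RA_INITIAL;
	if (offset - ra->prev_pos <= 1UL)           /* sequential cache miss */
		return RA_INITIAL;
	return RA_RANDOM;                           /* random read, bottom case */
}

int main(void)
{
	struct ra_state ra = { .start = 100, .size = 32, .async_size = 16,
			       .prev_pos = 99 };
	unsigned long max = 128;

	printf("%d\n", classify(&ra, false, 0, 4, max));    /* 0: RA_INITIAL */
	printf("%d\n", classify(&ra, false, 116, 4, max));  /* 1: RA_SUBSEQUENT (116 == 100+32-16) */
	printf("%d\n", classify(&ra, false, 5000, 4, max)); /* 3: RA_RANDOM */
	return 0;
}

Keeping the random read as the final fall-through is what makes the next steps cheap: the planned context readahead logic and the accounting/tracing points mentioned in the commit message can each be attached to one clearly separated case without disturbing the others.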
