mm/filemap: Convert mapping_get_entry to return a folio
The pagecache only contains folios, so indicate that this is definitely
not a tail page.  Shrinks mapping_get_entry() by 56 bytes, but grows
pagecache_get_page() by 21 bytes as gcc makes slightly different hot/cold
code decisions.  A net reduction of 35 bytes of text.

Signed-off-by: Matthew Wilcox (Oracle) <[email protected]>
Reviewed-by: Christoph Hellwig <[email protected]>
Reviewed-by: David Howells <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
Matthew Wilcox (Oracle) committed Oct 18, 2021
1 parent 9dd3d06 commit bca65ee
1 changed file: mm/filemap.c (14 additions, 21 deletions)
@@ -1810,49 +1810,42 @@ EXPORT_SYMBOL(page_cache_prev_miss);
  * @mapping: the address_space to search
  * @index: The page cache index.
  *
- * Looks up the page cache slot at @mapping & @index.  If there is a
- * page cache page, the head page is returned with an increased refcount.
+ * Looks up the page cache entry at @mapping & @index.  If it is a folio,
+ * it is returned with an increased refcount.  If it is a shadow entry
+ * of a previously evicted folio, or a swap entry from shmem/tmpfs,
+ * it is returned without further action.
  *
- * If the slot holds a shadow entry of a previously evicted page, or a
- * swap entry from shmem/tmpfs, it is returned.
- *
- * Return: The head page or shadow entry, %NULL if nothing is found.
+ * Return: The folio, swap or shadow entry, %NULL if nothing is found.
  */
-static struct page *mapping_get_entry(struct address_space *mapping,
-		pgoff_t index)
+static void *mapping_get_entry(struct address_space *mapping, pgoff_t index)
 {
 	XA_STATE(xas, &mapping->i_pages, index);
-	struct page *page;
+	struct folio *folio;
 
 	rcu_read_lock();
 repeat:
 	xas_reset(&xas);
-	page = xas_load(&xas);
-	if (xas_retry(&xas, page))
+	folio = xas_load(&xas);
+	if (xas_retry(&xas, folio))
 		goto repeat;
 	/*
 	 * A shadow entry of a recently evicted page, or a swap entry from
 	 * shmem/tmpfs.  Return it without attempting to raise page count.
 	 */
-	if (!page || xa_is_value(page))
+	if (!folio || xa_is_value(folio))
 		goto out;
 
-	if (!page_cache_get_speculative(page))
+	if (!folio_try_get_rcu(folio))
 		goto repeat;
 
-	/*
-	 * Has the page moved or been split?
-	 * This is part of the lockless pagecache protocol. See
-	 * include/linux/pagemap.h for details.
-	 */
-	if (unlikely(page != xas_reload(&xas))) {
-		put_page(page);
+	if (unlikely(folio != xas_reload(&xas))) {
+		folio_put(folio);
 		goto repeat;
 	}
 out:
 	rcu_read_unlock();
 
-	return page;
+	return folio;
 }
 
 /**
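For context, a minimal caller sketch, not part of this commit: mapping_get_entry() now returns a void * that is either NULL, a shadow/swap value entry, or a folio whose refcount has already been raised. The helper name example_get_folio() below is hypothetical, and the includes are listed only to make the sketch self-contained (real callers live inside mm/filemap.c, where mapping_get_entry() is static).

/*
 * Hypothetical example, not from the commit: distinguishing the three
 * possible return values of mapping_get_entry().  Only compilable inside
 * mm/filemap.c, since mapping_get_entry() is static there.
 */
#include <linux/pagemap.h>
#include <linux/xarray.h>

static struct folio *example_get_folio(struct address_space *mapping,
		pgoff_t index)
{
	void *entry = mapping_get_entry(mapping, index);

	if (!entry)
		return NULL;		/* nothing cached at @index */
	if (xa_is_value(entry))
		return NULL;		/* shadow or swap entry, not a folio */
	return entry;			/* folio, refcount already elevated */
}

pagecache_get_page(), mentioned in the commit message, performs a similar check on the returned entry before going on to lock or allocate a folio.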
