Revert "mm: skip CMA pages when they are not available"
[ Upstream commit bfe0857 ]

This reverts commit 5da226d ("mm: skip CMA pages when they are not
available") and b7108d6 ("Multi-gen LRU: skip CMA pages when they are
not eligible").

lruvec->lru_lock is highly contended and is held when calling
isolate_lru_folios.  If the lru has a large number of CMA folios
consecutively, while the allocation type requested is not MIGRATE_MOVABLE,
isolate_lru_folios can hold the lock for a very long time while it skips
those.  For an FIO workload, ~150 million order=0 folios were skipped to
isolate a few ZONE_DMA folios [1].  This can cause lockups [1] and high
memory pressure for extended periods of time [2].

Remove the CMA skipping from MGLRU as well, as it was introduced in
sort_folio for the same reason as 5da226d.

[1] https://lore.kernel.org/all/CAOUHufbkhMZYz20aM_3rHZ3OcK4m2puji2FGpUpn_-DevGk3Kg@mail.gmail.com/
[2] https://lore.kernel.org/all/[email protected]/

[[email protected]: also revert b7108d6, per Johannes]
  Link: https://lkml.kernel.org/r/[email protected]
  Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: 5da226d ("mm: skip CMA pages when they are not available")
Signed-off-by: Usama Arif <[email protected]>
Acked-by: Johannes Weiner <[email protected]>
Cc: Bharata B Rao <[email protected]>
Cc: Breno Leitao <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Matthew Wilcox <[email protected]>
Cc: Rik van Riel <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: Yu Zhao <[email protected]>
Cc: Zhaoyang Huang <[email protected]>
Cc: Zhaoyang Huang <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Sasha Levin <[email protected]>
uarif1 authored and gregkh committed Sep 12, 2024
1 parent 9a99747 commit 0eceaa9
24 changes: 2 additions & 22 deletions mm/vmscan.c
@@ -2261,25 +2261,6 @@ static __always_inline void update_lru_sizes(struct lruvec *lruvec,
 
 }
 
-#ifdef CONFIG_CMA
-/*
- * It is waste of effort to scan and reclaim CMA pages if it is not available
- * for current allocation context. Kswapd can not be enrolled as it can not
- * distinguish this scenario by using sc->gfp_mask = GFP_KERNEL
- */
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return !current_is_kswapd() &&
-			gfp_migratetype(sc->gfp_mask) != MIGRATE_MOVABLE &&
-			folio_migratetype(folio) == MIGRATE_CMA;
-}
-#else
-static bool skip_cma(struct folio *folio, struct scan_control *sc)
-{
-	return false;
-}
-#endif
-
 /*
  * Isolating page from the lruvec to fill in @dst list by nr_to_scan times.
  *
@@ -2326,8 +2307,7 @@ static unsigned long isolate_lru_folios(unsigned long nr_to_scan,
 		nr_pages = folio_nr_pages(folio);
 		total_scan += nr_pages;
 
-		if (folio_zonenum(folio) > sc->reclaim_idx ||
-				skip_cma(folio, sc)) {
+		if (folio_zonenum(folio) > sc->reclaim_idx) {
 			nr_skipped[folio_zonenum(folio)] += nr_pages;
 			move_to = &folios_skipped;
 			goto move;
@@ -4971,7 +4951,7 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 	}
 
 	/* ineligible */
-	if (zone > sc->reclaim_idx || skip_cma(folio, sc)) {
+	if (zone > sc->reclaim_idx) {
 		gen = folio_inc_gen(lruvec, folio, false);
 		list_move_tail(&folio->lru, &lrugen->folios[gen][type][zone]);
 		return true;
