mm: dirty page tracking race fix
There is a race in dirty page accounting whereby a page may not be properly
accounted for.

clear_page_dirty_for_io() first calls page_mkclean, then TestClearPageDirty.
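
That ordering is what matters here.  A simplified sketch of the path (not
the full mm/page-writeback.c function, which also handles pages without an
accountable mapping and updates the dirty statistics):

	/* Sketch only; mapping checks and dirty accounting omitted. */
	int clear_page_dirty_for_io(struct page *page)
	{
		/* Clean and write-protect every pte mapping the page... */
		if (page_mkclean(page))
			set_page_dirty(page);
		/* ...then clear the dirty flag on the struct page itself. */
		return TestClearPageDirty(page);
	}

If page_mkclean misses a dirty pte, TestClearPageDirty clears the flag
anyway and the dirtying is lost.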

page_mkclean walks the rmaps for that page, and for each one it cleans and
write-protects the pte if it was dirty.  It uses page_check_address to find
the pte.  That function has a shortcut that avoids taking the ptl when the
pte is not present.  Unfortunately, other code can, while holding the page
table lock, switch the pte to not-present and then back to present; this
must not cause page_mkclean to ignore that pte, because it may be dirty.
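
The problematic fast path, annotated (this is the pre-patch
page_check_address code from mm/rmap.c, with comments added to mark where
the race bites):

	pte = pte_offset_map(pmd, address);
	/*
	 * Racy: the pte is tested without holding the ptl.  If another
	 * CPU holds the ptl and is inside set_pte_at(), having cleared
	 * the pte but not yet rewritten it, we read "not present" here
	 * and wrongly skip a pte that is about to be present and dirty.
	 */
	if (!pte_present(*pte)) {
		pte_unmap(pte);
		return NULL;
	}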

For example, powerpc64's set_pte_at clears a previously present pte before
setting it to the desired value.  There may also be other code, in the core
mm or in the arch layers, that does similar things.
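
A sketch of that pattern (hypothetical code for illustration; the real
powerpc64 set_pte_at differs, clearing the pte as part of keeping its
hashed page table consistent):

	void set_pte_at(struct mm_struct *mm, unsigned long addr,
			pte_t *ptep, pte_t pte)
	{
		if (pte_present(*ptep))
			pte_clear(mm, addr, ptep);	/* pte transiently not present */

		/* an unlocked pte_present() check landing here sees a hole */

		*ptep = pte;		/* present (and possibly dirty) again */
	}

Note that set_pte_at runs with the page table lock held, which is exactly
why only a locked check in page_check_address can be trusted.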

The consequence of the bug is loss of data integrity via msync (a page
dirtied through a pte that page_mkclean misses may never be written back),
and loss of dirty page accounting accuracy.  XIP's __xip_unmap could easily
be unreliable as well (depending on the exact XIP locking scheme), which
can lead to data corruption.

Fix this by giving page_check_address an option to always take the ptl when
checking the pte.

It is possible to retain the lockless optimization for page_referenced and
try_to_unmap; the resulting convention is sketched below.
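
A sketch of the convention at the call sites (the why-it-is-safe comments
are an interpretation, not part of the changelog):

	/*
	 * sync == 1: always take the ptl; required where missing a
	 * transiently-cleared pte would lose a dirty bit
	 * (page_mkclean_one, __xip_unmap).
	 *
	 * sync == 0: keep the lockless pte_present() shortcut; a false
	 * miss presumably only drops a reference hint or defers an
	 * unmap (page_referenced_one, try_to_unmap_one).
	 */
	pte = page_check_address(page, mm, address, &ptl, sync);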

Signed-off-by: Nick Piggin <[email protected]>
Cc: Jared Hulbert <[email protected]>
Cc: Carsten Otte <[email protected]>
Cc: Hugh Dickins <[email protected]>
Acked-by: Peter Zijlstra <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Nick Piggin authored and torvalds committed Aug 20, 2008
1 parent 2d70b68 commit 479db0b
Showing 3 changed files with 11 additions and 7 deletions.
2 changes: 1 addition & 1 deletion include/linux/rmap.h

@@ -102,7 +102,7 @@ int try_to_unmap(struct page *, int ignore_refs);
  * Called from mm/filemap_xip.c to unmap empty zero page
  */
 pte_t *page_check_address(struct page *, struct mm_struct *,
-			unsigned long, spinlock_t **);
+			unsigned long, spinlock_t **, int);
 
 /*
  * Used by swapoff to help locate where page is expected in vma.
2 changes: 1 addition & 1 deletion mm/filemap_xip.c

@@ -185,7 +185,7 @@ __xip_unmap (struct address_space * mapping,
 		address = vma->vm_start +
 			((pgoff - vma->vm_pgoff) << PAGE_SHIFT);
 		BUG_ON(address < vma->vm_start || address >= vma->vm_end);
-		pte = page_check_address(page, mm, address, &ptl);
+		pte = page_check_address(page, mm, address, &ptl, 1);
 		if (pte) {
 			/* Nuke the page table entry. */
 			flush_cache_page(vma, address, pte_pfn(*pte));
14 changes: 9 additions & 5 deletions mm/rmap.c

@@ -224,10 +224,14 @@ unsigned long page_address_in_vma(struct page *page, struct vm_area_struct *vma)
 /*
  * Check that @page is mapped at @address into @mm.
  *
+ * If @sync is false, page_check_address may perform a racy check to avoid
+ * the page table lock when the pte is not present (helpful when reclaiming
+ * highly shared pages).
+ *
  * On success returns with pte mapped and locked.
  */
 pte_t *page_check_address(struct page *page, struct mm_struct *mm,
-			  unsigned long address, spinlock_t **ptlp)
+			  unsigned long address, spinlock_t **ptlp, int sync)
 {
 	pgd_t *pgd;
 	pud_t *pud;
@@ -249,7 +253,7 @@ pte_t *page_check_address(struct page *page, struct mm_struct *mm,
 
 	pte = pte_offset_map(pmd, address);
 	/* Make a quick check before getting the lock */
-	if (!pte_present(*pte)) {
+	if (!sync && !pte_present(*pte)) {
 		pte_unmap(pte);
 		return NULL;
 	}
@@ -281,7 +285,7 @@ static int page_referenced_one(struct page *page,
 	if (address == -EFAULT)
 		goto out;
 
-	pte = page_check_address(page, mm, address, &ptl);
+	pte = page_check_address(page, mm, address, &ptl, 0);
 	if (!pte)
 		goto out;
 
@@ -450,7 +454,7 @@ static int page_mkclean_one(struct page *page, struct vm_area_struct *vma)
 	if (address == -EFAULT)
 		goto out;
 
-	pte = page_check_address(page, mm, address, &ptl);
+	pte = page_check_address(page, mm, address, &ptl, 1);
 	if (!pte)
 		goto out;
 
@@ -704,7 +708,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 	if (address == -EFAULT)
 		goto out;
 
-	pte = page_check_address(page, mm, address, &ptl);
+	pte = page_check_address(page, mm, address, &ptl, 0);
 	if (!pte)
 		goto out;
