bio: modify __bio_add_page() to accept pages that don't start a new segment

The original behaviour is to refuse to add a new page once the maximum
number of segments has been reached, regardless of whether the page being
added could be merged into the last segment.

Unfortunately, when the system runs under heavy memory fragmentation, a
driver may try to add multiple pages to the last segment. The original
code will not accept them, and EBUSY is reported to userspace.

This patch modifies the function so that it refuses to add a page only
when the page would start a new segment and the maximum number of
segments has already been reached.
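
To make the intended policy concrete, here is a small stand-alone C sketch
(not kernel code; the names seg, add_page and MAX_SEGMENTS are invented for
illustration) of the rule the patch adopts: a page that is physically
contiguous with the last segment is merged into it, and the segment limit is
enforced only when a new segment would actually be started.

/*
 * Stand-alone model of the policy: refuse a page only if it would
 * start a new segment *and* the segment limit is already reached.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_SEGMENTS 16         /* stands in for queue_max_segments() */

struct seg {
        unsigned long start;    /* byte address of the segment's first byte */
        unsigned int  len;
};

static struct seg segs[MAX_SEGMENTS];
static unsigned int nsegs;

/* Try to add [addr, addr + len); return true on success. */
static bool add_page(unsigned long addr, unsigned int len)
{
        /* Contiguous with the last segment: merge, no new segment needed. */
        if (nsegs && segs[nsegs - 1].start + segs[nsegs - 1].len == addr) {
                segs[nsegs - 1].len += len;
                return true;
        }

        /* A new segment is needed: only now does the limit apply. */
        if (nsegs >= MAX_SEGMENTS)
                return false;

        segs[nsegs].start = addr;
        segs[nsegs].len = len;
        nsegs++;
        return true;
}

int main(void)
{
        /* Two physically contiguous 4 KiB pages collapse into one segment. */
        printf("%d\n", add_page(0x10000, 4096));        /* 1 */
        printf("%d\n", add_page(0x11000, 4096));        /* 1, merged */
        printf("segments used: %u\n", nsegs);           /* 1 */
        return 0;
}

With the old rule, a merge-able page like the second add_page() call above
would still have been rejected once the segment limit was in use, even though
no new segment is needed.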

The bug can be easily reproduced with the st driver:

1) set CONFIG_SCSI_MPT2SAS_MAX_SGE or CONFIG_SCSI_MPT3SAS_MAX_SGE to 16
2) modprobe st buffer_kbs=1024
3) #dd if=/dev/zero of=/dev/st0 bs=1M count=10
   dd: error writing `/dev/st0': Device or resource busy

Signed-off-by: Maurizio Lombardi <[email protected]>
Signed-off-by: Ming Lei <[email protected]>
Cc: Jet Chen <[email protected]>
Cc: Tomas Henzl <[email protected]>
Cc: Jens Axboe <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Jens Axboe <[email protected]>
maurizio-lombardi authored and axboe committed Dec 11, 2014
1 parent 06a41a9 commit fcbf6a0
Showing 1 changed file with 30 additions and 24 deletions.
diff --git a/block/bio.c b/block/bio.c
--- a/block/bio.c
+++ b/block/bio.c
@@ -748,6 +748,7 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
                                 }
                         }

+                        bio->bi_iter.bi_size += len;
                         goto done;
                 }
         }
@@ -764,28 +765,31 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
                 return 0;

         /*
-         * we might lose a segment or two here, but rather that than
-         * make this too complex.
+         * setup the new entry, we might clear it again later if we
+         * cannot add the page
         */
+        bvec = &bio->bi_io_vec[bio->bi_vcnt];
+        bvec->bv_page = page;
+        bvec->bv_len = len;
+        bvec->bv_offset = offset;
+        bio->bi_vcnt++;
+        bio->bi_phys_segments++;
+        bio->bi_iter.bi_size += len;
+
+        /*
+         * Perform a recount if the number of segments is greater
+         * than queue_max_segments(q).
+         */

-        while (bio->bi_phys_segments >= queue_max_segments(q)) {
+        while (bio->bi_phys_segments > queue_max_segments(q)) {

                 if (retried_segments)
-                        return 0;
+                        goto failed;

                 retried_segments = 1;
                 blk_recount_segments(q, bio);
         }

-        /*
-         * setup the new entry, we might clear it again later if we
-         * cannot add the page
-         */
-        bvec = &bio->bi_io_vec[bio->bi_vcnt];
-        bvec->bv_page = page;
-        bvec->bv_len = len;
-        bvec->bv_offset = offset;
-
         /*
         * if queue has other restrictions (eg varying max sector size
         * depending on offset), it can specify a merge_bvec_fn in the
@@ -795,31 +799,33 @@ static int __bio_add_page(struct request_queue *q, struct bio *bio, struct page
                 struct bvec_merge_data bvm = {
                         .bi_bdev = bio->bi_bdev,
                         .bi_sector = bio->bi_iter.bi_sector,
-                        .bi_size = bio->bi_iter.bi_size,
+                        .bi_size = bio->bi_iter.bi_size - len,
                         .bi_rw = bio->bi_rw,
                 };

                 /*
                  * merge_bvec_fn() returns number of bytes it can accept
                  * at this offset
                  */
-                if (q->merge_bvec_fn(q, &bvm, bvec) < bvec->bv_len) {
-                        bvec->bv_page = NULL;
-                        bvec->bv_len = 0;
-                        bvec->bv_offset = 0;
-                        return 0;
-                }
+                if (q->merge_bvec_fn(q, &bvm, bvec) < bvec->bv_len)
+                        goto failed;
         }

         /* If we may be able to merge these biovecs, force a recount */
-        if (bio->bi_vcnt && (BIOVEC_PHYS_MERGEABLE(bvec-1, bvec)))
+        if (bio->bi_vcnt > 1 && (BIOVEC_PHYS_MERGEABLE(bvec-1, bvec)))
                 bio->bi_flags &= ~(1 << BIO_SEG_VALID);

-        bio->bi_vcnt++;
-        bio->bi_phys_segments++;
  done:
-        bio->bi_iter.bi_size += len;
         return len;
+
+ failed:
+        bvec->bv_page = NULL;
+        bvec->bv_len = 0;
+        bvec->bv_offset = 0;
+        bio->bi_vcnt--;
+        bio->bi_iter.bi_size -= len;
+        blk_recount_segments(q, bio);
+        return 0;
 }

 /**
