scsi: 53c700: remove dma_is_consistent usage
This driver is the only user of dma_is_consistent().  We plan to remove this
API.

The driver uses the API in the following way:

BUG_ON(!dma_is_consistent(hostdata->dev, pScript) && L1_CACHE_BYTES < dma_get_cache_alignment());

The above code checks that L1_CACHE_BYTES is at least as large as
dma_get_cache_alignment() on systems that cannot allocate coherent memory
(some old systems can't).
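
In other words, the assertion can only fire on a platform that cannot allocate
coherent memory and whose DMA cache alignment is larger than an L1 cache line.
A logically equivalent restatement of the check, shown here only to make the
safety condition explicit (this rewrite is not code taken from the driver):

/* illustration only: equivalent to the BUG_ON quoted above */
BUG_ON(!(dma_is_consistent(hostdata->dev, pScript) ||
	 dma_get_cache_alignment() <= L1_CACHE_BYTES));

Either the allocation is coherent, or an L1 cache line is at least as large as
the DMA cache alignment, so L1_CACHE_BYTES of separation between the mailboxes
is enough.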

James Bottomley explained that this is necessary because the driver packs the
set of mailboxes into a single coherent area and separates the different
usages by an L1 cache stride, so it is fatal if the DMA cache alignment is
larger than that separation.
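
As a rough sketch of that layout (the names, constants and sizes below are
assumptions for illustration, not the driver's real definitions), the mailbox
offsets are carved out of one coherent buffer at L1-cache-aligned positions:

/* Sketch only (not 53c700.c).  Each mailbox region starts on an
 * L1_CACHE_BYTES boundary inside a single coherent allocation, so cache
 * maintenance on one region cannot touch its neighbours as long as
 * dma_get_cache_alignment() is no larger than L1_CACHE_BYTES. */
#include <stdio.h>

#define L1_CACHE_BYTES	32	/* assumed value for this sketch */
#define CACHE_ALIGN(x)	(((x) + L1_CACHE_BYTES - 1) & ~(L1_CACHE_BYTES - 1))

int main(void)
{
	unsigned long script_size   = 4096;	/* placeholder for the script area size */
	unsigned long msgout_offset = CACHE_ALIGN(script_size);
	unsigned long msgin_offset  = CACHE_ALIGN(msgout_offset + 8);
	unsigned long status_offset = CACHE_ALIGN(msgin_offset + 8);
	unsigned long slots_offset  = CACHE_ALIGN(status_offset + 1);

	printf("msgout=%lu msgin=%lu status=%lu slots=%lu\n",
	       msgout_offset, msgin_offset, status_offset, slots_offset);
	return 0;
}

This is why the removed assertion tied the two values together: the layout is
only safe when a DMA cache line is not wider than L1_CACHE_BYTES.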

He also pointed out that we can kill this check because none of the
architectures that actually use the driver hit this BUG_ON.

(akpm: stolen from the scsi tree because
dma-mapping-remove-dma_is_consistent-api.patch needs it)

Signed-off-by: FUJITA Tomonori <[email protected]>
Signed-off-by: James Bottomley <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
fujita authored and torvalds committed Aug 11, 2010
1 parent 7896bfa commit d80e0d9
Showing 1 changed file with 0 additions and 3 deletions.
3 changes: 0 additions & 3 deletions drivers/scsi/53c700.c
@@ -309,9 +309,6 @@ NCR_700_detect(struct scsi_host_template *tpnt,
 	hostdata->msgin = memory + MSGIN_OFFSET;
 	hostdata->msgout = memory + MSGOUT_OFFSET;
 	hostdata->status = memory + STATUS_OFFSET;
-	/* all of these offsets are L1_CACHE_BYTES separated. It is fatal
-	 * if this isn't sufficient separation to avoid dma flushing issues */
-	BUG_ON(!dma_is_consistent(hostdata->dev, pScript) && L1_CACHE_BYTES < dma_get_cache_alignment());
 	hostdata->slots = (struct NCR_700_command_slot *)(memory + SLOTS_OFFSET);
 	hostdata->dev = dev;
 
