    /* Copy now while we can access the buffer */
    if (write)
        result = scullp_write(iocb->ki_filp, buf, count, &pos);
    else
        result = scullp_read(iocb->ki_filp, buf, count, &pos);

    /* If this is a synchronous IOCB, we return our status now. */
    if (is_sync_kiocb(iocb))
        return result;

    /* Otherwise defer the completion for a few milliseconds. */
    stuff = kmalloc (sizeof (*stuff), GFP_KERNEL);
    if (stuff == NULL)
        return result; /* No memory, just complete now */
    stuff->iocb = iocb;
    stuff->result = result;
    INIT_WORK(&stuff->work, scullp_do_deferred_op, stuff);
    schedule_delayed_work(&stuff->work, HZ/100);
    return -EIOCBQUEUED;
}
A more complete implementation would use get_user_pages to map the user buffer into kernel space. We chose to keep life simple by just copying over the data at the outset. Then a call is made to is_sync_kiocb to see if this operation must be completed synchronously; if so, the result status is returned, and we are done. Otherwise we remember the relevant information in a little structure, arrange for “completion” via a workqueue, and return -EIOCBQUEUED. At this point, control returns to user space.
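The little structure in question needs to hold only the items saved above; a minimal sketch, assuming the 2.6-era workqueue interface used in this chapter, could look like this:

/* Sketch of the bookkeeping structure assumed by the code above;
 * it carries everything the deferred completion will need. */
struct async_work {
    struct kiocb *iocb;       /* the operation to complete later */
    int result;               /* status to hand to aio_complete */
    struct work_struct work;  /* workqueue entry for the deferral */
};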
Later on, the workqueue executes our completion function:
static void scullp_do_deferred_op(void *p)
{
    struct async_work *stuff = (struct async_work *) p;
    aio_complete(stuff->iocb, stuff->result, 0);
    kfree(stuff);
}
Here, it is simply a matter of calling aio_complete with our saved information. A real
driver’s asynchronous I/O implementation is somewhat more complicated, of
course, but it follows this sort of structure.
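To see where scullp_defer_op fits in, here is a sketch of the aio_read and aio_write methods that dispatch to it; the wrapper names follow the scullp convention, and the prototypes are the 2.6-era method signatures assumed throughout this chapter:

static ssize_t scullp_aio_read(struct kiocb *iocb, char __user *buf,
        size_t count, loff_t pos)
{
    /* A read defers to the common helper with write == 0. */
    return scullp_defer_op(0, iocb, buf, count, pos);
}

static ssize_t scullp_aio_write(struct kiocb *iocb, const char __user *buf,
        size_t count, loff_t pos)
{
    /* A write defers to the same helper with write == 1. */
    return scullp_defer_op(1, iocb, (char __user *) buf, count, pos);
}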
Direct Memory Access
Direct memory access, or DMA, is the advanced topic that completes our overview of memory issues. DMA is the hardware mechanism that allows peripheral components to transfer their I/O data directly to and from main memory without the need to involve the system processor. Use of this mechanism can greatly increase throughput to and from a device, because a great deal of computational overhead is eliminated.