			Dynamic DMA mapping Guide
			=========================

		 David S. Miller <davem@redhat.com>
		 Richard Henderson <rth@cygnus.com>
		  Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code.  For a concise description of the API, see
DMA-API.txt.

Most of the 64bit platforms have special hardware that translates bus
addresses (DMA addresses) into physical addresses.  This is similar to
how page tables and/or a TLB translates virtual addresses to physical
addresses on a CPU.  This is needed so that e.g. PCI devices can
access with a Single Address Cycle (32bit DMA address) any page in the
64bit physical address space.  Previously in Linux those 64bit
platforms had to set artificial limits on the maximum RAM size in the
system, so that the virt_to_bus() static scheme works (the DMA address
translation tables were simply filled on bootup to map each bus
address to the physical page __pa(bus_to_virt())).

So that Linux can use the dynamic DMA mapping, it needs some help from the
drivers, namely it has to take into account that DMA addresses should be
mapped only for the time they are actually used and unmapped after the DMA
transfer.

The following API will of course work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture.  You should use the DMA API rather than
the bus-specific DMA API (e.g. pci_dma_*).

First of all, you should make sure

#include <linux/dma-mapping.h>

is in your driver.  This file will obtain for you the definition of
the dma_addr_t type (which can hold any valid DMA address for the
platform); it should be used everywhere you hold a DMA (bus) address
returned from the DMA mapping functions.
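
For illustration, here is a minimal sketch of how a driver might carry
such an address around; the structure and member names are hypothetical,
not part of any real API:

	#include <linux/dma-mapping.h>

	/* Hypothetical driver-private state: the dma_addr_t member
	 * holds the DMA (bus) address exactly as returned by the
	 * mapping functions; never store a casted virtual address.
	 */
	struct mydev_ring {
		void		*descs;		/* CPU view of the ring */
		dma_addr_t	descs_dma;	/* device view of the ring */
		size_t		len;		/* size in bytes */
	};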

			What memory is DMA'able?

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities.  There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA.  It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va().  [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA.  These could all be mapped somewhere entirely
different than the rest of physical memory.  Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned.  Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that.  This is similar to vmalloc().

What about block I/O and networking buffers?  The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.
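
As a short, hedged illustration of these rules (the buffer names are
made up for the example):

	#include <linux/slab.h>
	#include <linux/vmalloc.h>

	/* OK: page allocator / kmalloc() memory may be used for DMA. */
	void *buf = kmalloc(size, GFP_KERNEL);

	/* NOT OK: a vmalloc'ed address may not be handed to the
	 * DMA mapping routines directly.
	 */
	void *vbuf = vmalloc(size);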

			DMA addressing limitations

Does your device have any DMA addressing limitations?  For example, is
your device only capable of driving the low order 24-bits of address?
If so, you need to inform the kernel of this fact.

By default, the kernel assumes that your device can address the full
32-bits.  For a 64-bit capable device, this needs to be increased.
And for a device with limitations, as discussed in the previous
paragraph, it needs to be decreased.

Special note about PCI: PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions.  And at least
one platform (SGI SN2) requires 64-bit consistent allocations to
operate correctly when the IO bus is in PCI-X mode.

For correct operation, you must interrogate the kernel in your device
probe routine to see if the DMA controller on the machine can properly
support the DMA addressing limitation your device has.  It is good
style to do this even if your device holds the default setting,
because this shows that you did think about these issues wrt. your
device.

The query is performed via a call to dma_set_mask_and_coherent():

	int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will query the mask for both streaming and coherent APIs together.
If you have some special requirements, then the following two separate
queries can be used instead:

	The query for streaming mappings is performed via a call to
	dma_set_mask():

		int dma_set_mask(struct device *dev, u64 mask);

	The query for consistent allocations is performed via a call
	to dma_set_coherent_mask():

		int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask
is a bit mask describing which bits of an address your device
supports.  It returns zero if your card can perform DMA properly on
the machine given the address mask you provided.  In general, the
device struct of your device is embedded in the bus-specific device
struct of your device.  For example, a pointer to the device struct of
your PCI device is pdev->dev (pdev is a pointer to the PCI device
struct of your device).

If it returns non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined
behavior.  You must either use a different mask, or not use DMA.

This means that in the failure case, you have three options:

1) Use another DMA mask, if possible (see below).
2) Use some non-DMA mode for data transfer, if possible.
3) Ignore this device and do not initialize it.

It is recommended that your driver print a KERN_WARNING message
when you end up performing either #2 or #3.  In this manner, if a user
of your driver reports that performance is bad or that the device is not
even detected, you can ask them for the kernel messages to find out
exactly why.

The standard 32-bit addressing device would do something like this:

	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

Another common scenario is a 64-bit capable device.  The approach here
is to try for 64-bit addressing, but back down to a 32-bit mask that
should not fail.  The kernel may fail the 64-bit mask not because the
platform is not capable of 64-bit addressing.  Rather, it may fail in
this case simply because 32-bit addressing is done more efficiently
than 64-bit addressing.  For example, Sparc64 PCI SAC addressing is
more efficient than DAC addressing.

Here is how you would handle a 64-bit capable device which can drive
all 64-bits when accessing streaming DMA:

	int using_dac;

	if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
	} else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

If a card is capable of using 64-bit consistent allocations as well,
the case would look like this:

	int using_dac, consistent_using_dac;

	if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
		using_dac = 1;
		consistent_using_dac = 1;
	} else if (!dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
		using_dac = 0;
		consistent_using_dac = 0;
	} else {
		printk(KERN_WARNING
		       "mydev: No suitable DMA available.\n");
		goto ignore_this_device;
	}

The coherent mask will always be able to be set to the same or a
smaller mask than the streaming mask.  However, for the rare case that
a device driver only uses consistent allocations, one would have to
check the return value from dma_set_coherent_mask().

Finally, if your device can only drive the low 24-bits of
address you might do something like:

	if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
		printk(KERN_WARNING
		       "mydev: 24-bit DMA addressing not available.\n");
		goto ignore_this_device;
	}

When dma_set_mask() or dma_set_mask_and_coherent() is successful and
returns zero, the kernel saves away this mask you have provided.  The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation.  If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle.  It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done:

	#define PLAYBACK_ADDRESS_BITS	DMA_BIT_MASK(32)
	#define RECORD_ADDRESS_BITS	DMA_BIT_MASK(24)

	struct my_sound_card *card;
	struct device *dev;

	...
	if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
		card->playback_enabled = 1;
	} else {
		card->playback_enabled = 0;
		printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
		       card->name);
	}
	if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
		card->record_enabled = 1;
	} else {
		card->record_enabled = 0;
		printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
		       card->name);
	}

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

			Types of DMA mappings

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the bus space.  However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

	- Network card DMA ring descriptors.
	- SCSI adapter mailbox command data structures.
	- Device firmware microcode executed out of
	  main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa.  Consistent mappings guarantee this.

  IMPORTANT: Consistent DMA memory does not preclude the usage of
	     proper memory barriers.  The CPU may reorder stores to
	     consistent memory just as it may normal memory.  Example:
	     if it is important for the device to see the first word
	     of a descriptor updated before the second, you must do
	     something like:

		desc->word0 = address;
		wmb();
		desc->word1 = DESC_VALID;

	     in order to get correct behavior on all platforms.

	     Also, on some platforms your driver may need to flush CPU
	     write buffers in much the same way as it needs to flush
	     write buffers found in PCI bridges (such as by reading a
	     register's value after writing it).

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

	- Networking buffers transmitted/received by a device.
	- Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows.  To this end, when using
  such mappings you must be explicit about what you want to happen.

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


		 Using Consistent DMA mappings.

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do:

	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a struct device *.  This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages (but takes size instead of a page order).  If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces, for non-NULL dev, will by
default return a DMA address which is 32-bit addressable.  Even if the
device indicates (via DMA mask) that it may address the upper 32-bits,
consistent allocation will only return > 32-bit addresses for DMA if
the consistent DMA mask has been explicitly changed via
dma_set_coherent_mask().  This is true of the dma_pool interface as
well.

dma_alloc_coherent returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The cpu return address and the DMA bus master address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size.  This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.
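
As a hedged sketch tying this together (the size macro is hypothetical),
note that dma_alloc_coherent() returns NULL on failure:

	void *cpu_addr;
	dma_addr_t dma_handle;

	cpu_addr = dma_alloc_coherent(dev, MYDEV_RING_BYTES,
				      &dma_handle, GFP_KERNEL);
	if (!cpu_addr) {
		printk(KERN_ERR "mydev: consistent alloc failed.\n");
		goto ignore_this_device;
	}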

To unmap and free such a DMA region, you call:

	dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent,
or you can use the dma_pool API to do that.  A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this:

	struct dma_pool *pool;

	pool = dma_pool_create(name, dev, size, align, alloc);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above.  The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two).  If your device has no boundary crossing restrictions,
pass 0 for alloc; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
go for dma_alloc_coherent directly instead).

Allocate memory from a dma pool like this:

	cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise.  Like dma_alloc_coherent,
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this:

	dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc, and cpu_addr and
dma_handle are the values dma_pool_alloc returned.  This function
may be called in interrupt context.

Destroy a dma_pool by calling:

	dma_pool_destroy(pool);

Make sure you've called dma_pool_free for all memory allocated
from a pool before you destroy the pool.  This function may not
be called in interrupt context.
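
Putting the pool calls together, a minimal lifecycle might look like
the following sketch (the pool name, sizes and labels are hypothetical):

	struct dma_pool *pool;
	void *vaddr;
	dma_addr_t dma;

	/* 64-byte blocks, 8-byte aligned, no boundary restriction. */
	pool = dma_pool_create("mydev_cmds", dev, 64, 8, 0);
	if (!pool)
		goto err;

	vaddr = dma_pool_alloc(pool, GFP_KERNEL, &dma);
	if (!vaddr)
		goto err_destroy;

	/* ... give 'dma' to the device, access the block via 'vaddr' ... */

	dma_pool_free(pool, vaddr, dma);
	dma_pool_destroy(pool);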

			DMA Direction

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values:

 DMA_BIDIRECTIONAL
 DMA_TO_DEVICE
 DMA_FROM_DEVICE
 DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device"
DMA_FROM_DEVICE means "from the device to main memory"
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL.  It means that the DMA can go in
either direction.  The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging.  You can
hold this in a data structure before you come to know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (outside of
potential platform-specific optimizations of such) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space.  Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction, consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.
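
Since sc_data_direction holds one of the DMA direction values above, a
(hypothetical) SCSI driver sketch could pass it straight through:

	dma_map_single(dev, buffer, len, cmd->sc_data_direction);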

For Networking drivers, it's a rather simple affair.  For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier.  For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
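
A hedged sketch of both cases, with made-up buffer names:

	/* Transmit: the device reads the packet from main memory. */
	tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

	/* Receive: the device writes packet data into main memory. */
	rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);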

		      Using Streaming DMA mappings

The streaming DMA mapping routines can be called from interrupt
context.  There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	void *addr = buffer->ptr;
	size_t size = buffer->len;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

and to unmap it:

	dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and
return an error.  Not all DMA implementations support the
dma_mapping_error() interface.  However, it is a good practice to call
dma_mapping_error(), which will invoke the generic mapping error check
interface.  Doing so will ensure that the mapping code will work
correctly on all DMA implementations without any dependency on the
specifics of the underlying implementation.  Using the returned address
without checking for errors could result in failures ranging from
panics to silent data corruption.  A couple of examples of incorrect
ways to check for errors that make assumptions about the underlying
DMA implementation follow; these are applicable to dma_map_page() as
well.

Incorrect example 1:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if ((dma_handle & 0xffff) != 0 || (dma_handle >= 0x1000000)) {
		goto map_error;
	}

Incorrect example 2:
	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_handle == DMA_ERROR_CODE) {
		goto map_error;
	}

You should call dma_unmap_single when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

Using cpu pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way.  Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single.  These
interfaces deal with page/offset pairs instead of cpu pointers.
Specifically:

	struct device *dev = &my_dev->dev;
	dma_addr_t dma_handle;
	struct page *page = buffer->page;
	unsigned long offset = buffer->offset;
	size_t size = buffer->len;

	dma_handle = dma_map_page(dev, page, offset, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

	...

	dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and
return an error as outlined under the dma_map_single() discussion.

You should call dma_unmap_page when the DMA activity is finished, e.g.
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by:

	int i, count = dma_map_sg(dev, sglist, nents, direction);
	struct scatterlist *sg;

	for_each_sg(sglist, sg, count, i) {
		hw_address[i] = sg_dma_address(sg);
		hw_len[i] = sg_dma_len(sg);
	}

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to.  On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call:

	dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

PLEASE NOTE:  The 'nents' argument to the dma_unmap_sg call must be
	      the _same_ one you passed into the dma_map_sg call,
	      it should _NOT_ be the 'count' value _returned_ from the
	      dma_map_sg call.
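
For completeness, here is a hedged sketch of building and mapping a
small scatterlist over two kmalloc'ed buffers; program_hw_entry() is a
hypothetical stand-in for your device-specific setup:

	#include <linux/scatterlist.h>

	struct scatterlist sgl[2], *sg;
	int i, count;

	sg_init_table(sgl, 2);
	sg_set_buf(&sgl[0], buf0, len0);
	sg_set_buf(&sgl[1], buf1, len1);

	count = dma_map_sg(dev, sgl, 2, DMA_TO_DEVICE);
	if (count == 0)
		goto map_error_handling;

	for_each_sg(sgl, sg, count, i)
		program_hw_entry(i, sg_dma_address(sg), sg_dma_len(sg));

	/* ... after the DMA completes, unmap with the original nents: */
	dma_unmap_sg(dev, sgl, 2, DMA_TO_DEVICE);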

Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
counterpart, because the bus address space is a shared resource (although
in some ports the mapping is per each BUS so fewer devices contend for the
same bus address space) and you could render the machine unusable by eating
all bus addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the cpu and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}, and after each DMA
transfer call either:

	dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the cpu, and then before actually
giving the buffer to the hardware call either:

	dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

	dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}.  If you don't touch the data from the first dma_map_*
call till dma_unmap_*, then you don't have to call the dma_sync_*
routines at all.

Here is pseudo code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

	my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
	{
		dma_addr_t mapping;

		mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
		if (dma_mapping_error(cp->dev, mapping)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}

		cp->rx_buf = buffer;
		cp->rx_len = len;
		cp->rx_dma = mapping;

		give_rx_buf_to_card(cp);
	}

	...

	my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
	{
		struct my_card *cp = devid;

		...
		if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
			struct my_card_header *hp;

			/* Examine the header to see if we wish
			 * to accept the data.  But synchronize
			 * the DMA transfer with the CPU first
			 * so that we see updated contents.
			 */
			dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
						cp->rx_len,
						DMA_FROM_DEVICE);

			/* Now it is safe to examine the buffer. */
			hp = (struct my_card_header *) cp->rx_buf;
			if (header_is_ok(hp)) {
				dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
						 DMA_FROM_DEVICE);
				pass_to_upper_layers(cp->rx_buf);
				make_and_setup_new_rx_buf(cp);
			} else {
				/* CPU should not write to
				 * DMA_FROM_DEVICE-mapped area,
				 * so dma_sync_single_for_device() is
				 * not needed here.  It would be required
				 * for DMA_BIDIRECTIONAL mapping if
				 * the memory was modified.
				 */
				give_rx_buf_to_card(cp);
			}
		}
	}

Drivers converted fully to this interface should not use virt_to_bus any
longer, nor should they use bus_to_virt.  Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent, dma_pool_alloc, and dma_map_single
calls (dma_map_sg stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions.  It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated.  Some ports already do not provide these
as it is impossible to correctly support them.

			   Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent returns NULL or dma_map_sg returns 0

- checking the returned dma_addr_t of dma_map_single and dma_map_page
  by using dma_mapping_error():

	dma_addr_t dma_handle;

	dma_handle = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling;
	}

- unmap pages that are already mapped, when mapping error occurs in the middle
  of a multiple page mapping attempt.  These examples are applicable to
  dma_map_page() as well.

Example 1:
	dma_addr_t dma_handle1;
	dma_addr_t dma_handle2;

	dma_handle1 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle1)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling1;
	}
	dma_handle2 = dma_map_single(dev, addr, size, direction);
	if (dma_mapping_error(dev, dma_handle2)) {
		/*
		 * reduce current DMA mapping usage,
		 * delay and try again later or
		 * reset driver.
		 */
		goto map_error_handling2;
	}

	...

	map_error_handling2:
		dma_unmap_single(dev, dma_handle1, size, direction);
	map_error_handling1:

Example 2: (if buffers are allocated in a loop, unmap all mapped buffers when
	    mapping error is detected in the middle)

	dma_addr_t dma_addr;
	dma_addr_t array[DMA_BUFFERS];
	int save_index = 0;

	for (i = 0; i < DMA_BUFFERS; i++) {

		...

		dma_addr = dma_map_single(dev, addr, size, direction);
		if (dma_mapping_error(dev, dma_addr)) {
			/*
			 * reduce current DMA mapping usage,
			 * delay and try again later or
			 * reset driver.
			 */
			goto map_error_handling;
		}
		array[i] = dma_addr;
		save_index++;
	}

	...

	map_error_handling:

	for (i = 0; i < save_index; i++) {

		...

		dma_unmap_single(dev, array[i], size, direction);
	}

Networking drivers must call dev_kfree_skb to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit).  This means that the socket buffer is just dropped in
the failure case.
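
A hedged sketch of that transmit rule (the driver and device names are
hypothetical):

	static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
					    struct net_device *ndev)
	{
		struct mydev *mp = netdev_priv(ndev);
		dma_addr_t mapping;

		mapping = dma_map_single(mp->dev, skb->data, skb->len,
					 DMA_TO_DEVICE);
		if (dma_mapping_error(mp->dev, mapping)) {
			/* Drop the packet; do not report an error. */
			dev_kfree_skb(skb);
			return NETDEV_TX_OK;
		}
		...
	}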

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook.  This means that the SCSI subsystem
passes the command to the driver again later.
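
And a hedged sketch of the corresponding SCSI rule, where
mydev_map_command() is a hypothetical helper that performs the DMA
mapping:

	static int mydev_queuecommand(struct Scsi_Host *shost,
				      struct scsi_cmnd *cmd)
	{
		...
		if (mydev_map_command(cmd)) {
			/* Ask the midlayer to retry the command later. */
			return SCSI_MLQUEUE_HOST_BUSY;
		}
		...
	}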

		Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space.  Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

	struct ring_state {
		struct sk_buff *skb;
		dma_addr_t mapping;
		__u32 len;
	};

   after:

	struct ring_state {
		struct sk_buff *skb;
		DEFINE_DMA_UNMAP_ADDR(mapping);
		DEFINE_DMA_UNMAP_LEN(len);
	};

2) Use dma_unmap_{addr,len}_set to set these values.
   Example, before:

	ringp->mapping = FOO;
	ringp->len = BAR;

   after:

	dma_unmap_addr_set(ringp, mapping, FOO);
	dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len} to access these values.
   Example, before:

	dma_unmap_single(dev, ringp->mapping, ringp->len,
			 DMA_FROM_DEVICE);

   after:

	dma_unmap_single(dev,
			 dma_unmap_addr(ringp, mapping),
			 dma_unmap_len(ringp, len),
			 DMA_FROM_DEVICE);

It really should be self-explanatory.  We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

			   Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent the architecture specific struct scatterlist; just use
   <asm-generic/scatterlist.h>.  You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that a kmalloc'ed buffer is
   DMA-safe.  Drivers and subsystems depend on it.  If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   the others.  See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints.  You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
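
   For instance, following the ARM example cited above, a non-coherent
   architecture would typically tie the kmalloc minimum alignment to
   its cache line size:

	#define ARCH_DMA_MINALIGN	L1_CACHE_BYTES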

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h.  It's a
   library to support the DMA API with multiple types of IOMMUs.  Lots
   of architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it.  Choose one to see how it can be used.  If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

			      Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

	Russell King <rmk@arm.linux.org.uk>
	Leo Dagum <dagum@barrel.engr.sgi.com>
	Ralf Baechle <ralf@oss.sgi.com>
	Grant Grundler <grundler@cup.hp.com>
	Jay Estabrook <Jay.Estabrook@compaq.com>
	Thomas Sailer <sailer@ife.ee.ethz.ch>
	Andrea Arcangeli <andrea@suse.de>
	Jens Axboe <jens.axboe@oracle.com>
	David Mosberger-Tang <davidm@hpl.hp.com>
1 Dynamic DMA mapping Guide
2 =========================
3
4 David S. Miller <davem@redhat.com>
5 Richard Henderson <rth@cygnus.com>
6 Jakub Jelinek <jakub@redhat.com>
7
8This is a guide to device driver writers on how to use the DMA API
9with example pseudo-code. For a concise description of the API, see
10DMA-API.txt.
11
12Most of the 64bit platforms have special hardware that translates bus
13addresses (DMA addresses) into physical addresses. This is similar to
14how page tables and/or a TLB translates virtual addresses to physical
15addresses on a CPU. This is needed so that e.g. PCI devices can
16access with a Single Address Cycle (32bit DMA address) any page in the
1764bit physical address space. Previously in Linux those 64bit
18platforms had to set artificial limits on the maximum RAM size in the
19system, so that the virt_to_bus() static scheme works (the DMA address
20translation tables were simply filled on bootup to map each bus
21address to the physical page __pa(bus_to_virt())).
22
23So that Linux can use the dynamic DMA mapping, it needs some help from the
24drivers, namely it has to take into account that DMA addresses should be
25mapped only for the time they are actually used and unmapped after the DMA
26transfer.
27
28The following API will work of course even on platforms where no such
29hardware exists.
30
31Note that the DMA API works with any bus independent of the underlying
32microprocessor architecture. You should use the DMA API rather than
33the bus specific DMA API (e.g. pci_dma_*).
34
35First of all, you should make sure
36
37#include <linux/dma-mapping.h>
38
39is in your driver. This file will obtain for you the definition of the
40dma_addr_t (which can hold any valid DMA address for the platform)
41type which should be used everywhere you hold a DMA (bus) address
42returned from the DMA mapping functions.
43
44 What memory is DMA'able?
45
46The first piece of information you must know is what kernel memory can
47be used with the DMA mapping facilities. There has been an unwritten
48set of rules regarding this, and this text is an attempt to finally
49write them down.
50
51If you acquired your memory via the page allocator
52(i.e. __get_free_page*()) or the generic memory allocators
53(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
54that memory using the addresses returned from those routines.
55
56This means specifically that you may _not_ use the memory/addresses
57returned from vmalloc() for DMA. It is possible to DMA to the
58_underlying_ memory mapped into a vmalloc() area, but this requires
59walking page tables to get the physical addresses, and then
60translating each of those pages back to a kernel address using
61something like __va(). [ EDIT: Update this when we integrate
62Gerd Knorr's generic code which does this. ]
63
64This rule also means that you may use neither kernel image addresses
65(items in data/text/bss segments), nor module image addresses, nor
66stack addresses for DMA. These could all be mapped somewhere entirely
67different than the rest of physical memory. Even if those classes of
68memory could physically work with DMA, you'd need to ensure the I/O
69buffers were cacheline-aligned. Without that, you'd see cacheline
70sharing problems (data corruption) on CPUs with DMA-incoherent caches.
71(The CPU could write to one word, DMA would write to a different one
72in the same cache line, and one of them could be overwritten.)
73
74Also, this means that you cannot take the return of a kmap()
75call and DMA to/from that. This is similar to vmalloc().
76
77What about block I/O and networking buffers? The block I/O and
78networking subsystems make sure that the buffers they use are valid
79for you to DMA from/to.
80
81 DMA addressing limitations
82
83Does your device have any DMA addressing limitations? For example, is
84your device only capable of driving the low order 24-bits of address?
85If so, you need to inform the kernel of this fact.
86
87By default, the kernel assumes that your device can address the full
8832-bits. For a 64-bit capable device, this needs to be increased.
89And for a device with limitations, as discussed in the previous
90paragraph, it needs to be decreased.
91
92Special note about PCI: PCI-X specification requires PCI-X devices to
93support 64-bit addressing (DAC) for all transactions. And at least
94one platform (SGI SN2) requires 64-bit consistent allocations to
95operate correctly when the IO bus is in PCI-X mode.
96
97For correct operation, you must interrogate the kernel in your device
98probe routine to see if the DMA controller on the machine can properly
99support the DMA addressing limitation your device has. It is good
100style to do this even if your device holds the default setting,
101because this shows that you did think about these issues wrt. your
102device.
103
104The query is performed via a call to dma_set_mask():
105
106 int dma_set_mask(struct device *dev, u64 mask);
107
108The query for consistent allocations is performed via a call to
109dma_set_coherent_mask():
110
111 int dma_set_coherent_mask(struct device *dev, u64 mask);
112
113Here, dev is a pointer to the device struct of your device, and mask
114is a bit mask describing which bits of an address your device
115supports. It returns zero if your card can perform DMA properly on
116the machine given the address mask you provided. In general, the
117device struct of your device is embedded in the bus specific device
118struct of your device. For example, a pointer to the device struct of
119your PCI device is pdev->dev (pdev is a pointer to the PCI device
120struct of your device).
121
122If it returns non-zero, your device cannot perform DMA properly on
123this platform, and attempting to do so will result in undefined
124behavior. You must either use a different mask, or not use DMA.
125
126This means that in the failure case, you have three options:
127
1281) Use another DMA mask, if possible (see below).
1292) Use some non-DMA mode for data transfer, if possible.
1303) Ignore this device and do not initialize it.
131
132It is recommended that your driver print a kernel KERN_WARNING message
133when you end up performing either #2 or #3. In this manner, if a user
134of your driver reports that performance is bad or that the device is not
135even detected, you can ask them for the kernel messages to find out
136exactly why.
137
138The standard 32-bit addressing device would do something like this:
139
140 if (dma_set_mask(dev, DMA_BIT_MASK(32))) {
141 printk(KERN_WARNING
142 "mydev: No suitable DMA available.\n");
143 goto ignore_this_device;
144 }
145
146Another common scenario is a 64-bit capable device. The approach here
147is to try for 64-bit addressing, but back down to a 32-bit mask that
148should not fail. The kernel may fail the 64-bit mask not because the
149platform is not capable of 64-bit addressing. Rather, it may fail in
150this case simply because 32-bit addressing is done more efficiently
151than 64-bit addressing. For example, Sparc64 PCI SAC addressing is
152more efficient than DAC addressing.
153
154Here is how you would handle a 64-bit capable device which can drive
155all 64-bits when accessing streaming DMA:
156
157 int using_dac;
158
159 if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
160 using_dac = 1;
161 } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
162 using_dac = 0;
163 } else {
164 printk(KERN_WARNING
165 "mydev: No suitable DMA available.\n");
166 goto ignore_this_device;
167 }
168
169If a card is capable of using 64-bit consistent allocations as well,
170the case would look like this:
171
172 int using_dac, consistent_using_dac;
173
174 if (!dma_set_mask(dev, DMA_BIT_MASK(64))) {
175 using_dac = 1;
176 consistent_using_dac = 1;
177 dma_set_coherent_mask(dev, DMA_BIT_MASK(64));
178 } else if (!dma_set_mask(dev, DMA_BIT_MASK(32))) {
179 using_dac = 0;
180 consistent_using_dac = 0;
181 dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
182 } else {
183 printk(KERN_WARNING
184 "mydev: No suitable DMA available.\n");
185 goto ignore_this_device;
186 }
187
188dma_set_coherent_mask() will always be able to set the same or a
189smaller mask as dma_set_mask(). However for the rare case that a
190device driver only uses consistent allocations, one would have to
191check the return value from dma_set_coherent_mask().
192
193Finally, if your device can only drive the low 24-bits of
194address you might do something like:
195
196 if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
197 printk(KERN_WARNING
198 "mydev: 24-bit DMA addressing not available.\n");
199 goto ignore_this_device;
200 }
201
202When dma_set_mask() is successful, and returns zero, the kernel saves
203away this mask you have provided. The kernel will use this
204information later when you make DMA mappings.
205
206There is a case which we are aware of at this time, which is worth
207mentioning in this documentation. If your device supports multiple
208functions (for example a sound card provides playback and record
209functions) and the various different functions have _different_
210DMA addressing limitations, you may wish to probe each mask and
211only provide the functionality which the machine can handle. It
212is important that the last call to dma_set_mask() be for the
213most specific mask.
214
215Here is pseudo-code showing how this might be done:
216
217 #define PLAYBACK_ADDRESS_BITS DMA_BIT_MASK(32)
218 #define RECORD_ADDRESS_BITS DMA_BIT_MASK(24)
219
220 struct my_sound_card *card;
221 struct device *dev;
222
223 ...
224 if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
225 card->playback_enabled = 1;
226 } else {
227 card->playback_enabled = 0;
228 printk(KERN_WARNING "%s: Playback disabled due to DMA limitations.\n",
229 card->name);
230 }
231 if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
232 card->record_enabled = 1;
233 } else {
234 card->record_enabled = 0;
235 printk(KERN_WARNING "%s: Record disabled due to DMA limitations.\n",
236 card->name);
237 }
238
239A sound card was used as an example here because this genre of PCI
240devices seems to be littered with ISA chips given a PCI front end,
241and thus retaining the 16MB DMA addressing limitations of ISA.
242
243 Types of DMA mappings
244
245There are two types of DMA mappings:
246
247- Consistent DMA mappings which are usually mapped at driver
248 initialization, unmapped at the end and for which the hardware should
249 guarantee that the device and the CPU can access the data
250 in parallel and will see updates made by each other without any
251 explicit software flushing.
252
253 Think of "consistent" as "synchronous" or "coherent".
254
255 The current default is to return consistent memory in the low 32
256 bits of the bus space. However, for future compatibility you should
257 set the consistent mask even if this default is fine for your
258 driver.
259
260 Good examples of what to use consistent mappings for are:
261
262 - Network card DMA ring descriptors.
263 - SCSI adapter mailbox command data structures.
264 - Device firmware microcode executed out of
265 main memory.
266
267 The invariant these examples all require is that any CPU store
268 to memory is immediately visible to the device, and vice
269 versa. Consistent mappings guarantee this.
270
271 IMPORTANT: Consistent DMA memory does not preclude the usage of
272 proper memory barriers. The CPU may reorder stores to
273 consistent memory just as it may normal memory. Example:
274 if it is important for the device to see the first word
275 of a descriptor updated before the second, you must do
276 something like:
277
278 desc->word0 = address;
279 wmb();
280 desc->word1 = DESC_VALID;
281
282 in order to get correct behavior on all platforms.
283
284 Also, on some platforms your driver may need to flush CPU write
285 buffers in much the same way as it needs to flush write buffers
286 found in PCI bridges (such as by reading a register's value
287 after writing it).
288
289- Streaming DMA mappings which are usually mapped for one DMA
290 transfer, unmapped right after it (unless you use dma_sync_* below)
291 and for which hardware can optimize for sequential accesses.
292
293 This of "streaming" as "asynchronous" or "outside the coherency
294 domain".
295
296 Good examples of what to use streaming mappings for are:
297
298 - Networking buffers transmitted/received by a device.
299 - Filesystem buffers written/read by a SCSI device.
300
301 The interfaces for using this type of mapping were designed in
302 such a way that an implementation can make whatever performance
303 optimizations the hardware allows. To this end, when using
304 such mappings you must be explicit about what you want to happen.
305
306Neither type of DMA mapping has alignment restrictions that come from
307the underlying bus, although some devices may have such restrictions.
308Also, systems with caches that aren't DMA-coherent will work better
309when the underlying buffers don't share cache lines with other data.
310
311
312 Using Consistent DMA mappings.
313
314To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
315you should do:
316
317 dma_addr_t dma_handle;
318
319 cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);
320
321where device is a struct device *. This may be called in interrupt
322context with the GFP_ATOMIC flag.
323
324Size is the length of the region you want to allocate, in bytes.
325
326This routine will allocate RAM for that region, so it acts similarly to
327__get_free_pages (but takes size instead of a page order). If your
328driver needs regions sized smaller than a page, you may prefer using
329the dma_pool interface, described below.
330
331The consistent DMA mapping interfaces, for non-NULL dev, will by
332default return a DMA address which is 32-bit addressable. Even if the
333device indicates (via DMA mask) that it may address the upper 32-bits,
334consistent allocation will only return > 32-bit addresses for DMA if
335the consistent DMA mask has been explicitly changed via
336dma_set_coherent_mask(). This is true of the dma_pool interface as
337well.
338
339dma_alloc_coherent returns two values: the virtual address which you
340can use to access it from the CPU and dma_handle which you pass to the
341card.
342
343The cpu return address and the DMA bus master address are both
344guaranteed to be aligned to the smallest PAGE_SIZE order which
345is greater than or equal to the requested size. This invariant
346exists (for example) to guarantee that if you allocate a chunk
347which is smaller than or equal to 64 kilobytes, the extent of the
348buffer you receive will not cross a 64K boundary.
349
350To unmap and free such a DMA region, you call:
351
352 dma_free_coherent(dev, size, cpu_addr, dma_handle);
353
354where dev, size are the same as in the above call and cpu_addr and
355dma_handle are the values dma_alloc_coherent returned to you.
356This function may not be called in interrupt context.
357
358If your driver needs lots of smaller memory regions, you can write
359custom code to subdivide pages returned by dma_alloc_coherent,
360or you can use the dma_pool API to do that. A dma_pool is like
361a kmem_cache, but it uses dma_alloc_coherent not __get_free_pages.
362Also, it understands common hardware constraints for alignment,
363like queue heads needing to be aligned on N byte boundaries.
364
365Create a dma_pool like this:
366
367 struct dma_pool *pool;
368
369 pool = dma_pool_create(name, dev, size, align, alloc);
370
371The "name" is for diagnostics (like a kmem_cache name); dev and size
372are as above. The device's hardware alignment requirement for this
373type of data is "align" (which is expressed in bytes, and must be a
374power of two). If your device has no boundary crossing restrictions,
375pass 0 for alloc; passing 4096 says memory allocated from this pool
376must not cross 4KByte boundaries (but at that time it may be better to
377go for dma_alloc_coherent directly instead).
378
379Allocate memory from a dma pool like this:
380
381 cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);
382
383flags are SLAB_KERNEL if blocking is permitted (not in_interrupt nor
384holding SMP locks), SLAB_ATOMIC otherwise. Like dma_alloc_coherent,
385this returns two values, cpu_addr and dma_handle.
386
387Free memory that was allocated from a dma_pool like this:
388
389 dma_pool_free(pool, cpu_addr, dma_handle);
390
391where pool is what you passed to dma_pool_alloc, and cpu_addr and
392dma_handle are the values dma_pool_alloc returned. This function
393may be called in interrupt context.
394
395Destroy a dma_pool by calling:
396
397 dma_pool_destroy(pool);
398
399Make sure you've called dma_pool_free for all memory allocated
400from a pool before you destroy the pool. This function may not
401be called in interrupt context.
402
403 DMA Direction
404
405The interfaces described in subsequent portions of this document
406take a DMA direction argument, which is an integer and takes on
407one of the following values:
408
409 DMA_BIDIRECTIONAL
410 DMA_TO_DEVICE
411 DMA_FROM_DEVICE
412 DMA_NONE
413
414One should provide the exact DMA direction if you know it.
415
416DMA_TO_DEVICE means "from main memory to the device"
417DMA_FROM_DEVICE means "from the device to main memory"
418It is the direction in which the data moves during the DMA
419transfer.
420
421You are _strongly_ encouraged to specify this as precisely
422as you possibly can.
423
424If you absolutely cannot know the direction of the DMA transfer,
425specify DMA_BIDIRECTIONAL. It means that the DMA can go in
426either direction. The platform guarantees that you may legally
427specify this, and that it will work, but this may be at the
428cost of performance for example.
429
430The value DMA_NONE is to be used for debugging. One can
431hold this in a data structure before you come to know the
432precise direction, and this will help catch cases where your
433direction tracking logic has failed to set things up properly.
434
435Another advantage of specifying this value precisely (outside of
436potential platform-specific optimizations of such) is for debugging.
437Some platforms actually have a write permission boolean which DMA
438mappings can be marked with, much like page protections in the user
439program address space. Such platforms can and do report errors in the
440kernel logs when the DMA controller hardware detects violation of the
441permission setting.
442
443Only streaming mappings specify a direction, consistent mappings
444implicitly have a direction attribute setting of
445DMA_BIDIRECTIONAL.
446
447The SCSI subsystem tells you the direction to use in the
448'sc_data_direction' member of the SCSI command your driver is
449working on.
450
451For Networking drivers, it's a rather simple affair. For transmit
452packets, map/unmap them with the DMA_TO_DEVICE direction
453specifier. For receive packets, just the opposite, map/unmap them
454with the DMA_FROM_DEVICE direction specifier.
455
456 Using Streaming DMA mappings
457
458The streaming DMA mapping routines can be called from interrupt
459context. There are two versions of each map/unmap, one which will
460map/unmap a single memory region, and one which will map/unmap a
461scatterlist.
462
463To map a single region, you do:
464
465 struct device *dev = &my_dev->dev;
466 dma_addr_t dma_handle;
467 void *addr = buffer->ptr;
468 size_t size = buffer->len;
469
470 dma_handle = dma_map_single(dev, addr, size, direction);
471
472and to unmap it:
473
474 dma_unmap_single(dev, dma_handle, size, direction);
475
476You should call dma_unmap_single when the DMA activity is finished, e.g.
477from the interrupt which told you that the DMA transfer is done.
478
479Using cpu pointers like this for single mappings has a disadvantage,
480you cannot reference HIGHMEM memory in this way. Thus, there is a
481map/unmap interface pair akin to dma_{map,unmap}_single. These
482interfaces deal with page/offset pairs instead of cpu pointers.
483Specifically:
484
485 struct device *dev = &my_dev->dev;
486 dma_addr_t dma_handle;
487 struct page *page = buffer->page;
488 unsigned long offset = buffer->offset;
489 size_t size = buffer->len;
490
491 dma_handle = dma_map_page(dev, page, offset, size, direction);
492
493 ...
494
495 dma_unmap_page(dev, dma_handle, size, direction);
496
497Here, "offset" means byte offset within the given page.
498
499With scatterlists, you map a region gathered from several regions by:
500
501 int i, count = dma_map_sg(dev, sglist, nents, direction);
502 struct scatterlist *sg;
503
504 for_each_sg(sglist, sg, count, i) {
505 hw_address[i] = sg_dma_address(sg);
506 hw_len[i] = sg_dma_len(sg);
507 }
508
509where nents is the number of entries in the sglist.
510
511The implementation is free to merge several consecutive sglist entries
512into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
513consecutive sglist entries can be merged into one provided the first one
514ends and the second one starts on a page boundary - in fact this is a huge
515advantage for cards which either cannot do scatter-gather or have very
516limited number of scatter-gather entries) and returns the actual number
517of sg entries it mapped them to. On failure 0 is returned.
518
519Then you should loop count times (note: this can be less than nents times)
520and use sg_dma_address() and sg_dma_len() macros where you previously
521accessed sg->address and sg->length as shown above.
522
523To unmap a scatterlist, just call:
524
525 dma_unmap_sg(dev, sglist, nents, direction);
526
527Again, make sure DMA activity has already finished.
528
529PLEASE NOTE: The 'nents' argument to the dma_unmap_sg call must be
530 the _same_ one you passed into the dma_map_sg call,
531 it should _NOT_ be the 'count' value _returned_ from the
532 dma_map_sg call.
533
534Every dma_map_{single,sg} call should have its dma_unmap_{single,sg}
535counterpart, because the bus address space is a shared resource (although
536in some ports the mapping is per each BUS so less devices contend for the
537same bus address space) and you could render the machine unusable by eating
538all bus addresses.
539
If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either:

        dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either:

        dma_sync_single_for_device(dev, dma_handle, size, direction);

or:

        dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

After the last DMA transfer, call one of the DMA unmap routines,
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo-code which shows a situation in which you would need
to use the dma_sync_*() interfaces.

        my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
        {
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);

                cp->rx_buf = buffer;
                cp->rx_len = len;
                cp->rx_dma = mapping;

                give_rx_buf_to_card(cp);
        }

        ...

        my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
        {
                struct my_card *cp = devid;

                ...
                if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
                        struct my_card_header *hp;

                        /* Examine the header to see if we wish
                         * to accept the data. But synchronize
                         * the DMA transfer with the CPU first
                         * so that we see updated contents.
                         */
                        dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                                cp->rx_len,
                                                DMA_FROM_DEVICE);

                        /* Now it is safe to examine the buffer. */
                        hp = (struct my_card_header *) cp->rx_buf;
                        if (header_is_ok(hp)) {
                                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                                 DMA_FROM_DEVICE);
                                pass_to_upper_layers(cp->rx_buf);
                                make_and_setup_new_rx_buf(cp);
                        } else {
                                /* CPU should not write to
                                 * DMA_FROM_DEVICE-mapped area,
                                 * so dma_sync_single_for_device() is
                                 * not needed here. It would be required
                                 * for DMA_BIDIRECTIONAL mapping if
                                 * the memory was modified.
                                 */
                                give_rx_buf_to_card(cp);
                        }
                }
        }

Drivers converted fully to this interface should not use virt_to_bus()
any longer, nor should they use bus_to_virt(). Some drivers have to be
changed a little bit, because there is no longer an equivalent to
bus_to_virt() in the dynamic DMA mapping scheme - you always have to
store the DMA addresses returned by the dma_alloc_coherent(),
dma_pool_alloc(), and dma_map_single() calls (dma_map_sg() stores them
in the scatterlist itself if the platform supports dynamic DMA mapping
in hardware) in your driver structures and/or in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

                Handling Errors

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg() returns 0

- checking the dma_addr_t returned by dma_map_single() and dma_map_page()
  using dma_mapping_error():

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
                /*
                 * reduce current DMA mapping usage,
                 * delay and try again later or
                 * reset driver.
                 */
        }
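
The other two checks are plain return-value tests. A brief sketch,
continuing the variables above; the error-handling labels are
hypothetical placeholders:

        cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, GFP_KERNEL);
        if (!cpu_addr)
                goto alloc_error_handling;      /* hypothetical label */

        count = dma_map_sg(dev, sglist, nents, direction);
        if (count == 0)
                goto map_error_handling;        /* hypothetical label */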

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
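
As a hedged sketch of that rule (my_card and its 'dev' member are
hypothetical, as in the earlier examples):

        static netdev_tx_t my_card_start_xmit(struct sk_buff *skb,
                                              struct net_device *ndev)
        {
                struct my_card *cp = netdev_priv(ndev);
                dma_addr_t mapping;

                mapping = dma_map_single(cp->dev, skb->data, skb->len,
                                         DMA_TO_DEVICE);
                if (dma_mapping_error(cp->dev, mapping)) {
                        /* Drop the packet; do not stall the queue. */
                        dev_kfree_skb(skb);
                        return NETDEV_TX_OK;
                }

                /* ... hand 'mapping' to the hardware and fire it off ... */
                return NETDEV_TX_OK;
        }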

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.
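
Again as a sketch only, inside a hypothetical queuecommand
implementation:

        if (dma_mapping_error(dev, mapping))
                /* The midlayer will retry the command later. */
                return SCSI_MLQUEUE_HOST_BUSY;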

                Optimizing Unmap State Space Consumption

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before:

        struct ring_state {
                struct sk_buff *skb;
                dma_addr_t mapping;
                __u32 len;
        };

   after:

        struct ring_state {
                struct sk_buff *skb;
                DEFINE_DMA_UNMAP_ADDR(mapping);
                DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before:

        ringp->mapping = FOO;
        ringp->len = BAR;

   after:

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before:

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after:

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.
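
Putting 1) through 3) together, a brief sketch of a receive-ring
refill and reap using the hypothetical ring_state above:

        static void my_card_refill_entry(struct device *dev,
                                         struct ring_state *rs,
                                         struct sk_buff *skb, size_t len)
        {
                dma_addr_t addr = dma_map_single(dev, skb->data, len,
                                                 DMA_FROM_DEVICE);

                rs->skb = skb;
                dma_unmap_addr_set(rs, mapping, addr);
                dma_unmap_len_set(rs, len, len);
        }

        static void my_card_reap_entry(struct device *dev,
                                       struct ring_state *rs)
        {
                dma_unmap_single(dev, dma_unmap_addr(rs, mapping),
                                 dma_unmap_len(rs, len), DMA_FROM_DEVICE);
        }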

                Platform Issues

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   Don't invent an architecture-specific struct scatterlist; just use
   <asm-generic/scatterlist.h>. You need to enable
   CONFIG_NEED_SG_DMA_LENGTH if the architecture supports IOMMUs
   (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that a kmalloc'ed buffer is
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that a kmalloc'ed buffer doesn't share a cache line with
   other data. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).
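
   For reference, the ARM example cited above boils down to the
   following definition (a simplified excerpt, shown as a sketch):

        #define ARCH_DMA_MINALIGN       L1_CACHE_BYTES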

3) Supporting multiple types of IOMMUs

   If your architecture needs to support multiple types of IOMMUs, you
   can use include/asm-generic/dma-mapping-common.h. It's a library to
   support the DMA API with multiple types of IOMMUs. Lots of
   architectures (x86, powerpc, sh, alpha, ia64, microblaze and
   sparc) use it. Choose one to see how it can be used. If you need to
   support multiple types of IOMMUs in a single system, the example of
   x86 or powerpc helps.

                Closing

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people:

        Russell King <rmk@arm.linux.org.uk>
        Leo Dagum <dagum@barrel.engr.sgi.com>
        Ralf Baechle <ralf@oss.sgi.com>
        Grant Grundler <grundler@cup.hp.com>
        Jay Estabrook <Jay.Estabrook@compaq.com>
        Thomas Sailer <sailer@ife.ee.ethz.ch>
        Andrea Arcangeli <andrea@suse.de>
        Jens Axboe <jens.axboe@oracle.com>
        David Mosberger-Tang <davidm@hpl.hp.com>