============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

    void *
    dma_zalloc_coherent(struct device *dev, size_t size,
                        dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.
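
As a sketch of typical usage (the 1024-byte size and the idea of a
"descriptor ring" are illustrative assumptions, not part of the API)::

    void *ring;
    dma_addr_t ring_dma;

    /* a buffer both the CPU and the device can see, no sync needed */
    ring = dma_alloc_coherent(dev, 1024, &ring_dma, GFP_KERNEL);
    if (!ring)
        return -ENOMEM;

    /* ... program ring_dma into the device and use ring from the CPU ... */

    dma_free_coherent(dev, 1024, ring, ring_dma); /* IRQs must be enabled */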


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages(). Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.
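
A minimal sketch of the pool lifecycle (the pool name and the 64-byte,
16-byte-aligned buffer size are illustrative assumptions)::

    struct dma_pool *pool;
    void *desc;
    dma_addr_t desc_dma;

    /* 64-byte buffers, 16-byte aligned, no boundary restriction */
    pool = dma_pool_create("example-desc", dev, 64, 16, 0);
    if (!pool)
        return -ENOMEM;

    desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
    if (desc) {
        /* ... hand desc_dma to the device, fill in desc ... */
        dma_pool_free(pool, desc, desc_dma);
    }

    dma_pool_destroy(pool); /* every buffer must be back in the pool */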


Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
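
A common probe-time pattern is to try a wide mask first and fall back
to a narrower one; this sketch assumes a device capable of 64-bit DMA::

    /* prefer 64-bit DMA, fall back to 32-bit if the platform refuses */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
        return -EIO; /* no usable DMA addressing */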


Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

   Not all memory regions in a machine can be mapped by this API.
   Further, memory that is contiguous in kernel virtual space may not
   be contiguous as physical memory. Since this API does not provide
   any scatter/gather capability, it will fail if the user tries to
   map a non-physically contiguous piece of memory. For this reason,
   memory to be mapped by this API should be obtained from sources
   which guarantee it to be physically contiguous (like kmalloc).

   Further, the DMA address of the memory must be within the
   dma_mask of the device (the dma_mask is a bit mask of the
   addressable region for the device, i.e., if the DMA address of
   the memory ANDed with the dma_mask is still equal to the DMA
   address, then the device can perform DMA to the memory). To
   ensure that the memory allocated by kmalloc is within the dma_mask,
   the driver may specify various platform-dependent flags to restrict
   the DMA address range of the allocation (e.g., on x86, GFP_DMA
   guarantees to be within the first 16MB of available DMA addresses,
   as required by ISA devices).

   Note also that the above constraints on physical contiguity and
   dma_mask may not apply if the platform has an IOMMU (a device which
   maps an I/O DMA address to a physical memory address). However, to be
   portable, device driver writers may *not* assume that such an IOMMU
   exists.

.. warning::

   Memory coherency operates at a granularity called the cache
   line width. In order for memory mapped by this API to operate
   correctly, the mapped region must begin exactly on a cache line
   boundary and end exactly on one (to prevent two separately mapped
   regions from sharing a single cache line). Since the cache line size
   may not be known at compile time, the API will not enforce this
   requirement. Therefore, it is recommended that driver writers who
   don't take special care to determine the cache line size at run time
   only map virtual regions that begin and end on page boundaries (which
   are guaranteed also to be cache line boundaries).

   DMA_TO_DEVICE synchronisation must be done after the last modification
   of the memory region by the software and before it is handed off to
   the device. Once this primitive is used, memory covered by this
   primitive should be treated as read-only by the device. If the device
   may write to it at any point, it should be DMA_BIDIRECTIONAL (see
   below).

   DMA_FROM_DEVICE synchronisation must be done before the driver
   accesses data that may be changed by the device. This memory should
   be treated as read-only by the driver. If the driver needs to write
   to it at any point, it should be DMA_BIDIRECTIONAL (see below).

   DMA_BIDIRECTIONAL requires special handling: it means that the driver
   isn't sure if the memory was modified before being handed off to the
   device and also isn't sure if the device will also modify it. Thus,
   you must always sync bidirectional memory twice: once before the
   memory is handed off to the device (to make sure all memory changes
   are flushed from the processor) and once before the data may be
   accessed after being used by the device (to make sure any processor
   cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.
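
For instance, mapping one whole page for reads by the device might look
like this sketch (assuming page came from alloc_page(); the error check
uses dma_mapping_error(), described below)::

    dma_addr_t dma;

    dma = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma))
        return -ENOMEM;
    /* ... device reads the page through dma ... */
    dma_unmap_page(dev, dma, PAGE_SIZE, DMA_TO_DEVICE);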

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
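
Checking the result of dma_map_single() follows the same pattern (buf
and len stand in for real driver data here)::

    dma_addr_t handle;

    handle = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, handle))
        return -ENOMEM; /* never hand a failed mapping to the device */

    /* ... run the DMA ... */

    dma_unmap_single(dev, handle, len, DMA_TO_DEVICE);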

::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed in to the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
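
Continuing the scatterlist example above, the unmap call therefore
passes the original nents, not the count dma_map_sg() returned::

    dma_unmap_sg(dev, sglist, nents, direction); /* nents, not count */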

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the single mapping API. With the sync_single API,
you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().
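
As a sketch of reusing one long-lived mapping across many transfers
(rx_buf, rx_dma, len and process_packet() are illustrative
assumptions)::

    /* give the buffer to the device before it writes new data */
    dma_sync_single_for_device(dev, rx_dma, len, DMA_FROM_DEVICE);

    /* ... the device DMAs a packet into the buffer ... */

    /* reclaim the buffer for the CPU before reading what arrived */
    dma_sync_single_for_cpu(dev, rx_dma, len, DMA_FROM_DEVICE);
    process_packet(rx_buf, len);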

::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

        unsigned long attrs = 0;

        attrs |= DMA_ATTR_FOO;
        ....
        n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
        ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                                 int nents, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
        ....
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ....
    }


Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

::

    void *
    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    gfp_t flag, unsigned long attrs)

Identical to dma_alloc_coherent() except that when the
DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
platform will choose to return either consistent or non-consistent memory
as it sees fit. By using this API, you are guaranteeing to the platform
that you have all the correct and necessary sync points for this memory
in the driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning: Handling non-consistent memory is a real pain. You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

::

    void
    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
                   dma_addr_t dma_handle, unsigned long attrs)

Free memory allocated by dma_alloc_attrs(). All common parameters
(dev, size, cpu_addr, dma_handle) must be identical to those passed to
dma_alloc_attrs(), and the attrs argument must be identical to the
attrs passed to dma_alloc_attrs().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

   This API may return a number *larger* than the actual cache
   line, but it will guarantee that one or more cache lines fit exactly
   into the width returned by this call. It will also always be a power
   of two for easy alignment.

::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by dma_alloc_attrs() with
the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
continuing on for size. Again, you *must* observe the cache line
boundaries when doing this.
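
A sketch of the non-consistent path on a platform that honours
DMA_ATTR_NON_CONSISTENT (size and the usage pattern are assumptions)::

    void *buf;
    dma_addr_t buf_dma;

    buf = dma_alloc_attrs(dev, size, &buf_dma, GFP_KERNEL,
                          DMA_ATTR_NON_CONSISTENT);
    if (!buf)
        return -ENOMEM;

    /* the CPU fills the buffer, then makes it visible to the device */
    memset(buf, 0, size);
    dma_cache_sync(dev, buf, size, DMA_TO_DEVICE);

    /* ... the device reads the buffer ... */

    dma_free_attrs(dev, size, buf, buf_dma, DMA_ATTR_NON_CONSISTENT);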

::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size,
                                int flags)

Declare a region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
  Do not allow dma_alloc_coherent() to fall back to system memory when
  it's out of memory in the declared region.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page. For smaller allocations,
you should use the dma_pool() API.
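
A driver for a device with a dedicated on-board memory window might
declare it like this sketch (the addresses and size are purely
hypothetical, and a zero return is assumed to mean success)::

    /* 64 KiB of device-local memory: CPU-visible at 0x88000000,
     * device-visible at address 0x0 */
    if (dma_declare_coherent_memory(dev, 0x88000000, 0x0, SZ_64K,
                                    DMA_MEMORY_EXCLUSIVE))
        dev_warn(dev, "could not declare device-local DMA memory\n");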

::

    void
    dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system. This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures. It is the
driver's job to ensure that no parts of this memory region are
currently in use.

::

    void *
    dma_mark_declared_memory_occupied(struct device *dev,
                                      dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.

Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints. DMA addresses must be
released with the corresponding function with the same size, for example. With
the advent of hardware IOMMUs it becomes more and more important that drivers
do not violate those constraints. In the worst case such a violation can
result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this code
detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
        check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
        function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver, this
can be disabled via debugfs. See the debugfs interface documentation below for
details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will disable itself
                                because it is no longer reliable.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/driver-filter           You can write a name of a driver into this
                                file to limit the debug output to requests
                                from that particular driver. Write an empty
                                string to that file to disable the filter and
                                see all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries. These entries are preallocated at boot. The number
of preallocated entries is defined per architecture. If it is too low for you,
boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
architectural default.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

dma-debug provides debug_dma_mapping_error() to debug drivers that fail
to check for DMA mapping errors on addresses returned by dma_map_single() and
dma_map_page() interfaces. This interface clears a flag set by
debug_dma_map_page() to indicate that dma_mapping_error() has been called by
the driver. When the driver does the unmap, debug_dma_unmap() checks the flag
and, if it is still set, prints a warning message that includes the call trace
that leads up to the unmap. This interface can be called from
dma_mapping_error() routines to enable DMA mapping error check debugging.