============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API. For a more gentle introduction
to the API (and actual examples), see Documentation/core-api/dma-api-howto.rst.

This API is split into two pieces. Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines. Unless you know that your driver absolutely has to support
non-consistent platforms (this is usually only legacy platforms) you
should only use the API described in Part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>. This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform. It can be
given to a device to use as a DMA source or target. A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects. (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated. dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent(). cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.

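As a minimal sketch of this allocate/free pairing (the ``dev`` pointer and
the descriptor-ring size are illustrative assumptions, not part of the API)::

    /* Hypothetical example: allocate a small descriptor ring at probe
     * time and release it at remove time. */
    #define RING_BYTES 4096

    dma_addr_t ring_dma;
    void *ring;

    ring = dma_alloc_coherent(dev, RING_BYTES, &ring_dma, GFP_KERNEL);
    if (!ring)
        return -ENOMEM;

    /* ... program ring_dma into the device, access "ring" from the CPU ... */

    dma_free_coherent(dev, RING_BYTES, ring, ring_dma);
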

Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers. Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools. These work
much like a struct kmem_cache, except that they use the DMA-coherent allocator,
not __get_free_pages(). Also, they understand common hardware constraints
for alignment, like queue heads needing to be aligned on N-byte boundaries.


::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device. It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent(). The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two). If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time. Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking. Like dma_alloc_coherent(), this returns two values: an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool. The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool. It must be
called in a context which can sleep. Make sure you've freed all allocated
memory back to the pool before you destroy it.

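Putting the pool calls together, a hedged sketch; the pool name, sizes and
error handling are illustrative::

    /* Hypothetical example: a pool of 64-byte descriptors aligned to
     * 32 bytes, never crossing a 4096-byte boundary. */
    struct dma_pool *pool;
    dma_addr_t desc_dma;
    void *desc;

    pool = dma_pool_create("mydev_desc", dev, 64, 32, 4096);
    if (!pool)
        return -ENOMEM;

    desc = dma_pool_zalloc(pool, GFP_KERNEL, &desc_dma);
    if (!desc)
        goto destroy_pool;

    /* ... hand desc_dma to the hardware, fill in desc from the CPU ... */

    dma_pool_free(pool, desc, desc_dma);
destroy_pool:
    dma_pool_destroy(pool);
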

Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

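A common probe-time pattern, sketched under the assumption that the device
prefers 64-bit addressing but can manage with 32 bits::

    /* Try 64-bit DMA addressing first, fall back to 32-bit. */
    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32)))
        return -EIO;    /* no usable DMA addressing */
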
::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently. Usually this means the returned mask
is the minimum required to cover all of memory. Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask. If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.

::

    size_t
    dma_max_mapping_size(struct device *dev);

Returns the maximum size of a mapping for the device. The size parameter
of the mapping functions like dma_map_single(), dma_map_page() and
others should not be larger than the returned value.

::

    size_t
    dma_opt_mapping_size(struct device *dev);

Returns the maximum optimal size of a mapping for the device.

Mapping larger buffers may take much longer in certain scenarios. In
addition, for high-rate short-lived streaming mappings, the upfront time
spent on the mapping may account for an appreciable part of the total
request lifetime. As such, if splitting larger requests incurs no
significant performance penalty, then device drivers are advised to
limit the total DMA streaming mapping length to the returned value.

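As an illustration, a driver choosing its per-transfer size could clamp it
with these helpers; ``MY_PREFERRED_XFER_BYTES`` is a hypothetical driver
constant, and min3() is the kernel's three-way minimum macro::

    /* Illustrative: never map more than the DMA layer supports, and
     * prefer the optimal limit for high-rate streaming mappings. */
    size_t xfer = min3((size_t)MY_PREFERRED_XFER_BYTES,
                       dma_max_mapping_size(dev),
                       dma_opt_mapping_size(dev));
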
::

    bool
    dma_need_sync(struct device *dev, dma_addr_t dma_addr);

Returns %true if dma_sync_single_for_{device,cpu} calls are required to
transfer memory ownership. Returns %false if those calls can be skipped.

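A sketch of how a driver that recycles a long-lived streaming mapping might
cache this answer; ``priv`` and its fields are illustrative assumptions::

    /* Illustrative: "priv" is a hypothetical driver-private structure
     * holding a long-lived streaming mapping at priv->buf_dma. */
    priv->need_sync = dma_need_sync(dev, priv->buf_dma);

    /* later, in the data path: */
    if (priv->need_sync)
        dma_sync_single_for_cpu(dev, priv->buf_dma, len, DMA_FROM_DEVICE);
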
::

    unsigned long
    dma_get_merge_boundary(struct device *dev);

Returns the DMA merge boundary. If the device cannot merge any of the DMA
address segments, the function returns 0.

Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

The direction for both APIs may be converted freely by casting.
However the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

   Not all memory regions in a machine can be mapped by this API.
   Further, contiguous kernel virtual space may not be contiguous as
   physical memory. Since this API does not provide any scatter/gather
   capability, it will fail if the user tries to map a non-physically
   contiguous piece of memory. For this reason, memory to be mapped by
   this API should be obtained from sources which guarantee it to be
   physically contiguous (like kmalloc).

   Further, the DMA address of the memory must be within the
   dma_mask of the device (the dma_mask is a bit mask of the
   addressable region for the device, i.e., if the DMA address of
   the memory ANDed with the dma_mask is still equal to the DMA
   address, then the device can perform DMA to the memory). To
   ensure that the memory allocated by kmalloc is within the dma_mask,
   the driver may specify various platform-dependent flags to restrict
   the DMA address range of the allocation (e.g., on x86, GFP_DMA
   guarantees to be within the first 16MB of available DMA addresses,
   as required by ISA devices).

   Note also that the above constraints on physical contiguity and
   dma_mask may not apply if the platform has an IOMMU (a device which
   maps an I/O DMA address to a physical memory address). However, to be
   portable, device driver writers may *not* assume that such an IOMMU
   exists.

.. warning::

   Memory coherency operates at a granularity called the cache
   line width. In order for memory mapped by this API to operate
   correctly, the mapped region must begin exactly on a cache line
   boundary and end exactly on one (to prevent two separately mapped
   regions from sharing a single cache line). Since the cache line size
   may not be known at compile time, the API will not enforce this
   requirement. Therefore, it is recommended that driver writers who
   don't take special care to determine the cache line size at run time
   only map virtual regions that begin and end on page boundaries (which
   are guaranteed also to be cache line boundaries).

   DMA_TO_DEVICE synchronisation must be done after the last modification
   of the memory region by the software and before it is handed off to
   the device. Once this primitive is used, memory covered by this
   primitive should be treated as read-only by the device. If the device
   may write to it at any point, it should be DMA_BIDIRECTIONAL (see
   below).

   DMA_FROM_DEVICE synchronisation must be done before the driver
   accesses data that may be changed by the device. This memory should
   be treated as read-only by the driver. If the driver needs to write
   to it at any point, it should be DMA_BIDIRECTIONAL (see below).

   DMA_BIDIRECTIONAL requires special handling: it means that the driver
   isn't sure if the memory was modified before being handed off to the
   device and also isn't sure if the device will also modify it. Thus,
   you must always sync bidirectional memory twice: once before the
   memory is handed off to the device (to make sure all memory changes
   are flushed from the processor) and once before the data may be
   accessed after being used by the device (to make sure any processor
   cache lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped. All the parameters passed in
must be identical to those passed in (and returned) by the mapping
API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages. All the notes and warnings
for the other mapping APIs apply here. Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources. All the notes and
warnings for the other mapping APIs apply here. The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping. A driver can check for these errors by testing
the returned DMA address with dma_mapping_error(). A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).

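Tying mapping, error checking and unmapping together, a sketch of a transmit
path; ``buf`` and ``len`` are illustrative::

    /* Illustrative: map a kmalloc'ed buffer for a device read,
     * check the result, and unmap once the DMA has completed. */
    dma_addr_t dma;

    dma = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma))
        return -ENOMEM;    /* back off or retry later */

    /* ... tell the device to read <len> bytes from <dma> and wait
     * for it to signal completion ... */

    dma_unmap_single(dev, dma, len, DMA_TO_DEVICE);
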
::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail. When it
does, 0 is returned and a driver must take appropriate action. It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to. On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list. All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device. With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API. With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

   You must do this:

   - Before reading values that have been written by DMA from the device
     (use the DMA_FROM_DEVICE direction)
   - After writing values that will be written to the device using DMA
     (use the DMA_TO_DEVICE direction)
   - Before *and* after handing memory to the device if the memory is
     DMA_BIDIRECTIONAL

See also dma_map_single().

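For example, a driver that keeps one receive buffer mapped DMA_FROM_DEVICE
and reuses it across transfers might do (names illustrative)::

    /* Illustrative: a long-lived DMA_FROM_DEVICE mapping handed back
     * and forth between CPU and device. */
    dma_sync_single_for_cpu(dev, rx_dma, rx_len, DMA_FROM_DEVICE);
    /* ... the CPU may now read the received data ... */

    dma_sync_single_for_device(dev, rx_dma, rx_len, DMA_FROM_DEVICE);
    /* ... the device may now DMA into the buffer again ... */
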
::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in
Documentation/core-api/dma-attributes.rst.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix. As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/core-api/dma-attributes.rst */
    ...

    unsigned long attr = 0;
    attr |= DMA_ATTR_FOO;
    ....
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attr);
    ....

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, dma_addr_t dma_addr,
                                 size_t size, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
        ....
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ....
    }


Part II - Non-coherent DMA allocations
--------------------------------------

These APIs allow allocating pages that are guaranteed to be DMA addressable
by the passed in device, but which need explicit management of memory
ownership for the kernel vs the device.

If you don't understand how cache line coherency works between a processor and
an I/O device, you should not be using this part of the API.

::

    struct page *
    dma_alloc_pages(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    enum dma_data_direction dir, gfp_t gfp)

This routine allocates a region of <size> bytes of non-coherent memory. It
returns a pointer to the first struct page for the region, or NULL if the
allocation failed. The resulting struct page can be used for everything a
struct page is suitable for.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

Before giving the memory to the device, dma_sync_single_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_single_for_cpu(), just like for streaming DMA mappings that are
reused.

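A sketch of these ownership handoffs; the size and direction are illustrative,
and dma_free_pages() is described next::

    /* Illustrative: allocate non-coherent pages and observe the
     * explicit ownership transfers around each device access. */
    struct page *page;
    dma_addr_t dma;

    page = dma_alloc_pages(dev, SZ_64K, &dma, DMA_BIDIRECTIONAL,
                           GFP_KERNEL);
    if (!page)
        return -ENOMEM;

    dma_sync_single_for_device(dev, dma, SZ_64K, DMA_BIDIRECTIONAL);
    /* ... device DMA happens here ... */
    dma_sync_single_for_cpu(dev, dma, SZ_64K, DMA_BIDIRECTIONAL);

    dma_free_pages(dev, SZ_64K, page, dma, DMA_BIDIRECTIONAL);
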
::

    void
    dma_free_pages(struct device *dev, size_t size, struct page *page,
                   dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_pages().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_pages(). page must be the pointer returned by dma_alloc_pages().

::

    int
    dma_mmap_pages(struct device *dev, struct vm_area_struct *vma,
                   size_t size, struct page *page)

Map an allocation returned from dma_alloc_pages() into a user address space.
dev and size must be the same as those passed into dma_alloc_pages().
page must be the pointer returned by dma_alloc_pages().

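This is typically called from a driver's mmap file operation; a minimal
sketch, assuming a hypothetical ``struct my_priv`` that records the
allocation::

    /* Illustrative mmap handler exposing dma_alloc_pages() memory;
     * my_priv and its fields are assumptions for this sketch. */
    static int my_mmap(struct file *file, struct vm_area_struct *vma)
    {
        struct my_priv *priv = file->private_data;

        return dma_mmap_pages(priv->dev, vma, priv->size, priv->page);
    }
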
::

    void *
    dma_alloc_noncoherent(struct device *dev, size_t size,
                          dma_addr_t *dma_handle, enum dma_data_direction dir,
                          gfp_t gfp)

This routine is a convenient wrapper around dma_alloc_pages() that returns the
kernel virtual address for the allocated memory instead of the page structure.

::

    void
    dma_free_noncoherent(struct device *dev, size_t size, void *cpu_addr,
                         dma_addr_t dma_handle, enum dma_data_direction dir)

Free a region of memory previously allocated using dma_alloc_noncoherent().
dev, size, dma_handle and dir must all be the same as those passed into
dma_alloc_noncoherent(). cpu_addr must be the virtual address returned by
dma_alloc_noncoherent().

::

    struct sg_table *
    dma_alloc_noncontiguous(struct device *dev, size_t size,
                            enum dma_data_direction dir, gfp_t gfp,
                            unsigned long attrs);

This routine allocates <size> bytes of non-coherent and possibly non-contiguous
memory. It returns a pointer to a struct sg_table that describes the allocated
and DMA mapped memory, or NULL if the allocation failed. The resulting memory
can be used for everything that struct pages mapped into a scatterlist are
suitable for.

The returned sg_table is guaranteed to have one single DMA mapped segment as
indicated by sgt->nents, but it might have multiple CPU side segments as
indicated by sgt->orig_nents.

The dir parameter specifies whether data is read and/or written by the device,
see dma_map_single() for details.

The gfp parameter allows the caller to specify the ``GFP_`` flags (see
kmalloc()) for the allocation, but rejects flags used to specify a memory
zone such as GFP_DMA or GFP_HIGHMEM.

The attrs argument must be either 0 or DMA_ATTR_ALLOC_SINGLE_PAGES.

Before giving the memory to the device, dma_sync_sgtable_for_device() needs
to be called, and before reading memory written by the device,
dma_sync_sgtable_for_cpu(), just like for streaming DMA mappings that are
reused.

::

    void
    dma_free_noncontiguous(struct device *dev, size_t size,
                           struct sg_table *sgt,
                           enum dma_data_direction dir)

Free memory previously allocated using dma_alloc_noncontiguous(). dev, size,
and dir must all be the same as those passed into dma_alloc_noncontiguous().
sgt must be the pointer returned by dma_alloc_noncontiguous().

::

    void *
    dma_vmap_noncontiguous(struct device *dev, size_t size,
                           struct sg_table *sgt)

Return a contiguous kernel mapping for an allocation returned from
dma_alloc_noncontiguous(). dev and size must be the same as those passed into
dma_alloc_noncontiguous(). sgt must be the pointer returned by
dma_alloc_noncontiguous().

Once a non-contiguous allocation is mapped using this function, the
flush_kernel_vmap_range() and invalidate_kernel_vmap_range() APIs must be used
to manage the coherency between the kernel mapping, the device and user space
mappings (if any).

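A sketch of the whole lifecycle, including the vunmap/free helpers described
below; the size, direction and error handling are illustrative::

    /* Illustrative: allocate possibly non-contiguous DMA memory,
     * map it into the kernel, use it, then tear everything down. */
    struct sg_table *sgt;
    void *vaddr;

    sgt = dma_alloc_noncontiguous(dev, SZ_1M, DMA_FROM_DEVICE,
                                  GFP_KERNEL, 0);
    if (!sgt)
        return -ENOMEM;

    vaddr = dma_vmap_noncontiguous(dev, SZ_1M, sgt);
    if (!vaddr)
        goto free;

    dma_sync_sgtable_for_device(dev, sgt, DMA_FROM_DEVICE);
    /* ... device writes into the buffer ... */
    dma_sync_sgtable_for_cpu(dev, sgt, DMA_FROM_DEVICE);
    /* ... CPU reads via vaddr ... */

    dma_vunmap_noncontiguous(dev, vaddr);
free:
    dma_free_noncontiguous(dev, SZ_1M, sgt, DMA_FROM_DEVICE);
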
::

    void
    dma_vunmap_noncontiguous(struct device *dev, void *vaddr)

Unmap a kernel mapping returned by dma_vmap_noncontiguous(). dev must be the
same as the one passed into dma_alloc_noncontiguous(). vaddr must be the
pointer returned by dma_vmap_noncontiguous().


::

    int
    dma_mmap_noncontiguous(struct device *dev, struct vm_area_struct *vma,
                           size_t size, struct sg_table *sgt)

Map an allocation returned from dma_alloc_noncontiguous() into a user address
space. dev and size must be the same as those passed into
dma_alloc_noncontiguous(). sgt must be the pointer returned by
dma_alloc_noncontiguous().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment. This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

   This API may return a number *larger* than the actual cache
   line, but it will guarantee that one or more cache lines fit exactly
   into the width returned by this call. It will also always be a power
   of two for easy alignment.

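For example, a driver packing separately-synced objects into one buffer might
round each object up to this value; ``struct my_desc`` is hypothetical, and
ALIGN() is the kernel's power-of-two rounding macro::

    /* Illustrative: keep separately synced objects on their own
     * cache lines by rounding the object size up. */
    int align = dma_get_cache_alignment();
    size_t slot = ALIGN(sizeof(struct my_desc), align);
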

Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints. DMA addresses must be
released with the corresponding function and with the same size, for example.
With the advent of hardware IOMMUs it becomes more and more important that
drivers do not violate those constraints. In the worst case such a violation
can result in data corruption up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations. If your architecture supports it you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration. Enabling this
option has a performance impact. Do not enable it in production kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device. If this
code detects an error it prints a warning message with some details into your
kernel log. An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
        check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
        function [device address=0x00000000640444be] [size=66 bytes] [mapped as
        single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G W 2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ> [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message. All other
errors will only be silently counted. This limitation exists to prevent the
code from flooding your kernel log. To support debugging a device driver this
can be disabled via debugfs. See the debugfs interface documentation below
for details.

The debugfs directory for the DMA-API debugging code is called dma-api/. In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value. If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log. Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled. This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/dump                    This read-only file contains current DMA
                                mappings.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops. This number is initialized to
                                one at system boot and can be set by writing
                                into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen. If this value goes
                                down to zero the code will attempt to increase
                                nr_total_entries to compensate.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/nr_total_entries        The total number of dma_debug_entries in the
                                allocator, both free and used.

dma-api/driver_filter           You can write a name of a driver into this file
                                to limit the debug output to requests from that
                                particular driver. Write an empty string to
                                that file to disable the filter and see
                                all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter. This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime. You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter. This will enable the
driver filter at boot time. The debug code will only print errors for that
driver afterwards. This filter can be disabled or changed later using debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries and was unable to allocate more on-demand. 65536
entries are preallocated at boot - if this is too low for you, boot with
'dma_debug_entries=<your_desired_number>' to overwrite the default. Note
that the code allocates entries in batches, so the exact number of
preallocated entries may be greater than the actual number requested. The
code will print to the kernel log each time it has dynamically allocated
as many entries as were initially preallocated. This is to indicate that a
larger preallocation size may be appropriate, or if it happens continually
that a driver may be leaking mappings.

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface provides debug_dma_mapping_error() to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces. This interface clears a flag
set by debug_dma_map_page() to indicate that dma_mapping_error() has been
called by the driver. When the driver does the unmap, debug_dma_unmap()
checks the flag and, if it is still set, prints a warning message that
includes the call trace leading up to the unmap. This interface can be
called from dma_mapping_error() routines to enable DMA mapping error check
debugging.