=========================
Dynamic DMA mapping Guide
=========================

:Author: David S. Miller <davem@redhat.com>
:Author: Richard Henderson <rth@cygnus.com>
:Author: Jakub Jelinek <jakub@redhat.com>

This is a guide to device driver writers on how to use the DMA API
with example pseudo-code. For a concise description of the API, see
DMA-API.txt.

CPU and DMA addresses
=====================

There are several kinds of addresses involved in the DMA API, and it's
important to understand the differences.

The kernel normally uses virtual addresses. Any address returned by
kmalloc(), vmalloc(), and similar interfaces is a virtual address and can
be stored in a ``void *``.

The virtual memory system (TLB, page tables, etc.) translates virtual
addresses to CPU physical addresses, which are stored as "phys_addr_t" or
"resource_size_t". The kernel manages device resources like registers as
physical addresses. These are the addresses in /proc/iomem. The physical
address is not directly useful to a driver; it must use ioremap() to map
the space and produce a virtual address.

I/O devices use a third kind of address: a "bus address". If a device has
registers at an MMIO address, or if it performs DMA to read or write system
memory, the addresses used by the device are bus addresses. In some
systems, bus addresses are identical to CPU physical addresses, but in
general they are not. IOMMUs and host bridges can produce arbitrary
mappings between physical and bus addresses.

From a device's point of view, DMA uses the bus address space, but it may
be restricted to a subset of that space. For example, even if a system
supports 64-bit addresses for main memory and PCI BARs, it may use an IOMMU
so devices only need to use 32-bit DMA addresses.

Here's a picture and some examples::

               CPU                  CPU                  Bus
             Virtual              Physical             Address
             Address              Address               Space
              Space                Space

            +-------+             +------+             +------+
            |       |             |MMIO  |   Offset    |      |
            |       |  Virtual    |Space |   applied   |      |
          C +-------+ --------> B +------+ ----------> +------+ A
            |       |  mapping    |      |   by host   |      |
  +-----+   |       |             |      |   bridge    |      |   +--------+
  |     |   |       |             +------+             |      |   |        |
  | CPU |   |       |             | RAM  |             |      |   | Device |
  |     |   |       |             |      |             |      |   |        |
  +-----+   +-------+             +------+             +------+   +--------+
            |       |  Virtual    |Buffer|   Mapping   |      |
          X +-------+ --------> Y +------+ <---------- +------+ Z
            |       |  mapping    | RAM  |   by IOMMU
            |       |             |      |
            |       |             |      |
            +-------+             +------+

During the enumeration process, the kernel learns about I/O devices and
their MMIO space and the host bridges that connect them to the system. For
example, if a PCI device has a BAR, the kernel reads the bus address (A)
from the BAR and converts it to a CPU physical address (B). The address B
is stored in a struct resource and usually exposed via /proc/iomem. When a
driver claims a device, it typically uses ioremap() to map physical address
B at a virtual address (C). It can then use, e.g., ioread32(C), to access
the device registers at bus address A.

If the device supports DMA, the driver sets up a buffer using kmalloc() or
a similar interface, which returns a virtual address (X). The virtual
memory system maps X to a physical address (Y) in system RAM. The driver
can use virtual address X to access the buffer, but the device itself
cannot because DMA doesn't go through the CPU virtual memory system.

In some simple systems, the device can do DMA directly to physical address
Y. But in many others, there is IOMMU hardware that translates DMA
addresses to physical addresses, e.g., it translates Z to Y. This is part
of the reason for the DMA API: the driver can give a virtual address X to
an interface like dma_map_single(), which sets up any required IOMMU
mapping and returns the DMA address Z. The driver then tells the device to
do DMA to Z, and the IOMMU maps it to the buffer at address Y in system
RAM.

For Linux to use dynamic DMA mapping, it needs some help from the
drivers: DMA addresses should be mapped only for the time they are
actually used, and unmapped after the DMA transfer completes.

Of course, the following API will work even on platforms where no such
hardware exists.

Note that the DMA API works with any bus independent of the underlying
microprocessor architecture. You should use the DMA API rather than the
bus-specific DMA API, i.e., use the dma_map_*() interfaces rather than the
pci_map_*() interfaces.

First of all, you should make sure::

    #include <linux/dma-mapping.h>

is in your driver, which provides the definition of dma_addr_t. This type
can hold any valid DMA address for the platform and should be used
everywhere you hold a DMA address returned from the DMA mapping functions.
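
For example, a driver might keep the CPU and DMA addresses of one buffer
side by side; a minimal sketch (the struct and field names are made up)::

    struct mydev_buf {
        void *cpu_addr;         /* for CPU access, e.g. from kmalloc() */
        dma_addr_t dma_addr;    /* for the device, from the mapping functions */
        size_t len;             /* length of the buffer in bytes */
    };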

What memory is DMA'able?
========================

The first piece of information you must know is what kernel memory can
be used with the DMA mapping facilities. There has been an unwritten
set of rules regarding this, and this text is an attempt to finally
write them down.

If you acquired your memory via the page allocator
(i.e. __get_free_page*()) or the generic memory allocators
(i.e. kmalloc() or kmem_cache_alloc()) then you may DMA to/from
that memory using the addresses returned from those routines.

This means specifically that you may _not_ use the memory/addresses
returned from vmalloc() for DMA. It is possible to DMA to the
_underlying_ memory mapped into a vmalloc() area, but this requires
walking page tables to get the physical addresses, and then
translating each of those pages back to a kernel address using
something like __va(). [ EDIT: Update this when we integrate
Gerd Knorr's generic code which does this. ]
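
A per-page sketch of that idea, where vmalloc_buf, i, dev, and direction
are assumed driver context and the vmalloc() area is page-aligned::

    /* Map the i-th page of a page-aligned vmalloc() area for DMA. */
    void *vaddr = vmalloc_buf + (i * PAGE_SIZE);
    struct page *page = vmalloc_to_page(vaddr);
    dma_addr_t dma_handle;

    dma_handle = dma_map_page(dev, page, 0, PAGE_SIZE, direction);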

This rule also means that you may use neither kernel image addresses
(items in data/text/bss segments), nor module image addresses, nor
stack addresses for DMA. These could all be mapped somewhere entirely
different than the rest of physical memory. Even if those classes of
memory could physically work with DMA, you'd need to ensure the I/O
buffers were cacheline-aligned. Without that, you'd see cacheline
sharing problems (data corruption) on CPUs with DMA-incoherent caches.
(The CPU could write to one word, DMA would write to a different one
in the same cache line, and one of them could be overwritten.)

Also, this means that you cannot take the return of a kmap()
call and DMA to/from that. This is similar to vmalloc().

What about block I/O and networking buffers? The block I/O and
networking subsystems make sure that the buffers they use are valid
for you to DMA from/to.

DMA addressing capabilities
===========================

By default, the kernel assumes that your device can address 32 bits of DMA
address space. For a 64-bit capable device, this needs to be increased, and
for a device with limitations, it needs to be decreased.

Special note about PCI: the PCI-X specification requires PCI-X devices to
support 64-bit addressing (DAC) for all transactions. And at least one
platform (SGI SN2) requires 64-bit consistent allocations to operate
correctly when the IO bus is in PCI-X mode.

For correct operation, you must set the DMA mask to inform the kernel about
your device's DMA addressing capabilities.

This is performed via a call to dma_set_mask_and_coherent()::

    int dma_set_mask_and_coherent(struct device *dev, u64 mask);

which will set the mask for both streaming and coherent APIs together. If you
have some special requirements, then the following two separate calls can be
used instead:

    The setup for streaming mappings is performed via a call to
    dma_set_mask()::

        int dma_set_mask(struct device *dev, u64 mask);

    The setup for consistent allocations is performed via a call
    to dma_set_coherent_mask()::

        int dma_set_coherent_mask(struct device *dev, u64 mask);

Here, dev is a pointer to the device struct of your device, and mask is a bit
mask describing which bits of an address your device supports. Often the
device struct of your device is embedded in the bus-specific device struct of
your device. For example, &pdev->dev is a pointer to the device struct of a
PCI device (pdev is a pointer to the PCI device struct of your device).

These calls usually return zero to indicate that your device can perform DMA
properly on the machine given the address mask you provided, but they might
return an error if the mask is too small to be supportable on the given
system. If they return non-zero, your device cannot perform DMA properly on
this platform, and attempting to do so will result in undefined behavior.
You must not use DMA on this device unless the dma_set_mask family of
functions has returned success.

This means that in the failure case, you have two options:

1) Use some non-DMA mode for data transfer, if possible.
2) Ignore this device and do not initialize it.

It is recommended that your driver print a kernel KERN_WARNING message when
setting the DMA mask fails. In this manner, if a user of your driver reports
that performance is bad or that the device is not even detected, you can ask
them for the kernel messages to find out exactly why.

The standard 64-bit addressing device would do something like this::

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64))) {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }

If the device only supports 32-bit addressing for descriptors in the
coherent allocations, but supports full 64-bits for streaming mappings,
it would look like this::

    if (dma_set_mask(dev, DMA_BIT_MASK(64))) {
        dev_warn(dev, "mydev: No suitable DMA available\n");
        goto ignore_this_device;
    }

Setting the coherent mask to the same or a smaller value than the
streaming mask will always succeed. However, for the rare case that a
device driver only uses consistent allocations, one would have to check
the return value from dma_set_coherent_mask().
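
For instance, a driver that only ever uses coherent allocations might do
something like the following sketch (the probe function and error style
are hypothetical)::

    static int mydev_probe(struct device *dev)
    {
        if (dma_set_coherent_mask(dev, DMA_BIT_MASK(32))) {
            dev_warn(dev, "mydev: no suitable coherent DMA available\n");
            return -EIO;
        }

        /* From here on, dma_alloc_coherent() may be used safely. */
        return 0;
    }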

Finally, if your device can only drive the low 24-bits of
address you might do something like::

    if (dma_set_mask(dev, DMA_BIT_MASK(24))) {
        dev_warn(dev, "mydev: 24-bit DMA addressing not available\n");
        goto ignore_this_device;
    }

When dma_set_mask() or dma_set_mask_and_coherent() is successful, and
returns zero, the kernel saves away this mask you have provided. The
kernel will use this information later when you make DMA mappings.

There is a case which we are aware of at this time, which is worth
mentioning in this documentation. If your device supports multiple
functions (for example a sound card provides playback and record
functions) and the various different functions have _different_
DMA addressing limitations, you may wish to probe each mask and
only provide the functionality which the machine can handle. It
is important that the last call to dma_set_mask() be for the
most specific mask.

Here is pseudo-code showing how this might be done::

    #define PLAYBACK_ADDRESS_BITS   DMA_BIT_MASK(32)
    #define RECORD_ADDRESS_BITS     DMA_BIT_MASK(24)

    struct my_sound_card *card;
    struct device *dev;

    ...
    if (!dma_set_mask(dev, PLAYBACK_ADDRESS_BITS)) {
        card->playback_enabled = 1;
    } else {
        card->playback_enabled = 0;
        dev_warn(dev, "%s: Playback disabled due to DMA limitations\n",
                 card->name);
    }
    if (!dma_set_mask(dev, RECORD_ADDRESS_BITS)) {
        card->record_enabled = 1;
    } else {
        card->record_enabled = 0;
        dev_warn(dev, "%s: Record disabled due to DMA limitations\n",
                 card->name);
    }

A sound card was used as an example here because this genre of PCI
devices seems to be littered with ISA chips given a PCI front end,
and thus retaining the 16MB DMA addressing limitations of ISA.

Types of DMA mappings
=====================

There are two types of DMA mappings:

- Consistent DMA mappings which are usually mapped at driver
  initialization, unmapped at the end and for which the hardware should
  guarantee that the device and the CPU can access the data
  in parallel and will see updates made by each other without any
  explicit software flushing.

  Think of "consistent" as "synchronous" or "coherent".

  The current default is to return consistent memory in the low 32
  bits of the DMA space. However, for future compatibility you should
  set the consistent mask even if this default is fine for your
  driver.

  Good examples of what to use consistent mappings for are:

  - Network card DMA ring descriptors.
  - SCSI adapter mailbox command data structures.
  - Device firmware microcode executed out of
    main memory.

  The invariant these examples all require is that any CPU store
  to memory is immediately visible to the device, and vice
  versa. Consistent mappings guarantee this.

  .. important::

         Consistent DMA memory does not preclude the usage of
         proper memory barriers. The CPU may reorder stores to
         consistent memory just as it may reorder stores to
         normal memory. Example: if it is important for the device
         to see the first word of a descriptor updated before the
         second, you must do something like::

            desc->word0 = address;
            wmb();
            desc->word1 = DESC_VALID;

         in order to get correct behavior on all platforms.

  Also, on some platforms your driver may need to flush CPU write
  buffers in much the same way as it needs to flush write buffers
  found in PCI bridges (such as by reading a register's value
  after writing it); a sketch of that idiom follows this list.

- Streaming DMA mappings which are usually mapped for one DMA
  transfer, unmapped right after it (unless you use dma_sync_* below)
  and for which hardware can optimize for sequential accesses.

  Think of "streaming" as "asynchronous" or "outside the coherency
  domain".

  Good examples of what to use streaming mappings for are:

  - Networking buffers transmitted/received by a device.
  - Filesystem buffers written/read by a SCSI device.

  The interfaces for using this type of mapping were designed in
  such a way that an implementation can make whatever performance
  optimizations the hardware allows. To this end, when using
  such mappings you must be explicit about what you want to happen.
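
As promised in the consistent-mappings item above, here is a sketch of
the read-back flush idiom; MYDEV_KICK, MYDEV_DOORBELL, and regs (obtained
from ioremap()) are hypothetical::

    iowrite32(MYDEV_KICK, regs + MYDEV_DOORBELL);
    /* The read-back forces the posted write out to the device
     * before execution continues past this point.
     */
    (void) ioread32(regs + MYDEV_DOORBELL);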

Neither type of DMA mapping has alignment restrictions that come from
the underlying bus, although some devices may have such restrictions.
Also, systems with caches that aren't DMA-coherent will work better
when the underlying buffers don't share cache lines with other data.


Using Consistent DMA mappings
=============================

To allocate and map large (PAGE_SIZE or so) consistent DMA regions,
you should do::

    dma_addr_t dma_handle;

    cpu_addr = dma_alloc_coherent(dev, size, &dma_handle, gfp);

where dev is a ``struct device *``. This may be called in interrupt
context with the GFP_ATOMIC flag.

Size is the length of the region you want to allocate, in bytes.

This routine will allocate RAM for that region, so it acts similarly to
__get_free_pages() (but takes size instead of a page order). If your
driver needs regions sized smaller than a page, you may prefer using
the dma_pool interface, described below.

The consistent DMA mapping interfaces will by default return a DMA address
which is 32-bit addressable. Even if the device indicates (via the DMA mask)
that it may address the upper 32-bits, consistent allocation will only
return > 32-bit addresses for DMA if the consistent DMA mask has been
explicitly changed via dma_set_coherent_mask(). This is true of the
dma_pool interface as well.

dma_alloc_coherent() returns two values: the virtual address which you
can use to access it from the CPU and dma_handle which you pass to the
card.

The CPU virtual address and the DMA address are both
guaranteed to be aligned to the smallest PAGE_SIZE order which
is greater than or equal to the requested size. This invariant
exists (for example) to guarantee that if you allocate a chunk
which is smaller than or equal to 64 kilobytes, the extent of the
buffer you receive will not cross a 64K boundary.

To unmap and free such a DMA region, you call::

    dma_free_coherent(dev, size, cpu_addr, dma_handle);

where dev, size are the same as in the above call and cpu_addr and
dma_handle are the values dma_alloc_coherent() returned to you.
This function may not be called in interrupt context.

If your driver needs lots of smaller memory regions, you can write
custom code to subdivide pages returned by dma_alloc_coherent(),
or you can use the dma_pool API to do that. A dma_pool is like
a kmem_cache, but it uses dma_alloc_coherent(), not __get_free_pages().
Also, it understands common hardware constraints for alignment,
like queue heads needing to be aligned on N byte boundaries.

Create a dma_pool like this::

    struct dma_pool *pool;

    pool = dma_pool_create(name, dev, size, align, boundary);

The "name" is for diagnostics (like a kmem_cache name); dev and size
are as above. The device's hardware alignment requirement for this
type of data is "align" (which is expressed in bytes, and must be a
power of two). If your device has no boundary crossing restrictions,
pass 0 for boundary; passing 4096 says memory allocated from this pool
must not cross 4KByte boundaries (but at that time it may be better to
use dma_alloc_coherent() directly instead).

Allocate memory from a DMA pool like this::

    cpu_addr = dma_pool_alloc(pool, flags, &dma_handle);

flags are GFP_KERNEL if blocking is permitted (not in_interrupt nor
holding SMP locks), GFP_ATOMIC otherwise. Like dma_alloc_coherent(),
this returns two values, cpu_addr and dma_handle.

Free memory that was allocated from a dma_pool like this::

    dma_pool_free(pool, cpu_addr, dma_handle);

where pool is what you passed to dma_pool_alloc(), and cpu_addr and
dma_handle are the values dma_pool_alloc() returned. This function
may be called in interrupt context.

Destroy a dma_pool by calling::

    dma_pool_destroy(pool);

Make sure you've called dma_pool_free() for all memory allocated
from a pool before you destroy the pool. This function may not
be called in interrupt context.
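
Putting the pool calls together, here is a minimal lifecycle sketch; the
pool name, block size, and alignment are made-up values::

    #include <linux/dmapool.h>

    struct dma_pool *pool;
    void *cpu_addr;
    dma_addr_t dma_handle;

    /* A pool of 64-byte blocks, each aligned to 16 bytes, with no
     * boundary-crossing restriction.
     */
    pool = dma_pool_create("mydev_desc", dev, 64, 16, 0);
    if (!pool)
        return -ENOMEM;

    cpu_addr = dma_pool_alloc(pool, GFP_KERNEL, &dma_handle);
    if (!cpu_addr) {
        dma_pool_destroy(pool);
        return -ENOMEM;
    }

    /* ... program dma_handle into the device, access cpu_addr ... */

    dma_pool_free(pool, cpu_addr, dma_handle);
    dma_pool_destroy(pool);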

DMA Direction
=============

The interfaces described in subsequent portions of this document
take a DMA direction argument, which is an integer and takes on
one of the following values::

    DMA_BIDIRECTIONAL
    DMA_TO_DEVICE
    DMA_FROM_DEVICE
    DMA_NONE

You should provide the exact DMA direction if you know it.

DMA_TO_DEVICE means "from main memory to the device".
DMA_FROM_DEVICE means "from the device to main memory".
It is the direction in which the data moves during the DMA
transfer.

You are _strongly_ encouraged to specify this as precisely
as you possibly can.

If you absolutely cannot know the direction of the DMA transfer,
specify DMA_BIDIRECTIONAL. It means that the DMA can go in
either direction. The platform guarantees that you may legally
specify this, and that it will work, but this may be at the
cost of performance for example.

The value DMA_NONE is to be used for debugging. You can
hold this in a data structure before you know the
precise direction, and this will help catch cases where your
direction tracking logic has failed to set things up properly.

Another advantage of specifying this value precisely (beyond
potential platform-specific optimizations) is for debugging.
Some platforms actually have a write permission boolean which DMA
mappings can be marked with, much like page protections in the user
program address space. Such platforms can and do report errors in the
kernel logs when the DMA controller hardware detects violation of the
permission setting.

Only streaming mappings specify a direction; consistent mappings
implicitly have a direction attribute setting of
DMA_BIDIRECTIONAL.

The SCSI subsystem tells you the direction to use in the
'sc_data_direction' member of the SCSI command your driver is
working on.

For Networking drivers, it's a rather simple affair. For transmit
packets, map/unmap them with the DMA_TO_DEVICE direction
specifier. For receive packets, just the opposite, map/unmap them
with the DMA_FROM_DEVICE direction specifier.
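
For example (a sketch only; skb and the rx_* names are assumed driver
context)::

    /* Transmit: the CPU wrote the data, the device will read it. */
    tx_dma = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);

    /* Receive: the device will write the data, the CPU will read it. */
    rx_dma = dma_map_single(dev, rx_buf, rx_len, DMA_FROM_DEVICE);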

Using Streaming DMA mappings
============================

The streaming DMA mapping routines can be called from interrupt
context. There are two versions of each map/unmap, one which will
map/unmap a single memory region, and one which will map/unmap a
scatterlist.

To map a single region, you do::

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    void *addr = buffer->ptr;
    size_t size = buffer->len;

    dma_handle = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }

and to unmap it::

    dma_unmap_single(dev, dma_handle, size, direction);

You should call dma_mapping_error() as dma_map_single() could fail and return
an error. Doing so will ensure that the mapping code will work correctly on
all DMA implementations without any dependency on the specifics of the
underlying implementation. Using the returned address without checking for
errors could result in failures ranging from panics to silent data
corruption. The same applies to dma_map_page() as well.

You should call dma_unmap_single() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

Using CPU pointers like this for single mappings has a disadvantage:
you cannot reference HIGHMEM memory in this way. Thus, there is a
map/unmap interface pair akin to dma_{map,unmap}_single(). These
interfaces deal with page/offset pairs instead of CPU pointers.
Specifically::

    struct device *dev = &my_dev->dev;
    dma_addr_t dma_handle;
    struct page *page = buffer->page;
    unsigned long offset = buffer->offset;
    size_t size = buffer->len;

    dma_handle = dma_map_page(dev, page, offset, size, direction);
    if (dma_mapping_error(dev, dma_handle)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling;
    }

    ...

    dma_unmap_page(dev, dma_handle, size, direction);

Here, "offset" means byte offset within the given page.

You should call dma_mapping_error() as dma_map_page() could fail and return
an error, as outlined under the dma_map_single() discussion.

You should call dma_unmap_page() when the DMA activity is finished, e.g.,
from the interrupt which told you that the DMA transfer is done.

With scatterlists, you map a region gathered from several regions by::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. if DMA mapping is done with PAGE_SIZE granularity, any
consecutive sglist entries can be merged into one provided the first one
ends and the second one starts on a page boundary - in fact this is a huge
advantage for cards which either cannot do scatter-gather or have a very
limited number of scatter-gather entries) and returns the actual number
of sg entries it mapped them to. On failure 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

To unmap a scatterlist, just call::

    dma_unmap_sg(dev, sglist, nents, direction);

Again, make sure DMA activity has already finished.

.. note::

    The 'nents' argument to the dma_unmap_sg call must be
    the _same_ one you passed into the dma_map_sg call;
    it should _NOT_ be the 'count' value _returned_ from the
    dma_map_sg call.
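
A short sketch of that rule, assuming a scatterlist that was filled in
elsewhere::

    /* 'nents' entries go in; 'count' (possibly fewer) come out. */
    int count = dma_map_sg(dev, sglist, nents, DMA_TO_DEVICE);
    if (count == 0)
        goto map_error_handling;

    /* ... program the device using 'count' mapped entries ... */

    /* Unmap with the original 'nents', never with 'count'. */
    dma_unmap_sg(dev, sglist, nents, DMA_TO_DEVICE);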

Every dma_map_{single,sg}() call should have its dma_unmap_{single,sg}()
counterpart, because the DMA address space is a shared resource and
you could render the machine unusable by consuming all DMA addresses.

If you need to use the same streaming DMA region multiple times and touch
the data in between the DMA transfers, the buffer needs to be synced
properly in order for the CPU and device to see the most up-to-date and
correct copy of the DMA buffer.

So, firstly, just map it with dma_map_{single,sg}(), and after each DMA
transfer call either::

    dma_sync_single_for_cpu(dev, dma_handle, size, direction);

or::

    dma_sync_sg_for_cpu(dev, sglist, nents, direction);

as appropriate.

Then, if you wish to let the device get at the DMA area again,
finish accessing the data with the CPU, and then before actually
giving the buffer to the hardware call either::

    dma_sync_single_for_device(dev, dma_handle, size, direction);

or::

    dma_sync_sg_for_device(dev, sglist, nents, direction);

as appropriate.

.. note::

    The 'nents' argument to dma_sync_sg_for_cpu() and
    dma_sync_sg_for_device() must be the same as that passed to
    dma_map_sg(). It is _NOT_ the count returned by
    dma_map_sg().

After the last DMA transfer call one of the DMA unmap routines
dma_unmap_{single,sg}(). If you don't touch the data from the first
dma_map_*() call till dma_unmap_*(), then you don't have to call the
dma_sync_*() routines at all.

Here is pseudo-code which shows a situation in which you would need
to use the dma_sync_*() interfaces::

    my_card_setup_receive_buffer(struct my_card *cp, char *buffer, int len)
    {
        dma_addr_t mapping;

        mapping = dma_map_single(cp->dev, buffer, len, DMA_FROM_DEVICE);
        if (dma_mapping_error(cp->dev, mapping)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }

        cp->rx_buf = buffer;
        cp->rx_len = len;
        cp->rx_dma = mapping;

        give_rx_buf_to_card(cp);
    }

    ...

    my_card_interrupt_handler(int irq, void *devid, struct pt_regs *regs)
    {
        struct my_card *cp = devid;

        ...
        if (read_card_status(cp) == RX_BUF_TRANSFERRED) {
            struct my_card_header *hp;

            /* Examine the header to see if we wish
             * to accept the data. But synchronize
             * the DMA transfer with the CPU first
             * so that we see updated contents.
             */
            dma_sync_single_for_cpu(cp->dev, cp->rx_dma,
                                    cp->rx_len,
                                    DMA_FROM_DEVICE);

            /* Now it is safe to examine the buffer. */
            hp = (struct my_card_header *) cp->rx_buf;
            if (header_is_ok(hp)) {
                dma_unmap_single(cp->dev, cp->rx_dma, cp->rx_len,
                                 DMA_FROM_DEVICE);
                pass_to_upper_layers(cp->rx_buf);
                make_and_setup_new_rx_buf(cp);
            } else {
                /* CPU should not write to
                 * DMA_FROM_DEVICE-mapped area,
                 * so dma_sync_single_for_device() is
                 * not needed here. It would be required
                 * for DMA_BIDIRECTIONAL mapping if
                 * the memory was modified.
                 */
                give_rx_buf_to_card(cp);
            }
        }
    }

Drivers converted fully to this interface should not use virt_to_bus() any
longer, nor should they use bus_to_virt(). Some drivers have to be changed a
little bit, because there is no longer an equivalent to bus_to_virt() in the
dynamic DMA mapping scheme - you have to always store the DMA addresses
returned by the dma_alloc_coherent(), dma_pool_alloc(), and dma_map_single()
calls (dma_map_sg() stores them in the scatterlist itself if the platform
supports dynamic DMA mapping in hardware) in your driver structures and/or
in the card registers.

All drivers should be using these interfaces with no exceptions. It
is planned to completely remove virt_to_bus() and bus_to_virt() as
they are entirely deprecated. Some ports already do not provide these
as it is impossible to correctly support them.

Handling Errors
===============

DMA address space is limited on some architectures and an allocation
failure can be determined by:

- checking if dma_alloc_coherent() returns NULL or dma_map_sg returns 0

- checking the dma_addr_t returned from dma_map_single() and dma_map_page()
  by using dma_mapping_error()::

        dma_addr_t dma_handle;

        dma_handle = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_handle)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }

- unmap pages that are already mapped, when a mapping error occurs in the
  middle of a multiple-page mapping attempt. These examples are applicable
  to dma_map_page() as well.

Example 1::

    dma_addr_t dma_handle1;
    dma_addr_t dma_handle2;

    dma_handle1 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle1)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling1;
    }
    dma_handle2 = dma_map_single(dev, addr, size, direction);
    if (dma_mapping_error(dev, dma_handle2)) {
        /*
         * reduce current DMA mapping usage,
         * delay and try again later or
         * reset driver.
         */
        goto map_error_handling2;
    }

    ...

    map_error_handling2:
    dma_unmap_single(dev, dma_handle1, size, direction);
    map_error_handling1:

Example 2::

    /*
     * if buffers are allocated in a loop, unmap all mapped buffers when
     * a mapping error is detected in the middle
     */

    dma_addr_t dma_addr;
    dma_addr_t array[DMA_BUFFERS];
    int save_index = 0;

    for (i = 0; i < DMA_BUFFERS; i++) {

        ...

        dma_addr = dma_map_single(dev, addr, size, direction);
        if (dma_mapping_error(dev, dma_addr)) {
            /*
             * reduce current DMA mapping usage,
             * delay and try again later or
             * reset driver.
             */
            goto map_error_handling;
        }
        array[i] = dma_addr;
        save_index++;
    }

    ...

    map_error_handling:

    for (i = 0; i < save_index; i++) {

        ...

        dma_unmap_single(dev, array[i], size, direction);
    }

Networking drivers must call dev_kfree_skb() to free the socket buffer
and return NETDEV_TX_OK if the DMA mapping fails on the transmit hook
(ndo_start_xmit). This means that the socket buffer is just dropped in
the failure case.
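
A minimal sketch of that convention (the function name and the
device-programming details are hypothetical)::

    static netdev_tx_t mydev_start_xmit(struct sk_buff *skb,
                                        struct net_device *ndev)
    {
        struct device *dev = ndev->dev.parent;
        dma_addr_t mapping;

        mapping = dma_map_single(dev, skb->data, skb->len, DMA_TO_DEVICE);
        if (dma_mapping_error(dev, mapping)) {
            dev_kfree_skb(skb);     /* drop the packet... */
            return NETDEV_TX_OK;    /* ...but do not report an error */
        }

        /* ... hand 'mapping' to the hardware and start the transfer ... */
        return NETDEV_TX_OK;
    }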

SCSI drivers must return SCSI_MLQUEUE_HOST_BUSY if the DMA mapping
fails in the queuecommand hook. This means that the SCSI subsystem
passes the command to the driver again later.

Optimizing Unmap State Space Consumption
========================================

On many platforms, dma_unmap_{single,page}() is simply a nop.
Therefore, keeping track of the mapping address and length is a waste
of space. Instead of filling your drivers up with ifdefs and the like
to "work around" this (which would defeat the whole purpose of a
portable API) the following facilities are provided.

Actually, instead of describing the macros one by one, we'll
transform some example code.

1) Use DEFINE_DMA_UNMAP_{ADDR,LEN} in state saving structures.
   Example, before::

        struct ring_state {
            struct sk_buff *skb;
            dma_addr_t mapping;
            __u32 len;
        };

   after::

        struct ring_state {
            struct sk_buff *skb;
            DEFINE_DMA_UNMAP_ADDR(mapping);
            DEFINE_DMA_UNMAP_LEN(len);
        };

2) Use dma_unmap_{addr,len}_set() to set these values.
   Example, before::

        ringp->mapping = FOO;
        ringp->len = BAR;

   after::

        dma_unmap_addr_set(ringp, mapping, FOO);
        dma_unmap_len_set(ringp, len, BAR);

3) Use dma_unmap_{addr,len}() to access these values.
   Example, before::

        dma_unmap_single(dev, ringp->mapping, ringp->len,
                         DMA_FROM_DEVICE);

   after::

        dma_unmap_single(dev,
                         dma_unmap_addr(ringp, mapping),
                         dma_unmap_len(ringp, len),
                         DMA_FROM_DEVICE);

It really should be self-explanatory. We treat the ADDR and LEN
separately, because it is possible for an implementation to only
need the address in order to perform the unmap operation.

Platform Issues
===============

If you are just writing drivers for Linux and do not maintain
an architecture port for the kernel, you can safely skip down
to "Closing".

1) Struct scatterlist requirements.

   You need to enable CONFIG_NEED_SG_DMA_LENGTH if the architecture
   supports IOMMUs (including software IOMMU).

2) ARCH_DMA_MINALIGN

   Architectures must ensure that kmalloc'ed buffers are
   DMA-safe. Drivers and subsystems depend on it. If an architecture
   isn't fully DMA-coherent (i.e. hardware doesn't ensure that data in
   the CPU cache is identical to data in main memory),
   ARCH_DMA_MINALIGN must be set so that the memory allocator
   makes sure that kmalloc'ed buffers don't share a cache line with
   others. See arch/arm/include/asm/cache.h as an example.

   Note that ARCH_DMA_MINALIGN is about DMA memory alignment
   constraints. You don't need to worry about the architecture data
   alignment constraints (e.g. the alignment constraints about 64-bit
   objects).

Closing
=======

This document, and the API itself, would not be in its current
form without the feedback and suggestions from numerous individuals.
We would like to specifically mention, in no particular order, the
following people::

    Russell King <rmk@arm.linux.org.uk>
    Leo Dagum <dagum@barrel.engr.sgi.com>
    Ralf Baechle <ralf@oss.sgi.com>
    Grant Grundler <grundler@cup.hp.com>
    Jay Estabrook <Jay.Estabrook@compaq.com>
    Thomas Sailer <sailer@ife.ee.ethz.ch>
    Andrea Arcangeli <andrea@suse.de>
    Jens Axboe <jens.axboe@oracle.com>
    David Mosberger-Tang <davidm@hpl.hp.com>