.. SPDX-License-Identifier: GPL-2.0+

=======
IOMMUFD
=======

:Author: Jason Gunthorpe
:Author: Kevin Tian

Overview
========

IOMMUFD is the user API to control the IOMMU subsystem as it relates to managing
IO page tables from userspace using file descriptors. It intends to be general
and consumable by any driver that wants to expose DMA to userspace. These
drivers are eventually expected to deprecate any internal IOMMU logic
they may already/historically implement (e.g. vfio_iommu_type1.c).

At minimum iommufd provides universal support of managing I/O address spaces and
I/O page tables for all IOMMUs, with room in the design to add non-generic
features to cater to specific hardware functionality.

In this context the capital letter (IOMMUFD) refers to the subsystem while the
small letter (iommufd) refers to the file descriptors created via /dev/iommu for
use by userspace.

Key Concepts
============

User Visible Objects
--------------------

The following IOMMUFD objects are exposed to userspace:

- IOMMUFD_OBJ_IOAS, representing an I/O address space (IOAS), allowing map/unmap
  of user space memory into ranges of I/O Virtual Address (IOVA).

  The IOAS is a functional replacement for the VFIO container, and like the VFIO
  container it copies an IOVA map to a list of iommu_domains held within it.

- IOMMUFD_OBJ_DEVICE, representing a device that is bound to iommufd by an
  external driver.

- IOMMUFD_OBJ_HWPT_PAGING, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by the iommu driver. "PAGING"
  primarily indicates this type of HWPT should be linked to an IOAS. It also
  indicates that it is backed by an iommu_domain with the __IOMMU_DOMAIN_PAGING
  feature flag. This can be either an UNMANAGED stage-1 domain for a device
  running in the user space, or a nesting parent stage-2 domain for mappings
  from guest-level physical addresses to host-level physical addresses.

  The IOAS has a list of HWPT_PAGINGs that share the same IOVA mapping and
  it will synchronize its mapping with each member HWPT_PAGING.

- IOMMUFD_OBJ_HWPT_NESTED, representing an actual hardware I/O page table
  (i.e. a single struct iommu_domain) managed by user space (e.g. guest OS).
  "NESTED" indicates that this type of HWPT should be linked to an HWPT_PAGING.
  It also indicates that it is backed by an iommu_domain that has a type of
  IOMMU_DOMAIN_NESTED. This must be a stage-1 domain for a device running in
  the user space (e.g. in a guest VM enabling the IOMMU nested translation
  feature.) As such, it must be created with a given nesting parent stage-2
  domain to associate to. This nested stage-1 page table managed by the user
  space usually has mappings from guest-level I/O virtual addresses to guest-
  level physical addresses.

- IOMMUFD_OBJ_VIOMMU, representing a slice of the physical IOMMU instance,
  passed to or shared with a VM. It may contain some HW-accelerated
  virtualization features and some SW resources used by the VM. For example:

  * Security namespace for guest owned ID, e.g. guest-controlled cache tags
  * Non-device-affiliated event reporting, e.g. invalidation queue errors
  * Access to a sharable nesting parent pagetable across physical IOMMUs
  * Virtualization of various platform IDs, e.g. RIDs and others
  * Delivery of paravirtualized invalidation
  * Direct assigned invalidation queues
  * Direct assigned interrupts

  Such a vIOMMU object generally has access to a nesting parent pagetable
  to support some HW-accelerated virtualization features. So, a vIOMMU object
  must be created given a nesting parent HWPT_PAGING object, and then it would
  encapsulate that HWPT_PAGING object. Therefore, a vIOMMU object can be used
  to allocate an HWPT_NESTED object in place of the encapsulated HWPT_PAGING.

  .. note::

     The name "vIOMMU" isn't necessarily identical to a virtualized IOMMU in a
     VM. A VM can have one giant virtualized IOMMU running on a machine having
     multiple physical IOMMUs, in which case the VMM will dispatch the requests
     or configurations from this single virtualized IOMMU instance to multiple
     vIOMMU objects created for individual slices of different physical IOMMUs.
     In other words, a vIOMMU object is always a representation of one physical
     IOMMU, not necessarily of a virtualized IOMMU. For VMMs that want the full
     virtualization features from physical IOMMUs, it is suggested to build the
     same number of virtualized IOMMUs as the number of physical IOMMUs, so the
     passed-through devices would be connected to their own virtualized IOMMUs
     backed by corresponding vIOMMU objects, in which case a guest OS would do
     the "dispatch" naturally instead of VMM trapping.

- IOMMUFD_OBJ_VDEVICE, representing a virtual device for an IOMMUFD_OBJ_DEVICE
  against an IOMMUFD_OBJ_VIOMMU. This virtual device holds the device's virtual
  information or attributes (related to the vIOMMU) in a VM. An immediate vDATA
  example can be the virtual ID of the device on a vIOMMU, which is a unique ID
  that the VMM assigns to the device for a translation channel/port of the
  vIOMMU, e.g. vSID of ARM SMMUv3, vDeviceID of AMD IOMMU, and vRID of Intel
  VT-d to a Context Table. Some advanced security information, such as the
  security level or realm information in a Confidential Compute Architecture,
  could potentially be forwarded via this object too. A VMM should create a
  vDEVICE object to forward all the device information in a VM, when it
  connects a device to a vIOMMU, which is a separate ioctl call from attaching
  the same device to an HWPT_PAGING that the vIOMMU holds.

All user-visible objects are destroyed via the IOMMU_DESTROY uAPI.
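
For orientation, a minimal userspace sketch of the object lifecycle (open
/dev/iommu, allocate an IOAS, map a buffer into it, destroy the IOAS) could
look as follows. It relies only on the IOMMU_IOAS_ALLOC, IOMMU_IOAS_MAP and
IOMMU_DESTROY ioctls from include/uapi/linux/iommufd.h, and error handling is
reduced to simply bailing out::

  #include <fcntl.h>
  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <sys/mman.h>
  #include <linux/iommufd.h>

  int main(void)
  {
          struct iommu_ioas_alloc alloc_cmd = { .size = sizeof(alloc_cmd) };
          struct iommu_destroy destroy_cmd = { .size = sizeof(destroy_cmd) };
          size_t len = 1024 * 1024;
          int iommufd = open("/dev/iommu", O_RDWR);
          void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          if (iommufd < 0 || buf == MAP_FAILED)
                  return 1;

          /* IOMMUFD_OBJ_IOAS: a new, empty I/O address space */
          if (ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc_cmd))
                  return 1;

          /* Map the buffer; the kernel picks the IOVA and returns it in .iova */
          struct iommu_ioas_map map_cmd = {
                  .size = sizeof(map_cmd),
                  .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
                  .ioas_id = alloc_cmd.out_ioas_id,
                  .user_va = (uintptr_t)buf,
                  .length = len,
          };
          if (ioctl(iommufd, IOMMU_IOAS_MAP, &map_cmd))
                  return 1;

          /* Any user-visible object is torn down with IOMMU_DESTROY */
          destroy_cmd.id = alloc_cmd.out_ioas_id;
          return ioctl(iommufd, IOMMU_DESTROY, &destroy_cmd) ? 1 : 0;
  }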

The diagrams below show relationships between user-visible objects and kernel
datastructures (external to iommufd), with the numbers referring to the
operations that create the objects and links::

  _______________________________________________________________________
 |                      iommufd (HWPT_PAGING only)                       |
 |                                                                       |
 |        [1]                  [3]                                [2]    |
 |  ________________      _____________                        ________  |
 | |                |    |             |                      |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---------------------| DEVICE | |
 | |________________|    |_____________|                      |________| |
 |         |                    |                                  |     |
 |_________|____________________|__________________________________|_____|
           |                    |                                  |
           |              ______v_____                          ___v__
           | PFN storage |  (paging)  |                        |struct|
           |------------>|iommu_domain|<-----------------------|device|
                         |____________|                        |______|

  _______________________________________________________________________
 |                      iommufd (with HWPT_NESTED)                       |
 |                                                                       |
 |        [1]                  [3]                [4]             [2]    |
 |  ________________      _____________      _____________     ________  |
 | |                |    |             |    |             |   |        | |
 | |      IOAS      |<---| HWPT_PAGING |<---| HWPT_NESTED |<--| DEVICE | |
 | |________________|    |_____________|    |_____________|   |________| |
 |         |                    |                  |               |     |
 |_________|____________________|__________________|_______________|_____|
           |                    |                  |               |
           |              ______v_____       ______v_____       ___v__
           | PFN storage |  (paging)  |     |  (nested)  |     |struct|
           |------------>|iommu_domain|<----|iommu_domain|<----|device|
                         |____________|     |____________|     |______|

  _______________________________________________________________________
 |                     iommufd (with vIOMMU/vDEVICE)                     |
 |                                                                       |
 |                             [5]                [6]                    |
 |                        _____________      _____________               |
 |                       |             |    |             |              |
 |     |-----------------|   vIOMMU    |<---|   vDEVICE   |<-------|     |
 |     |                 |             |    |_____________|        |     |
 |     |                 |             |                           |     |
 |     |       [1]       |             |          [4]              |[2]  |
 |     |      ______     |             |     _____________     ____|___  |
 |     |     |      |[3] |             |    |             |   |        | |
 |     |     | IOAS |<---|(HWPT_PAGING)|<---| HWPT_NESTED |<--| DEVICE | |
 |     |     |______|    |_____________|    |_____________|   |________| |
 |     |         |              |                  |               |     |
 |_____|_________|______________|__________________|_______________|_____|
       |         |              |                  |               |
   ____v_______  |        ______v_____       ______v_____       ___v__
  |   struct   | | PFN   |  (paging)  |     |  (nested)  |     |struct|
  |iommu_device| |------>|iommu_domain|<----|iommu_domain|<----|device|
  |____________|  storage|____________|     |____________|     |______|

1. IOMMUFD_OBJ_IOAS is created via the IOMMU_IOAS_ALLOC uAPI. An iommufd can
   hold multiple IOAS objects. IOAS is the most generic object and does not
   expose interfaces that are specific to single IOMMU drivers. All operations
   on the IOAS must operate equally on each of the iommu_domains inside of it.

2. IOMMUFD_OBJ_DEVICE is created when an external driver calls the IOMMUFD kAPI
   to bind a device to an iommufd. The driver is expected to implement a set of
   ioctls to allow userspace to initiate the binding operation. Successful
   completion of this operation establishes the desired DMA ownership over the
   device. The driver must also set the driver_managed_dma flag and must not
   touch the device until this operation succeeds.

3. IOMMUFD_OBJ_HWPT_PAGING can be created in two ways:

   * IOMMUFD_OBJ_HWPT_PAGING is automatically created when an external driver
     calls the IOMMUFD kAPI to attach a bound device to an IOAS. Similarly the
     external driver uAPI allows userspace to initiate the attaching operation.
     If a compatible member HWPT_PAGING object exists in the IOAS's HWPT_PAGING
     list, then it will be reused. Otherwise a new HWPT_PAGING that represents
     an iommu_domain to userspace will be created, and then added to the list.
     Successful completion of this operation sets up the linkages among IOAS,
     device and iommu_domain. Once this completes the device can do DMA.

   * IOMMUFD_OBJ_HWPT_PAGING can be manually created via the IOMMU_HWPT_ALLOC
     uAPI, providing an ioas_id via @pt_id to associate the new HWPT_PAGING to
     the corresponding IOAS object. The benefit of this manual allocation is to
     allow allocation flags (defined in enum iommufd_hwpt_alloc_flags), e.g. it
     allocates a nesting parent HWPT_PAGING if the IOMMU_HWPT_ALLOC_NEST_PARENT
     flag is set.

4. IOMMUFD_OBJ_HWPT_NESTED can only be manually created via the IOMMU_HWPT_ALLOC
   uAPI, providing an hwpt_id or a viommu_id of a vIOMMU object encapsulating a
   nesting parent HWPT_PAGING via @pt_id to associate the new HWPT_NESTED object
   to the corresponding HWPT_PAGING object. The associated HWPT_PAGING object
   must be a nesting parent manually allocated via the same uAPI previously with
   an IOMMU_HWPT_ALLOC_NEST_PARENT flag, otherwise the allocation will fail. The
   allocation will be further validated by the IOMMU driver to ensure that the
   nesting parent domain and the nested domain being allocated are compatible.
   Successful completion of this operation sets up linkages among IOAS, device,
   and iommu_domains. Once this completes the device can do DMA via a 2-stage
   translation, a.k.a. nested translation. Note that multiple HWPT_NESTED
   objects can be allocated by (and then associated to) the same nesting
   parent. A sketch of this two-step allocation is shown after this list.

   .. note::

      Either a manual IOMMUFD_OBJ_HWPT_PAGING or an IOMMUFD_OBJ_HWPT_NESTED is
      created via the same IOMMU_HWPT_ALLOC uAPI. The difference is in the type
      of the object passed in via the @pt_id field of struct iommufd_hwpt_alloc.

5. IOMMUFD_OBJ_VIOMMU can only be manually created via the IOMMU_VIOMMU_ALLOC
   uAPI, providing a dev_id (for the device's physical IOMMU to back the vIOMMU)
   and an hwpt_id (to associate the vIOMMU to a nesting parent HWPT_PAGING). The
   iommufd core will link the vIOMMU object to the struct iommu_device that the
   struct device is behind. An IOMMU driver can implement a viommu_alloc op
   to allocate its own vIOMMU data structure embedding the core-level structure
   iommufd_viommu and some driver-specific data. If necessary, the driver can
   also configure its HW virtualization feature for that vIOMMU (and thus for
   the VM). Successful completion of this operation sets up the linkages between
   the vIOMMU object and the HWPT_PAGING, then this vIOMMU object can be used
   as a nesting parent object to allocate an HWPT_NESTED object described above.

6. IOMMUFD_OBJ_VDEVICE can only be manually created via the IOMMU_VDEVICE_ALLOC
   uAPI, providing a viommu_id for an iommufd_viommu object and a dev_id for an
   iommufd_device object. The vDEVICE object will be the binding between these
   two parent objects. A @virt_id will also be set via the uAPI, providing the
   iommufd core an index at which to store the vDEVICE object in a per-vIOMMU
   vDEVICE array. If necessary, the IOMMU driver may choose to implement a
   vdevice_alloc op to initialize its HW for the virtualization feature related
   to a vDEVICE. Successful completion of this operation sets up the linkages
   between the vIOMMU and the device. The vIOMMU/vDEVICE allocation flow is
   sketched below as well.

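The two-step IOMMU_HWPT_ALLOC flow from steps 3 and 4 might look roughly like
the sketch below. It is illustrative only: the helper name is hypothetical, it
assumes a device already bound through an external driver (so @dev_id is
valid), an existing IOAS, and that @data_type/@data/@data_len carry whatever
stage-1 description the underlying IOMMU driver defines in
include/uapi/linux/iommufd.h::

  #include <stdint.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  /* Returns 0 and the HWPT_NESTED id in *out_nested_id, or -1 on error */
  static int alloc_nested_hwpt(int iommufd, __u32 dev_id, __u32 ioas_id,
                               __u32 data_type, void *data, __u32 data_len,
                               __u32 *out_nested_id)
  {
          /* Step 3: a manual HWPT_PAGING that can serve as a nesting parent */
          struct iommu_hwpt_alloc parent = {
                  .size = sizeof(parent),
                  .flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
                  .dev_id = dev_id,
                  .pt_id = ioas_id,          /* linked to the IOAS */
          };
          /* Step 4: an HWPT_NESTED on top of the nesting parent */
          struct iommu_hwpt_alloc nested = { .size = sizeof(nested) };

          if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &parent))
                  return -1;

          nested.dev_id = dev_id;
          nested.pt_id = parent.out_hwpt_id; /* linked to the nesting parent */
          nested.data_type = data_type;      /* IOMMU-driver-specific format */
          nested.data_len = data_len;
          nested.data_uptr = (uintptr_t)data;
          if (ioctl(iommufd, IOMMU_HWPT_ALLOC, &nested))
                  return -1;

          *out_nested_id = nested.out_hwpt_id;
          return 0;
  }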

Because of the DMA ownership claim, a device can only be bound to a single
iommufd, and it can be attached to at most one IOAS object (no PASID support
yet).
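
The vIOMMU/vDEVICE flow from steps 5 and 6 is sketched below under the same
caveats: the helper name is hypothetical, a bound device and a nesting parent
HWPT_PAGING (allocated with IOMMU_HWPT_ALLOC_NEST_PARENT) are assumed, and the
structure layouts come from include/uapi/linux/iommufd.h (with a
driver-specific @viommu_type, e.g. IOMMU_VIOMMU_TYPE_ARM_SMMUV3)::

  #include <sys/ioctl.h>
  #include <linux/iommufd.h>

  /* Create a vIOMMU slice and record the device's virtual ID on it */
  static int setup_viommu(int iommufd, __u32 dev_id, __u32 parent_hwpt_id,
                          __u32 viommu_type, __u64 virt_id,
                          __u32 *out_viommu_id)
  {
          struct iommu_viommu_alloc viommu = {
                  .size = sizeof(viommu),
                  .type = viommu_type,       /* driver-specific vIOMMU type */
                  .dev_id = dev_id,          /* selects the physical IOMMU */
                  .hwpt_id = parent_hwpt_id, /* the nesting parent to wrap */
          };
          struct iommu_vdevice_alloc vdev = { .size = sizeof(vdev) };

          if (ioctl(iommufd, IOMMU_VIOMMU_ALLOC, &viommu))
                  return -1;

          vdev.viommu_id = viommu.out_viommu_id;
          vdev.dev_id = dev_id;
          vdev.virt_id = virt_id;            /* e.g. vSID/vRID chosen by VMM */
          if (ioctl(iommufd, IOMMU_VDEVICE_ALLOC, &vdev))
                  return -1;

          *out_viommu_id = viommu.out_viommu_id;
          return 0;
  }

The returned vIOMMU ID can then be passed as @pt_id to IOMMU_HWPT_ALLOC in
place of the nesting parent's ID when allocating an HWPT_NESTED, as described
in step 4.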

Kernel Datastructure
--------------------

User visible objects are backed by the following datastructures:

- iommufd_ioas for IOMMUFD_OBJ_IOAS.
- iommufd_device for IOMMUFD_OBJ_DEVICE.
- iommufd_hwpt_paging for IOMMUFD_OBJ_HWPT_PAGING.
- iommufd_hwpt_nested for IOMMUFD_OBJ_HWPT_NESTED.
- iommufd_viommu for IOMMUFD_OBJ_VIOMMU.
- iommufd_vdevice for IOMMUFD_OBJ_VDEVICE.

Several terms are used when looking at these datastructures:

- Automatic domain - refers to an iommu domain created automatically when
  attaching a device to an IOAS object. This is compatible with the semantics
  of VFIO type1.

- Manual domain - refers to an iommu domain designated by the user as the
  target pagetable to be attached to by a device. Though currently there are
  no uAPIs to directly create such a domain, the datastructure and algorithms
  are ready for handling that use case.

- In-kernel user - refers to something like a VFIO mdev that is using the
  IOMMUFD access interface to access the IOAS. This starts by creating an
  iommufd_access object, analogous to the domain binding a physical device
  would do. The access object will then allow converting IOVA ranges into
  struct page * lists, or doing direct read/write to an IOVA. See the sketch
  after this list.

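For the in-kernel user case, a rough sketch of the kAPI usage is shown below.
It assumes the declarations in include/linux/iommufd.h (iommufd_access_create(),
iommufd_access_attach(), iommufd_access_pin_pages() and friends); the exact
signatures have evolved over time, so consult that header, and the callback and
helper names here are illustrative only::

  #include <linux/err.h>
  #include <linux/iommufd.h>

  /* Called back when userspace unmaps IOVA that this access has pinned */
  static void my_unmap(void *data, unsigned long iova, unsigned long length)
  {
          /* stop software access to [iova, iova + length) and unpin it */
  }

  static const struct iommufd_access_ops my_access_ops = {
          .needs_pin_pages = 1,
          .unmap = my_unmap,
  };

  static int my_use_ioas(struct iommufd_ctx *ictx, u32 ioas_id,
                         unsigned long iova)
  {
          struct iommufd_access *access;
          struct page *page;
          u32 access_id;
          int rc;

          access = iommufd_access_create(ictx, &my_access_ops, NULL,
                                         &access_id);
          if (IS_ERR(access))
                  return PTR_ERR(access);

          rc = iommufd_access_attach(access, ioas_id);
          if (rc)
                  goto out_destroy;

          /* Pin one page worth of IOVA and get its struct page */
          rc = iommufd_access_pin_pages(access, iova, PAGE_SIZE, &page,
                                        IOMMUFD_ACCESS_RW_WRITE);
          if (rc)
                  goto out_destroy;

          /* ... emulate DMA in software using the pinned page ... */

          iommufd_access_unpin_pages(access, iova, PAGE_SIZE);
  out_destroy:
          iommufd_access_destroy(access);
          return rc;
  }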

iommufd_ioas serves as the metadata datastructure to manage how IOVA ranges are
mapped to memory pages. It is composed of:

- struct io_pagetable holding the IOVA map
- struct iopt_area's representing populated portions of IOVA
- struct iopt_pages representing the storage of PFNs
- struct iommu_domain representing the IO page table in the IOMMU
- struct iopt_pages_access representing in-kernel users of PFNs
- struct xarray pinned_pfns holding a list of pages pinned by in-kernel users

Each iopt_pages represents a logical linear array of full PFNs. The PFNs are
ultimately derived from userspace VAs via an mm_struct. Once they have been
pinned the PFNs are stored in IOPTEs of an iommu_domain or inside the
pinned_pfns xarray if they have been pinned through an iommufd_access.

PFNs have to be copied between all combinations of storage locations, depending
on what domains are present and what kinds of in-kernel "software access" users
exist. The mechanism ensures that a page is pinned only once.

An io_pagetable is composed of iopt_areas pointing at iopt_pages, along with a
list of iommu_domains that mirror the IOVA to PFN map.

Multiple io_pagetable-s, through their iopt_area-s, can share a single
iopt_pages which avoids multi-pinning and double accounting of page
consumption.

iommufd_ioas is shareable between subsystems, e.g. VFIO and VDPA, as long as
devices managed by different subsystems are bound to the same iommufd.

IOMMUFD User API
================

.. kernel-doc:: include/uapi/linux/iommufd.h

IOMMUFD Kernel API
==================

The IOMMUFD kAPI is device-centric with group-related tricks managed behind the
scenes. This allows external drivers calling such kAPI to implement a simple
device-centric uAPI for connecting their devices to an iommufd, instead of
explicitly imposing the group semantics in their uAPI as VFIO does.
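
As a rough sketch of that device-centric flow, an external driver handling a
"bind to iommufd" request from userspace would do something like the below. It
uses the declarations in include/linux/iommufd.h; the exact signatures vary
between kernel versions (e.g. once PASID support is added), and the function
name and the way @ictx is obtained are illustrative only::

  #include <linux/err.h>
  #include <linux/iommufd.h>

  /*
   * @ictx would typically come from iommufd_ctx_from_file() on the iommufd
   * the user passed in; the driver must have set .driver_managed_dma.
   */
  static int my_driver_connect(struct iommufd_ctx *ictx, struct device *dev,
                               u32 *pt_id)
  {
          struct iommufd_device *idev;
          u32 device_id;
          int rc;

          /* Claims DMA ownership and creates an IOMMUFD_OBJ_DEVICE */
          idev = iommufd_device_bind(ictx, dev, &device_id);
          if (IS_ERR(idev))
                  return PTR_ERR(idev);

          /* Attach to the IOAS or HWPT that userspace identified by *pt_id */
          rc = iommufd_device_attach(idev, pt_id);
          if (rc)
                  iommufd_device_unbind(idev);
          return rc;
  }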

.. kernel-doc:: drivers/iommu/iommufd/device.c
   :export:

.. kernel-doc:: drivers/iommu/iommufd/main.c
   :export:

VFIO and IOMMUFD
----------------

Connecting a VFIO device to iommufd can be done in two ways.

First is a VFIO compatible way by directly implementing the /dev/vfio/vfio
container IOCTLs by mapping them into io_pagetable operations. Doing so allows
the use of iommufd in legacy VFIO applications by symlinking /dev/vfio/vfio to
/dev/iommu or extending VFIO to SET_CONTAINER using an iommufd instead of a
container fd.

The second approach directly extends VFIO to support a new set of device-centric
user APIs based on the aforementioned IOMMUFD kernel API. It requires userspace
changes but better matches the IOMMUFD API semantics and makes it easier to
support new iommufd features than the first approach.
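
With the device-centric approach, the userspace side would look roughly like
the sketch below. It assumes the VFIO device cdev interface
(VFIO_DEVICE_BIND_IOMMUFD and VFIO_DEVICE_ATTACH_IOMMUFD_PT from
include/uapi/linux/vfio.h) and an example cdev path; the helper name is
hypothetical and error unwinding is omitted::

  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <linux/iommufd.h>
  #include <linux/vfio.h>

  /* Bind one VFIO device cdev to @iommufd and attach it to a new IOAS */
  static int attach_vfio_cdev(int iommufd, const char *cdev_path,
                              __u32 *out_ioas_id)
  {
          struct vfio_device_bind_iommufd bind = {
                  .argsz = sizeof(bind),
                  .iommufd = iommufd,
          };
          struct iommu_ioas_alloc alloc_cmd = { .size = sizeof(alloc_cmd) };
          struct vfio_device_attach_iommufd_pt attach = {
                  .argsz = sizeof(attach),
          };
          int devfd = open(cdev_path, O_RDWR); /* e.g. /dev/vfio/devices/vfio0 */

          if (devfd < 0)
                  return -1;
          if (ioctl(devfd, VFIO_DEVICE_BIND_IOMMUFD, &bind))
                  return -1;
          if (ioctl(iommufd, IOMMU_IOAS_ALLOC, &alloc_cmd))
                  return -1;

          attach.pt_id = alloc_cmd.out_ioas_id;  /* could also be a HWPT id */
          if (ioctl(devfd, VFIO_DEVICE_ATTACH_IOMMUFD_PT, &attach))
                  return -1;

          *out_ioas_id = alloc_cmd.out_ioas_id;
          return devfd;
  }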

Currently both approaches are still work-in-progress.

There are still a few gaps to be resolved to catch up with VFIO type1, as
documented in iommufd_vfio_check_extension().

Future TODOs
============

Currently IOMMUFD supports only kernel-managed I/O page tables, similar to VFIO
type1. New features on the radar include:

 - Binding iommu_domain's to PASID/SSID
 - Userspace page tables, for ARM, x86 and S390
 - Kernel bypassed invalidation of user page tables
 - Re-use of the KVM page table in the IOMMU
 - Dirty page tracking in the IOMMU
 - Runtime Increase/Decrease of IOPTE size
 - PRI support with faults resolved in userspace