v3.1
 
/*
 * Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
 * Author: Joerg Roedel <joerg.roedel@amd.com>
 *
 * This program is free software; you can redistribute it and/or modify it
 * under the terms of the GNU General Public License version 2 as published
 * by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
 * GNU General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307 USA
 */

#include <linux/bug.h>
#include <linux/types.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/iommu.h>

static struct iommu_ops *iommu_ops;

void register_iommu(struct iommu_ops *ops)
{
	if (iommu_ops)
		BUG();

	iommu_ops = ops;
}

bool iommu_found(void)
{
	return iommu_ops != NULL;
}
EXPORT_SYMBOL_GPL(iommu_found);

struct iommu_domain *iommu_domain_alloc(void)
{
	struct iommu_domain *domain;
	int ret;

	domain = kmalloc(sizeof(*domain), GFP_KERNEL);
	if (!domain)
		return NULL;

	ret = iommu_ops->domain_init(domain);
	if (ret)
		goto out_free;

	return domain;

out_free:
	kfree(domain);

	return NULL;
}
EXPORT_SYMBOL_GPL(iommu_domain_alloc);

void iommu_domain_free(struct iommu_domain *domain)
{
	iommu_ops->domain_destroy(domain);
	kfree(domain);
}
EXPORT_SYMBOL_GPL(iommu_domain_free);

int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
{
	return iommu_ops->attach_dev(domain, dev);
}
EXPORT_SYMBOL_GPL(iommu_attach_device);

void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
{
	iommu_ops->detach_dev(domain, dev);
}
EXPORT_SYMBOL_GPL(iommu_detach_device);

phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain,
			       unsigned long iova)
{
	return iommu_ops->iova_to_phys(domain, iova);
}
EXPORT_SYMBOL_GPL(iommu_iova_to_phys);

int iommu_domain_has_cap(struct iommu_domain *domain,
			 unsigned long cap)
{
	return iommu_ops->domain_has_cap(domain, cap);
}
EXPORT_SYMBOL_GPL(iommu_domain_has_cap);

int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, int gfp_order, int prot)
{
	unsigned long invalid_mask;
	size_t size;

	size         = 0x1000UL << gfp_order;
	invalid_mask = size - 1;

	BUG_ON((iova | paddr) & invalid_mask);

	return iommu_ops->map(domain, iova, paddr, gfp_order, prot);
}
EXPORT_SYMBOL_GPL(iommu_map);

int iommu_unmap(struct iommu_domain *domain, unsigned long iova, int gfp_order)
{
	unsigned long invalid_mask;
	size_t size;

	size         = 0x1000UL << gfp_order;
	invalid_mask = size - 1;

	BUG_ON(iova & invalid_mask);

	return iommu_ops->unmap(domain, iova, gfp_order);
}
EXPORT_SYMBOL_GPL(iommu_unmap);
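
/*
 * Editor's note: an illustrative consumer sketch, not part of iommu.c.
 * Function and variable names here are hypothetical; the calls are the
 * ones exported above.  In this era a single IOMMU driver registers
 * globally via register_iommu(), so a consumer (e.g. a device-assignment
 * path) first checks that one is present:
 */
static int example_assign_device(struct device *assigned_dev)
{
	struct iommu_domain *domain;
	int ret;

	if (!iommu_found())
		return -ENODEV;		/* no IOMMU driver registered */

	domain = iommu_domain_alloc();
	if (!domain)
		return -ENOMEM;

	ret = iommu_attach_device(domain, assigned_dev);
	if (ret)
		goto out_free;

	/*
	 * Identity-map one order-0 (4K) page; iova and paddr must be
	 * aligned to the mapping size or the BUG_ON() in iommu_map() fires.
	 */
	ret = iommu_map(domain, 0x100000, 0x100000, 0,
			IOMMU_READ | IOMMU_WRITE);
	if (ret)
		goto out_detach;

	return 0;

out_detach:
	iommu_detach_device(domain, assigned_dev);
out_free:
	iommu_domain_free(domain);
	return ret;
}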
v6.8
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2007-2008 Advanced Micro Devices, Inc.
 * Author: Joerg Roedel <jroedel@suse.de>
 */

#define pr_fmt(fmt)    "iommu: " fmt

#include <linux/amba/bus.h>
#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/bits.h>
#include <linux/bug.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/export.h>
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/host1x_context_bus.h>
#include <linux/iommu.h>
#include <linux/idr.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/pci-ats.h>
#include <linux/bitops.h>
#include <linux/platform_device.h>
#include <linux/property.h>
#include <linux/fsl/mc.h>
#include <linux/module.h>
#include <linux/cc_platform.h>
#include <linux/cdx/cdx_bus.h>
#include <trace/events/iommu.h>
#include <linux/sched/mm.h>
#include <linux/msi.h>

#include "dma-iommu.h"
#include "iommu-priv.h"

#include "iommu-sva.h"

static struct kset *iommu_group_kset;
static DEFINE_IDA(iommu_group_ida);
static DEFINE_IDA(iommu_global_pasid_ida);

static unsigned int iommu_def_domain_type __read_mostly;
static bool iommu_dma_strict __read_mostly = IS_ENABLED(CONFIG_IOMMU_DEFAULT_DMA_STRICT);
static u32 iommu_cmd_line __read_mostly;

struct iommu_group {
	struct kobject kobj;
	struct kobject *devices_kobj;
	struct list_head devices;
	struct xarray pasid_array;
	struct mutex mutex;
	void *iommu_data;
	void (*iommu_data_release)(void *iommu_data);
	char *name;
	int id;
	struct iommu_domain *default_domain;
	struct iommu_domain *blocking_domain;
	struct iommu_domain *domain;
	struct list_head entry;
	unsigned int owner_cnt;
	void *owner;
};

struct group_device {
	struct list_head list;
	struct device *dev;
	char *name;
};

/* Iterate over each struct group_device in a struct iommu_group */
#define for_each_group_device(group, pos) \
	list_for_each_entry(pos, &(group)->devices, list)

struct iommu_group_attribute {
	struct attribute attr;
	ssize_t (*show)(struct iommu_group *group, char *buf);
	ssize_t (*store)(struct iommu_group *group,
			 const char *buf, size_t count);
};

static const char * const iommu_group_resv_type_string[] = {
	[IOMMU_RESV_DIRECT]			= "direct",
	[IOMMU_RESV_DIRECT_RELAXABLE]		= "direct-relaxable",
	[IOMMU_RESV_RESERVED]			= "reserved",
	[IOMMU_RESV_MSI]			= "msi",
	[IOMMU_RESV_SW_MSI]			= "msi",
};

#define IOMMU_CMD_LINE_DMA_API		BIT(0)
#define IOMMU_CMD_LINE_STRICT		BIT(1)

static int iommu_bus_notifier(struct notifier_block *nb,
			      unsigned long action, void *data);
static void iommu_release_device(struct device *dev);
static struct iommu_domain *
__iommu_group_domain_alloc(struct iommu_group *group, unsigned int type);
static int __iommu_attach_device(struct iommu_domain *domain,
				 struct device *dev);
static int __iommu_attach_group(struct iommu_domain *domain,
				struct iommu_group *group);

enum {
	IOMMU_SET_DOMAIN_MUST_SUCCEED = 1 << 0,
};

static int __iommu_device_set_domain(struct iommu_group *group,
				     struct device *dev,
				     struct iommu_domain *new_domain,
				     unsigned int flags);
static int __iommu_group_set_domain_internal(struct iommu_group *group,
					     struct iommu_domain *new_domain,
					     unsigned int flags);
static int __iommu_group_set_domain(struct iommu_group *group,
				    struct iommu_domain *new_domain)
{
	return __iommu_group_set_domain_internal(group, new_domain, 0);
}
static void __iommu_group_set_domain_nofail(struct iommu_group *group,
					    struct iommu_domain *new_domain)
{
	WARN_ON(__iommu_group_set_domain_internal(
		group, new_domain, IOMMU_SET_DOMAIN_MUST_SUCCEED));
}

static int iommu_setup_default_domain(struct iommu_group *group,
				      int target_type);
static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
					       struct device *dev);
static ssize_t iommu_group_store_type(struct iommu_group *group,
				      const char *buf, size_t count);
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
						     struct device *dev);
static void __iommu_group_free_device(struct iommu_group *group,
				      struct group_device *grp_dev);

#define IOMMU_GROUP_ATTR(_name, _mode, _show, _store)		\
struct iommu_group_attribute iommu_group_attr_##_name =		\
	__ATTR(_name, _mode, _show, _store)

#define to_iommu_group_attr(_attr)	\
	container_of(_attr, struct iommu_group_attribute, attr)
#define to_iommu_group(_kobj)		\
	container_of(_kobj, struct iommu_group, kobj)

static LIST_HEAD(iommu_device_list);
static DEFINE_SPINLOCK(iommu_device_lock);

static const struct bus_type * const iommu_buses[] = {
	&platform_bus_type,
#ifdef CONFIG_PCI
	&pci_bus_type,
#endif
#ifdef CONFIG_ARM_AMBA
	&amba_bustype,
#endif
#ifdef CONFIG_FSL_MC_BUS
	&fsl_mc_bus_type,
#endif
#ifdef CONFIG_TEGRA_HOST1X_CONTEXT_BUS
	&host1x_context_device_bus_type,
#endif
#ifdef CONFIG_CDX_BUS
	&cdx_bus_type,
#endif
};

/*
 * Use a function instead of an array here because the domain-type is a
 * bit-field, so an array would waste memory.
 */
static const char *iommu_domain_type_str(unsigned int t)
{
	switch (t) {
	case IOMMU_DOMAIN_BLOCKED:
		return "Blocked";
	case IOMMU_DOMAIN_IDENTITY:
		return "Passthrough";
	case IOMMU_DOMAIN_UNMANAGED:
		return "Unmanaged";
	case IOMMU_DOMAIN_DMA:
	case IOMMU_DOMAIN_DMA_FQ:
		return "Translated";
	case IOMMU_DOMAIN_PLATFORM:
		return "Platform";
	default:
		return "Unknown";
	}
}

static int __init iommu_subsys_init(void)
{
	struct notifier_block *nb;

	if (!(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API)) {
		if (IS_ENABLED(CONFIG_IOMMU_DEFAULT_PASSTHROUGH))
			iommu_set_default_passthrough(false);
		else
			iommu_set_default_translated(false);

		if (iommu_default_passthrough() && cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
			pr_info("Memory encryption detected - Disabling default IOMMU Passthrough\n");
			iommu_set_default_translated(false);
		}
	}

	if (!iommu_default_passthrough() && !iommu_dma_strict)
		iommu_def_domain_type = IOMMU_DOMAIN_DMA_FQ;

	pr_info("Default domain type: %s%s\n",
		iommu_domain_type_str(iommu_def_domain_type),
		(iommu_cmd_line & IOMMU_CMD_LINE_DMA_API) ?
			" (set via kernel command line)" : "");

	if (!iommu_default_passthrough())
		pr_info("DMA domain TLB invalidation policy: %s mode%s\n",
			iommu_dma_strict ? "strict" : "lazy",
			(iommu_cmd_line & IOMMU_CMD_LINE_STRICT) ?
				" (set via kernel command line)" : "");

	nb = kcalloc(ARRAY_SIZE(iommu_buses), sizeof(*nb), GFP_KERNEL);
	if (!nb)
		return -ENOMEM;

	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++) {
		nb[i].notifier_call = iommu_bus_notifier;
		bus_register_notifier(iommu_buses[i], &nb[i]);
	}

	return 0;
}
subsys_initcall(iommu_subsys_init);

static int remove_iommu_group(struct device *dev, void *data)
{
	if (dev->iommu && dev->iommu->iommu_dev == data)
		iommu_release_device(dev);

	return 0;
}

/**
 * iommu_device_register() - Register an IOMMU hardware instance
 * @iommu: IOMMU handle for the instance
 * @ops:   IOMMU ops to associate with the instance
 * @hwdev: (optional) actual instance device, used for fwnode lookup
 *
 * Return: 0 on success, or an error.
 */
int iommu_device_register(struct iommu_device *iommu,
			  const struct iommu_ops *ops, struct device *hwdev)
{
	int err = 0;

	/* We need to be able to take module references appropriately */
	if (WARN_ON(is_module_address((unsigned long)ops) && !ops->owner))
		return -EINVAL;

	iommu->ops = ops;
	if (hwdev)
		iommu->fwnode = dev_fwnode(hwdev);

	spin_lock(&iommu_device_lock);
	list_add_tail(&iommu->list, &iommu_device_list);
	spin_unlock(&iommu_device_lock);

	for (int i = 0; i < ARRAY_SIZE(iommu_buses) && !err; i++)
		err = bus_iommu_probe(iommu_buses[i]);
	if (err)
		iommu_device_unregister(iommu);
	return err;
}
EXPORT_SYMBOL_GPL(iommu_device_register);
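
/*
 * Editor's note: a hypothetical registration sketch, not part of iommu.c.
 * The "my_iommu" names are invented; only iommu_device_register() and the
 * embedded struct iommu_device come from this API.  A driver typically
 * embeds the iommu_device in its own state and registers it from probe():
 */
struct my_iommu {
	struct iommu_device iommu;
	/* driver-private state ... */
};

static const struct iommu_ops my_iommu_ops = {
	.owner = THIS_MODULE,
	/* .probe_device, .device_group, .domain_alloc_paging, ... */
};

static int my_iommu_probe(struct platform_device *pdev)
{
	struct my_iommu *m = platform_get_drvdata(pdev);

	return iommu_device_register(&m->iommu, &my_iommu_ops, &pdev->dev);
}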

void iommu_device_unregister(struct iommu_device *iommu)
{
	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++)
		bus_for_each_dev(iommu_buses[i], NULL, iommu, remove_iommu_group);

	spin_lock(&iommu_device_lock);
	list_del(&iommu->list);
	spin_unlock(&iommu_device_lock);

	/* Pairs with the alloc in generic_single_device_group() */
	iommu_group_put(iommu->singleton_group);
	iommu->singleton_group = NULL;
}
EXPORT_SYMBOL_GPL(iommu_device_unregister);

#if IS_ENABLED(CONFIG_IOMMUFD_TEST)
void iommu_device_unregister_bus(struct iommu_device *iommu,
				 struct bus_type *bus,
				 struct notifier_block *nb)
{
	bus_unregister_notifier(bus, nb);
	iommu_device_unregister(iommu);
}
EXPORT_SYMBOL_GPL(iommu_device_unregister_bus);

/*
 * Register an iommu driver against a single bus. This is only used by iommufd
 * selftest to create a mock iommu driver. The caller must provide
 * some memory to hold a notifier_block.
 */
int iommu_device_register_bus(struct iommu_device *iommu,
			      const struct iommu_ops *ops, struct bus_type *bus,
			      struct notifier_block *nb)
{
	int err;

	iommu->ops = ops;
	nb->notifier_call = iommu_bus_notifier;
	err = bus_register_notifier(bus, nb);
	if (err)
		return err;

	spin_lock(&iommu_device_lock);
	list_add_tail(&iommu->list, &iommu_device_list);
	spin_unlock(&iommu_device_lock);

	err = bus_iommu_probe(bus);
	if (err) {
		iommu_device_unregister_bus(iommu, bus, nb);
		return err;
	}
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_device_register_bus);
#endif

static struct dev_iommu *dev_iommu_get(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;

	lockdep_assert_held(&iommu_probe_device_lock);

	if (param)
		return param;

	param = kzalloc(sizeof(*param), GFP_KERNEL);
	if (!param)
		return NULL;

	mutex_init(&param->lock);
	dev->iommu = param;
	return param;
}

static void dev_iommu_free(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;

	dev->iommu = NULL;
	if (param->fwspec) {
		fwnode_handle_put(param->fwspec->iommu_fwnode);
		kfree(param->fwspec);
	}
	kfree(param);
}

/*
 * Internal equivalent of device_iommu_mapped() for when we care that a device
 * actually has API ops, and don't want false positives from VFIO-only groups.
 */
static bool dev_has_iommu(struct device *dev)
{
	return dev->iommu && dev->iommu->iommu_dev;
}

static u32 dev_iommu_get_max_pasids(struct device *dev)
{
	u32 max_pasids = 0, bits = 0;
	int ret;

	if (dev_is_pci(dev)) {
		ret = pci_max_pasids(to_pci_dev(dev));
		if (ret > 0)
			max_pasids = ret;
	} else {
		ret = device_property_read_u32(dev, "pasid-num-bits", &bits);
		if (!ret)
			max_pasids = 1UL << bits;
	}

	return min_t(u32, max_pasids, dev->iommu->iommu_dev->max_pasids);
}

void dev_iommu_priv_set(struct device *dev, void *priv)
{
	/* FSL_PAMU does something weird */
	if (!IS_ENABLED(CONFIG_FSL_PAMU))
		lockdep_assert_held(&iommu_probe_device_lock);
	dev->iommu->priv = priv;
}
EXPORT_SYMBOL_GPL(dev_iommu_priv_set);

/*
 * Init the dev->iommu and dev->iommu_group in the struct device and get the
 * driver probed
 */
static int iommu_init_device(struct device *dev, const struct iommu_ops *ops)
{
	struct iommu_device *iommu_dev;
	struct iommu_group *group;
	int ret;

	if (!dev_iommu_get(dev))
		return -ENOMEM;

	if (!try_module_get(ops->owner)) {
		ret = -EINVAL;
		goto err_free;
	}

	iommu_dev = ops->probe_device(dev);
	if (IS_ERR(iommu_dev)) {
		ret = PTR_ERR(iommu_dev);
		goto err_module_put;
	}
	dev->iommu->iommu_dev = iommu_dev;

	ret = iommu_device_link(iommu_dev, dev);
	if (ret)
		goto err_release;

	group = ops->device_group(dev);
	if (WARN_ON_ONCE(group == NULL))
		group = ERR_PTR(-EINVAL);
	if (IS_ERR(group)) {
		ret = PTR_ERR(group);
		goto err_unlink;
	}
	dev->iommu_group = group;

	dev->iommu->max_pasids = dev_iommu_get_max_pasids(dev);
	if (ops->is_attach_deferred)
		dev->iommu->attach_deferred = ops->is_attach_deferred(dev);
	return 0;

err_unlink:
	iommu_device_unlink(iommu_dev, dev);
err_release:
	if (ops->release_device)
		ops->release_device(dev);
err_module_put:
	module_put(ops->owner);
err_free:
	dev->iommu->iommu_dev = NULL;
	dev_iommu_free(dev);
	return ret;
}

static void iommu_deinit_device(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;
	const struct iommu_ops *ops = dev_iommu_ops(dev);

	lockdep_assert_held(&group->mutex);

	iommu_device_unlink(dev->iommu->iommu_dev, dev);

	/*
	 * release_device() must stop using any attached domain on the device.
	 * If there are still other devices in the group they are not affected
	 * by this callback.
	 *
	 * The IOMMU driver must set the device to either an identity or
	 * blocking translation and stop using any domain pointer, as it is
	 * going to be freed.
	 */
	if (ops->release_device)
		ops->release_device(dev);

	/*
	 * If this is the last driver to use the group then we must free the
	 * domains before we do the module_put().
	 */
	if (list_empty(&group->devices)) {
		if (group->default_domain) {
			iommu_domain_free(group->default_domain);
			group->default_domain = NULL;
		}
		if (group->blocking_domain) {
			iommu_domain_free(group->blocking_domain);
			group->blocking_domain = NULL;
		}
		group->domain = NULL;
	}

	/* Caller must put iommu_group */
	dev->iommu_group = NULL;
	module_put(ops->owner);
	dev_iommu_free(dev);
}

DEFINE_MUTEX(iommu_probe_device_lock);

static int __iommu_probe_device(struct device *dev, struct list_head *group_list)
{
	const struct iommu_ops *ops;
	struct iommu_fwspec *fwspec;
	struct iommu_group *group;
	struct group_device *gdev;
	int ret;

	/*
	 * For FDT-based systems and ACPI IORT/VIOT, drivers register IOMMU
	 * instances with non-NULL fwnodes, and client devices should have been
	 * identified with a fwspec by this point. Otherwise, we can currently
	 * assume that only one of Intel, AMD, s390, PAMU or legacy SMMUv2 can
	 * be present, and that any of their registered instances has suitable
	 * ops for probing, and thus cheekily co-opt the same mechanism.
	 */
	fwspec = dev_iommu_fwspec_get(dev);
	if (fwspec && fwspec->ops)
		ops = fwspec->ops;
	else
		ops = iommu_ops_from_fwnode(NULL);

	if (!ops)
		return -ENODEV;
	/*
	 * Serialise to avoid races between IOMMU drivers registering in
	 * parallel and/or the "replay" calls from ACPI/OF code via client
	 * driver probe. Once the latter have been cleaned up we should
	 * probably be able to use device_lock() here to minimise the scope,
	 * but for now enforcing a simple global ordering is fine.
	 */
	lockdep_assert_held(&iommu_probe_device_lock);

	/* Device is probed already if in a group */
	if (dev->iommu_group)
		return 0;

	ret = iommu_init_device(dev, ops);
	if (ret)
		return ret;

	group = dev->iommu_group;
	gdev = iommu_group_alloc_device(group, dev);
	mutex_lock(&group->mutex);
	if (IS_ERR(gdev)) {
		ret = PTR_ERR(gdev);
		goto err_put_group;
	}

	/*
	 * The gdev must be in the list before calling
	 * iommu_setup_default_domain()
	 */
	list_add_tail(&gdev->list, &group->devices);
	WARN_ON(group->default_domain && !group->domain);
	if (group->default_domain)
		iommu_create_device_direct_mappings(group->default_domain, dev);
	if (group->domain) {
		ret = __iommu_device_set_domain(group, dev, group->domain, 0);
		if (ret)
			goto err_remove_gdev;
	} else if (!group->default_domain && !group_list) {
		ret = iommu_setup_default_domain(group, 0);
		if (ret)
			goto err_remove_gdev;
	} else if (!group->default_domain) {
		/*
		 * With a group_list argument we defer the default_domain setup
		 * to the caller by providing a de-duplicated list of groups
		 * that need further setup.
		 */
		if (list_empty(&group->entry))
			list_add_tail(&group->entry, group_list);
	}
	mutex_unlock(&group->mutex);

	if (dev_is_pci(dev))
		iommu_dma_set_pci_32bit_workaround(dev);

	return 0;

err_remove_gdev:
	list_del(&gdev->list);
	__iommu_group_free_device(group, gdev);
err_put_group:
	iommu_deinit_device(dev);
	mutex_unlock(&group->mutex);
	iommu_group_put(group);

	return ret;
}

int iommu_probe_device(struct device *dev)
{
	const struct iommu_ops *ops;
	int ret;

	mutex_lock(&iommu_probe_device_lock);
	ret = __iommu_probe_device(dev, NULL);
	mutex_unlock(&iommu_probe_device_lock);
	if (ret)
		return ret;

	ops = dev_iommu_ops(dev);
	if (ops->probe_finalize)
		ops->probe_finalize(dev);

	return 0;
}

static void __iommu_group_free_device(struct iommu_group *group,
				      struct group_device *grp_dev)
{
	struct device *dev = grp_dev->dev;

	sysfs_remove_link(group->devices_kobj, grp_dev->name);
	sysfs_remove_link(&dev->kobj, "iommu_group");

	trace_remove_device_from_group(group->id, dev);

	/*
	 * If the group has become empty then ownership must have been
	 * released, and the current domain must be set back to NULL or
	 * the default domain.
	 */
	if (list_empty(&group->devices))
		WARN_ON(group->owner_cnt ||
			group->domain != group->default_domain);

	kfree(grp_dev->name);
	kfree(grp_dev);
}

/* Remove the iommu_group from the struct device. */
static void __iommu_group_remove_device(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;
	struct group_device *device;

	mutex_lock(&group->mutex);
	for_each_group_device(group, device) {
		if (device->dev != dev)
			continue;

		list_del(&device->list);
		__iommu_group_free_device(group, device);
		if (dev_has_iommu(dev))
			iommu_deinit_device(dev);
		else
			dev->iommu_group = NULL;
		break;
	}
	mutex_unlock(&group->mutex);

	/*
	 * Pairs with the get in iommu_init_device() or
	 * iommu_group_add_device()
	 */
	iommu_group_put(group);
}

static void iommu_release_device(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	if (group)
		__iommu_group_remove_device(dev);

	/* Free any fwspec if no iommu_driver was ever attached */
	if (dev->iommu)
		dev_iommu_free(dev);
}

static int __init iommu_set_def_domain_type(char *str)
{
	bool pt;
	int ret;

	ret = kstrtobool(str, &pt);
	if (ret)
		return ret;

	if (pt)
		iommu_set_default_passthrough(true);
	else
		iommu_set_default_translated(true);

	return 0;
}
early_param("iommu.passthrough", iommu_set_def_domain_type);

static int __init iommu_dma_setup(char *str)
{
	int ret = kstrtobool(str, &iommu_dma_strict);

	if (!ret)
		iommu_cmd_line |= IOMMU_CMD_LINE_STRICT;
	return ret;
}
early_param("iommu.strict", iommu_dma_setup);

void iommu_set_dma_strict(void)
{
	iommu_dma_strict = true;
	if (iommu_def_domain_type == IOMMU_DOMAIN_DMA_FQ)
		iommu_def_domain_type = IOMMU_DOMAIN_DMA;
}
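
/*
 * Editor's note (illustrative, not part of iommu.c): the two early_param()
 * hooks above take kstrtobool()-style values on the kernel command line,
 * e.g. "iommu.passthrough=1" to default groups to identity (passthrough)
 * domains, or "iommu.strict=0" to pick the lazy (DMA-FQ) TLB invalidation
 * policy when translation is in use, matching the iommu_subsys_init()
 * logic earlier in this file.
 */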

static ssize_t iommu_group_attr_show(struct kobject *kobj,
				     struct attribute *__attr, char *buf)
{
	struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
	struct iommu_group *group = to_iommu_group(kobj);
	ssize_t ret = -EIO;

	if (attr->show)
		ret = attr->show(group, buf);
	return ret;
}

static ssize_t iommu_group_attr_store(struct kobject *kobj,
				      struct attribute *__attr,
				      const char *buf, size_t count)
{
	struct iommu_group_attribute *attr = to_iommu_group_attr(__attr);
	struct iommu_group *group = to_iommu_group(kobj);
	ssize_t ret = -EIO;

	if (attr->store)
		ret = attr->store(group, buf, count);
	return ret;
}

static const struct sysfs_ops iommu_group_sysfs_ops = {
	.show = iommu_group_attr_show,
	.store = iommu_group_attr_store,
};

static int iommu_group_create_file(struct iommu_group *group,
				   struct iommu_group_attribute *attr)
{
	return sysfs_create_file(&group->kobj, &attr->attr);
}

static void iommu_group_remove_file(struct iommu_group *group,
				    struct iommu_group_attribute *attr)
{
	sysfs_remove_file(&group->kobj, &attr->attr);
}

static ssize_t iommu_group_show_name(struct iommu_group *group, char *buf)
{
	return sysfs_emit(buf, "%s\n", group->name);
}

/**
 * iommu_insert_resv_region - Insert a new region in the
 * list of reserved regions.
 * @new: new region to insert
 * @regions: list of regions
 *
 * Elements are sorted by start address and overlapping segments
 * of the same type are merged.
 */
static int iommu_insert_resv_region(struct iommu_resv_region *new,
				    struct list_head *regions)
{
	struct iommu_resv_region *iter, *tmp, *nr, *top;
	LIST_HEAD(stack);

	nr = iommu_alloc_resv_region(new->start, new->length,
				     new->prot, new->type, GFP_KERNEL);
	if (!nr)
		return -ENOMEM;

	/* First add the new element based on start address sorting */
	list_for_each_entry(iter, regions, list) {
		if (nr->start < iter->start ||
		    (nr->start == iter->start && nr->type <= iter->type))
			break;
	}
	list_add_tail(&nr->list, &iter->list);

	/* Merge overlapping segments of type nr->type in @regions, if any */
	list_for_each_entry_safe(iter, tmp, regions, list) {
		phys_addr_t top_end, iter_end = iter->start + iter->length - 1;

		/* no merge needed on elements of different types than @new */
		if (iter->type != new->type) {
			list_move_tail(&iter->list, &stack);
			continue;
		}

		/* look for the last stack element of same type as @iter */
		list_for_each_entry_reverse(top, &stack, list)
			if (top->type == iter->type)
				goto check_overlap;

		list_move_tail(&iter->list, &stack);
		continue;

check_overlap:
		top_end = top->start + top->length - 1;

		if (iter->start > top_end + 1) {
			list_move_tail(&iter->list, &stack);
		} else {
			top->length = max(top_end, iter_end) - top->start + 1;
			list_del(&iter->list);
			kfree(iter);
		}
	}
	list_splice(&stack, regions);
	return 0;
}
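
/*
 * Editor's note, a worked example of the merge above (addresses invented):
 * inserting an MSI region [0x8000, 0x8fff] into a list already holding MSI
 * regions [0x6000, 0x7fff] and [0x8800, 0x9fff] first yields, sorted by
 * start address,
 *
 *	[0x6000, 0x7fff] [0x8000, 0x8fff] [0x8800, 0x9fff]
 *
 * and the merge pass then collapses the adjacent and overlapping same-type
 * segments into the single entry [0x6000, 0x9fff]; entries of other types
 * are parked on the local stack untouched and spliced back at the end.
 */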

static int
iommu_insert_device_resv_regions(struct list_head *dev_resv_regions,
				 struct list_head *group_resv_regions)
{
	struct iommu_resv_region *entry;
	int ret = 0;

	list_for_each_entry(entry, dev_resv_regions, list) {
		ret = iommu_insert_resv_region(entry, group_resv_regions);
		if (ret)
			break;
	}
	return ret;
}

int iommu_get_group_resv_regions(struct iommu_group *group,
				 struct list_head *head)
{
	struct group_device *device;
	int ret = 0;

	mutex_lock(&group->mutex);
	for_each_group_device(group, device) {
		struct list_head dev_resv_regions;

		/*
		 * Non-API groups still expose reserved_regions in sysfs,
		 * so filter out calls that get here that way.
		 */
		if (!dev_has_iommu(device->dev))
			break;

		INIT_LIST_HEAD(&dev_resv_regions);
		iommu_get_resv_regions(device->dev, &dev_resv_regions);
		ret = iommu_insert_device_resv_regions(&dev_resv_regions, head);
		iommu_put_resv_regions(device->dev, &dev_resv_regions);
		if (ret)
			break;
	}
	mutex_unlock(&group->mutex);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_get_group_resv_regions);

static ssize_t iommu_group_show_resv_regions(struct iommu_group *group,
					     char *buf)
{
	struct iommu_resv_region *region, *next;
	struct list_head group_resv_regions;
	int offset = 0;

	INIT_LIST_HEAD(&group_resv_regions);
	iommu_get_group_resv_regions(group, &group_resv_regions);

	list_for_each_entry_safe(region, next, &group_resv_regions, list) {
		offset += sysfs_emit_at(buf, offset, "0x%016llx 0x%016llx %s\n",
					(long long)region->start,
					(long long)(region->start +
						    region->length - 1),
					iommu_group_resv_type_string[region->type]);
		kfree(region);
	}

	return offset;
}

static ssize_t iommu_group_show_type(struct iommu_group *group,
				     char *buf)
{
	char *type = "unknown";

	mutex_lock(&group->mutex);
	if (group->default_domain) {
		switch (group->default_domain->type) {
		case IOMMU_DOMAIN_BLOCKED:
			type = "blocked";
			break;
		case IOMMU_DOMAIN_IDENTITY:
			type = "identity";
			break;
		case IOMMU_DOMAIN_UNMANAGED:
			type = "unmanaged";
			break;
		case IOMMU_DOMAIN_DMA:
			type = "DMA";
			break;
		case IOMMU_DOMAIN_DMA_FQ:
			type = "DMA-FQ";
			break;
		}
	}
	mutex_unlock(&group->mutex);

	return sysfs_emit(buf, "%s\n", type);
}

static IOMMU_GROUP_ATTR(name, S_IRUGO, iommu_group_show_name, NULL);

static IOMMU_GROUP_ATTR(reserved_regions, 0444,
			iommu_group_show_resv_regions, NULL);

static IOMMU_GROUP_ATTR(type, 0644, iommu_group_show_type,
			iommu_group_store_type);

static void iommu_group_release(struct kobject *kobj)
{
	struct iommu_group *group = to_iommu_group(kobj);

	pr_debug("Releasing group %d\n", group->id);

	if (group->iommu_data_release)
		group->iommu_data_release(group->iommu_data);

	ida_free(&iommu_group_ida, group->id);

	/* Domains are free'd by iommu_deinit_device() */
	WARN_ON(group->default_domain);
	WARN_ON(group->blocking_domain);

	kfree(group->name);
	kfree(group);
}

static const struct kobj_type iommu_group_ktype = {
	.sysfs_ops = &iommu_group_sysfs_ops,
	.release = iommu_group_release,
};

/**
 * iommu_group_alloc - Allocate a new group
 *
 * This function is called by an iommu driver to allocate a new iommu
 * group.  The iommu group represents the minimum granularity of the iommu.
 * Upon successful return, the caller holds a reference to the supplied
 * group in order to hold the group until devices are added.  Use
 * iommu_group_put() to release this extra reference count, allowing the
 * group to be automatically reclaimed once it has no devices or external
 * references.
 */
struct iommu_group *iommu_group_alloc(void)
{
	struct iommu_group *group;
	int ret;

	group = kzalloc(sizeof(*group), GFP_KERNEL);
	if (!group)
		return ERR_PTR(-ENOMEM);

	group->kobj.kset = iommu_group_kset;
	mutex_init(&group->mutex);
	INIT_LIST_HEAD(&group->devices);
	INIT_LIST_HEAD(&group->entry);
	xa_init(&group->pasid_array);

	ret = ida_alloc(&iommu_group_ida, GFP_KERNEL);
	if (ret < 0) {
		kfree(group);
		return ERR_PTR(ret);
	}
	group->id = ret;

	ret = kobject_init_and_add(&group->kobj, &iommu_group_ktype,
				   NULL, "%d", group->id);
	if (ret) {
		kobject_put(&group->kobj);
		return ERR_PTR(ret);
	}

	group->devices_kobj = kobject_create_and_add("devices", &group->kobj);
	if (!group->devices_kobj) {
		kobject_put(&group->kobj); /* triggers .release & free */
		return ERR_PTR(-ENOMEM);
	}

	/*
	 * The devices_kobj holds a reference on the group kobject, so
	 * as long as that exists so will the group.  We can therefore
	 * use the devices_kobj for reference counting.
	 */
	kobject_put(&group->kobj);

	ret = iommu_group_create_file(group,
				      &iommu_group_attr_reserved_regions);
	if (ret) {
		kobject_put(group->devices_kobj);
		return ERR_PTR(ret);
	}

	ret = iommu_group_create_file(group, &iommu_group_attr_type);
	if (ret) {
		kobject_put(group->devices_kobj);
		return ERR_PTR(ret);
	}

	pr_debug("Allocated group %d\n", group->id);

	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_alloc);
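
/*
 * Editor's note (illustrative, not part of iommu.c): groups allocated above
 * are exposed via the iommu_group_kset under /sys/kernel/iommu_groups/<id>/
 * (the kset itself is created elsewhere in this file).  For a hypothetical
 * group 7, the attribute files defined earlier might read as:
 *
 *	# cat /sys/kernel/iommu_groups/7/type
 *	DMA-FQ
 *	# cat /sys/kernel/iommu_groups/7/reserved_regions
 *	0x00000000fee00000 0x00000000feefffff msi
 *
 * with the second file's format taken from iommu_group_show_resv_regions().
 */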

/**
 * iommu_group_get_iommudata - retrieve iommu_data registered for a group
 * @group: the group
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations.  This function provides a way to retrieve it.  Caller
 * should hold a group reference.
 */
void *iommu_group_get_iommudata(struct iommu_group *group)
{
	return group->iommu_data;
}
EXPORT_SYMBOL_GPL(iommu_group_get_iommudata);

/**
 * iommu_group_set_iommudata - set iommu_data for a group
 * @group: the group
 * @iommu_data: new data
 * @release: release function for iommu_data
 *
 * iommu drivers can store data in the group for use when doing iommu
 * operations.  This function provides a way to set the data after
 * the group has been allocated.  Caller should hold a group reference.
 */
void iommu_group_set_iommudata(struct iommu_group *group, void *iommu_data,
			       void (*release)(void *iommu_data))
{
	group->iommu_data = iommu_data;
	group->iommu_data_release = release;
}
EXPORT_SYMBOL_GPL(iommu_group_set_iommudata);

/**
 * iommu_group_set_name - set name for a group
 * @group: the group
 * @name: name
 *
 * Allow iommu driver to set a name for a group.  When set it will
 * appear in a name attribute file under the group in sysfs.
 */
int iommu_group_set_name(struct iommu_group *group, const char *name)
{
	int ret;

	if (group->name) {
		iommu_group_remove_file(group, &iommu_group_attr_name);
		kfree(group->name);
		group->name = NULL;
		if (!name)
			return 0;
	}

	group->name = kstrdup(name, GFP_KERNEL);
	if (!group->name)
		return -ENOMEM;

	ret = iommu_group_create_file(group, &iommu_group_attr_name);
	if (ret) {
		kfree(group->name);
		group->name = NULL;
		return ret;
	}

	return 0;
}
EXPORT_SYMBOL_GPL(iommu_group_set_name);

static int iommu_create_device_direct_mappings(struct iommu_domain *domain,
					       struct device *dev)
{
	struct iommu_resv_region *entry;
	struct list_head mappings;
	unsigned long pg_size;
	int ret = 0;

	pg_size = domain->pgsize_bitmap ? 1UL << __ffs(domain->pgsize_bitmap) : 0;
	INIT_LIST_HEAD(&mappings);

	if (WARN_ON_ONCE(iommu_is_dma_domain(domain) && !pg_size))
		return -EINVAL;

	iommu_get_resv_regions(dev, &mappings);

	/* We need to consider overlapping regions for different devices */
	list_for_each_entry(entry, &mappings, list) {
		dma_addr_t start, end, addr;
		size_t map_size = 0;

		if (entry->type == IOMMU_RESV_DIRECT)
			dev->iommu->require_direct = 1;

		if ((entry->type != IOMMU_RESV_DIRECT &&
		     entry->type != IOMMU_RESV_DIRECT_RELAXABLE) ||
		    !iommu_is_dma_domain(domain))
			continue;

		start = ALIGN(entry->start, pg_size);
		end   = ALIGN(entry->start + entry->length, pg_size);

		for (addr = start; addr <= end; addr += pg_size) {
			phys_addr_t phys_addr;

			if (addr == end)
				goto map_end;

			phys_addr = iommu_iova_to_phys(domain, addr);
			if (!phys_addr) {
				map_size += pg_size;
				continue;
			}

map_end:
			if (map_size) {
				ret = iommu_map(domain, addr - map_size,
						addr - map_size, map_size,
						entry->prot, GFP_KERNEL);
				if (ret)
					goto out;
				map_size = 0;
			}
		}

	}

	if (!list_empty(&mappings) && iommu_is_dma_domain(domain))
		iommu_flush_iotlb_all(domain);

out:
	iommu_put_resv_regions(dev, &mappings);

	return ret;
}

/* This is undone by __iommu_group_free_device() */
static struct group_device *iommu_group_alloc_device(struct iommu_group *group,
						     struct device *dev)
{
	int ret, i = 0;
	struct group_device *device;

	device = kzalloc(sizeof(*device), GFP_KERNEL);
	if (!device)
		return ERR_PTR(-ENOMEM);

	device->dev = dev;

	ret = sysfs_create_link(&dev->kobj, &group->kobj, "iommu_group");
	if (ret)
		goto err_free_device;

	device->name = kasprintf(GFP_KERNEL, "%s", kobject_name(&dev->kobj));
rename:
	if (!device->name) {
		ret = -ENOMEM;
		goto err_remove_link;
	}

	ret = sysfs_create_link_nowarn(group->devices_kobj,
				       &dev->kobj, device->name);
	if (ret) {
		if (ret == -EEXIST && i >= 0) {
			/*
			 * Account for the slim chance of collision
			 * and append an instance to the name.
			 */
			kfree(device->name);
			device->name = kasprintf(GFP_KERNEL, "%s.%d",
						 kobject_name(&dev->kobj), i++);
			goto rename;
		}
		goto err_free_name;
	}

	trace_add_device_to_group(group->id, dev);

	dev_info(dev, "Adding to iommu group %d\n", group->id);

	return device;

err_free_name:
	kfree(device->name);
err_remove_link:
	sysfs_remove_link(&dev->kobj, "iommu_group");
err_free_device:
	kfree(device);
	dev_err(dev, "Failed to add to iommu group %d: %d\n", group->id, ret);
	return ERR_PTR(ret);
}

/**
 * iommu_group_add_device - add a device to an iommu group
 * @group: the group into which to add the device (reference should be held)
 * @dev: the device
 *
 * This function is called by an iommu driver to add a device into a
 * group.  Adding a device increments the group reference count.
 */
int iommu_group_add_device(struct iommu_group *group, struct device *dev)
{
	struct group_device *gdev;

	gdev = iommu_group_alloc_device(group, dev);
	if (IS_ERR(gdev))
		return PTR_ERR(gdev);

	iommu_group_ref_get(group);
	dev->iommu_group = group;

	mutex_lock(&group->mutex);
	list_add_tail(&gdev->list, &group->devices);
	mutex_unlock(&group->mutex);
	return 0;
}
EXPORT_SYMBOL_GPL(iommu_group_add_device);

/**
 * iommu_group_remove_device - remove a device from its current group
 * @dev: device to be removed
 *
 * This function is called by an iommu driver to remove the device from
 * its current group.  This decrements the iommu group reference count.
 */
void iommu_group_remove_device(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	if (!group)
		return;

	dev_info(dev, "Removing from iommu group %d\n", group->id);

	__iommu_group_remove_device(dev);
}
EXPORT_SYMBOL_GPL(iommu_group_remove_device);

static struct device *iommu_group_first_dev(struct iommu_group *group)
{
	lockdep_assert_held(&group->mutex);
	return list_first_entry(&group->devices, struct group_device, list)->dev;
}

/**
 * iommu_group_for_each_dev - iterate over each device in the group
 * @group: the group
 * @data: caller opaque data to be passed to callback function
 * @fn: caller supplied callback function
 *
 * This function is called by group users to iterate over group devices.
 * Callers should hold a reference count to the group during callback.
 * The group->mutex is held across callbacks, which will block calls to
 * iommu_group_add/remove_device.
 */
int iommu_group_for_each_dev(struct iommu_group *group, void *data,
			     int (*fn)(struct device *, void *))
{
	struct group_device *device;
	int ret = 0;

	mutex_lock(&group->mutex);
	for_each_group_device(group, device) {
		ret = fn(device->dev, data);
		if (ret)
			break;
	}
	mutex_unlock(&group->mutex);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_group_for_each_dev);
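
/*
 * Editor's note: a minimal caller sketch for the iterator above, not part
 * of iommu.c.  The names are hypothetical; the callback signature is the
 * one documented at iommu_group_for_each_dev().
 */
static int example_count_one(struct device *dev, void *data)
{
	(*(int *)data)++;
	return 0;	/* a non-zero return would stop the walk */
}

static int example_count_group_devices(struct iommu_group *group)
{
	int count = 0;

	iommu_group_for_each_dev(group, &count, example_count_one);
	return count;
}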

/**
 * iommu_group_get - Return the group for a device and increment reference
 * @dev: get the group that this device belongs to
 *
 * This function is called by iommu drivers and users to get the group
 * for the specified device.  If found, the group is returned and the group
 * reference is incremented, else NULL.
 */
struct iommu_group *iommu_group_get(struct device *dev)
{
	struct iommu_group *group = dev->iommu_group;

	if (group)
		kobject_get(group->devices_kobj);

	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_get);

/**
 * iommu_group_ref_get - Increment reference on a group
 * @group: the group to use, must not be NULL
 *
 * This function is called by iommu drivers to take additional references on an
 * existing group.  Returns the given group for convenience.
 */
struct iommu_group *iommu_group_ref_get(struct iommu_group *group)
{
	kobject_get(group->devices_kobj);
	return group;
}
EXPORT_SYMBOL_GPL(iommu_group_ref_get);

/**
 * iommu_group_put - Decrement group reference
 * @group: the group to use
 *
 * This function is called by iommu drivers and users to release the
 * iommu group.  Once the reference count is zero, the group is released.
 */
void iommu_group_put(struct iommu_group *group)
{
	if (group)
		kobject_put(group->devices_kobj);
}
EXPORT_SYMBOL_GPL(iommu_group_put);

/**
 * iommu_register_device_fault_handler() - Register a device fault handler
 * @dev: the device
 * @handler: the fault handler
 * @data: private data passed as argument to the handler
 *
 * When an IOMMU fault event is received, this handler gets called with the
 * fault event and data as argument. The handler should return 0 on success. If
 * the fault is recoverable (IOMMU_FAULT_PAGE_REQ), the consumer should also
 * complete the fault by calling iommu_page_response() with one of the following
 * response codes:
 * - IOMMU_PAGE_RESP_SUCCESS: retry the translation
 * - IOMMU_PAGE_RESP_INVALID: terminate the fault
 * - IOMMU_PAGE_RESP_FAILURE: terminate the fault and stop reporting
 *   page faults if possible.
 *
 * Return 0 if the fault handler was installed successfully, or an error.
 */
int iommu_register_device_fault_handler(struct device *dev,
					iommu_dev_fault_handler_t handler,
					void *data)
{
	struct dev_iommu *param = dev->iommu;
	int ret = 0;

	if (!param)
		return -EINVAL;

	mutex_lock(&param->lock);
	/* Only allow one fault handler registered for each device */
	if (param->fault_param) {
		ret = -EBUSY;
		goto done_unlock;
	}

	get_device(dev);
	param->fault_param = kzalloc(sizeof(*param->fault_param), GFP_KERNEL);
	if (!param->fault_param) {
		put_device(dev);
		ret = -ENOMEM;
		goto done_unlock;
	}
	param->fault_param->handler = handler;
	param->fault_param->data = data;
	mutex_init(&param->fault_param->lock);
	INIT_LIST_HEAD(&param->fault_param->faults);

done_unlock:
	mutex_unlock(&param->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_register_device_fault_handler);

/**
 * iommu_unregister_device_fault_handler() - Unregister the device fault handler
 * @dev: the device
 *
 * Remove the device fault handler installed with
 * iommu_register_device_fault_handler().
 *
 * Return 0 on success, or an error.
 */
int iommu_unregister_device_fault_handler(struct device *dev)
{
	struct dev_iommu *param = dev->iommu;
	int ret = 0;

	if (!param)
		return -EINVAL;

	mutex_lock(&param->lock);

	if (!param->fault_param)
		goto unlock;

	/* we cannot unregister handler if there are pending faults */
	if (!list_empty(&param->fault_param->faults)) {
		ret = -EBUSY;
		goto unlock;
	}

	kfree(param->fault_param);
	param->fault_param = NULL;
	put_device(dev);
unlock:
	mutex_unlock(&param->lock);

	return ret;
}
EXPORT_SYMBOL_GPL(iommu_unregister_device_fault_handler);

/**
 * iommu_report_device_fault() - Report fault event to device driver
 * @dev: the device
 * @evt: fault event data
 *
 * Called by IOMMU drivers when a fault is detected, typically in a threaded IRQ
 * handler. When this function fails and the fault is recoverable, it is the
 * caller's responsibility to complete the fault.
 *
 * Return 0 on success, or an error.
 */
int iommu_report_device_fault(struct device *dev, struct iommu_fault_event *evt)
{
	struct dev_iommu *param = dev->iommu;
	struct iommu_fault_event *evt_pending = NULL;
	struct iommu_fault_param *fparam;
	int ret = 0;

	if (!param || !evt)
		return -EINVAL;

	/* we only report device fault if there is a handler registered */
	mutex_lock(&param->lock);
	fparam = param->fault_param;
	if (!fparam || !fparam->handler) {
		ret = -EINVAL;
		goto done_unlock;
	}

	if (evt->fault.type == IOMMU_FAULT_PAGE_REQ &&
	    (evt->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) {
		evt_pending = kmemdup(evt, sizeof(struct iommu_fault_event),
				      GFP_KERNEL);
		if (!evt_pending) {
			ret = -ENOMEM;
			goto done_unlock;
		}
		mutex_lock(&fparam->lock);
		list_add_tail(&evt_pending->list, &fparam->faults);
		mutex_unlock(&fparam->lock);
	}

	ret = fparam->handler(&evt->fault, fparam->data);
	if (ret && evt_pending) {
		mutex_lock(&fparam->lock);
		list_del(&evt_pending->list);
		mutex_unlock(&fparam->lock);
		kfree(evt_pending);
	}
done_unlock:
	mutex_unlock(&param->lock);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_report_device_fault);

int iommu_page_response(struct device *dev,
			struct iommu_page_response *msg)
{
	bool needs_pasid;
	int ret = -EINVAL;
	struct iommu_fault_event *evt;
	struct iommu_fault_page_request *prm;
	struct dev_iommu *param = dev->iommu;
	const struct iommu_ops *ops = dev_iommu_ops(dev);
	bool has_pasid = msg->flags & IOMMU_PAGE_RESP_PASID_VALID;

	if (!ops->page_response)
		return -ENODEV;

	if (!param || !param->fault_param)
		return -EINVAL;

	if (msg->version != IOMMU_PAGE_RESP_VERSION_1 ||
	    msg->flags & ~IOMMU_PAGE_RESP_PASID_VALID)
		return -EINVAL;

	/* Only send response if there is a fault report pending */
	mutex_lock(&param->fault_param->lock);
	if (list_empty(&param->fault_param->faults)) {
		dev_warn_ratelimited(dev, "no pending PRQ, drop response\n");
		goto done_unlock;
	}
	/*
	 * Check if we have a matching page request pending to respond,
	 * otherwise return -EINVAL
	 */
	list_for_each_entry(evt, &param->fault_param->faults, list) {
		prm = &evt->fault.prm;
		if (prm->grpid != msg->grpid)
			continue;

		/*
		 * If the PASID is required, the corresponding request is
		 * matched using the group ID, the PASID valid bit and the PASID
		 * value. Otherwise only the group ID matches request and
		 * response.
		 */
		needs_pasid = prm->flags & IOMMU_FAULT_PAGE_RESPONSE_NEEDS_PASID;
		if (needs_pasid && (!has_pasid || msg->pasid != prm->pasid))
			continue;

		if (!needs_pasid && has_pasid) {
			/* No big deal, just clear it. */
			msg->flags &= ~IOMMU_PAGE_RESP_PASID_VALID;
			msg->pasid = 0;
		}

		ret = ops->page_response(dev, evt, msg);
		list_del(&evt->list);
		kfree(evt);
		break;
	}

done_unlock:
	mutex_unlock(&param->fault_param->lock);
	return ret;
}
EXPORT_SYMBOL_GPL(iommu_page_response);
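
/*
 * Editor's note: a hypothetical consumer sketch, not part of iommu.c,
 * following the handler contract documented at
 * iommu_register_device_fault_handler() above; the struct layouts come
 * from <linux/iommu.h>.  A real consumer would also propagate the PASID
 * when the page request requires one.
 */
static int example_fault_handler(struct iommu_fault *fault, void *data)
{
	struct device *dev = data;
	struct iommu_page_response resp = {
		.version	= IOMMU_PAGE_RESP_VERSION_1,
		.grpid		= fault->prm.grpid,
		.code		= IOMMU_PAGE_RESP_SUCCESS,
	};

	if (fault->type != IOMMU_FAULT_PAGE_REQ)
		return -EOPNOTSUPP;

	/* ... service the page request here ... */

	return iommu_page_response(dev, &resp);
}

/* Installed with: iommu_register_device_fault_handler(dev, example_fault_handler, dev); */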
1543
1544/**
1545 * iommu_group_id - Return ID for a group
1546 * @group: the group to ID
1547 *
1548 * Return the unique ID for the group matching the sysfs group number.
1549 */
1550int iommu_group_id(struct iommu_group *group)
1551{
1552	return group->id;
1553}
1554EXPORT_SYMBOL_GPL(iommu_group_id);
1555
1556static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
1557					       unsigned long *devfns);
1558
1559/*
1560 * To consider a PCI device isolated, we require ACS to support Source
1561 * Validation, Request Redirection, Completer Redirection, and Upstream
1562 * Forwarding.  This effectively means that devices cannot spoof their
1563 * requester ID, requests and completions cannot be redirected, and all
1564 * transactions are forwarded upstream, even as it passes through a
1565 * bridge where the target device is downstream.
1566 */
1567#define REQ_ACS_FLAGS   (PCI_ACS_SV | PCI_ACS_RR | PCI_ACS_CR | PCI_ACS_UF)
1568
1569/*
1570 * For multifunction devices which are not isolated from each other, find
1571 * all the other non-isolated functions and look for existing groups.  For
1572 * each function, we also need to look for aliases to or from other devices
1573 * that may already have a group.
1574 */
1575static struct iommu_group *get_pci_function_alias_group(struct pci_dev *pdev,
1576							unsigned long *devfns)
1577{
1578	struct pci_dev *tmp = NULL;
1579	struct iommu_group *group;
1580
1581	if (!pdev->multifunction || pci_acs_enabled(pdev, REQ_ACS_FLAGS))
1582		return NULL;
1583
1584	for_each_pci_dev(tmp) {
1585		if (tmp == pdev || tmp->bus != pdev->bus ||
1586		    PCI_SLOT(tmp->devfn) != PCI_SLOT(pdev->devfn) ||
1587		    pci_acs_enabled(tmp, REQ_ACS_FLAGS))
1588			continue;
1589
1590		group = get_pci_alias_group(tmp, devfns);
1591		if (group) {
1592			pci_dev_put(tmp);
1593			return group;
1594		}
1595	}
1596
1597	return NULL;
1598}
1599
1600/*
1601 * Look for aliases to or from the given device for existing groups. DMA
1602 * aliases are only supported on the same bus, therefore the search
1603 * space is quite small (especially since we're really only looking at PCIe
1604 * devices, and therefore only expect multiple slots on the root complex or
1605 * downstream switch ports).  It's conceivable though that a pair of
1606 * multifunction devices could have aliases between them that would cause a
1607 * loop.  To prevent this, we use a bitmap to track where we've been.
1608 */
1609static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
1610					       unsigned long *devfns)
1611{
1612	struct pci_dev *tmp = NULL;
1613	struct iommu_group *group;
1614
1615	if (test_and_set_bit(pdev->devfn & 0xff, devfns))
1616		return NULL;
1617
1618	group = iommu_group_get(&pdev->dev);
1619	if (group)
1620		return group;
1621
1622	for_each_pci_dev(tmp) {
1623		if (tmp == pdev || tmp->bus != pdev->bus)
1624			continue;
1625
1626		/* We alias them or they alias us */
1627		if (pci_devs_are_dma_aliases(pdev, tmp)) {
1628			group = get_pci_alias_group(tmp, devfns);
1629			if (group) {
1630				pci_dev_put(tmp);
1631				return group;
1632			}
1633
1634			group = get_pci_function_alias_group(tmp, devfns);
1635			if (group) {
1636				pci_dev_put(tmp);
1637				return group;
1638			}
1639		}
1640	}
1641
1642	return NULL;
1643}
1644
1645struct group_for_pci_data {
1646	struct pci_dev *pdev;
1647	struct iommu_group *group;
1648};
1649
1650/*
1651 * DMA alias iterator callback, return the last seen device.  Stop and return
1652 * the IOMMU group if we find one along the way.
1653 */
1654static int get_pci_alias_or_group(struct pci_dev *pdev, u16 alias, void *opaque)
1655{
1656	struct group_for_pci_data *data = opaque;
1657
1658	data->pdev = pdev;
1659	data->group = iommu_group_get(&pdev->dev);
1660
1661	return data->group != NULL;
1662}
1663
1664/*
1665 * Generic device_group call-back function. It just allocates one
1666 * iommu-group per device.
1667 */
1668struct iommu_group *generic_device_group(struct device *dev)
1669{
1670	return iommu_group_alloc();
1671}
1672EXPORT_SYMBOL_GPL(generic_device_group);
1673
1674/*
1675 * Generic device_group call-back function. It just allocates one
1676 * iommu-group per iommu driver instance shared by every device
1677 * probed by that iommu driver.
1678 */
1679struct iommu_group *generic_single_device_group(struct device *dev)
1680{
1681	struct iommu_device *iommu = dev->iommu->iommu_dev;
1682
1683	if (!iommu->singleton_group) {
1684		struct iommu_group *group;
1685
1686		group = iommu_group_alloc();
1687		if (IS_ERR(group))
1688			return group;
1689		iommu->singleton_group = group;
1690	}
1691	return iommu_group_ref_get(iommu->singleton_group);
1692}
1693EXPORT_SYMBOL_GPL(generic_single_device_group);
1694
1695/*
1696 * Use standard PCI bus topology, isolation features, and DMA alias quirks
1697 * to find or create an IOMMU group for a device.
1698 */
1699struct iommu_group *pci_device_group(struct device *dev)
1700{
1701	struct pci_dev *pdev = to_pci_dev(dev);
1702	struct group_for_pci_data data;
1703	struct pci_bus *bus;
1704	struct iommu_group *group = NULL;
1705	u64 devfns[4] = { 0 };
1706
1707	if (WARN_ON(!dev_is_pci(dev)))
1708		return ERR_PTR(-EINVAL);
1709
1710	/*
1711	 * Find the upstream DMA alias for the device.  A device must not
1712	 * be aliased due to topology in order to have its own IOMMU group.
1713	 * If we find an alias along the way that already belongs to a
1714	 * group, use it.
1715	 */
1716	if (pci_for_each_dma_alias(pdev, get_pci_alias_or_group, &data))
1717		return data.group;
1718
1719	pdev = data.pdev;
1720
1721	/*
1722	 * Continue upstream from the point of minimum IOMMU granularity
1723	 * due to aliases to the point where devices are protected from
1724	 * peer-to-peer DMA by PCI ACS.  Again, if we find an existing
1725	 * group, use it.
1726	 */
1727	for (bus = pdev->bus; !pci_is_root_bus(bus); bus = bus->parent) {
1728		if (!bus->self)
1729			continue;
1730
1731		if (pci_acs_path_enabled(bus->self, NULL, REQ_ACS_FLAGS))
1732			break;
1733
1734		pdev = bus->self;
1735
1736		group = iommu_group_get(&pdev->dev);
1737		if (group)
1738			return group;
1739	}
1740
1741	/*
1742	 * Look for existing groups on device aliases.  If we alias another
1743	 * device or another device aliases us, use the same group.
1744	 */
1745	group = get_pci_alias_group(pdev, (unsigned long *)devfns);
1746	if (group)
1747		return group;
1748
1749	/*
1750	 * Look for existing groups on non-isolated functions on the same
1751	 * slot and aliases of those functions, if any.  No need to clear
1752	 * the search bitmap, the tested devfns are still valid.
1753	 */
1754	group = get_pci_function_alias_group(pdev, (unsigned long *)devfns);
1755	if (group)
1756		return group;
1757
1758	/* No shared group found, allocate new */
1759	return iommu_group_alloc();
1760}
1761EXPORT_SYMBOL_GPL(pci_device_group);
1762
1763/* Get the IOMMU group for device on fsl-mc bus */
1764struct iommu_group *fsl_mc_device_group(struct device *dev)
1765{
1766	struct device *cont_dev = fsl_mc_cont_dev(dev);
1767	struct iommu_group *group;
1768
1769	group = iommu_group_get(cont_dev);
1770	if (!group)
1771		group = iommu_group_alloc();
1772	return group;
1773}
1774EXPORT_SYMBOL_GPL(fsl_mc_device_group);
1775
1776static struct iommu_domain *
1777__iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
1778{
1779	if (group->default_domain && group->default_domain->type == req_type)
1780		return group->default_domain;
1781	return __iommu_group_domain_alloc(group, req_type);
1782}
1783
1784/*
1785 * req_type of 0 means "auto" which means to select a domain based on
1786 * iommu_def_domain_type or what the driver actually supports.
1787 */
1788static struct iommu_domain *
1789iommu_group_alloc_default_domain(struct iommu_group *group, int req_type)
1790{
1791	const struct iommu_ops *ops = dev_iommu_ops(iommu_group_first_dev(group));
1792	struct iommu_domain *dom;
1793
1794	lockdep_assert_held(&group->mutex);
1795
1796	/*
1797	 * Allow legacy drivers to specify the domain that will be the default
1798	 * domain. This should always be either an IDENTITY/BLOCKED/PLATFORM
1799	 * domain. Do not use in new drivers.
1800	 */
1801	if (ops->default_domain) {
1802		if (req_type != ops->default_domain->type)
1803			return ERR_PTR(-EINVAL);
1804		return ops->default_domain;
1805	}
1806
1807	if (req_type)
1808		return __iommu_group_alloc_default_domain(group, req_type);
1809
1810	/* The driver gave no guidance on what type to use, try the default */
1811	dom = __iommu_group_alloc_default_domain(group, iommu_def_domain_type);
1812	if (!IS_ERR(dom))
1813		return dom;
1814
1815	/* Otherwise IDENTITY and DMA_FQ defaults will try DMA */
1816	if (iommu_def_domain_type == IOMMU_DOMAIN_DMA)
1817		return ERR_PTR(-EINVAL);
1818	dom = __iommu_group_alloc_default_domain(group, IOMMU_DOMAIN_DMA);
1819	if (IS_ERR(dom))
1820		return dom;
1821
1822	pr_warn("Failed to allocate default IOMMU domain of type %u for group %s - Falling back to IOMMU_DOMAIN_DMA\n",
1823		iommu_def_domain_type, group->name);
1824	return dom;
1825}
1826
1827struct iommu_domain *iommu_group_default_domain(struct iommu_group *group)
1828{
1829	return group->default_domain;
1830}
1831
1832static int probe_iommu_group(struct device *dev, void *data)
1833{
1834	struct list_head *group_list = data;
1835	int ret;
1836
1837	mutex_lock(&iommu_probe_device_lock);
1838	ret = __iommu_probe_device(dev, group_list);
1839	mutex_unlock(&iommu_probe_device_lock);
1840	if (ret == -ENODEV)
1841		ret = 0;
1842
1843	return ret;
1844}
1845
1846static int iommu_bus_notifier(struct notifier_block *nb,
1847			      unsigned long action, void *data)
1848{
1849	struct device *dev = data;
1850
1851	if (action == BUS_NOTIFY_ADD_DEVICE) {
1852		int ret;
1853
1854		ret = iommu_probe_device(dev);
1855		return (ret) ? NOTIFY_DONE : NOTIFY_OK;
1856	} else if (action == BUS_NOTIFY_REMOVED_DEVICE) {
1857		iommu_release_device(dev);
1858		return NOTIFY_OK;
1859	}
1860
1861	return 0;
1862}
1863
1864/*
1865 * Combine the driver's chosen def_domain_type across all the devices in a
1866 * group. Drivers must give a consistent result.
1867 */
1868static int iommu_get_def_domain_type(struct iommu_group *group,
1869				     struct device *dev, int cur_type)
1870{
1871	const struct iommu_ops *ops = dev_iommu_ops(dev);
1872	int type;
1873
1874	if (ops->default_domain) {
1875		/*
1876		 * Drivers that declare a global static default_domain will
1877		 * always choose that.
1878		 */
1879		type = ops->default_domain->type;
1880	} else {
1881		if (ops->def_domain_type)
1882			type = ops->def_domain_type(dev);
1883		else
1884			return cur_type;
1885	}
1886	if (!type || cur_type == type)
1887		return cur_type;
1888	if (!cur_type)
1889		return type;
1890
1891	dev_err_ratelimited(
1892		dev,
1893		"IOMMU driver error, requesting conflicting def_domain_type, %s and %s, for devices in group %u.\n",
1894		iommu_domain_type_str(cur_type), iommu_domain_type_str(type),
1895		group->id);
1896
1897	/*
1898	 * Try to recover: drivers are allowed to force IDENTITY or DMA; IDENTITY
1899	 * takes precedence.
1900	 */
1901	if (type == IOMMU_DOMAIN_IDENTITY)
1902		return type;
1903	return cur_type;
1904}
1905
1906/*
1907 * A target_type of 0 will select the best domain type. 0 can be returned in
1908 * this case meaning the global default should be used.
1909 */
1910static int iommu_get_default_domain_type(struct iommu_group *group,
1911					 int target_type)
1912{
1913	struct device *untrusted = NULL;
1914	struct group_device *gdev;
1915	int driver_type = 0;
1916
1917	lockdep_assert_held(&group->mutex);
1918
1919	/*
1920	 * ARM32 drivers supporting CONFIG_ARM_DMA_USE_IOMMU can declare an
1921	 * identity_domain and it will automatically become their default
1922	 * domain. Later on ARM_DMA_USE_IOMMU will install its UNMANAGED domain.
1923	 * Override the selection to IDENTITY.
1924	 */
1925	if (IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)) {
1926		static_assert(!(IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU) &&
1927				IS_ENABLED(CONFIG_IOMMU_DMA)));
1928		driver_type = IOMMU_DOMAIN_IDENTITY;
1929	}
1930
1931	for_each_group_device(group, gdev) {
1932		driver_type = iommu_get_def_domain_type(group, gdev->dev,
1933							driver_type);
1934
1935		if (dev_is_pci(gdev->dev) && to_pci_dev(gdev->dev)->untrusted) {
1936			/*
1937			 * No system using ARM32 will set untrusted; it cannot
1938			 * work there.
1939			 */
1940			if (WARN_ON(IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)))
1941				return -1;
1942			untrusted = gdev->dev;
1943		}
1944	}
1945
1946	/*
1947	 * If the common dma ops are not selected in kconfig then we cannot use
1948	 * IOMMU_DOMAIN_DMA at all. Force IDENTITY if nothing else has been
1949	 * selected.
1950	 */
1951	if (!IS_ENABLED(CONFIG_IOMMU_DMA)) {
1952		if (WARN_ON(driver_type == IOMMU_DOMAIN_DMA))
1953			return -1;
1954		if (!driver_type)
1955			driver_type = IOMMU_DOMAIN_IDENTITY;
1956	}
1957
1958	if (untrusted) {
1959		if (driver_type && driver_type != IOMMU_DOMAIN_DMA) {
1960			dev_err_ratelimited(
1961				untrusted,
1962				"Device is not trusted, but driver is overriding group %u to %s, refusing to probe.\n",
1963				group->id, iommu_domain_type_str(driver_type));
1964			return -1;
1965		}
1966		driver_type = IOMMU_DOMAIN_DMA;
1967	}
1968
1969	if (target_type) {
1970		if (driver_type && target_type != driver_type)
1971			return -1;
1972		return target_type;
1973	}
1974	return driver_type;
1975}
1976
1977static void iommu_group_do_probe_finalize(struct device *dev)
1978{
1979	const struct iommu_ops *ops = dev_iommu_ops(dev);
1980
1981	if (ops->probe_finalize)
1982		ops->probe_finalize(dev);
1983}
1984
1985int bus_iommu_probe(const struct bus_type *bus)
1986{
1987	struct iommu_group *group, *next;
1988	LIST_HEAD(group_list);
1989	int ret;
1990
1991	ret = bus_for_each_dev(bus, NULL, &group_list, probe_iommu_group);
1992	if (ret)
1993		return ret;
1994
1995	list_for_each_entry_safe(group, next, &group_list, entry) {
1996		struct group_device *gdev;
1997
1998		mutex_lock(&group->mutex);
1999
2000		/* Remove item from the list */
2001		list_del_init(&group->entry);
2002
2003		/*
2004		 * We go to the trouble of deferred default domain creation so
2005		 * that the cross-group default domain type and the setup of the
2006		 * IOMMU_RESV_DIRECT will work correctly in non-hotplug scenarios.
2007		 */
2008		ret = iommu_setup_default_domain(group, 0);
2009		if (ret) {
2010			mutex_unlock(&group->mutex);
2011			return ret;
2012		}
2013		mutex_unlock(&group->mutex);
2014
2015		/*
2016		 * FIXME: Mis-locked because the ops->probe_finalize() call-back
2017		 * of some IOMMU drivers calls arm_iommu_attach_device() which
2018		 * in-turn might call back into IOMMU core code, where it tries
2019		 * to take group->mutex, resulting in a deadlock.
2020		 */
2021		for_each_group_device(group, gdev)
2022			iommu_group_do_probe_finalize(gdev->dev);
2023	}
2024
2025	return 0;
2026}
2027
2028/**
2029 * iommu_present() - make platform-specific assumptions about an IOMMU
2030 * @bus: bus to check
2031 *
2032 * Do not use this function. You want device_iommu_mapped() instead.
2033 *
2034 * Return: true if some IOMMU is present and aware of devices on the given bus;
2035 * in general it may not be the only IOMMU, and it may not have anything to do
2036 * with whatever device you are ultimately interested in.
2037 */
2038bool iommu_present(const struct bus_type *bus)
2039{
2040	bool ret = false;
2041
2042	for (int i = 0; i < ARRAY_SIZE(iommu_buses); i++) {
2043		if (iommu_buses[i] == bus) {
2044			spin_lock(&iommu_device_lock);
2045			ret = !list_empty(&iommu_device_list);
2046			spin_unlock(&iommu_device_lock);
2047		}
2048	}
2049	return ret;
2050}
2051EXPORT_SYMBOL_GPL(iommu_present);
2052
2053/**
2054 * device_iommu_capable() - check for a general IOMMU capability
2055 * @dev: device to which the capability would be relevant, if available
2056 * @cap: IOMMU capability
2057 *
2058 * Return: true if an IOMMU is present and supports the given capability
2059 * for the given device, otherwise false.
2060 */
2061bool device_iommu_capable(struct device *dev, enum iommu_cap cap)
2062{
2063	const struct iommu_ops *ops;
2064
2065	if (!dev_has_iommu(dev))
2066		return false;
2067
2068	ops = dev_iommu_ops(dev);
2069	if (!ops->capable)
2070		return false;
2071
2072	return ops->capable(dev, cap);
2073}
2074EXPORT_SYMBOL_GPL(device_iommu_capable);
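/*
 * Usage sketch (illustrative only): VFIO-style users gate features on such
 * capabilities, e.g.:
 *
 *	if (!device_iommu_capable(dev, IOMMU_CAP_CACHE_COHERENCY))
 *		return -EINVAL;
 */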
2075
2076/**
2077 * iommu_group_has_isolated_msi() - Compute msi_device_has_isolated_msi()
2078 *       for a group
2079 * @group: Group to query
2080 *
2081 * IOMMU groups should not have differing values of
2082 * msi_device_has_isolated_msi() for devices in a group. However nothing
2083 * directly prevents this, so ensure mistakes don't result in isolation failures
2084 * by checking that all the devices are the same.
2085 */
2086bool iommu_group_has_isolated_msi(struct iommu_group *group)
2087{
2088	struct group_device *group_dev;
2089	bool ret = true;
2090
2091	mutex_lock(&group->mutex);
2092	for_each_group_device(group, group_dev)
2093		ret &= msi_device_has_isolated_msi(group_dev->dev);
2094	mutex_unlock(&group->mutex);
2095	return ret;
2096}
2097EXPORT_SYMBOL_GPL(iommu_group_has_isolated_msi);
2098
2099/**
2100 * iommu_set_fault_handler() - set a fault handler for an iommu domain
2101 * @domain: iommu domain
2102 * @handler: fault handler
2103 * @token: user data, will be passed back to the fault handler
2104 *
2105 * This function should be used by IOMMU users which want to be notified
2106 * whenever an IOMMU fault happens.
2107 *
2108 * The fault handler itself should return 0 on success, and an appropriate
2109 * error code otherwise.
2110 */
2111void iommu_set_fault_handler(struct iommu_domain *domain,
2112					iommu_fault_handler_t handler,
2113					void *token)
2114{
2115	BUG_ON(!domain);
2116
2117	domain->handler = handler;
2118	domain->handler_token = token;
2119}
2120EXPORT_SYMBOL_GPL(iommu_set_fault_handler);
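/*
 * Usage sketch (illustrative only; the handler name and token below are
 * hypothetical): the owner of an UNMANAGED domain installs a handler right
 * after allocating the domain. Returning -ENOSYS keeps the IOMMU driver's
 * default fault behaviour.
 *
 *	static int my_fault_handler(struct iommu_domain *domain,
 *				    struct device *dev, unsigned long iova,
 *				    int flags, void *token)
 *	{
 *		dev_err(dev, "unhandled fault at IOVA %#lx, flags %#x\n",
 *			iova, flags);
 *		return -ENOSYS;
 *	}
 *
 *	iommu_set_fault_handler(domain, my_fault_handler, my_driver_data);
 */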
2121
2122static struct iommu_domain *__iommu_domain_alloc(const struct iommu_ops *ops,
2123						 struct device *dev,
2124						 unsigned int type)
2125{
2126	struct iommu_domain *domain;
2127	unsigned int alloc_type = type & IOMMU_DOMAIN_ALLOC_FLAGS;
2128
2129	if (alloc_type == IOMMU_DOMAIN_IDENTITY && ops->identity_domain)
2130		return ops->identity_domain;
2131	else if (alloc_type == IOMMU_DOMAIN_BLOCKED && ops->blocked_domain)
2132		return ops->blocked_domain;
2133	else if (type & __IOMMU_DOMAIN_PAGING && ops->domain_alloc_paging)
2134		domain = ops->domain_alloc_paging(dev);
2135	else if (ops->domain_alloc)
2136		domain = ops->domain_alloc(alloc_type);
2137	else
2138		return ERR_PTR(-EOPNOTSUPP);
2139
2140	/*
2141	 * Many domain_alloc ops now return ERR_PTR, make things easier for the
2142	 * driver by accepting ERR_PTR from all domain_alloc ops instead of
2143	 * having two rules.
2144	 */
2145	if (IS_ERR(domain))
2146		return domain;
2147	if (!domain)
2148		return ERR_PTR(-ENOMEM);
2149
2150	domain->type = type;
2151	domain->owner = ops;
2152	/*
2153	 * If not already set, assume all sizes by default; the driver
2154	 * may override this later
2155	 */
2156	if (!domain->pgsize_bitmap)
2157		domain->pgsize_bitmap = ops->pgsize_bitmap;
2158
2159	if (!domain->ops)
2160		domain->ops = ops->default_domain_ops;
2161
2162	if (iommu_is_dma_domain(domain)) {
2163		int rc;
2164
2165		rc = iommu_get_dma_cookie(domain);
2166		if (rc) {
2167			iommu_domain_free(domain);
2168			return ERR_PTR(rc);
2169		}
2170	}
2171	return domain;
2172}
2173
2174static struct iommu_domain *
2175__iommu_group_domain_alloc(struct iommu_group *group, unsigned int type)
2176{
2177	struct device *dev = iommu_group_first_dev(group);
2178
2179	return __iommu_domain_alloc(dev_iommu_ops(dev), dev, type);
2180}
2181
2182static int __iommu_domain_alloc_dev(struct device *dev, void *data)
2183{
2184	const struct iommu_ops **ops = data;
2185
2186	if (!dev_has_iommu(dev))
2187		return 0;
2188
2189	if (WARN_ONCE(*ops && *ops != dev_iommu_ops(dev),
2190		      "Multiple IOMMU drivers present for bus %s, which the public IOMMU API can't fully support yet. You will still need to disable one or more for this to work, sorry!\n",
2191		      dev_bus_name(dev)))
2192		return -EBUSY;
2193
2194	*ops = dev_iommu_ops(dev);
2195	return 0;
2196}
2197
2198struct iommu_domain *iommu_domain_alloc(const struct bus_type *bus)
2199{
2200	const struct iommu_ops *ops = NULL;
2201	int err = bus_for_each_dev(bus, NULL, &ops, __iommu_domain_alloc_dev);
2202	struct iommu_domain *domain;
2203
2204	if (err || !ops)
2205		return NULL;
2206
2207	domain = __iommu_domain_alloc(ops, NULL, IOMMU_DOMAIN_UNMANAGED);
2208	if (IS_ERR(domain))
2209		return NULL;
2210	return domain;
2211}
2212EXPORT_SYMBOL_GPL(iommu_domain_alloc);
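/*
 * Usage sketch (illustrative only): legacy users such as VFIO type1
 * allocated one unmanaged domain per container and freed it on teardown.
 * NULL is returned on any failure here, so no IS_ERR() check is needed.
 *
 *	struct iommu_domain *domain = iommu_domain_alloc(&pci_bus_type);
 *
 *	if (!domain)
 *		return -ENOMEM;
 *	...
 *	iommu_domain_free(domain);
 */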
2213
2214void iommu_domain_free(struct iommu_domain *domain)
2215{
2216	if (domain->type == IOMMU_DOMAIN_SVA)
2217		mmdrop(domain->mm);
2218	iommu_put_dma_cookie(domain);
2219	if (domain->ops->free)
2220		domain->ops->free(domain);
2221}
2222EXPORT_SYMBOL_GPL(iommu_domain_free);
2223
2224/*
2225 * Put the group's domain back to the appropriate core-owned domain - either the
2226 * standard kernel-mode DMA configuration or an all-DMA-blocked domain.
2227 */
2228static void __iommu_group_set_core_domain(struct iommu_group *group)
2229{
2230	struct iommu_domain *new_domain;
2231
2232	if (group->owner)
2233		new_domain = group->blocking_domain;
2234	else
2235		new_domain = group->default_domain;
2236
2237	__iommu_group_set_domain_nofail(group, new_domain);
2238}
2239
2240static int __iommu_attach_device(struct iommu_domain *domain,
2241				 struct device *dev)
2242{
2243	int ret;
2244
2245	if (unlikely(domain->ops->attach_dev == NULL))
2246		return -ENODEV;
2247
2248	ret = domain->ops->attach_dev(domain, dev);
2249	if (ret)
2250		return ret;
2251	dev->iommu->attach_deferred = 0;
2252	trace_attach_device_to_domain(dev);
2253	return 0;
2254}
2255
2256/**
2257 * iommu_attach_device - Attach an IOMMU domain to a device
2258 * @domain: IOMMU domain to attach
2259 * @dev: Device that will be attached
2260 *
2261 * Returns 0 on success and an error code on failure.
2262 *
2263 * Note that EINVAL can be treated as a soft failure, indicating
2264 * that certain configuration of the domain is incompatible with
2265 * the device. In this case attaching a different domain to the
2266 * device may succeed.
2267 */
2268int iommu_attach_device(struct iommu_domain *domain, struct device *dev)
2269{
2270	/* Caller must be a probed driver on dev */
2271	struct iommu_group *group = dev->iommu_group;
2272	int ret;
2273
2274	if (!group)
2275		return -ENODEV;
2276
2277	/*
2278	 * Lock the group to make sure the device-count doesn't
2279	 * change while we are attaching
2280	 */
2281	mutex_lock(&group->mutex);
2282	ret = -EINVAL;
2283	if (list_count_nodes(&group->devices) != 1)
2284		goto out_unlock;
2285
2286	ret = __iommu_attach_group(domain, group);
2287
2288out_unlock:
2289	mutex_unlock(&group->mutex);
2290	return ret;
2291}
2292EXPORT_SYMBOL_GPL(iommu_attach_device);
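/*
 * Usage sketch (illustrative only): a driver for a singleton-group device
 * brackets its use of a private domain with attach/detach:
 *
 *	ret = iommu_attach_device(domain, dev);
 *	if (ret)
 *		goto err_free_domain;
 *	...
 *	iommu_detach_device(domain, dev);
 */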
2293
2294int iommu_deferred_attach(struct device *dev, struct iommu_domain *domain)
2295{
2296	if (dev->iommu && dev->iommu->attach_deferred)
2297		return __iommu_attach_device(domain, dev);
2298
2299	return 0;
2300}
2301
2302void iommu_detach_device(struct iommu_domain *domain, struct device *dev)
2303{
2304	/* Caller must be a probed driver on dev */
2305	struct iommu_group *group = dev->iommu_group;
2306
2307	if (!group)
2308		return;
2309
2310	mutex_lock(&group->mutex);
2311	if (WARN_ON(domain != group->domain) ||
2312	    WARN_ON(list_count_nodes(&group->devices) != 1))
2313		goto out_unlock;
2314	__iommu_group_set_core_domain(group);
2315
2316out_unlock:
2317	mutex_unlock(&group->mutex);
2318}
2319EXPORT_SYMBOL_GPL(iommu_detach_device);
2320
2321struct iommu_domain *iommu_get_domain_for_dev(struct device *dev)
2322{
2323	/* Caller must be a probed driver on dev */
2324	struct iommu_group *group = dev->iommu_group;
2325
2326	if (!group)
2327		return NULL;
2328
2329	return group->domain;
2330}
2331EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev);
2332
2333/*
2334 * For IOMMU_DOMAIN_DMA implementations only; such callers provide their own
2335 * guarantees that the group and its default domain are valid and correct.
2336 */
2337struct iommu_domain *iommu_get_dma_domain(struct device *dev)
2338{
2339	return dev->iommu_group->default_domain;
2340}
2341
2342static int __iommu_attach_group(struct iommu_domain *domain,
2343				struct iommu_group *group)
2344{
2345	struct device *dev;
2346
2347	if (group->domain && group->domain != group->default_domain &&
2348	    group->domain != group->blocking_domain)
2349		return -EBUSY;
2350
2351	dev = iommu_group_first_dev(group);
2352	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
2353		return -EINVAL;
2354
2355	return __iommu_group_set_domain(group, domain);
2356}
2357
2358/**
2359 * iommu_attach_group - Attach an IOMMU domain to an IOMMU group
2360 * @domain: IOMMU domain to attach
2361 * @group: IOMMU group that will be attached
2362 *
2363 * Returns 0 on success and an error code on failure.
2364 *
2365 * Note that EINVAL can be treated as a soft failure, indicating
2366 * that certain configuration of the domain is incompatible with
2367 * the group. In this case attaching a different domain to the
2368 * group may succeed.
2369 */
2370int iommu_attach_group(struct iommu_domain *domain, struct iommu_group *group)
2371{
2372	int ret;
2373
2374	mutex_lock(&group->mutex);
2375	ret = __iommu_attach_group(domain, group);
2376	mutex_unlock(&group->mutex);
2377
2378	return ret;
2379}
2380EXPORT_SYMBOL_GPL(iommu_attach_group);
2381
2382/**
2383 * iommu_group_replace_domain - replace the domain that a group is attached to
2384 * @new_domain: new IOMMU domain to replace with
2385 * @group: IOMMU group that will be attached to the new domain
2386 *
2387 * This API allows the group to switch domains without being forced to go to
2388 * the blocking domain in-between.
2389 *
2390 * If the currently attached domain is a core domain (e.g. a default_domain),
2391 * it will act just like iommu_attach_group().
2392 */
2393int iommu_group_replace_domain(struct iommu_group *group,
2394			       struct iommu_domain *new_domain)
2395{
2396	int ret;
2397
2398	if (!new_domain)
2399		return -EINVAL;
2400
2401	mutex_lock(&group->mutex);
2402	ret = __iommu_group_set_domain(group, new_domain);
2403	mutex_unlock(&group->mutex);
2404	return ret;
2405}
2406EXPORT_SYMBOL_NS_GPL(iommu_group_replace_domain, IOMMUFD_INTERNAL);
2407
2408static int __iommu_device_set_domain(struct iommu_group *group,
2409				     struct device *dev,
2410				     struct iommu_domain *new_domain,
2411				     unsigned int flags)
2412{
2413	int ret;
2414
2415	/*
2416	 * If the device requires IOMMU_RESV_DIRECT then we cannot allow
2417	 * the blocking domain to be attached as it does not contain the
2418	 * required 1:1 mapping. This test effectively excludes the device
2419	 * being used with iommu_group_claim_dma_owner() which will block
2420	 * vfio and iommufd as well.
2421	 */
2422	if (dev->iommu->require_direct &&
2423	    (new_domain->type == IOMMU_DOMAIN_BLOCKED ||
2424	     new_domain == group->blocking_domain)) {
2425		dev_warn(dev,
2426			 "Firmware has requested this device have a 1:1 IOMMU mapping, rejecting configuring the device without a 1:1 mapping. Contact your platform vendor.\n");
2427		return -EINVAL;
2428	}
2429
2430	if (dev->iommu->attach_deferred) {
2431		if (new_domain == group->default_domain)
2432			return 0;
2433		dev->iommu->attach_deferred = 0;
2434	}
2435
2436	ret = __iommu_attach_device(new_domain, dev);
2437	if (ret) {
2438		/*
2439		 * If we have a blocking domain then try to attach that in hopes
2440		 * of avoiding a UAF. Modern drivers should implement blocking
2441		 * domains as global statics that cannot fail.
2442		 */
2443		if ((flags & IOMMU_SET_DOMAIN_MUST_SUCCEED) &&
2444		    group->blocking_domain &&
2445		    group->blocking_domain != new_domain)
2446			__iommu_attach_device(group->blocking_domain, dev);
2447		return ret;
2448	}
2449	return 0;
2450}
2451
2452/*
2453 * If 0 is returned the group's domain is new_domain. If an error is returned
2454 * then the group's domain will be set back to the existing domain, unless
2455 * IOMMU_SET_DOMAIN_MUST_SUCCEED is set; then the error is still returned but
2456 * the group's domain is left inconsistent. It is a driver bug to fail attach
2457 * with a previously good domain; we try to avoid a kernel UAF because of this.
2458 *
2459 * IOMMU groups are really the natural working unit of the IOMMU, but the IOMMU
2460 * API works on domains and devices.  Bridge that gap by iterating over the
2461 * devices in a group.  Ideally we'd have a single device which represents the
2462 * requestor ID of the group, but we also allow IOMMU drivers to create policy
2463 * defined minimum sets, where the physical hardware may be able to distinguish
2464 * members, but we wish to group them at a higher level (ex. untrusted
2465 * multi-function PCI devices).  Thus we attach each device.
2466 */
2467static int __iommu_group_set_domain_internal(struct iommu_group *group,
2468					     struct iommu_domain *new_domain,
2469					     unsigned int flags)
2470{
2471	struct group_device *last_gdev;
2472	struct group_device *gdev;
2473	int result;
2474	int ret;
2475
2476	lockdep_assert_held(&group->mutex);
2477
2478	if (group->domain == new_domain)
2479		return 0;
2480
2481	if (WARN_ON(!new_domain))
2482		return -EINVAL;
2483
2484	/*
2485	 * Changing the domain is done by calling attach_dev() on the new
2486	 * domain. This switch does not have to be atomic and DMA can be
2487	 * discarded during the transition. DMA must only be able to access
2488	 * either new_domain or group->domain, never something else.
2489	 */
2490	result = 0;
2491	for_each_group_device(group, gdev) {
2492		ret = __iommu_device_set_domain(group, gdev->dev, new_domain,
2493						flags);
2494		if (ret) {
2495			result = ret;
2496			/*
2497			 * Keep trying the other devices in the group. If a
2498			 * driver fails attach to an otherwise good domain, and
2499			 * does not support blocking domains, it should at least
2500			 * drop its reference on the current domain so we don't
2501			 * UAF.
2502			 */
2503			if (flags & IOMMU_SET_DOMAIN_MUST_SUCCEED)
2504				continue;
2505			goto err_revert;
2506		}
2507	}
2508	group->domain = new_domain;
2509	return result;
2510
2511err_revert:
2512	/*
2513	 * This is called in error unwind paths. A well behaved driver should
2514	 * always allow us to attach to a domain that was already attached.
2515	 */
2516	last_gdev = gdev;
2517	for_each_group_device(group, gdev) {
2518		/*
2519		 * A NULL domain can happen only for first probe, in which case
2520		 * we leave group->domain as NULL and let release clean
2521		 * everything up.
2522		 */
2523		if (group->domain)
2524			WARN_ON(__iommu_device_set_domain(
2525				group, gdev->dev, group->domain,
2526				IOMMU_SET_DOMAIN_MUST_SUCCEED));
2527		if (gdev == last_gdev)
2528			break;
2529	}
2530	return ret;
2531}
2532
2533void iommu_detach_group(struct iommu_domain *domain, struct iommu_group *group)
2534{
2535	mutex_lock(&group->mutex);
2536	__iommu_group_set_core_domain(group);
2537	mutex_unlock(&group->mutex);
2538}
2539EXPORT_SYMBOL_GPL(iommu_detach_group);
2540
2541phys_addr_t iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
2542{
2543	if (domain->type == IOMMU_DOMAIN_IDENTITY)
2544		return iova;
2545
2546	if (domain->type == IOMMU_DOMAIN_BLOCKED)
2547		return 0;
2548
2549	return domain->ops->iova_to_phys(domain, iova);
2550}
2551EXPORT_SYMBOL_GPL(iommu_iova_to_phys);
2552
2553static size_t iommu_pgsize(struct iommu_domain *domain, unsigned long iova,
2554			   phys_addr_t paddr, size_t size, size_t *count)
2555{
2556	unsigned int pgsize_idx, pgsize_idx_next;
2557	unsigned long pgsizes;
2558	size_t offset, pgsize, pgsize_next;
2559	unsigned long addr_merge = paddr | iova;
2560
2561	/* Page sizes supported by the hardware and small enough for @size */
2562	pgsizes = domain->pgsize_bitmap & GENMASK(__fls(size), 0);
2563
2564	/* Constrain the page sizes further based on the maximum alignment */
2565	if (likely(addr_merge))
2566		pgsizes &= GENMASK(__ffs(addr_merge), 0);
2567
2568	/* Make sure we have at least one suitable page size */
2569	BUG_ON(!pgsizes);
2570
2571	/* Pick the biggest page size remaining */
2572	pgsize_idx = __fls(pgsizes);
2573	pgsize = BIT(pgsize_idx);
2574	if (!count)
2575		return pgsize;
2576
2577	/* Find the next biggest supported page size, if it exists */
2578	pgsizes = domain->pgsize_bitmap & ~GENMASK(pgsize_idx, 0);
2579	if (!pgsizes)
2580		goto out_set_count;
2581
2582	pgsize_idx_next = __ffs(pgsizes);
2583	pgsize_next = BIT(pgsize_idx_next);
2584
2585	/*
2586	 * There's no point trying a bigger page size unless the virtual
2587	 * and physical addresses are similarly offset within the larger page.
2588	 */
2589	if ((iova ^ paddr) & (pgsize_next - 1))
2590		goto out_set_count;
2591
2592	/* Calculate the offset to the next page size alignment boundary */
2593	offset = pgsize_next - (addr_merge & (pgsize_next - 1));
2594
2595	/*
2596	 * If size is big enough to accommodate the larger page, reduce
2597	 * the number of smaller pages.
2598	 */
2599	if (offset + pgsize_next <= size)
2600		size = offset;
2601
2602out_set_count:
2603	*count = size >> pgsize_idx;
2604	return pgsize;
2605}
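/*
 * Worked example (hypothetical values, assuming pgsize_bitmap = SZ_4K |
 * SZ_2M | SZ_1G): for iova = 0x201000, paddr = 0x401000, size = 0x400000,
 * the 4K alignment of both addresses limits the pick to pgsize = SZ_4K.
 * The next bigger size is SZ_2M; iova and paddr are equally offset within
 * it, so size is clipped to the 0x1ff000 bytes up to the 2M boundary and
 * *count = 0x1ff. The following __iommu_map() iteration then starts
 * 2M-aligned and can use SZ_2M mappings.
 */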
2606
2607static int __iommu_map(struct iommu_domain *domain, unsigned long iova,
2608		       phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
2609{
2610	const struct iommu_domain_ops *ops = domain->ops;
2611	unsigned long orig_iova = iova;
2612	unsigned int min_pagesz;
2613	size_t orig_size = size;
2614	phys_addr_t orig_paddr = paddr;
2615	int ret = 0;
2616
2617	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
2618		return -EINVAL;
2619
2620	if (WARN_ON(!ops->map_pages || domain->pgsize_bitmap == 0UL))
2621		return -ENODEV;
2622
2623	/* find out the minimum page size supported */
2624	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
2625
2626	/*
2627	 * both the virtual address and the physical one, as well as
2628	 * the size of the mapping, must be aligned (at least) to the
2629	 * size of the smallest page supported by the hardware
2630	 */
2631	if (!IS_ALIGNED(iova | paddr | size, min_pagesz)) {
2632		pr_err("unaligned: iova 0x%lx pa %pa size 0x%zx min_pagesz 0x%x\n",
2633		       iova, &paddr, size, min_pagesz);
2634		return -EINVAL;
2635	}
2636
2637	pr_debug("map: iova 0x%lx pa %pa size 0x%zx\n", iova, &paddr, size);
2638
2639	while (size) {
2640		size_t pgsize, count, mapped = 0;
2641
2642		pgsize = iommu_pgsize(domain, iova, paddr, size, &count);
2643
2644		pr_debug("mapping: iova 0x%lx pa %pa pgsize 0x%zx count %zu\n",
2645			 iova, &paddr, pgsize, count);
2646		ret = ops->map_pages(domain, iova, paddr, pgsize, count, prot,
2647				     gfp, &mapped);
2648		/*
2649		 * Some pages may have been mapped, even if an error occurred,
2650		 * so we should account for those so they can be unmapped.
2651		 */
2652		size -= mapped;
2653
2654		if (ret)
2655			break;
2656
2657		iova += mapped;
2658		paddr += mapped;
2659	}
2660
2661	/* unroll mapping in case something went wrong */
2662	if (ret)
2663		iommu_unmap(domain, orig_iova, orig_size - size);
2664	else
2665		trace_map(orig_iova, orig_paddr, orig_size);
2666
2667	return ret;
2668}
2669
2670int iommu_map(struct iommu_domain *domain, unsigned long iova,
2671	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp)
2672{
2673	const struct iommu_domain_ops *ops = domain->ops;
2674	int ret;
2675
2676	might_sleep_if(gfpflags_allow_blocking(gfp));
2677
2678	/* Discourage passing strange GFP flags */
2679	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
2680				__GFP_HIGHMEM)))
2681		return -EINVAL;
2682
2683	ret = __iommu_map(domain, iova, paddr, size, prot, gfp);
2684	if (ret == 0 && ops->iotlb_sync_map) {
2685		ret = ops->iotlb_sync_map(domain, iova, size);
2686		if (ret)
2687			goto out_err;
2688	}
2689
2690	return ret;
2691
2692out_err:
2693	/* undo mappings already done */
2694	iommu_unmap(domain, iova, size);
2695
2696	return ret;
2697}
2698EXPORT_SYMBOL_GPL(iommu_map);
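/*
 * Usage sketch (illustrative only, hypothetical addresses): map a 2M
 * physical region read/write into a paging domain. On error nothing stays
 * mapped, since partial progress is unrolled internally.
 *
 *	ret = iommu_map(domain, iova, page_to_phys(page), SZ_2M,
 *			IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 */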
2699
2700static size_t __iommu_unmap(struct iommu_domain *domain,
2701			    unsigned long iova, size_t size,
2702			    struct iommu_iotlb_gather *iotlb_gather)
2703{
2704	const struct iommu_domain_ops *ops = domain->ops;
2705	size_t unmapped_page, unmapped = 0;
2706	unsigned long orig_iova = iova;
2707	unsigned int min_pagesz;
2708
2709	if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
2710		return 0;
2711
2712	if (WARN_ON(!ops->unmap_pages || domain->pgsize_bitmap == 0UL))
2713		return 0;
2714
2715	/* find out the minimum page size supported */
2716	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
2717
2718	/*
2719	 * The virtual address, as well as the size of the mapping, must be
2720	 * aligned (at least) to the size of the smallest page supported
2721	 * by the hardware
2722	 */
2723	if (!IS_ALIGNED(iova | size, min_pagesz)) {
2724		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
2725		       iova, size, min_pagesz);
2726		return 0;
2727	}
2728
2729	pr_debug("unmap this: iova 0x%lx size 0x%zx\n", iova, size);
2730
2731	/*
2732	 * Keep iterating until we either unmap 'size' bytes (or more)
2733	 * or we hit an area that isn't mapped.
2734	 */
2735	while (unmapped < size) {
2736		size_t pgsize, count;
2737
2738		pgsize = iommu_pgsize(domain, iova, iova, size - unmapped, &count);
2739		unmapped_page = ops->unmap_pages(domain, iova, pgsize, count, iotlb_gather);
2740		if (!unmapped_page)
2741			break;
2742
2743		pr_debug("unmapped: iova 0x%lx size 0x%zx\n",
2744			 iova, unmapped_page);
2745
2746		iova += unmapped_page;
2747		unmapped += unmapped_page;
2748	}
2749
2750	trace_unmap(orig_iova, size, unmapped);
2751	return unmapped;
2752}
2753
2754size_t iommu_unmap(struct iommu_domain *domain,
2755		   unsigned long iova, size_t size)
2756{
2757	struct iommu_iotlb_gather iotlb_gather;
2758	size_t ret;
2759
2760	iommu_iotlb_gather_init(&iotlb_gather);
2761	ret = __iommu_unmap(domain, iova, size, &iotlb_gather);
2762	iommu_iotlb_sync(domain, &iotlb_gather);
2763
2764	return ret;
2765}
2766EXPORT_SYMBOL_GPL(iommu_unmap);
2767
2768size_t iommu_unmap_fast(struct iommu_domain *domain,
2769			unsigned long iova, size_t size,
2770			struct iommu_iotlb_gather *iotlb_gather)
2771{
2772	return __iommu_unmap(domain, iova, size, iotlb_gather);
2773}
2774EXPORT_SYMBOL_GPL(iommu_unmap_fast);
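/*
 * Usage sketch (illustrative only): callers tearing down many ranges batch
 * the TLB invalidations with the _fast variant and a single sync, which is
 * exactly what iommu_unmap() does internally for one range:
 *
 *	struct iommu_iotlb_gather gather;
 *
 *	iommu_iotlb_gather_init(&gather);
 *	unmapped += iommu_unmap_fast(domain, iova1, size1, &gather);
 *	unmapped += iommu_unmap_fast(domain, iova2, size2, &gather);
 *	iommu_iotlb_sync(domain, &gather);
 */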
2775
2776ssize_t iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
2777		     struct scatterlist *sg, unsigned int nents, int prot,
2778		     gfp_t gfp)
2779{
2780	const struct iommu_domain_ops *ops = domain->ops;
2781	size_t len = 0, mapped = 0;
2782	phys_addr_t start;
2783	unsigned int i = 0;
2784	int ret;
2785
2786	might_sleep_if(gfpflags_allow_blocking(gfp));
2787
2788	/* Discourage passing strange GFP flags */
2789	if (WARN_ON_ONCE(gfp & (__GFP_COMP | __GFP_DMA | __GFP_DMA32 |
2790				__GFP_HIGHMEM)))
2791		return -EINVAL;
2792
2793	while (i <= nents) {
2794		phys_addr_t s_phys = sg_phys(sg);
2795
2796		if (len && s_phys != start + len) {
2797			ret = __iommu_map(domain, iova + mapped, start,
2798					len, prot, gfp);
2799
2800			if (ret)
2801				goto out_err;
2802
2803			mapped += len;
2804			len = 0;
2805		}
2806
2807		if (sg_dma_is_bus_address(sg))
2808			goto next;
2809
2810		if (len) {
2811			len += sg->length;
2812		} else {
2813			len = sg->length;
2814			start = s_phys;
2815		}
2816
2817next:
2818		if (++i < nents)
2819			sg = sg_next(sg);
2820	}
2821
2822	if (ops->iotlb_sync_map) {
2823		ret = ops->iotlb_sync_map(domain, iova, mapped);
2824		if (ret)
2825			goto out_err;
2826	}
2827	return mapped;
2828
2829out_err:
2830	/* undo mappings already done */
2831	iommu_unmap(domain, iova, mapped);
2832
2833	return ret;
2834}
2835EXPORT_SYMBOL_GPL(iommu_map_sg);
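/*
 * Usage sketch (illustrative only): map an sg_table contiguously at @iova;
 * the return value is the total bytes mapped or a negative errno:
 *
 *	mapped = iommu_map_sg(domain, iova, sgt->sgl, sgt->orig_nents,
 *			      IOMMU_READ | IOMMU_WRITE, GFP_KERNEL);
 *	if (mapped < 0)
 *		return mapped;
 */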
2836
2837/**
2838 * report_iommu_fault() - report about an IOMMU fault to the IOMMU framework
2839 * @domain: the iommu domain where the fault has happened
2840 * @dev: the device where the fault has happened
2841 * @iova: the faulting address
2842 * @flags: mmu fault flags (e.g. IOMMU_FAULT_READ/IOMMU_FAULT_WRITE/...)
2843 *
2844 * This function should be called by the low-level IOMMU implementations
2845 * whenever IOMMU faults happen, to allow high-level users, that are
2846 * interested in such events, to know about them.
2847 *
2848 * This event may be useful for several possible use cases:
2849 * - mere logging of the event
2850 * - dynamic TLB/PTE loading
2851 * - if restarting of the faulting device is required
2852 *
2853 * Returns 0 on success and an appropriate error code otherwise (if dynamic
2854 * PTE/TLB loading will one day be supported, implementations will be able
2855 * to tell whether it succeeded or not according to this return value).
2856 *
2857 * Specifically, -ENOSYS is returned if a fault handler isn't installed
2858 * (though fault handlers can also return -ENOSYS, in case they want to
2859 * elicit the default behavior of the IOMMU drivers).
2860 */
2861int report_iommu_fault(struct iommu_domain *domain, struct device *dev,
2862		       unsigned long iova, int flags)
2863{
2864	int ret = -ENOSYS;
2865
2866	/*
2867	 * if upper layers showed interest and installed a fault handler,
2868	 * invoke it.
2869	 */
2870	if (domain->handler)
2871		ret = domain->handler(domain, dev, iova, flags,
2872						domain->handler_token);
2873
2874	trace_io_page_fault(dev, iova, flags);
2875	return ret;
2876}
2877EXPORT_SYMBOL_GPL(report_iommu_fault);
2878
2879static int __init iommu_init(void)
2880{
2881	iommu_group_kset = kset_create_and_add("iommu_groups",
2882					       NULL, kernel_kobj);
2883	BUG_ON(!iommu_group_kset);
2884
2885	iommu_debugfs_setup();
2886
2887	return 0;
2888}
2889core_initcall(iommu_init);
2890
2891int iommu_enable_nesting(struct iommu_domain *domain)
2892{
2893	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
2894		return -EINVAL;
2895	if (!domain->ops->enable_nesting)
2896		return -EINVAL;
2897	return domain->ops->enable_nesting(domain);
2898}
2899EXPORT_SYMBOL_GPL(iommu_enable_nesting);
2900
2901int iommu_set_pgtable_quirks(struct iommu_domain *domain,
2902		unsigned long quirk)
2903{
2904	if (domain->type != IOMMU_DOMAIN_UNMANAGED)
2905		return -EINVAL;
2906	if (!domain->ops->set_pgtable_quirks)
2907		return -EINVAL;
2908	return domain->ops->set_pgtable_quirks(domain, quirk);
2909}
2910EXPORT_SYMBOL_GPL(iommu_set_pgtable_quirks);
2911
2912/**
2913 * iommu_get_resv_regions - get reserved regions
2914 * @dev: device for which to get reserved regions
2915 * @list: reserved region list for device
2916 *
2917 * This returns a list of reserved IOVA regions specific to this device.
2918 * A domain user should not map IOVA in these ranges.
2919 */
2920void iommu_get_resv_regions(struct device *dev, struct list_head *list)
2921{
2922	const struct iommu_ops *ops = dev_iommu_ops(dev);
2923
2924	if (ops->get_resv_regions)
2925		ops->get_resv_regions(dev, list);
2926}
2927EXPORT_SYMBOL_GPL(iommu_get_resv_regions);
2928
2929/**
2930 * iommu_put_resv_regions - release reserved regions
2931 * @dev: device for which to free reserved regions
2932 * @list: reserved region list for device
2933 *
2934 * This releases a reserved region list acquired by iommu_get_resv_regions().
2935 */
2936void iommu_put_resv_regions(struct device *dev, struct list_head *list)
2937{
2938	struct iommu_resv_region *entry, *next;
2939
2940	list_for_each_entry_safe(entry, next, list, list) {
2941		if (entry->free)
2942			entry->free(dev, entry);
2943		else
2944			kfree(entry);
2945	}
2946}
2947EXPORT_SYMBOL(iommu_put_resv_regions);
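/*
 * Usage sketch (illustrative only; mark_range_unusable() stands in for the
 * caller's own bookkeeping): domain users carve these windows out of their
 * IOVA allocator before mapping anything:
 *
 *	LIST_HEAD(resv_regions);
 *	struct iommu_resv_region *region;
 *
 *	iommu_get_resv_regions(dev, &resv_regions);
 *	list_for_each_entry(region, &resv_regions, list)
 *		mark_range_unusable(region->start, region->length);
 *	iommu_put_resv_regions(dev, &resv_regions);
 */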
2948
2949struct iommu_resv_region *iommu_alloc_resv_region(phys_addr_t start,
2950						  size_t length, int prot,
2951						  enum iommu_resv_type type,
2952						  gfp_t gfp)
2953{
2954	struct iommu_resv_region *region;
2955
2956	region = kzalloc(sizeof(*region), gfp);
2957	if (!region)
2958		return NULL;
2959
2960	INIT_LIST_HEAD(&region->list);
2961	region->start = start;
2962	region->length = length;
2963	region->prot = prot;
2964	region->type = type;
2965	return region;
2966}
2967EXPORT_SYMBOL_GPL(iommu_alloc_resv_region);
2968
2969void iommu_set_default_passthrough(bool cmd_line)
2970{
2971	if (cmd_line)
2972		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
2973	iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
2974}
2975
2976void iommu_set_default_translated(bool cmd_line)
2977{
2978	if (cmd_line)
2979		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
2980	iommu_def_domain_type = IOMMU_DOMAIN_DMA;
2981}
2982
2983bool iommu_default_passthrough(void)
2984{
2985	return iommu_def_domain_type == IOMMU_DOMAIN_IDENTITY;
2986}
2987EXPORT_SYMBOL_GPL(iommu_default_passthrough);
2988
2989const struct iommu_ops *iommu_ops_from_fwnode(struct fwnode_handle *fwnode)
2990{
2991	const struct iommu_ops *ops = NULL;
2992	struct iommu_device *iommu;
2993
2994	spin_lock(&iommu_device_lock);
2995	list_for_each_entry(iommu, &iommu_device_list, list)
2996		if (iommu->fwnode == fwnode) {
2997			ops = iommu->ops;
2998			break;
2999		}
3000	spin_unlock(&iommu_device_lock);
3001	return ops;
3002}
3003
3004int iommu_fwspec_init(struct device *dev, struct fwnode_handle *iommu_fwnode,
3005		      const struct iommu_ops *ops)
3006{
3007	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
3008
3009	if (fwspec)
3010		return ops == fwspec->ops ? 0 : -EINVAL;
3011
3012	if (!dev_iommu_get(dev))
3013		return -ENOMEM;
3014
3015	/* Preallocate for the overwhelmingly common case of 1 ID */
3016	fwspec = kzalloc(struct_size(fwspec, ids, 1), GFP_KERNEL);
3017	if (!fwspec)
3018		return -ENOMEM;
3019
3020	of_node_get(to_of_node(iommu_fwnode));
3021	fwspec->iommu_fwnode = iommu_fwnode;
3022	fwspec->ops = ops;
3023	dev_iommu_fwspec_set(dev, fwspec);
3024	return 0;
3025}
3026EXPORT_SYMBOL_GPL(iommu_fwspec_init);
3027
3028void iommu_fwspec_free(struct device *dev)
3029{
3030	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
3031
3032	if (fwspec) {
3033		fwnode_handle_put(fwspec->iommu_fwnode);
3034		kfree(fwspec);
3035		dev_iommu_fwspec_set(dev, NULL);
3036	}
3037}
3038EXPORT_SYMBOL_GPL(iommu_fwspec_free);
3039
3040int iommu_fwspec_add_ids(struct device *dev, u32 *ids, int num_ids)
3041{
3042	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
3043	int i, new_num;
3044
3045	if (!fwspec)
3046		return -EINVAL;
3047
3048	new_num = fwspec->num_ids + num_ids;
3049	if (new_num > 1) {
3050		fwspec = krealloc(fwspec, struct_size(fwspec, ids, new_num),
3051				  GFP_KERNEL);
3052		if (!fwspec)
3053			return -ENOMEM;
3054
3055		dev_iommu_fwspec_set(dev, fwspec);
3056	}
3057
3058	for (i = 0; i < num_ids; i++)
3059		fwspec->ids[fwspec->num_ids + i] = ids[i];
3060
3061	fwspec->num_ids = new_num;
3062	return 0;
3063}
3064EXPORT_SYMBOL_GPL(iommu_fwspec_add_ids);
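/*
 * Usage sketch (illustrative only): firmware bus code typically initializes
 * the fwspec once and appends each ID parsed from the firmware tables:
 *
 *	ret = iommu_fwspec_init(dev, iommu_fwnode, ops);
 *	if (!ret)
 *		ret = iommu_fwspec_add_ids(dev, &sid, 1);
 */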
3065
3066/*
3067 * Per device IOMMU features.
3068 */
3069int iommu_dev_enable_feature(struct device *dev, enum iommu_dev_features feat)
3070{
3071	if (dev_has_iommu(dev)) {
3072		const struct iommu_ops *ops = dev_iommu_ops(dev);
3073
3074		if (ops->dev_enable_feat)
3075			return ops->dev_enable_feat(dev, feat);
3076	}
3077
3078	return -ENODEV;
3079}
3080EXPORT_SYMBOL_GPL(iommu_dev_enable_feature);
3081
3082/*
3083 * The device drivers should do the necessary cleanups before calling this.
3084 */
3085int iommu_dev_disable_feature(struct device *dev, enum iommu_dev_features feat)
3086{
3087	if (dev_has_iommu(dev)) {
3088		const struct iommu_ops *ops = dev_iommu_ops(dev);
3089
3090		if (ops->dev_disable_feat)
3091			return ops->dev_disable_feat(dev, feat);
3092	}
3093
3094	return -EBUSY;
3095}
3096EXPORT_SYMBOL_GPL(iommu_dev_disable_feature);
3097
3098/**
3099 * iommu_setup_default_domain - Set the default_domain for the group
3100 * @group: Group to change
3101 * @target_type: Domain type to set as the default_domain
3102 *
3103 * Allocate a default domain and set it as the current domain on the group. If
3104 * the group already has a default domain it will be changed to the target_type.
3105 * When target_type is 0 the default domain is selected based on driver and
3106 * system preferences.
3107 */
3108static int iommu_setup_default_domain(struct iommu_group *group,
3109				      int target_type)
3110{
3111	struct iommu_domain *old_dom = group->default_domain;
3112	struct group_device *gdev;
3113	struct iommu_domain *dom;
3114	bool direct_failed;
3115	int req_type;
3116	int ret;
3117
3118	lockdep_assert_held(&group->mutex);
3119
3120	req_type = iommu_get_default_domain_type(group, target_type);
3121	if (req_type < 0)
3122		return -EINVAL;
3123
3124	dom = iommu_group_alloc_default_domain(group, req_type);
3125	if (IS_ERR(dom))
3126		return PTR_ERR(dom);
3127
3128	if (group->default_domain == dom)
3129		return 0;
3130
3131	/*
3132	 * IOMMU_RESV_DIRECT and IOMMU_RESV_DIRECT_RELAXABLE regions must be
3133	 * mapped before their device is attached, in order to guarantee
3134	 * continuity with any FW activity
3135	 */
3136	direct_failed = false;
3137	for_each_group_device(group, gdev) {
3138		if (iommu_create_device_direct_mappings(dom, gdev->dev)) {
3139			direct_failed = true;
3140			dev_warn_once(
3141				gdev->dev->iommu->iommu_dev->dev,
3142				"IOMMU driver was not able to establish FW requested direct mapping.\n");
3143		}
3144	}
3145
3146	/* We must set default_domain early for __iommu_device_set_domain */
3147	group->default_domain = dom;
3148	if (!group->domain) {
3149		/*
3150		 * Drivers are not allowed to fail the first domain attach.
3151		 * The only way to recover from this is to fail attaching the
3152		 * iommu driver and call ops->release_device. Put the domain
3153		 * in group->default_domain so it is freed after.
3154		 */
3155		ret = __iommu_group_set_domain_internal(
3156			group, dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
3157		if (WARN_ON(ret))
3158			goto out_free_old;
3159	} else {
3160		ret = __iommu_group_set_domain(group, dom);
3161		if (ret)
3162			goto err_restore_def_domain;
3163	}
3164
3165	/*
3166	 * Drivers are supposed to allow mappings to be installed in a domain
3167	 * before device attachment, but some don't. Hack around this defect by
3168	 * trying again after attaching. If this happens it means the device
3169	 * will not continuously have the IOMMU_RESV_DIRECT map.
3170	 */
3171	if (direct_failed) {
3172		for_each_group_device(group, gdev) {
3173			ret = iommu_create_device_direct_mappings(dom, gdev->dev);
3174			if (ret)
3175				goto err_restore_domain;
3176		}
3177	}
3178
3179out_free_old:
3180	if (old_dom)
3181		iommu_domain_free(old_dom);
3182	return ret;
3183
3184err_restore_domain:
3185	if (old_dom)
3186		__iommu_group_set_domain_internal(
3187			group, old_dom, IOMMU_SET_DOMAIN_MUST_SUCCEED);
3188err_restore_def_domain:
3189	if (old_dom) {
3190		iommu_domain_free(dom);
3191		group->default_domain = old_dom;
3192	}
3193	return ret;
3194}
3195
3196/*
3197 * Changing the default domain through sysfs requires the users to unbind the
3198 * drivers from the devices in the iommu group, except for a DMA -> DMA-FQ
3199 * transition. Return failure if this isn't met.
3200 *
3201 * We need to consider the race between this and the device release path.
3202 * group->mutex is used here to guarantee that the device release path
3203 * will not be entered at the same time.
3204 */
3205static ssize_t iommu_group_store_type(struct iommu_group *group,
3206				      const char *buf, size_t count)
3207{
3208	struct group_device *gdev;
3209	int ret, req_type;
3210
3211	if (!capable(CAP_SYS_ADMIN) || !capable(CAP_SYS_RAWIO))
3212		return -EACCES;
3213
3214	if (WARN_ON(!group) || !group->default_domain)
3215		return -EINVAL;
3216
3217	if (sysfs_streq(buf, "identity"))
3218		req_type = IOMMU_DOMAIN_IDENTITY;
3219	else if (sysfs_streq(buf, "DMA"))
3220		req_type = IOMMU_DOMAIN_DMA;
3221	else if (sysfs_streq(buf, "DMA-FQ"))
3222		req_type = IOMMU_DOMAIN_DMA_FQ;
3223	else if (sysfs_streq(buf, "auto"))
3224		req_type = 0;
3225	else
3226		return -EINVAL;
3227
3228	mutex_lock(&group->mutex);
3229	/* We can bring up a flush queue without tearing down the domain. */
3230	if (req_type == IOMMU_DOMAIN_DMA_FQ &&
3231	    group->default_domain->type == IOMMU_DOMAIN_DMA) {
3232		ret = iommu_dma_init_fq(group->default_domain);
3233		if (ret)
3234			goto out_unlock;
3235
3236		group->default_domain->type = IOMMU_DOMAIN_DMA_FQ;
3237		ret = count;
3238		goto out_unlock;
3239	}
3240
3241	/* Otherwise, ensure that a device exists and no driver is bound. */
3242	if (list_empty(&group->devices) || group->owner_cnt) {
3243		ret = -EPERM;
3244		goto out_unlock;
3245	}
3246
3247	ret = iommu_setup_default_domain(group, req_type);
3248	if (ret)
3249		goto out_unlock;
3250
3251	/*
3252	 * Release the mutex here because ops->probe_finalize() call-back of
3253	 * some vendor IOMMU drivers calls arm_iommu_attach_device() which
3254	 * in-turn might call back into IOMMU core code, where it tries to take
3255	 * group->mutex, resulting in a deadlock.
3256	 */
3257	mutex_unlock(&group->mutex);
3258
3259	/* Make sure dma_ops is appropriately set */
3260	for_each_group_device(group, gdev)
3261		iommu_group_do_probe_finalize(gdev->dev);
3262	return count;
3263
3264out_unlock:
3265	mutex_unlock(&group->mutex);
3266	return ret ?: count;
3267}
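/*
 * The store above backs the group's sysfs "type" attribute, so (for example)
 * an administrator can enable a flush queue without unbinding drivers:
 *
 *	echo DMA-FQ > /sys/kernel/iommu_groups/<id>/type
 *
 * Shell line for illustration; <id> is the group number from iommu_group_id().
 */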
3268
3269/**
3270 * iommu_device_use_default_domain() - Device driver wants to handle device
3271 *                                     DMA through the kernel DMA API.
3272 * @dev: The device.
3273 *
3274 * The device driver about to bind @dev wants to do DMA through the kernel
3275 * DMA API. Return 0 if it is allowed, otherwise an error.
3276 */
3277int iommu_device_use_default_domain(struct device *dev)
3278{
3279	/* Caller is the driver core during the pre-probe path */
3280	struct iommu_group *group = dev->iommu_group;
3281	int ret = 0;
3282
3283	if (!group)
3284		return 0;
3285
3286	mutex_lock(&group->mutex);
3287	if (group->owner_cnt) {
3288		if (group->domain != group->default_domain || group->owner ||
3289		    !xa_empty(&group->pasid_array)) {
3290			ret = -EBUSY;
3291			goto unlock_out;
3292		}
3293	}
3294
3295	group->owner_cnt++;
3296
3297unlock_out:
3298	mutex_unlock(&group->mutex);
3299	return ret;
3300}
3301
3302/**
3303 * iommu_device_unuse_default_domain() - Device driver stops handling device
3304 *                                       DMA through the kernel DMA API.
3305 * @dev: The device.
3306 *
3307 * The device driver doesn't want to do DMA through the kernel DMA API anymore.
3308 * It must be called after iommu_device_use_default_domain().
3309 */
3310void iommu_device_unuse_default_domain(struct device *dev)
3311{
3312	/* Caller is the driver core during the post-probe path */
3313	struct iommu_group *group = dev->iommu_group;
3314
3315	if (!group)
3316		return;
3317
3318	mutex_lock(&group->mutex);
3319	if (!WARN_ON(!group->owner_cnt || !xa_empty(&group->pasid_array)))
3320		group->owner_cnt--;
3321
3322	mutex_unlock(&group->mutex);
3323}
3324
3325static int __iommu_group_alloc_blocking_domain(struct iommu_group *group)
3326{
3327	struct iommu_domain *domain;
3328
3329	if (group->blocking_domain)
3330		return 0;
3331
3332	domain = __iommu_group_domain_alloc(group, IOMMU_DOMAIN_BLOCKED);
3333	if (IS_ERR(domain)) {
3334		/*
3335		 * For drivers that do not yet understand IOMMU_DOMAIN_BLOCKED,
3336		 * create an empty domain instead.
3337		 */
3338		domain = __iommu_group_domain_alloc(group,
3339						    IOMMU_DOMAIN_UNMANAGED);
3340		if (IS_ERR(domain))
3341			return PTR_ERR(domain);
3342	}
3343	group->blocking_domain = domain;
3344	return 0;
3345}
3346
3347static int __iommu_take_dma_ownership(struct iommu_group *group, void *owner)
3348{
3349	int ret;
3350
3351	if ((group->domain && group->domain != group->default_domain) ||
3352	    !xa_empty(&group->pasid_array))
3353		return -EBUSY;
3354
3355	ret = __iommu_group_alloc_blocking_domain(group);
3356	if (ret)
3357		return ret;
3358	ret = __iommu_group_set_domain(group, group->blocking_domain);
3359	if (ret)
3360		return ret;
3361
3362	group->owner = owner;
3363	group->owner_cnt++;
3364	return 0;
3365}
3366
3367/**
3368 * iommu_group_claim_dma_owner() - Set DMA ownership of a group
3369 * @group: The group.
3370 * @owner: Caller specified pointer. Used for exclusive ownership.
3371 *
3372 * This is to support backward compatibility for vfio which manages the dma
3373 * ownership at the iommu_group level. New invocations of this interface should be
3374 * prohibited. Only a single owner may exist for a group.
3375 */
3376int iommu_group_claim_dma_owner(struct iommu_group *group, void *owner)
3377{
3378	int ret = 0;
3379
3380	if (WARN_ON(!owner))
3381		return -EINVAL;
3382
3383	mutex_lock(&group->mutex);
3384	if (group->owner_cnt) {
3385		ret = -EPERM;
3386		goto unlock_out;
3387	}
3388
3389	ret = __iommu_take_dma_ownership(group, owner);
3390unlock_out:
3391	mutex_unlock(&group->mutex);
3392
3393	return ret;
3394}
3395EXPORT_SYMBOL_GPL(iommu_group_claim_dma_owner);
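/*
 * Usage sketch (illustrative only; the cookie is any caller-chosen pointer):
 * a vfio-like driver claims ownership for the life of its attachment:
 *
 *	ret = iommu_group_claim_dma_owner(group, my_driver_cookie);
 *	if (ret)
 *		return ret;
 *	...
 *	iommu_group_release_dma_owner(group);
 */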
3396
3397/**
3398 * iommu_device_claim_dma_owner() - Set DMA ownership of a device
3399 * @dev: The device.
3400 * @owner: Caller specified pointer. Used for exclusive ownership.
3401 *
3402 * Claim the DMA ownership of a device. Multiple devices in the same group may
3403 * concurrently claim ownership if they present the same owner value. Returns 0
3404 * on success and an error code on failure.
3405 */
3406int iommu_device_claim_dma_owner(struct device *dev, void *owner)
3407{
3408	/* Caller must be a probed driver on dev */
3409	struct iommu_group *group = dev->iommu_group;
3410	int ret = 0;
3411
3412	if (WARN_ON(!owner))
3413		return -EINVAL;
3414
3415	if (!group)
3416		return -ENODEV;
3417
3418	mutex_lock(&group->mutex);
3419	if (group->owner_cnt) {
3420		if (group->owner != owner) {
3421			ret = -EPERM;
3422			goto unlock_out;
3423		}
3424		group->owner_cnt++;
3425		goto unlock_out;
3426	}
3427
3428	ret = __iommu_take_dma_ownership(group, owner);
3429unlock_out:
3430	mutex_unlock(&group->mutex);
3431	return ret;
3432}
3433EXPORT_SYMBOL_GPL(iommu_device_claim_dma_owner);
3434
3435static void __iommu_release_dma_ownership(struct iommu_group *group)
3436{
3437	if (WARN_ON(!group->owner_cnt || !group->owner ||
3438		    !xa_empty(&group->pasid_array)))
3439		return;
3440
3441	group->owner_cnt = 0;
3442	group->owner = NULL;
3443	__iommu_group_set_domain_nofail(group, group->default_domain);
3444}
3445
3446/**
3447 * iommu_group_release_dma_owner() - Release DMA ownership of a group
3448 * @group: The group
3449 *
3450 * Release the DMA ownership claimed by iommu_group_claim_dma_owner().
3451 */
3452void iommu_group_release_dma_owner(struct iommu_group *group)
3453{
3454	mutex_lock(&group->mutex);
3455	__iommu_release_dma_ownership(group);
3456	mutex_unlock(&group->mutex);
3457}
3458EXPORT_SYMBOL_GPL(iommu_group_release_dma_owner);
3459
3460/**
3461 * iommu_device_release_dma_owner() - Release DMA ownership of a device
3462 * @dev: The device.
3463 *
3464 * Release the DMA ownership claimed by iommu_device_claim_dma_owner().
3465 */
3466void iommu_device_release_dma_owner(struct device *dev)
3467{
3468	/* Caller must be a probed driver on dev */
3469	struct iommu_group *group = dev->iommu_group;
3470
3471	mutex_lock(&group->mutex);
3472	if (group->owner_cnt > 1)
3473		group->owner_cnt--;
3474	else
3475		__iommu_release_dma_ownership(group);
3476	mutex_unlock(&group->mutex);
3477}
3478EXPORT_SYMBOL_GPL(iommu_device_release_dma_owner);
3479
3480/**
3481 * iommu_group_dma_owner_claimed() - Query group dma ownership status
3482 * @group: The group.
3483 *
3484 * This provides a status query on a given group. It is racy and intended
3485 * only for non-binding status reporting.
3486 */
3487bool iommu_group_dma_owner_claimed(struct iommu_group *group)
3488{
3489	unsigned int user;
3490
3491	mutex_lock(&group->mutex);
3492	user = group->owner_cnt;
3493	mutex_unlock(&group->mutex);
3494
3495	return user;
3496}
3497EXPORT_SYMBOL_GPL(iommu_group_dma_owner_claimed);
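
/*
 * Illustrative sketch: the answer is stale as soon as the mutex is
 * dropped, so it is only suitable for informational output, e.g. from a
 * debugfs or sysfs show routine:
 *
 *	seq_printf(m, "dma owner claimed: %s\n",
 *		   iommu_group_dma_owner_claimed(group) ? "yes" : "no");
 */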
3498
3499static int __iommu_set_group_pasid(struct iommu_domain *domain,
3500				   struct iommu_group *group, ioasid_t pasid)
3501{
3502	struct group_device *device;
3503	int ret = 0;
3504
3505	for_each_group_device(group, device) {
3506		ret = domain->ops->set_dev_pasid(domain, device->dev, pasid);
3507		if (ret)
3508			break;
3509	}
3510
3511	return ret;
3512}
3513
3514static void __iommu_remove_group_pasid(struct iommu_group *group,
3515				       ioasid_t pasid)
3516{
3517	struct group_device *device;
3518	const struct iommu_ops *ops;
3519
3520	for_each_group_device(group, device) {
3521		ops = dev_iommu_ops(device->dev);
3522		ops->remove_dev_pasid(device->dev, pasid);
3523	}
3524}
3525
3526/**
3527 * iommu_attach_device_pasid() - Attach a domain to a pasid of a device
3528 * @domain: the iommu domain.
3529 * @dev: the attached device.
3530 * @pasid: the pasid of the device.
3531 *
3532 * Return: 0 on success, or an error.
3533 */
3534int iommu_attach_device_pasid(struct iommu_domain *domain,
3535			      struct device *dev, ioasid_t pasid)
3536{
3537	/* Caller must be a probed driver on dev */
3538	struct iommu_group *group = dev->iommu_group;
3539	void *curr;
3540	int ret;
3541
3542	if (!domain->ops->set_dev_pasid)
3543		return -EOPNOTSUPP;
3544
3545	if (!group)
3546		return -ENODEV;
3547
3548	if (!dev_has_iommu(dev) || dev_iommu_ops(dev) != domain->owner)
3549		return -EINVAL;
3550
3551	mutex_lock(&group->mutex);
3552	curr = xa_cmpxchg(&group->pasid_array, pasid, NULL, domain, GFP_KERNEL);
3553	if (curr) {
3554		ret = xa_err(curr) ? : -EBUSY;
3555		goto out_unlock;
3556	}
3557
3558	ret = __iommu_set_group_pasid(domain, group, pasid);
3559	if (ret) {
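		/*
		 * Partial failure: remove_dev_pasid() is called below for
		 * every device in the group, including those never reached
		 * by the set loop, so drivers see removes for pasids they
		 * did not install.
		 */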
3560		__iommu_remove_group_pasid(group, pasid);
3561		xa_erase(&group->pasid_array, pasid);
3562	}
3563out_unlock:
3564	mutex_unlock(&group->mutex);
3565	return ret;
3566}
3567EXPORT_SYMBOL_GPL(iommu_attach_device_pasid);
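
/*
 * Illustrative sketch of the expected lifecycle; everything except the
 * iommu_* calls is hypothetical, and @domain is assumed to come from the
 * same driver that owns @dev:
 *
 *	pasid = iommu_alloc_global_pasid(dev);
 *	if (pasid == IOMMU_PASID_INVALID)
 *		return -ENOSPC;
 *
 *	ret = iommu_attach_device_pasid(domain, dev, pasid);
 *	if (ret) {
 *		iommu_free_global_pasid(pasid);
 *		return ret;
 *	}
 *
 *	... issue DMA tagged with pasid ...
 *
 *	iommu_detach_device_pasid(domain, dev, pasid);
 *	iommu_free_global_pasid(pasid);
 */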
3568
3569/**
3570 * iommu_detach_device_pasid() - Detach the domain from a pasid of a device
3571 * @domain: the iommu domain.
3572 * @dev: the attached device.
3573 * @pasid: the pasid of the device.
3574 *
3575 * The @domain must have been attached to @pasid of the @dev with
3576 * iommu_attach_device_pasid().
3577 */
3578void iommu_detach_device_pasid(struct iommu_domain *domain, struct device *dev,
3579			       ioasid_t pasid)
3580{
3581	/* Caller must be a probed driver on dev */
3582	struct iommu_group *group = dev->iommu_group;
3583
3584	mutex_lock(&group->mutex);
3585	__iommu_remove_group_pasid(group, pasid);
3586	WARN_ON(xa_erase(&group->pasid_array, pasid) != domain);
3587	mutex_unlock(&group->mutex);
3588}
3589EXPORT_SYMBOL_GPL(iommu_detach_device_pasid);
3590
3591/**
3592 * iommu_get_domain_for_dev_pasid() - Retrieve domain for @pasid of @dev
3593 * @dev: the queried device
3594 * @pasid: the pasid of the device
3595 * @type: matched domain type, 0 for any match
3596 *
3597 * This is a variant of iommu_get_domain_for_dev(). It returns the existing
3598 * domain attached to @pasid of a device. Callers must hold a lock around
3599 * this function, and both iommu_attach/detach_device_pasid(), whenever a
3600 * domain of the given @type is being manipulated. This API does not
3601 * internally resolve races with attach/detach.
3602 *
3603 * Return: attached domain on success, NULL if none, or ERR_PTR(-EBUSY) on @type mismatch.
3604 */
3605struct iommu_domain *iommu_get_domain_for_dev_pasid(struct device *dev,
3606						    ioasid_t pasid,
3607						    unsigned int type)
3608{
3609	/* Caller must be a probed driver on dev */
3610	struct iommu_group *group = dev->iommu_group;
3611	struct iommu_domain *domain;
3612
3613	if (!group)
3614		return NULL;
3615
3616	xa_lock(&group->pasid_array);
3617	domain = xa_load(&group->pasid_array, pasid);
3618	if (type && domain && domain->type != type)
3619		domain = ERR_PTR(-EBUSY);
3620	xa_unlock(&group->pasid_array);
3621
3622	return domain;
3623}
3624EXPORT_SYMBOL_GPL(iommu_get_domain_for_dev_pasid);
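
/*
 * Illustrative sketch: a fault path that only handles SVA domains can
 * filter on @type; serialization against attach/detach is the caller's
 * responsibility, as documented above.
 *
 *	domain = iommu_get_domain_for_dev_pasid(dev, pasid, IOMMU_DOMAIN_SVA);
 *	if (IS_ERR_OR_NULL(domain))
 *		return -ENODEV;
 */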
3625
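/**
 * iommu_sva_domain_alloc() - Allocate an SVA domain for @dev sharing @mm
 * @dev: the device
 * @mm: the address space to share with the device
 *
 * Takes a reference on @mm which is dropped when the domain is freed.
 *
 * Return: the new domain, or NULL if the driver could not allocate one.
 */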
3626struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
3627					    struct mm_struct *mm)
3628{
3629	const struct iommu_ops *ops = dev_iommu_ops(dev);
3630	struct iommu_domain *domain;
3631
3632	domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
3633	if (!domain)
3634		return NULL;
3635
3636	domain->type = IOMMU_DOMAIN_SVA;
3637	mmgrab(mm);
3638	domain->mm = mm;
3639	domain->owner = ops;
3640	domain->iopf_handler = iommu_sva_handle_iopf;
3641	domain->fault_data = mm;
3642
3643	return domain;
3644}
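
/*
 * Illustrative sketch of how the SVA layer is expected to consume this
 * helper (pasid handling elided, names hypothetical):
 *
 *	domain = iommu_sva_domain_alloc(dev, current->mm);
 *	if (!domain)
 *		return -ENOMEM;
 *
 *	ret = iommu_attach_device_pasid(domain, dev, pasid);
 *	if (ret)
 *		iommu_domain_free(domain);
 */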
3645
3646ioasid_t iommu_alloc_global_pasid(struct device *dev)
3647{
3648	int ret;
3649
3650	/* max_pasids == 0 means that the device does not support PASID */
3651	if (!dev->iommu->max_pasids)
3652		return IOMMU_PASID_INVALID;
3653
3654	/*
3655	 * max_pasids is set up by the vendor driver from the number of PASID
3656	 * bits supported; ida_alloc_range() is inclusive, hence max_pasids - 1.
3657	 */
3658	ret = ida_alloc_range(&iommu_global_pasid_ida, IOMMU_FIRST_GLOBAL_PASID,
3659			      dev->iommu->max_pasids - 1, GFP_KERNEL);
3660	return ret < 0 ? IOMMU_PASID_INVALID : ret;
3661}
3662EXPORT_SYMBOL_GPL(iommu_alloc_global_pasid);
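
/*
 * Illustrative example: a device advertising 20 PASID bits typically ends
 * up with max_pasids == 1 << 20, so successful allocations fall within
 * [IOMMU_FIRST_GLOBAL_PASID, (1 << 20) - 1]:
 *
 *	pasid = iommu_alloc_global_pasid(dev);
 *	if (pasid == IOMMU_PASID_INVALID)
 *		return -ENOSPC;
 */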
3663
3664void iommu_free_global_pasid(ioasid_t pasid)
3665{
3666	if (WARN_ON(pasid == IOMMU_PASID_INVALID))
3667		return;
3668
3669	ida_free(&iommu_global_pasid_ida, pasid);
3670}
3671EXPORT_SYMBOL_GPL(iommu_free_global_pasid);