menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.
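
	  As a rough illustration (sysfs paths as used by this driver on
	  typical kernels), the allocation target can be read and set from
	  userspace:
	    cat /sys/devices/system/xen_memory/xen_memory0/target_kb
	    echo 2097152 > /sys/devices/system/xen_memory/xen_memory0/target_kb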

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters. Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default. If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'tmem.selfshrink=0' kernel boot parameter; and
	  self-ballooning is enabled by default but can be disabled with the
	  'tmem.selfballooning=0' kernel boot parameter. Note that systems
	  without a sufficiently large swap device should not enable
	  self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup. It is very useful for critical systems which
	  require long uptimes without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is requested memory size; alternatively, memory
	       could be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains. This makes sure that any confidential data
	  is not accidentally visible to other domains. It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.
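
	  A minimal userspace sketch (illustrative only; it assumes the
	  xen/evtchn.h UAPI header is installed, and that ports are plain
	  unsigned ints as the evtchn character device uses):

	    #include <fcntl.h>
	    #include <sys/ioctl.h>
	    #include <unistd.h>
	    #include <xen/evtchn.h>

	    int main(void)
	    {
	        /* Each open file handle owns the ports bound through it. */
	        int fd = open("/dev/xen/evtchn", O_RDWR);
	        struct ioctl_evtchn_bind_unbound_port bind = {
	            .remote_domain = 0,   /* let dom0 bind the other end */
	        };
	        unsigned int fired;

	        if (fd < 0)
	            return 1;
	        /* On success the ioctl returns the newly bound local port. */
	        int port = ioctl(fd, IOCTL_EVTCHN_BIND_UNBOUND_PORT, &bind);

	        /* A blocking read yields ports with pending events; writing
	           a port back unmasks it for further notifications. */
	        if (read(fd, &fired, sizeof(fired)) == sizeof(fired))
	            write(fd, &fired, sizeof(fired));

	        /* Send an event on our own port to the remote end. */
	        struct ioctl_evtchn_notify notify = { .port = port };
	        ioctl(fd, IOCTL_EVTCHN_NOTIFY, &notify);
	        close(fd);
	        return 0;
	    }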

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.
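
	  For example, the filesystem is typically mounted like this (the
	  mount point matching the compatibility option below):
	    mount -t xenfs xenfs /proc/xen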

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem. Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment. When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.
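
	  For example, current kernels expose entries such as
	  /sys/hypervisor/type (containing "xen") and
	  /sys/hypervisor/version/{major,minor,extra}.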

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to map pages granted by other domains.
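
	  A minimal sketch (illustrative only, assuming the xen/gntdev.h
	  UAPI header; the domid and grant reference would normally be
	  exchanged via xenstore):

	    #include <fcntl.h>
	    #include <stdint.h>
	    #include <sys/ioctl.h>
	    #include <sys/mman.h>
	    #include <xen/gntdev.h>

	    /* Map a single foreign page; returns NULL on failure. */
	    void *map_one_grant(uint32_t domid, uint32_t gref)
	    {
	        int fd = open("/dev/xen/gntdev", O_RDWR);
	        struct ioctl_gntdev_map_grant_ref map = {
	            .count = 1,
	            .refs[0] = { .domid = domid, .ref = gref },
	        };

	        if (fd < 0 || ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF, &map))
	            return NULL;
	        /* map.index is the offset to pass to mmap(); the fd must
	           stay open for the lifetime of the mapping (leaked here). */
	        return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	                    MAP_SHARED, fd, map.index);
	    }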

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.
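
	  A minimal sketch of the allocation side (illustrative only; field
	  names as in the kernel's xen/gntalloc.h UAPI header):

	    #include <fcntl.h>
	    #include <stdint.h>
	    #include <sys/ioctl.h>
	    #include <sys/mman.h>
	    #include <xen/gntalloc.h>

	    /* Allocate one page and grant peer_domid access to it. */
	    void *share_one_page(uint16_t peer_domid, uint32_t *gref_out)
	    {
	        int fd = open("/dev/xen/gntalloc", O_RDWR);
	        struct ioctl_gntalloc_alloc_gnt alloc = {
	            .domid = peer_domid,
	            .flags = GNTALLOC_FLAG_WRITABLE,
	            .count = 1,
	        };

	        if (fd < 0 || ioctl(fd, IOCTL_GNTALLOC_ALLOC_GNT, &alloc))
	            return NULL;
	        *gref_out = alloc.gref_ids[0];  /* advertise via xenstore */
	        return mmap(NULL, 4096, PROT_READ | PROT_WRITE,
	                    MAP_SHARED, fd, alloc.index);
	    }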

config SWIOTLB_XEN
	def_bool y
	select SWIOTLB

config XEN_TMEM
	tristate
	depends on !ARM && !ARM64
	default m if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0)
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the listed PCI devices
	  to this module instead of their default device drivers. The argument
	  is the list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

config XEN_PRIVCMD
	tristate
	depends on XEN
	default m

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	default n
	help
	  Allow the kernel to install stub drivers, to reserve space for Xen
	  drivers (i.e. memory hotplug and cpu hotplug) and to block native
	  drivers from loading, so that the real Xen drivers can be modular.

	  To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	default n
	help
	  This is Xen ACPI memory hotplug support.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine stops), select Y/M here, otherwise
	  select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	default n
	help
	  Xen ACPI cpu enumeration and hotplugging.

	  For hotplugging, currently Xen only supports ACPI cpu hot-add.
	  If you want to hot-add cpus at runtime (a hot-added cpu cannot
	  be removed until the machine stops), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor driver uploads Power Management information to
	  the Xen hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor. If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_64 && X86_MCE
	default n
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

endmenu