v3.1
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

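# A minimal usage sketch, assuming the xen_memory0 sysfs nodes referenced
# later in this file: a domain can be ballooned by writing a new target
# size in KiB to target_kb:
#
#   cat /sys/devices/system/xen_memory/xen_memory0/target_kb
#   echo 2097152 > /sys/devices/system/xen_memory/xen_memory0/target_kb  # 2 GiB
#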
config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters.  Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default but can be enabled with the
	  'selfballooning' kernel boot parameter.  If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'noselfshrink' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'noselfballooning'
	  kernel boot parameter.  Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

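# A sketch of the boot parameters named above, appended to the guest kernel
# command line (kernel image and root device are placeholders):
#
#   linux /vmlinuz root=/dev/xvda1 selfballooning
#   linux /vmlinuz root=/dev/xvda1 noselfballooning noselfshrink
#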
config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup.  It is very useful on critical systems which
	  require long uptimes without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= the requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules (see the sketch after this entry):

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

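# A sketch of the udev variant above; the rules-file name below is arbitrary
# (any file under /etc/udev/rules.d/ works):
#
#   cat > /etc/udev/rules.d/99-xen-online-memory.rules <<'EOF'
#   SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"
#   EOF
#   udevadm control --reload-rules
#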
config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains.  This makes sure that any confidential data
	  is not accidentally visible to other domains.  It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

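# Usage sketch: xenfs is conventionally mounted on /proc/xen (the mount
# point XEN_COMPAT_XENFS below creates), typically via fstab or:
#
#   mount -t xenfs xenfs /proc/xen
#   ls /proc/xen          # capabilities, privcmd, xenbus, ...
#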
config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem.  Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment.  When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

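# Example reads, assuming the standard Xen /sys/hypervisor layout:
#
#   cat /sys/hypervisor/type             # prints "xen" on a Xen guest
#   cat /sys/hypervisor/version/major
#   cat /sys/hypervisor/version/minor
#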
config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_PLATFORM_PCI
	tristate "xen platform pci device driver"
	depends on XEN_PVHVM && PCI
	default m
	help
	  Driver for the Xen PCI Platform device: it is responsible for
	  initializing xenbus and grant_table when running in a Xen HVM
	  domain. As a consequence this driver is required to run any Xen PV
	  frontend on Xen HVM.

config SWIOTLB_XEN
	def_bool y
	depends on PCI
	select SWIOTLB

config XEN_TMEM
	bool
	default y if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to this
	  module instead of their default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.
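
# A sketch of both styles described above (the BDFs are placeholders):
#
#   # built-in backend: hide the device from dom0 drivers at boot,
#   # on the kernel command line:
#   xen-pciback.hide=(03:00.0)
#
#   # modular backend: detach the device from its dom0 driver at runtime:
#   xl pci-assignable-add 03:00.0
#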
endmenu
v4.10.11
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters.  Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default. If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'tmem.selfshrink=0' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'tmem.selfballooning=0'
	  kernel boot parameter.  Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup.  It is very useful on critical systems which
	  require long uptimes without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) target domain: ensure that the memory auto-online policy is in
	       effect by checking the
	       /sys/devices/system/memory/auto_online_blocks file (it should
	       read 'online'; see the sketch after this entry),

	    2) control domain: xl mem-max <target-domain> <maxmem>
	       where <maxmem> is >= the requested memory size,

	    3) control domain: xl mem-set <target-domain> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing a proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on the
	       target domain.

	  Alternatively, if memory auto-onlining was not requested at step 1,
	  the newly added memory can be manually onlined in the target domain
	  by doing the following:

		for i in /sys/devices/system/memory/memory*/state; do \
		  [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  or by adding the following line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

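# A sketch of step 1 above, run in the target domain (the file is the one
# named in the help text):
#
#   cat /sys/devices/system/memory/auto_online_blocks    # expect "online"
#   echo online > /sys/devices/system/memory/auto_online_blocks
#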
config XEN_BALLOON_MEMORY_HOTPLUG_LIMIT
	int "Hotplugged memory limit (in GiB) for a PV guest"
	default 512 if X86_64
	default 4 if X86_32
	range 0 64 if X86_32
	depends on XEN_HAVE_PVMMU
	depends on XEN_BALLOON_MEMORY_HOTPLUG
	help
	  Maximum amount of memory (in GiB) that a PV guest can be
	  expanded to when using memory hotplug.

	  A PV guest can have more memory than this limit if it is
	  started with a larger maximum.

	  This value is used to allocate enough space in internal
	  tables needed for physical memory administration.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains.  This makes sure that any confidential data
	  is not accidentally visible to other domains.  It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem.  Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment.  When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config SWIOTLB_XEN
	def_bool y
	select SWIOTLB

config XEN_TMEM
	tristate
	depends on !ARM && !ARM64
	default m if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind the PCI devices to this
	  module instead of their default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

config XEN_SCSI_BACKEND
	tristate "XEN SCSI backend driver"
	depends on XEN && XEN_BACKEND && TARGET_CORE
	help
	  The SCSI backend driver allows the kernel to export its SCSI devices
	  to other guests via a high-performance shared-memory interface.
	  Only needed for systems running as XEN driver domains (e.g. Dom0) and
	  if guests need generic access to SCSI devices.

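# A hedged sketch of exposing a SCSI device through this backend; the line
# below uses xl's pvscsi domain-config syntax, and both the device path and
# the virtual SCSI address are placeholders:
#
#   vscsi = [ '/dev/sdb, 0:0:0:0' ]
#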
config XEN_PRIVCMD
	tristate
	depends on XEN
	default m

config XEN_STUB
	bool "Xen stub drivers"
	depends on XEN && X86_64 && BROKEN
	default n
	help
	  Allow the kernel to install stub drivers to reserve space for Xen
	  drivers (i.e. memory hotplug and cpu hotplug) and to block native
	  drivers from loading, so that the real Xen drivers can be modular.

	  To enable Xen features like cpu and memory hotplug, select Y here.

config XEN_ACPI_HOTPLUG_MEMORY
	tristate "Xen ACPI memory hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	default n
	help
	  This is Xen ACPI memory hotplug.

	  Currently Xen only supports ACPI memory hot-add. If you want
	  to hot-add memory at runtime (the hot-added memory cannot be
	  removed until the machine stops), select Y/M here, otherwise
	  select N.

config XEN_ACPI_HOTPLUG_CPU
	tristate "Xen ACPI cpu hotplug"
	depends on XEN_DOM0 && XEN_STUB && ACPI
	select ACPI_CONTAINER
	default n
	help
	  Xen ACPI cpu enumeration and hotplugging.

	  For hotplugging, currently Xen only supports ACPI cpu hot-add.
	  If you want to hot-add cpus at runtime (a hot-added cpu cannot
	  be removed until the machine stops), select Y/M here.

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor uploads Power Management information to the Xen
	  hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor.  If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

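# When built as a module, loading it uses the module name given above:
#
#   modprobe xen_acpi_processor
#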
config XEN_MCE_LOG
	bool "Xen platform mcelog"
	depends on XEN_DOM0 && X86_64 && X86_MCE
	default n
	help
	  Allow the kernel to fetch MCE errors from the Xen platform and
	  convert them into the Linux mcelog format for mcelog tools.

config XEN_HAVE_PVMMU
	bool

config XEN_EFI
	def_bool y
	depends on (ARM || ARM64 || X86_64) && EFI

config XEN_AUTO_XLATE
	def_bool y
	depends on ARM || ARM64 || XEN_PVHVM
	help
	  Support for auto-translated physmap guests.

config XEN_ACPI
	def_bool y
	depends on X86 && ACPI

config XEN_SYMS
	bool "Xen symbols"
	depends on X86 && XEN_DOM0 && XENFS
	default y if KALLSYMS
	help
	  Exports hypervisor symbols (along with their types and addresses) via
	  the /proc/xen/xensyms file, similar to /proc/kallsyms.

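# Usage sketch, with XENFS mounted on /proc/xen:
#
#   head /proc/xen/xensyms          # read like /proc/kallsyms
#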
config XEN_HAVE_VPMU
	bool

endmenu