v3.1
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters.  Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default but can be enabled with the
	  'selfballooning' kernel boot parameter.  If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'noselfshrink' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'noselfballooning'
	  kernel boot parameter.  Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup.  It is very useful on critical systems which must
	  run for long periods without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= the requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing the proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains.  This makes sure that any confidential data
	  is not accidentally visible to other domains.  It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem.  Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment.  When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config XEN_PLATFORM_PCI
	tristate "xen platform pci device driver"
	depends on XEN_PVHVM && PCI
	default m
	help
	  Driver for the Xen PCI Platform device: it is responsible for
	  initializing xenbus and grant_table when running in a Xen HVM
	  domain. As a consequence this driver is required to run any Xen PV
	  frontend on Xen HVM.

config SWIOTLB_XEN
	def_bool y
	depends on PCI
	select SWIOTLB

config XEN_TMEM
	bool
	default y if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of the default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.
endmenu
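The sysfs alternative described in step 2 of the XEN_BALLOON_MEMORY_HOTPLUG help text above can be sketched as a small shell helper. `set_balloon_target_kb` is a hypothetical name introduced here, and its sysfs-root parameter is an assumption added only so the write can be exercised against a scratch tree rather than a live Xen domU.

```shell
# Sketch of step 2's sysfs alternative from the
# XEN_BALLOON_MEMORY_HOTPLUG help text: request a new balloon target
# by writing a size in KiB to target_kb.
# set_balloon_target_kb is a hypothetical helper name; the second
# argument (sysfs root, normally /sys) exists purely for testability
# and is not part of the kernel interface.
set_balloon_target_kb() {
    target_kb="$1"
    sysfs_root="${2:-/sys}"
    echo "$target_kb" > "$sysfs_root/devices/system/xen_memory/xen_memory0/target_kb"
}
```

On a real domU this would be called without the second argument, after dom0 has raised the maximum with `xl mem-max`.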
v3.5.6
menu "Xen driver support"
	depends on XEN

config XEN_BALLOON
	bool "Xen memory balloon driver"
	default y
	help
	  The balloon driver allows the Xen domain to request more memory from
	  the system to expand the domain's memory allocation, or alternatively
	  return unneeded memory to the system.

config XEN_SELFBALLOONING
	bool "Dynamically self-balloon kernel memory to target"
	depends on XEN && XEN_BALLOON && CLEANCACHE && SWAP && XEN_TMEM
	default n
	help
	  Self-ballooning dynamically balloons available kernel memory driven
	  by the current usage of anonymous memory ("committed AS") and
	  controlled by various sysfs-settable parameters.  Configuring
	  FRONTSWAP is highly recommended; if it is not configured, self-
	  ballooning is disabled by default but can be enabled with the
	  'selfballooning' kernel boot parameter.  If FRONTSWAP is configured,
	  frontswap-selfshrinking is enabled by default but can be disabled
	  with the 'noselfshrink' kernel boot parameter; and self-ballooning
	  is enabled by default but can be disabled with the 'noselfballooning'
	  kernel boot parameter.  Note that systems without a sufficiently
	  large swap device should not enable self-ballooning.

config XEN_BALLOON_MEMORY_HOTPLUG
	bool "Memory hotplug support for Xen balloon driver"
	default n
	depends on XEN_BALLOON && MEMORY_HOTPLUG
	help
	  Memory hotplug support for the Xen balloon driver allows expanding
	  the memory available to the system above the limit declared at
	  system startup.  It is very useful on critical systems which must
	  run for long periods without rebooting.

	  Memory can be hotplugged in the following steps:

	    1) dom0: xl mem-max <domU> <maxmem>
	       where <maxmem> is >= the requested memory size,

	    2) dom0: xl mem-set <domU> <memory>
	       where <memory> is the requested memory size; alternatively,
	       memory can be added by writing the proper value to
	       /sys/devices/system/xen_memory/xen_memory0/target or
	       /sys/devices/system/xen_memory/xen_memory0/target_kb on domU,

	    3) domU: for i in /sys/devices/system/memory/memory*/state; do \
	               [ "`cat "$i"`" = offline ] && echo online > "$i"; done

	  Memory can be onlined automatically on domU by adding the following
	  line to the udev rules:

	  SUBSYSTEM=="memory", ACTION=="add", RUN+="/bin/sh -c '[ -f /sys$devpath/state ] && echo online > /sys$devpath/state'"

	  In that case step 3 should be omitted.

config XEN_SCRUB_PAGES
	bool "Scrub pages before returning them to system"
	depends on XEN_BALLOON
	default y
	help
	  Scrub pages before returning them to the system for reuse by
	  other domains.  This makes sure that any confidential data
	  is not accidentally visible to other domains.  It is more
	  secure, but slightly less efficient.
	  If in doubt, say yes.

config XEN_DEV_EVTCHN
	tristate "Xen /dev/xen/evtchn device"
	default y
	help
	  The evtchn driver allows a userspace process to trigger event
	  channels and to receive notification of an event channel
	  firing.
	  If in doubt, say yes.

config XEN_BACKEND
	bool "Backend driver support"
	depends on XEN_DOM0
	default y
	help
	  Support for backend device drivers that provide I/O services
	  to other virtual machines.

config XENFS
	tristate "Xen filesystem"
	select XEN_PRIVCMD
	default y
	help
	  The xen filesystem provides a way for domains to share
	  information with each other and with the hypervisor.
	  For example, by reading and writing the "xenbus" file, guests
	  may pass arbitrary information to the initial domain.
	  If in doubt, say yes.

config XEN_COMPAT_XENFS
	bool "Create compatibility mount point /proc/xen"
	depends on XENFS
	default y
	help
	  The old xenstore userspace tools expect to find "xenbus"
	  under /proc/xen, but "xenbus" is now found at the root of the
	  xenfs filesystem.  Selecting this causes the kernel to create
	  the compatibility mount point /proc/xen if it is running on
	  a xen platform.
	  If in doubt, say yes.

config XEN_SYS_HYPERVISOR
	bool "Create xen entries under /sys/hypervisor"
	depends on SYSFS
	select SYS_HYPERVISOR
	default y
	help
	  Create entries under /sys/hypervisor describing the Xen
	  hypervisor environment.  When running native or in another
	  virtual environment, /sys/hypervisor will still be present,
	  but will have no xen contents.

config XEN_XENBUS_FRONTEND
	tristate

config XEN_GNTDEV
	tristate "userspace grant access device driver"
	depends on XEN
	default m
	select MMU_NOTIFIER
	help
	  Allows userspace processes to use grants.

config XEN_GRANT_DEV_ALLOC
	tristate "User-space grant reference allocator driver"
	depends on XEN
	default m
	help
	  Allows userspace processes to create pages with access granted
	  to other domains. This can be used to implement frontend drivers
	  or as part of an inter-domain shared memory channel.

config SWIOTLB_XEN
	def_bool y
	depends on PCI
	select SWIOTLB

config XEN_TMEM
	bool
	default y if (CLEANCACHE || FRONTSWAP)
	help
	  Shim to interface in-kernel Transcendent Memory hooks
	  (e.g. cleancache and frontswap) to Xen tmem hypercalls.

config XEN_PCIDEV_BACKEND
	tristate "Xen PCI-device backend driver"
	depends on PCI && X86 && XEN
	depends on XEN_BACKEND
	default m
	help
	  The PCI device backend driver allows the kernel to export arbitrary
	  PCI devices to other guests. If you select this to be a module, you
	  will need to make sure no other driver has bound to the device(s)
	  you want to make visible to other guests.

	  The parameter "passthrough" allows you to specify how you want the
	  PCI devices to appear in the guest. You can choose the default (0),
	  where the PCI topology starts at 00.00.0, or (1) for passthrough if
	  you want the PCI device topology to appear the same as in the host.

	  The "hide" parameter (only applicable if the backend driver is
	  compiled into the kernel) allows you to bind PCI devices to this
	  module instead of the default device drivers. The argument is the
	  list of PCI BDFs:
	  xen-pciback.hide=(03:00.0)(04:00.0)

	  If in doubt, say m.

config XEN_PRIVCMD
	tristate
	depends on XEN
	default m

config XEN_ACPI_PROCESSOR
	tristate "Xen ACPI processor"
	depends on XEN && X86 && ACPI_PROCESSOR && CPU_FREQ
	default m
	help
	  This ACPI processor driver uploads Power Management information to
	  the Xen hypervisor.

	  To do that the driver parses the Power Management data and uploads
	  said information to the Xen hypervisor. Then the Xen hypervisor can
	  select the proper Cx and Pxx states. It also registers itself as the
	  SMM so that other drivers (such as the ACPI cpufreq scaling driver)
	  will not load.

	  To compile this driver as a module, choose M here: the module will
	  be called xen_acpi_processor.  If you do not know what to choose,
	  select M here. If the CPUFREQ drivers are built in, select Y here.

endmenu
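Step 3 of the XEN_BALLOON_MEMORY_HOTPLUG help text (onlining hotplugged memory blocks on domU) can be sketched as a standalone loop. `online_all_memory` is a hypothetical helper name; its sysfs-root argument (normally /sys) is an assumption added purely so the loop can be tried against a scratch directory instead of a live domU.

```shell
# Sketch of step 3 from the XEN_BALLOON_MEMORY_HOTPLUG help text:
# online every memory block that is still offline.
# online_all_memory is a hypothetical helper; the optional argument
# overrides the sysfs root (normally /sys) for testing only.
online_all_memory() {
    sysfs_root="${1:-/sys}"
    for state in "$sysfs_root"/devices/system/memory/memory*/state; do
        # Skip unexpanded globs or missing state files.
        [ -f "$state" ] || continue
        if [ "$(cat "$state")" = offline ]; then
            echo online > "$state"
        fi
    done
    return 0
}
```

The udev rule quoted in the help text makes this loop unnecessary: it performs the same write once per hotplug event instead of scanning all blocks.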