v3.1
#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect raid
	  arrays as part of its boot process.

	  If you don't use raid and say Y, this autodetection can cause
	  a several-second delay in the boot time due to the various
	  synchronisation steps that are part of this autodetection.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other.  In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel.  In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>.  There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y.  To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes.  Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y.  To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

config MULTICORE_RAID456
	bool "RAID-4/RAID-5/RAID-6 Multicore processing (EXPERIMENTAL)"
	depends on MD_RAID456
	depends on SMP
	depends on EXPERIMENTAL
	---help---
	  Enable the raid456 module to dispatch per-stripe raid operations to a
	  thread pool.

	  If unsure, say N.

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use with
	  the MD framework.  It is not under active development.  New
	  projects should consider using DM_MULTIPATH which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors.  It is useful for testing.

	  If unsure, say N.

config BLK_DEV_DM
	tristate "Device mapper support"
	---help---
	  Device-mapper is a low level volume manager.  It works by allowing
	  people to specify mappings for ranges of logical sectors.  Various
	  mapping types are available, in addition people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_DEBUG
	boolean "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  Information on how to use dm-crypt can be found on

	  <http://www.saout.de/misc/dm-crypt/>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
       tristate "Snapshot target"
       depends on BLK_DEV_DM
       ---help---
         Allow volume managers to take writable snapshots of a device.

config DM_MIRROR
       tristate "Mirror target"
       depends on BLK_DEV_DM
       ---help---
         Allow volume managers to mirror logical volumes, also
         needed for live data migration tools such as 'pvmove'.

config DM_RAID
       tristate "RAID 1/4/5/6 target (EXPERIMENTAL)"
       depends on BLK_DEV_DM && EXPERIMENTAL
       select MD_RAID1
       select MD_RAID456
       select BLK_DEV_MD
       ---help---
	 A dm target that supports RAID1, RAID4, RAID5 and RAID6 mappings.

	 A RAID-5 set of N drives with a capacity of C MB per drive provides
	 the capacity of C * (N - 1) MB, and protects against a failure
	 of a single drive. For a given sector (row) number, (N - 1) drives
	 contain data sectors, and one drive contains the parity protection.
	 For a RAID-4 set, the parity blocks are present on a single drive,
	 while a RAID-5 set distributes the parity across the drives in one
	 of the available parity distribution methods.

	 A RAID-6 set of N drives with a capacity of C MB per drive
	 provides the capacity of C * (N - 2) MB, and protects
	 against a failure of any two drives. For a given sector
	 (row) number, (N - 2) drives contain data sectors, and two
	 drives contain two independent redundancy syndromes.  Like
	 RAID-5, RAID-6 distributes the syndromes across the drives
	 in one of the available parity distribution methods.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging (EXPERIMENTAL)"
	depends on DM_MIRROR && EXPERIMENTAL && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace.  Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads.  Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it.  We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on SCSI_DH || !SCSI_DH
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	A target that delays reads and/or writes and can send
	them to different devices.  Useful for testing.

	If unsure, say N.

config DM_UEVENT
	bool "DM uevents (EXPERIMENTAL)"
	depends on BLK_DEV_DM && EXPERIMENTAL
	---help---
	Generate udev events for DM events.

config DM_FLAKEY
       tristate "Flakey target (EXPERIMENTAL)"
       depends on BLK_DEV_DM && EXPERIMENTAL
       ---help---
         A target that intermittently fails I/O for debugging purposes.

endif # MD
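The capacity arithmetic quoted in the RAID help texts above (C MB usable per mirror set for RAID-1, C * (N - 1) MB for RAID-4/5, C * (N - 2) MB for RAID-6) can be checked with a short sketch. The function name and drive values below are illustrative, not part of the kernel source:

```python
# Illustrative check of the usable-capacity formulas from the MD help
# texts: c is the per-drive capacity in MB, n the number of drives.

def usable_mb(level, n, c):
    """Usable capacity in MB for an n-drive set of c MB drives."""
    if level in ("linear", "raid0"):
        return n * c       # append/striping: every block holds data
    if level == "raid1":
        return c           # n exact copies of one drive's worth
    if level in ("raid4", "raid5"):
        return (n - 1) * c # one drive's worth lost to parity
    if level == "raid6":
        return (n - 2) * c # two drives' worth lost to the syndromes
    raise ValueError("unknown level: %s" % level)

# Four 1000 MB drives under each personality:
for lvl in ("raid0", "raid1", "raid5", "raid6"):
    print(lvl, usable_mb(lvl, 4, 1000))
```

The same formulas explain the fault tolerance stated in each help text: RAID-1 survives N - 1 failures, RAID-4/5 one, RAID-6 any two.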
v4.10.11
#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	---help---
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	---help---
	  If you say Y here, then the kernel will try to autodetect raid
	  arrays as part of its boot process.

	  If you don't use raid and say Y, this autodetection can cause
	  a several-second delay in the boot time due to the various
	  synchronisation steps that are part of this autodetection.

	  If unsure, say Y.

config MD_LINEAR
	tristate "Linear (append) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	---help---
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	---help---
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other.  In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel.  In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>.  There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-1 set, say Y.  To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	---help---
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).
	  RAID-10 provides a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  ftp://ftp.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	---help---
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes.  Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <http://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y.  To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

config MD_MULTIPATH
	tristate "Multipath I/O support"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use with
	  the MD framework.  It is not under active development.  New
	  projects should consider using DM_MULTIPATH which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally returns
	  read or write errors.  It is useful for testing.

	  If unsure, say N.


config MD_CLUSTER
	tristate "Cluster Support for MD (EXPERIMENTAL)"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	---help---
	Clustering support for MD devices. This enables locking and
	synchronization across multiple systems on the cluster, so all
	nodes in the cluster can access the MD devices simultaneously.

	This brings the redundancy (and uptime) of RAID levels across the
	nodes of the cluster.

	If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLK_DEV_DM_BUILTIN
	---help---
	  Device-mapper is a low level volume manager.  It works by allowing
	  people to specify mappings for ranges of logical sectors.  Various
	  mapping types are available, in addition people may write their own
	  modules containing custom mappings if they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_MQ_DEFAULT
	bool "request-based DM: use blk-mq I/O path by default"
	depends on BLK_DEV_DM
	---help---
	  This option enables the blk-mq based I/O path for request-based
	  DM devices by default.  With the option the dm_mod.use_blk_mq
	  module/boot option defaults to Y, without it to N, but it can
	  still be overridden either way.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	---help---
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
       tristate
       depends on BLK_DEV_DM
       ---help---
	 This interface allows you to do buffered I/O on a device and acts
	 as a cache, holding recently-read blocks in memory and performing
	 delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
       bool "Block manager locking"
       depends on DM_BUFIO
       ---help---
	 Block manager locking can catch various metadata corruption issues.

	 If unsure, say N.

config DM_DEBUG_BLOCK_STACK_TRACING
       bool "Keep stack trace of persistent data block lock holders"
       depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
       select STACKTRACE
       ---help---
	 Enable this for messages that may help debug problems with the
	 block manager locking used by thin provisioning and caching.

	 If unsure, say N.

config DM_BIO_PRISON
       tristate
       depends on BLK_DEV_DM
       ---help---
	 Some bio locking schemes used by other device-mapper targets
	 including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_CBC
	---help---
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
       tristate "Snapshot target"
       depends on BLK_DEV_DM
       select DM_BUFIO
       ---help---
         Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
       tristate "Thin provisioning target"
       depends on BLK_DEV_DM
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
       ---help---
         Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
       tristate "Cache target (EXPERIMENTAL)"
       depends on BLK_DEV_DM
       default n
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
       ---help---
         dm-cache attempts to improve performance of a block device by
         moving frequently used data to a smaller, higher performance
         device.  Different 'policy' plugins can be used to change the
         algorithms used to select which blocks are promoted, demoted,
         cleaned etc.  It supports writeback and writethrough modes.

config DM_CACHE_SMQ
       tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
       depends on DM_CACHE
       default y
       ---help---
         A cache policy that uses a multiqueue ordered by recent hits
         to select which blocks should be promoted and demoted.
         This is meant to be a general purpose policy.  It prioritises
         reads over writes.  This SMQ policy (vs MQ) offers the promise
         of less memory utilization, improved performance and increased
         adaptability in the face of changing workloads.

config DM_CACHE_CLEANER
       tristate "Cleaner Cache Policy (EXPERIMENTAL)"
       depends on DM_CACHE
       default y
       ---help---
         A simple cache policy that writes back all data to the
         origin.  Used when decommissioning a dm-cache.

config DM_ERA
       tristate "Era target (EXPERIMENTAL)"
       depends on BLK_DEV_DM
       default n
       select DM_PERSISTENT_DATA
       select DM_BIO_PRISON
       ---help---
         dm-era tracks which parts of a block device are written to
         over time.  Useful for maintaining cache coherency when using
         vendor snapshots.

config DM_MIRROR
       tristate "Mirror target"
       depends on BLK_DEV_DM
       ---help---
         Allow volume managers to mirror logical volumes, also
         needed for live data migration tools such as 'pvmove'.

config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	---help---
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace.  Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
       tristate "RAID 1/4/5/6/10 target"
       depends on BLK_DEV_DM
       select MD_RAID1
       select MD_RAID10
       select MD_RAID456
       select BLK_DEV_MD
       ---help---
	 A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	 mappings.

	 A RAID-5 set of N drives with a capacity of C MB per drive provides
	 the capacity of C * (N - 1) MB, and protects against a failure
	 of a single drive. For a given sector (row) number, (N - 1) drives
	 contain data sectors, and one drive contains the parity protection.
	 For a RAID-4 set, the parity blocks are present on a single drive,
	 while a RAID-5 set distributes the parity across the drives in one
	 of the available parity distribution methods.

	 A RAID-6 set of N drives with a capacity of C MB per drive
	 provides the capacity of C * (N - 2) MB, and protects
	 against a failure of any two drives. For a given sector
	 (row) number, (N - 2) drives contain data sectors, and two
	 drives contain two independent redundancy syndromes.  Like
	 RAID-5, RAID-6 distributes the syndromes across the drives
	 in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	---help---
	  A target that discards writes, and returns all zeroes for
	  reads.  Useful in some recovery situations.

config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it.  We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on SCSI_DH || !SCSI_DH
	---help---
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	---help---
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	---help---
	A target that delays reads and/or writes and can send
	them to different devices.  Useful for testing.

	If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	---help---
	Generate udev events for DM events.

config DM_FLAKEY
       tristate "Flakey target"
       depends on BLK_DEV_DM
       ---help---
         A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	---help---
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	---help---
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	---help---
	  This device-mapper target takes two devices, one device to use
	  normally, one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.

endif # MD
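The "nasty syntax" comment on DM_MULTIPATH above can be illustrated with a toy model of Kconfig tristate ordering (n < m < y), where a symbol may not exceed the value of what it depends on. This is our sketch of the constraint, not the real kconfig resolver:

```python
# Toy model of the DM_MULTIPATH/SCSI_DH interaction.  In Kconfig,
# tristate values are ordered n < m < y, and `depends on EXPR` caps a
# symbol at the value of EXPR.  `SCSI_DH || !SCSI_DH` is y when
# SCSI_DH is n or y, but only m when SCSI_DH is m (since !m == m),
# which is exactly what rules out SCSI_DH=m with DM_MULTIPATH=y.

ORDER = {"n": 0, "m": 1, "y": 2}
NEG = {"n": "y", "m": "m", "y": "n"}  # tristate negation

def dep_ok(value, dep):
    """A symbol's value may not exceed its dependency expression."""
    return ORDER[value] <= ORDER[dep]

def multipath_ok(dm_multipath, scsi_dh):
    # Evaluate `SCSI_DH || !SCSI_DH` as max of the two operands.
    dep = max(scsi_dh, NEG[scsi_dh], key=lambda v: ORDER[v])
    return dep_ok(dm_multipath, dep)

print(multipath_ok("y", "m"))  # the build-error case the comment describes
print(multipath_ok("m", "m"))  # multipath as a module is still allowed
print(multipath_ok("y", "n"))  # independent when SCSI_DH is disabled
```

So the expression leaves DM_MULTIPATH unconstrained when SCSI_DH is off, yet forces it to be a module whenever SCSI_DH itself is a module.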