Ceph Distributed File System
============================

Ceph is a distributed network file system designed to provide good
performance, reliability, and scalability.

Basic features include:

 * POSIX semantics
 * Seamless scaling from 1 to many thousands of nodes
 * High availability and reliability. No single point of failure.
 * N-way replication of data across storage nodes
 * Fast recovery from node failures
 * Automatic rebalancing of data on node addition/removal
 * Easy deployment: most FS components are userspace daemons

Also,
 * Flexible snapshots (on any directory)
 * Recursive accounting (nested files, directories, bytes)

In contrast to cluster filesystems like GFS, OCFS2, and GPFS that rely
on symmetric access by all clients to shared block devices, Ceph
separates data and metadata management into independent server
clusters, similar to Lustre. Unlike Lustre, however, metadata and
storage nodes run entirely as user space daemons. File data is striped
across storage nodes in large chunks to distribute workload and
facilitate high throughputs. When storage nodes fail, data is
re-replicated in a distributed fashion by the storage nodes themselves
(with some minimal coordination from a cluster monitor), making the
system extremely efficient and scalable.

Metadata servers effectively form a large, consistent, distributed
in-memory cache above the file namespace that is extremely scalable,
dynamically redistributes metadata in response to workload changes,
and can tolerate arbitrary (well, non-Byzantine) node failures. The
metadata server takes a somewhat unconventional approach to metadata
storage to significantly improve performance for common workloads. In
particular, inodes with only a single link are embedded in
directories, allowing entire directories of dentries and inodes to be
loaded into its cache with a single I/O operation. The contents of
extremely large directories can be fragmented and managed by
independent metadata servers, allowing scalable concurrent access.

The system offers automatic data rebalancing/migration when scaling
from a small cluster of just a few nodes to many hundreds, without
requiring an administrator to carve the data set into static volumes
or go through the tedious process of migrating data between servers.
When the file system approaches full capacity, new nodes can be
easily added and things will "just work."

Ceph includes a flexible snapshot mechanism that allows a user to
create a snapshot on any subdirectory (and its nested contents) in
the system. Snapshot creation and deletion are as simple as 'mkdir
.snap/foo' and 'rmdir .snap/foo'.
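
For example, with the file system mounted at a hypothetical
/mnt/ceph, a snapshot of a directory can be created and later removed
with:

 $ mkdir /mnt/ceph/mydir/.snap/backup1     # create a snapshot
 $ ls /mnt/ceph/mydir/.snap
 backup1
 $ rmdir /mnt/ceph/mydir/.snap/backup1     # remove it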

Ceph also provides some recursive accounting on directories for nested
files and bytes. That is, a 'getfattr -d foo' on any directory in the
system will reveal the total number of nested regular files and
subdirectories, and a summation of all nested file sizes. This makes
the identification of large disk space consumers relatively quick, as
no 'du' or similar recursive scan of the file system is required.

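As a sketch of what this looks like in practice (the 'ceph.dir.*'
attribute names and the mount point are assumptions here, not taken
from this document):

 $ getfattr -d -m 'ceph.dir.r' /mnt/ceph/mydir
 # file: mnt/ceph/mydir
 ceph.dir.rbytes="104857600"
 ceph.dir.rfiles="42"
 ceph.dir.rsubdirs="3"
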
Finally, Ceph also allows quotas to be set on any directory in the
system. The quota can restrict the number of bytes or the number of
files stored beneath that point in the directory hierarchy. Quotas
can be set using the extended attributes 'ceph.quota.max_files' and
'ceph.quota.max_bytes', e.g.:

 setfattr -n ceph.quota.max_bytes -v 100000000 /some/dir
 getfattr -n ceph.quota.max_bytes /some/dir

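To remove a limit, the upstream CephFS documentation suggests setting
the attribute value back to 0:

 setfattr -n ceph.quota.max_bytes -v 0 /some/dir
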
A limitation of the current quota implementation is that it relies on
the cooperation of the client mounting the file system to stop writers
when a limit is reached. A modified or adversarial client cannot be
prevented from writing as much data as it needs.

Mount Syntax
============

The basic mount syntax is:

 # mount -t ceph monip[:port][,monip2[:port]...]:/[subdir] mnt

You only need to specify a single monitor, as the client will get the
full list when it connects. (However, if the monitor you specify
happens to be down, the mount won't succeed.) The port can be left
off if the monitor is using the default. So if the monitor is at
1.2.3.4,

 # mount -t ceph 1.2.3.4:/ /mnt/ceph

is sufficient. If /sbin/mount.ceph is installed, a hostname can be
used instead of an IP address.
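
Multiple monitors and a subdirectory can be combined in a single
command; for instance, with made-up monitor addresses:

 # mount -t ceph 1.2.3.4,1.2.3.5,1.2.3.6:/some/subdir /mnt/ceph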


Mount Options
=============

  ip=A.B.C.D[:N]
    Specify the IP and/or port the client should bind to locally.
    There is normally not much reason to do this. If the IP is not
    specified, the client's IP address is determined by looking at the
    address its connection to the monitor originates from.

  wsize=X
    Specify the maximum write size in bytes. Default: 16 MB.

  rsize=X
    Specify the maximum read size in bytes. Default: 16 MB.

  rasize=X
    Specify the maximum readahead size in bytes. Default: 8 MB.

  mount_timeout=X
    Specify the timeout value for mount (in seconds), in the case
    of a non-responsive Ceph file system. The default is 30
    seconds.

  caps_max=X
    Specify the maximum number of caps to hold. Unused caps are
    released when the number of caps exceeds the limit. The default
    is 0 (no limit).

  rbytes
    When stat() is called on a directory, set st_size to 'rbytes',
    the summation of file sizes over all files nested beneath that
    directory. This is the default.

  norbytes
    When stat() is called on a directory, set st_size to the
    number of entries in that directory.

  nocrc
    Disable CRC32C calculation for data writes. If set, the storage
    node must rely on the TCP checksum to detect corruption in the
    data payload.

  dcache
    Use the dcache contents to perform negative lookups and
    readdir when the client has the entire directory contents in
    its cache. (This does not change correctness; the client uses
    cached metadata only when a lease or capability ensures it is
    valid.)

  nodcache
    Do not use the dcache as above. This avoids a significant amount
    of complex code, sacrificing performance without affecting
    correctness, and is useful for tracking down bugs.

  noasyncreaddir
    Do not use the dcache as above for readdir.

  noquotadf
    Report overall filesystem usage in statfs instead of using the
    root directory quota.

  nocopyfrom
    Don't use the RADOS 'copy-from' operation to perform remote
    object copies. Currently, it's only used in copy_file_range,
    which will revert to the default VFS implementation if this
    option is used.

  recover_session=<no|clean>
    Set auto reconnect mode in the case where the client is
    blacklisted. The available modes are "no" and "clean". The
    default is "no".

    * no: never attempt to reconnect when the client detects that it
      has been blacklisted. Operations will generally fail after
      being blacklisted.

    * clean: the client reconnects to the Ceph cluster automatically
      when it detects that it has been blacklisted. During the
      reconnect, the client drops dirty data/metadata and invalidates
      page caches and writable file handles. After the reconnect,
      file locks become stale because the MDS loses track of them.
      If an inode contains any stale file locks, read/write on the
      inode is not allowed until applications release all stale file
      locks.
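
The options above are passed with mount's -o flag; for example, a
mount (hypothetical monitor address) that disables recursive
directory sizes and enables clean session recovery:

 # mount -t ceph 1.2.3.4:/ /mnt/ceph -o norbytes,recover_session=clean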

More Information
================

For more information on Ceph, see the home page at
 https://ceph.com/

The Linux kernel client source tree is available at
 https://github.com/ceph/ceph-client.git
 git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client.git

and the source for the full system is at
 https://github.com/ceph/ceph.git