================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

 - Terrehon Bowden <terrehon@pacbell.net>
 - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

 - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

 - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net.

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.

Table : Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing the execution of bytecode at
various hook points. It is used in a number of Linux kernel subsystems
such as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes,
tracepoints) and security (e.g. seccomp). LLVM has a BPF back end that
can compile restricted C into a sequence of BPF instructions. After a
program has been loaded through bpf(2) and passed the kernel verifier,
a JIT translates these BPF programs into native CPU instructions.
There are two flavors of JITs. The newer eBPF JIT is currently
supported on:

 - x86_64
 - x86_32
 - arm64
 - arm32
 - ppc64
 - ppc32
 - sparc64
 - mips64
 - s390x
 - riscv64
 - riscv32
 - loongarch64

The older cBPF JIT is supported on the following architectures:

 - mips
 - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will migrate
cBPF instructions into eBPF instructions and then JIT compile them
transparently. Older cBPF JITs can only translate tcpdump filters,
seccomp rules, etc., but not the eBPF programs mentioned above, which
are loaded through bpf(2).

Values:

 - 0 - disable the JIT (default value)
 - 1 - enable the JIT
 - 2 - enable the JIT and ask the compiler to emit traces to the kernel log.
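
For example, the JIT can be toggled at runtime through plain sysctl
usage (shown here only as an illustration)::

    # sysctl -w net.core.bpf_jit_enable=1
    net.core.bpf_jit_enable = 1
    # cat /proc/sys/net/core/bpf_jit_enable
    1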

bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. It is supported by
the eBPF JIT backends. Enabling hardening trades off performance but
can mitigate JIT spraying.

Values:

 - 0 - disable JIT hardening (default value)
 - 1 - enable JIT hardening for unprivileged users only
 - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having CAP_BPF
or CAP_SYS_ADMIN in the root user namespace.
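
For example, turning hardening on for all users, i.e. value 2 from the
list above::

    # sysctl -w net.core.bpf_jit_harden=2
    net.core.bpf_jit_harden = 2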
108
109bpf_jit_kallsyms
110----------------
111
112When BPF JIT compiler is enabled, then compiled images are unknown
113addresses to the kernel, meaning they neither show up in traces nor
114in /proc/kallsyms. This enables export of these addresses, which can
115be used for debugging/tracing. If bpf_jit_harden is enabled, this
116feature is disabled.
117
118Values :
119
120 - 0 - disable JIT kallsyms export (default value)
121 - 1 - enable JIT kallsyms export for privileged users only
122
123bpf_jit_limit
124-------------
125
126This enforces a global limit for memory allocations to the BPF JIT
127compiler in order to reject unprivileged JIT requests once it has
128been surpassed. bpf_jit_limit contains the value of the global limit
129in bytes.
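
The current limit can simply be read back from procfs. The value is
architecture dependent, so the number below is only an example output,
not a recommendation::

    # cat /proc/sys/net/core/bpf_jit_limit
    264241152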

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the driver's registered
NAPI poll function for the per-softirq-cycle netdev_budget. This
parameter influences the proportion of the configured netdev_budget
that is spent on RPS based packet processing during RX softirq cycles.
It is further meant to make the current dev_weight adaptable to
asymmetric CPU needs on the RX/TX sides of the network stack (see
dev_weight_tx_bias). It is effective on a per-CPU basis, and the value
is derived from dev_weight multiplicatively (dev_weight *
dev_weight_rx_bias).
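
As a worked example, with dev_weight = 64 and dev_weight_rx_bias = 2
(both numbers arbitrary illustrations), up to 64 * 2 = 128 packets may
be spent on RPS based processing per RX softirq cycle on each CPU.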

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the
current dev_weight for asymmetric net stack processing needs. Be
careful to avoid making TX softirq processing a CPU hog.

The calculation is based on dev_weight (dev_weight *
dev_weight_tx_bias).

Default: 1

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the
default queuing discipline is created without additional parameters,
it is best suited to queuing disciplines that work well without
configuration, like stochastic fair queue (sfq), CoDel (codel) or fair
queue CoDel (fq_codel). Don't use queuing disciplines like
Hierarchical Token Bucket or Deficit Round Robin, which require
setting up classes and bandwidths. Note that physical multiqueue
interfaces still use mq as the root qdisc, which in turn uses this
default for its leaves. Virtual devices (e.g. lo or veth) ignore this
setting and instead default to noqueue.

Default: pfifo_fast
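
For example, to make fq_codel the default and inspect the result on a
device (eth0 is just a placeholder name here)::

    # sysctl -w net.core.default_qdisc=fq_codel
    net.core.default_qdisc = fq_codel
    # tc qdisc show dev eth0

Note that the new default only applies to qdiscs created from then on;
existing devices keep their current qdisc until it is replaced.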

busy_read
---------

Low latency busy poll timeout for socket reads (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for packets on the device queue. This sets the default value of the
SO_BUSY_POLL socket option. It can be set or overridden per socket via
the SO_BUSY_POLL socket option, which is the preferred method of
enabling busy polling. If you need to enable the feature globally via
sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)

busy_poll
---------

Low latency busy poll timeout for poll and select (needs
CONFIG_NET_RX_BUSY_POLL). Approximate time in us to busy loop waiting
for events. The recommended value depends on the number of sockets you
poll on: 50 for several sockets, 100 for several hundred. For more
than that you probably want to use epoll. Note that only sockets with
SO_BUSY_POLL set will be busy polled, so you want to either
selectively set SO_BUSY_POLL on those sockets or set
net.core.busy_read globally.

Will increase power usage.

Default: 0 (off)
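
A minimal sketch of enabling both globally, using the recommended
values from above::

    # sysctl -w net.core.busy_read=50
    net.core.busy_read = 50
    # sysctl -w net.core.busy_poll=50
    net.core.busy_poll = 50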

mem_pcpu_rsv
------------

Per-CPU reserved forward alloc cache size in page units. Default 1MB
per CPU.

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.

rps_default_mask
----------------

The default RPS CPU mask used on newly created network devices. An
empty mask means RPS is disabled by default.
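
The mask is a hexadecimal CPU bitmap, analogous to the per-queue
rps_cpus files. For instance, to enable RPS on CPUs 0-3 for all newly
created devices (an illustration, assuming a machine with at least
four CPUs)::

    # echo f > /proc/sys/net/core/rps_default_mask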

tstamp_allow_data
-----------------

Allow processes to receive tx timestamps looped together with the
original packet contents. If disabled, transmit timestamp requests
from unprivileged processes are dropped unless the
SOF_TIMESTAMPING_OPT_TSONLY socket option is set.

Default: 1 (on)

wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the
kernel log from the networking code. They enforce a rate limit to
protect against denial-of-service attacks. A higher message_cost
factor results in fewer messages being written. message_burst controls
when messages will be dropped. The default settings limit warning
messages to one every five seconds.
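
The current settings can be inspected with sysctl; the output below
shows typical defaults, which may differ on your kernel::

    # sysctl net.core.message_cost net.core.message_burst
    net.core.message_cost = 5
    net.core.message_burst = 10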

warnings
--------

This sysctl is now unused.

It was used to control console messages from the networking stack that
occur because of problems on the network, like duplicate addresses or
bad checksums.

These messages are now emitted at KERN_DEBUG and can generally be
enabled and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling
cycle (NAPI poll). In one polling cycle, interfaces which are
registered for polling are probed in a round-robin manner. Also, a
polling cycle may not exceed netdev_budget_usecs microseconds, even if
netdev_budget has not been exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling will
exit when either netdev_budget_usecs have elapsed during the poll
cycle or the number of packets processed reaches netdev_budget.
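
Both knobs can be read together. The values below are common defaults,
not a recommendation (the netdev_budget_usecs default depends on
CONFIG_HZ)::

    # sysctl net.core.netdev_budget net.core.netdev_budget_usecs
    net.core.netdev_budget = 300
    net.core.netdev_budget_usecs = 2000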

netdev_max_backlog
------------------

Maximum number of packets queued on the INPUT side when the interface
receives packets faster than the kernel can process them.
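
Hosts that drop packets at high receive rates sometimes raise this
limit; the value below is an arbitrary illustration, not a
recommendation::

    # sysctl -w net.core.netdev_max_backlog=2000
    net.core.netdev_max_backlog = 2000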

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that
is randomly generated. Some user space might need to gather its
content even if drivers do not provide ethtool -x support yet.

::

    myhost:~# cat /proc/sys/net/core/netdev_rss_key
    84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver ever called the
netdev_rss_key_fill() function.

Note:
    /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
    but most drivers only use 40 bytes of it.

::

    myhost:~# ethtool -x eth0
    RX flow hash indirection table for eth0 with 8 RX ring(s):
        0:    0     1     2     3     4     5     6     7
    RSS hash key:
    84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing,
when the target CPU processes packets. This might add some delay to
the timestamps, but permits distributing the load across several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible,
before queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds. This option controls the
timeout (in seconds) used to issue a warning while waiting for a
network device refcount to drop to 0 during device unregistration. A
lower value may be useful during bisection to detect a leaked
reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems. Default value is 10, minimum 1,
maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-CPU list of skbs being freed by the CPU
which allocated them. Used by the TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a
sequence of struct cmsghdr structures with appended data. TCP tx
zerocopy also uses optmem_max as a limit for its internal structures.

Default: 128 KB

fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3
possibilities:

 (a) value = 0; respective fallback tunnels are created when the
     module is loaded in every net namespace (backward compatible
     behavior).
 (b) value = 1; [kcmd value: initns] respective fallback tunnels are
     created only in the initial net namespace and no other net
     namespace will have them.
 (c) value = 2; [kcmd value: none] fallback tunnels are not created
     when a module is loaded in any of the net namespaces. Setting the
     value to "2" is pointless after boot if these modules are
     built-in, so there is a kernel command-line option that can
     change this default. Please refer to
     Documentation/admin-guide/kernel-parameters.txt for additional
     details.

Not creating fallback tunnels gives userspace control to create only
what is needed and avoids creating redundant devices.

Default: 0 (for compatibility reasons)
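
For illustration, the runtime knob and the matching boot-time
parameter (documented in
Documentation/admin-guide/kernel-parameters.txt) could be used as
follows::

    # sysctl -w net.core.fb_tunnels_only_for_init_net=1
    net.core.fb_tunnels_only_for_init_net = 1

and on the kernel command line::

    fb_tunnels=initns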

devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and
IPv6 settings are forced to inherit from current ones in the netns
where this new netns has been created.

Default: 0 (for compatibility reasons)

txrehash
--------

Controls the default hash rethink behaviour on sockets when the
SO_TXREHASH option is set to SOCK_TXREHASH_DEFAULT (i.e. not
overridden by setsockopt).

If set to 1 (default), hash rethink is performed on listening sockets.
If set to 0, hash rethink is not performed.

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet
which GRO has decided not to coalesce, it is placed on a per-NAPI
list. This list is then passed to the stack when the number of
segments reaches the gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages
(order-3 on x86). While the default behavior gives good results in
most cases, some users might have hit contention in page
allocation/freeing. This was especially true on older kernels (< 5.14)
when high-order pages were not stored on per-cpu lists. Setting this
to 1 opts in to order-0 allocations instead, but the knob is now
mostly of historical importance.

Default: 0


2. /proc/sys/net/unix - Parameters for Unix domain sockets
----------------------------------------------------------

There is only one file in this directory: max_dgram_qlen, which limits
the maximum number of datagrams queued in a Unix domain socket's
buffer. It only applies to PF_UNIX (AF_UNIX) datagram sockets.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------

Please see: Documentation/networking/ip-sysctl.rst and
Documentation/networking/ipvs-sysctl.rst for descriptions of these
entries.


4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk
configuration data when Appletalk is loaded. The configurable
parameters are:

aarp-expiry-time
----------------

The amount of time we keep an AARP entry before expiring it. Used to
age out old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk
address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk
sockets on a machine.

The fields indicate the DDP type, the local address (in network:node
format), the remote address, the size of the transmit pending queue,
the size of the received queue (bytes waiting for applications to
read), the state and the uid owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for
Appletalk. It shows the name of the interface, its Appletalk address,
the network range on that address (or network number for phase 1
networks), and the status of the interface.

/proc/net/atalk_route lists each known network route. It lists the
target (network) that the route leads to, the router (may be directly
connected), the route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to
tcp_rmem - i.e. a vector of 3 integers: (min, default, max)

::

    # cat /proc/sys/net/tipc/tipc_rmem
    4252725 34021800 68043600
    #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min
values are scaled (shifted) versions of that same value. Note that the
min value is not at this point in time used in any meaningful way, but
the triplet is preserved in order to be consistent with things like
tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster,
without any form of transaction handling. This means that different
race scenarios are possible. One such scenario is that a name
withdrawal sent out by one node and received by another node may
arrive after a second, overlapping name publication has already been
accepted from a third node, even though the conflicting updates were
originally issued in the correct sequential order. If named_timeout is
nonzero, failed topology updates will be placed on a defer queue until
another event arrives that clears the error, or until the timeout
expires. The value is in milliseconds.
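
For instance, deferring failed updates for two seconds (the value is
an arbitrary illustration)::

    # sysctl -w net.tipc.named_timeout=2000
    net.tipc.named_timeout = 2000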