================================
Documentation for /proc/sys/net/
================================

Copyright

Copyright (c) 1999

 - Terrehon Bowden <terrehon@pacbell.net>
 - Bodo Bauer <bb@ricochet.net>

Copyright (c) 2000

 - Jorge Nerin <comandante@zaralinux.com>

Copyright (c) 2009

 - Shen Feng <shen@cn.fujitsu.com>

For general info and legal blurb, please look in index.rst.

------------------------------------------------------------------------------

This file contains the documentation for the sysctl files in
/proc/sys/net

The interface to the networking parts of the kernel is located in
/proc/sys/net. The following table shows all possible subdirectories. You may
see only some of them, depending on your kernel's configuration.


Table : Subdirectories in /proc/sys/net

 ========= =================== = ========== ===================
 Directory Content               Directory  Content
 ========= =================== = ========== ===================
 802       E802 protocol         mptcp      Multipath TCP
 appletalk Appletalk protocol    netfilter  Network Filter
 ax25      AX25                  netrom     NET/ROM
 bridge    Bridging              rose       X.25 PLP layer
 core      General parameter     tipc       TIPC
 ethernet  Ethernet protocol     unix       Unix domain sockets
 ipv4      IP version 4          x25        X.25 protocol
 ipv6      IP version 6
 ========= =================== = ========== ===================

1. /proc/sys/net/core - Network core options
============================================

bpf_jit_enable
--------------

This enables the BPF Just in Time (JIT) compiler. BPF is a flexible
and efficient infrastructure allowing bytecode to be executed at various
hook points. It is used in a number of Linux kernel subsystems such
as networking (e.g. XDP, tc), tracing (e.g. kprobes, uprobes, tracepoints)
and security (e.g. seccomp). LLVM has a BPF back end that can compile
restricted C into a sequence of BPF instructions. After a program has been
loaded through bpf(2) and has passed the in-kernel verifier, a JIT then
translates these BPF proglets into native CPU instructions. There are
two flavors of JITs: the newer eBPF JIT, currently supported on:

 - x86_64
 - x86_32
 - arm64
 - arm32
 - ppc64
 - ppc32
 - sparc64
 - mips64
 - s390x
 - riscv64
 - riscv32

And the older cBPF JIT supported on the following archs:

 - mips
 - sparc

eBPF JITs are a superset of cBPF JITs, meaning the kernel will
migrate cBPF instructions into eBPF instructions and then JIT
compile them transparently. Older cBPF JITs can only translate
tcpdump filters, seccomp rules, etc., but not the previously mentioned
eBPF programs loaded through bpf(2).

Values:

 - 0 - disable the JIT (default value)
 - 1 - enable the JIT
 - 2 - enable the JIT and ask the compiler to emit traces to the kernel log.
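
As a purely illustrative sketch (not part of the interface description
above), the value can be changed at runtime by writing to the procfs file,
equivalent to running "sysctl -w net.core.bpf_jit_enable=1"; only the
trivial error handling is assumed here.

::

  /* Enable the BPF JIT by writing "1" to the sysctl file (needs root). */
  #include <stdio.h>

  int main(void)
  {
      FILE *f = fopen("/proc/sys/net/core/bpf_jit_enable", "w");

      if (!f) {
          perror("fopen");
          return 1;
      }
      if (fputs("1", f) == EOF)
          perror("fputs");
      return fclose(f) ? 1 : 0;
  }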

bpf_jit_harden
--------------

This enables hardening for the BPF JIT compiler. Hardening is supported
for eBPF JIT backends. Enabling hardening trades off performance, but can
mitigate JIT spraying.

Values:

 - 0 - disable JIT hardening (default value)
 - 1 - enable JIT hardening for unprivileged users only
 - 2 - enable JIT hardening for all users

where "privileged user" in this context means a process having
CAP_BPF or CAP_SYS_ADMIN in the root user name space.

bpf_jit_kallsyms
----------------

When the BPF JIT compiler is enabled, the compiled images reside at
addresses unknown to the kernel, meaning they neither show up in traces
nor in /proc/kallsyms. This enables export of these addresses, which can
be used for debugging/tracing. If bpf_jit_harden is enabled, this
feature is disabled.

Values:

 - 0 - disable JIT kallsyms export (default value)
 - 1 - enable JIT kallsyms export for privileged users only

bpf_jit_limit
-------------

This enforces a global limit for memory allocations to the BPF JIT
compiler in order to reject unprivileged JIT requests once it has
been surpassed. bpf_jit_limit contains the value of the global limit
in bytes.

dev_weight
----------

The maximum number of packets that the kernel can handle on a NAPI
interrupt; it is a per-CPU variable. For drivers that support LRO or
GRO_HW, a hardware aggregated packet is counted as one packet in this
context.

Default: 64

dev_weight_rx_bias
------------------

RPS (e.g. RFS, aRFS) processing competes with the driver's registered NAPI
poll function for the per softirq cycle netdev_budget. This parameter
influences the proportion of the configured netdev_budget that is spent on
RPS based packet processing during RX softirq cycles. It is further meant
for making the current dev_weight adaptable for asymmetric CPU needs on the
RX/TX side of the network stack (see dev_weight_tx_bias). It is effective
on a per-CPU basis. The effective value is derived from dev_weight
multiplicatively (dev_weight * dev_weight_rx_bias).

Default: 1

dev_weight_tx_bias
------------------

Scales the maximum number of packets that can be processed during a TX
softirq cycle. Effective on a per-CPU basis. Allows scaling of the current
dev_weight for asymmetric net stack processing needs. Be careful to avoid
making TX softirq processing a CPU hog.

Calculation is based on dev_weight (dev_weight * dev_weight_tx_bias).

Default: 1

default_qdisc
-------------

The default queuing discipline to use for network devices. This allows
overriding the default of pfifo_fast with an alternative. Since the default
queuing discipline is created without additional parameters, this is best
suited to queuing disciplines that work well without configuration, like
stochastic fair queue (sfq), CoDel (codel) or fair queue CoDel (fq_codel).
Don't use queuing disciplines like Hierarchical Token Bucket or Deficit
Round Robin which require setting up classes and bandwidths. Note that
physical multiqueue interfaces still use mq as root qdisc, which in turn
uses this default for its leaves. Virtual devices (like e.g. lo or veth)
ignore this setting and instead default to noqueue.

Default: pfifo_fast

busy_read
---------

Low latency busy poll timeout for socket reads. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for packets on the device queue.
This sets the default value of the SO_BUSY_POLL socket option.
Can be set or overridden per socket by setting socket option SO_BUSY_POLL,
which is the preferred method of enabling the feature. If you need to enable
it globally via sysctl, a value of 50 is recommended.

Will increase power usage.

Default: 0 (off)
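
The following purely illustrative C sketch uses the per-socket alternative
mentioned above; the 50 us value mirrors the recommendation for the sysctl,
and note that older kernels may require CAP_NET_ADMIN to raise SO_BUSY_POLL
above the global setting.

::

  /* Request ~50 us of busy polling for reads on one socket via
   * SO_BUSY_POLL instead of enabling busy_read globally. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
      int busy_poll_usecs = 50;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      if (setsockopt(fd, SOL_SOCKET, SO_BUSY_POLL,
                     &busy_poll_usecs, sizeof(busy_poll_usecs)) < 0)
          perror("setsockopt(SO_BUSY_POLL)");
      return 0;
  }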

busy_poll
----------------
Low latency busy poll timeout for poll and select. (needs CONFIG_NET_RX_BUSY_POLL)
Approximate time in us to busy loop waiting for events.
Recommended value depends on the number of sockets you poll on.
For several sockets 50, for several hundreds 100.
For more than that you probably want to use epoll.
Note that only sockets with SO_BUSY_POLL set will be busy polled,
so you want to either selectively set SO_BUSY_POLL on those sockets or set
sysctl.net.busy_read globally.

Will increase power usage.

Default: 0 (off)

rmem_default
------------

The default setting of the socket receive buffer in bytes.

rmem_max
--------

The maximum receive socket buffer size in bytes.
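
For illustration, the effect of rmem_max can be observed through the
SO_RCVBUF socket option: as described in socket(7), an unprivileged
setsockopt() request is capped at rmem_max (and the kernel doubles the
stored value for bookkeeping), while SO_RCVBUFFORCE with CAP_NET_ADMIN may
exceed it. A minimal sketch:

::

  /* Ask for a large receive buffer and read back what was granted;
   * unprivileged requests are capped by rmem_max. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  int main(void)
  {
      int requested = 4 * 1024 * 1024;   /* 4 MiB */
      int granted;
      socklen_t len = sizeof(granted);
      int fd = socket(AF_INET, SOCK_DGRAM, 0);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                     &requested, sizeof(requested)) < 0)
          perror("setsockopt(SO_RCVBUF)");
      if (getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &granted, &len) == 0)
          printf("receive buffer: %d bytes\n", granted);
      return 0;
  }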

tstamp_allow_data
-----------------
Allow processes to receive tx timestamps looped together with the original
packet contents. If disabled, transmit timestamp requests from unprivileged
processes are dropped unless socket option SOF_TIMESTAMPING_OPT_TSONLY is set.

Default: 1 (on)
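
As a purely illustrative sketch of the socket option mentioned above, a
process can request software TX timestamps with SOF_TIMESTAMPING_OPT_TSONLY
set, so that the looped-back timestamp is delivered without the original
packet payload:

::

  /* Request software TX timestamps, looped back without packet data. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <linux/net_tstamp.h>

  int main(void)
  {
      unsigned int flags = SOF_TIMESTAMPING_TX_SOFTWARE |
                           SOF_TIMESTAMPING_SOFTWARE |
                           SOF_TIMESTAMPING_OPT_TSONLY;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      if (setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING,
                     &flags, sizeof(flags)) < 0)
          perror("setsockopt(SO_TIMESTAMPING)");
      /* Timestamps are then read from the socket error queue with
       * recvmsg(fd, ..., MSG_ERRQUEUE). */
      return 0;
  }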


wmem_default
------------

The default setting (in bytes) of the socket send buffer.

wmem_max
--------

The maximum send socket buffer size in bytes.

message_burst and message_cost
------------------------------

These parameters are used to limit the warning messages written to the kernel
log from the networking code. They enforce a rate limit to make a
denial-of-service attack impossible. A higher message_cost factor results in
fewer messages being written. message_burst controls when messages will
be dropped. The default settings limit warning messages to one every five
seconds.

warnings
--------

This sysctl is now unused.

This was used to control console messages from the networking stack that
occur because of problems on the network like duplicate address or bad
checksums.

These messages are now emitted at KERN_DEBUG and can generally be enabled
and controlled by the dynamic_debug facility.

netdev_budget
-------------

Maximum number of packets taken from all interfaces in one polling cycle (NAPI
poll). In one polling cycle interfaces which are registered to polling are
probed in a round-robin manner. Also, a polling cycle may not exceed
netdev_budget_usecs microseconds, even if netdev_budget has not been
exhausted.

netdev_budget_usecs
-------------------

Maximum number of microseconds in one NAPI polling cycle. Polling
will exit when either netdev_budget_usecs have elapsed during the
poll cycle or the number of packets processed reaches netdev_budget.

netdev_max_backlog
------------------

Maximum number of packets, queued on the INPUT side, when the interface
receives packets faster than the kernel can process them.

netdev_rss_key
--------------

RSS (Receive Side Scaling) enabled drivers use a 40-byte host key that is
randomly generated.
Some user space might need to gather its content even if drivers do not
provide ethtool -x support yet.

::

  myhost:~# cat /proc/sys/net/core/netdev_rss_key
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8: ... (52 bytes total)

The file contains nul bytes if no driver has ever called the
netdev_rss_key_fill() function.

Note:
  /proc/sys/net/core/netdev_rss_key contains 52 bytes of key,
  but most drivers only use 40 bytes of it.

::

  myhost:~# ethtool -x eth0
  RX flow hash indirection table for eth0 with 8 RX ring(s):
      0:    0     1     2     3     4     5     6     7
  RSS hash key:
  84:50:f4:00:a8:15:d1:a7:e9:7f:1d:60:35:c7:47:25:42:97:74:ca:56:bb:b6:a1:d8:43:e3:c9:0c:fd:17:55:c2:3a:4d:69:ed:f1:42:89

netdev_tstamp_prequeue
----------------------

If set to 0, RX packet timestamps can be sampled after RPS processing, when
the target CPU processes packets. This might add some delay to the timestamps,
but permits distributing the load across several CPUs.

If set to 1 (default), timestamps are sampled as soon as possible, before
queueing.

netdev_unregister_timeout_secs
------------------------------

Unregister network device timeout in seconds.
This option controls the timeout (in seconds) used to issue a warning while
waiting for a network device refcount to drop to 0 during device
unregistration. A lower value may be useful during bisection to detect
a leaked reference faster. A larger value may be useful to prevent false
warnings on slow/loaded systems.
Default value is 10, minimum 1, maximum 3600.

skb_defer_max
-------------

Max size (in skbs) of the per-CPU list of skbs being freed
by the CPU which allocated them. Used by the TCP stack so far.

Default: 64

optmem_max
----------

Maximum ancillary buffer size allowed per socket. Ancillary data is a sequence
of struct cmsghdr structures with appended data.
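
As a purely illustrative sketch of what such ancillary data looks like, the
following passes a file descriptor over an AF_UNIX socket pair with
SCM_RIGHTS, chosen only as a familiar example of building a struct cmsghdr;
the control buffer assembled here is what optmem_max puts an upper bound on:

::

  /* Send one byte plus a cmsghdr carrying a file descriptor. */
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/uio.h>

  int main(void)
  {
      int sv[2];
      char data = 'x';
      struct iovec iov = { .iov_base = &data, .iov_len = 1 };
      union {                          /* correctly aligned control buffer */
          char buf[CMSG_SPACE(sizeof(int))];
          struct cmsghdr align;
      } u;
      struct msghdr msg = {
          .msg_iov = &iov, .msg_iovlen = 1,
          .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
      };
      struct cmsghdr *cmsg;
      int fd_to_pass = 0;              /* pass stdin, as an example */

      if (socketpair(AF_UNIX, SOCK_DGRAM, 0, sv) < 0) {
          perror("socketpair");
          return 1;
      }
      cmsg = CMSG_FIRSTHDR(&msg);
      cmsg->cmsg_level = SOL_SOCKET;
      cmsg->cmsg_type = SCM_RIGHTS;
      cmsg->cmsg_len = CMSG_LEN(sizeof(int));
      memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

      if (sendmsg(sv[0], &msg, 0) < 0)
          perror("sendmsg");
      return 0;
  }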

fb_tunnels_only_for_init_net
----------------------------

Controls if fallback tunnels (like tunl0, gre0, gretap0, erspan0,
sit0, ip6tnl0, ip6gre0) are automatically created. There are 3 possibilities:
(a) value = 0; respective fallback tunnels are created when the module is
loaded in every net namespace (backward compatible behavior).
(b) value = 1; [kcmd value: initns] respective fallback tunnels are
created only in the init net namespace and every other net namespace will
not have them.
(c) value = 2; [kcmd value: none] fallback tunnels are not created
when a module is loaded in any of the net namespaces. Setting the value to
"2" is pointless after boot if these modules are built-in, so there is
a kernel command-line option that can change this default. Please refer to
Documentation/admin-guide/kernel-parameters.txt for additional details.

Not creating fallback tunnels gives userspace control to create only what
is needed and avoids creating redundant devices.

Default: 0 (for compatibility reasons)

devconf_inherit_init_net
------------------------

Controls if a new network namespace should inherit all current
settings under /proc/sys/net/{ipv4,ipv6}/conf/{all,default}/. By
default, we keep the current behavior: for IPv4 we inherit all current
settings from init_net and for IPv6 we reset all settings to default.

If set to 1, both IPv4 and IPv6 settings are forced to inherit from
current ones in init_net. If set to 2, both IPv4 and IPv6 settings are
forced to reset to their default values. If set to 3, both IPv4 and IPv6
settings are forced to inherit from current ones in the netns where this
new netns has been created.

Default: 0 (for compatibility reasons)

txrehash
--------

Controls the default hash rethink behaviour on a listening socket when the
SO_TXREHASH option is set to SOCK_TXREHASH_DEFAULT (i.e. not overridden by
setsockopt).

If set to 1 (default), hash rethink is performed on the listening socket.
If set to 0, hash rethink is not performed.
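
The per-socket override mentioned above is the SO_TXREHASH socket option
(value 74, available since Linux 5.17). A purely illustrative sketch; the
fallback define is only needed with older userspace headers:

::

  /* Disable TX hash rethink for one socket instead of via the sysctl. */
  #include <stdio.h>
  #include <sys/socket.h>
  #include <netinet/in.h>

  #ifndef SO_TXREHASH
  #define SO_TXREHASH 74               /* since Linux 5.17 */
  #endif

  int main(void)
  {
      int val = 0;                     /* SOCK_TXREHASH_DISABLED */
      int fd = socket(AF_INET, SOCK_STREAM, 0);

      if (fd < 0) {
          perror("socket");
          return 1;
      }
      if (setsockopt(fd, SOL_SOCKET, SO_TXREHASH, &val, sizeof(val)) < 0)
          perror("setsockopt(SO_TXREHASH)");
      return 0;
  }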

gro_normal_batch
----------------

Maximum number of segments to batch up on output of GRO. When a packet
exits GRO, either as a coalesced superframe or as an original packet which
GRO has decided not to coalesce, it is placed on a per-NAPI list. This
list is then passed to the stack when the number of segments reaches the
gro_normal_batch limit.

high_order_alloc_disable
------------------------

By default the allocator for page frags tries to use high order pages (order-3
on x86). While the default behavior gives good results in most cases, some
users might have hit contention in page allocations/freeing. This was
especially true on older kernels (< 5.14) when high-order pages were not
stored on per-cpu lists. This setting allows opting in to order-0 allocation
instead, but is now mostly of historical importance.

Default: 0

2. /proc/sys/net/unix - Parameters for Unix domain sockets
-----------------------------------------------------------

There is only one file in this directory.
unix_dgram_qlen limits the max number of datagrams queued in a Unix domain
socket's buffer. It will not take effect unless the PF_UNIX flag is specified.


3. /proc/sys/net/ipv4 - IPV4 settings
-------------------------------------
Please see: Documentation/networking/ip-sysctl.rst and
Documentation/admin-guide/sysctl/net.rst for descriptions of these entries.


4. Appletalk
------------

The /proc/sys/net/appletalk directory holds the Appletalk configuration data
when Appletalk is loaded. The configurable parameters are:

aarp-expiry-time
----------------

The amount of time we keep an ARP entry before expiring it. Used to age out
old hosts.

aarp-resolve-time
-----------------

The amount of time we will spend trying to resolve an Appletalk address.

aarp-retransmit-limit
---------------------

The number of times we will retransmit a query before giving up.

aarp-tick-time
--------------

Controls the rate at which expires are checked.

The directory /proc/net/appletalk holds the list of active Appletalk sockets
on a machine.

The fields indicate the DDP type, the local address (in network:node format),
the remote address, the size of the transmit pending queue, the size of the
received queue (bytes waiting for applications to read), the state and the uid
owning the socket.

/proc/net/atalk_iface lists all the interfaces configured for Appletalk. It
shows the name of the interface, its Appletalk address, the network range on
that address (or network number for phase 1 networks), and the status of the
interface.

/proc/net/atalk_route lists each known network route. It lists the target
(network) that the route leads to, the router (may be directly connected), the
route flags, and the device the route is using.

5. TIPC
-------

tipc_rmem
---------

The TIPC protocol now has a tunable for the receive memory, similar to the
tcp_rmem - i.e. a vector of 3 INTEGERs: (min, default, max)

::

  # cat /proc/sys/net/tipc/tipc_rmem
  4252725 34021800 68043600
  #

The max value is set to CONN_OVERLOAD_LIMIT, and the default and min values
are scaled (shifted) versions of that same value. Note that the min value
is not at this point in time used in any meaningful way, but the triplet is
preserved in order to be consistent with things like tcp_rmem.

named_timeout
-------------

TIPC name table updates are distributed asynchronously in a cluster, without
any form of transaction handling. This means that different race scenarios are
possible. One such is that a name withdrawal sent out by one node and received
by another node may arrive after a second, overlapping name publication already
has been accepted from a third node, although the conflicting updates
originally may have been issued in the correct sequential order.
If named_timeout is nonzero, failed topology updates will be placed on a defer
queue until another event arrives that clears the error, or until the timeout
expires. Value is in milliseconds.